ymatsumoto@cfca.nao.ac.jp

1 Planetary Exploration Research Center, Chiba Institute of Technology, Narashino, Chiba, 275-0016, Japan
2 Center for Computational Astrophysics, National Astronomical Observatory of Japan, Osawa, Mitaka, Tokyo, 181-8588, Japan
3 Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA 91109, USA

Chondrules are primitive materials in the Solar System. They formed during the first ∼3 Myr of the Solar System's history. This timescale is longer than that of Mars formation, so it is conceivable that protoplanets, planetesimals, and chondrules existed simultaneously in the solar nebula. Because protoplanets perturb the dynamics of planetesimals and accrete chondrules themselves, it is unlikely that all the formed chondrules were accreted by planetesimals. We investigate the amount of chondrules accreted by planetesimals under such conditions. We assume that a protoplanet is in the oligarchic growth stage and perform analytical calculations of chondrule accretion both by the protoplanet and by the planetesimals. Through the oligarchic growth stage, planetesimals accrete about half of the formed chondrules. The smallest planetesimals accrete the largest amount of chondrules, compared with the amount accreted by more massive planetesimals. A parameter study shows that this fraction does not change greatly over a wide range of parameter sets.

§ INTRODUCTION

Chondrules are mm-sized spherical particles found in chondritic meteorites. Their properties suggest that their precursors were melted by flash heating events in the gas nebula <cit.>. They make up ∼20 to 80% of most chondrites' volume, and their formation started at the time of Ca-Al-rich inclusion (CAI) formation and continued for at least ∼3 Myr <cit.>. This means that the heating events were common in the first 3 Myr of Solar System formation. Several chondrule formation mechanisms have been proposed <cit.>. These include the X-wind model <cit.>, the nebular lightning model <cit.>, the nebular shock model <cit.>, and the impact jetting model <cit.>. These models can reproduce some petrologic and geochemical properties of chondrules <cit.>.

The models also need to explain the chondrule abundance. The amount of chondrules inferred from chondrites is not equal to the amount of chondrules produced. This is not only because the present asteroid belt is much less massive than the primordial one, but also because it is unclear how the parent bodies of chondrites formed. If the currently available chondrites were generated as fragments of massive bodies <cit.>, the following possibility arises <cit.>: even if planetesimals originally did not contain any chondritic materials, they could accrete chondrules as chondrules formed over time. In this case, the planetesimals could develop a chondrule-rich surface layer. In the subsequent collisional cascade, such a surface layer could break into chondrites. In order to examine this possibility, it is important to investigate how chondrules were accreted by massive bodies such as planetesimals and protoplanets.

Recent studies <cit.> investigated the accretion of small particles onto massive bodies in laminar disk gas, known as pebble accretion. Particles that are strongly affected by gas drag, such as chondrules, boulders, or fragments of larger bodies, are efficiently accreted by massive planetesimals and protoplanets. Accretion of chondrules through pebble accretion was studied by <cit.>.
They considered a situation in which planetesimals are born and grow in an ocean of chondrules. However, chondrules formed during the first 3 Myr after CAI formation, and it is conceivable that planet formation actively took place at that time. In fact, <cit.> suggested that the timescale of Mars formation is 1.8^+0.9_-1.0 Myr or less after CAI formation. If such a body is embedded in the planetesimal swarm that can be the parent bodies of chondrites, runaway and oligarchic growth of the body occur <cit.>. It is therefore crucial to explore how chondrule formation and accretion occur simultaneously with the growth of protoplanets.

The accretion efficiency of chondrules by planetesimals decreases when protoplanets affect the dynamics of planetesimals <cit.>. This is because planetesimals tend to be kicked out of the pebble sea by gravitational interaction with protoplanets, which increases both the eccentricities and the inclinations of the planetesimals. <cit.> studied the pebble accretion of chondrules by planetesimals, assuming that chondrules are formed by impact jetting. In this formation scenario, chondrule-forming impacts are realized when protoplanets are present in planetesimal disks <cit.>. They found that there are certain ranges of parameters that satisfy the timescale of chondrule formation, the magnetic field strength estimated from the Semarkona ordinary chondrite <cit.>, and the condition of efficient pebble accretion. However, the accretion efficiency of chondrules onto planetesimals and a protoplanet was not directly calculated in these previous studies.

In this paper, we investigate chondrule accretion under the presence of a growing protoplanet. Since the timescale of runaway growth is much shorter than that of chondrule formation, we consider a protoplanet that is already in the oligarchic stage, embedded in a swarm of planetesimals. We adopt the impact jetting model as the chondrule-forming process in the fiducial model. We calculate the growth of the protoplanet analytically. The chondrule accretion rates of the protoplanet and of the planetesimals are calculated at each timestep as the protoplanet grows, and from them we obtain the mass of accreted chondrules. Our model is described in detail in section <ref>. In section <ref>, we present the results, showing both the timescales of chondrule accretion by the protoplanet and by the planetesimals and the amount of chondrules accreted by them. In section <ref>, we discuss implications of chondrule accretion and physical processes that are not included in this paper.
Finally, section <ref> contains our conclusions.

§ MODEL

Table 1: Summary of Key Quantities

  ρ_g : gas volume density at the disk midplane
  f_d : increment factor of ρ_g and Σ_d
  h_g : gas pressure scale height
  τ_g : timescale of disk gas depletion
  r : orbital radius
  T_K : orbital period
  M : mass of the protoplanet
  τ_pr : timescale of the protoplanet growth
  t_iso : time until the protoplanet reaches the isolation mass
  M_iso : isolation mass of the protoplanet
  M_esc : mass of the protoplanet when impact velocities exceed 2.5 km s^-1
  M_ini : mass of the protoplanet when the oligarchic growth begins
  m_pl : mass of planetesimals
  R_pl : radius of planetesimals
  ρ_pl : material density of planetesimals (2 g cm^-3)
  e_pl : eccentricity of planetesimals in oligarchic growth
  n_pl : number of planetesimals
  M_ch : mass of field chondrules
  r_ch : characteristic size of chondrules (1 mm)
  ρ_s : bulk density of chondrules (3.3 g cm^-3)
  ρ_ch : spatial density of chondrules in the protoplanetary disk
  h_ch : scale height of chondrules
  τ_stop : timescale of gas drag on chondrules
  F_ch : mass fraction of planetesimals that can eventually generate chondrules via impact jetting (10^-2)
  r_H : Hill radius
  r_B : Bondi radius
  M_t : transition mass
  f_acc : increment factor for chondrule accretion by planetesimals
  M_acc : mass of accreted chondrules
  τ_acc : timescale of chondrule accretion
  τ_B : timescale for chondrules to cross r_B
  f_r,i : mass fraction of chondrules accreted by planetesimals in the i-th mass bin
  f_m,ch : mass fraction of chondrules with respect to an accreting planetesimal
  ΔR_ch : thickness of the chondrule layer on a planetesimal

We introduce our models, which combine a disk model, a chondrule formation model, and a chondrule accretion model. We treat the mass of the smallest planetesimals (m_pl,min), the orbital radius (r), the timescale of gas depletion (τ_g), and the accretion enhancement factor (f_acc) as parameters. In our fiducial model, m_pl,min = 10^23 g planetesimals are located at r = 2 au, the gas density is constant with time (τ_g = ∞), and f_acc = 1. This set of parameters is adopted because the timescale of chondrule formation by the impact jetting process is then consistent with data from chondrites <cit.> and chondrules can be accreted efficiently by planetesimals <cit.>. While 10^23 g planetesimals (about 230 km in radius for a material density of 2 g cm^-3) may appear too large for asteroids, <cit.> showed that the size distribution of asteroids can be reproduced when the initial planetesimals are larger than 100 km in size. Table <ref> summarizes the important physical quantities.

§.§ Disk model

We first introduce a disk model that consists of dust and gas. We adopt a power-law disk model similar to the minimum-mass solar nebula model <cit.>. Following <cit.> and <cit.>, we give the surface density of dust (Σ_d) and the surface density of gas (Σ_g) as

Σ_d = 10 f_d (r/1 au)^-3/2 g cm^-2,
Σ_g = 2400 f_d (r/1 au)^-3/2 g cm^-2,

where f_d is an increment factor, treated as a parameter in this paper. Reflecting the results of <cit.>, we consider a massive disk, f_d = 3, in our fiducial model. The stellar mass is 1 solar mass. In the optically thin limit, the disk temperature is given by T = 280 (r/1 au)^-1/2 K, and the sound speed (c_s), gas pressure scale height (h_g), and gas density (ρ_g) are

c_s = 1.1×10^5 (r/1 au)^-1/4 cm s^-1,
h_g = 4.7×10^-2 (r/1 au)^5/4 au,
ρ_g = 2×10^-9 f_d (r/1 au)^-11/4 g cm^-3,

respectively.
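As a concrete reference, the disk profiles above can be evaluated with a few lines of code. The following Python sketch (the function name and structure are ours, purely illustrative) simply encodes the equations of this subsection in cgs units, with r given in au:

    def disk_model(r_au, f_d=3.0):
        """Evaluate the MMSN-like profiles of the disk model (cgs units)."""
        return {
            "sigma_d": 10.0 * f_d * r_au**-1.5,    # dust surface density [g cm^-2]
            "sigma_g": 2400.0 * f_d * r_au**-1.5,  # gas surface density [g cm^-2]
            "T": 280.0 * r_au**-0.5,               # temperature [K]
            "c_s": 1.1e5 * r_au**-0.25,            # sound speed [cm s^-1]
            "h_g": 4.7e-2 * r_au**1.25,            # gas pressure scale height [au]
            "rho_g": 2.0e-9 * f_d * r_au**-2.75,   # midplane gas density [g cm^-3]
        }

    print(disk_model(2.0))  # fiducial location (r = 2 au, f_d = 3)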
In some calculations, we consider gas depletion. For these calculations, Σ_g and ρ_g are multiplied by exp(-t/τ_g), where t is the time and τ_g is the timescale of gas depletion (cf. equations (<ref>) and (<ref>)).

In disks, the gas component moves with a sub-Keplerian velocity. This velocity can be written as (1-η)v_K, where v_K is the Keplerian velocity and

η ≃ 1.8×10^-3 (r/1 au)^1/2

<cit.>. The velocities of chondrules are determined by their degree of coupling with the gas. In this paper, we adopt 1 mm as the chondrule size, which is a typical value for chondrules found in chondrites <cit.>. Provided that chondrules are subject to the Epstein drag force, their stopping time τ_stop is given by

τ_stop = ρ_s r_ch/(c_s ρ_g) ≃ 5.0×10^-5 f_d^-1 (r_ch/1 mm)(ρ_s/3.3 g cm^-3)(r/1 au)^3/2 T_K,

where r_ch is the radius of chondrules, ρ_s is their material density <cit.>, and T_K is the orbital period, T_K = 2π/Ω_K, with Ω_K the Kepler frequency. Since the stopping time is much shorter than the orbital period, chondrules are well coupled with the disk gas and move on circular orbits. This indicates that when chondrules were formed by impact jetting, they could leave the feeding zone of a protoplanet along with the gas motion there.

The vertical scale height of chondrules (h_ch) is important for the accretion of chondrules <cit.>. Since vertical diffusion of chondrules is governed by turbulence, h_ch is determined by the strength of turbulence and by τ_stop. We use the α_eff parameter to describe the strength of turbulence <cit.>. As suggested for protoplanetary disks, magnetic fields and the resultant disk turbulence probably played an important role in the evolution of the solar nebula. In this case, α_eff can be written as a function of the magnetic fields <cit.>:

α_eff = ⟨B_r B_φ⟩/(Σ_g h_g Ω_K^2) ≤ ⟨B⟩^2/(Σ_g h_g Ω_K^2),

where B, B_r, and B_φ are the strength, radial component, and azimuthal component of the magnetic fields of the solar nebula around the chondrule-forming region, respectively. Once the value of α_eff is given, the scale height of chondrules follows as <cit.>

h_ch = H/√(1+H^2) h_g,

where H is derived from the condition that turbulent vertical diffusion (α_eff) balances dust settling toward the midplane, which is characterized by τ_stop. Explicitly, H can be written as

H = (1/(1+γ_turb))^1/4 (α_eff/(τ_stop Ω_K))^1/2
  = 0.29 (3/(1+γ_turb))^1/4 (⟨B⟩/50 mG)(ρ_s/3.3 g cm^-3)^-1/2 (r_ch/1 mm)^-1/2 (r/1 au)^7/8,

where γ_turb is a quantity related to the nature of turbulence. Based on the experimental results obtained from the Semarkona ordinary chondrite, the typical value of ⟨B⟩ is ⟨B⟩ ≃ 50 - 540 mG for the solar nebula <cit.>.
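The chondrule-coupling quantities above enter the accretion rates derived later. A minimal sketch, assuming the normalizations of the τ_stop and H equations (the helper name is ours, not from any published code), returns the stopping time in units of T_K and the chondrule scale height in au; at the fiducial point (r = 2 au, ⟨B⟩ = 50 mG, γ_turb = 2) it gives H ≃ 0.53, the value used later:

    import math

    def chondrule_coupling(r_au, f_d=3.0, r_ch_mm=1.0, rho_s=3.3, B_mG=50.0,
                           gamma_turb=2.0):
        """Stopping time (units of T_K), H, and chondrule scale height h_ch (au)."""
        tau_stop = 5.0e-5 / f_d * r_ch_mm * (rho_s / 3.3) * r_au**1.5
        H = (0.29 * (3.0 / (1.0 + gamma_turb))**0.25 * (B_mG / 50.0)
             * (rho_s / 3.3)**-0.5 * r_ch_mm**-0.5 * r_au**0.875)
        h_g = 4.7e-2 * r_au**1.25                  # gas scale height [au]
        h_ch = H / math.sqrt(1.0 + H**2) * h_g     # chondrule scale height [au]
        return tau_stop, H, h_ch

    print(chondrule_coupling(2.0))  # -> (~4.7e-05, ~0.53, ~0.052) at the fiducial point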
§.§ Growth of a protoplanet

We use the same model of protoplanetary growth as in <cit.> (see their section 2). We put a protoplanet in a planetesimal swarm. The initial mass of the protoplanet is defined as

M_ini = 50 m_pl,min (m_pl,min/10^23 g)^-2/5 (Σ_d/10 g cm^-2)^3/5 (r/1 au)^6/5,

where m_pl,min is the mass of the smallest planetesimals. When a protoplanet exceeds this mass, oligarchic growth begins <cit.>. The accretion rate of the protoplanet (dM/dt) is given by

dM/dt = C π Σ_d 2GMR/(⟨e_pl^2⟩ r v_K),

where C = 2 is the accretion acceleration factor, R is the radius of the protoplanet, and ⟨e_pl^2⟩^1/2 is the root mean square equilibrium eccentricity of planetesimals. The radius of the protoplanet is calculated with ρ_pr = 2 g cm^-3, where ρ_pr is the material density of the protoplanet. The equilibrium eccentricity in the oligarchic growth stage is

⟨e_pl^2⟩^1/2 ≃ 5.6×10^-2 (m_pl/10^23 g)^1/15 (ρ_pl/2 g cm^-3)^2/15 (ρ_g/2×10^-9 g cm^-3)^-1/5 (r/1 au)^-1/5 (M/M_⊕)^1/3,

where ρ_pl is the material density of planetesimals. Note that laminar disks are assumed in deriving equation (<ref>) <cit.>. We also assume that the feeding zone of the protoplanet is 10 Hill radii wide. The growth of the protoplanet continues until its mass reaches the isolation mass <cit.>,

M_iso = 0.16 M_⊕ (Σ_d/10 g cm^-2)^3/2 (r/1 au)^3.

§.§ Chondrule formation

In our calculations, we normally adopt the impact jetting model as the chondrule formation model. When the impact velocity of planetesimals exceeds 2.5 km s^-1, chondrules are formed <cit.>. The impact velocity (v_imp) is given by

v_imp = √(v_esc^2 + (⟨e_pl^2⟩^1/2 v_K)^2),

where v_esc is the escape velocity. We consider protoplanet-planetesimal collisions as the chondrule-forming impacts, because planetesimal-planetesimal collisions are much less effective in generating chondrules <cit.>. In this situation, the mass of chondrules produced during dt is F_ch dM, where F_ch is the mass fraction of chondrules generated by a jetting collision. For protoplanet-planetesimal collisions with a threshold velocity of 2.5 km s^-1, F_ch ≃ 0.01 <cit.>, which we adopt in our calculations. When the mass of the protoplanet reaches the isolation mass, the cumulative mass of formed chondrules is ≃ 0.01 M_iso.

The timescale of protoplanet growth (τ_pr) is

τ_pr = f_τ M/(dM/dt) = 2.7×10^5 f_τ f_d^-7/5 (m_pl,min/10^23 g)^2/15 (ρ_pl/2 g cm^-3)^4/15 (r/1 au)^27/10 (M/0.1 M_⊕)^1/3 (ρ_pr/2 g cm^-3)^1/3 yr,

where f_τ = 3 is a correction factor <cit.>. In the impact jetting model, chondrule formation lasts from the time when the protoplanet mass reaches M = M_esc ≃ 0.018 M_⊕, at which the escape velocity equals 2.5 km s^-1, until M = M_iso.

Figure <ref> shows the time evolution of the protoplanet mass (M), the eccentricity of the smallest planetesimals (e_pl,min), and the cumulative mass of formed chondrules (M_ch,cum) in our fiducial model. Since the eccentricities of planetesimals follow the Rayleigh distribution <cit.>, e_pl ≃ ⟨e_pl^2⟩^1/2. The collision velocity exceeds 2.5 km s^-1 at 3.3×10^5 yr. The protoplanet reaches the isolation mass, 1.4 M_⊕, at t_iso = 2.4×10^6 yr. Chondrules are thus formed over a span of 2×10^6 yr, which is consistent with the formation timescale of chondrules suggested by chondrites. The cumulative mass of formed chondrules is 0.99×10^-2 M_iso ≃ F_ch M_iso.
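The fiducial growth history can be reproduced by directly integrating the dM/dt expression above. The following is a minimal Euler-integration sketch under our fiducial assumptions (one protoplanet, e_pl evaluated for the smallest planetesimals, raw rate with C = 2; we assume f_τ enters only the quoted timescale, not the integrated rate). It recovers M_iso ≃ 1.4 M_⊕ and an isolation time close to the t_iso = 2.4×10^6 yr quoted above:

    import numpy as np

    G, M_sun, M_earth, AU, YR = 6.674e-8, 1.989e33, 5.972e27, 1.496e13, 3.156e7

    def grow_protoplanet(r_au=2.0, f_d=3.0, m_pl_min=1e23, rho_pr=2.0, rho_pl=2.0,
                         dt=1e3 * YR, t_end=3e6 * YR):
        """Euler integration of dM/dt from M_ini up to M_iso (cgs units)."""
        r = r_au * AU
        sigma_d = 10.0 * f_d * r_au**-1.5
        rho_g = 2.0e-9 * f_d * r_au**-2.75
        v_K = np.sqrt(G * M_sun / r)
        M = (50.0 * m_pl_min * (m_pl_min / 1e23)**-0.4
             * (sigma_d / 10.0)**0.6 * r_au**1.2)            # M_ini
        M_iso = 0.16 * M_earth * (sigma_d / 10.0)**1.5 * r_au**3
        t = 0.0
        while t < t_end and M < M_iso:
            e_pl = (5.6e-2 * (m_pl_min / 1e23)**(1 / 15) * (rho_pl / 2.0)**(2 / 15)
                    * (rho_g / 2e-9)**-0.2 * r_au**-0.2 * (M / M_earth)**(1 / 3))
            R = (3.0 * M / (4.0 * np.pi * rho_pr))**(1 / 3)  # protoplanet radius
            dMdt = 2.0 * np.pi * sigma_d * 2.0 * G * M * R / (e_pl**2 * r * v_K)
            M = min(M + dMdt * dt, M_iso)
            t += dt
        return M / M_earth, t / YR

    print(grow_protoplanet())  # -> (~1.4, ~2.4e6): isolation mass [M_earth] and time [yr]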
We also consider two additional models for chondrule formation, in which the production rate of chondrules differs from that of the impact jetting model. In the first model, we assume that chondrules are formed at a constant rate during M_esc ≤ M < M_iso; this model is hereafter referred to as the constant production rate model. In the second model, the production rate decreases linearly with time; this model is hereafter called the decreasing production rate model. Since the production rate of the impact jetting model increases with time (F_ch dM), these three cases cover all the distinct trends of chondrule production. Note that the total mass of chondrules formed is about 0.01 M_iso in all the models: in the constant production rate model, ≃ F_ch M_iso/(2×10^6 yr) ≃ 4×10^19 g of chondrules are formed per year, while in the decreasing production rate model, the production rate at M = M_esc is ≃ 7×10^19 g per year, ten times larger than that at M = M_iso.

§.§ Chondrule accretion

In the following, we describe how a protoplanet and planetesimals accrete chondrules. Our estimation is based on <cit.>. In order to explicitly compare the accretion efficiency of chondrules by a protoplanet with that by planetesimals, we assume that these massive objects are exposed to the same amount of chondrules. In other words, we estimate the accretion timescales of chondrules by a protoplanet and by planetesimals independently. The chondrule masses accreted by the protoplanet and by the planetesimals are derived from these timescales.

§.§.§ Protoplanet

The relative velocity (Δv) between an accreting body and chondrules is important for estimating chondrule accretion. The relative velocity is caused by the eccentricity of the body, gas drag, and Keplerian shear. In our simulations, Δv between the protoplanet and chondrules is written as η v_K. The eccentricity of the protoplanet is ∼ √(m_pl,min/M) e_pl,min by energy equipartition. In our parameter range, η v_K is larger than the contributions of the protoplanet eccentricity and of Keplerian shear. If we considered larger pebbles or a larger protoplanet, Δv would be determined by Keplerian shear, as in the estimation by <cit.>. Disk turbulence can also excite the eccentricity of a protoplanet <cit.>. However, the turbulence is weak (4×10^-5 ≤ α_eff ≤ 5×10^-3) when 50 ≤ ⟨B⟩ ≤ 540 mG in the solar nebula <cit.>. In addition, a protoplanet needs a longer time to experience eccentricity pump-up by disk turbulence than eccentricity damping by dynamical friction from planetesimals. We therefore do not consider the effect of turbulence on the protoplanet, and hence Δv = η v_K.

There are two modes in which a protoplanet accretes chondrules <cit.>: the drift accretion mode and the Hill accretion mode. These two modes are separated by a transition mass (M_t). Comparing the Bondi radius r_B = GM/Δv^2 with the Hill radius r_H = (M/3M_⊙)^1/3 r, we obtain the transition mass,

M_t = Δv^3/(√3 G Ω_K) = 1.1×10^-3 (r/1 au)^3/2 M_⊕,

which is the mass at which r_B = r_H. Since M_esc > M_t, the protoplanet is in the Hill accretion mode <cit.>.
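As a quick check of this classification, the transition mass can be evaluated directly from Δv = η v_K; a sketch with illustrative names (constants in cgs):

    import math

    G, M_sun, M_earth, AU = 6.674e-8, 1.989e33, 5.972e27, 1.496e13

    def transition_mass(r_au):
        """M_t = dv^3 / (sqrt(3) G Omega_K) with dv = eta v_K, in Earth masses."""
        r = r_au * AU
        v_K = math.sqrt(G * M_sun / r)
        Omega_K = v_K / r
        dv = 1.8e-3 * r_au**0.5 * v_K            # eta * v_K [cm s^-1]
        return dv**3 / (math.sqrt(3.0) * G * Omega_K) / M_earth

    print(transition_mass(2.0))  # ~3e-3 M_earth < M_esc = 0.018 M_earth -> Hill mode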
Given that chondrules are well coupled with the gas (see equation <ref>), the accretion radius (r_acc) of chondrules by a protoplanet in the Hill mode is determined as follows: chondrule accretion is achieved when the timescale over which the gravitational pull of the protoplanet affects the chondrules' orbits becomes comparable to the stopping time of chondrules [In <cit.>, this accretion process is named settling, since chondrules reside in the strong coupling regime.]:

Δv/(GM/r_acc^2) = τ_stop ⇔ r_acc = √(τ_stop GM/(η r Ω_K))
  = 7.2×10^-2 f_d^-1/2 (r_ch/1 mm)^1/2 (ρ_s/3.3 g cm^-3)^1/2 (r/1 au)^1/2 (M/M_⊕)^1/6 r_H.

Substituting r_H = 1.0×10^-2 (M/M_⊕)^1/3 r, we obtain r_acc = 7.2×10^-4 (M/M_⊕)^1/2 (r/1 au)^3/2 au. The chondrule accretion rate of the protoplanet (Ṁ_acc,pr) is Ṁ_acc,pr = π ρ_ch r_acc^2 Δv, where ρ_ch is the spatial density of chondrules. The density of chondrules is ρ_ch = M_ch/(2π^3/2 r Δr h_ch), where Δr is the orbital width over which chondrules are distributed; we set Δr = h_ch. Note that the specific choice of Δr does not affect our conclusions, because the accretion timescales of chondrules by the protoplanet and by the planetesimals have the same dependence on ρ_ch (see below).

We now derive the accretion rate (Ṁ_acc,pr) and the timescale (τ_acc,pr) of chondrules accreted by a protoplanet. Considering the protoplanet at 2 au and H = 0.53, Ṁ_acc,pr becomes

Ṁ_acc,pr = π ρ_ch r_acc^2 Δv ≃ π (M_ch/(2π^3/2 r h_ch^2)) r_acc^2 η v_K
  = 3.1×10^-7 (f_d/3)^-1 (H^2/(1+H^2)/0.25)^-1 (r/2 au)(r_ch/1 mm)(ρ_s/3.3 g cm^-3)(M/M_esc) T_K^-1 M_ch.

The timescale of chondrule accretion by the protoplanet is defined by τ_acc,pr ≡ M_ch/Ṁ_acc,pr,

τ_acc,pr = 0.91×10^7 (f_d/3)(H^2/(1+H^2)/0.25)(r/2 au)^1/2 (r_ch/1 mm)^-1 (ρ_s/3.3 g cm^-3)^-1 (M/M_esc)^-1 yr.

Since the chondrule accretion radius becomes larger with increasing M (see equation <ref>), τ_acc,pr decreases with increasing M.

§.§.§ Planetesimals

Next, we consider chondrule accretion by planetesimals. While we follow the basic formalism developed by <cit.> and <cit.>, the picture of chondrule accretion by planetesimals in our estimation is different from theirs. In oligarchic growth, the random velocities and the numbers of planetesimals change as the protoplanet grows, which largely affects the chondrule accretion rate of planetesimals.

When the mass of a planetesimal exceeds M_t, the accretion radius of chondrules is described in the same way as that of a protoplanet (see equations <ref> and <ref>). In the following, we consider planetesimals with masses smaller than M_t, i.e., in the drift accretion mode <cit.>. In this mode, the chondrule accretion radius is determined by τ_B/τ_stop, where τ_B = r_B/Δv <cit.>. When 1 < τ_B/τ_stop, chondrules are strongly affected by gas drag, and planetesimals cannot accrete chondrules from the whole of r_B. This case corresponds to the settling regime in <cit.> (see also Section <ref>). In this case, the accretion radius is determined by the balance between the gravitational pull of a planetesimal and the gas drag acting on chondrules.
The accretion radius increases up to r_B as τ_B/τ_stop decreases. Since we consider chondrules (that is, a fixed value of τ_stop), τ_B/τ_stop decreases as m_pl becomes smaller or Δv becomes larger. For the case that r_acc = r_B, chondrule accretion becomes the most efficient, in the sense that all the chondrules within the Bondi radius spiral toward the planetesimals. This arises because chondrules experience less gas drag as their orbits are deflected by planetesimals. This settling regime continues until τ_B/τ_stop ≃ 0.25, at which point gravitational focusing of a planetesimal regulates the dynamics of chondrules. In that case, the accretion radius is given by gravitational focusing. This case is called the hyperbolic regime, and planetesimals are in this regime when τ_B/τ_stop < 0.25 <cit.>. In the hyperbolic regime, the orbit of a pebble is determined only by the gravitational interaction with a large body, while in the settling regime it is affected both by gas drag and by the gravitational interaction; the latter is what is called pebble accretion in <cit.>.

The relative velocity between planetesimals and chondrules is Δv = e_pl v_K for all three cases (r_acc = (τ_B/τ_stop)^-1/2 r_B, r_acc = r_B, and r_acc ∼ R_pl), because e_pl v_K is larger than η v_K and the Keplerian shear. Since the eccentricities of planetesimals increase with M (equation <ref>), τ_B/τ_stop changes as the protoplanet mass grows:

τ_B/τ_stop = 2.7×10^-2 f_d (m_pl/10^23 g)^4/5 (ρ_pl/2 g cm^-3)^-2/5 (ρ_g/2×10^-9 g cm^-3)^3/5 (r/2 au)^-9/10 (M/M_esc)^-1 (r_ch/1 mm)^-1 (ρ_s/3.3 g cm^-3)^-1.

As explicitly seen in equation (<ref>), Δv increases and τ_B becomes smaller as the protoplanet grows. Figure <ref> shows τ_B/τ_stop as a function of M in our fiducial model. We set 20 mass bins between m_pl,min and M_ini (see equation <ref>). During chondrule formation (M > M_esc), τ_B/τ_stop of the smallest planetesimals (pl_min) is always smaller than 0.25 (that is, they are in the hyperbolic regime). The median-mass planetesimals (pl_mid), which have mass √(m_pl,min M_ini), also spend most of the chondrule-formation span in the hyperbolic regime. In this figure, only the largest planetesimals, which have mass m_pl,min^1/20 M_ini^19/20, accrete chondrules via pebble accretion.

In our simulations, the size distribution of planetesimals is taken into account when the accretion timescale is estimated. The number of planetesimals is given by the power law n_pl = f_n (m_pl/M_ini)^-2, where n_pl is the number of planetesimals in a bin <cit.>. To keep the total mass of planetesimals (∑ m_pl n_pl) the same for all the simulations, n_pl is multiplied by a factor f_n, which is approximately proportional to m_pl,min^-1/5 Σ_d^-6/5 r^-12/5. In our fiducial model, f_n = 1, and the total mass of planetesimals always corresponds to that in our fiducial model if f_d = 3. While the size distribution of planetesimals is included in our estimate, it is reasonable to assume that planetesimals in each mass bin accrete chondrules from their whole r_acc, because the planetesimals' accretion cross sections are much smaller than 2π r Δr. The mass accretion rate of chondrules by planetesimals in each bin is then computed as the sum over the planetesimals in that bin. Since the protoplanet's accretion cross section is also much smaller than 2π r Δr, we assume that the protoplanet and the planetesimals do not compete in accreting chondrules. Also, to accurately estimate the accretion efficiency of chondrules by planetesimals alone, the reduction of n_pl due to the protoplanet growth is neglected in our simulations. In other words, both the protoplanet and the planetesimals are exposed to the same amount of chondrules. This assumption may lead to an overestimate of the total mass of chondrules accreted by planetesimals. Nonetheless, our estimate is useful in the sense that once the total amount of chondrules accreted by single planetesimals is obtained, we can readily calculate how many chondrules are eventually accreted by planetesimals in each mass bin.
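The mass bins and their regimes can be made concrete with a short script. This sketch (our own illustrative code, with r_ch = 1 mm and ρ_s = 3.3 g cm^-3 fixed; M_ini ≃ 1.2×10^25 g follows from the M_ini equation at the fiducial point) evaluates the normalized τ_B/τ_stop for the 20 bins at M = M_esc and labels the accretion regime of each:

    import numpy as np

    M_earth = 5.972e27
    M_esc = 0.018 * M_earth                 # g: protoplanet mass at v_esc = 2.5 km/s

    def tau_B_over_tau_stop(m_pl, M, r_au=2.0, f_d=3.0, rho_pl=2.0):
        """Normalized regime parameter (r_ch = 1 mm, rho_s = 3.3 g cm^-3)."""
        rho_g = 2.0e-9 * f_d * r_au**-2.75
        return (2.7e-2 * f_d * (m_pl / 1e23)**0.8 * (rho_pl / 2.0)**-0.4
                * (rho_g / 2e-9)**0.6 * (r_au / 2.0)**-0.9 * M_esc / M)

    # 20 bins between m_pl_min and M_ini ~ 120 m_pl_min (fiducial), n_pl ~ m_pl^-2
    m_pl_min, M_ini = 1e23, 1.2e25
    bins = np.logspace(np.log10(m_pl_min), np.log10(M_ini), 20)
    for m in bins:
        x = tau_B_over_tau_stop(m, M_esc)
        regime = "hyperbolic" if x < 0.25 else ("Bondi" if x < 1.0 else "settling")
        print(f"m_pl = {m:9.2e} g  tau_B/tau_stop = {x:6.3f}  ({regime})")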
We now derive the accretion radius (r_acc) of chondrules by planetesimals and the corresponding timescale (τ_acc,pl). First, we consider 1 < τ_B/τ_stop. In this case, the chondrule accretion radius is (as in Section <ref>)

Δv/(G m_pl/r_acc^2) = τ_stop ⇔ r_acc = (τ_B/τ_stop)^-1/2 r_B,

since the orbits of chondrules are affected both by the gas drag and by the planetesimal's gravity. Here, the Bondi radius of a planetesimal is r_B = G m_pl/Δv^2. In this situation, the chondrule accretion rate of planetesimals (Ṁ_acc,pl) is

Ṁ_acc,pl = n_pl π ρ_ch [(τ_B/τ_stop)^-1/2 r_B]^2 e_pl v_K.

The timescale of chondrule accretion by planetesimals is

τ_acc,pl ≡ M_ch/Ṁ_acc,pl = 1.7×10^7 f_n^-1 (f_d/3) ((m_pl/M_ini)/(1/120))^2 (H^2/(1+H^2)/0.25)(m_pl/10^23 g)^-1 (ρ_g/2×10^-9 g cm^-3)^8/5 (r/2 au)^21/10 (r_ch/1 mm)^-1 (ρ_s/3.3 g cm^-3)^-1 yr.

When 0.25 < τ_B/τ_stop < 1, planetesimals accrete chondrules from the whole Bondi radius, r_acc = r_B <cit.>. In this case, τ_acc,pl becomes

τ_acc,pl = 2.1×10^8 f_n^-1 ((m_pl/M_ini)/(1/120))^2 (H^2/(1+H^2)/0.25)(m_pl/10^23 g)^-27/15 (ρ_pl/2 g cm^-3)^2/3 (r/2 au)^3 (ρ_g/2×10^-9 g cm^-3)(M/M_esc) yr.

When τ_B/τ_stop < 0.25, the chondrule accretion is in the hyperbolic regime, and gravitational scattering plays the dominant role in accreting chondrules. Planetesimals can accrete chondrules only from the gravitationally enhanced cross section of radius R_pl √(1+(v_esc/Δv)^2), where R_pl is the radius of planetesimals <cit.>. The timescale of chondrule accretion is

τ_acc,pl = 1.2×10^7 f_n^-1 ((m_pl/M_ini)/(1/120))^2 (H^2/(1+H^2)/0.25)(m_pl/10^23 g)^-11/15 (ρ_pl/2 g cm^-3)^8/15 (r/2 au)^21/5 (ρ_g/2×10^-9 g cm^-3)^1/5 (M/M_esc)^-1/3 (1+(v_esc/(e_pl v_K))^2)^-1 yr.

Orbital inclinations can also affect chondrule accretion <cit.>. This quantity comes into play in our model because planetesimals and a protoplanet coexist in the system. When the inclinations of the planetesimals are larger than h_ch/r, the planetesimals cannot accrete chondrules over their whole orbits. We calibrate the effect of inclination by computing the fraction of the orbital period during which planetesimals reside within a height h_ch of the midplane. The inclinations of planetesimals are given by i_pl = e_pl/2. Using Hill's equations <cit.>, this fraction can be written as

f_i_pl = [4/Ω_K (h_ch/(i_pl r))]/T_K = (2/π)(h_ch/(i_pl r)),

by which Ṁ_acc is multiplied when r i_pl > h_ch. The derivation of f_i_pl is summarized in the Appendix.

§.§.§ The resultant timescale of accreting chondrules

The timescales of chondrule accretion in the fiducial model are shown in Figure <ref>, where the timescales for the protoplanet and for the planetesimals in each mass range are plotted as functions of M. The timescale for the protoplanet decreases with increasing M (see the red solid line), because r_acc increases with increasing M (equations <ref> and <ref>).
In the case of pl_min planetesimals (see the blue dashed line), the planetesimals are in the hyperbolic regime, and τ_acc,pl is given by equation (<ref>). In this regime, τ_acc,pl depends on M only through e_pl, which increases with increasing M (equation <ref>). As a result, the planetesimals encounter more chondrules as the protoplanet becomes more massive. This is why τ_acc,pl decreases gradually with increasing M when M < 0.7 M_⊕. When M > 0.7 M_⊕, the inclination of pl_min planetesimals becomes larger than h_ch/r. In this case, the planetesimals have less chance to accrete chondrules, simply because they stay in the chondrule sea for a shorter time; consequently, τ_acc becomes longer. The inclination effect (equation <ref>) thus increases τ_acc,pl with increasing M.

For pl_mid planetesimals (see the green line), two features similar to the pl_min case are seen in the behavior of τ_acc. The first is that the accretion timescale decreases slowly with increasing M when 0.025 < M/M_⊕ < 0.47, again because the planetesimals are in the hyperbolic regime. The second is that τ_acc increases with M at larger masses, which is caused by the inclination effect. Since the inclination of pl_mid planetesimals grows faster than that of pl_min planetesimals, the effect of f_i_pl becomes important once the protoplanet reaches 0.4 M_⊕. There is another noticeable feature for pl_mid: the accretion timescale jumps at M = 0.02 M_⊕. This jump is caused by the discontinuous change of r_acc between the settling regime and the hyperbolic regime, which occurs at τ_B/τ_stop = 0.25 (see Figure <ref>). The same jump is also seen in τ_acc,pl of pl_max planetesimals (see the purple dashed line).

For pl_max planetesimals, τ_acc,pl is constant when M < 0.03 M_⊕ (equation <ref>). In this mass range, 1 < τ_B/τ_stop, and hence the accretion radius is smaller than the Bondi radius (see equation <ref>). Since the relative velocity is determined by e_pl v_K and r_acc ∝ e_pl^-1/2, r_acc shrinks with increasing M. At the same time, however, τ_acc,pl ∝ (r_acc^2 Δv)^-1, so the e_pl dependences cancel and the accretion timescale in this case does not depend on M. After the protoplanet grows beyond 0.03 M_⊕, τ_acc,pl increases with increasing M (equation <ref>), since r_acc = r_B in this regime and the e_pl dependence of τ_acc,pl is no longer canceled out. When M > 0.14 M_⊕, τ_acc,pl evolves according to equation (<ref>), that is, in the hyperbolic regime.

Figure <ref> shows that τ_acc,pr is shorter than any τ_acc,pl when M > 0.04 M_⊕. This suggests that most chondrules are accreted by the protoplanet. Among the planetesimals, τ_acc,pl of pl_min planetesimals is the smallest. While the timescale of chondrule accretion by a single planetesimal becomes shorter with increasing m_pl, τ_acc,pl of the planetesimals in each mass bin becomes longer with increasing m_pl, simply because the number of planetesimals is taken into account when computing τ_acc,pl.
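To make the regime switching above concrete, the sketch below implements the two timescales as functions. It is a rough transcription of the normalized expressions, with r_ch = 1 mm and ρ_s = 3.3 g cm^-3 fixed; all names and the code structure are ours, not the paper's. The inclination correction f_i_pl is applied whenever i_pl r > h_ch:

    import numpy as np

    G, M_sun, M_earth, AU = 6.674e-8, 1.989e33, 5.972e27, 1.496e13
    M_ESC = 0.018 * M_earth

    def tau_acc_pr(M, r_au=2.0, f_d=3.0, H=0.53):
        """Protoplanet accretion timescale [yr], Hill mode."""
        Hfac = (H**2 / (1.0 + H**2)) / 0.25
        return 0.91e7 * (f_d / 3.0) * Hfac * (r_au / 2.0)**0.5 * M_ESC / M

    def tau_acc_pl(m_pl, M, r_au=2.0, f_d=3.0, f_n=1.0, rho_pl=2.0, H=0.53,
                   M_ini=1.2e25):
        """Accretion timescale [yr] of one planetesimal mass bin."""
        rho_g = 2.0e-9 * f_d * r_au**-2.75
        Hfac = (H**2 / (1.0 + H**2)) / 0.25
        mbin = (m_pl / M_ini) / (1.0 / 120.0)
        v_K = np.sqrt(G * M_sun / (r_au * AU))
        e_pl = (5.6e-2 * (m_pl / 1e23)**(1 / 15) * (rho_pl / 2.0)**(2 / 15)
                * (rho_g / 2e-9)**-0.2 * r_au**-0.2 * (M / M_earth)**(1 / 3))
        x = (2.7e-2 * f_d * (m_pl / 1e23)**0.8 * (rho_pl / 2.0)**-0.4
             * (rho_g / 2e-9)**0.6 * (r_au / 2.0)**-0.9 * M_ESC / M)
        if x > 1.0:        # settling: r_acc = (tau_B/tau_stop)^(-1/2) r_B
            tau = (1.7e7 / f_n * (f_d / 3.0) * mbin**2 * Hfac / (m_pl / 1e23)
                   * (rho_g / 2e-9)**1.6 * (r_au / 2.0)**2.1)
        elif x > 0.25:     # drift accretion from the whole Bondi radius
            tau = (2.1e8 / f_n * mbin**2 * Hfac * (m_pl / 1e23)**(-27 / 15)
                   * (rho_pl / 2.0)**(2 / 3) * (r_au / 2.0)**3
                   * (rho_g / 2e-9) * M / M_ESC)
        else:              # hyperbolic: gravitationally enhanced cross section
            R_pl = (3.0 * m_pl / (4.0 * np.pi * rho_pl))**(1 / 3)
            v_esc = np.sqrt(2.0 * G * m_pl / R_pl)
            tau = (1.2e7 / f_n * mbin**2 * Hfac * (m_pl / 1e23)**(-11 / 15)
                   * (rho_pl / 2.0)**(8 / 15) * (r_au / 2.0)**4.2
                   * (rho_g / 2e-9)**0.2 * (M / M_ESC)**(-1 / 3)
                   / (1.0 + (v_esc / (e_pl * v_K))**2))
        # inclination factor f_i = (2/pi) h_ch/(i_pl r), applied when i_pl r > h_ch
        h_ch = H / np.sqrt(1.0 + H**2) * 4.7e-2 * r_au**1.25   # [au]
        i_pl = 0.5 * e_pl
        if i_pl * r_au > h_ch:
            tau *= np.pi / 2.0 * i_pl * r_au / h_ch            # tau / f_i
        return tau

    # near isolation the protoplanet accretes fastest; pl_min leads the swarm
    print(tau_acc_pr(1.4 * M_earth), tau_acc_pl(1e23, 1.4 * M_earth))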
In some of the following simulations, the effect of disk turbulence on chondrule accretion by planetesimals is examined by multiplying τ_acc,pl by a factor f_acc^-1. This is because a number of effects triggered by disk turbulence have been proposed: chondrules can be concentrated by weak turbulence <cit.>, eccentricities of planetesimals are excited by turbulence <cit.>, and the collision probability between planetesimals and chondrules is changed by turbulence <cit.>. In this paper, we take into account only the turbulent effect that changes the collision probability, which is done by changing the value of f_acc. The concentration of chondrules by turbulence during oligarchic growth would be affected by protoplanets. And while random torques arising from disk turbulence can pump up planetesimals' eccentricities, the eccentricity excitation by a protoplanet is likely to be more important in our configuration <cit.>. Thus, the concentration of chondrules and the eccentricity excitation by turbulence are not included in our simulations. Note that the estimation of h_ch already includes the turbulent effect <cit.>.

§ CHONDRULE FORMATION AND ACCRETION

Table 2: Summary of Simulations

Section | m_pl,min | f_d | r | τ_g | f_acc | F_ch | Chondrule formation model
<ref> (fiducial) | 10^23 g | 3 | 2 au | ∞ | 1 | 0.01 | Impact jetting
<ref> | 10^19 - 10^24 g | 3 | 2 au | ∞ | 1 | 0.01 | Impact jetting
<ref> | 10^23 g | 1 - 10 | 2 au | ∞ | 1 | 0.01 | Impact jetting
<ref> | 10^23 g | 3 | 1 - 2.5 au | ∞ | 1 | 0.01 | Impact jetting
<ref> | 10^23 g | 3 | 2 au | 10^6 yr - ∞ | 1 | 0.01 | Impact jetting
<ref> | 10^23 g | 3 | 2 au | ∞ | 0.3 - 10 | 0.01 | Impact jetting
<ref> | 10^23 g | 3 | 2 au | ∞ | 1 | 0.01 - 0.10 | Impact jetting
<ref> | 10^23 g | 3 | 2 au | ∞ | 1 | 0.01 | Impact jetting, constant production rate, decreasing production rate

We perform simulations of chondrule formation and accretion, combining all the models described in section <ref>. In other words, the growth of a protoplanet, the formation of chondrules, and the accretion of chondrules both by the protoplanet and by the planetesimals are computed simultaneously. We first describe the procedure of our simulations. Then, chondrule formation and accretion in our fiducial model are presented, and we explore the parameter dependences of M_acc and τ_acc. The parameter ranges of each model are summarized in Table <ref>.

§.§ Synthesis

Our simulations combine the growth of a protoplanet, chondrule formation, and chondrule accretion by the protoplanet and the planetesimals. To synthesize these effects, we perform simulations based on the following procedure, sketched in code below. In a time interval dt, the mass of the protoplanet is increased by dM, calculated from equation (<ref>), until the protoplanet reaches its isolation mass. After v_imp reaches 2.5 km s^-1, a chondrule mass F_ch dM is formed in each dt. These chondrules are treated as field chondrules; the mass of field chondrules (M_ch) is the sum of the field chondrules remaining from the previous step and F_ch dM. Field chondrules are accreted by the protoplanet and by the planetesimals: the chondrule mass accreted by the protoplanet in dt (Ṁ_acc,pr dt) is given by equation (<ref>), while that accreted by planetesimals (Ṁ_acc,pl dt) depends on the accretion mode of the planetesimals in each mass range (see <ref>). The mass of the remaining field chondrules is then M_ch - (Ṁ_acc,pr + ∑Ṁ_acc,pl) dt, which ends the sequence of processes occurring in one timestep. These processes are repeated until 3×10^6 yr to assess chondrule formation and accretion.
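A minimal, self-contained sketch of this bookkeeping is given below, under our fiducial assumptions. For brevity it tracks only the protoplanet sink (the planetesimal bins would subtract from the same M_ch pool via the tau_acc_pl function sketched earlier), and the timestep dt = 10^3 yr is our assumption, since the text does not quote one:

    import numpy as np

    G, M_sun, M_earth, AU, YR = 6.674e-8, 1.989e33, 5.972e27, 1.496e13, 3.156e7
    r_au, f_d, rho_pr, m_pl_min, F_ch = 2.0, 3.0, 2.0, 1e23, 0.01
    r = r_au * AU
    sigma_d = 10.0 * f_d * r_au**-1.5
    rho_g = 2.0e-9 * f_d * r_au**-2.75
    v_K = np.sqrt(G * M_sun / r)
    M_iso = 0.16 * M_earth * (sigma_d / 10.0)**1.5 * r_au**3
    M = 50.0 * m_pl_min * (sigma_d / 10.0)**0.6 * r_au**1.2  # M_ini (m_pl_min = 1e23 g)

    dt = 1e3 * YR
    M_ch = M_form = M_acc_pr = t = 0.0
    while t < 3e6 * YR:
        # eccentricity of the smallest planetesimals (unity factors dropped)
        e_pl = 5.6e-2 * (rho_g / 2e-9)**-0.2 * r_au**-0.2 * (M / M_earth)**(1 / 3)
        R = (3.0 * M / (4.0 * np.pi * rho_pr))**(1 / 3)
        # 1) protoplanet growth (dM/dt with C = 2), capped at M_iso
        dM = min(2.0 * np.pi * sigma_d * 2.0 * G * M * R
                 / (e_pl**2 * r * v_K) * dt, M_iso - M)
        # 2) impact jetting: chondrules form while v_imp > 2.5 km/s and M < M_iso
        v_imp = np.hypot(np.sqrt(2.0 * G * M / R), e_pl * v_K)
        if v_imp > 2.5e5:
            M_ch += F_ch * dM
            M_form += F_ch * dM
        M += dM
        # 3) field chondrules drained by the protoplanet (tau_acc_pr of the text)
        dM_acc = M_ch / (0.91e7 * YR * 0.018 * M_earth / M) * dt
        M_acc_pr += dM_acc
        M_ch -= dM_acc
        t += dt

    print(M_form / M_iso, M_acc_pr / M_form)  # ~0.01; >0.5 since planetesimal sinks are omitted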
Note that while Ṁ_acc,pr and Ṁ_acc,pl are calculated independently, they are computed from the same amount of field chondrules. Some parameters affect either τ_acc,pr or τ_acc,pl, but not both. Even in such a case, both M_acc,pr and M_acc,pl change, since M_ch changes.

§.§ Fiducial model

Figure <ref> shows the cumulative mass of formed chondrules (M_ch,cum) and the masses accreted by the protoplanet (M_acc,pr) and by the planetesimals (M_acc,pl) as functions of time. The protoplanet accretes the largest amount of chondrules: it finally accretes 5.0×10^-3 M_iso, which is 51% of the formed chondrules (see the red solid line). The smallest-mass planetesimals accrete the second largest amount (see the blue dashed line): they finally hold 1.2×10^-3 M_iso, or 12% of the formed chondrules. The chondrule mass accreted by all the planetesimals in a single mass bin becomes smaller as m_pl increases, since τ_acc,pl becomes longer (<ref>). The sum of the chondrules accreted by the planetesimals over all the mass bins amounts to 44% of the formed chondrules. Most of the formed chondrules are thus accreted by the protoplanet and the planetesimals (see the dot-dashed line).

We find that chondrules are not accreted immediately after they form. This is simply because the accretion timescale is ≳ 10^5 yr, much longer than the timescale of a collision, even for the protoplanet (see Figure <ref>). This feature can also be seen in Figure <ref>: for a given value of chondrule mass (M_ch,cum and M_acc), there is a time lag before the mass of chondrules accreted by all bodies (the dot-dashed line) catches up with the cumulative value (the dotted line). This time lag roughly corresponds to the accretion timescale of chondrules. Our results thus suggest that chondrules should have stayed in the solar nebula for 0.1 - 1 Myr. Interestingly, this time interval is roughly consistent with the isotope analysis of chondrules <cit.>. In that study, so-called compound chondrules, which are aggregates of two or more chondrules, were isotopically analyzed, and it was found that the secondary melting events occurred about 1 Myr after the primary melting. This implies that some chondrules kept staying in the solar nebula for about 1 Myr.

§.§ The dependence on m_pl,min

In this section, we examine the effect of planetesimal mass on chondrule formation and accretion by changing the mass of the smallest planetesimals (m_pl,min) and performing otherwise similar simulations. Figure <ref> shows M_acc and τ_acc of the protoplanet and of pl_min planetesimals at 3×10^6 yr as functions of m_pl,min. The timescale of chondrule accretion by the protoplanet is constant, since it is independent of m_pl,min (equation <ref>). The timescale for pl_min planetesimals increases with increasing m_pl,min. Considering m_pl = m_pl,min, equation (<ref>) is proportional to m_pl,min^4/15. This dependence comes from the product of n_pl,min ∝ m_pl,min^-1, e_pl,min ∝ m_pl,min^1/15, and r_acc^2 ∼ R_pl,min^2 ∝ m_pl,min^2/3. However, Figure <ref> shows that τ_acc,pl of pl_min planetesimals changes more rapidly than m_pl,min^4/15. This arises because the accretion timescale is additionally affected by the inclination effect (f_i_pl) when m_pl,min > 10^21 g. For m_pl,min ≤ 10^21 g, i_pl,min is smaller than h_ch/r even when M = M_iso, and τ_acc,pl of pl_min planetesimals scales as m_pl,min^4/15. Figure <ref> also shows that M_acc,pr increases as m_pl,min increases when m_pl,min < 10^24 g.
This occurs because M_acc,pl decreases as m_pl,min increases. When Ṁ_acc,pl becomes smaller, more chondrules remain as field chondrules in a step. Since the mass of field chondrules increases, the chondrule accretion rate of the protoplanet (Ṁ_acc,pr = M_ch/τ_acc,pr) becomes larger at subsequent timesteps. In contrast, M_acc,pr at m_pl,min = 10^24 g becomes smaller than that at m_pl,min = 10^23 g. When m_pl,min ≥ 10^24 g, the mass of the protoplanet does not reach M_iso within 3×10^6 yr, since τ_pr becomes longer due to the larger e_pl,min (see equation <ref>). The cumulative mass of formed chondrules is then smaller than F_ch M_iso, and since the total mass of chondrules decreases, M_acc,pr also decreases. As m_pl,min increases, M_acc,pl_min decreases owing to the increase of τ_acc,pl (see equation <ref> and Figure <ref>). Except for m_pl,min = 10^24 g, the protoplanet accretes (1.9 - 5.0)×10^-3 M_iso, which is equal to 19% - 50% of the formed chondrules. On the other hand, the planetesimals in total accrete 44% - 81% of the formed chondrules. The smallest planetesimals obtain the largest share among the planetesimals, 12% - 28% of the formed chondrules.

§.§ The dependence on f_d

<cit.> showed that there are appropriate values of f_d and m_pl for chondrule formation and accretion by the impact jetting process. In this section, we examine how the timescale of chondrule accretion and the amount of accreted chondrules depend on f_d. We adopt f_d = 1, 2, 3 (fiducial), 5, and 10. Figure <ref> shows M_acc and τ_acc as functions of f_d. Note that M_iso is proportional to f_d^3/2. As f_d increases, τ_acc,pr and τ_acc,pl become shorter. The protoplanet does not reach its isolation mass within 3×10^6 yr when f_d < 2.7 (equation <ref>), which is why τ_acc of the protoplanet and of pl_min planetesimals inflects around f_d = 3. Since τ_pr ∝ m_pl,min^2/15 f_d^-9/10, the protoplanet can reach M_iso even for f_d = 1 if m_pl,min ≤ 1.2×10^20 g. When the protoplanet reaches M_iso, the f_d dependence of τ_acc,pr is caused by r_acc^-2 ∝ f_d M_iso^-1 ∝ f_d^-1/2. The dependence of τ_acc,pl_min on f_d, entering through n_pl, r_acc^2, e_pl, and f_i_pl, is τ_acc,pl_min ∝ f_d^-13/10. The dependence of τ_acc,pl_min on f_d is thus stronger than that of τ_acc,pr, so M_acc,pl_min/M_iso becomes larger as f_d increases. Importantly, the mass ratio of accreted chondrules between the protoplanet and the planetesimals does not change very much when f_d > 3. Even for f_d < 3, the trend of our results does not change: most chondrules are accreted by the protoplanet. Thus, the results obtained from our fiducial case are applicable for a wide range of disk masses.

§.§ The dependence on r

The orbital radius changes the timescales of chondrule formation and accretion. We perform simulations with orbital radii from 1.0 au to 2.5 au. The timescale of chondrule accretion becomes longer as r increases. Figure <ref> shows M_acc and τ_acc of the protoplanet and of pl_min planetesimals at 3×10^6 yr as functions of r. The chondrule masses accreted both by the protoplanet and by pl_min planetesimals drop at r = 2.5 au (see the top panel).
This is because the protoplanet does not reach M_iso within 3×10^6 yr, as discussed in the previous section. In the following, we consider chondrule accretion at r < 2.5 au. Based on the derivation in Section <ref>, τ_acc,pr ∝ r^3/2 under the approximation H^2/(1+H^2) ∼ H^2 (equation <ref>), while τ_acc,pl_min changes more rapidly, as τ_acc,pl_min ∝ r^24/5 (equation <ref>). This indicates that as r decreases, both τ_acc of the protoplanet and that of the pl_min planetesimals decrease. We find that τ_acc,pl_min ≃ τ_acc,pr at 3×10^6 yr at 1.0 au (see Figure <ref>). More chondrules are then accreted by pl_min planetesimals than by the protoplanet at 1.0 au: the protoplanet accretes about 12% of the formed chondrules, and 88% of them are accreted by planetesimals. Although τ_acc,pl_min at 1.0 au is about 20 times shorter than at 2.0 au, M_acc,pl_min at 1.0 au is 23% of the formed chondrules, only twice the value at 2.0 au. In other words, M_acc,pl is relatively insensitive to changes of τ_acc,pl. When τ_acc,pl becomes small, chondrules are accreted more quickly, but M_ch becomes smaller at the same time. The final values of M_acc, given by ∫Ṁ_acc dt = ∫(M_ch/τ_acc) dt, are therefore not proportional to τ_acc^-1.

§.§ The dependence on τ_g

The above simulations are performed without gas depletion. When the gas density and surface density change with time, τ_stop and e_pl also vary. Since we impose gas depletion through exp(-t/τ_g), τ_stop and e_pl increase as the gas disk evolves: τ_stop ∝ ρ_g^-1 ∝ exp(t/τ_g) (equation <ref>), and e_pl ∝ ρ_g^-1/5 ∝ exp(0.2 t/τ_g) (equation <ref>). This means that when τ_g ≳ t_iso, which is 2.4×10^6 yr in our fiducial model (<ref>), τ_stop and e_pl change only by a factor of a few. Note that H does not depend on τ_g, since the τ_g dependence cancels: H ∝ (α_eff/τ_stop)^1/2 ∝ (Σ_g^-1/ρ_g^-1)^1/2 (equations <ref> and <ref>).

We perform simulations with τ_g = 10^6 yr, 2×10^6 yr, 3×10^6 yr, 5×10^6 yr, and 10^7 yr. While we consider the cases of τ_g = 10^6 yr and 2×10^6 yr only for completeness, the results for τ_g ≥ 3 Myr are more appropriate for chondrules found in chondrites. This is because chondrule formation likely continued until 3 Myr after CAI formation, and a gas disk would have been needed for chondrule formation at that time <cit.>. Our fiducial model corresponds to τ_g = ∞.

Figure <ref> shows the resultant values of M_acc and τ_acc for the protoplanet and pl_min planetesimals at 3×10^6 yr. As τ_g increases, τ_acc,pr increases, while τ_acc,pl is hardly changed. The τ_g dependence of τ_acc arises from r_acc^-2 Δv^-1. For the protoplanet, r_acc^-2 Δv^-1 ∝ τ_stop^-1 ∝ exp(-t/τ_g) (equations <ref> and <ref>), which is why τ_acc,pr increases with increasing τ_g for τ_g ≳ t_iso. For pl_min planetesimals, τ_acc,pl is multiplied by f_i_pl^-1 at t = 3×10^6 yr. Since τ_acc,pl ∝ r_acc^-2 Δv^-1 f_i_pl^-1, which is approximately proportional to i_pl/e_pl, τ_acc,pl does not depend on τ_g. In this case, M_acc of the protoplanet and of pl_min planetesimals keep values similar to the fiducial case. When τ_g ≲ t_iso, M_acc of both increases as τ_g increases. This is because gas depletion occurs before the protoplanet reaches its isolation mass: due to the gas depletion, e_pl increases and τ_pr becomes longer (equations <ref> and <ref>).
The protoplanet then does not reach M_iso within 3×10^6 yr, and M_acc becomes small, since the cumulative mass of formed chondrules is small.

§.§ The dependence on f_acc

In this paper, our model is developed based on the oligarchic growth model in laminar disks <cit.>. As described in <ref>, chondrule accretion can be affected by disk turbulence. In this section, we multiply τ_acc,pl by f_acc^-1 to consider more effective accretion of chondrules, as could be triggered by disk turbulence. We adopt f_acc = 0.3, 1 (fiducial), 3, and 10. In these simulations, τ_acc,pl ∝ f_acc^-1, while τ_acc,pr is constant with changing f_acc (see Figure <ref>). Our results show that the chondrule mass accreted by pl_min planetesimals does not change in proportion to f_acc (see Figure <ref>). As seen in <ref>, the dependence of M_acc,pl on τ_acc,pl is weak, since Ṁ_acc = M_ch/τ_acc and M_ch becomes smaller when τ_acc,pl is small. As a result, the dependence of M_acc,pl_min on f_acc is weak, and pl_min planetesimals accrete 24% of the formed chondrules even when f_acc = 10.

§.§ The other dependences

We also perform simulations with different values of F_ch and different chondrule formation models. When we change F_ch, the mass of formed chondrules changes in proportion to F_ch. Since τ_acc does not depend on F_ch, M_acc of the protoplanet and of the planetesimals is proportional to F_ch.

When we change the chondrule formation model, we fix the timescale of chondrule formation (i.e., M_esc ≤ M ≤ M_iso) and the total mass of formed chondrules (see Section <ref>), and run the constant production rate model and the decreasing production rate model (<ref>). The chondrule mass accreted by pl_min planetesimals increases in the following order: the impact jetting model (fiducial), the constant production rate model, and the decreasing production rate model. This is because pl_min planetesimals accrete more chondrules than the protoplanet when M ≃ M_esc (see Figure <ref>). However, the final chondrule mass accreted by pl_min planetesimals changes only slightly: 1.2×10^-3 M_iso in the impact jetting model, 1.3×10^-3 M_iso in the constant production rate model, and 1.5×10^-3 M_iso in the decreasing production rate model. This is because the condition τ_acc,pl_min < τ_acc,pr is satisfied only for the initial 2×10^5 yr of the total chondrule-forming span of 2×10^6 yr, and τ_acc,pl_min in this initial interval is about 5×10^6 yr, which is quite long compared with the interval. This is why the resultant chondrule masses accreted by planetesimals are similar for all the models. Thus, the chondrule formation model has little influence on our results.

§ DISCUSSION

§.§ Chondrules on planetesimals

The protoplanet accretes most of the chondrules in many of our simulations: about 50% of the formed chondrules under the condition that m_pl,min ∼ 10^23 g, f_d = 3, r ≃ 2 au, and f_acc ≃ 1. The remaining chondrules are accreted by planetesimals, among which the smallest planetesimals accrete the largest mass. Under the above conditions, pl_min planetesimals finally hold about 10% of the formed chondrules, and the other 40% are accreted by the other planetesimals.
These chondrules would not contribute to planetesimal growth. The planetesimals in each mass bin obtain f_r,i F_ch M_iso (f_r,i ≤ f_r,min ≃ 0.1) in chondrules, where f_r,i is the mass fraction of chondrules accreted by planetesimals in the i-th mass bin, given by f_r,i = M_acc,pl_i/(M_acc,pr + ∑M_acc,pl) ≃ M_acc,pl_i/(F_ch M_iso). The mass fraction of chondrules with respect to an accreting planetesimal is

f_m,ch = f_r,i F_ch M_iso/(m_pl n_pl) = 6.0×10^-2 f_r,i (m_pl/10^23 g)(Σ_d/11 g cm^-2)^3/10 (r/2 au)^3/5 (m_pl,min/10^23 g)^-6/5,

where Σ_d is about 11 g cm^-2 at 2 au when f_d = 3. For the smallest planetesimals, i.e., m_pl = m_pl,min and f_r,min ≃ 0.1, we find f_m,ch = 6.0×10^-3, which means that the mass of the accreted chondrules is much smaller than the planetesimal mass. This expression is seemingly proportional to m_pl; however, since f_r,i decreases with increasing m_pl (see <ref> and Figure <ref>), f_m,ch keeps small values. The dependence of f_r,i on m_pl can be derived from τ_acc,pl: considering planetesimals in the hyperbolic regime, f_r,i ∝ m_pl^-19/15 under the condition that f_i_pl = 1 (see equation (<ref>)), and we obtain f_m,ch ∝ m_pl^-4/15. The small f_m,ch means that the accreted chondrules do not change the mass of the planetesimals. On the other hand, this fraction is too small to reproduce the fractional abundance of chondrules in chondrites <cit.>. In other words, if the current samples of chondrites originated from fragments of massive bodies, our results suggest that only fragments arising from planetesimals' surfaces can satisfy the measured abundance of chondrules in chondrites.

The chondrules accreted by planetesimals make a chondrule-rich layer in the surface region of the planetesimals. The thickness of this layer normalized by R_pl is

ΔR_ch/R_pl = f_m,ch m_pl/(4π R_pl^3 ρ_s) = 1.2×10^-2 f_r,i (ρ_pl/2 g cm^-3)(ρ_s/3.3 g cm^-3)^-1 (m_pl/10^23 g)(Σ_d/11 g cm^-2)^3/10 (r/2 au)^3/5 (m_pl,min/10^23 g)^-6/5.

Figure <ref> shows ΔR_ch/R_pl as a function of m_pl, obtained from our calculations of the accreted chondrule mass (see Section <ref>). We find that for m_pl,min = 10^21 g (see the green dots) the results are well characterized by m_pl^-4/15, while for m_pl,min = 10^19 g (see the blue dots) they follow m_pl^-1/3. These can be explained by the behavior of ΔR_ch/R_pl (∝ f_r,i m_pl): for the former case, ΔR_ch/R_pl ∝ m_pl^-4/15 under the condition that f_i_pl = 1; for the latter, ΔR_ch/R_pl ∝ m_pl^-1/3 when f_i_pl is given by equation (<ref>). Our results also show that for m_pl,min = 10^23 g the dependence of ΔR_ch/R_pl on m_pl is weaker than m_pl^-4/15, since larger-mass planetesimals are in the settling regime. In the case of m_pl,min = m_pl = 10^23 g, i.e., planetesimals with a radius of 230 km, a planetesimal has a 0.27 km chondrule layer on its surface. <cit.> showed that the majority of ejecta arise from a very thin surface layer, about 100 m deep.
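These scalings are easy to evaluate numerically; the following sketch (illustrative names; f_r is the accreted mass fraction of the bin, ≃ 0.1 for the smallest bin) reproduces the 0.27 km figure for the fiducial case:

    def layer_thickness_km(m_pl, f_r=0.1, m_pl_min=1e23, sigma_d=11.0, r_au=2.0,
                           rho_pl=2.0, rho_s=3.3):
        """Delta R_ch (km) from the normalized expression above."""
        ratio = (1.2e-2 * f_r * (rho_pl / 2.0) / (rho_s / 3.3) * (m_pl / 1e23)
                 * (sigma_d / 11.0)**0.3 * (r_au / 2.0)**0.6
                 * (m_pl_min / 1e23)**-1.2)           # Delta R_ch / R_pl
        R_pl_km = 230.0 * (m_pl / 1e23)**(1.0 / 3.0)  # radius at rho_pl = 2 g cm^-3
        return ratio * R_pl_km

    print(layer_thickness_km(1e23))  # ~0.27 km chondrule-rich surface layer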
The high abundance of chondrules in chondrites can then potentially be explained by the chondrule layer, provided that the original materials of chondrites come from this layer. Based on the high fractional abundance of chondrules in chondrites, it can also be expected that the solar nebula did not contain a large amount of dust with a Stokes number similar to that of chondrules at that time.

§.§ Other effects

In our simulations, we assume that chondrules stay at the orbits where they formed. Theoretical studies suggest that chondrules migrate inward due to gas drag on a timescale of ∼ 10^5 yr <cit.>. This timescale is shorter than the timescales of chondrule accretion (Figure <ref>), which indicates that chondrules would migrate inward before they are accreted by a protoplanet and planetesimals. On the other hand, the isotopic measurements of compound chondrules suggest that chondrules stayed in the solar nebula for 1 Myr <cit.>. Some mechanism, such as a radial pressure bump <cit.> or vortices <cit.>, would be needed to keep chondrules from migrating.

We consider only one protoplanet in the present calculations. There is nonetheless a possibility that other protoplanets, and even fully formed planets, existed in the solar nebula at that time. The presence of other protoplanets would not change our results, since their orbital separation, ∼ 10 r_H ≃ 0.2 au, is larger than h_ch; the chondrules produced by a protoplanet are accreted only by that protoplanet and the surrounding planetesimals. Formation of giant planets affects the eccentricities of planetesimals: the perturbation from giant planets makes planetesimals dynamically hot. If the timescale of protoplanet growth becomes longer as a result and the protoplanet does not reach its isolation mass within the disk lifetime, the mass of accreted chondrules decreases, as we saw in <ref>. In a subsequent paper, we will perform full N-body simulations of planetary growth under the existence of a giant planet and examine the eccentricities of planetesimals and the formation of chondrules by impact jetting (S. Oshino et al., in prep).

We have not considered the spatial and velocity distributions of chondrules and planetesimals in our calculations. When planetesimals have larger eccentricities and inclinations due to perturbations from giant planets, or when chondrules are spatially concentrated by some mechanism such as the streaming instability <cit.>, the relative velocity and the collision probability between planetesimals and chondrules change greatly along an orbit, especially in the vertical direction. <cit.> examined how disk turbulence affects the collision probability between dust particles and planetesimals, including their 3D spatial distributions. However, the accretion of dust particles onto planetesimals taking into account both their spatial and velocity distributions remains to be explored. Meanwhile, our results are not largely changed as long as the picture of oligarchic growth in our fiducial model holds.

In <ref>, we perform simulations with m_pl,min < 10^20 g for completeness. However, the mass of planetesimals strongly affects the onset of runaway growth <cit.>. When m_pl,min is smaller than a threshold value, planetesimals grow orderly until certain conditions for runaway growth are satisfied. Even if runaway growth occurs in a swarm of planetesimals with m_pl,min < 10^20 g, the mass distribution of planetesimals in oligarchic growth would be affected by m_pl,min <cit.>.
It is also important to comment on the isolation mass, which sets the end of chondrule formation in our simulations. In our fiducial model, the isolation mass of the protoplanet is 1.4 M_⊕. Even for f_d = 1, the final mass of protoplanets is larger than the current mass of the asteroid belt. Such large bodies can be eliminated by perturbations from giant planets or by planetary migration: after the giant planets form, protoplanets are scattered by their perturbations <cit.>, and type I migration becomes effective when the protoplanetary mass exceeds ∼ 0.1 - 1 M_⊕ at 1 - 3 au <cit.>.

While we have so far considered the possibility that chondrules formed via impact jetting are accreted by the surrounding planetesimals, it is interesting to discuss another possibility: the formation of planetesimals directly from chondrules ejected in planetesimal collisions. This possibility may work well to account for the currently available meteoritic data <cit.>. Unless planetesimal formation from chondrules is the dominant process, our results would not change greatly, since such planetesimals also produce chondrules by impact jetting.

§ CONCLUSIONS

Investigating the process of chondrule accretion provides profound insights into the origins of our Solar System, as well as into the formation process of chondrules themselves. When a large number of massive planetesimals, which can accrete chondrules, are present, they grow into a protoplanet. Isotope measurements suggest that the timescale of Mars formation is shorter than the timescale of chondrule formation <cit.>. We have therefore investigated chondrule accretion onto a protoplanet and planetesimals in oligarchic growth <cit.> using a simple analytical approach.

In our simulations, we consider the impact jetting model as the chondrule formation model: when the collision velocity exceeds 2.5 km s^-1, chondrules are formed via planetesimal collisions <cit.>. The cumulative mass of formed chondrules is about 1% of the protoplanet mass when planetesimal collisions transform about 1% of the impactor's mass into (the progenitors of) chondrules. The protoplanet accretes about half of the formed chondrules; the other half are accreted by planetesimals. In our simulations, we divide the planetesimals into 20 mass bins. The smallest planetesimal bin holds the largest amount of chondrules among all the planetesimals, about 10% of the formed chondrules.

We have performed simulations varying the mass of the smallest planetesimals, the orbital radius, the timescale of gas depletion, the efficiency of chondrule accretion by planetesimals, the chondrule formation efficiency of the impact jetting model, and the chondrule formation model. Under the condition that the protoplanet reaches its isolation mass, the amount of chondrules accreted by the smallest planetesimals is about 10% of the formed chondrules in all the runs. This amount hardly depends on the chondrule formation model, since it is determined by the timescales of chondrule accretion. The mass of chondrules accreted by planetesimals is too small to explain the chondrule fraction in chondrites. Our results indicate that chondrules accreted by planetesimals form a layer on the planetesimal surfaces; only if chondrites come from this layer can the chondrule fraction in chondrites be explained.

The authors thank the referee, B. Johnson, for helpful comments. Numerical computations were carried out on the PC cluster at the Center for Computational Astrophysics, National Astronomical Observatory of Japan. Y. H. is supported by JPL/Caltech.
Part of this research was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with NASA.

When a planetesimal stays at z ≤ h_ch, where z is the distance from the midplane, the planetesimal can accrete chondrules. The motion of a planetesimal is given by Hill's equation <cit.>,

z = i_pl r sin( Ω_K t - Ω_pl ),

where Ω_pl is the longitude of the ascending node of the planetesimal. A planetesimal stays at z ≤ h_ch until

t ≤ (1/Ω_K) arcsin( h_ch / (i_pl r) )

after it passes its ascending node. The fraction of time that a planetesimal stays at |z| ≤ h_ch in an orbit (f_i_pl) is then given as

f_i_pl = (4/Ω_K) arcsin( h_ch / (i_pl r) ) / T_K = (2/π) arcsin( h_ch / (i_pl r) ).

This factor is defined when h_ch < i_pl r.

[Adachi et al.(1976)]Adachi+1976 Adachi, I., Hayashi, C., & Nakazawa, K. 1976, Prog. Theor. Phys., 56, 6
[Akaki et al.(2007)]Akaki+2007 Akaki, T., Nakamura, T., Noguchi, T., & Tsuchiyama, A. 2007, , 656, L29
[Alexander et al.(2008)]Alexander+2008 Alexander, C. M. O'D., Grossman, J. N., Ebel, D. S., & Ciesla, F. J. 2008, Science, 320, 1617
[Connelly et al.(2012)]Connelly+2012 Connelly, J. N., Bizzarro, M., Krot, A. N., et al. 2012, Science, 338, 651
[Cuzzi et al.(2001)]Cuzzi+2001 Cuzzi, J. N., Hogan, R. C., Paque, J. M., & Dobrovolskis, A. R. 2001, , 546, 496
[Cuzzi et al.(2010)]Cuzzi+2010 Cuzzi, J. N., Hogan, R. C., & Bottke, W. F. 2010, , 208, 518
[Dauphas & Pourmand(2011)]Dauphas Pourmand2011 Dauphas, N., & Pourmand, A. 2011, , 473, 489
[DeMeo et al.(2015)]Demeo+2015 DeMeo, F. E., Alexander, C. M. O., Walsh, K. J., Chapman, C. R., & Binzel, R. P. 2015, Asteroids IV, 13
[Desch & Cuzzi(2000)]Desch Cuzzi2000 Desch, S. J., & Cuzzi, J. N. 2000, , 143, 87
[Desch et al.(2012)]Desch+2012 Desch, S. J., Morris, M. A., Connolly, H. C., & Boss, A. P. 2012, Meteoritics and Planetary Science, 47, 1139
[Dubrulle et al.(1995)]Dubrulle+1995 Dubrulle, B., Morfill, G., & Sterzik, M. 1995, , 114, 237
[Fu, R. et al.(2014)]Fu_R+2014 Fu, R. S., Weiss, B. P., Lima, E. A., et al. 2014, Science, 346, 1089
[Fu, W. et al.(2014)]Fu_W+2014 Fu, W., Li, H., Lubow, S., Li, S., & Liang, E. 2014, , 795, L39
[Guillot et al.(2014)]Guillot+2014 Guillot, T., Ida, S., & Ormel, C. W. 2014, , 572, A72
[Hasegawa et al.(2016a)]Hasegawa+2016a Hasegawa, Y., Wakita, S., Matsumoto, Y., & Oshino, S. 2016, , 816, 8
[Hasegawa et al.(2016b)]Hasegawa+2016b Hasegawa, Y., Turner, N. J., Masiero, J., et al. 2016, , 820, L12
[Hayashi(1981)]Hayashi1981 Hayashi, C. 1981, Progress of Theoretical Physics Supplement, 70, 35
[Hewins et al.(2005)]Hewins+2005 Hewins, R. H., Connolly, H. C., Lofgren, G. E., Jr., & Libourel, G. 2005, Chondrites and the Protoplanetary Disk, 341, 286
[Iida et al.(2001)]Iida+2001 Iida, A., Nakamoto, T., Susa, H., & Nakagawa, Y. 2001, , 153, 430
[Ida et al.(2008)]Ida+2008 Ida, S., Guillot, T., & Morbidelli, A. 2008, , 686, 1292-1301
[Ida & Makino(1993)]Ida Makino1993 Ida, S., & Makino, J. 1993, , 106, 210
[Johansen et al.(2015)]Johansen+2015 Johansen, A., Mac Low, M.-M., Lacerda, P., & Bizzarro, M. 2015, Science Advances, 1, 1500109
[Johnson et al.(2015)]Johnson+2015 Johnson, B. C., Minton, D. A., Melosh, H. J., & Zuber, M. T. 2015, , 517, 339
[Kobayashi et al.(2016)]Kobayashi+2016 Kobayashi, H., Tanaka, H., & Okuzumi, S. 2016, , 817, 105
[Kokubo & Ida(1996)]Kokubo Ida1996 Kokubo, E., & Ida, S. 1996, , 123, 180
[Kokubo & Ida(1998)]Kokubo Ida1998 Kokubo, E., & Ida, S. 1998, , 131, 171
[Kokubo & Ida(2000)]Kokubo Ida2000 Kokubo, E., & Ida, S. 2000, , 143, 15
[Kokubo & Ida(2002)]Kokubo Ida2002 Kokubo, E., & Ida, S. 2002, , 581, 666
[Lambrechts & Johansen(2012)]LJ2012 Lambrechts, M., & Johansen, A. 2012, , 544, A32
[Levison et al.(2015)]Levison+2015 Levison, H. F., Kretke, K. A., & Duncan, M. J. 2015, , 524, 322
[Mann et al.(2016)]Mann+2016 Mann, C. R., Boley, A. C., & Morris, M. A. 2016, , 818, 103
[Morbidelli et al.(2009)]Morbidelli+2009 Morbidelli, A., Bottke, W. F., Nesvorný, D., & Levison, H. F. 2009, , 204, 558
[Morishima(2017)]Morishima2017 Morishima, R. 2017, , 281, 459
[Morishima et al.(2008)]Morishima+2008 Morishima, R., Schmidt, M. W., Stadel, J., & Moore, B. 2008, , 685, 1247-1261
[Muranushi(2010)]Muranushi2010 Muranushi, T. 2010, , 401, 2641
[Nakagawa et al.(1986)]Nakagawa+1986 Nakagawa, Y., Sekiya, M., & Hayashi, C. 1986, Icarus, 67, 375
[Nakazawa & Ida(1988)]Nakazawa Ida1988 Nakazawa, K., & Ida, S. 1988, Progress of Theoretical Physics Supplement, 96, 167
[Ormel & Klahr(2010)]Ormel Klahr2010 Ormel, C. W., & Klahr, H. H. 2010, , 520, A43
[Petit et al.(2002)]Petit+2002 Petit, J.-M., Chambers, J., Franklin, F., & Nagasawa, M. 2002, Asteroids III, 711
[Rubin(2000)]Rubin2000 Rubin, A. E. 2000, Earth Science Reviews, 50, 3
[Scott(2007)]Scott2007 Scott, E. R. D. 2007, Annual Review of Earth and Planetary Sciences, 35, 577
[Scott & Krot(2005)]Scott Krot2005 Scott, E. R. D., & Krot, A. N. 2005, in Meteorites, Comets and Planets: Treatise on Geochemistry, Vol. 1, ed. A. M. Davis (Amsterdam: Elsevier), 143
[Shakura & Sunyaev(1973)]Shakura Sunyaev1973 Shakura, N. I., & Sunyaev, R. A. 1973, , 24, 337
[Shu et al.(1996)]Shu+1996 Shu, F. H., Shang, H., & Lee, T. 1996, Science, 271, 1545
[Shu et al.(2001)]Shu+2001 Shu, F. H., Shang, H., Gounelle, M., Glassgold, A. E., & Lee, T. 2001, , 548, 1029
[Taki et al.(2016)]Taki+2016 Taki, T., Fujimoto, M., & Ida, S. 2016, , 591, A86
[Wakita et al.(2016a)]Wakita+2016L Wakita, S., Matsumoto, Y., Oshino, S., & Hasegawa, Y. 2016, Lunar and Planetary Science Conference, 47, 1078
[Wakita et al.(2016b)]Wakita+2016b Wakita, S., Matsumoto, Y., Oshino, S., & Hasegawa, Y. 2017, , 834, 125
[Wardle(2007)]Wardle2007 Wardle, M. 2007, , 311, 35
[Weidenschilling(1977)]Weidenschilling1977 Weidenschilling, S. J. 1977, , 180, 57
[Wetherill & Stewart(1989)]Wetherill Stewart1989 Wetherill, G. W., & Stewart, G. R. 1989, , 77, 330
[Youdin & Johansen(2007)]Youdin Johansen2007 Youdin, A., & Johansen, A. 2007, , 662, 613
http://arxiv.org/abs/1702.07989v1
{ "authors": [ "Yuji Matsumoto", "Shoichi Oshino", "Yasuhiro Hasegawa", "Shigeru Wakita" ], "categories": [ "astro-ph.EP" ], "primary_category": "astro-ph.EP", "published": "20170226045735", "title": "Chondrule Accretion with a Growing Protoplanet" }
Bulat N. Galimzyanov (corresponding author; bulatgnmail@gmail.com) and Anatolii V. Mokshin (anatolii.mokshin@mail.ru)

Institute of Physics, Kazan Federal University, 420008 Kazan, Russia
Landau Institute for Theoretical Physics, Russian Academy of Sciences, 142432 Chernogolovka, Russia

Analysis of three-particle correlations is performed on the basis of atomic dynamics simulation data for liquid and amorphous aluminium. A three-particle correlation function is introduced to characterize the relative positions of sets of three particles, the so-called triplets. Triplets of various configurations are identified by calculating the pair and three-particle correlation functions. It is found that in liquid aluminium at the temperatures 1000 K, 1500 K, and 2000 K the three-particle correlations are pronounced within spatial scales comparable with the size of the second coordination sphere. In amorphous aluminium at the temperatures 50 K, 100 K, and 150 K, these correlations in the mutual arrangement of three particles persist up to spatial scales comparable with the size of the third coordination sphere. The temporal evolution of three-particle correlations is analyzed by means of a time-dependent three-particle correlation function, for which an integro-differential equation of the generalized Langevin type is derived with the help of the projection operator technique. A solution of this equation obtained by means of mode-coupling theory is compared with our simulation results. It is found that this solution correctly reproduces the behavior of the time-dependent three-particle correlation functions for liquid and amorphous aluminium.

Keywords: Atomic dynamics simulation; Liquid aluminium; Amorphous system; Structural analysis; Three-particle correlations

§ INTRODUCTION

At present, considerable attention is paid to studying the structure of condensed systems that are in equilibrium or in metastable states <cit.>. Traditional experimental methods of structural analysis often do not allow one to correctly identify the presence of certain structures in bulk systems, due to their small sizes, their low concentrations in the system, or their relatively short lifetimes. Usually, information about the structure of condensed systems is extracted using microscopy or by methods of X-ray and neutron diffraction. Here, the static structure factor determined from the experimental data is the key quantity, and it is related to the pair distribution function, g(r) <cit.>. At the same time, the pair distribution function g(r) can be determined on the basis of atomic/molecular dynamics simulations and then compared with experimental data.

In addition, three-particle correlations have an essential impact on various processes in condensed systems. Thus, accounting for three-particle correlations is required to explain dynamic heterogeneity in liquids <cit.>, to describe transport properties in chemical reactions <cit.>, to study the structural heterogeneity of materials under mechanical deformation <cit.>, to detect the nuclei of an ordered phase (i.e. crystalline or quasicrystalline) <cit.>, and to describe the amorphization of liquids under rapid cooling <cit.>. Direct evaluation of three-particle correlations by means of experimental measurements is an extremely difficult problem <cit.>. Here, special methods must be adopted to extract such information <cit.>.
On the other hand, detailed information about three-particle correlations can be obtained from atomic/molecular dynamics simulation data. Note that early studies of three-particle correlations focused mainly on simple liquids such as the Lennard-Jones fluid, the hard-sphere system, and colloidal systems <cit.>. In some recent studies, three-particle correlations were estimated on the basis of molecular dynamics simulation data in systems such as carbon nanotubes, electrolytes, metallic melts, and alloys <cit.>. In these studies, the information about three-particle correlations is usually extracted from the time evolution of two or more parameters characterizing the positions and trajectories of the particles relative to each other.

In the present work, an original method of three-particle structural analysis and evaluation of time-dependent three-particle correlations is proposed, in which the arbitrary trajectories of motion of various sets of three particles (which will be denoted as triplets) are considered. The method allows one to identify the presence of ordered crystalline and “stable” disordered structures, which are difficult to detect by conventional methods of structural analysis (for example, the Voronoi tessellation method <cit.>, the Delaunay triangulation method <cit.>, or the bond-orientational order parameters <cit.>). The applicability of this method is demonstrated for the case of liquid and amorphous aluminium.

§ SIMULATION DETAILS

We performed atomic dynamics simulations of liquid and amorphous aluminium. The system contains N=864 atoms located in a cubic simulation cell with periodic boundary conditions in all directions. The interatomic forces are calculated through an EAM potential <cit.>. The velocities and coordinates of the atoms are determined by integrating Newton's equations of motion using the velocity Verlet algorithm with the time step Δt=1 fs.

Initially, a crystalline sample with an fcc lattice and the numerical density ρ=1.23 σ^-3 (or mass density 2300 kg/m^3) was prepared, where σ=2.86 Å is the effective diameter of an aluminium atom. The system was then melted to the temperatures T=1000 K, 1500 K, and 2000 K. The amorphous samples were generated through fast cooling, at the rate 10^12 K/s, of a melt at the temperature 2000 K to the temperatures T=50 K, 100 K, and 150 K (the melting temperature is T_m≃934 K). The simulations were performed in the NpT ensemble at the constant pressure p=1 atm.

§ METHODS

§.§ Three-particle correlation function

Let us consider a system consisting of N classical particles of equal mass m located in a cubic simulation cell of volume V. From the geometric point of view, the locations of any three particles generate a triangle (i.e. a triplet), which is characterized by its area, S. The area of the ith triplet, S_i, at time t (here i∈{1,2,...,N_T}, where N_T is the number of all possible triplets in the system) is defined by

S_i(t)={ l_i(t)·[l_i(t)-r_i^(12)(t)]·[l_i(t)-r_i^(23)(t)]·[l_i(t)-r_i^(31)(t)] }^1/2.

Here, l_i(t)=[r_i^(12)(t)+r_i^(23)(t)+r_i^(31)(t)]/2 is the semiperimeter of the ith triplet, and r_i^(12), r_i^(23), and r_i^(31) are the distances between the vertices of the ith triplet with the conditional labels 1, 2, and 3. It follows from Eq.(<ref>) that the area of the ith triplet, S_i, takes positive values, including values close to zero. Different triplets can correspond to the same value of S_i, and triplets can be either independent of each other
(i.e. have no common vertices) or interconnected (i.e. they share one or two vertices). To estimate the quantity S_i, one vertex of the triplet is considered as central (i.e. fixed), relative to which the positions of the other vertices are defined [see Fig.<ref>]. To determine the probability of occurrence of triplets with the area S, the three-particle correlation function is introduced:

g(S)=1/N_T∑_i=1^N_T δ(S-S_i).

Here, the number of all triplets in a system with N particles is

N_T=N(N-1)(N-2)/6.

It follows from Eq.(<ref>) that for a system with N=500 particles the number of triplets N_T exceeds 20 000 000. This shows that the treatment of the simulation results requires significant computing resources. Therefore, in the three-particle structural analysis we restrict our attention to a spherical region of fixed radius R_c centered at the main (i.e. fixed) particle of a triplet. The optimal value of the radius R_c corresponds to the distance at which the pair correlation function g(r) of the considered system ceases to oscillate.

§.§ Dynamics of three-particle correlations

In a system with N particles, the coordinates q⃗ and momenta p⃗ form a 6N-dimensional phase space. The time evolution of the system is defined by the Hamiltonian H(q⃗,p⃗) through the canonical Liouville equation of motion <cit.>:

dA(t)/dt={H(q⃗,p⃗), A(t)}=∑_i=1^N(∂ H(q⃗,p⃗)/∂ p_i ∂ A(t)/∂ q_i - ∂ H(q⃗,p⃗)/∂ q_i ∂ A(t)/∂ p_i),

or

dA(t)/dt=iLA(t),

where L is the Liouville operator, {...} denotes the Poisson bracket, and A is a dynamical variable obtained from the simulation data. By means of the Zwanzig–Mori projection operator technique,

Π_0=A_0(0)⟩⟨ A_0^*(0)/⟨|A_0(0)|^2⟩, P_0=1-Π_0,

one obtains from Eq.(<ref>) the following non-Markovian equation <cit.>:

dF(t)/dt=-Ω_1^2∫_0^t M_1(τ)F(t-τ)dτ,

where

F(t)=⟨ A_0^*(0)A_0(t)⟩/⟨|A_0(0)|^2⟩

is the time correlation function,

M_1(τ)=⟨ A_1^*(0)e^iL_22^0τ A_1(0)⟩/⟨|A_1(0)|^2⟩

is the first-order memory function,

Ω_1^2=⟨|A_1(0)|^2⟩/⟨|A_0(0)|^2⟩

is the first-order frequency parameter, and

A_1(t)=iLA_0(t)

is the next dynamical variable. Taking into account that the Liouville equation for the dynamical variable A_1(t) is

dA_1(t)/dt=iL_22^0 A_1(t),

a kinetic integro-differential equation can be obtained for the first-order memory function as follows <cit.>:

dM_1(t)/dt=-Ω_2^2∫_0^t M_2(τ)M_1(t-τ)dτ.

Similarly, a chain of equations can be constructed:

dM_n-1(t)/dt=-Ω_n^2∫_0^t M_n(τ)M_n-1(t-τ)dτ, n=1, 2, 3,...

Here, the nth-order frequency parameter is <cit.>

Ω_n^2=⟨|A_n(0)|^2⟩/⟨|A_n-1(0)|^2⟩.

Using the Laplace transform, M_n(s)=∫_0^∞ e^-st M_n(t)dt (where s=iω), the chain of equations (<ref>) can be rewritten as a continued fraction <cit.>:

F(s)=1/( s+Ω_1^2/( s+Ω_2^2/( s+… ))).

From Eq.(<ref>) one obtains <cit.>

F̈(k,t)+Ω_1^2(k)F(k,t)+Ω_2^2(k)∫_0^t M_2(k,t-τ)Ḟ(k,τ)dτ=0,

where k=|k⃗| is the wave number <cit.>. By using the key closure of the mode-coupling theory <cit.>,

Ω_2^2(k)M_2(k,t)=φ(k)δ(t)+Ω_1^2(k)[υ_1F(k,t)+υ_2F(k,t)^p],

Eq.(<ref>) can be rewritten in the form

F̈(k,t)+Ω_1^2(k)F(k,t)+φ(k)Ḟ(k,t)δ(t)+Ω_1^2(k)∫_0^t[υ_1F(k,t-τ)+υ_2F(k,t-τ)^p]Ḟ(k,τ)dτ=0.

Here, Ω_1(k) and φ(k) are frequency parameters, δ(t) is the Dirac delta function, υ_1≥0 and υ_2≥0 (υ_1+υ_2≠0) are the weights of the corresponding contributions, and the parameter p>1 can be fractional.
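Before turning to solutions of Eq.(<ref>), the structural quantities of the preceding subsection can be made concrete. The following sketch (Python/NumPy; the function and array names are our own illustrative choices, not part of the original method) computes triplet areas via Eq.(<ref>) and estimates g(S) by a histogram, restricting triplets to a sphere of radius R_c around the central particle as described above. It ignores periodic boundary conditions and may count a triplet from more than one central particle; these simplifications are acceptable for a qualitative g(S) but would need care in production code.

import numpy as np
from itertools import combinations

def triplet_area(r1, r2, r3):
    # Area of a triplet via Heron's formula, Eq. (1)
    a = np.linalg.norm(r1 - r2)
    b = np.linalg.norm(r2 - r3)
    c = np.linalg.norm(r3 - r1)
    l = 0.5 * (a + b + c)                        # semiperimeter
    return np.sqrt(max(l * (l - a) * (l - b) * (l - c), 0.0))

def g_of_S(pos, R_c, bins=200):
    # Histogram estimate of g(S), Eq. (2), with the R_c restriction
    areas = []
    for i in range(len(pos)):                    # central (fixed) particle
        d = np.linalg.norm(pos - pos[i], axis=1)
        nbr = np.where((d > 0.0) & (d < R_c))[0] # other vertices in the sphere
        for j, k in combinations(nbr, 2):
            areas.append(triplet_area(pos[i], pos[j], pos[k]))
    hist, edges = np.histogram(areas, bins=bins, density=True)
    return 0.5 * (edges[1:] + edges[:-1]), hist  # bin centers, g(S)

# Random coordinates standing in for a simulation snapshot (units of sigma)
pos = np.random.default_rng(0).uniform(0.0, 10.0, size=(200, 3))
S, gS = g_of_S(pos, R_c=3.0)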
The exact solution of Eq.(<ref>) is defined by the frequency parameters Ω_1(k) and φ(k), as well as by the characteristics υ_1, υ_2, and p <cit.>. A numerical solution of the integro-differential Eq.(<ref>) can be found from <cit.>:

z_n+Ω_1^2 x_n+Ω_1^2 τ∑_i=0^n[υ_1 x_i+υ_2 x_i^p]z_n-i=0,
y_n+1=y_n+τ z_n, x_n+1=x_n+τ y_n+1.

Let us take the quantity

s⃗_j(t)=μ_j S_j(t)N⃗_j/σ, j=1,2,...,N_T,

as a dynamical variable. If the value A_0=s⃗_j(t) is taken as the initial dynamical variable [see Eq.(<ref>)], then the time correlation function is defined as

F_T(k,t)=1/N_T∑_j=1^N_T exp[-ik⃗(s⃗_j(t)-s⃗_j(0))],

and x_n=F_T(k,t), y_n=Ḟ_T(k,t), z_n=F̈_T(k,t), where τ=0.01 fs, while the initial conditions in Eq.(<ref>) are F_T(k,t=0)=1 and Ḟ_T(k,t=0)=0 <cit.>. Here, S_j(t) is the area of the jth triplet at time t, N⃗_j=n_j1 e⃗_x+n_j2 e⃗_y+n_j3 e⃗_z is the normal vector to the plane of this triangle [see illustration in Fig.<ref>], and μ_j=±[n_j1^2+n_j2^2+n_j3^2]^-1/2 is the normalization constant, whose sign is determined by the parameter n_j4 in the equation of the plane n_j1x+n_j2y+n_j3z+n_j4=0 containing the vertices of the jth triplet. For n_j4>0 we have μ_j<0; otherwise, μ_j>0.

§ RESULTS AND DISCUSSIONS

§.§ Behavior of a single triplet

The time-dependent area of a triplet was evaluated at different temperatures. Fig.<ref> (top panel) shows the time-dependent area of a triplet, S(t), in liquid aluminium at the temperature 1000 K. It can be seen that the quantity S(t) takes values in the range 0.25 nm^2<S<1 nm^2. The evolution of S(t) in liquid aluminium is characterized by high-frequency fluctuations of relatively small amplitude, which originate from the collective motion of the atoms. Low-frequency fluctuations with a relatively large amplitude define the main trend of S(t); they are associated with the transition to diffusive motion, when one or several atoms leave their nearest environment. Fig.<ref> (bottom panel) shows the time-dependent area of a triplet in amorphous aluminium at the temperature T=100 K. In contrast to the liquid, the diffusive regime of S(t) is not observed, which is due to the extremely low mobility of the atoms. Fluctuations with relatively small amplitude are seen, and the area S(t) takes values within the narrow range 0.29 nm^2<S<0.36 nm^2. These fluctuations are caused by vibrations of atoms within a cage of neighbors.

§.§ Features of three-particle correlations

The distribution function g(S) was computed by Eq.(<ref>) for the liquid and amorphous systems, considering only those triplets located within a sphere of radius R_c=3σ. The pair distribution function was also determined as follows <cit.>:

g(r)=V/(4π r^2 N)∑_i=1^N⟨Δ n_i(r)/Δ r⟩,

where Δ n_i(r) is the probability to find a pair of atoms separated by the distance r. For convenience, we measure distances between atoms in units of σ and triplet areas in units of σ^2 (here, σ=2.86 Å).

Fig.<ref> shows the curves g(S) and g(r) for liquid aluminium at the temperatures 1000 K, 1500 K, and 2000 K. As seen, the pair distribution function g(r) of liquid aluminium oscillates and contains maxima at the distances r=r_m1^(L), r_m2^(L), r_m3^(L), and r_m4^(L), which characterize the correlation lengths. The estimated inter-particle distances r=r_m1^(L), r_m2^(L), r_m3^(L), and r_m4^(L) are given in Table <ref>. The weakly pronounced maximum at r≃ r_m4^(L) is an indication that pair correlations are practically absent at large distances.
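As an aside to the structural results, the recursion (<ref>)–(<ref>) introduced above is straightforward to implement. A minimal sketch follows (Python; the parameter values in the example call are placeholders of the correct order of magnitude, not the fitted values of Table <ref>). Since the i=0 term of the convolution contains z_n itself, the scheme is solved for z_n at each step; for fractional p the power is applied as sign(x)|x|^p to keep the iteration real, which is an implementation choice on our part and is not specified in the text.

import numpy as np

def solve_F(Omega1_sq, v1, v2, p, tau=1.0e-17, n_steps=4000):
    # Integrate Eqs. (22)-(23):
    #   z_n + W x_n + W tau sum_{i=0}^{n} K(x_i) z_{n-i} = 0,
    #   y_{n+1} = y_n + tau z_n,  x_{n+1} = x_n + tau y_{n+1},
    # with K(x) = v1 x + v2 x^p, x_0 = F_T(k,0) = 1, y_0 = 0, W = Omega1_sq.
    x = np.zeros(n_steps); y = np.zeros(n_steps); z = np.zeros(n_steps)
    x[0] = 1.0

    def K(xi):  # memory kernel, kept real for fractional p
        return v1 * xi + v2 * np.sign(xi) * np.abs(xi) ** p

    for n in range(n_steps - 1):
        # convolution over i = 1..n; the i = 0 term holds the unknown z_n
        conv = np.dot(K(x[1:n + 1]), z[n - 1::-1]) if n > 0 else 0.0
        z[n] = -Omega1_sq * (x[n] + tau * conv) / (1.0 + Omega1_sq * tau * K(x[0]))
        y[n + 1] = y[n] + tau * z[n]
        x[n + 1] = x[n] + tau * y[n + 1]
    return x

# tau = 0.01 fs as in the text; production runs need far more steps (and a
# fast convolution), since the O(n^2) memory sum dominates the cost.
F_T = solve_F(Omega1_sq=1.0e26, v1=0.5, v2=0.5, p=1.5)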
Since both functions g(S) and g(r) correspond to the same system, it is quite reasonable to assume that the maxima of g(S) located at S=S_m1^(L) and S_m2^(L) are associated with triplets in which the distances between pairs of atoms correspond to the correlation lengths r_m1^(L), r_m2^(L), r_m3^(L), and r_m4^(L). Different possible configurations of three atoms can then be selected, in which the atoms forming these triplets are separated by the distances r=r_m1^(L), r_m2^(L), r_m3^(L), and r_m4^(L). As a result of such a selection, five widespread configurations appear in liquid aluminium, which are depicted in Fig.<ref> with the labels CL1, CL2,..., CL5. From the analysis of the simulation data it follows that the triplets with configurations CL1 and CL2, which correspond to the maximum at S_m1^(L) in the function g(S), also contribute to the main maxima of the function g(r) at r_m1^(L) and r_m2^(L). In this case, the mutual arrangement of three atoms covers a spatial scale comparable with the sizes of the first and second coordination spheres. The presence of triplets with configurations CL1 and CL2 is also observed in other model liquids <cit.>. Thus, Zahn et al. observed triplets with the inter-particle distances r≃1.0 σ and r≃1.9 σ (i.e. with configurations similar to CL1 and CL2, where r=r_m1^(L)≃0.96 σ and r_m2^(L)≃1.82 σ) in a two-dimensional colloidal liquid, where the three-particle correlation functions were calculated from particle configurations (see Fig.2 and Fig.4 in Ref.<cit.>). The triplets CL3, CL4, and CL5, which correspond to the maximum of g(S) at S_m2^(L), are usually involved in the formation of the maxima of g(r) at r_m3^(L) and r_m4^(L). Remarkably, the structure of the system can be reconstructed by connecting the different configurations CL1, CL2, CL3, CL4, and CL5.

Fig.<ref> shows the functions g(S) and g(r) for the amorphous system at the temperatures 50 K, 100 K, and 150 K. Five pronounced maxima are detected in g(S), and eight maxima are most pronounced in the function g(r). As seen from Fig.<ref> (top panel), the function g(S) contains maxima at S=S_m1^(A), S_m2^(A), and S_m5^(A), as well as at S_m3^(A) and S_m4^(A), which are not observed for the liquid system. The estimated inter-particle distances corresponding to the positions of the maxima of g(S) and g(r) are given in Table <ref>. In the case of amorphous aluminium, various triplets are detected, among which twelve configurations can be singled out as widespread; they are depicted in Fig.<ref> with the labels CA1, CA2,..., CA12. Most of these configurations, except CA1, CA3, and CA4, cover a spatial scale that exceeds the size of the second coordination sphere. The presence of maxima at small areas S (for example, the maxima at S=S_m1^(A) and S_m2^(A)), which correspond to CA1 and CA2, can be an indication of structural ordering, as well as evidence of the presence of quasi-ordered structures. At the same time, the configurations CA1 and CA2, together with CA3 and CA4, generate the main maximum of the function g(r) at the distance r=r_m1^(A). The formation of the triplets CA5, CA6,..., CA9, which correspond to the maxima of g(S) at S=S_m3^(A) and S_m4^(A), leads to the splitting of the second maximum of g(r). The configurations CA2, CA4, and CA11 participate in the formation of the maximum of g(r) at r=r_m2^(A) and contribute to the maxima of the three-particle correlation function g(S) at S=S_m2^(A), S_m3^(A), and S_m5^(A).
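The configuration assignment just described can be automated. One plausible procedure (our own illustration; the text does not prescribe an algorithm) is to snap each side of a triplet to the nearest maximum of g(r) and to use the sorted triple of peak indices as a configuration label:

import numpy as np

def classify_triplet(sides, r_peaks):
    # Label a triplet by the g(r) maxima closest to its three side lengths.
    # sides   : three inter-particle distances of the triplet (units of sigma)
    # r_peaks : positions of the g(r) maxima, e.g. (r_m1, r_m2, ...)
    peaks = np.asarray(r_peaks)
    idx = [int(np.argmin(np.abs(peaks - s))) for s in sides]
    return tuple(sorted(idx))

# For the liquid, r_m1 ~ 0.96 sigma and r_m2 ~ 1.82 sigma as quoted above;
# the remaining peak positions here are placeholders for the Table 1 values.
r_peaks = (0.96, 1.82, 2.7, 3.6)
print(classify_triplet((0.95, 0.97, 1.80), r_peaks))
# -> (0, 0, 1): two sides near r_m1 and one side near r_m2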
The presence of the maximum of g(S) at S=S_m5^(A) is due to the triplets CA10, CA11, and CA12, whose number is negligible. The configurations CL1, CL2,..., CL5 and CA1, CA2,..., CA12 appear because the atoms that form these triplets are surrounded by neighbors, which slow down their motion, the so-called cage effect. According to the effective neighborhood model proposed by Vorselaars et al., the cage effect occurs in liquids and glasses, where particles are trapped in a local energy minimum <cit.>. Namely, a moving particle jumps from one cage to another (i.e. from one local energy minimum to another) <cit.>. In the case of the aluminium atoms with the configurations depicted in Fig.<ref> and Fig.<ref>, jumps between different cages lead to transitions between various triplets. Usually, these transitions occur between triplets with similar configurations, for example, between CL1 and CL2 or between CL3 and CL4 in the case of liquid aluminium, as well as between CA1 and CA2, between CA3 and CA4, between CA6 and CA10, and between CA7 and CA8 in the case of amorphous aluminium. Unlike in the liquid, in the amorphous system transitions between triplets of different configurations occur very slowly, since jumps of atoms between different cages take a long time due to the high viscosity.

§.§ Time-dependent three-particle correlations

The analysis of time-dependent three-particle correlations is also performed by means of the time correlation function F_T(k,t) defined by Eq.(<ref>). As an example, the function F_T(k,t) obtained for liquid and amorphous aluminium at different temperatures is presented in Fig.<ref>. The calculations were performed at the fixed value of the wave number k=4.6 Å^-1. We note that the behavior of the function F_T(k,t) is similar to the behavior of the incoherent scattering function F(k,t) <cit.>. The function F_T(k,t) demonstrates a fast decay to zero in the case of the liquid system, which is caused by the attenuation of three-particle correlations. The rate of this attenuation is expected to correlate with the structural relaxation time. In the case of amorphous aluminium at the temperatures 50 K, 100 K, and 150 K, the function F_T(k,t) is characterized by a more complex shape. The function F_T(k,t) was computed by Eq.(<ref>) on the basis of the simulation results at k=4.6 Å^-1 and the temperatures T=50 K, 100 K, 150 K, 1000 K, 1500 K, and 2000 K. Then, Eq.(<ref>) was solved numerically according to the scheme (<ref>)–(<ref>), with the parameters υ_1, υ_2, Ω_1^2, and p taken as adjustable. The values of the parameters are given in Table <ref>. As can be seen from Fig.<ref>, the theory based on Eq.(<ref>) reproduces the behavior of the function F_T(k,t) at all considered temperatures. Furthermore, for the amorphous system, the theory reproduces the emergence of a plateau in the function F_T(k,t). The frequency parameter Ω_1^2 takes relatively small values, 3.3×10^25 s^-2 ≤ Ω_1^2 ≤ 9.8×10^25 s^-2, for the amorphous system and large values, 64×10^25 s^-2 ≤ Ω_1^2 ≤ 130×10^25 s^-2, for the liquid system. The increase of the parameter Ω_1^2 with temperature can be attributed to an increase of the transition rate between triplets of various configurations.

§ CONCLUSION

In the present work, an analysis of three-particle correlations in many-particle systems is proposed.
Using atomic dynamics simulations of liquid and amorphous aluminium, the applicability of the proposed method of three-particle structural analysis to identifying the structures generated by various triplets is demonstrated. By calculating the pair and three-particle distribution functions, triplets of various configurations are found. It is shown that these triplets, which are formed due to particle correlations, can cover spatial scales comparable with the sizes of the second and third coordination spheres. On the other hand, it is found that the time evolution of three-particle correlations in liquid and amorphous aluminium can be understood in terms of transitions between triplets of various configurations. Finally, it is shown that the time-dependent three-particle correlations in these systems are reproducible by the integro-differential equation (<ref>). Here, agreement between the theoretical results and our atomic dynamics simulation data is observed.

§ ACKNOWLEDGMENTS

The work was supported by the grant of the President of the Russian Federation MD-5792.2016.2. The atomic dynamics calculations were performed on the computing cluster of Kazan Federal University and at the Joint Supercomputer Center of RAS.

References

Kashchiev_2000 D. Kashchiev, Nucleation: Basic Theory with Applications, Butterworth-Heinemann, Oxford, 2000.
March_Tosi_1991 N.H. March, M.P. Tosi, Atomic dynamics in liquids, Dover, New York, 1991.
Steinhardt_Ronchetti_1983 P. Steinhardt, D. Nelson, and M. Ronchetti, Bond-orientational order in liquids and glasses, Phys. Rev. B 28 (1983) 784-805. https://doi.org/10.1103/PhysRevB.28.784
Zahn_Maret_2003 K. Zahn, G. Maret, C. Ruß, H.H. von Grunberg, Three-Particle Correlations in Simple Liquids, Phys. Rev. Lett. 91 (2003) 115502 1-4. https://doi.org/10.1103/PhysRevLett.91.115502
Hurley_1996 M.M. Hurley, P. Harrowell, Non-Gaussian behavior and the dynamical complexity of particle motion in a dense two-dimensional liquid, J. Chem. Phys. 105 (1996) 10521-10526. http://dx.doi.org/10.1063/1.472941
Vorselaars_2007 B. Vorselaars, A.V. Lyulin, K. Karatasos, M.A.J. Michels, Non-Gaussian nature of glassy dynamics by cage to cage motion, Phys. Rev. E 75 (2007) 011504 1-6. https://doi.org/10.1103/PhysRevE.75.011504
Lazaridis_2000 T. Lazaridis, Solvent Reorganization Energy and Entropy in Hydrophobic Hydration, J. Phys. Chem. B 104 (2000) 4964-4979. http://pubs.acs.org/doi/pdf/10.1021/jp994261a
Wang_Dhont_2002 H. Wang, M. P. Lettinga, and J. K. G. Dhont, Microstructure of a near-critical colloidal dispersion under stationary shear flow, J. Phys. Condens. Matter 14 (2002) 7599-7615. http://dx.doi.org/10.1088/0953-8984/14/33/304
Mokshin_Galimzyanov_2013 A.V. Mokshin, B.N. Galimzyanov, J.-L. Barrat, Extension of classical nucleation theory for uniformly sheared systems, Phys. Rev. E 87 (2013) 062307 1-5. https://doi.org/10.1103/PhysRevE.87.062307
Dzugutov_1993 M. Dzugutov, Formation of a Dodecagonal Quasicrystalline Phase in a Simple Monatomic Liquid, Phys. Rev. Lett. 70 (1993) 2924-2927. https://doi.org/10.1103/PhysRevLett.70.2924
Doye_Wales_2001 J.P.K. Doye, D.J. Wales, Polytetrahedral Clusters, Phys. Rev. Lett. 86 (2001) 5719-5722. https://doi.org/10.1103/PhysRevLett.86.5719
Tokuyama_2007 M. Tokuyama, Similarities in diversely different glass-forming systems, Physica A 378 (2007) 157-166. http://dx.doi.org/10.1016/j.physa.2006.12.047
Vaulina_Petrov_Fortov_2004 O.S. Vaulina, O.F. Petrov, V.E. Fortov, A.V. Chernyshev, A.V. Gavrikov, and O.A. Shakhova, Three-Particle Correlations in Nonideal Dusty Plasma, Phys. Rev. Lett. 93 (2004) 035004 1-4. https://doi.org/10.1103/PhysRevLett.93.035004
Ma_Zuo_2007 G.L. Ma, Y.G. Ma, S. Zhang, X.Z. Cai, J.H. Chen, Z.J. He, H.Z. Huang, J.L. Long, W.Q. Shen, X.H. Shi, C. Zhong, J.X. Zuo, Three-particle correlations from parton cascades in Au + Au collisions, Phys. Lett. B 647 (2007) 122-127. http://dx.doi.org/10.1016/j.physletb.2007.02.008
Egelstaff_Page_1969 P.A. Egelstaff, D.I. Page, and C.R.T. Heard, Experimental Study of the Triplet Correlation Function for simple liquids, Phys. Lett. A 30 (1969) 376-377. http://dx.doi.org/10.1088/0022-3719/4/12/002
Montfrooij_Graaf_1991 W. Montfrooij, L.A. de Graaf, P.J. van der Bosch, A.K. Soper, and W.S. Howells, Density and temperature dependence of the structure factor of dense fluid helium, J. Phys. Condens. Matter 3 (1991) 4089-4096. http://dx.doi.org/10.1088/0953-8984/3/22/018
Alder_1964 B.J. Alder, Triplet Correlations in Hard Spheres, Phys. Rev. Lett. 12 (1964) 317-319. https://doi.org/10.1103/PhysRevLett.12.317
Gupta_1982 S. Gupta, J.M. Haile, and W.A. Steele, Use of Computer Simulation to Determine the Triplet Distribution Function in Dense Fluids, Chem. Phys. 72 (1982) 425-440. http://dx.doi.org/10.1016/0301-0104(82)85138-0
McNeil_1983 W.J. McNeil, W.G. Madden, A.D.J. Haymet, and S.A. Rice, Triplet correlation functions in the Lennard-Jones fluid: Tests against molecular dynamics simulations, J. Chem. Phys. 78 (1983) 388-398. http://dx.doi.org/10.1063/1.444514
Attard_1992 P. Attard, G. Stell, Three-particle correlations in a hard-sphere fluid, Chem. Phys. Lett. 189 (1992) 128-132. http://dx.doi.org/10.1016/0009-2614(92)85110-V
Gaskell_1988 T. Gaskell, An improved description of three-particle correlations in liquids and a modified Born-Green equation, J. Phys. C: Solid State Phys. 21 (1988) 1-6. http://dx.doi.org/10.1088/0022-3719/21/1/003
Deilmann_2016 T. Deilmann, M. Drüppel, M. Rohlfing, Three-particle correlation from a Many-Body Perspective: Trions in a Carbon Nanotube, Phys. Rev. Lett. 116 (2016) 196804 1-6. https://doi.org/10.1103/PhysRevLett.116.196804
Medvedev_2000 N.N. Medvedev, The Voronoi-Delaunay method for non-crystalline structures, Siberian Branch of RAS, Novosibirsk, 2000.
Schachter_1980 D. Lee, B. Schachter, Two Algorithms for Constructing a Delaunay Triangulation, Int. Jour. Comp. and Inf. Sc. 9 (1980) 219-242. http://link.springer.com/article/10.1007/BF00977785
Ercolessi_1994 F. Ercolessi, J.B. Adams, Interatomic Potentials from First-Principles Calculations: The Force-Matching Method, Europhys. Lett. 26 (1994) 583-588. http://dx.doi.org/10.1209/0295-5075/26/8/005
Winey_2009 J.M. Winey, A. Kubota, Y.M. Gupta, A thermodynamic approach to determine accurate potentials for molecular dynamics simulations: thermoelastic response of aluminum, Modelling Simul. Mater. Sci. Eng. 17 (2009) 055004 1-14. http://dx.doi.org/10.1088/0965-0393/17/5/055004
Mokshin_Chvanova_Khrm_2012 A.V. Mokshin, A.V. Chvanova and R.M. Khusnutdinoff, Mode-Coupling Approximation in Fractional-Power Generalization: Particle Dynamics in Supercooled Liquids and Glasses, Theor. Math. Phys. 171 (2012) 541-552. http://link.springer.com/article/10.1007/s11232-012-0052-3
Zwanzig_2001 R. Zwanzig, Nonequilibrium statistical mechanics, Oxford Univ. Press, Oxford, 2001.
Yulmetyev_2005 R.M. Yulmetyev, A.V. Mokshin, P. Hänggi, Universal approach to overcoming nonstationarity, unsteadiness and non-Markovity of stochastic processes in complex systems, Physica A 345 (2005) 303-325. http://dx.doi.org/10.1016/j.physa.2004.07.001
Yulmetyev_2009 R. Yulmetyev, R. Khusnutdinoff, T. Tezel, Y. Iravul, B. Tuzel, P. Hänggi, The study of dynamic singularities of seismic signals by the generalized Langevin equation, Physica A 388 (2009) 3629-3635. http://dx.doi.org/10.1016/j.physa.2009.05.010
Mokshin_tmf_2015 A.V. Mokshin, Self-Consistent Approach to the Description of Relaxation Processes in Classical Multiparticle Systems, Theor. Math. Phys. 183 (2015) 449-477. http://link.springer.com/article/10.1007/s11232-015-0274-2
Khusnutdinov_Mokshin_2010 R.M. Khusnutdinoff, A.V. Mokshin, Local Structural Order and Single-Particle Dynamics in Metallic Glass, Bulletin of the RAS: Physics 74 (2010) 640-643. http://link.springer.com/article/10.3103/S1062873810050163
Poole_1998 P.H. Poole, C. Donati, S.C. Glotzer, Spatial correlations of particle displacements in a glass-forming liquid, Physica A 261 (1998) 51-59. http://dx.doi.org/10.1016/S0378-4371(98)00376-8
Khusnutdinoff_2012 R.M. Khusnutdinoff, A.V. Mokshin, Vibrational features of water at the low-density/high-density liquid structural transformations, Physica A 391 (2012) 2842-2847. http://dx.doi.org/10.1016/j.physa.2011.12.037
Hansen_McDonald_2006 J.P. Hansen, I. R. McDonald, Theory of Simple Liquids, Academic Press, London, 2006.
http://arxiv.org/abs/1702.08189v1
{ "authors": [ "Bulat N. Galimzyanov", "Anatolii V. Mokshin" ], "categories": [ "cond-mat.stat-mech", "cond-mat.dis-nn" ], "primary_category": "cond-mat.stat-mech", "published": "20170227084542", "title": "Three-Particle Correlations in Liquid and Amorphous Aluminium" }
Matricial Wasserstein-1 Distance

Yongxin Chen, Tryphon T. Georgiou, Lipeng Ning, and Allen Tannenbaum

Y. Chen is with the Department of Medical Physics, Memorial Sloan Kettering Cancer Center, NY; email: chen2468@umn.edu
T. T. Georgiou is with the Department of Mechanical and Aerospace Engineering, University of California, Irvine, CA; email: tryphon@uci.edu
L. Ning is with Brigham and Women's Hospital (Harvard Medical School), MA; email: lning@bwh.harvard.edu
A. Tannenbaum is with the Departments of Computer Science and Applied Mathematics & Statistics, Stony Brook University, NY; email: allen.tannenbaum@stonybrook.edu

We propose an extension of the Wasserstein 1-metric (W_1) for matrix probability densities, matrix-valued density measures, and an unbalanced interpretation of mass transport. We use duality theory and, in particular, a “dual of the dual” formulation of W_1. This matrix analogue of the Earth Mover's Distance has several attractive features, including ease of computation.

§ INTRODUCTION

Optimal mass transport (OMT) has proven to be a powerful methodology for numerous problems in physics, probability, information theory, fluid mechanics, econometrics, systems and control, computer vision, and signal/image processing <cit.>. Developments along purely controls-related issues ensued when it was recognized that mass transport may be naturally reformulated as a stochastic control problem; see <cit.> and the references therein.

Historically, the problem of OMT <cit.> began with the question of minimizing the effort of transporting one distribution to another, typically with a cost proportional to the Euclidean distance between the starting and ending points of the mass being transported. However, the control-theoretic reformulation <cit.>, which was at the root of the aforementioned developments, was based on the choice of a quadratic cost. The quadratic cost allowed the interpretation of the transport effort as an action integral and gave rise to a Riemannian structure on the space of distributions <cit.>. The originality of our present work is two-fold. First, we formulate the transport problem with an L_1 cost in a similar manner, as a control problem with an L_1-path cost functional; and second, we develop theory for shaping flows of matrix-valued distributions, which is a non-trivial generalization of classical OMT.

The relevance of OMT to flows of matrix-valued distributions was already recognized in <cit.>, where it was cast as a control problem as well, albeit in a quadratic-cost setting. At that point, interest in the geometry of matrix-valued distributions stemmed from applications to spectral analysis of vector-valued time series (see <cit.> and the references therein). Yet soon it became apparent that flows of matrix-valued distributions represent the evolution of quantum systems.
In fact, there has been a burst of activity in applying ideas of quantum mechanics to OMT of matrix-valued densities, as well as in utilizing an OMT framework to study the dynamics of quantum systems: three groups <cit.> independently and simultaneously developed quantum mechanical frameworks for defining a Wasserstein-2 distance on matrix-valued densities (normalized to have trace 1), via a variational formalism generalizing the work of <cit.>. We note that <cit.> develop matrix-valued generalizations of the Wasserstein 2-metric (W_2) and explore the Riemannian-like structure for studying entropic flows of quantum states.

Thus, in the present note, we develop a natural extension of the Wasserstein 1-metric to matrix-valued densities and matrix-valued measures. Our point of view is somewhat different from the earlier works on matricial Wasserstein-2 metrics. We mainly use duality theory <cit.>. Further, we do not employ the Benamou and Brenier <cit.> control formulation of OMT, but rather the Kantorovich–Rubinstein duality. This new scheme is computationally more attractive and, moreover, it is especially appealing when specialized to weighted graphs (discrete spaces) that are sparse (few edges), as is the case for many real-world networks <cit.>.

The present paper is structured as follows. Section <ref> is a quick review of several different formulations of the Wasserstein-1 distance in the scalar setting. Using the quantum gradient operator defined in Section <ref>, we generalize the Wasserstein-1 metric to the space of density matrices in Section <ref>. The case where the two marginal matrices have different traces is discussed in Section <ref>. We finally extend the framework to matrix-valued densities in Section <ref>, which may find applications in multivariate spectral analysis as well as in comparing stable multi-input multi-output (MIMO) systems. The paper concludes with an academic example in Section <ref>.

§ OPTIMAL MASS TRANSPORT

We begin with duality theory, explained for scalar densities, upon which our matricial generalization of the Wasserstein-1 metric is based.

Given two probability densities ρ_0 and ρ_1 on ℝ^m, the Wasserstein-1 distance between them is

W_1(ρ_0,ρ_1) := inf_π∈Π(ρ_0,ρ_1) ∫_ℝ^m×ℝ^m ‖x-y‖ π(dx,dy),

where Π(ρ_0,ρ_1) denotes the set of couplings between ρ_0 and ρ_1. The Wasserstein-1 distance has a dual formulation via the following result due to Kantorovich and Rubinstein <cit.>:

W_1(ρ_0,ρ_1) = sup_f { ∫_ℝ^m f(x)(ρ_0(x)-ρ_1(x))dx | ‖f‖_Lip ≤ 1 },

where ‖f‖_Lip denotes the Lipschitz constant. When f is differentiable, ‖f‖_Lip = sup_x ‖∇_x f‖. It follows that

W_1(ρ_0,ρ_1) = sup_f { ∫_ℝ^m f(x)(ρ_0(x)-ρ_1(x))dx | ‖∇_x f‖ ≤ 1 }.

Starting from (<ref>), by once again considering the dual, we readily obtain the very important reformulation

W_1(ρ_0,ρ_1) = inf_u(·) { ∫_ℝ^m ‖u(x)‖ dx | ρ_0-ρ_1 +∇_x· u=0 },

where the (Lagrange) optimization variable u now represents flux. Alternatively, this can be written as a control-optimization problem in the Benamou–Brenier style <cit.>:

W_1(ρ_0,ρ_1) = inf_u(·,·) { ∫_0^1∫_ℝ^m ‖u(t,x)‖ dx dt | ∂ρ(t,x)/∂ t+∇_x· u(t,x)=0, ρ(0,x)=ρ_0(x), ρ(1,x)=ρ_1(x) }.

This “dual of the dual” formulation turns the Kantorovich–Rubinstein duality into a control problem to determine a suitable velocity (control vector) u.
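In one spatial dimension the flux formulation can be solved in closed form: the constraint forces u′ = ρ_1 − ρ_0, so u(x) = ∫_{−∞}^{x}(ρ_1 − ρ_0)dy = F_1(x) − F_0(x), and W_1 = ∫|u|dx is the L^1 distance between the cumulative distribution functions. A minimal NumPy sketch (the grid and the test densities are illustrative choices):

import numpy as np

def w1_1d(rho0, rho1, x):
    # Flux form of the 1-D problem: u' = rho1 - rho0, W_1 = int |u| dx
    dx = x[1] - x[0]                    # uniform grid spacing
    u = np.cumsum(rho1 - rho0) * dx     # flux, vanishing at the left boundary
    return np.sum(np.abs(u)) * dx

# Two unit Gaussians a distance 2 apart: the optimal plan is a shift, so W_1 ~ 2
x = np.linspace(-10.0, 10.0, 4001)
g = lambda m: np.exp(-0.5 * (x - m) ** 2) / np.sqrt(2.0 * np.pi)
print(w1_1d(g(-1.0), g(1.0), x))        # ~2.0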
We remark that, from a computational standpoint, when applied to discrete spaces (graphs) this formulation leads to a very substantial computational benefit in the case of sparse graphs; this is due to the fact that (<ref>) involves solving systems of the order of the square of the number of nodes, while equation (<ref>) involves systems of the order of the number of edges.

§ GRADIENT ON SPACE OF HERMITIAN MATRICES

We closely follow the treatment in <cit.>. In particular, we will need a notion of gradient on the space of Hermitian matrices, together with its dual, i.e. the divergence. Denote by ℋ and 𝒮 the sets of n×n Hermitian and skew-Hermitian matrices, respectively. We will assume that all of our matrices are of fixed size n×n. Next, we denote the spaces of block-column vectors consisting of N elements in ℋ and 𝒮 by ℋ^N and 𝒮^N, respectively. We also let ℋ_+ and ℋ_++ denote the cones of nonnegative and positive-definite matrices, respectively, and set

𝒟 := {ρ∈ℋ_+ | tr(ρ)=1},  𝒟_+ := {ρ∈ℋ_++ | tr(ρ)=1}.

We note that the tangent space of 𝒟_+ at any ρ∈𝒟_+ is given by

T_ρ = {δ∈ℋ | tr(δ)=0},

and we use the standard inner product, namely ⟨X,Y⟩ = tr(X^*Y), for both ℋ and 𝒮. For X, Y∈ℋ^N (𝒮^N), ⟨X,Y⟩ = ∑_k=1^N tr(X_k^*Y_k).

Given X=[X_1^*,⋯,X_N^*]^* ∈ ℋ^N (𝒮^N) and Y∈ℋ (𝒮), set XY := [X_1Y; ⋮; X_NY] and YX := [YX_1; ⋮; YX_N].

For a given L∈ℋ^N we define the gradient operator

∇_L: ℋ → 𝒮^N,  X ↦ [L_1X-XL_1; ⋮; L_NX-XL_N].

By analogy with ordinary multivariable calculus, we refer to its dual with respect to the Hilbert–Schmidt inner product as the (negative) divergence operator, namely

∇_L^*: 𝒮^N → ℋ,  Y=[Y_1; ⋮; Y_N] ↦ ∑_k=1^N L_kY_k-Y_kL_k,

i.e., ∇_L^* is defined by means of the identity ⟨∇_LX, Y⟩ = ⟨X, ∇_L^*Y⟩. A standing assumption throughout is that the null space of ∇_L, denoted by ker(∇_L), contains only scalar multiples of the identity matrix.

§ WASSERSTEIN-1 DISTANCE FOR DENSITY MATRICES

In this section, we show that both (<ref>) and (<ref>) have natural counterparts for probability density matrices, i.e. matrices in 𝒟. This setup works equally for matrices in ℋ_+ of equal trace. We treat (<ref>) as our starting point and define the W_1 distance on the space of density matrices as

W_1(ρ_0,ρ_1) := sup_f∈ℋ { tr[f(ρ_0-ρ_1)] | ‖∇_Lf‖ ≤ 1 }.

Here ‖·‖ is the operator norm. The above is well-defined since, by assumption, the null space of ∇_L is spanned by the identity matrix I. As above, we have

∇_Lf = [L_1f-fL_1; ⋮; L_Nf-fL_N].

This should be compared to the Connes spectral distance <cit.>, which is given by

d_D(ρ_0,ρ_1) = sup_f { tr[f(ρ_0-ρ_1)] | ‖[D,f]‖ ≤ 1 }.

It is not difficult to see that the dual of (<ref>) is

Ŵ_1(ρ_0,ρ_1) = inf_u∈𝒮^N { ‖u‖_* | ρ_0-ρ_1-∇_L^*u = 0 },

which is the counterpart of (<ref>). Here ‖·‖_* denotes the nuclear norm <cit.>. In particular, we have the following theorems.

Theorem. With notation as above, W_1(ρ_0,ρ_1) = Ŵ_1(ρ_0,ρ_1).

Proof. We start from (<ref>) and use the fact that ‖u‖_* = sup_{g∈𝒮^N, ‖g‖≤1} ⟨u,g⟩. It follows that

Ŵ_1(ρ_0,ρ_1) = inf_u sup_f { ‖u‖_* + ⟨f, ρ_0-ρ_1-∇_L^*u⟩ }
= inf_u sup_{f, ‖g‖≤1} { ⟨u,g⟩ + ⟨f, ρ_0-ρ_1-∇_L^*u⟩ }
= inf_u sup_{f, ‖g‖≤1} { ⟨u, g-∇_Lf⟩ + ⟨f, ρ_0-ρ_1⟩ }
≥ sup_{f, ‖g‖≤1} inf_u { ⟨u, g-∇_Lf⟩ + ⟨f, ρ_0-ρ_1⟩ }
= sup_{f, ‖g‖≤1} { ⟨f, ρ_0-ρ_1⟩ | g=∇_Lf }
= sup_f { ⟨f, ρ_0-ρ_1⟩ | ‖∇_Lf‖ ≤ 1 } = W_1(ρ_0,ρ_1).

This implies that (<ref>) and (<ref>) are dual to each other. Since both are strictly feasible, the duality gap is zero. Therefore W_1(ρ_0,ρ_1) = Ŵ_1(ρ_0,ρ_1).

Theorem. The W_1 distance defined in (<ref>) is a metric on the space of density matrices 𝒟.

Proof. Obviously W_1(ρ_0,ρ_1) ≥ 0 holds, with equality if and only if ρ_0=ρ_1.
The symmetry property W_1(ρ_0,ρ_1)=W_1(ρ_1,ρ_0) is also clear from the definition. It remains to prove the triangle inequality, i.e., that for any ρ_0, ρ_1, ρ_2∈𝒟 we have

W_1(ρ_0,ρ_2) ≤ W_1(ρ_0,ρ_1)+W_1(ρ_1,ρ_2).

This is easiest to see from the dual formulation (<ref>). Let u_1 and u_2 be the optimal fluxes for (ρ_0,ρ_1) and (ρ_1,ρ_2), respectively. Then u_1+u_2 is a feasible flux for (ρ_0,ρ_2), namely

ρ_0-ρ_2-∇_L^*(u_1+u_2)=0.

It follows that W_1(ρ_0,ρ_2) ≤ ‖u_1+u_2‖_* ≤ ‖u_1‖_*+‖u_2‖_*, which completes the proof.

§ WASSERSTEIN-1 DISTANCE: THE UNBALANCED CASE

In this section, we extend the definition of the Wasserstein-1 distance to the space of nonnegative matrices ℋ_+, i.e., we remove the constraint that both matrices have equal trace. Compare also with some very interesting recent work <cit.> on fast computational methods for W_1 in the unbalanced scalar case.

In order to compare matrices of unequal trace, we relax the constraint in (<ref>), which forces tr(ρ_0)=tr(ρ_1), by introducing a “source” term v∈ℋ. That is, we replace our continuity equation (<ref>) with

ρ_0-ρ_1-∇_L^*u-v=0.

With this added source, we define a Wasserstein-1 distance on ℋ_+ as follows. Given ρ_0, ρ_1∈ℋ_+, we define

V_1(ρ_0,ρ_1) = inf_{u∈𝒮^N, v∈ℋ} { ‖u‖_*+α‖v‖_* | ρ_0-ρ_1-∇_L^*u-v=0 }.

Here α>0 measures the relative significance of u and v.

Another natural way to compare ρ_0, ρ_1∈ℋ_+ is to find μ, ν∈ℋ_+ of equal trace that are close to ρ_0 and ρ_1 in some norm (here taken to be the nuclear norm), as well as close to one another. More specifically, we seek μ, ν minimizing

W_1(μ,ν)+α‖ρ_0-μ‖_*+α‖ρ_1-ν‖_*.

Putting the two terms together, we obtain the following definition of a Wasserstein-1 distance:

V̂_1(ρ_0,ρ_1) = inf_{u∈𝒮^N, μ,ν∈ℋ_+} { ‖u‖_*+α‖ρ_0-μ‖_*+α‖ρ_1-ν‖_* | μ-ν-∇_L^*u=0, tr(μ)=tr(ν) }.

It turns out that these two relaxations of W_1 are in fact equivalent.

Theorem. With notation and assumptions as above, V_1(ρ_0,ρ_1) = V̂_1(ρ_0,ρ_1).

Proof. Clearly, V̂_1(ρ_0,ρ_1) ≥ V_1(ρ_0,ρ_1). On the other hand, let u, v be a minimizer of (<ref>), and write v=v_1-v_0 with v_0, v_1∈ℋ_+, i.e., v_0 and v_1 are the negative and positive parts of v, respectively. Then μ=ρ_0+v_0 and ν=ρ_1+v_1, together with u, form a feasible solution of (<ref>). With this solution,

V̂_1(ρ_0,ρ_1) ≤ ‖u‖_*+α‖ρ_0-μ‖_*+α‖ρ_1-ν‖_* = ‖u‖_*+α‖v_0‖_*+α‖v_1‖_* = ‖u‖_*+α‖v‖_*,

which implies that V̂_1(ρ_0,ρ_1) ≤ V_1(ρ_0,ρ_1). This completes the proof.

Corollary. The formula (<ref>) defines a metric on ℋ_+. The proof follows exactly the same lines as in Theorem <ref>.

Using the technique of Lagrange multipliers, one can deduce the dual formulation of (<ref>) and establish the following.

Theorem. With notation as above,

V_1(ρ_0,ρ_1) = sup_f∈ℋ { tr[f(ρ_0-ρ_1)] | ‖∇_Lf‖ ≤ 1, ‖f‖ ≤ α }.

Proof. A straightforward calculation gives

V_1(ρ_0,ρ_1) = inf_{u,v} sup_f { ‖u‖_*+α‖v‖_* + ⟨f, ρ_0-ρ_1-∇_L^*u-v⟩ }
= inf_{u,v} sup_{f, ‖g‖≤1, ‖h‖≤1} { ⟨u,g⟩+α⟨v,h⟩ + ⟨f, ρ_0-ρ_1-∇_L^*u-v⟩ }
≥ sup_{f, ‖g‖≤1, ‖h‖≤1} inf_{u,v} { ⟨u, g-∇_Lf⟩ + ⟨v, αh-f⟩ + ⟨f, ρ_0-ρ_1⟩ }
= sup_f { ⟨f, ρ_0-ρ_1⟩ | ‖∇_Lf‖ ≤ 1, ‖f‖ ≤ α }.

This, together with strong duality, completes the proof.

§ WASSERSTEIN-1 DISTANCE FOR MATRIX-VALUED DENSITIES

With little extra effort, we can generalize the definition of the Wasserstein-1 distance to the space of matrix-valued densities. Examples of matrix-valued densities include power spectra of multivariate time series, stress tensors, diffusion tensors, and so on; hence our motivation to consider matrix-valued distributions over possibly more than one spatial coordinate.

Given two matrix-valued densities ρ_0, ρ_1 satisfying ∫_ℝ^m tr(ρ_0(x))dx = ∫_ℝ^m tr(ρ_1(x))dx, we can define their Wasserstein-1 distance as
W_1(ρ_0,ρ_1) := sup_f∈ℋ { ∫_ℝ^m tr[f(x)(ρ_0(x)-ρ_1(x))]dx | ‖[∇_x f; ∇_L f]‖ ≤ 1 },

or through its dual,

W_1(ρ_0,ρ_1) = inf_{u_1∈ℋ^m, u_2∈𝒮^N} { ∫_ℝ^m ‖[u_1(x); u_2(x)]‖_* dx | ρ_0-ρ_1+∇_x·u_1-∇_L^*u_2 = 0 }.

For more general densities, for which condition (<ref>) may not hold, we define

V_1(ρ_0,ρ_1) := sup_f∈ℋ { ∫_ℝ^m tr[f(x)(ρ_0(x)-ρ_1(x))]dx | ‖[∇_x f; ∇_L f]‖ ≤ 1, ‖f‖ ≤ α },

or, equivalently,

V_1(ρ_0,ρ_1) = inf_{u_1∈ℋ^m, u_2∈𝒮^N, v∈ℋ} { ∫_ℝ^m ( ‖[u_1(x); u_2(x)]‖_* + α‖v‖_* ) dx | ρ_0-ρ_1+∇_x·u_1-∇_L^*u_2-v = 0 }.

One can introduce positive coefficients β_1>0, β_2>0 to trade off the relative importance of u_1 and u_2 in establishing correspondence between the two distributions, as follows:

V_1(ρ_0,ρ_1) := sup_f∈ℋ { ∫_ℝ^m tr[f(x)(ρ_0(x)-ρ_1(x))]dx | ‖[β_1∇_x f; β_2∇_L f]‖ ≤ 1, ‖f‖ ≤ α },

or, equivalently,

V_1(ρ_0,ρ_1) = inf_{u_1∈ℋ^m, u_2∈𝒮^N, v∈ℋ} { ∫_ℝ^m ( ‖[u_1(x); u_2(x)]‖_* + α‖v‖_* ) dx | ρ_0-ρ_1+β_1∇_x·u_1-β_2∇_L^*u_2-v = 0 }.

§ EXAMPLE

We use our framework to compare power spectra of multivariate time series (in discrete time). Evidently, the distance between two power spectra induces a distance between the corresponding linear modeling filters and can thereby be used to compare (stable) MIMO systems <cit.>.

Consider the three power spectra shown in Figure <ref> (in different colors). The three subplots show the power spectra of two time series (subplots (a) and (c)) and their cross-spectrum (subplot (b)) as functions of frequency (the phases of the cross-spectra are not shown). The three colors thus represent three different matrix-valued power spectra, given by (rows separated by semicolons):

ρ_0(θ) = [1, 0.4; 0, 1] [0.01, 0; 0, 0.7/|a_0(e^jθ)|^2] [1, 0; 0.4, 1],

ρ_1(θ) = [1, 0.5; 0.5e^jθ, 1] [0.5/|a_1(e^jθ)|^2, 0; 0, 0.5/|a_1(e^jθ)|^2] [1, 0.5e^-jθ; 0.5, 1],

ρ_2(θ) = [1, 0; 0.4e^jθ, 1] [2/|a_2(e^jθ)|^2, 0; 0, 0.02] [1, 0.4e^-jθ; 0, 1],

where

a_0(z) = (1-1.9cos(π/6)z-0.95^2z^2)×(1-1.5cos(π/3)z+0.75^2z^2),
a_1(z) = (1-1.9cos(2π/3)z-0.95^2z^2)×(1-1.5cos(5π/8)z+0.75^2z^2),
a_2(z) = (1-1.9cos(5π/12)z-0.95^2z^2)×(1-1.5cos(π/2)z+0.75^2z^2).

The distances between each pair, for different values of β_1, β_2, with α=1 and the choice L=[L_1,L_2] with

L_1 = [1, 0; 0, 0],  L_2 = [1, 1; 1, 0],

are tabulated in Table <ref>. We observe that when the penalty on the rotational part is large (β_1 >> β_2), we have V_1(ρ_0,ρ_2) > V_1(ρ_0,ρ_1) and V_1(ρ_0,ρ_2) > V_1(ρ_2,ρ_1). On the other hand, when the penalty on translation is large relative to the cost of rotation (β_1 << β_2), we have V_1(ρ_0,ρ_1) > V_1(ρ_0,ρ_2) and V_1(ρ_0,ρ_1) > V_1(ρ_1,ρ_2). These findings agree with intuition based on the relative frequency content and directionality of power in the three spectra. More specifically, ρ_1 requires a significant drift in directionality before it can be matched with the other two, while this is less important when comparing ρ_0 and ρ_2. In the latter case, it is the actual frequency where the power resides that distinguishes the two, while the directionality is largely in agreement.

What this example underscores is the ability of the metric to be tailored to applications where we need to trade off and compromise, in a principled way, between two vastly different features of matrix-valued distributions, i.e., spatial location versus directionality of the “intensity.” What was achieved in this paper is the construction of a suitable and easily computable metric that can be utilized for this purpose.

§ FUTURE RESEARCH

We introduced a generalization of the scalar W_1 distance to matrices and matrix-valued measures. This new metric, W_1, is computationally simpler and more attractive than earlier metrics based on quadratic cost criteria.
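As a pointer to how simple the computation is in practice, the basic distance (<ref>) is a nuclear-norm minimization subject to a linear constraint and can be handed directly to an off-the-shelf convex solver. The sketch below (Python/CVXPY) restricts to real symmetric matrices and antisymmetric flux blocks for simplicity, an assumption made here for illustration only (the theory above is stated for Hermitian matrices); the block-column u is stacked into an (Nn)×n matrix whose nuclear norm realizes ‖u‖_*.

import numpy as np
import cvxpy as cp

def w1_matrix(rho0, rho1, L):
    # minimize ||u||_* subject to rho0 - rho1 - div_L(u) = 0   (cf. Eq. (8))
    n, N = rho0.shape[0], len(L)
    U = [cp.Variable((n, n)) for _ in range(N)]            # blocks of the flux u
    div = sum(Lk @ Uk - Uk @ Lk for Lk, Uk in zip(L, U))   # divergence, Eq. (7)
    constraints = [rho0 - rho1 - div == 0]
    constraints += [Uk == -Uk.T for Uk in U]               # antisymmetric blocks
    prob = cp.Problem(cp.Minimize(cp.normNuc(cp.vstack(U))), constraints)
    prob.solve(solver=cp.SCS)
    return prob.value

# Toy example using the L-matrices of the example section
L = [np.array([[1.0, 0.0], [0.0, 0.0]]),
     np.array([[1.0, 1.0], [1.0, 0.0]])]
rho0 = np.diag([0.7, 0.3])
rho1 = np.diag([0.4, 0.6])
print(w1_matrix(rho0, rho1, L))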
In fact, our “dual of the dual” formulation makes the metric especially attractive when comparing matrix-valued data on a discrete space (graph, network). We note that the Wasserstein 1-metric has been used as a tool in defining curvature <cit.> and in analyzing the robustness of complex networks derived from scalar-valued data <cit.>. The formalism presented in the current work suggests alternative notions of curvature and robustness when the nodes of a network carry matrix-valued data, e.g., in diffusion tensor imaging. We plan to pursue such issues in future work.

§ ACKNOWLEDGEMENTS

This project was supported by AFOSR grants (FA9550-15-1-0045 and FA9550-17-1-0435), grants from the National Center for Research Resources (P41-RR-013218) and the National Institute of Biomedical Imaging and Bioengineering (P41-EB-015902), the National Science Foundation (NSF), and the National Institutes of Health (P30-CA-008748 and 1U24CA18092401A1).

French J.-D. Benamou and Y. Brenier, “A computational fluid mechanics solution to the Monge–Kantorovich mass transfer problem,” Numerische Mathematik 84 (2000), pp. 375-393.
nuclear E. Candes and T. Tao, “The power of convex relaxation: Near-optimal matrix completion,” IEEE Trans. Inform. Theory 56:5 (2009), pp. 2053-2080.
Carlen E. Carlen and J. Maas, “Gradient flow and entropy inequalities for quantum Markov semigroups with detailed balance,” https://arxiv.org/abs/1609.01254, 2016.
Chen Y. Chen, T. T. Georgiou, A. Tannenbaum, “Matrix optimal mass transport: a quantum mechanical approach,” https://arxiv.org/abs/1610.03041, 2016.
CGP Y. Chen, T. T. Georgiou, M. Pavon, “On the relation between optimal transport and Schrödinger bridges: A stochastic control viewpoint,” Journal of Optimization Theory and Applications 169:2 (2016), pp. 671-691.
CGP0 Y. Chen, T. T. Georgiou, M. Pavon, “Optimal transport over a linear dynamical system,” IEEE Transactions on Automatic Control, to appear.
CGP1 Y. Chen, T. T. Georgiou, M. Pavon, “Optimal steering of a linear stochastic system to a final probability distribution, Part I,” IEEE Transactions on Automatic Control 61:5 (2016), pp. 1158-1169.
CGP2 Y. Chen, T. T. Georgiou, M. Pavon, “Optimal steering of a linear stochastic system to a final probability distribution, Part II,” IEEE Transactions on Automatic Control 61:5 (2016), pp. 1170-1180.
CGPT Y. Chen, T. T. Georgiou, M. Pavon, and A. Tannenbaum, “Robust transport over networks,” IEEE Transactions on Automatic Control, to appear.
connes A. Connes, Noncommutative Geometry, Academic Press Inc., San Diego (1994), available at http://www.alainconnes.org/downloads.html.
DP P. Dai Pra, “A stochastic control approach to reciprocal diffusion processes,” Applied Mathematics and Optimization 23:1 (1991), pp. 313-329.
Evans L. C. Evans, Partial differential equations and Monge–Kantorovich mass transfer, in Current Developments in Mathematics, International Press, Boston, MA, 1999, pp. 65–126.
Jordan R. Jordan, D. Kinderlehrer, and F. Otto, “The variational formulation of the Fokker-Planck equation,” SIAM J. Math. Anal. 29 (1998), pp. 1-17.
Kantorovich1948 L. V. Kantorovich, “On a problem of Monge,” Uspekhi Mat. Nauk. 3 (1948), pp. 225–226.
PW M. Pavon and A. Wakolbinger, “On free energy, stochastic control, and Schrödinger processes,” Modeling, Estimation and Control of Systems with Uncertainty, G.B. Di Masi, A. Gombani, A. Kurzhanski Eds., Birkhäuser, Boston, pp. 334-348 (1991).
Leonard C.
Léonard, “A survey of the Schrödinger problem and some of its connections with optimal transport,” Discrete Contin. Dyn. Syst. A 34(4) (2014), pp. 1533-1574.
osher W. Li, P. Yin, and S. Osher, “A fast algorithm for unbalanced L^1 Monge-Kantorovich problem,” preprint.
McC97 R. McCann, “A convexity principle for interacting gases,” Adv. Math. 128 (1997), pp. 153–179.
Mielke M. Mittnenzweig and A. Mielke, “An entropic gradient structure for Lindblad equations and GENERIC for quantum systems coupled to macroscopic models,” https://arxiv.org/abs/1609.05765, 2016.
Mueller M. Mueller, P. Karasev, I. Kolesov, and A. Tannenbaum, “Optical flow estimation for flame detection in videos,” IEEE Trans. Image Processing 22:7 (2013), pp. 2786-2797.
MT T. Mikami and M. Thieullen, “Optimal transportation problem by stochastic optimal control,” SIAM Journal Control and Optimization 47:3 (2008), pp. 1127-1139.
Lipeng L. Ning, T. Georgiou, and A. Tannenbaum, “On matrix-valued Monge-Kantorovich optimal mass transport,” IEEE Transactions on Automatic Control 60:2 (2015), pp. 373-382.
NinGeo14 L. Ning and T. T. Georgiou, “Metrics for matrix-valued measures via test functions,” 53rd IEEE Conference on Decision and Control (CDC), 2014.
Ollivier Y. Ollivier, “Ricci curvature of Markov chains on metric spaces,” J. Funct. Anal. 256 (2009), pp. 810-864.
Otto F. Otto, “The geometry of dissipative evolution equations: the porous medium equation,” Communications in Partial Differential Equations 26 (2001), pp. 101-174.
Rachev S. Rachev and L. Rüschendorf, Mass Transportation Problems, Volumes I and II, Probability and Its Applications, Springer, New York, 1998.
Sandhu R. Sandhu, T. Georgiou, E. Reznik, L. Zhu, I. Kolesov, Y. Senbabaoglu, and A. Tannenbaum, “Graph curvature for differentiating cancer networks,” Scientific Reports (Nature), vol. 5, 12323; doi: 10.1038/srep12323 (2015).
Sandhu1 R. Sandhu, T. Georgiou, and A. Tannenbaum, “Ricci curvature: An economic indicator for market fragility and systemic risk,” Science Advances, vol. 2, doi: 10.1126/sciadv.1501495, 2016.
TGT E. Tannenbaum, T. Georgiou, and A. Tannenbaum, “Signals and control aspects of optimal mass transport and the Boltzmann entropy,” in 49th IEEE Conference on Decision and Control (CDC), December 2010.
Villani C. Villani, Topics in Optimal Transportation, Graduate Studies in Mathematics, vol. 58, AMS, Providence, RI, 2003.
jonck C. Wang, E. Jonckheere, and R. Banirazi, “Wireless network capacity versus Ollivier-Ricci curvature under Heat Diffusion (HD) protocol,” Proceedings of ACC, 2013.
http://arxiv.org/abs/1702.07921v2
{ "authors": [ "Yongxin Chen", "Tryphon T. Georgiou", "Lipeng Ning", "Allen Tannenbaum" ], "categories": [ "math.FA", "math-ph", "math.MP" ], "primary_category": "math.FA", "published": "20170225165814", "title": "Matricial Wasserstein-1 Distance" }
http://arxiv.org/abs/1702.08127v1
{ "authors": [ "Eric Bonnetier", "Hai Zhang" ], "categories": [ "math.SP", "math.AP" ], "primary_category": "math.SP", "published": "20170227022531", "title": "Characterization of the essential spectrum of the Neumann-Poincaré operator in 2D domains with corner via Weyl sequences" }
Traffic Flow Control and Fuel Consumption Reduction via Moving Bottlenecks

Rabie A. Ramadan
Department of Mathematics, Temple University, 1805 North Broad Street, Philadelphia, PA 19122
rabie.ramadan@temple.edu

Benjamin Seibold
Department of Mathematics, Temple University, 1805 North Broad Street, Philadelphia, PA 19122
seibold@temple.edu, http://www.math.temple.edu/seibold

2000 Mathematics Subject Classification: 35L65; 35Q91; 91B74

Abstract. Moving bottlenecks, such as slow-driving vehicles, are commonly thought of as impediments to efficient traffic flow. Here, we demonstrate that in certain situations, moving bottlenecks—properly controlled—can actually be beneficial for the traffic flow, in that they reduce the overall fuel consumption, without imposing any delays on the other vehicles. As an important practical example, we study a fixed bottleneck (e.g., an accident) that has occurred further downstream. This new possibility of traffic control is particularly attractive with autonomous vehicles, which (a) will have fast access to non-local information, such as incidents and congestion downstream; and (b) can execute driving protocols accurately.

§ INTRODUCTION

This paper demonstrates that, in certain situations, traffic flow can be controlled via a single moving bottleneck so that the overall fuel consumption is reduced; yet, no delay is imposed on the travel time of the other vehicles. An important scenario is the control of traffic flow upstream of a fixed bottleneck, where traffic is still in free-flow.

A fixed bottleneck (FB) is a region on a road (we focus solely on highways here) at which the throughput is reduced. Common examples are work zones, road features (curves, climbs), and traffic incidents. Throughput reduction can be caused by lane reduction, speed reduction, or both. A moving bottleneck (MB) follows the same principles; however, it moves along the road. Common examples are slow-moving vehicles or moving road work zones (see <cit.> for the fundamentals and traffic flow theory of bottlenecks). Here we focus on bottlenecks with regions of influence much shorter than the length scales of interest, so we model them as fixed or moving points (with zero length).

The evolution of the traffic density ρ(x,t) along the road is modeled macroscopically via the Lighthill-Whitham-Richards (LWR) model <cit.>

ρ_t + (Q(ρ))_x = 0 ,

where Q(ρ) = ρ U(ρ) is the flux function, encoding the fundamental diagram (FD), and U(ρ) is the bulk velocity vs. density relationship. Being a hyperbolic conservation law, Eq. (<ref>) models sharp transition zones (e.g., upstream ends of traffic jams) as moving discontinuities (“shocks”). A shock between two states ρ_1 < ρ_2 moves at a speed s = (Q(ρ_2)-Q(ρ_1))/(ρ_2-ρ_1), given by the slope of the secant line in the FD connecting the two states.

While the techniques presented herein apply for general FD shapes, we consider two specific examples: the quadratic Greenshields <cit.> flux Q(ρ) = ρ u_m(1-ρ/ρ_m), and the triangular Newell–Daganzo <cit.> flux. The former is used to demonstrate the theory for a strictly concave flux; and the latter—better resembling true data <cit.>—is used to obtain quantitative estimates.
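For readers who wish to experiment with these constructions, both flux choices and the shock speed formula translate directly into a few lines of Python. This is an illustrative sketch only (the parameter values anticipate those adopted in the modeling framework below; nothing here is part of the original analysis code):

    import numpy as np

    u_m, rho_m, rho_c = 140.0, 400.0, 50.0   # km/hr, veh/km (values adopted below)

    def Q_greenshields(rho):
        """Quadratic Greenshields flux [veh/hr]."""
        return rho * u_m * (1.0 - rho / rho_m)

    def Q_newell_daganzo(rho):
        """Triangular Newell-Daganzo flux [veh/hr], with Q_m = u_m * rho_c."""
        Q_m = u_m * rho_c
        return np.where(rho <= rho_c, Q_m * rho / rho_c,
                        Q_m * (rho_m - rho) / (rho_m - rho_c))

    def shock_speed(rho1, rho2, Q=Q_greenshields):
        """Rankine-Hugoniot speed [km/hr]: slope of the secant line in the FD."""
        return (Q(rho2) - Q(rho1)) / (rho2 - rho1)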
Regarding the model description, it should be stressed that the LWR model captures—even with the best FD—only the large scale equilibrium/bulk flow. On a microscopic scale, vehicles may exhibit non-equilibrium behavior (such as velocity oscillations), particularly near bottlenecks. Such effects can in principle be described by second-order macroscopic models <cit.>. However, the focus of this study is on the equilibrium quantities only. When it comes to fuel consumption estimates (see <ref>), non-equilibrium effects that are present in highly congested flow are likely to amplify the impact of the fundamental ideas presented here (see the discussion in <ref>).

At the position of bottlenecks, coupling conditions as in <cit.> are provided. An important distinction of this work from existing models for MBs <cit.> is that in those, the evolution of the MB itself depends on the ambient traffic state, thus yielding coupled PDE–ODE models. In contrast, here the speed of the MB is a control parameter. Moreover, as we restrict to piecewise constant density profiles (as they naturally arise in LWR with bottlenecks), we employ only MB speed profiles that are constant in time while the control is active (see <ref>).

The idea to control traffic flow via MBs is important in the context of autonomous vehicles (AVs). Via vehicle-to-vehicle and/or vehicle-to-infrastructure communication, they will have fast access to non-local information, such as incidents and congestion downstream. Moreover, they can execute driving protocols very accurately, and thus can automatically adapt to microscopic small-scale deviations from the macroscopic large-scale traffic profile, better than any human driver could. And in contrast to traditional, costly means of traffic control, such as ramp metering or variable speed limits, AVs will be in the traffic stream (in a few years) either way. Therefore, the cost of using them for traffic control may amount to as little as a compensation of their operator. Finally, because only a single MB is needed, the ideas developed here can in principle be applied in the near future when we will have very low AV penetration rates.

This paper is organized as follows. First we describe the modeling framework (<ref>), which describes the setup (<ref>), the dynamics of the bottlenecks (<ref>), the choice of a realistic FD (<ref>), the used fuel consumption estimates (<ref>), and the formulas used to quantify savings in fuel (<ref>). Next, we present quantitative results (<ref>), including optimal fuel consumption reduction values (<ref>, <ref>), and the sensitivity of the outcomes with respect to important model parameters (<ref>). The paper closes with a discussion of further benefits of the presented control strategies (<ref>), and general conclusions (<ref>).

§ MODELING FRAMEWORK

§.§ Outline of the Problem Setup

The mathematical constructions described below hold for very general setups. However, to obtain meaningful estimates of the potential fuel consumption reduction generated by MBs, we make certain modeling choices that are close to real situations, however with some simplifications/idealizations. Specifically, we consider a highway (of infinite length) with 3 lanes, uniform road conditions, and no ramps. The jamming density is ρ_m. The initial traffic density is constant ρ_0. At time t=t_0, a FB (e.g., an accident) arises somewhere on the highway, closing down 2 lanes.
The FB creates two new states on the highway: a state with high density ρ_b > ρ_0, traveling backwards in the form of a shock wave with constant velocity s_b, and a state with low density ρ_low < ρ_0, traveling forward in the form of a shock wave with constant velocity s_low.

At some time t=t_1 ≥ t_0, let x=0 be the position of the backwards moving shock produced by the FB. At that time t=t_1, we activate a MB at position x=-d, i.e., at a distance d upstream from the high-density congestion, a vehicle in the right lane starts driving at a constant speed s, which is slower than the ambient equilibrium traffic flow. This MB creates two new states on the highway: a state with low density ρ_2 < ρ_0 ahead of it, traveling forward at a constant velocity s_2 > s, and a state with high density ρ_1 > ρ_0 behind it, traveling at constant velocity s_1 < s. This last state could be moving backwards or forward, depending on ρ_0 and the choice of s (see <ref>). The traffic states before and after the activation of the two bottlenecks are depicted in Fig. <ref>, in terms of density profiles ρ vs. x, as well as a visualization of how the states can look in terms of vehicles.

If we consider a concave downwards FD (such as the Greenshields flux), or if we consider a triangular FD and have ρ_0 in the free flow regime, the traffic waves produced by the FB and the MB interact three times. The temporal evolution of the traffic state on the highway, shown in Fig. <ref>, is as follows. The first shock wave interaction (t=t_2) happens when the low density state ahead of the MB meets the high density state upstream of the FB. This interaction results in a new shock wave that could move either forward or backwards, with constant velocity s_3. The second interaction (t=t_3) happens when the controlled vehicle (the MB) meets the traffic wave that has arisen from the first interaction. At this time t=t_3, we deactivate the MB (because it now enters the highly congested state ρ_b, in which there is no further point in slowing down traffic). This second interaction results in a new shock travelling backwards at a constant velocity s_4. The third and final interaction (t=t_4) happens when the shock produced by the second interaction catches up to the high density state produced by the MB. After t=t_4, the influence of the MB has vanished, i.e., the traffic state is the same as it would have been without the activation of the MB. This implies that none of the vehicles on the highway have experienced any delay, except for the MB itself (which is negligible from a large-scale, macroscopic perspective).

Below, we demonstrate that the introduction of the MB can have a positive impact (i.e., reduction) on the collective consumption of fuel by all the vehicles on the highway. Moreover, we determine the maximum possible reduction of fuel consumption with respect to the two control parameters allowed here: the MB speed s, and its initial distance d from the position of the backwards moving shock produced by the FB.

§.§ Bottleneck Dynamics

The theory presented in this section applies to generic concave down flux functions, including piecewise linear fluxes. We use the Greenshields flux Q(ρ) = ρ u_m(1-ρ/ρ_m) to visualize the theory and geometric constructions, as it is representative of a general concave down flux function. Here ρ_m is the jamming density, and u_m is the maximum speed. Quantitative estimates will then be obtained using a data-fitted piecewise linear flux.

§.§.§ Fixed bottleneck

We consider a highway with 3 lanes, with the FB blocking 2 lanes.
The maximum flux allowed past the FB is one third (reduced passing speeds would yield an even lower maximum flux; the construction works similarly) of the original maximum flux Q_m = u_m ρ_m/4. If Q(ρ_0) < Q_m/3, the FB has no effect on the traffic density of the highway (all vehicles can pass). In turn, if Q(ρ_0) > Q_m/3, two new density states are produced, corresponding to the two roots of the relationship Q(ρ) = Q_m/3, given by the intersection of the blue line with the thick red curve in Fig. <ref>, panel (a). Let ρ_b denote the high density state upstream of the FB, and ρ_low the low density state ahead of the FB. The shock at the downstream end of the low density region moves forward with velocity s_low = (Q(ρ_low)-Q(ρ_0))/(ρ_low-ρ_0). Likewise, the shock at the upstream end of the high density region moves backwards, with velocity s_b = (Q(ρ_b)-Q(ρ_0))/(ρ_b-ρ_0).

§.§.§ Moving bottleneck

The effect of a MB on the macroscopic traffic state is similar to a FB, however with fluxes considered in a moving frame of reference. In particular, a MB does not always affect the density profile along the highway. If the density ρ_0 is sufficiently low, all vehicles can pass the MB and the traffic state remains unchanged. In turn, if traffic is sufficiently dense (the regime in which traffic control is largely of interest), the presence of a MB produces a higher density state in its wake, while producing a lower density state ahead of itself.

We consider the MB to occupy one out of the three lanes of the highway. The fundamental diagram corresponding to the remaining two lanes at the position of the MB is β Q(ρ/β), where 0<β<1 is the reduction factor of the original fundamental diagram of the highway with three lanes. The most natural choice for the situation at hand is β = 2/3; however, lower values could arise due to lane-changing “friction” effects caused by the MB (see <ref>).

Given a flow rate curve Q(ρ), the flux relative to a frame of reference moving with speed s is Q(ρ)-sρ. Therefore, the maximum flux past the MB is Q^rel = β Q(ρ/β)-sρ. Geometrically (see Fig. <ref>, panel (b)), the maximum relative flux Q^rel_m is the intercept of the line with slope s (blue) that is tangent to the reduced FD β Q(ρ/β) (dashed red).

As the MB is driving at a constant speed s, the high (ρ_1) and low (ρ_2) density states produced by the MB correspond to two points on the FD Q(ρ) connected by a line with slope s. To maximize Q^rel, these two points are the intersections of Q(ρ) and the maximal relative flux line (blue line in Fig. <ref>, panel (b)). Note that with a MB, it is possible that the high density state ρ_1 is actually in the free flow regime. This is impossible with a FB. Note further that the densities ρ_1 and ρ_2 are independent of ρ_0. That base density ρ_0 affects only whether the MB has an effect or not (if ρ_0 ≥ ρ_1 or ρ_0 ≤ ρ_2, the MB has no effect).

In turn, if ρ_2 < ρ_0 < ρ_1, the MB produces a high density state (ρ_1) behind it, whose upstream shock travels with velocity s_1 = (Q(ρ_1)-Q(ρ_0))/(ρ_1-ρ_0), and a low density state (ρ_2) ahead, whose downstream shock travels with velocity s_2 = (Q(ρ_2)-Q(ρ_0))/(ρ_2-ρ_0). The concavity of Q(ρ) implies that s_1 < s < s_2.
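For the Greenshields flux, the bottleneck-induced states described above even admit closed forms, since Q is quadratic. The following sketch (reusing u_m, rho_m, Q_greenshields, and shock_speed from the earlier code; the closed-form roots are elementary consequences of the quadratic flux and are spelled out here only for illustration) computes the states generated by both kinds of bottleneck:

    def fb_states(flux_fraction=1.0/3.0):
        """Densities [rho_low, rho_b] solving Q(rho) = flux_fraction * Q_m
        for the fixed bottleneck, with Q_m = u_m * rho_m / 4."""
        Q_m = u_m * rho_m / 4.0
        return np.sort(np.roots([-u_m / rho_m, u_m, -flux_fraction * Q_m]))

    def mb_states(s, beta=2.0/3.0):
        """Densities (rho_2, rho_1) created by a moving bottleneck of speed s,
        from the tangent construction on the reduced FD beta * Q(rho/beta)."""
        a = rho_m * (u_m - s) / u_m
        return 0.5 * a * (1 - np.sqrt(1 - beta)), 0.5 * a * (1 + np.sqrt(1 - beta))

    # Example: states and shock speeds for ambient density rho_0 = 48 veh/km
    # and a MB driving at s = 98 km/hr:
    # rho_2, rho_1 = mb_states(98.0)
    # s_1, s_2 = shock_speed(48.0, rho_1), shock_speed(48.0, rho_2)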
§.§ Data-Fitted Fundamental Diagram

In order to obtain quantitative estimates that reflect the overall fuel consumption of real traffic flow, we consider a data-fitted triangular piecewise linear Newell–Daganzo <cit.> flux function throughout the rest of this paper:

Q(ρ) = Q_m ρ/ρ_c  for 0 ≤ ρ ≤ ρ_c ,
Q(ρ) = Q_m (ρ_m-ρ)/(ρ_m-ρ_c)  for ρ_c ≤ ρ ≤ ρ_m .

Here ρ_c is the critical density at which the maximum flux, Q_m, is achieved. We choose

ρ_m = #lanes/7.5 m = 3/7.5 m = 400 veh/km ,

where the 7.5 m represent a 5 m average vehicle length, plus 50% safety distance, as justified in <cit.>. Moreover, motivated by German highways <cit.>, we choose the free flow speed u_m = 140 km/hr, and ρ_c = ρ_m/8 = 50 veh/km. The resulting maximum flux is Q_m = u_m ρ_c = 7000 veh/hr. Figure <ref> shows the graph of the FD with these values.

§.§ Fuel Consumption

Our aim is to quantify the overall fuel consumption (FC) of all vehicles on the highway in various situations. The relationship between the speed of a vehicle and its FC efficiency (Liters/km) is discussed in <cit.>: vehicles consume more fuel per distance traveled when they are driving at very low speeds (a certain amount of fuel is used to just keep the engine, and accessories, running) or at very high speeds (more energy is needed to overcome air drag).

Figure 41 in <cit.> provides fuel consumption efficiency data (Liters/km) for four types of vehicles (Ford Explorer, Ford Focus, Honda Civic, and Honda Accord), as functions of the vehicle speed. Multiplying the FC efficiency by the vehicle speed yields the FC rate (Liters/hr). The FC rates for the four vehicles as functions of the vehicle speed are shown in Fig. <ref>, panel (a). We average these four curves (with equal weights), and approximate (in a least squares sense) the resulting data points via a sixth order polynomial (which is accurate up to a 5% error). This average FC rate function reads as:

K(s) = 5.7×10^-12 s^6 - 3.6×10^-9 s^5 + 7.6×10^-7 s^4 - 6.1×10^-5 s^3 + 1.9×10^-3 s^2 + 1.6×10^-2 s + 0.99 .

Here units have been omitted for notational efficiency (s is in km/hr and K is in Liters/hr). The function is given by the red curve in Fig. <ref>, panel (a).

Assuming the LWR model (<ref>), we can now quantify the FC rate of the whole traffic, as a function of the traffic density. To that end, we (a) re-parametrize (<ref>) in terms of the density ρ, as given by the LWR model; and (b) multiply K by ρ to obtain the FC rate of traffic, rather than of a single vehicle. Considering the triangular FD (<ref>) from <ref>, the bulk velocity vs. density relationship is

U(ρ) = u_m  for 0 ≤ ρ ≤ ρ_c ,
U(ρ) = u_m (ρ_c/ρ) (ρ_m-ρ)/(ρ_m-ρ_c)  for ρ_c ≤ ρ ≤ ρ_m ,

i.e., U(ρ) = 140 km/hr for ρ ≤ 50 veh/km, and U(ρ) = 8000/hr/ρ - 20 km/hr for ρ ≥ 50 veh/km.

The FC rate (Liters/hr) of one vehicle vs. the traffic density at the vehicle's position is then f(ρ) = K(U(ρ)). And the total FC rate (Liters/hr) of a segment of the highway vs. the total traffic density of this segment is given by F(ρ) = ρ f(ρ). Figure <ref>, panel (b) shows this function. It is linear in ρ when 0 ≤ ρ ≤ ρ_c, because in the free flow regime the speed U(ρ) = u_m is constant. Moreover, it reaches its maximum at the critical density.
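The fuel consumption model just assembled is equally compact in code. A sketch (coefficients of K(s) as in the fit above, highest order first; the vectorized form is an implementation choice, not part of the paper):

    u_m, rho_c, rho_m = 140.0, 50.0, 400.0   # km/hr, veh/km, veh/km

    def U(rho):
        """Bulk velocity [km/hr]: u_m in free flow, 8000/rho - 20 when
        congested (rho in veh/km, rho > 0)."""
        return np.where(rho <= rho_c, u_m, 8000.0 / rho - 20.0)

    K = lambda s: np.polyval([5.7e-12, -3.6e-9, 7.6e-7, -6.1e-5,
                              1.9e-3, 1.6e-2, 0.99], s)   # Liters/hr per vehicle

    def F(rho):
        """FC rate of the traffic stream, per km of road [Liters/(hr km)]."""
        return rho * K(U(rho))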
§.§ Quantifying Savings in Fuel

Based on the FC rate function F(ρ), constructed above, we can quantify the impact of a MB on the total FC of all vehicles on a segment of the highway. The following two scenarios are being compared:

* Scenario A: The FB arises, but the MB is not activated (uncontrolled case).
* Scenario B: The FB arises, and the MB is activated (controlled case).

Because the effect of the MB vanishes after some finite time (see <ref>), both scenarios return to the same traffic state eventually, however via different traffic state evolutions before that.

§.§.§ Absolute savings in fuel

We consider the total amount of fuel that is saved per hour due to the implementation of the controlled scenario B, rather than simply letting scenario A unfold. Let ρ_A(x,t) be the density on the highway in scenario A, and ρ_B(x,t) the density in scenario B. Due to the finite speed of information propagation in the LWR model (<ref>), a MB can only affect the traffic state in its vicinity (in space-time). It cannot affect the traffic density on all of the highway, and its effect is limited to a finite interval in space and in time (t ∈ [t_1,t_4]). Specifically, we define

Ω := {(x,t) | ρ_A(x,t) ≠ ρ_B(x,t)}

to be the domain of influence of the MB (shown in Fig. <ref>, panel (a)). The total FC of traffic in Ω is

G_X^Ω = ∬_Ω F(ρ_X(x,t)) dx dt ,  X ∈ {A,B} .

Therefore, the total fuel saved due to the MB is

W = G_A^Ω - G_B^Ω ,

and we measure the fuel saving rate

Y = W/(t_4 - t_1)

as the total amount of fuel saved (due to the MB), divided by the duration of influence of the MB.

§.§.§ Relative savings in fuel

A relative measure of the impact of the MB control on the fuel consumed is obtained by dividing the total fuel saved (<ref>) by the total fuel consumed by the traffic flow. The challenge in this notion is to define which vehicles (and when) should be included in that “total”. Clearly, it only makes sense to incorporate vehicles in the vicinity (in space-time) of the MB. Below we describe two possible notions of a “total” fuel consumption.

We define the local relative FC savings as

R^Ω = 1 - G_B^Ω/G_A^Ω ,

which is the total fuel saved (by the control) relative to the total fuel consumed by traffic on the domain of influence of the MB, Ω, defined above. Note that (i) the size and shape of the domain Ω depends on the control parameters; and (ii) the segment of highway that is considered via Ω changes in time (see Fig. <ref>, panel (a)).

A geometrically simpler reference domain can be defined as follows. Consider the segment of the highway x ∈ [z,0], where z = min(-d, s_b t_4), over the time interval t ∈ [t_1,t_4]. This rectangle in space-time (shown in Fig. <ref>, panel (b)) represents the portion of highway that is affected by the MB at any time, and the time interval from the MB's activation until its effect has vanished. By construction, this domain includes the domain of influence Ω. Analogous to (<ref>), the total FC of traffic (in scenario A or B) over this rectangular domain is

G_X = ∫_t_1^t_4 ∫_z^0 F(ρ_X(x,t)) dx dt ,  X ∈ {A,B} ,

and the global relative FC savings are given by

R = 1 - G_B/G_A .

Clearly, the quantity (<ref>) leads to the largest values of relative FC savings, as the domain Ω is the smallest reasonable reference region. In contrast, the larger rectangular domain (z,0)×(t_1,t_4) leads to smaller estimates of relative FC savings. A rationale for considering that larger domain (besides geometric simplicity) is that this region in space-time marks a fixed segment of highway that is under “MB control”. Due to this structure, R is a conservative measure of the relative impact of the MB, while R^Ω provides an upper bound on the effect of the MB in terms of relative fuel savings. The true relative impact of the MB is somewhere between R and R^Ω.
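Since the LWR solutions arising here are piecewise constant on polygonal regions of the (x,t)-plane, the double integrals defining G_X^Ω and G_X reduce to finite sums. A sketch (reusing F from the previous code; the region decompositions are placeholders, to be filled in from the wave geometry of the scenario):

    def total_fc(regions):
        """Total fuel [Liters]: sum of F(rho_i) times the space-time area
        [km hr] of each constant-density region (rho_i, area_i)."""
        return sum(F(rho) * area for rho, area in regions)

    # regions_A, regions_B: decompositions of the same domain for scenarios A/B
    # G_A, G_B = total_fc(regions_A), total_fc(regions_B)
    # Y = (G_A - G_B) / (t4 - t1)   # absolute saving rate, Liters/hr
    # R = 1.0 - G_B / G_A           # relative saving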
The quantities Y, R, and R^Ω depend on the parameters ρ_0, s, and d. Therefore, given an initial traffic density ρ_0, we can determine for which choices of the control parameters s and d the effect of the MB on FC reduction is maximized (with respect to Y, R, or R^Ω). Note that, because of the macroscopic description, the analysis neglects the FC of the MB itself. That is justified, because on a large highway segment, its contribution is insignificant relative to the bulk flow.

§ RESULTS

§.§ Effect of Distance d

Although only one MB is needed to implement the suggested control, multiple AVs may be on the highway that could act as potential control vehicles. For a 3-lane density of ρ_0 = 50 veh/km, an AV penetration rate of merely 4% implies that, on average, an AV can be found every 500 m. Therefore, if AVs are roughly evenly distributed along the highway, a desired distance d between the congestion zone (caused by the FB) and the activated MB can be realized up to 250 m precision. Under this premise, the distance d becomes a control parameter.

We aim to choose d such that fuel savings are maximized. As a matter of fact, the distance d exactly re-scales the evolution of the density on the highway with respect to both x and t, as shown in Fig. <ref>. This is evident from the fact that all speeds s_1, s_2, s_b, and s_low, as well as the function F(ρ), are independent of d. Therefore, the absolute fuel savings rate Y scales linearly with d, i.e., Y(λ d) = λ Y(d) for λ>0. This implies that if we were to activate only a single MB on the highway, it is advisable to maximize d, as long as the effects of that MB will have vanished by the time the FB clears. This last requirement is important: any vehicles held back by the MB that do not hit the high density state ρ_b anymore (because the FB has been cleared) will experience an actual delay to their travel time, relative to the uncontrolled scenario. Clearly, this situation is undesirable.

While the absolute fuel saved is proportional to the distance d, the relative fuel savings R^Ω and R are invariant with respect to d. The reason is that the size of both the local and global domains in the (x,t)-plane is proportional to d^2, because both the spatial extent and the duration of the control are linear in d.

§.§ Effect of the Moving Bottleneck's Speed

The speed s of the MB determines how much it impacts the traffic on the highway. Higher values of s make the effect of the MB milder, whereas low values of s make the MB behave more like a FB, thus inducing drastic velocity changes in its vicinity. Because all quantities ρ_1, ρ_2, s_1, s_2, s_b, and s_low depend on s, the fuel consumption measures Y, R, and R^Ω vary with s as well.

Figure <ref> shows a plot of the fuel savings rate Y vs. the MB speed s, for four different initial densities ρ_0 in the free flow regime. In all calculations, the initial distance d between the control vehicle and the highly congested region is 40 km. At a free flow speed of 140 km/hr, this distance is traveled in a little above 15 minutes. In the first case (ρ_0=33 veh/km), shown in panel (a), the MB has no effect on the traffic flow for any choice of s, because ρ_0 ≤ βρ_c, i.e., all vehicles can pass the MB without being held back. Therefore Y = 0. In the other three cases, we obtain optimal MB speeds s^* for which Y is maximal.

Figures <ref> and <ref> show the corresponding plots for the relative fuel savings, R^Ω vs. s and R vs. s, respectively.
The same four choices of initial densities ρ_0 are considered. The distance d is irrelevant for these two quantities. As in Fig. <ref>, the lowest density renders the MB ineffective, therefore R^Ω(s) = 0 = R(s) as well. Note that the graphs in Fig. <ref> possess a kink, because the area of the global domain [z,0] × [t_1,t_4], as a function of s, is not differentiable.

As one can see in Figures <ref>, <ref>, and <ref>, the maximal FC reduction generally occurs at very low MB speeds. In practice, safety concerns will prevent one from operating a MB at such low speeds. Therefore, the results suggest that for the given situation, within reasonable ranges of MB speeds, the functions Y(s), R^Ω(s), and R(s) will generally be strictly decreasing. As a consequence, one would generally want to operate the MB at the slowest possible speed that is deemed safe.

For example, if we disallow the controlled vehicle to drive at a speed below 70% of the speed of the ambient equilibrium traffic flow, we obtain a maximum total fuel saved of Y=1826 Liters/hr, and maximum relative fuel savings of R^Ω=15.82% and R=8.27%, for the initial density ρ_0=48 veh/km, and d=40 km.

§.§ Effect of Parameter β

In the previous sections, we have assumed the natural value β=2/3 (2 of the 3 lanes remain open). In this section, we examine how the results vary if we take a different value for β. For instance, values of β<2/3 could arise if lane-changing in the wake of the MB produces extra “friction” effects.

Figure <ref> shows plots of the fuel savings rate Y vs. β on the interval 0<β<1, for two different densities ρ_0, and for four different choices of MB speeds s, namely: s=s^* (the optimal MB speed at which Y is maximized, which may be very low), s=84 km/hr, s=98 km/hr, and s=112 km/hr. The MB works better (i.e., provides a larger FC saving) for smaller values of β. The reason is that the smaller β, the larger the potential impact of the MB on the traffic state. Note that the red curves corresponding to s=s^* are affected by β in two ways: first, Y directly depends on β; and second, Y depends on s^* and s^* depends on β. That last dependence s^*(β) can be complicated; in fact, for the considered fuel consumption model, it may possess segments where s^*=0. This segment is visible in panel (b) between β=0.33 and β=0.44.

The plots in Fig. <ref>, panel (a) show that, while the friction-less choice β=2/3, considered above, did not lead to any effects due to the MB (for the given low density ρ_0), lower values of β render the MB effective. Panel (b) shows that already some mild friction effects could significantly increase the effectiveness of the MB control. For instance, when ρ_0=45 veh/km, at s=112 km/hr, a reduction of β from 2/3 to 1/2 roughly doubles the total amount of fuel saved.

§ FURTHER BENEFITS OF MOVING BOTTLENECK CONTROL

The investigations above have used the total fuel consumption of the vehicles on the road to quantify the benefit of the traffic control via a MB. The key reason for doing so is that there are precise data available for the vehicles' fuel consumption rate as a function of their velocity (see <ref>). However, in reality there are greater benefits to the proposed controls than captured by the reduction in consumed fuel. Here, we outline some of them.

The effectiveness of the MB control stems from the fact that for some vehicles, it replaces the rapid transition from high vehicle velocities to very slow traffic flow (uncontrolled case) by two less severe velocity transitions.
The inserted middle state (of density ρ_1) results in less fuel wasted due to high speed air drag.

In addition, the presence of the middle state also reduces air pollution. A direct reduction in vehicle emissions results from the reduction in consumed fuel. In addition, emissions due to acceleration and deceleration will be reduced as well. The LWR model considered in this paper makes the simplifying assumption that vehicles reach their equilibrium velocities instantaneously and always maintain them precisely. However, in real life vehicles will accelerate and decelerate; and in particular in the highly congested state ρ_b, stop-and-go traffic tends to arise <cit.>, for which the LWR model captures only the effective average bulk flow properties correctly. The data in <cit.> suggests that fuel consumption, and particularly emission rates of HC, CO, and NO_x, are much higher when vehicles are accelerating, compared to when they are traveling at constant velocities. This implies that the MB, by reducing the length of the traffic jam, will reduce the amount of harmful emissions that contribute to air pollution.

Moreover, traffic jams (the ρ_b state) are associated with severely elevated local air pollution, because many vehicles are localized to a small area, many of which may be idling <cit.>. A quantitative link between traffic congestion, reduced air quality (direct measurements of PM_2.5, PM_10, and NO_x), and health related problems and respiratory symptoms has been provided in <cit.>.

In the same fashion, further adverse effects of unsteady traffic flow in the high density state ρ_b, such as wear and tear (brakes), or the risk of vehicle collisions, are mitigated by the proposed MB control as well. It can therefore be stated that the benefits of the traffic control strategies developed in this paper go far beyond a mere reduction in fuel consumption.

§ CONCLUSIONS

In this paper, we have combined the theory of moving bottlenecks (MBs) and vehicular fuel consumption to suggest a new methodology of traffic control that reduces the overall fuel consumption on highways in certain situations. While the proposed methodology (creating controlled MBs) can in principle be implemented via human-controlled vehicles (e.g., police cars), it carries particular promise in the context of autonomous vehicles (AVs) that will be in the traffic stream in a few years. The proposed ideas work with a single AV, so they are amenable already at extremely low AV penetration rates.

Also, we focused in this paper only on fuel consumption to measure the impact of the MB. We found that to get positive fuel savings, the amount of vehicles traveling at very high speeds must be reduced, where air drag is the main factor for high fuel consumption. It is important to note that, in the presence of a FB downstream, slowing down fast vehicles will not delay them, but rather only change their speed profile in time. Another positive effect the MB will have in reality is the reduction of air pollution, see the discussion in <ref>.

Finally, the approach of reducing FC via MBs has been studied quantitatively, using realistic FDs and FC curves. Various FC reduction curves have been provided, suggesting optimal values for the speed of the MB that maximize the reduction in the overall FC. The actual FC reduction achieved by the MB varies strongly with the traffic situation. Considering certain safety constraints, FC reduction rates of more than 8% can be achieved when considering the full jam produced by a traffic incident.
Likewise, when restricting the FC reduction only to the parts of the jam that can be affected by the MB, a FC reduction of about 16% is possible. Also, considering a realistic distance d, we obtained fuel savings of about 1800 Liters/hr. Due to the nature of the traffic control, the reductions in FC come at very small cost.
http://arxiv.org/abs/1702.07995v1
{ "authors": [ "Rabie A. Ramadan", "Benjamin Seibold" ], "categories": [ "physics.soc-ph", "35L65, 35Q91, 91B74" ], "primary_category": "physics.soc-ph", "published": "20170226075651", "title": "Traffic Flow Control and Fuel Consumption Reduction via Moving Bottlenecks" }
http://arxiv.org/abs/1702.08455v1
{ "authors": [ "A. Tawfik", "E. Abou El Dahab" ], "categories": [ "gr-qc" ], "primary_category": "gr-qc", "published": "20170227174150", "title": "FLRW Cosmology with Horava-Lifshitz Gravity: Impacts of Equations of State" }
http://arxiv.org/abs/1702.08195v1
{ "authors": [ "Daniel Boer", "Piet J. Mulders", "Jian Zhou", "Ya-jin Zhou" ], "categories": [ "hep-ph", "hep-ex", "nucl-ex" ], "primary_category": "hep-ph", "published": "20170227090356", "title": "Suppression of maximal linear gluon polarization in angular asymmetries" }
Probing the Quantum States of a Single Atom Transistor at Microwave Frequencies

Michelle Y. Simmons^1

^1School of Physics and Centre of Excellence for Quantum Computation and Communication Technology, UNSW Australia, Sydney, New South Wales 2052, Australia

g.tettamanzi@unsw.edu.au

The ability to apply GHz frequencies to control the quantum state of a single P atom is an essential requirement for the fast gate pulsing needed for qubit control in donor based silicon quantum computation. Here we demonstrate this with nanosecond accuracy in an all epitaxial single atom transistor by applying excitation signals at frequencies up to ≈ 13 GHz to heavily phosphorus doped silicon leads. These measurements allow the differentiation between the excited states of the single atom and the density of states in the one dimensional leads. Our pulse spectroscopy experiments confirm the presence of an excited state at an energy ≈ 9 meV, consistent with the first excited state of a single P donor in silicon. The relaxation rate of this first excited state to the ground state is estimated to be larger than 2.5 GHz, consistent with theoretical predictions. These results represent a systematic investigation of how an atomically precise single atom transistor device behaves under rf excitations.

Advances in Si device fabrication technology over the past decade have driven the scale of transistors down to the atomic level. The ultimate limit of this scaling is to fabricate a transistor with just one single dopant atom as the active component of the device, and this has been realised using scanning tunnelling microscope (STM) lithography. <cit.> The spin states of individual P donor electrons and nuclei have extremely long coherence times when incorporated into a crystal composed of isotopically purified ^28Si, <cit.> making them excellent candidates for quantum information processing applications. <cit.> STM lithography offers the potential to scale up such qubits by providing a means to position individual P atoms in a Si lattice, and align them with sub-nanometer precision to monolayer doped control electrodes. This technique has already demonstrated double <cit.> and triple <cit.> quantum dot devices, controllable exchange interactions between electrons, <cit.> and the ability to initialise and read out the spin states of single electrons bound to the donor with extremely high fidelity. <cit.> Most recently, these monolayer doped gates have been shown to be immune to background charge fluctuations, making them excellent interconnects for a silicon based quantum computer. <cit.>

Besides the ability to create devices with atomic precision, another requirement for quantum information processing and high-speed logic applications is the ability to control the quantum states of the donor electrons at sub-nanosecond timescales. Control signals in the GHz regime are desirable for dispersive readout <cit.> and for controlling exchange interactions for non-adiabatic gate operations <cit.>. Indeed, a recently proposed scheme for implementing the surface-code error correction protocol in silicon relies on the ability to propagate signals through such devices with sub-nanosecond timing precision. <cit.> Recent impurity-based quantum charge pump devices have been shown to be robust against pumping errors when operated at GHz frequencies. <cit.>
However, to date these experiments have been performed on devices containing random ion implanted impurities. <cit.> STM fabrication capabilities allow high-precision (≲ 1 nm) positioning of the dopant <cit.> and, when combined with high-speed control of quantum states, will provide devices for quantum metrology. <cit.>

In this paper we investigate the propagation of high-frequency signals in the monolayer-doped leads used in atomically precise devices. Previous results have demonstrated the ability to apply radio frequency (≈ 300 MHz) transmission using dispersive measurements for manipulation of the quantum states. <cit.> Here we present a systematic study of the propagation of high frequency signals in atomically precise devices. In this work we demonstrate high frequency capacitive coupling (up to ∼ 13 GHz) to the states of a single-atom transistor <cit.> fabricated via scanning tunnelling microscope lithography (see Fig. <ref>a)), important for the implementation of quantum information processing <cit.> and quantum metrology. <cit.> We report transient spectroscopy experiments <cit.> that confirm the existence of the excited state of the P donor located at an energy of ≈ 9 meV ± 1 meV, and we extract bounds from 2.5 GHz to 162 GHz for the relaxation rate from the first excited state to the ground state, Γ_ES, in good agreement with previous experiments <cit.> and theoretical estimations. <cit.> It is important to note that such a large range in the extracted value of Γ_ES can be linked to the strong tunnel coupling of the state to the source/drain leads in this particular device, making the experiments needed for a more quantitative result infeasible. <cit.> However, in the long term this coupling can be controlled by the geometry of the tunnel junctions, which can be engineered with sub-nm precision during fabrication. <cit.>

§ RESULTS AND DISCUSSION

In contrast to surface gate-defined quantum dot devices, which typically make use of macroscopic metal electrodes to propagate high-frequency signals, atomic precision devices rely on electrodes formed using highly phosphorus doped silicon (∼ 2.5 × 10^14 cm^-2), where the phosphorus dopants form a monatomic layer within the Si crystal, patterned in the same lithographic step as the single donor atom (see Fig. <ref> a)). Within the monolayer of dopants the average separation of the donors is ≲ 1 nm, giving rise to a highly disordered two-dimensional electron gas. Disorder scattering in these degenerately doped leads gives rise to a resistance of hundreds of Ohms per square, comparable to that found in silicon quantum dots <cit.> but one order of magnitude higher than the values observed in conventional transistors. <cit.> However, another very important difference is that the self-capacitance of the atomically thin monolayer wires is negligible, with the cross capacitances to the other leads being quite small, estimated to be of the order of an aF <cit.>. As a consequence, very little current (≈ nA) is required to carry a high-frequency voltage signal along these wires, compared to the tens of nA necessary for quantum dots. <cit.>
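A back-of-the-envelope estimate illustrates why such leads can carry microwave signals at all: taking the ≈ 36 kΩ two-terminal lead resistance quoted in the methods section and a cross capacitance of order an aF, the RC-limited bandwidth lies orders of magnitude above the ≈ 13 GHz used here. This sketch is indicative only; it ignores distributed-line effects and the impedance mismatch that, as discussed in the methods section, sets the real frequency limit:

    import math

    R_lead = 36e3     # Ohm, two-terminal lead resistance (methods section)
    C_lead = 1e-18    # F, cross capacitance of order an aF (indicative value)

    f_3dB = 1.0 / (2.0 * math.pi * R_lead * C_lead)
    print(f"RC-limited bandwidth: {f_3dB / 1e12:.1f} THz")   # ~ 4 THz >> 13 GHz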
Fig. <ref> shows in a) an STM image of the device and in b) a schematic of the measurement circuit used, illustrating how both dc and rf signals can be applied to gate 1 (G1) and to gate 2 (G2) via bias-tees. The pink areas in Fig. <ref> a) show the highly P doped monolayer regions (see also the methods section), comprising tunnel coupled source/drain (S/D) leads and capacitively coupled gates (G1/G2) surrounding a single phosphorus atom. Several step edges separating the individual atomic planes are clearly visible in the STM image. To test the frequency response of the monolayer doped gates, the D^+ to D^0 current peak, related to current flow through the isolated P atom, <cit.> can be capacitively addressed by the two gates (G1 and G2), allowing an independent rf signal to be added to each of the two gates and the device to be studied in both the dc and the rf domains. The use of rf signals is particularly attractive for these atomic-scale devices, as the very narrow (≲ 5 nm) quasi-1D leads used to address the donor make it difficult, using simple dc bias spectroscopy, to distinguish the current signatures related to the excited states of the donor from the features related to the density of states (DOS). <cit.> Later we will show how we apply transient current spectroscopy, as described in Refs. <cit.>, to clarify some of the transport mechanisms that can arise throughout the excited state spectrum of a single atom transistor.

In Fig. <ref> c) a schematic of the excitation spectrum of the D^0 state of a P donor in Si <cit.> is shown, highlighting the 1s(A_1), 1s(T_2) and 1s(E) valley states and the 2p_0 and 2p_+/- orbital states of the single donor. In Fig. <ref> we observe the evolution of the current peak related to the ground state (GS) of the D^0 state as a function of the power of the sinusoidal rf signal added to the dc voltage of gate 1. The possibility of capacitively addressing this D^0 GS is confirmed for high frequencies up to ≈ 13 GHz, where, as expected, when an rf signal with sufficient power is in use, the position of the D^0 current peak splits in two, with the splitting being proportional to the square root of the power of the provided excitation. This doubling of the current peak is observed over more than 4 orders of magnitude in frequency (i.e. from 1 MHz to ∼ 13 GHz). In Fig. <ref>d) we show a schematic describing how the doubling appears at different powers and the underlying mechanisms causing it. When the rf signal is applied to one of the two gates (G1 or G2), during each rf cycle the GS can occupy a range of positions, represented by the green/red regions in the schematic of Fig. <ref>d), where the green and red regions refer to the rate of voltage change at which the donor GS crosses the bias window, which depends on the timing of the sine wave (green = low rate of change of the sine; red = high rate of change of the sine). To clarify, at any point in time of the sine period the current is proportional to the portion of integrated time that the state spends within the bias window. Hence, if the variation in time of the position of the state is minimal (i.e. d[sin(ω t)]/dt|_ω t = ± 90^o ≈ 0), as in the green regions of Fig. <ref>d), it is possible for electrons to tunnel resonantly between the source and the drain via the state, and a current can be observed. However, if this variation in time is maximal (i.e. d[sin(ω t)]/dt|_ω t = 0^o ≫ 1), as in the red regions, only negligible current can be observed.
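The square-root dependence of the splitting on rf power follows directly from the amplitude of the sine at the gate. A minimal sketch (assuming a matched 50 Ω line; the attenuation budget is an assumed example, while the lever arm α ≈ 0.1 is the value quoted in the methods section):

    import math

    Z0, alpha = 50.0, 0.1   # line impedance [Ohm]; gate lever arm (methods)

    def peak_splitting(P_dBm, attenuation_dB=6.0):
        """Gate-voltage splitting [V] of the resonance: the two current
        branches sit at V_res -/+ V_amp, so the splitting is 2 * V_amp."""
        P_watt = 1e-3 * 10 ** ((P_dBm - attenuation_dB) / 10.0)
        V_amp = math.sqrt(2.0 * Z0 * P_watt)   # sine amplitude at the gate
        return 2.0 * V_amp                     # proportional to sqrt(power)

    # the energy window swept across the donor level is alpha * e * V_amp,
    # hence it also grows with the square root of the applied power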
In Fig. <ref> we now turn to the impact of the rf on the excited states of the donor atom. Fig. <ref> a) shows the excited state spectrum at the D^+ to D^0 transition with no rf signal applied, consistent with previous measurements of this device. <cit.> In this figure the dc charge stability diagram focuses on the position of the first excited state, 1s(T_2). Since two gates are available and the device is highly symmetric, the donor states can be capacitively addressed with both G1 and G2. As a consequence, we can address the states in two different regimes: either G1 and G2 are tied together and varied simultaneously, or G1 is fixed and G2 is varied. By addressing the donor in these two different regimes we observe the same spectrum as in the original measurements, <cit.> where, as expected, <cit.> the position of each level is insensitive to the changes in electric field related to the different measurement configurations, see Fig. <ref>a) and Fig. <ref>c). Obtaining the same results in different addressing regimes is important, as it demonstrates reproducibility under different electric field conditions. In Fig. <ref> b) we present the same spectrum but with an rf excitation of ν = 12.785 GHz applied to G1. We observe the same doubling of the current signature as in Fig. <ref>c), but now for the excited state spectrum, e.g. 1s(T_2). This first excited state is located at 10 meV ± 2 meV, consistent with the previously measured bulk value for the first excited state of a single P donor in silicon, <cit.> as shown in Fig. <ref>c). Likewise, in Fig. <ref> c) we observe the same effect but now using an rf excitation at ν = 10 GHz applied to G1 and with both gates addressed in dc, as in Fig. <ref>. These results are similar to those discussed in Fig. <ref>d); however, this time the capacitive coupling is demonstrated for the 1s(T_2) level of the excited state spectrum and shows robustness to the electric field across the donor.

Quantum information and quantum metrology applications require precise and independent rf control of different gates, <cit.> as many quantum logic gate operations include fast manipulation of electron states. These operations require absolute synchronisation in phase (time) between the rf signals individually applied to different gates, and of their relative coupling to the quantum states. To ascertain if this is possible in our device, in Fig. <ref> we present results obtained by applying sinusoidal rf excitations of 250 MHz to the bias-tees of both G1 and G2. Here, the provided rf excitations are of equal amplitude, but with a varying difference in the absolute phase between the two signals. Hence, Fig. <ref> ultimately allows us to quantify the level of synchronisation in time between the capacitive coupling of G1 to the GS and that of G2 to the GS. The result of these measurements confirms that, within the limit of precision of the source (≈ 10 ps, see methods section), a very similar capacitive coupling between each gate and the donor state <cit.> is in place and is preserved in the rf regime. These results show that, by precision STM patterning, it is possible to control the device symmetry and, as a result, to observe accurate nanosecond synchronisation between different gates at frequencies up to 0.25 GHz.
The results presented so far are of relevance to the field of quantum computation, as they demonstrate the control of energy states at f ≳ 10 GHz, i.e. the high frequencies required for several quantum computer proposals, which require synchronous sub-ns pulses to be applied to quantum states <cit.>. Precision transistors can also be used for single-electron transfer applications, such as those necessary for quantum metrology, <cit.> where independent and precise control in time of more than one gate is needed.

In the next section we show how, using excited state spectroscopy at ν = 50 MHz, <cit.> we can distinguish the excited state spectrum of the donor electron from the 1D confinement-related DOS of the quasi one dimensional leads. <cit.> As in the previous experiments, when we apply a square wave signal to one of the gates addressing the state, we observe a characteristic V-shape of the current as a function of increasing pulse voltage (see Fig. <ref>, where V_pulse represents the voltage amplitude provided to the bias-tee). The V-shape of the current represents the doubling of the ground state peak when square pulses are applied to G1, and is observed both for positive (Fig. <ref>a) and for negative (Fig. <ref>b) source bias voltages. This process is schematically described in Fig. <ref>f) for negative biases. In Fig. <ref> a) and b) the left branch shows the current where the ground state is pulsed from far above the bias window, while the right branch represents the dc ground state signature, which is shifted by the introduction of the pulse. There is an additional feature, labelled "#", observed when a negative bias is applied to the source, as in Fig. <ref> b), which we attribute to the 1^st excited state of the donor electron, as explained in the next section. It is worthwhile to remember that the DOS in the one dimensional leads cannot be associated with this additional feature, because the DOS signature is not V_Pulse-dependent but only S/D bias-dependent. <cit.> Hence, in these experiments we can address both the excited and ground state spectrum at low bias, such that pulse spectroscopy allows us to distinguish transport via the excited state from transport via the DOS in the leads in a way not possible via dc spectroscopy. <cit.> The Coulomb diamonds and the doubling observed in Fig. <ref> a) and b) allow a direct conversion between gate voltage and energy. From the position of the red dot in Fig. <ref>b) at V_Pulse = 120 mV ± 10 mV, and using 0.075 for the final correction factor of the applied power (see methods section), we can determine an excited state energy of 9 meV ± 1 meV.

This pulse-estimated value for the 1s(T_2) excited state energy lies close to the one extracted from the dc data in Fig. <ref>c), ≈ 10 meV ± 2 meV (black arrow and black dashed lines); see also the red ellipses and red dashed lines in Fig. <ref>b) and Fig. <ref>c). The position of the other visible excited state peak (1s(E), white arrow and white dashed line around 13.5 meV) is also very close to the expected bulk values, i.e. 11.7 meV, and to previous estimations made of this device in the dc mode. <cit.>

It is important to understand why the overall dc excited state spectrum is more visible for negative bias (e.g. see Fig. <ref> c)), which indicates that the transparencies of the source/drain to 1^st excited state barriers (Γ_Se/Γ_De) are asymmetric, with the latter being more transparent. <cit.> The asymmetry in the tunnel barriers (i.e. Γ_Se/Γ_De ≪ 1) can be better understood by looking at Fig. <ref>g), where the negative bias regime is schematically illustrated.
This figure shows that if the electrons moving from source to drain via the 1^st excited state first encounter a slow barrier, Γ_Se, and then a fast one, Γ_De, the dc excited state signature will be more visible than in the opposite case of positive bias, where an electron first encounters a fast barrier and then a slow one. In the latter case electrons are most likely to relax to the ground state before tunnelling through the slow barrier, and the excited state signature will be less visible. Furthermore, the same asymmetry applies to the pulsing experiments: at negative bias, and for relaxation from the 1s(T_2) excited state to the ground state, Γ_ES, that is sufficiently slow compared with Γ_De, we see the excited state line once the pulse amplitude exceeds the ES energy (black dashed line, pale red dot and the # symbol in Fig. <ref>b)). This is because the asymmetry allows a better visibility of the excited state 1s(T_2), but this time at low bias and without the presence of the DOS signature complicating the picture. It can easily be seen <cit.> that the same asymmetry observed in the tunnel barriers Γ_Se/Γ_De also holds for Γ_S/Γ_D, with Γ_S and Γ_D being the source to ground state and the drain to ground state barriers, respectively, where the following two inequalities can also be obtained: <cit.>

Γ_Se ≳ Γ_S ,
Γ_De ≳ Γ_D ,

where these two inequalities are due to the typically larger spatial extent of the excited state wave functions compared to the ground state ones.[1]

[1] The estimation of these rates comes from the assumption that, in this device, the transport at low temperatures (≈ 100 mK) is in the lifetime broadening regime, which allows a first set of indicative values, Γ_S ≈ 150 MHz and Γ_D ≈ 164.5 GHz, to be extracted; see also Ref. <cit.>.

The values of the two barriers Γ_S and Γ_D have already been quantified via simple modelling[1] to be Γ_S ≅ 150 MHz and Γ_D ≅ 164.5 GHz, confirming the expected asymmetry of the barriers (Γ_S/Γ_D ≈ 10^-3), not unusual for these systems. <cit.> Since Eq. <ref> holds, it follows that Γ_ES ≪ Γ_D. As a consequence we can obtain bounds for Γ_ES from the following points:

* The rise time from 10 % to 90 % of the maximum amplitude <cit.> of the used pulsed signal is 90 ps (11 GHz), hence the pulse brings the excited state into resonance within this 11 GHz range of frequencies. This tells us that Γ_S is < 11 GHz, in agreement with the discussion above.

* The amplitude of the excited state signal in Fig. <ref>b) is ≈ 4 pA, hence it is possible to estimate that ≈ 50 % of the electrons are loaded via the excited state during each individual pulse (a one-line check of this number is given in the sketch after this list). Also, the edge of the square pulse <cit.> can never be as sharp as in an ideal case, hence this indicates that Γ_Se cannot be much faster than Γ_S, in agreement with Eq. <ref>.

* If a positive bias is applied to the device, as in Fig. <ref>a), the electrons first encounter the fast barrier and then the slow one, Γ_Se. As no extra signal can be observed in this regime, this indicates that the electrons always relax to the ground state before being able to tunnel to the source, leading to the conclusion that Γ_Se ≪ Γ_ES, in agreement with recent theoretical estimations. <cit.>

The set of observations just discussed, together with Eq. <ref> and Eq. <ref>, allows us to determine approximate bounds for Γ_ES as in the following inequalities: Γ_D ≫ Γ_ES ≫ Γ_S, hence 164.5 GHz ≫ Γ_ES ≫ 150 MHz.
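The ≈ 50 % loading fraction quoted in the second point follows from a one-line estimate, sketched below under the assumption that at most one electron is transferred per pulse cycle:

    e = 1.602e-19     # C, elementary charge
    f_pulse = 50e6    # Hz, repetition rate of the square-wave excitation
    I_es = 4e-12      # A, amplitude of the excited-state current signal

    P_load = I_es / (e * f_pulse)   # electrons loaded via the ES per pulse
    print(f"ES loading probability per pulse: {P_load:.2f}")   # ~ 0.50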
To test this hypothesis further, in Fig. <ref>d) traces are taken for a fixed V_Pulse = 160 mV (as in the red dashed line in Fig. <ref>b)) across the excited state signal. Here we can see that, by adding different low pass filters to the pulse line (at room temperature, one at a time), we can change the pulse rise time and observe whether the extra signal related to the excited state can be attenuated. In fact, by increasing the rise time of the pulse to 400 ps (i.e. by using a 2.5 GHz low pass filter), the extra signal can be completely suppressed. As shown schematically in Fig. <ref>g) and in Ref. <cit.>, the fast rise time of the pulse is a fundamental requirement for the observation of resonant tunnelling via the excited state. If the rise of the pulse edge is too slow compared to Γ_ES and Γ_S, the electrons cannot resonantly tunnel via the excited state but instead always have a higher chance of first tunnelling to the ground state, cancelling the possibility of observing the extra current signature.

Here we argue that the use of the filters and the reduction of the rise time to 400 ps ultimately favours tunnelling via the ground state rather than via the excited state. This allows us to give a better estimation of the value of Γ_S ≈ 2.5 GHz, since only when the rise time and Γ_S have similar values can the extra signal related to the resonant tunnelling via the excited state be suppressed. Note that this value of Γ_S is higher than the value extracted from dc transport,[1] but still realistic. This correction to the estimation of Γ_S also leads to a slightly improved estimation of Γ_D ≅ 162 GHz,[1] while still confirming the asymmetry between the two barrier rates. The use of filters described above and schematically drawn in Fig. <ref>h) can only provide a rough estimate for Γ_S, since it is not easy to determine the final influence that the filter has on the shape of the pulse; <cit.> however, it gives a better indication of the bounds of Γ_ES. In fact, this discussion suggests that a better defined range for the value of Γ_ES is 162 GHz ≫ Γ_ES ≫ 2.5 GHz, which is compatible with theoretical predictions and with experimental observations. <cit.>

We have shown how to extract limits for the value of the relaxation rate of the first excited state of an isolated P donor. As these quantities are traditionally difficult to measure <cit.> or to estimate theoretically, <cit.> this is a relevant result for the fields of Si quantum information and Si quantum metrology. In these planar doped devices the barriers Γ_S and Γ_Se are tuneable only during fabrication, allowing us to control the tunnel rates by an order of magnitude with precision lithography using current techniques, with future experiments aimed at improving this further. <cit.> This non-tunability of the barriers during experiments represents an ultimate limit to the pulsing frequency that can be used. Hence, the higher pulse frequencies, of the same order of magnitude as the relaxation rates (≈ 10 GHz), necessary to obtain a quantitative value <cit.> of Γ_ES as in Refs. <cit.>, are not accessible. However, the regime explored in these experiments demonstrates the potential of the fast pulsing technique with all epitaxial monolayer doped gates. The discussion contained in this last section also explains why no excited state substructure can be observed in Fig. <ref>, as the use of a sinusoidal excitation does not provide the appropriate conditions (as in Fig. <ref>g)) for the electrons to resonantly tunnel via excited states when the S/D bias is small.
<ref>g)) for the electrons to resonantly tunnel via excited states when the S/D bias is small.§ CONCLUSIONS In conclusion, in this work we demonstrated fast rf control of the excited state spectrum of a P atom in a single atom transistor using all epitaxial monolayer doped gates. This control was performed at GHz speed and with the nanosecond synchronisation needed to execute quantum gate operations in several silicon based quantum computer proposals. Pulsed spectroscopy measurements with selective transport via excited states allowed us to differentiate between the excited states of the single atom and the density of states in the one dimensional leads in a manner not possible via dc spectroscopy. From these measurements we demonstrated a possible range of values for the relaxation times from the first excited state to the ground state. Such excited state relaxation rate information will help in assessing how realistic the use of the valley-orbital degree of freedom in silicon is for quantum logic and quantum metrology applications. <cit.> This work shows that with precision single atom fabrication technologies with epitaxial monolayer doped gates we can apply voltages at up to GHz frequencies to control the spin states of the qubits. With the recent demonstration of the suppression of charge noise in these systems, <cit.> this bodes well for precision donor based qubits in silicon. § ACKNOWLEDGMENTS G. C. Tettamanzi acknowledges financial support from the ARC-Discovery Early Career Research Award (ARC-DECRA) scheme, project title: Single Atom Based Quantum Metrology and ID: DE120100702, for the development of the setup used in these experiments. M.Y.S. acknowledges a Laureate Fellowship (FL130100171).§ METHODS AND EXPERIMENTAL Fabrication of the Single Atom Transistor Device. The device is fabricated on a low-doped (1-10 Ω cm) silicon wafer prepared with a Si(100) 2x1 surface reconstruction using a flash anneal to 1150 ^∘C, before it is passivated by atomic hydrogen. Controlled voltage and current pulses on the STM tip locally desorb this hydrogen layer to define the device features with atomic precision, leaving behind chemically active unpaired Si bonds. PH_3 gas introduced into the chamber binds to the surface in the regions where the hydrogen was desorbed. An anneal to 350 ^∘C causes the P atoms to incorporate into the top layer of the Si crystal. The P doped features are then encapsulated by low temperature (≲ 250 ^∘C) solid source Si molecular beam epitaxy. The all epitaxial doped leads are electrically contacted by first using reactive ion etching to etch holes in the encapsulation down to the doped layer; then the holes are filled by evaporation of Al to make ohmic contact to the P doped layer. The P doped leads in this device are ∼ 1000 nm long and widen from ≈ 5 nm in the central part of the device to 800 nm in the contact region, with an estimated 36 kΩ of two-terminal resistance along the length of the leads. <cit.> Low Temperature and rf Measurements. The device is mounted on the cold finger of the ^4He pot of an Oxford Variable Temperature Insert (VTI) operated at 1.2 K. A low noise battery operated measurement setup was used to measure the source/drain current and to apply the dc voltages. To apply the sinusoidal rf input to the gates via the bias-tees, an Agilent E8257C source (operating up to 40 GHz) and a two-channel Agilent 81180A source were used.
The inter-channel time skew control of the Agilent 81180A source goes from -3 ns to +3 ns with 10-ps precision and provides the best possible control in time/phase between the two different rf signals (10 ps is equivalent to 0.9 degrees at the ν = 250 MHz used in our experiments). rf signals can be transferred to the bias-tees via high-performance coaxial rf lines. These lines have a silver-plated copper-nickel inner conductor and a copper-nickel outer conductor (i.e., attenuation ranging from sub dB/m to a few dB/m at 20 GHz). SK coaxial rf connectors are used in all these rf lines, and 6 dB attenuators are placed as close as possible to the bonding pads (≲ 1 cm). The bias-tees are built with typical resistance and capacitance values of R = 1 MΩ and C = 1 nF, respectively. These values of R and C lead to a characteristic RC time of around 1 ms and a high-pass cutoff frequency of around 0.1 kHz (f_c = 1/(2π RC) ≈ 160 Hz). These bias-tees have also been tested independently with a Keysight N9918A FieldFox handheld microwave analyser and have been shown to operate with no resonances and with the expected linear increase of the losses up to 26.5 GHz (i.e., the limit of our analyser). Furthermore, a correction factor ≅ 0.75, estimated via the Δ V_pulsed/Δ V_Gs ≈ 150/200 of the V-shape in Fig. <ref>a) and in Fig. <ref>b), is used to take into account the attenuation of the signal at the bias-tee level (for ν = 50 MHz), while the gate lever arm has already been estimated to be ≈ 0.1, <cit.> making the final correction factor of the applied power equal to 0.075. Indeed, experiments such as those shown in Fig. <ref> have been possible up to ≲ 20 GHz; neither the rf source nor the attenuation in the rf lines is a limitation to these experiments for ν up to 40 GHz, and the bias-tee attenuation is not a limitation for ν up to 26.5 GHz. The limitation on the maximum frequency of operation of our device is most likely due to imperfect 50 Ω matching at the interface between the Al/Si bonding wire and the bonding pad of the device (used to connect the device to the external setup). The pulsing experiments have been performed with an HP 8131 and with an Agilent 81180A AWG in combination with a fast-switching optical isolator from Delft University (http://qtwork.tudelft.nl/ schouten/ivvi/doc-mod/docs5d.htm). Overall, the 10 % to 90 % rise time was estimated with a fast oscilloscope to be ≅ 90 ps for the pulses used in Fig. <ref>; hence we believe that the AWG is not a limiting factor in the excited state spectroscopy experiments. § ADDENDUM In our paper we provided an upper bound for the relaxation from the 1s (T_2) excited state to the ground state, Γ_ES. This was determined from our estimation that the drain to ground state fast tunnel barrier, Γ_D, was equal to 162 GHz. We have now realised that in Figure 5g) of the paper, where the transport mechanisms that allowed the observation of the excited state signature are schematically explained, we neglected to consider the process where an electron tunnels from the drain to the ground state; see the green arrow in Figure <ref> below. This neglected process can lead to blockade of the loading of the excited state, which then no longer contributes to a net current through the Single Atom Transistor. The fact that we nevertheless observe the excited state resonance in the experiment suggests that the rate for this loading (i.e., Γ_D) is not greater than the source to excited state rate (Γ_Se) as previously estimated, and is much smaller than 162 GHz.
It does, however, indicate that these two rates are of the same order of magnitude, and that the competition between the two processes contributes to the observed current (4 pA). This interpretation is a minor adjustment to that given in the paper, only requiring Γ_Se and Γ_D to be of the same order of magnitude while being much smaller than Γ_ES. In this new interpretation of the data, the estimate that the source to ground state slow tunnel barrier, Γ_S, is much smaller than Γ_D is confirmed. The same applies for the lower bound for Γ_ES obtained in the paper and discussed in the abstract. However, it is no longer possible to estimate an upper bound for the value of Γ_ES, which was previously stated as 162 GHz. § REFERENCES Sch136104 Schofield, S. R., Curson, N. J., Simmons, M. Y., Ruess, F. J., Hallam, T., Oberbeck, L., and Clark, R. G. Atomically Precise Placement of Single Dopants in Si. Phys. Rev. Lett., 2003, 91, 136104. Muh2014 Muhonen, J. T., Dehollain, J. P., Laucht, A., Hudson, F. E., Kalra, R., Sekiguchi, T., Itoh, K. M., Jamieson, D. N., McCallum, J. C., Dzurak, A. S. and Morello, A. Storing Quantum Information For 30 Seconds in a Nanoelectronic Device. Nat. Nanotechnol., 2014, 9, 986-991. Tyr2012 Tyryshkin, A. M., Tojo, S., Morton, J. J. L., Riemann, H., Abrosimov, N. V., Becker, P., Pohl, H.-J., Schenkel, T., Thewalt, M. L. W., Itoh, K. M. and Lyon, S. A. Electron Spin Coherence Exceeding Seconds In High-Purity Silicon. Nat. Mater., 2012, 11, 143-147. Ste2012 Steger, M., Saeedi, K., Thewalt, M. L. W., Morton, J. J. L., Riemann, H., Abrosimov, N. V., Becker, P. and Pohl, H.-J. Quantum Information Storage For Over 180 s Using Donor Spins In A ^28Si Semiconductor Vacuum. Science, 2012, 336, 1280-1283. Kane133 Kane, B. E. A Silicon-Based Nuclear Spin Quantum Computer. Nature, 1998, 393, 133. Loss120 Loss, D. and DiVincenzo, D. P. Quantum Computation With Quantum Dots. Phys. Rev. A, 1998, 57, 120. Pla2012 Pla, J. J., Tan, K. Y., Dehollain, J. P., Lim, W. H., Morton, J. J. L., Zwanenburg, F. A., Jamieson, D. N., Dzurak, A. S. and Morello, A. High-Fidelity Readout And Control Of A Nuclear Spin Qubit In Silicon. Nature, 2012, 489, 541-545. Web4001 Weber, B., Mahapatra, S., Watson, T. F. and Simmons, M. Y., Engineering Independent Electrostatic Control of Atomic-Scale (∼ 4 nm) Silicon Double Quantum Dots. Nano Lett., 2012, 12, 4001. Wat1830 Watson, T. F., Weber, B., Miwa, J. A., Mahapatra, S., Heijnen, R. M. P. and Simmons, M. Y. Transport in asymmetrically coupled donor-based silicon triple quantum dots. Nano Lett., 2014, 14, 1830. Web430 Weber, B., Tan, M., Mahapatra, S., Watson, T. F., Ryu, H., Rahman, R., Hollenberg, L. C. L., Klimeck, G. and Simmons, M. Y. Spin Blockade and Exchange in Coulomb-Confined Silicon Double Quantum Dots. Nat. Nanotechnol., 2014, 9, 430. Wat2015 Watson, T. F., Weber, B., House, M. G., Büch, H. and Simmons, M. Y. High-Fidelity Rapid Initialisation and Read-Out of an Electron Spin via The Single Donor D^- Charge State. Phys. Rev. Lett., 2015, 115, 166806. Sha233304 Shamim, S., Mahapatra, S., Polley, C., Simmons, M. Y. and Ghosh, A. Suppression of Low-Frequency Noise in Two-Dimensional Electron Gas at Degenerately Doped Si:P δ Layers. Phys. Rev. B, 2011, 83, 233304. Sha236602 Shamim, S., Mahapatra, S., Scappucci, G., Klesse, W. M., Simmons, M. Y. and Ghosh, A. Spontaneous Breaking of Time-Reversal Symmetry in Strongly Interacting Two-Dimensional Electron Layers in Silicon and Germanium. Phys. Rev. Lett., 2014, 112, 236602. Sha3 Shamim, S., Weber, B., Thompson, D. W., Simmons, M. Y.
and Ghosh, A. Ultralow-Noise Atomic-Scale Structures for Quantum Circuitry in Silicon. Nano Lett., 2016, 16, 5779. Hou15 House, M. G., Kobayashi, T., Weber, B., Hile, S. J., Watson, T. F., van der Heijden, J., Rogge, S. and Simmons, M. Y. Radio Frequency Measurements of Tunnel Couplings and Singlet-Triplet Spin States in Si:P Quantum Dots. Nat. Commun., 2015, 6, 8848. Pet2180 Petta, J. R., Johnson, A. C., Taylor, J. M., Laird, E. A., Yacoby, A., Lukin, M. D., Marcus, C. M., Hanson, M. P. and Gossard, A. C. Coherent Manipulation of Coupled Electron Spins in Semiconductor Quantum Dots. Science, 2005, 309, 2180. Hill15 Hill, C. D., Peretz, E., Hile, S. J., House, M. G., Fuechsle, M., Rogge, S., Simmons, M. Y. and Hollenberg, L. C. L. A Surface Code Quantum Computer in Silicon. Sci. Adv., 2015, 1, e1500707. Tet063036 Tettamanzi, G. C., Wacquez, R. and Rogge, S. Charge Pumping Through a Single Donor Atom. New J. Phys., 2014, 16, 063036. Fuj207 Yamahata, G., Nishiguchi, K. and Fujiwara, A. Gigahertz Single-Trap Electron Pumps in Silicon. Nat. Commun., 2014, 5, 5038. Fue242 Fuechsle, M., Miwa, J. A., Mahapatra, S., Ryu, H., Lee, S., Warschkow, O., Hollenberg, L. C. L., Klimeck, G. and Simmons, M. Y. A Single-Atom Transistor. Nat. Nanotechnol., 2012, 7, 242. Fue2011 Fuechsle, M., Precision Few-Electron Silicon Quantum Dots, 2011, Chapter 9, p. 131. http://handle.unsw.edu.au/1959.4/51332. Ram1297 Ramdas, A. K. and Rodriguez, S. Spectroscopy of the Solid-State Analogs of the Hydrogen Atom: Donors and Acceptors in Semiconductors. Rep. Prog. Phys., 1981, 44, 1297. Fuj081304 Fujisawa, T., Tokura, Y. and Hirayama, Y. Transient Current Spectroscopy of a Quantum Dot in the Coulomb Blockade Regime. Phys. Rev. B, 2001, 63, 081304(R). Volk1753 Volk, C., Neumann, C., Kazarski, S., Fringes, S., Engels, S., Haupt, F., Müller, A. and Stampfer, C. Probing relaxation times in graphene quantum dots. Nat. Commun., 2013, 4, 1753. Zhu093104 Zhukavin, R. Kh., Shastin, V. N., Pavlov, S. G., Hübers, H.-W., Hovenier, J. N., Klaassen, T. O. and van der Meer, A. F. G. Terahertz Gain on Shallow Donor Transitions in Silicon. J. Appl. Phys., 2007, 102, 093104. HubS211 Hübers, H.-W., Pavlov, S. G. and Shastin, V. N. Terahertz Lasers Based on Germanium and Silicon. Semicond. Sci. Technol., 2005, 20, S211. Tah075302 Tahan, C. and Joynt, R. Relaxation of Excited Spin, Orbital, and Valley Qubit States in Ideal Silicon Quantum Dots. Phys. Rev. B, 2014, 89, 075302. Cam2013 Campbell, H., Critical Challenges in Donor Based Quantum Computation, 2013. http://handle.unsw.edu.au/1959.4/52953. Zwa961 Zwanenburg, F. A., Dzurak, A. S., Morello, A., Simmons, M. Y., Hollenberg, L. C. L., Klimeck, G., Rogge, S., Coppersmith, S. N. and Eriksson, M. A., Silicon Quantum Electronics. Rev. Mod. Phys., 2013, 85, 961. Kei7080 Keizer, J. G., McKibbin, S. R., and Simmons, M. Y. The Impact of Dopant Segregation on the Maximum Carrier Density in Si:P Multilayers. ACS Nano, 2015, 9, 7080. Ryu374 Ryu, H., Lee, S., Fuechsle, M., Miwa, J. A., Mahapatra, S., Hollenberg, L. C. L., Simmons, M. Y. and Klimeck, G. A Tight-Binding Study of Single-Atom Transistors. Small, 2015, 11, 374. Mot161304 Möttönen, M., Tan, K. Y., Chan, K. W., Zwanenburg, F. A., Lim, W. H., Escott, C. C., Pirkkalainen, J.-M., Morello, A., Yang, C., van Donkelaar, J. A., Alves, A. D. C., Jamieson, D. N., Hollenberg, L. C. L. and Dzurak, A. S. Probe and Control of the Reservoir Density of States in Single-Electron Devices. Phys. Rev. B, 2010, 81, 161304(R). Rah165314 Rahman, R., Lansbergen, G. P., Park, S.
H., Verduijn, J., Klimeck, G., Rogge, S. and Hollenberg, L. C. L. Orbital Stark Effect and Quantum Confinement Transition of Donors in Silicon. Phys. Rev. B, 2009, 80, 165314. Lan136602 Lansbergen, G. P., Rahman, R., Verduijn, J., Tettamanzi, G. C., Collaert, N., Biesemans, S., Klimeck, G., Hollenberg, L. C. L. and Rogge, S. Lifetime-Enhanced Transport in Silicon due to Spin and Valley Blockade. Phys. Rev. Lett., 2011, 107, 136602. Riw235401 Riwar, R.-P., Roche, B., Jehl, X. and Splettstoesser, J. Readout of Relaxation Rates by Non-Adiabatic Pumping Spectroscopy. Phys. Rev. B, 2016, 93, 235401. Tet046803 Tettamanzi, G. C., Verduijn, J., Lansbergen, G. P., Blaauboer, M., Calderón, M. J., Aguado, R., and Rogge, S. Magnetic-Field Probing of an SU(4) Kondo Resonance in a Single-Atom Transistor. Phys. Rev. Lett., 2012, 108, 046803. Cul126804 Culcer, D., Saraiva, A. L., Koiller, B., Hu, X. and Das Sarma, S. Valley-Based Noise-Resistant Quantum Computation Using Si Quantum Dots. Phys. Rev. Lett., 2012, 108, 126804.
http://arxiv.org/abs/1702.08569v1
{ "authors": [ "Giuseppe Carlo Tettamanzi", "Samuel James Hile", "Matthew Gregory House", "Martin Fuechsle", "Sven Rogge", "Michelle Y. Simmons" ], "categories": [ "cond-mat.mes-hall" ], "primary_category": "cond-mat.mes-hall", "published": "20170227223647", "title": "Probing the Quantum States of a Single Atom Transistor at Microwave Frequencies" }
Present address: Institute of Multidisciplinary Research for Advanced Materials, Tohoku University, 2-1-1 Katahira, Sendai 980-8577, Japan knawa@tagen.tohoku.ac.jp masashi@issp.u-tokyo.ac.jp Present address: Max-Planck Institute for Solid State Research, Stuttgart, 70569 Stuttgart, Germany kyhv@kuchem.kyoto-u.ac.jp ^1Department of Chemistry, Graduate School of Science, Kyoto University, Kyoto 657-8502, Japan ^2Institute for Solid State Physics, The University of Tokyo, Kashiwa, Chiba 277-8581, Japan ^3Laboratoire National des Champs Magnétiques Intenses, LNCMI-CNRS (UPR3228), EMFL, UGA, UPS and INSA, BP 166, 38042 Grenoble Cedex 9, France We report on the dynamics of the spin-1/2 quasi-one-dimensional frustrated magnet LiCuVO_4 measured by nuclear spin relaxation in high magnetic fields 10–34 T, in which the ground state has spin-density-wave order. The spin fluctuations in the paramagnetic phase exhibit striking anisotropy with respect to the magnetic field. The transverse excitation spectrum probed by ^51V nuclei has an excitation gap, which increases with field. On the other hand, the gapless longitudinal fluctuations sensed by ^7Li nuclei grow with lowering temperature, but tend to be suppressed with increasing field. Such anisotropic spin dynamics and its field dependence agree with the theoretical predictions and are ascribed to the formation of bound magnon pairs, a remarkable consequence of the frustration between ferromagnetic nearest-neighbor and antiferromagnetic next-nearest-neighbor interactions. Dynamics of Bound Magnon Pairs in the Quasi-One-Dimensional Frustrated Magnet LiCuVO_4 K. Yoshimura^1 December 30, 2023 ======================================================================================= § INTRODUCTION Frustrated spin systems with competing interactions provide an active playground to explore exotic quantum states such as various types of spin liquids, valence bond solids, or spin nematics <cit.>. A typical example is the spin-1/2 quasi-one-dimensional frustrated Heisenberg magnet with competing ferromagnetic nearest-neighbor interaction J_1 and antiferromagnetic next-nearest-neighbor interaction J_2 <cit.>. Properties of such J_1–J_2 chains in magnetic fields have been extensively studied theoretically, leading to the prediction of novel spin nematic and spin density wave (SDW) phases. A distinct feature of J_1–J_2 chains is that the lowest energy excitation in the fully polarized state just above the saturation is not a single magnon but a bound magnon pair, which is stable for a wide range of α ≡ J_1/J_2 ≥ -2.7 <cit.>. The bound magnon pairs undergo a Bose-Einstein condensation when the field is reduced below saturation, resulting in a spin nematic order that breaks the spin rotation symmetry but preserves the time reversal symmetry. When the field is further reduced, magnon pairs with their increased density exhibit spatial order. This leads to an SDW state, where the longitudinal magnetization has a spatial modulation. At very low fields, however, magnon pairing is not a valid concept and a classical helical spin order is expected to appear.
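For reference, the underlying model in this discussion is the standard spin-1/2 J_1–J_2 chain in a magnetic field, which in a textbook form (our explicit statement; sign and normalization conventions may differ slightly among the cited theoretical works) reads H = J_1 ∑_i 𝐒_i·𝐒_i+1 + J_2 ∑_i 𝐒_i·𝐒_i+2 - gμ_B H ∑_i S_i^z, with J_1 < 0 (ferromagnetic) and J_2 > 0 (antiferromagnetic), so that α = J_1/J_2 is negative for the cases considered below.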
These different phases of J_1–J_2 chains in magnetic fields are expected to show distinct spin dynamics <cit.>. When the bound magnon pairs are formed, an energy gap will appear in the transverse spin excitations (perpendicular to the external field) because such excitations cost energy to unbind the magnon pairs. The longitudinal spin correlation, on the other hand, has a quasi-long-range order for a purely one-dimensional system with a power-law decay. The crossover from SDW to nematic phases is accompanied by a change of the power-law exponent, making the SDW (nematic) correlation less (more) dominant at higher fields. Since nematic correlation cannot be measured directly, it is very important to examine the spin dynamics in a wide range of fields to test these theoretical predictions. The nuclear relaxation rate is one of the best probes for this purpose, as proposed by Sato et al. <cit.>. Several cuprates are known to be experimental realizations of J_1–J_2 chains, among which LiCuVO_4 is the most studied material <cit.>. The crystal structure contains edge-sharing CuO_4 plaquettes, forming spin-1/2 frustrated chains along the b axis <cit.>. An incommensurate helical order was observed below T_N = 2.1 K at zero or low fields <cit.>, while a longitudinal SDW order appears above 7 T <cit.>. The magnetization curve exhibits anomalous linear field variation in a narrow range of fields 41–45 T for H ∥ c immediately below saturation, which was thought to be a signature of the spin nematic phase <cit.>. The origin of this linear variation is still under discussion. High-field NMR experiments performed by Büttgen et al. have revealed that this is not a bulk property but is likely to be caused by defects <cit.>. On the other hand, recent NMR experiments have indicated that the linear variation is present as a bulk property between 42.41 and 43.55 T <cit.>. The reason why the detected magnetization is so different is not clear but is likely due to different defect concentrations. Although recent studies on LiCuVO_4 have developed a better understanding of its static properties, spin dynamics in magnetic fields remains poorly investigated. A drastic suppression of transverse spin fluctuations has been revealed by the NMR experiments upon increasing the field across the helical-to-SDW boundary, supporting the presence of an energy gap <cit.>. In this paper, we report on systematic measurements of the nuclear relaxation rate 1/T_1 of ^7Li and ^51V nuclei in LiCuVO_4 in the paramagnetic state in a wide range of field values 10–34 T, where the ground state has an SDW order. By carefully choosing nuclei and field directions, we were able to detect the transverse and longitudinal spin fluctuations separately. Our results agree with the theoretical predictions for the J_1–J_2 chains, thereby providing microscopic understanding of the anomalous spin dynamics of bound magnon pairs. § EXPERIMENTS The nuclear spin-lattice relaxation rate (1/T_1) was measured for ^7Li and ^51V nuclei on a single crystal of size 1.0 × 1.2 × 0.5 mm^3, grown by a flux method <cit.>. A superconducting magnet was used to obtain magnetic fields up to 16 T, in which either the a or c axis of the crystal was oriented along the field to within 0.3 deg. Higher fields up to 34 T were obtained by a 20 MW resistive magnet at LNCMI Grenoble, where the accuracy of the crystal orientation was within 2 deg. The ordering temperature T_N was determined from the temperature dependence of the ^51V NMR line width to check the sample quality (see Appendix <ref> for the details).
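As a concrete illustration of this line-shape criterion (the moment-ratio method detailed in the Appendix), the following minimal Python sketch — our addition for illustration only, not part of the original analysis — evaluates M_2^2/M_4 for a measured spectrum; the ratio is 1/3 for a single Gaussian line and approaches 1 for a well-separated double-horn pattern:

import numpy as np

def moment_ratio(H, I):
    # Normalize the spectrum so that int I(H) dH = 1
    I = I / np.trapz(I, H)
    M1 = np.trapz(H * I, H)                   # first moment (center of gravity)
    M2 = np.trapz((H - M1)**2 * I, H)         # second central moment
    M4 = np.trapz((H - M1)**4 * I, H)         # fourth central moment
    return M2**2 / M4

# Sanity checks: ~1/3 for a Gaussian, close to 1 for two narrow peaks
H = np.linspace(-5.0, 5.0, 2001)
print(moment_ratio(H, np.exp(-H**2 / 2)))                                # ~0.333
print(moment_ratio(H, np.exp(-50*(H - 3)**2) + np.exp(-50*(H + 3)**2)))  # ~1

The transition temperature is then identified from the temperature of steepest increase of this ratio, as described in the Appendix.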
The inversion recovery method was used to determine 1/T_1. The recovery curve can be fit to an exponential function in the paramagnetic phase. In the ordered phase, however, a stretched exponential function had to be used due to inhomogeneous relaxation. § RESULTS AND DISCUSSIONS The temperature dependencies of 1/T_1 at ^7Li and ^51V nuclei (1/^7T_1 and 1/^51T_1) for various magnetic fields along the a and c directions are shown in Fig. <ref>. They exhibit remarkable variation depending on the nuclei and the direction of the magnetic field. To understand such behavior, we consider the general expression for 1/T_1 <cit.>, 1/T_1^ξ = 1/N∑_𝐪{Γ^⊥_ξ(𝐪) S_⊥ (𝐪, ω) + Γ^∥_ξ(𝐪) S_∥ (𝐪, ω) }, where N is the number of magnetic ions, ξ = a, b or c denotes the field direction, and S_⊥ (𝐪, ω) (S_∥ (𝐪, ω)) is the wave-vector-dependent dynamical spin-correlation function perpendicular (parallel) to the magnetic field at the NMR frequency ω. The coefficients Γ^⊥_ξ(𝐪) and Γ^∥_ξ(𝐪) are defined as <cit.> Γ^⊥_a(𝐪) = γ_N^2/2{ g_bb^2 | A(𝐪)_bb |^2 + g_cc^2 | A(𝐪)_cc |^2 + (g_bb^2 + g_cc^2) | A(𝐪)_bc |^2 }, Γ^∥_a(𝐪) = γ_N^2/2 g_aa^2 (| A(𝐪)_ab |^2 + | A(𝐪)_ac |^2 ), Γ^⊥_c(𝐪) = γ_N^2/2{ g_aa^2 | A(𝐪)_aa |^2 + g_bb^2 | A(𝐪)_bb |^2 + (g_aa^2 + g_bb^2) | A(𝐪)_ab |^2 }, Γ^∥_c(𝐪) = γ_N^2/2 g_cc^2 (| A(𝐪)_ac |^2 + | A(𝐪)_bc |^2 ), where γ_N, g_μν, and A(𝐪)_μν are the gyromagnetic ratio, a μν component of the g tensor, and a Fourier sum of the hyperfine coupling constants, A(𝐪)_μν = ∑_i A(𝐫_i)_μν e^i 𝐪·𝐫_i, respectively. The sum is taken over all Cu sites within a 60-Å distance from the nuclei <cit.>. In the following discussion, we present analyses using data at relatively low temperatures close to T_N. At such low temperatures, it is reasonable to assume that the dominant fluctuations are associated with the ordering wave vector 𝐐_0, S (𝐪, ω) ≃ N δ(𝐪 - 𝐐_0) ⟨ S (𝐪, ω) ⟩, where ⟨⋯⟩ indicates the average over 𝐪. Then, Γ_ξ(𝐪) in Eq. (<ref>) can be replaced by the value at 𝐐_0, leading to the relation 1/T_1^ξ ≃ Γ^⊥_ξ(𝐐_0) ⟨ S_⊥ (𝐪, ω) ⟩ + Γ^∥_ξ(𝐐_0) ⟨ S_∥ (𝐪, ω) ⟩, which is applicable in a limited temperature range close to T_N. Owing to Eq. (<ref>), ⟨ S (𝐪, ω) ⟩ can be roughly estimated from 1/T_1 and Γ_ξ(𝐐_0). The field dependence of Γ_ξ(𝐐_0) is shown in Fig. <ref>. It is calculated by replacing 𝐪 in Eq. (<ref>) by 𝐐_0, which is related to the magnetization ⟨ S_z ⟩ as 𝐐_0 = 2 π(1, 1/2-⟨ S_z ⟩, 0) <cit.> in the SDW phase. Let us first discuss the results for ^51V nuclei with H ∥ a (Fig. <ref>(c)). As shown in Fig. <ref>(c), ^51Γ^∥_a ≡ ^51Γ^∥_a(𝐐_0) = 0 holds independently of the magnetic field. The longitudinal fluctuations are canceled out due to the local symmetry of the V site; magnetic moments along the a direction cannot induce an internal field along the b and c directions since a V nucleus is located in the middle of two ferromagnetically coupled chains <cit.>. Thus, only the transverse fluctuations should contribute to 1/^51T_1^a independently of the magnetic field, as ⟨ S_⊥ (𝐪, ω) ⟩ = (1/^51Γ^⊥_a)(1/^51T_1^a). A remarkable feature is that 1/T_1 decreases steeply with decreasing temperature in the paramagnetic phase, indicating an energy gap in the transverse spin excitations. This result is in sharp contrast to the behavior at a lower field (4 T) reported in Ref. NMR5, where the ground state has a helical spin order and 1/^51T_1^a shows a pronounced peak near T_N due to critical slowing down of the transverse spin fluctuations.
Instead, at higher fields, 1/^51T_1^a shows no anomaly at the transition into the SDW state for fields below 16 T. However, a small peak appears near T_N at the highest field values. This could be caused by longitudinal spin fluctuations ⟨ S_∥ (𝐪, ω) ⟩ if the interchain correlation along the a direction gets shorter at higher fields, as suggested by neutron-scattering experiments <cit.>, which would result in non-zero Γ^∥_a <cit.>. The activated T dependence above T_N can be confirmed from a semi-logarithmic plot of 1/^51T_1^a against 1/T, allowing us to determine the energy gap Δ_a at various fields. The field dependence of Δ_a is determined from an exponential fit, 1/^51T_1^a ∝ exp(-Δ_a/T), as shown in Fig. <ref>(a). The fitting range is selected as the temperature range where Eq. (<ref>) is applicable, from T_N to a few K above T_N. For 1/T_1 measured at 27 and 34 T, the fitting window is slightly shifted to high temperatures to minimize the contribution from ⟨ S_∥ (𝐪, ω) ⟩. Figure <ref>(b) shows the field dependence of Δ_a (red circles), together with Δ_c for 𝐇 ∥ c (blue triangles) determined from the data of 1/^7T_1^c and 1/^51T_1^c as described later. For both directions the energy gap is absent at low fields, where the ground state has helical order, but appears when the SDW correlation becomes dominant and grows with increasing field. However, it tends to saturate at higher fields near the saturation field. The energy gap in the transverse spin excitations is a direct consequence of bound magnon pairs predicted theoretically for the J_1–J_2 chains. This gap does not correspond to a Zeeman energy, since a Zeeman gap would cause gapped longitudinal spin excitations, inconsistent with the gapless longitudinal spin excitations that we discuss below. The observed gap is well explained as a binding energy of magnon pairs. The solid (dashed) line in Fig. <ref>(b) shows the result of the density-matrix renormalization-group (DMRG) calculation for the binding energy of magnon pairs in the J_1–J_2 chains with α ≡ J_1/J_2 = -1.0 and J_2 = 51 K (α = -0.5 and J_2 = 41 K) <cit.>. The value of J_2 is selected so that the saturation field, H_sat = J_2 (4 + 2α - α^2)/[2 (1 + α)] <cit.>, agrees with the experimental value, 41.4 T for H ∥ c <cit.>. The qualitative feature of the field dependence of the gap is well reproduced by the DMRG calculation. The small deviation at low fields should be due to interchain couplings, which destabilize the SDW order. The effect of interchain couplings will be discussed in detail later. The quantitative comparison leads to the conclusion that |α| is slightly smaller than 1 (about 0.8), while the estimation of |α| has been quite controversial in previous studies <cit.>. For instance, Enderle et al. analyzed the spin-wave dispersion obtained by inelastic neutron scattering experiments and determined α = -0.4 (J_2 = 44 K) <cit.>. However, Nishimoto et al. made DMRG calculations and reproduced the dispersion well with α = -1.4 (J_2 = 60 K) <cit.>. The inconsistency is due to non-trivial renormalization of the exchange parameters, which can be easily affected by strong quantum fluctuations enhanced by frustration. In addition, analyses of the magnetic susceptibility give different results: Koo et al. concluded that the negative Weiss temperature θ_W = -(J_1+J_2)/2 strongly indicates |α| < 1 <cit.>, while Sirker indicated α = -2.0 (J_2 = 91 K) from DMRG calculations <cit.>.
The analyses may be sensitive to the fitting temperature range and free parameters such as a temperature-independent term χ_0. Furthermore, density functional theory calculations also give both results of |α| < 1 <cit.> and |α| > 1 <cit.>. In the present paper, the field dependence of Δ supports |α| < 1. Let us now turn to the temperature and field dependence of the longitudinal spin-correlation function ⟨ S_∥ (𝐪, ω) ⟩. This is best represented by 1/T_1 at Li nuclei with the field along the c direction (1/^7T_1^c), since this is the only case that satisfies the condition Γ^∥ ≫ Γ^⊥ (see Fig. <ref>(b)). As shown in Fig. <ref>(b), 1/^7T_1^c exhibits a pronounced peak near T_N, indicating critical divergence of the low-frequency component of gapless longitudinal spin fluctuations associated with the SDW order. This is in sharp contrast to the gapped behavior of the transverse fluctuations. Theories have indeed predicted such anisotropic spin fluctuations for the J_1–J_2 chains, qualitatively consistent with our results. However, longitudinal spin excitations in purely one-dimensional models are described by a Tomonaga-Luttinger (TL) liquid, leading to a power-law divergence of 1/T_1 toward T=0 <cit.>, in contrast to the experimentally observed peak near T_N driven by three-dimensional ordering. Thus the results of 1D theories cannot be used directly to fit our data. Instead, we take a phenomenological approach to extract ⟨ S_∥ (𝐪, ω) ⟩ from the 1/^7T^c_1 data. Since ^7Γ^∥_c > ^7Γ^⊥_c and ⟨ S_∥ (𝐪, ω) ⟩ ≫ ⟨ S_⊥ (𝐪, ω) ⟩ near T_N, we neglect the first term in Eq. (<ref>) and determine ⟨ S_∥ (𝐪, ω) ⟩ by ⟨ S_∥ (𝐪, ω) ⟩ = (1/^7Γ^∥_c)(1/^7T_1^c). The top panel of Fig. <ref>(c) shows the field dependence of ⟨ S_∥ (𝐪, ω) ⟩ at the peak temperature of 1/^7T^c_1 (denoted as ⟨ S_∥ ⟩_max). With increasing field, ⟨ S_∥ ⟩_max first increases, then exhibits a maximum at H ∼ 0.4 H_sat (16 T), and decreases above 0.4 H_sat. Since the peak temperature of 1/^7T^c_1 is slightly shifted from T_N, we also show ⟨ S_∥ (𝐪, ω) ⟩ at T_N (denoted as ⟨ S_∥ ⟩ (T_N)). The field dependence of ⟨ S_∥ ⟩ (T_N) is qualitatively similar to that of ⟨ S_∥ ⟩_max, but the maximum shifts to a higher field. Note that T_N also shows similar behavior (the middle panel of Fig. <ref>(c)), supporting that the fluctuations observed by NMR are indeed related to the three-dimensional ordering. The temperature dependence of 1/^7T^c_1 is fitted to a power law, 1/^7T_1^c = ^7A ((T-T^*)/T^*)^-ν_c, using a fitting parameter ^7A and a phenomenological parameter T^* instead of T_N to improve the fit. The difference between T^* and T_N is smaller than 0.2 K, and the fit is good except very near the peak, as shown by the green line in Fig. <ref>(b). The exponent ν_c provides a measure of the strength of the critical fluctuations. As displayed in the lower panel of Fig. <ref>(c), ν_c shows a similar field dependence as ⟨ S_∥ (𝐪, ω) ⟩ and T_N. The non-monotonic field dependence with a broad peak commonly observed for all the plots in Fig. <ref>(c) indicates that, approaching from the high-field side, the longitudinal SDW correlation gets enhanced with decreasing field down to H/H_sat ∼ 0.4–0.6, then reduced towards the phase boundary with the helical state.
The former behavior is indeed consistent with the theoretical prediction for the one-dimensional J_1–J_2 chains described as a TL liquid of bound magnon pairs. The longitudinal spin correlation S_∥(x) and the nematic correlation N(x) both show long-range algebraic decay, S_∥(x) ∼ x^-η and N(x) ∼ x^-1/η. At high fields near the saturation, the nematic correlation is dominant (η > 1) due to the gain in kinetic energy of dilute bound magnon pairs. With decreasing field, η gets smaller, making the SDW correlation dominant (η < 1) <cit.> due to interaction among magnon pairs with their increased density. The SDW fluctuations contribute to 1/T_1 as ⟨ S_∥ (𝐪, ω) ⟩ ∝ T^η-1 <cit.>, which is enhanced at lower fields with smaller η, consistent with the experimental observation. What is not predicted by the 1D theories is the reduction of the SDW correlation with further decreasing the field and approaching the boundary with the helical phase. This can be explained by considering the interchain coupling. According to the analysis of the spin-wave dispersion <cit.>, the most dominant interchain interaction is ferromagnetic and connects a spin on one chain to two spins on the neighboring chain in the ab plane separated by a, whereas the nearest-neighbor distance along a chain is b/2 (see Fig. <ref>). Since the SDW order occurs at the wave vector 𝐐_0 = 2 π(1, 1/2-⟨ S_z ⟩, 0), this coupling is more frustrated for smaller magnetization. Therefore, three-dimensional ordering should be suppressed at lower fields while the 1D correlation remains strong. A similar mechanism has been discussed concerning the stability of the SDW phase in a spatially anisotropic spin-1/2 triangular lattice antiferromagnet <cit.>. So far we have discussed the transverse and longitudinal fluctuations separately, based on the data of 1/^51T_1^a and 1/^7T_1^c, respectively. Now we can see that at sufficiently high field (of the order of 0.4 H_sat) 1/^51T_1^c, shown in Fig. <ref>(d), exhibits characteristic behavior of both contributions in different temperature ranges. Since ^51Γ^⊥_c > ^51Γ^∥_c, 1/^51T_1^c shows the activated behavior of ⟨ S_⊥ (𝐪, ω) ⟩ at high temperatures. At low temperatures, however, ⟨ S_∥ (𝐪, ω) ⟩ becomes much larger than ⟨ S_⊥ (𝐪, ω) ⟩, and 1/^51T_1^c follows the behavior of ⟨ S_∥ (𝐪, ω) ⟩ with a peak near T_N. The peak value of 1/^51T_1^c gets reduced with increasing field, consistent with the results of 1/^7T_1^c. Qualitatively similar behavior is observed also for 1/^7T_1^a (Fig. <ref>(a)). The temperature dependence of 1/^51T_1^c can indeed be fit to a sum of the two contributions at each field value, 1/^51T_1^c = ^51A ((T-T^*)/T^*)^-ν_c + ^51B exp(-Δ_c/T), with three fitting parameters, ^51A, ^51B, and Δ_c, while the values of T^* and ν_c are determined from the fitting of the 1/^7T_1^c data to Eq. (<ref>). An example is shown by the green solid line in Fig. <ref>(d). The obtained energy gap Δ_c in the transverse spin excitations for H ∥ c is plotted against H/H_sat in Fig. <ref>(b). The magnitude and the field dependence of the energy gap are quite similar for H ∥ a and H ∥ c. In addition, the longitudinal contribution in Eq. (<ref>) agrees well with that in Eq. (<ref>). This is confirmed by the correspondence between the field dependencies of ^51A/^7A and ^51Γ^∥_c/^7Γ^∥_c shown in Fig.
<ref>, since ^51A ((T - T^*)/T^*)^-ν_c = ^51Γ^∥_c ⟨ S_∥ (𝐪, ω) ⟩ and ^7A ((T - T^*)/T^*)^-ν_c = ^7Γ^∥_c ⟨ S_∥ (𝐪, ω) ⟩ lead to ^51A/^7A = ^51Γ^∥_c/^7Γ^∥_c independently of the magnetic field. Finally, we emphasize that the anisotropic spin fluctuations observed in LiCuVO_4 are a specific hallmark of frustrated spin systems with bound magnon pairs. In particular, the energy gap in the transverse excitations, which grows with field, provides decisive evidence for the magnon binding. Although several other spin systems have SDW ground states in magnetic fields, for example, 1D antiferromagnetic spin-1/2 chains with Ising anisotropy <cit.> or spatially anisotropic spin-1/2 triangular lattice antiferromagnets <cit.>, none of these shows an energy gap in the transverse excitations. Indeed, in most cases the transverse antiferromagnetic correlation becomes dominant at high fields near the saturation, contrary to its suppression due to magnon binding. The good consistency between our results and 1D theories makes it very likely that the spin nematic correlation becomes dominant at higher fields, even though the three-dimensional nematic order may be prevented by disorder <cit.>. § SUMMARY In conclusion, we have examined the field dependence of spin dynamics in the frustrated J_1–J_2 chain spin system LiCuVO_4 by NMR experiments. Appropriate choice of the nuclei (^7Li or ^51V) and the field directions, with the aid of thorough knowledge of the hyperfine coupling tensors, enabled us to analyze the transverse and longitudinal spin dynamics separately. Their contrasting temperature and field dependencies are consistent with the theoretical predictions for the frustrated J_1–J_2 chains. This demonstrates that further exploration of clean defect-free materials with J_1–J_2 chains remains a promising route to discover an elusive spin nematic phase. We thank C. Michioka, H. Ueda, M. Sato, T. Hikihara, T. Momoi, A. Smerald, N. Shannon, and O. A. Starykh for fruitful discussions. This paper was supported by Japan Society for the Promotion of Science KAKENHI (B) (Grant No. 21340093, No. 16H04131, and No. 25287083); the Ministry of Education, Culture, Sports, Science, and Technology GCOE program; a Grant-in-Aid for Science Research from Graduate School of Science, Kyoto University; the EuroMagNET II network under the European Commission Contract No. FP7-INFRASTRUCTURES-228043; and was carried out under the Visiting Researcher's Program of the Institute for Solid State Physics, the University of Tokyo. § DETERMINATION OF THE TRANSITION TEMPERATURE The transition temperature T_N is determined from the variation of the ^51V NMR spectra. Figure <ref>(a) shows typical ^51V field-swept NMR spectra measured at the NMR frequency of 337.79 MHz. The NMR line shape changes from a single-peak pattern at high temperatures to a double-horn pattern at the lowest temperature, indicating the occurrence of an SDW order. However, the line shape changes rather gradually over a finite range of temperature, likely due to disorder. Therefore, it is difficult to determine T_N simply from visual inspection, and an unbiased systematic method is required. We calculated the second and fourth moments, M_2 and M_4, defined as M_2 ≡ ∫ dH (M_1 - H)^2 I(H), M_4 ≡ ∫ dH (M_1 - H)^4 I(H), where I(H) is the normalized NMR spectrum (∫ dH I(H) = 1) and M_1 is the first moment, M_1 ≡ ∫ dH H I(H). The ratio M_2^2/M_4 is plotted against temperature in Fig.
<ref>(b) for various field values. This ratio is much smaller than 1 for a singly peaked symmetric line, for example, M_2^2/M_4 = 1/3 for a Gaussian, but approaches 1 if the spectrum consists of two well-separated lines. Therefore, we expect a rapid increase of this ratio at the onset of an incommensurate spin order. Such behavior is indeed observed in the experimental plots of Fig. <ref>(b). We determined T_N from the point of steepest slope. The field dependence of T_N thus determined is shown in the middle panel of Fig. 2(c). This procedure gives T_N which is 0–1 K smaller (depending on the magnetic field) than that in the previous study <cit.>. The discrepancy is partly due to differences in the methods used to determine T_N; temperature dependencies of the integrated NMR intensity were used to determine T_N in the previous study <cit.>. We have confirmed that application of our procedure to the previous results reduces the differences in T_N to less than 0.3 K. The residual difference may be due to a sample dependence related to disorder such as Li deficiencies. Balents L. Balents, Nature (London) 464, 199 (2010). Lacroix Introduction to Frustrated Magnetism, edited by C. Lacroix, P. Mendels, and F. Mila (Springer New York, 2011). nematic A. F. Andreev and I. A. Grishchuk, Zh. Eksp. Teor. Fiz. 87, 467 (1984). nematic1 A. V. Chubukov, Phys. Rev. B 44, 4693 (1991). nematic2 N. Shannon, T. Momoi, and P. Sindzingre, Phys. Rev. Lett. 96, 027213 (2006). octupole T. Momoi, P. Sindzingre, and N. Shannon, Phys. Rev. Lett. 97, 257204 (2006). 1Dnematic2 A. Smerald and N. Shannon, Phys. Rev. B 93, 184419 (2016). 1Dtheory0 L. Kecke, T. Momoi, and A. Furusaki, Phys. Rev. B 76, 060407 (2007). 1Dtheory1 T. Vekua, A. Honecker, H.-J. Mikeska, and F. Heidrich-Meisner, Phys. Rev. B 76, 174420 (2007). 1Dtheory2 T. Hikihara, L. Kecke, T. Momoi, and A. Furusaki, Phys. Rev. B 78, 144404 (2008). 1Dtheory3 J. Sudan, A. Lüscher, and A. M. Läuchli, Phys. Rev. B 80, 140402 (2009). 1Dnematic H. T. Ueda and K. Totsuka, Phys. Rev. B 80, 014417 (2009). 1Dnematic0 M. E. Zhitomirsky and H. Tsunetsugu, Europhys. Lett. 92, 37001 (2010). 1Dnematic1 M. Sato, T. Hikihara, and T. Momoi, Phys. Rev. Lett. 110, 077206 (2013). 1Dnematic3 H. T. Ueda and K. Totsuka, arXiv:1406.1960 (2014). nematic3 O. A. Starykh and L. Balents, Phys. Rev. B 89, 104407 (2014). excitation H. Onishi, J. Phys. Soc. Jpn. 84, 083702 (2015). interchain S. Nishimoto, S.-L. Drechsler, R. Kuzian, J. Richter, and J. van den Brink, Phys. Rev. B 92, 214415 (2015). 1Dnematic4 L. Balents and O. A. Starykh, Phys. Rev. Lett. 116, 177201 (2016). 1DtheoryofT11 M. Sato, T. Momoi, and A. Furusaki, Phys. Rev. B 79, 060406 (2009). 1DtheoryofT12 M. Sato, T. Hikihara, and T. Momoi, Phys. Rev. B 83, 064405 (2011). cryst M. A. Lafontaine, M. Leblanc and G. Ferey, Acta Cryst. C45, 1205 (1989). growth A. V. Prokofiev, D. Wichert, and W. Assmus, J. Cryst. Growth 220, 345 (2000). growth2 A. V. Prokofiev, I. G. Vasilyeva, V. N. Ikorskii, V. V. Malakhov, I. P. Asanov, and W. Assmus, J. Solid State Chem. 177, 3131 (2004). neutron0 B. J. Gibson, R. K. Kremer, A. V. Prokofiev, W. Assmus, and G. J. McIntyre, Physica B 350, e253 (2004). inelastic M. Enderle, C. Mukherjee, B. Fåk, R. K. Kremer, J.-M. Broto, H. Rosner, S.-L. Drechsler, J. Richter, J. Málek, A. Prokofiev, W. Assmus, S. Pujol, J.-L. Raggazzoni, H. Rakoto, M. Rheinstädter and H. M. Rønnow, Europhys. Lett. 70, 237 (2005). neutron1 M. Mourigal, M. Enderle, R. K. Kremer, J. M. Law, and B. Fåk, Phys. Rev. B 83, 100409 (2011). NMR2 N.
Büttgen, H.-A. Krug von Nidda, L. E. Svistov, L. A. Prozorova, A. Prokofiev, and W. Aßmus, Phys. Rev. B 76, 014440 (2007). NMR5 K. Nawa, M. Takigawa, M. Yoshida, and K. Yoshimura, J. Phys. Soc. Jpn. 82, 094709 (2013). neutron2 T. Masuda, M. Hagihala, Y. Kondoh, K. Kaneko, and N. Metoki, J. Phys. Soc. Jpn. 80, 113705 (2011). neutron3 M. Mourigal, M. Enderle, B. Fåk, R. K. Kremer, J. M. Law, A. Schneidewind, A. Hiess, and A. Prokofiev, Phys. Rev. Lett. 109, 027203 (2012). NMR3 N. Büttgen, W. Kraetschmer, L. E. Svistov, L. A. Prozorova, and A. Prokofiev, Phys. Rev. B 81, 052403 (2010). NMR4 N. Büttgen, P. Kuhns, A. Prokofiev, A. P. Reyes, and L. E. Svistov, Phys. Rev. B 85, 214421 (2012). magnetization L. E. Svistov, T. Fujita, H. Yamaguchi, S. Kimura, K. Omura, A. Prokofiev, A. I. Smirnov, Z. Honda, and M. Hagiwara, JETP Lett. 93, 21 (2011). HFNMR N. Büttgen, K. Nawa, T. Fujita, M. Hagiwara, P. Kuhns, A. Prokofiev, A. P. Reyes, L. E. Svistov, K. Yoshimura, and M. Takigawa, Phys. Rev. B 90, 134401 (2014). NMR6 A. Orlova, E. L. Green, J. M. Law, D. I. Gorbunov, G. Chanda, S. Krämer, M. Horvatić, R. K. Kremer, J. Wosnitza, and G. L. J. A. Rikken, Phys. Rev. Lett. 118, 247201 (2017). smerald1 A. Smerald and N. Shannon, Phys. Rev. B 84, 184437 (2011). smerald2 A. Smerald, Theory of the Nuclear Magnetic 1/T_1 Relaxation Rate in Conventional and Unconventional Magnets (Springer Thesis, 2013). note_Gammaq To improve the estimation of Γ, the range of the sum is expanded for ^51V nuclei compared to Ref. NMR5. note_MH We confirmed that the magnetization of our sample (H ∥ c) is almost consistent with that presented in Ref. magnetization below 35 T [K. Nawa, A. Matsuo, K. Kindo, M. Takigawa, and K. Yoshimura (unpublished)]. Thus, the magnetization curve of our sample is used to estimate Γ_c, and that (H ∥ a) in Ref. magnetization is used to estimate Γ_a. correlationfunc S. Nishimoto, S.-L. Drechsler, R. Kuzian, J. Richter, J. Málek, M. Schmitt, J. van den Brink, and H. Rosner, Europhys. Lett. 98, 37007 (2012). GGAU H.-J. Koo, C. Lee, M.-H. Whangbo, G. J. McIntyre, and R. K. Kremer, Inorg. Chem. 50, 3582 (2011). susceptibility J. Sirker, Phys. Rev. B 81, 014419 (2010). Starykh O. A. Starykh, H. Katsura, and L. Balents, Phys. Rev. B 82, 014421 (2010). 1DIsing F. D. M. Haldane, Phys. Rev. Lett. 45, 1358 (1980). 1DIsing2 S. Kimura, T. Takeuchi, K. Okunishi, M. Hagiwara, Z. He, K. Kindo, T. Taniyama, and M. Itoh, Phys. Rev. Lett. 100, 057202 (2008).
http://arxiv.org/abs/1702.08573v2
{ "authors": [ "Kazuhiro Nawa", "Masashi Takigawa", "Steffen Krämer", "Mladen Horvatić", "Claude Berthier", "Makoto Yoshida", "Kazuyoshi Yoshimura" ], "categories": [ "cond-mat.str-el" ], "primary_category": "cond-mat.str-el", "published": "20170227225251", "title": "Dynamics of Bound Magnon Pairs in the Quasi-One-Dimensional Frustrated Magnet LiCuVO_4" }
sunyfphy@physics.tamu.edu Cyclotron Institute and Department of Physics and Astronomy, Texas A&M University, College Station, Texas 77843, USA ko@comp.tamu.edu Cyclotron Institute and Department of Physics and Astronomy, Texas A&M University, College Station, Texas 77843, USA Using a multiphase transport model, we study three-particle mixed harmonic correlations in relativistic heavy ion collisions by considering the observable C_m,n,m+n=⟨⟨cos(mϕ_1+nϕ_2-(m+n)ϕ_3)⟩⟩, where ϕ_1,2,3 are azimuthal angles of all particle triplets. We find that except for C_123, our results on the centrality dependence of C_112, C_224 and C_235 as well as the relative pseudorapidity dependence of C_123 and C_224 in Au+Au collisions at √(s)=200 GeV agree reasonably well with the experimental data from the STAR Collaboration. We discuss the implications of our results. Three-particle correlations in relativistic heavy ion collisions in a multiphase transport model Che Ming Ko December 30, 2023 ================================================================================================ § INTRODUCTION The study of anisotropic flow in relativistic heavy ion collisions has provided important information on the properties of the produced quark-gluon plasma (QGP). In earlier studies, the large elliptic flow observed in experiments at the BNL Relativistic Heavy Ion Collider (RHIC) for non-central collisions was found to be describable by ideal hydrodynamics. This has led to the conclusion that the produced QGP is an ideal fluid and thus a strongly interacting matter <cit.>. More recent studies indicate that the experimental data on anisotropic flows could be better understood using viscous hydrodynamics <cit.> with a specific viscosity that is only about a factor of two larger than the theoretically predicted lower bound <cit.>. In particular, the large triangular flow observed in experiments not only puts a more stringent constraint on the specific viscosity of the QGP <cit.> but also reveals the importance of initial spatial fluctuations in heavy ion collisions <cit.>. To study the effect of initial-state fluctuations and that of final-state interactions with better precision, new observables based on n-particle correlations have been proposed <cit.>, since the ratio between these correlations and the corresponding anisotropic flows can provide information on initial fluctuations. In the present study, we use a multiphase transport (AMPT) model <cit.> to study the three-particle correlations and compare the results with recent experimental measurements by the STAR Collaboration <cit.>. The paper is organized as follows. In the next section, we briefly describe the AMPT model and the parameters used in our calculations. In Sec. III, both two-particle and three-particle correlations are described. Results on anisotropic flows and three-particle correlations obtained from the AMPT model are presented and compared with experimental data in Sec. IV. Finally, a summary is given in Sec. V. § THE AMPT MODEL The AMPT model is a hybrid model consisting of four stages of heavy ion collisions at ultrarelativistic energies: initial conditions, parton scatterings, conversion from the partonic matter to hadronic matter, and hadron scatterings <cit.>. There are two versions of the AMPT model, which are the default AMPT model and the AMPT model with string melting. In both versions, the initial conditions are generated from the heavy ion jet interaction generator (HIJING) model <cit.>.
In the default version, only minijet partons from HIJING are included in the partonic stage via Zhang's parton cascade (ZPC) <cit.>. After their scatterings, minijet partons are recombined with their parent strings to form excited strings, which are then converted to hadrons through the Lund string fragmentation model. In the string melting version, all hadrons produced from HIJING are converted to partons according to their valence quark flavors and spin structure, and these partons are evolved via the ZPC. At the end of their scatterings, quarks and antiquarks are converted to hadrons via a simple coalescence model. Specifically, a quark and its nearest antiquark are combined into a meson, and the three nearest quarks (antiquarks) are combined into a baryon (antibaryon), with their species determined by the flavor and invariant mass of the coalescing quarks and antiquarks. Scatterings among hadrons in both the default and string melting versions are described by a relativistic transport (ART) model <cit.> until kinetic freeze-out. In the present study, we use the string melting version of the AMPT model with the parameter set B of Ref. <cit.>, i.e., using the values a=0.5 and b=0.9 GeV^-2 in the Lund string fragmentation function f(z) ∝ z^-1(1-z)^a exp(-bm_⊥^2/z), where z is the light-cone momentum fraction of the produced hadron of transverse mass m_⊥ with respect to that of the fragmenting string; and the values α_s=0.33 and μ=3.2 fm^-1 in the parton scattering cross section σ ≈ 9πα_s^2/(2μ^2). This parameter set has been shown to give a better description of the charged particle multiplicity density, transverse momentum spectrum, and elliptic flow in heavy ion collisions at RHIC. § TWO- AND THREE-PARTICLE CORRELATIONS The two-particle correlation of particles in a given rapidity and transverse momentum range in heavy ion collisions is defined by <cit.> c_n{2} = ⟨⟨∑_i≠j cos(n(ϕ_i-ϕ_j))/[M(M-1)]⟩⟩, where the sum is over all pairs of distinct particles i and j in a single event with M particles in that rapidity and transverse momentum range, ϕ_i and ϕ_j are the azimuthal angles of their transverse momenta in the transverse plane of the collision, and ⟨⟨·⟩⟩ denotes the average over events.
Using the identity ∑_i≠j = ∑_i,j - ∑_i=j, the numerator in the above equation can be written as ∑_i≠j cos(n(ϕ_i-ϕ_j)) = M^2(⟨cos nϕ⟩^2+⟨sin nϕ⟩^2-1/M), where ⟨·⟩ denotes the average over all particles in a single event. In the two-particle cumulant method <cit.>, anisotropic flow coefficients are simply given by the square root of the two-particle correlation, i.e., v_n{2}=√(c_n{2}). Similarly, the three-particle correlation, denoted as C_m,n,m+n, is defined by <cit.> C_m,n,m+n = ⟨⟨∑_i≠j≠k cos(mϕ_i+nϕ_j-(m+n)ϕ_k)/[M(M-1)(M-2)]⟩⟩. Using the identity ∑_i≠j≠k = ∑_i,j,k - ∑_j=i,k - ∑_k=i,j - ∑_i,k=j + 2∑_i=j=k <cit.>, the numerator can also be written as ∑_i≠j≠k cos(mϕ_i+nϕ_j-(m+n)ϕ_k) = ∑_i,j,k cos(mϕ_i+nϕ_j-(m+n)ϕ_k) - ∑_i,j cos(m(ϕ_i-ϕ_j)) - ∑_i,j cos(n(ϕ_i-ϕ_j)) - ∑_i,j cos((m+n)(ϕ_i-ϕ_j)) + 2M = M^3(⟨cos mϕ⟩⟨cos nϕ⟩⟨cos(m+n)ϕ⟩ - ⟨sin mϕ⟩⟨sin nϕ⟩⟨cos(m+n)ϕ⟩ + ⟨sin mϕ⟩⟨cos nϕ⟩⟨sin(m+n)ϕ⟩ + ⟨cos mϕ⟩⟨sin nϕ⟩⟨sin(m+n)ϕ⟩) - M^2(⟨cos mϕ⟩^2+⟨sin mϕ⟩^2) - M^2(⟨cos nϕ⟩^2+⟨sin nϕ⟩^2) - M^2(⟨cos(m+n)ϕ⟩^2+⟨sin(m+n)ϕ⟩^2) + 2M, which shows that the number of terms in the three-particle correlation can be reduced from the order of M^3 to the order of M, thus significantly improving the efficiency of the calculation when M is large. § RESULTS In this section, we show the anisotropic flow of charged particles and their three-particle mixed harmonic correlations obtained from the AMPT model in Au+Au collisions at √(s_NN)=200 GeV at RHIC and compare them with experimental data measured by the STAR Collaboration. §.§ Charged particle anisotropic flow In Fig. <ref>, we show the participant number N_part or centrality dependence of the anisotropic flow from n=1 to 4 for mid-pseudorapidity (|η|<1) charged particles of transverse momentum p_T>0.2 GeV/c in Au+Au collisions at √(s_NN)=200 GeV. In particular, the participant numbers chosen in our calculations are those in collisions at impact parameters of 2.2, 4.1, 5.8, 7.5, 8.8, and 10.0 fm, corresponding, respectively, to centrality bins of 0-5%, 5-10%, 10-20%, 20-30%, 30-40% and 40-50% in the STAR experiment. We calculate the anisotropic flow from the two-particle cumulant method <cit.>. Note that we do not include the anisotropic flow for n=5 because of the large uncertainty in both the experimental data <cit.> and our results. It is seen that the results from the AMPT model agree qualitatively with the experimental data <cit.>. Quantitatively, the AMPT slightly overestimates the measured elliptic flow v_2{2} and triangular flow v_3{2} for the most-central collisions and underestimates them for peripheral collisions. On the other hand, our results for the directed flow v_1{2} underestimate the data for the most-central collisions and overestimate them for peripheral collisions. For the quadrupolar flow, our results are slightly smaller than the data for all centralities. In both the experimental data and the results from the AMPT, v_1{2}^2 is negative for more peripheral collisions. This can be understood from Eq. (<ref>). In the case of including all particles in a collision, the first term ⟨cosϕ⟩^2+⟨sinϕ⟩^2 should be identically zero due to the conservation of total transverse momentum. Since only mid-pseudorapidity (|η|<1) particles of transverse momentum p_T>0.2 GeV/c are included in calculating v_1{2}^2, the value of the first term can be nonzero but small. This can lead to positive values of v_1{2}^2 for large M. With decreasing particle number M for more peripheral collisions, the first term in Eq. (<ref>) can become smaller than the second term 1/M, resulting in a negative value for v_1{2}^2.
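As a concrete numerical illustration of the O(M) evaluation derived in the previous section, the following minimal Python sketch — our addition, with illustrative names and without the p_T and η cuts applied in the actual analysis — computes the single-event quantities whose event averages define c_n{2} and C_m,n,m+n:

import numpy as np

def c2_event(phi, n):
    # Single-event two-particle correlation: the pair sum over i != j
    # reduces to event-averaged cos/sin moments, as in Eq. (2).
    M = len(phi)
    cn, sn = np.cos(n * phi).mean(), np.sin(n * phi).mean()
    return M**2 * (cn**2 + sn**2 - 1.0 / M) / (M * (M - 1))

def c3_event(phi, m, n):
    # Single-event three-particle correlator C_{m,n,m+n}, using the
    # O(M) moment expansion of the triplet sum, as in Eq. (5).
    M = len(phi)
    cm, sm = np.cos(m * phi).mean(), np.sin(m * phi).mean()
    cn, sn = np.cos(n * phi).mean(), np.sin(n * phi).mean()
    cp, sp = np.cos((m + n) * phi).mean(), np.sin((m + n) * phi).mean()
    num = (M**3 * (cm*cn*cp - sm*sn*cp + sm*cn*sp + cm*sn*sp)
           - M**2 * (cm**2 + sm**2)
           - M**2 * (cn**2 + sn**2)
           - M**2 * (cp**2 + sp**2)
           + 2 * M)
    return num / (M * (M - 1) * (M - 2))

# Event average <<...>>: e.g. C_112 = np.mean([c3_event(phi, 1, 1) for phi in events])
# and v_n{2} = sqrt(mean of c2_event over events), as in the cumulant method.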
§.§ Three-particle correlations For the three-particle correlations, we have calculated C_112, C_123, C_224 and C_235, and compared them to the experimental results <cit.>. Figure <ref> shows C_m,n,m+n × N_part^2 for the four cases as functions of the number of participant nucleons. For C_112 in the upper left panel of Fig. <ref>, our results show good agreement with the experimental data, although there is some discrepancy in more central collisions. Values of C_112 from both our calculations and the experimental measurement are negative for all centralities. Besides possible non-flow effects from momentum conservation in the AMPT simulations, this could imply that the angles Ψ_1 and Ψ_2 of the reaction planes for the directed and elliptic flows are likely to be perpendicular to each other. Our results on C_123, shown in the upper right panel of Fig. <ref>, are seen to agree with the experimental data within their error bars for most-central collisions but are smaller than the experimental data for mid-central collisions. Their essentially zero values indicate that the directed and triangular flows, or the angles Ψ_1 and Ψ_3 of their reaction planes, are not sufficiently correlated in the AMPT model for mid-central collisions. For C_224, shown in the lower left panel, our results agree extremely well with the experimental data for all centralities, although there are small differences between our results on the elliptic and quadrupolar flows and those measured in experiments, as shown in Fig. <ref>. Their large values further indicate that there is a strong correlation between the angles Ψ_2 and Ψ_4 of their reaction planes. The lower right panel shows our results for C_235, which are seen to show a similar trend and magnitude to the experimental data, although overestimating the data in most-central collisions and underestimating them in mid-central collisions. Since the AMPT reproduces reasonably well the various anisotropic flows measured in experiments, as shown in Fig. <ref>, the above results on its reasonable success in also describing the measured three-particle correlations clearly indicate that the initial states in the AMPT model are quite similar to what are generated in heavy ion collisions. §.§ Relative pseudorapidity dependence of C_m,n,m+n In this section, we study the relative pseudorapidity |Δη| dependence of C_m,n,m+n for mid-pseudorapidity (|η|<1) charged particles of transverse momentum p_T>0.2 GeV/c in Au+Au collisions at √(s_NN)=200 GeV and centrality 20-30%. In particular, we consider the pseudorapidity difference between the first and the second particle (η_1-η_2) or the first and the third particle (η_1-η_3). The upper left panel of Fig. <ref> shows that C_123 from the AMPT model changes only slightly with |η_1-η_2|, as in the experimental data. These results imply that there is negligible breaking of boost invariance as seen in terms of the pseudorapidity dependence of the angle Ψ_2 of the elliptic flow. The |η_1-η_3| dependence of the results from the AMPT model, shown in the upper right panel, shows, on the other hand, a strong decrease with increasing |η_1-η_3|, as in the data, indicating that the azimuthal angles Ψ_1 and Ψ_3 of the reaction planes for the directed and triangular flows change strongly with rapidity. Since our results for C_123 are smaller than the experimental data for small values of |η_1-η_2| and |η_1-η_3|, the reaction planes for the directed and triangular flows in the AMPT model are thus less correlated than measured in experiments.
This is probably due to the Hanbury-Brown-Twiss interference of identical particles at small Δη <cit.>, which is not included in the AMPT model. The lower two panels show the |η_1-η_2| and |η_1-η_3| dependence of C_224, and both are seen to agree with experimental data very well. As for C_123 in the upper left panel, C_224 in the lower left panel also changes little with |η_1-η_2|, indicating that the reaction plane for the elliptic flow has a weak dependence on rapidity. The lower right panel shows that C_224 decreases slightly with increasing |η_1-η_3|, implying that the reaction plane for the quadrupolar flow changes with rapidity and thus slightly breaks the boost invariance.

§ SUMMARY

Using the AMPT model with parameters for the Lund string fragmentation and parton scattering taken from Ref. <cit.>, we have calculated the centrality dependence of various anisotropic flows in Au+Au collisions at √(s_ NN)=200 GeV from the two-particle cumulant method. The obtained results are seen to agree with experimental data from the STAR Collaboration in both trend and magnitude. We have found that the square of the directed flow, v_1{2}^2, can be negative in more peripheral collisions as in experiments, and this has been attributed to the net total transverse momentum of the particles included in the evaluation and the small number of particles in more peripheral collisions.

We have also used the AMPT model to study various three-particle correlations in Au+Au collisions at √(s_ NN)=200 GeV as functions of centrality, which contain information on both the flow harmonics and the correlations among their reaction planes. We have found that our results for C_112, C_224 and C_235 generally agree with experimental data both in their magnitude and in their dependence on the participant number of collisions. In particular, our results for C_224 agree very well with the data, although our results for the elliptic and quadrupolar flows differ slightly from the data. For C_123, our results show that for mid-central collisions there is a weaker correlation between the angles of the reaction planes for the directed, elliptic and triangular flows in the AMPT model than in the experimental data. We have further studied the dependence of the three-particle correlations on the relative pseudorapidities |η_1-η_2| and |η_1-η_3| between the first and second particles as well as between the first and third particles. Our results are seen to agree with experimental data for C_123 and C_224, and indicate that the boost invariance is weakly broken in the angles of the reaction planes for the elliptic and quadrupolar flows but strongly broken in those for the directed and triangular flows. These results have led us to conclude that the AMPT model with its fluctuating initial conditions and strong partonic scatterings can capture the essential collision dynamics of relativistic heavy ion collisions as revealed in the measured anisotropic flows and three-particle correlations.

§ ACKNOWLEDGEMENTS

We thank Prithwish Tribedy for discussions that led to the present study and for his critical reading of the manuscript. This work was supported in part by the US Department of Energy under Contract No. DE-SC0015266 and the Welch Foundation under Grant No. A-1358.
http://arxiv.org/abs/1702.07807v1
{ "authors": [ "Yifeng Sun", "Che Ming Ko" ], "categories": [ "nucl-th" ], "primary_category": "nucl-th", "published": "20170225000055", "title": "Three-particle correlations in relativistic heavy ion collisions in a multiphase transport model" }
As South and Central American countries prepare for increased birth defects from Zika virus outbreaks and plan for mitigation strategies to minimize ongoing and future outbreaks, understanding important characteristics of Zika outbreaks and how they vary across regions is a challenging and important problem. We developed a mathematical model for the 2015 Zika virus outbreak dynamics in Colombia, El Salvador, and Suriname. We fit the model to publicly available data provided by the Pan American Health Organization, using Approximate Bayesian Computation to estimate parameter distributions and provide uncertainty quantification. An important model input is the at-risk susceptible population, which can vary with a number of factors including climate, elevation, population density, and socio-economic status. We informed this initial condition using the highest historically reported dengue incidence modified by the probable dengue reporting rates in the chosen countries. The model indicated that a country-level analysis was not appropriate for Colombia. We then estimated the basic reproduction number, or the expected number of new human infections arising from a single infected human, to range between 4 and 6 for El Salvador and Suriname with a median of 4.3 and 5.3, respectively. We estimated the reporting rate to be around 16% in El Salvador and 18% in Suriname with estimated total outbreak sizes of 73,395 and 21,647 people, respectively. The uncertainty in parameter estimates highlights a need for research and data collection that will better constrain parameter ranges.

§ INTRODUCTION

Mosquito-borne diseases contribute significantly to the overall morbidity and mortality caused by infectious diseases in Central and South America. Newly emergent pathogens, such as Zika virus in 2015, highlight the need for data and models to understand the public health impact of associated diseases and develop mitigation strategies to combat their spread. In particular, since Zika virus is a newly emergent pathogen in the Americas, its impact on the naïve population is relatively unknown.

The disease was first discovered in 1947, in isolation from a rhesus macaque in the Zika forest of Uganda <cit.>. While infrequent human cases were confirmed in later years in both Africa and Southeast Asia, it was not until April 2007 that an outbreak outside of these traditional areas occurred on Yap Island in the North Pacific <cit.>, and this was followed by another outbreak occurring in French Polynesia beginning October of 2013 <cit.>. However, the most significant Zika outbreak began within Central and South America in 2015 <cit.> and is currently ongoing. Thus far it has resulted in an estimated 714,636 infections, which includes both suspected and confirmed cases within Latin American and Non-Latin Caribbean countries (accessed Jan 10, 2017) <cit.>. This study focuses on the behavior of the current burgeoning epidemic in Colombia, El Salvador, and Suriname.

Zika is transmitted to humans primarily through bites from infected Aedes aegypti and Aedes albopictus mosquitoes. The transmission is in both directions; that is, infected mosquitoes infect humans and infected humans infect mosquitoes.
Upon transmission of the virus from mosquito to human, an individual will become infectious within 3 to 12 days. Symptoms of infection include fever, rash, joint pain, conjunctivitis, muscle pain and headache. Recovery from Zika virus disease may require anywhere from 3 to 14 days after becoming infectious, but once contracted, humans are immune from the virus for life. Many people infected with Zika may be asymptomatic or will only display mild symptoms that do not require medical attention. An estimated 80% of persons infected with Zika virus are asymptomatic <cit.>. Thus, there is a high occurrence of under-reporting for confirmed or suspected Zika cases. In fact, the number of infected individuals who report their symptoms is estimated to be between 7% and 17% <cit.> of the total number infected by the virus.

Zika can also be transmitted vertically, as a mother can pass the virus to her child during pregnancy, and this can lead to a variety of developmental issues. Most notably, Zika is a cause of microcephaly and other severe fetal brain defects <cit.>. Recent evidence further suggests an association between the virus and a higher incidence of Guillain-Barré syndrome, a disease in which the immune system damages nerve cells causing muscle weakness and sometimes paralysis <cit.>. The potential for sexual transmission of Zika virus has also been confirmed <cit.>. For instance, the Centers for Disease Control and Prevention (CDC) has determined that Zika can remain in semen longer than in other body fluids, including vaginal fluids, urine, and blood. It should be noted, however, that while these latter means of transmission exist, the number of new human infections produced in this way is relatively low compared to mosquito-borne infections <cit.>, and therefore, in the interest of simplicity and reducing the number of parameters to fit, we will neglect them in formulating a model.

In general, mathematical modeling has been extensively used to understand disease dynamics and the impact of mitigation strategies <cit.>. A few recent papers have developed models that focus on the behavior of the Zika epidemic. For example, Gao et al. <cit.> proposed an SEIR/SEI model to understand the effects of sexually transmitted Zika in Brazil, Colombia, and El Salvador. The authors did not, however, distinguish between countries, as they assumed that the three nations of interest share common parameter values. Additionally, Towers et al. <cit.> presented a model that incorporates spatial heterogeneity in populations at a granular level. Their model focused specifically on the spread of Zika within Barranquilla, Colombia and included a sexual transmission term, but concluded that the effects are not significant enough to sustain the disease in the absence of mosquitoes.
An approximation of the basic reproductive number was obtained using maximum likelihood methods. Finally, a study of the impact of short-term dispersal on the dynamics of Zika virus is analyzed in <cit.>. The model formulated within <cit.> does distinguish between asymptomatic and symptomatic infected populations, and focuses on the estimation of the reproductive number between two close communities.

In contrast to previous studies, we adopt a more global approach to understanding the dynamics of the current Zika epidemic and present a model that can be used to study disease transmission at the country level. We identify differences between countries in regards to parameter values and reproductive numbers. We discuss the appropriateness of country-level analysis for Colombia, El Salvador, and Suriname and quantify the uncertainty within the resulting biological parameter values.

In the following section, we develop a Susceptible-Exposed-Infectious-Recovered (SEIR)/SEI type model which distinguishes between the reported infected population, who are considered symptomatic, and the unreported infected population, who may be asymptomatic or experience symptoms that are not severe enough to seek medical attention. The data available from the Pan American Health Organization (PAHO) serves to motivate the use of a split infectious population. The number of reported cases of Zika, both suspected and confirmed, is reported from PAHO by country. Hence, we develop a model for the three countries of interest assuming that the dynamics of the disease within each may be associated with different parameter values.

In Section 3, we address the additional complication that the total population of a country cannot be used as the initial susceptible population due to the biology and bionomics of the Aedes species and human contact with them, which vary with temperature, humidity <cit.>, sanitation, demographics and elevation <cit.>. Not everyone within a country is equally susceptible to the Zika infection due to geographic diversity; hence, we calculate the unique at-risk population within each country to use as the initial susceptible population. Using the deterministic model at baseline parameter values, the at-risk population we compute yields realistic initial conditions that compare well with PAHO data. With the initial population sizes for the epidemic in each country known and fixed, we embed the deterministic system into a stochastic process to quantify the uncertainty of the parameter values. This method allows us to use biologically valid parameter ranges and Approximate Bayesian Computation (ABC) methods to obtain parameter distributions that are distinct to each country. Section 4 provides a detailed explanation of our implementation of the ABC method along with the strengths and weaknesses discovered in applying this method to our particular model. We summarize the results of the obtained posterior distributions in Section 5.
The last section is dedicated to discussion and conclusions.

§ A DETERMINISTIC ZIKA MODEL

§.§ Vector-borne SEIR Model with Case Reporting

The spread of Zika relies primarily on interactions between humans and mosquitoes. In Figure <ref>, members of the at-risk human population, S_h, are bitten by an infectious mosquito and become exposed, or infected without yet being infectious, at a rate λ_h. Humans within the exposed population, E_h, progress from this state to the infectious compartment at a per capita rate ν_h. Note that there is a portion of infectious humans who are either asymptomatic or experience less severe symptoms, and therefore go unreported. The parameter ψ denotes the proportion of humans who seek medical assistance and are reported as either suspected or confirmed Zika patients. Thus, there are two categories for infectious humans: I_r_h, the reported infectious population, and I_h, the unreported infectious population. Both infectious populations then recover at the same per capita rate of γ_h, where 1/γ_h is the average time spent as infectious. Upon recovery, humans acquire lifetime immunity.

Similarly, a member of the susceptible mosquito population, S_v, becomes exposed at a rate λ_v when susceptible mosquitoes bite an infectious human, resulting in transmission. The exposed mosquitoes, E_v, transition to the infectious compartment at a per capita rate ν_v, where 1/ν_v is the average extrinsic incubation period. Once a mosquito is infectious it will remain so for the duration of its lifespan, which can range between 8 and 35 days <cit.>. Finally, because of the long duration of infection for mosquitoes relative to their lifespan, demography (i.e., births and deaths) is included within their population dynamics.

In total, the system is described by the following eight coupled, nonlinear ordinary differential equations:

dS_h/dt = -λ_h(t) S_h
dE_h/dt = λ_h(t) S_h - ν_h E_h
dI_r_h/dt = ψ ν_h E_h - γ_h I_r_h
dI_h/dt = (1-ψ) ν_h E_h - γ_h I_h
dR_h/dt = γ_h I_r_h + γ_h I_h
dS_v/dt = μ_v N_v - λ_v(t) S_v - μ_v S_v
dE_v/dt = λ_v(t) S_v - ν_v E_v - μ_v E_v
dI_v/dt = ν_v E_v - μ_v I_v

The population state variables are described within Table <ref> and parameters are presented in Table <ref>. The quantities N_h and N_v represent the total populations of humans and mosquitoes within the model, and remain constant. We note that the equation for the evolution of R_h(t) decouples from the system, as it is determined merely by computing the remaining population values.

§.§ Basic reproductive number

We define the basic reproductive number, ℛ_0, as the expected number of secondary infections by a single infectious individual over the duration of the infectious period within a fully susceptible population <cit.>. As there is more than one class of infectives involved, we utilize the next generation method to derive an explicit formula for ℛ_0, defined mathematically by the spectral radius of the next generation matrix <cit.>.
We follow the process in <cit.> and define x = [E_h, I_r_h, I_h, E_v, I_v, S_h, R_h, S_v]^⊤, thus reordering the presentation of populations from the original system to ensure our calculations possess the correct biological representation. Let ℱ_i(x) be the rate of appearance of new infections in compartment i. We indicate the rate of transfer of individuals out of compartment i as 𝒱^-_i(x) and the rate of transfer of individuals into compartment i by all other means as 𝒱^+_i(x). Thus, our system can be expressed in a condensed version as ẋ_i = ℱ_i(x) - 𝒱_i(x) where 𝒱_i(x) = 𝒱^-_i(x) - 𝒱^+_i(x) for i = 1,...,8. Next, we compute F = [∂ℱ_i/∂ x_j(x_0)] and V = [∂𝒱_i/∂ x_j(x_0)] for the exposed and infected compartments, namely for 1 ≤ i,j ≤ 5, where x_0 = [0,0,0,0,0,H_0,0,K_v] is the disease free equilibrium state with H_0 and K_v being the initial population sizes of humans and mosquitoes, respectively, and obtain the following 5 × 5 matrices:

F = [ 0, 0, 0, 0, β_hv σ_v;
      0, 0, 0, 0, 0;
      0, 0, 0, 0, 0;
      0, β_vh σ_v K_v/H_0, β_vh σ_v K_v/H_0, 0, 0;
      0, 0, 0, 0, 0 ]

V = [ ν_h, 0, 0, 0, 0;
      -ν_h ψ, γ_h, 0, 0, 0;
      -ν_h (1-ψ), 0, γ_h, 0, 0;
      0, 0, 0, μ_v + ν_v, 0;
      0, 0, 0, -ν_v, μ_v ]

Hence, we calculate the reproductive number as:

ℛ_0 := ρ(FV^-1) = σ_v √(K_v β_hv β_vh ν_v) / √(H_0 γ_h μ_v (μ_v + ν_v)) = √(R_hv R_vh)

where ρ(A) represents the spectral radius of the matrix A, and we have defined the quantities R_hv = (ν_v/(μ_v + ν_v))(σ_v/μ_v) β_hv and R_vh = (K_v/H_0)(σ_v/γ_h) β_vh.

Here, R_hv is the expected number of secondary infections in a fully susceptible human population resulting from one newly introduced infected mosquito. It is composed of the product of three terms. The first term, ν_v/(μ_v + ν_v), represents the probability that an exposed mosquito will survive the extrinsic incubation period. The second term, σ_v/μ_v, is the number of human bites an infectious mosquito would make if humans were freely available. The third term, β_hv, is the probability of transmission occurrence given that a human is bitten by an infected mosquito. The number of secondary infections in a fully susceptible population of mosquitoes resulting from one newly introduced infected human is represented by R_vh. This value is also formed by the product of three terms. The first, K_v/H_0, is the vector-to-host ratio. The σ_v/γ_h term is the maximum number of bites an infectious human will experience before recovery without impediment to mosquito bites. Finally, given that a susceptible mosquito bites an infectious human, β_vh is the probability of transmission from human to mosquito. The type reproductive number, or expected number of secondary human cases resulting from one newly infectious human, is ℛ_0^T := (ℛ_0)^2 <cit.>.

§ INCIDENCE RATES AND PARAMETER ESTIMATION

Although the traditional approach when modeling disease dynamics is to assume the total population within a country is susceptible, we deviate from this convention. Specifically, we assume the susceptible population depends on the biology and bionomics of the Aedes species as well as the country's geography, and we uniquely calculate the at-risk population (described in the following section). We then use the at-risk population as the size of the initial susceptible population within simulations.
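Before fitting, it is useful to be able to simulate Model (<ref>) and evaluate ℛ_0 numerically. A minimal Python sketch follows; it assumes forces of infection of the standard form λ_h = σ_v β_hv I_v/N_h and λ_v = σ_v β_vh (I_r_h + I_h)/N_h, consistent with the linearization F above, and all numerical values are placeholders rather than the fitted estimates reported later.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Placeholder parameters within plausible ranges (not the fitted values)
p = dict(sigma_v=0.5, beta_hv=0.4, beta_vh=0.4, nu_h=1/6,
         nu_v=1/10, gamma_h=1/6, mu_v=1/14, psi=0.17)

def rhs(t, y, p, Nh, Nv):
    Sh, Eh, Irh, Ih, Rh, Sv, Ev, Iv = y
    lam_h = p['sigma_v'] * p['beta_hv'] * Iv / Nh          # force of infection on humans
    lam_v = p['sigma_v'] * p['beta_vh'] * (Irh + Ih) / Nh  # force of infection on mosquitoes
    return [-lam_h * Sh,
            lam_h * Sh - p['nu_h'] * Eh,
            p['psi'] * p['nu_h'] * Eh - p['gamma_h'] * Irh,
            (1 - p['psi']) * p['nu_h'] * Eh - p['gamma_h'] * Ih,
            p['gamma_h'] * (Irh + Ih),
            p['mu_v'] * Nv - lam_v * Sv - p['mu_v'] * Sv,
            lam_v * Sv - (p['nu_v'] + p['mu_v']) * Ev,
            p['nu_v'] * Ev - p['mu_v'] * Iv]

def R0(p, H0, Kv):
    # closed form derived from the next generation matrix above
    return p['sigma_v'] * np.sqrt(Kv * p['beta_hv'] * p['beta_vh'] * p['nu_v']
                                  / (H0 * p['gamma_h'] * p['mu_v'] * (p['mu_v'] + p['nu_v'])))

H0, Kv = 50000, 100000               # at-risk humans; mosquitoes set to 2 * H0
y0 = [H0 - 1, 0, 1, 0, 0, Kv, 0, 0]  # one reported infectious human at t = 0
sol = solve_ivp(rhs, (0, 365), y0, args=(p, H0, Kv))
print('R0 =', R0(p, H0, Kv))
```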
While the deterministic model (<ref>) captures the dynamics of the epidemic fairly well, we follow the opinion of the authors in <cit.>, namely that more attention should be paid to how uncertainty in parameter estimates might affect model predictions. Thus, in Section <ref>, we focus on incorporating and quantifying the uncertainty of the disease process and the parameter values by means of embedding the deterministic model into a stochastic process.

§.§ Calculated At-Risk Population

The data utilized herein was reported by the Pan American Health Organization (PAHO) <cit.>, and is given by the number of Zika cases, both confirmed and suspected, per week at the country level for Colombia, El Salvador and Suriname. Simulations performed by using the entire country population as the number of initially susceptible humans mischaracterized the disease dynamics, leading to overestimates in the final size of an epidemic; see Appendix (Figure <ref>). Since dengue and Zika occur in the same areas, share a common vector and have similar asymptomatic rates, we calculate the at-risk population size per country for a Zika outbreak based on historical data for dengue from 1995 to 2015 within Colombia, El Salvador, and Suriname <cit.>. The year of highest incidence for dengue provides an approximation for the number of susceptible individuals in a fully naïve at-risk population, which coincides with the dynamics of a newly emerging pathogen, such as Zika, that would spread rapidly in a completely susceptible population. The World Health Organization (WHO) released a report on dengue <cit.>, stating "...available results suggest that the actual number of cases of dengue may range from 3 to 27 times the reported [dengue] number." Hence, we use a scalar multiple of the reported number of dengue cases during the year of highest incidence to obtain a reasonable at-risk population count for the number of initial susceptible individuals with regard to a Zika epidemic. Within each of the three countries, the at-risk population value is strictly less than the total country population. Table <ref> indicates the year with the highest incidence rate and the reported number of cases for that year in each country of interest. See the Appendix (Figure <ref>) for all historical data on dengue incidence rates for Colombia, El Salvador and Suriname.

We simulated Model (<ref>) using the ode45 solver in MATLAB with chosen baseline parameter values (Table <ref>) while assuming that a single infectious human exists at the start of the epidemic. The at-risk population size for Colombia is 2.75 times the number of reported dengue cases from 2010. We estimated the multiplier for the at-risk population based on the best fit for the currently ongoing epidemic. The at-risk population size for El Salvador is 1.425 times the number of reported dengue cases from 2014, while the at-risk population size for Suriname is 7.75 times the number of reported dengue cases from 2005. The initial size of the susceptible mosquito population is double the at-risk population per country.

When starting our analysis with only one infected human, the simulated epidemic takes several weeks to ramp up to a detectable (reportable) level. If we begin simulations on the first day that a case is reported, the simulated peak occurs after the reported peak.
Thus, we use shifted initial conditions (Table <ref>) that correspond to the population state sizes obtained approximately on days 38, 118, and 50 from the original simulation using Model (<ref>) of the epidemics in Colombia, El Salvador and Suriname, respectively. We then repeat the process of simulating Model (<ref>) using the shifted initial conditions to obtain solutions for the number of reported cases whose peak reporting weeks align more closely in time with that of the data. The precise values of the shifted initial conditions for each country, found in Table <ref>, are of the form [S_h, E_h, I_r_h, I_h, R_h, S_v, E_v, I_v]. The number of reported cases from our simulations, using the calculated at-risk population and shifted initial conditions, is compared to the PAHO Zika data in Figure <ref>. The solutions, which are similar to the data in peak and general shape, identify reasonable initial conditions.

Although the parameter ranges considered are biologically reasonable, it is unknown which values within these ranges are the most accurate. To obtain an expected value for a given parameter and describe the associated uncertainty within this quantity, we embed the deterministic Model (<ref>) into a stochastic process enabling statistical inference. This process is described in Section <ref>.

§.§ An Embedded Stochastic Model

Previous literature has provided biologically relevant parameter ranges (see Table <ref>) for the model. However, to obtain insight into the distribution of these parameters across their ranges, we embed the deterministic system of ordinary differential equations (<ref>) within a discrete-time stochastic process to obtain a corresponding stochastic model (<ref>, <ref>) and perform an analysis in a Bayesian paradigm. This approach adds a probabilistic component to both the contact/infection process and the disease progression process by capturing uncertainty at each modeling stage, rather than just within the contact/infection process <cit.>. Therefore, embedding deterministic models into stochastic processes serves to (a) create a stochastic model informed by population-level dynamics which includes uncertainty in the entire disease process rather than just due to data collection, (b) more adequately capture uncertainty present in the modeling framework, which is particularly important in small- to medium-sized epidemics, and (c) provide a convenient framework for fast computation of model parameters.

The process by which we embed Model (<ref>) into a stochastic process <cit.> is summarized in the following paragraphs. Previously, we obtained specific rates for the transfer of both humans and mosquitoes between their respective compartments, and these can be used to inform the stochastic analogues of these parameters. Throughout, we impose that the new model be conservative, i.e.,
S_h + E_h + I_r_h + I_h + R_h = N_h and S_v + E_v + I_v = N_v, where N_h and N_v are the total human and mosquito populations, respectively. The assumption of conservation within the human population is plausible as the time span of the epidemic is much smaller than the average lifespan of a human. We hold the mosquito population as constant for the convenience of calculations and analysis. To approximate a continuous SEIR type model, we account for all events/transitions that may occur during the time interval (i, i+h_i] to be assigned to the related compartment on day i. Thus, the rate of change for a given population size can be approximated by the difference between the previous and current time steps. The model now takes the form

S_i+h_i = S_i - E^*_i+h_i
E_i+h_i = E_i + E^*_i+h_i - I^*_i+h_i
Ir_i+h_i = Ir_i + ψ I^*_i+h_i - RIr^*_i+h_i
I_i+h_i = I_i + (1-ψ) I^*_i+h_i - RI^*_i+h_i
R_i+h_i = R_i + RIr^*_i+h_i + RI^*_i+h_i
Sv_i+h_i = Sv_i + dEv^*_i+h_i + dIv^*_i+h_i - Ev^*_i+h_i
Ev_i+h_i = Ev_i + Ev^*_i+h_i - Iv^*_i+h_i - dEv^*_i+h_i
Iv_i+h_i = Iv_i + Iv^*_i+h_i - dIv^*_i+h_i

We index time by i, and the temporal offset, h_i, may not be constant but is known. All quantities represent counts, and quantities denoted by an asterisk represent transition counts. In regards to the terms ψ I^*_i+h_i and (1-ψ) I^*_i+h_i, the parameter ψ may yield values which are not integer counts. To be more liberal with the reporting size, we calculated the smallest integer not less than the corresponding value of ψ I^*_i+h_i. To be more conservative with the under-reporting size, we calculated the largest integer not greater than the corresponding value of (1-ψ) I^*_i+h_i. When individuals can transition into a compartment via multiple routes (e.g., recovered individuals can recover either with or without being reported as infected, RIr^*_i+h_i and RI^*_i+h_i), two letters are used to denote the transition. In these cases, the first letter denotes to which compartment the individual transitions and the second denotes the compartment from which the individual transitions. The transition compartments which represent birth/death counts of the mosquito population on day i+h_i are denoted dEv^*_i+h_i and dIv^*_i+h_i. Note that these values are drawn based on the calculated population sizes at time i+h_i. The compartments are labeled as follows: S - susceptible humans, E - latently infected humans, Ir - infectious reported humans, I - infectious unreported humans, R - recovered humans, Sv - susceptible mosquitoes, Ev - latently infected mosquitoes, Iv - infectious mosquitoes.

Finally, the stochastic components of the model are given by

E^*_i+h_i ∼ Bin(S_i, 1-exp(-λ_h h_i)),  I^*_i+h_i ∼ Bin(E_i, 1-exp(-ν_h h_i))
RIr^*_i+h_i ∼ Bin(Ir_i, 1-exp(-γ_h h_i)),  RI^*_i+h_i ∼ Bin(I_i, 1-exp(-γ_h h_i))
Ev^*_i+h_i ∼ Bin(Sv_i, 1-exp(-λ_v h_i)),  Iv^*_i+h_i ∼ Bin(Ev_i, 1-exp(-ν_v h_i))
dEv^*_i+h_i ∼ Bin(Ev_i+h_i, 1-exp(-μ_v h_i)),  dIv^*_i+h_i ∼ Bin(Iv_i+h_i, 1-exp(-μ_v h_i))
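A minimal sketch of one transition step of the stochastic model above is given next; the forces of infection λ_h and λ_v are assumed to take the same form as in the deterministic model, the floor/ceiling convention for reported versus unreported counts follows the description above, and all names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def step(state, p, h, Nh, Nv):
    Sh, Eh, Irh, Ih, Rh, Sv, Ev, Iv = state
    lam_h = p['sigma_v'] * p['beta_hv'] * Iv / Nh
    lam_v = p['sigma_v'] * p['beta_vh'] * (Irh + Ih) / Nh
    # binomial transition counts over the interval (i, i + h]
    E_new  = rng.binomial(Sh, 1 - np.exp(-lam_h * h))
    I_new  = rng.binomial(Eh, 1 - np.exp(-p['nu_h'] * h))
    RIr    = rng.binomial(Irh, 1 - np.exp(-p['gamma_h'] * h))
    RI     = rng.binomial(Ih, 1 - np.exp(-p['gamma_h'] * h))
    Ev_new = rng.binomial(Sv, 1 - np.exp(-lam_v * h))
    Iv_new = rng.binomial(Ev, 1 - np.exp(-p['nu_v'] * h))
    Ir_new = int(np.ceil(p['psi'] * I_new))   # liberal reported count
    Iu_new = I_new - Ir_new                   # conservative unreported count
    Ev_next, Iv_next = Ev + Ev_new - Iv_new, Iv + Iv_new
    # mosquito deaths, drawn from the updated counts, are replaced by
    # susceptible births so that Nv stays fixed
    dEv = rng.binomial(Ev_next, 1 - np.exp(-p['mu_v'] * h))
    dIv = rng.binomial(Iv_next, 1 - np.exp(-p['mu_v'] * h))
    return (Sh - E_new, Eh + E_new - I_new, Irh + Ir_new - RIr,
            Ih + Iu_new - RI, Rh + RIr + RI,
            Sv + dEv + dIv - Ev_new, Ev_next - dEv, Iv_next - dIv)
```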
§ ABC ALGORITHM AND COMPUTATION

Previous investigations <cit.> have established biologically accepted ranges for the parameters Θ = [σ_v, β_hv, ν_h, ψ, β_vh, ν_v, μ_v] used within (<ref>), but the conversion of Model (<ref>) to Model (<ref>, <ref>) will incorporate uncertainty into the disease process, and thus the values of these parameters. The stochastic model (<ref>, <ref>) also allows for Bayesian inference on parameter posteriors, which have the form

f(Θ | Y) ∝ f(Y | Θ) π(Θ)

where f(Y | Θ) is the stochastic data model, which for fixed values of Θ can be used to generate the random epidemic process, and π(Θ) is the prior distribution of the parameters. Thus, given both the data model and the prior distribution, we are able to calculate the posterior distribution f(Θ | Y) up to a proportionality constant. This serves to update the distribution of the biologically accepted parameter ranges based on the actual epidemic data, which we denote as Y. The data model is constructed as a product of binomials with different sample sizes and probabilities across every time point.

Determining a Maximum Likelihood Estimate (MLE) to provide a point estimate and standard deviation for parameters may yield estimates outside of the valid biological ranges. In SEIR models, Markov Chain Monte Carlo (MCMC) methods produce parameter autocorrelations in chains, which becomes problematic for tuning. We can avoid these obstacles by using Approximate Bayesian Computation (ABC). This method was introduced by <cit.> to obtain an approximation of the true posterior distribution, f(Θ | Y). ABC samples from the posterior by randomly selecting parameter values from the prior that could adequately generate the data. In particular, random draws, Θ^*, from the prior distribution produce generated data sets, X, which are then compared to a given epidemic data set, Y, by means of a chosen distance metric, ρ(X,Y). Those values of Θ^* that generate data sets which fit the given data will be accepted as valid draws from the posterior distribution, and this implicitly conditions the posterior on Y. The algorithm of the computation is as follows, where N is the total number of accepted generated data sets X:

For j ≤ N
* Draw Θ^* ∼ Unif(a_k, b_k), where (a_k, b_k) is the corresponding parameter range found in Table <ref> for all k = 1,...,7.
* Generate X, time series data of the number of reported infectives, from Model (<ref>, <ref>)
* Calculate fitness of data using ρ(X,Y)
* Set Θ_[j] ← Θ^* if ρ(X,Y) ≤ ϵ and set j ← j+1; else return to Step 1.

The metric, ρ(X,Y), is considered a distance between the generated data set, X, and the observed data set, Y, typically based on sufficient statistics of the parameter space Θ. If ρ(X,Y) is small enough, i.e., ρ(X,Y) ≤ ϵ for some fixed small value ϵ > 0, then

f(Θ | ρ(X,Y) ≤ ϵ) ≈ f(Θ | Y).

One often uses sufficient statistics in defining ρ(X,Y).
These are statistics for which the data distribution conditioned on the sufficient statistic is free of Θ (e.g., the sufficient statistic contains all information about Θ that the full data set contains). By definition, the full data set is a sufficient statistic for Θ. Since a sufficient statistic of lower dimension than the full data set cannot be readily computed for our model, we consider a pointwise envelope metric comparing, point by point, the L_1-norm at every time step. Thus, if every data point of X and Y is close, then it directly follows that any statistics computed from X and Y, including sufficient statistics, will also be close.

For the ABC method, the exact posterior distribution of Θ can be found by accepting the simulated data set in which X = Y. This is computationally infeasible, however, so we instead consider X, such that X_t/Y_t ∈ (ϵ_1, ϵ_2) for all time steps t with a peak week and epidemic duration corresponding to those of Y, as an acceptable epidemic. We call (ϵ_1, ϵ_2) the envelope of tolerance around the observed data set, Y, which is a common method for assessing a stochastic SEIR model fit <cit.>.

In Figure <ref>, the observed data set is compared to randomly generated data sets without calculating a metric for validation of the drawn parameter values. Figure <ref> shows that many randomly drawn and biologically accepted values of Θ generate epidemic outcomes which have distinctly different characteristics than the observed epidemic. Conversely, Figure <ref> demonstrates that the accepted Θ^* values generate epidemics with similar characteristics to those of the observed epidemic in regards to the total number of new infections each day, peak week occurrence, and peak value. Figures <ref>(b) and <ref>(c) show generated epidemics similar to the data in both peak and duration. Note that this does not occur in Figure <ref>(a). The accepted parameter values obtained using the ABC method yield outbreaks which vary greatly in peak occurrence, leading us to conclude that the dynamics of the country-level data are different from those in the deterministic model. Therefore, the ABC method does not generate infectious curves similar to the Colombia data. Figure <ref>(a) was generated from acceptances using an envelope of (1/20, 20). A tighter envelope of (1/10, 10) was computed; however, the method ran for approximately 35 days to obtain the same number of acceptances as the (1/20, 20) envelope, with no visible changes in the same plot as Figure <ref>(a). Because acceptances for tighter envelopes are not possible, we can conclude the model is improperly specified to capture the dynamics of Colombia at the country level.
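For concreteness, a hedged sketch of the acceptance rule and rejection loop described above is given below; simulate_reported stands in for a simulator of weekly reported cases from Model (<ref>, <ref>), and the peak-week and duration checks are simplified relative to the full implementation.

```python
import numpy as np

def accept(X, Y, eps1, eps2):
    # pointwise envelope: X_t / Y_t in (eps1, eps2) at every time step,
    # with matching peak week; duration is implicit in the common length.
    # Weeks with zero observed counts are guarded by a floor of 1 case.
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    ratio = X / np.maximum(Y, 1.0)
    return (np.all((ratio > eps1) & (ratio < eps2))
            and np.argmax(X) == np.argmax(Y))

def abc_reject(Y, prior_ranges, simulate_reported, N, eps1=1/20, eps2=20):
    accepted = []
    while len(accepted) < N:
        theta = {k: np.random.uniform(a, b) for k, (a, b) in prior_ranges.items()}
        X = simulate_reported(theta, len(Y))
        if accept(X, Y, eps1, eps2):
            accepted.append(theta)
    return accepted
```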
Reasons for this outcome are discussed in Section <ref>.

Figure <ref> depicts the effect of generating accepted epidemics with smaller and smaller envelopes of tolerance on the posterior distributions. We plot the kernel densities of both the reporting rate, ψ, and reproductive number, ℛ_0, distributions for El Salvador and Suriname. While only two parameters are shown here, the same observations are found in the kernel density plots of the other parameters of interest. We see in Figure <ref> that envelopes of various sizes produce similar but different kernel densities, with the tighter envelopes producing a more peaked density distribution. A slight shift in the peak of the kernel densities is observed in Figures <ref>(a) and <ref>(d). This would indicate the need to shrink the envelope further to obtain an estimate that is closer to the true posterior distribution. However, attempts to generate data sets satisfying tighter envelopes could not produce acceptances, indicating that these envelope sizes are the best possible fits of the stochastic SEI_rIRS_vE_vI_v model to the observed country-level data. Further analysis would require refining the dynamics of Model (1) and Model (2,3) or using data at a more refined spatial scale.

§ RESULTS

We assigned uniform distributions to the accepted biological ranges found in Table <ref>. After 10,000 acceptances from the ABC algorithm, the histograms and kernel densities for selected parameters from El Salvador are given in Figures <ref> and <ref>. Selected parameters from Suriname are given in Figures <ref> and <ref>. Figure <ref> displays the number of reported cases from the generated epidemics based on 10,000 accepted parameter values. The time series of the total number of cases during the generated epidemics, both reported and unreported infectives, is found in Figure <ref>. The histogram and kernel density plots of the total number of Zika virus cases, reported and unreported, are shown in Figure <ref>.

§ CONCLUSIONS AND DISCUSSION

Within the present study, a new stochastic model was formulated to describe the spread of the Zika virus within Colombia, Suriname, and El Salvador. The variability in per capita susceptibility within each country was introduced by uniquely calculating the at-risk population based on historical data for dengue virus, whose epidemic characteristics are similar to those of Zika. The initial population state sizes were fit to this data in order to create values of weekly reported cases of Zika, both suspected and confirmed, similar to those reported by the PAHO, using baseline values of parameters to evaluate Model (<ref>). Once the at-risk population was estimated, the initial population state sizes for these nations were fixed in order to estimate parameter values and obtain more informative distributions than the accepted biological parameter ranges. The deterministic system (<ref>) was then embedded into a stochastic process to obtain a more general stochastic model (<ref>, <ref>). For each of the three nations, an ABC algorithm was implemented using (<ref>, <ref>) to compute approximate posterior distributions of the parameters conditioned on the data. By obtaining these posterior distributions, the uncertainty in parameter values for each country can be quantified by way of informative statistics, such as the mean, mode, median and variance, to more accurately describe rates within the system.
Properties of the disease within El Salvador and Suriname were accurately described by the model. El Salvador is estimated to have a mean reporting rate of 16.5%, which is near the upper bound of the previous estimate of <cit.>, with a credible interval of [12.5%, 22%]. The mean values of the forcing terms, λ_h and λ_v, are approximately 0.28 and 0.31 with credible intervals of [0.0191, 0.7782] and [0.0119, 0.9244], respectively. Suriname has a mean reporting rate greater than the predicted interval of <cit.>, at 18.8%, with a credible interval of [13%, 27%]. The forcing terms of Suriname, λ_h and λ_v, had mean values of 0.17 and 0.43 with credible intervals of [0.0165, 0.4054] and [0.0429, 1.1241], respectively.

The basic reproductive number, ℛ_0, was defined such that ℛ_0^2 yields the number of secondary human infections within a fully susceptible population arising from a single new human infection. In Suriname, the mean value of ℛ_0^2 was 5.31, while the mean value in El Salvador was 4.35. These quantities are similar to other predictions for the mean of ℛ_0^2 found in <cit.>. Though this analysis has provided additional insight into the spread of the disease, our methods were unable to accurately estimate the aforementioned statistics for Colombia (see Section <ref>), as the ABC method revealed a poor fit for the data obtained from this nation. One possible reason for this poor fit could be the appearance of a second peak in the epidemic; see Figure <ref>(a). While there is a distinct and large peak during EW 26, a smaller second peak occurs during EW 34. This second peak may result from reporting error, as this data was in no way cleaned once obtained from the PAHO website. Another possible explanation is a second outbreak in a disjoint location from the first outbreak. In particular, the extreme topographical variations within Colombia could lead to a delay in the spread of the disease across the entire country. Thus, we conclude that the epidemiological characteristics of Zika in Colombia must be studied at a more granular spatial level, differentiating amongst regions or even counties and cities.

We also estimated the total number of infected people in El Salvador to be 72,721 (with an estimated 12,107 reported cases) and the total number of people infected in Suriname to be 21,390 (with an estimated 4,132 reported cases). So, about 95% of the at-risk (high-risk) populations were infected by the end of the outbreak. The predicted reported country-level incidence for El Salvador is 0.0019 and for Suriname is 0.0073. However, the true country-level incidence for El Salvador is predicted to be 0.0119 while for Suriname it is 0.0396, assuming total populations of 6,117,145 and 540,612, respectively. From a public health perspective, our model indicates that about 6 times as many people were infected as were reported in El Salvador and 5 times as many as were reported in Suriname. Depending on the percent of the at-risk population that was pregnant during the outbreak, our model suggests that a larger number of birth defects than indicated by the reported number of cases could be expected in these countries. Interestingly, although the values for the mosquito extrinsic incubation period and mosquito lifespan had relatively wide ranges even among mean, median, and mode values, the probability of a mosquito surviving the incubation period (Tables 12 and 13) was quite similar across statistics and countries at about 60%.
This corresponds to the extrinsic incubation period lasting around 2/3 of the average mosquito lifespan. A major difference between predicted parameter values for El Salvador and Suriname occurs in the transmission probabilities. While the values for β_hv and β_vh for El Salvador were similar (both estimated to be about 0.42), in Suriname, the mosquito-to-human probability of transmission, β_hv, was consistently less than half that of the human-to-mosquito transmission, β_vh, with medians of 0.21 and 0.56, respectively. Since these terms capture many intrinsic uncertainties in the transmission process, it is hard to interpret the meaning of this difference. It could be an artifact of the model or could indicate a reduced efficiency of the mosquitoes in Suriname in passing on the virus.

In conclusion, our research provides important parameter estimates for the spread of Zika in El Salvador and Suriname, along with uncertainty quantification and credible intervals for those parameters. We estimated the basic and type reproduction numbers and the total number of people infected - quantities needed to inform assessments of economic cost and risk, among other factors. We found that the type reproduction number is higher in Suriname than in El Salvador, indicating a higher risk in Suriname. This could be explained by differences in climate between the two countries or in other socio-economic or geographic factors affecting mosquito-borne disease transmission. Additionally, our methods could be applied to other countries or regions experiencing outbreaks to estimate region-specific parameters and provide decision makers with important information about surveillance and control both at present and in the future. Another advantage of this method is that it can indicate what scale is appropriate for these calculations. For example, we found that a country-level analysis of the Colombia data was not appropriate. In the future, it would be interesting to apply the model to a regional Colombia data set.

In future studies, the initial population state sizes obtained in the calculations of the respective at-risk populations could be considered parameters themselves. Hence, one could perform a statistical analysis on acceptable ranges for such initial values to quantify the uncertainty of these populations. For instance, this could be done by generating a posterior distribution for the initial susceptible population for each country, as well as the parameters in the system. Additionally, the mosquito population for each country was assumed to be constant for convenience. However, utilizing a more realistic model of the total mosquito population which changes in time (similar to the methods in <cit.>) may yield different results, and this would also be a suitable direction for future research. Finally, the recovery period, 1/γ_h, was held constant in the current study due to the assumption that its mean value had been medically established. Still, another investigation using the methods developed herein, but considering a uniformly distributed prior for the recovery period, may provide better insight into the distribution and expected value of this period. We conclude that additional studies are needed to fully understand Zika virus transmission dynamics.
However, our research suggests that the reporting rates in El Salvador and Suriname are quite low, and thus additional surveillance systems may be needed to measure the true burden of Zika in these countries.

§ ACKNOWLEDGMENTS

This work was supported by NSF SEES grant CHE - 1314029, NSF RAPID (DEB 1641130), and NSF EDT grant DMS-1551229. SP was partially supported by NSF under grant DMS-1614586. SD was partially supported by NIH/NIGMS/MIDAS under grant U01-GM097658-01. LANL is operated by Los Alamos National Security, LLC for the Department of Energy under contract DE-AC52-06NA25396. Approved for public release: LA-UR-17-20963. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

§ APPENDIX

§.§ Results of Model <ref> using total country populations as the initial susceptible population sizes

§.§ Other parameter distributions

§.§ Other Reproductive Number distributions
http://arxiv.org/abs/1702.08560v1
{ "authors": [ "Deborah P. Shutt", "Carrie A. Manore", "Stephen Pankavich", "Aaron T. Porter", "Sara Y. Del Valle" ], "categories": [ "q-bio.PE", "q-bio.QM", "stat.AP", "92D30" ], "primary_category": "q-bio.PE", "published": "20170227222203", "title": "Estimating the reproductive number, total outbreak size, and reporting rates for Zika epidemics in South and Central America" }
Efficient Learning of Mixed Membership Models

Zilong Tan
Department of Computer Science, Duke University
Email: ztan@cs.duke.edu

Sayan Mukherjee
Departments of Statistical Science, Computer Science, Mathematics, Biostatistics & Bioinformatics, Duke University
Email: sayan@stat.duke.edu

We present an efficient algorithm for learning mixed membership models when the number of variables p is much larger than the number of hidden components k. This algorithm reduces the computational complexity of state-of-the-art tensor methods, which require decomposing an O(p^3) tensor, to factorizing O(p/k) sub-tensors each of size O(k^3). In addition, we address the issue of negative entries in the empirical method of moments based estimators. We provide sufficient conditions under which our approach has provable guarantees. Our approach obtains competitive empirical results on both simulated and real data.

§ INTRODUCTION

Mixed membership models <cit.> have been used extensively across applications ranging from modeling population structure in genetics <cit.> to topic modeling of documents <cit.>. Mixed membership models use Dirichlet latent variables to define cluster membership where samples can partially belong to each of k latent components. Parameter estimation for such latent variable models (LVMs) using maximum likelihood methods such as expectation maximization is computationally intensive for large data, for example, if the number of samples n is large.

Parameter estimation using the method of moments for LVMs is an attractive scalable alternative that has been shown to have certain theoretical and computational advantages over maximum likelihood methods in the setting when n is large. For LVMs, method of moments approaches reduce to tensor methods—the moments of the model parameters are expressed as a function of statistics of the observations in a tensor form. Inference in this setting becomes a problem of tensor factorization. Computational advantages of using tensor methods have been observed for many popular models, including latent Dirichlet allocation <cit.>, spherical Gaussian mixture models <cit.>, hidden Markov models <cit.>, independent component analysis <cit.>, and multi-view models <cit.>. An appealing property of tensor methods is the guarantee of a unique decomposition under mild conditions <cit.>.

There are two complications to using standard tensor decomposition methods <cit.> for LVMs. The first problem is computation and space complexity. Given p variables in the LVM, parameter inference requires factorizing a typically non-orthogonal estimator tensor of size O(p^3) <cit.>, which is prohibitive for large p. When the estimator is orthogonal and symmetric, this can be done in O(p^2 log p) <cit.>. Online tensor decomposition <cit.> uses dimension reduction to instead factorize a reduced k-by-k-by-k tensor. However, the dimension reduction can be slower than decomposing the estimator directly for large sample sizes, as well as suffer from high variance <cit.>. We introduce a simple factorization with improved complexity for the general case where the parameters are not required to be orthogonal.

The second problem arises from negative entries in the empirical moments tensor.
LVMs for count data are constrained to have nonnegative parameters. However, the empirical moments tensor computed from the data may contain negative elements due to sampling variation and noise. Indeed, for small sample sizes or data with many small or zero counts, there will be many negative entries in the empirical moments tensor. General tensor decomposition algorithms <cit.>, including the tensor power method (TPM) <cit.>, do not guarantee the nonnegativity of model parameters. Approaches such as positive/nonnegative tensor factorization <cit.> also do not address this situation, as they require all the elements of the tensor to be factorized to be nonnegative. With robust tensor methods <cit.>, sparse negative entries may potentially be treated as corrupted elements; however, these methods are not applicable in this setting since there can be many negative elements.

In this paper, we introduce a novel parameter inference algorithm called partitioned tensor parallel quadratic programming (PTPQP) that is efficient in the setting where the number of variables p is much larger than the number of latent components k. The algorithm is also robust to negative entries in the empirical moments tensor. There are two key innovations in the PTPQP algorithm. The first innovation is a partitioning technique which recovers the parameters through factorizing O(p/k) much smaller sub-tensors, each of size O(k^3). This technique can also be combined with methods <cit.> to obtain further improved complexities. The second innovation is a parallel quadratic programming <cit.> based algorithm to factor tensors with negative entries under the constraint that the factors are all nonnegative. To the best of our knowledge, this is the first algorithm designed to address the problem of negative entries in empirical estimator tensors. We show that the proposed factorization algorithm converges linearly with respect to each factor matrix. We also provide sufficient conditions under which the partitioned factorization scheme is consistent, i.e., the parameter estimates converge to the true parameters.

§ PRELIMINARIES

Notations. We use bold lowercase letters to represent vectors and bold capital letters for matrices. Tensors are denoted by calligraphic capital letters. The subscript notation A_j refers to the j-th column of matrix A. We denote the j-th column of the identity matrix as e_j, and 1 is a vector of ones. We further write diag(x) for a diagonal matrix whose diagonal entries are x, and diag(A) to mean a vector of the diagonal entries of A. Element-wise matrix operators include ≻ and ≽; e.g., A ≽ 0 means that A has nonnegative entries. (·)_+ refers to element-wise max(·, 0). * and ⊘ respectively represent element-wise multiplication and division. Moreover, × refers to the outer product and ⊙ denotes the Khatri-Rao product. ‖·‖_F and ‖·‖_2 represent the Frobenius norm and spectral norm, respectively.

Tensor basics. This paper uses similar tensor notation to <cit.>. In particular, we are primarily concerned with Kruskal tensors in ℝ^d_1 × d_2 × d_3, which can be expressed in the form

𝒯 = ∑_j=1^r A_j × B_j × C_j,

where A, B, and C are respectively d_1-by-r, d_2-by-r, and d_3-by-r factor matrices. The rank of 𝒯 is defined as the smallest r that admits such a decomposition. The decomposition is known as the CP (CANDECOMP/PARAFAC) decomposition. The j-mode unfolding of 𝒯, denoted by 𝒯_(j), for j = 1,2,3 is a d_j-by-(∏_t≠j d_t) matrix whose rows are serializations of the tensor fixing the index of the j-th dimension.
The unfoldings have the following well-known compact expressions:

𝒯_(1) = A(C ⊙ B)^⊤, 𝒯_(2) = B(C ⊙ A)^⊤, 𝒯_(3) = C(B ⊙ A)^⊤.

§ LEARNING THROUGH METHOD OF MOMENTS

§.§ Generalized Dirichlet latent variable models

A generalized Dirichlet latent variable model (GDLM) was proposed in <cit.> for the joint distribution of n observations y_1, y_2, ⋯, y_n. Each observation y_i consists of p variables y_i = (y_i1, y_i2, ⋯, y_ip)^⊤. GDLM assumes a generative process involving k hidden components. For each observation, sample a random Dirichlet vector x_i = (x_i1, x_i2, ⋯, x_ik)^⊤ ∈ Δ^k-1 with concentration parameter α = (α_1, α_2, ⋯, α_k)^⊤. The elements of x_i are the membership probabilities for y_i to belong to each of the k components. Specifically,

y_ij ∼ ∑_h=1^k x_ih g_j(θ_jh),

where g_j(θ_jh) is the density of the j-th variable specific to component h with parameter θ_j = (θ_j1, θ_j2, ⋯, θ_jk). One advantage of GDLM is that y_ij can take categorical values. Let d_j denote the number of categories for the j-th variable (set d_j = 1 for scalar variables); θ_j then becomes a d_j-by-k probability matrix where the c-th row corresponds to category c. We aim to accurately recover θ_j from independent copies of y_i involving variables of mixed data types, either categorical or non-categorical.

§.§ Moment-based estimators

The moment estimators of latent variable models typically take the form of a tensor <cit.>. Consider the estimators of GDLM <cit.> for example. Let b_ij = e_y_ij if variable j is categorical; b_ij = y_ij otherwise. The second- and third-order parameter estimators for variables j, s, and t are written

ℳ^js = 𝔼[b_ij × b_is] - α_0/(α_0 + 1) 𝔼[b_ij] 𝔼[b_is]^⊤

ℳ^jst = 𝔼[b_ij × b_is × b_it] + 2α_0^2/((α_0 + 1)(α_0 + 2)) 𝔼[b_ij] × 𝔼[b_is] × 𝔼[b_it] - α_0/(α_0 + 2) (𝔼[𝔼[b_ij] × b_is × b_it] + 𝔼[b_ij × 𝔼[b_is] × b_it] + 𝔼[b_ij × b_is × 𝔼[b_it]]).

Alternatively, ℳ^js and ℳ^jst have the following CP decomposition into the parameters θ_j:

ℳ^js = ∑_h≥1 α_h/(α_0 (α_0 + 1)) θ_jh × θ_sh, θ_uv ∈ ℝ^d_u

ℳ^jst = ∑_h≥1 2α_h/(α_0 (α_0 + 1)(α_0 + 2)) θ_jh × θ_sh × θ_th, θ_uv ∈ ℝ^d_u.

<ref> provides the derivation details of these estimators. For the special case of latent Dirichlet allocation, ℳ^js and ℳ^jst are scalar joint probabilities.

The parameters θ_j are typically obtained by factorizing the block tensor ℳ_2, whose (j,s)-th element is the empirical ℳ^js, and/or ℳ_3, whose (j,s,t)-th element is the empirical ℳ^jst <cit.>. Note that the θ_j are generally non-orthogonal, and thus preprocessing steps (see <ref>) are needed for orthogonal decomposition methods <cit.>. The preprocessing can be expensive and often leads to suboptimal performance <cit.>. Here, we highlight a few relevant observations:

* ℳ^js alone does not yield unique parameters θ_j due to the well-known rotation problem. Suppose that θ_j^* and θ_s^* are the ground-truth parameters satisfying (<ref>); for any invertible R, there exists a decomposition θ_j^' = θ_j^* R and θ_s^' = R^-1 θ_s^* that also satisfies (<ref>) but does not recover the ground-truth parameters. The ground-truth parameters are thus not uniquely identifiable through ℳ^js; this is true even when enforcing nonnegativity constraints on the parameters <cit.>.

* ℳ^jst is sufficient to uniquely recover the parameters under certain mild conditions <cit.>; for example, when any two of θ_j, θ_s, and θ_t have linearly independent columns and the columns of the third are pair-wise linearly independent <cit.>.

* The empirical estimator ℳ^jst generally contains negative entries due to variance and noise. The fraction of negative entries can approach 50%, as we shall see in experiments.
We address this issue in <ref>.

* While the decomposition (<ref>) can be unique up to permutation and rescaling, the correspondence between each column of the factor matrix and each hidden component may not be consistent across multiple decompositions. Techniques for achieving consistency are developed in <ref>.

§.§ Computational complexity

Tensor methods such as TPM typically decompose the O(p^3 d_max^3) full estimator tensor that includes all variables. More efficient algorithms have been developed for the case that the parameters are orthogonal <cit.>, and when the sample size is small <cit.>. However, these methods do not apply in the general case where the parameters are non-orthogonal and the sample size can be potentially large. A key insight underlying our approach is that it is sufficient to recover the parameters by factorizing only O(p/k) much smaller sub-tensors, each of size O(k^3). This technique can also be combined with the aforementioned methods to further improve the complexity in certain cases.

§ AN EFFICIENT ALGORITHM

In this section, we develop partitioned tensor parallel quadratic programming (PTPQP), an efficient approximate algorithm for learning mixed membership models. We first introduce a novel partitioning-and-matching scheme that reduces parameter estimation to factorizing a sequence of sub-tensors. Then, we develop a nonnegative factorization algorithm that can handle negative entries in the sub-tensors.

§.§ Partitioned factorization

Factorizing the full tensor formed by all ℳ^jst is expensive, while a three-variable tensor ℳ^jst in (<ref>) alone may not be sufficient to determine θ_j when k is large. In this section, we consider factorizing the sub-tensors corresponding to a cover of the set of variables [p] such that each sub-tensor admits an identifiable CP decomposition (<ref>), i.e., unique up to permutation and rescaling of columns. This gives the parameters for all variables. Suppose that p > k and the maximum number of categories d_max is a constant; the aggregated size of the sub-tensors can then be much smaller, i.e., O(p k^2), than the size O(p^3) of the full estimator.

Let π^j, π^s, and π^t denote ordered subsets ⊆ [p], with cardinalities |π^j| = p_j, |π^s| = p_s, and |π^t| = p_t, respectively. Consider the p_j-by-p_s-by-p_t block tensor [For block tensor operations, see e.g., <cit.>.] ℳ^π^jπ^sπ^t whose (u,v,w)-th element is the tensor ℳ_uvw^π^jπ^sπ^t = ℳ^π_u^jπ_v^sπ_w^t. From (<ref>), we have that

ℳ^π^jπ^sπ^t = ∑_h=1^k 2α_h/(α_0(α_0 + 1)(α_0 + 2)) [θ_π_1^jh; θ_π_2^jh; ⋮; θ_π_p_j^jh] × [θ_π_1^sh; θ_π_2^sh; ⋮; θ_π_p_s^sh] × [θ_π_1^th; θ_π_2^th; ⋮; θ_π_p_t^th].

Clearly, the block tensor is identifiable if it has an identifiable sub-tensor. Suppose that a sub-tensor ℳ^π^uπ^vπ^w is identifiable; then one can construct an identifiable tensor ℳ^π^j'π^s'π^t' from ℳ^π^jπ^sπ^t by setting

π^j' = π^j ∪ π^u, π^s' = π^s ∪ π^v, π^t' = π^t ∪ π^w.

We further remark that a sub-tensor can be identifiable under mild conditions, for example, if the sum of the Kruskal ranks of the three factor matrices is at least 2k + 2 <cit.>. Given an identifiable sub-tensor ℳ^π^uπ^vπ^w of anchor variables indexed by π^u, π^v, and π^w, the partitioning produces a set of sub-tensors (partitions) constructed through (<ref>) that includes all variables. Thus, ℳ^π^uπ^vπ^w is a common sub-tensor shared across all partitions. We choose anchor variables whose parameter matrices are of full column rank to obtain an identifiable ℳ^π^uπ^vπ^w.
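To fix ideas, here is a small sketch of the partition construction (index bookkeeping only, with names of our choosing); it already includes the even, random division of the non-anchor variables into partitions that is described next.

```python
import numpy as np

def make_partitions(p, anchors, part_size, seed=0):
    # anchors: (pi_u, pi_v, pi_w) index lists shared by every partition;
    # the remaining variables are divided evenly and at random
    rng = np.random.default_rng(seed)
    used = set(anchors[0]) | set(anchors[1]) | set(anchors[2])
    rest = rng.permutation([j for j in range(p) if j not in used])
    parts = []
    for chunk in np.array_split(rest, max(1, len(rest) // part_size)):
        third = len(chunk) // 3
        pi_j = list(anchors[0]) + list(chunk[:third])
        pi_s = list(anchors[1]) + list(chunk[third:2 * third])
        pi_t = list(anchors[2]) + list(chunk[2 * third:])
        parts.append((pi_j, pi_s, pi_t))
    return parts
```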
The remaining variables are then divided evenly and randomly among the partitions, as in the sketch above.

§.§ Matching parameters with hidden components

Since the factorization of a partition (<ref>) can only be identifiable up to permutation and rescaling of the columns of the constituent θ_j, the correspondence between the columns of θ_j and the hidden components can differ across partitions. To enforce consistency, we associate a permutation operator ψ^j with each variable j such that (ψ^j θ_j)_h are the parameters specific to hidden component h across all variables j. Consider the following vector representation of ψ:

ψ= (ψ_1, ψ_2, ⋯, ψ_k), ψ_i ∈[k]

ψA = [A_ψ_1, A_ψ_2, ⋯, A_ψ_k].

Observe that ψ^j=ψ^s=ψ^t within a factorization of ℳ^jst, and this holds for the partitioned factorization (<ref>) of ℳ^π^jπ^sπ^t as well, i.e., ψ^x = ψ^y, ∀ x, y ∈π^j∪π^s∪π^t.

Consider the factorizations of ℳ^π^jπ^sπ^t and ℳ^π^uπ^vπ^w and suppose that ∃ x∈(π^j∪π^s∪π^t) ∩(π^u∪π^v∪π^w). The permutation operator for one factorization is determined given the other by column-matching the parameters of variable x in both factorizations. Thus, an inductive way to achieve a consistent factorization is to start with one factorization, let its permutation be the identity (1,2,⋯,k), and then perform factorizations over new sets of variables, each with at least one variable in common with a previously factorized set. Permutations for the sequential factorizations are determined via column-matching the parameter matrices of the common variables.

Given two factorized parameter matrices θ_j and θ_j^' of variable j, our goal is to find a consistent permutation ψ (of θ_j with respect to θ_j^') such that (ψθ_j)_h and θ_jh^' correspond to the same hidden component for all h ∈[k]. We now present an algorithm with provable guarantees to compute a consistent permutation.

Smallest angle matching A simple matching algorithm is to match the two columns of the two parameter matrices that have the smallest angle between them. Consider the factorizations of ℳ^jst and ℳ^juv, which yield respectively parameters θ_j and θ_j^' for the common variable j. Given the permutation ψ^j for ℳ^jst, the permutation ψ^u for ℳ^juv is computed by:

ψ_s^u = arg max_t ( θ̅_j^'⊤ψ^j θ̅_j )_ts.

Here, θ̅_j and θ̅_j^' represent respectively the normalized θ_j and θ_j^' with each column having unit Euclidean norm. There are cases in which the ψ^u computed via (<ref>) is not consistent: 1) ψ^u contains duplicate entries and hence is ineligible; and 2) since θ_j and θ_j^' are factorized parameter matrices, which are generally perturbed from the ground truth, the resulting ψ^u may differ from the consistent permutation. To cope with these cases, we establish in <ref> sufficient conditions for ψ^u to be consistent.

Orthogonal Procrustes matching One issue with the smallest angle matching is that each column is paired independently; it is easy for multiple columns to be paired with a common nearest neighbor. We describe a more robust algorithm based on the orthogonal Procrustes problem, and show improved guarantees. Since a consistent permutation is orthogonal, a natural relaxation is to require the operator only to be orthogonal. This is an orthogonal Procrustes problem, formulated in the same settings as <ref>:

min_Ψ‖θ̅_j^'Ψ - ψ^j θ̅_j‖_F^2, s.t. Ψ^⊤Ψ = I.

Let θ̅_j^'⊤ψ^j θ̅_j = UΣV^⊤ be the singular value decomposition (SVD); the solution Ψ^* is then given by the polar factor <cit.>

Ψ^*= UV^⊤.

Here, Ψ^* is orthogonal and does not immediately imply the desired permutation ψ^u.
To compute ψ^u, one can additionally restrict Ψ to be a permutation matrix and solve for ψ^u using linear programming <cit.>. Aside from efficiency, one fundamental question is under what assumptions the objective (<ref>) yields the consistent permutation.

Given the solution Ψ^* to the Procrustes problem, we propose the following simple algorithm for computing ψ^u:

ψ_s^u = arg max_t Ψ_ts^* .

We first establish through <ref> that if the ψ^u obtained using (<ref>) is a valid permutation, i.e., has no duplicate entries, then it is optimal in terms of the objective (<ref>).

The ψ^u obtained using (<ref>) satisfies

‖ψ^uθ̅_j^' - ψ^j θ̅_j‖_F^2 ≤‖ψθ̅_j^' - ψ^j θ̅_j‖_F^2

for all permutations ψ.

First, rewrite the objective (<ref>) as follows

‖θ̅_j^'Ψ - ψ^j θ̅_j‖_F^2 = tr(Ψ^⊤θ̅_j^'⊤θ̅_j^'Ψ) + tr((ψ^jθ̅_j)^⊤ψ^jθ̅_j) - 2tr(Ψ^⊤θ̅_j^'⊤ψ^jθ̅_j) = ‖θ̅_j^'‖_F^2 + ‖θ̅_j‖_F^2 - 2tr(Ψ^⊤θ̅_j^'⊤ψ^jθ̅_j).

Recall the SVD θ̅_j^'⊤ψ^jθ̅_j = UΣV^⊤, and write Ψ = UV^⊤ + E. Keeping only the terms that depend on E in (<ref>) yields -2tr(E^⊤UΣV^⊤). Thus, the optimization (<ref>) is equivalent to

max_E tr(V^⊤E^⊤UΣ), s.t.(UV^⊤ + E)^⊤(UV^⊤ + E) = I.

From the constraint, we obtain tr(E^⊤EΣ) = -2tr(V^⊤E^⊤UΣ). The optimization now becomes

min_E tr(E^⊤EΣ) = min_E∑_j (E^⊤E)_jjΣ_jj.

Let us now restrict each column of Ψ to be in {e_j|j=1,2,⋯,k }, but not necessarily distinct. Suppose that Ψ_j = e_y. We have that E_j = e_y - (UV^⊤)_j. Clearly, (<ref>) and hence (<ref>) are minimized with y = arg max_t (UV^⊤)_tj.

In section <ref> we state sufficient conditions under which the objective (<ref>) yields a consistent permutation.

§.§ Approximate nonnegative factorization

In the previous sections, we reduced the inference problem to factorizing partitioned sub-tensors. We now present a factorization algorithm for the sub-tensors that contain negative entries. Our goal is to approximate a sub-tensor ℳ by a sub-tensor ℳ̂ = ∑_j A_j ×B_j ×C_j where the factors A, B, and C are nonnegative. The Frobenius norm is used to quantify the approximation:

min_A, B, C≽ 0‖ℳ - ℳ̂‖_F.

Note that we do not assume that ℳ≽ 0 in (<ref>), which distinguishes our optimization problem from other approximate factorization algorithms <cit.>. In <ref>, we provide some details as to why negative entries are problematic for standard approximate factorization algorithms. We can rewrite (<ref>) using the 1-mode unfolding as

min_A, B, C≽ 0‖ℳ_(1) - A(C⊙B)^⊤‖_F .

Equivalent formulations with respect to the 2-mode and 3-mode unfoldings can be readily obtained from (<ref>).

We point out that another widely-used error measure — the I-divergence <cit.> — may not be suitable for our learning problem. The optimization using the I-divergence is given by

min_A, B, C≽ 0∑_u,v,w[ℳ_uvwlog(ℳ_uvw/ℳ̂_uvw) - ℳ_uvw + ℳ̂_uvw].

This optimization is useful for nonnegative ℳ when each entry follows a Poisson distribution. In this case, the objective is equivalent to the sum of the Kullback-Leibler divergences across all entries of ℳ:

∑_u,v,w D_KL(Pois(x; ℳ_uvw)‖Pois(x;ℳ̂_uvw)).

However, the Poisson assumption does not generally hold for the estimator tensor (<ref>).

§.§ Handling negative entries in empirical estimators

We first illustrate that factorizing a tensor with negative entries using either positive tensor factorization <cit.> or nonnegative tensor factorization <cit.> will either produce factors that violate the nonnegativity constraint or diverge.
In addition, we show that general tensor decompositions cannot enforce factor nonnegativity even after rounding the negative entries to zero. We then present a simple method based on weighted nonnegative matrix factorization (WNMF) <cit.> that enforces the factor nonnegativity constraint. We further generalize this method using parallel quadratic programming (PQP) <cit.> to obtain a method with a provable convergence rate.

Issue of negative entries If the tensor is strictly nonnegative, the optimization specified in (<ref>) can be reduced to nonnegative matrix factorization (NMF). Solvers abound for NMF, including the celebrated Lee-Seung multiplicative updates <cit.>. The reduction is done by viewing (<ref>) as ‖Y - WH‖_F^2 with Y = ℳ_(1)^jst, W = A, and H = (C⊙B)^⊤, and alternating

W_st← W_st(YH^⊤)_st/(WHH^⊤)_st,

over each unfolding and factor matrix W. Obviously, the updates may yield negative entries in W when the unfolding contains negative entries. In addition, convergence relies on the nonnegativity of the unfolding <cit.>. This issue extends to their tensor factorization variants <cit.>, known as positive tensor factorization and nonnegative tensor factorization. For these approaches, a naive resolution is to round the negative entries of ℳ^jst to 0; this, however, lacks theoretical guarantees.

It is important to note that the rounding does not help general tensor decompositions like TPM. The following example illustrates that the unique decomposition (up to permutation and rescaling) of a positive tensor can contain negative entries. Consider a 2-by-2-by-2 positive tensor, whose 1-mode unfolding is given by

[ 1 3 | 2 2; 2 2 | 2 2 ],

where the vertical bar separates the two frontal slices. It has the following decomposition, written in the form of (<ref>):

A = C = [ 1 1; 1 0 ], B = [ 2 -1; 2 1 ].

Since all factors are of full rank, the decomposition is unique up to permutation and rescaling of columns <cit.>. Thus, a general tensor decomposition yields a B with negative entries regardless of rescaling.

§.§ Factorization via WNMF

Since the ground-truth ℳ^jst are nonnegative, we may "ignore" the negative entries of the empirical ℳ^jst by treating them as missing values. This idea leads to the following modified objective:

min_W,H≽ 0‖Ω * (Y - WH)‖_F^2

where Y, W, H are chosen identically to (<ref>), and we define Ω_uv = 1 if Y_uv≥ 0, and Ω_uv = 0 if Y_uv < 0.

The optimization can be carried out using WNMF. Here, we modify the original updates by introducing a positive constant ϵ to ensure that the updates are well-defined:

W_uv← W_uv([(Ω*Y) H^⊤]_uv + ϵ)/([((WH)*Ω) H^⊤]_uv + ϵ),

H_uv← H_uv([W^⊤(Ω*Y) ]_uv + ϵ)/([W^⊤(Ω*(WH)) ]_uv + ϵ).

<ref> states the correctness of the modified updates (<ref>).

The objective (<ref>) is non-increasing under the multiplicative updates (<ref>).

We prove the update for H; the update for W follows by applying the same argument to ‖Ω^⊤ * (Y^⊤ - H^⊤W^⊤)‖_F. First, consider the error Frobenius norm for a column h of H, and the corresponding columns ω of Ω and y of Y,

F(h) = ‖ω * (y - Wh)‖_F^2.

The following G(·,·) is an auxiliary function of F(·):

G(h,h^t) = F(h^t) + (h - h^t)^⊤∇ F(h^t) + 1/2(h - h^t)^⊤K(h - h^t),

where we define

K = diag([ W^⊤diag(ω) Wh^t + ϵ1] ⊘h^t ).

Clearly, G(h,h) = F(h), and one can show that G(h,h^t) ≥ F(h) by rewriting

F(h) = F(h^t) + (h - h^t)^⊤∇ F(h^t) + 1/2(h - h^t)^⊤W^⊤diag(ω) W(h - h^t),

where we note that ω*ω = ω from the Boolean definition of ω. Comparing (<ref>) with (<ref>), it is sufficient to show that K - W^⊤diag(ω) W is positive semi-definite.
Now consider the scaled matrix

U = diag(h^t) Kdiag(h^t) - diag(h^t) W^⊤diag(ω) Wdiag(h^t) = diag(W^⊤diag(ω) Wh^t + ϵ1) diag(h^t) - diag(h^t) W^⊤diag(ω) Wdiag(h^t).

Observe that U is strictly diagonally dominant, as U1≻ 0 and the off-diagonal entries are non-positive. Also note that all diagonal entries of U are positive; it follows that U is positive semi-definite. We thereby conclude that K - W^⊤diag(ω) W is positive semi-definite.

Let h^t+1 = arg min_h G(h,h^t); we then have that F(h^t) = G(h^t,h^t) ≥ G(h^t+1,h^t) ≥ F(h^t+1). The minimizer h^t+1 is obtained by setting ∇_h^t+1 G(h^t+1,h^t) = 0, which yields

- ∇ F(h^t) = K(h^t+1 - h^t)

W^⊤[ω*(y - Wh^t)] = Kh^t+1 - W^⊤diag(ω) Wh^t - ϵ1

h^t+1 = h^t * [W^⊤(ω*y) + ϵ1] ⊘[W^⊤(ω * (Wh^t)) + ϵ1].

The particular choice of Ω guarantees that h^t+1 is always positive.

§.§ Parallel quadratic programming

We now generalize the WNMF approach using parallel quadratic programming to obtain a convergence rate. Let 𝕊_++ denote the set of symmetric positive definite matrices; we consider the following optimization problem

min_x1/2x^⊤Qx + z^⊤x s.t. x≥ 0, Q∈𝕊_++,

which can be solved by iterating multiplicative updates <cit.>. We use the parallel quadratic programming (PQP) algorithm <cit.> to solve (<ref>), partly because it has a provable linear convergence rate. The PQP multiplicative update for (<ref>) takes the following simple form:

x←x * (Q^- x + z^-) ⊘(Q^+ x + z^+),

with

Q^+= (Q)_+ + diag(γ), Q^-= (-Q)_+ + diag(γ), z^+= (z)_+ + ϕ, z^-= (-z)_+ + ϕ.

Here γ and ϕ are arguments to PQP; we will discuss these arguments in section <ref>. The update maintains nonnegativity since all terms are nonnegative. We make the following observation.

The multiplicative updates for Lee-Seung and WNMF are special cases of PQP.

Since the WNMF update (<ref>) generalizes Lee-Seung, which corresponds to the case in which Ω is all ones, we need only prove the claim for WNMF. Let Λ = Ω*Ω and γ = 0; some matrix algebra reveals the following PQP updates

W_uv ← W_uv([((Λ*Y) H^⊤)_+]_uv + Φ_uv)/([((WH)*Λ) H^⊤ + ((-Λ*Y) H^⊤)_+ ]_uv + Φ_uv)

H_uv ← H_uv([(W^⊤(Λ*Y))_+]_uv + Φ_uv^')/([W^⊤(Λ*(WH) ) + (-W^⊤(Λ*Y) )_+ ]_uv + Φ_uv^').

Comparing (<ref>) to (<ref>), they are equivalent if Φ_uv = Φ_uv^' = ϵ.

We can now solve the approximate nonnegative factorization problem stated in (<ref>) using (<ref>). <ref> states the multiplicative updates. A more detailed discussion of Φ is included in <ref>. We present pseudo-code in <ref>.

For the optimization (<ref>), the following update converges linearly to a local optimum

A←A * [(-Z)_+ + Φ] ⊘[AQ + (Z)_+ + Φ]

with

Q = (C^⊤C)*(B^⊤B), Z = -ℳ_(1)(C⊙B), Φ≻1/2( √(diag(ZQ^-1Z^⊤)/λ_min(Q))diag(Q)^⊤ - |Z| )_+,

where λ_min(·) is the smallest eigenvalue. Similar updates for B and C are obtained using (<ref>).

We apply the PQP updates (<ref>) to each row of A. Let v_j: and A_j: be the j-th rows of ℳ_(1) and A, respectively. Fixing the current factor estimates B and C, the optimization with respect to A_j: follows from (<ref>):

min_A_j:≽ 0‖v_j:^⊤ - (C⊙B)A_j:^⊤‖_F = min_A_j:≽ 01/2A_j:(C⊙B)^⊤(C⊙B) A_j:^⊤ - v_j:(C⊙B) A_j:^⊤.

Now the updates (<ref>) can be applied immediately, where we set γ = 0 and Φ according to <ref> in <ref>. Using the identity (C⊙B)^⊤(C⊙B) =(C^⊤C)*(B^⊤B) and performing the updates simultaneously for all rows of A gives (<ref>).

§.§ Proposed approach

To summarize, the proposed approach, referred to as PTPQP, consists of three steps. Given the indexes of the anchor variables π^u∪π^v∪π^w, the variables [p]\(π^u∪π^v∪π^w) are first evenly divided into r partitions, and the anchor variables are added to each partition.
The second step consists of forming and factorizing the sub-tensor of each partition using <ref>; this step can be parallelized. Third, normalize the anchor matrix [θ^π^u⊤,θ^π^v⊤,θ^π^w⊤]^⊤ formed by the anchor variable parameters to have unit column Euclidean norm, and then use either (<ref>) or (<ref>) to match over the anchor matrix.

Efficiency Most of the computational cost is in the factorization. Consider one partition, and let ℳ^π^j π^s π^t be the corresponding sub-tensor; the sub-tensor size is ∏_π∈{π^j,π^s,π^t}∑_h∈π d_h. The maximum number of categories for a variable is generally a constant for the GDLM. Under the smallest partitioning, this size is determined by the sub-tensor of the anchor variables, i.e., O(k^3), which corresponds to O(p/k) partitions. One benefit of PTPQP is that the number of sub-tensor factorizations is linear in p due to the partitioned factorization; this results in significant efficiency gains when p ≫ k. Furthermore, PTPQP is easy to parallelize across multiple CPUs and machines, since neither the computation nor the data needs to be shared across partitions.

§ PROVABLE GUARANTEES

In this section, we state the main theoretical results for the proposed partitioned factorization and tensor PQP factorization.

§.§ Sufficient conditions for guaranteed matching

<ref> and <ref> state that when the anchor parameter matrices from two factorizations are "close", the proposed matching algorithms obtain a consistent permutation.

Suppose that θ_j is the ground-truth matrix for variable j. Solving (<ref>) results in a consistent permutation if every factorized matrix θ̂_j of variable j satisfies

‖θ̂_jh - θ_jh‖_2/‖θ_jh‖_2 < 1 - √(1/2 + √((1+max_u<v(θ̅_j^⊤θ̅_j)_uv)/8))

for all h∈[k], where θ̅_jh = θ_jh / ‖θ_jh‖_2.

Consider the smallest pair-wise angle α_min between the columns of θ̅_j; we have that

cosα_min = max_u<v(θ̅_j^⊤θ̅_j)_uv.

Denote by α the maximum angle between a column of a factorized parameter matrix θ̂_j and the corresponding column of the ground truth. It is sufficient to ensure that

α < 1/4α_min.

Consider any two columns s≠ t of the ground-truth parameter matrix, and the corresponding perturbed columns {θ̂_js, θ̂_jt} and {θ̂_js^', θ̂_jt^'} from two factorizations. We have that

∠(θ̂_js, θ̂_js^')≤ 2α, ∠(θ̂_js, θ̂_jt^')≥∠(θ̅_js, θ̅_jt) - 2α≥α_min - 2α.

From (<ref>), we have that ∠(θ̂_js, θ̂_js^') < ∠(θ̂_js, θ̂_jt^'), as desired for (<ref>) to work correctly. Now consider the inner product of a perturbed column and the ground truth; it holds that

⟨θ_jh/‖θ_jh‖, (θ_jh + ϵ)/‖θ_jh + ϵ‖⟩ = (‖θ_jh‖^2 - ‖ϵ‖^2 + ‖θ_jh + ϵ‖^2)/(2‖θ_jh‖‖θ_jh+ϵ‖) ≥ (‖θ_jh‖ - ‖ϵ‖)/(2‖θ_jh‖) + ‖θ_jh+ϵ‖/(2‖θ_jh‖) ≥ 1 - ‖ϵ‖/‖θ_jh‖.

Thus, a sufficient condition for (<ref>) to yield the consistent permutation is

1 - ‖ϵ‖/‖θ_jh‖ > cos(1/4α_min),

which, written in analytic form, proves the theorem.

<ref> states that one obtains a consistent permutation by solving (<ref>) if the columns of the ground-truth parameter matrix are distinct from each other in angle and the factorized parameter matrix is near the ground truth in Frobenius norm. Thus, a good anchor variable for the partitioned factorization (<ref>) is one whose parameter matrix has columns that are well separated in angle.

The bound in <ref> can be made sharp for certain θ_j, and thus the smallest angle matching algorithm has general guarantees only when the perturbation is small, i.e., when the relative error ratio is less than 1 - √(2+√(2))/2 ≈ 1/13.

Suppose that θ and θ^' are two factorized parameter matrices for a variable.
Solving (<ref>) results in a consistent permutation ψ if

‖E‖_2 < σ_k(θ^⊤θ) and -(‖E‖_2/ρ)log( 1 - ρ/ν) < (2-√(2))/4

with

ρ = σ_1(E) + σ_2(E), ν = σ_k(θ^⊤θ) + σ_k-1(θ^⊤θ),

where the error matrix is defined as E = (ψθ)^⊤( θ^' -ψθ), and σ_j(·) denotes the j-th largest singular value.

The proof of Theorem <ref> follows from the following two lemmas.

Suppose that ψ is the consistent permutation of θ with respect to θ^'. Formula (<ref>) is guaranteed to recover ψ if

diag(UV^⊤) ≻√(2)/21,

where U and V are the left and right singular matrices of (ψθ)^⊤θ^'.

We need to show that (<ref>) yields ψ for the orthogonal Procrustes problem min_Ψ^⊤Ψ = I‖θΨ - θ^'‖_F. From the solution (<ref>), it is easy to show that the minimizer Ψ^* of min_Ψ^⊤Ψ = I‖θ^'Ψ - θ‖_F and the minimizer Ψ^' of min_Ψ^⊤Ψ = I‖(ψθ) Ψ - θ^'‖_F satisfy

Ψ^'⊤ = ψΨ^*.

Note that Ψ^*⊤ is the desired minimizer of min_Ψ^⊤Ψ = I‖θΨ - θ^'‖_F, and thus it remains to show that (<ref>) gives ψ when applied to Ψ^*⊤, or equivalently that arg max_t Ψ_st^* = ψ_s. Since the row and column vectors of Ψ^* have unit Euclidean norm, the following dual statements imply each other

arg max_t Ψ_ts^' = j ⇔ arg max_t Ψ_jt^' = s ∀ j,s ∈[k],

if condition (<ref>) holds. Under this condition, we also have that (<ref>) gives the identity permutation [1,2,⋯,k] for the orthogonal Procrustes problem min_Ψ^⊤Ψ = I‖(ψθ) Ψ - θ^'‖_F. Thus, applying (<ref>) to both sides of (<ref>) yields

arg max_t Ψ_tψ_s^* = s,

which implies (<ref>) from (<ref>).

(Mathias) Suppose that A∈ℝ^n × n is nonsingular. Then for any E∈ℝ^n× n with σ_1(E) < σ_n(A) and any unitarily invariant norm ‖·‖, it holds that

‖μ(A + E) - μ(A)‖≤ -2(‖E‖/‖E‖_2)log( 1 - ‖E‖_2/(σ_n(A) + σ_n-1(A))),

where μ(·) represents the unitary factor of the polar decomposition, and ‖·‖_k is the Ky Fan k-norm.

Let H = (ψθ)^⊤ψθ, which has the same singular values as θ^⊤θ. Denote by μ(·) the unitary factor of the polar decomposition. Using the fact that μ(H) = I, the sufficient condition of <ref> is restated as

diag(μ(H) - μ(H + E)) ≺(1 - √(2)/2)1.

Also note that

max_t|diag(μ(H) - μ(H + E))_t| ≤‖μ(H) - μ(H + E)‖_2.

Thus, it suffices to enforce the right term to be less than 1-√(2)/2. From <ref>, this can be achieved by letting

-2(‖E‖/‖E‖_2)log( 1 - ‖E‖_2/(σ_n(H) + σ_n-1(H))) ≤ 1 - √(2)/2.

The first condition in <ref> requires that at least one of θ and θ^' must have full column rank. We may exchange θ and θ^' in <ref> to first obtain the consistent permutation of θ^' with respect to θ; ψ then follows immediately.

<ref> states that solving (<ref>) recovers a consistent permutation whenever the error spectral norm is small compared to the smallest singular value of θ^⊤θ. This is especially useful for θ∈ℝ^d × k with the number of rows d much larger than the number of columns k. In particular, for θ with independent and identically distributed subgaussian entries, σ_k(θ^⊤θ) is at least of the order (√(d) - √(k-1))^2 <cit.>.

§.§ Convergence

The following theorem states a sufficient condition for PQP to achieve a linear convergence rate. The theorem statement and proof are an adaptation of results stated in <cit.> — the proof in <cit.> overlooks a required condition on ϕ, and the condition γ≽diag(Q_jj) in the original proof is unnecessary.

The PQP algorithm given by (<ref>) monotonically decreases the objective (<ref>) and converges linearly if

γ≽((-Q)_+) 1 and ϕ≻1/2( √(z^⊤Q^-1z/λ_min(Q))diag(Q) - |z| )_+,

where λ_min(·) is the smallest eigenvalue.

First, the condition γ≽((-Q)_+) 1 suffices to ensure that the updates monotonically decrease (<ref>) <cit.>. Thus, it remains to show the condition on ϕ.
Suppose that the i-th element of the optimum x^* is perturbed by a non-zero ϵ > - x_i^*. Let x = x^* + ϵe_i; applying one update gives x^'. Denote the i-th rows of Q^+, Q^-, and Q respectively by P_i, N_i, and Q_i; it then holds that P_i e_i = Q_ii + γ_i and N_i e_i = γ_i by definition. We now consider the ratio of errors between successive iterations:

|(x_i^' - x_i^*)/(x_i - x_i^*)| = |(1/ϵ)( (N_i (x^* + ϵe_i) + z_i^-)/(P_i(x^* + ϵe_i) + z_i^+) · x_i - x_i^* )|

= | (N_ix^* + ϵγ_i + z_i^-)/(P_ix^* + ϵ Q_ii + ϵγ_i + z_i^+) · (x_i^* + ϵ)/ϵ - x_i^*/ϵ|

= | (N_ix^* + ϵγ_i + z_i^-)/(P_ix^* + ϵ Q_ii + ϵγ_i + z_i^+) - (x_i^*/ϵ) · (Q_ix^* + z_i + ϵ Q_ii)/(P_ix^* + ϵ Q_ii + ϵγ_i + z_i^+)|.

From the KKT first-order optimality condition x_i^* (Q_i x^* + z_i) = 0, we simplify the ratio to

|(x_i^' - x_i^*)/(x_i - x_i^*)| = | (N_ix^* + ϵγ_i + z_i^- - x_i^* Q_ii)/(P_ix^* + ϵ Q_ii + ϵγ_i + z_i^+)|.

Observe that the denominator is nonnegative. We also have that the denominator is greater than the numerator, using the KKT optimality condition Q_i x^* + z_i ≥ 0:

(P_ix^* + ϵ Q_ii + ϵγ_i + z_i^+) - (N_ix^* + ϵγ_i + z_i^- - x_i^* Q_ii) > Q_i x^* + z_i ≥ 0.

To achieve a linear convergence rate, we may enforce the ratio to be less than one. Equivalently,

P_ix^* + ϵ Q_ii + ϵγ_i + z_i^+ + N_ix^* + ϵγ_i + z_i^- - x_i^* Q_ii > 0.

It suffices to set

ϕ_i > 1/2(Q_ii x_i^* - |z_i|)_+.

To get rid of x^* in (<ref>), we have the following inequality

| 1/2x^*⊤Qx^* + z^⊤x^*| ≤1/2z^⊤Q^-1z,

where the right term is the negative of the minimum of the unconstrained problem, assuming that Q is non-singular. If Q is singular, then x^* can be unbounded. Further simplify the inequality using the KKT optimality conditions as

|x^*⊤Qx^* |≤ z^⊤Q^-1z, λ_min(Q)‖x^*‖_2^2≤ z^⊤Q^-1z, ‖x^*‖_∞ ≤√(z^⊤Q^-1z/λ_min(Q)).

Combining with (<ref>) completes the proof.

§ RESULTS ON REAL AND SIMULATED DATA

We compare the proposed algorithm ptpqp with state-of-the-art approaches including: 1) the tensor power method tpm <cit.> and matrix simultaneous diagonalization, nojd0 and nojd1 <cit.> — two general tensor decomposition methods; 2) nonnegative tensor factorization hals <cit.>; and 3) the generalized method of moments meld <cit.>. We use the online code provided by the corresponding authors.

§.§ Learning GDLMs on simulated data

We adapt a simulation study from <cit.> to compare the runtime and accuracy of parameter estimation. We consider a GDLM where each variable takes categorical values {0,1,2,3} and the parameters of the Dirichlet mixing distribution are {α_j = 0.1}_j=1^k. We initially consider 25 variables. The true parameters for each hidden component h are drawn from the Dirichlet distribution Dir(0.5,0.5,0.5,0.5). The resulting moment estimator is a 100-by-100-by-100 tensor. We vary the number of components k and add noise by replacing a fraction δ of the observations with draws from a discrete uniform distribution. Specifically, we vary the number of samples n=100,500,1000,5000, the number of clusters k=3,5,10,20, and the contamination δ=0,0.05,0.1. Across these settings we found that the empirical third-order estimator typically exhibits between 20% and 50% negative entries.

Accuracy of inference Accuracy is measured by the root-mean-square error (RMSE), which we compare across algorithms as a function of the number of components for various sample sizes and levels of contamination; see <ref>. Both hals and ptpqp are consistently among the top estimators, and ptpqp outperforms hals as n grows.
For small sample sizes and many hidden components, meld achieves the smallest RMSE. The RMSE of tpm is relatively large, probably due to the whitening technique used to approximately transform the nonorthogonal factorization into an orthogonal one; see <cit.>. The most relevant observation is that ptpqp outperforms the other methods for large, noisy data.

Computational cost We examined how the runtime scales as a function of the number of partitions. For the same model we set p=1000 variables and n=1000 samples. The tensor is now 4000-by-4000-by-4000. We evaluated the runtime of ptpqp (without parallelization) with the number of partitions set to {30,40,50,100,200}. On a laptop with an Intel i7-4702HQ@2.20GHz CPU and 8GB memory, ptpqp with 100 partitions completes within 3.5 min, 4 min, and 5 min for k=4,8,12, respectively. In addition, the runtime monotonically decreases with the number of partitions. Further speedups can be obtained by parallelizing the factorization of the partitions across multiple CPUs or machines.

§.§ Predicting crowdsourced labels

In <cit.>, a combination of EM and tensor decompositions was used to predict crowdsourcing annotations. The task is to predict the true label given incomplete and noisy observations from a set of workers; this is a mixed membership problem <cit.>. In <cit.>, a third-order tensor estimator was proposed to obtain an initial estimate for the EM algorithm. We compare the predictive performance of several tensor decomposition methods, as well as the EM algorithm initialized with majority voting by the workers (MV+EM), on five data sets. The fraction of incorrect predictions and the size of each dataset are in the table below. Note that ptpqp matches or outperforms the other tensor methods on all but one dataset, and even outperforms MV+EM on two datasets.

§ CONCLUSIONS

We proposed an efficient algorithm for learning mixed membership models based on the idea of partitioned factorizations. The key challenge is to consistently match the partitioned parameters with the hidden components. We provided sufficient conditions to ensure consistency. In addition, we have also developed a nonnegative approximation to handle the negative entries in the empirical method-of-moments estimators, a problem not addressed by several recent tensor methods. Results on synthetic and real data corroborate that the proposed approach achieves better inference accuracy and computational efficiency than state-of-the-art methods.

§ CODE

Code for all the simulations is available from Zilong Tan's GitHub repository <https://github.com/ZilongTan/ptpqp>.

§ ACKNOWLEDGEMENTS

Z.T. would like to thank Rong Ge for sharing helpful insights. S.M. would like to thank Lek-Heng Lim for insights. Z.T. would like to acknowledge the support of grants NSF CNS-1423128, NSF IIS-1423124, and NSF CNS-1218981. S.M. would like to acknowledge the support of grants NSF IIS-1546331, NSF DMS-1418261, NSF IIS-1320357, NSF DMS-1045153, and NSF DMS-1613261.

§ DIRICHLET MOMENTS

For a Dirichlet random vector x with concentration parameters α, the component moments can be easily obtained by direct integration:

𝔼[x_i] = α_i/α_0

𝔼[x_i x_j] = α_i α_j/(α_0(α_0 + 1)), i≠ j

𝔼[x_i^2] = α_i(α_i+1)/(α_0(α_0+1))

𝔼[x_j x_s x_t] = α_j α_s α_t/(α_0(α_0+1)(α_0+2)), j≠ s ≠ t

𝔼[x_j^2 x_s] = α_j(α_j + 1) α_s/(α_0(α_0+1)(α_0+2)), j≠ s

𝔼[x_j^3] = α_j(α_j + 1) (α_j + 2)/(α_0(α_0+1)(α_0+2)),

where α_0 = ∑_i≥ 1α_i.
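Before turning these component moments into cross-moments, they are easy to sanity-check numerically. The following minimal sketch (our own illustration, not part of the paper's code) verifies two of the identities above by Monte Carlo:

```python
import numpy as np

# Monte Carlo check of the Dirichlet component moments listed above.
rng = np.random.default_rng(0)
alpha = np.array([0.5, 1.0, 2.0])
a0 = alpha.sum()
x = rng.dirichlet(alpha, size=1_000_000)

# E[x_i x_j] versus alpha_i alpha_j / (alpha_0 (alpha_0 + 1))
emp_xixj = (x[:, 0] * x[:, 1]).mean()
ana_xixj = alpha[0] * alpha[1] / (a0 * (a0 + 1))

# E[x_j^2 x_s] versus alpha_j (alpha_j+1) alpha_s /
#                     (alpha_0 (alpha_0+1) (alpha_0+2))
emp_x2s = (x[:, 1] ** 2 * x[:, 2]).mean()
ana_x2s = alpha[1] * (alpha[1] + 1) * alpha[2] / (a0 * (a0 + 1) * (a0 + 2))

# Empirical and analytic values agree to within the Monte Carlo error.
print(emp_xixj, ana_xixj)
print(emp_x2s, ana_x2s)
```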
Comparing the second- and third-order component moments, we arrive at the following cross-moments:

𝔼[x] = (1/α_0)α

𝔼[x×x] = (1/(α_0(α_0 + 1)))αα^⊤ + (1/(α_0(α_0 + 1)))diag(α) = (α_0/(α_0+1))𝔼[x] 𝔼[x]^⊤ + diag(α)/(α_0(α_0+1))

𝔼[x×x×x] = (1/(α_0 (α_0 + 1) (α_0 + 2)))[ α×α×α + ∑_i≥ 1α_i (α×e_i ×e_i) + ∑_i≥ 1α_i (e_i ×α×e_i ) + ∑_i≥ 1α_i (e_i ×e_i ×α) + 2∑_i≥ 1α_i (e_i ×e_i ×e_i) ].

To express the parameters as third-order cross-moments, first observe that the following holds for a Dirichlet random vector x:

(1/(α_0^2 (α_0 + 1)))∑_i≥ 1α_i (α×e_i×e_i + e_i×α×e_i + e_i×e_i×α) = 𝔼[𝔼[x] ×x×x] + 𝔼[x×𝔼[x] ×x] + 𝔼[x×x×𝔼[x]] - (3α_0/(α_0 + 1))𝔼[x] ×𝔼[x] ×𝔼[x].

This is an immediate result from the second-order component moments. Combining with (<ref>) yields

∑_i≥ 1 (2α_i/(α_0 (α_0+1) (α_0+2))) e_i ×e_i ×e_i = 𝔼[x×x×x] + (2α_0^2/((α_0 + 1)(α_0 + 2)))𝔼[x] ×𝔼[x] ×𝔼[x] - (α_0/(α_0 + 2))(𝔼[𝔼[x] ×x×x] + 𝔼[x×𝔼[x] ×x] + 𝔼[x×x×𝔼[x]] ).

§.§ Derivation of moment estimators

Our goal is to derive the estimators of the parameter vectors θ_j for each variable j using the first-, second-, and third-order empirical cross-moments of b_ij. In GDLM, the expectation of variable j conditioned on x is written

𝔼[b_ij| x] = θ_j x.

Thus, the expected observation of variable j is given by

𝔼[b_ij] = 𝔼[𝔼[b_ij| x]] = θ_j 𝔼[x] = θ_j α/α_0.

Now consider two variables b_ij and b_is which are generated with the same latent factors x. Combining (<ref>) and (<ref>), we obtain

∑_r≥ 1 (α_r/(α_0 (α_0 + 1)))θ_jr×θ_sr = 𝔼[b_ij×b_is] - (α_0/(α_0 + 1))𝔼[b_ij] 𝔼[b_is]^⊤.

For three variables b_ij, b_is, and b_it, we can write

𝔼[b_ij×b_is×b_it] = 𝔼[x×x×x] ×_1 θ_j ×_2 θ_s ×_3 θ_t.

Using (<ref>), we establish that

∑_i≥ 1 (2α_i/(α_0 (α_0+1) (α_0+2)))θ_ji×θ_si×θ_ti = 𝔼[b_ij×b_is×b_it] + (2α_0^2/((α_0 + 1)(α_0 + 2)))𝔼[b_ij] ×𝔼[b_is] ×𝔼[b_it] - (α_0/(α_0 + 2))(𝔼[𝔼[b_ij] ×b_is×b_it] + 𝔼[b_ij×𝔼[b_is] ×b_it] + 𝔼[b_ij×b_is×𝔼[b_it]] ).

§ APPROXIMATE ORTHOGONALIZATION IN THE TENSOR POWER METHOD

TPM requires the tensor to be decomposed to be symmetric, and the factor matrices to be orthogonal. Specifically, it performs the following decomposition

ℳ_3^' = ∑_i=1^r λ_i u_i ×u_i ×u_i,

where the u_i are orthonormal vectors. Thus, TPM does not immediately apply to the general CP decomposition (<ref>). The general resolution is to first use the symmetric tensor embedding <cit.>, forming a larger symmetric tensor ℳ_3 that contains the asymmetric tensor to be decomposed. The formed ℳ_3 is a sparse (∑_i=1^p d_i)-by-(∑_i=1^p d_i)-by-(∑_i=1^p d_i) tensor of which 7/9 of the entries are zero. The space and computation complexities rapidly become prohibitive when the number of variables p and the category counts d_j grow. Next, TPM requires an additional empirical second-order estimator ℳ_2 for orthogonalizing the factor matrices of ℳ_3 to obtain ℳ_3^' <cit.>. This is done by computing the whitening transformation from ℳ_2. However, the whitening technique based on the empirical ℳ_2 is often a cause of suboptimal performance <cit.>.
http://arxiv.org/abs/1702.07933v3
{ "authors": [ "Zilong Tan", "Sayan Mukherjee" ], "categories": [ "cs.LG", "stat.ML" ], "primary_category": "cs.LG", "published": "20170225180057", "title": "Efficient Learning of Mixed Membership Models" }
Electronic conduction properties of indium tin oxide: single-particle and many-body transport

Juhn-Jong Lin^1,∗ and Zhi-Qing Li^2

^1NCTU-RIKEN Joint Research Laboratory, Institute of Physics and Department of Electrophysics, National Chiao Tung University, Hsinchu 30010, Taiwan
^2Tianjin Key Laboratory of Low Dimensional Materials Physics and Preparing Technology, Department of Physics, Tianjin University, Tianjin 300072, China
^∗Email: jjlin@mail.nctu.edu.tw

Abstract. Indium tin oxide (Sn-doped In_2O_3-δ or ITO) is an interesting and technologically important transparent conducting oxide. This class of material has been extensively investigated for decades, with research efforts focusing on the application aspects. The fundamental issues of the electronic conduction properties of ITO from 300 K down to low temperatures have rarely been addressed. Studies of the electrical-transport properties over a wide range of temperature are essential to unraveling the underlying electronic dynamics and microscopic electronic parameters. In this Topical Review, we show that one can learn rich physics in ITO material, including the semi-classical Boltzmann transport, the quantum-interference electron transport, and the electron-electron interaction effects in the presence of disorder and granularity. To reveal the avenues and opportunities that the ITO material provides for fundamental research, we demonstrate a variety of charge transport properties in different forms of ITO structures, including homogeneous polycrystalline films, homogeneous single-crystalline nanowires, and inhomogeneous ultrathin films. We not only address new physics phenomena that arise in ITO but also illustrate the versatility of the stable ITO material forms for potential applications. We emphasize that, microscopically, the rich electronic conduction properties of ITO originate from the inherited free-electron-like energy bandstructure and low-carrier concentration (as compared with that in typical metals) characteristics of this class of material. Furthermore, a low carrier concentration leads to slow electron-phonon relaxation, which causes (i) a small residual resistance ratio, (ii) a linear electron diffusion thermoelectric power in a wide temperature range 1–300 K, and (iii) a weak electron dephasing rate. We focus our discussion on the metallic-like ITO material.

PACS: 73.23.-b; 73.50.Lw; 72.15.Qm; 72.80.Tm

§ INTRODUCTION

Transparent conducting oxides (TCOs) constitute an appealing and unique class of materials that simultaneously possess high electrical conductivity, σ, and high optical transparency at the visible frequencies <cit.>. These combined electrical and optical properties render the TCOs widely used, for example, as transparent electrodes in numerous optoelectronic devices, such as flat panel displays, photovoltaic electrochromics, solar cells, energy-efficient windows, and resistive touch panels <cit.>. Currently, the major industrial TCO films are made of indium tin oxide (Sn-doped In_2O_3-δ, or so-called ITO), F-doped tin oxide, and group III element doped zinc oxide.
Among them, the ITO films are probably the most widely used TCOs, owing to their readiness for fabrication and patterning as well as their high quality and reliability in commercial products.

On the fundamental research side, our current understanding of the origins of the combined properties of high electrical conductivity and high optical transparency is based on both theoretical and experimental studies <cit.>. The electronic energy bandstructure of ITO has been theoretically calculated by several authors <cit.>. It is now known that the bottom of the conduction band of the parent In_2O_3 is mainly derived from the hybridization of the In 5s electronic states with the O 2s states. The energy-momentum dispersion near the bottom of the conduction band reveals a parabolic character, manifesting the nature of s-like electronic states (see a schematic in figure <ref>). The Fermi level lies in the middle of the conduction and valence bands, rendering In_2O_3 a wide-band-gap insulator. Upon doping, the Sn 5s electrons contribute significantly to the electronic states around the bottom of the conduction band, causing the Fermi level to shift upward into the conduction band. Meanwhile, the shape of the conduction band at the Fermi level faithfully retains the intrinsic parabolic character. This unique material property makes ITO a highly degenerate n-type semiconductor or, alternatively, a low-carrier-concentration metal. As a consequence of the s-like parabolic energy bandstructure, the electronic conduction properties of this class of material demonstrate marked free-carrier-like characteristics. The charge transport properties of ITO can thus be quantitatively described by simple models based upon a free-electron Fermi gas. Indeed, the levels of close quantitative agreement between theoretical calculations and experimental measurements obtained for ITO are not achievable even for alkali (Li, Na, K) and noble (Cu, Ag, Au) metals, as we shall present in this Topical Review.

In practice, the conduction electron concentration, n, in optimally doped ITO (corresponding to approximately 8 at.% of Sn doping) can reach a level as high as n ≈ 10^20–10^21 cm^-3 <cit.>. This level of n is two to three orders of magnitude lower than that (≈ 10^22–10^23 cm^-3 <cit.>) in typical metals. The room temperature resistivity can be as low as ρ(300 K) ≈ 150 μΩ cm (see table <ref>). This magnitude is comparable with that of the technologically important titanium-aluminum alloys <cit.>. In terms of the optical properties, the typical plasma frequency is ω_p ≃ 0.7–1 eV <cit.>, while the typical energy band gap is E_g ≃ 3.7–4.0 eV. Hence, optimally doped ITO possesses a high optical transparency which exceeds 90% transmittance at the visible light frequencies <cit.>. A value of ω_p ≃ 1 eV corresponds to a radiation frequency of f_p = ω_p/2 π≃ 2.4 × 10^14 Hz, which is approximately one fifth of the visible light frequency and roughly one fiftieth of the plasma frequency of a typical metal. For optoelectronic applications, on one hand, one would like to dope ITO with a Sn level as high as technologically feasible in order to obtain a high electrical conductivity σ. On the other hand, since ω_p ∝√(n), one has to keep n sufficiently low such that the visible light can propagate through the ITO structure.

Owing to their technological importance, it is natural that there already exist in the literature a number of review articles on the ITO as well as TCO materials <cit.>.
The early studies up to 1982, covering the deposition methods, crystal structures, scattering mechanisms of conduction electrons, and the optical properties of In_2O_3, SnO_2 and ITO, were reviewed by Jarzȩbski <cit.>. Hamberg and Granqvist discussed the optical properties of ITO films fabricated by the reactive electron-gun evaporation onto heated glass substrates <cit.>. The development up to 2000 on the various aspects of utilizing TCOs was summarized in reports considering, for example, characterizations <cit.>, applications and processing <cit.>, criteria for choosing transparent conductors <cit.>, new n- and p-type TCOs <cit.>, and the chemical and thin-film strategies for new TCOs <cit.>. The recent progress in new TCO materials and TCO-based devices was discussed in <cit.> and <cit.>. King and Veal recently surveyed the current theoretical understanding of the effects of defects, impurities, and surface states on the electrical conduction in TCOs <cit.>.

In this Topical Review, we stress the free-electron-like energy bandstructure and the low-n features (as compared with typical metals) of the ITO material. These inherited intrinsic electronic characteristics make ITO a model system which is ideal for not only revealing the semi-classical Boltzmann transport behaviors (section 2) but also studying new physics such as the quantum-interference weak-localization (WL) effect and the universal conductance fluctuations (UCFs) in miniature structures (section 3). The responsible electron dephasing (electron-electron scattering, electron-phonon scattering, and spin-orbit scattering) processes are discussed. Furthermore, we show that this class of material provides a very useful platform for experimentally testing the recent theories of granular metals <cit.>. In the last case, ultrathin ITO films can be intentionally made to be slightly inhomogeneous or granular, while the coupling between neighboring grains remains sufficiently strong so that the system retains global metallic-like conduction (section 4). To illustrate the unique and numerous avenues provided by ITO for the studies of the aforementioned semi-classical versus quantum electron transport, as well as homogeneous versus inhomogeneous charge transport, we cover polycrystalline (ultra)thin and thick ITO films and single-crystalline ITO nanowires in this Topical Review. We demonstrate that high-quality ITO structures can indeed be readily fabricated into various forms which, apart from being powerful for addressing fundamental electronic conduction properties, may be useful for potential technological applications. Furthermore, owing to the similarities in electronic bandstructure between ITO and other TCO materials <cit.>, we expect that the electronic processes and mechanisms discussed in this Topical Review should be useful for understanding and interpreting the results obtained on general TCOs.

We do not cover insulating or amorphous ITO materials in this Topical Review, where the electronic conduction processes can be due to thermally excited hopping <cit.>. In addition to the conventional Mott <cit.> and Efros-Shklovskii <cit.> hopping conduction mechanisms in homogeneous strongly disordered systems, electronic conduction due to the thermal charging effect <cit.> and, more recently, the variable-range-hopping process <cit.> in inhomogeneous (granular) systems has been discussed in the literature. On the other hand, the possible occurrence of superconductivity in ITO has been explored in references <cit.>.
§ FREE-ELECTRON-LIKE BOLTZMANN TRANSPORT: HOMOGENEOUS INDIUM TIN OXIDE FILMS AND NANOWIRES

The electrical-transport properties of ITO films have extensively been discussed in the literature. However, previous studies have mainly concentrated on the influences of deposition methods and conditions on the ρ(300 K) values. While those studies have provided useful information for improving the fabrication of high-quality ITO films, they did not deal with the underlying electronic conduction processes in ITO. In subsection 2.1, we first briefly summarize the theoretical calculations of the electronic energy bandstructure of ITO and explain why this class of material behaves like a highly degenerate semiconductor or a low-n metal. In subsection 2.2, we discuss the overall temperature behavior of resistivity ρ(T) in ITO and show that ρ(T) can be well described by the standard Boltzmann transport equation in a wide temperature range. In subsection 2.3, we demonstrate that the thermoelectric power (Seebeck coefficient, or thermopower), S(T), in ITO follows an approximately linear temperature dependence in the wide temperature range from 1 K up to well above room temperature. This linear thermoelectric power originates from the diffusion of electrons in the presence of a temperature gradient and provides a powerful, direct manifestation of the robust free-carrier-like characteristic of ITO. The reason why the phonon-drag contribution to thermoelectric power in ITO is absent is heuristically discussed.

§.§ Free-carrier-like bandstructure and relevant electronic parameters

§.§.§ Electronic energy bandstructure

Since the electronic energy bandstructure plays a key role in governing the charge transport properties of a given material, we first discuss the electronic bandstructure of ITO. Based on their x-ray photoemission spectroscopy studies, Fan and Goodenough <cit.> first suggested a schematic energy band model for the undoped and Sn-doped In_2O_3 in 1977. A heuristic energy-band model for ITO was proposed by Hamberg et al <cit.> in 1984. In their heuristic model (shown in figure <ref>), the bottom (top) of the conduction (valence) band of In_2O_3 was taken to be parabolic. They further proposed that the shapes of the conduction band and the valence band remained unchanged upon Sn doping. This simple bandstructure model is qualitatively in line with that obtained by later theoretical calculations <cit.>.

The first ab initio bandstructure calculations for the ITO material were carried out by Odaka <cit.>, and Mryasov and Freeman <cit.> in 2001. Later on, Medvedeva <cit.> calculated the bandstructure of In_2O_3, and Medvedeva and Hettiarachchi <cit.> calculated the bandstructure of 6.25 at.% Sn-doped In_2O_3. Figures <ref>(a) and <ref>(b), respectively, show the electronic bandstructures of stoichiometric In_2O_3 and 6.25 at.% Sn-doped In_2O_3 obtained in <cit.>. For In_2O_3, the conduction band exhibits a free-electron-like, parabolic characteristic around the Γ point, where the bottom of the conduction band originates from the hybridization of In 5s and O 2s electronic states. Medvedeva and Hettiarachchi found that the effective electron mass, m^∗, near the Γ point is nearly isotropic. Similar theoretical results were obtained shortly afterwards by Fuchs and Bechstedt <cit.>, and Karazhanov <cit.>. Upon Sn doping, the Sn 5s states further hybridize with the In 5s and O 2s states to form the bottom of the conduction band.
Furthermore, the Fermi level in ITO shifts upward into the conduction band, leading to the bandstructure depicted in figure <ref>(b). Theoretical calculations indicate that the Sn 5s states contribute nearly one fourth of the total electronic density of states at the Fermi level, N(E_F), while the In 5s and O 2s states contribute the rest. At this particular doping level, the s-like symmetry of the original bandstructure around the Fermi level in the parent In_2O_3 is essentially unaltered. Thus, the conduction electrons at the Fermi level in ITO possess strong free-carrier-like features. Meanwhile, Fuchs and Bechstedt <cit.> found that the average effective electron mass increases slightly with increasing carrier concentration n. At a level of n ≃ 10^20 cm^-3, they obtained a value m^∗≃ 0.3 m_e, where m_e is the free-electron mass. Their result agreed with that derived from optical measurements of the Drude term due to free carriers <cit.>.

In brief, the combined electronic bandstructure characteristics of a wide energy gap, a small m^∗, and in particular a low n as well as a free-carrier-like dispersion at E_F are the crucial ingredients that make ITO, on one hand, possess high electrical conductivity while, on the other hand, reveal high optical transparency.

§.§.§ Relevant electronic parameters

Experimentally, a reliable method to check the metal-like energy bandstructure of a material is to examine the temperature T dependence of n. For a metal or a highly degenerate semiconductor, n does not vary with T.

Figure <ref> shows the variation of n with temperature for a few as-deposited (before annealing) and annealed ITO films studied by Kikuchi <cit.>. It is clear that n remains constant in a wide T range from liquid-helium temperatures up to 300 K. In the as-deposited sample, the n value approaches ∼ 1 × 10^21 cm^-3. Temperature-independent n in the ITO material has been reported by a number of groups <cit.>.

For the convenience of the discussion of charge transport properties in ITO in this Topical Review, we would like to estimate the values of relevant electronic parameters. Consider a high-quality ITO sample having a value of ρ(300 K) ≃ 150 μΩ cm, a carrier concentration n ≃ 1 × 10^21 cm^-3, and an effective mass m^∗≃ 0.35 m_e. Applying the free-electron model, we obtain the Fermi wavenumber k_F = (3 π^2 n)^1/3≃ 3.1 × 10^9 m^-1, the Fermi velocity v_F = ħ k_F/m^∗≃ 1.0 × 10^6 m/s, and the Fermi energy E_F = ħ^2 k_F^2/(2m^∗) ≃ 1.0 eV. The electron mean free time is τ = m^∗ /(ne^2 ρ) ≃ 8.3 × 10^-15 s, corresponding to the electron mean free path l = v_F τ≃ 8.3 nm. The electron diffusion constant D = v_F l/3 ≃ 28 cm^2/s. Thus, the dimensionless product k_F l ≃ 26. Note that k_F l is an important physical quantity which characterizes the degree of disorder in a conductor. A k_F l value of order a few tens indicates that high-quality ITO is a weakly disordered metal, and should thus be rich in a variety of quantum-interference transport phenomena. (A short numerical sketch of these estimates is given below.)

In practice, the ρ and n values in ITO films can vary widely with the deposition methods and conditions, Sn doping levels, and the post thermal treatment conditions. In table <ref>, we list some representative values for ITO films prepared by different techniques. This table indicates that those ITO films fabricated by the DC magnetron sputtering method possess relatively high (low) n (ρ) values.
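The order-of-magnitude estimates quoted above follow directly from the free-electron formulas; the short script below (our own illustration, using the sample values ρ(300 K) = 150 μΩ cm, n = 1 × 10^21 cm^-3, and m^∗ = 0.35 m_e) reproduces them:

```python
import numpy as np

hbar = 1.054571817e-34   # J s
m_e  = 9.1093837015e-31  # kg
e    = 1.602176634e-19   # C

rho = 150e-8             # Ohm m  (150 microOhm cm)
n   = 1e27               # m^-3   (1e21 cm^-3)
m   = 0.35 * m_e         # effective mass

k_F = (3 * np.pi**2 * n) ** (1 / 3)     # ~3.1e9 m^-1
v_F = hbar * k_F / m                    # ~1.0e6 m/s
E_F = (hbar * k_F) ** 2 / (2 * m) / e   # ~1.0 eV
tau = m / (n * e**2 * rho)              # ~8.3e-15 s
ell = v_F * tau                         # ~8.3 nm
D   = v_F * ell / 3                     # ~2.8e-3 m^2/s = 28 cm^2/s

print(k_F, v_F, E_F, tau, ell * 1e9, D * 1e4, k_F * ell)  # k_F*l ~ 26
```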
Since films prepared by DC magnetron sputtering are compact and adhere well to the substrate surface, this low-cost technique is the most widely used ITO deposition method in industrial production nowadays. Recently, researchers have also carried out molecular-beam-epitaxial growth studies of ITO structures <cit.>, but the crystal quality obtained was not as high as that previously achieved in the epitaxial films grown by a pulsed-laser deposition technique <cit.>. We mention in passing that, apart from the bulk properties <cit.>, the effect on electronic processes of the surface states due to oxygen vacancies in undoped In_2O_3-δ <cit.> as well as doped TCOs <cit.> has recently drawn theoretical and experimental attention.

§.§ Temperature behavior of electrical resistivity

The temperature dependence of resistivity ρ(T) from 300 K down to liquid-helium temperatures provides key information for the understanding of the electrical conduction processes in a conductor. Li and Lin <cit.> have measured ρ(T) between 0.4 and 300 K in a number of 125 and 240 nm thick polycrystalline ITO films prepared by the standard RF sputtering deposition method. Their films had relatively low values of ρ(300 K) ≃ 200 μΩ cm. Their results are shown in figure <ref>. Li and Lin found that the ρ(T) data between ∼ 25 and 300 K can be well described by the Bloch-Grüneisen formula

ρ = ρ_e + ρ_e-ph(T) = ρ_e + β T ( T/θ_D)^4 ∫_0^θ_D/T x^5 dx/[(e^x - 1)(1 - e^-x)] ,

where ρ_e is a residual resistivity, β is an electron-phonon (e-ph) coupling constant, and θ_D is the Debye temperature. The solid curves in the main panel of figure <ref> are the theoretical predictions of equation (<ref>). This figure demonstrates that ITO is a metal, with ρ decreasing with decreasing temperature (or, a positive temperature coefficient of resistivity, i.e., (1/ρ)(dρ/dT) > 0). In particular, the temperature dependence of ρ(T) can be well described by the standard Boltzmann transport equation (a numerical sketch of equation (<ref>) is given below).

The first term on the right hand side of equation (<ref>) originates from the elastic scattering of electrons with defects. The second term originates from the inelastic scattering of electrons with lattice vibrations (phonons). Using the Drude formula σ = ne^2 τ /m^∗, one rewrites ρ = (m^∗ /ne^2)(1/τ_e + 1/τ_e-ph) = ρ_e + ρ_e-ph(T), where e is the electronic charge, τ_e is the electron elastic mean free time, and τ_e-ph is the e-ph relaxation time. From figure <ref>, one finds a small resistivity ratio ρ(300 K)/ρ(25 K) ≃ 1.1, corresponding to the ratio of scattering rates 1/τ_e-ph≃ 0.1(1/τ_e). This observation explicitly suggests that the e-ph relaxation in the ITO material is weak, and hence the contribution of the e-ph scattering to ρ(300 K) is only approximately one tenth of that of the electron elastic scattering with imperfections. A slow e-ph relaxation rate is a general intrinsic property of low-n conductors; see below for further discussion. [For comparison, we note that in typical disordered metals, a measured small residual resistivity ratio ρ(300 K)/ρ(4 K) is usually due to a large elastic electron scattering rate 1/τ_e, because the e-ph relaxation is considerably fast in typical metals; see, for example, references <cit.>.] The presence of a moderate level of disorder in ITO films results in significant quantum-interference weak-localization (WL) and electron-electron interaction (EEI) effects at low temperatures. These two effects cause small corrections to the residual resistivity, which increase with reducing temperature.
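As a numerical sketch of equation (<ref>), the following code (ours; the parameter values are illustrative only and are not the fitted values of the cited experiments) evaluates the Bloch-Grüneisen resistivity and reproduces a small residual resistivity ratio of the order ρ(300 K)/ρ(25 K) ≃ 1.1:

```python
import numpy as np
from scipy.integrate import quad

def rho_bg(T, rho_e, beta, theta_D):
    """Bloch-Grueneisen resistivity, equation (1):
    rho(T) = rho_e + beta*T*(T/theta_D)^4 *
             Int_0^{theta_D/T} x^5 / [(e^x - 1)(1 - e^{-x})] dx."""
    integrand = lambda x: x**5 / ((np.exp(x) - 1.0) * (1.0 - np.exp(-x)))
    # The integrand behaves as x^3 near x = 0; start slightly above 0
    # to avoid the 0/0 evaluation at the endpoint.
    integral, _ = quad(integrand, 1e-12, theta_D / T)
    return rho_e + beta * T * (T / theta_D) ** 4 * integral

# Illustrative parameters: rho_e = 190 microOhm cm, theta_D = 1000 K,
# with beta chosen so that rho(300 K)/rho(25 K) is close to ~1.1.
rho_e, beta, theta_D = 190e-8, 5e-9, 1000.0   # SI units
print(rho_bg(300.0, rho_e, beta, theta_D) / rho_bg(25.0, rho_e, beta, theta_D))
```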
Close inspection of the inset of figure <ref> indicates a well-defined, logarithmic temperature dependent resistivity rise below ∼ 25 K. The two-dimensional (2D) WL and EEI effects will be discussed in section 3.

In addition to comparatively thick films, present-day RF sputtering deposition technology has advanced such that relatively thin films can be made metallic. In a recent study, Lin <cit.> found that the temperature dependence of ρ(T) below 300 K for 15 nm thick polycrystalline ITO films can also be described by the Bloch-Grüneisen formula. However, the ρ(T) curve reaches a minimum around 150 K. At lower temperatures, ρ(T) increases with decreasing temperature, signifying much more pronounced 2D WL and EEI effects than in thicker films (figure <ref>).

The temperature dependence of resistivity in single-crystalline ITO nanowires has been investigated by Chiu et al <cit.>. They measured individual ITO nanowires from 300 K down to 1.5 K employing an electron-beam lithographic four-probe configuration. Figure <ref> shows a plot of the normalized resistivity, ρ(T)/ρ(300 K), as a function of temperature for four ITO nanowires. The solid curves are the theoretical predictions of equation (<ref>), indicating that the experimental ρ(T) data can be well described by the Bloch-Grüneisen formula. However, it is surprising that, in the wide temperature range 1–300 K, the resistivity drops by no more than ∼ 20%, even though these nanowires are single-crystalline. This observation strongly suggests that these nanowires must contain high levels of point defects which are not detectable by the high-resolution transmission electron microscopy studies <cit.>. It is worth noting that these nanowires are three-dimensional (3D) with respect to the Boltzmann transport, because the electron elastic mean free paths ℓ_e = v_F τ_e ≈ 5–11 nm are smaller than the nanowire diameters d ≈ 110–220 nm. On the other hand, the nanowires are one-dimensional (1D) with respect to the WL effect and the UCF phenomena, because the electron dephasing length L_φ = √(D τ_φ) > d at low temperatures, where τ_φ is the electron dephasing time (see section 3).

From least-squares fits of the measured ρ(T) to equation (<ref>), several groups have obtained a comparatively high Debye temperature of θ_D ∼ 1000 K in ITO thick films <cit.>, thin films <cit.> and nanowires <cit.>. This magnitude of θ_D is much higher than those (∼ 200–400 K <cit.>) in typical metals. [In applying equation (<ref>) to describe the ρ(T) data in figures <ref> and <ref>, we have focused on the temperature regime below room temperature. At room temperature and above, the interaction of electrons with polar optical phonons is strong. By taking into consideration the electron–polar optical phonon interaction, Preissler <cit.> obtained a value of θ_D ≃ 700 K from studies of the Hall mobility in In_2O_3. These studies suggest a high Debye temperature in the In_2O_3 based material.]

In addition to films and nanowires, nanoscale ITO particles can be made metallic. Ederth et al <cit.> studied the temperature behavior of porous thin films comprising ITO nanoparticles. Their films were produced by spin coating a dispersion of ITO nanoparticles (mean grain size ≈ 16 nm) onto glass substrates, followed by post thermal treatment. They found that the temperature coefficient of resistivity was negative (i.e., (1/ρ)(dρ/dT) < 0) between 77 and 300 K. However, their ρ(T) data obeyed the 'thermally fluctuation-induced-tunneling conduction' (FITC) process <cit.>.
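For reference, the FITC model of Sheng and co-workers predicts a conductivity of the form σ(T) = σ_0 exp[-T_1/(T + T_0)], i.e., ρ(T) ∝ exp[T_1/(T + T_0)], which decreases upon warming ((1/ρ)(dρ/dT) < 0) yet remains finite as T → 0. A minimal sketch (ours; the parameter values are purely illustrative, not the fitted values of the cited work):

```python
import numpy as np

def rho_fitc(T, rho0, T1, T0):
    """Fluctuation-induced tunneling form: rho(T) = rho0*exp(T1/(T + T0)).
    T1 sets the tunnel-barrier energy scale; T0 marks the crossover to
    nearly temperature-independent tunneling as T -> 0."""
    return rho0 * np.exp(T1 / (T + T0))

T = np.linspace(77.0, 300.0, 8)
# Normalized resistivity rho(T)/rho(273 K), as plotted in the figure:
print(rho_fitc(T, 1.0, 120.0, 40.0) / rho_fitc(273.0, 1.0, 120.0, 40.0))
# The curve decreases monotonically with T, i.e., a negative
# temperature coefficient of resistivity, as observed.
```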
Figure <ref> shows the normalized resistivity, ρ(T)/ρ(273 K), as a function of temperature for four ITO nanoparticle films studied by Ederth et al. The symbols are the experimental data, and the solid curves are the FITC theory predictions. Theoretically, the FITC model considered the global electrical conduction of an inhomogeneous system consisting of metal grains separated by very thin insulating barriers. The thin insulating barriers were modeled as mesoscopic tunnel junctions. Hence, an observation of the FITC processes occurring in porous ITO films implies that the constituent ITO nanoparticles are metallic. Indeed, in section 4, we will discuss that the metallic feature of ITO nanoparticles has provided a powerful platform to experimentally test the recent theories of granular metals <cit.>.We notice in passing that the overall temperature behavior of resistivity in other TCOs, such as Al-doped ZnO <cit.>, Ga-doped ZnO <cit.>, Nb-doped TiO_2 <cit.>, and F-doped SnO_2 <cit.>, can also be described by the standard Boltzmann transport equation (<ref>).§.§ Linear temperature dependence of thermoelectric power The thermoelectric power is an important physical quantity which describes the electronic conduction behaviors in the presence of a temperature gradient and under the open circuit situation. Studies of the temperature dependence of thermopower, S(T), can provide useful information about the electronic density of states at the Fermi level N(E_F), the magnitude of E_F, the responsible carrier types (electrons and/or holes), as well as the phonon-electron and phonon-phonon relaxation processes in the material. In a metal, the thermopower arises from two contributions and can be expressed as S(T) = S_d(T) + S_g(T), where S_d (T) is the electron-diffusion contribution, and S_g (T) is the phonon-drag contribution <cit.>. §.§.§ Electron-diffusion thermopower The electron diffusion contribution stems from the diffusion of thermal electrons in the presence of a temperature gradient. A general form is given by the Mott formula <cit.>S_d(T) = - π^2 k_B^2 T/3|e| E_Fd lnσ (E)/d ln E|_E = E_F ,where k_B is the Boltzmann constant, and σ (E) is the conductivity of electrons that have energy E. The Mott formula is derived under the assumption that the phonon distribution is itself in overall equilibrium at temperature T. Note that in the case of hole conduction the minus sign in equation (<ref>) should be replaced by a plus sign.Consider a free electron Fermi gas. By substituting the Einstein relation σ (E) = N(E) e^2 D(E) into equation (<ref>), where D(E) = v^2(E) τ (E)/3 is the electron diffusion constant in a 3D conductor with respect to the Boltzmann transport, and v(E) is the electron velocity, one obtainsS_d(T) = - π^2 k_B^2 T/3|e| E_F[ 3/2 + d lnτ (E)/d ln E] |_E = E_F .Equation (<ref>) predicts a linear temperature dependence of S_d. The slope of this linear T dependence varies inversely with E_F, and its precise value is governed by the energy dependence of mean-free time τ (E) ∝ E^q, where q is an exponent of order unity.The temperature behavior of S_d in the low temperature limit (which is pertinent to ITO) can be approximated as follows. 
At T ≪ θ_D and in the presence of notable defect scattering such that the electron mean free path l(E) = v(E) τ (E) is nearly a constant, i.e., τ (E) ∝ 1/v(E) ∝ 1/√(E), equation (<ref>) reduces to

S_d = - π^2 k_B^2 T/(3|e|E_F) .

Since the typical E_F value in ITO is one order of magnitude smaller than that in a typical metal, the S_d value in the former is thus approximately one order of magnitude larger than that in the latter. Alternatively, equation (<ref>) can be rewritten in the following form: S_d = -2C_e/(3n|e|), where C_e = π^2 n k_B^2 T /(2E_F) is the electronic specific heat per unit volume. This expression will be used in equation (<ref>).

The temperature behavior of thermopower in ITO films has been studied by several groups <cit.>. Figure <ref> shows the measured S(T) data between 5 and 300 K for one as-grown and three annealed ITO films. This figure clearly indicates that S is negative and varies essentially linearly with T in the wide temperature range 5–300 K. The negative sign confirms that electrons are the major charge carriers in ITO.

Recall that the Debye temperature is θ_D ∼ 1000 K in ITO <cit.>. Therefore, one may safely ascribe the measured S below 300 K (figure <ref>) mainly to the diffusion thermopower S_d(T). The straight solid lines in figure <ref> are least-squares fits to equation (<ref>). From the extracted slopes, one can compute the E_F value in each sample. The value of the electron concentration n can thus be deduced through the free-electron-model expression E_F = (ħ^2/2 m^∗)(3 π^2 n)^2/3. In ITO structures, the extracted values of E_F generally lie in the range ≈ 0.5–1 eV <cit.>, corresponding to values of n ≈ 10^20–10^21 cm^-3. Therefore, ITO can be treated as a highly degenerate semiconductor or a low-n metal, as mentioned.

It is worth noting that the n values in ITO films obtained from S(T) measurements agree well with those obtained from the Hall coefficient, R_H = 1/(n_He), measurements. Figure <ref> shows the extracted values of n (squares) and the Hall concentration n_H (circles) for a number of as-grown and annealed ITO films <cit.>. It is seen that the n values agree with the n_H values to within 30% or better (except for the films annealed at 200 ^∘C, see discussion in <cit.>). This observation provides strong experimental support for the validity of the theoretical predictions of a free-carrier-like energy bandstructure in ITO. In fact, such prevailing linearity in S(T) from liquid-helium temperatures all the way up to at least 300 K (figure <ref>) is seldom seen in textbook simple metals, where the phonon-drag contribution S_g(T) often causes profound, non-monotonic temperature behavior of S(T) (see, for example, reference <cit.> and figures 7.10 and 7.12 in reference <cit.>). Thus, ITO does serve as a model system for studying electronic conduction phenomena and extracting reliable electronic parameters.

§.§.§ Phonon-drag thermopower

We would like to comment on the negligible phonon-drag contribution to the measured S(T) in the ITO material. The phonon-drag term stems from the interaction between heat-conducting phonons and conduction electrons. In ITO (figures <ref>), the prevailing linearity over a wide range of temperature is a direct and strong indication of the absence of the phonon-drag contribution. The reason for the practically complete suppression of the phonon-drag term can be explained as follows.
Considering the phonon scattering processes and ignoring their frequency dependence, the phonon-drag thermopower S_g(T) at T < θ_D can be approximated by <cit.>

S_g ≃ - [C_g/(3n|e|)] [τ_ph/(τ_ph + τ_ph-e)] ≃ - [C_g/(3n|e|)] (τ_ph/τ_ph-e) ≃ (1/2)(τ_ph/τ_e-ph) S_d,

where C_g is the lattice specific heat per unit volume, τ_ph is the phonon relaxation time due to all kinds of phonon scattering processes (such as phonon-phonon (ph-ph) scattering, phonon scattering with imperfections, etc.) except the phonon-electron (ph-e) scattering, and τ_ph-e is the ph-e scattering time. In writing equation (<ref>), we have assumed that τ_ph ≪ τ_ph-e. Note that we have also applied the energy-balance equation C_e /τ_e-ph = C_g /τ_ph-e (references <cit.>) to replace τ_ph-e by τ_e-ph.

Consider a representative temperature of 100 K ∼ 0.1θ_D in ITO. We take the phonon mean free path to be a few nanometers long <cit.>, which corresponds to a relaxation time τ_ph(100 K) ∼ 10^-12 s, with a sound velocity v_p ≃ 4400 m/s in ITO <cit.>. According to our previous studies of the weak-localization effect in ITO films <cit.>, we estimate τ_e-ph(100 K) ∼ 10^-11 s. Thus, equation (<ref>) indicates that the phonon-drag term would contribute only a few percent to the measured thermopower at a temperature of 100 K.

The underlying physics for the smallness of the phonon-drag term S_g can further be reasoned as follows. (i) The value of τ_ph in ITO is generally very short due to the presence of a moderately high level of disorder in this class of material. (ii) Since the e-ph coupling strength in a conductor is proportional to the carrier concentration n <cit.>, the relaxation time τ_e-ph in ITO is thus notably long compared with that in typical metals. (See further discussion in subsection 3.1.2.) These two intrinsic material characteristics combine to cause a small τ_ph/τ_e-ph ratio, and hence S_g ≪ S_d in the ITO material. By the same token, a linear temperature dependence of S(T) with negligible contribution from S_g has recently been observed in F-doped SnO_2 films <cit.>.

§ QUANTUM-INTERFERENCE TRANSPORT AT LOW TEMPERATURE: HOMOGENEOUS INDIUM TIN OXIDE FILMS AND NANOWIRES

In section 2, we have examined the temperature dependence of electrical resistivity and thermoelectric power over a wide temperature range to demonstrate that the electronic conduction properties of metallic ITO obey the standard Boltzmann transport equation. In particular, since ITO is endowed with a free-carrier-like energy bandstructure, the essential electronic parameters can be reliably extracted from combined ρ (T), S(T) and Hall coefficient R_H measurements. In this section, we show that metallic ITO also opens avenues for studies of quantum electron-transport properties. We shall focus on the quantum-interference weak-localization (WL) effect and the universal conductance fluctuation (UCF) phenomenon, which manifest in ITO films and nanowires at low temperatures. The many-body electron-electron interaction (EEI) effect in homogeneous disordered systems will not be explicitly discussed in this Topical Review, but will be briefly mentioned where appropriate.

§.§ Weak-localization effect and electron dephasing time

The WL effect and electron dephasing in disordered conductors have been studied for three decades <cit.>. During this time, the mesoscopic and nanoscale physics underlying these processes has witnessed significant theoretical and experimental advances.
Over the years, the WL effect has also been explored in a few TCO materials, including ITO <cit.>, and ZnO-based materials <cit.>. In this subsection, we address the experimental 3D, 2D, and 1D WL effects in ITO thick films, thin films, and nanowires, respectively. In particular, we show that ITO has a relatively long electron dephasing (phase-breaking) length, L_φ(T) = √(D τ_φ), and a relatively weak e-ph relaxation rate 1/τ_e-ph, where D is the electron diffusion constant, and τ_φ is the electron dephasing time. As a consequence, the WL effect in ITO can persist up to a high measurement temperature of ∼ 100 K. For comparison, in typical normal metals, the WL effect can often be observed only up to ∼ 20–30 K, due to a comparatively strong e-ph relaxation rate as the temperature increases to above liquid-helium temperatures <cit.>. Furthermore, as a consequence of the small 1/τ_e-ph, one may use ITO thick films to explicitly examine the 3D small-energy-transfer electron-electron (e-e) scattering rate, 1/τ_ee, 3D^N, for the first time in the literature <cit.>. A long L_φ also causes the 1D WL effect and the UCF phenomenon to significantly manifest in ITO nanowires with diameters d < L_φ. Since the electronic parameters, such as E_F and D, are well known in ITO, the value of τ_φ can be reliably extracted and closely compared with the theoretical calculations. Such levels of close comparison between experimental and theoretical values are nontrivial for many typical metals.

§.§.§ Weak-localization magnetoresistance in various dimensions

As discussed in section 2, ρ(T) of ITO samples decreases by small amounts (≲ 10% in polycrystalline films and ≲ 20% in single-crystalline nanowires) as the temperature decreases from 300 K down to liquid-helium (or liquid-nitrogen) temperatures, suggesting the presence of moderately high levels of disorder in all kinds of ITO materials. Thus, the WL effect must prevail in ITO. In 1983, Ohyama <cit.> measured ITO thin films and found negative magnetoresistance (MR) and a logarithmic temperature dependence of resistance in the wide temperature range 1.5–100 K. They explained the negative MR in terms of the 2D WL effect and the logarithmic temperature dependence of resistance in terms of a sum of the 2D WL and EEI effects. Figure <ref> shows a plot of the positive magnetoconductance (i.e., negative MR) induced by the WL effect in a 7.5 nm thick ITO film measured by Ohyama and coworkers. It is seen that the experimental data (symbols) can be well described by the 2D WL theory predictions (solid curves).

Recently, with the advances of nanoscience and technology, the 1D WL effect has been investigated in single-crystalline ITO nanowires <cit.>. In particular, since L_φ is relatively long in the ITO material at low temperatures (see below), the quasi-1D criterion L_φ > d is readily satisfied. Thus, significant 1D WL effects can be seen in ITO nanowires. Indeed, figure <ref>(a) shows a plot of the negative MR due to the 1D WL effect in a 60 nm diameter ITO nanowire studied by Hsu <cit.>. This nanowire had a low resistivity value of ρ(10 K) ≃ 185 μΩ cm. The magnetic field was applied perpendicular to the nanowire axis. The data (symbols) are well described by the 1D WL theory predictions (solid curves). The extracted dephasing lengths are L_φ(0.25 K) ≃ 520 nm and L_φ(40 K) ≃ 150 nm. Similarly, the negative MR in the 3D WL effect can be observed in ITO thick films and is well described by the 3D WL theory predictions.
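As a quick numerical check of these dimensionality criteria, one may evaluate L_φ = √(D τ_φ) against the wire diameter. In the minimal sketch below, the diffusion constant D = 3 cm^2/s and the τ_φ values are assumed, representative numbers for ITO, not the fitted parameters of any particular nanowire:

import numpy as np

# Minimal sketch of the dimensionality bookkeeping: compare L_phi = sqrt(D * tau_phi)
# with the nanowire diameter d. D and the tau_phi values are assumed, illustrative inputs.

D = 3.0e-4  # electron diffusion constant (m^2/s), assumed
d = 60e-9   # nanowire diameter (m), as in the 60 nm wire discussed above

for tau_phi in (1e-9, 1e-10, 1e-11):  # dephasing time (s), assumed values
    L_phi = np.sqrt(D * tau_phi)
    regime = "quasi-1D (L_phi > d)" if L_phi > d else "3D (L_phi < d)"
    print(f"tau_phi = {tau_phi:.0e} s -> L_phi = {L_phi*1e9:6.1f} nm : {regime} for WL")

With these inputs, τ_φ ∼ 1 ns reproduces the ∼ 500 nm dephasing length scale quoted at sub-kelvin temperatures, so the quasi-1D WL criterion L_φ > d is comfortably satisfied, whereas an elastic mean free path of only a few nanometers keeps the Boltzmann transport three-dimensional.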
(The explicit theoretical predictions for the 1D, 2D, and 3D MR in the WL effect can be found in <cit.> and references therein.)

§.§.§ Electron dephasing time

Measurements of MR in the WL effect allow one to extract the value of τ_φ. Detailed studies of the electron dephasing processes in ITO thin films have recently been carried out by Wu <cit.>. They have measured the negative MR due to the 2D WL effect and extracted the τ_φ values in two series of 15 and 21 nm thick ITO films in a wide temperature range 0.3–90 K. Figure <ref> shows a plot of the representative variation of the extracted 1/τ_φ with temperature. In general, the responsible dephasing processes are determined by the sample dimensionality, level of disorder, and measurement temperature <cit.>. In 3D weakly disordered metals, e-ph scattering is often the dominant dephasing mechanism <cit.>, while in reduced dimensions (2D and 1D), the e-e scattering is the major dephasing process <cit.>. As T → 0 K, a constant or very weakly temperature-dependent dephasing process may exist in a given sample, the physical origin of which is yet to be fully identified <cit.>. In ITO, as already mentioned, the e-ph relaxation rate is very weak.

The total electron dephasing rate 1/τ_φ(T) (the solid curves) in figure <ref> for the 2D ITO thin films studied by Wu <cit.> is described by

1/τ_φ (T) = 1/τ_φ^0 + A_ee, 2D^N T + A_ee, 2D T^2 ln( E_F/(k_B T)),

where the first, second, and third terms on the right-hand side of the equation stand for the `saturation' term, the small-energy-transfer (Nyquist) e-e scattering term, and the large-energy-transfer e-e scattering term, respectively. The small-energy-transfer term is dominant at low temperatures of T < ħ /(k_B τ_e), while the large-energy-transfer term is dominant at high temperatures of T > ħ /(k_B τ_e). By comparing their measured 1/τ_φ(T) with equation (<ref>), Wu found that their extracted values of the e-e scattering strengths A_ee, 2D^N ≈ 3 × 10^9 K^-1 s^-1 and A_ee, 2D ≈ 9 × 10^6 K^-2 s^-1 are consistent with the theoretical values to within a factor of ∼ 3 and ∼ 5, respectively.[The theoretical expressions for the small-energy-transfer and large-energy-transfer e-e scattering strengths, respectively, are A_ee, 2D^N = (e^2/(2πħ^2)) R_□ k_B ln(πħ/(e^2 R_□)) and A_ee, 2D = π k_B^2 /(2 ħ E_F), where R_□ is the sheet resistance. In the comparison of experiment with theory, the R_□ value was directly measured, and the E_F value was extracted from thermoelectric power measurement.] Considering that the ITO material is a disordered In_2-xSn_xO_3-δ compound with random Sn dopants and possible oxygen vacancies, such levels of agreement between experimental and theoretical values are satisfactory. The good theoretical estimates must derive from the free-carrier-like energy bandstructure characteristics of ITO, which renders evaluations of the electronic parameters reliable. In terms of dephasing length, figure <ref> corresponds to relatively long length scales of L_φ(0.3 K) ≈ 500 nm and L_φ(60 K) ≈ 45 nm.

The e-e scattering rate in other-dimensional ITO samples has also been studied. In the case of 1D nanowires, due to the sample dimensionality effect, the Nyquist e-e scattering rate obeys a 1/τ_ee, 1D^N ∝ T^2/3 temperature law <cit.>. This scattering process is largely responsible for the 1D WL MR shown in figures <ref>(a) and <ref>(b), as analyzed and discussed in <cit.>. In the case of 3D thick films, the temperature dependence of the Nyquist rate changes to the 1/τ_ee, 3D^N ∝ T^3/2 temperature law <cit.>.
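As a numerical aside, the theoretical 2D e-e scattering strengths quoted above are straightforward to evaluate. In the minimal sketch below, R_□ = 100 Ω and E_F = 0.8 eV are assumed, representative values for metallic ITO thin films, not the parameters of the particular Wu samples:

import numpy as np

# Minimal sketch of the theoretical 2D e-e scattering strengths from the footnote above.
# R_sq = 100 ohm and E_F = 0.8 eV are assumed, representative ITO values.

hbar = 1.0546e-34   # J s
kB = 1.3807e-23     # J/K
e = 1.6022e-19      # C

R_sq = 100.0        # sheet resistance (ohm), assumed
E_F = 0.8 * e       # Fermi energy (J), assumed

# Small-energy-transfer (Nyquist) strength: (e^2/(2*pi*hbar^2)) * R_sq * kB * ln(pi*hbar/(e^2*R_sq))
A_N = (e**2 / (2 * np.pi * hbar**2)) * R_sq * kB * np.log(np.pi * hbar / (e**2 * R_sq))

# Large-energy-transfer strength: pi * kB^2 / (2 * hbar * E_F)
A_L = np.pi * kB**2 / (2 * hbar * E_F)

print(f"A_ee,2D^N ~ {A_N:.2e} K^-1 s^-1")   # ~2e9 for these inputs
print(f"A_ee,2D   ~ {A_L:.2e} K^-2 s^-1")   # ~2e7 for these inputs

The resulting A_ee, 2D^N ∼ 2 × 10^9 K^-1 s^-1 and A_ee, 2D ∼ 2 × 10^7 K^-2 s^-1 are indeed of the same order as the experimental values quoted above, consistent with the factor-of-a-few agreement stated in the text.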
Owing to the intrinsically weak e-ph coupling in this material, ITO provides a valuable platform for detailed study of the 3D small-energy-transfer e-e scattering process over wide ranges of temperature and disorder, as discussed below.

In a 3D weakly disordered metal, the e-e scattering rate was calculated by Schmid in 1974, and his result is given by <cit.>

1/τ_ee = (π/8) (k_B T)^2/(ħ E_F) + [√(3)/(2ħ√(E_F))] [k_B T/(k_F l)]^3/2 .

A similar result has also been obtained by Altshuler and Aronov <cit.>. The first term on the right-hand side of equation (<ref>) is the e-e scattering rate in a perfect, periodic potential, while the second term is the enhanced contribution due to the presence of imperfections (defects, impurities, interfaces, etc.) in the sample. Microscopically, the second term stands for the Nyquist e-e scattering process and is dominant at low temperatures of T < ħ/(k_B τ_e), while the first term represents the large-energy-transfer process and dominates at high temperatures of T > ħ/(k_B τ_e) (references <cit.>). We shall denote the second term by 1/τ_ee, 3D^N = A_ee, 3D^N T^3/2.

In 3D weakly disordered typical metals, the e-ph scattering is strong and dominates over the e-e scattering <cit.>. Thus, equation (<ref>) has been difficult to test in a quantitative manner for decades, even though mesoscopic physics has witnessed marvelous advances. Very recently, Zhang <cit.> have measured the low-magnetic-field MRs in a series of 3D ITO films with thicknesses exceeding 1 micrometer. Their polycrystalline samples were prepared by the standard RF sputtering deposition method in an Ar and O_2 mixture. During deposition, the oxygen content, together with the substrate temperature, was varied to `tune' the electron concentration as well as the amount of disorder. By comparing the MR data with the 3D WL theory, Zhang extracted the dephasing rate 1/τ_φ as plotted in figure <ref>(a). Clearly, one observes a strict 1/τ_φ ∝ T^3/2 temperature dependence in a wide T range 4–35 K. Quantitatively, the scattering rate of the first term in equation (<ref>) is about one order of magnitude smaller than that of the second term even at T = 35 K in ITO. Thus, the contribution of the first term can be safely ignored. The straight solid lines in figure <ref>(a) are described by 1/τ_φ = 1/τ_φ^0 + A_ee, 3D^N T^3/2, where 1/τ_φ^0 is a constant, and A_ee, 3D^N ≃ (2.1–2.8) × 10^8 K^-3/2 s^-1 for various samples. These experimental A_ee, 3D^N values are within a factor of ∼ 3 of the theoretical values given by the second term of equation (<ref>).

Furthermore, applying the free-electron model, Zhang <cit.> rewrote the second term on the right-hand side of equation (<ref>) in the form 1/τ_ee, 3D^N = A_ee, 3D^N T^3/2 = (1.22 √(m^∗) / ħ^2) (k_B T)^3/2 k_F^-5/2 l^-3/2. This expression allows one to check the combined disorder (k_F^-3/2 l^-3/2) and carrier concentration (k_F^-1) dependence of 1/τ_ee, 3D^N at a given temperature. Figure <ref>(b) shows a plot of the variation of the extracted 1/τ_φ with k_F^-5/2 l^-3/2 at two T values of 5 and 15 K. Obviously, a variation 1/τ_φ ∝ k_F^-5/2 l^-3/2 is observed. Quantitatively, the experimental slopes (≃ 1.2 × 10^19 and 3.7 × 10^19 m^-1 s^-1 at 5 and 15 K, respectively) in figure <ref>(b) are within a factor of ∼ 5 of the theoretical values.
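The theoretical slope in this expression, (1.22 √(m^∗)/ħ^2)(k_B T)^3/2, can be evaluated directly. In the minimal sketch below, the effective mass m^∗ = 0.35 m_e is an assumed, commonly quoted value for ITO rather than the fitted parameter of these particular films:

import numpy as np

# Minimal sketch of the theoretical slope of 1/tau_phi versus k_F^(-5/2) l^(-3/2),
# slope = (1.22 * sqrt(m*) / hbar^2) * (kB * T)^(3/2). m* = 0.35 m_e is assumed.

hbar = 1.0546e-34           # J s
kB = 1.3807e-23             # J/K
m_star = 0.35 * 9.109e-31   # effective mass (kg), assumed

for T in (5.0, 15.0):
    slope = 1.22 * np.sqrt(m_star) / hbar**2 * (kB * T)**1.5
    print(f"T = {T:4.1f} K -> theoretical slope ~ {slope:.2e} m^-1 s^-1")

The resulting slopes, ∼ 3.6 × 10^19 m^-1 s^-1 at 5 K and ∼ 1.8 × 10^20 m^-1 s^-1 at 15 K, differ from the experimental values by factors of ∼ 3–5, illustrating the level of agreement stated above.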
Thus, the experimental dephasing rate 1/τ_φ ≃ 1/τ_ee, 3D^N = A_ee, 3D^N T^3/2 in ITO thick films quantitatively confirms the temperature, disorder and carrier concentration dependences of the Schmid-Altshuler-Aronov theory of 3D small-energy-transfer e-e scattering in disordered metals <cit.>.

Electron-phonon relaxation rate. We would like to comment on why the e-e scattering dominates the electron dephasing rate in 3D ITO thick films (figure <ref>) in a wide T range up to several tens of kelvin. This is because the ITO material possesses relatively low n values, which result in a greatly suppressed e-ph relaxation rate, 1/τ_e-ph ≪ 1/τ_ee,3D^N. Theoretically, it is established that the electron scattering by transverse vibrations of defects and impurities dominates the e-ph relaxation. In the quasi-ballistic limit (q_T l > 1, where q_T is the wavenumber of a thermal phonon),[In high-quality ITO structures, q_T l ≈ 0.1T <cit.>, and hence the quasi-ballistic limit is valid above ∼ 10 K. In disordered normal metals, due to a relatively short electron mean free path l = 3π^2 ħ /(e^2 k_F^2 ρ) ∝ 1/k_F^2 for the same ρ value, the quasi-ballistic regime is more difficult to realize in experiment. For example, a polycrystalline Ti_73Al_27 alloy <cit.> (an amorphous CuZrAl alloy <cit.>) with ρ ≈ 225 μΩ cm (≈ 200 μΩ cm) has a value of q_T l ≈ 0.006T (≈ 0.01T).] the electron-transverse phonon scattering rate is given by <cit.>

1/τ_e-t,ph = [3π^2 k_B^2 β_t/((p_F u_t)(p_F l))] T^2,

where β_t = (2E_F/3)^2 N(E_F)/(2ρ_m u_t^2) is the electron-transverse phonon coupling constant, p_F is the Fermi momentum, u_t is the transverse sound velocity, and ρ_m is the mass density. Since the electronic parameters E_F, p_F, N(E_F) and l in ITO samples are known, the theoretical value of equation (<ref>) can be computed and is of the magnitude 1/τ_e-t,ph ∼ 4×10^6 T^2 K^-2 s^-1. Note that this relaxation rate is about one order of magnitude smaller than 1/τ_ee, 3D^N even at a relatively high temperature of 40 K. A weak e-ph relaxation rate allows the quantum-interference WL effect and UCF phenomena to persist up to a few tens of kelvin in ITO.[The electron dephasing length L_φ = √(D τ_φ) ≃ √(D τ_e-ph) above a few kelvin is much shorter in a typical disordered metal than in ITO, due to both a much shorter τ_e-ph and a smaller diffusion constant D ∝ 1/(N(E_F) ρ) ∝ 1/N(E_F) for the same ρ value in the former.]

We reiterate that equation (<ref>) predicts a relaxation rate 1/τ_e-t,ph ∝ n. On the other hand, equation (<ref>) predicts a scattering rate 1/τ_ee, 3D^N ∝ n^-5/6. Thus, the ratio of these two scattering rates varies approximately inversely with the square of n, namely, (1/τ_ee, 3D^N)/(1/τ_e-t,ph) ∝ n^-2. Since the n values in ITO samples are relatively low, the 3D small-energy-transfer e-e scattering rate can thus be enhanced over the e-ph relaxation rate. This observation can be extended to other TCO materials, and warrants further investigation.

We also would like to note that, in recent studies of superconducting hot-electron bolometers, a weak e-ph relaxation rate has been observed in quasi-2D heterostructures containing ultrathin La_2-xSr_xCuO_4 (LSCO) layers <cit.>. LSCO has an n value about two orders of magnitude lower than that in the conventional superconductor NbN, and hence τ_e-ph(LSCO) is nearly two orders of magnitude longer than τ_e-ph(NbN). In short, we remark that slow e-ph relaxation is a general intrinsic property of low-n conductors.
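As a numerical aside, the hierarchy of the two relaxation rates quoted above is easy to check. In the minimal sketch below, the e-e prefactor 2.5 × 10^8 K^-3/2 s^-1 is taken as representative of the measured (2.1–2.8) × 10^8 range; both prefactors are ITO-specific estimates rather than universal constants:

# Minimal sketch comparing the order-of-magnitude rates quoted in the text:
# 1/tau_e-ph ~ 4e6 * T^2 and 1/tau_ee,3D^N ~ 2.5e8 * T^(3/2), with T in kelvin.

A_eph = 4.0e6   # e-ph prefactor (K^-2 s^-1), from the estimate above
A_ee = 2.5e8    # e-e prefactor (K^-3/2 s^-1), representative of the quoted range

for T in (4.0, 15.0, 40.0):
    r_eph = A_eph * T**2
    r_ee = A_ee * T**1.5
    print(f"T = {T:4.1f} K: 1/tau_e-ph ~ {r_eph:.1e} s^-1, "
          f"1/tau_ee,3D^N ~ {r_ee:.1e} s^-1, ratio ~ {r_ee/r_eph:.0f}")

Even at 40 K the e-ph rate remains roughly an order of magnitude below the Nyquist e-e rate, which is why the e-e process controls the dephasing over the whole experimentally accessed temperature range.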
Generally speaking, one may keep in mind that the relaxation rate varies approximately as 1/τ_e-ph ∝ n (references <cit.>).

Spin-orbit scattering time. According to the recent measurements on a good number of ITO films <cit.> and nanowires <cit.> down to as low as 0.25 K, only negative MR was observed (see, for example, figure <ref>(a)). This result suggests that the spin-orbit scattering rate, 1/τ_so, is relatively weak in ITO. Even at sub-kelvin temperatures where the inelastic electron scattering events are scarce, one still obtains 1/τ_so < 1/τ_ee^N(0.25 K) in many ITO samples. In other words, the ITO material possesses an inherently long spin-orbit scattering length L_so = √(D τ_so). In typical ITO films <cit.>, the extracted length scale is L_so > 500 nm, corresponding to a scattering time τ_so > 250 ps. This τ_so value is one to two orders of magnitude longer than those in typical metals, such as Ag films <cit.> and Sn-doped Ti_73Al_27 alloys <cit.>.

In practice, the strength of spin-orbit coupling in a given metal can be tuned by varying the level of disorder. In general, the spin-orbit scattering rate can be approximately expressed by 1/τ_so ∝ Z^4/τ_e ∝ ρ, where Z is the atomic number of the relevant (heavy) scatterer. Indeed, an enhancement of the spin-orbit scattering rate has been achieved in an ITO nanowire which was intentionally made to have a high resistivity value of ρ(10 K) = 1030 μΩ cm <cit.>. Hsu then observed positive MR at temperatures T < 4 K in low magnetic fields, see figure <ref>(b). A positive MR is a direct manifestation of the weak-antilocalization effect which results from the scattering rates 1/τ_so > 1/τ_ee, 1D^N at T < 4 K. At higher temperatures, a negative MR was recovered, suggesting that 1/τ_so < 1/τ_ee, 1D^N at T > 4 K. In this high-ρ ITO nanowire, Hsu obtained a moderate length scale L_so ≈ 95 nm, corresponding to a scattering time τ_so ≈ 15 ps. The capability of tuning the spin-orbit coupling strength might be useful for the future implementation of nanoscale spintronic devices <cit.>. Recently, Shinozaki <cit.> have observed an increasing ratio (1/τ_so)/(1/τ^N_ee, 3D) with increasing ρ in a series of amorphous indium-zinc-oxide and indium-(tin,gallium)-zinc-oxide thick films.

§.§ Universal conductance fluctuations

Universal conductance fluctuations (UCFs) are a fundamental phenomenon in mesoscopic physics. The UCFs originate from the quantum interference between electron partial waves that propagate along different trajectories in a miniature system in which classical self-averaging is absent or incomplete <cit.>. Thus, the shape of the UCF patterns (called `magneto-fingerprints') is very sensitive to the specific impurity configuration of a given sample. The UCFs have previously been experimentally observed in lithographic metal and semiconductor mesoscopic structures at low temperatures <cit.>, where the electron dephasing length L_φ is comparable to the sample size. Recently, UCFs have been observed in new artificial materials, including epitaxial InAs nanowires <cit.>, lithographic ferromagnets <cit.>, carbon nanotubes <cit.>, graphene <cit.>, and topological insulators <cit.>. These new observations in artificially synthesized materials have enriched and deepened quantum electron-transport physics.

Wagner <cit.> have measured the UCFs in lithographically defined ferromagnetic (Ga,Mn)As nanowires. Figure <ref>(a) shows their measured conductance G as a function of magnetic field B for three wires at T = 20 mK.
The wires were ∼ 20 nm wide and 100, 200, or 300 nm long. Figure <ref>(b) shows G versus B at several different temperatures between 20 mK and 1 K for the 200 nm long wire. The magnetic field was applied perpendicular to the wire axis. Figure <ref>(b) clearly reveals that the UCFs are observable below ∼ 0.5 K. Figure <ref>(a) demonstrates that the UCF amplitude significantly decreases with increasing sample length, suggesting a fairly short dephasing length of L_φ(20 mK) ≈ 100 nm. For the 100 nm long wire, the peak-to-peak UCF amplitude reaches a value of e^2/h at 20 mK, where h is the Planck constant.

Impurity reconfiguration. Let us return to the case of ITO. Since L_φ can reach ≈ 500 nm at low temperatures, the ITO nanowires are very useful for investigations of the 1D UCF phenomena. Yang <cit.> have recently carried out magneto-transport measurements on individual ITO nanowires with a focus on studying the UCFs. Their nanowires were made by implanting Sn ions into In_2O_3-δ nanowires. Figures <ref>(a)–(d) show four plots of the variation of the UCFs, denoted by δ G_UCF(T,B), with magnetic field B for a 110 nm diameter ITO nanowire at several temperatures.[The universal conductance fluctuation δ G_UCF(T,B) is defined by subtracting a smooth magneto-conductance background (including the WL MR contribution) from the measured G(T,B).] The magnetic field was applied perpendicular to the nanowire axis. Here, after the first run at liquid-helium temperatures, the nanowire was thermally cycled to room temperature, at which it stayed overnight, and cooled down again for the magneto-transport measurements at liquid-helium temperatures. The thermal cycling to room temperature was repeated twice, and the sample was thus measured three times, at three different cooldowns. The idea was that thermal cycling to 300 K could possibly induce impurity reconfiguration in the given nanowire. A new impurity configuration must lead to differing trajectories of the propagating electron partial waves, which in turn cause distinct quantum interference. As a result, the shape of the UCF patterns should be completely changed. Figure <ref>(a) shows δ G_UCF(T,B) as a function of B at several temperatures measured at the first cooldown. Figure <ref>(b) shows δ G_UCF(T,B) as a function of B at several temperatures measured at the second cooldown, and figure <ref>(c) shows those measured at the third cooldown.

A number of important UCF features and the underlying physics can be learned from close inspection of these figures. (i) Inspection of figures <ref>(a)–(c) indicates that the UCF magnitudes decrease with increasing temperature and disappear at ∼ 25 K. Thus, these quantum conductance fluctuations are distinctly different from classical thermal noise, whose resistance fluctuation magnitudes increase with increasing temperature. (ii) During a given cooldown, the shape of the UCF patterns at different temperatures remains the same to a large extent. This observation implies that the impurity configuration is frozen for a considerable period of time if the nanowire is constantly kept at liquid-helium temperatures. A given impurity configuration gives rise to a specific `magneto-fingerprint,' strongly suggesting that the UCF phenomenon is a robust manifestation of an intrinsic quantum-interference effect. (iii) At a given temperature, the UCFs among different cooldowns reveal similar peak-to-peak magnitudes.
(iv) Figure <ref>(d) shows a plot of the δ G_UCF(T = 0.26 K,B) curves taken from figure <ref>(a) (top curve) and figure <ref>(b) (middle curve), and their difference (bottom curve). This figure is convenient for close inspection and comparison. The top two curves reveal completely different shapes of the UCF patterns, strongly reflecting that thermal cycling to 300 K has induced an impurity reconfiguration. On the other hand, the UCF magnitudes of these two curves remain similar, with a peak-to-peak value of δ G_UCF(T = 0.26 K) ≈ 0.5 e^2/h for both curves. The reason why similar UCF magnitudes are retained is as follows. The UCF magnitudes in a given nanowire are governed by the L_φ values, which are determined by the level of disorder, i.e., the ρ value (or the R_□ value in 2D), see subsection 3.1.2. The ρ (R_□) value of a sample is determined by the total number of impurities, but is insensitive to the specific spatial distribution of the impurities (provided that the impurity concentration is uniform throughout the sample).[The UCF studies also allow extractions of the L_φ (T) values in a miniature sample. The values thus obtained are in fair accord with those extracted from the WL MR measurements. In addition to L_φ, the thermal diffusion length L_T plays a key role in governing the UCF magnitudes.]

Classical self-averaging and thermal averaging at finite temperatures. In the case of a quasi-1D wire with length L, the UCF theory predicts a root-mean-square conductance fluctuation magnitude of √(⟨ (δ G_UCF)^2 ⟩) ≃ 0.73 e^2/h in the limit of T → 0 K <cit.>. In this low-T limit, the wire behaves as a single phase-coherent regime. As the temperature gradually increases from absolute zero, L_φ (T) becomes progressively shorter and one has to take into account the classical self-averaging effect. That is, the phase-coherent regime is expected to be cut off by L_φ and the UCF magnitude √(⟨ (δ G_UCF)^2 ⟩) is predicted to be suppressed by a factor (L_φ/L)^3/2 under the condition L_φ < L_T, where L_T = √(Dħ/(k_B T)) ∝ 1/√(T) is the thermal diffusion length defined in the EEI theory. The suppression of the UCF magnitudes originates from the fact that the UCFs of different phase-coherent regimes fluctuate statistically independently. If the temperature further increases such that L_T < L_φ or, equivalently, the thermal energy exceeds the Thouless energy, k_B T > ħ/τ_φ, one also has to take into account the thermal averaging effect. That is, the phase-coherent regime is now expected to be cut off by L_T and the UCF magnitude √(⟨ (δ G_UCF)^2 ⟩) is predicted to be suppressed by a factor (L_T/L) √(L_φ/L). These theoretical concepts have been well accepted by the mesoscopic physics communities for three decades, but have rarely been experimentally tested in a quantitative manner. The lack of experimental information was mainly because the UCFs could be observed only at temperatures below 1 K in conventional lithographic metal and semiconductor mesoscopic structures. Fortunately, the observations of the UCFs in ITO nanowires over a wide range of temperature from below 1 K up to above 10 K now provide a unique opportunity to verify these subtle UCF theory predictions.

Figure <ref> shows a plot of the variation of the measured √(⟨ (δ G_UCF)^2 ⟩) with temperature for three ITO nanowires studied by Yang <cit.>. Surprisingly, the theoretical predictions invoking the thermal averaging effect (dashed curves) diverge significantly from the measured UCF magnitudes (symbols).
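The two suppression factors introduced above are simple to encode. In the minimal sketch below, the wire length L = 1 μm, the diffusion constant D = 3 cm^2/s, and the (T, L_φ) pairs are assumed, illustrative inputs, not the parameters of the Yang nanowires:

import numpy as np

# Minimal sketch of the quasi-1D UCF magnitude with classical self-averaging and
# thermal averaging. L, D and the (T, L_phi) pairs are assumed, illustrative values.

hbar = 1.0546e-34  # J s
kB = 1.3807e-23    # J/K
D = 3.0e-4         # diffusion constant (m^2/s), assumed
L = 1.0e-6         # wire length (m), assumed

def rms_ucf(T, L_phi):
    """rms UCF amplitude in units of e^2/h, following the averaging rules in the text."""
    L_T = np.sqrt(D * hbar / (kB * T))  # thermal diffusion length
    if L_phi >= L:
        return 0.73                              # single phase-coherent wire
    if L_phi < L_T:
        return 0.73 * (L_phi / L)**1.5           # classical self-averaging regime
    return 0.73 * (L_T / L) * np.sqrt(L_phi / L)  # thermal averaging regime

for T, L_phi in ((0.3, 500e-9), (4.0, 250e-9), (20.0, 120e-9)):
    print(f"T = {T:5.1f} K, L_phi = {L_phi*1e9:4.0f} nm -> rms dG ~ {rms_ucf(T, L_phi):.3f} e^2/h")

In the thermal averaging regime the predicted magnitude is controlled by the factor (L_T/L) ∝ 1/√(T), which is the origin of the approximately 1/√(T) decay of the dashed theoretical curves.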
In figure <ref>, the theoretical curves vary approximately as 1/√(T), while the experiment reveals a much slower temperature dependence. In other words, the phase-coherent regime in the 1D UCF phenomenon is not cut off by L_T, even though the experiment well satisfied the condition k_B T > ħ/τ_φ (L_T < L_φ). The reason why the thermal averaging effect played no significant role in figure <ref> is not understood. The ITO nanowires make it experimentally feasible to reexamine whether any ingredients in the theoretical concepts for thermal averaging in mesoscopic physics might have been overlooked (overestimated).

In summary, the UCF phenomena manifest rich and subtle quantum-interference properties of a mesoscopic or nanoscale structure. They provide crucial information about the impurity configuration in a particular sample. In ITO nanowires, the UCF signals persist up to 20–30 K. For comparison, recall that in conventional lithographic metal samples, the UCFs (including magnetic-field dependent UCFs and temporal UCFs <cit.>) can only be observed at sub-kelvin temperatures <cit.>. Such pronounced conductance fluctuations provide valuable opportunities for critical examinations of the underlying UCF physics <cit.>. The presence of marked UCFs suggests that there must exist a large number of point defects in artificially synthesized ITO nanostructures, even though the nanowires exhibit a single-crystalline structure under high-resolution transmission electron microscopy studies.[We note that it has recently been found that high levels of point defects appear in most artificially grown single-crystalline nanostructures, including ITO, RuO_2 <cit.>, and ZnO <cit.> nanowires.]

§ MANY-BODY ELECTRON TRANSPORT IN GRANULAR METALS: INHOMOGENEOUS INDIUM TIN OXIDE ULTRATHIN FILMS

In this section, we discuss the electrical-transport properties of inhomogeneous ITO ultrathin films (average thickness ≈ 5–15 nm) which reveal new many-body physical phenomena that are absent in homogeneous disordered systems. These new physical properties, including logarithmic temperature dependences of both the longitudinal electrical conductivity and the Hall transport in a wide range of temperature, have recently been theoretically predicted <cit.>, but had not previously been experimentally tested in detail.

Generally speaking, granular metals are composite materials that are composed of finely dispersed mixtures of immiscible metal and insulator grains. In many cases, the insulating constituent may form an amorphous matrix <cit.>. In terms of electrical-transport properties, three distinct regimes can be achieved in a given granular system, i.e., the metallic, the insulating (dielectric), and the metal-insulator transition regimes. These three regimes can be conveniently categorized by a quantity called G_T. Here G_T is the average tunneling conductance between neighboring (metal) grains and is a key parameter which determines the global electrical properties of a given granular array. G_T can be expressed in units of e^2/ħ and written as G_T = g_T (2e^2/ħ), where ħ is the Planck constant divided by 2π, and g_T is a dimensionless average tunneling conductance. The factor 2 arises from the two allowed spin directions for a tunneling electron. When g_T > g_T^c (g_T < g_T^c) the system lies in the metallic (insulating) regime. A metal-insulator transition occurs at g_T = g_T^c.
Here g_T^c = (1/(2πd̃)) ln(E_c/δ̃) is a critical dimensionless tunneling conductance whose value depends on the dimensionality of the granular array d̃, where E_c is the charging energy, and δ̃ is the mean energy level spacing in a grain (references <cit.>). In experiments, the magnitude of g_T^c is of order unity or somewhat smaller <cit.>.

Over the decades, there has been extensive theoretical and experimental research on the microstructures and electrical-transport properties of granular systems <cit.>. New discoveries have continuously been made, and a good understanding of the physical properties has been established. For example, the giant Hall effect (GHE) has recently been discovered in Cu_v(SiO_2)_1-v <cit.> and Mo_v(SnO_2)_1-v <cit.> granular films under the conditions that the grain size a ≪ L_φ and the metal volume fraction v is around the quantum percolation threshold v_q <cit.>. The GHE is a novel physical phenomenon which manifests a huge Hall coefficient R_H that is enhanced by ∼ 3 orders of magnitude when v approaches v_q from the metallic side. The GHE is theoretically explained to arise from the local quantum-interference effect in the presence of rich microstructures in a metal-insulator composite consisting of nanoscale granules <cit.>. While the single-particle local quantum interference causes the new GHE, in the following discussion we shall focus on the many-body electronic transport properties in granular systems.

In the rest of this section, we concentrate on the regime with g_T ≫ 1 or g_T ≫ g_T^c. The material systems that we are interested in can thus be termed `granular metals.' In particular, we shall demonstrate that inhomogeneous ITO ultrathin films are an ideal granular metal system which provides valuable and unique playgrounds for critically testing the recent theories of granular metals. These new theories of granular metals are concerned with the many-body electron-electron (e-e) interaction effect in inhomogeneous disordered systems. They focus on the electronic conduction properties in the temperature regime above moderately low temperatures (T > g_T δ̃/k_B), where the WL effect is predicted to be comparatively small or negligible <cit.>. In practice, one can explicitly measure the e-e interaction effect by applying a weak perpendicular magnetic field to suppress the quantum-interference WL effect.

§.§ Longitudinal electrical conductivity

For a long time, the electrical-transport properties of granular metals were not explicitly considered theoretically. It was widely taken for granted that the transport properties would be similar to those in homogeneous disordered metals <cit.>. It was only recently that Efetov, Beloborodov, and coworkers investigated the many-body Coulomb e-e interaction effect in granular metals. They <cit.> found that the influences of the e-e interaction on the longitudinal electrical conductivity σ (T) and the electronic density of states N(E) in granular metals are dramatically different from those in homogeneous disordered metals. In particular, for granular metals with g_0 ≫ g_T and g_T ≫ 1, the intergrain e-e interaction effect causes a correction to σ in the temperature range g_T δ̃ < k_B T < E_c. Here g_0 = G_0/(2e^2/ħ), and G_0 is the conductance of a single metal grain.
In this temperature interval of practical experimental interest, the total conductivity is given by <cit.>

σ = σ_0 + δσ = σ_0 [ 1 - (1/(2π g_T d̃)) ln( g_T E_c/(k_B T)) ],

where σ_0 = G_T a^(2-d̃) is the tunneling conductivity between neighboring grains in the absence of the Coulomb interaction, and a is the average radius of the metal grain. Note that the correction term δσ is negative and possesses a logarithmic temperature dependence. That is, the Coulomb e-e interaction slightly suppresses the intergrain electron tunneling conduction, giving rise to δσ /σ_0 ∝ - 1/g_T for g_T ≫ 1. This δσ ∝ ln T temperature law is robust and independent of the array dimensionality d̃. It should also be noted that this correction term δσ does not exist in the EEI theory of homogeneous disordered metals <cit.>.

Soon after the theoretical prediction of equation (<ref>), the electrical-transport properties of several granular systems were studied, including Pt/C composite nanowires <cit.>, B-doped nano-crystalline diamond films <cit.>, and granular Cr films <cit.>. The δσ ∝ ln T temperature law has been confirmed. In addition, a large suppression in the electronic density of states around the Fermi energy N(E_F) has been found in studies of the differential conductances of Al/AlO_x/Cr tunnel junctions <cit.>, and thin Pd-ZrO_2 granular films <cit.>. This last experimental result also qualitatively confirmed the prediction of the theory of granular metals <cit.>. However, a quantitative comparison is not possible, due to the lack of a theoretical expression for N(T,V) at finite voltages and finite temperatures.

Figure <ref> shows the variation of the longitudinal electrical conductivity with the logarithm of temperature for four inhomogeneous ITO ultrathin films studied by Zhang <cit.>. These films were grown by the RF sputtering deposition method onto glass substrates. They were ≈ 10±3 nm thick, and the average grain sizes were in the range ≈ 24–38 nm. Therefore, the samples can be treated as 2D random granular arrays. (Each sample was nominally covered by one layer of ITO granules.) The conductivities were measured in a perpendicular magnetic field of 7 T in order to suppress any residual 2D WL effect. Inspection of figure <ref> clearly demonstrates a δσ ∝ ln T variation over a wide temperature range from ∼ 3 K to T^∗, where T^∗ = T^∗(E_c) is the maximum temperature below which the ln T law holds. Therefore, the prediction of equation (<ref>) is confirmed. Quantitatively, from the least-squares fits (the straight solid lines in figure <ref>), values of the intergrain tunneling conductance g_T ≃ 7–31 were obtained. Therefore, the theoretical criterion of g_T ≫ 1 for equation (<ref>) to be valid is satisfied. We reiterate that the δσ ∝ ln T temperature law observed in figure <ref> is not due to the more familiar 2D EEI effect which widely appears in homogeneous disordered systems <cit.>.

§.§ Hall transport

Apart from the longitudinal electrical conductivity, Kharitonov and Efetov <cit.> have investigated the influence of the Coulomb interaction on the Hall resistivity, ρ_xy, by taking the electron dynamics inside individual grains into account. They found that there also exists a correction to the Hall resistivity in the wide temperature range g_T δ̃ ≲ k_B T ≲ min(g_T E_c, E_Th), where E_Th = D_0 ħ /a^2 is the Thouless energy of a grain of radius a, D_0 is the electron diffusion constant in the grain, and min(g_T E_c, E_Th) denotes the smaller of g_T E_c and E_Th.
The resulting Hall resistivity is given by <cit.>

ρ_xy (T) = ρ_xy,0 + δρ_xy = [B/(n^∗ e)] [ 1 + (c_d/(4π g_T)) ln( min(g_T E_c, E_Th)/(k_B T)) ],

where n^∗ is the effective carrier concentration, c_d is a numerical factor of order unity, and ρ_xy,0 = B/(n^∗ e) is the Hall resistivity of the granular array in the absence of the Coulomb e-e interaction effect. We point out that the microscopic mechanisms leading to the ln T temperature behaviors in equations (<ref>) and (<ref>) are distinctly different. The longitudinal conductivity correction δσ originates from the renormalization of the intergrain tunneling conductance g_T, while the Hall resistivity correction δρ_xy stems from virtual electron diffusion inside individual grains <cit.>.

As mentioned previously, the theoretical prediction of equation (<ref>) has been experimentally tested in a few granular systems. By contrast, the prediction of equation (<ref>) is far more difficult to verify in real material systems. The major reason is that the ρ_xy,0 magnitude (∝ 1/n^∗) in a granular metal with g_T ≫ 1 is already small and difficult to measure. Obviously, the e-e interaction-induced correction term δρ_xy is smaller still. Typically, the ratio δρ_xy/ρ_xy,0 ∼ 1/g_T is on the order of a few percent, and equation (<ref>) is a perturbation-theory prediction.

In section 2, we have stressed that the carrier concentration in the ITO material is ∼ 2 to 3 orders of magnitude lower than that in typical metals. Thus, generally speaking, the Hall coefficient, R_H = ρ_xy/B, in ITO granular films would be ∼ 2 to 3 orders of magnitude larger than that in conventional granular films made of normal-metal granules. The theoretical prediction of equation (<ref>) can hence be experimentally tested by utilizing inhomogeneous ITO ultrathin films.

In addition to the observation in figure <ref>, Zhang <cit.> have studied the Hall transport in inhomogeneous ITO ultrathin films. Figure <ref> shows the temperature dependence of R_H for four samples they have measured. Evidently, one sees a robust R_H ∝ ln T variation over a wide temperature range from ∼ 2 K to T_max, where T_max is a temperature below which the ln T law holds. The T_max value for a given granular array is determined by the constituent grain parameters E_c and E_Th as well as the intergrain tunneling parameter g_T. For those ITO ultrathin films shown in figure <ref>, the experimental T_max values varied from ∼ 50 to ∼ 120 K. Quantitatively, the correction term contributes a small magnitude of [R_H(2 K) - R_H(T_max)] / R_H(T_max) ≃ δρ_xy(2 K)/ρ_xy,0 ≲ 5%, where R_H(T_max) ≃ 1/(n^∗ e) is the Hall coefficient in the absence of the Coulomb e-e interaction effect. The experimental data (symbols) can be well described by the theoretical predictions (solid straight lines) with satisfactory values of the adjustable parameters. Thus, the prediction of equation (<ref>) is experimentally confirmed for the first time in the literature.

In summary, the simultaneous experimental observations of the δσ ∝ ln T (figure <ref>) and δρ_xy ∝ ln T (figure <ref>) laws over a wide range of temperature from liquid-helium temperature up to and above liquid-nitrogen temperature strongly support the recent theoretical concepts for charge transport in granular metals, i.e., equations (<ref>) and (<ref>), which are formulated under the condition that the intergrain tunneling conductance g_T ≫ 1.
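The relative sizes of the two ln T corrections follow directly from the two equations above. In the minimal sketch below, g_T = 15 (within the quoted 7–31 range), d̃ = 2, c_d = 1, and the temperature window are assumed, illustrative inputs rather than the fitted parameters of the measured films:

import numpy as np

# Minimal sketch of the relative ln(T) corrections, delta_sigma/sigma_0 and
# delta_rho_xy/rho_xy,0, accumulated between T_high and T_low. All inputs are assumed,
# illustrative values.

g_T = 15.0      # intergrain tunneling conductance, assumed (within the quoted 7-31 range)
d_tilde = 2.0   # array dimensionality (one layer of granules)
c_d = 1.0       # numerical factor of order unity, assumed

T_low, T_high = 2.0, 100.0  # kelvin; T_high plays the role of T* (or T_max), assumed

d_sigma = np.log(T_high / T_low) / (2 * np.pi * g_T * d_tilde)
d_rho_xy = c_d * np.log(T_high / T_low) / (4 * np.pi * g_T)

print(f"|delta_sigma|/sigma_0  ~ {100 * d_sigma:.1f} %")
print(f"delta_rho_xy/rho_xy,0  ~ {100 * d_rho_xy:.1f} %")

Both corrections come out at the level of ∼ 2% for these inputs, consistent with the few-percent (≲ 5%) magnitudes quoted above.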
We note again that the free-carrier-like and, especially, the low-n characteristics of the ITO material have made possible a close experimental examination of equation (<ref>). While measurements of δσ are relatively easy, finding a proper granular metal with g_T ≫ 1 to measure the small correction term δρ_xy is definitely nontrivial. The ITO material made into an inhomogeneous ultrathin film form has opened up avenues for exploring the many-body Coulomb effects in condensed matter physics.

Recently, the thermoelectric power in the presence of granularity and in the limit of g_T ≫ 1 has been theoretically calculated <cit.>. It was predicted that the granularity could lead to substantial improvement in thermoelectric properties and, in particular, the figure of merit of granular materials could be high. Experimental investigations in this direction would be worthwhile in light of the development of useful thermoelectric materials. On the other hand, it has recently been reported that the presence of granularity causes an enhancement of the flicker noise (1/f noise) level in ITO films. This is ascribed to atomic diffusion along grain boundaries or the dynamics of two-level systems near the grain boundaries <cit.>. Since the 1/f noise could potentially hinder miniature-device performance, it would be of interest and importance to explore its properties in inhomogeneous ITO ultrathin films.

§ CONCLUSION

Indium tin oxide (ITO) is a very interesting and useful transparent conducting oxide (TCO) material. It is stable under ambient conditions and can be readily grown into a variety of forms, including polycrystalline thin and thick films, and single-crystalline nanowires. They can simultaneously have electrical resistivities as low as ≈ 150 μΩ cm at room temperature and optical transparencies as high as ≈ 90% transmittance at visible-light frequencies. Apart from technological considerations, the electronic conduction properties of ITO have rarely been systematically explored as a condensed-matter-physics research subject down to fundamental levels. In this Topical Review, we have focused on metallic ITO structures. We have shown that the overall electrical resistivity and thermoelectric power can be described by the Boltzmann transport equation. A linear temperature dependence of the thermoelectric power over a wide range of temperature eloquently manifests the free-carrier-like energy bandstructure around the Fermi level of this class of material. At liquid-helium temperatures, a marked weak-localization effect and universal conductance fluctuations emerge. ITO provides a rich playground for studying these quantum-interference phenomena in all three dimensions, which leads to an improved understanding of the underlying physics governing the properties of mesoscopic and nanoscale structures. Inhomogeneous ITO ultrathin films have opened up unique and valuable avenues for studying the many-body electron-electron interaction effect in granular metals. These new theoretical predictions cannot be addressed by employing conventional granular systems.

The objective of this Topical Review is not only to present the charge transport properties of ITO but also to demonstrate that the ITO material is versatile and powerful for unraveling new physics. Microscopically, the intrinsic electronic properties that make ITO an appealing technological as well as academic material are the free-carrier-like energy bandstructure and a low level of carrier concentration.
Owing to the inherent free-carrier-like characteristics, the electronic parameters can be reliably evaluated through the free-electron model, which in turn facilitates critical tests of a variety of long-standing and new theoretical predictions. A low carrier concentration gives rise to slow electron-phonon relaxation, which manifests itself in the linear electron-diffusion thermoelectric power and also yields a weak electron dephasing rate in the ITO material. In light of the development of and search for useful TCOs, it would be of great interest to investigate whether the numerous aspects of the novel electronic conduction properties that we have addressed in this Topical Review might also manifest in other TCO materials, such as those based on ZnO and SnO_2.

The authors thank Yuri Galperin, Andrei Sergeev, and Igor Beloborodov for valuable suggestions and comments, and David Rees for careful reading of the manuscript. We are grateful to Shao-Pin Chiu, Yi-Fu Chen, Chih-Yuan Wu, Yao-Wen Hsu, Bo-Tsung Lin, Ping-Yu Yang, and Yu-Jie Zhang for their collaborations at the various stages of our lasting research on ITO. One of us (JJL) also would like to thank Hsin-Fei Meng for incidentally igniting his interest in the marvelous electronic conduction properties of the ITO material a decade ago. This work was supported at NCTU by the Taiwan Ministry of Science and Technology through Grant No. NSC 102-2120-M-009-003 and the MOE ATU Program, and at TJU by the NSF of China through Grant No. 11174216 and the Research Fund for the Doctoral Program of Higher Education through Grant No. 20120032110065.

§ REFERENCES

Holland1955 Holland L 1956 Vacuum Deposition of Thin Films (New York: Wiley) p. 492Jarzebski-PSSa1982 Jarzȩbski Z M 1982 Phys. Stat. Sol. (a) 71 13book-Facchetti2010 Facchetti A and Marks T J 2010 Transparent Electronics: From Synthesis to Applications (Wiley, United Kingdom)Ginley-MRS Ginley D S and Bright C 2000 MRS Bull. 25 15Granqvist-TSF2002 Granqvist C G and Hultåker A 2002 Thin Solid Films 411 1Granqvis-Solar2007 Granqvist C G 2007 Sol. Energy Mater. Sol. Cells 91 1529Hamberg-PRB1984 Hamberg I, Granqvist C G, Berggren K F, Sernelius B E and Engström L 1984 Phys. Rev. B 30 3240Gerfin-JAP1996 Gerfin T and Grätzel M 1996 J. Appl. Phys. 79 1722Schroer-PRB1993 Schröer P, Krüger P and Pollmann J 1993 Phys. Rev. B 47 6971Imai2003 Imai Y J, Watanabe A and Shimono I 2003 J. Mater. Sci.: Mater. Electr. 14 149Imai2004 Imai Y J and Watanabe A 2004 J. Mater. Sci.: Mater. Electr. 15 743Karazhanov-PRB2007 Karazhanov S Z, Ravindran P, Kjekshus A, Fjellvåg H and Svensson B G 2007 Phys. Rev. B 75 155104Zunger-PRL2002 Kılıç Ç and Zunger A 2002 Phys. Rev. Lett. 88 095501Robertson-PRB1984 Robertson J 1984 Phys. Rev. B 30 3520Mishra-PRB1995 Mishra K C, Johnson K H and Schmidt P C 1995 Phys. Rev. B 51 13972LZQ-JAP2009 Li Z Q, Yin Y L, Liu X D and Song Q G 2009 J. Appl. Phys. 106 083701Schleife-PRB2011 Schleife A, Varley J B, Fuchs F, Rödl C, Bechstedt F, Rinke P, Janotti A and Van de Walle C G 2011 Phys. Rev. B 83 035116LXD-APL2008 Liu X D, Jiang E Y, Li Z Q and Song Q G 2008 Appl. Phys. Lett. 92 252104Osorio-GuillenPRL2008 Osorio-Guillén J, Lany S and Zunger A 2008 Phys. Rev. Lett. 100 036601Orita-JJAP2010 Orita N 2010 Jpn. J. Appl. Phys. 49 055801Chen-JAP2010 Chen D M, Xu G, Miao L, Chen L H, Nakao S and Jin P 2010 J. Appl. Phys. 107 063707Huy-PRB2011 Huy H A, Aradi B, Frauenheim T and Deák P 2011 Phys. Rev. B 83 155201Yamamoto-PRB2012 Yamamoto T and Ohno T 2012 Phys. Rev.
http://arxiv.org/abs/1702.07845v1
{ "authors": [ "Juhn-Jong Lin", "Zhi-Qing Li" ], "categories": [ "cond-mat.mes-hall", "cond-mat.dis-nn" ], "primary_category": "cond-mat.mes-hall", "published": "20170225072010", "title": "Electronic conduction properties of indium tin oxide: single-particle and many-body transport" }
Dynamic principle for ensemble control tools

A. Samoletov (A.Samoletov@liverpool.ac.uk), Department of Mathematical Sciences, University of Liverpool, Liverpool, UK; Institute for Physics and Technology, Donetsk, Ukraine
B. Vasiev (B.Vasiev@liverpool.ac.uk), Department of Mathematical Sciences, University of Liverpool, Liverpool, UK
============================================

Dynamical equations describing physical systems in contact with a thermal bath are commonly extended by mathematical tools called "thermostats". These tools are designed for sampling ensembles in statistical mechanics. Here we propose a dynamic principle underlying a range of thermostats which is derived using fundamental laws of statistical physics and ensures invariance of the canonical measure. The principle covers both stochastic and deterministic thermostat schemes. Our method has a clear advantage over a range of proposed and widely used thermostat schemes which are based on formal mathematical reasoning. Following the derivation of the proposed principle we show its generality and illustrate its applications, including the design of temperature control tools that differ from the Nosé-Hoover-Langevin scheme.

§ INTRODUCTION

Analysis of molecular systems is an essential part of research in a range of disciplines in natural sciences and in engineering <cit.>. Since molecular systems are affected by environmental thermodynamic conditions, they are studied in the context of statistical physics ensembles. Methods of dynamical sampling of the corresponding probability measures are important for applications and they are under extensive study and development <cit.>. The traditional application of thermostats is molecular dynamics (MD), that is, sampling of equilibrium systems with known potential energy functions V(q), where q is a system's configuration. However, the ability to sample equilibrium ensembles at constant temperature T would also imply the ability to sample arbitrary probability measures. Indeed, as an alternative to the conventional MD practice, one may use a probability density σ(q), theoretical or extracted from experimental data, to define the potential function as V(q)=-k_BT lnσ(q), where k_B is the Boltzmann constant.

Thermostats embedded into dynamical equations bring rich mathematical content into the so-obtained dynamics. Such dynamical systems with an invariant probability measure have become increasingly popular for mathematical studies in a wide range of applications including investigation of non-equilibrium phenomena <cit.>, mathematical biology models <cit.>, multiscale models <cit.>, Bayesian statistics and Bayesian machine learning applications <cit.>, and superstatistics <cit.>.

Here, we present a unified approach to the derivation of thermostats sampling the canonical ensemble. The corresponding method is derived using fundamental physical arguments that facilitate understanding the physics of thermostat schemes in general, and elucidate the physics of the Nosé-Hoover (NH) and Nosé-Hoover-Langevin (NHL) dynamics in particular. Besides, our method allows one to build a plethora of thermostats, stochastic as well as deterministic, including those previously proposed. We expect that it can also be adjusted to arbitrary probability measures.

Classical mechanics and equilibrium statistical physics are adequately described in terms of Hamiltonian dynamics. Dynamic thermostat schemes involve modified Hamiltonian equations of motion where certain temperature control tools are included.
The modified dynamics can be deterministic as well as stochastic <cit.>. Recently proposed NHL thermostats <cit.> combine deterministic dynamics with stochastic perturbations. This combination ensures ergodicity and allows "gentle" perturbation of the physical dynamics, which is often desired <cit.>.

To introduce our scheme, we consider a dynamical system S consisting of N particles in d-dimensional space (𝒩=dN degrees of freedom) described by the Hamiltonian function H(x), where x=(p,q) is a point in the phase space ℳ=ℝ^2dN, p={𝐩_i∈ℝ^d}_i=1^N are momentum variables and q={𝐪_i∈ℝ^d}_i=1^N are position variables. The Hamiltonian dynamics has the form ẋ=J∇H(x) in the phase space ℳ, where J is the symplectic unit. The canonical ensemble describes the system S in contact with the heat bath Σ (an energy reservoir permanently staying in thermal equilibrium with the thermodynamic temperature T), and S may exchange energy with Σ only in the form of heat. Thus, the temperature of the system S is fixed while its energy, E, is allowed to fluctuate. The canonical distribution has the form ρ_∞(x)∝exp[-β H(x)], where β=(k_BT)^-1. On average along an ergodic trajectory ⟨E(t)⟩=E(T)=const. The rate of energy exchange between the system S and the thermal bath Σ depends on the temperature T. Note that a Hamiltonian system is unable to sample the canonical distribution since there is no energy exchange between the system and the heat bath. To describe the heat transfer, it is necessary to modify the equations of motion in such a way that the dynamics becomes non-Hamiltonian <cit.>. Suppose ẋ=G(x) is a modified law of motion and Ḣ(x)=G(x)·∇H(x) is the rate of energy change (depending on T) such that ⟨G(x)·∇H(x)⟩=0, that is, the energy is constant on average. Let G(x)·∇H(x)∝ F(x,β), where the temperature dependence is key. In order to state the dynamic principle governing temperature control tools, a few definitions are required.

§ MICROSCOPIC TEMPERATURE EXPRESSIONS

Consider F(x,β) such that ⟨F(x,β)⟩=0 for all β>0. This condition is denoted as F(x,β)∼0, while the function F(x,β) is called the microscopic temperature expression (TE). For the system with H(x)=K(p)+V(q), examples of TEs include the kinetic TE, F_kin(p,β)=2K(p)β-𝒩, and the configurational TE, F_conf(q,β)=(∇V(q))^2β-Δ V(q) <cit.>.

Various TEs can be obtained in the following manner. Suppose that F(x,β) is a polynomial in β, F(x,β)=∑_n=0^2L+1φ_n(x)β^n∼0, where L∈ℤ_≥0 and the functions {φ_n(x)}_n=0^2L+1 are subject to specification. Rewrite F(x,β) in the form

F(x,β)=∑_k=0^L(φ_2k(x)+βφ_2k+1(x))β^2k∼0

for all β>0. Thus, from (<ref>) it follows that φ_2k(x)+βφ_2k+1(x)∼0 for all k∈{0,1,…,L}. To find φ_2k(x) and φ_2k+1(x) satisfying this condition, consider the basic expression F(x,β)=β φ_1(x)+φ_0(x). Substituting φ(x)∂_iH(x) for φ_1(x), where φ(x) is an arbitrary function, and utilizing the identity ∂_ie^-β H(x)=-β∂_iH(x)e^-β H(x) for all i=1,…,2dN and x∈ℳ, where ∂_i≡∂/∂ x_i, we get ∂_iφ(x)+φ_0(x)∼0. Then, excluding φ_0(x) from the basic expression, we arrive at F(x,β)=β φ(x)∂_iH(x)-∂_iφ(x)∼0 (or φ(x)∂_iH(x)-k_BT∂_iφ(x)∼0) for each and every x_i in ℳ, provided that φ(x)exp[-β H(x)]→0 as |x|→∞. This result can be represented in a compact form. Suppose φ_0(x) is a vector field on ℳ such that φ_0(x)exp[-β H(x)]→0 as |x|→∞. Then F_0(x,T)=φ_0(x)·∇H(x)-k_BT∇·φ_0(x)∼0. This form of TE was previously discussed <cit.>. More general TEs are allowed, e.g. vector fields F(x,β)=β ∇H(x)×φ(x)-∇×φ(x)∼0, and so on.
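Both basic TEs above are easy to probe numerically. The following sketch (not part of the original text) estimates the canonical averages of F_kin and F_conf for a one-dimensional harmonic oscillator, for which the canonical measure is Gaussian and can be sampled directly; the seed and temperature are arbitrary choices:

    import numpy as np

    # Sanity check, assuming a 1-D harmonic oscillator H = p^2/2 + q^2/2:
    # the canonical averages of  F_kin = 2*K(p)*beta - N  and of
    # F_conf = (V'(q))^2*beta - V''(q)  must both vanish.
    rng = np.random.default_rng(0)
    kT = 0.7
    beta = 1.0 / kT

    # exp(-beta*H) is Gaussian in p and q here, so sample it directly.
    p = rng.normal(0.0, np.sqrt(kT), size=1_000_000)
    q = rng.normal(0.0, np.sqrt(kT), size=1_000_000)

    F_kin = 2.0 * (p**2 / 2.0) * beta - 1.0   # N = 1 degree of freedom
    F_conf = (q**2) * beta - 1.0              # V'(q) = q, V''(q) = 1
    print(F_kin.mean(), F_conf.mean())        # both ~ 0 up to sampling error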
As a further generalization we introduce the notation

F_l(x,T)=φ_l(x)·∇H(x)-k_BT∇·φ_l(x),

where l=0,1,…,L, φ_0(x)=φ(x), and {φ_l(x)}_l=0^L is a set of vector fields such that φ_l(x)exp[-β H(x)]→0 as |x|→∞. Then the general scalar TE can be represented as

F_L(x,T)=∑_l=0^L F_l(x,T)(k_BT)^2l∼0

for all L∈ℤ_≥0. A particular example of the use of such a TE in a limited context (L=1 and φ_l(x)∝(p,0), leading to the kinetic TE) can be found in the literature <cit.>. In what follows we focus mainly on F_0(x,T) and only to a certain extent on F_L(x,T) where L≥1.

Although the expression (<ref>) implies the existence of an infinite number of TEs, they all are equivalent from the thermodynamic perspective. However, the time interval required to achieve a specified accuracy in ⟨F(x,β)⟩=0 can differ for different TEs <cit.>. In general, physical systems are often distinguished by multimodal distributions and by the existence of metastable states. Their dynamics is characterized by processes occurring on a number of timescales. We assume that TEs can be associated with dynamical processes occurring on various time scales, and thus they can be combined in multiscale models.

§ DYNAMIC PRINCIPLE

Now we claim the following dynamic principle for ensemble control tools: Let F(x,T) be a TE. Then there exists a dynamical system, ẋ=G(x), such that

∇H(x)·G(x)∝ F(x,T).

Relationship (<ref>) states that the rates of dynamical fluctuations in energy and in the TE are proportional; both are zero on average and there is no energy release along a whole trajectory in ℳ. It is a necessary condition for any thermostat. In what follows, invoking the fundamental requirements of statistical physics, we show that the relationship (<ref>) leads to a general method for obtaining stochastic and deterministic thermostats.

Let us consider the exchange of energy between the system S and the thermal bath Σ. Any system placed in the heat bath should to some extent perturb it and be affected by the backward influence of this perturbation. There exists a subsystem S_ad of Σ such that S_ad is involved in a joint dynamics with S. The rest of the heat bath is assumed to be unperturbed, permanently staying in thermal equilibrium. This is an approximation that is based on separation of relevant time scales. For instance, Brownian dynamics assumes that the characteristic time scales of S and Σ are well separated and the system S does not perturb Σ. If the time scale is refined (which is of particular importance for small systems), then we have to take into account the joint dynamics of S and S_ad. We will show that this case is closely related to NHL <cit.> and NH dynamics <cit.>.

Thus, we have two cases: (i) the system S doesn't perturb the thermal bath and there are no new dynamic variables; the thermal bath in this case can only be taken into account implicitly via stochastic perturbations (similar to the Langevin dynamics); (ii) the system S perturbs a part (S_ad) of the thermal bath Σ, while the rest of the thermal bath remains unperturbed. We assume that there is no direct energy exchange between S and Σ. Fundamentals of statistical mechanics require that the systems S and S_ad are statistically independent at thermal equilibrium.

Let us consider cases (i) and (ii) in detail.

§.§ Stochastic dynamics

Suppose ∇H(x)·ẋ=λ F_0(x,T), where λ is a constant. Without loss of generality, we can consider modified Hamiltonian dynamics in the form ẋ=J∇H(x)+ψ(x,λ), and consequently:

∇H(x)·ψ(x,λ)=λ F_0(x,T),

where the vector field ψ(x,λ) is to be found.
Since the thermal bath does not appear in equation (<ref>) explicitly, only stochastic thermal noise may be involved in the dynamics. To find ψ, we introduce the 2𝒩-vector of independent thermal white noises, ξ(t), such that ⟨ξ(t)⟩=0, ⟨ξ_i(t)ξ_j(t')⟩=2λ k_BTδ_ijδ(t-t'), and the vector field Φ(x) such that

⟨ξ(t)·Φ(x)⟩=λ k_BT ⟨∇·φ(x)⟩,

where ⟨⋯⟩ is the Gaussian average over all realizations of ξ(t). Using Novikov's formula <cit.>, we get

⟨ξ(t)·Φ(x)⟩ = ∑_i,k⟨∂Φ_k/∂ x_i δ x_i(t)/δξ_k(t)⟩ λ k_BT.

Suppose δ x_i(t)/δξ_k(t)=ζ_i(x)δ_ik, where the vector field ζ(x) is such that each component ζ_i(x) does not depend on x_i, that is,

∇∘ζ(x)=0,

where ∘ denotes the component-wise (Hadamard) product of two vectors and 0 is the null vector. Then ∇·φ(x)=∇·(ζ(x)∘Φ(x)). Thus, we get φ(x)=ζ(x)∘Φ(x) and it follows that Φ(x)=ζ^-1(x)∘φ(x), where ζ^-1(x) is the vector field such that ζ^-1(x)∘ζ(x)=1. Assuming φ(x)=η(x)∘∇H(x), where η(x)≡ζ(x)∘ζ(x), we get

ψ(x,λ)=-λη(x)∘∇H(x)+ζ(x)∘ξ(t),

and the modified Hamiltonian dynamics takes the form of a stochastic differential equation (SDE):

ẋ=J∇H(x)-λη(x)∘∇H(x)+ζ(x)∘ξ(t).

The Fokker-Planck equation (FPE) corresponding to SDE (<ref>) has the form ∂_tρ=ℱ^*ρ, where

ℱ^*ρ=-J∇H(x)·∇ρ+λ∇·[η(x)∘∇H(x) ρ]+λ k_BT∇·[η(x)∘∇ρ].

Note that the last term here was found using the following specific relationship for the vector field ζ(x): (ζ(x)∘∇)·(ζ(x)∘∇ρ)=∇·[η(x)∘∇ρ]. The invariant probability density for dynamics (<ref>) is determined by the equation ℱ^*ρ=0. It is expected that this is a unique invariant density <cit.>. We claim that for the vector field ζ(x) defined above the canonical density, ρ_∞∝exp[-β H(x)], is invariant for the stochastic dynamics given by (<ref>), that is, ℱ^*ρ_∞=0. The proof is by direct calculation.

The Langevin equation is a particular case of (<ref>). For example, for the system with H(x)=p^2/2m+V(q), where x=(p,q)∈ℝ^2, we have: if ζ=(1,0), then

ṗ=-V'(q)-λ p/m+ξ(t),   q̇=p/m;

if ζ=(0,1), then

ṗ=-V'(q),   q̇=p/m-λ V'(q)+ξ(t).

The procedure for obtaining the stochastic dynamics (<ref>) is essentially general and can be quite straightforwardly extended to other TEs, for example, the general scalar TE (<ref>). Indeed, let us introduce the set of 2𝒩-vectors of independent thermal white noises, {ξ(l;t)}_l=0^L, L∈ℤ_≥0, such that ⟨ξ(l;t)⟩=0, ⟨ξ_i(l;t)ξ_j(l';t')⟩=2λ_lk_BTδ_ijδ_ll'δ(t-t'), and the set of vector fields, {ζ(l;x)}_l=0^L, L∈ℤ_≥0, such that ∇∘ζ(l;x)=0 for any l≥0, where ∘ denotes the component-wise (Hadamard) product of two vectors and 0 is the null vector. Starting from the relationship

∇H(x)·ψ(x,λ)=∑_l=0^Lλ_lF_l(x,T) (k_BT)^2l,

and then strictly following the arguments stated above, we get

ψ(x,λ)=-∑_l=0^Lλ_lη(l;x)∘∇H(x) (k_BT)^2l+∑_l=0^Lζ(l;x)∘ξ(l;t) (k_BT)^l,

where η(l;x)≡ζ(l;x)∘ζ(l;x). Thus, we arrive at the following stochastic dynamics:

ẋ=J∇H(x)-∑_l=0^Lλ_lη(l;x)∘∇H(x) (k_BT)^2l+∑_l=0^Lζ(l;x)∘ξ(l;t) (k_BT)^l.

One can verify that the canonical measure is invariant for this stochastic equation of motion. Generally speaking, the dynamics (<ref>) includes 2𝒩(L+1) independent white noise processes. This seems impractical. However, we can point out that (<ref>) is potentially useful for multi-timescale stochastic simulations.
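The ζ=(1,0) special case above is just the usual underdamped Langevin thermostat. A minimal Euler-Maruyama sketch (not part of the original text; the quartic potential and the parameter values are arbitrary illustrative choices):

    import numpy as np

    # Euler-Maruyama steps of
    #   dp = (-V'(q) - lam*p/m) dt + sqrt(2*lam*kT) dW,   dq = (p/m) dt,
    # assuming the illustrative potential V(q) = q^4 / 4.
    rng = np.random.default_rng(1)
    m, lam, kT, dt, nsteps = 1.0, 1.0, 0.5, 1e-3, 500_000

    p, q, p2_sum = 0.0, 0.0, 0.0
    for _ in range(nsteps):
        dp = (-q**3 - lam * p / m) * dt + np.sqrt(2.0 * lam * kT * dt) * rng.normal()
        dq = (p / m) * dt
        p, q = p + dp, q + dq
        p2_sum += p * p

    # Equipartition check: <p^2/m> should approach kT = 0.5.
    print(p2_sum / nsteps / m)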
As a simple example, let H(x)=p^2/2m+V(q), L=1, ζ(0;x)=(1,0), and ζ(1;x)=(0,1); then we arrive at the stochastic dynamics with two timescales involved,

ṗ=-V'(q)-λ_0p/m+ξ_p(0;t),   q̇=p/m-λ_1(k_BT)^2 V'(q)+k_BT ξ_q(1;t),

where ⟨ξ_p(0;t)⟩=0, ⟨ξ_q(1;t)⟩=0, ⟨ξ_p(0;t)ξ_q(1;t)⟩=0, ⟨ξ_p(0;t)ξ_p(0;t')⟩=2λ_0k_BTδ(t-t'), ⟨ξ_q(1;t)ξ_q(1;t')⟩=2λ_1k_BTδ(t-t'), as specified above. Analysis of the p- and q-dynamics can be performed in reduced systems following the separation of these variables according to their time scales <cit.>.

§.§ Deterministic and stochastic dynamics

Let S_ad be associated with an even-dimensional phase space ℳ_ad, the Hamiltonian function h(y), y∈ℳ_ad, and the Hamiltonian dynamics ẏ=J_y∇_yh(y), where J_y is the symplectic unit. Without loss of generality, we can assume that the modified Hamiltonian dynamics of the system composed of S and S_ad has the form

ẋ=J_x∇_xH(x)+ψ(x,y),   ẏ=J_y∇_yh(y)+ψ^∗(y,x),

where ψ(x,y) and ψ^*(y,x) are vector fields on ℳ and ℳ_ad correspondingly. To derive deterministic dynamics, let us temporarily ignore the heat exchange between S_ad and Σ, that is, ∇_yh(y)·ψ^∗(y,x)=λ^∗F^∗(y,T) and ∇_xH(x)·ψ(x,y)=λ F(x,T). As discussed above, these relationships lead to the stochastic dynamics. The systems S and S_ad must be statistically independent in thermal equilibrium, so that ∇_xH(x)·ẋ∼0 and ∇_yh(y)·ẏ∼0 are satisfied simultaneously. Thus, we assume that

∇_xH(x)·ψ(x,y)=g(x)F_0^∗(y,T),   ∇_yh(y)·ψ^∗(y,x)=-g^∗(y)F_0(x,T),

where g(x) and g^∗(y) are as yet unspecified functions, and

F_0(x,T)=φ(x)·∇_xH(x)-k_BT ∇_x·φ(x),   F_0^∗(y,T)=Q(y)·∇_yh(y)-k_BT ∇_y·Q(y)

are TEs for the systems S and S_ad correspondingly. These relationships are valid for any H(x) and h(y). To specify ψ(x,y) and ψ^*(y,x), we assume that g(x)=a(x)·∇_xH(x), g^∗(y)=b(y)·∇_yh(y), where a(x) and b(y) are vector fields on ℳ and ℳ_ad, respectively. It follows that

ψ(x,y)=a(x) F_0^∗(y,T),   ψ^∗(y,x)=b(y) F_0(x,T).

To determine the relationship between the vector fields a(x), b(y) and the TEs F_0(x,T), F_0^∗(y,T), recall that if the combined system S+S_ad is isolated, then Ḣ(x)=-ḣ(y); and if T≠0, then Ḣ(x)+ḣ(y)∼0. Straightforward calculations show that

a(x)=φ(x),   b(y)=Q(y),

provided that b(y)exp[-β h(y)]→0 as |y|→∞ and a(x)exp[-β H(x)]→0 as |x|→∞. As a result, we have the equations of motion

ẋ=J_x∇_xH(x)+F_0^∗(y,T)φ(x),   ẏ=J_y∇_yh(y)-F_0(x,T)Q(y),

which are generalized NH equations. The Liouville equation associated with the system (<ref>) has the form ∂_tρ=-ℒ^*ρ, where ℒ^*ρ=∇_x·(ẋρ)+∇_y·(ẏρ). Invariant probability densities are determined by the equation ℒ^*ρ=0. We claim that if Q(y) and φ(x) are the vector fields defined above, then the canonical density ρ_∞∝exp[-β H(x)]·exp[-β h(y)] is invariant for dynamics (<ref>), that is, ℒ^*ρ_∞=0. The proof is by direct calculation.

As a particular case, let Q(y) be an incompressible vector field (i.e. ∇_y·Q(y)=0 for all y∈ℳ_ad). Then we arrive at the NH equations

ẋ=J_x∇_xH(x)+(Q(y)·∇_yh(y))φ(x),   ẏ=J_y∇_yh(y)-F_0(x,T)Q(y).

Now we include in our consideration the effect of the thermal bath Σ on the S_ad dynamics, that is, the relationship ∇_yh(y)·ψ^∗=λ F_0^∗(y,T). Following the arguments and notations used to derive SDE (<ref>), we arrive at the stochastic dynamics

ẋ=J_x∇_xH(x)+F_0^∗(y,T) φ(x),   ẏ=J_y∇_yh(y)-F_0(x,T) Q(y)-λη(y)∘∇_yh(y)+ζ(y)∘ξ(t),

which are generalized NHL equations <cit.>.
In the particular case of an incompressible vector field Q(y) we get the NHL equations:

ẋ=J_x∇_xH(x)+(Q(y)·∇_yh(y)) φ(x),   ẏ=J_y∇_yh(y)-F_0(x,T) Q(y)-λη(y)∘∇_yh(y)+ζ(y)∘ξ(t).

The FPE corresponding to (<ref>) has the form ∂_tρ=ℱ^*ρ, where

ℱ^*ρ=-J_x∇_xH(x)·∇_xρ-J_y∇_yh(y)·∇_yρ-F_0^∗(y,T)∇_x·[φ(x)ρ]+F_0(x,T)∇_y·[Q(y)ρ]+λ k_BT∇_y·[η(y)∘∇_yρ]+λ∇_y·[η(y)∘∇_yh(y)ρ].

The invariant probability density for the SDE (<ref>) is determined by the equation ℱ^*ρ=0. We claim that if Q(y), φ(x), and ζ(y) are the vector fields defined above, then the canonical density, ρ_∞∝exp[-β H(x)]·exp[-β h(y)], is invariant for the NHL dynamics (<ref>), that is, ℱ^*ρ_∞=0. The proof is by direct calculation. Besides, we expect that this dynamics is ergodic <cit.>.

Commonly used NH <cit.> and NHL <cit.> thermostats are particular cases of the thermostats given by (<ref>) and (<ref>) correspondingly. For example, by substituting ζ^2/2Q for h(y), y=(ζ,η)∈ℝ^2, (-Q,0) for Q(y) and (𝐩,0) for φ(x) in (<ref>), we get the classical NH equations <cit.>.

It is worth noting that the case of the general TE can be considered straightforwardly following the method of the dynamic principle, as developed above. Assume that

∇_xH(x)·ψ(x,y)=∑_l=0^L g_l(x)F_l^∗(y,T) (k_BT)^2l,   ∇_yh(y)·ψ^∗(y,x)=-∑_l=0^L g_l^∗(y)F_l(x,T) (k_BT)^2l.

These relationships must be valid for any H(x) and h(y). To specify ψ(x,y) and ψ^*(y,x), we set g_l(x)=a_l(x)·∇_xH(x), g_l^∗(y)=b_l(y)·∇_yh(y), from which it follows that a_l(x)=φ_l(x) and b_l(y)=Q_l(y). Thus,

ψ(x,y)=∑_l=0^L F_l^∗(y,T) (k_BT)^2lφ_l(x),   ψ^∗(y,x)=-∑_l=0^L F_l(x,T) (k_BT)^2lQ_l(y).

Finally, we arrive at the deterministic equations of motion (modified Hamiltonian dynamics),

ẋ=J_x∇_xH(x)+∑_l=0^L F_l^∗(y,T) (k_BT)^2lφ_l(x),   ẏ=J_y∇_yh(y)-∑_l=0^L F_l(x,T) (k_BT)^2lQ_l(y).

We will not discuss equations (<ref>) in detail and only note that the canonical measure is invariant for this dynamics, and a generalization to stochastic NHL-type dynamics can be obtained. Strictly speaking, such a generalization is important since it simulates an equilibrium reservoir of energy and ensures the ergodicity of the dynamics. To outline a connection between the equations of motion (<ref>) and known deterministic thermostats <cit.>, we provide the following simple example. Let L=1, H(x)=p^2/2m+V(q), h(y)=η_0^2/2Q_0+η_1^2/2Q_1, φ_0(x)=(p,0), φ_1(x)=(p^3,0), Q_0(y)=(-Q_0,0,0,0), and Q_1(y)=(0,-Q_1,0,0); then

ṗ=-V'(q)-η_0p-η_1k_BT p^3,   q̇=p/m,   η̇_0=Q_0(p^2/m-k_BT),   η̇_1=Q_1(p^4/m-3k_BTp^2)(k_BT)^2,

the dynamic equations equipped with control of the first two moments of the equilibrium kinetic energy <cit.>. Similarly, we can obtain dynamic equations that control the configurational temperature moments.

§ REDESIGN OF NHL THERMOSTAT

In this Section we consider an alternative to the conventional NH and NHL thermostat schemes. This alternative (seen as a particular case of the dynamical equations (<ref>)) is based on the consideration of a physically reasonable chain of interactions, S↭ S_ad↭Σ, that is, the system S_ad is a buffer between the physical system S and the infinite energy reservoir Σ. Consider the dynamical equations (<ref>) and (<ref>), and assume that ∇_x·φ(x)=0, ∇_y·Q(y)≠0. Note that these assumptions are opposite to the requirements for the NH and NHL dynamics, where ∇_x·φ(x)≠0, ∇_y·Q(y)=0. We get

ẋ=J_x∇_xH(x)+[Q(y)·∇_yh(y)-k_BT ∇_y·Q(y)] φ(x),   ẏ=J_y∇_yh(y)-(φ(x)·∇_xH(x))Q(y),

and

ẋ=J_x∇_xH(x)+[Q(y)·∇_yh(y)-k_BT ∇_y·Q(y)] φ(x),   ẏ=J_y∇_yh(y)-(φ(x)·∇_xH(x)) Q(y)-λη(y)∘∇_yh(y)+ζ(y)∘ξ(t),

where the vector fields involved are as indicated above.
Thus, there is plenty of freedom in the specification of particular thermostat equations of motion. To illustrate the redesigned NH and NHL thermostat dynamical systems (described by the equations (<ref>) and (<ref>) correspondingly) let us consider the system S with the Hamiltonian function H(p,q),

H(p,q)=p^2/2m+(1/2)mω^2q^2,   x=(p,q)∈ℝ×ℝ,

that is, a harmonic oscillator of mass m and frequency ω, and the system S_ad with the Hamiltonian function h(v,u),

h(v,u)=v^2/2μ,   y=(v,u)∈ℝ×ℝ,

that is, a free particle of mass μ. Harmonic oscillators are among the central instruments in the analysis of many physical problems, classical as well as quantum mechanical. It is known that generating the canonical statistics for a harmonic oscillator is a hard problem. For example, the NH scheme is proven to be non-ergodic <cit.>, and the NHL scheme <cit.>, and earlier the NHC scheme <cit.>, were proposed to overcome this difficulty. Anyway, it is important for any dynamic thermostat to correctly generate the canonical statistics for a harmonic oscillator.

The deterministic thermostat dynamics (<ref>) as well as the stochastic dynamics (<ref>) allow a plethora of further specifications. To be as close as possible to a redesign of the original NH dynamics <cit.>, we set Q(y)=(v,0), ∇·Q=1, and φ(x)=(γ,0), where γ is a dimensional parameter, ∇·φ=0. Thus, we arrive at the following equations of motion:

ṗ=-mω^2q+γ[v^2/μ-k_BT],   q̇=p/m,   v̇=-γ(p/m)v,   u̇=v/μ;

and

ṗ=-mω^2q+γ[v^2/μ-k_BT],   q̇=p/m,   v̇=-γ(p/m)v-λ v/μ+ξ(t),   u̇=v/μ;

where ζ=(1,0) and ⟨ξ(t)ξ(t')⟩=2λ k_BTδ(t-t'). Note that equations (<ref>) and (<ref>) are redesigns of the NH (denote RNH) and NHL (RNHL) thermostats correspondingly.

System (<ref>) has two integrals of motion, that is,

I_1=v exp(γ q)=const,   I_2=p^2/2m+(1/2)mω^2q^2+v^2/2μ+γ k_BT q=const,

indicating the lack of ergodicity. For example, if all parameters of the system (<ref>) are set equal to unity, m=1, ω=1, μ=1, γ=1, k_BT=1, and the initial conditions are p=1, q=0, v=1, then the phase trajectory is represented by the closed curve and the Poincaré section (p,q) shown on Figure <ref>. This is expected from the existence of the two integrals of motion, I_1 and I_2. It is clear that the trajectory does not explore the phase space available for the harmonic oscillator. This ergodicity problem is not surprising; the conventional NH dynamics suffers from the same problem. It is questionable that the situation can be improved with a more complex φ and Q, for example, φ=(γ_1,γ_2), φ=(γ_1mω^2q,γ_2(1/m)p), Q=(v,u), and so on. If φ=(γ_1,γ_2), then we get

ṗ=-mω^2q+γ_1[v^2/μ-k_BT],   q̇=p/m+γ_2[v^2/μ-k_BT],   v̇=-(γ_1p/m+γ_2mω^2q)v,   u̇=v/μ,

and it is easy to show that this dynamics is not ergodic.

Our next illustration is devoted to the system described by the thermostat dynamical equations (<ref>). We will show, by means of numerical simulations, that a certain realization of the whole-length chain of physically reasonable interactions, that is, S↭ S_ad↭Σ, generates the correct statistics.

Let us consider the case when all parameters of the system (<ref>) are set equal to unity, m=1, ω=1, μ=1, γ=1, k_BT=1, λ=1, and the initial conditions are p=0, q=0, v=0. Phase trajectories of length 10^6 are generated using the Euler method with a time step of Δ t=0.0005. We have repeated the simulations using the fourth-order Runge-Kutta method with the random contribution held fixed over the entire interval from t to t+Δ t, and arrive at the same result. Figure <ref>(a) shows the Poincaré section (p,q) for a harmonic oscillator equipped with the temperature control tool (<ref>).
This figure demonstrates that the trajectory generates proper sampling of the full phase space of the harmonic oscillator. Figures <ref>(b) and <ref>(c) show the momentum and position distribution functions from simulations as compared with the exact analytical expressions. In both cases the Gaussian distribution is generated, in agreement with the theoretical prediction. The presented results serve as evidence of ergodic sampling of the canonical statistics. A key difference between the NHL and RNHL schemes is that the latter relates the temperature control tool to the system S_ad rather than to the system S, and the corresponding variable, v, must be Gaussian, according to the equations (<ref>). Thus, it is important that the RNHL dynamical equations properly generate the Gaussian statistics of the v variable. Figure <ref> shows the v-distribution function from simulations as compared with the exact analytical solution and indicates a good agreement between them.

§ CONCLUSION

In conclusion, we emphasize that the method proposed in this work is based on the fundamental laws of statistical physics and offers a unified approach to developing stochastic and deterministic thermostats. For clarity of presentation we have illustrated our method using a few simple TEs and restricted our consideration to Markov dynamics. The presented method has allowed us to obtain a wide spectrum of stochastic and deterministic dynamical systems with an invariant canonical measure. We note that the idea of the presented method is general and adaptable to a variety of TEs, so that it can be used to produce thermostats of novel types, for example a thermostat for a system with non-Markov dynamics, i.e. one described by the equation ∇H(x(t))·ẋ(t)∝∫_0^t dt' G(t-t')F(x(t'),T). As a second example of a new type of thermostat we can mention the one for a gradient dynamical system. We realize that non-trivial new thermostats should be verified by test simulations. In our follow-up work we will focus on these and other applications of the presented method.

This work has been supported by the BBSRC grant BB/K002430/1 to BV.
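For completeness, here is a compact sketch (not part of the original text) of the RNHL harmonic-oscillator experiment described in the previous section, with all parameters equal to unity and the time step Δ t=0.0005 used in the text; the Euler-Maruyama discretization and the seed are illustrative choices:

    import numpy as np

    # RNHL equations for the harmonic oscillator + free particle:
    #   dp = (-m*w^2*q + g*(v^2/mu - kT)) dt
    #   dq = (p/m) dt
    #   dv = (-g*(p/m)*v - lam*v/mu) dt + sqrt(2*lam*kT) dW
    rng = np.random.default_rng(2)
    m = w = mu = g = kT = lam = 1.0
    dt, nsteps = 5e-4, 1_000_000

    p = q = v = 0.0
    samples = np.empty((nsteps, 3))
    for i in range(nsteps):
        dW = np.sqrt(dt) * rng.normal()
        dp = (-m * w**2 * q + g * (v**2 / mu - kT)) * dt
        dq = (p / m) * dt
        dv = (-g * (p / m) * v - lam * v / mu) * dt + np.sqrt(2.0 * lam * kT) * dW
        p, q, v = p + dp, q + dq, v + dv
        samples[i] = p, q, v

    # At equilibrium p, q and v should all be ~ N(0, kT) here.
    print(samples.mean(axis=0), samples.var(axis=0))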
http://arxiv.org/abs/1702.08399v3
{ "authors": [ "A. Samoletov", "B. Vasiev" ], "categories": [ "physics.data-an" ], "primary_category": "physics.data-an", "published": "20170227174740", "title": "Dynamic principle for ensemble control tools" }
For the family of polynomials in one variable

P:=x^n+a_1x^n-1+⋯ +a_n,   n≥ 4,

we consider its higher-order discriminant sets {D̃_m=0}, where D̃_m:=Res(P,P^(m)), m=2, …, n-2, and their projections in the spaces of the variables a^k:=(a_1,… ,a_k-1,a_k+1,… ,a_n). Set P^(m):=∑_j=0^n-m c_ja_jx^n-m-j, P_m,k:=c_kP-x^mP^(m). We show that Res(D̃_m,∂D̃_m/∂ a_k,a_k)=A_m,kB_m,kC_m,k^2, where A_m,k=a_n^n-m-k, B_m,k=Res(P_m,k,P_m,k') if 1≤ k≤ n-m, and A_m,k=a_n-m^n-k, B_m,k=Res(P^(m),P^(m+1)) if n-m+1≤ k≤ n. The equation C_m,k=0 defines the projection in the space of the variables a^k of the closure of the set of values of (a_1,… ,a_n) for which P and P^(m) have two distinct roots in common. The polynomials B_m,k, C_m,k∈ℂ[a^k] are irreducible. The result is generalized to the case when P^(m) is replaced by a polynomial P_*:=∑_j=0^n-m b_ja_jx^n-m-j, 0≠ b_i≠ b_j≠ 0 for i≠ j.

AMS classification: 12E05; 12D05

Key words: polynomial in one variable; discriminant set; resultant; multiple root

§ INTRODUCTION

In this paper we consider for n≥ 4 the general family of monic polynomials in one variable P(x,a):=x^n+a_1x^n-1+⋯ +a_n, x, a_j∈ℂ. For its mth derivative w.r.t. x we set

P^(m):=c_0x^n-m+c_1a_1x^n-m-1+⋯ +c_n-ma_n-m,

where c_j=(n-j)!/(n-m-j)!. For m=1, …, n-1 we define the mth order discriminant of P as D̃_m:=Res(P,P^(m)), which is the determinant of the Sylvester matrix S(P, P^(m)). We remind that S(P, P^(m)) is (2n-m)×(2n-m), its first (resp. (n-m+1)st) row equals

(1,a_1,… ,a_n,0,… ,0)   (resp. (c_0,c_1a_1,… ,c_n-ma_n-m,0,… ,0)),

the second (resp. (n-m+2)nd) row is obtained from this one by shifting by one position to the right and by adding 0 to the left, etc. We say that the variable a_j is of quasi-homogeneous weight j because up to a sign it equals the jth elementary symmetric polynomial in the roots of the polynomial P; the quasi-homogeneous weight of x is 1.

There are at least two problems in which such discriminants are of interest. One of them is the Casas-Alvero conjecture that if a complex univariate polynomial has a root in common with each of its nonconstant derivatives, then it is a power of a linear polynomial, see <cit.>, <cit.> and <cit.> and the claim in <cit.> that the answer to the conjecture is positive. Another one is the study of the possible arrangements of the roots of a hyperbolic polynomial (i.e.
real and with all roots real) and of all its nonconstant derivatives on the real line. This problem can be generalized to a class of polynomial-like functions characterized by the property that their nth derivative vanishes nowhere. It turns out that for this class Rolle's theorem gives only necessary, but not sufficient, conditions for realizability of a given arrangement by the zeros of a polynomial-like function, see <cit.>, <cit.>, <cit.> and <cit.>. Pictures of discriminants for the cases n=4 and n=5 can be found in <cit.>. Properties of the discriminant set {D̃_1=0} for real polynomials are proved in <cit.>. A closely related question to the one of the arrangement of the roots of a hyperbolic polynomial is the study of overdetermined strata in the space of the coefficients of the family of polynomials P (the definition is given by B. Z. Shapiro in <cit.>); these are sets of values of the coefficients for which there are more equalities between roots of the polynomial and its derivatives than expected. Example: the family of polynomials x^4+ax^3+bx^2+cx+d depends on 4 parameters, two of which can be eliminated by shifting and rescaling the variable x, which gives (up to a nonzero constant factor) the family S:=x^4-x^2+cx+d. For c=0, d=1/2 the polynomial has two double roots ± 1/√(2), and 0 is a common root for S' and S”'. This makes three independent equalities, i.e. more than the number of parameters. For polynomials of small degree, overdetermined strata have been studied in <cit.> and <cit.>. The study of overdetermined strata is interesting both in the case of complex and in the case of real coefficients.

In what follows we enlarge the context by considering instead of the couple of polynomials (P,P^(m)) the couple (P,P_*), where P_*:=∑_j=0^n-m b_ja_jx^n-m-j, b_j≠ 0 and b_i≠ b_j for i≠ j. By abuse of notation we set D̃_m:=Res(P,P_*).

Proposition. The polynomial D̃_m is irreducible. It is a degree n polynomial in each of the variables a_j, j=1, …, n-m, and a degree n-m polynomial in each of the variables a_j, j=n-m+1, …, n. It contains the monomials M_j:=± b_j^na_j^n(1-b_0/b_j)^ja_n^n-m-j, j=1, …, n-m, and N_s:=± b_n-m^m-sa_n-m^m-sb_0^n-m+sa_n-m+s^n-m, s=1, …, m-1. It is quasi-homogeneous, of quasi-homogeneous weight n(n-m). The monomial M_j (resp. N_s) is the only monomial containing a_j^n (resp. a_n-m+s^n-m).

Proof. We prove first the presence in D̃_m of the monomials M_j and N_s. For each j fixed, 1≤ j≤ n-m, one can subtract the (n-m+ν)th row of S(P,P_*) multiplied by 1/b_j from its νth one, ν =1, …, n-m. We denote by T the new matrix. One has det T=det S(P,P_*), and the variable a_j is not present in the first n-m rows of T. Thus there remains a single term of det T containing n factors a_j; it is obtained when the entries b_ja_j in positions (n-m+μ,j+μ) of T, μ =1, …, n, are multiplied by the entries a_n in positions (ℓ,n+ℓ), ℓ =j+1, …, n-m, and by the entries 1-b_0/b_j in positions (ℓ,ℓ), ℓ =1, …, j; this gives the monomial M_j. (If when computing det S(P,P_*) one chooses to multiply the n entries b_ja_j, then they must be multiplied by entries of the matrix obtained from S(P,P_*) by deleting the rows and columns of the entries b_ja_j. This matrix is block-diagonal, its upper left block is upper-triangular, with diagonal entries equal to 1-b_0/b_j, and its lower right block is lower-triangular, with diagonal entries equal to a_n. Hence M_j is the only monomial containing n factors a_j.) To obtain the monomial N_s one chooses in the definition of T above j=n-m. Hence the first n-m rows of T do not contain the variable a_n-m.
The monomial N_s isobtained by multiplying the entries a_n-m+s in positions(r,n-m+s+r), r=1, …, n-m, by the entries b_n-ma_n-m inpositions (q,q), q=2n-2m+s+1, …, 2n-m and by the entries b_0in positions (n-m+p,p), p=1, …, n-m+s. The monomial N_sis the only one containing n-m factors a_n-m+s (proved by analogy withthe similar claim about the monomial M_j).The matrix S(P,P_*) contains each of the variables a_j, j=1, …,n-m (resp. a_s, s=n-m+1, …, n)in exactly n (resp. n-m) of its columns.The presence of the monomials M_j (resp. N_s) in D̃_m shows thatD̃_m is a degree n polynomial in the variables a_j and a degreen-m one in the variables a_s. Quasi-homogeneity of D̃_m follows from the fact that its zero setand the zero sets of the polynomials P and P_* remaininvariant under the quasi-homogeneous dilatations x↦ tx,a_κ↦ t^κa_κ, κ =1, …, n.Each of the monomials M_j and N_s is of quasi-homogeneous weight n(n-m). Irreducibility of D̃_m results from the impossibility to presentsimultaneously all monomials M_j and N_s as products of two monomials,of quasi-homogeneous weights u and n(n-m)-u,for any 1≤ u≤ n(n-m)-1. For Q,R∈ℂ[x] we denote by Res(Q,R) the resultant of Qand R and we write P^(m) for d^mP/dx^m.This refers also to the case when the coefficients of Q and Rdepend on parameters. We set a:=(a_1,… ,a_n) (resp.a^j=(a_1,… ,a_j-1,a_j+1,… ,a_n)) and we denote by𝒜≃ℂ^n (resp. 𝒜^j≃ℂ^n-1)the space of the variables a (resp. a^j).For K,L∈ℂ[a] we write S(K,L,a_k) and Res(K,L,a_k)for the Sylvester matrix and the resultant of K and Lwhen considered as polynomials in a_k. We setD̃_m,k:=Res(D̃_m,∂D̃_m/∂ a_k,a_k).For a matrix A we denote by A_k,ℓ its entry in position (k,ℓ )and by [A]_k,ℓ the matrix obtained from Aby deleting its kth row and ℓth column. By Ω(indexed, with accent or not) we denote throughout the papernonspecified nonzero constants.By P_m,k (1≤ k≤ n-m) we denote the polynomial b_kP-x^mP_*; its coefficients of x^n and x^k equal b_k-b_0≠ 0 and 0. For 1≤ m≤ n-2 we denote by Θ and M̃the subsets of the hypersurface{D̃_m=0}⊂𝒜 such that for a∈Θ (resp.for a∈M̃) the polynomial P has a root which is a doubleroot of P_* (resp. the polynomials P and P_* have twosimple roots in common). The remaining roots of P and P_* are presumedsimple and mutually distinct. We call the set M̃the Maxwell stratum of {D̃_m=0}.In the present paper we prove the following theorem; Suppose that 2≤ m≤ n-2. Then:(1) The polynomial D̃_m,k can be represented in the formD̃_m,k=A_m,kB_m,kC_m,k^2 , where A_m,k=a_n^n-m-k if k=1, …, n-m, andA_m,k=a_n-m^n-k if k=n-m+1, …, n,B_m,k and C_m,k are irreduciblepolynomials in the variables a^k.(2) One has B_m,k=Res(P_m,k,P_m,k') if k=1, …, n-m, andB_m,k=Res(P_*,P_*') if k=n-m+1, …, n.(3) The equation C_m,k=0 defines the projection in the space 𝒜^kof the closure of the Maxwell stratum.The paper is structured as follows. After some examples and remarks inSection <ref>, we justify in Section <ref> the form of the factor A_m,k, seeProposition <ref>; Section <ref> begins with Lemma <ref>which gives the form of the determinant ofcertain matrices that appear in the proof of Theorem <ref>.Section <ref> contains Lemma <ref> and Statements <ref>, <ref>and <ref> (the latter claims that the factors B_m,k and C_m,kare irreducible). They imply that one hasD̃_m,k=A_m,kB_m,k^s_m,kC_m,k^r_m,k, where s_m,k, r_m,k∈ℕ, seeRemark <ref>. Thus after Section <ref> thereremains to show only thats_m,k=1 and r_m,k=2.In Section <ref> we prove Theorem <ref>in the case m=n-2, see Proposition <ref>. 
In Section <ref>we show that s_m,k=1. We finish the proofof Theorem <ref> in Section <ref>, by induction on nand m, as follows. Statement <ref> deduces formula (<ref>)for n=n_0+1, k=k_0+1 from formula (<ref>)for n=n_0, k=k_0. Statement <ref> justifies formula (<ref>)for n=n_0, 2≤ m<n_0-2, k=1using formula (<ref>) for n=n_0, m=n_0-2, k=1(recall that the latter is justified in Section <ref>).Acknowledgement. The author is deeply grateful to B. Z. Shapiro fromthe University of Stockholm for havingpointed out to him the importance to study discriminantsand for the fruitful discussions of this subject. § EXAMPLES AND REMARKSAlthough Theorem <ref> speaks about the case 2≤ m≤ n-2, our first example treatsthe case m=1 in order to show its differences with the case2≤ m≤ n-2: For n=3, m=1 we set P:=x^3+ax^2+bx+c, P_*:=x^2+Aax+Bb,0≠ A,B≠ 1, A≠ B. Then[D̃_1 = (1-A)B(B-A)a^2b^2+(3AB-A-2B)abc+c^2+ A^2(1-A)a^3c+B(1-B)^2b^3;;D̃_1,1 = -A^2(A-1)^2c(-27A^2(1-A)c^2+4(A-B)^3b^3) (-Ac^2+(1-A)B^2(1-B)b^3)^2;;D̃_1,2 =-B^2(B-1)^2c(-27B(1-B)^2c+4(A-B)^3a^3)(-(1-B)c+A(1-A)^2Ba^3)^2;;D̃_1,3 =-(-4Bb+A^2a^2)((1-B)b-A(1-A)a^2)^2 . ]The condition P and P_* to have two roots in common istantamount to P_* dividing P. One has P=(x+a(1-A))P_*+W_1x+W_0, where W_1:=(1-B)b-A(1-A)a^2  ,  W_0:=c-B(1-A)ab .The quadratic factors in the above presentations of D̃_1,k,k=1, 2 and 3, are obtained by eliminating respectively a, b and cfrom the system of equations W_1=W_0=0 which is the necessary and sufficientcondition P_* to divide P.In the particular case A=2/3, B=1/3 (i.e. P_*=P'/3) one obtainsD̃_1,1=(-2^6/3^15)c(-27c^2+b^3)^3  ,  D̃_1,2=(-2^6/3^15)c(-27c+a^3)^3  ,  D̃_1,3=(2^4/3^6)(3b-a^2)^3 . (1) For n≥ 4, m=1 and P_*=P'a result similar to Theorem <ref> holdstrue. Namely, if n≥ 4, then D̃_1,kis of the form A_1,kB_1,k^3C_1,k^2, where for m=1 the polynomialsB_m,k and C_m,k are defined in the same way as for 2≤ m≤ n-2(with P_*=P'), but A_1,k=a_n^min (1,n-k)+max (0,n-k-2), see <cit.> and <cit.>.Hence for m=1 and P_*=P^(m)there are two differences w.r.t. the case m≥ 2 – the degree 3(instead of 1) ofB_1,k, and A_1,n-1=a_n (instead of A_1,n-1=1). This difference can beassumed to stemfrom the fact that for m=1, if P has a root of multiplicity ≥ 3, thenthis is a root of multiplicity ≥ 2 for P'. This explanation isdetailed below and in Remark <ref>.For n=4 and for generic values of b_j the polynomialsD̃_1,k, up to a constant nonzero factor, are of the form[ D̃_1,1= (b_1/b_0)^3(1-b_1/b_0)^2a_4^2 B̃_1,1C̃_1,1^2, D̃_1,2=-(b_2/b_0)^2(1-b_2/b_0)^2a_4 B̃_1,2C̃_1,2^2,; ; D̃_1,3=-(b_3/b_0)^2(1-b_3/b_0)^3a_4 B̃_1,3C̃_1,3^2, D̃_1,4= B̃_1,4C̃_1,4^2, ]where the polynomials B̃_1,k and C̃_1,k, whenconsidered as polynomials in the variables a_j and b_j, are irreducible.Set b_1=3b_0/4, b_2=b_0/2, b_3=b_0/4. This is the case P_*=P'; wewrite B̃_1,k|_b_1=3b_0/4, b_2=b_0/2, b_3=b_0/4=B_1,k andC̃_1,k|_b_1=3b_0/4, b_2=b_0/2, b_3=b_0/4=C_1,k. In this casethe polynomials C̃_1,k become reducible; they equalB_1,kC_1,k which explains the presence of the cubic factor B_1,k^3.Thus for m=1 the genericity condition0≠ b_j≠ b_i≠ 0 (which we assume to hold truein the formulation of Theorem <ref>) is not sufficient in orderto have the presentation (<ref>) for D̃_m,k. At the sametime imposing a more restrictive condition means leaving outside themost interesting case P_*=P'.(2) For m=n-1the analog of the factor C_m,k does not exist because P_* has a singleroot -b_1/b_0. 
For P_*=P^(n-1):=n!(x+a_1/n) this is x=-a_1/n.In this case one finds thatD̃_n-1=(-1)^n(n!)^nP(-a_1/n).To see this one subtractsfor j=1, …, n the jth column of the Sylvester matrixS(P,x+a_1/n) multiplied by -a_1/n from its (j+1)st column. This yieldsan (n+1)× (n+1)-matrix W whose entry in position (1,n+1) equalsP(-a_1/n) and which below the first row has units in positions(ν +1,ν ), ν =1, …, n, and zeros elsewhere.Hence W=(-1)^nP(-a_1/n). There remains to remind thatD̃_n-1= S(P,n!(x+a_1/n))=(n!)^n W.One finds directly thatD̃_n-1,k=∂D̃_n-1/∂ a_k= (-1)^n(n!)^n(-a_1/n)^n-k,2≤ k≤ n. To find also D̃_n-1,1 one first observes that P_n-1,1(x)/(n-1)!=-(n-1)x^n+a_2x^n-2+a_3x^n-3+⋯ +a_n and thatP(-a_1/n)=P_n-1,1(-a_1/n)/(n-1)!. Hence up to a nonzero rational factor thedeterminants of the matrices S(P_n-1,1,P_n-1,1') andS(D̃_n-1,∂D̃_n-1/∂ a_1,a_1) coincide, i.e.D̃_n-1,1=ĉRes(P_n-1,1,P_n-1,1'),ĉ∈ℚ.(3) The fact that the factor C_m,k is squared (see formula (<ref>))is not astonishing. At a generic point of the Maxwell stratum the hypersurface{D̃_m=0}⊂𝒜 is locally the intersectionof two analytic hypersurfaces, see Statement <ref>. Consider a pointΨ∈𝒜^k close to the projection Λ _0in 𝒜^k of a genericpoint Λ∈M̃. There exist two pointsK_j∈{D̃_m=0},j=1, 2, which belong to these hypersurfaces and are close to Λ, andwhose common projection in 𝒜^kis Ψ. There existsa loop γ⊂𝒜^k, Ψ∈γ,whichcircumvents the projection in 𝒜^kof the set Θ∪M̃ such that if one followsthe two liftings on {D̃_m=0} of the points of γ whichat Ψ are the points K_j, then upon one tour along γ theseliftings are exchanged. Hence in order to define the projection of M̃ in𝒜^k by the zeros of an analytic function one has toeliminate this monodromy of rank 2 by taking the square of C_m,k. Forthe case m=1 a detailed construction of such a path γ is givenin <cit.>.For n=4 we consider the case of real polynomials.We write P=x^4+ax^3+bx^2+cx+d and we limit ourselves tothe situation when P_*:=P^(m).On Fig. <ref> we show the sets {D̃_1=0} |_a=0and {D̃_2=0} |_a=0 when b, c and d are real.The sets {D̃_1=0} |_a=0 and {D̃_2=0} |_a=0are invariant under thequasi-homogeneous dilatations a↦ ta, b↦ t^2b, c↦ t^3c,d↦ t^4d, therefore the intersections of the sets with the subspaces{ b=0} and { b=± 1} give a sufficient idea about them. For each ofthese three intersections we represent the axes c and d, see Fig. <ref>.For b=-1 the set {D̃_1=0} |_a=0 is a curve with oneself-intersection point at S and two ordinary 2/3-cusps at U and V;it is drawn in solid line. At U and V the polynomial Phas one triple and one simple real root. The set{D̃_2=0} |_a=0,b=-1 consists of twostraight (dashed) lines intersecting at H and tangent to the set{D̃_1=0} |_a=0 at the cusps U and V.The sets {D̃_1=0} |_a=0,b=0 and {D̃_1=0} |_a=0,b=1 areparabola-like curves, the former has a 4/3-singularity at the origin whilethe latter is smooth everywhere.The set {D̃_1=0} |_a=0,b=1 containsan isolated double point T.The set {D̃_2=0} |_a=0,b=0 (resp.{D̃_2=0} |_a=0,b=1) is the c-axis (resp. the point L).The points S, T and for b=0 the originbelong to a parabola (because the quasi-homogeneous weights of thevariables a_2 and a_4 equal 2 and 4 respectively).So do the points H, Land the origin for b=0. At S (resp. T) the polynomial P has tworeal (resp. two imaginary conjugate) double roots. At H and L thepolynomial P is divisible by P”.Globally the set {D̃_2=0} |_a=0 is diffeomorphicto a Whitney umbrella. 
§ THE FACTOR A_M,K

The following lemma will be used in several places of this paper:

Lemma. Consider a p× p matrix A having nonzero entries only (i) on the diagonal (denoted by r_j, in positions (j,j), j=1, …, p); (ii) in positions (ν,ν+s), ν =1, …, p-s (denoted by q_ν), 1≤ s≤ p-1; and (iii) in positions (μ+p-s,μ), μ =1, …, s (denoted by q_μ+p-s). Then det A=r_1⋯ r_p± q_1⋯ q_p.

Proof. Developing det A w.r.t. its first row one obtains the equality

det A=r_1 det B+(-1)^s+2q_1 det C,   where B=[A]_1,1 and C=[A]_1,s+1.

The matrix B contains p-1 entries r_j (namely, r_2, …, r_p) and p-2 entries q_ν (the ones with 1≠ν≠ 1+p-s). In the same way, the matrix C contains p-2 entries r_j (1≠ j≠ s+1) and p-1 entries q_ν (ν≠ 1). When finding det B one can develop it w.r.t. that row or column in which there is an entry r_j and there is no entry q_ν. By doing so p-1 times one finds that det B=r_2⋯ r_p. The + sign of this product follows from the entries r_j being situated on the diagonal. When finding det C one can develop it w.r.t. that row or column in which there is an entry q_ν and there is no entry r_j. By doing so p-1 times one finds that det C=± q_2⋯ q_p, which proves the lemma.
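A quick numerical illustration of the lemma (not part of the paper); the sizes p=7, s=3 are an arbitrary choice:

    import numpy as np

    # Random nonzero entries in the three patterns allowed by the lemma.
    rng = np.random.default_rng(3)
    p, s = 7, 3
    r = rng.uniform(1, 2, p)
    q = rng.uniform(1, 2, p)
    A = np.diag(r)
    for nu in range(p - s):
        A[nu, nu + s] = q[nu]          # positions (nu, nu+s)
    for mu in range(s):
        A[mu + p - s, mu] = q[mu + p - s]   # positions (mu+p-s, mu)

    # det A should match  r_1...r_p + q_1...q_p  or  r_1...r_p - q_1...q_p.
    print(np.linalg.det(A), np.prod(r) + np.prod(q), np.prod(r) - np.prod(q))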
In the present section we prove the following proposition:

Proposition. (1) For k=n-m+1, …, n, the polynomial D̃_m,k is not divisible by any of the variables a_j, j≠ n-m. (2) For k=1, …, n-m, the polynomial D̃_m,k is not divisible by any of the variables a_j, j≠ n. (3) For k=1, …, n-m, the polynomial D̃_m,k is divisible by a_n^n-m-k and not divisible by a_n^n-m-k+1. (4) For k=n-m+1, …, n, it is divisible by a_n-m^n-k and not divisible by a_n-m^n-k+1.

Proof of part (1). We show first that for a_i=0, n-m≠ i≠ k, the polynomial D̃_m is of the form Ω'a_n-m^n+Ω″a_n-m^n-ka_k^n-m. Indeed, in this case one can list the nonzero entries of the (2n-m)×(2n-m) matrix S(P,P_*) and the positions in which they are situated:

1 in (j,j), a_n-m in (j,j+n-m), a_k in (j,j+k), for j=1,…,n-m;
b_0 in (ν+n-m,ν), b_n-ma_n-m in (ν+n-m,ν+n-m), for ν=1,…,n.

Subtract the (μ+n-m)th row multiplied by 1/b_n-m from the μth one for μ =1, …, n-m. This makes disappear the terms a_n-m in positions (j,j+n-m), while the terms 1 in positions (j,j) become equal to Ω_*:=1-b_0/b_n-m. The determinant of the matrix doesn't change. We denote the new matrix by T. To compute det T one can develop it n-k times w.r.t. the last column; each time one has a single nonzero entry in this column, namely b_n-ma_n-m in position (2n-m-ℓ,2n-m-ℓ), ℓ =0, …, n-k-1. The matrix T_1 which remains after deleting the last n-k rows and columns of T has the following nonzero entries, in the following positions:

Ω_* in (j,j), a_k in (j,j+k), for j=1,…,n-m;
b_0 in (ν+n-m,ν), b_n-ma_n-m in (ν+n-m,ν+n-m), for ν=1,…,k.

Clearly det T=(b_n-ma_n-m)^n-k det T_1. On the other hand the matrix T_1 satisfies the conditions of Lemma <ref> with p=n-m+k and s=k. Hence det T_1=Ω̃'a_n-m^k+Ω̃″a_k^n-m and D̃_m=Ω'a_n-m^n+Ω″a_n-m^n-ka_k^n-m. But then the (2n-2m-1)×(2n-2m-1) Sylvester matrix S^*:=S(D̃_m,∂D̃_m/∂ a_k,a_k) has only the following nonzero entries, in the following positions:

Ω″a_n-m^n-k in (j,j), Ω'a_n-m^n in (j,j+n-m), for j=1,…,n-m-1;
(n-m)Ω″a_n-m^n-k in (ν+n-m-1,ν), for ν=1,…,n-m.

Part (1) follows from det S^*=±(Ω'a_n-m^n)^n-m-1((n-m)Ω″a_n-m^n-k)^n-m≢0.

Proof of part (2). We prove that for a_i=0, k≠ i≠ n, the polynomial D̃_m is of the form Ω^†a_n^n-m+Ω^††a_n^n-m-ka_k^n. Indeed, we list below the nonzero entries of the matrix S(P,P_*) and their positions:

1 in (j,j), a_k in (j,j+k), a_n in (j,j+n), for j=1,…,n-m;
b_0 in (ν+n-m,ν), b_ka_k in (ν+n-m,ν+k), for ν=1,…,n.

One can develop det S(P,P_*) n-m-k times w.r.t. its last column, where the only nonzero entries equal a_n. Thus det S(P,P_*)=± a_n^n-m-k det H, where H is obtained from S(P,P_*) by deleting the last n-m-k columns and the rows with indices k+1, …, n-m. The matrix H has the following nonzero entries, in the following positions:

1 in (j,j), a_k in (j,j+k), a_n in (j,j+n), for j=1,…,k;
b_0 in (ν+k,ν), b_ka_k in (ν+k,ν+k), for ν=1,…,n.

For μ =1, …, k one can subtract the (μ+k)th row multiplied by 1/b_k from the μth one to make disappear the terms a_k in positions (μ,μ+k); the entries 1 in positions (μ,μ) change to Ω^*:=1-b_0/b_k. We denote the newly obtained matrix by H_1. Obviously det H_1=det H; we list the nonzero entries of H_1 and their respective positions:

Ω^* in (j,j), a_n in (j,j+n), for j=1,…,k;
b_0 in (ν+k,ν), b_ka_k in (ν+k,ν+k), for ν=1,…,n.

One applies Lemma <ref> with p=n+k, s=n to the matrix H_1 to conclude that det H=det H_1=(Ω^*)^k(b_ka_k)^n± b_0^na_n^k, so D̃_m=det S(P,P_*)=Ω^†a_n^n-m+Ω^††a_n^n-m-ka_k^n. But then the (2n-1)×(2n-1) Sylvester matrix S(D̃_m,∂D̃_m/∂ a_k,a_k) has only the following nonzero entries, in the following positions:

Ω^††a_n^n-m-k in (j,j), Ω^†a_n^n-m in (j,j+n), for j=1,…,n-1;
nΩ^††a_n^n-m-k in (ν+n-1,ν), for ν=1,…,n.

Its determinant equals ±(Ω^†a_n^n-m)^n-1(nΩ^††a_n^n-m-k)^n≢0, which proves part (2).

Proof of part (3). For k=1, …, n-m, the polynomial D̃_m contains the monomial M_k:=± b_k^na_k^n(1-b_0/b_k)^ka_n^n-m-k, and it does not contain any other monomial of the form Ω a_k^nE, where E is a product of powers of variables a_i with i≠ k, see Proposition <ref>. Hence the first column of the (2n-1)×(2n-1) matrix Y:=S(D̃_m,∂D̃_m/∂ a_k,a_k) contains only two nonzero entries, and these are Y_1,1=± b_k^n(1-b_0/b_k)^ka_n^n-m-k and Y_n,1=± nb_k^n(1-b_0/b_k)^ka_n^n-m-k. Thus Δ:=det Y is divisible by a_n^n-m-k. We consider two cases:

Case 1: k=n-m. We have to prove that D̃_m,n-m|_a_n=0≢0. Set a_j=0 for n-m≠ j≠ n-1. Hence the nonzero entries of the matrix S(P,P_*) and their positions are

1 in (j,j), a_n-m in (j,j+n-m), a_n-1 in (j,j+n-1), for j=1,…,n-m;
b_0 in (ν+n-m,ν), b_n-ma_n-m in (ν+n-m,ν+n-m), for ν=1,…,n.

One can subtract the (j+n-m)th row multiplied by 1/b_n-m from the jth one, j=1, …, n-m, to make disappear the terms a_n-m in the first n-m rows. This doesn't change det S(P,P_*). The terms 1 in positions (j,j) are replaced by 1-b_0/b_n-m. Hence D̃_m is of the form Ω_1a_n-m^n+Ω_2a_n-1^n-ma_n-m (one first develops det S(P,P_*) w.r.t. the last column, where there is a single nonzero entry b_n-ma_n-m in position (2n-m,2n-m), and then applies Lemma <ref> with p=2n-m-1 and s=n-1). Thus the matrix S^H:=S(D̃_m,∂D̃_m/∂ a_n-m,a_n-m) contains only the following nonzero entries, in the following positions:

Ω_1 in (j,j), Ω_2a_n-1^n-m in (j,j+n-1), for j=1,…,n-1;
nΩ_1 in (ν+n-1,ν), Ω_2a_n-1^n-m in (ν+n-1,ν+n-1), for ν=1,…,n.

One can subtract the (j+n-1)st row from the jth one, j=1, …, n-1, to make disappear the terms Ω_2a_n-1^n-m in the first n-1 rows; the terms Ω_1 become (1-n)Ω_1. Hence det S^H=Ω_3a_n-1^n(n-m)≢0.
Case 2: 1≤ k≤ n-m-1.To prove that Δ is not divisible by a_n^n-m-k+1we develop it w.r.t.its first column: Δ := (± b_k^n(1-b_0/b_k)^ka_n^n-m-k)( ([Y]_1,1)+ (-1)^n+1n ([Y]_n,1)) .Our aim is to show that for a_n=0 the sum Z:= ([Y]_1,1)+ (-1)^n+1n ([Y]_n,1) is nonzero; this implies a_n^n-m-k+1 not dividingΔ. Notice that for a_n=0 the onlynonzero entries in the second column of Y (i.e. ofY|_a_n=0=:Y^0) are Y^0_1,2 andY^0_n,2=(n-1)Y^0_1,2. ThusZ|_a_n=0=(Y^0_1,2+(-1)^n+1(-1)^nnY^0_n,2) (Y^†)= (1-n(n-1))Y^0_1,2( Y^†) ,where the matrix Y^† is obtained from Y^0by deleting its first two columns, its first and its nth rows.The entry Y^0_1,2 is a not identically equal to 0 polynomial in thevariables a_j, k≠ j≠ n. Indeed, this is the coefficient ofa_k^n-1 in R^0:=Res(P,P_*)|_a_n=0.The matrix S_*:=S(P,P_*)|_a_n=0 has asingle nonzero entry in its last column;this is (S_*)_2n-m,2n-m=b_n-ma_n-m. Hence R^0=b_n-ma_n-m M, whereM:=[S_*]_2n-m,2n-m (M is (2n-m-1)× (2n-m-1)). For ν =1,… ,n-m one can subtract the (n-m+ν)th row of Mmultiplied by 1/b_k from itsνth row to make disappear the terms a_k in its first n-m rows. The newmatrix is denoted by M^1; one has M= M^1.The only terms of M^1 containing a_k^n-1are nowobtained by multiplying the entries b_ka_k of the last n-1 rows of M^1.To get these terms up to a sign one has to multiply (b_ka_k)^n-1 byM^*, where M^* is obtained from M^1 by deleting the rows andcolumns of the entries b_ka_k. The matrix M^* is block-diagonal, itsleft upper block is upper-triangular and its right lower block islower-triangular. The diagonal entries of these blocks (of sizes k× kand (n-m-k)× (n-m-k)) equal 1-b_0/b_k and a_n-1. HenceY^0_1,2=± b_n-ma_n-m(1-b_0/b_k)^kb_k^n-1a_n-1^n-m-k≢0.There remains to prove that Y^†≢0, see (<ref>).The matrix Y^† is obtained as follows.Set D^†:=D̃_m|_a_n=0= S_*;recall that S_*=b_n-ma_n-m M^1, see the proof of Lemma <ref>.Then Y^†=S(D^†,∂ D^†/∂ a_k,a_k). Notice thatD^† is a degree n-1, not n, polynomial in a_k, thereforeY^† is (2n-3)× (2n-3). It suffices to show that fora_j=0, j≠ k, n-m, n-1, one has Y^†≢0. Thisresults from M^1|_a_j=0,k≠ j≠ n-1 not having multiple roots(which we prove below). One can develop n-m-k times M^1 w.r.t. its last column,where it has a singlenonzero entry a_n-1, to obtainM^1=± a_n-1^n-m-k M^†; M^† is(n+k-1)× (n+k-1), it is obtained from M^1 bydeleting the last n-m-k columns and the rows with indicesk+1, …, n-m. The matrixM^† satisfies the conditions of Lemma <ref> withp=n+k-1 and s=k, the entries r_j from the lemma equal 1-b_0/b_k≠ 0(for j=1, …, k) or b_ka_k (for j=k+1, …, n+k-1);one has q_j=a_n-1 (1≤ j≤ k) or q_j=b_0 (k+1≤ j≤ n+k-1).Hence M^†|_a_j=0,k≠ j≠ n-1= (1-b_0/b_k)^k(b_ka_k)^n-1± b_0^n-1a_n-1^k. Fora_n-1≠ 0 it has n-1 distinct roots. Part (3) is proved.We use sometimes the same notation as in the proofof part (3), but with different values of the indices, therefore the proofsof the two parts of the proposition should be considered as independent ones.For k=n-m+1, … ,n, the polynomialD̃_m contains the monomialN_k-n+m:=± b_n-m^n-ka_n-m^n-kb_0^ka_k^n-m; itdoes not contain any other monomial ofthe form Ω a_k^n-mD, where D is a product of powersof variables a_i with i≠ k, see Proposition <ref>. The first column of the (2n-2m-1)× (2n-2m-1)-matrixY:=S(D̃_m,∂D̃_m/∂ a_k,a_k) contains only twononzero entries, namely Y_1,1=± b_n-m^n-ka_n-m^n-kb_0^k and Y_n-m,1=± (n-m)b_n-m^n-ka_n-m^n-kb_0^k. ThusY is divisible by a_n-m^n-k. We consider two cases:Case 1: k=n. We show that Y≢0 if a_n-m=0. We prove this fora_j=0, n-m-1≠ j≠ n. 
In this case the nonzero entries of the matrixS(P,P_*) and their positions are [ 1 (j,j) , a_n-m-1 (j,j+n-m-1) ,;; a_n (j,j+n) ,j=1,…, n-m ,;; b_0 (ν +n-m,ν ) ,b_n-m-1a_n-m-1 (ν +n-m,ν +n-m-1) , ν =1,… ,n . ]Subtracting the (j+n-m)th row multiplied by 1/b_n-m-1 from the jth onefor j=1, …, n-m, one makes disappear the terms a_n-m-1 in thefirst n-m rows. The only nonzero entry in the last column is now a_n inposition (n-m,2n-m), soS(P,P_*)=(-1)^na_n [S(P,P_*)]_n-m,2n-m .The last matrix satisfies the conditions of Lemma <ref> withp=2n-m-1, s=n and one finds that its determinant is of the formΩ _4a_n-m-1^n+Ω _5a_n^n-m-1. HenceS(P,P_*)=(-1)^na_n(Ω _4a_n-m-1^n+Ω _5a_n^n-m-1).This means that the matrixS(D̃_m,∂D̃_m/∂ a_n,a_n) has only thefollowing entries in the following positions: [Ω _5 (j,j) , Ω _4a_n-m-1^n (j,j+n-m-1) ,;;j=1,… ,n-m-1 ,;; (n-m)Ω _5 (ν +n-m-1,ν ) , Ω _4a_n-m-1^n (ν +n-m-1,ν +n-m-1) ,;; ν =1,… ,n-m . ]One can subtract the (j+n-m-1)st row from the jth one, j=1,…, n-m-1, to make disappear the terms Ω _4a_n-m-1^n in thefirst n-m-1 rows. The matrix becomes lower-triangular, with diagonalentries equal to (1-n+m)Ω _5 or to Ω _4a_n-m-1^n, so itsdeterminant is not identically equal to 0.Case 2: n-m+1≤ k≤ n-1. To prove that Y is notdivisible by a_n-m^n-k+1we develop it w.r.t.its first column: Y = (± b_n-m^n-ka_n-m^n-kb_0^k)( ([Y]_1,1)+ (-1)^n-m(n-m) ([Y]_n-m,1)) .Our aim is to show that for a_n-m=0 the sum U:= ([Y]_1,1)+ (-1)^n-m(n-m) ([Y]_n-m,1) is nonzero; this impliesa_n-m^n-k+1 not dividingY. Notice that for a_n-m=0 the onlynonzero entries in the second column ofY^0:=Y|_a_n-m=0 are Y^0_1,2 andY^0_n-m,2=(n-m-1)Y^0_1,2. Thus[ U|_a_n-m=0= (Y^0_1,2+(-1)^n+1(-1)^n-m(n-m)Y^0_n-m,2) Y^†; ; =(1-(n-m)(n-m-1))Y^0_1,2 Y^† , ]where the matrix Y^† is obtained from Y^0by deleting its first two columns, its first and (n-m)th rows.The entry Y^0_1,2 is a not identically equal to 0 polynomial in thevariables a_j, k≠ j≠ n-m.Indeed, this is the coefficient ofa_k^n-m-1 in R^0:=Res(P,P_*)|_a_n-m=0.The matrix S_*:=S(P,P_*)|_a_n-m=0 has asingle nonzero entry in its last column;this is (S_*)_n-m,2n-m=a_n. Hence R^0=a_n M, whereM:=[S_*]_n-m,2n-m (M is (2n-m-1)× (2n-m-1)).The only terms of M containing a_k^n-m-1are obtained by multiplying the entries a_k of the first n-m-1 rows of M.To obtain these terms up to a sign one has to multiply a_k^n-m-1 byM^*, where M^* is obtained from M by deleting the rows andcolumns of the entries a_k. The matrix M^* is block-diagonal, itsleft upper block is upper-triangular and its right lower block islower-triangular. The diagonal entries of these blocks (of sizes k× kand (n-m-k)× (n-m-k)) equal b_0 and a_n-m-1. HenceY^0_1,2=± a_nb_0^ka_n-m-1^n-m-k≢0. There remains to prove that Y^†≢0, see (<ref>).The matrix Y^† is obtained as follows.Set D^†:=D̃_m|_a_n-m=0= S_*;recall that S_*=a_n M (see the proof of Lemma <ref>).Then Y^†=S(D^†,∂ D^†/∂ a_k,a_k). Notice thatD^† is a degree n-m-1, not n-m, polynomial in a_k, thereforeY^† is (2n-2m-3)× (2n-2m-3). It suffices to show that fora_j=0, k≠ j≠ n-m-1, one has Y^†≢0. Thisresults from M|_a_j=0,k≠ j≠ n-m-1 not having multiple roots(which we prove below).For a_j=0, k≠ j≠ n-m-1, one can develop n-k times M w.r.t. its last column in which there is a single nonzero entryb_n-m-1a_n-m-1(on the diagonal). 
Hence M=(b_n-m-1a_n-m-1)^n-k M^†, whereM^† is (n-m+k-1)× (n-m+k-1); it isobtained from M by deleting the last n-k rows and columns.The matrix M^† satisfies the conditions of Lemma <ref> withp=n-m+k-1, s=k, r_j=1, q_j=a_k (j=1, …, n-m-1) orr_j=a_n-m-1, q_j=b_0 (j=n-m, …, n-m+k-1). HenceM^†=a_n-m-1^k± b_0^ka_k^n-m-1. For a_n-m-1≠ 0 it hasn-m-1 distinct roots. § SOME PROPERTIES OF THE SETS Θ AND M̃Suppose that all roots of P_*(.,a^0) (a^0∈𝒜)are simple and nonzero and that P(.,a^0) and P_*(.,a^0) have exactly oneroot in common. Then for any j=n-m+1, …, n,in a neighbourhood of a^0∈𝒜 the set {D̃_m=0} is locally the graph of a smooth analyticfunction in the variables a^j. If in addition all roots of P_m,k(.,a^0)are simple and nonzero (1≤ k≤ n-m),then in a neighbourhood of a^0∈𝒜 the set {D̃_m=0} is locally the graph of a smooth analyticfunction in the variables a^k.Denote by [a]_n-m the first n-m coordinates of a∈𝒜. Any simple root of P_* is locally (in aneighbourhood of [a^0]_n-m) the value of a smooth analytic functionλ in the variables [a]_n-m. As λ ([a^0]_n-m)≠ 0,the condition P(λ ,a)/λ ^j=0, j<m,allows to express a_n-j locally (for a_i close to a_i^0, i≠ j)as a smooth analytic functionin the variables a^n-j. Suppose that all roots of P_m,k(.,a^0)are simple and nonzero. Then any of these roots is a smooth analytic functionin the variables a^k. This refers also to μ, the root in common ofP and P_* which is also a root of P_m,k. Hence one can expressa_k as a function in a^k from the condition P(μ ,a)/μ ^n-k=0. At a point of the Maxwell stratum the hypersurface {D̃_m=0} islocally the transversal intersection of two smooth analytic hypersurfaces alonga smooth analytic subvariety of codimension 2.Suppose first that the roots in common of P and P_* are 0 and 1.The two conditions P_*(0)=P_*(1)=0 define a codimension 2linear subspace 𝒮 in the space 𝒜of the variables a. Adding to them the two conditionsP(0)=P(1)=0 means defining a codimension 2 linear subspace𝒯⊂𝒮; hence 𝒯 is acodimension 4 linear subspace of 𝒜. The two linear subspaces{ P(0)=0} and { P(1)=0} and their intersections with{ P_*(0)=P_*(1)=0} intersect transversally (along respectively{ P(0)=P(1)=0} and 𝒯). By means of a linear change τ : x↦α x+β,α∈ℂ^*, β∈ℂ, one can transform anypair of distinct complex numbers into the pair (0,1). Hence at a point of𝒯 the Maxwell stratum is locally the direct product of𝒯 and the two-dimensional orbit of the group oflinear diffeomorphisms induced in the space 𝒜 by the group of linear changes τ. Thisproves the statement. (1) At a point of the set Θ (see Definition <ref>) the set{D̃_m=0} is not representable as the graph in the space𝒜 of a smoothanalytic function in the variables a^j, for any j=n-m+1, …, n.(2) At a point of the set Θ this set is a smooth analyticvariety of dimension n-2 in the space of variables a. Suppose that for some a=a_0∈𝒜 one hasP_*(x_0,a_0)=P_*'(x_0,a_0)=0. Suppose first that x_0≠ 0. Consider the equationP_*(x,a_0)=ε ,   where  ε∈ (ℂ,0) . Its left-hand side equalsP_*”(x_0,a_0)(x-x_0)^2/2+o((x-x_0)^2) (with P_*”(a_0,x_0)≠ 0).Thus locally (for x close tox_0) one hasx-x_0=(2/P_*”(x_0,a_0))^1/2ε ^1/2+ o(ε ^1/2) . In a neighbourhood of a_0∈𝒜 one can introduce new coordinatestwo of which are x_0 and ε. Indeed, one can write [(n-m-1)!P_*'/n!= (x-x_0)(x^n-m-2+g_1x^n-m-3+⋯ +g_n-m-2); ; = x^n-m-1+b_1^*a_1x^n-m-2+ ⋯ +b_n-m-1^*a_n-m-1 , ]where b_j^*=(n-j)!(n-m-1)!/(n-m-j-1)!n!.Hence[ b_1^*a_1=g_1-x_0, b_2^*a_2= g_2-x_0g_1, … ,; ; b_n-m-2^*a_n-m-2= g_n-m-2-x_0g_n-m-3, b_n-m-1^*a_n-m-1=-x_0g_n-m-2. 
] The Jacobianmatrix ∂ (a_1, …, a_n-m-1)/ ∂ (x_0, g_1, …,g_n-m-2) is, up to multiplication of the columns by nonzero constantsfollowed by transposition, the Sylvester matrix of the polynomialsx-x_0 and x^n-m-2+g_1x^n-m-3+⋯ +g_n-m-2. Its determinant is nonzerobecause x_0 is not a root of the second of these polynomials. Thus in the space of the variables (a_1, …, a_n-m-1) one can chooseas coordinates (x_0, g_1, …, g_n-m-2). The polynomial P_* isa primitive of P_*' and (-ε ) can be considered as theconstant of integration, see (<ref>), therefore(x_0, g_1, …, g_n-m-2, ε ) can be chosen ascoordinates in the space of the variables (a_1, …, a_n-m). Addingto them (a_n-m+1, …, a_n), one obtains local coordinatesin the space 𝒜. Hence the double root μ of P_* is not an analytic,but a multivalued function of the local coordinates in 𝒜, see(<ref>). Consider the condition P(μ ,a)/μ ^n-j=0. One canexpress from it a_j (n-m+1≤ j≤ n)as a linear combination of the variables a^j with coefficients dependingon μ. This expression is of the form A+ε ^1/2B, where Aand B (B≢0)depend analytically on the local coordinates in 𝒜.This proves the statement for x_0≠ 0. For x_0=0 thestatement also holds true – if for x_0=0 the set {D̃_m=0}is locally the graph of a holomorphic function in the variables a^j, thenthis must be the case for nearby values of x_0 as well which is false. Such values exist – the change x↦ x+δ, δ∈ℂ,shifts simultaneously by -δ all roots of P(hence of all its nonconstant derivatives as well). Denote by ξ the root of P_*' which isalso a root of P_* and of P. Then ξ is a smooth analytic functionin the variables a^†:=(a_1, …, a_n-m-1). The conditionP_*(ξ ,a)=0 allows to express a_n-m as a smooth analytic functionα in the variables a^†. Set a^*:=a|_a_n-m=α (a^†).One can express a_n as asmooth analytic function in the variables a_j, n-m≠ j≠ n, from thecondition P(ξ ,a^*)=0. Thus locally Θ is the graph of a smoothanalytic vector-function in the variables a_j, n-m≠ j≠ n,with two components.For 2≤ m≤ n-2 the polynomials B_m,k and C_m,kdefined in Theorem <ref> are irreducible.Irreducibility of the factor B_m,k is proved by analogy withProposition <ref>. (For n-m+1≤ k≤ n the analogy is completebecause after the dilatations a_j↦ a_j/b_j, j=1, …, n-m,the polynomial P_* becomes b_0P^*, where P^* is the polynomialP defined for n-m instead of n. For 1≤ k≤ n-m the coefficientsof the polynomial P_m,k are not a_j (we set a_0=1),but (b_k-b_j)a_j, and one can perform similar dilatations.Only the variable a_k is absent; this, however, is not an obstacleto the proof of irreducibility. The details are left for the reader.)Irreducibility of the factors C_m,k can be proved like this. Denote byξ and η two of the roots of P_*. They are multivalued functionsof the coefficients a_1, …, a_n-m. The system of two equationsP(ξ ,a)=P(η ,a)=0 allows to express for ξ≠η thecoefficients a_n and a_n-1 as functions of a_1, …, a_n-2.These multivalued functions are defined over a Zariski dense opensubset of the space of variables (a_1, …, a_n-2) fromwhich irreducibility of the set M̃ follows. Hence its projectionsin the hyperplanes 𝒜^k are also irreducible. In the case m=1 one cannot prove in the same way as above that thepolynomials C_1,k are irreducible because the coefficient a_n-1 isin fact a_n-m. Proposition <ref>, Lemma <ref>,Statements <ref>,<ref> and <ref>allow to conclude that D̃_m,k is of the formA_m,kB_m,k^s_m,kC_m,k^r_m,k,where s_m,k, r_m,k∈ℕ. 
Indeed, the form of the factor A_m,k is justified by Proposition <ref>. It follows from Lemma <ref> and its proof that for A_m,kB_m,kC_m,k≠ 0 the polynomials D̃_m and ∂D̃_m/∂ a_k, when considered as polynomials in a_k, have no root in common. Hence a priori D̃_m,k is of the form A_m,kB_m,k^s_m,kC_m,k^r_m,k, with s_m,k, r_m,k∈ℕ∪{0} (implicitly we use the irreducibility of B_m,k and C_m,k here). Statements <ref> and <ref> imply that one cannot have s_m,k=0 or r_m,k=0. To prove formula (<ref>) now means to prove that s_m,k=1, r_m,k=2. This is performed in the next sections.

§ THE CASE M=N-2

For m=n-2, n≥ 4, one has s_m,k=1 and r_m,k=2.

For 3≤ k≤ n the polynomial D̃_n-2 is a degree 2 polynomial in a_k, see Proposition <ref>, so one can set D̃_n-2:=Ua_k^2+Va_k+W and ∂D̃_n-2/∂ a_k=2Ua_k+V, where U, V, W∈ℂ[a^k], U≢0. Hence

S(D̃_n-2,∂D̃_n-2/∂ a_k,a_k)= ( [ U V W; 2U V 0; 0 2U V ])

and D̃_n-2,k=U(4UW-V^2). The second factor is up to a sign the discriminant of the quadratic polynomial (in the variable a_k) Ua_k^2+Va_k+W. Up to a sign, U is the determinant of the matrix S^L obtained from S(P,P_*) by deleting its first two rows and the columns in which its entries a_k are situated. Hence U=ω a_2^n-k, ω∈ℂ^*. Indeed, S^L is block-diagonal, with diagonal blocks of sizes k× k (upper left) and (n-k)× (n-k) (lower right). They are respectively upper- and lower-triangular, with diagonal entries equal to b_0 and b_2a_2.
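For small n the whole factorization can be observed directly. The following sympy sketch is our own illustration; it assumes, as in the classical case, that P_* is a constant multiple of P^(m) (so the constants b_j are the rescaled derivative coefficients, which satisfy the admissibility conditions on the b_j). It computes D̃_2 and D̃_2,3 for n=4, m=2, k=3 and factors the result:

import sympy as sp

x, a1, a2, a3, a4 = sp.symbols('x a1 a2 a3 a4')

P = x**4 + a1*x**3 + a2*x**2 + a3*x + a4      # n = 4
Pstar = sp.diff(P, x, 2)                      # m = n-2 = 2, take P_* ~ P''

Dt = sp.resultant(P, Pstar, x)                # D~_2: degree 2 in a3 (k = 3)
Dtk = sp.resultant(Dt, sp.diff(Dt, a3), a3)   # D~_{2,3}

print(sp.factor(Dtk))

According to the proposition being proved, the printed factorization should exhibit, up to a nonzero constant, the factor A_2,3=a_2^n-k=a_2, one copy of the discriminant-type factor B_2,3 (proportional to 3a_1^2-8a_2 here) and the square of an irreducible factor C_2,3 of quasi-homogeneous weight 2n-k-1=4.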
The polynomial P is obtained fromP_* by rescaling of its coefficients followed by (n-2)-fold integrationwith constants of integrationof the form η _sa_s, η _s∈ℚ^*, s=3, …, n.Consider the two conditions P(α ,a)/α ^n-k=0 andP(β ,a)/β ^n-k=0. Each of them is a linear form in the variablesa_3, …, a_n, with coefficients depending on a_1 and a_2; theone of a_k equals 1.The projection of the Maxwell stratum in the space of the variables a^k isgiven by the conditionβ ^n-kP(α ,a)-α ^n-kP(β ,a)=0 .Its left-hand side is a linear form in the variables a_3, …, a_k-1,a_k+1, …, a_n, with coefficients depending on α and β.The presence of the monomial a_n^2(b_0)^2k(b_1a_1)^2(n-k) ora_n-1^2(b_0)^2(n-1)(b_1a_1)^2 in D̃_n-2,k (see Lemma <ref>)implies that the factor C_n-2,k must be squared. There remains to prove that s_n-2,k=1, see Remark <ref>.The left-hand side of equation (<ref>) is divisible by α - β.Represent this expression in the form (α - β )Q(α ,β ,a).The polynomial Q depends in fact on α +β =-b_1a_1/b_0,αβ =b_2a_2/b_0 and a,hence this is a polynomial in a (denoted by K(a)). Clearly K dependslinearly on the variables a_3, …, a_n. On the other handK is quasi-homogeneous. Hence K is irreducible. Indeed, should K bethe product of two factors, then one of the two (denoted by Z)should not depend on anyof the variables a_3, …, a_n, i.e. Z should be a polynomial ina_1 and a_2. This polynomial should divide the coefficients ofall variables a_3, …, a_n in K. But for 3≤ s≤ nthe coefficient of a_s inK equals (see (<ref>))c_s:=(β ^n-kα ^n-s-α ^n-kβ ^n-s)/(α - β ).Hence Z divides c_s-β c_s-1=α ^n-s-1β ^n-kfor all s≠ k, and by symmetry Z divides α ^n-kβ ^n-s-1for all s≠ k. Hence Z=1 and the polynomial C_n-2,k equals(β ^n-kP(α ,a)-α ^n-kP(β ,a))/(α -β ). Itsquasi-homogeneous weight (QHW) is 2n-k-1 (notation: QHW(C_n-2,k)=2n-k-1).Indeed, one has to consider QHW(α ) and QHW(β ) to be equal to1 because α and β are roots ofP_* and their QHW is the same as the one of thevariable x. Obviously QHW(D̃_n-2,k)=2QHW(U)+QHW(W) becauseD̃_n-2,k=U(4UW-V^2) and D̃_n-2,k is quasi-homogeneous.As U=ω a_2^n-k, one has QHW(U)=2(n-k). The polynomialD̃_n-2,k contains a monomial ω̃a_2^n,ω̃≠ 0 (see Proposition <ref>). This monomialis contained also in W=D̃_n-2|_a_k=0 henceQHW(D̃_n-2)=QHW(W)=2n. ThusQHW(D̃_n-2,k)=2 QHW(U)+ QHW(D̃_n-2) =6n-4k .On the other hand one knows already that a prioriD̃_n-2,k=A_n-2,kB_n-2,k^s_n-2,kC_n-2,k^2, s_n-2,k∈ℕ,A_n-2,k=a_2^n-k. Hence[s_n-2,k QHW(B_n-2,k) = QHW(D̃_n-2,k)- QHW(A_n-2,k)- 2 QHW(C_n-2,k); =6n-4k-2(n-k)-2(2n-k-1) = 2 , ]and as B_n-2,k=b_1^2a_1^2-4b_0b_2a_2, one has QHW(B_n-2,k)=2, so s_n-2,k=1.In order to deal with the cases k=1 and k=2 we need to know thedegrees andquasi-homogeneous weights of certain polynomials in the variables a: (1) _a_1D̃_n-2=n, _a_2D̃_n-2=n; (2) QHW(D̃_n-2)=2n,QHW(∂D̃_n-2/∂ a_1)=2n-1,QHW(∂D̃_n-2/∂ a_2)=2n-2; (3) QHW(D̃_n-2,1)=n(3n-2); (4) QHW(D̃_n-2,2)=2n(n-1);(5) QHW(B_n-2,1)=n(n-1), QHW(B_n-2,2)=n(n-1); (6) QHW(C_n-2,1)=n(n-1), QHW(C_n-2,2)=n(n-1)/2. For k=1 or 2 one has to find positive integers u and v such thatQHW(D̃_n-2,k)=(2-k)n+u QHW(B_n-2,k)+v QHW(C_n-2,k) ,because A_n-2,k=a_n^2-k. For k=2 parts (4), (5) and (6) of the lemmaimply thatu=1, v=2is the only possible choice. 
For k=1 there remain two possibilities –(u,v)=(1,2) or (u,v)=(2,1) – so we need another lemma as well:For a_j=0, j≠ 1, n-1, n, the polynomials D̃_n-2,D̃_n-2,1, B_n-2,1 and C_n-2,1 are of the form respectively(with Δ _i≠ 0)[ D̃_n-2= Δ _1a_na_1^n+Δ _2a_na_n-1a_1+Δ _3a_n^2, D̃_n-2,1= Δ _4a_n^2n-1a_n-1^n+Δ _5a_n^3n-2,; ;B_n-2,1=Δ _6a_n^n-1+Δ _7a_n-1^nandC_n-2,1=Δ _8a_n^n-1. ]The lemma implies that it is possible to have (u,v)=(1,2), but not(u,v)=(2,1). Indeed, otherwise the productD̃_n-2,1=A_n-2,1B_n-2,1^2C_n-2,1, withA_n-2,1=a_n, should contain three different monomials whereas it contains only two. Parts (1) and (2) follow directly from Proposition <ref>. To proveparts (3) and (4) one has to observe that as the polynomial D̃_n-2contains a monomial c^*a_n^2, c^*≠ 0,the (2n-1)× (2n-1)-Sylvester matricesS^*_k:=S(D̃_n-2,∂D̃_n-2/∂ a_k,a_k),k=1 or 2,contain this monomialin positions (j,j+n), j=1, …, n-1 and only there. The matrixS^*_1 (resp. S^*_2) has entries c^†a_n, c^†≠ 0 (resp. c^**≠ 0)in positions(ν +n-1,ν ), ν =1, …, n. HenceD̃_n-2,k contains amonomial ± (c^†a_n)^n(c^*a_n^2)^n-1 for k=1 and± (c^**)^n(c^*a_n^2)^n-1 for k=2 whose quasi-homogeneous weight isrespectively n(3n-2) and 2n(n-1). To prove part (5) recall that the(2n-1)× (2n-1)-Sylvester matrixS^0:=S(P_n-2,k,P_n-2,k'), k=1 or 2,has entries of the form c^**a_n, c^**≠ 0, in positions(j,j+n), j=1, …, n-1 and only there, and constant nonzero termsin positions (ν +n-1,ν ), ν =1, …, n. Thus B_n-2,kcontains a monomial ± c^***(a_n)^n-1, c^***≠ 0 and QHW(B_n-2,k)=n(n-1). For the proof of part (6) we need to recall that the factors C_n-2,k arerelated to polynomials P divisible by P_*. When one performs thisEuclidean division one obtains a rest of the formU^†(a)x+V^†(a), whereU^†,V^†∈ℂ[a], QHW(U^†)=n-1,QHW(V^†)=n, U^† (resp. V^†)contains monomialsω _1a_1^n-1 and ω _2a_n-1 (resp. ω _3a_1^n-2a_2 andω _4a_n),ω _i≠ 0. (To see that the monomials ω _1a_1^n-1 andω _3a_1^n-2a_2 are present one has to recall that at each step of theEuclidean division one replaces a term Lx^s, L∈ℂ[a],by the sum-L(b_1/b_0)a_1x^s-1-L(b_2/b_0)a_2x^s-2.) To obtain the factor C_n-2,1 one has to eliminate a_1 from the systemof equations U^†(a)=V^†(a)=0, i.e. one has to find the subsetin the space of variables a^1 for which U^† and V^†have a common zero whenconsidered as polynomials in a_1. The (2n-3)× (2n-3)-Sylvestermatrix S(U^†,V^†,a_1) contains terms ω _2a_n-1in positions(j,j+n-1), j=1, …, n-2, and terms ω _3a_2 in positions(ν +n-2,ν ), ν =1, …, n-1. Hence C_n-2,1 contains amonomial ± (ω _2a_n-1)^n-2(ω _3a_2)^n-1, ofquasi-homogeneous weight n(n-1). The proof of the second statement of part (6) is performed separatelyfor the cases of even and odd n. If n is even, then U^†(resp. V^†)contains monomials Ω _1a_1a_2^n/2-1 and Ω _2a_n-1(resp. Ω _3a_2^n/2 and Ω _4a_n), Ω _i≠ 0. The(n-1)× (n-1)-Sylvester matrixS(U^†,V^†,a_2)contains termsΩ _4a_n in positions(j,j+n/2), j=1, …, n/2-1, and Ω _1a_1 in positions(ν +n/2-1,ν ), ν =1, …, n/2. Hence C_n-2,2 contains amonomial ± (Ω _4a_n)^n/2-1(Ω _1a_1)^n/2, ofquasi-homogeneous weight n(n-1)/2. When n is odd, then U^† (resp. V^†)contains monomials Ω̃_1a_2^(n-1)/2 and Ω̃_2a_n-1(resp. Ω̃_3a_1a_2^(n-1)/2 and Ω̃_4a_n),Ω̃_i≠ 0. The(n-1)× (n-1)-Sylvester matrix S(U^†,V^†,a_2) contains termsΩ̃_2a_n-1 in positions(j,j+(n-1)/2), j=1, …, (n-1)/2, andΩ̃_3a_1 in positions(ν +(n-1)/2,ν ), ν =1, …, (n-1)/2. ThusC_n-2,2 contains a monomial± (Ω̃_2a_n-1Ω̃_3a_1)^(n-1)/2, ofquasi-homogeneous weight n(n-1)/2. One can develop S(P,P_*) w.r.t. 
the last column in which thereis a single nonzero entry (a_n, in position (2,n+2)). HenceD̃_n-2=(-1)^na_n S^♯, whereS^♯:=[S(P,P_*)]_2,n+2. The last column of S^♯ contains onlytwo nonzero entries (a_n in position (1,n+1) and b_1a_1in position (n+1,n+1)), thereforeS^♯=(-1)^na_n S^♯ 1+b_1a_1 S^♯ 2  ,   where  S^♯ 1:=[S^♯]_1,n+1  ,   S^♯ 2:=[S^♯]_n+1,n+1 .The matrix S^♯ 1 is upper-triangular, with diagonalentries equal to b_0, so S^♯ 1=b_0^n, while S^♯ 2 containsonly two nonzero entries in its last column (a_n-1 in position (1,n) andb_1a_1 in position (n,n)). HenceS^♯ 2=(-1)^n+1a_n-1 S^♯ 3+b_1a_1 S^♯ 4  ,   where  S^♯ 3:=[S^♯ 2]_1,n  ,   S^♯ 4:=[S^♯ 2]_n,n .The matrix S^♯ 3 is upper-triangular, with diagonal entries equal tob_0, so S^♯ 3=b_0^n-1. The matrix S^♯ 4 becomeslower-triangular after subtracting its second row multiplied by 1/b_1 fromthe first one,with diagonal entries 1-b_0/b_1, b_1a_1, …, b_1a_1, from whichthe form of D̃_n-2 follows. Hence the (2n-1)× (2n-1)-Sylvester matrixS(D̃_n-2,∂D̃_n-2/∂ a_1,a_1) has only thefollowing nonzero entries, in the following positions: [Δ _1a_n(j,j) , Δ _2a_na_n-1(j,j+n-1) ,Δ _3a_n^2(j,j+n) ,; ; j=1,… ,n-1,; ; nΔ _1a_n(ν +n-1,ν ) , Δ _2a_na_n-1 (ν +n-1,ν+n-1) ,ν =1,… ,n . ]One can subtract the (j+n-1)st row from the jth one (j=1,… ,n-1)to make disappear the terms Δ _2a_na_n-1 in positions(j,j+n-1). This does not change the determinant; the entriesΔ _1a_n in positions (j,j) become (1-n)Δ _1a_n. The form ofD̃_n-2,1 follows now from Lemma <ref>.For a_j=0, j≠ 1, n-1, n, the polynomial P_n-2,1 is of the formα _1x^n+α _2a_n-1x+α _3a_n, α _i≠ 0, sothe (2n-1)× (2n-1)-Sylvester matrix S(P_n-2,1,P_n-2,1')has nonzero entries only [α _1at (j,j) , α _2a_n-1at (j,j+n-1) , α _3a_nat (j,j+n) , j=1,… ,n-1,;; nα _1at (ν ,ν ) , α _2a_n-1 at (ν ,ν+n-1) , ν =1,… ,n . ]By analogy with the reasoning about D̃_n-2,1 one finds thatB_n-2,1=Δ _6a_n^n-1+Δ _7a_n-1^n. To justify the form of C_n-2,1 it suffices to observe that fora_j=0, j≠ 1, n-1, n, one has (see the definition ofU^† and V^† in the proof of Lemma <ref>)U^†=α _4a_n-1+α _5a_1^n-1,V^†=α _6a_n, α _i≠ 0, so _a_1U^†=n-1 and_a_1V^†=0. When eliminating a_1 from the system ofequalities U^†=V^†=0 one obtains Res(U^†,V^†,a_1)=0,i.e. (α _6a_n)^n-1=0.§ THE PROOF OF S_M,1=1 In the present section we prove the followingWith the notation of Remark <ref> one has s_m,1=1.The proof of the proposition makes use of the following lemma: Set a_j=0 for j≠ 1, ℓ and n, where n-m+1≤ℓ≤ n-1.Then S(P,P_*) is of the formΩ _1a_n^n-m-1a_1^n+Ω _2a_n^n-m-1a_ℓa_1^n-ℓ+Ω _3a_n^n-m. Lemma <ref> with ℓ =n-1 implies that the matrixS(D̃_m,∂D̃_m/∂ a_1, a_1) has only the followingnonzero entries, in the following positions: [Ω _1a_n^n-m-1(j,j), Ω _2a_n^n-m-1a_n-1 (j,j+n-1)  ,; ;Ω _3a_n^n-m(j,j+n), j=1,… ,n-1 ,; ; nΩ _1a_n^n-m-1(ν +n-1,ν ), Ω _2a_n^n-m-1a_n-1 (ν +n-1,ν +n-1)  ,; ; ν =1,… ,n . ]Subtract for j=1, …, n-1 its (j+n-1)st row from the jth one.This preserves its determinant and leaves only the following nonzero entries,in the following positions: [ (1-n)Ω _1a_n^n-m-1(j,j),Ω _3a_n^n-m(j,j+n),; ;j=1,… ,n-1,; ; nΩ _1a_n^n-m-1(ν +n-1,ν ), Ω _2a_n^n-m-1a_n-1(ν +n-1,ν +n-1),; ; ν =1,… ,n .] The new matrix satisfies the conditions of Lemma <ref> withp=2n-1, s=n. Hence its determinant is of the forma_n^(n-m-1)(2n-1)(Ω _4a_n-1^n+Ω _5a_n^n-1)  ,where Ω _4=((1-n)Ω _1)^n-1Ω _2^n andΩ _5=±Ω _3^n-1Ω _1^n. 
The polynomialRes(P_m,1,P_m,1') contains monomials α a_n^n-1 and β a_n-1^n,α≠ 0≠β; this can be proved by complete analogy withthe analogous statement of Proposition <ref> with m=1 and weleave the proof for the reader. Hence the polynomial (<ref>) is notdivisible by a power of Res(P_m,1,P_m,1') higher than 1, because in thiscase it would contain at least three different monomials in a_n anda_n-1. Thus s_m,1=1.The matrix S(P,P_*) has only the following nonzero entries,in the following positions: [ 1 (j,j) , a_1 (j,j+1) , a_ℓ (j,j+ℓ )  ,;; a_n (j,j+n) ,j=1,… ,n-m ,;; b_0 (ν +n-m,ν ) ,b_1a_1 (ν +n-m,ν +1) , ν =1,… ,n . ]One can develop the determinant n-m-1 times w.r.t. the last column in whicheach time there will be a single nonzero entry a_n. ThusS(P,P_*)=± a_n^n-m-1 S^, where the first row ofS^ contains the entries 1, a_1, a_ℓ and a_n in positionsrespectively (1,1), (1,2), (1,ℓ +1) and (1,n+1); its second rowis of the form (b_0, b_1a_1, 0, …, 0) and the next rows arethe consecutive shifts of this one by one position to the right. Developingof S^ w.r.t. the last column yieldsS^=(-1)^na_n [S^]_1,n+1+b_1a_1[S^]_n+1,n+1 .The matrix [S^]_1,n+1 is upper-triangular, with diagonal entriesequal to b_0 (hence [S^]_1,n+1=b_0^n). The determinant of thematrix S^:=[S^]_n+1,n+1 can be developedn-ℓ -1 timesw.r.t. its last column, where each timeit has a single nonzero entry b_1a_1 in its right lower corner: S^=(b_1a_1)^n-ℓ -1 S^*† ,where S^*† is (ℓ +1)× (ℓ +1); it is obtained bydeleting the last n-ℓ -1 rows and columns of S^. Thedeterminant S^*† can be developed w.r.t. its last column:S^*†=(-1)^ℓa_ℓ [S^*†]_1,ℓ +1+b_1a_1[S^*†]_ℓ +1,ℓ +1 .The matrix [S^*†]_1,ℓ +1 (resp. [S^*†]_ℓ +1,ℓ +1)is upper-triangular, with diagonal entries equal to b_0, soits determinant equals b_0^ℓ (resp. becomes lower-triangular (aftersubtracting its second row multiplied by 1/b_1 from its first row),with diagonalentries equal to 1-b_0/b_1, b_1a_1, …, b_1a_1, so its determinantequals (1-b_0/b_1)(b_1a_1)^ℓ -1). This implies the lemma. § COMPLETION OF THE PROOF OFTHEOREM <REF> If formula (<ref>) is true for n=n_0, k=k_0,then it is true for n=n_0+1, k=k_0+1.If formula (<ref>) is true for n=n_0, m=n_0-2, k=1, thenit is true for n=n_0, 2≤ m<n_0-2, k=1.Recall that we have shown already(see Remark <ref>) that foreach n fixed the polynomials D̃_m,k (2≤ m≤ n-2,1≤ k≤ n) are of the form A_m,kB_m,k^s_m,kC_m,k^r_m,k,s_m,k, r_m,k∈ℕ. Suppose that for 4≤ n≤ n_0 one hass_m,k=1, r_m,k=2. (Using MAPLE one can obtain this result for n_0=4.) Set P(a,x):=x^n_0+a_1x^n_0-1+⋯ +a_n_0, a:= (a_1,… ,a_n_0) and consider the polynomials F:=ux^n_0+1+P andF_*:=b_-1ux^n_0-m+1+P_*, u∈ (ℂ,0), 0≠ b_-1≠ b_j for0≤ j≤ n_0-m. They are deformationsrespectively of P and P_*. Our reasoning uses the followingOne has[F=u(x^n_0+1+x^n_0/u+∑ _j=0^n_0-1(a_n-j/u)x^j),; ;F_*= u(b_-1x^n_0-m+1+b_0x^n_0-m/u+ ∑ _j=0^n_0-1(b_n-ja_n-j/u)x^j), ]so after the change of parametersã_1=1/u, ã_s=a_s-1/u,s=2, …, n_0 (which is well-defined for u≠ 0) and theshifting by 1 of the indices of the constants b_j, the polynomials F andF_* (up to multiplication by 1/u) become P and P_* defined for n_0+1instead of n_0.The zero set of Res(F,F_*) for u≠ 0is defined byan equation of the form D̃_m+uH/d=0, whereH∈ℂ[u,a] and d≠ 0.Consider the(2n_0-m+2)× (2n_0-m+2)-Sylvester matrixS̃:=S(F,F_*). 
Permute the rows of S̃ as follows:place the (n_0-m+2)nd row in second position while shiftingthe ones with indices 2, …, n_0-m+1 by one position backward.This preserves up to a sign the determinant and yields a matrix T which we decompose in four blocks the diagonal ones being of size2× 2 (upper left, denoted by T^*) and(2n_0-m)× (2n_0-m) (lower right, denoted by T^**); the left lowerblock is denoted by T^0 and the right upper by T^1.An easy check shows thatT^*=( [ u 1; b_-1u b_0 ])   ,  T^**|_u=0=S(P,P_*)  ,and that the only nonzero entries of the left lower block T^0 areu and b_-1u, in positions(3,2) and (n_0-m+3,2) respectively.Divide the firstcolumn of T by u (we denote the thus obtained matrix by T^†).This does not change the zero set of T foru≠ 0. For u=0 the matrix T^† is block-upper-triangular,with diagonal blocks equal to( [11; b_-1b_0 ]) and S(P,P_*).Hence T^†=d S(P,P_*)+uH(u,a),d:= T^*|_u=0=b_0-b_-1≠ 0,H∈ℂ[u,a].Thus the zero set of Res(F,F_*) for u≠ 0sufficiently small is defined bythe equation D̃_m+uH/d=0.For u≠ 0 (resp. for u=0) the quantity T^†is a degree n_0-m+1 (resp. n_0-m) polynomial ina_k for k=n_0-m+1, …, n_0, and a degree n_0+1 (resp.n_0) polynomial in a_k for k=1, …, n_0-m, seeProposition <ref>. Hence for each k=1, …, n_0there is one simple root -1/w_k(u,a) of Res(F,F_*)that tends to infinity as u→ 0. Thus one can setRes(F,F_*)=(1+w_k(u,a)a_k)D̃^*_m, whereD̃^*_m|_u=0≡D̃_m and_a_kD̃^*_m=n_0-m (resp. n_0)for k=n_0-m+1, …, n_0 (resp. for k=1, …, n_0-m).Set E_m:=Res(F,F_*) andD̃^*_m,k:=Res(E_m,∂ E_m/∂ a_k, a_k). Then for u≠ 0 one hasD̃^*_m,k=Ω ^♭♭(a_n_0^2(n_0-m-k)D̃_m,k +uH_m,k(u,a)), where H_m,k∈ℂ[u,a].One can set u:=a_n_0^2(n_0-m-k)v to obtain the equality D̃^*_m,k=Ω ^♭♭a_n_0^2(n_0-m-k)(D̃_m,k +vH_m,k(a_n_0^2(n_0-m-k)v,a)) . Now in a neighbourhood of each a_n_0≠ 0fixed the zero set of D̃^*_m,k is defined by the equationD̃_m,k+vH_m,k(a_n_0^2(n_0-m-k)v,a)=0, i.e. by deforming the equationD̃_m,k=0. Indeed, Proposition <ref> implies that D̃_mcontains a monomial Ω ^♭a_k^n_0a_n_0^n_0-m-k, 1≤ k≤ n_0-m(resp. Ω _♭a_k^n_0-ma_n_0-m^n_0-k, n_0-m+1≤ k≤ n_0)and this is the onlymonomial containing a_k^n_0 (resp. a_k^n_0-m). Similarly, E_m containsa monomial I:=u^k+1Ω ^♮a_k^n_0+1a_n_0^n_0-m-k, 1≤ k≤ n_0-m(resp. J:=u^k+1Ω _♮a_k^n_0-m+1a_n_0-m^n_0-k,n_0-m+1≤ k≤ n_0)and this is the only monomial containing a_k^n_0+1 (resp. a_k^n_0-m+1).(The monomial I is obtained as follows: one subtracts for ν =1, …, n_0-m+1the (ν +n_0-m+1)st row multiplied by 1/b_k from the νth one to make disappearthe terms a_k in the first n_0-m+1 rows. The monomial I is the product of the termsb_ka_k in the last n_0+1 rows, the terms (1-1/b_k)u in the first k+1 rows and the termsa_n_0 in the next n_0-m-k rows. The monomial J is obtained in a similar way. One hasto assume that QHW(u)=-1.)Knowing that _a_kE_m=n_0+1 (resp. _a_kE_m=n_0-m+1) for u≠ 0and that _a_kD̃_m=n_0 (resp. _a_kD̃_m=n_0-m)one concludes that[ E_m = u^k+1Ω ^♮a_k^n_0+1a_n_0^n_0-m-k +Ω ^♭a_k^n_0a_n_0^n_0-m-k + uE^*(u,a);;(resp. E_m = u^k+1Ω _♮a_k^n_0-m+1a_n_0-m^n_0-k +Ω _♭a_k^n_0-ma_n_0-m^n_0-k +uE^**(u,a)   ) , ]where E^*, E^**∈ℂ[u,a], _a_kE^*≤ n_0,_a_kE^**≤ n_0-m. The Sylvester matrixS(E_m, ∂ E_m/∂ a_k,a_k) is (2n_0+1)× (2n_0+1) (resp.(2n_0-2m+1)× (2n_0-2m+1)). We permute its rows by placing the(n_0+1)st (resp. (n_0-m+1)st) row in second position while shifting byone position backward the second, third, …, n_0th (resp. (n_0-m)th)rows. 
The new matrix T^♭ can be block-decomposed, with diagonal blocksT^uℓ (2× 2, upper left) and T^ℓ r; the other two blocks aredenoted by T^ur and T^ℓℓ. Hence [T^uℓ = ( [ u^k+1Ω ^♮a_n_0^n_0-m-kΩ ^♭a_n_0^n_0-m-k+uX^1(u,a); ;(n_0+1)u^k+1Ω ^♮a_n_0^n_0-m-k n_0Ω ^♭a_n_0^n_0-m-k+uX^2(u,a) ]) ,;;(resp.T^uℓ = ( [ u^k+1Ω _♮a_n_0^n_0-m-kΩ _♭a_n_0^n_0-m-k+uX^3(u,a); ;(n_0-m+1)u^k+1Ω _♮a_n_0^n_0-m-k (n_0-m)Ω _♭a_n_0^n_0-m-k+uX^4(u,a) ]) ) , ]X^i∈ℂ[u,a]. One hasT^ℓ r|_u=0=S(D̃_m,∂D̃_m/∂ a_k,a_k). The blockT^ℓℓ has just two nonzero entries, in its second column, andT^ℓℓ|_u=0=0. The first of these entries is in position(3,2) and equals u^k+1Ω ^♮a_n_0^n_0-m-k (resp.u^k+1Ω _♮a_n_0^n_0-m-k). The second of them is in position(n_0+2,2) (resp. (n_0-m+2,2)) and equals(n_0+1)u^k+1Ω ^♮a_n_0^n_0-m-k (resp.(n_0-m+1)u^k+1Ω _♮a_n_0^n_0-m-k). Thus for u=0≠ a_n_0 the zero set of D̃_m,k^* is the one ofD̃_m,k. For u≠ 0 small enough this set does not change if onedivides the first column of the matrix T^♭ by u^k+1. We denote the newmatrix by T^♭ *. ObviouslyT^♭ *=-Ω ^♮Ω ^♭(a_n_0^2(n_0-m-k)D̃_m,k+uH_m,k) (resp.T^♭ *=-Ω _♮Ω _♭(a_n_0^2(n_0-m-k)D̃_m,k+uH_m,k)) for a suitably defined polynomial H_m,k whichproves the lemma. Further to distinguish between the sets Θ and M̃(see Definition <ref>)defined for the polynomials P or F we write Θ _P and M̃_Por Θ _F and M̃_F.Consider a point A∈Θ _P and a germ𝒢 of an affine space of dimension 2 which intersects Θ _Ptransversally at A. Hence there exists a compactneighbourhood 𝒩 ofA in the space 𝒜 such that the parallel translates of𝒢 which intersect Θ _P at points of 𝒩,intersect Θ _P transversally at these points. We assume that the valueof |a_n_0| remains ≥ρ in 𝒩 for some ρ >0.The restrictions of D̃_m,k to each ofthese translates are smooth analytic functions each of which has onesimple zero at itsintersection point with Θ _P; this follows from the factor B_m,kparticipating in power 1 in formula (<ref>) for n=n_0.Hence for all u∈ℂ with 0<|u|≪ρ the restriction of D̃^*_m,kto these translates are smooth analytic functions having simple zeros at theintersection points of the translates with Θ _P. But this means that the power of the factor B_m,k in formula (<ref>)applied to the polynomial F is equal to 1 on the intersection ofΘ _F with some open ball of dimension n_0+1 centered at (0,A) inthe space of the variables (u,a).Hence this power equals 1 on some Zariski open dense subset Θ ^0of Θ _F(if its complement Θ _F\Θ ^0 is nonempty, then onΘ ^0 this power might be >1).Thus the equality s_m,k=1 is justified for n=n_0+1, 2≤ k≤ n_0+1(because it is the coefficient of x^n_0-k, not of x^n_0+1-k of F, that equals a_k). Now we adapt the above reasoning to the situation, where instead of a pointA ∈Θ _P one considers a point A∈M̃_P. Each of thetranslates of 𝒢 intersects M̃_P transversally, at justone point. The restriction of D̃_m,k to the translate is asmooth analytic function having a double zero, so a priori the restriction ofD̃^*_m,k to it has either one double or two simple zeros. (Under an analytic deformation a double zero either remains such or splits into twosimple zeros.)However two simple zeros is impossible because these zeros would be twopoints of M̃_P whereas the translate contains just one point. Thusthe power 2 of the factor C_m,k is justified for someZariski open dense subset of M̃_F. Once again,this is sufficient to claim thatformula (<ref>) is valid for n=n_0+1 and for 2≤ k≤ n_0+1. Recall that by Remark <ref> we have to show that for n=n_0one has s_m,k=1, r_m,k=2. 
The first of these equalities was proved inSection <ref> (see Proposition <ref>), so there remains to provethe second one. As in the proof of Statement <ref> we setP(a,x):=x^n_0+a_1x^n_0-1+⋯ +a_n_0, a:= (a_1,… ,a_n_0). We define the polynomial P_*:=x^2+b_1a_1x+b_2a_2 tocorrespond to the case m=n_0-2(i.e. b_k≠ 0, 1, b_3-k for k=1, 2).For m=n_0-2 Theorem <ref>is proved in Section <ref>, so we assume that m<n_0-2 and we setG:=x^n_0-m-2P_*+u(b_3a_3x^n_0-m-3+⋯ +b_n_0-ma_n_0-m), whereu∈ (ℂ,0) and for i,j≥ 3, i≠ j, one has0≠ b_i≠ b_j≠ 0.Denote by G^♯ the (2n_0-m)× (2n_0-m)-matrixS(P,G). One has G^♯|_u=0=a_n_0^n_0-m-2 S(P,P_*)= a_n_0^n_0-m-2D̃_2.Hence G̃:= G^♯=a_n_0^n_0-m-2D̃_2+uH^♯(u,a),H^♯∈ℂ[u,a].All nonzero entries of the matrix G^♯ in the intersection ofits last n_0-m-2 columnsand rows are 0 for u=0. One can develop n_0-m-2 timesG^♯|_u=0 w.r.t. its last column;each time there is a single nonzero entry in it which equals a_n_0. The matrix obtained from G^♯|_u=0 by deletingits last n_0-m-2 columns and the rows with indices m+2, …, n_0-1is precisely S(P,P_*).One can observe that G^♯ and G^♯|_u=0 areboth degree n_0 polynomials in a_1. Assume that a_n_0 belongs to aclosed disk on which one has |a_n_0|≥ρ ^♭>0. Suppose that|u|≪ρ ^♭, so one can consider the quantityD̃_2+(u/a_n_0^n_0-m-2)H^♯(u,a) as a deformation of D̃_2. To this end we set u:=a_n_0^n_0-m-2v,v∈ (ℂ,0), see Remark <ref>.Now to prove Statement <ref> one has just to repeatthe reasoning from the last paragraph of the proof of Statement <ref>. 40 AF A. Albouy and Y. Fu, Some Remarks About Descartes’ Rule of Signs, Elemente der Mathematik 69 (2014), 186-194. CLO W. Castryck, R. Laterveer and M. Ounaïes,Constraints on counterexamples to the Casas-Alvero conjecture and averification in degree 12, Mathematics of ComputationVol. 83, No. 290, (2014) 3017-3037.EHHS H. Ezzaldine, K. Houssam, M. Hossein and M. Sarrage,Overdetermined strata for degree 5 hyperbolic polynomials. Vietnam J. Math.43, no. 1 (2015) 139-150. EK H. Ezzaldine and V.P. Kostov,Even and old overdetermined strata for degree 6 hyperbolic polynomials.Serdica Math. J. 34, no. 4 (2008) 743-770.FKS J. Forsgård, V.P. Kostov and B.Z. Shapiro,Could René Descarteshave known this?, Experimental Mathematics vol. 24, issue 4 (2015) 438-448. Ko1 V.P. Kostov, Topics on hyperbolic polynomials in one variable.Panoramas et Synthèses 33 (2011), vi + 141 p. SMF. Ko2 V.P. Kostov, Some facts about discriminants,Comptes Rendus Acad. Bulg. Sci. (to appear). Ko3 V.P. Kostov, A property of discriminants, arXiv:1701.02912. Ko4 V.P. Kostov, On polynomial-like functions, Bulletin desSciences Mathématiques 129, No. 9 (2005) 775-781. Ko5 V.P. Kostov, Root arrangements of hyperbolicpolynomial-like functions, Revista MatemáticaComplutense vol. 19, No. 1 (2006) 197-225. Ko6 V.P. Kostov, On hyperbolic polynomial-like functionsand their derivatives,Proc. Royal Soc. Edinb. 137A (2007) 819-845. Ko7 V.P. Kostov, On root arrangements for hyperbolic polynomial-likefunctions and their derivatives, Bulletin des SciencesMathématiques 131 (2007) 477-492, doi:10.1016/j.bulsci.2006.12.004. KoSh V.P. Kostov and B.Z. Shapiro, On arrangement of rootsfor a real hyperbolicpolynomial and its derivatives, Bulletin des SciencesMathématiques 126, No. 1 (2002) 45-60.Me I. Méguerditchian, Géométrie duDiscriminant Réel et des Polynômes Hyperboliques, Thèse de Doctorat(soutenue le 24 janvier 1991 à Rennes). Y S. Yakubovich, The validity of the Casas-Alvero conjecture,arXiv:1504.00274v1 [math.CA] 1 April 2015. Y1 S. 
Yakubovich, On some properties of the Abel-Goncharovpolynomials and the Casas-Alvero problem,Integral Transforms Spec. Funct. 27, no. 8 (2016) 599-610. Y2 S. Yakubovich, Polynomial problems of the Casas-Alvero type,J. Class. Anal. 4, no. 2 (2014) 97-120.
http://arxiv.org/abs/1702.08216v1
{ "authors": [ "Vladimir Petrov Kostov" ], "categories": [ "math.CA" ], "primary_category": "math.CA", "published": "20170227100903", "title": "On higher-order discriminants" }
In this paper two types of multigrid methods, namely the Rayleigh quotient iteration and the inverse iteration with fixed shift, are developed for solving the Maxwell eigenvalue problem with discontinuous relative magnetic permeability and electric permittivity. With the aid of the mixed form of the source problem associated with the eigenvalue problem, we prove the uniform convergence of the discrete solution operator to the solution operator in 𝐋^2(Ω) using the discrete compactness of the edge element space. We then prove asymptotically optimal error estimates for both multigrid methods. Numerical experiments confirm our theoretical analysis.

Keywords: Maxwell eigenvalue problem, multigrid method, edge element, error analysis

AMS subject classifications: 65N25, 65N30

§ INTRODUCTION

The Maxwell eigenvalue problem is of basic importance in designing resonant structures for advanced waveguides. Up to now, the communities of numerical mathematics and computational electromagnetics have developed plenty of numerical methods for solving this problem (see, e.g., <cit.>). The difficulty in solving the eigenvalue problem numerically lies in imposing the divergence-free constraint. For this purpose, nodal finite element methods utilize the filtered, parameterized and mixed approaches to find the true eigenvalues <cit.>. Researchers in computational electromagnetics usually adopt edge finite elements owing to the tangential continuity of the electric field <cit.>. With edge finite element methods, when one only aims at the nonzero eigenvalues, the divergence-free constraint can be dropped from the weak form and is satisfied naturally (see <cit.>). But this introduces spurious zero eigenvalues. Since the eigenspace corresponding to zero is infinite-dimensional, usually the finer the mesh is, the more spurious eigenvalues there are. Nevertheless, with this form there is no difficulty in computing eigenvalues on a very coarse mesh. Hence the work of <cit.> subtly applies this weak form to a two-grid method for the Maxwell eigenvalue problem: one first solves a Maxwell eigenvalue problem on a coarse mesh and then solves a linear Maxwell equation on a fine mesh. Another approach is the mixed form of saddle point type, in which a Lagrange multiplier is introduced to impose the divergence-free constraint (see <cit.>). A remarkable feature of the mixed form is its equivalence to the weak form in <cit.> for nonzero eigenvalues. The mixed form is well known to introduce no spurious eigenvalues. However, it is not an easy task to solve it on a fine mesh (see <cit.>).

Multigrid methods for solving eigenvalue problems originated from the idea of the two-grid method proposed by <cit.>. Afterwards, this work was further developed by <cit.>.
Among them, the more recent work <cit.> makes a relatively systematic study of multigrid methods based on shifted inverse iteration, especially in their adaptive fashion.

Inspired by the above works, this paper is devoted to developing multigrid methods for solving the Maxwell eigenvalue problem. We first use the mixed form to solve the eigenvalue problem on a coarse mesh and then solve a series of Maxwell equations on finer and finer meshes without using the mixed form. Roughly speaking, we develop the two-grid method in <cit.> into a multigrid method in which only nonzero eigenvalues are targeted. We prefer to use the mixed form on the coarse mesh, instead of the form in <cit.>, to capture the information of the true eigenvalues. One reason is that the mixed form includes the physical zero eigenvalues and rules out the spurious zero ones simultaneously. Using it, the physical zero eigenvalues can be captured on a very coarse mesh, which is necessary when the resonant cavity has disconnected boundaries (see, e.g., <cit.>). Another reason is that the mixed discretization of saddle point type is not difficult to solve on a coarse mesh.

In this paper, we study two types of multigrid methods based on shifted inverse iteration: the Rayleigh quotient iteration and the inverse iteration with fixed shift. The former is a well-known method for computing matrix eigenvalues, but its coefficient matrix is nearly singular and therefore difficult to solve to some extent. To overcome this difficulty, the latter first performs the Rayleigh quotient iteration for the first few steps and then fixes the shift in the following steps at the estimated eigenvalue obtained by the former. Referring to the error analysis framework in <cit.> and using the discrete compactness property of the edge element space, we first prove the uniform convergence of the discrete solution operator to the solution operator in 𝐋^2(Ω) and then the error estimates of eigenvalues and eigenfunctions for the mixed discretization; we then adopt the analysis tool in <cit.>, which differs from the one in <cit.>, and prove asymptotically optimal error estimates for both multigrid methods. In addition, this paper covers the theoretical analysis for the case of discontinuous magnetic permeability μ and electric permittivity ϵ in complex matrix form, which is of practical importance because resonant cavities are frequently filled with several different dielectric materials. We note that our multigrid methods and theoretical results are valid not only for the lowest order edge elements but also for high order ones. More importantly, based on the work of this paper, once an a posteriori error indicator for the eigenpair is given, one can further develop adaptive algorithms of shifted inverse iteration type for this problem. In the last section of this paper, we present several numerical examples to validate the efficiency of our methods in different cases.

Throughout this paper, we use the symbol a ≲ b to mean that a ≤ Cb, where C denotes a positive constant that is independent of mesh parameters and the number of iterations and may not be the same in different places.

§ PRELIMINARIES

Consider the Maxwell eigenvalue problem in the electric field:

curl(μ^-1curl 𝐮)=ω^2ϵ𝐮   in Ω,
div(ϵ𝐮)=0    in Ω,
𝐮_t=0    on ∂Ω,

where Ω is a bounded Lipschitz polyhedral domain in ℝ^d (d=2,3) and 𝐮_t is the tangential trace of 𝐮. The coefficient μ is the magnetic permeability, and ϵ is the electric permittivity, which is piecewise smooth. In this paper, λ=ω^2, with ω the angular frequency, is defined as the eigenvalue of this problem.
We assume that μ and ϵ are two positive definite Hermitian matrices such that μ^-1,ϵ∈ (L^∞(Ω))^d× d and that there exist two positive numbers γ,β satisfying

ξ·μ^-1ξ≥γ ξ·ξ,   ξ·ϵξ≥β ξ·ξ,   ∀ 0≠ξ∈ℂ^d.

§.§ Some weak forms

Let

𝐇_0(curl,Ω)={𝐮∈𝐋^2(Ω): curl 𝐮∈𝐋^2(Ω),  𝐮_t|_∂Ω=0},

equipped with the norm ‖𝐮‖_curl:=‖curl 𝐮‖_0+‖𝐮‖_0. Throughout this paper, ‖·‖_0 and ‖·‖_0,ϵ denote the norms in 𝐋^2(Ω) induced by the inner products (·,·) and (ϵ·,·), respectively. Define the divergence-free space

𝐗:={𝐮∈𝐇_0(curl,Ω): div(ϵ𝐮)=0}.

The standard weak form of the Maxwell eigenvalue problem (<ref>)-(<ref>) is as follows: Find (λ,𝐮)∈ℝ×𝐗 with 𝐮≠0 such that

a(𝐮,𝐯)=λ(ϵ𝐮,𝐯),   ∀𝐯∈𝐗,

where a(𝐮,𝐯)=(μ^-1curl 𝐮,curl 𝐯). Denote ‖𝐯‖_a:=√(a(𝐯,𝐯)) for all 𝐯∈𝐇_0(curl,Ω).

As the divergence-free space 𝐗 in (<ref>) is difficult to discretize, we alternatively solve the eigenvalue problem (<ref>)-(<ref>) in the larger space 𝐇_0(curl,Ω), that is: Find (λ,𝐮)∈ℝ×𝐇_0(curl,Ω) with 𝐮≠0 such that

a(𝐮,𝐯)=λ(ϵ𝐮,𝐯),   ∀𝐯∈𝐇_0(curl,Ω).

Note that when λ≠0, (<ref>) and (<ref>) are equivalent, since (<ref>) implies that the divergence-free condition holds for λ≠0 (see, e.g., <cit.>). According to (<ref>), we have

√(γ)‖curl 𝐮‖_0≤‖𝐮‖_a.

In order to study the eigenvalue problem in 𝐇_0(curl,Ω) we need the auxiliary bilinear form

A(𝐰,𝐯)=a(𝐰,𝐯)+ (γ/β)(ϵ𝐰,𝐯),

which defines an equivalent norm ‖·‖_A=√(A(·,·)) on 𝐇_0(curl,Ω). By the Lax-Milgram theorem we can define the solution operator T:𝐋^2(Ω)→𝐗 by

A(T𝐟,𝐯)= (ϵ𝐟,𝐯),   ∀𝐯∈𝐗.

Then the eigenvalue problem (<ref>) has the operator form T𝐮=λ̂^-1𝐮 with λ̂=λ+γ/β.

The following mixed weak form of saddle point type can be found in <cit.>: Find (λ,𝐮,σ)∈ℝ×𝐇_0(curl,Ω)× H^1_0(Ω) with 𝐮≠0 such that

a(𝐮,𝐯)+b(𝐯,σ) = λ(ϵ𝐮,𝐯),   ∀𝐯∈𝐇_0(curl,Ω),
b(𝐮,p) = 0,   ∀ p∈ H^1_0(Ω),

where b(𝐯, p):=(ϵ𝐯,∇ p) for any 𝐯∈𝐇_0(curl,Ω), p∈ H^1_0(Ω). We introduce the corresponding mixed equation: Find T̂𝐟∈𝐇_0(curl,Ω) and Ŝ𝐟∈ H^1_0(Ω) for 𝐟∈𝐋^2(Ω) such that

A(T̂𝐟,𝐯)+b(𝐯, Ŝ𝐟) =(ϵ𝐟,𝐯),   ∀𝐯∈𝐇_0(curl,Ω),
b(T̂𝐟,p) = 0,   ∀ p∈ H^1_0(Ω);

the following LBB condition can be verified by taking 𝐰=∇ v:

sup_𝐰∈𝐇_0(curl,Ω) |b(𝐰,v)|/‖𝐰‖_curl ≥ β |v|_1,   ∀ v∈ H^1_0(Ω).

This yields the existence and uniqueness of the linear bounded operators T̂ and Ŝ (see <cit.>). Due to the Helmholtz decomposition 𝐇_0(curl,Ω)=∇ H_0^1(Ω)⊕𝐗, it is easy to see that R(T̂)⊂𝐗 and that Ŝ𝐟=0, T̂𝐟=T𝐟 for any 𝐟∈𝐗. Hence T̂ and T share the same eigenpairs. More importantly, the operator T̂: 𝐋^2(Ω)→𝐋^2(Ω) is self-adjoint. In fact, for all 𝐯,𝐰∈𝐋^2(Ω),

(ϵ𝐰, T̂𝐯)=A(T̂𝐰, T̂𝐯) =A(T̂𝐯, T̂𝐰) =(ϵ𝐯, T̂𝐰)= (ϵT̂𝐰,𝐯).

Note that T̂ is compact as an operator from 𝐋^2(Ω) to 𝐋^2(Ω) and from 𝐗 to 𝐗, since 𝐗↪𝐋^2(Ω) compactly (see Corollary 4.3 in <cit.>).

§.§ Edge element discretizations and error estimates

We will consider the edge element approximations based on the weak forms (<ref>), (<ref>) and (<ref>)-(<ref>). Let π_h be a shape-regular triangulation of Ω composed of elements κ. Here we restrict our attention to edge elements on tetrahedra, because the argument for edge elements on hexahedra is the same. The edge element of the first family <cit.> of order k (k≥0) generates the space

𝐕_h={𝐮_h∈𝐇_0(curl,Ω): 𝐮_h|_κ∈ [P_κ(k)]^d⊕𝐱× [P̃_κ(k)]^d},

where P_κ(k) is the space of polynomials of degree less than or equal to k on κ, P̃_κ(k) is the space of homogeneous polynomials of degree k on κ, and 𝐱 =(x_1,⋯,x_d)^T. We also introduce the discrete divergence-free space

𝐗_h={𝐮_h∈𝐕_h: (ϵ𝐮_h,∇ p)=0,  ∀ p ∈ U_h},

where U_h is the standard Lagrangian finite element space vanishing on ∂Ω of total degree less than or equal to k+1, so that ∇ U_h⊂𝐕_h.
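To make these spaces concrete at the lowest order, here is a minimal legacy-FEniCS (dolfin) sketch, our own illustration rather than the solver used in the paper, which assembles the mixed discretization of (<ref>)-(<ref>) in 𝐕_h× U_h (stated precisely below) on the unit cube with μ=ϵ=I; the mesh size, shift and solver options are placeholder choices, and the API names are those of legacy dolfin:

from dolfin import *

mesh = UnitCubeMesh(8, 8, 8)                          # placeholder mesh

Vel = FiniteElement("N1curl", mesh.ufl_cell(), 1)     # edge elements, lowest order
Qel = FiniteElement("Lagrange", mesh.ufl_cell(), 1)   # Lagrange multiplier space
W = FunctionSpace(mesh, MixedElement([Vel, Qel]))

(u, sigma) = TrialFunctions(W)
(v, p) = TestFunctions(W)

# a(u,v) + b(v,sigma) + b(u,p) on the left; the (eps u, v) mass form on the right
lhs = inner(curl(u), curl(v))*dx + inner(v, grad(sigma))*dx + inner(u, grad(p))*dx
rhs = inner(u, v)*dx              # the sigma-block of the mass matrix is zero

# u_t = 0 on the boundary: constrain the tangential (edge) degrees of freedom
bc = DirichletBC(W.sub(0), Constant((0.0, 0.0, 0.0)), "on_boundary")

A, M = PETScMatrix(), PETScMatrix()
assemble(lhs, tensor=A); assemble(rhs, tensor=M)
bc.apply(A); bc.zero(M)           # constrained rows get infinite eigenvalues

solver = SLEPcEigenSolver(A, M)
solver.parameters["problem_type"] = "gen_hermitian"
solver.parameters["spectrum"] = "target magnitude"
solver.parameters["spectral_transform"] = "shift-and-invert"
solver.parameters["spectral_shift"] = 5.0   # rough guess near a wanted eigenvalue
solver.solve(6)
print([solver.get_eigenpair(i)[0] for i in range(solver.get_number_converged())])

The singular mass block of the multiplier produces spurious infinite eigenvalues, but the shift-and-invert transform maps them to zero and away from the target, which is one practical reason this saddle point form is usable on a coarse mesh.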
The standard finite element discretization of (<ref>) is stated as: Find (λ_h,𝐮_h)∈ℝ×𝐗_h with 𝐮_h≠0 such that

a(𝐮_h,𝐯_h)=λ_h(ϵ𝐮_h,𝐯_h),   ∀𝐯_h∈𝐗_h.

For nonzero λ_h it is also equivalent to the following form (see <cit.>): Find (λ_h,𝐮_h)∈ℝ×𝐕_h with 𝐮_h≠0 such that

a(𝐮_h,𝐯_h)=λ_h(ϵ𝐮_h,𝐯_h),   ∀𝐯_h∈𝐕_h.

In order to investigate the convergence of the edge element discretization (<ref>), we have to study the convergence of the edge element discretization of the associated Maxwell source problem. By the Lax-Milgram theorem we can define the discrete solution operator T_h:𝐋^2(Ω)→𝐗_h by

A(T_h𝐟,𝐯)= (ϵ𝐟,𝐯),   ∀𝐯∈𝐗_h.

Then the eigenvalue problem (<ref>) has the operator form T_h𝐮_h=λ̂_h^-1𝐮_h with λ̂_h=λ_h+γ/β.

Introduce the discrete form of (<ref>)-(<ref>): Find (λ_h,𝐮_h,σ_h)∈ℝ×𝐕_h× U_h with 𝐮_h≠0 such that

a(𝐮_h,𝐯)+b(𝐯,σ_h) = λ_h(ϵ𝐮_h,𝐯),   ∀𝐯∈𝐕_h,
b(𝐮_h,p) = 0,   ∀ p∈ U_h.

Introduce the corresponding operators: Find T̂_h𝐟∈𝐕_h and Ŝ_h𝐟∈ U_h for any 𝐟∈𝐋^2(Ω) such that

A(T̂_h𝐟,𝐯)+b(𝐯,Ŝ_h𝐟) =(ϵ𝐟,𝐯),   ∀𝐯∈𝐕_h,
b(T̂_h𝐟,p) = 0,   ∀ p∈ U_h.

Due to the discrete Helmholtz decomposition 𝐕_h=∇ U_h⊕𝐗_h, it is easy to see that R(T̂_h)⊂𝐗_h and that Ŝ_h𝐟=0, T̂_h𝐟=T_h𝐟 for any 𝐟∈𝐗+𝐗_h. Hence T̂_h and T_h share the same eigenpairs. Similarly to (<ref>)-(<ref>), one can verify the corresponding LBB condition for the discrete mixed form (<ref>)-(<ref>). According to the theory of mixed finite elements (see <cit.>), we get for all 𝐟∈𝐋^2(Ω),

‖T̂_h𝐟‖_A+ ‖T̂𝐟‖_A+|Ŝ_h𝐟|_1+|Ŝ𝐟|_1≲ ‖𝐟‖_0,ϵ,
‖T̂𝐟- T̂_h𝐟‖_A+|Ŝ𝐟-Ŝ_h𝐟|_1≲ (inf_𝐯_h∈𝐕_h‖T̂𝐟-𝐯_h‖_curl+inf_v_h∈ U_h|Ŝ𝐟-v_h|_1).

Similarly to (<ref>) we can prove that T̂_h:𝐋^2(Ω)→𝐋^2(Ω) is self-adjoint in the sense of (ϵ·,·). In fact, for all 𝐰,𝐯∈𝐋^2(Ω),

(ϵ𝐰, T̂_h𝐯)=A(T̂_h𝐰, T̂_h𝐯)=A(T̂_h𝐯, T̂_h𝐰)= (ϵ𝐯, T̂_h𝐰)= (ϵT̂_h𝐰,𝐯).

Discrete compactness is a very interesting and important property of edge elements because it is intimately related to collective compactness. Kikuchi <cit.> first successfully applied this property to the numerical analysis of electromagnetic problems, and more recently it was further developed by <cit.>, among others. The following lemma, which states the discrete compactness of 𝐗_h into 𝐋^2(Ω), is a direct citation of Theorem 4.9 in <cit.>.

(Discrete compactness property) Any sequence {𝐯_h}_h>0 with 𝐯_h∈𝐗_h that is uniformly bounded in 𝐇(curl,Ω) contains a subsequence that converges strongly in 𝐋^2(Ω).

In the remainder of this subsection, we prove the error estimates for the discrete forms (<ref>)-(<ref>), (<ref>) or (<ref>) with λ_h≠0. The authors of <cit.> built a general analysis framework for the a priori error estimates of the mixed form (see Theorem 2.2 and Lemma 2.3 therein). Although we cannot directly apply their theoretical results to the mixed discretization (<ref>)-(<ref>), we can use their proof idea to derive the following Lemma 2.2 and Theorem 2.2. The following uniform convergence provides us with the possibility to use the spectral approximation theory in <cit.>.

There holds the uniform convergence

‖T̂-T̂_h‖_𝐋^2(Ω)→0,   h→0.

Since ∪_h>0𝐕_h and ∪_h>0U_h are dense in 𝐇_0(curl,Ω) and H^1_0(Ω), respectively, we deduce from (<ref>) that for any 𝐟∈𝐋^2(Ω)

‖T̂𝐟- T̂_h𝐟‖_A≲ (inf_𝐯_h∈𝐕_h‖T̂𝐟-𝐯_h‖_curl+inf_v_h∈ U_h|Ŝ𝐟-v_h|_1)→ 0.

That is, T̂_h converges to T̂ pointwise. Since T̂, T̂_h : 𝐋^2(Ω)→𝐇_0(curl,Ω) are linear and bounded uniformly with respect to h, ∪_h>0(T̂- T̂_h)B is a bounded set in 𝐇_0(curl,Ω), where B is the unit ball in 𝐋^2(Ω).
Since 𝐗↪𝐋^2(Ω) compactly and by the discrete compactness property of 𝐗_h in Lemma 2.1, we know that ∪_h>0(T̂- T̂_h)B is a relatively compact set in 𝐋^2(Ω), which implies the collectively compact convergence T̂_h→T̂. Noting that T̂, T̂_h: 𝐋^2(Ω)→𝐋^2(Ω) are self-adjoint, due to Proposition 3.7 or Table 3.1 in <cit.> we get

‖T̂-T̂_h‖_𝐋^2(Ω)→0,   h→0.

This ends the proof.

Prior to proving the error estimates for the edge element discretizations, we define some notation. Let λ be the kth eigenvalue of (<ref>) or (<ref>)-(<ref>), of multiplicity q. Let λ_j,h (j=k,k+1,⋯,k+q-1) be the discrete eigenvalues that converge to the eigenvalue λ=λ_k=⋯=λ_k+q-1. Here and hereafter we use M(λ) to denote the space spanned by all eigenfunctions corresponding to the eigenvalue λ, and M_h(λ) to denote the direct sum of the eigenspaces corresponding to the eigenvalues λ_j,h (j=k,k+1,⋯,k+q-1). For convenience, hereafter we denote λ̂_j=λ_j+γ/β and λ̂_j,h=λ_j,h+γ/β.

Now we introduce the following small quantity:

δ_h(λ)=sup_𝐮∈ M(λ),‖𝐮‖_A=1 inf_𝐯∈𝐕_h‖𝐮-𝐯‖_A.

Thanks to (<ref>) and (<ref>) we have

λ̂‖(T̂_h- T̂)|_M(λ)‖_A≲λ̂ sup_𝐮∈ M(λ),‖𝐮‖_A=1 inf_𝐯_h∈𝐕_h‖T̂𝐮-𝐯_h‖_curl≲γ^-1/2δ_h(λ).

The error estimates of edge elements for the Maxwell eigenvalue problem have been obtained in, e.g., <cit.>. Here we would like to use the quantity δ_h(λ) to characterize the error for eigenpairs. From the spectral approximation, we derive the a priori error estimates for the discrete eigenvalue problem (<ref>) with λ_h≠0, (<ref>) or (<ref>)-(<ref>).

Let λ be an eigenvalue of (<ref>) or (<ref>)-(<ref>) and let λ_h be the discrete eigenvalue of (<ref>) or (<ref>)-(<ref>) converging to λ. There exists h_0 > 0 such that if h ≤ h_0, then for any eigenfunction 𝐮_h corresponding to λ_h with ‖𝐮_h‖_A = 1 there exists 𝐮∈ M(λ) such that

‖𝐮-𝐮_h‖_A≤ C_3δ_h(λ),

and for any 𝐮∈ M(λ) with ‖𝐮‖_A=1 there exists 𝐮_h∈ M_h(λ) such that

‖𝐮-𝐮_h‖_A≤ C_3δ_h(λ),

where the positive constant C_3 is independent of mesh parameters.

We take λ=λ_k. Suppose 𝐮_h is an eigenfunction of (<ref>)-(<ref>) corresponding to λ_h satisfying ‖𝐮_h‖_A=√(λ̂_h)‖𝐮_h‖_0,ϵ = 1. Then according to Theorems 7.1 and 7.3 in <cit.> and Lemma 2.2 there exists 𝐮∈ M(λ) satisfying

‖𝐮_h-𝐮‖_0,ϵ≲‖(T̂- T̂_h)|_M(λ)‖_0,ϵ,
|λ_j,h-λ|≲‖(T̂- T̂_h)|_M(λ)‖_0,ϵ   for j=k,⋯,k+q-1.

By a simple calculation, we deduce

| ‖𝐮_h-𝐮‖_A- λ̂‖(T̂- T̂_h)𝐮‖_A | = | ‖λ̂_hT̂_h𝐮_h-λ̂T̂𝐮‖_A- λ̂‖(T̂- T̂_h)𝐮‖_A |
≤‖T̂_h(λ̂_h𝐮_h-λ̂𝐮)‖_A ≲ ‖λ̂_h𝐮_h-λ̂𝐮‖_0,ϵ ≲ |λ̂_h-λ̂| ‖𝐮_h‖_0,ϵ+λ̂‖𝐮_h- 𝐮‖_0,ϵ.

Since (<ref>) implies ‖(T̂- T̂_h)|_M(λ)‖_0,ϵ≲δ_h(λ), this together with (<ref>)-(<ref>) yields (<ref>).

Conversely, suppose 𝐮 is an eigenfunction of (<ref>)-(<ref>) corresponding to λ satisfying ‖𝐮‖_A =√(λ̂)‖𝐮‖_0,ϵ =1. Then according to Theorem 7.1 in <cit.> and Lemma 2.2 there exists 𝐮_h∈ M_h(λ) satisfying ‖𝐮_h-𝐮‖_0,ϵ≲‖(T̂- T̂_h)|_M(λ)‖_0,ϵ. Let 𝐮_h=∑_j=k^k+q-1𝐮_j,h, where 𝐮_j,h is the eigenfunction corresponding to λ_j,h such that {𝐮_j,h}_j=k^k+q-1 constitutes an orthogonal basis of M_h(λ) with respect to (ϵ·,·). Then

| ‖𝐮_h-𝐮‖_A- λ̂‖(T̂- T̂_h)𝐮‖_A | ≤‖𝐮_h- T̂_h(λ̂𝐮)‖_A
≲ ‖T̂_h(∑_j=k^k+q-1λ̂_j,h𝐮_j,h-λ̂𝐮)‖_A
≲ ‖∑_j=k^k+q-1 (λ̂_j,h-λ̂_h)𝐮_j,h+λ̂_h𝐮_h-λ̂𝐮‖_0,ϵ.

Since ‖(T̂- T̂_h)|_M(λ)‖_0,ϵ≲δ_h(λ), this together with (<ref>)-(<ref>) yields (<ref>).

Remark 2.1. Based on the estimate (<ref>), one can naturally obtain the optimal convergence order 𝒪(δ_h^2(λ)) for |λ_h-λ| using the Rayleigh quotient relation (<ref>) in the following section. In addition, note that when λ≠0 in Theorem 2.2, the estimate (<ref>) implies that ‖𝐮_h‖_a converges to ‖𝐮‖_a=√(λ)‖𝐮‖_0,ϵ>0.
Here we introduce u̱_h= u̱_h/u̱_h_a then u̱_h= u̱_h/u̱_h_A and (<ref>) givesu̱-u̱_h_A≤δ_h(λ).For simplicity of notation, we still use the same C_3 and u̱ in the above estimate as in (<ref>)-(<ref>).Remark 2.2.When Ω is a Lipschitz polyhedron and ϵ,μ are properly smooth, it is known that ⊂(̱H^σ(Ω))^3 (σ∈(1/2,1]) (see <cit.>) andδ_h(λ)≲ h^σ. In particular, if ⊂{v̱∈H̱^s(Ω):curlv̱∈H̱^s(Ω)} (1≤ s≤ k+1) then δ_h(λ)≲ h^s (seeTheorem 5.41 in <cit.>). § MULTIGRID SCHEMES BASED ON SHIFTED INVERSE ITERATION§.§ Multigrid SchemesIn practical computation, the information on the physical zero eigenvalues can be easily captured on a coarse mesh Husingthe mixed discretization (<ref>)-(<ref>). In this section we shall present our multigrid methods for solving nonzero Maxwell eigenvalue. The following schemes are proposed by<cit.>. Note that we assume in the following schemes the numerical eigenvalue λ_H approximates the nonzero eigenvalue λ.Scheme 3.1.   Rayleigh quotient iteration.Given the maximum number of iterative times l.Step 1. Solve the eigenvalue problem (<ref>)-(<ref>) on coarse finite element space V̱_H× U_H: find (λ_H,𝐮_H,σ_H)∈ R×V̱_H× U_H, 𝐮_H_a=1 such thata(𝐮_H,𝐯)+b(𝐯,σ_H) = λ_H(ϵ𝐮_H,𝐯),  ∀𝐯∈V̱_H,b(𝐮_H,p) = 0,  ∀ p∈ U_H.Step 2. 𝐮^h_0⇐𝐮_H, λ^h_0⇐λ_H, i⇐ 1.Step 3. Solve an equation on V̱_h_i: find (𝐮',σ')∈V̱_h_i such thata(𝐮',𝐯)-λ^h_i-1(ϵ𝐮',𝐯) = (ϵ𝐮^h_i-1,𝐯),  ∀𝐯∈V̱_h_i.Set𝐮^h_i=𝐮'/𝐮'_a .Step 4. Compute the Rayleigh quotientλ^h_i=a(𝐮^h_i,𝐮^h_i)/(ϵ𝐮^h_i,𝐮^h_i).Step 5. If i=l, then output (λ^h_i,𝐮^h_i), stop; else, i⇐ i+1, and return to step 3. In Step 3 of the above Scheme, when the shift λ^h_l-1 is close to the exact eigenvalue enough, the coefficient matrix of linear equation is nearly singular. Hence the following algorithm gives a natural way of handling this problem.Scheme 3.2.   Inverse iteration with fixed shift.Given the maximum number of iterative times l and i0.Step 1-Step 4. The same as Step 1-Step 4 of Scheme 3.1. Step 5. If i>i0 then i⇐ i+1 and return to Step 6; else i⇐ i+1 and return to Step 3.Step 6. Solve an equation on V̱_h_i: find (𝐮',σ')∈V̱_h_i such thata(𝐮',𝐯)-λ^h_i0(ϵ𝐮',𝐯) = (ϵ𝐮^h_i-1,𝐯),  ∀𝐯∈V̱_h_i.Set𝐮^h_i=𝐮'/𝐮'_a.Step 7. Compute the Rayleigh quotientλ^h_i=a(𝐮^h_i,𝐮^h_i)/(ϵ𝐮^h_i,𝐮^h_i).Step 8. If i=l, then output (λ^h_i,𝐮^h_i), stop; else, i⇐ i+1, and return to step 6. Remark 3.1. The mixed discretization (<ref>)-(<ref>) was adopted by the literatures<cit.>. As is proved in Theorem 2.2, using this discretizationwe can compute the Maxwell eigenvalueswithout introducingspurious eigenvalues. However, it is also a saddle point problem that is difficultto solve on a fine mesh (see <cit.>). Therefore, the multigrid schemes can properly overcome the difficulty sincewe only solve (<ref>)-(<ref>) on a coarse mesh, as shown in step 1 of Schemes 3.1 and 3.2. Moreover, in order to further improve theefficiency of solving the equation in Steps 3 and 6 in Schemes 3.1 and 3.2 the HX preconditioner in<cit.> is a good choice (see <cit.>).§.§ Error Analysis Inthis subsection, we aim to prove the error estimates for Schemes 3.1 and 3.2. We shall analyze the constants in the error estimates are independent of mesh parameters and iterative times l. First of all, we give two useful lemmas.For any nonzero 𝐮,𝐯∈0̧, there hold𝐮/𝐮_A-𝐯/𝐯_A_A≤2𝐮-𝐯_A/𝐮_A,    𝐮/𝐮_A-𝐯/𝐯_A_A≤2𝐮-𝐯_A/𝐯_A. 
See <cit.>.Let (λ,u̱) be an eigenpair of (<ref>) or of (<ref>) with λ0, then for any v ∈0̧\{0}, the Rayleigh quotient R(v̱)=a(v̱,v̱)/v̱_0,ϵ^2 satisfiesR(v̱)-λ=v̱-u̱_a^2/v̱_0,ϵ^2-λv̱-u̱_0,ϵ^2/v̱_0,ϵ^2.See pp.699 of <cit.>.The basic relation in Lemma 3.2 cannot be directly applied to our theoretical analysis, so in the followingwe shall further simplify the estimate (<ref>).Let C= (γ/β)^1/2 then according to the definition of A(·,·),v̱_0,ϵ≤C^-1v̱_A,   ∀v̱∈0̧.If u̱∈ M(λ), v̱_h∈V̱_h, v̱_h_A=1 and v̱_h-u̱_A≤C(4√(λ))^-1, then by Lemma 3.1 we deducev̱_h-u̱/u̱_A_A≤ 2v̱_h-u̱_A≤C(2√(λ))^-1, v̱_h-u̱/u̱_A_0,ϵ≤C^-1v̱_h-u̱/u̱_A_A≤ (2√(λ))^-1,which together with u̱_A=√(λ)u̱_0,ϵyieldsv̱_h_0,ϵ≥u̱_0,ϵ/u̱_A-v̱_h-u̱/u̱_A_0,ϵ≥ (2√(λ))^-1.Hence, from Lemma <ref> we get the following estimate|R(v̱_h)-λ|≤ C_4v̱_h-u̱_A^2,whereC_4=4λ(1+λC^-2).Define theoperators T: ł20̧ and T_h:ł2 →𝐕_hasA(Tf̱,v̱)=(ϵf̱,v̱), ∀v̱∈0̧, A(T_h𝐟,𝐯_h)= (ϵ𝐟,𝐯_h), ∀𝐯_h∈𝐕_h. The following lemma turns our attention from the spectrumof T and T_h to that of Tand T_h. T, T and T share theeigenvalues greater than γ/βand the associated eigenfunctions.The same conclusion is valid for T_h, T_h and T_h.Moreover, T|_= T|_ =T|_ andT_h|_= T_h|_=T_h|_. The assertionsregarding the relations among T, T, T_h and T_h have been described in section 2. Next we shall prove the relations between T andT and between T_h and T_h. By the definition of T and T, the eigenpair (λ,u̱) of T satisfies A(𝐮,𝐯)= λ(ϵ𝐮,𝐯) for all 𝐯∈𝐗 and the eigenpair (λ,u̱) of T satisfies A(𝐮,𝐯)= λ(ϵ𝐮,𝐯) for all 𝐯∈0̧. Note that the above two weak forms are equivalent when λ>γ/β (since this implies the eigenfunction u̱ of the latter satisfies divergence-free constraint). Hence T and T share theeigenvaluesλ>γ/βand the associated eigenfunctions. Similarly one can check T_h and T_h share theeigenvaluesλ_h>γ/βand the associated eigenfunctions.Thanks to Helmholtz decomposition 0̧=∇H_0^1(Ω)⊕𝐗 and (<ref>), we also have for any f̱∈A(T𝐟,𝐯)=(ϵ𝐟,𝐯), ∀𝐯∈0̧.This together with (<ref>) yields T|_= T|_.Thanks to discrete Helmholtz decomposition 𝐕_h=∇ U_h⊕𝐗_h and (<ref>), we also have for any f̱∈_h+A(T_h𝐟,𝐯)= (ϵ𝐟,𝐯), ∀𝐯∈𝐕_h.This together with (<ref>) yields T_h|_= T_h|_.Denote dist(𝐰,W)=inf_𝐯∈ W𝐰-𝐯_A.Forbetter understanding of notations, hereafterwe write ν_k=λ^-1,ν_j,h=λ_j,h^-1, andM_h(ν_k)=M_h(λ_k).The following lemma(see <cit.>) is valid since T_h and T_h share the same eigenpairs. It providesa crucial tool foranalyzing the error of multigrid Schemes 3.1 and 3.2. Let (ν_0,𝐮_0) is an approximate eigenpair of (ν_k,𝐮_k), where ν_0 is not an eigenvalue of T_h and𝐮_0∈V̱_h with 𝐮_0_a=1. Suppose thatdist(𝐮_0,M_h(ν_k))≤1/2,|ν_0-ν_k|≤ρ/4 ,   |ν_j,h-ν_j|≤ρ/4(j= k-1,k,k+q,j≠0),where ρ=min_ν_j≠ν_k|ν_j-ν_k|. Let 𝐮^s∈V̱_h,𝐮_k^h∈V̱_h satisfy(ν_0- T_h)𝐮^s=𝐮_0,   𝐮_k^h=𝐮^s/𝐮^s_a.Thendist(𝐮_k^h, M_h(ν_k))≤4/ρmax_k≤ j≤ k+q-1|ν_0-ν_j,h|dist(𝐮_0,M_h(ν_k)).Let δ_0 and δ'_0 be two positive constants such that δ_0≤min{C(4√(λ_k))^-1,1/2},  δ'_0≤λ_k/2,  δ'_0/(λ_k- δ'_0 )λ_k≤ρ/4, δ_0^2<λ_j, ^2δ_0^2 /(λ_j-δ_0^2 )λ_j≤ρ/4,  j=k-1,k,k+q,j≠0,(3+) δ_0+ 3C^-2δ_0^2 +C^-2(3λ_k+2C^2) δ_0≤ 1/2.In the coming theoretical analysis, in the step 3 of Scheme 3.1 and step 6 of Scheme3.2 we introduce a new auxiliaryvariable 𝐮^h_i satisfying𝐮^h_i=𝐮'/𝐮'_A.Then it is clear that 𝐮^h_i=𝐮^h_i/𝐮^h_i_a and λ^h_i=a( 𝐮^h_i, 𝐮^h_i)/(ϵ 𝐮^h_i, 𝐮^h_i). 
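Before stating the conditions used in the error analysis, it may help to see Steps 3-8 of Scheme 3.2 at the pure linear-algebra level. The following Python sketch is only an illustration under stated assumptions: the sparse stiffness matrices A[i] (discretizing a(.,.)), the eps-weighted mass matrices M[i] (discretizing (eps.,.)), and the prolongation matrices P[i] between the nested edge element spaces are assumed to be assembled by some finite element code; none of these names refer to an actual library API.

import numpy as np
import scipy.sparse.linalg as spla

def scheme_3_2(A, M, P, lam_H, u_H, i0):
    # Inverse iteration with fixed shift over a mesh hierarchy (sketch;
    # real symmetric sparse matrices are assumed for simplicity).
    # A[i], M[i]: stiffness and mass matrices on mesh i (i = 0 is coarse);
    # P[i]: prolongation from mesh i-1 to mesh i;
    # (lam_H, u_H): coarse eigenpair from the mixed problem of Step 1.
    lam, u = lam_H, u_H
    for i in range(1, len(A)):
        if i <= i0 + 1:
            shift = lam                    # updated up to level i0, then
                                           # frozen at lam^{h_{i0}} (Step 6)
        u = P[i] @ u                       # carry the iterate to mesh i
        # Step 3 / Step 6: solve a(u',v) - shift*(eps u',v) = (eps u_old, v)
        u = spla.spsolve((A[i] - shift * M[i]).tocsc(), M[i] @ u)
        u = u / np.sqrt(u @ (A[i] @ u))    # normalise in the a(.,.)-norm
        lam = (u @ (A[i] @ u)) / (u @ (M[i] @ u))  # Rayleigh quotient, Step 4/7
    return lam, u

Setting i0 to the index of the finest level makes the shift update on every mesh, which recovers the Rayleigh quotient iteration of Scheme 3.1; the coarse pair (lam_H, u_H) would come from one sparse eigensolve of the mixed problem in Step 1.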
C̱o̱ṉḏi̱ṯi̱o̱ṉ ̱3̱.̱1̱.̱ There exists u̱_k ∈ M(λ_k) such that for some i∈{1,2,⋯,l}u̱_k^h_l-1-u̱_̱ḵ_A≤δ_0,  δ_h_l (λ_j ) ≤δ_0 (j = k - 1, k,k+q, j ≠ 0), |λ^h_i_k-λ_k| ≤δ'_0,where λ^h_i_k and (λ^h_l-1_k,u̱_k^h_l-1)are approximate eigenpairs corresponding to the eigenvalue λ_k obtained by Scheme 3.1 or Scheme 3.2.We are in a position to prove a critical theorem which establishes the error relation for approximate eigenpairs between two adjacent iterations. Our proof shall sufficiently make use of the relationship among the operatorsT, T_h_l, T, T_h_l, T andT_h_l, as shown in Lemma 3.3, and the proof method is an extension of that in <cit.>. Let (λ_k^h_l,u̱_k^h_l) be an approximate eigenpair obtained by Scheme 3.1 or Scheme 3.2. Suppose Theorem 2.2 holds withλ=λ_k-1,λ_k,λ_k+q,andCondition 3.1 holds with i=l-1 for Scheme 3.1 or with i=i0,l-1 for Scheme 3.2. Let λ_0=λ_k^h_l-1 for Scheme 3.1 or λ_0=λ_k^h_i0 for Scheme 3.2.Then thereexists 𝐮_k∈ M(λ_k) such that𝐮^h_l_k-𝐮_k_A≤/2(|λ_0-λ_k|(|λ^h_l-1_k- λ_k|+𝐮^h_l-1_k-𝐮_k_0,ϵ) +δ_h_l(λ_k)),where is independent of the mesh parametersand the iterative times l. Step 3 of Scheme 3.1 with i = lis equivalent to: find (𝐮_h_l',σ')∈ U_h_l× V_h_l such that A(𝐮',𝐯)-(λ_0+C^2)A( T_h_l𝐮',𝐯)=A( T_h_l𝐮^h_l-1_k,𝐯),   ∀𝐯∈V̱_h_l,and𝐮^h_l_k=𝐮'/𝐮'_a,  𝐮^h_l_k=𝐮'/𝐮'_A. That is((λ_0+C^2)^-1- T_h_l)𝐮'=(λ_0+C^2)^-1 T_h_l𝐮^h_l-1_k,   𝐮̱^h_l_k=𝐮'/𝐮'_A. Denoteν_0=(λ_0+C^2)^-1,   𝐮_0=(λ^h_l-1_k+C^2) T_h_l𝐮^h_l-1_k /(λ^h_l-1_k+C^2) T_h_l𝐮^h_l-1_k_A,𝐮^s=(λ_0+C^2)𝐮'/(λ^h_l-1_k+C^2) T_h_l𝐮^h_l-1_k_A,    ν_h_l=1/λ^h_l _k .Noting 𝐮_k^h_l-1=𝐮^h_l-1_k/𝐮^h_l-1_k_a, then Step 3 of Scheme 3.1 is equivalent to:(ν_0- T_h_l)𝐮^s=𝐮_0,   𝐮^h_l_k=𝐮^s/𝐮^s_A. Noting 𝐮_k_A≤1+δ_0≤ 3/2, using Lemma 3.3 we derive from (<ref>) and (<ref>)(λ^h_l-1_k+C^2) T_h_l𝐮_k-𝐮_k_A= (λ_k+C^2) T𝐮_k-(λ^h_l-1_k+C^2) T_h_l𝐮_k_A   ≤(λ_k+C^2)( T- T_h_l)𝐮_k_A +(λ_k-λ^h_l-1_k) T_h_l𝐮_k_A   ≤δ_h_l(λ_k)𝐮_k_A + C^-1|λ^h_l-1_k-λ_k|𝐮_k_0,ϵ    ≤3/2δ_h_l(λ_k) + 3/2C^-2|λ^h_l-1_k-λ_k|.By (<ref>), (<ref>) and (<ref>), we have𝐮_0-𝐮_k/𝐮_k_A_A≤ 2 (λ^h_l-1_k+C^2) T_h_l𝐮^h_l-1_k-𝐮_k_A    ≤2((λ^h_l-1_k+C^2) T_h_l𝐮_k-𝐮_k_A+(λ^h_l-1_k+C^2) T_h_l(𝐮_k-𝐮^h_l-1_k)_A )    ≤2(λ^h_l-1_k+C^2) T_h_l𝐮_k-𝐮_k_A+C^-1(3λ_k+2C^2)𝐮_k-𝐮^h_l-1_k_0,ϵ . We shall verify the conditions of Lemma <ref>. Recalling (<ref>), (<ref>) and (<ref>), the estimates (<ref>) and (<ref>) lead todist(𝐮_0,M_h_l(λ_k))≤𝐮_0-𝐮_k/𝐮_k_A_A +dist(𝐮_k/𝐮_k_A, M_h_l(λ_k) )    ≤ (3+) δ_h_l(λ_k)+ 3C^-2|λ_k^h_l-1-λ_k| +C^-1(3λ_k+2C^2)𝐮_k-𝐮^h_l-1_k_0,ϵ    ≤ (3+) δ_0+ 3C^-2δ_0^2 +C^-2(3λ_k+2C^2) δ_0    ≤ 1/2.Due to Condition 3.1 we have from (<ref>)|ν_k-ν_0|=|λ_0-λ_k|/|(λ_0+C^2)λ_k|≤δ'_0/(λ_k- δ'_0 )λ_k≤ρ/4.Since by (<ref>), (<ref>) and (<ref>) we getλ_j,h_l≥λ_j-|λ_j-λ_j,h_l|≥λ_j-δ_h_l^2(λ_j) ≥λ_j-δ_0^2>0and then for j=k-1,k,k+q,j≠0|ν_j-ν_j,h_l|=|λ_j-λ_j,h_l|/|λ_j λ_j,h_l|≤^2δ_0^2 /(λ_j-δ_0^2 )λ_j≤ρ/4.Therefore the conditions of Lemma <ref> hold, and we havedist(𝐮_k^h_l, M_h_l(λ_k))≤4/ρmax_k≤ j≤ k+q-1|ν_j,h_l-ν_0|dist(𝐮_0,M_h_l(λ_k)). Applying (<ref>), (<ref>) and (<ref>) we have for j=k,k+1,⋯,k+q-1|ν_j,h_l-ν_0|=|λ_0-λ_j,h_l|/|(λ_0+C^2) λ_j,h_l|≤|λ_0-λ_k|+|λ_k-λ_j,h_l|/(λ_k- δ'_0 ) (λ_k-δ_0^2)    ≤|λ_0-λ_k|+δ_h_l^2(λ_k)/(λ_k- δ'_0 ) (λ_k-δ_0^2). 
Substituting (<ref>) and (<ref>) into (<ref>), we havedist(𝐮^h_l_k, M_h_l(λ))≤4/ρ( |λ_0-λ_k|+δ_h_l^2(λ_k)/|(λ_k- δ'_0 ) (λ_k-δ_0^2)|) ×    ((3+) δ_h_l(λ_k)+ 3C^-2|λ^h_l-1_k-λ_k| +C^-1(3λ_k+2C^2)𝐮_k-𝐮^h_l-1_k_0,ϵ).Let 𝐮_j,h be the eigenfunction corresponding to λ_j,h such that {𝐮_j,h}_j=k^k+q-1 constitutes an orthonormal basis of M_h(λ) in the sense of norm ·_A. Let 𝐮^* =∑_j=k^k+q-1A(𝐮^h_l_k,𝐮_j,h_l) 𝐮_j,h_l then 𝐮^h_l_k-𝐮^*_A=dist(𝐮^h_l_k, M_h_l(λ_k)). From Theorem 2.2, we know there exists {𝐮_j^0}_k^k+q-1⊂ M(λ_k) such that 𝐮_j,h_l-𝐮_j^0 satisfies (<ref>) and it holds by taking 𝐮_k =∑_j=k^k+q-1A(𝐮^h_l_k,𝐮_j,h_l)𝐮_j^0𝐮_k-𝐮^*_A = ∑_j=k^k+q-1A(𝐮^h_l_k,𝐮_j,h_l)(𝐮_j^0-𝐮_j,h_l)_A≤(∑_j=k^k+q-1𝐮_j^0-𝐮_j,h_l^2_A)^1/2≤ √(q)δ_h_l(λ_k). Therefore,summing up (<ref>) and (<ref>), we knowthere exists a positive constant ≥ that is independent of mesh parameters and l such that (<ref>)holds.Condition 3.2.  For any given ε∈ (0,2), there existt_i∈ (1,3-ε] such that δ_h_i(λ_k)=δ^t_i_h_i-1(λ_k)and δ_h_i(λ_k)→ 0 (i→∞). Condition 3.2 is easily satisfied. For example, for smooth solution, by using the uniform mesh, let h_0 =√(2)/8, h_1 =√(2)/32, h_2 =√(2)/64 and h_3=√(2)/128, we have h_i = h_i-1^t_i, i.e., δ_h_i = δ_h_i-1 ^t_i, where t_1≈ 1.80, t_2 ≈ 1.22, t_3 ≈ 1.18. For non-smooth solution, the condition could be satisfied when the local refinement is performed near the singular points. Let (λ_k^h_l, u̱_k^h_l) be the approximate eigenpairs obtained by Scheme 3.1. SupposeCondition 3.2 holds. Then there existu̱_k∈ M(λ_k) and H_0>0 such that if H≤ H_0 thenu̱_k^h_l-u̱_k_A≤C_0δ_h_l(λ_k), |λ_k^h_l-λ_k|≤ C_4 C_0^2δ_h_l^2(λ_k),    l≥1.The proof is completed by using induction and Theorem 3.5 with λ_0=λ_k^h_l-1.Noting that δ_H(λ_k) 0 as H 0,there exists H_0>0 such that if H<H_0 then Theorem 2.2holds for λ=λ_k-1,λ_k,λ_k+qandδ_H(λ_k)≤δ_0, ^2δ_H^2(λ_k)≤δ'_0, δ_H(λ_j)≤δ_0,(j=k-1,k,k+q,j0), C_4^2C_0^4δ_H^1+ε(λ_k)+C_4C_0^3C^-1δ_H^ε(λ_k)≤ 1. When l=1, (λ_k^h_l-1, u̱_k^h_l-1)=(λ_k,H, u̱_k,H), from (<ref>) and (<ref>) we know that there exists u̱_k∈ M(λ_k) such thatu̱_k,H-u̱_k_A≤ C_3δ_H(λ_k),|λ_k,H-λ_k|≤ C_4C_3^2δ_H^2(λ_k).Then u̱_k^h_0-u̱_k_A≤δ_H(λ_k)≤δ_0, |λ_k^h_0-λ_k|≤ C_4^2δ_H^2(λ_k)≤δ'_0 and δ_h_1(λ_j)≤δ_0 (j=k-1,k,k+q,j0), i.e., Condition 3.1 holds for l=1. Thus, by Theorem 3.5 and C_3≤ C_0 we getu̱_k^h_1-u̱_k_A≤/2{C_4^2C_0^4δ_H^4(λ_k) +C_4C_0^3C^-1δ_H^3(λ_k)+δ_h_1(λ_k)}    ≤/2{C_4^2C_0^4δ_H^4-t_1(λ_k)+C_4C_0^3C^-1δ_H^3-t_1(λ_k)+1}δ_h_1(λ_k)    ≤/2{C_4^2C_0^4δ_H^1+ε(λ_k)+C_4C_0^3C^-1δ_H^ε(λ_k)+1}δ_h_1(λ_k),where we have usedthe fact3-t_1≥ε. This yields (<ref>) and (<ref>) for l=1.Suppose that Theorem 3.6 holds for l-1, i.e., there exists u̱_k∈ M(λ_k) such thatu̱_k^h_l-1-u̱_k_A≤C_0δ_h_l-1(λ_k), |λ_k^h_l-1-λ_k|≤ C_4 C_0^2δ_h_l-1^2(λ_k),then u̱_k^h_l-1-u̱_A≤δ_0, |λ_k^h_l-1-λ_k|≤δ'_0and δ_h_l(λ_j)≤δ_0 (j=k-1,k,k+q,j0), and the conditions of Theorem 3.5 hold. Therefore, for l, by (<ref>) we deduceu̱_k^h_l-u̱_k_A≤/2{C_4^2C_0^4δ_h_l-1^4(λ_k) +C_4C_0^3C^-1δ_h_l-1^3(λ_k) +δ_h_l(λ_k)}    ≤/2{C_4^2C_0^4δ_h_l-1^4-t_i(λ_k) +C_4C_0^3C^-1δ_h_l-1^3-t_i(λ_k) +1}δ_h_l(λ_k)    ≤/2{C_4^2C_0^4δ_H^1+ε(λ_k) +C_4C_0^3C^-1δ_H^ε(λ_k) +1}δ_h_l(λ_k)    ≤ C_0δ_h_l(λ_k),i.e., (<ref>)are valid. And from (<ref>) and (<ref>) we get (<ref>). This ends the proof.Condition 3.3.  There exist β_0∈ (0,1) and β_i∈ [β_0, 1) (i=1,2,⋯) such that δ_h_i(λ_k)=β_iδ_h_i-1(λ_k)and δ_h_i(λ_k)→ 0 (i→∞). Remark 3.2.Note that if Condition 3.3 is valid, Condition 3.2 holds for H properly small; however, the inverse is not true. 
So in Theorem 3.6, (<ref>) and (<ref>) still hold if we replace Condition 3.2 withCondition 3.3.Let (λ_k^h_l, u̱_k^h_l)be an approximate eigenpairobtained by Scheme 3.2. Suppose thatCondition 3.2 holds for i≤ i0 and Condition 3.3 holds for i> i0. Then there existu̱_k∈ M(λ_k) and H_0>0 such that if H≤ H_0 thenu̱_k^h_l-u̱_k_A≤ C_0δ_h_l(λ_k), |λ_k^h_l-λ_k|≤ C_4C_0^2δ_h_l^2(λ_k),   l> i0.The proof is completed by using induction. Notingδ_H(λ_k) 0 as H 0, there exists H_0>0 such that if H<H_0thenTheorems 2.2 holds for λ=λ_k-1,λ_k,λ_k+q,Theorem 3.6holdsandδ_H(λ_k)≤δ_0, ^2δ_H^2(λ_k)≤δ'_0, δ_H(λ_j)≤δ_0,(j=k-1,k,k+q,j0), C_4^2C_0^4δ_H^3(λ_k)1/β_0 +C_4C_0^3/Cδ_H^2(λ_k)1/β_0≤ 1.From Theorem 3.6 we know thatwhen l=i0,i0+1 there exists u̱_k∈ M(λ_k) such thatu̱_k^h_l-u̱_k_A≤ C_0δ_h_l(λ_k),|λ_k^h_l-λ_k|≤ C_4C_0^2δ_h_l^2(λ_k).SupposeTheorem 3.7 holds for l-1, i.e., there exists u̱_k∈ M(λ_k) such thatu̱_k^h_l-1-u̱_k_A≤ C_0δ_h_l-1(λ_k), |λ_k^h_l-1-λ|≤ C_4C_0^2δ_h_l-1^2(λ_k).Then the conditions of Theorem 3.5 hold, therefore, for l, observing that in (<ref>) u̱_k^h_l-1-u̱_k_0,ϵ can be replaced by C^-1u̱_k^h_l-1-u̱_k_A, we deduceu̱_k^h_l-u̱_k_a≤C_0/2{C_4^2C_0^4δ_h_i0^2(λ_k)(δ_h_l-1^2(λ_k) +C_0/Cδ_h_l-1(λ_k)) +δ_h_l(λ_k)}    ≤C_0/2{C_4^2C_0^4δ_h_i0^2(λ_k)δ_h_l-1(λ_k)1/β_0 +C_4C_0^3/Cδ_h_i0^2(λ_k)1/β_0 +1}δ_h_l(λ_k),noting that δ_h_l-1(λ_k)≤δ_h_i0(λ_k)≤δ_H(λ_k), we get (<ref>) immediately. (<ref>) can be obtained from (<ref>) and (<ref>). The proof is completed. Remark 3.3. The error estimates (<ref>) and (<ref>) for u̱^h_l_k can lead to the error estimates for u̱^h_l_k. In fact, under the conditions of Theorem 3.6 or Theorem 3.7, we haveu̱_k_A≥u̱^h_l_k_A-δ_h_l(λ_k)≥1-δ_0≥1/2,thenu̱_k_a=√(λ_k)/√(λ_k)u̱_k_A≥√(λ_k)/2√(λ_k). We further assume δ_0≤√(λ_k)/4√(λ_k) then u̱^h_l_k_a≥u̱_k_a-δ_h_l(λ_k)≥√(λ_k)/2√(λ_k)-δ_0 ≥√(λ_k)/4√(λ_k).Therefore we derive from (<ref>) or (<ref>)u̱^h_l_k -u̱_k/u̱_k_a_A≤u̱^h_l_k-u̱_k_Au̱_k_a+u̱^h_l_k-u̱_k_au̱_k_A/u̱^h_l_k_a𝐮_k_a    ≤(√(λ_k)+√(λ_k))u̱^h_l_k-u̱_k_A/u̱^h_l_k_a√(λ_k)≤4(√(λ_kλ_k)+λ_k)δ_h_l(λ_k)/λ_k,i.e., u̱^h_l_k has the same convergence order as u̱^h_l_k in the sense of ·_A.§ NUMERICAL EXPERIMENT In this section, we will reportseveral numerical experiments for solving the Maxwell eigenvalue problem by multigrid Scheme 3.2 using the lowest order edge element to validate our theoretical results.We use MATLAB 2012a to compile our program codesand adopt the data structure of finite elements inthe package of iFEM <cit.> to generate and refine the meshes.We use the sparse solver eigs(A,B,k,'sm') to solve (<ref>) for k lowest eigenvalues.In our tables 4.1-4.6 we use the notation λ_k^h_i to denote the numerical eigenvalue approximatingλ_kobtained by multigrid methods at ith iteration on the mesh π_h_i (with number ofdegree of freedom N^h_i), and R to denote the convergence rate with respect to Dof^-1/3 where Dof isthe number of degrees of freedom.For comparative purpose, λ_k,h_j denotes the numerical eigenvalue approximating λ_kcomputed by the direct solver eigs on the mesh π_h_j.Example 4.1. Consider the Maxwell eigenvalue problem with μ=ϵ=I on the unit cube Ω=(-1/2,1/2)^3.We use Scheme 3.2 with i0=0 to compute theeigenvalues λ_1=2π^2, λ_4=3π^2,λ_6=5π^2 (of multiplicity 3, 2 and 6 respectively).The numerical results are shown in Table4.1, which indicates that numerical eigenvalues obtained by multigrid methods achieve the optimal convergence rate R≈2.Example 4.2. 
Consider theeigenvalue problem with ϵ=I and μ=I orμ=([ 21-2j-j; 1 + 2 j 4 j; j-j 5 ])on the thick L-shaped Ω=((-1,1)^2\ (-1,0]^2)× (0,1). When μ=ϵ=I λ_1≈9.6397, λ_2≈11.3452and λ_3≈13.4036 (see <cit.>).We use Scheme 3.2 with i0=1 to compute the lowest three eigenvalues for both cases. The numerical results are shown in Tables 4.2-4.3. From Table 4.2 we see that the eigenvalue errorsobtained by Scheme 3.2 after 2nd iteration are respectively 0.019, 0.012 and 7.0e-04, which indicates the accuracy ofthelowest two eigenvalues is affected by the singularity of the associated eigenfunctionsin the directions perpendicular to the reentrant edge and the convergence rate R is usually less than 2. Alternately, we adopt the mesheslocally refined towards the reentrant edge (see Figure 4.1) to perform the iterative procedure. And the associated numerical results are listed in Table 4.4,which implies the errors of λ^h_2_1 and λ^h_2_2 are significantlydecreased to 0.0033 and 0.0066 respectivelywith less degrees of freedom.Example 4.3. Consider the Maxwell eigenvalue problem with Ω=(-1/2,1/2)× (0,0.1)× (-1/2,1/2)where μ=I and if x_3>0then ϵ=2I otherwise ϵ=I. This is a practical problem in engineering computed in <cit.>. We use Scheme 3.2 with i0=1 to compute the three lowesteigenvalues for both cases. The numerical results are shown in Table 4.5.The relatively accurate eigenvalues reported in <cit.> are respectively3.538^2(12.5174), 5.445^2(29.6480) and5.935^2(35.2242). Using them as the reference values, the relative errors of numerical eigenvalues after 3rd iteration are respectively 6.0877e-05, 3.7524e-05 and0.0105. Obviously we get the good approximations of the eigenvalues λ_1 and λ_2. Regarding the computation for λ_3, we refer toTable II in <cit.> whose relative error for computing λ_3 is 0.0107 using a higher ordermethod. This is a computational result very close to ours. Hence we think our method is also efficient for solving the problem.Example 4.4. Consider the Maxwell eigenvalue problem with μ=ϵ=I and Ω=(-1,1)^3\ [-1/2,1/2]^3. In this example, we capture a physical zero eigenvalue on a coarse mesh with number of degrees of freedom 6230, i.e., λ_1,H=1.9510e-12. We use Scheme 3.2 with i0=0 to compute the eigenvalues λ_2(of multiplicity 3), λ_5(of multiplicity 2) and λ_7(of multiplicity 3). The numerical results are shown in Table 4.6. Note that the coarse mesh seems slightly “fine”. This is because we would like to capture allinformation of the lowesteight eigenvalues (some of them would not be captured on a very coarse mesh). This is an example of handling the problem in a cavity with two disconnected boundaries. For more numerical examples of the cavity with disconnected boundaries, we refer the readers to the work of <cit.>.Acknowledgment The author wishes to thank Prof. Yidu Yang for many valuable comments on this paper.s30arbenzP. Arbenz, R. Geus, and S. Adam, Solving Maxwell eigenvalue problems for accelerating cavities, Phys. Rev. Accelerators and Beams, 4 (2001), 022001.arbenz1P. Arbenz and R. Geus, Multilevel preconditioned iterative eigensolvers for Maxwell eigenvalue problems, Appl. Numer. Math., 54 (2005), pp. 107-121.ainsworthM. Ainsworth, J. Coyle, Computation of Maxwell eigenvalues on curvilinear domains using hp-version Nédélec elements, Numer. Math. Adv. Appl.,2003, pp. 219-231. boffi1D. Boffi, P. Fernandes, L. Gastaldi, and I. Perugia, Computational models of electromagnetic resonantors: Analysis of edge element approximation, SIAM J. Numer. Anal., 36 (1999), pp. 1264-1290. 
[boffi2] D. Boffi, Fortin operator and discrete compactness for edge elements, Numer. Math., 87 (2000), pp. 229-246.
[boffi3] D. Boffi, Finite element approximation of eigenvalue problems, Acta Numerica, 19 (2010), pp. 1-120.
[buffa1] A. Buffa, P. Ciarlet, and E. Jamelot, Solving electromagnetic eigenvalue problems in polyhedral domains with nodal finite elements, Numer. Math., 113 (2009), pp. 497-518.
[buffa2] A. Buffa, P. Houston, and I. Perugia, Discontinuous Galerkin computation of the Maxwell eigenvalues on simplicial meshes, J. Comput. Appl. Math., 204 (2007), pp. 317-333.
[brezzi] F. Brezzi, M. Fortin, Mixed and Hybrid Finite Element Methods, vol. 15, Springer, New York, NY, USA, 1991.
[brenner] S. C. Brenner, F. Li, L. Sung, Nonconforming Maxwell Eigensolvers, J. Sci. Comput., 40 (2009), pp. 51-85.
[babuska] I. Babuska, J. Osborn, Eigenvalue Problems, in: P. G. Ciarlet, J. L. Lions (Eds.), Finite Element Methods (Part 1), Handbook of Numerical Analysis, vol. 2, Elsevier Science Publishers, North-Holland, 1991, pp. 640-787.
[caorsi] S. Caorsi, P. Fernandes, M. Raffetto, On the Convergence of Galerkin Finite Element Approximations of Electromagnetic Eigenproblems, SIAM J. Numer. Anal., 38 (2000), pp. 580-607.
[chen] L. Chen, iFEM: An Integrated Finite Element Methods Package in MATLAB, Technical report, University of California at Irvine, 2009.
[ciarlet1] P. Ciarlet, Jr., and G. Hechme, Computing electromagnetic eigenmodes with continuous Galerkin approximations, Comput. Methods Appl. Mech. Engrg., 198 (2008), pp. 358-365.
[chen2] J. Chen, Y. Xu, and J. Zou, An adaptive inverse iteration for Maxwell eigenvalue problem based on edge elements, J. Comput. Phys., 229 (2010), pp. 2649-2658.
[chatterjee] A. Chatterjee, J. M. Jin, and J. L. Volakis, Computation of cavity resonances using edge based finite elements, IEEE Trans. Microwave Theory Tech., 40 (1992), pp. 2106-2108.
[chatelin] F. Chatelin, Spectral Approximations of Linear Operators, Academic Press, New York, 1983.
[dauge2] M. Dauge, Benchmark computations for Maxwell equations for the approximation of highly singular solutions (2004). See Monique Dauge's personal web page at http://perso.univ-rennes1.fr/monique.dauge/core/index.html
[girault] V. Girault and P. A. Raviart, Finite Element Methods for Navier-Stokes Equations: Theory and Algorithms, Springer-Verlag, Berlin, 1986.
[hiptmair1] R. Hiptmair, Finite elements in computational electromagnetism, Acta Numer., 11 (2002), pp. 237-339.
[hiptmair2] R. Hiptmair and J. Xu, Nodal auxiliary space preconditioning in H(curl) and H(div) spaces, SIAM J. Numer. Anal., 45 (2007), pp. 2483-2509.
[hu] X. Hu and X. Cheng, Acceleration of a two-grid method for eigenvalue problems, Math. Comp., 80 (2011), pp. 1287-1301.
[milan] M. M. Ilić, B. M. Notaroš, Higher order hierarchical curved hexahedral vector finite elements for electromagnetic modeling, IEEE Trans. Microwave Theory Tech., 51 (2003), pp. 1026-1033.
[jiang] W. Jiang, N. Liu, Y. Yue, Q. Liu, Mixed finite element method for resonant cavity problem with complex geometric topology and anisotropic media, IEEE Trans. Magnetics, 52 (2016), 7400108.
[kikuchi] F. Kikuchi, Weak formulations for finite element analysis of an electromagnetic eigenvalue problem, Scientific Papers of the College of Arts and Sciences, University of Tokyo, 38 (1988), pp. 43-67.
[kikuchi1] F. Kikuchi, On a discrete compactness property for the Nédélec finite elements, J. Fac. Sci. Univ. Tokyo Sect. IA Math., 36 (1989), pp. 479-490.
[kirsch] A. Kirsch, P. Monk, A finite element method for approximating electromagnetic scattering from a conducting object, Numer. Math., 92 (2002), pp. 501-534.
[monk] P. Monk, Finite Element Methods for Maxwell's Equations, Oxford University Press, Oxford, UK, 2003.
[monk2] P. Monk, L. Demkowicz, Discrete compactness and the approximation of Maxwell's equations in ℝ^3, Math. Comp., 70 (2000), pp. 507-523.
[nedelec] J. C. Nédélec, Mixed finite elements in R^3, Numer. Math., 35 (1980), pp. 315-341.
[reddy] C. J. Reddy, M. D. Deshpande, C. R. Cockrell, F. B. Beck, Finite Element Method for Eigenvalue Problems in Electromagnetics, NASA STI/Recon Technical Report N, 95 (1995).
[russo] A. D. Russo, A. Alonso, Finite element approximation of Maxwell eigenproblems on curved Lipschitz polyhedral domains, Appl. Numer. Math., 59 (2009), pp. 1796-1822.
[xu2] J. Xu, A. Zhou, A two-grid discretization scheme for eigenvalue problems, Math. Comp., 70 (1999), pp. 17-25.
[yang2] Y. Yang, H. Bi, J. Han, Y. Yu, The Shifted-Inverse Iteration Based on the Multigrid Discretizations for Eigenvalue Problems, SIAM J. Sci. Comput., 37 (2015), pp. A2583-A2606.
[yang3] Y. Yang, H. Bi, A two-grid discretization scheme based on shifted-inverse power method, SIAM J. Numer. Anal., 49 (2011), pp. 1602-1624.
[yang4] Y. Yang, Y. Zhang, H. Bi, Multigrid Discretization and Iterative Algorithm for Mixed Variational Formulation of the Eigenvalue Problem of Electric Field, Abstr. Appl. Anal., 2012 (2012), pp. 1-25, doi:10.1155/2012/190768.
[zhou] J. Zhou, X. Hu, L. Zhong, S. Shu, L. Chen, Two-Grid Methods for Maxwell Eigenvalue Problem, SIAM J. Numer. Anal., 52(4) (2014), pp. 2027-2047.
http://arxiv.org/abs/1702.08241v1
{ "authors": [ "Jiayu Han" ], "categories": [ "math.NA" ], "primary_category": "math.NA", "published": "20170227112815", "title": "Multigrid methods based on shifted inverse iteration for the Maxwell eigenvalue problem" }
bicudo@tecnico.ulisboa.pt, mjdcc@cftp.ist.utl.pt (CFTP, Instituto Superior Técnico, Universidade de Lisboa); orlando@fis.uc.pt, psilva@teor.fis.uc.pt (CFisUC, Department of Physics, University of Coimbra, P-3004 516 Coimbra, Portugal) 12.38.Gc, 11.15.Ha We revisit the static potential for the Q Q Q̅Q̅ system using SU(3) lattice simulations, studying both the colour-singlet ground state and the first excited state. We consider geometries where the two static quarks and the two anti-quarks are at the corners of rectangles of different sizes. We analyse the transition between a tetraquark system and a two-meson system with a two-by-two correlator matrix. We compare the potentials computed with quenched QCD and with dynamical quarks. We also compare our simulations with the results of previous studies, and we quantitatively analyse fits of our results with ansatze inspired by the string flip-flop model and by its possible colour excitations.
Lattice QCD static potentials of the meson-meson and tetraquark systems computed with both quenched and full QCD P. J. Silva December 30, 2023 ================================================================== § INTRODUCTION Our current understanding of strong interaction phenomenology, be it the hadron spectrum or the form factors associated with transitions between hadrons, relies on the description of the quark and gluon interaction within Quantum Chromodynamics. Despite the efforts of several decades, the non-perturbative nature of QCD still conceals several properties of its fundamental particles. Indeed, we still do not understand the confinement mechanism, which prevents the observation of free quarks and gluons in nature, and we still do not have a satisfactory answer as to why the experimentally <cit.> confirmed hadrons are composed of three valence quarks or of a quark and an anti-quark. QCD is a gauge theory and physical observables should be gauge invariant objects. Gauge invariance implies that only certain combinations of quarks and/or gluons can lead to observable particles. If one applies such a simple rule blindly, the observed hadrons are necessarily composite states involving multi-quark and multi-gluon configurations. There is a priori no reason why states with a valence composition other than that of mesons or baryons, called in general exotic states, should not be observed. Exotic states can be pure glue states (glueballs), multi-quark states (tetraquarks, pentaquarks, etc.) or hybrid states (mesons with a non-vanishing valence gluon content). Besides the hadron states compatible with the quark model, the particle data book <cit.> also reports candidates for the different types of exotic states, see e.g. the reviews on pentaquarks and non-q q̅ mesons. The masses of the experimental states listed as candidates for multi-quark/gluon hadrons cover the full range of energies of the particle spectrum. In particular, the exotics with the most observations are the tetraquarks. In what concerns the experimental observation of exotic tetraquarks, the quarkonium sector of double-heavy tetraquarks including a QQ̅ pair is the most explored experimentally, see e.g. the recent reviews <cit.>. In particular, the charged Z_c^± and Z_b^± are crypto-exotic, but technically they can be regarded as essentially exotic tetraquarks if we neglect c c̅ or b b̅ annihilation. There are two Z_b^± observed only by the BELLE collaboration at KEK <cit.>, slightly above the BB̅^* and B^*B̅^* thresholds: the Z_b(10610)^+ and the Z_b(10650)^+.
Their nature is possibly different from thetwo Z_c(3940)^± andZ_c(4430)^±, whose mass is well above DD threshold <cit.>. The Z_c^± has been observed with very high statistical significance and has received a series of experimental observations by the BELLE collaboration <cit.>, the Cleo-C collaboration <cit.>, the BESIII collaboration <cit.> and the LHCb collaboration <cit.>. This family is possibly related to the closed-charm pentaquark recently observed at LHCb <cit.>.Notice that, using naïve Resonant Group Method calculations, in 2008, some of us predicted <cit.> a partial decay width to π J/ψ of the Z_c(4430)^- consistent with the recently observed experimental value<cit.>.On the other hand, in what concerns lattice QCD simulations, the most promising exotic tetraquark sector is also double-heavy, but it has a pair of heavy quarks Q Q or antiquarks Q̅Q̅, and thus it differs from the quarkonium sector. Note that in lattice QCD, the study of exotics is presently even harder than in the laboratory,since the techniques and computer facilities necessary to study of resonances with many decay channels remain to be developed.Lattice QCD searched for evidence of a large tetraquark component in the closed-charm Z_c(3940)^- candidate but this resonance is well above threshold, and Ref.<cit.>concluded there is no robust lattice QCD evidence of a Z_c^± tetraquark resonance.Lattice QCD also searched for the expected boundstate in light-light-antiheavy-antiheavy channels <cit.>. Using dynamical quarks, the only heavy quark presently accessible to Lattice QCD simulations is the charm quark. No evidence for boundstates in this possible family of tetraquarks, say for a u d c̅c̅ was found.Moreover the potentials between two mesons, each composed of a light quark and a static (or infinitely heavy) antiquark , have been computed in lattice QCD <cit.>.A static antiquark constitutes a good approximation to a spin-averaged b̅ bottom antiquark. The potential between the two light-static mesons can then be used, with the Born-Oppenheimer approximation <cit.>, as a B-B potential, where the higher order 1/m_b terms including the spin-tensor terms are neglected. From the potential of the channel with larger attraction, which occurs in the Isospin=0 and Spin=0 quark-quark system,the possible boundstates of the heavy antiquarks have been investigated with quantum mechanics techniques. Recently, this approach indeed found evidence for a tetraquark u d b̅b̅ boundstate <cit.>, while no boundstates have been foundfor states where the heavy quarks are c̅b̅ or c̅c̅ (consistent with full lattice QCD computations <cit.>) or where the light quarks are s̅s̅ or c̅c̅ <cit.>. The b̅b̅ probability density in the only binding channel has also been computed in Ref.<cit.>.The quark models for tetraquarks with the most sophisticated description of confinement are the string flip-flop models. Clearly, tetraquarks are always coupled to meson-meson systems, and we must be able to address correctly the meson-meson interactions. The first quark models had confining two-body potentials proportional to the SU(3) colour Casimir invariantλ⃗_i ·λ⃗_j V(r_ij) suggested by the One-Gluon-Exchange type of potential. However this would lead to an additional Van der Waals potential V_Van der Waals=V'(r)r× T, where T is a polarization tensor. The resulting Van der Waals<cit.> force between mesons, or baryons would be extremely large and this is clearly not compatible with observations. 
The string flip-flop potential for the meson-meson interaction was developed in Refs.<cit.>, to solve the problem of the Van der Waals forces produced by the two-body confining potentials.The first considered string flip-flop potential was the one minimizing the energy of the possible two different meson-meson configurations, say M_13M_24 or M_14M_23. This removes the inter-meson potential, and thus solves the problem of the Van der Waals force.An upgrade of the string flip-flop potential includes a third possible configuration <cit.>, in the tetraquark channel, say T_12 , 34, where the four constituents are linked by a connected string <cit.>.The three confining string configurations differ in the strings linking the quarks and antiquarks,this is illustrated inFig. <ref>.When the diquarks qq and q̅q̅ distances are small, the tetraquark configuration minimizes the string energy. When the quark-antiquark pairs q q̅ and q q̅ are close, the meson-meson configuration minimizes the string energy. With a triple string flip-flop potential,boundstates below the threshold for hadronic coupled channels have been found <cit.>.On the other hand, the string flip-flop potentials allow fully unitarized studies of resonances <cit.>. Analytical calculations with a double flip-flop harmonic oscillator potential, <cit.>, using the resonating group method again with a double flip-flop confining harmonic oscillator potential, <cit.>, anfd with the triple string flip-flop potential <cit.> have already predicted resonances and boundstates. So far. the theoretical and experimental interpretations of the observed states that can possibly be exotics is not clear crystal and, certainly, a better understanding of the colour force helps to elucidate our present view of the hadronic spectrum.For heavy quark systems its dynamics can be represented by a potential which, in general, is a function of the geometry of the hadrons,of the spin orientation of its components and of the quark flavours. In the limit of infinite quark mass one can compute the so-called static potential using first principle lattice QCD techniques via the evaluation of Wilson loops. The static potential provides an important input to the modelling of hadrons and it gives a simple realisation of the confinement mechanism. Moreover it can be applied to study tetraquarks QQ Q̅Q̅ with two heavy quarks and two heavy antiquarks, see for instance a Dyson-Schwinger study in Ref.<cit.>, at the intersection of the two sectors most studied experimentally and theoretically. The static potential has been computed using lattice QCD for mesons, tetraquark, pentaquarks and hybrid systems see<cit.>. For a quark and an anti-quark system, the static potential V_QQ is a landmark calculation in lattice QCD and it isused to set the scale of the simulations. V_QQ has been computed both in the quenched theory andin full QCD with the lattice data being well described by a one-gluon exchange potential (a Coulomb like potential) at short distances and a linear rising function of the quark distances at large separations. The behaviour at large interquark distances provides a nice explanation of the confinement mechanism.Moreover, for other hadronic systems and for large separations of its constituents a similar pattern of the corresponding static potentials has been observed in lattice simulations, i.e. 
a linear rising potential which, once more, is a simple realisation of quark confinement.In the current work we revisit the static potential for tetraquarks using lattice simulations. The static potential for tetraquarks wascomputed for the gauge group SU(3) and in the quenched approximation in <cit.>.The hybrid potential defined and measured in <cit.> can also be viewed as a particular limit of the tetraquark potential.Herein, of all the possible geometries for the QQ Q̅Q̅ system we consider the case where quarks and anti-quarks are at the corners of a rectangle, see Fig. <ref>, and recompute the static potential of the system both in the quenchedapproximation and in full QCD. We focus our analysis in the comparison of the quenched and full QCD and also in the transition between a tetraquark system and a two meson system. Thus we go beyond the triple string flip-flop paradigm of Fig. <ref> and analyse, in the transition region, the mixing between the meson-meson and tetraquark string configurations. Moreover we explore not only the groundstate but also the first excited state. The current work is organised as follows. In Sec. <ref> we discuss the possible colour structures for aQQ Q̅Q̅ system and introduce the Q Q̅ potentials used to compare the results of the static potentials for the tetraquark. In Sec. <ref> we revisit the geometries used to compute the static potentials and discuss the expected configurations at large separations. In Sec. <ref> the method used toevaluate the static potentials is described. In Sec. <ref> we report on the parameters used in the lattice simulations and how we set thescale of the simulations. The results for the static tetraquark potential for the two geometries are described in Sec <ref>. In Sec. <ref> we resume and conclude. In the Appendix, the reader can find various tables with all our numerical results.§ THE COLOR STRUCTURE OF A QQQ̅Q̅ SYSTEMThe colour-spin-spatial wave function of a QQQ̅Q̅ system has multiple combinations, relevant for the computation of static potentials. In this section, we analyse the possible colour wave functions associated with a tetraquark system.The quarks belong to the fundamental 3 representation of SU(3), while anti-quarks are in a 3 representation of the group.Thespace built from the direct product 3⊗3⊗3⊗3 includes two independent colour singlet states. In a QQQ̅Q̅ system, quarks and anti-quarks can combine into colour singlet meson-like states,leading naturally to the two meson states,|1_131_24⟩ = 1/3δ_ikδ_jl|Q_iQ_jQ̅_kQ̅_l⟩ , |1_141_23⟩ = 1/3δ_ilδ_jk|Q_iQ_jQ̅_kQ̅_l⟩ ,where only the colour indices are written explicitly and 1_ij refers to the meson-like colour singlet state built combining quark i and anti-quark j. The two colour singlet states in Eq. (<ref>) are not orthogonal to each other and a straightforward algebra gives,⟨1_131_24|1_141_23⟩=1/3 . Moreover, a quark and anti-quark pair, besides a colour singlet state, can also form a colour octet state. With two colour octets it is again possible to build a colour singlet state. For the QQQ̅Q̅system the colour singlet states built from the octets read,|8_138_24⟩=1/4√(2)λ_ik^aλ_jl^a|Q_iQ_jQ̅_kQ̅_l⟩ , |8_148_23⟩=1/4√(2)λ_il^aλ_jk^a|Q_iQ_jQ̅_kQ̅_l⟩ ,where the factors comply with the normalization condition,⟨8_138_24| 8_138_24⟩ =⟨8_148_23|8_148_23⟩ = 1.The colour octet-octet states in Eq. (<ref>) can be written in terms of the meson-meson states defined in Eq. 
(<ref>),|8_138_24⟩ = 3|1_141_23⟩-|1_131_24⟩/2√(2) , |8_148_23⟩ = 3|1_131_24⟩-|1_141_23⟩/2√(2) .A simple calculation shows that the colour octet states (<ref>) are not orthogonal (in colour space) to each other. However, each of the octet-octet states is orthogonal to the corresponding meson-meson state, i.e.⟨1_131_24|8_138_24⟩ = 0,⟨1_141_23|8_148_23⟩ = 0. The states in Eqs. (<ref>)and (<ref>) do not represent all the possible colour singlet states that can be associated to aQQQ̅Q̅ system. We can also consider diquark-antidiquark configurations. For the group SU(3) it follows that3⊗3=3̅⊕6,3̅⊗3̅=3⊕6̅ and the two colour singlet states belong to the space spanned by 3⊗3̅ and 6⊗6̅,|3̅_123_34⟩ = 1/2√(3) ϵ_ijm ϵ_klm  |Q_iQ_jQ̅_kQ̅_l⟩= √(3/4)(|1_131_24⟩-|1_141_23⟩),|6_126̅_34⟩ = √(3/8)(|1_131_24⟩+|1_141_23⟩).The states in Eqs. (<ref>) and (<ref>) are orthogonal to each other in colour space,i.e. ⟨3̅_123_34|6_126̅_34⟩=0. Furthermore, they are eigenstates of the exchange operators of quarks or anti-quarks, and verify the following relations,P_12|3̅_123_34⟩ = P_34|3̅_123_34⟩=-   |3̅_123_34⟩ , P_12|6_126̅_34⟩ = P_34|6_126̅_34⟩= +   |6_126̅_34⟩,where P_ij is the exchange operator of (anti)quark i with (anti)quark j.Eqs.(<ref>) and (<ref>) can be inverted, giving,|1_131_24⟩=√(2/3)|6_126̅_34⟩+1/√(3)|3̅_123_34⟩ , |1_141_23⟩=√(2/3)|6_126̅_34⟩-1/√(3)|3̅_123_34⟩ ,which shows that the meson-meson states of Eq. (<ref>) are not eigenstates of the quark and of the anti-quark exchange operators P_12 and P_34. The static potential V for a QQQ̅Q̅ system is a complicated object which may involve, two, three and four body interactions. In general, V also depends on the allowed quantum numbers of the constituents of the multiquark state. The static potential should allow, when combined with quantum mechanics, for the groundstates to be the ones of Fig. <ref>. For example, the static potential should allow for the formation of two-meson states when the quark-anti-quark distances are small compared to the quark-quark and anti-quark-anti-quark distances, or possibly for the formation of a tetraquark at other particular distances.As an approximate model to understand the results of the lattice simulations for the static potential in terms of overlaps with the various colour singlets, one can consider the two-body potential given by the Casimir scaling,V_CS = ∑_i<j C_ij  V_M ,where V_M is the mesonic static Q Q̅ potential with C_ij = λ_i^a·λ_j^a/-16/3 and compare the results of the simulations with the one of any of the colour singlet states and the Casimir potential given by,V_Ψ = ⟨Ψ | V_CS | Ψ⟩ .Note, for a two body system, the one gluon exchange predicts a static potential proportional to λ^a_i ·λ^a_j.The expectation values ⟨Ψ | C_ij | Ψ⟩ for the possible colour singlet states associated to the QQQ̅Q̅ system are reported in Table <ref>. These numbers are important to obtain a qualitative insight into the result of the simulations.For instance, if for a given state C_ij < 0, we don't expect that the lattice result would give us a strong attraction between theparticles i and j and, therefore, one can expected significant deviations of the static potential relative to the potential V_CS associated to the corresponding colour singlet state. Moreover we consider as well the first excitation of the QQQ̅Q̅, which also depends in the particular distances of the system. 
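The overlaps and Casimir factors quoted above and collected in Table <ref> follow from elementary SU(3) algebra and can be checked numerically. The following self-contained Python sketch (an illustration only, not part of the lattice analysis) represents the colour wave functions as rank-4 tensors, lets the quark slots act with λ^a and the antiquark slots with -(λ^a)^T, and evaluates C_ij = ⟨Ψ|λ_i·λ_j|Ψ⟩/(-16/3):

import numpy as np

# the eight Gell-Mann matrices
lam = np.zeros((8, 3, 3), dtype=complex)
lam[0][0, 1] = lam[0][1, 0] = 1
lam[1][0, 1], lam[1][1, 0] = -1j, 1j
lam[2][0, 0], lam[2][1, 1] = 1, -1
lam[3][0, 2] = lam[3][2, 0] = 1
lam[4][0, 2], lam[4][2, 0] = -1j, 1j
lam[5][1, 2] = lam[5][2, 1] = 1
lam[6][1, 2], lam[6][2, 1] = -1j, 1j
lam[7][0, 0] = lam[7][1, 1] = 1 / np.sqrt(3)
lam[7][2, 2] = -2 / np.sqrt(3)

d = np.eye(3)
eps = np.zeros((3, 3, 3))
for p in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]: eps[p] = 1
for p in [(0, 2, 1), (2, 1, 0), (1, 0, 2)]: eps[p] = -1

# colour wave functions Psi[i,j,k,l]; slots 0,1 are quarks, 2,3 antiquarks
s1324 = np.einsum('ik,jl->ijkl', d, d) / 3
s1423 = np.einsum('il,jk->ijkl', d, d) / 3
o1324 = np.einsum('aik,ajl->ijkl', lam, lam) / (4 * np.sqrt(2))
o1423 = np.einsum('ail,ajk->ijkl', lam, lam) / (4 * np.sqrt(2))
t33   = np.einsum('ijm,klm->ijkl', eps, eps) / (2 * np.sqrt(3))
s66   = np.sqrt(3 / 8) * (s1324 + s1423)

def apply(mat, psi, slot):
    # act with a 3x3 matrix on one tensor slot of psi
    return np.moveaxis(np.tensordot(mat, psi, axes=(1, slot)), 0, slot)

def C(psi, s1, s2):
    # Casimir factor C_ij = <Psi| lambda_i . lambda_j |Psi> / (-16/3);
    # antiquark slots carry the 3bar-representation generators -(lambda^a)^T
    val = 0j
    for a in range(8):
        g1 = lam[a] if s1 < 2 else -lam[a].T
        g2 = lam[a] if s2 < 2 else -lam[a].T
        val += np.vdot(psi, apply(g1, apply(g2, psi, s2), s1))
    return (val / (-16 / 3)).real

print('<1_13 1_24 | 1_14 1_23>     =', np.vdot(s1324, s1423).real)  # 1/3
print('<3bar_12 3_34 | 6_12 6bar_34> =', np.vdot(t33, s66).real)    # 0
pairs = [(0, 1), (2, 3), (0, 2), (1, 3), (0, 3), (1, 2)]  # 12 34 13 24 14 23
for name, psi in [('1_13 1_24', s1324), ('1_14 1_23', s1423),
                  ('8_13 8_24', o1324), ('8_14 8_23', o1423),
                  ('3bar_12 3_34', t33), ('6_12 6bar_34', s66)]:
    print(name, np.round([C(psi, i, j) for i, j in pairs], 4))

Running the sketch reproduces the overlap of Eq. (<ref>), the orthogonality of the diquark-antidiquark and sextet states, and the C_ij expectation values, which should match the entries of Table <ref>; note also that the six C_ij of any colour singlet sum to 2.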
Based in the orthogonality conditions and in a crude Casimir scaling where V_M would be a spatial independent potential, we would expect the pairs of colour singlet states, (|1_13 1_24⟩, |8_13 8_24⟩) , (|1_14 1_23⟩, |8_14 8_23⟩) and(| 3_12 3_34⟩, |6_136_24⟩) to form possible (groundstate, first excited state) pairs. This already goes beyond the simple paradigm of Fig. <ref>. Nevertheless, Eq. (<ref>) is clearly an approximation, and our aim is to compute more rigorous potentials. Previous lattice studies<cit.> show that the static potential for a tetraquark system is not described entirely by a function proportional to this potential.An example of such a kind of potentials is the two-meson potential,V_33 = ⟨1_131_24 | V_CS | 1_131_24⟩ , =V_M(r_13) + V_M(r_24),which we expect to saturate the ground state when the quark-quark and anti-quark-anti-quark distances are large.§ GEOMETRICAL SETUP We aim to measure the static potential for the QQQ̅Q̅ system but also to investigate the transition between thetetraquark and a two meson state,and the transition between the two two-meson states. This computation within lattice QCD simulations requires choosing a particular geometrical setup of thequark system under investigation. In principle, one could choose any of the available geometrical configurations allowedby the hypercubic lattice. In order to study in detail the transitions between the different states, in the current work we opt for restricting our study to the case where the four particles are at the corners of arectangle and look at two particular alignments.In the so-called parallel alignment, see Fig. <ref> (left), the two quarks (anti-quarks) are at adjacent corners of the rectangle. In the anti-parallel alignment, see Fig. <ref> (right), the quarks (anti-quarks) are at the opposite corners of therectangle. §.§ Parallel Alignment of Quarks For this geometry, where the two quarks are at neighbour corners of the rectangle, we can describe the system via the intra-diquark distances,r_12=|𝐱_1-𝐱_2|=|𝐱_3-𝐱_4|,and the inter-diquark distances,r_13=|𝐱_1-𝐱_3|=|𝐱_2-𝐱_4|.Note that for both cases the second equalityholds only due to the particular geometrical configuration considered.If one assumes that quarks are confined within colourless states, this geometrical setup has two limits which allow to study the transition between a tetraquark state and a two meson system. Indeed, when r_12≪ r_13 one expects the ground state of theQQQ̅Q̅system to be that of a tetraquark,while for the opposite case, i.e. for r_13≪ r_12, one expects the system, i.e. its potential, to behave as a two meson system.For this geometrical setup, in the evaluation of the static potential we consider the basis of operators shown in Fig. <ref>.They are associated with a tetraquark operator (left in the figure) and a two-meson operator (right in the figure), the two ground state configurationsexpected for this particular geometry.§.§ Anti-parallel Alignment of Quarks For the anti-parallel alignment of quarks described in Fig. <ref> (right), we take as distance variables,r_13 = |𝐱_1-𝐱_3|=|𝐱_2-𝐱_4|, r_14 = |𝐱_1-𝐱_4|=|𝐱_2-𝐱_3| ,where, again, the second equalities are valid due to the particular characteristics of the geometrical distribution of quarks and anti-quarks. For this geometrical setup, one expects the ground state of the system when r_13≪ r_14 and r_14≪ r_13 to be dominated by the two possible independent two-meson states. For the computation of the static potential we use the basis of operators shown in Fig. 
<ref> that are associated with the two two-meson operators. § COMPUTING THE STATIC POTENTIAL For the computation of the static potential, including the groundstate and the first excited state, we rely on a basis of two operators 𝒪_i for each of the geometrical setupsdiscussed in Sec. <ref>.Defining the correlation matrix,M_ij = ⟨𝒪_i(0)^† 𝒪_j(t)⟩= ∑_nc_in^*c_jne^-V_nt ,where ⟨⋯⟩ stands for vacuum expectation value, c_in=⟨ n|𝒪_i|0⟩ and |n ⟩ are the eigenstates of the Hamiltonian of the system, the determination of the potential requires the knowledge of the solutions of the generalizedeigenvalue problem M_ij(t)a_j(t) = λ_k(t) M_ij(t_0) a_j(t) .In our calculation, we assume that the creation of an excited state out of the vacuum occurs at t = 0.From the generalized eigenvalues λ_k, the energy levels of system V_k can be estimated from the plateaux on the effective mass given by,M_eff(t)= logλ_k(t)/λ_k(t+1)=V_k+𝒪(e^-(V_k+1-V_k)t).In practice, the effective mass plateaux are identified fitting to a constantboth generalized eigenvalues. In this way, one is able to compute both the static potential for the ground state and the first excited state of the system. As described above, the basis of operators chosen to compute V depends on the geometry of the systemand on the expected ground states. For the anti-parallel alignment, we use two meson-meson operators, while for the parallel alignment a meson-meson operator and a diquark-antidiquarkoperator, i.e. a 3̅_123_34 colour configuration, are used to compute the correlation matrix.In the case where the quarks are in the anti-parallel alignmentthe operators used to compute the potential are,𝒪_13,24 = 1/3Q_1^iL_13^ijQ̅_3^j Q_2^kL_24^klQ̅_4^l ,𝒪_14,23 = 1/3Q_1^iL_14^ijQ̅_4^j Q_2^kL_23^klQ̅_3^l ,where L are Wilson lines connecting the quark. Its representation in terms of closed Wilson loops is given inFig. <ref>. The corresponding correlation matrix reads,M =([W_13W_24 1/3W_1324; 1/3W_1423W_14W_23 ]),where W_i are normalized mesonic Wilson loops W=[U].On the other hand, for the parallel alignment the two operators we consider are, 𝒪_YY= 1/2√(3)Q_1^iQ_2^jϵ_i'j'kL_1a^ii'L_2a^jj'L_ab^kk'ϵ_k'l'm'L_b3^l'lL_b4^m'mQ̅_3^lQ̅_4^m , 𝒪_13,24 =1/3Q_1^iL_13^ijQ̅_3^j Q_2^kL_24^klQ̅_4^l .The closed Wilson loops associated to 𝒪_YY and 𝒪_13,24 are represented in Fig. <ref> and the corresponding correlation matrix is given by,M = ([ W_YY 1/2√(3)W_YY,1324; 1/2√(3)W_1324,YY W_13W_24 ]).§ LATTICE SETUPFrom the static potential we aim to understand the transition between possible configurations of a QQQ̅Q̅ system. Furthermore, we also want to glimpse any possible differences due to the quark dynamics. Therefore, for the computation of V_k we consider two different simulations.Our quenched simulation uses an ensemble of 1199 configurations provided by the PtQCD collaboration <cit.>,generated using the Wilson action in a 24^3 × 48 lattice for a value of β = 6.2. The quenched configurations were generated using GPU's and a combination ofCabbibo-Marinari, pseudo-heatbath and over-relaxation algorithms, and computed in the GPU servers of the PtQCD collaboration. Our full QCD simulation uses a Wilson fermion dynamical ensemble of 156 configurations generated in a 24^3 × 48 lattice and a β = 5.6.In the dynamical ensemble we take κ=0.15825 for hopping parameter, which corresponds to a pion mass of m_π = 383 MeV.For Wilson fermions the deviations from continuum physics are of order 𝒪 (a) in the lattice spacing and, therefore, one can expect relative large systematic errors. 
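At the level of the analysis code, the generalized eigenvalue problem and the effective masses introduced above amount to a handful of linear-algebra calls per time slice. The sketch below is only illustrative: the ensemble-averaged 2x2 correlation matrices corr[t], built from the Wilson loops of the chosen operator basis, are assumed to be provided by the measurement code, and the array names are ours rather than those of any library.

import numpy as np
from scipy.linalg import eigh

def effective_masses(corr, t0=1):
    # corr[t]: ensemble-averaged 2x2 correlation matrix M_ij(t).
    # Solves M(t) a = lambda_k(t) M(t0) a and returns the effective masses
    # a V_k(t) = log(lambda_k(t) / lambda_k(t+1)) for k = 0 (ground state)
    # and k = 1 (first excited state).
    T = len(corr)
    lams = np.empty((T, 2))
    m0 = 0.5 * (corr[t0] + corr[t0].T)       # symmetrised reference matrix
    for t in range(T):
        m = 0.5 * (corr[t] + corr[t].T)
        lams[t] = np.sort(eigh(m, m0, eigvals_only=True))[::-1]
    return np.log(lams[:-1] / lams[1:])      # shape (T-1, 2)

def plateau(meff, tmin, tmax):
    # naive plateau estimate; in a production analysis this is replaced by
    # a correlated fit to a constant, with errors from a jackknife over
    # configurations
    return meff[tmin:tmax].mean(axis=0)

The same extraction applies verbatim to both ensembles, with the Wilson discretisation caveat above kept in mind when interpreting the dynamical results.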
However, we expect that the static potential as measured from the full QCD simulation away from the physical point to be more realistic when compared to the quenched simulation. The full QCD configuration generation has been performed in the Centaurus cluster <cit.> using the Chroma library <cit.>. The Hybrid Monte Carlo integrator scheme has been tuned using the methods described in<cit.>. Then, with both the quenched and full QCD ensembles of configurations, we perform our correlation matrix computations at the PC cluster ANIMAL of the PtQCD collaboration.The Wilson loops at large Euclidean time are decaying exponential functions of the static potential times the Euclidean time and, therefore,for large Euclidean times the Wilson loops are dominated by the statistical noise of the Monte Carlo.A reliable measurement of the static potential requires techniques which reduce the contribution of the noise to the correlation functions used in the evaluation of V.The quality of the measurement of the effective masses depends strongly on the overlap with the ground state of the system. In order to improve the ground state overlap we applied50 iterations of APE smearing <cit.>with w = 0.2 to the spatial links in both configuration ensembles.Furthermore, for the quenched ensemble, to further improve the signal to noise ratio, we used the extended multihittechnique <cit.>. This procedure generalizes the multihit as described in <cit.> by fixing the n^th neighbouring links instead of the first ones when performing the averages of the links.However, this technique has the inconvenient of changing the short distance behaviour of the correlators and, therefore, one should not consider the points with r<r_min. In previous studies with the multihit, r_min=2 was sufficient, but in our study we consider r_min=4. For the dynamical configurations the multihit technique can not be applied and, therefore, we resorted on hypercubic blocking <cit.> with the parameters α_1 = 0.75, α_2 = 0.60 and α_3 = 0.30 to improve the signal to noise ratio.For the conversion into physical units we first evaluate Wilson loops to access the ground state meson static potential on a single axis. In this calculation, we use a variational basis built using four different smearing levels to access the ground state meson static potential. The lattice data for the static meson potential is then fitted to the Cornell potential functional form,V_M(r)=K-γ/r+σ r.The fits for different fitting ranges are reported in Tables <ref> and <ref> for the quenched and the dynamical ensembles, respectively.The fits allows for the evaluation of the physical scale associated to the two ensembles through the Sommer method <cit.>. Indeed, by demanding that,r_0^2dV_M/dr(r_0) = 1.65,where r_0=0.5 fm, the lattice spacing a is measured and we present it inTables <ref> and <ref> for various fitting ranges. The results show that a is fairly independent of the fitting intervals and, in the following,we take a ≃ 0.0681 fm for the quenched data ensemble and a ≃ 0.0775 fm for the dynamical data set.Our QCD lattice spacing is essentially similar to the one obtained with different techniques. It follows that the lattice volumes used in the simulation are ( 1.63fm)^3 × 3.27fm for the quenched case and( 1.86fm)^3 × 3.72fm for the dynamical simulation. For completeness, inFig. <ref> we show the ground state meson potentials for the two ensembles in physical units. 
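For the Cornell parametrisation, the Sommer condition can be solved in closed form, since r_0^2 dV_M/dr(r_0) = γ + σ r_0^2 = 1.65 gives r_0/a = √((1.65-γ)/σ) in lattice units. A minimal end-to-end sketch of this scale-setting step follows; the data points are synthetic placeholders standing in for the measured on-axis potential of Tables <ref> and <ref>.

import numpy as np
from scipy.optimize import curve_fit

def cornell(r, K, gamma, sigma):
    return K - gamma / r + sigma * r

# synthetic stand-in for the measured V_M(r), all in lattice units
r = np.arange(3.0, 12.0)
V = cornell(r, 0.55, 0.30, 0.045) \
    + np.random.default_rng(1).normal(0.0, 0.002, r.size)

(K, gamma, sigma), _ = curve_fit(cornell, r, V, p0=(0.5, 0.3, 0.05))

r0_lat = np.sqrt((1.65 - gamma) / sigma)   # Sommer condition
print(f'r0/a = {r0_lat:.3f},  a = {0.5 / r0_lat:.4f} fm')  # using r0 = 0.5 fm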
§ RESULTS FOR THE STATIC QQQ̅Q̅ POTENTIALIn this section, we report on the results for the static potential with the two different geometries mentionedin Sec. <ref>, and we apply fits with ansatze bases in the string flip-flop potential and in the Casimir scaling. In Fig. <ref>, as an example, we show effective mass plots for the pure gauge simulation (left), full QCD simulation (right) and for the ground state (top) and first excited state (bottom) for a QQ Q̅Q̅ system in the antiparallel geometry.The red curves are the results of fitting the lattice data to measure the static potential. See the appendix for further details on the numerics.We consider the maximum number of points aligned in a horizontal line with acceptable χ^2 / d. o. f..Because the noise reduction technique in the quenched simulation rejects the cases with source distances smaller than 4a, we end up by accepting a few more results in the full QCD case than in the quenched case.§.§ The anti-parallel alignment We start by analyzing the simpler case of the anti-parallel geometry, where the meson-meson systems are expected to have lower energies than the tetraquark system.Our results are plotted in Figs. <ref> and <ref>. Clearly there are two different trends for r_13 < r_14 and for r_13>r_14 and a transition, with mixing, at the point r_13=r_24. Moreover we compare in detail our results with different ansatze. From the string flip-flop paradigm of Fig. <ref> we expectthe ground state of the system to be that of a two meson systemwhen the distance between a quark and an anti-quark, i. e. r_13 or r_14, is much smaller than the quark-quark distance, i.e. r_12. Then, for sufficiently small r_13 and/or r_14 the potential of the ground state of the QQQ̅Q̅ should reproduce the string flip-flop potential,V_0 ≃V_ff = min[ V_MM,V_MM'] , where the two different meson -meson potentials areV_MM =2 V_M(r_13), V_MM' =2V_M(r_14),and V_M is the ground state potential of a meson in Eq. (<ref>). Previous lattice simulations <cit.> confirm that V_0 is compatible with such a result. Deviations from Eq. (<ref>) are expected at intermediate distances together with a smooth transition from one picture to the other, i.e. from the two meson state with valence content Q_1Q̅_3 and Q_2 Q̅_4 to the two meson with valence content Q_1Q̅_4 and Q_2 Q̅_3.On the other hand, for the excited state, we have two possible scenarios. From the string-flip-flop, we would again expect, when the distance between quark and anti-quarka quark and an anti-quark, i. e. r_13 or r_14, is much smaller than the quark-quark distance, i.e. r_12, the system to be that of the next two meson system,V_1?≃max[ V_MM,V_MM'].However, given that the colour wavefunctions of the two mesonic states are not orthogonal, see Eq. (<ref>), and Section II, possibly the excited state is not another mesonic state and, but instead is an octet state, V_1?≃max[ V_88,V_88') ],where we estimate the colour octet potential assuming Casimir scaling, i.e. using the decompositionin Eq. (<ref>) and the values reported onTab. <ref>,V_88 =1/2V_M(√(r_13^2+r_14^2))+7/4V_M(r_14)-1/4V_M(r_13), V_88'=1/2V_M(√(r_13^2+r_14^2))+7/4V_M(r_13)-1/4V_M(r_14), Thus we have two different simple anzatse to interpret our results.The ground state potential V_0 and the first excited state potential V_1 for the quenched and dynamical ensembles arereported in Figs. 
<ref> and <ref>, respectively, together with V_MM, V_MM' and the octet potentials V_88, V_88'.As the figures show, the ground state static potential V_0 as a function of r_14 is compatible with two two meson potentials for small and large values of r_14.Indeed, for all r_13, at small values of r_14 the static potential is compatible with V_MM, while for large r_14V_0 becomes compatible with V_MM'.We show in Table <ref> good fits with the meson-meson potentials.This type of behaviour is well described by the string flip-flop potential V_ff. In the transition region r_13∼ r_14 where also V_MM∼ V_MM', deviations of V_0 from V_MM or V_MM' can be seen.The difference between the ground state potential and the sum of the two meson potentialsin physical units is detailed in Fig. <ref>, and in particular the transition pointr_12 = r_13 is analysed in Fig. <ref>. The results for the quenched simulation are well described assuming an off diagonal term Δ in the correlation matrix, leading to the functional form,V_0(r_13,r_14) = V_MM+V_MM'/2-√((V_MM-V_MM'/2)^2+Δ^2) ,where we may have either,Δ(r_1,r_2)=Δ_0e^-λ(r_1+r_2)/1+c(r_1-r_2)^2or,Δ(r_1,r_2)=Δ_0/1+c(r_1-r_2)^2+d(r_1+r_2)^2 .Eq. (<ref>) interpolates between the two potentials in flip-flop picture of a meson-meson.The fits for the functional forms in Eqs. (<ref>) and (<ref>) are reported in Tables <ref> and <ref>.In order to quantify the deviation from the two limits where the system behaves as a two meson system,we refer that the fits give a Δ(0.5 ,0.5 )≃60, a number to be compared with typical values for the meson potential which are of the order of GeV (see Fig. <ref>). This results shows that the corrections due to Δ to the flip-flop picture are small when the quarks and anti-quarks are in an anti-parallel geometry. The full QCD simulation shows similar results to the quenched QCD simulation.However, the results for V_0 for the full QCD configurations are not described by the same type of functional formgiven in Eq. (<ref>) which reproduces the flip-flop potential at large distances. We found no window where the fits are stable and, therefore, conclude that the dynamical V_0 is not reproduced byEq. (<ref>) with the deviations parametrised by either Eq. (<ref>) or Eq. (<ref>).In what concerns the excited state potential V_1 there are clearly two different regimes for r_13 very different from r_14, but we are not able to find an analytic form compatible with the lattice data, neither for the quenched simulations nor for the full QCD simulations.In bothFigs. <ref>and <ref>, it is clear the static potential V_1 lies between the functional forms of Eq. (<ref>) and Eq. (<ref>). There are subtle differences between Fig. <ref>and <ref>. In general, the full QCD case is closer to the octet expression of Eq. (<ref>) than the quenched QCD case.A fortiori, we are not able as well to find a good ansatze to fit V_1 in the transition region. For a detailed view of the differences for the quenched simulation in this region, see Fig. <ref>. This observed behaviour for V_1 can be understood in terms of adjoint strings. When the quark-anti-quark inside the octets are close to each other, they can be seen externally as a gluon. Therefore, we have a single adjoint string with a tension of σ_A=9/4σ.On the other hand, when the quark and the anti-quark are pulled apart, the adjoint string tends to split intotwo fundamental strings, with a total string tension of 2σ. 
The splitting of the adjoint string gives a repulsive interaction between the quark-anti-quark pairs that form octets in the excited state. This is qualitatively consistent with the behaviour predicted by Casimir scaling, where the potential for a quark and an anti-quark in an octet corresponds to a repulsive interaction.

§.§.§ Mixing angle

For the anti-parallel geometry and for the ground state potential, the lattice results show that the tetraquark is essentially a two meson state. Therefore, one can write the most general ket describing the ground state |u_0⟩ of a QQQ̅Q̅ system as a linear combination of the available colourless states

|u_0⟩ = cosθ |6_12 6̅_34⟩ + sinθ |3̅_12 3_34⟩ = √(3/4) { ( cosθ/√2 + sinθ ) |1_13 1_24⟩ + ( cosθ/√2 - sinθ ) |1_14 1_23⟩ }.

For a pure two-meson state, the mixing angle is either θ = θ_0, for |1_13 1_24⟩, or θ = -θ_0, for |1_14 1_23⟩, with θ_0 = tan^-1(1/√2). For the general case, the angle θ can be estimated using the generalized eigenvectors obtained by solving Eq. (<ref>) with the following operators,

𝒪_S = √(3/8) (𝒪_13,24 + 𝒪_14,23),
𝒪_A = √(3/4) (𝒪_13,24 - 𝒪_14,23).

The results for θ for the quenched simulation can be seen in Fig. <ref>. From the lattice data one can estimate a typical length, or broadness, associated with the transition between the two two-meson states. In the region where |r_13 - r_14| ≲ d_trans, the transition occurs and the ground state is a mixture of the MM and MM' states. We estimate the typical transition length from

d_trans^-1 ∼ dθ(r_13, r_14)/dr_14 evaluated at r_14 = r_13.

For the quenched data, see Fig. <ref>, the derivative stays within 0.36/a and 0.42/a and, therefore, d_trans ∼ 0.16–0.19 fm. For the dynamical simulation, see Fig. <ref>, the typical transition length is essentially the same and we find d_trans ∼ 0.16–0.20 fm. The lattice data for the mixing angle gives a vanishing angle for r_13 = r_14. This means that the ground state for the anti-parallel alignment is given only by |6_12 6̅_34⟩ and has no |3̅_12 3_34⟩ component. The results reported in Figs. <ref> and <ref> show that, in general, a QQQ̅Q̅ system is in a mixture of the two possible colour meson states, and it approaches a meson state as the distance between the pairs of quark and anti-quark becomes much smaller than the distance between quarks or anti-quarks.

§.§ The parallel alignment

For this particular geometry, the static potential was investigated with lattice methods in <cit.>. For the ground state and in the limit where r_12 ≪ r_13, the authors found that the lattice data is compatible with the double-Y (or butterfly) potential,

V_YY = 2K - γ( 1/(2r_12) + 1/(2r_34) + 1/(4r_13) + 1/(4r_24) + 1/(4r_14) + 1/(4r_23) ) + σ L_min,

where γ and K are the estimates of the static meson potential and σ is the fundamental string tension. For the geometry described on the right-hand side of Fig. <ref> and for r_13 > r_12/√3, the butterfly potential simplifies to

V_YY = 2K - γ( 1/r_12 + 1/(2r_13) + 1/(2√(r_12^2+r_13^2)) ) + σ( √3 r_12 + r_13 ).

Moreover, from the expression for the Casimir scaling potential given in (<ref>) and using the results reported in Tab. <ref>, it is possible to define various types of potentials to be compared with the static potential computed from the lattice simulations. The potential associated with the state where the quarks and anti-quarks are in triplet states leads to the so-called triplet-antitriplet or diquark-antidiquark potential,

V_33 = ∑_i<j ⟨3̅_12 3_34| C_ij |3̅_12 3_34⟩ V_M(r_ij),

or, in a form similar to (<ref>),

V_33 = 2K - γ( 1/r_12 + 1/(2r_13) + 1/(2√(r_12^2+r_13^2)) ) + σ( r_12 + (1/2) r_13 + (1/2)√(r_12^2+r_13^2) ).
Similarly, the anti-sextet-sextet potential is given by

V_66 = ∑_i<j ⟨6_12 6̅_34| C_ij |6_12 6̅_34⟩ V_M(r_ij) = (5/4) V_M(r_13) + (5/4) V_M(√(r_12^2+r_13^2)) - (1/2) V_M(r_12),

and the octet-octet potential reads

V_88 = (1/2) V_M(r_12) - (1/4) V_M(r_13) + (7/4) V_M(√(r_12^2+r_13^2)).

The lattice estimates for the ground state and first excited (whenever possible) potentials can be seen in Figs. <ref> and <ref> for the quenched and for the dynamical simulation, respectively. The data shows that for large quark-anti-quark distances, i.e. for large r_13, the static potentials are compatible with a linearly rising function of r_13. This result can be viewed as an indication that the fermions in a tetraquark system are confined particles. For both the pure gauge and dynamical simulations and for small quark-anti-quark distances, i.e. for small r_13 and up to r_13 ≤ r_12, the ground state potential reproduces that of a two meson state V_MM. In this sense, one can claim that for sufficiently small quark-anti-quark distances the ground state of a QQQ̅Q̅ system is a two meson state. For the excited potential, the pure gauge results lie between the double-Y potential (<ref>) and the octet-octet potential (<ref>). However, for the dynamical results, the static potential seems to be closer to V_88 at small and large r_13 and closer to V_YY as r_13 approaches r_12. On the other hand, for sufficiently large r_13, the ground state potential is essentially that of a diquark-antidiquark system V_33 and the system enters its tetraquark phase. Indeed, the ground state potential is given by 2V_M for quark-anti-quark distances up to r_13 = r_12 and is just above V_YY for distances r_13 ≥ r_12 + 1 in lattice units. These results suggest that, for this geometrical setup, the transition from a two meson state towards a tetraquark state occurs at r_13 ∼ r_12 + 1 (in lattice units). As for the dependence of V_0 on r_12, the lattice data suggests that the potential increases with the quark-quark distance and favours V_0 ∼ V_YY for sufficiently large r_12, as was also observed in <cit.>.

For quark models with four-body tetraquark potentials, in particular the string flip-flop potential illustrated in Fig. <ref>, it is very important to quantify the deviation of V_0 from the V_YY ansatz, and we have studied several ansatze for this difference. Clearly V_0 is more attractive than the tetraquark potential V_YY of Eq. (<ref>) reported by previous authors, and this favours the existence of tetraquarks. Adding a negative constant (attractive) to the double-Y potential is not sufficient for a good fit of the lattice data for any of the sets of configurations. Adding a correction to the double-Y potential which is linear in the quark-quark distance,

V_YY^B = V_YY(r_12, r_34) + δK + δσ_12 r_12,

describes the dynamical simulation data quite well, and a fit gives δK = -0.12(3)√σ and δσ_12 = -0.34(5)σ, where σ is the fundamental string tension, with a χ^2/d.o.f. = 0.46 (see the tables in the appendix for details on the fits). The dynamical data for the deviations from V_YY are also compatible with a Coulomb-like correction,

V_YY^C = V_YY(r_12, r_34) + δK + δγ_12/r_12,

with δK = -0.67(4)√σ and δγ_12 = 0.22(3), with a χ^2/d.o.f. = 0.62 (see appendix for details). Such a functional form is not compatible with the lattice data for the pure gauge case. A possible explanation could come from the difference in the statistics of the two ensembles.
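As a numerical illustration of the corrected butterfly ansatze just discussed, the sketch below (our code, not the analysis code used for the fits; K and γ are placeholder Cornell parameters) evaluates the simplified V_YY of the parallel geometry together with the linear and Coulomb-like corrections, using the fitted central values quoted above in units where σ = 1.

```python
import math

K, GAMMA, SIGMA = 0.7, 0.3, 1.0  # illustrative meson-potential parameters

def V_YY(r12, r13):
    """Simplified double-Y (butterfly) potential, valid for r13 > r12/sqrt(3)."""
    diag = math.hypot(r12, r13)
    coulomb = 1.0 / r12 + 1.0 / (2.0 * r13) + 1.0 / (2.0 * diag)
    string = math.sqrt(3.0) * r12 + r13  # minimal double-Y string length
    return 2.0 * K - GAMMA * coulomb + SIGMA * string

def V_YY_B(r12, r13):
    """Linear correction, central fit values: dK = -0.12 sqrt(s), ds12 = -0.34 s."""
    return V_YY(r12, r13) - 0.12 * math.sqrt(SIGMA) - 0.34 * SIGMA * r12

def V_YY_C(r12, r13):
    """Coulomb-like correction, central fit values: dK = -0.67 sqrt(s), dg12 = 0.22."""
    return V_YY(r12, r13) - 0.67 * math.sqrt(SIGMA) + 0.22 / r12

# Both corrected forms are more attractive than the bare butterfly potential.
for r13 in (1.0, 2.0, 4.0):
    print(r13, V_YY(1.0, r13), V_YY_B(1.0, r13), V_YY_C(1.0, r13))
```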
Recall that the number of configurations for the pure gauge ensemble is about ten times larger than for the dynamical simulation and, therefore, the associated statistical errors are much smaller.

Concerning the first excited potential V_1, the data for the pure gauge and for the dynamical fermion simulations follow slightly different patterns. In the quenched simulation and for r_13 < r_12, the potential is close to V_T, and the behaviour for larger values of r_13 does not reproduce any of the potentials considered here. On the other hand, in the dynamical simulation V_1 for small and large values of r_13 is just below the data for the anti-sextet-sextet potential

V_66 = ∑_i<j ⟨6_12 6̅_34| C_ij |6_12 6̅_34⟩ V_M(r_ij),

which, for this geometry, is given by

V_66 = (5/4) V_M(r_13) + (5/4) V_M(√(r_12^2+r_13^2)) - (1/2) V_M(r_12),

and, at intermediate distances where r_13 ∼ r_12, is compatible with the octet-octet potential,

V_88 = (1/2) V_M(r_12) - (1/4) V_M(r_13) + (7/4) V_M(√(r_12^2+r_13^2)).

Further, at very small distances the potential seems to flatten for full QCD, and the data also suggests a flattening or a small repulsive core. Note that, since distances smaller than 4a are not accessible in the quenched simulation, this short-distance effect is not visible there.

§ SUMMARY AND DISCUSSION

In this work the static potential for a QQQ̅Q̅ system was investigated using both quenched and Wilson fermion full QCD simulations for two different geometric setups. The two geometries are designed to investigate sectors where dominantly meson-meson or tetraquark static potentials are expected. The simulations show that whenever one distance is much larger than the other, the ground state potential and the first excited state potential are compatible with a linearly rising function of the distance between constituents, suggesting that quarks and anti-quarks are confined particles. For the distances studied, the quenched and full QCD results are qualitatively similar, and their subtle differences only become clearer when we compare the lattice data with ansatze inspired by the string flip-flop potential and by Casimir scaling.

For the anti-parallel geometry setup, the ground state potential V_0 is approximately described by a sum of two meson potentials, i.e. it is compatible with a string flip-flop type of potential. We take this result as an indication that the QQQ̅Q̅ wave function is given by a superposition of two meson states, and we compute the mixing angle, as a function of the quark-anti-quark distances, which characterizes such a quantum state. The mixing angle shows that the tetraquark system undergoes a transition from one of the meson states to the other configuration as the quark-anti-quark distance increases, and the broadness of this transition has a typical length scale of 0.16–0.20 fm. Moreover, for the quenched simulation, we found an analytical expression which describes the lattice ground state well. The analytical expression is essentially a flip-flop type of potential with corrections, parametrized by Δ(r_1, r_2), which are typically ≲ 10% of the sum of the two meson potentials. Concerning the first excited potential V_1 in the anti-parallel geometry, the results show that for small enough quark-anti-quark distances the potential is just below one of the possible octet-octet potentials, and it approaches a two meson potential from above at large quark-anti-quark distances.
These results for the excited potential can be interpreted in terms of an excited state including a combination of meson-meson and octet-octet states.

For the parallel geometry setup, the ground state potential V_0 is compatible with a diquark-antidiquark potential for large quark-anti-quark distances and with a sum of two meson potentials for small separations. Moreover, the lattice data for the full QCD simulation is compatible with a butterfly type of potential with corrections that we are able to parametrize. For the quenched simulation we found no analytical expressions able to describe the data, but the trend is the same. The interpretation of the first excited potential V_1 for the parallel geometry in terms of possible colour configurations is not as compliant with models as in the anti-parallel geometry. It seems that V_1 for the full QCD simulation is just below the octet-octet potential at small quark-anti-quark distances and approaches the octet-octet potential again at large distances. For the quenched simulation, the interpretation of V_1 in terms of colour components is not so clear, as the lattice data seems to point to a combination of different colour potentials.

Importantly for quark models with four-body tetraquark potentials, in particular for the string flip-flop potential illustrated in Fig. <ref>, we obtain a ground state potential V_0 that is more attractive, by a difference of -300 to -500 MeV, than the butterfly potential reported by previous authors <cit.>, and this favours the existence of tetraquarks. As an outlook, it would be interesting to measure the static QQQ̅Q̅ potentials at larger distances and for different geometries. We leave this for future studies.

The authors are extremely grateful to Nuno Cardoso <cit.> for generating the ensemble of quenched configurations utilized in this work. The authors also acknowledge both the use of CPU and GPU servers of the collaboration PtQCD <cit.>, supported by NVIDIA, CFTP and FCT grant UID/FIS/00777/2013, and the Laboratory for Advanced Computing at the University of Coimbra <cit.> for providing HPC resources that have contributed to the research results reported within this paper. M. C. is supported by FCT under the contract SFRH/BPD/73140/2010. P. J. S. acknowledges support by FCT under contracts SFRH/BPD/40998/2007 and SFRH/BPD/109971/2015.

§ TABLES OF RESULTS
http://arxiv.org/abs/1702.07789v1
{ "authors": [ "P. Bicudo", "M. Cardoso", "O. Oliveira", "P. J. Silva" ], "categories": [ "hep-lat", "hep-ph" ], "primary_category": "hep-lat", "published": "20170224222906", "title": "Lattice QCD static potentials of the meson-meson and tetraquark systems computed with both quenched and full QCD" }
Practical issues in decoy-state quantum key distribution based on the central limit theorem

A.S. Trushechkin, E.O. Kiktenko, and A.K. Fedorov

Steklov Mathematical Institute of Russian Academy of Sciences, Moscow 119991, Russia
National Research Nuclear University MEPhI, Moscow 115409, Russia
Department of Mathematics and Russian Quantum Center, National University of Science and Technology MISiS, Moscow 119049, Russia
Russian Quantum Center, Skolkovo, Moscow 143025, Russia
Bauman Moscow State Technical University, Moscow 105005, Russia
QApp, Skolkovo, Moscow 143025, Russia

Decoy-state quantum key distribution is a standard tool for long-distance quantum communications. An important issue in this field is processing the decoy-state statistics taking into account statistical fluctuations (or "finite-key effects"). In this work, we propose and analyze an option for decoy statistics processing, which is based on the central limit theorem. We discuss such practical issues as the inclusion of the failure probability of the decoy-state statistical estimates in the total failure probability of a QKD protocol and also taking into account the deviations of the binomially distributed random variables used in the estimations from the Gaussian distribution. The results of numerical simulations show that the obtained estimations are quite tight. The proposed technique can be used as a part of post-processing procedures for industrial quantum key distribution systems.

§ INTRODUCTION

Quantum key distribution (QKD), as the main part of quantum cryptography, is known to provide information-theoretic (or unconditional) security of key distribution. However, QKD protocols like BB84 assume the use of single-photon sources <cit.>. In contrast, real-life implementations of QKD setups are based on attenuated laser pulses instead of true single photons <cit.>. This makes QKD vulnerable to various attacks, such as the photon number splitting attack <cit.>. A well-known tool for solving this problem is the decoy-state method, which can be considered a standard technique used in many QKD realizations <cit.>.

The decoy-state method uses laser pulses with different intensities. The intensities are chosen from a certain finite set. The choices for the pulses are kept secret by the legitimate sender (Alice), but are publicly announced after the reception of all pulses by the legitimate receiver (Bob). By analyzing (i) the statistics of reception for pulses with different intensities and (ii) the error rates for different intensities, one can estimate the fraction of single-photon pulses and the error rate for single-photon pulses. In particular, this allows detection of the photon number splitting attack <cit.>.

An important task in the framework of decoy-state QKD is to take into account statistical fluctuations (so-called "finite-key effects"). Several methods are proposed in the literature, including those based on the central limit theorem <cit.>, the Chernoff–Hoeffding method <cit.>, and the improved Chernoff–Hoeffding method <cit.>.
In the present work, we propose and analyze an option for processing decoy-state statistics based on the central limit theorem. Namely, we derive expressions for statistical estimations of the fraction of positions (bits) in the verified key obtained from single-photon pulses and the error rate in such positions. They are further used in calculations of the length of the secret (final) key with a given tolerable failure probability. We also provide the results of numerical simulations, which show that these estimations are quite tight. It should be mentioned that methods based on the central limit theorem are criticized as not sufficiently rigorous <cit.>. However, we estimate the deviations from the Gaussian distribution in a rigorous way using the results of Ref. <cit.>. Another important practical issue that we discuss is the accurate inclusion of the failure probability of the decoy-state statistical estimates into the formula for the total failure probability. We note that our analysis uses the decoy-state QKD protocol described in Ref. <cit.>.

Our work is organized as follows. In Sec. <ref>, we describe the basics of the QKD post-processing procedure. In Sec. <ref>, we present the suggested method for processing of decoy-state statistics based on the central limit theorem. In Sec. <ref>, we use numerical simulations in order to compare the obtained estimations with theoretical limits of the decoy-state QKD protocol. In Sec. <ref>, we estimate the deviations of the random variables used in our processing procedure from the Gaussian distribution. We summarize the main results in Sec. <ref>.

§ QKD POST-PROCESSING PROCEDURE

The operation of a QKD protocol can be divided into several stages. In the first, quantum stage of a QKD protocol, Alice sends quantum states to Bob, who measures them. After the quantum stage Alice and Bob have two binary strings, the so-called raw keys. The second stage is the use of post-processing procedures. Let us recall the basic stages of post-processing the raw keys for the BB84 QKD protocol (for details, see Refs. <cit.>):

* Sifting: Alice and Bob announce the bases they used for the preparation and measurement of quantum states and drop the positions with inconsistent bases from the raw keys. The resulting keys are called the sifted keys. The decoy-state statistics are announced at this stage as well.

* Information reconciliation, also known as error correction: This entails removing discrepancies between Alice's and Bob's sifted keys via communication over the authenticated channel (for recent developments concerning the adaptation of error-correcting codes for QKD, see Ref. <cit.>). Often this stage is completed by a verification procedure: one legitimate side sends a hash tag of his or her key to the other side to ensure the coincidence of their keys after the error correction. The blocks of the sifted keys which fail the verification test are discarded at this stage. The resulting common key is called the verified key.

* Parameter estimation: this is the estimation of the quantum bit error rate (QBER) in the sifted keys. Processing of the decoy-state statistics is also performed at this stage.
* Privacy amplification: the possible information obtained by an eavesdropper (Eve) about the keys is reduced to a negligible value. This is achieved by a special contraction of the verified key into a shorter key. For such contractions, 2-universal hash functions are used. This provides unconditional security against both classical and quantum eavesdroppers <cit.>. The resulting key is called the secret key or the final key. It is the output of a QKD protocol.

Post-processing procedures require communication between Alice and Bob over a classical channel. This channel is not necessarily private (Eve is freely allowed to eavesdrop), but it must be authentic, i.e., Eve can neither change the messages sent via this channel nor send her own messages without being detected. To provide the authenticity of the classical channel, Alice and Bob use message authentication codes. There are unconditionally secure message authentication codes <cit.>.

Classically, the parameter estimation stage precedes the information reconciliation <cit.>. The QBER value is estimated by random sampling from the sifted keys (of course, since it is publicly announced, this sample is discarded from the sifted keys). In our scheme, following Ref. <cit.>, the QBER value is determined after the information reconciliation and verification stages. Clearly, the straightforward comparison of the keys before and after the error correction procedure provides the exact number of corrected errors and the corresponding QBER value. This allows one to avoid discarding a part of the sifted keys. This scheme was also used in the recently suggested symmetric blind information reconciliation method <cit.>.

The results of the statistical analysis of decoy states are used in the privacy amplification stage to calculate the length of the final key (contraction rate) which provides the required degree of security. For the BB84 protocol, the formula is as follows <cit.>:

l_sec = κ̂_1^l l_ver [1 - h(ê_1^u)] - leak_ec + 5 log_2 ε_pa,

where l_ver is the length of the verified key, leak_ec is the amount of information (number of bits) about the sifted keys leaked to Eve during the information reconciliation stage, h(x) = -x log_2 x - (1-x) log_2 (1-x) is the binary entropy function, and ε_pa is a tolerable failure probability for the privacy amplification stage. It can be interpreted as the probability that the privacy amplification stage has not destroyed all of Eve's information on the verified key, so that Eve has partial non-negligible information on the final key. Further, κ̂_1^l is a lower bound on the fraction of bits in the verified key obtained from single-photon pulses, and ê_1^u is an upper bound on the fraction of errors in such positions in the sifted keys. It is assumed that the bits of the verified keys obtained from multiphoton pulses are known to the eavesdropper. The quantity h(ê_1^u) determines the potential knowledge of the bits obtained from single-photon pulses by the eavesdropper. This reflects the essence of QKD: it is impossible to gain knowledge of the bits of the sifted key obtained from single-photon pulses without introducing errors in them. The estimation of κ̂_1^l and ê_1^u is the purpose of the decoy-state statistics analysis, which is given in the next section and is the main subject of the present paper. The failure probability for these estimates (the probability that at least one of these estimates is not true) must not be greater than some value ε_decoy. If l_sec given by Eq.
(<ref>) is positive, then the secret key distribution is possible. If τ is the time needed to generate a verified key of length l_ver, then the secret key rate can be defined as follows:

R_sec = l_sec/τ.

Let us comment on expression (<ref>). Essentially, it is taken from Ref. <cit.>, where a rigorous proof of unconditional security of the BB84 protocol is given. Strictly speaking, the proof was given for the case when the QBER is estimated by random sampling from the sifted keys (see Remark <ref>). However, the scheme in which the QBER is estimated after the information reconciliation simplifies the proof and the formulas, since, in this case, the QBER is estimated not probabilistically but deterministically. Actually, the function ε_pe(ν) in Theorem 3 in Ref. <cit.>, which is the probability that the QBER estimation is incorrect, can be set to zero for all ν.

The total failure probability of the QKD system is the sum of the failure probabilities of each component: verification, authentication, privacy amplification, and the statistical estimations of κ̂_1^l and ê_1^u:

ε_qkd = ε_ver + ε_aut + ε_pa + ε_decoy.

Here ε_ver is the probability that the verification hash tags of Alice and Bob coincide whereas their keys after the information reconciliation do not, and ε_aut is the probability that the message authentication codes do not detect Eve's interference in the classical channel. The meaning of the total failure probability ε_qkd is as follows: the key generated by the QKD protocol is indistinguishable from the perfectly secure key in any possible context (any possible application of this key) with an exception probability of at most ε_qkd (see Ref. <cit.>). More precisely, ε_qkd is the trace distance between the actual joint classical-quantum state of Alice, Bob, and Eve and the ideal one, which corresponds to the case when Alice and Bob either have aborted the protocol or Alice's and Bob's keys coincide and are completely uncorrelated with Eve's state. Treating ε_qkd as a failure probability, the real state coincides with the ideal one with probability at least 1-ε_qkd <cit.>. The derivation of formula (<ref>) is given in the Appendix.

§ DECOY-STATE STATISTICS PROCESSING

Let us describe the estimations of κ̂_1^l and ê_1^u used in Eq. (<ref>). Here we adopt a finite-key version of the decoy statistics analysis described in Ref. <cit.>. Namely, in each round Alice sends to Bob a fixed number N of pulses. Each pulse has the "signal intensity" μ>0 with probability p_μ, or the "decoy intensity" ν>0 with probability p_ν, or the "vacuum intensity" λ≥0 with probability p_λ = 1 - p_μ - p_ν. We note that the intensity of the "vacuum state" λ is close to zero, but not exactly zero, due to technical reasons. In our consideration we assume λ = 0.01. In fact, the "vacuum intensity" is the second decoy intensity. It is required that λ < ν/2 and λ + ν < μ. Signal pulses are used to establish the raw key (and then the sifted, verified, and secret keys), whereas decoy pulses are used to estimate κ̂_1^l and ê_1^u. Let N be the total number of pulses sent by Alice; N_μ, N_ν, and N_λ the numbers of signal, decoy, and vacuum pulses sent by Alice (generally, they do not coincide with p_μN, p_νN, and p_λN due to statistical fluctuations, but, of course, N_μ + N_ν + N_λ = N); and n_μ, n_ν, n_λ the numbers of the corresponding pulses registered by Bob.
Further, let Q_μ be the probability that a signal pulse is registered by Bob, Q_ν and Q_λ the corresponding probabilities for decoy and vacuum pulses, and Q_1 the joint probability that a pulse contains a single photon and that it is registered. Then

θ_1 = Q_1/Q_μ

is the probability that a bit in the sifted (as well as verified) key is obtained from a single-photon pulse. Finally, let κ_1 be the actual fraction of bits in the verified key obtained from single-photon pulses (it may differ from θ_1 due to statistical fluctuations).

We note that the random variables N_α (α ∈ {μ,ν,λ}) and l_ver κ_1 are binomially distributed. Indeed, each of the N pulses is, for example, a signal pulse with probability p_μ independently of the other pulses, and the number of signal pulses N_μ is not fixed (random). Another probability distribution widely used in QKD is the hypergeometric distribution, arising from sampling without replacement. Here (like, e.g., in Ref. <cit.>) we do not use sampling without replacement, but use the independent random choice scheme, giving rise to the binomial distribution for the number of choices of a certain alternative (type of pulse). If a random variable X follows the binomial distribution with the number of experiments n and the success probability in one experiment p, then we will write X ∼ Bi(n,p). Then N_α ∼ Bi(N, p_α) and l_ver κ_1 ∼ Bi(l_ver, θ_1). If the value of N_α is known and fixed (i.e., if we treat it as non-random), then the n_α are also binomially distributed: n_α ∼ Bi(N_α, Q_α). Indeed, each pulse of a given type is detected with probability Q_α independently of the other pulses of this type.

In order to estimate κ_1 from below, we should estimate θ_1 from below. To do this, we should estimate Q_1 from below and Q_μ from above. According to Ref. <cit.>, the lower bound for Q_1 is as follows:

Q_1 ≥ μe^-μ / [ν(1-ν/μ) - λ(1-λ/μ)] × [ Q_ν e^ν - Q_λ e^λ - ((ν^2-λ^2)/μ^2)(Q_μ e^μ - Y_0^l) ],

where

Y_0^l = max{ (ν Q_λ e^λ - λ Q_ν e^ν)/(ν-λ), 0 }

is the lower bound for the probability that Bob obtains a click event provided that the pulse contains no photons. The estimates of Q_μ, Q_ν, and Q_λ are given by

Q̂_α = n_α/N_α,  α = μ, ν, λ.

Here and in the following, the notation without a "hat" denotes the true value of a probability (a parameter in the binomial distribution), while the notation with a "hat" denotes its statistical estimate (i.e., a random variable). Due to the central limit theorem, the distribution of the random variable Q̂_α is well approximated by the normal distribution with mean Q_α and standard deviation √(Q_α(1-Q_α)/N_α). If we denote φ = Φ^-1(1 - ε_decoy/a), where Φ^-1 is the quantile function for the standard normal distribution and a is some constant to be specified, then

P[ Q_α - Q̂_α ≥ φ√(Q_α(1-Q_α)/N_α) ] ≤ ε_decoy/a.

This gives lower and upper bounds on Q_α:

Q̂_α^u,l = Q̂_α ± φ√(Q̂_α(1-Q̂_α)/N_α).

Each bound is satisfied with probability not less than 1 - ε_decoy/a. We will need the upper bound on Q_μ and two-sided bounds on Q_ν and Q_λ. These five bounds are simultaneously satisfied with probability not less than 1 - 5ε_decoy/a. Substitution of these bounds into Eq.
(<ref>) yields

Q_1 ≥ μe^-μ / [ν(1-ν/μ) - λ(1-λ/μ)] × [ Q̂_ν^l e^ν - Q̂_λ^u e^λ - ((ν^2-λ^2)/μ^2)(Q̂_μ^u e^μ - Ŷ_0^l) ] ≡ Q̂_1^l,

where

Ŷ_0^l = max{ (ν Q̂_λ^l e^λ - λ Q̂_ν^u e^ν)/(ν-λ), 0 }.

Thus, we arrive at the following expression:

θ_1 ≥ Q̂_1^l / Q̂_μ^u = θ̂_1^l

with probability not less than 1 - 5ε_decoy/a. The actual fraction κ_1 is estimated from below as

κ_1 ≥ θ_1 - φ√(θ_1(1-θ_1)/l_ver)

with probability not less than 1 - ε_decoy/a, or

κ_1 ≥ θ̂_1^l - φ√(θ̂_1^l(1-θ̂_1^l)/l_ver) ≡ κ̂_1^l

with probability not less than 1 - 6ε_decoy/a. We have obtained one of the two estimates participating in Eq. (<ref>).

Let us now find an upper bound for the error rate of the single-photon states. Though the formulas for these bounds are known (see Ref. <cit.>), the use of the binomial distribution should be analyzed in more detail. If Eve performs a coherent attack, then the errors in different positions of the keys cannot be treated as independent events. However, we are going to show that we can still use the binomial distribution. Let n_i and e_i denote the number of bits in the verified key obtained from the i-photon pulses and the error rate in the i-photon states, respectively (i.e., n_i e_i is the number of errors in the bits obtained from the i-photon pulses). Also let e_μ denote the total error rate (QBER) for signal pulses. Then

l_ver e_μ = ∑_i=0^∞ n_i e_i ≥ e_0 n_0 + e_1 n_1,

e_1 ≤ (l_ver e_μ - e_0 n_0)/n_1 = (e_μ - e_0 n_0/l_ver)/κ_1,

where we have used n_1 = κ_1 l_ver, by the definition of κ_1. Obviously, the probability of error for a vacuum pulse is 1/2 for both natural noise and Eve's attack. Indeed, if there is no eavesdropping, then only dark counts can cause a click event on Bob's side. If Bob's detectors are identical, then they have equal probabilities of a click. Moreover, if they are memoryless (after a certain dead time), then the error events in vacuum pulses are independent of each other. Now consider the case of the presence of Eve. She has no way of knowing the bit sent by Alice, since the pulse contains no photons. The only thing she can do is send her own pulse. But since she does not know Alice's bit, her bit can be either correct or not with equal probabilities and independently of the correctness of other bits. Hence, e_0 n_0 ∼ Bi(n_0, 1/2) for a fixed n_0. But n_0 is also a binomially distributed random variable. For each of the N_μ signal pulses, the joint probability that a signal pulse contains zero photons, the basis choices of Alice and Bob coincide, and Bob has a click event is e^-μ Y_0/2 [see Eq. (<ref>)]. Hence, n_0 ∼ Bi(N_μ, e^-μ Y_0/2), e_0 n_0 ∼ Bi(N_μ, e^-μ Y_0/4),

e_0 n_0 ≥ N_μ e^-μ Y_0^l/4 - φ√( N_μ (e^-μ Y_0^l/4)(1 - e^-μ Y_0^l/4) ) ≡ υ,

and

e_1 ≤ (e_μ - υ/l_ver)/κ̂_1^l ≡ ê_1^u

with probability not less than 1 - ε_decoy/a. Thus, all statistical estimates are satisfied with probability not less than 1 - 7ε_decoy/a. Since they must be satisfied with probability not less than 1 - ε_decoy, we set a = 7.

§ SIMULATION RESULTS

We then consider the realization of the described procedure on the realistic "plug-and-play" QKD setup <cit.> and compare the obtained results with theoretical limits.
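Before listing the setup parameters, the estimation chain of the previous section can be collected into a single routine. The following sketch is our illustrative Python code (function and variable names are ours); it computes the bounds Q̂_α^u,l, Ŷ_0^l, Q̂_1^l, θ̂_1^l, and the final estimates κ̂_1^l and ê_1^u from the observed counts.

```python
import math
from scipy.stats import norm

def decoy_estimates(n, N, mu, nu, lam, l_ver, e_mu, eps_decoy=1e-12, a=7):
    """Central-limit-theorem decoy-state bounds.

    n, N: dicts of detected / sent pulse counts, keyed by 'mu', 'nu', 'lam'.
    Returns the pair (kappa_1 lower bound, e_1 upper bound)."""
    phi = norm.ppf(1.0 - eps_decoy / a)  # standard normal quantile

    def bound(key, sign):
        # Q_hat +/- phi * sqrt(Q_hat (1 - Q_hat) / N_alpha)
        q = n[key] / N[key]
        return q + sign * phi * math.sqrt(q * (1.0 - q) / N[key])

    Q_mu_u = bound('mu', +1)
    Q_nu_l, Q_nu_u = bound('nu', -1), bound('nu', +1)
    Q_lam_l, Q_lam_u = bound('lam', -1), bound('lam', +1)

    # Lower bound on the zero-photon yield Y_0.
    Y0_l = max((nu * Q_lam_l * math.exp(lam) - lam * Q_nu_u * math.exp(nu))
               / (nu - lam), 0.0)

    # Lower bound on Q_1, the joint single-photon emission/detection probability.
    pref = mu * math.exp(-mu) / (nu * (1 - nu / mu) - lam * (1 - lam / mu))
    Q1_l = pref * (Q_nu_l * math.exp(nu) - Q_lam_u * math.exp(lam)
                   - (nu**2 - lam**2) / mu**2 * (Q_mu_u * math.exp(mu) - Y0_l))

    # Single-photon fraction: theta_1 lower bound, then finite-key bound on kappa_1.
    theta1_l = Q1_l / Q_mu_u
    kappa1_l = theta1_l - phi * math.sqrt(theta1_l * (1 - theta1_l) / l_ver)

    # Lower bound on the number of errors contributed by zero-photon pulses.
    p0 = math.exp(-mu) * Y0_l / 4.0
    upsilon = N['mu'] * p0 - phi * math.sqrt(N['mu'] * p0 * (1 - p0))
    e1_u = (e_mu - upsilon / l_ver) / kappa1_l
    return kappa1_l, e1_u
```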
The parameters of the QKD setup implementation are as follows: number of pulses in a train 5×10^4, repetition rate of pulses in a train 300 MHz, storage line length 17 km, detector efficiency 10%, detector dead time 1 μs, dark count probability 3×10^-7, additional losses on Bob's side 5 dB, fiber attenuation coefficient 0.2 dB/km, and interference visibility 97%. The parameters of the post-processing procedure are as follows: ε_ver = ε_aut = ε_pa = ε_decoy = 10^-12. The information leakage during the error correction and verification procedures is <cit.>

leak_ec = f_ec h(e_μ),  f_ec = 1.15.

The length of the processed block l_ver is limited to 16 Mbit or to the length of the sifted key accumulated after 30 min of operation of the QKD setup.

We use the differential evolution method for the numerical optimization of the signal and decoy intensities (μ and ν) together with their generation probabilities (p_μ and p_ν). The "vacuum intensity" is fixed at the level λ = 0.01, and its generation probability is given by p_λ = 1 - p_μ - p_ν. The results are presented in Fig. <ref> and Fig. <ref>.

We then compare the results given by our approach with the theoretical limit, where we neglect statistical fluctuations and assume that we know the exact values of κ_1 and e_1 (i.e., there is no need for statistical estimates and decoy states, p_μ = 1, p_ν = p_λ = 0). In the theoretical limit, the secret key rate is given by the expression

R_sec^* = R_sift { κ_1 [1 - h(e_1)] - f_ec h(e_μ) },

where R_sift is the sifted key rate. The quantities R_sift, κ_1, e_1, and e_μ in Eq. (<ref>) depend on the intensity of the signal pulses (μ^* for the theoretical limit case). These quantities are obtained from the numerical optimization of the intensities for various communication distances. The results of the comparison of our approach and the theoretical limit are presented in Fig. <ref>. Fig. <ref>(a) shows that the proposed approach rather closely approximates the theoretical limit at distances up to 100–120 km. The optimal operation of post-processing procedures at such distances is important, in particular, for inter-city QKD in future quantum networks. We note that the sifted key rate in the theoretical limit is higher due to the higher optimal signal intensity [Fig. <ref>(b)] and the absence of decoy states. Also note that the optimal fraction of decoy states in the proposed approach is relatively small, about 5% for distances less than 100 km [see Fig. <ref>(c)].

§ DEVIATIONS FROM THE GAUSSIAN DISTRIBUTION

In the discussed approach the statistical fluctuations are treated in the framework of the central limit theorem, i.e., we assume that the binomially distributed random variables (the fraction of positions obtained from single-photon pulses κ_1 and the error rate in these positions e_1) are well approximated by the Gaussian distribution. This approach is criticized as not sufficiently rigorous <cit.>, in contrast to other approaches which are more rigorous but give slightly worse estimates. Now we are going to estimate the deviations from the Gaussian distribution. Let X ∼ Bi(n,p). Then, according to Ref. <cit.>,

C_n,p(k) ≤ P[X ≤ k] ≤ C_n,p(k+1),

where C_n,p(0) = (1-p)^n, C_n,p(n) = 1 - p^n, and

C_n,p(k) = Φ( sgn(k/n - p) √(2nH(k/n, p)) ) for k = 1, …, n-1,

Φ(x) = (1/√(2π)) ∫_-∞^x e^-t^2/2 dt,

H(x,p) = x ln(x/p) + (1-x) ln((1-x)/(1-p)),

and sgn(x) = x/|x| for x ≠ 0 and sgn(0) = 0. If k = np + φ√(np(1-p)), then it is straightforward to show using Taylor's theorem (for n → ∞) that

C_n,p(k) = Φ(φ) - φ^2(1-2p)e^-φ^2/2 / (6√(2πnp(1-p))) + O(1/n),

C_n,p(k+1) = Φ(φ) - φ^2(1-2p)e^-φ^2/2 / (6√(2πnp(1-p))) + e^-φ^2/2/√(2πnp(1-p)) + O(1/n).
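These expansions can be checked numerically against the exact bound. The sketch below (ours) evaluates the Zubkov–Serov tail bound at k = np + φ√(np(1-p)) and compares it with the Gaussian tail, giving a rough estimate of ε'_decoy - ε_decoy for the worst-case parameters considered next.

```python
import math
from scipy.stats import norm

def zs_upper_tail(n, p, k):
    """1 - C_{n,p}(k): Zubkov-Serov bound related to the upper tail of Bi(n, p),
    for integer 1 <= k <= n-1."""
    x = k / n
    H = x * math.log(x / p) + (1.0 - x) * math.log((1.0 - x) / (1.0 - p))
    return norm.sf(math.copysign(math.sqrt(2.0 * n * H), x - p))

phi = norm.ppf(1.0 - 1e-12 / 7.0)  # approximately 7.30, as in the text
for n, p in [(10**8, 1e-7), (10**5, 0.47)]:  # worst cases quoted below
    k = math.floor(n * p + phi * math.sqrt(n * p * (1.0 - p)))
    # eps'/7 is approximately the exact tail; eps/7 is the Gaussian tail.
    print(n, p, 7.0 * (zs_upper_tail(n, p, k) - norm.sf(phi)))
```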
We see that the deviations from the Gaussian distribution become significant for small n or for p close to 0 or 1. In Equations (<ref>), (<ref>), and (<ref>) we took φ such that

Φ(φ) = 1 - ε_decoy/7

[see Eq. (<ref>)] for ε_decoy = 10^-12, i.e., φ ≈ 7.30. But the precise value ε'_decoy of the failure probability is given by

P[X ≤ np + φ√(np(1-p))] = 1 - ε'_decoy/7.

Eqs. (<ref>) can be used to estimate the difference between the precise value ε'_decoy and the required value ε_decoy. Let us consider the worst-case scenarios for estimates (<ref>), (<ref>), and (<ref>). In Eq. (<ref>), the minimal possible N_α is N_λ ∼ 10^8 (when the length is close to zero), and all Q̂_α are at least of the order 10^-7 (they are bounded from below by the dark count probability with dead-time corrections). The substitution of these worst-case parameters n = 10^8 and p = 10^-7 into Eq. (<ref>) gives

ε'_decoy - ε_decoy ≈ 2·10^-15 ≪ ε_decoy = 10^-12.

In Eq. (<ref>), the worst-case parameters are n = l_ver = 10^5 and p = θ_1 ≈ 0.47 (corresponding to the maximal length 155 km):

ε'_decoy - ε_decoy ≈ 1.5·10^-16 ≪ ε_decoy = 10^-12.

In Eq. (<ref>), we always have Y_0^l = 0, so, in fact, we do not perform the statistical estimation and use the trivial estimate n_0 e_0 ≥ 0. We see that the precise value ε'_decoy may exceed the required value ε_decoy only by a negligible quantity. The higher-order corrections to the Gaussian distribution with respect to n^-1/2 are even smaller. Thus, for practical parameters one can use the proposed formulas based on the Gaussian distribution. The derivation of a general procedure for decoy-state statistics processing based on the rigorous formula (<ref>) will be a subject of subsequent work.

§ CONCLUDING REMARKS

We have presented a method for decoy-state statistics processing. The final formulas are (<ref>) and (<ref>), which give the statistical estimates used in the formula for the length of the final secret key (<ref>). We also claim that the failure probability ε_decoy for the decoy-state statistical estimates should be treated as an additional term in the total failure probability ε_qkd in Eq. (<ref>). Usually, one simply puts ε_decoy = ε_pa and does not treat it as an additional term in the total failure probability. From the point of view of the rigorous theory, this is not correct: formula (<ref>) provides a failure probability of at most ε_pa only for true single-photon sources or, at least, when we can estimate κ̂_1^l and ê_1^u with certainty. If we cannot estimate these quantities with certainty, the failure probability of the estimate should be included as an additional term in the total failure probability of the QKD system. The Appendix is devoted to the rigorous derivation of Eq. (<ref>) for the total failure probability. Finally, we have shown that, for practical parameters, the deviations of the binomially distributed random variables used in the decoy-state statistics processing from the Gaussian distribution can be neglected.

The suggested option for decoy-state processing is implemented in the proof-of-principle realization of the post-processing procedure for industrial QKD systems <cit.>, which is freely available under the GNU general public license (GPL) <cit.>. This procedure is used in the modular QKD setup described in Ref. <cit.>.

§ ACKNOWLEDGMENTS

We are grateful to Y.V. Kurochkin, A.S. Sokolov, and A.V. Miller for useful discussions. The work of A.S.T. and E.O.K. was supported by the grant of the President of the Russian Federation (project MK-2815.2017.1). A.K.F. is supported by the RFBR grant (17-08-00742).

§ APPENDIX.
DERIVATION OF THE FORMULA FOR THE FAILURE PROBABILITY

This section is devoted to the derivation of formula (<ref>). According to the results of Ref. <cit.>,

ε_qkd = ε_corr + ε_sec,

where ε_corr and ε_sec stand for correctness (coincidence of Alice's and Bob's final keys) and secrecy (ignorance of Eve about the final key), respectively. Namely, ε_corr is the probability that Alice's and Bob's keys do not coincide but the protocol was not aborted, and ε_sec is the trace distance between the actual joint classical-quantum state of Alice and Eve and the ideal one.

In our case, ε_corr = ε_ver + ε_aut, i.e., Alice's and Bob's final keys coincide if their verification hash tags and their authentication hash tags coincide (otherwise Eve could interfere in their communication and fake the verification tags). So, the failure probability for correctness is the sum of the failure probabilities of the verification and the authentication. Further, in the case of single-photon sources, ε_sec = ε_pa (see Ref. <cit.>). In this case we can estimate κ̂_1^l and ê_1^u in Eq. (<ref>) with certainty. If we cannot estimate these quantities with certainty, the failure probability of the estimate should be included as an additional term in the total failure probability of the QKD system:

ε_sec = ε_pa + ε_decoy.

A mathematical fact justifying expression (<ref>) is as follows. Consider two classical-quantum states:

ρ_XY = ∑_x∈𝒳 p_x |x⟩⟨x| ⊗ ρ_Y|x,
σ_XY = ∑_x∈𝒳 p_x |x⟩⟨x| ⊗ σ_Y|x,

where 𝒳 is a finite set and the p_x are probabilities. Let, further, Ω ⊂ 𝒳 (an event), Ω̅ = 𝒳∖Ω, p(Ω) = ∑_x∈Ω p_x,

ρ_XE|Ω = (1/p(Ω)) ∑_x∈Ω p_x |x⟩⟨x| ⊗ ρ_E|x,
σ_XE|Ω = (1/p(Ω)) ∑_x∈Ω p_x |x⟩⟨x| ⊗ σ_E|x,
ρ_XE = p(Ω) ρ_XE|Ω + (1-p(Ω)) ρ_XE|Ω̅,
σ_XE = p(Ω) σ_XE|Ω + (1-p(Ω)) σ_XE|Ω̅,

D(ρ_XE|Ω, σ_XE|Ω) ≤ ε_1,

where D is the trace distance (for details, see Refs. <cit.>), and p(Ω) ≥ 1-ε_2. Then

D(ρ_XE, σ_XE) = p(Ω) D(ρ_XE|Ω, σ_XE|Ω) + (1-p(Ω)) D(ρ_XE|Ω̅, σ_XE|Ω̅) ≤ p(Ω)ε_1 + (1-p(Ω)) ≤ ε_1 + ε_2,

where we have used that the trace distance does not exceed unity. In our case, 𝒳 is the set of all pairs (κ_1, e_1), Ω is the subset corresponding to the event (κ_1 ≥ κ̂_1^l and e_1 ≤ ê_1^u), ε_1 = ε_pa, ε_2 = ε_decoy, (<ref>) is the trace distance between the actual and the ideal final states of the protocol conditioned on the event that the statistical estimates of κ_1 and e_1 are true, and, finally, (<ref>) is the total trace distance between the actual and the ideal final states of the protocol.

In Ref. <cit.>, another formula relating the failure probability and the trace distance is used: if ε is the failure probability, then the trace distance between the actual and the ideal state is bounded from above by √(ε(2-ε)), instead of the linear formulas (<ref>) and (<ref>). The reason is the difference between the techniques of the security proofs. In Ref. <cit.>, the entanglement-distillation technique was used. Its final result is expressed in terms of the fidelity F(ρ,σ) between the actual joint state ρ of Alice, Bob, and Eve and the ideal one σ. If F(ρ,σ) ≥ 1-ε, then the trace distance is bounded by

D(ρ,σ) ≤ √(1-F(ρ,σ)^2) ≤ √(ε(2-ε)).

In contrast, the proofs of Refs. <cit.> are information-theoretic and their results are direct bounds on the trace distance (the leftover hash lemma is essentially the main ingredient yielding such a bound).

BB84 C.H. Bennett and G. Brassard, https://researcher.watson.ibm.com/researcher/files/us-bennetc/BB84highest.pdf in Proceedings of the IEEE International Conference on Computers, Systems and Signal Processing, Bangalore, India (IEEE, New York, 1984), p. 175.
Gisin N. Gisin, G. Ribordy, W. Tittel, and H.
Zbinden, http://dx.doi.org/10.1103/RevModPhys.74.145Rev. Mod. Phys. 74, 145 (2002).Scarani V. Scarani, H. Bechmann-Pasquinucci, N.J. Cerf, M. Dusek, N. Lütkenhaus, and M. Peev, http://dx.doi.org/10.1103/RevModPhys.81.1301Rev. Mod. Phys. 81, 1301 (2009).LoRev H.-K. Lo, M. Curty, and K. Tamaki, https://dx.doi.org/10.1038/nphoton.2014.149Nat. Photonics 8, 595 (2014).LoRev2 E. Diamanti, H.-K. Lo, and Z. Yuan,https://dx.doi.org/10.1038/npjqi.2016.25npj Quant. Inf. 2, 16025 (2016).Huttner B. Huttner, N. Imoto, N. Gisin, and T. Mor, http://dx.doi.org/10.1103/PhysRevA.51.1863Phys. Rev. A 51, 1863 (1995).Brassard G. Brassard, N. Lütkenhaus, T. Mor, and B.C. Sanders, http://dx.doi.org/10.1103/PhysRevLett.85.1330Phys. Rev. Lett. 85, 1330 (2000).Decoy01 W.-Y. Hwang,https://doi.org/10.1103/PhysRevLett.91.057901Phys. Rev. Lett. 91, 057901 (2003).Decoy02 H.-K. Lo, X. Ma, and K. Chen,https://doi.org/10.1103/PhysRevLett.94.230504Phys. Rev. Lett. 94, 230504 (2005).Decoy03 X.-B. Wang,http://dx.doi.org/10.1103/PhysRevLett.85.1330Phys. Rev. Lett. 94, 230503 (2005).Decoy04 X. Ma, B. Qi, Y. Zhao, and H.-K. Lo, https://doi.org/10.1103/PhysRevA.72.012326Phys. Rev. A 72, 012326 (2005).Curty1 M. Curty, F. Xu, W. Cui, C.C.W. Lim, K. Tamaki, and H.-K. Lo,https://doi.org/10.1038/ncomms4732Nat. Commun. 5, 3732 (2014).Curty2 C.C.W. Lim, M. Curty, N. Walenta, F. Xu, and H. Zbinden,https://doi.org/10.1103/PhysRevA.89.022307Phys. Rev. A 89, 022307 (2014).Ma Z. Zhang, Q. Zhao, M. Razavi, and X. Ma, https://doi.org/10.1103/PhysRevA.95.012333Phys. Rev. A 95, 012333 (2017).ZubkovSerov A.M. Zubkov and A.A. Serov, https://doi.org/10.1137/S0040585X97986138Theory Probab. Appl. 57, 539 (2013); http://arxiv.org/abs/1207.3838arXiv:1207.3838.SECOQC M. Peev et al.,http://dx.doi.org/10.1088/1367-2630/11/7/075001New J. Phys. 11, 075001 (2009).Gisin2 N. Walenta, A. Burg, D. Caselunghe, J. Constantin, N. Gisin, O. Guinnard, R. Houlmann, P. Junod, B. Korzh, N. Kulesza, M. Legré,C.C.W. Lim, T. Lunghi, L. Monat, C. Portmann, M. Soucarros, P. Trinkler, G. Trolliet, F. Vannel, and H. Zbinden, http://dx.doi.org/10.1088/1367-2630/16/1/013047New J. Phys. 16 013047 (2014).Fedorov E.O. Kiktenko, A.S. Trushechkin, Y.V. Kurochkin, and A.K. Fedorov, http://dx.doi.org/10.1088/1742-6596/741/1/012081J. Phys. Conf. Ser. 741, 012081 (2016).Kiktenko2 E.O. Kiktenko, A.S. Trushechkin, C.C.W. Lim, Y.V. Kurochkin, and A.K. Fedorov, https://arxiv.org/abs/1612.03673arXiv:1612.03673.ComposPA R. Renner and R. König, http://dx.doi.org/10.1007/978-3-540-30576-7_22Lect. Notes Comp. Sci. 3378, 407 (2005).WegCar M.N. Wegman and J.L. Carter, http://dx.doi.org/10.1016/0022-0000(81)90033-7J. Comp. Syst. Sci. 22, 265 (1981).Krawczyk H. Krawczyk, http://dx.doi.org/10.1007/3-540-48658-5_15Lect. Notes Comp. Sci. 839, 129 (1994).Krawczyk2 H. Krawczyk,http://dx.doi.org/10.1007/3-540-49264-X_24Lect. Notes Comp. Sci. 921, 301 (1995).Tomamichel M. Tomamichel and A. Leverrier, http://arxiv.org/abs/1506.08458arXiv:1506.08458.TomRenner M. Tomamichel, C.C.W. Lim, N. Gisin, and R. Renner, http://dx.doi.org/10.1038/ncomms1631Nat. Commun. 3, 634 (2012).Compos M. Ben-Or, M. Horodecki, D.W. Leung, D. Mayers, and J. Oppenheim, http://dx.doi.org/10.1007/978-3-540-30576-7_21Lect. Notes Comp. Sci. 3378, 386 (2005).Sokolov A.S. Sokolov, A.V. Miller, A.A. Kanapin, V.E. Rodimin, A.V. Losev, A.S. Trushechkin, E.O. Kiktenko, A.K. Fedorov, V.L. Kurochkin, Y.V. Kurochkin, https://arxiv.org/abs/1612.04168arXiv:1612.04168.Code E.O. Kiktenko, A.S. Trushechkin, M.N. Anufriev, N.O. Pozhar, and A.K. 
Fedorov, Post-processing procedure for quantum key distribution systems (source code), https://dx.doi.org/10.5281/zenodo.200365.Portmann C. Portmann and R. Renner,https://arxiv.org/abs/1409.3525arXiv:1409.3525.Fung C.-H.F. Fung, X. Ma, and H.F. Chau,https://doi.org/110.1103/PhysRevA.81.012318Phys. Rev. A 81, 012318 (2010).Renner R. Renner,https://dx.doi.org/10.1142/S0219749908003256Int. J. Quantum Inform. 6, 1 (2008); Security of Quantum Key Distribution, PhD thesis, ETH Zurich;https://arxiv.org/abs/quant-ph/0512258arXiv:0512258 (2005).
http://arxiv.org/abs/1702.08531v2
{ "authors": [ "A. S. Trushechkin", "E. O. Kiktenko", "A. K. Fedorov" ], "categories": [ "quant-ph", "cs.IT", "math.IT" ], "primary_category": "quant-ph", "published": "20170227211215", "title": "Practical issues in decoy-state quantum key distribution based on the central limit theorem" }
Probabilistic Path Hamiltonian Monte Carlo

Vu Dinh*, Arman Bilge*, Cheng Zhang*, and Frederick A. Matsen IV (* equal contribution)

Program in Computational Biology, Fred Hutchinson Cancer Research Center, Seattle, WA, USA
Department of Statistics, University of Washington, Seattle, WA, USA

Correspondence: Frederick A. Matsen IV, matsen@fredhutch.org

Keywords: Hamiltonian Monte Carlo, leapfrog algorithm, phylogenetics

Hamiltonian Monte Carlo (HMC) is an efficient and effective means of sampling posterior distributions on Euclidean space, which has been extended to manifolds with boundary. However, some applications require an extension to more general spaces. For example, phylogenetic (evolutionary) trees are defined in terms of both a discrete graph and associated continuous parameters; although one can represent these aspects using a single connected space, this rather complex space is not suitable for existing HMC algorithms. In this paper, we develop Probabilistic Path HMC (PPHMC) as a first step to sampling distributions on spaces with intricate combinatorial structure. We define PPHMC on orthant complexes, show that the resulting Markov chain is ergodic, and provide a promising implementation for the case of phylogenetic trees in open-source software. We also show that a surrogate function to ease the transition across a boundary on which the log-posterior has discontinuous derivatives can greatly improve efficiency.

§ INTRODUCTION

Hamiltonian Monte Carlo is a powerful sampling algorithm which has been shown to outperform many existing MCMC algorithms, especially in problems with high-dimensional and correlated distributions <cit.>. The algorithm mimics the movement of a body balancing potential and kinetic energy by extending the state space to include auxiliary momentum variables and using Hamiltonian dynamics. By traversing long iso-probability contours in this extended state space, HMC is able to move long distances in state space in a single update step, and thus has proved to be more effective than standard MCMC methods in a variety of applications. The method has gained a lot of interest from the scientific community and has since been extended to tackle the problem of sampling on various geometric structures such as constrained spaces <cit.>, general Hilbert spaces <cit.>, and Riemannian manifolds <cit.>.

However, these extensions are not yet sufficient to apply to all sampling problems, such as in phylogenetics, the inference of evolutionary trees. Phylogenetics is the study of the evolutionary history and relationships among individuals or groups of organisms. In its statistical formulation it is an inference problem on hypotheses of shared history based on observed heritable traits under a model of evolution. Phylogenetics is an essential tool for understanding biological systems and is an important discipline of computational biology. The Bayesian paradigm is now commonly used to assess support for inferred tree structures or to test hypotheses that can be expressed in phylogenetic terms <cit.>.

Although the last several decades have seen an explosion of advanced methods for sampling from Bayesian posterior distributions, including HMC, phylogenetics still uses relatively classical Markov chain Monte Carlo (MCMC) based methods. This is in part because the number of possible tree topologies (the labeled graphs describing the branching structure of the evolutionary history) explodes combinatorially as the number of species increases.
Also, to represent the phylogenetic relation among a fixed number of species, one needs to specify both the tree topology (a discrete object) and the branch lengths (continuous distances). This composite structure has thus far limited sampling methods to relatively classical Markov chain Monte Carlo (MCMC) based methods. One path forward is to use a construction of the set of phylogenetic trees as a single connected space composed of Euclidean spaces glued together in a combinatorial fashion <cit.> and to try to define an HMC-like algorithm thereupon.

Experts in HMC are acutely aware of the need to extend HMC to such spaces with intricate combinatorial structure: <cit.> describes the extension of HMC to discrete and tree spaces as a major outstanding challenge for the area. However, there are several challenges to defining continuous dynamics-based sampling methods on such spaces. These tree spaces are composed of Euclidean components, one for each discrete tree topology, which are glued together in a way that respects natural similarities between trees. These similarities dictate that more than two such Euclidean spaces should get glued together along a common lower-dimensional boundary. The resulting lack of manifold structure poses a problem for the construction of an HMC sampling method on tree space, since up to now HMC has only been defined on spaces with differential geometry. Similarly, while the posterior function is smooth within each topology, the function's behavior may be very different between topologies. In fact, there is no general notion of differentiability of the posterior function on the whole tree space, and any scheme to approximate Hamiltonian dynamics needs to take this issue into account.

In this paper, we develop Probabilistic Path Hamiltonian Monte Carlo (PPHMC) as a first step to sampling distributions on spaces with intricate combinatorial structure (Figure <ref>). After reviewing how the ensemble of phylogenetic trees is naturally described as a geometric structure we identify as an orthant complex <cit.>, we define PPHMC for sampling posteriors on orthant complexes, along with a probabilistic version of the leapfrog algorithm. This algorithm generalizes previous HMC algorithms by doing classical HMC on the Euclidean components of the orthant complex, but making random choices between the alternative paths available at a boundary. We establish that the integrator retains the good theoretical properties of Hamiltonian dynamics in classical settings, namely probabilistic equivalents of time-reversibility, volume preservation, and accessibility, which combined result in a proof of ergodicity for the resulting Markov chain. Although a direct application of the integrator to the phylogenetic posterior does work, we obtain significantly better performance by using a surrogate function near the boundary between topologies to control approximation error. This approach also addresses a general problem in using Reflective HMC <cit.> for energy functions with discontinuous derivatives (for which the accuracy of RHMC is of order 𝒪(ϵ), instead of the standard local error 𝒪(ϵ^3) of HMC on ℝ^n). We provide, validate, and benchmark two independent implementations in open-source software.

§ MATHEMATICAL FRAMEWORK

§.§ Bayesian learning on phylogenetic tree space

A phylogenetic tree (τ, q) is a tree graph τ with N leaves, each of which has a label, and such that each edge e is associated with a non-negative number q_e.
Trees will be assumed to be bifurcating (internal nodes of degree 3) unless otherwise specified. We denote the number of edges of such a tree by n=2N-3. Any edge incident to a leaf is called a pendant edge, and any other edge is called an internal edge. Let 𝒯_N be the set of all N-leaf phylogenetic trees for which the lengths of pendant edges are bounded from below by some constant e_0 > 0. (This lower bound on branch lengths is a technical condition for theoretical development and can be relaxed in practice.)

We will use nearest neighbor interchange (NNI) moves <cit.> to formalize which tree topologies are "near" each other. An NNI move is a transformation that collapses an internal edge to zero and then expands the resulting degree 4 vertex into an edge and two degree 3 vertices in a new way (Figure <ref>a). Two tree topologies τ_1 and τ_2 are called adjacent topologies if there exists a single NNI move that transforms τ_1 into τ_2.

We will parameterize 𝒯_N as Billera-Holmes-Vogtmann (BHV) space <cit.>, which we describe as follows. An orthant of dimension n is simply ℝ_≥0^n; each n-dimensional orthant is bounded by a collection of lower-dimensional orthant faces. An orthant complex is a geometric object 𝒳 obtained by gluing various orthants of the same dimension n, indexed by a countable set Γ, such that: (i) the intersection of any two orthants is a face of both orthants, and (ii) each x ∈ 𝒳 belongs to a finite number of orthants. Each state of 𝒳 is thus represented by a pair (τ, q), where τ ∈ Γ and q ∈ ℝ_≥0^n. Generalizing the definitions from phylogenetics, we refer to τ as its topology and to q as the vector of attributes. The topology of a point in an orthant complex indexes discrete structure, while the attributes formalize the continuous components of the space.

For phylogenetics, the complex is constructed by taking one n-dimensional orthant for each of the (2n-3)!! possible tree topologies, and gluing them together along their common faces. The geometry can also be summarized as follows. In BHV space, each of these orthants parameterizes the set of branch lengths for a single topology (as a technical point, because we are bounding pendant branch lengths below by e_0, we can take the corresponding entries in the orthant to parameterize the amount of branch length above e_0). Top-dimensional orthants of the complex sharing a facet, i.e. a codimension 1 face, correspond to (NNI) adjacent topologies.

For a fixed phylogenetic tree (τ, q), the phylogenetic likelihood is defined as follows and will be denoted by L(τ, q) <cit.>. Let ψ = (ψ_1, ψ_2, ..., ψ_S) ∈ Ω^N×S be the observed sequences (with characters in Ω) of length S over N leaves. The likelihood of observing ψ given the tree has the form

L(τ, q) = ∏_s=1^S ∑_a^s η(a_ρ^s) ∏_(u,v)∈E(τ,q) P_{a^s_u a^s_v}(q_uv),

where ρ is any internal node of the tree, a^s ranges over all extensions of ψ^s to the internal nodes of the tree, a^s_u denotes the state assigned to node u by a^s, E(τ, q) denotes the set of tree edges, P_ij(t) denotes the transition probability from character i to character j across an edge of length t defined by a given evolutionary model, and η is the stationary distribution of this evolutionary model.
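The sum over internal-node assignments a^s in this likelihood is never computed by brute force; it factorizes along the tree and is evaluated with Felsenstein's pruning (sum-product) recursion. A minimal sketch for a single site under the equal-rates (Jukes-Cantor) model adopted below is given here; the code and the example tree are illustrative, not from the paper's implementations.

```python
import numpy as np

def jc_transition(t):
    """Jukes-Cantor transition matrix P(t) for a branch of length t."""
    e = np.exp(-4.0 * t / 3.0)
    P = np.full((4, 4), 0.25 * (1.0 - e))
    np.fill_diagonal(P, 0.25 + 0.75 * e)
    return P

def partial_likelihood(node):
    """Felsenstein pruning: vector whose i-th entry is the probability of the
    observed leaves below `node`, given that `node` is in state i.

    A node is ('leaf', state) or ('internal', (child, t), (child, t), ...)."""
    if node[0] == 'leaf':
        vec = np.zeros(4)
        vec[node[1]] = 1.0
        return vec
    vec = np.ones(4)
    for child, t in node[1:]:
        vec *= jc_transition(t) @ partial_likelihood(child)
    return vec

# Four-leaf example, rooted arbitrarily at an internal node (states A=0,...,T=3).
tree = ('internal',
        (('leaf', 0), 0.10),
        (('internal', (('leaf', 1), 0.20), (('leaf', 2), 0.30)), 0.15),
        (('leaf', 3), 0.40))
eta = np.full(4, 0.25)                        # stationary distribution
print(float(eta @ partial_likelihood(tree)))  # single-site likelihood
```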
For this paper we will assume the simplest <cit.> model of a homogeneous continuous-time Markov chain on Ω with equal transition rates, noting that inferring parameters of complex substitution models is a vibrant yet separate subject of research <cit.>. Given a proper prior distribution with density π_0 imposed on the branch lengths and on tree topologies, the posterior distribution can be computed as 𝒫(τ, q) ∝ L(τ, q) π_0(τ, q).
§.§ Bayesian learning on orthant complexes
With the motivation of phylogenetics in mind, we now describe how the phylogenetic problem sits as a specific case of a more general problem of Bayesian learning on orthant complexes, and distill the criteria needed to enable PPHMC on these spaces. This generality will also enable applications of Bayesian learning on similar spaces in other settings. For example in robotics, the state complex can be described by a cubical complex whose vertices are the states, whose edges correspond to allowable moves, and whose cubes correspond to collections of moves which can be performed simultaneously <cit.>. Similarly, in learning on spaces of treelike shapes, the attributes are open curves translated to start at the origin, described by a fixed number of landmark points <cit.>.
An orthant complex, being a union of Euclidean orthants, naturally inherits the Lebesgue measure, which we will denote hereafter by μ. Orthant complexes are typically not manifolds, thus to ensure consistency in movements across orthants, we assume that local coordinates of the orthants are defined in such a way that there is a natural one-to-one correspondence between the sets of attributes of any two orthants sharing a common face.
[Consistency of local coordinates] Given two topologies τ, τ' ∈Γ and a state x = (τ, q_τ) = (τ', q_τ') on the boundary of the orthants for τ and τ', we have q_τ = q_τ'.
We show that BHV tree space can be given such coordinates in the Appendix. For the rest of the paper, we define for each state (τ, q) ∈𝒳 the set 𝒩(τ, q) of all neighboring topologies τ' such that the τ' orthant contains (τ, q). Note that 𝒩(τ, q) always includes τ, and if all coordinates of q are positive, 𝒩(τ, q) is exactly {τ}. Moreover, if τ' ∈𝒩(τ, q) and τ' ≠ τ, we say that τ and τ' are joined by (τ,q). If the intersection of the orthants for two topologies is a facet of each, we say that the two topologies are adjacent. Finally, let 𝒢 be the adjacency graph of the orthant complex 𝒳, that is, the graph with vertices representing the topologies and edges connecting adjacent topologies. Recalling that the diameter of a graph is the maximum value of the graph distance between any two vertices, we assume that the adjacency graph 𝒢 of 𝒳 has finite diameter, hereafter denoted by k. For phylogenetics, k is of order 𝒪(N log N) <cit.>.
We seek to sample from a posterior distribution 𝒫(τ, q) on 𝒳. Assume that the negative logarithm of the posterior distribution U(τ, q) := -log 𝒫(τ, q) satisfies:
U(τ, q) is a continuous function on 𝒳, and is smooth up to the boundary of each orthant τ∈Γ.
In the Appendix, we prove that if the logarithm of the phylogenetic prior distribution π_0(τ, q) satisfies Assumption <ref>, then so does the phylogenetic posterior distribution. It is also worth noting that while U(τ, q) is smooth within each orthant, the function's behavior may be very different between orthants and we do not assume any notion of differentiability of the posterior function on the whole space.
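A hypothetical one-dimensional toy example may help fix these definitions: glue three rays at a single point to form an orthant complex with Γ = {a, b, c}, and equip each ray with its own smooth potential, all agreeing at the glue point. The continuity assumption above then holds even though the derivative of U differs between orthants. A minimal sketch:

```python
# toy orthant complex: three 1-D orthants (rays) glued at q = 0
POTENTIALS = {
    "a": (lambda q: q**2 - q,       lambda q: 2*q - 1.0),
    "b": (lambda q: 2*q**2 - 0.5*q, lambda q: 4*q - 0.5),
    "c": (lambda q: q**2 + q,       lambda q: 2*q + 1.0),
}

def U(tau, q):
    return POTENTIALS[tau][0](q)

def grad_U(tau, q):
    return POTENTIALS[tau][1](q)

def neighbors(tau, q):
    # N(tau, q): all topologies whose orthant contains the state (tau, q)
    return list(POTENTIALS) if q == 0.0 else [tau]

# U is continuous on the whole complex: every branch vanishes at the glue point
assert all(U(t, 0.0) == 0.0 for t in POTENTIALS)
# ...but its derivative at the glue point depends on the orthant:
print([grad_U(t, 0.0) for t in POTENTIALS])   # [-1.0, -0.5, 1.0]
```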
§.§ Hamiltonian dynamics on orthant complexes
The HMC state space includes auxiliary momentum variables in addition to the usual state variables. In our framework, the augmented state of this system is represented by the position (τ, q) and the momentum p, an n-dimensional vector. We will denote the set of all possible augmented states (τ, q, p) of the system by 𝕋. The Hamiltonian is defined as in the traditional HMC setting: H(τ, q, p) = U(τ, q) + K(p), where K(p) = 1/2‖p‖^2. We will refer to U(τ, q) and K(p) as the potential energy function and the kinetic energy function of the system at the state (τ, q, p), respectively. Our stochastic Hamiltonian-type system of equations is:
dq_i/dt = p_i
dp_i/dt = -∂ U/∂ q_i(τ, q)   if q_i > 0
p_i ← -p_i,  τ∼ Z(𝒩(τ, q))   if q_i = 0
where Z(A) denotes the uniform distribution on the set A. If all coordinates of q are positive, the system behaves as in the traditional Hamiltonian setting on ℝ^n. When some attributes hit zero, however, the new orthant is picked randomly from the orthants of neighboring topologies (including the current one), and the momenta corresponding to non-positive attributes are negated. Assumption <ref> implies that despite the non-differentiable changes in the governing equation across orthants, the Hamiltonian of the system along any path is constant: H is conserved along any system dynamics.
§.§ A probabilistic “leap-prog” algorithm
In practice, we approximate Hamiltonian dynamics by the following integrator with step size ϵ, which we call “leap-prog” as a probabilistic analog of the usual leapfrog algorithm. This extends previous work of <cit.> on RHMC where particles can reflect against planar surfaces of ℝ^n_≥ 0. In the RHMC formulation, one breaks the step size ϵ into smaller sub-steps, each of which corresponds to an event when some of the coordinates cross zero. We adapt this idea to HMC on orthant complexes as follows. Every time such an event happens, we reevaluate the values of the position and the momentum vectors, update the topology (uniformly at random from the set of neighboring topologies), reverse the momentum of crossing coordinates and continue the process until a total step size ϵ is achieved (Algorithm <ref>). We note that several topologies might be visited in one leap-prog step. If there are no topological changes in the trajectory to time ϵ, this procedure is equivalent to classical HMC. Moreover, since the algorithm only re-evaluates the gradient of the energy function at the end of the step when the final position has been fixed, changes in topology on the path have no effect on the changes of position and momentum. Thus, the projection of the particles (in a single leap-prog step) to the (q,p) space is identical to a leapfrog step of RHMC on ℝ^n_≥ 0.
Here FirstUpdateEvent(τ, q, p, t) returns x, the position of the first event for which the line segment [q, q + tp] crosses zero; e, the time when this event happens; and I, the indices of the coordinates crossing zero during this event. If q_i and p_i are both zero before FirstUpdateEvent is called, i is not considered as a crossing coordinate. If no such event exists, ∅ is returned.
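A compact sketch of one leap-prog step is given below, assuming the complex is exposed through two hypothetical callbacks, grad_U(τ, q) and neighbors(τ, q, I) (the latter returning the neighbor set at the boundary point, including τ itself). It follows the structure of Algorithm <ref> but elides the zero-momentum corner case noted above.

```python
import numpy as np

def leap_prog(tau, q, p, eps, grad_U, neighbors, rng):
    p = p - 0.5 * eps * grad_U(tau, q)                 # half momentum kick
    left = eps
    while left > 0:
        with np.errstate(divide="ignore", invalid="ignore"):
            hit = np.where((p < 0) & (q > 0), q / -p, np.inf)
        e = hit.min()                                  # first boundary event
        if e >= left:                                  # no event: drift, stop
            q = q + left * p
            break
        I = np.flatnonzero(hit == e)                   # coordinates hitting 0
        q = q + e * p
        q[I] = 0.0
        nbrs = neighbors(tau, q, I)                    # includes tau itself
        tau = nbrs[rng.integers(len(nbrs))]            # uniform topology draw
        p[I] = -p[I]                                   # negate crossing momenta
        left -= e
    p = p - 0.5 * eps * grad_U(tau, q)                 # final half kick
    return tau, q, p
```

Note that, as in the text, the gradient is evaluated only at the two endpoints of the step, so topology changes along the way affect neither q nor p within the step.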
§ HAMILTONIAN MONTE CARLO ON ORTHANT COMPLEXES
Probabilistic Path Hamiltonian Monte Carlo (PPHMC) with leap-prog dynamics iterates three steps, similar to those of classical HMC. First, new values for the momentum variables are randomly drawn from their Gaussian distribution, independently of the current values of the position variables. Second, starting with the current augmented state, s=(τ, q, p), the Hamiltonian dynamics is run for a fixed number of steps T using the leap-prog algorithm with fixed step size ϵ. The momentum at the end of this trajectory is then negated, giving a proposed augmented state s^* = (τ^*, q^*, p^*). Finally, this proposed augmented state is accepted as the next augmented state of the Markov chain with probability r(s,s^*) = min(1, exp(H(s) - H(s^*))).
PPHMC has two natural advantages over MCMC methods for phylogenetic inference: jumps between topologies are guided by the potential surface, and many jumps can be combined into a single proposal with high acceptance probability. Indeed, rather than completely random jumps as used for MCMC, the topological modifications of HMC are guided by the gradient of the potential. This is important because there are an enormous number of phylogenetic trees, namely (2n-3)!! trees with n leaves. Secondly, HMC can combine a great number of tree modifications into a single step, allowing for large jumps in tree space with high acceptance probability. These two characteristics are analogous to the reasons why HMC is superior to conventional MCMC in continuous settings, and are what we aimed to extend to our problem.
§.§ Theoretical properties of the leap-prog integrator
To support the use of this leap-prog integrator for MCMC sampling, we establish that the integrator retains analogs of the good theoretical properties of Hamiltonian dynamics in classical settings, namely, time-reversibility, volume preservation and accessibility (proofs in Appendix). We formulate probabilistic reversibility as:
[Reversibility] For a fixed finite time horizon T, we denote by P(s, s') the probability that the integrator moves s to s' in a single update step. We have
P((τ, q, p), (τ', q', p')) = P((τ', q', -p'), (τ, q, -p))
for any augmented states (τ', q', p') and (τ, q, p) ∈𝕋.
The central part of proving the detailed balance condition of PPHMC is to verify that Hamiltonian dynamics preserves volume. Unlike the traditional HMC setting where the proposal distribution is a single point mass, in our probabilistic setting, if we start at one augmented state s, we may end up at countably many end points due to stochastic HMC integration. The equation describing volume preservation in this case needs to be generalized to the form of Equation (<ref>), where the summations account for the discreteness of the proposal distribution.
[Volume preservation] For every pair of measurable sets A, B ⊂𝕋 and elements s, s' ∈𝕋, we denote by P(s,s') the probability that the integrator moves s to s' in a single update step and define
B(s) = {s' ∈ B: P(s, s') > 0} and A(s') = {s ∈ A: P(s, s') > 0}.
Then
∫_A∑_s' ∈ B(s)P(s,s') ds = ∫_B∑_s ∈ A(s')P(s',s) ds'.
If we restrict to the case of trajectories staying in a single topology, A(s') and B(s) are singletons and we get back the traditional equation of volume preservation. We also note that the measure ds in Equation (<ref>) is the Lebesgue measure: when there is no randomness in the Hamiltonian paths, Equation (<ref>) becomes the standard volume preservation condition, where volumes are expressed by the Lebesgue measure.
Typically, accessibility poses no major problem in various settings of HMC since it is usually clear that one can go between any two positions in a single HMC step. In the case of PPHMC, however, the composition of discrete and continuous structure, along with the possible non-differentiability of the potential energy across orthants, makes it challenging to verify this condition.
Here we show instead that the PPHMC algorithm can go between any two states in k steps, where k denotes the diameter of the adjacency graph 𝒢 of the space 𝒳 and each PPHMC step consists of T leap-prog steps of size ϵ.
[k-accessibility] For a fixed starting state (τ^(0), q^(0)), any state (τ', q') ∈𝒳 can be reached from (τ^(0), q^(0)) by running k steps of PPHMC.
The proof of this Lemma is based on Assumption <ref>, which asserts that the adjacency graph 𝒢 of 𝒳 has finite diameter, and on the fact that classical HMC allows the particles to move freely in each orthant by a single HMC step. To show that Markov chains generated by PPHMC are ergodic, we also need to prove that the integrator can reach any subset of positive measure of the augmented state space with positive probability. To enable such a result, we show:
For every sequence of topologies ω = {τ^(0), τ^(1), …, τ^(n_ω)} and every set with positive measure B ⊂𝒳, let B_ω be the set of all (τ', q') ∈ B such that (τ', q') can be reached from (τ^(0), q^(0)) in k PPHMC steps and such that the sequence of topologies crossed by the trajectory is ω. We denote by I_B, ω the set of all sequences of initial momenta for each PPHMC step {p^(0), …, p^(k)} that make such a path possible. If μ(I_B, ω) = 0, then μ(B_ω) = 0.
We also need certain sets to be countable.
Given s ∈𝕋, we denote by R(s) the set of all augmented states s' such that there is a finite-size leap-prog step with path γ connecting s and s', and by K(s) the set of all such leap-prog paths γ connecting s and s' ∈ R(s). Then R(s) and K(s) are countable. Moreover, the probability P_∞(s, s') of moving from s to s' via paths with an infinite number of topological changes is zero.
§.§ Ergodicity of Probabilistic Path HMC
In this section, we establish that a Markov chain generated by PPHMC is ergodic with stationary distribution π(τ, q) ∝ exp(-U(τ, q)). To do so, we need to verify that the Markov chain generated by PPHMC is aperiodic, because we have shown k-accessibility of the integrator rather than 1-accessibility. Throughout this section, we will use the notation P((τ, q, p), ·) to denote the one-step proposal distribution of PPHMC starting at augmented state (τ, q, p), and P((τ, q), ·) to denote the one-step proposal distribution of PPHMC starting at position (τ, q) and with a momentum vector drawn from a Gaussian as described above.
We first note that: PPHMC preserves the target distribution π. Given probabilistic volume preservation (<ref>), the proof is standard and is given in the Appendix.
[Ergodic] The Markov chain generated by PPHMC is ergodic.
For every sequence of topologies ω = {τ^(0), τ^(1), …, τ^(n_ω)} (finite by Lemma <ref>) and every set with positive measure B ⊂𝒳, we define B_ω and I_B, ω as in the previous section. By Lemma <ref>, we have
B = ⋃_ω B_ω.
Assume that μ(I_B, ω) = 0 for all ω. From Lemma <ref>, we deduce that μ(B_ω) = 0 for all ω. This makes μ(B) = 0, which is a contradiction. Hence μ(I_B, ω) > 0 for some ω and P^n_ω((τ^(0), q^(0)), B) is at least the positive quantity
1/Z∫_p ∈ I_B, ωP^n_ω((τ^(0), q^(0), p), B) exp(-K(p)) dp
where Z is the normalizing constant. This holds for all sets with positive measure B ⊂𝒳, so PPHMC is irreducible.
Now assume that a Markov chain generated by the leapfrog algorithm is periodic. The reversibility of Hamiltonian dynamics implies that the period d must be equal to 2. In other words, there exist two disjoint subsets X_1, X_2 of 𝒳 such that π(X_1) > 0, and
P(x, X_2) = 1  ∀ x ∈ X_1,  and  P(x, X_1) = 1  ∀ x ∈ X_2.
Consider x ∈ X_1 with all positive attributes.
There exists a neighborhood U_x around x such that any y ∈ U_x is reachable from x by Hamiltonian dynamics. Since X_1, X_2 are disjoint, we deduce that μ(U_x ∩ X_1) = 0. Since such a neighborhood U_x exists for almost every x ∈ X_1, this implies that μ(X_1) = 0, and hence, that π(X_1) = 0, which is a contradiction. We conclude that any Markov chain generated by the leapfrog algorithm is aperiodic. Lemma <ref> shows that PPHMC preserves the target distribution π. This, along with π-irreducibility and aperiodicity, completes the proof <cit.>.
§.§ An efficient surrogate smoothing strategy
One major advantage of HMC methods over traditional approaches is that HMC-proposed states may be distant from the current state but nevertheless have a high probability of acceptance. This partially relies on the fact that the leapfrog algorithm with smooth energy functions has a local approximation error of order 𝒪(ϵ^3) (which leads to a global error 𝒪(Tϵ^3), where T is the number of leapfrog steps in a Hamiltonian path). However, when the potential energy function U(τ, q) is not differentiable on the whole space, this low approximation error can break down. Indeed, although PPHMC inherits many nice properties from vanilla HMC and RHMC <cit.>, this discontinuity of the derivatives of the potential energy across orthants may result in a non-negligible loss of accuracy during numerical simulations of the Hamiltonian dynamics. A careful analysis of the local approximation error of RHMC for potential energy functions with discontinuous first derivatives reveals that it only has a local error rate of order at least Ω(ϵ) (see proof in Appendix):
Given a potential function V, we denote by V^+ and V^- the restrictions of V to the half-spaces {x_1 ≥ 0} and {x_1 ≤ 0} and assume that V^+ and V^- are smooth up to the boundary of their domains. If the first derivative with respect to the first component of the potential energy V(q) is discontinuous across the hyper-plane {x_1 = 0} (i.e., (∂ V^+)/(∂ q_1) and (∂ V^-)/(∂ q_1) are not identical on this set), then RHMC on this hyper-plane has a local error of order at least Ω(ϵ).
Since PPHMC uses RHMC, when the first derivatives of the potential energy are discontinuous, it also has a global error of order 𝒪(C ϵ + Tϵ^3), which depends on the number of critical events C along a Hamiltonian path (that is, the number of reflection/refraction events). This makes it difficult to tune the step size ϵ for an optimal acceptance rate, and requires small ϵ, limiting topology exploration.
To alleviate this issue, we propose the use of a surrogate induced Hamiltonian dynamics <cit.> with the Hamiltonian H̃(τ, q, p) = Ũ(τ, q) + K(p), where the surrogate potential energy is
Ũ(τ, q) = U(τ, G(q)),  G(q) = (g(q_1), …, g(q_n))
and g(x) is some positive and smooth approximation of |x| with a vanishing gradient at x = 0. One simple example, which will be used for the rest of this paper, is
g_δ(x) = x  if x ≥ δ,  and  g_δ(x) = (x^2 + δ^2)/(2δ)  if 0 ≤ x < δ,
where δ will be called the smoothing threshold. Due to the vanishing gradient of g, Ũ now has continuous derivatives across orthants. However, Ũ is no longer continuous across orthants since g(0) ≠ 0, and we thus employ the refraction technique introduced in <cit.> (see Algorithm <ref> for more details).
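In code, the smoothing and its gradient are one-liners; the sketch below (our own illustration, with grad_U a hypothetical callback) also checks the two properties used above: the gradient of g_δ vanishes at 0, while g_δ(0) = δ/2 ≠ 0, which is what necessitates the refraction step.

```python
import numpy as np

def g(x, delta):
    # identity for x >= delta, quadratic (x^2 + delta^2) / (2 delta) below
    x = np.asarray(x, dtype=float)
    return np.where(x >= delta, x, (x**2 + delta**2) / (2.0 * delta))

def dg(x, delta):
    # derivative: 1 above the threshold, x / delta below; continuous at delta
    x = np.asarray(x, dtype=float)
    return np.where(x >= delta, 1.0, x / delta)

def surrogate_grad_U(tau, q, delta, grad_U):
    # chain rule for the surrogate U~(tau, q) = U(tau, G(q))
    return dg(q, delta) * grad_U(tau, g(q, delta))

delta = 0.003
assert float(dg(0.0, delta)) == 0.0          # vanishing gradient at the boundary
assert float(g(0.0, delta)) == delta / 2.0   # g(0) != 0: U~ jumps across orthants
```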
The proposed state s_δ^∗ = (τ_δ^∗, q_δ^∗, p_δ^∗) at the end of the trajectory is accepted with probability according to the original Hamiltonian, that is, min(1, exp(H(s) - H(s_δ^∗))). By following the same framework proposed in previous sections, we can prove that the resulting sampler still samples from the exact posterior distribution 𝒫(τ, q). A complete treatment, however, requires more technical adjustments and is beyond the scope of the paper. We will leave this as a subject of future work. As we will illustrate later, compared to the exact potential energy, the continuity of the derivative of the surrogate potential across orthants dramatically reduces the discretization error and allows for high acceptance probability with a relatively large step size.
§ EXPERIMENTS
In this section, we demonstrate the validity and efficiency of our PPHMC method by an application to Bayesian phylogenetic inference. We compared our PPHMC implementations to the industry-standard MrBayes 3.2.5, which uses MCMC to sample phylogenetic trees <cit.>. We concentrated on the most challenging part: sampling jointly over the branch lengths and tree topologies, and assumed other parameters (e.g., substitution model, hyper-parameters for the priors) are fixed. More specifically, for all of our experiments we continued to assume the Jukes-Cantor model of DNA substitution and placed a uniform prior on the tree topology τ∼ Z(𝒯_N) with branch lengths i.i.d. q_i ∼ Exponential(λ = 10), as done by others when measuring the performance of MCMC algorithms for Bayesian phylogenetics <cit.>. As mentioned earlier, although in the theoretical development we assumed that the lengths of the pendant edges are bounded from below by a positive constant e_0 to ensure that the likelihood stays positive on the whole tree space, this condition is not necessary in practice since the Hamiltonian dynamics guide the particles away from regions with zero likelihood (i.e., the region with U = ∞). We validate the algorithm through two independent implementations in open-source software:
* a Scala version available at <https://github.com/armanbilge/phyloHMC> that uses the Phylogenetic Likelihood Library[<https://github.com/xflouris/libpll>] <cit.>, and
* a Python version available at <https://github.com/zcrabbit/PhyloInfer> that uses the ETE toolkit <cit.> and Biopython <cit.>.
§.§ Simulated data
As a proof of concept, we first tested our PPHMC method on a simulated data set. We used a random unrooted tree with N=50 leaves sampled from the aforementioned prior. 1000 nucleotide observations for each leaf were then generated by simulating the continuous-time Markov model along the tree. This moderate data set provided enough information for model inference while allowing for a relatively rich posterior distribution to sample from. We ran MrBayes for 10^7 iterations and sampled every 1000 iterations after a burn-in period of the first 25% of iterations to establish a ground truth for the posterior distribution. For PPHMC, we set the step size ϵ=0.0015 and smoothing threshold δ=0.003 to give an overall acceptance rate of about α = 0.68, and set the number of leap-prog steps to T=200. We then ran PPHMC for 10,000 iterations with a burn-in of 25%. We saw that PPHMC indeed samples from the correct posterior distribution (see Figure <ref> in the Appendix).
§.§ Empirical data
We also analyzed an empirical data set labeled DS4 by <cit.> that has become a standard benchmark for MCMC algorithms for Bayesian phylogenetics since <cit.>.
DS4 consists of 1137 nucleotide observations per leaf from N=41 leaves representing different species of fungi. Notably, only 554 of these observations are complete; the remaining 583 are missing a character for one or more leaves, so the likelihood is marginalized over all possible characters. <cit.> observed that the posterior distribution for DS4 features high-probability trees separated by paths through low-probability trees and thus denoted it a “peaky” data set that was difficult to sample from using MrBayes. To find the optimal choice of tuning parameters for DS4, we did a grid search on the space of step size ϵ and the smoothing threshold–step size ratio δ / ϵ. The number of leap-prog steps T was adjusted to keep the total simulation time ϵ T fixed. For each choice of parameters, we estimated the expected acceptance rate α by averaging over 20 proposals from the PPHMC transition kernel per state for 100 states sampled from the posterior in a previous, well-mixed run. This strategy enabled us to obtain an accurate estimate of the average acceptance rate without needing to account for the different mixing rates in a full PPHMC run due to the various settings of the tuning parameters. The results suggest that choosing δ ≈ 2ϵ maximizes the acceptance probability (Figure <ref>b). Furthermore, when aiming for the optimal acceptance rate of α = 0.65 <cit.>, the use of the surrogate function enables a choice of step size ϵ nearly 10 times greater than otherwise. In practice, this means that an equivalent proposal requires fewer leap-prog steps and gives a more efficient sampling algorithm. To see the difference this makes in practice, we ran long trajectories for exact and surrogate-smoothed PPHMC with a relatively large step size ϵ=0.0008. Indeed, we found that the surrogate enables very long trajectories and a large number of topology transformations (Figure <ref>a,c).
§ CONCLUSION
Sophisticated techniques for sampling posteriors using HMC have thus far been restricted to manifolds with boundary. To address this limitation, we have developed “PPHMC,” which is the first extension of HMC to a space with intricate combinatorial structure. The corresponding integrator makes a random choice among alternatives when encountering a boundary. To prove ergodicity, we extend familiar elements of HMC proofs to this probabilistic path setting. We develop a smoothing surrogate function that enables long HMC paths with many boundary transitions across which the posterior is not differentiable. Our surrogate method enables high acceptance probability for RHMC <cit.> in the case of potential functions with discontinuous derivatives; this aspect of our work is independent of the probabilistic nature of PPHMC. Our implementation shows good performance on both simulated and real data. There are many opportunities for future development, including extending the theory to other classes of combinatorially-described spaces and surrogate functions, developing adaptive path length algorithms, as well as extending our implementation for phylogenetics to sample other mutation-model and demographic parameters along with topologies.
§ APPENDIX
§.§ Properties of the phylogenetic posterior distribution
Recall that L(τ, q) denotes the likelihood function of the tree T = (τ, q); we have
U(τ, q) = -log L(τ, q) - log π_0(τ, q).
Since -log π_0(τ, q) is assumed to satisfy Assumption <ref>, we just need to prove that the phylogenetic likelihood function is smooth within each orthant and is continuous on the whole space. Without loss of generality, we consider the case when a single branch length of some edge e is contracted to zero. To investigate the changes in the likelihood function and its derivatives, we first fix all other branches, partition the set of all extensions of ψ according to their labels at the end points of e, and split E(T) into two sets of edges E_left and E_right corresponding to the location of the edges with respect to e. The likelihood function of the tree T = (τ, q) can be rewritten as
L(T) = ∏_s=1^S∑_ij∑_a ∈𝒜_ij( ∏_(u,v) ∈ E_leftP^uv_a_ua_v( q_uv)) × η(i) P^e_ij(t) × ( ∏_(u,v) ∈ E_rightP^uv_a_ua_v( q_uv))
where t is the branch length of e, η is the stationary distribution, and 𝒜_ij denotes the set of all extensions of ψ for which the labels at the left end point and the right end point of e are i and j, respectively. By grouping the products over E_left and E_right, the stationary frequency η(·), and the sum over a into a single term b_ij^s, we can define the one-dimensional likelihood function as a univariate function of the branch length of e:
L_T(t) = ∏_s=1^S(∑_ijb^s_ijP^e_ij(t)).
Consider the tree T' obtained by collapsing edge e of the tree T to zero. The likelihood of T' can be written as
L(T') = ∏_s=1^S(∑_i=jb^s_ijP^e_ij(0)) = ∏_s=1^S(∑_ib^s_ii)
since P_ij(0) = 1 if i=j and 0 otherwise. Thus
lim_t→ 0L_T(t) = L(T').
Since this is true for all (τ, q) and t ∈ E(τ, q), we deduce that the likelihood function is continuous up to the boundary of each orthant, and thus, is continuous on the whole tree space. Moreover, using the same arguments, we can prove that the likelihood function is smooth up to the boundary of each orthant.
Now fixing all but two branch lengths t_e, t_f, the likelihood can be rewritten as
L_T(t_e, t_f) = ∏_s=1^S(∑_ijb^s_ij(t_e)P^f_ij(t_f))
and the derivative of the log likelihood is
1/L_T(t_e, t_f) ∂ L_T/∂ t_f(t_e, t_f) = ∑_s=1^S∑_ijb^s_ij(t_e)(P^f_ij)'(t_f)/∑_ijb^s_ij(t_e)P^f_ij(t_f).
By using the same argument as above, we have that b_ij^s(t_e) is continuous in t_e up to zero and so
lim_t_e → 01/L_T(t_e, t_f) ∂ L_T/∂ t_f(t_e, t_f) = 1/L(T') ∂ L/∂ t_f(T').
Thus, when a Hamiltonian particle crosses a boundary between orthants, partial derivatives of the energy function with respect to positive branch lengths are continuous.
§.§ Theoretical properties of the leap-prog integrator
Note that for PPHMC, in a single leap-prog step γ of finite size ϵ, the algorithm only re-evaluates the gradient of the energy function at the end of the step when the final position q' has been fixed, and changes in topology on the path have no effect on the changes of position and momentum. Thus, the projection γ̃ of γ to the (q,p) space is just a deterministic reflected Hamiltonian path. As a result, for any s^(1) = (τ^(1), q^(1), p^(1)), s^(2) = (τ^(2), q^(2), p^(2)) ∈ R(s), we have (q^(1), p^(1)) = (q^(2), p^(2)). This, along with the fact that the set of topologies is countable, implies that R(s) is countable.
Now denote by {t^(1) < t^(2) < … < t^(n) < … ≤ ϵ} the set of time points at which γ̃ hits the boundary. Since this set is strictly increasing, it is countable.
Moreover, the τ-component of γ is only updated, with finitely many choices, at the times {t^(i)}. This implies that K(s) is countable. Finally, consider any leap-prog step γ that connects s and s' through an infinite number of topological changes. We note that at each t^(i), the next topology is chosen among x^(i) ≥ 2 neighboring topologies. Denoting by P_γ(s, s') the probability of moving from s to s' via path γ, we have
P_γ(s, s') ≤ ∏_i=1^∞1/x^(i) = 0.
Since K(s) is countable, we deduce that P_∞(s, s') = 0.
Consider any possible path γ that connects s and s'. By definition, one can find a sequence of augmented states (s = s^(0), s^(1), s^(2), …, s^(k) = s') such that γ can be decomposed into segments on which the topology does not change. From standard results about Hamiltonian dynamics, the Hamiltonian is constant on each segment. For PPHMC, since the potential energy is continuous across the boundary and the magnitude of the momentum does not change when moving from one orthant to another, we deduce that the Hamiltonian is constant along that path. Similarly, for PPHMC with surrogates, the algorithm is designed in such a way that any change in potential energy is balanced by a change in momentum, which conserves the total energy from one segment to another. We also deduce that the Hamiltonian is constant along the whole path.
Define σ(τ, q, p) := (τ, q, -p). Consider any possible leap-prog step γ that connects s and s'; say the sequences of augmented states (s = s^(0), s^(1), s^(2), …, s^(k) = s'), topologies (τ = τ^(0), τ^(1), τ^(2), …, τ^(k) = τ') and times (t = t^(0), t^(1), t^(2), …, t^(k) = t') decompose γ into segments on which the topology is unchanged. Denoting by P_γ(s, s') the probability of moving from s to s' via path γ, we have
P_γ(s, s') = ∏_iℙ(s^(i+1)| s^(i), t^(i+1) - t^(i)) × ∏_jℙ(τ^(j+1)|τ^(j)),
where each sub-step of the algorithm is a leapfrog update (ϕ^(i)) with some momentum reversing (σ^(i)), that is s^(i+1) = σ^(i)(ϕ^(i)(s^(i))), and σ^(i) is a map that changes the sign of some momentum coordinates. If we start the dynamics at σ(s^(i+1)), then since the particle is crossing the boundary, the momenta corresponding to the crossing coordinates are immediately negated and the system is instantly moved to the augmented state
σ^(i)σ(s^(i+1)) = σσ^(i)(s^(i+1)) = σ(ϕ^(i)(s^(i))).
A standard result about the reversibility of Reflective Hamiltonian dynamics implies that the system starting at σ(ϕ^(i)(s^(i))) will end at σ(s^(i)) after the same period of time t^(i+1) - t^(i). We deduce that
ℙ(s^(i+1)| s^(i), t^(i+1) - t^(i)) = ℙ(σ(s^(i)) | σ(s^(i+1)), t^(i+1) - t^(i)).
On the other hand, at time t^(j), τ^(j) and τ^(j+1) are neighboring topologies joined at q^(j), hence
ℙ(τ^(j+1)|τ^(j)) = 1/|𝒩(τ^(j), q^(j))| = 1/|𝒩(τ^(j+1), q^(j))| = ℙ(τ^(j)|τ^(j+1)).
Therefore
P_γ(s, s') = P_γ(σ(s'), σ(s))
for any path γ. This completes the proof.
We denote by C the set of pairs (s, s') ∈ A × B such that P(s, s') > 0. Let us consider any possible leap-prog step γ that connects s ∈ A and s' ∈ B crossing a finite number of boundaries, and the sequences of augmented states (s = s^(0), s^(1), s^(2), …, s^(k) = s'), topologies (τ = τ^(0), τ^(1), τ^(2), …, τ^(k) = τ'), times (t = t^(0), t^(1), t^(2), …, t^(k) = t') and indices α = (α^(0), α^(1), …, α^(k)) (each α^(i) is a vector of ±1 entries characterizing the coordinates crossing zero in each sub-step) that decompose γ into segments on which the topology is unchanged.
By grouping the members of C by the value of α and ω, we have:
C = ⋃_(α, ω)C_α, ω.
Because there will typically be many paths between s and s', the C_α, ω need not be disjoint. However, we can modify the (countable number of) sets by picking one set for each (s, s') and dropping it from the rest, making a collection of disjoint sets {C_j} such that each C_j is a subset of some C_α, ω and
C = ⋃_j ∈ JC_j.
We will write s ∈ A_j(s') and s' ∈ B_j(s) if (s, s') ∈ C_j and denote
A_j = ⋃_s' ∈ BA_j(s') and B_j = ⋃_s ∈ AB_j(s).
We note that although the leap-prog algorithm is stochastic, if (α, ω) has been pre-specified, the whole path depends deterministically on the initial momentum. Thus, by denoting the projection of C_α, ω to A by A_α, ω, we have that the transformation ϕ_α, ω that maps s to s' is well-defined on A_α, ω. Since the projection of the particles (in a single leap-prog step) to the (q,p) space is exactly Reflective Hamiltonian Monte Carlo on ℝ^n_≥ 0, we can use Lemma 1, Lemma 2 and Theorem 1 in <cit.> to deduce that the determinant of the Jacobian of ϕ_α, ω is 1.
Now consider any j ∈ J such that C_j ⊂ C_α, ω. Because P(s,s') = P(s',s) for all s, s' ∈𝕋 and the determinant of the Jacobian of ϕ_α, ω is 1, we have
∫_B_j∑_s ∈ A_j(s')P(s',s) ds' = ∫_B_jP(s', ϕ_α, ω^-1(s')) ds' = ∫_A_jP(ϕ_α, ω(s), s) ds = ∫_A_jP(s, ϕ_α, ω(s)) ds = ∫_A_j∑_s' ∈ B_j(s)P(s,s') ds.
Denote
A^* = ⋃_jA_j and B^* = ⋃_jB_j.
Summing (<ref>) over all possible values of j gives
∫_B^*∑_s ∈ A(s')P(s',s) ds' = ∫_A^*∑_s' ∈ B(s)P(s,s') ds.
Moreover, we note that for s ∉ A^*, B(s) = ∅. Similarly, if s' ∉ B^*, A(s') = ∅. We deduce that
∫_B∑_s ∈ A(s')P(s',s) ds' = ∫_A∑_s' ∈ B(s)P(s,s') ds.
By the definition of k, for any state (τ', q') ∈ B, we can find a sequence of topologies (τ = τ^(0), τ^(1), τ^(2), …, τ^(l) = τ') for some l ≤ k such that τ^(i) and τ^(i+1) are adjacent topologies. From the construction of the state space, let (τ^(i), q^(i)) denote a state on the boundary between the orthants for the two topologies τ^(i) and τ^(i+1). Moreover, since (τ^(i), q^(i)) and (τ^(i+1), q^(i+1)) lie in the same orthant, we can find momentum values p^(i) and (p^(i+1))' such that
P((τ^(i), q^(i), p^(i)) → (τ^(i+1), q^(i+1), (p^(i+1))')) > 0
for all i. That is, we can get from (τ^(i), q^(i), p^(i)) to (τ^(i+1), q^(i+1), (p^(i+1))') by a sequence of leapfrog steps Σ^(i) of length T. By joining the Σ^(i)'s, we obtain a path Σ of k PPHMC steps that connects (τ^(0), q^(0)) and (τ', q').
For a path Σ of k PPHMC steps connecting (τ^(0), q^(0)) and (τ', q'), we define F_Σ = {(τ^(0), q^(0)), (τ^(1), q^(1)), …, (τ^(n_ω), q^(n_ω))}, where (τ^(i), q^(i)) denotes the state on Σ that joins the topologies τ^(i) and τ^(i+1). We first note that although our leap-prog algorithm is stochastic, if the sequence of topologies crossed by a path Σ has been pre-specified, the whole path depends deterministically on the sequence of momenta p = (p^(0), …, p^(m)) along Σ. Thus, the functions
ϕ_i, ω(p) := q^(i), ∀ p ∈ I_B, ω,
are well-defined.
We will prove that ϕ_n_ω, ω is Lipschitz by induction on n_ω. For the base case n_ω = 0, the sequence ω is of length 1, which implies no topological changes along the path. The leap-prog algorithm reduces to the baseline leapfrog algorithm and, from standard results about HMC on Euclidean spaces <cit.>, we deduce that ϕ_1, ω is Lipschitz.
Now assume that the result holds true for n_ω = n. Consider a sequence ω of length n+1. For all (τ', q') ∈ B_ω, let Σ(τ', q') be a (k,T)-path connecting (τ^(0), q^(0)) and (τ', q').
We recall that
F_Σ(τ', q') = {(τ^(0), q^(0)), (τ^(1), ϕ_1, ω(p)), …, (τ^(n_ω), ϕ_n_ω, ω(p))},
where (τ^(n_ω), ϕ_n_ω, ω(p)) = (τ', q'), is the set of states that join the topologies on the path Σ(τ', q'). Define ω' = {τ^(0), τ^(1), …, τ^(n_ω-1)} and B' = ϕ_n_ω-1(I_B,ω); the induction hypothesis implies that the function ϕ_n_ω', ω' = ϕ_n_ω-1, ω is Lipschitz on I_B', ω' = I_B, ω. On the other hand, since (τ^(n), q^(n)) and (τ^(n+1), q^(n+1)) belong to the same orthant, the base case implies that q^(n+1) is a Lipschitz function of p and of q^(n) = ϕ_n_ω-1, ω(p). Since compositions of Lipschitz functions are also Lipschitz, we deduce that ϕ_n_ω, ω is Lipschitz. Since Lipschitz functions map zero measure sets to zero measure sets <cit.>, this implies μ(B_ω) = 0, which completes the proof.
§.§ Ergodicity of PPHMC
We denote
ν(τ, q, p) = 1/Zexp(-U(τ, q)) exp(-K(p)) = 1/Zexp(-H(τ, q, p))
and refer to it as the canonical distribution. It is straightforward to check that for all s, s' ∈𝕋, we have ν(s)r(s, s') = ν(s')r(s', s). Lemma <ref> implies that
P(s, ds') ds = P(s', ds) ds'
in terms of measures. This gives the detailed balance condition
∫_A ∫_B ν(s)r(s, s') P(s, ds') ds = ∫_B ∫_A ν(s')r(s', s) P(s', ds) ds'
for all A, B ⊂𝕋. We deduce that every update in the second step of PPHMC satisfies detailed balance with respect to ν and hence leaves ν invariant. On the other hand, since ν is a function of |p|, the negation of the momentum p at the end of the second step also fixes ν. Similarly, in the first step, p is drawn from its correct conditional distribution given q and thus leaves ν invariant. Since the target distribution π is the marginal distribution of ν on the position variables, PPHMC also leaves π invariant.
§.§ Approximation error of the reflective leapfrog algorithm
In this section, we investigate the local approximation error of the reflective leapfrog algorithm <cit.> without using surrogates. Recall that V^+ and V^- are the restrictions of the potential function V to the sets {x_1 ≥ 0} and {x_1 ≤ 0}, and we assume that V^+ and V^- are smooth up to the boundary of their domains. Consider a reflective leapfrog step with potential energy function V starting at (q^(0), p^(0)) (with q_1^(0) > 0), ending at (q^(1), p^(1)) (with q_1^(1) < 0) and hitting the boundary at x (with x_1 = 0, i.e., a refraction event happens on the hyper-plane of the first component). Let p and p' denote the half-step momenta of a leapfrog step before and after the refraction event, respectively. Recall that in a leapfrog approximation with refraction at x_1 = 0, we have
p_i^(0) = p_i + ϵ/2∂ V/∂ q_i(q^(0)),  p_i^(1) = p_i' - ϵ/2∂ V/∂ q_i(q^(1)),
where
p_1' = √(p_1^2 - 2dV(x)),  p_i' = p_i for i > 1,
and dV(x) = V^-(x) - V^+(x) denotes the change in potential energy across the hyper-plane.
The change in kinetic energy after this leapfrog step is
Δ K = -dV(x) - ϵ/2∑_i(p_i∂ V/∂ q_i(q^(0)) + p_i' ∂ V/∂ q_i(q^(1))) + ϵ^2/8∑_i((∂ V/∂ q_i(q^(1)))^2 - (∂ V/∂ q_i(q^(0)))^2).
We can bound the second-order term by
(∂ V/∂ q_i(q^(1)))^2 - (∂ V/∂ q_i(q^(0)))^2 = 2 ∫_0^ϵ∂ V/∂ q_i(q^(0) + tp) ∂^2 V/∂ q_i^2(q^(0) + tp) p_i dt = 𝒪(ϵ) · sup_z, W=V^+, V^-∂ W/∂ q_i(z) ∂^2 W/∂ q_i^2(z).
On the other hand, for the potential energy,
Δ V = V(q^(1)) - V(q^(0)) = V(q^(1)) - V^-(x) + dV(x) + V^+(x) - V(q^(0)) = dV(x) + ∫_ϵ_1^ϵ∇ V(q^(0) + tp) · p dt + ∫_0^ϵ_1∇ V(q^(0) + tp) · p dt
where ϵ_1 and ϵ_2 := ϵ - ϵ_1 denote the integration times before and after refraction.
By the trapezoid rule for integration,
Δ V = dV(x) + ∑_i>1ϵ/2(p_i∂ V/∂ q_i(q^(0)) + p_i'∂ V/∂ q_i(q^(1))) + p_1' ϵ_2/2∂ V/∂ q_1(q^(1)) + p_1' ϵ_2/2∂ V^-/∂ q_1(x) + p_1ϵ_1/2∂ V/∂ q_1(q^(0)) + p_1 ϵ_1/2∂ V^+/∂ q_1(x) + 𝒪(ϵ^3) · sup_z ∑_i, W=V^+, V^-(∂^3 W/∂ q_i^3(z)).
We recall that the error of the trapezoid rule on [a, b] with resolution h is a constant multiple of h^2(b-a), which is of order ϵ^3 in our case. We deduce that
Δ H = Δ V + Δ K = -p_1' ϵ_1/2∂ V/∂ q_1(q^(1)) + p_1' ϵ_2/2∂ V^-/∂ q_1(x) - p_1 ϵ_2/2∂ V/∂ q_1(q^(0)) + p_1 ϵ_1/2∂ V^+/∂ q_1(x) + 𝒪(ϵ^3).
Using a Taylor expansion, we have
∂ V/∂ q_1(q^(1)) = ∂ V^-/∂ q_1(x) + 𝒪(ϵ),  and  ∂ V/∂ q_1(q^(0)) = ∂ V^+/∂ q_1(x) + 𝒪(ϵ).
This implies
Δ H = (ϵ_2 - ϵ_1) (p_1' ∂ V^-/∂ q_1(x) - p_1∂ V^+/∂ q_1(x)) + 𝒪(ϵ^2).
In general, there is no dependency between ϵ_1 and ϵ_2, and the only cases where Δ H is not of order 𝒪(ϵ) are when
√(p_1^2 - 2dV(x)) ∂ V^-/∂ q_1(x) - p_1∂ V^+/∂ q_1(x) = 0.
In order for this to be true for all p, we need to have either
dV(x) = 0 and ∂ V^-/∂ q_1(x) = ∂ V^+/∂ q_1(x),
or
∂ V^-/∂ q_1(x) = ∂ V^+/∂ q_1(x) = 0.
In both cases, the first derivative of V with respect to the first component must be continuous. This completes the proof.
§.§ Estimated posterior tree distributions for the simulated data
§.§ Coordinate systems for branch lengths on trees
In this section we verify Assumption <ref> for phylogenetic trees. Further explanation of the framework used here can be found in <cit.>. Assume we are considering phylogenetic trees on N leaves, and that those leaves have labels [N] := {1, …, N}. Every possible edge in such a phylogenetic tree can be described by its corresponding split: a partition of [N] into two non-empty sets, obtained by removing that edge of the tree and observing the resulting partitioning of the leaf labels. If a split can be obtained by deleting such an edge of a given phylogenetic tree, we say that the tree displays that split. We use a vertical bar (|) to denote the division between the two sets of the bipartition. For example, if we take the unrooted tree with four leaves such that 1 and 2 are sister to one another, the tree displays splits 1|234, 12|34, 134|2, 124|3, and 123|4. Two splits A|B and C|D on the same leaf set are called compatible if one of A ∩ C, B ∩ C, A ∩ D, or B ∩ D is empty. A set of splits that are pairwise compatible can be displayed on a phylogenetic tree <cit.>, and in fact the set of pairwise compatible sets of splits is in one-to-one correspondence with the set of (potentially multifurcating) unrooted phylogenetic trees.
When a single branch length goes to zero, 𝒩(τ, q) will have three elements: τ itself and its two NNI neighbors. When multiple branch lengths go to zero, one can re-expand branch lengths for any set of splits that are compatible with each other and with the splits that did not originally go to zero. This generalizes the NNI condition. However, the correspondence between the branches that went to zero and the newly expanded branches is no longer obvious.
One can define such a correspondence using a global splits-based coordinate system. Namely, such a coordinate system can be achieved by indexing branch length vectors by splits, with the proviso that for any two incompatible splits r and s, one of q_r or q_s is zero. We could have used such a coordinate system for this paper, such that branch length vectors q would live in ℝ^2^N-1. However, for simplicity of notation, we have indexed the branch lengths (e.g. in Algorithm <ref>) with integers [n] corresponding to the actual branches of a phylogenetic tree.
Thus our branch length vectors q live in 2N-3 dimensions. One can use a total order on the splits to unambiguously define which branches map to which others when the HMC crosses a boundary. We will describe how this works when two branch lengths, q_i and q_j, go to zero. The extension to more branch lengths is clear. Our branch indices i, j ∈ [2N-3] are always associated with a phylogenetic tree τ with numbered edges. For any branch index i on τ, one can unambiguously take the split s_i. Assume without loss of generality that s_i < s_j in the total order on splits. Now, when q_i and q_j go to zero, one can transition to a new tree τ' which may differ from τ by up to two splits. We assume without loss of generality that these are actually new splits (if not, we are in a previously defined setting), which we call s_1' and s_2' such that s_1' < s_2'. We carry all of the branch indices for branches that aren't shrinking to zero across to τ'. Then we map branch i in τ to the branch in τ' corresponding to the split s_1', and branch j to the branch in τ' corresponding to the split s_2'. Thus, for example, the momentum p_i in the τ orthant is carried over to the corresponding p_i in the τ' orthant.
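The compatibility test above reduces to a four-way intersection check. A small sketch (with a hypothetical four-leaf example):

```python
def compatible(side1, side2, taxa):
    # each split is given by one side (a set); the other side is its
    # complement with respect to the full leaf set `taxa`
    a, c = frozenset(side1), frozenset(side2)
    b, d = frozenset(taxa) - a, frozenset(taxa) - c
    return any(not (s & t) for s in (a, b) for t in (c, d))

taxa = {1, 2, 3, 4}
assert compatible({1, 2}, {1}, taxa)          # 12|34 and 1|234 coexist on a tree
assert not compatible({1, 2}, {1, 3}, taxa)   # 12|34 and 13|24 do not
```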
http://arxiv.org/abs/1702.07814v2
{ "authors": [ "Vu Dinh", "Arman Bilge", "Cheng Zhang", "Frederick A. Matsen IV" ], "categories": [ "q-bio.PE", "05C05, 92B10, 92D15, 65J99" ], "primary_category": "q-bio.PE", "published": "20170225012042", "title": "Probabilistic Path Hamiltonian Monte Carlo" }
Scheduling Post-Disaster Repairs in Electricity Distribution Networks
Yushi Tan, Feng Qiu, Arindam K. Das, Daniel S. Kirschen, Payman Arabshahi, Jianhui Wang
§ ABSTRACT
Natural disasters, such as hurricanes, earthquakes and large wind or ice storms, typically require the repair of a large number of components in electricity distribution networks. Since power cannot be restored before these repairs have been completed, optimally scheduling the available crews to minimize the cumulative duration of the customer interruptions reduces the harm done to the affected community. Considering the radial network structure of the distribution system, this repair and restoration process can be modeled as a scheduling problem with soft precedence constraints. As a benchmark, we first formulate this problem as a time-indexed ILP with valid inequalities. Three practical methods are then proposed to solve the problem: (i) an LP-based list scheduling algorithm, (ii) a single to multi-crew repair schedule conversion algorithm, and (iii) a dispatch rule based on ρ-factors which can be interpreted as Component Importance Measures. We show that the first two algorithms are 2 and (2 - 1/m) approximations respectively. We also prove that the latter two algorithms are equivalent. Numerical results validate the effectiveness of the proposed methods.
§ KEYWORDS
Electricity distribution network, Natural disasters, Infrastructure resilience, Scheduling with soft precedence constraints, Time-indexed integer programming, LP-based list scheduling, Conversion algorithm, Component Importance Measure (CIM)
§ INTRODUCTION
Natural disasters, such as Hurricane Sandy in November 2012, the Christchurch Earthquake of February 2011 or the June 2012 Mid-Atlantic and Midwest Derecho, caused major damage to the electricity distribution networks and deprived homes and businesses of electricity for prolonged periods. Such power outages carry heavy social and economic costs. Estimates of the annual cost of power outages caused by severe weather between 2003 and 2012 range from $18 billion to $33 billion on average <cit.>. Physical damage to grid components must be repaired before power can be restored <cit.>. Hurricanes often cause storm surges that flood substations and corrode metal, electrical components and wiring <cit.>. Earthquakes can trigger ground liquefaction that damages buried cables and dislodges transformers <cit.>. Wind and ice storms bring down trees, breaking overhead cables and utility poles <cit.>. As the duration of an outage increases, its economic and social costs rise exponentially. See <cit.> <cit.> for discussions of the impacts of natural disasters on power grids and <cit.> <cit.> for their impact on other infrastructures.
It is important to distinguish the distribution repair and restoration problem discussed in this paper from the blackout restoration problem and the service restoration problem. Blackouts are large scale power outages (such as the 2003 Northeast US and Canada blackout) caused by an instability in the power generation and the high voltage transmission systems. This instability is triggered by an electrical fault or failure and is amplified by a cascade of component disconnections. Restoring power in the aftermath of a blackout is a different scheduling problem because most system components are not damaged and only need to be re-energized.
See <cit.> <cit.> for a discussion of the blackout restoration problem and <cit.> for a mixed-integer programming approach for solving this problem. On the other hand, service restoration focuses on re-energizing a part of the local, low voltage distribution grid that has been automatically disconnected following a fault on a single component or a very small number of components. This can usually be done by isolating the faulted components and re-energizing the healthy parts of the network using switching actions. The service restoration problem thus involves finding the optimal set of switching actions. The repair of the faulted component is usually assumed to take place at a later time and is not considered in the optimization model. Several approaches have been proposed for the optimization of service restoration, such as heuristics <cit.> <cit.>, knowledge-based systems <cit.>, and dynamic programming <cit.>.
Unlike the outages caused by system instabilities or localized faults, outages caused by natural disasters require the repair of numerous components in the distribution grid before consumers can be reconnected. The research described in this paper therefore aims to schedule the repair of a significant number of damaged components, so that the distribution network can be progressively re-energized in a way that minimizes the cumulative harm over the total restoration horizon. Fast algorithms are needed to solve this problem because it must be solved immediately after the disaster and may need to be re-solved multiple times as more detailed information about the damage becomes available. Relatively few papers address this problem. Coffrin and Van Hentenryck <cit.> propose an MILP formulation to co-optimize the sequence of repairs, the load pick-ups and the generation dispatch. However, this sequencing of repairs does not consider the fact that more than one repair crew could work at the same time. Nurre et al. <cit.> formulate an integrated network design and scheduling (INDS) problem with multiple crews, which focuses on selecting a set of nodes and edges for installation in general infrastructure systems and scheduling them on work groups. They also propose a heuristic dispatch rule based on network flows and scheduling theory.
The rest of the paper is organized as follows. In Section 2, we define the problem of optimally scheduling multiple repair crews in a radial electricity distribution network after a natural disaster, and show that this problem is at least strongly 𝒩𝒫-hard. In Section 3, we formulate the post-disaster repair problem as an integer linear program (ILP) using a network flow model and present a set of valid inequalities. Subsequently, we propose three polynomial time approximation algorithms based on adaptations of known algorithms in parallel machine scheduling theory, and provide bounds on their worst-case performance. A list scheduling algorithm based on an LP relaxation of the ILP model is discussed in Section 4; an algorithm which converts the optimal single crew repair sequence to a multi-crew repair sequence is presented in Section 5, along with an equivalent heuristic dispatch rule based on ρ-factors. In Section 6, we apply these methods to several standard test models of distribution networks. Section 7 draws conclusions.
§ PROBLEM FORMULATION
A distribution network can be represented by a graph G with the set of nodes N and the set of edges (a.k.a. lines) L.
We assume that the network topology G is radial, which is a valid assumption for most electricity distribution networks. Let S ⊂ N represent the set of source nodes which are initially energized and D = N ∖ S represent the set of sink nodes where consumers are located. An edge in G represents a distribution feeder or some other connecting component. Severe weather can damage these components, resulting in a widespread disruption of the power supply to the consumers. Let L^D and L^I = L ∖ L^D denote the sets of damaged and intact edges, respectively. Each damaged line l ∈ L^D requires a repair time p_l, which is determined by the extent of damage and the location of l. We assume that it would take every crew the same amount of time to repair the same damaged line. Without any loss of generality, we assume that there is only one source node in G. If an edge is damaged, all downstream nodes lose power due to lack of electrical connectivity. In this paper, we consider the case where multiple crews work simultaneously and independently on the repair of separate lines, along with the special case where a single crew must carry out all the repairs. Finally, we make the assumption that crew travel times between damage sites are minimal and can be either ignored or factored into the component repair times.
Therefore, our goal is to find a schedule by which the damaged lines should be repaired such that the aggregate harm due to the loss of electric energy is minimized. We define this harm as follows:
∑_n ∈ N w_n T_n,
where w_n is a positive quantity that captures the importance of node n and T_n is the time required to restore power at node n. The importance of a node can depend on multiple factors, including but not limited to, the amount of load connected to it, the type of load served, and interdependency with other critical infrastructure networks. For example, re-energizing a node supplying a major hospital should receive a higher priority than a node supplying a similar amount of residential load. Similarly, it is conceivable that a node that provides electricity to a water sanitation plant would be assigned a higher priority. These priority factors would need to be assigned by the utility companies and their determination is outside the scope of this paper; we simply assume that the w_n's are known.
The time to restore node n, T_n, is approximated by the energization time E_n, which is defined as the time at which node n first connects to the source node. System operators normally need to consider voltage and stability issues before restoring a node. However, this is not a major issue in distribution networks. The operators progressively restore a radial distribution network with enough generation capacity back to its normal operating state. Moreover, even if a rigorous power flow model were taken into account, the actual demands after re-energization are not known and are hard to forecast. As a result, we model network connectivity using a simple network flow model, i.e., as long as a sink node is connected to the source, we assume that all the load on this node can be supplied without violating any security constraint. For simplicity, we treat the three-phase distribution network as if it were a single-phase system. Our analysis could be extended to a three-phase system using a multi-commodity flow model, as in <cit.>.
§.§ Soft Precedence Constraints
We construct two simplified directed radial graphs to model the effect that the topology of the distribution network has on scheduling.
The first graph, G^', is called the `damaged component graph'. All nodes in G that are connected by intact edges are contracted into a supernode in G^'. The set of edges in G^' is the set of damaged lines in G, L^D. From a computational standpoint, the nodes of G^' can be obtained by treating the edges in G as undirected, deleting the damaged edges/lines, and finding all the connected components of the resulting graph. The set of nodes in each such connected component represents a (super)node in G^'. The edges in G^' can then be placed straightforwardly by keeping track of which nodes in G are mapped to a particular node in G^'. The directions of these edges follow trivially from the network topology. G^' is useful in the ILP formulation introduced in Section <ref>.
The second graph, P, is called a `soft precedence constraint graph', which is constructed as follows. The nodes in this graph are the damaged lines in G, and an edge exists between two nodes in this graph if they share the same node in G^'. Computationally, the precedence constraints embodied in P can be obtained by replacing lines in G^' with nodes and the nodes in G^' with lines; a sketch of both constructions is given below. Such a graph enables us to consider the hierarchical relationship between damaged lines, which we define as soft precedence constraints.
A substantial body of research exists on scheduling with precedence constraints. In general, the precedence constraint i ≺ j requires that job i be completed before job j is started, or equivalently, C_j ≥ C_i, where C_j is the completion time of job j. Such precedence constraints, however, are not applicable in post-disaster restoration. While it is true that a sink node in an electrical network cannot be energized unless there is an intact path (i.e., all damaged lines along that path have already been repaired) from the source (feeder) to this sink node, this does not mean that multiple lines on some path from the source to the sink cannot be repaired concurrently. We keep track of two separate time vectors: the completion times of line repairs, denoted by C_l's, and the energization times of nodes, denoted by E_n's. While we have so far associated the term `energization time' with nodes in the given network topology, G, it is also possible to define energization times on the lines. Consider the example in Fig. <ref>. The precedence graph, P, requires that the line 650-632 be repaired prior to the line 671-692. If this (soft) precedence constraint is met, as soon as the line 671-692 is repaired, it can be energized, or equivalently, all nodes in SN_3 (nodes 692 and 675) in the damaged component graph, G^', can be deemed to be energized. The energization time of the line 671-692 is therefore identical to the energization times of nodes 692 and 675.
Before generalizing the above example, we need to define some notation. Given a directed edge l, let h(l) and t(l) denote the head and tail node of l. Let l = h(l) → t(l) be any edge in the damaged component graph G^'. Provided the soft precedence constraints are met, it is easy to see that E_l = E_t(l), where E_l is the energization time of line l and E_t(l) is the energization time of the node t(l) in G. Analogously, the weight of node t(l), w_t(l), can be interpreted as a weight on the line l, w_l.
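Both graphs can be built with a single connected-components pass over the intact subgraph. A minimal sketch using networkx follows; the input lists are hypothetical, with each damaged line given as a (head, tail) pair oriented away from the source.

```python
import networkx as nx

def build_damaged_component_graphs(nodes, intact_edges, damaged_edges):
    # contract the intact connected components of G into supernodes
    intact = nx.Graph()
    intact.add_nodes_from(nodes)
    intact.add_edges_from(intact_edges)
    comp = {}
    for cid, members in enumerate(nx.connected_components(intact)):
        for v in members:
            comp[v] = cid

    # G': one directed edge per damaged line, joining its two supernodes
    g_prime = nx.DiGraph()
    for (h, t) in damaged_edges:
        g_prime.add_edge(comp[h], comp[t], line=(h, t))

    # P: line a soft-precedes line b iff a's tail supernode is b's head supernode
    p_graph = nx.DiGraph()
    p_graph.add_nodes_from(damaged_edges)
    for a in damaged_edges:
        for b in damaged_edges:
            if a != b and comp[a[1]] == comp[b[0]]:
                p_graph.add_edge(a, b)
    return g_prime, p_graph
```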
Generalizing the above example, the soft precedence constraint i ≺_S j implies that line j cannot be energized unless line i is energized, or equivalently, E_j ≥ E_i, where E_j is the energization time of line j. Given any feasible schedule of post-disaster repairs, the energization time E_j always satisfies
E_j = max_i ≼_S j C_i.
So far, we have modeled the problem of scheduling post-disaster repairs in electricity distribution networks as parallel machine scheduling with outtree soft precedence constraints in order to minimize the total weighted energization time, or equivalently, P | outtree soft-prec | ∑ w_j E_j, following Graham's notation in <cit.>.
§.§ Complexity Analysis
In this section, we study the complexity of the scheduling problem P | outtree soft-prec | ∑ w_j E_j and show that it is at least strongly 𝒩𝒫-hard.
The problem of scheduling post-disaster repairs in electricity distribution networks is at least strongly 𝒩𝒫-hard.
We show this problem is at least strongly 𝒩𝒫-hard using a reduction from the well-known identical parallel machine scheduling problem P || ∑ w_jC_j, defined as follows.
P || ∑ w_jC_j: Given a set of jobs J in which job j has processing time p_j and weight w_j, find a parallel machine schedule that minimizes the total weighted completion time ∑ w_jC_j, where C_j is the time when job j finishes. P || ∑ w_jC_j is strongly 𝒩𝒫-hard <cit.>.
Given an instance of P || ∑ w_jC_j defined as above, construct a star network G_S with a source and |J| sinks. Each sink j has a weight w_j and the line between the source and sink j has a repair time of p_j. Whenever a line is repaired, the corresponding sink can be energized. Therefore the energization time of sink j is equal to the completion time of line j. If one could solve the problem of scheduling post-disaster repairs in electricity distribution networks to optimality, then one could solve the problem in G_S optimally and equivalently solve P || ∑ w_jC_j.
§ INTEGER LINEAR PROGRAMMING (ILP) FORMULATION
With the additional assumption in this section that all repair times are integers, we model the post-disaster repair scheduling problem using time-indexed decision variables (see <cit.>), x_l^t, where x_l^t = 1 if line l is being repaired by a crew at time period t. Variable y_l^t denotes the repair status of line l, where y_l^t = 1 if the repair is done by the end of time period t - 1 and the line is ready to energize at time period t. Finally, u_i^t = 1 if node i is energized at time period t. Let T denote the time horizon for the restoration efforts. Although we cannot know T exactly until the problem is solved, a conservative estimate should work. Since T_i = ∑_t=1^T (1 - u_i^t) by discretization, the objective function of eqn. <ref> can be rewritten as:
minimize ∑_t=1^T∑_i ∈ N w_i(1 - u_i^t)
This problem is to be solved subject to two sets of constraints: (i) repair constraints and (ii) network flow constraints, which are discussed next. We mention in passing that the above time-indexed ILP formulation provides a strong relaxation of the original problem <cit.> and allows for modeling of different scheduling objectives without changing the structure of the model and the underlying algorithm.
§.§ Repair Constraints
Repair constraints model the behavior of repair crews and how they affect the status of the damaged lines and the sink nodes that must be re-energized. The three constraints below are used to initialize the binary status variables y_l^t and u_i^t. Eqn. <ref> forces y_l^t = 0 for all lines which are damaged initially (i.e., at time t = 0), while eqn.
<ref> sets y_l^t = 1 for all lines which are intact. Eqn. <ref> forces the status of all source nodes, which are initially energized, to be equal to 1 for all time periods.

y_l^1 = 0, ∀ l ∈ L^D
y_l^t = 1, ∀ l ∈ L^I, ∀ t ∈ [1,T]
u_i^t = 1, ∀ i ∈ S, ∀ t ∈ [1,T]

where T is the restoration time horizon. The next set of constraints is associated with the binary variables x_l^t. Eqn. <ref> constrains the number of crews working on damaged lines at any time period t to be at most m, where m is the number of crews available.

∑_{l ∈ L^D} x_l^t ≤ m, ∀ t ∈ [1,T]

Observe that, compared to the formulation in <cit.>, there are no crew indices in our model. Since these indices are completely arbitrary, the number of feasible solutions can increase in crew-indexed formulations, leading to increased computation time. For example, consider the simple network i → j → k → l, where node i is the source and all edges require a repair time of 5 time units. If 2 crews are available, suppose the optimal repair schedule is: `assign team 1 to i → j at time t=0, team 2 to j → k at t=0, and team 1 to k → l at t=5'. Clearly, one possible equivalent solution conveying the same repair schedule and yielding the same cost is: `assign team 2 to i → j at t=0, team 1 to j → k at t=0, and team 1 to k → l at t=5'. In general, formulations without explicit crew indices may lead to a reduction in the size of the feasible solution set. Although the optimal repair sequences obtained from such formulations do not natively produce the work assignments to the different crews, this is not an issue in practice because operators can choose to let a crew work on a line until the job is complete and assign the next repair job in the sequence to the next available crew (the first m jobs in the optimal repair schedule can be assigned arbitrarily to the m crews).

Finally, constraint eqn. <ref> formalizes the relationship between variables x_l^t and y_l^t. It mandates that y_l^t cannot be set to 1 unless at least p_l of the x_l^τ's, τ ∈ [1, t - 1], are equal to 1, where p_l is the repair time of line l.

y_l^t ≤ (1/p_l) ∑_{τ=1}^{t-1} x_l^τ, ∀ l ∈ L^D, ∀ t ∈ [1,T]

While we do not explicitly require that a crew may not leave its current job unfinished and take up a different job, it is obvious that such a scenario cannot be part of an optimal repair schedule.

§.§ Network flow constraints

We use a modified form of standard flow equations to simplify power flow constraints. Specifically, we require that the flows, originating from the source nodes (eqn. <ref>), travel through lines which have already been repaired (eqn. <ref>). Once a sink node receives a flow, it can be energized (eqn. <ref>).

∑_{l ∈ δ_G^-(i)} f_l^t ≥ 0, ∀ t ∈ [1,T], ∀ i ∈ S
-M × y_l^t ≤ f_l^t ≤ M × y_l^t, ∀ t ∈ [1,T], ∀ l ∈ L
u_i^t ≤ ∑_{l ∈ δ_G^+(i)} f_l^t - ∑_{l ∈ δ_G^-(i)} f_l^t, ∀ t ∈ [1,T], ∀ i ∈ D

In eqn. <ref>, M is a suitably large constant, which, in practice, can be set equal to the number of sink nodes, M = | D |. In eqn. <ref>, δ_G^+(i) and δ_G^-(i) denote the sets of lines on which power flows into and out of node i in G, respectively. A compact code sketch of the full formulation is given below.
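The following sketch assembles the formulation in PuLP. It is our own illustration under assumed data structures (the function, container, and parameter names are ours; the paper's experiments used JuMP with Gurobi), not the authors' code:

```python
import pulp

def build_restoration_ilp(nodes, sources, sinks, damaged, intact,
                          w, p, delta_in, delta_out, m, T):
    """Time-indexed ILP sketch of the formulation above. Lines are hashable
    ids; w: node weights; p: repair times; delta_in/delta_out: lines carrying
    power into / out of each node; m: number of crews; T: time horizon."""
    ts = range(1, T + 1)
    lines = list(damaged) + list(intact)
    prob = pulp.LpProblem("restoration", pulp.LpMinimize)
    x = pulp.LpVariable.dicts("x", (list(damaged), ts), cat="Binary")
    y = pulp.LpVariable.dicts("y", (lines, ts), cat="Binary")
    u = pulp.LpVariable.dicts("u", (list(nodes), ts), cat="Binary")
    f = pulp.LpVariable.dicts("f", (lines, ts))
    M = len(sinks)  # big-M constant, M = |D|
    # Objective: total weighted energization time.
    prob += pulp.lpSum(w[i] * (1 - u[i][t]) for i in nodes for t in ts)
    # Initialization of the status variables.
    for l in damaged:
        prob += y[l][1] == 0
    for l in intact:
        for t in ts:
            prob += y[l][t] == 1
    for i in sources:
        for t in ts:
            prob += u[i][t] == 1
    # At most m crews at work in any time period.
    for t in ts:
        prob += pulp.lpSum(x[l][t] for l in damaged) <= m
    # A line is repaired only after p_l periods of work on it.
    for l in damaged:
        for t in ts:
            prob += p[l] * y[l][t] <= pulp.lpSum(x[l][tau] for tau in range(1, t))
    # Flow constraints: flow only on repaired lines; inflow energizes sinks.
    for t in ts:
        for l in lines:
            prob += f[l][t] <= M * y[l][t]
            prob += f[l][t] >= -M * y[l][t]
        for i in sources:
            prob += pulp.lpSum(f[l][t] for l in delta_out[i]) >= 0
        for i in sinks:
            prob += u[i][t] <= (pulp.lpSum(f[l][t] for l in delta_in[i])
                                - pulp.lpSum(f[l][t] for l in delta_out[i]))
    return prob
```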
§.§ Valid inequalities

Valid inequalities typically reduce the computing time and strengthen the bounds provided by the LP relaxation of an ILP formulation. We present the following shortest repair time path inequalities, which resemble the ones in <cit.>. A node i cannot be energized until all the lines between the source s and node i are repaired. Since the lower bound to finish all the associated repairs is ⌊SRTP_i/m⌋, where m denotes the number of crews available and SRTP_i denotes the shortest repair time path between s and i, the following inequality is valid:

∑_{t=1}^{⌊SRTP_i/m⌋ - 1} u_i^t = 0, ∀ i ∈ N

To summarize, the multi-crew distribution system post-disaster repair problem can be formulated as: minimize eqn. <ref> subject to eqns. <ref>–<ref>.

§ LIST SCHEDULING ALGORITHMS BASED ON LINEAR RELAXATION

A majority of the approximation algorithms used for scheduling are derived from linear relaxations of ILP models, based on the scheduling polyhedra of completion vectors developed in <cit.> and <cit.>. We briefly restate the definition of scheduling polyhedra and then introduce a linear relaxation based list scheduling algorithm followed by a worst case analysis of the algorithm.

§.§ Linear relaxation of scheduling with soft precedence constraints

A set of valid inequalities for m identical parallel machine scheduling was presented in <cit.>:

∑_{j ∈ A} p_j C_j ≥ f(A) := (1/(2m)) (∑_{j ∈ A} p_j)^2 + (1/2) ∑_{j ∈ A} p_j^2, ∀ A ⊂ N

The completion time vector C of every feasible schedule on m identical parallel machines satisfies inequalities (<ref>). The objective of the post-disaster repair and restoration is to minimize the harm, quantified as the total weighted energization time. With the previously defined soft precedence constraints and the valid inequalities for parallel machine scheduling, we propose the following LP relaxation:

min_{C, E} ∑_{j ∈ L^D} w_j E_j
subject to C_j ≥ p_j, ∀ j ∈ L^D
E_j ≥ C_j, ∀ j ∈ L^D
E_j ≥ E_i, ∀ (i → j) ∈ P
∑_{j ∈ A} p_j C_j ≥ (1/(2m)) (∑_{j ∈ A} p_j)^2 + (1/2) ∑_{j ∈ A} p_j^2, ∀ A ⊂ L^D

where P is the soft precedence graph discussed in Section <ref> (see also Fig. <ref>). Eqn. <ref> constrains the completion time of any damaged line to be lower bounded by its repair time, eqn. <ref> ensures that any line cannot be energized until it has been repaired, eqn. <ref> models the soft precedence constraints, and eqn. <ref> characterizes the scheduling polyhedron.

The above formulation can be simplified by recognizing that the C_j's are redundant intermediate variables. Combining eqns. <ref> and <ref>, we have:

∑_{j ∈ A} p_j E_j ≥ ∑_{j ∈ A} p_j C_j ≥ (1/(2m)) (∑_{j ∈ A} p_j)^2 + (1/2) ∑_{j ∈ A} p_j^2, ∀ A ⊂ L^D

which indicates that the vector of E_j's satisfies the same valid inequalities as the vector of C_j's. After some simple algebra, the LP relaxation can be reduced to:

min_E ∑_{j ∈ L^D} w_j E_j
subject to E_j ≥ p_j, ∀ j ∈ L^D
E_j ≥ E_i, ∀ (i → j) ∈ P
∑_{j ∈ A} p_j E_j ≥ (1/(2m)) (∑_{j ∈ A} p_j)^2 + (1/2) ∑_{j ∈ A} p_j^2, ∀ A ⊂ L^D

We note that although there are exponentially many constraints in the above model, the separation problem for these inequalities can be solved in polynomial time using the ellipsoid method as shown in <cit.>.

§.§ LP-based approximation algorithm

List scheduling algorithms, which are among the simplest and most commonly used approximate solution methods for parallel machine scheduling problems <cit.>, assign the job at the top of a priority list to whichever machine is idle first. An LP relaxation provides a good insight into the priorities of jobs and has been widely applied to scheduling with hard precedence constraints. We adopt a similar approach in this paper. Algorithm <ref>, based on a sorted list of the LP midpoints M_j^LP := E_j^LP - p_j/2, summarizes our proposed approach; a sketch of the scheduling step follows.
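This is a minimal sketch of the list scheduling step (ours; here p maps damaged lines to repair times and M_lp maps them to their LP midpoints):

```python
import heapq

def lp_list_schedule(p, M_lp, m):
    """Sort damaged lines by LP midpoint and assign each, in order, to the
    first of the m crews that becomes idle."""
    crews = [0.0] * m                        # next idle time of each crew
    heapq.heapify(crews)
    start, completion = {}, {}
    for j in sorted(p, key=lambda l: M_lp[l]):
        t = heapq.heappop(crews)             # earliest idle crew starts job j
        start[j], completion[j] = t, t + p[j]
        heapq.heappush(crews, completion[j])
    return start, completion
```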
We now develop an approximation bound for Algorithm <ref>. Let E_j^H denote the energization time of line j in the schedule constructed by Algorithm <ref>. Then the following must hold:

E_j^H ≤ 2 E_j^LP, ∀ j ∈ L^D

Let S_j^H, C_j^H and E_j^H denote the start, completion and energization times, respectively, of some line j in the schedule constructed by Algorithm <ref>. Define M := [M_j^LP : j = 1, 2, …, | L^D |]. Let M̃ denote M sorted in ascending order, ℐ̃_j denote the position of some line j ∈ L^D in M̃, and { k : ℐ̃_k ≤ ℐ̃_j, k ≠ j } := R denote the set of jobs whose LP midpoints are upper bounded by M_j^LP. First, we claim that S_j^H ≤ (1/m) ∑_{i ∈ R} p_i. To see why, split the set R into m subsets, corresponding to the schedules of the m crews, i.e., R = ⋃_{k=1}^m R^k. Since job j is assigned to the first idle crew and repairs commence immediately, we have:

S_j^H = min{ ∑_{i ∈ R^k} p_i : k = 1, 2, …, m } ≤ (1/m) ∑_{k=1}^m ∑_{i ∈ R^k} p_i = (1/m) ∑_{i ∈ R} p_i,

where the inequality follows from the fact that the minimum of a set of positive numbers is upper bounded by the mean. Next, noting that M_j^LP = E_j^LP - p_j/2, we rewrite eqn. <ref> as follows:

∑_{j ∈ A} p_j M_j^LP ≥ (1/(2m)) (∑_{j ∈ A} p_j)^2, ∀ A ⊂ L^D

Now, letting A = R, we have:

(∑_{i ∈ R} p_i) M_j^LP ≥ ∑_{i ∈ R} p_i M_i^LP ≥ (1/(2m)) (∑_{i ∈ R} p_i)^2,

where the first inequality follows from the fact that M_j^LP ≥ M_i^LP for any i ∈ R. Combining eqns. <ref> and <ref>, it follows that S_j^H ≤ 2 M_j^LP. Consequently, C_j^H = S_j^H + p_j ≤ 2 M_j^LP + p_j = 2 E_j^LP. Then,

E_j^H = max_{i ≼_S j} C_i^H ≤ max_{i ≼_S j} 2 E_i^LP = 2 E_j^LP,

where the last equality follows trivially from the definition of a soft precedence constraint.

Algorithm <ref> is a 2-approximation.

Let E_j^* denote the energization time of line j in the optimal schedule. Then, with E_j^LP being the solution of the linear relaxation,

∑_{j ∈ L^D} w_j E_j^LP ≤ ∑_{j ∈ L^D} w_j E_j^*

Finally, from eqns. <ref> and <ref>, we have:

∑_{j ∈ L^D} w_j E_j^H ≤ 2 ∑_{j ∈ L^D} w_j E_j^LP ≤ 2 ∑_{j ∈ L^D} w_j E_j^*

§ AN ALGORITHM FOR CONVERTING THE OPTIMAL SINGLE CREW REPAIR SEQUENCE TO A MULTI-CREW SCHEDULE

In practice, many utilities schedule repairs using a priority list <cit.>, which leaves much scope for improvement. We analyze the repair and restoration process as it would be done with a single crew because this provides important insights into the general structure of the multi-crew scheduling problem. Subsequently, we provide an algorithm for converting the single crew repair sequence to a multi-crew schedule, which is inspired by similar previous work in <cit.>, and analyze its worst case performance. Finally, we develop a multi-crew dispatch rule and compare it with the current practices of FirstEnergy Group <cit.> and Edison Electric Institute <cit.>.

§.§ Single crew restoration in distribution networks

We show that this problem is equivalent to 1 | outtree | ∑ w_j C_j, which stands for scheduling to minimize the total weighted completion time of N jobs with a single machine under `outtree' precedence constraints. Outtree precedence constraints require that each job may have at most one predecessor. Given the manner in which we derive the soft precedence constraints (see Section <ref>), it is easy to see that P will indeed follow outtree precedence requirements, i.e., each node in P will have at most one predecessor, as long as the network topology G does not have any cycles. We will show by the following lemma that the soft precedence constraints degenerate to the precedence constraints with one repair team.
Given one repair crew, the optimal schedule in a radial distribution system must follow outtree precedence constraints, the topology of which follows the soft precedence graph P.

Given one repair crew, each schedule can be represented by a sequence of damaged lines. Let i-j and j-k be two damaged lines such that the node (j,k) is the immediate successor of node (i,j) in the soft precedence graph P. Let π be the optimal sequence and π^' another sequence derived from π by swapping i-j and j-k. Denote the energization times of nodes j and k in π by E_j and E_k respectively. Similarly, let E_j^' and E_k^' denote the energization times of nodes j and k in π^'. Define f := ∑_{n ∈ N} w_n E_n. Since node k cannot be energized unless node j is energized and until the line between it and its immediate predecessor is repaired, we have E_k^' = E_j^' in π^' and E_k > E_j in π. Comparing π and π^', we see that node k is energized at the same time, i.e., E_k^' = E_k, and therefore, E_j^' > E_j. Thus:

f(π^') - f(π) = (w_j E_j^' + w_k E_k^') - (w_j E_j + w_k E_k) = w_j(E_j^' - E_j) + w_k(E_k^' - E_k) > 0

Therefore, any job swap that violates the outtree precedence constraints will strictly increase the objective function. Consequently, the optimal sequence must follow these constraints. It follows immediately from Proposition <ref> that:

Single crew repair and restoration scheduling in distribution networks is equivalent to 1 | outtree | ∑_j w_j C_j, where the outtree precedences are given in the soft precedence constraint graph P.

§.§ Recursive scheduling algorithm for single crew restoration scheduling

As shown above, the single crew repair scheduling problem in distribution networks is equivalent to 1 | outtree | ∑ w_j C_j, for which an optimal algorithm exists <cit.>. We will briefly discuss this algorithm and the reasoning behind it. Details and proofs can be found in <cit.>. Let J^D ⊆ L^D denote any subset of damaged lines. Define:

w(J^D) := ∑_{j ∈ J^D} w_j, p(J^D) := ∑_{j ∈ J^D} p_j, q(J^D) := w(J^D)/p(J^D)

Algorithm <ref>, adapted from <cit.> with a change of notation, finds the optimal repair sequence by recursively merging the nodes in the soft precedence graph P. The input to this algorithm is the precedence graph P. Let N(P) = {1, 2, …, | N(P) |} denote the set of nodes in P (representing the set of damaged lines, L^D), with node 1 being the designated root. The predecessor of any node n ∈ P is denoted by pred(n). Lines 1-7 initialize different variables. In particular, we note that the predecessor of the root is arbitrarily initialized to be 0 and its weight is initialized to -∞ to ensure that the root node is the first job in the optimal repair sequence. Broadly speaking, at each iteration, a node j ∈ N(P) (j could also be a group of nodes) is chosen to be merged into its immediate predecessor i ∈ N(P) if q(j) is the largest. The algorithm terminates when all nodes have been merged into the root. Upon termination, the optimal single crew repair sequence can be recovered from the predecessor vector and the element A(1), which indicates the last job finished.

We conclude this section by noting that Algorithm <ref> requires the precedence graph P to have a defined root. However, as illustrated in Section <ref>, it is quite possible for P to be a forest, i.e., a set of disjoint trees. In such a situation, P can be modified by introducing a dummy root node with a repair time of 0 and inserting directed edges from this dummy root to the roots of each individual tree in the forest. This fictitious root will be the first job in the repair sequence returned by the algorithm, which can then be stripped off.
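As a concrete illustration of this recursive merging scheme, here is a sketch (ours, following the description above; P is represented by predecessor pointers, and the returned sequence begins with the root, which should be stripped off if a dummy root was added):

```python
import math

def optimal_single_crew_sequence(pred, w, p):
    """Recursive merging for 1 | outtree | sum w_j C_j.
    pred[j]: immediate predecessor of node j in P (None for the root);
    w[j], p[j]: weight and repair time of damaged line j."""
    merged_into = {j: None for j in pred}       # union pointers for merged groups
    def find(j):                                # current group containing node j
        while merged_into[j] is not None:
            j = merged_into[j]
        return j
    seq = {j: [j] for j in pred}                # job sequence of each group
    W, Ptot = dict(w), dict(p)
    root = next(j for j in pred if pred[j] is None)
    W[root] = -math.inf                         # root comes first, is never merged upward
    active = set(pred) - {root}
    while active:
        j = max(active, key=lambda g: W[g] / Ptot[g])   # group with largest q-factor
        i = find(pred[j])                       # group holding j's predecessor
        seq[i] += seq[j]                        # j's jobs run right after i's
        W[i] += W[j]
        Ptot[i] += Ptot[j]
        merged_into[j] = i
        active.discard(j)
    return seq[root]                            # optimal single crew repair sequence
```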
§.§ Conversion algorithm and an approximation bound

A greedy procedure for converting the optimal single crew sequence to a multiple crew schedule is given in Algorithm <ref>. We now prove that it is a (2 - 1/m)-approximation algorithm. We start with two lemmas that provide lower bounds on the minimal harm for an m-crew schedule, in terms of the minimal harms for single crew and ∞-crew schedules. Let H^1,∗, H^m,∗ and H^∞,∗ denote the minimal harms when the number of repair crews is 1, some arbitrary m (2 ≤ m < ∞), and ∞ respectively.

H^m,∗ ≥ (1/m) H^1,∗

Given an arbitrary m-crew schedule S^m with harm H^m, we first construct a 1-crew repair sequence, S^1. We do so by sorting the energization times of the damaged lines in S^m in ascending order and assigning the corresponding sorted sequence of lines to S^1. Ties, if any, are broken according to precedence constraints or arbitrarily if there are none. By construction, for any two damaged lines i and j with precedence constraint i ≺ j, the completion time of line i must be strictly smaller than the completion time of line j in S^1, i.e., C_i^1 < C_j^1. Additionally, C_i^1 = E_i^1 because the completion and energization times of lines are identical for a 1-crew repair sequence which also meets the precedence constraints of P.

Next, we claim that E_i^1 ≤ m E_i^m, where E_i^1 and E_i^m are the energization times of line i in S^1 and S^m respectively. In order to prove it, we first observe that:

E_i^1 = C_i^1 = ∑_{j : E_j^m ≤ E_i^m} p_j ≤ ∑_{j : C_j^m ≤ E_i^m} p_j,

where the second equality follows from the manner in which we constructed S^1 from S^m and the inequality follows from the fact that C_j^m ≤ E_j^m ⇒ {j : E_j^m ≤ E_i^m} ⊆ {j : C_j^m ≤ E_i^m} for any m-crew schedule. In other words, the set of lines that have been energized before line i is energized is a subset of the set of lines on which repairs have been completed before line i is energized. Next, we split the set {j : C_j^m ≤ E_i^m} := R into m subsets, corresponding to the schedules of the m crews in S^m, i.e., R = ⋃_{k=1}^m R^k, where R^k is the subset of the jobs in R that appear in the k-th crew's schedule. It is obvious that the sum of the repair times of the lines in each R^k can be no greater than E_i^m. Therefore,

E_i^1 ≤ ∑_{j : C_j^m ≤ E_i^m} p_j := ∑_{j ∈ R} p_j = ∑_{k=1}^m (∑_{j ∈ R^k} p_j) ≤ m E_i^m

Proceeding with the optimal m-crew schedule S^m,∗ instead of an arbitrary one, it is easy to see that E_i^1 ≤ m E_i^m,∗, where E_i^m,∗ is the energization time of line i in S^m,∗. The lemma then follows straightforwardly:

H^m,∗ = ∑_{i ∈ L^D} w_i E_i^m,∗ ≥ ∑_{i ∈ L^D} w_i (1/m) E_i^1 = (1/m) ∑_{i ∈ L^D} w_i E_i^1 = (1/m) H^1 ≥ (1/m) H^1,∗

H^m,∗ ≥ H^∞,∗

This is intuitive, since the harm is minimized when the number of repair crews is at least equal to the number of damaged lines. In the ∞-crew case, every job can be assigned to one crew. For any damaged line j ∈ L^D, C_j^∞ = p_j and E_j^∞ = max_{i ≼ j} C_i^∞ = max_{i ≼ j} p_i. Also, C_j^m,∗ ≥ p_j = C_j^∞ and E_j^m,∗ = max_{i ≼ j} C_i^m,∗ ≥ max_{i ≼ j} p_i = E_j^∞. Therefore:

H^m,∗ = ∑_{j ∈ L^D} w_j E_j^m,∗ ≥ ∑_{j ∈ L^D} w_j E_j^∞ = H^∞,∗
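Before bounding the conversion algorithm's energization times, here is a minimal sketch of the greedy conversion itself (ours): it is the same first-idle-crew assignment as in the earlier list scheduling sketch, now applied in single-crew order, followed by the recovery of energization times from the outtree P:

```python
import heapq

def convert_to_multicrew(seq, p, m):
    """Assign jobs, in single-crew sequence order, to the first idle crew."""
    crews = [0.0] * m
    heapq.heapify(crews)
    completion = {}
    for j in seq:
        t = heapq.heappop(crews)          # earliest idle crew starts job j
        completion[j] = t + p[j]
        heapq.heappush(crews, completion[j])
    return completion

def energization_times(completion, pred):
    """E_j = max completion time over j and its ancestors in the outtree P
    (pred[j] is j's unique predecessor, None at the root)."""
    E = {}
    def e(j):
        if j not in E:
            E[j] = completion[j] if pred[j] is None else max(completion[j], e(pred[j]))
        return E[j]
    for j in completion:
        e(j)
    return E
```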
Let E_j^m be the energization time of line j after the conversion algorithm is applied to the optimal single crew repair schedule. Then, ∀ j ∈ L^D,

E_j^m ≤ (1/m) E_j^1,∗ + ((m-1)/m) E_j^∞,∗

Let S_j^m and C_j^m denote, respectively, the start and completion times of some line j ∈ L^D in the m-crew repair schedule, S^m, obtained by applying the conversion algorithm to the optimal 1-crew sequence, S^1,∗. Also, let ℐ_j denote the position of line j in S^1,∗ and {k : ℐ_k < ℐ_j} := R denote the set of all lines completed before j in S^1,∗. First, we claim that S_j^m ≤ (1/m) ∑_{i ∈ R} p_i. A proof can be constructed by following the approach taken in the proof of Proposition <ref> and is therefore omitted. Now:

C_j^m = S_j^m + p_j ≤ (1/m) ∑_{i ∈ R} p_i + p_j = (1/m) ∑_{i ∈ R ∪ {j}} p_i + ((m-1)/m) p_j = (1/m) C_j^1,∗ + ((m-1)/m) p_j

and

E_j^m = max_{i ≼_S j} C_i^m ≤ max_{i ≼_S j} (1/m) C_i^1,∗ + max_{i ≼_S j} ((m-1)/m) p_i = (1/m) C_j^1,∗ + ((m-1)/m) max_{i ≼_S j} p_i = (1/m) E_j^1,∗ + ((m-1)/m) E_j^∞,∗

The conversion algorithm is a (2 - 1/m)-approximation.

H^m = ∑_{j ∈ L^D} w_j E_j^m ≤ ∑_{j ∈ L^D} w_j ((1/m) E_j^1,∗ + ((m-1)/m) E_j^∞) ⋯ using Proposition <ref>
= (1/m) ∑_{j ∈ L^D} w_j E_j^1,∗ + ((m-1)/m) ∑_{j ∈ L^D} w_j E_j^∞ = (1/m) H^1,∗ + ((m-1)/m) H^∞,∗
≤ (1/m)(m H^m,∗) + ((m-1)/m) H^m,∗ ⋯ using Propositions <ref> - <ref>
= (2 - 1/m) H^m,∗

§.§ A Dispatch Rule

We now develop a multi-crew dispatch rule from a slightly different perspective, and show that it is equivalent to the conversion algorithm. In the process, we define a parameter, ρ(l), ∀ l ∈ L^D, which can be interpreted as a `component importance measure' (CIM) in the context of reliability engineering. This allows us to easily compare our conversion algorithm to standard utility practices. Towards that goal, we revisit the single crew repair problem, in conjunction with the algorithm proposed in <cit.>. Let S_l denote the set of all trees rooted at node l in P and s_l^∗ ∈ S_l denote the minimal subtree which satisfies:

ρ(l) := (∑_{j ∈ N(s_l^∗)} w_j) / (∑_{j ∈ N(s_l^∗)} p_j) = max_{s_l ∈ S_l} ( (∑_{j ∈ N(s_l)} w_j) / (∑_{j ∈ N(s_l)} p_j) ),

where N(s_l) is the set of nodes in s_l. We define the ratio on the left-hand side of the equality in eqn. <ref> to be the ρ-factor of line l, denoted by ρ(l). We refer to the tree s_l^∗ as the minimal ρ-maximal tree rooted at l, which resembles the definitions discussed in <cit.>. With ρ-factors calculated for all damaged lines, the repair scheduling with single crew can be solved optimally, as stated in Algorithm <ref> below, adopted from <cit.>. Note that ρ-factors are defined based on the soft precedence graph P, whereas the following dispatch rules are stated in terms of the original network G to be more in line with industry practices. It has been proven in <cit.> that Algorithms <ref> and <ref> are equivalent. The ρ-factors can be calculated in multiple ways: (1) following the method proposed in <cit.>, (2) as a byproduct of Algorithm <ref>, and (3) using a more general method based on parametric minimum cuts in an associated directed precedence graph <cit.>.

Algorithm <ref> can be extended straightforwardly to accommodate multiple crews. However, in this case, it could happen that the number of damaged lines that are connected to energized nodes is smaller than the number of available repair crews. To cope with this issue, we also consider the lines which are connected to the lines currently being repaired, as described in Algorithm <ref> below.
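A sketch of the resulting multi-crew dispatch loop, stated at the level of the precedence graph P and assuming the ρ-factors have been precomputed by one of the methods above (ours; names are hypothetical):

```python
import heapq
from itertools import count

def rho_dispatch(pred, p, rho, m):
    """Repeatedly send idle crews to the candidate line with the largest
    rho-factor; candidates are unrepaired lines whose predecessor in P is
    absent, already repaired, or currently under repair."""
    tick = count()                       # tie-breaker for the event heap
    events, finish = [], {}
    done, in_repair = set(), set()
    remaining, t, idle = set(pred), 0.0, m
    def candidates():
        return [j for j in remaining
                if pred[j] is None or pred[j] in done or pred[j] in in_repair]
    while remaining or events:
        cands = candidates()
        while idle > 0 and cands:        # assign idle crews by rho priority
            j = max(cands, key=lambda l: rho[l])
            remaining.discard(j)
            in_repair.add(j)
            heapq.heappush(events, (t + p[j], next(tick), j))
            idle -= 1
            cands = candidates()
        t, _, j = heapq.heappop(events)  # advance to the next repair completion
        in_repair.discard(j)
        done.add(j)
        finish[j] = t
        idle += 1
    return finish
```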
Algorithm <ref> is equivalent to Algorithm <ref> discussed in Section <ref>.

As stated above, Algorithms <ref> and <ref> are both optimal algorithms and we assume that, without loss of generality, they produce the same optimal sequences. Then it suffices to show that Algorithm <ref> converts this sequence into a multi-crew schedule in the same way that Algorithm <ref> converts the sequence produced by Algorithm <ref>. The proof is by induction on the order of lines being selected. In iteration 1, it is obvious that Algorithms <ref> and <ref> choose the same line for repair. Suppose this is also the case for iterations 2 to t-1, with the lines chosen for repair being l_1, l_2, l_3, ⋯, and l_{t-1} respectively. Then, in iteration t, the set of candidate lines for both algorithms is the set of immediate successors of the supernode {l_1, l_2, ⋯, l_{t-1}}. Both algorithms will choose the job with the largest ρ-factor in iteration t, thereby completing the induction process.

§.§ Comparison with current industry practices

According to FirstEnergy Group <cit.>, repair crews will “address outages that restore the largest number of customers before moving to more isolated problems”. This policy can be interpreted as a priority-based scheduling algorithm and fits within the scheme of the dispatch rule discussed above, the difference being that, instead of selecting the line with the largest ρ-factor, FirstEnergy chooses the one with the largest weight (which turns out to be the number of customers). Edison Electric Institute <cit.> states that crews are dispatched to “repair lines that will return service to the largest number of customers in the least amount of time”. This policy is analogous to Smith's ratio rule <cit.>, where jobs are sequenced in descending order of the ratios w_l/p_l, ensuring that jobs with a larger weight and a smaller repair time have a higher priority. The parameter ρ(l) can be viewed as a generalization of the ratio w_l/p_l and characterizes the repair priority of some damaged line l in terms of its own importance as well as the importance of its succeeding nodes in P. Stated differently, ρ(l) can be interpreted as a broad component importance measure for line l. Intuitively, we expect a dispatch rule based on ρ(l) to work better than current industry practice since it takes a more holistic view of the importance of a line and, additionally, has a proven theoretical performance bound. Simulation results presented later confirm that a dispatch rule based on our proposed ρ-factors indeed results in a better restoration trajectory compared to standard industry practices.

§ CASE STUDIES

In this section, we apply our proposed methods to three IEEE standard test feeders of different sizes. We consider the worst case, where all lines are assumed to be damaged. In each case, the importance factor w of each node is a random number between 0 and 1, with the exception of a randomly selected extremely important node with w = 5. The repair times are uniformly distributed on integers from 1 to 10. We compare the performances of the three methods, with computational time being of critical concern since restoration activities, in the event of a disaster, typically need to be performed in real time or near real time. All experiments were performed on a desktop with a 3.10 GHz Intel Xeon processor and 16 GB RAM. The ILP formulation was solved using JuMP (Julia for Mathematical Programming) with Gurobi 6.0.

§.§ IEEE 13-Node Test Feeder

The first case study is performed on the IEEE 13-Node Test Feeder shown in Fig. <ref>, assuming that the number of repair crews is m=2. Since this distribution network is small, an optimal solution could be obtained by solving the ILP model.
We ran 1000 experiments in order to compare the performances of the two heuristic algorithms w.r.t. the ILP formulation. Fig. <ref> shows the density plots of the optimality gaps of the LP-based list scheduling algorithm (LP) and the conversion algorithm (CA), along with the better solution from the two (EN). Fig. <ref> shows the optimality gaps when all repair times are integers. The density plot in this case is cut off at 0 since the ILP solves the problem optimally. Non-integer repair times can be scaled up arbitrarily close to integer values, but at the cost of reduced computational efficiency of the ILP. Therefore, in the second case, we perturbed the integer valued repair times by ± 0.1, which represents a reasonable compromise between computational accuracy and efficiency. The optimality gaps in this case are shown in Fig. <ref>. In this case, we solved the ILP using rounded off repair times, but the cost function was computed using the (sub-optimal) schedules provided by the ILP model and the actual non-integer repair times. This is why the heuristic algorithms sometimes outperform the ILP model, as is evident from Fig. <ref>. In both cases, the two heuristic algorithms can solve most of the instances with an optimality gap of less than 10%. Comparing the two methods, we see that the conversion algorithm (CA) has a smaller mean optimality gap, a thinner tail, and a better worst case performance. However, this does not mean that the conversion algorithm is universally superior. In approximately 34% of the problem instances, we have found that the LP-based list scheduling algorithm yields a solution which is no worse than the one provided by the conversion algorithm.

§.§ IEEE 123-Node Test Feeder

Next, we ran our algorithms on one instance of the IEEE 123-Node Test Feeder <cit.> with m=5. Since solving such problems to optimality using the ILP requires a prohibitively large computing time, we allocated a time budget of one hour. As shown in Table <ref>, both LP and CA were able to find a better solution than the ILP, at a fraction of the computing time.

§.§ IEEE 8500-Node Test Feeder

Finally, we tested the two heuristic algorithms on one instance of the IEEE 8500-Node Test Feeder medium voltage subsystem <cit.> containing roughly 2500 lines, with m=10. We did not attempt to solve the ILP model in this case. As shown in Table <ref>, it took more than 60 hours to solve its linear relaxation (which is reasonable since we used the ellipsoid method to solve the LP with exponentially many constraints), while the conversion algorithm solved the instance in two and a half minutes.

We also compared the performance of our proposed ρ-factor based dispatch rule to the standard industry practices discussed in Section <ref>. We assign the same weights to nodes for all three dispatch rules. The plot of network functionality (fraction restored) as a function of time in Fig. <ref> shows the comparison of functionality trajectories. While the time to full restoration is almost the same for all three approaches, it is clear that our proposed algorithm results in a greater network functionality at intermediate times. Specifically, an additional 10% (approximately) of the network is restored approximately halfway through the restoration process, compared to standard industry practices.

§.§ Discussion

From the three test cases above, we conclude that the ILP model would not be very useful for scheduling repairs and restoration in real time or near real time, except for very small problems.
Even though it can be slow for large problems, the LP-based list scheduling algorithm can serve as a useful secondary tool for moderately sized problems. The conversion algorithm appears to have the best overall performance by far, in terms of both solution quality and computing time.

§ CONCLUSION

In this paper, we investigated the problem of post-disaster repair and restoration in electricity distribution networks. We first proposed an ILP formulation which, although useful for benchmarking purposes, is feasible in practice only for small scale networks due to the immense computational time required to solve it to optimality or even near optimality. We then presented three heuristic algorithms. The first method, based on an LP relaxation of the ILP model, is proven to be a 2-approximation algorithm. The second method converts the optimal single crew schedule, solvable in polynomial time, to an arbitrary m-crew schedule with a proven performance bound of (2 - 1/m). The third method, based on ρ-factors which can be interpreted as component importance measures, is shown to be equivalent to the conversion algorithm. Simulations conducted on three IEEE standard networks indicate that the conversion algorithm provides very good results and is computationally efficient, making it suitable for real time implementation. The LP-based algorithm, while not as efficient, can still be used for small and medium scale problems.

Although we have focused on electricity distribution networks, the heuristic algorithms can also be applied to any infrastructure network with a radial structure (e.g., water distribution networks). Future work includes the development of efficient algorithms with proven approximation bounds which can be applied to arbitrary network topologies (e.g., meshed networks). While we have ignored transportation times between repair sites in this paper, this will be addressed in a subsequent paper. In fact, when repair jobs are relatively few and minor, but the repair sites are widely spread out geographically, optimal schedules are likely to be heavily influenced by the transportation times instead of the repair times. Finally, many distribution networks contain switches that are normally open. These switches can be closed to restore power to some nodes from a different source. Doing so obviously reduces the aggregate harm. We intend to address this issue in the future.
http://arxiv.org/abs/1702.08382v2
{ "authors": [ "Yushi Tan", "Feng Qiu", "Arindam K. Das", "Daniel S. Kirschen", "Payman Arabshahi", "Jianhui Wang" ], "categories": [ "math.OC", "90B35" ], "primary_category": "math.OC", "published": "20170227170904", "title": "Scheduling Post-Disaster Repairs in Electricity Distribution Networks" }
Fast Threshold Tests for Detecting Discrimination

Emma Pierson, Sam Corbett-Davies, Sharad Goel (Stanford University)

Threshold tests have recently been proposed as a useful method for detecting bias in lending, hiring, and policing decisions. For example, in the case of credit extensions, these tests aim to estimate the bar for granting loans to white and minority applicants, with a higher inferred threshold for minorities indicative of discrimination. This technique, however, requires fitting a complex Bayesian latent variable model for which inference is often computationally challenging. Here we develop a method for fitting threshold tests that is two orders of magnitude faster than the existing approach, reducing computation from hours to minutes. To achieve these performance gains, we introduce and analyze a flexible family of probability distributions on the interval [0, 1]—which we call discriminant distributions—that is computationally efficient to work with. We demonstrate our technique by analyzing 2.7 million police stops of pedestrians in New York City.

§ INTRODUCTION

There is wide interest in detecting and quantifying bias in human decisions, but well-known problems with traditional statistical tests of discrimination have hampered rigorous analysis. The primary goal of such work is to determine whether decision makers apply different standards to groups defined by race, gender, or other protected attributes—what economists call taste-based discrimination <cit.>. For example, in the context of banking, such discrimination might mean that minorities are granted loans only when they are exceptionally creditworthy. The key statistical challenge is that an individual's qualifications are typically only partially observed (e.g., researchers may not know an applicant's full credit history); it is thus unclear whether observed disparities are attributable to discrimination or omitted variables.

To address this problem, <cit.> recently proposed the threshold test, which considers both the decisions made (e.g., whether a loan was granted) and the outcomes of those decisions (e.g., whether a loan was repaid). The test simultaneously estimates decision thresholds and risk profiles via a Bayesian latent variable model. This approach mitigates some of the most serious statistical shortcomings of past methods. Fitting the model, however, is computationally challenging, often requiring several hours on moderately sized datasets. As is common in full Bayesian inference, the threshold model is typically fit with Hamiltonian Monte Carlo (HMC) sampling. In this case, HMC involves repeatedly evaluating gradients of conditional beta distributions that are expensive to compute.

Here we introduce a family of distributions on the interval [0,1]—which we call discriminant distributions—that is efficient for performing common statistical operations. Discriminant distributions comprise a natural subset of logit-normal mixture distributions which is sufficiently expressive to approximate logit-normal and beta distributions for a wide range of parameters. By replacing the beta distributions in the threshold test with discriminant distributions, we speed up inference by two orders of magnitude.

To demonstrate our method, we analyze 2.7 million police stops of pedestrians in New York City between 2008 and 2012. We apply the threshold test to assess possible bias in decisions to search individuals for weapons.
We also extend the threshold test to detect discrimination in the decision to stop an individual. For both problems (search decisions and stop decisions), our method accelerates inference by more than 75-fold. Such performance gains are consequential in part because each new application requires running the threshold test dozens of times to conduct a battery of standard robustness checks. To carry out the experiments in this paper, we ran the threshold test nearly 100 times. That translates into about two months of continuous, serial computation under the standard fitting method; our approach required less than one day of computation.

These performance gains also allow one to run the threshold test on very large datasets. In a national analysis of traffic stops by <cit.>, running the threshold test required splitting the data into state-level subsets; with our approach, one can fit a single national model on 22 million stops in 30 minutes, facilitating efficient pooling of information across states. Finally, such acceleration broadens the accessibility of the threshold test to policy analysts with limited computing resources. Our fast implementation of the threshold test is available online.

§ BACKGROUND

*Traditional tests of discrimination. To motivate the threshold test, we review two traditional statistical tests of discrimination: the benchmark test (or benchmarking) and the outcome test. The benchmark test analyzes the rate at which some action is taken (e.g., the rate at which stopped pedestrians are searched). Decision rates might vary across racial groups for a variety of legitimate reasons, such as race-specific differences in behavior. One thus attempts to estimate decision rates after controlling for all legitimate factors. If decision rates still differ by race after such conditioning, the benchmark test would suggest bias. Though popular, this test suffers from the well-known problem of omitted variable bias, as it is typically impossible for researchers to observe—and control for—all legitimate factors that might affect decisions. For example, if evasiveness is a reliable indicator of possessing contraband, is not observed by researchers, and is differentially distributed across race groups, the benchmark test might indicate discrimination where there is none. This concern is especially problematic for face-to-face interactions such as police stops that may rely on hard-to-quantify behavioral observations.

Addressing this shortcoming, <cit.> proposed the outcome test, which is based not on the rate at which decisions are made but on the hit rate (i.e., the success rate) of those decisions. Becker reasoned that even if one cannot observe the rationale for a search, absent discrimination contraband should be found on searched minorities at the same rate as on searched whites. If searches of minorities turn up weapons at lower rates than searches of whites, it suggests that officers are applying a double standard, searching minorities on the basis of less evidence.

Outcome tests, however, are also imperfect measures of discrimination <cit.>. Suppose that there are two, easily distinguishable types of white pedestrians: those who have a 1% chance of carrying weapons, and those who have a 75% chance. Similarly assume that black pedestrians have either a 1% or 50% chance of carrying weapons.
If officers, in a race-neutral manner, search individuals who are at least 10% likely to be carrying a weapon, then searches of whites will be successful 75% of the time whereas searches of blacks will be successful only 50% of the time. With such a race-neutral threshold, no individual is treated differently because of their race. Thus, contrary to the findings of the outcome test (which suggests discrimination against blacks due to their lower hit rate), no discrimination is present. This illustrates a failure of outcome tests known as the problem of infra-marginality <cit.>.

*The threshold test. To circumvent this problem of infra-marginality, the threshold test of <cit.> attempts to directly infer race-specific search thresholds. Though still relatively new, the test has already been used to analyze tens of millions of police stops across the United States <cit.>. The threshold test is based on a Bayesian latent variable model that formalizes the following stylized process of search and discovery. Upon stopping a pedestrian, officers observe the probability p the individual is carrying a weapon; this probability summarizes all the available information, such as the stopped individual's age and gender, criminal record, and behavioral indicators like nervousness and evasiveness. Because these probabilities vary from one individual to the next, p is modeled as being drawn from a risk distribution that depends on the stopped person's race (r) and the location of the stop (d), where location might indicate the precinct in which the stop occurred. Officers deterministically conduct a search if the probability p exceeds a race- and location-specific threshold (t_rd), and if a search is conducted, a weapon is found with probability p. By reasoning in terms of risk distributions, one avoids the omitted variables problem by marginalizing out all unobserved variables. In this formulation, one need not observe the factors that led to any given decision, and can instead infer the aggregate distribution of risk for each group.

Figure <ref> illustrates hypothetical risk distributions and thresholds for two groups in a single location. This representation visually describes the mapping from thresholds and risk distributions to search rates and hit rates (the observed data). Suppose P_rd is a random variable (termed the “risk distribution”) that gives the probability of finding a weapon on a stopped pedestrian in group r in location d. The search rate s_rd of group r in location d is Pr(P_rd > t_rd), the probability that a randomly selected pedestrian in that group and location exceeds the race- and location-specific search threshold; graphically, this is the proportion of the risk distribution to the right of the threshold. The hit rate is the probability that a random searched pedestrian is carrying a weapon: h_rd = 𝔼[P_rd | P_rd > t_rd]; in other words, the hit rate is the mean of the risk distribution conditional on being above the threshold.

The primary goal of inference is to determine the latent thresholds t_rd. If the thresholds applied to one race group are consistently lower than the thresholds applied to another, this suggests discrimination against the group with the lower thresholds.
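To make this mapping concrete, here is a small sketch (ours, not from the paper's released code) that computes the search rate and hit rate implied by a Beta(a, b) risk distribution and a threshold t, using the identity 𝔼[P · 1{P > t}] = (a/(a+b)) · Pr(Beta(a+1, b) > t). Note that even this simple computation requires incomplete beta function evaluations, which foreshadows the computational issues discussed below:

```python
from scipy.stats import beta

def search_and_hit_rate(a, b, t):
    """Search rate Pr(P > t) and hit rate E[P | P > t] for P ~ Beta(a, b)."""
    s = beta.sf(t, a, b)                           # mass above the threshold
    h = (a / (a + b)) * beta.sf(t, a + 1, b) / s   # conditional mean above t
    return s, h

# Example: lowering the threshold raises the search rate and lowers the hit rate.
print(search_and_hit_rate(1.5, 15.0, 0.10))
print(search_and_hit_rate(1.5, 15.0, 0.05))
```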
In order to estimate the decision thresholds, the risk distributions must be simultaneously inferred. In <cit.>, these risk profiles take the form of beta distributions parameterized by means ϕ_rd = logit^-1(ϕ_r + ϕ_d) and total count parameters λ_rd = exp(λ_r + λ_d), where ϕ_r, ϕ_d, λ_r, and λ_d are parameters that depend on the race of the stopped individuals and the location of the stops. Reparameterizing these risk distributions is the key to accelerating inference. Given the number of searches and hits by race and location, we can compute the likelihood of the observed data under any set of model parameters {ϕ_d, ϕ_r, λ_d, λ_r, t_rd}. One can likewise compute the posterior distribution of the parameters given the data and prior distributions.

*Inference via Hamiltonian Monte Carlo. Bayesian inference is challenging when the parameter space is high dimensional, because random walk MCMC methods fail to fully explore the complex posterior in any reasonable time. This obstacle can be addressed with Hamiltonian Monte Carlo (HMC) methods <cit.>, which propose new samples by numerically integrating along the gradient of the log posterior, allowing for more efficient exploration. The speed of convergence of HMC depends on three factors: (1) the gradient computation time per integration step; (2) the number of integration steps per sample; and (3) the number of effectively independent samples relative to the total number of samples (i.e., the effective sample size). The first can be improved by simplifying the analytical form of the log posterior and its derivatives. The second and third factors depend on the geometry of the posterior: a smooth posterior allows for longer paths between samples that take fewer integration steps to traverse <cit.>. On all three measures, our new threshold model generally outperforms that of <cit.>; the improvement in gradient computation is particularly substantial.

When full Bayesian inference is computationally difficult, it is common to consider alternatives such as variational inference <cit.>. Though fast, such alternatives have shortcomings. As we discuss below, variational inference produced worse fits than full Bayesian inference on our policing dataset. Moreover, parameter estimates from variational inference varied significantly from run to run in our tests, as estimates were sensitive to initialization. With our accelerated threshold test, one can have the best of both worlds: the statistical benefits of full Bayesian inference and the speed of fast alternatives.

§ DISCRIMINANT DISTRIBUTIONS

The computational complexity of the standard threshold test is in large part due to difficulties of working with beta distributions. When P has a beta distribution, it is expensive to compute the search rate Pr(P > t), the hit rate 𝔼[P | P > t], and their associated derivatives <cit.>. Here we introduce an alternative family of discriminant distributions for which it is efficient to compute these quantities. We motivate and analyze this family in the specific context of the threshold test, but the family itself can be applied more widely.

To define discriminant distributions, assume that there are two classes (positive and negative), and the probability of being in the positive class is ϕ. For example, positive examples might correspond to individuals who are carrying weapons, and negative examples to those who are not. We further assume that each class emits signals that are normally distributed according to N(μ_0, σ_0) and N(μ_1, σ_1), respectively.
Denote by X the signal emitted by a random instance in the population, and by Y ∈ {0,1} its class membership. Then, given an observed signal x, one can compute the probability g(x) = Pr(Y = 1 | X = x) that it was emitted by a member of the positive class. Throughout the paper, we term the domain of g the signal space and its range the probability space. Finally, we say the random variable g(X) has a discriminant distribution with parameters ϕ, μ_0, σ_0, μ_1, and σ_1.

Consider parameters ϕ ∈ (0,1), μ_0 ∈ ℝ, σ_0 ∈ ℝ_+, μ_1 ∈ ℝ, and σ_1 ∈ ℝ_+, where μ_1 > μ_0. The discriminant distribution disc(ϕ, μ_0, σ_0, μ_1, σ_1) is defined as follows. Let

Y ∼ Bernoulli(ϕ), X | Y=0 ∼ N(μ_0, σ_0), X | Y=1 ∼ N(μ_1, σ_1).

Set g(x) = Pr(Y=1 | X=x). Then the random variable g(X) is distributed as disc(ϕ, μ_0, σ_0, μ_1, σ_1).

Our description above mirrors the motivation of linear discriminant analysis (LDA). Although it is common to consider the conditional probability of class membership g(x), it is less common to consider the distribution of these probabilities as an alternative to beta or logit-normal distributions. To the best of our knowledge, the computational properties of discriminant distributions have not been previously studied.

As in the case of LDA, the statistical properties of discriminant distributions are particularly nice when the underlying normal distributions have the same variance. Proposition <ref> below establishes a key monotonicity property that is standard in the development of LDA; though the statement is well-known, we include it here for completeness. Proofs of this proposition and other technical statements are in the Supplementary Information (SI).

Given a discriminant distribution disc(ϕ, μ_0, σ_0, μ_1, σ_1), the mapping g from signal space to probability space is monotonic if and only if σ_0 = σ_1.

We confine our attention to homoskedastic discriminant distributions so that the mapping from signal space to probability space will be monotonic. Without this property, we cannot interpret a threshold on signal space as a threshold in probability space, which is key to our analysis. Homoskedastic discriminant distributions (i.e., with σ_0 = σ_1) involve four parameters. But in fact only two parameters are required to fully describe this family of distributions. This simplified parameterization is useful for computation.

Suppose disc(ϕ, μ_0, σ, μ_1, σ) and disc(ϕ', μ_0', σ', μ_1', σ') are two homoskedastic discriminant distributions. Let δ = (μ_1 - μ_0)/σ and define δ' analogously. Then the two distributions are identical if ϕ = ϕ' and δ = δ'. As a result, homoskedastic discriminant distributions can be parameterized by ϕ and δ alone.

Given Proposition <ref>, we henceforth write disc(ϕ, δ) to denote a homoskedastic discriminant distribution. Considering disc(ϕ, δ) as a distribution of calibrated predictions, the parameters have intuitive interpretations: ϕ is the fraction of individuals in the positive class, while δ is monotonically related to the AUC-ROC of the predictions[AUC-ROC is the probability that a random member of the positive class is assigned a higher score than a random member of the negative class. Since the positive and negative classes emit signals from independent normal distributions, it is straightforward to show that the AUC-ROC equals Φ(δ/√(2)) (where Φ(·) is the CDF of the standard normal distribution).]. Even though the distribution itself depends on only ϕ and δ, the transformation g from signal space to probability space depends on the particular 4-parameter representation we use.
For simplicity, we consider the representation with μ_0 = 0 and σ = 1. This yields the simplified transformation function:

g(x) = 1 / (1 + ((1-ϕ)/ϕ) · exp(-δx + δ^2/2)).

Our primary motivation for introducing discriminant distributions is to accelerate key computations of the (complementary) CDF and conditional means. Letting P = g(X), we are specifically interested in computing Pr(P > t) and 𝔼[P | P > t]. With (homoskedastic) discriminant distributions, these quantities map nicely to signal space, where they can be computed efficiently. Denote by Φ̅(x; μ, σ) the normal complementary CDF. Then the complementary CDF of P can be computed as follows:

Pr(P > t) = (1-ϕ) Φ̅(g^-1(t); 0, 1) + ϕ Φ̅(g^-1(t); δ, 1).

For the conditional mean, we have

𝔼[P | P > t] = ϕ Φ̅(g^-1(t); δ, 1) / [ (1-ϕ) Φ̅(g^-1(t); 0, 1) + ϕ Φ̅(g^-1(t); δ, 1) ].

Importantly, the CDF and conditional means for discriminant distributions are closely related to those for the normal distributions, and as such are computationally efficient to work with. In particular, the gradients of these functions are relatively straightforward to evaluate; a short computational sketch of these operations is given at the end of this section. The corresponding quantities for logit-normal and beta distributions involve tricky numerical approximations <cit.>.

Finally, we show that discriminant distributions are an expressive family of distributions (Fig. <ref>): they can approximate typical instantiations of the logit-normal and beta distributions. First, we select the parameters of the reference distribution: (μ, σ) for the logit-normal or (ϕ, λ) for the beta. Then we numerically optimize the parameters of the discriminant distribution to minimize the total variation distance between the reference distribution and the discriminant distribution. The top row of Figure <ref> shows some typical densities and their approximations. The bottom row investigates the approximation error for a wide range of parameter values. The discriminant distribution fits the logit-normal very well (distance below 0.1 for all distributions with σ ≤ 3), and the beta distribution moderately well (distance below 0.2 for λ ≥ 1). Discriminant distributions approximate logit-normal distributions particularly well because they form a subset of logit-normal mixture distributions (SI).
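The computational sketch promised above follows (our own illustration in Python with SciPy; names are ours). It implements g, its inverse, and the two displayed quantities; note that everything reduces to standard normal survival functions:

```python
import numpy as np
from scipy.stats import norm

def g(x, phi, delta):
    """Signal -> probability map for disc(phi, delta) with mu_0 = 0, sigma = 1."""
    return 1.0 / (1.0 + (1 - phi) / phi * np.exp(-delta * x + delta**2 / 2))

def g_inv(t, phi, delta):
    """Probability-space threshold t -> signal-space threshold."""
    logit = lambda u: np.log(u / (1 - u))
    return delta / 2 + (logit(t) - logit(phi)) / delta

def search_rate(t, phi, delta):
    """Pr(P > t) for P ~ disc(phi, delta)."""
    x = g_inv(t, phi, delta)
    return (1 - phi) * norm.sf(x) + phi * norm.sf(x - delta)

def hit_rate(t, phi, delta):
    """E[P | P > t] for P ~ disc(phi, delta)."""
    x = g_inv(t, phi, delta)
    above = (1 - phi) * norm.sf(x) + phi * norm.sf(x - delta)
    return phi * norm.sf(x - delta) / above
```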
§ STOP-AND-FRISK CASE STUDY

To demonstrate the value of discriminant distributions for speeding up the threshold test, we analyze a public dataset of pedestrian stops conducted by New York City police officers under its “stop-and-frisk” practice. Officers have legal authority to stop and briefly detain individuals when they suspect criminal activity. There is worry, however, that such discretionary decisions are prone to racial bias; indeed the NYPD practice was recently ruled discriminatory in federal court and subsequently curtailed <cit.>. Here we revisit the statistical evidence for discrimination.

Our dataset contains information on 2.7 million police stops occurring between 2008 and 2012. Several variables are available for each stop, including the race of the pedestrian, the police precinct in which the stop occurred, whether the pedestrian was “frisked” (patted-down in search of a weapon), and whether a weapon was found. We analyze stops of white, black, and Hispanic pedestrians, as there are relatively few stops of individuals of other races. We use the threshold test to analyze two decisions: the initial stop decision, and the subsequent decision of whether or not to conduct a frisk. Analyzing frisk decisions is a straightforward application of the threshold test: simply replacing beta distributions in the model with discriminant distributions results in more than a 100-fold speedup. To analyze stop decisions, we extend the threshold model to the case where one does not observe negative examples (i.e., those who were not stopped) and show that discriminant distributions again produce significant speedups.

§.§ Assessing bias in frisk decisions

We fit the threshold models using Stan <cit.>, a language for full Bayesian statistical inference via HMC. When using beta distributions in the threshold test, it takes nearly two hours to infer the model parameters; when we replace beta distributions with discriminant distributions, inference completes in under one minute. Why is it that discriminant distributions result in such a dramatic increase in performance? The compute time per effective Monte Carlo sample is the product of three terms:

seconds/n_eff = (samples/n_eff) · (integration steps/sample) · (seconds/integration step).

All three factors are significantly reduced by using discriminant distributions (Table <ref>), with the final term providing the most significant reduction. (The reduction in the first two terms is likely due to the geometry of the underlying parameter space and the accuracy with which we can numerically approximate gradients for the beta and discriminant distributions.) Using discriminant distributions reduces the time per effective sample by a factor of 760. In practice, one typically runs chains in parallel, and so total running time is determined by the last chain to terminate. When running five chains in parallel for 5,000 iterations each, the accelerated model is faster by a factor of 140.

The thresholds inferred using the accelerated model are extremely highly correlated with the thresholds inferred under the original model (correlation = 0.95), and indicate discrimination against black and Hispanic individuals. (In general, we would not expect the original and accelerated models to yield identical results on all datasets, since they use different probability distributions, so the fact that the thresholds are highly correlated serves as a robustness check.) Figure <ref> shows the inferred thresholds (under the accelerated model). Each point corresponds to the threshold for one precinct, with the threshold for white pedestrians on the horizontal axis and for minority pedestrians on the vertical axis. Within precinct, thresholds for minority pedestrians are consistently lower than thresholds for white pedestrians.

*Robustness checks. To evaluate the robustness of our substantive finding of bias in frisk decisions, we perform a series of checks for threshold tests recommended by <cit.>; we include all figures in the SI. We start by conducting posterior predictive checks <cit.>. We compute the model-inferred frisk and hit rates for each precinct and race group, and compare these to the observed rates (SI Figure 1). The model almost perfectly fits the observed frisk rates, and fits the observed hit rates quite well: the RMSE of frisk rates is 0.05%, and the RMSE of hit rates is 2.5%. (RMSEs for the original beta model are comparable: the RMSE for frisk rates is 0.1%, and the RMSE for hit rates is 2.4%.) For comparison, if the model of <cit.> is fit with variational inference—rather than HMC, to speed up inference—the frisk rate RMSE is 0.15% (a 3-fold increase), and the hit rate RMSE is 2.6% (on par with HMC).
Variational inference fits the model of <cit.> in 44 seconds, comparable to the runtime with HMC and discriminant distributions.

The stylized behavioral model underlying the threshold test posits a single frisk threshold for each race-precinct pair. In reality, officers within a precinct might apply different thresholds, and even the same officer might vary the threshold from one stop to the next. Moreover, officers only observe noisy approximations of a stopped pedestrian's likelihood of carrying a weapon; such errors can be equivalently recast as variation in the frisk threshold applied to the true probability. To investigate the robustness of our results to such heterogeneity, we next examine the stability of our inferences on synthetic datasets derived from a generative process with varying thresholds. We start with the model fit to the actual data. First, for each observed stop, we draw a signal p from the inferred signal distribution for the precinct d in which the stop occurred and the race r of the pedestrian. Second, we set the stop-specific threshold to T ∼ logit-normal(logit(t_rd), σ), where t_rd is the inferred threshold, and σ is a parameter we set to control the degree of heterogeneity in the thresholds. This corresponds to adding normally-distributed noise to the inferred threshold on the logit scale. Third, we assume a frisk occurs if and only if p ≥ T, and if a frisk is conducted, we assume a weapon is found with probability p. Finally, we use our modeling framework to infer new frisk thresholds t_rd' for the synthetic dataset.

There is a steady decrease in inferred thresholds as the noise increases (SI Figure 2). Importantly, however, there is a persistent gap between whites and minorities despite this decline, indicating that the lower thresholds for minorities are robust to heterogeneity in frisk thresholds. Note that σ = 1 is a substantial amount of noise: decreasing the frisk threshold of blacks by 1 on the logit scale corresponds to a 3-fold increase in the city-wide frisk rate of blacks.

In theory, the threshold test is robust to unobserved heterogeneity that affects the signal, since we effectively marginalize over any omitted variables when estimating the signal distribution. However, we must still worry about systematic variation in the thresholds that is correlated with race. For example, if officers apply a lower frisk threshold at night, and black individuals are disproportionately likely to be stopped at night, then blacks would, on average, experience a lower frisk threshold than whites even in the absence of discrimination. Fortunately, as a matter of policy only a limited number of factors may legitimately affect the frisk thresholds, and many—but not all—of these are recorded in the data. There are a multitude of hard-to-quantify factors (such as behavioral cues) that may affect the signal—but these should not affect the threshold.

We examine the robustness of our results to variation in thresholds across year, time-of-day, and age and gender of the stopped pedestrian.[Variation across location is explicitly captured by the model. Gender, like race, is generally not considered a valid criterion for altering the frisk threshold, though for completeness we still examine its effects on our conclusions.] To do so, we disaggregate our primary dataset by year (and, separately, by time-of-day, by age, and by gender), and then independently run the threshold test on each component (SI Figure 3). The inferred thresholds do indeed vary across the different subsets of the data.
In theory, the threshold test is robust to unobserved heterogeneity that affects the signal, since we effectively marginalize over any omitted variables when estimating the signal distribution. However, we must still worry about systematic variation in the thresholds that is correlated with race. For example, if officers apply a lower frisk threshold at night, and black individuals are disproportionately likely to be stopped at night, then blacks would, on average, experience a lower frisk threshold than whites even in the absence of discrimination. Fortunately, as a matter of policy only a limited number of factors may legitimately affect the frisk thresholds, and many—but not all—of these are recorded in the data. There are a multitude of hard-to-quantify factors (such as behavioral cues) that may affect the signal—but these should not affect the threshold. We examine the robustness of our results to variation in thresholds across year, time-of-day, and age and gender of the stopped pedestrian.[Variation across location is explicitly captured by the model. Gender, like race, is generally not considered a valid criterion for altering the frisk threshold, though for completeness we still examine its effects on our conclusions.] To do so, we disaggregate our primary dataset by year (and, separately, by time-of-day, by age, and by gender), and then independently run the threshold test on each component (SI Figure 3). The inferred thresholds do indeed vary across the different subsets of the data. However, in every case, the thresholds for frisking blacks and Hispanics are lower than the thresholds for frisking whites, corroborating our main results.

Finally, we conduct two placebo tests, where we rerun the threshold test with race replaced by day-of-week, and separately, with race replaced by month. The hope is that the threshold test accurately captures a lack of “discrimination” based on these factors. The model indeed finds that the threshold for frisking individuals is relatively stable by day-of-week, with largely overlapping credible intervals (SI Figure 4). We similarly find only small differences in the inferred monthly thresholds. Some variation is expected, as officers might legitimately apply slightly different frisk standards throughout the week or year.

§.§ Assessing bias in stop decisions

We now extend the threshold model to test for discrimination in an officer's decision to stop a pedestrian. In contrast to frisk decisions, we do not observe instances in which an officer decided not to carry out a stop. Inferring thresholds with such censored data is analogous to learning classifiers from only positive and unlabeled examples <cit.>. We assume officers are equally likely to encounter anyone in a precinct. Coupled with demographic data compiled by the U.S. Census Bureau, this assumption lets us estimate the racial distribution of individuals encountered by officers. Such estimates are imperfect, in part because residential populations differ from daytime populations <cit.>; however, our inferences are robust to violations of this assumption.

*Model description. When analyzing frisk decisions, the decision itself and the success of a frisk were modeled as random outcomes. For stops, we model as random the race of stopped individuals and whether a stop was successful (i.e., turned up a weapon). The likelihood of the observed outcomes can then be computed under any set of model parameters, which allows us to compute posterior parameter estimates. Let S_rd denote the number of stops of individuals of race r in precinct d, and let H_rd denote the number of such stops that yield a weapon. We denote by c_rd the fraction of people in a precinct of a given race. Letting R_d denote the race of an individual randomly encountered by the police in precinct d, we have Pr(R_d = r | stopped) ∝ Pr(stopped | R_d = r) Pr(R_d = r). Assuming officers are equally likely to encounter everyone in a precinct, Pr(R_d = r) = c_rd. We further assume that individuals of race r are stopped when their probability of carrying a weapon exceeds a race- and precinct-specific threshold t_rd; this assumption mirrors the one made for the frisk model. Setting θ_rd = Pr(R_d = r | stopped), we have θ_rd ∝ c_rd · Pr(stopped | R_d = r), where Pr(stopped | R_d = r) = (1 - ϕ_rd) Φ̅(t_rd; 0, 1) + ϕ_rd Φ̅(t_rd; δ_rd, 1). For each precinct d, the racial composition of stops is thus distributed as a multinomial: S⃗_d ∼ multinomial(θ⃗_d, N_d), where N_d denotes the total number of stops conducted in that precinct, θ⃗_d is a vector of race-specific stop probabilities θ_rd, and S⃗_d is the number of stops of each race group in that precinct. We model hits as in the frisk model. We put normal or half-normal priors on all the parameters: {ϕ_d, ϕ_r, λ_d, λ_r, t_rd}.[Following <cit.> and <cit.>, we put weakly informative priors on ϕ_r, λ_r, and t_rd; we put tighter priors on the location parameters ϕ_d and λ_d to restrict geographical heterogeneity and to accelerate convergence.]
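To make the likelihood concrete, a minimal sketch of the stop-probability computation for one precinct follows; it mirrors the displayed formula, with Φ̅ implemented as the normal survival function. All parameter names are illustrative rather than taken from the released code.

```python
import numpy as np
from scipy.stats import norm

def stop_probabilities(c_d, t_d, phi_d, delta_d):
    """Multinomial stop probabilities theta_rd for one precinct.

    c_d     : fraction of each race group in the precinct
    t_d     : race-specific stop thresholds t_rd
    phi_d   : mixture weights phi_rd (probability of carrying a weapon)
    delta_d : signal shift delta_rd for weapon carriers
    """
    # Pr(stopped | race r) under the two-component signal model;
    # norm.sf is the survival function Phi-bar
    p_stop = (1 - phi_d) * norm.sf(t_d, loc=0.0, scale=1.0) \
             + phi_d * norm.sf(t_d, loc=delta_d, scale=1.0)
    theta = c_d * p_stop
    return theta / theta.sum()
```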
*Results. We apply the above model to the subset of approximately 723,000 stops predicated on suspected criminal possession of a weapon, as indicated by officers. In these cases, the stated objective of the stop is discovery of a weapon, and so we consider a stop successful if a weapon was discovered <cit.>. We estimate the racial composition of precincts using data from the 2010 U.S. Census. In Table <ref> we compare the time to fit our stop model with discriminant distributions rather than beta distributions. The speedup from using discriminant distributions is dramatic: with beta distributions, the model requires more than 4 hours to fit; with discriminant distributions it takes under 4 minutes. The primary reason for the speedup is the reduced time per gradient evaluation, although the number of gradient evaluations is also reduced. As with frisk decisions, we find stop thresholds for blacks and Hispanics are consistently lower than for whites, suggestive of discrimination (Figure <ref>). Our results are in line with those from past statistical studies of New York City's stop-and-frisk practices based on benchmark <cit.> and outcome <cit.> analysis.

*Robustness checks. A key assumption of our stop model is that the racial composition of the residential population (as estimated by the U.S. Census) is similar to the racial composition of pedestrians officers encounter on the street. We test how sensitive our inferred thresholds are to this assumption by refitting the stop model with various estimates of the fraction of white individuals encountered in each precinct. Letting c_white,d denote the original Census estimate, we varied this number from c_white,d/2 to 2c_white,d. The inferred thresholds remain stable, with thresholds for blacks and Hispanics consistently lower than for whites: thresholds for whites varied from 5.7% to 6.1%, thresholds for blacks from 0.8% to 1.2%, and thresholds for Hispanics from 1.8% to 2.1%. This stability is in part due to the fact that altering assumptions about the base population does not change the observed hit rates, which are substantially higher for whites. We also ran the robustness checks outlined in <cit.>: posterior predictive checks, tests for threshold heterogeneity, and tests for omitted variable bias (SI Figures 5–7). In all cases, the results confirm our main findings. Standard placebo tests cannot be run in this setting because natural placebos (such as month) eliminate all heterogeneity across groups, breaking model identifiability. As with the frisk model, HMC inference with discriminant distributions yields better model fit than variational inference with beta distributions. Though variational inference fits more quickly (30 seconds vs. 208 seconds), with variational inference the RMSE of stop rates is four times larger (0.9% vs. 0.2%), and the RMSE of hit rates is twice as large (1.3% vs. 0.8%). (For comparison, the RMSE of the original beta model for stop rates is 0.2% and the RMSE of the hit rates is 0.7%.) These performance gaps illustrate the value of full Bayesian inference over approximate methods.

§ CONCLUSION

We introduced and analyzed discriminant distributions to accelerate threshold tests for discrimination. The CDF and conditional means of discriminant distributions reduce to simple expressions that are no more difficult to evaluate than the equivalent statistics for normal distributions.
Consequently, using discriminant distributions speeds up inference in the threshold test by more than 75-fold. It is now practical to use the threshold test to investigate bias in a wide variety of settings. Practitioners can quickly carry out analysis—including computationally expensive robustness checks—on low-cost hardware within minutes. Our test also scales to previously intractable datasets, such as the national traffic stop database of <cit.>. We also extended the threshold test to domains in which actions are only partially observed, allowing us to assess possible discrimination in an officer's decision to stop a pedestrian.

Tools for black box Bayesian inference are allowing inference for increasingly complicated models. As researchers embrace this complexity, there is opportunity to consider new distributions. Historical default distributions—often selected for convenient properties like conjugacy—may not be the best choices when using automatic inference. An early example of this was the Kumaraswamy distribution <cit.>, developed as an alternative to the beta distribution for its simpler CDF. Discriminant distributions may also offer computational speedups beyond the threshold test as automatic inference enjoys increasingly widespread use.

Code and acknowledgments: Code is available at https://github.com/5harad/fasttt. We thank Peng Ding, Avi Feller, Pang Wei Koh, and the reviewers for helpful comments, and the John S. and James L. Knight, Hertz, and NDSEG Foundations.
http://arxiv.org/abs/1702.08536v3
{ "authors": [ "Emma Pierson", "Sam Corbett-Davies", "Sharad Goel" ], "categories": [ "stat.ML", "cs.LG" ], "primary_category": "stat.ML", "published": "20170227211819", "title": "Fast Threshold Tests for Detecting Discrimination" }
Emilio Zappa, Miranda Holmes-Cerfon, Jonathan Goodman (Courant Institute of Mathematical Sciences, New York University, NY)

Monte Carlo on manifolds: sampling densities and integrating functions

We describe and analyze some Monte Carlo methods for manifolds in Euclidean space defined by equality and inequality constraints. First, we give an MCMC sampler for probability distributions defined by un-normalized densities on such manifolds. The sampler uses a specific orthogonal projection to the surface that requires only information about the tangent space to the manifold, obtainable from first derivatives of the constraint functions, hence avoiding the need for curvature information or second derivatives. Second, we use the sampler to develop a multi-stage algorithm to compute integrals over such manifolds. We provide single-run error estimates that avoid the need for multiple independent runs. Computational experiments on various test problems show that the algorithms and error estimates work in practice. The method is applied to compute the entropies of different sticky hard sphere systems. These predict the temperature or interaction energy at which loops of hard sticky spheres become preferable to chains.

§ INTRODUCTION

Many physical and engineering systems involve motion with equality constraints. For example, in physics and chemistry, bonds between atoms or colloidal particles may be modeled as having a fixed length <cit.>, while in robotics, joints or hinges connecting moving components may be written as constraints on distances or angles <cit.>. Constraints also arise in statistics on the set of parameters in parametric models <cit.>. In such problems the space of accessible configurations is lower-dimensional than the space of variables which describe the system, often forming a manifold embedded in the full configuration space. One may then be interested in sampling a probability distribution defined on the manifold, or calculating an integral over the manifold such as its volume.

This paper presents Monte Carlo methods for computing a d-dimensional integral over a d-dimensional connected manifold M embedded in a d_a-dimensional Euclidean space, with d_a ≥ d. The manifold is defined by sets of constraints. We first describe an MCMC sampler for manifolds defined by constraints in this way. Then we describe a multi-phase procedure that uses the sampler to estimate integrals. It should go without saying (but rarely does) that deterministic algorithms to compute integrals are impractical except in very low dimensions or for special problems (e.g. <cit.>).

Our MCMC sampler has much in common with other algorithms for constraint manifolds, including other samplers, optimizers, differential algebraic equation (DAE) solvers (see, e.g., <cit.>), etc. A move from a point x ∈ M starts with a move in the tangent space at x, which is v ∈ T_x. This is followed by a projection back to y ∈ M, which we write as y = x + v + w. The sampler simplifies if we require w ⊥ T_x, so the projection is perpendicular to the tangent space at the original point. This idea was first suggested in <cit.>[We learned of this work after we submitted the original version of this paper for publication.]. This is different from other natural choices, such as choosing y ∈ M to minimize the projection distance |y - (x+v)|. The choice of w ⊥ T_x makes the sampler easy to implement as it requires only first derivatives of the constraint surfaces.
Neither solving for y using Newton's method nor the Metropolis Hastings detailed balance condition requires any higher derivatives. Other surface sampling methods that we know of require second derivatives, such as the explicit parameterization method of <cit.>, Hamiltonian methods such as <cit.>, geodesic methods such as <cit.>, and discretized SDE methods such as <cit.>. It is common in our computational experiments that there is no w ⊥ T_x so that x+v+w ∈ M, or that one or more w values exist but the Newton solver fails to find one. Section <ref> explains how we preserve detailed balance even in these situations.

Our goal is to compute integrals of the form Z = ∫_M f(x) σ(dx), where σ(dx) is the d-dimensional surface area measure (Hausdorff measure) and f is a positive smooth function. Writing B ⊂ ℝ^{d_a} for the ball centered at a point x_0 ∈ M, we also estimate certain integrals of the form Z_B = ∫_{M∩B} f(x) σ(dx). For each B, we define a probability distribution on M ∩ B: ρ_B(dx) = (1/Z_B) f(x) σ(dx). The sampling algorithm outlined above, and described in more detail in Section <ref>, allows us to draw samples from ρ_B.

To calculate integrals as in (<ref>) we use a multi-phase strategy related to nested sampling and thermodynamic integration (e.g. <cit.>). Similar strategies were applied to volume estimation problems by the Lovasz school, see, e.g., <cit.>. Let B_0 ⊃ B_1 ⊃ ⋯ ⊃ B_k be a nested contracting family of balls with a common center. With an abuse of notation, we write Z_i for Z_{B_i}. We use MCMC sampling in M ∩ B_i to estimate the ratio R_i = Z_i/Z_{i+1}. We choose the last ball B_k small enough that we may estimate Z_k by direct Monte-Carlo integration. We choose B_0 large enough that M ⊂ B_0, so Z = Z_0. Our estimate of Z is Ẑ = Ẑ_k ∏_{i=0}^{k-1} R̂_i, where hatted quantities such as Ẑ are Monte Carlo estimates. Section <ref> describes the procedure in more detail.

Error estimates should be a part of any Monte Carlo computation. Section <ref> describes our procedure for estimating σ²_Ẑ = Var(Ẑ), a procedure adapted from <cit.>. If we can estimate σ²_R̂_i, and if these are small enough, then we can combine them to estimate σ²_Ẑ. We explore how σ²_Ẑ depends on parameters of the algorithm via several simplified model problems, in section <ref>.

We illustrate our methods on a number of examples. We apply the sampler to the surface of a torus and cone in 3D, as well as the special orthogonal group SO(n) for n = 11, which is a manifold of dimension 55 in an ambient space of dimension 121 (section <ref>). All of these examples have certain marginal distributions which are known analytically, so we verify the sampler is working correctly. We verify the integration algorithm is correct by calculating the surface areas of a torus and SO(n) for 2 ≤ n ≤ 7 (sections <ref>, <ref>). Finally, we apply our algorithm to study clusters of sticky spheres (section <ref>), a model that is studied partly to understand how colloidal particles self-assemble into various configurations <cit.>. A simple question in that direction concerns a linear chain of n spheres – whether it is more likely to be closed in a loop or open in a chain.
According to equilibrium statistical mechanics, the ratio of the probabilities of being open versus closed depends on a parameter (the “sticky parameter”) characterizing the binding energy and temperature, and the entropies, which are surface areas of the kind we are computing. We compute the entropies for open and closed chains of lengths 4–10. This data can be used to predict the relative probabilities of being in a loop or a chain, if the sticky parameter is known, but if it is not known our calculations can be used to measure it by comparing with data.

Notation and assumptions. Throughout the paper, we will denote by M a d-dimensional connected manifold embedded in an ambient space ℝ^{d_a}, which is defined by equality and inequality constraints. There are m equality constraints, q_i(x) = 0, i = 1, …, m, where the q_i are smooth functions q_i : ℝ^{d_a} → ℝ. There are l inequality constraints of the form h_j(x) > 0, for j = 1, …, l. The manifold M is M = { x ∈ ℝ^{d_a} : q_i(x) = 0, i = 1, …, m, h_j(x) > 0, j = 1, …, l }. If M has several connected components, then the sampler may sample only one component, or it may hop between several components. It is hard to distinguish these possibilities computationally. We let Q_x be the matrix whose columns are the gradients {∇ q_i(x)}_{i=1}^m. This makes Q_x the transpose of the Jacobian of the overall constraint function q : ℝ^{d_a} → ℝ^m. The entries of Q_x are (Q_x)_ij = ∂ q_j(x)/∂ x_i. We assume that Q_x has full rank m everywhere on M. By the implicit function theorem, this implies that the dimension of M is d = d_a - m and that the tangent space T_x ≡ T_xM at a point x ∈ M is well defined. In this case {∇ q_i(x)}_{i=1}^m form a basis of the orthogonal space T_x^⊥ ≡ T_xM^⊥. Note that M inherits the metric from the ambient space ℝ^{d_a} by restriction. The corresponding d-dimensional volume element is d-dimensional Hausdorff measure, which we denote by σ(dx).

§ SAMPLING ON A CONSTRAINT MANIFOLD

§.§ The algorithm

We describe the MCMC algorithm we use to sample a probability distribution of the form ρ(dx) = (1/Z) f(x) σ(dx). Here σ is the d-dimensional surface measure on M given by (<ref>). As usual, the algorithm requires repeated evaluation of f but does not use the unknown normalization constant Z. The algorithm is described in pseudocode in Figure <ref>. Our Metropolis style MCMC algorithm generates a sequence X_k ∈ M with the property that X_k ∼ ρ ⟹ X_{k+1} ∼ ρ. Once X_k = x is known, the algorithm makes a proposal y ∈ M. If the proposal fails (see below) or if it is rejected, then X_{k+1} = X_k. If the algorithm succeeds in generating a y, and if y is accepted, then X_{k+1} = y. If M is connected, compact, and smooth, then the algorithm is geometrically ergodic. See e.g. <cit.> for an efficient summary of the relevant MCMC theory.

The proposal process begins with a tangential move x → x + v with v ∈ T_x. We generate v by sampling a proposal density p(v|x) defined on T_x. All the numerical experiments reported here use an isotropic d-dimensional Gaussian with width s centered at x: p(v|x) = (1/((2π)^{d/2} s^d)) e^{-|v|²/(2s²)}. We generate v using an orthonormal basis for T_x, which is the orthogonal complement of the columns of the constraint gradient matrix Q_x, see (<ref>).
This orthonormal basis is found as the last d columns of the d_a × d_a matrix Q in the QR decomposition[Please be aware of the conflict of notation here. The d_a × d_a orthogonal Q of the QR decomposition is not the d_a × d gradient Q_x.] of Q_x, which we find using dense linear algebra.

Given x and v, the projection step looks for w ∈ T_x^⊥ with y = x + v + w ∈ M. See Figure <ref>(a) for an illustration. It does this using an m-component column vector a = (a_1, …, a_m)^t and w = ∑_{j=1}^m a_j ∇ q_j(x) = Q_x a. The unknown coefficients a_j are found by solving the nonlinear equations q_i(x + v + Q_x a) = 0, i = 1, …, m. This can be done using any nonlinear equation solver. In our code, we solve (<ref>) using simple Newton's method (no line search, regularization, etc.) and initial guess a = 0. The necessary Jacobian entries are J_ij = ∂_{a_j} q_i(x + v + Q_x a) = (∇ q_i(x + v + Q_x a))^t ∇ q_j(x). We iterate until the first time a convergence criterion is satisfied, |q(x + v + Q_x a)| ≤ ε, where ε is a small tolerance and |q| is some norm. In our implementation, we used the l_2 norm. When this criterion is satisfied, the projection is considered a success and y = x + v + w is the proposal. If the number of iterations reaches a maximum n_max before the convergence criterion is satisfied, then the projection phase is considered a failure. It is possible to change these details, but if you do you must also make the corresponding changes in the detailed balance check below.

There are other possible projections from x+v to M, each with advantages and disadvantages. It would seem natural, for example, to find y by computing argmin_{y ∈ M} |x + v - y|_2. This produces a projection w ∈ T_y^⊥, rather than our w ∈ T_x^⊥. It also has the advantage that such a y always exists and is almost surely unique. The optimization algorithm that computes (<ref>) may be more robust than our Newton-based nonlinear equation solver. The disadvantage of (<ref>) is that there are curvature effects in the relative surface area computation that are part of detailed balance. Computing these curvatures requires second derivatives of the constraint functions q_i. This is not an intrinsic limitation, but requires more computation and is not as straightforward to implement as our algorithm.

The proposal algorithm just described produces a random point y ∈ M. There are two cases where we must reject it immediately so the detailed balance relation may hold (which we verify in section <ref>). First, we must check whether an inequality constraint is violated, so h_i(y) ≤ 0 for some i. If so, then y is rejected. Second, we must check whether the reverse step is possible, i.e. whether it is possible to propose x starting from y. To make the reverse proposal, the algorithm would have to choose v' ∈ T_y so that x = y + v' + w' with w' ⊥ T_y. Figure <ref>(b) illustrates this. Since x and y are known, we may find v' and w' by the requirement that x - y = v' + w', v' ∈ T_y, w' ∈ T_y^⊥. This is always possible by projecting x - y onto T_y and T_y^⊥, which are found using the QR decomposition of Q_y. However, we must additionally determine whether the Newton solver would find x starting from y + v', a step that was neglected in earlier studies, e.g. <cit.>. Even though we know that x is a solution to the equations, we must still verify that the Newton solver produces x. With that aim, we run the Newton algorithm from initial guess y + v' and see whether the convergence criterion (<ref>) is satisfied within n_max iterations. If not, the proposal y is rejected.
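As an illustration, the tangent-space basis and the Newton projection might be implemented as in the following Python/NumPy sketch; here q returns the constraint values, grad_q returns the d_a × m matrix of constraint gradients (the same convention as Q_x), and eps and n_max are the solver tolerance and iteration cap introduced above. This is a sketch under those assumptions, not the authors' released code.

```python
import numpy as np

def tangent_basis(Qx):
    """Orthonormal basis of T_x: the last d columns of Q in the
    QR decomposition of the d_a x m gradient matrix Qx."""
    Q, _ = np.linalg.qr(Qx, mode='complete')
    m = Qx.shape[1]
    return Q[:, m:]                      # d_a x d matrix U_x

def project(q, grad_q, x, v, Qx, eps=1e-10, n_max=20):
    """Solve q(x + v + Qx a) = 0 for a by Newton's method.
    Returns (y, True) on success, (None, False) on failure."""
    a = np.zeros(Qx.shape[1])
    for _ in range(n_max):
        z = x + v + Qx @ a
        qz = q(z)
        if np.linalg.norm(qz) <= eps:    # convergence criterion
            return z, True
        J = grad_q(z).T @ Qx             # J_ij = grad q_i(z)^t grad q_j(x)
        a -= np.linalg.solve(J, qz)      # Newton update, no line search
    return None, False                   # reached n_max: projection fails
```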
Whenever y is rejected we set X_{k+1} = x. If we come to this point, we compute an acceptance probability a(y|x) using the Metropolis Hastings formula a(y|x) = min(1, f(y)p(v'|y) / (f(x)p(v|x))). The reader might expect that we should use the more complicated formula a(y|x) = min(1, f(y)p(v'|y)J(x|y) / (f(x)p(v|x)J(y|x))), where J(y|x) is the inverse of the determinant of the linearization of the projection v → y, formally |∂v/∂y|. This would account for how the area of a small patch of surface near x is distorted upon mapping it to y. However, it turns out that the Jacobian factors cancel: J(y|x) = J(x|y). This is evident in part b of Figure <ref>, where the angle between w and T_y is the same as the angle between w' and T_x. The proof in two dimensions is an exercise in Euclidean geometry. In more dimensions, J(y|x) and J(x|y) are products of cosines of principal angles between T_x and T_y, see <cit.>.

To prove (<ref>), let U_x and U_y be matrices whose columns are orthonormal bases for T_x and T_y respectively. Column i of U_x will be denoted u_x,i, and similarly for U_y. Consider a vector ξ ∈ T_x, which may be written ξ = ∑ a_i u_x,i. Let η ∈ T_y be the projection of ξ to T_y normal to T_x, and write η = ∑ b_j u_y,j. The orthogonality condition is u_x,i^t η = u_x,i^t ξ = a_i. This leads to the equations ∑_j u_x,i^t u_y,j b_j = a_i. These take the form Pb = a, or b = P^{-1}a, where P = U_x^t U_y. Since U_x and U_y are orthonormal bases, the volume element going from x → y is expanded by a factor J(y|x)^{-1} = det(P^{-1}) = det((U_x^t U_y)^{-1}). Similar reasoning shows that the projection from T_y to T_x perpendicular to T_y expands volumes by a factor of J(x|y)^{-1} = det((U_y^t U_x)^{-1}). These factors are equal because the determinants of a matrix and its transpose are the same, proving (<ref>).

We close this section with a general remark concerning the choice of parameters. A key part of the algorithm is the projection step, which requires using Newton's method or some other algorithm to solve a nonlinear system of equations. This algorithm is called twice at every step of the sampler, and so making it efficient is critical to the overall efficiency of the sampler. Profiling reveals that most of the computing time is spent in the projection part (see Figure <ref>). We found that setting the parameter n_max low, or otherwise terminating the solver rapidly, made a significant difference to the overall efficiency. Unlike most problems that require solving equations, we are not interested in guaranteeing that we can find a solution if one exists – we only need to find certain solutions rapidly; our reverse projection check and subsequent rejection ensures that we correct for any asymmetries in the solver and still satisfy detailed balance.

§.§ Verifying the balance relation

In this section we verify what may already be clear, that the algorithm described above satisfies the desired balance relation (<ref>). In the calculations below, we assume that the constraints hold exactly, an assumption that will not be true when they are solved for numerically, as we discuss in the remark at the end of the section. To start, let us express the probability distribution of the proposal point y as the product of a surface density and d-dimensional surface measure, plus a point mass: P_x(dy) = h(y|x) σ(dy) + ζ(x) δ_x(dy). Here δ_x(dy) is the unit point mass probability distribution at the point x, ζ(x) is the probability that the projection step failed, i.e. Newton's method failed to produce a y or it lay outside the boundary, and h(y|x) is the proposal density, i.e.
the probability density of successfully generating y ∈ M from x. The proposal density is the product of three factors: h(y|x) = (1 - 1_{F_x}(y)) · p(v|x) · J(y|x), where the set F_x ⊂ M consists of those points y ∈ M that cannot be reached using our Newton solver starting from x, and 1_{F_x}(y) denotes the characteristic function of the set F_x, which equals 1 if y ∈ F_x and zero if y ∉ F_x. Note that we must have ∫_M h(y|x) dσ(y) + ζ(x) = 1. If the acceptance probability is a(y|x), then the overall probability distribution of y takes the form R_x(dy) = a(y|x) h(y|x) σ(dy) + ξ(x) δ_x(dy). Here, ξ(x) is the overall probability of rejecting a move when starting at x, equal to the sum of the probability of not generating a proposal move, and the probability of successfully generating a proposal move that is subsequently rejected. Since 1 - a(y|x) is the probability of rejecting a proposal at y, the overall rejection probability is ξ(x) = ζ(x) + ∫_M (1 - a(y|x)) h(y|x) σ(dy).

We quickly verify that the necessary balance relation (<ref>) is satisfied if a(y|x) is chosen to satisfy the detailed balance formula f(x) a(y|x) h(y|x) = f(y) a(x|y) h(x|y). For this, suppose X_k has probability distribution on M with density g_k(x) with respect to surface area measure: X_k ∼ ρ_k(dx) = g_k(x) σ(dx). We check that if (<ref>) is satisfied, and g_k = f, then g_{k+1} = f. We do this in the usual way, which is to write the integral expression for g_{k+1} and then simplify the result using (<ref>). To start, g_{k+1}(x) = ∫_M g_k(y) a(x|y) h(x|y) dσ(y) + ξ(x) g_k(x). The integral on the right represents jumps to x from y ≠ x on M. The second term on the right represents proposals from x that were unsuccessful or rejected. If g_k = f, using the detailed balance relation (<ref>) this becomes g_{k+1}(x) = ∫_M f(x) a(y|x) h(y|x) dσ(y) + ξ(x) f(x). The integral on the right now represents the probability of accepting a proposal from x: P(accept | x) = ∫_M a(y|x) h(y|x) dσ(y). When combined with the second term, the right side of (<ref>) is equal to f(x), as we wished to show.

We use the standard Metropolis Hastings formula to enforce (<ref>): a(y|x) = min(1, f(y)h(x|y) / (f(x)h(y|x))). If we succeed in proposing y from x, then we know y ∉ F_x so we are able to evaluate h(y|x). However, it is still possible that x ∈ F_y, in which case h(x|y) = 0. This is why we must apply the Newton solver to the reverse move, to see whether it would succeed in proposing x from y. If not, we know h(x|y) = 0 and therefore a(y|x) = 0 and we must reject y.

Remark. We have assumed so far that the constraints hold exactly, which is not true when we solve for y numerically. Rather, the proposal move will lie within some tolerance ε, so we are sampling a “fattened” region M_ε = { y : |q_i(y)| ≤ ε, i = 1, …, m, h_j(y) > 0, j = 1, …, l }. It is well-known (e.g. <cit.>) that if the fattened region is uniformly sampled, the distribution near M will not be the uniform distribution, but rather will differ by an O(1) amount which does not vanish as ε → 0. We do not know the distribution by which our numerical solver produces a point in M_ε. However, since we observe the correct distributions on M in the examples below (for small enough ε), we infer that we are not sampling the fattened region uniformly, but rather in a way that is consistent with the surface measure on the manifold, in the limit when ε → 0. Moreover, it is practical to take ε to be extremely small, on the order of machine precision, because of the fast local convergence of Newton's method.
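Putting the pieces together, one full step of the sampler might be organized as in the sketch below, reusing the hypothetical tangent_basis and project helpers from the earlier sketch; f is the unnormalized density, h_ok checks the inequality constraints, and s is the proposal scale.

```python
import numpy as np

def mcmc_step(x, q, grad_q, f, h_ok, s, rng, eps=1e-10, n_max=20):
    """One Metropolis step of the constrained sampler; returns next state."""
    Qx = grad_q(x)
    Ux = tangent_basis(Qx)
    v = Ux @ (s * rng.standard_normal(Ux.shape[1]))   # tangential move
    y, ok = project(q, grad_q, x, v, Qx, eps, n_max)  # w lies in T_x^perp
    if not ok or not h_ok(y):
        return x                                      # failed or outside M
    # reverse move: decompose x - y into T_y and T_y^perp components
    Qy = grad_q(y)
    Uy = tangent_basis(Qy)
    v_rev = Uy @ (Uy.T @ (x - y))
    x_back, ok = project(q, grad_q, y, v_rev, Qy, eps, n_max)
    if not ok or not np.allclose(x_back, x, atol=1e-8):
        return x                                      # reverse check fails
    # Metropolis-Hastings ratio with Gaussian tangential proposals
    log_ratio = (np.log(f(y)) - np.log(f(x))
                 + (v @ v - v_rev @ v_rev) / (2 * s**2))
    if np.log(rng.random()) < min(0.0, log_ratio):
        return y
    return x
```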
§.§ Examples

In this section we illustrate the MCMC sampler with three examples: a torus, a cone, and the special orthogonal group SO(n).

§.§.§ Torus

Consider a torus 𝕋² embedded in ℝ³, implicitly defined by 𝕋² = { (x,y,z) ∈ ℝ³ : (R - √(x²+y²))² + z² - r² = 0 }, where R and r are real positive numbers, with R > r. Geometrically, 𝕋² is the set of points at distance r from the circle of radius R in the (x,y) plane centered at (0,0,0). An explicit parameterization of 𝕋² is given by 𝕋² = { ((R + r cos ϕ) cos θ, (R + r cos ϕ) sin θ, r sin ϕ) : θ, ϕ ∈ [0, 2π] }.

We ran N = 10^6 MCMC steps to sample uniform measure (f(x) = 1) on 𝕋² with toroidal radius R = 1, poloidal radius r = .5, and step size scale s = .5. Figure <ref> shows our empirical marginal distributions and the exact theoretical distributions. The empirical distributions of the toroidal angle θ and the poloidal angle ϕ are correct to within statistical sampling error. Around 6% of proposed moves were rejected because of failure in the reverse projection step. When we didn't implement this step, the marginals were not correct.
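For concreteness, the torus plugs into the sampler sketches above through its constraint function and gradient; a possible definition (hypothetical helper names, with h_ok always true since the torus has no inequality constraints) is:

```python
import numpy as np

R, r = 1.0, 0.5

def q(p):
    """Single torus constraint, returned as a length-1 vector."""
    x, y, z = p
    return np.array([(R - np.hypot(x, y))**2 + z**2 - r**2])

def grad_q(p):
    """d_a x m = 3 x 1 matrix whose column is the constraint gradient."""
    x, y, z = p
    rho = np.hypot(x, y)
    g = np.array([-2*(R - rho)*x/rho, -2*(R - rho)*y/rho, 2*z])
    return g.reshape(3, 1)
```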
§.§.§ Cone

We consider the circular right-angle cone 𝒞 ⊂ ℝ³ with vertex (0,0,0) given by 𝒞 = { (x,y,z) ∈ ℝ³ : z - √(x²+y²) = 0, x²+y² < 1, z > 0 }. We study this example in order to determine how our algorithm might behave near singular points of a hypersurface. We have written strict inequalities to be consistent with the general formalism (<ref>). In principle, the algorithm should do the same thing with non-strict inequalities, since the set of points with equality (e.g. z = 0) has probability zero. If we allowed z = 0, then 𝒞 would not be a differentiable manifold because of the singularity at the vertex. We suspected that the projection steps (see Figure <ref>) would be failure prone near the vertex because of high curvature, and that the auto-correlation times would be large or even unbounded. Nevertheless, Figure <ref> shows satisfactory results, at least for this example. We ran for N = 10^6 MCMC steps with step size parameter s = .9. The theoretical marginal densities for X, Y, and Z are easily found to be g_X(x) = (2/π)√(1-x²), g_Y(y) = (2/π)√(1-y²), g_Z(z) = 2z.

§.§.§ Special orthogonal group

We apply our surface sampler to the manifold SO(n), which is the group of n × n orthogonal matrices with determinant equal to 1. We chose SO(n) because these manifolds have high dimension, n(n-1)/2, and high co-dimension, n(n+1)/2 <cit.>. We view SO(n) as the set of n × n matrices, x ∈ ℝ^{n×n}, that satisfy the row ortho-normality constraints for k = 1, …, n and l > k: g_kk(x) = ∑_{m=1}^n x_km² = 1, g_kl(x) = ∑_{m=1}^n x_km x_lm = 0. This is a set of n(n+1)/2 equality constraints. The gradient matrix Q_x defined by (<ref>) may be seen to have full rank for all x ∈ SO(n). Any x satisfying these constraints has det(x) = ±1. The set with det(x) = 1 is connected. It is possible that our sampler would propose an x with det(x) = -1, but we reject such proposals. We again chose density f(x) = 1, which gives SO(n) a measure that is, up to a constant, the same as the natural Haar measure on SO(n).

One check of the correctness of the samples is the known large-n distribution of T = Tr(x), which converges to a standard normal as n → ∞ <cit.>. This is surprising in view of the fact that Tr(x) is the sum of n random numbers each of which has Var(x_kk) = 1. If the diagonal entries were independent with this variance, then σ(T) = n^{1/2} instead of the correct σ(T) = 1. This makes the correctness of the T distribution an interesting test of the correctness of the sampler. Figure <ref> presents results for n = 11 and N = 10^6 MCMC steps, and proposal length scale s = .28. The distribution of T seems correct but there are non-trivial auto-correlations. The acceptance probability was about 35%. Note that there are better ways to sample SO(n) than to use our sampler. For example, one can produce independent samples by choosing y ∈ ℝ^{n×n} with i.i.d. entries from N(0,1) and using the QR decomposition y = xR, where R is upper triangular and x is orthogonal.
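A sketch of this direct construction follows; the sign fix on the diagonal of R is the standard trick that makes the QR factorization unique, so that the output is Haar distributed on O(n), and flipping one column conditions on det = +1.

```python
import numpy as np

def haar_so_n(n, rng):
    """Draw x approximately Haar on SO(n) via QR of a Gaussian matrix."""
    y = rng.standard_normal((n, n))
    x, R = np.linalg.qr(y)
    x *= np.sign(np.diag(R))     # sign fix: columns scaled so x is Haar
    if np.linalg.det(x) < 0:
        x[:, 0] *= -1.0          # reflect one column: det becomes +1
    return x
```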
§ INTEGRATING OVER A MANIFOLD

§.§ Algorithm

In this section we describe an algorithm to calculate integrals over a manifold M of the form (<ref>). We assume that M given in (<ref>) is bounded. The strategy as outlined in the Introduction is to consider a sequence of d_a-dimensional balls B_0 ⊃ B_1 ⊃ … ⊃ B_k, centered at x_0 ∈ M with radii r_i, for i = 1, …, k, where B_0 ⊃ M entirely contains the manifold, and B_k is “small enough” in a way we describe momentarily. We assume that each M ∩ B_i is connected, an assumption that is critical for our algorithm to work. We define the collection of integrals (see (<ref>)) Z_i = ∫_{B_i ∩ M} f(x) dσ(x). The final estimate is Z ≈ Ẑ according to (<ref>). To estimate the ratio R_i = Z_i/Z_{i+1} we generate n_i points in B_i ∩ M with the probability distribution ρ_i as in (<ref>). This is done using the MCMC algorithm given in Section <ref> by including the extra inequality |x - x_0|² < r_i². An estimator for R_i is then given by R̂_i = n_i/N_{i,i+1}, where, for i ≤ j, we define N_{i,j} = # points generated in B_i ∩ M that also lie in B_j ∩ M. We could use additional elements N_{i,j} with j > i+1 to enhance the estimate of the ratios, but we leave exactly how to do this a question for future research. Note that N_{i,i} = n_i.

The integral over the smallest set M ∩ B_k is computed in a different way. If M had no equality constraints, so that the dimension of M were equal to the dimension of B_k, then we could choose B_k ⊂ M to lie entirely inside M and so the integral (for constant f) would be known analytically. Indeed, this idea was used in several studies that developed efficient algorithms to calculate the volume of a convex body <cit.>. With the equality constraints, in general there is no B_k with radius r_k > 0 for which the integral Z_k is known in closed form exactly. Instead, we choose r_k small enough that there is an easily computed one-to-one correspondence between points in M ∩ B_k and points in the tangent plane D_k = T_{x_0} ∩ B_k. This happens, for example, when the manifold M ∩ B_k is a graph above D_k. We use the coordinates in the flat disk D_k to integrate over M ∩ B_k, which is simply the surface integral as illustrated in Figure <ref>: Z_k = ∫_{M∩B_k} f(y) dσ(y) = ∫_{D_k} f(y(x)) J^{-1}(x) dx. This uses the following notation. The integral on the right is over the d-dimensional disk. For any x ∈ D_k, the projection to M perpendicular to T_{x_0} is called y(x) ∈ M. For small enough r_k, the projection is easily calculated using the projection algorithm of Figure <ref>. The Jacobian factor J(x) = det(U_{x_0}^t U_{y(x)}) is known from (<ref>), where as before U_x is a matrix whose columns are an orthonormal basis of T_x. The integral over D_k may be computed deterministically if d is small enough, but otherwise we estimate it by direct Monte Carlo integration. We choose n_k independent points x_i ∈ D_k, which is easy for a d-dimensional disk. We compute the corresponding projections and Jacobian weight factors, y_i = y(x_i) and J_i = J(x_i). Some of the y_i will be outside of M ∩ B_k, so we need an extra indicator function factor, 1_{B_k}(y_i), that is equal to 1 if y_i ∈ M ∩ B_k. The d-dimensional volume of D_k is known, so we have the direct Monte Carlo estimate Ẑ_k = (vol_d(D_k)/n_k) ∑_{i=1}^{n_k} 1_{B_k}(y_i) f(y_i) J_i^{-1}, where vol_d(D_k) = (π^{d/2}/Γ(d/2+1)) r_k^d. The estimators (<ref>) and (<ref>) are substituted in (<ref>) to give our overall estimate of the integral Z. There is one additional consideration which is important to make this work in practice. That is, the projection from D_k to M ∩ B_k must never fail. If it does, the estimate (<ref>) will not be convergent. Empirically we found the projection always worked when r_k was small enough, which proved to be a more limiting criterion than the criterion that M ∩ B_k be a graph above D_k.

§.§ Variance estimate

This section serves two purposes. One is to present the method we use to estimate the variance, σ²_Ẑ, of the estimator (<ref>). This is similar to the strategy of <cit.>, but adapted to this setting. This estimate is accurate only if the MCMC runs are long enough so that Ẑ_k and the R̂_i are estimated reasonably accurately. Our computational tests below show that the estimator can be useful in practice. The other purpose is to set up notation for some heuristics for choosing parameters, a question dealt with in more depth in Section <ref>.

We start with the simple estimator Ẑ_k in (<ref>) for the integral over the smallest ball. It is convenient to write the estimator as the sum of the exact quantity and a statistical error, and then to work with relative error rather than absolute error. We neglect bias, as we believe that error from bias is much smaller than statistical error in all our computations. In the present case, we write Ẑ_k = Z_k + σ_k ζ_k = Z_k(1 + ρ_k ζ_k), where ζ_k is a zero mean random variable with Var(ζ_k) = 1, σ_k is the absolute standard deviation of Ẑ_k, and ρ_k = σ_k/Z_k is the relative standard deviation. Since the samples in (<ref>) are independent, the standard deviation satisfies σ_k² = (vol_d²(D_k)/n_k) Var(G(y)), where G(y) = 1_{B_k}(y) f(y) J^{-1}(y). We estimate Var(G(y)) with the sample variance and obtain, for the absolute and relative variances, σ_k² ≈ (vol_d²(D_k)/n_k²) ∑_{i=1}^{n_k} (G(y_i) - Ḡ)², ρ_k² ≈ σ_k²/Ẑ_k². Here Ḡ = (1/n_k) ∑_{i=1}^{n_k} G(y_i) is the sample mean.

Now consider the estimator R̂_i in (<ref>) for the ith volume ratio. This involves the number count N_{i,i+1}. We start by approximating Var(N_{i,i+1}) and then incorporate this in the approximation for Var(R̂_i). The number count may be written N_{i,i+1} = ∑_{j=1}^{n_i} 1_{B_{i+1}}(X_j), where X_j ∈ M ∩ B_i is our sequence of samples. In the invariant probability distribution of the chain, the probability of X_j ∈ B_{i+1} is p_i = 𝔼[1_{B_{i+1}}(X_j)] = 1/R_i = Z_{i+1}/Z_i. Since 1_{B_{i+1}}(X_j) is a Bernoulli random variable, we have (in the invariant distribution) Var(1_{B_{i+1}}(X_j)) = p_i(1-p_i). We will call this the static variance. It alone is not enough to determine Var(N_{i,i+1}), since the steps of the Markov chain are correlated. To account for this correlation, we use the standard theory (see e.g. <cit.>) of error bars for MCMC estimators of the form (<ref>), which we now briefly review. Consider a general sum over a function of a general MCMC process, S = ∑_{j=1}^n F(X_j). The equilibrium lag t auto-covariance is (assuming X_j is in the invariant distribution and t ≥ 0) C_t = Cov[F(X_j), F(X_{j+t})]. This is extended to be a symmetric function of t using C_t = C_|t|.
It is estimated from the MCMC run using F̄ ≈ (1/n_i) ∑_{j=1}^{n_i} F(X_j), C_t ≈ (1/(n_i - t)) ∑_{j=1}^{n_i - t} (F(X_j) - F̄)(F(X_{j+t}) - F̄). The Einstein Kubo sum is D = ∑_{t=-∞}^∞ C_t. The auto-correlation time is τ = D/C_0 = 1 + (2/Var(F(X_j))) ∑_{t=1}^∞ C_t. The sum may be estimated from data using the self-consistent window approximation described in <cit.>. The static variance C_0 in (<ref>) may be estimated using (<ref>) rather than (<ref>). The point of all this is the large n variance approximation Var(S) ≈ nD = n C_0 τ. For our particular application, this specializes to Var(N_{i,i+1}) ≈ n_i p_i(1-p_i) τ_i, where τ_i is the correlation time for the indicator function 1_{B_{i+1}}(X_j), assuming points are generated in M ∩ B_i. It may be calculated using (<ref>). We use this to write N_{i,i+1} in terms of a random variable ξ_i with unit variance and mean zero as N_{i,i+1} ≈ n_i p_i (1 + √((1-p_i)τ_i/(n_i p_i)) ξ_i). Substituting this approximation into (<ref>) gives the approximation R̂_i ≈ (1/p_i)(1 - √((1-p_i)τ_i/(n_i p_i)) ξ_i), where we expanded the term 1/N_{i,i+1} assuming that n_i was large. Continuing, we substitute into (<ref>) to get Ẑ ≈ Z [(1 + ρ_k ζ_k) ∏_{i=0}^{k-1} (1 - √((1-p_i)τ_i/(n_i p_i)) ξ_i)]. If the relative errors are all small, then products of them should be smaller still. This suggests that we get a good approximation of the overall relative error keeping only terms linear in ζ_k and the ξ_i. This gives Ẑ ≈ Z [1 + ρ_k ζ_k + ∑_{i=0}^{k-1} √((1-p_i)τ_i/(n_i p_i)) ξ_i]. Finally, we assume that the random variables ζ_k and ξ_i are all independent. As with the earlier approximations, this becomes valid in the limit of large n_i. In that case, we get Var(Ẑ) = σ²_Ẑ ≈ Z² (ρ_k² + ∑_i (1-p_i)τ_i/(n_i p_i)). The one standard deviation error bar for Ẑ is Ẑ σ_r, where the relative standard deviation is σ_r ≈ √(ρ_k² + ∑_{i=0}^{k-1} (1-p_i)τ_i/(n_i p_i)). All the quantities on the right in (<ref>) are estimated from a single run of the sampler in each M ∩ B_i, with p_i estimated as 1/R̂_i (see (<ref>)), τ_i estimated from the sequence 1_{B_{i+1}}(X_j) using a self-consistent window approximation to (<ref>), and ρ_k estimated as in (<ref>).
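In code, the single-run error bar might be assembled as follows; a crude fixed-window estimate of τ_i stands in for the self-consistent window of <cit.>, and indicators[i] is assumed to hold the 0/1 sequence 1_{B_{i+1}}(X_j) recorded during the run in M ∩ B_i.

```python
import numpy as np

def corr_time(g, window=200):
    """Crude autocorrelation time of a 0/1 sequence, fixed-window version."""
    g = np.asarray(g, dtype=float) - np.mean(g)
    c0 = np.mean(g * g)
    tau = 1.0
    for t in range(1, min(window, len(g) - 1)):
        tau += 2.0 * np.mean(g[:-t] * g[t:]) / c0
    return max(tau, 1.0)

def relative_error(rho_k, indicators):
    """sigma_r from one run per ball: rho_k plus the ratio-term sum."""
    var_rel = rho_k**2
    for g in indicators:
        n_i = len(g)
        p_i = np.mean(g)          # estimate of p_i = 1/R_i
        tau_i = corr_time(g)
        var_rel += (1.0 - p_i) * tau_i / (n_i * p_i)
    return np.sqrt(var_rel)
```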
§.§ Algorithm parameters

This section describes how we choose the base point, x_0, and the sequence of ball radii r_0, r_1, …, r_k. A guiding motivation is the desire to minimize the variance of Ẑ, estimated by (<ref>), for a given amount of work. Our parameter choices are heuristic and certainly may be improved. But they lead to an overall algorithm that is “correct” in that it gives convergent estimates of the integral Z. Our aim is to choose r_0 as small as possible, and r_k as large as possible, to have a smaller overall ratio to estimate. Additionally, we expect the estimate of the final integral Z_k to be significantly faster and more accurate than each ratio estimate R̂_i, since Z_k is estimated by independently chosen points, so we wish r_k to be large.

With this aim in mind, the first step is to learn about M through sampling. We generate n points x_i ∈ M by sampling the surface measure ρ(x) = (1/Z) f(x) dσ(x), using n MCMC steps of the surface sampling algorithm of Section <ref>. Then we choose x_0 from among these samples. Usually we would like it to be near the center of M, to reduce r_0 or increase r_k. Unless otherwise noted, for our implementation we choose it far from an inequality constraint boundary using x_0 = argmax_{x_i = x_1, …, x_n} min_j h_j(x_i). The h_j are the functions that determine the boundary of M (cf. (<ref>)). This choice may allow the smallest ball, B_k, to be large, or make it so that most points of B_k project to points that satisfy the inequality constraints. This is heuristic in that there is no quantitative relation between min_j h_j(x_i) and the distance (Euclidean or geodesic or whatever) from x_i to the boundary of M, if M has a boundary.

This is not the only possibility for choosing x_0. One could also choose it randomly from the set of sampled points. Another possibility would be to center x_0 by minimizing the “radius” of M about x_0, which is the maximum distance from x_0 to another x ∈ M. We could do this by setting x_0 = argmin_{x_i} max_j |x_i - x_j|. Having a smaller radius may mean that we need fewer balls B_j in the integration algorithm.

Once x_0 is fixed, the radius r_0 of the biggest ball B_0 can be found using the sample points: r_0 = max{ |x_0 - x_i| : i = 1, …, n }. Next consider the minimum radius r_k. We need this to be small enough that (a) B_k ∩ M is single-valued over T_{x_0}, and (b) the projection from D_k → M ∩ B_k never fails. We test (b) first because it is the limiting factor. We choose a candidate radius r̃, sample n ≈ 10^5 points uniformly from D_k, and project to M ∩ B_k. If any single point fails, we shrink r̃ by a factor of 2 and test again. When we have a radius r̃ where all of the 10^5 projections succeed, our error in estimating (<ref>) should be less than about 0.001%. Then, we proceed to test (a) approximately as follows. We sample n points x_1, …, x_n uniformly from B_r̃(x_0) ∩ M, where B_r̃(x_0) is the ball of radius r̃ centered at x_0. We then consider every pair (x_i, x_j) and check the angle of the vector v_ij = x_i - x_j with the orthogonal complement T_{x_0}^⊥. Specifically, we construct an orthogonal projection matrix onto the normal space T_{x_0}^⊥, for example as P = U_{x_0}^⊥ (U_{x_0}^⊥)^T, where the columns of U_{x_0}^⊥ form an orthonormal basis of T_{x_0}^⊥. Then, if there exists a pair for which |v - Pv| < ε, where ε is a tolerance parameter, we shrink r̃ and repeat the test. In practice, we shrink r̃ by a constant factor each time (we used a factor C = 2). We expect this test to detect regions where B_r̃(x_0) ∩ M is multiple-valued as long as it is sampled densely enough. Of course, if the overlapping regions are small, we may miss them, but then these small regions will not contribute much to the integral Z_k. An alternative test (which we have not implemented) would be to sample B_r̃(x_0) ∩ M and check the sign of the determinant of the projection to T_{x_0}, namely sign(det(U_{x_0}^t U_y)) where y ∈ B_r̃(x_0) ∩ M. If B_r̃(x_0) ∩ M is multiple-valued, we expect there to be open sets where this determinant is negative, and therefore we would find these sets by sampling densely enough.

Having chosen x_0, r_0, and r_k, it remains to choose the intermediate ball radii r_i. We use a fixed ν > 1 and take the radii so the balls have fixed d-dimensional volume ratios (note d is typically different from the ball's actual dimension d_a), as (r_i/r_{i+1})^d = ν. Such fixed-ν strategies are often used for volume computation <cit.>. Given (<ref>), the radii are r_i = r_0 ν^{-i/d}. Since we have already chosen the smallest radius, this gives a relationship between ν and k as ν = (r_0/r_k)^{d/k}. Note that we choose r_k, the smallest radius, before we choose k, the number of stages.
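The resulting radius schedule is one line of arithmetic; a sketch:

```python
import numpy as np

def ball_radii(r0, rk, k, d):
    """Radii r_0 > r_1 > ... > r_k with fixed d-dimensional volume ratio
    nu = (r0/rk)**(d/k) between consecutive balls."""
    nu = (r0 / rk) ** (d / k)
    return r0 * nu ** (-np.arange(k + 1) / d), nu
```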
It is not clear what would be the best choice for ν. If ν is too large then the ratios R_i (see (<ref>)) will not be estimated accurately – indeed they will be large, so p_i in (<ref>) will be small, increasing the relative standard deviation of the estimator. If ν is too small, then k will be large and the errors at each stage will accumulate. In addition, the error at each single stage will be large if we fix the total number of MCMC points, since there will be fewer points per stage. Clearly, these issues are important for the quality of the algorithm in practice, but they are issues that are seriously under-examined in the literature. We return to them in section <ref>.

§ INTEGRATION: EXAMPLES

This section presents some experiments with the integration algorithm of Section <ref>. There are three sets of computations. We first compute the surface area of the torus 𝕋² (section <ref>). This computation is cheap enough that we can repeat it many times and verify that the error bar formula (<ref>) is reasonably accurate. We then calculate the volume of SO(n) (section <ref>), to verify the algorithm can work on surfaces with high dimension and high co-dimension. Finally, we apply the algorithm to study clusters of sticky spheres (section <ref>), and make new predictions that could be tested experimentally.

§.§ Surface of a torus

Consider the torus of Subsection <ref> with R = 1 and r = .5. The exact area is Z = 4π²rR ≈ 19.7392. The geometry of the manifold is simple enough that we can choose the parameters of the algorithm analytically. We take the initial point to be x_0 = (R+r, 0, 0). The smallest ball B_0 centered at x_0 such that 𝕋² ⊂ B_0 has radius r_0 = 2(R+r). We take the radius of the smallest ball to be r_k = r, for which the projection T_{x_0} ∩ B_k → 𝕋² ∩ B_k is clearly single valued. For each computation we generate n points at each stage, with the total number of points fixed to be n_t = kn = 10^5. We calculated the volume for each number of stages k = 1, …, 20. We repeated each calculation 50 times independently. This allowed us to get a direct estimate of the variance of the volume estimator Ẑ. Figure <ref> shows that the standard deviation estimated from a single run using (<ref>) is in reasonable agreement with the direct estimate from 50 independent runs.[We do not strive for very precise agreement. An early mentor of one of us (JG), Malvin Kalos, has the wise advice: “Don't put error bars on error bars.”] It also shows that for this problem k = 2 seems to be optimal. It does not help to use a lot of stages, but it is better to use more than one.

§.§ The special orthogonal group

The volume of SO(n) as described in Subsection <ref> is (see, e.g., <cit.>) vol(SO(n)) = 2^{n(n-1)/4} ∏_{i=1}^{n-1} vol(S^i), where vol(S^i) = 2π^{(i+1)/2}/Γ((i+1)/2) is the volume of the i-dimensional sphere. This formula may be understood as follows. An element of SO(n) is a positively oriented orthonormal basis of ℝ^n. The first basis element may be any element of the unit sphere in ℝ^n, which is S^{n-1}. The next basis element may be any element orthogonal to the first basis element, so it is in S^{n-2}, and so on. The prefactor 2^{n(n-1)/4} arises from the embedding of SO(n) into ℝ^{n×n}. For example, SO(2) is a circle that would seem to have length 2π, but in ℝ^4 it is the set (cos θ, sin θ, -sin θ, cos θ), 0 ≤ θ < 2π. This curve has length 2√2 π. The difference is the ratio 2^{2·1/4} = √2.
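As a ground-truth check for the experiments below, the closed-form volume is easy to evaluate; for example:

```python
import math

def vol_sphere(i):
    """Surface volume of the unit i-sphere S^i embedded in R^(i+1)."""
    return 2 * math.pi ** ((i + 1) / 2) / math.gamma((i + 1) / 2)

def vol_SO(n):
    """vol(SO(n)) = 2^(n(n-1)/4) * prod_{i=1}^{n-1} vol(S^i)."""
    v = 2.0 ** (n * (n - 1) / 4)
    for i in range(1, n):
        v *= vol_sphere(i)
    return v
```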
We applied the integration algorithm to compute the volume of SO(n), for 2 ≤ n ≤ 7. For each n, we ran the algorithm N = 50 independent times and computed the mean of the estimates, V̄, and the sample standard deviation s_V. We used a total of n_t = 10^5 points and k = 4 steps for each n. Since the true value V_t is known, we can compute the relative standard deviation s̄_V and the relative error ϵ_r via s̄_V = s_V/(√N V_t), ϵ_r = |V̄ - V_t|/V_t. We give the results in Table <ref>, which shows our relative errors are small, and close to the estimated standard deviations. We notice that the errors increase as the dimension gets higher.

§.§ Sticky-sphere clusters

Next we consider a system of particles interacting with a so-called “sticky” potential, which can be thought of as a delta-function when the surfaces of two particles are in contact. This is a model for particles whose range of interaction is very short compared to the particles' diameters, as is the case for many kinds of colloidal particles. The probabilities of finding the system in different states in equilibrium can be calculated using statistical mechanics <cit.> and we briefly summarize the relevant ideas.

Consider a system of N unit spheres in ℝ³, represented as a vector x = (x_1, …, x_N)^t ∈ ℝ^{3N}, where x_i ∈ ℝ³ is the center of the i-th sphere. We suppose there are m pairs of spheres in contact, E = {(i_1,j_1), …, (i_m,j_m)}. For each pair in contact there is a constraint |x_i - x_j|² - 1 = 0, (i,j) ∈ E, and for each pair not in contact there is an inequality which says the spheres cannot overlap: |x_i - x_j| > 1, (i,j) ∉ E. To remove the translational degrees of freedom of the system, we fix the center of mass of the cluster at the origin. We do this by imposing three extra constraints, which together are written in vector form as ∑_{i=1}^N x_i = 0. We define M to be the set of points x satisfying the constraints (<ref>), (<ref>), and the inequalities (<ref>). It is possible that M has singular points where the equality constraints (<ref>) are degenerate (have gradients that are not linearly independent), but we ignore this possibility (note that the cone in section <ref> shows our algorithm may work even near singular points). Therefore M is a manifold of dimension d = 3N - m - 3 embedded in an ambient space of dimension d_a = 3N, with surface area element dσ. If M is not connected, our algorithm samples a single connected component.

The equilibrium probability to find the spheres in a particular cluster defined by contacts E is proportional to the partition function Z. In the sticky-sphere limit, and where the cluster is in a container much larger than the cluster itself, the partition function is calculated (up to constants that are the same for all clusters with the same number of spheres) as <cit.> Z = κ^m z, where z = ∫_M f(x) dσ(x). Here κ is the “sticky parameter”, a single number depending on both the strength of the interaction between the spheres and the temperature. This parameter is something that must be known or measured for a given system. The factor z is called the “geometrical partition function” because it does not depend on the nature of the interaction between the spheres, nor the temperature. The function f(x) is given by f(x) = ∏_{i=1}^{3N-m-3} λ_i(x)^{-1/2}, which is the product of the non-zero eigenvalues λ_i of R^tR, where R is half the Jacobian of the set of contact constraints (<ref>). In the mathematical theory of rigidity, the matrix R is called the rigidity matrix <cit.>.
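A sketch of evaluating f at a configuration, given the contact list E, follows the description above: R is assembled from the contact gradients, and eigenvalues below a small cutoff are treated as zero. The function and cutoff names are illustrative.

```python
import numpy as np

def sticky_f(x, E, tol=1e-9):
    """f(x) = product of lambda_i^(-1/2) over non-zero eigenvalues of R^t R,
    with R the rigidity matrix: row (i,j) is half of grad(|x_i-x_j|^2 - 1)."""
    N = x.shape[0]                 # x is an N x 3 array of sphere centers
    R = np.zeros((len(E), 3 * N))
    for row, (i, j) in enumerate(E):
        d = x[i] - x[j]            # half the gradient of |x_i-x_j|^2 - 1
        R[row, 3*i:3*i+3] = d
        R[row, 3*j:3*j+3] = -d
    lam = np.linalg.eigvalsh(R.T @ R)
    lam = lam[lam > tol]           # keep the non-zero eigenvalues
    return np.prod(lam ** -0.5)
```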
As a first check that our algorithm is working we calculate geometrical partition functions z that are already known, by considering the 13 different clusters of 6 spheres with m = 10 contacts. All manifolds formed in this way are five-dimensional. For each manifold we compute z as in (<ref>) using n_t = 10^8 points and k = 4 steps. For each estimate we compute its error bar using formula (<ref>). We compare these values with the values z_t calculated in <cit.>, which were obtained by parameterizing and triangulating each manifold and calculating the integrals using finite elements.[In <cit.>, the calculations were done on the quotient manifold obtained by modding out the rotational degrees of freedom. We originally tried working in a quotient space here, by fixing six coordinates; this worked for these low-dimensional manifolds but we had problems with higher-dimensional ones, particularly chains. We presume this was because of the singularity in the quotient space. To be consistent, all calculations in this paper are performed on the original manifold M, which contains rotational degrees of freedom. Therefore, we compare our results to the value z_t = 8π² z', where z' is the value reported in <cit.>. The constant 8π² is the volume of the rotational space SO(3).] Table <ref> shows the two calculations agree.

Next, we consider a cluster of N identical spheres which may form either a chain (C) or a loop (L), as in Figure <ref>. We are interested in the conditions under which chains are preferable to loops. Let M_{C,N}, M_{L,N} denote the manifolds corresponding to a chain, loop respectively; these have dimensions 2N-2, 2N-3. For example, for N = 10 spheres the dimensions are 18 and 17. Integrals over these manifolds are not easily calculated by deterministic parameterizations. For each N = 4, …, 10, and for each i ∈ {L, C}, we compute the volume V and the geometrical partition function z of the manifold M_{i,N}, using n_t = 10^8 points and k = 4 steps each. Because we don't have other results to compare to, we check that our calculations are correct by comparing our calculated value of the average of f, namely h̄ = z/V, to that estimated by our sampling algorithm, h̃. The values of h̃ and h̄ agree within statistical error as shown in Table <ref>.

With the values of z_C, z_L we can make experimental predictions. Indeed, the ratio of the probabilities of finding a system of N sticky spheres in a chain versus a loop in equilibrium is P(chain)/P(loop) = κ^{-1} (n_C z_C)/(n_L z_L), where n_C, n_L are the number of distinct copies of each manifold that one obtains by considering all permutations of spheres. If the spheres are indistinguishable then one can calculate[The number of distinct copies is N!/o, where o is the symmetry number of the cluster, i.e. the number of permutations that are equivalent to an overall rotation. For the chain, the symmetry group is C_2, the cyclic group of order 2, and therefore the symmetry number is 2. For the loop, the symmetry group is D_N, the symmetry group of a regular N-gon, of order 2N, so the symmetry number is 2N. It is also possible that the spheres are distinguishable, for example if the bonds forming the chain are unbreakable and only one bond can break and reform, in which case we would have n_C = n_L = 1. Note that for three-dimensional clusters, the number of distinct clusters of N spheres should usually be computed as 2N!/o, where o is the number of permutations that are equivalent to a rotation of the cluster (e.g. <cit.>).
In our case, however, the clusters can be embedded into a two-dimensional plane, so a reflection of the cluster is equivalent to a rotation, hence we do not need to include the factor 2.] that n_C = N!/2 and n_L = (N-1)!/2. From the calculated ratios (n_C z_C)/(n_L z_L), shown in Table <ref>, and the value of the sticky parameter, which must be measured or inferred in some other way, we obtain a prediction for the fraction of chains versus loops, which could be compared to experimental data. Of course, the spheres could assemble into other configurations, whose entropies we could in principle calculate, but (<ref>) still holds without calculating these additional entropies because the normalization factors cancel.

If we don't know the sticky parameter κ for the system, then (<ref>) can actually be used to measure it for a given system. Indeed, an estimate for κ using a fixed value of N would be κ ≈ κ̂ = (# of loops of length N / # of chains of length N) · (n_C z_C)/(n_L z_L). Such a measurement could be useful, for example, in systems of emulsion droplets whose surfaces have been coated by strands of sticky DNA <cit.>. The DNA attaches the droplets together, like velcro, but the sticky patch can move around on the surface of a droplet, so a chain of attached droplets can sample many configurations. Understanding the strength of the droplet interaction from first principles, or even measuring it directly, is a challenge because it arises from the microscopic interactions of many weak DNA bonds as well as small elastic deformation of the droplet. But, (<ref>) could be used to measure the stickiness simply by counting clusters in a microscope.

Finally, we note that the ratio (n_C z_C)/(n_L z_L) is related to the loss of entropy as a chain forms a bond to become a loop. Even though a loop has more bonds, hence lower energy, it still might be less likely to be observed in equilibrium because it has lower entropy. The ratio (n_C z_C)/(n_L z_L) is exactly the value of stickiness κ above which a loop becomes more likely than a chain, which corresponds to a threshold temperature or binding energy. Note that as N increases the ratio does too, so we expect fewer loops for longer polymers than for shorter ones.

§ MINIMIZING THE ERROR BARS: SOME TOY MODELS TO UNDERSTAND PARAMETER DEPENDENCE

Equation (<ref>) estimates the relative standard deviation of the estimator Ẑ for Z. We would like to choose parameters in the algorithm to minimize this relative error. Previous groups studying similar algorithms in Euclidean spaces have investigated how the overall number of operations scales with the dimension <cit.>, but to our knowledge there is little analysis of how such an algorithm behaves for fixed dimension, and how to optimize its performance with fixed computing power. To this aim we construct a hierarchy of toy models that suggest how the error formula may depend on parameters. We focus exclusively on the sum ∑_i (1-p_i)τ_i/(n_i p_i) on the right of (<ref>), which estimates the relative variance of a product of ratio estimates.
This is because the estimator Ẑ_k for ρ_k usually has a much smaller error, since it is estimated by straightforward Monte Carlo integration with independently chosen points. Our models may apply to other multi-phase MCMC algorithms. We consider an algorithm which chooses the same number of points n at each stage and has a fixed number of total points n_t = kn, where k is the number of stages. Additionally, we fix the largest and smallest radii r_0, r_k, and take the d-dimensional volume ratios between balls to be the same constant ν_i = ν at each stage. The ratio ν and the number of stages k are related by ν^k = (r_0/r_k)^d = C, where we will write C to denote a constant (not always the same) which does not depend on ν or k. We now make the approximation that R_i ∝ ν. This would be exactly true if M were a hyperplane of dimension d. The relative variance we wish to minimize is g(ν) = C (ν-1)/log(ν) ∑_{i=0}^{k-1} τ_i. Recall that τ_i is the correlation time for the indicator function 1_{B_{i+1}}(X_j), assuming points are generated in M∩B_i (see (<ref>), (<ref>)). We now consider several models for the correlation times τ_i that will allow us to investigate the behavior of g(ν). Ultimately, we would like to find the value ν^* which minimizes this function.

§.§ Constant τ.

Suppose the τ_i = τ are equal and independent of k. This would happen, for example, if we were able to make independent samples and achieve τ_i = 1. It also could happen if there were a scale-invariant MCMC algorithm such as the hit-and-run method or affine-invariant samplers. It seems unlikely that such samplers exist for sampling embedded manifolds. In that case, we have g(ν) = C (ν-1)/log(ν)^2. Optimizing this over ν gives ν^* ≈ 4.9, independent of dimension. This result is interesting in that it suggests a more aggressive strategy than the previously-suggested choice ν = 2 <cit.>, i.e. one should shrink the balls by a larger amount and hence have a smaller number of stages.

§.§ Diffusive scaling

As a second scenario, we model τ_i as τ_i ∝ r_i^2. This is a “diffusive scaling” because it represents the time scale of a simple diffusion or random walk in a ball of radius r_i (see (<ref>) in section <ref> below). From (<ref>), we have τ_i ∝ ν^{-2i/d}. Hence (<ref>) becomes, after some algebra and using (<ref>): g(ν) ∝ g_d(ν) ≡ (ν-1)/(log(ν)(1-ν^{-2/d})). Plots of g_d(ν) are given in Figure <ref>. Examples of minimizers ν_d^* are {2.6, 2.7, 3.1, 3.4, 3.6, 3.7, 4.1, 4.5, 4.7} for d = {1, 2, 3, 4, 5, 6, 10, 20, 50} respectively. Some analysis shows that ν^*_d → ν^* as d → ∞, where ν^* is the optimal ratio assuming constant τ above. The same analysis shows that g_d(ν^*_d) ≈ d(ν^*-1)/(2 log^2(ν^*)) for large d, so the relative variance in this model is proportional to the manifold's intrinsic dimension d. The figure shows that g_d(ν) increases more slowly for ν > ν^* than for ν < ν^*. This also suggests taking larger values of ν in computations.
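Both minimizations are easy to reproduce numerically, since the constants dropped in the proportionalities do not move the location of the minimum; a minimal sketch using SciPy, for comparison with the minimizers quoted above:

import numpy as np
from scipy.optimize import minimize_scalar

def g_const(nu):
    # Constant-tau model: (nu - 1) / log(nu)^2, up to a constant factor.
    return (nu - 1.0) / np.log(nu) ** 2

def g_diff(nu, d):
    # Diffusive-scaling model g_d(nu) = (nu - 1) / (log(nu) * (1 - nu**(-2/d))).
    return (nu - 1.0) / (np.log(nu) * (1.0 - nu ** (-2.0 / d)))

print(minimize_scalar(g_const, bounds=(1.01, 50.0), method="bounded").x)   # ~ 4.9
for d in (2, 3, 4, 5, 10, 20, 50):
    nu_d = minimize_scalar(lambda nu: g_diff(nu, d),
                           bounds=(1.01, 50.0), method="bounded").x
    print(d, round(nu_d, 2))   # approaches ~4.9 as d grows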
We found it surprising that the model with diffusive scaling predicted smaller ratios than the one with constant scaling, since it has a correlation time which decreases as the balls shrink, so one might prefer to shrink them rapidly. Some insight into why comes from thinking about the marginal cost of adding an additional ball, given we already have k' balls. For the first model, this marginal cost is constant no matter what k' is, and equal to the cost of just the first ball. For the second model, the marginal cost can decrease as k' increases, since the contribution to the cost from the jth ball is a decreasing function of j. So, with diffusive scaling, each additional ball does not increase the error by as much. Another way of seeing this is to plot the correlation times as a function of ball number, and note that the first and last balls have fixed correlation times. The total error is roughly the area under this plot. But, the area under an exponential function connecting (0, Cr_0^d) to (k', Cr_k^d) increases less, when k' → k'+1, than the area under a constant function connecting (0, C') to (k', C') (here C, C' are constants).

§.§ A Brownian motion model in balls.

As a third model, we approximate the MCMC sequence X_0, X_1, … with a continuous-time Brownian motion, and simplify the geometry to two concentric balls in a d-dimensional plane. This allows us to find an analytic formula for τ_i. While this is a natural model to consider, we will describe later why it fails to capture the behavior of almost any system of interest. Nevertheless, the reasons behind its failure help build intuition into high-dimensional geometry, so we feel it useful to describe this mathematical exercise. What follows is an exact derivation of the correlation times in this model. Readers interested in only the result may skip to equation (<ref>) and the subsequent discussion. Let X_t be a d-dimensional Brownian motion on a d-dimensional ball B_0 of radius R_0 centered at the origin, with reflecting boundary conditions at |x| = R_0. We consider the stochastic process F(X_t) = 1_{B_1}(X_t), where B_1 is a ball with radius R_1 < R_0, also centered at the origin. Our goal is to derive an analytic formula for the correlation time of the process, defined for a continuous-time process to be (compare with (<ref>)) <cit.> τ = 2 ∫_0^{+∞} C(t)/C(0) dt. This is expressed in terms of the stationary covariance function of F(X_t): C(t) = 𝔼[(1_{B_1}(X_0) - F)(1_{B_1}(X_t) - F)], F = 𝔼[1_{B_1}(X)]. The expectation is calculated assuming that X_0 is drawn from the stationary distribution, which for a Brownian motion is the uniform distribution. We know that C(0) = Var(F(X_t)) = F(1-F) (as for a Bernoulli random variable), so we focus on computing the integral in (<ref>). Notice that F = (R_1/R_0)^d. We start by writing (<ref>) in terms of the probability density p(t,x,y) for X_t = x given X_0 = y. If X_0 is in the stationary distribution, then the joint density of X_0 = y and X_t = x is Z_0^{-1} p(t,x,y), where Z_0 = vol(B_0). Using this expression to write (<ref>) as an integral over dx, dy, integrating over time, and rearranging the integrals gives ∫_0^∞ C(t) dt = Z_0^{-1} ∫_{B_0}∫_{B_0} (1_{B_1}(x) - F)(1_{B_1}(y) - F) ∫_0^∞ p(t,x,y) dt dy dx. To calculate the time integral we must evaluate p̄(x,y) = ∫_0^{+∞} p(t,x,y) dt. Notice that p(t,x,y) satisfies the forward Kolmogorov equation with boundary and initial conditions (see, e.g., <cit.>): p_t = ½ Δ_x p (x ∈ B_0), ∇p · n̂ = 0 (x ∈ ∂B_0), p(0,x,y) = δ(x-y), where n̂ is the vector normal to the boundary. Therefore p̄(x,y) satisfies the equation Δ_x p̄(x,y) = ∫_0^{+∞} Δ_x p(t,x,y) dt = 2 ∫_0^{+∞} p_t(t,x,y) dt = -2δ(x-y). Therefore p̄(x,y) is proportional to the Green's function for the Laplace equation in a ball. We now wish to evaluate the integral over the y coordinate u(x) = ∫_{B_0} f(y) p̄(x,y) dy, where we have defined f(x) = 1_{B_1}(x) - F to be the centered function whose covariance we wish to evaluate. One can check that u(x) solves the Poisson equation Δ_x u = -2f for x ∈ B_0, ∇u · n̂ = 0 for x ∈ ∂B_0.
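Before solving analytically, this boundary value problem can be sanity-checked numerically: for radial f the equation reduces to (r^{d-1} u_r)_r = -2 f r^{d-1}, so a single quadrature gives u_r directly. A minimal sketch, assuming d = 3, R_0 = 1 and R_1 = 0.5 for illustration:

import numpy as np

d, R0, R1 = 3, 1.0, 0.5
F = (R1 / R0) ** d                        # mean of the indicator, so f is centered
r = np.linspace(1e-6, R0, 20001)
f = (r <= R1).astype(float) - F           # f(r) = 1_{B_1}(r) - F

# (r^{d-1} u_r)_r = -2 f r^{d-1}  =>  u_r(r) = -(2 / r^{d-1}) * int_0^r f(s) s^{d-1} ds
integrand = f * r ** (d - 1)
cumint = np.concatenate(([0.0], np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r))))
u_r = -2.0 * cumint / r ** (d - 1)
print(u_r[-1])   # ~ 0: the Neumann condition u_r(R_0) = 0 holds because f has zero mean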
We now solve this equation analytically for u(x). For d = 1, (<ref>) reduces to an ODE, whose solution is

u(x) = (R_1/R_0 - 1) x^2 for |x| < R_1, and u(x) = (R_1/R_0) x^2 - 2R_1|x| + R_1^2 for R_1 < |x| < R_0.

For d ≥ 2, we solve (<ref>) by looking at radial solutions u = u(|x|) = u(r). In polar coordinates (<ref>) is u_rr + (d-1)/r u_r = -2f for r ∈ (0, R_0), u_r(R_0) = 0. We solve this equation using the elementary radial solutions Φ(r) of the Laplace equation (Δu = 0) in the disk,[These are given by Φ(r) = C log(r) + K for d = 2, and Φ(r) = C/(2-d) r^{2-d} + K for d ≥ 3, where C and K are constants <cit.>.] and requiring the solution to satisfy lim_{r→0} u(r) = 0 and to be continuous at r = R_1. For d = 2 we have

u(r) = (R_1^2 - R_0^2)/(2R_0^2) r^2 for 0 ≤ r ≤ R_1, and u(r) = R_1^2 (log R_1 - log r) + (R_1^2/2)(r^2/R_0^2 - 1) for R_1 < r ≤ R_0,

and for d ≥ 3 we have

u(r) = (R_1^d - R_0^d)/(d R_0^d) r^2 for 0 ≤ r ≤ R_1, and u(r) = -2R_1^d/(d(2-d)) r^{2-d} + R_1^d/(d R_0^d) r^2 + R_1^2/(2-d) for R_1 ≤ r ≤ R_0.

We can now evaluate the integral over the x coordinate to obtain ∫_0^{+∞} C(t) dt = Z_0^{-1} ∫_{B_0} f(x) u(x) dx = |S^{d-1}| Z_0^{-1} ∫_0^{R_0} f(r) u(r) r^{d-1} dr, where in the last step we have written the integral using polar coordinates. Here |S^{d-1}| is the surface area of the boundary of the unit d-dimensional sphere. Substituting our analytic expression for u(x), and using (<ref>) in (<ref>), we obtain τ = R_0^2 h_d(ν). Here ν = (R_0/R_1)^d is the volume ratio as in (<ref>), and the functions h_d(ν) are given by

h_1(ν) = (4/3)(ν-1)/ν^2,
h_2(ν) = (ν^{-1} - 1 + log(ν))/(ν-1),
h_d(ν) = 4/(d^2-4) · ((d-2)ν^{-1} - d ν^{2/d-1} + 2)/(ν^{2/d}(1-ν^{-1})) for d ≥ 3.

The functions h_d(ν) are proportional to the correlation time and are plotted in Figure <ref> for d = 1, 2, 3, 4, 5. For fixed ν and d ≥ 3 the correlation time decreases with the dimension, and for all d we have lim_{ν→1} h_d(ν) = lim_{ν→+∞} h_d(ν) = 0. We found all of these observations to be surprising and will discuss them further in section <ref>. We use (<ref>) to approximate the correlation time τ_i in (<ref>) as τ_i = r_i^2 h_d(ν). After some algebra, we obtain g(ν) ∝ l_d(ν) ≡ (ν-1) h_d(ν)/(log(ν)(1-ν^{-2/d})). One can verify from (<ref>), (<ref>) that l_d(ν) = g_d(ν) h_d(ν), so this model's only difference with that of section <ref> is that it additionally models how τ_i depends on ν. Plots of the function l_d(ν) for dimension up to d = 5 are given in Figure <ref>. The function l_1(ν) is strictly decreasing, while l_d(ν) is strictly increasing for d ≥ 2. It follows that the ratio ν^* that minimizes (<ref>) is ν^* = 1 for d ≥ 2.
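For reproducing Figures <ref> and <ref>, and for checking the limits just stated, a direct transcription of h_d and l_d:

import numpy as np

def h(nu, d):
    """Correlation-time factor h_d(nu) of (<ref>)."""
    if d == 1:
        return (4.0 / 3.0) * (nu - 1.0) / nu**2
    if d == 2:
        return (1.0 / nu - 1.0 + np.log(nu)) / (nu - 1.0)
    return (4.0 / (d**2 - 4.0)) * ((d - 2.0) / nu - d * nu ** (2.0 / d - 1.0) + 2.0) / (
        nu ** (2.0 / d) * (1.0 - 1.0 / nu))

def l(nu, d):
    """l_d(nu) = g_d(nu) * h_d(nu), the relative-variance model of this section."""
    return (nu - 1.0) / (np.log(nu) * (1.0 - nu ** (-2.0 / d))) * h(nu, d)

for d in (1, 2, 3, 4, 5):
    print(d, h(1.0001, d), h(1e6, d))   # both endpoints small: h_d -> 0 as nu -> 1 and nu -> infinity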
§.§ Comparing the toy models with data

Our first two models for the correlation time predicted an error that was minimized at an intermediate value ν^*, equal to about ν^* ≈ 5 for the model with constant τ, and slightly smaller for the model with diffusive scaling, though approaching it as d → ∞. The third was totally different: it predicted taking as small a ν as possible. What does the empirical evidence suggest? Figure <ref> plots the relative standard deviation of Ẑ obtained empirically from averaging over several simulations, as a function of log(ν), for several different manifolds of varying dimensions and co-dimensions. For this figure we calculated the volume using k = 1, …, 40 and computed ν using (<ref>), assuming fixed r_0, r_k. All curves have the same basic shape, which is qualitatively the same as that predicted by the first two models but not the third. What went wrong with the third model? The only difference it has with the diffusive scaling model is an additional assumption for how the correlation time depends on ν. This correlation time was small for large ν, something to be expected since a Brownian motion should leave B_1 very quickly when B_1 is much smaller than B_0. But, importantly, the correlation time was also small for small ν; small enough to make the overall relative variance decrease monotonically as ν decreases. The fact that the correlation time approaches 0 as ν → 1 implies that a Brownian motion leaves B_1 very quickly even when B_1 is nearly as big as B_0. We believe this is an artifact of our highly restrictive geometry, and almost any problem of interest will not have this property. In high dimensions, virtually all of the volume of an object will be concentrated near the extremal parts of its boundary, a fact that can be checked for example by calculating the volume of a spherical shell of width ϵ ≪ 1. A sphere's volume therefore is concentrated near the outermost spherical shell, so when a Brownian motion enters B_1 it will with overwhelming probability stay near the boundary, where it is easy for it to leave B_1. Therefore, we expect the Brownian motion to quickly enter and leave B_1 repeatedly, like a Brownian motion jumping back and forth between the boundaries of a thin strip. The correlation time therefore should be short. When B_1∩M, B_0∩M are not spherical, then some parts of their boundaries will lie further from the origin than others, and it is in these “corners” that we expect a Brownian motion to get stuck for longer times. In convex geometry such corners are known to control the computational complexity of Monte-Carlo algorithms <cit.>, an effect that is simply not possible to capture with Brownian motion in spheres. We expect that inhomogeneous boundary densities are the norm rather than the exception in any problem of scientific interest, and therefore do not believe the predictions of this third model.

§ CONCLUSIONS

We introduced algorithms to sample and integrate over a connected component of a manifold M, defined by equality and inequality constraints in Euclidean space. Our sampler is a random walk on M that does not require an explicit parameterization of M and uses only the first derivatives of the equality constraint functions. This is simpler than other samplers for manifolds that use second derivatives. Our integration algorithm was adapted from the multi-phase Monte Carlo methods for computing high-dimensional volumes, based on intersecting the manifold with a decreasing sequence of balls. We discussed several toy models aimed at understanding how the integration errors depend on the factor by which the balls are shrunk at each step. Our models and data suggested taking larger factors than those suggested in the literature. We also estimated these errors by computing results analytically for a Brownian motion in balls, but the results of these calculations disagree with computational experiments with the sampler. We discussed the geometric intuition behind this failure. We tested the algorithms on several manifolds of different dimensions whose distributions or volumes are known analytically, and then we used our methods to compute the entropies of various clusters of hard sticky spheres. These calculations make specific predictions that could be tested experimentally, for example using colloids interacting via sticky DNA strands. We expect our methods to apply to other systems of objects that can be effectively modeled as “sticky,” i.e.
interacting with a short-range potential <cit.>. For example, one could calculate the entropies of self-assembling polyhedra, a system that has been realized experimentally in the hope that it will lead to efficient drug-delivery methods <cit.>; one could calculate the entire free energy landscape of small clusters (building on results in <cit.>); or one could more accurately calculate the phase diagrams of colloidal crystals <cit.> by computing entropic contributions to the free energy. While our methods have worked well on the systems considered in this paper, we expect that applying them to certain larger systems will require additional techniques and ideas. For example, the manifold corresponding to the floppy degrees of freedom of a body-centered cubic crystal is thought to have a shape like a bicycle wheel, with long thin spokes leading out of a small round hub near the center of the crystal <cit.>. Sampling efficiently from a manifold with such different length scales will be a challenge, though one may draw on techniques developed to sample the basins of attraction of jammed sphere packings, for which the same issues arise <cit.>. A different, though possibly related, problem is to sample from manifolds whose constraint functions are not linearly independent. Such manifolds arise for example in clusters of identical spheres <cit.> or origami-based models <cit.>, and could be important for understanding how crystals nucleate. There are many possible improvements to the Monte Carlo methods described here. For the sampling algorithm, one improvement would be to use gradient information of the density function f to create a more effective proposal distribution. It may be possible to find a manifold version of the exact discrete Langevin methods described, for example, in <cit.> and <cit.>. An interesting extension to the sampler would be to allow it to jump between manifolds of different dimensions. With such a sampler, one could calculate the ratio of entropies in (<ref>) directly, without first calculating the volumes. Our integration algorithm could also be enhanced in several ways. It could be extended to the case when the intermediate manifolds M∩B_i are not connected, for example by using umbrella sampling techniques. It could be more adaptive: the radii R_i could be chosen more systematically using variance estimators, as is done in thermodynamic integration <cit.>. We could use additional ratios such as N_{i,i+2} (see (<ref>)) to improve the ratio estimates. We could also let n_i vary with ball radius so the errors at each ratio are more equally distributed; according to the diffusive scaling model of section <ref>, we should use more points to estimate the larger ratios. Finally, it might be possible to change the probability distribution using smoother “steps” when breaking up the volume into a product of ratios, as in <cit.> and thermodynamic integration <cit.>, though how to adapt this strategy to manifolds is not immediately clear.

§.§ Acknowledgements

E. Z. and M. H.-C. acknowledge support from DOE grant DE-SC0012296. The authors would additionally like to thank Gabriel Stoltz and Tony Lelievre for interesting discussions that helped to improve this article.
http://arxiv.org/abs/1702.08446v2
{ "authors": [ "Emilio Zappa", "Miranda Holmes-Cerfon", "Jonathan Goodman" ], "categories": [ "math.NA", "cond-mat.stat-mech", "stat.CO" ], "primary_category": "math.NA", "published": "20170226175002", "title": "Monte Carlo on manifolds: sampling densities and integrating functions" }
In a general decay chain A→ B_1B_2→ C_1C_2…, we prove that the angular correlation function I(θ_1,θ_2,ϕ_+) in the decay of B_{1,2} is independent of the polarization of the mother particle A at production. This guarantees that we can use these angular distributions to determine the spin-parity nature of A without knowing its production details. As an example, we investigate the decay of a potential doubly-charged boson H^±± going to a same-sign τ lepton pair.

A theorem about two-body decay and its application for a doubly-charged boson H^±± going to τ^±τ^±
Li-Gang Xia
Department of Physics, Tsinghua University, Beijing 100084, People's Republic of China
December 30, 2023

§ INTRODUCTION

After the discovery of the higgs boson h(125) <cit.>, we are more and more interested in searching for high-mass particles, such as doubly-charged higgs bosons <cit.>, denoted by H^±±. Once we observe any unknown particle, it is crucial to determine its spin-parity (J^P) nature to discriminate between different theoretical models. A good means is to study the angular distributions in a decay chain where the unknown particle is involved <cit.>. For the Standard Model (SM) higgs, its spin-parity nature can be probed in the decay modes h(125) → W^+W^-/ZZ/τ^+τ^- <cit.>. The validity of this method relies on the fact that the correlation of the decay planes of W/Z/τ does not depend upon the polarization of h(125) at production. This is proved in a general case in this paper. As an example, we also investigate the decay H^±± → τ^±τ^±, where the spin-statistics relation provides more interesting constraints as the final state consists of two identical fermions.

§ PROOF OF THE THEOREM

Let us consider a general decay chain A→ B_1B_2 with B_1→ C_1X_1 and B_2 → C_2X_2, where B_1 and B_2 can be different particles and C_1X_1 and C_2X_2 can be different decay modes even if B_1 and B_2 are identical particles. Here we prove a theorem, which states that the angular correlation function I(θ_1,θ_2,ϕ_+) (defined in Eq. <ref>) in the decay of the daughter particles B_{1,2} is independent of the polarization of the mother particle A. Let ϕ_+ denote the angle between the two decay planes B_i→ C_iX_i (i=1,2). Therefore, we can measure the ϕ_+ distribution to determine the spin-parity nature of the mother particle A without knowing its production details.[After finishing this work, I was informed that the same statement had been verified in Ref. <cit.> in the case that B_{1,2} are spin-1 particles and C_{1,2} and X_{1,2} are spin-½ particles. I also admit that it is of no difficulty to generalize it to any allowed spin values for B, C and X as shown in this work.] Before calculating the amplitude, we introduce the definition of the coordinate system used to describe the decay chain, as illustrated in Fig. <ref>. For the decay A→ B_1B_2, we take the flight direction of A as the +z axis (if it is at rest, we take its spin direction as the +z direction), denoted by ẑ(A). θ and ϕ are the polar angle and azimuthal angle of B_1 in the center-of-mass (c.m.) frame of A. For the decay B_1→ C_1X_1, we take the flight direction of B_1 in the c.m. frame of A as the +z axis, denoted by ẑ(B_1), and the direction of ẑ(A)×ẑ(B_1) as the +y axis, denoted by ŷ(B_1). The +x axis in this decay system is then defined as ŷ(B_1)×ẑ(B_1). θ_1 and ϕ_1 are the polar angle and azimuthal angle of C_1 in the c.m. frame of B_1. The same set of definitions holds for the decay B_2→ C_2X_2. ϕ_+ is defined in Eq. <ref>.
It represents the angle between the two decay planes of B_i→ C_iX_i (i=1,2). Here ϕ_1, ϕ_2 and ϕ_+ are constrained to the range [0, 2π):

ϕ_+ ≡ ϕ_1 + ϕ_2 if ϕ_1 + ϕ_2 < 2π, and ϕ_+ ≡ ϕ_1 + ϕ_2 - 2π if ϕ_1 + ϕ_2 ≥ 2π.

According to the helicity formalism developed by Jacob and Wick <cit.>, the amplitude is

𝒜 = ∑_{λ_1,λ_2} F^J_{λ_1λ_2} D^{J*}_{M,λ_1-λ_2}(Ω) × G^{j_1}_{ρ_1σ_1} D^{j_1*}_{λ_1,ρ_1-σ_1}(Ω_1) × G^{j_2}_{ρ_2σ_2} D^{j_2*}_{λ_2,ρ_2-σ_2}(Ω_2).

Here the spin of A, B_1 and B_2 is J, j_1 and j_2, respectively. M is the third spin-component of A. The indices λ_{1,2}, ρ_{1,2} and σ_{1,2} denote the helicity of B_{1,2}, C_{1,2} and X_{1,2}, respectively. D^J_{mn}(Ω) ≡ D^J_{mn}(ϕ, θ, 0) = e^{-imϕ} d^J_{mn}(θ), and D^J_{mn} (d^J_{mn}) is the Wigner D (d) function. F^J_{λ_1λ_2} is the helicity amplitude for A→ B_1B_2 and is defined as F^J_{λ_1λ_2} ≡ ⟨JM;λ_1,λ_2|ℳ|JM⟩, with ℳ being the transition matrix derived from the S matrix. It is worthwhile to note that F^J_{λ_1λ_2} does not rely on M because ℳ is rotation-invariant. Similarly, G^{j_i}_{ρ_iσ_i} is the helicity amplitude for B_i→ C_iX_i (i=1,2). Taking the absolute square of 𝒜 and summing over all possible initial and final states, the differential cross section can be written as

dσ/(dΩ dΩ_1 dΩ_2) ∝ ∑_{M,λ_1,λ_1',λ_2,λ_2'} F^J_{λ_1λ_2} F^{J*}_{λ_1'λ_2'} e^{i((λ_1-λ_1')ϕ_1+(λ_2-λ_2')ϕ_2)} × d^J_{M,λ_1-λ_2}(θ) d^J_{M,λ_1'-λ_2'}(θ) f^{j_1,j_2}_{λ_1λ_1';λ_2λ_2'}(θ_1,θ_2),

with f^{j_1,j_2}_{λ_1λ_1';λ_2λ_2'}(θ_1,θ_2) ≡ ∑_{ρ_1,σ_1,ρ_2,σ_2} |G^{j_1}_{ρ_1σ_1}|^2 |G^{j_2}_{ρ_2σ_2}|^2 d^{j_1}_{λ_1,ρ_1-σ_1}(θ_1) d^{j_1}_{λ_1',ρ_1-σ_1}(θ_1) × d^{j_2}_{λ_2,ρ_2-σ_2}(θ_2) d^{j_2}_{λ_2',ρ_2-σ_2}(θ_2).

Here the summation on M is over the polarization states of A at production. If we do not know the detailed production information, the summation cannot be performed. Defining δλ^{(')} ≡ λ_1^{(')} - λ_2^{(')}, the exponential term in Eq. <ref> is equivalent to e^{i[(λ_1-λ_1')ϕ_+ - (δλ-δλ')ϕ_2]}. Performing the integration on ϕ_2 and using the definition of ϕ_+, we have (keeping only the terms related to ϕ_2)

∫_0^{2π} dϕ_2 e^{i((λ_1-λ_1')ϕ_1+(λ_2-λ_2')ϕ_2)} = ∫_0^{ϕ_+} dϕ_2 e^{i[(λ_1-λ_1')ϕ_+ - (δλ-δλ')ϕ_2]} + ∫_{ϕ_+}^{2π} dϕ_2 e^{i[(λ_1-λ_1')(ϕ_++2π) - (δλ-δλ')ϕ_2]}.

Noting that (λ_1-λ_1'), δλ and δλ' are integers, the integration gives the requirement δλ = δλ'. Then the differential cross section in terms of λ_1^{(')}, δλ^{(')} and ϕ_+ is

∑_{λ_1,λ_1',δλ} F^J_{λ_1,λ_1-δλ} F^{J*}_{λ_1',λ_1'-δλ} e^{i(λ_1-λ_1')ϕ_+} × ∑_M |d^J_{M,δλ}(θ)|^2 f^{j_1,j_2}_{λ_1λ_1';λ_1-δλ,λ_1'-δλ}(θ_1,θ_2).

According to the orthogonality relations of the Wigner D functions, we obtain ∫ |d^J_{mn}(θ)|^2 dcosθ = 2/(2J+1), which is independent of the indices m, n. Using this property, we find that the integration over θ of the terms related to M in Eq. <ref> only provides a constant factor ∑_M 2/(2J+1), which is irrelevant to the normalized angular distributions in the B_{1,2} decays. So we finalize the proof of this theorem in Eq. <ref>:

I(θ_1,θ_2,ϕ_+) ≡ (1/σ) dσ/(dcosθ_1 dcosθ_2 dϕ_+) ∝ ∑_{λ_1,λ_1',δλ} F^J_{λ_1,λ_1-δλ} F^{J*}_{λ_1',λ_1'-δλ} × e^{i(λ_1-λ_1')ϕ_+} f^{j_1,j_2}_{λ_1λ_1';λ_1-δλ,λ_1'-δλ}(θ_1,θ_2).

Experimentally, we are interested in the ϕ_+ distribution, which can be used to measure the spin-parity nature of A. We integrate out θ_1 and θ_2 and rewrite F^J_{mn} ≡ R^J_{mn} e^{iφ^J_{mn}}, where R^J_{mn} and φ^J_{mn} are real. The ϕ_+ distribution turns out to be

dσ/(σ dϕ_+) ∝ ∑_{λ_1,δλ} (R^J_{λ_1,λ_1-δλ})^2 F^{j_1,j_2}_{λ_1λ_1;λ_1-δλ,λ_1-δλ} + ∑_{λ_1≠λ_1'} ∑_{δλ} R^J_{λ_1,λ_1-δλ} R^J_{λ_1',λ_1'-δλ} F^{j_1,j_2}_{λ_1λ_1';λ_1-δλ,λ_1'-δλ} × cos[(λ_1-λ_1')ϕ_+ + (φ^J_{λ_1,λ_1-δλ} - φ^J_{λ_1',λ_1'-δλ})],

with F^{j_1,j_2}_{λ_1λ_1';λ_2,λ_2'} ≡ ∫ f^{j_1,j_2}_{λ_1λ_1';λ_2,λ_2'}(θ_1,θ_2) dcosθ_1 dcosθ_2. Here the second term in Eq. <ref> is obtained using the fact that the summation is invariant under the exchange λ_1 ↔ λ_1'.
If the parity is conserved in the decay A→ B_1B_2 (namely, 𝒫ℳ𝒫^{-1} = ℳ, with 𝒫 being the parity operator), we have R^J_{mn} = P_A P_{B_1} P_{B_2} (-1)^{J-j_1-j_2} R^J_{-m,-n}, φ^J_{mn} = φ^J_{-m,-n}, where P_{A/B_1/B_2} is the parity of A/B_1/B_2 and the factor -1 is absorbed in R^J_{mn} (namely, we require 0 ≤ φ^J_{mn} < π). Noting that the second summation in Eq. <ref> is invariant under the index exchange (λ_1,λ_1',δλ) ↔ (-λ_1,-λ_1',-δλ), we thus have ∑_{λ_1≠λ_1'}∑_{δλ} ⋯ = ½ ∑_{λ_1≠λ_1'}∑_{δλ} ⋯ + ½ ∑_{-λ_1≠-λ_1'}∑_{-δλ} ⋯. Using the symmetry relation in Eq. <ref>, this summation turns out to be

½ ∑_{λ_1≠λ_1'}∑_{δλ} R^J_{λ_1,λ_1-δλ} R^J_{λ_1',λ_1'-δλ} × { F^{j_1,j_2}_{λ_1λ_1';λ_1-δλ,λ_1'-δλ} cos[(λ_1-λ_1')ϕ_+ + (φ^J_{λ_1,λ_1-δλ} - φ^J_{λ_1',λ_1'-δλ})] + F^{j_1,j_2}_{-λ_1,-λ_1';-λ_1+δλ,-λ_1'+δλ} cos[(λ_1-λ_1')ϕ_+ - (φ^J_{λ_1,λ_1-δλ} - φ^J_{λ_1',λ_1'-δλ})] }.

Focusing on the expressions of Eq. <ref> and Eq. <ref>, we are able to show that F^{j_1,j_2}_{λ_1λ_1';λ_1-δλ,λ_1'-δλ} = F^{j_1,j_2}_{-λ_1,-λ_1';-λ_1+δλ,-λ_1'+δλ}, using the following property of the Wigner d function: d^j_{mn}(π-θ) = (-1)^{j-n} d^j_{-m,n}(θ). With Eq. <ref> and Eq. <ref>, Eq. <ref> can be simplified as

dσ/(σ dϕ_+) ∝ ∑_{λ_1,δλ} (R^J_{λ_1,λ_1-δλ})^2 F^{j_1j_2}_{λ_1λ_1;λ_1-δλ,λ_1-δλ} + ∑_{λ_1≠λ_1'}∑_{δλ} R^J_{λ_1,λ_1-δλ} R^J_{λ_1',λ_1'-δλ} F^{j_1j_2}_{λ_1λ_1';λ_1-δλ,λ_1'-δλ} × cos(φ^J_{λ_1,λ_1-δλ} - φ^J_{λ_1',λ_1'-δλ}) cos[(λ_1-λ_1')ϕ_+].

This expression is the Fourier series of a 2π-periodic even function. Comparing Eq. <ref> and Eq. <ref>, we can see that the terms which are odd with respect to ϕ_+ are forbidden by parity conservation in the decay A→ B_1B_2. Now we consider the special case that B_1 and B_2 are identical particles and B_{1,2} decay to the same final state; for example, we will study the doubly-charged boson decay H^{++} → τ^+τ^+ → π^+π^+ν̄_τν̄_τ. For two identical particles, the state with spin J and third component M is |JM;λ_1λ_2⟩_S = |JM;λ_1λ_2⟩ + (-1)^J |JM;λ_2λ_1⟩, which satisfies the spin-statistics relation. Here the normalization factor is omitted. The helicity amplitude F^J_{λ_1λ_2} = _S⟨JM;λ_1λ_2|ℳ|JM⟩ has the symmetry F^J_{λ_1λ_2} = (-1)^J F^J_{λ_2λ_1}. This symmetry relation will further constrain the helicity states, namely the indices λ_1, λ_1' and δλ in the summations in Eqs. <ref>, <ref> and <ref>.

§ STUDY OF H^{++} → τ^+τ^+ → π^+π^+ν̄_τν̄_τ

Ref. <cit.> is an example of the application of this theorem. It studies the decay Z' → ZZ → l^+l^-l^+l^-, where B_{1,2} are identical bosons. Here we consider the decay chain H^{++} → τ^+τ^+ → π^+π^+ν̄_τν̄_τ. For two identical spin-½ fermions, we write down all states explicitly. The helicity index λ = +½ (-½) is denoted by R (L).

|JM;LL⟩_S = (1+(-1)^J)|JM;LL⟩, 𝒫|JM;LL⟩_S = -|JM;RR⟩_S;
|JM;RR⟩_S = (1+(-1)^J)|JM;RR⟩, 𝒫|JM;RR⟩_S = -|JM;LL⟩_S;
|JM;LR⟩_S = |JM;LR⟩ + (-1)^J|JM;RL⟩, 𝒫|JM;LR⟩_S = -|JM;LR⟩_S.

The third state is already a parity eigenstate. The first two states can be combined to have a definite parity: (1+(-1)^J)(|JM;LL⟩ ± |JM;RR⟩), P = ∓1. In addition, angular momentum conservation requires |λ_1 - λ_2| ≤ J. Now we can give the selection rules, which are summarized in Table <ref>. We can see that the states with odd spin and even parity are forbidden. For comparison, the selection rules for a neutral particle decaying to a spin-½ fermion anti-fermion pair are summarized in Table <ref>.
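These selection rules can be stated compactly in code; a small enumeration, assuming the sign conventions above (P = ∓1 for |LL⟩ ± |RR⟩, parity eigenvalue -1 for the |LR⟩ states):

def allowed_states(J, P):
    """Symmetrized helicity states of two identical spin-1/2 fermions allowed
    for total spin J and parity P: the factor 1 + (-1)^J kills |LL>_S, |RR>_S
    for odd J, and the |LR> states need |lambda1 - lambda2| = 1 <= J."""
    states = []
    if J % 2 == 0:                      # |LL> +/- |RR> survive only for even J
        states.append("|LL> - |RR>" if P == +1 else "|LL> + |RR>")
    if J >= 1 and P == -1:              # |LR> + (-1)^J |RL>, parity eigenvalue -1
        states.append("|LR> + |RL>" if J % 2 == 0 else "|LR> - |RL>")
    return states

for J in range(3):
    for P in (+1, -1):
        print(f"J^P = {J}^{'+' if P > 0 else '-'}:", allowed_states(J, P) or "forbidden")

In particular, (J, P) = (1, +1) returns no states, reproducing the rule that odd spin with even parity is forbidden.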
In future electron-electron colliders, H^{--} may be produced in the process e^-e^- → H^{--}. However, the reaction rate for a spin-1 H^{--} will be highly suppressed because the vector coupling requires that both electrons have the same handedness, while the only allowed state is |LR⟩ - |RL⟩. Similarly, the production rate for a scalar H^{--} is also highly suppressed. This is called “helicity suppression”. Replacing A, B_{1,2} and C_{1,2} by H^{++}, τ^+ and π^+ respectively in Eq. <ref>, the amplitude is

𝒜 = G_0^τ G_0^τ e^{iMϕ} [ F^J_{RR} d^J_{M0}(θ) e^{i(ϕ_1+ϕ_2)} sin(θ_1/2) sin(θ_2/2) + F^J_{LL} d^J_{M0}(θ) e^{-i(ϕ_1+ϕ_2)} cos(θ_1/2) cos(θ_2/2) - F^J_{LR} d^J_{M,-1}(θ) e^{i(-ϕ_1+ϕ_2)} cos(θ_1/2) sin(θ_2/2) - F^J_{RL} d^J_{M,1}(θ) e^{i(ϕ_1-ϕ_2)} sin(θ_1/2) cos(θ_2/2) ].

Here we have only one decay helicity amplitude, G_0^τ, for the τ^+ → π^+ν̄_τ decay. This is because the pion is a pseudo-scalar and ν̄_τ is right-handed. The angular correlation function is

I(θ_1,θ_2,ϕ_+) ∝ 1 + cosθ_1 cosθ_2, for odd J;
I(θ_1,θ_2,ϕ_+) ∝ 1 + a_J^2 + (1-a_J^2) cosθ_1 cosθ_2 - P_H sinθ_1 sinθ_2 cosϕ_+, for even J.

Here, for even J, a_J is defined as a_J ≡ |F^J_{LR}|/|F^J_{RR}|, and P_H is the parity of H^±±. We can see that the polarization information of H^±± does not appear in the angular distributions. The ϕ_+ distribution is

dσ/(σ dϕ_+) ∝ 1 for odd J; 1 - P_H (π^2/16) (1/(1+a_J^2)) cosϕ_+ for even J.

The ϕ_+ distributions for different J^P assignments are shown in Fig. <ref>, where a_J = 1 is assumed for illustration. Here are a few conclusions.
* The ϕ_+ distribution is uniform for odd J.
* For J = 0, the helicity amplitudes F^J_{LR} and F^J_{RL} are forbidden due to angular momentum conservation. Thus a_J = 0 and the ϕ_+ distribution becomes dσ/(σ dϕ_+) ∝ 1 - P_H (π^2/16) cosϕ_+, which is the same as that in the decay h(125) → τ^+τ^- → π^+π^-ν_τν̄_τ.
* For nonzero even J, the ϕ_+ distribution depends upon J through the amplitude ratio a_J.
Experimentally, it is difficult to reconstruct the τ lepton information due to the invisible neutrinos <cit.>. But we are able to obtain the decay plane angle ϕ_+ in some ways (see a recent review, Ref. <cit.>, and references therein). The so-called impact parameter method <cit.> is suitable for the decay τ → πν̄_τ studied here. It requires that the final pions have significant impact parameters, a condition that can be satisfied at high-energy colliders such as the Large Hadron Collider (LHC).
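The even-J distribution is simple enough to transcribe directly; a minimal sketch of the normalized ϕ_+ density (for odd J the density is flat at 1/(2π)):

import numpy as np

def phi_plus_pdf(phi, P_H, a_J):
    """Normalized phi_+ density for even J: proportional to
    1 - P_H * (pi^2/16) * cos(phi) / (1 + a_J^2)."""
    amp = P_H * np.pi**2 / 16.0 / (1.0 + a_J**2)
    return (1.0 - amp * np.cos(phi)) / (2.0 * np.pi)

phi = np.linspace(0.0, 2.0 * np.pi, 5)
print(phi_plus_pdf(phi, P_H=+1, a_J=0.0))   # J^P = 0^+ : dip at phi_+ = 0
print(phi_plus_pdf(phi, P_H=-1, a_J=1.0))   # nonzero even J with a_J = 1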
§ CONCLUSIONS

In summary, for a general decay chain A→ B_1B_2→ C_1C_2…, we have proved that the angular correlation function I(θ_1,θ_2,ϕ_+) in the decay of the daughter particles B_{1,2} is independent of the polarization of the mother particle A at production. It guarantees that the spin-parity nature of the mother particle A can be determined by measuring the angular correlation of the two decay planes B_i→ C_i… (i=1,2) without knowing its production details. This theorem has a simple form if the parity is conserved in the decay A→ B_1B_2. Taking a potential doubly-charged particle decay H^±± → τ^±τ^± as an example, we present the selection rules for various spin-parity combinations. It is found that this decay is forbidden for an H^±± with odd spin and even parity. Furthermore, we show that the angle between the two τ decay planes is an effective observable to determine the spin-parity nature of H^±±.

§ ACKNOWLEDGEMENT

Li-Gang Xia would like to thank Fang Dai for many helpful discussions. The author is also indebted to Yuan-Ning Gao for enlightening discussions. This work is supported by the General Financial Grant from the China Postdoctoral Science Foundation (Grant No. 2015M581062).

§ REFERENCES

[higgs_atlas] G. Aad et al. (ATLAS Collaboration), Phys. Lett. B 716, 1 (2012).
[higgs_cms] S. Chatrchyan et al. (CMS Collaboration), Phys. Lett. B 716, 30 (2012).
[hpp_atlas1] G. Aad et al. (ATLAS Collaboration), JHEP 1503, 041 (2015).
[hpp_atlas2] G. Aad et al. (ATLAS Collaboration), Eur. Phys. J. C 72, 2244 (2012).
[hpp_cms] S. Chatrchyan et al. (CMS Collaboration), Eur. Phys. J. C 72, 2189 (2012).
[nelson1] J. R. Dell'Aquila and C. A. Nelson, Phys. Rev. D 33, 80 (1986).
[nelson2] J. R. Dell'Aquila and C. A. Nelson, Phys. Rev. D 33, 93 (1986).
[nelson3] J. R. Dell'Aquila and C. A. Nelson, Phys. Rev. D 33, 101 (1986).
[spin_qi] M. R. Buckley, H. Murayama, W. Klemm, and V. Rentala, Phys. Rev. D 78, 014028 (2008).
[spin_mi] F. Boudjema and R. K. Singh, JHEP 0907, 028 (2009).
[JP_atlas1] G. Aad et al. (ATLAS Collaboration), Eur. Phys. J. C 75, 476 (2015); Eur. Phys. J. C 76, 152 (2016).
[JP_atlas2] G. Aad et al. (ATLAS Collaboration), Eur. Phys. J. C 75, 231 (2015).
[JP_atlas3] G. Aad et al. (ATLAS Collaboration), Phys. Lett. B 726, 120 (2013).
[JP_cms1] V. Khachatryan et al. (CMS Collaboration), Phys. Rev. D 92, 012004 (2012).
[JP_cms2] S. Chatrchyan et al. (CMS Collaboration), Phys. Rev. Lett. 110, 081803 (2013).
[JacobWick] M. Jacob and G. C. Wick, Annals Phys. 281, 774-799 (2000).
[ZpZZ] W.-Y. Keung, I. Low, and J. Shu, Phys. Rev. Lett. 101, 091802 (2008).
[MMC] A. Elagin, P. Murat, A. Pranko, and A. Safonov, Nucl. Instrum. Meth. A 654, 481 (2011).
[mtt_xlg] Li-Gang Xia, Chin. Phys. C 40, 113003 (2016).
[BBK] S. Berge, W. Bernreuther, and S. Kirchner, Phys. Rev. D 92, 096012 (2015).
[ImpactParameter] S. Berge and W. Bernreuther, Phys. Lett. B 671, 470 (2009).
http://arxiv.org/abs/1702.08186v2
{ "authors": [ "Li-Gang Xia" ], "categories": [ "hep-ph", "hep-ex" ], "primary_category": "hep-ph", "published": "20170227083417", "title": "A theorem about two-body decay and its application for a doubly-charged boson $H^{\\pm\\pm}$ going to $τ^{\\pm}τ^{\\pm}$" }
Multi-Label Segmentation via Residual-Driven Adaptive Regularization
Byung-Woo Hong, Ja-Keoung Koo (Chung-Ang University, Korea) {hong,jakeoung}@cau.ac.kr
Stefano Soatto (University of California Los Angeles, U.S.A.) soatto@cs.ucla.edu

We present a variational multi-label segmentation algorithm based on a robust Huber loss for both the data and the regularizer, minimized within a convex optimization framework. We introduce a novel constraint on the common areas, to bias the solution towards mutually exclusive regions. We also propose a regularization scheme that is adapted to the spatial statistics of the residual at each iteration, resulting in a varying degree of regularization being applied as the algorithm proceeds: the effect of the regularizer is strongest at initialization, and wanes as the solution increasingly fits the data. This minimizes the bias induced by the regularizer at convergence. We design an efficient convex optimization algorithm based on the alternating direction method of multipliers using the equivalent relation between the Huber function and the proximal operator of the one-norm. We empirically validate our proposed algorithm on synthetic and real images and offer an information-theoretic derivation of the cost-function that highlights the modeling choices made.

§ INTRODUCTION

To paraphrase the statistician Box, there is no such thing as a wrong segmentation. Yet, partitioning the image domain into multiple regions that exhibit some kind of homogeneity is useful in a number of subsequent stages of visual processing. So much so that segmentation remains an active area of research, with its own benchmark datasets that measure how right a segmentation is, often in terms of congruence with human annotators (who themselves are often incongruent). The method of choice is to select a model, or cost function, that tautologically defines what a right segmentation is, and then find it via optimization. Thus, most segmentation methods are optimal, just with respect to different criteria. Classically, one selects a model by picking a function(al) that measures data fidelity, which can be interpreted probabilistically as a log-likelihood, and one that measures regularity, which can be interpreted as a prior, with a parameter that trades off the two. In addition, since the number of regions is not only unknown, but also undefined (there could be any number of regions between one and the number of pixels in a single image), typically there is a complexity cost that drives the selection of a model among many. Specifically for the case of multi-label, multiply-connected region-based segmentation, there is a long and illustrious history of contributions too long to list here, but traceable to <cit.> and references therein. In each and every one of these works, to the best of our knowledge, the trade-off between data fidelity and regularization is determined from the distribution of the optimization residual and assumed constant over all the partitioning regions, leading to a trade-off or weighting parameter that is constant both in space (i.e., on the entire image domain <cit.>) and in time, i.e., during the entire course of the (typically iterative) optimization. Neither is desirable: consider Fig. <ref>.
Panels (c) and (d) show the residual and the variance, respectively, for each region shown in (b), into which the image (a) is partitioned. Clearly, neither the residual, nor the variance (shown as a gray-level: bright is large, dark is small), are constant in space. Thus, we need a spatially adapted regularization, beyond static image features as studied in <cit.>, or local intensity variations <cit.>. While regularization in these works is space-varying, the variation is tied to the image statistics, and therefore constant throughout the iteration. Instead, we propose a spatially-adaptive regularization scheme that is a function of the residual, which changes during the iteration, yielding an automatically annealed schedule whereby the changes in the residual during the iterative optimization gradually guide the strength of the prior, adjusting it both in space, and in time/iteration. To the best of our knowledge, we are the first to present an efficient scheme that uses the Huber loss for both data fidelity and regularization, in a manner that includes standard models as a special case, within a convex optimization framework. While the Huber loss <cit.> has been used before for regularization <cit.>, to the best of our knowledge we are the first to use it in both the data and regularization terms. Furthermore, to address the phenomenon of proliferation of multiple overlapping regions that plagues most multi-label segmentation schemes, we introduce a constraint that penalizes the common area of pairwise combinations of partitions. The constraints often used to this end are ineffective in a convex relaxation <cit.>, which often leads to the need for user interaction <cit.>. To boot, we present an efficient convex optimization algorithm within the alternating direction method of multipliers (ADMM) framework <cit.> with a variable splitting technique that enables us to simplify the constraint <cit.>.

§.§ Related Work

One of the most popular segmentation models relies on the bi-partition/piecewise-constant assumption <cit.>, which has been re-cast as the optimization of a convex functional based on the thresholding theorem <cit.>. In a discrete graph representation, global optimization techniques are developed based on min-cut/max-flow <cit.>, and there is an approach that has been applied to general multi-label problems <cit.>. Continuous convex relaxation techniques have been applied to multi-label problems in a variational framework <cit.>, where the minimization of Total Variation (TV) is performed using a primal-dual algorithm. In minimizing TV, a functional lifting technique has been applied to the multi-label problem <cit.>. Most convex relaxation approaches for multi-label problems have been based on TV regularization, while different data fidelity terms have been used, e.g. the L_1 norm <cit.> or the L_2 norm <cit.>. For the regularization term, the Huber norm has been used for TV in order to avoid undesirable staircase effects <cit.>. There have been adaptive algorithms proposed to improve the accuracy of the boundary using an edge indicator function <cit.>, generalized TV <cit.>, or an anisotropic structure tensor <cit.>. A local variation of the image intensity within a fixed-size window has also been applied to incorporate local statistics into the regularization <cit.>, and the regularization parameter has been chosen based on the noise variance <cit.>.
Most adaptive regularization algorithms have considered spatial statistics that are constant during the iteration, irrespective of the residual. It has been known that most multi-label models suffer from inaccurate or duplicate partitioned regions when used with a large number of labels <cit.>, which forces user interactions including bounding boxes <cit.>, contours <cit.>, scribbles <cit.>, or points <cit.>. Alternatively, side information such as depth has been applied to overcome the difficulties that stem from uncertainty in the characterization of regions in the multi-label problem <cit.>. The words “deep learning” appear nowhere in this paper but this sentence: we believe there are deep connections between the dynamic data-driven regularization we have proposed and a process to design models that best exploit the informative content of the data, learning which can inform more useful models moving forward. Sec. <ref> is a first step in this direction.

§.§ Contributions

Our first contribution is a multi-label segmentation model that adapts locally, in space and time/iteration, to the (data-driven) statistics of the residual (Sec. <ref>). The second contribution is to introduce a Huber functional as a robust penalty for both the data fidelity and the regularization terms, which are turned into proximal operators of the L_1 norm, allowing us to conduct the optimization efficiently via Moreau-Yosida regularization (Sec. <ref>). Third, unlike most previous algorithms that were ineffective at dealing with a large number of labels, we introduce a constraint on mutual exclusivity of the labels, which penalizes the common area of pairwise combinations of labels so that their duplicates are avoided (Sec. <ref>). Finally, we give an information-theoretic interpretation of our cost function, which highlights the underlying assumption and justifies the adaptive regularization scheme in a way that the classical Bayesian interpretation is not suited to do (Sec. <ref>). We validate our model empirically in Sec. <ref>, albeit with the proviso that existing benchmarks are just one representative choice for measuring how correct a segmentation is, as hinted at in the introduction, whereas our hope is that our method will be useful in a variety of settings beyond the benchmarks themselves, for which purpose we intend to make our code available open-source upon completion of the anonymous review process.

§ MULTI-LABEL VIA ADAPTIVE REGULARIZATION

§.§ General Multi-Label Segmentation

Let f : Ω → ℝ be a real-valued[Vector-valued images can also be handled, but we consider scalar for ease of exposition.] image with Ω ⊂ ℝ^2 its spatial domain. Segmentation aims to divide the domain Ω into a set of n pairwise disjoint regions Ω_i where Ω = ∪_{i=1}^n Ω_i and Ω_i ∩ Ω_j = ∅ if i ≠ j. The partitioning is represented by a labeling function l : Ω → Λ, where Λ denotes a set of labels with |Λ| = n. The labeling function l(x) assigns a label to each point x ∈ Ω such that Ω_i = { x | l(x) = i }. Each region Ω_i is indicated by the characteristic function χ_i : Ω → {0, 1} defined by: χ_i(x) = 1 if l(x) = i, and χ_i(x) = 0 if l(x) ≠ i.
Segmentation of the image f(x) can be cast as an energy minimization problem in a variational framework, by seeking regions {Ω_i} that minimize an energy functional with respect to a set of characteristic functions {χ_i}:

∑_{i∈Λ} { λ 𝒟(χ_i) + (1-λ) ℛ(Dχ_i) }, subject to ∑_{i∈Λ} χ_i(x) = 1,

where 𝒟 measures the data fidelity and ℛ measures the regularity of Dχ_i, and D denotes an operator for the distributional derivative. The trade-off between the data fidelity 𝒟 and the regularization ℛ is controlled by the relative weighting parameter λ ∈ [0, 1]. A simple data fidelity can be designed to measure the homogeneity of the image intensity based on a piecewise constant model with an additional noise process: f(x) = c_i + η_i(x), x ∈ Ω_i, where c_i ∈ ℝ and η_i(x) is assumed to follow a certain distribution independently in i ∈ Λ with a specified parameter σ ∈ ℝ. The regularization is designed to penalize the boundary length of region Ω_i, which is preferred to have a smooth boundary. The classical form of the data fidelity and the regularization is:

𝒟 = ∫_Ω χ_i(x) |f(x) - c_i|^p dx, ℛ = ∫_Ω |∇χ_i(x)| dx,

where p is given depending on the noise distribution assumption (e.g. p = 1 for Laplacian and p = 2 for Gaussian). The control parameter λ in (<ref>) is related to the parameter σ of the noise distribution, and is constant for all i since σ is assumed to be the same for all i. The energy formulation in (<ref>) with the terms defined in (<ref>) is non-convex due to the integer constraint of the characteristic function χ_i(x) ∈ {0, 1}, and the control parameter λ is given to be constant in Ω for all i due to the assumption that the associated parameter σ of the noise distribution is constant. We present a convex energy formulation with a novel constraint on the set of partitioning functions in the following section.

§.§ Energy with Mutually Exclusive Constraint

We derive a convex representation of the energy functional in (<ref>) following classical convex relaxation methods presented in <cit.> as follows:

∑_{i∈Λ} { ∫_Ω λ ρ(u_i(x)) dx + ∫_Ω (1-λ) γ(∇u_i(x)) dx }, subject to u_i(x) ∈ [0, 1], ∑_{i∈Λ} u_i(x) = 1, ∀x ∈ Ω,

where ρ(u_i) represents the data fidelity, γ(∇u_i) represents the regularization, and their relative weight is determined by λ, which will be discussed in the following section. The characteristic function χ_i in (<ref>) is replaced by a continuous function u_i ∈ BV(Ω) of bounded variation, and its integer constraint is relaxed into the convex set u_i ∈ [0, 1]. In the determination of the energy functional in (<ref>), we employ a robust penalty estimator using the Huber loss function ϕ_η with a threshold parameter η > 0, as defined in <cit.>:

ϕ_η(x) = x^2/(2η) if |x| ≤ η, and ϕ_η(x) = |x| - η/2 if |x| > η.
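For reference, a direct transcription of ϕ_η, together with a check that the quadratic and linear branches join continuously at |x| = η:

import numpy as np

def huber(x, eta):
    """phi_eta: quadratic for |x| <= eta, linear with matched value beyond."""
    x = np.asarray(x, dtype=float)
    return np.where(np.abs(x) <= eta, 0.5 * x**2 / eta, np.abs(x) - 0.5 * eta)

eta = 0.5
print(huber([eta - 1e-9, eta, eta + 1e-9], eta))   # all ~ eta/2 = 0.25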
We define the data fidelity ρ(u_i) and the regularization γ(∇u_i) using the Huber loss function as follows:

ρ(u_i(x); c_i) ≐ ϕ_η(f(x) - c_i) u_i(x), γ(∇u_i(x)) ≐ ϕ_μ(∇u_i(x)),

where c_i ∈ ℝ is an approximation of f to estimate within the region Ω_i, and the threshold parameters η, μ > 0 are related to the selection of the distribution model of the residual. The data fidelity ρ(u_i; c_i) is defined following the piecewise constant image model; however, it can be generalized to ρ(u_i; c_i) ≐ -log p_i(f(x)), where p_i(f(x)) is the probability that the observation f(x) fits a certain model distribution p_i. The advantage of using the Huber loss in comparison to the L_2 norm is that geometric features such as edges are better preserved, while it has continuous derivatives, whereas the L_1 norm is not differentiable, leading to staircase artifacts. In addition, the Huber loss enables an efficient convex optimization algorithm due to its equivalence to the proximal operator of the L_1 norm, which will be discussed in Sec. <ref>. The regions are desired to be pairwise disjoint, namely Ω_i ∩ Ω_j = ∅ if i ≠ j; however, the summation constraint ∑_{i∈Λ} u_i(x) = 1, ∀x ∈ Ω in (<ref>) is ineffective to this end, especially with a large number of labels <cit.>. Thus, we introduce a novel constraint to penalize the common area of each pair of combinations in the regions Ω_i, in such a way that ∑_{i≠j} u_i u_j is minimized for all i, j ∈ Λ. Then, we arrive at the following energy functional with the proposed mutually exclusive constraint:

∑_{i∈Λ} { ∫_Ω λ ρ(u_i(x); c_i) + τ (∑_{i≠j} u_j(x)) u_i(x) dx + ∫_Ω (1-λ) γ(∇u_i(x)) dx }, subject to u_i(x) ∈ [0, 1], ∑_{i∈Λ} u_i(x) = 1, ∀x ∈ Ω,

where τ > 0 is a weighting parameter for the constraint of mutual exclusivity in segmenting regions, and λ determines the trade-off between the data fidelity and the regularization. The optimal partitioning functions u_i and the approximations c_i are computed by minimizing the energy functional in (<ref>) in an alternating way. The desired segmentation results are obtained from the optimal set of partitioning functions u_i as given by: l(x) = argmax_i u_i(x), i ∈ Λ, x ∈ Ω. In the optimization of the energy functional in (<ref>), we propose a novel regularization scheme that is adaptively applied based on the local fit of the data to the model for each label, as discussed in the following section.

§.§ Residual-Driven Regularization

The trade-off between the data fidelity ρ(u_i; c_i) and the regularization γ(∇u_i) in (<ref>) is determined by λ based on the noise distribution in the image model. We assume that the diversity parameter in the probability density function of the residual, defined as the difference between the observed and predicted values, varies with the label. We propose a novel regularization scheme based on the adaptive weighting function λ_i depending on the data fidelity ρ(u_i; c_i), as defined by:

ν_i(x) = exp(-ρ(u_i(x); c_i)/β), λ_i(x) = argmin_λ ½ ‖ν_i(x) - λ‖_2^2 + α ‖λ‖_1,

where β > 0 is a constant parameter that is related to the variation of the residual, and α > 0 is a constant parameter for the sparsity regularization. The relative weight λ_i(x) between the data fidelity and the regularization is adaptively applied for each label i ∈ Λ and location x ∈ Ω depending on ν_i(x), which measures the local fit of the data to the model.
The adaptive regularity scheme based on the weighting function λ_i(x) is designed so that regularization is stronger when the residual is large, equivalently when ν_i(x) is small, and weaker when the residual is small, equivalently when ν_i(x) is large, during the energy optimization process. We impose sparsity on the exponential measure of the negative residual ν_i(x) to obtain the weighting function λ_i(x) as a solution of the Lasso problem <cit.> defined in (<ref>). Such a solution with the identity operator as a predictor matrix can be efficiently obtained by the soft shrinkage operator 𝒯(ν|α) <cit.>:

𝒯(ν|α) = ν - α if ν > α; 0 if |ν| ≤ α; ν + α if ν < -α,

leading to the solution λ_i(x) = 𝒯(ν_i(x)|α). The Lagrange multiplier α > 0 in the Lasso problem in (<ref>), where 0 < ν_i ≤ 1, restricts the range of λ_i to [0, 1-α] so that regularization is imposed everywhere, which leads to well-posedness even if ρ(u_i; c_i) = 0. The constant α is related to the overall regularity on the entire domain. The final energy functional for our multi-label segmentation problem with the adaptive regularity scheme reads:

∑_{i∈Λ} { ∫_Ω λ_i(x) ρ(u_i(x); c_i) + τ (∑_{i≠j} u_j(x)) u_i(x) dx + ∫_Ω (1 - λ_i(x)) γ(∇u_i(x)) dx }, subject to u_i(x) ∈ [0, 1], ∑_{i∈Λ} u_i(x) = 1, ∀x ∈ Ω,

where λ_i is obtained as the solution of (<ref>). The optimization algorithm to minimize the energy functional in (<ref>) is presented in Sec. <ref> and the supplementary material.
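The resulting weight map is cheap to compute; a minimal sketch, using the parameter values α = 0.01 and β = 10 quoted in Sec. <ref>:

import numpy as np

def soft_threshold(v, alpha):
    """Soft-shrinkage operator T(v | alpha) of (<ref>)."""
    return np.sign(v) * np.maximum(np.abs(v) - alpha, 0.0)

def adaptive_weight(data_fidelity, alpha=0.01, beta=10.0):
    """lambda_i(x) = T(exp(-rho_i(x) / beta) | alpha): the data term is
    trusted (regularization weakened) where the residual is small."""
    nu = np.exp(-data_fidelity / beta)    # nu_i(x) in (0, 1]
    return soft_threshold(nu, alpha)      # lambda_i(x) in [0, 1 - alpha]

rho = np.array([0.0, 1.0, 10.0, 100.0])   # per-pixel data fidelity values
print(adaptive_weight(rho))               # decreasing weights as the residual grows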
§.§ Information-Theoretic Interpretation

The energy functional for our basic model incorporating the adaptive regularization, simplified after removing auxiliary variables used in the optimization and the additional constraint on the mutual exclusivity of regions, consists of a point-wise sum, which could be interpreted probabilistically by assuming that the image f is a sample from an IID random field. Under that interpretation, we have that ρ(u_i(x)) = -log p(u(x) | f(x), c_i) and γ(∇u_i(x)) = -log p(u(x) | c_i). For simplicity, we indicate those as -log p(u|f) and -log p(u) respectively, and even further p ≐ p(u|f) and q ≐ p(u). Then λ_i(x) ∝ p, and the overall cost function (<ref>) to be minimized, ∫ λρ + (1-λ)γ dx, can be written as

ℰ(p, q) = -𝔼(p log(p/q)) + 𝔼(log q), to be minimized over p and q,

where the expectation 𝔼 is the sum with respect to the values taken by u and f on x. The first term is the Kullback-Leibler divergence between the chosen prior and the data-dependent posterior. The second term is constant once the prior is chosen. Therefore, the model chosen for the posterior is the one that maximizes the divergence between prior and posterior, where the influence of the prior q wanes as the solution becomes an increasingly better fit of the data, without the need for manual tuning of the annealing, and without the undesirable bias of the prior on the final solution. The model chosen is therefore the one that, for a fixed prior, selects the posterior to be as divergent as possible, so as to make the data as informative as possible (in the sense of uncertainty reduction). Compare this with the usual Bayesian interpretation of variational segmentation, whereby the function to be maximized is ℱ(p, q) = ∫ (log p + β log q) dx, for some fixed β and a prior q whose influence does not change with the data. If we wanted it to change, we would have to introduce an annealing parameter λ, so ℱ(p, q) = p^λ q^{1-λ}, with no guidance from Bayesian theory on how to choose it or schedule it. Clearly choosing λ = p yields a form that is not easily interpreted within Bayesian inference. Thus the information-theoretic setting provides us with guidance on how to choose the parameter λ, whereby the cost function is given as ℰ = 𝕂𝕃(p ‖ q), maximized over p and q, where 𝕂𝕃 denotes the Kullback-Leibler divergence. We obtain λ = p and consequently an automatic, data-driven annealing schedule.

§ ENERGY OPTIMIZATION

In this section, we present an efficient convex optimization algorithm in the framework of the alternating direction method of multipliers (ADMM) <cit.>. The detailed derivations of our optimization algorithm are provided in the supplementary material. The energy (<ref>), which is convex in u_i with fixed c_i, is minimized with respect to the partitioning functions u_i and the intensity estimates c_i in an alternating way. We modify the energy functional in (<ref>) using variable splitting <cit.>, introducing a new variable v_i with the constraint u_i = v_i, as follows:

∑_{i∈Λ} { ∫_Ω λ_i ρ(u_i; c_i) + τ (∑_{i≠j} u_j) u_i dx + ∫_Ω (1-λ_i) γ(∇v_i) dx + (θ/2) ‖u_i - v_i + y_i‖_2^2 }, subject to u_i(x) ≥ 0, ∑_{i∈Λ} v_i(x) = 1, ∀x ∈ Ω,

where y_i is a dual variable for each equality constraint u_i = v_i, and θ > 0 is a scalar augmentation parameter. The original constraints u_i ∈ [0, 1] and ∑_i u_i = 1 in (<ref>) are decomposed into the simpler constraints u_i ≥ 0 and ∑_i v_i = 1 in (<ref>) by the variable splitting u_i = v_i. In the computation of the data fidelity and the regularization, we employ a robust estimator using the Huber loss function defined in (<ref>). An efficient procedure can be performed to minimize the Huber loss function ϕ_η following the equivalence property of the Moreau-Yosida regularization of the non-smooth function |·|, as defined by <cit.>:

ϕ_η(x) = inf_r { |r| + (1/(2η)) (x - r)^2 },

which replaces the data fidelity ρ(u_i; c_i) and the regularization γ(∇v_i) in (<ref>) with the regularized forms ρ(u_i; c_i, r_i) and γ(∇v_i; z_i), respectively, as follows:

ρ(u_i; c_i, r_i) = ( |r_i| + (1/(2η)) (f - c_i - r_i)^2 ) u_i, γ(∇v_i; z_i) = ‖z_i‖_1 + (1/(2μ)) ‖∇v_i - z_i‖_2^2,

where r_i and z_i are the auxiliary variables to be minimized in alternation. The constraints on u_i and v_i in (<ref>) can be represented by the indicator function δ_A(x) of a set A, defined by: δ_A(x) = 0 if x ∈ A, and δ_A(x) = ∞ if x ∉ A. The constraint u_i ≥ 0 is given by δ_A(u_i) where A = { x | x ≥ 0 }, and the constraint ∑_i v_i = 1 is given by δ_B({v_i}) where B = { {x_i} | ∑_i x_i = 1 }. The Moreau-Yosida regularization for the Huber loss function and the constraints represented by the indicator functions lead to the following unconstrained energy functional ℒ_i for label i:

ℒ_i = ∫_Ω λ_i ρ(u_i; c_i, r_i) + τ (∑_{i≠j} u_j) u_i dx + δ_A(u_i) + ∫_Ω (1-λ_i) γ(∇v_i; z_i) dx + (θ/2) ‖u_i - v_i + y_i‖_2^2,

and the final unconstrained energy functional ℰ reads: ℰ({u_i, v_i, y_i, c_i, r_i, z_i}) = ∑_{i∈Λ} ℒ_i + δ_B({v_i}).
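The Moreau-Yosida equivalence underlying these regularized forms can be checked numerically: the infimum over r is attained at the soft-threshold of x, and the attained value coincides with ϕ_η(x). A small sketch:

import numpy as np

def huber(x, eta):
    return np.where(np.abs(x) <= eta, 0.5 * x**2 / eta, np.abs(x) - 0.5 * eta)

def moreau_yosida(x, eta):
    """inf_r { |r| + (x - r)^2 / (2 eta) }, attained at the soft-threshold
    r* = sign(x) * max(|x| - eta, 0); the value equals phi_eta(x)."""
    r = np.sign(x) * np.maximum(np.abs(x) - eta, 0.0)
    return np.abs(r) + (x - r) ** 2 / (2.0 * eta)

x = np.linspace(-3.0, 3.0, 121)
print(np.allclose(huber(x, 0.7), moreau_yosida(x, 0.7)))   # True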
The optimal set of partitioning functions {u_i} is obtained by minimizing the energy functional ℰ using ADMM, minimizing the augmented Lagrangian ℒ_i in (<ref>) with respect to the variables u_i, v_i, c_i, r_i, z_i, and applying a gradient ascent scheme with respect to the dual variable y_i, followed by the update of the weighting function λ_i and the projection of {v_i} onto the set B in (<ref>). The alternating optimization steps using ADMM are presented in Algorithm <ref>, where k is the iteration counter. The technical details regarding the optimality conditions and the optimal solutions for the update of each variable at each step in Algorithm <ref> are provided in the supplementary material.
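Of the steps above, the projection onto B is simple enough to spell out here: since B is the hyperplane ∑_i v_i = 1 at each pixel, the Euclidean projection subtracts the mean constraint violation. A minimal sketch, with array shapes assumed as (labels, height, width):

import numpy as np

def project_onto_B(v):
    """Euclidean projection of {v_i} onto B = { {x_i} : sum_i x_i = 1 },
    applied pixelwise; v has shape (n_labels, height, width)."""
    violation = v.sum(axis=0) - 1.0
    return v - violation[None, :, :] / v.shape[0]

v = np.random.rand(4, 8, 8)
v = project_onto_B(v)
print(np.allclose(v.sum(axis=0), 1.0))   # True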
<ref>, where the input junction images have 5 (top), 7 (middle), 9 (bottom) regions, as shown in (a). We compare (f) our Huber-Huber (H^2) model without the mutual exclusivity constraint in (<ref>) and (g) our full H^2 model with the constraint in (<ref>) to the following algorithms: (b) fast-label (FL) <cit.>, (c) convex relaxation based on Total Variation using the primal-dual algorithm (TV) <cit.>, (d) vectorial Total Variation using the Douglas-Rachford algorithm (VTV) <cit.>, (e) paired calibration (PC) <cit.>.This experiment is particularly designed to demonstrate the need for the mutual exclusivity constraint, thus the input images are made to be suited for a precise piecewise constant model so that the underlying image model of the algorithms under comparison is relevant. The illustrative results are presented in Fig. <ref>, where the performance of most algorithms degrades as the number of regions increases (top-to-bottom), while our algorithm yields consistently better results.Effectiveness of Adaptive Regularity: We empirically demonstrate the effectiveness of our proposed adaptive regularization using a representative synthetic image with four regions, each exhibiting spatial statistics of different dispersion, in Fig. <ref> (a). Artificial noise is added to the white background, the red rectangle on the left, the green rectangle in the middle, and the blue rectangle on the right, with increasing noise levels in that order.To preserve sharp boundaries, one has to manually choose a large enough λ so that regularization is small; however, large intensity variance in some of the data yields undesirably irregular boundaries between regions, with red and blue scattered throughout the middle and right rectangles (d), all of which however have sharp corners. On the other hand, to ensure homogeneity of the regions, one has to crank up regularization (small λ), resulting in a biased final solution where corners are rounded (e), even for regions that would allow fine-scale boundary determination (red). Our approach with the adaptive regularization (b), however, naturally finds a solution with a sharp boundary where the data term supports it (red), and lets the regularizer weigh in on the solution when the data is more uncertain (blue). The zoomed-in images for the marked regions in (b) and (e) are shown in (c) and (f), respectively, in order to highlight the geometric property of the solution around the corners. Multi-Label Segmentation on Real Images:We perform a comparative analysis for the multi-label segmentation based on the Berkeley dataset <cit.> (images 62096, 124084, 15062, 238011, 12003, and 29030, segmented with 4, 3, 4, 3, 3, and 5 labels, respectively). We compare our algorithm to the existing state-of-the-art techniques whose underlying model assumes a piecewise constant image, for fair comparison, and consider the algorithms: FL <cit.>, TV <cit.>, VTV <cit.>, PC <cit.>.We provide the qualitative evaluation in Fig. <ref>, where the input images are shown in (a) and the segmentation results are shown in (b)-(f), where the same number of labels is applied to all the algorithms for each image. The algorithm parameters for the algorithms under comparison are optimized with respect to accuracy, while we set the parameters: η=0.5, μ=0.5, α=0.01, β=10, τ=0.5, θ=1. 
While our method yields better labels than the others, the obtained results may seem to be imperfect in general, which is due to the limitation of the underlying image model in particular in the presence of texture or illumination changes.The quantitative comparisons are reported in terms of precision and recall with varying number of labels in Tables <ref>, <ref>. The computational cost as a baseline excluding the use of special hardware (e.g. multi-core GPU/CPU) and image processing techniques (e.g. image pyramid) is provided for 481 × 321 × 3 RGB images in Table <ref>.§ CONCLUSION We have introduced a multi-label segmentation model that is motivated by Information Theory. The proposed model has been designed to make use of a prior, or regularization functional, that adapts to the residual during convergence of the algorithm in both space, and time/iteration. This results in a natural annealing schedule, with no adjustable parameters, whereby the influence of the prior is strongest at initialization, and wanes as the solution approaches a good fit with the data term. All this is done within an efficient convex optimization framework using the ADMM.Our functional has been based on a Huber-Huber penalty, both for data fidelity and regularization. This is made efficient by a variable splitting, which has yielded faster and more accurate region boundaries in comparison to the TV-L_1 and TV-L_2 models. To make all this work for multi-label segmentation with a large number of labels, we had to introduce a novel constraint that penalized the common area of the pairwise region combinations. The proposed energy formulation and optimization algorithm can be naturally applied to a number of imaging tasks such as image reconstruction, motion estimation and motion segmentation. Indeed, segmentation can be improved by a more sophisticated data term integrated into our algorithm. § FROM BAYES-TIKHONOV TO INFORMATION-DRIVEN REGULARIZATIONThis section offers an alternative motivation for our choice of adaptive regularization.The cost function (<ref>), simplified after removing auxiliary variables used in the optimization, consists of a point-wise sum, which could be interpreted probabilistically by assuming that the image f is a sample from an IID random field. In a Bayesian setting, one would choose a prior q based on assumptions about the state of nature. For instance, assuming natural images to be piecewise smooth, one can choose q ≐ p(u(x) | c_i) ∝exp(- γ(∇ u_i(x))). Once chosen a prior, one would then choose or learn a likelihood model ℓ = p(f(x) | u(x), c_i), or equivalently a posterior p = ℓ q where, for instance, p = p(u(x) | f(x), c_i) ∝exp(-ρ(u_i(x))). The natural inference criterion in a Bayesian setting would be to maximize the posterior:ψ^ max_ℓ,q = ℓ q = p.Here the prior exerts its influence with equal strength throughout the inference process, and is generally related to standard Tikhonov regularization. Instead, we would like a natural annealing process that starts with the prior as our initial model, and gradually increases the influence of the data term until, at convergence, the influence of the prior is minimized and the data drives convergence to the final solution. 
This could be done by an annealing parameter λ that changes from 0 to 1 according to some schedule, yielding (an homotopy class of cost functions mapping the prior to the likelihood as λ goes from 0 to 1)ψ^ max_ℓ,q(λ) = ℓ^λ q^(1-λ)≠ pIdeally, we would like to not pick this parameter by hand, but instead have the data guide the influence of the prior during the optimization. Bayesian inference here not only does not provide guidance on how to choose the annealing schedule, but it does not even allow a changing schedule, for that would not be compatible with a maximum a-posteriori inference criterion. Therefore, we adopt an information-theoretic approach: We first choose the prior q, just like in the Bayesian case, but then we seek for a likelihood ℓ, or equivalently a posterior p, that makes the data as informative as possible. This is accomplished by choosing the posterior p to be as divergent as possible from the prior, in the sense of Kullback-Leibler. That way, the data is as informative as possible (if the posterior was chosen to be the same as the prior, the system would not even look at the data).This yields a natural cost criterionϕ^ max_p,q = 𝕂𝕃(p || q) +const.choosing the constant to be 𝔼(log q), we obtain the equivalent cost function to be minimized:ϕ^ min_p,q = - 𝔼(p logp/q) + 𝔼(log q)where the expectation is with respect to the variability of the data sampled on the spatial domain. If such samples are assumed IID, we have that ϕ^ min_p,q = - logψ_ℓ, q^ max(p)from (<ref>). Thus the information-theoretic approach suggests a scheduling of the annealing parameter λ that is data-driven, and proportional to the posterior p.This yields a model with adaptive regularization that automatically adjusts during the optimization: The influence of the prior q wanes as the solution becomes an increasingly better fit of the data, without the undesirable bias of the prior on the final solution (see Fig. <ref>). At the same time, it benefits from heavy regularization at the outset, turning standard regularization into a form of relaxation, annealing, or homotopy continuation method. In practice, the most tangible advantages of this choice are an algorithm that is easy to tune, and never again having to answer the question: “how did you choose λ?” § DETAILED DERIVATION OF ENERGY OPTIMIZATIONThe energy (<ref>) is minimized with respect to the partitioning functions u_i and the intensity estimates c_i in an alternating way. Since it is convex in u_i with fixed c_i, we use an efficient convex optimization algorithm in the framework of alternating-direction method of multipliers (ADMM) <cit.>. We modify the energy functional in (<ref>) using variable splitting <cit.> introducing a new variable v_i with the constraint u_i = v_i as follows: ∑_i ∈Λ{∫_Ωλ_iρ(u_i; c_i) + τ( ∑_i ≠ j u_j ) u_ix + ∫_Ω (1 - λ_i)γ( ∇ v_i )x + θ/2 u_i - v_i + y_i _2^2 }, subject tou_i(x) ≥ 0,∑_i ∈Λ v_i(x) = 1,∀ x ∈Ω, where y_i is a dual variable for each equality constraint u_i = v_i, and θ > 0 is a scalar augmentation parameter. The original constraints u_i ∈ [0, 1] and ∑_i u_i = 1 in (<ref>) are decomposed into the simpler constraints u_i ≥ 0 and ∑_i v_i = 1 in (<ref>) by variable splitting u_i = v_i. 
In the computation of the data fidelity and the regularization, we employ a robust estimator using the Huber loss function defined in (<ref>).An efficient procedure can be performed to minimize the Huber loss function ϕ_η following the equivalence property of Moreau-Yosida regularization of a non-smooth function | · | as defined by <cit.>: ϕ_η(x) = inf_r { | r | + 1/2 η (x - r)^2 }, which replaces the data fidelity ρ(u_i; c_i) and the regularization γ(∇ v_i) in (<ref>) with the regularized forms ρ(u_i; c_i, r_i) and γ(∇ v_i; z_i), respectively as follows: ρ(u_i; c_i, r_i)= ( | r_i | + 1/2 η (f - c_i - r_i)^2 ) u_i,γ(∇ v_i; z_i)= z_i _1 + 1/2 μ∇ v_i - z_i ^2_2, where r_i and z_i are the auxiliary variables to be minimized in alternation. The constraints on u_i and v_i in (<ref>) can be represented by the indicator function δ_A(x) of a set A defined by: δ_A(x)= 0 : x ∈ A ∞: x ∉ A The constraint u_i ≥ 0 is given by δ_A(u_i) where A = { x | x ≥ 0 }, and the constraint ∑_i v_i = 1 is given by δ_B( { v_i } ) where B = {{x_i} | ∑_i x_i = 1 }. The Moreau-Yosida regularization for the Huber loss function and the constraints represented by the indicator functions lead to the following unconstrained energy functional ℒ_i for label i: ℒ_i = ∫_Ωλ_iρ(u_i; c_i, r_i) + τ( ∑_i ≠ j u_j ) u_ix+ δ_A(u_i)+ ∫_Ω (1 - λ_i)γ( ∇ v_i; z_i )x + θ/2 u_i - v_i + y_i _2^2, and the final unconstrained energy functional ℰ reads: ℰ({u_i, v_i, y_i, c_i, r_i, z_i}) = ∑_i ∈Λℒ_i + δ_B({v_i}). The optimal set of partitioning functions { u_i } is obtained by minimizing the energy functional ℰ using ADMM, minimizing the augmented Lagrangian ℒ_i in (<ref>) with respect to the variables u_i, v_i, c_i, r_i, z_i, and applying a gradient ascent scheme with respect to the dual variable y_i followed by the update of the weighting function λ_i and the projection of { v_i } onto the set B in (<ref>).The alternating optimization steps using ADMM are presented in Algorithm <ref>, where k is the iteration counter.The update of the estimate c_i^k+1 in (<ref>) is obtained by the following step: c_i^k+1∫_Ωλ_i^k (f - r_i^k) u_i^kx/∫_Ωλ_i^k u_i^kx.The update for the auxiliary variable r_i^k+1 in (<ref>) is obtained by the following optimality condition:0∈∂ | r_i^k+1 | - 1/η ( f - c_i^k+1 - r_i^k+1 ),where ∂ denotes the sub-differential operator.The solution for the optimality condition in (<ref>) is obtained by the proximal operator of the L_1 norm <cit.> as follows: r_i^k+1( f - c_i^k+1 |ηg ), where g(x) =x _1.The proximal operator (v |ηg) of the weighed L_1 norm ηg with a parameter η > 0 at v is defined by: (v |ηg) _x ( 1/2 x - v _2^2 + ηg(x) ). The solution of the proximal operator of the L_1 norm is obtained by the soft shrinkage operator 𝒯 (v |η) defined in (<ref>). Thus, the solutions of the proximal problem in (<ref>) is given by: r_i^k+1𝒯( f - c_i^k+1 |η). In the same way, the update of the auxiliary variable z_i^k+1 in (<ref>) is obtained by: z_i^k+1𝒯( ∇ v_i^k|μ). For the primal variable u_i^k+1 in (<ref>), we employ the intermediate solution ũ_i^k+1 and its optimality condition is given by: 0 ∈λ_i^k+1 d_i^k+1 + τ∑_i ≠ j u_j^k + θ (ũ_i^k+1 - v_i^k + y_i^k), d_i^k+1 | r_i^k+1 | + 1/2 η (f - c_i^k+1 - r_i^k+1)^2,leading to the following update:ũ_i^k+1 v_i^k - y_i^k - λ_i^k+1/θ d_i^k+1 - τ/θ∑_i ≠ j u_j^k. 
Given the intermediate solution ũ_i^k+1, the positivity constraint is imposed to obtain the solution for the update of u_i^k+1 as follows: u_i^k+1Π_A( ũ_i^k+1 ) = max{ 0, ũ_i^k+1}, where the orthogonal projection operator Π_A onto a set A = { x | x ≥ 0 } is defined by: Π_A(x) = arg min_y ∈ A‖ y - x ‖_2. We also employ the intermediate solution ṽ_i^k+1 in (<ref>) for the update of the primal variable v_i^k+1 in (<ref>).The optimality condition for the update of ṽ_i^k+1 reads: 0 ∈1 - λ_i^k+1/μ∇^* ( ∇ṽ_i^k+1 - z_i^k+1 ) - θ( u_i^k+1 - ṽ_i^k+1 + y_i^k ), where ∇^* denotes the adjoint operator of ∇, leading to the following linear system of equations with ξ_i = 1 - λ_i^k+1/μθ: ṽ_i^k+1 -ξ_i Δṽ_i^k+1 = u_i^k+1 + y_i^k - ξ_i div z_i^k+1, where - ∇^* ∇ = Δ is the Laplacian operator, and - ∇^* = div is the divergence operator.We use the Gauss-Seidel iterations to solve the linear system in (<ref>).Given the set of intermediate solutions {ṽ_i^k+1}, the solution for the update of the variable v_i^k+1 is obtained by the orthogonal projection of the intermediate solution onto the set B in Eq. (<ref>): v_i^k+1 = ṽ_i^k+1 - 1/n( V - 1 ),V = ∑_i ∈Λṽ_i^k+1. The gradient ascent scheme is applied to update the dual variable y_i^k+1 in (<ref>).
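To make the order of these updates concrete, here is a schematic Python rendering of one ADMM sweep for a single label on a one-dimensional signal. It is a simplified sketch under our own naming, with the inter-label coupling (the τ term and the projection onto ∑_i v_i = 1) reduced to comments, and a Jacobi sweep standing in for the Gauss-Seidel solver:

```python
import numpy as np

def soft_shrink(x, t):
    # Proximal operator of the l1 norm (soft thresholding).
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def grad(v):
    # Forward differences, Neumann boundary (1-D for brevity).
    g = np.zeros_like(v)
    g[:-1] = v[1:] - v[:-1]
    return g

def div(z):
    # Discrete divergence, the negative adjoint of grad above.
    d = np.empty_like(z)
    d[0] = z[0]
    d[1:-1] = z[1:-1] - z[:-2]
    d[-1] = -z[-2]
    return d

def admm_sweep(f, u, v, y, c, lam, eta=0.5, mu=0.5, theta=1.0):
    # Region statistic c and Huber auxiliary r (data term).
    r = soft_shrink(f - c, eta)
    c = float(np.sum(lam * (f - r) * u) / max(np.sum(lam * u), 1e-12))
    r = soft_shrink(f - c, eta)
    # Auxiliary z for the regularizer.
    z = soft_shrink(grad(v), mu)
    # Primal u: closed-form intermediate step, then projection onto u >= 0.
    d = np.abs(r) + (f - c - r) ** 2 / (2.0 * eta)
    u = np.maximum(v - y - (lam / theta) * d, 0.0)  # tau-coupling with other labels omitted
    # Primal v: Jacobi sweeps for (I - xi*Laplacian) v = u + y - xi*div(z).
    xi = (1.0 - lam) / (mu * theta)
    rhs = u + y - xi * div(z)
    for _ in range(20):
        nb = np.r_[v[1:], v[-1:]] + np.r_[v[:1], v[:-1]]  # replicated-end neighbours
        v = (rhs + xi * nb) / (1.0 + 2.0 * xi)
    # (With several labels, {v_i} would now be projected onto sum_i v_i = 1.)
    # Dual ascent.
    y = y + u - v
    return u, v, y, c

# Toy usage on a noisy step signal:
f = np.concatenate([np.zeros(50), np.ones(50)]) + 0.05 * np.random.randn(100)
u = np.full(100, 0.5); v = u.copy(); y = np.zeros(100); c = 1.0
for _ in range(100):
    u, v, y, c = admm_sweep(f, u, v, y, c, lam=0.7)
```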
http://arxiv.org/abs/1702.08336v1
{ "authors": [ "Byung-Woo Hong", "Ja-Keoung Koo", "Stefano Soatto" ], "categories": [ "cs.CV" ], "primary_category": "cs.CV", "published": "20170227155035", "title": "Multi-Label Segmentation via Residual-Driven Adaptive Regularization" }
Strong rainbow connection numbers of toroidal meshes Yulong Wei[Corresponding author. E-mail address: yulong.wei@mail.bnu.edu.cn (Y. Wei), xum@bnu.edu.cn (M. Xu), wangks@bnu.edu.cn (K. Wang).], Min Xu, Kaishun Wang Sch. Math. Sci. & Lab. Math. Com. Sys., Beijing Normal University, Beijing, 100875, China In 2011, Li et al. <cit.> obtained an upper bound of the strong rainbow connection number of an r-dimensional undirected toroidal mesh. In this paper, this bound is improved. As a result, we give a negative answer to their problem.Key words: toroidal mesh; (strong) rainbow path; (strong) rainbow connection number; Cayley graph. § INTRODUCTION All graphs considered in this paper are finite, connected and simple. We refer to the book <cit.> for graph theory notation and terminology not described here. Let Γ be a graph. Denote by V(Γ) and E(Γ) the vertex set and edge set of Γ, respectively. A sequence of distinct vertices is a path if any two consecutive vertices are adjacent. A path P:(v_1, v_2, …, v_k) is a cycle if v_1 is adjacent to v_k, denoted by C_k. The distance, d(u, v), between vertices u and v is equal to the length of a shortest path connecting u and v. The diameter of Γ, d(Γ), is the maximum distance between two vertices in Γ over all pairs of vertices.Define a k-edge-coloring ζ: E(Γ)→{1,2,…,k}, k∈ℕ, where adjacent edges may be colored the same. A path is rainbow if no two edges of it are colored the same. A path from u to v is called a strong rainbow path if it is a rainbow path with length d(u, v). If any two distinct vertices u and v of Γ are connected by a (strong) rainbow path, then Γ is called (strong) rainbow-connected under the coloring ζ, and ζ is called a (strong) rainbow k-coloring of Γ. The (strong) rainbow connection number of Γ, denoted by (src(Γ)) rc(Γ), is the minimum k for which there exists a (strong) rainbow k-coloring of Γ. Clearly, we have d(Γ)≤ rc(Γ)≤ src(Γ).The (strong) rainbow connection number of a graph was first introduced by Chartrand et al. <cit.>. Ananth and Nasre <cit.> proved that, for every integer k≥3, deciding whether src(Γ)≤ k is NP-hard even when Γ is bipartite. (Strong) rainbow connection numbers of some special graphs have been studied in the literature, such as outerplanar graphs <cit.>, Cayley graphs <cit.>, line graphs <cit.>, power graphs <cit.>, undirected double-loop networks <cit.> and non-commuting graphs <cit.>.The Cartesian product of two graphs Γ and Λ is the graph Γ□Λ whose vertex set is the set {γλ | γ∈ V(Γ), λ∈ V(Λ)}, and two vertices γλ, γ' λ' are adjacent if λ=λ' and {γ,γ'}∈ E(Γ) or if γ=γ' and {λ,λ'}∈ E(Λ). The Cartesian product operation is commutative and associative, hence the Cartesian product of more factors is well-defined. The graph C_n_1□⋯□ C_n_r is an r-dimensional undirected toroidal mesh, where n_k≥2 for 1≤ k≤ r.In 2011, Li et al. proved the following theorem and proposed an open problem.<cit.> Let C_n_k, n_k≥2, 1≤ k ≤ r be cycles. 
Then∑_1≤ k ≤ r⌊n_k/2⌋≤ rc(C_n_1□⋯□ C_n_r)≤ src(C_n_1□⋯□ C_n_r)≤∑_1≤ k ≤ r⌈n_k/2⌉.Moreover, if n_k is even for every 1≤ k ≤ r, thenrc(C_n_1□⋯□ C_n_r)= src(C_n_1□⋯□ C_n_r)=∑_1≤ k ≤ rn_k/2. <cit.> Given an Abelian group G and an inverse closed minimal generating set S⊆ G ∖ 1 of G, is it true thatsrc(C(G,S))= rc(C(G,S))=∑_a∈ S^*⌈|a|/2⌉?where S^*⊆ S is a minimal generating set of G. In this paper, we improve the upper bound of src(C_n_1□⋯□ C_n_r) in Theorem <ref>. Our main result is listed below. Let C_n_k, n_k≥2, 1≤ k ≤ r be cycles. Thensrc(C_n_1□⋯□ C_n_r)≤{[ ⌈n_1+⋯+n_r-μ2⌉, 0≤μ≤⌊r2⌋;;; ⌈n_1+⋯+n_r-r+μ2⌉,⌊r2⌋+1≤μ≤ r, ].where μ is the number of even numbers among n_1, …, n_r. Note that an r-dimensional undirected toroidal mesh is a Cayley graph. As a result, Theorem <ref> gives a negative answer to Problem <ref>. § PRELIMINARY RESULTSIn this section, we will introduce some useful results for the strong rainbow connection numbers of graphs. <cit.> For each integer n≥4, rc(C_n)= src(C_n)=⌈n/2⌉. We make the following simple observation, which we will use repeatedly.Let Γ and Λ be two connected graphs. Thensrc(Γ) ≤ src(Γ□Λ)≤ src(Γ)+ src(Λ).For each integer n≥3, src(C_n□ C_2)=⌈n+1/2⌉.Write (0, 1, …, n-1) for C_n and (0, 1) for C_2. Since the diameter of C_n□ C_2 is ⌈n+1/2⌉, it suffices to show that src(C_n□ C_2)≤⌈n+1/2⌉. We only need to construct a strong rainbow ⌈n+1/2⌉-coloring. Now we divide our discussion into two cases.Case 1   n=2k.Define an edge-coloring f_1 of the graph C_2k□ C_2 byf_1(e)= {[ k, if e={i0, i1};; i,if e={ij, (i+1)j}, 0≤ i≤ k-1;; i-k, if e={ij, (i+1)j}, k≤ i≤ 2k-2;; k-1,if e={(2k-1)j, 0j}. ]. For illustration, we give a strong rainbow 5-coloring of C_8□ C_2 in Figure <ref>.Note that any path from u to v in Table <ref> is a strong rainbow path under the coloring f_1. It follows that f_1 is a strong rainbow (k+1)-coloring.Case 2   n=2k+1.Define an edge-coloring f_2 of the graph C_2k+1□ C_2 byf_2(e)= {[ i,if e={i0, i1}, 1≤ i≤ k;; i-k-1, if e={i0, i1}, k+1≤ i≤ 2k;; k, if e={00, 01};; i,if e={ij, (i+1)j}, 0≤ i≤ k;; i-k-1, if e={ij, (i+1)j}, k+1≤ i≤ 2k-1;; k-1,if e={(2k)j, 0j}. ]. For illustration, we give a strong rainbow 4-coloring of C_7□ C_2 in Figure <ref>.Note that any path from u to v in Table <ref> is a strong rainbow path under the coloring f_2. It follows that f_2 is a strong rainbow (k+1)-coloring. In the graph Γ□Λ, we write Γ y for Γ□{y}, where y∈ V(Λ). The union of graphs Γ and Λ is the graph Γ∪Λ with vertex set V(Γ)∪ V(Λ) and edge set E(Γ)∪ E(Λ).Let Γ be a connected graph. Thensrc(Γ□ C_n)≤⌈n-2/2⌉+ src(Γ□ C_2), n≥3. Write (x_1, x_2, …, x_n) for C_n. Meanwhile, we use E[Γ x_i, Γ x_j] to denote the edge set between Γ x_i and Γ x_j.By Observation <ref>, we havesrc(Γ□ C_3)≤ src(C_3)+ src(Γ)=1+ src(Γ)≤ 1+ src(Γ□ C_2).Thus, (<ref>) holds for n=3. Now, we suppose that n≥4.Let L_1 and L_2 be induced subgraphs of Γ□ C_nwhose vertex sets are V(Γ x_1)∪ V(Γ x_n) and V(Γ x_⌊n/2⌋)∪ V(Γ x_⌊n/2⌋+1) respectively. Then each L_i is isomorphic to Γ□ C_2. Let S_0={1, …, ⌈n-2/2⌉}. Suppose that f_1: E(Γ)→ S_1 is a strong rainbow src(Γ)-coloring of Γ, and f_2, i: E(L_i)→ S_2 is a strong rainbow src(Γ□ C_2)-coloring of L_i for 1≤ i≤2, where S_0∩ S_2=∅. By Observation <ref>, we may assume that S_1⊆ S_2. 
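As an aside, constructions such as f_1 and f_2 are easy to sanity-check by exhaustive search. The following Python sketch (our own verification aid, not part of the proofs; all helper names are ours) builds C_n□ C_2, applies a candidate edge-coloring, and tests whether every pair of vertices is joined by a rainbow geodesic:

```python
import itertools
from collections import deque

def cylinder_edges(n):
    # Vertices (i, j): i on C_n, j on C_2; two n-cycles plus the rung edges {i0, i1}.
    E = [((i, j), ((i + 1) % n, j)) for i in range(n) for j in (0, 1)]
    E += [((i, 0), (i, 1)) for i in range(n)]
    return E

def is_strong_rainbow(n, color):
    adj = {}
    for a, b in cylinder_edges(n):
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)
    def dist_from(s):           # BFS distances from s
        d = {s: 0}; q = deque([s])
        while q:
            a = q.popleft()
            for w in adj[a]:
                if w not in d:
                    d[w] = d[a] + 1; q.append(w)
        return d
    dist = {a: dist_from(a) for a in adj}
    def rainbow_geodesic(a, t, used):   # DFS over shortest paths with distinct colors
        if a == t:
            return True
        for w in adj[a]:
            c = color(a, w)
            if dist[w][t] == dist[a][t] - 1 and c not in used:
                if rainbow_geodesic(w, t, used | {c}):
                    return True
        return False
    return all(rainbow_geodesic(a, t, frozenset())
               for a, t in itertools.combinations(adj, 2))

def f1(u, v, k):
    # The coloring f_1 of C_{2k} x C_2 from Case 1 above.
    (i1, j1), (i2, j2) = u, v
    if i1 == i2:                       # rung edge {i0, i1}
        return k
    if {i1, i2} == {0, 2 * k - 1}:     # wrap-around edge {(2k-1)j, 0j}
        return k - 1
    i = min(i1, i2)
    return i if i <= k - 1 else i - k

k = 4  # n = 8, the case of the figure
print(is_strong_rainbow(2 * k, lambda u, v: f1(u, v, k)))  # should report True
```

Running it with k = 4 (the case n = 8) should report True, in line with the lemma.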
Define an edge-coloring f_3: E(Γ□ C_n)→ S_0∪ S_2 byf_3(e)= {[ i, if e∈ E[Γ x_i, Γ x_i+1], 1≤ i≤⌊n/2⌋-1;; i-⌊n/2⌋,if e∈ E[Γ x_i, Γ x_i+1], ⌊n/2⌋+1≤ i≤ n-1;; f_2,i(e) , if e∈ E(L_i) for 1≤ i≤2;; f_1({y_1, y_2}), if e={y_1x_i, y_2x_i}∈ E[Γ x_i, Γ x_i], n≥ 5 and i∈ I, ].where I={2, 3, …, ⌊n/2⌋-1, ⌊n/2⌋+2, …, ⌊n/2⌋+3, …, n-1}.For illustration of f_3, see Figure <ref>. Pick any two distinct vertices u and v of Γ□ C_n. Write u=y_1x_i and v=y_2x_j. Without loss of generality, we may assume that i≤ j.We only need to show that there exists a strong rainbow path from u to v under f_3.If i=j, the desired result is obvious. Assume that i≠ j. We divide our discussion into three cases.Case 1   1≤ j-i≤⌊n/2⌋, and 2≤ j≤⌊n/2⌋ or ⌊n/2⌋+1≤ i≤ n-1.Pick a strong rainbow path P_1 from y_1x_j to v in Γ x_j. Then(u=y_1x_i, y_1x_i+1, …, y_1x_j)∪ P_1is a desired strong rainbow path.Case 2   1≤ j-i≤⌊n/2⌋, j≥⌊n/2⌋+1 and i≤⌊n/2⌋.Pick a strong rainbow path P_2 from y_1x_⌊n/2⌋ to y_2x_⌊n/2⌋+1 in L_2. Then(u=y_1x_i, y_1x_i+1, …, y_1x_⌊n/2⌋) ∪ P_2 ∪(y_2x_⌊n/2⌋+1, y_2x_⌊n/2⌋+2, …, y_2x_j=v)is a desired strong rainbow path.Case 3   j-i≥⌊n/2⌋+1.Pick a strong rainbow path P_3 from y_1x_1 to y_2x_n in L_1. Then(u=y_1x_i, y_1x_i-1, …, y_1x_1)∪ P_3 ∪(y_2x_n, y_2x_n-1,…, y_2x_j+1, y_2x_j=v)is a desired strong rainbow path.As mentioned above, we obtain the desired result. § PROOF OF THEOREM <REF> Let C_n_k, n_k≥2, 1≤ k ≤ r be cycles. Thensrc(C_n_1□⋯□ C_n_r)≤⌈n_1+⋯+n_r/2⌉. Without loss of generality, we may assume that n_1≥ n_2≥⋯≥ n_r. We distinguish two cases.Case 1   n_r≥3.We prove this proposition by induction on r. If r=1, (<ref>) is derived from Lemma <ref>. Suppose r=2. If n_2 is even, then by Lemma <ref> and Observation <ref>, (<ref>) holds. If n_2 is odd, then by Proposition <ref> and Lemma <ref>, we havesrc(C_n_1□ C_n_2) ≤ ⌈n_2-2/2⌉+ src(C_n_1□ C_2)=⌈n_1+n_2/2⌉. Now, Suppose r≥3.If each n_i is odd, thensrc(C_n_1□⋯□ C_n_r-1□ C_n_r)≤ ⌈n_r-2/2⌉+ src((C_n_1□⋯□ C_n_r-2)□ (C_n_r-1□ C_2))  ( by Proposition <ref>)≤ ⌈n_r-2/2⌉+ src(C_n_1□⋯□ C_n_r-2)+ src(C_n_r-1□ C_2)  ( by Observation <ref>)≤ ⌈n_r-2/2⌉+⌈n_1+ ⋯+n_r-2/2⌉+⌈n_r-1+1/2⌉  ( by induction hypothesis  and Lemma <ref>)= ⌈n_1+⋯+n_r/2⌉. If n_i is even for some i, thensrc(C_n_1□⋯□ C_n_r)≤src(C_n_1□⋯□ C_n_i-1□ C_n_i+1□⋯□ C_n_r)+ src(C_n_i)  ( by Observation <ref>)≤ ⌈n_1+⋯+n_i-1+n_i+1+⋯+n_r/2⌉+⌈n_i/2⌉  ( by induction hypothesis)= ⌈n_1+⋯+n_r/2⌉. Case 2   n_r=2.Suppose s is the minimum positive integer such that n_s=2. Case 2.1   s=1. By Observation <ref>, (<ref>) is obtained.Case 2.2   s≥ 2. In this case, we havesrc(C_n_1□⋯□ C_n_r)≤src(C_n_1□⋯□ C_n_s-1)+ src(C_2□⋯□ C_2)  ( by Observation <ref>)≤ ⌈n_1+⋯+n_s-1/2⌉+r-s+1  ( by Case 1)= ⌈n_1+⋯+n_r/2⌉. Combining Case 1 and Case 2, we obtain the desired result. Proof of Theorem <ref>:If r=1, (<ref>) is obvious. Now, suppose r≥ 2. We divide our discussion into three cases. Case 1   r=2. If n_1=n_2=2, Observation <ref> implies that (<ref>) holds. Now suppose n_j≥ 3, for some j. Without loss of generality, we may assume that n_1≥ 3. By Proposition <ref> and Lemma <ref>, we havesrc(C_n_1□ C_n_2)≤⌈n_2-2/2⌉+ src(C_n_1□ C_2) =⌈n_2-2/2⌉+⌈n_1+1/2⌉. If μ=1 and n_1 is even, thensrc(C_n_1□ C_n_2)= src(C_n_2□ C_n_1)≤⌈n_1-2/2⌉+ src(C_n_2□ C_2)=⌈n_1+n_2-1/2⌉.Otherwise, (<ref>) implies (<ref>).Case 2   r=3.If μ=0 or μ=3, (<ref>) holds by Proposition <ref>. Now suppose μ=1 or μ=2. Without loss of generality, we assume that n_1 is even and n_3 is odd. 
By Observation <ref>, Lemma <ref> and Case 1, we havesrc(C_n_1□ C_n_2□ C_n_3)≤ src(C_n_1)+ src(C_n_2□ C_n_3) ≤⌈n_1+n_2+n_3-1/2⌉. Case 3   r≥4. If μ=0 or μ=r, Proposition <ref> implies (<ref>). In the following, we assume that n_1, …, n_μ are even and n_μ+1, …, n_r are odd.Case 3.1   1≤μ≤⌊r/2⌋-1.src(C_n_1□⋯□ C_n_r)≤ src(C_n_2μ+1□⋯□ C_n_r)+∑_1≤ s≤μ, s+t=2μ+1 src(C_n_s□ C_n_t)  (by Observation <ref>)≤ ⌈n_2μ+1+⋯+n_r/2⌉+∑_1≤ s≤μ, s+t=2μ+1n_s+n_t-1/2  (by Proposition <ref> and Case 1)= ⌈n_1+⋯+n_r-μ/2⌉. Case 3.2   μ=⌊r/2⌋.src(C_n_1□⋯□ C_n_r)≤ (1/2)(1+(-1)^r+1) src(C_n_r)+∑_1≤ s≤μ, s+t=2μ+1 src(C_n_s□ C_n_t)  (by Observation <ref>)≤ (1/2)(1+(-1)^r+1)⌈n_r/2⌉+∑_1≤ s≤μ, s+t=2μ+1n_s+n_t-1/2  (by Proposition <ref> and Case 1)= ⌈n_1+⋯+n_r-μ/2⌉. Case 3.3   ⌊r/2⌋+1≤μ≤ r-1.src(C_n_1□⋯□ C_n_r)≤ src(C_n_1□⋯□ C_n_2μ-r)+∑_μ+1≤ t≤ r, s+t=2μ+1 src(C_n_s□ C_n_t)  (by Observation <ref>)≤ ⌈n_1+⋯+n_2μ-r/2⌉+∑_μ+1≤ t≤ r, s+t=2μ+1n_s+n_t-1/2  (by Proposition <ref> and Case 1)= ⌈n_1+⋯+n_r-r+μ/2⌉. As mentioned above, we obtain the desired result.§ ACKNOWLEDGEMENTM. Xu's research is supported by the National Natural Science Foundation of China (11571044, 61373021). K. Wang's research is supported by the National Natural Science Foundation of China (11671043, 11371204). [ANP] P. Ananth, M. Nasre, New hardness results in rainbow connectivity, arXiv:1104.2074v1 [cs.CC], 2011. [BMG] J.A. Bondy, U.S.R. Murty, Graph Theory, GTM 244, Springer, 2008. [CJMZ] G. Chartrand, G.L. Johns, K.A. McKeon and P. Zhang, Rainbow connection in graphs, Math. Bohem., 133 (1) (2008), 85–98. [DXX] X. Deng, K. Xiang and B. Wu, Polynomial algorithm for sharp upper bound of rainbow connection number of maximal outerplanar graphs, Appl. Math. Lett., 25 (2012), 237–244. [LLL] H. Li, X. Li and S. Liu, The (strong) rainbow connection numbers of Cayley graphs on Abelian groups, Comput. Math. Appl., 62 (11) (2011), 4082–4088. [LSue] X. Li and Y. Sun, Upper bounds for the rainbow connection numbers of line graphs, Graphs Combin., 28 (2) (2012), 251–263. [MFW] X. Ma, M. Feng and K. Wang, The rainbow connection number of the power graph of a finite group, Graphs Combin., 32 (4) (2016), 1495–1504. [ML] Y.B. Ma and Z.P. Lu, Rainbow connection numbers of Cayley graphs, J. Comb. Optim., http://dx.doi.org/10.1007/s10878-016-0052-6. [SY] Y. Sun, Rainbow connection numbers for undirected double-loop networks, Advances in global optimization, 109–116, Springer Proc. Math. Stat., 95, Springer, Cham, 2015. [WMW] Y. Wei, X. Ma and K. Wang, Rainbow connectivity of the non-commuting graph of a finite group, J. Algebra Appl., 15 (2016), 1650127, 8pp.
http://arxiv.org/abs/1702.07986v2
{ "authors": [ "Yulong Wei", "Min Xu", "Kaishun Wang" ], "categories": [ "math.CO" ], "primary_category": "math.CO", "published": "20170226044350", "title": "Strong rainbow connection numbers of toroidal meshes" }
Hospital acquired infections (HAI) are infections acquired within the hospital from healthcare workers, patients or from the environment, but which have no connection to the initial reason for the patient's hospital admission. HAI are a serious world-wide problem, leading to an increase in mortality rates, duration of hospitalisation, as well as a significant economic burden on hospitals. Although clear preventive guidelines exist, studies show that compliance with them is frequently poor. This paper details the software perspective for an innovative, business-process-software-based cyber-physical system that will be implemented as part of a European Union-funded research project. The system is composed of a network of sensors mounted in different sites around the hospital, a series of wearables used by the healthcare workers and a server-side workflow engine. For better understanding, we describe the system through the lens of a single, simple clinical workflow that is responsible for a significant portion of all hospital infections. The goal is that, when completed, the system will be configurable in the sense of facilitating the creation and automated monitoring of those clinical workflows that, when combined, account for over 90% of hospital infections.Preventing Hospital Acquired Infections Through a Workflow-Based Cyber-Physical System Maria Iuliana Bocicor^1,2, Arthur-Jozsef Molnar^1,2 and Cristian Taslitchi^3 ^1SC Info World SRL, Bucharest, Romania; ^2Faculty of Mathematics and Computer Science, Babes-Bolyai University, Cluj-Napoca, Romania; ^3Faculty of Automatic Control and Computers, University Politehnica of Bucharest, Bucharest, Romania {iuliana.bocicor, arthur.molnar}@infoworld.ro, cristian.taslichi@gmail.comDecember 30, 2023 § INTRODUCTIONHospital acquired infections (HAI) or nosocomial infections are defined as infections ”acquired in hospital by a patient who was admitted for a reason other than that infection. An infection occurring in a patient in a hospital or other healthcare facility in whom the infection was not present or incubating at the time of admission. This includes infections acquired in the hospital but appearing after discharge, and also occupational infections among staff of the facility” <cit.>. Thus, in addition to decreasing the quality of life and increasing mortality rates, the duration of hospitalisation as well as the costs of medical visits for patients, HAI represent a direct occupational risk for healthcare workers, which significantly affects costs and has the potential of creating personnel deficits in case of an outbreak.Existing research shows that HAI are prevalent across the globe, regardless of geographical, political, social or economic factors <cit.>, <cit.>. Even more compelling is the fact that while the sophistication of medical care is constantly increasing, reported HAI rates have not seen a meaningful decrease <cit.>, <cit.>, <cit.>, <cit.>. 
Studies undertaken between 1995 and 2008 in several developed countries have revealed infection rates between 5.1% and 11.6% <cit.>; data from lesser developed countries is in many cases limited and deemed of low quality. According to the European Centre for Disease Prevention and Control, approximately 4.2 million HAI occurred in 2013 alone in European long-term care facilities, the crude prevalence of residents with at least one HAI being 3.4%; this translates to more than 100 thousand patients on any given day <cit.>. The total cases of HAI in Europe amount to 16 million extra hospitalisation days, 37,000 attributable deaths and an economic burden of €7 billion in direct costs <cit.>. A study conducted in the United Kingdom <cit.> concluded that patients who developed HAI stayed in hospital 2.5 times longer and the hospital costs (for nursing care, hospital overheads, capital charges, and management) tripled. In the USA, in 2002 alone, 1.7 million patients were affected by HAI and the annual economic impact was approximately US$6.5 billion in 2004. Besides all these resource expenses, what is even worse is that HAI are responsible for an increase in mortality rates, as these infections lead to death in 2.7% of cases <cit.>. A more recent study <cit.> conducted in 183 US hospitals revealed that 4% of the patients had one or more HAI, which led to an estimated 648,000 patients with HAI in acute care hospitals in the US in 2011. According to a recent study <cit.>, the prevalence of HAI in Southeast Asia between 2000 and 2012 was 9%, the excess length of stay in hospitals of infected patients varied between 5 and 21 days and the attributed mortality was estimated between 7% and 46%. In Canada, more than 220,000 HAI result in 8000 deaths a year, making infections the fourth leading cause of death in the country, with $129 million in extra costs incurred in 2010 <cit.>, <cit.>. In Australia, there is an estimated number of 200,000 cases of HAI per year, resulting in 2 million hospitalization days <cit.>.The most common target sites of HAI are the urinary and respiratory tracts and areas involved in invasive procedures or catheter insertion areas. While the methodology for prevention exists, it is often ignored due to lack of time, unavailability of appropriate equipment or because of inadequate staff training. Research shows that the most important transmission route for HAI is members of staff coming into contact with patients or contaminated equipment without following proper hygiene procedures <cit.>.During the twentieth century, several specific measures were taken to prevent the occurrence and spread of infections, which have been translated into a series of instructions for controlling the vectors that propagate infection as well as to properly manage outbreaks and epidemics. Originally, these instructions were established as guidelines for healthcare workers; later, they were also transformed into prevention rules included in the documentation concerning workplace safety, and in recent years they were incorporated into various software or cyber-physical solutions to monitor and ensure compliance. An unquestionable benefit and substantial improvement in this respect has been brought by the Internet of Things (IoT) technologies, which are currently employed in healthcare and many other areas of life. 
Since the introduction of various IoT monitoring systems, significant improvements have been observed in compliance with hygiene regulations and prevention standards, as well as decreases in infection rates. In this paper we present an IoT-based cyber-physical system that targets HAI prevention, on which development has started under funding from the European Union. Integrating a network of sensors that monitor clinical workflows and ambient conditions with monitoring software, the system will provide real-time information and alerts. The system will monitor general processes known to affect HAI spread, such as cleaning and equipment maintenance, together with clinical processes at risk for HAI, such as catheter insertion, postoperative care or mechanical ventilation. In this paper, we will illustrate the system using a clinical workflow that is often involved in HAI transmission, together with a motivating example that details a software perspective regarding how the system ensures compliance with established preventive guidelines.§ RELATED WORKSeveral automated solutions have been implemented to avert and reduce HAI and their impact. The underlying idea used by several of these systems is continuous monitoring of healthcare workers' hand hygiene and real-time alert generation in case of non-compliance with established guidelines. The final purpose is to modify human behaviour towards better hand hygiene compliance. This is usually accomplished using wearable devices worn by healthcare workers, which interact with sensors placed in key hospital locations in order to record personnel activities and hand hygiene events. When a hand hygiene event is omitted or performed, the device provides visual, auditory or haptic notification. The IntelligentM <cit.> and Hyginex <cit.> systems use bracelet-like devices that are equipped with motion sensors to ensure that hand sanitation is correctly performed. Biovigil technology <cit.> and MedSense <cit.> are designed for the same purpose, only in these cases bracelets are replaced with badges worn by the healthcare workers. The Biovigil device uses chemical sensors to detect whether hand hygiene is in accordance with established standards. The systems can be configured to remind clinicians to disinfect their hands before entering patient rooms, or before procedures such as intravenous infusions or catheter insertion. Furthermore, these systems record hygiene events, centralise them and enable analysis, visualisation and report generation. Unlike the solutions presented so far, the SwipeSense <cit.> system employs small, easy-to-use, alcohol-based gel dispensers that can be worn by medical personnel. This exempts healthcare workers from interrupting their activities in order to walk to a sink or a disinfectant dispenser <cit.>. A different type of system that is of assistance in the fight against infection is Protocol Watch <cit.>, a decision support system used to improve compliance with the ”Surviving Sepsis Campaign” international guidelines <cit.> for the prevention and management of sepsis. Protocol Watch regularly checks certain medical parameters of the patients <cit.>, its main goal being to reduce the time period between the debut of sepsis (the moment when it is first detected) and the beginning of treatment. 
If the system detects that certain conditions that may cause sepsis are met, it alerts the medical staff and indicates which tests, observations and interventions must be performed, according to prevention and treatment protocols.A different approach for the prevention of infection is taken by Xenex <cit.>: the Xenex ”Germ-Zapping Robot” can disinfect a room by using pulses of high-intensity, high-energy ultraviolet light. The robot must be taken inside the room to be disinfected and, in most cases, the deactivation of pathogens takes place in five minutes. After disinfection, the room will remain at a low microbial load until it is recontaminated by a patient, healthcare worker or through the ventilation system.Another relevant issue regards the problem of identifying control policies and optimal treatment in infection outbreaks, as introduced in <cit.>. The authors propose a comprehensive approach using electronic health records to build healthcare worker contact networks with the objective of putting into place efficient vaccination policies in case of outbreaks. Relevant software systems developed to also improve treatment policies in case of outbreaks and epidemics are RL6: Infection <cit.> - a software solution developed to assist hospitals in the processes of controlling and monitoring infections and outbreaks, and Accreditrack <cit.> - a software system designed to ensure compliance with hand hygiene guidelines and to verify nosocomial infection management processes. § MOTIVATING EXAMPLE - HAND HYGIENEResearch shows that most cases of hospital infection are tightly connected to certain clinical workflows <cit.>. Thus, we propose a cyber-physical system which monitors workflows considered relevant in HAI propagation. Although there are other ICT automated solutions that target the prevention of HAI, to our knowledge there are no other systems that use an approach based on the monitoring of clinical workflows. In order to successfully prevent infection and outbreaks, the system must focus on the typical sites where these infections usually occur (urinary and respiratory tracts, sites of surgery or invasive procedures <cit.>), as well as track workflows that are not infection-site specific, but which have a significant contribution to HAI rates, such as hand hygiene and transmission from the environment.A hardware-centric motivating example of the proposed system was presented in <cit.>. The authors of <cit.> focus on the hardware components of the system, namely the types and locations of sensors used to monitor clinical activities and the wearable technology employed by the medical staff for identification, monitoring and alerting. The purpose of our paper is to illustrate the software side of the proposed cyber-physical system using a motivating example based on one of the most relevant and frequently occurring clinical workflows. On the software side, the system will model and encode clinical workflows using the Business Process Model and Notation (BPMN) <cit.> standard and it will be compatible with leading medical informatics standards such as HL7 V3 <cit.>, thus allowing seamless interconnection to hospital infrastructure via HL7-compliant Hospital Information Systems (HIS). In addition, it will integrate a network of hardware sensors, able to identify ongoing clinical processes, provide location information and track the use of hospital equipment and materials. Healthcare workers will use wearable devices that continuously monitor their location and activity. 
The central hub of the system will be a server-side engine able to load and execute BPMN-based workflows. Integration with the HIS enables the retrieval of key information regarding patients, such as admissions, transfers and discharges, as well as records of past or planned invasive interventions, which can be used to determine the level of risk. Hence, infection and outbreak management could be improved so that, in case of suspicion, the locations of previous admissions, as well as patients or members of staff considered at risk, can be contacted.The modelled clinical workflows will be executed using a BPMN engine. When the workflow leads to a state that poses a risk of HAI, the engine will generate alerts that are received directly by the involved healthcare workers. In this paper we illustrate the interplay between the hardware and software components of the proposed system using a motivating example based on hand hygiene, which remains one of the most common pathways of HAI transmission. We must note that the provided example is only used to portray the workflow-based system, and that the final implementation will allow the creation and monitoring of a variety of clinical workflows, which will be executed by the BPMN engine whenever necessary.We illustrate our motivating example using a typical scenario: a healthcare worker enters a room with two beds, and interacts with both inpatients before exiting. Both common and highly relevant to hand hygiene, our example is illustrated in Figure <ref>, using simplified BPMN-like notation to reduce the number of decision points and emphasize readability.According to established hand hygiene guidelines <cit.>, upon entering a patient room, workers must perform hand disinfection. If this procedure is skipped or performed inadequately (e.g. without disinfectant, shorter washing time than recommended), the system will generate an alert to warn the clinician about the detected non-compliance. After each patient contact, and before leaving the room, hand disinfection should again be correctly performed. All these events and alert reports will be persisted to enable later analyses, such as identification of an outbreak's patient zero and route of transmission.Figure <ref>, which was adapted from <cit.>, illustrates a typical patient room, with two beds, a sink and a bathroom. As soon as the healthcare worker enters it, the Radio-Frequency Identification (RFID) tag and motion sensor combination detect this and identify them. The system records and interprets the data received from the sensors and the BPMN engine starts a new instance of the relevant workflows, including the one for hand hygiene. According to the hand hygiene workflow, the clinician should perform the hand disinfection procedure before going near a patient and before leaving their surroundings. In our example, this is achieved using the sink or the bathroom sink and the disinfectant dispenser. All of them have inexpensive sensors together with RFID tags and Bluetooth Low Energy transceivers; their role is to provide input to the workflow engine. The workflow engine records received data into the persistent repository. Furthermore, by running the workflow, the software engine ascertains that hand hygiene guidelines were observed. In our example, the healthcare worker performs hand disinfection before contact with the first patient, which is recorded by the sensors integrated with the sink and the disinfectant dispenser. 
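To illustrate how such compliance logic might be expressed in software, the following minimal Python sketch encodes the hand hygiene rules for this scenario as a small state machine. The event names and the alert hook are our own illustrative assumptions and do not reflect the actual interface of the BPMN engine:

```python
from dataclasses import dataclass, field

@dataclass
class HandHygieneMonitor:
    """Tracks one worker inside one patient room (illustrative sketch only)."""
    worker_id: str
    hands_clean: bool = False
    log: list = field(default_factory=list)

    def on_event(self, event, detail=None):
        self.log.append((event, detail))        # persisted for later outbreak analyses
        if event == "DISINFECTION_OK":          # compliant wash reported by sink/dispenser sensors
            self.hands_clean = True
        elif event == "PATIENT_CONTACT":
            if not self.hands_clean:
                self.alert(f"contact with {detail} without prior hand disinfection")
            self.hands_clean = False            # every contact requires a fresh disinfection
        elif event == "ROOM_EXIT" and not self.hands_clean:
            self.alert("left the room without hand disinfection")

    def alert(self, msg):
        self.log.append(("ALERT", msg))         # would be pushed to the wearable device
        print(f"[{self.worker_id}] ALERT: {msg}")

m = HandHygieneMonitor("nurse-07")
m.on_event("ROOM_ENTRY")
m.on_event("DISINFECTION_OK")
m.on_event("PATIENT_CONTACT", "bed 1")   # compliant
m.on_event("PATIENT_CONTACT", "bed 2")   # triggers an alert, as in the scenario
m.on_event("ROOM_EXIT")                  # second alert: exit without disinfection
```

In the full system, the equivalent logic lives in the BPMN workflow definition, so that the rules can be reconfigured without changing engine code.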
The worker can, however, move to the second patient directly, as shown in Figure <ref>. In this case, the system records their proximity to the second bed via RFID; if a disinfection event that is compliant with guidelines was not recorded before the contact, the system interprets this as non-compliance, and emits an alert that is recorded and received by the healthcare worker through their wearable device. Once they become compliant by undergoing hand disinfection, they can resume contact with the patient. Information that is persisted is planned to be reused at later dates and in the context of more than one workflow, including at least all the workflows active during that time. As an example, the room entry and exit events from Figure <ref> can be used for finding the source of an outbreak, or tracking its propagation. In our example, the current instance of the hand hygiene workflow ends once the healthcare worker exits the patient room.§ CONCLUSIONSThis paper is centred on technology-driven preventive measures for a serious public health issue, namely hospital acquired infections. While most transmission routes are well understood and standard guidelines exist to curb them, available research shows that in practice, they are either not applied, or not applied thoroughly. As such, we propose an innovative technological solution for preventing HAI and outbreaks. Our proposed approach will employ a standards-compliant, workflow-based cyber-physical system able to monitor and enforce compliance with clinical workflows that are associated with over 90% of HAI instances, in accordance with current guidelines and best practices.We have illustrated an overall picture of the system using a software workflow perspective through a motivating example based on hand hygiene guidelines. However, our aim is to enable the creation, configuration and execution of location-specific clinical workflows using BPMN-like notation. As existing research shows that infections within the urinary tract, respiratory tract, surgical sites, or skin and soft tissue <cit.> account for over 75% of the total number, we aim to also evaluate the system using workflows related to those sites. Another category is represented by workflows that are not infection-site specific, but which represent transmission vectors for numerous types of infections: hand hygiene (such as presented in this paper) and transmission from the environment.Another future functionality, which currently sits beyond the scope of our research, is a graphically interactive, configurable component that allows the management of the monitored workflows, by taking into account the specific infrastructure of the deployed sensors and of the unit of care.Furthermore, additional components that can be added to the system include advanced analysis tools for the gathered data that allow building risk maps and contact networks, enabling new ways of pinpointing elusive causes of hospital infection that occur even when best practices are adhered to.In addition to the software-based functionalities, another issue that must be addressed is related to the legal aspects regarding the deployment of such a system, more precisely those regarding data protection. Considering that the system involves tracking of patients and healthcare workers alike, the software side of such systems must remain compliant with regulations regarding the protection of highly-sensitive personal data. 
§ ACKNOWLEDGEMENTThis work was supported by a grant of the Romanian National Authority for Scientific Research and Innovation, CCCDI UEFISCDI, project number 9831[https://www.eurostars-eureka.eu/project/id/9831].
http://arxiv.org/abs/1702.08010v1
{ "authors": [ "Maria Iuliana Bocicor", "Arthur-Jozsef Molnar", "Cristian Taslitchi" ], "categories": [ "cs.OH", "D.2.2, J.3" ], "primary_category": "cs.OH", "published": "20170226094036", "title": "Preventing Hospital Acquired Infections Through a Workflow-Based Cyber-Physical System" }
jmkurdestany@gmail.comDepartment of Physics & Astronomy, University of Missouri, Columbia, MO 65211, USAMotivated by the current interest in the understanding of the Mott insulators away from half filling, observed in many perovskite oxides, we study the Mott metal-insulator transition (MIT) in the doped Hubbard-Holstein model using the Hartree-Fock mean field theory. The Hubbard-Holstein model is the simplest model containing both the Coulomb and the electron-lattice interactions, which are important ingredients in the physics of the perovskite oxides. In contrast to the half-filled Hubbard model, which always results in a single phase (either metallic or insulating), our results show that away from half filling, a mixed phase of metallic and insulating regions occurs. As the dopant concentration is increased, the metallic part progressively grows in volume, until it exceeds the percolation threshold, leading to percolative conduction. This happens above a critical dopant concentration δ_c, which, depending on the strength of the electron-lattice interaction, can be a significant fraction of unity. This means that the material could be insulating even for a substantial amount of doping, in contrast to the expectation that doped holes would destroy the insulating behavior of the half-filled Hubbard model. Our theory provides a framework for the understanding of the density-driven metal-insulator transition observed in many complex oxides. Mott metal-insulator transition in the Doped Hubbard-Holstein model Jamshid Moradi Kurdestany and S. Satpathy December 30, 2023 § INTRODUCTION It is well known that the half-filled Hubbard model is a Mott insulator<cit.> when the strength of the on-site Coulomb interaction U exceeds a critical value. Within the Hubbard model, the Mott insulating state can exist only at half filling, and just a single hole is supposed to destroy the antiferromagnetic insulating ground state, turning it into a ferromagnetic metal as suggested by the Nagaoka Theorem<cit.>, strictly true in the infinite U limit. Quite early on, the Mott insulator LaTiO_3 was thought to be a prototypical example of the Nagaoka Theorem, where the undoped LaTiO_3 is an antiferromagnetic insulator, as predicted for the half-filled Hubbard model, but both the antiferromagnetism as well as the insulating behavior are quickly destroyed with the introduction of a small number of holes via the addition of extra oxygen<cit.> or via Sr substitution (with as little as x ≈ 0.05 for La_1-xSr_xTiO_3)<cit.>. Indeed, a large number of perovskite oxides have since been found to turn into metals upon hole doping, but only after a substantial amount of hole concentration has been introduced into the system. At the same time, scanning tunneling microscopy images of these doped oxides show mixed phases at the nanoscale, meaning that there is no clear phase separation with a single boundary separating the two phases, but rather that the two phases break into intermixed nanoscale puddles. In addition, transport measurements follow percolative scaling laws with doping and temperature, further confirming the existence of the mixed phase<cit.>.From a theoretical point of view, there have been many studies of the doped Mott insulators<cit.>, largely for models in two dimensions (2D), because numerical methods such as Quantum Monte Carlo are more feasible there. 
However, the results vary depending on the methods used. In the 2D Hubbard model, results from quantum Monte Carlo calculations<cit.> found no evidence for phase separation, consistent with the “somewhat” exact results of Su<cit.>. However, other authors using the fixed-node quantum Monte Carlo method<cit.> or the Hartree-Fock mean-field approximation<cit.> have suggested phase separation in large regions of the parameter space. Phase separation at small doping levels was also found in the dynamical mean field calculation<cit.> and the variational cluster perturbation theory works <cit.>. There are far fewer studies of the phase separation for the Hubbard model in 3D, although the existence of the phase separation there was suggested by the early works of Visscher<cit.> in the 1970s. The recent Hartree-Fock calculations in 3D<cit.> and the dynamical mean-field theory (DMFT) work<cit.>, strictly valid for infinite dimensions, have found phase separation in a large region of parameter space, as did the work of Andriotis et al.<cit.>, who used the coherent-potential approximation and the Bethe lattice. Phase separation in the closely related t-J model has also been investigated because of its relevance to the cuprate superconductors. The phase separation has been reported for all values of J/t by several authors<cit.>, while some authors find it only for larger values of J/t <cit.>. There is thus a general consensus for the phase separation in the t-J model with a large J/t and the non-half-filled band, where the system separates into two regions, viz., an undoped antiferromagnetic region and a carrier-rich ferromagnetic region. All these theoretical works do not include the coupling of the lattice to the electrons, which is an important ingredient in the physics of many perovskite oxides, where a strong Jahn-Teller coupling plays a critical role in the behavior of the material. In this paper, we study the Hubbard-Holstein model with the Hartree-Fock method, which includes both the Coulomb interaction as well as the electron-lattice coupling. We study the energetics of the various magnetic phases including the paramagnetic and the spiral phase (which incorporates the AFM and FM phases as special cases) and compute the phase stability in the doped system near half filling. For a small number of dopants (electrons or holes), the system phase separates into an undoped antiferromagnetic insulator and a carrier-rich, ferro or spiral magnetic, metallic phase. As the dopant concentration is increased, the metallic part grows in volume, and eventually, at a critical dopant concentration, the percolation threshold is reached and the system becomes a conductor. The critical concentration for this percolative Mott metal-insulator transition (MIT) is studied for varying interaction parameters, and the theoretical results are connected with the existing experiments in the literature. § MODEL We consider the Hubbard-Holstein model for a cubic latticeH = ∑_⟨ ij ⟩σ t_ij (c_iσ^† c_jσ + H.c.) 
+U∑_in_i↑ n_i↓+ ∑_i(1/2KQ^2_i - gQ_i n_i)-μ∑_i σn_iσ, which contains both the Coulomb interaction and the electron-lattice coupling terms. Here c_iσ^† is the electron creation operator at site i with spin σ, n_iσ = c^†_iσc_iσ is the number operator, t_ij is the hopping amplitude between nearest-neighbor sites denoted by ⟨ ij ⟩, U is the onsite Coulomb repulsion, Q_i is the lattice distortion at site i, K and g are, respectively, the stiffness and the electron-lattice coupling constants, and μ is the chemical potential that controls the carrier concentration. Taking the nearest-neighbor hopping integral as t_ij = -t, there are two parameters in the Hamiltonian, viz., U/t and λ≡ g^2/(KW), where W = 12 |t| is the band width and λ is the effective electron-lattice coupling strength. Note that we have considered the static Holstein model<cit.>, which contains a simpler version of the local lattice interaction such as the Jahn-Teller interaction, and, in addition, it does not contain any phonon momentum dependence.The key problem to study is the energy of the ground state and the stability of the various phases as a function of the carrier concentration away from half filling. Both magnetic (ferro, antiferro, or spiral) as well as non-magnetic phases are considered. In fact, all these solutions are special cases of the spiral phase, which is conveniently described in terms of a site-dependent local spin basis set described by the unitary transformation<cit.> d^†_iσ =∑_σ^'(e^-iσ⃗·α⃗_⃗i⃗/2)_σσ^' c^†_iσ^', where α⃗_⃗i⃗ is the site-dependent spin rotation angle. The spiral phase is described by α⃗_⃗i⃗=(q⃗·R⃗_⃗i⃗) x̂, where x̂ is the spin rotation axis, R⃗_i is the site position, and q⃗≡ (q_x, q_y, q_z) is the modulation wave vector of the spiral state. The ferro, para, as well as the antiferromagnetic states, considered in this work, are all special cases of the spiral state. Explicitly, q⃗ = 0 for the ferro or paramagnetic state, while it is π (1, 1, 1) for the Néel antiferromagnetic state. In the new basis, the Hamiltonian (<ref>) remains unchanged except for the first term, which becomes H_ke=∑_⟨ ij ⟩, σσ^'(t^σσ^'_ij d^†_iσd_jσ^'+H.c.), where the hopping is now spin-dependent, t^σσ^'_ij = (e^iq⃗· (R⃗_⃗i⃗-R⃗_⃗j⃗)σ_x/2)_σσ^' t_ij, and in the remaining terms in (<ref>), the number operators are redefined to mean n_iσ = d^†_iσd_iσ. Making the Bloch transformation into momentum space, d^†_k⃗σ = (1/√(N))∑_i e^ik⃗·R⃗_⃗i⃗ d^†_iσ, and using the Hartree-Fock approximation, n_1 n_2 = ⟨ n_1⟩ n_2 + ⟨ n_2⟩ n_1 - ⟨ d_1^† d_2⟩ d_2^† d_1 - ⟨ d_2^† d_1⟩ d_1^† d_2 - ⟨ n_1⟩⟨ n_2⟩ + ⟨ d_1^† d_2⟩⟨ d_2^† d_1⟩, we get the quasi-particle Hamiltonian H(k⃗) = [ T_1(k⃗)+U⟨ n_↓⟩-μ , -T_2(k⃗) - U⟨ d_↓^† d_↑⟩ ; -T_2(k⃗) - U⟨ d_↑^† d_↓⟩ , T_1(k⃗)+U⟨ n_↑⟩-μ ], where T_1(k⃗) = -2t[ cos(k_x a) cos(q_x a / 2) + cos(k_y a) cos(q_y a / 2) + cos(k_z a) cos(q_z a / 2)], T_2(k⃗) is the same as T_1(k⃗) except that all cosine functions are replaced by sines, only the nearest-neighbor hopping t_ij = -t has been kept in the original Hamiltonian (the unit of energy is set by t = 1), and the expectation values ⟨ d^†_σ d_σ^'⟩ are to be determined self-consistently. Note that the exact form of H_k would depend on the spin rotation axis α⃗ in the spiral phase (here chosen along x̂). However, the final results should not depend on this choice as there is no coupling between the space and the spin coordinates. We also find from direct calculations that the exchange terms U⟨ d_σ^† d_-σ⟩ appearing in Eq. (<ref>) contribute very little to the total energy. 
This contribution would be exactly zero if the spins did not mix, i.e., if the density matrices ρ_{σσ'} ≡ ⟨d_σ^† d_{σ'}⟩ were diagonal in spin space. The total energy per site is given by E(q⃗) = 1/N ∑^μ_{k⃗σ} ε_{k⃗σ} - U⟨n_↑⟩⟨n_↓⟩ + U⟨d_↑^† d_↓⟩⟨d_↓^† d_↑⟩ - g^2 n^2/2K, where ε_{k⃗σ} are the eigenvalues of the Hamiltonian in Eq. (<ref>), the second and the third terms correct for the double counting of the Coulomb energy, and the last term is the lattice energy gain at each site, obtained from minimizing the lattice energy, ∂E/∂Q = 0, in Eq. (<ref>). The chemical potential is related to the number of electrons by the expression N^{-1} ∑_{k⃗σ} θ(μ - ε_{k⃗σ}) = n, N being the number of lattice sites. For a fixed value of doping δ = 1-n, where n is the total number of electrons per lattice site, we have minimized the total energy E(q⃗) numerically as a function of the spiral vector q⃗ by varying each component between 0 and 2π. The minimum yields the ground state. All Brillouin zone integrations were performed with 1000 k-points. We restrict ourselves to the hole-doping region n ≤ 1 without loss of generality, since we have electron-hole symmetry in the problem.

§ RESULTS

To determine the phase diagram, we calculated the ground-state energy of the system according to Eq. (<ref>) for the given input parameters n, U, and λ. Figure (<ref>) shows a typical plot of the ground-state energy per lattice site as a function of the hole concentration δ for different magnetic phases. As seen from the figure, the ground state is antiferromagnetic (AF) at half filling (δ = 0), in agreement with the standard result for the Hubbard model. With increasing hole concentration δ, the system first turns into a spiral (S) state, then into a ferromagnetic (F) state, and eventually into the paramagnetic (P) state. Note that we have considered the spiral state in Eq. (<ref>), which is a spin density wave (SDW) state with the modulation wave vector q⃗, but not the charge density wave (CDW) state, which is a higher-energy state and is not expected to occur in the parameter regime in which we are working. The CDW state is difficult to incorporate within our calculation, as it requires a supercell of arbitrary size depending on the modulation wave vector of the CDW. However, we can study the CDW in a special case, viz., where the modulation is q⃗ = (π, π, π), in which case we have two sites in the unit cell of the crystal and we can allow for both charge and spin disproportionation between the two sublattices. Results of this calculation are also shown in Fig. (<ref>) as crosses, and they go over to the single-site results, indicating the absence of any CDW for this wave vector. We note further that the CDW state could be favored when the electron-lattice interaction is strong. We can estimate the condition for this by considering the energy of the charge-disproportionated state (a special case of the CDW) for the half-filled Hubbard-Holstein model and comparing it with the energy of the state without any charge disproportionation.
In the former case, the charges on the two sublattices are 1 ± η (η ≤ 1 is the charge disproportionation amplitude), and the total energy would be E = -(g^2/2K) [(1+η)^2 + (1-η)^2] + U η. The first term here is the energy gain due to the lattice interaction, the second term is due to the fact that η electrons are forced to occupy the upper Hubbard band, and we have neglected the kinetic energy difference in order to get a simple estimate. It immediately follows from this expression that such a CDW state would be favorable if U/(W λ) ≤ 1. For the parameter regime relevant for the oxides, this condition is not satisfied, so that it is reasonable to omit the CDW state, which we have not considered in our work.

Fig. (<ref>) shows the calculated phase diagram. For the half-filled case, there is perfect Fermi surface nesting [ε(k⃗_F) = ε(k⃗_F + q⃗_n) = 0, where q⃗_n = π(1, 1, 1) is the nesting vector], which leads to an antiferromagnetic insulator for any value of U. As we move away from half filling, perfect nesting is lost and a critical value U_c is needed for the onset of magnetic order. Below a certain hole doping δ, the system goes from the paramagnetic state to a spiral state, and eventually to the ferromagnetic state, as U is increased, while for a larger value of δ, the system goes directly from the paramagnetic to the ferromagnetic state. Fig. (<ref>) shows the calculated energy as a function of the spiral wave vector for three different parameter sets. Note that for the paramagnetic solution corresponding to U/W = 0.3, the energy is independent of the spiral wave vector, since the magnetic moment is zero.

Para-Ferro phase boundary – The boundary between the paramagnetic and the ferromagnetic phases in the Hubbard model (Fig. <ref>) can be understood by taking a model density of states and applying the Stoner criterion for ferromagnetic instability. We consider the sinusoidal density of states for each spin, ρ(ε) = (π/2W) sin(πε/W) for 0 < ε < W and zero otherwise, of bandwidth W. The total energy E is a sum of the band energy, the Coulomb energy, and the lattice energy, which is immediately obtained from a direct integration to yield E(n,m) = W/2π [√(1-x^2) - x cos^{-1} x + √(1-y^2) - y cos^{-1} y] + U/4 (n^2 - m^2) - λ n^2, where x = 1-n-m, y = 1-n+m, n = n_↑ + n_↓ is the number of electrons, and m = n_↑ - n_↓ is the spin polarization. The Fermi energies for the up and down spins are, respectively, ε_F↑ = π^{-1} W cos^{-1} x and ε_F↓ = π^{-1} W cos^{-1} y. The onset of ferromagnetism is determined from the Stoner criterion U ρ(ε_F) ≥ 1, where ε_F = π^{-1} W cos^{-1}(1-n) is the Fermi energy of the paramagnetic phase, while the spin polarization is determined by the minimization of the energy, Eq. (<ref>), as a function of the polarization m. The Stoner criterion leads to the equation of the para-ferro transition line, δ = 1-n = √(1 - (2W/πU)^2), which is plotted as a dotted line in Fig. (<ref>) and reproduces the trend found from the full solution of the Hubbard model for the cubic lattice. It is readily seen from Eq. (<ref>) that for Coulomb interactions below the critical value U_c = 2W/π, the system is paramagnetic for all values of the hole concentration δ.

Percolative metal-insulator transition – Returning to our original Hubbard-Holstein model, as seen from Fig. (<ref>), the ground-state energy is not everywhere convex, which indicates a phase separation; this is seen for small doping near half filling. At half filling, we have an antiferromagnetic insulator.
As holes are introduced, the system phase separates into two regions: one is the antiferro insulating state with hole concentration zero, and the second is a spiral or ferro phase (depending on the strength of U) with hole concentration δ^*. As δ increases, so does the volume of the metallic fraction. When the doping exceeds a certain threshold δ_c, given by percolation theory, the metallic regions form a percolative network and the system conducts. The fraction of the two phases can be obtained from the standard Maxwell construction, which is illustrated for the case of U/W = 2 in Fig. (<ref>). If v_m (v_i) is the volume fraction of the substance in the metallic (insulating) phase in the mixed-phase region (0 < δ < δ^*), then we have the two equations v_m + v_i = 1 and v_m δ^* = δ, which means that the metallic volume fraction increases linearly with the hole concentration, i.e., v_m = δ/δ^*. The hole concentration δ^* separates the mixed-phase region from the single-phase region and depends on the Hamiltonian parameters, as seen, e.g., from Fig. (<ref>); it must be calculated from the total energy curve for each set of parameters from a Maxwell construction.

The Maxwell construction indicates phase separation into two separate regions consisting of single phases, separated by a single boundary. However, in actual solids one does not encounter such clear phase separation; rather, a mixed phase usually results, where the two phases are intermixed on the nanoscale. There are many reasons why a mixed phase could be more favorable. For example, the presence of a small amount of charged impurities because of unintentional doping could cause a deviation from charge neutrality of the two components and would impede the formation of the phase separation due to the large cost in Coulomb energy. Thus one would encounter a nanoscale inhomogeneous phase (or mixed phase) with intermixed metallic and insulating components (Coulomb-frustrated phase separation)<cit.>. It has also been suggested that the mixed phase could even originate for kinetic reasons, i.e., from self-organized inhomogeneities resulting from a strong coupling between electronic and elastic degrees of freedom<cit.>. A large number of experiments point to the existence of mixed phases in the oxide materials, including transport results and scanning tunneling microscopy images.<cit.>

The percolation threshold v_c, beyond which the metallic regions touch and percolative conduction begins, depends on the specific model used in the percolation theory, but is typically about v_c ≈ 0.30. For example, in the percolation model where the metallic region consists of randomly packed, overlapping spheres of radius r in an insulating matrix, the critical volume fraction of the spheres for the onset of percolation is v_c ≈ 0.29 and is independent of r<cit.>. On the other hand, for the site percolation problem in the cubic lattice, the percolation threshold is about v_c ≈ 0.31. The site percolation thresholds have long been well known,<cit.> but are summarized in Table I for ready reference. We have used the value v_c = 0.3 in our calculations, which is similar to the site percolation result for the cubic lattice. Percolative conduction occurs when the metallic volume fraction exceeds v_c, i.e., δ/δ^* = v_m > v_c, or δ > δ_c = v_c δ^*, where δ^* is the critical concentration beyond which the system turns into a single-phase metal, which is either ferromagnetic or in the spin spiral state depending on the strength of U/W (see Fig. (<ref>)).
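The Maxwell construction and the percolation criterion are easy to automate once E(δ) is in hand. The following sketch uses a deliberately toy, non-convex energy curve in place of the computed Hartree-Fock E(δ) (all numbers are illustrative); the common tangent anchored at the δ = 0 insulator is found by minimizing the chord slope.

```python
import numpy as np

# Toy stand-in for the computed ground-state energy per site E(delta).
delta = np.linspace(1e-4, 0.6, 6000)
E0 = 0.0                                # energy of the undoped AF insulator
E = delta**3 - 0.6*delta**2             # deliberately non-convex near delta = 0

# Common-tangent (Maxwell) construction anchored at (0, E0):
# delta* is the doping that minimizes the chord slope (E - E0)/delta.
dstar = delta[np.argmin((E - E0)/delta)]

v_c = 0.3                               # percolation threshold adopted in the text
delta_c = v_c*dstar                     # onset of percolative conduction
v_m = np.clip(delta/dstar, 0.0, 1.0)    # metallic volume fraction v_m = delta/delta*
print(f"delta* = {dstar:.3f}  ->  delta_c = v_c*delta* = {delta_c:.3f}")
```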
For the specific parameters used in Fig. (<ref>), the full metallic phase for δ > δ^* is ferromagnetic; for intermediate values of δ between 0 and δ^*, phase separation occurs between the AFI half-filled (δ = 0) phase and the FM metallic phase with carrier concentration δ^*. Fig. (<ref>) summarizes the phase diagram showing the MIT boundary. The system continues to remain an AF insulator until the dopant concentration (electrons or holes) exceeds the critical value δ_c.

Effect of electron-lattice coupling – A finite value of the electron-lattice coupling in the Hubbard-Holstein model does not change the relative energies of the various phases for a fixed concentration n, as already noted, since it alters the energy of each phase equally (see Eqs. (<ref>) and (<ref>)). The presence of charge disproportionation or a CDW (n varying from site to site) would change the phase diagram; however, as we have already argued at the beginning of this Section, for parameters relevant to the oxides the CDW phase is unlikely to occur, and we have not considered it in this work. Thus the various phase regions (AF, F, P, or S) in the phase diagram, Fig. (<ref>), remain unchanged. However, the curvatures of the ground-state total energy as a function of n or δ, as in Fig. (<ref>), change, leading to phase separation regions which now change with λ, and therefore so do the quantities δ_c and δ^*. This is clearly seen from Fig. <ref>, where δ^* increases as the electron-lattice coupling strength λ is increased. Fig. (<ref>) shows the critical doping δ_c as a function of the electron-lattice coupling strength λ for several values of U/W. As seen from the figure, the larger the value of λ, the higher is the dopant concentration δ needed for the transition into the metallic state. Finally, Fig. (<ref>) shows the phase diagram in the Hubbard-Holstein model for a specific value of λ.

To make connections with the experiments, we summarize the measured critical carrier density for the MIT in several perovskite oxides from the existing literature in Table II. As these results indicate, the critical carrier concentration δ_c needed to transform the insulating phase into the metallic phase is a significant fraction of unity, starting from 0.05 for LaTiO_3 to as high as 0.5 for YVO_3. However, other than a few systems where δ_c is as high as 0.5, for most compounds shown in Table II it is between 0.05 and 0.2, which is the typical value of δ_c predicted by our theory. Fig. (<ref>) shows the experimental conductivity behavior<cit.> of the doped titanates RTiO_3 plotted against the bandwidth of the material, as well as the same calculated from our theory. Although inclusion of the detailed interactions in the Hamiltonian may be necessary for a quantitative description of a specific compound, the general trend for the onset of the MIT is well described within the Hubbard-Holstein model. As seen from Fig. (<ref>), for a large bandwidth (U/W less than a critical value), the system is a metal for all doping levels, and as U/W is increased beyond the critical value, the critical carrier concentration for the MIT increases, roughly linearly. This agrees with the experimental data, where Katsufuji et al.<cit.> have plotted the inverse bandwidth vs. the conduction behavior for a large number of samples with different carrier concentrations in the titanates.
As was argued in Ref. <cit.>, the magnitude of the Coulomb U may be expected to be relatively unchanged for the R_1-xCa_xTiO_3+y/2 series, allowing a direct comparison of the trends seen in theory vs. experiments. One point to note is that Eq. (<ref>) puts an upper limit on the critical doping, δ_c ≈ 0.3, since δ^* cannot exceed one and v_c ≈ 0.3, which is what is observed for most of the samples in Table II. For carrier concentrations δ as high as 0.5, as is the case for some of the samples, the crystal and electronic structures are likely changed significantly, making the model less applicable for such systems. In our theory, we have assumed that percolative conduction occurs in the mixed phase, where the two components (metallic and insulating) occur randomly, so that percolation theory applies. If the two components do not occur randomly, but rather there is a tendency toward coalescing of the components, this would increase the critical value δ_c, as a larger volume fraction of the metallic component will be needed before a percolation path for conduction forms. Note that the Hartree-Fock approximation, due to its mean-field nature, does omit the effect of fluctuations on the phase separation. It has been shown that such quantum fluctuations can indeed modify the magnetic phase boundary within the Hubbard model<cit.>. However, the qualitative similarity of our theoretical results with the experiments (as seen from Fig. (<ref>)) suggests that the Hartree-Fock results should contain the qualitative physics of the problem, while the fluctuation effects will likely alter the predicted critical doping quantitatively. The effect of the fluctuations on the phase separation remains an open question for future study.

§ SUMMARY

In summary, we studied the phase diagram and energetics of the Hubbard-Holstein model using the Hartree-Fock method. For a wide range of the Hamiltonian parameters, we found the existence of a mixed phase, consisting of an undoped component which is an antiferro insulator and a carrier-rich metallic phase, which is either ferromagnetic or spiral magnetic. As the carrier concentration (electrons or holes) increases with doping, the metallic portion slowly grows, forming isolated islands in an insulating matrix. As the volume fraction of the metallic islands increases with carrier doping, they eventually form a percolative conducting network, and the material conducts beyond the critical dopant concentration δ_c. This happens for δ_c typically between zero and 0.2 or so, in general agreement with the experimental results. We furthermore showed that the electron-lattice interaction favors the insulating phase with respect to the metallic phase, and that the critical doping value increases along with the strength of the electron-lattice coupling. The general trends for the critical doping concentration for the MIT predicted by our theory agree with the existing experimental results for the hole-doped perovskite oxides.

This research was supported by the U.S. Department of Energy, Office of Basic Energy Sciences, Division of Materials Sciences and Engineering under Grant No. DE-FG02-00ER45818.

Mott N. F. Mott, The Basis of the Electron Theory of Metals, with Special Reference to the Transition Metals, Proc. R. Soc. London Ser. A 62, 416 (1949).Nagaoka Y. Nagaoka, Ferromagnetism in a Narrow, Almost Half-Filled s Band, Phys. Rev. 147, 392 (1966).Taguchi Y. Taguchi, T. Okuda, M. Ohashi, C. Murayama, N. Mori, Y. Iye, and Y.
Tokura, Critical behavior in LaTiO_3+δ/2 in the vicinity of antiferromagnetic instability, Phys. Rev. B 59, 7917 (1999).Tokura Y. Tokura, Y. Taguchi, Y. Okada, Y. Fujishima, T. Arima, K. Kumagai, and Y. Iye, Filling dependence of electronic properties on the verge of metal Mott-insulator transition in Sr_1-xLa_xTiO_3, Phys. Rev. Lett. 70, 2126 (1993). SWCheong M. Uehara, S. Mori, C. H. Chen and S.-W. Cheong, Percolative phase separation underlies colossal magnetoresistance in mixed-valent manganites, Nature 399, 560 (1996).Schneider S. Schintke and W.-D. Schneider, Insulators at the ultrathin limit: electronic structure studied by scanning tunnelling microscopy and scanning tunnelling spectroscopy, J. Phys. Condens. Matter. 16R49–R81 (2004).Mydosh M. Fath, S. Freisem, A. A. Menovsky, Y. Tomioka, J. Aarts and J. A. Mydosh, Spatially Inhomogeneous metal-insulator transition in Doped Manganites, Science 285, 1540 (1999).Kotliar96 H. Kajueter, G. Kotliar, and G. Moeller, Doped Mott insulator: Results from mean-field theory, Phys. Rev. B 53, 16214 (1996).XGWen P. A. Lee, N. Nagaosa, and X.-G. Wen, Doping a Mott insulator: Physics of high-temperature superconductivity, Rev. Mod. Phys. 78, 17 (2006).Kotliar86 G. Kotliar and A. E. Ruckenstein, New Functional Integral Approach to Strongly Correlated Fermi Systems: The Gutzwiller Approximation as a Saddle Point, Phys. Rev. Lett. 57, 1362 (1986).Onoda S. Onoda and M. Imada, Filling-Control metal-insulator transition in the Hubbard Model Studied by the Operator Projection Method, J. Phys. Soc. Jpn. 70, 3398 (2001).Tremblay2009 M. Balzer, B. Kyung, D. Sénéchal, A.-M. S. Tremblay, and M. Potthoff, First-order Mott transition at zero temperature in two dimensions: Variational plaquette study, Europhys. Lett. 85, 17002 (2009).Dagotto2 A. Moreo, D. Scalapino, and E. Dagotto, Phase separation in the Hubbard model, Phys. Rev. B 43, 11442 (1991).BeccaF. Becca, M. Capone and S. Sorella, Spatially homogeneous ground state of the two-dimensional Hubbard model, Phys. Rev. B 62, 12700 (2000).GangG. Su, Phase separation in the two-dimensional Hubbard model, Phys. Rev. B 54, R8281 (1996).CosentiniA. C. Cosentini, M. Capone, L. Guido and G. B. Bachelet, Phase separation in the two-dimensional Hubbard model: A fixed-node quantum Monte Carlo study, Phys. Rev. B 58, R14685 (1998).Arrigoni E. Arrigoni and G. C. Strinati, Doping-induced incommensurate antiferromagnetism in a Mott-Hubbard insulator, Phys. Rev. B 44, 7455 (1991).ZitzlerR. Zitzler, Th. Pruschke, and R. Bulla, Magnetism and phase separation in the ground state of the Hubbard model, Eur. Phys. J. B 27, 473 (2002).AichhornM. Aichhorn, E. Arrigoni, M. Potthoff, and W. Hanke, Antiferromagnetic to superconducting phase transition in the hole- and electron-doped Hubbard model at zero temperature, Phys. Rev. B 74, 024508 (2006).Visscher P. Visscher, High-Temperature thermodynamics of the Hubbard model: An exact numerical solution, Phys. Rev. B 10, 932 (1974); Phase separation instability in the Hubbard model, ibid, 943 (1974).AndriotisA. N. Andriotis, E. N. Economou, Qiming Li, and C. M. Soukoulis, Phase separation in the Hubbard model, Phys. Rev. B 47, 9208 (1993).Emery V. Emery, S. Kivelson, and H. Lin, Phase separation in the t-J model, Phys. Rev. Lett. 64, 475 (1990). HellbergC. S. Hellberg and E. Manousakis, Phase Separation at all Interaction Strengths in the t-J Model, Phys. Rev. Lett. 78, 4609 (1997); J. H. Han, Q.-H. Wang, and D.-H. 
Lee, Antiferromagnetism, Stripes and Superconductivity in the t-J model with coulomb interaction, Int. J. Mod. Phys. B 15, 1117 (2001).GimmT.-H. Gimm and S.-H. Suck Salk, Phase separation based on a U(1) slave-boson functional integral, Phys. Rev. B 62, 13930 (2000).LuchiniW. O. Putikka and M. U. Luchini, Limits on phase separation for two-dimensional strongly correlated electrons, Phys. Rev. B 62, 1684 (2000).Shih C. T. Shih, Y. C. Chen and T. K. Lee, Phase separation of the two-dimensional t-J model, Phys. Rev. B 57, 627 (1998).Igoshev P. A. Igoshev, M. A. Timirgazin, V. F. Gilmutdinov, A. K. Arzhnikov, and V. Yu. Irkhin, Spiral magnetism in the single-band Hubbard model: the Hartree-Fock and slave-boson approaches, J. Phys.: Condens. Matter 27, 446002 (2015).Balents C.Hou Yee and L. Balents, Phase Separation in Doped Mott Insulators, Phys. Rev. X 5, 021007 (2015).Holstein T. Holstein, Studies of polaron motion, Ann. Phys. 8, 325 (1959).Millis P. Werner and A. Millis, Doping-driven Mott transition in the one-band Hubbard model, Phys. Rev. B 75, 085108 (2007).Raimondi R. Raimondi, C. Castellani, M. Grilli, Y. Bang, and G. Kotliar, Charge collective modes and dynamic pairing in the three band Hubbard model. II. Strong-coupling limit, Phys. Rev. B 47, 3331 (1993).Nagaev E.L. Nagaev, Phase separation in high-temperature superconductors and related magnetic systems, Phys.-Usp. 38, 497 (1995).Dagotto E. Dagotto, Complexity in Strongly Correlated Electronic Systems, Science 309, 257 (2005).Jeon D. H. Jeon, J. H. Nam, C.-J. Kim, Microstructural Optimization of Anode-Supported Solid Oxide Fuel Cells by a Comprehensive Microscale Model, J. Electrochem. Soc. 153 (2) A406–A417 (2006).Costamagna P. Costamagna, M. Panizza, G. Cerisola, A. Barbucci, Effect of composition on the performance of cermet electrodes. Experimental and theoretical approach, Electrochimica. Acta 47 1079–1089 (2002).SundeS. Sunde, Monte Carlo Simulations of Polarization Resistance of Composite Electrodes for Solid Oxide Fuel Cells, J. Electrochem. Soc. 143 1930–1939 (1996).Murthy G. Kotliar, S. Murthy, and M. J. Rozenberg, Compressibility Divergence and the Finite Temperature Mott Transition, Phys. Rev. Lett. 89, 046401 (2002).Capone2 M. Capone, G. Sangiovanni, C. Castellani, C. Di Castro, and M. Grilli, Phase Separation Close to the Density-Driven Mott Transition in the Hubbard-Holstein Model, Phys. Rev. Lett. 92, 106401 (2004).HRK V. B. Shenoy, T. Gupta, H. R. Krishnamurthy, and T. V. Ramakrishnan, Coulomb Interactions and Nanoscale Electronic Inhomogeneities in Manganites, Phys. Rev. Lett. 98, 097201 (2007).KHAHNK. H. Ahn, T. Lookman, and A. R. Bishop, Strain-induced metal-insulator phase coexistence in perovskite manganites, Nature (London) 428, 401 (2004).Loa I. Loa, P. Adler, A. Grzechnik, K. Syassen, U. Schwarz, M. Hanfland, G. Kh. Rozenberg, P. Gorodetsky, and M. P. Pasternak, Pressure-Induced Quenching of the Jahn-Teller Distortion and Insulator-to-Metal Transition in LaMnO_ 3, Phys. Rev. Lett. 87, 125501 (2001).Baldini M. Baldini, V. V. Struzhkin, A. F. Goncharov, P. Postorino, and W. L. Mao, Persistence of Jahn-Teller Distortion up to the Insulator to Metal Transition in LaMnO_ 3, Phys. Rev. Lett. 106, 066402 (2011). Ramos A. Y. Ramos, N. M. Souza-Neto, H. C. N. Tolentino, O. Bunau, Y. Joly, S. Grenier, J.-P. Itíe, A.-M. Flank, P. Lagarde, and A. Caneiro, Bandwidth-driven nature of the pressure-induced metal state of LaMnO_3 , Europhys. Lett. 96, 36002 (2011). Pike See, e.g., G. E. Pike and C. H. 
Seager, Percolation and conductivity: A computer study, Phys. Rev. B 10, 1421 (1974).Efros See, for example, A. L. Efros, Physics and Geometry of Percolation Theory, (MIR publications, Moscow, 1982), p. 128 and references therein.YcaVO31993 M. Kasuya, Y. Tokura, T. Arima, H. Eisaki, and S. Uchida, Optical spectra of Y_1-xCa_xVO_3: Change of electronic structures with hole doping in Mott-Hubbard insulators, Phys. Rev. B47, 6197 (1993).LaSrVO32006 J. Fujioka, S. Miyasaka and Y. Tokura, Doping Variation of Orbitally Induced Anisotropy in the Electronic Structure of La _1-xSr_xVO_3, Phys. Rev. Lett. 97, 196401 (2006).YCdVO3 A. A. Belik, R. V. Shpanchenko, and E. Takayama-Muromachi, Carrier-doping metal-insulator transition in solid solutions of CdVO_3-YVO_3, J. Magn. Magn. Mater. 310, e240 (2007).SmCaNiO3 P.-H. Xiang, S. Asanuma, H. Yamada, I. H. Inoue, H. Akoh, and A. Sawa, Room temperature Mott metal-insulator transition and its systematic control in Sm_1−xCa_xNiO_3 thin films, Appl. Phys. Lett. 97, 032114 (2010).RCaTiO3 T. Katsufuji, Y. Taguchi, and Y. Tokura, Transport and magnetic properties of a Mott-Hubbard system whose bandwidth and band filling are both controllable: R_1-xCa_xTiO_3+y/2, Phys. Rev. B 56, 10145 (1997).YCaTiO3 Y. Taguchi, Y. Tokura, T. Arima, and F. Inaba, Change of electronic structures with carrier doping in the highly correlated electron system Y_1-xCa_xTiO_3, Phys. Rev. B 48, 511 (1993).LaCaMnO3 B. B. Van Aken, O. D. Jurchescu, A. Meetsma, Y. Tomioka, Y. Tokura, and T. T. M. Palstra, Orbital-Order-Induced metal-insulator transition in La_1-xCa_xMnO_3, Phys. Rev. Lett. 90, 066403 (2003). LaSrMnO3 A. Urushibara, Y. Moritomo, T. Arima, A. Asamitsu, G. Kido, and Y. Tokura, Insulator-metal transition and giant magnetoresistance in La_1-xSr_xMnO_3, Phys. Rev. B 51, 14103 (1995).1PrCaMnO3 M. R. Lees, J. Barratt, G. Balakrishnan, D. McK. Paul, and M. Yethiraj, Influence of charge and magnetic ordering on the insulator-metal transition in Pr_1-xCa_xMnO_3, Phys. Rev. B 52, R14303(R) (1995).2PrCaMnO3 Y. Tomioka, A. Asamitsu, H. Kuwahara, Y. Moritomo, and Y. Tokura, Magnetic-field-induced metal-insulator phenomena in Pr_1-xCa_xMnO_3 with controlled charge-ordering instability, Phys. Rev. B 53, R1689(R) (1996).NdSrMnO3 H. Kawano, R. Kajimoto, H. Yoshizawa, Y. Tomioka, H. Kuwahara, and Y. Tokura, Magnetic Ordering and Relation to the metal-insulator transition in Pr_1-xSr_xMnO_3 and Nd_1-xSr_xMnO_3 with x ≈ 1/2, Phys. Rev. Lett. 78, 4253 (1997).fluctu1 P. G. J. van Dongen, Thermodynamics of the extended Hubbard model in high dimensions, Phys. Rev. Letts. 67, 757 (1991).fluctu2 A. N. Tahvildar-Zadeh, J. K. Freericks, and M. Jarrell, Magnetic phase diagram of the Hubbard model in three dimensions: The second-order local approximation, Phys. Rev. B 55, 942 (1997).
http://arxiv.org/abs/1703.02886v2
{ "authors": [ "Jamshid Moradi Kurdestany", "S. Satpathy" ], "categories": [ "cond-mat.str-el" ], "primary_category": "cond-mat.str-el", "published": "20170225174340", "title": "Mott metal-insulator transition in the Doped Hubbard-Holstein model" }
Magnetic massive stars comprise approximately 10% of the total OB star population. Modern spectropolarimetry shows these stars host strong, stable, large-scale, often nearly dipolar surface magnetic fields of 1 kG or more. These global magnetic fields trap and deflect outflowing stellar wind material, forming an anisotropic magnetosphere that can be probed with wind-sensitive UV resonance lines. Recent HST UV spectra of NGC 1624-2, the most magnetic O star observed to date, show atypically unsaturated P-Cygni profiles in the Civ resonant doublet, as well as a distinct variation with rotational phase. We examine the effect of non-radial, magnetically-channeled wind outflow on P-Cygni line formation, using a Sobolev Exact Integration (SEI) approach for direct comparison with HST UV spectra of NGC 1624-2. We demonstrate that the addition of a magnetic field desaturates the absorption trough of the P-Cygni profiles, but further efforts are needed to fully account for the observed line profile variation. Our study thus provides a first step toward a broader understanding of how strong magnetic fields affect mass loss diagnostics from UV lines.

§ INTRODUCTION

Hot, luminous stars undergo mass loss through a steady outflow of supersonic material from the stellar surface. These radiatively driven stellar winds are best characterized through modeling UV resonance lines, which are sensitive to wind properties. Recent spectropolarimetric measurements have revealed that approximately 10% of massive stars host strong, stable, nearly dipolar magnetic fields with surface field strengths of approximately 1 kG or more (<cit.>). The magnetic field channels the flow of the stellar wind along its field lines, confining the wind within closed loops and reducing the overall mass loss (see Fig. <ref>). This results in a dynamic and structurally complex, so-called magnetosphere (<cit.>) with observational diagnostics in the optical, UV, and X-ray regimes (<cit.>, David-Uraz et al., these proceedings). Fig. <ref> shows HST UV spectra of NGC 1624-2, the most magnetic O star observed to date, compared with HD 93146 and HD 36861 (non-magnetic O stars of similar spectral type). In stark contrast to the strong line saturation observed in non-magnetic O stars, NGC 1624-2 and other magnetic O-type stars such as HD 57682, HD 191612, and CPD -28 2561 show atypically unsaturated P-Cygni profiles in the Civ resonant doublet, as well as a distinct dependence on rotational phase (<cit.>). Therefore, the development of specialized analytical tools is necessary to interpret these profiles and derive the associated wind properties.

§ THE MAGNETICALLY CONFINED WIND

In general, the dipolar axis of magnetic OB stars is not aligned with the rotational axis. Therefore, the resulting magnetospheres produce rotationally modulated variability in wind-sensitive UV line profiles and significantly reduced mass-loss rates (<cit.>). P-Cygni line profiles can provide a powerful tool to probe the structure of these magnetospheres. A detailed investigation of the density and velocity structure of magnetically channeled winds has been carried out using magnetohydrodynamic (MHD) simulations (<cit.>). Although this work has provided unprecedented insight, these simulations remain computationally cumbersome and expensive.
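Before turning to the ADM model below, it is useful to recall the standard scalar diagnostic for such channeling. The sketch below evaluates the wind magnetic confinement parameter η* = B_eq²R*²/(Ṁv_∞) of ud-Doula & Owocki (2002) and the approximate Alfvén-radius fit of ud-Doula, Owocki & Townsend (2008), as we recall them from those cited works; all stellar parameters are round illustrative placeholders, not measured values for NGC 1624-2.

```python
# Illustrative placeholder stellar/wind parameters in cgs units.
B_eq = 5.0e3                        # equatorial surface field (G); polar field ~ 2*B_eq
R_sun = 6.957e10                    # solar radius (cm)
R_star = 10.0*R_sun                 # stellar radius (cm)
Mdot = 1.0e-6*1.989e33/3.156e7      # mass-loss rate (g/s), from 1e-6 Msun/yr
v_inf = 2.0e8                       # terminal wind speed (cm/s)

# Wind magnetic confinement parameter (ud-Doula & Owocki 2002):
eta_star = B_eq**2*R_star**2/(Mdot*v_inf)
# Approximate Alfven radius fit (ud-Doula, Owocki & Townsend 2008):
R_A = 0.3 + (eta_star + 0.25)**0.25   # in units of R_star
print(f"eta* = {eta_star:.3g},  R_A ~ {R_A:.2f} R*")
```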
The recently published Analytic Dynamical Magnetosphere (ADM) model (<cit.>) provides a simplified parametric prescription corresponding to a time-averaged picture of the normally complex magnetospheric structure previously derived through full MHD simulations. As shown in Fig. <ref>, outflowing material leaves the stellar surface and is channeled along the field lines. At the magnetic equator, the outflowing material ("upflow") from each hemisphere collides, producing a shock. Cooling is achieved through X-ray production, allowing the wind material to flow back toward the stellar surface ("downflow"). This model shows remarkable agreement with Hα and X-ray observations. However, as yet the ADM model has not been used to explain UV line variability, particularly with respect to the observed unsaturated absorption troughs of Civ lines. This provides the unique opportunity to conduct a thorough parameter study capable of accurately reproducing observed trends without the usual computational complexities associated with multi-dimensional MHD modeling.§ INITIAL RESULTS AND FUTURE APPLICATIONS We examine the effect of non-radial, magnetically channeled wind outflows on UV line formation through the development of synthetic UV wind-line profiles which implement the ADM formalism developed in <cit.>. We solve the equation of radiative transfer using a Sobolev Exact Integration (SEI) method (both in the optically thin and optically thick approximations, following <cit.>) to produce synthetic P-Cygni profiles for comparison with observational data. For simplicity, these preliminary models only implement the wind upflow component. We also limit these initial investigations to the magnetic "pole-on" and "equator-on" viewing angles.Initial results (see Fig. <ref>) show that significant desaturation of the UV wind-line absorption troughs, both in the magnetic "pole-on" and "equator-on" views, occurs naturally from the addition of a dipolar magnetic field at the stellar surface. This is in agreement with earlier studies, which used MHD simulations to synthesize UV line profiles (<cit.>) as well as observational data (see Fig. <ref>). This suggests that the desaturation is a direct result of the presence of a magnetic field. However, comparison with observational data also reveals a failure to reproduce the exact phase dependence of the lines, indicating the need for further investigation and additional modeling. In comparison to other computationally expensive MHD modeling techniques, the ADM formalism allows us to provide meaningful results that can be applied across a broader spectrum of magnetic massive stars. Future models will include both the wind upflow and downflow regions, as well as the addition of a “shock retreat” boundary to account for X-ray production in the wind. Further consideration of the effects of an approximated source function (as opposed to one that is solved self-consistently) also need to be explored. The detection and characterization of magnetic fields in massive stars currently relies heavily on spectropolarimetry, a powerful but expensive technique. Given these limitations, the future of the field of massive star magnetism might rely on indirect detection methods. Therefore, the development of robust UV diagnostics will be a critical step forward in understanding the circumstellar environment of massive stars, as well as the possible presence of an underlying field.[Bard & Townsend 2016]bardtownsend2016 Bard, C. & Townsend, R.H.D. 2016, MNRAS, 462.4, 3672[Fossati et al. 
2015]fossati+2015 Fossati, L., Castro, N., Schöller, M., Hubrig, S., Langer, N., Morel, T., Briquet, M., Herrero, A., Przybilla, N., Sana, H., et al. 2015, A&A, 582, A45[Grunhut et al. 2009]grunhut+2009 Grunhut, J.H., Wade, G.A., Marcolino, W.L.F., Petit, V., Henrichs, H.F., Cohen, D.H., Alecian, E., Bohlender, D., Bouret, J.C., Kochukhov, O., et al. 2009, MNRAS, 400.1, L94[Marcolino et al. 2012]marcolino+2012 Marcolino, W.L.F., Bouret, J.C., Walborn, N.R., Howarth, I.D., Nazé, Y., Fullerton, A.W., Wade, G.A., Hillier, D.J., & Herrero, A. 2015, MNRAS, 422.3, 2314[Marcolino et al. 2013]marcolino+2013 Marcolino, W.L.F., Bouret, J.C., Sundqvist, J.O., Walborn, N.R., Fullerton, A.W., Howarth, I.D., Wade, G.A., & ud-Doula, A. 2013, MNRAS, 431.3, 2253[Nazé et al. 2015]naze+2015 Nazé, Y., Sundqvist, J.O., Fullerton, A.W., ud-Doula, A., Wade, G.A., Rauw, G., & Walborn, N.R. 2015, MNRAS, 452.3, 2641[Owocki et al. 2016]owocki+2016 Owocki, S.P., ud-Doula, A., Sundqvist, J.O., Petit, V., Cohen, D.H., & Townsend, R.H.D. 2016, MNRAS, 462.4, 3830[Owocki & Rybicki 1985]owockirybicki1985 Owocki, S.P. & Rybicki, G.B. 1985, ApJ, 299, 265[Petit et al. 2013]petit+2013 Petit, V., Owocki, S.P., Wade, G.A., Cohen, D.H., Sundqvist, J.O., Gagné, M., Apellániz, J.M., Oksala, M.E., Bohlender, D.A., Rivinius, T., et al. 2013, MNRAS, 429.1, 398[Sundqvist et al. 2012]sundqvist+2012 Sundqvist, J.O., ud-Doula, A., Owocki, S.P., Townsend, R.H.D., Howarth, I.D., & Wade, G.A. 2012, MNRAS, 423.1, L21[Townsend & Owocki 2005]townsendowocki2005 Townsend, R.H.D. & Owocki, S.P. 2005, MNRAS, 357.1, 251[ud-Doula et al. 2008]uddoula+2008 ud-Doula, A., Owocki, S.P., & Townsend, R.H.D. 2008, MNRAS, 385.1, 97[ud-Doula et al. 2009]uddoula+2009 ud-Doula, A., Owocki, S.P., & Townsend, R.H.D. 2009, MNRAS, 392.3, 1022[ud-Doula & Owocki 2002]uddoulaowocki2002 ud-Doula, A. & Owocki, S.P. 2002,ApJ, 576.1, 413[Wade et al. 2011a]wade+2011 Wade, G.A., Grunhut, J., Gräfener, G., Howarth, I.D., Martins, F., Petit, V., Vink, J.S., Bagnulo, S., Folsom, C.P., Nazé, Y., et al. 2011, MNRAS, 419.3, 2459[Wade et al. 2011b]wade+2011b Wade, G.A., Howarth, I.D., Townsend, R.H.D., Grunhut, J.H., Shultz, M., Bouret, J.C., Fullerton, A., Marcolino, W., Martins, F., Nazé, Y., et al. 2011, MNRAS, 416.4, 3160[Wade et al. 2012]wade+2012 Wade, G.A., Apellániz, J.M., Martins, F., Petit, V., Grunhut, J., Walborn, N.R., Barbá, R.H., Gagné, M., García-Melendo, E., Jose, J., et al. 2012, MNRAS, 425.2, 1278[Wade et al. 2016]wade+2016 Wade, G. A., Neiner, C., Alecian, E., Grunhut, J.H., Petit, V., de Batz, B., Bohlender, D.A., Cohen, D.H., Henrichs, H.F., Kochukhov, O., et al. 2016, MNRAS, 456.1, 2
http://arxiv.org/abs/1702.08535v1
{ "authors": [ "Christiana Erba", "Alexandre David-Uraz", "Veronique Petit", "Stanley P. Owocki" ], "categories": [ "astro-ph.SR" ], "primary_category": "astro-ph.SR", "published": "20170227211753", "title": "New Insights into the Puzzling P-Cygni Profiles of Magnetic Massive Stars" }
Department of Chemistry, Rice University, Houston, TX, USA 77005-1892 Department of Chemistry, Rice University, Houston, TX, USA 77005-1892 Department of Physics and Astronomy, Rice University, Houston, TX, USA 77005-1892 Department of Chemistry, Rice University, Houston, TX, USA 77005-1892 Department of Physics and Astronomy, Rice University, Houston, TX, USA 77005-1892

Projected Hartree-Fock theory provides an accurate description of many kinds of strong correlation but does not properly describe weakly correlated systems. Coupled cluster theory, in contrast, does the opposite. It therefore seems natural to combine the two so as to describe both strong and weak correlations with high accuracy in a relatively black-box manner. Combining the two approaches, however, is made more difficult by the fact that the two techniques are formulated very differently. In earlier work, we showed how to write spin-projected Hartree-Fock in a coupled-cluster-like language. Here, we fill in the gaps in that earlier work. Further, we combine projected Hartree-Fock and coupled cluster theory in a variational formulation and show how the combination performs for the description of the Hubbard Hamiltonian and for several small molecular systems.

Projected Hartree-Fock as a Polynomial of Particle-Hole Excitations and Its Combination With Variational Coupled Cluster Theory

Gustavo E. Scuseria

§ INTRODUCTION

The successes of traditional single-reference coupled cluster theory<cit.> are manifold, and the method is rightly held as the gold standard of quantum chemistry. These successes, however, come with an important caveat: affordable coupled cluster calculations are generally only accurate for weakly correlated systems. For strongly correlated problems involving more than two strongly correlated electrons, coupled cluster theory is significantly less accurate and often fails badly.

Several techniques exist to address the strongly correlated regime. Among the simplest computationally is the use of symmetry-projected mean-field methods, to which we will refer generically as projected Hartree-Fock (PHF).<cit.> The idea is also simple: when electrons become strongly correlated, mean-field methods such as Hartree-Fock theory no longer provide even qualitatively reasonable descriptions of their behavior (which is, at heart, why coupled cluster theory fails). However, mean-field methods typically signal their own breakdown by artificially breaking some or even all symmetries of the problem. These symmetry-broken mean-field methods often contain valuable information. For example, unrestricted Hartree-Fock (UHF) correctly localizes the two electrons to separate nuclei in the dissociation of H_2 at the cost of good spin (and spatial) symmetry, while restricted Hartree-Fock (RHF) imposes spin symmetry and in doing so delivers unphysical results. Of course, the broken symmetries of UHF are not broken in the exact wave function; projecting out the symmetry-adapted component of the UHF wave function, as is done in spin-projected UHF (SUHF), restores those symmetries while retaining the advantages of UHF, and dissociates H_2 correctly. While SUHF in the form introduced by Löwdin as extended Hartree-Fock<cit.> is computationally cumbersome, it can be reformulated in a way which has mean-field computational scaling using integration over generalized symmetry coherent states.<cit.>

Much of our work in the past year has revolved around
combining these two approaches.<cit.>The main challenge is that the two techniques use fundamentally very different frameworks.Coupled cluster theory introduces a similarity-transformed Hamiltonian constructed via particle-hole excitation operators, and solves this similarity-transformed Hamiltonian in a subspace of the full Hilbert space.In contrast, PHF is a variational approach which minimizes the expectation value of a projected mean-field state with respect to the identity of the broken symmetry determinant from which the PHF wave function is built.Recently, we showed an analytic resummation of the SUHF wave function in terms of particle-hole excitations constructed out of a symmetry adapted RHF mean-field determinant.<cit.>While this is an important piece of the puzzle, we did not show any results for the combination of PHF and coupled cluster.This manuscript remedies that deficiency and fills in details missing from our earlier work. § THE SUHF POLYNOMIALLet us begin by proving the main contention of Ref. Qiu2016, that SUHF is a polynomial of double excitations when written in the RHF basis.While this result is novel, it was inspired by results obtained in Ref. Piecuch1996; see also Ref. Stuber2016.Conventionally, we would write the spin-projected UHF wave function in terms of an integration of rotation operators acting on the UHF determinant:|SUHF⟩ = 1/8π^2 ∫_0^2πdα ∫_0^π sin(β)dβ ∫_0^2πdγR |UHF⟩,where we have already specialized to the case of projecting a singlet spin, and have defined the spin rotation operator R = e^i αS_z e^i βS_y e^i γS_z.It is helpful to note that the integrations with respect to α and γ amount to projectors onto S_z eigenstates with eigenvalue 0, and that the UHF determinant is an eigenfunction of this projector, so we can write more simply|SUHF⟩ = 1/2P_S_z = 0 ∫_0^π sin(β)dβ e^i βS_y|UHF⟩. For our purposes, it will be convenient to write the UHF determinant as a Thouless transformation<cit.> from some RHF determinant: |UHF⟩ = e^T_1 + U_1|RHF⟩, T_1= ∑_ia t_i^a(c_a_↑^†c_i_↑ + c_a_↓^†c_i_↓), U_1= ∑_ia u_i^a(c_a_↑^†c_i_↑ - c_a_↓^†c_i_↓). Here and throughout, spatial orbitals i, j, k, …are occupied and a, b, c, …are virtual.Because the operator T_1 preserves spin, it commutes with the spin projection operator and we will disregard it in what follows so as to simplify the derivation.Note that a general Thouless transformation has two more pieces which mix ↑ and ↓ spin, but as these pieces yield generalized Hartree-Fock rather than UHF, we omit them here.With the UHF determinant represented as a Thouless transformation from the RHF determinant, we can write the SUHF state as|SUHF⟩ =1/2P_S_z = 0 ∫_0^π sin(β)dβ e^Ũ_1|RHF⟩,where we have introducedŨ_1 = e^i βS_yU_1e^-i βS_yand have used the fact that the RHF determinant is an eigenfunction of S_y with eigenvalue 0.Using the representation of S_y in terms of fermionic creation and annihilation operators and taking advantage of the Baker-Campbell-Hausdorff commutator expansion, we find that Ũ_1 = ∑_ia u_i^a[ cos(β)(c_a_↑^†c_i_↑ - c_a_↓^†c_i_↓)- sin(β)(c_a_↑^†c_i_↓ + c_a_↓^†c_i_↑)] = cos(β) U_1 - sin(β)(U_1^(+) + U_1^(-)) where we have introduced U_1^(+) = ∑_ia u_i^a c_a_↑^†c_i_↓, U_1^(-) = ∑_ia u_i^a c_a_↓^†c_i_↑. Note that U_1^(+) increases the S_z eigenvalue of an S_z eigenstate by 1, while U_1^(-) decreases it.With these ingredients in hand, the SUHF wave function is|SUHF⟩ = P_S_z = 0 ∑_n=0^∞∑_k=0^n 1/n! 
\binom{n}{k} I_{n,k} × U_1^{n-k} (U_1^{(+)} + U_1^{(-)})^k |RHF⟩, where I_{n,k} is the integral I_{n,k} = 1/2 (-1)^k ∫_{-1}^{1} dcos(β) cos(β)^{n-k} sin(β)^k. The S_z projector means that in (U_1^{(+)} + U_1^{(-)})^k we must have as many powers of U_1^{(+)} as of U_1^{(-)}, which means k must be even, so we write k = 2j. The integral I_{n,2j} vanishes unless n is also even, and we write n = 2m. Using these facts simplifies the wave function to |SUHF⟩ = ∑_{m=0}^∞ ∑_{j=0}^m 1/(2m)! \binom{2m}{2j} \binom{2j}{j} I_{2m,2j} U_1^{2m-2j} (U_1^{(+)} U_1^{(-)})^j |RHF⟩. We can analytically evaluate I_{2m,2j} as an Euler Beta function: 1/2 ∫_{-1}^{1} dx x^{2m-2j} (1-x^2)^j = j! Γ(m-j+1/2) / (2 Γ(m+3/2)), where we have written x = cos(β), so that |SUHF⟩ = ∑_{m=0}^∞ ∑_{j=0}^m 1/(2m)! \binom{2m}{2j} \binom{2j}{j} j! Γ(m-j+1/2)/(2 Γ(m+3/2)) U_1^{2m-2j} (U_1^{(+)} U_1^{(-)})^j |RHF⟩. This result is deceptively formidable. The combinatorial factors, however, simplify enormously. Using the fact that Γ(m+1/2) = √π (2m)!/(m! 4^m) allows us to write 1/(2m)! \binom{2m}{2j} \binom{2j}{j} j! Γ(m-j+1/2)/(2 Γ(m+3/2)) = 1/(2m+1)! \binom{m}{j} 4^j, from which we obtain simply |SUHF⟩ = ∑_{m=0}^∞ 1/(2m+1)! ∑_{j=0}^m \binom{m}{j} (U_1^2)^{m-j} (4 U_1^{(+)} U_1^{(-)})^j |RHF⟩ = ∑_{m=0}^∞ 1/(2m+1)! (6 K_2)^m |RHF⟩ = sinh(√(6 K_2))/√(6 K_2) |RHF⟩, where we have defined 6 K_2 = U_1^2 + 4 U_1^{(+)} U_1^{(-)}, so that the polynomial F(K_2) = ∑ 1/(2m+1)! (6 K_2)^m implicitly defined in Eqn. <ref> begins with 1 + K_2 + …. Finally, it is useful to write K_2 in terms of the amplitudes in U_1 and excitation operators. In agreement with Ref. Piecuch1996, we find that K_2 = 1/6 (U_1^2 + 4 U_1^{(+)} U_1^{(-)}) = -1/6 ∑_{ijab} (u_i^a u_j^b + 2 u_i^b u_j^a) E_a^i E_b^j, where E_a^i and E_b^j are unitary group generators, E_a^i = c_{a↑}^† c_{i↑} + c_{a↓}^† c_{i↓}. Taken all together, these results demonstrate that the SUHF wave function can be written as a polynomial of double excitations out of the RHF ground state, where the double excitation operators are spin adapted and factorizable. One should not forget that, in addition, spin-adapted single excitations must be included.
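Because the combinatorial collapse just performed carries the whole derivation, it is worth checking independently. The following sympy snippet (a verification sketch only) confirms the coefficient identity term by term for small m and j, and checks the sinh resummation numerically:

```python
import sympy as sp

# Coefficient identity:
# (2m)!^-1 C(2m,2j) C(2j,j) j! Gamma(m-j+1/2)/(2 Gamma(m+3/2)) = C(m,j) 4^j/(2m+1)!
for m in range(8):
    for j in range(m + 1):
        lhs = (sp.binomial(2*m, 2*j)*sp.binomial(2*j, j)*sp.factorial(j)
               *sp.gamma(m - j + sp.Rational(1, 2))
               /(sp.factorial(2*m)*2*sp.gamma(m + sp.Rational(3, 2))))
        rhs = sp.binomial(m, j)*4**j/sp.factorial(2*m + 1)
        assert sp.simplify(lhs - rhs) == 0

# Resummation: sum_m (6 K)^m/(2m+1)! = sinh(sqrt(6 K))/sqrt(6 K), checked numerically.
K = sp.Rational(37, 100)                 # arbitrary test point
partial = sum((6*K)**m/sp.factorial(2*m + 1) for m in range(25))
closed = sp.sinh(sp.sqrt(6*K))/sp.sqrt(6*K)
assert abs(float(partial - closed)) < 1e-14
```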
§ SYMMETRY-PROJECTED EXTENDED COUPLED CLUSTER

Once we have the SUHF wave function in this particle-hole language, we might naturally wish to optimize the parameters defining it and extract the ground state energy. To do so, Ref. Qiu2016 defined an energy expression E = ⟨RHF| G(L_2) F^{-1}(K_2) H F(K_2) |RHF⟩, where L_2 is an operator of the same form as K_2^† but expressed with a different set of amplitudes v^i_a, and where G(L_2) is a polynomial which seeks to make the left-hand state look as much like the adjoint of the right-hand state as possible. The resulting energy is made stationary with respect to the amplitudes u_i^a and v^i_a; additionally, we may include spin-adapted single excitations in the traditional coupled-cluster style. A characteristic result is shown in Fig. <ref>, where we demonstrate that with suitably optimized G(L_2) we closely reproduce the traditional PHF energy. This good agreement between SUHF and our similarity-transformed approach demonstrates the correctness of our energy expression and amplitude equations.

Evaluating the energy above is not entirely trivial. We will sketch our technique here and provide full details in the supplementary material. Our energy expression is E = ∑_lmn a_l c̅_m c_n ⟨RHF| L_2^l K_2^m H K_2^n |RHF⟩, where a_l, c̅_m, and c_n are the coefficients in the polynomials G(L_2), F^{-1}(K_2), and F(K_2), respectively. The key task is thus to evaluate matrix elements E_lmn = ⟨RHF| L_2^l K_2^m H K_2^n |RHF⟩. In order to evaluate this matrix element efficiently, we replace L_2 and K_2 with P_0 K_2 P_0 = 1/2 P_0 U_1^2 P_0 and P_0 L_2 P_0 = 1/2 P_0 V_1^2 P_0, where P_0 is the projection operator onto S = 0 and V_1 is explicitly V_1 = ∑_ia v^i_a (c_{i↑}^† c_{a↑} - c_{i↓}^† c_{a↓}). We can introduce as many projection operators P_0 as we need because they are idempotent, commute with K_2, L_2, and H, and satisfy P_0 |RHF⟩ = |RHF⟩. Most of these projection operators can then be eliminated by using the fact (demonstrated in the supplementary material) that P_0 U_1^a P_0 U_1^b P_S = ϵ_{a,b}^S P_0 U_1^{a+b} P_S, where P_S is the projector onto spin S with S_z = 0. A consequence of this relation is that P_0 K_2^n P_0 = p_n P_0 U_1^{2n} P_0, where p_n is a simple coefficient; in fact, p_n = (2n+1)/6^n, as can be readily derived from the fact that |SUHF⟩ = P_0 e^{U_1} P_0 |RHF⟩ = P_0 F(K_2) P_0 |RHF⟩. Using these relations gives us E_lmn = p_l p_m p_n ⟨RHF| V_1^{2l} P_0 U_1^{2m} P_0 H U_1^{2n} |RHF⟩ = p_l p_m p_n ∑_{k=0}^4 \binom{2n}{k} ⟨RHF| V_1^{2l} P_0 U_1^{2m} P_0 U_1^{2n-k} (H U_1^k)_c |RHF⟩ = p_l p_m p_n ∑_{k=0}^4 ∑_S \binom{2n}{k} ⟨RHF| V_1^{2l} P_0 U_1^{2m} P_0 U_1^{2n-k} P_S (H U_1^k)_c |RHF⟩ = p_l p_m p_n ∑_{k=0}^4 ∑_S \binom{2n}{k} ϵ^S_{2m,2n-k} ⟨RHF| V_1^{2l} P_0 U_1^{2m+2n-k} P_S (H U_1^k)_c |RHF⟩. Here, the subscript “c” stands for the connected component. In going from the first line to the second we have used H U_1^{2n} = ∑_{k=0}^4 \binom{2n}{k} U_1^{2n-k} (H U_1^k)_c. In going from the second to the third we have introduced the resolution of unity in the form 1 = ∑_S P_S, and in going from the third line to the fourth we have used Eqn. <ref>. In principle S must run over all possible spin states that can be constructed from (H U_1^k)_c |RHF⟩; in practice, we need only S = 0, 1, or 2. At this point, we can replace the projection operators by integrations over a grid, following our standard techniques: E_lmn = p_l p_m p_n ∑_{k=0}^4 ∑_S \binom{2n}{k} ϵ^S_{2m,2n-k} (2S+1) ∫ dΩ_1/8π^2 dΩ_2/8π^2 D^S_{0,0}(Ω_2) ⟨RHF| Ṽ_1(Ω_1)^{2l} U_1^{2m+2n-k} [H Ũ_1(Ω_2)^k]_c |RHF⟩, where Ω is a compact notation for the Euler angles over which we integrate and Ũ_1 and Ṽ_1 are the rotated U_1 and V_1 operators (c.f. Eqn. <ref>, but note that we must now also include rotations with angles α and γ in the rotation operator of Eqn. <ref>).
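In the S_z = 0 sector relevant here, the Wigner function reduces to a Legendre polynomial, d^S_{00}(β) = P_S(cos β), so the β integration can be discretized with Gauss-Legendre quadrature in cos β. A minimal sketch of the grid and weights follows (illustrative only; the α and γ integrations are omitted under the S_z = 0 assumption):

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, Legendre

def spin_projection_grid(S, n_beta=16):
    """Nodes beta_g and weights w_g such that, on the S_z = 0 sector,
    P_S ~ sum_g w_g exp(i beta_g S_y)."""
    x, w = leggauss(n_beta)             # Gauss-Legendre nodes/weights in x = cos(beta)
    beta = np.arccos(x)
    dS00 = Legendre.basis(S)(x)         # d^S_00(beta) = P_S(cos beta)
    return beta, 0.5*(2*S + 1)*w*dS00   # includes the (2S+1)/2 normalization

beta0, w0 = spin_projection_grid(S=0)
print(w0.sum())   # -> 1.0: the S = 0 projector resolves the identity on a singlet
beta1, w1 = spin_projection_grid(S=1)
print(w1.sum())   # -> 0.0: a singlet has no S = 1 component
```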
Though our expression is somewhat cumbersome, it can now be evaluated. The idea is to break it down into a sum of products of connected components; there are 𝒪(N^2) such connected components needed to evaluate the energy for all l, m, and n, each of which can be evaluated in 𝒪(N^4) or less. We thus evaluate and store all possible connected components, then put them together to evaluate a given combination of indices l, m, n, k, and S. To make this concrete, let us show a single example in which we will suppress the explicit dependence on rotation angles. One possible term is ⟨ V_1^4 U_1^2 (H U_1^2)_c⟩ = 12 ⟨(V_1 U_1)_c⟩^2 ⟨ V_1^2 (H U_1^2)_c⟩ + 6 ⟨(V_1^2 U_1^2)_c⟩ ⟨ V_1^2 (H U_1^2)_c⟩ + 24 ⟨(V_1 U_1)_c⟩ ⟨(V_1^2 U_1)_c V_1 (H U_1^2)_c⟩ + 6 ⟨(V_1^2 U_1)^2_c (H U_1^2)_c⟩ + 4 ⟨(V_1^3 U_1^2)_c V_1 (H U_1^2)_c⟩, where each of these pieces can readily be evaluated using diagrammatic techniques. Note finally that each diagram resolves into the product of a spatial orbital part and a spin part. The spin integration ultimately can be carried out analytically, as we have done for the SUHF wave function earlier. In addition to evaluating the energy, we must solve for the amplitudes in U_1 and V_1. For the moment, we use a simple steepest-descents-style algorithm and calculate the derivatives of the energy with respect to U_1 and V_1 amplitudes numerically.

§ COMBINING PHF AND COUPLED CLUSTER

Thus far, we have limited ourselves to the PHF ansatz. While this is useful for the description of static correlation, it is far from ideal for the description of weakly correlated systems, for which we would prefer a combination with coupled cluster theory. Note that previous work has built on ways to combine PHF with perturbation theory<cit.> or with configuration interaction,<cit.> albeit in a rather different way. Unfortunately, it is not entirely clear how this is best to be carried out. The basic idea is straightforward enough: we could construct a similarity-transformed Hamiltonian H̅ = e^{-T} F^{-1}(K_2) H F(K_2) e^T and introduce a biorthogonal expression for the energy which we make stationary with respect to the various parameters of the problem. Thus, for example, we might write E = ⟨RHF| (1 + Z) G(L_2) H̅ |RHF⟩, where Z is a sum of de-excitation operators. Unfortunately, the techniques outlined above cannot avail us once we include genuinely connected double excitations, and evaluating the energy and amplitude equations for this similarity-transform-based combination of PHF and coupled cluster is far from simple. For the moment, we will avoid these issues by moving to a strictly variational treatment<cit.> as proof of principle. If the variational method works well, then we have reason to invest the time and effort in a similarity-transformed coupled-cluster-like approach, whereas if the variational treatment fails, then the similarity-transformed approach is unlikely to be successful. Our variational approach writes |Ψ⟩ = F(K_2) e^T |RHF⟩ = P_{S=0} e^{T + U_1} |RHF⟩, defines the energy by the usual expectation value E = ⟨Ψ|H|Ψ⟩/⟨Ψ|Ψ⟩, and obtains the amplitudes by making the energy stationary. We have adopted a full configuration interaction code to do the calculations and have used the conjugate gradients method to solve for the wave function parameters. Though relatively straightforward to implement, this does limit our results in this manuscript to systems which are sufficiently small that exact results are readily available.
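The optimization layer itself is generic Rayleigh-quotient minimization, which is easy to prototype. Below is a schematic driver in which `wavefunction(params)` is a placeholder standing in for the expansion of P_{S=0} e^{T+U_1}|RHF⟩ in the determinant basis; the toy Hamiltonian and linear ansatz are purely illustrative, and the conjugate gradients solver uses numerical derivatives, as in the text.

```python
import numpy as np
from scipy.optimize import minimize

def variational_energy(params, H, wavefunction):
    """Rayleigh quotient <Psi|H|Psi>/<Psi|Psi> for a real wave function vector."""
    psi = wavefunction(params)   # placeholder for building P e^{T+U1}|RHF> as a vector
    return psi @ (H @ psi) / (psi @ psi)

# Toy stand-ins: a random symmetric "Hamiltonian" and a small linear ansatz.
rng = np.random.default_rng(1)
H = rng.standard_normal((50, 50)); H = 0.5*(H + H.T)
basis = rng.standard_normal((50, 6))
wavefunction = lambda p: basis @ np.concatenate(([1.0], p))  # fixed reference + params

res = minimize(variational_energy, x0=np.zeros(5), args=(H, wavefunction), method='CG')
print(res.fun)   # stationary (here, minimized) variational energy
```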
We limit the cluster operator T to single and double substitutions so that we have variational coupled cluster singles and doubles (VCCSD) and, when combined with SUHF, spin projected unrestricted variational coupled cluster singles and doubles (SUVCCSD). We will also report the effects of triple excitations in combination with spin projection to obtain spin projected unrestricted variational coupled cluster singles, doubles, and triples (SUVCCSDT). This implementation of SUHF agrees with our more conventional integration over symmetry coherent states to the precision with which we solve the respective equations (about ten decimal places).

We must make one cautionary note. While we can initialize T = 0 in SUVCC, we must provide a non-zero initial guess for U_1, because U_1 = 0 is a solution of the SUHF and SUVCCSD equations. The results of our calculations depend on the choice of initial guess. Usually it suffices to initialize U_1 to the eigenvector of the SUHF Hessian corresponding to the lowest eigenvalue at U_1 = 0. Occasionally, however, this procedure is insufficient. One can frequently circumvent this problem by carrying out a UHF calculation first and finding the Thouless transformation between the RHF and UHF solutions, though of course this is only helpful if the UHF breaks spin symmetry. One can also use the Thouless transformation between the RHF determinant and the deformed SUHF determinant, which always exists and which can be found by other means.

§ RESULTS

We will first discuss results for the Hubbard model Hamiltonian<cit.> before turning to selected results for small molecules. Most calculations were done using in-house code, but the UHF and UHF-based coupled cluster instead used the program package of Ref. <cit.>.

§.§ The Hubbard Hamiltonian

The Hubbard Hamiltonian describes electrons on a lattice. Electrons can hop from one site to neighboring sites, while two electrons on the same site repel one another. Mathematically, we write it as H = -t ∑_⟨ij⟩ (c_{i↑}^† c_{j↑} + c_{i↓}^† c_{j↓}) + U ∑_i c_{i↑}^† c_{i↓}^† c_{i↓} c_{i↑}, where i and j index lattice sites and the notation ⟨ij⟩ means the sum runs over sites between which hopping is allowed, which, for our purposes, will be nearest-neighbor sites only. The lattice may be periodic or have open boundaries, and may be one-dimensional or multi-dimensional. We will limit ourselves to periodic lattices, which are generally of more interest; results for open lattices are broadly similar. Note that the periodicity of the lattice only affects the hopping interaction, so for large U/t there is little practical difference between open and periodic boundary conditions. Exact results are readily available for one-dimensional periodic lattices thanks to a form of Bethe ansatz,<cit.> and high-quality benchmark data is available also for the more physically relevant two-dimensional lattices.<cit.>

For small U/t, the system is weakly correlated and can be accurately modeled by traditional coupled cluster on the RHF reference. As U/t becomes large, the system becomes strongly correlated and traditional coupled cluster badly overcorrelates. Variational coupled cluster theory instead badly undercorrelates. At half filling, however, UHF becomes rather accurate energetically, and in fact the UHF energy becomes exact in the U/t → ∞ limit. Of course SUHF improves upon UHF everywhere, so it is also energetically exact for large U/t. We would thus expect the combination of SUHF and coupled cluster to be highly accurate everywhere. This is borne out by Fig. <ref>, which shows results for the one-dimensional half-filled Hubbard Hamiltonian with six and ten sites in the left- and right-hand panels, respectively.
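At these sizes the model can also be solved exactly by brute force, which is how benchmark curves like these are generated. A minimal exact-diagonalization sketch for the periodic ring follows (bit-string basis with fermionic sign bookkeeping; it requires L ≥ 3, and all parameter values are illustrative):

```python
import numpy as np
from itertools import combinations

def hubbard_ring_ground_energy(L, n_up, n_dn, t=1.0, U=8.0):
    """Exact ground-state energy of the 1D periodic Hubbard ring (small L only)."""
    strings = lambda n: [sum(1 << i for i in occ) for occ in combinations(range(L), n)]
    ups, dns = strings(n_up), strings(n_dn)

    def hop(sts):
        """One-spin hopping matrix -t c^+_dst c_src with Jordan-Wigner signs."""
        idx = {s: a for a, s in enumerate(sts)}
        h = np.zeros((len(sts), len(sts)))
        for a, s in enumerate(sts):
            for i in range(L):
                for src, dst in ((i, (i + 1) % L), ((i + 1) % L, i)):
                    if (s >> src) & 1 and not (s >> dst) & 1:
                        lo, hi = min(src, dst), max(src, dst)
                        between = (s >> (lo + 1)) & ((1 << (hi - lo - 1)) - 1)
                        sign = (-1)**bin(between).count('1')
                        h[idx[s ^ (1 << src) ^ (1 << dst)], a] += -t*sign
        return h

    H = (np.kron(hop(ups), np.eye(len(dns)))
         + np.kron(np.eye(len(ups)), hop(dns)))
    for a, su in enumerate(ups):
        for b, sd in enumerate(dns):
            H[a*len(dns) + b, a*len(dns) + b] += U*bin(su & sd).count('1')
    return np.linalg.eigvalsh(H)[0]

print(hubbard_ring_ground_energy(L=6, n_up=3, n_dn=3, U=8.0))  # half-filled 6-site ring
```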
Figure <ref> shows results for the two-dimensional Hubbard model where the lattice is 2 × 4 and half filled, but with open boundary conditions so as to lift a degeneracy at the Fermi level which is caused by lattice momentum symmetry. In all three cases, we see that SUVCCSD is roughly equivalent to VCCSD for small U/t, while for large U/t it instead resembles SUHF; in the recoupling region it significantly outperforms either. By “recoupling region” we mean values of U/t large enough that the UHF has broken spin symmetry but not so large that the system is effectively described by the Néel state, in which each lattice site is occupied by a single electron and electrons on adjacent sites have opposite spins. Adding triples to give SUVCCSDT is exceptionally accurate everywhere for the smaller lattices, but for the 10-site model even higher excitation levels are apparently necessary.

There is, however, an unpleasant feature here which we should point out. The errors per electron of both SUHF and VCCSD increase with increasing system size, while UHF gets somewhat better. In the thermodynamic limit, the errors per electron of UHF and SUHF are the same,<cit.> and for larger U/t we would presumably see no improvement of SUVCCSD over UHF. On the other hand, coupled cluster with singles and doubles based on the UHF reference (UCCSD) improves noticeably over UHF even for fairly large U/t, though it does not fully restore either spin symmetry or lattice momentum symmetry (both of which are broken in UHF), while SUVCCSD breaks neither. Energetically, for very large systems in this particular example there seems to be no reason to go to the expense of using SUVCCSD instead of simply using UCCSD. The SUVCCSD wave function is presumably more physical in that it has the same symmetries as does the exact ground state wave function, but so too would be a spin-projected UCCSD. Results for spin projecting the UCCSD wave function will be presented in due time. Guided by our experience in the Lipkin Hamiltonian,<cit.> we expect spin projecting UCCSD to be superior in the strongly correlated limit (where it should not be worse than UCCSD itself) but perhaps less accurate in the recoupling region.

Away from half filling, UHF and consequently SUHF are less helpful. In Fig. <ref> we show results for an eight-site Hubbard ring with six electrons. There is a qualitative change in character in the lowest energy UHF as a function of U/t, which is reflected in the SUHF, and in fact for large U/t the Hartree-Fock ground state appears to be of generalized Hartree-Fock type (i.e. the lowest energy Hartree-Fock breaks both S^2 and S_z symmetry). For this combination of lattice and filling fraction, SUHF is significantly less effective at capturing the strong correlation for large U/t, and accordingly VCCSD and SUVCCSD are essentially identical. The contributions from triple excitations are now very significant for large U/t. Note that there are no triple excitations in SUVCCSD because the PHF polynomial contains only even excitation levels, and single excitations vanish due to momentum symmetry. Unless these triple excitations can be accounted for, there seems to be little hope of obtaining accurate results. Results for doped two-dimensional lattices (not shown) display these same basic features.

§.§ Molecular Examples

Let us now turn to a few simple molecular examples. We begin with the case of four hydrogen atoms on a ring of radius 3.3 bohr, depicted schematically in Fig.
<ref>. The ground state and lowest singlet excited state are nearly degenerate for θ ≈ 90^∘, and become exactly degenerate at this high symmetry point. While the exact curve of the energy as a function of θ should be smooth with a maximum at 90^∘, approximate methods typically instead have a cusp here, and some also predict a spurious local minimum. In Fig. <ref> we see that VCCSD is superior to SUHF except near the high symmetry point, where it yields an unphysical cusp. Combining SUHF and VCCSD in SUVCCSD gives results superior to both methods; there is no cusp, and SUVCCSD is nearly parallel to the exact result.

Next we consider the dissociation of N_2 in the STO-3G basis set, as depicted in Fig. <ref>. While traditional restricted coupled cluster with single and double excitations (RCCSD) has an unphysical bump and dissociates to an energy which is much too low, VCCSD gives a reasonable curve everywhere. However, VCCSD does not go to quite the right dissociation limit, while in this minimal basis set, SUHF does. Accordingly, SUVCCSD follows VCCSD near equilibrium when the system is weakly correlated, yields the right answer at dissociation, and outperforms both VCCSD and SUHF in the intermediate coupling regime. The right panel of Fig. <ref> shows errors with respect to full configuration interaction as a function of the N-N bond length R_NN, and we see that in this basis set the maximum error of SUVCCSD is about 2.5 kcal/mol.

Finally we consider the symmetric double dissociation of H_2O, in which we use the STO-3G basis for the hydrogen atoms and the 6-31G basis for the oxygen atom. We take an H-O-H bond angle of 104.5^∘. As the left panel of Fig. <ref> makes clear, with this slightly larger basis set the SUHF is no longer exact at large bond lengths. Accordingly, as can be seen from the right panel, SUVCCSD offers only negligible improvement over VCCSD.

§ CONCLUSIONS

Methods which can accurately describe both strongly and weakly correlated systems remain frustratingly elusive. Coupled cluster theory is unquestionably successful when correlations are weak, and symmetry projected mean field methods are generally useful when correlations are strong. Combining the two is therefore a logical step. This combination is, however, not as straightforward as one would like, because the two theories are formulated in very different ways. In Ref. Degroote2016 we showed how to interpolate between coupled cluster and the number-projected Bardeen-Cooper-Schrieffer (BCS) state in the pairing or reduced BCS Hamiltonian. Reference Qiu2016 formulated spin-projected unrestricted Hartree-Fock in the language of a similarity-transformed Hamiltonian built from particle-hole excitations out of a symmetry-adapted reference. While we feel that this is an important step toward fruitfully combining PHF and coupled cluster theories, we provided no results for this combination. Reference WahlenStrothman2016 shows results for the combination of parity-projected Hartree-Fock and coupled cluster theory when applied to the Lipkin model Hamiltonian. Here, we provide our first applications to the Hubbard and molecular Hamiltonians, albeit only in a variational formulation. Thus far, the results seem generally encouraging, but much work clearly remains to be done.
Most importantly, it seems obvious that the variational approach used here must be abandoned in favor of a similarity-transformation technique.Work along these lines is underway.More basically, however, we must address several questions.Two questions strike us as particularly important.One important question is under which circumstances combining PHF and coupled cluster will offer any improvements at all.Our results here are somewhat mixed, but generally we suggest that when PHF is adequate to describe the strong correlations in the system, then combining PHF and coupled cluster should work well.This should not be too surprising, and is supported by our earlier work.<cit.>We must therefore ask which symmetries should be projectively restored in the first place, and how the symmetry-projected determinant can be expressed in terms of symmetry-adapted particle-hole excitations?Each symmetry would in general have a different relation between the parameters defining the Thouless transformation that takes us to the broken symmetry determinant on the one hand and the symmetry-adapted particle-hole excitations used to define the similarity-transformed Hamiltonian on the other hand.Thus, for example, while projecting UHF onto a singlet state gives us the sinh polynomial we have called F(K_2), projecting UHF onto a state with different spin will give us different polynomials.Projecting spatial symmetry would be different again.Second, what happens in the thermodynamic limit?While coupled cluster is extensive (i.e. the energy scales properly with system size), extensivity in projected Hartree-Fock is more subtle.It is true that PHF has an extensive component, in that the energy per particle in the thermodynamic limit is the same as that of the broken symmetry mean field (and therefore below that of the symmetry-adapted mean field from which we begin).But what happens when we combine PHF and coupled cluster theory?Certainly we would introduce unlinked terms which in the thermodynamic limit should not be there.These questions are not easy.But we are cautiously optimistic that they can be answered and that the resulting methods will have great potential as broadly applicable techniques which can describe all sorts of correlated systems with high accuracy at a computational cost not much different from that of traditional coupled cluster theory. § SUPPLEMENTARY MATERIALSupplementary material provides full details for the evaluation of the similarity-transformed energy expression of Eqn. <ref> and proves the relation given in Eqn. <ref>.This work was supported by the U.S. Department of Energy, Office of Basic Energy Sciences, Computational and Theoretical Chemistry Program under Award No.DE-FG02-09ER16053. G.E.S. is a Welch Foundation Chair (C-0036).We would like to acknowledge computational support provided by the Center for the Computational Design of Functional Layered Materials, an Energy Frontier Research Center funded by the U.S. Department of Energy, Office of Science, Basic Energy Sciences under Award DE-SC0012575.
We show that every group in a large family of (not necessarily torsion) spinal groups acting on the ternary rooted tree is of subexponential growth.

§ INTRODUCTION

Word growth is an important quasi-isometry invariant of finitely generated groups. Such groups can be broadly divided into three distinct classes: groups of polynomial growth, groups of exponential growth and groups of intermediate growth (that is, groups whose growth is faster than any polynomial but slower than exponential). While the existence of groups of polynomial or exponential growth is readily established, the question is far less obvious in the case of groups of intermediate growth and was first raised by Milnor in 1968 (<cit.>). It was settled by Grigorchuk in 1983 <cit.> when he proved that the infinite, finitely generated torsion group defined in <cit.>, now known as the first Grigorchuk group, is of intermediate growth.

Since this initial discovery, many other groups of intermediate growth have been found. One of those, now known as the Fabrykowski-Gupta group, was first studied by Fabrykowski and Gupta in <cit.> and in <cit.>. It is a self-similar branch group acting on the ternary rooted tree. The Fabrykowski-Gupta group was later revisited by Bartholdi and Pochon in <cit.>, where they provided a different proof of the fact that its growth is intermediate, along with bounds on the growth.

In order to establish their results, Bartholdi and Pochon proved that under suitable conditions on the length of words, the growth of a self-similar group acting on a rooted tree is subexponential if and only if the growth of a subset of elements (those for which the projection to a certain fixed level does not reduce the length) is subexponential. It turns out that the same holds if we consider the subset of elements whose length is never reduced by the projection to any level. As this set is smaller than the one considered by Bartholdi and Pochon, it can potentially be easier to show that its growth is subexponential.

After reviewing some basic results and definitions in Section <ref>, we will define non-ℓ_1-expanding similar families of groups of rooted tree automorphisms and prove a generalized Bartholdi-Pochon criterion for such families of groups in Section <ref>. We will then use it in Section <ref> to prove that a large family of spinal groups acting on the ternary rooted tree are of subexponential growth.

§ PRELIMINARIES

§.§ Word growth

Given a finite and symmetric (i.e. closed under the operation of taking the inverse) generating set of a group, one can construct a natural metric on the group called the word metric.
Let G be a finitely generated group and S be a symmetric finite generating set. The word norm on G (with respect to S) is the map|·|G→g ↦min{k∈| g=s_1… s_k, s_i∈ S}.The word metric on G (with respect to S) is the metricd G× G→(g,h) ↦ |g^-1h|induced by the word norm on G.The word metric is simply the metric coming from the Cayley graph of G with the generating set S.It is clear from the definition that every ball of radius n in G (with the word metric) is finite. We will be interested in seeing how quickly these balls grow.Let G=⟨ S ⟩, where S is symmetric and finite. The mapγ_G,S →n ↦|B_G,S(n)|,where B_G,S(n) is the ball of radius n centred at the identity in the word metric of G with respect to S, is the growth function of G (with respect to S). The growth function of a group depends on the generating set we choose. In order to remove this dependence, we introduce an equivalence relation on functions. Given two non-decreasing functions f,g →, we write f≲ g if there exists C∈^* such that f(n)≤ g(Cn) for all n∈^*. The functions f and g are said to be equivalent, written f∼ g, if f≲ g and g≲ f.Let G be a finitely generated group and S, T be two finite symmetric generating set. Then, γ_G,S∼γ_G,T. Let C=max_t∈ T{|t|_S},where |·|_S is the norm on G with respect to S. Then, clearly, for all g∈ G, we have |g|_S ≤ C|g|_T. Hence,γ_G,T(n) ≤γ_G,S(Cn),so γ_G,T≲γ_G,S. The result follows from the symmetry of the argument. Thanks to proposition <ref>, the equivalence class of the growth function is independent of the choice of generating set. For this reason, we will often omit the reference to the generating set and write simply γ_G for the growth function of the group G. Let f→ be a non-decreasing function. We say that * f is of polynomial growth if there exists d∈ such that f≲ n^d* f is of superpolynomial growth if n^d ⋦ f for all d∈* f is of exponential growth if f∼ e^n* f is of subexponential growth if f⋦ e^n* f is of intermediate growth if f is of superpolynomial growth and of subexponential growth. The above definitions do not depend on the exact function f, but only on its equivalence class. Remark <ref> allows us to define growth for groups. A finitely generated group G is said to be of polynomial, superpolynomial, exponential, subexponential or intermediate growth if for any (and hence all) generating set S, the growth function γ_G,S is of polynomial, superpolynomial, exponential, subexponential or intermediate growth, respectively.Since the growth of a free group is exponential, a finitely generated group cannot have a growth faster than exponential. In what follows, we will be interested mainly in distinguishing between groups of exponential or subexponential growth. For this purpose, it will be convenient to study a quantity called the exponential growth rate of the group. Let G be a finitely generated group and S be a finite generating set. The limitκ_G,S = lim_n→∞γ_G,S(n)^1/nexists. It follows easily from the definition that for all n,m∈,γ_G,S(n+m) ≤γ_G,S(n)γ_G,S(m).Hence, lnγ_G,S is a subadditive function, so according to Fekete's subadditive lemma, the limitlim_n→∞lnγ_G,S(n)/nexists. The result follows.The limitκ_G,S = lim_n→∞γ_G,S(n)^1/nis the exponential growth rate of the group G (with respect to the generating set S).Let G be a finitely generated group with a finite symmetric generating set S. Then, κ_G,S > 1 if and only if G is of exponential growth. It will sometimes be more convenient to consider spheres instead of balls. 
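Though elementary, these notions are concrete enough to compute with. The following Python sketch (our illustration, not part of the original text) computes the ball sizes |B_{G,S}(n)| for the simplest infinite example, G = ℤ^2 with its standard symmetric generating set, by breadth-first search; the quadratic output is the expected polynomial growth, and the intermediate variable `sphere` records the sphere sizes |Ω_{G,S}(n)| that appear in the next proposition.

```python
S = [(1, 0), (-1, 0), (0, 1), (0, -1)]  # standard symmetric generating set of Z^2

def ball_sizes(n_max):
    """Sizes |B(0)|, ..., |B(n_max)| of word-metric balls in Z^2."""
    ball = {(0, 0)}
    sphere = {(0, 0)}  # sphere of radius 0
    sizes = [1]
    for _ in range(n_max):
        # the next sphere consists of neighbors of the current one outside the ball
        sphere = {(x + dx, y + dy) for (x, y) in sphere for (dx, dy) in S} - ball
        ball |= sphere
        sizes.append(len(ball))
    return sizes

print(ball_sizes(5))  # [1, 5, 13, 25, 41, 61], i.e. gamma(n) = 2n^2 + 2n + 1
```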
In the case of infinite finitely generated groups, the exponential growth rate can also be calculated from the size of spheres.

Proposition. Let G be an infinite finitely generated group with finite symmetric generating set S. Then,
κ_{G,S} = lim_{n→∞} |Ω_{G,S}(n)|^{1/n},
where Ω_{G,S}(n) is the sphere of radius n in the word metric on G with respect to S.

Proof. It follows from the definition of the word metric that |Ω_{G,S}(n+m)| ≤ |Ω_{G,S}(n)| |Ω_{G,S}(m)|, so by Fekete's subadditive lemma, the limit exists. Since |Ω_{G,S}(n)| ≤ γ_{G,S}(n), we have lim_{n→∞} |Ω_{G,S}(n)|^{1/n} ≤ κ_{G,S}. On the other hand,
γ_{G,S}(n) = ∑_{i=0}^{n} |Ω_{G,S}(i)| ≤ (n+1) |Ω_{G,S}(s(n))|,
where s(n) ∈ {0,1,…,n} is such that |Ω_{G,S}(s(n))| ≥ |Ω_{G,S}(i)| for all 0 ≤ i ≤ n. Hence,
κ_{G,S} ≤ lim_{n→∞} (n+1)^{1/n} |Ω_{G,S}(s(n))|^{1/n} = lim_{n→∞} (|Ω_{G,S}(s(n))|^{1/s(n)})^{s(n)/n} = (lim_{n→∞} |Ω_{G,S}(n)|^{1/n})^{lim_{n→∞} s(n)/n} ≤ lim_{n→∞} |Ω_{G,S}(n)|^{1/n}
(this uses the fact that G is infinite, so |Ω_{G,S}(n)| ≥ 1 for all n∈ℕ, and that s(n) is a non-decreasing function bounded from above by n).

§.§ Word pseudometrics

Sometimes, instead of the word metric, it might be more convenient to consider a word pseudometric.

Definition. Let G be a finitely generated group and S be a symmetric finite set generating G. A map |·|: S → {0,1} that associates to every generator a length of 0 or 1 will be called a pseudolength on S. A pseudolength can be extended to a map
|·|: G → ℕ, g ↦ min_{g = s_1⋯s_k, s_i∈S} { ∑_{i=1}^{k} |s_i| },
called the word pseudonorm of G (associated to (S, |·|)). The corresponding pseudometric
d: G × G → ℕ, (g,h) ↦ |g^{-1}h|,
is the word pseudometric of G (associated to (S, |·|)).

For the growth function to make sense with the word pseudometric, though, there must be only a finite number of elements with length 0. In this case, the growth function is well defined and equivalent to the growth function given by the word metric.

Proposition. Let G be a group generated by a finite symmetric set S and |·|: S → {0,1} be a pseudolength on S. If the subgroup G_0 = ⟨{s∈S | |s| = 0}⟩ is finite, then the map
γ_{G,S,|·|}: ℕ → ℕ, n ↦ |B_{G,S,|·|}(n)|,
is well-defined, where B_{G,S,|·|}(n) = {g∈G | |g| ≤ n}. Furthermore, γ_{G,S,|·|} ∼ γ_{G,S}.

Proof. To show that γ_{G,S,|·|} is well-defined, we need to show that |B_{G,S,|·|}(n)| < ∞ for every n∈ℕ. For g∈G with |g| = n, it follows from the definition of the word pseudonorm that there exist s_1,…,s_n ∈ S_1 and g_0,…,g_n ∈ G_0 such that g = g_0 s_1 g_1 ⋯ s_n g_n, where S_1 = {s∈S | |s| = 1}. Hence,
|B_{G,S,|·|}(n)| ≤ |G_0|^{n+1} |S_1|^n < ∞.

We must now show that the growth function with respect to the word pseudometric is equivalent to the growth function with the word metric. Let us denote by |·|_w the word norm in G with respect to S. Let M = max{|g|_w | g∈G_0}. Then, the decomposition g = g_0 s_1 g_1 ⋯ s_n g_n implies that |g|_w ≤ (n+1)M + n = |g|(M+1) + M ≤ (2M+1)|g| if |g| ≥ 1. Hence, if n ≥ 1, γ_{G,S,|·|}(n) ≤ γ_{G,S}((2M+1)n), so γ_{G,S,|·|} ≲ γ_{G,S}. Since it is clear from the definition that γ_{G,S} ≲ γ_{G,S,|·|}, we have γ_{G,S,|·|} ∼ γ_{G,S}.

Remark. Since the growth function coming from a word pseudometric with a finite subgroup of length 0 is equivalent to the growth function coming from a word metric, we will make no distinction between the two in what follows.

Remark. The conclusion of proposition <ref> remains valid if we replace γ_{G,S} by γ_{G,S,|·|} for some pseudonorm |·|. Hence, we can define an exponential growth rate with respect to a pseudonorm, which we will denote by κ_{G,S,|·|}.
Likewise, the conclusions of proposition <ref> are also true, so the exponential growth rate can be computed from the growth of spheres. A word pseudometric yielding a finite subgroup of elements of length 0 (and, by extension, the corresponding word pseudonorm) will be called proper.
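As a brief worked example (added here for concreteness; it anticipates the construction of Section <ref>): let G = ⟨a, b, c, d⟩ be the first Grigorchuk group, whose generators satisfy a^2 = b^2 = c^2 = d^2 = 1, and consider the pseudolength on S = {a, b, c, d} given by |a| = 0 and |b| = |c| = |d| = 1. The subgroup generated by the elements of length 0 is ⟨a⟩ ≅ ℤ/2, which is finite, so this pseudonorm is proper. The pseudonorm of an element is then the least number of letters from {b, c, d} among all words in S representing it; for instance, |ab| = 1, even though the word length of ab with respect to S is 2. Pseudonorms of exactly this kind are the ones attached to spinal groups in Section <ref>.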
§.§ Rooted trees

Let d > 1 be a natural number and T_d be the d-regular rooted tree. The set of vertices of T_d is V(T_d) = X^*, the set of finite words in the alphabet X = {1,2,…,d}. We will often abuse the notation and write v∈T_d instead of v∈V(T_d) or v∈X^* to refer to a vertex of T_d. The set of edges is E(T_d) = {{w, wx} ⊂ X^* | w∈X^*, x∈X}. A vertex w'∈X^* is said to be a child of w∈X^* if w' = wx for some x∈X. In this case, w is called the parent of w'. For n∈ℕ, the set L_n ⊂ X^* of words of length n is called the n-th level of the tree.

The group of automorphisms of T_d, that is, the group of bijections of X^* that leave E(T_d) invariant, will be denoted by Aut(T_d). For G ≤ Aut(T_d) and v∈T_d, we will write St_G(v) to refer to the stabilizer of v in G. For n∈ℕ, we define the stabilizer of the n-th level in G, denoted St_G(n), as St_G(n) = ⋂_{v∈L_n} St_G(v). When G = Aut(T_d), we will often simply write St(v) and St(n) in order to make the notation less cluttered.

For v∈T_d, we will denote by T_v the subtree rooted at v. This subtree is naturally isomorphic to T_d. Any g∈St(v) leaves T_v invariant. The restriction of g to T_v is therefore (under the natural isomorphism between T_v and T_d) an automorphism of T_d, which we will denote by g|_v. The map φ_v: St(v) → Aut(T_d), g ↦ g|_v, is clearly a homomorphism. This allows us to define a homomorphism ψ: St(1) → Aut(T_d)^d, g ↦ (g|_1, g|_2, …, g|_d). We can also define homomorphisms ψ_n: St(n) → Aut(T_d)^{d^n} for all n∈ℕ^* inductively by setting ψ_1 = ψ and ψ_{n+1}(g) = (ψ_n(g|_1), ψ_n(g|_2), …, ψ_n(g|_d)). It is clear from the definition that these homomorphisms are in fact isomorphisms. In what follows, we will use the same notation for those maps and their restrictions to some subgroup G ≤ Aut(T_d). In this case, it is important to note that the maps ψ_n are still injective, but are no longer surjective in general.

There is a faithful action of Sym(d) on T_d given by τ(xw) = τ(x)w, where τ∈Sym(d), x∈X and w∈X^*. This action gives us an embedding of Sym(d) in Aut(T_d), and in what follows, we will often identify Sym(d) with its image in Aut(T_d) under this embedding. The automorphisms in the image of Sym(d) are called rooted automorphisms. Any g∈Aut(T_d) can be uniquely written as g = hτ with h∈St(1) and τ∈Sym(d). In a slight abuse of notation, we will often find it more convenient to write
g = (g_1, g_2, …, g_d)τ,
where (g_1, g_2, …, g_d) = ψ(h). More generally, for any n∈ℕ^*, there is a natural embedding of the wreath product Sym(d) ≀ ⋯ ≀ Sym(d) of Sym(d) with itself n times in Aut(T_d), and any g∈Aut(T_d) can be written uniquely as g = hτ with h∈St(n) and τ∈Sym(d) ≀ ⋯ ≀ Sym(d). In the same fashion as above, we will often write
g = (g_{11…1}, g_{11…2}, …, g_{dd…d})τ,
where (g_{11…1}, g_{11…2}, …, g_{dd…d}) = ψ_n(h).
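The section decomposition above is easy to compute with in practice. The sketch below (our illustration, not part of the paper) encodes a finitary automorphism of the d-regular rooted tree by its wreath recursion and implements both the action on vertices and composition. We adopt the convention that g = (g_1, …, g_d)τ acts by g(xw) = τ(x) g_x(w); other section conventions differ only in the indexing of the sections.

```python
# A finitary automorphism of the d-regular rooted tree, stored by its wreath
# recursion: either None (the identity) or a pair (tau, kids), where tau is a
# permutation of {0, ..., d-1} given as a tuple and kids lists the d sections.

def act(g, v):
    """Image of the vertex v (a tuple of letters in {0,...,d-1}) under g."""
    if g is None or not v:
        return v
    tau, kids = g
    x = v[0]
    return (tau[x],) + act(kids[x], v[1:])

def compose(g, h):
    """Automorphism g*h acting by (g*h)(v) = g(h(v)), built level by level."""
    if g is None:
        return h
    if h is None:
        return g
    tau_g, kids_g = g
    tau_h, kids_h = h
    # (g*h)(xw) = tau_g(tau_h(x)) . g_{tau_h(x)}(h_x(w))
    tau = tuple(tau_g[tau_h[x]] for x in range(len(tau_h)))
    kids = [compose(kids_g[tau_h[x]], kids_h[x]) for x in range(len(tau_h))]
    return (tau, kids)

# Example on the binary tree: the rooted swap is an involution.
swap = ((1, 0), [None, None])
print(act(swap, (0, 1, 1)))                 # (1, 1, 1)
print(act(compose(swap, swap), (0, 1, 1)))  # (0, 1, 1): swap * swap = identity
```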
§ INCOMPRESSIBLE ELEMENTS AND GROWTH

§.§ Non-ℓ_1-expanding similar families of groups acting on rooted trees

A classical way of showing that a group acting on a rooted tree is of subexponential growth is to show that the projection of elements to some level induces a significant amount of length reduction. We introduce here a class of groups that seems well suited to this kind of argument, non-ℓ_1-expanding similar families of groups of automorphisms of rooted trees. This is a restriction of the more general notion of similar families of groups as defined by Bartholdi in <cit.>. Note that for the rest of this section, d will denote an integer greater than 1.

Definition. Let Ω be a set and σ: Ω → Ω be a map from this set to itself. For each ω∈Ω, let G_ω ≤ Aut(T_d) be a group of automorphisms of T_d acting transitively on each level, generated by a finite symmetric set S_ω and endowed with a proper word pseudonorm |·|_ω. The family {(G_ω, S_ω, |·|_ω)}_{ω∈Ω} is a similar family of groups of automorphisms of T_d if for all ω∈Ω and all g∈G_ω,
g = (g_1, g_2, …, g_d)τ
with g_1, g_2, …, g_d ∈ G_{σ(ω)} and τ∈Sym(d). Furthermore, if
∑_{i=1}^{d} |g_i|_{σ(ω)} ≤ |g|_ω,
the similar family {(G_ω, S_ω, |·|_ω)}_{ω∈Ω} is said to be a non-ℓ_1-expanding similar family of groups of automorphisms of T_d.

Remark. In the case where |Ω| = 1, a similar family contains only one group, which is then said to be self-similar.

Remark. There is a more general notion of a similar family of groups in which the groups need not act on regular rooted trees, but only on spherically homogeneous rooted trees. However, we will not need such generality for what follows.

Remark. In what follows, we will frequently consider only (non-ℓ_1-expanding) similar families of the form {(G_ν, S_ν, |·|_ν)}_{ν∈ℕ}, where the map σ: ℕ → ℕ is the addition of 1. This has the advantage of simplifying the notation without causing any significant loss in generality. Indeed, let {(G_ω, S_ω, |·|_ω)}_{ω∈Ω} be a (non-ℓ_1-expanding) similar family. Then, for any ω∈Ω, {(G_ν, S_ν, |·|_ν)}_{ν∈ℕ} is also a (non-ℓ_1-expanding) similar family, where (G_ν, S_ν, |·|_ν) = (G_{σ^ν(ω)}, S_{σ^ν(ω)}, |·|_{σ^ν(ω)}).

Definition. For all ν∈ℕ, let G_ν ≤ Aut(T_d) be a finitely generated subgroup of Aut(T_d) acting transitively on each level, with finite symmetric generating set S_ν and proper word pseudonorm |·|_ν. The collection {(G_ν, S_ν, |·|_ν)}_{ν∈ℕ} is said to be a non-ℓ_1-expanding similar family of groups (of automorphisms of T_d) if for all ν∈ℕ and all g∈G_ν,
g = (g_1, g_2, …, g_d)τ
with g_1, g_2, …, g_d ∈ G_{ν+1}, τ∈Sym(d) and
∑_{i=1}^{d} |g_i|_{ν+1} ≤ |g|_ν.

Remark. If {(G_ν, S_ν, |·|_ν)}_{ν∈ℕ} is a non-ℓ_1-expanding similar family of groups, then for any ν∈ℕ and s = (s_1, s_2, …, s_d)τ ∈ S_ν, we have
∑_{i=1}^{d} |s_i|_{ν+1} ≤ |s|_ν ≤ 1,
so there is at most one s_i with positive length (and none if |s|_ν = 0).

In order to keep the notation simple, if {(G_ν, S_ν, |·|_ν)}_{ν∈ℕ} is a non-ℓ_1-expanding similar family of groups, for ν∈ℕ, we will write γ_ν for the growth function and κ_ν for the exponential growth rate of G_ν with respect to the pseudonorm |·|_ν. The exponential growth rates of a non-ℓ_1-expanding similar family of groups form a non-decreasing sequence.

Proposition. Let {(G_ν, S_ν, |·|_ν)}_{ν∈ℕ} be a non-ℓ_1-expanding similar family of groups of automorphisms of T_d. For any ν∈ℕ, κ_ν ≤ κ_{ν+1}.

Proof. Let n∈ℕ be greater than d and let g∈G_ν be such that |g|_ν ≤ n. We have g = (g_1, g_2, …, g_d)τ with g_1, g_2, …, g_d ∈ G_{ν+1}, τ∈Sym(d) and
∑_{i=1}^{d} |g_i|_{ν+1} ≤ |g|_ν ≤ n.
Since g is determined by g_1, g_2, …, g_d and τ, we have
γ_ν(n) ≤ d! ∑_{r_1+r_2+⋯+r_d ≤ n} γ_{ν+1}(r_1) γ_{ν+1}(r_2) ⋯ γ_{ν+1}(r_d).
Let C(k) = γ_ν+1(k)/κ_ν+1^k for any k∈.We haveγ_ν(n)≤ d!∑_r_1+r_2+… + r_d ≤ nC(r_1)κ_ν+1^r_1C(r_2)κ_ν+1^r_2… C(r_d)κ_ν+1^r_d=d!κ_ν+1^n ∑_r_1+r_2+… + r_d ≤ nC(r_1)C(r_2)… C(r_d).Let s(n)∈{1,…, n} be such that C(s(n)) ≥ C(r) for all 1≤ r ≤ n. We then haveγ_ν(n)≤ d!κ_ν+1^n ∑_r_1+r_2+… + r_d ≤ nC(s(n))^d ≤ d! κ_ν+1^n C(s(n))^d n^dIt is clear from the definition that the sequence s(n) is non-decreasing. Therefore, either it stabilizes or it goes to infinity. Since lim_k→∞C(k)^1/k=1, in both cases we have lim_n→∞C(s(n))^1/n=1. Hence,κ_ν =lim_n→∞γ_ν(n)^1/n≤κ_ν+1lim_n→∞ d!^1/n C(s(n))^1/n n^d/n=κ_ν+1. §.§ Examples Let us now present some examples of non-ℓ_1-expanding similar families of groups of automorphisms of T_d.§.§.§ Spinal groups Spinal groups were first introduced and studied, in a more restrictive version, by Bartholdi and Šuniḱ in <cit.>. A more general version was later introduced by Bartholdi, Grigorchuk and Šuniḱ in <cit.>. Spinal groups form a large family of groups which include many previously studied examples, such as the Grigorchuk groups and the Gupta-Sidki group. Note that the definition we give here is not the most general one, because we consider only regular rooted trees.Let B be a finite group andΩ = {{ω_ij}_i∈, 1≤ j < d|ω_ij∈(B,(d)), ⋂_i≥ k⋂_j=1^d-1ω_ij = {1}∀ k ∈}be the set of sequences of homomorphisms from B to (d) such that the intersection of the kernels is trivial no matter how far into the sequence we start. LetσΩ →Ω ω = {ω_ij}_i∈, 1≤ j ≤ d - 1 ↦σ(ω) = {ω_(i+1)j}_i∈, 1≤ j ≤ d - 1be the left-shift (with respect to the first index), which is well-defined thanks to the way the condition on the kernels was formulated.For each ω = {ω_ij}_i∈, j∈{1,2,…,d-1}∈Ω, we can recursively define a homomorphismβ_ω G_B→(T_d) b↦ (ω_01(b), ω_02(b),…, ω_0(d-1)(b), β_σ(ω)(b))where, as usual, we identify (d) with rooted automorphisms of T_d. The condition on the kernels of sequences in Ω ensures that this homomorphism is injective. Let us write B_ω = β_ω(B)≤(T_d).For a fixed ω = {ω_ij}∈Ω, let A_ω≤(d) be any subgroup of (d). For any k∈^*, we then defineA_σ^k(ω) = ⟨⋃_j=1^dω_kj(B)⟩.Using the notation above, the group G_ω = ⟨ A_ω, B_ω⟩ for some ω∈Ω and A_ω≤(d) is a spinal group if A_σ^k(ω) acts transitively on {1,2,…,d} for all k∈.For ω∈Ω and A_ω≤(d), if G_ω = ⟨ A_ω, B_ω⟩ is a spinal group, then G_σ^k(ω) = ⟨ A_σ^k(ω), B_σ^k(ω)⟩ is a spinal group for all k∈. For any spinal group G_ω, the set S_ω = A_ω∪ B_ω is a finite symmetric generating set. Let |·|_ω S_ω→{0,1} be defined by|g|_ω = 0 ifg∈ A_ω1 otherwise.It is clear from the definition that if g∈ S_ω, we haveg=(g_1,g_2,…,g_d)τwith τ∈ A_ω, g_1,g_2,…, g_d∈ S_σ(ω) and∑_i=1^d|g_i|_σ(ω) = |g|_ω.As explained above, |·|_ω can be extended to a word pseudonorm on G_ω that we will also denote by |·|_ω. The set of elements of length 0 in this pseudonorm is exactly A_ω, which is finite, and since it is true for the generators, we have that for g∈ G_ω,g=(g_1,g_2,…,g_d)τwith τ∈ A_ω, g_1,g_2,…, g_d∈ G_σ(ω) and∑_i=1^d|g_i|_σ(ω)≤ |g|_ω.Hence,{G_ω, S_ω, |·|_ω}_ω∈Ωis a non-ℓ_1-expanding similar family of groups.Hence, for any ω∈Ω,{(G^ω_ν, S^ω_ν, |·|^ω_ν)}_ν∈is a non-ℓ_1-expanding similar family of groups, where G^ω_ν = G_σ^ν(ω), S^ω_ν = S_σ^ν(ω) and |·|_ν^ω = |·|_σ^ν(ω).[The first Grigorchuk group] Let d=2, A = (2)≅/2 and B = (/2)^2. Let a be the non-trivial element of A and b,c,d be the non-trivial elements of B. For x∈{b,c,d}, let ω_x B→ A be the epimorphism that sends x to 1 and the other two non-trivial letters to a. 
The group G_ω with ω = ω_dω_cω_bω_dω_cω_b… (here, since d-1 = 1, there is only one index) and A_ω=A is the first Grigorchuk group, which was first introduced in <cit.>.[Grigorchuk groups] More generally, let d=p, where p is a prime number, A = ⟨(1 2 … p)⟩≅/p and B=(/p)^2. Letϕ_k (/p)^2→/p(x,y)↦ x+kyfor 0≤ k ≤ p-1 and letϕ_p (/p)^2→/p(x,y)↦ y.The groups G_ω with A_ω = A and ω={ω_ij}_i∈, 1≤ j ≤ p-1∈Ω such that ω_i1 =ϕ_k_i for all i∈ and ω_ij = 1 if j 1 are called Grigorchuk groups and were studied in <cit.>.[Šuniḱ groups] Let d=p a prime number, A = ⟨(1 2 … p)⟩≅/p and B=(/p)^m for some m∈. Let ϕ (/p)^m→/p be the epimorphism given by the matrix[ 0 0 … 0 1 ]in the standard basis, ρ (/p)^m → (/p)^m be the automorphism given by[ 0 0 … 0-1; 1 0 … 0 a_1; 0 1 … 0 a_2; ⋮ ⋮ ⋱ ⋮ ⋮; 0 0 … 1 a_m-1 ]in the standard basis, for some a_1,…, a_m-1∈/p, and ω={ω_ij}∈Ω, whereω_i1 = ϕ∘ρ^iand ω_ij = 1 if j 1. For any such ω, we let A_ω = A.The groups G_ω that can be constructed in this way are exactly the groups that were introduced and studied by Šuniḱ in <cit.>. [GGS groups] GGS groups form another important family of examples of spinal groups. They are a generalization of the second Grigorchuk group (introduced in <cit.>) and the groups introduced by Gupta and Sidki in <cit.>. We present here the definition of GGS groups that was given in <cit.>.Let A=⟨(1 2 … d)⟩≅/d, B=/d and ϵ = (ϵ_1,ϵ_2,…, ϵ_d-1) ∈ (/d)^d such that ϵ 0. Let ω = {ω}_ij∈Ω, whereω_ij(1̅) = a^ϵ_j1̅∈ B is the equivalence class of 1 and a=(1 2 d) ∈ A, and let A_ω = A. If (ϵ_1,ϵ_2,…, ϵ_d-1,d) = 1, then G_ω is called a GGS group.§.§.§ Nekrashevych's family of groups 𝒟_ω Let {0,1}^ be the set of infinite sequences of 0 and 1 andσ{0,1}^ →{0,1}^ ω_0ω_1ω_2… ↦ω_1ω_2ω_3…be the left-shift. For ω = ω_0ω_1ω_2…∈{0,1}^, we can recursively define automorphisms β_ω, γ_ω∈(T_2) byβ_ω = (α, γ_σ(ω)) γ_ω = (β_σ(ω), 1) if ω_0=0 (1, β_σ(ω)) if ω_0=1where α∈(T_2) is the non-trivial rooted automorphism of T_2. We can then define the group 𝒟_ω = ⟨α, β_ω, γ_ω⟩. This family of groups was first studied by Nekrashevych in <cit.>.It follows from the definition that α^2=β_ω^2 = γ_ω^2 = 1. Hence, the set S_ω = {α, β_ω, γ_ω} is a finite symmetric generating set of 𝒟_ω. Let |·|_ω S→{0,1} be given by |α|_ω = 0, |β_ω|_ω = |γ_ω|_ω = 1. Then, the family {(G_ν, S_ν, |·|_ν)}_ν∈ is a non-ℓ_1-expanding similar family of automorphisms of T_2, where G_ν = 𝒟_σ^ν(ω), S_ν = S_σ^ν(ω) and |·|_ν = |·|_σ^ν(ω).§.§.§ Peter Neumann's example We present here a group that first appeared as an example in Neumann's paper <cit.>. The description we use here is based on <cit.>.Let A=(6) and X={1,2,…,6}. For every couple (a,x)∈ A× X such that x is a fixed point of a, we can recursively define an automorphism of (T_6) byb_(a,x) = (1,…, b_(a,x),…, 1)awhere the b_(a,x) is in the xth position. LetS={b_(a,x)∈(T_6) | (a,x)∈ A× X },G=⟨ S ⟩ and |·| G → be the word norm associated to S. Then, it is clear from the definition that {(G_ν,S_ν,|·|_ν)}_ν∈ is a non-ℓ_1-expanding similar family of automorphisms of T_6, where G_ν=G, S_ν=S and |·|_ν = |·| for all ν∈. Hence, G is a non-ℓ_1-expanding self-similar group.§.§ Incompressible elements Let {(G_ν, S_ν, |·|_ν)}_ν∈ be a non-ℓ_1-expanding similar family of groups of automorphisms of T_d. 
For any k∈ℕ^*, we recursively define the sets Q_k^ν of elements of G_ν which have no length reduction up to level k as
Q_k^ν = {g = (g_1, g_2, …, g_d)τ ∈ G_ν | g_1, g_2, …, g_d ∈ Q_{k-1}^{ν+1}, ∑_{i=1}^{d} |g_i|_{ν+1} = |g|_ν},
where Q_0^ν = G_ν for all ν∈ℕ. We will call the set
Q_∞^ν = ⋂_{k=1}^{∞} Q_k^ν
the set of incompressible elements of G_ν. This is the set of elements which have no length reduction on any level.

§.§ Growth of incompressible elements

We will see that if every group in a non-ℓ_1-expanding similar family of groups of automorphisms of T_d is generated by incompressible elements and the sets of incompressible elements grow uniformly subexponentially, then the groups themselves are also of subexponential growth. This result is a generalization of the first part of Proposition 5 in <cit.>. The main difference is that we show here that, under our assumptions, it is sufficient to look at the growth of the set Q_∞^ν of incompressible elements instead of the set Q_k^ν of elements which have no reduction up to level k for some k∈ℕ.

Theorem. Let A∈ℕ be an integer, {(G_ν, S_ν, |·|_ν)}_{ν∈ℕ} be a non-ℓ_1-expanding similar family of automorphisms of T_d such that S_ν ⊆ Q_∞^ν and |S_ν| ≤ A for every ν∈ℕ, and let Ω_ν(n) be the sphere of radius n∈ℕ in G_ν with respect to the pseudometric |·|_ν. If there exists a subexponential function δ: ℕ → ℕ with ln(δ) concave such that for infinitely many ν∈ℕ, |Q_∞^ν ∩ Ω_ν(n)| ≤ δ(n) for all n∈ℕ, then the groups G_ν are of subexponential growth for every ν∈ℕ.

Proof. The proof is inspired by the one found in <cit.>, with a few key modifications. The idea is to split the set Ω_ν(n) in two: the set of elements which can be written as a product of a few incompressible elements, and the set of elements which can only be written as a product of a large number of incompressible elements. The first set grows slowly because there are few incompressible elements, and the second set grows slowly because there is a significant amount of length reduction.

Let us fix ν∈ℕ such that |Q_∞^ν ∩ Ω_ν(n)| ≤ δ(n) for all n∈ℕ. In what follows, we will show that κ_ν = 1. By Proposition <ref>, this will show that κ_{ν'} = 1 for all ν' ≤ ν.

Since S_ν ⊆ Q_∞^ν, we have that for every g∈G_ν, the set
{N∈ℕ | g = g_1 g_2 ⋯ g_N, g_i ∈ Q_∞^ν, ∑_{i=1}^{N} |g_i|_ν = |g|_ν}
is not empty. Hence, we can define
N(g) = min{N∈ℕ | g = g_1 g_2 ⋯ g_N, g_i ∈ Q_∞^ν, ∑_{i=1}^{N} |g_i|_ν = |g|_ν}.
For any n∈ℕ and 0 < ϵ < 1, the sphere of radius n in G_ν, Ω_ν(n), can be partitioned in two by the subsets
Ω_ν^>(n,ϵ) = {g∈Ω_ν(n) | N(g) > ϵn} and Ω_ν^<(n,ϵ) = {g∈Ω_ν(n) | N(g) ≤ ϵn}.

Let g∈Ω_ν^>(n,ϵ). By definition of N(g), there exist g_1, g_2, …, g_{N(g)} ∈ Q_∞^ν such that g = g_1 g_2 ⋯ g_{N(g)} and ∑_{i=1}^{N(g)} |g_i|_ν = |g|_ν. Let h_i = g_{2i-1} g_{2i} for 1 ≤ i ≤ ⌊N(g)/2⌋. Then, g = h_1 h_2 ⋯ h_{(N(g)-1)/2} g_{N(g)} if N(g) is odd, and g = h_1 h_2 ⋯ h_{N(g)/2} if N(g) is even. Notice that since |g|_ν = ∑_{i=1}^{N(g)} |g_i|_ν, we must have |h_i|_ν = |g_{2i-1}|_ν + |g_{2i}|_ν. Hence, no h_i can be in Q_∞^ν (otherwise, this would contradict the minimality of N(g)).

Let S(g) = {i | |h_i|_ν ≤ 6/ϵ} be the set of "small" factors of g and L(g) = {i | |h_i|_ν > 6/ϵ} be the set of "large" factors. Clearly, |S(g)| + |L(g)| = ⌊N(g)/2⌋. Since g∈Ω_ν^>(n,ϵ), N(g) is not too small compared to n, which implies that, as long as n is large enough, more than half of the factors of g must be small. More precisely, if n > 3/ϵ, then |S(g)| ≥ (1/2)⌊N(g)/2⌋. Indeed, if that were not the case, then we would have |L(g)| > (1/2)⌊N(g)/2⌋, so
n ≥ ∑_{i=1}^{⌊N(g)/2⌋} |h_i|_ν ≥ ∑_{i∈L(g)} |h_i|_ν > (1/2)⌊N(g)/2⌋ · (6/ϵ) ≥ ((N(g)-1)/4) · (6/ϵ) > (3/2)n - 3/(2ϵ) > n,
which is a contradiction.
Therefore, if n > 3/ϵ,
|S(g)| ≥ (1/2)⌊N(g)/2⌋ ≥ (N(g)-1)/4 > (ϵ/4)n - 1/4 > (ϵ/8)n.
This means that the number of small factors is comparable with n. This is important because, as we will see, every small factor gives us some length reduction on a fixed level (fixed in the sense that it does not depend on n, but only on ϵ). Hence, on this level, we will see a large amount of length reduction.

For r∈ℕ, let
l_ν(r) = max{k∈ℕ | (G_ν ∖ Q_∞^ν) ∩ B_ν(r) ⊆ Q_k^ν} + 1,
where B_ν(r) is the ball of radius r in G_ν. Notice that since (G_ν ∖ Q_∞^ν) ∩ B_ν(r) is finite and not contained in Q_∞^ν, l_ν(r) is well-defined (i.e. finite).

Let us consider the l_ν(6/ϵ)-th level decomposition of g,
g = (g_{11…1}, g_{11…2}, …, g_{dd…d})τ.
Since g = h_1 h_2 ⋯ h_{(N(g)-1)/2} g_{N(g)} if N(g) is odd and g = h_1 h_2 ⋯ h_{N(g)/2} if N(g) is even, we have
∑_{j∈X^{l_ν(6/ϵ)}} |g_j|_{ν+l_ν(6/ϵ)} ≤ (∑_{i=1}^{(N(g)-1)/2} ∑_{j∈X^{l_ν(6/ϵ)}} |h_{i,j}|_{ν+l_ν(6/ϵ)}) + ∑_{j∈X^{l_ν(6/ϵ)}} |g_{N(g),j}|_{ν+l_ν(6/ϵ)}
if N(g) is odd and
∑_{j∈X^{l_ν(6/ϵ)}} |g_j|_{ν+l_ν(6/ϵ)} ≤ ∑_{i=1}^{N(g)/2} ∑_{j∈X^{l_ν(6/ϵ)}} |h_{i,j}|_{ν+l_ν(6/ϵ)}
if N(g) is even, where X^{l_ν(6/ϵ)} is the set of words of length l_ν(6/ϵ) in the alphabet {1,2,…,d}, h_i = (h_{i,11…1}, h_{i,11…2}, …, h_{i,dd…d})τ_i is the l_ν(6/ϵ)-th level decomposition of h_i, and g_{N(g)} = (g_{N(g),11…1}, g_{N(g),11…2}, …, g_{N(g),dd…d})τ_{N(g)} is the l_ν(6/ϵ)-th level decomposition of g_{N(g)}.

It follows from the definition of l_ν(6/ϵ) that h_i ∉ Q_{l_ν(6/ϵ)}^ν for all i∈S(g). Hence, for all i∈S(g),
∑_{j∈X^{l_ν(6/ϵ)}} |h_{i,j}|_{ν+l_ν(6/ϵ)} ≤ |h_i|_ν - 1.
Therefore, as long as n > 3/ϵ,
∑_{j∈X^{l_ν(6/ϵ)}} |g_j|_{ν+l_ν(6/ϵ)} ≤ n - |S(g)| < n - (ϵ/8)n = ((8-ϵ)/8)n.
It follows that for n > 3/ϵ,
|Ω_ν^>(n,ϵ)| ≤ ∑_{k_1+⋯+k_{d^{l_ν(6/ϵ)}} ≤ ((8-ϵ)/8)n} C |Ω_{ν+l_ν(6/ϵ)}(k_1)| ⋯ |Ω_{ν+l_ν(6/ϵ)}(k_{d^{l_ν(6/ϵ)}})| ≤ (((8-ϵ)/8)n)^{d^{l_ν(6/ϵ)}} K(n) κ_{ν+l_ν(6/ϵ)}^{((8-ϵ)/8)n},
where C = [G_ν : St_{G_ν}(l_ν(6/ϵ))] and K(n) is a function such that lim_{n→∞} K(n)^{1/n} = 1. We conclude that, for a fixed ϵ between 0 and 1,
lim sup_{n→∞} |Ω_ν^>(n,ϵ)|^{1/n} ≤ κ_{ν+l_ν(6/ϵ)}^{(8-ϵ)/8}.

On the other hand,
|Ω_ν^<(n,ϵ)| ≤ ∑_{i=1}^{ϵn} ∑_{k_1+⋯+k_i=n} ∏_{j=1}^{i} δ(k_j) ≤ ∑_{i=1}^{ϵn} ∑_{k_1+⋯+k_i=n} δ(n/i)^i
by lemma 6 of <cit.>, since ln(δ) is concave. Hence, assuming that ϵ < 1/2, we have
|Ω_ν^<(n,ϵ)| ≤ ∑_{i=1}^{ϵn} \binom{n}{i-1} max_{1≤i≤ϵn} {δ(n/i)^i} ≤ ϵn \binom{n}{ϵn} max_{1≤i≤ϵn} {δ(n/i)^i}.
Using the fact that \binom{n}{ϵn} ≤ n^{ϵn}/(ϵn)! and Stirling's approximation, we get
|Ω_ν^<(n,ϵ)| ≤ ϵn (e/ϵ)^{ϵn} (C(n)/√(2πϵn)) max_{1≤i≤ϵn} {δ(n/i)^i},
where lim_{n→∞} C(n) = 1. Therefore,
lim sup_{n→∞} |Ω_ν^<(n,ϵ)|^{1/n} ≤ (e/ϵ)^ϵ lim sup_{n→∞} δ(n/i_n)^{i_n/n},
where 1 ≤ i_n ≤ ϵn maximises δ(n/i)^i. Let k_n = n/i_n. Then, 1/ϵ ≤ k_n ≤ n. Since lim_{k→∞} δ(k)^{1/k} = 1, there must exist N∈ℕ such that sup_{1/ϵ≤k} {δ(k)^{1/k}} = sup_{1/ϵ≤k≤N} {δ(k)^{1/k}}. Hence, there exists some K_ϵ∈ℕ with K_ϵ ≥ 1/ϵ such that lim sup_{n→∞} δ(n/i_n)^{i_n/n} ≤ δ(K_ϵ)^{1/K_ϵ}. We conclude that
lim sup_{n→∞} |Ω_ν^<(n,ϵ)|^{1/n} ≤ (e/ϵ)^ϵ δ(K_ϵ)^{1/K_ϵ}
for some K_ϵ ≥ 1/ϵ.

Since, for any 0 < ϵ < 1/2, we have |Ω_ν(n)| = |Ω_ν^>(n,ϵ)| + |Ω_ν^<(n,ϵ)|,
κ_ν = lim_{n→∞} |Ω_ν(n)|^{1/n} = lim_{n→∞} (|Ω_ν^>(n,ϵ)| + |Ω_ν^<(n,ϵ)|)^{1/n} ≤ lim sup_{n→∞} (2 max{|Ω_ν^>(n,ϵ)|, |Ω_ν^<(n,ϵ)|})^{1/n} = max{lim sup_{n→∞} |Ω_ν^>(n,ϵ)|^{1/n}, lim sup_{n→∞} |Ω_ν^<(n,ϵ)|^{1/n}} ≤ max{κ_{ν+l_ν(6/ϵ)}^{(8-ϵ)/8}, e^ϵ (1/ϵ)^ϵ δ(K_ϵ)^{1/K_ϵ}}.

Let us now fix 0 < ϵ < 1/2. There must exist a k∈ℕ such that κ_{ν+k} ≤ e^ϵ (1/ϵ)^ϵ δ(K_ϵ)^{1/K_ϵ}. Indeed, otherwise we would have κ_{ν+i} > e^ϵ (1/ϵ)^ϵ δ(K_ϵ)^{1/K_ϵ} for all i∈ℕ. In particular, this would imply that
κ_ν ≤ κ_{ν+l_ν(6/ϵ)}^{(8-ϵ)/8}.
Let ν'∈ℕ be such that ν' ≥ ν + l_ν(6/ϵ) and |Q_∞^{ν'} ∩ Ω_{ν'}(n)| ≤ δ(n) for all n∈ℕ (such a ν' exists by hypothesis).
Then, we would also have
κ_{ν'} ≤ κ_{ν'+l_{ν'}(6/ϵ)}^{(8-ϵ)/8},
and so, using the fact that, by Proposition <ref>, κ_{ν+l_ν(6/ϵ)} ≤ κ_{ν'}, we would have
κ_ν ≤ κ_{ν'+l_{ν'}(6/ϵ)}^{((8-ϵ)/8)^2}.
By induction, we conclude that for any m∈ℕ^*, there exists k_m∈ℕ such that
κ_ν ≤ κ_{ν+k_m}^{((8-ϵ)/8)^m}.
Since |S_i| ≤ A for every i∈ℕ, we have that κ_i ≤ A for every i∈ℕ. Hence, we get that κ_ν ≤ A^{((8-ϵ)/8)^m} for every m∈ℕ^*, which implies that κ_ν = 1. This contradicts the hypothesis that κ_ν > e^ϵ (1/ϵ)^ϵ δ(K_ϵ)^{1/K_ϵ}.

Therefore, there must exist some i∈ℕ such that κ_{ν+i} ≤ e^ϵ (1/ϵ)^ϵ δ(K_ϵ)^{1/K_ϵ}. By Proposition <ref>, we must have
κ_ν ≤ e^ϵ (1/ϵ)^ϵ δ(K_ϵ)^{1/K_ϵ}.
As the above inequality is valid for any 0 < ϵ < 1/2 and
lim_{ϵ→0} e^ϵ (1/ϵ)^ϵ δ(K_ϵ)^{1/K_ϵ} = 1,
we must have κ_ν = 1, and so G_ν is of subexponential growth.

§ GROWTH OF SPINAL GROUPS

Using the techniques developed by Grigorchuk in <cit.>, one can show that every spinal group acting on the binary rooted tree is of subexponential growth. In this section, we will study the growth of some spinal groups acting on the 3-regular rooted tree T_3; in Section <ref>, we will also show how our criterion recovers the subexponential growth of a large class of spinal groups acting on the binary tree T_2. For T_3, we will be able to prove that the growth is subexponential in several new cases. In particular, our results will imply that all the groups in Šuniḱ's family acting on T_3 (Example <ref>) are of subexponential growth. While this was already known for torsion groups, this was previously unknown for groups with elements of infinite order, except for the case of the Fabrykowski-Gupta group. Unfortunately, we were unable to obtain similar results for spinal groups acting on rooted trees of higher degrees, as the methods used here do not seem to have obvious generalizations in those settings.

§.§ Growth of spinal groups acting on T_3

Let m∈ℕ, ℤ/3 ≅ A = ⟨(1 2 3)⟩ ⊆ Sym(3) and B = (ℤ/3)^m. Let
Ω = {{ω_{i,j}}_{i∈ℕ, 1≤j≤2} | ω_{i,1} ∈ Hom(B,A), ω_{i,2} = 1, ⋂_{i≥k} ker(ω_{i,1}) = 1 for all k∈ℕ}
be a set of sequences of homomorphisms of B into A and σ: Ω → Ω be the left-shift (see Section <ref>). For any ω∈Ω, let us define A_ω = A. Using the notation of Section <ref>, we get spinal groups G_ω = ⟨A, B_ω⟩ acting on T_3, which naturally come equipped with a word pseudonorm |·|_ω assigning length 0 to the elements of A and length 1 to the non-trivial elements of B_ω. In order to streamline the notation, we will drop the indices ω wherever it is convenient and rely on context to keep track of which group we are working in. We will also drop the second index in the sequences of Ω and write ω = ω_0ω_1… ∈ Ω, which is a minor abuse of notation.

The set of incompressible elements of G_ω will be denoted by Q_∞^ω, and we will write Q_∞^ω(n) for the set of incompressible elements of length n. We will write a = (1 2 3) ∈ A, and for any b∈B_ω, we will write b^{a^i} = a^i b a^{-i}, where i∈ℤ/3.

As in the case of the binary tree (see Section <ref>), we have that for every ω∈Ω, the group G_ω is a quotient of A ∗ B_ω. Hence, every element of G_ω can be written as an alternating product of elements of A and B_ω. It follows that every g∈G_ω of length n can be written as
g = β_1^{a^{c_1}} β_2^{a^{c_2}} ⋯ β_n^{a^{c_n}} a^s
for some s∈ℤ/3, β: {1,2,…,n} → B_ω and c: {1,2,…,n} → ℤ/3 (where we use indices to denote the argument of the function in order to make the notation more readable). For any n∈ℕ at least 2 and c: {1,2,…,n} → ℤ/3, we will denote by ∂c: {1,2,…,n-1} → ℤ/3 the discrete derivative of c, that is, ∂c(k) = c_{k+1} - c_k.

Lemma. Let ω∈Ω and g∈G_ω with |g| = n.
Writing
g = β_1^{a^{c_1}} β_2^{a^{c_2}} ⋯ β_n^{a^{c_n}} a^s
for some n∈ℕ, s∈ℤ/3, β: {1,2,…,n} → B_ω and c: {1,2,…,n} → ℤ/3, if g∈Q_∞^ω(n), then there exists m_c∈{1,2,…,n} such that ∂c(k) = 2 if k < m_c and ∂c(k) = 1 if k ≥ m_c.

Proof. If there exists k∈{1,2,…,n-1} such that ∂c(k) = 0, then c_k = c_{k+1}, which means that
|g| = |β_1^{a^{c_1}} β_2^{a^{c_2}} ⋯ (β_k β_{k+1})^{a^{c_k}} ⋯ β_n^{a^{c_n}} a^s| ≤ n-1,
a contradiction. Hence, ∂c(k) ≠ 0 for all k∈{1,2,…,n-1}.

Therefore, to conclude, we only need to show that if ∂c(k) = 1 for some k∈{1,2,…,n-2}, then ∂c(k+1) ≠ 2. For the sake of contradiction, let us assume that ∂c(k) = 1 and ∂c(k+1) = 2 for some k∈{1,2,…,n-2}. Without loss of generality, we can assume that c_k = 0 (indeed, it suffices to conjugate by the appropriate power of a to recover the other cases). We have
β_k β_{k+1}^a β_{k+2} = (α_k, 1, β_k)(β_{k+1}, α_{k+1}, 1)(α_{k+2}, 1, β_{k+2}) = (α_k β_{k+1} α_{k+2}, α_{k+1}, β_k β_{k+2})
for some α_k, α_{k+1}, α_{k+2} ∈ A.
In that case, we haveβ_m_c-1β_m_c^a^2β_m_c+1 = (α_m_c-1^ω_0, 1, β_m_c-1)(1,β_m_c, α_m_c^ω_0)(α_m_c+1^ω_0, 1, β_m_c+1)=(α_m_c - 1^ω_0α_m_c+1^ω_0, β_m_c, β_m_c-1α_m_c^ω_0β_m_c+1)Therefore, the choice of s^(i), c_1^(i) and m_c^(i) give us conditions on ω_0(β_k) for all but at most one k.By induction, on level l+1, the choice of s^(x), c_1^(x),m_c^(x) for all words x of length at most l+1 in the alphabet {1,2,3} (that is, for all vertices of the tree up to level l+1) determine ω_i(β_k) for all 1≤ i ≤ l and for all k∈{1,2,…, n} except for at most ∑_j=0^l3^j = 3^l+1-1/2. Since ∩_i=0^lω_i = {1}, for each k, there is at most one β_k∈ B having the prescribed images ω_i(β_k) for all 1≤ i ≤ l.Since there are 3^l+2-1/2 vertices in the tree up to level l+1, we have 3^3^l+2-1/2 choices for s(x) and 3^3^l+2-1/2 choices for c_1(x). Since m_c^(x) satisfies 1≤ m_c^(x)≤ n, there are at most n^3^l+2-1/2 choices for m_c^(x). Once all these choices are made, β is completely determined, except for at most 3^l+1-1/2 values. For each of these, we have |B|-1 choices, so there are at most (|B|-1)^3^l+1-1/2 choices for β. Hence, there are at mostC_ln^3^l+2-1/2elements in _∞^ω(n), whereC_l = 3^3^l+2-1(|B|-1)^3^l+1-1/2. With this, we can prove that many spinal groups are of subexponential growth. Let ω∈Ω and G_ω be the associated spinal group of automorphisms of T_3. If there exists l∈ such that ∩_i=k^k+l(ω_i) = 1 for infinitely many k∈, then G_ω is of subexponential growth. According to Proposition <ref>, there exist infinitely many k∈ such that|_∞^σ^k(ω)(n)| ≤ C_ln^3^l+2-1/2for some C_l∈. Since ln(C_ln^3^l+2-1/2) is concave, the result follows from Theorem <ref>.§ ACKNOWLEDGEMENTSThis work was supported by the Natural Sciences and Engineering Research Council of Canada. The author would like to thank Tatiana Nagnibeda for many useful discussions and suggestions, as well as Laurent Bartholdi and Rostislav Grigorchuk for reading previous versions of this paper and offering helpful comments. § GROWTH OF SPINAL GROUPS§.§ Growth of spinal groups acting on T_2 In this section, we will show that many spinal groups acting on T_2 are of subexponential growth. Let m∈^*, A=(2)≅/2 and B=(/2)^m. LetΩ = {ω=ω_0ω_1ω_2…∈(B,A)^|⋂_j≥ i (ω_j) = {1}∀ i∈}be a set of sequences of epimorphisms. Notice that Ω is closed under the left-shift σ. For ω∈Ω, let A_ω = A. Using the notation of Section <ref>, the group G_ω = ⟨ A_ω, B_ω⟩ is a spinal group, and in fact, it is straightforward to show that all spinal groups acting on T_2 are in fact of this form. Again as in Section <ref>, we equip G_ω with the word pseudonorm |·|_ω induced by giving length 0 to the elements of A_ω and length 1 to the non-trivial elements of B_ω. In what follows, we will use the letter a to denote the non-trivial element of A_ω (for any ω∈Ω). We will also denote by _∞^ω the set of incompressible elements in G_ω.Since G_ω is a quotient of A_ω∗ B_ω≅ (/2)∗ (/2)^m, every element of G_ω can be written as an alternating product of elements of A_ω and B_ω, and geodesics in G_ω are all of this form. We will often make no distinction between elements of G_ω and reduced words in the alphabet A_ω∪ B_ω, i.e. elements of A_ω∗ B_ω≅ (/2)∗ (/2)^m. This identification is harmless, because passing from reduced words to elements in the group can only reduce the length.Let g=(g_1,g_2)τ∈ G_ω for some ω∈Ω, where G_ω is the spinal group associated to ω as defined above and τ∈ A_ω. Then, |g_i|_σ(ω)≤|g|_ω + 1/2 (i=1,2). Let c_ω∈ B_ω. 
We have c_ω = (ω_0(c),c_σ(ω)), so the left child has length 0 and the right child has length 1, and ac_ω a = (c_σ(ω),ω_0(c)), so the left child has length 1 and the right child has length 0. As g can be written as an alternating product of a and elements of B_ω, we see that only half (plus one to account for parity) of the elements of B_ω in g contribute to length on the left child and the other half only contributes to adding length on the right child. The result follows. Let ω=ω_0ω_1ω_2…∈Ω and let G_ω be a spinal group acting on T_2 as defined above. Then, there is at most one element b_ω∈ B_ω such that b_ω∉(ω_i) for all i∈. Suppose that there exist b_ω, c_ω∈ B_ω such that b_ω, c_ω∉(ω_i) for all i∈. Then, ω_i(b_ω)=ω_i(c_ω)=a for all i∈, where a is the non-trivial element of A. Hence, ω_i(b_ω c_ω) = a^2 = 1 for all i∈, which implies that b_ω = c_ω.Let G_ω be the spinal group associated to ω=ω_0ω_1ω_2…∈Ω and c∈ B be such that ω_m(c) = 1 for some m∈. Then, for all g,h∈ G_ω such that |g|_ω, |h|_ω≥ 2^m+1-1 and gc_ω h is geodesic, we havegc_ω h ∉_∞^ω.Let us first notice that in order for gc_ω h to be geodesic, g must end with a and h must begin with a.We will proceed by induction on m. For m=0, we have ω_0(c)=1. Hence, c_ω = (1,c_σ(ω)). Let g=g'd_ω a,h = ae_ω h'∈ G_ω with d_ω, e_ω∈ B_ω and |g'|_ω = |g|_ω -1, |h'|_ω = |h|_ω - 1. Then,gc_ω h = g'd_ω a c_ω a e_ω h' = g' (ω_0(d)c_σ(ω)ω_0(e), d_σ(ω)e_σ(ω))h'.Since|ω_0(d)c_σ(ω)ω_0(e)|_σ(ω) + |d_σ(ω)e_σ(ω)|_σ(ω) = 2≤ 3 = |d_ω a c_ω a e_ω|_ωwe clearly have gc_ω h ∉_∞^ω.Now, let us assume that the result is true for some m∈ and any sequence ω∈Ω and let us show that it must also hold for m+1.Let c∈ B_ω be such that ω_m+1(c) = 1 and g,h∈ G_ω be such that |g|_ω, |h|_ω≥ 2^m+2-1 and gc_ω h is geodesic. Let us write g=τ_g(g_1,g_2), with τ_g∈ A. If |g_i|_σ(ω) < |g|_ω-1/2 for some i∈{1,2}, then g∉_∞^ω, and so gc_ω h ∉_∞^ω. Indeed, according to Lemma <ref>, we have |g_i|_σ(ω)≤|g|_ω + 1/2 for i=1,2, so if one of these is smaller than |g|_ω - 1/2, we must have |g_1|_σ(ω) + |g_2|_σ(ω) < |g|_ω. Likewise, writing h=(h_1,h_2)τ_h with τ_h∈ A, if |h_i|_σ(ω)<|h|_ω - 1/2 for some i∈{1,2}, then gc_ω h ∉_∞^ω.If |g_i|_σ(ω)≥|g|_ω-1/2 and |h_i|_σ(ω)≥|h|_ω-1/2 for i=1,2, then we havegc_ω h = τ_g(g_1ω_0(c)h_1, g_2c_σ(ω) h_2)τ_hwith |g_2|_σ(ω),|h_2|_σ(ω)≥ 2^m+1-1 and σ(ω)_m(c) = ω_m+1(c)=1. If g_2c_σ(ω)h_2 is not geodesic, then |g_2c_σ(ω)h_2|_σ(ω) < |g_2|_σ(ω) + |c_σ(ω)|_σ(ω) + |h_2|_σ(ω) so gc_ω h∉_∞^ω. On the other hand, if g_2 c_σ(ω) h_2 is geodesic, then the induction hypothesis applies and we conclude that g_2c_σ(ω) h_2∉_∞^ω, which means that gc_ω h∉_∞^ω.For every ω=ω_0ω_1ω_2…∈Ω, there exists a constant C_ω∈ such that |_∞^ω(n)| ≤ C_ω for all n∈, where _∞^ω(n) is the set of incompressible elements of length n in the spinal group G_ω. For ω=ω_0ω_1ω_2…∈Ω and c∈ B, letl(c) = min{m∈|ω_m(c)=1}if min{m∈|ω_m(c)=1}∅ and l(c)=0 if ω_m(c)=a for all m∈.If w∈_∞^ω, then according to Lemma <ref>, for any c∈ B such that there exists m∈ with ω_m(c)=1, a geodesic for w can only contain c_ω in the first 2^l(c)+1 - 1 or in the last 2^l(c)+1 - 1 positions.LetM_ω = max_c∈ B{l(c)}and let C_ω = 16(|B|-1)^2^2M_ω + 2. Notice that there are at most C_ω/4 reduced words in A∪ B of length smaller that 2^2M_ω + 2.If g∈_∞^ω is an element of length n ≥ 2^M_ω + 2, we can decompose g asg=g_1g_2g_3,where |g_1|_ω+|g_2|_ω+ |g_3|_ω = |g|_ω and |g_1|_ω, |g_3|_ω≤ 2^M_ω + 1. We have at most 4(|B|-1)^2^M_ω + 1 possibilities for g_1 and 4(|B|-1)^2^M_ω + 1 possibilities for g_2. 
Since the only elements b_ω∈ B_ω which can appear in a geodesic of g_2 must satisfy ω_m(b)=a for all m∈, it follows from Lemma <ref> that there is at most one possibility for g_2. Hence, there are at most C_ω elements in _∞^ω(n).Let ω=ω_0ω_1ω_2…∈Ω be such that C_σ^k(ω) ≤ D for some D∈ and infinitely many k∈. Then, G_ω is of subexponential growth. Thanks to Proposition <ref>, it suffices to apply Theorem <ref> to G_ω with δ(n) = D.The theorem applies if there exists d∈ such that |∪_i=0^d(ω_k+i)| = 2^m-1 for infinitely many k∈. In particular, all the Šuniḱ groups (see Example <ref>) are of subexponential growth.Since we are working on the binary tree, where, according to <ref>, there is at most one generator which is never in the kernel, it is possible to extend the proof that Grigorchuk used in <cit.> to show that every spinal group acting on the binary tree is of subexponential growth. §.§ Growth of spinal groups acting on T_3 In this section, we will study the growth of some spinal groups acting on the 3-regular rooted tree T_3.Let m∈, A = ⟨ (1 2 3) ⟩⊆(3) ≅/3 and B=(/3)^m. LetΩ = {{ω_ij}_i∈, 1≤ j ≤ 2|ω_i,1∈(B,A), ω_i,2 = 1, ⋂_i≥ k(ω_ij) = 1 ∀ k∈}be a set of sequences of homomorphisms of B into A and σΩ→Ω be the left-shift (see Section <ref>). For any ω∈Ω, let us define A_ω = A. Using the notation of Section <ref>, we get spinal groups G_ω = ⟨ A, B_ω⟩ acting on T_3 which naturally come equipped with a word pseudonorm |·|_ω assigning length 0 to elements of A and length 1 to elements of B_ω. In order to streamline the notation, we will drop the indices ω wherever it is convenient and rely on context to keep track of which group we are working in. We will also drop the second index in the sequences of Ω and write ω=ω_0ω_1…∈Ω, which is a minor abuse of notation.The set of incompressible elements of G_ω will be denoted by _∞^ω, and we will write _∞^ω(n) for the set of incompressible elements of length n.We will write a=(1 2 3) ∈ A, and for any b∈ B_ω, we will write b^a^i = a^iba^-i where i ∈/3.As in the case of the binary tree, we have that for every ω∈Ω, the group G_ω is a quotient of A∗ B_ω. Hence, every element of g_ω can be written as an alternating product of elements of A and B_ω.It follows that every g∈ G_ω of length n can be written asg=β_1^a^c_1β_2^a^c_2…β_n^a^c_na^sfor some s∈/3, β{1,2,…, n}→ B_ω and c{1,2,…, n}→/3 (where we use indices to denote the argument of the function in order to make the notation more readable).For any n∈ at least 2 and c{1,2,…, n}→/3, we will denote by ∂ c{1,2,… n-1}→/3 the discrete derivative of c, that is,∂ c(k) = c_k+1-c_k. Let ω∈Ω and g∈ G_ω with |g|=n. Writing g=β_1^a^c_1β_2^a^c_2…β_n^a^c_na^sfor some n∈, s∈/3, β{1,2,…, n}→ B_ω and c{1,2,…, n}→/3, if g∈_∞^ω(n), then there exists m_c∈{1,2,…, n} such that∂ c(k) =2ifk< m_c 1ifk≥ m_c.If there exists k∈{1,2,…, n-1} such that ∂ c(k) = 0, then c(k)=c(k+1), which means that |g|=|β_1^a^c_1β_2^a^c_2… (β_kβ_k+1)^a^c_k…β_n^a^c_na^s| ≤ n-1a contradiction. Hence, ∂ c(k)0 for all k∈{1,2,…, n-1}.Therefore, to conclude, we only need to show that if ∂ c(k) = 1 for some k∈{1,2,…, n-2}, then ∂ c(k+1)2. For the sake of contradiction, let us assume that ∂ c(k) = 1 and ∂ c(k+1) = 2 for some k∈{1,2,…, n-2}. Without loss of generality, we can assume that c_k=0 (indeed, it suffices to conjugate by the appropriate power of a to recover the other cases). We haveβ_kβ_k+1^aβ_k+2 = (α_k, 1, β_k)(β_k+1,α_k+1, 1)(α_k+2, 1, β_k+2)=(α_kβ_k+1α_k+2, α_k+1, β_kβ_k+2)for some α_k,α_k+1, α_k+2∈ A. 
Since

|α_k β_{k+1} α_{k+2}| + |α_{k+1}| + |β_k β_{k+2}| = 2 < 3 = |β_k β_{k+1}^{a} β_{k+2}|,

there is some length reduction on the first level, so g∉ I_∞^ω.

It follows from Lemma <ref> that an element g=β_1^{a^{c_1}} β_2^{a^{c_2}} … β_n^{a^{c_n}} a^s ∈ I_∞^ω is uniquely determined by the data (β, s, c_1, m_c), where β: {1,2,…, n}→ B_ω, s, c_1∈ℤ/3ℤ and m_c∈{1,2,…, n}. Of course, not every possible choice corresponds to an element of I_∞^ω. In what follows, we will bound the number of good choices for (β, s, c_1, m_c).

Let ω = ω_0ω_1ω_2…∈Ω and let l∈ℕ be the smallest integer such that ⋂_{i=0}^{l} ker(ω_i) = 1. Then, there exists a constant C_l∈ℕ such that |I_∞^ω(n)| ≤ C_l n^{(3^{l+2}-1)/2} for all n∈ℕ.

Let us fix n∈ℕ, s, c_1∈ℤ/3ℤ and m_c∈{1,2,…, n}, and let c: {1,2,…, n}→ℤ/3ℤ be the unique sequence such that c(1)=c_1 and ∂c(k) = 2 if k < m_c, ∂c(k) = 1 if k ≥ m_c, for all k∈{1,2,…, n-1}. We will try to bound the number of maps β: {1,2,…,n}→ B_ω∖{1} such that g=β_1^{a^{c_1}} β_2^{a^{c_2}} … β_n^{a^{c_n}} a^s ∈ I_∞^ω(n).

Assuming that g∈ I_∞^ω(n), let us look at the first-level decomposition of g, g=(g_1,g_2,g_3)a^s. Since g∈ I_∞^ω, we must have |g|=|g_1|+|g_2|+|g_3|. As for any k∈{1,2,…,n}, β_k^{a^{c_k}} adds 1 to the length of g and to the length of exactly one of g_1, g_2 or g_3 no matter the value of β_k, we conclude that |g_1|, |g_2|, |g_3| do not depend on β.

Since g∈ I_∞^ω, we must also have that g_1, g_2, g_3∈ I_∞^{σ(ω)}. Hence, for i=1,2,3, there must exist s^{(i)}, c_1^{(i)}∈ℤ/3ℤ, m_c^{(i)}∈{1,2,…, |g_i|} and β^{(i)}: {1,2,…, |g_i|}→ B_σ(ω)∖{1} such that

g_i = (β_1^{(i)})^{a^{c_1^{(i)}}} (β_2^{(i)})^{a^{c_2^{(i)}}} … (β_{|g_i|}^{(i)})^{a^{c_{|g_i|}^{(i)}}} a^{s^{(i)}},

where c^{(i)} is the unique map satisfying c^{(i)}(1)=c_1^{(i)} and ∂c^{(i)}(k) = 2 if k < m_c^{(i)}, ∂c^{(i)}(k) = 1 if k ≥ m_c^{(i)}.

It is clear that the maps β^{(i)} are completely determined by the map β. Therefore, to specify g_1, g_2, g_3, we only need to consider s^{(i)}, c_1^{(i)} and m_c^{(i)}. However, the choice of s^{(i)}, c_1^{(i)} and m_c^{(i)} imposes some non-trivial conditions on β. Indeed, once these three numbers are fixed, we have

g_i = a^{k_1^{(i)}} □ a^{k_2^{(i)}} □ … □ a^{k_{|g_i|}^{(i)}} □ a^{k_{|g_i|+1}^{(i)}},

where the □ are unspecified elements of B_σ(ω)∖{1} and the k_j^{(i)} are uniquely determined by s^{(i)}, c_1^{(i)} and m_c^{(i)}. These k_j^{(i)} completely determine ω_0(β_k) for all but at most one k∈{1,2,…, n}. Indeed, for k∈{1,2,…,n}, we have

β_k^{a^{c_k}} = (α_k^{ω_0}, 1, β_k) if c_k=0, (β_k, α_k^{ω_0}, 1) if c_k=1, (1, β_k, α_k^{ω_0}) if c_k=2,

where α_k^{ω_0} = ω_0(β_k)∈ A. As long as ∂c is constant, c is a subsequence of … 021021021 … or … 012012012 …, which means that g_1, g_2 and g_3 will be given by an alternating product of α_k^{ω_0} and β_k, except perhaps at m_c, if 1<m_c<n. Let us assume for the sake of illustration that c_{m_c-1} = 0 (the other two cases are obtained simply by permuting the indices). In that case, we have

β_{m_c-1} β_{m_c}^{a^2} β_{m_c+1} = (α_{m_c-1}^{ω_0}, 1, β_{m_c-1})(1, β_{m_c}, α_{m_c}^{ω_0})(α_{m_c+1}^{ω_0}, 1, β_{m_c+1}) = (α_{m_c-1}^{ω_0} α_{m_c+1}^{ω_0}, β_{m_c}, β_{m_c-1} α_{m_c}^{ω_0} β_{m_c+1}).

Therefore, the choice of s^{(i)}, c_1^{(i)} and m_c^{(i)} gives us conditions on ω_0(β_k) for all but at most one k.

By induction, on level l+1, the choice of s^{(x)}, c_1^{(x)}, m_c^{(x)} for all words x of length at most l+1 in the alphabet {1,2,3} (that is, for all vertices of the tree up to level l+1) determines ω_i(β_k) for all 0 ≤ i ≤ l and for all k∈{1,2,…, n} except for at most ∑_{j=0}^{l} 3^j = (3^{l+1}-1)/2 of them. Since ⋂_{i=0}^{l} ker(ω_i) = {1}, for each k, there is at most one β_k∈ B having the prescribed images ω_i(β_k) for all 0 ≤ i ≤ l.

Since there are (3^{l+2}-1)/2 vertices in the tree up to level l+1, we have 3^{(3^{l+2}-1)/2} choices for s^{(x)} and 3^{(3^{l+2}-1)/2} choices for c_1^{(x)}.
Since m_c^{(x)} satisfies 1 ≤ m_c^{(x)} ≤ n, there are at most n^{(3^{l+2}-1)/2} choices for m_c^{(x)}. Once all these choices are made, β is completely determined, except for at most (3^{l+1}-1)/2 values. For each of these, we have |B|-1 choices, so there are at most (|B|-1)^{(3^{l+1}-1)/2} choices for β. Hence, there are at most

C_l n^{(3^{l+2}-1)/2}

elements in I_∞^ω(n), where

C_l = 3^{3^{l+2}-1} (|B|-1)^{(3^{l+1}-1)/2}.

With this, we can prove that many spinal groups are of subexponential growth.

Let ω∈Ω and let G_ω be the associated spinal group of automorphisms of T_3. If there exists l∈ℕ such that ⋂_{i=k}^{k+l} ker(ω_i) = 1 for infinitely many k∈ℕ, then G_ω is of subexponential growth.

According to Proposition <ref>, there exist infinitely many k∈ℕ such that |I_∞^{σ^k(ω)}(n)| ≤ C_l n^{(3^{l+2}-1)/2} for some C_l∈ℕ. Since ln(C_l n^{(3^{l+2}-1)/2}) is concave, the result follows from Theorem <ref>.

In particular, all groups in Šunić's family acting on T_3 (Example <ref>) are of subexponential growth. While this was already known for torsion groups, it was previously unknown for groups with elements of infinite order, except for the case of the Fabrykowski-Gupta group.
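To make the bound concrete, here is a small worked instantiation (our own illustration, not part of the original argument), taking l = 0, i.e. ker(ω_0) = 1, as for the Fabrykowski-Gupta group:

```latex
% Worked check of the bound |I_\infty^\omega(n)| <= C_l n^{(3^{l+2}-1)/2}
% in the case l = 0 (assumed: \ker(\omega_0) = 1, e.g. Fabrykowski-Gupta).
\[
  \left.\frac{3^{l+2}-1}{2}\right|_{l=0} = \frac{3^{2}-1}{2} = 4,
  \qquad
  C_0 = 3^{3^{2}-1}\,(|B|-1)^{\frac{3^{1}-1}{2}} = 3^{8}\,(|B|-1),
\]
\[
  \text{so}\quad |I_\infty^{\omega}(n)| \le 3^{8}\,(|B|-1)\, n^{4}
  \quad\text{for all } n \in \mathbb{N}.
\]
% The number of incompressible elements thus grows at most polynomially,
% which is what the subexponential-growth criterion above requires.
```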
http://arxiv.org/abs/1702.08047v1
{ "authors": [ "Dominik Francoeur" ], "categories": [ "math.GR" ], "primary_category": "math.GR", "published": "20170226153519", "title": "On the subexponential growth of groups acting on rooted trees" }
A Kennicutt-Schmidt relation at molecular cloud scales and beyond

Sergey A. Khoperskov and Evgenii O. Vasiliev

=================================================================

Using N-body/gasdynamic simulations of a Milky Way-like galaxy we analyse a Kennicutt-Schmidt relation, Σ_SFR ∝ Σ_gas^N, at different spatial scales. We simulate synthetic observations in CO lines and in the UV band. We adopt the star formation rate (SFR) defined in two ways: based on the free-fall collapse of a molecular cloud – Σ_SFR,ff, and calculated by using a UV flux calibration – Σ_SFR,UV. We study the KS relation for spatially smoothed maps with effective spatial resolution from molecular cloud scales to several hundred parsecs. We find that for spatially and kinematically resolved molecular clouds the Σ_SFR,ff ∝ Σ_gas^N relation follows a power law with index N ≈ 1.4. Using the UV flux as SFR calibrator we confirm a systematic offset between the Σ_gas and Σ_SFR,UV distributions on scales comparable to molecular cloud sizes. Degrading the resolution of our simulated maps of the surface densities of gas and star formation rate, we establish that there is no Σ_SFR–Σ_gas relation below a resolution of ∼50 pc. We find a transition range around scales of ∼50-120 pc, where the power-law index N increases from 0 to 1-1.8, and it saturates for scales larger than ∼120 pc. The saturated value of the index depends on the surface gas density threshold, and it becomes steeper for a higher threshold. Averaging over scales of 150 pc, the power-law index N equals 1.3-1.4 for a surface gas density threshold of ∼5 M_⊙ pc^-2. At scales ≳120 pc the surface SFR densities determined by using CO data and UV flux, Σ_SFR,ff/Σ_SFR,UV, demonstrate a discrepancy of about a factor of 3. We argue that this may originate from overestimated (constant) values of the conversion factor, star formation efficiency or UV calibration used in our analysis.

Galaxies: star formation, Galaxies: ISM, ISM: clouds

§ INTRODUCTION

The empirically established Kennicutt-Schmidt (KS) relation quantitatively tells us how much molecular gas is required to support star formation at a given rate in a galaxy <cit.>. Further observational studies demonstrate that this relation remains valid for galaxies of various type and mass <cit.>. The growth of multiwavelength data for different galaxies and the increase of spatial resolution have led to a transition from studying disk-averaged star formation to sub-kpc scales <cit.>. Using the CO and Hα datasets, <cit.> found in M 33 that the star formation and gas surface densities correlate well at 1 kpc resolution, whereas the correlation becomes weaker with higher spatial resolution and breaks down at giant molecular cloud (GMC) scales. In a recent study <cit.> came to the same conclusions based on more data at different wavelengths, including FUV as well. Analysing the spatial distribution of CO/Hα peaks in M 33, <cit.> established that the scaling relation between gas and star formation rate surface density observed at large scales does not have its direct origin in an instantaneous cloud-scale relation.
This consequently produces a breakdown in the star formation law as a function of the surface density of the star-forming regions. Obviously, the relation is believed to be violated at small scales due to the drift of young clusters from their parental GMCs <cit.>, mechanical stellar feedback effects <cit.>, the role of turbulence <cit.>, or other mechanisms. From another point of view, <cit.> found that at low molecular gas surface density and on sub-kpc scales, an accurate determination of the slope on the basis of CO observations will be difficult due to uncertainties of the CO/H_2 conversion factor. Thus, it is claimed that a KS relation is valid only on scales larger than that of GMCs, when the spatial offset between GMCs and star-forming regions is smoothed out <cit.>, and the relation holds only for averaging over sufficiently large scales <cit.>.

Observational evidence for the spatial offset between molecular gas and star-forming regions has been found for M 51 <cit.> and four nearby low-luminosity AGNs <cit.>. Deviations from the KS-type relation have been found on small scales (≤ 100-200 pc) at low gas densities <cit.>. Following <cit.>, who demonstrated a variation of the power-law index, <cit.> also found that a KS relation can be either sublinear or superlinear: the slopes range from 0.5 to 1.3 and increase with spatial scale. The observed spatial scale at which a KS relation breaks down ranges from 100 to 500 pc <cit.>. Such scatter may originate from both physical and systematic effects; one of the latter is that the multiwavelength datasets used in the analysis are frequently obtained at different spatial resolutions, e.g. varying from 50 to 200 pc <cit.>. In general, a variation of the breakdown scale can be considered by using a kind of uncertainty principle <cit.>, which states that a breakdown of the star formation–density relation on small spatial scales is expected due to the incomplete sampling of both star-forming regions and the initial mass function, and the spatial drift between gas and stars.

One part of the KS relation, the gas surface density Σ_gas, is determined by atomic and molecular hydrogen densities. The former is obtained in HI surveys; the latter is not directly observed and can be found from CO measurements assuming some CO-H_2 conversion factor <cit.>. The other part of the relation, the surface SFR density Σ_SFR, can be defined in several ways. First, a Σ_SFR value can be found by estimating the free-fall time for a given collapsing cloud. This approach is usually used in numerical simulations, where both mass and volume gas density can be determined. Observationally, a surface SFR density value is obtained by converting various SFR calibrations/estimators in photometric bands and spectral lines, e.g., FUV, IR, FIR, Hα, Pα, etc. <cit.>. These calibrations may be used alone or combined in composite tracers. Some of them are connected not only with the stellar population, but also with recombinations in ionized gas. Young stars emit copious UV photons, so that the UV flux is a direct SFR estimator. Certainly, observations of UV radiation are hampered by many physical obstacles, mainly connected to interstellar dust attenuation, but in numerical simulations one can transfer radiation correctly to obtain the UV radiation field and, as a consequence, the surface SFR density.

Here, based on our numerical simulations of galactic evolution <cit.>, we analyse how the KS relation behaves on sub-galactic scales and aim to find a spatial scale at which the relation breaks down.
We investigate in detail how the relation depends on the spatial resolution of our synthetic observations. The paper is organized as follows. Section <ref> describes our model in brief. Section <ref> presents our results. Section <ref> summarizes our findings.

§ MODEL

To simulate the galaxy evolution we use our code based on the unsplit TVD MUSCL (Total Variation Diminishing Multi Upstream Scheme for Conservation Laws) scheme for gas dynamics and the N-body method for stellar component dynamics. Stellar dynamics is calculated by using the second-order flip-flop integrator. For the gravity calculation we use the TreeCode approach. A complete description of our code can be found in <cit.>. Below we describe it in brief, with particular stress on the chemical kinetics and radiation transfer parts.

We implemented the self-consistent cooling/heating processes <cit.> coupled with the chemical evolution of 20 species including CO and H_2 molecules, using the simplified chemical network described by <cit.>. Based on our simple model for H_2 chemical kinetics <cit.> we expand the <cit.> network by several reactions needed for hydrogen ionization and recombination. For H_2 and CO photodissociation we use the approach described by <cit.>. The CO photodissociation cross section is taken from <cit.>. In our radiation transfer calculation described below we get the ionizing flux at the surface of a computational cell. To calculate self-shielding factors for the CO and H_2 photodissociation rates and the dust absorption factor for a given cell, we use the local number densities of gas and molecules, e.g. f_sh^H_2 = n_H_2 L, where n_H_2 is the H_2 number density in a given cell and L is its physical size. The chemical network equations are solved by the CVODE package <cit.>. We assume that the gas has solar metallicity with the abundances given in <cit.>: [C/H] = 2.45 × 10^-4, [O/H] = 4.57 × 10^-4, [Si/H] = 3.24 × 10^-5. Dust depletion factors are equal to 0.72, 0.46 and 0.2 for C, O and Si, respectively. We suppose that silicon is singly ionized and oxygen stays neutral.

For cooling and heating processes we extend our previous model <cit.> by the CO and OH cooling rates <cit.> and the CI fine structure cooling rate <cit.>. The other cooling and heating rates are presented in detail in Table 2 of <cit.>. Here we simply provide a list of them: cooling due to recombination, collisional excitation and free-free emission of hydrogen <cit.>, molecular hydrogen cooling <cit.>, cooling in the fine structure and metastable transitions of carbon, oxygen and silicon <cit.>, energy transfer in collisions with the dust particles <cit.> and recombination cooling on the dust <cit.>, photoelectric heating on the dust particles <cit.>, heating due to H_2 formation on the dust particles and H_2 photodissociation <cit.>, and the ionization heating by cosmic rays <cit.>. In our simulations we reach gas temperatures as low as 10 K and number densities as high as 5×10^3 cm^-3.

In the star formation recipe adopted in our model, mass, energy and momentum from the gaseous cells where the star formation criteria are satisfied (local Jeans instability, converging flow, temperature threshold) are transferred directly to newborn stellar particles. To compute the mass loss, energy feedback and UV emission radiated by the stellar population we use the stellar evolution code STARBURST'99 <cit.>, assuming solar metallicity of stars and a Salpeter IMF with mass limits of 0.1 and 100 M_⊙. We render the UV radiation from young stars by tracing the ultraviolet photon rays on the fly.
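To fix ideas, the cell-level star formation test described above can be sketched as follows. This is our own illustrative model: the mean molecular weight and the 100 K temperature cut are assumptions, not values quoted in the text.

```python
import numpy as np

G = 6.674e-8      # gravitational constant [cgs]
PC = 3.086e18     # parsec [cm]

def jeans_mass(T, n, mu=2.3):
    """Jeans mass [g] for gas temperature T [K] and number density n [cm^-3];
    mu is an assumed mean molecular weight for cold molecular gas."""
    m = mu * 1.673e-24                        # mean particle mass [g]
    rho = m * n
    cs = np.sqrt(1.38e-16 * T / m)            # isothermal sound speed [cm/s]
    lam_j = cs * np.sqrt(np.pi / (G * rho))   # Jeans length [cm]
    return rho * (4.0 / 3.0) * np.pi * (lam_j / 2.0) ** 3

def forms_star(cell, dx=4.0 * PC, T_max=100.0):
    """Schematic version of the three criteria: Jeans-unstable, converging,
    cold. `cell` carries n [cm^-3], T [K] and the velocity divergence [1/s]."""
    m_cell = 2.3 * 1.673e-24 * cell["n"] * dx ** 3
    return (m_cell > jeans_mass(cell["T"], cell["n"])   # local Jeans instability
            and cell["div_v"] < 0.0                     # converging flow
            and cell["T"] < T_max)                      # temperature threshold
```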
To account for molecule photodestruction, we need to know the spatial structure of the UV background in the galactic disc. Recent observations provide some evidence for significant radial and azimuthal variations of the UV flux in nearby galaxies <cit.>. No doubt such variations are driven by local star formation, so we need to include radiation feedback from stellar particles in our calculations.

Throughout our simulations the UV emission of each stellar particle is computed with the stellar evolution code STARBURST'99 <cit.>, assuming solar metallicity of the stellar population, so that for each particle we know its luminosity evolution. We then separate particles into two groups: young stellar particles (age smaller than 20 Myr) and the others. For definiteness we assume a uniform background field ten times lower than that in the Solar neighbourhood, F_b=0.1 Habing. Thus the UV background F_UV in a hydrodynamical cell with coordinates r_0 can be written as

F_UV(r_0) = F_b + ∑_i F^old_i(r_0) + ∑_j F^young_j(r_0, r_j),

where ∑_i F^old_i(r_0) is the contribution from the old stellar population (age > 20 Myr), which plays a role only locally, in the cell where the stellar particle is located (r_0). The last term is the UV flux from the young stellar population – the brightest stars. Their contribution is the most important one for the photodestruction of molecules in the surrounding medium.

Due to the small number of young stellar particles at each integration time step, we can use the ray-tracing approach for each stellar particle. For the j-th young particle we estimate the radius of the spherical shell (similar to the Strömgren sphere) where the UV field decreases down to 0.1 Habing:

R^d_j = 0.1 δ √(L^*_j / (4π)),

where L^*_j is the luminosity of the j-th stellar particle in Habing units and δ is the effective cell size. For each shell we calculate the UV flux assuming the optical depth τ = 2N / (10^21 cm^-2), where N is the total column density of gas in cm^-2. Thus we obtain the distribution of the UV intensity over the entire galactic disc according to Eq. <ref>.

Previously we simulated the evolution of galaxies of different morphological types <cit.>. Here we restrict our study to one model, because the results for the others are similar. We consider the model of a disk galaxy with a bar and four spiral arms — model B in our list of models in <cit.> — which mimics the Milky Way morphology. This model is characterized by a constant star formation rate of 4.5-5 M_⊙ yr^-1 during the first 600 Myr of evolution. Similar to our previous study, here we analyse the galaxy at 500 Myr. At this moment we have 2×10^6 stellar particles of different ages. The spatial resolution in our simulations of gas dynamics is 4 pc, which is notably smaller than what can be reached in the observations aimed at studying the KS relation <cit.>. Such high numerical resolution allows us to separate molecular cloud structure and star-forming clusters and follow the relation on scales from individual clouds and clusters to kpc sizes.

§ RESULTS

§.§ A KS relation based on molecular cloud free-fall time

To make synthetic observations we compute the CO line emission maps with a post-processing radiation transfer approach. The physical parameters of molecular clouds are extracted by applying the CLUMPFIND method to the simulated CO line spectra <cit.>. Here we adopt a brightness temperature threshold T^th_b = 1 K and a spectral resolution of δv = 0.1 km s^-1.
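A minimal stand-in for this extraction step is sketched below; real CLUMPFIND additionally decomposes blended emission by stepping through brightness contours, which this sketch does not attempt.

```python
import numpy as np
from scipy import ndimage

def extract_clouds(Tb, Tb_thresh=1.0):
    """Label connected emission regions in a synthetic PPV cube Tb(x, y, v)
    above the brightness-temperature threshold [K]; a crude stand-in for
    CLUMPFIND, which also splits blended peaks contour by contour."""
    mask = Tb > Tb_thresh
    labels, n_clouds = ndimage.label(mask)
    clouds = []
    for i in range(1, n_clouds + 1):
        sel = labels == i
        clouds.append({
            "npix": int(sel.sum()),
            "L_CO": float(Tb[sel].sum()),  # luminosity up to a dx*dy*dv factor
        })
    return labels, clouds
```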
Thus, we have the physical parameters (size, mass, total CO luminosity and velocity dispersion) for each cloud in the sample of 813 clouds <cit.>. We follow the standard notations from <cit.> for the star formation rate and gas surface density. The molecular gas surface density in a pixel with coordinates r is calculated as follows:

Σ_gas(r) = W_CO(r) × X_CO [M_⊙ pc^-2],

where W_CO is the CO line integrated intensity and X_CO is the CO-H_2 conversion factor, which is adopted for simplicity to be constant and equal to 2×10^20 cm^-2 (K km s^-1)^-1 <cit.>, although many studies give evidence of its variation in different environments <cit.>. Note that such a simplification is commonly used both in the interpretation of observational data and in simulations. Moreover, here we analyse a Milky Way-size galaxy assuming a constant metallicity Z=Z_⊙. Because the adopted conversion factor value is determined for the Milky Way gas, our simplification is believed to be reasonable enough.

A global star formation rate for a given collapsing cloud can be estimated as <cit.>:

SFR = ε M_cl t^-1_ff [M_⊙ yr^-1],

where ε=0.014 is the star formation efficiency and t_ff is the free-fall time. The rate SFR cannot be compared to the gas surface density Σ_gas(r) directly, because the former is determined for the whole cloud. Then, we smooth the SFR value over the cloud surface by taking into account the brightness distribution within the cloud:

Σ_SFR,ff(r) = SFR W_CO(r)/L_CO [M_⊙ yr^-1 kpc^-2],

where L_CO is the total cloud luminosity in CO lines. Obviously, the SFR integrated over the cloud surface is equal to SFR. Figure <ref> presents the relation between the gas surface density, Σ_gas, and star formation rate, Σ_SFR,ff, found for the molecular clouds derived in our simulation. One can clearly see that the dependence for our simulated data follows the power law with slope N=1.4 (compare to the solid line), and the locus of the observational points used by <cit.> for establishing his relation coincides with our contour map based on the total (HI+H_2) gas surface density.

§.§ A KS relation based on UV calibration

Using the ray-tracing technique we obtained UV brightness maps in our simulations on the fly <cit.>. Since the UV flux is a direct tracer of the young stellar population, the star formation rate is usually estimated by using the well-known calibration <cit.>:

Σ_SFR,UV(r) = 1.4×10^-28 L_UV [M_⊙ yr^-1 kpc^-2],

where the coefficient is calculated using a Salpeter IMF with mass limits of 0.1 and 100 M_⊙, which in turn is in agreement with our star formation and feedback prescriptions; L_UV = F_UV / S is the UV luminosity, S is the pixel surface, and the flux is taken from Eq. <ref>.

The top row of panels in Figure <ref> presents gas surface density and UV flux snapshots of the galaxy (two left panels). The gas surface density and UV emission maps follow the large-scale structure of the galaxy and look very similar. However, there is a systematic offset between them on small scales. That is easily confirmed by the absence of correlation between Σ_gas and Σ_SFR,UV (right middle panel). The surface SFR remains almost constant at a level of ∼10^-3 and shows a significant scatter around it. That means there are no UV sources in gas above the CO brightness temperature threshold T^th_b ≥ 1 K. By our definition such regions are considered as parts of molecular clouds. Note also that, due to the negligible UV flux in the inner parts of giant molecular clouds, there are no points for Σ_gas higher than ∼50 M_⊙ pc^-2 in the Σ_SFR,UV ∝ Σ_gas dependence presented in the third column of Figure <ref>, whereas this range is filled in the Σ_SFR,ff ∝ Σ_gas relation depicted in Figure <ref>.
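For reference, the per-pixel quantities defined above can be assembled as in the following sketch. This is our own illustration: the numerical factor ALPHA_H2 ≈ 3.2 M_⊙ pc^-2 per K km s^-1 corresponds to X_CO = 2×10^20 cm^-2 (K km s^-1)^-1 without a helium correction, and the free-fall time assumes a uniform-density sphere.

```python
import numpy as np

G = 6.674e-8     # gravitational constant [cgs]
YR = 3.156e7     # year [s]
ALPHA_H2 = 3.2   # Msun pc^-2 per (K km/s) for X_CO = 2e20; no He correction

def sigma_gas(W_CO):
    """Molecular gas surface density [Msun pc^-2] from the integrated
    CO intensity W_CO [K km/s], for a constant conversion factor."""
    return ALPHA_H2 * W_CO

def sfr_ff(M_cl, rho, eps=0.014):
    """Global cloud SFR [Msun/yr]: eps * M_cl / t_ff, with the free-fall
    time of a uniform sphere of mean mass density rho [g cm^-3]."""
    t_ff = np.sqrt(3.0 * np.pi / (32.0 * G * rho)) / YR   # [yr]
    return eps * M_cl / t_ff

def sigma_sfr_ff(W_CO, L_CO, M_cl, rho, pix_kpc2):
    """Cloud SFR distributed over pixels in proportion to the CO brightness
    (L_CO here is the plain sum of W_CO over the cloud), [Msun/yr/kpc^2]."""
    return sfr_ff(M_cl, rho) * (W_CO / L_CO) / pix_kpc2

def sigma_sfr_uv(L_UV_per_kpc2):
    """Kennicutt (1998)-style UV calibration: SFR surface density from the
    UV luminosity surface density [erg s^-1 Hz^-1 kpc^-2]."""
    return 1.4e-28 * L_UV_per_kpc2
```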
Certainly, young bright UV sources can photodissociate molecular gas up to several tens of parsecs around them <cit.>, so any correlation can hardly be expected in our pixel-by-pixel analysis. That is confirmed by a strong anticorrelation between the ratio Σ_SFR,ff/Σ_SFR,UV and Σ_gas presented in the right panel (top row of Figure <ref>). Obviously, if we expand in size the region where the CO and UV luminosity values are compared, then we find a characteristic size which embraces both molecular gas and young stellar particles. That corresponds to observations with relatively low spatial resolution; then, we should average (convolve) our gas surface density and UV emission maps.

§.§ A KS relation averaged over sub-galactic scales

In recent extragalactic CO and UV emission observations, structures smaller than ten parsecs are hardly resolved. Here we therefore consider the beam smearing effect by reducing the quality of our simulated maps, in other words, by degrading the spatial resolution. At first, we convolve the distributions of gas surface density (Σ_gas, Eq. <ref>), star formation rate based on the free-fall collapse model (Σ_SFR,ff, Eq. <ref>) and star formation rate based on UV emission (Σ_SFR,UV, Eq. <ref>) as follows:

Y^(n)(r) = ∫ Y(r') H(r - r', n) dr',

where the function Y(r) is Σ_gas(r), Σ_SFR,ff(r), or Σ_SFR,UV(r); H(r - r', n) represents the Gaussian kernel with a half-width equal to n parsecs. Note that the distributions for n=4 pc correspond to the original images. At second, we re-bin the smoothed maps with a bin size equal to n pc.

Figure <ref> presents the convolved maps of gas surface density and UV surface SFR (two left columns), the Σ_SFR,UV ∝ Σ_gas relation and the Σ_SFR,ff/Σ_SFR,UV ratio vs. gas density Σ_gas (two right columns) for kernel sizes n=4, 40, 80, 120, 200 pc (from top to bottom rows). For a convolution kernel up to n=40 pc (second row in Fig. <ref>) there is no significant correlation between Σ_gas and Σ_SFR,UV. Of course, one can note the appearance of a small positive slope in the Σ_SFR,UV ∝ Σ_gas relation and a decrease of the scatter around the average value. This reflects that several stellar clusters and molecular clouds are separated by a distance of around 40 pc. Such groups are expected to be young and located in the densest environment of the galactic disk (Σ_gas ≳ 50 M_⊙ pc^-2).

The increase of the kernel size to n=80 and even 120 pc leads to a remarkable Σ_SFR,UV ∝ Σ_gas relation, whose slope depends on the Σ_gas value: it is closer to 1-1.4 for higher surface density, and the scatter decreases (third and fourth rows). So one can conclude that a scale of around 100 pc is critical, and a KS relation is expected to appear beyond this spatial scale. This scale is sufficiently large that a majority of molecular clouds have a nearby stellar particle counterpart independently of the environmental density. For a typical drift velocity of ∼10-20 km s^-1, a stellar cluster needs about 5-10 Myr to cover a distance of around 100 pc. This timescale is short enough for the stellar particle to remain young while saving the neighbouring molecular clouds from destruction by the photodissociating radiation of the young stellar particle. For the largest kernel size considered here, n=200 pc, the relation is well-defined with an almost constant slope around unity.

The degrading resolution procedure leads to smoothing of both the surface gas density and the UV-based star formation rate distributions. This has at least two consequences. The first is that the spatial offset between the gas surface density and UV emission distributions becomes smaller and may disappear altogether for a larger convolution kernel size.
Obviously, this occurs when the kernel size is comparable to or larger than the distance between molecular clouds and neighbouring star-forming regions, as well as their sizes. So, with increasing kernel size we lose the information on small scales related to local star-forming processes and come to global ones – those that we usually detect in real observations. The second is that in some number of cells the surface gas density values fall below the threshold used for the cloud definition. Taking into account both consequences, we can analyse how the degrading resolution influences the Σ_SFR,UV ∝ Σ_gas dependence.

Figure <ref> presents in more detail the dependence of the slope in the Σ_SFR,UV ∝ Σ_gas^N relation on the convolution kernel size n pc. Choosing a high surface gas density threshold, we consider cells corresponding to the densest parts of molecular clouds; after the convolution procedure we 're-distribute' the surface values (gas density and star formation rate) over neighbouring cells. In the case of an extremely high threshold, the offset between the surface values remains significant and the total number of cells involved in our analysis is small, so the error in estimating the Σ_SFR,UV–Σ_gas slope appears to be large. On the other hand, for a quite low surface gas density threshold we can take into account atomic gas unrelated to molecular clouds <cit.>. So here we consider intermediate values for the gas surface density threshold.

In Figure <ref> one can note that the gas surface density in molecular clouds is usually higher than ∼5 M_⊙ pc^-2 (see the lowest contours on the right color map). Then, we can consider the dependence on the surface density threshold; here we adopt three values equal to 0.1, 5 and 10 M_⊙ pc^-2. For any level one can find that there is no Σ_SFR,UV–Σ_gas dependence below a resolution of ∼50 pc, a transition range around ∼50-120 pc, where the slope increases from zero up to 1-1.8, and a saturation of the index for resolutions larger than ∼120 pc. Note that for kernel sizes of ∼50-120 pc the power-law index becomes harder for a higher Σ_gas threshold and reaches 2.5 for the highest threshold value at ∼100 pc. Such behaviour can be explained by the fact that when we raise the surface density threshold, denser clouds are taken into consideration or, in other words, the number of clouds decreases (see contours on the map in Figure <ref>). For larger kernels the index demonstrates a small bump, clearly seen for the highest threshold considered here: going to larger scales, molecular clouds from neighbouring star-forming regions are included in the analysis, and the slope decreases slightly. One can conclude that the mean distance between evolutionary independent star-forming regions is around 120 pc. The power-law index is around 1.3-1.4 for a gas surface density threshold higher than ∼5 M_⊙ pc^-2. This slope is very close to the index in the original KS relation obtained by averaging over the whole galaxy. Note that the data depicted in Figure <ref> correspond to the smallest threshold, so the slope obtained for those data is around unity.

Note also that we investigated the dependence of the Σ_SFR,UV–Σ_gas slope for gas surface density thresholds of up to 100 M_⊙ pc^-2 and found that the mean value of the slope does not exceed 2; however, the error in estimating the slope becomes high for thresholds ≳20 M_⊙ pc^-2, reaching more than ±1 around slope values of 1.5-2, whereas it is constrained to within ±0.25 around 1-1.8 for thresholds of ∼0.1-10 M_⊙ pc^-2 (Figure <ref>). A minimal sketch of this degrading-resolution measurement is given below.
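The sketch (our own; treating the quoted half-width directly as the Gaussian σ is a simplification):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def degrade(field, n_pc, dx_pc=4.0):
    """Smooth a map with a Gaussian of width n_pc and re-bin to n_pc-sized
    pixels: the two-step procedure of Eq. <ref>."""
    sm = gaussian_filter(field, sigma=n_pc / dx_pc)
    k = max(1, int(round(n_pc / dx_pc)))
    ny, nx = (s // k * k for s in sm.shape)
    return sm[:ny, :nx].reshape(ny // k, k, nx // k, k).mean(axis=(1, 3))

def ks_slope(sigma_gas, sigma_sfr, thresh=5.0):
    """Power-law index N of Sigma_SFR ~ Sigma_gas^N above a gas surface
    density threshold [Msun pc^-2], from a log-log least-squares fit."""
    sel = (sigma_gas > thresh) & (sigma_sfr > 0.0)
    N, _ = np.polyfit(np.log10(sigma_gas[sel]), np.log10(sigma_sfr[sel]), 1)
    return N
```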
The large errors at high thresholds are a result of the small number of cells included in the analysis (see the description of the consequences of the degrading resolution procedure above).

Using our simulations, an SFR value can be found in two different ways: one is based on estimating the free-fall time value (Eq. <ref>) and the other is adopted by using the UV calibration. The right column of Figure <ref> presents how the Σ_SFR,ff/Σ_SFR,UV ratio depends on Σ_gas. For the original resolution one can see a simple anticorrelation, because the typical distance between molecular clouds and UV sources in our simulations is larger than 4 pc. The increase of the kernel size beyond 120 pc leads to the ratio becoming flatter. For instance, it is almost constant for a size of 150 pc. However, one can note that the ratio is systematically below unity. That means that Σ_SFR,ff (or Σ_SFR,UV) is under- (over-)estimated by about a factor of 3 (for the kernel size 200 pc). Because the ratio shows no dependence on Σ_gas, this discrepancy may originate from an incorrect constant factor used in estimating Σ_gas, Σ_SFR,ff or Σ_SFR,UV, e.g., the conversion factor value in Eq. <ref> or the star formation efficiency ε in Eq. <ref>. Note that different UV calibrations may also be considered for estimating the star formation rate in Eq. <ref>.

§ CONCLUSIONS

Based on 3D simulations of galactic evolution we analyse how the relation between the surface star formation rate (Σ_SFR) and surface gas density (Σ_gas) – the Kennicutt-Schmidt relation – depends on spatial scale. We study the KS relation for a Milky Way-like galaxy and follow the dependence from the inner structure of molecular clouds to several hundred parsecs. We analyse synthetic observations in both CO lines and the UV band with different spatial resolution. To determine Σ_SFR we consider two different ways: one is based on estimating the free-fall time for molecular cloud collapse – Σ_SFR,ff, and the other is found by using the well-known UV calibration <cit.> – Σ_SFR,UV. Our results can be summarized as follows.

* The Σ_SFR,ff ∝ Σ_gas^N relation obtained by using the simulated CO line emission maps follows the power law with index N=1.4; the locus of the simulated relation coincides with the observational points used by <cit.> for establishing his relation.

* Using the UV flux as SFR calibrator, one can find a systematic offset between the Σ_gas and Σ_SFR,UV distributions on scales comparable to molecular cloud sizes. Averaging over different spatial scales we find that (a) there is no Σ_SFR,UV–Σ_gas dependence below ∼50 pc; (b) there is a transition range around ∼50-120 pc, where the power-law index in the relation increases from 0 to 1-1.8; (c) there is a saturation of the index for spatial resolution larger than ∼120 pc.

* For spatial resolutions of ∼50-120 pc the power-law index becomes steeper for a higher Σ_gas threshold. One can conclude that the mean distance between evolutionary independent star-forming regions is around 120 pc. The power-law index is around 1.3-1.4 for a surface gas density threshold higher than ∼5 M_⊙ pc^-2, which is typical for molecular clouds.

* The ratio of the surface SFR densities determined in the two different ways, Σ_SFR,ff/Σ_SFR,UV, flattens and becomes constant in the range Σ_gas = 1-100 M_⊙ pc^-2 at spatial scales ≳120 pc. However, it is three times lower than unity. This discrepancy may be explained by varying the conversion factor X_CO (see Eq. <ref>) and/or the star formation efficiency ε (see Eq. <ref>) and/or the UV calibration (see Eq. <ref>).

§ ACKNOWLEDGMENTS

We wish to thank the referee for thoughtful suggestions that have improved the quality of the paper.
The numerical simulations have been performed at the Research Computing Center (Moscow State University) under the Russian Science Foundation grant (14-22-00041). This work was supported by the Russian Foundation for Basic Research grants (14-02-00604, 15-02-06204, 15-32-21062). SAK thanks the ANR (Agence Nationale de la Recherche) MOD4Gaia project (ANR-15-CE31-0007) and the President of RF grant (MK-4536.2015.2). EOV thanks the Ministry of Education and Science of the Russian Federation (project 3.858.2017) and RFBR (project 15-02-08293). The thermo-chemical part was developed with the support of the Russian Science Foundation (grant 14-50-00043).
http://arxiv.org/abs/1702.08562v1
{ "authors": [ "Sergey A. Khoperskov", "Evgenii O. Vasiliev" ], "categories": [ "astro-ph.GA" ], "primary_category": "astro-ph.GA", "published": "20170227222359", "title": "A Kennicutt-Schmidt relation at molecular cloud scales and beyond" }
Physical Analysis and Cryptographic Engineering, Temasek Laboratories at Nanyang Technological University, Singapore
{jbreier, he.wei}@ntu.edu.sg

Multiple Fault Attack on PRESENT with a Hardware Trojan Implementation in FPGA

Jakub Breier and Wei He

==============================================================================

The Internet of Things connects lots of small constrained devices to the Internet. As in any other environment, communication security is important, and cryptographic algorithms are one of many elements that we use in order to keep messages secure. Because of the constrained nature of these environments, it is necessary to use algorithms that do not require high computational power. Lightweight ciphers are therefore ideal candidates for this purpose. In this paper, we explore the possibility of attacking the ultra-lightweight cipher PRESENT by using a multiple fault attack. Utilizing the Differential Fault Analysis technique, we were able to recover the secret key with two faulty encryptions and an exhaustive search of the 2^16 remaining key bits. Our attack aims at four nibbles in the penultimate round of the cipher, causing a faulty output in all nibbles of the output. We also provide a practical attack scenario by exploiting a Hardware Trojan (HT) technique for the proposed fault injection in a Xilinx Spartan-6 FPGA.

§ INTRODUCTION

The Internet of Things brings new challenges into the security field. With the interconnection of a huge number of small devices with constrained computational capabilities, there is a need to design algorithms and protocols simple enough to be run on such devices within a reasonable time frame, yet still preserving a high level of security. Lightweight cryptography provides algorithms that use operations fulfilling such requirements. It delivers adequate security and does not always lower the security-efficiency ratio <cit.>. Currently there are many lightweight cryptography algorithms available, providing various security levels and encryption speeds <cit.>. For the further work we have chosen the PRESENT algorithm.

PRESENT is an ultra-lightweight block cipher, introduced by Bogdanov et al. in 2007 <cit.>. The algorithm was standardized in 2011, by the ISO/IEC 29192-2:2011 standard. It is based on an SP-network and therefore uses three operations in each round – a 4-bit non-linear substitution, a bit permutation and a XOR with the key. The best cryptanalysis presented so far is a truncated differential attack on 26 out of 31 rounds, presented in 2014 <cit.>.

Fault attacks exploit the possibility to change intermediate values in the algorithm execution so that the key search space is radically reduced or the key is even revealed directly. The first attack was proposed by Boneh, DeMillo and Lipton in 1996 <cit.>, followed by the practical attack by Biham and Shamir one year later <cit.>. Currently, fault analysis is a popular method to attack cryptographic implementations, utilizing clock/voltage glitch techniques, diode laser and ion beam irradiation, EM pulses, or hardware trojans. For attacking symmetric block ciphers, the most popular technique is Differential Fault Analysis (DFA), in which the fault is usually inserted in the last rounds of a cipher for observing differences between correct and faulty ciphertexts. Other techniques include Collision Fault Analysis (CFA), Ineffective Fault Analysis (IFA), and Safe-Error Analysis (SEA) <cit.>.
We have chosen DFA as our attack technique, inserting multiple faults in the penultimate round of the PRESENT cipher.

In our work we present a novel multiple fault attack on PRESENT. By injecting four nibble-switch faults in the penultimate round, we were able to recover the secret key with only two faulty ciphertexts and an exhaustive search of the 2^16 remaining bits of the key. In both faulty encryptions, it is necessary to flip different nibbles in order to produce a different fault mask in the last round. We provide a practical attack scenario by exploiting a Hardware Trojan (HT) technique for fault injection in a Xilinx Spartan-6 FPGA.

The rest of the paper is organized as follows. Section <ref> provides an overview of known fault attacks on the PRESENT cipher and Section <ref> describes this cipher in detail. Our attack model is proposed in Section <ref> and the HT implementation of our attack is described in Section <ref>. Finally, Section <ref> concludes our work and provides motivation for further research.

§ RELATED WORK

The first DFA attack on PRESENT was published in 2010 by G. Wang and S. Wang <cit.>. Their attack aimed at a single nibble and they were able to recover the secret key with a computational complexity of 2^29, by using 64 pairs of correct and faulty ciphertexts on average.

Zhao et al. <cit.> proposed a fault-propagation pattern based DFA and demonstrated this technique on the PRESENT and PRINT ciphers. The attack on PRESENT-80 and PRESENT-128 uses 8 and 16 faulty ciphertexts on average, respectively, and reduces the master key search space to 2^14.7 and 2^21.1.

Gu et al. <cit.> used a combination of differential fault analysis and statistical cryptanalysis techniques to attack lightweight block ciphers. They tested their methodology on PRESENT-80 and PRINT-48. The attack on PRESENT is aimed at the middle rounds of the algorithm, using a single random S-box and a multiple S-boxes fault attack. The main outcome of the paper is an extension of the fault model from the usual one, which aims at the ultimate or penultimate rounds, to the other rounds as well.

Bagheri et al. <cit.> presented two DFA attacks on PRESENT, the first one attacking a single bit and the second one attacking a single nibble of the intermediate state. They were able to recover the secret key with 18 faulty ciphertexts on average, using the second attack.

The most efficient attack so far was proposed by Jeong et al. <cit.>. They used a 2-byte random fault model, attacking the algorithm state after round 28. For PRESENT-80, they needed two 2-byte faults, and for PRESENT-128 they needed three 2-byte faults and an exhaustive search of 2^22.3 on average.

Hardware Trojans have drawn much attention during the past decade due to their severity in security-sensitive embedded systems <cit.>. As an enormous network of diverse embedded devices, the Internet of Things (IoT) is populated with a great number of ICs to collect, encrypt, transmit, and store data.
For each node inside an IoT system, a complete functional chip normally consists of various IPs, and they are typically designed and manufactured by off-shore design houses or foundries. In theory, any party involved in the design or manufacturing stages can make alterations in the circuits for malicious purposes <cit.> <cit.>. These tiny changes or extra logic can hide inside the system during the majority of its lifetime until a specific activation mechanism is awakened for pilfering secrets or impairing the main functionality. As a typical stealthy modification to ICs, a Hardware Trojan (also known as a Trojan Horse) can be intentionally integrated into embedded devices for disabling or destroying a system, leaking confidential information through side channels, or triggering critical faults <cit.> <cit.> <cit.>.

An HT can be implanted into the circuit at multiple stages with a stealthy nature. Post-manufacturing testing often fails to detect it, since the Trojan only influences the circuit under specific conditions <cit.>. At a proper future time, the Trojan can be activated. Unlike its counterpart, the Software Trojan (ST), an HT cannot be removed by upgrading the software in each device. An HT is thus truly furtive and ineradicable, which poses a more serious threat to system security, particularly to the cryptographic blocks inside an IoT system. An HT basically consists of two components: (a) the trigger signal that serves to activate the inserted trojan, and (b) the payload that is affected by the trojan <cit.>. Many solutions have been proposed for detecting implanted trojans, such as fine-grained optical inspections <cit.> and the side-channel based comparison with a fully trustworthy “golden chip”. To increase the difficulty of HT detection, the trojan size should preferably be as small as possible with respect to its host design. In this paper, a compact Trojan module is presented which serves to inject multiple faults into specific algorithmic points of the PRESENT cipher, which makes the proposed multiple fault attack approach realistic in practice.

§ OVERVIEW OF PRESENT CIPHER

PRESENT is a symmetric block cipher based on an SP-network. It consists of 31 rounds, the block length is 64 bits, and it supports keys with lengths of 80 and 128 bits. Considering the usage purposes, the authors recommend the 80-bit key length version. Each round consists of three operations: XOR with the round key, substitution by the 4-bit S-box (Table <ref>), and bit permutation (Table <ref>). At the end of the cipher, a post-whitening XOR with the key is performed, so there are 32 generated round keys in total. A high-level overview of the encryption process is given in Figure <ref>.

The key schedule of the 80-bit version of the algorithm is the following. First, the secret key is stored in a register K and represented as k_79 k_78 … k_0. Since the block length is 64 bits, only the 64 leftmost bits are used in each round. Therefore, at round i we have:

K_i := κ_63 κ_62 … κ_0 = k_79 k_78 … k_16.

After every round, the key register K is updated in the following way:

* [k_79 k_78 … k_1 k_0] = [k_18 k_17 … k_20 k_19]
* [k_79 k_78 k_77 k_76] = S[k_79 k_78 k_77 k_76]
* [k_19 k_18 k_17 k_16 k_15] = [k_19 k_18 k_17 k_16 k_15] ⊕ RC

where RC is a round counter.

§ ATTACK MODEL

The PRESENT algorithm uses sixteen 4-bit S-boxes. The output of the S-boxes is the input for the permutation layer.
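For reference, a compact software model of PRESENT-80 as specified above (S-box layer, the bit permutation sending bit i to 16i mod 63 with bit 63 fixed, and the key schedule) is sketched below. It is meant as an aid for following the attack description, not as the authors' FPGA design.

```python
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def sbox_layer(state):
    """Apply the 4-bit S-box to all sixteen nibbles of the 64-bit state."""
    out = 0
    for i in range(16):
        out |= SBOX[(state >> (4 * i)) & 0xF] << (4 * i)
    return out

def p_layer(state):
    """Bit permutation: bit i moves to position 16*i mod 63; bit 63 is fixed."""
    out = 0
    for i in range(64):
        j = 63 if i == 63 else (16 * i) % 63
        out |= ((state >> i) & 1) << j
    return out

def update_key(k, rc):
    """Key schedule step: rotate left by 61 bits, S-box on the 4 MSBs,
    XOR the round counter rc into bits 19..15."""
    k = ((k << 61) | (k >> 19)) & ((1 << 80) - 1)
    k = (k & ~(0xF << 76)) | (SBOX[(k >> 76) & 0xF] << 76)
    return k ^ (rc << 15)

def encrypt(plain, key):
    """31 rounds of addRoundKey, sBoxLayer, pLayer, then post-whitening."""
    state = plain
    for rc in range(1, 32):
        state = p_layer(sbox_layer(state ^ (key >> 16)))
        key = update_key(key, rc)
    return state ^ (key >> 16)
```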
The attack exploits the following properties of the algorithm:

* The output of one S-box is an input for four different S-boxes.
* The input of one S-box consists of outputs from four different S-boxes.
* There are four different groups of S-boxes:
* The outputs of S-boxes 0-3 are inputs for S-boxes 0, 4, 8, 12.
* The outputs of S-boxes 4-7 are inputs for S-boxes 1, 5, 9, 13.
* The outputs of S-boxes 8-11 are inputs for S-boxes 2, 6, 10, 14.
* The outputs of S-boxes 12-15 are inputs for S-boxes 3, 7, 11, 15.

If it is possible to corrupt the whole output of some S-box (flip four bits), the fault will spread into four S-boxes in the following round, affecting the same bit position in every S-box. Therefore, if we aim at four S-boxes from distinct groups in round 30, the fault will be distributed to every S-box in round 31. This fact is depicted in Figure <ref>.

§.§ Attack Steps

The attack steps are the following:

* The attacker inserts four 4-bit faults at the output of four S-boxes from distinct groups.
* She computes difference tables according to the fault model. More specifically, the bit faults in S_i,j^31, where i∈{0,…,15} indexes the S-boxes of round 31 and j∈{0,1,2,3} indexes the bits of a particular S-box, are the following:
* A fault at S_0^30, S_4^30, S_8^30, S_12^30 will result in faults at S_i,0^31.
* A fault at S_1^30, S_5^30, S_9^30, S_13^30 will result in faults at S_i,1^31.
* A fault at S_2^30, S_6^30, S_10^30, S_14^30 will result in faults at S_i,2^31.
* A fault at S_3^30, S_7^30, S_11^30, S_15^30 will result in faults at S_i,3^31.
These difference tables contain the fault mask (the bit position of a fault) and the output mask, which is the difference between the S-box outputs of a correct and a faulty input. As an example, Table <ref> shows these values for the masks 1000 and 0100.
* Using formula <ref>, the attacker observes the output differences and searches for possible S^31 input candidates.
* She repeats steps 1-3, inserting faults in different nibbles, which will result in attacking different S-box bits at S^31.
* She compares the possible candidate sets from both attacks; the intersection of these sets gives exactly one candidate, which is the correct state before S^31.
* After obtaining all the key nibbles of K^31, the attacker uses an exhaustive search on the rest of the key, so the search space is 2^16.

Δ = P^-1(C') ⊕ P^-1(C)

§.§ Attack Example

After obtaining the faulty ciphertext and inverting the last round permutation layer, it is possible to obtain information about the input of the substitution layer. Let us assume that we attacked the first nibble after the S-box operation in round 30. Therefore the fault mask of the inputs of S-boxes 0, 4, 8, 12 in round 31 will be 1000. We can now observe the changes in the output of the algorithm. Table <ref> shows every possible input and output of the S-box operation, together with the faulty one, after attacking the most significant bit. It is easy to see that in the worst case, we have narrowed the input candidates down to four numbers. For example, for the output difference 1111, we have the input candidates 0, 7, 8, 9. The notation is the following:

* I is a correct input of the S-box.
* I' is a faulty input of the S-box.
* O is a correct output of the S-box.
* O' is a faulty output of the S-box.
* Δ is the output difference between the correct and the faulty output.

The second step of the attack is to change the attacked nibble, so that it will affect another output nibble of the S-box belonging to the same group as the first one. Let us assume that we attacked the second nibble of the S-box output in round 30; the resulting candidate intersection can then be carried out as in the sketch below.
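This sketch builds, for each fault mask, the map from an observable output difference to the set of S-box inputs that could produce it, and intersects the candidate sets from the two faulty encryptions (SBOX as in the previous listing, repeated for self-containment):

```python
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def diff_table(mask):
    """Map each output difference S(I ^ mask) ^ S(I) to the set of S-box
    inputs I that could have produced it (cf. the tables for 1000/0100)."""
    table = {}
    for I in range(16):
        d = SBOX[I ^ mask] ^ SBOX[I]
        table.setdefault(d, set()).add(I)
    return table

def recover_nibble(delta_a, delta_b, mask_a=0b1000, mask_b=0b0100):
    """Intersect the candidate sets from two faulty encryptions with
    different fault masks; per the argument in the text the intersection
    contains exactly one value, the correct S-box input."""
    cands = diff_table(mask_a)[delta_a] & diff_table(mask_b)[delta_b]
    assert len(cands) == 1
    return cands.pop()
```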
For this second fault, the input of the same S-boxes in round 31 will be changed, but the fault mask will be 0100 in this case. The outputs for this case are stated in Table <ref>. It is easy to see that the groups of input values producing the same differences after the fault do not overlap with those of the first attack, therefore we can determine the input value with certainty. For example, if the output difference in this case were 0101, the possible input candidates would be 0, 1, 4, 5. The only number common to both attacks is 0, therefore it is the input value for the given S-box.

Using the simplified attack model with only one faulty nibble per encryption, it is possible to reveal the last round key with 8 faulty encryptions, since it is necessary to attack two distinct nibbles in each of the four groups of S-boxes. If there is a possibility to inject multiple faults per encryption, the last round key can be revealed with 2 encryptions. In each run, the attacker would inject the fault into one nibble of the S-box output of each of the four groups, changing the nibbles after the first run. In both cases, an exhaustive search of 2^16 is required to obtain the whole PRESENT-80 key.

§ HARDWARE TROJAN FOR PRACTICAL FAULT INJECTION

In this section, an FPGA-implemented PRESENT cipher for the described multiple fault analysis approach will be detailed. The hardware realization relies on a Spartan-6 FPGA (XC6SLX4), soldered on a Digilent Cmod S6 commercial board <cit.>. We mounted the injections using the hardmacro-based Hardware Trojan (HT) <cit.>, by inserting specially created trojan modules into the algorithmic networks in the FPGA scenario.

§.§ FPGA Scenario

The Field Programmable Gate Array (FPGA) has been widely utilized in almost all digital/hybrid logic applications due to its rapid implementation, low cost and high performance. The major advantage of a hardware implementation of a cipher lies within its parallel computation networks that allow multiple logic chains to be computed in parallel, which results in high computational speed compared to microcontroller scenarios. In our work, the PRESENT cipher was implemented inside a compact Spartan-6 FPGA.

The cipher is structured in loops with 16 4-bit S-boxes in parallel. The complete encryption is clocked with a global clock signal and the intermediate values from each round are stored in 128 1-bit registers. So a complete encryption in our implementation consists of 32 clock cycles, as seen in Figure <ref>.

§.§ Hardware Trojan for Fault Injection

An HT typically requires some trigger signals to activate the inserted trojan modules. By the principle of the proposed multiple fault injection, a 1-bit signal trojan_trigger is activated at the 30th encryption round, when the fault perturbation is required. Since the injection point is at the output of the S-box, the specially devised trojan hardmacros for S-boxes 0, 4, 8, 12 and for S-boxes 1, 5, 9, 13 can be inserted during the chip fabrication or at the off-the-shelf stage. The payload is highlighted in the grey box of Figure <ref>. The signal flip_trigger is used to control the injection into the different S-box groups. Since only 2 injection rounds are needed for the proposed fault attack, another 1-bit signal is enough. So there are in total 2 bits of signal acting as the trigger in this solution. In the Spartan-6 FPGA, every slice consists of 4 look-up tables (LUTs), 8 multiplexers and 8 flip-flops.
Although each LUT can be used either as one 6-input 1-output Boolean function or as two 5-input 1-output functions, only 4 multiplexers in each slice have an external input, which we have to use as the 1-bit input of the XOR gate and as the input of the multiplexer after the XOR gate. Therefore one slice can actually implement 4 HT modules, and 8 extra slices, i.e., 4 Configurable Logic Blocks (CLBs), are sufficient for inserting all 8 trojans. The states of the 2-bit trigger signals are given in Table <ref>. It is emphasized that the insertion of the trojan can be applied to different S-box groups as well, just obeying the principles explained in Section <ref>.

§ CONCLUSIONS

In our paper we have proposed a multiple fault injection attack on the PRESENT cipher, using the DFA technique. By flipping four nibbles in the penultimate round we were able to obtain the secret key using two faulty ciphertexts and an exhaustive search over the remaining 2^16 key bits. We have implemented a hardware trojan causing this type of fault in an FPGA implementation of PRESENT.

There are two other possible attack scenarios. The first one is flipping more than 4 bits, so that the attack would affect the following nibble as well. In this case, more than one bit of a particular S-box in round 31 will be affected. The attack can still be executed in the same way, only the differential table for the concrete fault mask has to be computed. It can be shown that the intersection of two arbitrary masks from two distinct differential tables gives exactly one candidate in every case, so it is possible to use multiple-bit fault masks.

The other scenario is flipping fewer than 4 bits in one nibble. In this case, the fault does not spread into every S^31 S-box. The solution for this scenario is to use more faults; the number of encryptions will therefore increase.

The hardware trojan approach supporting the proposed multiple fault attack relies on hardmacro-based HT modules inserted at the outputs of specific S-boxes. In our tested Spartan-6 FPGA, only 4 CLBs are sufficient to implement the 8 trojan modules combined with the 2-bit global trigger signal. Since the trojan blocks are inserted into the signal path of specific S-box outputs, without altering the main functionality of the cipher, the trojan can be mounted at almost all design stages, such as front-end HDL coding or netlist alteration, as well as back-end layout or sub-gate manipulation.

In subsequent work, we would like to focus on realistic trojan insertion using the proposed multiple fault analysis into the security modules of an IoT scenario, and the related HT detection techniques will also be emphasized.
http://arxiv.org/abs/1702.08208v1
{ "authors": [ "Jakub Breier", "Wei He" ], "categories": [ "cs.CR" ], "primary_category": "cs.CR", "published": "20170227095223", "title": "Multiple Fault Attack on PRESENT with a Hardware Trojan Implementation in FPGA" }
CICATA-Qro, Instituto Politécnico Nacional, Cerro Blanco 141, Colinas del Cimatario, 76090, Querétaro, Mexico
Applied Mathematics Research Centre, Coventry University, Coventry, CV1 5FB, UK

A homopolar disc dynamo experiment with liquid metal contacts

R. A. Avalos-Zúñiga1, J. Priede2, C. E. Bello-Morales1

==============================================================

We present experimental results of a homopolar disc dynamo constructed at CICATA-Querétaro in Mexico. The device consists of a flat, multi-arm spiral coil which is placed above a fast-spinning metal disc and connected to the latter by sliding liquid-metal electrical contacts. Theoretically, self-excitation of the magnetic field is expected at the critical magnetic Reynolds number Rm ≈ 45, which corresponds to a critical rotation rate of about 10 Hz. We measured the magnetic field above the disc and the voltage drop on the coil for rotation rates up to 14 Hz, at which the liquid metal started to leak from the outer sliding contact. Instead of the steady magnetic field predicted by the theory we detected a strongly fluctuating magnetic field with a strength comparable to that of Earth's magnetic field, which was accompanied by similar voltage fluctuations in the coil. These fluctuations seem to be caused by the intermittent electrical contact through the liquid metal. The experimental results suggest that the dynamo with the actual electrical resistance of the liquid metal contacts could be excited at a rotation rate of around 21 Hz, provided that the leakage of liquid metal is prevented.

§ INTRODUCTION

The homopolar disc dynamo is one of the simplest models of the self-excitation of magnetic field by moving conductors, which is often used to illustrate the dynamo effect <cit.>. In its simplest form, originally considered by Bullard <cit.>, the dynamo consists of a solid metal disc which rotates about its axis and a wire twisted around it which is connected through sliding contacts to the rim and to the axis of the disc. For a sufficiently high rotation rate, the voltage induced by the rotation of the disc in a magnetic field generated by an initial current perturbation can exceed the voltage drop due to the ohmic resistance. At this point, the initial perturbation starts to grow exponentially, leading to the self-excitation of the current and the associated magnetic field. Despite its simplicity, no successful experimental implementation of the disc dynamo is known so far. The problem appears to be the sliding electrical contacts which are required to convey the current between the rim and the axis of the rotating disc. The electrical resistance of the sliding contacts, usually made of solid graphite brushes, is typically several orders of magnitude higher than that of the rest of the setup <cit.>. This results in unrealistically high rotation rates required for the dynamo to operate. To overcome this problem, Priede et al. <cit.> proposed a theoretical design of a homopolar disc dynamo which uses liquid-metal sliding electrical contacts similar to those employed in homopolar motors and generators <cit.>. The design consists of a flat multi-arm spiral coil placed above a fast-spinning copper disc and connected to the latter by sliding liquid-metal electrical contacts. The theoretical results obtained in <cit.> show a minimum magnetic Reynolds number of 40, at which the dynamo is self-excited. This can be achieved by using a copper disc of 60 cm in diameter rotating with a frequency of 10 Hz.
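As a rough cross-check of ours (not part of the original analysis), the quoted range Rm = 2…86 for 0.5-20 Hz is reproduced by the scaling Rm = μ_0 σ Ω R_o d, using the copper conductivity and the disc dimensions given in the next section; at 10 Hz this gives

```latex
% Assumed scaling: Rm = mu_0 * sigma * Omega * R_o * d, with Omega = 2*pi*f.
% Values: sigma = 5.96e7 S/m (copper), R_o = 0.30 m, d = 0.03 m, f = 10 Hz.
\[
  Rm \simeq \mu_0 \sigma\,(2\pi f)\, R_o d
     = (4\pi\times 10^{-7})(5.96\times 10^{7})(2\pi\cdot 10)(0.30)(0.03)
     \approx 42,
\]
% close to the quoted critical values Rm ~ 40-45 at about 10 Hz.
```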
In this paper, we describe a laboratory model of such a disc dynamo and report the first results of its operation.

§ DESCRIPTION OF EXPERIMENTAL SETUP

The dynamo setup shown in Fig. <ref> consists of a fast-spinning copper disc and a coil made of a stationary copper disc which is sectioned by 40 logarithmic spiral slits with a constant pitch angle α ≈ 58° relative to the radial direction. The rotating disc includes a central cylindrical cavity and an annular cavity at its rim. The coil disc has a cylindrical solid electrode protruding 4 cm out from the center of its bottom face. It is inserted 3 mm above the rotating disc surface, forming inner and outer annular gaps of width δ = 3 mm and height d = 3 cm, with inner and outer radii R_i = 4.5 cm and R_o = 30 cm, respectively. At the bottom of the annular gaps there is a eutectic alloy of GaInSn which is liquid at room temperature. As the rotation rate of the disc increases, the centrifugal force pushes the liquid metal through the gaps and creates sliding contacts which electrically connect the stationary coil and the rotating disc. At the top of the sliding contact, a ring cap with 40 thin vanes at its bottom face is attached to the rotating disc. This creates a centrifugal pump which acts as a dynamic seal for the liquid metal. The coil and disc are held by iron supports which are electrically isolated from the copper discs. The spinning disc is driven by a 3 hp AC motor through a gearing system. The motor rotation rate is controlled by a variable frequency drive (VFD) from 0.5 to 20 Hz, which corresponds to Rm = 2…86.

§ MAGNETIC FIELD AND VOLTAGE MEASUREMENTS

We report measurements of the temporal variation of the magnetic field and the induced voltage at the upper face of the coil for rotation rates of the disc ranging from 0.5 to 14 Hz. The sense of rotation is opposite to the orientation of the spiral arms of the coil, which corresponds to a clockwise rotation in the setup shown in Fig. <ref>(b). The measurements of the induced magnetic field were performed with a three-axis Hall sensor (Low-Field METROLAB magnetometer 1176LF) placed at the upper face of the coil close to the inner radius of the spiral slits. The rate of data acquisition was 30 samples per second for a period of 14 minutes. In Figure <ref> we show the temporal variation of the vertical component (b_y) of the measured magnetic field together with the rotation rate of the disc. At a rotation frequency of 4.5 Hz the field starts to decrease nearly linearly with the increase of the rotation rate. Then, around the rotation frequency of 5.45 Hz, the field starts to oscillate between 0.66-0.87 G and 0.735-0.75 G in approximately one second (see Fig. <ref>). During the maximum phase, the field returns close to its background value, which was measured at low rotation rates. The background magnetic field, which is important for the interpretation of the experimental results, will be discussed in the next section. Such oscillations persist up to the rotation frequency of 13.5 Hz. At this point the period of the field oscillations abruptly increased to 22.5 seconds and stayed nearly the same as the disc rotation rate was continuously increased up to its highest value of 14 Hz. The voltage was measured at the upper face of the coil between the inner and outer radii of a spiral arm (see sketch in Fig. <ref>) using a digital multimeter (Keithley 2100). The data was acquired at a rate of 10 samples per second for a period of 14 minutes.
In Figure <ref>, we show the temporal variation of the induced voltage together with the rotation rate of the disc. Contrary to the magnetic field, the variation of the induced voltage was observable as soon as the disc started to rotate. At rotation frequencies higher than 5 Hz the voltage was found to vary similarly to the magnetic field.
§ INTERPRETATION OF EXPERIMENTAL DATA In order to interpret the experimental results presented above, we need to take into account that they were obtained in the presence of a rather complicated background magnetic field. First of all, there was Earth's magnetic field with a downward (negative) vertical component of strength 0.32 G. However, the Hall sensor, which was attached close to the iron frame holding the coil, measured a background magnetic field with an upward (positive) vertical component of strength 0.84 G. In the absence of more detailed measurements, it seems reasonable to assume that the measured background magnetic field is mostly due to the magnetization of the iron frame and, thus, localized in its vicinity, while the disc is largely subject to Earth's magnetic field. Then a subcritical amplification of Earth's magnetic field, which is opposite to that of the iron frame at the sensor location, may explain the observed reduction in the measured magnetic field. It has to be noted that there is only amplification but no flux expulsion effect in our dynamo which could provide an alternative explanation. In order to test this hypothesis, let us estimate the current, voltage and magnetic field induced by the disc rotating with a subcritical (below the dynamo threshold) frequency in Earth's magnetic field. Taking into account the background magnetic field with the vertical component B_0 and the associated angular flux density ϕ_0 = ∫_R_i^R_o B_0 r dr ≈ B_0(R_o^2−R_i^2)/2, the voltage induced across the disc rotating with angular velocity Ω can be written according to the theoretical model of Priede et al. <cit.> as Δφ_d = Ω(ϕ_0+ϕ_d) + I_0 ln(λ)/(2πσ d), where ϕ_d = μ_0 β I_0 R_o ϕ̅(λ)/(8π^2) is the magnetic flux density induced by the current I_0 flowing across the copper coil with the conductivity σ = 5.96×10^7 S/m and the helicity β = tan(58^∘) ≈ 1.6 of the logarithmic spiral arms; λ = R_i/R_o ≈ 1/6 is the radii ratio, for which the dimensionless flux is ϕ̅(λ) ≈ 1.7 <cit.>; μ_0 = 4π×10^−7 H/m is the permeability of vacuum. The current is related to the coil voltage by Ohm's law: Δφ_c = −I_0(1+β^2)ln(λ)/(2πσ d), which allows us to determine I_0 and other related quantities from the measured values of Δφ_c. On the other hand, applying Ohm's law to the whole electrical circuit including the disc, coil and liquid-metal contacts, we obtain Δφ_d − Δφ_c = I_0 ℛ, where ℛ is the liquid-metal contact resistance. This relation allows us to evaluate the actual contact resistance from the measured coil voltage. The results are shown in Table <ref> in terms of the dimensionless contact resistance ℛ̅ = 2πσ d ℛ, which is seen to be about an order of magnitude higher than its theoretical estimate ℛ̅ ≈ 1.26 <cit.>. Such a high contact resistance may be due to the heavy oxidation of the eutectic alloy of GaInSn which was observed in the experiments. As a result, the rotation rate at which the dynamo could become self-sustained is around 25 Hz, which is about a factor of two higher than the theoretical value. On the other hand, combining the equations above we can write the induced magnetic flux density as ϕ_d = ϕ_0 Ω/(Ω_c − Ω), where Ω_c is the critical rotation rate at which the dynamo becomes self-sustained. Similar expressions hold also for the induced current and voltage. Using this expression to fit the measurements in Fig. <ref>, we find Ω_c ≈ 21 Hz, which is consistent with the results based on the actual contact resistance shown in Table <ref>.
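To make this evaluation procedure concrete, the following is a minimal Python sketch of it (our illustration, not part of the paper). It recovers the current I_0 from a measured coil voltage via Ohm's law, evaluates the dimensionless contact resistance from the circuit balance, and fits Omega_c by noting that 1/phi_d is linear in 1/Omega. The geometry and material constants are the ones quoted above; any voltage or flux inputs passed to these functions are placeholders for the measured data of Fig. <ref> and Table <ref>, not actual experimental values.

import numpy as np

sigma   = 5.96e7                       # conductivity of copper, S/m
d       = 0.03                         # conductor height in the circuit model, m
beta    = np.tan(np.deg2rad(58.0))     # helicity of the spiral arms, ~1.6
lam     = 1.0 / 6.0                    # radii ratio lambda
phi_bar = 1.7                          # dimensionless flux phi_bar(lambda)
mu0     = 4e-7 * np.pi                 # permeability of vacuum, H/m
Ri, Ro  = 0.045, 0.30                  # inner/outer contact radii, m
B0      = 0.32e-4                      # vertical Earth-field component, T (0.32 G)

phi0 = B0 * (Ro**2 - Ri**2) / 2.0      # background angular flux density

def current_from_coil_voltage(dphi_c):
    # Ohm's law for the coil: dphi_c = -I0 (1 + beta^2) ln(lam) / (2 pi sigma d)
    return -2.0 * np.pi * sigma * d * dphi_c / ((1.0 + beta**2) * np.log(lam))

def dimensionless_contact_resistance(dphi_c, omega):
    # Circuit balance: dphi_d - dphi_c = I0 * R, reported as R_bar = 2 pi sigma d R
    I0 = current_from_coil_voltage(dphi_c)
    phi_d = mu0 * beta * I0 * Ro * phi_bar / (8.0 * np.pi**2)
    dphi_d = omega * (phi0 + phi_d) + I0 * np.log(lam) / (2.0 * np.pi * sigma * d)
    return 2.0 * np.pi * sigma * d * (dphi_d - dphi_c) / I0

def fit_omega_c(omega, phi_d):
    # phi_d = phi0*Omega/(Omega_c - Omega)  =>  1/phi_d = (Omega_c/phi0)/Omega - 1/phi0,
    # so a linear fit of 1/phi_d against 1/Omega yields Omega_c from the slope.
    slope, _ = np.polyfit(1.0 / omega, 1.0 / phi_d, 1)
    return slope * phi0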
§ CONCLUSIONS We report the first test results of the homopolar disc dynamo device proposed by Priede et al. <cit.> and constructed at CICATA-Querétaro in Mexico. The induced magnetic field and the voltage drop on the coil were measured for rotation rates up to 14 Hz, at which the liquid metal started to leak from the outer sliding contact. Although the rotation rate was significantly above the theoretically predicted dynamo threshold of 10.4 Hz, self-excitation of the magnetic field was not achieved. Instead of the steady magnetic field predicted by the theory, we detected a strongly fluctuating magnetic field with a strength comparable to that of Earth's magnetic field, which was accompanied by similar voltage fluctuations in the coil. These fluctuations seem to be caused by the intermittent electrical contact through the liquid metal. The experimental results suggest that the dynamo with the actual electrical resistance of the liquid metal contacts could be excited at a rotation rate of around 21 Hz, provided that the leakage of liquid metal is prevented.
§ ACKNOWLEDGMENTS This work is supported by the National Council for Science and Technology of Mexico (CONACYT) through the project CB-168850 and by the National Polytechnic Institute of Mexico (IPN) through the project SIP-20171098. The first author thanks M. Valencia, A. Pérez and G. García for their technical support during the experimental run. 1 MoffatH.K:Cambrige:1978 H. K. Moffatt. Magnetic field generation in electrically conducting fluids. (Cambridge, UK, 1978). BechBranderburg:AnnuRevAstronAstrphys:34 R. Beck, A. Brandenburg, D. Moss and A. Shukurov. Galactic magnetism: recent developments and perspectives. Annu. Rev. Astron. Astrophys., vol. 34 (1996), pp. 155–206. Bullard:PhilSoc:51 E. C. Bullard. The stability of a homopolar dynamo. Proc. Camb. Phil. Soc., vol. 51 (1955), pp. 744–760. RaedlerRheinhardt:MHD:38 K. H. Raedler and M. Rheinhardt. Can a disc dynamo work in the laboratory? Magnetohydrodynamics, vol. 38 (2002), pp. 211–217. Priede=000026Avalos:PhysLetA:377 J. Priede and R. A. Avalos-Zúñiga. Feasible homopolar dynamo with sliding liquid-metal contacts. Physics Letters A, vol. 377 (2013), pp. 2093–2096. MariboGavrilashReillyLynchSondergaard:1-7:2010 D. Maribo, M. Gavrilash, P. J. Reilly, W. A. Lynch and N. A. Sondergaard. Comparison of several liquid metal sliding electric contacts. Proceedings of the 56th IEEE Holm Conference on Electrical Contacts (2010), pp. 1–7. CebersMaiorov:MaGyd:80:3 D. Maribo and N. A. Sondergaard. Further studies of a low-melting point alloy used in a liquid metal current collector. IEEE Transactions on Components, Hybrids, and Manufacturing Technology, vol. 10 (1987), pp. 452–455. CarlaBello:MCThesis:2016 C. E. Bello-Morales. Caracterización electromagnética del dínamo de disco con contactos de metal líquido. Master's Thesis, CICATA-Qro./IPN, Mexico, 2016.
http://arxiv.org/abs/1703.00467v1
{ "authors": [ "R. A. Avalos-Zúñiga", "J. Priede", "C. E. Bello-Morales" ], "categories": [ "physics.geo-ph" ], "primary_category": "physics.geo-ph", "published": "20170227104343", "title": "A homopolar disc dynamo experiment with liquid metal contacts" }
^1 New York University Abu Dhabi, Abu Dhabi, UAE ^2 Eureka Scientific, Oakland, CA, USA ^3 Max-Planck-Institut für Radioastronomie, Bonn, Germany ^4 West Virginia University, Morgantown, WV, USA ^5 National Radio Astronomy Observatory, Charlottesville, VA, USA ^6 Naval Research Laboratory, Washington, DC, USA haa280@nyu.edu As of today, more than 2500 pulsars have been found, nearly all in the Milky Way, with the exception of ∼28 pulsars in the Small and Large Magellanic Clouds. However, there have been few published attempts to search for pulsars deeper in our Galactic neighborhood. Two of the more promising Local Group galaxies are IC 10 and NGC 6822 (also known as Barnard's Galaxy) due to their relatively high star formation rate and their proximity to our galaxy. IC 10, in particular, holds promise as it is the closest starburst galaxy to us and harbors an unusually high number of Wolf-Rayet stars, implying the presence of many neutron stars. We observed IC 10 and NGC 6822 at 820 MHz with the Green Bank Telescope for ∼15 and 5 hours respectively, and placed a strong upper limit of 0.1 mJy on pulsars in either of the two galaxies. We also performed single pulse searches of both galaxies, with no firm detections.
§ INTRODUCTION AND BACKGROUND Almost 50 years have passed since Jocelyn Bell serendipitously discovered the first pulsar after noticing periodic fluctuations in radio telescope data <cit.>. Since then, over 2500 pulsars have been discovered <cit.>. The vast majority of these pulsars have been found in the Milky Way, with only 23 found in the Large Magellanic Cloud (LMC) and 5 in the Small Magellanic Cloud (SMC) <cit.>, both of which are satellites of our galaxy. This work is an attempt to search for radio pulsars that are truly extragalactic, lying beyond the Magellanic Clouds. The detection of extragalactic pulsars would open up new avenues in pulsar astronomy. Just as Galactic pulsars now teach us about the ISM, extragalactic pulsars would teach us about the intergalactic medium (IGM) and the ISM of other galaxies, which we currently know little about <cit.>. Additionally, the study of pulsars in other galaxies would provide us with yet another tool by which we can study stellar evolution in galaxies different from our own. Motivated by these reasons, we present searches of two Local Group galaxies, IC 10 and NGC 6822 (also known as Barnard's Galaxy). §.§ IC 10 IC 10 is the closest starburst galaxy to the Milky Way, and is in fact the only starburst in the Local Group (LG). It lies fairly low in the Galactic plane (b = -3.3^∘), making distance measurements particularly difficult. However, the most recent measurements, using the tip of the red giant branch and Cepheids to determine the distance, place the galaxy in the neighborhood of 660–740 kpc away <cit.>. The galaxy has been classified as a blue compact dwarf <cit.> and has quite a high rate of star formation (SF), estimated to be around 3–4 times greater than the SF rate of the SMC <cit.>. It is thought to be currently undergoing vigorous neutron star formation resulting from an SF burst ∼6–10 Myr ago. Recent research has shown that the younger population of IC 10, the stars resulting from the most recent SF burst, is found near the centroid of the galaxy, while the oldest population (>2 Gyr) is significantly offset from the center <cit.>. IC 10 shares many characteristics with the Magellanic Clouds and the SMC in particular: both galaxies have active SF, are metal poor, and have extended HI envelopes.
This makes the SMC a good template for comparison with IC 10, and gives us a good reason to choose the galaxy for this search. The evolution of this galaxy is of particular interest as it has an extraordinary number of Wolf-Rayet (W-R) stars, with 33 stars confirmed spectroscopically, while a total of ∼100 W-R stars are estimated to exist in IC 10 <cit.>. It has an anomalously high ratio of carbon-type (WC) to nitrogen-type (WN) W-R stars, twenty times the ratio expected for a galaxy of its metallicity <cit.>, although this ratio could change as more stars are confirmed spectroscopically <cit.>. Although it has half the effective radius of the SMC, the galaxy has an effective W-R density 5 times that of the SMC, similar to the densities found in OB associations, and two times greater than that of any other galaxy in the LG <cit.>. §.§ Barnard's Galaxy Besides the Magellanic Clouds, Barnard's Galaxy is the nearest dwarf irregular galaxy to us, and is a prototypical dwarf irregular galaxy. Its close distance, ∼470 kpc, makes it one of the few dwarf irregulars whose ISM and stellar population can be studied in detail <cit.>. This makes it an excellent candidate to search for pulsars, as probing its pulsar population could give us hints that could be extrapolated to other dwarf irregulars farther away. NGC 6822 has had a consistent, albeit modest, rate of SF for the last 400 Myr, but also experienced a recent increase in SF rate that is thought to have occurred 100–200 Myr ago <cit.>. It has 4 W-R stars, and a W-R density similar to that of the SMC <cit.>. Because the galaxy lies in the direction of the Galactic center (b = -18.4^∘), it is obscured and has high reddening and absorption, making it hard to detect objects such as supernova remnants (SNRs) optically; only one has been detected <cit.>.
§ OBSERVATIONS AND RESULTS §.§ Search Methodology We conducted three observations of IC 10 and two of Barnard's Galaxy using the Green Bank Telescope in West Virginia. The observations were done using the GUPPI backend with 2048 spectral channels, observing at a central frequency of 820 MHz and a bandwidth of 200 MHz. The IC 10 observations were taken on the 13th, 14th and 17th of October 2009 for 6, 5, and 4.5 hours respectively, and all had a time resolution of 204.6 μs. The Barnard's Galaxy observations were made on the 10th and 17th of October 2014, for approximately 2.5 hours each. We used a finer time resolution for NGC 6822, for which we had a sample time of 81.92 μs. The galaxies are both well contained within the GBT beam. Although these two galaxies are much farther than the LMC and the SMC (at distances of 50 kpc and 60 kpc, respectively), where currently the farthest pulsars are known, we still expect to be sensitive to the most luminous pulsars at these greater distances. Assuming that pulsars born in dwarf galaxies are similar in characteristics to the ones in the Milky Way, and that the distances we use are correct, we would optimistically hope to find ∼1 in each galaxy. We base this estimate on the luminosities of Milky Way pulsars. We used the software package PRESTO to search each observation up to a dispersion measure (DM) of 3000 pc cm^-3 <cit.>. The NE2001 model for Galactic free electron density predicts a maximum Galactic contribution to the DM of 210 pc cm^-3 in the direction of IC 10 and of 108 pc cm^-3 in the direction of Barnard's Galaxy <cit.>. A newer model by Yao, Manchester and Wang, designed to predict distances and DMs of extragalactic pulsars, predicts a DM of 289 pc cm^-3 in the direction of IC 10 and 78 pc cm^-3 in the direction of NGC 6822. Both of these values are predominantly due to Galactic effects, as the model predicts negligible DM coming from the IGM <cit.>. We chose to search up to a DM of 3000 pc cm^-3, several times the predicted values, to be safe, since very little is known about the electron densities of the IGM and of the ISM of each galaxy, and to allow for detecting fast radio bursts (FRBs) which could serendipitously occur in the direction of these galaxies.
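To give a feel for the timescales involved in searching out to such a large DM, the following short Python sketch (our illustration, not part of the paper) evaluates the standard cold-plasma dispersion delay, t = 4.1488e3 s x DM / f_MHz^2, across the observing band used here; the band edges, channel count and DM values are the ones quoted above.

# Dispersive delay across the GUPPI band (820 MHz centre, 200 MHz bandwidth,
# 2048 channels, as in the observations).
D_CONST = 4.148808e3   # dispersion constant, s MHz^2 pc^-1 cm^3

def dispersive_delay(dm, f_lo_mhz, f_hi_mhz):
    """Arrival-time difference in seconds across [f_lo, f_hi] for a given DM."""
    return D_CONST * dm * (f_lo_mhz**-2 - f_hi_mhz**-2)

f_lo, f_hi, nchan = 720.0, 920.0, 2048
chan_bw = (f_hi - f_lo) / nchan

for dm in (108, 210, 3000):    # model predictions and our search limit
    total = dispersive_delay(dm, f_lo, f_hi)
    smear = dispersive_delay(dm, f_lo, f_lo + chan_bw)   # worst-case channel smearing
    print(f"DM = {dm:4d} pc/cm^3: {total:6.2f} s across the band, "
          f"{1e3 * smear:5.2f} ms within the lowest channel")

At our search limit of 3000 pc cm^-3, a pulse is delayed by roughly 9 s across the band, which illustrates why dedispersion over many trial DMs is required.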
We performed an acceleration search up to a zmax of 300 to search for pulsars in tight binaries. As we have multiple observations of each source, we are able to check the legitimacy of our pulsar candidates by making sure they appear in all data sets. In addition to regular pulsed emission, we search for giant single pulses, which McLaughlin and Cordes predict should be detectable in each of the LG galaxies if they harbor a Crab-like pulsar <cit.>. Giant single pulses, such as the ones emitted by the Crab pulsar <cit.>, can be over a hundred times brighter than the regular pulsed emission from pulsars <cit.>. We expect to be much more sensitive to giant pulses than we are to regular pulsations. §.§ Results Unfortunately, no strong pulsar candidates were detected, and none of the marginal candidates were seen in more than one pointing. We use the radiometer equation to calculate upper limits of 0.05 mJy and 0.07 mJy for IC 10 and NGC 6822 respectively, assuming a pulse width of ∼10% of the period. A preliminary search for single giant pulses was done for each of the two galaxies, with no obvious single pulse detections. However, a more thorough single pulse search is under way.
§ DISCUSSION §.§ The case of IC 10 From the null result of our search, we may say that there are not many bright pulsars in IC 10. Much effort has gone into understanding the SF rate of IC 10; the literature shows that SF in the galaxy has happened in bursts rather than continuously <cit.>, and that the most recent SF burst occurred within the past 10 Myr, likely as recently as 3–4 Myr ago <cit.>. Such recent SF may explain why IC 10 is home to so many W-R stars and yet has few bright pulsars. Antoniou et al. argue that a 10 Myr old SF burst is too young to make any pulsars, yet can produce black holes <cit.>. Coincidentally, IC 10 is home to the most massive known stellar-mass black hole, IC 10 X-1 <cit.>. IC 10 has several large HI holes and a large synchrotron bubble that were thought to have resulted from multiple SNRs <cit.>. However, it appears more likely that the synchrotron bubble is actually a hypernova remnant, from what could plausibly be the event that created IC 10 X-1 <cit.>. This further supports the hypothesis that the recent SF burst is too recent to have resulted in any pulsars. Besides the synchrotron bubble, there are no other SNRs in IC 10. Another possibility is that IC 10's SF rate has been grossly overestimated <cit.>. If a top-heavy initial mass function (IMF) applies to IC 10, that could explain its high density of W-R stars, but could also mean that its SF rate could be as much as 10 times smaller than we expect. §.§ The case of Barnard's Galaxy Unlike IC 10, NGC 6822 has had a modest SF rate for the past 400 Myr, with an almost two-fold increase over the last 100–200 Myr.
There is also evidence that the SF rate went down around 10 Myr ago <cit.>. Since the lifetimes of most longer-period pulsars tend to be closer to 10 Myr than 100 Myr, it is likely that some of the pulsars born during the peak of Barnard's Galaxy's SF are no longer energetic enough to emit pulses that would be detectable at this distance. This research was carried out on the High Performance Computing resources at New York University Abu Dhabi. Many thanks to Jorge Naranjo for HPC support. Contributions to this work by PSR at NRL are supported by the Chief of Naval Research (CNR). The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc.
§ REFERENCES 35 bell Hewish A, Bell S, Pilkington J, Scott P and Collins R 1968 Nature 217 709-13. manchester Hobbs G, Manchester R, Teoh A and Hobbs M 2004 Conf. Proc. IAU Symposium 218 139-40. ridley Ridley J, Crawford F, Lorimer D, Bailey S, Madden J, Anella R and Chennamangalam J 2013 Mon. Not. R. Astron. Soc. 433 138-46. meiksin Meiksin A 2009 Rev. Mod. Phys. 81 1405-69. sakai Sakai S, Madore B and Freedman W 1999 Astrophys. J. 511 671-9. demers Demers S, Battinelli P and Letarte B 2004 Astron. Astrophys. 424 125-32. richer Richer M et al. 2001 Astron. Astrophys. 370 34-42. leroy Leroy A, Bolatto A, Walter F and Blitz L 2006 Astrophys. J. 643 825-43. gerbrandt Gerbrandt S, McConnachie A and Irwin M 2015 Mon. Not. R. Astron. Soc. 454 1000-11. crowther Crowther P, Drissen A, Royer P and Smartt S 2003 Astron. Astrophys. 404 483-93. masseyholmes Massey P and Holmes S 2002 Astrophys. J. Letters 580 L35-7. massey Massey P 1999 Proc. 193rd Symposium of the IAU 193 429-40. massey07 Massey P, Olsen K, Hodge P, Jacoby G, McNeill R, Smith R and Strong S 2007 Astron. J. 133 2393-417. massey15 Massey P, Neugent K and Morrell N 2015 Proc. Potsdam Wolf-Rayet Workshop 35-9. wilcotsmiller Wilcots E and Miller B 1998 Astron. J. 116 2363-94. cioni Cioni M and Habing H 2005 Astron. Astrophys. 429 837-850. gallart Gallart C, Aparicio A, Bertelli G and Chiosi C 1996 Astron. J. 112 2596-606. AM1991 Armandroff E and Massey P 1991 Astron. J. 102 927-950. kong Kong A, Sjouwerman L and Williams B 2004 Astron. J. 128 2783-8. battinelli Battinelli P, Demers S and Kunkel W 2006 Astron. Astrophys. 451 99-108. presto Ransom S 2001 PhD Thesis, Harvard University, Cambridge, Massachusetts. ne2001 Cordes J and Lazio T 2002 arXiv preprint astro-ph/0207156. yao Yao J, Manchester R and Wang N 2016 "A New Electron Density Model for Estimation of Pulsar and FRB Distances" Astrophys. J. giant McLaughlin M and Cordes J 2003 Astrophys. J. 596 982-96. crab Cordes J, Bhat N, Hankins T, McLaughlin M and Kern J 2004 Astrophys. J. 612 375-88. cognard Cognard I, Shrauner J, Taylor J and Thorsett S 1996 Astrophys. J. Letters 457 L81-4. sanna Sanna N et al. 2009 Astrophys. J. Letters 699 L84-7. magrini Magrini L and Gonçalves D 2009 Mon. Not. R. Astron. Soc. 398 280-92. yin Yin J, Magrini L, Matteucci F, Lanfranchi G, Gonçalves D and Costa R 2010 Astron. Astrophys. 520 A55. vacca Vacca W, Sheehy C and Graham J 2007 Astrophys. J. 662 272-83. antoniou Antoniou V, Zezas A, Hatzidimitriou D and Kalogera V 2010 Astrophys. J. Letters 716 L140-45. prestwich Prestwich A, Kilgard R, Crowther P, Carpano S, Pollock A, Zezas A, Saar S, Roberts T and Ward M 2007 Astrophys. J. Letters 669 L21-4. silverman Silverman J and Filippenko A 2008 Astrophys. J. Letters 678 L17-20. skillman Yang H and Skillman E 1993 Astron. J. 106 1448-59. moiseev Lozinskaya T and Moiseev A 2007 Mon. Not. R.
Astron. Soc. Letters 381 L26-9. wyder Wyder T 2001 Astron. J. 122 2490-110.
http://arxiv.org/abs/1702.08214v1
{ "authors": [ "Hind Al Noori", "Mallory S. E. Roberts", "David Champion", "Maura McLaughlin", "Scott Ransom", "Paul S. Ray" ], "categories": [ "astro-ph.GA" ], "primary_category": "astro-ph.GA", "published": "20170227100556", "title": "A search for extragalactic pulsars in the Local Group galaxies IC 10 and Barnard's Galaxy" }
Most modern systems strive to learn from interactions with users, and many engage in exploration: making potentially suboptimal choices for the sake of acquiring new information. We initiate a study of the interplay between exploration and competition—how such systems balance the exploration for learning and the competition for users. Here the users play three distinct roles: they are customers that generate revenue, they are sources of data for learning, and they are self-interested agents which choose among the competing systems. In our model, we consider competition between two multi-armed bandit algorithms faced with the same bandit instance. Users arrive one by one and choose among the two algorithms, so that each algorithm makes progress if and only if it is chosen. We ask whether and to what extent competition incentivizes the adoption of better bandit algorithms. We investigate this issue for several models of user response, as we vary the degree of rationality and competitiveness in the model. Our findings are closely related to the "competition vs. innovation" relationship, a well-studied theme in economics.
§ INTRODUCTION Learning from interactions with users is ubiquitous in modern customer-facing systems, from product recommendations to web search to spam detection to content selection to fine-tuning the interface. Many systems purposefully implement exploration: making potentially suboptimal choices for the sake of acquiring new information. Randomized controlled trials, a.k.a. A/B testing, are an industry standard, with a number of companies such as Optimizely offering tools and platforms to facilitate them. Many companies use more sophisticated exploration methodologies based on multi-armed bandits, a well-known theoretical framework for exploration and making decisions under uncertainty. Systems that engage in exploration typically need to compete against one another; most importantly, they compete for users. This creates an interesting tension between exploration and competition. In a nutshell, while exploring may be essential for improving the service tomorrow, it may degrade quality and make users leave today, in which case there will be no users to learn from! Thus, users play three distinct roles: they are customers that generate revenue, they generate data for the systems to learn from, and they are self-interested agents which choose among the competing systems. We initiate a study of the interplay between exploration and competition. The main high-level question is: whether and to what extent competition incentivizes adoption of better exploration algorithms. This translates into a number of more concrete questions. While it is commonly assumed that better learning technology always helps, is this so for our setting? In other words, would a better learning algorithm result in higher utility for a principal? Would it be used in an equilibrium of the "competition game"? Also, does competition lead to better social welfare compared to a monopoly? We investigate these questions for several models, as we vary the capacity of users to make rational decisions (rationality) and the severity of competition between the learning systems (competitiveness).
The two are controlled by the same "knob" in our models; such coupling is not unusual in the literature, see <cit.>. On a high level, our contributions can be framed in terms of the "inverted-U relationship" between rationality/competitiveness and the quality of the adopted algorithms (see fig:inverted-U). The relationship between the severity of competition among firms and the quality of the technology adopted as a result of this competition is a familiar theme in the economics literature, known as "competition vs. innovation", [In this context, "innovation" usually refers to adoption of a better technology, at a substantial R&D expense to a given firm. It is not as salient whether similar ideas and/or technologies already exist outside the firm.] and the inverted-U shape is the conventional wisdom regarding this relationship.
Our model. We define a game in which two firms (principals) simultaneously engage in exploration and compete for users (agents). These two processes are interlinked, as exploration decisions are experienced by users and informed by their feedback. We need to specify several conceptual pieces: how the principals and agents interact, what is the machine learning problem faced by each principal, and what is the information structure. Each piece can get rather complicated in isolation, let alone jointly, so we strive for simplicity. Thus, the basic model is as follows: * A new agent arrives in each round t=1,2,…, and chooses among the two principals. The principal chooses an action (e.g., a list of web search results to show to the agent), the agent experiences this action, and reports a reward. All agents have the same "decision rule" for choosing among the principals given the available information. * Each principal faces a very basic and well-studied version of the multi-armed bandit problem: for each arriving agent, it chooses from a fixed set of actions (a.k.a. arms) and receives a reward drawn independently from a fixed distribution specific to this action. * Principals simultaneously announce their learning algorithms before round 1, and cannot change them afterwards. There is a common Bayesian prior on the rewards (but the realized reward distributions are not observed by the principals or the agents). Agents do not receive any other information. Each principal only observes the agents that chose him.
Technical results. Our results depend crucially on agents' "decision rule" for choosing among the principals. The simplest and perhaps the most obvious rule is to select the principal which maximizes their expected utility; we refer to it as HardMax. We find that HardMax is not conducive to adopting better algorithms. In fact, each principal's dominant strategy is to do no purposeful exploration whatsoever, and instead always choose an action that maximizes expected reward given the current information; we call this algorithm DynamicGreedy. While this algorithm may potentially try out different actions over time and acquire useful information, it is known to be dramatically bad in many important cases of multi-armed bandits—precisely because it does not explore on purpose, and may therefore fail to discover best/better actions. Further, we show that HardMax is very sensitive to tie-breaking when both principals have exactly the same expected utility according to agents' beliefs.
If tie-breaking is probabilistically biased—say, principal 1 is always chosen with probability strictly larger than 1/2—then this principal has a simple "winning strategy" no matter what the other principal does. We relax HardMax to allow each principal to be chosen with some fixed baseline probability. One intuitive interpretation is that there are "random agents" who choose a principal uniformly at random, and each arriving agent is either a HardMax agent or "random", with some fixed probability. We call this model HardMax&Random. We find that better algorithms help in a big way: a sufficiently better algorithm is guaranteed to win all non-random agents after an initial learning phase. While the precise notion of a "sufficiently better algorithm" is rather subtle, we note that commonly known "smart" bandit algorithms typically defeat the commonly known "naive" ones, and the latter typically defeat DynamicGreedy. However, there is a substantial caveat: one can defeat any algorithm by interleaving it with DynamicGreedy. This has two undesirable corollaries: a better algorithm may sometimes lose, and a pure Nash equilibrium typically does not exist.
The uniform choice among principals corresponds to no rationality and no competition.We identify the inverted-U relationship in the spirit of fig:inverted-U that is driven by the rationality/competitiveness distinctions outlined above: from to to to . We also find another, technically different inverted-U relationship which zeroes in on the model. We vary rationality/competitiveness inside this model, and track the marginal utility of switching to a better algorithm.These inverted-U relationships arise for a fundamentally different reason, compared to the existing literature on “competition vs. innovation.” In the literature, better technology always helps in a competitive environment, other things being equal. Thus, the tradeoff is between the costs of improving the technology and the benefits that the improved technology provides in the competition. Meanwhile, we find that a better exploration algorithm may sometimes perform much worse under competition, even in the absence of R&D costs.Yishay's edits, slightly edited by AlexDiscussion. We capture some pertinent features of reality while ignoring some others for the sake of tractability. Most notably, we assume that agents do not receive any information about other agents' rewards after the game starts. In the final analysis, this assumption makes agents' behavior independent of a particular realization of the Bayesian prior, and therefore enables us to summarize each learning algorithm via its Bayesian-expected rewards (as opposed to detailed performance on the particular realizations of the prior). Such summarization is essential for formulating lucid and general analytic results, let alone proving them. It is a major open question whether one can incorporate signals about other agents' rewards and obtain a tractable model.We also make a standard assumption that agents are myopic: they do not worry about how their actions impact their future utility. In particular, they do not attempt to learn over time, to second-guess or game future agents, or to manipulate principal's learning algorithm. We believe this is a typical case in practice, in part because agent's influence tend to be small compared to the overall system. We model this simply by assuming that each agent only arrives once.Much of the challenge in this paper, both conceptual and technical, was in setting up the right model and the matching theorems, and not only in proving the theorems. Apart from making the modeling choices described above, it was crucial to interpret the results and intuitions from the literature on multi-armed bandits so as to formulate meaningful assumptions on bandit algorithms and Bayesian priors which are productive in our setting.Open questions. How to incorporate signals about the other agents' rewards? One needs to reason about how exact or coarse these signals are, and how the agents update their beliefs after receiving them. Also, one may need to allow principals' learning algorithms to respond to updates about the other principal's performance. (Or not, since this is not how learning algorithms are usually designed!) A clean, albeit idealized, model would be that (i) each agent learns her exact expected reward from each principal before she needs to choose which principal to go to, but (ii) these updates are invisible to the principals. Even then, one needs to argue about the competition on particular realizations of the Bayesian prior, which appears very challenging.Another promising extension is to heterogeneous agents. 
Then the agents' choices are impacted by their idiosyncratic signals/beliefs, instead of being entirely determined by priors and/or signals about the average performance. It would be particularly interesting to investigate the emergence of specialization: whether/when an algorithm learns to target specific population segments in order to compete against a more powerful “incumbent". Map of the paper. We survey related work (Section <ref>), lay out the model and preliminaries (Section <ref>), and proceed to analyze the three main models, , and (in Sections <ref>, <ref>, and  <ref>, respectively). We discuss economic implications in Section <ref>. Appendix <ref> provides some pertinent background on multi-armed bandits. Appendix <ref> gives a broad example to support anassumption in our model. § RELATED WORK Multi-armed bandits (MAB) is a particularly elegant and tractable abstraction for tradeoff between exploration and exploitation: essentially, between acquisition and usage of information. MAB problems have been studied in Economics, Operations Research and Computer Science for many decades; see <cit.> for background on regret-minimizing and Bayesian formulations, respectively. A discussion of industrial applications of MAB can be found in <cit.>.The literature on MAB is vast and multi-threaded. The most related thread concerns regret-minimizing MAB formulations with IID rewards <cit.>. This thread includes “smart" MAB algorithms that combine exploration and exploitation, such as UCB1 <cit.> and Successive Elimination <cit.>, and “naive” MAB algorithms that separate exploration and exploitation, including explore-first and -Greedy <cit.>.The three-way tradeoff between exploration, exploitation and incentives has been studied in several other settings: incentivizing exploration in a recommendation system <cit.>, dynamic auctions <cit.>, pay-per-click ad auctions with unknown click probabilities <cit.>, coordinating search and matching by self-interested agents <cit.>, as well as human computation <cit.>.<cit.> studied models with self-interested agents jointly performing exploration, with no principal to coordinate them.There is a superficial similarity (in name only) between this paper and the line of work on “dueling bandits" <cit.>. The latter is not about competing bandit algorithms, but rather about scenarios where in each round two arms are chosen to be presented to a user, and the algorithm only observes which arm has “won the duel".Our setting is closely related to the “dueling algorithms" framework <cit.> which studies competition between two principals, each running an algorithm for the same problem. However, this work considers algorithms for offline / full input scenarios, whereas we focus on online machine learning and the explore-exploit-incentives tradeoff therein. Also, this work specifically assumes binary payoffs (win or lose) for the principals.Other related work in economics. The competition vs. innovation relationship and the inverted-U shape thereof have been introduced in a classic book <cit.>, and remained an important theme in the literature ever since <cit.>. Production costs aside, this literature treats innovation as a priori beneficial for the firm. Our setting is very different, as innovation in exploration algorithms may potentially hurt the firm.A line of work on platform competition, starting with <cit.>, concerns competition between firms (platforms) that improve as they attract more users (network effect); see <cit.> for a recent survey. 
This literature is not concerned with , and typically models network effects exogenously, whereas in our model network effects are endogenous: they are created by MAB algorithms, an essential part of the model.Relaxed versions of rationality similar to ours are found in several notable lines of work. For example, “random agents" (a.k.a. noise traders) can side-step the “no-trade theorem” <cit.>, a famous impossibility result in financial economics. The model is closely related to the literature on product differentiation, starting from <cit.>, see <cit.> for a notable later paper.There is a large literature on non-existence of equilibria due to small deviations (which is related to the corresponding result for ), starting with <cit.> in the context of health insurance markets. Notable recent papers <cit.> emphasize the distinction between and versions of . While agents' rationality and severity of competition are often modeled separately in the literature, it is not unusual to have them modeled with the same “knob" <cit.>. § OUR MODEL AND PRELIMINARIES Principals and agents. There are two principals and T agents. The game proceeds in rounds (we will sometimes refer to them as global rounds). In each round t∈ [T], the followinginteraction takes place. A new agent arrives and chooses one of the two principals. The principal chooses a recommendation: an action a_t∈ A, where A is a fixed set of actions (same for both principals and all rounds). The agent follows this recommendation, receives a reward r_t∈ [0,1], and reports it back to the principal.The rewards are i.i.d. with a common prior. More formally, for each action a∈ A there is a parametric family ψ_a(·) of reward distributions, parameterized by the mean reward μ_a. (The paradigmatic case is 0-1 rewards with a given expectation.) The mean reward vector μ = (μ_a:a∈ A) is drawn from prior distributionbefore round 1. Whenever a given action a∈ A is chosen, the reward is drawn independently from distribution ψ_a(μ_a). The priorand the distributions (ψ_a(·) a∈ A) constitute the (full) Bayesian prior on rewards, denoted .Each principal commits to a learning algorithm for making recommendations. This algorithm follows a protocol of multi-armed bandits (MAB). Namely, the algorithm proceeds in time-steps: [These time-steps will sometimes be referred to as local steps/rounds, so as to distinguish them from “global rounds" defined before. We will omit the local vs. local distinction when clear from the context.] each time it is called, it outputs a chosen action a∈ A and then inputs the reward for this action. The algorithm is called only in global rounds when the corresponding principal is chosen. The information structure is as follows. The prioris known to everyone. The mean rewards μ_a are not revealed to anybody. Each agent knows both principals' algorithms, and the global round when (s)he arrives, but not the rewards of the previous agents. Each principal is completely unaware of the rounds when the other is chosen.Some terminology. The two principals are called “Principal 1" and “Principal 2".The algorithm of principal i∈{1,2} is called “algorithm i" and denoted [i]. The agent in global round t is called “agent t"; the chosen principal is denoted i_t.Throughout, [·] denotes expectation over all applicable randomness.Bayesian-expected rewards. Consider the performance of a given algorithm [i], i∈{1,2}, when it is run in isolation (without competition, just as a bandit algorithm). 
Let rew_i(n) denote its Bayesian-expected reward for the n-th step. Now, going back to our game, fix global round t and let n_i(t) denote the number of global rounds before t in which principal i is chosen. Then: E[r_t | principal i is chosen in round t and n_i(t)=n] = rew_i(n+1) (∀ n∈ℕ). Agents' response. Each agent t chooses principal i_t as follows: it chooses a distribution over the principals, and then draws independently from this distribution. Let p_t be the probability of choosing principal 1 according to this distribution. Below we specify p_t; we need to be careful so as to avoid a circular definition. Let I_t be the information available to agent t before the round. Assume I_t suffices to form posteriors for the quantities n_i(t), i∈{1,2}; denote these posteriors by P_{i,t}. Note that the Bayesian-expected reward of each principal i is a function only of the number of rounds he was chosen by the agents, so the posterior mean reward for principal i can be written as PMR_i(t) := E[r_t | I_t and i_t=i] = E[rew_i(n_i(t)+1) | I_t] = E_{n∼P_{i,t}}[rew_i(n+1)]. This quantity represents the posterior mean reward for principal i at round t, according to information I_t; hence the notation PMR. In general, the probability p_t is defined by the posterior mean rewards PMR_i(t) of both principals. We assume a somewhat more specific shape: p_t = f_resp(PMR_1(t) − PMR_2(t)). Here f_resp : [−1,1] → [0,1] is the response function, which is the same for all agents. We assume that the response function is known to all agents. To make the model well-defined, it remains to argue that the information I_t is indeed sufficient to form posteriors on n_1(t) and n_2(t). This can be easily seen using induction on t. Since all agents arrive with identical information (other than knowing which global round they arrive in), it follows that all agents have identical posteriors for n_i(t) (for a given principal i and a given global round t). This posterior is denoted P_{i,t}. Response functions. We use the response function f_resp to characterize the amount of rationality and competitiveness in our model. We assume that f_resp is monotonically non-decreasing, is larger than 1/2 on the interval (0,1], and smaller than 1/2 on the interval [−1,0). Beyond that, we consider three specific models, listed in the order of decreasing rationality and competitiveness (see fig:response-functions): * HardMax: f_resp equals 0 on the interval [−1,0) and 1 on the interval (0,1]. In other words, the agents will deterministically choose the principal with the higher posterior mean reward. * HardMax&Random: f_resp equals ε_0 on the interval [−1,0) and 1−ε_0 on the interval (0,1], where ε_0 ∈ (0,1/2) is some positive constant. In words, each agent is a HardMax agent with probability 1−2ε_0, and with the remaining probability she makes a random choice. * SoftMax: f_resp(·) lies in the interval [ε_0, 1−ε_0], ε_0>0, and is "smooth" around 0 (in the sense defined precisely in Section <ref>). We say that f_resp is symmetric if f_resp(−x)+f_resp(x)=1 for any x∈[0,1]. This implies fair tie-breaking: f_resp(0)=1/2. MAB algorithms. We characterize the inherent quality of an MAB algorithm in terms of its Bayesian Instantaneous Regret (henceforth, BIR), a standard notion from machine learning: BIR(n) := E_{μ∼P}[max_{a∈A} μ_a] − rew(n), where rew(n) is the Bayesian-expected reward of the algorithm for the n-th step, when the algorithm is run in isolation. We are primarily interested in how BIR scales with n; we treat K, the number of arms, as a constant unless specified otherwise. We will emphasize several specific algorithms or classes thereof: * "smart" MAB algorithms that combine exploration and exploitation, such as UCB1 <cit.> and Successive Elimination <cit.>.
These algorithms achieve BIR(n) ≤ Õ(n^{−1/2}) for all priors and all (or all but a very few) steps n. This bound is known to be tight for any fixed n. [This follows from the lower-bound analysis in <cit.>.] * "naive" MAB algorithms that separate exploration and exploitation, such as Explore-then-Exploit and ε-Greedy. These algorithms have dedicated rounds in which they explore by choosing an action uniformly at random. When these rounds are known in advance, the algorithm suffers constant BIR in such rounds. When the "exploration rounds" are instead randomly chosen by the algorithm, one can usually guarantee an inverse-polynomial upper bound on BIR, but not as good as the one above: namely, BIR(n) ≤ Õ(n^{−1/3}). This is the best possible upper bound on BIR for the two algorithms mentioned above. * DynamicGreedy: at each step, it recommends the best action according to the current posterior: an action a with the highest posterior expected reward E[μ_a | I], where I is the information available to the algorithm so far. DynamicGreedy has (at least) a constant BIR for some reasonable priors: BIR(n) > Ω(1). * StaticGreedy: it always recommends the prior best action, i.e., an action a with the highest prior mean reward E_{μ∼P}[μ_a]. This algorithm typically has constant BIR. We focus on MAB algorithms such that BIR(n) is non-increasing; we call such algorithms monotone. While some reasonable MAB algorithms may occasionally violate monotonicity, they can usually be easily modified so that monotonicity violations either vanish altogether, or only occur at very specific rounds (so that agents are extremely unlikely to exploit them in practice). More background and examples can be found in Appendix <ref>. In particular, we prove there that DynamicGreedy is monotone. Competition game between principals. Some of our results explicitly study the game between the two principals. We model it as a simultaneous-move game: before the first agent arrives, each principal commits to an MAB algorithm. Thus, choosing a pure strategy in this game corresponds to choosing an MAB algorithm (and, implicitly, announcing this algorithm to the agents). A principal's utility is primarily defined as the market share: the number of agents that chose this principal. Principals are risk-neutral, in the sense that they optimize their expected utility. Assumptions on the prior. We make some technical assumptions for the sake of simplicity. First, each action a has a positive probability of being the best action according to the prior: ∀a∈A: Pr_{μ∼P}[μ_a > μ_{a'} ∀a'∈A∖{a}] > 0. Second, the posterior mean rewards of actions are pairwise distinct almost surely. That is, the history h at any step of an MAB algorithm [The history of an MAB algorithm at a given step comprises the chosen actions and the observed rewards in all previous steps of the execution of this algorithm.] satisfies E[μ_a | h] ≠ E[μ_{a'} | h] for all actions a ≠ a', except at a set of histories of probability 0. In particular, the prior mean rewards of actions are pairwise distinct: E[μ_a] ≠ E[μ_{a'}] for any two distinct actions a, a'∈A. We provide two examples for which property (<ref>) is 'generic', in the sense that it can be enforced almost surely by a small random perturbation of the prior. Both examples focus on 0-1 rewards and priors that are independent across arms. The first example assumes Beta priors on the mean rewards, and is very easy. [Suppose the rewards are Bernoulli random variables and the mean reward μ_a for each arm a is drawn from some Beta distribution Beta(α_a, β_a). Given any history that contains h_a heads and t_a tails from arm a, the posterior mean reward of arm a is (α_a + h_a)/(α_a + β_a + h_a + t_a).
Note that h_a and t_a take integer values. Therefore, perturbing the parameters α_a and β_a independently with any continuous noise will induce a prior with property (<ref>) with probability 1.] The second example assumes that the mean rewards have a finite support; see Appendix <ref> for details. Some more notation. Without loss of generality, we label the actions as A=[K] and sort them according to their prior mean rewards, so that E[μ_1] > E[μ_2] > … > E[μ_K]. Fix principal i∈{1,2} and (local) step n. The arm chosen by algorithm alg_i at this step is denoted a_{i,n}, and the corresponding Bayesian Instantaneous Regret is denoted BIR_i(n). The history of alg_i up to this step is denoted H_{i,n}. Write PMR(a | E) = E[μ_a | E] for the posterior mean reward of action a given event E. §.§ Generalizations Our results can be extended compared to the basic model described above. First, unless specified otherwise, our results allow a more general notion of principal's utility that can depend on both the market share and the agents' rewards. Namely, principal i collects U_i(r_t) units of utility in each global round t when she is chosen (and 0 otherwise), where U_i(·) is some fixed non-decreasing function with U_i(0)>0. In a formula, U_i := Σ_{t=1}^T 1{i_t = i}·U_i(r_t). Second, our results carry over, with little or no modification of the proofs, to much more general versions of MAB, as long as they satisfy the i.i.d. property. In each round, an algorithm can see a context before choosing an action (as in contextual bandits) and/or additional feedback other than the reward after the action is chosen (as in semi-bandits), as long as the contexts are drawn from a fixed distribution, and the (reward, feedback) pair is drawn from a fixed distribution that depends only on the context and the chosen action. The Bayesian prior needs to be a more complicated object, to make sure that the Bayesian-expected rewards and BIR are well-defined. Mean rewards may also have a known structure, such as Lipschitzness, convexity, or linearity; such structure can be incorporated via the prior. All these extensions have been studied extensively in the literature on MAB, and account for a substantial segment thereof; see <cit.> for background and details. §.§ Chernoff Bounds We use an elementary concentration inequality known as Chernoff Bounds, in a formulation from <cit.>. Consider n i.i.d. random variables X_1 … X_n with values in [0,1]. Let X = (1/n)Σ_{i=1}^n X_i be their average, and let ν = E[X]. Then: max( Pr[X−ν > δν], Pr[ν−X > δν] ) < e^{−νnδ²/3} for any δ∈(0,1).
§ FULL RATIONALITY (HARDMAX) In this section, we consider the version in which the agents are fully rational, in the sense that their response function is HardMax. We show that principals are not incentivized to explore—to deviate from DynamicGreedy. The core technical result is that if one principal adopts DynamicGreedy, then the other principal loses all agents as soon as he deviates. To make this more precise, let us say that two MAB algorithms deviate at (local) step n if there is an action a∈A and a set of step-n histories of positive probability such that any history h in this set is feasible for both algorithms, and under this history the two algorithms choose action a with different probability.
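As a concrete illustration of these dynamics, here is a small Monte Carlo sketch (ours, not part of the paper): it estimates rew_i(n) for DynamicGreedy and for a naive explore-first algorithm on a Beta-Bernoulli instance, then replays the HardMax market dynamics with fair tie-breaking. All function names and numerical parameters here are our own choices, and the estimated rew curves are noisy, so the output is qualitative only.

import numpy as np
rng = np.random.default_rng(0)
K, HORIZON, SIMS = 3, 200, 2000

def dynamic_greedy(mu, T):
    # One run; Beta(1,1) priors on Bernoulli mean rewards, greedy recommendations.
    s, f = np.ones(K), np.ones(K)
    rewards = np.empty(T)
    for n in range(T):
        a = np.argmax(s / (s + f))            # highest posterior mean reward
        r = rng.random() < mu[a]
        s[a] += r; f[a] += 1 - r
        rewards[n] = r
    return rewards

def explore_first(mu, T, n_explore=30):
    # Round-robin exploration for n_explore steps, then exploit the posterior.
    s, f = np.ones(K), np.ones(K)
    rewards = np.empty(T)
    for n in range(T):
        a = n % K if n < n_explore else np.argmax(s / (s + f))
        r = rng.random() < mu[a]
        s[a] += r; f[a] += 1 - r
        rewards[n] = r
    return rewards

def mc_rew(alg):
    # Monte Carlo estimate of rew(n), averaging over prior draws of mu.
    return np.mean([alg(rng.random(K), HORIZON) for _ in range(SIMS)], axis=0)

rew = [mc_rew(dynamic_greedy), mc_rew(explore_first)]
n = [0, 0]                                    # agents seen by each principal
for t in range(HORIZON):
    pmr = [rew[i][n[i]] for i in (0, 1)]      # PMR_i(t) = rew_i(n_i(t) + 1)
    i = int(np.argmax(pmr)) if pmr[0] != pmr[1] else int(rng.integers(2))
    n[i] += 1
print("final market shares:", n)              # DynamicGreedy tends to take over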
Assume the HardMax response function with fair tie-breaking. Assume that alg_1 is DynamicGreedy, and alg_2 deviates from DynamicGreedy starting from some (local) step n_0 < T. Then all agents in global rounds t ≥ n_0 select principal 1. The competition game between the principals has a unique Nash equilibrium: both principals choose DynamicGreedy. This corollary holds under a more general model which allows time-discounting: namely, the utility of each principal i in each global round t is U_{i,t}(r_t) if this principal is chosen, and 0 otherwise, where U_{i,t}(·) is an arbitrary non-decreasing function with U_{i,t}(0)>0. §.§ Proof of Theorem <ref> The proof starts with two auxiliary lemmas: that deviating from DynamicGreedy implies a strictly smaller Bayesian-expected reward, and that HardMax implies a "sudden-death" property: if one agent chooses principal 1 with certainty, then so do all subsequent agents. We re-use both lemmas in later sections, so we state them in sufficient generality. Assume that alg_1 is DynamicGreedy, and alg_2 deviates from DynamicGreedy starting from some (local) step n_0 < T. Then rew_1(n_0) > rew_2(n_0). This holds for any response function f_resp. Lemma <ref> does not rely on any particular shape of the response function because it only considers the performance of each algorithm without competition. Since the two algorithms coincide on the first n_0−1 steps, it follows by symmetry that the histories H_{1,n_0} and H_{2,n_0} have the same distribution. We use a coupling argument: w.l.o.g., we assume the two histories coincide, H_{1,n_0} = H_{2,n_0} = H. At local step n_0, DynamicGreedy chooses an action a_{1,n_0} = a_{1,n_0}(H) which maximizes the posterior mean reward given history H: for any realized history h ∈ support(H) and any action a∈A, PMR(a_{1,n_0} | H=h) ≥ PMR(a | H=h). By assumption (<ref>), it follows that PMR(a_{1,n_0} | H=h) > PMR(a | H=h) for any h ∈ support(H) and any a ≠ a_{1,n_0}(h). Since the two algorithms deviate at step n_0, there is a set S ⊂ support(H) of step-n_0 histories such that Pr[S]>0 and any history h∈S satisfies Pr[a_{2,n_0} ≠ a_{1,n_0} | H=h] > 0. Combining this with (<ref>), we deduce that PMR(a_{1,n_0} | H=h) > E[μ_{a_{2,n_0}} | H=h] for each history h∈S. Using (<ref>) and (<ref>) and integrating over the realized histories h, we obtain rew_1(n_0) > rew_2(n_0). Consider a HardMax response function with f_resp(0) ≥ 1/2. Suppose alg_1 is monotone, and PMR_1(t_0) > PMR_2(t_0) for some global round t_0. Then PMR_1(t) > PMR_2(t) for all subsequent rounds t. Let us use induction on round t ≥ t_0, with the base case t = t_0. Let P = P_{1,t_0} be the agents' posterior distribution for n_1(t_0), the number of global rounds before t_0 in which principal 1 is chosen. By induction, all agents from t_0 to t−1 chose principal 1, so PMR_2(t_0) = PMR_2(t). Therefore, PMR_1(t) = E_{n∼P}[rew_1(n+1+t−t_0)] ≥ E_{n∼P}[rew_1(n+1)] = PMR_1(t_0) > PMR_2(t_0) = PMR_2(t), where the first inequality holds because alg_1 is monotone, and the second one is the base case. Since the two algorithms coincide on the first n_0−1 steps, it follows by symmetry that rew_1(n) = rew_2(n) for any n < n_0. By Lemma <ref>, rew_1(n_0) > rew_2(n_0). Recall that n_i(t) is the number of global rounds s < t in which principal i is chosen, and P_{i,t} is the agents' posterior distribution for this quantity. By symmetry, each agent t < n_0 chooses a principal uniformly at random. It follows that P_{1,n_0} = P_{2,n_0} (denote both distributions by P for brevity), and P(n_0−1) > 0. Therefore: PMR_1(n_0) = E_{n∼P}[rew_1(n+1)] = Σ_{n=0}^{n_0−1} P(n)·rew_1(n+1) > P(n_0−1)·rew_2(n_0) + Σ_{n=0}^{n_0−2} P(n)·rew_2(n+1) = E_{n∼P}[rew_2(n+1)] = PMR_2(n_0). So, agent n_0 chooses principal 1. By Lemma <ref> (noting that DynamicGreedy is monotone), all subsequent agents choose principal 1, too. §.§ HardMax with biased tie-breaking The HardMax model is very sensitive to the tie-breaking rule.
For starters, if ties are broken deterministically in favor of principal 1, then principal 1 can get all agents no matter what the other principal does, simply by using StaticGreedy. Assume a HardMax response function with f_resp(0) = 1 (ties are always broken in favor of principal 1). If alg_1 is StaticGreedy, then all agents choose principal 1. Agent 1 chooses principal 1 because of the tie-breaking rule. Since StaticGreedy is trivially monotone, all the subsequent agents choose principal 1 by an induction argument similar to the one in the proof of Lemma <ref>. A more challenging scenario is when the tie-breaking is biased in favor of principal 1, but not deterministically so: f_resp(0) > 1/2. Then this principal also has a "winning strategy" no matter what the other principal does. Specifically, principal 1 can get all but the first few agents, under a mild technical assumption that DynamicGreedy deviates from StaticGreedy. Principal 1 can use DynamicGreedy, or any other monotone MAB algorithm that coincides with DynamicGreedy in the first few steps. Assume a HardMax response function with f_resp(0) > 1/2 (tie-breaking is biased in favor of principal 1). Assume the prior is such that DynamicGreedy deviates from StaticGreedy starting from some step n_0. Suppose that principal 1 runs a monotone MAB algorithm that coincides with DynamicGreedy in the first n_0 steps. Then all agents t ≥ n_0 choose principal 1. The proof re-uses Lemmas <ref> and <ref>, which do not rely on fair tie-breaking. Because of the biased tie-breaking, for each global round t we have: if PMR_1(t) ≥ PMR_2(t) then Pr[i_t=1] > 1/2. Recall that i_t is the principal chosen in global round t. Let m_0 be the first step when alg_2 deviates from DynamicGreedy, or DynamicGreedy deviates from StaticGreedy, whichever comes sooner. Then alg_2, DynamicGreedy and StaticGreedy all coincide on the first m_0−1 steps. Moreover, m_0 ≤ n_0 (since DynamicGreedy deviates from StaticGreedy at step n_0), so alg_1 coincides with DynamicGreedy on the first m_0 steps. So, rew_1(n) = rew_2(n) for each step n < m_0, because alg_1 and alg_2 coincide on the first m_0−1 steps. Moreover, if alg_2 deviates from DynamicGreedy at step m_0 then rew_1(m_0) > rew_2(m_0) by Lemma <ref>; else, we trivially have rew_1(m_0) = rew_2(m_0). To summarize: rew_1(n) ≥ rew_2(n) for all steps n ≤ m_0. We claim that Pr[i_t=1] > 1/2 for all global rounds t ≤ m_0. We prove this claim using induction on t. The base case t=1 holds by (<ref>) and the fact that in step 1, DynamicGreedy chooses the arm with the highest prior mean reward. For the induction step, we assume that Pr[i_t=1] > 1/2 for all global rounds t < t_0, for some t_0 ≤ m_0. It follows that the distribution P_{1,t_0} stochastically dominates the distribution P_{2,t_0}. [For random variables X, Y on ℕ, we say that X stochastically dominates Y if Pr[X ≥ x] ≥ Pr[Y ≥ x] for any x∈ℕ.] Observe that PMR_1(t_0) = E_{n∼P_{1,t_0}}[rew_1(n+1)] ≥ E_{n∼P_{2,t_0}}[rew_2(n+1)] = PMR_2(t_0). So the induction step follows by (<ref>). Claim proved. Now let us focus on global round m_0, and denote P_i = P_{i,m_0}. By the above claim, P_1 stochastically dominates P_2, and moreover P_1(m_0−1) > P_2(m_0−1). By definition of m_0, either (i) alg_2 deviates from DynamicGreedy starting from local step m_0, which implies rew_1(m_0) > rew_2(m_0) by Lemma <ref>, or (ii) DynamicGreedy deviates from StaticGreedy starting from local step m_0, which implies rew_1(m_0) > rew_1(m_0−1) by Lemma <ref>. In both cases, using (<ref>) and (<ref>), it follows that the inequality in (<ref>) is strict for t_0 = m_0. Therefore, agent m_0 chooses principal 1, and by Lemma <ref> so do all subsequent agents.
§ RELAXED RATIONALITY: HARDMAX & RANDOM This section is dedicated to the HardMax&Random response model, in which each principal is always chosen with some positive baseline probability.
The main technical result for this model states that a principal with asymptotically better wins by a large margin: after a “learning phase" of constant duration, all agents choose this principal with maximal possible probability (1). For example, a principal with (n)≤Õ(n^-1/2) wins over a principal with (n)≥Ω(n^-1/3). However, this positive result comes with a significant caveat detailed in Section <ref>.

We formulate and prove a cleaner version of the result, followed by a more general formulation developed in a subsequent Remark <ref>. We need to express a property that [1] eventually catches up and surpasses [2], even if initially it receives only a fraction of traffic. For the cleaner version, we assume that both algorithms are well-defined for an infinite time horizon, so that their does not depend on the time horizon T of the game. Then this property can be formalized as: (∀ > 0) _1( n)/_2(n) → 0. In fact, a weaker version of (<ref>) suffices: denoting _0 = (-1), for some constant n_0 we have (∀ n≥ n_0) _1(_0 n/2)/_2(n) < 1/2. We also need a very mild technical assumption on the “bad" algorithm: (∀ n≥ n_0) _2(n) > 4 e^-_0 n/12.

Assume response function. Suppose both algorithms are monotone and well-defined for an infinite time horizon, and satisfy (<ref>) and (<ref>). Then each agent t≥ n_0 chooses principal 1 with maximal possible probability (1) = 1 - _0.

Consider global round t≥ n_0. Recall that each agent chooses principal 1 with probability at least (-1)>0. Then [n_1(t+1)] ≥ 2_0 t. By Chernoff Bounds (Theorem <ref>), we have that n_1(t+1)≥_0 t holds with probability at least 1-q, where q = exp(-_0 t/12). We need to prove that _1(t) - _2(t)>0. For any m_1 and m_2, consider the quantity Δ(m_1,m_2) := _2(m_2+1) - _1(m_1+1). Whenever m_1 ≥_0 t/2 - 1 and m_2<t, it holds that Δ(m_1,m_2) ≥Δ(_0 t/2, t) ≥_2(t)/2. The above inequalities follow, resp., from algorithms' monotonicity and (<ref>). Now, _1(t) - _2(t) = m_1∼1t, m_2∼2t Δ(m_1,m_2) ≥ -q + m_1∼1t, m_2∼2t Δ(m_1,m_2) | m_1 ≥_0 t/2-1 ≥_2(t)/2 - q > _2(t)/4 > 0 by (<ref>).

Many standard MAB algorithms in the literature are parameterized by the time horizon T. Regret bounds for such algorithms usually include a polylogarithmic dependence on T. In particular, a typical upper bound for has the following form: [We provide upper bounds on for several standard MAB algorithms to illustrate these dependencies in the appendix.] (n| T) ≤(T)· n^-γ for some γ∈(0, 1/2]. Here we write (n| T) to emphasize the dependence on T. We generalize (<ref>) to handle the dependence on T: there exists a number T_0 and a function n_0(T)∈(T) such that (∀ T≥ T_0, n≥ n_0(T)) _1(_0 n/2 | T)/_2(n| T) < 1/2. If this holds, we say that [1] BIR-dominates [2]. We provide a version of Theorem <ref> in which algorithms are parameterized with time horizon T and condition (<ref>) is replaced with (<ref>); its proof is very similar and is omitted.

To state a game-theoretic corollary of Theorem <ref>, we consider a version of the competition game between the two principals in which they can only choose from a finite set of monotone MAB algorithms. One of these algorithms is “better" than all others; we call it the special algorithm. Unless specified otherwise, it BIR-dominates all other allowed algorithms. The other algorithms satisfy (<ref>). We call this game the restricted competition game.

Assume response function. Consider the restricted competition game with special algorithm . Then, for any sufficiently large time horizon T, this game has a unique Nash equilibrium: both principals choose .
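As a quick numerical illustration of BIR-dominance, the sketch below is our own: the power-law BIR curves and all constants are made up, loosely matching the n^-1/2-versus-n^-1/3 example above. It searches for a threshold n_0 beyond which the ratio in the dominance condition stays below 1/2.

```python
# Our own numerical illustration of the BIR-dominance condition:
# hypothetical power-law BIR curves (constants made up).

def bir1(n: float) -> float:
    return n ** -0.5        # the "better" algorithm, BIR ~ n^{-1/2}

def bir2(n: float) -> float:
    return n ** (-1.0 / 3)  # the "worse" algorithm, BIR ~ n^{-1/3}

def dominance_threshold(eps0: float, n_max: int = 10**6):
    """Smallest n0 such that bir1(eps0*n/2)/bir2(n) < 1/2 for all
    n0 <= n <= n_max, or None if the condition fails up to n_max."""
    n0 = None
    for n in range(1, n_max + 1):
        ratio = bir1(max(eps0 * n / 2.0, 1.0)) / bir2(n)
        if ratio < 0.5:
            if n0 is None:
                n0 = n
        else:
            n0 = None  # condition violated; reset any earlier candidate
    return n0

print(dominance_threshold(eps0=0.2))  # ~6.4e4 for these made-up curves
```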
§.§ A little greedy goes a long way

Given any monotone MAB algorithm other than , we design a modified algorithm which learns at a slower rate, yet “wins the game" in the sense of Theorem <ref>. As a corollary, the competition game with unrestricted choice of algorithms typically does not have a Nash equilibrium. Given an algorithm [1] that deviates from starting from step n_0 and a “mixing” parameter p, we will construct a modified algorithm as follows.

* The modified algorithm coincides with [1] (and ) for the first n_0-1 steps;
* In each step n≥ n_0, [1] is invoked with probability 1-p, and with the remaining probability p does the “greedy choice": chooses an action with the largest posterior mean reward given the current information collected by [1].

For a cleaner comparison between the two algorithms, the modified algorithm does not record rewards received in steps with the “greedy choice". Parameter p>0 is the same for all steps.

Assume symmetric response function. Let _0 = (-1) be the baseline probability. Suppose [1] deviates from starting from some step n_0. Let [2] be the modified algorithm, as described above, with mixing parameter p such that (1-_0)(1-p)>_0. Then each agent t≥ n_0 chooses principal 2 with maximal possible probability 1-_0.

Suppose that both principals can choose any monotone MAB algorithm, and assume the symmetric response function. Then for any time horizon T, the only possible pure Nash equilibrium is one where both principals choose . Moreover, no pure Nash equilibrium exists when some algorithm “dominates" in the sense of (<ref>) and the time horizon T is sufficiently large.

The modified algorithm performs exploration at a slower rate. Let us argue how this may translate into a larger compared to the original algorithm. Let '_1(n) be the of the “greedy choice" after n-1 steps of [1]. Then _2(n) = m∼ (n_0-1)+Binomial(n-n_0+1, 1-p) [ (1-p)·_1(m) + p·'_1(m) ]. In this expression, m is the number of times [1] is invoked in the first n steps of the modified algorithm. Note that [m] = n_0-1 + (n-n_0+1)(1-p) ≥ (1-p)n. Suppose _1(n) = β n^-γ for some constants β,γ>0. Further, assume '_1(n) ≥ c _1(n), for some c>1-γ. Then for all n≥ n_0 and small enough p>0 it holds that: _2(n) ≥ (1-p+pc) [ _1(m) ], and [ _1(m) ] ≥_1([m]) (by Jensen's inequality) ≥_1((1-p)n) (since [m] ≥ n(1-p)) ≥β· n^-γ· (1-p)^-γ (plugging in _1(n) = β n^-γ) > _1(n)·(1-pγ)^-1 (since (1-p)^γ < 1-pγ). Thus _2(n) > α·_1(n), where α = (1-p+pc)/(1-pγ) > 1. (In the above equations, all expectations are over m distributed as in (<ref>).)

Let '_1(n) denote the Bayesian-expected reward of the “greedy choice” after n-1 steps of [1]. Note that _1(·) and '_1(·) are non-decreasing: the former because [1] is monotone and the latter because the “greedy choice” is only improved with an increasing set of observations. Therefore, the modified algorithm [2] is monotone by (<ref>). By definition of the “greedy choice,” _1(n)≤'_1(n) for all steps n. Moreover, by Lemma <ref>, [1] has a strictly smaller (n_0) compared to ; so, _1(n_0)<_2(n_0).

Let denote a copy of [1] that is running “inside" the modified algorithm [2]. Let m_2(t) be the number of global rounds before t in which the agent chooses principal 2 and is invoked; in other words, it is the number of agents seen by before global round t. Let _2,t be the agents' posterior distribution for m_2(t). We claim that in each global round t≥ n_0, distribution _2,t stochastically dominates distribution 1t, and _1(t)<_2(t). We use induction on t.
The base case t=n_0 holds because _2,t = 1t (because the two algorithms coincide on the first n_0-1 steps), and _1(n_0)<_2(n_0) is proved as in (<ref>), using the fact that _1(n_0)<_2(n_0). The induction step is proved as follows. The induction hypothesis for global round t-1 implies that agent t-1 is seen by with probability (1-_0)(1-p), which is strictly larger than _0, the probability with which this agent is seen by [2]. Therefore, _2,t stochastically dominates 1t. _1(t) = n∼1t _1(n+1) ≤ m∼_2,t _1(m+1) < m∼_2,t (1-p)·_1(m+1) + p·'_1(m+1) = _2(t). Here inequality (<ref>) holds because _1(·) is monotone and _2,t stochastically dominates 1t, and inequality (<ref>) holds because _1(n_0)<_2(n_0) and _2,t(n_0)>0. [If _1(·) is strictly increasing, then inequality (<ref>) is strict, too; this is because _2,t(t-1)>1t(t-1).]

§ SOFTMAX RESPONSE FUNCTION

This section is devoted to the model. We recover a positive result under the assumptions from Theorem <ref> (albeit with a weaker conclusion), and then proceed to a much more challenging result under weaker assumptions. We start with a formal definition:

A response function is if the following conditions hold:
* (·) is bounded away from 0 and 1: (·)∈ [, 1-] for some ∈ (0, 1/2),
* the response function (·) is “smooth" around 0: ∃ constants δ_0,c_0,c'_0>0 such that ∀ x∈ [-δ_0,δ_0], c_0 ≤'(x) ≤ c'_0,
* fair tie-breaking: (0) = 1/2.

This definition is fruitful when parameters c_0 and c_0' are close to 1/2. Throughout, we assume that [1] is better than [2], and obtain results parameterized by c_0. By symmetry, one could assume that [2] is better than [1], and obtain similar results parameterized by c_0'.

Our first result is a version of Theorem <ref>, with the same assumptions about the algorithms and essentially the same proof. The conclusion is much weaker: we can only guarantee that each agent t≥ n_0 chooses principal 1 with probability slightly larger than 1/2. This is essentially unavoidable in a typical case when both algorithms satisfy (n)→ 0, by Definition <ref>.

Assume response function. Suppose [1] has better in the sense of (<ref>), and [2] satisfies the condition (<ref>). Then each agent t≥ n_0 chooses principal 1 with probability [i_t = 1] ≥ 1/2 + c_0 _2(t)/4. We follow the steps in the proof of Theorem <ref> to derive _1(t) - _2(t) ≥_2(t)/2 - q, where q = exp(-_0 t/12). This is at least _2(t)/4 by (<ref>). Then (<ref>) follows by the smoothness condition (<ref>).

We recover a version of Corollary <ref>, if each principal's utility is the number of users (rather than the more general model in (<ref>)). We also need a mild technical assumption that cumulative Bayesian regret () tends to infinity. is a standard notion from the literature (along with ): (n) := n·_μ∼ [ max_a∈ A μ_a ] - ∑_n'=1^n (n') = ∑_n'=1^n (n').

Assume that the response function is , and each principal's utility is the number of users. Consider the restricted competition game with special algorithm , and assume that all other allowed algorithms satisfy (n)→∞. Then, for any sufficiently large time horizon T, this game has a unique Nash equilibrium: both principals choose .

Further, we prove a much more challenging result in which the condition (<ref>) is replaced with a much weaker “BIR-dominance” condition. For clarity, we will again assume that both algorithms are well-defined for an infinite time horizon. The weak BIR-dominance condition says that there exist constants β_0, α_0∈ (0, 1/2) and n_0 such that (∀ n≥ n_0) _1((1-β_0)n)/_2(n) < 1-α_0. If this holds, we say that [1] weakly BIR-dominates [2].
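Before analyzing the SoftMax model, it may help to see the “greedy mixing” construction of the preceding subsection in code. The sketch below is our own scaffolding, not the paper's implementation: the base-algorithm interface (next_action/record) and the Beta-Bernoulli posterior are illustrative assumptions.

```python
import random

class GreedyMix:
    """Our sketch of the modified algorithm from the 'little greedy'
    subsection: it mimics the base algorithm for the first n0-1 steps;
    afterwards, with probability p it plays the action with the largest
    posterior mean reward and discards the observed reward, and
    otherwise it invokes the base algorithm. Beta(1,1) posteriors over
    Bernoulli rewards are an illustrative assumption."""

    def __init__(self, base_alg, n_actions: int, n0: int, p: float):
        self.base, self.n0, self.p = base_alg, n0, p
        self.step = 0
        self.greedy_step = False
        # Per-action (successes + 1, failures + 1) posterior parameters,
        # mirroring the information collected by the base algorithm.
        self.post = [[1, 1] for _ in range(n_actions)]

    def next_action(self) -> int:
        self.step += 1
        self.greedy_step = (self.step >= self.n0
                            and random.random() < self.p)
        if self.greedy_step:
            # Greedy choice: largest posterior mean a / (a + b).
            return max(range(len(self.post)),
                       key=lambda i: self.post[i][0] / sum(self.post[i]))
        return self.base.next_action()

    def record(self, action: int, reward: int) -> None:
        if self.greedy_step:
            return  # rewards from greedy steps are not recorded
        self.post[action][0 if reward == 1 else 1] += 1
        self.base.record(action, reward)
```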
Note that while the condition (<ref>) involves sufficiently small multiplicative factors (resp., _0/2 and 1/2), the new condition replaces them with factors that can be arbitrarily close to 1. We make a mild assumption on [1] that its _1(n) tends to 0. Formally, for any > 0, there exists some n() such that (∀ n≥ n()) _1(n) ≤. We also require a slightly stronger version of the technical assumption (<ref>): for some n_0, (∀ n≥ n_0) _2(n) ≥ 4/α_0 · exp( -min{_0, 1/8} n/12 ).

Assume the response function. Suppose [1] weakly BIR-dominates [2], [1] satisfies (<ref>), and [2] satisfies (<ref>). Then there exists some t_0 such that each agent t≥ t_0 chooses principal 1 with probability [i_t = 1] ≥ 1/2 + c_0 α_0 _2(t)/4.

The main idea behind our proof is that even though [1] may have a slower rate of learning in the beginning, it will gradually catch up and surpass [2]. We will describe this process in two phases. In the first phase, [1] receives a random agent with probability at least (-1) = _0 in each round. Since _1 tends to 0, the difference in s between the two algorithms is also diminishing. Due to the response function, [1] attracts each agent with probability at least 1/2 - O(β_0) after a sufficient number of rounds. Then the game enters the second phase: both algorithms receive agents at a rate close to 1/2, and the fractions of agents received by both algorithms — n_1(t)/t and n_2(t)/t — also converge to 1/2. At the end of the second phase and in each global round afterwards, the counts n_1(t) and n_2(t) satisfy the weak BIR-dominance condition, in the sense that they both are larger than n_0 and n_1(t)≥ (1-β_0)n_2(t). At this point, [1] actually has smaller , which is reflected in the s eventually. Accordingly, from then on [1] attracts agents at a rate slightly larger than 1/2. We prove that the “bump” over 1/2 is at least on the order of _2(t).

Let β_1 = min{c_0'δ_0, β_0/20} with δ_0 defined in (<ref>). Recall that each agent chooses [1] with probability at least (-1) = _0. By conditions (<ref>) and (<ref>), there exists some sufficiently large T_1 such that for any t≥ T_1, _1(_0 T_1/2) ≤β_1/c_0' and _2(t) > e^-_0 t/12. Moreover, for any t≥ T_1, we know [n_1(t+1)] ≥_0 t, and by the Chernoff Bounds (Theorem <ref>), we have that n_1(t+1) ≥_0 t/2 holds with probability at least 1 - q_1(t) with q_1(t) = exp(-_0 t/12) < _2(t). It follows that for any t≥ T_1, _2(t) - _1(t) = m_1∼1t, m_2∼2t _1(m_1+1) - _2(m_2+1) ≤ q_1(t) + m_1∼1t _1(m_1+1) | m_1 ≥_0 t/2 - 1 - _2(t) ≤_1(_0 T_1/2) ≤β_1/c_0'. Since the response function is c_0'-Lipschitz in the neighborhood of [-δ_0, δ_0], each agent after round T_1 will choose [1] with probability at least p_t ≥ 1/2 - c_0'(_2(t) - _1(t)) ≥ 1/2 - β_1.
Next, we will show that there exists a sufficiently large T_2 such that for any t≥ T_1 + T_2, with high probability n_1(t) > max{n_0, (1 - β_0)n_2(t)}, where n_0 is defined in (<ref>). Fix any t ≥ T_1 + T_2. Since each agent chooses [1] with probability at least 1/2 - β_1, by Chernoff Bounds (Theorem <ref>) we have with probability at least 1 - q_2(t) that the number of agents that choose [1] is at least β_0(1/2 - β_1)t/5, where the function q_2(x) = exp( -(1/2 - β_1)(1 - β_0/5)^2 x/3 ). Note that the number of agents received by [2] is at most T_1 + (1/2 + β_1)t + (1/2 - β_1)(1 - β_0/5)t. Then as long as T_2 ≥ 5T_1/β_0, we can guarantee that n_1(t) > n_2(t)(1 - β_0) and n_1(t) > n_0 with probability at least 1 - q_2(t) for any t ≥ T_1 + T_2. Note that the weak BIR-dominance condition in (<ref>) implies that for any t≥ T_1 + T_2, with probability at least 1 - q_2(t), _1(n_1(t)) < (1- α_0)_2(n_2(t)). It follows that for any t≥ T_1 + T_2, _1(t) - _2(t) = m_1∼1t, m_2∼2t _2(m_2+1) - _1(m_1+1) ≥ (1 - q_2(t))α_0 _2(t) - q_2(t) ≥α_0 _2(t)/4, where the last inequality holds as long as q_2(t) ≤α_0 _2(t)/4, and is implied by the condition in (<ref>) as long as T_2 is sufficiently large. Hence, by the definition of our response function and the assumption in (<ref>), we have [i_t = 1] ≥ 1/2 + c_0 α_0 _2(t)/4.

Similar to the condition (<ref>), we can also generalize the weak BIR-dominance condition (<ref>) to handle the dependence on T: there exist some T_0, a function n_0(T)∈(T), and constants β_0,α_0∈ (0, 1/2), such that (∀ T≥ T_0, n≥ n_0(T)) _1((1-β_0)n | T)/_2(n| T) < 1-α_0. We also provide a version of Theorem <ref> under this more general weak BIR-dominance condition; its proof is very similar and is omitted. The following is just a direct consequence of Theorem <ref> with this general condition.

Assume that the response function is , and each principal's utility is the number of users. Consider the restricted competition game in which the special algorithm weakly BIR-dominates the other allowed algorithms, and the latter satisfy (n)→∞. Then, for any sufficiently large time horizon T, there is a unique Nash equilibrium: both principals choose .

§ ECONOMIC IMPLICATIONS

We frame our contributions in terms of the relationship between and , between the extent to which the game between the two principals is competitive, and the degree of innovation — adoption of better — that these models incentivize. is controlled via the response function , and refers to the quality of the technology (MAB algorithms) adopted by the principals. The vs. relationship is well-studied in the economics literature, and is commonly known to often follow an inverted-U shape, as in fig:inverted-U (see Section <ref> for citations). in our models is closely correlated with : the extent to which agents make rational decisions, and indeed is what controls directly.

Main story. Our main story concerns the restricted competition game between the two principals where one allowed algorithm is “better" than the others. We track whether and when is chosen in an equilibrium. We vary / by changing the response function from (full rationality, very competitive environment) to to (less rationality and competition).
Our conclusions are as follows:
* Under , no innovation: is chosen over .
* Under , some innovation: is chosen as long as it BIR-dominates.
* Under , more innovation: is chosen as long as it weakly BIR-dominates. [This is a weaker condition, so the better algorithm is chosen in a broader range of scenarios.]

These conclusions follow, respectively, from Corollaries <ref>, <ref> and <ref>. Further, we consider the uniform choice between the principals. It corresponds to the least amount of rationality and competition, and (when principals' utility is the number of agents) uniform choice provides no incentives to innovate. [On the other hand, if principals' utility is somewhat aligned with agents' welfare, as in (<ref>), then a monopolist principal is incentivized to choose the best possible MAB algorithm (namely, to minimize cumulative Bayesian regret (T)). Accordingly, monopoly would result in better social welfare than competition, as the latter is likely to split the market and cause each principal to learn more slowly. This is a very generic and well-known effect regarding economies of scale.] Thus, we have an inverted-U relationship, see fig:inverted-U2.

Secondary story. Let us zoom in on the symmetric model. Competitiveness and rationality within this model are controlled by the baseline probability _0 = (-1), which goes smoothly between the two extremes of (_0=0) and the uniform choice (_0 = 1/2). Smaller _0 corresponds to increased rationality and increased competitiveness. For clarity, we assume that a principal's utility is the number of agents. We consider the marginal utility of switching to a better algorithm. Suppose initially both principals use some algorithm , and principal 1 ponders switching to another algorithm ' which BIR-dominates . We are interested in the marginal utility of this switch. Then:

* _0 = 0 (): the marginal utility can be negative if is .
* _0 near 0: only a small marginal utility can be guaranteed, as it may take a long time for ' to “catch up" with , and hence there is less time to reap the benefits.
* “medium-range" _0: large marginal utility, as ' learns fast and gets most agents.
* _0 near 1/2: small marginal utility, as principal 1 gets most agents for free no matter what.

The familiar inverted-U shape is depicted in Figure <ref>.

§ ACKNOWLEDGEMENTS

The authors would like to thank Glen Weyl for discussions of related work in economics.

§ BACKGROUND ON MULTI-ARMED BANDITS

This appendix provides some pertinent background on multi-armed bandits (MAB). We discuss and monotonicity of several MAB algorithms, touching upon: and (Section <ref>), “naive" MAB algorithms that separate exploration and exploitation (Section <ref>), and “smart" MAB algorithms that combine exploration and exploitation (Section <ref>). As we do throughout the paper, we focus on MAB with i.i.d. rewards and a Bayesian prior; we call it Bayesian MAB for brevity.

§.§ and

We provide an example when and have constant , and prove monotonicity of . For the example, it suffices to consider deterministic rewards (for each action a, the realized reward is always equal to the mean μ_a) and independent priors (according to the prior , random variables μ_1, …, μ_K are mutually independent), each of full support. The following claim is immediate from the definition of the CDF function. Assume independent priors. Let F_i be the CDF of the mean reward μ_i of action a_i∈ A.
Then, for any numbers z_2>z_1>[μ_2] we have [μ_1≤ z_1 and μ_2≥ z_2] = F_1(z_1)(1-F_2(z_2)). We can now draw an immediate corollary of the above claim. Consider any problem instance of Bayesian MAB with two actions and independent priors which are full support. Then: (a) With constant probability, has a constant for all steps. (b) Assuming deterministic rewards, with constant probability has a constant for all steps. A similar result holds for rewards which are distributed as Bernoulli random variables. In this case we consider the cumulative reward of an action as a random walk, and use a high-probability variation of the law of the iterated logarithm. (Details omitted.)

Next, we show that is monotone. is monotone, in the sense that (n) is non-decreasing. Further, (n) is strictly increasing for every time step n with [a_n≠ a_n+1]>0. We prove by induction on n that (n)≤(n+1) for . Let a_n be the random variable recommended at time n; then [μ_a_n | _n] = (n). We can rewrite this as: (n) = __n[ _r_n[ μ_a_n | r_n, _n ] ] = __n+1[ μ_a_n | _n+1 ], since _n+1=(_n,r_n). At time n+1, will select an action a_n+1 such that: (n+1) = [μ_a_n+1 | _n+1] ≥[μ_a_n | _n] = (n), which proves the monotonicity. In cases where [a_n≠ a_n+1]>0, we have a strict inequality, since with some probability we select a better action than the realization of a_n.

§.§ “Naive" MAB algorithms that separate exploration and exploitation

MAB algorithm (m) initially explores each action with m agents and for the remaining T-|A|m agents recommends the action with the highest observed average. In the explore phase it assigns a random permutation of the mK recommendations. The (T^2/3 log |A|/δ) algorithm has, with probability 1-δ, for any n≥ |A|T^2/3, (n)=O(T^-1/3). In addition, (m) is monotone. In the explore phase we approximate, for each action a∈ A, the value of μ_a by μ̂_a. Using the standard Chernoff bounds we have that with probability 1-δ, for every action a∈ A we have |μ_a - μ̂_a| ≤ T^-1/3. Let a^* = max_a μ_a and let a^ee be the action that selects in the explore phase after the first |A|T^2/3 agents. Since μ̂_a^* ≤ μ̂_a^ee, this implies that μ_a^* - μ_a^ee = O(T^-1/3). To show that (m) is monotone, we need to show only that (mK) ≤(mK+1). This follows since for any t< mK we have (t)=(t+1), since the recommended action is uniformly distributed for each time t. Also, for any t≥ mK+1 we have (t)=(t+1) since we are recommending the same exploitation action. The proof that (mK) ≤(mK+1) is the same as for in Lemma <ref>.

We can also have a phased version, which we call (m_t), where time is partitioned into phases. In phase t we have m_t agents, and a random subset of K explore the actions (each action explored by a single agent) while the other agents exploit. (This implies that we need m_t≥ K for all t. We also assume that m_t is monotone in t.) Consider the case that K=2 and the rewards of the actions are Bernoulli r.v. with parameter μ_i and Δ=μ_1-μ_2. Algorithm (m_t) is monotone and for m_t = √(t) it has (n)=O(n^-1/3+e^-O(Δ^2 n^2/3)). We first show that it is monotone. Recall that μ_1>μ_2. Let S_i=∑_j=1^t r_i,j be the sum of the rewards of action i up to phase t. We need to show that [S_1>S_2] + (1/2)[S_1=S_2] is monotonically increasing in t. Consider the random variable Z=S_1-S_2. At each phase it increases by +1 with probability μ_1(1-μ_2), decreases by -1 with probability (1-μ_1)μ_2, and otherwise does not change. Consider the values of Z up to phase t.
We really care only about the probability mass that is shifted from positive to negative and vice versa. First, consider the probability that Z=0. We can partition it into S_1=S_2=r events, and let p(r,r) be the probability of this event. For each such event, we have p(r,r)μ_1 moved to Z=+1 and p(r,r)μ_2 moved to Z=-1. Since μ_1>μ_2 we have that p(r,r)μ_1≥ p(r,r)μ_2 (note that p(r,r) might be zero, so we do not have a strict inequality). Second, consider the probability that Z=+1 or Z=-1. We can partition it into S_1=r+1; S_2=r and S_1=r; S_2=r+1 events, and let p(r+1,r) and p(r,r+1) be the probabilities of those events. It is not hard to see that p(r+1,r)μ_2 = p(r,r+1)μ_1. This implies that the probability mass moved from Z=+1 to Z=0 is identical to that moved from Z=-1 to Z=0. We have shown that [S_1>S_2] + (1/2)[S_1=S_2] is non-decreasing, and therefore the expected value of the exploit action is non-decreasing. Since we have that the sizes of the phases are increasing, the is strictly increasing between phases and identical within each phase.

We now analyze the regret. Note that agent n is in phase O(n^2/3) and the length of his phase is O(n^1/3). The has two parts. The first is due to the exploration, which is at most O(n^-1/3). The second is due to the probability that we exploit the wrong action. This happens with probability [S_1<S_2] + (1/2)[S_1=S_2], which we can bound using a Chernoff bound by e^-O(Δ^2 n^2/3), since we explored each action O(n^2/3) times. Actually, we have a tradeoff, depending on the parameter m_t, between the regret due to exploration and exploitation. (Note that the monotonicity is always guaranteed assuming m_t is monotone.) If we set m_t = 2^t, then at time n we have 2/n probability of an explore action. For the exploit action, we are in phase log n, so the probability of a sub-optimal exploit action is n^-O(Δ^2). This should give us (n)=O(n^-O(Δ^2)).

§.§ “Smart" MAB algorithms that combine exploration and exploitation

MAB algorithm works as follows. It keeps a set of surviving actions A_s⊆ A, where initially A_s=A. The agents are partitioned into phases, where each phase is a random permutation of the non-eliminated actions. Let μ̂_i,t be the average of the rewards of action i up to phase t and μ̂^* = max_i μ̂_i,t. We eliminate action i at the end of phase t, i.e., delete it from A_s, if μ̂_t^* - μ̂_i,t > log(T/δ)/√(t). In we simply reset the algorithm with A=A_s-A_e,t, where A_e,t is the set of eliminated actions after phase t. Namely, we restart μ̂_i,t and ignore the old rewards before the elimination. The algorithm has, with probability 1-δ, (n)=O(log(T/δ)/√(n/K)). Let the best action be a^* = max_a μ_a. With probability 1-δ, at any time n we have that for any action i∈ A_s, |μ̂_i - μ_i| ≤log(T/δ)/√(n/K), and a^*∈ A_s. This implies that any action a such that μ_a^* - μ_a > 3log(T/δ)/√(n/K) is eliminated. Therefore, any action in A_s has (n) of at most 6log(T/δ)/√(n/K). Assume that if μ_i≥μ_j then the reward r_i stochastically dominates the reward r_j. Then, is monotone. Consider the first time T an action is eliminated, and let T=τ be a realized value of T. Then, clearly for n<τ we have (n)=(1). Consider two actions a_1,a_2∈ A, such that μ_a_1≥μ_a_2. At time T=τ, the probability that a_1 is eliminated is smaller than the probability that a_2 is eliminated.
This follows since μ̂_a_1 stochastically dominates μ̂_a_2, which implies that for any threshold θ we have [μ̂_a_1≥θ] ≥[μ̂_a_2≥θ]. After the elimination we consider the expected reward of the eliminated action ∑_i∈ A μ_i q_i, where q_i is the probability that action i was eliminated at time T=τ. We have that q_i ≤ q_i+1, from the probabilities of elimination. The sum ∑_i∈ A μ_i q_i with q_i ≤ q_i+1 and ∑_i q_i=1 is maximized by setting q_i=1/|A|. (To see this, note that if some q_i≠ 1/|A|, then there are two q_i< q_i+1, and setting both to (q_i+ q_i+1)/2 increases the value.) Therefore we have that (τ)≥(τ-1). Now we can continue by induction. For the induction, we can show the property for any remaining set of at most k-1 actions. The main issue is that restarts from scratch, so we can use induction.

§ NON-DEGENERACY VIA A RANDOM PERTURBATION

We show that Assumption (<ref>) holds almost surely under a small random perturbation of the prior. We focus on problem instances with 0-1 rewards, and assume that the prior is independent across arms and has a finite support. [The assumption of 0-1 rewards is for clarity. Our results hold under a more general assumption that for each arm a, rewards can only take finitely many values, and each of these values is possible (with positive probability) for every possible value of the mean reward μ_a.]

Consider the probability vector in the prior for arm a: p⃗_a = ( [μ_a=ν]: ν∈(μ_a) ). We apply a small random perturbation independently to each such vector: p⃗_a ← p⃗_a + q⃗_a, where q⃗_a∼_a. Here _a is the noise distribution for arm a: a distribution over real-valued, zero-sum vectors of dimension d_a = |(μ_a)|. We need the noise distribution to satisfy the following property: ∀ x∈ [-1,1]^d_a∖{0}, _q∼_a[ x·(p⃗_a+ q) ≠ 0 ] = 1.

Consider an instance of MAB with 0-1 rewards. Assume that the prior is independent across arms, and each mean reward μ_a has a finite support that does not include 0 or 1. Assume that the noise distributions _a satisfy property (<ref>). If random perturbation (<ref>) is applied independently to each arm a, then eq:assn-distinct holds almost surely for each history h.

As a generic example of a noise distribution which satisfies Property (<ref>), consider the uniform distribution over the bounded convex set Q = { q ∈^d_a | q·1⃗ = 0, q_2 ≤ }, where 1⃗ denotes the all-1 vector. If x = a 1⃗ for some non-zero value of a, then (<ref>) holds because x·(p+q) = x·p = a ≠ 0. Otherwise, denote p=p⃗_a and observe that x·(p+q) = 0 only if x·q = c ≜ x·(-p). Since x≠ 1⃗, the intersection Q∩{ x·q = c } either is empty or has measure 0 in Q, which implies _q[ x·(p+q) ≠ 0 ] = 1.

To prove Theorem <ref>, it suffices to focus on two arms, and perturb one of them. Since realized rewards have finite support, there are only finitely many possible histories. Therefore, it suffices to focus on a fixed history h. Consider an instance of MAB with 0-1 rewards. Assume that the prior is independent across arms, and that (μ_1) is finite and does not include 0 or 1. Fix history h. Suppose random perturbation (<ref>) is applied to arm 1, with noise distribution _1 that satisfies (<ref>). Then [μ_1| h] ≠[μ_2| h] almost surely.

Note that [μ_a| h] does not depend on the algorithm which produced this history. Therefore, for the sake of the analysis, we can assume w.l.o.g. that this history has been generated by a particular algorithm, as long as this algorithm can produce this history with non-zero probability.
Let us consider the algorithm that deterministically chooses the same actions as h. Let S = (μ_1). Then: [μ_1| h] = ∑_ν∈ S ν·[μ_1 =ν| h] = ∑_ν∈ S ν·[h |μ_1 =ν] ·[μ_1=ν] / [h], where [h] = ∑_ν∈ S [h |μ_1 =ν] ·[μ_1=ν]. Therefore, [μ_1| h] = [μ_2| h] if and only if ∑_ν∈ S (ν-C) ·[h |μ_1 =ν] ·[μ_1=ν] = 0, where C=[μ_2| h]. Since [μ_2| h] and [h |μ_1 =ν] do not depend on the probability vector p⃗_1, we conclude that [μ_1| h] = [μ_2| h] ⇔ x·p⃗_1 = 0, where the vector x := ( (ν-C) ·[h |μ_1 =ν]: ν∈ S ) ∈ [-1,1]^d_1 does not depend on p⃗_1. Thus, it suffices to prove that x·p⃗_1 ≠ 0 almost surely under the perturbation. In a formula: _q∼_1[ x·( p⃗_1+q) ≠ 0 ] = 1. Note that [h |μ_1 =ν]>0 for all ν∈ S, because 0,1∉S. It follows that at most one coordinate of x can be zero. So (<ref>) follows from property (<ref>).
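To complement the appendix's discussion, here is a sketch of the naive explore-then-exploit algorithm (m) described above. It is our own illustration, not the paper's code: the pull(a) reward callback and the list-of-recommendations output format are assumptions.

```python
import random

def explore_then_exploit(m: int, actions: list, pull, T: int) -> list:
    """Our sketch of the naive algorithm (m) from the appendix above:
    each action is explored by m agents (the m*K explore recommendations
    are issued in random order), and each of the remaining T - |A|*m
    agents is recommended the action with the highest observed average.
    `pull(a)` is an assumed callback returning a stochastic reward."""
    sums = {a: 0.0 for a in actions}
    counts = {a: 0 for a in actions}
    # Explore phase: a random permutation of the m*K recommendations.
    schedule = [a for a in actions for _ in range(m)]
    random.shuffle(schedule)
    for a in schedule:
        sums[a] += pull(a)
        counts[a] += 1
    # Exploit phase: recommend the empirically best action to everyone.
    best = max(actions, key=lambda a: sums[a] / counts[a])
    return schedule + [best] * (T - m * len(actions))

# Example with Bernoulli rewards (made-up means):
means = {0: 0.6, 1: 0.5}
recs = explore_then_exploit(m=100, actions=[0, 1],
                            pull=lambda a: float(random.random() < means[a]),
                            T=10_000)
```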
http://arxiv.org/abs/1702.08533v2
{ "authors": [ "Yishay Mansour", "Aleksandrs Slivkins", "Zhiwei Steven Wu" ], "categories": [ "cs.GT", "cs.LG" ], "primary_category": "cs.GT", "published": "20170227211357", "title": "Competing Bandits: Learning under Competition" }
1 ... 2017 0 ... 0 xxxx.201700000Flickering from the symbiotic star EF Aql Zamanov, Boeva, Nikolovet al.Institute of Astronomy and National Astronomical Observatory, Bulgarian Academy of Sciences,Tsarigradsko Shose 72,1784Sofia, BulgariaIRIDA, National Astronomical Observatory Rozhen, 4700 Smolyan, Bulgaria Astrophysics Research Institute, Liverpool John Moores University, IC2 Liverpool Science Park, Liverpool, L3 5RF, UK Departamento de Física (EPSJ), Universidad de Jaén, Campus Las Lagunillas, A3-420, 23071, Jaén, SpainCentre for Astronomy, Faculty of Physics, Astronomy and Informatics, Nicolaus Copernicus University, Grudziadzka 5, PL-87-100 Torun, Poland Department of Astronomy, Faculty of Physics, St Kliment Ohridski University of Sofia, 5 James Bourchier Boulevard, 1164 Sofia, Bulgaria2017 February 16 ... ... We report optical CCD photometry of the recently identified symbiotic starEF Aql. Our observations in Johnson V and B bands clearlyshow the presence of stochastic light variations with an amplitude of about 0.2 mag on a time scale of minutes. The observations point toward a white dwarf as the hot component in the system.It is the 11-th object among more than 200 symbiotic stars known with detected optical flickering. Estimates of the mass accretion rate onto the WD and the mass loss rate in the wind of the Mira secondary star lead to the conclusion that less than 1% of the wind is captured by the WD. Eight further candidates for the detection of flickering in similar systems are suggested. Discovery of optical flickering from the symbiotic star EF Aquilae R. K. Zamanov1Corresponding authors: rkz@astro.bas.bg,kstoyanov@astro.bas.bg S. Boeva 1 Y. M. Nikolov1 B. Petrov1 R. Bachev1 G. Y. Latev1 V. A. Popov2 K. A. Stoyanov1 M. F. Bode3 J. Martí4 T. Tomov5 A. Antonova6December 30, 2023 ====================================================================================================================================================================================================================================== § INTRODUCTIONEF Aquilae was identified as a variable star on photographic plates fromKönigstuhl Observatory almost a century ago (Reinmuth 1925).It is associated with a brightinfrared source – IRAS 19491-0556 / 2MASS J19515172-0548166. Le Bertre et al. (2003) provide K and L' photometry for EF Aql, and classify it as an oxygen-rich asymptotic giant branch star located at a distance of d=3.5 kpc andlosing mass at a rate 3.8 × 10^-7 M_⊙yr^-1. Richwine et al. (2005) have examined the opticalsurvey data for EF Aql and classify it as a Mira type variablewitha period of 329.4 d and amplitude of variability > 2.4 mag. The optical spectrum showsprominent Balmer emission lines visible through at least H11 and [O III] λ 5007 emission.The emission lines and the bright UV flux detected in GALEX satellite images provide undoubted evidence for the presence of a hot companion.Thus EF Aql appears to be a symbiotic star, a member of the symbiotic Mira subgroup (Margon et al. 2016). The symbiotic stars arelong-period interacting binaries, consisting of an evolved gianttransferring mass to a hot compact object.Their orbital periods are in the range from 100 days to more than 100 years.A cool giant or supergiant of spectral class G-K-M is the mass donor. 
If this giant has Mira-type variability, the system usually is a strong infrared source. The hot secondary accretes material supplied from the red giant. In most symbiotic stars, the secondary is a degenerate star, typically a white dwarf or subdwarf. In a few cases the secondary has been shown to be a neutron star (e.g. Bahramian et al. 2014; Kuranov & Postnov 2015; and references therein).

Systematic searches for flickering variability in symbiotic stars and related objects (Dobrzycka et al. 1996; Sokoloski, Bildsten & Ho 2001; Gromadzki et al. 2006; Stoyanov 2012; Angeloni et al. 2012, 2013) have shown that optical flickering activity is rarely detectable. Among more than 200 symbiotic stars known, only 10 present flickering – RS Oph, T CrB, MWC 560, Z And, V2116 Oph, CH Cyg, RT Cru, o Cet, V407 Cyg, and V648 Car. Here we report optical CCD photometry of EF Aql and the detection of flickering in Johnson V and B bands.

§ OBSERVATIONS

During the period August - November 2016, we secured CCD photometric monitoring with 5 telescopes equipped with CCD cameras:
* the 2.0 m RCC telescope of the National Astronomical Observatory Rozhen, Bulgaria (CCD VersArray 1300 B, 1340×1300 px);
* the 50/70 cm Schmidt telescope of NAO Rozhen (SBIG STL11000M CCD, 4008 × 2672 px);
* the 60 cm telescope of the Belogradchick Astronomical Observatory (SBIG ST8 CCD, 1530 × 1020 px);
* the 30 cm astrograph of IRIDA observatory (CCD camera ATIK 4000M, 2048×2048 px);
* the automated 41 cm telescope of the University of Jaén, Spain - ST10-XME CCD camera with 2184 × 1472 px (Martí, Luque-Escamilla, & García-Hernández 2017).

All the CCD images have been bias subtracted, flat fielded, and standard aperture photometry has been performed. The data reduction and aperture photometry are done with IRAF and have been checked with alternative software packages. A few objects from the APASS catalogue (Munari et al. 2014; Henden et al. 2016) have been used as comparison stars. As check stars we used USNO U0825.17150321, USNO U0825.17161750 and USNO U0825.17157668. In Fig. 1 and Table 2, they are marked as check-1, check-2 and check-3, respectively.

The results of our observations are summarized in Table <ref> and plotted on Fig. <ref>. For each run we measure the minimum, maximum, and average brightness in the corresponding band, plus the standard deviation of the run. For each run we calculate the root mean square (rms) deviation σ_rms = √( (1/N_pts) ∑_i (m_i - m)^2 ), where m is the average magnitude in the run. σ_rms calculated in this way includes the contributions from the variability of the star (if it exists) and from the measurement errors. For non-variable stars it is a measure of the accuracy of the photometry.

§ DETECTION OF FLICKERING IN EF AQL

During our observations the V band brightness of EF Aql was in the range 15.1 ≤ V ≤ 16.03. The General Catalogue of Variable Stars (Samus et al. 2017) suggests brightness of EF Aql in the range 12.4 ≤ V ≤ 15.5. The ASAS data (Pojmanski & Maciejewski 2004) show variability in the range 12.5 ≤ V ≤ 16.0. The lowest brightness in our set is V ≈ 16.03, which is about the minimum brightness measured in ASAS, indicating that our observations are near the minimum of the Mira cycle. Rapid aperiodic brightness variations, like the flickering from cataclysmic variables (Warner 1995, Bruch 2000), are evident in all our observations in B and V bands. The flickering is not detectable in I band, suggesting that the I flux is dominated by the red component. Fig. <ref> shows the light curves from three observations.
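Before turning to the light curves in detail, note that the rms statistic defined above is straightforward to compute; the NumPy sketch below is our own illustration (the magnitudes in the example are made up), not the authors' reduction pipeline.

```python
import numpy as np

def sigma_rms(mags) -> float:
    """Root mean square deviation of one photometric run, following
    sigma_rms = sqrt( (1/N_pts) * sum_i (m_i - <m>)^2 )."""
    m = np.asarray(mags, dtype=float)
    return float(np.sqrt(np.mean((m - m.mean()) ** 2)))

# Made-up V-band magnitudes for one run; a scatter several times larger
# than that of comparably bright check stars would indicate flickering.
run = [15.52, 15.49, 15.61, 15.44, 15.58, 15.47]
print(f"sigma_rms = {sigma_rms(run):.3f} mag")
```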
For comparison, two check stars (one brighter than EF Aql, and one fainter) are also plotted on the same scale.It is apparent that the variability of EF Aql is considerably larger than that of the check stars, which indicates that it is not a result of observational errors. The σ_rms expected from the accuracy of the photometry can be deduced from the observations of the check stars with brightness similar to that of EF Aql.In Table 2 we give the mean magnitude and σ_rms of EF Aql and the three check stars.The root mean square deviation σ_rmsof EF Aql exceeded the rms deviation expected from the check starby more than a factor of 2.In Fig. <ref> we plot σ_rms for EF Aql and of about 30 other stars from the field around EF Aql.During run 20160828 (plotted with plus signs),EF Aql exhibits flickering with peak to peak amplitude0.16 mag. The root mean square deviation is σ_rms(EF Aql) = 0.033 mag.For stars with similar brightnesswe have σ_rms≈ 0.009 mag. In other words, σ_rms of EF Aql is more than three timeslarger than expected from observational errors.During run 20161102, EF Aql exhibits flickering with higher amplitude of about 0.30 mag. The root mean square variability is σ_rms(EF Aql) = 0.061 mag. For stars with similar brightnesswe haveσ_rms≈ 0.008 mag. In other words, the rms of EF Aql is more than seven timeslarger than that expected from observational errorsUsingour simultaneousB and V band observations obtained on 28 August 2016 (see Table 1) andinterstellar extinctionA_V=0.45 (Margon et al. 2016), we calculate the dereddened colour of the flickering source as (B-V)_0 = 0.35 ± 0.05.For comparison, the average (B-V)_0colour of the flickering sourceis0.25 ± 0.21 in the recurrent novae T CrB and RS Oph,and0.10 ± 0.20 in the cataclysmic variables (Bruch 1992; Zamanov et al. 2015).It appears that(B-V)_0 of the flickering source in EF Aql is more similar to theflickering source in T CrB and RS Oph, which also contain a red giant mass donor. § DISCUSSION From our R and I band observations of EF Aql, we measure (R-I)=3.08 ± 0.06 and (R-I) = 2.93 ± 0.05, for28 August 2016 and4 September 2016, respectively.ApplyingA_V=0.45 and the calibration of Fitzpatrick (1999), we correct (R-I) for interstellar reddening, and find2.78 < (R-I)_0< 3.04, which (using the results of Celis 1984) corresponds toa spectral subtype of the asymptotic giant branch star of M7 - M8.An asymptotic giant of spectral type M7 - M8 is expected to have (V-R)_0 ≈4.0 - 4.5 (Celis 1982) and we estimated the brightness of the red giant in the V band around the time of our observations(at minimum of the Mira cycle) as V ∼ 17.8 - 18.5 mag.The non-variable light from the red giant,contributes about 20% of the fluxat V band.Fig. 1 demonstrates that the V band flux can change by more than 5%(0.05 mag) in less than 5 minutes and more than 20% (0.20 mag) in less than one hour.Taking into account the contribution of the red giant, these rapid fluctuations correspond to variations up to± 25%(from the average level) in the V-bandflux from the hot component of EF Aql.The brightness fluctuations of EF Aql are similar to those observed in the prototype Mira (omicron Ceti) by Sokoloski & Bildsten (2010). The flickering (stochastic photometric variations on timescales of a few minutes with amplitude of a few×0.1 magnitudes) is a variability typical for the accreting white dwarfs in cataclysmic variables and recurrent novae. About the nature of the hot companion in EF Aql, Margon et al. 
(2016) supposedthat the hot source is likely more luminous than a white dwarf, and thus may well be a subdwarf. The persistent presence of minute-timescale stochastic optical variations (see Table 1) with the observed amplitude is a strong indicator that the hot component in EF Aql is a white dwarf. A comparison of the flickering of EF Aql with the flickering of the symbiotic recurrent nova RS Oph(see Fig. 1 in Zamanov et al. 2010) shows that in RS Oph the flickering is visible inBVRI bands, whilein EF Aql it is not detectable in I, but well visible inB and V bands.InRS Oph the mass accretion rate is of about∼ 2 × 10^-8 M_⊙ yr^-1 (Nelson et al. 2011). In EF Aql we see flickering in V, which means that the hot component is brighter than the M giant in V band. Overall, the relative colour dependence of flickering in EF Aql implies that the mass accretion rate is lower than that in RS Oph,but not too much lower, probably of a few× 10^-9 M_⊙ yr^-1. Le Bertre et al. (2003) estimated that the mass donor in EF Aql is losingmassat a rate 3.8 × 10^-7 M_⊙yr^-1. We used 2MASS K=5.36 mag and IRAS 12-micron flux (4.78 Jy) to estimate thecolor K-[12] defined by Gromadzki et al. (2009).Then, applying their Eq. 4, we determined a mass loss rate of 2× 10^-6 M_⊙ yr^-1 for EF Aql. This means that the white dwarf is capturing less than 1% of the stellar wind of the red giant. In addition to the optical observations presented here, we searched the new gPhoton database (Million et al. 2016)for GALEX ultraviolet observations of EF Aql. gPhoton has a calibration and extraction pipeline that allows easy access to calibrated GALEX data. Using its module gFind, we found five epochs of observations in the near UV band (NUV, 1771 Å– 2831 Å), only one of which was previously reported by Margon et al. (2016). We then used the gMap and gAperture modules to determine aperture size and background annulus size and positions, to make sure all the counts are inside the aperture and no contaminating sources are within the background subtraction annulus. We thus obtain the following fluxes: 20040630 17:06 6.25 ± 0.09 × 10^-15 erg s^-1 cm^-2 Å^-1,20050627 14:39 6.64 ± 0.08 × 10^-15 erg s^-1 cm^-2 Å^-1,20100813 16:53 7.98 ± 0.04 × 10^-15 erg s^-1 cm^-2 Å^-1,20100815 09:58 8.26 ± 0.04 × 10^-15 erg s^-1 cm^-2 Å^-1,20100815 13:15 8.37 ± 0.04 × 10^-15 erg s^-1 cm^-2 Å^-1,where the timeis given in the formatyyyymmdd hh:mm.These results show that the NUV flux of EF Aql in August 2010 was ≈ 30% larger than in June 2004, probably indicating variable mass accretion rate onto the white dwarf.There are more than 200 symbiotic starsknown (Belczyński et al. 2000).Among themflickering is detected in only 11 objects (including EF Aql), i.e. in 5% of the cases (see Sect. 1 for references). On the basis of their infrared properties,the symbiotic stars are divided in three main groups S-, D-, and D'-type (Allen 1982;Mikołajewska 2003).There are about 30 symbiotic stars classified as symbiotic Miras (Whitelock 2003).In three of them flickering is present, i.e.10% of the objects.It seems that flickering more often can be detected insymbiotic Mirasthan among S- and D'-type symbiotics.Bearing in mind this, we searched in the Catalogue of Symbiotic Stars (Belczyński et al. 
2000), for D-type symbiotics with low ionization potential which are potential candidates for flickering detection.In our opinionV627 Cas, KM Vel, BI Cru, V704 Cen, Hen 2-139, V347 Nor,WRAY 16-312 and LMC 1 deserve to be searched for flickeringnear the minima of the Mira brightness cycle. Symbiotic binaries (especially those with detected rapid variability) have historically revealed remarkable events of acceleration of ejection of collimated outflows (Taylor et al. 1986; Crocker et al. 2002; Brocksopp et al. 2004), although not as powerful as in X-ray binaries and microquasars. Nevertheless, some of their collimated ejecta display a combination of thermal and non-thermal emission mechanisms (Eyres et al. 2009).Inspection of the EF Aql position in the NRAO VLA Sky Survey (NVSS, Condon et al. 1998) reveals no radio source detection with a 3σ upper limit of 1.6 mJy at 20 cm. As a comparison, this upper limit is very similar to the faint source detection of the nearby symbiotic binary system CH Cygni that appears at a flux density level of 2.9 mJy in the same survey. However, EF Aql is significantly more distant than CH Cygni (at least 15 times) and therefore the lack of detection at the NVSS sensitivity is not surprising. Deeper radio mapping with modern interferometers would be thus desirable. With the similarities between RS Oph and EF Aql noted above, it may be worthwhileto look for possible recurrent nova outbursts of the latter system in archival data,as has been done for other objects (e.g. Schaefer 2010).§ CONCLUSIONSOn seven nights during the period August - November 2016, we performed 11.2 hoursphotometricobservations of the symbiotic Mira EF Aql. We find that EF Aql exhibits short-term optical variability (flickering) on a time scale of minutes.The detected amplitude is about 0.15 - 0.30 magin B and V bands.The root mean square deviation of EF Aql is from three to seven timeslarger than that expected from observational errors.The presence of flickering strongly suggests that the hot component is a white dwarf. It seems that the flickering is more often seen in symbiotic Miras than in the general populationof symbiotic stars.This work was partly supported by grantsDN 08/1 13.12.2016 (Bulgarian National Science Fund), AYA2016-76012-C3-3-P (Spanish Ministerio de Economía y Competitividad, MINECO) as well as FEDER funds. [Allen(1982)]1982ASSL...95...27A Allen, D. A. 1982, IAU Colloq. 70: The Nature of Symbiotic Stars, 95, 27 [Angeloni et al.(2012)]2012ApJ...756L..21A Angeloni, R., Di Mille, F., Ferreira Lopes, C. E., & Masetti, N. 2012, , 756, L21 [Angeloni et al.(2013)]2013IAUS..290..179A Angeloni, R., Di Mille, F., Lopes, C. E. F., & Masetti, N. 2013,Feeding Compact Objects: Accretion on All Scales,IAU Symp. 290, 179 [Bahramian et al.(2014)]2014MNRAS.441..640B Bahramian, A., Gladstone, J. C., Heinke, C. O., et al. 2014, , 441, 640 [Belczyński et al.(2000)]2000A AS..146..407B Belczyński, K., Mikołajewska, J., Munari, U.,Ivison, R. J., & Friedjung, M. 2000, , 146, 407 [Brocksopp et al.(2004)]2004MNRAS.347..430B Brocksopp, C., Sokoloski, J. L., Kaiser, C., et al. 2004, , 347, 430 [Bruch(1992)]1992A A...266..237B Bruch, A. 1992, , 266, 237 [Bruch(2000)]2000A A...359..998B Bruch, A. 2000, , 359, 998 [Celis S.(1982)]1982AJ.....87.1791C Celis S., L. 1982, , 87, 1791 [Celis S.(1984)]1984AJ.....89..527C Celis S., L. 1984, , 89, 527 [Condon et al.(1998)]1998AJ....115.1693C Condon, J. J., Cotton, W. D., Greisen, E. W., et al. 
1998, , 115, 1693 [Crocker et al.(2002)]2002MNRAS.335.1100C Crocker, M. M., Davis, R. J., Spencer, R. E., et al. 2002, , 335, 1100 [Dobrzycka et al.(1996)]1996AJ....111..414D Dobrzycka, D., Kenyon, S. J., & Milone, A. A. E. 1996, , 111, 414 [Eyres et al.(2009)]2009MNRAS.395.1533E Eyres, S. P. S., O'Brien, T. J., Beswick, R., et al. 2009, , 395, 1533 [Fitzpatrick(1999)]1999PASP..111...63F Fitzpatrick, E. L. 1999, , 111, 63 [Gromadzki et al.(2009)]2009AcA....59..169G Gromadzki, M., Mikołajewska, J., Whitelock, P., & Marang, F. 2009, , 59, 169[Gromadzki et al.(2006)]2006AcA....56...97G Gromadzki, M., Mikolajewski, M., Tomov, T., et al. 2006, , 56, 97 [Henden et al.(2016)]2016yCat.2336....0H Henden, A. A., Templeton, M., Terrell, D., et al. 2016, VizieR Online Data Catalog, 2336,[Kuranov & Postnov(2015)]2015AstL...41..114K Kuranov, A. G., & Postnov, K. A. 2015, Astronomy Letters, 41, 114 [Le Bertre et al.(2003)]2003A A...403..943L Le Bertre, T., Tanaka, M., Yamamura, I., & Murakami, H. 2003, , 403, 943 [Margon et al.(2016)]2016PASP..128b4201M Margon, B., Prochaska, J. X., Tejos, N., & Monroe, T. 2016, , 128, 024201 [Marti et al.(2017)]2017BgAJ...26.. Martí, J.,Luque-Escamilla, P., García-Hernández M. T.,2017,Bulgarian Astronomical Journal, in press[Mikołajewska(2003)]2003ASPC..303....9M Mikołajewska, J. 2003, Symbiotic Stars Probing Stellar Evolution, 303, 9 [Million et al.(2016)]2016ApJ...833..292M Million, C., Fleming, S. W., Shiao, B., et al. 2016, , 833, 292 [Munari et al.(2014)]2014AJ....148...81M Munari, U., Henden, A., Frigo, A., et al. 2014, , 148, 81 [Nelson et al.(2011)]2011ApJ...737....7N Nelson, T., Mukai, K., Orio, M., Luna, G. J. M., & Sokoloski, J. L. 2011, , 737, 7 [Pojmanski & Maciejewski(2004)]2004AcA....54..153P Pojmanski, G., & Maciejewski, G. 2004, , 54, 153 [Reinmuth(1925)]1925AN....225..385R Reinmuth, K. 1925, Astronomische Nachrichten, 225, 385 [Richwine et al.(2005)]2005JAVSO..34...28R Richwine, P., Bedient, J.,Slater, T., & Mattei, J. A. 2005, Journal of the American Association of Variable Star Observers (JAAVSO), 34, 28 [Samus et al.(2017)]2005JAVSO..34...28R Samus N.N., Kazarovets E. V., Durlevich O.V., Kireeva N.N.,Pastukhova E.N., 2017,General Catalogue of Variable Stars: new version. GCVS 5.1,ARep, 60, 1 [Schaefer(2010)]2010ApJS..187..275S Schaefer, B. E. 2010, , 187, 275 [Sokoloski et al.(2001)]2001MNRAS.326..553S Sokoloski, J. L., Bildsten, L., & Ho, W. C. G. 2001, , 326, 553 [Sokoloski & Bildsten(2010)]2010ApJ...723.1188S Sokoloski, J. L., & Bildsten, L. 2010, , 723, 1188 [Stoyanov(2012)]2012BlgAJ..18b..63S Stoyanov, K. A. 2012, Bulgarian Astronomical Journal, 18, 63 [Taylor et al.(1986)]1986Natur.319...38T Taylor, A. R., Seaquist, E. R., & Mattei, J. A. 1986, , 319, 38 [Warner(1995)]1995 Warner, B., 1995, Cataclysmic Variable Stars, Cambridge Univ. Press, Cambridge [Whitelock(2003)]2003ASPC..303...41W Whitelock, P. A. 2003,Symbiotic Stars Probing Stellar Evolution, ASP Conf. Proceedings vol. 303, 41 [Zamanov et al.(2010)]2010MNRAS.404..381Z Zamanov, R. K., Boeva, S., Bachev, R., et al. 2010, , 404, 381 [Zamanov et al.(2015)]2015AN....336..189Z Zamanov, R., Boeva, S., Latev, G., Stoyanov, K. A., & Tsvetkova, S. V. 2015, Astronomische Nachrichten, 336, 189
http://arxiv.org/abs/1702.08243v1
{ "authors": [ "R. K. Zamanov", "S. Boeva", "Y. M. Nikolov", "B. Petrov", "R. Bachev", "G. Y. Latev", "V. A. Popov", "K. A. Stoyanov", "M. F. Bode", "J. Marti", "T. Tomov", "A. Antonova" ], "categories": [ "astro-ph.SR" ], "primary_category": "astro-ph.SR", "published": "20170227113638", "title": "Discovery of optical flickering from the symbiotic star EF Aquilae" }
http://arxiv.org/abs/1702.08016v2
{ "authors": [ "David J. Gross", "Vladimir Rosenhaus" ], "categories": [ "hep-th", "cond-mat.str-el" ], "primary_category": "hep-th", "published": "20170226102644", "title": "The Bulk Dual of SYK: Cubic Couplings" }
Large-amplitude rapid X-ray variability in the narrow-line Seyfert 1 galaxy PG 1404+226 G. C. Dewangan December 30, 2023 =======================================================================================A straight-line drawing Γ of a graph G=(V,E) is a drawing of G in the Euclidean plane, where every vertex in G is mapped to a distinct point, and every edge in G is mapped to a straight line segment between their endpoints. A path P in Γ is calledincreasing-chord if for every four points (not necessarily vertices) a,b,c,d on Pin this order, the Euclidean distance between b,c is at most the Euclidean distance between a,d. A spanning tree T rooted at some vertex r in Γ iscalled increasing-chord if T contains an increasing-chord path from r to every vertex in T. We prove that given a vertex r in a straight-line drawing Γ, it is NP-complete to decide whether Γ contains an increasing-chord spanning tree rooted at r, which answers a question posed by Mastakas and Symvonis <cit.>. We alsoshed light on the problem of finding an increasing-chord path between a pair of vertices in Γ, but the computational complexity questionremains open. § INTRODUCTION In 1995, Icking et al. <cit.> introduced the concept of a self-approaching curve. A curve is called self-approaching if for any three points a, b and c on the curve in this order, |bc| ≤ |ac|, where |xy| denotes the Euclidean distance between x and y. A curve is called increasing-chord if it is self-approachingin both directions. A path P in a straight-line drawing Γ is calledincreasing-chordif for every fourpoints (not necessarilyvertices) a,b,c,d on Pin this order, the inequality|bc|≤ |ad| holds. Γ is called an increasing-chord drawing if there exists an increasing-chord path between every pair of vertices in Γ.The study of increasing-chord drawings was motivated by greedy routing in geometric networks, where given two vertices s and t, the goal is to send a message from s to t using some greedy strategy, i.e., at each step, the next vertex in the route is selected greedily as a function of the positions of the neighborsof the current vertex u relative to the positions of u, s, and t <cit.>. A polygonal path u_1,u_2, …, u_k is called a greedy path if for every i, where 0<i<k, the inequality |u_iu_k| > |u_i+1u_k| holds. If a straight-line drawing is greedy, i.e., there exists a greedy path between every pair of vertices, then it is straightforward to route the message between any pair of vertices byfollowing agreedy path. For example, wecan repeatedly forward the message to some node which is closer to the destination than the current vertex. A disadvantage of a greedy drawing, however, is that the dilation, i.e., the ratio of the graph distance to the Euclidean distancebetween a pair of vertices, maybe unbounded.Increasing-chord drawings were introduced to address this problem, where the dilation of increasing-chord drawings can be at most 2 π /3 ≤ 2.094 <cit.>.Alamdari et al. <cit.> examined the problem of recognizing increasing-chord drawings, and the problem of constructingsuch a drawing on a given set of points.They showed that it is NP-hard to recognize increasing-chord drawings in ℝ^3, and asked whether it is also NP-hard in ℝ^2. They also proved that for everyset of n points P in ℝ^2, one canconstruct an increasing-chord drawing Γ with O(n)vertices and edges, where P is a subset of the vertices of Γ. In this case, Γ is called a Steiner network of P, and the vertices of Γ that do not belong to P are called Steiner points. Dehkordi et al. 
<cit.> proved that if P is a convex point set, then one can construct an increasing-chord networkwithO(nlog n) edges, and without introducing any Steiner point. Mastakas and Symvonis <cit.> improved the O(nlog n) upper bound on edges to O(n) with at most one Steiner point. Nöllenburg et al. <cit.> examined the problem of computing increasing-chord drawings of given graphs. Recently, Bonichon et al. <cit.> showed that the existence of an angle-monotone path of width 0≤γ<180^∘ between a pair of vertices (in a straight-line drawing) can be decided in polynomial time, which is very interesting since angle-monotone paths of width γ≤ 90^∘ satisfy increasing chord property. Nöllenburg et al. <cit.> showed that partitioning a plane graph drawinginto a minimum number of increasing-chord components is NP-hard, which extends a result of Tan and Kermarrec <cit.>. They also proved that the problem remains NP-hard for trees, and gavepolynomial-time algorithms in some restricted settings. Recently, Mastakas and Symvonis <cit.> showed that given a point set S and a point v∈ S,one can compute a rooted minimum-cost spanning tree in polynomial time, where each point in S∖{v} is connected to v by a path that satisfiessome monotonicity property. They also proved that the existence of a monotone rooted spanning tree in a given geometric graph can be decided in polynomial time, and asked whether the decision problem remains NP-hard also for increasing-chord or self-approaching properties. We prove that given a vertex r in a straight-line drawing Γ, it is NP-complete to decide whether Γ contains an increasing-chord spanning tree rooted at r, which answers the above question. We alsoshed light on the problem of finding an increasing-chord path between a pair of vertices in Γ, but the computational complexity questionremains open.§ TECHNICAL BACKGROUNDGiven a straight line segment l, the slab of l is an infinite region lying between a pair of parallel straight lines that are perpendicular to l, and pass through the endpoints of l.Let Γ be a straight-line drawing, and let P be a path in Γ. Then the slabs of P are the slabs of the line segments of P. We denote by Ψ(P) the arrangement of the slabs of P. Figure <ref>(a) illustrates a path P, where the slabs of P are shown in shaded regions. Let A be an arrangement of a set of straight lines such that no line in A is vertical. Then the upper envelopeof A is a polygonal chain U(A) such that each point of U(A) belongs to some straight line of A, andthey are visible from the point (0,+∞). The upper envelope of a set of slabs is the upper envelope of the arrangement of lines corresponding to the slab boundaries, as shown in dashed line in Figure <ref>(a). Let t be a vertex in Γ and let Q=(a,b,…,p) be an increasing-chord path in Γ. A path Q'=(a,b,…,p,…,t) in Γ is called an increasing-chord extension of Q if Q' is also an increasing-chord path,e.g., see Figure <ref>(b).The following property can be derived from the definition of an increasing-chord path.[Icking et al. <cit.>]A polygonal path P is increasing-chord if and only if for each point v on the path, the line perpendicular to P at v does notproperly intersectP except possibly at v. A straightforward consequence of Observation <ref> is that every polygonal chain which is both x- and y-monotone, is an increasing-chord path.We will use Observation <ref> throughout the paper to verify whether a path is increasing-chord.Let v be a point in ℝ^2. 
Let v be a point in ℝ^2. By the quadrants of v we refer to the four regions determined by the vertical and horizontal lines through v.

§ INCREASING-CHORD ROOTED SPANNING TREES
In this section we prove that the problem of computing a rooted increasing-chord spanning tree of a given straight-line drawing is NP-hard. We will refer to this problem as IC-Tree:

Problem: Increasing-Chord Rooted Spanning Tree (IC-Tree)
Instance: A straight-line drawing Γ in ℝ^2, and a vertex r in Γ.
Question: Determine whether Γ contains a tree T rooted at r such that for each vertex v(≠ r) in Γ, T contains an increasing-chord path between r and v.

Specifically, we will prove the following theorem.

Given a vertex r in a straight-line drawing Γ, it is NP-complete to decide whether Γ admits an increasing-chord spanning tree rooted at r.

We reduce the NP-complete problem 3-SAT <cit.> to IC-Tree. Let I=(X,C) be an instance of 3-SAT, where X and C are the sets of variables and clauses. We construct a straight-line drawing Γ and choose a vertex r in Γ such that Γ contains an increasing-chord spanning tree rooted at r if and only if I admits a satisfying truth assignment. Here we give an outline of the hardness proof and describe the construction of Γ. A detailed reduction is given in Appendix B.

Assume that α = |X| and β = |C|. Let l_h be the line determined by the X-axis. Γ will contain O(β) points above l_h, one point t on l_h, and O(α) points below l_h, as shown in Figures <ref>(a)–(b). Each clause c∈ C with j literals will correspond to a set of j+1 points above l_h, and we will refer to the point with the highest y-coordinate among these j+1 points as the peak t_c of c. Among the points below l_h, there are 4α points that correspond to the variables and their negations, and two other points, i.e., s and r. In the reduction, the point t and the points below l_h altogether help to set the truth assignments of the variables. We will first create a straight-line drawing H such that every increasing-chord path between r and t_c, where c∈ C, passes through s and t. Consequently, any increasing-chord tree T rooted at r (not necessarily spanning), which spans the points t_c, must contain an increasing-chord path P=(r,s,…,t). We will use this path to set the truth values of the variables. The edges of H below l_h will create a set of thin slabs, and the upper envelope of these slabs will determine a convex chain W above l_h. Each line segment on W will correspond to a distinct variable, as shown in Figure <ref>(b). The points that correspond to the clauses will be positioned below these segments, and hence some of these points will be `inaccessible' depending on the choice of the path P. These literal-points will ensure that for any clause c∈ C, there exists an increasing-chord extension of P from t to t_c if and only if c is satisfied by the truth assignment determined by P. By the above discussion, I admits a satisfying truth assignment if and only if there exists an increasing-chord tree T in H that connects the peaks to r. But H may still contain some vertices that do not belong to this tree. Therefore, we construct the final drawing Γ by adding some new paths to H, which will allow us to reach these remaining vertices from r. We now describe the construction in detail.

Construction of H: We first construct an arrangement 𝒜 of 2α straight line segments. The endpoints of the ith line segment L_i, where 1≤ i≤ 2α, are (0,i) and (2α-i+1,0). We now extend each L_i downward by scaling its length by a factor of (2α+1), as shown in Figure <ref>(a); a short code sketch of this construction follows.
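The following small Python sketch (ours; the function name is illustrative) generates the segments of 𝒜 with exact rational coordinates, matching the endpoints and the downward extension described above.

from fractions import Fraction

def build_arrangement(alpha):
    segments = []
    for i in range(1, 2 * alpha + 1):
        top = (Fraction(0), Fraction(i))
        bottom = (Fraction(2 * alpha - i + 1), Fraction(0))
        # extend L_i downward: scale the vector top -> bottom by (2*alpha + 1)
        scale = 2 * alpha + 1
        ext = (top[0] + scale * (bottom[0] - top[0]),
               top[1] + scale * (bottom[1] - top[1]))
        segments.append((top, ext))
    return segments

for (x0, y0), (x1, y1) in build_arrangement(alpha=2):
    slope = (y1 - y0) / (x1 - x0)
    print(f"from ({x0},{y0}) to ({x1},{y1}), slope {slope}")
    # each slope lies in [-2*alpha, -1/(2*alpha)], so every L_i crosses l_v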
Later, the variable x_j, where 1≤ j≤α, and its negation x̄_j will be represented using the lines L_2j-1 and L_2j. Let l_v be a vertical line segment with endpoints (2α+1,2α) and (2α+1,-5α^2). Since the slope of a line in 𝒜 is in the interval [-2α, -1/(2α)], each L_i intersects l_v. The coordinates of the endpoints of L_i and l_v are of size O(α^2), so all the intersection points can be represented using polynomial space. By construction, the line segments of 𝒜 appear on U(𝒜) in the order of the variables, i.e., the first two segments (from the right) of U(𝒜) correspond to x_1 and x̄_1, the next two segments correspond to x_2 and x̄_2, etc.

Variable Gadgets: We denote the intersection point of l_h and l_v by t, and the endpoint (2α+1,-5α^2) of l_v by s. We now create the points that correspond to the variables and their negations. Recall that L_2j-1 and L_2j correspond to the variable x_j and its negation x̄_j, respectively. Denote the intersection point of L_2j-1 and l_v by p_x_j, and the intersection point of L_2j and l_v by p_x̄_j, e.g., see Figure <ref>(b). For each p_x_j (p_x̄_j), we create a new point p'_x_j (p'_x̄_j) such that the straight line segment p_x_jp'_x_j (p_x̄_jp'_x̄_j) is perpendicular to L_2j-1 (L_2j), as shown using the dotted (dashed) line in Figure <ref>(b). We may assume that all the points p'_x_j and p'_x̄_j lie on a vertical line l'_v, where l'_v lies ε distance to the left of l_v. The value of ε will be determined later. In the following we use the points p_x_j, p_x̄_j, p'_x_j and p'_x̄_j to create some polygonal paths from s to t. For each j from 1 to α, we draw the straight line segments p_x_jp'_x_j and p_x̄_jp'_x̄_j. Then for each k, where 1< k ≤α, we make p_x_k and p_x̄_k adjacent to both p'_x_k-1 and p'_x̄_k-1, e.g., see Figure <ref>(c). We then add the edges from s to p'_x_α and p'_x̄_α, and finally, from t to p_x_1 and p_x̄_1. For each x_j (x̄_j), we refer to the segment p_x_jp'_x_j (p_x̄_jp'_x̄_j) as the needle of x_j (x̄_j). Figure <ref>(c) illustrates the needles in bold. Let the resulting drawing be H_b. Recall that l'_v is ε distance to the left of l_v. We choose ε sufficiently small such that for each needle, its slab does not intersect any other needle in H_b, e.g., see Figure <ref>(d). The upper envelope of the slabs of all the straight line segments of H_b coincides with U(𝒜). Since the distance between any pair of points that we created on l_v is at least 1/α units, it suffices to choose ε = 1/α^3. Note that the points p'_x_j and p'_x̄_j can be represented in polynomial space using the endpoints of l'_v and the endpoints of the segments L_2j-1 and L_2j; the sketch below makes the needle coordinates concrete.
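This hypothetical helper (ours, not part of the reduction itself) computes the needle endpoints: p on l_v where L_i crosses it, and the perpendicular foot p' on l'_v, with ε = 1/α^3 as chosen above. Exact rationals keep the coordinates polynomial-size.

from fractions import Fraction

def needle(i, alpha, eps=None):
    if eps is None:
        eps = Fraction(1, alpha ** 3)        # eps = 1/alpha^3 as in the text
    m = Fraction(-i, 2 * alpha - i + 1)      # slope of L_i
    xv = Fraction(2 * alpha + 1)             # x-coordinate of l_v
    p = (xv, Fraction(i) + m * xv)           # intersection of L_i and l_v
    # p' lies eps to the left of l_v, on the line through p with slope -1/m,
    # so that the segment p p' is perpendicular to L_i
    p_prime = (xv - eps, p[1] + (-1 / m) * (-eps))
    return p, p_prime

p, pp = needle(i=1, alpha=2)
print(p, pp)   # all coordinates are small rationals, i.e. polynomial-size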
The proof of the following lemma is included in Appendix A.

Every increasing-chord path P that starts at s and ends at t must pass through exactly one point among p_x_j and p_x̄_j, where 1 ≤ j≤α, and vice versa.

We now place a point r on the y-axis sufficiently below H_b, e.g., at position (0,-α^5), such that the slab of the straight line segment rs does not intersect H_b (except at s), and similarly, the slabs of the line segments of H_b do not intersect rs. Furthermore, the slab of rs does not intersect any segment L_j, and vice versa. We then add the point r and the segment rs to H_b. Let P be an increasing-chord path from r to t. The upper envelope of Ψ(P) is determined by the needles in P, which selects some segments from the convex chain W, e.g., see Figure <ref>(b). For each x_j, P passes through exactly one point among p_x_j and p_x̄_j. Therefore, for each variable x_j, either the slab of x_j or the slab of x̄_j appears on U(P). Later, if P passes through point p_x_j (p_x̄_j), then we will set x_j to false (true). Since P is an increasing-chord path, by Lemma <ref> it cannot pass through both p_x_j and p_x̄_j simultaneously. Therefore, all the truth values will be set consistently.

Clause Gadgets: We now complete the construction of H by adding clause gadgets to H_b. For each clause c_i, where 1≤ i≤β, we first create the peak point t_c_i at position (0,2α+i). For each variable x_j, let λ_x_j be the interval of L_2j-1 that appears on the upper envelope of 𝒜. Similarly, let λ_x̄_j be the interval of L_2j on the upper envelope of 𝒜. For each c_i, we construct a point q_x_j,c_i (q_x̄_j,c_i) inside the cell of 𝒜 immediately below λ_x_j (λ_x̄_j). We will refer to these points as the literal-points of c_i. Figure <ref> in Appendix B depicts these points in black squares. We assume that for each variable, the corresponding literal-points lie at the same location. One may perturb them to remove vertex overlaps. For each literal x∈ c_i, we create a path (t, q_x,c_i, t_c_i) through the corresponding literal-point. In the reduction, if at least one of the literals of c_i is true, then we can take the corresponding path to connect t_c_i to t. Let the resulting drawing be H.

Construction of Γ: Let q be a literal-point in H. We now add an increasing-chord path P' = (r,a,q) to H in such a way that P' cannot be extended to any larger increasing-chord path in H. We place the point a at the intersection point of the horizontal line through q and the vertical line through r, e.g., see Figure <ref>(b) in Appendix B. We refer to the point a as the anchor of q. By the construction of H, all the neighbors of q that have a higher y-coordinate than q lie in the top-left quadrant of q, as illustrated by the dashed rectangle in Figure <ref>(b). Let q' be the first neighbor in the top-left quadrant of q in counterclockwise order. Since ∠ aqq' < 90^∘, P' cannot be extended to any larger increasing-chord path (r,a,q,w) in H, where the y-coordinate of w is higher than that of q. On the other hand, every literal-point w in H with y-coordinate smaller than that of q intersects the slab of ra. Therefore, P' cannot be extended to any larger increasing-chord path. For every literal-point q in H, we add such an increasing-chord path from r to q. To avoid edge overlaps, one can perturb the anchors such that the new paths remain increasing-chord and non-extensible to any larger increasing-chord paths. This completes the construction of Γ. We refer the reader to Appendix B for the formal details of the reduction.

§ INCREASING-CHORD PATHS
In this section we attempt to reduce 3-SAT to the problem of finding an increasing-chord path (IC-Path) between a pair of vertices in a given straight-line drawing. We were unable to bound the coordinates of the drawing to a polynomial number of bits, and hence the computational complexity question of the problem remains open. We hope that the ideas we present here will be useful in future endeavors to settle the question. Here we briefly describe the idea of the reduction. Given a 3-SAT instance I=(X,C), the corresponding drawing 𝒟 for IC-Path consists of straight-line drawings 𝒟_i-1, where 1≤ i≤β, e.g., see Figure <ref>(a). The drawing 𝒟_i-1 corresponds to the clause c_i. We will refer to the bottommost (topmost) point of 𝒟_i-1 as t_c_i-1 (t_c_i).
We will choose t_c_0 and t_c_β to be the points t and t', respectively, and show that I admits a satisfying truth assignment if and only if there exists an increasing-chord path P from t to t' that passes through every t_c_i. For every i, the subpath P_i-1 of P between t_c_i-1 and t_c_i will correspond to a set of truth values for all the variables in X. The most involved part is to show that the truth values determined by P_i-1 and P_i are consistent. This consistency will be ensured by the construction of 𝒟, i.e., the increasing-chord path P_i-1 from t_c_i-1 to t_c_i in 𝒟_i-1 will determine a set of slabs, which will force a unique increasing-chord path P_i in 𝒟_i between t_c_i and t_c_i+1 with the same truth values as determined by P_i-1.

Construction of 𝒟: The construction of 𝒟_i-1 depends on an arrangement of lines 𝒜^i-1. The construction of 𝒜^0 is the same as the construction of the arrangement 𝒜 described in Section <ref>. Figure <ref>(c) illustrates 𝒜^0 in dotted lines. For each variable x_j, where 1≤ j ≤α, there exists an interval λ^0_x_j of L_2j-1 on the upper envelope of 𝒜^0. Similarly, for each x̄_j, there exists an interval λ^0_x̄_j of L_2j on the upper envelope of 𝒜^0. We now describe the construction of 𝒟_0. Choose t_c_0 (t_c_1) to be the bottommost (topmost) point of λ^0_x_1 (λ^0_x̄_α). We then slightly shrink the intervals λ^0_x_1 and λ^0_x̄_α such that t_c_0 and t_c_1 no longer belong to these segments. Assume that c_1 contains δ literals, where δ≤ 3, and let σ_1,…,σ_2^δ-1 be the satisfying truth assignments for c_1. We construct a graph G_c_1 that corresponds to these satisfying truth assignments, e.g., see Figure <ref>(b) and Appendix C for the formal details. The idea is to ensure that any path between t_c_0 and t_c_1 passes through exactly one point in {q^σ_k_x_j, q^σ_k_x̄_j} for each truth assignment σ_k, which will set the truth value of x_j. In 𝒟_0, the point q^σ_k_x_j (q^σ_k_x̄_j) is chosen to be the midpoint of λ^0_x_j (λ^0_x̄_j). Later, we will refer to these points as q-points, e.g., see Figure <ref>(c). We may assume that for each x_j, the points q^σ_k_x_j lie at the same location. At the end of the construction, one may perturb them to remove vertex overlaps. By Observation <ref>, any y-monotone path P' between t_c_0 and t_c_1 must be an increasing-chord path. If P' passes through q^σ_x_j, then we set x_j to true. Otherwise, P' must pass through q^σ_x̄_j, and we set x_j to false. In the following we replace each q-point by a small segment. The slabs of these segments will determine 𝒜^1. Consider an upward ray r^1 with positive slope starting at the q-point on λ^0_x_1, e.g., see Figure <ref>(c). Since all the edges that are currently in 𝒟_0 have negative slopes, we can choose a sufficiently large positive slope for r^1 and a point a^1 on r^1 such that all the slabs of 𝒟_0 lie below a^1. We now find a point b^1 above a^1 on r^1 with a sufficiently large y-coordinate such that the slab of t_c_1b^1 does not intersect the edges in 𝒟_0. Let l^1_x_1 be the line determined by r^1. For each x_j and x̄_j (except for j=1), we now construct the lines l^1_x_j and l^1_x̄_j that pass through their corresponding q-points and intersect r^1 above b^1. The lines l^1_x_j and l^1_x̄_j determine the arrangement 𝒜^1.
Observe that one can construct these lines in decreasing order of the x-coordinates of their q-points, and ensure that for each l^1_x_j (l^1_x̄_j), there exists an interval λ^1_x_j (λ^1_x̄_j) on the upper envelope of 𝒜^1. Note that the correspondence is inverted, i.e., in 𝒜^1, λ^1_x_j corresponds to λ^0_x̄_j, and λ^1_x̄_j corresponds to λ^0_x_j. For each j, we draw a small segment s^0_x_j (s^0_x̄_j) perpendicular to l^1_x_j (l^1_x̄_j) that passes through the q-point and lies to the left of q, e.g., see Figure <ref>(d). The construction of 𝒟_i, where i>0, is more involved. The upper envelope of 𝒜^i+1 is determined by the upper envelope of the slabs of the s-segments in 𝒟_i. For each i, we construct the q-points and the corresponding graph G_c_i. Appendix C includes the formal details. In the reduction we show that any increasing-chord path P from t to t' contains the points t_c_i. We set a variable x_j true or false depending on whether P passes through s^0_x_j or s^0_x̄_j. The construction of 𝒟 imposes the constraint that if P passes through s^i-1_x_j (s^i-1_x̄_j), then it must pass through s^i_x_j (s^i_x̄_j). Hence the truth values in all the clauses are set consistently. By the construction of G_c_i, any increasing-chord path between t_c_i-1 and t_c_i determines a satisfying truth assignment for c_i. On the other hand, if I admits a satisfying truth assignment, then for each clause c_i, we choose the corresponding increasing-chord path P_i between t_c_i-1 and t_c_i. The union of all P_i yields the required increasing-chord path P from t to t'. Appendix C presents the construction in detail, and explains the challenges of encoding 𝒟 in a polynomial number of bits.

§ OPEN PROBLEMS
The most intriguing problem in this context is to settle the computational complexity of the increasing-chord path (IC-Path) problem. Another interesting question is whether the problem IC-Tree remains NP-hard under the planarity constraint; a potential attempt to adapt our hardness reduction could be replacing the edge intersections by dummy vertices.

§ APPENDIX A
Figure: Illustration for the hardness proof using a schematic representation of Γ. The points that correspond to c_1 and c_2 are connected in paths of black and lightgray, respectively. The slabs of the edges of H that determine the upper envelope are shown in lightgray straight lines. Each variable and its negation correspond to a pair of adjacent line segments on the upper envelope of the slabs.

Lemma <ref>: Every increasing-chord path P that starts at s and ends at t must pass through exactly one point among p_x_j and p_x̄_j, where 1 ≤ j≤α, and vice versa.

By Observation <ref>, P must be y-monotone. Consequently, for each j, the edge in the (2j)th position on P is a needle, which corresponds to either p_x_jp'_x_j or p_x̄_jp'_x̄_j. Therefore, it is straightforward to observe that P passes through exactly one point among p_x_j and p_x̄_j. Now consider a path P that starts at s, ends at t, and for each j, passes through exactly one point among p_x_j and p_x̄_j. By construction, P must be y-monotone. We now show that P is an increasing-chord path. Note that it suffices to show that for every straight-line segment ℓ on P, the slab of ℓ does not properly intersect P except at ℓ. By Observation <ref>, it will follow that P is an increasing-chord path. For every interior edge e on P which is not a needle, e corresponds to some segment ℓ ∈ { p_x_jp'_x_j-1, p_x_jp'_x̄_j-1, p_x̄_jp'_x_j-1, p_x̄_jp'_x̄_j-1 }, for some 1<j≤α.
By construction, in each of these four cases the needles incident to ℓ lie either on the boundary or entirely outside of the slab of ℓ, and hence the slab does not properly intersect P except at ℓ. Figures <ref>(a)–(b) illustrate two of these cases. Let (s,a) and (b,t) be the edges on P incident to s and t, respectively. By construction, these edges behave in the same way, i.e., all the needles on P are above the slab of (s,a) and below the slab of (b,t). Consequently, the slab of (s,a) (resp., (b,t)) does not properly intersect P except at (s,a) (resp., (b,t)). For every interior edge e on P which is a needle, e corresponds to some segment ℓ ∈ { p_x_jp'_x_j, p_x̄_jp'_x̄_j }. By construction, the needles following (resp., preceding) ℓ on P are above (resp., below) the slab of ℓ. Consequently, the slab does not properly intersect P except at ℓ. Figures <ref>(c)–(d) illustrate these scenarios.

Figure: Illustration for the slab of ℓ. (a)–(b) The segment ℓ is not a needle. (c)–(d) The segment ℓ is a needle.

§ APPENDIX B
Figure: (a) Construction of clause gadgets, where c_1 = (x_1 ∨ x_3). (b) A schematic representation of Γ. (c) Illustration for the reduction.

Since determining whether a straight-line drawing of a tree is an increasing-chord drawing is polynomial-time solvable <cit.>, the problem IC-Tree is in NP; together with the hardness reduction, it is NP-complete. We now prove that Γ admits an increasing-chord rooted spanning tree if and only if I admits a satisfying truth assignment.

Equivalence between the instances: First assume that I admits a satisfying truth assignment. We now construct an increasing-chord spanning tree T rooted at r. We first choose a path P from r to t such that it passes through either p_x_j or p_x̄_j, i.e., if x_j is true (false), then we route the path through p_x̄_j (p_x_j). Figure <ref>(c) illustrates such a path P in a thick black line, where x_α=true and x_α-1=false. Observe that only 2α points remain below l_h, two points per literal, that do not belong to P. We connect these points in a y-monotone polygonal path Q starting at s, as illustrated in a thin black line in Figure <ref>(c). Note that Q corresponds to a truth value assignment which is opposite to the truth values determined by P. Therefore, by Lemma <ref>, Q is also an increasing-chord path. Consequently, the point t and the points that lie below l_h are now connected to r through increasing-chord paths. The tree T now consists of the paths P and Q, and thus does not span the vertices that lie above l_h. We now add more paths to T to span the points above l_h. Since every clause c is satisfied, we can choose a path P' from t to t_c that passes through a literal-point whose corresponding literal x∈ c is true. Since the literal-points corresponding to true literals lie above the slabs of P, the path P' determines an increasing-chord extension of P. Therefore, all the peaks and some literal-points above l_h are now connected to r via increasing-chord paths. For each remaining literal-point q, we add q to T via the increasing-chord path through its anchor. There are still some anchors that are not connected to r, i.e., the anchors whose corresponding literal-points are already connected to r via an increasing-chord extension of P. We connect each such anchor a to r via the straight line segment ar. We now assume that Γ contains an increasing-chord rooted spanning tree T, and show how to find a satisfying truth assignment for I.
Since T is rooted at r, and the peaks are not reachable via anchors, T must contain an increasing-chord path P = (r,s,…,t) that, for each variable x_j, passes through exactly one point among p_x_j and p_x̄_j. If P passes through p_x_j (p_x̄_j), then we set x_j to false (true). Observe that passing through a variable x_j or its negation selects a corresponding needle segment p'_x_jp_x_j or p'_x̄_jp_x̄_j. Recall that the interval λ_x_j (λ_x̄_j), which corresponds to p'_x_jp_x_j (p'_x̄_jp_x̄_j), lies above the literal-point q_x_j,c_i (q_x̄_j,c_i), e.g., see Figure <ref>(a). Therefore, if the above truth assignment does not satisfy some clause c, then there cannot be any increasing-chord extension of P that connects t to t_c. Therefore, T would not be a spanning tree.

§ APPENDIX C
Here we give the formal details of the construction of 𝒟.

Construction of 𝒟_0: Choose t_c_0 (t_c_1) to be the bottommost (topmost) point of λ^0_x_1 (λ^0_x̄_α). We then slightly shrink the intervals λ^0_x_1 and λ^0_x̄_α such that t_c_0 and t_c_1 no longer belong to these segments. If c_1 contains κ literals, then there are 2^κ-1 distinct truth assignments for its variables that satisfy c_1. For each satisfying truth assignment σ_k, where 1≤ k ≤ 2^κ-1, we construct a set of vertices and edges in 𝒟_0, as follows. For each x_j (x̄_j), we construct a point q^σ_k_x_j (q^σ_k_x̄_j) at the midpoint of λ^0_x_j (λ^0_x̄_j). Later, we will refer to these points as q-points, e.g., see Figure <ref>(c). For each j from 1 to (α-1), we make q^σ_k_x_j and q^σ_k_x̄_j adjacent to q^σ_k_x_j+1 and q^σ_k_x̄_j+1, e.g., see Figure <ref>(b). We then make t_c_0 (t_c_1) adjacent to the points corresponding to x_1 (x_α) and its negation. Finally, if x_j is true (resp., false) in σ_k, then we remove the edges incident to q^σ_k_x̄_j (resp., q^σ_k_x_j), so that the track for σ_k passes through the q-points of the literals that are true under σ_k. We may assume that for each x_j, the points q^σ_k_x_j lie at the same location. At the end of the construction, one may perturb them to remove vertex overlaps. By Observation <ref>, any y-monotone path P' between t_c_0 and t_c_1 must be an increasing-chord path. If P' passes through q^σ_x_j, then we set x_j to true. Otherwise, P' must pass through q^σ_x̄_j, and we set x_j to false. In the following we replace each q-point by a small segment. The slabs of these segments will determine 𝒜^1. Consider an upward ray r^1 with positive slope starting at the q-point on λ^0_x_1, e.g., see Figure <ref>(c). Since all the edges that are currently in 𝒟_0 have negative slopes, we can choose a sufficiently large positive slope for r^1 and a point a^1 on r^1 such that all the slabs of 𝒟_0 lie below a^1. We now find a point b^1 above a^1 on r^1 with a sufficiently large y-coordinate such that the slab of t_c_1b^1 does not intersect the edges in 𝒟_0. Let l^1_x_1 be the line determined by r^1. For each x_j and x̄_j (except for j=1), we now construct the lines l^1_x_j and l^1_x̄_j that pass through their corresponding q-points and intersect r^1 above b^1. The lines l^1_x_j and l^1_x̄_j determine the arrangement 𝒜^1. Observe that one can construct these lines in decreasing order of the x-coordinates of their q-points, and ensure that for each l^1_x_j (l^1_x̄_j), there exists an interval λ^1_x_j (λ^1_x̄_j) on the upper envelope of 𝒜^1. Note that the correspondence is inverted, i.e., in 𝒜^1, λ^1_x_j corresponds to λ^0_x̄_j, and λ^1_x̄_j corresponds to λ^0_x_j. For each j, we draw a small segment s^0_x_j (s^0_x̄_j) perpendicular to l^1_x_j (l^1_x̄_j) that passes through the q-point and lies to the left of q, e.g., see Figure <ref>(d). We will refer to these segments as the s-segments.
We choose the length of the s-segments small enough such that the slabs of these segments still behave as the lines of 𝒜^1. For each s-segment qq', if there exists an edge (w,q), where w has a larger y-coordinate than q, then we delete the segment wq and add the line segment wq'. Since the slopes of the s-segments are negative, it is straightforward to verify that any y-monotone path between t_c_0 and t_c_1 will be an increasing-chord path. This completes the construction of 𝒟_0 and 𝒜^1.

Construction of 𝒟_i, where i>0: The construction of the subsequent drawing 𝒟_i depends on 𝒜^i, where 1≤ i < β, and the arrangement 𝒜^i+1 is determined by 𝒟_i. Although the construction of 𝒟_i from 𝒜^i is similar to the construction of 𝒟_0 from 𝒜^0, we need 𝒟_i to satisfy some further conditions, as follows.

(A) In 𝒜^i+1, the segment λ^i+1_x_j (resp., λ^i+1_x̄_j) plays the role of λ^i_x̄_j (resp., λ^i_x_j). Therefore, the q-vertices and edges of 𝒟_i must be constructed accordingly. As a consequence, if an increasing-chord path P' between t_c_i-1 and t_c_i passes through some s^i-1_x_j (s^i-1_x̄_j) in 𝒟_i-1, then any increasing-chord extension of P' to t_c_i+1 must pass through s^i_x_j (s^i_x̄_j) in 𝒟_i.

(B) While constructing 𝒟_i, we must ensure that the slabs of the segments in 𝒟_i do not intersect the segments in 𝒟_0,…,𝒟_i-1.

We now describe how to construct such a drawing 𝒟_i, e.g., see Figure <ref>. Without loss of generality assume that i is odd. The construction when i is even is symmetric. Let Δ_i-1 be the largest x-coordinate among all the vertices in 𝒟_0,…,𝒟_i-1. Recall that the drawing of 𝒟_i depends on 𝒜^i, and we construct 𝒜^i starting with an upward ray r^i and choosing a point b^i on r^i. We choose a positive slope for r^i which is larger than all the positive slopes determined by the slabs of 𝒟_0,…,𝒟_i-1. We then choose b^i with a sufficiently large y-coordinate such that the x-coordinate of b^i is larger than Δ_i-1. It is now straightforward to choose the lines of 𝒜^i such that their intersection points are close to b^i and have x-coordinates larger than Δ_i-1. Since the segments of 𝒟_i will have positive slopes, their slabs cannot intersect the segments of 𝒟_0,…,𝒟_i-1.

On the size of vertex coordinates: Note that 𝒟 has only a polynomial number of vertices, and our incremental construction for 𝒟 is straightforward to carry out in a polynomial number of steps. Therefore, the crucial challenge is to prove whether the vertices in 𝒟 can be expressed in a polynomial number of bits or not. As explained in the description of the construction of 𝒟_0, observe that the width and height of 𝒟_0 are O(α), where α = |X|. To construct 𝒟_1, the crucial step is to choose the slope for r^1 and the point b^1 on r^1. Since the largest slope of the slabs of 𝒜^1 is O(α), it suffices to choose a slope of α^3 for r^1. It is then straightforward to choose a point on r^1 as b^1, where the x- and y-coordinates of b^1 are of size O(α) and O(α^3), respectively, e.g., see Figure <ref>(a). Similarly, to construct 𝒟_i from 𝒟_i-1, we can choose a slope of α^2i+1 for r^i, as illustrated in Figure <ref>(b). Consequently, after β steps, where β = |C|, the width of the drawing becomes O(α·β) and the height becomes α^O(β). Note that we can describe r^i and b^i in O(βlogα) bits. However, encoding the rest of the drawing seems difficult. For example, one can attempt to construct the remaining vertices and edges of 𝒟 using r^i and b^i, as follows. Consider the upper envelope U(𝒜^0) of 𝒜^0.
To construct the straight lines for 𝒜^1, one can construct a set of variables and express the necessary constraints as a non-linear program. Specifically, for each q-point q^σ_x on U(𝒜^0), we create a variable point v on r^1 above b^1, and a variable v' on the line vq^σ_x on U(𝒜^1), where the variable v' corresponds to the q-point on U(𝒜^1). An example is illustrated in Figure <ref>. Similarly, for each i>1, we can create variables for the q-points of 𝒜^i from 𝒜^i-1, r^i and b^i. Since the number of vertices and edges of 𝒟 is polynomial, the number of constraints we need to satisfy among all these q-points is polynomial. However, the solution size of such a nonlinear system may not be bounded by a polynomial number of bits.

Equivalence between the instances: Any increasing-chord path P from t to t' contains the points t_c_i. We set a variable x_j true or false depending on whether P passes through s^0_x_j or s^0_x̄_j. By Condition (A), if P passes through s^i-1_x_j (s^i-1_x̄_j), then it must pass through s^i_x_j (s^i_x̄_j). Hence the truth values in all the clauses are set consistently. By the construction of 𝒟, any increasing-chord path between t_c_i-1 and t_c_i determines a satisfying truth assignment for c_i. Hence the truth assignment satisfies all the clauses in C. On the other hand, if I admits a satisfying truth assignment, then for each clause c_i, we choose the corresponding increasing-chord path P_i between t_c_i-1 and t_c_i. Let P be the union of all P_i. By the construction of 𝒟, the slabs of P_i do not intersect P except at P_i. Hence, P is the required increasing-chord path from t to t'.
http://arxiv.org/abs/1702.08380v2
{ "authors": [ "Yeganeh Bahoo", "Stephane Durocher", "Sahar Mehrpour", "Debajyoti Mondal" ], "categories": [ "cs.CG", "68Q25, 65D18", "I.3.5" ], "primary_category": "cs.CG", "published": "20170227170731", "title": "Exploring Increasing-Chord Paths and Trees" }
Human language is ambiguous, yet this ambiguity is often not represented in the data sets used in machine learning. For single-label classification tasks, the standard assumption is that each example has a single correct label, and that each example for a label is equally useful for classifying that label. In reality, some training examples are more ambiguous than others, and a distribution of human-judged labels may exist for an example. When multiple labels are obtained for a training example, it is often assumed that there is an element of noise that must be accounted for; it has been shown that this disagreement can instead be considered signal. In this work we make the same assumption and investigate using soft labels — distributions over labels estimated from crowdsourced annotations — in training data to improve generalization in machine learning models. Because obtaining multiple labels for the large data sets required to train Deep Neural Networks (DNNs) is prohibitively costly, training entirely with soft labels is not practical. We therefore propose soft label memorization-generalization (SLMG), a fine-tuning approach to using soft labels for training DNNs. We assume that differences in the labels provided by human annotators represent ambiguity about the true label instead of noise. Experiments with SLMG demonstrate improved generalization performance on two NLP tasks: Natural Language Inference (NLI) and Sentiment Analysis (SA). Injecting a small percentage of soft label training data (0.03% of training set size) improves generalization over several baselines, and models trained with the crowd-informed fine-tuning data classify more instances of the entailment and contradiction labels correctly on the NLI task, at the expense of neutral examples.

§ INTRODUCTION
In Machine Learning (ML) classification tasks, a model is trained on a set of labeled data and optimized based on some loss function. The training data consist of a feature set X_train = x_1,…,x_N and associated labels Y_train = y_1,…,y_N, where Y is a vector of integers corresponding to the classes of the problem. Typically we assume that each training example is labeled correctly, and that each is equally appropriate for a single class. There is no way to quantify the uncertainty of the examples, nor a way to exploit such uncertainty during training. Particularly for NLP tasks with sentence- or phrase-based classification, such as Natural Language Inference (NLI), it is not common to model the ambiguity of language in training data labels. For example, consider the following two premise-hypothesis pairs, both taken from the Stanford Natural Language Inference (SNLI) corpus for NLI <cit.>:

* Premise: Two men and a woman are inspecting the front tire of a bicycle. Hypothesis: There are a group of people near a bike.
* Premise: A young boy in a beige jacket laughs as he reaches for a teal balloon. Hypothesis: The boy plays with the balloon.
In both cases the gold-standard label in the SNLI data set is entailment, which is to say that if we assume that the premise is true, one can infer that the hypothesis is also true. However, looking at the two sentence pairs, one could argue that they do not both describe entailment equally well. The first example is a clear case: people inspecting the front tire of a bike are almost certainly standing near it. The second example is less clear. Is the child laughing because he is playing? Or is he laughing for some other reason, and simply grabbing for the balloon to hold it (or give it to someone else)? There is ambiguity associated with the two examples that is not captured in the data. To a machine learning model trained on SNLI, both examples are to be classified as entailment, and incorrect classifications are penalized equally during learning.

Previous work has shown that leveraging crowd disagreements can improve the performance of named entity recognition (NER) models by treating disagreement not as noise but as signal <cit.>. We use the same assumption here and encode crowd disagreements directly into the model training data in the form of a distribution over labels ("soft labels"). These soft labels model uncertainty in training by representing human ambiguity in the class labels. Ideally we would have soft labels for all of our training data; however, when training large deep learning models it is prohibitively expensive to collect many annotations for all data in the huge datasets required for training. In this work we show that even a small amount of soft labeled data can improve generalization. This is the first work to fine-tune a deep neural network with soft labels from crowd annotations for a natural language processing (NLP) task. With this in mind we propose soft label memorization-generalization (SLMG), a fine-tuning approach to training that uses distributions over labels for a subset of data as a supplemental training set for a learning model. Because of the costs involved, we explore using a small number of soft-labeled examples for fine-tuning on top of a larger conventionally labeled training set, and seek to understand the effect of including more informative labels as part of training. Our hypothesis is that using labels that incorporate language ambiguity can improve model generalization in terms of test set accuracy, even for a small subset of the training data. By using a distribution over labels we hope to reduce overfitting by not pushing probabilities to 1 for items where the empirical distribution is more spread out. Our results show that SLMG is a simple and effective way to improve generalization without a lot of additional data for training.

We evaluate our approach on NLI (also known as Recognizing Textual Entailment, or RTE) <cit.> using the SNLI data set <cit.>. Prior work has shown that lexical phenomena in the SNLI dataset can be exploited by classifiers without learning the task, and performance on difficult examples in the data set is still relatively poor, making NLI a still-open problem <cit.>. For soft labeled data we use the IRT evaluation scales for NLI data <cit.>, where each premise-hypothesis pair was labeled by 1000 AMT workers. This way we are able to leverage an existing source of soft labeled data without additional annotation costs.
We find that SLMG can improve generalization under certain circumstances, even though the amount of soft labeled data used is tiny compared to the total training set (0.03% of the SNLI training data set). SLMG outperforms the obvious but strong baseline of simply gathering more unseen data for labeling and training. Our results suggest that there are diminishing returns for simply adding more data past a certain point <cit.>, and indicate that representing data uncertainty in the form of soft labels can have a positive impact on model generalization.

Our contributions are as follows: (i) we propose the SLMG framework, a fine-tuning method for incorporating crowd uncertainty, in the form of soft labels, into machine learning training; (ii) we demonstrate a new use case for the NLI IRT data set <cit.> and release a new data set of AMT responses for a subset of SSTB <cit.>;[Data included as supplemental material. We will release our code upon publication.] (iii) we conduct experiments showing where in a training setup the fine-tuning data is useful, and that replacing less than 0.1% of training data with soft labeled data can improve generalization for three DNN models; and (iv) we analyze changes in test set output to understand what types of examples are impacted by SLMG.

The rest of this paper is organized as follows: Section <ref> motivates the need for uncertainty in training labels and describes SLMG, Section <ref> gives a thorough description of related work to place SLMG in context, Section <ref> describes our experiments, including the deep learning models we tested, Section <ref> presents results, and Section <ref> discusses the results and areas for potential future work.

§ SOFT LABEL MEMORIZATION-GENERALIZATION
§.§ Overview
In a traditional supervised learning single-label classification problem, a model is trained on some data set X_train and tested on some test set X_test. In this setting, learning is done by minimizing some loss function L. We assume that the labels associated with instances in X_train are correct. That is, for each (x_i, y_i) ∈ X_train we assume that y_i is the correct class for the i-th example, where x_i is a set of features associated with the i-th training example and y_i is the corresponding class. However it is often the case, particularly in NLP, that examples vary in terms of difficulty, ambiguity, and other characteristics that are not captured by the single correct class to which the example belongs. The traditional single-label classification task does not take this into account. For example, a popular loss function for classification tasks is Categorical Cross-Entropy (CCE). For a single training example x_i with class y_i ∈ Y, where Y is the set of possible classes, CCE loss is defined as L_i^CCE = -∑_j=1^|Y| p(y_ij) log p(ŷ_ij). In the single-label classification case, where a single class j has probability 1, CCE loss reduces to L^CCE = ∑_i=1^N -log p(ŷ_ij), where the per-example losses are summed over all of the training examples. With this loss function a learning model is encouraged to update its parameters in order to maximize the probability of the correct class for each training example.
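To illustrate the difference this makes, the following toy Python snippet (with made-up numbers of our own) compares CCE loss against a hard one-hot label with CCE loss against a soft crowd-estimated distribution; only in the hard-label case is the loss minimized by pushing one output probability to 1.

import numpy as np

def cce(p_true, p_pred):
    # categorical cross-entropy between a target distribution and a prediction
    return -np.sum(p_true * np.log(p_pred))

p_pred = np.array([0.70, 0.20, 0.10])   # model output over 3 classes
hard = np.array([1.0, 0.0, 0.0])        # gold label with probability 1
soft = np.array([0.80, 0.15, 0.05])     # crowd-estimated distribution

print(cce(hard, p_pred))   # minimized only when p_pred[0] -> 1
print(cce(soft, p_pred))   # minimized when p_pred matches the crowd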
Without some stopping criterion, parameter updates will continue for a given example until p(ŷ_ij) = 1. This may not always be ideal: by pushing the model output probability to 1, the learner is encouraged to overfit on an example that may not be representative of its class. With SLMG we want to take advantage of the fact that differences between examples in the same class can be useful during training. Instead of treating each training example as having a single correct class, SLMG uses a distribution over labels as the gold standard. This way, examples with varying degrees of uncertainty are reflected during training.

We make a different assumption regarding noise in human-generated labels than previous work <cit.>. The presence of noise when multiple labels are obtained is often attributed to labeler error, lack of expertise, adversarial actions, or other negative causes. We believe instead that the noise in the labels can be considered a signal <cit.>. Examples with less uncertainty about the label (in the form of a label distribution with a single high peak) should be associated with similarly high model confidence.

§.§ Training with SLMG
In our experiments we investigated two ways to incorporate the soft labeled data into model training, which we define below. Let X_train be the original training set, and let X_test be the test set. Let X_soft be the soft labeled training data with class probabilities. We investigate two ways to incorporate the X_soft data into a learning task: (i) at each training epoch, training with X_train and X_soft interspersed (SLMG-I), and (ii) training a model on X_train for a predefined number of epochs, followed by training on X_soft for a predefined number of epochs, repeated some number of times (meta-epochs) in a sequential fashion (SLMG-S). Algorithms <ref> and <ref> define the two training sequences, respectively. In our experiments we tested two loss functions for the SLMG data: CCE (<ref>) and Mean Squared Error (MSE), L^MSE_i = ∑_j=1^|Y| (p̂(y_ij) - p(y_ij))^2.

§.§.§ Interspersed Fine-Tuning
The motivation for interspersing fine-tuning with soft labels is to prevent overfitting as the model learns. After each epoch in the training cycle, the learning model will have made updates to the model weights according to the outputs on the full training set. By interspersing the fine-tuning after each epoch, our expectation is that we can account for and correct overfitting earlier in the process by making smaller updates to the model weights according to the soft label distributions. This method encourages generalization early in the process, before the model can memorize the training data and possibly overfit.

§.§.§ Sequential Fine-Tuning
In contrast with interspersed fine-tuning, the motivation for sequential fine-tuning is to adjust a well-trained model to improve generalization. After a full training cycle of some number of epochs, the learning model is fine-tuned using the soft-labeled data. This way the fine-tuning takes place after the model has learned a set of weights that perform well on the training data. Fine-tuning here can improve generalization by updating the model weights to be less extreme when dealing with examples that are more ambiguous than others. Since these updates happen on a trained model, there is less risk of drastically reducing model performance. By repeating this process over a number of meta-epochs, the learning model can memorize, generalize, and repeat the cycle.
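The following Python sketch (ours; train_epoch is a stub standing in for one pass of gradient updates) shows the control flow of the two schedules defined in Algorithms <ref> and <ref>.

def train_epoch(model, data, loss):
    print(f"epoch on {data} with {loss} loss")   # stub: one optimizer pass

def slmg_interspersed(model, epochs=3):
    for _ in range(epochs):                      # SLMG-I: fine-tune each epoch
        train_epoch(model, "X_train", "hard-label CCE")
        train_epoch(model, "X_soft", "soft-label CCE or MSE")

def slmg_sequential(model, meta_epochs=2, k_train=2, k_soft=1):
    for _ in range(meta_epochs):                 # SLMG-S: memorize, then generalize
        for _ in range(k_train):
            train_epoch(model, "X_train", "hard-label CCE")        # memorize
        for _ in range(k_soft):
            train_epoch(model, "X_soft", "soft-label CCE or MSE")  # generalize

slmg_interspersed(model=None)
slmg_sequential(model=None)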
§.§ Collecting Soft Labeled Data
For our NLI soft labeled data, we use data collected by <cit.>. 180 SNLI training examples, split evenly between the three labels, were randomly selected and given to Amazon Mechanical Turk (AMT) workers (Turkers) for additional labeling. For each example 1000 additional labels were collected. In order to estimate a distribution over labels for these examples, we calculate the probability of a certain label according to the proportion of humans that selected the label: P(Y = y) = N_y/N, where N_y is the number of times y was selected by the crowd and N is the total number of responses obtained. For SA, we collected a new data set of labels for 134 examples randomly selected from SSTB, using a similar AMT setup as <cit.>. For each randomly selected example, we asked 1000 Turkers to label the sentence as very negative, negative, neutral, positive, or very positive.

Table <ref> shows example premise-hypothesis pairs taken from the SNLI data set for NLI <cit.>. Table <ref> includes the premise and hypothesis sentences, the gold standard class as included in the data set, as well as estimated soft labels using the human responses obtained by <cit.>. There are premise-hypothesis pairs that share a class label (e.g., the first two examples) yet are very different in terms of how they are perceived by a crowd of human labelers. In a traditional setup both examples would have a single class label associated with contradiction (class label 1 if 0 = entailment, 1 = contradiction, and 2 = neutral). Certain training examples have much less uncertainty associated with them, which is reflected in the high probability weight on the correct label. In other cases, there is a more evenly spread distribution, which can be interpreted as a higher degree of uncertainty. In a learning scenario, one may want to treat these examples differently according to their uncertainty, as opposed to the common practice of weighing each equally. Consider calculating the entropy, H(X), of the first two training examples from Table <ref>: H(X) = -∑_y∈Y p(y) log p(y). If we assume that the probability of the correct label (in this case, contradiction) is 1, and the probability of all other labels is 0, then the entropy in both cases is 0.[Where 0 log 0 = 0.] However, if we use the distributions from Table <ref>, then the entropies are 0.464 and 0.837, respectively. There is much more uncertainty in the second example than the first, which is not reflected if we assume that both examples are labeled contradiction with probability 1. This uncertainty may be important when learning for classification.
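The estimation and the entropy comparison above can be reproduced with a few lines of Python; the counts below are illustrative stand-ins of our own, not the actual AMT tallies.

import math

def soft_label(counts):
    # P(Y = y) = N_y / N: proportion of workers who chose each label
    total = sum(counts)
    return [c / total for c in counts]

def entropy(dist):
    return -sum(p * math.log(p) for p in dist if p > 0)   # 0 log 0 = 0

clear_case = soft_label([30, 910, 60])        # most workers agree
ambiguous_case = soft_label([250, 600, 150])  # responses are spread out

print(entropy([0.0, 1.0, 0.0]))   # 0.0: a hard label carries no uncertainty
print(entropy(clear_case))        # low entropy
print(entropy(ambiguous_case))    # higher entropy, i.e. more ambiguity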
§.§ Learning from the Crowd
In this work we take advantage of the fact that we have a distribution over labels provided by the human labelers. We can train using CCE or MSE as our loss function, where we minimize the difference between the estimated probabilities learned by the model and the empirical distributions obtained from AMT over the training examples. With SLMG we are attempting to move the model predictions closer to the soft label distribution of responses; we are not necessarily trying to push predicted probability values to 1, which is a departure from the standard understanding of single-label classification in ML. We hypothesize that updating weights according to differences in the observed probability distributions will improve the model by preventing it from updating too much for more uncertain items (that is, examples where the empirical distribution is more evenly spread across the three labels). This scenario assumes that the crowdsourced distribution of responses is a better measure of correctness than a single gold-standard label: the crowd distribution over labels gives a fuller understanding of the items being used for training. SLMG can update parameters to move closer to this distribution without making large parameter updates under the assumption that a single correct label should have probability 1. If we assume that ML performance is not at the level of an average human (which is reasonable in many cases), then SLMG can help pull models towards average human behavior when we use human annotations to generate the soft labels. If the model updates parameters to minimize the difference between its predictions and the distribution of responses provided by AMT workers, then the model predictions should look like those of the crowd. When ML model performance is better than that of the average AMT user, there is a risk that performance may suffer: the model may have learned a set of parameters that models the data better than the human population does, and updating parameters to reflect the human distribution could lead to a drop in performance. However, since we only use SLMG as a fine-tuning mechanism, this risk is mitigated by the larger training set that we use alongside the SLMG data.

§ EXPERIMENTS
Our hypothesis is that soft labeled data, even in very small amounts, can improve model generalization by capturing the ambiguity of language data in the form of distributions over labels. In this section we describe our experiments to test this hypothesis, as well as the data sets and models used in the experiments.

§.§ Models
For our experiments we tested three deep learning models: an LSTM RNN <cit.> that was released with the original SNLI data set, a memory-augmented LSTM network <cit.>, and a recently released hierarchical network with very strong performance on the SNLI task <cit.>. Each model was trained according to the original parameters provided in the respective papers.[Due to space constraints, please refer to the original papers for descriptions of the model architectures.] Word embeddings for all models were initialized with GloVe 840B 300D word embeddings <cit.>.

Our first model is a re-implementation of the 100D LSTM model that was released with the original SNLI data set <cit.>. For the NLI task, the premise and hypothesis sentences are both passed through a 100D LSTM sequence embedding <cit.>. The output embeddings are concatenated and fed through three 200D tanh layers, followed by a final softmax layer for classification. We implemented this model in DyNet <cit.>.
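A sketch of this baseline in PyTorch follows (the paper's implementation is in DyNet; the PyTorch rendering, the shared encoder for both sentences, and the toy vocabulary size are our own assumptions).

import torch
import torch.nn as nn

class LSTMEntailment(nn.Module):
    def __init__(self, vocab_size, emb_dim=300, hid=100, n_classes=3):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)  # init with GloVe in practice
        self.encoder = nn.LSTM(emb_dim, hid, batch_first=True)
        self.mlp = nn.Sequential(
            nn.Linear(2 * hid, 200), nn.Tanh(),
            nn.Linear(200, 200), nn.Tanh(),
            nn.Linear(200, 200), nn.Tanh(),
            nn.Linear(200, n_classes),
        )

    def forward(self, premise, hypothesis):
        _, (hp, _) = self.encoder(self.emb(premise))     # final hidden states
        _, (hh, _) = self.encoder(self.emb(hypothesis))
        pair = torch.cat([hp[-1], hh[-1]], dim=-1)       # concatenate sentence vectors
        return self.mlp(pair)                            # logits for the 3 classes

model = LSTMEntailment(vocab_size=1000)
logits = model(torch.randint(0, 1000, (2, 7)), torch.randint(0, 1000, (2, 9)))
print(logits.shape)   # (2, 3)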
Neural Semantic Encoder (NSE) <cit.> is a memory-augmented neural network. NSE uses read, compute, and write operations to maintain and update an external memory M during training, and outputs an encoding h that is used for downstream classification tasks:

o_t = f_r^LSTM(x_t)
z_t = softmax(o_t^⊤ M_t-1)
m_r,t = z_t^⊤ M_t-1
c_t = f_c^MLP(o_t, m_r,t)
h_t = f_w^LSTM(c_t)
M_t = M_t-1 (1 - (z_t ⊗ e_k)^⊤) + (h_t ⊗ e_l)(z_t ⊗ e_k)^⊤

where f_r^LSTM is the read function, f_c^MLP is the composition function, f_w^LSTM is the write function, M_t is the external memory at time t, and e_l ∈ R^l and e_k ∈ R^k are vectors of ones <cit.>. We used the publicly available version of the NSE model released by the authors,[<https://bitbucket.org/tsendeemts/nse>] implemented in Chainer <cit.>, and followed the original NSE training parameters and hyperparameters <cit.>.
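The following numpy sketch (ours) walks through one NSE memory step with the two LSTMs and the MLP stubbed out as simple parameterized maps; it is meant only to make the equations above concrete, not to reproduce the authors' implementation.

import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def nse_step(x_t, M, W_read, W_comp, W_write):
    o_t = np.tanh(W_read @ x_t)               # stand-in for the read LSTM
    z_t = softmax(M @ o_t)                    # attention over k memory slots
    m_r = M.T @ z_t                           # retrieved memory m_{r,t}
    c_t = np.tanh(W_comp @ np.concatenate([o_t, m_r]))   # composition MLP
    h_t = np.tanh(W_write @ c_t)              # stand-in for the write LSTM
    M = M * (1.0 - z_t[:, None]) + np.outer(z_t, h_t)    # erase, then write
    return h_t, M

d, k = 4, 6                                   # embedding size, memory slots
rng = np.random.default_rng(0)
M = rng.standard_normal((k, d))
h, M = nse_step(rng.standard_normal(d), M,
                rng.standard_normal((d, d)),
                rng.standard_normal((d, 2 * d)),
                rng.standard_normal((d, d)))
print(h.shape, M.shape)                       # (4,) (6, 4)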
The Enhanced Sequential Inference Model (ESIM) <cit.> consists of three stages: (i) input premise and hypothesis encoding with BiLSTMs, (ii) local inference modeling with attention, and (iii) inference composition with a second BiLSTM encoding over the local inference information. We used the publicly available ESIM model released by the authors,[<https://github.com/lukecq1231/nli>] implemented in Theano <cit.>, and kept all of the hyperparameters the same as in the original paper.

§.§ Data
For NLI data we used the SNLI corpus <cit.>. SNLI is an order of magnitude larger than previously available NLI data sets (550k train/10k dev/10k test) and consists entirely of human-generated P-H pairs. SNLI is evenly split across three labels: entailment, contradiction, and neutral. SNLI is large, well-studied, and often used as a benchmark for new NLP models for NLI. The SA soft label data examples were selected from the SSTB test set, so for our experiments we use a modified SSTB test set where those examples have been removed. In our results we report baseline scores on the modified test set so as to be consistent. We chose to select from the SSTB test set because the training set for SSTB, particularly for the binary task, is smaller than the SNLI data set; we would rather keep all data for training in this instance and report all of our results on a smaller, but still substantial, test set. For all experiments we used early stopping and report test results for the epoch with the highest dev set performance.

§.§ Baselines
We evaluate SLMG against three baselines: (i) B1, Traditional: we train the DNN models (<ref>) in a traditional supervised learning setup, where the soft labeled training data (X_soft) is incorporated in the hard labeled training data (X_train) with the original gold-standard labels. (ii) B2, Comparable Label Effort (CLE): because each of the 180 X_soft examples has 1000 human annotations, our second baseline adds new single-label training data to B1, to evaluate against a comparable data labeling effort; to that end, we randomly selected 180,000 additional training data points from the Multi-NLI data set <cit.>. (iii) B3, AOC: the third baseline is the All in one Classifier (AOC) approach proposed by <cit.>, where for each example in X_soft, every label obtained from the crowd is used as a unique example in the training data. This baseline also has an additional 180,000 training data points as in B2, but the additional pairs all come from X_soft and have varying labels depending on the crowd responses.

§ RESULTS AND ANALYSIS
Table <ref> reports results on the SNLI test set. For each model on the NLI task, we are able to improve generalization performance (i.e., test set accuracy) by injecting soft labeled data at some point. Note that the best performance with SLMG varies according to the model, but for each model there is some configuration that does improve performance. As with all model training, the effect of SLMG requires experimentation according to the use case. In all cases, using CCE as the loss function performs better than using MSE. We suspect that this is because small differences between the predicted and target distributions are penalized less with CCE than with MSE.

Table <ref> shows two premise-hypothesis pairs from the SNLI test set, and the model output probabilities from the B1 baseline and the SLMG-I model trained with CCE as the soft label loss function. In the first example, using SLMG flips the output from incorrect (neutral) to correct (entailment). However, this pair seems to be a weak case of entailment, and could be argued to be neutral; the SLMG model reflects this and assigns a reasonably high probability to the neutral class. In the second case, training with SLMG results in the wrong label, but again it could be argued that neutral is appropriate here. The "sedan" that is stuck may not be the Land Rover (Land Rovers are SUVs), so neutral is a reasonable output.

For the SA task, injecting SLMG data at some point again improves performance. SLMG does not improve performance for the NSE model on the fine-grained SA task, but for the binary task we do see improvements, both for the LSTM and the NSE model. This suggests that data close to the decision boundary that was originally misclassified was classified correctly when soft labeled data was added. With binary SA, there is no distinction between "very negative" and "negative," so changes in degree have no effect unless the change is from negative to positive.

§.§ Changes in Outputs from SLMG
To better understand the effects of SLMG on generalization, we look at the changes in test set performance when SLMG is used, as compared to the baseline case. Table <ref> shows three confusion matrices: the test-set output for the baseline LSTM model on the NLI task, the same model trained with SLMG-S using CCE as the soft label loss function (which improved test set performance), and SLMG-S with MSE as the soft label loss function (which did not). In both cases of training with SLMG, the number of correctly classified entailment and contradiction examples increased, while the number of correctly classified neutral examples decreased. However, when MSE is used as the soft label loss function, the increase in misclassified neutral examples was enough to offset the gains in correctly classified entailment and contradiction examples. Depending on the use case, this result could still be useful for applications: fewer false negatives on the entailment and contradiction classes may be more important than the lost true positives on the neutral class. If we consider SNLI as a binary classification task with two possible labels, "entailment" and "not entailment" (where we combine contradiction and neutral), then Table <ref> shows that SLMG outperforms the baseline in both cases. In fact, SLMG-MSE outperforms SLMG-CCE on the binary task (88.0% vs. 86.6%), because its performance on the entailment label is much higher.
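The binary re-evaluation amounts to collapsing the three-way outputs, as in the following toy snippet (the label arrays are made-up stand-ins for the actual test-set outputs).

def to_binary(labels):
    # 0 = entailment, 1 = contradiction, 2 = neutral; map to 1 = "not entailment"
    return [0 if y == 0 else 1 for y in labels]

def accuracy(gold, pred):
    return sum(g == p for g, p in zip(gold, pred)) / len(gold)

gold = [0, 0, 1, 2, 2, 1]
pred = [0, 2, 1, 1, 2, 1]
print(accuracy(gold, pred))                        # three-way accuracy
print(accuracy(to_binary(gold), to_binary(pred)))  # binary accuracy is higher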
§.§ Comparing the Crowd to the Gold Standard
We also looked at the soft labeled data itself to understand how well the crowd label distributions align with the accepted gold-standard labels in the original data set. Figure <ref> reports on how well the crowd distributions align with the gold standard labels included in the original SNLI data set. We see that there are quite a few examples where the gold standard class label does not have a high degree of probability weight as estimated from the crowd. For NLI, there is a high percentage of examples where the gold label has an estimated probability of less than 80%. This may be due to the fact that individuals have different understandings of what constitutes entailment. This uncertainty among humans is useful for understanding outputs from ML models, and it is consistent with the inter-rater reliability (IRR) scores originally reported by <cit.> for the IRT data set: IRR scores (Fleiss' κ) for the data ranged from 0.37 to 0.63, which is considered moderate agreement <cit.>. The moderate agreement indicates that there is a general consensus about which label is correct (consistent with Figure <ref>), but there is enough disagreement among the annotators that the disagreements should be incorporated into the training data, not discarded in favor of majority vote or another single-label selection criterion.

§.§ How Many Labels do we Need?
Of course, collecting 1000 labels per example to estimate soft labels becomes prohibitively expensive very quickly. However, it may not be necessary to collect that many labels in practice. To determine how many labels are needed to arrive at a reasonable estimate of the soft label distributions, we randomly sampled crowd workers from our dataset one at a time. At each step, we used the sampled workers' responses to estimate the soft labels for each example and calculated the Kullback–Leibler divergence (KL-Divergence) between the true soft label distributions and the sampled soft label distributions: D_KL(P || Q) = ∑_i P(i) log (P(i)/Q(i)), where P is the true soft label distribution estimated from the full data set and Q is the sampled soft label distribution. Figure <ref> plots the KL-Divergence averaged over the number of data set examples (180) as a function of the number of crowd workers selected.[We truncate the x-axis to focus on the lower values.] We plot results for 5 runs of the random sampling procedure. As the figure shows, the average KL-Divergence approaches 0 well before all 1000 labels are necessary. When sampling randomly, the average difference drops very quickly, and is very low with as few as 15 or 20 labels per example. Active learning techniques could reduce this number further, either by selecting "good annotators" or by identifying examples for which more labels are needed; this is left for future work. To confirm the observation that significantly fewer labels are necessary, we randomly sampled 20 annotators from the dataset, used their responses to estimate the soft label distributions, and re-trained the LSTM model with SLMG-I using CCE as the soft label loss function. We ran this training 10 times, each time sampling a new selection of 20 annotators for estimating the soft label distributions. The average accuracy of these models was 76.9 and the standard deviation was 0.3; they perform as well as the model using the distributions learned from 1000 annotators, with significantly less annotation cost.
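A sketch of this sampling experiment follows (ours, with a synthetic crowd matrix in place of the real annotations; the shapes mirror the 180 examples and 1000 workers described above).

import numpy as np

def kl(p, q, eps=1e-12):
    p, q = p + eps, q + eps          # smoothing to avoid log(0)
    return np.sum(p * np.log(p / q))

def dist(labels, n_classes=3):
    return np.bincount(labels, minlength=n_classes) / len(labels)

rng = np.random.default_rng(0)
crowd = rng.choice(3, size=(180, 1000), p=[0.6, 0.25, 0.15])
full = np.array([dist(row) for row in crowd])   # "true" soft labels P

for n in (5, 20, 100):
    sample = crowd[:, rng.choice(1000, size=n, replace=False)]
    approx = np.array([dist(row) for row in sample])   # sampled estimate Q
    avg = np.mean([kl(p, q) for p, q in zip(full, approx)])
    print(n, avg)   # the average divergence shrinks quickly as n grows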
In Knowledge Distillation, output probabilities of a complex expert model are used as input to a simpler model so the simpler model can learn to generalize based on the output weights of the expert model.A key distinction between Knowledge Distillation and our work is that the expert model that is distilling its knowledge was still trained with a single class label as the gold standard, and the expert passes its uncertainty to the simpler model. In our work we capture uncertainty at the original training data, in order to induce generalization as part of the original training. This work is related to the idea of “crowd truth” and collecting and using annotations from the crowd <cit.>. We use the CrowdTruth assumption that disagreement between annotators provides signal about data ambiguity and should be used in the learning process. In addition this work is closely related to the idea of Label Distribution Learning (LDL) from Computer Vision (CV) <cit.>. For training and testing, LDL assumes that y is a probability distribution over labels. With LDL, the goal is to learn a distribution over labels. However in our case we would still like to learn a classifier that outputs a single class, while using the distribution over training labels as a measure of uncertainty in the data. We use the distribution over labels to represent the uncertainty associated with different examples in order to improve model training. There are several other areas of study regarding how best to use training data that are related to this work. Re-weighting or re-ordering training examples is a well-studied and related area of supervised learning. Often examples are re-weighted according to some notion of difficulty, or model uncertainty <cit.>. In particular, the internal uncertainty of the model is used as the basis for selecting how training examples are weighted. However, model uncertainty is dependent upon the original training data the model was trained on, while here we use an external human measure of uncertainty. Curriculum learning (CL) is a training procedure where models are trained to learn simple concepts before more complex concepts are introduced <cit.>. CL training for neural networks can improve generalization and speed up convergence. They demonstrate the effectiveness of curriculum learning on several tasks and draw a comparison with boosting and active learning <cit.>. Our representation of uncertainty via soft labels can be thought of as a measure of difficulty (i.e. more uncertainty is associated with more difficult examples).Finally, this work is related to transfer learning and domain adaptation <cit.>, but with an important distinction. Transfer learning and domain adaptation repurpose representations learned for a source domain to facilitate learning in a target domain. In this paper we want to improve performance in the source domain by fine-tuning with data from the source domain with distributions over class labels. This work differs from domain adaptation and transfer learning in that we are not adding data from a different domain or applying a learned model to a new task. Instead, we are augmenting a single classification task by using a richer representation of where the data lies within the class labels to inform training. The goal is that by fine tuning with a distribution over labels, a model will be less likely to overfit on a training set. 
To the best of our knowledge this is the first work to use a subset of soft labeled data for fine-tuning, whereas previous work used an all-or-none approach (all hard or soft labels).

§ DISCUSSION

In this paper we have introduced SLMG, a fine-tuning approach to training that can improve classification performance by leveraging uncertainty in data. In the NLI task, incorporating the more informative class distribution labels leads to improved performance under certain training setups. By introducing specialized supplemental data the model is able to update its representations to boost performance. With SLMG, a learning model can update parameters according to a gold-standard that allows for uncertainty in predictions, as opposed to the classic case where each training example should be equally important during parameter updates. Training examples with higher degrees of uncertainty within a human population have less of an effect on gradient updates than those examples where confidence in the label is very high as measured by the crowd.

SLMG is an easy fix, but it is not a silver bullet for improving generalization. In our experiments we found that under different training settings SLMG can improve performance for the different models. It is worthwhile to experiment with SLMG to see if and how it can improve performance on other NLP tasks. NLI is a particularly good use case for SLMG because of the ambiguity inherent in language and the potential disagreements that can arise from different interpretations of text. In addition, further experimentation with the way soft labels are generated can lead to further generalization improvements.

There are limitations to this work. One bottleneck is the requirement for having a large amount of human labels for a small number of examples, which goes against the traditional strategy for crowdsourcing label-generation. However one can probably estimate a reasonable distribution over labels with significantly fewer labels than obtained here for each example (Figure <ref>). Identifying a suitable number using active learning techniques is left for future work.

While SLMG requires soft labels, it does not necessarily require human-annotated soft labels. Rather, SLMG only requires some measure of uncertainty between training examples as part of the generalization step. This can come from human annotators, an ensemble of machine learning models, or some other pre-defined uncertainty metric. In our experiments we demonstrate the validity of SLMG using an existing data set from which we can extract soft labels, and leave experimentation with different soft label generation methods to future work.

Future work includes investigation into data sets that can be used with SLMG and why certain fine-tuning sets lead to better performance in certain scenarios. Experiments with different loss functions (e.g. KL-divergence) and different data can help to understand how SLMG affects the representations learned by a model. Our results suggest that future work training DNNs to learn a distribution over labels can lead to further improvements.
http://arxiv.org/abs/1702.08563v3
{ "authors": [ "John P. Lalor", "Hao Wu", "Hong Yu" ], "categories": [ "cs.CL" ], "primary_category": "cs.CL", "published": "20170227222545", "title": "Soft Label Memorization-Generalization for Natural Language Inference" }
An SDP-Based Algorithm for Linear-Sized Spectral Sparsification
Yin Tat Lee, Microsoft Research, Redmond, USA
He Sun, The University of Bristol, Bristol, UK
=====================================================================================================

For any undirected and weighted graph G=(V,E,w) with n vertices and m edges, we call a sparse subgraph H of G, with proper reweighting of the edges, a (1+ε)-spectral sparsifier if

(1-ε)·x^⊤ L_G x ≤ x^⊤ L_H x ≤ (1+ε)·x^⊤ L_G x

holds for any x ∈ ℝ^n, where L_G and L_H are the respective Laplacian matrices of G and H. Noticing that Ω(m) time is needed for any algorithm to construct a spectral sparsifier and that a spectral sparsifier of G requires Ω(n) edges, a natural question is to investigate, for any constant ε, if a (1+ε)-spectral sparsifier of G with O(n) edges can be constructed in Õ(m) time, where the Õ notation suppresses polylogarithmic factors. All previous constructions on spectral sparsification <cit.> require either a super-linear number of edges or m^{1+Ω(1)} time. In this work we answer this question affirmatively by presenting an algorithm that, for any undirected graph G and ε>0, outputs a (1+ε)-spectral sparsifier of G with O(n/ε^2) edges in Õ(m/ε^O(1)) time. Our algorithm is based on three novel techniques: (1) a new potential function which is much easier to compute yet has similar guarantees as the potential functions used in previous references; (2) an efficient reduction from a two-sided spectral sparsifier to a one-sided spectral sparsifier; (3) constructing a one-sided spectral sparsifier by a semi-definite program.

Keywords: spectral graph theory, spectral sparsification

§ INTRODUCTION

A sparse graph is one whose number of edges is reasonably viewed as being proportional to the number of vertices. Since most algorithms run faster on sparse instances of graphs and it is more space-efficient to store sparse graphs, it is useful to obtain a sparse representation H of G so that certain properties between G and H are preserved, see Figure <ref> for an illustration. (A small numerical check of the sparsifier definition is given below.)
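The following sketch computes the smallest ε for which a reweighted subgraph H is a (1+ε)-spectral sparsifier of G, via a dense eigendecomposition. It is illustrative only: it ignores the nearly-linear-time requirements that motivate this paper, and the toy graphs are hypothetical.

import numpy as np

def laplacian(n, edges):
    # Graph Laplacian from a list of (u, v, weight) triples.
    L = np.zeros((n, n))
    for u, v, w in edges:
        L[u, u] += w; L[v, v] += w
        L[u, v] -= w; L[v, u] -= w
    return L

def sparsifier_eps(L_G, L_H):
    # Smallest eps with (1-eps) L_G <= L_H <= (1+eps) L_G, measured on
    # the range of L_G (the all-ones null space is ignored).
    w, V = np.linalg.eigh(L_G)
    keep = w > 1e-9 * w.max()
    S = V[:, keep] / np.sqrt(w[keep])        # L_G^{+1/2}, restricted to range
    lam = np.linalg.eigvalsh(S.T @ L_H @ S)  # relative eigenvalues
    return max(lam.max() - 1.0, 1.0 - lam.min())

# Toy check: a 4-cycle versus the same cycle with two edges reweighted.
G = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (3, 0, 1.0)]
H = [(0, 1, 1.1), (1, 2, 1.0), (2, 3, 1.0), (3, 0, 0.9)]
print("eps =", sparsifier_eps(laplacian(4, G), laplacian(4, H)))

The relative eigenvalues of (L_H, L_G) lying in [1-ε, 1+ε] is exactly the sparsifier condition above, restricted to the space where both quadratic forms are non-trivial.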
Over the past three decades, different notions of graph sparsification have been proposed and widely used to design approximation algorithms.For instance, a spanner Hof a graph G is a subgraph of G so thatthe shortest path distance between any pair of vertices is approximately preserved <cit.>.Benczúr and Karger <cit.> defined a cut sparsifier of a graph G to be a sparse subgraph H such that thevalue of any cut between G and H are approximately the same.In particular, Spielman and Teng <cit.> introduced a spectral sparsifer, which is a sparse subgraph H of an undirected graph G such that many spectral properties of the Laplacian matrices between G and H are approximately preserved.Formally, for any undirectedgraph G with n vertices and m edges, we call a subgraph H of G, with proper reweighting of the edges,a (1+ε)-spectral sparsifier if (1-)x^L_Gx≤ x^ L_H x≤ (1+ε) x^ L_Gxholds for any x∈^n, where L_G and L_H are the respective Laplacian matrices of G and H.Spectral sparsification has been provento be a remarkably useful tool in algorithm design, linear algebra, combinatorial optimisation, machine learning, and network analysis.In the seminal work on spectral sparsification, Spielman and Teng <cit.> showed that, for any undirected graph G of n vertices, a spectral sparsifier of G with onlyO(nlog^cn/ε^2) edgesexists and can be constructed in nearly-linear time[We say a graph algorithm runs in nearly-linear time if the algorithm runs in O(m·polylog n) time, where m and n are the number of edges and vertices of the input graph.], where c≥ 2 is some constant. Both the runtime of their algorithmand the number of edges in the output graph involve large poly-logarithmic factors, and this motivates a sequence ofsimpler and faster constructions of spectral sparsifiers with fewer edges <cit.>.In particular, since any constant-degree expander graph of O(n) edges is a spectral sparsifier of an n-vertex complete graph, a natural question is to study, for any n-vertex undirectedgraph G and constant >0, if a (1+)-spectral sparsifier of G with O(n) edges can be constructed in nearly-linear time. Being considered as one of the most important open question about spectral sparsification by Batson et al. <cit.>, there has been many effortsfor fast constructions of linear-sized spectral sparsifiers, e.g. <cit.>, however the original problem posed in <cit.> has remained open.In this work we answer this question affirmatively bypresenting the first nearly-linear time algorithm for constructing a linear-sized spectral sparsifier. The formal description of our result is as follows: Let G be any undirected graph with n vertices and m edges. For any 0<ε<1, there is an algorithm that runs in Õ(m/^O(1)) work, Õ(1/^O(1)) depth, and produces a (1+)-spectral sparsifier of G with O(n/^2) edges[Here, the notation Õ(·) hides a factor of log^cn for some positive constant c.].maingraphshows thata linear-sized spectral sparsifier can be constructed in nearly-linear time in a single machine setting, and in polylogarithmic time in a parallel setting. The same algorithm can be applied to the matrix setting, whose result is summarised as follows:Given a set of m matrices {M_i}_i=1^m, where M_i∈^n× n. Let M = ∑_i=1^m M_i and Z=∑_i=1^mnnz(M_i), where nnz(M_i) is the number of non-zero entries in M_i. 
For any 1>ε>0, there is an algorithm that runs in Õ((Z + n^ω)/ε^O(1)) work and Õ(1/ε^O(1)) depth, and produces a (1+ε)-spectral sparsifier of M with O(n/ε^2) components, i.e., there are non-negative coefficients {c_i}_i=1^m such that |{ c_i | c_i ≠ 0}| = O(n/ε^2), and

(1-ε)· M ≼ ∑_i=1^m c_i M_i ≼ (1+ε)· M.

Here ω is the matrix multiplication constant.

§.§ Related work

In the seminal paper on spectral sparsification, Spielman and Teng <cit.> showed that a spectral sparsifier of any undirected graph G can be constructed by decomposing G into multiple nearly expander graphs, and sparsifying each subgraph individually. This method leads to the first nearly-linear time algorithm for constructing a spectral sparsifier with O(n log^c n/ε^2) edges for some c ≥ 2. However, both the runtime of their algorithm and the number of edges in the output graph involve large poly-logarithmic factors. Spielman and Srivastava <cit.> showed that a (1+ε)-spectral sparsifier of G with O(n log n/ε^2) edges can be constructed by sampling the edges of G with probability proportional to their effective resistances, which is conceptually much simpler than the algorithm presented in <cit.>. Noticing that any constant-degree expander graph of O(n) edges is a spectral sparsifier of an n-vertex complete graph, Spielman and Srivastava <cit.> asked if any n-vertex graph has a spectral sparsifier with O(n) edges. To answer this question, Batson, Spielman and Srivastava <cit.> presented a polynomial-time algorithm that, for any undirected graph G of n vertices, produces a spectral sparsifier of G with O(n) edges. At a high level, their algorithm, a.k.a. the BSS algorithm, proceeds for O(n) iterations, and in each iteration one edge is chosen deterministically to "optimise" the change of some potential function. Allen-Zhu et al. <cit.> noticed that a less "optimal" edge, based on a different potential function, can be found in almost-linear time, and this leads to an almost-quadratic time algorithm. Generalising their techniques, Lee and Sun <cit.> showed that a linear-sized spectral sparsifier can be constructed in time O(m^{1+c}) for an arbitrarily small constant c. All of these algorithms proceed for Ω(n^c) iterations, and every iteration takes Ω(m^{1+c}) time for some constant c>0. Hence, to break the Ω(m^{1+c}) runtime barrier faced in all previous constructions, multiple new techniques are needed.

§.§ Organisation

The remaining part of the paper is organised as follows. We introduce necessary notions about matrices and graphs in the preliminaries. In the overview section we sketch our algorithm and proof techniques. For readability, more detailed discussions and technical proofs are deferred to the detailed analysis.

§ PRELIMINARIES

§.§ Matrices

For any n×n real and symmetric matrix A, let λ_min(A)=λ_1(A)≤⋯≤λ_n(A)=λ_max(A) be the eigenvalues of A, where λ_min(A) and λ_max(A) represent the minimum and maximum eigenvalues of A. We call a matrix A positive semi-definite (PSD) if x^⊤ A x ≥ 0 holds for any x ∈ ℝ^n, and a matrix A positive definite if x^⊤ A x > 0 holds for any x ∈ ℝ^n. For any positive definite matrix A, we define the corresponding ellipsoid by 𝖤𝗅𝗅𝗂𝗉(A) ≜ { x : x^⊤ A^{-1} x ≤ 1 }.

§.§ Graph Laplacian

Let G=(V,E,w) be a connected, undirected and weighted graph with n vertices, m edges, and weight function w: E→ℝ_≥0. We fix an arbitrary orientation of the edges in G, and let B_G ∈ ℝ^{m×n} be the signed edge-vertex incidence matrix defined by

B_G(e,v) = { 1 if v is the head of edge e; -1 if v is the tail of edge e; 0 otherwise }.

We define an m×m diagonal matrix W_G by W_G(e,e)=w_e for any edge e ∈ E[G]. The Laplacian matrix of G is an n×n matrix L_G defined by

L_G(u,v) = { -w(u,v) if u∼v; deg(u) if u=v; 0 otherwise },

where deg(v)=∑_{u∼v} w(u,v). (A short numerical illustration of these objects appears below.)
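The sketch below builds a small weighted Laplacian, reads off a cut value from the quadratic form, and carries out the standard reduction from graph sparsification to the matrix problem studied in the overview; the triangle graph is hypothetical.

import numpy as np

# Triangle graph with weights; build its Laplacian directly.
edges = [(0, 1, 2.0), (1, 2, 1.0), (2, 0, 1.0)]
n = 3
L = np.zeros((n, n))
for u, v, wt in edges:
    L[u, u] += wt; L[v, v] += wt; L[u, v] -= wt; L[v, u] -= wt

# Quadratic form with a 0/1 indicator gives a cut value: S = {0}.
x = np.array([1.0, 0.0, 0.0])
print(x @ L @ x)                          # cut(S) = 2.0 + 1.0 = 3.0

# Reduction to the matrix problem: with P = L^{+1/2} restricted to range(L),
# the matrices M_e = w_e (P b_e)(P b_e)^T are PSD and sum to the identity there.
w, V = np.linalg.eigh(L)
keep = w > 1e-9 * w.max()
P = (V[:, keep] / np.sqrt(w[keep])).T     # maps R^n to range coordinates
Ms = []
for u, v, wt in edges:
    b = np.zeros(n); b[u], b[v] = 1.0, -1.0
    y = P @ b
    Ms.append(wt * np.outer(y, y))
print(np.round(sum(Ms), 6))               # ~ identity of size n-1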
It is easy to verify that

x^⊤ L_G x = x^⊤ B_G^⊤ W_G B_G x = ∑_{u∼v} w_{u,v}(x_u-x_v)^2 ≥ 0

holds for any x ∈ ℝ^n. Hence, the Laplacian matrix of any undirected graph is a PSD matrix. Notice that, by setting x_u=1 if u∈S and x_u=0 otherwise, x^⊤ L_G x equals the value of the cut between S and V∖S. Hence, a spectral sparsifier is a stronger notion than a cut sparsifier.

§.§ Other notations

For any sequence {α_i}_i=1^m, we use nnz(α) to denote the number of non-zeros in {α_i}_i=1^m. For any two matrices A and B, we write A ≼ B to represent that B-A is PSD, and A ≺ B to represent that B-A is positive definite. For any two matrices A and B of the same dimension, let A∙B ≜ tr(A^⊤ B), and A⊕B = [ A 0; 0 B ].

§ OVERVIEW OF OUR ALGORITHM

Without loss of generality we study the problem of sparsifying the sum of PSD matrices. The one-to-one correspondence between the construction of a graph sparsifier and the following Problem <ref> was presented in <cit.>. Given a set S of m PSD matrices M_1,⋯,M_m with ∑_i=1^m M_i = I and 0<ε<1, find non-negative coefficients {c_i}_i=1^m such that |{ c_i | c_i ≠ 0}| = O(n/ε^2), and

(1-ε)· I ≼ ∑_i=1^m c_i M_i ≼ (1+ε)· I.

For intuition, one can think all M_i are rank-1 matrices, i.e., M_i = v_i v^⊤_i for some v_i ∈ ℝ^n. Given the correspondence between PSD matrices and ellipsoids, Problem <ref> essentially asks to use O(n/ε^2) vectors from S to construct an ellipsoid whose shape is close to a sphere. To construct such an ellipsoid with the desired shape, all previous algorithms <cit.> proceed by iterations: in each iteration j the algorithm chooses one or more vectors, denoted by v_{j_1},⋯,v_{j_k}, and adds Δ_j ≜ ∑_{t=1}^k v_{j_t} v^⊤_{j_t} to the currently constructed matrix by setting A_j = A_{j-1} + Δ_j. To control the shape of the constructed ellipsoid, two barrier values, the upper barrier u_j and the lower barrier ℓ_j, are maintained such that the constructed ellipsoid 𝖤𝗅𝗅𝗂𝗉(A_j) is sandwiched between the outer sphere u_j·I and the inner sphere ℓ_j·I for any iteration j. That is, the following invariant is always maintained:

ℓ_j·I ≺ A_j ≺ u_j·I.

To ensure this invariant holds, the two barrier values ℓ_j and u_j are increased properly after each iteration, i.e., u_{j+1} = u_j + δ_{u,j}, ℓ_{j+1} = ℓ_j + δ_{ℓ,j} for some positive values δ_{u,j} and δ_{ℓ,j}. The algorithm continues this process, until after T iterations 𝖤𝗅𝗅𝗂𝗉(A_T) is close to a sphere. This implies that A_T is a solution of Problem <ref>; see Figure <ref> for an illustration. However, turning the scheme described above into an efficient algorithm requires considering the following issues:

* Which vectors should we pick in each iteration?
* How many vectors can be added in each iteration?
* How should we update u_j and ℓ_j properly so that the invariant always holds?

These three questions closely relate to each other: on one hand, one can always pick a single "optimal" vector in each iteration based on some metric, and such a conservative approach requires a linear number of iterations T=Ω(n/ε^2) and super-quadratic time for each iteration.
On the other hand, one can choose multiple less “optimal" vectors to construct Δ_j in iteration j, but this makes the update of barrier values more challenging to ensure the invariant invariant holds.Indeed, the previous constructions <cit.> speed up their algorithms at the cost of increasing the sparsity, i.e., the number of edges in a sparsifier, by more than a multiplicative constant.To address these, we introduce three novel techniques for constructing a spectral sparsifier: First of all, we define a new potential function which is much easier to compute yet has similar guarantee as the potential function introduced in <cit.>.Secondly we show that solving Problem <ref> with two-sided constraints in sscondition can be reduced to a similar problemwith only one-sided constraints.Thirdly we prove that the problem with one-sided constraints can be solved by a semi-definite program. §.§ A new potential function To ensure that the constructed ellipsoid A is always inside the outer sphere u· I, we introduce a potential function Φ_u(A) defined by Φ_u(A)≜exp((uI-A)^-1)=∑_i=1^n exp( 1/u-λ_i(A)).It is easy to see that, when(A) gets closer to the outer sphere, λ_i(u· I-A) becomes smaller and the value ofΦ_u(A) increases. Hence, a bounded value of Φ_u(A) ensures that (A) is inside the sphere u· I. For the same reason, we introduce a potential function Φ_ℓ(A) defined by Φ_ℓ(A)≜exp((A-ℓ I)^-1)=∑_i=1^n exp( 1/λ_i(A)-ℓ).to ensure that the inner sphere is always inside (A). We also define Φ_u,ℓ(A)≜Φ_u(A) + Φ_ℓ(A),as a bounded value of Φ_u,ℓ(A)implies that the two events occur simultaneously. Our goal is to design a proper update rule to construct {A_j} inductively, so that Φ_u_j, ℓ_j(A_j) is monotone non-increasing after each iteration. Assuming this, a bounded value of the initial potential function guarantees that the invariant invariant always holds.To analyse the change of the potential function, we first notice that Φ_u,ℓ(+Δ)≥Φ_u,ℓ() + ( e^(u-)^-1 (u-)^-2Δ) - (e^(-ℓ)^-1(-ℓ)^-2Δ)by the convexity of the function Φ_u,ℓ. Weprove that, as long as the matrix Δ satisfies 0≼Δ≼δ(uI-)^2 and 0≼Δ≼δ(-ℓ I)^2 for some small δ, the first-order approximation givesa good approximation.Let A be a symmetric matrix. Let u,ℓ be the barrier values such that u-ℓ≤1 and ℓ≺≺ u. Assume that Δ≻ 0,Δ≼δ(uI-)^2 and Δ≼δ(-ℓ I)^2 for δ≤ 1/10. Then, it holds thatΦ_u,ℓ(+Δ)≤ Φ_u,ℓ() + (1+2δ)( e^(u-)^-1 (u-)^-2Δ) -(1-2δ)(e^(-ℓ)^-1(-ℓ)^-2Δ).We remark that this is not the first paper to use a potential function to guide the growth of the ellipsoid. In <cit.>, the potential functionΛ_u,ℓ,p() = ((uI-A)^-p) + ((A-ℓ I)^-p)is used with p= 1. The main drawback is that Λ_u,ℓ,1 does not differentiate the following two cases:* Multiple eigenvalues of A are close to the boundary (both u and ℓ).* One of the eigenvalues of A is very close to the boundary (either u or ℓ).It is known that, when one of the eigenvalues of A is very close to the boundary, it is more difficult to findan “optimal” vector.<cit.> shows this problem can be alleviated by using p ≫ 1.However, this choice of p makes the function Λ_u,ℓ,p less smooth and hence one has to take a smaller step size δ, as shown in the following lemma by <cit.>.Let A be a symmetric matrix and Δ be a rank-1 matrix. Let u,ℓ be the barrier values such that ℓ≺≺ u. Assume that Δ≻ 0, Δ≼δ(uI-) and Δ≼δ(-ℓ I) for δ≤ 1/(10 p) and p ≥ 10. 
Then, it holds that Λ_u,ℓ,p(+Δ) ≤ Λ_u,ℓ,p() + p (1+p δ)((u-)^-(p+1)Δ)- p(1-p δ)((-ℓ)^-(p+1)Δ).Notice that, comparing with the potential function oldpotential, our new potential function defpotential blows up much faster when the eigenvalues of A are closer to the boundaries ℓ and u. This allows the problem of finding an “optimal” vector much easier than using Λ_u,ℓ,p. At the same time, we avoid the problem of taking a small step Δ≼ 1/p· (uI-) by taking a “non-linear" step Δ≼ (uI-)^2. As there cannot be too many eigenvalues close to the boundaries, this “non-linear” step allows us to take a large step except on a few directions.§.§ A simple construction based onThe second technique we introduce is the reduction fromaspectral sparsifier with two-sided constraintsto the one with one-sided constraints.Geometrically, it is equivalent to require the constructed ellipsoid inside another ellipsoid, instead of being sandwiched between two spheres as depicted in bsspic. Ideally, we want to reduce the two-sided problem to the following problem: for a set ofmatrices ={M_i }_i=1^m such that ∑_i=1^mM_i≼ I,find a sparse representation Δ=∑_i=1^m α_i M_i with small (α) such that Δ≼ I.However, in the reduction we use such matrixΔ=∑_i=1^m α_i M_i to update A_j and we need the length of (Δ) is large on the direction that (A_j) is small. To encode this information, we define the generalised one-sided problem as follows: [One-sided Oracle] Let≼ B ≼ I, ≽, ≽ besymmetric matrices, ℳ={M_i}_i=1^m bea set of matrices such that ∑_i=1^m_i =.We call a randomised algorithm (, B,, ) a one-sided oracle with speed S∈ (0,1] and error >0, if (, B,, )outputsa matrix Δ=∑_i=1^mα_i_i such that* nnz(α)≤λ_min () ·(^-1).* Δ≼ and α_i ≥ 0 for all i.* [∙Δ] ≥ S ·λ_min() ·() -S ·λ_min() ·(), where =-, and =+. Weshow in impleSDP the existence of a one-sided oracle withspeed S=Ω(1) and error =0, in which case the oracle only requiresas input, instead of and . However, to construct such an oracle efficiently an additional error is introduced, which depends on +.For the main algorithm 𝚂𝚙𝚊𝚛𝚜𝚒𝚏𝚢( ,ε), wemaintain the matrix A_j inductively as we discussed at the beginning of overview. By employing the ⊕ operator, we reduce the problem of constructing Δ_j with two-sided constraints to the problem ofconstructing Δ_j⊕Δ_j withone-sided constraints.We also use toensure that the length of (Δ) is large on the direction where the length of (A_j) is small. See Algorithm <ref> for formal description.To analyse Algorithm <ref>, we use the fact that the returned Δ_j satisfies the preconditions of potentialchange and prove in potential_decreasing that [Φ_u_j+1,ℓ_j+1(_j+1)]≤Φ_u_j,ℓ_j(_j)for any iteration j. Hence, with high probability the bounded ratio between u_T and ℓ_T after the final iteration T implies that the (A_T) is close to be a sphere. In particular,for any < 1/20, a (1+O())-spectral sparsfier can be constructed by callingO(log^2n/ε^2· S) times, which isdescribed in the lemma below.Let 0<ε<1/20. Suppose we have one-sided oracle 𝙾𝚛𝚊𝚌𝚕𝚎 with speed S and error . Then, with constant probability the algorithm 𝚂𝚙𝚊𝚛𝚜𝚒𝚏𝚢( ,ε) outputs a (1+O(ε))-spectral sparsifier withO(n/ε^2· S) vectors by callingO(log^2n/ε^2· S) times. §.§ Solvingvia SDPNow weshow that the required solution of(, B, ) indeed exists[As the goal here is to prove the existence ofwith error =0, the input here isinstead of C_+ and C_-.], and can be further solved in nearly-linear time bya semi-definite program. 
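Before the oracle construction, a quick numerical illustration of the potential function Φ_{u,ℓ} defined in the overview: it is a simple function of the eigenvalues of A, and it blows up rapidly as any eigenvalue approaches either barrier. The test matrix below is hypothetical.

import numpy as np

def potential(A, u, ell):
    # Phi_{u,ell}(A) = tr exp((uI-A)^{-1}) + tr exp((A-ell I)^{-1}),
    # computed from the eigenvalues of the symmetric matrix A.
    lam = np.linalg.eigvalsh(A)
    assert lam.min() > ell and lam.max() < u, "A must lie strictly between the barriers"
    return np.sum(np.exp(1.0 / (u - lam))) + np.sum(np.exp(1.0 / (lam - ell)))

rng = np.random.default_rng(1)
B = rng.standard_normal((5, 5))
A0 = 0.5 * np.eye(5) + 0.01 * (B + B.T)   # eigenvalues clustered near 0.5
for u in [1.0, 0.7, 0.6]:
    print(f"u={u}: Phi = {potential(A0, u, 0.0):.3e}")

Shrinking the upper barrier toward the spectrum of A makes the potential explode, which is exactly the behaviour that lets a bounded potential certify the invariant ℓ_j·I ≺ A_j ≺ u_j·I.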
We first prove that the required matrix satisfying the conditions of Definition <ref> exists for some absolute constant S=Ω(1) and =0. To make a parallel discussion between Algorithm <ref> and the algorithm we will present later,weuse A to denote the output of theinstead of Δ. We adopt the ideas between the ellipsoid and two spheres discussed before, but onlyconsider one sphere for the one-sided case. Hence, we introduce a barrier value u_j for each iteration j, where u_0=0. We will use thepotential function Ψ_j=(u_jB-A_j )^-1in our analysis, where u_jis increased by δ_j≜ (Ψ_j·λ_min(B))^-1 afteriteration j.Moreover, since we only need to prove the existence of the required matrix A=∑_i=1^m α_i M_i, our process proceeds for T iterations, where only one vector is chosenin each iteration.To finda desired vector, we perform random sampling, whereeach matrix M_i is sampled with probability𝗉𝗋𝗈𝖻(M_i) proportional to M_i∙, i.e.,𝗉𝗋𝗈𝖻(M_i)≜(M_i∙)^+/∑_t=1^m (M_t∙)^+,where x^+≜max{x,0}. Notice that, sinceour goal is to construct A such that 𝔼[∙ A] is lower bounded by some threshold as stated in Definition <ref>, we should not pick any matrix M_i with M_i∙ < 0. This random sampling procedure is described in Algorithm <ref>, and the properties of the output matrix is summarised in one_sided.Let 0≼ B≼ I andbe symmetric matrices, and ={M_i}_i=1^m be a set ofmatrices such that ∑_i=1^m M_i = I. Then𝚂𝚘𝚕𝚞𝚝𝚒𝚘𝚗𝙴𝚡𝚒𝚜𝚝𝚎𝚗𝚌𝚎(,,) outputs a matrix =∑_i=1^mα_i _i such that the followingholds: * nnz(α)=⌊λ_min()·(^-1)⌋.* ≼, and α_i ≥ 0 for all i.* [∙]≥1/32·λ_min()·( ).one_sided shows that the required matrix A defined in Definition <ref> exists, and can be found byrandom samplingdescribed in Algorithm <ref>. Our key observation is that such matrix A can be constructedbya semi-definite program.Let 0≼ B≼ I,be symmetric matrices, and ={M_i}_i=1^m be a set of matrices such that ∑_i=1^m M_i = I.Let S⊆ [m] be a random set of ⌊λ_min()·(^-1)⌋ coordinates, where every index i is picked with probability 𝗉𝗋𝗈𝖻(M_i). Let ^⋆ be the solution of the following semidefinite programmax_α_i≥0∙(∑_i∈ Sα_i_i) subject to =∑_i∈ Sα_i_i≼Then, we have [∙^⋆] ≥1/32·λ_min()· (). Taking the SDP formuation SDP and the specific constraints of the 's input into account, the next lemmashows that the required matrix used in each iteration of can be computed efficiently by solving a semidefinite program. The Oracle used in Algorithmcan be implemented inÕ((Z+n^ω)·ε^-O(1))workandÕ(ε^-O(1)) depthwhere Z=∑_i=1^mnnz(M_i) is the total number of non-zeros in _i. When the matrix ∑_i=1^m M_i= comes from spectral sparsification of graphs, each iteration ofcan be implemented in Õ(mε^-O(1))workandÕ(ε^-O(1)) depth.Furthermore, the speed of this one-sided oracle is Ω(1) and the error of this one-sided oracle is . Combining reductionlem and oracle_implementation gives us the proof of the main result.oracle_implementation shows that we can constructwith Ω(1) speed anderror that runs in Õ((Z+n^ω)·ε^-O(1))workandÕ(ε^-O(1)) depthfor the matrix setting and Õ(mε^-O(1))workandÕ(ε^-O(1)) depth.for the graph setting.Combining this with reductionlem, which states thatit suffices to call Õ(1/^2) times, the main statements hold. §.§ Further discussion Before presenting a more detailed analysis of our algorithm,we compare our new approach with the previous ones for constructing a linear-sized spectral sparsifier, and see how weaddress the bottlenecks faced in previous constructions. 
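The random-sampling existence argument just described is easy to simulate at toy scale before the discussion continues. A minimal sketch, assuming rank-1 inputs and hypothetical choices of B and C; note that the 1/32 guarantee holds only in expectation, so a single run may fall short of the printed threshold.

import numpy as np

rng = np.random.default_rng(3)
n, m = 4, 60
V = rng.standard_normal((m, n))
R = np.linalg.inv(np.linalg.cholesky(V.T @ V))
Ms = [np.outer(R @ v, R @ v) for v in V]          # rank-1, sum_i M_i = I
C = np.diag([1.0, 0.6, 0.3, -0.2])                # symmetric "reward" matrix
B = np.eye(n)

scores = np.array([max(np.trace(M @ C), 0.0) for M in Ms])
probs = scores / scores.sum()                     # prob(M_i) ~ (M_i . C)^+
T = int(np.floor(np.linalg.eigvalsh(B).min() * np.trace(np.linalg.inv(B))))
u, A = 1.0, np.zeros((n, n))
for _ in range(T):
    Gap = u * B - A
    psi = np.trace(np.linalg.inv(Gap))
    while True:                                   # resample until Delta <= Gap/2
        i = rng.choice(m, p=probs)
        Delta = Ms[i] / (4.0 * psi * probs[i])
        if np.linalg.eigvalsh(Gap / 2.0 - Delta).min() >= -1e-12:
            break
    A += Delta
    u += 1.0 / (psi * np.linalg.eigvalsh(B).min())
A /= u
print("A <= B:", np.linalg.eigvalsh(B - A).min() >= -1e-9)
print("C . A =", np.trace(C @ A), " expected threshold:", np.trace(C) / 32.0)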
Notice that all previous algorithms require super poly-logarithmic number ofiterations, and super linear-time for each iteration. For instance, our previous algorithm <cit.> for constructing a sparsifier with O(pn) edges requires Ω(n^1/p) iterations andΩ(n^1+1/p) time per iteration for the following reasons: * Ω(n^1+1/p) time is needed per iteration: Each iteration takes n/g^Ω(1) time to pick the vector(s) when (ℓ + g) I ≼ A ≼ (u-g) I. To avoid eigenvalues of A getting too close to the boundaryu or ℓ, i.e., g being too small, we choose the potential function whose value dramatically increases when the eigenvalues of A get close u or ℓ. As the cost, we need to scale down the added vectors by a n^1/p factor.* Ω(n^1/p) iterations are needed:By random sampling, we choose O(n^1-1/p) vectors each iteration and use the matrix Chernoff bound to show that the “quality” of addedO(n^1-1/p) vectors is just p=Θ(1) times worse than adding a single vector. Hence, this requires Ω(n^1/p) iterations.In contrast, our new approach breaks these two barriers through the following way: * A “non-linear” step: Instead of rescaling down the vectors we add uniformly, we pick much fewer vectors on the direction that blows up, i.e., we impose the condition Δ≼ (uI-A)^2 instead of Δ≼ 1/p· (uI-A). This allows us to use the new potential function defpotential with form exp(x^-1) to control the eigenvalues in a more aggressive way.* SDP filtering:By matrix Chernoof bound, we know that the probability that we sample a few “bad” vectors is small. Informally,we apply semi-definite programming to filter out those bad vectors, and this allows us to add Ω(n/log^O(1)(n)) vectors in each iteration. § DETAILED ANALYSISIn this section we givedetailed analysis for the statements presented in overview.§.§ Analysis of the potential function Now we analyse the properties of the potential function defpotential, and prove potentialchange.The following two facts from matrix analysis will be used in our analysis. Let A∈ℝ^n× n, U∈ℝ^n× k, C∈ℝ^k× k and V∈ℝ^k× n be matrices. Suppose that A, C and C^-1 + VA^-1U are invertible, it holds that(A+UCV)^-1= A^-1 - A^-1U( C^-1 + VA^-1U )^-1 VA^-1. It holds for any symmetric matrices A and B that(e^A+B) ≤(e^A·e^B ).We analyse the change of Φ_u(·) and Φ_ℓ(·) individually.First of all, notice that(uI-A-Δ)^-1=(uI-A)^-1/2( I-(uI-A)^-1/2Δ(uI-A)^-1/2)^-1(uI-A)^-1/2.We define Π=(uI-A)^-1/2Δ(uI-A)^-1/2. Since 0≼Δ≼δ(uI-A)^2 and u-ℓ≤ 1, it holds thatΠ≼δ (uI-A)≼δ (uI-ℓ I) ≼δ I,and therefore(I-Π)^-1≼ I+ 1/1-δ·Π.Hence, it holds that(uI-A-Δ)^-1 ≼ (uI-A)^-1/2( I+ 1/1-δ·Π)(uI-A)^-1/2 = (uI-A)^-1 + 1/1-δ· (uI-A)^-1Δ(uI-A)^-1.By the fact that exp is monotone and the Golden-Thompson Inequality (Golden), we have thatΦ_u(A+Δ)= exp( (uI-A-Δ)^-1) ≤exp( (uI-A)^-1 + 1/1-δ· (uI-A)^-1Δ(uI-A)^-1) ≤(exp(uI- A)^-1exp(1/1-δ· (uI- A)^-1Δ(uI-A)^-1)).Since 0 ≼Δ≼δ (uI-A)^2 and δ≤ 1/10 by assumption, we have that (uI-A)^-1Δ(uI-A)^-1≼δ I, andexp(1/1-δ· (uI-A)^-1Δ(uI-A)^-1)≼ I+(1+2δ)· (uI-A)^-1Δ(uI-A)^-1.Hence, it holds thatΦ_u(+Δ)≤(^(uI- A)^-1·( I+(1+2δ)(uI-A)^-1Δ(uI-A)^-1) ) =Φ_u()+(1+2δ)·(e^(uI-A )^-1(uI-A)^-2Δ). By the same analysis, we have that Φ_ℓ(+Δ)≤(^(A-ℓ I)^-1·( I-(1-2δ)(A- ℓ I)^-1Δ(A-ℓ I)^-1) ) =Φ_ℓ()-(1-2δ)·(e^(A-ℓ I )^-1(A- ℓ I)^-2Δ).Combining the analysis on Φ_u(A+Δ) and Φ_ℓ(A+Δ) finishes the proof.Let A be a symmetric matrix. Let u,ℓ be the barrier values such that u-ℓ≤1 and ℓ≺≺ u. Assume that 0≤δ_u≤δ·λ_min(uI-A)^2 and 0≤δ_ℓ≤δ·λ_min(A-ℓ I)^2 for δ≤ 1/10. 
Then, it holds that Φ_u+δ_u,ℓ+δ_ℓ()≤ Φ_u,ℓ()- (1-2δ) δ_u·(^(u-)^-1(u-)^-2) +(1+2δ)δ_ℓ·(^(-ℓ)^-1(-ℓ)^-2). Since 0≤δ_u≤δ·λ_min(uI-A)^2 and 0≤δ_ℓ≤δ·λ_min(A-ℓ I)^2,we have that δ_u· I≼δ· (uI-A)^2 and δ_ℓ· I≼δ· (A-ℓ I)^2. The statement follows by a similar analysis for proving potentialchange.§.§ Analysis of the reduction Now we present the detailed analysis for the reduction from a spectral sparsifier to a one-sided oracle. We first analyse Algorithm <ref>, and prove that in expectation the value of the potential function is not increasing. Based on this fact, we will give a proof of reductionlem, which shows that a (1+O(ε))-spectral sparsifier can be constructed by callingO(log^2n/ε^2· S) times. Let A_j and A_j+1 be the matrices constructed by Algorithm <ref> in iteration j and j+1, andassume that 0 ≤ε≤ 1/20. Then, it holds that[Φ_u_j+1,ℓ_j+1(_j+1)]≤Φ_u_j,ℓ_j(_j).By the description of Algorithm <ref> and Definition <ref>, it holds thatΔ_j⊕Δ_j ≼ (u_j-_j)^2⊕(_j-ℓ_j)^2,which implies that Δ_j≼(u_j-_j)^2 and Δ_j≼(_j-ℓ_j)^2. Since u_j-ℓ_j ≤ 1 by the algorithm description and 0≤≤ 1/20, by setting Δ=ε·Δ_j in potentialchange, we have Φ_u_j, ℓ_j (_j + ε·Δ_j)≤Φ_u_j,ℓ_j(_j) + ε (1+2ε)·(e^(u_j-_j)^-1(u_j-_j)^-2Δ_j) - ε (1-2ε)·(e^(_j-ℓ_j)^-1(_j-ℓ_j)^-2Δ_j) = Φ_u_j,ℓ_j(_j)-ε·∙Δ_j.Notice that the matrices {M_i⊕ M_i}_i=1^m as the input of 𝙾𝚛𝚊𝚌𝚕𝚎 always satisfy ∑_i=1^mM_i⊕ M_i=I⊕ I. Using this and the definition of 𝙾𝚛𝚊𝚌𝚕𝚎, we know that[∙Δ_j] ≥ S·λ_min(B_j)· ()-· S·λ_min(B_j)·( ),Let α_j=ε· S·λ_min(_j). Then we have that [Φ_u_j,ℓ_j(_j+1) ]≤Φ_u_j,ℓ_j(_j) - ε·[∙Δ_j]≤Φ_u_j,ℓ_j(_j)+(1+2ε) (1+)·α_j·(e^(u_j-_j)^-1(u_j-_j)^-2)- (1-2ε)(1-)·α_j·(e^(_j-ℓ_j)^-1(_j-ℓ_j)^-2). On the other hand, using that 0≤≤ 1/20, S ≤ 1 and Δ_j≼(u_j-_j), we have thatδ_u,j≤ε·(1+2ε)(1+)/1-4ε·λ_min(u_j-_j)^2≤ 2 ·λ_min(u_j-_j+1)^2andδ_ℓ,j≤ε·(1-2ε)(1-)/1+4ε·λ_min(_j-ℓ_j)^2≤ 2 ·λ_min(_j+1-ℓ_j)^2.Hence, potential2 shows thatΦ_u_j +δ_u,j,ℓ_j+δ_ℓ,j(_j+1) ≤Φ_u_j,ℓ_j(_j+1)- (1 - 4ε)δ_u,j·(e^(u_j-_j+1)^-1(u_j-_j+1)^-2)+ (1 + 4ε)δ_ℓ,j·(e^(_j+1-ℓ_j)^-1(_j+1-ℓ_j)^-2) ≤Φ_u_j,ℓ_j(_j+1) - (1 - 4ε)δ_u,j·(e^(u_j-_j)^-1(u_j-_j)^-2) + (1 + 4ε)δ_ℓ,j·(e^(_j-ℓ_j)^-1(_j-ℓ_j)^-2).By combining echange1, echange2, and setting (1-4ε)δ_u,j=(1+2ε)(1+)α_j,(1+4ε)δ_ℓ,j=(1-2ε)(1-)α_j, we have that 𝔼[Φ_u_j+1,ℓ_j+1(_j+1)]≤Φ_u_j,ℓ_j(_j). We first bound the number of times the algorithm calls the oracle.Notice that Φ_u_0, ℓ_0 = 2·exp((1/4I)^-1) = 2 ^4· n. Hence, by potential_decreasing we have𝔼[ Φ_u_j, ℓ_j (A_j) ]=O(n) for any iteration j. By Markov's inequality, it holds that Φ_u_j, ℓ_j (A_j)=n^O(1) with high probability in n. In the remainder of the proof, we assume that this event occurs.Since _j=(u_j-_j)^2⊕(_j-ℓ_j)^2 by definition, it holds that exp((λ_min(_j ))^-1/2) ≤Φ_u_j, ℓ_j(A_j) = n^O(1),which implies that λ_min( _j )=Ω(log^-2n).On the other hand, in iteration j the gap between u_j and ℓ_j is increased byδ_u,j-δ_ℓ,j=Ω(ε^2· S·λ_min(_j) ).Combining this with lblambdab gives us that δ_u,j-δ_ℓ,j=Ω(ε^2· S/log^2n)for any j. Since u_0-ℓ_0=1/2 and the algorithm terminates once u_j - ℓ_j>1 for some j, with high probability in n, the algorithm terminates in O( log^2 n/ε^2· S) iterations.Next we prove that the number of M_i's involved in the output is at mostO(n/^2· S). By the properties of 𝙾𝚛𝚊𝚌𝚕𝚎, the number of matrices in iteration j is at mostλ_min(B_j)·( B_j^-1). 
Since x^-2≤exp( x^-1) for all x>0, it holds for any iteration j that(_j^-1) =((u_j-_j)^-2) +((_j-ℓ_j)^-2) ≤Φ_u_j,ℓ_j(_j)By gapincrease, we know that for added matrix M_i in iteration j, the average increase of thegap u_j-ℓ_j for each added matrix isΩ(ε^2· S/Φ_u_j,ℓ_j(_j)). Since 𝔼[Φ_u_j, ℓ_j(A_j)]=O(n), for every new added matrix, in expectation the gap between u_j and ℓ_j is increased by Ω(ε^2· S/ n). By the ending condition of the algorithm, i.e., u_j-ℓ_j>1, and Markov's inequality, the number of matrices picked in total is at mostO (n/ε^2· S) with constant probability.Finally we prove that the output is a (1+O(ε))-spectral sparsifier. Since the condition number of the output matrix A_j is at mostu_j/ℓ_j = ( 1 - u_j-ℓ_j/u_j)^-1,it suffices to prove that (u_j-ℓ_j)/u_j=O(ε) and thiseasily follows from the ending condition of the algorithm andδ_u,j -δ_ℓ, j/δ_u,j= O(). §.§ Existence proof forThe property on nnz(α) follows from the algorithm description. For the second property, notice thatevery chosen matrix Δ_j in iteration j satisfiesΔ_j≼1/2(u_j-_j), which implies that_j≼ u_j holds for any iteration j. Hence, =1/u_T_T≼, and α_i ≥ 0 since Ψ_j≥ 0.Now we prove the third statement. Let β=∑_i=1^m (_i∙)^+.Then, foreach matrix _i_j picked in iteration j, ∙_j is increased by∙Δ_j=1/4Ψ_j·𝗉𝗋𝗈𝖻(M_i_j)·∙_i_j=β/4Ψ_j.On the other hand, it holds thatu_T= u_0 + ∑_j=0^T-1δ_j = 1+∑_j=0^T-1( Ψ_j·λ_min(B) )^-1.Hence, we have that∙=1/u_T· C∙(∑_j=0^T-1Δ_j )= ∑_j=0^T-1β·(4Ψ_j)^-1/1+∑_j=0^T-1 (Ψ_j·λ_min())^-1 =βλ_min() /4·∑_j=0^T-1Ψ_j^-1/λ_min() +∑_j=0^T-1Ψ_j^-1≥βλ_min()/4·∑_j=0^T-1(Ψ_j+Ψ_0)^-1/λ_min() +∑_j=0^T-1(Ψ_j+Ψ_0)^-1≥βλ_min() /4·∑_j=0^T-1(Ψ_j+Ψ_0)^-1/λ_min() + T ·Ψ_0^-1≥β/8·∑_j=0^T-1(Ψ_j+Ψ_0)^-1,where the last inequality follows by the choice of T. Hence, it suffices to bound Ψ_j. Since Δ_j≼1/2(u_j-_j)≼1/2(u_j+1-_j), we have that Ψ_j+1≤((u_j+1-_j)^-1)+2·Δ_j∙(u_j+1-_j)^-2.Since (u-_j)^-1 is convex in u, we have that((u_j-_j)^-1) ≥((u_j+1-_j)^-1) + δ_j ((u_j+1-_j)^-2 B)Combining mideq1 and mideq2, we have that Ψ_j+1 ≤((u_jB-A_j)^-1) -δ_j·tr( ( u_j+1 B -A_j)^-2B ) +2·Δ_j∙(u_j+1-_j)^-2 = Ψ_j-δ_j·λ_min(B)·((u_j+1B-A_j)^-2)+2·Δ_j∙(u_j+1-_j)^-2Let ℰ_j be the event that Δ_j≼1/2(u_j-_j).Notice that our picked Δ_j in each iteration always satisfies ℰ_j by algorithm description.Since [Δ_j∙(u_j-_j)^-1]=∑_i: (M_i∙ C)^+>0𝗉𝗋𝗈𝖻(M_i)·1/4Ψ_j·𝗉𝗋𝗈𝖻(M_i)·_i∙(u_j-_j)^-1≤1/4,by Markov inequality it holds that ℙ[ℰ_j]= ℙ[ Δ_j≼1/2(u_j-_j)] ≥1/2,and therefore[Δ_j∙(u_j+1-_j)^-2 | ℰ_j]≤[Δ_j∙(u_j+1-_j)^-2]/ℙ(ℰ_j)≤ 2·[Δ_j∙(u_j+1-_j)^-2]= 1/2Ψ_j·∑_i: (M_i∙ C)^+>0_i∙(u_j+1-_j)^-2≤1/2Ψ_j·(u_j+1-_j)^-2. Combining the inequality above,upphinext, and the fact that every Δ_j picked by the algorithm satisfies ℰ, we have that [ Ψ_j+1]≤Ψ_j+(1/Ψ_j-δ_j·λ_min())·(u_j+1-_j)^-2.By our choice of δ_j, it holds for any iteration j that [ Ψ_j+1] ≤Ψ_j, and [ (Ψ_j+1 + Ψ_0 )^-1]≥(Ψ_j + Ψ_0 )^-1≥1/2·Ψ_0.Combining this with CA_bound, it holds that [∙] ≥β/8∑_j=0^T-1[(Ψ_j+Ψ_0)^-1]≥β/16·T/Ψ_0 = β/16·T/( ^-1)≥() /16·T/( ^-1) .The result follows from the fact that T ≥λ_min() ( ^-1)/2.Using the lemma above, we can prove that suchcan be solved by a semidefinite program.Note that the probability we used in the statement is the same as the probability we used in 𝚂𝚘𝚕𝚞𝚝𝚒𝚘𝚗𝙴𝚡𝚒𝚜𝚝𝚎𝚗𝚌𝚎( ℳ,,). Therefore, Lemma <ref> shows that there is a matrix A of the form ∑_i=1^mα_i_i such that [ ∙] ≥1/32·λ_min()·(),The statement follows by the fact that A^⋆ is the solution of the semidefinite program SDP that maximises ∙. 
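The sampled semidefinite program itself can be solved directly with an off-the-shelf modeling package; this is illustrative only and does not reflect the paper's tailored nearly-linear-time solver. A minimal sketch with cvxpy, on the same kind of hypothetical rank-1 instance as before:

import numpy as np
import cvxpy as cp

rng = np.random.default_rng(4)
n, m, k = 4, 60, 8
V = rng.standard_normal((m, n))
R = np.linalg.inv(np.linalg.cholesky(V.T @ V))
Ms = [np.outer(R @ v, R @ v) for v in V]           # sum_i M_i = I
C = np.diag([1.0, 0.6, 0.3, -0.2])
B = np.eye(n)

scores = np.array([max(np.trace(M @ C), 0.0) for M in Ms])
p = scores / scores.sum()
S = rng.choice(m, size=k, replace=True, p=p)       # sampled support

alpha = cp.Variable(k, nonneg=True)
X = sum(alpha[t] * Ms[i] for t, i in enumerate(S))
prob = cp.Problem(cp.Maximize(cp.trace(C @ X)), [X << B])
prob.solve()
print("C . A* =", prob.value, " threshold (in expectation):", np.trace(C) / 32.0)

The constraint X << B is the one-sided packing condition, and the objective is exactly the quantity that the lemma above lower-bounds in expectation over the sampled support S.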
§.§ Implementing the SDP in nearly-linear time Now, we discuss how to solve the SDP SDP in nearly-linear time. Since this SDP is a packing SDP, it is known how to solve it in nearly-constant depth <cit.>. The following result will be used in our analysis. Given a SDP max_x≥0c^x subject to ∑_i=1^mx_i_i≼with _i≽, ≽ and c∈^m. Suppose that we are given a direct access to the vector c∈^m and an indirect access to _i andvia an oracle 𝒪_L,δ which inputs a vector x∈^m and outputs a vector v∈^m such thatv_i∈(1±δ/2)[_i∙^-1/2exp(L·^-1/2(∑_ix_i_i-)^-1/2)^-1/2]in 𝒲_L,δ work and 𝒟_L,δ depth for any x such that x_i≥0 and ∑_i=1^mx_i_i≼2. Then, we can output x such that [c^x ]≥(1-O(δ))𝖮𝖯𝖳with∑_i=1^mx_i_i≼ inO(𝒲_L,δlog m·log(nm/δ)/δ^3) workand O(𝒟_L,δlog m·log(nm/δ)/δ) depth,where L=(4/δ)·log(nm/δ).Since we are only interested in a fast implementation of the one-sided oracle usedin , it suffices to solve the SDP SDP for this particular situation.Our basic idea is to use solve_SDP as the one-sided oracle. Notice that each iteration ofuses the one-sided oracle with the input =1-2ε/2((-ℓ)^-2exp(-ℓ)^-1⊕(-ℓ)^-2exp(-ℓ)^-1), =1+2ε/2((u-)^-2exp(u-)^-1⊕(u-)^-2exp(u-)^-1),B =(u-)^2⊕(-ℓ)^2,where we drop the subscript j indicating iterations here for simplicity. To apply sdp, we first sample a subset S⊆ [m], then solve the SDPmax_β_i≥0,β_i=0 on i∉ S (-)∙(∑_i=1^mβ_i_i⊕_i) subject to ∑_i=1^mβ_i_i⊕_i≼.By ignoring the matrices _i with i∉S, this SDP is equivalent to the SDP max_β_i≥0c^⊤β subject to ∑_i=1^mβ_i_i⊕_i≼where c_i=(-)∙(_i⊕_i). Now assume that(i) we can approximate c_i with δ(+)∙(_i⊕_i) additive error, and (ii) for any x such that ∑_i=1^mx_i_i⊕_i≼2 and L =Õ(1/δ), we can approximate(_i⊕_i)∙^-1/2exp(L·^-1/2(∑_ix_i_i⊕_i-)^-1/2)^-1/2with 1±δ multiplicative error. Then, by solve_SDP we can find a vector β such that[ c^β]≥(1-O(δ))𝖮𝖯𝖳-δ∑_i=1^mβ_i(+)∙(_i⊕_i)≥(1-O(δ))𝖮𝖯𝖳-δ∑_i=1^mβ_i(+)∙≥𝖮𝖯𝖳-O(δ)(+)∙.where we used that _i⊕_i≼ and 𝖮𝖯𝖳≤(+)∙. Since u-ℓ≤1, we have that B≼ I⊕ I and hence[ c^β]≥𝖮𝖯𝖳-O(δ)·(+)≥1/32λ_min()·(()-O(δlog^2n)·(+))where we apply sdp and (<ref>) at the last line. Therefore, this gives an oracle with speed 1/32and ε error by setting δ=ε/log^2n.The problem of approximating sample probabilities, {c_i}, as well as implementing the oracle 𝒪_L,δ is similar with approximating leverage scores <cit.>, and relative leverage scores <cit.>. All these referencesusethe Johnson-Lindenstrauss lemmato reduce the problem of approximating matrix dot product or trace to matrix vector multiplication. The only difference is that, instead of computing (-ℓ)^-(q+1)x and (u-)^-(q+1)x for a given vector x in other references, we compute (-ℓ)^-2exp(-ℓ)^-1x and(u-)^-2exp(u-)^-1x. These can be approximated by Taylor expansion and the number of terms required for Taylor expansion depends on how close the eigenvalues of A are to the boundary (u or ℓ). In particular, we show in taylor that Õ(1/g^2) terms in Taylor expansion suffices, where the gap g is the largest number such that (ℓ +g)I ≼ A ≼ (u - g)I. Since 1/g^2 = Õ(1) by (<ref>), each iteration can be implemented via solving Õ(1/^O(1))linear systems and Õ(1/^O(1))matrix vector multiplication. For the matrices coming from graph sparsification, this can be done in nearly-linear work and nearly-constant depth <cit.>. For general matrices, this can be done in input sparsity time and nearly-constant depth <cit.>. 
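The Taylor truncation used by the oracle can be sanity-checked numerically; the appendix lemma below makes the required degree precise. A minimal sketch for the scalar function f(x) = x^{-2} exp(1/x), using the identity f = -g' with g(x) = exp(1/x), whose ODE x^2 g' + g = 0 yields a recurrence for the Taylor coefficients around 1:

import numpy as np

def f(x):
    # f(x) = x^{-2} exp(1/x), the function applied to the barrier gaps.
    return np.exp(1.0 / x) / x**2

def taylor_f_at(x, d):
    # Degree-d Taylor approximation of f around 1.  The coefficients
    # c_k = g^{(k)}(1)/k! of g(x) = exp(1/x) satisfy, from x^2 g' + g = 0:
    # c_{n+1} = -((2n+1) c_n + (n-1) c_{n-1}) / (n+1).
    c = np.empty(d + 2)
    c[0], c[1] = np.e, -np.e
    for nn in range(1, d + 1):
        c[nn + 1] = -((2 * nn + 1) * c[nn] + (nn - 1) * c[nn - 1]) / (nn + 1)
    # f(x) = -g'(x) = -sum_{k>=1} k c_k (x-1)^{k-1}
    ks = np.arange(1, d + 2)
    return float(np.sum(-ks * c[1:] * (x - 1.0) ** (ks - 1)))

for x in [0.8, 0.5, 0.3]:
    for d in [10, 50, 200]:
        print(f"x={x}, d={d:3d}: |error| = {abs(taylor_f_at(x, d) - f(x)):.3e}")

As predicted, the truncation degree needed for a fixed accuracy grows as the gap x between the spectrum and the barrier shrinks; in the matrix setting the same series is applied through repeated matrix-vector products with (uI-A) and its inverse.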
§.§ Taylor Expansion of x^-2exp(x^-1)Suppose f is holomorphic on a neighborhood of the ball B≜{z∈ℂ : |z-s|≤ r}, then we have that|f^(k)(s)|≤k!/r^ksup_z∈ B|f(z)|.Let f(x)=x^-2exp(x^-1). For any 0<x≤ 1, we have that|f(x)-∑_k=0^d1/k!f^(k)(1)(x-1)^k|≤8(d+1)e^5/x-xd.In particular, if d≥c/x^2log(1/xε) for some large enough universal constant c, we have that|f(x)-∑_k=0^d1/k!f^(k)(1)(x-1)^k|≤ε.By the formula of the remainder term in Taylor series, we have thatf(x)=∑_k=0^d1/k!f^(k)(1)(x-1)^k+1/d!∫_1^xf^(d+1)(s)(x-s)^dds.For any s∈[x,1], we define D(s)={z∈ℂ : |z-s|≤ s-x/2}. Since |f(z)|≤(x/2)^-2exp(2/x) on z∈ D(s), Cauchy's estimates (Theorem <ref>) shows that|f^(d+1)(s)|≤(d+1)!/(s-x/2)^d+1sup_z∈ B(s)|f(z)|≤(d+1)!/(s-x/2)^d+14/x^2exp(2/x).Hence, we have that|f(x)-∑_k=0^d1/k!f^(k)(t)(x-t)^k|≤ 1/d!|∫_1^x(d+1)!/(s-x/2)^d+14/x^2exp(2/x)(x-s)^dds|=4(d+1)e^2/x/x^2∫_x^1(s-x)^d/(s-x/2)^d+1ds ≤ 8(d+1)e^2/x/x^3∫_x^1(1-x)^dds ≤8(d+1)·e^5/x-xd. § ACKNOWLEDGEMENTThe authorswould like to thank Michael Cohen for helpful discussions and suggesting the ideas to improve the sparsity from O(n/^3) to O(n/^2). § EXPECTATION BOUND IMPLIES HIGH PROBABILITY BOUND (I AM STILL WRITING) In this section, we assume thatGiven ∑_i=1^mv_iv_i^⊤= and _1,_2,_3,⋯,_k. Let S=∑_imax_jv_i^⊤_j^-1v_i and β=min_iλ_min_i. The algorithm 𝚂𝚙𝚊𝚛𝚜𝚒𝚏𝚢(v_i) outputs α_i such that (α)≤ c_1β S. Also, the matrix Δ=∑_i=1^mα_iv_iv_i^⊤ satisfies * c_2^-1β≼Δ≼ c_2β* Δ≼1/2_i for all i=1,2,⋯,k. Note that if we sample v_iv_i^⊤ with probability max_jv_i^⊤_j^-1v_i/∑_imax_jv_i^⊤_j^-1v_i and reweigh them by 1/max_jv_i^⊤_j^-1v_i. Then, we have that 𝔼sampled=/S. Hence, with β S sample, we have ∑sampled∼β. Next, we note thatλ_max(^-1/2sampled^-1/2)≤1.The chernoff bound gives∑sampled≼βλ_max(^-1)if βλ_max(^-1)≥1. In particular, we can pickβ=1/λ_max(^-1)=λ_min().This explains the statement up to Õ(1).Recall the lemmaSuppose that Δ≼δ(u-) and Δ≼δ(-ℓ), then we have thatΦ_u,ℓ(+Δ)≤Φ_u,ℓ()+1/1-δ⟨ (u-)^-2,Δ⟩ -1/1+δ⟨ (-ℓ)^-2,Δ⟩Although it seems that Δ≼δ(-ℓ) is not necessary, it is. In particular, if Δ is very large with small probability and Δ is 0 otherwise. Then, Δ is not useful and we are not making too much progress. Hence, both condition is important.Now, we use the algorithm 𝚂𝚙𝚊𝚛𝚜𝚒𝚏𝚢(v_i). Hence, we have thatc_2^-1β≼Δ≼ c_2βand Δ≼1/2(u-) and Δ≼1/2(-ℓ). Hence, we have thatΦ_u,ℓ(+Δ)≤ Φ_u,ℓ()+2⟨ (u-)^-2,Δ⟩ -2/3⟨ (-ℓ)^-2,Δ⟩≤ Φ_u,ℓ()+2c_2β(u-)^-2-2/3c_2^-1β(-ℓ)^-2.Assuming Φ_u,ℓ()≤1/3, we have thatΦ_u+u',ℓ+ℓ'(+Δ)≤ Φ_u,ℓ(+Δ)-2u'/3(u--Δ)^-2+2ℓ'(+Δ-ℓ)^-2≤ Φ_u,ℓ(+Δ)-2u'/3(u-)^-2+4ℓ'(-ℓ)^-2.Hence, by picking 2u'/3=2c_2β and 4ℓ'=2/3c_2^-1β, we have that the potential function is decreasing.Hence, we have thatu'=3c_2β and ℓ'=2/12c_2^-1β.Hence, if δ is small and c_2 close to 1, the increase of u and ℓ is very close.Next, we note that whenever we add β S many vectors,is increased by Δ∼β. In general, we need S many vectors to increaseby identity. Now, we need to increaseby n. Hence, we need n many vectors.Given the algorithm 𝚂𝚙𝚊𝚛𝚜𝚒𝚏𝚢(v_i), we can give you a two sided algorithm. Also, if c_2∼1, then we can get a better sparsifier.Note that there is no way we can make it better if c_2 is not close to 1. § HOW TO GET EXPECTATION BOUND? (I AM STILL WRITING) Now, we look at this again. We want to achieveGiven ∑_i=1^mv_iv_i^⊤= and _1,_2,_3,⋯,_k. Let S=∑_imax_jv_i^⊤_j^-1v_i and β=min_iλ_min_i. The algorithm 𝚂𝚙𝚊𝚛𝚜𝚒𝚏𝚢(v_i) outputs α_i such that (α)≤ c_1β S. 
Also, the matrix Δ=∑_i=1^mα_iv_iv_i^⊤ satisfiesc_2^-1β≼Δ≼ c_2βΔ≼_i for all i=1,2,⋯,k.————————Okay, every term here is important...———————–I believe if I can handle one term, I can handle all termsIn that case we have——————-Given ∑_i=1^mv_iv_i^⊤= and . The algorithm outputs the matrix Δ=∑_i=1^mα_iv_iv_i^⊤ such that * (α)≤ c_1(λ_min)^-1* c_2^-1(λ_min)≼Δ≼ c_2(λ_min)* Δ≼——————————“§ NEW ANALYSIS We look at the following problem without loss of generality: given ∑_i=1^m v_iv_i^⊤=𝐈 for m=Θ(nlog n),find scalers {α_i}_i=1^m with Θ( n/log^cn) non-zeros, for some constant c, such that the matrix Δ=∑_i=1^m α_i v_iv_i^⊤ satisfies the following conditions: * (1-ε)ε·/Θ(polylog n)≼𝔼Δ≼ (1+ε)ε·/Θ(polylog n);* Δ≼ε. We sample every vector v_i with probability 1/log^cn, and defineΔ=∑_v_iε· v_iv_i^⊤. It is straightforward to see thatΔ≼ε∑_i=1^m v_iv_i^⊤=ε·.Moreover,𝔼Δ=∑_i=1^m 1/log^c n·ε v_iv_i^⊤ = ε/log^cn·. § NEW SECTION 1 In this section we present the One-sided BSS problem, and show how such problem can be solved by SDP. Let{M_i}_i=1^m be m matrices,such that M_i≽0 and ∑_i=1^m M_i=I. The goal is to find {α_i}_i=1^m such that the matrix A=∑_i=1^m α_i M_i satisfies the following conditions: * A≼ (1+O(1))I;* (A)=nnz(α).Our algorithm proceeds for T iterations, and there is a parameter u_j associated with every iteration j. The formal description of our algorithm is as follows:Let A=∑_i=1^mα_iM_i be the output of 𝖲𝗂𝗆𝗉𝗅𝖾𝖡𝖲𝖲({M_i}, T) with ∑_i=1^m M_i=I. Then, the following two statements hold: * nnz(α)≤(A)=T;* A≼(u_0+ 4/1-4/u_0T/n)I. In particular, A≼(1+√(T/n))^2 I, if we set u_0=4(1+√(T/n));* Every sampled matrix Δ in round j satisfies Δ∙ (u_jI-A_j-1)^-2≤ 4/n·(u_jI-A_j-1)^-2 and Δ≼ 4/u_0· (u_jI-A_j-1) with probability at least 1/2. By the algorithm description, the resulting matrix A consists of T sampledmatrices, and the trace of every added matrix Δ equals to 1,hence (A)=T. Since there are at most T different sampled matrices, it holds that nnz(α)≤(A)=T.For proving the second statement, we define a function Ψ_j=(u_jI-A_j)^-1. By the description of the algorithm, we know that every picked matrix Δ in iteration j satisfies Δ≼ 4/u_0(u_jI-A_j-1). By the Woodbury matrix identity and the assumption that Δ≼ 4/u_0· (u_j-A_j-1), it holds thatΨ_j= (u_jI-A_j)^-1 =(u_jI- A_j-1 -Δ)^-1≤(u_jI- A_j-1)^-1 +1/1-4/u_0Δ∙(u_jI-A_j-1)^-2≤(u_j-1I- A_j-1)^-1 + 1/1-4/u_0Δ∙(u_jI-A_j-1)^-2- 4/1-4/u_01/n(u_j I - A_j-1)^-2,where the last inequality holds from the fact that the following functionf(t)≜((u_j-1 + t·4/1-4/u_01/n)I - A_j-1)^-1is convex anddf(t)/dt|_t=1≥ f(1)-f(0) = (u_jI-A_j-1)^-1 - (u_j-1I-A_j-1)^-1.Now we apply the assumption that Δ∙ (u_jI-A_j-1)^-2≤ 4/n·(u_jI-A_j-1)^-2, and obtainΨ_j ≤(u_j-1I- A_j-1)^-1 + 1/1-4/u_0·4/n(u_jI-A_j-1)^-2- 4/1-4/u_01/n(u_j I - A_j-1)^-2 = Ψ_j-1.Hence, by induction we have that Ψ_j ≤Ψ_0=n/u_0. This implies that, for any iteration j, it holds that λ_min(u_jI-A_j) ≥ u_0/n, and it holds for the output matrix A after T iterations thatλ_max(A)≤ u_0+ 4/1-4/u_0T/n-u_0/n≤ u_0+ 4/1-4/u_0T/n,which implies thatA≼(u_0 + 4/1-4/u_0T/n) I.By setting u_0=4(1+√(T/n)), we have A≼(1+√(T/n))^2 I by direct calculation.Now for the third statement. Let j≥ 0 be any fixed iteration. Notice thatΔ= ∑_i=1^m p_i Δ_i = ∑_i=1^m (M_i)/nM_i/(M_i) = 1/n· I,and Δ∙ (u_jI-A_j-1)^-2 = 1/n· (u_jI-A_j-1)^-2. By Marokv's Inequality, it holds with probability at least 3/4 that Δ∙ (u_jI-A_j-1)^-2≤4/n· (u_jI-A_j-1)^-2. 
Also, notice that

E[tr(Δ(u_jI-A_{j-1})^{-1})] = (1/n)·tr((u_jI-A_{j-1})^{-1}) ≤ (1/n)·Ψ_j ≤ 1/u_0,

and the condition Δ ≼ (4/u_0)·(u_jI-A_{j-1}) holds if and only if tr(Δ·(u_jI-A_{j-1})^{-1}) ≤ 4/u_0. By Markov's inequality, a randomly sampled matrix Δ satisfies

Δ ≼ (4/u_0)·(u_jI-A_{j-1})

with probability at least 3/4. Hence, every sampled matrix satisfies the required two conditions with probability at least 1/2.

The lemma above shows that, by sampling O(T) matrices with probability p_i, with high probability we can construct a matrix A from sampled matrices with the desired properties. The following theorem shows that our final matrix A can be constructed by semidefinite programming. Let S be a random set of O(T) coordinates sampled from [m], where each coordinate i is picked with probability p_i. Let A=∑_{i∈S}α_i M_i be the solution of the following semidefinite program:

min_{α≥0} tr(∑_{i∈S}α_i M_i) subject to ∑_{i∈S}α_i M_i ≼ 4(1+√(T/n))^2 I.

Then, with high probability in T we have that tr(A) ≤ 4T.
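The SimpleBSS procedure analysed in this draft section can be simulated directly at toy scale. A minimal sketch with hypothetical rank-1 inputs; the two acceptance tests are exactly the events bounded in the lemma, and the printed bound is the first (verified) claim of that lemma.

import numpy as np

rng = np.random.default_rng(5)
n, m = 4, 60
V = rng.standard_normal((m, n))
R = np.linalg.inv(np.linalg.cholesky(V.T @ V))
Ms = [np.outer(R @ v, R @ v) for v in V]            # sum_i M_i = I
p = np.array([np.trace(M) for M in Ms]) / n          # sampling distribution
T = 3 * n
u0 = 4.0 * (1.0 + np.sqrt(T / n))
u, A = u0, np.zeros((n, n))
for _ in range(T):
    Gap = u * np.eye(n) - A
    Ginv = np.linalg.inv(Gap)
    while True:
        i = rng.choice(m, p=p)
        D = Ms[i] / np.trace(Ms[i])                  # trace-1 candidate
        ok1 = np.trace(D @ Ginv @ Ginv) <= 4.0 / n * np.trace(Ginv @ Ginv)
        ok2 = np.trace(D @ Ginv) <= 4.0 / u0         # D <= (4/u0) Gap, rank-1 case
        if ok1 and ok2:
            break
    A += D
    u += 4.0 / (1.0 - 4.0 / u0) / n
print("trace(A) =", np.trace(A), "(= T)")
print("lambda_max(A) =", np.linalg.eigvalsh(A).max(),
      " bound:", u0 + 4.0 / (1.0 - 4.0 / u0) * T / n)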
http://arxiv.org/abs/1702.08415v1
{ "authors": [ "Yin Tat Lee", "He Sun" ], "categories": [ "cs.DS", "cs.LG" ], "primary_category": "cs.DS", "published": "20170227182328", "title": "An SDP-Based Algorithm for Linear-Sized Spectral Sparsification" }
http://arxiv.org/abs/1702.07846v3
{ "authors": [ "J. Stolze", "A. I. Zenchuk" ], "categories": [ "quant-ph" ], "primary_category": "quant-ph", "published": "20170225072113", "title": "Two-channel spin-chain communication line and simple quantum gates" }
Image Stitching by Line-guided Local Warping with Global Similarity Constraint
Tianzhu Xiang^1, Gui-Song Xia^1, Xiang Bai^2, Liangpei Zhang^1
^1State Key Lab. LIESMARS, Wuhan University, Wuhan, China. ^2Electronic Information School, Huazhong University of Science and Technology, China.
=========================================================================================================================================================================================================================

Consider reconstructing a signal x by minimizing a weighted sum of a convex differentiable NLL (data-fidelity) term and a convex regularization term that imposes a convex-set constraint on x and enforces its sparsity using ℓ_1-norm analysis regularization. We compute upper bounds on the regularization tuning constant beyond which the regularization term overwhelmingly dominates the NLL term so that the set of minimum points of the objective function does not change. Necessary and sufficient conditions for irrelevance of sparse signal regularization and a condition for the existence of finite upper bounds are established. We formulate an optimization problem for finding these bounds when the regularization term can be globally minimized by a feasible x, and also develop an ADMM-type method for their computation. Simulation examples show that the derived and empirical bounds match.

§ INTRODUCTION

Selection of the regularization tuning constant u > 0 in convex Tikhonov-type <cit.> penalized NLL minimization

f_u(x) = L(x) + u·r(x)

is a challenging problem critical for obtaining accurate estimates of the signal x <cit.>. Too little regularization leads to unstable reconstructions with large noise and artifacts due to, for example, aliasing. With too much regularization, the reconstructions are too smooth and often degenerate to constant signals. Finding bounds on the regularization constant u or finding conditions for the irrelevance of signal regularization has received little attention. In this paper, we determine upper bounds on u beyond which the regularization term r(x) overwhelmingly dominates the NLL term L(x) in (<ref>) so that the minima of the objective function f_u(x) do not change. For a linear measurement model with white Gaussian noise and ℓ_1-norm regularization, a closed-form expression for such a bound is determined in <cit.>; see also Example <ref>. The obtained bounds can be used to design continuation procedures <cit.> that gradually decrease u from a large starting point down to the desired value, which improves the numerical stability and convergence speed of the resulting minimization algorithm by taking advantage of the fact that penalized NLL schemes converge faster for smoother problems with larger u <cit.>. In some scenarios, users can monitor the reconstructions as u decreases and terminate when the result is satisfactory.

Consider a convex NLL L(x) and a regularization term

r(x) = 𝕀_C(x) + ‖Ψ^H x‖_1

that imposes a convex-set constraint on x, x ∈ C ⊆ ℝ^p, and sparsity of an appropriately linearly transformed x, where Ψ ∈ ℂ^{p×p'} is a known sparsifying dictionary matrix. Assume that the NLL L(x) is differentiable and lower bounded within the closed convex set C, and satisfies dom L(·) ⊇ C, which ensures that L(x) is computable for all x ∈ C. Define the convex sets of solutions to min_x f_u(x), min_x r(x), and min_{x∈Q} L(x):[The use of "≤" in the definitions of Q and 𝒳^♢ in (<ref>) and (<ref>) makes it easier to identify both as convex sets.]

𝒳_u ≜ {x | f_u(x) = min_x̄ f_u(x̄)},
Q ≜ {x | r(x) = min_x̄ r(x̄)} = {x ∈ C | ‖Ψ^H x‖_1 ≤ min_{x̄∈C} ‖Ψ^H x̄‖_1},
𝒳^♢ ≜ {x ∈ Q | L(x) ≤ min_{x̄∈Q} L(x̄)} ≠ ∅,

where the existence of x^♢ is ensured by the assumption that L(x) is lower bounded in C.

We review the notation: "^*", "^T", "^H", "^+", ‖·‖_p, |·|, ⊗, "≽", "≼", I_N, 1_{N×1}, and 0_{N×1} denote complex conjugation, transpose, Hermitian transpose, Moore-Penrose matrix inverse, the ℓ_p-norm over the complex vector space ℂ^N defined by ‖z‖_p^p = ∑_{i=1}^N |z_i|^p for z = (z_i) ∈ ℂ^N, absolute value, Kronecker product, elementwise versions of "≥" and "≤", the identity matrix of size N, and the N×1 vectors of ones and zeros, respectively (replaced by I, 1, and 0 when the dimensions can be inferred). 𝕀_C(x) = {0 for x ∈ C; +∞ otherwise}, P_C(y) = arg min_{x∈C} ‖y−x‖_2^2, and exp_∘ denote the indicator function, the projection onto C, and the elementwise exponential function: [exp_∘(a)]_i = exp a_i. Denote by 𝒩(A) and ℛ(A) the null space and range (column space) of a matrix A. These vector spaces are real or complex depending on whether A is a real- or complex-valued matrix. For a set S of complex vectors of size p, define Re S ≜ {a ∈ ℝ^p | a + ȷb ∈ S for some b ∈ ℝ^p} and S ∩ ℝ^p ≜ {a ∈ ℝ^p | a + ȷ0 ∈ S}, where ȷ = √(−1). For A ∈ ℂ^{M×N},

𝒩(A^H) ∩ ℝ^M = 𝒩(Ā^T), Re ℛ(A) = ℛ(Ā)

are the real null space and range of Ā^T and Ā, respectively, where

Ā ≜ [Re A, Im A] ∈ ℝ^{M×2N}.

If A in (<ref>) has full row rank, we can define

A^♯ ≜ A^H (A A^H)^{−1},

which reduces to A^+ for real-valued A. The following are equivalent: Re ℛ(Ψ) = ℝ^p, 𝒩(Ψ^H) ∩ ℝ^p = {0}, and d = p, where

d ≜ dim Re ℛ(Ψ) ≤ min(p, 2p').

We can decompose Ψ as

Ψ = F Z,

where F ∈ ℝ^{p×d} and Z ∈ ℂ^{d×p'} with rank F = d and rank Z̄ = d; Z̄ = [Re Z, Im Z] ∈ ℝ^{d×2p'}, consistent with the notation in (<ref>). Here, ℛ(F) denotes the real range of the real-valued matrix F. Clearly, d ≥ 1 is of interest; otherwise Ψ = 0. Observe that (see (<ref>))

Re(Ψ Z^♯) = F, ℛ(F) = Re ℛ(Ψ).

The subdifferential of the indicator function N_C(x) = ∂𝕀_C(x) is the normal cone to C at x <cit.> and, by the definition of a cone, satisfies N_C(x) = a·N_C(x) for any a > 0. Define

G(s) ≜ { s/|s| if s ≠ 0; {w ∈ ℂ | |w| ≤ 1} if s = 0 }

and its elementwise extension G(z) for vector arguments z, which can be interpreted as twice the Wirtinger subdifferential of ‖z‖_1 with respect to z <cit.>. Note that Re[z^H G(z)] = ‖z‖_1 and, when z is a real vector, G(z) is the subdifferential of ‖z‖_1 with respect to z <cit.>. For Ψ ∈ ℂ^{p×p'} and x ∈ ℝ^p, the subdifferential of ‖Ψ^H x‖_1 with respect to x is

∂_x ‖Ψ^H x‖_1 = Re[Ψ G(Ψ^H x)].

(<ref>) follows from

∂_x |ψ_j^H x| = Re[ψ_j G(ψ_j^H x)],

where ψ_j is the jth column of Ψ. We obtain (<ref>) by replacing the linear transform matrix in <cit.> with ψ_j ψ_j^T.

We now use Lemma <ref> to formulate the necessary and sufficient conditions for x ∈ 𝒳_u:

0 ∈ u·Re[Ψ G(Ψ^H x)] + ∇L(x) + N_C(x),

and for x ∈ Q:

0 ∈ Re[Ψ G(Ψ^H x)] + N_C(x),

respectively. When the signal vector x = vec(X) corresponds to an image X ∈ ℝ^{J×K}, its isotropic and anisotropic TV regularizations correspond to <cit.>

Ψ = Ψ_v + ȷΨ_h ∈ ℂ^{JK×JK} (isotropic),
Ψ = [Ψ_v, Ψ_h] ∈ ℝ^{JK×2JK} (anisotropic),

respectively, where Ψ_v = I_K ⊗ D^T(J) and Ψ_h = D^T(K) ⊗ I_J are the vertical and horizontal difference matrices (similar to those in <cit.>), and

D(L) ≜ [ 1 -1; 1 -1; ⋱ ⋱; 1 -1; 0 0 ⋯ 0 0 ] ∈ ℝ^{L×L}

is obtained by appending an all-zero row from below to the (L−1)×L upper-trapezoidal matrix with first row (1, −1, 0, …, 0); note that D(1)=0. Here, d = JK−1 and

𝒩(Ψ^H) ∩ ℝ^{JK} = ℛ(1)

for both the isotropic and anisotropic TV regularizations. (A short numerical check of these difference operators is given below.)
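The structure of these TV operators is easy to verify numerically. A minimal numpy sketch of the anisotropic case on a hypothetical 3×4 image (the isotropic operator only differs by combining Ψ_v and Ψ_h into one complex matrix):

import numpy as np

def D(L):
    # (L x L) difference matrix: rows (1, -1) on the two main diagonals,
    # with the last row set to zero, so that D(1) = 0.
    M = np.eye(L) - np.eye(L, k=1)
    M[-1, :] = 0.0
    return M

J, K = 3, 4
Psi_v = np.kron(np.eye(K), D(J).T)          # vertical differences
Psi_h = np.kron(D(K).T, np.eye(J))          # horizontal differences
Psi = np.hstack([Psi_v, Psi_h])             # anisotropic TV, JK x 2JK

# d = rank(Psi) = JK - 1, and the null space of Psi^T is spanned by the
# all-ones vector: constant images are exactly the signals with zero TV.
print(np.linalg.matrix_rank(Psi), J * K - 1)        # 11 11
print(np.allclose(Psi.T @ np.ones(J * K), 0.0))     # True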
Define the convex sets of solutions to min_ f_u(), min_ r(), and min_∈ Q():[The use of“≤” in the definitions of Q and ^♢in (<ref>) and (<ref>) makes it easier to identify both as convex sets.]rCl_u|f_u()=min_f_u()Q |r() = min_r()= ∈C| Ψ^H _1 ≤min_∈C Ψ^H _1 ^♢∈Q |() ≤min_∈Q ()≠∅where the existence of ^♢ is ensured by the assumption that() is lower bounded in C. We review the notation: “^*”, “^T”, “^H”, “^+”,·_p, ·, ⊗, “≽”, “≼”,I_N, 1_N × 1, and 0_N×1 denote complexconjugation, transpose, Hermitian transpose, Moore-Penrose matrix inverse,ℓ_p-norm over the complex vector space C^N defined by_p^p = ∑_i=1^N |z_i|^p for = (z_i) ∈C^N,absolute value, Kronecker product, elementwise versions of “≥” and“≤”, the identity matrix of size N and the N × 1 vectors ofones and zeros, respectively (replaced by I,1, and 0 when thedimensions can be inferred).𝕀_C() =0,∈ C+∞,otherwise, C = min_∈ C-_2^2, and exp_∘ denote the indicator function, projection onto C, and the elementwiseexponential function: exp_∘_i= exp a_i.Denote by 𝒩(A) and ℛ(A) the null space and range(column space)of a matrix A. These vector spaces are real or complexdepending on whether A is a real- or complex-valued matrix. For a set S of complex vectors of size p, define S∈R^p |+ ∈ Sfor some ∈R^p and S∩R^p∈R^p |+0∈ S, where =√(-1). For A ∈C^M × N,c'c𝒩(A^H) ∩R^M = 𝒩 A^T ,ℛ(A) =ℛ( A ) are the real null space and range of A^T andA, respectively, where cA AA∈R^M×2 N.If A in (<ref>)has full row rank, we can define c A^A^H (A A^H)^-1 which reduces to A^+ for real-valued A. The following are equivalent: (ℛΨ)=R^p, 𝒩(Ψ^H) ∩R^p={0}, and d=p, wherecd ℛ(Ψ) ≤min(p,2p').We can decompose Ψ as c Ψ= F Z where F ∈R^p × d andZ ∈C^d× p' with F = d and Z = d;Z =Z Z∈R^d× 2 p', consistent with the notation in (<ref>). Here, ℛF denotes the real range of the real-valuedmatrix F. Clearly, d≥ 1 is of interest; otherwise Ψ=0. Observe that(see (<ref>)) rCl(ΨZ^) = F ℛF = (ℛΨ).The subdifferential of the indicator functionN_C()=∂𝕀_C() is the normal cone to C at <cit.> and, by the definition of a cone, satisfies c"sN_C()=aN_C(), for any a>0.Define cG(s) s/|s|, s≠0w∈C ||w|≤1, s=0 and its elementwise extension G() for vector arguments , whichcan be interpreted as twice the Wirtinger subdifferential of _1with respect to<cit.>.Note that^HG()=_1, and, whenis a real vector,G() is the subdifferential of _1 withrespect to<cit.>. For Ψ∈C^p× p' and ∈R^p, thesubdifferential of Ψ^H_1 with respect tois c∂_Ψ^H _1 = ΨGΨ^H.(<ref>) follows from rCl∂_|_j^H| = _j G(_j^H)where _j is the jth column of Ψ.We obtain(<ref>) by replacing the linear transform matrix in<cit.> with _j_j^T.We now use Lemma <ref> to formulate the necessary and sufficient conditions for∈_u:rCl0∈u ΨGΨ^H+ ∇( ) + N_C( )and ∈ Q: rCl0 ∈ΨGΨ^H +N_Crespectively. When the signal vector =X corresponds to an image X ∈R^J× K, its isotropic and anisotropic TVregularizations correspond to <cit.> rCl"s"Ψ = Ψ_v+Ψ_h∈C^JK×JK(isotropic) Ψ = [Ψ_vΨ_h]∈R^JK×2JK (anisotropic)respectively, where Ψ_v=I_K⊗ D^T(J) and Ψ_h=D^T(K)⊗ I_J are the vertical and horizontal difference matrices (similar to those in <cit.>),and cD(L)[r] 1-1 1 -1⋱ ⋱1-10 0 ⋯0 0 ∈R^L×Lobtained by appending an all-zero row from below to the (L-1) × L upper-trapezoidal matrix with first row1,-1,0,…,0;note that D(1)=0. Here, d = JK-1 and c𝒩(Ψ^H)=ℛ(1)for both the isotropic and anisotropic TV regularizations. 
The scenario where rCl𝒩(Ψ^H) ∩C ≠∅ holds is of practical interest: then Q = 𝒩(Ψ^H) ∩ C and^♢∈^♢ globally minimize the regularization term:r^♢=0.If (<ref>) holds and ^♢∈^♢, then G(Ψ^H^♢)=H, where cH ∈C^p' ×1 |_∞≤1.If, in addition to (<ref>), * d=p, then^♢=Q={0};* 𝒩(Ψ^H) ∩R^p=ℛ(1),then Q= ℛ(1) ∩ C and ^♢∈^♢ are constant signals of theform ^♢ = 1 x_0^♢, x_0^♢∈R. In Section <ref>, we define and explain an upper bound U onuseful regularization constants u and establish conditions under which signalsparsity regularization is irrelevant and finite U does notexist. We then present an optimization problem for finding U when(<ref>) holds (Section <ref>), develop a generalnumerical method for computing bounds U (Section <ref>), presentnumerical examples (Section <ref>), and make concluding remarks(Section <ref>).§ UPPER BOUND DEFINITION AND PROPERTIESDefine cU infu ≥0_u ∩Q ≠∅ .If _u ∩ Q = ∅ for all u, then finite U does notexist, which we denote by U=+∞. We now show that, if u ≥ U, then the the set of minimum points _u of the objective function does notchange.* For any u, _u∩ Q=^♢ if and only if _u∩Q≠∅. *Assuming _U∩ Q ≠∅ for some U ≥ 0, _u =^♢ for u > U.We first prove <ref>. Necessity follows by the existence of ^♢; see (<ref>). We argue sufficiency by contradiction.Consider any_u∈_u∩ Q; i.e., _u minimizes both f_u() andr().If _u ∉^♢, there exists a ∈^♢ with ()<(_u) that, by the definition of^♢, also minimizes r().Therefore,f_u()=()+ur()<f_u(_u), which contradicts the assumption_u∈_u.Therefore, _u∩ Q⊆^♢.Ifthere exists a ∈^♢⊆ Q such that∉_u, then f_u()>f_u(_u) which, since bothand_u are in Q, implies that ()>(_u) and contradictsthe definition of ^♢.Therefore,^♢⊆_u.We now prove <ref>. By <ref>, _U∩Q=^♢, which confirms <ref> for u=U. Consider now u>U, a ∈_U∩ Q = ^♢, and any ∈_u. Then, rCl ()+Ur() ≥ ()+Ur()()+ur() ≥ ()+ur(). By summing the two inequalities in (<ref>) andrearranging, we obtain r() ≥ r().Since ∈ Q,isalso in Q; i.e., _u⊆ Q, which implies_u=^♢ by <ref>. As u increases, _u moves gradually towards Q and, according to thedefinition (<ref>), _u and Q do not intersect when u<U. Once u=U, the intersection of the two sets is ^♢, and, by Remark <ref><ref>, _u = ^♢ for all u > U. §.§ Irrelevant Signal Sparsity RegularizationThe following claims are equivalent: *^♢∩_0≠∅; i.e., there exists an^♢∈^♢ such that rCl0∈∇^♢+N_C^♢; * ^♢⊆_0; and*U=0; i.e., _0∩ Q≠∅. <ref> follows from <ref> because^♢⊆ Q.<ref> follows from <ref>by applying Remark <ref><ref> to obtain _0∩Q=^♢, which implies <ref>. Finally, <ref> implies <ref>.Having ∇(^♢)=0 for at least one^♢∈^♢ implies (<ref>) and is thereforea stronger condition than (<ref>).Consider ()=_2^2 and C=∈R^2 |-1_2× 1_2≤1. (Here, () could correspond to the Gaussian measurement model withmeasurements equal to zero.) Since C is a circle within R_+^2, the objective functionsfor the identity (Ψ=I_2) and 1D TVsparsifying transforms arerCl's f_u() = x_1^2+x_2^2 + u(x_1+x_2)+𝕀_C(), (identity) f_u() = x_1^2+x_2^2 + u|x_1-x_2|+𝕀_C(), (1D TV) respectively, where 𝒳_u = 𝒳^♢=Q={^♢} and ^♢ = 1-√(2)/21. Here,∇^♢=2-√(2)1_2× 1and N_C^♢=a1| a≤0, whichconfirms that (<ref>) holds. §.§ Condition for Infinite U and Guarantees for Finite UIf there exists ^♢∈^♢ such that rCl∇^♢+N_C^♢ ∩(ℛ Ψ)=∅.then U=+∞.When (<ref>) holds, the reverse is also true with a stronger claim: U=+∞implies (<ref>) for all ^♢∈^♢. 
First, we prove sufficiency by contradiction.If a finite U exists,then ^♢⊆_u for all u≥ U.Therefore,(<ref>) holds withbeing any^♢∈^♢, which contradicts (<ref>).In the case where (<ref>) holds, we prove the necessity bycontradiction. If (<ref>) does not hold for all^♢∈^♢, there exist ∈ N_C(^♢) and∈C^p' such that c0 = ∇(^♢)+Ψ+.Since (<ref>) holds, Ψ^H ^♢ = 0 andG(Ψ^H^♢)=H; see (<ref>).Whenu≥_∞, ∈ uH and Ψ∈ uΨ GΨ^H^♢. Therefore,(<ref>) holds at =^♢ for allu≥_∞, which contradicts U=+∞. Consider ()=x_1+𝕀_R_+(x_1), Ψ=I_2,and C=∈R^2 |-1_2× 1_2≤1. (Here, () could correspond to the (x_1) measurement model with measurement equal to zero.) Since C is a circle within R_+^2, the objective functionis rCl f_u()=(1+u)x_1+ux_2+𝕀_C()with 𝒳_u = {_u },𝒳^♢ = Q = {^♢}, andrCl_u=1_2 ×1-1/√(2+2/u+1/u^2) [ 1+1/u; 1 ] ^♢ =1-√(2)/21_2×1 which implies U=+∞, consistent with the observation that _u ∩ Q = ∅.Here, (<ref>) isnot satisfied:(<ref>) is only a sufficient condition forU=+∞ and does not hold in this example.Consider ()=^2_2, 1D TV sparsifying transformwith Ψ=D^T(2), and C=∈R^2 |-2, 0^T_2^2≤2. Since C is a circle with x_1-x_2≥0, the objective function isrCl f_u() = _2^2+u|x_1-x_2|+𝕀_C()=-12[u-u]^T_2^2 -u^2/2+𝕀_C() with 𝒳_u =2-(1+4/u)/q(u), 1/q(u) ^T, q(u) √(1+4/u+8/u^2), and 𝒳^♢ = Q = {1_2× 1}, which impliesU=+∞.Since (<ref>) holds in this example,(<ref>) is necessary and sufficient for U=+∞.Since -1^T∇^♢=-4 andN_C^♢=(-a,a)^T | a≥0, (<ref>) holds.§.§.§ Two cases of finite UIfd=p and (<ref>) holds, then U must be finite:in thiscase,condition (<ref>) in Remark <ref> cannothold, which is easy to confirm by substituting(ℛΨ)=R^p into(<ref>). U must also be finite if c ^♢∩C≠∅.Indeed, (<ref>) implies (<ref>) and that for ^♢∈^♢∩ C,rClN_C^♢={0}∇(^♢)∈ℛ(Ψ)and hence (<ref>) cannot hold upon substituting(<ref>) and (<ref>).Here,(<ref>) follows from0∈∇(^♢)+N_Q(^♢), the condition foroptimality of the optimization problem min_∈ Q() thatdefines ^♢, by using the fact that N_Q(^♢) =ℛ(Ψ) when ^♢∈^♢∩ C. If (<ref>) holds then,by Remark <ref>, U=0 ifand only if ∇^♢=0.§ BOUNDS WHEN (<REF>) HOLDS We now present an optimization problem for finding U when (<ref>)holds. Assume that (<ref>) holds and that the convex NLL() is differentiable within ^♢.Consider thefollowing optimization problem: s'c+l(P_0): U_0(^♢) = -15pt min_∈R^p, ∈C^p' (^♢,,)_∞subject to10pt∈N_C^♢ 48pt ∇^♢+∈ℛF with c(,,) +Z^F^+∇+-(Z).Then, U_0(^♢)=U for all ^♢∈^♢ and Uin(<ref>).Here,U=+∞ if and only if the constraints in(<ref>) and(<ref>) cannot be satisfied for any , which isequivalent to ^♢∈^♢ satisfying(<ref>) in Remark <ref>. Observe that G(Ψ^H^♢)=H for all ^♢∈^♢ and cΨ(,,) =∇+.due to (<ref>) and (<ref>), respectively.We first prove that ^♢⊆_u if u≥U_0(^♢). Consider any ^♢∈^♢ and denote by, a pair , thatsolves the minimization problem (P_0).Since u≥U_0(^♢), there exists an ∈ H such that^♢,,+u=0. Using (<ref>), we obtain c 0= Ψ^♢,, +u=uΨ+∇(^♢)+which implies ^♢∈_u according to(<ref>).Second, we prove that if u< U_0(^♢) for any^♢∈^♢, then ^♢∩_u=∅. Weemploy proof by contradiction.Suppose^♢∩_u≠∅; then, there exists an^♢∈^♢∩_u. According to (<ref>),there exist an ∈ H and an ∈N_C(^♢) such that0=uΨ+∇(^♢)+. Using (<ref>), we have rCl0= Ψu+^♢,,-u .Note that rCl u+^♢,,-u=Z^F^+∇^♢++u(Z). 
Inserting (<ref>) into (<ref>) and using(<ref>) and the fact that F has full column rank leads to0=F^+∇^♢++u(Z);thus rCl 0=u+^♢,,-u.Now,rearrange and use the fact that _∞≤1(see(<ref>)) to obtain c^♢,,-u_∞= u-_∞≤u < U_0(^♢)which contradicts (<ref>), where U_0(^♢) is theminimum.Finally, we prove by contradiction that U_0(^♢) is invariantwithin ^♢ if ^♢ has more than one element. Assume that there exist ^♢_1, ^♢_2 ∈^♢and u such that U_0(^♢_1)≤ u<U_0(^♢_2).Weobtain contradictory results: ^♢_1∈_u and^♢∩_u≠∅ because u≥ U_0(^♢_1)and u<U_0(^♢_2), respectively.Therefore,U=U_0(^♢) is invarant to ^♢∈^♢.The constraints onin (<ref>) and (<ref>) are equivalent to stating that (<ref>) does not hold for any ^♢∈𝒳^♢; see also (<ref>).If andoes not exist that satisfies these constraints, (<ref>) holds and U=+∞ according to Remark <ref>. We make a few observations: (P_0) is a linear programming problem withlinear constraints and can be solved using CVX <cit.> and Matlab'soptimization toolbox upon identifying N_C(^♢) andℛ(F) in (<ref>) and (<ref>),respectively.Theorem <ref> requires differentiability of theNLL only at =^♢∈^♢.If Ψ is real,then Z is real as well, the optimalin (P_0) has zero imaginarycomponent and the corresponding simplified version of Theorem <ref>follows and requires optimization in (P_0) with respect to real-valued∈R^p'.If Ψ is real and d = p', then we can select Z=I, which leads toZ^=I and cancellation of the variablein(<ref>) and simplification of (P_0).We now specialize Theorem <ref> to two cases with finite U.If d=p and if (<ref>) holds, then U in (<ref>) can becomputed as cU = min_∈N_C0, ∈C^p'+ Ψ^∇0+-(Ψ) _∞. Theorem <ref> applies, 𝒳^♢={0}, and Umust be finite.Setting F=I in (<ref>) leads to(<ref>).If C=R_+^p, then N_C(0)=R_-^p and thecondition ∈N_C0 reduces to ≼0. If (<ref>) holds, then U in (<ref>) can be computedas rClU = min_∈C^d+ Z^F^+ ∇^♢ - (Z )_∞with any ^♢∈^♢∩ C. Thanks to (<ref>), (<ref>) and(<ref>)–(<ref>) are satisfied,Theorem <ref> applies, U must be finite, and =0 (by (<ref>)). By usingthese facts,we simplify (<ref>) to obtain (<ref>). If d=p and 0∈ C, then both Corollaries <ref> and <ref> apply and the upper bound U can be obtained bysetting =0and N_C(0)={0} in (<ref>) or by setting^♢ = 0 andF=I in(<ref>).Consider a real invertible Ψ∈R^p× p. * If C=R_+^p, Corollary <ref> applies and (<ref>) becomesc U= min_≼0 Ψ^-1 ∇0 + _∞. In this case, U=0 and signal sparsity regularization is irrelevant if∇(0) ≽0, which follows by inspection from (<ref>), as well as from (<ref>) inRemark <ref>. If Ψ=I, (<ref>) further reduces to U=-min0,min_i[∇(0)]_i.*If 0∈ C, Corollaries <ref> and <ref> apply and the bound U simplifies to cU = Ψ^-1 ∇0_∞.For Ψ=I and a linear measurement model with white Gaussian noise,(<ref>) reduces to the expressions in <cit.> and <cit.>,used in <cit.> to design its continuation scheme; <cit.> and <cit.> also assumeC=R^p. [One-dimensional TV regularization]Consider 1D TV regularization with Ψ=D^T(p) ∈R^p × p obtained by setting K=1,J=p in(<ref>); note that d=p-1. Consider a constant signal^♢ = 1 x_0^♢∈^♢. ThenTheorem <ref> applies and yieldsrClU=min_∈N_C( 1 x_0^♢)max_1 ≤j < p ∑_i=1^j∇1 x_0^♢+_iwhere we have used the factorization (<ref>) with F obtained by the block partitioning Ψ = F0_p×1, Z=I_p-1 0_(p-1)×1, andthe fact that F^+ is equal to the (p-1)× p lower-triangularmatrix of ones.When(<ref>) holds, 1x_0^♢∈^♢∩ C, Corollary <ref> applies, = 0 (see (<ref>)), and (<ref>) reduces to: rClU=max_1 ≤j < p ∑_i=1^j ∇ 1 x_0^♢_i. 
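The 1D TV bound (<ref>) admits a compact numerical illustration (our own sketch; the Gaussian NLL ℒ(𝐱) = ½‖𝐲 − Φ𝐱‖_2^2 and C = R^p are stand-in choices): the partial sums of ∇ℒ(1x_0^♢) give the bound U and, at the same time, supply the dual certificate that makes the constant signal optimal for every u ≥ U.

import numpy as np

rng = np.random.default_rng(1)
N, p = 40, 25
Phi = rng.standard_normal((N, p))
y = rng.standard_normal(N)

# Scalar minimizer of L(1*x), cf. the closed form x_0 = 1^T Phi^T y / ||Phi 1||^2:
one = np.ones(p)
x0 = (one @ (Phi.T @ y)) / np.linalg.norm(Phi @ one) ** 2

g = Phi.T @ (Phi @ (one * x0) - y)    # gradient of the NLL at 1*x0
c = np.cumsum(g)
print(abs(c[-1]))                     # ~0: 1^T grad = 0 at the minimizer x0
U = np.max(np.abs(c[:-1]))            # bound: max over 1 <= j < p of |sum_{i<=j} grad_i|

# Dual certificate: eta = -c[:-1]/u has ||eta||_inf <= 1 iff u >= U, and
# D(p)^T eta reproduces -grad/u, so 1*x0 is stationary for f_u.
eta = np.append(-c[:-1] / U, 0.0)
Dp = np.eye(p) - np.eye(p, k=1); Dp[-1, :] = 0.0
print(np.max(np.abs(g + U * (Dp.T @ eta))))   # ~0: stationarity at u = U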
The bounds obtained by solving (P_0) are often simple but restrictedto the scenario where (<ref>) holds.In the following section, weremove assumption(<ref>) and develop a general numerical methodfor finding U in (<ref>).§ ADMM ALGORITHM FOR COMPUTING UWe focus on the nontrivial scenario where (<ref>) does not hold and assume u>0. We also assume that an ^♢∈𝒳^♢ isavailable, which will be sufficient to obtain the U in (<ref>). We use the duality of norms <cit.>: cΨ^H_1 = max__∞≤1^HΨ^Hto rewrite the minimization of (<ref>) as the followingmin-max problem (see also (<ref>)): rClmin_ max_ ()+u ^HΨ^H +𝕀_C()-𝕀_H().Since the objective function in (<ref>) is convex with respect to and concave with respect to , the optimal (,)=_u,_u is at the saddle point of (<ref>) and satisfies rCl0 ∈ ∇_u +uΨ_u +N_C_u _u ∈ G(Ψ^H_u).Now, select U as the smallest u for which(<ref>)–(<ref>) hold with _u=^♢: cU=1/v^♢∇^♢_2 where v^♢, ^♢, ^♢ is the solution tothe following constrained linear programming problem: s'l+l+x*(P_1): v,,minimize -v +𝕀_G(Ψ^H^♢)() +𝕀_N_C^♢() subject to 20ptv+Ψ+ =0obtained from (<ref>)–(<ref>) with _u and _u replaced by ^♢ and .Here,c ∇^♢ /∇^♢_2 is the normalized gradient (for numerical stability) of the NLL at^♢; ∇^♢≠0 because (<ref>)does not hold.Due to (<ref>), v=0 is a feasible pointthat satisfies the constraints (<ref>), which implies thatv^♢≥0.When (<ref>) holds, v has to be zero,implying U=+∞. To solve (P_1) and find v^♢, we apply an iterative algorithmbased on ADMM <cit.> rCl^(i+1)= -10pt min_ ∈GΨ^H^♢v^(i)+ Ψ +^(i) +^(i) _2^2 v^(i+1) = ρ-^TΨ^(i+1)+^(i)+^(i)^(i+1) = P_N_C^♢ -v^(i+1)-Ψ^(i+1)-^(i)^(i+1) = ^(i)+ Ψ^(i+1) +v^(i+1)+^(i+1)where ρ>0 is a tuning parameter for the ADMM iteration and wesolve (<ref>) using theBroyden-Fletcher-Goldfarb-Shanno optimization algorithmwith box constraints <cit.> and PNPG algorithm<cit.> forreal and complex Ψ, respectively.We initialize the iteration(<ref>) with v^(0)=1, ^(0)=0,^(0)=0,and ρ=1, where ρ is adaptively adjusted thereafter using the scheme in <cit.>.In special cases, (<ref>) simplifies. If (<ref>) holds,then Ψ^H^♢ = 0 and the constraint in(<ref>) simplifies to _∞≤1; see(<ref>).If (ΨΨ^H)=cI, c>0, andΨ∈R^p× p or Ψ∈C^p× p/2,(<ref>) has the following analytical solution: c^(i+1) =GΨ^H^♢ -1/cΨ^Hv^(i)+^(i)+^(i).When (<ref>) holds, (<ref>) reduces to ^(i)=0for all i, thanks to (<ref>).When Ψ is real, the constraints imposed by𝕀_G(Ψ^H^♢)() become linear and (P_1) becomes a linear programming problem with linear constraints. retain-zero-exponent=true § NUMERICAL EXAMPLESMatlab implementations of the presented examples are available at<https://github.com/isucsp/imgRecSrc/uBoundEx>. In all numerical examples, the empirical upper bounds U were obtained by a grid search over u with𝒳_u ={_u} obtained using the PNPG method<cit.>.§.§ Signal reconstruction for Gaussian linear modelWe adopt the linear measurement model with white Gaussian noise and scaled NLL() = 0.5 -Φ_2^2, where the elements of the sensing matrix Φ∈R^N ×p are iid and drawn from the uniform distribution on a unit sphere. We reconstruct the nonnegative “skyline” signal _true∈R^1024 × 1in<cit.> from noisy linearmeasurementsusing the DWT and 1D TV regularizations, where the DWT matrix Ψ is orthogonal(ΨΨ^T = Ψ^T Ψ =I), constructed using the Daubechies-4 wavelet with three decomposition levels. Define the SNR as() =10log_10Φ_true_2^2/Nσ^2where σ^2 is the variance of the Gaussian noise added to Φ_true to create the noisy measurement vector . 
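Although (P_0) and (P_1) are solved above with CVX, Matlab's optimization toolbox, or the ADMM iteration, the simplest special case — Corollary <ref> with d = p, C = R_+^p, and real invertible Ψ, where U = min_{𝐪≼0} ‖Ψ^{-1}(∇ℒ(0) + 𝐪)‖_∞ — is itself a small linear program that a generic solver handles directly. A sketch of this (our own; SciPy's linprog, with random stand-ins for Ψ and ∇ℒ(0)) uses the standard epigraph reformulation of the ℓ_∞ norm:

import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
p = 20
Psi = rng.standard_normal((p, p)) + 5 * np.eye(p)   # invertible stand-in for Psi
grad0 = rng.standard_normal(p)                      # stand-in for grad L(0)
Pinv = np.linalg.inv(Psi)

# Variables z = [q (p entries, q <= 0), t]; minimize t subject to
#   -t <= [Pinv (grad0 + q)]_i <= t  for all i.
A_ub = np.block([[ Pinv, -np.ones((p, 1))],
                 [-Pinv, -np.ones((p, 1))]])
b_ub = np.concatenate([-Pinv @ grad0, Pinv @ grad0])
bounds = [(None, 0.0)] * p + [(0.0, None)]
res = linprog(np.r_[np.zeros(p), 1.0], A_ub=A_ub, b_ub=b_ub, bounds=bounds)
U = res.fun
print(U, np.linalg.norm(Pinv @ grad0, np.inf))   # U <= unconstrained (C = R^p) value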
For C=R_+^p and C=R^p with DWTregularization, 𝒳^♢={0} andExample <ref> applies and yields the upper bounds(<ref>) and (<ref>), respectively.For TV regularization, we apply the result in Example <ref>. For C=R^p and C=R_+^p, we have𝒳^♢={1 x_0} and 𝒳^♢={1max(x_0,0) }, respectively, where rClx_0 min_x ∈R(1 x)=1^TΦ^T/ Φ1^2_2.If 1 x_0 ∈ C, which holds when C=R^p orwhen C=R_+^p and x_0 > 0, then the bound U is given by (<ref>).ForC=R_+^p and if x_0 ≤ 0, then 𝒳^♢={0} and (<ref>) applies. In thiscase, U=0 if [∇(0)]_i ≥ 0 for i=1,…,p-1, whichoccurs only when [∇(0)]_i=0 for all i.Table <ref> shows the theoretical and empirical bounds forDWT and TV regularizations and C=R_+^p and C=R^p; we decrease the SNR from30-30 with independent noise realizations fordifferent SNR.The theoretical bounds inSections <ref> and <ref> coincide. For DWT regularization, ^♢ is the same for both convexsets C and thus the upper bound U for C=R_+^p is alwayssmaller than its counterpart for C=R^p, thanks to beingoptimized over variablein (<ref>). For TV regularization, when x_0>0, the upper bounds U coincidefor both C because, in this case, ^♢ is the same for both C and ^♢∈ C. In the last row of Table <ref> we show the case wherex_0≤0; then, ^♢ differs for the two convex sets C, and the upper bound U for C=R_+^p is smaller than itscounterpart for C=R^p, thanks to being optimized over variable in (<ref>): compare (<ref>) with (<ref>). §.§ PET image reconstruction from Poisson measurements Consider PET reconstruction of the 128×128 concentration map_true in <cit.>,which represents simulated radiotracer activity in a human chest, fromindependent noisy Poisson-distributed measurements =(y_n) with means[Φ_true+]_n.The choices of parameters in the PETsystem setup and concentration map _true have been taken fromthe IRT <cit.>.Here,c ()=1^T Φ+- + ∑_n,y_n≠0y_nlny_n/Φ+_n and c Φ= w (exp_∘(-S+))S ∈R_+^N ×pis the known sensing matrix;is the density map needed to model the attenuation of the gamma rays<cit.>; =(b_i) is the known intercept termaccounting for background radiation, scattering effect, and accidentalcoincidence;[The elements of the intercept term have been set to aconstant equal to 10 of the sample mean of Φ_true: =[1^TΦ_true/(10N)]1.]is a known vector that models the detector efficiency variation; andw>0 is a known scaling constant, which we use to control the expectedtotal number of detectedphotons due to electron-positron annihilation,1^T(-) = 1^TΦ_true, an SNRmeasure.We collect the photons from 90 equally spaced directionsover180, with 128 radial samples at each direction. Here, we adopt the parallel strip-integral matrix S<cit.> and use its implementation in theIRT <cit.>. We now consider the nonnegative convex set C=R_+^p, which ensures that (<ref>) holds, and 2D isotropic and anisotropicTV and DWT regularizations, where the 2D DWT matrixΨ is constructed using the Daubechies-6 wavelet with six decompositionlevels. For TV regularizations, 𝒳^♢={1max(0,x_0)}, where x_0 = min_x ∈R(1 x), computedusing the bisection method that finds the zero of ∂(1x)/∂ x, which is an increasing function of x∈R_+.Here, no searchfor x_0 is needed when [0]∂(1x) / ∂ x_x=0 > 0, because in this case x_0<0.We computed the theoretical bounds using the ADMM-type algorithm inSection <ref>.Table <ref> shows the theoretical and empirical bounds forDWT and TV regularizations and the SNR1^TΦ_true varying from e1 to e9,with independent measurementrealizations for different SNR. 
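The scalar search for x_0 in the PET example can be sketched as follows (our own illustration with a random nonnegative Φ and synthetic Poisson data; the derivative expression follows from the Poisson NLL above, ∂ℒ(1x)/∂x = 1^TΦ^T(1 − 𝐲 ⊘ (Φ1x + 𝐛))).

import numpy as np

rng = np.random.default_rng(3)
N, p = 200, 50
Phi = rng.uniform(0.0, 1.0, (N, p))               # nonnegative stand-in system matrix
x_true = rng.uniform(0.0, 2.0, p)
b = np.full(N, 0.1 * (Phi @ x_true).mean())       # background/intercept term
y = rng.poisson(Phi @ x_true + b).astype(float)
one = np.ones(p)

def dLdx(x):
    # Derivative of the Poisson NLL along constant signals 1*x (increasing on R_+).
    return one @ (Phi.T @ (1.0 - y / (Phi @ one * x + b)))

if dLdx(0.0) >= 0:
    x0 = 0.0                      # root is negative: X_diamond = {0} for C = R_+^p
else:
    lo, hi = 0.0, 1.0
    while dLdx(hi) < 0.0:         # bracket the root
        hi *= 2.0
    for _ in range(60):           # bisection
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if dLdx(mid) < 0.0 else (lo, mid)
    x0 = 0.5 * (lo + hi)
print(x0, dLdx(x0))               # zero of the scalar derivative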
Denote the isotropic and anisotropic 2D TV bounds by U_isoand U_ani, respectively. Then, it is easy to show that when(<ref>) holds, U_ani≤ U_iso≤√(2)U_ani, which follows byusing the inequalities√(2)√(a^2+b^2)≥a+b≥√(a^2+b^2) and isconfirmed in Table <ref>. § CONCLUDING REMARKSFuture work will include obtaining simple expressions for upper bounds Ufor isotropic 2D TV regularization, based on Theorem <ref>.
http://arxiv.org/abs/1702.07930v1
{ "authors": [ "Renliang Gu", "Aleksandar Dogandžić" ], "categories": [ "stat.CO", "math.OC" ], "primary_category": "stat.CO", "published": "20170225174733", "title": "Upper-Bounding the Regularization Constant for Convex Sparse Signal Reconstruction" }
Multiuser Precoding and Channel Estimation for Hybrid Millimeter Wave MIMO Systems [D. W. K. Ng is supported under the Australian Research Council's Discovery Early Career Researcher Award funding scheme (project number DE170100137). This work was supported in part by the Australian Research Council (ARC) Linkage Project LP 160100708.] Lou Zhao, Derrick Wing Kwan Ng, and Jinhong Yuan School of Electrical Engineering and Telecommunications, The University of New South Wales, Sydney, Australia Email: lou.zhao@unsw.edu.au, w.k.ng@unsw.edu.au, j.yuan@unsw.edu.au , December 30, 2023 =============================================================================================================================================================================================================================================================================================================================================================================== In this paper, we develop a low-complexity channel estimation for hybrid millimeter wave (mmWave) systems, where the number of radio frequency (RF) chains is much smaller than the number of antennas equipped at each transceiver. The proposed channel estimation algorithm aims to estimate the strongest angles-of-arrival (AoAs) at both the base station (BS) and the users. Then all the users transmit orthogonal pilot symbols to the BS via these estimated strongest AoAs to facilitate the channel estimation. The algorithm does not require any explicit channel state information (CSI) feedback from the users, and the associated signalling overhead of the algorithm is only proportional to the number of users, which is significantly less compared to various existing schemes. Besides, the proposed algorithm is applicable to both non-sparse and sparse mmWave channel environments. Based on the estimated CSI, zero-forcing (ZF) precoding is adopted for multiuser downlink transmission. In addition, we derive a tight achievable rate upper bound of the system. Our analytical and simulation results show that the proposed scheme offers a considerable achievable rate, approaching that of fully digital systems, where the number of RF chains equipped at each transceiver is equal to the number of antennas. Furthermore, the achievable rate performance gap between the considered hybrid mmWave systems and the fully digital system is characterized, which provides useful system design insights. § INTRODUCTION Higher data rates, larger bandwidths, and higher spectral efficiency are necessary for the fifth-generation (5G) wireless communication systems to support various emerging applications <cit.>. The combination of millimeter wave (mmWave) communication <cit.> with massive multiple-input multiple-output (MIMO) <cit.> is considered as one of the promising candidate technologies for 5G communication systems, with many potential and exciting opportunities for research <cit.>. For example, the trade-offs between system performance, hardware complexity, and energy consumption <cit.> are still unclear. From the literature, it is certain that conventional fully digital MIMO systems, in which each antenna connects with a dedicated radio frequency (RF) chain, are impractical for mmWave systems due to the prohibitively high cost, e.g. the tremendous energy consumption of high-resolution analog-to-digital converters/digital-to-analog converters (ADCs/DACs) and power amplifiers (PAs).
Therefore, several mmWave hybrid systems were proposed as compromise solutions which strike a balance between hardware complexity and system performance <cit.>. Specifically, the use of a large number of antennas, connected with only a small number of independent RF chains at the transceivers, is adopted to exploit the large array gain to compensate for the inherent high path loss in mmWave channels <cit.>. Yet, the hybrid system imposes a restriction on the number of RF chains, which introduces a paradigm shift in the design of both resource allocation algorithms and transceiver signal processing. Conventionally, pilot-aided channel estimation algorithms are widely adopted for fully digital multiuser (MU) time-division duplex (TDD) massive MIMO systems <cit.> operating in sub-6 GHz frequency bands. However, these algorithms cannot be directly applied to hybrid mmWave systems, as the number of RF chains is much smaller than the number of antennas. In fact, for channel estimation in hybrid mmWave systems, the strategies for allocating analog/digital beams to different users and estimating the equivalent baseband channels are still an open area of research <cit.>. Recently, several improved mmWave channel estimation algorithms were proposed <cit.>. Overlapped beam patterns and rate-adaptation channel estimation were investigated in <cit.> to reduce the required training time for channel estimation. Then, an improved limited-feedback hybrid channel estimation was proposed in <cit.> to maximize the received signal power at each single user so as to reduce the required training and feedback overheads. However, explicit channel state information (CSI) feedback from the users is still required for these channel estimation algorithms. In practice, CSI feedback may cause system rate performance degradation due to the limited amount of feedback and the limited resolution of CSI quantization. In addition, the CSI feedback also requires an exceedingly high consumption of time resources. Therefore, a low-complexity mmWave channel estimation algorithm, which does not require explicit CSI feedback, is necessary to unlock the potential of hybrid mmWave systems. In the literature, most of the existing mmWave channel estimation algorithms leverage the sparsity of mmWave channels due to the extremely short wavelength of mmWave signals <cit.>. Generally, in suburban areas or outdoor long-distance propagation environments <cit.>, the sparsity of mmWave channels can be well exploited. In practical urban areas (especially in city centers), the number of unexpected scattering clusters increases significantly and mmWave communication channels may not necessarily be sparse. For instance, in the field measurements in Daejeon city, Korea, and the associated ray-tracing simulation <cit.>, the angles of arrival (AoAs) at the BS and the users were observed under the impact of non-negligible scattering clusters. In addition, existing mmWave channel estimation algorithms <cit.>, which are designed based on the assumption of channel sparsity, may not be applicable to non-sparse mmWave channels. Indeed, the scattering clusters of mmWave channels, due to macro-objects or backscattering from objects, have a significant impact on system performance and cannot be neglected in the system design. Therefore, there is an emerging need for a channel estimation algorithm which is applicable to both non-sparse and sparse mmWave channels. Motivated by the aforementioned discussions, we consider a MU hybrid mmWave system.
In particular, we propose and detail a novel non-feedback, non-iterative channel estimation algorithm which is applicable to both non-sparse and sparse mmWave channels. Also, we analyze the achievable rate performance of the mmWave system using ZF precoding based on the estimated equivalent channel information. Our main contributions are summarized as follows:* We propose a three-step MU channel estimation scheme for mmWave channels. In the first two steps, we estimate the strongest AoAs at both the BS and the users instead of estimating the combination of multiple AoAs. The estimated strongest AoAs are exploited for the design of the BS and users beamforming matrices. In the third step, all the users transmit orthogonal pilot symbols to the BS via the beamforming matrices. We note that the proposed channel estimation scheme requires neither explicit CSI feedback from the users nor iterative measurements. The required training overhead of our proposed algorithm only scales with the number of users. Besides, the proposed algorithm is very general: it is not only applicable to sparse mmWave channels, but also suitable for non-sparse channels. * We analyze the achievable rate performance of the proposed ZF precoding scheme based on the estimated CSI of an equivalent channel. Assuming that the equivalent CSI is perfectly known at the BS, we derive a tight performance upper bound of our proposed scheme. Also, we quantify the performance gap between the proposed hybrid scheme and the fully digital system in terms of achievable rate per user. It is interesting to note that this performance gap is determined by the ratio of the strongest AoA component to the scattering component. Notation: tr(· ) denotes the trace operation; [· ]^∗ and [·]^T denote the complex conjugate and transpose operations, respectively. The distribution of a circularly symmetric complex Gaussian (CSCG) random vector with a mean vector 𝐱 and a covariance matrix σ^2𝐈 is denoted by CN(𝐱,σ^2𝐈), and ∼ means “distributed as". ℂ^N× M denotes the space of N× M matrices with complex entries. § SYSTEM MODEL We consider a MU hybrid mmWave system which consists of one base station (BS) and N users in a single cell, as shown in Figure <ref>. The BS is equipped with M≥ 1 antennas and N_RF radio frequency (RF) chains to serve the N users. We assume that each user is equipped with P antennas connected to a single RF chain such that M⩾ N_RF⩾ N. In the following sections, we set N = N_RF to simplify the analysis. Each RF chain at the BS can access all the antennas by using M phase shifters, as shown in Figure <ref>. At the BS, the number of phase shifters is M× N_RF. Due to the significant propagation attenuation at mmWave frequencies, the system is dedicated to covering a small area, e.g. a cell radius of ∼ 150 m. We assume that the users and the BS are fully synchronized and that time division duplex (TDD) is adopted to facilitate uplink and downlink communications <cit.>. In previous work <cit.>, mmWave channels were assumed to have sparse propagation paths between the BS and the users. However, in recent field tests, both a strong line-of-sight (LOS) component and a non-negligible scattering component may exist in mmWave propagation channels <cit.>, especially in urban areas.
In particular, mmWave channels can also be modeled by non-sparse Rician fading with a large Rician K-factor[We note that existing models <cit.> for sparse mmWave channels are special cases of the considered model.] (approximately ≥ 5 dB) <cit.>. Let 𝐇_k∈ℂ^M× P be the uplink channel matrix between the k-th user and the BS in the cell. We assume that 𝐇_k is a slowly time-varying block Rician fading channel, i.e., the channel is constant within a block but varies slowly from one block to another. In this paper, we assume that the channel matrix 𝐇_k can be decomposed into a deterministic LOS channel matrix 𝐇_L,k∈ℂ^M× P and a scattered channel matrix 𝐇_S,k∈ℂ^M× P <cit.>, i.e., 𝐇_k = 𝐇_L,k𝐆_L,k (LOS component) + 𝐇_S,k𝐆_S,k (scattering component), where 𝐆_L,k∈ℂ^P× P and 𝐆_S,k∈ℂ^P× P are diagonal matrices with entries 𝐆_L,k=diag{√(υ_k/(υ_k+1))} and 𝐆_S,k=diag{√(1/(υ_k+1))}, respectively, and υ_k>0 is the Rician K-factor of user k. In general, different array structures can be adopted at both the BS and the users, e.g. the uniform linear array (ULA) and the uniform planar array (UPA). Here, we adopt the ULA as it is commonly implemented in practice <cit.>. We assume that all the users are separated by hundreds of wavelengths or more <cit.>. Thus, we can express the deterministic LOS channel matrix 𝐇_L,k of the k-th user as <cit.> 𝐇_L,k=𝐡_L,k^BS𝐡_L,k^H, where 𝐡_L,k^BS ∈ℂ^M× 1 and 𝐡_L,k ∈ℂ^P× 1 are the antenna array response vectors of the BS and the k-th user, respectively. In particular, 𝐡_L,k^BS and 𝐡_L,k can be expressed as <cit.> 𝐡_L,k^BS=[1, … , e^-j2π(M-1)(d/λ)cos(θ_k)]^T and 𝐡_L,k=[1, … , e^-j2π(P-1)(d/λ)cos(ϕ_k)]^T, respectively, where d is the distance between neighboring antennas and λ is the wavelength of the carrier frequency. Variables θ_k∈[0,+π] and ϕ_k∈[0,+π] are the angles of incidence of the LOS path at the antenna arrays of the BS and user k, respectively. For convenience, we set d=λ/2 for the rest of the paper, which is an assumption commonly adopted in the literature <cit.>. Without loss of generality, we assume that the scattering component 𝐇_S,k consists of N_cl clusters and that each cluster contributes N_l propagation paths <cit.>, which can be expressed as 𝐇_S,k = √(1/∑_i=1^N_cl N_l,i) ∑_i=1^N_cl ∑_l=1^N_l,i α_i,l 𝐡_i,l^BS 𝐡_k,i,l^H = [𝐡_S,1, …, 𝐡_S,P], where 𝐡_i,l^BS∈ℂ^M× 1 and 𝐡_k,i,l∈ℂ^P× 1 are the antenna array response vectors of the BS and the k-th user associated with the (i,l)-th propagation path, respectively. Here, α_i,l∼𝒞𝒩(0,1) represents the path attenuation of the (i,l)-th propagation path and 𝐡_S,k∈ℂ^M× 1 is the k-th column vector of 𝐇_S,k. With an increasing number of clusters, the path attenuation coefficients and the AoAs between the users and the BS become randomly distributed <cit.>. Therefore, we model the entries of the scattering component 𝐇_S,k in a general manner as independent and identically distributed (i.i.d.) random variables[To facilitate the study of the downlink hybrid precoding, we assume perfect long-term power control is performed to compensate for path loss and shadowing at the desired users, and equal power allocation among different data streams of the users <cit.>. Thus, the entries of the scattering component 𝐇_S,k are modeled by i.i.d. random variables.] 𝒞𝒩(0,1). § PROPOSED HYBRID CHANNEL ESTIMATION In practice, the hybrid structure imposes a fundamental challenge for mmWave channel estimation. Unfortunately, the conventional pilot-aided channel estimation for fully digital systems, e.g. <cit.>, is not applicable to the considered hybrid mmWave system.
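A minimal numerical sketch of this channel model (our own; NumPy, with arbitrary parameter values) generates the ULA response vectors of (<ref>)–(<ref>) for d = λ/2 and assembles 𝐇_k according to (<ref>); the hypothetical helpers ula and rician_channel are reused in later sketches.

import numpy as np

def ula(n, angle):
    """ULA response for d = lambda/2: [1, e^{-j*pi*cos(angle)}, ...]."""
    return np.exp(-1j * np.pi * np.arange(n) * np.cos(angle))

def rician_channel(M, P, kappa, rng):
    """H_k = sqrt(kappa/(1+kappa)) h_BS h^H + sqrt(1/(1+kappa)) H_S."""
    theta, phi = rng.uniform(0, np.pi, 2)        # LOS AoAs at the BS and the user
    H_L = np.outer(ula(M, theta), ula(P, phi).conj())
    H_S = (rng.standard_normal((M, P)) + 1j * rng.standard_normal((M, P))) / np.sqrt(2)
    return np.sqrt(kappa / (1 + kappa)) * H_L + np.sqrt(1 / (1 + kappa)) * H_S

rng = np.random.default_rng(0)
H = rician_channel(M=64, P=8, kappa=3.0, rng=rng)
print(H.shape, np.linalg.norm(H) ** 2 / (64 * 8))   # ~1: unit average per-entry power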
The reasons are that the number of RF chains is much smaller than the number of antennas equipped at the BS, and that the transceiver beamforming matrices cannot be acquired. To address this important issue, we propose a new pilot-aided hybrid channel estimation, which consists of three main steps, as shown in Figure <ref>. In the first and second steps, we introduce unique unmodulated frequency tones for strongest-AoA estimation, inspired by signal processing in monopulse radar and sonar systems <cit.>. The estimated strongest AoAs at both the BS and the user sides are used for the design of the BS and users beamforming matrices. In the third step, orthogonal pilot sequences are transmitted from all the users to the BS to estimate the uplink channels, which are adopted for the design of the BS digital baseband downlink precoder by exploiting the reciprocity between the uplink and downlink channels.§.§.§ Step 1 First, all the users transmit unique frequency tones to the desired BS in the uplink simultaneously. For the k-th user, a unique unmodulated frequency tone, x_k=cos(2π f_k t), k∈{1,⋯ ,N}, is transmitted from one of the omni-directional antennas in the antenna array to the BS. Here, f_k is the single carrier frequency, t stands for time, and f_k≠ f_j, ∀ k≠ j. For the AoA estimation, if the condition |f_k-f_j|/f_c < 10^-4, ∀ k≠ j, is satisfied, the AoA estimation error by using a ULA with a single tone is generally negligible <cit.>, where f_c is the system carrier frequency. The passband received signal of user k at the BS, 𝐲_k^BS, is given by 𝐲_k^BS=(√(υ_k/(υ_k+1))𝐡_L,k^BS+√(1/(υ_k+1))𝐡_S,k) x_k+𝐳_BS, where 𝐳_BS denotes the thermal noise at the antenna array of the BS, 𝐳_BS∼𝒞𝒩(0,σ_BS^2𝐈), and σ_BS^2 is the noise variance at each antenna of the BS. To facilitate the estimation of the AoA, we perform a linear angular-domain search in [0°,180°] with an angle search step size of 180°/J. Therefore, the AoA detection matrix Γ_k∈ℂ^M× J, Γ_k= [γ_k,1,…,γ_k,J], contains J column vectors. The i-th vector γ_k,i∈ℂ^M× 1, i∈{1,⋯,J}, stands for a potential AoA of user k at the BS and is given by γ_k,i= (1/√(M))[1, … , e^j2π(M-1)(d/λ)cos(θ_i)]^T, where θ_i= (i-1)·180°/J, i∈{1,⋯,J}, is the assumed AoA and γ_k,i^Hγ_k,i=1. For the AoA estimation of user k, Γ_k is implemented in the M phase shifters connected to the k-th RF chain. The local oscillator (LO) of the k-th RF chain at the BS generates the same carrier frequency f_k to down-convert the received signals to the baseband, as shown in Figure <ref>. After the down-conversion, the signals are filtered by a low-pass filter which removes the other frequency tones. The equivalent received signal at the BS from user k at the i-th potential AoA is given by r_k,i^BS= √(υ_k/(υ_k+1))γ_k,i^T𝐡_L,k^BS+√(1/(υ_k+1))γ_k,i^T𝐡_S,k +γ_k,i^T𝐳_BS. The potential AoA which leads to the maximum value among the J observation directions, i.e., γ_k= arg max_{γ_k,i, i∈{1,⋯,J}} |r_k,i^BS|, is considered as the AoA of user k. Besides, the vector γ_k corresponding to the AoA with the maximal value in (<ref>) is exploited as the k-th user's beamforming vector at the BS. As a result, we can estimate all the other users' uplink AoAs at the BS from their corresponding transmitted signals simultaneously. For notational simplicity, we denote 𝐅_RF=[γ_1,…, γ_N] ∈ℂ^M× N as the BS beamforming matrix. §.§.§ Step 2 The BS sends orthogonal frequency tones to all the users exploiting the beamforming matrix 𝐅_RF obtained in step 1.
This facilitates the downlink AoA estimation at the users, and this information is used to design the beamforming vectors to be adopted at the users. The received signal 𝐲_k^UE at user k can be expressed as 𝐲_k^UE=[𝐆_L,k𝐡_L,k^∗(𝐡_L,k^BS)^T+𝐆_S,k𝐇_S,k^T]γ_k x_k +𝐳_MS, where 𝐳_MS denotes the thermal noise at the antenna array of the users, 𝐳_MS∼𝒞𝒩(0,σ_MS^2𝐈), and σ_MS^2 is the noise variance for all the users. The AoA detection matrix for user k, Ω_k∈ℂ^P× J, which also contains J estimation column vectors, is implemented at the phase shifters of user k. The i-th column vector of matrix Ω_k for user k, ω_k,i∈ℂ^P× 1, i∈{1,⋯,J}, is given by ω_k,i=(1/√(P))[1,… , e^j2π(P-1)(d/λ)cos(ϕ_i)]^T, where ϕ_i= (i-1)·180°/J, i∈{1,⋯,J}, is the i-th potential AoA of user k and ω_k,i^Hω_k,i=1. With similar procedures as in step 1, the equivalent received signal from the BS at user k at the i-th potential AoA is given by r_k,i^UE= ω_k,i^H√(υ_k/(υ_k+1))𝐡_L,k^∗(𝐡_L,k^BS)^Tγ_k+ω_k,i^H√(1/(υ_k+1))𝐇_S,k^Tγ_k+ω_k,i^H𝐳_MS. Similarly, we search for the maximum value among the J observation directions and design the beamforming vector based on the estimated AoA of user k. The beamforming vector for user k is given by ω_k^∗= arg max_{ω_k,i, i∈{1,⋯,J}} |r_k,i^UE|, and we denote the matrix 𝐐_RF=[ω_1^∗, …, ω_N^∗] ∈ℂ^P× N as the users beamforming matrix. §.§.§ Step 3 The BS and users beamforming matrices based on the estimated uplink and downlink AoAs are designed via step 1 and step 2, respectively. After that, all the users transmit orthogonal pilot sequences to the BS via the user beamforming vectors ω_k^∗. We denote the pilot sequence of the k-th user in the cell as Φ_k=[ϑ_k(1), ϑ_k(2), …, ϑ_k(N)]^T, Φ_k∈ℂ^N× 1, which stands for N symbols transmitted across time. The pilot symbols used for the equivalent channel[The equivalent channel is composed of the BS beamforming matrix, the mmWave channel, and the users beamforming matrix.] estimation are transmitted in sequence from symbol ϑ_k(1) to symbol ϑ_k(N). The pilot symbols of all the N users form a matrix, Ψ∈ℂ^N× N, where Φ_k is a column vector of the matrix Ψ given by Ψ= √(E_P)[Φ_1, Φ_2, …, Φ_N], where Φ_i^HΦ_j=0, ∀ i≠ j, i,j∈{1,⋯,N}, and E_P represents the transmitted pilot symbol energy. Note that Ψ^HΨ=E_P𝐈_N. Meanwhile, the BS beamforming matrix 𝐅_RF is utilized to receive the pilot sequences at all the RF chains. As the length of the pilot sequences is equal to the number of users, we obtain an N × N observation matrix from all the RF chains at the BS. In particular, the received signal at the k-th RF chain at the BS is 𝐬_k^T∈ℂ^1× N, which is given by 𝐬_k^T=γ_k^T ∑_i=1^N 𝐇_i ω_i^∗ √(E_P)Φ_i^T+γ_k^T𝐙, where 𝐙∈ℂ^M× N denotes the additive white Gaussian noise matrix at the BS and the entries of 𝐙 are modeled by i.i.d. random variables with distribution 𝒞𝒩(0,σ_BS^2). After [𝐬_1, ⋯, 𝐬_N] is obtained, we adopt the least squares (LS) method for the equivalent channel estimation. We note here that the LS method is widely used in practice since it does not require any prior channel information. Subsequently, with the help of the orthogonal pilot sequences, we can construct an equivalent hybrid uplink channel matrix 𝐇_eq∈ℂ^N× N formed by the proposed scheme via the LS estimation method. Then, due to the channel reciprocity, the LS estimate of the equivalent downlink channel of the hybrid system, 𝐇_eq^T, can be expressed as: 𝐇_eq^T = (1/E_P)Ψ^H[𝐬_1 … 𝐬_k … 𝐬_N] = [ω_1^H𝐇_1^T𝐅_RF; ⋮; ω_N^H𝐇_N^T𝐅_RF] + (1/√(E_P))[Φ_1^H𝐙^T𝐅_RF; ⋮; Φ_N^H𝐙^T𝐅_RF] (effective noise).
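The three steps can be condensed into a short simulation sketch (our own; it reuses the hypothetical ula and rician_channel helpers from the earlier snippet, replaces the tone-based sweeps of steps 1–2 by their noiseless grid-search equivalents, and takes the rows of a unitary DFT matrix as one concrete choice of orthogonal pilots).

import numpy as np

M, P, N, J, E_P, sigma2 = 64, 4, 4, 180, 10.0, 0.1
rng = np.random.default_rng(1)
H = [rician_channel(M, P, kappa=5.0, rng=rng) for _ in range(N)]   # uplink channels H_k
grid = np.pi * np.arange(J) / J                                    # angular search grid

def best_beam(h, n):
    # Grid search over potential AoAs; returns the matched (conjugate) beam, unit norm.
    scores = [abs(ula(n, a).conj() @ h) for a in grid]
    return ula(n, grid[int(np.argmax(scores))]).conj() / np.sqrt(n)

# Steps 1-2 (noise-free stand-in for the tone-based sweeps):
F = np.stack([best_beam(H[k][:, 0], M) for k in range(N)], axis=1)                 # gamma_k
Q = np.stack([best_beam((H[k].T @ F[:, k]).conj(), P) for k in range(N)], axis=1)  # omega_k

# Step 3: orthogonal pilots and LS estimation of the equivalent channel.
Pilots = np.exp(-2j * np.pi * np.outer(np.arange(N), np.arange(N)) / N) / np.sqrt(N)
H_eq = np.array([[F[:, k] @ H[i] @ Q[:, i].conj() for i in range(N)] for k in range(N)])
Znoise = np.sqrt(sigma2 / 2) * (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N)))
S = np.sqrt(E_P) * H_eq @ Pilots + Znoise          # rows: RF-chain outputs over time
H_eq_hat = S @ Pilots.conj().T / np.sqrt(E_P)      # LS estimate of H_eq
print(np.linalg.norm(H_eq_hat - H_eq) / np.linalg.norm(H_eq))   # small relative error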
From Equation (<ref>), we observe that the proposed hybrid channel estimation can obtain all the users' equivalent CSI simultaneously. Compared to existing channel estimation methods, e.g. the compressed-sensing algorithm <cit.>, the proposed algorithm does not require explicit CSI feedback from each antenna equipped at the desired users. § HYBRID ZF PRECODING AND PERFORMANCE ANALYSIS In this section, we illustrate and analyze the achievable rate performance per user of the considered hybrid mmWave system under ZF downlink transmission. The ZF downlink precoding is based on the estimated hybrid equivalent channel 𝐇_eq, which subsumes the BS beamforming matrix 𝐅_RF and the users beamforming matrix 𝐐_RF. We derive a closed-form upper bound on the achievable rate per user of ZF precoding in hybrid mmWave systems. Also, we compare it with the achievable rate upper bound obtained by the fully digital system exploiting ZF precoding for a large number of antennas. §.§ ZF Precoding Now, we utilize the estimated equivalent channel for downlink ZF precoding. To study the best achievable rate performance of the proposed scheme, we first assume that the equivalent channel is estimated in the high signal-to-noise ratio (SNR) regime, e.g. E_P→∞. Then, the baseband digital ZF precoder 𝐖_eq∈ℂ^N× N based on 𝐇_eq is given by 𝐖_eq=𝐇_eq^∗(𝐇_eq^T𝐇_eq^∗)^-1=[𝐰_eq,1,… ,𝐰_eq,N], where 𝐰_eq,k∈ℂ^N× 1 is the k-th column of the ZF precoder, for user k. As each user is equipped with only one RF chain, one superimposed signal is received at each user at each time instant with hybrid transceivers. The received signal at user k after receive beamforming can be expressed as: y_ZF^k = ω_k^H𝐇_k^T𝐅_RFβ𝐰_eq,k x_k (desired signal) + ∑_j=1,j≠ k^N ω_k^H𝐇_k^T𝐅_RFβ𝐰_eq,j x_j (interference) + ω_k^H𝐳_MS,k (noise), where x_k∈ℂ^1× 1 is the transmitted symbol from the BS to user k in the desired cell, E[|x_k|^2]=E_s, E_s is the average transmitted symbol energy for each user, β=√(1/tr(𝐖_eq𝐖_eq^H)) is the transmission power normalization factor, and the effective noise part 𝐳_MS,k∼𝒞𝒩(0,σ_MS^2𝐈). Then we express the signal-to-interference-plus-noise ratio (SINR) of user k as SINR_ZF^k=β^2E_s/σ_MS^2. In the sequel, we study the performance of the considered hybrid mmWave system. For simplicity, we assume the mmWave channels of all the users have the same Rician K-factor, i.e., υ_k = υ, ∀ k. §.§ Performance Upper Bound of ZF Precoding Now, exploiting the SINR expression in (<ref>), we summarize the upper bound on the achievable rate per user of ZF precoding in a theorem at the top of this page. From Equation (<ref>), we see that the upper bound on the achievable rate per user of the proposed hybrid ZF precoding depends on the Rician K-factor, υ. We can further observe that the upper bound on the achievable rate per user also depends on the BS beamforming matrix 𝐅_RF designed in step 1 of the proposed CSI estimation. With an increasing number of antennas at the BS, the communication channels are more likely to be orthogonal. Therefore, it is interesting to evaluate the asymptotic upper bound R_HB^upper for the case of a large number of antennas. We note that, even if the number of antennas equipped at the BS is sufficiently large, the required number of RF chains is still equal to the number of users in the hybrid mmWave system; the result is summarized in Corollary <ref> at the top of this page. From Equation (<ref>), we have the intuitive observation that the performance of the proposed hybrid precoding is mainly determined by the equipped numbers of antennas and RF chains.
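Continuing the same sketch (our own; it reuses H, F, Q, and H_eq_hat from the previous snippets), the ZF precoder, the normalization factor β, and the per-user SINRs follow directly; with an accurate equivalent-channel estimate the off-diagonal (interference) gains are nearly nulled, consistent with (<ref>).

import numpy as np

W = H_eq_hat.conj() @ np.linalg.inv(H_eq_hat.T @ H_eq_hat.conj())   # ZF precoder W_eq
beta = 1.0 / np.sqrt(np.trace(W @ W.conj().T).real)                 # power normalization
E_s, sigma2_MS = 1.0, 0.1

# Effective downlink gains G[k, j] = omega_k^H H_k^T F_RF (beta * w_j):
G = np.array([[Q[:, k].conj() @ (H[k].T @ (F @ (beta * W[:, j])))
               for j in range(N)] for k in range(N)])
sig = np.abs(np.diag(G)) ** 2 * E_s
intf = (np.abs(G) ** 2 * E_s).sum(axis=1) - sig
sinr = sig / (intf + sigma2_MS)
print(10 * np.log10(sinr))    # near-equal per-user SINRs; interference nearly nulled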
§.§ Performance of Fully Digital System In this section, we derive the achievable rate performance of a fully digital mmWave system in the large-numbers-of-antennas regime. The obtained analytical results in this section will be used for comparison with the considered hybrid system in the simulation section. To this end, for the fully digital mmWave system, we assume that each user is equipped with one RF chain and P antennas. The P-antenna array equipped at each user can provide 10log_10(P) dB array gain. We note that the number of antennas equipped at the BS is M and the number of RF chains equipped at the BS is equal to the number of antennas. The channel matrix for user k is given by 𝐇_k^T=𝐡_k^∗𝐡_BS,k^T. We assume that the CSI is perfectly known to the users and the BS in the fully digital system, so as to illustrate the maximal performance gap between the proposed structure and the perfect case. Therefore, the achievable rate per user upper bound of the fully digital system is summarized in the following Corollary 2. In the large-numbers-of-antennas regime, the asymptotic achievable rate per user of the fully digital system is bounded above by R_FD ⩽ R_FD^upper, where, as M→∞, R_FD^upper → log_2[1+(MP/N)(E_s/σ_MS^2)] almost surely. The result follows from procedures similar to the proof in Appendix A. In the large-numbers-of-antennas regime, based on (<ref>) and (<ref>), it is interesting to observe that with an increasing Rician K-factor υ, the performance upper bounds of the two considered structures coincide. § SIMULATION AND DISCUSSION In this section, we present numerical results to validate our analysis. We consider a single-cell hybrid mmWave system. In Figure <ref>, we present a comparison between the achievable rate per user of the hybrid system and the fully digital system for M=100, N=10, and a Rician K-factor of υ_k =2, ∀ k. First, our simulation results verify the tightness of the derived upper bounds in (<ref>) and (<ref>). It can be observed from Figure <ref> that, even for a small value of the Rician K-factor, our proposed channel estimation scheme with ZF precoding can achieve a considerably high sum rate due to its interference suppression capability. In addition, the performance gap between the fully digital system and the hybrid system is small, and it is determined by the ratio of the strongest AoA component to the scattering component. In Figure <ref>, we illustrate the effectiveness of the proposed non-sparse mmWave channel estimation algorithm. We assume perfect channel estimation with M=100, N=4, and P=16. For non-sparse mmWave channels, we assume υ_k =1, ∀ k. In Figure <ref>, we compare the achievable rates using the proposed hybrid algorithm and the algorithm proposed by <cit.> for sparse and non-sparse mmWave channels. For sparse single-path channels, the achievable rate of the proposed algorithm matches that of the algorithm proposed in <cit.>. For non-sparse mmWave channels, with the number of multi-paths N_l=8, we observe that the proposed algorithm achieves a better system performance than the algorithm proposed in <cit.>. The reason is that the proposed algorithm takes the scattering components into account and exploits the strongest AoAs of all the users to suppress the MU interference. In contrast, the algorithm proposed in <cit.>, which aims to maximize the desired signal energy, does not suppress the MU interference as effectively as our proposed algorithm.
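For a rough side-by-side at the parameters of the earlier sketches, Corollary 2's bound is a one-liner (our own snippet; it reuses the sinr vector from the ZF sketch above):

import numpy as np

M, P, N = 64, 4, 4
snr_dB = np.arange(-10, 21, 5)                                 # E_s / sigma_MS^2 in dB
R_fd_upper = np.log2(1 + (M * P / N) * 10 ** (snr_dB / 10))    # Corollary 2 bound
R_hb = np.log2(1 + sinr).mean()                                # hybrid per-user rate
print(np.round(R_fd_upper, 2), round(float(R_hb), 2))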
Furthermore, Figure <ref> also illustrates that a significant achievable rate gain is brought by the proposed channel estimation and ZF precoding over a simple analog-only beamforming steering scheme.§ CONCLUSIONS In this paper, we proposed a low-complexity mmWave channel estimation for the MU hybrid mmWave systems, which is applicable for both sparse and non-sparse mmWave channel environments. The achievable rate performance of ZF precoding based on the proposed channel estimation scheme was derived and compared with the achievable rate of fully digital systems. The analytical and simulation results indicated that the proposed scheme can approach the rate performance achieved by the fully digital system with sufficient large Rician K-factors.§ APPENDIX §.§ Proof of Theorem 1The average achievable rate per user of ZF precoding is given byR_HB=E_H_S{log _2[ 1+[ tr[ ( 𝐇_eq^T𝐇_eq^∗)^-1]] ^-1E_sσ _MS^2] } . First, we introduce some preliminaries. Since 𝐇_eq^T𝐇_eq^∗ is a positive definite Hermitian matrix, by eigenvalue decomposition, it can be decomposed as 𝐇_eq^T𝐇_eq^∗=𝐔Λ 𝐕^H, Λ∈ℂ^N× N is the positive diagonal eigenvalue matrix, while 𝐕∈ℂ^N× N and 𝐔∈ℂ^N× N are unitary matrixes, 𝐔=𝐕^H. The sum of the eigenvalues of 𝐇_eq^T𝐇_eq^∗ equals to the trace of matrix Λ. Then we can rewrite the power normalization factor in (<ref>) as N/tr[ (𝐇_eq^T𝐇_eq^∗)^-1] =[ Ni=11/Nλ _i^-1]^-1, In addition, f(x)=x^-1 is a strictly decreasing convex function and exploiting the convexity, we have the following results <cit.>: [ Ni=11/Nλ _i^-1] ^-1⩽Ni=11/N[ ( λ _i^-1) ^-1] =Ni=11/Nλ _i. Therefore, based on (<ref>) and (<ref>), we have the following inequality:1/tr[ ( 𝐇_eq^T𝐇_eq^∗) ^-1] ⩽Ni=1 1/N^2λ _i=1/N^2tr[ 𝐇_eq^T𝐇_eq^∗] . From (<ref>), Equation (<ref>) can be rewritten as (<ref>) in Theorem 1. IEEEtran10url@samestyle Kwan_5G V. W. S. Wong, R. Schober, D. W. K. Ng, and L.-C. Wang, Key Technologies for 5G Wireless Systems.AZhang2015 J. A. Zhang, X. Huang, V. Dyadyuk, and Y. J. Guo, “Massive hybrid antenna array for millimeter-wave cellular communications,” IEEE Wireless Commun., vol. 22, no. 1, pp. 79–87, Feb. 2015.Dai2016 L. Dai, X. Gao, S. Han, C. L. I, and X. Wang, “Beamspace channel estimation for millimeter-wave massive MIMO systems with lens antenna array,” 2016. [Online]. Available: <http://arxiv.org/abs/1607.05130v1> Kokshoorn2016 M. Kokshoorn, H. Chen, P. Wang, Y. Li, and B. Vucetic, “Millimeter wave MIMO channel estimation using overlapped beam patterns and rate adaptation,” 2016. [Online]. Available: <https://arxiv.org/abs/1603.01926v2> JR:Kwan_massive_MIMO D. W. K. Ng, E. S. Lo, and R. Schober, “Energy-Efficient Resource Allocation in OFDMA Systems with Large Numbers of Base Station Antennas,” IEEE Trans. Commun., vol. 11, no. 9, pp. 3292–3304, 2012.Yang2015 N. Yang, L. Wang, G. Geraci, M. Elkashlan, J. Yuan, and M. D. Renzo, “Safeguarding 5G wireless communication networks using physical layer security,” IEEE Commun. Mag., vol. 53, no. 4, pp. 20–27, Apr. 2015.Bogale2015 T. E. Bogale and L. B. Le, “Massive MIMO and millimeter wave for 5G wireless HetNet: Potentials and challenges,” IEEE Veh. Technol. Mag., vol. 11, no. 1, pp. 64–75, Mar. 2016.Marzetta2010 T. L. Marzetta, “Noncooperative cellular wireless with unlimited numbers of base station antennas,” IEEE Trans. Wireless Commun., vol. 9, no. 11, pp. 3590–3600, Nov. 2010.Swindlehurst2014 A. L. Swindlehurst, E. Ayanoglu, P. Heydari, and F. Capolino, “Millimeter-wave massive MIMO: the next wireless revolution?” IEEE Commun. Mag., vol. 52, no. 9, pp. 
56–62, Sept. 2014.Deng2015 Y. Deng, L. Wang, K. K. Wong, A. Nallanathan, M. Elkashlan, and S. Lambotharan, “Safeguarding massive MIMO aided hetnets using physical layer security,” in Intern. Conf. on Wireless Commun. Signal Process. (WCSP), Oct. 2015, pp. 1–5.Sohrabi2016 F. Sohrabi and W. Yu, “Hybrid digital and analog beamforming design for large-scale antenna arrays,” IEEE J. Select. Topics in Signal Process., vol. 10, no. 3, pp. 501–513, Apr. 2016.Rappaport2015 T. S. Rappaport, G. R. MacCartney, M. K. Samimi, and S. Sun, “Wideband millimeter-wave propagation measurements and channel models for future wireless communication system design,” IEEE Trans. Commun., vol. 63, no. 9, pp. 3029–3056, Sept. 2015.Bjornson2016 E. Björnson, E. G. Larsson, and T. L. Marzetta, “Massive MIMO: Ten myths and one critical question,” IEEE Commun. Mag., vol. 54, no. 2, pp. 114–123, Feb. 2016.Heath2016a R. W. Heath, N. G. Prelcic, S. Rangan, W. Roh, and A. M. Sayeed, “An overview of signal processing techniques for millimeter wave MIMO systems,” IEEE J. Select. Topics in Signal Process., vol. 10, no. 3, pp. 436–453, April 2016.Ni2016 W. Ni and X. Dong, “Hybrid block diagonalization for massive multiuser MIMO systems,” IEEE Trans. Commun., vol. 64, no. 1, pp. 201–211, Jan. 2016.Alkhateeb2015 A. Alkhateeb, G. Leus, and R. W. Heath, “Limited feedback hybrid precoding for multi-user millimeter wave systems,” IEEE Trans. Wireless Commun., vol. 14, no. 11, pp. 6481–6494, Nov. 2015.Ayach2014 O. E. Ayach, S. Rajagopal, S. Abu-Surra, Z. Pi, and R. W. Heath, “Spatially sparse precoding in millimeter wave MIMO systems,” IEEE Trans. Wireless Commun., vol. 13, no. 3, pp. 1499–1513, Mar. 2014.Han2015 S. Han, C. l. I, Z. Xu, and C. Rowell, “Large-scale antenna systems with hybrid analog and digital beamforming for millimeter wave 5G,” IEEE Commun. Mag., vol. 53, no. 1, pp. 186–194, Jan. 2015.Hur2016 S. Hur, S. Baek, B. Kim, Y. Chang, A. F. Molisch, T. S. Rappaport, K. Haneda, and J. Park, “Proposal on millimeter-wave channel modeling for 5G cellular system,” IEEE J. Select. Topics in Signal Process., vol. 10, no. 3, pp. 454–469, Apr. 2016.Alkhat2014 A. Alkhateeb, O. E. Ayach, G. Leuz, and R. W. Heath, “Channel estimation and hybrid precoding for millimeter wave cellular systems,” IEEE J. Sel. Topics in Signal Process., vol. 8, no. 5, pp. 831–846, Oct. 2014.Buzzi2016 S. Buzzi and C. D'Andrea, “Doubly massive mmWave MIMO systems: Using very large antenna arrays at both transmitter and receiver,” 2016. [Online]. Available: <https://arxiv.org/abs/1607.07234v1> Al-Daher2012 Z. Al-Daher, L. P. Ivrissimtzis, and A. Hammoudeh, “Electromagnetic modeling of high-frequency links with high-resolution terrain data,” IEEE Antennas and Wireless Propagation Lett., vol. 11, pp. 1269–1272, Oct. 2012.book:wireless_comm D. Tse and P. Viswanath, Fundamentals of wireless communication.1em plus 0.5em minus 0.4emCambridge University Press, 2005.Trees2002 H. L. V. Trees, Optimum array processing: Part IV of detection, estimation, and modulation theory.1em plus 0.5em minus 0.4emJohn Wiley & Sons, Inc., 2002.Yang2013 H. Yang and T. L. Marzetta, “Performance of conjugate and zero-forcing beamforming in large-scale antenna systems,” IEEE J. Select. Areas Commun., vol. 31, no. 2, pp. 172–179, Mar. 2013.book:infotheory T. M. Cover and J. A. Thomas, Elements of Information Theory.1em plus 0.5em minus 0.4emNew York: Wiley, 1991.
http://arxiv.org/abs/1702.08130v1
{ "authors": [ "Lou Zhao", "Derrick Wing Kwan Ng", "Jinhong Yuan" ], "categories": [ "cs.IT", "math.IT" ], "primary_category": "cs.IT", "published": "20170227030724", "title": "Multiuser Precoding and Channel Estimation for Hybrid Millimeter Wave MIMO Systems" }
Relaxation processes in a complex system can be characterized by pronounced memory effects, which are manifested in the non-exponential decay or oscillatory behavior of the time correlation functions (TCF's) of the corresponding dynamical variables <cit.>. Hence, one can reasonably assume that directly accounting for the memory effects could simplify the theoretical description of the system behavior. A convenient way to examine this is to consider a system in which the origin of the memory effects is well studied. As an example of such a physical system one can take a high-density (viscous) liquid, a supercooled liquid and/or a glass <cit.>, where the memory effects appear in single-particle dynamics as well as in collective particle dynamics <cit.>. From a theoretical point of view, the most adequate and convenient way to take these effects into account is to use the so-called memory function formalism <cit.>, which is associated with the projection operator technique of Zwanzig and Mori <cit.> as well as with the recurrence relations method suggested by Lee <cit.>. Remarkably, the memory function formalism allows one to represent the equation of motion for a variable (originally, for the velocity of a particle in a liquid) in the form of a non-Markovian integro-differential equation, which contains a characteristic component – a memory function. For the case when the velocity of a particle represents such a variable, the integro-differential equation is known as the generalized Langevin equation (GLE). Herein, if the time behavior of the memory function is defined, then the solution of the GLE will determine the evolution of the variable (the velocity), and the corresponding TCF – the velocity autocorrelation function (VACF) – can be computed <cit.>. Nevertheless, although the technique of projection operators gives a prescription for calculating the memory function, the direct computations are very difficult to realize for real physical systems <cit.>. In this work, we shall demonstrate that a solution of the GLE can be derived by simple interpolation of its solutions for the memory-free case, the strong-memory case and the case with a moderate memory. Further, the resulting solution will contain a parameter which represents a quantitative measure of the memory effects. Let us take the velocity v_α of an αth particle in a liquid as a dynamical variable. Then, the GLE can be written as <cit.> dv_α(t)/dt=-Ω_1^2∫_0^t dτ M_1(t-τ)v_α(τ) +f(t), where f(t) is the random force per unit mass, M_1(t) is the normalized first-order memory function, which is related to the random force f(t) by the second fluctuation-dissipation theorem <cit.>, and Ω_1^2 is the first-order frequency parameter arising from the normalization of M_1(t). Note that it is assumed that ⟨ v_α(0)f(t)⟩=0. Multiplying Eq. (<ref>) by v_α(0), taking an appropriate ensemble average ⟨…⟩ and then applying the projection operator technique, it is possible to obtain a hierarchical chain of integro-differential non-Markovian equations in terms of TCF's: dM_i-1(t)/dt + Ω_i^2∫_0^t dτ M_i(t-τ)M_i-1(τ)=0, i = 1, 2,… . If the VACF is chosen as the initial TCF of this hierarchy, then the GLE will be the first equation (i.e. i=1) of this chain[Originally, equation (<ref>) written for the velocity variable was called the GLE. Nevertheless, the related integro-differential equation written for the corresponding time correlation function is also referred to as the GLE in modern studies <cit.>.].
In this case, M_0(t)=⟨ v_α(0) v_α(t)⟩/⟨ v_α(0)^2⟩ is the VACF; M_i(t) is the TCF of the corresponding dynamical variable, which has the meaning of the ith-order memory function <cit.>, whereas Ω_i^2 is the ith-order frequency parameter. Note that all the TCF's of chain (<ref>), including the VACF, are normalized to unity for convenience, i.e. lim_t → 0 M_i-1(t)=1, i=1, 2, …. Moreover, applying the operator of the Laplace transformation ℒ̂=∫_0^∞ dt e^-st[…] to the equations of chain (<ref>), one obtains the infinite continued fraction <cit.>: M_0(s)= 1/(s+Ω_1^2 M_1(s)) = 1/(s+Ω_1^2/(s+Ω_2^2/(s+Ω_3^2/(s+…)))). It is necessary to note that the ith-order memory function M_i(t) corresponds to a concrete relaxation process, the physical meaning of which can be established directly from consideration of the analytical expression for M_i(t). On the other hand, the squared characteristic time scale τ^2 of the relaxation process can be defined as <cit.> τ_i-1^2 =|∫_0^∞ t M_i-1(t)dt| =|lim_s → 0 (-∂ M_i-1(s)/∂ s)|, i=1, 2, …. The time scales τ_i-1 corresponding to the TCF's of chain (<ref>) form a hierarchy, which has the following peculiarity: the quantity τ_i defines the memory time scale for the TCF M_i-1(t), the relaxation time of which is τ_i-1. As a quantitative measure of the memory effects for the ith relaxation level it is convenient to use the dimensionless parameter <cit.> δ_i=τ_i-1^2/τ_i^2, i=1, 2, …, where τ_i^2 is defined by Eq. (<ref>). This simple criterion allows one to determine whether the considered process is characterized by a strong statistical memory or exhibits memoryless behavior. Namely, one has δ→ 0 for the strong memory limit, δ≃ 1 for the case of moderate memory, and δ→∞ for the memory-free limit. It is remarkable that for the three cases determined by (<ref>) there are known exact solutions of the GLE written for the VACF <cit.>: dM_0(t)/dt+Ω_1^2∫_0^t dτ M_1(t-τ)M_0(τ)=0, where the first frequency parameter of a many-particle system (say, a liquid), in which atoms/molecules interact through a spherical potential U(r), can be written as <cit.> Ω_1^2=(4π n/3m)∫_0^∞ dr g(r) r^3 [d/dr((1/r)(dU(r)/dr)) +(3/r^2)(dU(r)/dr)]. Here n is the number density, m is the particle mass, and g(r) is the pair distribution function; Eq. (<ref>) is the first equation of the chain (<ref>), i.e. at i=1. Let us consider these three cases in detail. First, one assumes that Eq. (<ref>) describes the behavior of a system without memory, i.e. τ_0^2≫τ_1^2. For this case one has δ→∞. Here, the memory function M_1(t) has to decay extremely fast and, therefore, it can be taken in the following form: M_1(t)=2τ_1δ(t), where δ(t) is the Dirac delta function and τ_1 is the time scale of M_1(t). By substituting Eq. (<ref>) into Eq. (<ref>) and solving the resulting equation, one obtains the VACF M_0(t) with an ordinary exponential dependence: M_0(t)=e^-Ω_1^2τ_1 t. As known, such a dependence is correct for the velocity correlation function of a Brownian particle with the relaxation time τ_0=(Ω_1^2τ_1)^-1=m/ξ_β, where m and ξ_β are the mass and the friction coefficient, respectively. As for the self-diffusion phenomena in a liquid, where the particle (α) moving with the velocity v_α(t) is identical to all the others, exponential relaxation of the VACF is a rather strongly idealized model <cit.>. Second, one considers the opposite situation, appropriate to a system whose single-particle dynamics is characterized by a strong memory, i.e. τ_0^2≪τ_1^2. For this case one obtains from definition (<ref>) that δ→ 0.
The most relevant form of the non-decaying memory function can be taken as M_1(t)=H(t), where H(t) is the Heaviside step function, equal to 1 for t ≥ 0 and to 0 for t < 0. By substituting Eq. (<ref>) under the convolution integral of Eq. (<ref>), one obtains dM_0(t)/dt=-Ω_1^2∫_0^t dτ M_0(τ). After solving this equation one finds that M_0(t)=cos(Ω_1 t). It should be noted that the characteristic time scale τ_0 is not included in solution (<ref>), and that M_0(t) does not satisfy the condition of attenuation of correlations at t →∞ <cit.>. Actually, a system with an ideal memory “remembers” its initial state and returns periodically to this state, functionally reproducing it with precision.

The two cases considered above are limiting ones. However, it is known that the single-particle dynamics of real systems is characterized by a memory, albeit the memory is not ideal. Therefore, for this case one can write that the parameter δ takes values from the range 0<δ≪∞. There is also a physically correct solution of Eq. (<ref>) for this region of δ, and the solution was first obtained by Yulmetyev (see Ref. <cit.>). Let us consider the case when the time scales of the initial TCF and of its memory become comparable, i.e. δ≃ 1. This case can be realized under the time-scale invariance of the relaxation processes in many-particle systems <cit.>. In addition, if the time dependencies of the VACF and of its memory function are approximately identical, then one can write M_1(t) ≃ M_0(t). Taking into account relation (<ref>) and applying the Laplace transformation to Eq. (<ref>), we obtain an ordinary quadratic equation: Ω_1^2 M_0^2(s)+s M_0(s)-1=0. By solving the last equation and by applying the operator of the inverse Laplace transformation ℒ̂^-1, we find the VACF M_0(t)=(1/(Ω_1 t))J_1(2Ω_1 t), where J_1(…) is the Bessel function of the first kind. Then, for the squared characteristic time scales of the VACF and of its memory function, one obtains from definition (<ref>) and Eq. (<ref>) that τ_0^2=τ_1^2=1/(2Ω_1^2). Thus, the quantity proportional to the inverse frequency parameter determines both squared time scales. Solution (<ref>) describes the damped oscillatory behavior of the function M_0(t). It is worth noting that this TCF scenario is observed frequently in such physical systems as electron gas models, a linear chain of neighbor-coupled harmonic oscillators and others <cit.>, as well as in the collective particle dynamics of simple liquids <cit.>, where the TCF of the local density fluctuations is considered.

The three considered cases allow one to find the solution of Eq. (<ref>) at three different values of the memory parameter: δ=1 with solution (<ref>); the memory-free case with solution (<ref>) corresponds to δ→∞, whereas solution (<ref>) was obtained for the system with ideal memory at δ→ 0. So, if we generalize Eqs. (<ref>) and (<ref>) into a unified functional dependence, then we can obtain the following solution of Eq. (<ref>) in terms of the Mittag-Leffler function <cit.>: M_0(t)=∑_k=0^∞ (-Ω_1^2 τ_1^(1-ν) t^(ν+1))^k/Γ(ν k+k+1), where Γ(…) is the Gamma function, τ_1 is the time scale of the memory function M_1(t), the frequency parameter is determined by (<ref>), and the dimensionless parameter ν is the redefined memory measure: ν=(2/π)arctan(1/δ), with ν∈[0;1]. For the strong-memory case, i.e. δ→ 0, Eq. (<ref>) gives the series expansion of Eq. (<ref>), whereas for the memory-free limit with δ→∞ we obtain the expansion of Eq. (<ref>).
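The series solution above is straightforward to evaluate numerically. The following Python sketch (our illustration) computes its partial sums for several values of the memory parameter δ, using the model value Ω_1^2 = 5 ps^-2 and the prescription τ_1=1/√(2Ω_1^2 δ) that are employed later in the text; the function names are our own.

import math

def nu_from_delta(delta):
    # nu = (2/pi) * arctan(1/delta): delta -> 0 gives nu -> 1 (strong memory),
    # delta -> infinity gives nu -> 0 (memory-free limit)
    return (2.0 / math.pi) * math.atan(1.0 / delta)

def m0_series(t, omega2, tau1, nu, kmax=200):
    # Partial sum of M0(t) = sum_k (-omega2 * tau1^(1-nu) * t^(nu+1))^k / Gamma(k*(nu+1)+1),
    # evaluated in log space to avoid overflow in the Gamma function.
    if t == 0.0:
        return 1.0
    log_a = math.log(omega2) + (1.0 - nu) * math.log(tau1) + (nu + 1.0) * math.log(t)
    s = 0.0
    for k in range(kmax):
        mag = math.exp(k * log_a - math.lgamma(k * (nu + 1.0) + 1.0))
        s += mag if k % 2 == 0 else -mag
    return s

omega2 = 5.0  # model frequency parameter Omega_1^2, in ps^-2
for delta in (0.1, 1.0, 10.0):
    nu = nu_from_delta(delta)
    tau1 = 1.0 / math.sqrt(2.0 * omega2 * delta)  # tau_1 prescription used in the text
    vals = ", ".join(f"{m0_series(t, omega2, tau1, nu):+.3f}" for t in (0.5, 1.0, 2.0))
    print(f"delta = {delta:5.1f}, nu = {nu:.3f}:  M0 at t = 0.5, 1, 2 ps -> {vals}")

In the limits δ→∞ and δ→ 0 the printed values approach exp(-Ω_1^2 τ_1 t) and cos(Ω_1 t), respectively, as expected from the two limiting solutions.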
Relation (<ref>) has a stretched-exponential behavior at short times and demonstrates an inverse power-law relaxation at long times. It should be noted that a similar solution of integro-differential equations was proposed earlier by Stanislavskii in Ref. <cit.> on the basis of the fractional calculus technique.

Moreover, one can obtain the general solution of Eq. (<ref>) by interpolation of all three solutions (<ref>), (<ref>) and (<ref>). Assuming a smooth parabolic crossover from the strong-memory and memory-free limits to the case of moderate memory (δ =1), we obtain M_0(t) = 4(ν-1/2)^2∑_k=0^∞(-Ω_1^2 τ_1^(1-ν) t^(ν+1))^k/Γ(ν k+k+1)+[1-4(ν-1/2)^2]∑_k=0^∞(-Ω_1^2 t^2)^k/(k!(k+1)!), where the parameter ν is defined by (<ref>). The significance of the first contribution increases as δ approaches zero or as δ→∞. Thus, for example, Eq. (<ref>) gives a standard exponential relaxation in the memory-free limit with δ→∞. The second contribution in (<ref>) provides the Gaussian behavior for the short-time range t<Ω_1^-1. Further, the numerical coefficient before the sum in the second contribution of Eq. (<ref>) dominates in the intermediate region, where the time scales of the memory function and of the VACF are comparable. This contribution becomes maximal at δ=1, whereas the first term vanishes [Notice that the second contribution in expression (<ref>) can be considered as a particular case of the Mainardi function <cit.> (or the Wright function <cit.>), which includes such functions as the Gaussian function, the Dirac delta-function and others.].

Both the memory-free situation with absolutely uncorrelated particle motions and the strong-memory case with pronounced correlations in the particle velocities, related to the vibrational particle dynamics, are only limiting cases for real liquids. A real liquid (a fluid) tends to the first one at high temperatures and low density, whereas it approaches the other limit at low temperatures and large values of the density. Moreover, there is a regime of dense fluids, where the surrounding medium of neighboring particles has an appreciable impact on a forward-moving particle (α), causing the so-called vortex diffusion <cit.> and the existence of a power-law decay of M_0(t) with time. This indicates memory effects in the single-particle dynamics, albeit the memory is far from being strong. Our numerical estimations of the memory effects for self-diffusion processes in Lennard-Jones fluids <cit.> have found that the parameter δ at the reduced temperature T^*≈ 1 and the reduced density n^*=0.5 has a value ≈ 5.9, and that it increases with the growth of temperature and the decrease of density. The parameter δ achieves the value ≈ 8.7 at the temperature T^*≈ 4.8 and the density n^*=0.5, and demonstrates a non-linear smooth Markovization. To illustrate the aforesaid, we present in Fig. <ref> the density and temperature dependence of the memory parameter δ calculated for the VACF of Lennard-Jones fluids <cit.>.

Moreover, as is known, the Bessel function of Eq. (<ref>) has the following asymptotic behavior <cit.>: J_1(z)=√(2/(π z)){cos(z-3π/4)+ 𝒪(|z|^-1)}. Then, returning to Eq. (<ref>), it is easy to verify that in the case of moderate memory this equation yields the following long-time tail: M_0(t) ∝ t^-3/2, which is a well-known feature of the VACF's of liquids <cit.>.
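This asymptotic decay can be checked directly. Below is a minimal Python sketch (our illustration; numpy and scipy are assumed available) comparing the moderate-memory solution M_0(t)=J_1(2Ω_1 t)/(Ω_1 t) with the t^(-3/2) envelope that follows from the asymptotic form of J_1, again using the model value Ω_1^2 = 5 ps^-2.

import numpy as np
from scipy.special import j1

omega1 = np.sqrt(5.0)  # Omega_1 in ps^-1, i.e. Omega_1^2 = 5 ps^-2 (model value)

def m0_moderate(t):
    # Moderate-memory (delta = 1) solution: M0(t) = J1(2*Omega_1*t) / (Omega_1*t)
    t = np.asarray(t, dtype=float)
    out = np.ones_like(t)
    nz = t > 0.0
    out[nz] = j1(2.0 * omega1 * t[nz]) / (omega1 * t[nz])
    return out

# From J1(z) ~ sqrt(2/(pi z)) cos(z - 3pi/4), the envelope of |M0| decays as
# t^(-3/2) with prefactor 1 / (Omega_1 * sqrt(pi * Omega_1)).
t = np.array([5.0, 10.0, 20.0, 40.0])
envelope = t**-1.5 / (omega1 * np.sqrt(np.pi * omega1))
for ti, mi, ei in zip(t, m0_moderate(t), envelope):
    print(f"t = {ti:5.1f} ps:  M0 = {mi:+.2e},  t^(-3/2) envelope = {ei:.2e}")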
The long-time tails of the VACF of a simple liquid can be reproduced within the microscopic mode-coupling theories (see, for example, <cit.>), according to which the tail is related to a viscous mode. The approach presented in this study is consistent with the mode-coupling theories and provides a theoretical description in which information about the complex correlated vibrational-diffusive motions of the particles is included in a single parameter δ. Furthermore, it is seen that Eqs. (<ref>) and (<ref>) contain such characteristics of a many-particle system as the characteristic time scale of the memory τ_1 and the averaged frequency Ω_1^2, which is defined through the radial distribution function g(r) and the particle interaction potential U(r). These quantities can be calculated from their definitions for concrete systems, or may be taken from molecular dynamics simulations (see, for example, <cit.>). On the other hand, while analyzing experimental data, the term δ may be used as a fitting parameter to provide a quantitative estimation of the memory effects in the considered system.

As an example, we demonstrate in Fig. 2 the results of Eq. (<ref>) for a model system with the frequency parameter Ω_1^2=5 ps^-2. The memory parameter δ varies within the interval from 0.001 to 6.593, whereas the memory time scale τ_1 has been defined here as τ_1=1/√(2Ω_1^2δ). Thus, the presented results include situations of a system with a strong memory, when the ratio between the time scales of the VACF and of its memory function achieves the value 0.001, and situations when these time scales are comparable. One can see from this figure that the oscillations in the VACF disappear with the attenuation of the memory effects (at Markovization) <cit.>. These oscillations will disappear completely at δ→∞. The stronger the memory effects in the single-particle dynamics of a system, the more considerable the amplitude of the fluctuations and their decay.

The dependence presented by Eq. (<ref>) is supported by experimental results. Experimental and molecular dynamics studies of simple liquids such as liquid tin <cit.>, liquid germanium and lithium <cit.>, liquid selenium <cit.>, liquid sodium <cit.>, Lennard-Jones fluids <cit.> and other systems <cit.> have allowed one to discover relaxation of the VACF with the signatures of pronounced memory effects, which is manifested, in particular, in the algebraic decay of the VACF. To test Eq. (<ref>) for such liquids, we perform the computations for the cases of liquid tin and liquid lithium, for which the VACF's were found earlier from molecular dynamics simulations <cit.>. In Figs. 3 and 4, numerical solutions of Eq. (<ref>) are compared with the results of molecular dynamics simulations <cit.>. As for Fig. 3, the full circles represent the VACF of liquid tin at T=523 K (the melting temperature T_m=505.08 K) calculated by classical molecular dynamics <cit.>, and the solid curve shows the solution of the GLE (<ref>) with Ω_1^2≃ 130 ps^-2, δ≃ 0.75 and τ_1=1/√(2Ω_1^2δ). It is seen that Eq. (<ref>) agrees well with the molecular dynamics simulations over the whole time interval. The value of the memory parameter reveals pronounced memory effects in the self-diffusion phenomena in liquid tin near its melting point. Strong memory effects, which take place in the single-particle dynamics of a liquid near its melting point, can be related to the structural transformations of the system <cit.>. Fig.
4 shows the VACF of liquid lithium at T=1073 K (the melting temperature T_m=453.65 K) determined from the molecular dynamics simulations <cit.> as full circles, whereas the solid line corresponds to the solution of Eq. (<ref>) with the frequency parameter Ω_1^2≃ 120 ps^-2, the memory parameter δ≃ 10 and the time scale of the memory function τ_1=1/√(2Ω_1^2δ). As may be seen from Fig. 4, the theoretical results and the data of the molecular dynamics simulations <cit.> are in good agreement. The VACF of liquid lithium at this temperature practically does not oscillate. This is a direct indication of weak memory effects in the system, which can be caused by the absence of structural order in the system and by the diffusive character of the single-particle dynamics <cit.>.

Finally, the presented approach may also be used to investigate the microscopic dynamics in more complex liquids, whose interatomic potentials include angular-dependent contributions. The memory function M_1(t) of these systems, which is the TCF of the stochastic force f(t), will represent a combination of certain coupled relaxation modes. As a result, the VACF will have a more complex time behavior.

We thank M. Howard Lee for useful discussions. This work was partially supported by the subsidy allocated to Kazan Federal University for the state assignment in the sphere of scientific activities.

A.V. Mokshin, R.M. Yulmetyev, P. Hänggi. A simple measure of memory for dynamical processes described by the generalized Langevin equation. Phys. Rev. Lett. 95, 200601 (2005).
P. Sibani, H.J. Jensen. Stochastic Dynamics of Complex Systems: From Glasses to Evolution. (Imperial College Press, London, 2013).
H.E. Stanley. Introduction to Phase Transitions and Critical Phenomena. (Oxford University Press, London, 1971).
J.P. Boon, S. Yip. Molecular Hydrodynamics. (McGraw-Hill, New York, 1980).
J.P. Hansen, I.R. McDonald. Theory of Simple Liquids. (Academic Press, London, 1986).
P. Hänggi, P. Talkner, M. Borkovec. Reaction-rate theory: fifty years after Kramers. Rev. Mod. Phys. 62, 251 (1990).
R. Zwanzig. Nonequilibrium Statistical Mechanics. (University Press, Oxford, 2001).
R. Zwanzig. Memory Effects in Irreversible Thermodynamics. Phys. Rev. 124, 983 (1961).
H. Mori. Transport, Collective Motion, and Brownian Motion. Prog. Theor. Phys. 33, 423 (1965).
M.H. Lee. Can the Velocity Autocorrelation Function Decay Exponentially? Phys. Rev. Lett. 51, 1227 (1983).
P. Grigolini, A. Rocco, B.J. West. Fractional calculus as a macroscopic manifestation of randomness. Phys. Rev. E 59, 2603 (1999).
A.A. Stanislavsky. Memory effects and macroscopic manifestation of randomness. Phys. Rev. E 61, 4752 (2000).
A.V. Mokshin. Self-consistent approach to the description of relaxation processes in classical multiparticle systems. Theor. Math. Phys. 183, 449 (2015).
R.M. Yulmetyev, A.V. Mokshin, P. Hänggi. Diffusion time-scale invariance, randomization processes, and memory effects in Lennard-Jones liquids. Phys. Rev. E 68, 051201 (2003).
A.V. Mokshin, R.M. Yulmetyev, P. Hänggi. Diffusion processes and memory effects. New J. Phys. 6, 7 (2004).
A.V. Mokshin, R.M. Yulmetyev, P. Hänggi. Relaxation time scales in collective dynamics of liquid alkali metals. J. Chem. Phys. 121, 7341 (2004).
A.V. Mokshin, R.M. Yulmetyev, R.M. Khusnutdinov, P. Hänggi. Collective dynamics in liquid aluminum near the melting temperature: Theory and computer simulation. J. Exp. Theor. Phys. 103, 841 (2006).
A.V. Mokshin, R.M. Yulmetyev, R.M. Khusnutdinoff, P. Hänggi. Analysis of the dynamics of liquid aluminium: Recurrent relation approach. J. Phys.: Condens. Matter 19, 046209 (2007).
P. Resibois, M. de Leener. Classical Kinetic Theory of Fluids. (Wiley, New York, 1977).
N.N. Bogoliubov. Problems of Dynamic Theory in Statistical Physics. (Oak Ridge, Tenn., 1960).
R.M. Yulmetyev. Simple model for the calculation of the coefficient of self-diffusion in a liquid. Phys. Lett. A 56, 387 (1976).
M.H. Lee. Remarks on hyperbolic secant memory functions. J. Phys.: Condens. Matter 8, 3755 (1996).
Higher Transcendental Functions. Edited by A. Erdélyi. (McGraw-Hill, New York, 1955).
F. Mainardi. Fractional relaxation-oscillation and fractional diffusion-wave phenomena. Chaos, Solitons and Fractals 7, 1461 (1996).
I.M. de Schepper, J.C. van Rijs, E.G.D. Cohen. Sound propagation gaps from the Navier-Stokes equations. Physica A 134, 1 (1985).
M. Abramowitz, I. Stegun. Handbook of Mathematical Functions. (National Bureau of Standards, U.S. GPO, Washington, DC, 1964).
B.J. Alder, T.E. Wainwright. Decay of the Velocity Autocorrelation Function. Phys. Rev. A 1, 18 (1970).
D. Levesque, W.T. Ashurst. Long-Time Behavior of the Velocity Autocorrelation Function for a Fluid of Soft Repulsive Particles. Phys. Rev. Lett. 33, 277 (1974).
M.E. Tuckerman, B.J. Berne. Stochastic molecular dynamics in systems with multiple time scales and memory friction. J. Chem. Phys. 95, 4389 (1991).
S. Munejiri, T. Masaki, Y. Ishii, T. Kamiyama, Y. Senda, F. Shimojo, K. Hoshino, T. Itami. Structure Studies of Liquid Tin by Neutron Scattering Experiments and Ab-Initio Molecular-Dynamics Simulations. J. Phys. Soc. Jpn. 70, 268 (2001).
S. Munejiri, F. Shimojo, K. Hoshino, T. Masaki, Y. Ishii, T. Kamiyama, T. Itami. Structure and self-diffusion of liquid germanium studied by a first-principles molecular-dynamics simulation. J. Non-Cryst. Solids 312, 182 (2002).
F. Shimojo, K. Hoshino, Y. Zempo. Electronic and atomic structures of supercritical fluid selenium: ab initio molecular dynamics simulations. J. Non-Cryst. Solids 312, 290 (2002).
K. Hoshino, F. Shimojo, S. Munejiri. Mode-Coupling Analyses of Atomic Dynamics for Liquid Ge, Sn and Na. J. Phys. Soc. Jpn. 71, 119 (2002).
M.J. Nuevo, J.J. Morales, D.M. Heyes. Temperature and density dependence of the self-diffusion coefficient and Mori coefficients of Lennard-Jones fluids by molecular dynamics simulation. Phys. Rev. E 55, 4217 (1997).
R. Morgado, F.A. Oliveira, G.G. Batrouni, A. Hansen. Relation between Anomalous and Normal Diffusion in Systems with Memory. Phys. Rev. Lett. 89, 100601 (2002).
R.M. Khusnutdinoff, A.V. Mokshin. Local Structural Order and Single-Particle Dynamics in Metallic Glass. Bulletin of RAS: Physics 74, 640 (2010).
A.V. Mokshin, B.N. Galimzyanov. Steady-State Homogeneous Nucleation and Growth of Water Droplets: Extended Numerical Treatment. J. Phys. Chem. B 116, 11959 (2012).
http://arxiv.org/abs/1702.08182v1
{ "authors": [ "Anatolii V. Mokshin", "Bulat N. Galimzyanov" ], "categories": [ "cond-mat.stat-mech", "cond-mat.dis-nn" ], "primary_category": "cond-mat.stat-mech", "published": "20170227082637", "title": "A model solution of the generalized Langevin equation: Emergence and Breaking of Time-Scale Invariance in Single-Particle Dynamics of Liquids" }
Douglas J. Hemingway (Department of Earth and Planetary Science, University of California, Berkeley, USA) and Isamu Matsuyama (Lunar and Planetary Laboratory, University of Arizona, Tucson, Arizona, USA).

Isostatic equilibrium is commonly defined as the state achieved when there are no lateral gradients in hydrostatic pressure, and thus no lateral flow, at depth within the lower viscosity mantle that underlies a planetary body's outer crust. In a constant-gravity Cartesian framework, this definition is equivalent to the requirement that columns of equal width contain equal masses. Here we show, however, that this equivalence breaks down when the spherical geometry of the problem is taken into account. Imposing the “equal masses” requirement in a spherical geometry, as is commonly done in the literature, leads to significant lateral pressure gradients along internal equipotential surfaces, and thus corresponds to a state of disequilibrium. Compared with the “equal pressures” model we present here, the “equal masses” model always overestimates the compensation depth—by ∼27% in the case of the lunar highlands and by nearly a factor of two in the case of Enceladus.

§ INTRODUCTION

Rocky and icy bodies with radii larger than roughly 200 km typically have figures that are close to the expectation for hydrostatic equilibrium (i.e., the surface conforms roughly to a gravitational equipotential) because their interiors are weak enough that they behave like fluids on geologic timescales. Because of high effective viscosities in their cold exteriors, however, these bodies can maintain some non-hydrostatic topography, even on long timescales. This non-hydrostatic topography may be supported in part by bending and membrane stresses in the lithosphere [e.g., ], but over long timescales, and especially when considering broad topographic loads, or loads that formed at a time when the lithosphere was weak, the rocks may yield until much of the support comes from buoyancy—that is, the crustal material essentially floats on the higher density, lower viscosity mantle material beneath it. This is the classic picture of isostatic equilibrium, first discussed by Pratt and Airy in the 1850s, and is often invoked as a natural mechanism by which gravity anomalies associated with topography can be compensated [e.g., ].

The two standard end-member models for isostatic compensation are Airy, involving lateral variations in crustal thickness, and Pratt, involving lateral variations in crustal density. The problem of modeling Airy-type isostatic compensation can be framed as the need to compute the deflection of the interface between the crust and the underlying higher density, lower viscosity material (we address Pratt-type compensation in the Supporting Information, section S2). Given the known surface topography (h_t), the Airy-compensated basal topography (h_b) can be computed as h_b=-h_t(ρ_c/Δρ), where ρ_c is the density of the crustal material and Δρ is the density contrast at the crust/mantle interface. The negative sign reflects the fact that the basal topography is inverted with respect to the surface topography if both h_t and h_b are taken as positive upward relief with respect to their respective reference levels (i.e., the hypothetical equipotential surfaces to which the density interfaces would conform if the layers were all inviscid).
This equation follows from requiring equal hydrostatic pressures at equal depths (or equivalently, requiring equal masses in columns of equal width), and ensures that, regardless of the topography, there are no horizontal pressure gradients and thus there is no lateral flow at depth within the fluid mantle (there is also no vertical flow because vertical pressure gradients are balanced by gravity). Hence—neglecting mantle dynamics and the slow relaxation of the crust itself—we have a state of equilibrium.

Equation (<ref>) implicitly assumes a Cartesian geometry and a uniform gravity field. However, for long wavelength loads or when the compensation depth is a substantial fraction of the body's radius, it becomes necessary to take into account the spherical geometry of the problem. In this case, the requirement of equal masses in equal width columns leads to (section <ref>) h_b=-h_t(ρ_c/Δρ)(R_t/R_b)^2, where R_t and R_b are the mean radii corresponding to the top and bottom of the crust, respectively. This expression (or its equivalent) is widely used in the literature [e.g., ]. However, as we show in section <ref>, this is not equivalent to the requirement of equal pressures at equal depths, which instead leads to h_b=-h_t(ρ_c/Δρ)(g_t/g_b), where g_t and g_b are the mean gravitational accelerations at the top and bottom of the crust, respectively. Although the distinction between “equal masses” and “equal pressures” isostasy has long been recognized [e.g., ], it has widely been ignored because the effect is deemed negligible in the case of the Earth, where the crustal thickness is small compared to the radius. However, the difference between equations (<ref>) and (<ref>) becomes increasingly significant as the compensation depth becomes an increasingly large fraction of the total radius, and can therefore be important for bodies like the Moon, Mars, Ceres, Pluto and the outer solar system's many mid-sized moons.

Arguably, this basic picture of isostatic equilibrium suffers from some internal inconsistencies in that, on one hand, it assumes that the crust is stiff or viscous enough that the topography does not relax away completely, while on the other hand assuming that the crust is weak enough that it cannot support vertical shear stresses, meaning that radial pressure gradients are the only available means of supporting the topographic loads against gravity. Besides handling the spherical geometry properly, a fully self-consistent conception of the problem would have to account for the internal stresses, the elastic and rheological behaviors of the crust and mantle, the nature of the topographic loads (i.e., where and when they were emplaced), and the system's time-varying response to those loads. Elastic stresses may prevent or at least slow the progression towards equilibrium, especially in the case of relatively short-wavelength loads that deflect, but do not readily break, the lithosphere. Accordingly, many authors construct analytical models based on thin elastic shell theory [e.g., ], wherein the loads are supported by a wavelength-dependent combination of bending stresses, membrane stresses, and buoyancy (in which the “equal masses” versus “equal pressures” distinction remains important). Still more sophisticated approaches exist as well. <cit.>, for example, develops a more generalized analytical elastic shell model that allows for tangential loading and laterally variable elastic properties.
Taking another approach, <cit.> solves the elastic-gravitational problem numerically, accommodating the spherical geometry and the force balances in a self-consistent manner.In the limit of a weak lithosphere (the isostatic limit), however, elastic stresses do not play such a significant role in supporting the topography. Some authors thus define isostatic equilibrium as the state of minimum deviatoric stresses within the lithosphere [e.g., ]. This state is achieved in such models by splitting the crustal thickening (or thinning) into a suitable combination of surface and basal loads—in reality, the applied loads may have been entirely at the surface, entirely at the base, or some combination of the two; the combination that yields the state of minimum deviatoric stresses is merely intended to represent the final stress state after the lithosphere has finished failing or deforming in response to the applied loads. This approach aligns well with the basic concept of complete isostatic equilibrium in that it involves supporting the topography mainly by buoyancy, but with the additional advantage of maintaining internal consistency—deviatoric stresses do not go precisely to zero, and can thus keep the topography from relaxing away completely. Whereas implementation of this solution is far from straightforward [e.g.,  and references therein], our simplified approach, in spite of its limitations, leads to a result that closely matches the minimum deviatoric stress result of <cit.>.One further consideration is the fact that relaxation may continue even after the initial gross isostatic adjustments have taken place. Provided that a topographic load is broad, and that the underlying layer is much weaker, the system will respond relatively rapidly at first, on a timescale governed mainly by the viscosity of the underlying weaker mantle, until reaching a quasi-static equilibrium in which the lateral flow of that weak material is reduced to nearly zero. Relaxation does, however, continue after this point, and may not necessarily be negligible, especially when the base of the crust is relatively warm and ductile [e.g., ]. Nevertheless, this latter stage of relaxation will usually be slow compared with the timescale for reaching isostatic equilibrium, and so we will often use the word “equilibrium” without qualification, even as we recognize the system may be continuing to evolve at some slow rate following the initial isostatic adjustment. We stress, however, that this is merely an assumption, and that caution should be used in cases where the materials are likely to relax more rapidly.Notwithstanding the above complicating factors, the basic concept of isostatic equilibrium, in which topographic loads are supported entirely by buoyancy (i.e., without appeal to elastic stresses), has been widely and productively adopted as a useful approximation in Earth and planetary sciences. To the extent that such a simplified model remains desirable for analyses of planetary topography, it should at least be consistent with its core principle of avoiding lateral gradients in hydrostatic pressure at depth. 
This paper's modest goal is to show that, when accounting for the spherical geometry, the “equal pressures" model, equation (<ref>), provides a very good approximation that is consistent with this principle, while the commonly used “equal masses" model, equation (<ref>), does not.In section <ref>, we show how we obtained equations (<ref>) and (<ref>), and we compare the two in terms of the resulting internal pressure anomalies. In section <ref>, we show how the two different conceptions of isostasy affect spectral admittance and geoid-to-topography ratio (GTR) models, addressing implications including crustal thickness estimates for the specific examples of the lunar and Martian highlands, as well as the ice shell thickness on Enceladus. Finally, we make concluding remarks in section <ref>.§ ANALYSIS§.§ Framework Consider a body consisting of concentric layers, each having uniform density, and with the layer densities increasing monotonically inward. The shape of the i^th layer can be expanded in spherical harmonics as H_i(θ,ϕ)=R_i+∑_l=1^∞∑_m=-l^lH_ilmY_lm(θ,ϕ) where θ and ϕ are the colatitude and longitude, respectively, Y_lm(θ,ϕ) are the spherical harmonic functions for degree-l and order-m [e.g., ], R_i is the mean radius of the i^th layer, and where the coefficients H_ilm describe the departure from spherical symmetry for the i^th layer. Each layer's shape is primarily a figure determined by hydrostatic equilibrium, but may include smaller additional non-hydrostatic topographic anomalies. Hence, we take the shape coefficients to be the sum of their hydrostatic and non-hydrostatic parts, H_ilm=H_ilm^hyd+H_ilm^nh. Since isostatic equilibrium concerns providing support for the departures from hydrostatic equilibrium, it is only the non-hydrostatic topographic anomalies, H_ilm^nh, that are involved in the isostatic equations. To a good approximation, the hydrostatic equilibrium figure can be described by a degree-2 spherical harmonic function. Hence, this complication generally does not apply to the topographic relief at degrees 3 and higher, where H_ilm^hyd=0. A possible exception is fast-rotating bodies, for which higher order hydrostatic terms may be non-negligible [].We assume that the outermost shell (the “crust") does not relax on the timescale relevant for achieving isostatic equilibrium, whereas we take the layer below the crust (the “mantle") to be inviscid. Given the observed topographic relief at the surface, H_tlm^nh, we are concerned with finding the basal relief, H_blm^nh, required to deliver isostatic equilibrium. We consider the condition of isostatic equilibrium to be satisfied when there are no lateral variations in hydrostatic pressure along equipotential surfaces within the inviscid layer below the crust. The hydrostatic pressure at radial position r is given by p(r,θ,ϕ)=∫_r^∞ρ(r',θ,ϕ)g(r')dr' where g(r)=GM(r)/r^2 is the gravitational acceleration at radius r, and where M(r) is the enclosed mass at radius r. Here, the small lateral variations in gravitational acceleration are neglected. 
Although lateral variations in gravity can approach a few percent due to rotation and tidal forces, this simplification is justified on the grounds that the quantity of interest is often the ratio g_t/g_b, as in equation (<ref>) for example, and this ratio may be regarded as laterally constant. A datum equipotential surface with mean radius R_d can be approximated to first order as E_d(θ,ϕ)=R_d-ΔU(R_d,θ,ϕ)/g(R_d), where g(R_d) is the mean gravitational acceleration at r=R_d and where ΔU(r,θ,ϕ) represents the lateral variations in the potential (section S1.3), given by ΔU(r,θ,ϕ)=U^rot(r,θ,ϕ)+U^tid(r,θ,ϕ)+∑_l=1^∞∑_m=-l^l U_lm(r)Y_lm(θ,ϕ), where U^rot and U^tid are the laterally varying rotational and (if applicable) tidal potentials, respectively, and where the coefficients U_lm account for the gravitation associated with the topography and thus depend on the layer shapes and densities, and are given by U_lm(r)=-(4π G r/(2l+1))∑_i=1^N Δρ_i H_ilm × (R_i/r)^(l+2) for r≥ R_i, or × (r/R_i)^(l-1) for r<R_i, where Δρ_i is the density contrast between layer i and the layer above it.

Below, we examine two distinct conceptions of the condition of Airy-type isostasy in spherical coordinates: 1) the requirement of equal masses in columns (or cones) of equal solid angle; and 2) the requirement of the absence of lateral pressure gradients at depth, where pressure is assumed to be hydrostatic. We use simplifying assumptions to obtain compact expressions for each case. We then evaluate these simple models by computing lateral pressure variations along the equipotential surface defined by (<ref>). A good model should yield little or no lateral pressure gradients along this equipotential surface. For both models, we consider a two-layer body having a crust with density ρ_c, and an underlying mantle with density ρ_m, where ρ_m>ρ_c. For clarity and simplicity in the following derivations, we assume the body is not subjected to rotational or tidal deformation so that H_ilm^hyd=0. The top and bottom of the crust have mean radii R_t and R_b, respectively. A portion of the body has some positive topographic anomaly at the top of the crust (h_t>0) and a corresponding compensating isostatic root (inverted topography) at the base of the crust (h_b<0) (Figure S2a). A reference datum is defined at an arbitrary internal radius R_d<R_b+h_b.

§.§ Equal Masses in Equal Columns

The mass above radius r, in any given column, taken as a narrow wedge, or cone, is given by M=∫_r^∞ρ(r',θ,ϕ)r'^2 sinθ dθ dϕ dr', where θ and ϕ are colatitude and longitude, respectively. Equating the wedge mass in the absence of the topographic anomaly (left side of Figure S2a) with the wedge mass in the presence of the topographic anomaly (right side of Figure S2a) yields Δρ∫_R_b+h_b^R_b r^2 dr=ρ_c∫_R_t^R_t+h_t r^2 dr, where Δρ=ρ_m-ρ_c. After integrating, and some manipulation, we obtain h_b=-h_t(ρ_c/Δρ)(R_t/R_b)^2 (1+h_t/R_t+h_t^2/(3R_t^2))(1+h_b/R_b+h_b^2/(3R_b^2))^-1. If |h_t|≪ R_t and |h_b|≪ R_b, this expression reduces to equation (<ref>): h_b≈-h_t(ρ_c/Δρ)(R_t/R_b)^2.

§.§ Equal Pressures at Depth

Equating the hydrostatic pressure in the absence of the topographic anomaly (left side of Figure S2a) with the hydrostatic pressure in the presence of the topographic anomaly (right side of Figure S2a), in both cases evaluated at r=R_d, we obtain Δρ∫_R_b+h_b^R_b g(r)dr=ρ_c∫_R_t^R_t+h_t g(r)dr, where again Δρ=ρ_m-ρ_c. If |h_t|≪ R_t, then over the small radial distance between R_t and R_t+h_t, the integrand on the right hand side has a nearly constant value of g_t, the mean gravitational acceleration at r=R_t.
Similarly, if |h_b|≪ R_b, then on the left hand side, the integrand is always close to g_b, the mean gravitational acceleration at r=R_b. Hence, if the relief at the density interfaces is small, then it is a good approximation to write Δρ g_b∫_R_b+h_b^R_b dr≈ρ_c g_t∫_R_t^R_t+h_t dr, leading to equation (<ref>): h_b≈-h_t(ρ_c/Δρ)(g_t/g_b). Because it is often more convenient to specify ρ_c/ρ̅ (the ratio of the crustal density to the body's bulk density), it is useful to note that g_t/g_b is given by (section S1.5) g_t/g_b=(R_b/R_t)^2/[1+((R_b/R_t)^3-1)ρ_c/ρ̅].

Note that the mass anomalies associated with the topographic anomaly and its compensating isostatic root will displace the datum equipotential surface slightly—an effect that is captured in (<ref>), but which we have neglected in the derivation of equation (<ref>). If the radial displacement of this equipotential surface is h_d, the hydrostatic pressure at this depth (within the mantle) will be different by approximately ρ_m g_d h_d, where g_d is the mean gravitational acceleration on this datum surface. For comparison, <cit.> include the equivalent of this additional term (which they call ρ_m g h_g) in their pressure balance (their equation 3), though they neglect the radial variation in gravity and the fact that the shape of this equipotential surface will vary with depth (i.e., they evaluate h_g only at the exterior surface, using their equation 25). In the limit of complete isostatic compensation, their h_g goes to zero (substitute their eq. 28 into their eq. 25). Hence, in the isostatic limit, their equation 3 is identical to ours, except that we also account for the radial variation in gravity. In reality, due to the finite thickness of the crust, the displacement h_d will not be precisely zero (it goes to zero for <cit.> owing to some approximations they make to simplify their equation 25), but because we are concerned only with relatively small departures from hydrostatic equilibrium, h_d is minuscule, and, as we show in the next section, in spite of our neglecting the ρ_m g_d h_d term in the above derivation, our equation (<ref>) is nevertheless an excellent approximation when the goal is to make internal equipotential surfaces isobaric.

§.§ Comparison

In spite of the simplifications used to obtain equations (<ref>) and (<ref>), it is clear that the two results are not equivalent. To illustrate the difference, consider the case of a 2-layer body (high viscosity crust, low viscosity mantle) that is initially spherically symmetric (for simplicity, we again assume no tidal or rotational deforming potentials). We impose some topography at the top of the crust, H_t(θ,ϕ), and compute the amplitude of the corresponding basal topography, H_b(θ,ϕ), using either (<ref>), (<ref>), or (<ref>). In each case, we then use (<ref>) to compute the hydrostatic pressure at depth. Again, we are ultimately concerned with eliminating pressure gradients along equipotential surfaces at depth, not just at a specific radial position, so we compute internal pressure along the equipotential surface defined by (<ref>). Figure <ref> illustrates an example in which the surface topography is described by a single non-zero coefficient, H_t30, which is longitudinally symmetric, allowing us to plot the internal pressure anomalies on an internal reference equipotential surface as a function of colatitude only; a numerical sketch of the two compensation rules for this configuration follows.
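The following Python sketch (our illustration, not part of the original analysis) evaluates the “equal masses” and “equal pressures” rules and the g_t/g_b ratio for the same illustrative two-layer parameters quoted in the example below (ρ_c=1000 kg/m^3, ρ_m=3000 kg/m^3, R_t=100 km, R_b=80 km, h_t=200 m):

def gt_over_gb(x, c):
    # g_t/g_b for a two-layer body; x = R_b/R_t, c = rho_c/rho_bar
    return x**2 / (1.0 + (x**3 - 1.0) * c)

rho_c, rho_m = 1000.0, 3000.0   # crust and mantle densities, kg/m^3
R_t, R_b = 100e3, 80e3          # mean radii of the top and bottom of the crust, m
h_t = 200.0                     # surface topographic anomaly, m

x = R_b / R_t
rho_bar = rho_m * x**3 + rho_c * (1.0 - x**3)   # bulk density of the body
d_rho = rho_m - rho_c

h_b_masses = -h_t * (rho_c / d_rho) * (R_t / R_b)**2
h_b_pressures = -h_t * (rho_c / d_rho) * gt_over_gb(x, rho_c / rho_bar)

print(f"rho_c/rho_bar = {rho_c / rho_bar:.2f}")          # ~0.49
print(f"equal masses:    h_b = {h_b_masses:7.1f} m")     # ~ -156 m
print(f"equal pressures: h_b = {h_b_pressures:7.1f} m")  # ~  -84 m

The “equal masses” rule deepens the root to about -156 m for a 200 m surface load, whereas the “equal pressures” rule gives about -84 m, which is the origin of the systematic overestimate of the basal topography discussed below.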
For reference, when the basal topography, H_blm, is zero, there are of course significant lateral variations in pressure along the equipotential surface, meaning we have a state of disequilibrium (dotted black line in Figure <ref>). When the topography is compensated according to equation (<ref>), the pressure anomalies are reduced, but not eliminated (dash-dotted blue line). When the topography is compensated according to equation (<ref>), the internal pressures change substantially, but large lateral pressure gradients remain, and so we still have a state of disequilibrium (dashed red line). When the topography is compensated according to equation (<ref>), on the other hand, the lateral pressure gradients nearly vanish (solid gold line), as expected if the assumptions made in section <ref> are reasonable. Hence, only equation (<ref>) describes a condition that is close to equilibrium. In this example, we arbitrarily set ρ_c=1000 kg/m^3, ρ_m=3000 kg/m^3, R_t=100 km, R_b=80 km, such that ρ_c/ρ̅≈0.49, R_d=50 km, and we impose a topographic anomaly with amplitude H_t30=200 m, 1% of the mean crustal thickness. The fundamental conclusions are not, however, sensitive to these choices: compared with equation (<ref>), equation (<ref>) always gives rise to larger pressure anomalies.

When compensation depths are shallow, g_t≈ g_b and R_t≈ R_b, so that equations (<ref>) and (<ref>) both reduce to the usual Cartesian form of the isostatic balance. However, when compensation depths become non-negligible fractions of the body's total radius, equations (<ref>), (<ref>), and (<ref>) begin to diverge. When the crustal density is less than ∼70% of the body's bulk density, then g_t<g_b (section S1.5, Figure S1), meaning that equation (<ref>) generally overestimates the amplitude of the basal topography. When the crustal density is more than ∼70% of the body's bulk density (as is likely the case for Mars, for example), g_t may be larger than g_b, and so equation (<ref>) could underestimate the amplitude of the basal topography. However, of the three equations, (<ref>) always yields the largest (most overestimated) isostatic roots because R_t>R_b and because, assuming density does not increase with radius, ρ̅≤ρ̅_b (section S1.5).

§ IMPLICATIONS

§.§ Spectral Admittance

In combined studies of gravity and topography, it is common to use the spectral admittance as a means of characterizing the degree or depth of compensation [e.g., ]. The mass associated with any surface topography (represented using spherical harmonic expansion coefficients, H_tlm) produces a corresponding gravity anomaly. However, if the topography is compensated isostatically—that is, if there is some compensating basal topography (H_blm)—the gravity anomaly can be reduced. Using equation (S13), we can compute the surface gravity anomaly caused by the topography at the top and bottom of the crust, yielding g_lm=((l+1)/(2l+1))4π G(ρ_c H_tlm+Δρ H_blm(R_b/R_t)^(l+2)), where again, ρ_c is the density of the crust, Δρ is the density contrast at the crust/mantle interface, and where we have neglected any contributions that may arise from asymmetries on deeper density interfaces.
Taking the degree-l admittance, Z_l, to be the ratio of gravitational acceleration (g_lm) to topography (H_tlm), and assuming complete Airy compensation, with the basal topography (H_blm) computed via the “equal masses” model, equation (<ref>), we have Z_l=((l+1)/(2l+1))4π Gρ_c(1-(R_b/R_t)^l). Equation (<ref>) is commonly used to generate a model admittance spectrum under the assumption of complete Airy compensation. Comparison of the model admittance with the observed admittance, along with an assumption about the crustal density, then allows for an estimate of the compensation depth, d=R_t-R_b. However, when we instead compute the basal topography using the “equal pressures” equation (<ref>), we obtain Z_l=((l+1)/(2l+1))4π Gρ_c(1-(g_t/g_b)(R_b/R_t)^(l+2)), where again g_t/g_b is given by equation (<ref>).

Compared with equation (<ref>), equation (<ref>) will always lead to an overestimate of the compensation depth. That is, at any given spherical harmonic degree, using equation (<ref>) yields the same admittance with a smaller compensation depth (Figure <ref>a). Equivalently, for any given compensation depth, the model admittance spectrum computed via equation (<ref>) is larger than that obtained via equation (<ref>) (Figure <ref>b). The discrepancy is always greatest at low spherical harmonic degrees (e.g., focusing on degree 3, and assuming that ρ_c/ρ̅=0.6, would yield a compensation depth estimate that is roughly ∼50% too large) and vanishes in the short wavelength limit (e.g., the compensation depth overestimate reduces to <5% for l>50); a numerical sketch of the two admittance models is given below. For clarity and simplicity, we have not included the finite amplitude (or terrain) correction [e.g., ] in the above admittance equations. When the topographic relief is a non-negligible fraction of the body's radius, it may be important to include this effect, which will in general lead to larger admittances. However, the point of this paper is not so much to advocate the use of equation (<ref>) in the admittance calculation, but rather, more fundamentally, to advocate the use of equation (<ref>) in computing the basal topography.

It is worth emphasizing that the degree-2 admittance is complicated by the effects of rotational and possibly tidal deformation. A meaningful admittance calculation for degree-2 requires first removing the tidal/rotational effects from both the gravity and topography signals. Only the remaining, non-hydrostatic, signals should then be used in the admittance calculation. Unfortunately, determination of the hydrostatic components of the degree-2 gravity and topography signals requires knowledge of the body's interior structure, which may not be readily available. In such cases, the easiest option would be to simply exclude the degree-2 terms in the admittance analysis. Alternatively, one might appeal to self-consistency arguments to constrain the internal structure and admittance simultaneously [e.g., ].

§.§ Geoid-to-Topography Ratio (GTR)

A closely related concept is the geoid-to-topography ratio (GTR), which has been used to estimate regional crustal thicknesses in situations where local isostasy can be reasonably expected [e.g., ].
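Before continuing with the GTR, here is the admittance sketch referenced above. It is our own Python illustration of the two Z_l expressions; the parameter values (ρ_c=930 kg/m^3, R_b/R_t=0.9, ρ_c/ρ̅=0.6) are assumed, generic icy-body numbers rather than values quoted in the text.

import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def Z_l(l, rho_c, x, c, model):
    # Degree-l admittance for complete Airy compensation;
    # x = R_b/R_t, c = rho_c/rho_bar. Output in s^-2; 1 mGal/km = 1e-8 s^-2.
    pref = (l + 1.0) / (2.0 * l + 1.0) * 4.0 * math.pi * G * rho_c
    if model == "masses":
        comp = x**l
    else:  # "pressures"
        gt_gb = x**2 / (1.0 + (x**3 - 1.0) * c)
        comp = gt_gb * x**(l + 2)
    return pref * (1.0 - comp)

for l in (3, 10, 50):
    zm = Z_l(l, 930.0, 0.9, 0.6, "masses") / 1e-8
    zp = Z_l(l, 930.0, 0.9, 0.6, "pressures") / 1e-8
    print(f"l = {l:2d}:  Z = {zm:6.2f} (equal masses) vs {zp:6.2f} (equal pressures) mGal/km")

The printed spectra show the expected behavior: at degree 3 the “equal pressures” admittance is substantially larger for the same compensation depth, while by degree 50 the two models nearly coincide.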
<cit.> showed that the GTR is primarily a function of crustal thickness and can be computed from a compensation model according to GTR=R_t∑_l=l_min^l_max W_l Q_l, where W_l is a weighting coefficient for degree-l, and Q_l is a transfer function relating the degree-l gravitational potential and topography coefficients: Q_l=C_lm/H_lm. The weighting coefficients reflect the fact that the geoid is most strongly affected by the longest wavelengths (lowest spherical harmonic degrees) and are constructed based on the topographic power spectrum, S_hh(l)=∑_m=-l^l H_lm^2, according to W_l=S_hh(l)/∑_i=l_min^l_max S_hh(i) <cit.>. Q_l may be regarded as another expression for the spectral admittance (Z_l), except that it employs dimensionless gravitational potential coefficients rather than acceleration, and so we denote it here with a distinct symbol (also in accord with <cit.>). Neglecting the effects of topography on boundaries other than the surface and the crust/mantle interface, we can use equation (S12) to rewrite (<ref>) as Q_l=(3/(2l+1))(ρ_c/(R_t ρ̅))(1+(Δρ H_blm/(ρ_c H_tlm))(R_b/R_t)^(l+2)). Assuming complete Airy compensation, with the basal topography (H_blm) computed via the “equal masses” equation (<ref>), we then have GTR=∑_l=l_min^l_max W_l(3/(2l+1))(ρ_c/ρ̅)(1-(R_b/R_t)^l). If we instead compute the basal topography using the “equal pressures” equation (<ref>), we obtain GTR=∑_l=l_min^l_max W_l(3/(2l+1))(ρ_c/ρ̅)(1-(g_t/g_b)(R_b/R_t)^(l+2)). For reference, the linear dipole moment approximation <cit.> can be written GTR=(3/2)(ρ_c/ρ̅)(1-R_b/R_t).

Each model thus suggests a different relationship between the GTR and the compensation depth (Figure <ref>). For shallow compensation depths (i.e., less than ∼4% of the body's radius assuming ρ_c/ρ̅=0.6), the “equal pressures” conception of isostasy and the linear dipole moment approximation give similar results. For deeper compensation depths, the dipole moment approach begins to overestimate the GTR. In all cases, the “equal masses” approach underestimates the GTR, and therefore leads to an overestimate of the compensation depth (Figure <ref>).

§.§ Application to the Moon, Mars, and Icy Satellites

Here we consider a few realistic examples to illustrate how crustal thickness estimates differ when one adopts the “equal pressures” rather than the “equal masses” model. Note that the “equal pressures”-based crustal thickness values discussed in this section should not be taken as definitive new estimates. There are many subtleties to the interpretation of gravity and topography data that we have ignored here. The tools discussed in sections <ref> and <ref> will comprise only one component of any meaningful analysis of planetary crusts. <cit.>, for instance, provide a comprehensive analysis that incorporates geochemical and mechanical equilibrium considerations to complement their GTR analysis. An updated estimate of the Martian highlands crustal thickness would require careful consideration of a wide range of relevant factors and an exploration of the permissible parameter space. Here, we wish only to illustrate, using a few specific examples, the importance of adjusting the admittance and GTR components of the analysis to incorporate the “equal pressures” isostatic equilibrium model rather than the “equal masses” model.

For the case of the nearside lunar highlands, <cit.> obtained geoid-to-topography ratios (GTRs) of roughly 14-34 m/km.
Taking the case of a single layer crust (<cit.> also considered dual-layer crusts), with a density of 2900 kg/m^3 (ρ_c/ρ̅≈0.87), this yields a crustal thickness estimate of roughly 22-61 km when the topography is assumed to be in isostatic equilibrium in the “equal masses” sense. Adopting the “equal pressures” model instead leads to crustal thickness estimates of 18-48 km, suggesting that the “equal masses” model overestimates the crustal thickness by up to ∼27% in this case (section S3.1, Figure S6a). For the Martian highlands, <cit.> obtained GTRs of roughly 13-19 m/km, corresponding to crustal thicknesses of roughly 48-73 km, assuming a crustal density of 2900 kg/m^3 (ρ_c/ρ̅≈0.74) and adopting the “equal masses” approach. The “equal pressures” model instead leads to crustal thicknesses of roughly 44-66 km, not as dramatically different as in the case of the lunar highlands, but still indicating that the “equal masses” model overestimates the crustal thickness by ∼10% in the case of the Martian highlands (section S3.1, Figure S6b).

For icy bodies, the ice shell's density can be a considerably smaller fraction of the bulk density, leading to smaller g_t/g_b ratios and therefore even more pronounced differences between the “equal masses” and “equal pressures” isostasy models (Figure S3). In the case of Europa, for example, a crustal density of 930 kg/m^3 corresponds to ρ_c/ρ̅≈0.31, leading the crustal thickness estimates to differ by a factor of roughly two at the lowest spherical harmonic degrees. For Enceladus (ρ_c/ρ̅≈0.58, assuming ρ_c=930 kg/m^3), where the degree-2 and -3 gravity terms have been measured based on a series of Cassini flybys, <cit.> were able to obtain a degree-3 admittance of 14.0±2.8 mGal/km, which allows for a crustal thickness estimate of 30±6 km, adopting the “equal masses” model. Adopting the “equal pressures” model instead leads to a remarkably different estimate of just 17±4 km (section S3.2, Figure S7).
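These two Enceladus central values can be reproduced by inverting the degree-3 admittance expressions above. The Python sketch below is our illustration: ρ_c=930 kg/m^3, ρ_c/ρ̅≈0.58 and Z_3=14.0 mGal/km are taken from the discussion above, while the mean radius of 252.1 km is an assumed value that is not quoted in the text.

import math

G = 6.674e-11
R_t = 252.1e3   # Enceladus mean radius, m (assumed)
rho_c = 930.0   # ice shell density, kg/m^3
ratio = 0.58    # rho_c / rho_bar
Z3_obs = 14.0e-8  # 14.0 mGal/km expressed in s^-2

def Z3(d, model):
    x = (R_t - d) / R_t                        # R_b / R_t for shell thickness d
    pref = (4.0 / 7.0) * 4.0 * math.pi * G * rho_c  # (l+1)/(2l+1) with l = 3
    if model == "masses":
        comp = x**3
    else:
        gt_gb = x**2 / (1.0 + (x**3 - 1.0) * ratio)
        comp = gt_gb * x**5
    return pref * (1.0 - comp)

def invert(model, lo=1e3, hi=100e3):
    # bisection on shell thickness d such that Z3(d) = Z3_obs (Z3 increases with d)
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if Z3(mid, model) < Z3_obs:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

for model in ("masses", "pressures"):
    print(f"equal {model}: shell thickness ~ {invert(model) / 1e3:.0f} km")

Under these assumptions the inversion returns roughly 30 km for the “equal masses” model and roughly 17 km for the “equal pressures” model, consistent with the estimates quoted above.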
As examples, we showed that, in the case of the lunar and Martian highlands, the “equal masses” model could overestimate the crustal thicknesses by ∼27% and ∼10%, respectively. For the case of Enceladus, where the compensation depth may be on the order of 10% of the radius and where the ice shell density is roughly 58% of the bulk density, the “equal masses” model may overestimate the shell thickness by nearly a factor of two. In the case of asymmetric loads (odd harmonics), we additionally note that the “equal masses” and “equal pressures” models will lead to distinct center of mass-center of figure offsets, a factor that could be important for smaller bodies.Whereas, for the sake of clarity, we have focused here on the end-member case of complete isostatic equilibrium (purely buoyant support), the distinction between “equal masses” and “equal pressures” remains important for models in which the topography is supported by a combination of both buoyancy and elastic flexure—a topic that is beyond thescope of this work. While we acknowledge the limitations of the very concept of isostatic equilibrium (see Introduction), our goal here is merely to ensure that isostasy models at least correspond to what they are intended to mean—no lateral flow at depth when topographic loads are supported entirely by buoyancy. That is, in order to be consistent with the basic principle of isostasy, we must be sure to use the “equal pressures” model presented here and not the “equal masses” model. Beyond this simple picture, a fully self-consistent model of a planetary crust and its topography requires consideration of its loading history (i.e., where and when the loads were emplaced), the state of internal stresses (and failures) through time, and the potentially time-varying rheology of the relevant materials, within both the crust and the underlying mantle. Such models could be highly valuable, but only where sufficient clues are available to meaningfully constrain these many factors. In the absence of such information, the condition of isostatic equilibrium, as we have presented it here, is likely to remain a useful model, at least as a reference end member case. This work was initially motivated by a discussion with Bill McKinnon and also benefited from exchanges with Bruce Buffet, Anton Ermakov, Roger Fu, Michael Manga, Tushar Mittal, Francis Nimmo, Gabriel Tobie, and especially Mikael Beuthe. We thank Mark Wieczorek and Dave Stevenson for constructive reviews that improved the manuscript. All data are publicly available, as described in the text. Financial support was provided by the Miller Institute for Basic Research in Science at the University of California Berkeley, and the NASA Gravity Recovery and Interior Laboratory Guest Scientist Program.Hubbard1984,Hemingway2016,Iess2010,Nimmo2011aagu08
http://arxiv.org/abs/1702.08198v2
{ "authors": [ "Douglas J. Hemingway", "Isamu Matsuyama" ], "categories": [ "physics.geo-ph", "astro-ph.EP" ], "primary_category": "physics.geo-ph", "published": "20170227092105", "title": "Isostatic equilibrium in spherical coordinates and implications for crustal thickness on the Moon, Mars, Enceladus, and elsewhere" }
Georgios C. Chasparis
Software Competence Center Hagenberg GmbH, Softwarepark 21, A-4232 Hagenberg, Austria
georgios.chasparis@scch.at

Stochastic Stability Analysis of Perturbed Learning Automata with Constant Step-Size in Strategic-Form Games
(This work has been partially supported by the European Union grant EU H2020-ICT-2014-1 project RePhrase (No. 644235).)

This paper considers a class of reinforcement learning that belongs to the family of Learning Automata and provides a stochastic-stability analysis in strategic-form games. For this class of dynamics, convergence to pure Nash equilibria has been demonstrated only for the fine class of potential games. Prior work primarily provides convergence properties of the dynamics through stochastic approximations, where the asymptotic behavior can be associated with the limit points of an ordinary differential equation (ODE). However, analyzing global convergence through the ODE approximation requires the existence of a Lyapunov or a potential function, which naturally restricts the applicability of these algorithms to a fine class of games. To overcome these limitations, this paper introduces an alternative framework for analyzing stochastic stability that is based upon an explicit characterization of the (unique) invariant probability measure of the induced Markov chain.

§ INTRODUCTION

Recently, multi-agent formulations have been utilized to tackle distributed optimization problems, since communication and computation complexity might be an issue in centralized optimization problems. In such formulations, decisions are usually taken in a repeated fashion, where agents select their next actions based on their own prior experience of the game. The present paper discusses a class of reinforcement-learning dynamics, which belongs to the large family of Learning Automata <cit.>, within the context of (non-cooperative) strategic-form games. In this class of dynamics, agents are repeatedly involved in a game with a fixed payoff matrix, and they need to decide which action to play next having only access to their own prior actions and payoffs. In Learning Automata, agents build their confidence over an action through repeated selection of this action and proportionally to the reward received from this action. Naturally, it has been utilized to analyze human-like (bounded) rationality <cit.>. Reinforcement learning has been applied in evolutionary economics, for modeling human and economic behavior <cit.>. It is also highly attractive for several engineering applications, since agents do not need to know either the actions of the other agents or their own utility function. It has been utilized for system identification and pattern recognition <cit.>, and for distributed network formation and coordination problems <cit.>.

In strategic-form games, the main goal is to derive conditions under which convergence to Nash equilibria can be achieved. In social sciences, deriving such conditions may be important for justifying the emergence of certain social phenomena. In engineering, convergence to Nash equilibria may also be desirable in distributed optimization problems, when the set of optimal solutions coincides with the set of Nash equilibria.
In Learning Automata, deriving conditions under which convergence to Nash equilibria is achieved may not be a trivial task. In particular, there are two main difficulties: a) excluding convergence to pure strategies that are not Nash equilibria, and b) excluding convergence to mixed strategy profiles. As will be discussed in detail in the forthcoming Section <ref>, for some classes of (discrete-time) reinforcement-learning algorithms, convergence to non-Nash pure strategies may occur with positive probability. Moreover, excluding convergence to mixed strategy profiles may only be achieved under strong conditions on the utilities of the agents (e.g., the existence of a potential function).

In the present paper, we consider a class of (discrete-time) reinforcement-learning algorithms introduced in <cit.> that is closely related to existing algorithms for modeling human-like behavior, e.g., <cit.>. The main difference with prior reinforcement-learning schemes lies in a) the step-size sequence, and b) the perturbation (or mutations) term. The step-size sequence is assumed constant, thus introducing a fading-memory effect of past experiences in each agent's strategy. On the other hand, the perturbation term introduces errors in the selection process of each agent. Both features can be used for designing a desirable asymptotic behavior.

We provide an analytical framework for deriving conclusions about the asymptotic behavior of the dynamics that is based on an explicit characterization of the invariant probability measure of the induced Markov chain. In particular, we show that in all strategic-form games satisfying the Positive-Utility Property, the support of the invariant probability measure coincides with the set of pure strategy profiles. This extends prior work, where convergence to mixed strategy profiles could only be excluded under strong conditions on the payoff matrix (e.g., the existence of a potential function). A detailed discussion of the exact contributions of this paper is provided in the forthcoming Section <ref>. At the end of the paper, we also provide a brief discussion of how the proposed framework can be further utilized to provide a more detailed characterization of the stochastically stable states (e.g., excluding convergence to non-Nash pure strategy profiles). Due to space limitations, this analysis is not presented in this paper.

In the remainder of the paper, Section <ref> presents the class of reinforcement-learning dynamics, related work, and the main contribution of this paper. Section <ref> provides the main result of this paper (Theorem <ref>), where the set of stochastically stable states is characterized. A short discussion is also provided on the significance of this result and how it can be utilized to provide further conclusions. Finally, Section <ref> provides the technical derivation of the main result and Section <ref> presents concluding remarks.

Notation:

* For a Euclidean topological space 𝒳⊂ℝ^n, let 𝒩_δ(x) denote the δ-neighborhood of x∈ℝ^n, i.e., 𝒩_δ(x) ≜ {y∈𝒳 : |x-y|<δ}, where |·| denotes the Euclidean distance.

* e_j denotes the unit vector in ℝ^n whose jth entry is equal to 1 and all other entries are equal to 0.

* Δ(n) denotes the probability simplex of dimension n, i.e., Δ(n) ≜ {x∈ℝ^n : x≥0, 1^T x=1}.

* For a set A in a topological space 𝒴, let 𝕀_A:𝒴→{0,1} denote the indicator function, i.e., 𝕀_A(x) ≜ 1 if x∈A, and 0 otherwise.

* δ_x denotes the Dirac measure at x.

* Let A be a finite set and let σ∈Δ(|A|) be any (finite) probability distribution over A.
The random selection of an element of A according to σ will be denoted rand_σ[A]. If σ=(1/|A|,...,1/|A|), i.e., it corresponds to the uniform distribution, the random selection will be denoted by rand_unif[A].

§ REINFORCEMENT LEARNING

§.§ Terminology

We consider the standard setup of finite strategic-form games. Consider a finite set of agents (or players) ℐ = {1,...,n}, and let each agent have a finite set of actions 𝒜_i. Let α_i∈𝒜_i denote any such action of agent i. The set of action profiles is the Cartesian product 𝒜 ≜ 𝒜_1×⋯×𝒜_n; let α=(α_1,...,α_n) be a representative element of this set. We will denote by -i the complementary set ℐ∖{i} and often decompose an action profile as α=(α_i,α_-i). The payoff/utility function of agent i is a mapping u_i(·):𝒜→ℝ. A strategic-form game is defined by the triple ⟨ℐ,𝒜,{u_i(·)}_i⟩. For the remainder of the paper, we will be concerned with strategic-form games that satisfy the Positive-Utility Property.

[Positive-Utility Property] For any agent i∈ℐ and any action profile α∈𝒜, u_i(α)>0.

§.§ Reinforcement-learning algorithm

We consider a form of reinforcement learning that belongs to the general class of learning automata <cit.>. In learning automata, each agent updates a finite probability distribution x_i∈Δ(𝒜_i) representing its beliefs with respect to the most profitable action. The precise manner in which x_i(t) changes at time t, depending on the performed action and the response of the environment, completely defines the reinforcement-learning model.

The proposed reinforcement-learning model is described in Table <ref>. At the first step, each agent i updates its action given its current strategy vector x_i(t). Its selection is slightly perturbed by a perturbation (or mutations) factor λ>0, such that, with a small probability λ, agent i follows a uniform strategy (or, it trembles). At the second step, agent i evaluates its new selection by collecting a utility measurement, while in the last step, agent i updates its strategy vector given its new experience. Here we identify the actions in 𝒜_i with the vertices of the simplex, {e_1,...,e_|𝒜_i|}. For example, if agent i selects its jth action at time t, then e_α_i(t) ≡ e_j. Note that by letting the step size ϵ be sufficiently small, and since the utility function u_i(·) is uniformly bounded on 𝒜, x_i(t)∈Δ(𝒜_i) for all t. In case λ=0, the above update recursion will be referred to as the unperturbed reinforcement learning.

§.§ Related work

§.§.§ Erev-Roth type dynamics

In prior reinforcement learning in games, analysis has been restricted to decreasing step-size sequences ϵ(t) and λ=0. More specifically, in <cit.>, the step-size sequence of agent i is ϵ_i(t) = 1/(ct^ν + u_i(α(t+1))) for some positive constant c and for 0<ν<1 (in the place of the constant step size ϵ of (<ref>)). A comparable model is also used by <cit.>, with ϵ_i(t) = 1/(V_i(t) + u_i(α(t+1))), where V_i(t) is the accumulated benefit of agent i up to time t, which gives rise to an urn process <cit.>. Some similarities are also shared with the Cross learning model of <cit.>, where ϵ(t)=1 and u_i(α(t))≤1, and with its modification presented in <cit.>, where ϵ(t), instead, is assumed decreasing.

The main difference of the proposed reinforcement-learning algorithm (Table <ref>) lies in the perturbation parameter λ>0, which was first introduced and analyzed in <cit.>. A state-dependent perturbation term has also been investigated in <cit.>. The perturbation parameter may serve as an equilibrium selection mechanism, since it excludes convergence to non-Nash action profiles.
It resolved one of the main issues of several (discrete-time) reinforcement-learning algorithms, namely the positive probability of convergence to non-Nash action profiles under some conditions on the payoff function and the step-size sequence. This issue has also been raised by <cit.>. Reference <cit.> considered the model by <cit.> and showed that convergence to non-Nash pure strategy profiles can be excluded as long as c > u_i(α) for all i∈ℐ and ν=1. On the other hand, convergence to non-Nash action profiles was not an issue with the urn model of <cit.> (as analyzed in <cit.>). However, the use of an urn-process type step-size sequence significantly reduces the applicability of the reinforcement-learning scheme. In conclusion, the perturbation parameter λ>0 may serve as a design tool for reinforcing convergence to Nash equilibria without necessarily employing an urn-process type step-size sequence. For engineering applications this is a desirable feature.

Although excluding convergence to non-Nash pure strategies can be guaranteed by using λ>0, establishing convergence to pure Nash equilibria may still be an issue, since it further requires excluding convergence to mixed strategy profiles. As presented in <cit.>, this can be guaranteed only under strong conditions on the payoff matrix. For example, as shown in <cit.>, excluding convergence to mixed strategy profiles requires a) the existence of a potential function, and b) conditions on the second gradient of the potential function. Requiring the existence of a potential function considerably restricts the class of games where equilibrium selection can be described. Furthermore, condition (b) may not easily be verified in games with a large number of players or actions.

§.§.§ Learning automata

Certain forms of learning automata have been shown to converge to Nash equilibria in some classes of strategic-form games. For example, in <cit.>, and for a generalized nonlinear reward-inaction scheme, convergence to Nash equilibrium strategies can be shown in identical-interest games. Similar results are presented in <cit.> for a linear reward-inaction scheme. These convergence results are restricted to games with payoffs in [0,1]. Extension to a larger class of games is possible if absolute monotonicity (cf., <cit.>) is shown (similarly to the discussion in <cit.>). Reference <cit.> introduced a class of linear reward-inaction schemes in combination with a coordinated exploration phase so that convergence to the efficient Nash equilibrium is achieved. However, coordination of the exploration phase requires communication between the players.

Recently, work by the author <cit.> has introduced a new class of learning automata (namely, perturbed learning automata) which can be applied in games with no restriction on the payoff matrix. Furthermore, a small perturbation factor also influences the decisions of the players, through which convergence to non-Nash pure strategy profiles can be excluded. However, to demonstrate global convergence, a monotonicity condition still needs to be established <cit.>.

§.§.§ Q-learning

Similar questions of convergence to Nash equilibria also appear in alternative reinforcement-learning formulations, such as approximate dynamic programming methodologies and Q-learning. However, convergence there is usually accomplished under a stronger set of assumptions, which increases the computational complexity of the dynamics.
For example, the Nash-Q learning algorithm of <cit.> addresses the problem of maximizing the discounted expected rewards for each agent by updating an approximation of the cost-to-go function (or Q-values). Alternative objectives may be used, such as the minimax criterion of <cit.>. However, it is indirectly assumed that agents have full access to the joint action space and to the rewards received by the other agents. More recently, reference <cit.> introduces a Q-learning scheme in combination with either adaptive play or better-reply dynamics in order to attain convergence to Nash equilibria in potential games <cit.> or weakly-acyclic games. However, this form of dynamics requires that each player observes the actions selected by the other players, since a Q-value needs to be assigned to each joint action. When the evaluation of the Q-values is totally independent, as in the individual Q-learning of <cit.>, convergence to Nash equilibria has been shown only for 2-player zero-sum games and 2-player partnership games with countably many Nash equilibria. Currently, there are no convergence results in multi-player games.

§.§.§ Payoff-based learning

The aforementioned types of dynamics can be considered as forms of payoff-based learning dynamics, since adaptation is governed only by the perceived utility of the players. Recently, there have been several attempts at establishing convergence to Nash equilibria through alternative payoff-based learning dynamics (see, e.g., the benchmark-based dynamics of <cit.>, or the aspiration-based dynamics in <cit.>). For these types of dynamics, convergence to Nash equilibria can be established without requiring any strong monotonicity property (e.g., in multi-player weakly-acyclic games in <cit.>). However, an investigation is required with respect to the resulting convergence rates as compared to the dynamics incorporating policy iterations (e.g., the Erev-Roth type of dynamics or the learning automata discussed above).

§.§ Objective

This paper provides an analytical framework for analyzing convergence in multi-player strategic-form games when players implement a class of perturbed learning automata. We wish to impose no strong monotonicity assumptions on the structure of the game (e.g., the existence of a potential function). We provide a characterization of the invariant probability measure of the induced Markov chain that shows that only the pure-strategy profiles belong to its support. Thus, we implicitly exclude convergence to any mixed strategy profile (including mixed Nash equilibria). This result imposes no restrictions on the payoff matrix other than the Positive-Utility Property.

§ CONVERGENCE ANALYSIS

§.§ Terminology and notation

Let 𝒵 ≜ 𝒜×Δ, where Δ ≜ Δ(𝒜_1)×⋯×Δ(𝒜_n), i.e., pairs of joint actions α and nominal strategy profiles x. The set 𝒜 is endowed with the discrete topology, Δ with its usual Euclidean topology, and 𝒵 with the corresponding product topology. We also let ℬ(𝒵) denote the Borel σ-field of 𝒵, and 𝔓(𝒵) the set of probability measures on ℬ(𝒵) endowed with the Prohorov topology, i.e., the topology of weak convergence. The algorithm introduced in Table <ref> defines a 𝒵-valued Markov chain. Let P_λ:𝒵×ℬ(𝒵)→[0,1] denote its transition probability function (t.p.f.), parameterized by λ>0. We refer to the process with λ>0 as the perturbed process. Let also P:𝒵×ℬ(𝒵)→[0,1] denote the t.p.f.
of the unperturbed process, i.e., when λ=0.

We let C_b(𝒵) denote the Banach space of real-valued continuous functions on 𝒵 under the sup-norm (denoted by ‖·‖_∞) topology. For f∈C_b(𝒵), define P_λf(z) ≜ ∫_𝒵 P_λ(z,dy)f(y), and μ[f] ≜ ∫_𝒵 μ(dz)f(z), μ∈𝔓(𝒵). The process governed by the unperturbed t.p.f. P will be denoted by {Z_t : t≥0}. Let Ω ≜ 𝒵^∞ denote the canonical path space, i.e., an element ω∈Ω is a sequence {ω(0),ω(1),…}, with ω(t)=(α(t),x(t))∈𝒵. We use the same notation for the elements (α,x) of the space 𝒵 and for the coordinates of the process Z_t=(α(t),x(t)). Let also ℙ_z[·] denote the unique probability measure induced by the unperturbed process P on the product σ-algebra of 𝒵^∞, initialized at z=(α,x), and 𝔼_z[·] the corresponding expectation operator. Let also ℱ_t, t≥0, denote the σ-algebra generated by {Z_τ, τ≤t}.

§.§ Stochastic stability

First, we note that both P and P_λ (λ>0) satisfy the weak Feller property (cf., <cit.>). Both the unperturbed process P (λ=0) and the perturbed process P_λ (λ>0) have the weak Feller property.

Let us consider any sequence {Z^(k)=(α^(k),x^(k))} such that Z^(k)→Z=(α,x)∈𝒵. For the unperturbed process governed by P(·,·), and for any open set O∈ℬ(𝒵), the following holds:

P(Z^(k)=(α^(k),x^(k)),O) = ∑_{α∈𝒫_𝒜(O)} { ℙ_{Z^(k)}[rand_{x_i^(k)}[𝒜_i]=α_i, ∀ i∈ℐ] · ∏_{i=1}^n ℙ_{Z^(k)}[ℛ_i(α,x_i^(k))∈𝒫_{𝒳_i}(O)] } = ∑_{α∈𝒫_𝒜(O)} { ∏_{i=1}^n 𝕀_{𝒫_{𝒳_i}(O)}(ℛ_i(α,x_i^(k))) x_{iα_i}^{(k)} },

where 𝒫_{𝒳_i}(O) and 𝒫_𝒜(O) are the canonical projections defined by the product topology. Similarly, we have:

P(Z=(α,x),O) = ∑_{α∈𝒫_𝒜(O)} { ∏_{i=1}^n 𝕀_{𝒫_{𝒳_i}(O)}(ℛ_i(α,x_i)) x_{iα_i} }.

(a) Consider the case x∈Δ^o, i.e., x belongs to the interior of Δ. For all i∈ℐ, due to the continuity of ℛ_i(·,·) with respect to its second argument, and the fact that O is an open set, there exists δ>0 such that 𝕀_{𝒫_{𝒳_i}(O)}(ℛ_i(α,x_i)) = 𝕀_{𝒫_{𝒳_i}(O)}(ℛ_i(α,y_i)) for all y_i∈𝒩_δ(x_i). Thus, for any sequence Z^(k)=(α^(k),x^(k)) such that Z^(k)→Z=(α,x), we have that P(Z^(k),O) → P(Z,O), as k→∞.

(b) Consider the case x∈∂Δ, i.e., x belongs to the boundary of Δ. Then, there exists i∈ℐ such that x_i∈∂Δ(𝒜_i), i.e., there exists an action j∈𝒜_i such that x_{ij}=0. For any open set O∈ℬ(𝒵), x_i∉𝒫_{𝒳_i}(O). Furthermore, for any α_i selected according to rand_{x_i}[𝒜_i], 𝕀_{𝒫_{𝒳_i}(O)}(ℛ_i((α_i,α_{-i}),x_i)) = 0 (since x_{ij}=0 and therefore x_i cannot escape from the boundary). This directly implies that P(Z=(α,x),O)=0. Construct a sequence (α^(k),x^(k)) that converges to (α,x) such that α^(k)=α, x_{iα_i}^{(k)}>0 and x_i=e_{α_i}, i.e., the strategy of player i converges to the vertex of action α_i. Pick also O∈ℬ(𝒵) such that 𝕀_{𝒫_{𝒳_i}(O)}(ℛ_i(α,x_i^{(k)})) = 1 for all large k. This is always possible by selecting an open set O such that x∈∂𝒫_𝒳(O) and x^(k)∈𝒫_𝒳(O) for all k. In this case, lim_{k→∞} P(Z^(k),O) = 1. We conclude that for any sequence Z^(k)=(α^(k),x^(k)) that converges to Z=(α,x), such that x∈∂Δ, and for any open set O∈ℬ(𝒵), lim_{k→∞} P(Z^(k),O) ≥ P(Z,O)=0. By <cit.>, we conclude that P satisfies the weak Feller property. The same steps can be followed to show that P_λ also satisfies the weak Feller property.

The measure μ_λ∈𝔓(𝒵) is called an invariant probability measure for P_λ if (μ_λP_λ)(A) ≜ ∫_𝒵 μ_λ(dz)P_λ(z,A) = μ_λ(A), A∈ℬ(𝒵). Since 𝒵 is a locally compact separable metric space and P, P_λ have the weak Feller property, they both admit an invariant probability measure, denoted μ and μ_λ, respectively <cit.>. We would like to characterize the stochastically stable states z∈𝒵 of P_λ, that is, any state z∈𝒵 for which any collection of invariant probability measures {μ_λ∈𝔓(𝒵) : μ_λP_λ=μ_λ, λ>0} satisfies lim inf_{λ→0} μ_λ(z)>0.
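To make the dynamics concrete before stating the main result, the following minimal Python sketch (our own illustration: the 2x2 coordination game, the parameter values, and all identifiers are assumptions, not part of the paper) simulates the perturbed process Z_t=(α(t),x(t)) of Table <ref>. The strategy update x_i ← x_i + ϵ u_i(α)(e_{α_i} − x_i) is the recursion whose closed-form iterate under a repeated action profile, x_i(t) = e_{α_i} − (1−ϵ u_i(α))^t (e_{α_i}−x_i(0)), is used in the proof below.

import numpy as np

def simulate(u, eps=0.05, lam=0.01, T=50000, seed=0):
    # u[i] is the n-dimensional payoff array of player i, with positive
    # entries (Positive-Utility Property); eps*u_i(a) < 1 is assumed.
    rng = np.random.default_rng(seed)
    n = len(u)
    x = [np.full(u[i].shape[i], 1.0 / u[i].shape[i]) for i in range(n)]
    visits = np.zeros(u[0].shape)          # empirical joint-action frequencies
    for _ in range(T):
        a = tuple(rng.integers(x[i].size) if rng.random() < lam   # tremble
                  else rng.choice(x[i].size, p=x[i])              # nominal x_i
                  for i in range(n))
        visits[a] += 1.0
        for i in range(n):                 # reward-based strategy update
            e = np.zeros_like(x[i]); e[a[i]] = 1.0
            x[i] += eps * u[i][a] * (e - x[i])
            x[i] /= x[i].sum()             # guard against floating-point drift
    return visits / T

u1 = np.array([[2.0, 0.5],                 # a 2x2 coordination game
               [0.5, 1.0]])
print(simulate([u1, u1.copy()]))           # for small lam, most of the mass
                                           # sits on pure action profiles

In exact arithmetic the update preserves the simplex whenever ϵ u_i(α)<1; the renormalization only compensates for rounding. Empirically, as λ is made small, the chain spends almost all of its time near pure strategy states, which is precisely the behavior characterized next.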
As the forthcoming analysis will show, the stochastically stable states will be a subset of the set of pure strategy states (p.s.s.) defined as follows: A pure strategy state is a state s=(α,x)∈𝒵 such that, for all i∈ℐ, x_i = e_{α_i}, i.e., x_i coincides with the vertex of the probability simplex Δ(𝒜_i) which assigns probability 1 to action α_i. We will denote the set of pure strategy states by 𝒮.

There exists a unique probability vector π=(π_1,...,π_{|𝒮|}) such that, for any collection of invariant probability measures {μ_λ∈𝔓(𝒵) : μ_λP_λ=μ_λ, λ>0}, the following hold:

(a) lim_{λ→0} μ_λ(·) = μ̂(·) ≜ ∑_{s∈𝒮} π_s δ_s(·), where convergence is in the weak sense.

(b) The probability vector π is an invariant distribution of the (finite-state) Markov process P̂, such that, for any s,s'∈𝒮, P̂_{ss'} ≜ lim_{t→∞} QP^t(s,𝒩_δ(s')), for any δ>0 sufficiently small, where Q is the t.p.f. corresponding to only one player trembling (i.e., following the uniform distribution of (<ref>)).

The proof of Theorem <ref> requires a series of propositions and will be presented in detail in Section <ref>.

§.§ Discussion

Theorem <ref> establishes an important observation, namely the "equivalence" (in a weak-convergence sense) of the original (perturbed) learning process with a simplified process, where agents tremble simultaneously at the first iteration and do not tremble thereafter. This form of simplification of the dynamics was originally exploited to analyze aspiration learning dynamics in <cit.>, and it is based upon the fact that, under the unperturbed dynamics, the agents' strategies will eventually converge to a pure strategy profile.

Furthermore, the limiting behavior of the original (perturbed) dynamics can be characterized by the (unique) invariant distribution of a finite-state Markov chain {P̂_{ss'}}, whose states correspond to the pure strategy states of the game. In other words, we should expect that, as the perturbation parameter λ approaches zero, the algorithm spends the majority of the time on pure strategy profiles. The importance of this result lies in the fact that no constraints have been imposed on the payoff matrix of the game other than the Positive-Utility Property <ref>. Thus, it extends to games beyond the fine set of potential games.

This convergence result can further be augmented with an ODE analysis for stochastic approximations to exclude convergence to pure strategies that are not Nash equilibria (as derived in <cit.> for the case of diminishing step size). Due to space limitations, this analysis is not presented in this paper.

§ TECHNICAL DERIVATION

§.§ Unperturbed Process

For t≥0 define the sets

A_t ≜ {ω∈Ω : α(τ)=α(t), for all τ≥t},
B_t ≜ {ω∈Ω : α(τ)=α(0), for all 0≤τ≤t}.

Note that {B_t : t≥0} is a non-increasing sequence, i.e., B_{t+1}⊆B_t, while {A_t : t≥0} is non-decreasing, i.e., A_{t+1}⊇A_t. Let

A_∞ ≜ ⋃_{t=0}^∞ A_t and B_∞ ≜ ⋂_{t=1}^∞ B_t.

In other words, the set A_∞ corresponds to the event that agents eventually play the same action profile, while B_∞ corresponds to the event that agents never change their actions. Let us assume that the step size ϵ>0 is sufficiently small such that 0<ϵ u_i(α)<1 for all α∈𝒜 and for all agents i∈ℐ.
Then, the following hold: (a) inf_{z∈𝒵} ℙ_z[B_∞]>0, (b) inf_{z∈𝒵} ℙ_z[A_∞]=1.

The first statement of Proposition <ref> states that the probability that agents never change their actions is bounded away from zero, while the second statement states that the probability that eventually agents play the same action profile is one.

(a) Let us consider an action profile α=(α_1,...,α_n)∈𝒜, and an initial strategy profile x(0)=(x_1(0),...,x_n(0)) such that x_{iα_i}(0)>0 for all i∈ℐ. Note that if the same action profile α is selected up to time t, then the strategy of agent i satisfies:

x_i(t) = e_{α_i} − (1−ϵ u_i(α))^t (e_{α_i}−x_i(0)).

Given that B_t is non-increasing, from continuity from above we have

ℙ_z[B_∞] = lim_{t→∞} ℙ_z[B_t] = lim_{t→∞} ∏_{k=0}^t ∏_{i=1}^n x_{iα_i}(k).

Note that ℙ_z[B_∞] > 0 if and only if ∑_{t=1}^∞ log(x_{iα_i}(t)) > −∞. Let us introduce the variable

y_i(t) ≜ 1−x_{iα_i}(t) = ∑_{j∈𝒜_i∖α_i} x_{ij}(t),

which corresponds to the probability of agent i selecting any action other than α_i. Condition (<ref>) is equivalent to −∑_{t=0}^∞ log(1−y_i(t)) < ∞, i∈ℐ. We also have that lim_{t→∞} −log(1−y_i(t))/y_i(t) = lim_{t→∞} 1/(1−y_i(t)) > ρ for some ρ>0, since 0≤y_i(t)≤1. Thus, from the Limit Comparison Test, we conclude that condition (<ref>) holds if and only if ∑_{t=1}^∞ y_i(t) < ∞, for each i∈ℐ. Lastly, note that y_i(t+1)/y_i(t) = 1−ϵ u_i(α). By Raabe's criterion, the series ∑_{t=0}^∞ y_i(t) is convergent if lim_{t→∞} t(y_i(t)/y_i(t+1) − 1) > 1. We have

t(y_i(t)/y_i(t+1)−1) = tϵ u_i(α)/(1−ϵ u_i(α)).

Thus, if ϵ u_i(α)<1 for all α∈𝒜 and i∈ℐ, then 1−ϵ u_i(α)>0 and lim_{t→∞} t(ϵ u_i(α)/(1−ϵ u_i(α))) > 1, which implies that the series ∑_{t=1}^∞ y_i(t) is convergent. Thus, we conclude that ℙ_z[B_∞]>0.

(b) Define the event

C_t ≜ {∃ α'≠α(t) : x_{iα_i'}(t)>0, i∈ℐ},

i.e., C_t corresponds to the event that there exists an action profile different from the current action profile to which the nominal strategy assigns positive probability for all agents i. Note that A_t^c ⊆ C_t, since A_t^c occurs only if there is some action profile α'≠α(t) for which the nominal strategy assigns positive probability. This further implies that ℙ_z[A_t^c] ≤ ℙ_z[C_t]. Then, we have:

ℙ_z[A_{t+1}|A_t^c] = ℙ_z[A_{t+1}∩A_t^c]/ℙ_z[A_t^c] ≥ ℙ_z[A_{t+1}∩A_t^c]/ℙ_z[C_t] ≥ ℙ_z[A_{t+1}∩A_t^c|C_t] = ℙ_z[{α(τ)=α'≠α(t), ∀τ>t}|C_t] ≥ inf_{α'≠α} ∏_{i=1}^n x_{iα_i'}(t) ∏_{k=t+1}^∞ {1−(1−ϵ u_i(α'))^{k−t−1} c_i(α')} ≥ inf_{α'≠α} ∏_{i=1}^n x_{iα_i'}(t) ∏_{k=0}^∞ {1−(1−ϵ u_i(α'))^k c_i(α')},

where c_i(α') ≜ 1 − x_{iα_i'}(t) ≥ 0. We have already shown in part (a) that the second part of the r.h.s. is bounded away from zero. Therefore, we conclude that ℙ_z[A_{t+1}|A_t^c]>0. Thus, from the counterpart of the Borel-Cantelli Lemma, ℙ_z[A_∞]=1.

The above proposition is rather useful in characterizing the support of any invariant measure of the unperturbed process, as the following proposition shows. Let μ denote an invariant probability measure of P. Then, there exists a t.p.f. Π on 𝒵×ℬ(𝒵) such that

(a) for μ-a.e. z∈𝒵, Π(z,·) is an invariant probability measure for P;

(b) for all f∈C_b(𝒵), lim_{t→∞} ‖P^tf−Πf‖_∞=0;

(c) μ is an invariant probability measure of Π;

(d) the support[The support of a measure μ on 𝒵 is the unique closed set F⊂𝒵 such that μ(𝒵∖F)=0 and μ(F∩O)>0 for every open set O⊂𝒵 such that F∩O≠∅.] of Π(z,·) is on 𝒮 for all z∈𝒵.

The state space 𝒵 is a locally compact separable metric space and the t.p.f. of the unperturbed process P admits an invariant probability measure due to Proposition <ref>. Thus, statements (a), (b) and (c) follow directly from <cit.>. (d) Let us assume that the support of Π includes points in 𝒵 other than the pure strategy states. Let also O⊂𝒵 be an open set such that O∩𝒮=∅ and Π(z^*,O)>0 for some z^*∈𝒵.
Given that P^t converges weakly to Π as t→∞, from the Portmanteau theorem (cf., <cit.>), we have that lim inf_{t→∞} P^t(z^*,O) ≥ Π(z^*,O)>0. This is a contradiction of Proposition <ref>(b). Thus, the conclusion follows.

Proposition <ref> states that the limiting unperturbed t.p.f. converges weakly to a t.p.f. Π which admits the same invariant p.m. as P. Furthermore, the support of Π is the set of pure strategy states 𝒮. This is a rather important observation, since the limiting perturbed process can also be "related" (in a weak-convergence sense) to the t.p.f. Π, as will be shown in the following section.

§.§ Decomposition of perturbed t.p.f.

We can decompose the t.p.f. of the perturbed process as follows:

P_λ = (1−φ(λ))P + φ(λ)Q_λ,

where φ(λ) = 1−(1−λ)^n is the probability that at least one agent trembles (since (1−λ)^n is the probability that no agent trembles), and Q_λ corresponds to the t.p.f. induced by the one-step reinforcement-learning update when at least one agent trembles. Note that φ(λ)→0 as λ→0. Define also Q to be the t.p.f. when only one player trembles, and Q^* the t.p.f. when at least two players tremble. Then, we may write:

Q_λ = (1−ψ(λ))Q + ψ(λ)Q^*,

where ψ(λ) ≜ 1 − nλ(1−λ)^{n−1}/(1−(1−λ)^n) corresponds to the probability that at least two players tremble given that at least one player trembles. Let us also define the infinite-step t.p.f. when trembling only at the first step (briefly, the lifted t.p.f.) as follows:

P_λ^L ≜ φ(λ)∑_{t=0}^∞ (1−φ(λ))^t Q_λP^t = Q_λR_λ,

where

R_λ ≜ φ(λ)∑_{t=0}^∞ (1−φ(λ))^t P^t,

i.e., R_λ corresponds to the resolvent t.p.f.

The following hold: (a) For f∈C_b(𝒵), lim_{λ→0} ‖R_λf−Πf‖_∞ = 0. (b) For f∈C_b(𝒵), lim_{λ→0} ‖P_λ^Lf−QΠf‖_∞ = 0. (c) Any invariant distribution μ_λ of P_λ is also an invariant distribution of P_λ^L. (d) Any weak limit point in 𝔓(𝒵) of μ_λ, as λ→0, is an invariant probability measure of QΠ.

(a) For any f∈C_b(𝒵), we have

‖R_λf − Πf‖_∞ = ‖φ(λ)∑_{t=0}^∞(1−φ(λ))^t P^tf − Πf‖_∞ = ‖φ(λ)∑_{t=0}^∞(1−φ(λ))^t (P^tf − Πf)‖_∞,

where we have used the property φ(λ)∑_{t=0}^∞(1−φ(λ))^t=1. Note that

φ(λ)∑_{t=T}^∞(1−φ(λ))^t ‖P^tf−Πf‖_∞ ≤ (1−φ(λ))^T sup_{t≥T} ‖P^tf − Πf‖_∞.

From Proposition <ref>(b), we have that for any δ>0, there exists T=T(δ)>0 such that the r.h.s. is uniformly bounded by δ for all t≥T. Thus, the sequence

A_T ≜ φ(λ)∑_{t=0}^T(1−φ(λ))^t(P^tf − Πf)

is Cauchy and therefore convergent (under the sup-norm). In other words, there exists A∈C_b(𝒵) such that lim_{T→∞} ‖A_T−A‖_∞=0. For every T>0, we have

‖R_λf−Πf‖_∞ ≤ ‖A_T‖_∞ + ‖A − A_T‖_∞.

Note that ‖A_T‖_∞ ≤ φ(λ)∑_{t=0}^T(1−φ(λ))^t ‖P^tf−Πf‖_∞. If we take λ↓0, then the r.h.s. converges to zero. Thus, ‖R_λf−Πf‖_∞ ≤ ‖A−A_T‖_∞, T>0, which concludes the proof.

(b) For any f∈C_b(𝒵), we have

‖P_λ^Lf−QΠf‖_∞ ≤ ‖Q_λ(R_λf−Πf)‖_∞ + ‖Q_λΠf − QΠf‖_∞ ≤ ‖R_λf−Πf‖_∞ + ‖Q_λΠf−QΠf‖_∞.

The first term of the r.h.s. approaches 0 as λ↓0 according to (a). The second term of the r.h.s. also approaches 0 as λ↓0, since Q_λ→Q as λ↓0.

(c) Note that, by definition of the perturbed t.p.f. P_λ, we have

P_λR_λ = (1−φ(λ))PR_λ + φ(λ)Q_λR_λ.

Note further that Q_λR_λ=P_λ^L and

(1−φ(λ))PR_λ = R_λ − φ(λ)I,

where I corresponds to the identity operator. Thus, we have

P_λR_λ = R_λ−φ(λ)I+φ(λ)P_λ^L.

For any invariant probability measure of P_λ, μ_λ, we have

μ_λP_λR_λ = μ_λR_λ−φ(λ)μ_λ+φ(λ)μ_λP_λ^L,

which equivalently implies that μ_λ = μ_λP_λ^L, since μ_λP_λ = μ_λ. Thus, we conclude that μ_λ is also an invariant p.m. of P_λ^L.

(d) Let μ̂ denote a weak limit point of μ_λ as λ↓0. To see that such a limit exists, take μ̂ to be an invariant probability measure of P. Then,

‖P_λf−Pf‖_∞ ≥ |μ_λ[P_λf−Pf]| = |(μ_λ−μ̂)[(I−P)f]|.

Note that the weak convergence of P_λ to P then implies that μ_λ⇒μ̂.
Note further that

μ̂[f] − μ̂[QΠf] = (μ̂[f]−μ_λ[f]) + μ_λ[P_λ^Lf−QΠf] + (μ_λ[QΠf]−μ̂[QΠf]).

The first and the third terms of the r.h.s. approach 0 as λ↓0 due to the fact that μ_λ⇒μ̂. The same holds for the second term of the r.h.s. due to part (b). Thus, we conclude that any weak limit point of μ_λ as λ↓0 is an invariant p.m. of QΠ.

§.§ Invariant p.m. of one-step perturbed process

Define the finite-state Markov process P̂ as in (<ref>). There exists a unique invariant probability measure μ̂ of QΠ. It satisfies

μ̂(·) = ∑_{s∈𝒮} π_s δ_s(·)

for some constants π_s≥0, s∈𝒮. Moreover, π=(π_1,...,π_{|𝒮|}) is an invariant distribution of P̂, i.e., π=πP̂.

From Proposition <ref>(d), we know that the support of Π is on the set of pure strategy states 𝒮. Thus, the support of QΠ is also on 𝒮. From Proposition <ref>, we know that QΠ admits an invariant measure, say μ̂, whose support is also 𝒮. Thus, μ̂ admits the form of (<ref>), for some constants π_s≥0, s∈𝒮. Note also that 𝒩_δ(s') is a continuity set of QΠ(s,·), i.e., QΠ(s,∂𝒩_δ(s'))=0. Thus, from the Portmanteau theorem, given that QP^t⇒QΠ,

QΠ(s,𝒩_δ(s')) = lim_{t→∞} QP^t(s,𝒩_δ(s')) = P̂_{ss'}.

If we also define π_s ≜ μ̂(𝒩_δ(s)), then

π_{s'} = μ̂(𝒩_δ(s')) = ∑_{s∈𝒮} π_s QΠ(s,𝒩_δ(s')) = ∑_{s∈𝒮} π_s P̂_{ss'},

which shows that π is an invariant distribution of P̂, i.e., π=πP̂.

It remains to establish uniqueness of the invariant distribution of QΠ. Note that the set 𝒮 of pure strategy states is isomorphic to the set 𝒜 of action profiles. If agent i trembles (as the t.p.f. Q dictates), then all actions in 𝒜_i have positive probability of being selected, i.e., Q(α,(α_i',α_{-i}))>0 for all α_i'∈𝒜_i and i∈ℐ. It follows by Proposition <ref> that QΠ(α,(α_i',α_{-i}))>0 for all α_i'∈𝒜_i and i∈ℐ. Finite induction then shows that (QΠ)^n(α,α')>0 for all α,α'∈𝒜. It follows that if we restrict the domain of QΠ to 𝒮, it defines an irreducible stochastic matrix. Therefore, QΠ has a unique invariant distribution.

§.§ Proof of Theorem <ref>

Theorem <ref>(a)-(b) is a direct implication of Propositions <ref>-<ref>.

§ CONCLUSIONS & FUTURE WORK

In this paper, we considered a class of reinforcement-learning algorithms that belong to the family of learning automata, and we provided an explicit characterization of the invariant probability measure of the induced Markov chain. Through this analysis, we demonstrated convergence (in a weak sense) to the set of pure strategy states, overcoming prior restrictions necessary under an ODE-approximation analysis, such as the existence of a potential function. Thus, we opened up new possibilities for equilibrium selection through this type of algorithm, going beyond the fine class of potential games. Although the set of pure strategy states (which are the stochastically stable states) may contain non-Nash pure strategy profiles, a follow-up analysis that excludes convergence to such states may be performed (similarly to the analysis presented in <cit.> for diminishing step size).
http://arxiv.org/abs/1702.08334v1
{ "authors": [ "Georgios C. Chasparis" ], "categories": [ "cs.GT" ], "primary_category": "cs.GT", "published": "20170227154156", "title": "Stochastic Stability Analysis of Perturbed Learning Automata with Constant Step-Size in Strategic-Form Games" }
10 May 2017
http://arxiv.org/abs/1702.08161v2
{ "authors": [ "Yutaka Hosotani" ], "categories": [ "hep-ph" ], "primary_category": "hep-ph", "published": "20170227064619", "title": "New dimensions from gauge-Higgs unification" }
A. F. Borghesani (corresponding author, armandofrancesco.borghesani@unipd.it)
CNISM Unit, Department of Physics and Astronomy, University of Padua and Istituto Nazionale Fisica Nucleare, sez. Padova, via F. Marzolo 8, I-35131 Padua, Italy

C. Braggio, F. Chiossi, M. Guarise
Department of Physics and Astronomy, University of Padua and Istituto Nazionale Fisica Nucleare, sez. Padova, via F. Marzolo 8, I-35131 Padua, Italy

G. Carugno
Istituto Nazionale Fisica Nucleare, sez. Padova and Department of Physics and Astronomy, University of Padua, via F. Marzolo 8, I-35131 Padua, Italy

We have studied the cathodo- and radioluminescence of Nd:YAG and of Tm:YAG single crystals in an extended wavelength range up to ≈ 5 μm in view of developing a new kind of detector for low-energy, low-rate energy deposition events. Whereas the light yield in the visible range is as large as ≈ 10^4 photons/MeV, in good agreement with literature results, in the infrared range we have found a light yield ≈ 5×10^4 photons/MeV, thereby proving that ionizing radiation is particularly efficient in populating the low-lying levels of rare-earth doped crystals.

Keywords: Nd:YAG, Tm:YAG, Cathodoluminescence, Radioluminescence, Infrared and visible light yield.

§ INTRODUCTION

In our laboratory we are developing a new kind of scintillation detector to investigate low-rate, low-energy deposition events. We have decided to adopt the so-called InfraRed Quantum Counter (IRQC) scheme, initially proposed by Bloembergen as early as 60 years ago <cit.>. In the IRQC scheme the intrinsic limitations of traditional infrared detection are overcome by shifting the detection to the visible range. This scheme requires that the active material of the detector has, at least, three energy levels: the ground state, an intermediate low-energy level, and a high-energy one. The particle to be detected excites the material from the ground state into the intermediate level. The population of this level is promoted to the high-lying level by means of a suitably tuned laser. Finally, the high-lying state radiatively relaxes to the ground state. The visible fluorescence is then easily detected with usual techniques. Promising candidates for the active material of the detector are Rare Earth (RE) doped crystals because the upconversion processes are highly efficient and also well studied <cit.>. The possibility of applying the IRQC scheme for particle detection has been demonstrated in an Er-doped YAG single crystal <cit.>.

A key assumption for the development of an upconversion-based detector is that the particle energy loss in the material originates a wideband excitation that populates the low-lying energy levels more efficiently than the higher-lying ones. We then expect that the cathodo- and radioluminescence spectra display a large infrared component. In this way, this new kind of detector should be endowed with improved efficiency and energy resolution with respect to the most commonly used solid-state inorganic scintillation detectors. A quantitative validation of this hypothesis cannot be obtained from the literature. Actually, the infrared luminescence has hardly been investigated because of the long lifetime of the low-lying levels involved <cit.>.
In fact, quantitative studies have been carried out in the visible range by exciting RE-doped crystals with different ionizing radiations, e.g., γ- and X-rays and neutrons (for a review, see <cit.>), ion beams <cit.>, α-particles <cit.>, and synchrotron radiation <cit.>, whereas only optical spectroscopic investigations are available in the infrared range <cit.>.

In order to check the validity of our assumption, we have started a systematic study of the luminescence properties of RE-doped crystals excited by energetic electrons and X-rays, focusing on the infrared component of the emission. The quantitative analysis of the infrared component we report in this work is based on the comparison with the visible counterpart. The good agreement of our results in the visible range with literature data lends credibility to the quantitative results we obtain in the infrared range. We show here the first results of this comprehensive study of the cathodo- (CL) and radioluminescence (RL) of Nd:YAG and Tm:YAG single crystal samples from the deep UV (DUV) up to the mid IR (≈ 5 μm).

§ EXPERIMENTAL METHOD AND APPARATUS

The experimental method consists in exciting the crystal under investigation by either high-energy electrons or X-rays. The induced CL and RL are then analyzed in a very wide wavelength band covering the range from the DUV (λ ≈ 200 nm) up to the mid infrared (λ ≈ 5 μm). The apparatus built for implementing this method is schematically shown in fig:app. Electrons of energy up to 100 keV are produced by a home-made electron gun (e-gun) that has been thoroughly described elsewhere <cit.>. We briefly recall here only its main features. It can be operated in continuous or pulsed mode. In the continuous mode, it can deliver a current of intensity up to 15 μA on the crystal. In pulsed mode, electrons are supplied in bunches containing up to a few tens of nC, with a pulse duration adjustable at will from a few tens of μs up to the ms range. It is also possible to vary the pulse repetition rate in the range 20-1000 Hz when the radiative lifetime of the crystal levels is measured. In both modes, the electron energy is set at 70 keV. The electron beam is collimated to a spot area of ≈ 3 mm². The crystal is mounted on an insulated flange, as shown in fig:montaggioXtal, and is in electrical contact with it by means of a ≈ 10 μm thin metal foil. Crystal and flange act as a Faraday cup (or beam stopper) that allows us to measure the amount of injected charge. Ti is used for the metal foil if CL has to be studied. Its small atomic number Z allows the electrons to cross the foil and impinge on the crystal surface with a 15 keV energy loss <cit.>, and to be immediately recollected on the Faraday cup for the charge measurement. On the other hand, if the thin, high-Z Ta foil is used, the electrons are stopped in the metal foil and X-rays are produced, whose intensity is proportional to the amount of injected charge. In this way, RL can be investigated.

We used two commercial YAG single crystals doped with Nd^3+ (1.1% at.) and with Tm^3+ (4.4% at.), respectively. They are shaped as small cylinders of 3 mm in height. Their diameters are 3 mm for Nd:YAG and 5 mm for Tm:YAG. The luminescence produced by the crystal can be spectrally analyzed with the aid of suitable spectrometers (path 1 in fig:app). For the 200-1000 nm range, we used a Si CCD spectrometer (OceanOptics, mod. RedTide 650). For the 900-1700 nm range, we used an InGaAs CCD spectrometer (OceanOptics, mod.
NIR 512). For the 2000-12000 cm^-1 wavenumber range, we used a Fourier-transform infrared (FT-IR) interferometer (Bruker, mod. Equinox 55) equipped with either InGaAs, InAs, or InSb detectors depending on the investigated spectral range. We have devised a procedure, described in the Appendix, to reliably merge spectra obtained with spectrometers spanning different spectral ranges.

Our apparatus is also designed to make absolute measurements of the light yield (LY), i.e., of the total amount of emitted light as a function of the energy deposited into the crystal by either electrons or X-rays. To this purpose (path 2 in fig:app), the luminescence light is collected by a photodetector PD. The PD signal is amplified by amplifier A and recorded on a storage oscilloscope. A PC fetches the data from the oscilloscope for offline analysis. The LY measurements can be restricted to selected wavelength bands by using suitable combinations of optical filters F. We used two Si PDs (Thorlabs, mod. Det36A and Hamamatsu, mod. S1337-1010BQ), an InGaAs PD (Thorlabs, mod. Det20C), and a liquid-nitrogen-cooled InAs PD (Hamamatsu, mod. P7163). The PD is mounted on a z-translational stage in order to change the detector-to-crystal distance and, thus, to measure the solid angle subtended by the detector at the crystal for normalization purposes.

The output signal of the PD can be either integrated, if one is only interested in the total amount of emitted light, or linearly amplified if the lifetime of the radiative levels of the crystal has to be determined from the time evolution of the signal. For the former goal, we used an active integrator with time constant τ_c ≈ 480 μs and conversion factor G = 0.25 mV/fC, which makes this device particularly useful for the measurements with X-rays, which produce a very faint luminescence signal. For the lifetime measurements, we used a transimpedance amplifier (Femto, mod. DLPCA-200) with gain G = 10^6 V/A. In this condition, the amplifier bandwidth is ≈ 500 kHz and relatively fast PD responses are quite faithfully reproduced.

In order to show the accuracy with which we can measure relatively long radiative lifetimes, we report in fig:segnali the time evolution of the infrared emission of the Nd:YAG crystal excited by an electron pulse. In fig:segnali, I_bs is the ≈ 40 μs long current pulse (left scale) and I_d (right scale) is the amplified Si PD response. The dashed line is the PD response numerically computed by using the experimentally measured current injection shape, from which a value τ = (212 ± 10) μs is obtained, in agreement with literature data <cit.>. The inset in the figure shows the PD response to the visible light originating from the decay of the short-lived ^2F(2)_5/2 manifold (τ ≈ 3 μs) <cit.> excited by the electron pulse. In this case, the amplified PD response quite rapidly follows the current injection. Although the lifetime can be determined by an exponential fit of the long-time tail of the signal, our approach of numerically simulating the signal allows us, if necessary, to investigate the level population kinetics. The integration of the beam stopper and PD signals yields the amount of charge Q_d generated in the active material of the detector and the amount of charge per pulse Q_bs accelerated towards the crystal and collected by the Faraday cup.
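As an illustration of this numerical procedure, the following minimal Python sketch (our own reconstruction, not the actual analysis code; the file name traces.dat and all identifiers are hypothetical) models the population N(t) of the emitting level fed by the measured injection current, dN/dt = i_bs(t) − N/τ, so that the light signal is proportional to the convolution of i_bs with an exponential kernel, and fits the lifetime τ to the recorded PD trace:

import numpy as np
from scipy.optimize import curve_fit

def pd_response(t, tau, A, i_bs):
    # Emitting-level population fed by the injection current:
    # dN/dt = i_bs(t) - N/tau; the light signal (prop. to N/tau) is the
    # causal convolution of i_bs with an exponential kernel.
    dt = t[1] - t[0]
    kernel = np.exp(-t / tau) / tau
    return A * dt * np.convolve(i_bs, kernel)[: t.size]

# time base and the two oscilloscope traces (hypothetical data file)
t, i_bs, i_d = np.loadtxt("traces.dat", unpack=True)   # time in seconds

popt, pcov = curve_fit(lambda tt, tau, A: pd_response(tt, tau, A, i_bs),
                       t, i_d, p0=[200e-6, 1.0])
tau, dtau = popt[0], np.sqrt(pcov[0, 0])
print(f"radiative lifetime: {tau * 1e6:.0f} +/- {dtau * 1e6:.0f} us")

With a pulse much shorter than the lifetime, as for the ≈ 40 μs injection used here, the fitted kernel time constant directly yields a value such as the τ = (212 ± 10) μs quoted above.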
§ EXPERIMENTAL RESULTS AND DISCUSSION

The response of a detector to the luminescence light produced by a scintillating material is given by

Q_d = e η E_i (ΔΩ/4π) LY,

in which e is the elementary charge, η is the detector quantum efficiency, and E_i is the amount of energy released in the crystal by the ionizing radiation. ΔΩ = S/d² is the fraction of solid angle subtended by the detector of cross-sectional area S at the source located at distance d. From eq:QD, the LY_i, i.e., the number of photons/MeV emitted in the optical range Δλ_i, can be expressed as

LY_i = (1/k)(4πd²/S)(Q_d/Q_bs) [ (∫_{Δλ_i} I(λ)λ dλ) / (∫ η(λ)T(λ)I(λ)λ dλ) ],

where I is the scintillator emission spectrum and, thus, the term in square brackets takes into account the wavelength dependence of η and of the transmission T of the optical filter possibly mounted in front of the detector. The constant k = E_i/Q_bs takes on different values for electron or X-ray excitation.

In our experiment we directly measure Q_d and Q_bs, and we have verified their proportionality with all crystals in all wavelength bands, independently of the excitation source. As an example, we show in fig:LinTmegun Q_d as a function of Q_bs for the electron-beam excited Tm:YAG crystal. These quantities are also measured for different crystal-to-detector distances in order to determine the solid angle S/d². As an example, we show in fig:distanze the ratio Q_d/Q_bs as a function of the relative crystal-to-detector distance x = d − x_0 obtained for the Tm:YAG crystal excited with both the electron beam and X-rays. The absolute distance is defined within an arbitrary offset x_0 that depends on the physical detector mounting. Assuming that the crystal can be treated as a point source, the data must obey the equation

Q_d(x)/Q_bs = [(Q_{d,0}/Q_bs) d²] / (x+x_0)² = a/(x+x_0)².

The parameters a = (Q_{d,0}/Q_bs)d² and x_0 are obtained by fitting eq:d2 to the experimental data. The goodness of the fit confirms the validity of the point-source approximation and allows us to determine the solid angle.

Finally, a more accurate determination of the LY in the extended wavelength range can only be obtained by recording the emission spectrum I(λ) in the whole band. Moreover, the spectral analysis is necessary to identify the levels responsible for the emission in the different bands and to ascertain how they are populated by the particle passage. By so doing, we can also compare our results with literature results in the bands in which they are available, obtained with several different excitation techniques, including optical excitation <cit.>. Unfortunately, the complete spectrum in the extended wavelength band from the DUV to the mid IR cannot be obtained with a single spectrometer. As mentioned in Sect. <ref>, we used two spectrometer types covering different spectral ranges. The procedure to merge the suitably normalized spectra is described in the Appendix. The CL spectra for Nd- and Tm-doped YAG crystals obtained in the extended spectral range are reported in fig:spettroNd and fig:spettroTm. In the 1.5-5 μm range for Nd:YAG and in the 2.2-5 μm range for Tm:YAG no lines have been observed. For this reason, these parts of the spectra are not displayed in the figures.

§.§ Nd:YAG spectrum

The visible portion of our extended CL spectrum for Nd:YAG compares favorably with the spectra in the same region obtained by several groups that excited the crystals with different techniques (laser <cit.>; CL <cit.>; ion beam <cit.>; α-particle <cit.>; synchrotron <cit.>).
It is due to the transitions from the 4f³ manifold ^2F(2)_5/2 towards several lower-lying manifolds that lie in the 11000-22000 cm^-1 wavenumber range and that are explicitly identified in the literature <cit.>. The ultimate fate of all of these manifolds is to relax nonradiatively to the lowest of them, namely the ^4F_3/2 <cit.>. The energy level scheme of Nd is shown in fig:schemalivelli.

The ^4F_3/2 manifold is responsible for all of the 27 lines we observed in the IR part of the spectrum. In agreement with the literature <cit.>, we have been able to identify all the Stark levels of the manifolds involved in the transitions. The ^4F_3/2 → ^4I_9/2, ^4F_3/2 → ^4I_11/2, and ^4F_3/2 → ^4I_13/2 transitions lie in the 850-950 nm, 1040-1110 nm, and 1350-1450 nm ranges, respectively. The ^4F_3/2 → ^4I_15/2 transitions lying in the 1730-2100 nm range are very faint because of their small branching ratio and are obscured by the experimental noise. The emission from the ^4I_J manifolds, as for the other manifolds, has not been observed, mainly because it is quenched due to the large phonon energies of the oxide matrix.

The knowledge of the spectral intensity I(λ) allows us to compute the LY in a given wavelength band Δλ_i according to the formula

LY_i ∝ ∫_{Δλ_i} I(λ)λ dλ.

We found a ratio of the number of photons N_IR emitted in the IR band Δλ_IR (800-1500 nm) to that of the photons N_v emitted in the visible range Δλ_v (390-700 nm)

α_Nd = N_IR/N_v = 5.2,

with an accuracy better than 5%.

There is a strong indication in the literature that the ^2F(2)_5/2 manifold relaxes almost completely by radiative decay, because the multiphonon relaxation lifetime is more than two orders of magnitude longer than the radiative lifetime and because Nd-Nd energy transfer processes involving this manifold at the concentration of our experiment are negligible <cit.>, as confirmed by the fact that the lifetime difference between two single crystals of very different concentration is small <cit.>. Moreover, according to the branching ratio values found in the literature <cit.>, ≈ 95% of the radiative emission from the ^2F(2)_5/2 manifold lies in the visible range. Actually, we do not observe any IR emission attributable to this manifold, and the UV emission originating from the ^2F(2)_5/2 → ^4I_J transitions is a negligible fraction of the total one, as can be ascertained by inspecting the left inset of fig:spettroNd. Owing to the low dopant concentration and small size of our sample, we can estimate the visible reabsorption to be a few % at most. As a consequence, the ratio α is a quite accurate estimate of the ratio of the numbers of photons emitted by the ^4F_3/2 and ^2F(2)_5/2 manifolds, respectively.

In contrast with the ^2F(2)_5/2 manifold, the ^4F_3/2 manifold shows a nonradiative loss channel due to the resonant energy transfer <cit.>

^4F_3/2 + ^4I_9/2 → ^4I_15/2 + ^4I_15/2.

At the concentration of our sample, this kind of loss amounts to ≈ 10% <cit.>. Owing to these estimates, we can draw the conclusion that α_Nd = 5.2 is also a direct measure of the ratio of the numbers of ions excited in these two manifolds. However, if electrons had the effect of exciting only the upper ^2F(2)_5/2 manifold, the previous arguments would lead us to the conclusion that we should get α_Nd ≈ 1, as demonstrated by Venikouas et al.,
who directly excited the ^2F(2)_5/2 manifold in a sample similar to ours with a quadrupled Nd:YAG laser pumping at λ = 266 nm <cit.>. Therefore, we are forced to draw the conclusion that, upon exciting the crystals with particles, the ^4F_3/2 is populated by the direct relaxation of the ^2F(2)_5/2 by only ≈ 20%. Thus, other excitation mechanisms have to be active. Whereas the processes leading to the population of the charged-particle-excited ^2F(2)_5/2 as a consequence of the relaxation of the high-lying 4f²5d levels are known <cit.> (i.e., multiphonon cascading through the lowest 4f²5d level, then to the ^2F(2)_7/2, and, finally, to the ^2F(2)_5/2), the processes populating the IR-emitting ^4F_3/2 manifold are, to the best of our knowledge, not studied at all. We can tentatively suggest that, among the processes leading to particle excitation of the ^4F_3/2 manifold, the most important could be (i) direct electromagnetic excitation from the RE ion ground state, (ii) excitation by secondary electrons, and (iii) particle-induced lattice distortions that relax by phonon emission.

§.§ Tm:YAG spectrum

The extended CL spectrum of the Tm:YAG crystal is reported in fig:spettroTm. Also in this case, the visible and very near IR region of our spectrum (up to λ ≈ 850 nm) compares favorably with those obtained by Yanagida et al. by exciting Tm:YAG <cit.>, and Tm:LuAG and Tm:YAP <cit.> crystals with γ-rays. Tm shows a scheme of 4f energy levels that is far less rich than that of Nd. Most of its manifolds are well separated in energy. As their multiphonon relaxation rate is low, they mainly decay radiatively, thereby making the identification of the transitions from the lines observed in the spectrum more difficult. As different levels may emit at the same wavelength, or at very close wavelengths but with different lifetimes, we have been able to resolve the identification ambiguity by analyzing the time evolution of the PD signal in the desired wavelength band. The identification of the manifolds involved in the several transitions and the branching ratios we have obtained agree well with the results obtained by the groups that selectively excited the same manifolds by using lasers <cit.>.

The manifolds responsible for the visible emission are ^1I_6, ^1D_2, and ^1G_4. The energy level scheme for Tm is shown in fig:schemalivelli. Our CL spectrum for λ > 1600 nm is identical to that obtained with laser excitation <cit.>, is well reproduced by the theoretical computation due to Fei et al. <cit.>, and is associated with the transition from the ^3F_4 manifold towards the ^3H_6 ground state. We have estimated the lifetime of the emitting manifold to be ≈ 6.6 ms. This result is in reasonable agreement with literature data, whose spread (from ≈ 4 ms to ≈ 12 ms) is, unfortunately, very large <cit.>. The emission in the 1350-1550 nm range, shown in the inset of fig:spettroTm, and that in the 800-850 nm range are ascribed to the transitions ^3H_4 → ^3F_4 and ^3H_4 → ^3H_6, respectively. As these emissions originate from the same manifold, they share the same time evolution, from which we determined a lifetime of ≈ 110 μs, in agreement with literature results obtained in a sample with similar dopant concentration <cit.>.

We can now give an estimate of α. In this case, we have chosen Δλ_IR to span the 1600-2200 nm range in order to account for the emission of the first excited ^3F_4 manifold. For Δλ_v we have chosen the 280-850 nm range, which accounts for the emission of all other higher-lying manifolds.
We get

α_Tm ≈ 6.4.

Contrary to the Nd:YAG case, the value of the α_Tm parameter is not related in a simple way to the number of ions excited by the charged particles in specific manifolds, because their quantum efficiency is hardly known. Actually, the Tm energy level scheme is such as to favor concentration-dependent energy transfer processes that compete with the radiative decay channel. The presence of many important energy transfer processes (namely, cross relaxation (CR)) has been confirmed both by computation and by observation for Tm in the YLF matrix <cit.>. We believe that the same energy transfer processes also occur in Tm:YAG because of the following experimental observations. First of all, we measured a lifetime value of ≈ 110 μs for the ^3H_4 manifold, which is much shorter than the value ≳ 500 μs observed and computed in low-dopant-concentration Tm:YAG crystals <cit.>. An efficient cross relaxation process affects the ^3H_4 manifold according to the scheme

^3H_4 + ^3H_6 → ^3F_4 + ^3F_4,

which leads to a nonradiative increase of the population of the IR-emitting ^3F_4 manifold. Secondly, the presence of energy transfer processes is confirmed by comparing the visible CL spectrum with the visible spectrum obtained by exciting the crystal with a quadrupled Nd:YAG laser at 266 nm. The two spectra are practically identical because they both originate from the relaxation of the ^1I_6 manifold. The laser directly populates the ^3P_J manifolds, whereas charged particles populate the same manifolds by nonradiative relaxation of the 4f^11 5d levels. Then, the ^3P_J manifolds nonradiatively relax to the ^1I_6. As we observe visible emission originating from the ^1D_2 and ^1G_4 manifolds, we have to conclude that the latter two manifolds are populated by cross relaxation, because no radiative transitions from the ^1I_6 manifold towards them are observed and because multiphonon relaxation is negligible owing to the large energy gap between the manifolds. In the third place, the great influence of CR processes in our sample is confirmed by the fact that the emission from ^1D_2 and ^1G_4 is much more intense than in a sample of much lower concentration <cit.>. In any case, as the quantum efficiency of the ^3F_4 manifold is ≈ 100% <cit.>, the number of photons emitted in the Δλ_IR band equals the number of ions populating this manifold via the several aforementioned mechanisms.

We can conclude this section on the Tm:YAG spectrum in the extended wavelength range by remarking that energy transfer processes are a very important channel for populating the low-energy, IR-emitting manifolds. Moreover, their influence does not allow us to precisely identify and quantify the remaining channels. As a final remark, we note that the characteristic emission <cit.> due to Fe^3+ contamination of the YAG matrix, which we obtained by exciting the crystal with a quadrupled Nd:YAG laser at 266 nm, is absent under electron-beam excitation.

§.§ Absolute calibration of the LY with X-rays

As a result of the analysis of the Nd:YAG and Tm:YAG luminescence, we have demonstrated that the low-lying, IR-emitting manifolds are very efficiently populated, roughly a factor of 5 more than the higher-lying levels that emit in the visible range. The α parameter defined above represents the IR LY relative to the visible one. However, the knowledge of the absolute value of the light yield is required to design a detector.
Therefore, we have compared the luminescence of our samples with that of a reference crystal whose light yield is known. Unfortunately, absolute LY measurements are usually carried out with X- or γ-ray excitation. As a consequence, we also measured the RL of our samples by exciting them with X-rays of energy of a few tens of keV, produced by converting the energy of the electrons impinging on the Ta film. For calibration purposes, we have measured the RL of a Pr:LuYAG crystal whose LY is reported to be 2.7×10^4 ph/MeV in the 300-450 nm range when excited with 662 keV γ-rays <cit.>. From our RL spectrum, by taking into account the small non-proportionality of the LY over a wide energy range <cit.>, we obtain LY = 3.3×10^4 ph/MeV in the optical range of the Si detector we used <cit.>.

Also in the case of X-rays we have a direct proportionality between Q_d and the X-ray intensity, which is itself proportional to Q_bs, as shown in fig:NdTmPrLinearitaX. The calibration consists in determining the constant k in eqn:LY5. As the detector quantum efficiency η is practically constant over the wavelength range of the Pr emission, we obtain k from a measurement of the parameter a and from the LY as

k = (a/LY)(4π/Sη).

The LY for the Nd:YAG and Tm:YAG crystals in the extended range can now be obtained from measurements of a = (Q_{d,0}/Q_bs)d² and from the emission spectra. Unfortunately, the intensity of the RL is much weaker than that of the CL, and RL spectra can only be recorded with the CCD spectrometers. The RL and CL spectra we obtained for both Nd:YAG and Tm:YAG are identical except for the different resolution of the spectrometers used for the IR range. In particular, in the Nd:YAG case, in the 200-1000 nm range we observe that the relative intensities of the emissions stemming from the ^2F(2)_5/2 and from the ^4F_3/2 are the same in both the CL and RL spectra. As the spectrum for λ > 1000 nm originates only from the same ^4F_3/2 manifold responsible for the emission around λ ≈ 800 nm, and the branching ratios do not depend on the excitation type, we can use the more accurate CL spectra to compute the LY. This conclusion is supported by the argument that ionizing radiation (X-rays or electrons) may originate similar cascading processes of energy degradation <cit.>. In the case of Tm:YAG, the emission from ^3F_4 falls beyond the reach of the CCD spectrometers. Thus, only the CL spectrum can be recorded with the FT-IR interferometer, because the RL is too weak. In any case, owing to the Nd:YAG results and the relative arguments, we safely assume that the CL spectra can be used to compute the LY.

The results are reported in tab:LYNdTm. Their accuracy is estimated to be of the order of 15%. For the Nd:YAG crystal we obtain LY ≈ 10^4 ph/MeV in the visible range, which compares very favorably with literature results. Actually, Yanagida et al. have reported a UV-visible LY = (11.0 ± 1.1)×10^3 ph/MeV for 662 keV γ-rays both for a Nd:YAG 1.1% at. crystal <cit.> and for a Tm:YAG 0.5% at. crystal <cit.>. The small discrepancy between our result for Nd:YAG and that of Yanagida et al. is probably due to the difference in the energy of the ionizing radiation. For Tm:YAG we obtain a significantly smaller value than Yanagida et al. The reason can mainly be attributed to the much larger dopant concentration of our sample, for which the cross-relaxation rate is expected to be much larger.
On the other hand, the manifolds of lower energy emit a much larger photon number, 5×10^4 ph/MeV in the Nd:YAG case and 4.5×10^4 ph/MeV in the Tm:YAG case. These large numbers of ph/MeV emitted by the crystals in the IR range might be attributed, as previously suggested, to the high probability of populating the lower energy levels of the dopant by means of several physical processes including cross-relaxation.

§ CONCLUSIONS

In this work we report the CL and RL of Nd- and Tm:YAG crystals in a wavelength range particularly extended toward the near or mid IR. The motivation of our study is the possibility to develop a low-threshold, low-rate particle detector based on the Infrared Quantum Counter scheme. The spectral analysis of CL and RL suggests that the low-lying levels emitting in the IR band are directly populated from the ground state by the ionizing radiation. We estimate that the IR light yield of the investigated crystals is a factor ∼5 larger than in the visible range. In particular, we get an IR light yield ≈ 5×10^4 photons/MeV for both crystals, stemming from the metastable manifolds we are interested in for the IRQC scheme. This piece of information is very useful for the detector design. We believe that the present work contributes a further step toward the understanding of the particle excitation of lower lying levels in active materials. The goal of our future investigations is to identify the best combination of crystal and dopant species to achieve the highest possible light yield in the IR region, taking into account the best upconversion schemes.

§ ACKNOWLEDGMENTS

The research is sponsored by Istituto Nazionale di Fisica Nucleare (INFN) within the AXIOMA project. The authors gratefully acknowledge useful discussions with Prof. M. Tonelli of the University of Pisa and the technical assistance of Mr. L. Barcellan and E. Berto.

§ MERGING SPECTRA OBTAINED IN TWO BARELY OVERLAPPING BANDS

In our setup we cover the wavelength band from 200 nm to 1000 nm with CCD spectrometers and the band from 900 nm up to 5 μm with the FT-IR interferometer equipped with either an InGaAs, an InAs, or an InSb photodetector. In order to obtain a spectrum over the extended wavelength range we have devised a procedure to merge the spectra recorded in the two different bands. The main requirement for this procedure to be valid is that the spectral intensity at all wavelengths is directly proportional to the amount of energy released in the crystals, as we have experimentally verified (see, for instance, fig:LinTmegun and fig:NdTmPrLinearitaX). Thus, the spectrum shape is independent of Q_bs. The procedure is based on the relative measurement of the light yields in the two distinct wavelength bands, which we conventionally term Δλ_1 and Δλ_2. The normalization factor of the two spectra is

ℱ = [∫_Δλ_1 I_1(λ) λ dλ / ∫_Δλ_2 I_2(λ) λ dλ] (LY_2/LY_1)

in which I_1 and I_2 are the unnormalized spectra recorded in the respective bands. The light yields are computed with the help of eqn:LY5 from the measured values of a = (Q_d,0/Q_bs) d^2 and from the integrated spectra. The determination of the LY's is easy if the product T(λ)η(λ) of the PD and filter combination with which a is measured is different from zero in a wavelength band completely contained within the working band of the spectrometer used. In this case, only the wavelengths belonging to the recorded spectrum I(λ) contribute to the measured Q_d value; a short numerical sketch of the merging step follows.
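As a minimal sketch of the merging step (again our illustration rather than the authors' code; the array contents and crossover wavelength are hypothetical, and the two LY values are assumed to have been determined as described above):

import numpy as np

def merge_spectra(wl1, I1, LY1, wl2, I2, LY2):
    """Scale the band-2 spectrum onto band-1 units with the factor
    F = [∫ I1 λ dλ / ∫ I2 λ dλ] (LY2/LY1), then concatenate the bands."""
    F = (np.trapz(I1 * wl1, wl1) / np.trapz(I2 * wl2, wl2)) * (LY2 / LY1)
    crossover = wl1[-1]              # e.g. ≈ 1000 nm, end of the CCD band
    keep = wl2 > crossover           # keep band 1 below the crossover
    return np.concatenate([wl1, wl2[keep]]), np.concatenate([I1, F * I2[keep]])

After this scaling, the photon-weighted integrals of the two pieces stand in the ratio LY_2/LY_1, which is precisely the requirement imposed by eq:F.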
This approach, moreover, removes the need to measure the conversion factor k in eqn:LY5. A problem, however, arises because the optical range of the Si detector used to measure Q_d in the shorter wavelength region is a little broader than that of the CCD spectrometer. Whereas the latter extends up to 1000 nm, the sensitivity of the former is non-negligible up to ≈ 1100 nm, at least. This issue only affects the normalization procedure for Nd:YAG, as it shows significant emission in this restricted wavelength range. On the contrary, the procedure for Tm:YAG is not affected because it emits negligibly in the same region. The simplest way to overcome this problem would be to use suitable filters of known transmittance T(λ). However, we have devised a more general procedure in case the right filters are not available. It is based on the realization that the LY in the overlapping region of the optical ranges of the two PD's must be the same, independently of the PD used. We obtain the LY_c in the 1000-1100 nm range as a fraction of the total, uncalibrated infrared LY. As LY_c has to be independent of the detector type and the spectrum is known, we can compute the response a_c of the Si detector in the region of interest. Now, its response ã in the spectrometer working range is simply obtained as ã = a_m - a_c, where a_m is the measured Si detector response over its entire range. Once ã is known, eqn:LY5 allows us to compute the LY in the range of the CCD spectrometer, which is finally used in eq:F to obtain the normalization factor ℱ. It turns out that the following equation must hold true

(Q_d/Q_bs) d^2 ∝ ∫ I(λ) T(λ) η(λ) λ dλ

In fig:validazione we show the detectors' response (Q_d,0/Q_bs) d^2 over several wavelength bands defined by suitable filters vs the numerical integration of the normalized spectrum over the same bands. The direct proportionality between the two different determinations of the same quantity validates the normalization procedure.

§ BIBLIOGRAPHY

Basov1959 N. Bloembergen, Phys. Rev. Lett. 2 (1959) 84–85. http://dx.doi.org/10.1103/PhysRevLett.2.84 doi:10.1103/PhysRevLett.2.84.
auzel2004 F. Auzel, Chem. Rev. 104 (2004) 139–174.
borghesani2015 A. F. Borghesani, C. Braggio, G. Carugno, F. Chiossi, A. Di Lieto, M. Guarise, G. Ruoso, M. Tonelli, Appl. Phys. Lett. 107 (2015) 193501. http://dx.doi.org/10.1063/1.4935151 doi:10.1063/1.4935151.
moses1998 W. W. Moses, M. J. Weber, S. E. Derenzo, D. Perry, P. Berdahl, L. A. Boatner, IEEE Trans. Nucl. Sci. 45 (1998) 462–466.
antonini2002 P. Antonini, S. Belogurov, G. Bressi, G. Carugno, D. Iannuzzi, Nucl. Instr. Meth. A 486 (2002) 799–802. http://dx.doi.org/10.1016/S0168-9002(01)02164-7 doi:10.1016/S0168-9002(01)02164-7.
yanagida2013 T. Yanagida, Opt. Mater. 35 (2013) 1987–1992. http://dx.doi.org/10.1016/j.optmat.2012.11.002 doi:10.1016/j.optmat.2012.11.002.
brooks2002 R. J. Brooks, D. E. Hole, P. D. Townsend, NIM B 190 (2002) 136–140. http://dx.doi.org/10.1016/S0168-583X(01)01226-5 doi:10.1016/S0168-583X(01)01226-5.
khanlary2005 M. Khanlary, D. E. Hole, P. D. Townsend, NIM B 227 (2005) 379–384. http://dx.doi.org/10.1016/j.nimb.2004.08.023 doi:10.1016/j.nimb.2004.08.023.
yanagida2011 T. Yanagida, K. Kamada, Y. Fujimoto, Y. Yokota, A. Yoshikawa, H. Yagi, T. Yanagitani, NIM A 631 (2011) 54–57. http://dx.doi.org/10.1016/j.nima.2010.12.038 doi:10.1016/j.nima.2010.12.038.
Ning2007 L. Ning, P. A. Tanner, V. V. Harutunyan, E. Aleksanyan, V. N. Makhov, M. Kirm, J. Lumin. 127 (2007) 397–403.
http://dx.doi.org/10.1016/j.jlumin.2007.01.019 doi:10.1016/j.jlumin.2007.01.019.
eichhorn2008 M. Eichhorn, Appl. Phys. B 93 (2008) 269–316. http://dx.doi.org/10.1007/s00340-008-3214-0 doi:10.1007/s00340-008-3214-0.
barcellan2011 L. Barcellan, E. Berto, G. Carugno, G. Galet, G. Galeazzi, A. F. Borghesani, Rev. Sci. Instrum. 82 (2011) 095103. http://dx.doi.org/10.1063/1.3636078 doi:10.1063/1.3636078.
estar M.J. Berger, J.S. Coursey, M.A. Zucker, J. Chang, Stopping-Power and Range Tables for Electrons, Protons, and Helium Ions, Tech. Rep. NISTIR 4999, NIST, Gaithersburg, USA (2005).
Venikouas1984 G. E. Venikouas, G. J. Quarles, J. P. King, R. C. Powell, Phys. Rev. B 30 (1984) 2401–2409. http://dx.doi.org/10.1103/PhysRevB.30.2401 doi:10.1103/PhysRevB.30.2401.
Gruber1989 J. B. Gruber, M. E. Hills, R. M. MacFarlane, C. A. Morrison, G. A. Turner, G. J. Quarles, G. J. Kintz, L. Esterowitz, Phys. Rev. B 40 (1989) 9464–9478. http://dx.doi.org/10.1103/PhysRevB.40.9464 doi:10.1103/PhysRevB.40.9464.
Burdick1994 G. W. Burdick, C. K. Jayasankar, F. S. Richardson, M. F. Reid, Phys. Rev. B 50 (1994) 16309–16325. http://dx.doi.org/10.1103/PhysRevB.50.16309 doi:10.1103/PhysRevB.50.16309.
Gruber2004 J. B. Gruber, D. K. Sardar, R. M. Yow, T. H. Allik, B. Zandi, J. Appl. Phys. 96 (2004) 3050. http://dx.doi.org/10.1063/1.1776320 doi:10.1063/1.1776320.
Gulyaeva2013 K. N. Gulyaeva, A. N. Trofimov, M. V. Zamoryanskaya, Opt. Spectrosc. 114 (2013) 709–712. http://dx.doi.org/10.1134/S0030400X13050056 doi:10.1134/S0030400X13050056.
Seregina2013 E. A. Seregina, A. A. Seregin, Quantum Electron. 43 (2013) 150–156. http://dx.doi.org/10.1070/QE2013v043n02ABEH014776 doi:10.1070/QE2013v043n02ABEH014776.
Singh1974 S. Singh, R. G. Smith, L. G. Van Uitert, Phys. Rev. B 10 (1974) 2566–2572. http://dx.doi.org/10.1103/PhysRevB.10.2566 doi:10.1103/PhysRevB.10.2566.
Dong2005 J. Dong, A. Rapaport, M. Bass, F. Szipocs, K. I. Ueda, Phys. Status Solidi Appl. Mater. Sci. 202 (2005) 2565–2573. http://dx.doi.org/10.1002/pssa.200421122 doi:10.1002/pssa.200421122.
Blatte1973 H. G. Danielmeyer, M. Blätte, P. Balmer, Appl. Phys. 1 (1973) 269–274. http://dx.doi.org/10.1007/BF00889774 doi:10.1007/BF00889774.
lempicki1995 A. Lempicki, J. Appl. Spectrosc. 62 (1995) 787–802.
Fujimoto2013 Y. Fujimoto, M. Sugiyama, T. Yanagida, S. Wakahara, S. Suzuki, S. Kurosawa, V. Chani, A. Yoshikawa, Opt. Mat. 35 (2013) 2023–2026. http://dx.doi.org/10.1016/j.optmat.2012.10.010 doi:10.1016/j.optmat.2012.10.010.
Fei2013 B. Fei, J. Huang, W. Guo, Q. Huang, J. Chen, F. Tang, W. Wang, Y. Cao, J. Lumin. 142 (2013) 189–195. http://dx.doi.org/10.1016/j.jlumin.2013.02.015 doi:10.1016/j.jlumin.2013.02.015.
Thomas2013 J. T. Thomas, M. Tonelli, S. Veronesi, E. Cavalli, X. Mateos, V. Petrov, U. Griebner, J. Li, Y. Pan, J. Guo, J. Phys. D. Appl. Phys. 46 (2013) 375301. http://dx.doi.org/10.1088/0022-3727/46/37/375301 doi:10.1088/0022-3727/46/37/375301.
antipenko1978 B. M. Antipenko, Y. V. Tomashevich, Opt. Spectrosc. 44 (1978) 157–159.
barnes2002 N. Barnes, B. M. Walsh, Quantum efficiency measurements of Nd:YAG, Yb:YAG, and Tm:YAG, in: M. E. Fermann, L. R. Marshall (Eds.), Advanced Solid State Lasers, Vol. 68 of Trends in Optics and Photonics, Optical Society of America, Washington D.C., 2002, p. B15. http://dx.doi.org/10.1364/ASSL.2002.TuB15 doi:10.1364/ASSL.2002.TuB15.
tkachuk2001 A. M. Tkachuk, I. K. Razumova, E. Y. Perlin, M. Joubert, R. Moncorge, Sol. St. Spectrosc. 90 (2001) 88–99. http://dx.doi.org/10.1134/1.1343549 doi:10.1134/1.1343549.
anedda2006 A.
Anedda, C. M. Carbonaro, D. Chiriu, R. Corpino, M. Marceddu, P. C. Ricci, Phys. Rev. B 74 (2006) 245108. http://dx.doi.org/10.1103/PhysRevB.74.245108 doi:10.1103/PhysRevB.74.245108.
rydberg2013 S. Rydberg, M. Engholm, J. Appl. Phys. 113 (2013) 223510. http://dx.doi.org/10.1063/1.4810858 doi:10.1063/1.4810858.
drozdowski2014 W. Drozdowski, K. Brylew, A. J. Wojtowicz, J. Kisielewski, M. Świrkowicz, T. Łukasiewicz, J. T. de Haas, P. Dorenbos, Opt. Mat. 4 (2014) 1207–1212.
swiderski2009light L. Swiderski, M. Moszynski, A. Nassalski, A. Syntfeld-Kazuch, T. Szczesniak, K. Kamada, K. Tsutsumi, Y. Usuki, T. Yanagida, A. Yoshikawa, IEEE Trans. Nucl. Sci. 56 (2009) 934.
chiossi2017 F. Chiossi, K. Brylew, A. F. Borghesani, C. Braggio, G. Carugno, W. Drozdowski, M. Guarise, NIM A, accepted for publication. http://dx.doi.org/10.1016/j.nima.2017.01.063 doi:10.1016/j.nima.2017.01.063.
http://arxiv.org/abs/1702.08386v1
{ "authors": [ "A. F. Borghesani", "C. Braggio", "G. Carugno", "F. Chiossi", "M. Guarise" ], "categories": [ "physics.optics" ], "primary_category": "physics.optics", "published": "20170227171545", "title": "Cathodo- and radioluminescence of Tm$^{3+}$:YAG and Nd$^{3+}$:YAG in an extended wavelength range" }
Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, the Netherlands
mohsen.ghafoorian@radboudumc.nl
Radiology Department, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
mehrtash@bwh.harvard.edu
Institute for Computing and Information Sciences, Radboud University, Nijmegen, the Netherlands
University of British Columbia, Vancouver, BC, Canada
Donders Institute for Brain, Cognition and Behaviour, Department of Neurology, Radboud University Medical Center, Nijmegen, the Netherlands

Transfer Learning for Domain Adaptation to Brain MRI Segmentation
Transfer Learning for Domain Adaptation in MRI: Application in Brain Lesion Segmentation

Mohsen Ghafoorian1,2,3 Alireza Mehrtash2,4 (Mohsen Ghafoorian and Alireza Mehrtash contributed equally to this work.) Tina Kapur2 Nico Karssemeijer1 Elena Marchiori3 Mehran Pesteie4 Charles R. G. Guttmann2 Frank-Erik de Leeuw5 Clare M. Tempany2 Bram van Ginneken1 Andriy Fedorov2 Purang Abolmaesumi4 Bram Platel1 William M. Wells III2

December 30, 2023
===========================================================================================================================================================================================================================================================================================================================================================

Magnetic Resonance Imaging (MRI) is widely used in routine clinical diagnosis and treatment. However, variations in MRI acquisition protocols result in different appearances of normal and diseased tissue in the images. Convolutional neural networks (CNNs), which have been shown to be successful in many medical image analysis tasks, are typically sensitive to variations in imaging protocols. Therefore, in many cases, networks trained on data acquired with one MRI protocol do not perform satisfactorily on data acquired with different protocols. This limits the use of models trained on large annotated legacy datasets for a new dataset with a different domain, which is a recurring situation in clinical settings. In this study, we aim to answer the following central questions regarding domain adaptation in medical image analysis: given a fitted legacy model, 1) how much data from the new domain is required for a decent adaptation of the original network? and 2) what portion of the pre-trained model parameters should be retrained, given a certain number of new domain training samples? To address these questions, we conducted extensive experiments on a white matter hyperintensity segmentation task. We trained a CNN on legacy MR images of the brain and evaluated the performance of the domain-adapted network on the same task with images from a different domain. We then compared the performance of the model to the surrogate scenarios where either the same trained network is used or a new network is trained from scratch on the new dataset. The domain-adapted network, tuned with only two training examples, achieved a Dice score of 0.63, substantially outperforming a similar network trained on the same set of examples from scratch.

§ INTRODUCTION

Deep neural networks have been extensively used in medical image analysis and have outperformed the conventional methods for specific tasks such as segmentation, classification and detection <cit.>.
For instance, in brain MR analysis, convolutional neural networks (CNNs) have been shown to achieve outstanding performance for various tasks including white matter hyperintensities (WMH) segmentation <cit.>, tumor segmentation <cit.>, microbleed detection <cit.>, and lacune detection <cit.>. Although many studies report excellent results on specific domains and image acquisition protocols, the generalizability of these models to test data with different distributions is often not investigated and evaluated. Therefore, to ensure the usability of trained models in real world practice, which involves imaging data from various scanners and protocols, domain adaptation remains a valuable field of study. This becomes even more important when dealing with Magnetic Resonance Imaging (MRI), which demonstrates high variations in soft tissue appearances and contrasts among different protocols and settings.

Mathematically, a domain D can be expressed by a feature space χ and a marginal probability distribution P(X), where X = {x_1,...,x_n} ∈ χ <cit.>. A supervised learning task on a specific domain D = {χ, P(X)} consists of a pair of a label space Y and an objective predictive function f(.) (denoted by T = {Y, f(.)}). The objective function f(.) can be learned from the training data, which consists of pairs {x_i, y_i}, where x_i ∈ X and y_i ∈ Y. After the training process, the learned model, denoted by f̃(.), is used to predict the label for a new instance x. Given a source domain D_S with a learning task T_S and a target domain D_T with a learning task T_T, transfer learning is defined as the process of improving the learning of the target predictive function f_T(.) in D_T using the information in D_S and T_S, where D_S ≠ D_T, or T_S ≠ T_T <cit.>. We denote by f̃_ST(.) the predictive model initially trained on the source domain D_S and domain-adapted to the target domain D_T.

In the medical image analysis literature, transfer classifiers such as the adaptive SVM and transfer AdaBoost have been shown to outperform common supervised learning approaches in segmenting brain MRI when trained only on a small set of target domain images <cit.>. In another study, a machine learning based sample weighting strategy was shown to be capable of handling multi-center chronic obstructive pulmonary disease images <cit.>. Recently, several studies have also investigated transfer learning methodologies on deep neural networks applied to medical image analysis tasks. A number of studies used networks pre-trained on natural images to extract features, followed by another classifier, such as a Support Vector Machine (SVM) or a random forest <cit.>. Other studies <cit.> performed layer fine-tuning on the pre-trained networks to adapt the learned features to the target domain.

Considering the hierarchical feature learning in CNNs, we expect the first few layers to learn features for general, simple visual building blocks, such as edges, corners and simple blob-like structures, while the deeper layers learn more complicated, abstract, task-dependent features.
In general, the ability to learn domain-dependent high-level representations is an advantage enabling CNNs to achieve great recognition capabilities. However, it is not obvious how these qualities are preserved during the transfer learning process for domain adaptation. For example, it would be practically important to determine how much data on the target domain is required for domain adaptation with sufficient accuracy for a given task, or how many layers from a model fitted on the source domain can be effectively transferred to the target domain. Or, more interestingly, given a number of available samples on the target domain, what layer types and how many of those can we afford to fine-tune? Moreover, there is a common scenario in which a large set of annotated legacy data is available, often collected in a time-consuming and costly process. Upgrades in the scanners, acquisition protocols, etc., as we will show, might make the direct application of models trained on the legacy data unsuccessful. To what extent these legacy data can contribute to a better analysis of new datasets, or vice versa, is another question worth investigating.

In this study, we aim towards answering the questions discussed above. We use a transfer learning methodology for domain adaptation of models trained on legacy MRI data for brain WMH segmentation.

§ MATERIALS AND METHOD

§.§ Dataset

The Radboud University Nijmegen Diffusion tensor and Magnetic resonance imaging Cohort (RUN DMC) <cit.> is a longitudinal study of patients diagnosed with small vessel disease. The baseline scans acquired in 2006 consisted of fluid-attenuated inversion recovery (FLAIR) images with a voxel size of 1.0×1.2×5.0 mm and an inter-slice gap of 1.0 mm, scanned with a 1.5 T Siemens scanner. However, the follow-up scans in 2011 were acquired differently, with a voxel size of 1.0×1.2×3.0 mm, including a slice gap of 0.5 mm. The follow-up scans demonstrate a higher contrast, as the partial volume effect is less of an issue due to the thinner slices. For each subject, we also used a 3D T1 magnetization-prepared rapid gradient-echo (MPRAGE) image with a voxel size of 1.0×1.0×1.0 mm, which is the same for the two datasets. Reference WMH annotations on both datasets were provided semi-automatically, by manually editing segmentations provided by a WMH segmentation method <cit.> wherever needed. The T1 images were linearly registered to the FLAIR scans, followed by brain extraction and bias-field correction operations. We then normalized the image intensities to be within the range of [0, 1]. In this study, we used 280 patient acquisitions with WMH annotations from the baseline as the source domain, and 159 scans from all the patients that were rescanned in the follow-up as the target domain. Table <ref> shows the data split into the training, validation and test sets. It should be noted that the same patient-level partitioning which was used on the baseline was respected on the follow-up dataset, to prevent potential label leakage.

§.§ Sampling

We sampled 32×32 patches to capture local neighborhoods around WMH and normal voxels from both FLAIR and T1 images. We assigned each patch the label of the corresponding central voxel. To be more precise, we randomly selected 25% of all voxels within the WMH masks, and randomly selected the same number of negative samples from the normal appearing voxels inside the brain mask. We augmented the dataset by flipping the patches along the y axis; a sketch of this sampling scheme is given below.
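The paper does not include code; purely as a hedged illustration, a minimal NumPy sketch of the sampling just described might look as follows, where the volume arrays, the bounds handling, the assumption that negatives always outnumber positives, and all function names are ours rather than the authors':

import numpy as np

def sample_patches(flair, t1, wmh, brain, half=16, frac_pos=0.25, seed=0):
    """32x32 two-channel (FLAIR, T1) patches labelled by the central voxel:
    25% of WMH voxels as positives, an equal number of normal-appearing
    brain voxels as negatives, augmented by flipping along the y axis."""
    rng = np.random.default_rng(seed)
    pos = np.argwhere(wmh)
    pos = pos[rng.random(len(pos)) < frac_pos]
    neg_pool = np.argwhere(brain & ~wmh)
    neg = neg_pool[rng.choice(len(neg_pool), size=len(pos), replace=False)]
    X, y = [], []
    for coords, label in ((pos, 1), (neg, 0)):
        for z, r, c in coords:
            if half <= r < flair.shape[1] - half and half <= c < flair.shape[2] - half:
                patch = np.stack([flair[z, r - half:r + half, c - half:c + half],
                                  t1[z, r - half:r + half, c - half:c + half]])
                X.append(patch); y.append(label)
                X.append(patch[:, ::-1, :]); y.append(label)  # y-axis flip augmentation
    return np.asarray(X, np.float32), np.asarray(y, np.int64)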
This sampling procedure resulted in training and validation datasets of size ∼1.2m and ∼150k patches on the baseline, and ∼1.75m and ∼200k on the follow-up.

§.§ Network Architecture and Training

We stacked the FLAIR and T1 patches as the input channels and used a 15-layer architecture consisting of 12 convolutional layers of 3×3 filters, 3 dense layers of 256, 128 and 2 neurons, and a final softmax layer. We avoided using pooling layers, as they would result in a shift-invariance property that is not desirable in segmentation tasks, where the spatial information of the features is important to preserve. The network architecture is illustrated in Figure <ref>. To tune the weights of the network, we used the Adam update rule <cit.> with a mini-batch size of 128 and a binary cross-entropy loss function. We used the Rectified Linear Unit (ReLU) activation function as the non-linearity and the He method <cit.>, which randomly initializes the weights drawn from a 𝒩(0, √(2/m)) distribution, where m is the number of inputs to a neuron. Activations of all layers were batch-normalized to speed up convergence <cit.>. A decaying learning rate was used, with a starting value of 0.0001 for the optimization process. To avoid over-fitting, we regularized our networks with a drop-out rate of 0.3 as well as L_2 weight decay with λ_2 = 0.0001. We trained our networks for a maximum of 100 epochs with an early stopping policy. For each experiment, we picked the model with the highest area under the curve on the validation set.

We trained our networks with a patch-based approach. At segmentation time, however, we converted the dense layers to their equivalent convolutional counterparts to form a fully convolutional network (FCN). FCNs are much more efficient, as they avoid repetitive computations on neighboring patches by feeding the whole image into the network. We prefer the conceptual distinction between dense and convolutional layers at training time, to keep the generality of the experiments for classification problems as well (e.g., testing the benefits of fine-tuning the convolutional layers in addition to the dense layers). Patch-based training allows class-specific data augmentation to handle domains with hugely imbalanced class ratios (e.g., the WMH segmentation domain).

§.§ Domain Adaptation

To build the model f̃_ST(.), we transferred the learned weights from f̃_S, then froze the shallowest i layers and fine-tuned the remaining d-i deeper layers with the training data from D_T, where d is the depth of the trained CNN. This is illustrated in Figure <ref>. We used the same optimization update rule, loss function, and regularization techniques as described in Section <ref>; a sketch of this freezing scheme follows.
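As a hedged PyTorch sketch of the freezing scheme (our illustration, not the authors' implementation; it assumes the network is expressed as an nn.Sequential of d blocks):

import torch
import torch.nn as nn

def make_domain_adapted(f_S: nn.Sequential, n_frozen: int, lr: float = 1e-4):
    """Build f_ST from the fitted source model f_S: keep the transferred
    weights, freeze the shallowest n_frozen layers, and fine-tune only the
    remaining deeper layers on target-domain data."""
    for idx, layer in enumerate(f_S):
        for p in layer.parameters():
            p.requires_grad = idx >= n_frozen
        # frozen BatchNorm layers would typically also be kept in eval() mode,
        # so target-domain batches do not update their running statistics
        if idx < n_frozen and isinstance(layer, (nn.BatchNorm1d, nn.BatchNorm2d)):
            layer.eval()
    trainable = [p for p in f_S.parameters() if p.requires_grad]
    optimizer = torch.optim.Adam(trainable, lr=lr, weight_decay=1e-4)
    return f_S, optimizer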
§.§ Experiments

For the WMH segmentation task, we investigated and compared three different scenarios: 1) training a model on the source domain and directly applying it to the target domain; 2) training networks on the target domain data from scratch; and 3) transferring the model learned on the source domain onto the target domain with fine-tuning. In order to identify the target domain dataset sizes where transfer learning is most useful, the second and third scenarios were explored with different training set sizes of 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 25, 50 and 100 cases. We explored the third scenario extensively, investigating the best freezing/tuning cut-off for each of the mentioned target domain training set sizes. We used the same network architecture and training procedure for the different experiments. The reported metric for segmentation quality assessment is the Dice score.

§ RESULTS

The model trained on the set of images from the source domain (f̃_S) achieved a Dice score of 0.76. The same model, without fine-tuning, failed on the target domain with a Dice score of 0.005. Figure <ref>(a) demonstrates and compares the Dice scores obtained with three domain-adapted models to a network trained from scratch on different target training set sizes. Figure <ref>(b) illustrates the target domain test set Dice scores as a function of the target domain training set size and the number of abstract layers that were fine-tuned. Figure <ref> presents and compares qualitative results of WMH segmentation of several different models on a single sample slice.

§ DISCUSSION AND CONCLUSIONS

We observed that while f̃_S demonstrated a decent performance on D_S, it totally failed on D_T. Although the same set of learned representations is expected to be useful for both, as the two tasks are similar, the failure comes as no surprise because the distribution of the responses to these features is different. Observing the comparisons presented in Figure <ref>(a), it turns out that given only a small set of training examples on D_T, the domain-adapted model substantially outperforms the model trained from scratch with the same size of training data. For instance, given only two training images, f̃_ST achieved a Dice score of 0.63, while f̃_T resulted in a Dice score of 0.15, on a test set of 33 target domain test images. As Figure <ref>(b) suggests, with only a few D_T training cases available, the best results can be achieved by fine-tuning only the last dense layers; otherwise the enormous number of parameters compared to the training sample size would result in over-fitting. As soon as more training data becomes available, it makes more sense to fine-tune the shallower representations (e.g., the last convolutional layers). It is also interesting to note that tuning the first few convolutional layers is rarely useful, considering their domain-independent characteristics.
http://arxiv.org/abs/1702.07841v1
{ "authors": [ "Mohsen Ghafoorian", "Alireza Mehrtash", "Tina Kapur", "Nico Karssemeijer", "Elena Marchiori", "Mehran Pesteie", "Charles R. G. Guttmann", "Frank-Erik de Leeuw", "Clare M. Tempany", "Bram van Ginneken", "Andriy Fedorov", "Purang Abolmaesumi", "Bram Platel", "William M. Wells III" ], "categories": [ "cs.CV" ], "primary_category": "cs.CV", "published": "20170225070425", "title": "Transfer Learning for Domain Adaptation in MRI: Application in Brain Lesion Segmentation" }
tomos.david@physics.ox.ac.uk
University of Oxford, Department of Physics, Clarendon Laboratory, Parks Road, Oxford, OX1 3PU
University of Oxford, Department of Physics, Clarendon Laboratory, Parks Road, Oxford, OX1 3PU
University of Oxford, Department of Physics, Clarendon Laboratory, Parks Road, Oxford, OX1 3PU

Understanding the statistics of ocean geostrophic turbulence is of utmost importance in understanding its interactions with the global ocean circulation and the climate system as a whole. Here, a study of eddy-mixing entropy in a forced-dissipative barotropic ocean model is presented. Entropy is a concept of fundamental importance in statistical physics and information theory; motivated by equilibrium statistical mechanics theories of ideal geophysical fluids, we consider the effect of forcing and dissipation on eddy-mixing entropy, both analytically and numerically. By diagnosing the time evolution of eddy-mixing entropy it is shown that the entropy provides a descriptive tool for understanding three stages of the turbulence life cycle: growth of instability, formation of large scale structures and steady state fluctuations. Further, by determining the relationship between the time evolution of entropy and the maximum entropy principle, evidence is found for the action of this principle in a forced-dissipative flow. The maximum entropy potential vorticity statistics are calculated for the flow and are compared with numerical simulations. Deficiencies of the maximum entropy statistics are discussed in the context of the mean-field approximation for energy. This study highlights the importance of entropy and statistical mechanics in the study of geostrophic turbulence.

Eddy-mixing entropy as a measure of turbulent disorder in barotropic ocean jets

David P. Marshall

December 30, 2023
===============================================================================

§ INTRODUCTION

Geostrophic turbulence plays an important role in establishing the dynamical balances of the ocean and is an important ingredient in understanding the global ocean circulation <cit.> and its feedback onto the climate system <cit.>. In the Southern Ocean, for example, geostrophic turbulence plays a leading order role in setting the transport and stratification for the Antarctic Circumpolar Current <cit.>. Unfortunately the eddies which make up this turbulence lie beyond the computational reach of modern climate models when run to dynamic and thermodynamic equilibrium. These climate models typically have an ocean resolution of 50-100 km, while mesoscale eddies have typical length scales of 10-100 km, meaning that it is still necessary to parameterize the effect of eddies on the mean flow. Currently the parameterizations typically used in ocean and climate models at coarse resolution are variants of that developed by <cit.> and <cit.>.

As computational power increases, the resolution of today's climate models is starting to approach the mesoscale, therefore permitting a partial representation of the eddy length scales. Although eddy-permitting resolution is advantageous in terms of beginning to resolve eddy dynamics, it may be the case that it is no longer appropriate to use a deterministic parameterization such as Gent-McWilliams. As resolutions approach the mesoscale, the average effect of sub-grid eddies may not be representative, as there will be an insufficient number of eddies below the sub-grid scale for averaging to be meaningful. An alternative approach is to model eddies with a stochastic parameterization.
Recent studies have explored this possibility <cit.> for ocean models. With this growing interest in the stochastic nature of mesoscale eddies, it is timely to study the statistics of potential vorticity, and the underlying organizing principles influencing these statistics, in simplified ocean models. One approach is to borrow concepts from statistical physics such as entropy and equilibrium statistical mechanics. This approach has a long history which goes back to the birth of turbulence theory. The first example of the application of statistical mechanics to two-dimensional turbulence comes from <cit.>, where a model of singular point vortices was proposed. The statistical mechanics of point vortices has received much study since then. <cit.> led the way towards continuous vorticity fields through utilizing the invariance of energy and enstrophy in variational problems. Geostrophic flows over topography were tackled using this methodology: in <cit.> via the maximization of an entropy; and independently in <cit.> via a phenomenological minimum enstrophy principle. In the early 1990s, work was published which led to a theory of equilibrium statistical mechanics of two-dimensional and geophysical flows <cit.>. We refer to this theory as the `Miller-Robert-Sommeria statistical mechanics'. Reviews of this field are given by <cit.> and <cit.>. The power of the Miller-Robert-Sommeria theory is that all the work of <cit.>, <cit.>, <cit.> and <cit.> is contained within this framework as particular limits or simplifications. This ideal (no forcing, no dissipation) theory has been used to suggest a statistical explanation for the formation of ocean rings and jets in <cit.>. A further example of equilibrium statistical mechanics ideas being applied to the problem of ocean circulation and its associated density profiles is presented by <cit.>.

In this study we consider the concept of entropy in the context of dissipative and forced-dissipative ocean flow. The powerful theory of equilibrium statistical mechanics is limited in usefulness when tackling non-ideal fluids; incorporating the effect of forcing and dissipation on entropy is essential in addressing the extent to which entropic ideas can be applied to the problem of mesoscale ocean turbulence. The aims of this study are as follows.

* To determine the impact of forcing and dissipation on the evolution of entropy in a turbulent barotropic jet, analytically and numerically.
* To test the maximum entropy principle[Not to be confused with the similarly named maximum entropy production principle.] and to understand the utility of this principle in the context of an idealized forced-dissipative turbulent jet.
* To use the maximum entropy principle as a means to formulate a relationship between dynamically balanced quantities and the statistics of the flow.

In Section <ref>, we outline the barotropic model used in this study and describe the numerical experiments. In Section <ref>, we introduce the eddy-mixing entropy and define the key concepts in this study. In Section <ref>, we derive analytical expressions for the influence of constant forcing and linear drag on the entropy, as well as consider the case of a Gaussian stochastic forcing. In Section <ref>, we diagnose the entropy for a freely-decaying turbulent jet and use this to test the predictions of Section <ref>; in Section <ref>, we repeat the diagnosis for a forced-dissipative turbulent jet and consider entropy as a balanced dynamical quantity.
In Section <ref>, we consider the maximum entropy principle and relate it to the ideas developed in the preceding parts of this study; evidence is given in support of the maximum entropy principle and we discuss this in relation to forced-dissipative flow. In addition, the mathematical problem of obtaining the statistics of the flow from the assumption of maximum entropy is presented, and the maximum entropy statistics are calculated and compared with numerical simulations. In Section <ref>, we discuss the mean-field approximation for energy and its relation to the maximum entropy statistics. In Section <ref>, the study is concluded with some closing remarks.

§ MODEL AND EXPERIMENTS

§.§ Model

We solve the barotropic vorticity equation on a β plane within a singly-periodic domain, recently used in <cit.>. The simplicity of this model allows us to perform many high resolution simulations, and the channel configuration provides an analogue to the Southern Ocean. The equation of motion is given by

∂q/∂t = - {ψ, q} - r ∇²ψ - ν_h ∇⁶ψ - ∂_y τ(y),

where the potential vorticity is given by q = ∇²ψ + βy; ∇²ψ is the relative vorticity and βy is the planetary vorticity, making q equivalent to the absolute vorticity in this barotropic model; τ is the zonal wind stress, which is defined to be a function of meridional distance only; r and ν_h are the linear drag coefficient and the hyper-viscosity respectively; the braces denote the horizontal Jacobian operator. The biharmonic diffusion of vorticity is used for numerical stability, preferentially dissipating the grid-scale noise compared to Laplacian diffusion. The hyper-viscosity is chosen to be as small as possible, allowing us to treat the linear drag as the dominant dissipative term in this study. Linear drag is an attractive choice of dissipative term due to its analytical tractability as well as its being analogous to oceanic bottom drag.

The periodicity in the zonal direction is employed to solve the model using a pseudo-spectral method, modified from a pre-existing code <cit.>. The model domain is shown in Figure <ref>. The boundary conditions are free-slip,

∇^(2n)ψ|_N,S = 0, where n = 1, 3;

no normal flow,

∂_x ψ|_N,S = 0;

and global momentum conserving,

ψ|_N,S = ±Γ(t)/2.

We find Γ by solving the prognostic integral momentum balance,

dΓ/dt = -∬ d²x [r ∂ψ/∂y + τ].

This is the same condition used by <cit.> and is the barotropic (and rigid lid) limit of the general integral momentum balance derived in <cit.>. By applying this boundary condition we are able to impose a fixed wind stress forcing, rather than relaxing to a background shear as is often done for models of this type <cit.>. It is important to note that the ideal dynamics of this flow conserves the following quantities in addition to momentum. In ideal flow the energy

E = (1/2) ∬ d²x (∇ψ)·(∇ψ)

is conserved. This can be rewritten as

E = (Γ/4)(u_S - u_N) - (1/2) ∬ d²x ψ(q - βy),

exploiting the relationship between q and ψ as well as the boundary condition described above, where u_N and u_S are the velocities along the north and south boundaries respectively. When the flow exhibits symmetry breaking the first term will be non-zero; this is not the case in the flow realizations presented in this study. In addition, ideal flow conserves the integral of any function of the potential vorticity,

C = ∬ d²x c(q),

where c is an arbitrary function. These quantities are called Casimirs <cit.>.
We are primarily interested in the polynomial Casimirs, which we will denote as

C_n = ∬ d²x qⁿ.

Two Casimirs of particular physical importance are the circulation, n = 1, and the enstrophy, n = 2. Alternatively, all Casimirs can be conserved simultaneously by conserving the global potential vorticity distribution, Π, where

Π(q) = dA(q)/dq,

where A(q), the global cumulative potential vorticity distribution function, is the area of the domain occupied by points with a value of potential vorticity less than q.

§.§ Experiments

Two numerical experiments are performed using the above model. The first experiment is a freely-decaying unstable jet in which the initial jet has a velocity profile

u(y) = u_0 sech²(y),

with u_0 = 10. The unstable jet evolves freely under the action of hyper-viscosity and varying strengths of linear drag. The second experiment is a forced-dissipative turbulent jet spun up from rest with varied strength of the wind stress. The wind stress profile is kept the same for all simulations as

τ = τ_0 sech²(y/δ),

but the magnitude of the jet is varied by changing the value of τ_0. δ is the width parameter and is fixed for all simulations. Table <ref> gives the values used in the different simulations. The parameters which are held constant for both experiments are given in Table <ref>.

§ EDDY-MIXING ENTROPY

The eddy-mixing entropy is not the same as the thermodynamic entropy associated with molecular motions. The eddy-mixing entropy is a measure of the disorder of the large scale turbulent flow, and depends on the choice of coarse-graining which distinguishes between the large scales and the small scales of the flow. To formally define the eddy-mixing entropy we follow the approach presented in <cit.>. To proceed we will define two sub-systems of the full flow:

* A micro-cell, which is the smallest scale over which the details of the flow are important. The micro-cell is equivalent to the grid-cell for a high resolution numerical simulation. We think of each micro-cell as being characterized by only one value of the potential vorticity.
* A macro-cell, which is comprised of a number of micro-cells and is related to a choice of some coarse-graining scale. The macro-cells should be chosen to exploit some dynamical symmetry of the system. In our case we, for the most part, choose zonal bands, exploiting the zonal symmetry of the system, apart from in Section <ref> where we choose contours of instantaneous streamfunction.

The macro- and micro-cells used in this study are schematically illustrated in Figure <ref>. Having defined the macro-cells as such, it is possible to define an eddy-mixing entropy by counting the number of ways to arrange micro-cells of each value of potential vorticity among the macro-cells. This gives the expression

S = ln W = ln ∏_I [M^(I)! / ∏_r M^(I)_r!]

for the eddy-mixing entropy, where M^(I) is the number of micro-cells in the I^th macro-cell and M^(I)_r is the number of micro-cells with the r^th value of potential vorticity in the I^th macro-cell. This counting method is adapted from <cit.>, where it was used to study the statistical mechanics of stellar systems. For large numbers of micro- and macro-cells we can take the continuous limit to get

S[ρ] = - ∫ d²x dq̃ ρ(q̃|𝐱) ln(ρ(q̃|𝐱)),

in terms of the probability distribution function, ρ. In words, ρ is the probability of measuring a value, q̃, of the potential vorticity at the point 𝐱 in the domain. It is important to note that in this expression 𝐱 has taken the place of I in labelling the macro-cell. The coordinate 𝐱 should be interpreted as a coarse-grained or smoothed coordinate. The q̃ is not the potential vorticity field, q(𝐱), but a random variable representing the result of a measurement of potential vorticity. The eddy-mixing entropy is the sum over the continuous information entropies associated with the distribution of potential vorticity in each macro-cell. The method of numerically determining these entropies is given in an appendix and is used throughout this study; a schematic sketch of such a diagnostic is given below.
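The appendix itself is not reproduced in this excerpt. Purely as a hedged illustration of one natural discretization of S (equal-area zonal bands as macro-cells, a shared set of potential vorticity bins playing the role of the discrete values q_r), such a diagnostic could look like:

import numpy as np

def eddy_mixing_entropy(q, n_bands=64, n_bins=64):
    """Histogram estimate of the eddy-mixing entropy: S = Σ_I H_I, where
    H_I = -Σ_r p_r ln p_r is the entropy of the PV distribution in band I.
    For equal-area bands the M^(I) weights of the counting formula are a
    common factor and are omitted here. q has shape (ny, nx)."""
    edges = np.linspace(q.min(), q.max(), n_bins + 1)   # shared PV values q_r
    S = 0.0
    for band in np.array_split(q, n_bands, axis=0):     # macro-cells: zonal bands
        counts, _ = np.histogram(band, bins=edges)
        p = counts / counts.sum()
        p = p[p > 0]
        S -= np.sum(p * np.log(p))
    return S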
§ ANALYTICAL MODEL FOR THE EVOLUTION OF ENTROPY

The fundamental quantity that we are interested in is the eddy-mixing entropy given by (<ref>). In this section we derive a tendency equation for this entropy. We are not able to derive a full theory, as the effects of the non-linear and non-local terms in the vorticity equation, (<ref>), do not seem to be analytically tractable. Nevertheless it is possible to derive analytical expressions for the entropy evolution due to the remaining linear and local terms in the vorticity equation: the wind stress curl and the linear drag. To allow for wider applicability to future studies, we also include a Gaussian stochastic noise, although we do not consider the stochastic forcing case in the numerical experiments described in this study. Ignoring the non-local (hyper-viscous) and non-linear (advection) terms, we have the following equation for the evolution of potential vorticity:

∂q/∂t = -r(q - βy) - g(y) + η,

where g(y) = -∂_y τ(y) is the constant (in time) part of the forcing and η is a Gaussian noise with zero mean, uncorrelated in time. The stochastic part of the forcing might be considered as an analogue to the high frequency part of the atmosphere's interaction with the ocean, as well as to high frequency ocean processes not represented in this model. Equation (<ref>), ignoring the non-local and non-linear terms, represents an Ornstein–Uhlenbeck process <cit.>, and the corresponding probability distribution satisfies the Fokker-Planck equation,

∂ρ/∂t = ∂/∂q̃ [r(q̃ - βy)ρ] - g(y) ∂ρ/∂q̃ + (⟨η²⟩/2) ∂²ρ/∂q̃²,

where ⟨η²⟩ is the variance of the noise. The first two terms on the right hand side of equation (<ref>) were also derived in <cit.>, but here we consider the effect of these terms on the entropy. We can write the entropy tendency as

Ṡ = -∫ d²x dq̃ ρ̇ ln ρ,

where the dot represents differentiation with respect to time. By substituting equation (<ref>) into (<ref>) we can derive the influence of these terms on the entropy, yielding

Ṡ = P - r + (⟨η²⟩/2) ∫ d²x dq̃ (∂²ρ/∂q̃²) ln ρ,

where we have now included the effect of advection and hyper-viscosity as the residual, P; as the hyper-viscosity is small in our numerical calculations, we will take the liberty of referring to P as the advective production of entropy. Notably, (<ref>) does not have an explicit dependence on the zonally symmetric forcing, as a constant wind stress only shifts the distribution in each zonal band and the entropy is invariant to these shifts. The linear drag leads to a remarkably simple term, which is minus the drag coefficient: a perpetual and constant sink of entropy (the integration by parts behind this term is spelled out below). Thus, any increase in entropy must arise through the effect of turbulence or stochastic forcing. In the absence of turbulence, the stochastic noise term will solely act to spread out the probability distribution and will consequently act as a source of entropy.
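For completeness, the step leading to the -r term can be filled in as follows, under the conventions implied by the text (domain area normalized to unity, and ρ decaying fast enough at large |q̃| that boundary terms vanish):

Ṡ|_drag = -∫ d²x dq̃ {∂/∂q̃ [r(q̃ - βy)ρ]} ln ρ
        = ∫ d²x dq̃ r(q̃ - βy) ρ ∂(ln ρ)/∂q̃
        = ∫ d²x dq̃ r(q̃ - βy) ∂ρ/∂q̃
        = -r ∫ d²x ∫ dq̃ ρ = -r,

where the first equality uses Ṡ = -∫ d²x dq̃ ρ̇ ln ρ (the additional term -∫ ρ̇ dq̃ vanishes by conservation of probability), the second and fourth follow by integration by parts, and the last uses ∫ dq̃ ρ = 1 in each macro-cell together with the unit domain area.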
For this purely stochastic, eddy-free case we would then expect a stationary solution for the entropy evolution, in which the source of entropy due to stochastic noise balances the sink of entropy due to linear drag. This can be tested by numerically solving the linear equation of motion, (<ref>), and comparing with the steady state solution of the Fokker-Planck equation, (<ref>), and its corresponding entropy. In what follows we will not consider the case of Gaussian stochastic forcing but will concentrate on simulations with constant zonally symmetric forcing. We include the above as an important caveat to the numerical results presented in this study; in a realistic ocean simulation, high frequency noise will introduce a source of entropy through its fast time-scale variability.

§ NUMERICAL EXPERIMENTS

§.§ Eddy-mixing entropy in freely-decaying turbulence

For our numerical simulations (no stochastic term), the entropy tendency is simply given as

Ṡ = P - r,

with no explicit contribution from the deterministic forcing term. However, the forcing will contribute to determining the behaviour of the advective production of entropy, P. To illuminate the effect of forcing we first consider the evolution of entropy in the absence of forcing: the freely-decaying unstable flow.

We begin by examining the evolution of entropy for short times as the instabilities grow then decay, shown in Figure <ref>. In region A of Figure <ref>, the entropy increases very quickly, concomitant with the exponential growth of eddy energy through shear instability in the jet. There is little spread in the rate of entropy growth across simulations D_1,...,D_7 with changed drag parameter. This growth is arrested for all experiments at a maximum value in region B, where the maximum is also not greatly changed by the differing linear drag coefficient. The entropy then decreases, in regions B to C, towards its asymptotic behaviour. This decrease is greater than can be explained by the sink of entropy due to linear drag, meaning that in this region, B to C, the advective production of entropy must become negative and act as a sink of entropy. There is an interesting difference between simulations D_1,...,D_3 and the other simulations. These low drag simulations see a second increase in entropy (Figure <ref>, near Time = 1000) toward the long-time behaviour, as well as an oscillation about the long-time behaviour. These effects can be illuminated by examining the flow: at low linear drag coefficient a persistent Rossby wave forms, causing an increase in the disorder as compared to laminar flow, as well as the observed oscillations.

In freely-decaying turbulence the eddies will ultimately die away through the action of linear drag and hyper-viscosity, causing the advective production of entropy, P, to tend to zero at long times. In this case equation (<ref>) tends to the asymptotic solution for the entropy evolution

S(t) ≈ -rt + K for long times,

where K is a constant of integration. We can test this hypothesis in a simulation of a freely-decaying unstable jet, as for long times we would expect the activity of the eddies to asymptotically decay to zero. We initialize the model with an unstable zonal jet and allow the flow to evolve under the action of linear drag and hyper-viscosity only, the latter being small. The long time behaviour of the entropy is also shown in Figure <ref>. We see a striking agreement between the long time behaviour predicted by (<ref>) and the slopes diagnosed from the simulations; a minimal numerical illustration of the predicted linear decay follows.
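As a hedged, self-contained illustration (not the paper's diagnostics; r, the time step and the initial distribution are arbitrary choices of ours): pure linear drag contracts the per-point PV distribution, so a histogram estimate of the differential entropy decreases linearly at the rate r,

import numpy as np

rng = np.random.default_rng(1)
r, dt, nsteps = 0.01, 0.5, 400
q = rng.gamma(2.0, 1.0, size=200_000)   # any (non-Gaussian) initial PV sample

def diff_entropy(samples, n_bins=200):
    """Histogram estimate of the differential entropy -∫ ρ ln ρ dq̃."""
    counts, edges = np.histogram(samples, bins=n_bins)
    p = counts / counts.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p)) + np.log(edges[1] - edges[0])

S = []
for _ in range(nsteps):
    q += -r * q * dt                    # drag only: dq/dt = -r q (eddies absent)
    S.append(diff_entropy(q))

slope = np.polyfit(dt * np.arange(nsteps), S, 1)[0]
print(f"diagnosed slope = {slope:.4f}, predicted -r = {-r}")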
Figure <ref> shows the agreement between the predicted and the measured long-time slope of the linear entropy decrease, which is found to be near exact.

§.§ Eddy-mixing entropy in forced-dissipative turbulence

We now turn our attention to the entropy evolution in the forced-dissipative experiments FD_1,...,FD_7. We start by considering the entropy at short times, comparing it to snapshots of the potential vorticity. As an example we consider experiment FD_3. Figure <ref> shows the evolution of entropy as the system evolves to a statistically steady state. Initially, in region A, the flow is near laminar, with only the small initial perturbation; we see that this corresponds to a low value of entropy. Once instabilities begin to grow, the entropy grows quickly to a maximum value, much like the evolution in the freely-decaying simulations. At the maximum of entropy, region B, the turbulence has covered the whole domain with small scale eddies. As these eddies mix the potential vorticity we see a slump in the entropy. When we examine the flow at the bottom of the slump, region C, we see that a large scale Rossby wave has emerged, propagating on a sharp potential vorticity gradient corresponding to a mixing barrier. This decrease of entropy, or disorder, in the system allows us to describe, in an entropic sense, the way in which energy has condensed at large scales. The concomitance of this decrease in entropy with the emergence of large scales leads to a novel interpretation of well known inverse cascade phenomena: the emergence of large-scale coherent structures can be described as a decrease of entropy in this system.

At longer times the entropy fluctuates around a balanced steady state value. This behaviour of the entropy is the same for all the forced-dissipative simulations, FD_1,...,FD_7, shown in Figure <ref>a, much like the statistically steady state balance of energy shown in Figure <ref>b. Both the time-mean entropy and energy increase with the wind stress strength in steady state. The balanced steady state behaviour of the entropy is explained, according to the reasoning of Section <ref>, by the competition between the advective production of entropy and the constant sink due to linear drag, that is

P̄ - r = 0,

where the over-line denotes the time-mean in the statistically steady state. It is important to note that, because -r is merely a negative constant, both the increases and decreases in the entropy fluctuations arise from the advective production, P. That is, eddies can act as both a source and a sink of entropy. The action of P as a source and a sink must be associated with the presence of dissipation, which allows fluctuations in otherwise conserved global quantities, such as energy and entropy, that exhibit inverse and direct transfers between scales. Although the time derivative of entropy has no explicit dependence on forcing, the forcing does supply energy to the turbulent motions by sustaining the eddy production of entropy, unlike in the case of the freely-decaying experiment. The forcing implicitly sets the maximum and steady state values of the entropy; we will further consider how the entropy is related to other well-known dynamical quantities in the following section.

§ RELATION TO THE MAXIMUM ENTROPY PRINCIPLE

§.§ Time evolution of entropy and the maximum entropy principle

In what has been discussed so far we have considered the derivative of entropy with respect to time.
Now we consider its relation to the maximum entropy principle, which is at the core of the equilibrium statistical mechanics theory of ideal geophysical flow <cit.>. The maximum entropy principle states that the entropy should be stationary with respect to variations in the probability distribution, ρ, given appropriate dynamical constraints. We can relate the time derivative of the entropy, S, to the functional derivative using the relation

∂S/∂t = ∫ d²x dq̃ (∂ρ/∂t)(δS/δρ).

Assuming that the system is in a maximum entropy state constrained by the value of energy and N Casimirs, we have that the variational problem

δS/δρ + α(t) δ/δρ (∫ d²x dq̃ ⟨ψ⟩ q̃ ρ - E(t)) + ∑_{n=1}^{N} γ_n(t) δ/δρ (∫ d²x dq̃ q̃ⁿ ρ - C_n(t)) = 0

is satisfied, where α and the γ_n are Lagrange multipliers, and E(t) and C_n(t) are the time-varying values of the energy and Casimirs. Substituting this condition into (<ref>), we relate the time derivative of entropy to that of energy and the Casimirs:

dS/dt = -α(t) dE(t)/dt - ∑_{n=1}^{N} γ_n(t) dC_n(t)/dt.

This equation says that if the entropy is maximal under some constraints, then the evolution of the entropy can be entirely explained by the evolution of the quantities constraining it. We can split the time evolution of the Lagrange multipliers into temporal means and fluctuations, such that α = ᾱ + α′, and similarly for the other Lagrange multipliers. Assuming that the deviations in the Lagrange multipliers are small, which we will address in more detail in due course, and integrating (<ref>), we can write an approximate relation for the time evolution of entropy in terms of the time evolution of energy and the Casimirs, giving

S(t) ≈ -ᾱ E(t) - ∑_{n=1}^{N} γ̄_n C_n(t) + K,

where K is a constant of integration. This expression relies on two assumptions. Firstly, the entropy is maximized, constrained by the values of the energy and the Casimirs, at any point in time; in other words, the system is in a quasi-equilibrium state where the time scale for changes in the balanced quantities is longer than the time the eddies take to drive the system to equilibrium. Secondly, the fluctuations in the Lagrange multipliers (the sensitivities of the entropy) are small. We now turn to testing the relation, (<ref>), in order to test these assumptions.

§.§ Reconstruction of entropy evolution

We can test equation (<ref>) by regressing the diagnosed time evolution of entropy onto the time evolution of energy and the other conserved quantities, the Casimirs, and comparing the reconstructed entropy time series against the diagnosed entropy time series (a minimal sketch of this regression diagnostic is given below). It was found that this procedure does not give a good reconstruction when the spin up is included in the time series; however, there is good agreement when we only consider the statistically steady state. It is likely that the time dependence of the Lagrange multipliers is large during spin up but not in the statistically steady state. An example is given in Figure <ref> for simulation FD_6. Figure <ref> shows the reconstruction of the entropy time series using only the first two Casimirs, circulation and enstrophy, in addition to the energy, as well as using Casimirs up to tenth order. The correlation for the first two Casimirs is 0.73, and when ten Casimirs are used the correlation is 0.93. This is a striking agreement and provides evidence that the turbulence acts to maximize entropy, according to equation (<ref>), at each point in time in the statistically steady state. The analysis has been repeated for all forced-dissipative simulations FD_1,...,FD_7, including differing numbers of Casimirs.
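As a hedged sketch of the regression just described (our illustration; S, E and C stand for time series diagnosed from a run in statistically steady state):

import numpy as np

def reconstruct_entropy(S, E, C):
    """Least-squares fit of S(t) ≈ -a E(t) - Σ_n g_n C_n(t) + K.
    S, E: arrays of shape (nt,); C: array of shape (nt, N) of Casimirs."""
    A = np.column_stack([E, C, np.ones(len(S))])   # regressors: E, C_1..C_N, constant
    coeffs, *_ = np.linalg.lstsq(A, S, rcond=None)
    S_rec = A @ coeffs
    corr = np.corrcoef(S, S_rec)[0, 1]
    # minus the fitted coefficients estimate the time-mean Lagrange multipliers
    alpha_bar, gamma_bar, K = -coeffs[0], -coeffs[1:-1], coeffs[-1]
    return S_rec, corr, alpha_bar, gamma_bar, K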
Figure <ref>a shows the correlation between the reconstructed and diagnosed entropy time series as a function of the number of Casimirs for FD_1,...,FD_7. We see a marked increase in the correlation with increased numbers of Casimirs. This shows the importance of higher order Casimirs in this statistical mechanics approach. It is also apparent that odd power Casimirs do not contribute significantly to increasing the correlation.

We can repeat this analysis, now calculating the entropy along instantaneous streamfunction contours. This means that the macro-cells defined in Section <ref> become contours of streamfunction rather than the zonal bands used up to now. The entropy will change as a consequence of this transformation, but the energy and Casimirs will not; this implies that it is only the projection in (<ref>) of the entropy evolution onto the energy and Casimirs that will change. The correlation of the reconstructed entropy with the diagnosed entropy for this choice of macro-cell is shown in Figure <ref>b. The main point to note here is that the correlation is higher for fewer Casimirs. The fourth order Casimir seems to be of particular importance, with all simulations having a correlation of greater than 0.7 if four Casimirs are used. This agrees with the observation of <cit.> that viewing potential vorticity distributions in a more Lagrangian sense acts to simplify the statistics, requiring a reduced number of moments to describe the probability distribution. The particular importance of the fourth order Casimir here is in contrast to the study of <cit.>, who found that, in their numerical set-up, no Casimir above order three was relevant; this indicates that the particular Casimirs which are most important depend heavily on the particulars of any given system (i.e. domain geometry and nature of forcing).

The fact that it is possible, to a large extent, to reconstruct the statistically steady state time series for entropy using the corresponding dynamically balanced quantities of the flow is tantalizingly suggestive that, in the statistically steady state of a forced-dissipative flow, the eddies push the system into a quasi-equilibrium. That is, although not in a true equilibrium (ideal flow with maximal entropy), the rate at which eddies push the entropy to its maximum allowed value is much faster than the time scales over which the conserved quantities are changing. As we saw in Sections <ref> and <ref>, the eddies can act as a source or a sink of entropy. Indeed, the fact that the balanced quantities such as energy (Figure <ref>b) fluctuate at all is due to the turbulence; if there were no non-linearities then we would have steady flow and no fluctuations. We suggest that the eddies play a double role, simultaneously maintaining the quasi-equilibrium and modulating its constraints. Further work, over a wider range of parameters, is required to obtain firmer evidence for the maximum entropy principle at work.

§.§ Solving for the Lagrange multipliers

Solving the variational problem, (<ref>), gives us a probability distribution in terms of a set of unknown Lagrange multipliers; in this section we describe the method for determining these from knowledge of the energy and Casimirs of the flow. To determine the Lagrange multipliers it is necessary to solve the non-linear simultaneous equations

-2E = -∂/∂α ∫ d²x ln 𝒵(α, γ_1,...,γ_N)

for the energy constraint, and

C_n = -∂/∂γ_n ∫ d²x ln 𝒵(α, γ_1,...,γ_N)

for each Casimir constraint.
Here 𝒵 is the local normalization, or partition function, of the probability distribution, given as

𝒵 = ∫ dq̃ exp( −α⟨ψ⟩q̃ − ∑_{n=1}^{N} γ_n q̃^n ).

This problem is numerically difficult and its solution is not tackled in this study. However, we can proceed by reducing the dimensionality of the problem. Ironically, this is achieved by first considering the case of infinite dimensions. Constraining the entropy of the flow by the first N polynomial Casimirs is a truncated version of the exact constraint. To constrain by all Casimirs of the flow we constrain by the global potential vorticity distribution discussed in Section <ref>. The constraint is given by

Π(q̃) = ∫ d²x ρ(q̃|𝐱),

and the Lagrange multiplier becomes a function of q̃, γ(q̃). The corresponding variational problem now gives the solution

ρ(q̃|𝐱) = (1/𝒵) exp( −α⟨ψ⟩q̃ − γ(q̃) )

for the probability distribution. Substituting (<ref>) into (<ref>) we obtain the expression

Π(q̃) = e^{−γ(q̃)} ∫ d²x e^{−α⟨ψ⟩q̃}/𝒵.

The integral here is a function of potential vorticity only, and we can write γ(q̃) in terms of Π and the integral. This allows us to eliminate the Lagrange multiplier corresponding to the Casimir constraint, leaving us with only α to find. Eliminating γ from (<ref>), we obtain the probability distribution in terms of the Lagrange multiplier, α, and the global distribution, Π:

ρ(q̃|𝐱) = (1/𝒵(𝐱)) Π(q̃) e^{−α⟨ψ⟩q̃} / ( ∫ d²x′ e^{−α⟨ψ(𝐱′)⟩q̃}/𝒵(𝐱′) ).

A numerical method can be constructed for the two-dimensional problem of optimizing the values of α and 𝒵 simultaneously, thus finding the maximum entropy distribution from only knowledge of global quantities.
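This optimization can be organized as an inner fixed-point iteration for the normalization 𝒵(𝐱) at fixed α, wrapped in an outer one-dimensional search over α. The following is a minimal sketch, assuming the domain has been divided into equal-area cells carrying the diagnosed mean streamfunction and that Π has been discretized as probability masses on a q̃ grid; all function and variable names are illustrative.

```python
import numpy as np

def max_entropy_pdf(psi, q, Pi, alpha, n_iter=500, tol=1e-12):
    """Fixed-point iteration for rho(q|x) = Pi(q) e^{-alpha psi(x) q} / (Z(x) D(q)),
    where D(q) is the spatial mean of e^{-alpha psi q}/Z and Z(x) normalizes
    rho at each cell.  psi: (M,) mean streamfunction on M equal-area cells;
    q: (K,) potential-vorticity grid; Pi: (K,) global PV masses summing to one."""
    w = np.exp(-alpha * np.outer(psi, q))        # e^{-alpha <psi> q}, shape (M, K)
    Z = np.ones(len(psi))
    for _ in range(n_iter):
        D = np.mean(w / Z[:, None], axis=0)      # spatial integral of w / Z
        Z_new = (w * (Pi / D)).sum(axis=1)       # enforce row normalization
        if np.max(np.abs(Z_new / Z - 1.0)) < tol:
            Z = Z_new
            break
        Z = Z_new
    D = np.mean(w / Z[:, None], axis=0)
    return (w * (Pi / D)) / Z[:, None]           # rows are distributions over q

# An outer bracketing root-find on alpha (matching the mean-field energy
# computed from this rho to the diagnosed E) completes the two-parameter
# optimization described above.
```

At the fixed point the rows of the returned array sum to one and their spatial mean recovers Π, so both the normalization and the global-distribution constraint are satisfied.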
The results of this methodology follow. §.§ Reconstruction of statistics In this section we reconstruct the statistics from the maximum entropy distribution, (<ref>), by optimizing for the Lagrange multiplier, α. The resulting values for α in simulations FD_1,...,FD_7 are shown in Figure <ref>. We see that the value of α has a strong dependence on the strength of the wind stress and on time. Figure <ref>a shows the evolution of α during spin-up: we see a strong reduction in the value of α at short times, with the value of α fluctuating about a statistically steady state value for long times. In steady state the fluctuations around the time-mean value are very small; this supports the assumption made in Section <ref> to derive equation (<ref>). Also shown is the dependence of the time-mean value of α on the wind stress strength in Figure <ref>b. The steady state sensitivity of entropy to energy decreases drastically with the strength of the wind stress, suggesting that the entropy of the system becomes insensitive to perturbations in energy at high wind stress. Interpretations of the Lagrange multiplier, α, come with a caveat: the numerically determined value of α is only as accurate as the maximum entropy hypothesis and, in particular, the mean field approximation for the energy. To test the accuracy of these devices we compare the reconstructed statistics from the distribution, (<ref>), and the diagnosed statistics from the numerical simulations. Figure <ref> shows diagnostics comparing the maximum entropy distribution evaluated using (<ref>) with the statistics diagnosed from the numerical simulation itself, that is, the `truth'. In short, despite displaying qualitative agreement with the simulations, the maximum entropy distribution fails to quantitatively capture the statistics of simulations FD_1,...,FD_6. Figure <ref>a shows a comparison between the reconstructed and diagnosed ⟨q⟩-⟨ψ⟩ relation for simulation FD_6. The maximum entropy reconstruction shows good qualitative agreement but seems smoothed compared to the diagnosed relationship and fails to capture all the inflection points. Also for simulation FD_6, Figure <ref>b compares the SD(q)-⟨ψ⟩ relation, and we see that the quantitative agreement is poorer. The standard deviation is underestimated in the centre of the channel while being overestimated in the flanks. Figure <ref>c compares the probability distribution, predicted and diagnosed, in the centre of the channel for simulation FD_6. Despite capturing the trimodal nature of the distribution we can see that the maximum entropy distribution overestimates the weight of the central peak compared to the side peaks. As well as these comparisons, higher-order moments such as the skewness and kurtosis were also considered (not shown here for brevity). Again, despite giving qualitatively consistent features, the skewness and kurtosis displayed considerable quantitative deviations from the simulations and were, for example, highly overestimated in the flanks. Many of the ways in which the maximum entropy statistics deviate from the simulations can be interpreted as a degradation of the strong persistent mixing barrier which the simulations FD_1,...,FD_6 exhibit. The statistical nature and consequences of this mixing barrier have been extensively studied, for this model, in <cit.>. In summary, the maximum entropy distribution, (<ref>), fails to quantitatively reproduce the simulated statistics despite the suggestive evidence for the entropy being maximized presented in Section <ref>. The derivation of the maximum entropy distribution relies on two main assumptions of the equilibrium (Miller-Robert-Sommeria) statistical mechanics: a) maximization of entropy; and b) the mean field approximation. We argue that the failure of the maximum entropy distribution presented in this numerical experiment can be attributed to the break-down of the mean field approximation in a forced-dissipative system. This will be discussed in detail in the following section.§ THE MEAN-FIELD APPROXIMATIONA possible reason for the lack of success of the predicted probability distribution function, (<ref>), is that the maximum entropy principle is not at work. Nevertheless, we believe that the analysis presented in Section <ref> provides sufficient evidence to look for other reasons why (<ref>) fails to quantitatively capture the statistics. In this section we consider the mean field approximation of Miller-Robert-Sommeria equilibrium statistical mechanics. The energy of the flow is given as

E[q] = −(1/2) ∫ d𝐱 ψ(𝐱) (q(𝐱) − βy);

because of the presence of ψ it is not clear how to make the necessary substitution to write E as a functional of ρ, which would allow us to tackle this constraint analytically. We rewrite the energy as

E[q] = −(1/2) ∬ d𝐱 d𝐱′ (q(𝐱) − βy) G(𝐱,𝐱′) q(𝐱′),

where G(𝐱,𝐱′) is the Green's function of the differential operator defined by q = ∇²ψ + βy. Now, by swapping the potential vorticity field for its average value, and defining ⟨q̃⟩ ≡ ∇²⟨ψ⟩ + βy, we get

E_M[ρ] = −(1/2) ∫ d𝐱 ⟨ψ⟩ (⟨q̃⟩ − βy).

This step is the mean-field approximation, often used in models of condensed matter physics. In essence, we are saying that, rather than considering the interaction energy between all pairs of potential vorticity patches, we consider that each patch of potential vorticity only feels the mean effect of all other patches.
For an ideal fluid in statistical equilibrium, the mean-field approximation ceases to be approximate and becomes exact due to the non-local aspect of the Green's function, G <cit.>. However, this necessitates two other properties of ideal equilibria: a) that the mean eddy potential vorticity flux is zero, ∇·⟨𝐮'q'⟩ = 0; and b) that neighbouring macro-cells of the flow are uncorrelated. These properties are fundamentally linked with the mean field approximation and are manifestly not satisfied in a forced-dissipative statistically steady state. Therefore, we suggest that while the maximization of entropy might be a useful organizing principle in forced-dissipative flow, the mean-field approximation remains only a crude approximation. We propose two potential avenues for future study to tackle this problem. * It may be possible to find a coarse-graining (i.e., macro-cells) which is partially Lagrangian, to reduce the difference between E[q(𝐱)] and E_M[ρ(q̃|𝐱)]. This is the approach taken in <cit.>, which produces good agreement between experiment and equilibrium theory by moving into a frame of reference travelling at the phase speed of a large-scale Rossby wave. However, this is unlikely to work in the presence of multiple wave modes, as is the case with simulations FD_1,...,FD_6; see the discussion in <cit.>.* Alternatively, we speculate that a perturbation method could be applied to the mean-field approximation to yield a more realistic probability distribution. However, precisely how this might be achieved remains to be determined and is the subject of future work.Indeed, it may be that a combination of the two approaches will provide the means of deriving forced-dissipative statistics from the maximum entropy principle.§ CONCLUSIONIn this study we have shown how an eddy mixing entropy can be used as a measure of turbulent disorder. By deriving the influence of forcing and linear drag, we were able to use entropy to describe the turbulence in freely-decaying and forced-dissipative flows. The evolution of entropy describes the three stages of the eddy life cycle and eddy-mean interaction: growth of instability, formation of large-scale coherent structures, and steady state fluctuations. In particular, the eddy production of entropy, which has been the focus of much theoretical inquiry, can be explicitly computed from data. This will inform work on stochastic parametrization by describing the disorder in a turbulent jet in a way that links to both statistical physics and information theory.The relationship between the temporal evolution of entropy and the maximum entropy principle was considered in Section <ref>. Under the assumption of maximum entropy it was found that the time evolution of entropy was set by the time evolution of its constraints. Suggestive evidence was found that the entropy is maximized in the model simulations considered in this study. It is clear that if a variational problem can be used to infer the statistics then the number of Casimir constraints has to be large. With this evidence for the maximum entropy principle being a physically meaningful candidate for describing the behaviour of turbulence in the system studied here, we considered the problem of inferring the sub-grid scale statistics. This is equivalent to inferring the Lagrange multipliers, used in the maximum entropy variational problem, from the constraints applied.
We presented the mathematical formulation of this problem in Section <ref> and showed how the dimensionality could be reduced given knowledge of the global potential vorticity distribution. Further, we reconstructed the maximum entropy statistics from knowledge of the energy, the global potential vorticity distribution, and the zonal mean streamfunction as functions of time. We find that although the maximum entropy statistics reproduce qualitatively representative features of the flow, quantitative agreement is lacking, especially for higher-order statistical moments. In Section <ref>, the mean-field approximation was discussed as a potential culprit for the quantitative disagreement and avenues for future investigation were proposed.In this study we have presented eddy-mixing entropy as both a descriptive tool and a dynamically balanced quantity in a barotropic turbulent jet. We have also demonstrated the relationship between the statistical mechanics of forced-dissipative flow and well-known globally balanced quantities such as the energy and enstrophy of the flow. In doing so we were able to provide evidence for the maximum entropy principle in a forced-dissipative system, where the eddies act to maximize entropy under time-varying constraints. The question of the usefulness of statistical mechanics theories, such as the Miller-Robert-Sommeria theory, in understanding the statistically steady states of ideal two-dimensional and geophysical turbulence has received much attention <cit.>. By explicitly considering the evolution of eddy mixing entropy in a forced-dissipative model we are able to demonstrate the importance and utility of eddy mixing entropy in the study of forced-dissipative geophysical turbulence, opening the door to revisiting the application of statistical mechanics to ocean mesoscale eddy parameterizations <cit.>. This work is funded by the UK Natural Environment Research Council. § NUMERICAL COMPUTATION OF ENTROPY It is important to note the difference between the discrete, or Shannon, entropy

S = −∑_i ρ_i ln ρ_i,

and the continuous, or differential, entropy we use in this study,

S = −∫ dx ρ(x) ln ρ(x).

One clear difference between these two entropies is that the discrete entropy is never less than zero, whereas the continuous entropy can be negative, and indeed tends to negative infinity in the asymptotic limit of a delta function. This means that we need to be careful when numerically evaluating an estimator for the continuous entropy. Naïvely using the standard discrete approximation for the integral in (<ref>) leads to calculating a quantity proportional to the discrete entropy, (<ref>). To find an approximation for (<ref>), we must evaluate the quantity

S ≈ −Δx ∑_i ρ_i ln ρ_i + ln Δx,

where ρ_i is a histogram approximation to the distribution ρ(x). However, this method of approximation was found to be biased and would have introduced a systematic error into the results presented in this study. Instead, we used a sample-spacing estimator for the distribution, leading to an improved numerical approximation for the continuous entropy <cit.>. The sample-spacing estimator relies on the idea that when the data are ordered, from smallest to largest value, the reciprocal of the difference between two samples separated by m spaces is an estimator for the probability density.
Using this, the following expression is found for the entropy:

S ≈ (1/(N−m)) ∑_{i=1}^{N−m} ln( ((N+1)/m) (x^{(i+m)} − x^{(i)}) ).

Here, N is the number of samples; m is the spacing size; and x^{(i)} represents the i-th ordered sample. This method was used to evaluate the entropy throughout this study and is found to be considerably better than more naïve methods. The simpler methods proved to have a strong dependence on the choice of histogram bin width, rendering them unusable for quantitative comparison with theory.
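For concreteness, the m-spacing estimator in the final equation can be implemented in a few lines. The following Python sketch is illustrative; the default choice m ≈ √N is a common heuristic rather than a prescription taken from this study.

```python
import numpy as np

def spacing_entropy(samples, m=None):
    """m-spacing estimator of differential entropy:
    S ~ (1/(N-m)) * sum_i log((N+1)/m * (x_(i+m) - x_(i)))."""
    x = np.sort(np.asarray(samples, dtype=float))
    N = len(x)
    if m is None:
        m = max(1, int(np.sqrt(N)))                # heuristic spacing choice
    gaps = x[m:] - x[:-m]                          # x_(i+m) - x_(i), i = 1..N-m
    gaps = np.maximum(gaps, np.finfo(float).tiny)  # guard against repeated samples
    return np.mean(np.log((N + 1) / m * gaps))
```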
http://arxiv.org/abs/1702.08041v1
{ "authors": [ "Tomos W. David", "Laure Zanna", "David P. Marshall" ], "categories": [ "physics.flu-dyn", "cond-mat.stat-mech", "physics.ao-ph" ], "primary_category": "physics.flu-dyn", "published": "20170226144746", "title": "Eddy-mixing entropy as a measure of turbulent disorder in barotropic ocean jets" }
pba@institutoptique.fr
Laboratoire Charles Fabry, UMR 8501, Institut d'Optique, CNRS, Université Paris-Saclay, 2 Avenue Augustin Fresnel, 91127 Palaiseau Cedex, France
Université de Sherbrooke, Department of Mechanical Engineering, Sherbrooke, PQ J1K 2R1, Canada
44.10.+i, 05.45.-a, 05.60.-k
A memristor is one of the four fundamental two-terminal elements in electronics. Alongside the resistor, the capacitor and the inductor, this passive element relates electric charge to flux in solid-state elements. Here we report the existence of a thermal analog of this element made with metal-insulator transition materials. We demonstrate that these memristive systems can be used to create thermal neurons, opening the way to neuromorphic networks for smart thermal management and information processing.Thermal memristor and neuromorphic networks for manipulating heat flow Philippe Ben-Abdallah====================================================================== For almost two centuries it was accepted that only three fundamental passive elements, the resistor, the capacitor and the inductor, were the building blocks relating voltage v, current i, charge q, and magnetic flux φ in solid elements. However, in 1971 Chua <cit.> envisioned, through symmetry arguments, the existence of another fundamental element, the memristor, a two-terminal non-linear component relating electric charge to flux in electronic circuits. In 2008 Strukov et al. <cit.> showed, using tunable doped metal-oxide-semiconductor films, that this vision was correct. The basic mathematical modelling of a memristive system typically takes the following form:

v = R(x,w) i,
dx/dt = f(x,w),

where x is a state variable and R is a generalized resistance which depends on this variable and on either the voltage (i.e. w=v for a voltage-controlled memristor) or on the current (i.e. w=i for a current-controlled memristor). The distinction between memristive systems and arbitrary dynamical systems is the fact that the voltage v (output) is always zero when the current i (input) is zero, resulting in zero-crossing Lissajous v-i curves. In this Letter we extend this concept to heat transport by conduction, and we explore the possibilities offered by thermal memristive systems to manage heat exchanges and to process information with heat rather than with electric currents, as suggested by Li et al. <cit.>. The basic system we consider is sketched in Fig. 1-a. It is a cylindrical wire of radius r and length l ≫ r made of vanadium dioxide (VO_2), a metal-insulator transition (MIT) material. This wire is in contact at its two extremities with two thermal reservoirs at temperatures T_L and T_R < T_L, respectively. The MIT material is able to change its thermal conductivity following a hysteresis curve (see Fig. 1-b) with respect to the temperature around a critical temperature T_c = 341 K. Beyond T_c the wire tends to become metallic (amorphous) and for sufficiently high temperatures its thermal conductivity is Λ_m ≈ 6 W.m^-1.K^-1. In contrast, below T_c, VO_2 tends to be crystalline (i.e. insulating) and Λ_d ≈ 3.6 W.m^-1.K^-1 at sufficiently low temperature. However, the evolution between these two extreme values follows a hysteresis loop <cit.> with respect to the temperature.
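To make the defining relations v = R(x,w)i, dx/dt = f(x,w) concrete, the following minimal sketch integrates a current-controlled memristive system under a sinusoidal drive. The linear state model used here is an illustrative HP-style example in the spirit of Strukov et al., not a description of any specific device in this Letter; all parameter values are assumed.

```python
import numpy as np

def simulate_memristor(t, current, R_on=100.0, R_off=16e3, k=1e4):
    """Euler integration of v = R(x) i, dx/dt = k*i (current-controlled,
    w = i), with the internal state x clipped to [0, 1]."""
    x, dt = 0.5, t[1] - t[0]
    v = np.empty_like(t)
    for n, tn in enumerate(t):
        i = current(tn)
        v[n] = (R_on * x + R_off * (1.0 - x)) * i  # generalized resistance R(x)
        x = np.clip(x + dt * k * i, 0.0, 1.0)      # state update f(x, i) = k*i
    return v

t = np.linspace(0.0, 2e-2, 4000)
i_drive = lambda tn: 1e-4 * np.sin(2 * np.pi * 100 * tn)
v = simulate_memristor(t, i_drive)
# Plotting v against i_drive(t) traces the zero-crossing (pinched) Lissajous
# curve: v vanishes whenever i does, the signature of a memristive system.
```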
Recent works have demonstrated that the thermal bistability of these MITs can be exploited to store thermal information <cit.>. By changing the temperature gradient along the wire, the phase front moves along the wire so that its thermal resistance R_th changes with respect to the temperature difference ΔT = T_L − T_R between the two reservoirs. In the steady-state regime and without convection on the external surface of the wire, heat conduction obeys the following equation:

d/dx ( Λ(T(x)) dT(x)/dx ) = 0.

By applying a Kirchhoff transformation

W(x) = ∫_{T_R}^{T(x)} Λ(T̃) dT̃

to the thermal conductivity Λ, it is straightforward to show that the temperature profile along the wire is a solution of the following equation:

∫_{T_R}^{T(x)} Λ(T̃) dT̃ = ∫_{T_R}^{T_L} Λ(T̃) dT̃ (1 − x/l).

Using the piecewise decomposition of the conductivity with respect to the temperature,

Λ(T) = Λ_d for T < T_1^(i), Λ(T) = a_i T + b_i for T_1^(i) < T < T_2^(i), Λ(T) = Λ_m for T > T_2^(i),

an explicit expression for the temperature profile can be derived <cit.> from (<ref>). In this decomposition, the subscript i refers to the heating (i=1) or the cooling (i=2) phase. The flux φ flowing across a wire of section S is related to the temperature difference ΔT between the two reservoirs and to the thermal resistance R_th by the simple system of equations

ΔT = R_th(x_1^(i), x_2^(i), ΔT) S φ,
dx_j^(i)/dt = g^(j)(ΔT), j = 1,2,

where the state variables x_1^(i) and x_2^(i) represent here the locations along the wire below (resp. beyond) which the MIT material becomes metallic (resp. insulating) during the heating or cooling phase. The g^(j) denote the functions which drive the location of the phase front inside the wire. Note that, depending on the thermal boundary conditions applied on the wire, these points can potentially be located outside of the wire (see <cit.>). The thermal resistance is obtained by summing the series resistances R_1^(i) along the wire below x_2^(i), R_2^(i) between x_2^(i) and x_1^(i), and R_3^(i) beyond x_1^(i). Accordingly,

R_th(x_1^(i), x_2^(i), ΔT) = ∑_j R_j^(i)

with R_1^(i) = x_2^(i)/(π r² Λ_m), R_2^(i) = (1/(π r²)) ∫_{max(x_2^(i),0)}^{min(x_1^(i),l)} dx/(a_i T(x) + b_i), and R_3^(i) = (l − x_1^(i))/(π r² Λ_d). Notice that these resistances can vanish depending on the values of the state variables x_1^(i) and x_2^(i).In Fig. 2(a) are plotted the temperature profiles along a VO_2 wire under a temperature gradient during the cooling and heating steps. Contrary to a classical conduction process, those temperature profiles are not linear, because of the temperature dependence of the thermal resistance, as shown in Fig. 2(b). This nonlinearity also results in a nonlinear variation of the flux crossing the wire (Fig. 2(c)) with respect to the temperature difference applied across it. Following an approach similar to Chua's work <cit.>, based on symmetry arguments, we can derive the equivalence rules between the electronic and the thermal problems for the fundamental quantities:

φ ↔ i, T ↔ v, R_th = dT/dφ ↔ R, q_th = ∫ φ dt ↔ q,

where q_th and q denote the thermal charge, obtained by time integration of the heat flux, and the classical electric charge, respectively. It follows that the thermal capacity can be defined as

C_th = dq_th/dT = φ (dT/dt)^{-1}.

As for the thermal analog of the magnetic flux, it is given by

Φ_th = ∫ T dt.

Finally, the thermal memristance ("the missing element") reads

M_th = dΦ_th/dq_th = (dΦ_th/dt)(dq_th/dt)^{-1} = T/φ.
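Since the steady-state flux follows from the Kirchhoff transformation as φ = (S/l) ∫_{T_R}^{T_L} Λ(T) dT, the hysteretic φ-ΔT characteristic can be evaluated numerically once the two branches of Λ(T) are specified. In the sketch below the limiting conductivities are those quoted above for VO_2, while the onset and completion temperatures of the transition on each branch are assumed values chosen for illustration only.

```python
import numpy as np

LAM_D, LAM_M = 3.6, 6.0           # W m^-1 K^-1, insulating / metallic limits
T_ON = {1: 335.0, 2: 330.0}       # assumed transition onsets: heating (1), cooling (2)
T_OFF = {1: 347.0, 2: 342.0}      # assumed transition completions

def lam(T, branch):
    """Piecewise-linear Lambda(T) on the chosen hysteresis branch."""
    a = (LAM_M - LAM_D) / (T_OFF[branch] - T_ON[branch])
    return np.where(T < T_ON[branch], LAM_D,
                    np.where(T > T_OFF[branch], LAM_M,
                             LAM_D + a * (T - T_ON[branch])))

def flux(T_L, T_R, radius, length, branch, n=2000):
    """phi = (S/l) * integral of Lambda(T) dT, from d/dx(Lambda dT/dx) = 0."""
    T = np.linspace(T_R, T_L, n)
    return np.pi * radius**2 / length * np.trapz(lam(T, branch), T)
```

Sweeping T_L up on branch 1 and down on branch 2 at fixed T_R reproduces a hysteretic flux-bias loop of the kind discussed next.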
A key point for a memristor is its operating mode under a transient excitation. Provided the timescale at which the boundary conditions vary is large enough compared with the relaxation time of the temperature field inside the wire itself, the variation of the flux crossing the system can be calculated from relation (<ref>). In typical solids, the thermalization timescale varies from a few picoseconds at the nanoscale (phonon relaxation time) to a few microseconds at the microscale (diffusion time t ∼ l²/α, α being the thermal diffusivity).The application of an external bias ΔT(t) across the system moves the positions of the state variables x_1^(i) and x_2^(i), causing a time evolution of the thermal resistance R_th.In Fig. 3 we show the time evolution of this flux for a sinusoidal variation of ΔT(t) with a period t_0 large enough compared to the relaxation time of the memristor. Since the effective conductance (resp. resistance) of the memristor increases (resp. decreases) when the system switches from the heating to the cooling phase, we observe a significant enhancement of the heat flux (up to 25%) flowing through the memristor each time T_L decays. This sharp variation of physical properties can be exploited to design basic neuronal circuits and to perform logical operations with thermal signals. Recent works have demonstrated the possibility of making such logical operations with acoustic phonons <cit.> or thermal photons <cit.> by using phononic and photonic counterparts of diodes and transistors.Here, we demonstrate that memristive systems can be an alternative to these systems. To show this, let us consider a simple neuron <cit.> made with two memristors connected to the same node (output), as sketched in Fig. 4. One temperature (T_1) is used as an input signal while the second temperature T_2 plays the role of a simple bias and is held at a fixed value. The time variation of the first input (related to the power added to or extracted from the solid in contact with the left side of the first memristor) sets the second input parameter. Depending on its sign (i.e. heating or cooling process) this parameter can be treated as a binary parameter, as shown in the truth table in Fig. 4. Then, the output temperature T_3 is obtained with respect to T_1 and T_2 by solving, in the steady-state regime, the energy balance equation

∑_{j≠3} R_th^{-1}(T_3, T_j)(T_j − T_3) = 0.

According to the change in the two input signals, a sharp transition for the output can be observed (Fig. 4). This transition occurs precisely when the memristors switch from their heating to their cooling operating mode.The horizontal lines in Fig. 4, given by |T_1 − T_2| = T_1 and |T_3 − T̄| = T_3 (where T̄ = (T_{3,min} + T_{3,max})/2), allow us to define the 0 and 1 states for the gate. By using the truth table shown in Fig. 4 it appears that this simple neuron operates as an AND gate. Beyond this logical operation, more complex neuronal architectures, where the memristors are used as on/off temperature-dependent bistable switches, can be designed to implement other Boolean operations.
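The steady-state output of the two-memristor neuron can be obtained by bracketing the root of the energy balance equation above, since the net flux into the output node is positive near the colder input and negative near the hotter one. The sketch below assumes a user-supplied resistance model R_th(T_a, T_b, branch), for instance built from the flux computation given earlier; it is illustrative rather than the actual implementation used for Fig. 4.

```python
from scipy.optimize import brentq

def output_temperature(T1, T2, R_th, branches):
    """Solve sum_{j != 3} (T_j - T_3) / R_th(T_3, T_j, branch_j) = 0 for T_3.
    `branches` holds the heating/cooling state of each memristor; T1 != T2."""
    def balance(T3):
        return sum((Tj - T3) / R_th(T3, Tj, b)
                   for Tj, b in zip((T1, T2), branches))
    lo, hi = min(T1, T2), max(T1, T2)   # T_3 lies between the two inputs
    return brentq(balance, lo + 1e-9, hi - 1e-9)
```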
To summarize, we have introduced the concept of phase-change thermal memristors and shown that they constitute a fundamental building block for the implementation of basic logical operations with neurons entirely driven by heat. The relaxation dynamics of these memristors, combined with the massive parallelism of neuronal networks, make these systems promising both for thermal computing and for active thermal management on sub-microsecond time scales. §SUPPLEMENTARY MATERIALIn the supplementary material we give the analytical expression of the Γ parameter and of the temperature field inside a memristor with respect to the temperature difference applied across it.§ACKNOWLEDGMENTSP.B.-A. acknowledges discussions with Dr I. Latella. Chua L. O. Chua, IEEE Trans. Circuit Theory 18, 507-519 (1971). Strukov D. B. Strukov, G. S. Snider, D. R. Stewart and R. S. Williams, Nature 453, 1 May (2008). BaowenLiEtAl2012 N. Li, J. Ren, L. Wang, G. Zhang, P. Hänggi, and B. Li, Rev. Mod. Phys. 84, 1045 (2012). Oh D. W. Oh, C. Ko, S. Ramanathan, and D. G. Cahill, Appl. Phys. Lett. 96, 151906 (2010). BaowenLi3 L. Wang and B. Li, Phys. Rev. Lett. 101, 267203 (2008). Xie R. Xie, C. Ting Bui, B. Varghese, Q. Zhang, C. Haur Sow, B. Li and J. T. L. Thong, Adv. Funct. Mater. 21, 1602 (2011). Slava V. Kubytskyi, S.-A. Biehs and P. Ben-Abdallah, Phys. Rev. Lett. 113, 074301 (2014). DyakovMemory S. A. Dyakov, J. Dai, M. Yan, M. Qiu, J. Phys. D: Appl. Phys. 48, 305104 (2015). SupplMat See EPAPS Document No. [number will be inserted by publisher]. BaowenLi2 L. Wang and B. Li, Phys. Rev. Lett. 99, 177208 (2007). OteyEtAl2010 C. R. Otey, W. T. Lau, and S. Fan, Phys. Rev. Lett. 104, 154301 (2010). BasuFrancoeur2011 S. Basu and M. Francoeur, Appl. Phys. Lett. 98, 113106 (2011). Huang J. G. Huang, Q. Li, Z. H. Zheng and Y. M. Xuan, Int. J. Heat Mass Transfer 67, 575 (2013). Dames Z. Chen, C. Wong, S. Lubner, S. Yee, J. Miller, W. Jang, C. Hardin, A. Fong, J. E. Garay and C. Dames, Nat. Commun. 5, 5446 (2014). PBA_APL P. Ben-Abdallah and S.-A. Biehs, Appl. Phys. Lett. 103, 191907 (2013). Ito K. Ito, K. Nishikawa, H. Iizuka and H. Toshiyoshi, Appl. Phys. Lett. 105, 253503 (2014). van Zwol1 P. van Zwol, K. Joulain, P. Ben-Abdallah, and J. Chevrier, Phys. Rev. B 84, 161413(R) (2011). PBA_PRL2014 P. Ben-Abdallah and S.-A. Biehs, Phys. Rev. Lett. 112, 044301 (2014). McCulloch W. S. McCulloch and W. Pitts, Bull. Math. Biophys. 5, 115 (1943).
http://arxiv.org/abs/1702.08804v2
{ "authors": [ "Philippe Ben-Abdallah" ], "categories": [ "cond-mat.dis-nn", "cond-mat.mes-hall", "cond-mat.soft", "physics.class-ph" ], "primary_category": "cond-mat.dis-nn", "published": "20170226135832", "title": "Thermal memristor and neuromorphic networks for manipulating heat flow" }
Hyperbolic actions and 2nd bounded cohomology of subgroups of Out(F_n), Part II: Finite lamination subgroups
Michael Handel and Lee Mosher
The first author was supported by the National Science Foundation under Grant No. DMS-1308710 and by PSC-CUNY under grants in Program Years 46 and 47. The second author was supported by the National Science Foundation under Grant No. DMS-1406376.
December 30, 2023
This is the second part of a two part work in which we prove that for every finitely generated subgroup Γ ⊂ Out(F_n), either Γ is virtually abelian or its second bounded cohomology H^2_b(Γ;ℝ) contains an embedding of ℓ^1. Here in Part II we focus on finite lamination subgroups Γ (meaning that the set of all attracting laminations of elements of Γ is finite) and on the construction of useful hyperbolic actions of those subgroups. § INTRODUCTION In this two part work, we study the second bounded cohomology of finitely generated subgroups of Out(F_n), the outer automorphism group of a free group of finite rank n. The main theorem of the whole work is an "H^2_b-alternative" analogous to similar results for subgroups of mapping class groups and for groups acting in certain ways on hyperbolic spaces <cit.>: For every finitely generated subgroup Γ ⊂ Out(F_n), either Γ is virtually abelian or H^2_b(Γ;ℝ) has an embedded copy of ℓ^1 and so is of uncountable dimension. In Part I, Theorem A was reduced to Theorem C, which is the main result here in Part II. Theorem C explains how to produce useful hyperbolic actions for a certain class of finite lamination subgroups of Out(F_n). To fully state Theorem C we first briefly review the mathematical objects relevant to its statement: hyperbolic actions and their WWPD elements; subgroups of Out(F_n) which have (virtually) abelian restriction to any invariant proper free factor; and the dichotomy of finite lamination subgroups versus infinite lamination subgroups of Out(F_n). Consider a group action G ↷ X on a hyperbolic space X. Recall that γ ∈ G is loxodromic if it acts on the Gromov boundary ∂X with a unique repeller-attractor pair (∂_-γ, ∂_+γ) ∈ ∂X × ∂X - Δ. Recall also that G ↷ X is nonelementary if (following <cit.>) there exist independent loxodromic elements δ, γ ∈ G, meaning that the sets {∂_-δ, ∂_+δ}, {∂_-γ, ∂_+γ} are disjoint.
Given a loxodromic element γ ∈ G, the WWPD property for γ was first defined in <cit.>, and that property has several equivalent reformulations that are collected together in <cit.> and are denoted WWPD (1)–(4). In particular, WWPD (2) says that the G-orbit of the ordered pair (∂_-γ, ∂_+γ) is a discrete subset of the space of distinct ordered pairs ∂X × ∂X - Δ.Let IA_n(ℤ/3) ⊂ Out(F_n) denote the finite index normal subgroup consisting of all outer automorphisms whose induced action on H_1(F_n;ℤ/3) is trivial. A subgroup Γ ⊂ IA_n(ℤ/3) has (virtually) abelian restrictions (Definition <ref>) if for each proper free factor A < F_n whose conjugacy class [A] is fixed by each element of Γ, the natural restriction homomorphism Γ → Out(A) has (virtually) abelian image. As explained in Part I, the property of Γ having (virtually) abelian restrictions plays a role in our theory analogous to the role played by irreducible subgroups of mapping class groups in the theory of Bestvina and Fujiwara.The decomposition theory of Bestvina, Feighn and Handel associates to each ϕ ∈ Out(F_n) a finite set Ł(ϕ) of attracting laminations. Associated to a subgroup Γ ⊂ Out(F_n) is the set Ł(Γ) = ⋃_{ϕ∈Γ} Ł(ϕ). If Ł(Γ) is finite then Γ is a finite lamination subgroup, otherwise Γ is an infinite lamination subgroup. In Part I we reduced Theorem A to two results about subgroups of IA_n(ℤ/3) with (virtually) abelian restrictions: Theorem B concerning infinite lamination subgroups, which was proved there in Part I; and Theorem C concerning finite lamination subgroups, which is proved here in Part II. Each of Theorems B and C concludes with the existence of a hyperbolic action of a finite index normal subgroup possessing a sufficiently rich collection of WWPD elements. The reduction argument in Part I combines those two conclusions with the Global WWPD Theorem of <cit.> to prove Theorem A. Here is the main theorem of this paper:For any finitely generated, non virtually abelian, finite lamination subgroup Γ ⊂ IA_n(ℤ/3) such that Γ has virtually abelian restrictions, there exists a finite index normal subgroup N ⊴ Γ and an action N ↷ X on a hyperbolic space X, such that the following hold: (1) Every element of N acts either elliptically or loxodromically on X; (2) The action N ↷ X is nonelementary; (3) Every loxodromic element of the commutator subgroup [N,N] is a strongly axial, WWPD element with respect to the action N ↷ X. See below, after the Hyperbolic Action Theorem, for the definition of "strongly axial". Remarks: WPD versus WWPD. In lectures on this topic we stated a version of Theorem C (3) with a stronger conclusion, saying that in the group Image(N → Isom(X)), every loxodromic element of the commutator subgroup satisfies WPD. That conclusion requires a stronger hypothesis, saying roughly that "virtually abelian restrictions" holds for a broader class of Γ-invariant subgroups of F_n than just free factors. This makes the proof and the applications of the theorem considerably more intricate, which was not necessary for the application to Theorem A in Part I, and so we have settled for the version of Theorem C presented here. §.§ Methods of proof of Theorem C The first step of the proof of Theorem C will be to reduce it to a theorem which produces hyperbolic actions of certain subgroups of automorphism groups of free groups.
The key step of the reduction argument, carried out in Proposition <ref>, is the construction of "automorphic lifts": for each subgroup Γ ⊂ IA_n(ℤ/3) that satisfies the hypotheses of Theorem C, there exists a free factor A < F_n of rank 2 ≤ k ≤ n-1, such that its conjugacy class [A] is Γ-invariant, and such that the natural homomorphism Γ → Out(A) lifts to a homomorphism Γ → Aut(A) whose image in Aut(A) is not virtually abelian. In Section <ref> we shall show, using an automorphic lift for which the free factor A has minimal rank, how to reduce Theorem C to the following statement, in which we have identified A ≈ F_k and then rewritten k as n. Recall the canonical isomorphism F_n ≈ Inn(F_n) which associates to each γ ∈ F_n the inner automorphism i_γ(δ) = γδγ^{-1}. Consider a subgroup 𝒜 ⊂ Aut(F_n) with n ≥ 2, and denote 𝒬 = Image(𝒜 → Out(F_n)) and J = Ker(𝒜 → 𝒬) = 𝒜 ∩ Inn(F_n), giving the following commutative diagram of short exact sequences (the vertical maps being inclusions):

1 → J → 𝒜 → 𝒬 → 1
1 → F_n ≈ Inn(F_n) → Aut(F_n) → Out(F_n) → 1

If 𝒬 is abelian, if 𝒜 is not virtually abelian, and if no proper, nontrivial free factor of F_n is fixed by the action of 𝒜 on the set of subgroups of F_n, then there exists a finite index normal subgroup N ⊴ 𝒜 and an action N ↷ X on a hyperbolic space X such that the following properties hold: (1) Every element of N acts either elliptically or loxodromically on X; (2) The action N ↷ X is nonelementary; (3) Every loxodromic element of J ∩ N is a strongly axial, WWPD element with respect to the action N ↷ X.To say that a loxodromic element ϕ ∈ G of a hyperbolic action G ↷ X is strongly axial means that it has a strong axis, which is a quasi-isometric embedding ℓ: ℝ → X for which there exists a homomorphism τ: Stab(∂_±ϕ) → ℝ such that for all ψ ∈ Stab(∂_±ϕ) and s ∈ ℝ we have ψ(ℓ(s)) = ℓ(s + τ(ψ)).The proof of the Hyperbolic Action Theorem begins in Section <ref> by reducing to the case 𝒬 ⊂ IA_n(ℤ/3). We then choose a maximal, proper, 𝒬-invariant free factor system ℱ in F_n, and break the proof into cases depending on the "co-edge number" of ℱ, which is the minimum integer k ≥ 1 such that ℱ is represented by a subgraph H ⊂ G of a marked graph G for which the complement G ∖ H has k edges. The "one-edge" case, where the co-edge number of ℱ equals 1, is handled in Section <ref> using an action of 𝒜 on a simplicial tree that is naturally associated to the free factor system ℱ. The "multi-edge" case, where the co-edge number of ℱ is ≥ 2, takes up the majority of the paper from Section <ref> to the end. For a full introduction to the multi-edge case, see Section <ref>. In brief, one finds ϕ ∈ 𝒬 which is fully irreducible relative to the free factor system ℱ, and one uses the top stratum of a good relative train track representative of ϕ to produce a certain hyperbolic suspension space X. The construction of X, the formulation and verification of flaring properties of X, and the proof of hyperbolicity of X by applying the Mj-Sardar combination theorem <cit.>, are found in Sections <ref> and <ref>. Then one constructs an isometric action on X by applying the theory of abelian subgroups of Out(F_n) <cit.>; see Sections <ref>, <ref> and <ref>. The pieces are put together, and the multi-edge case of the Hyperbolic Action Theorem is proved, in Section <ref>. Prerequisites from the theory of Out(F_n). We will assume that the reader is familiar with certain basic topics of Out(F_n) that have already been reviewed in Part I of this paper, with detailed references found there. We list some of these topics here with original citations: <cit.> Marked graphs, topological representatives, and relative train track maps. Free factor systems and attracting laminations.
<cit.> Properties of IA_n(ℤ/3) = Ker(Out(F_n) → GL(n, ℤ/3)). <cit.> Elements and subgroups of Out(F_n) which are fully irreducible relative to a free factor system ℱ of F_n. The co-edge number of a free factor system ℱ in F_n.<cit.> Weak attraction theory. The nonattracting subgroup system 𝒜_na(Λ^±_ϕ) associated to ϕ ∈ Out(F_n) and one of its lamination pairs Λ^±_ϕ. § LIFTING TO AN AUTOMORPHISM GROUP In this section, the first thing we do is to study the structure of finitely generated, finite lamination subgroups Γ ⊂ IA_n(ℤ/3) which are not abelian but have virtually abelian restrictions. The motivating question of this study is: If A < F_n is a proper, nontrivial free factor whose conjugacy class [A] is fixed by Γ, can the natural restriction map Γ → Out(A) be lifted to a homomorphism Γ → Aut(A)? And can this be done so that the image is still not virtually abelian? Sections <ref>–<ref> are devoted to constructions of such "automorphic lifts". Using this construction, in Section <ref> we prove the implication (Hyperbolic Action Theorem) ⟹ (Theorem C). After that, in Section <ref>, we consider any free splitting F_n ↷ T, and we study a natural subgroup of Aut(F_n) which acts on T in a manner that extends the free splitting action of F_n ≈ Inn(F_n). That study is used in Section <ref> to prove one of the two major cases of the Hyperbolic Action Theorem. §.§ Definition of automorphic lifts Recall that for any group G and subgroup H ≤ G which is its own normalizer (e.g. a free factor), letting Stab[H] ⊂ Out(G) be the stabilizer of the conjugacy class of H, the natural restriction homomorphism Stab[H] → Out(H), denoted ϕ ↦ ϕ|H, is well-defined by choosing Φ ∈ Aut(G) representing ϕ and preserving H, and then taking ϕ|H to be the outer automorphism class of Φ|H ∈ Aut(H) (see e.g. <cit.> Fact 1.4).Throughout the paper we use the theorem that virtually abelian subgroups of IA_n(ℤ/3) are abelian <cit.>. We sometimes write "(virtually) abelian" as a reminder that one may freely include or ignore the adverb "virtually" in front of the adjective "abelian" in the context of a subgroup of IA_F(ℤ/3) for any finite rank free group F. A subgroup Γ ⊂ IA_n(ℤ/3) has (virtually) abelian restrictions if for any proper free factor A < F_n such that Γ ⊂ Stab[A], the restriction homomorphism Γ → Out(A) has (virtually) abelian image.Let Γ ⊂ IA_n(ℤ/3) be a subgroup which is not (virtually) abelian and which has (virtually) abelian restrictions. An automorphic lift of Γ is a homomorphism ρ: Γ → Aut(A), where A < F_n is a proper free factor and Γ ⊂ Stab[A], such that the group 𝒜 = Image(ρ) is not virtually abelian, and such that the following triangle commutes:

         Aut(A)
     ρ ↗    ↓
Γ  →  Out(A)

In this diagram the horizontal arrow is the natural restriction homomorphism Stab[A] → Out(A) with domain restricted to Γ, and the vertical arrow is the natural quotient homomorphism Aut(A) → Out(A).To emphasize the role of A we will sometimes refer to an automorphic lift of Γ rel A. Adopting the notation of the Hyperbolic Action Theorem, we set 𝒜 = Image(ρ), 𝒬 = Image(𝒜 → Out(A)) = Image(Γ → Out(A)), and J = Ker(𝒜 → 𝒬) = 𝒜 ∩ Inn(A), thus obtaining the commutative diagram shown in Figure <ref>.We note two properties which follow from the definition: (1) 𝒬 is abelian; (2) The free group J has rank ≥ 2.The first holds because A < F_n is proper and Γ has (virtually) abelian restrictions (see Definition <ref>).
The second is a consequence of the first combined with the defining requirement that 𝒜 is not virtually abelian, for otherwise the free group J is abelian and so 𝒜 is virtually solvable, but Aut(A) injects into Out(F_n) and solvable subgroups of Out(F_n) are virtually abelian by <cit.>. We put no further conditions on the rank of J; it could well be infinite. When referring to J, each of its elements will be thought of ambiguously as an element of the free factor A < F_n or as the corresponding element of the inner automorphism group Inn(A); this ambiguity should cause little trouble, by using the canonical isomorphism A ↔ Inn(A) given by δ ↔ i_δ where i_δ(γ) = δγδ^{-1}. The rank of the automorphic lift Γ → Aut(A) is defined to be rank(A), and note that rank(A) ≥ 2, because otherwise Out(A) is finite, in which case each of its subgroups is virtually abelian. This completes Definition <ref>. Here is the first of two main results of Section <ref>. If Γ ⊂ IA_n(ℤ/3) is a finitely generated, finite lamination subgroup which is not (virtually) abelian but which has (virtually) abelian restrictions, then there exists an automorphic lift Γ → Aut(A).The proof is found in Section <ref>, preceded by Lemma <ref> in Section <ref>. When n=2, one recovers from Proposition <ref> the simple fact that every finite lamination subgroup of Out(F_2) is virtually abelian, for otherwise its intersection with IA_2(ℤ/3) would have an automorphic lift to Aut(A) for some proper free factor A, from which it would follow that 2 ≤ rank(A) ≤ n-1 = 1. Of course this fact has a simple proof, expressed in terms of the isomorphism Out(F_2) → Aut(H_1(F_2;ℤ)) ≈ GL(2,ℤ), which we leave to the reader.§.§ A sufficient condition to be abelian. In this section we prove Lemma <ref> which gives a sufficient condition for a finitely generated, finite lamination subgroup Γ ⊂ IA_n(ℤ/3) to be abelian. The negation of this condition then becomes a property that must hold when Γ is not abelian.Let Γ ⊂ IA_n(ℤ/3) be a finitely generated, finite lamination subgroup. If ℱ = {[A_1],…,[A_I]} is a maximal proper Γ-invariant free factor system, if each restriction Γ|A_i ⊂ Out(A_i) is abelian for i=1,…,I, and if ℱ has co-edge number ≥ 2 in F_n, then Γ is abelian.By Theorem C of <cit.>, there exists η ∈ Γ which is fully irreducible rel ℱ, meaning that there does not exist any η-invariant proper free factor system that strictly contains ℱ. By relative train track theory, there is a unique lamination pair Λ^±_η ∈ Ł^±(η) for η which is not carried by ℱ (<cit.> Section 3; see also Fact 1.55). The nonattracting subgroup system of Λ^±_η has one of two forms: either 𝒜_na(Λ^±_η) = ℱ or 𝒜_na(Λ^±_η) = ℱ ∪ {[C]}, where C < F_n is a certain maximal infinite cyclic subgroup that is not carried by ℱ (see <cit.>; also see <cit.>, Definitions 1.2 and Corollary 1.9).For every nonperiodic line ℓ that is not carried by ℱ, evidently ℓ is not carried by [C], and so ℓ is not carried by 𝒜_na(Λ^±_η). By applying Theorem H of <cit.>, it then follows that ℓ is weakly attracted either to Λ^+_η by iteration of η or to Λ^-_η by iteration of η^{-1}. From this it follows that the two laminations Λ^+_η, Λ^+_{η^{-1}} = Λ^-_η are the unique elements of Ł(Γ) not carried by ℱ, for if there existed ψ ∈ Γ with attracting lamination Λ^+_ψ ∈ Ł(Γ) - {Λ^±_η} not supported by ℱ then the generic lines of Λ^+_ψ would be weakly attracted either to Λ^+_η by iteration of η or to Λ^-_η by iteration of η^{-1}, and in either case the set of laminations η^k(Λ^+_ψ) = Λ^+_{η^k ψ η^{-k}} ∈ Ł(Γ), k ∈ ℤ, would form an infinite set, contradicting that Γ is a finite lamination group.Since Ł(Γ) is finite, for each ϕ ∈ Γ each element of Ł(Γ) has finite orbit under the action of ϕ.
Since Γ ⊂ IA_n(ℤ/3), it follows by <cit.> that each element of Ł(Γ) is fixed by ϕ. In particular, Γ ⊂ Stab(Λ^+_η). By <cit.>, there exists a homomorphism PF_{Λ^+_η}: Stab(Λ^+_η) → ℤ having the property that for each ϕ ∈ Stab(Λ^+_η), the inequality PF_{Λ^+_η}(ϕ) ≠ 0 holds if and only if Λ^+_η ∈ Ł(ϕ). Consider the following homomorphism, the range of which is abelian:

Ω: Γ → ℤ ⊕ Γ|A_1 ⊕ ⋯ ⊕ Γ|A_I,
Ω(ϕ) = PF_{Λ^+_η}(ϕ) ⊕ ϕ|A_1 ⊕ ⋯ ⊕ ϕ|A_I.

We claim that the kernel of Ω is also abelian. This claim completes the proof of the lemma, because every solvable subgroup of Out(F_n) is virtually abelian, and every virtually abelian subgroup of IA_n(ℤ/3) is abelian <cit.>. To prove the claim: by Proposition 5.5 of <cit.>, every subgroup of the group Ker(PF_{Λ^+_η}) consisting entirely of UPG elements is abelian, and so we need only check that each element of Ker(Ω) is UPG. By <cit.>, every PG element of IA_n(ℤ/3) is UPG, and so we need only check that each ϕ ∈ Ker(Ω) is PG, equivalently Ł(ϕ) = ∅. Suppose to the contrary that there exists Λ^+_ϕ ∈ Ł(ϕ), with dual repelling lamination Λ^-_ϕ ∈ Ł(ϕ^{-1}). Since ϕ|A_i is trivial in Out(A_i) for each component [A_i] of ℱ, neither of the laminations Λ^±_ϕ is supported by ℱ. Since Ł(ϕ) ⊂ Ł(Γ), it follows as shown above that {Λ^+_ϕ, Λ^-_ϕ} = {Λ^+_η, Λ^-_η}, and so PF_{Λ^+_η}(ϕ) ≠ 0, a contradiction. §.§ Constructing automorphic lifts: proof of Proposition <ref> Let Γ ⊂ IA_n(ℤ/3) be a finitely generated, finite lamination subgroup that is not (virtually) abelian and has (virtually) abelian restrictions. Choose a maximal proper Γ-invariant free factor system ℱ = {[A_1],…,[A_I]}, so that each restricted group 𝒬_i = Γ|A_i ⊂ Out(A_i) is abelian. Since Γ ⊂ IA_n(ℤ/3), it follows that each component [A_i] of ℱ is fixed by Γ (<cit.>, Lemma 4.2). The group Γ is not abelian by <cit.>, and so by Lemma <ref> the extension ℱ ⊏ {[F_n]} is a one-edge extension. We may therefore choose a marked graph pair (G,H) representing ℱ so that G ∖ H = E is a single edge, and we may choose H so that its components are roses, with each endpoint of E being the rose vertex of a component of H. The number of components of ℱ equals the number of components of H, that number being either one or two, and we cover those cases separately. Case 1: ℱ has two components, say ℱ = {[A_1],[A_2]} where F_n = A_1 ∗ A_2. We construct a commutative diagram consisting of: the quotient map q: Aut(A_1) ⊕ Aut(A_2) → Out(A_1) ⊕ Out(A_2); maps α = α_1 ⊕ α_2: Γ → Aut(A_1) ⊕ Aut(A_2), ρ = ρ_1 ⊕ ρ_2: Γ → Out(A_1) ⊕ Out(A_2), and ω: Γ → Aut(F_n, A_1, A_2); the restriction map r: Aut(F_n, A_1, A_2) → Aut(A_1) ⊕ Aut(A_2); the inclusion Γ ⊂ Out(F_n); and the natural projection Aut(F_n, A_1, A_2) → Out(F_n); in which α = r∘ω and ρ = q∘α. Aut(F_n, A_1, A_2) is the subgroup of Aut(F_n) that preserves both A_1 and A_2. The homomorphism r is induced by restricting Aut(F_n, A_1, A_2) to Aut(A_1) and to Aut(A_2). Evidently r is injective, since an automorphism of F_n is determined by its restrictions to the complementary free factors A_1, A_2. The homomorphisms denoted by the top and bottom arrows of the diagram are induced by canonical homomorphisms from automorphism groups to outer automorphism groups. For i=1,2 the homomorphism ρ_i is the composition Γ ⊂ Stab[A_i] → Out(A_i), where the latter map is the natural restriction homomorphism.We must construct ω, α_1, α_2. We may choose the marked graph pair (G,H) representing the free factor system ℱ to have the following properties: the two rose components H_1, H_2 of the subgraph H have ranks equal to rank(A_1), rank(A_2) respectively; the edge E is oriented and decomposes into oriented half-edges E = E_1 E_2; the common initial point of E_1, E_2 is denoted w; their respective terminal vertices are the rose vertices v_i ∈ H_i; and there is an isomorphism π_1(G,w) ≈ F_n which restricts to isomorphisms π_1(E_i ∪ H_i, w) ≈ A_i for i=1,2.
Given ϕ ∈ Γ, let f: G → G be a homotopy equivalence that represents ϕ, preserves H_1 and H_2, and restricts to a locally injective path on E = E_1 E_2. By Corollary 3.2.2 of <cit.> we have f(E) = u̅_1 E^{±1} u_2 for possibly trivial closed paths u_i in H_i, i=1,2, and in fact the plus sign occurs, and so f(E) = u̅_1 E u_2, because ϕ ∈ Γ. After pre-composing f with a homeomorphism of G isotopic to the identity that restricts to the identity on H_1 ∪ H_2 and that moves the point w ∈ E to f^{-1}(w) ∈ E, we may also assume that f(w) = w, and so f(E_1) = E_1 u_1 and f(E_2) = E_2 u_2. Define α_i(ϕ) ∈ Aut(A_i) ≈ Aut(π_1(E_i ∪ H_i, v_i)) to be the automorphism induced by f|E_i ∪ H_i, and then use the isomorphism F_n = A_1 ∗ A_2 to define ω(ϕ). Note that ω(ϕ) is the unique lift of ϕ ∈ Out(F_n) to Aut(F_n) which preserves A_1 and A_2, because any two such lifts differ by an inner automorphism i_c that preserves both A_1 and A_2, implying by malnormality that c ∈ A_1 ∩ A_2 and so is trivial. It follows from uniqueness that ω, α_1, and α_2 are homomorphisms, and hence so is α. Commutativity of the diagram is straightforward from the construction. The homomorphism ω is injective because it is a lift of the inclusion Γ ⊂ Out(F_n). Since r and ω are injective, by commutativity of the diagram α is also injective. At least one of the two maps α_i: Γ → Aut(A_i) is an automorphic lift because at least one of the corresponding images 𝒜_i = Image(α_i) < Aut(A_i) is not virtually abelian: if both were virtually abelian, then α(Γ) ⊂ 𝒜_1 ⊕ 𝒜_2 would be virtually abelian, but α is injective and so Γ would be virtually abelian, a contradiction. Case 2: ℱ has a single component, say ℱ = {[A]}, and so F_n = A ∗ ⟨b⟩ for some b ∈ F_n. The proof in this case is similar to Case 1, the main differences being that in place of the direct sum we use a fiber sum, and the marked graph pair (G,H) representing ℱ will have connected subgraph H.We shall construct a commutative diagram analogous to that of Case 1, consisting of: the map q: Aut²(A, A^b) → Out(A) defined below; maps α: Γ → Aut²(A, A^b), ρ: Γ → Out(A), and ω: Γ → Aut(F_n, A, A^b); the restriction map r: Aut(F_n, A, A^b) → Aut²(A, A^b); the inclusion Γ ⊂ Out(F_n); and the natural projection Aut(F_n, A, A^b) → Out(F_n); in which α = r∘ω and ρ = q∘α. In this diagram we use the following notations: the conjugate A^b = bAb^{-1}; the restricted inner automorphism i_b: A → A^b where i_b(a) = bab^{-1}; the adjoint isomorphism ad_b: Aut(A) → Aut(A^b) where ad_b(Φ) = i_b ∘ Φ ∘ i_b^{-1}; the canonical epimorphism q_A: Aut(A) → Out(A); and the epimorphism q_{A^b} = q_A ∘ ad_b^{-1}: Aut(A^b) → Out(A).Also, the fiber sum of q_A and q_{A^b} is the following subgroup of Aut(A) ⊕ Aut(A^b):

Aut²(A, A^b) = { (Φ, Φ′) ∈ Aut(A) ⊕ Aut(A^b) : q_A(Φ) = q_{A^b}(Φ′) }.
For each ϕ∈, applying Corollary 3.2.2 ofas in the previous case, we find Φ = ω(ϕ) ∈(F_n,A,A^b) that represents ϕ and that is represented by a homotopy equivalence f_ϕ G → G that preserves H, fixes w, and takes E_i ↦ E_i u_i for possibly trivial paths u_1,u_2 in H based at v. The map ω is a homomorphism because Φ is the unique element of (F_n,A,A^b) that represents ϕ, for if c ∈ F_n and if the inner automorphism i_c preserves both A and A^b then by malnormality of A and A^b we have c ∈ A ∩ A^b, and again by malnormality we have that c is trivial. It follows that ω is injective. Since r is injective it follows that α = r ω is injective. Denote α(ϕ) = (α_A(ϕ),α_A^b(ϕ)) ∈^2(A,A^b). Obviously the compositions q_A α_A, q_A^bα_A^b→(A) are the same, and hence there is an induced homomorphism ρ→(A) which is topologically represented by f_ϕ H. This completes the construction of the above diagram, and commutativity is evident. As in the previous case, we will be done if we can show that at least one of the two homomorphisms α_A Γ_0 →(A) or α_A^bΓ_0 →(A^b) has image that is not virtually abelian, but if both are virtually abelian then ⊷(α) is contained in a virtually abelian subgroup of ^2(A,A^b), which by injectivity of α implies that Γ is virtually abelian, a contradiction. §.§ Proof that the Hyperbolic Action Theorem implies Theorem CLet Γ be a finitely generated, finite lamination subgroup which is not (virtually) abelian and which has (virtually) abelian restrictions. By applying Proposition <ref>, there exists a free factor AF_n such that Γ[A], and there exists an automorphic lift ρΓ↦(A); we may assume that (A) is minimal amongst all choices of A and ρ. We adopt the notation of Figure <ref> in Section <ref>, matching that notation with the Hyperbolic Action Theorem by choosing an isomorphism A ≈ F_k where k = (A). Most of the hypotheses of the Hyperbolic Action Theorem are now immediate: = ⊷(ρ) (A) is not virtually abelian by definition of automorphic lifts; also = ⊷(↦(A)) is abelian.We must check the one remaining hypothesis of the Hyperbolic Action Theorem, namely that no proper, nontrivial free factor BA is preserved by the action of . Assuming by contradiction that B is preserved by , consider the restriction homomorphism σ↦(B). We claim that the composition σρΓ→(B) is an automorphic lift of Γ. Since (B)<(A), once this claim is proved, it contradicts the assumption that ρΓ→(A) is an automorphic lift of minimal rank. The canonical isomorphism (F_k) ↔ F_k, denoted i_δ↔δ, restricts to an isomorphism between J = (F_k) and some subgroup of F_k. If i_δ∈ J then i_δ preserves B, and since B is malnormal in A it follows that δ∈ B. Thus σ restricts to an injection from J to (B). Also, the group J is a free group of rank ≥ 2, for if it were trivial or cyclic thenwould be virtually solvable and hence, by , virtually abelian, a contradiction. Since the image of the map σρΓ(B) contains σ(J), it follows that ⊷(σρ) is not virtually abelian. Sincepreserves the A-conjugacy class of B, and since B is malnormal in F_n, it follows Γ preserves the F_n-conjugacy class of B. Tracing through the definitions one easily sees that the composed homomorphism Γ(B) ↦(B) is equal to the composition of Γ_(F_n)[B] ↦(B), where the latter map is the natural restriction homomorphism. 
Thus σρΓ→(B) is an automorphic lift of Γ, completing the proof of the claim.Applying the conclusions of the Hyperbolic Action Theorem using the free group A ≈ F_k and (F_k), we obtain a finite index normal subgroup N and a hyperbolic action N satisfying conclusions ItemThmF_EllOrLox, ItemThmF_Nonelem and ItemThmF_WWPD of that theorem. The subgroup N = ρ^( N) Γ is a finite index normal subgroup of Γ, and by composition we have a pullback action N ↦ N. By the Hyperbolic Action Theorem ItemThmF_EllOrLox, each element of N acts elliptically or loxodromically on , and so the same holds for each element of N, which is Theorem C ItemThmC_EllOrLox. By item ItemThmF_Nonelem of the Hyperbolic Action Theorem, the action N is nonelementary, and so the same holds for the action N, which is Theorem C ItemThmC_Nonelem. Since the image of the homomorphism N ↦ N ↦ is abelian, it follows that the image inof the commutator subgroup [N,N] is contained in J = (↦), and hence the image of [N,N] in N is contained in JN. By conclusion ItemThmF_WWPD of the Hyperbolic Action Theorem, each loxodromic element of JN is a strongly axial WWPD element with respect to the action N. Applying <cit.> which says that WWPD is preserved under pullback, and using the evident fact that the strongly axial property is preserved under pullback, it follows that each loxodromic element of [N,N] is a strongly axial, WWPD element of the action N, which is the statement of Theorem C ItemThmC_WWPD.§.§ Automorphic extensions of free splitting actions From the hypothesis of the Hyperbolic Action Theorem, our interest is now transferred to the context of a finite rank free group F_n — perhaps identified isomorphically with some free factor of a higher rank free group — and of a subgroup (F_n) that has the following irreducibility property: no proper, nontrivial free factor of F_n is preserved by . To prove the Hyperbolic Action Theorem one needs actions of such groupson hyperbolic spaces. In this section we focus on a natural situation which produces actions on trees. Free splittings. Recall that a free splitting of F_n is a minimal, simplicial action F_nT on a simplicial tree T such that the stabilizer of each edge is trivial. Two free splittings F_nS,T are simplicially equivalent if there exists an F_n-equivariant simplicial isomorphism S ↦ T; we sometimes use the notation [T] for the simplicial equivalence class of a free splitting F_nT. Formally the action F_nT is given by a homomorphism α F_n ↦(T), which we denote more briefly as α F_nT. In this formal notation, (T) refers to the group of simplicial self-isomorphisms of T, equivalently the self-isometry group of T using the geodesic metric given by barycentric coordinates on simplices of T. We note that an element of (T) is determined by its restriction to the vertex set, in fact it is determined by its restriction to the subset of vertices of valence ≥ 3. Two free splittings are equivalent if there is an F_n-equivariant simplicial isomorphism between them, the equivalence class of a free splitting F_nT is denoted [T], and the group (F_n) acts naturally on the set of equivalence classes of free splittings.Given a free splitting F_nT, the set of conjugacy classes of nontrivial vertex stabilizers of a free splitting is a free factor system of F_n called the vertex group system of T denoted as (T). 
The function which assigns to each free splitting T its vertex group system ℱ(T) induces a well-defined, Out(F_n)-equivariant function [T] ↦ ℱ(T) from the set of simplicial equivalence classes of free splittings to the set of free factor systems. Every free splitting F_n ↷ T can be realized by some marked graph pair (G,H), in the sense that T is the F_n-equivariant quotient of the universal cover G̃ where each component of the total lift H̃ ⊂ G̃ is collapsed to a point. One may also assume that each component of H is noncontractible, in which case the same marked graph pair (G,H) topologically represents the vertex group system of T. Twisted equivariance (functional notation). Given two free splittings F_n ↷ S, T and an automorphism Φ ∈ Aut(F_n), a map h: S → T is said to be Φ-twisted equivariant ifh(γ · x) = Φ(γ) · h(x) for all x ∈ S, γ ∈ F_n.The special case when Φ is the identity is simply called equivariance. A twisted equivariant map behaves well with respect to stabilizers, as shown in the following simple fact: For any free splittings F_n ↷ S, T, for each Φ ∈ Aut(F_n), and for each Φ-twisted equivariant map f: S → T, we have *Φ(Stab(x)) ⊆ Stab(f(x)) for all x ∈ S.*If in addition the map f: S → T is a simplicial isomorphism, then the inclusion of item ItemSublemmaPoints is an equality: Φ(Stab(x)) = Stab(f(x)). Furthermore, γ ∈ F_n acts loxodromically on S if and only if Φ(γ) acts loxodromically on T, in which case their axes A^S_γ ⊂ S and A^T_{Φ(γ)} ⊂ T satisfy A^T_{Φ(γ)} = f(A^S_γ). Remark. One could approach this proof by first working out the equivariant case (Φ = identity), and then reducing the twisted case to the equivariant case by conjugating the action F_n ↷ T using Φ. We instead give a direct proof. To prove ItemSublemmaPoints, for each x ∈ S and γ ∈ F_n we haveγ ∈ Φ(Stab(x)) ⟺ Φ^{-1}(γ) ∈ Stab(x) ⟺ Φ^{-1}(γ) · x = x ⟹ f(Φ^{-1}(γ) · x) = f(x) ⟺ Φ(Φ^{-1}(γ)) · f(x) = f(x) (by twisted equivariance) ⟺ γ · f(x) = f(x) ⟺ γ ∈ Stab(f(x))and so Φ(Stab(x)) ⊆ Stab(f(x)). To prove ItemSublemmaAxes, consider the inverse simplicial automorphism f^{-1}: T → S. The one-way implication in the above chain may be inverted by applying f^{-1} to both sides of the equation on its right hand side. For the rest of ItemSublemmaAxes, it suffices to prove the “only if” direction, because the “if” direction can then be proved using that f^{-1} is Φ^{-1}-twisted equivariant. Assuming γ is loxodromic in S with axis A^S_γ, consider the line f(A^S_γ) ⊂ T. Calculating exactly as above one shows that the equation Φ(Stab(A^S_γ)) = Stab(f(A^S_γ)) holds. We may assume that γ is a generator of the infinite cyclic group Stab(A^S_γ), and so ⟨Φ(γ)⟩ = Φ(Stab(A^S_γ)) = Stab(f(A^S_γ)). Since the stabilizer of the line f(A^S_γ) is the infinite cyclic group ⟨Φ(γ)⟩, it follows that Φ(γ) is loxodromic and its axis A^T_{Φ(γ)} is equal to f(A^S_γ).
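To illustrate twisted equivariance with a concrete example of ours (standard, but not part of the numbered statements): let F_2 = ⟨a,b⟩ act on the Bass-Serre tree T of the splitting ⟨a⟩ * ⟨b⟩, with vertex set the cosets g⟨a⟩ and g⟨b⟩ and edge set the elements g ∈ F_2, and let Φ ∈ Aut(F_2) be defined by Φ(a) = b, Φ(b) = a. Then the map h: T → T defined by
h(g⟨a⟩) = Φ(g)⟨b⟩, h(g⟨b⟩) = Φ(g)⟨a⟩, h(g) = Φ(g)
is a well-defined simplicial isomorphism, and it is Φ-twisted equivariant:
h(γ · g⟨a⟩) = Φ(γg)⟨b⟩ = Φ(γ) · h(g⟨a⟩),
and similarly on the other vertices and on edges.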
Free splittings of co-edge number 1. Recall the co-edge number of a free factor system ℱ of F_n (see for example <cit.>), which is the minimal number of edges of G ∖ H amongst all marked graph pairs (G,H) such that H is a representative of ℱ. There is a tight relationship between free splittings with a single edge orbit and free factor systems with co-edge number 1. To be precise:<cit.> When the Out(F_n)-equivariant function [T] ↦ ℱ = ℱ(T) is restricted to free splittings T with one edge orbit and free factor systems ℱ with co-edge number 1, the result is a bijection, and hence Stab[T] = Stab(ℱ)whenever T and ℱ correspond under this bijection.Under the bijection in Fact <ref>, the number of components of ℱ equals the number of vertex orbits of T, which equals 1 or 2. If ℱ = {[A]} has a single component then rank(A) = n−1, there is a free factorization F_n = A * B where rank(B) = 1, and the quotient graph of groups T / F_n is a circle with one edge and one vertex. On the other hand, if ℱ = {[A_1],[A_2]} has two components then rank(A_1) + rank(A_2) = n, the subgroups A_1, A_2 can be chosen in their conjugacy classes so that there is a free factorization F_n = A_1 * A_2, and the quotient graph of groups T / F_n is an arc with one edge and two vertices.The stabilizer of a free splitting and its automorphic extension. Consider a free splitting α: F_n ↷ T and its stabilizer subgroup Stab[T] ≤ Out(F_n). Let Stab^∼[T] ≤ Aut(F_n) be the preimage of Stab[T] under the standard projection homomorphism Aut(F_n) → Out(F_n), and so we have a short exact sequence1 → F_n ≈ Inn(F_n) → Stab^∼[T] → Stab[T] → 1From this setup we shall define in a natural way an action Stab^∼[T] ↷ T which extends the given free splitting action α: F_n ↷ T. We proceed as follows. For each Φ ∈ Aut(F_n) we have the composed action α ∘ Φ: F_n ↷ T. Assuming in addition that Φ ∈ Stab^∼[T], in other words that Φ is a representative of some element of Stab[T], it follows that the actions α and α ∘ Φ: F_n ↷ T are equivalent, meaning that there exists a simplicial automorphism h ∈ Aut(T) such that h ∘ α(γ) = α(Φ(γ)) ∘ h. When this equation is rewritten in action notation it simply says that h satisfies Φ-twisted equivariance: h(γ · x) = Φ(γ) · h(x) for all γ ∈ F_n, x ∈ T. Suppose conversely that for some Φ ∈ Aut(F_n) there exists a Φ-twisted equivariant h ∈ Aut(T). Since h conjugates the action α: F_n ↷ T to the action α ∘ Φ: F_n ↷ T, it follows that ϕ ∈ Stab[T] and Φ ∈ Stab^∼[T]. This proves the equivalence of ItemLTUInStabT, ItemLTUForall and ItemLTUExists in the following lemma, which also contains some uniqueness information regarding the conjugating maps h.For each free splitting α: F_n ↷ T and each ϕ ∈ Out(F_n) the following are equivalent: *ϕ ∈ Stab[T]*For each Φ ∈ Aut(F_n) representing ϕ, there exists a Φ-twisted equivariant isomorphism h: T → T.*For some Φ ∈ Aut(F_n) representing ϕ, there exists a Φ-twisted equivariant isomorphism h: T → T.Furthermore,*If ϕ satisfies the equivalent conditions ItemLTUInStabT, ItemLTUForall, ItemLTUExists then for each Φ ∈ Stab^∼[T] representing ϕ, the Φ-twisted equivariant isomorphism h_Φ is uniquely determined by Φ, and is denotedh_Φ: T → T *For each γ ∈ F_n with corresponding inner automorphism i_γ(δ) = γδγ^{-1}, the two maps h_{i_γ}: T → T and α(γ): T → T are equal. Remark: In item ItemLTUInner, note that h_{i_γ} is defined because i_γ ∈ Inn(F_n) ≤ Stab^∼[T].The uniqueness statement ItemLTUUnique is a special case of a more general uniqueness statement that we will make use of later: For any two free splittings F_n ↷ S, T and any Φ ∈ Aut(F_n), there exists at most one Φ-twisted equivariant simplicial isomorphism h: S → T. In particular, taking Φ to be the identity, there exists at most one equivariant simplicial isomorphism h: S → T.Suppose that a Φ-twisted equivariant simplicial isomorphism h: S → T exists. For each γ ∈ F_n, it follows by Φ-twisted equivariance that γ acts loxodromically on S with axis A^S_γ if and only if Φ(γ) acts loxodromically on T with axis A^T_{Φ(γ)} = h(A^S_γ) (by Lemma <ref>). The map that h induces on the set of axes of loxodromic elements is therefore uniquely determined.
It follows that the restriction of h to the set of vertices v ∈ S of valence ≥ 3 is uniquely determined by Φ, because each such v may be expressed in the form {v} = A^S_β ∩ A^S_γ ∩ A^S_δ for a certain choice of β,γ,δ ∈ F_n, and hence {h(v)} = h(A^S_β) ∩ h(A^S_γ) ∩ h(A^S_δ) = A^T_{Φ(β)} ∩ A^T_{Φ(γ)} ∩ A^T_{Φ(δ)}Since h is uniquely determined by its restriction to the vertices of valence ≥ 3, it follows that h is uniquely determined amongst simplicial isomorphisms.The uniqueness clause ItemLTUUnique is an immediate consequence of Lemma <ref>. Item ItemLTUInner follows from the uniqueness clause ItemLTUUnique, because for each γ ∈ F_n the map h = α(γ) clearly satisfies i_γ-twisted equivariance: α(γ) ∘ α(δ) = α(i_γ(δ)) ∘ α(γ) for all δ ∈ F_n.Consider now the function α̃: Stab^∼[T] → Aut(T) defined by α̃(Φ) = h_Φ as given by Lemma <ref>. This defines an action α̃: Stab^∼[T] ↷ T, because for each Φ,Ψ ∈ Stab^∼[T] both sides of the action equation h_Φ ∘ h_Ψ = h_{ΦΨ} satisfy ΦΨ-twisted equivariance: indeed h_Φ(h_Ψ(γ · x)) = h_Φ(Ψ(γ) · h_Ψ(x)) = Φ(Ψ(γ)) · h_Φ(h_Ψ(x)) = (ΦΨ)(γ) · (h_Φ ∘ h_Ψ)(x), and hence the equation holds by application of the uniqueness clause ItemLTUUnique of Lemma <ref>. The following lemma summarizes this discussion together with the evident generalization to subgroups of Stab^∼[T], and rewrites the twisted equivariance property using action notation instead of functional notation. Associated to each free splitting F_n ↷ T there is a unique isometric action Stab^∼[T] ↷ T which assigns to each Φ ∈ Stab^∼[T] the unique simplicial isomorphism T → T as stated in Lemma <ref>, denoted in action notation as x ↦ Φ · x, satisfying the following: Twisted equivariance (action notation): For all Φ ∈ Stab^∼[T], γ ∈ F_n, x ∈ T,Φ · (γ · x) = Φ(γ) · (Φ · x)More generally, the restriction to any subgroup ℋ ≤ Stab^∼[T] of the action Stab^∼[T] ↷ T is the unique isometric action ℋ ↷ T that satisfies twisted equivariance.Remark:In the twisted equivariance equation Φ · (γ · x) = Φ(γ) · (Φ · x), the action dot “·” is used ambiguously for both the action of F_n on T and the action of Stab^∼[T] on T. The meaning of any particular action dot should be clear by context. Furthermore, in contexts where the two meanings overlap they will always agree. For example, Lemma <ref> ItemLTUInner says that the action dots always respect the standard isomorphism F_n ≈ Inn(F_n) given by δ ≈ i_δ.The next lemma will be a key step of the proof of the loxodromic and WWPD portions of the Hyperbolic Action Theorem (see the remarks after the statement).In brief, given a free splitting F_n ↷ T and a certain subgroup J ⊂ F_n, the lemma gives a criterion for verifying that the restriction of the free splitting action F_n ↷ T to the subgroup J is nonelementary. To understand the statement of the lemma, the reader may note that by applying Lemma <ref>, the entire setup of the lemma — including hypotheses ItemJActionsAgree and ItemHatHTwistedEquiv — is satisfied for any free splitting F_n ↷ T and any subgroup ℋ ≤ Stab^∼[T].Let F_n ↷ T be a free splitting with vertex set V, and let V^+ be the subset of all v ∈ V such that Stab(v) is nontrivial. Let ℋ ≤ Aut(F_n) be a subgroup with normal subgroup J = ℋ ∩ Inn(F_n). Let J ↷ T denote the restriction to J of the given free splitting action Inn(F_n) ≈ F_n ↷ T. Let ℋ ↷ V^+ be an action that satisfies the following: *The two actions of J on V^+, one obtained by restricting to J the given action ℋ ↷ V^+, the other by restricting the action J ↷ T to the subset V^+, are identical. *The following twisted equivariance condition holds:Φ · (γ · x) = Φ(γ) · (Φ · x), for all Φ ∈ ℋ, γ ∈ F_n, x ∈ V^+ If no subgroup Stab_{F_n}(v) (v ∈ V^+) is fixed by the whole group ℋ, and if the free group J has rank ≥ 2, then the action J ↷ T is nonelementary. Remarks.
Lemma <ref> is applied in Section <ref> when proving the one-edge case of the Hyperbolic Action Theorem, in which case we have a group ℋ ≤ Stab^∼[T] for which the hypotheses ItemJActionsAgree and ItemHatHTwistedEquiv hold automatically (by Lemma <ref>).Lemma <ref> is also applied in Section <ref> when proving the multi-edge case of the Hyperbolic Action Theorem. When Lemma <ref> is applied in that case, the group ℋ ≤ Aut(F_n) is not contained in Stab^∼[T], and the given action ℋ ↷ V^+ does not extend to an action of ℋ on T itself. Nonetheless ℋ will have a kind of “semi-action” on T which extends the given free splitting action J ↷ T, and this will be enough to give an action ℋ ↷ V^+ that also satisfies ItemJActionsAgree and ItemHatHTwistedEquiv.By hypothesis ItemJActionsAgree the action J ↷ T has trivial edge stabilizers and each point of V − V^+ has trivial stabilizer. It follows that for each nontrivial α ∈ J, either α is elliptic and its fixed point set Fix(α) ⊂ T is a single point of V^+, or α is loxodromic with repeller–attractor pair (∂_−α, ∂_+α) ∈ ∂T × ∂T and Stab_J(∂_−α) = Stab_J(∂_+α) = Stab_J{∂_−α, ∂_+α} is an infinite cyclic group. We claim there is no point v ∈ V^+ such that Fix(α) = {v} for each nontrivial α ∈ J. Otherwise that point v is unique, and its stabilizer B_v = Stab_{F_n}(v) is the unique nontrivial vertex stabilizer fixed by the action of J on subgroups of F_n, because the bijection v ↔ B_v is J-equivariant (by Lemma <ref>). It follows that each Φ ∈ ℋ also fixes B_v, because J = Φ J Φ^{-1} fixes the subgroup Φ(B_v) ≤ F_n, which is also a nontrivial vertex stabilizer, namely Φ(B_v) is equal to the stabilizer of Φ · v by hypothesis ItemHatHTwistedEquiv; by uniqueness of B_v we therefore have Φ(B_v) = B_v. Since this holds for all Φ ∈ ℋ, we have contradicted the hypothesis of the lemma, thus proving the claim.The proof that the action J ↷ T is nonelementary now follows a standard argument. Some γ ∈ J is loxodromic, for otherwise by applying the claim it follows that J has nontrivial elliptic elements α,β with Fix(α) ≠ Fix(β) in T, but in that case γ = αβ is loxodromic (see for example <cit.>). Since rank(J) ≥ 2, there exists δ ∈ J − Stab_J{∂_−γ, ∂_+γ}. It follows that γ and δγδ^{-1} are independent loxodromic elements of J. §.§ Hyperbolic Action Theorem, One-edge case: Proof.Adopting the notation of the Hyperbolic Action Theorem, we may assume that ℋ ≤ Aut(F_n) and J = ℋ ∩ Inn(F_n) are as in that theorem. Choose ℱ to be a maximal, proper, ℋ-invariant free factor system of F_n, so that ℱ is fully irreducible relative to the extension ℱ ⊏ {[F_n]}, meaning that there is no free factor system ℱ′ with strict nesting ℱ ⊏ ℱ′ ⊏ {[F_n]} such that ℱ′ is invariant under any finite index subgroup of ℋ, equivalently such that ℱ′ is invariant under ℋ itself. The proof of the Hyperbolic Action Theorem breaks into two cases: The one-edge case: The co-edge number of ℱ equals 1.The multi-edge case: The co-edge number of ℱ is ≥ 2.Proof of the Hyperbolic Action Theorem in the one-edge case. Assuming that ℱ ⊏ {[F_n]} is a one-edge extension, using that the image of ℋ in Out(F_n) preserves ℱ, and applying Fact <ref>, there exists a free splitting F_n ↷ T with one edge orbit whose vertex stabilizer system forms the free factor system ℱ, and we have equality of stabilizer subgroups Stab(ℱ) = Stab[T]. It follows that ℋ ≤ Stab^∼[T]. Applying Lemma <ref>, consider the resulting action Stab^∼[T] ↷ T. We show that the restricted action ℋ ↷ T satisfies the conclusions of the Hyperbolic Action Theorem (taking the hyperbolic space of that theorem to be 𝒳 = T). Since every isometry of a tree is either elliptic or loxodromic, Conclusion ItemThmF_EllOrLox of the Hyperbolic Action Theorem holds.
For proving Conclusion ItemThmF_Nonelem of the Hyperbolic Action Theorem we shall apply Lemma <ref> to the action ℋ ↷ T, so we must check its hypotheses. Hypotheses ItemJActionsAgree and ItemHatHTwistedEquiv of Lemma <ref> hold by applying Lemma <ref> to the subgroup ℋ ≤ Stab^∼[T]. Also, by the hypothesis of the Hyperbolic Action Theorem, the subgroup ℋ ≤ Aut(F_n) does not preserve any proper, nontrivial free factor of F_n; in particular it does not preserve any nontrivial vertex stabilizer of the free splitting F_n ↷ T. Finally, by hypothesis of the Hyperbolic Action Theorem, the image of ℋ in Out(F_n) is abelian and ℋ is not virtually abelian, and so it follows that J has rank ≥ 2, for otherwise ℋ would be solvable and hence virtually abelian, a contradiction. The conclusion of Lemma <ref> therefore holds, saying that the restricted action J ↷ T is nonelementary. The action ℋ ↷ T is therefore nonelementary, which verifies conclusion ItemThmF_Nonelem of the Hyperbolic Action Theorem.Conclusion ItemThmF_WWPD of the Hyperbolic Action Theorem says that each loxodromic element of J is a strongly axial, WWPD element for the action ℋ ↷ T, and this is an immediate consequence of the next lemma (which will also be used in the multi-edge case of the Hyperbolic Action Theorem). In formulating this lemma, we were inspired by results of Minasyan and Osin <cit.> producing WPD elements of certain group actions on trees.Let G ↷ T be a group action on a simplicial tree equipped with the simplicial metric on T that assigns length 1 to each edge. If J is a normal subgroup of G such that the restricted action J ↷ T has trivial edge stabilizers,then each loxodromic element of J is a strongly axial, WWPD element of the action G ↷ T.Given an oriented line A ⊂ T, let the stabilizers of A under the actions of G and of J be denotedStab_G(A) = {γ ∈ G : γ(A) = A}, Stab_J(A) = J ∩ Stab_G(A)Consider a loxodromic μ ∈ J. Clearly the axis A_μ ⊂ T of μ is a strong axis of μ. Orient A_μ so that μ translates in the positive direction, with repelling/attracting endpoints ∂_−A_μ, ∂_+A_μ ∈ ∂T. Since J ↷ T has trivial edge stabilizers, the group Stab_J(∂_−A_μ) = Stab_J(∂_+A_μ) = Stab_J(∂_−A_μ, ∂_+A_μ) is infinite cyclic.By <cit.>, for μ to satisfy WWPD with respect to the action G ↷ T is equivalent to saying that the ordered pair ∂_±A_μ = (∂_−A_μ, ∂_+A_μ) is an isolated point in its orbit G · ∂_±A_μ ⊂ ∂T × ∂T − Δ (this is the property denoted “WWPD (2)” in <cit.>). We may assume that μ is a generator for the infinite cyclic group Stab_J(∂_±A_μ); if this is not already true then, without affecting the WWPD property for μ, we may replace μ with a generator. Letting ℓ_μ > 0 denote the integer valued length of a fundamental domain for the action of μ on A_μ, it follows that any edge path in A_μ of length ℓ_μ is a fundamental domain for the action of Stab_J(∂_±A_μ) on A_μ. Choose a subsegment α ⊂ A_μ of length ℓ_μ + 1. There is a corresponding neighborhood U_α ⊂ ∂T × ∂T − Δ consisting of all endpoint pairs of oriented lines in T containing α as an oriented subsegment. Consider γ ∈ G such that γ(∂_±A_μ) ∈ U_α. It follows that μ′ = γμγ^{-1} ∈ J has axis A_{μ′} = γ(A_μ) and that A_μ ∩ A_{μ′} contains α, and hence A_μ ∩ A_{μ′} has length ≥ ℓ_μ + 1. Also, the map γ: A_μ → A_{μ′} takes the restricted orientation of the subsegment A_μ ∩ A_{μ′} ⊂ A_μ to the restricted orientation on the subsegment γ(A_μ ∩ A_{μ′}) ⊂ A_{μ′}. Let the edges of the oriented segment A_μ ∩ A_{μ′} be parameterized as E_0 E_1 E_2 … E_J, J ≥ ℓ_μ. Since μ and μ′ both have translation number ℓ_μ it follows that μ(E_0) = μ′(E_0) = E_{ℓ_μ}, and so μ^{-1}μ′ ∈ Stab_J(E_0).
Since J ↷ T has trivial edge stabilizers it follows that μ = μ′, and so γ ∈ Stab_G{∂_−A_μ, ∂_+A_μ}. And having shown that γ preserves orientation, it follows that γ(∂_±A_μ) = ∂_±A_μ. This shows that ∂_±A_μ is isolated in its orbit G · ∂_±A_μ, being the unique element in the intersection U_α ∩ (G · ∂_±A_μ). This completes the proof of the Hyperbolic Action Theorem in the one-edge case. § HYPERBOLIC ACTION THEOREM, MULTI-EDGE CASE: INTRODUCTION. The proof of the Hyperbolic Action Theorem in the multi-edge case will take up the rest of the paper. In this section, we give a broad outline of the methods of proof, followed by some motivation coming from well-known constructions in geometric group theory.§.§ Outline of the multi-edge case.We will make heavy use of the theory of abelian subgroups of Out(F_n) developed by Feighn and Handel. In very brief outline, here are the main features of that theory that we will need. Disintegration groups. (See Section <ref>.) Any element of Out(F_n) has a uniformly bounded power which is rotationless, meaning roughly that each of various natural finite permutations induced by that element are trivial. Any rotationless ϕ ∈ Out(F_n) has a particularly nice relative train track representative called a CT. For any CT f: G → G there is an associated abelian subgroup 𝒟(f) ≤ Out(F_n) that contains ϕ and is called the “disintegration subgroup” of f. The idea of the disintegration group is to first disintegrate or decompose f into pieces, one piece for each non-fixed stratum H_r, equal to f on H_r and to the identity elsewhere. Then one re-integrates those pieces to form generators of the group 𝒟(f), by choosing a list of non-negative exponents, one per non-fixed stratum, and composing the associated powers of the pieces of f. However, in order for this composition to be continuous and a homotopy equivalence, and for 𝒟(f) to be abelian, the exponents in that list cannot be chosen independently. Instead two constraints are imposed: the non-fixed strata are partitioned into a collection of “almost invariant subgraphs” on each of which the exponent must be constant; and certain linear relations are required amongst strata that wrap around a common twist path of f.The following key theorem about disintegration groups lets us study an abelian subgroup of Out(F_n) (up to finite index) by working entirely in an appropriate disintegration group: [<cit.>] For every rotationless abelian subgroup 𝒬 ≤ Out(F_n) there exists ϕ ∈ 𝒬 such that for every CT f: G → G representing ϕ, the intersection 𝒬 ∩ 𝒟(f) has finite index in 𝒬. The proof of the Hyperbolic Action Theorem in the multi-edge case. The detailed proof is carried out in Section <ref>, based on material whose development we soon commence. Here is a sketch.By a finite index argument we may assume that every element of the abelian group 𝒬, the image of ℋ in Out(F_n), is rotationless. Let ℱ be a maximal, proper, 𝒬-invariant free factor system of F_n. Being in the multi-edge case means that ℱ has co-edge number ≥ 2. From the Disintegration Theorem we obtain ϕ ∈ 𝒬, and we apply the conclusions of that theorem to a CT representative f: G → G of ϕ having a proper core filtration element G_t representing ℱ. We may assume that 𝒬 ≤ 𝒟(f), by replacing 𝒬 with its finite index subgroup 𝒬 ∩ 𝒟(f). From the construction of the disintegration group 𝒟(f), any core filtration element properly contained between G_t and G would represent a free factor system which is 𝒟(f)-invariant, hence 𝒬-invariant, contradicting maximality of ℱ. Thus G_t is the maximal proper core filtration element, and ϕ is fully irreducible relative to ℱ.
Since ℱ has co-edge number ≥ 2, the top stratum H_u is an EG stratum. By maximality of G_t, every stratum strictly between G_t and G is either an NEG linear edge with terminal endpoint attached to G_t or a zero stratum enveloped by H_u.The hard work of the proof breaks into two major phases, the first of which is:Sections <ref>,<ref>: Construction and hyperbolicity of 𝒳.The hyperbolic metric space 𝒳 needed for verifying the conclusions of the Hyperbolic Action Theorem is constructed in terms of the CT f: G → G (see Section <ref> for further motivation of the construction). First we describe 𝒳 in the simpler case that f has no height u indivisible Nielsen path. Starting with f: G → G, lift to the universal cover to obtain f̃: G̃ → G̃. Let G̃_{u−1} ⊂ G̃ be the total lift of G_{u−1}. Collapse to a point each component of G̃_{u−1}, obtaining a simplicial tree T. The map f̃: G̃ → G̃ induces a map f_T: T → T. Let 𝒳 be the bi-infinite mapping cylinder of f_T, obtained from T × ℤ × [0,1] by identifying (x,n,1) ∼ (f_T(x), n+1, 0) for each x ∈ T and n ∈ ℤ. The construction of 𝒳 is more complex when H_u is geometric, meaning that f has a unique (up to inversion) height u indivisible Nielsen path ρ, and ρ is closed, forming a circuit c in G. Each line c̃ ⊂ G̃ obtained by lifting c projects to a line in T called a geometric axis, and the projections to T of the lifts of ρ in c̃ are fundamental domains for that axis. In the bi-infinite mapping cylinder as defined above, the portion of the mapping cylinder corresponding to each geometric axis is a quasiflat, contradicting hyperbolicity. Thus we do not take 𝒳 to be the bi-infinite mapping cylinder itself; instead 𝒳 is constructed from the bi-infinite mapping cylinder by coning off each geometric axis in T × {n} for each n ∈ ℤ, using one cone point per geometric axis, attaching arcs that connect the cone point to the endpoints of the fundamental domains of that axis.In Section <ref>, we use the Mj-Sardar combination theorem <cit.> to prove hyperbolicity of 𝒳.The Mj-Sardar theorem, a generalization of the Bestvina-Feighn combination theorem <cit.>, requires us to verify a flaring hypothesis. To do this, in Section <ref> we study relative flaring properties of f: G → G, specifically: how f flares relative to the lower filtration element G_{u−1}; and in the geometric case, how f flares relative to the Nielsen path ρ. Then in Section <ref>, we study flaring properties of the induced map f_T: T → T (and, in the geometric case, flaring properties of the induced map obtained by coning off each geometric axis of T). The required flaring hypothesis on 𝒳 itself can then be verified in Section <ref>, allowing application of the Mj-Sardar theorem to deduce hyperbolicity of 𝒳.The other major phase of the proof is: Sections <ref>, <ref>: Use the theory of disintegration groups to obtain an isometric action on 𝒳 with appropriate WWPD elements.To do this, one uses that there is a number λ > 0 and a homomorphism PF_Λ: 𝒟(f) → ℝ such that for each ψ ∈ 𝒟(f), the lamination Λ is an attracting lamination for ψ if and only if PF_Λ(ψ) > 0, in which case the stretch factor is λ^{PF_Λ(ψ)}. One would like to think of H_u as an EG stratum for ψ having Perron-Frobenius eigenvalue λ^{PF_Λ(ψ)}, but this does not yet make sense because we do not yet have an appropriate topological representative of ψ on G.
We define a subgroup and a sub-semigroup 𝒟_0(f) = ker(PF_Λ), 𝒟_+(f) = PF_Λ^{-1}[0,∞)and we then lift the sequence of inclusions 𝒟_0(f) ⊂ 𝒟_+(f) ⊂ 𝒟(f) ≤ Out(F_n) to a sequence of inclusions D_0(f) ⊂ D_+(f) ⊂ D(f) ≤ Aut(F_n)The hard work in Section <ref> is to use the theory of disintegration groups to construct a natural action of the semigroup D_+(f) on T, in which each element Ψ ∈ D_+(f), with image ψ ∈ 𝒟_+(f), acts on T by stretching each edge by a uniform factor equal to λ^{PF_Λ(ψ)}, and such that for each Ψ ∈ D_+(f) the resulting map x ↦ Ψ · x of T is Ψ-twisted equivariant. When restricted to the subgroup D_0(f) we obtain an action on T by twisted equivariant isometries, which allows us to identify the action D_0(f) ↷ T with the one described in Lemma <ref>.Then what we do in Section <ref> is to suspend the semigroup action of D_+(f) on T (and, in the geometric case, on the graph obtained by coning off geometric axes), obtaining an isometric action D(f) ↷ 𝒳. By restriction we obtain the required action ℋ ↷ 𝒳.Finally, Section <ref>: Put the pieces together and verify the conclusions of the Hyperbolic Action Theorem.The basis of the WWPD conclusions in the multi-edge case of the Hyperbolic Action Theorem is Lemma <ref>, which has already played the same role for the one-edge case. §.§ Motivation: Suspension actions and combination theorems.The construction of the hyperbolic suspension action D(f) ↷ 𝒳 may be motivated by looking at some familiar examples in a somewhat unfamiliar way. Example: Mapping torus hyperbolization after Thurston. Consider a pseudo-Anosov homeomorphism f: S → S of a closed, oriented hyperbolic surface S of genus ≥ 2 with associated deck action π_1 S ↷ S̃ = ℍ². The map f uniquely determines an outer automorphism ϕ ∈ Out(π_1 S). The associated extension group Γ ≤ Aut(π_1 S) is the inverse image of the infinite cyclic group ⟨ϕ⟩ ≤ Out(π_1 S) under the natural homomorphism Aut(π_1 S) → Out(π_1 S). A choice of Φ ∈ Aut(π_1 S) representing ϕ naturally determines a semidirect product structure Γ ≈ π_1(S) ⋊_Φ ℤ. The deck transformation action π_1 S ↷ ℍ² extends naturally to an action Γ ↷ ℍ², whereby if Ψ ∈ Γ projects to ψ = ϕ^k then the action of Ψ on ℍ², denoted F_Ψ: ℍ² → ℍ², is the lift of f^k whose induced action on the circle at infinity ∂ℍ² agrees with the induced action of Ψ. Equivalently, F_Ψ is the unique Ψ-twisted equivariant lift of f^k. Although the deck action π_1 S ↷ ℍ² is by isometries, the extended action Γ ↷ ℍ² is not by isometries. However, there is a way to suspend the action Γ ↷ ℍ², obtaining an isometric action Γ ↷ 𝒳, as follows. The suspension space 𝒳 is the bi-infinite mapping cylinder obtained as the quotient of ℍ² × ℤ × [0,1] under the identifications (x,n,1) ∼ (F_Φ(x), n+1, 0). One may check (and we shall do so in Section <ref> in a different context) that there is an action Γ ↷ 𝒳 which is generated by letting π_1 S act as the deck group on the slice ℍ² × {0} × {0} ⊂ 𝒳 and extending naturally over the rest of 𝒳, and by letting Φ act according to the formula Φ · (x,n,t) = (x, n−1, t). One may also check that the hyperbolic metrics on the slices ℍ² × {n} × {0} extend to a path metric on 𝒳, uniquely up to quasi-isometry, such that the action Γ ↷ 𝒳 is by isometries.Returning to the land of the familiar, the group Γ may be identified with the fundamental group of the 3-dimensional mapping torus M_f of the surface homeomorphism f: S → S. By Thurston's hyperbolization theorem applied to M_f, the suspension space 𝒳 may be identified, up to Γ-equivariant quasi-isometry, with the universal cover M̃_f ≈ ℍ³ equipped with its deck transformation action by the group Γ ≈ π_1 M_f.
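As a quick consistency check (ours, not part of the original statements), the formula for the action of Φ respects the mapping cylinder identifications: for each x ∈ ℍ² and n ∈ ℤ,
Φ · (x, n, 1) = (x, n−1, 1) ∼ (F_Φ(x), n, 0) = Φ · (F_Φ(x), n+1, 0),
where the middle relation is an instance of the defining identification (x,n,1) ∼ (F_Φ(x), n+1, 0). Thus identified points have identified images, and the formula descends to a well defined map of 𝒳.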
Example: Mapping torus hyperbolization after Brinkmann. Consider now an irreducible train track representative f: G → G of a nongeometric, fully irreducible outer automorphism ϕ ∈ Out(F_n). Again we have a natural extension group Γ ≤ Aut(F_n) of ϕ ∈ Out(F_n), with a semidirect product structure Γ = F_n ⋊_Φ ℤ determined by a choice of Φ ∈ Aut(F_n) representing ϕ.Unlike in the previous situation where we started with a surface homeomorphism, here we have started with a non-homeomorphic topological representative f: G → G of ϕ, and so the deck action F_n ↷ G̃ does not extend to an action Γ ↷ G̃. But it does extend to an action of the semigroup Γ_+ which is the inverse image under Aut(F_n) → Out(F_n) of the semigroup ⟨ϕ⟩_+ = {ϕ^i : i ∈ {0,1,2,…}}: for each Ψ ∈ Γ_+ mapping to ϕ^i with i ≥ 0, the associated map F_Ψ: G̃ → G̃ is the unique Ψ-twisted equivariant lift of f^i. Although the deck action F_n ↷ G̃ is by isometries, the semigroup action Γ_+ ↷ G̃ is not by isometries.But there is a way to suspend the semigroup action Γ_+ ↷ G̃ to an isometric action Γ ↷ 𝒳, where 𝒳 is the bi-infinite mapping cylinder of F_Φ: G̃ → G̃, defined exactly as above, namely the quotient space of G̃ × ℤ × [0,1] where (x,n,1) ∼ (F_Φ(x), n+1, 0), with an appropriate metric. What is done in <cit.> is to apply properties of the train track map f: G → G to prove a flaring hypothesis, allowing one to apply the Bestvina-Feighn combination theorem <cit.> to conclude that 𝒳 is Gromov hyperbolic, thus proving that Γ is a hyperbolic group.Flaring methods. Our proof of the multi-edge case will use the combination theorem of Mj and Sardar <cit.>, a generalization of the Bestvina-Feighn combination theorem <cit.>. A common feature of the above examples that is shared by the construction of this paper is that 𝒳 is a “metric bundle over ℝ”: there is a Lipschitz projection map π: 𝒳 → ℝ such that the minimum distance from π^{-1}(s) to π^{-1}(t) equals |s−t| for all s,t ∈ ℝ, and each point x ∈ 𝒳 is contained in the image of a geodesic section σ: ℝ → 𝒳 of the map π. In studying the large scale geometry of 𝒳 it is important to study quasigeodesic sections σ: ℝ → 𝒳 and their flaring properties. In our context it is convenient to translate these concepts into dynamical systems parlance: each such quasigeodesic section turns out to be a “pseudo-orbit” of a suspension semiflow on 𝒳 whose true orbits are the geodesic sections (see the closing paragraphs of Section <ref>). The combination theorems of <cit.> and its predecessor <cit.> share key hypotheses regarding the asymptotic “flaring” behavior of such pseudo-orbits (see Definition <ref>). We remark, though, that those combination theorems hold in much more general settings, e.g. certain kinds of metric bundles over more general metric base spaces.In Section <ref> we study flaring of pseudo-orbits in the context of a relative train track map, which is then used in Section <ref> to extend to further contexts, building up to the suspension space 𝒳, its metric bundle π: 𝒳 → ℝ, and its pseudo-orbits, allowing application of the Mj-Sardar combination theorem.
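To make the flaring hypothesis concrete before the formal definition, here is a toy one-dimensional caricature of ours (an illustration only, with hypothetical constants, not part of the combination theorems): a length stretched by a factor lam at each step of a pseudo-orbit, up to an adversarial additive error eta, grows exponentially once it exceeds the threshold eta/(lam − 1), and may decay below it. This is why the flaring condition defined later in this paper carries a threshold constant.

```python
# Toy pseudo-orbit flaring: hypothetical stretch factor lam and error eta.
lam, eta = 2.0, 5.0
threshold = eta / (lam - 1.0)  # lengths above this must grow

def worst_case_orbit(l0, steps):
    seq = [l0]
    for _ in range(steps):
        seq.append(max(lam * seq[-1] - eta, 0.0))  # adversarial error -eta
    return seq

print(worst_case_orbit(threshold + 1.0, 6))  # grows like lam**r: 6, 7, 9, 13, 21, 37, 69
print(worst_case_orbit(threshold - 1.0, 6))  # decays to 0.0
```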
§ FLARING IN A TOP EG STRATUMWe assume the reader is familiar with the theory of CT representatives of elements of Out(F_n), developed originally by Feighn and Handel. More narrowly, we shall focus on the terminology and notation of a CT having a top EG stratum, as laid out in Section 2.2 of Part I, under the heading of properties of CTs. Here is a brief review, in order to fix notations for what follows in the rest of this paper; see <cit.> for detailed citations. At present we need no more about CTs than is described here; when needed in Section <ref> we will give a more thorough review of CTs.We fix ϕ ∈ Out(F_n) and a relative train track representative f: G → G with associated f-invariant filtration ∅ = G_0 ⊂ G_1 ⊂ ⋯ ⊂ G_u = G satisfying the following: * The top stratum H_u is EG, with Perron-Frobenius transition matrix M_u having top eigenvalue λ > 1. The attracting lamination of ϕ corresponding to H_u is denoted Λ^+ or just Λ, and its dual repelling lamination is Λ^-.*There exists (up to reversal) at most one indivisible periodic Nielsen path ρ of height u, meaning that ρ is not contained in G_{u−1}. In this case ρ and its inverse ρ̅ are Nielsen paths, there is a decomposition ρ = αβ where α,β are u-legal paths with endpoints at vertices, and (α̅,β) is the unique illegal turn in H_u. At least one endpoint of ρ is disjoint from G_{u−1}; we assume that ρ is oriented with initial point p ∉ G_{u−1} and terminal point q. The initial and terminal directions are distinct fixed directions in H_u. The stratum H_u is geometric if and only if ρ exists and is a closed path, in which case p = q.*From the matrix M_u one obtains an eigenlength function l_PF(σ), defined on all paths σ in G having endpoints at vertices, and having the following properties:* for each edge E ⊂ G we have l_PF(E) = 0 if and only if E ⊂ G_{u−1}; * in general if σ = E_1 … E_K then l_PF(σ) = l_PF(E_1) + ⋯ + l_PF(E_K); * for all edges E ⊂ G we have l_PF(f(E)) = λ · l_PF(E). * If ρ = αβ exists as in ItemCTiNP then l_PF(α) = l_PF(β) = (1/2) · l_PF(ρ).*If γ is a path with endpoints at vertices of H_u or a circuit crossing an edge of H_u then for all sufficiently large i the path f^i_#(γ) has a splitting with terms in the set {edges of H_u} ∪ {ρ,ρ̅} ∪ {paths in G_{u−1} with endpoints in H_u}.In what follows, there are three cases to consider: the ageometric case, where ρ does not exist; the parageometric case, where ρ exists but is not closed; and the geometric case, where ρ exists and is closed. Occasionally we will need to consider these cases separately. But for the most part we will attempt to handle all three cases simultaneously, using the following conventions: * In the ageometric case, where ρ does not exist, the notation ρ should simply be ignored (for an exception see the “Notational convention” following Corollary <ref>);* In the parageometric case, where ρ exists but is not closed, the notations ρ^i for i ≥ 2 should be ignored.This completes Notations <ref>.The main result of this section is Proposition <ref>, a uniform flaring property for the top EG stratum H_u of f. This result generalizes <cit.>, which covers the special case that f is a nongeometric train track map, that is, u = 1 and, if ρ exists, it is not closed. There are two features which distinguish our present general situation from that special case. First, our flaring property is formulated relative to the penultimate filtration element G_{u−1} of G. Second, if the height u indivisible Nielsen path ρ exists then our flaring property is formulated relative to ρ. Taken together, these two features require very careful isolation and annihilation of the metric effects of paths in G_{u−1} and of the path ρ. This task is particularly tricky in the geometric case where ρ exists and is closed, in which situation we must also annihilate the metric effects of all iterates ρ^n.
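As a numerical illustration of the eigenlength function (ours, with a hypothetical transition matrix, not drawn from the original), here is a minimal sketch: M[i][j] counts crossings of the edge E_i by f(E_j), the eigenlength l_PF(E_j) is the j-th entry of a positive left Perron-Frobenius eigenvector of M, and the eigenvector equation l_PF(f(E)) = λ · l_PF(E) then holds edge by edge.

```python
import numpy as np

# Hypothetical 2x2 transition matrix of a top EG stratum with two edges.
M = np.array([[1.0, 1.0],
              [1.0, 2.0]])

eigvals, eigvecs = np.linalg.eig(M.T)   # left eigenvectors of M
i = int(np.argmax(eigvals.real))
lam = eigvals.real[i]                   # top eigenvalue lambda > 1
v = np.abs(eigvecs[:, i].real)          # positive eigenvector: l_PF(E_j) = v[j]

def l_pf(path, v):
    """Eigenlength of a path given as a list of H_u edge indices;
    edges of G_{u-1} would contribute 0 and are omitted here."""
    return sum(v[e] for e in path)

# Edge-by-edge check of l_PF(f(E_j)) = lambda * l_PF(E_j):
for j in range(len(v)):
    assert np.isclose(sum(M[k, j] * v[k] for k in range(len(v))), lam * v[j])
```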
This task is particularly tricky in the geometric case where ρ exists and is closed, in which situation we must also annihilate the metric effects of all iterates ρ^n.§.§ The path functions L_u and L_.Throughout this paper, and following standard terminology in the literature of relative train track maps, a (finite) path in a graph G is by definition a locally injective continuous function σ [0,1] → G. We often restrict further by requiring that the endpoints σ(0),σ(1) be vertices in which case σ is a concatenation of edges without backtracking. A more general concatenation of edges in which locally injectivity may fail, hence backtracking may occur, will be called an “edge path”.Generalizing the eigenlength function l_ of Notations <ref> ItemCTEigen, a path function on a graph is simply a function l(·) which assigns, to each finite path σ having endpoints at vertices, a number l(σ) ∈ [0,), subject to two metric-like properties: * l(·) is symmetric, meaning that l(σ)=l(σ̅). * l(·) assigns length zero to any trivial path. We do not require that l(σ) > 0 for all nontrivial paths σ. We also do not require additivity: the value of l(·) on a path σ need not equal the sum of its values over the 1-edge subpaths of σ. We also do not require any version of the triangle inequality, but see Lemma <ref> and the preceding discussion regarding coarse triangle inequalities for the path functions of most interest to us.Henceforth in this section we adopt adopt Notations <ref>.The “big L” path functions L_u, L_ are defined in terms of auxiliary “little l” path functions l_u, l_ by omitting certain terms. The function l_ is already defined in Notations <ref> ItemCTEigen. Consider a path σ in G with edge path decomposition σ = E_1 … E_K. Define l_u(σ) to be the number of terms E_k contained in H_u. Now define L_u(σ) and L_(σ) by summing l_u(E_k) and l_(E_k) (respectively) only over those terms E_k which do not occur in any ρ or ρ̅ subpath of σ. Note that for each edge E ⊂ G the “little l” eigenvector equation l_(f(E)) = λl_(E) implies: “Big L” eigenvector equation:L_(f(E)) = λL_(E) for all edges E ⊂ G. To see why this holds, if E ⊂ G_u-1 both sides are zero. For E ⊂ H_u this follows from the fact that f(E) is H_u-legal and hence has no ρ or ρ̅ subpath.The following “quasicomparability” result is an obvious consequence of the fact that the Perron-Frobenius eigenvector of the transition matrix M_u has positive entries:There is a constant K = K_<ref>(f) ≥ 1 such that for all edges E of H_u we have 1/K L_(E) ≤ L_u(E) ≤ K · L_(E). The next lemma says in part that ρ and ρ̅ subpaths never overlap.For any path σ in G with endpoints, if any, at vertices, there is a unique decomposition σ = …σ_i… σ_j… with the following properties: * Each term σ_i is an edge or a copy of ρ or ρ̅.* Every subpath of σ which is a copy of ρ or ρ̅ is a term in the decomposition. Note that we do not assume σ is finite in this statement. Leaving the proof of Lemma <ref> for the end of Section <ref>, we give applications. 
For a path σ in G with endpoints at vertices, let K_ρ(σ) denote the number of ρ or ρ̅ subpaths in the Lemma <ref> decomposition of σ.For any finite path σ with endpoints at vertices, letting its Lemma <ref> decomposition be σ = σ_1 … σ_B, we haveL_u(σ) = l_u(σ) − K_ρ(σ) · l_u(ρ) in general, = l_u(σ) if ρ does not exist,and similarly L_PF(σ) = l_PF(σ) − K_ρ(σ) · l_PF(ρ) in general, = l_PF(σ) if ρ does not exist. Notational convention when ρ does not exist:Motivated by Corollary <ref>, we extend the meaning of the notations l_u(ρ) and l_PF(ρ) to the case that ρ does not exist by definingl_u(ρ) = l_PF(ρ) = 0 For any finite path σ in G with endpoints at vertices, there is a unique decomposition σ = μ_0 ν_1 μ_1 … ν_A μ_A with the following properties: * If ρ does not exist then A = 0 and σ = μ_0.* If ρ exists then each ν_a is an iterate of ρ or ρ̅ (the iteration exponent equals 1 if ρ is not closed), and each μ_a contains no ρ or ρ̅ subpath.*If ρ exists, and if 1 ≤ a < a+1 ≤ A−1, then at least one of the subpaths μ_a, μ_{a+1} contains an edge of H_u; if in addition ρ is closed then each μ_a subpath contains an edge of H_u. Note that in the context of Corollary <ref> the subpaths μ_a are nondegenerate for 1 ≤ a ≤ A−1, but the subpaths μ_0 and μ_A are allowed to be degenerate. Everything is an immediate consequence of Lemma <ref> except perhaps item ItemEveryOtherHasHu, which follows from the fact that at least one endpoint of ρ is disjoint from G_{u−1} (Notations <ref> ItemCTiNP).The following is an immediate consequence of Lemma <ref> and Corollary <ref>, with Lemma <ref> applied term-by-term for the “Furthermore” part: For any finite path σ with endpoints at vertices, and using the Corollary <ref> decomposition of σ, we haveL_u(σ) = ∑_{a=0}^{A} L_u(μ_a) = ∑_{a=0}^{A} l_u(μ_a), L_PF(σ) = ∑_{a=0}^{A} L_PF(μ_a) = ∑_{a=0}^{A} l_PF(μ_a)Furthermore, letting K = K_<ref>(f) ≥ 1, we have(1/K) · L_PF(σ) ≤ L_u(σ) ≤ K · L_PF(σ) The latter inequality of Corollary <ref> gives us the freedom to switch back and forth between L_PF and L_u, using L_u in more combinatorial situations and L_PF in more geometric situations through much of the rest of the paper.Coarse triangle inequalities for L_u and L_PF. The path functions l_u and l_PF satisfy a version of the triangle inequality: for any two paths γ,δ with endpoints at vertices such that the terminal vertex of γ equals the initial vertex of δ we havel_u[γδ] ≤ l_u(γ) + l_u(δ), l_PF[γδ] ≤ l_PF(γ) + l_PF(δ)where [·] denotes the operation that straightens an edge path rel endpoints to obtain a path. For L_u and L_PF the best we can get are coarse triangle inequalities: There exists a constant C = C_<ref> such that for any finite paths γ,δ in G with endpoints at vertices such that the terminal point of γ coincides with the initial point of δ we haveL_u[γδ] ≤ L_u(γ) + L_u(δ) + C, L_PF[γδ] ≤ L_PF(γ) + L_PF(δ) + CConsider the Lemma <ref> decompositions of γ and δ. We may write γ = γ_1γ_2γ_3, δ = δ_1δ_2δ_3 so that [γδ] = γ_1 γ_2 δ_2 δ_3 and so that the following hold: γ_1 is a concatenation of terms of the Lemma <ref> decomposition of γ, and γ_2 is either degenerate or an initial subpath of a single ρ or ρ̅ term of that decomposition; δ_2 is either degenerate or a terminal subpath of a single ρ or ρ̅ term of the Lemma <ref> decomposition of δ, and δ_3 is a concatenation of terms of that decomposition. It follows thatL_u[γδ] ≤ L_u(γ_1) + l_u(γ_2) + l_u(δ_2) + L_u(δ_3) ≤ L_u(γ) + L_u(δ) + 2 l_u(ρ)and similarly L_PF[γδ] ≤ L_PF(γ) + L_PF(δ) + 2 l_PF(ρ).
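Since copies of ρ and ρ̅ never overlap (Lemma <ref>), a greedy left-to-right scan recovers the decomposition, and hence K_ρ and L_u, combinatorially. Here is a minimal sketch of ours (not from the original), assuming a toy encoding in which edges are nonzero integers, with −e the reversal of e, a path is a list of such integers, and ρ is given as an edge word.

```python
def decompose(path, rho):
    """Terms of the decomposition: single edges and copies of rho / rho-bar.
    Greedy scanning is valid because such copies never overlap."""
    rho_bar = [-e for e in reversed(rho)]
    terms, i = [], 0
    while i < len(path):
        if path[i:i + len(rho)] == rho:
            terms.append(("nielsen", rho)); i += len(rho)
        elif path[i:i + len(rho_bar)] == rho_bar:
            terms.append(("nielsen", rho_bar)); i += len(rho_bar)
        else:
            terms.append(("edge", [path[i]])); i += 1
    return terms

def K_rho(path, rho):
    return sum(1 for kind, _ in decompose(path, rho) if kind == "nielsen")

def L_u(path, rho, is_Hu):
    # count H_u edges occurring outside all rho / rho-bar subpaths
    return sum(1 for kind, seg in decompose(path, rho)
               for e in seg if kind == "edge" and is_Hu(abs(e)))

# Toy check of the Corollary: L_u = l_u - K_rho * l_u(rho).
rho = [1, 2]                      # hypothetical closed Nielsen path in H_u
is_Hu = lambda e: e in (1, 2, 3)  # hypothetical H_u edge labels
sigma = [3, 1, 2, 3, -2, -1]      # = E3 . rho . E3 . rho-bar
l_u = sum(1 for e in sigma if is_Hu(abs(e)))
assert L_u(sigma, rho, is_Hu) == l_u - K_rho(sigma, rho) * 2  # l_u(rho) = 2
```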
Quasi-isometry properties of f. The next lemma describes quasi-isometry properties of the relative train track map f: G → G with respect to the path functions L_u and L_PF.There exist constants D = D_<ref> > 1, E = E_<ref> > 0 such that for any finite path γ in G with endpoints at vertices, the following hold:L_u(f_#(γ)) ≤ D · L_u(γ) + E and L_u(γ) ≤ D · L_u(f_#(γ)) + E L_PF(f_#(γ)) ≤ D · L_PF(γ) + E and L_PF(γ) ≤ D · L_PF(f_#(γ)) + E Once D,E are found satisfying ItemQiU, by applying Corollary <ref> we find D,E satisfying ItemQiPF, and by maximizing we obtain D,E satisfying both. If ρ exists let P be its endpoint set, a single point if H_u is geometric and two points otherwise; and if ρ does not exist let P = ∅. Each point of P is fixed by f. By elementary homotopy theory (see e.g. Lemma <ref> ItemGPHomInvExists in Section <ref>), the map of pairs f: (G,P) → (G,P) has a homotopy inverse f̅: (G,P) → (G,P) in the category of topological pairs. By homotoping f̅ rel vertices, we may assume that f̅ takes vertices to vertices and edges to (possibly degenerate) paths. If ρ exists then, using that ρ = f_#(ρ), and applying f̅_# to both sides, we obtain f̅_#(ρ) = f̅_#(f_#(ρ)) = ρ. We prove the following general fact: if g: (G,P) → (G,P) is a homotopy equivalence of pairs which takes vertices to vertices and takes edges to paths, and if g_#(ρ) = ρ assuming ρ exists, then there exist D ≥ 1, E ≥ 0 such that for each path δ with endpoints at vertices we haveL_u(g_#(δ)) ≤ D · L_u(δ) + EThis obviously suffices for the first inequality of ItemQiU taking g = f and δ = γ. It also suffices for the second inequality of ItemQiU taking g = f̅ and δ = γ, because in that case we have g_#(ρ) = f̅_#(ρ) = f̅_#(f_#(ρ)) = ρ.To prove ItemQiG, consider the Corollary <ref> decomposition γ = μ_0 ν_1 μ_1 … ν_A μ_A. Applying Corollary <ref> we haveL_u(γ) = l_u(μ_0) + ⋯ + l_u(μ_A)We also have, using g_#(ν_a) = ν_a,g_#(γ) = [g_#(μ_0) ν_1 g_#(μ_1) … ν_A g_#(μ_A)]By inductive application of Lemma <ref> combined with the fact that each L_u(ν_a) = 0, it follows thatL_u(g_#(γ)) ≤ L_u(g_#(μ_0)) + ⋯ + L_u(g_#(μ_A)) + 2A · C_<ref> ≤ l_u(g_#(μ_0)) + ⋯ + l_u(g_#(μ_A)) + 2A · C_<ref> ≤ D′ (l_u(μ_0) + ⋯ + l_u(μ_A)) + 2A · C_<ref>where D′ is the maximum number of H_u edges crossed by g(E) for any edge E ⊂ G. If ρ does not exist then A = 0 and we are done. If ρ exists then, by Corollary <ref> ItemEveryOtherHasHu, if 1 ≤ a ≤ a+1 ≤ A−1 then at least one of l_u(μ_a), l_u(μ_{a+1}) is ≥ 1. It follows that A ≤ 2(l_u(μ_0) + ⋯ + l_u(μ_A)) + 3 and thereforeL_u(g_#(γ)) ≤ (D′ + 4 C_<ref>) L_u(γ) + 6 C_<ref> Proof of Lemma <ref>. The lemma is equivalent to saying that two distinct ρ or ρ̅ subpaths of σ cannot overlap in an edge. Lifting to the universal cover G̃ this is equivalent to saying that for any path σ̃ ⊂ G̃, if μ and μ′ are subpaths of σ̃ each of which projects to either ρ or ρ̅, and if μ and μ′ have at least one edge in common, then μ = μ′.As a first case, assume that μ and μ′ both project to ρ or both project to ρ̅; up to reversal we may assume the former. Consider the decomposition ρ = αβ where α,β are legal and the turn (α̅, β) is illegal and contained in H_u. There are induced decompositions μ = αβ and μ′ = α′β′. If the intersection μ ∩ μ′ contains the height u illegal turn of μ or of μ′ then μ = μ′, so assume that it does not. After interchanging μ and μ′ we may assume that μ ∩ μ′ ⊂ β. Projecting down to ρ we see that β = β_1 β_2 and α = α_1 α_2, where β_2 = α_1 is the projection of μ ∩ μ′.
This implies that the initial directions of α̅_1 and of β_2 (which are respectively equal to the terminal and initial directions of ρ) are fixed, distinct, and in H_u, and so for all k ≥ 0 the initial directions of f^k_#(α̅_1) and of f^k_#(β_2) are also fixed, distinct, and in H_u. Using the eigenlength function l_PF (Notations <ref> ItemCTEigen) we have l_PF(α) = l_PF(β) and l_PF(α_1) = l_PF(β_2), and so l_PF(α_2) = l_PF(β_1). Thus when the path f^k_#(α_1) f^k_#(α_2) f^k_#(β_1) f^k_#(β_2) is tightened to form f^k_#(ρ) = ρ for large k, the subpaths f^k_#(α_2) and f^k_#(β_1) cancel each other out, and so the concatenation f^k_#(α_1) f^k_#(β_2) tightens to ρ. But this contradicts that the two terms of the concatenation are u-legal and that the turn {f^k_#(α̅_1), f^k_#(β_2)} taken at the concatenation point is also u-legal, as seen above.The remaining case is that the orientations on the projections of μ and μ′ do not agree. In this case there is either an initial or terminal subpath of ρ that is its own inverse, which is impossible. §.§ Flaring of the path functions L_u and L_PFIn this section we continue to adopt Notations <ref>. For each path function l on G and each η ≥ 0 we define a relation β ∼ γ on paths β,γ in G with endpoints at vertices: this relation means that there exist paths α,ω with endpoints at vertices such that γ = [αβω] and such that l(α), l(ω) ≤ η. Note that this relation is symmetric because β = [α̅γω̅]. When we need to emphasize the dependence of the relation on l and η we will write more formally β ∼^η_l γ. A path function l on G is said to satisfy the flaring condition with respect to f: G → G if for each μ > 1, η ≥ 0 there exist integers R ≥ 1 and A ≥ 0 such that for any sequence of paths β_{−R}, β_{−R+1}, …, β_0, …, β_{R−1}, β_R in G with endpoints at vertices, the flaring inequality μ · l(β_0) ≤ max{ l(β_{−R}), l(β_R) } holds if the following two properties hold:Threshold property: l(β_0) ≥ A;Pseudo-orbit property: β_r ∼^η_l f_#(β_{r−1}) for each −R < r ≤ R.Remark. These two properties are translations into our present context of two requirements in the original treatment of flaring by Bestvina and Feighn <cit.>: the threshold property corresponds to the requirement of large “girth”; and the pseudo-orbit property corresponds to the “ρ-thin” requirement.The path functions L_u and L_PF each satisfy the flaring condition with respect to f: G → G, with constants R = R_<ref>(μ,η) and A = A_<ref>(μ,η). Much of the work is to prove the following, which is a version of Proposition <ref> in the special situation of Definition <ref> where l = L_u and η = 0. Note that η = 0 implies that β_r = f_#(β_{r−1}), and so the choice of γ = β_{−R} determines the whole sequence up to β_R.For any ν > 1 there exist positive integers N ≥ 1 and A = A_<ref> ≥ 0 so that if γ is a finite path in G with endpoints at vertices and if L_u(f^N_#(γ)) ≥ A then ν · L_u(f^N_#(γ)) ≤ max{ L_u(γ), L_u(f^{2N}_#(γ)) }The proof of this special flaring condition, which takes up Sections <ref>–<ref>, is similar in spirit to <cit.>, and is based on two special cases: a “negative flaring” result, Lemma <ref> in Section <ref>; and a “positive flaring” result, Lemma <ref> estimate3 in Section <ref>. These two special cases are united in Section <ref>, using the uniform splitting property given in Lemma 3.1 of <cit.>, in order to prove the special flaring condition.Before embarking on all that, we apply Lemma <ref> to prove Proposition <ref>. Proof of Proposition <ref>. Applying Corollary <ref> it is easy to see that L_u satisfies a flaring condition if and only if L_PF satisfies a flaring condition.
Thus it suffices to assume that the special flaring condition for L_u holds, and to use it to prove the general flaring condition for L_u. The idea of the proof is standard: the exponential growth given by Proposition <ref> swamps the constant error represented by the relation ∼^η_{L_u}, which we will denote in shorthand as ∼. The hard work is to carefully keep track of various constants and other notations.Fixing μ > 1 and η ≥ 0, consider the relation ∼ given formally as ∼^η_{L_u}. Fix an integer R ≥ 1 whose value is to be determined — once an application of Lemma <ref> has been arranged, R will be set equal to the constant N of that lemma.Consider a sequence of paths β_{−R}, β_{−R+1}, …, β_0, …, β_{R−1}, β_R in G with endpoints at vertices, such that we haveβ_r ∼ f_#(β_{r−1}) for −R < r ≤ R Choose a vertex V ∈ G which is f-periodic, say f^K(V) = V for some integer K ≥ 1. We may assume that β_0 has endpoints at V: if not then there are paths α′_0, ω′_0 of uniformly bounded length such that β′_0 = [α′_0 β_0 ω′_0] has endpoints at V, and replacing β_0 with β′_0 reduces the flaring property as stated to the flaring property with uniform changes to the constants μ, η, and A.We choose several paths with endpoints at vertices, denoted α_r, ω_r, γ_r, α′_r, ω′_r, as follows. First, using that f_#(β_{r−1}) ∼ β_r, there exist α_r, ω_r so that L_u(α_r), L_u(ω_r) ≤ η and β_r = [α_r f_#(β_{r−1}) ω_r], and hence f_#(β_{r−1}) = [α̅_r β_r ω̅_r]. Next, anticipating an application of Lemma <ref>, choose γ_{−R} so that f^R_#(γ_{−R}) = β_0, and then for −R ≤ r ≤ R let γ_r = f^{r+R}_#(γ_{−R}), hence γ_0 = β_0. Finally, choose α′_r, ω′_r so that γ_r = [α′_r β_r ω′_r] and hence β_r = [α̅′_r γ_r ω̅′_r] (in particular α′_0, ω′_0 are trivial paths). We also require α′_r, ω′_r to be chosen so that if r > −R then we havef_#(β_{r−1}) = f_#([α̅′_{r−1} γ_{r−1} ω̅′_{r−1}]) = [f_#(α̅′_{r−1}) γ_r f_#(ω̅′_{r−1})] = [f_#(α̅′_{r−1}) α′_r β_r ω′_r f_#(ω̅′_{r−1})] = [ [f_#(α̅′_{r−1}) α′_r]_{= α̅_r} β_r [ω′_r f_#(ω̅′_{r−1})]_{= ω̅_r} ]from which we record the following identities:(*) α̅_r = [f_#(α̅′_{r−1}) α′_r], ω̅_r = [ω′_r f_#(ω̅′_{r−1})]To see why these choices of γ_r, α′_r, ω′_r are possible, we work with a lift to the universal cover f̃: G̃ → G̃. First choose any lift β̃_{−R}. By induction, for −R < r ≤ R the lifts α̃_r, ω̃_r, β̃_r are determined by the equation β̃_r = [α̃_r f̃_#(β̃_{r−1}) ω̃_r]. Using f-periodicity of V, the f^R-preimages of the initial and terminal vertices of β̃_0 contain vertices of G̃ amongst which we choose the initial and terminal vertices of γ̃_{−R}, respectively; it follows that f̃^R_#(γ̃_{−R}) = β̃_0. Then define γ̃_r = f̃^{r+R}_#(γ̃_{−R}), define α̃′_r to be the path from the initial vertex of γ̃_r to the initial vertex of β̃_r, and define ω̃′_r to be the path from the terminal vertex of β̃_r to the terminal vertex of γ̃_r. Projecting down from G̃ to G we obtain the paths γ_r, α′_r, ω′_r. The identities (*) follow from the evident fact that the paths α̃′_r, α̃_r may be concatenated to form a path α̃′_r α̃_r, that the two paths α̃′_r α̃_r and f̃_#(α̃′_{r−1}) have the same initial endpoint as f̃_#(γ̃_{r−1}) = γ̃_r, and that those two paths have the same terminal endpoint as the initial endpoint of f̃_#(β̃_{r−1}); and similarly for the ω's.We need new upper bounds on the quantities L_u(α′_r) and L_u(ω′_r), which represent the difference between L_u(β_r) and L_u(γ_r). These bounds will be expressed in terms of the known upper bound η on L_u(α_r) and L_u(ω_r), and will be derived by applying Lemmas <ref> and <ref> inductively, starting from L_u(α′_0) = L_u(ω′_0) = 0 and applying (*) in the induction step.
These new upper bounds will then be used to derive an expression for the threshold constant A_<ref>, which will be chosen so large that the differences L_u(α′_r), L_u(ω′_r) become insignificant.The new upper bounds have the form L_u(α′_r), L_u(ω′_r) ≤ F_r(C,D,E,η) for −R ≤ r ≤ Rwhere for each r the expression F_r(C,D,E,η) represents a certain polynomial with integer coefficients in the variables C,D,E,η, and where we substitute C = C_<ref>, D = D_<ref>, E = E_<ref> The proofs of these inequalities are almost identical for α′ and for ω′; we carry out the proof for the latter. To start the induction, since γ_0 = β_0 the path ω′_0 degenerates to a point and so L_u(ω′_0) = 0 ≡ F_0(C,D,E,η). Inducting in the forward direction on the interval 1 ≤ r ≤ R, and using from (*) thatω′_r = [ω̅_r f_#(ω′_{r−1})]we have L_u(ω′_r) ≤ L_u(ω_r) + L_u(f_#(ω′_{r−1})) + C ≤ η + (D · F_{r−1}(C,D,E,η) + E) + C ≡ F_r(C,D,E,η)Inducting in the backward direction on the interval −R ≤ r ≤ −1, and using from (*) that f_#(ω′_r) = [ω_{r+1} ω′_{r+1}]we haveL_u(ω′_r) ≤ D · L_u(f_#(ω′_r)) + E ≤ D · (L_u(ω_{r+1}) + L_u(ω′_{r+1}) + C) + E ≤ D (η + F_{r+1}(C,D,E,η) + C) + E ≡ F_r(C,D,E,η) To summarize, we have provedL_u(α′_{−R}), L_u(ω′_{−R}) ≤ F_{−R}(C,D,E,η), L_u(α′_R), L_u(ω′_R) ≤ F_R(C,D,E,η)From this it follows thatL_u(γ_{−R}) ≤ L_u(α′_{−R}) + L_u(β_{−R}) + L_u(ω′_{−R}) + 2C ≤ L_u(β_{−R}) + 2F_{−R}(C,D,E,η) + 2C and similarly L_u(γ_R) ≤ L_u(β_R) + 2F_R(C,D,E,η) + 2C Let M = 2 · max{F_{−R}(C,D,E,η), F_R(C,D,E,η)} + 2C, which we use below in setting the value of the threshold constant A_<ref>.Now apply Lemma <ref>, the special flaring condition for L_u, with the constant ν = 2μ−1 > 1, to obtain integers N ≥ 1 and A_<ref>. Setting R = N, from the threshold requirement L_u(β_0) ≥ A_<ref> it follows that ν · L_u(β_0) = ν · L_u(γ_0) ≤ max{L_u(γ_{−R}), L_u(γ_R)} ≤ max{L_u(β_{−R}), L_u(β_R)} + MWith the additional threshold requirement L_u(β_0) ≥ 2M/(ν−1) we have:ν · L_u(β_0) ≤ max{L_u(β_{−R}), L_u(β_R)} + ((ν−1)/2) · L_u(β_0), hence μ · L_u(β_0) = ((ν+1)/2) · L_u(β_0) ≤ max{L_u(β_{−R}), L_u(β_R)}Thus we have proved the (general) flaring condition for L_u given any μ > 1, η ≥ 0, using R_<ref> = N and A_<ref> = max{A_<ref>, 2M/(ν−1)}. §.§ Negative flaring of L_u We continue to adopt Notations <ref>. In the context of Lemma <ref> on special flaring for L_u, the next lemma establishes flaring in the “negative direction”. For any path σ ⊂ G, define l^-_u(σ) to be the maximum of l_u(τ) over all paths τ in G such that τ is a subpath both of σ and of some leaf ℓ of Λ^- realized in G. In this definition we may always assume that ℓ is a generic leaf of Λ^-, because every finite subpath of every leaf is a subpath of every generic leaf. Notice that if σ is already a subpath of a leaf of Λ^- then l^-_u(σ) = l_u(σ). There exists L_<ref> ≥ 1 such that for each L ≥ L_<ref> and each a > 0 there exists an integer N ≥ 1, so that the following holds: for each n ≥ N, for each finite subpath τ of a generic leaf ℓ of Λ^- realized in G such that τ has endpoints at vertices and such that l^-_u(τ) = l_u(τ) = L, and for each finite path σ in G, if f^n_#(σ) = τ then l^-_u(σ) ≥ aL. Intuitively, at least in the nongeometric case (see the proof of Proposition 6.0.8, page 609), the point of this lemma is that in any leaf of Λ^-, the illegal turns have a uniform density with respect to the path function l_u, and those turns die off at an exponential rate under iteration of f_#.
For the proof, a height u illegal turn in a path σ is said to be out of play if it is the illegal turn in some ρ or ρ̅ subpath of σ; otherwise that turn is in play.The generic leaf ℓ of Λ^-, as realized in G, is birecurrent, is not periodic, and is not weakly attracted to Λ^+ under iteration of f_#. It follows that ℓ does not split as a concatenation of u-legal paths and copies of ρ and ρ̅. By applying Lemma <ref> it follows that ℓ has an illegal turn that is in play. In addition, ℓ is quasiperiodic with respect to edges of H_u <cit.>: for any integer L > 0 there exists an integer L′ > 0 such that for any finite subpaths α, β of ℓ, if l_u(α) = L and l_u(β) ≥ L′ then β contains a subpath which is a copy of α. It follows that there exists an integer L_<ref> > 0 so that if τ is a subpath of ℓ such that l_u(τ) ≥ L_<ref>, then τ has at least three height u illegal turns that are in play. Arguing by contradiction, if the lemma fails using the value of L_<ref> just given then there exist L ≥ L_<ref>, a > 0, positive integers n_i → +∞, finite paths τ_i in G with endpoints at vertices, and paths σ_i in G, such that τ_i is a subpath of ℓ, and l_u(τ_i) = L, and f^{n_i}_#(σ_i) = τ_i, and l^-_u(σ_i) < aL. We derive contradictions in separate cases. Case 1: l_u(σ_i) has an upper bound B. Decompose σ_i as σ_i = ε^-_i η_i ε^+_i where ε^±_i are each either partial edges or trivial, and where η_i is a path with endpoints at vertices. There exists a positive integer d depending only on B such that for each i the path f^d_#(η_i) has a splitting into terms each of which is either a single edge in H_u, a copy of ρ or ρ̅, or a subpath of G_{u−1}, and therefore each height u illegal turn in the interior of f^d_#(η_i) is out of play (see Lemma 4.25; also see Lemma 1.53). For each i such that n_i ≥ d the paths f^{n_i}_#(ε^±_i) are each u-legal, and each height u illegal turn of f^{n_i}_#(η_i) = f^{n_i−d}_#(f^d_#(η_i)) is out of play. Since τ_i is obtained from f^{n_i}_#(ε^-_i) f^{n_i}_#(η_i) f^{n_i}_#(ε^+_i) by tightening, at most two illegal turns of τ_i are in play, a contradiction to our choice of L_<ref>. Case 2: l_u(σ_i) has no upper bound. Consider a line M in G which is a weak limit of a subsequence σ_{i_m} such that M crosses at least one edge of H_u. We apply to each such M the following weak attraction results. If H_u is non-geometric then there are two options for M: either the closure of M contains Λ^-; or M is weakly attracted to Λ^+ (see Lemma 2.18). If H_u is geometric — and so a height u closed indivisible Nielsen path ρ exists — then there is a third option, namely that M is a bi-infinite iterate of ρ or ρ̅ (see Lemma 2.19). Since no σ_i contains a subpath of a leaf of Λ^- that crosses aL edges of H_u, neither does M. This shows that the closure of M does not contain Λ^-. If M were weakly attracted to Λ^+ then for any K > 0 there would exist l > 0 such that f^l_#(M), and hence f^l_#(σ_{i_m}) for all sufficiently large m, contains a u-legal subpath that crosses 2K+1 edges of H_u. By Lemma 4.2.2 we can choose K so that f^l_#(σ_{i_m}) splits at the endpoints of the middle H_u edge of that subpath, and hence the number of edges of H_u crossed by τ_{i_m} = f^{n_{i_m}−l}_#(f^l_#(σ_{i_m})) goes to infinity with m, contradicting that l_u(τ_i) = L.We have shown that each of the first two options leads to a contradiction for any choice of M as above. This concludes the proof if H_u is nongeometric.
It remains to show that if H_u is geometric then the third option can also be avoided by careful choice of M, and hence the desired contradiction is achieved. That is, we show that there exists a weak limit M of a subsequence of σ_i such that M contains at least one edge of H_u and M is not a bi-infinite iterate of the closed path ρ or ρ̅. This may be done by setting up an application of Lemma 1.11 of <cit.>, but it is just as simple to give a direct proof. Lift σ_i to the universal cover G̃ of G and write it as an edge path σ̃_i = E_i1 E_i2 … E_iJ_i ⊂ G̃; the first and last terms are allowed to be partial edges. Let b equal twice the number of edges in ρ. Given j ∈ {1+b,…,J_i-b}, we say that E_ij is well covered if E_i,j-b, E_i,j+b are full edges and there is a periodic line ρ̃_ij ⊂ G̃ that projects to ρ or to ρ̅ and that contains E_i,j-b … E_ij … E_i,j+b as a subpath. Since the intersection of distinct periodic lines cannot contain two fundamental domains of both lines, ρ̃_ij is unique if it exists. Moreover, if both E_ij and E_i,j+1 are well covered then ρ̃_ij = ρ̃_i,j+1. It follows that if E_ij is well covered then we can inductively move forward and backward past other well covered edges of σ̃_i, all in the same lift of ρ, until either encountering an edge that is not well covered, or encountering initial and terminal subsegments of σ̃_i of uniform length. After passing to a subsequence, one of the following is therefore satisfied: *There exists a sequence of integers K_i such that 1 < K_i < J_i, and K_i → ∞, and J_i - K_i → ∞, and such that E_iK_i ⊂ H_u is not well covered. *σ_i = α_i ρ^p_i β_i where the number of edges crossed by α_i and β_i is bounded independently of i and p_i → ∞. If subcase good weak limit holds then the existence of a weak limit that crosses an edge of H_u and is not a bi-infinite iterate of ρ or ρ̅ follows immediately. If subcase just rho holds, τ_i is obtained from f^n_i_#(α_i) ρ^p_i f^n_i_#(β_i) by tightening. Since the number of illegal turns of height u in the first and last terms is uniformly bounded, the number of edges that are cancelled during the tightening is uniformly bounded, and it follows that τ_i contains ρ^q_i as a subpath where |q_i| → ∞, a contradiction to the fact that ρ^∞ is not a leaf of Λ^-. §.§ Positive flaring and other properties of L_u In this section we continue to adopt Notations <ref>. In the context of Lemma <ref> on Special Flaring for L_u, item estimate3 of Lemma <ref> establishes flaring in the “positive direction”. Lemma <ref> also includes several useful metrical/dynamical properties of L_u, which will be used in Section <ref> where we tie together the negative and positive flaring results to prove Lemma <ref>. The lemma and its applications make use of the path map f_## and its properties as laid out in <cit.> Section 1.1.6, particularly <cit.> Lemma 1.6 which we refer to as the “f_## Lemma”. Roughly speaking, the path f_##(σ) is defined for any finite path σ in G as follows: for any finite path τ containing σ as a subpath, the straightened image f_#(τ) contains a subpath of f_#(σ) that is obtained by deleting initial and terminal subpaths of f_#(σ) that are no longer than the bounded cancellation constant of f; the path f_##(σ) is the longest subpath of f_#(σ) which survives this deletion operation for all choices of τ.
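For orientation, here is a toy illustration of the difference between f_# and f_##; the map is hypothetical, chosen only to exhibit cancellation, and is unrelated to the map f of Notations <ref>. On the rose with two edges a,b consider the homotopy equivalence given by f(a) = ab, f(b) = b. Then f_#(a) = ab; but for the reduced extension τ = a b̄ we have f_#(τ) = [ab · b̄] = a, so the terminal edge b of f_#(a) does not survive for all choices of τ. On the other hand, since none of the paths f(a) = ab, f(b) = b, f(b̄) = b̄ ends with ā, no reduced extension on the left cancels the initial edge of f_#(a); hence f_##(a) = a, a proper subpath of f_#(a) = ab.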
The following conditions hold: *For each path σ with endpoints at vertices and any decomposition into subpaths σ = σ_0 σ_1 … σ_K with endpoints at vertices we have ∑_0^K L_u(σ_k) - K l_u(ρ) ≤ L_u(σ) ≤ ∑_0^K L_u(σ_k) and similarly with L_PF and l_PF in place of L_u and l_u respectively. *For all positive integers d there exist B = B_<ref>(d) > 0 and b = b_<ref>(d) > 0 so that for all finite paths σ with endpoints at vertices, if L_u(σ) ≥ B then L_u(f^d_##(σ)) ≥ b L_u(σ). *There exist constants A > 0 and κ ≥ 1 so that for all subpaths τ of leaves of Λ^- in G, if l_u(τ) ≥ A then l_u(τ) ≤ L_u(τ) ≤ κ l_u(τ) *There exists a positive integer A = A_<ref> and a constant 0 < R = R_<ref> < 1, so that if a path α ⊂ G splits as a concatenation of u-legal paths and copies of ρ or ρ̅, and if L_u(α) ≥ A, then for all m ≥ 1 we have L_u(f^m_##(α)) ≥ R λ^m/2 L_u(α) where λ is the expansion factor for f and Λ^+. Remark: The notation f^m_## is disambiguated by requiring that the exponent binds more tightly than the ##-operator, hence f^m_## = (f^m)_## — this is how items estimate1 and estimate3 are applied in what follows. Note that this makes the statements of estimate1 and estimate3 weaker than if f^m_## were interpreted as (f_##)^m, because (f_##)^m(α) is a subpath of (f^m)_##(α). We prove Item ItemAlmostAdditive for L_u when K=1; the proof for general K follows by an easy induction, and the statement for L_PF is proved in the exact same manner. If there are non-trivial decompositions σ_0 = σ'_0 σ”_0 and σ_1 = σ'_1 σ”_1 such that σ”_0 σ'_1 is a copy of ρ or ρ̅ then H_u-edges in σ”_0 σ'_1 contribute to L_u(σ_0) + L_u(σ_1) but not to L_u(σ), and all other H_u-edges contribute to L_u(σ_0) + L_u(σ_1) if and only if they contribute to L_u(σ). In this case L_u(σ) = L_u(σ_0) + L_u(σ_1) - l_u(ρ). If there are no such non-trivial decompositions then concatenating the decompositions of σ_0 and σ_1 given by Lemma <ref> produces the decomposition of σ given by Lemma <ref>, and L_u(σ) = L_u(σ_0) + L_u(σ_1). This completes the proof of ItemAlmostAdditive. We next prove estimate1. Fix the integer d ≥ 1. The path f^d_##(σ) is obtained from f^d_#(σ) by removing initial and terminal segments that cross a number of edges that is bounded above, independently of σ, by the bounded cancellation constant of f^d. It follows that L_u(f^d_##(σ)) - L_u(f^d_#(σ)) is bounded above independently of σ, so it suffices to find B, b > 0 depending only on d so that if L_u(σ) > B then L_u(f^d_#(σ)) > b L_u(σ). Applying Lemma <ref> ItemQiU we obtain D > 1, E > 0 so that L_u(γ) ≤ D L_u(f_#(γ)) + E for all finite paths γ with endpoints at vertices, from which it follows by induction that L_u(f^d_#(γ)) ≥ (1/D^d) L_u(γ) - E(1/D + ⋯ + 1/D^d) ≥ (1/D^d) L_u(γ) - E/(D-1) and so if L_u(γ) ≥ 2D^d E/(D-1) then L_u(f^d_#(γ)) ≥ (1/D^d) L_u(γ) - (1/(2D^d)) L_u(γ) = (1/(2D^d)) L_u(γ) The first inequality of estimate2 follows immediately from Corollary <ref>, for any subpath τ of a leaf of Λ^-. To prepare for proving the second inequality, given a positive integer C let Σ_C be the set of paths in G that do not contain a subpath of the form ρ^ϵ C where ϵ = ±1. Since ρ has an endpoint that is not contained in G_u-1 (<cit.> Fact 1.43), each maximal subpath of σ of the form ρ^k or ρ̅^k that is neither initial nor terminal in σ is adjacent in σ to an H_u-edge that contributes to L_u(σ). Applying Corollary <ref> it follows that (*) For any fixed C, amongst those paths σ ∈ Σ_C for which L_u(σ) > 0, the ratio l_u(σ) / L_u(σ) has a positive lower bound that is independent of σ.
For proving estimate2, the key observation is that there exists a positive integer C so that each subpath τ of Λ^- is contained in Σ_C — this is equivalent to saying that ρ^∞ is not a weak limit of lines in Λ^-, which is equivalent to saying that ρ^∞ is not a leaf of Λ^-, which follows from Lemma 3.1.15 of <cit.> and the fact that Λ^- ≠ ρ^∞. Item estimate2 therefore follows by combining this observation with (*). For proving estimate3, we focus primarily on an analogue for L_PF, connecting it with the L_u version stated in estimate3 by applying Corollary <ref>. From the assumption on a splitting of α we have L_PF(f^m_#(α)) = λ^m L_PF(α) We shall show how to replace f^m_# by f^m_##, at the expense of replacing λ by its square root, and of requiring L_PF(α) to exceed some threshold constant. To be precise, we have: Claim: There exists A' ≥ 0 such that if L_PF(α) ≥ A' then for all m ≥ 0 we have L_PF(f^m_##(α)) ≥ λ^m/2 L_PF(α) This suffices to prove estimate3, because if L_u(α) ≥ K_<ref>(f) · A' = A then from Corollary <ref> it follows that L_PF(α) ≥ A', from which using the Claim we obtain L_PF(f^m_##(α)) ≥ λ^m/2 L_PF(α), and then by two more applications of Corollary <ref> we obtain L_u(f^m_##(α)) ≥ (1/K_<ref>(f)) L_PF(f^m_##(α)) ≥ (1/K_<ref>(f)) λ^m/2 L_PF(α) ≥ (1/(K_<ref>(f))^2) λ^m/2 L_u(α) To prove the claim, the case m=0 is evident, so suppose by induction that L_PF(f^m-1_##(α)) ≥ λ^(m-1)/2 L_PF(α) Since f^m-1_##(α) is a subpath of f^m-1_#(α), and since the latter splits into terms each of which is an edge of H_u, a copy of ρ or ρ̅, or a path in G_u-1, it follows that f^m-1_##(α) may be deconcatenated in the form f^m-1_##(α) = ζ α̂ ω such that α̂ splits into terms exactly as above, and such that either ζ, ω are both trivial, or ρ exists and ζ, ω are both proper subpaths of ρ or ρ̅; it follows that l_PF(ζ), l_PF(ω) ≤ l_PF(ρ). Applying item ItemAlmostAdditive it follows that L_PF(f^m-1_##(α)) ≤ L_PF(α̂) + 2 l_PF(ρ), and hence L_PF(α̂) ≥ λ^(m-1)/2 L_PF(α) - 2 l_PF(ρ) Using the splitting of α̂ we obtain L_PF(f_#(α̂)) = λ L_PF(α̂) ≥ λ^(m+1)/2 L_PF(α) - 2λ l_PF(ρ) The path f_##(α̂) is obtained from f_#(α̂) by truncating initial and terminal segments no longer than the bounded cancellation constant of f, and since this is a finite number of paths their L_PF-values have a finite upper bound C_2, so by applying item ItemAlmostAdditive it follows that L_PF(f_##(α̂)) ≥ λ^(m+1)/2 L_PF(α) - 2λ l_PF(ρ) - 2C_2 Now we apply the f_## Lemma. Since α̂ is a subpath of f^m-1_##(α) it follows that f_##(α̂) is a subpath of f_##(f^m-1_##(α)) (<cit.> Lemma 1.6 (3)), which is a subpath of f^m_##(α) (<cit.> Lemma 1.6 (4)). Thus we have f^m_##(α) = η f_##(α̂) θ for some paths η, θ, and hence by item ItemAlmostAdditive we have L_PF(f^m_##(α)) ≥ λ^(m+1)/2 L_PF(α) - 2λ l_PF(ρ) - 2C_2 - 2 l_PF(ρ) Setting C_1 = 2(λ+1) l_PF(ρ) + 2C_2, the right hand side equals λ^(m+1)/2 L_PF(α) - C_1. To complete the induction we show that with appropriate threshold constant the quantity on the right is ≥ λ^m/2 L_PF(α), equivalently λ^(m+1)/2 L_PF(α) ≥ λ^m/2 L_PF(α) + C_1, equivalently λ^1/2 ≥ 1 + C_1/(λ^m/2 L_PF(α)) Since λ > 1 it suffices to show λ^1/2 ≥ 1 + C_1/L_PF(α), equivalently L_PF(α) ≥ C_1/(λ^1/2 - 1) Taking the threshold constant to be A' = C_1/(λ^1/2 - 1) the induction is complete. §.§ Proof of Lemma <ref>: the Special Flaring Condition for L_u Once this proof is complete, the proof of the General Flaring Condition stated in Proposition <ref> will also be complete, as shown in Section <ref>. If the Special Flaring Condition for L_u fails then there exists a sequence n_k → ∞, and there exist paths γ_k ⊂ G with endpoints at vertices, such that L_u(f^n_k_#(γ_k)) → ∞ as k → ∞, and such that (*) ν L_u(f^n_k_#(γ_k)) ≥ max{L_u(γ_k), L_u(f^2n_k_#(γ_k))} Assuming this, we argue to a contradiction. Consider the integer L_<ref> ≥ 1 satisfying the conclusions of Lemma <ref>.
By Lemma <ref> estimate2 there is an integer L_2 so that if μ is a subpath of Λ^- that crosses ≥ L_2 edges of H_u then L_u(μ) ≥ 1. Let L_1 = max{L_<ref>, L_2}. Choose an integer d ≥ 1 satisfying the conclusion of <cit.>, the “uniform splitting lemma”, with respect to the constant L_1. This conclusion says that for any finite path σ in G with endpoints at vertices of H_u, if ℓ^-_u(σ) < L_1 then the path f^d_#(σ) splits into terms each of which is u-legal or a copy of ρ or ρ̅. In this context we shall refer to d as the “uniform splitting exponent”. Let {μ_ik} be a maximal collection of subpaths of f^n_k_#(γ_k) with endpoints at vertices that have disjoint interiors, that are subpaths of Λ^-, and that cross ≥ L_1 edges of H_u. The complementary subpaths {ν_jk} of f^n_k_#(γ_k) all satisfy ℓ^-_u(ν_jk) < L_1 and all have endpoints at vertices as well. Our first claim is that (i) lim_k→∞ ∑_i L_u(μ_ik) / L_u(f^n_k_#(γ_k)) = 0 If not, then after passing to a subsequence we may assume that ∑_i L_u(μ_ik) > ϵ_1 L_u(f^n_k_#(γ_k)) for some ϵ_1 > 0 and all k. Choose subpaths σ_ik of γ_k with disjoint interiors such that f^n_k_#(σ_ik) = μ_ik. Since l_u(μ_ik) ≥ L_1 ≥ L_<ref>, and since n_k → +∞, we may apply “Negative Flaring”, Lemma <ref>, to obtain subpaths σ'_ik of σ_ik which have endpoints at vertices and which are also subpaths of Λ^- such that for all i we have lim_k→∞ l_u(σ'_ik)/l_u(μ_ik) = ∞ The ratios l_u(σ'_ik)/L_u(σ'_ik) and l_u(μ_ik)/L_u(μ_ik) have positive upper and lower bounds independent of i and k: the upper bound of 1 follows from Corollary <ref>; and the lower bound comes from Lemma <ref> estimate2. For all i we therefore obtain lim_k→∞ L_u(σ'_ik)/L_u(μ_ik) = ∞ Using this limit, and using that L_u(μ_ik) ≥ 1, it follows that for all sufficiently large k we have L_u(γ_k) ≥ ∑_i (L_u(σ'_ik) - 2 l_u(ρ)) > (ν/ϵ_1) ∑_i L_u(μ_ik) > ν L_u(f^n_k_#(γ_k)) where the first inequality follows by applying Lemma <ref> ItemAlmostAdditive to the subdivision of γ_k into the paths σ'_ik and their complementary subpaths. This contradicts (*), verifying the first claim. Our second claim is that for any constant A (on which constraints will be placed below when this claim is applied) we have (ii) lim_k→∞ ∑_{L_u(ν_jk) ≥ A} L_u(ν_jk) / L_u(f^n_k_#(γ_k)) = 1 To see why, let I_k be the number of μ_ik subpaths of f^n_k_#(γ_k), let J_k be the number of ν_jk subpaths, and let K_k = I_k + J_k, and so J_k ≤ I_k + 1 and K_k ≤ 2I_k + 1. By Lemma <ref> ItemAlmostAdditive applied to f^n_k_#(γ_k) we obtain L_u(f^n_k_#(γ_k)) ≤ ∑_j L_u(ν_jk) + ∑_i L_u(μ_ik) ≤ L_u(f^n_k_#(γ_k)) + K_k l_u(ρ) Dividing by L_u(f^n_k_#(γ_k)) and introducing the abbreviations δ_k = ∑_i L_u(μ_ik)/L_u(f^n_k_#(γ_k)), ϵ_k = ∑_{L_u(ν_jk) < A} L_u(ν_jk)/L_u(f^n_k_#(γ_k)), ζ_k = K_k l_u(ρ)/L_u(f^n_k_#(γ_k)) we obtain 1 ≤ ∑_{L_u(ν_jk) ≥ A} L_u(ν_jk)/L_u(f^n_k_#(γ_k)) + δ_k + ϵ_k ≤ 1 + ζ_k From (i) it follows that δ_k → 0 as k → +∞. Multiplying the inequality K_k ≤ 2I_k + 1 by l_u(ρ)/L_u(f^n_k_#(γ_k)), and using that L_u(μ_ik) ≥ 1, it follows that 0 ≤ ζ_k ≤ 2 l_u(ρ) δ_k + l_u(ρ)/L_u(f^n_k_#(γ_k)) and so ζ_k → 0 as k → +∞. Multiplying the inequality J_k ≤ I_k + 1 by A/L_u(f^n_k_#(γ_k)), it follows that 0 ≤ ϵ_k ≤ A δ_k + A/L_u(f^n_k_#(γ_k)) and so ϵ_k → 0 as k → +∞. This proves the second claim. In what follows we will be applying Lemma <ref> estimate3, and we will use the constants A_<ref>, R_<ref> involved in that statement. By definition of L_1 and by the choice of the uniform splitting exponent d, since ℓ^-_u(ν_jk) < L_1 it follows that f^d_#(ν_jk) splits into terms each of which is either u-legal or a copy of ρ or ρ̅. Consider the constants B = B_<ref>(d) > 0 and b = b_<ref>(d) > 0 of Lemma <ref>.
Constraining A ≥ B, we may combine (ii) with Lemma <ref> estimate1 to obtain (iii) ∑_{L_u(ν_jk) ≥ A} L_u(f^d_##(ν_jk)) / L_u(f^n_k_#(γ_k)) ≥ b · ∑_{L_u(ν_jk) ≥ A} L_u(ν_jk) / L_u(f^n_k_#(γ_k)) > 3b/4 for sufficiently large values of k. By construction, the paths {ν_jk} occur in order of the subscript j as subpaths of f^n_k_#(γ_k) with disjoint interiors. By applying the f_## Lemma using f^n_k, it follows that the paths f^n_k_##(ν_jk) occur in order as subpaths of the path f^2n_k_#(γ_k) with disjoint interiors (<cit.> Lemma 1.6 (5)). It then follows that f^n_k-d_## f^d_##(ν_jk) is a subpath of f^n_k_##(ν_jk) (<cit.> Lemma 1.6 (4)). Putting these together we see that the paths f^n_k-d_## f^d_##(ν_jk) occur in order as subpaths of the path f^2n_k_#(γ_k) with disjoint interiors. These subpaths being J_k in number, together with their complementary subpaths one has a decomposition of f^2n_k_#(γ_k) into at most 2J_k + 1 paths. Ignoring the complementary subpaths, Lemma <ref> ItemAlmostAdditive therefore implies L_u(f^2n_k_#(γ_k)) ≥ ∑_j L_u(f^n_k-d_## f^d_##(ν_jk)) - 2J_k l_u(ρ) ≥ ∑_{L_u(ν_jk) ≥ A} L_u(f^n_k-d_## f^d_##(ν_jk)) - 2J_k l_u(ρ) ≥ ∑_{L_u(ν_jk) ≥ A} L_u(f^n_k-d_## f^d_##(ν_jk)) - l_u(ρ) L_u(f^n_k_#(γ_k)) where the last inequality follows for sufficiently large k by applying (i) and the inequality L_u(μ_ik) ≥ 1 to conclude that L_u(f^n_k_#(γ_k)) ≥ 2∑_i L_u(μ_ik) + 2 ≥ 2I_k + 2 ≥ 2J_k For sufficiently large k we therefore have (iv) L_u(f^2n_k_#(γ_k)) / L_u(f^n_k_#(γ_k)) > ∑_{L_u(ν_jk) ≥ A} L_u(f^n_k-d_## f^d_##(ν_jk)) / L_u(f^n_k_#(γ_k)) - l_u(ρ) We have already constrained A so that A ≥ B, and we now put one more constraint on A. Applying Lemma <ref> estimate1 to f^d it follows that if L_u(ν_jk) ≥ B then L_u(f^d_##(ν_jk)) ≥ b L_u(ν_jk), and so if L_u(ν_jk) ≥ A = max{B, (1/b) A_<ref>} it follows that L_u(f^d_##(ν_jk)) ≥ A_<ref>. This allows us to apply “Positive Flaring”, Lemma <ref> estimate3, with the conclusion that, letting R = R_<ref>, (v) L_u(f^n_k-d_## f^d_##(ν_jk)) ≥ R λ^(n_k - d)/2 L_u(f^d_##(ν_jk)) as long as L_u(ν_jk) ≥ A and as long as k is sufficiently large. Combining (iv) and (v), if k is sufficiently large we obtain L_u(f^2n_k_#(γ_k)) / L_u(f^n_k_#(γ_k)) > R λ^(n_k - d)/2 ∑_{L_u(ν_jk) ≥ A} L_u(f^d_##(ν_jk)) / L_u(f^n_k_#(γ_k)) - l_u(ρ) and combining this with (iii) we obtain L_u(f^2n_k_#(γ_k)) / L_u(f^n_k_#(γ_k)) > (3bR/4) λ^(n_k - d)/2 - l_u(ρ) > ν where the second inequality holds for sufficiently large k. This gives us the final contradiction to (*), which completes the proof of Lemma <ref>. §.§ Appendix: The graph homotopy principle Lemma <ref> in this section was used earlier in the proof of Lemma <ref>, and it will be used later in the construction of the “homotopy semigroup action” in Section <ref>. It is an elementary result in homotopy theory; for precision we state the result in the language of category theory, and we give the complete proof. Define the graph-point category, a subcategory of the standard homotopy category of pairs, as follows. The objects are pairs (G,P) where G is a finite graph and P ⊂ G is a finite subset. Each morphism, denoted [f]: (G,P) → (H,Q), is the homotopy class rel P of a homotopy equivalence f: G → H that restricts to a bijection f: P → Q. Define the fundamental group functor from the graph-point category to the category of indexed groups as follows.
To each pair (G,P) we associate the indexed family of groups π_1(G,P) = (π_1(G,p))_p ∈ P, and to each morphism [f]: (G,P) → (H,Q) we associate the indexed family of group isomorphisms [f]_*: π_1(G,P) → π_1(H,Q) = (f_*: π_1(G,p) → π_1(H,f(p)))_p ∈ P The category and functor axioms implicit in this discussion are easily checked. Let Aut(G,P) denote the group of automorphisms of (G,P) in the graph-point category. Let Aut_0(G,P) ⊂ Aut(G,P), which we call the pure automorphism group of (G,P) in the graph-point category, denote the finite index subgroup consisting of those [f] ∈ Aut(G,P) such that f: P → P is the identity. *The graph-point category is a groupoid: every morphism [f]: (G,P) → (H,Q) has an inverse morphism [g]: (H,Q) → (G,P), meaning that g ∘ f: (G,P) → (G,P) is homotopic to the identity rel P and f ∘ g: (H,Q) → (H,Q) is homotopic to the identity rel Q. *The fundamental group functor is faithful: for any pair of morphisms [f], [f']: (G,P) → (H,Q), we have [f] = [f'] if and only if the restricted maps f, f': P → Q are equal and the induced isomorphisms f_*, f'_*: π_1(G,p) → π_1(H,f(p)) are equal for all p ∈ P. *Two morphisms [f]: (G,P) → (H,Q) and [g]: (H,Q) → (G,P) are inverses if and only if their restrictions f: P → Q and g: Q → P are inverses and the isomorphisms f_*: π_1(G,p) → π_1(H,f(p)) and g_*: π_1(H,f(p)) → π_1(G,p) are inverses. *The fundamental group functor restricts to an injective homomorphism defined on the pure automorphism group, Aut_0(G,P) → ⊕_p ∈ P Aut(π_1(G,p)). Once ItemGPHomInvExists and ItemGPInverse are proved, ItemGPFaithful and ItemGPAut follow immediately. To prove ItemGPInverse, the “only if” direction is obvious, and for the “if” direction it suffices to prove this special case: for any self-morphism [f]: (G,P) → (G,P), if f fixes each p ∈ P and induces the identity on π_1(G,p) for each p ∈ P, then f is homotopic to the identity rel P. For the proof, we know that f is freely homotopic to the identity map id_G, because G is an Eilenberg–MacLane space and f induces the identity on its fundamental group. Choose a homotopy h: G × [0,1] → G from f to id_G. We claim that for each p ∈ P the closed path γ_p(t) = h(p,t) (0 ≤ t ≤ 1) is trivial in π_1(G,p). Applying this claim, we alter the homotopy h as follows: using the homotopy extension property, for each p ∈ P we may homotope the map h: G × [0,1] → G, keeping it stationary on G × 0, stationary on G × 1, and stationary outside of U_p × [0,1] for an arbitrarily small neighborhood U_p of p, to arrange that h(p × [0,1]) = p; note that for a homotopy X × [0,1] → Y to be “stationary on A ⊂ X” means that the restricted map {a} × [0,1] → Y is constant for each a ∈ A. Doing this independently for each p ∈ P, we obtain a homotopy rel P from f to the identity and we are done, subject to the claim. To prove the claim, consider a closed path δ: [0,1] → G based at p, representing an arbitrary element [δ] ∈ π_1(G,p). We obtain a path homotopy H_t: [0,1] → G from the path H_0 = f ∘ δ to the concatenated path H_1 = γ_p * δ * γ̅_p as follows: H_t(s) = γ_p(3s) if 0 ≤ s ≤ t/3; H_t(s) = h(δ((3s-t)/(3-2t)), t) if t/3 ≤ s ≤ 1 - t/3; H_t(s) = γ_p(3-3s) if 1 - t/3 ≤ s ≤ 1 Since for all [δ] ∈ π_1(G,p) we have [δ] = [f ∘ δ] = [γ_p] · [δ] · [γ_p]^-1, it follows that [γ_p] is in the center of π_1(G,p) ≈ F_n, hence is trivial, completing the proof of ItemGPInverse. To prove ItemGPHomInvExists, start with any homotopy inverse g': H → G of f.
We may assume that the maps f: P → Q and g': Q → P are inverses, because by the homotopy extension property we may homotope g' to be stationary outside of a small neighborhood of Q so that for each p ∈ P the track of the homotopy on the point g'(f(p)) moves it back to p. Since g' ∘ f: G → G fixes each point in P and is homotopic to the identity, for each p ∈ P the induced map (g' ∘ f)_*: π_1(G,p) → π_1(G,p) is an inner automorphism represented by some closed curve γ_p based at p, and so for each element of π_1(G,p) having the form [δ] for some closed curve δ based at p we have (g' ∘ f)_*[δ] = [γ_p * δ * γ̅_p]. Let h: (G,P) → (G,P) be the morphism obtained from the identity by a homotopy that is stationary outside a small neighborhood of P and such that the track of the homotopy on each p ∈ P is the closed curve γ̅_p; again we are applying the homotopy extension property. Letting g = h ∘ g' we may apply ItemGPInverse to conclude that the morphism [f] is an isomorphism with inverse [g]. Remark. Note that the proof of ItemGPInverse depends heavily on the fact that the center of F_n is trivial. The proof breaks down, for instance, if G is replaced by a torus; in fact the analogue of Lemma <ref>, where a graph is replaced by a torus and P is a two-point subset of the torus, is false. On the other hand the analogue for any K(π,1) space whose fundamental group has trivial center is true. § FLARING IN T^* AND HYPERBOLICITY OF S. Throughout this section we continue with Notations <ref> ItemCTEG–ItemCTSplitSimple regarding an outer automorphism ϕ ∈ Out(F_n) and a relative train track representative f: G → G having penultimate filtration element G_u-1 and top stratum H_u. The main result of this section is the construction, carried out in Section <ref>, of the Gromov hyperbolic space S that is used in later sections for proving the multi-edge case of the Hyperbolic Action Theorem. The construction of S is based on results found in Sections <ref>–<ref>, in particular Proposition <ref> which is a re-interpretation of the flaring result of Proposition <ref> expressed in the context of a certain natural free splitting. The statement of Proposition <ref> is found in Section <ref>, after preliminary work carried out in Sections <ref>–<ref>. Once some of the definitions have been formulated, the reader may wish to pause to consider the “Motivational Remarks” found in Section <ref> following Lemma <ref>. §.§ The free splitting F_n ↷ T and its Nielsen lines. We begin with a description of the free splitting F_n ↷ T associated to the marked graph G and its subgraph G_u-1, together with a description of some features of T associated to height u Nielsen paths in G. Let 𝒜 denote the free factor system corresponding to the subgraph G_u-1, having the form 𝒜 = {[A_1],…,[A_K]} where G_u-1 has noncontractible components C_1,…,C_K and A_k ⊂ F_n is in the conjugacy class of the image of the injection π_1(C_k) → π_1(G) ≈ F_n (that injection determined up to inner automorphism of F_n by appropriate choices of base points and paths between them). Let F_n ↷ T denote the free splitting corresponding to the subgraph G_u-1 ⊂ G. What this means is that, starting from the deck action F_n ↷ G̃ associated to the universal covering map G̃ ↦ G, the tree T is obtained from G̃ by collapsing to a point each component of the total lift G̃_u-1 of G_u-1. Let p: G̃ → T denote the F_n-equivariant collapse map. Since G̃_u-1 is F_n-invariant, the action F_n ↷ G̃ induces via p an action F_n ↷ T which is evidently a free splitting, i.e. a minimal action on a simplicial tree with trivial edge stabilizers.
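Example. For orientation only, here is a toy instance of the collapse construction (it is chosen to illustrate the definitions, not to satisfy the standing assumptions of Notations <ref>): take G to be the rank 2 rose with edges a,b and let the loop a play the role of G_u-1. Then 𝒜 = {[⟨a⟩]}, each component of the total lift of the loop a is the axis of a conjugate of a in G̃, and collapsing those components yields the Bass–Serre tree T of the splitting of F_2 = ⟨a,b⟩ as an HNN extension of ⟨a⟩ over the trivial group: the quotient graph of groups T/F_2 has one vertex, with vertex group ⟨a⟩, and one loop edge, with trivial edge group and stable letter b.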
Note that the set of conjugacy classes of nontrivial vertex stabilizers of this action is precisely the free factor system 𝒜 — indeed the stabilizer of a vertex v ∈ T equals the stabilizer of p^-1(v) ⊂ G̃, which is nontrivial if and only if p^-1(v) is a component of G̃_u-1 covering some noncontractible component C_k ⊂ G_u-1, in which case the stabilizer of v is conjugate to A_k. Fixing a standard isomorphism between the deck transformation group of the universal covering map G̃ ↦ G and the group F_n ≈ π_1(G), recall from covering space theory that the lifts G̃ → G̃ of f: G → G are in bijective correspondence with the automorphisms Φ ∈ Aut(F_n) that represent ϕ, where the bijection Φ ↔ f^Φ is given by the following relation: Φ-twisted equivariance in G̃: f^Φ(γ · x) = Φ(γ) · f^Φ(x) for all γ ∈ F_n, x ∈ G̃ Since f preserves G_u-1, any of its lifts f^Φ preserves G̃_u-1, and hence f^Φ induces a map f^Φ_T: T → T. The Φ-twisted equivariance property in G̃ implies a corresponding property in T: Φ-twisted equivariance in T: f^Φ_T(γ · x) = Φ(γ) · f^Φ_T(x) for all γ ∈ F_n, x ∈ T When the automorphism Φ is understood or is not important to the discussion, we will often drop it from the notations f^Φ and f^Φ_T, writing simply f and f_T instead. In the next section we will impose an additional metric constraint on f_T; see under the heading “Stretch properties of f_T”. In the geometric and parageometric cases, where ρ, ρ̅ exist and are the unique inverse pair of Nielsen paths of height u, a Nielsen path in G̃ is any lift of ρ or ρ̅, and a Nielsen path in T is any projection to T of any Nielsen path in G̃. In the geometric case, where ρ is closed and has distinct initial and terminal directions, a Nielsen line in G̃ is a line which projects to a bi-infinite iterate of ρ, and a Nielsen line in T is the projection of a Nielsen line in G̃. The Nielsen set 𝒩, a collection of subsets of T, is defined as follows. In the ageometric case, 𝒩 = ∅; in the geometric case, 𝒩 is the set of Nielsen lines in T; and in the parageometric case, 𝒩 is the set of Nielsen paths in T. Furthermore, for each N ∈ 𝒩 its basepoint set or base lattice, denoted Z(N), is defined as follows. In the geometric case, the Nielsen line N has a unique decomposition as a bi-infinite concatenation of Nielsen paths, and Z(N) is defined to be the set of concatenation points. In the parageometric case, where N is just a single Nielsen path, Z(N) is its endpoint pair. Note that in the geometric case, basepoint sets of distinct Nielsen lines are disjoint — for all N ≠ N' ∈ 𝒩 we have Z(N) ∩ Z(N') = ∅. This follows from two facts about the base point p of ρ. First, each point p̃ ∈ G̃ lying over p is an endpoint of exactly two Nielsen paths in G̃, both contained in the same Nielsen line in G̃. Second, p is not contained in G_u-1 <cit.>, so no lift p̃ is contained in G̃_u-1, and the projection map G̃ ↦ T is locally injective on the complement of G̃_u-1. Consider a finite path σ_G̃ in G̃ with endpoints at vertices, with projections σ_G in G and σ_T in T. If σ_G = ρ^i or ρ̅^i for some integer i ≠ 0 — in the parageometric case where ρ is not closed, i must equal 1 — then we say that σ_T is a ρ^*-path in T, the superscript ^* representing the exponent. Note that ρ^* paths in T are precisely the paths of the form QQ' for which there exists N ∈ 𝒩 such that Q, Q' ∈ Z(N). §.§ Path functions on G and T. In a tree, a finite path with initial and terminal endpoints V,W is determined by those endpoints and is denoted VW. Each of l_u, l_PF, L_u, L_PF is a path function on G that vanishes on paths in G_u-1 (see Section <ref>).
Each lifts via the universal covering map G̃ → G to an F_n-invariant path function on G̃ that vanishes on paths in G̃_u-1, and hence projects via the collapse map p: G̃ → T to a well-defined and F_n-invariant path function on T. We re-use the notations l_u, l_PF, L_u, L_PF for these path functions on G̃ and on T; the context should help to avoid ambiguities. For any path β_G̃ in G̃ with endpoints at vertices, letting β_G be its projection to G and β_T its projection to T, it follows from the definitions that l(β_G) = l(β_G̃) = l(β_T) for any of l = l_u, l_PF, L_u, or L_PF. Remark. The point of “well-definedness” of, say, L_PF on T is that for any vertices V,W ∈ T, if V_1,V_2 ∈ G̃ both map to V and if W_1,W_2 both map to W then each of the paths V_1V_2 and W_1W_2 is either degenerate or is contained in G̃_u-1, and hence L_PF(V_1W_1) = L_PF(V_2W_2) = L_PF(VW). Associated to the path functions l_u(·), l_PF(·), L_u(·), L_PF(·) on G̃ and on T, we have, respectively, F_n-equivariant functions d_u(·,·), d_PF(·,·), D_u(·,·), D_PF(·,·) on pairs of vertices in G̃ and in T. For example D_PF(V,W) = L_PF(VW) In particular, tracing back through the definitions one sees that for vertices V,W ∈ T their distance d_u(V,W) simply counts the number of edges of T in the path VW. For each of l = l_u, l_PF, L_u, L_PF the quantity l(E) is bounded away from zero as E ⊂ T varies over edges; this is an immediate consequence of the fact that as e varies over the finite set of edges of H_u, the finite set of positive numbers l(e) has a positive minimum. We record this fact as: There exists η = η_<ref> > 0 such that for each edge E ⊂ T with endpoints V, W ∈ T, the values of d_u, d_PF, D_u, D_PF on V,W are all ≥ η. Each of the path functions l_u and l_PF is additive, meaning that its value on an edge path is the sum of its values on individual edges. It follows that each of d_u and d_PF is a path metric on T. Furthermore, d_u and d_PF are quasicomparable to each other, because H_u has only finitely many edges, hence T has only finitely many edge orbits under the action of F_n, and the values of d_u and d_PF on the endpoint pair of each edge are positive (Lemma <ref>). The “metrics” D_u and D_PF are also quasicomparable to each other, by application of Corollary <ref>. However, D_u and D_PF are not actual metrics because they may violate the triangle inequality. Nonetheless D_u and D_PF do satisfy a coarse version of the triangle inequality, as a consequence of Lemma <ref>, and we will refer to this by saying that D_u and D_PF are coarse metrics on the vertex set of T. We record these observations as: d_u and d_PF are quasicomparable, F_n-equivariant metrics on vertices of T. Also, D_u and D_PF are quasicomparable, F_n-equivariant coarse metrics on vertices in T. Motivational remarks. The metrics d_u and d_PF may fail to satisfy the desired flaring condition: if H_u is geometric then for any Nielsen line N ⊂ T, iteration of f_T = f_T^Φ produces a sequence of Nielsen lines N_k = (f^k_T)_#(N) (k ≥ 1), and furthermore the map f^k_T takes the base lattice Z(N) to the base lattice Z(N_k) preserving path distance. Since the base lattice of a Nielsen line has infinite diameter in any invariant path metric on T, this demonstrates the failure of flaring. In hopes of averting a failure of flaring, we might instead consider using the coarse metrics D_u and D_PF: when two vertices V,W are contained in the base lattice of the same Nielsen line we have the equations D_u(V,W) = D_PF(V,W) = 0, which are precisely designed to correct the failure of flaring.
Although using D_u or D_PF creates its own problem because they are not actually metrics, in Section <ref> we shall solve that problem by using the coning construction often employed in studies of relative hyperbolicity, coning off those paths in T which exhibit nonflaring behavior to obtain a graph T^*. Furthermore, this graph will come equipped with an actual path metric d^* such that the inclusion T ⊂ T^* is a quasi-isometry from D_PF to d^*; see Proposition <ref> in Section <ref>. Next we translate several results on path functions in Section <ref> into the context of the coarse metric D_PF on T: For any vertices V ≠ W ∈ T there is a unique decomposition of the path VW = μ_0 ν_1 μ_1 ⋯ ν_A μ_A such that the following properties hold: *If ρ does not exist then A = 0. *If ρ exists then the ν_a's are precisely all of the maximal ρ^* paths of VW, and so each μ_a contains no ρ^* subpath. *If ρ exists and if 1 ≤ a < a+1 ≤ A-1 then at least one of the subpaths μ_a, μ_a+1 is nondegenerate; in the geometric case, all μ_a-subpaths are nondegenerate. *D_PF(V,W) = l_PF(μ_0) + ⋯ + l_PF(μ_A) Furthermore, given any path γ in T — finite, singly infinite, or bi-infinite — whose endpoints, if any, are at vertices, and assuming that ρ exists, there is a unique decomposition of γ as an alternating concatenation of its maximal ρ^* paths (the ν-subpaths) and paths that contain no ρ^* subpath (the μ-subpaths) such that for any two consecutive μ-subpaths at least one is nondegenerate, all μ-subpaths being nondegenerate in the geometric case. Items ItemAIsZero–ItemOneNotDeg are translations of Corollary <ref>, and item ItemDPFFormula is a translation of Corollary <ref>; the proofs are immediate from those results combined with the definitions. The “Furthermore…” clause is a quick consequence of the following observations that hold for any nested pair of finite subpaths VW ⊂ V'W' in T. First, every ρ^* subpath of VW is a ρ^* subpath of V'W'. Also, every maximal ρ^* subpath of VW whose d_PF distances from V and from W are greater than l_PF(ρ) is a maximal ρ^* subpath of V'W'. Remark on item ItemOneNotDeg. Consider the paths μ_a for 1 ≤ a ≤ A-1. In the geometric case each such μ_a is nondegenerate, and this can be used to improve the constants in some applications of ItemOneNotDeg (underlying nondegeneracy of μ_a is the fact that the base point of the closed Nielsen path ρ is disjoint from G_u-1). In the parageometric case, on the other hand, one of the paths μ_a, μ_a+1 may be degenerate. This happens for μ_a only if, up to orientation reversal, the Nielsen path ρ has initial vertex p ∈ G_u-1 (the terminal vertex is necessarily disjoint from G_u-1, the fact which underlies item ItemOneNotDeg), in which case μ_a is degenerate if and only if ν_a μ_a ν_a+1 lifts to a path in G̃ that projects to a path in G of the form ρ̅ μ ρ where μ is a nondegenerate closed path in G_u-1 based at p. §.§ Constructing T^* by coning off Nielsen axes of T. Embed the tree T into a graph denoted T^*, and extend the action F_n ↷ T to an action F_n ↷ T^*, as follows. Index the Nielsen set as 𝒩 = {N_j}_j ∈ J, letting Z_j be the basepoint set of N_j (Definition <ref>). For each j ∈ J we cone off Z_j by adding a new vertex P_j = P(N_j) and attaching a unique edge P_j Q for each Q ∈ Z_j. The points P_j are called cone points and the edges P_j Q are called cone edges. Since the simplicial action F_n ↷ T takes Nielsen paths to Nielsen paths and hence induces a basepoint preserving permutation of the Nielsen set 𝒩, this action extends uniquely to an action F_n ↷ T^* permuting the cone points and the cone edges.
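Remark. Schematically — this is immediate from the definitions and is recorded only for orientation — in the geometric case each N ∈ 𝒩, with base lattice Z(N) = {Q_k}_k ∈ ℤ enumerated so that consecutive basepoints are the endpoints of a Nielsen path in N, is replaced in T^* by an infinite fan N ∪ ⋃_k ∈ ℤ P(N)Q_k. By Lemma <ref> just below, the stabilizer of N is a conjugate of ⟨ρ⟩; it fixes the cone point P(N), and its generator translates N through one Nielsen path, shifting the index k by one.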
Let V^∞ ⊂ T^* be the set of vertices v ∈ T^* whose stabilizer subgroup Stab(v) is infinite. The formula v ↦ Stab(v) defines an injection from V^∞ to the set of nontrivial subgroups of F_n. A subgroup S ⊂ F_n is equal to Stab(v) for some v ∈ V^∞ if and only if S is conjugate to the fundamental group of some noncontractible component of G_u-1 in π_1(G) ≈ F_n, or H_u is a geometric stratum and S is conjugate to the infinite cyclic subgroup ⟨ρ⟩ ⊂ F_n. This fact may be extracted from Theorem F of <cit.>. But a direct proof is easy; here is a sketch. We have a partition V^∞ = V^∞_u-1 ∐ V^∞_𝒩 where V^∞_u-1 = V^∞ ∩ T and V^∞_𝒩 = {cone points} = {P_j} The collapse map G̃ ↦ T induces an equivariant and hence stabilizer preserving bijection between V^∞_u-1 and the set of components of G̃_u-1 having nontrivial stabilizer. Using covering space theory, the stabilizers of the latter components are precisely those subgroups of F_n conjugate to the fundamental group of some noncontractible component of G_u-1. Furthermore, since distinct components of G̃_u-1 are disjoint subtrees of G̃, the intersections of their stabilizers are trivial, and so the stabilizers are unequal if they are nontrivial. This completes the proof in the case that H_u is nongeometric. If H_u is geometric then we have equivariant and hence stabilizer preserving bijections Ñ_j ↔ N_j ↔ P_j where Ñ_j is the Nielsen line in G̃ mapping to N_j under the collapse map G̃ ↦ T. By definition the Ñ_j are precisely those lines in G̃ that cover the closed Nielsen path ρ. The element of π_1(G) represented by ρ (and denoted by ρ) is root free in π_1(G) because ρ has a unique u-illegal turn (Notations <ref> ItemCTiNP), and so by covering space theory the stabilizers of the lines Ñ_j are precisely the infinite cyclic subgroups in the conjugacy class of the group ⟨ρ⟩ ⊂ π_1(G) ≈ F_n. And as before, two different such lines have distinct stabilizers. The proof is completed by noting that ρ is not conjugate in F_n to the fundamental group of a noncontractible component of G_u-1, because ρ is a circuit not contained in G_u-1 (Notations <ref> ItemCTiNP). We may construct an F_n-equivariant piecewise Riemannian metric on the tree T, denoted ds, such that for vertices V,W ∈ T we have d_PF(V,W) = ∫_VW ds. We may extend ds to an F_n-equivariant piecewise Riemannian metric denoted ds^* on T^* as follows. In the ageometric case there is nothing to do. In the geometric case the group F_n acts freely and transitively on the set of cone edges {P_j Q : j ∈ J, Q ∈ Z(N_j)}; extend ds over a single cone edge P_j Q, then extend it over all other cone edges equivariantly to obtain ds^*; note that the length ∫_P_j Q ds^* is independent of j and Q. In the parageometric case, the group F_n acts freely on the set of cone edges, and there are two orbits of cone edges corresponding to the two endpoints of ρ; we extend ds over a single cone edge in each of the two orbits, then extend equivariantly to obtain ds^*; also, we require that the length ∫_P_j Q ds^* be the same on both orbits of cone edges. Define the cone height to be the length of any cone edge in the metric ds^*. Next let d^*(·,·) be the path metric on T^* obtained by minimizing path lengths: d^*(x,y) equals the minimum of ∫_γ ds^* over all continuous paths γ in T^* having endpoints x,y. The infimum is evidently minimized by some embedded edge path in T^* having endpoints x,y. Note that since T^* is not a tree, embedded edge paths need not be determined by their endpoints. Bypasses in T^*. For any ρ^* path ν = QQ' with Q, Q' ∈ Z(N), we let ν̂ denote the path QP(N) * P(N)Q' in T^*, called the bypass of ν.
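Note the metric effect of the coning, which is immediate from the definitions: letting h denote the cone height, each bypass has ds^*-length 2h, and so any two basepoints Q ≠ Q' ∈ Z(N) of the same N ∈ 𝒩 satisfy d^*(Q,Q') ≤ 2h, no matter how far apart Q and Q' are in the path metrics d_u, d_PF on T. This is the metric counterpart of the equations D_u(Q,Q') = D_PF(Q,Q') = 0 noted in the motivational remarks of Section <ref>.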
Note that a path in the graph T^* is a bypass if and only if it is a two-edge path having a cone point as its midpoint, and furthermore a bypass is completely determined by its endpoints. We thus have a one-to-one correspondence ν ↔ ν̂ between the set of ρ^* paths and the set of bypasses. Extending the map f^Φ_T: T → T to f^Φ_T^*: T^* → T^*. Following Definition <ref>, and choosing Φ ∈ Aut(F_n) representing ϕ, lift f: G → G to G̃ and project to T obtaining Φ-twisted equivariant maps f = f^Φ: G̃ → G̃ and f_T = f^Φ_T: T → T With respect to the inclusion T ⊂ T^* we extend f^Φ_T to a Φ-twisted equivariant map f_T^* = f^Φ_T^*: T^* → T^* as follows. As noted, for convenience we will often suppress Φ from the notation for these maps. The action of f_T on T induces a well-defined action on the Nielsen set 𝒩, and so we can extend f_T over the set of cone points by setting f_T^*(P(N)) = P(f_T(N)) for each N ∈ 𝒩. Furthermore, for each N ∈ 𝒩 the map f_T restricts to a bijection of basepoint sets f_T: Z(N) → Z(f_T(N)), and so for each Q ∈ Z(N) we can extend the endpoint map (P(N),Q) ↦ (P(f_T(N)), f_T(Q)) uniquely to an isometry between cone edges P(N)Q → P(f_T(N))f_T(Q). For each edge E ⊂ T we have the following equation that follows from the “big L” eigenlength equation in Section <ref>: Eigenlength equation in T: ∫_f_T(E) ds = λ ∫_E ds It follows that by equivariantly homotoping f_T relative to endpoints of edges we may arrange that f_T stretches each edge of T by a uniform factor λ. Since its extension f_T^* is an isometry on edges of T^* ∖ T, it follows that f_T^* is λ-Lipschitz. These conditions constitute additional constraints on the maps f_T and f_T^* which we record here: Stretch properties of f_T and f_T^*: * The maps f_T, f_T^* stretch each edge E ⊂ T by a constant factor λ over the path f_T(E) = f_T^*(E). * The map f_T^*: T^* → T^* permutes cone edges, taking each cone edge isometrically to its image. These stretch properties are the first step of Lemma <ref> to follow. We recall here some basic definitions. Given constants k ≥ 1, c ≥ 0, a map of metric spaces f: X → Y is a (k,c)-quasi-isometric embedding if for all p,q ∈ X we have (1/k) d(p,q) - c ≤ d(f(p),f(q)) ≤ k d(p,q) + c If in addition each point of Y has distance ≤ c from some point of f(X) then f is a (k,c)-quasi-isometry. If the domain X is a subinterval of ℝ then we say that f is a (k,c)-quasigeodesic. Sometimes we conflate the constants k,c by setting k=c and using terminology like “k-quasi-isometries” etc. Sometimes we ignore k,c altogether and use terminology like “quasi-isometries” etc. The map f_T^*: T^* → T^* is a quasi-isometry. Let Φ ∈ Aut(F_n) be the representative of ϕ corresponding to f_T^*, and so f_T^* satisfies the Φ-twisted equivariance equation (see Definition <ref>). The map f_T^* is Lipschitz, by the “Stretch properties” noted above. To complete the proof it suffices to show that there is a Lipschitz map f̅_T^*: T^* → T^* such that f_T^* and f̅_T^* are coarse inverses, meaning that each of the composed maps f̅_T^* ∘ f_T^* and f_T^* ∘ f̅_T^* moves each point of T^* a uniformly bounded distance. We construct f̅_T^* by taking advantage of twisted equivariance of f_T^* combined with the fact that the action F_n ↷ T^* has finitely many vertex and edge orbits (this is a kind of “twisted equivariant” version of the Milnor–Svarc lemma). Consider the vertex set V^* of T^* and its partition V^* = V^0 ∐ V^∞ into points whose F_n-stabilizers are trivial and infinite, respectively. Using Φ-twisted equivariance it follows that for any vertex v ∈ V^* we have a subgroup inclusion Φ(Stab(v)) ⊂ Stab(f_T^*(v)). It follows that f_T^*(V^∞) ⊂ V^∞.
Furthermore, that subgroup inclusion is an equation Φ(Stab(v)) = Stab(f_T^*(v)) — this is a consequence of Lemma <ref> combined with the fact that f: G → G restricts to a homotopy equivalence of the union of noncontractible components of G_u-1 and with the fact that f_#(ρ) = ρ. It follows that the restricted map f_T^*: V^∞ → V^∞ is a bijection of the set V^∞. Define the restriction of f̅_T^* to V^∞ to equal the inverse of the restriction f_T^* | V^∞; this map f̅_T^* | V^∞ is automatically Φ^-1-twisted equivariant. Define the restriction of f̅_T^* to V^0 as follows: choose one representative v ∈ V^0 of each orbit of the action F_n ↷ V^0, choose f̅_T^*(v) ∈ V^0 arbitrarily, and extend over all of V^0 by Φ^-1-twisted equivariance. Having defined a Φ^-1-twisted equivariant map f̅_T^*: V^* → V^*, extend f̅_T^* over each edge to stretch distance by a constant factor, and hence we obtain a Φ^-1-twisted equivariant map f̅_T^*: T^* → T^*. Since there are only finitely many orbits of edges, f̅_T^* is Lipschitz. Since f_T^* is Φ-twisted equivariant and f̅_T^* is Φ^-1-twisted equivariant, it follows that the two compositions f_T^* ∘ f̅_T^* and f̅_T^* ∘ f_T^* are equivariant in the ordinary untwisted sense. Each of these compositions therefore moves each point a uniformly bounded distance, hence f_T^* and f̅_T^* are coarse inverses. §.§ Geometry and dynamics on T^*. In this section we prove several propositions regarding T^*, including Proposition <ref> which is our interpretation of flaring in T^*. The proofs will follow after stating all of the propositions. The inclusion of the vertex set of T into the vertex set of T^* is a quasi-isometry from the coarse metric D_PF to the metric d^*: there exist constants K = K_<ref> ≥ 1, C = C_<ref> ≥ 0 such that for all vertices V,W ∈ T we have (1/K) d^*(V,W) - C ≤ D_PF(V,W) ≤ K d^*(V,W) + C Given an end ξ ∈ ∂T, for any vertices V,W ∈ T the intersection of the two rays Vξ, Wξ is a subray of each. It follows that as we hold ξ fixed and let V vary, one of two alternatives holds: each ray Vξ has infinite d^* diameter, in which case we say that ξ is infinitely far in T^*; or each ray Vξ has finite d^* diameter, in which case we say that ξ is finitely far in T^*. Recall (e.g. <cit.>) that a path f: I → T^* defined on an interval I ⊂ ℝ is a reparameterized quasigeodesic if there exists a monotonic, surjective function μ: J → I defined on an interval J ⊂ ℝ such that the composition f ∘ μ: J → T^* is a quasigeodesic. The graph T^* with the metric d^* is Gromov hyperbolic. Each geodesic segment, ray, or line in T is a reparameterized quasigeodesic in T^* with respect to d^*, with uniform quasigeodesic constants.
Furthermore, there is an injection that assigns to each ξ ∈ ∂T which is infinitely far in T^* a point ξ^* ∈ ∂T^*, so that for any two points ξ ≠ η ∈ ∂T both of which are infinitely far in T^*, the line ξη ⊂ T is the unique line in T which is a reparameterized quasigeodesic line in T^* with ideal endpoints ξ^*, η^*. We now state our re-interpretation of the earlier flaring result Proposition <ref> in the setting of T^*. Given η > 0, and given a sequence of vertices V_r ∈ T^* defined for r in some interval of integers a ≤ r ≤ b, we say that this sequence is an η-pseudo-orbit of f_T^* if d^*(f_T^*(V_r), V_r+1) ≤ η for all a ≤ r < r+1 ≤ b. For each μ > 1, η ≥ 0 there exist integers R ≥ 1 and A ≥ 0 such that for any pair of η-pseudo-orbits V_r and W_r of f_T^* defined on the integer interval -R ≤ r ≤ R, if d^*(V_0,W_0) ≥ A then μ · d^*(V_0,W_0) ≤ max{d^*(V_-R,W_-R), d^*(V_R,W_R)} Proof of Proposition <ref>: Quasicomparability in T^*. In the ageometric case, where ρ does not exist, we have T = T^* and D_PF = d^*, and we are done. Henceforth we assume that ρ exists. Letting h denote the cone height, it follows that each bypass has length 2h. Given vertices V,W ∈ T, using Proposition <ref> we obtain a decomposition of the path VW and an accompanying formula for D_PF(V,W): VW = μ_0 ν_1 μ_1 … ν_A-1 μ_A-1 ν_A μ_A and D_PF(V,W) = l_PF(μ_0) + ⋯ + l_PF(μ_A) Each ν_a is a subpath of some element N ∈ 𝒩 of the Nielsen set and the endpoints of ν_a are distinct points in Z(N), and so the corresponding bypass ν̂_a is defined. We thus obtain a path γ̂ in T^* and a length calculation as follows: γ̂ = μ_0 ν̂_1 μ_1 … ν̂_A-1 μ_A-1 ν̂_A μ_A with ∫_γ̂ ds^* = l_PF(μ_0) + ⋯ + l_PF(μ_A) + 2hA Applying Proposition <ref> with its constant η = η_<ref>, and applying Proposition <ref> ItemOneNotDeg, it follows that if 0 ≤ a < a+1 ≤ A then at least one of l_PF(μ_a), l_PF(μ_a+1) is ≥ η, and hence l_PF(μ_0) + ⋯ + l_PF(μ_A) ≥ (A-1)η/2 We therefore have d^*(V,W) ≤ ∫_γ̂ ds^* ≤ l_PF(μ_0) + ⋯ + l_PF(μ_A) + 2h(2(l_PF(μ_0) + ⋯ + l_PF(μ_A))/η + 1) = (1 + 4h/η) D_PF(V,W) + 2h This proves the first inequality using any K ≥ 1 + 4h/η and C = 2h/K. We turn to the opposite inequality. Before starting, we shall normalize the choice of cone height to be h = (1/2) l_PF(ρ), and so each bypass has length l_PF(ρ). Proving Proposition <ref> with h normalized in this fashion implies the proposition for any value of h, because the normalized version of d^* is bi-Lipschitz equivalent to any other version. Given vertices V,W ∈ T, choose an embedded edge path γ^* ⊂ T^* with endpoints V,W satisfying two optimization conditions: (i) ∫_γ^* ds^* is minimal, and so d^*(V,W) = ∫_γ^* ds^*. (ii) Subject to (i), ∫_γ^* ∩ T ds^* = ∫_γ^* ∩ T ds is minimal. There is a unique decomposition of γ^* as an alternating concatenation of subpaths in T and bypasses: γ^* = μ_0 ν̂_1 μ_1 … μ_A-1 ν̂_A μ_A from which we obtain the formula d^*(V,W) = l_PF(μ_0) + ⋯ + l_PF(μ_A) + A l_PF(ρ) We claim that each μ_a has no ρ^*-subpath. Otherwise we can decompose μ_a = μ' ν' μ” where ν' is a Nielsen path in T. In the case that each of μ', μ” is nondegenerate, or that a = 0 and μ” is nondegenerate, or that a = A and μ' is nondegenerate, construct a path γ' from γ^* by replacing ν' with the corresponding bypass ν̂'; from the choice of normalization we have ∫_γ' ds^* = ∫_γ^* ds^* but ∫_γ' ∩ T ds < ∫_γ^* ∩ T ds, a contradiction. The other cases all lead to a path γ' exhibiting the same contradiction, and are described as follows. In the case that a ≥ 1 and μ' is degenerate, replace the subpath ν̂_a ν' of γ^* with the unique bypass having the same endpoints.
And in the case that a ≤ A-1 and μ” is degenerate, replace the subpath ν' ν̂_a+1 with the unique bypass having the same endpoints. And in the last remaining case, where a = A = 0 and μ', μ” are both degenerate, we have ν' = γ^* and we let γ' = ν̂' be the corresponding bypass. Consider the concatenated edge path in T denoted μ_0 ν_1 μ_1 … μ_A-1 ν_A μ_A which is obtained from γ^* by replacing each bypass ν̂_a with the corresponding ρ^* path ν_a ⊂ T. Straightening this concatenation in T produces the path VW to which we may inductively apply the coarse triangle inequality for L_PF given in Lemma <ref>, with the conclusion that D_PF(V,W) = L_PF(VW) ≤ L_PF(μ_0) + L_PF(ν_1) + L_PF(μ_1) + ⋯ + L_PF(μ_A-1) + L_PF(ν_A) + L_PF(μ_A) + (2A-1) C_<ref> Since μ_a has no ρ^± i subpath we have L_PF(μ_a) = l_PF(μ_a), and since L_PF(ρ^± i) = 0 we have L_PF(ν_a) = 0, and therefore D_PF(V,W) ≤ l_PF(μ_0) + ⋯ + l_PF(μ_A) + 2A C_<ref> ≤ l_PF(μ_0) + ⋯ + l_PF(μ_A) + (2C_<ref>/l_PF(ρ)) · A l_PF(ρ) ≤ K d^*(V,W) for any K ≥ max{1, 2C_<ref>/l_PF(ρ)}. This completes the proof of Proposition <ref>. Proof of Proposition <ref>: Hyperbolicity of T^*. If ρ does not exist, i.e. in the ageometric case, since T^* = T is a tree we are done. The parageometric case is similarly easy, but it is also subsumed by the general proof when ρ does exist, for which purpose we will apply a result of Kapovich and Rafi <cit.>. Let T^** be the graph obtained from T by attaching an edge QQ' for each unordered pair Q ≠ Q' ∈ Z(N), for each element N ∈ 𝒩 of the Nielsen set. We put the simplicial metric on T^**, assigning length 1 to each edge. We have a map T^** → T^* extending the inclusion T ⊂ T^*, defined to take each attached edge QQ' ⊂ T^** to the corresponding bypass. This map is evidently a quasi-isometry from the vertices of T^** to the vertices of T^*, and this quasi-isometry commutes with the inclusions of T into T^* and into T^**. The conclusions in the first two sentences of Proposition <ref>, namely hyperbolicity of T^* and the fact that geodesics in T are uniform reparameterized quasigeodesics in T^*, will therefore follow once we demonstrate the same conclusions with T^** in place of T^*. Those conclusions for T^** are identical to the conclusions of <cit.> applied to the inclusion map T ⊂ T^**: the graph T^** is hyperbolic; and each arc VW ⊂ T is uniformly Hausdorff close in T^** to each geodesic in T^** with the same endpoints V,W. So we need only verify that the inclusion map T ⊂ T^** satisfies the hypotheses of <cit.> with respect to the simplicial path metrics on T and T^** that assign length 1 to each edge. One hypothesis <cit.> is that T be hyperbolic, which holds for all path metrics on trees. Another hypothesis is that the inclusion T ⊂ T^** be a Lipschitz graph map, which means that it takes each edge of T to a bounded length edge path of T^**, but this is immediate since the inclusion is an isometry onto its image. The remaining hypotheses of <cit.> are numbered (1), (2), (3), the first two of which are trivially satisfied using that the inclusion map from vertices of T to vertices of T^** is surjective (this is why we use T^** instead of T^*). The last hypothesis (3) says that there exists an integer M > 0 so that for any vertices V ≠ W ∈ T, if V,W are connected by an edge in T^** then the diameter in T^** of the path VW ⊂ T is at most M. The only case that needs attention is when VW is not a single edge in T but V,W are connected by an edge in T^**.
This happens only if VW is a ρ^* subpath of some element N ∈ 𝒩 of the Nielsen set, and so VW is a concatenation of a sequence of Nielsen paths Q_k-1 Q_k in N, where the concatenation points form a consecutive sequence in Z(N) ∩ VW of the form V = Q_0, Q_1, …, Q_K = W Each pair Q_i, Q_j is connected by an edge Q_i Q_j ⊂ T^** (0 ≤ i < j ≤ K). Since each vertex of T along VW is contained in one of the Nielsen paths Q_k-1 Q_k and hence its distance to one of Q_k-1 or Q_k is at most l_u(ρ)/2, it follows that the diameter of VW in T^** is bounded above by M = 1 + l_u(ρ). It remains to prove the “Furthermore” sentence. Given ξ ∈ ∂T, for any vertex V ∈ T the ray Vξ is a reparameterized quasigeodesic in T^*. If ξ is infinitely far in T^* then the reparameterization of Vξ defines a quasigeodesic ray in T^* which therefore limits on a unique ξ^* ∈ ∂T^*. This point ξ^* is well-defined because for any other vertex W ∈ T the intersection of the two rays Vξ, Wξ has finite Hausdorff distance in T^* from each of them, hence the reparameterizations of those two rays limit on the same point in ∂T^*. For any two points ξ ≠ η ∈ ∂T that are both infinitely far in T^*, consider the line ξη. Choose a vertex V ∈ T, and choose sequences x_i in Vξ and y_i in Vη so that in T the sequence x_i limits to ξ and y_i limits to η. It follows that in T^* the sequence x_i limits to ξ^* and y_i limits to η^*. Since d^*(x_i,y_i) → ∞ and since ξη is a reparameterized quasigeodesic, this is only possible if ξ^* ≠ η^*, proving injectivity of the map ξ ↦ ξ^*. To prove the required uniqueness property of ξη, consider any other two points ξ' ≠ η' ∈ ∂T. If one or both of ξ' or η' is finitely far in T^* then the reparameterization of ξ'η' is a quasigeodesic ray or segment, hence has infinite Hausdorff distance in T^* from the quasigeodesic line ξη. If both of ξ', η' are infinitely far in T^*, and if ξη and ξ'η' have finite Hausdorff distance in T^*, then it follows that {ξ^*,η^*} = {ξ'^*,η'^*}, and hence by injectivity we have {ξ,η} = {ξ',η'} and therefore ξη = ξ'η'. Proof of Proposition <ref>: Flaring in T^*. We denote the constants of Proposition <ref> in shorthand as K = K_<ref>, C = C_<ref>. Fix μ > 1 and η ≥ 0. Consider R ≥ 1, to be specified, and a pair of η-pseudo-orbits V_r, W_r for f_T^* defined for -R ≤ r ≤ R. Choose vertices Ṽ_r, W̃_r ∈ G̃ projecting to V_r, W_r respectively. In G̃ denote the path β̃_r = Ṽ_r W̃_r, and let β_r be its projection to G. Also, for -R < r ≤ R denote α̃_r = Ṽ_r f(Ṽ_r-1) and ω̃_r = f(W̃_r-1) W̃_r, which are the unique paths such that β̃_r = [α̃_r f_#(β̃_r-1) ω̃_r]. Let α_r, ω_r be their projections to G. By assumption we have d^*(f_T^*(V_r-1), V_r) ≤ η and d^*(f_T^*(W_r-1), W_r) ≤ η, and applying Proposition <ref> it follows that L_PF(α_r), L_PF(ω_r) ≤ Kη + C ≡ η' and so β_r is obtained from f_#(β_r-1) by attaching at each end a path whose L_PF value is at most η'. Let μ' = 2K^2 μ. By Proposition <ref> the path function L_PF satisfies the flaring condition of Definition <ref> with respect to f, from which using μ' and η' we obtain constants R' ≥ 1, A' ≥ 1. We now specify R = R'. Also let A = max{KA' + C, 2K^2C + 2C/μ} Applying Proposition <ref> it follows that if d^*(V_0,W_0) ≥ A then L_PF(β_0) ≥ A'.
The flaring condition of L_PF therefore applies with the conclusion that μ' L_PF(β_0) ≤ max{L_PF(β_-R), L_PF(β_R)} and so μ d^*(V_0,W_0) ≤ Kμ L_PF(β_0) + KCμ ≤ (Kμ/μ') max{L_PF(β_-R), L_PF(β_R)} + KCμ Applying Proposition <ref> again we have μ d^*(V_0,W_0) ≤ (K^2μ/μ') max{d^*(V_-R,W_-R), d^*(V_R,W_R)} + K^2Cμ + C ≤ (1/2) max{d^*(V_-R,W_-R), d^*(V_R,W_R)} + (1/2)μA ≤ (1/2) max{d^*(V_-R,W_-R), d^*(V_R,W_R)} + (1/2)μ d^*(V_0,W_0) and hence (1/2)μ d^*(V_0,W_0) ≤ (1/2) max{d^*(V_-R,W_-R), d^*(V_R,W_R)} which completes the proof. §.§ Construction of S. We continue to fix the choice of Φ ∈ Aut(F_n) representing ϕ, and we consider the corresponding Φ-twisted equivariant map f_T^* = f^Φ_T^*: T^* → T^*. Our definition of the suspension space S will formally depend on this choice (but see remarks at the end of the section regarding dependence on Φ of the constructions to follow). We define S to be the suspension space of T^*, namely the quotient of T^* × ℤ × [0,1] modulo the gluing identification (x,k,1) ≈ (f_T^*(x), k+1, 0) for each k ∈ ℤ and x ∈ T^*. Let [x,k,r] ∈ S denote the equivalence class of (x,k,r). We have a well-defined and continuous projection map p: S → ℝ given by p[x,k,r] = k+r. For each closed connected subset J ⊂ ℝ we denote S_J = p^-1(J), which we refer to as a slice of S. In the special case of a singleton s ∈ ℝ we refer to S_s = p^-1(s) as a fiber. Each fiber may be regarded as a copy of T^*, in the sense that if k = ⌊s⌋ and r = s-k then we obtain a homeomorphism j_s: T^* → S_s given by j_s(x) = [x,k,r]; in the special case that s is an integer we have j_s(x) = [x,s,0]. We have an action F_n ↷ S which is induced by the action F_n ↷ T^* as follows: for each γ ∈ F_n and each [x,k,r] ∈ S we have γ · [x,k,r] = [Φ^k(γ) · x, k, r] This action is well-defined because, using Φ-twisted equivariance of f_T^*: T^* → T^*, we have γ · [x,k,1] = [Φ^k(γ) · x, k, 1] = [f_T^*(Φ^k(γ) · x), k+1, 0] = [Φ(Φ^k(γ)) · f_T^*(x), k+1, 0] = [Φ^k+1(γ) · f_T^*(x), k+1, 0] = γ · [f_T^*(x), k+1, 0] Note that the homeomorphism j_0: T^* → S_0 is equivariant with respect to the F_n actions. More generally, for each integer k the homeomorphism j_k: T^* → S_k is Φ^-k-twisted equivariant, because j_k(γ · x) = [γ · x, k, 0] = Φ^-k(γ) · [x,k,0] = Φ^-k(γ) · j_k(x) We have a semiflow S × [0,∞) → S, which is partially defined by the formula [x,k,s] · t = [x,k,s+t] (x ∈ T^*, k ∈ ℤ, 0 ≤ s ≤ 1, 0 ≤ t ≤ 1-s) and which uniquely extends to all t ≥ 0 by requiring the semiflow equation (ξ · t) · u = ξ · (t+u) to hold for all ξ ∈ S and t,u ≥ 0. In particular S_s · t = S_s+t for all s ∈ ℝ, t ≥ 0. For each b ∈ ℝ we define the first hitting map h_b: S_(-∞,b] → S_b by letting each ξ ∈ S_(-∞,b] flow forward from S_p(ξ) to S_b along the flow segment ξ · [0, b-p(ξ)], thus obtaining the formula h_b(ξ) = ξ · (b-p(ξ)). This completes Definition <ref>. We define a piecewise Riemannian metric on S. Recall the piecewise Riemannian metric ds^* on T^* (Section <ref>). For each edge E ⊂ T^* and each integer n we define a Riemannian metric d_E on E × n × [0,1] (≈ E × [0,1], not depending on n), in two cases. In the case E ⊂ T, use the metric d_E^2 = λ^2t (ds^*)^2 + dt^2; note that f_T^* stretches the length of E by a constant factor λ, and hence the gluing map (x,n,1) ↦ (f_T^*(x), n+1, 0) takes E × n × 1 isometrically onto f_T^*(E) × (n+1) × 0. In the case where E ⊂ T^* ∖ T — equivalently, we are in the geometric or parageometric case and E is a cone edge — use the metric d_E^2 = (ds^*)^2 + dt^2; note that E' = f_T^*(E) is also a cone edge and that f_T^* maps E isometrically to E', and so once again the gluing map (x,n,1) ↦ (f_T^*(x), n+1, 0) takes E × n × 1 isometrically onto f_T^*(E) × (n+1) × 0.
These metrics on all of the rectangles E × n × [0,1] glue up isometrically along their common boundaries in , as follows. First, for any two edges E,E' ⊂ T^* having a common endpoint x = EE' =EE', the restrictions to [0,1] ≈ x × n × [0,1] of the metric d_E on E × [0,1] ≈ E × n × [0,1] and the metric d_E' on E' × [0,1] ≈ E' × n × [0,1] are both equal to dt. Fixing n and letting E ⊂ T^* vary, this allows us to glue up all of the rectangles E × n × [0,1] to get a well-defined piecewise Riemannian metric on T^* × n × [0,1]. Next, as noted above the gluing map from T^* × n × 1 to T^* × (n+1) × 0 maps each E × n × 1 isometrically onto f_T^*(E) × (n+1) × 0; letting n vary this allows us to glue up T^* × n × [0,1] to get the desired piecewise Riemannian metric on .By minimizing path lengths using the above piecewise Riemannian metric on , we obtain a geodesic metric d_(·,·) on . We may similarly define a geodesic metric d_J(·,·) on any slice _J, by minimizing path length with respect to the restricted piecewise Riemannian metric on _J. In the special case of a fiber _s, letting n = ⌊ s ⌋, for each edge E ⊂ T^* with image edge j_s(E) ⊂_s the metrics d^* on E and d_s in j_s(E) are related so that if E ⊂ T then j_s stretches the metric by a factor of λ^s-n, whereas if E ⊂ T^* ∖ T then j_s preserves the metric. In particular the map j_sT^* →_s is a λ^s-n bilipschitz homeomorphism; it is therefore an isometry if s is an integer, and a λ-bilipschitz homeomorphism in general.This completes Definition <ref>. For use at the very end of the proof of the Hyperbolic Action Theorem, when verifying the WWPD conclusions, we shall need the following metric property of :For any two fibers _s,_t ⊂ and any x ∈_s, y ∈_t we have d_(x,y) ≥s-t. The lemma follows by noting that for each edge E ⊂ T^*, the projection function pE × [0,1] → [0,1] has the property that for each tangent vector v ∈ T(E × [0,1]) we have Dp(v)≥v, using the Riemannian metric d_E on the right hand side of the inequality.The following metric property will be used later to study how the inclusion map of fibers _i can distort distance.For each integer m there exist constants k_m ≥ 1, c_m ≥ 0 such that for each integer a and each s ∈ J = [a,a+m] the inclusion map i _s _J is a k_m,c_m quasi-isometry.We construct a cycle of equivariant Lipschitz maps as follows:_s [r]^i _J [d]^h_a+m _a [u]^h_r _a+m[l]^h̅The inclusion map i is 1-Lipschitz. The equivariant maps h_a+m and h_r are first hitting maps, each of which is λ^m Lipschitz. The map h̅ will be an equivariant coarse inverse to the equivariant map h_a+m_a →_a+m, for the construction of which we consider the commutative diagramT^* [r]^j_a[d]_f^m_T^* _a [d]^h_a+mT^* [r]_j_a+m@/^3pc/@.>[u]^f̅^m_T^* _a+m@/_3pc/@.>[u]_h̅In this diagram the map f^m_T^* is Φ^m-twisted equivariant, and the map h_a+m is (untwisted) equivariant. The top and bottom maps are the instances k=a and k=a+m of the Φ^-k-twisted equivariant isometry j_kT^* →_k, whose inverse j^_k _k → T^* is Φ^k twisted equivariant. By Lemma <ref> the map f_T^* has a Φ^-1-twisted equivariant Lipschitz coarse inverse f̅_T^*. It follows that f^m_T^* has a Φ^-m-twisted equivariant Lipschitz coarse inverse f̅^m_T^* whose Lipschitz and coarse inverse constants depend on m. We may therefore fill in the diagram with the map h̅ = j_a ∘f̅^m_T^*∘ j_a+m^ which makes the diagram commute and which is an untwisted equivariant Lipschitz coarse inverse for h_a+m, with Lipschitz and coarse inverse constants depending on m. 
Going round the cycle from _s to itself and from _J to itself we obtain two equivariant self-maps both of which move points uniformly bounded distances depending on m. The maps i _s →_J and h_r ∘h̅∘ h_a+m_J →_s are therefore Lipschitz coarse inverses and hence quasi-isometries with constants depending on m.Remark. The definition of the suspension spacedepends ostensibly on the choice of representative Φ∈(F_n) of ϕ, but in fact different choices of Φ produces suspension spaces which are F_n-equivariantly isometric, as can easily be checked using the fact that distinct choices of Φ differ by an inner automorphism of F_n.§.§ Proof of hyperbolicity of .We prove thatis a hyperbolic metric space by applying the Mj-Sardar combination theorem <cit.>, a descendant of the Bestvina–Feighn combination theorem <cit.>. The hypotheses of the Mj-Sardar theorem consist of an opening hypothesis and four numbered hypotheses which we must check. The last of those four is the “flaring condition” which we prove by application of Proposition <ref>.Opening hypothesis of <cit.>: A metric bundle. This hypothesis says thatis a metric bundle overwith respect to the map p →. We must therefore verify that the map p → satisfies the definition of a metric bundle as given in<cit.>. First, p must be Lipschitz map, and each fiber _s must be a geodesic metric space with respect to the path metric induced from , each of which we have already verified. Also, item 2) of <cit.>, when translated into our setting, requires that for each interval [a,b] ⊂ such that b-a ≤ 1, and for each s ∈ [a,b] and ξ∈_s, there exists a path inof uniformly bounded length that passes through ξ and has endpoints on _a and _b respectively. To obtain this path choose η∈_a so that η· (s-a)=ξ and take the path t ↦η· t defined on a ≤ t ≤ b, whose length equals b-a ≤ 1.The remaining verification needed forto be a metric bundle is that the set of inclusions _s (s ∈) is uniformly proper, meaning that these inclusions are uniformly Lipschitz — in fact they are all 1-Lipschitz by construction — and that there exists a single nondecreasing “gauge” function δ [0,) → [0,) for these inclusions, having the following property: (*) for any s ∈, any x,y ∈_s, and any D ≥ 0, if d_(x,y) ≤ D then d_s(x,y) ≤δ(D). To define δ consider x,y ∈_s connected by a geodesic path γ inwith (γ) = d_(x,y) ≤ D. The projection p γ inhas length ≤ D and so, letting m = ⌊ D+2 ⌋, there are integers a and b=a+m such that ⊷(p γ) ⊂ [a,b], implying that ⊷(γ) ⊂_[a,b] which implies in turn that d_(x,y) = d_[a,b](x,y). Applying Lemma <ref> we have 1/k_md_s(x,y) - c_m ≤ d_[a,b](x,y) and hence d_s(x,y) ≤ k_m d_(x,y) + k_m c_m. We may assume that k_m and c_m are nondecreasing functions of m and hence k_⌊ D ⌋ and c_⌊ D ⌋ are nondecreasing functions of D, and so using δ(D) = k_⌊ D ⌋(D + c_⌊ D ⌋) we are done.We record for later use the following:For each s ∈ the inclusion _s is uniformly proper. In particular, the inclusion T^* = _0 is uniformly proper. Hypotheses (1), (2) of <cit.>: Base and fiber hyperbolicity. These hypotheses require that the base spaceis hyperbolic which is evident, and that the fibers _s are hyperbolic with uniform hyperbolicity constant. Proposition <ref> gives us a constant δ' ≥ 0 such that T^* is δ'-hyperbolic. Since each fiber _s is λ-bilipschitz homeomorphic to T^*, it follows that _s is hyperbolic with a uniform hyperbolicity constant δ depending only on δ' and λ.Hypothesis (3) of <cit.>: Barycenters. 
This hypothesis says that the barycenter maps ^3 _s →_s are uniformly coarsely surjective as s varies. We review what this means from <cit.>. Given a δ-hyperbolic geodesic metric space X with Gromov boundary X, considerthe triple space ^3 X = {(ξ_1,ξ_2,ξ_3) ∈ ( X)^3 ξ_i ξ_jifij}The barycenter map ^3 X → X is a coarsely well-defined map as follows. There exists constants K ≥ 1, C ≥ 0 depending only on δ such that for any two points ξ_1 ξ_2 ∈ X there exists a K,C-quasigeodesically embedded line in X having endpoints ξ_1,ξ_2; we use the notation ξ_1ξ_2 for any such quasigeodesic line. By <cit.> there exist constants D,L ≥ 0 depending only on δ such that for each triple ξ=(ξ_1,ξ_2,ξ_3) ∈^3 X there exists a point b_ξ∈ X which comes within distance D of any of the lines ξ_iξ_j, ij ∈{1,2,3}, and for any other such point b'_ξ the distance between b_ξ and b'_ξ is ≤ L. Once the constants K, C, D, L have been chosen, any such point b_ξ is called a barycenter of ξ, and any map ^3 X → X taking each triple ξ to a barycenter b_ξ is called a barycenter map for X.To say that the barycenter maps ^3_s →_s are uniformly coarsely surjective means that there exists a “coboundedness constant” E ≥ 0 such that for each s ∈ the image of each barycenter map ^3_s →_s comes within distance E of each point of _s. For the hyperbolic space T^*, the action F_nT^* has a fundamental domain τ⊂ T^* of bounded diameter, and so E” = (τ) is a coboundedness constant for any F_n-equivariant barycenter map ^3 T^* → T^*, hence there is a uniform coboundedness constant E' for all barycenter maps ^3 T^* → T^*. Since each of the fibers _s comes equipped with a λ-bilipschitz homeomorphism j_sT^* →_s, their barycenter maps have a uniform coboundedness constant E = λ E'.Hypothesis (4) of <cit.> aka <cit.>. Here is a slight restatement of the hypothesis specialized to our present context.Flaring 1: For all k_1 ≥ 1, ν_1 > 1, there exist integers A_1, R ≥ 0 such that for any s ∈ and any two k_1-quasigeodesics γ_1,γ_2[s-R,s+R] →which are sections of the projection map p — meaning that p γ_i is the identity on the interval [s-R,s+R] — the following implication holds:(F_1) if d_s(γ_1(s),γ_2(s)) ≥ A_1 thenν_1·d_s(γ_1(s),γ_2(s)) ≤max{ d_s-R(γ_1(s-R),γ_2(s-R)), d_s+R(γ_1(s+R),γ_2(s+R)) } This statement tautologically implies the flaring hypothesis given in <cit.>, the difference being that in the latter statement the quantifier order starts out as “For all k ≥ 1 there exist μ > 1 and integers A,r ≥ 0 such that…” with the remainder of the statement unchanged (a simple geometric estimation argument yields the converse implication, but we will not need this).For proving Flaring 1 we first reduce it to a “discretized” version taking place solely in the fibers _r for integer values of r, as follows:Flaring 2: For all k_2 ≥ 1, ν_2 > 1, there exist integers A_2, R ≥ 0 such that for any integer m ∈ and any two k_2-quasigeodesic mapsδ_1,δ_2 {m-R,…,m,…,m+R}→which are sections of the projection map p, the following implication holds:(F_2) if d_m(δ_1(m),δ_2(m)) ≥ A_2 then ν_2·d_m(δ_1(m),δ_2(m)) ≤max{d_m-R(δ_1(m-R),δ_2(m-R)), d_m+R(δ_1(m+R),δ_2(m+R)) } To show that Flaring 2 implies Flaring 1, choose k_1,ν_1. Consider any integer R ≥ 0, any s ∈, and any pair γ_1,γ_2[s-R,s+R] → of k_1-quasigeodesic sections of the projection map p. Let m = ⌊ s ⌋ and let t = s - m. 
The semiflow restricts to λ-bilipschitz homeomorphisms h_r _r ↦_r+t defined by h_r[x,r,0] = [x,r,t] for any integer r, having the property that the distance from each [x,r,0] to its h_r-image [x,r,t] inis at most 1. It follows that the functionsδ_j {m-R,…,m+R}→, δ_j(r) = h_r^(γ_j(r+t))are k_2-quasi-isometric sections of p, where the constant k_2 depends only on k_1 and λ. Applying Flaring 2, for any ν_2 > 1 there exist integers A_2,R ≥ 0 such that the implication (F_2) holds. Again using that the maps h_r are λ-bilipschitz homeomorphisms, if we take ν_2 = ν_1 ·λ^2 and A_1 = A_2 ·λ then the hypothesis of (F_1) implies the hypothesis of (F_2) and the conclusion of (F_2) implies the conclusion of (F_1). It remains to verify Flaring 2 by applying Proposition <ref> which we restate here for convenience in a form matching that of Flaring 2:Flaring 3: For each μ > 1, η≥ 0 there exist integers A_2 ≥ 0, R ≥ 1 such that for any pair of η-pseudo-orbits V_r and W_r of f_T^* T^* → T^* defined for integers -R ≤ r ≤ R, the following implication holds:(F_3) if d^*(V_0,W_0) ≥ A_2 then μ· d^*(V_0,W_0) ≤max{ d^*(V_-R,W_-R) , d^*(V_R,W_R) } For the proof that Flaring 3Flaring 2, for each r ∈ we have a commutative diagramT^* [r]^f_T^*[d]_j_r-1T^* [d]^j_r _r-1[r]_h_r _rwhere h_r is the first hitting map. Note that in this diagram the bottom is equivariant, the top is Φ-twisted equivariant, the left is a Φ^1-r-twisted equivariant isometry, and the right is Φ^-r-twisted equivariant isometry.Choose k_2 ≥ 1, ν_2>1, consider integers R ≥ 0 and m, and consider a pair of k_2-quasigeodesic maps δ_i {m-R,…,m+R}→ which are sections of p, for i=1,2. It follows that if m-R ≤ r-1 < r ≤ m+R then d_(δ_i(r-1),δ_i(r)) ≤ 2 k_2 (recall that “k_2-quasigeodesic” is synonymous with “(k_2,k_2)-quasigeodesic”). For each x ∈_r-1 we have d_(h_r(x),x)) ≤ 1 and hence d_(h_r(δ_i(r-1)),δ_i(r)) ≤ 2k_2 + 1. For r ∈{-R,…,+R} denote V_r = j_m+r^(δ_1(m+r)), W_r = j_m+r^(δ_2(m+r)). It follows that d_*(f_T^*(V_r-1),V_r), d_*(f_T^*(W_r-1,W_r) ≤ 2k_2+1, and so the sequences (V_r) and (W_r) are both η-pseudo-orbits of f_T^*, defined for m-R ≤ r ≤ m+R and with η = 2k_2+1. Since j_m is an isometry we have d_m(δ_1(m),δ_2(m))=d^*(V_0,W_0), and so the hypothesis of (F_2) implies the hypothesis of (F_3). Similarly since j_m-R and j_m+R are isometries, the conclusion of (F_3) implies the conclusion of (F_2) using μ=ν_2. We have therefore proved that Flaring 3Flaring 2.This completes the verification of the hypotheses of the Mj-Sardar combination theorem, and so the spaceis therefore hyperbolic. § ABELIAN SUBGROUPS OF (F_N) This section reviews some background material needed for the rest of the paper. Section <ref> contains basic material fromregarding automorphisms and outer automorphisms (see alsofor a comprehensive overview). Section <ref> reviews elements of the theory of abelian subgroups of (F_n) developed in <cit.>, focussing on disintegration subgroups.§.§ Background review §.§.§ More aboutIn Notations <ref> we reviewed features of afG → G with associated f-invariant filtration G_1 ⊂⋯⊂ G_u under the assumption that the top stratum H_u is . In studying disintegration groups we shall need some further defining properties and derived properties of  (for this section only the reader may ignore the assumption that the top stratum is ). We shall refer tofor material specific to , tofor more general material, and tofor certain “compilation” results with multiple sources inor . One may also consultfor a comprehensive overview. General properties of strata. 
<cit.> Each stratum H_i = G_i ∖ G_i-1 is either an irreducible stratum meaning that its transition matrix M_i is an irreducible matrix, or a zero stratum meaning that M_i is a zero matrix. Each irreducible stratum satisfies one of the following:H_i is anstratum: <cit.> The matrix M_i is a k × k Perron-Frobenius matrix for some k ≥ 2, having eigenvalue λ>1; or H_i is anstratum: <cit.> H_i = E_i is a single edge with an orientation such that f(E_i) = E_i u where u is either trivial or a closed path in G_i-1 having distinct initial and terminal directions.Anstratum H_i is a fixed stratum if u is trivial, a linear stratum if u is a Nielsen path (equivalently u is a periodic Nielsen path), and a superlinear stratum otherwise.Properties of -linear strata: An -linear stratum H_i = E_i will also be referred to as a linear edge of G. The linear edges of G have the following features:Twist path and twist coefficient: <cit.> For each linear edge E_i we have f(E_i) = E_i w_i^d_i for a unique closed Nielsen path w_i which is root free meaning that w_i is not an iterate of any shorter closed path (equivalently, if p is the base point of w_i, then the element of the group π_1(G,p) represented by w_i is root free). We say that w_i is the twist path of E_i and that the integer d_i0 is its twist coefficient. If E_jE_i is another linear edge having twist path w_j, and if w_i and w_j determine the same conjugacy class up to inversion, then w_i=w_j and d_id_j. Nielsen paths: <cit.>For eachedge E_i, if there is an indivisible Nielsen path contained in G_i but not in G_i-1 then E_i is a linear edge, and every such Nielsen path has the form E_i w_i^k E_i for some k0.Exceptional paths: <cit.> These are paths of the form E_i w^k E_j where E_iE_j are linear edges having the same twist path w=w_i=w_j and having twist coefficients d_i,d_j of the same sign.Properties ofstrata: These properties were stated in Notations <ref> with respect to the topstratum H_u, but we go over them again here for an arbitrary -stratum H_i, and with a somewhat different emphasis. Lines: <cit.> Recall the spaces of lines in F_n and in G and the canonical homeomorphism between them:(F_n) = {2 point subsets of F_n} / F_n(G)= {bi-infinite paths in G} / reparameterizationwhere the topology on (F_n) is induced by the Hausdorff topology on compact subsets of F_n, and the topology on (G) has a basis element for each finite path γ consisting of all bi-infinite paths having γ as a subpath. The homeomorphism (F_n) ↔(G) is induced by the universal covering map G → G and the natural bijection F_n ≈ G. We refer to this homeomorphism by saying that a line in F_n is realized by the corresponding line in G. Attracting laminations: <cit.>, <cit.> Associated to H_i is its attracting lamination Λ_i ⊂(F_n) which is the set of all lines in F_n whose realization ℓ in G has the property that for each finite subpath γ of ℓ and each edge E ⊂ H_i there exists k ≥ 1 such that γ is a subpath of f^k_#(E). For distinctstrata H_i, H_j (ij) the corresponding laminations Λ_i,Λ_j are distinct. The set of laminations Ł(ϕ)={Λ_i} is independent of the choice ofrepresentative fG → G. EG Nielsen paths: <cit.>,Fact 1.42. Up to inversion there exists at most one indivisible periodic Nielsen path ρ contained in G_i but not in G_i-1. Its initial and terminal directions are distinct, and at least one endpoint of ρ is not contained in G_i-1. Geometricity: , Fact 2.3. The stratum H_i is geometric if and only if ρ exists and is a closed path.Fixed circuits: , Fact 1.39. 
If σ is a circuit fixed by f_# then σ is a concatenation of fixed edges and indivisible Nielsen paths.Properties of zero strata: <cit.>Each zero stratum H_i ⊂ G has the following properties: Envelopment: There exist indices s<i<r such that H_s is an irreducible stratum, H_r is anstratum, each component of G_r is noncontractible, and each H_j with s<j<r is a zero stratum and a contractible component of G_r-1. We say that the zero strata H_j with s<j<r are enveloped by H_r, and we denote H^z_r to be the union of H_r with its enveloped zero strata. The filtration element G_s is the union of the noncontractible components of G_r-1, and H^z_r = G_r ∖ G_s. Taken paths.These are the paths μ in H_i for which there exists an edge E of some irreducible stratum H_q with q>i, and there exists k ≥ 1, such that μ is a maximal subpath in H_i of the path f^k_#(E); we say more specifically that the path μ is q-taken. If H_r is thestratum that envelopes H_i then every edge E ⊂ H_i is an r-taken path, from which it follows that the endpoints of E are vertices of H_r not contained in G_r-1. Properties of complete splittings: <cit.>, <cit.>A splitting of a path γ in G is a concatenation expression γ = γ_1 ·…·γ_J such that for all k ≥ 1 we have f^k_#(γ) = f^k_#(γ_1) ·…· f^k_#(γ_J).The characteristic property of a  — short for “completely split relative train track map” — is the following:Complete splitting: For each edge E there is a unique splitting f(E) = σ_1 ·…·σ_n which is complete, meaning that each term σ_i is either an edge in an irreducible stratum, anindivisible Nielsen path, anindivisible Nielsen path, an exceptional path, or a maximal taken subpath in a zero stratum. §.§.§ Principal automorphisms and rotationless outer automorphisms.Consider an automorphism Φ∈(F_n),with induced boundary homeomorphism denoted Φ F_n → F_n, and with fixed subgroup denoted (Φ)F_n. Denote the sets of periodic points and fixed points of Φ as (Φ) and (Φ) ⊂ F_n, respectively. Consider ξ∈(Φ) of period k ≥ 1. We say ξ is an attractor for Φ if it has a neighborhood U ⊂ F_n such that Φ^k(U) ⊂ U and the collection {Φ^ki(U)i ≥ 0} is a neighborhood basis of ξ.Also, ξ is a repeller for Φ if it is an attractor for Φ^.Within (Φ) and (Φ) denote the sets of attracting, repelling, and nonrepelling points, respectively, as _+(Φ),_-(Φ),_N(Φ)⊂(Φ)_+(Φ),_-(Φ),_N(Φ)⊂(Φ) For each c ∈ F_n associated inner automorphism i_c(a)=cac^ we use the following special notations c = (i_c) = {_-c, _+c}, {_- c} = _-(i_c), {_+ c} = _+(i_c)The following equivalences are collected in <cit.> based on results from <cit.> and <cit.>. For each Φ∈(F_n) and each nontrivial c ∈ F_n we haveΦ∈(c) i_c commutes withΦ(Φ) is invariant underî_cc ⊂(Φ) Principal automorphisms. <cit.>. We say that Φ is a principal automorphism if _N(Φ)≥ 2, and furthermore if _N(Φ)=2 then _N(Φ) is neither equal to c for any nontrivial c ∈ F_n, nor equal to the set of endpoints of a lift of a generic leaf of an attracting lamination of the outer automorphism representing Φ. For each ϕ∈(F_n) let P(ϕ) ⊂(F_n) denote the set of principal automorphisms representing ϕ.For each ϕ∈(F_n) let P^±(ϕ) denote[Inthe definition of P^±(ϕ) was incorrectly stated as P(ϕ)P(ϕ^). The definition given here should replace the one in . No further changes are required in , because the arguments there which use P^±(ϕ) are all written using the current, correct definition.] 
the set of all Φ∈(F_n) representing ϕ such that either Φ or Φ^ is principal, equivalently, P^±(ϕ) = P(ϕ)(P(ϕ^))^ The symmetry equation P^±(ϕ^)=(P^±(ϕ))^ is useful in situations where one is trying to prove a certain property of Φ∈ P^±(ϕ) that is symmetric under inversion of Φ: one may reduce to the case that Φ is principal by replacing ϕ∈(F_n) and Φ by their inverses in the case where Φ is not already principal. We use this reduction argument with little comment in many places. Rotationless outer automorphisms. <cit.>. The concept of forward rotationless outer automorphisms is defined in , where it is proved that the forward rotationless outer automorphisms are precisely those outer automorphisms which haverepresentatives. Here we use the stricter property of rotationless defined in , which is symmetric under inversion, and which is better adapted to the study of abelian subgroups. We say that ϕ∈(F_n) is rotationless if two conditions hold. First, for each Φ∈ P(ϕ) we have (Φ)=(Φ) (in “forward rotationless” this condition is replaced by the weaker _N(Φ)=_N(Φ)). Second, for each integer k ≥ 1 the map Φ↦Φ^k is a bijection between P^±(ϕ) and P^±(ϕ^k) — recall from <cit.> that injective is always true, and so bijective holds if and only if surjective holds. Expansion factor homomorphisms. <cit.> Consider ϕ∈(F_n) and an attracting lamination Λ∈Ł(ϕ). Under the action of (F_n) on the set of lines (F_n), consider the subgroup (Λ) (F_n) that stabilizes Λ. The expansion factor homomorphism of Λ is the unique surjective homomorphism _Λ(Λ) → such that for each ψ∈(Λ) we have _Λ(ψ) ≥ 1 if and only if Λ∈Ł(ψ). Furthermore, there exists μ > 1 such that if ψ∈(Λ) and if _Λ(ψ) > 1 then any relative train track representative fG → G of ψ has an -aperiodic stratum corresponding to Λ on which the Perron-Frobenius eigenvalue of the transition matrix equals μ^_Λ(ψ). Twistors (aka Axes). <cit.>. Recall that two elements a,b ∈ F_n are said to be unoriented conjugate if a is conjugate to b or to b^. The ordinary conjugacy class of a is denoted [a] and the unoriented conjugacy class is denoted [a]_u. For any marked graph G, nontrivial conjugacy classes of F_n correspond bijectively with circuits S^1 ↦ G up to orientation preserving homeomorphisms of the domain S^1. Note that a is root free in F_n if and only if the oriented circuit S^1 ↦ G representing [a] does not factor through a nontrivial covering map of S^1, and a,b are unoriented conjugate if and only if the oriented circuits representing [a],[b] differ by an arbitrary homeomorphism of S^1.Consider ϕ∈(F_n) and a nontrivial, unoriented, root free conjugacy class μ = [c]_u, c ∈ F_n. For any two representatives Φ_1 Φ_2 ∈(F_n) of ϕ, the following are equivalent: Φ_1, Φ_2 ∈(c) c = (Φ_1) (Φ_2)Furthermore, if these equivalent conditions hold then Φ_2^Φ^_1 = i_c^d for some integer d0. If there exists a pair Φ_1 Φ_2 ∈ P(ϕ) such that the above equivalent conditions hold, then μ is said to be a twistor for ϕ, and the number of distinct elements of the set P(ϕ) (c) is called the multiplicity of μ as a twistor for ϕ; these properties are independent of the choice of c ∈ F_n representing μ. The number of twistors of ϕ and the multiplicity of each twistor is finite, as follows. First, for anyfG → G representing ϕ, for μ to be a twistor it is equivalent that somelinear edge E twists around μ, meaning that E has twist path w such that either w or w^ represents μ. Furthermore, the multiplicity of μ is one more than the number of linear edges that twist around μ <cit.>. 
§.§.§ Rotationless abelian subgroupsPrincipal sets for rotationless abelian subgroups. <cit.>.An abelian subgroup A (F_n) is rotationless if each of its elements is rotationless, equivalently A has a rotationless generating set. Every abelian subgroup contains a rotationless subgroup of index bounded uniformly by a constant depending only on n, namely the subgroup of k^th powers where k ≥ 1 is an integer such that the k^th power of every element of (F_n) is rotationless <cit.>.Assuming A (F_n) is rotationless, a subset ⊂∂ F_n with ≥ 3 elements is a principal set for A if each ψ∈ A is represented by Ψ∈(F_n) such that ⊂(Ψ) (which determines Ψ uniquely amongst representatives of Ψ) and such that Ψ∈ P^±(ψ). When the principal setis fixed, the map ψ↦Ψ defines a homomorphism sA →(F_n) that is a section of the canonical map (F_n) →(F_n) over the subgroup A. Also, the set= ∩_ψ∈ A(s(ψ)) is the unique principal set which is maximal up to inclusion subject to the property thatcontains ; this setdefines the same section sA →(F_n) as<cit.>. Comparison homomorphisms of rotationless abelian subgroups. <cit.>. Consider a rotationless abelian subgroup A. Consider also two principal sets _1,_2 for A that define lifts s_1,s_2A →(F_n). Let _1,_2 be the maximal principal sets containing _1,_2 respectively, and so s_1,s_2 are also the lifts defined by _1,_2. Suppose that s_1s_2A →(F_n) (this if and only if _1 _2), and suppose also that _1 ∩_2 ∅ and hence _1 ∩_2 ∅. It follows that the set Y_1 ∩ Y_2 is fixed by distinct automorphisms representing the same element of A, and so _1 _2 = T^±_c for some nontrivial root free c ∈ F_n. In this situation there is an associated comparison homomorphism ω A → which is characterized by the equations_2(ψ) = i_c^ω(ψ) s_1(ψ) for all ψ∈ A.The number of distinct comparison homomorphisms A → is finite. The coordinate homomorphism. <cit.>. Every abelian subgroup A (F_n) is a finite lamination group, that is, the set Ł(A) = _ϕ∈ AŁ(ϕ) is finite. If in addition A is rotationless then each Λ∈Ł(A) is ψ-invariant for all ψ∈ A, and so A ⋂_Λ∈Ł(A)(Λ)We thus obtain a finite collection of expansion factor homomorphisms defined on A, namely _Λ A →, Λ∈Ł(A)Choosing an enumeration Ω_1,…,Ω_N of the expansion homomorphisms and the comparison homomorphisms, we obtain the coordinate homomorphism Ω A →^Nwhich is injective. §.§ Disintegration subgroupsIn this section we fix a rotationless ϕ∈(F_n) and arepresentative fG → G with corresponding f-invariant filtration G_1 ⊂⋯⊂ G_u = G (that is all we need from Notations <ref> for this section). Using thestructure of f (see Section <ref>) one builds up to the definition of the disintegration subgroup associated to f, a certain abelian subgroup (f) (F_n) constructed in <cit.> by an explicit combinatorial procedure that we review here, using which one also obtains a description of the coordinate homomorphism Ω(f) →^N (see Section <ref>). Noting that (f) depends on the choice of f, all definitions in this section depend on f. §.§.§ QE-paths and QE-splittingsWe describe structures associated to the collection of linear edges of G, augmenting the structures described under the heading “Properties of -linear strata” in Section <ref>.<cit.>Two linear edges E_i,E_j ⊂ G with a common twist path w are said to be linearly equivalent, abbreviated to LIN-equivalent, and the associated LIN-equivalence class is denoted _w. Recall that distinct elements E_iE_j ∈_w have distinct twist coefficients d_id_j. 
<cit.> Given a twist path w and distinct linear edges E_iE_j ∈_w, a path of the form E_i w^p E_j is called a quasi-exceptional path or QE-path (it is an exceptional path if and only if the twist coefficients d_i,d_j have the same sign). For every completely split path σ, any QE-subpath E_i w^p E_j of σ is a concatenation of terms of the complete splitting of σ: either d_i,d_j have the same sign and E_i w^p E_j is an exceptional path and hence a single term; or the terms consist of the initial E_i, the terms of the complete splitting of w^p, and the terminal E_j. No two QE-subpaths can overlap in an edge, and so there is a uniquely defined QE-splitting of σ which is obtained from the complete splitting by conglomerating each QE-subpath of σ into a single term. Consider a twist path w. For each E_i,E_j ∈_w, the associated quasi-exceptional family is the set of pathsE_i w^* E_j = {E_i w^p E_jp ∈}Also, associated to w itself is its linear family, the set_w ⋃_ E_iE_j ∈_w E_i w^* E_j Every quasi-exceptional path belongs to a unique quasi-exceptional family, and everylinear edge or quasi-exceptional path belongs to a unique linear family. §.§.§ Almost invariant subgraphsThe intuition behind the “almost invariant subgraphs” of G is that they are a partition of the collection of non-fixed strata determined by the following requirement: if one perturbs fG → G by letting its restrictions fH_i to non-fixed strata be replaced by iterates f^a_i H_i with varying integer exponents a_i ≥ 0, and if one wishes to do this perturbation so that the resulting outer automorphisms commute with the outer automorphism represented by f, then the exponents a_i should be constant on the edges contained in each almost invariant subgraph.<cit.> Define an equivalence relation on the non-fixed irreducible strata {H_i} of G as follows. Given H_i, a path μ is a short path for H_i if μ an edge in H_i, or if H_i isand μ is a taken connecting path in a zero stratum enveloped by H_i. Define a relation amongst the non-fixed irreducible strata denoted H_iH_j, meaning that there exists a short path μ for H_i such that some term of the QE-splitting of f_#(μ) is an edge of H_j; note that if H_iH_j ⋯ H_k then i ≥ j ≥⋯≥ k. Let B be the directed graph whose vertex set is the set of non-fixed irreducible strata, with a directed edge for each relation H_iH_j. Let B_1,…, B_S denote the connected components of the graph B. For each s = 1,…,S define the almost invariant subgraph X_s ⊂ G to be the union of the strata H_i comprising the vertices of B_s, together with any zero stratum enveloped by one of these H_i. Note that the set of almost invariant subgraphs {X_1,…,X_S} partitions the set of the nonfixed strata. §.§.§ Admissible S-tuples; quasi-exceptional families<cit.> For each S-tuple =(a_1,…, a_S) of non-negative integers define f^ : G → G on each edge E ⊂ G as follows:[Inthe notation f_ was used for what we here denote f^.] if E ⊂ X_s for some s then f^(E) = f^a_s_#(E); otherwise E is fixed by f and then f^(E) = E. Each such f^ is a homotopy equivalence representing an outer automorphism denoted ϕ^. By construction f^ preserves the given filtration G_1 ⊂⋯⊂ G_u=G.One would like to arrange (among other things) that (f^)^k_#(E) = f_#^a_s k(E) for all k ≥ 1, all s=1,…,S, and all edges E ⊂ X_s. This need not hold for general S-tuples, but it does hold when a certain relation amongst linear edges and QE-splitting terms is satisfied. 
Given an almost invariant subgraph X_r and linear edges E_i, E_j, we say that the triple (X_r,E_i,E_j) is quasi-twist related if there exists a stratum H_k ⊂ X_r, a short path μ for H_k, a twist path w, and an integer p ≥ 1 such that E_i, E_j ∈_w, and such that E_i w^p E_j is a term in the QE-splitting of f_#(μ). We say that an S-tupleis admissible if for all quasi-twist related triples (X_r,E_i,E_j), letting d_i,d_j be the twist coefficients of E_i,E_j respectively, and letting X_s,X_t be the almost invariant subgraphs containing E_i,E_j respectively, the following “twisting relation” holds:a_r(d_i - d_j) = a_s d_i - a_t d_j <cit.> Consider an almost invariant graph X_r. For each pair of linear edges E_i,E_j such that the triple (X_r,E_i,E_j) is quasi-twist related, the associated quasi-exceptional family is defined to be the set of all paths of the form E_i w^* E_j. We let _r denote the set of all quasi-exceptional families associated to quasi-twist related triples (X_r,E_i,E_j). QE-equivalence of linear edges. We say that a pair of linear edges E_i,E_j is QE-related if there exists an almost invariant subgraph X_r such that (X_r,E_i,E_j) is quasi-twist related. Equivalently E_i,E_j are QE-related if and only if they are in the same linear family and, letting w be the unique twist path for which E_i, E_j ∈_w, there exists r such that the family E_i w^* E_j is in the set _r. The equivalence relation on linear edges generated by being QE-related is called QE-equivalence and is written ∼_QE. Note that the QE-equivalence relation amongst linear edges is a refinement of the LIN-equivalence relation. Note that if an exceptional path E_i w^p E_j occurs as a term in the complete splitting of some iterate of some short path in X_r then the quasi-exceptional family E_i w^* E_j is an element of _r and the linear edges E_i and E_j are QE-related.§.§.§ X_s paths.<cit.> For each almost invariant subgraph X_s, we use the terminology X_s-paths to refer to the subpaths of G which form the elements of the set 𝒫_s that is defined in <cit.> — namely, the completely split paths γ such that each term of the QE-splitting of γ is one of the following:* a Nielsen path; or* a short path in a stratum contained in X_s, for example any edge in H^z_i for anystratum H_i ⊂ X_s; or* a quasi-exceptional path in the family _s.Furthermore, for any admissible S-tuple , any almost invariant subgraph X_s, and any X_s path γ, we have the following: *Each iterate f^k_#(γ) is also an X_s path;*f^_#(γ) = f^a_s_#(γ).§.§.§ The disintegration subgroup (f)Here we recall the definition of the disintegration subgroup (f) (F_n). We also recall the “admissible semigroup” (f), which we will use as an aid to understanding properties of (f).The subgroup (f).<cit.> This is the subgroup of (F_n) generated by the set of elements {ϕ^is an admissible S-tuple}The subgroup (f) is abelian and rotationless.The dependence of (f) on f rather than just ϕ was suppressed in <cit.> where the notation (ϕ) was used; see <cit.>. The admissible semigroup (f). <cit.> Let (f) ⊂^S denote the set of admissible S-tuples, which forms a sub-semigroup of ^S. Let Ł(f) ^S be the subgroup generated by (f). The map (f) ↦(f) defined by ↦ϕ^ is an injective semigroup homomorphism, and it extends to an isomorphism Ł(f) ↦(f). 
Every element of Ł(f) can be written as the difference - of two elements ,∈(f), and so every ψ∈(f) can be written in the form ψ = (ϕ^)^ϕ^ for some ,∈(f).We record here a simple consequence of these definitions for later use: The function which assigns to each ∈(f) the map f^ G → G satisfies the following “homomorphic” property: f^_# (f^_#(E)) = f^+_#(E) for each ,∈(f) and each edge E ⊂ G If E is a fixed edge of f then both sides equal E. Otherwise E is an edge in some almost invariant graph X_s, hence E is an X_s-path, hence f^_#(E) = (f^b_s)_#(E) is an X_s path, hence both sides equal (f^a_s+b_s)_#(E).We repeat here the theorem cited in Section <ref>:[<cit.>] For every rotationless abelian subgroup (F_n) there exists ϕ∈ such that for everyfG → G representing ϕ, the intersection (f) has finite index in . Remark. In this statement of the Disintegration Theorem we have made explicit the fact that <cit.> holds for any choice ofrepresenting ϕ. This was already implicit in the notation (ϕ) used for disintegration groups in <cit.> where any representativeis allowed.§.§.§ The coordinate homomorphism of a disintegration group (f)The disintegration group (f), being rotationless and abelian, has an injective coordinate homomorphism Ω(f) ↦^N as defined at the end of Section <ref>. The individual coordinate functions of Ω are the comparison homomorphismsand the expansion factor homomorphisms of the abelian group (f). We review here how one may use the structure of thef to sift through the coordinate functions of Ω — keeping a subset of the comparison homomorphisms, and keeping a normalized version of each expansion factor homomorphism — to obtain a homomorphism denoted Ω^f (f) →^ which is still injective. We will not normalize comparison homomorphisms, because it would screw up the “twisting relation” described in Section <ref>.<cit.>. Let = {iH_i is eitherlinear or }. The homomorphism Ω^f (f) →^ has coordinate functions denoted ω_i (f) → for each i ∈. For each i ∈, the function ω_i is characterized in one of two ways, depending on whether H_iislinear or . Let X_s be the almost invariant subgraph containing H_i. linear coordinate homomorphism (a component of Ω and of Ω^f): If H_i = E_i islinear, with twist path w and twist coefficient d_i satisfying f(E_i) = E_i w^d_i, then for each admissible S-tuple  we set ω_i(ϕ^) = a_s d_i.Consider also the difference homomorphisms ω_i,j(f) →, one for each LIN-equivalent pair of linear edges E_i,E_j, defined by:ω_i,j = ω_i - ω_jAs shown in <cit.> just following Theorem 7.2, each of these difference homomorphisms is a comparison homomorphism on (f) (Section <ref>), and furthermore the set of comparison homomorphisms on (f) is exactly the set of functions consisting of thelinear coordinate homomorphisms ω_i, their additive inverses -ω_i, and their differences ω_i,j = ω_i - ω_j for LIN-equivalent linear edges E_i,E_j. Unnormalizedcoordinate homomorphism (a component of Ω): If H_i iswith associated attracting lamination Λ_i ∈Ł(ϕ) then for each admissible S-tuplewe set ω_i(ϕ^) = _Λ_i (ϕ^). As alluded to in Section <ref>, this definition makes sense because (f) (Λ_i).In both thelinear case and thecase, for each admissible S-tuplethe following equation holds<cit.>:ω_i(ϕ^) = a_s ω_i(ϕ) Noting that the constant sequence = (1,1,…,1) is admissible and satisfies the equation ϕ^ = ϕ, it follows that the subgroup ⊷(ω_i) is generated by the integer ω_i(ϕ). If H_i isthen we normalize the function ω_i by dividing it by the integer ω_i(ϕ); the preceding equation still holds. 
After this normalization we have: Normalizedcoordinate homomorphism (a component of Ω^f): If H_i isthen, with notation as above, we haveω_i(ϕ^) = a_sand in particular ω_i(ϕ) = 1. It follows that if λ_i is the Perron-Frobenius eigenvalue of the transition matrix of f on H_i, then the expansion factor of ϕ^ on the lamination Λ_i is equal to λ_i^ω_i(ϕ^)= λ_i^a_s. For later reference we summarize the properties of Ω^f (f) →^ as follows:∙ The homomorphism Ω^f (f) →^ is injective.∙For each i ∈ and each admissible S-tuple  we have ω_i(ϕ^) = a_s ω_i(ϕ).∙ The coordinate function ω_i of Ω^f associated to anstratum H_i is the normalized version of _Λ_i satisfying ω_i(ϕ)=1.§ A TRAIN TRACK SEMIGROUP ACTIONThroughout this section we continue to adopt Notations <ref> regarding a rotationless ϕ∈(F_n) with a representative fG → G whose top stratum H_u iswith associated attracting lamination Λ. Consider the disintegration group =(f), reviewed in Section <ref>. Recall that each element ofstabilizes Λ. We shall focus on the subsemigroup _+ consisting of all ψ∈ for which the value of top coordinate homomorphism ω(ψ) is non-negative. The value of ω(ψ) is a logarithm of the asymptotic stretch factor of ψ on Λ, hence for ω(ψ) to be non-negative means that either ψ does not stretch Λ or Λ is an attracting lamination of ψ.Using the theory of disintegration groups, for each ψ∈_+ we construct a topological representative f^ψ G → G whose action on the edges of H_u agrees with the action of the appropriate iterate of thefG → G, namely f^ω(ψ)_#. Then, by lifting these topological representatives to the universal cover G, projecting to the tree T, and extending to the coned off graph T^*, we obtain semigroup actions of _+ on T and on T^*, detailed properties of which are described in Section <ref>. An important feature of the action _+T is that it is not an isometric action, in fact the action of each Ψ∈_+ will stretch lengths in T by a uniform amount depending on the appropriate value of the coordinate homomorphism ω (see Section <ref> Item_f_PsiEdges). On the other hand, by restricting the semigroup actions _+T,T^* to the subgroup _0 we obtain true isometric actions _0T,T^*, some properties of which are studied in Section <ref>. The actions constructed in this section will be the basis for the construction in Section <ref> of an isometric group action ofon the hyperbolic suspension complex .For use throughout Sections <ref> and Section <ref> we establish some further notations.Given a rotationless ϕ∈(F_n) withrepresentative fG → G having associated filtration G_1 ⊂⋯⊂ G_u with its top stratum H_u being , and with all associated objects and notations as described in Notations <ref>, we have the following additional objects and notations: *[Section <ref>, “Properties of zero strata”] H^z_u is the union of H_u with all zero strata enveloped by H_u. For some t ≤ u-1 this set of zero strata has the form {H_it+1 ≤ i ≤ u-1} (which is empty if t=u-1), where* H_t is the highest irreducible stratum below H_u, * G_t is the union of noncontractible components of G_u-1, the contractible components being the zero strata H_i, t+1 ≤ i ≤ u-1.*[Definitions <ref>, <ref>]F_nT denotes the free splitting corresponding to the marked graph pair (G,G_u-1), with associated Nielsen set {N_j}_j ∈ J, each N_j having basepoint set Z_j ⊂ N_j. 
Also, F_nT^* denotes the action obtained from F_nT by coning off each basepoint set Z_j to a cone point P_j with edges P_n Q (Q ∈ Z_j).*[Definition <ref>] Associated to each automorphism Φ∈(F_n) representing ϕ are unique Φ-twisted equivariant maps as follows:* f^Φ G → G, a lift of f; *f^Φ_TT → T, induced by f^Φ with respect to the collapse map G ↦ T.* f_T^* T^* → T^*, an extension of f^Φ_T that permutes cone points and cone edges.The maps f^Φ_T, f^Φ_T^* satisfy the “Stretch properties” recorded in Section <ref>. *The disintegration group of f is denoted = (f). Its full pre-image in (F_n) is denotedand is called the extended disintegration group. * Associated to the topstratum H_u we have the following objects:* The almost invariant subgraph X_s ⊂ G containing H^z_u; * (Section <ref>) The coordinate homomorphism ω→ associated to X_s, a scaled copy of the expansion factor homomorphism _Λ that is normalized so thatω(ϕ)=1In particular, ω is surjective.* The lifted coordinate homomorphism ω→, obtained by pre-composing ω with the projection map ↦.* The kernels of these homomorphisms denoted_0 = (ω) _0 = (ω) We thus have a commutative diagram with short exact rows:1 [r](ω) = _0 [r][r]^ω@=[d][r] 1 1 [r] F_n ≈(F_n) [r] @=[d] [u]_⊂ [r] [d]^⊂ [r] [d]^⊂[u]_ω 1 1 [r] F_n ≈(F_n) [r](F_n) [r](F_n) [r] 1 *Letting [0,∞) = {0,1,2,…} we have subsemigroups D_+ and _+ and inclusions as follows:_0 = ω^(0) _+ = ω^[0,∞) _0 = ω^(0) _+ = ω^[0,∞) §.§ A “homotopy semigroup action” of _+ on G. To prepare for the construction of the semigroup action _+T, in this section we work downstairs in G and construct a “homotopy semigroup action” of _+ on G. What this means will be clear from the construction, but the intuitive idea is we associate a topological representative f^ψ G → G to each ψ∈_+ so that although the action equation f^ψ∘ f^ψ' = f^ψψ' does not hold exactly, it does hold “up to homotopy relative to G_t”. The values of f^ψ on edges of H^z_n are determined by appropriate iterates of f itself, and the values on G_t are determined, up to homotopy, by the “graph homotopy principle” of Section <ref>.Letting ‖(G) denote the set of vertices of G, we define subsetsP ⊂ Fix ⊂ V ⊂‖(G)as follows. First, V = ‖(G)H_u = ‖(G)H^z_u, the latter equation holding because each edge E ⊂ H^z_u ∖ H_u has both endpoints in H_u <cit.>. Next, P = H^z_uG_t = H_uG_t; note that P ⊂‖(G) because H^z_u and G_t are subgraphs having no edges in common; also P is the frontier both of H^z_u and of G_t in G = H^z_uG_t. Finally, Fix denotes the set of points in V fixed by f, and the inclusion P ⊂ Fix follows from <cit.>. Here is a summary of the construction of the “homotopy semigroup action”: *For each ψ∈_+ we define a topological representative f^ψ G → G such that the following hold: *We have f^__+ = _G (where __+∈_+ denotes the identity group element, and _GG → G the identity map).*f^ψ(G_u-1) ⊂ G_u-1 and f^ψ(G_t) ⊂ G_t.*For each edge E ⊂ H^z_u we have f^ψ_(E) = f^ω(ψ)_#(E). In particular, if ψ∈_0 then f^ψ H^z_u is the identity.*f^ψ V = f^ω(ψ) V. In particular f^ψ(V) ⊂ V and f^ψ fixes each point of Fix and of its subset P. 
*If a height u indivisible Nielsen path ρ exists then f^ψ fixes each endpoint of ρ and f^ψ_#(ρ)=ρ.*For each ψ,ψ' ∈_+ we define a homotopy h^ψ,ψ' G × [0,1] → G between f^ψ f^ψ' and f^ψψ' such that the following hold: *For each edge E ⊂ H_u the homotopy h^ψ,ψ' straightens the edge path f^ψ f^ψ' E relative to its endpoints to form the path f^ψψ' E by cancelling only edges of G_u-1, without ever cancelling any edges of H_u.*h^ψ,ψ' is stationary on V. *h^ψ,ψ' preserves G_t. Recall that for a homotopy hA × [0,1] → A to “preserve” a subset B ⊂ A means that h(B × [0,1]) ⊂ B, and for h to be “stationary” on B means that h(x × [0,1]) is constant for each x ∈ B.First we will construct the maps f^ψ and then the homotopies h^ψ,ψ', along the way proving the various requirements of ItemOnHzN and ItemSemiUnique.Constructing f^ψ. We being with the construction of f^ψ_t = f^ψ G_tG_t → G_t by applying the graph homotopy principle Lemma <ref>.Recall from Section <ref> the abelian semigroup of admissible S-tuples (f). Recall also the injection (f) ≈Ł(f) defined by ↦ϕ^ with topological representative f^ G → G; this injection is a semigroup homomorphism whose image Ł(f) generates . Since the restricted map f^_t = f^ G_tG_t → G_t is a homotopy equivalence that fixes each point of P, by Lemma <ref> ItemGPHomInvExists the map of pairs f^_t(G_t,P) → (G_t,P) is a homotopy equivalence in the category of pairs, and hence its homotopy class rel P is an element [f^_t] of ^_0(G_t,P), the pure automorphism group of (G_t,P) in the graph point category (Lemma <ref> ItemGPAut). By Fact <ref>, for each ,∈(f) the maps f^_t ∘ f^_t and f^+_t(G_t,P) → (G_t,P) are homotopic rel P, and so the map Ł(f) ↦^_0(G_t,P) defined by (ϕ^) = [f^_t] is a semigroup homomorphism. Since the commutative groupis generated by its subsemigroup Ł(f), a simple semigroup argument shows thatextends uniquely to a group homomorphism →^_0(G_t,P). For each ψ∈ choose f^ψ_t(G_t,P) → (G_t,P) to be a representative of (ψ); if ψ = ϕ^ is already in Ł(f) then we choose f^ψ_t=f^_t, and so if ψ is the identity then f^ψ_t is the identity. Notice that no straightening is carried out when f^ψ_t is applied, and so there is no need for the “#” operator in the definition of f^ψ_t.For later use, for each ∈(f) we denote f̅^_t = f^(ϕ^)^_t(G_t,P) → (G_t,P), which is the homotopy inverse of f^_t that represents ((ϕ^)^) = (ϕ^)^∈^_0(G_t,P).For each ψ∈_+ we may now define f^ψ G → G as follows: f^ψ(E)= f^ω(ψ)_#(E) for each E ⊂ H^z_u, andf^ψ G_t = f^ψ_tIf ψ∈ is the identity then clearly f^ψ G → G is the identity, verifying ItemOnIdentity. By construction f^ψ satisfies ItemOnGNMinusOne and ItemOnE. For item ItemOnFrontier, note first that for each x ∈ V there exists an oriented edge E ⊂ H_u with initial vertex x, and for each such E the initial vertex of the path f^ψ(E) = f^ω(ψ)_#(E) equals f^ω(ψ)(x); and if in addition x ∈ P then both of f^ω(ψ) and f^ψ_t fix x. This shows that f^ψ is well-defined on V, and that it restricts to f^ω(ψ) on V, completing the proof of item ItemOnFrontier; it also follows that f^ψ is continuous. The proof that f^ψ is a homotopy equivalence and is a topological representative of ψ will be delayed to the very end.X_s-paths under f^ψ. 
Item ItemOnNielsen is encompassed in the following generalization of ItemOnE, which will also be used for item ItemSemiUnique: *For each ψ∈_+ and each X_s-path γ with endpoints in V we have f^ψ_#(γ) = f^ω(ψ)_#(γ).To see why this holds, the general X_s-path γ with endpoints in V is a concatenation of three special types, and it suffices to check Item_f_psiOnXsPath only for these types:Type (a): edges in H^z_u; Type (b): X_s-paths in G_t having endpoints in the set P. Type (c): a height u indivisible Nielsen path of f;Type (a) is the special case handled already in item ItemOnE. If γ is of type (b), first note that for ∈(f) and ψ = ϕ^∈Ł(f) we have f^ψ_#(γ) = (f^ψ_t)_#(γ)=(f^_t)_#(γ)=f^_#(γ)=f^a_s_#(γ) = f^ω(ψ)_#(γ)where the second-to-last equation follows from Section <ref> ItemXsPathImages. For more general ψ∈_+, choose ,∈(f) so that ψ = (ϕ^b)^ϕ^, and note that a_s - b_s = ω(ϕ^) - ω(ϕ^) = ω(ψ) ≥ 0. In the group ^(G_t,P) we have the equation [f^ψ_t] = [f̅^_t] [f^_t] and so we may calculatef^ψ_#(γ)= (f^ψ_t)_#(γ) =(f̅^_tf^_t)_#(γ)=(f̅^_t)_#((f^_t)_#(γ))=(f̅^_t)_#(f^_#(γ))=(f̅^_t)_#(f^a_s_#(γ))=(f̅^_t)_#(f^b_s_#(f^a_s-b_s_#(γ)))=(f̅^_t)_#(f^_#(f^ω(ψ)_#(γ)))=(f̅^_t)_#((f^_t)_#(f^ω(ψ)_#(γ)))=(f̅^_tf^_t)_#(f^ω(ψ)_#(γ))=f^ω(ψ)_#(γ)where the second equations of the second and third lines follow from Section <ref> ItemXsPathIterates and ItemXsPathImages.For type (c), let γ be a height u indivisible Nielsen path of f. We may write γ as an alternating concatenation of nontrivial paths of the formγ = η_0μ_1η_1⋯ η_K-1 μ_Kη_Kwhere each η_k is a path in H^z_u and each μ_k is a Nielsen path of f in G_t with endpoints in P <cit.>. By definition of f^ψ we havef^ψ_#(γ) = [f^ω(ψ)_#(E_1)(f^ψ_t)_#(μ_1) f^ω(ψ)_#(η_1) ⋯ f^ω(ψ)_#(η_K-1)(f^ψ_t)_#(μ_K)f^ω(ψ)_#(η_K)]We claim that each μ_k is a Nielsen path of f^ψ_t for each ψ∈ D, and so (f^ψ_t)_#(μ_k)=μ_k. To prove this claim, it holds if ψ = ϕ^∈Ł(f) for some ∈(f), because in that case the left hand side equals (f^)_#(μ_k) = μ_k. Using that [f̅^ψ_t] = [f^ψ_t]^∈^(G,P), we have μ_k = (f̅^ψ_t)_#(f^ψ_t(μ_k)) = (f̅^ψ_t)_#(μ_k), and so the claim holds if ψ = (ϕ^)^ for some ∈(f). The general case holds because ψ = (ϕ^)^ϕ^ for some ,∈(f). Applying the claim we have:f^ψ_#(γ)= [f^ω(ψ)_#(E_1) μ_1 f^ω(ψ)_#(η_1) ⋯ f^ω(ψ)_#(η_K-1) μ_Kf^ω(ψ)_#(η_K)] = [f^ω(ψ)_#(E_1)f^ω(ψ)_#(μ_1) f^ω(ψ)_#(η_1) ⋯ f^ω(ψ)_#(η_K-1)f^ω(ψ)_#(μ_K)f^ω(ψ)_#(η_K)] = f^ω(ψ)_#(γ)This complete the proof of Item_f_psiOnXsPath. This also completes the construction of f^ψ G → G and the proof of ItemOnHzN for each ψ∈_+, except that we will delay further the proof that f^ψ is a homotopy equivalence and a topological representative of ψ. Constructing h^ψ,ψ'. Given ψ,ψ' ∈_+ we turn to the construction of the homotopy h^ψ,ψ' G × [0,1] → G from f^ψ∘ f^ψ' to f^ψψ'. Let θ = ψψ' ∈_+. First, using the homomorphism →^_0(G,P) we have (ψψ')=(ψ)(ψ') which translates to [f^ψψ'_t] = [f^ψ_t] [f^ψ'_t] and so there exists a homotopy rel P from f^ψ_t ∘ f^ψ'_t to f^ψψ'_t denoted h^ψ,ψ'_tG_t × [0,1] → G_t. Second, for each E ⊂ H^z_u we have f^ψ_# (f^ψ'_#(E)) = f^ψ_#(f^ω(ψ')_#(E)) = f^ω(ψ)_#(f^ω(ψ')_#(E)) = f^ω(ψψ')_#(E) = f^ψψ'_#(E)where the second equation holds by applying Item_f_psiOnXsPath together with the fact that f^ω(ψ')_#(E) is an X_s-path (Section <ref> ItemXsPathIterates). Putting these together, we may define h^ψ,ψ' so that for each edge E ⊂ H^z_u its restriction to E × [0,1] is a homotopy rel endpoints from f^ψ∘ f^ψ' E to f^ψψ' E, and its restriction to G_t ×[0,1] is equal to h^ψ,ψ'_t. 
Items ItemVNStationary and ItemHomotopyPreservesGNMinusOne are evident from the construction. Item ItemPsiPsiPrimeEEquation follows the definition of a relative train track map, which tells us that for each edge E ⊂ H_u and each integer i ≥ 0 the path f^i_#(E) is u-legal, and that for each u-legal path γ the homotopy that straightenes f^i(γ) to produce f^i_#(γ) cancels no edges of H_u. Topological representatives. For each ψ∈_+, since Ł(f) generateswe may choose ,∈(f) so that ψ = (ϕ^)^ϕ^, and hence ϕ^ψ = ϕ^. Since all three of ϕ^, ψ, ϕ^ are in _+, we have a homotopy h^ϕ^,ψ from f^∘ f^ψ to f^. Since f^,f^ are homotopy equivalences it follows that f^ψ is also, and since f^,f^ are topological representatives of ϕ^,ϕ^ respectively it follows that f^ψ is a topological representative of (ϕ^)^ϕ^=ψ. §.§ Semigroup actions of _+ on T and T^* We turn now to the construction of the action _+T, deriving various properties of the construction, and then we extend the action to _+T^*. Associated to each Ψ∈_+ we define a map f^Ψ T → T by carrying out the following steps. First let ψ∈_+ be the image of Ψ under the homomorphism (F_n) ↦(F_n), and consider the map f^ψ (G,G_u-1) → (G,G_u-1), part of the “homotopy semigroup action” constructed in Section <ref>. Let f^Ψ ( G, G_u-1) → ( G, G_u-1) be the unique Ψ-twisted equivariant lift of f^ψ to G. Let f^Ψ T → T be the Ψ-twisted equivariant map induced from f^Ψ by collapsing each component of G_u-1 to a point and then straightening on each edge E ⊂ T so that f^Ψ E stretches length by a constant factor. We record several properties of this semigroup action: * Twisted equivariance:The map f^Ψ is Ψ-twisted equivariant.* Semigroup action property:f^Ψ f^Ψ' = f^ΨΨ' for all Ψ,Ψ' ∈_+.Property Item_f_PsiComp follows from the fact that f^Ψ(f^Ψ'(E)) = f^ΨΨ'(E) for each edge E ⊂ T, which is a consequence of Section <ref> item ItemPsiPsiPrimeEEquationapplied to edges of H_u. * Vertex Action Property: f^Ψ takes vertices to vertices, and it restricts to a bijection of the set of vertices having nontrivial stabilizer.For the proof, denote vertex sets by V(G) ⊂ G and V(T) ⊂ T, and let V^(T) denote the subset of all v ∈ V(T) such that _F_n(v) is nontrivial. By Section <ref> items ItemOnFrontier and ItemOnGNMinusOne, the map f^ψ takes the set G_u-1 V(G) to itself, and since f^ψ is a topological representative it follows that f^ψ restricts further to a homotopy equivalence amongst the noncontractible components of G_u-1 V(G). Letting G_u-1⊂ G and V( G) ⊂ G be the total lifts of G_u-1 and V(G), it follows that f^Ψ G → G restricts to a self-map of the components of G_u-1 V( G), and it restricts further to a bijection amongst the components having nontrivial stabilizer. Under the quotient map G → T, the set of components of G_u-1 V( G)corresponds bijectively and F_n-equivariantly to the vertex set V(T). It follows that f^Ψ T → T restricts to a self map of V(T), and that it restricts further to a bijection of V^(T), proving property Item_f_PsiVertices. * Stretch Property:f^Ψ maps each edge E ⊂ T to an edge path f^Ψ(E) ⊂ T, stretching length by a uniform factor of λ^ω(Ψ). This follows from the definition of the piecewise Riemannian metric on T in Section <ref>, the eigenlength equation in T in Section <ref>, and Section <ref> item ItemOnE. * Train Track Property:For each Ψ,Ψ' ∈ and each edge E of T, the restriction of f^Ψ to the edge path f^Ψ'(E) is injective. This follows from property Item_f_PsiComp together with property Item_f_PsiEdges as applied to ΨΨ'. 
For the statement of property Item_f_KernelActs, recall from Section <ref> the subgroup [T] (F_n) and its pre-image [T] (F_n). Recall particularly Lemma <ref> of that section, which says that each subgroup of [T] containing (F_n) has a unique action on T extending the given action of the free group F_n ≈(F_n) such that each element of the action satisfies a twisted equivariance property. * Restricted action of _0:We have _0 [T] and hence _0 [T]. Furthermore, the restriction to _0 of the semi-action _+T is identical to the action _0T given by Lemma <ref>: the unique isometric action of _0 on T such that each Ψ∈_0 satisfies Ψ-twisted equivariance.For the proof, consider Ψ∈_0 with projected image ψ∈_0. The map f^ψ G → G restricts to the identity on each edge E ⊂ H_u, by Section <ref> item ItemOnE. The map f^Ψ G → G therefore permutes the edges of H_u, mapping each isometrically to its image. It follows that the map f^Ψ T → T is an isometry. Since f^Ψ satisfies Ψ-twisted equivariance (property Item_f_Twisted), it follows that ψ∈[T] (Lemma <ref>). Since ψ∈_0 is arbitrary, we have proved _0 [T] and that _0 [T]. Applying, Lemma <ref> we have also proved that the map Ψ↦ f^Ψ is the same as the restriction to _0 of the action [T]T given in that lemma, namely the unique action assigning to Ψ the unique Ψ-twisted equivariant isometry of T. * Invariance of Nielsen data: For each Ψ∈ and each j ∈ J there exists j' ∈ J such that N_j' = (f^Ψ)_#(N_j), and f^Ψ restricts to an order preserving bijection of basepoint sets Z_jZ_j'. In particular f^Ψ_# induces a bijection of the Nielsen paths in T (Definition <ref>). This follows immediately from Section <ref> item ItemOnNielsen. Finally, as an immediate consequence of property Item_f_PsiNielsen we have: *Extension to _+T^*: The semigroup action _+T extends uniquely to a semigroup action _+T^*, denoted f^Ψ_* = f^Ψ_T^* T^* → T^* for each Ψ∈_+, such that f^Ψ_* permutes the cone points P_j, and f^Ψ_* permutes the cone edges P_j Q by isometries. In particular, for each j ∈ J and Q ∈ Z_j, letting N_j'=f^Ψ_#(N_j) and Q' = f^Ψ(Q), we have f^Ψ_*(P_j)=P_j'andΨ( P_jQ) = P'_jQ'. §.§ Dynamics of the group action _0T^*Isometries of a simplicial tree equipped with a geodesic metric satisfy a dichotomy: each isometry is either elliptic or loxodromic. This dichotomy does not hold for all isometries of Gromov hyperbolic spaces — in general, there is a third category of isometries, namely the parabolics. In this section we prove Lemma <ref> which says in part that the dichotomy does hold for the action _0T^*. Under the action _0T^*, each element Δ∈_0 acts either loxodromically or elliptically on T^*. More precisely, _0 acts loxodromically on T^* if and only if _0 acts loxodromically on T and its axis A ⊂ T is not a Nielsen line (Definition <ref>), in which case A is a quasi-axis for Δ in T^*. Furthermore, the loxodromic elements of _0 have “uniform uniformly positive stable translation length” in the following sense: there exist constants η,κ>0 such that for each Δ∈_0 acting loxodromically on T^*, and for each x ∈ T^* and each integer k ≥ 0, we haved_*(Δ^k(x),x) ≥ kη - κRemarks.In this lemma, the terminology “uniform uniformly positive stable translation length” refers to the corollary that the stable translation length lim_k →∞1/k d_*(Δ^k(x),x))has a uniform positive lower bound η>0 independent of the choice of a loxodromic element Δ∈_0, and the rate at which this positive lower bound is approached is uniform. 
This property will be applied in Section <ref>, for purposes of characterizing the loxodromic elements of the action that is described in Section <ref>. Lemma <ref> and its proof could already have been presented almost word-for-word back in Section <ref> for the restricted action (F_n) ≈ F_nT^*. Other than the methods available in Section <ref>, the additional things needed to prove the lemma are that the larger action _0T^*, like its restriction, satisfies twisted equivariance and preserves Nielsen paths and associated objects (Section <ref> Item_f_PsiNielsen, Item_f_StarExtension). Throughout the proof we will apply Section <ref> Item_f_PsiNielsen, Item_f_StarExtension regarding the action of _0 on Nielsen paths, elements of the Nielsen set {N_i}, the basepoint sets Z_i ⊂ N_i, the cone points P_i, and the cone edges P_i Q, Q ∈ Z_i. In particular, because the action of each Δ∈_0 on T preserves Nielsen paths, it takes maximal ρ^*-subpaths of any path α to maximal ρ^*-subpaths of the path Δ(α). Consider Δ∈_0. If Δ acts elliptically on T then it fixes a vertex of T; and if in the geometric case Δ acts loxodromically on T and its axis is a Nielsen line N_j, then Δ fixes the corresponding cone point P_j; in either case Δ acts elliptically on T^*. Suppose that Δ acts loxodromically on T and its axis A ⊂ T is not a Nielsen line (this always happens in the nongeometric case, but the terminology of our proof applies to all cases). For each vertex x ∈ T and each integer k ≥ 1 consider the Proposition <ref> decomposition of the path [x,Δ^k(x)] into “ν-subpaths” meaning maximal ρ^* subpaths, alternating with “μ-subpaths” each having no ρ^* subpath. Since the intersection [x,Δ^k(x)] ∩ A contains a concatenation of k fundamental domains for the action of Δ on A, it follows that if [x,Δ^k(x)] ∩ A contains a ν-subpath of [x,Δ^k(x)] then it must contain k-1 distinct ν-subpaths of [x,Δ^k(x)], between which there are k-2 distinct μ-subpaths of [x,Δ^k(x)]; here we are applying the fact that Δ takes maximal ρ^* paths to maximal ρ^* paths. As a consequence, the collection of μ-subpaths of [x,Δ^k(x)] contains at least k-2 distinct edges of T. Applying Proposition <ref> ItemDPFFormula we obtain D_(x,Δ^k(x)) ≥ (k-2) η' where η' = min{l_(E) E is an edge of T}. Applying Proposition <ref>, in T^* we have d_*(x,Δ^k(x)) ≥ k (η'/K) - (2η'/K + C) = kη - κ', where η = η'/K, κ' = 2η'/K + C, and K, C are the constants of Proposition <ref>. It immediately follows that Δ acts loxodromically on T^*. This estimate is for vertices of T, but a similar estimate holds for arbitrary points x ∈ T^*, replacing κ' by κ = κ' + 2δ where δ is an upper bound for the distance in T^* from an arbitrary point of T^* to the vertex set of T.
In Section <ref>we studied the extended disintegration group (F_n) and its subsemigroup _+, and we constructed the train track semigroup action _+T^*.In Section <ref> we shall show how to suspend the semigroup action _+T^* to an isometric action(this is a completely general construction that applies to the disintegration group of anyhaving topstratum).In Section <ref> we put the pieces together to complete the proof. §.§ The suspension action . Recall from Notations <ref> the short exact sequence1 ↦_0 ↦→ 1Choose an automorphism Φ∈ representing ϕ∈, and so ω_u(Φ)=ω(ϕ)=1. It follows that Φ determines a splitting of the above short exact sequence, and hence a semidirect product structure ≈_0 _Φ expressed in terms of the inner automorphism ondetermined by Φ, namely I(Ψ) = I_Φ(Ψ) = ΦΨΦ^. Thus each element ofmay be written in the form ΔΦ^m for unique Δ∈_0 and m ∈, and the group operation in this notation is defined by(Δ_1 Φ^m_1)(Δ_2 Φ^m_2) = (Δ_1 I^m_1(Δ_2)) Φ^m_1+m_2Recall from Definition <ref> the construction of : using the chosen automorphism Φ representing ϕ, the corresponding Φ-twisted equivariant map of T^* is denoted in abbreviated form as f_* = f^Φ_T^*, andis the suspension space of f_*, namely the quotient of T ×× [0,1] with identifications [x,k,1] ∼ [f_*(x),k+1,0]. Recall also various other notations associated tothat are introduced in Definitions <ref> and <ref>.To define the action onof each element Ψ = ΔΦ^k ∈ it suffices to carry out the following steps: first we define the group action _0; next we define the action of the element Φ; then we define the action of each element ΔΦ^k by composition; and finally we verify that the two sides of the semidirect product relation ΦΔ = I(Δ) Φ act identically. Let each Δ∈_0 act on [x,k,t] ∈ by the equationΔ· [x,k,t] = [f^I^k(Δ)_*(x), k, t] = [f^Φ^k ΔΦ^-k_*(x),k,t]and note that this formula is well defined because, using the properties of the semigroup action _+T from Section <ref>, we haveΔ· [x,k,1]= [f^Φ^k ΔΦ^-k_*(x),k,1] = [f^Φ_* ∘ f^Φ^k ΔΦ^-k_*(x) ,k+1,0] = [f^Φ^k+1ΔΦ^-(k+1)_* ∘ f^Φ_*(x),k+1,0] = Δ· [f^Φ_*(x),k+1,0] = Δ· [f_*(x),k+1,0]Each Δ∈_0 evidently acts by an isometry. Again using the semigroup action properties, the action equation is satisfied for each Δ',Δ∈_0:Δ' · (Δ· [x,k,t])= [f^I^k(Δ')_*∘ f^I^k(Δ)_*(x),k,t] = [f^I^k(Δ') ∘ I^k(Δ)_*(x),k,t] = [f^I^k(Δ'Δ)_*(x),k,t] = Δ' Δ· [x,k,t]This completes the definition of the isometric action _0. We note that the restriction to F_n ≈(F_n) of the action _0 agrees with the action F_n given in Definition <ref>, defined in general by γ· [x,k,r] = [Φ^k(γ) · x, k, r] for each γ∈ F_n. In the special case k=0, r=0, the equation γ· x = f^i_γ_*(x) holds by Lemma <ref> ItemLTUInner. The extension to the general case follows easily.Next, let Φ act by shifting downward,Φ· [x,k,t] = [x,k-1,t]which is evidently a well-defined isometry. As required, for each Δ∈_0 the two sides of the semidirect product equation act in the same way:Φ·Δ· [x,k,t]= Φ· [f^I^k(Δ)_*(x),k,t] = [f^I^k(Δ)_*(x),k-1,t] I(Δ) ·Φ· [x,k,t]= I(Δ) · [x,k-1,t] = [f^I^k-1(I(Δ))_*(x),k-1,t] = [f^I^k(Δ)_*(x),k-1,t]Notice that since the action of each Δ∈_0 preserves each “horizontal level set” _s, and since the action of Φ has “vertical translation” equal to -1=-ω(Φ) meaning that Φ(_s) = _s-ω(Φ), it follows that for each Ψ∈ the integer -ω(Ψ) is the “vertical translation length” for the action of Ψ in . 
We record this for later use as: For each Ψ∈ and each fiber _s (s ∈), we have Ψ(_s) = _s - ω(Ψ)for any s ∈§.§ Hyperbolic Action Theorem, Multi-edge case: Proof.First we set up the notation, based on the formulation of the multi-edge case in Section <ref> and the outline of the proof in Section <ref>, and we apply the results of Section <ref> to obtain a hyperbolic action . After that, Conclusions ItemThmF_EllOrLox, ItemThmF_Nonelem and ItemThmF_WWPD of the Hyperbolic Action Theorem are proved in order.Setup of the proof. We have a subgroup (F_n), having image (F_n) under the quotient homomorphism (F_n) ↦(F_n), and having kernel J = (↦), which satisfies the hypotheses of the Hyperbolic Action Theorem: is abelian; is finitely generated and not virtually abelian;and no proper, nontrivial free factor of F_n is -invariant. The conclusion of the Hyperbolic Action Theorem is about the existence of a certain kind of hyperbolic action of a finite index normal subgroup of , and so we are free to replacewith any finite index subgroup ', because once the conclusion is proved using some hyperbolic action ' of some finite index normal subgroup ' ', the same conclusion follows for the restriction of ' to the actionwhereis the intersection of all of itsconjugates of '; one need only observe that the conclusions of the Hyperbolic Action Theorem for the action 'S imply the same conclusions when restricted to any finite index subgroup of '.We may therefore assume thatis a rotationless abelian subgroup, using the existence of an integer constant k such that the k^th power of each element of (F_n) is rotationless, and replace the abelian groupby its finite index subgroup of k^th powers.We have a maximal, proper, -invariant free factor systemof co-edge number ≥ 2 in F_n. Applying the Disintegration Theorem, we obtain ϕ∈ such that for anyrepresentative fG → G of ϕ with disintegration group =(f), the subgrouphas finite index in . We choose f so that the penultimate filtration element G_u-1 represents the free factor system . Since the co-edge number ofis ≥ 2, the top stratum H_u is . Replacingwith , we may thus assume . We now adopt Notations <ref> and <ref>, regarding the free splitting F_nT associated to the marked graph pair (G,G_u-1), the action F_nT^* obtained by coning off the elements of the Nielsen set, the disintegration group =(f), and its associated extended disintegration group . Using thatit follows that . Setting J = (↦) and K = () we may augment the commutative diagram of Notations <ref> ItemDisintGroup to obtain the commutative diagram with short exact rows shown in Figure <ref>. From Notations <ref> ItemSubsemigroups we also have the subgroups and subsemigroups _0_+ and _0 _+.Let _+T^* be the semigroup action described in Section <ref>, associating to each Ψ∈_+ a map f^Ψ_T^* T^* → T^*. Pick Φ∈ to be any pre-image of ϕ∈, and so ω_u(Φ) = ω_u(ϕ)=1 (see Notations <ref> ItemCoordHomDf). Letbe the suspension space of f^Φ_T^* T^* → T^* as constructed in Definition <ref>. Letbe the isometric action constructed in Section <ref>. We will make heavy use of the integer sections _j (j ∈) and of the _0-equivariant identification _0 ↔ T^*.We shall abuse notation freely by identifying F_n ≈(F_n) given by γ↔ i_γ, where i_γ(δ)=γδγ^. For example, using this identification we often think of JK as subgroups of F_n. 
We also note the equation i_Ψ(γ) = Ψ∘ i_γ∘Ψ^, Ψ∈(F_n), γ∈ F_n, which expresses the fact that the isomorphism F_n ≈(F_n) is equivariant with respect to two actions of (F_n): its standard action on F_n; and its action by inner automorphisms on its normal subgroup (F_n). Combining this equation with Item_f_Twisted Twisted equivariance of Section <ref>, it follows that under the isomorphism F_n ≈(F_n), the action F_nT agrees with the action (F_n)T obtained by restricting the action _0T. We turn to the proof of the three conclusions of the Hyperbolic Action Theorem for the action of = on . Proof of Conclusion ItemThmF_EllOrLox: Every element of acts either elliptically or loxodromically on . We show more generally that every element of acts either elliptically or loxodromically on . From Section <ref> the general element of has the form Ψ = ΔΦ^m for some Δ∈_0, and Ψ(_j) = _j-m for any j ∈. If m ≠ 0 then, by Lemma <ref>, for each k ∈ and each x ∈ we have d(x,Ψ^k(x)) ≥ |km|, and so Ψ is loxodromic. Suppose then that m=0 and so Ψ=Δ preserves _j for each j ∈. Consider in particular the restriction Δ_0 ≈ T^* and the further restriction Δ T. If Δ is elliptic on T then it fixes a point of T, hence Δ fixes a point of T^* ≈_0 ⊂, and so Δ is elliptic on T^* and on . If Δ is loxodromic on T with axis L_Δ⊂ T, and if L_Δ is a Nielsen line N_j then Δ fixes the corresponding cone point P_j ∈ T^* ≈_0 ⊂, and so Δ is elliptic on T^* and on . It remains to consider those Δ∈_0 which act loxodromically on T and whose axes in T are not Nielsen lines. Applying Lemma <ref>, each such Δ acts loxodromically on T^* and for each x ∈ T^* and each integer k ≥ 1 we have (*) d_*(x,Δ^k(x)) ≥ kη - κ, where the constants η,κ > 0 are independent of Δ and x. Consider the function which assigns to each integer k the minimum translation distance of vertices v ∈ under the action of Δ^k: σ(k) = inf_v ∈ d_(v,Δ^k(v)). To prove that Δ acts loxodromically on we apply the classification of isometries of Gromov hyperbolic spaces <cit.>, which says that every isometry is elliptic, parabolic, or loxodromic. But if Δ is elliptic or parabolic then the function σ(k) is bounded. Thus it suffices to prove that lim_k →∞σ(k) = ∞. For each integer i, consider Δ'_i = Φ^i ΔΦ^-i∈_0. Consider also the Φ^-i-twisted equivariant map j_iT^* = _0 → S_i given by j_i[x,0,0] = [x,i,0], which is an isometry from the metric d_*=d_0 to the metric d_i. This map j_i conjugates the action of Δ^k on _i to the action of (Δ'_i)^k = Φ^i Δ^k Φ^-i∈_0 on _0, because Δ^k j_i[x,0,0]= Δ^k[x,i,0] = [f_*^Φ^i Δ^k Φ^-i(x),i,0] = [f_*^Δ'_i(x),i,0]= j_i [f_*^Δ'_i(x),0,0] = j_i Δ'_i [x,0,0]. Applying the inequality (*) to Δ'_i, and then applying the twisted equivariant isometric conjugacy j_i between Δ^k and (Δ'_i)^k, for each vertex p ∈_i we have (**) d_i(p,Δ^k(p)) ≥ kη - κ. As seen earlier, the uniformly proper maps _i have a uniform gauge function δ [0,∞) → [0,∞) (see (*) in Section <ref> just before Lemma <ref>). If σ(k) does not diverge to ∞ then there is a constant M and arbitrarily large values of k such that d_(p,Δ^k(p)) ≤ M holds for some i and some vertex p ∈_i, implying that d_i(p,Δ^k(p)) ≤δ(M), contradicting (**) for sufficiently large k. This completes the proof of Conclusion ItemThmF_EllOrLox. Furthermore, we have proved the following which will be useful later: * For each Δ∈_0 the following are equivalent: Δ acts loxodromically on ; Δ acts loxodromically on T^*; Δ acts loxodromically on T and its axis is not a Nielsen line. Proof of Conclusion ItemThmF_Nonelem: The action is nonelementary.
We shall focus on the restricted actions of the subgroup J (often abusing notation, as warned earlier, by identifying J (F_n) with the corresponding subgroup of F_n). We prove first that J has nonelementary action on T, then on T^*, and then on ; from the latter it follows that the whole actionis nonelementary.We shall apply Lemma <ref> and so we must check the hypotheses of that lemma. Let V^ be the set of vertices v ∈ T having nontrivial stabilizer under the action of F_n. As shown in Section <ref> Item_f_PsiVertices, the semigroup action _+T restricts to a semigroup action _+V^ having the property that each element of Ψ∈_+ acts by a Ψ-twisted equivariant bijection of V^, and it follows immediately that this semigroup action extends uniquely to a group action V^ having the same property. Restricting towe obtain an action V^ satisfying the hypotheses in the first paragraph of Lemma <ref>. By hypothesis of the Hyperbolic Action Theorem, no proper nontrivial free factor of F_n is fixed by , and in particular no subgroup _F_n(v) (v ∈ V^) is fixed by . Finally, the free group J has rank ≥ 2 because otherwise, sinceis abelian, it would follow thatis a solvable subgroup of (F_n) (F_n+1) and hence is virtually abelian by , contradicting the hypothesis thatis not virtually abelian. Having verified all the hypotheses of Lemma <ref>, from its conclusion we have that the action JT is nonelementary.Since the action JT is nonelementary, its minimal subtree T^J is not a line, and so T^J contains a finite path α which is not contained in any Nielsen line. Furthermore, α is contained in the axis of some loxodromic element γ∈ J whose axis L_γ is therefore not a Nielsen line. Applying Lemma <ref>, the action of γ on T^* is loxodromic and L_γ is a quasi-axis for γ. Choosing δ∈ J - _F_n(L_γ) and letting γ' = δγδ^, it follows that γ' also acts loxodromically on T and on T^*, and that its axis L_γ' in T is also a quasi-axis in T^*. By choice of δ the intersection L_γ L_γ' is either empty or finite. Since neither of the lines L_γ, L_γ' is a Nielsen axis, each ray in each line has infinite D_ diameter and so goes infinitely far away from the other line in T^*-distance. It follows that γ,γ' are independent loxodromic elements on T^*, proving that JT^* is nonelementary. Finally, using the same γ,γ' as in the previous paragraph whose axes L_γ,L_γ' in T are not Nielsen lines, we showed in the proof of Conclusion ItemThmF_EllOrLox that γ,γ' act loxodromically on . Furthermore, since the lines L_γ,L_γ' have infinite Hausdorff distance in T^*, it follows by they also have infinite Hausdorff distance in , as shown in item (*) of Section <ref>. This proves that γ,γ' are independent loxodromics onand hence the action J is nonelementary.Proof of Conclusion ItemThmF_WWPD: Each element of J that acts loxodromically onis a strongly axial, WWPD element of . Given γ∈ J = (F_n)K, as was shown earlier under the proof of Condition ItemThmF_EllOrLox, we know that the action of γ onis loxodromic if and only if its action on T^* is loxodromic, if and only if its action on T is loxodromic with an axis L_γ⊂ T that is not a Nielsen line. It follows that L_γ is a quasi-axis for the actions of γ on T^* and on .We shall prove that each such element γ is a WWPD element with respect to three actions:Step 1: The action KT, with respect to the metric d_u (Definition <ref>); Step 2: The action KT^*, with respect to the metric d^* (Definition <ref>); Step 3: The action , with respect to the metric d_ (Definition <ref>). 
The proofs in Steps 2 and 3 are bootstrapping arguments, deriving the WWPD property for the current step from the WWPD property of the previous step. Once the WWPD arguments are complete, we will verify that γ is strongly axial with respect to the action .Step 1: The action KT. As shown in Section <ref> Item_f_KernelActs, the action JT is the restriction of the action KT. Since J is normal in K, and since JT is the restriction to J of the free splitting action (F_n) ≈ F_nT, we may apply Lemma <ref> to conclude that γ is a WWPD element of the action KT with respect to the edge counting metric d_u. Step 2: The action KT^*.The underlying intuition of this bootstrapping proof is that WWPD behaves well with respect to coning operations, for elements whose loxodromicity survives the coning process. We shall use the WWPD property of γ with respect to the action KT and metric d_u to derive the WWPD property of γ with respect to the action KT^* and the “coned off” metric d^*. For this purpose we shall use the original version of WWPD from <cit.>, referred to in <cit.> as “WWPD (3)”: WWPD (3): Given a hyperbolic action Γ X, a loxodromic element h ∈Γ satisfies WWPD if and only if for any quasi-axis ℓ of h and for any closest point map π X →ℓ there exists B ≥ 0 such that for any g ∈Γ - ( h), the set g(ℓ) has diameter ≤ B.Remark: This statement is equivalent to any alternate version in which either of the universal quantifiers on ℓ or on π is replaced with an existential quantifier, because any two quasi-axes of h have finite Hausdorff distance, and any two closest point maps X →ℓ have finite distance in the sup metric. We shall use these equivalences silently in what follows. Let π T → L_γ be the retraction which maps each component C of T-L_γ to the point where the closure of C intersects L_γ. This map π is the unique closest point map from T to L_γ with respect to the metric d_u, and so in the context of Step 1 we can apply WWPD (3) to conclude that there is a constant B ≥ 0 such that for each g ∈ K - (γ), the set π(g(L_γ)) has d_u-diameter ≤ B, and hence the set π(g(L_γ)) has D^u-diameter ≤ B (Corollary <ref>). Recall two facts: the coarse metrics D_u and D_ are quasicomparable on T (Definition <ref>); and the inclusion map TT^* is a quasi-isometry with respect to D_ on T and d^* on T^* (Proposition <ref>). It follows from these two facts that the sets π(g(L_γ)) have uniformly bounded d^*-diameter for all g ∈ K - (γ). Property WWPD (3) for h with respect to the action KT^* and the metric d^* will therefore be proved if we can demonstrate the following: for any closest point map π^*T^* → L_γ with respect to the metric d^*, the distance d^*(π(x),π^*(x)) is uniformly bounded as x varies over T. For this purpose, using the same two facts above, it suffices to prove that the quasidistance D_u(π(x),π^*(x)) is uniformly bounded as x varies over T. Applying the Coarse Triangle Inequality for the path function L_u (Lemma <ref>), and using that π^*(x) minimizes D^u distance from x to L_γ, we haveD_u(π(x),π^*(x))≤ D_u(π(x),x) + D_u(x,π^*(x)) +C_<ref>≤ D_u(π(x),x) + D_u(x,π(x)) + C_<ref>≤ 2B + C_<ref>This completes the proof that γ is a WWPD element of the action KT^*.Step 3: The action .Arguing by contradiction, suppose that γ fails to be a WWPD element of the actionwith respect to the metric d_. Fix a closest point map π→ L_γ. Applying WWPD (3) (stated above, and see the following remark), we obtain an infinite sequence Ψ_i ∈ - _(γ) (i=1,2,…) such that the diameter of π(Ψ_i(L_γ)) goes to +∞ as i → +∞. 
Denote L_i = Ψ_i(L_γ) which is a (k,c)-quasiaxis infor Ψ_i γΨ_i^, where k ≥ 1, c ≥ 0 are independent of i. Using that the coordinate homomorphism ω_u → is surjective with kernel K, and that ω_u(Φ)=1 (see Figure <ref>), for each i we have Ψ_i = Δ_i Φ^m_i for a unique Δ_i ∈ K and a unique integer m_i. Choose points w_i,x_i ∈ L_γ and y_i,z_i ∈ L_i such that π(y_i)=w_i, π(z_i)=x_i, the point w_i precedes the point x_i in the oriented axis L_γ, and d_(w_i,x_i) → +∞. By replacing Ψ_i with Δ_i Φ^m_iγ^k for an appropriate exponent k that depends on i, after passing to a subsequence we may assume that the subpaths [w_i,x_i] of L_γ are nested:(*)[w_1,x_1] ⊂⋯⊂ [w_i,x_i] ⊂ [w_i+1,x_i+1] ⊂⋯ We next prove that the sequence of integers (m_i) takes only finitely many values. Consider the (k,c)-quasigeodesic quadrilateral Q_i inhaving the (k,c)-quasigeodesics [w_i,x_i] ⊂ L_γ and [y_i,z_i] ⊂ L_i as one pair of opposite sides, and having geodesics w_i y_i, x_i z_i as the other pair of opposite sides. By a basic result in Gromov hyperbolic geometry <cit.>, there exists a finite, connected graph G_i equipped with a path metric, and having exactly four valence 1 vertices denoted w̅_i, x̅_i, y̅_i, z̅_i, and there exists a quasi-isometry f_i(Q_i,w_i,x_i,y_i,z_i) → (G_i,w̅_i, x̅_i, y̅_i, z̅_i) with uniform quasi-isometry constants independent of i (depending only on k, c, and the hyperbolicity constant of ). We consider two cases, depending on properties of the graph G_i. In the first case, the paths [w̅_i, x̅_i] and [y̅_i, z̅_i] are disjoint in G_i, and so the points y̅_i, z̅_i have the same image under the closest point projection G_i ↦ [w̅_i, x̅_i]; it follows that π(y_i)=w_i and π(z_i)=x_i have uniformly bounded distance, which happens for only a finite number of values of i. In the second case the paths [w̅_i, x̅_i] and [y̅_i, z̅_i] are not disjoint in G_i, and so the minimum distance between the paths [w_i,x_i] and [y_i,z_i] inis uniformly bounded, implying that the minimum distance inbetween _0 and Φ^m_i(_0) is uniformly bounded, but that distance equals m_i by Lemma <ref>. In each case it follows that m_i takes on only finitely many values. We may therefore pass to a subsequence so that m_i=m is constant, hence Ψ_i = Δ_i Φ^m and L_i ⊂_m. By restriction of the actionwe obtain an isometric action K _m. Reverting to formal action notation, let _0K _0 and _mK _m denote the two restrictions of the action K. We have a commutative diagram K [rrr]^_0[d]_i_Φ^m (_0) [d]^Ad_Φ^mK [rrr]^_m (_m)in which the left vertical arrow is the restriction to the normal subgroup K of the inner automorphism i_Φ^m→, and the right vertical arrow is the “adjoint” isomorphism given by the formula Ad_Φ^m(Θ)(y) = Φ^m ·Θ(Φ^-m· y) for each y ∈_m and each Θ∈(_0) Using the above commutative diagram, observe that i_Φ^m takes loxodromic WWPD elements of the action _0K _0 to loxodromic WWPD elements of the action _mK _m. Combining this with Step 2, it follows that γ' = i_Φ^m(γ) is a loxodromic WWPD element of the action K _m. Noting that the line L_γ' = Φ^m(L_γ) is a quasi-axis for γ' in _m, we may apply the property WWPD (3) to L_γ', with the conclusion that for each Δ∈ K - (γ') the image of an _m-closest point map Δ· L_γ'↦γ' has uniformly bounded diameter. By careful choice of Δ, namely the members of the sequence Δ_i^Δ^_i-1∈ K - (γ'), we shall use the nesting property (*) to obtain a contradiction. 
Knowing that [y_i-1,z_i-1] ⊂ L_i-1 is uniformly Hausdorff close into [w_i-1,x_i-1], and using that the inclusion _m → is uniformly proper (Lemma <ref>), it follows that the diameter of [y_i-1,z_i-1] in _m goes to +∞ as i → +∞. Also, knowing that [w_i-1,x_i-1] is a subpath of [w_i,x_i], and that [w_i,x_i] is uniformly Hausdorff close into [y_i,z_i] ⊂ L_i, it follows that [y_i-1,z_i-1] is uniformly Hausdorff close into a subpath of [y_i,z_i]. Again using that _m → is uniformly proper (Lemma <ref>), it follows that [y_i-1,z_i-1] is uniformly Hausdorff close in _m to a subpath of [y_i,z_i]. It follows that the diameter of the image of the _m-closest point map L_i-1↦ L_i goes to +∞, because it is greater than __m[y_i,z_i] - C for some constant C independent of i. Applying the isometric action of the group element Δ_i^-1, it follows that the diameter of the image of the _m-closest point map from Δ_i^-1Δ_i-1 L_γ' = Δ_i^-1 L_i-1 to Δ_i^-1 L_i = L_γ' goes to +∞ as i → +∞. This gives the desired contradiction, completing Step 3. The strong axial property. For each γ∈ J that acts loxodromically on the tree T, we regard its axis L_γ as an oriented line with positive/negative ideal endpoints equal to the attracting/repelling points _+ γ, _-γ, respectively. Assuming also L_γ is not a Nielsen line, we shall prove that L_γ is a strong axis with respect to the action . It suffices to prove this for the set Z ⊂ J consisting of all root free γ∈ J such that L_γ is not a Nielsen line. We have indexed sets of oriented lines Ł = {L_γγ∈ Z} and of ideal points = {_+ γγ∈ Z}, with bijective indexing maps Z ↦ L_Z and Z ↦. Having shown already that each γ∈ Z acts loxodromically on T^* and on , with attracting/repelling pairs denoted _±γ∈ T^* and _±γ∈, we obtain indexed sets ^* = {_+γγ∈ Z} and ^ = {_+γγ∈ Z}. We next show that the indexing map Z ↦^ is a bijection. The oriented line L_γ is also a quasi-axis for γ in , and hence the positive end of L_γ limits on _+γ in . Consider γ ≠ δ∈ Z. The line L_γδ⊂ T with ideal endpoints _+γ,_+δ in T is a concatenation of the form L_γδ = R̅_γ A R_δ where R_γ is a positive ray in L_γ and R_δ is a positive ray in L_δ, hence the two ends of L_γδ limit on _+γ,_-δ in . To show that _+γ ≠ _+δ it therefore suffices to prove that L_γδ is a quasigeodesic line in . Since L_γδ is a reparameterized quasigeodesic in T^* (Proposition <ref>), and since its disjoint subrays R_γ, R_δ each have infinite D_ diameter in T and hence infinite d^* diameter in T^* (Proposition <ref>), it follows that L_γδ is a quasigeodesic line in T^*; this shows that _+γ ≠ _+δ. Arguing by contradiction, suppose that _+γ = _+δ∈, and so both ends of L_γδ limit on that point of . Since the rays R_γ, R_δ are quasigeodesics in , it follows that there are sequences (x_i) in R_γ and (y_i) in R_δ such that the distances d^(x_i,y_i) are bounded. Since x_i,y_i approach _+γ ≠ _+δ in T^*, respectively, it follows that d^*(x_i,y_i) approaches +∞. Since the inclusion T^* is uniformly proper (Lemma <ref>), it also follows that the distances d^(x_i,y_i) approach +∞, a contradiction. We claim that there exist actions of the group on the sets Z, and ^ satisfying the following properties: * The indexing bijections Z ↦, ^ are -equivariant. * No element of - K fixes any element of Z. Once this claim is proved, from ItemHEqBij and ItemHMinusJ it follows that for each γ∈ Z we have _K(_±γ) = _K(^_±γ) = _ (^_±γ).
In the place where Lemma <ref> is applied in Step 1 we may further conclude that each γ∈ Z is strongly axial with respect to the action KT, and hence the action of _K(_±γ) on T preserves L_γ. The action of _ (^_±γ) therefore preserves L_γ, proving that γ is strongly axial with respect to the action , and we are done. For proving the claim, we will for the most part restrict our attention to the semigroup _+ = _+ and its semigroup action _+T which is obtained by restricting _+T (Section <ref>). The action Z is defined by restricting to Z the natural inner action of on its own normal subgroup J = (F_n), but we must verify that Z is invariant under that action. Given γ∈ J, and Ψ∈_+ = _+, we note three things. First, applying Property Item_f_Twisted Twisted Equivariance of Section <ref>, the action of γ on T is loxodromic with axis L_γ if and only if the action of Ψ(γ) on T is loxodromic with axis f^Ψ_#(L_γ)=L_Ψ(γ). Second, γ is root free if and only if Ψ(γ) is root free. Third, applying Property Item_f_PsiNielsen Invariance of Nielsen data of Section <ref>, the axis L_γ is a Nielsen line if and only if L_Ψ(γ) is a Nielsen line. It follows that γ∈ Z if and only if Ψ(γ) ∈ Z, hence Z is -invariant. The argument in the previous paragraph yields a little more: the bijection Z ↔Ł is equivariant with respect to the action Z and the action Ł, the latter of which is defined by Ψ· L_γ = f^Ψ_#(L_γ) = L_Ψ(γ). Existence of actions of on and ^ satisfying item ItemHEqBij is an immediate consequence: for each Ψ∈_+, the ideal point in or in ^ represented by the positive end of the oriented line L_γ is taken by Ψ∈_+ to the ideal point represented by the positive end of L_Ψ(γ). Finally, item ItemHMinusJ is a special case of the following: for each Ψ∈ - _0 and each γ∈ F_n, if γ is loxodromic and if its axis L_γ is not a Nielsen line then Ψ(γ) ≠ γ. Suppose to the contrary that Ψ(γ)=γ, let Ψ = ΔΦ^m where Δ∈(F_n) and m ≠ 0, and let c be the conjugacy class of γ in F_n, and so ϕ^m(c)=c. Let σ be the circuit in G represented by c (Notations <ref>). We may assume m > 0, and so f^m_#(σ)=σ. It follows that σ is a concatenation of fixed edges and indivisible Nielsen paths of f (<cit.>, Fact 1.39). However, no edge of H_u is fixed, and the only indivisible Nielsen path that crosses an edge of H_u is ρ_u^± 1 which has at least one endpoint disjoint from G_u-1 (Notations <ref> ItemCTiNP). One of two cases must therefore hold: either σ is contained in G_u-1, implying that γ is not loxodromic (Definition <ref>); or ρ exists and is closed, and σ is an iterate of ρ or ρ^-1, implying that L_γ is a Nielsen line (Definition <ref>). In either case, we are done verifying item ItemHMinusJ.
http://arxiv.org/abs/1702.08050v5
{ "authors": [ "Michael Handel", "Lee Mosher" ], "categories": [ "math.GR", "20F65 (primary) 57M07 (secondary)" ], "primary_category": "math.GR", "published": "20170226163201", "title": "Hyperbolic actions and 2nd bounded cohomology of subgroups of $\\mathsf{Out}(F_n)$. Part II: Finite lamination subgroups" }
angsumandas@sxccal.eduDepartment of Mathematics,St. Xavier's College, Kolkata, India. angsumandas@sxccal.edu [cor1]Corresponding author In this paper we introduce a graph structure, called subspace sum graph 𝒢(𝕍) on a finite dimensional vector space 𝕍 where the vertex set is the collection of non-trivial proper subspaces of a vector space and two vertices W_1,W_2 are adjacent if W_1 + W_2=𝕍. The diameter, girth, connectivity, maximal independent sets, different variants of domination number, clique number and chromatic number of 𝒢(𝕍) are studied. It is shown that two subspace sum graphs are isomorphic if and only if the base vector spaces are isomorphic. Finally some properties of subspace sum graph are studied when the base field is finite. subspace Galois numbers q-binomial coefficient [2008] 05C25 05C69 § INTRODUCTIONApart from its combinatorial motivation, graph theory also helps to characterize various algebraic structures by means of studying certain graphs associated to them. Till date, a lot of research, e.g., <cit.> has been done in connecting graph structures to various algebraic objects. Recently, some works associating graphs with subspaces of vector spaces can be found in <cit.>. In this paper we define a graph structure on a finite dimensional vector space 𝕍 over a field 𝔽, called Subspace Sum Graph of 𝕍 and derive some properties of the graph using the algebraic properties of vector subspaces.§ DEFINITIONS AND PRELIMINARIESIn this section, for convenience of the reader and also for later use, we recall some definitions, notations and results concerning elementary graph theory. For undefined terms and concepts the reader is referred to <cit.>.By a graph G=(V,E), we mean a non-empty set V and a symmetric binary relation (possibly empty) E on V. The set V is called the set of vertices and E is called the set of edges of G. Two element u and v in V are said to be adjacent if (u,v) ∈ E. H=(W,F) is called a subgraph of G if H itself is a graph and ϕ≠ W ⊆ V and F ⊆ E. If V is finite, the graph G is said to be finite, otherwise it is infinite. Two graphs G=(V,E) and G'=(V',E') are said to be isomorphic if there exists a bijection ϕ: V → V' such that (u,v) ∈ E(ϕ(u),ϕ(v)) ∈ E'. A path of length k in a graph is an alternating sequence of vertices and edges, v_0,e_0,v_1,e_1,v_2,…, v_k-1,e_k-1,v_k, where v_i's are distinct (except possibly the first and last vertices) and e_i is the edge joining v_i and v_i+1. We call this a path joining v_0 and v_k. A cycle is a path with v_0=v_k. A graph is said to be Eulerian if it contains a cycle containing all the edges in G exactly once. A cycle of length 3 is called a triangle. A graph is connected if for any pair of vertices u,v ∈ V, there exists a path joining u and v. A graph is said to be triangulated if for any vertex u in V, there exist v,w in V, such that (u,v,w) is a triangle. The distance between two vertices u,v ∈ V,  d(u,v) is defined as the length of the shortest path joining u and v, if it exists. Otherwise, d(u,v) is defined as ∞. The diameter of a graph is defined as diam(G)=max_u,v ∈ V  d(u,v), the largest distance between pairs of vertices of the graph, if it exists. Otherwise, diam(G) is defined as ∞. The girth of a graph is the length of its shortest cycle, if it exists. Otherwise, it is defined as ∞. If all the vertices of G are pairwise adjacent, then G is said to be complete. A complete subgraph of a graph G is called a clique. A maximal clique is a clique which is maximal with respect to inclusion. 
The clique number of G, written as ω(G), is the maximum size of a clique in G. A subset I of V is said to be independent if any two vertices in that subset are pairwise non-adjacent. A maximal independent set is an independent set which is maximal with respect to inclusion. The chromatic number of G, denoted as χ(G), is the minimum number of colours needed to label the vertices so that adjacent vertices receive different colours. It is known that for any graph G, χ(G)≥ω(G). A graph G is said to be weakly perfect if χ(G)= ω(G). A graph G is said to be perfect if χ(H)= ω(H) for every induced subgraph H of G. A subset D of V is said to be a dominating set if any vertex in V ∖ D is adjacent to at least one vertex in D. A subset D of V is said to be a total dominating set if any vertex in V is adjacent to at least one vertex in D. A dominating set D of G is said to be a connected dominating set of G if the subgraph generated by D, ⟨ D ⟩, is connected. A dominating set D of G is said to be a dominating clique of G if the subgraph generated by D, ⟨ D ⟩, is complete. The domination number γ(G), the total domination number γ_t(G), the connected domination number γ_c(G) and the clique domination number are the minimum size of a dominating set, a total dominating set, a connected dominating set and a dominating clique in G respectively. § SUBSPACE SUM GRAPH OF A VECTOR SPACE Let 𝕍 be a finite dimensional vector space of dimension greater than 1 over a field 𝔽, and let θ denote the null vector. We define a graph 𝒢(𝕍)=(V,E) as follows: V is the collection of non-trivial proper subspaces of 𝕍, and for W_1,W_2 ∈ V, W_1 ∼ W_2, i.e., (W_1,W_2) ∈ E, if W_1 + W_2=𝕍. Since dim(𝕍)>1, V ≠∅. Consider a 3 dimensional vector space 𝕍 over ℤ_2 with a basis {α_1,α_2,α_3}. The following are the possible non-trivial proper subspaces of 𝕍: W_1=⟨α_1 ⟩, W_2=⟨α_2 ⟩, W_3=⟨α_3 ⟩, W_4=⟨α_1 +α_2 ⟩, W_5=⟨α_2+α_3 ⟩, W_6=⟨α_1+α_3 ⟩, W_7=⟨α_1+α_2+α_3 ⟩, W_8=⟨α_1,α_2 ⟩, W_9=⟨α_1,α_3 ⟩, W_10=⟨α_2,α_3 ⟩, W_11=⟨α_1,α_2+α_3 ⟩, W_12=⟨α_2, α_1+α_3 ⟩, W_13=⟨α_3,α_1+α_2 ⟩, W_14=⟨α_1+α_2,α_2+α_3 ⟩. Then the graph 𝒢(𝕍) is given in Figure <ref>. For discussion on the chromatic number and clique number of this graph, see Remark <ref> of Section <ref>. Throughout this paper, even if it is not mentioned explicitly, the underlying field is 𝔽 and 𝕍 is finite dimensional. Now we study some basic properties like completeness, connectedness, diameter and girth of 𝒢(𝕍). 𝒢(𝕍) is complete if and only if dim(𝕍)=2. If dim(𝕍)=2, then the vertices of 𝒢(𝕍) are the one dimensional subspaces of 𝕍. Now, the sum of two distinct one dimensional subspaces in a two dimensional vector space is two dimensional and hence equal to 𝕍, and hence 𝒢(𝕍) is complete. Conversely, if 𝒢(𝕍) is complete and dim(𝕍) > 2, then there exist two distinct one dimensional subspaces of 𝕍 whose sum is not 𝕍. This contradicts the completeness of 𝒢(𝕍) and hence dim(𝕍)=2. If dim(𝕍)≥ 3, then 𝒢(𝕍) is connected and diam(𝒢(𝕍))= 2. Let W_1, W_2 be two distinct non-trivial proper subspaces of 𝕍. If W_1 + W_2 = 𝕍, then d(W_1, W_2)=1 and we are done. However, as dim(𝕍)≥ 3, by Theorem <ref>, there exist W_1, W_2 which are not adjacent in 𝒢(𝕍). If W_1 ≁W_2, then W_1 + W_2 ⊂𝕍. Two cases may arise. Case 1: W_1 ∩ W_2 ≠{θ}. Then there exists a non-trivial proper subspace W_3 such that (W_1 ∩ W_2)+W_3=𝕍. Since W_1 ∩ W_2 ⊂ W_1,W_2, we have W_1 + W_3=𝕍 and W_2+ W_3 = 𝕍. Thus, we have W_1 ∼ W_3 ∼ W_2 and hence d(W_1,W_2)=2. Case 2: W_1 ∩ W_2 = {θ}. Let dim(W_1)=k_1; dim(W_2)=k_2 with k_1 ≤ k_2.
Moreover, let W_1=⟨α_1,α_2,…,α_k_1⟩; W_2=⟨β_1,β_2,…,β_k_2⟩. Since W_1 ∩ W_2 = {θ}, the set {α_1,α_2,…,α_k_1,β_1,β_2,…,β_k_2} is linearly independent. Also, as W_1 + W_2 ⊂𝕍, {α_1,α_2,…,α_k_1,β_1,β_2,…,β_k_2} can be extended to a basis {α_1,α_2,…,α_k_1,β_1,β_2,…,β_k_2,γ_1,γ_2,…,γ_k_3} of 𝕍, where dim(𝕍)=n=k_1 + k_2 + k_3. Set W_3=⟨α_1 + β_1, α_2 + β_2,…, α_k_1+β_k_1,β_k_1 + 1,β_k_1 + 2,…,β_k_2,γ_1,γ_2,…,γ_k_3⟩. Clearly, dim(W_3)=k_2+k_3<n. We claim that W_1+W_3=𝕍 and W_2+W_3=𝕍. Proof of Claim: β_1=(α_1+β_1)-α_1 ∈ W_1+W_3. Similarly, β_2,β_3,…,β_k_1∈ W_1+W_3. Also, β_k_1 + 1,β_k_1 + 2,…,β_k_2,γ_1,γ_2,…,γ_k_3∈ W_3 and α_1,α_2,…,α_k_1∈ W_1. Thus, ⟨α_1,α_2,…,α_k_1,β_1,β_2,…,β_k_2,γ_1,γ_2,…,γ_k_3⟩⊂ W_1+W_3 and hence W_1+W_3=𝕍. Similarly W_2+W_3=𝕍. Thus we have W_1 ∼ W_3 ∼ W_2, and combining both the cases, we get diam(𝒢(𝕍))= 2. 𝒢(𝕍) is triangulated and hence girth(𝒢(𝕍))=3. Let W_1=⟨α_1,α_2,…,α_k ⟩ be a k-dimensional subspace of 𝕍 and {α_1,α_2,…,α_k} be extended to a basis of 𝕍 as {α_1,α_2,…,α_k,β_1,β_2,…,β_l }, where n=k+l. Without loss of generality, we take k ≤ l and set W_2=⟨β_1, β_2, …, β_l ⟩ and W_3= ⟨α_1 + β_1, α_2+ β_2, …, α_k + β_k, β_k+1,…, β_l ⟩. By similar arguments to those of the proof of claim in Theorem <ref>, it is clear that W_1+W_3=W_2+W_3=W_1+W_2=𝕍 and hence W_1,W_2,W_3 form a triangle in 𝒢(𝕍). § MAXIMAL CLIQUES AND MAXIMAL INDEPENDENT SETS IN 𝒢(𝕍) In this section, we study the structure of maximal cliques and maximal independent sets in 𝒢(𝕍). The collection 𝒲_n-1 of all (n-1) dimensional subspaces of 𝕍 is a maximal clique in 𝒢(𝕍). As any two distinct n-1 dimensional subspaces are adjacent in 𝒢(𝕍), the collection 𝒲_n-1 of all (n-1) dimensional subspaces of 𝕍 is a clique in 𝒢(𝕍). For maximality, if possible, let 𝒲_n-1∪{W} be a clique, where W is a non-trivial proper subspace of 𝕍 with dim(W)=k < n-1. Since k < n-1, there exists W' ∈𝒲_n-1 such that W ⊂ W'. Thus W+W'=W'≠𝕍, thereby contradicting that 𝒲_n-1∪{W} is a clique. Hence, 𝒲_n-1 is a maximal clique in 𝒢(𝕍). There exists a graph homomorphism from 𝒢(𝕍) to the subgraph induced by 𝒲_n-1, defined in Theorem <ref>. Since 𝒲_n-1 is a clique, the subgraph induced by it, ⟨𝒲_n-1⟩, is a complete graph. As every non-trivial proper subspace W of 𝕍 is contained in at least one (possibly more than one) subspace in 𝒲_n-1, there exists a map φ: 𝒢(𝕍) →⟨𝒲_n-1⟩ given by W ↦ W', where W' is an n-1 dimensional subspace of 𝕍 containing W. The existence of such a map is guaranteed by the axiom of choice. It is to be noted that if W_1 ∼ W_2 in 𝒢(𝕍), i.e., W_1+W_2=𝕍, then W_1 and W_2 are not contained in the same subspace in 𝒲_n-1. Thus W'_1≠ W'_2. Thus, as ⟨𝒲_n-1⟩ is a complete graph, W'_1∼ W'_2. Hence φ preserves adjacency and is a graph homomorphism. Lemma <ref> will be useful in finding the clique number and chromatic number of 𝒢(𝕍) when the field 𝔽 is finite. (See Theorem <ref>) If n is odd, i.e., n=2m+1, then the collection 𝒲[m] of all non-trivial subspaces of 𝕍 with dimension less than or equal to m is a maximal independent set in 𝒢(𝕍). For any W_1,W_2 ∈𝒲[m], W_1+W_2 ≠𝕍 as dim(W_1+W_2)≤ 2m <n. Thus, 𝒲[m] is an independent set. For any W with dim(W)>m, there exists a subspace W' ∈𝒲[m] such that W+W'=𝕍. Thus, 𝒲[m] is a maximal independent set.
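For a concrete check of the odd-dimensional case (this Python sketch is ours, not part of the paper), one can enumerate the subspaces of 𝕍=ℤ_2^3 from the Example, encoding vectors as bitmasks so that subspace sums become XOR-closures, and verify that 𝒲[1], the set of one-dimensional subspaces (here n=3, m=1), is a maximal independent set:

    from itertools import combinations

    n = 3
    vectors = range(1, 2**n)          # nonzero vectors of Z_2^3 as bitmasks

    def span(gens):
        """Closure of gens under addition over Z_2 (XOR); always contains 0."""
        s = {0}
        for g in gens:
            s |= {v ^ g for v in s}
        return frozenset(s)

    # all non-trivial proper subspaces of V = Z_2^3 (14 of them, as in Figure 1)
    subspaces = {span(c) for r in (1, 2) for c in combinations(vectors, r)}
    subspaces = {W for W in subspaces if 1 < len(W) < 2**n}

    def adjacent(W1, W2):             # W1 + W2 = V  iff  the joint span is all of V
        return W1 != W2 and len(span(set(W1) | set(W2))) == 2**n

    dim = lambda W: len(W).bit_length() - 1   # |W| = 2^dim(W) over Z_2

    Wm = [W for W in subspaces if dim(W) == 1]                      # W[1]
    assert not any(adjacent(A, B) for A, B in combinations(Wm, 2))  # independent
    assert all(any(adjacent(W, A) for A in Wm)                      # maximal
               for W in subspaces if W not in Wm)
    print(len(subspaces), "subspaces; W[1] is a maximal independent set")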
Let n be even, i.e., n=2m, let 𝒲[m-1] be the collection of all non-trivial subspaces of 𝕍 with dimension less than or equal to m-1, let α (≠θ) ∈𝕍, and let 𝒲^α[m] be the collection of all non-trivial subspaces of 𝕍 containing α and having dimension m. Then 𝒲[m-1]∪𝒲^α[m] is a maximal independent set in 𝒢(𝕍). Let W_1,W_2 ∈𝒲[m-1]∪𝒲^α[m]. If at least one of W_1,W_2 ∈𝒲[m-1], then W_1+W_2 ≠𝕍 as dim(W_1+W_2)≤ 2m-1 <n, and thus W_1 ≁W_2. If both W_1,W_2 ∈𝒲^α[m], then as α∈ W_1 ∩ W_2, dim(W_1∩ W_2)≥ 1 and as a result dim(W_1+W_2)=dim(W_1)+dim(W_2)-dim(W_1∩ W_2)≤ 2m-1<n. Thus W_1 ≁W_2. Hence 𝒲[m-1]∪𝒲^α[m] is an independent set. For maximality, let W be a proper subspace of 𝕍 not in 𝒲[m-1]∪𝒲^α[m]. If dim(W)=m, then as α∉W, there exists a subspace W' ∈𝒲^α[m] such that W+W'=𝕍. If dim(W)>m, then there exists a subspace W”∈𝒲[m-1] such that W+W”=𝕍. Thus, 𝒲[m-1]∪𝒲^α[m] is a maximal independent set in 𝒢(𝕍). § DOMINATING SETS IN 𝒢(𝕍) In this section, we study the minimal dominating sets and domination number of 𝒢(𝕍) and use it to prove that the subspace sum graphs of two vector spaces are isomorphic if and only if the two vector spaces are isomorphic. If 𝒲 is a dominating set in 𝒢(𝕍), then |𝒲|≥ n, where dim(𝕍)=n. Let α be a non-null vector in 𝕍. Consider the subspace ⟨α⟩. Since 𝒲 is a dominating set, either ⟨α⟩∈𝒲 or there exists W_1 ∈𝒲 such that ⟨α⟩∼ W_1, i.e., ⟨α⟩ + W_1 = 𝕍. If ⟨α⟩∈𝒲, we choose some other α'≠θ from 𝕍 such that ⟨α' ⟩∉𝒲. However, if ⟨α⟩∈𝒲 for all α∈𝕍∖{θ}, then 𝒲 contains all the one-dimensional subspaces of 𝕍. Now, irrespective of whether the field is finite or infinite, the number of one-dimensional subspaces of 𝕍 is greater than n. (See Remark <ref>) Hence, the lemma follows trivially. Thus, we assume that ⟨α⟩∉𝒲. Then as indicated earlier there exists W_1 ∈𝒲 such that ⟨α⟩ + W_1 = 𝕍. Note that dim(W_1)=n-1. We choose a non-null vector α_1 ∈ W_1. By similar arguments we can assume that ⟨α_1⟩∉𝒲. As 𝒲 is a dominating set, there exists W_2∈𝒲 such that ⟨α_1 ⟩ + W_2 =𝕍. It is to be noted that dim(W_2)=n-1 and W_1≠ W_2, as α_1∈ W_1∖ W_2. Now consider the subspace W_1∩ W_2. As W_1,W_2 are of dimension n-1, W_1+W_2=𝕍. Thus by the result dim(W_1+W_2)=dim(W_1)+dim(W_2)-dim(W_1 ∩ W_2), we have dim(W_1 ∩ W_2)=n-2. We choose α_2(≠θ) ∈ W_1 ∩ W_2 and without loss of generality assume that ⟨α_2 ⟩∉𝒲. Therefore there exists W_3 ∈𝒲 such that ⟨α_2 ⟩ + W_3=𝕍. Again we note that dim(W_3)=n-1 and W_3≠ W_1, W_3 ≠ W_2, as α_2∈ (W_1 ∩ W_2)∖ W_3. Now consider the subspace W_1∩ W_2 ∩ W_3. Since dim(W_3)=n-1 and (W_1 ∩ W_2)⊄W_3, we have (W_1∩ W_2)+W_3=𝕍. Thus by using dim[(W_1∩ W_2)+W_3]=dim(W_1∩ W_2)+dim(W_3)-dim(W_1 ∩ W_2 ∩ W_3), we have dim(W_1 ∩ W_2∩ W_3)=n-3. As in earlier cases, we choose α_3(≠θ) ∈ W_1 ∩ W_2∩ W_3 and there exists W_4 ∈𝒲 such that ⟨α_3 ⟩ + W_4=𝕍 with dim(W_4)=n-1 and W_4≠ W_1, W_2, W_3. This process continues till we get W_n ∈𝒲 such that ⟨α_n-1⟩ + W_n=𝕍 with dim(W_n)=n-1 and W_n≠ W_1, W_2,…,W_n-1 and dim(W_1 ∩ W_2 ∩⋯∩ W_n)=0. Thus, there are at least n distinct subspaces W_1, W_2,…,W_n in 𝒲 and the lemma holds. If the base field 𝔽 is infinite and n>1, then there are infinitely many one-dimensional subspaces. If 𝔽 is finite with q elements and n>1, then from the proof of Theorem <ref> it follows that the number of one-dimensional subspaces is q^n-1+q^n-2+⋯+1, which is greater than n. Let {α_1,α_2,…,α_n} be a basis of 𝕍 and W_i=⟨α_1,α_2,…,α_i-1,α_i+1,…,α_n⟩ for i=1,2,⋯,n. Then the collection 𝒲={W_i: 1≤ i ≤ n} is a minimum dominating set in 𝒢(𝕍) and hence γ(𝒢(𝕍))=n. Let W be an arbitrary vertex of 𝒢(𝕍) and θ≠α∈ W. Suppose α=c_1α_1+c_2α_2+⋯ +c_n α_n with at least one c_i≠ 0. If c_j ≠ 0, then ⟨α⟩ + W_j = 𝕍 and hence W+W_j=𝕍, i.e., W ∼ W_j. Thus 𝒲 is a dominating set. Now consider 𝒲∖{W_i}. As ⟨α_i ⟩ + W_j=W_j for i ≠ j, we have ⟨α_i ⟩≁W_j, ∀ W_j ∈𝒲∖{W_i}. Thus 𝒲∖{W_i} is not a dominating set, thereby showing that 𝒲 is a minimal dominating set. Now the theorem follows from Lemma <ref>.
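A brute-force confirmation of this theorem for 𝕍=ℤ_2^3 (our sketch, with the standard basis encoded as bitmasks 1, 2, 4): the three subspaces W_i obtained by dropping one basis vector dominate 𝒢(𝕍), and no n-1=2 subspaces do, so γ=n=3:

    from itertools import combinations

    n, V = 3, 2**3
    def span(gens):
        s = {0}
        for g in gens:
            s |= {v ^ g for v in s}
        return frozenset(s)

    subspaces = {span(c) for r in (1, 2) for c in combinations(range(1, V), r)}
    subspaces = {W for W in subspaces if 1 < len(W) < V}
    adj = lambda A, B: A != B and len(span(A | B)) == V

    # W_i = <e_1,...,e_{i-1},e_{i+1},...,e_n>: drop the i-th standard basis vector
    basis = [1, 2, 4]
    D = [span([b for b in basis if b != e]) for e in basis]
    assert all(W in D or any(adj(W, Wi) for Wi in D) for W in subspaces)

    # no dominating set of size n-1 = 2 exists, confirming the lemma's bound
    for C in combinations(subspaces, n - 1):
        assert not all(W in C or any(adj(W, Wi) for Wi in C) for W in subspaces)
    print("gamma =", n)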
Let 𝕍_1 and 𝕍_2 be two finite dimensional vector spaces over the same field 𝔽. Then 𝕍_1 and 𝕍_2 are isomorphic as vector spaces if and only if 𝒢(𝕍_1) and 𝒢(𝕍_2) are isomorphic as graphs. It is quite obvious that if 𝕍_1 and 𝕍_2 are isomorphic as vector spaces, then 𝒢(𝕍_1) and 𝒢(𝕍_2) are isomorphic as graphs. For the other part, let 𝒢(𝕍_1) and 𝒢(𝕍_2) be isomorphic as graphs. Let dim(𝕍_1)=n_1 and dim(𝕍_2)=n_2. Then, by Theorem <ref>, the domination numbers of 𝒢(𝕍_1) and 𝒢(𝕍_2) are n_1 and n_2 respectively. However, as the two graphs are isomorphic, n_1=n_2=n (say). Thus 𝕍_1 and 𝕍_2 are of the same finite dimension over the field 𝔽 and hence both are isomorphic to 𝔽^n as vector spaces. If 𝕍 is an n dimensional vector space, then the total domination number γ_t, connected domination number γ_c and clique domination number γ_cl of 𝒢(𝕍) are all equal to n, i.e., γ=γ_t=γ_c=γ_cl=n. It is known that if a graph has a dominating clique and γ≥ 2, then γ≤γ_t ≤γ_c ≤γ_cl (See pg. 167 of <cit.>). Thus, it suffices to show that the minimum dominating set 𝒲={W_i: 1≤ i ≤ n} (constructed as in Theorem <ref>) is a total dominating set, a connected dominating set as well as a dominating clique. Since W_i ∼ W_j in 𝒢(𝕍) for i ≠ j, 𝒲 is a total dominating set. Moreover, the subgraph ⟨𝒲⟩ spanned by 𝒲 is a complete graph and hence connected. Thus, 𝒲 is a connected dominating set as well as a dominating clique. § THE CASE OF FINITE FIELDS In this section, we study some properties of 𝒢(𝕍) if the base field 𝔽 is finite, say of order q=p^r where p is a prime. In particular, we find the order, degree, chromatic number, clique number and edge connectivity of 𝒢(𝕍), and characterize when 𝒢(𝕍) is Eulerian. It is known that the number of k dimensional subspaces of an n-dimensional vector space over a finite field of order q is the q-binomial coefficient (See Chapter 7 of <cit.>) [ [ n; k ]]_q = (q^n -1)(q^n-q)⋯ (q^n-q^k-1) / (q^k-1)(q^k-q)⋯ (q^k-q^k-1), and hence the total number of non-trivial proper subspaces of 𝕍, i.e., the order of 𝒢(𝕍), is given by ∑_k=1^n-1[ [ n; k ]]_q = G(n,q)-2, where G(n,q) is the Galois number[For definition, see <cit.>.].
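These counts are easy to evaluate mechanically; the short Python sketch below (added here; gauss_binomial and galois are our helper names) reproduces the order 14 of the graph in the Example for n=3 and q=2:

    def gauss_binomial(n, k, q):
        """q-binomial coefficient [n choose k]_q."""
        num = den = 1
        for i in range(k):
            num *= q**n - q**i
            den *= q**k - q**i
        return num // den           # the division is always exact

    def galois(n, q):
        """Galois number G(n,q): total number of subspaces of F_q^n."""
        return sum(gauss_binomial(n, k, q) for k in range(n + 1))

    n, q = 3, 2
    print([gauss_binomial(n, k, q) for k in range(n + 1)])   # [1, 7, 7, 1]
    print("order of G(V) =", galois(n, q) - 2)               # 14, as in the Example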
Let W be a k-dimensional subspace of an n-dimensional vector space 𝕍 over a finite field 𝔽 with q elements. Then the degree of W in 𝒢(𝕍) is deg(W)=∑_r=0^k-1 N_r, where N_r = (q^k-1)(q^k-q)⋯(q^k-q^r-1)(q^n-q^k)(q^n-q^k+1)⋯(q^n-q^n-1) / (q^n-k+r-1)(q^n-k+r-q)⋯(q^n-k+r-q^n-k+r-1). Since deg(W) is the number of subspaces whose sum with W is 𝕍 and dim(W)=k<n, the subspaces adjacent to W have dimension at least n-k, i.e., a subspace W' is adjacent to W precisely when dim(W')=n-k+r and dim(W∩ W')=r for some 0≤ r ≤ k-1. To find such subspaces W', we choose r linearly independent vectors from W and n-k linearly independent vectors from 𝕍∖ W, and generate W' with these n-k+r linearly independent vectors. [For details, see pg. 22 of <cit.>] Since the number of ways we can choose r linearly independent vectors from W is (q^k-1)(q^k-q)⋯(q^k-q^r-1), the number of ways we can choose n-k linearly independent vectors from 𝕍∖ W is (q^n-q^k)(q^n-q^k+1)⋯(q^n-q^n-1), and the number of bases of an n-k+r dimensional subspace is (q^n-k+r-1)(q^n-k+r-q)⋯(q^n-k+r-q^n-k+r-1), the number of subspaces W' with dim(W')=n-k+r and dim(W∩ W')=r is N_r as displayed above. Now, as 0≤ r ≤ k-1, deg(W)=∑_r=0^k-1 N_r. It is clear from Theorem <ref> that deg(W) depends solely on its dimension, and it is minimized if dim(W)=1 and maximized if dim(W)=n-1, i.e., δ=q^n-1 and Δ=∑_r=0^n-2[ [ n-1; r ]]_q q^n-r-1q^r+q^r-1+⋯ +q+1. 𝒢(𝕍) is Eulerian if and only if q is even. From Theorem <ref>, N_r = {(q^k-1)(q^k-q)⋯(q^k-q^r-1)}{(q^n-q^k)(q^n-q^k+1)⋯(q^n-q^n-1)} / {(q^n-k+r-1)(q^n-k+r-q)⋯(q^n-k+r-q^n-k+r-1)} = {q^1+2+⋯+(r-1)(q^k-1)⋯(q^k-r+1-1)}{q^k+(k+1)+⋯ +(n-1)(q^n-k-1)⋯(q-1)} / {q^1+2+⋯+ (n-k+r-1)(q^n-k+r-1)(q^n-k+r-1-1)⋯(q-1)} = [q^r(r-1)/2 q^{n(n-1)-k(k-1)}/2 / q^(n-k+r)(n-k+r-1)/2] · [(q^k-1)⋯(q^k-r+1-1)(q^n-k-1)⋯(q-1) / (q^n-k+r-1)(q^n-k+r-1-1)⋯(q-1)] = q^(n-k)(k-r) · (q^k-1)⋯(q^k-r+1-1) / (q^n-k+r-1)⋯(q^n-k+1-1). If q is even, N_r is even for all r satisfying 0≤ r ≤ k-1. Hence, by Theorem <ref>, for any subspace W of 𝕍 of dimension k, deg(W)=∑_r=0^k-1 N_r is even. As all vertices of 𝒢(𝕍) are of even degree, 𝒢(𝕍) is Eulerian. On the other hand, if q is odd, as the minimum degree δ=q^n-1 is odd, 𝒢(𝕍) is not Eulerian. The edge connectivity of 𝒢(𝕍) is q^n-1. From <cit.>, as 𝒢(𝕍) is of diameter 2 (by Theorem <ref>), its edge connectivity is equal to its minimum degree, i.e., q^n-1. Let 𝕍 be an n-dimensional vector space over a finite field of order q. Then the clique number and chromatic number of 𝒢(𝕍) are both equal to 1+q+⋯+q^n-1. By Theorem <ref>, 𝒲_n-1 is a maximal clique and |𝒲_n-1|=[ [ n; n-1 ]]_q=[ [ n; 1 ]]_q=(q^n-1)/(q-1)=1+q+⋯+q^n-1. Now, by Lemma <ref>, 𝒢(𝕍) is |𝒲_n-1|-colourable, i.e., χ(𝒢(𝕍))≤ |𝒲_n-1|. Moreover, as 𝒲_n-1 is a maximal clique, ω(𝒢(𝕍))≥ |𝒲_n-1|. As the chromatic number of a graph is greater than or equal to its clique number, we have the following inequality: ω(𝒢(𝕍)) ≤χ(𝒢(𝕍))≤ |𝒲_n-1| ≤ω(𝒢(𝕍)), i.e., ω(𝒢(𝕍)) = χ(𝒢(𝕍))= |𝒲_n-1| =1+q+⋯+q^n-1. Theorem <ref> shows that 𝒢(𝕍) is weakly perfect. In Theorem <ref>, we establish a necessary and sufficient condition for 𝒢(𝕍) to be perfect. In Figure <ref>, the 7 inner vertices form a maximum clique and the chromatic number is also 1+2+2^2=7. Also, as q=2, by Corollary <ref>, 𝒢(𝕍) is Eulerian.
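For the Example graph these statements can again be verified by brute force (our script; the colouring step realizes the graph homomorphism onto ⟨𝒲_n-1⟩ used above, sending each subspace to a hyperplane containing it):

    from itertools import combinations

    n, q, V = 3, 2, 8
    def span(gens):
        s = {0}
        for g in gens:
            s |= {v ^ g for v in s}
        return frozenset(s)

    subs = {span(c) for r in (1, 2) for c in combinations(range(1, V), r)}
    subs = {W for W in subs if 1 < len(W) < V}
    adj = lambda A, B: A != B and len(span(A | B)) == V

    deg = {W: sum(adj(W, U) for U in subs) for W in subs}
    assert all(d % 2 == 0 for d in deg.values())        # q = 2 even => Eulerian
    assert min(deg.values()) == q**(n - 1)              # delta = q^{n-1} = 4

    hyperplanes = [W for W in subs if len(W) == q**(n - 1)]
    assert all(adj(A, B) for A, B in combinations(hyperplanes, 2))  # clique of size 7

    # colour every vertex by a hyperplane containing it: a proper 7-colouring,
    # since two subspaces inside a common hyperplane cannot sum to V
    colour = {W: next(H for H in hyperplanes if W <= H) for W in subs}
    assert all(colour[A] != colour[B] for A, B in combinations(subs, 2) if adj(A, B))
    print("omega = chi =", len(hyperplanes))            # 7 = 1 + 2 + 2^2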
𝒢(𝕍) is perfect if and only if dim(𝕍)=3. Let n=dim(𝕍)≥ 4. Let S={α_1,α_2,α_3,α_4} be 4 linearly independent vectors in 𝕍 and let S∪ T be the extension of S to a basis of 𝕍. Consider the following induced 5-cycle in 𝒢(𝕍): W_1∼ W_2 ∼ W_3 ∼ W_4 ∼ W_5 ∼ W_1 where W_1=⟨α_1,α_2,T⟩, W_2=⟨α_3,α_4,T⟩, W_3=⟨α_1+α_3,α_2,T⟩, W_4=⟨α_1,α_4, T ⟩ and W_5=⟨α_1+α_3,α_2+α_4,T ⟩. Thus, by the Strong Perfect Graph Theorem, 𝒢(𝕍) is not perfect. Let n=dim(𝕍)=3 and, if possible, let there exist an induced odd cycle W_1∼ W_2∼ W_3∼⋯∼ W_2k+1∼ W_1 of length greater than 3 in 𝒢(𝕍), where the W_i's are distinct proper non-trivial subspaces of 𝕍. Since W_i ∼ W_i+1, we have W_i + W_i+1=𝕍, i.e., at least one of W_i or W_i+1 is of dimension 2. Without loss of generality, let W_1 be of dimension 2. Now, there are two possibilities: dim(W_2)=1 or dim(W_2)=2. If dim(W_2)=1, then as W_2 ∼ W_3, dim(W_3) must be 2. However, in that case we have W_1+W_3=𝕍, i.e., W_1 ∼ W_3, a contradiction. On the other hand, let dim(W_2)=2. Since W_1 ≁W_3, i.e., W_1+W_3≠𝕍, we have dim(W_3)=1. Again, as W_2 ≁W_4, i.e., W_2+W_4≠𝕍, we have dim(W_4)=1. But this implies W_3+W_4 ≠𝕍, i.e., W_3 ≁W_4, a contradiction. Thus, there does not exist any induced odd cycle of length greater than 3 in 𝒢(𝕍). Let n=dim(𝕍)=3 and, if possible, let there exist an induced odd cycle W_1∼ W_2∼ W_3∼⋯∼ W_2k+1∼ W_1 of length greater than 3 in the complement of 𝒢(𝕍), where the W_i's are distinct proper non-trivial subspaces of 𝕍. It is to be noted that in the complement of 𝒢(𝕍), W_i ≁W_j implies W_i+W_j=𝕍. Thus, as in the previous case, without loss of generality, let W_1 be of dimension 2. Since W_2 ∼ W_1 in the complement of 𝒢(𝕍), W_1+W_2≠𝕍, i.e., W_2 ⊂ W_1 and dim(W_2)=1. Now, as W_2 ≁W_4 in the complement of 𝒢(𝕍), W_2+W_4 = 𝕍, i.e., dim(W_4)=2. Also, as W_3∼ W_4, we have W_3+W_4≠𝕍 and hence dim(W_3)=1. Similarly, as W_5∼ W_4, we have W_5+W_4≠𝕍 and hence dim(W_5)=1. Again as W_3≁W_5, we have W_3+W_5= 𝕍. However this is impossible as dim(W_3)=dim(W_5)=1. Thus, there does not exist any induced odd cycle of length greater than 3 in the complement of 𝒢(𝕍). Thus, as neither 𝒢(𝕍) nor its complement has any induced odd cycle of length greater than 3, by the Strong Perfect Graph Theorem, 𝒢(𝕍) is perfect. Hence the theorem. § CONCLUSION In this paper, we represent subspaces of a finite dimensional vector space as a graph and study various inter-relationships among 𝒢(𝕍) as a graph and 𝕍 as a vector space. The main goal of these discussions was to study the algebraic properties of 𝕍 and graph theoretic properties of 𝒢(𝕍), and to establish the equivalence between the corresponding graph and vector space isomorphisms. Apart from this, we also study basic properties e.g., completeness, connectedness, domination, chromaticity, maximal cliques, perfectness and maximal independent sets of 𝒢(𝕍). As a topic of further research, one can look into the structure of automorphism groups, Hamiltonicity and independence number of 𝒢(𝕍). § ACKNOWLEDGEMENT The author is thankful to Sabyasachi Dutta, Anirban Bose and Sourav Tarafder for some fruitful discussions on the paper. The research is partially funded by NBHM Research Project Grant, (Sanction No. 2/48(10)/2013/ NBHM(R.P.)/R&D II/695), Govt. of India. anderson-livingston D. F. Anderson and P. S. Livingston: The zero-divisor graph of a commutative ring, Journal of Algebra, 217 (1999), 434-447. mks-ideal I. Chakrabarty, S. Ghosh, T.K. Mukherjee, and M.K. Sen: Intersection graphs of ideals of rings, Discrete Mathematics 309, 17 (2009): 5381-5392. angsu-comm-alg A. Das: Non-Zero Component Graph of a Finite Dimensional Vector Space, Communications in Algebra, Vol. 44, Issue 9, 2016: 3918-3926. angsu-lin-mult-alg A. Das: Non-Zero Component Union Graph of a Finite Dimensional Vector Space, Linear and Multilinear Algebra, DOI: 10.1080/03081087.2016.1234577. angsu-comm-alg-2 A. Das: Subspace Inclusion Graph of a Vector Space, Communications in Algebra, Vol. 44, Issue 11, 2016: 4724-4731. angsu-jaa A. Das: On Non-Zero Component Graph of Vector Spaces over Finite Fields, Journal of Algebra and Its Applications, Volume 16, Issue 01, January 2017. galois-number J. Goldman, G.C. Rota: The number of subspaces of a vector space, Recent Progress in Combinatorics (Proc. Third Waterloo Conf. on Combinatorics, 1968), Academic Press, New York, 75-83, 1969. domination-book T.W. Haynes, S.T. Hedetniemi and P.J. Slater: Fundamentals of Domination in Graphs, Marcel Dekker Inc., 1998. int-vecsp-2 N. Jafari Rad, S.H.
Jafari: Results on the intersection graphs of subspaces of a vector space, http://arxiv.org/abs/1105.0803v1 quantum-book V. Kac, P. Cheung: Quantum Calculus, Universitext, Springer, 2002. int-vecsp-3 J.D. Laison and Y. Qing: Subspace Intersection Graphs, Discrete Mathematics 310, 3413-3416, 2010. survey2 H.R. Maimani, M.R. Pournaki, A. Tehranian, S. Yassemi: Graphs Attached to Rings Revisited, Arab J. Sci. Eng. 36, 997-1011, 2011. plesnik J. Plesnik: Critical graphs of given diameter, Acta Fac. Rerum Natur. Univ. Comenian. Math., 30, 71-93, 1975. int-vecsp-1 Y. Talebi, M.S. Esmaeilifar, S. Azizpour: A kind of intersection graph of vector space, Journal of Discrete Mathematical Sciences and Cryptography 12, no. 6, 681-689, 2009. west-graph-book D.B. West: Introduction to Graph Theory, Prentice Hall, 2001.
http://arxiv.org/abs/1702.08245v1
{ "authors": [ "Angsuman Das" ], "categories": [ "math.CO", "05C25, 05C69" ], "primary_category": "math.CO", "published": "20170227114321", "title": "Subspace Sum Graph of a Vector Space" }
Automated Verification and Synthesis of Embedded Systems using Machine Learning. Lucas Cordeiro, Department of Computer Science, University of Oxford, UK. E-mail: lucas.cordeiro@cs.ox.ac.uk. December 30, 2023. The dependency on the correct functioning of embedded systems is rapidly growing, mainly due to their wide range of applications, such as micro-grids, automotive device control, health care, surveillance, mobile devices, and consumer electronics. Their structures are becoming more and more complex and now require multi-core processors with scalable shared memory, in order to meet increasing computational power demands. As a consequence, reliability of embedded (distributed) software becomes a key issue during system development, which must be carefully addressed and assured. The present research discusses challenges, problems, and recent advances to ensure correctness and timeliness regarding embedded systems. Reliability issues, in the development of micro-grids and cyber-physical systems, are then considered, as a prominent verification and synthesis application. In particular, machine learning techniques emerge as one of the main approaches to learn reliable implementations of embedded software for achieving a correct-by-construction design. § INTRODUCTION Generally, embedded computer systems perform dedicated functions with a high degree of reliability. They are used in a variety of sophisticated applications, which range from entertainment software, such as games and graphics animation, to safety-critical systems, including nuclear reactors and automotive controllers <cit.>. Embedded systems are ubiquitous in modern day information systems, and are also becoming increasingly important in our society, especially in micro-grids, where reliability and carbon emission reduction are of paramount importance <cit.>, and in cyber-physical systems (CPS), which demand short development cycles and again a high level of reliability <cit.>. As a consequence, human life has also become more and more dependent on the services provided by this type of system and, in particular, their success is strictly related to both service relevance and quality. Figure <ref> shows embedded systems examples, which typically consist of a human-machine interface (e.g., keyboard and LCD), a processing unit (e.g., real-time computer system), and an instrumentation interface (e.g., sensor, network, and actuator) <cit.>. Indeed, many current embedded systems, such as unmanned aerial vehicles (UAVs) <cit.> and medical monitoring systems <cit.>, become interesting solutions only if they can reliably perform their target tasks. Besides, when physical interaction with the real world is needed, which happens in CPS, additional care must be taken, mainly when human action is directly replaced, as in vehicle driving. Regarding the latter, even human-in-the-loop feedback control can be employed, which raises deeper concerns w.r.t. reliability of human behavior modeling and system implementation. Consequently, it is important to go beyond design correctness and also address behavior correctness, which may be performed by incorporating system models. In particular, these models can be used for synthesizing a given system, ensuring that all needed functions are correctly implemented and the correct behavior exhibited, i.e., the system is indeed correct by its method of construction <cit.>. Here, machine learning emerges as a powerful technique to automatically learn the correct behavior of the system, which must provably satisfy a given correctness specification σ. Specifically, synthesizers can use σ as a starting point and then incrementally produce a sequence of candidate solutions that satisfy σ, by integrating deductive methods with inductive inference (learning from counterexamples) <cit.>. As a result, a given candidate solution can be iteratively refined to match the specification σ based on a counterexample-guided learning approach.
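To make the counterexample-guided idea concrete, the following minimal Python sketch (added for illustration; the names cegis, candidates and sigma are ours, and the enumeration-based oracles stand in for a real synthesizer and an SMT-based verifier) shows the deductive/inductive loop described above:

    def cegis(candidates, sigma, inputs, max_iters=100):
        """Counterexample-guided inductive synthesis, in miniature."""
        examples = []                        # counterexamples learned so far
        for _ in range(max_iters):
            # inductive step: find a candidate consistent with all examples
            cand = next((c for c in candidates
                         if all(sigma(c, x) for x in examples)), None)
            if cand is None:
                return None                  # spec unrealisable over this space
            # deductive step: try to refute the candidate on the input domain
            cex = next((x for x in inputs if not sigma(cand, x)), None)
            if cex is None:
                return cand                  # verified: correct by construction
            examples.append(cex)             # learn from the counterexample
        return None

    # toy instance: synthesise a constant k with  k*x >= x  for x in 0..10
    sigma = lambda k, x: k * x >= x
    print(cegis(candidates=range(-3, 4), sigma=sigma, inputs=range(11)))  # -> 1

The loop terminates either with a candidate that the verifier cannot refute — correct by construction with respect to σ over the modeled input domain — or with evidence that no candidate in the search space realizes σ.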
Here, machine learning emerges as a powerful technique to automatically learn the correct behavior of the system, which must provably satisfy a given correctness specification σ. Specifically, synthesizers can use σ as starting point and then incrementally produce a sequence of candidate solutions that satisfy σ, by integrating deductive methods with inductive inference (learning from counterexamples) <cit.>. As a result, a given candidate solution can be iteratively refined to match the specification σ based on a counterexample-guided learning approach. § VERIFICATION AND SYNTHESIS CHALLENGES FOR EMBEDDED SYSTEMS State-of-the-art verification methodologies for embedded systems generate test vectors (with constraints) and use assertion-based verification and high-level processor models, during simulation <cit.>, as shown in Figure <ref>. Here, the main challenges regarding the verification of embedded systems lie in improving coverage, pruning the state-space exploration during verification, and incorporating system models, which allow specific checks regarding system behavior and not only code correctness. Additionally, embedded system verification raises further challenges, such as: (1) time and energy constraints; (2) handling of concurrent software; (3) platform restrictions; (4) legacy designs; (5) support to different programming languages and interfaces; and (6) handling of non-linear and non-convex optimization functions. Indeed, the first two aspects are of extreme relevance in micro-grids and cyber-physical systems, in order to ensure reliability, which is a key issue for (smart) cities, industries, and consumers, and the third one is essential in systems that implement device models, such as digital filters and controllers, which present a behavior that is highly dependent on signal inputs and outputs and whose deployment may be heavily affected by hardware restrictions. The fourth aspect is inherent to a large number of embedded systems from telecommunications, control systems, and medical devices. In particular, software developed for those systems has been extensively tested and verified, and also optimized for efficiency over the years. Therefore, when a new product is derived from a given platform, a lot of legacy code is usually reused for reducing development time and improving code quality. The fifth aspect is related to the evolution of development processes and technologies, which may delay the application of suitable verification and synthesis approaches if verifiers and synthesizers do not support different programming languages and interfaces. The last one is related to the widespread use of embedded systems in autonomous vehicle navigation systems, which demand solving optimization problems during their execution for a wide range of functions, including non-linear and non-convex optimization functions. Those challenges pose difficulties for developing (reliable) synthesizers for embedded systems, especially for CPS and micro-grids, where the controlled object (e.g., physical plant) typically exhibits continuous behavior whereas the controller (usually implemented by a real-time computer system) operates in discrete time and over a quantized domain (cf. intelligent product in Figure <ref>). In particular, synthesizers for those systems need to consider the effects of the quantizers (A/D and D/A converters), when a digital equivalent of the controlled object is considered, i.e., a model of their physical environment.
Additionally, finite-precision arithmetic and the related rounding errors need to be considered when correct-by-construction code is generated for embedded systems. The main challenge lies in exploiting effectively and efficiently the counterexamples provided by verifiers to automatically learn reliable embedded software implementations (cf. Figure <ref>). § RESEARCH PROBLEM (RP) This research statement tackles six major problems in computer-aided verification and synthesis for embedded systems, which are (partially) open in current published research. (RP1) provide suitable encoding into SMT, which may extend the background theories typically supported by SMT solvers, with the goal of reasoning accurately and effectively about realistic embedded (control) software. (RP2) exploit SMT techniques to leverage bounded model checking of multi-threaded software, in order to mitigate the state-explosion problem due to thread interleaving. (RP3) prove correctness and timeliness of embedded systems, by taking into account stringent constraints imposed by hardware. (RP4) incorporate knowledge about system purpose and associated features to detect system-level and behavior failures. (RP5) provide tools and approaches capable of addressing different programming languages and application interfaces, with the goal of reducing the time needed to adapt current verification techniques to new developments and technologies. (RP6) develop automated synthesis approaches that are algorithmically and numerically sound, in order to handle embedded (control) software that is tightly coupled with the physical environment, by considering uncertain models and finite word-length (FWL) effects. § CURRENT ACHIEVEMENTS AND FUTURE TRENDS In order to support SMT encoding (RP1), Cordeiro, Fischer, and Marques-Silva proposed the first SMT-based BMC for full C programs, called Efficient SMT-Based Context-Bounded Model Checker (ESBMC) <cit.>, which was later extended to support C++, CUDA, and Qt-based consumer electronics applications. This approach was also able to find undiscovered bugs related to arithmetic overflow, buffer overflow, and invalid pointers in standard benchmarks, which were later confirmed by the benchmarks' creators (e.g., NOKIA, NEC, NXP, and VERISEC). Other SMT-based BMC approaches have also been proposed and implemented in the literature <cit.>, but the coverage and verification time of all existing ones are still limited to specific classes of programs, especially for those that contain intensive floating-point arithmetic and dynamic memory allocation. One possible research direction is to bridge the gap between BMC tools and SMT solvers to propose background theories and develop more efficient decision procedures to handle specific classes of programs. The SMT-based BMC approach proposed by Cordeiro, Fischer, and Marques-Silva was further developed to verify correct lock acquisition ordering and the absence of deadlocks, data races, and atomicity violations in multi-threaded software based on POSIX and CUDA libraries (RP2) <cit.>, considering monotonic partial-order reduction and state-hashing techniques, in order to prune the state-space exploration. Recent advances for verifying multi-threaded C programs have been proposed to speed up the verification time, which significantly prune the state-space exploration; however, the class of concurrent programs (e.g., OpenCL and MPI) that can be verified is still very limited.
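To make the bounded model checking idea behind these tools concrete, the sketch below unrolls a toy transition system up to a bound k and searches for a violation of a safety property. This is only a didactic illustration of the principle: the counter system, the initial state, and the property are invented for the example, and real tools such as ESBMC encode the unrolled program as an SMT formula and delegate the search to a solver rather than enumerating states as done here.

def init(s):
    return s == 150                      # assumed initial state (illustrative)

def trans(s):
    return (s + 1) % 256                 # toy 8-bit counter that wraps around

def prop(s):
    return s != 200                      # safety property: never reach 200

def bmc(bound):
    """Explore all states reachable in at most `bound` steps; return a
    counterexample (depth, state) if the property is violated."""
    frontier = {s for s in range(256) if init(s)}
    for depth in range(bound + 1):
        for s in frontier:
            if not prop(s):
                return depth, s          # counterexample found
        frontier = {trans(s) for s in frontier}
    return None                          # no violation up to the bound

print(bmc(60))                           # -> (50, 200): bug found at depth 50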
One possible research direction is to further extend BMC of multi-threaded programs via sequentialization <cit.> and also to analyze interpolants to prove non-interference of context switches <cit.>. Novel approaches to model check embedded software using k-induction and invariants were proposed and evaluated in the literature (RP3), which demonstrate their effectiveness in some real-life embedded-system applications <cit.>. However, the main challenge still remains open, i.e., to compute and strengthen loop invariants to prove program correctness and timeliness in a more efficient and effective way, in order to be competitive with other model-checking approaches <cit.>. In particular, invariant-generation algorithms have substantially evolved over the last years, with the goal of discovering inductive invariants of programs or continuously refining them during verification <cit.>. Yet there is still a lack of studies exploiting the combination of different invariant-generation algorithms (e.g., interval analysis, linear inequalities, polynomial equalities and inequalities) and how to strengthen them. State-of-the-art SMT-based context-BMC approaches were extended to verify overflow, limit cycle, stability, and minimum phase in digital systems (RP4). Indeed, digital filters and controllers were tackled, in order to verify system-level properties of those systems, specified as linear-time temporal logic <cit.>. In particular, a specific UAV application was addressed, with the goal of verifying its attitude controllers. In general, however, there is still a lack of studies verifying system-level properties related to embedded systems; emphasis should be given to micro-grids and cyber-physical systems, which have high-dependability requirements for computation, control, and communication. Additionally, the application of automated fault detection, localization, and correction techniques to digital systems represents an important research direction to make BMC tools useful for engineers. Although ESBMC was extended to support C/C++ and some variants (RP5), new application interfaces and programming languages are often developed, which require suitable verifiers. Indeed, it would be interesting if a new programming language model could be loaded, which along with a BMC core could check different programs. Some work towards that was already presented by <cit.>, which employed operational models for checking Qt-based programs from consumer electronics. In summary, the BMC core is not changed; instead, an operational model, which implements the behavior and features of the Qt libraries, is used for providing the new code structure to be checked. Such a research problem is closely related to the first one (RP1) and has the potential to devise a new paradigm in software verification. State-of-the-art synthesis approaches (RP6) for embedded (control) systems typically disregard the platform on which the embedded system software operates and restrict themselves to generating code that does not take into account FWL effects. However, the synthesized system must include the physical plant to avoid serious system malfunctioning (or even a catastrophe) due to the embedded (control) software; e.g., the Mars Polar Lander did not account for leg compressions prior to landing <cit.>. Research in this direction has made some progress in designing, implementing, and evaluating an automated approach for generating correct-by-construction digital controllers that is based on state-of-the-art inductive synthesis techniques <cit.>.
However, there is still little evidence whether that approach can scale to larger systems modeled by other types of representations (e.g., state-space). Another research direction for synthesizers is to automatically produce UAV trajectory and mission planning code, by taking into account the system's dynamics and nonholonomic constraints. As a result, verifiers and synthesizers need to handle a wide range of functions, including non-linear and non-convex optimization problems <cit.>. Machine learning techniques could be employed here to learn from counterexamples, i.e., in the inductive step, synthesizers could learn the model from raw data, and in the deductive step, the model could be applied to predict the behavior of new data <cit.>. § CONCLUSIONS This research statement presented the main challenges related to the verification of design correctness in embedded systems, and also raised some important side considerations about synthesis. Given that software complexity has significantly increased in embedded products, there is still the need for stressing and exhaustively covering the entire system state space, in order to verify low-level properties that have to meet the application's deadline, access memory regions, handle concurrency, and control hardware registers. Besides, there is a trend towards incorporating knowledge about the system to be verified, which may take software verification and synthesis one step further, where not only code correctness will be addressed, but also full system reliability. Finally, it seems interesting to provide behavioral models when new application interfaces or programming language features are used, in order to extend the capabilities of current verification tools without changing the core BMC module. As a future perspective, the main goal of this research is to extend BMC as a verification and synthesis tool for achieving correct-by-construction embedded system implementations. Special attention will be given to CPS and modern micro-grids, considering small-scale versions of a distributed system, so that reliability and other system-level properties (e.g., carbon emission reduction in smart cities) are amenable to automated verification and synthesis, probably through behavior models.
Kopetz11 H. Kopetz: Real-Time Systems - Design Principles for Distributed Embedded Applications. Real-Time Systems Series, Springer, ISBN 978-1-4419-8236-0, pp. 1–376, 2011.
xu15 Xu X., Jia H., Wang D., Yu D., Chiang H.: Hierarchical energy management system for multi-source multi-product microgrids. Renewable Energy, v. 78, pp. 621–630, 2015.
leeCPS2 Lee E.: The Past, Present and Future of Cyber-Physical Systems: A Focus on Models. Sensors 15(3): pp. 4837–4869, 2015.
groza2015formal Groza A., Letia I., Goron A., Zaporojan S.: A formal approach for identifying assurance deficits in unmanned aerial vehicle software. Progress in Systems Engineering, Springer, pp. 233–239, 2015.
Cordeiro09 Cordeiro L., Fischer B., Chen H., Marques-Silva J.: Semiformal Verification of Embedded Software in Medical Devices Considering Stringent Hardware Constraints. ICESS, pp. 396–403, 2009.
Abate17 Abate A., Bessa I., Cattaruzza D., Cordeiro L., David C., Kesseli P., Kroening D.: Sound and Automated Synthesis of Digital Stabilizing Controllers for Continuous Plants. HSCC, 2017 (to appear).
Seshia15 Sanjit A. Seshia: Combining Induction, Deduction, and Structure for Verification and Synthesis. Proc. of the IEEE 103(11): pp. 2036–2051, 2015.
Behrend15 Behrend J., Lettnin D., Gruenhage A., Ruf J., Kropf T., Rosenstiel W.: Scalable and Optimized Hybrid Verification of Embedded Software. J. Electronic Testing 31(2): pp. 151–166, 2015.
Cordeiro12 Cordeiro L., Fischer B., Marques-Silva J.: SMT-based Bounded Model Checking for Embedded ANSI-C Software. IEEE Trans. Software Eng. 38(4), pp. 957–974, 2012.
Armando06 Armando A., Mantovani J., Platania L.: Bounded Model Checking of Software Using SMT Solvers Instead of SAT Solvers. SPIN, LNCS 3925, pp. 146–162, 2006.
MerzFS12 Merz F., Falke S., Sinz C.: LLBMC: Bounded Model Checking of C and C++ Programs using a Compiler IR. VSTTE, LNCS 7152, pp. 146–161, 2012.
CordeiroF11 Cordeiro L., Fischer B.: Verifying Multi-threaded Software using SMT-based Context-Bounded Model Checking. ICSE, pp. 331–340, 2011.
Pereira15 Pereira P., Albuquerque H., Marques H., Silva I., Carvalho C., Santos V., Ferreira R., Cordeiro L.: SMT-Based Context-Bounded Model Checking for CUDA Programs. Concurrency and Computation: P&E., 2017 (to appear).
Inverso14 Inverso O., Tomasco E., Fischer B., La Torre S., Parlato G.: Bounded Model Checking of Multi-threaded C Programs via Lazy Sequentialization. CAV, LNCS 8559, pp. 585–602, 2014.
McMillan11 K. McMillan: Widening and Interpolation. SAS, LNCS 6887, pp. 1, 2011.
Gadelha15 Gadelha M., Ismail H., Cordeiro L.: Handling Loops in Bounded Model Checking of C Programs via k-Induction. STTT 19(1), pp. 97–114, 2017.
Rocha17 Rocha W., Rocha H., Ismail H., Cordeiro L., Fischer B.: DepthK: A k-Induction Verifier Based on Invariant Inference for C Programs (Competition Contribution). TACAS, 2017 (to appear).
Beyer15 Beyer D., Dangl M., Wendler P.: Boosting k-Induction with Continuously-Refined Invariants. CAV, LNCS 9206, pp. 622–640, 2015.
JMorse15 Morse J., Cordeiro L., Nicole D., Fischer B.: Model Checking LTL Properties over ANSI-C Programs with Bounded Traces. Software and System Modeling 14(1): pp. 65–81, 2015.
Bessa17 Bessa I., Ismail H., Palhares R., Cordeiro L., Chaves Filho J.: Formal Non-Fragile Stability Verification of Digital Control Systems with Uncertainty. IEEE Trans. Computers 66(3), pp. 545–552, 2017.
Sousa17 Sousa F., Garcia M., Cordeiro L., Lima Filho E.: Bounded Model Checking of C++ Programs based on the Qt Cross-Platform Framework. Softw. Test., Verif. Reliab., 2017 (to appear).
Jackson16 Jackson D., Vaziri M.: Correct or usable? The limits of traditional verification. ESEC/SIGSOFT FSE, pp. 11, 2016.
Araujo17 Araujo R., Albuquerque H., Bessa I., Cordeiro L., Chaves Filho J.: Counterexample Guided Inductive Optimization. Sci. Comput. Program., 2017 (under review).
Alur13 Alur R., Bodík R., Juniwal G., Martin M., Raghothaman M., Seshia S., Singh R., Solar-Lezama A., Torlak E., Udupa A.: Syntax-guided synthesis. FMCAD, pp. 1–8, 2013.
http://arxiv.org/abs/1702.07847v2
{ "authors": [ "Lucas Cordeiro" ], "categories": [ "cs.LO", "cs.SY", "C.3; D.2.4; I.2.2" ], "primary_category": "cs.LO", "published": "20170225072924", "title": "Automated Verification and Synthesis of Embedded Systems using Machine Learning" }
Three-Particle Correlations in Liquid and Amorphous Aluminium

Abstract: We discuss effects of the brane-localized mass terms on the fixed points of the toroidal orbifold T^2/Z_2 under the presence of background magnetic fluxes, where multiple lowest and higher-level Kaluza–Klein (KK) modes are realized before introducing the localized masses in general. Through the knowledge of linear algebra, we find that, in each KK level, one or more of the degenerate KK modes are almost inevitably perturbed when single or multiple brane-localized mass terms are introduced. When the typical scale of the compactification is far above the electroweak scale or the TeV scale, we apply this mechanism for uplifting unwanted massless or light modes, which are prone to appear in models on magnetized orbifolds.
http://arxiv.org/abs/1702.08184v1
{ "authors": [ "Meng-Sen Ma" ], "categories": [ "hep-th" ], "primary_category": "hep-th", "published": "20170227083001", "title": "Horizon thermodynamics in fourth-order gravity" }
chenhshf@ahu.edu.cn
hzhlj@ustc.edu.cn
^1School of Physics and Materials Science, Anhui University, Hefei, 230601, China
^2School of Mathematical Science, Anhui University, Hefei, 230601, China
^3Hefei National Laboratory for Physical Sciences at Microscales & Department of Chemical Physics, University of Science and Technology of China, Hefei, 230026, China
Efficient allocation of limited medical resources is crucial for controlling epidemic spreading on networks. Based on the susceptible-infected-susceptible model, we solve the optimization problem of how best to allocate the limited resources so as to minimize the prevalence, provided that the curing rate of each node is positively correlated to its medical resources. By quenched mean-field theory and heterogeneous mean-field (HMF) theory, we prove that the epidemic outbreak will be suppressed to the greatest extent if the curing rate of each node is directly proportional to its degree, under which the effective infection rate λ has a maximal threshold λ_c^opt = 1/⟨k⟩, where ⟨k⟩ is the average degree of the underlying network. For the weak infection region (λ ≳ λ_c^opt), we combine a perturbation theory with the Lagrange multiplier method (LMM) to derive the analytical expression of the optimal allocation of the curing rates and the corresponding minimized prevalence. For the general infection region (λ > λ_c^opt), the high-dimensional optimization problem is converted into numerically solving low-dimensional nonlinear equations by the HMF theory and LMM. Counterintuitively, in the strong infection region the low-degree nodes should be allocated more medical resources than the high-degree nodes to minimize the prevalence. Finally, we use simulated annealing to validate the theoretical results.
05.10.-a, 64.60.aq, 89.75.Hc
Optimal Allocation of Resources for Suppressing Epidemic Spreading on Networks
Zhonghuai Hou^3
December 30, 2023
==============================================================================
A challenging problem in epidemiology is how best to allocate limited resources of treatment and vaccination so that they will be most effective in suppressing or reducing outbreaks of epidemics. This problem has been a subject of intense research in statistical physics and many other disciplines <cit.>. Inspired by percolation theory, the simplest strategy is to randomly choose a fraction of nodes to immunize. However, random immunization is inefficient for heterogeneous networks. Later on, many more effective immunization strategies were developed, ranging from global strategies, like targeted immunization based on node degree <cit.> or betweenness centrality <cit.>, to local strategies, like acquaintance immunization <cit.> and (biased) random walk immunization <cit.>, and to some others in between <cit.>. Further improvements were made by graph partitioning <cit.> and the optimization of the susceptible size <cit.>. Besides degree heterogeneity, community structure also has a major impact on disease immunity <cit.>. Recently, a message-passing approach was used to find an optimal set of nodes for immunization <cit.>. The immunization problem has been mapped onto the optimal percolation problem <cit.>. Based on the idea of explosive percolation, an "explosive immunization" method has been proposed <cit.>. However, some diseases, like the common cold and influenza, that can be modeled by the susceptible-infected-susceptible (SIS) model do not confer immunity, and individuals can be infected over and over again.
Under such circumstances, one way to control the spread of these diseases is to reduce the risk of infection, for example by adaptively rewiring links incident to infected individuals <cit.> or through the dynamical interplay between awareness and epidemic spreading <cit.>. An alternative way to control epidemic spreading of the SIS type is to design an optimal strategy for distributing the limited medical resources, so as to suppress the epidemic outbreak to the greatest extent and to minimize the prevalence once the epidemic outbreak has happened. It is reasonable to assume that the curing rate of each node is positively correlated to the medical resources allocated to it. Therefore, the optimal allocation of medical resources is equivalent to that of the curing rates. Assuming the total medical resources are limited, the average curing rate is considered to be fixed. This problem has been addressed as a constrained optimization problem in several previous works. When the curing rate can only be tuned to a fixed number of feasible values, this problem has been proved to be NP-complete <cit.>. Instead, when the curing rate can vary continuously in a given interval, some efficient algorithms have been developed for minimizing the threshold of epidemic outbreak <cit.> or the steady-state infection density <cit.>. In the present work, we theoretically solve the constrained optimization problem in both the epidemic-free and endemic phases within the mean-field framework. On the one hand, we prove that the epidemic outbreak can be suppressed to the greatest extent when the curing rate of each node is directly proportional to its degree, under which the epidemic threshold is maximized and equals the inverse of the average degree of the underlying network. On the other hand, once the epidemic has broken out but remains close to the threshold, we analytically show that the optimal curing rate should be adjusted according to the difference between the node degree and the average degree, and to the distance from the epidemic threshold. For the general infection region, the optimization problem can be simplified to solving three nonlinear equations. To formulate our problem, we consider the SIS model on an undirected network of size N. The network is described by an adjacency matrix 𝔸 whose entries are defined as A_ij = 1 if nodes i and j are connected, and A_ij = 0 otherwise. Each node is either susceptible or infected. A susceptible node i can be infected by an infective neighbor with an infection rate β, and an infected node i recovers with a nonvanishing curing rate μ_i. Here, we consider that the curing rate is allowed to vary from one node to another. In general, the more medical resources a node i has available, the larger μ_i is. Assuming that the total amount of medical resources is limited, the average curing rate is fixed, i.e., ⟨μ_i⟩ = μ and μ_i ≥ 0, ∀ i. Our goal is to find an optimal allocation of {μ_i} under the constraint Eq.(<ref>) so as to minimize the prevalence ρ, that is, the fraction of infected nodes. In the quenched mean-field (QMF) theory, the probability ρ_i(t) that node i is infected at time t is described by the N-intertwined equations <cit.>, dρ_i(t)/dt = -μ_i ρ_i(t) + β[1-ρ_i(t)] ∑_j A_ij ρ_j(t). In the steady state, dρ_i(t)/dt = 0, and ρ_i is determined by the set of nonlinear equations ρ_i = β∑_j A_ij ρ_j / (μ_i + β∑_j A_ij ρ_j). One can notice that ρ_i = 0 is always a solution of Eq.(<ref>). This trivial solution corresponds to an absorbing state with no infective nodes.
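For readers who wish to reproduce the stationary solution numerically, a minimal fixed-point iteration of Eq.(<ref>) is sketched below in Python. The example graph, the parameter values and the use of the networkx package are assumptions made purely for illustration; they are not part of the analysis above.

import numpy as np
import networkx as nx

G = nx.erdos_renyi_graph(n=1000, p=0.004, seed=1)   # illustrative network, <k> ~ 4
A = nx.to_numpy_array(G)

beta, mu = 0.5, 1.0                        # effective infection rate lambda = 0.5
mu_i = np.full(A.shape[0], mu)             # uniform curing rates (standard SIS)

rho = np.full(A.shape[0], 0.5)             # start away from the absorbing state
for _ in range(5000):                      # iterate rho_i = s_i / (mu_i + s_i)
    s = beta * A.dot(rho)                  # s_i = beta * sum_j A_ij rho_j
    rho_new = s / (mu_i + s)
    if np.max(np.abs(rho_new - rho)) < 1e-12:
        break
    rho = rho_new

print("stationary prevalence:", rho.mean())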
A nonzero solution ρ_i > 0 exists if the effective infection rate λ = β/μ is larger than the so-called epidemic threshold λ_c. In this case, the prevalence ρ = ∑_i ρ_i/N is nonzero, corresponding to an endemic state. By linear stability analysis of Eq.(<ref>) around ρ_i = 0, λ_c is determined by the condition that the largest eigenvalue of the matrix -𝕌 + β𝔸 is zero, where 𝕌 = diag(μ_i) is a diagonal matrix. For the standard SIS model, μ_i ≡ μ for all i, one immediately obtains the well-known result λ_c,QMF^sta = 1/Λ_max(𝔸), with Λ_max(𝔸) the largest eigenvalue of the adjacency matrix. In our SIS model, we require the outbreak of epidemics to be suppressed to the greatest extent, which implies that the epidemic threshold of the optimal SIS model must be maximized. To this end, we first decompose the diagonal matrix 𝕌 into two diagonal matrices, 𝕌 = 𝕌̅ + Δ𝕌, where 𝕌̅ = diag{μ k_i/⟨k⟩}, with k_i the degree of node i, and Δ𝕌 = diag{Δμ_i}. Since Tr(𝕌) = Tr(𝕌̅) = Nμ, Δ𝕌 must satisfy the constraint Tr(Δ𝕌) = 0. For the real symmetric matrix -𝕌 + β𝔸, the largest eigenvalue Λ_max satisfies the inequality Λ_max ≥ v^T(-𝕌 + β𝔸)v, where v is a column vector satisfying v ∈ ℝ^N and ||v|| = 1. If we set v = (1/√N)(1,⋯,1)^T, Eq.(<ref>) becomes Λ_max ≥ v^T(-𝕌̅ + β𝔸)v - v^T Δ𝕌 v = -μ + β⟨k⟩, where the term v^T Δ𝕌 v = Tr(Δ𝕌)/N vanishes. Since Λ_max = 0 at the epidemic threshold, Eq.(<ref>) leads to an upper bound on the epidemic threshold, λ_c ≤ 1/⟨k⟩. Equality holds when v is the eigenvector of -𝕌 + β𝔸 corresponding to its largest eigenvalue. If we set 𝕌 = 𝕌̅ and β = μ/⟨k⟩, then -𝕌 + β𝔸 = -(μ/⟨k⟩)𝕃, where 𝕃 is the Laplacian matrix of the underlying network. It is well known that the smallest eigenvalue of 𝕃 is zero and that the corresponding eigenvector is v. Therefore, if the curing rate of each node is directly proportional to its degree, i.e., μ_i = μ_i^* = μ k_i/⟨k⟩, the epidemic threshold will be maximized, λ_c,QMF^opt = 1/⟨k⟩. In the QMF theory, the epidemic threshold of the optimal SIS model is no less than that of the standard SIS model, λ_c,QMF^opt ≥ λ_c,QMF^sta, as the lower bound of Λ_max(𝔸) is ⟨k⟩ for any type of network <cit.>. The above results can also be derived from the heterogeneous mean-field (HMF) theory. In the framework of HMF, nodes with the same degree are considered statistically equivalent. The constraint Eq.(<ref>) becomes ⟨μ_k⟩ = ∑_k P(k)μ_k = μ and μ_k ≥ 0, ∀ k, where μ_k is the curing rate of nodes of degree k, and P(k) is the degree distribution. Some related works have studied the SIS model <cit.> and its metapopulation version <cit.> with the curing rate μ_k ∼ k^α, but such a power-law form is not guaranteed to be optimal. In <cit.>, the authors consider a simple heuristic strategy to control epidemic extinction, where the curing rate is directly proportional to the node degree. They showed that on any graph with bounded degree the extinction time is sublinear in the size of the network. A further improvement was achieved by a heuristic PageRank algorithm that allocates curing rates based on the initial condition of infected nodes <cit.>. The present study does not require any a priori assumption about the form of the curing rate as a function of node degree, apart from the constraint Eq.(<ref>).
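A quick numerical sanity check of Eqs.(<ref>)-(<ref>) can be done by locating, for a given allocation {μ_i}, the value of β at which the largest eigenvalue of -𝕌 + β𝔸 crosses zero. The sketch below (the scale-free test graph, its size and the bisection depth are arbitrary illustrative choices) compares uniform rates with the degree-proportional allocation; for the latter, it should return β_c ≈ μ/⟨k⟩.

import numpy as np
import networkx as nx

G = nx.barabasi_albert_graph(n=300, m=2, seed=7)
A = nx.to_numpy_array(G)
k = A.sum(axis=1)
kmean, mu = k.mean(), 1.0

for label, U in [("uniform", np.diag(np.full(k.size, mu))),
                 ("degree-proportional", np.diag(mu * k / kmean))]:
    lam_max = lambda beta: np.linalg.eigvalsh(beta * A - U).max()
    lo, hi = 0.0, 1.0                      # lam_max(0) < 0 < lam_max(1)
    for _ in range(40):                    # bisection on the sign change
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if lam_max(mid) < 0 else (lo, mid)
    print(label, "beta_c ~", round(hi, 4), "; mu/<k> =", round(mu / kmean, 4))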
The dynamical evolution of ρ_k(t), the probability that a node of degree k is infected at time t, reads <cit.> dρ_k(t)/dt = -μ_k ρ_k(t) + β[1-ρ_k(t)] k Θ(t), where Θ is the probability of finding an infected node by following a randomly chosen edge. In the case of uncorrelated networks, Θ(t) can be written as Θ(t) = ∑_k [kP(k)/⟨k⟩] ρ_k(t). In the steady state, dρ_k(t)/dt = 0, and Eq.(<ref>) becomes ρ_k = βkΘ/(μ_k + βkΘ). Substituting Eq.(<ref>) into Eq.(<ref>), we obtain a self-consistent equation for Θ, Θ = ∑_k [kP(k)/⟨k⟩] βkΘ/(μ_k + βkΘ). The epidemic threshold is determined by the condition that the derivative of the r.h.s. of Eq.(<ref>) with respect to Θ, evaluated at Θ = 0, equals one, leading to β_c,HMF = ⟨k⟩ / ∑_k [k²P(k)/μ_k]. For a given P(k), maximizing β_c is equivalent to minimizing the denominator of the r.h.s. of Eq.(<ref>). To this end, we employ the Lagrange multiplier method (LMM) to maximize the epidemic threshold, with the Lagrange function ℒ = ∑_k k²P(k)/μ_k + τ(∑_k P(k)μ_k - μ), where τ is called the Lagrange multiplier. Taking the derivative of ℒ with respect to μ_k, ∂ℒ/∂μ_k = -k²P(k)/μ_k² + τP(k), and setting ∂ℒ/∂μ_k = 0, combined with the constraint Eq.(<ref>), we arrive at a maximal epidemic threshold λ_c,HMF^opt = 1/⟨k⟩ and the corresponding allocation of {μ_k}, μ_k = μ_k^* = μk/⟨k⟩. Interestingly, the HMF results are consistent with the QMF ones. Also, in the HMF theory the epidemic threshold of the optimal SIS model is no less than that of the standard SIS model, λ_c,HMF^opt ≥ λ_c,HMF^sta = ⟨k⟩/⟨k²⟩. For λ larger than but close to λ_c^opt, λ ≳ λ_c^opt, we shall combine a perturbation theory with the LMM to optimize the prevalence. To this end, we assume that for λ = λ_c^opt + Δλ, μ_k = μ_k^* + Δμ_k and Θ = Θ^* + ΔΘ, where Θ^* = 1 - μ/(β⟨k⟩) is the solution of Eq.(<ref>) for μ_k = μ_k^*. Expanding Eq.(<ref>) around (μ_k^*, Θ^*) to second order, then using the constraint ∑_k P(k)Δμ_k = 0 and neglecting the second-order small quantity ΔΘ², yields [see Appendix A for details] ΔΘ = (1/(β²⟨k⟩)) ∑_k [P(k)/k] Δμ_k². Around (μ_k^*, Θ^*), the change Δρ in the prevalence ρ = ∑_k P(k)ρ_k can be written as Δρ = -(Θ^*/β) ∑_k [P(k)/k] Δμ_k + (1-Θ^*)ΔΘ. Again using the LMM to minimize Δρ under the constraints ∑_k P(k)Δμ_k = 0 and Eq.(<ref>), we obtain a minimal ρ = ρ^* + Δρ^opt, with ρ^* = ∑_k P(k) βkΘ^*/(μ_k^* + βkΘ^*) and Δρ^opt = -(⟨k⟩²/(4λ))(⟨k^-1⟩ - ⟨k⟩^-1)Δλ² ≃ -(⟨k⟩³/4)(⟨k^-1⟩ - ⟨k⟩^-1)Δλ². Since ⟨k^-1⟩ > ⟨k⟩^-1 for any degree-inhomogeneous network, by Jensen's inequality, Δρ^opt < 0 and ρ is thus reduced. The corresponding optimal allocation is μ_k = μ_k^* + Δμ_k, with Δμ_k = (μ⟨k⟩λ/2)(⟨k⟩ - k)Δλ ≃ (μ/2)(⟨k⟩ - k)Δλ. This implies that as λ is increased from λ_c^opt, the curing rates of the nodes with degrees below the average degree should be increased, while the curing rates of the nodes with degrees above the average degree should be decreased. The amplitude of the change depends on the difference between the degree of each node and the average degree, ⟨k⟩ - k, and on the distance of the effective infection rate from its critical value, Δλ. For λ larger than but not close to λ_c^opt, λ > λ_c^opt, an analytical expression for the optimal allocation of {μ_k} and the corresponding minimal ρ is almost impossible to obtain, owing to the nonlinear character of the model. However, with the aid of the HMF theory and the LMM, the high-dimensional optimization problem can be converted into numerically solving low-dimensional nonlinear equations [see Appendix B for details].
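Before moving to the general region, the snippet below numerically evaluates the weak-infection prediction: the perturbative allocation μ_k = μk/⟨k⟩ + (μ/2)(⟨k⟩-k)Δλ and the predicted prevalence change Δρ^opt for a truncated Poisson degree distribution. The distribution, its cutoff and the value of Δλ are assumptions chosen only to make the example concrete.

import numpy as np
from math import factorial

mu, kmean, dlam = 1.0, 4.0, 0.02           # just above lambda_c = 1/<k>
ks = np.arange(1, 30)
Pk = np.array([np.exp(-kmean) * kmean**k / factorial(k) for k in ks])
Pk /= Pk.sum()                             # renormalise after truncation

mu_k = mu * ks / kmean + 0.5 * mu * (kmean - ks) * dlam
print("constraint check, <mu_k> - mu =", (Pk * mu_k).sum() - mu)  # ~0 up to truncation

inv_k = (Pk / ks).sum()                    # <k^-1>
drho_opt = -0.25 * kmean**3 * (inv_k - 1.0 / kmean) * dlam**2
print("predicted prevalence change:", drho_opt)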
In the general infection region, μ_k satisfies μ_k = √(βkΘ/τ + κβk²Θ/(τ⟨k⟩)) - βkΘ > 0 for k < k_c, and μ_k = 0 for k ≥ k_c, where τ and κ are the Lagrange multipliers, and k_c is a threshold degree that guarantees μ_k > 0 for k < k_c; it will be determined below. Θ, τ and κ are determined by the following three equations, √(βτ⟨k⟩/Θ) ∑_{k=k_min}^{k_c} √ξ P(k) - βτ⟨k⟩ ∑_{k=k_min}^{k_c} ξ P(k) - βκτ ∑_{k=k_min}^{k_c} kξ P(k) - [κ/(⟨k⟩Θ)] ∑_{k=k_c}^{k_max} kP(k) = 0, μ = √(βΘ/(τ⟨k⟩)) ∑_{k=k_min}^{k_c} k ξ^{-1/2} P(k) - βΘ ∑_{k=k_min}^{k_c} kP(k), Θ = √(βτΘ/⟨k⟩) ∑_{k=k_min}^{k_c} k ξ^{1/2} P(k) + (1/⟨k⟩) ∑_{k=k_c}^{k_max} kP(k), where we have used ξ = k/(⟨k⟩ + κk). To numerically solve for Θ, τ and κ from Eqs.(<ref>,<ref>,<ref>), k_c needs to be known in advance. To this end, we adopt the following numerical scheme: (i) first we set k_c = k_max, where k_max is the maximal degree of the underlying network; (ii) we numerically solve for Θ, τ and κ from Eqs.(<ref>,<ref>,<ref>), and then test the condition μ_k > 0 for all k < k_c using Eq.(<ref>); (iii) if the condition is not satisfied, k_c is decreased, k_c ← k_c - 1, and we return to (ii) until the condition Eq.(<ref>) is fulfilled. Figure <ref> shows the optimized results for ρ as a function of λ (solid line) in Erdös-Rényi (ER) random networks (a) and Barabási-Albert (BA) scale-free networks (b), with equal network size N = 1000 and average degree ⟨k⟩ = 4. For comparison, we also show the results of the standard SIS model (dotted line) and of the SIS model with the curing rates μ_i = μ_i^* (dashed line). As expected from the theoretical prediction, the epidemic threshold of the optimal SIS model is λ_c^opt = 1/⟨k⟩, which is significantly larger than that of the standard SIS model, and coincides with the case μ_i = μ_i^*. For λ > λ_c^opt, however, the prevalence for μ_i = μ_i^* is always larger than that of the optimal choice, and is even larger than that of the standard SIS model in the strong infection region, indicating that μ_i = μ_i^* is not a good choice once the epidemic outbreak has happened. We use the simulated annealing (SA) technique to validate our theoretical results. The SA builds a Monte Carlo Markov chain that in the long run converges to the minimum of a given energy function ℰ, where ℰ = ρ is obtained by numerically iterating Eq.(<ref>). The main steps of the SA are as follows. At the beginning, we assign a given set of {μ_i} satisfying the constraint Eq.(<ref>) (e.g., μ_i = μ for all i). Then, we randomly choose two distinct nodes, say i and j, and try to make the changes μ_i ← μ_i + δ and μ_j ← μ_j - δ with the standard Metropolis probability min(1, e^-β_SA Δℰ), where δ is randomly chosen between -μ_i and μ_j to guarantee that the curing rates always remain non-negative. β_SA is the inverse temperature of the SA, which slowly increases from 10^-2 to 10^4 via an annealing protocol, and Δℰ is the change of the energy function ℰ due to the change of μ_i and μ_j. We tested several different annealing protocols and adopted one in which the inverse temperature β_SA is updated by β_SA ← 1.01β_SA after every N attempts to update {μ_i}. The SA results are also shown in Fig. <ref> (square dots) and agree with the theoretical prediction. In Fig. <ref>, we show the optimal allocation of {μ_k} as a function of node degree k for several distinct λ in ER random networks (a) and BA scale-free networks (b), in which the theoretical results and the SA ones are indicated by lines and dots, respectively. For λ ≳ λ_c^opt, μ_k increases linearly with k, with the slope depending on the distance to the epidemic threshold.
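A compact version of the SA loop just described, written at the HMF level so that the energy is the prevalence obtained from the self-consistency equations rather than from an N-node network, is sketched below. The Poisson degree distribution, the annealing constants and the move that transfers curing rate between two degree classes (weighted by P(k) so that ⟨μ_k⟩ stays fixed) are all illustrative assumptions.

import numpy as np
from math import factorial

rng = np.random.default_rng(1)
kmean, mu, beta = 4.0, 1.0, 0.5                  # lambda = 0.5 > 1/<k>
ks = np.arange(1, 21)
Pk = np.array([np.exp(-kmean) * kmean**k / factorial(k) for k in ks])
Pk /= Pk.sum()
kbar = (ks * Pk).sum()

def prevalence(mu_k):
    """Stationary prevalence from iterating the HMF self-consistency."""
    theta = 0.9
    for _ in range(300):
        rho_k = beta * ks * theta / (mu_k + beta * ks * theta)
        theta = (ks * Pk * rho_k).sum() / kbar
    return (Pk * rho_k).sum()

mu_k = np.full(ks.size, mu)                      # uniform initial allocation
energy, beta_sa = prevalence(mu_k), 1e-2
while beta_sa < 1e4:                             # annealing schedule
    for _ in range(ks.size):                     # one sweep of Metropolis moves
        a, b = rng.choice(ks.size, size=2, replace=False)
        delta = rng.uniform(-mu_k[a], mu_k[b] * Pk[b] / Pk[a])
        trial = mu_k.copy()
        trial[a] += delta
        trial[b] = max(trial[b] - delta * Pk[a] / Pk[b], 0.0)  # keeps <mu_k> fixed
        e_new = prevalence(trial)
        if e_new < energy or rng.random() < np.exp(-beta_sa * (e_new - energy)):
            mu_k, energy = trial, e_new
    beta_sa *= 1.05
print("SA-optimised prevalence:", energy)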
The results have been well predicted by Eq.(<ref>) and Eq.(<ref>). For the region away from the threshold, μ_k deviates from the linear relation with k. For sufficiently large λ, μ_k for large k can be less than that for small k, and μ_k even vanishes when k exceeds a threshold value, as given by Eq.(<ref>). This surprising result implies that in the strong infection region more medical resources should be put into the low-degree nodes rather than into the high-degree nodes. In conclusion, we have theoretically studied a constrained optimization problem of how best to distribute limited medical resources (curing rates) for controlling epidemics of the SIS type. Based on the QMF and HMF theories, we have shown that the optimal allocation depends on the effective infection rate λ (or the basic reproduction number R_0 = ⟨k⟩λ). If R_0 ≤ 1, the curing rate of each node should be directly proportional to its degree, under which the epidemic outbreak is suppressed to the greatest extent and the epidemic threshold is maximized, Eq.(<ref>) or Eq.(<ref>). Once the maximal epidemic threshold is crossed (R_0 ≳ 1), the epidemic spreads persistently. In this case, we have analytically shown that the change in the curing rate of each node depends linearly on the difference between the average degree and its degree, and on the distance to the epidemic threshold, Eq.(<ref>). For the general infection region (R_0 > 1), it is almost impossible to derive an analytical solution of the optimization problem; however, it can be reduced to the much easier problem of numerically solving three nonlinear equations, Eqs.(<ref>,<ref>,<ref>). Surprisingly, we found that in the strong infection region the curing rates of the low-degree nodes can exceed those of the high-degree nodes so as to ensure the minimization of the prevalence. An interesting generalization is how to solve the present constrained optimization problem with other existing theoretical methods, such as the pair mean-field method, which takes into account the role of dynamical correlations between neighboring nodes <cit.>. Moreover, the method presented here could be applied to a number of other optimization problems, for example, controlling opinion dynamics in social networks <cit.>. This will be the subject of future work. Acknowledgments: This work was supported by the National Science Foundation of China (Grants Nos. 11205002, 61473001, 21673212), the Key Scientific Research Fund of Anhui Provincial Education Department (Grant No. KJ2016A015) and the "211" Project of Anhui University (Grant No. J01005106).
[Pastor-Satorras et al.(2015)] R. Pastor-Satorras, C. Castellano, P. Van Mieghem, and A. Vespignani, Rev. Mod. Phys. 87, 925 (2015).
[Nowzari et al.(2016)] C. Nowzari, V. M. Preciado, and G. J. Pappas, IEEE Control Systems 36, 26 (2016).
[Pastor-Satorras and Vespignani(2002)] R. Pastor-Satorras and A. Vespignani, Phys. Rev. E 65, 036104 (2002).
[Holme et al.(2002)] P. Holme, B. J. Kim, C. N. Yoon, and S. K. Han, Phys. Rev. E 65, 056109 (2002).
[Cohen et al.(2003)] R. Cohen, S.
Havlin, and D. ben Avraham, Phys. Rev. Lett. 91, 247901 (2003).
[Holme(2004)] P. Holme, Europhys. Lett. 68, 908 (2004).
[Stauffer and Barbosa(2006)] A. O. Stauffer and V. C. Barbosa, Phys. Rev. E 74, 056105 (2006).
[Gomez-Gardenes et al.(2006)] J. Gomez-Gardenes, P. Echenique, and Y. Moreno, Eur. Phys. J. B 49, 259 (2006).
[Chen et al.(2008)] Y. Chen, G. Paul, S. Havlin, F. Liljeros, and H. E. Stanley, Phys. Rev. Lett. 101, 058701 (2008).
[Schneider et al.(2011)] C. M. Schneider, T. Mihaljev, S. Havlin, and H. J. Herrmann, Phys. Rev. E 84, 061911 (2011).
[Masuda(2009)] N. Masuda, New J. Phys. 11, 123018 (2009).
[Salathé and Jones(2010)] M. Salathé and J. H. Jones, PLoS Comput. Biol. 6, e1000736 (2010).
[Altarelli et al.(2014)] F. Altarelli, A. Braunstein, L. Dall'Asta, J. R. Wakeling, and R. Zecchina, Phys. Rev. X 4, 021024 (2014).
[Morone and Makse(2015)] F. Morone and H. A. Makse, Nature (London) 524, 65 (2015).
[Clusella et al.(2016)] P. Clusella, P. Grassberger, F. J. Pérez-Reche, and A. Politi, Phys. Rev. Lett. 117, 208301 (2016).
[Gross et al.(2006)] T. Gross, C. J. D. D'Lima, and B. Blasius, Phys. Rev. Lett. 96, 208701 (2006).
[Granell et al.(2013)] C. Granell, S. Gómez, and A. Arenas, Phys. Rev. Lett. 111, 128701 (2013).
[Prakash et al.(2013)] B. A. Prakash, L. Adamic, T. Iwashnya, H. Tong, and C. Faloutsos, Proc. SIAM Int. Conf. Data Mining, Austin, TX, pp. 659–667 (2013).
[Wan et al.(2008)] Y. Wan, S. Roy, and A. Saberi, Syst. Biol. IET 2, 184 (2008).
[Preciado et al.(2013)] V. M. Preciado, M. Zargham, C. Enyioha, A. Jadbabaie, and G. J. Pappas, Proc. IEEE Conf. Decision and Control, Florence, Italy, pp. 7486–7491 (2013).
[Gourdin et al.(2011)] E. Gourdin, J. Omic, and P. Van Mieghem, Proc. 8th Int. Workshop on Design of Reliable Communication Networks, pp. 86–93 (2011).
[Y. Wang(2003)] Y. Wang et al., 22nd International Symposium on Reliable Distributed Systems (SRDS'03) (IEEE Computer Society, 2003), p. 25.
[Mieghem et al.(2009)] P. Van Mieghem, J. Omic, and R. Kooij, IEEE/ACM Trans. Netw. 17, 1 (2009).
[Gómez et al.(2010)] S. Gómez, A. Arenas, J.
Borge-Holthoefer, S. Meloni, and Y. Moreno, Europhys. Lett. 89, 38009 (2010).
[Mieghem(2011)] P. Van Mieghem, Graph Spectra for Complex Networks (Cambridge University Press, 2011).
[Dezső and Barabási(2002)] Z. Dezső and A.-L. Barabási, Phys. Rev. E 65, 055103 (2002).
[Shen et al.(2012)] C. Shen, H. Chen, and Z. Hou, Phys. Rev. E 86, 036114 (2012).
[Borgs et al.(2010)] C. Borgs, J. Chayes, A. Ganesh, and A. Saberi, Random Struct. Alg. 37, 204 (2010).
[Chung et al.(2009)] F. Chung, P. Horn, and A. Tsiatas, Internet Math. 6, 237 (2009).
[Pastor-Satorras and Vespignani(2001)] R. Pastor-Satorras and A. Vespignani, Phys. Rev. Lett. 86, 3200 (2001).
[Eames and Keeling(2002)] K. T. D. Eames and M. J. Keeling, Proc. Natl. Acad. Sci. U.S.A. 99, 13330 (2002).
[Gleeson(2011)] J. P. Gleeson, Phys. Rev. Lett. 107, 068701 (2011).
[Boguñá et al.(2013)] M. Boguñá, C. Castellano, and R. Pastor-Satorras, Phys. Rev. Lett. 111, 068701 (2013).
[Mata et al.(2014)] A. S. Mata, R. S. Ferreira, and S. C. Ferreira, New J. Phys. 16, 053006 (2014).
[Kiss et al.(2015)] I. Z. Kiss, G. Röst, and Z. Vizi, Phys. Rev. Lett. 115, 078701 (2015).
[Cai et al.(2016)] C.-R. Cai, Z.-X. Wu, M. Z. Q. Chen, P. Holme, and J.-Y. Guan, Phys. Rev. Lett. 116, 258301 (2016).
[Cator and Van Mieghem(2012)] E. Cator and P. Van Mieghem, Phys. Rev. E 85, 056111 (2012).
[Mata and Ferreira(2013)] A. S. Mata and S. C. Ferreira, Europhys. Lett. 103, 48003 (2013).
[I. Z. Kiss(2015)] I. Z. Kiss et al., J. Math. Biol. 70, 437 (2015).
[Castellano et al.(2009)] C. Castellano, S. Fortunato, and V. Loreto, Rev. Mod. Phys. 81, 591 (2009).
§ WEAK INFECTION REGION
For λ larger than but close to λ_c^opt, λ ≳ λ_c^opt, we have combined a perturbation theory with the Lagrange multiplier method (LMM) to optimize the prevalence ρ. For λ = λ_c^opt + Δλ, we have μ_k = μ_k^* + Δμ_k and Θ = Θ^* + ΔΘ, where μ_k^* = μk/⟨k⟩, and Θ^* = 1 - μ/(β⟨k⟩) is the solution of the self-consistent equation for Θ, Eq.(<ref>) in the main text, under μ_k = μ_k^*. Since Θ > 0 in the region of epidemic spreading, Eq.(<ref>) in the main text can be rewritten as (S1) (β/⟨k⟩) ∑_k k²P(k)/(μ_k + βkΘ) = 1. Expanding the above equation around (μ_k^*, Θ^*) to second order yields (S2) ∑_k [∂f/∂μ_k]_(μ_k^*,Θ^*) Δμ_k + [∂f/∂Θ]_(μ_k^*,Θ^*) ΔΘ + (1/2) ∑_k ∑_k' [∂²f/∂μ_k∂μ_k']_(μ_k^*,Θ^*) Δμ_k Δμ_k' + ∑_k [∂²f/∂μ_k∂Θ]_(μ_k^*,Θ^*) Δμ_k ΔΘ + (1/2) [∂²f/∂Θ²]_(μ_k^*,Θ^*) ΔΘ² = 0, where f ≡ (β/⟨k⟩) ∑_k k²P(k)/(μ_k + βkΘ) - 1, and (S3)
[∂f/∂μ_k]_(μ_k^*,Θ^*) = -P(k)/(β⟨k⟩), [∂f/∂Θ]_(μ_k^*,Θ^*) = -1, [∂²f/∂μ_k∂μ_k']_(μ_k^*,Θ^*) = δ_kk' 2P(k)/(β²⟨k⟩k), [∂²f/∂μ_k∂Θ]_(μ_k^*,Θ^*) = 2P(k)/(β⟨k⟩), [∂²f/∂Θ²]_(μ_k^*,Θ^*) = 2. Substituting Eq.(S3) into Eq.(S2), we obtain (S4) -(1/(β⟨k⟩)) ∑_k P(k)Δμ_k - ΔΘ + (1/(β²⟨k⟩)) ∑_k [P(k)/k] Δμ_k² + (2/(β⟨k⟩)) ∑_k P(k)Δμ_k ΔΘ + ΔΘ² = 0. Using the constraint ∑_k P(k)Δμ_k = 0 and neglecting the second-order small quantity ΔΘ² ≪ ΔΘ, Eq.(S4) becomes (S5) ΔΘ = (1/(β²⟨k⟩)) ∑_k [P(k)/k] Δμ_k². Around (μ_k^*, Θ^*), the change Δρ in the prevalence ρ = ∑_k P(k)ρ_k can be expanded to leading order as (S6) Δρ = ∑_k [∂ρ/∂μ_k]_(μ_k^*,Θ^*) Δμ_k + [∂ρ/∂Θ]_(μ_k^*,Θ^*) ΔΘ, where (S7) [∂ρ/∂μ_k]_(μ_k^*,Θ^*) = -P(k)Θ^*/(βk), [∂ρ/∂Θ]_(μ_k^*,Θ^*) = 1 - Θ^*. Substituting Eq.(S7) into Eq.(S6), we obtain (S8) Δρ = -(Θ^*/β) ∑_k [P(k)/k] Δμ_k + (1-Θ^*)ΔΘ. In the following we use the LMM to minimize Δρ under the constraints ∑_k P(k)Δμ_k = 0 and Eq.(S5). Note that the first constraint is due to the fixed average curing rate, and the second one is the requirement of the HMF dynamics. Utilizing Eq.(S8) and the two constraints, the Lagrange function can be written as (S9) ℒ = -(Θ^*/β) ∑_k [P(k)/k] Δμ_k + (1-Θ^*)ΔΘ + τ(-ΔΘ + (1/(β²⟨k⟩)) ∑_k [P(k)/k] Δμ_k²) + κ ∑_k P(k)Δμ_k, where τ and κ are the Lagrange multipliers. Taking the derivative of ℒ with respect to ΔΘ and Δμ_k, we obtain (S10) ∂ℒ/∂ΔΘ = (1-Θ^*) - τ, and (S11) ∂ℒ/∂Δμ_k = -(Θ^*/β) P(k)/k + τ (2/(β²⟨k⟩)) [P(k)/k] Δμ_k + κP(k). Setting ∂ℒ/∂ΔΘ = 0 and ∂ℒ/∂Δμ_k = 0, we obtain (S12) τ = 1 - Θ^*, and (S13) -Θ^*/(βk) + (2τ/(β²k⟨k⟩)) Δμ_k + κ = 0, respectively. Substituting Eq.(S13) into the constraint ∑_k P(k)Δμ_k = 0, we obtain (S14) κ = Θ^*/(β⟨k⟩). Combining Eqs.(S12,S13,S14), we obtain (S15) Δμ_k = (μ⟨k⟩λ/2)(⟨k⟩ - k)Δλ ≃ (μ/2)(⟨k⟩ - k)Δλ. Substituting Eq.(S14) and Eq.(S15) into Eq.(S8), we obtain (S16) Δρ^opt = -(⟨k⟩²/(4λ))(⟨k^-1⟩ - ⟨k⟩^-1)Δλ² ≃ -(⟨k⟩³/4)(⟨k^-1⟩ - ⟨k⟩^-1)Δλ². § GENERAL INFECTION REGION For λ larger than but not close to λ_c^opt, owing to the nonlinear character of the model, an analytical expression for the optimal allocation of {μ_k} and the corresponding minimal ρ is in general impossible. However, with the aid of the HMF theory and the LMM, the high-dimensional optimization problem can be converted into numerically solving low-dimensional nonlinear equations. We first write the Lagrange function as (S17) ℒ = ∑_k P(k) βkΘ/(μ_k + βkΘ) + τ(∑_k P(k)μ_k - μ) + κ(∑_k [kP(k)/⟨k⟩] βkΘ/(μ_k + βkΘ) - Θ), where τ and κ are the Lagrange multipliers. Taking the derivative of ℒ with respect to μ_k and Θ, we obtain (S18) ∂ℒ/∂μ_k = -P(k) βkΘ/(μ_k + βkΘ)² + τP(k) - κ [kP(k)/⟨k⟩] βkΘ/(μ_k + βkΘ)², and (S19) ∂ℒ/∂Θ = ∑_k βkP(k)/(μ_k + βkΘ) - ∑_k β²k²P(k)Θ/(μ_k + βkΘ)² - κ ∑_k [kP(k)/⟨k⟩] β²k²Θ/(μ_k + βkΘ)². Taking the derivative of ℒ with respect to the Lagrange multipliers τ and κ, we recover the constraint equation Eq.(<ref>) and the self-consistent equation Eq.(<ref>) of Θ. Setting ∂ℒ/∂μ_k = 0, we obtain (S20) μ_k = √(βkΘ/τ + κβk²Θ/(τ⟨k⟩)) - βkΘ > 0 for k < k_c, and μ_k = 0 for k ≥ k_c, where k_c is a threshold degree that guarantees μ_k > 0 for k < k_c; it will be determined later.
Substituting Eq.(S20) into Eq.(S19) and setting ∂ℒ/∂Θ = 0, we obtain (S21) √(βτ⟨k⟩/Θ) ∑_{k=k_min}^{k_c} P(k)√k/√(⟨k⟩+κk) - βτ⟨k⟩ ∑_{k=k_min}^{k_c} P(k)k/(⟨k⟩+κk) - βκτ ∑_{k=k_min}^{k_c} P(k)k²/(⟨k⟩+κk) - [κ/(⟨k⟩Θ)] ∑_{k=k_c}^{k_max} kP(k) = 0. Combining the constraint Eq.(<ref>) in the main text with Eq.(S20), we obtain (S22) μ = √(βΘ/(τ⟨k⟩)) ∑_{k=k_min}^{k_c} P(k)√k √(⟨k⟩+κk) - βΘ ∑_{k=k_min}^{k_c} P(k)k. Combining the self-consistent equation Eq.(<ref>) in the main text with Eq.(S20), we obtain (S23) Θ = √(βτΘ/⟨k⟩) ∑_{k=k_min}^{k_c} P(k) k^{3/2}/√(⟨k⟩+κk) + (1/⟨k⟩) ∑_{k=k_c}^{k_max} kP(k).
http://arxiv.org/abs/1702.08444v1
{ "authors": [ "Hanshuang Chen", "Guofeng Li", "Haifeng Zhang", "Zhonghuai Hou" ], "categories": [ "q-bio.PE", "physics.soc-ph" ], "primary_category": "q-bio.PE", "published": "20170226113220", "title": "Optimal Allocation of Resources for Suppressing Epidemic Spreading on Networks" }
Arthur Valencio (corresponding author: a.valencio@abdn.ac.uk)
Celso Grebogi
Murilo S. Baptista
Institute for Complex Systems and Mathematical Biology, University of Aberdeen, Aberdeen AB24 3UE, United Kingdom
Methods for removal of unwanted signals from gravity time-series: comparison using linear techniques complemented with analysis of system dynamics
================================================================================================================================================== The presence of undesirable dominating signals in geophysical experimental data is a challenge in many subfields. One remarkable example is surface gravimetry, where frequencies from Earth tides correspond to time-series fluctuations up to a thousand times larger than the phenomena of major interest, such as hydrological gravity effects or co-seismic gravity changes. This work discusses general methods for removal of unwanted dominating signals by applying them to 8 long-period gravity time-series of the International Geodynamics and Earth Tide Service, equivalent to the acquisition from 8 instruments in 5 locations representative of the network. We compare three different conceptual approaches for tide removal: frequency filtering, physical modelling and data-based modelling. Each approach reveals a different limitation to be considered depending on the intended application. Vestiges of tides remain in the residuals for the modelling procedures, whereas the signal was distorted in different ways by the filtering and data-based procedures. The linear techniques employed were power spectral density, spectrogram, cross-correlation and classical harmonic decomposition, while the system dynamics was analysed by state-space reconstruction and estimation of the largest Lyapunov exponent. Although the tides could not be completely eliminated, they were sufficiently reduced to allow observation of geophysical events of interest above the 10 nm s^-2 level, exemplified by a hydrology-related event of 60 nm s^-2. The implementations adopted for each conceptual approach are general, so that their principles could be applied to other kinds of data affected by undesired signals composed mainly of periodic or quasi-periodic components. Keywords: Time-variable gravity – Earth tides – Time-series analysis – Gravity residuals – Tidal filtering. § INTRODUCTION Many geophysical data present strong periodic or quasi-periodic signals masking the observations of the phenomena of interest. For the case of precise surface gravimetry, phenomena of interest are polar ice cap variations and melting <cit.>, hydrological effects including remote assessment of underground water reservoirs <cit.>, forest evapotranspiration rates <cit.>, co-seismic and post-seismic deformations <cit.>, and proposals of gravity-field perturbations before arrival of compressional seismic waves <cit.>. However, these effects, typically in the range of 0.1–100 nm s^-2, are hindered by gravity tides, with amplitudes of 2000 nm s^-2. The device adopted for such gravity applications is the superconducting gravimeter, with a precision level smaller than 1 nm s^-2. Its operation is a mathematical equivalent of an ideal spring, built by the employment of a superconducting sphere suspended in a magnetic field produced by persistent currents in a coil. In such a device, changes of local gravity induce changes to the equilibrium of the system, which are translated into a relative gravity measurement <cit.>.
A global network of these instruments, the Global Geodynamics Project (GGP), was implemented in the 1990s to address problems common to the gravimetry community <cit.>, and the project is now taken over by the International Geodynamics and Earth Tide Service (IGETS), enabling access to data from 33 stations (http://isdc.gfz-potsdam.de/igets-data-base). However, the issue of tidal filtering remains a challenge with many possible solution approaches and no full consensus. The observed tides comprise several single frequencies, mostly around the diurnal, semidiurnal and terdiurnal values. Despite progress in the development of more complete tidal tables, the Darwin nomenclature <cit.> for the main tidal modes remains in use for easy identification (e.g. diurnal: K1, O1; semidiurnal: M2, S2; terdiurnal: M3, MK3). The highest-amplitude tidal signals are able to influence or mask geophysical observations conducted at frequencies below 5 cycles per day (5.79·10^-5 Hz), including ground displacement GNSS measurements, ground strain and monitoring of water-body levels. Other systems may have unwanted frequencies different from tides, due to contamination from other sources. Hence general methods are necessary as a first step of the analysis. In this paper, we consider three conceptually distinct general methods for removing undesired periodic signals: (i) frequency filtering of the components (known from spectroscopy or from a model); (ii) physical modelling of the contributing sources to the signal and subtraction from the observation; (iii) data-based modelling. The latter infers from the data the parameters that best describe the unwanted signal and extracts the residual. How each method is implemented and applied to gravity tides is described in Sec. <ref>. Although special focus is given to the gravity time-series, the principles are general and apply to the preliminary signal analysis of many applications, with the necessary adjustments kept to a minimum. In the results (Sec. <ref>) it is shown that an FFT-based frequency filtering may appear effective, but the artificial removal of information near tidal frequencies and the addition of the Gibbs ringing phenomenon generate undesirable consequences for the observation of geophysical events of interest or other physical applications. A frequency filtering based on a multiband filter also distorts the frequency spectrum, and is shown to be unable to completely remove all tidal components. The physical modelling reduces the tidal oscillations, but tidal peaks remain present in the frequency domain. Time-series analysis based on nonlinear approaches for the state-space reconstruction and the estimation of Lyapunov exponents shows that the nonlinear features of the original time-series are preserved in the residual from the physical modelling. Comparatively, however, this is the method where the original physical nature of the system is most preserved, based on the state-space reconstruction and the sensitivity to initial conditions. The data-based method has the best performance in reducing the gravity residuals without appearing to distort the frequency spectrum, though the state-space plot features are less preserved. The residuals from this method enable the observation of hydrology-induced gravity changes, exemplified in Sec.
<ref>.§ MATERIALS AND METHODS§.§ Selected stations For this study, superconducting gravimeters were selected at Sutherland in South Africa (SU, 3 instruments), Schiltach/Black Forest in Germany (BF, 2 instruments), Ny-Ålesund on Svalbard island, Norway (NY, 1 instrument), Matsushiro in Japan (MA, 1 instrument), and Apache Point in New Mexico, USA (AP, 1 instrument), distributed according to Fig. <ref>. The data period of the time-series used and the type of each instrument that generated it are detailed in Table <ref>. The chosen instruments are a representative sample of the IGETS network, including different latitudes, different site conditions (e.g. continentality, vegetation, climate conditions, etc.) and different generations of gravimeter instruments. For this analysis, time-series data with 1-min sampling were used. An example of a time-series and its frequency spectrum is shown in Fig. <ref>.§.§ Tidal removal methods §.§.§ Pre-processing Although local operators provide the IGETS users with data corrected for spikes and clippings related to the helium refill procedure or to strong motions in the location, the time-series still contain data gaps, offsets and instrument linear trends, which must be considered prior to the application of the tidal removal methods. The contribution of the atmospheric mass to the gravity signal must also be accounted for. The three classes of methods considered for tidal removal, frequency filtering, physical modelling and data-based modelling, have different pre-processing demands. In the case of frequency filtering, data gaps cannot be present, hence missing intervals, which can be as large as many months, must be temporarily replaced with a synthetic gravity signal. Regardless of that, 90% of the local atmospheric contribution to gravity can be removed by considering a linear proportion between the atmospheric gravity variation (δ g_atm) and the air-pressure change (δ p), with the proportionality constant (atmospheric admittance) being α = δ g_atm/δ p = -3.56 nm s^-2 mbar^-1 <cit.>, which is considered a sufficient atmospheric correction for this method. The physical modelling and data-based modelling procedures do not require filling the gaps in the time-series, and they use a more complete analysis of the atmospheric contribution, it thus being sufficient to only correct for offsets and instrument linear trends in the pre-processing. This was performed with a semi-automatic implementation of the remove-restore procedure <cit.> in Matlab. §.§.§ Frequency filtering This filtering method consists of simply removing the undesirable tidal frequencies. It is mainly adopted as a preliminary analysis of the spectrum or for the investigation of low-frequency seismic modes known to be out of resonance with tides. The tidal frequencies were obtained from the Tamura <cit.> tables, containing 1200 constituent waves. These frequencies may be removed from the original gravity signal either in the time domain or in the frequency domain. For the time domain, a multiband finite-impulse response (FIR) filter with zero-phase distortion was applied, and for the frequency domain the classical procedure was adopted of analysing the Fourier spectrum, removing the selected frequencies (setting them to zero in the Fourier domain), and reconstructing the time-series by inverse fast Fourier transform (FFT filtering).
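As a concrete reference, the FFT filtering step can be sketched as below (a minimal Python/NumPy sketch, not the actual implementation used in this work; the half-width df of the deleted band around each constituent is an illustrative choice):

import numpy as np

def fft_filter(g, dt, tidal_freqs, df=1e-7):
    # g: gap-filled, admittance-corrected gravity series [nm s^-2]
    # dt: sampling interval [s]; tidal_freqs: tabled frequencies [Hz]
    # df: half-width of the deleted band around each constituent [Hz]
    G = np.fft.rfft(g)
    f = np.fft.rfftfreq(len(g), d=dt)
    for f0 in tidal_freqs:
        G[np.abs(f - f0) < df] = 0.0   # delete the tidal constituent
    return np.fft.irfft(G, n=len(g))   # reconstruct the residual series

The zeroed bins correspond precisely to the information artificially removed from the data, a point discussed next.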
Original gaps were reintroduced in the residuals, and extra gap margins of one week before and after the original gaps were included to remove artificial ringing effects (Gibbs phenomenon). It must be stressed that these procedures for obtaining the gravity residuals are not suitable for all applications. In particular, applications involving signals close to resonance with tides or the investigation of very broad-band events should not adopt this method. It is also not recommended for studies that require the quantification of informational measures in the time-series, such as studies of causality, since this method artificially removes information from the data. Although Zetler <cit.> argues that finite-numbered strong periodicities should not be analysed with FFT or removed with frequency filtering procedures, this technique remained common practice to remove tides until much later <cit.>, and may still be used in other fields.§.§.§ Physical modelling This method is based on modelling all known tidal contributing sources (physical events), resulting in a theoretical prediction. Such a prediction is then subtracted from the observed signal so as to obtain the gravity residuals, which would only contain the events of interest, for example, co-seismic changes. Therefore, it is required to properly select the other phenomena that produce gravity changes, and to consider how they are described. The following effects are considered in this study: solid Earth tides, ocean tidal loading, atmospheric gravity contribution, ocean non-tidal loading, hydrology loading and polar tides. The solid Earth tides, also referred to as body tides, are the direct effects of the gravitational pull of astronomical objects over either a homogeneous or a layered Earth model, resulting in ground displacements in the order of tenths of cm and local gravity changes in the order of μm s^-2. This is calculated through the following tide generating potential <cit.>: V_SE(r;t) = g_e [ ∑_n=2^∞ ∑_m=1^n c_nm^*(t) Y_nm(θ,ϕ) ], with (r,θ,ϕ) specifying the location (radial distance, co-latitude, longitude), g_e the mean gravity of the Earth at the equator, Y_nm the spherical harmonics, and c_nm(t) the complex coefficients calculated from the attraction of the astronomical bodies, which are typically computed from a harmonic expansion. To such an Earth model the contribution of the oceans is subsequently added, which changes gravity due to the direct mass movement but also deforms back the ground due to the significant weight of water being periodically redistributed. This effect is the ocean tidal loading, computed from a tide generating potential given by <cit.> V_OL(r;t) = ρ ∬_ocean G(r-r') H(r') dS, with ρ the density of water, H(r') the tide at the ocean in the location r', and G(r-r') the Green's function for the distance, which appears as a solution of the elastic and Poisson equations for a layered Earth. The gravity tide is obtained from the generating potentials by taking the derivative in the radial direction. The work of Farrell <cit.> describes how the tide generating potentials can also be applied to measurements of tidal displacements, tilts, or strain, and, in addition, exemplifies how the Green's function used can be obtained from first principles for a layered Earth model.
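To give a flavour of how such a loading integral is evaluated in practice, a schematic discretization of the ocean tidal loading expression above could read as follows (a Python/NumPy sketch; the function green is assumed to interpolate a tabulated Green's function of the layered Earth model, and the grid variables are illustrative):

import numpy as np

RHO_W = 1025.0  # sea-water density [kg m^-3]

def ocean_loading_potential(r_station, r_cells, H_cells, dS_cells, green):
    # Discrete form of V_OL = rho * sum over ocean cells of
    # G(|r - r'|) * H(r') * dS: r_cells are the cell positions,
    # H_cells the tide heights at the cells, dS_cells the cell areas.
    dist = np.linalg.norm(r_cells - r_station, axis=1)
    return RHO_W * np.sum(green(dist) * H_cells * dS_cells)

In practice, dedicated packages perform this convolution with refined near-field treatment, which is one reason ready-made software was adopted, as described next.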
Both the ocean tidal loading and the solid Earth tides were computed for the selected stations using the software ATLANTIDA3.1_2014 <cit.>, with the assumptions of tidal periodicities following the Tamura <cit.> tables, the layered Earth model IASP91, and the ocean model FES2012. Mass redistribution in the atmosphere also causes significant gravity variations, up to the order of 100 nm s^-2. Analogously to what occurs with the ocean tidal loading, the atmospheric mass redistribution leads to fluctuations of the surface of the ground and oceans, in particular with the oceans responding as an inverted barometer for periods larger than one week. Although the atmospheric contribution is dominated by the local admittance, based on reading the air pressure at the station, a full description involves computing local and non-local air mass displacements. There are two services providing numerical results of atmospheric gravity based on finite element models: the Atmospheric Loading service, provided by EOST/University of Strasbourg (http://loading.u-strasbourg.fr/sg_atmos.php), and ATMACS, provided by BKG (http://atmacs.bkg.bund.de). The first was adopted for this study, selecting the atmospheric data provided by the ECMWF ERA-Interim reanalysis (http://www.ecmwf.int/en/research/climate-reanalysis/era-interim), which is able to cover the entire gravity time-series period for all stations. The data, however, are sampled at 6-h intervals, and interpolation is necessary. The ocean non-tidal loading refers to other changes in the water mass due to the circulation of currents and wind forcing. These lead to gravity changes that can be larger than the ocean tidal loading. Theoretical predictions are calculated similarly to the atmospheric loading, but using the ocean bottom pressure data from the ECCO2 model (http://ecco2.jpl.nasa.gov/). A similar procedure is adopted for the calculation of the hydrology loading contributions, due to soil moisture changes, where again weather data from the ERA-Interim reanalysis were selected. Services providing the numerical results using such models for all stations are available from EOST/University of Strasbourg (http://loading.u-strasbg.fr/sg_ocean.php, http://loading.u-strasbg.fr/sg_hydro.php). Finally, the polar tides are a result of the Chandler wobble, a small variation of the Earth's axis of rotation. Using the Earth Orientation data EOPC04 from the International Earth Rotation Service (ftp://hpiers.obspr.fr/iers/eop/eopc04), the polar tides can be calculated as δ g_polar = -39 × 10^6 sin(2θ) (m_1 cos ϕ + m_2 sin ϕ) [nm s^-2], with (θ, ϕ) again the co-latitude and longitude of the station, and (m_1, m_2) the equivalent (x, y) polar motion amplitudes converted to radians <cit.>. Figure <ref> exemplifies the scale of each contribution and the procedure for tidal signal removal. A detailed review of these processes can be found in Crossley et al. <cit.>, Hinderer et al. <cit.> and Boy and Hinderer <cit.>. The input parameters for the physical modelling refer only to the station location and local/global conditions, and no a priori information on the gravity time-series is used. Advantages of this method are the maintenance of the information produced in all frequency bands provided by the device, and the possibility of clearly defining the physical origin of any given contribution, including the control to maintain aspects of interest according to the application.
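The polar tide term, for instance, is simple enough to be evaluated directly from the expression above, as in the following sketch (Python/NumPy; the conversion of the published (x, y) pole coordinates from arcseconds to radians, and the sign convention relating m_2 to y, are assumptions that must be checked against the adopted Earth orientation series):

import numpy as np

ARCSEC_TO_RAD = np.pi / (180.0 * 3600.0)

def polar_tide(theta_deg, phi_deg, x_arcsec, y_arcsec):
    # dg_polar = -39e6 * sin(2*theta) * (m1*cos(phi) + m2*sin(phi)) [nm s^-2]
    theta = np.radians(theta_deg)          # station co-latitude
    phi = np.radians(phi_deg)              # station longitude
    m1 = x_arcsec * ARCSEC_TO_RAD          # polar motion amplitudes
    m2 = y_arcsec * ARCSEC_TO_RAD          # (sign convention assumed)
    return -39e6 * np.sin(2*theta) * (m1*np.cos(phi) + m2*np.sin(phi))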
As an example of such control, it is of interest to maintain the hydrology loading when the objective of the research is to investigate the gravity response to rainfall, evapotranspiration and aquifer recharge, while all other contributions should still be removed. The limitation of the method is that misfitting in the models introduces undesirable fluctuations to the residuals. Figure <ref> (h) makes this evident by the oscillatory pattern with semi-diurnal frequencies. Spectral analysis identifies these oscillatory frequencies in the residuals as corresponding to the tidal modes M2, S2 and K2. Although not adopted in this study, the recently developed software mGlobe <cit.> performs similar operations with little more input required from the user than the location of interest. §.§.§ Data-based modelling In the previous method, the theoretical prediction of the tidal mode amplitudes from solid Earth tides and ocean tidal loading was based on a tide generating potential model using only the location of the instrument and the initial time of the dataset as inputs. The tidal modes that remained in the residuals of Fig. <ref>(h), however, are strongly associated with these two physical origins, meaning that the theoretical model of these tidal components does not completely fit the observations at the station. A way of overcoming this is to analyse the regularities in the gravity data itself, extracting from the data the tidal coefficients suited to the station location. The usual approach is the classical harmonic or least-squares fitting, based on assigning amplitudes and phases to sinusoids of tabled tidal frequencies, in such a way that the sum of the squares of the residual values is minimised. The literature still lacks alternative data-based methods, such as the use of artificial neural networks, capable of predicting gravity tides with the same precision. For compatibility, the Tamura <cit.> tables are again adopted to provide the tidal frequencies for the least-squares fitting. Recent tools developed for oceanography, such as UTide <cit.>, have included a series of optimisations, being able to account for data gaps and long time-series without issues, and can be easily adapted to gravity data. The table of data-based tidal constituents is then used to reconstruct only the tidal part of the time-series, and the difference with the observation provides the residual. We have adopted for this analysis the software UTide with the option to use only the Ordinary Least-Squares method. Two issues arise with this method. The first, highlighted by Kantz and Schreiber <cit.>, is that misfitting can occur in the presence of non-white noise, and the assumed background (physical) noise profile of gravity time-series exhibits redness (i.e. it varies with frequency approximately as f^-2). The second issue consists in the fact that noise factors or other geophysical events of potential interest might have components near tidal frequencies, and the procedure might mistakenly consider these as part of the contribution of the tidal constituent, hence a type of overfitting. Developments such as ETERNA <cit.>, VAV <cit.>, and BAYTAP <cit.> deal with these aspects in a number of ways, including bandpassing around the tidal frequencies, i.e. effectively making a hybrid of the data-based and frequency filtering approaches, and the additional use of ARMA models to separate the noise and the periodic parts.
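For reference, the plain ordinary least-squares step common to all these tools can be sketched as below (Python/NumPy; the UTide implementation used here adds many refinements on top of this, such as confidence estimation and nodal corrections):

import numpy as np

def ls_tidal_fit(t, g, freqs):
    # t: times [s] of the valid samples only (gaps simply omitted)
    # g: gravity values [nm s^-2]; freqs: tabled tidal frequencies [Hz]
    cols = [np.ones_like(t)]
    for f in freqs:
        cols += [np.cos(2*np.pi*f*t), np.sin(2*np.pi*f*t)]
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, g, rcond=None)
    tide = A @ coef                      # reconstructed tidal part
    return g - tide, coef                # residual and coefficients

The amplitude of each constituent follows from the fitted cosine and sine coefficients (a, b) as sqrt(a^2 + b^2), and the phase as atan2(b, a).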
These dedicated implementations, however, are specifically designed for Earth tides and can require substantial modifications if applied to the removal of dominating signals of other geophysical nature, contrary to the more general methods we discuss here. A comparison of the performance of these three recent developments was done by Dierks and Neumeyer <cit.>.§ RESULTS AND DISCUSSIONS Once the residuals from each method have been obtained, it is of interest to observe whether the tides were eliminated by analysing the frequency spectra. Fig. <ref> shows the Lomb-Scargle power spectral density of the residuals for the BF1 instrument, in Schiltach/Black Forest, Germany. This type of power spectrum calculation is preferred over other types of periodogram for its direct applicability to data with large gaps <cit.>, such as the gravity time-series, and is calculated by P(f) = 1/(2σ^2) { [∑_i=1^N (x_i - x̅) cos(2π f(t_i-τ))]^2 / ∑_i=1^N cos^2(2π f(t_i-τ)) + [∑_i=1^N (x_i - x̅) sin(2π f(t_i-τ))]^2 / ∑_i=1^N sin^2(2π f(t_i-τ)) }. In the expression, x_i are the data points at times t_i, and the average and variance are given by x̅ and σ^2; the constant τ is only a time offset that ensures time-invariance during the computation. Figures <ref> (d) and (e) show that the diurnal, semidiurnal and terdiurnal tides are still present after filtering with the physical modelling and data-based modelling methods. The amplitudes of the semidiurnal tides are slightly larger in the physical modelling case, whereas the data-based modelling reveals larger terdiurnal constituents. However, in both cases the highest tidal peaks were reduced below the New Earth High Noise Model (NHNM) reference line, and some tidal constituents, especially the diurnal ones (around 1.16·10^-5 Hz), went also below the New Earth Low Noise Model (NLNM) <cit.>. These models provide a limit reference of the spectra of a non-seismic background noise, estimated empirically from the IRIS/IDA network of broadband seismometers (but a very seismically quiet site may, in special circumstances, have background noise below the NLNM). Both frequency filtering procedures (Figs. <ref> (b) and (c)) were able to eliminate the diurnal and semidiurnal tides. The effect of FFT filtering in deleting the tidal frequencies is evident in Fig. <ref> (b), with considerable frequency gaps where information is lost. The FIR filtering proved more adequate, since it was implemented to strongly damp the tidal frequencies instead of deleting them. However, due to limitations of design (constrained by the highest order possible to obtain and apply to the data), it produced artificial distortions in regions around 0.5·10^-5 Hz, 3.2·10^-5 Hz and 4.4·10^-5 Hz, while a terdiurnal tide (M3) remained present at 3.4·10^-5 Hz. The more precise modelling of the atmospheric contribution adopted in the physical and data-based modelling has significantly reduced the power of the residuals over the whole range of frequencies plotted. That is revealed by the drop in the base (noise) level in Fig. <ref> (d) and (e) compared to (b) and (c). Due to this, the quaterdiurnal tides, which typically are not observable because of their small amplitude compared to the background noise, exhibited visible peaks at 4.6·10^-5 Hz in Figs. <ref> (d) and (e).
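The same periodogram can be reproduced with standard tools, as in the following sketch (Python/SciPy; scipy.signal.lombscargle expects angular frequencies and returns an unnormalized output, so precentering and normalization are done explicitly, and the scanned frequency grid is an illustrative choice):

import numpy as np
from scipy.signal import lombscargle

def lomb_psd(t, g, freqs_hz):
    # t: times [s] of valid samples only; g: residuals [nm s^-2]
    x = g - g.mean()                      # precenter, as in the equation
    w = 2*np.pi*np.asarray(freqs_hz)      # angular frequencies
    pgram = lombscargle(t, x, w)          # unnormalized periodogram
    return pgram / x.var()                # normalization matching P(f)

# e.g. scanning up to 5 cycles per day around the tidal bands:
# f = np.linspace(1e-6, 5.8e-5, 4000); P = lomb_psd(t, g, f)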
The spectral results for the other stations are similar, with a few specificities relating to site conditions; the plots are available in the Supplementary Material. As the power spectrum indicates that tidal components remain in the residuals, the amplitude levels of the tidal constituents in the residuals were calculated from classical harmonic analysis, with the main observed modes shown in Table <ref>. For the Fourier filtering method (FFT), the results confirm that the main peaks were largely eliminated, with the largest remaining modes being consistent with noise. The finite-impulse response (FIR) filtering, though, consistently was not able to filter the terdiurnal tides, with the main mode (M3) appearing to even have a small gain, which reveals an issue with the filter design. The physical modelling (PM) was able to eliminate the terdiurnal components, but noticeable amplitudes remained in the important diurnal (K1, O1, S1) and semidiurnal (M2, S2, K2) modes, despite reductions above 90% in the levels of the greatest peaks. Except for rare occasions, possibly related to instrument site conditions, the least-squares data-based modelling (LS) was able to reduce the main peaks to levels below 2 nm s^-2, with the largest peak typically not being the original largest mode but a neighbour. A further inquiry is whether the energy in the observed peak components changes in time or remains constant. If it changes, the observed peak in the spectrum might be the result of a temporary effect of non-tidal origin. In this case, the spectrogram would reveal the energy release to be time-confined. The spectrogram of the residuals (Fig. <ref>) shows, however, uniformity in these frequency contributions along time, as indicated by the arrows (vertical dark regions are due to time-series gaps). Constant lines at specified frequencies (i.e. horizontal lines crossing the plot) are indications of a particular oscillatory mode being present at all times, that is, a tide. In a pure gravity residual, these should not appear; instead, a more diffuse structure should be revealed when the time-series is observed as a whole. Reinforcing the argument that tides remain present in the residuals, the cross-correlation between the residuals and the theoretical tides is particularly high at all time lags for the physical modelling residuals (Fig. <ref>(d)). However, the residuals from the other methods had low correlation with the theoretical tides [Fig. <ref> (b), (c) and (e)], below the comparative margins from a red-noise model for gravity residuals. These correlation results reinforce the observation from Table <ref> that the physical modelling residuals retain significant components of the main tides, whereas the residuals from the other methods remove the main tides but maintain the smaller ones. The data-based method produced the best results for tidal filtering without cancelling other frequency information, with observed oscillations in the residual time-series typically lower than 100 nm s^-2 in daily amplitude in the time domain. Checking for consistency between the residuals from the different methods, it is observed that the correlation between the FFT and the FIR residuals is low for all time-series (observed absolute value of the Pearson coefficient below 0.03 in all time-series), while it is typically higher between the residuals from physical and data-based modelling (Pearson coefficient above 0.1 for most time-series, reaching up to 0.32 for Apache Point).
Exceptions are the time-series from Matsushiro (MA), Japan, and two instruments in Sutherland (SU), South Africa, which also showed low correlations between the residuals from the modelling methods. For MA, the local seasonal effects, particularly from atmospheric and ocean events, provide a difficult scenario for the physical modelling, specifically in fitting the diurnal tidal components particular to this station, while the data-based residuals can better adjust to the observations, leading to a lower correlation (p=-0.01). The cases of SU2 and SU3 (p=0.03 and 0.02, respectively) are being investigated, but a probable cause is likewise the fit of the theoretical models to the local conditions. The higher correlation at the other stations (p≥0.11) indicates that the methods of physical and data-based modelling might have a tendency to converge to similar results, as compared to frequency filtering, which had low correlation in all cases (p<0.03). The presence of the terdiurnal tide in the FIR residuals is a factor contributing to this result. §.§ Tidal removal and analysis of the system dynamics The removal of tides employing linear methods is an approximation, since nonlinearities are also present <cit.>. It is important then to check for artificial changes in the overall system dynamics after applying these techniques. One fast approach is to observe the state-space plot (x_1;x_2;...;x_m)=(g(t);g(t+τ);...;g(t+(m-1)τ)) and compare it with known features. For this, it is required to select an optimal time-delay τ_opt and a dimension m that permit maximal unfolding of the dynamical structure. This optimal time-delay is obtained as the time-lag for which the delayed average mutual information reaches its first minimum <cit.>, in the present case between 200 and 233 samples (3.3–3.9 h). We will adopt τ_opt=216 samples (3.6 h) as a compromise to enable the comparison among stations, without significant disruption to the observation of the state-space plots. In order to minimize numerical errors, we choose the embedding dimension m to be the minimum number of coordinates necessary to completely describe the attractor from the data. This minimal value is calculated by the method of false nearest neighbours <cit.>, i.e., by producing state-space plots with increasing embedding dimension m'=2,3,4,... and calculating the percentage of false nearest neighbours of each point. When this percentage drops to zero, the state-plot dimension is the minimum embedding dimension (m=m'). The value observed for the gravity time-series from all stations is m=4. Comparatively, this value sits between what is expected from a periodic and from a stochastic system. The value obtained of m=4 for the gravity system is also compatible with the analogous observation of m=4–6 for shallow-water ocean levels <cit.>. For visualization, a surface of section is arbitrarily selected as the hyperplane x_4=⟨ g(t+3τ)⟩, and a 3-dimensional map is generated. This map is constituted by the locations where the trajectory crosses the surface of section, and, additionally, we have colour-coded the direction of crossing. Adopting the same parameters as for the original gravity series, the residuals were then examined. The case for the Apache Point time-series is shown in Fig. <ref>. The same result is observed at the other stations. The original gravity series has a helicoidal structure on the map, with points uniformly distributed along its shape and a clear spatial division between the different directions of crossing (Fig. <ref>(a)).
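The reconstruction and the crossing extraction producing these maps can be sketched compactly (a Python/NumPy sketch, with the section level and parameters as used in the text):

import numpy as np

def delay_embed(g, tau, m):
    # rows are the delay vectors (g(t), g(t+tau), ..., g(t+(m-1)tau))
    n = len(g) - (m - 1)*tau
    return np.column_stack([g[i*tau:i*tau + n] for i in range(m)])

def surface_of_section(X, level):
    # crossings of the hyperplane x_m = level, with crossing direction
    s = X[:, -1] - level
    idx = np.where(s[:-1]*s[1:] < 0)[0]       # sign changes -> crossings
    pts = X[idx, :-1]                          # remaining m-1 coordinates
    direction = np.sign(s[idx + 1] - s[idx])   # up- or down-going
    return pts, direction

# e.g. X = delay_embed(g, tau=216, m=4)
#      pts, d = surface_of_section(X, X[:, 3].mean())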
As a reference, a sine wave (in principle a 2-dimensional system) would present a circular shape only when the chosen section matches the plane where all the points lie; if this choice is not optimal, only two points (one for each direction of crossing) appear. An analogous effect is observed for the signal composed of 1200 sinusoids (Fig. <ref>(f)), where the distribution of points is approximately uniform in a stretched region and the spatial separation of the different directions of crossing is also observed. The opposite case, the red-noise structure (Fig. <ref>(g)), is, as expected, not uniform, and no spatial separation of crossing directions is present. Regarding the residuals from the different methods, the physical modelling (Fig. <ref>(d)) also shows a helicoidal structure, although in a squeezed and modified form compared with the original time-series, and the separation of crossing directions. The helicoidal features of the data-based modelling residuals are less evident, with a pattern approaching red noise (Fig. <ref>(e)). The FIR residuals (Fig. <ref>(c)) show a torus instead, and the FFT residuals (Fig. <ref>(b)) show isolated points, both without spatial distinction of crossing directions, meaning that the residuals from these two methods are fundamentally different from the dynamics of the original system. The difference in shape between the original gravity time-series and the time-series of 1200 sinusoids generated from the Tamura tidal table (Fig. <ref>(f)) suggests that the underlying dynamics of the geophysical system is more complex. The standard measure to determine whether a system is predictable or chaotic is the largest Lyapunov exponent (λ_1), defined by the exponential rate of increase of the distance between initial neighbours in the embedded space. As a general rule, a very large positive Lyapunov exponent indicates noise (stochasticity), a finite positive value is an indication of deterministic chaos, zero indicates limit-cycle stability (e.g. a sinusoid), and a negative value indicates point stability (convergence of signals). We have implemented an algorithm based on the Rosenstein <cit.> method for the calculation of λ_1 by analysing how the nearest neighbour of a base point in the embedded space diverges after times τ,2τ,...,iτ. The optimal delay of complete unfolding, τ_opt=216 minutes, was too large, not producing exponential divergence/convergence, hence the embedding delay was adjusted empirically to 30 minutes for the Lyapunov exponent analysis. The larger delay likely reflects the low-frequency tidal components, but it is not appropriate to capture the nonlinear features of the time-series. The average over 2000 randomly chosen initial base points was employed for improved statistics. The results obtained for the original time-series consisted of positive largest Lyapunov exponents (Table <ref>). These are indicators of a chaotic nature of the signal. Additionally, the values suggest that the Earth responds to the tides by inheriting a small sensitivity to the initial conditions, thus enhancing the oscillations promoted by tides instead of damping them. The numbers obtained for the gravity signals are consistent with observations of another time-series with tidal effects, the shallow-water ocean levels <cit.>, of λ_1=0.57–4.54 bits h^-1. Applying the same procedure to the residuals, the Lyapunov exponent values decreased slightly for each station, but the sensitivity to the initial conditions is maintained (Table <ref>).
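The Rosenstein-style estimation just described can be sketched as follows (Python/NumPy; the Theiler exclusion window and the fit over the full span, instead of only the initial linear region, are simplifying assumptions of this sketch):

import numpy as np

def largest_lyapunov(X, dt, t_max, n_base=2000, theiler=60, seed=0):
    # X: embedded trajectory; dt: sampling interval [h];
    # t_max: number of steps followed; returns lambda_1 in bits h^-1.
    rng = np.random.default_rng(seed)
    n = len(X) - t_max
    base = rng.choice(n, size=min(n_base, n), replace=False)
    logdiv = np.zeros(t_max)
    for i in base:
        d = np.linalg.norm(X[:n] - X[i], axis=1)
        d[max(0, i - theiler):i + theiler + 1] = np.inf  # Theiler window
        j = int(np.argmin(d))                            # nearest neighbour
        sep = np.linalg.norm(X[i:i + t_max] - X[j:j + t_max], axis=1)
        logdiv += np.log(np.maximum(sep, 1e-12))
    logdiv /= len(base)
    slope = np.polyfit(np.arange(t_max)*dt, logdiv, 1)[0]
    return slope / np.log(2)   # convert nats to bits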
As the largest Lyapunov exponent relates to the entropy of the signal, this reduction indicates that the residuals are less entropic than the original signal, as expected. The physical modelling residuals presented Lyapunov exponents closer to those of the original time-series, while the other methods presented larger reductions but also greater differences between the stations (Table <ref>). It is not possible to unambiguously infer that the larger reductions are due to the removal of tides rather than of other components which might be of geophysical interest. §.§ Example of application of the residuals The residuals from the physical and data-based modelling still appear to present some oscillatory behaviour in the time domain; however, their amplitudes have been reduced considerably from the 2000 nm s^-2 daily levels to the 100 nm s^-2 and 10–50 nm s^-2 daily levels, respectively. This reduction enables, for example, the observation of hydrology-induced gravity variations in the time-series. This has been the object of a previous study at the MA station (Matsushiro, Japan) by Imanishi et al. <cit.>, who developed a model predicting a sharp drop of the gravity measurement in the order of tens of nm s^-2 during strong rainfall events, and a slow increase of gravity in the subsequent dry weeks, at rates associated with the local evapotranspiration and water infiltration phenomena. Figure <ref> reveals that this behaviour is present in the residuals from the data-based modelling during the summer of 2002, where gravity variations in the order of 60 nm s^-2 are associated with the intense rainfall in the first weeks of July and the relatively drier weeks in the period afterwards. These residuals presented daily oscillations around 20 nm s^-2 in this period, which allowed for the observation of the distinct features associated with the hydrological phenomena. The same could not be verified in the time-series of the residuals from physical modelling due to the larger-amplitude oscillations still present (100 nm s^-2), masking the event. Co-seismic and post-seismic gravity changes still could not be observed at this resolution, yet the sensitivity level required (0.1–50 nm s^-2) is very close.§ CONCLUSIONS This work has presented three conceptually different approaches to filter tides in a general geophysical time-series, and classical implementations were applied to data from a network of superconducting gravimeters. These data sets in particular exhibit oscillations with amplitudes in the order of 2000 nm s^-2 of tidal origin, while, comparatively, the sensitivity of the instruments is of the order of 0.1–1 nm s^-2 and the phenomena of interest in current frontier geophysical research are of the order of 0.1–100 nm s^-2. The three methods adopted here for removing the tides were frequency filtering, based on the use of an FFT or FIR filter to delete the undesired signal; physical modelling, where the undesired tidal signal was modelled from theoretical predictions and subtracted from the observation; and data-based modelling, where the parameters of the tides were obtained from the signal itself via least squares, and the reconstructed undesired signal was subtracted from the observation. The frequency filtering approach artificially distorted the signal (whether implemented in the frequency domain as FFT filtering or in the time domain as a FIR filter), and both the physical and data-based modelling methods, although not able to completely eliminate oscillations in the time-series, could reduce them significantly.
In the time domain, the frequency filtering residuals exhibited typically periodic/quasi-periodic daily amplitudes of the order of 10 nm s^-2 with FFT frequency filtering and 50 nm s^-2 with FIR frequency filtering. However, the frequency spectrum of the signal exhibited significant artificially induced changes, further confirmed by the change in the dynamics of the system. In comparison, the procedures of physical modelling and least-squares data-based modelling generated residuals with periodic/quasi-periodic daily amplitudes in the time-series typically of the order of 100 nm s^-2 and 10–50 nm s^-2, respectively, without significant disruption to the frequency domain (except for the reduction of the tidal peaks). Our dynamical analysis showed that the original system exhibits a positive maximal Lyapunov exponent, an indicator of chaos, and this is preserved in the residuals. The data-based residuals reached enough sensitivity to monitor hydrology-related gravity changes at the MA station (Matsushiro, Japan) of the order of 60 nm s^-2, and could also be applied to the observation of any other phenomena above this level.§ SUPPLEMENTARY MATERIALS AND DATA AVAILABILITYSee the Supplementary Materials for the power spectral density plots of all the gravity residuals obtained from the eight time-series studied. IGETS network operators and GFZ-Potsdam provided the raw gravity data at http://isdc.gfz-potsdam.de/igets-data-base/.§ ACKNOWLEDGMENTSWe thank the participants of the 35th General Assembly of the European Seismological Commission for comments on preliminary results. The authors are grateful to all IGETS contributors, particularly to the station operators and to ISDC/GFZ-Potsdam for providing the original gravity data used in this study. We also thank the developers of ATLANTIDA3.1 and UTide. Part of this work was performed using the ICSMB High Performance Computing Cluster, University of Aberdeen. We also thank M. Thiel and A. Moura for reviewing a preliminary version and making comments on the methods section, and M.A. Araújo for comments on Lyapunov exponents.Funding: A. Valencio is supported by CNPq, Brazil [206246/2014-5]; and received a travel grant from the School of Natural and Computing Sciences, University of Aberdeen [PO2073498], for a presentation including preliminary results.Sato2006 T. Sato, J. Okuno, J. Hinderer, D.S. MacMillan, H.P. Plag, O. Francis, R. Falk, and Y. Fukuda. A geophysical interpretation of the secular displacement and gravity rates observed at Ny-Ålesund, Svalbard in the Arctic - Effects of post-glacial rebound and present-day ice melting. Geophysical Journal International, 165(3):729–743, 2006.Imanishi2006 Y. Imanishi, K. Kokubo, and H. Tatehata. Effect of underground water on gravity observation at Matsushiro, Japan. Journal of Geodynamics, 41(1-3):221–226, 2006.VanCamp2016 M. Van Camp, O. de Viron, G. Pajot-Métivier, F. Casenave, A. Watlet, A. Dassargues, and M. Vanclooster. Direct measurement of evapotranspiration from a forest using a superconducting gravimeter. Geophysical Research Letters, 43(19):10225–10231, 2016.Imanishi2004 Y. Imanishi. A Network of Superconducting Gravimeters Detects Submicrogal Coseismic Gravity Changes. Science, 306(5695):476–478, 2004.Soldati1998 G. Soldati, A. Piersanti, and E. Boschi. Global postseismic gravity changes of a viscoelastic Earth. Journal of Geophysical Research, 103(12):29867–29885, 1998.Montagner2016 J.P. Montagner, K. Juhel, M. Barsuglia, J.P. Ampuero, E. Chassande-Mottin, J. Harms, B. Whiting, P. Bernard, E. Clévédé, and P.
Lognonné. Prompt gravity signal induced by the 2011 Tohoku-Oki earthquake. Nature Communications, 7:13349, 2016.Goodkind1999 J.M. Goodkind. The superconducting gravimeter. Review of Scientific Instruments, 70(11):4131–4152, 1999.Crossley2009 D. Crossley and J. Hinderer. A review of the GGP network and scientific challenges. Journal of Geodynamics, 48(3-5):299–304, 2009.Darwin1907 G.H. Darwin. Scientific Papers: Sir George Howard Darwin, v.1. Cambridge University Press, Cambridge, 1907.Merrian1992 J.B. Merriam. Atmospheric pressure and gravity. Geophysical Journal International, 109:488–500, 1992.Hinderer2015 J. Hinderer, D. Crossley, and R.J. Warburton. Superconducting Gravimetry. In Treatise on Geophysics, pages 59–115, 2015.Tamura1987 Y. Tamura. A Harmonic Development of the Tide-Generating Potential. Bulletin d'Informations Marées Terrestres, 99:6813–6855, 1987.Zetler1964 B.D. Zetler. The use of power spectrum analysis for Earth tides. Bulletin d'Informations Marées Terrestres, 33:1157–1164, 1964.Walters1981 R.A. Walters and C. Heston. Removing tidal period variations from time-series data using low-pass digital filters. Journal of Physical Oceanography, 12:112–115, 1981.Agnew2007 D.C. Agnew. Earth Tides. In T.A. Herring, editor, Treatise on Geophysics and Geodesy, pages 163–195. Elsevier, New York, 2007.Farrell1973 W.E. Farrell. Earth Tides, Ocean Tides and Tidal Loading. Philosophical Transactions of the Royal Society of London A, 274(1239):253–259, 1973.Farrell1972 W.E. Farrell. Deformation of the Earth by surface loads. Reviews of Geophysics and Space Physics, 10(3):761–797, 1972.Spiridonov2015 E. Spiridonov, O. Vinogradova, E. Boyarskiy, and L. Afanasyeva. Atlantida3.1_2014 for Windows: a software for tidal prediction. Bulletin d'Informations Marées Terrestres, 149:12062–12081, 2015.Wahr1985a J.M. Wahr. Deformation induced by polar motion. Journal of Geophysical Research, 90(B11):9363–9368, 1985.Crossley2013 D. Crossley, J. Hinderer, and U. Riccardi. The measurement of surface gravity. Reports on Progress in Physics, 76(4):046101, 2013.Boy2006 J.P. Boy and J. Hinderer. Study of the seasonal gravity signal in superconducting gravimeter data. Journal of Geodynamics, 41(1):227–233, 2006.Mikolaj2016 M. Mikolaj, B. Meurers, and A. Günter. Modelling of global mass effects in hydrology, atmosphere and oceans on surface gravity. Computers and Geosciences, 93:12–20, 2016.Codiga2011 D.L. Codiga. Unified tidal analysis and prediction using the UTide Matlab functions. Technical report, Graduate School of Oceanography, University of Rhode Island, Narragansett, RI, 2011.Kantz1997 H. Kantz and T. Schreiber. Nonlinear Time Series Analysis. Cambridge University Press, Cambridge, 1997.Wenzel1996 H.G. Wenzel. The nanogal software: Earth tide data processing package ETERNA 3.30. Bulletin d'Informations Marées Terrestres, 124:9425–9439, 1996.Venedikov2001 A. Venedikov, J. Arnoso, and R. Vieira. Program VAV-2000 for Tidal Analysis of Unevenly Spaced Data with Irregular Drift and Colored Noise. Journal of the Geodetic Society of Japan, 47(1):281–286, 2001.Tamura2008 Y. Tamura and D.C. Agnew. Baytap08 User's Manual. Technical report, Scripps Institution of Oceanography, UC San Diego, La Jolla, CA, 2008.Dierks2002 O. Dierks and J. Neumeyer. Comparison of Earth Tides Analysis Programs. Bulletin d'Informations Marées Terrestres, 135:10669–10688, 2002.Scragle1982 J.D. Scargle. Studies in astronomical time series analysis. II. Statistical aspects of spectral analysis of unevenly spaced data.
The Astrophysical Journal, 263:835–853, 1982.Peterson1993 J. Peterson. Observations and modelling of background seismic noise. Technical report, USGS, Albuquerque, NM, 1993.Munk1966 W.H. Munk and D.E. Cartwright. Tidal Spectroscopy and Prediction. Philosophical Transactions of the Royal Society of London A, 259(1105):533–581, 1966.Abarbanel1996 H.D.I. Abarbanel. Analysis of Observed Chaotic Data. Springer, New York, 1996.Frison1999 T.W. Frison, H.D.I. Abarbanel, M.D. Earle, J.R. Schultz, and W.D. Scherer. Chaos and Predictability in Ocean Water Levels. Journal of Geophysical Research, 104(C4):7935–7951, 1999.Rosenstein1993 M.T. Rosenstein, J.J. Collins, and C.J. De Luca. A practical method for calculating largest Lyapunov exponents from small data sets. Physica D, 65(1-2):117–134, 1993.Imanishi2013 Y. Imanishi, K. Nawa, and H. Takayama. Local hydrological processes in a fractured bedrock and the short-term effect on gravity at Matsushiro, Japan. Journal of Geodynamics, 63:62–68, 2013.§ SUPPLEMENTARY MATERIALSIn the Results section, Fig. 4 provided the power spectral density of the original signal and of the gravity residuals obtained from the different methods for the BF1 station, Schiltach/Black Forest, Germany. Figures (<ref>–<ref>) here in the Supplementary Material present the resulting plots for the same analysis at the remaining stations.Comparing the procedures for tidal removal, the distortion of the spectrum by the frequency filtering procedures should be noted in all cases. For FFT filtering, the frequency gaps related to the removed tides are significant, implying considerable bandwidths unavailable for geophysical analysis and an artificial loss of information for physical applications. The FIR filtering maintained, for most stations, the terdiurnal tides (around 3.5·10^-5 Hz), and the overall frequency spectrum usually presents distortions, most evident around 0.5·10^-5 Hz and 4.4·10^-5 Hz. The considerable peaks in the diurnal (around 1.16·10^-5 Hz) and especially semidiurnal (around 2.31·10^-5 Hz) bands in the residuals from physical modelling indicate present-day challenges in the theoretical description of the Earth system. These residuals can vary considerably with location, depending on how well the theoretical model reflects the local environment. Particular challenges are shown at the SU stations (Sutherland, South Africa). The data-based residuals exhibit a greater reduction of the tidal amplitudes compared to physical modelling, although tides remain present. The background noise level is lowered in amplitude by the modelling procedures compared to the filtering methods. As a result, it is possible to observe the quaterdiurnal tides (around 4.63·10^-5 Hz) at most stations using the modelling methods.
http://arxiv.org/abs/1702.08363v2
{ "authors": [ "Arthur Valencio", "Celso Grebogi", "Murilo S. Baptista" ], "categories": [ "physics.geo-ph" ], "primary_category": "physics.geo-ph", "published": "20170227164136", "title": "Methods for removal of unwanted signals from gravity time-series: comparison using linear techniques complemented with analysis of system dynamics" }
On the Origin of Deep Learning Haohan Wang haohanw@cs.cmu.edu Bhiksha Raj bhiksha@cs.cmu.edu Language Technologies Institute, School of Computer Science, Carnegie Mellon University December 30, 2023 =================================================================================================================================================================================================================================This paper is a review of the evolutionary history of deep learning models. It covers the span from the genesis of neural networks, when associationism modeling of the brain was studied, to the models that dominate the last decade of research in deep learning, like convolutional neural networks, deep belief networks, and recurrent neural networks. In addition to a review of these models, this paper primarily focuses on the precedents of the models above, examining how the initial ideas were assembled to construct the early models and how these preliminary models were developed into their current forms. Many of these evolutionary paths last more than half a century and have a diversity of directions. For example, CNN is built on prior knowledge of the biological vision system; DBN evolved from a trade-off of the modeling power and computational complexity of graphical models; and many of today's models are neural counterparts of classical linear models. This paper reviews these evolutionary paths, offers a concise thought flow of how these models were developed, and aims to provide a thorough background for deep learning. More importantly, along the path, this paper summarizes the gist behind these milestones and proposes many directions to guide the future research of deep learning. § INTRODUCTION Deep learning has dramatically improved the state of the art in many different artificial intelligence tasks, like object detection, speech recognition and machine translation <cit.>. Its deep architecture grants deep learning the possibility of solving many more complicated AI tasks <cit.>. As a result, researchers are extending deep learning to a variety of different modern domains and tasks in addition to traditional tasks like object detection, face recognition, or language models; for example, <cit.> uses the recurrent neural network to denoise speech signals, <cit.> uses stacked autoencoders to discover clustering patterns of gene expressions, <cit.> uses a neural model to generate images with different styles, and <cit.> uses deep learning to allow sentiment analysis from multiple modalities simultaneously, etc. This period is the era that witnesses the blooming of deep learning research. However, to fundamentally push the deep learning research frontier forward, one needs to thoroughly understand what has been attempted in history and why current models exist in their present forms. This paper summarizes the evolutionary history of several different deep learning models and explains the main ideas behind these models and their relationships to their ancestors. To understand the past work is not trivial, as deep learning has evolved over a long history, as shown in Table <ref>. Therefore, this paper aims to offer the readers a walk-through of the major milestones of deep learning research. We will cover the milestones as shown in Table <ref>, as well as many additional works. We will split the story into different sections for clearness of presentation. This paper starts the discussion from research on human brain modeling.
Although the success of deep learning nowadays is not necessarily due to its resemblance to the human brain (it owes more to its deep architecture), the ambition to build a system that simulates the brain indeed thrust forward the initial development of neural networks. Therefore, the next section begins with connectionism and naturally leads to the age when the shallow neural network matured. With the maturity of neural networks, this paper continues to briefly discuss the necessity of extending shallow neural networks into deeper ones, as well as the promises deep neural networks make and the challenges deep architecture introduces. With the establishment of the deep neural network, this paper diverges into three different popular deep learning topics. Specifically, in Section <ref>, this paper elaborates how Deep Belief Nets and their construction component, the Restricted Boltzmann Machine, evolved as a trade-off of modeling power and computational load. In Section <ref>, this paper focuses on the development history of the Convolutional Neural Network, featuring the prominent steps along the ladder of the ImageNet competition. In Section <ref>, this paper discusses the development of Recurrent Neural Networks, their successors like LSTM and attention models, and the successes they achieved. While this paper primarily discusses deep learning models, optimization of deep architectures is an inevitable topic in this field. Section <ref> is devoted to a brief summary of optimization techniques, including advanced gradient methods, Dropout, Batch Normalization, etc. This paper could be read as a complement to <cit.>. Schmidhuber's paper aims to assign credit to all those who contributed to the present state of the art, so his paper focuses on every single incremental work along the path and therefore cannot elaborate well enough on each of them. On the other hand, our paper is aimed at providing the background for readers to understand how these models were developed. Therefore, we emphasize the milestones and elaborate on those ideas to help build associations between them. In addition to the paths of classical deep learning models in <cit.>, we also discuss the recent deep learning work that builds on classical linear models. Another article that readers could consult as a complement is <cit.>, where the authors conducted extensive interviews with well-known scientific leaders in the 90s on the topic of the neural networks' history.§ FROM ARISTOTLE TO MODERN ARTIFICIAL NEURAL NETWORKSThe study of deep learning and artificial neural networks originates from our ambition to build a computer system simulating the human brain. Building such a system requires an understanding of the functionality of our cognitive system. Therefore, this paper traces all the way back to the origins of attempts to understand the brain and starts the discussion with Aristotle's Associationism from around 300 B.C.§.§ Associationism “When, therefore, we accomplish an act of reminiscence, we pass through a certain series of precursive movements, until we arrive at a movement on which the one we are in quest of is habitually consequent. Hence, too, it is that we hunt through the mental train, excogitating from the present or some other, and from similar or contrary or coadjacent. Through this process reminiscence takes place.
For the movements are, in these cases, sometimes at the same time, sometimes parts of the same whole, so that the subsequent movement is already more than half accomplished."This remarkable paragraph of Aristotle is seen as the starting point of Associationism <cit.>. Associationism is a theory stating that the mind is a set of conceptual elements that are organized as associations between these elements. Inspired by Plato, Aristotle examined the processes of remembrance and recall and came up with four laws of association <cit.>.* Contiguity: Things or events with spatial or temporal proximity tend to be associated in the mind. * Frequency: The number of occurrences of two events is proportional to the strength of association between these two events. * Similarity: Thought of one event tends to trigger the thought of a similar event. * Contrast: Thought of one event tends to trigger the thought of an opposite event.Back then, Aristotle described the implementation of these laws in our mind as common sense. For example, the feel, the smell, or the taste of an apple should naturally lead to the concept of an apple, as common sense. Nowadays, it is surprising to see that these laws, proposed more than 2000 years ago, still serve as the fundamental assumptions of machine learning methods. For example, samples that are near each other (under a defined distance) are clustered into one group; explanatory variables that frequently occur with response variables draw more attention from the model; similar/dissimilar data are usually represented with more similar/dissimilar embeddings in latent space. Contemporaneously, similar laws were also proposed by Zeno of Citium, Epicurus and St Augustine of Hippo. The theory of associationism was later strengthened by a variety of philosophers and psychologists. Thomas Hobbes (1588-1679) stated that complex experiences were associations of simple experiences, which were associations of sensations. He also believed that association exists by means of coherence, with frequency as its strength factor. Meanwhile, John Locke (1632-1704) introduced the concept of “association of ideas”. He still separated the concepts of ideas of sensation and ideas of reflection, and he stated that complex ideas could be derived from a combination of these two simple ideas. David Hume (1711-1776) later reduced Aristotle's four laws into three: resemblance (similarity), contiguity, and cause and effect. He believed that whatever coherence the world seemed to have was a matter of these three laws. Dugald Stewart (1753-1828) extended these three laws with several other principles, among them an obvious one: accidental coincidence in the sounds of words. Thomas Reid (1710-1796) believed that no original quality of mind, other than habit, was required to explain the spontaneous recurrence of thinking. James Mill (1773-1836) emphasized the law of frequency as the key to learning, which is very similar to later stages of research.David Hartley (1705-1757), as a physician, is remarkably regarded as the one who made associationism popular <cit.>.In addition to the existing laws, he proposed his argument that memory could be conceived as smaller-scale vibrations in the same regions of the brain as the original sensory experience.
These vibrations can link up to represent complex ideas and therefore act as a material basis for the stream of consciousness.This idea potentially inspired the Hebbian learning rule, which will be discussed later in this paper as laying the foundation of neural networks.§.§ Bain and Neural Groupings Besides David Hartley, Alexander Bain (1818-1903) also contributed to the fundamental ideas of the Hebbian Learning Rule <cit.>. In that book, <cit.> related the processes of associative memory to the distribution of activity of neural groupings (a term that he used to denote neural networks back then). He proposed a constructive mode of storage capable of assembling what was required, in contrast to the alternative traditional mode of storage with prestored memories. To further illustrate his ideas, Bain first described the computational flexibility that allows a neural grouping to function when multiple associations are to be stored. With a few hypotheses, Bain managed to describe a structure that highly resembles the neural networks of today: an individual cell summarizes the stimulation from other selected, linked cells within a grouping, as shown in Figure <ref>. The joint stimulation from a and b triggers X, stimulation from b and c triggers Y, and stimulation from a and c triggers Z. In his original illustration, a, b, c stand for stimulations and X, Y, Z are outcomes of cells. With the establishment of how this associative structure of neural groupings can function as memory, Bain proceeded to describe the construction of these structures. He followed the directions of associationism and stated that relevant impressions of neural groupings must be made in temporal contiguity for a period, either on one occasion or on repeated occasions. Further, Bain described the computational properties of neural groupings: connections are strengthened or weakened through experience via changes of the intervening cell-substance. Therefore, the induction of these circuits would be comparatively strong or weak. As we will see in the following section, Hebb's postulate highly resembles Bain's description, although nowadays we usually label this postulate as Hebb's, rather than Bain's, according to <cit.>. This omission of Bain's contribution may also be due to Bain's lack of confidence in his own theory: eventually, Bain was not convinced himself and doubted the practical value of neural groupings. §.§ Hebbian Learning RuleThe Hebbian Learning Rule is named after Donald O. Hebb (1904-1985), since it was introduced in his work The Organization of Behavior <cit.>. Hebb is also seen as the father of Neural Networks because of this work <cit.>. In 1949, Hebb stated the famous rule: “Cells that fire together, wire together", which emphasized the activation behavior of co-fired cells. More specifically, in his book, he stated that: “When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A’s efficiency, as one of the cells firing B, is increased.”This archaic paragraph can be re-written into modern machine learning language as the following: Δ w_i = η x_i y, where Δ w_i stands for the change of the synaptic weight (w_i) of Neuron i, of which the input signal is x_i,
y denotes the postsynaptic response, and η denotes the learning rate. In other words, the Hebbian Learning Rule states that the connection between two units should be strengthened as the frequency of co-occurrences of these two units increases. Although the Hebbian Learning Rule is seen as laying the foundation of neural networks, seen today, its drawbacks are obvious: as co-occurrences appear more, the weights of connections keep increasing, and the weight of a dominant signal will increase exponentially. This is known as the instability of the Hebbian Learning Rule <cit.>. Fortunately, these problems did not influence Hebb's identity as the father of neural networks.§.§ Oja's Rule and Principal Component AnalyzerErkki Oja extended the Hebbian Learning Rule to avoid the instability property, and he also showed that a neuron following this updating rule approximates the behavior of a Principal Component Analyzer (PCA) <cit.>. Long story short, Oja introduced a normalization term to rescue the Hebbian Learning Rule, and further he showed that his learning rule is simply an online update of a Principal Component Analyzer. We present the details of this argument in the following paragraphs.Starting from Equation <ref> and following the same notation, Oja showed: w_i^t+1 = w_i^t + η x_i y, where t denotes the iteration. A straightforward way to avoid the explosion of the weights is to apply normalization at the end of each iteration, yielding: w_i^t+1 = (w_i^t + η x_i y) / (∑_j=1^n (w_j^t + η x_j y)^2)^1/2, where n denotes the number of input connections of the neuron. The above equation can be further expanded into the following form: w_i^t+1 = w_i^t/Z + η (y x_i/Z - w_i^t y ∑_j=1^n x_j w_j^t/Z^3) + O(η^2), where Z=(∑_j=1^n (w_j^t)^2)^1/2. Further, two more assumptions are introduced: 1) η is small, therefore O(η^2) is approximately 0; 2) the weights are normalized, therefore Z=(∑_j=1^n (w_j^t)^2)^1/2=1. When these two assumptions are introduced back into the previous equation, Oja's rule is obtained as follows: w_i^t+1 = w_i^t + η y(x_i - y w_i^t). Oja took a step further to show that a neuron updated with this rule effectively performs Principal Component Analysis on the data. To show this, Oja first re-wrote Equation <ref> in the following form, with two additional assumptions <cit.>: d/dt w^t = C w^t - ((w^t)^T C w^t) w^t, where C is the covariance matrix of the input X. He then proceeded to show this property with many conclusions from another work of his <cit.> and linked back to PCA with the fact that the components from PCA are eigenvectors, and the first component is the eigenvector corresponding to the largest eigenvalue of the covariance matrix. Intuitively, we could interpret this property with a simpler explanation: the fixed points of this updating dynamics are eigenvectors of C, and the weight vector converges to the one corresponding to the largest eigenvalue, i.e., the first principal component.
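To make the contrast concrete, the following toy example (a Python/NumPy sketch with synthetic two-dimensional data; the learning rate, sample size and seed are illustrative choices) demonstrates both the instability of the plain Hebbian update and the convergence of Oja's rule to the first principal component:

import numpy as np

rng = np.random.default_rng(0)
# synthetic zero-mean data with a dominant direction along the first axis
X = rng.normal(size=(10000, 2)) * np.array([3.0, 1.0])

w_hebb = np.array([0.5, 0.5])
w_oja = np.array([0.5, 0.5])
eta = 0.001
for x in X:
    y = w_hebb @ x
    w_hebb = w_hebb + eta * y * x              # Hebbian rule: norm diverges
    y = w_oja @ x
    w_oja = w_oja + eta * y * (x - y * w_oja)  # Oja's rule: norm stays near 1

evals, evecs = np.linalg.eigh(np.cov(X.T))
pc1 = evecs[:, np.argmax(evals)]               # first principal component
print(np.linalg.norm(w_hebb))                  # very large: unbounded growth
print(np.linalg.norm(w_oja))                   # approximately 1
print(abs(w_oja @ pc1))                        # approximately 1: aligned with PC1

Running such a script shows the Hebbian weight norm exploding while Oja's weight vector settles on the leading eigenvector, which is exactly the property argued above.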
Now we proceed to visit the ideas on neural models.

§.§ MCP Neural Model

While Donald Hebb is seen as the father of neural networks, the first model of a neuron can be traced back to six years before the publication of Hebbian Learning Rule, when the neurophysiologist Warren McCulloch and the mathematician Walter Pitts speculated about the inner workings of neurons and modeled a primitive neural network with electrical circuits based on their findings <cit.>. Their model, known as the MCP neural model, is a linear step function applied to a weighted sum of the inputs, which can be described as:

y = 1, if ∑_i w_i x_i ≥ θ and z_j = 0, ∀ j
y = 0, otherwise

where y stands for the output, x_i stands for the input signals, w_i stands for the corresponding weights, and z_j stands for the inhibitory inputs; θ stands for the threshold. The function is designed so that the activity of any inhibitory input completely prevents excitation of the neuron at any time.

Despite the resemblance between the MCP Neural Model and the modern perceptron, they still differ distinctly in several aspects:

* The MCP Neural Model was initially built as electrical circuits. Later we will see that the study of neural networks has borrowed many ideas from the field of electrical circuits.
* The weights w_i of the MCP Neural Model are fixed, in contrast to the adjustable weights of the modern perceptron. All the weights must be assigned by manual calculation.
* The idea of inhibitory input is quite unconventional even seen today. It might be an idea worth further study in modern deep learning research.

§.§ Perceptron

Building on the success of the MCP Neural Model, Frank Rosenblatt gave substance to Hebbian Learning Rule with the introduction of perceptrons <cit.>. While theorists like Hebb were focusing on the biological system in the natural environment, Rosenblatt constructed an electronic device named the Perceptron, which was shown to be able to learn in accordance with associationism.

<cit.> introduced the perceptron in the context of the vision system, as shown in Figure <ref>(a). He introduced the rules of the organization of a perceptron as follows:

* Stimuli impinge on a retina of sensory units, which respond such that the pulse amplitude or frequency is proportional to the stimulus intensity.
* Impulses are transmitted to the Projection Area (A_I). This projection area is optional.
* Impulses are then transmitted to the Association Area through random connections. If the sum of impulse intensities is equal to or greater than the threshold (θ) of a unit, that unit fires.
* Response units work in the same fashion as the intermediate units.

Figure <ref>(a) illustrates his explanation of the perceptron. From left to right, the four units are the sensory unit, projection unit, association unit, and response unit, respectively. The projection unit receives information from the sensory unit and passes it on to the association unit; this unit is often omitted in other descriptions of similar models. With the omission of the projection unit, the structure resembles that of today's perceptron in a neural network (as shown in Figure <ref>(b)): sensory units collect data, association units sum these data with different weights and apply a non-linear transform to the thresholded sum, and then pass the results to response units. One distinction between the early-stage neuron models and modern perceptrons is the introduction of non-linear activation functions (we use the sigmoid function as an example in Figure <ref>(b)). This originates from the argument that the linear threshold function should be softened to simulate biological neural networks <cit.>, as well as from the computational consideration of replacing the step function with a continuous one <cit.>.

After Rosenblatt's introduction of the Perceptron, <cit.> introduced a follow-up model called ADALINE. However, the difference between Rosenblatt's Perceptron and ADALINE lies mainly in the learning algorithm; as the primary focus of this paper is neural network models, we skip further discussion of ADALINE.
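To ground the discussion, the following sketch (our own illustration; the learning rate, epoch count, and data encoding are arbitrary choices) trains a Rosenblatt-style thresholded unit with the classic perceptron update on simple logical operations. As the next subsection discusses, the same procedure succeeds on AND and OR but cannot succeed on XOR.

```python
import numpy as np

def train_perceptron(X, targets, eta=0.1, epochs=25):
    """Classic perceptron update: w += eta * (target - prediction) * x."""
    Xb = np.hstack([X, np.ones((len(X), 1))])  # absorb the threshold as a bias weight
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        for x, target in zip(Xb, targets):
            y = 1 if w @ x >= 0 else 0         # thresholded sum, as in the MCP unit
            w += eta * (target - y) * x
    return w

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
for name, t in [("AND", [0, 0, 0, 1]), ("OR", [0, 1, 1, 1]), ("XOR", [0, 1, 1, 0])]:
    w = train_perceptron(X, t)
    pred = [1 if w @ np.append(x, 1) >= 0 else 0 for x in X]
    print(name, "learned correctly:", pred == t)  # True, True, False
```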
§.§ Perceptron's Linear Representation Power

A perceptron is fundamentally a linear function of its input signals; therefore it is limited to representing linear decision boundaries: it can express logical operations like NOT, AND, or OR, but not XOR, which requires a more sophisticated decision boundary. This limitation was highlighted by <cit.>, who attacked the limitations of perceptrons by emphasizing that perceptrons cannot solve functions like XOR or NXOR. As a result, very little research was done in this area until about the 1980s.

To give a more concrete example, consider a linear perceptron with only two inputs, x_1 and x_2. The decision boundary w_1x_1 + w_2x_2 = θ then forms a line in a two-dimensional space: the choice of the threshold magnitude shifts the line, and the sign of the function picks one side of the line as the halfspace the function represents. The halfspace is shown in Figure <ref> (a). In Figure <ref> (b)-(d), we present two nodes a and b to denote the inputs, as well as a node for the situation when both of them trigger and a node for the situation when neither of them triggers. Figure <ref> (b) and Figure <ref> (c) show clearly that a linear perceptron can describe the AND and OR operations of these two inputs. However, as shown in Figure <ref> (d), the XOR operation can no longer be described by a single linear decision boundary.

In the next section, we will show that the representation ability is greatly enlarged when we put perceptrons together to form a neural network. However, when we keep stacking one neural network upon another to make a deep learning model, the representation power will not necessarily increase.

§ FROM MODERN NEURAL NETWORK TO THE ERA OF DEEP LEARNING

In this section, we will introduce some important properties of neural networks. These properties partially explain the popularity neural networks enjoy these days and also motivate the necessity of exploring deeper architectures. To be specific, we will discuss a set of universal approximation properties, each of which holds under its own conditions. Then, we will show that although a shallow neural network is a universal approximator, a deeper architecture can significantly reduce the required resources while retaining the representation power. Finally, we will also show some interesting properties of backpropagation discovered in the 1990s, which may inspire related research today.

§.§ Universal Approximation Property

The step from perceptrons to basic neural networks is simply placing the perceptrons together. By placing perceptrons side by side, we get a single one-layer neural network, and by stacking one one-layer neural network upon another, we get a multi-layer neural network, often known as a multi-layer perceptron (MLP) <cit.>. One remarkable property of neural networks, widely known as the universal approximation property, roughly states that an MLP can represent any function. We discuss this property in three different aspects:

* Boolean Approximation: an MLP of one hidden layer[Throughout this paper, we follow the most widely accepted naming convention that calls a two-layer neural network a one-hidden-layer neural network.] can represent any boolean function exactly.
* Continuous Approximation: an MLP of one hidden layer can approximate any bounded continuous function with arbitrary accuracy.
* Arbitrary Approximation: an MLP of two hidden layers can approximate any function with arbitrary accuracy.

We will discuss these three properties in detail in the following paragraphs. To suit different readers' interests, we first offer an intuitive explanation of each property and then offer the proofs.

§.§.§ Representation of any Boolean Functions

This approximation property is very straightforward. In the previous section we showed that every linear perceptron can perform either AND or OR. According to De Morgan's laws, every propositional formula can be converted into an equivalent Conjunctive Normal Form, which is an OR of multiple AND functions. Therefore, we simply rewrite the target Boolean function as an OR of multiple AND operations and design the network in such a way that the input layer performs all the AND operations and the hidden layer performs the single OR operation. The formal proof is not very different from this intuitive explanation; we skip it for simplicity.

§.§.§ Approximation of any Bounded Continuous Functions

Continuing from the linear representation power of the perceptron discussed previously, if we want to represent a more complex function, such as the one shown in Figure <ref> (a), we can use a set of linear perceptrons, each of them describing a halfspace. One of these perceptrons is shown in Figure <ref> (b); in this example we will need five of them. With these perceptrons, we can bound the target function out, as shown in Figure <ref> (c). The numbers shown in Figure <ref> (c) represent the number of halfspaces, described by the perceptrons, into which the corresponding region falls. As we can see, with an appropriate selection of the threshold (e.g., θ=5 in Figure <ref> (c)), we can bound the target function out. Therefore, we can describe any bounded continuous function with only one hidden layer, even one with a shape as complicated as that in Figure <ref> (d).

This property was first shown in <cit.> and <cit.>. To be specific, <cit.> showed that the set of functions of the following form:

f(x) = ∑_i ω_i σ (w_i^Tx+θ_i)

is dense in the space of continuous functions on which it is defined. In other words, for an arbitrary function g(x) in the same space and any ϵ > 0, there exists a function f(x) of this form such that

|f(x)-g(x)| < ϵ

for all x in the domain. In Equation <ref>, σ denotes the activation function (a squashing function back then), w_i denotes the weights of the input layer, and ω_i denotes the weights of the hidden layer. This conclusion was drawn with a proof by contradiction: assuming the closure of this set of functions is not the whole space leads, via the Hahn-Banach Theorem and the Riesz Representation Theorem, to a contradiction with the assumption that σ is an activation (squashing) function.

To this day, this property has drawn thousands of citations. Unfortunately, many of the later works cite it inappropriately <cit.>, because Equation <ref> is not the widely accepted form of a one-hidden-layer neural network: it delivers a linear output rather than a thresholded/squashed one. Ten years after this property was shown, <cit.> concluded this story by showing that the universal approximation property still holds when the final output is squashed. Note that this property was shown in a context where the activation functions are squashing functions. By definition, a squashing function σ: R → [0, 1] is a non-decreasing function with the properties lim_x →∞σ(x) = 1 and lim_x → -∞σ(x) = 0. Many activation functions in recent deep learning research do not fall into this category.
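A quick numerical illustration of the form in Equation <ref> (our own sketch: the hidden parameters w_i and θ_i are drawn at random and the output weights ω_i are fitted by least squares, which is an illustrative shortcut; Cybenko's result is an existence statement, not a fitting procedure):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = lambda z: 1.0 / (1.0 + np.exp(-z))       # a squashing function

# Target: a bounded continuous function g on [-3, 3].
x = np.linspace(-3, 3, 400)
g = np.sin(2 * x) + 0.5 * x

# f(x) = sum_i omega_i * sigma(w_i * x + theta_i), with 50 hidden units.
n_hidden = 50
w = rng.normal(scale=3.0, size=n_hidden)
theta = rng.uniform(-9, 9, size=n_hidden)
H = sigma(np.outer(x, w) + theta)                # hidden activations, shape (400, 50)

omega, *_ = np.linalg.lstsq(H, g, rcond=None)    # fit the output weights
f = H @ omega
print("max |f(x) - g(x)| on the grid:", np.max(np.abs(f - g)))
```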
§.§.§ Approximation of Arbitrary Functions

Before we move on to explain this property, we first need to show a major property regarding the combination of linear perceptrons into neural networks. Figure <ref> shows that as the number of linear perceptrons used to bound the target function increases, the area outside the polygon whose sum is close to the threshold shrinks. Following this trend, we can use a large number of perceptrons to bound a circle, and this can be achieved even without knowing the exact threshold, because the area close to the threshold shrinks to nothing. What is left outside the circle is, in fact, the area whose sum is N/2, where N is the number of perceptrons used. Therefore, a neural network with one hidden layer can represent a circle of arbitrary diameter.

Further, we introduce another hidden layer that is used to combine the outputs of many different circles. This newly added hidden layer only performs an OR operation. Figure <ref> shows an example in which the extra hidden layer is used to merge the circles from the previous layer, so that the neural network can approximate any function; the target function is not necessarily continuous. However, each circle requires a large number of neurons, and consequently the entire function requires even more. This property was shown in <cit.> and <cit.>, respectively.

Looking back at this property today, it is not arduous to build the connection between this property and Fourier series approximation, which, in informal words, states that every function curve can be decomposed into the sum of many simpler curves. With this linkage, showing this universal approximation property amounts to showing that a one-hidden-layer neural network can represent one simple surface, so that the second hidden layer can sum up these simple surfaces to approximate an arbitrary function. As we know, a one-hidden-layer neural network simply performs a thresholded sum operation, so the only step left is to show that the first hidden layer can represent a simple surface. To understand the "simple surface", with the linkage to the Fourier transform, one can imagine one cycle of a sinusoid in the one-dimensional case or a "bump" of a plane in the two-dimensional case.

In one dimension, to create a simple surface we only need two sigmoid functions appropriately placed, for example as follows:

f_1(x) = h / (1+e^-(x-t_1))
f_2(x) = h / (1+e^(x-t_2))

Then f_1(x)+f_2(x) creates a simple surface whose value is approximately 2h on t_1 ≤ x ≤ t_2 (and approximately h far outside this interval). This is easily generalized to the n-dimensional case, where we need 2n sigmoid functions (neurons) for each simple surface. Then, for each simple surface that contributes to the final function, one neuron is added to the second hidden layer. Therefore, regardless of the number of neurons needed, one never needs a third hidden layer to approximate any function. Similar to how the Gibbs phenomenon affects Fourier series approximation, this approximation cannot guarantee an exact representation.

The universal approximation properties show the great potential of shallow neural networks, but at the price of exponentially many neurons in those layers. One follow-up question is how to reduce the number of required neurons while maintaining the representation power. This question motivates people to proceed to deeper neural networks, even though shallow neural networks already have infinite modeling power.
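A quick numerical check of the two-sigmoid construction above (the values of h, t_1, and t_2 are arbitrary choices for illustration):

```python
import numpy as np

h, t1, t2 = 1.0, -4.0, 4.0
x = np.linspace(-12, 12, 2401)

f1 = h / (1 + np.exp(-(x - t1)))   # rises from 0 to h around x = t1
f2 = h / (1 + np.exp(x - t2))      # falls from h to 0 around x = t2
bump = f1 + f2                     # ~2h on [t1, t2], ~h far outside

print("plateau value near x = 0:   ", bump[np.abs(x) < 0.1].mean())  # close to 2h
print("baseline value near x = -11:", bump[x < -11].mean())          # close to h
```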
Another issue worth attention is that, although neural networks can approximate any function, it is not trivial to find the set of parameters that explains the data. In the next two sections, we will discuss these two questions respectively.

§.§ The Necessity of Depth

The universal approximation properties of shallow neural networks come at the price of exponentially many neurons and are therefore not realistic. The question of how to maintain this expressive power of the network while reducing the number of computation units has been asked for years. Intuitively, <cit.> suggested that it is natural to pursue deeper networks because 1) the human neural system is a deep architecture (as we will see with examples of the human visual cortex in Section <ref>) and 2) humans tend to represent concepts at one level of abstraction as the composition of concepts at lower levels.

Nowadays, the solution is to build deeper architectures, which is supported by the conclusion that a function representable by a k-layer neural network with polynomially many neurons may require exponentially many neurons to express if a (k-1)-layer structure is used. Theoretically, however, this conclusion is still being completed. The conclusion traces back three decades, to when <cit.> showed the limitations of shallow circuit functions. <cit.> later showed this property with parity circuits: "there are functions computable in polynomial size and depth k but requires exponential size when depth is restricted to k-1". He showed this property mainly by the application of De Morgan's law, which states that an AND of ORs can be rewritten as an OR of ANDs, and vice versa. He thus simplified a circuit in which ANDs and ORs appear one after the other by rewriting one layer of ANDs into ORs and merging this operation into its neighboring layer of ORs. By repeating this procedure, he was able to represent the same function with fewer layers but more computation.

Moving from circuits to neural networks, <cit.> compared deep and shallow sum-product neural networks. They showed that a function expressible with O(n) neurons on a network of depth k requires at least O(2^√(n)) and O((n-1)^k) neurons on a two-layer neural network. Further, <cit.> extended this study to general neural networks with the major activation functions, including tanh and sigmoid. They derived their conclusion with the concept of Betti numbers, using these numbers to describe the representation power of neural networks. They showed that for a shallow network the representation power can only grow polynomially with the number of neurons, while for a deep architecture the representation power can grow exponentially with the number of neurons. They also related their conclusion to the VC-dimension of neural networks, which is O(p^2) for tanh <cit.>, where p is the number of parameters. Recently, <cit.> presented a more thorough proof showing that the depth of a neural network is exponentially more valuable than its width, for a standard MLP with any of the popular activation functions. Their conclusion is drawn with only a few weak assumptions, which constrain the activation functions to be mildly increasing, measurable, and such that shallow neural networks can approximate any univariate Lipschitz function. Finally, we have a well-grounded theory supporting the claim that deeper networks are preferred over shallow ones. However, in reality, many problems arise if we keep increasing the number of layers.
Among them, the increased difficulty of learning proper parameters is probably the most prominent. In the next section, we discuss the main workhorse for searching the parameters of a neural network: backpropagation.

§.§ Backpropagation and Its Properties

Before we proceed, we need to clarify that the name backpropagation did not originally refer to an algorithm used to learn the parameters of a neural network; instead, it stood for a technique that helps to efficiently compute the gradient of the parameters when the gradient descent algorithm is applied to learn them <cit.>. Nowadays, however, it is widely used to refer to the gradient descent algorithm equipped with this technique. Compared to standard gradient descent, which updates all the parameters with respect to the error, backpropagation first propagates the error term at the output layer back to the layer at which the parameters are to be updated, and then uses standard gradient descent to update the parameters with respect to the propagated error. Intuitively, the derivation of backpropagation is about organizing the terms when the gradient is expressed via the chain rule. The derivation is neat but is skipped in this paper due to the extensive resources available <cit.>. Instead, we will discuss two interesting and seemingly contradictory properties of backpropagation.

§.§.§ Backpropagation Finds a Global Optimum for Linearly Separable Data

<cit.> studied the problem of local minima in backpropagation. Interestingly, at a time when the community believed that neural networks or deep learning approaches suffer from local optima, they proposed an architecture in which a global optimum is guaranteed. Only a few weak assumptions on the network are needed to reach a global optimum, including:

* Pyramidal architecture: upper layers have fewer neurons;
* Weight matrices are of full row rank;
* The number of input neurons cannot be smaller than the number of classes/patterns in the data.

However, their approach may not be relevant anymore, as it requires the data to be linearly separable, a condition under which many other models can be applied.

§.§.§ Backpropagation Fails for Linearly Separable Data

On the other hand, <cit.> studied situations in which backpropagation fails on linearly separable data sets. He showed that there can be situations where the data is linearly separable but a neural network trained with backpropagation cannot find the separating boundary, and he gave examples in which this situation occurs. His illustrative examples only hold when the misclassified data samples are significantly fewer than the correctly classified ones; in other words, the misclassified data samples might be just outliers. Therefore, this interesting property, when viewed today, is arguably a desirable property of backpropagation, as we typically expect a machine learning model to neglect outliers; consequently, this finding has not attracted much attention. However, regardless of whether the data points are outliers or not, a neural network should be able to overfit the training data given sufficient training iterations and a legitimate learning algorithm, especially considering that <cit.> showed that an inferior algorithm was able to overfit the data. Therefore, this phenomenon should have played a critical role in research on improving optimization techniques. Recently, studies of the cost surfaces of neural networks have indicated the existence of saddle points <cit.>, which may explain the findings of Brady et al. back in the late 80s.
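To make the mechanics of backpropagation concrete, here is a minimal sketch (our own illustration, with arbitrary architecture and hyperparameters) of a one-hidden-layer network trained with squared error; the backward pass propagates the output error layer by layer via the chain rule before each gradient descent update. Fittingly, it learns XOR, the function a single perceptron cannot represent.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# XOR data: not linearly separable, so a hidden layer is required.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)
eta = 0.5

for step in range(20000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    y = sigmoid(h @ W2 + b2)
    # Backward pass: propagate the output error back through the layers.
    delta2 = (y - t) * y * (1 - y)            # error term at the output layer
    delta1 = (delta2 @ W2.T) * h * (1 - h)    # error propagated to the hidden layer
    # Standard gradient descent with respect to the propagated errors.
    W2 -= eta * h.T @ delta2; b2 -= eta * delta2.sum(axis=0)
    W1 -= eta * X.T @ delta1; b1 -= eta * delta1.sum(axis=0)

# With an unlucky initialization this may land in a poor region of the cost
# surface; rerunning with another seed is the simple fix.
print("predictions:", y.round(3).ravel())     # close to [0, 1, 1, 0]
```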
Backpropagation enables the optimization of deep neural networks. However, there is still a long way to go before we can optimize them well. Later, in Section <ref>, we briefly discuss more techniques related to the optimization of neural networks.

§ THE NETWORK AS MEMORY AND DEEP BELIEF NETS

With the background of how the modern neural network is set up, we proceed to visit each prominent branch of the current deep learning family. Our first stop is the branch that leads to the popular Restricted Boltzmann Machines and Deep Belief Nets; it started as a model for understanding data in an unsupervised manner. Figure <ref> summarizes the models that will be covered in this section: the horizontal axis stands for the computational complexity of these models, the vertical axis for their representation power, and the six milestones focused on in this section are placed in the figure.

§.§ Self Organizing Map

The discussion starts with the Self Organizing Map (SOM), invented by <cit.>. SOM is a powerful technique primarily used to reduce the dimension of data, usually to one or two dimensions <cit.>. While reducing the dimensionality, SOM also retains the topological similarity of the data points. It can also be seen as a tool for clustering that imposes a topology on the clustered representation.

Figure <ref> is an illustration of a Self Organizing Map with a two-dimensional grid of hidden neurons, which therefore learns a two-dimensional representation of the data. The upper shaded nodes denote the units of the SOM used to represent the data, while the lower circles denote the data. There are no connections between the nodes of the SOM[In some other literature, <cit.> as an example, one may notice connections in the illustrations of models. However, those connections are only used to represent the neighborhood relationship of nodes, and no information flows via those connections. In this paper, as we will show many other models that rely on a clear illustration of information flow, we reserve connections for that purpose.]. The position of each node is fixed, and the representation should not be viewed as only a numerical value: the position of a unit also matters. This property differs from some widely accepted representation criteria. For example, compare the cases when a one-hot vector and a one-dimensional SOM are used to denote colors. To denote green out of a set C={green, red, purple}, a one-hot representation can use any of the vectors (1,0,0), (0,1,0), or (0,0,1), as long as we specify the bit for green correspondingly. However, for a one-dimensional SOM, only two vectors are possible: (1,0,0) or (0,0,1). This is because SOM aims to represent the data while retaining the similarity, and since red and purple are much more similar to each other than either is to green, green should not be represented in a way that splits red and purple. Note that this example is only used to demonstrate that the position of each unit in a SOM matters; in practice, the values of SOM units are not restricted to integers.

The learned SOM is usually a good tool for visualizing data. For example, if we conduct a survey on the happiness and richness levels of each country and feed the data into a two-dimensional SOM, then the trained units should represent the happiest and richest country at one corner and the opposite country at the furthest corner, while the remaining two corners represent the richest yet unhappiest and the poorest but happiest countries.
The rest of the countries are positioned accordingly. The advantage of SOM is that it allows one to easily tell how a country ranks in the world with a simple glance at the learned units <cit.>.

§.§.§ Learning Algorithm

With an understanding of the representation power of SOM, we now proceed to its parameter learning algorithm. The classic algorithm is heuristic and intuitive; we first fix the notation and then sketch the procedure below. Here we use a two-dimensional SOM as an example: i,j are indexes of units; w is the weight of a unit; v denotes a data vector; k is the index of a data point; t denotes the current iteration; N constrains the maximum number of steps allowed; P(·) denotes the penalty, which considers the distance between unit p,q and unit i,j; l is the learning rate; and r denotes a radius used to select neighbor nodes. Both l and r typically decrease as t increases. ||·||_2^2 denotes the Euclidean distance and dist(·) denotes the distance between the positions of units.

This algorithm explains how SOM learns a representation and how the similarities are retained: it always selects a subset of units that are similar to the sampled data point and adjusts their weights to match it. However, the algorithm relies on a careful selection of the radius of neighbor selection and a good initialization of the weights. Otherwise, although the learned weights will have the local property of topological similarity, they may lose this property globally: sometimes two clusters of similar events are separated by a cluster of dissimilar events. In simpler words, units of green may actually separate units of red and units of purple if the network is not appropriately trained <cit.>.
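Using this notation, the procedure can be sketched as follows (a minimal Python version; the Gaussian form of the penalty P(·) and the linear decay schedules for l and r are our own illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.uniform(size=(500, 2))            # toy data on the unit square

rows, cols, dim = 8, 8, 2
w = rng.uniform(size=(rows, cols, dim))      # unit weights
grid = np.dstack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"))

N = 5000                                     # maximum number of steps
l0, r0 = 0.5, 4.0                            # initial learning rate and radius
for t in range(N):
    v = data[rng.integers(len(data))]        # sample a data vector v
    # Best matching unit (i, j): the unit whose weight is closest to v.
    d = np.sum((w - v) ** 2, axis=2)
    bmu = np.unravel_index(np.argmin(d), d.shape)
    # Decay the learning rate l and the neighborhood radius r with t.
    l = l0 * (1 - t / N)
    r = max(r0 * (1 - t / N), 0.5)
    # Penalty P(.): Gaussian in the grid distance between unit (p, q) and the BMU.
    gdist = np.sum((grid - np.array(bmu)) ** 2, axis=2)
    P = np.exp(-gdist / (2 * r ** 2))
    w += l * P[..., None] * (v - w)          # move neighboring units towards v

print("corner units:", w[0, 0], w[-1, -1])   # units spread over the data square
```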
§.§ Hopfield Network

The Hopfield Network is historically described as a form of recurrent[The term "recurrent" is very confusing nowadays because of the popularity that recurrent neural networks (RNN) have gained.] neural network, first introduced in <cit.>. "Recurrent" in this context refers to the fact that the weights connecting the neurons are bidirectional. The Hopfield Network is widely recognized because of its content-addressable memory property, which is a simulation of spin glass theory. Therefore, we start the discussion with spin glass.

§.§.§ Spin Glass

Spin glass is a physics term used to describe a magnetic phenomenon. Much work has been devoted to a detailed study of the related theory <cit.>, so in this paper we only describe it intuitively. When a group of dipoles is placed together in a space, each dipole is forced to align itself with the field generated by the other dipoles at its location. However, by aligning itself, it changes the field at other locations, leading other dipoles to flip, which causes the field at the original location to change again. Eventually, these changes converge to a stable state. To describe the stable state, we first define the total field at location j as

s_j = o_j + c^t ∑_k s_k/d_jk^2

where o_j is an external field, c^t is a constant that depends on the temperature t, s_k is the polarity of the k-th dipole, and d_jk is the distance from location j to location k. Therefore, the total potential energy of the system is:

PE = ∑_j (s_j o_j + c^t s_j ∑_k s_k/d_jk^2)

The magnetic system will evolve until this potential energy is at a minimum.

§.§.§ Hopfield Network

The Hopfield Network is a fully connected neural network with binary thresholding neural units. The values of these units are either 0 or 1[Some other literature may use -1 and 1 to denote the values of these units. While the choice of values does not affect the idea of the Hopfield Network, it changes the formulation of the energy function. In this paper, we only discuss the case with 0 and 1 as values.]. These units are fully connected with bidirectional weights. With this setting, the energy of a Hopfield Network is defined as:

E = -∑_i s_i b_i - ∑_i,j s_i s_j w_i,j

where s is the state of a unit, b denotes the bias, w denotes the bidirectional weights, and i,j are indexes of units. This energy function closely connects to the potential energy function of spin glass, as shown in Equation <ref>.

The Hopfield Network is typically applied to memorize states of data. The weights of a network are designed or learned to make sure that the energy is minimized given the states of interest. Therefore, when another state is presented to the network while the weights are fixed, the Hopfield Network searches for the states that minimize the energy and recovers the state in memory. For example, in a face completion task, when some images of faces are presented to a Hopfield Network (such that each unit of the network corresponds to a pixel of an image, and the images are presented one after the other), the network can calculate the weights that minimize the energy given these faces. Later, if an image is corrupted or distorted and presented to this network again, the network is able to recover the original image by searching for a configuration of states that minimizes the energy, starting from the corrupted input. The term "energy" may remind people of physics; it may be clearer to explain how a Hopfield Network works with a physics scenario: nature uses a Hopfield Network to memorize the equilibrium position of a pendulum because, in the equilibrium position, the pendulum has the lowest gravitational potential energy. Therefore, wherever a pendulum is placed, it will converge back to the equilibrium position.

§.§.§ Learning and Inference

Learning the weights of a Hopfield Network is straightforward <cit.>. The weights can be calculated as:

w_i,j = ∑_n (2s_i^n-1)(2s_j^n-1)

where n indexes the states to be memorized (the data instances) and the other notations are the same as in Equation <ref>. This learning procedure is simple, but still worth mentioning, as it is an essential step when a Hopfield Network is applied to practical problems. However, we find that many online tutorials omit this step and, to make it worse, refer to the inference of states as learning/training. To remove the confusion, in this paper, following how the terms are used in the standard machine learning community, we refer to the calculation of the weights of a model (whether from a closed-form solution or a numerical solution) as "parameter learning" or "training", and we refer to the process of applying an existing model with known weights to a real-world problem as "inference"["Inference" is conventionally used this way in the machine learning community, although some statisticians may disagree with this usage.] or "testing" (to decode a hidden state of the data, e.g., to predict a label).

The inference of a Hopfield Network is also intuitive. For a given state of data, the network tests whether inverting the state of one unit will decrease the energy; if so, the network inverts the state and proceeds to test the next unit. This procedure is called Asynchronous update, and it is obviously subject to the sequential order in which the units are selected. A counterpart, known as Synchronous update, first tests all the units and then inverts all the units-to-invert simultaneously. Both of these methods may lead to a local optimum; Synchronous update may even result in an increase of the energy and may converge to an oscillation or a loop of states.
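A compact sketch of both phases (our own construction, with arbitrary sizes): three random patterns are stored with the learning rule above, and a corrupted probe is recovered by asynchronous updates.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100
patterns = (rng.random((3, N)) > 0.5).astype(int)   # three binary memories

# Parameter learning: w_ij = sum over stored patterns of (2 s_i - 1)(2 s_j - 1).
S = 2 * patterns - 1
W = S.T @ S
np.fill_diagonal(W, 0)        # no self-connections
b = np.zeros(N)

def recall(s, sweeps=10):
    """Inference by asynchronous update: a unit is set to 1 exactly when
    doing so lowers the energy E = -sum_i s_i b_i - sum_{i<j} s_i s_j w_ij."""
    s = s.copy()
    for _ in range(sweeps):
        for i in rng.permutation(N):
            s[i] = 1 if W[i] @ s + b[i] > 0 else 0
    return s

# Corrupt one stored pattern and let the network recover it.
probe = patterns[0].copy()
flip = rng.choice(N, size=15, replace=False)
probe[flip] = 1 - probe[flip]
recovered = recall(probe)
print("bits matching the stored pattern:", (recovered == patterns[0]).sum(), "/", N)
```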
§.§.§ Capacity

One distinct disadvantage of the Hopfield Network is that its memory is not very efficient: a network of N units can only store up to 0.15N^2 bits of memory, while a network with N units has N^2 edges. In addition, after storing M memories (M instances of data), each connection has an integer value in the range [-M, M]. Thus, the number of bits required to store the weights of N units is N^2 log(2M+1) <cit.>. Therefore, we can safely draw the conclusion that although the Hopfield Network is a remarkable idea that enables a network to memorize data, it is extremely inefficient in practice.

As follow-ups to the invention of the Hopfield Network, many works attempted to study and increase the capacity of the original Hopfield Network <cit.>. Despite these attempts, the Hopfield Network gradually faded out of favor and was replaced by other models inspired by it. In the following sections, we discuss the popular Boltzmann Machine and Restricted Boltzmann Machine and study how these models grew out of the initial ideas of the Hopfield Network and evolved to replace it.

§.§ Boltzmann Machine

The Boltzmann Machine, invented by <cit.>, is a stochastic version of the Hopfield Network with hidden units. It got its name from the Boltzmann Distribution.

§.§.§ Boltzmann Distribution

The Boltzmann Distribution is named after Ludwig Boltzmann and was investigated extensively by <cit.>. It was originally used to describe the probability distribution of particles in a system over various possible states, as follows:

F(s) ∝ e^-E_s/kT

where s stands for the state and E_s is the corresponding energy; k and T are Boltzmann's constant and the thermodynamic temperature, respectively. Naturally, the ratio of the probabilities of two states is characterized only by the difference of their energies:

r = F(s_1)/F(s_2) = e^(E_s_2 - E_s_1)/kT

which is known as the Boltzmann factor. With the distribution specified by the energy, the probability of each state is defined as its term divided by a normalizer:

p_s_i = e^-E_s_i/kT / ∑_j e^-E_s_j/kT

§.§.§ Boltzmann Machine

As mentioned previously, the Boltzmann Machine is a stochastic version of the Hopfield Network with hidden units. Figure <ref> illustrates how the idea of hidden units turns a Hopfield Network into a Boltzmann Machine. In a Boltzmann Machine, only the visible units are connected with data, and the hidden units are used to assist the visible units in describing the distribution of the data. Therefore, the model conceptually splits into a visible part and a hidden part, while it still maintains a fully connected network among these units.

"Stochastic" is introduced to improve the Boltzmann Machine over the Hopfield Network with respect to leaping out of local optima or oscillations of states. Inspired by physics, a method to transfer states regardless of the current energy is introduced: set a state to 1 (meaning the state is on), regardless of its current value, with the following probability:

p = 1/(1 + e^-Δ E/T)

where Δ E stands for the difference of the energies when the state is on and off, i.e., Δ E = E_s=1 - E_s=0, and T stands for the temperature. The idea of T is inspired by the physical fact that the higher the temperature is, the more likely the state is to transfer[Molecules move faster when more kinetic energy is provided, which could be achieved by heating.].
In addition, the probability of a higher-energy state transferring to a lower-energy state will always be greater than that of the reverse process[This corresponds to the Zeroth Law of Thermodynamics.]. This idea is highly related to Simulated Annealing <cit.>, a very popular optimization algorithm back then, though Simulated Annealing is hardly relevant to today's deep learning community. Regardless of the historical importance of the term T, within this section we will assume T=1 as a constant, for the sake of simplicity.

§.§.§ Energy of Boltzmann Machine

The energy function of the Boltzmann Machine is defined in the same way as Equation <ref> for the Hopfield Network, except that the visible units and hidden units are now denoted separately:

E(v,h) = -∑_i v_i b_i - ∑_k h_k b_k - ∑_i,j v_i v_j w_i,j - ∑_i,k v_i h_k w_i,k - ∑_k,l h_k h_l w_k,l

where v stands for the visible units and h for the hidden units. This equation also connects back to Equation <ref>, except that the Boltzmann Machine splits the energy function according to hidden and visible units. Based on this energy function, the probability of a joint configuration over both the visible and hidden units can be defined as follows:

p(v, h) = e^-E(v,h) / ∑_m,n e^-E(m,n)

The probability of the visible/hidden units can be obtained by marginalizing this joint probability. For example, by marginalizing out the hidden units, we get the probability distribution of the visible units:

p(v) = ∑_h e^-E(v,h) / ∑_m,n e^-E(m,n)

which can be used to sample the visible units, i.e., to generate data. When the Boltzmann Machine is trained to its stable state, called thermal equilibrium, the distribution of these probabilities p(v,h) will remain constant, because the distribution of the energy will be constant. However, the probability of each individual visible or hidden unit may vary, and the energy may not be at its minimum. This is related to how thermal equilibrium is defined: the only constant factor is the distribution over the parts of the system. Thermal equilibrium can be a hard concept to grasp. One can imagine pouring a cup of hot water into a bottle and then pouring a cup of cold water on top of it. At the start, the bottle feels hot at the bottom and cold at the top, and gradually the bottle feels mild as the cold and hot water mix and heat is transferred. However, the fact that the temperature of the bottle stabilizes at mild (corresponding to the distribution of p(v,h)) does not necessarily mean that the molecules cease to move (corresponding to each individual p(v,h)).

§.§.§ Parameter Learning

The common way to train the Boltzmann Machine is to determine the parameters that maximize the likelihood of the observed data. Gradient ascent on the log likelihood (equivalently, gradient descent on the negative log likelihood) is usually performed to determine the parameters. For simplicity, the following derivation is based on a single observation. First, we have the log likelihood function of the visible units:

l(v;w) = log p(v;w) = log ∑_h e^-E(v,h) - log ∑_m,n e^-E(m,n)

where the second term on the RHS is the (log) normalizer. Taking the derivative of the log likelihood function with respect to w and simplifying, we have:

∂ l(v;w)/∂ w = -∑_h p(h|v) ∂ E(v,h)/∂ w + ∑_m,n p(m,n) ∂ E(m,n)/∂ w
             = -𝔼_p(h|v) [∂ E(v,h)/∂ w] + 𝔼_p(m,n) [∂ E(m,n)/∂ w]

where 𝔼 denotes the expectation. Thus the gradient of the likelihood function is composed of two parts. The first part is the expected gradient of the energy function under the conditional distribution p(h|v).
The second part is the expected gradient of the energy function under the joint distribution over all variable states. However, calculating these expectations is generally infeasible for any realistically sized model, as it involves summing over a huge number of possible states/configurations. The general approach to this problem is to use Markov Chain Monte Carlo (MCMC) to approximate these sums:

∂ l(v;w)/∂ w = -<s_i,s_j>_p(h_data|v_data) + <s_i,s_j>_p(h_model|v_model)

where <·> denotes the expectation. Equation <ref> is the difference between the expectation of the product of states when the data is fed into the visible states and the expectation of the product of states when no data is fed. The first term is calculated by taking the average value of the energy function gradient when the visible and hidden units are driven by observed data samples; in practice, this first term is generally straightforward to calculate. Calculating the second term is generally more complicated, as it involves running a set of Markov chains until they reach the current model's equilibrium distribution and then taking the average energy function gradient based on those samples. This sampling procedure can be computationally very expensive, which motivates the topic of the next section, the Restricted Boltzmann Machine.

§.§ Restricted Boltzmann Machine

The Restricted Boltzmann Machine (RBM), originally known as Harmonium when invented by <cit.>, is a version of the Boltzmann Machine with the restriction that there are no connections between visible units or between hidden units. Figure <ref> illustrates how the Restricted Boltzmann Machine is obtained from the Boltzmann Machine (Figure <ref>): the connections between hidden units, as well as the connections between visible units, are removed, and the model becomes a bipartite graph. With this restriction introduced, the energy function of the RBM is much simpler:

E(v,h) = -∑_i v_i b_i - ∑_k h_k b_k - ∑_i,k v_i h_k w_i,k

§.§.§ Contrastive Divergence

The RBM can still be trained in the same way as the Boltzmann Machine. Since the energy function of the RBM is much simpler, the sampling method used to infer the second term of Equation <ref> becomes easier. Despite this relative simplicity, the learning procedure still requires a large number of sampling steps to approximate the model distribution. To emphasize the difficulty of such a sampling mechanism, as well as to simplify the follow-up introduction, we rewrite Equation <ref> with a different set of notations:

∂ l(v;w)/∂ w = -<s_i,s_j>_p_0 + <s_i,s_j>_p_∞

Here we use p_0 to denote the data distribution and p_∞ to denote the model distribution; the other notations remain unchanged. The difficulty of the method above is thus that it requires potentially "infinitely" many sampling steps to approximate the model distribution. <cit.> overcame this issue magically with the introduction of a method named Contrastive Divergence. Empirically, he found that one does not have to perform "infinitely" many sampling steps to converge to the model distribution; a finite number k of sampling steps is enough. Therefore, Equation <ref> is effectively rewritten as:

∂ l(v;w)/∂ w = -<s_i,s_j>_p_0 + <s_i,s_j>_p_k

Remarkably, <cit.> showed that k=1 is sufficient for the learning algorithm to work well in practice.
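A minimal NumPy sketch of CD-1 for a binary RBM follows (our own toy illustration: the data, layer sizes, learning rate, and epoch count are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
sample = lambda p: (rng.random(p.shape) < p).astype(float)

n_v, n_h, eta = 6, 4, 0.1
W = rng.normal(scale=0.1, size=(n_v, n_h))
bv, bh = np.zeros(n_v), np.zeros(n_h)

# Toy binary data: two repeating patterns.
data = np.array([[1, 1, 1, 0, 0, 0], [0, 0, 0, 1, 1, 1]] * 50, dtype=float)

for _ in range(200):
    for v0 in data:
        # Positive phase: clamp the data, sample the hidden units.
        ph0 = sigmoid(v0 @ W + bh)
        h0 = sample(ph0)
        # Negative phase: a single (k = 1) Gibbs sampling step.
        v1 = sample(sigmoid(h0 @ W.T + bv))
        ph1 = sigmoid(v1 @ W + bh)
        # CD-1 update: <v h>_p0 - <v h>_pk, plus bias updates.
        W += eta * (np.outer(v0, ph0) - np.outer(v1, ph1))
        bv += eta * (v0 - v1)
        bh += eta * (ph0 - ph1)

# Reconstruction check on one training pattern.
v = data[0]
recon = sigmoid(sample(sigmoid(v @ W + bh)) @ W.T + bv)
print("input:         ", v)
print("reconstruction:", recon.round(2))
```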
<cit.> attempted to justify Contrastive Divergence in theory, but their derivation led to a negative conclusion: Contrastive Divergence is a biased algorithm, and a finite k cannot represent the model distribution. However, their empirical results suggested that a finite k can approximate the model distribution well enough, resulting in a small enough bias. In addition, the algorithm works well in practice, which strengthened the idea of Contrastive Divergence.

With reasonable modeling power and a fast approximation algorithm, the RBM quickly drew great attention and became one of the most fundamental building blocks of deep neural networks. In the following two sections, we introduce two distinguished deep neural networks built upon the RBM/Boltzmann Machine, namely the Deep Belief Network and the Deep Boltzmann Machine.

§.§ Deep Belief Nets

The Deep Belief Network was introduced by <cit.>[This paper is generally seen as the opening of the current Deep Learning era, as it first introduced the possibility of training a deep neural network by layerwise training.], who showed that RBMs can be stacked and trained in a greedy manner. Figure <ref> shows the structure of a three-layer Deep Belief Network. Unlike a plain stack of RBMs, the DBN only allows bidirectional connections (RBM-type connections) at the top layer, while the bottom layers have only top-down connections. Probably a better way to understand the DBN is to think of it as a multi-layer generative model. Although the DBN is generally described as a stacked RBM, it is quite different from putting one RBM on top of another: it is probably more appropriate to think of the DBN as a one-layer RBM with extended layers specially devoted to generating patterns of data. Therefore, the model only needs to sample for thermal equilibrium at the topmost layer and then pass the visible states top-down to generate the data.

§.§.§ Parameter Learning

Parameter learning of a Deep Belief Network falls into two steps: the first step is layer-wise pre-training, and the second step is fine-tuning.

Layerwise Pre-training

The success of the Deep Belief Network is largely due to the introduction of layer-wise pre-training. The idea is simple, but the reason why it works still attracts researchers. Pre-training simply trains the network component by component, bottom up: treat the first two layers as an RBM and train it; then treat the second and third layers as another RBM and train its parameters, and so on (a short sketch of this greedy procedure is given at the end of this part). Such an idea turns out to offer critical support for the success of the later fine-tuning process. Several explanations have been attempted for the mechanism of pre-training:

* Intuitively, pre-training is a clever way of initialization: it puts the parameter values in the appropriate range for further fine-tuning.
* <cit.> suggested that unsupervised pre-training initializes the model to a point in parameter space that leads to a more effective optimization process, in that the optimization can find a lower minimum of the empirical cost function.
* <cit.> empirically argued for a regularization explanation: unsupervised pre-training guides the learning towards basins of attraction of minima that support better generalization from the training data set.

In addition to Deep Belief Networks, this pre-training mechanism also inspired pre-training for many other classical models, including autoencoders <cit.>, Deep Boltzmann Machines <cit.>, and some models inspired by these classical ones, like <cit.>.
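The mechanics of the greedy procedure can be sketched as follows (a toy illustration with our own sizes and random data; in real use the input would be structured data and the stack would then be fine-tuned):

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
sample = lambda p: (rng.random(p.shape) < p).astype(float)

def train_rbm(data, n_hidden, eta=0.1, epochs=50):
    """Train one RBM with CD-1; return its weights and hidden biases."""
    W = rng.normal(scale=0.1, size=(data.shape[1], n_hidden))
    bv, bh = np.zeros(data.shape[1]), np.zeros(n_hidden)
    for _ in range(epochs):
        for v0 in data:
            ph0 = sigmoid(v0 @ W + bh); h0 = sample(ph0)
            v1 = sample(sigmoid(h0 @ W.T + bv))
            ph1 = sigmoid(v1 @ W + bh)
            W += eta * (np.outer(v0, ph0) - np.outer(v1, ph1))
            bv += eta * (v0 - v1); bh += eta * (ph0 - ph1)
    return W, bh

# Greedy layer-wise pre-training: train an RBM on the data, then train the
# next RBM on the hidden representation of the previous one, and so on.
x = (rng.random((200, 12)) > 0.5).astype(float)
stack = []
for n_hidden in [8, 4]:
    W, bh = train_rbm(x, n_hidden)
    stack.append((W, bh))
    x = sigmoid(x @ W + bh)          # representation fed to the next layer

print("layer weight shapes:", [W.shape for W, _ in stack])
```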
After pre-training is performed, fine-tuning is carried out to further optimize the network, searching for parameters that lead to a lower minimum. For Deep Belief Networks, there are two different fine-tuning strategies, depending on the goals of the network.

Fine-Tuning for the Generative Model

Fine-tuning for a generative model is achieved with a contrastive version of the wake-sleep algorithm <cit.>. This algorithm is intriguing because it is designed to interpret how the brain works: scientists have found that sleeping is a critical process of brain function, and it seems to be an inverse version of how we learn when we are awake. The wake-sleep algorithm has two phases. In the wake phase, we propagate information bottom up to adjust the top-down weights for reconstructing the layer below. The sleep phase is the inverse of the wake phase: we propagate information top down to adjust the bottom-up weights for reconstructing the layer above. The contrastive version of this wake-sleep algorithm adds a Contrastive Divergence phase between the wake phase and the sleep phase: the wake phase only goes up to the visible layer of the top RBM, then we sample the top RBM with Contrastive Divergence, and then the sleep phase starts from the visible layer of the top RBM.

Fine-Tuning for the Discriminative Model

The strategy for fine-tuning a DBN as a discriminative model is simply to apply standard backpropagation to the pre-trained model, since we have labels for the data. However, pre-training is still necessary in spite of the generally good performance of backpropagation.

§.§ Deep Boltzmann Machine

The last milestone we introduce in the family of deep generative models is the Deep Boltzmann Machine, introduced by <cit.>. Figure <ref> shows a three-layer Deep Boltzmann Machine (DBM). The distinction between the DBM and the DBN of the previous section is that the DBM allows bidirectional connections in the bottom layers as well. Therefore, the DBM represents the idea of stacking RBMs much better than the DBN does, although it might be clearer if the DBM were named the Deep Restricted Boltzmann Machine. Due to the nature of the DBM, its energy function is defined as an extension of the energy function of the RBM (Equation <ref>):

E(v,h) = -∑_i v_i b_i - ∑_n=1^N ∑_k h_n,k b_n,k - ∑_i,k v_i w_i,k h_1,k - ∑_n=1^N-1 ∑_k,l h_n,k w_n,k,l h_n+1,l

for a DBM with N hidden layers, where h_1 denotes the first hidden layer. This similarity of the energy functions grants the possibility of training the DBM with contrastive divergence; however, pre-training is typically necessary.

§.§.§ Deep Boltzmann Machine (DBM) vs. Deep Belief Networks (DBN)

As their names suggest, the Deep Boltzmann Machine and the Deep Belief Network have many similarities, especially at first glance. Both are deep neural networks that originate from the idea of the Restricted Boltzmann Machine (the name "Deep Belief Network" seems to indicate that it also partially originates from the Bayesian Network <cit.>), and both rely on layer-wise pre-training for successful parameter learning. However, the fundamental differences between the two models are dramatic, introduced by how the connections are made between the bottom layers (undirected/bidirectional vs. directed). The bidirectional structure of the DBM grants the DBM the possibility of learning more complex patterns of data.
It also grants the approximate inference procedure the possibility of incorporating top-down feedback in addition to an initial bottom-up pass, allowing Deep Boltzmann Machines to better propagate uncertainty about ambiguous inputs.

§.§ Deep Generative Models: Now and the Future

The Deep Boltzmann Machine is the last milestone we discuss in the history of generative models, but there has been much work after the DBM and there is even more to be done in the future. <cit.> introduced a Bayesian Program Learning framework that can simulate human learning abilities with large-scale visual concepts. In addition to its performance on the one-shot learning classification task, their model passes the visual Turing Test in terms of generating handwritten characters from the world's alphabets; in other words, the generative performance of their model is indistinguishable from human behavior. Although not a deep neural model itself, their model outperforms several contemporaneous deep neural networks. A deep neural counterpart of the Bayesian Program Learning framework, with even better performance, can surely be expected.

Conditional image generation (generation given part of the image) is another interesting recent topic. The problem is usually solved by Pixel Networks (PixelCNN <cit.> and PixelRNN <cit.>), although conditioning on part of the image arguably simplifies the generation task. Another contribution to generative models is the Generative Adversarial Network <cit.>; however, GAN is still too young to be discussed further in this paper.

§ CONVOLUTIONAL NEURAL NETWORKS AND VISION PROBLEMS

In this section, we start to discuss a different family of models: the Convolutional Neural Network (CNN) family. Distinct from the family in the previous section, the Convolutional Neural Network family mainly evolved from knowledge of the human visual cortex. Therefore, in this section we first introduce one of the most important reasons for the success of convolutional neural networks in vision problems: their bionic design, which replicates the human vision system. Today's convolutional neural networks probably owe more to this design than to their early-stage ancestors. With this background set up, we then briefly introduce the successful models that made themselves famous through the ImageNet Challenge <cit.>. Finally, we present some known problems of vision tasks that may guide future research directions.

§.§ Visual Cortex

The Convolutional Neural Network is widely known to be inspired by the visual cortex; however, apart from some publications that discuss this inspiration briefly <cit.>, few resources present it thoroughly. In this section, we focus on the basics of the visual cortex <cit.>, which lays the ground for the further study of Convolutional Neural Networks.

The visual cortex of the brain, located in the occipital lobe at the back of the skull, is the part of the cerebral cortex that plays an important role in processing visual information. Visual information coming from the eye goes through a series of brain structures and reaches the visual cortex. The part of the visual cortex that receives the sensory inputs is known as the primary visual cortex, also known as area V1. Visual information is further managed by extrastriate areas, including visual areas two (V2) and four (V4).
There are also other visual areas (V3, V5, and V6), but in this paper we primarily focus on the visual areas related to object recognition, known as the ventral stream, which consists of areas V1, V2, V4, and the inferior temporal gyrus; the latter is one of the higher levels of the ventral stream of visual processing, associated with the representation of complex object features such as global shape, as in face perception <cit.>.

Figure <ref> is an illustration of the ventral stream of the visual cortex. It shows the information processing procedure from the retina, which receives the image information, all the way to the inferior temporal gyrus. For each component:

* The retina converts the light energy coming from the rays bouncing off an object into chemical energy. This chemical energy is then converted into action potentials that are transferred to the primary visual cortex. (In fact, several other brain structures are involved between the retina and V1, but we omit them for simplicity[We deliberately discuss the components that have connections with established technologies in convolutional neural networks; those interested in developing more powerful models are encouraged to investigate the other components.].)
* The primary visual cortex (V1) mainly fulfills the task of edge detection, where an edge is an area with the strongest local contrast in the visual signals.
* V2, also known as the secondary visual cortex, is the first region within the visual association area. It receives strong feedforward connections from V1 and sends strong connections to later areas. In V2, cells are tuned to extract mainly simple properties of the visual signals, such as orientation, spatial frequency, and colour, and a few more complex properties.
* V4 fulfills functions including the detection of object features of intermediate complexity, such as simple geometric shapes, in addition to orientation, spatial frequency, and color. V4 also shows strong attentional modulation <cit.> and receives direct input from V1.
* The inferior temporal gyrus (IT) is responsible for identifying an object based on its color and form, and for comparing that processed information to stored memories of objects in order to identify the object <cit.>. In other words, IT performs semantic-level tasks, like face recognition.

Many of these descriptions of the functions of the visual cortex should revive a recollection of convolutional neural networks for readers who have been exposed to the relevant technical literature. Later in this section, we discuss more details of convolutional neural networks, which will help build explicit connections; even for readers who barely have knowledge of convolutional neural networks, this hierarchical structure of the visual cortex should immediately ring a bell about neural networks.

Besides convolutional neural networks, the visual cortex has been inspiring work in computer vision for a long time. For example, <cit.> built a neural model inspired by the primary visual cortex (V1). At another granularity, <cit.> introduced a system with feature detectors inspired by the visual cortex. <cit.> published a book describing models of information processing in the visual cortex, and <cit.> conducted a more comprehensive survey on the relevant topic, though without focusing on any particular subject in detail. In this section, we discuss the connections between the visual cortex and convolutional neural networks in detail.
We will begin with the Neocognitron, which borrowed some ideas from the visual cortex and later inspired the convolutional neural network.

§.§ Neocognitron and Visual Cortex

The Neocognitron, proposed by <cit.>, is generally seen as the model that inspired Convolutional Neural Networks on the computational side. It is a neural network that consists of two different kinds of layers: S-layers, which act as feature extractors, and C-layers, which act as structured connections organizing the extracted features.

An S-layer consists of a number of S-cells, inspired by the cells of the primary visual cortex, and serves as a feature extractor. Ideally, each S-cell can be trained to be responsive to a particular feature presented in its receptive field. Generally, local features such as edges in particular orientations are extracted in the lower layers, while global features are extracted in the higher layers. This structure highly resembles how humans conceive objects. A C-layer resembles the complex cells in the higher pathway of the visual cortex and is mainly introduced to provide shift invariance to the features extracted by the S-layer.

§.§.§ Parameter Learning

During the parameter learning process, only the parameters of the S-layers are updated. The Neocognitron can also be trained unsupervisedly, to obtain a good feature extractor from its S-layers. The training process for an S-layer is very similar to the Hebbian Learning rule: the connections between the S-layer and the C-layer are strengthened for whichever S-cell shows the strongest response. This training mechanism also inherits the problem of the Hebbian Learning rule: the strength of the connections keeps increasing without bound. The solution, also introduced by <cit.>, is the "inhibitory cell", which performs a normalization to avoid this problem.

§.§ Convolutional Neural Network and Visual Cortex

Now we proceed from the Neocognitron to the Convolutional Neural Network. First, we introduce the building components: the convolutional layer and the subsampling layer. Then we assemble these components to present the Convolutional Neural Network, using LeNet as an example.

§.§.§ Convolution Operation

Convolution is, strictly speaking, just a mathematical operation, which could be treated on a par with operations like addition or multiplication rather than discussed specially in the machine learning literature; we nevertheless discuss it here for completeness and for readers who may not be familiar with it. Convolution is a mathematical operation on two functions (e.g., f and g) that produces a third function h, an integral expressing the amount of overlap of one function (f) as it is shifted over the other function (g). It is described formally as:

h(t) = ∫_-∞^∞ f(τ)g(t-τ)dτ

and denoted as h=f⋆ g.

Convolutional neural networks typically work with the two-dimensional convolution operation, which is summarized in Figure <ref>. As shown in Figure <ref>, the leftmost matrix is the input matrix and the middle one is usually called the kernel matrix; convolution is applied to these two matrices, and the result is shown as the rightmost matrix. The convolution process is an element-wise product followed by a sum, as the example shows: when the upper-left 3×3 submatrix of the input is convolved with the kernel, the result is 29. Then we slide the target 3×3 submatrix one column to the right, convolve it with the kernel, and get the result 12. We keep sliding and record the results as a matrix. Because the kernel is 3×3, every target submatrix is 3×3; thus every 3×3 submatrix is convolved into a single number, and the whole 5×5 matrix is shrunk into a 3×3 matrix (because 5-(3-1)=3, where the first 3 is the size of the kernel matrix).
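The sliding-window computation can be written directly (a plain NumPy sketch; like most CNN implementations, it omits the kernel flip of textbook convolution, so strictly speaking it computes a cross-correlation):

```python
import numpy as np

def conv2d(image, kernel):
    """'Valid' 2-D convolution: at every position, take the element-wise
    product of the kernel with the submatrix under it and sum the result."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.arange(25, dtype=float).reshape(5, 5)   # a toy 5x5 input
blur = np.ones((3, 3)) / 9.0                       # an averaging (blur) kernel
print(conv2d(image, blur))                         # 3x3 output, since 5-(3-1)=3

# A classic edge-detection kernel, of the kind the primary visual cortex
# is often compared to:
edge = np.array([[-1, -1, -1], [-1, 8, -1], [-1, -1, -1]], dtype=float)
print(conv2d(image, edge))
```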
Because the kernel is 3×3, every target matrix is 3×3, and thus every 3×3 patch is convolved into a single number, so the whole 5×5 matrix shrinks into a 3×3 matrix (because 5 - (3-1) = 3, where 3 is the size of the kernel matrix). One should realize that convolution is locally shift invariant: if the pattern matched by the kernel shifts within the input, the same response (29 in the example) simply appears at the correspondingly shifted location of the output. This invariance property plays a critical role in vision problems because, in an ideal case, the recognition result should not be changed by a shift or rotation of features. This property used to be achieved elegantly by <cit.>, but convolutional neural networks brought the performance up to a new level. §.§.§ Connection between CNN and Visual CortexWith these ideas about two-dimensional convolution, we further discuss how convolution is a useful operation that can simulate the tasks performed by the visual cortex. The matrices used in the convolution operation are usually known as kernels. By different choices of kernels, different operations on the images can be achieved. Typical operations include identity, edge detection, blur, sharpening, etc. By introducing random matrices as convolution operators, some interesting properties might be discovered. Figure <ref> is an illustration of some example kernels applied to the same figure. One can see that different kernels can be applied to fulfill different tasks. Random kernels can also be applied to transform the image into some interesting outcomes. Figure <ref> (b) shows that edge detection, which is one of the central tasks of the primary visual cortex, can be fulfilled by a clever choice of kernels. Furthermore, a clever selection of kernels can lead us to a successful replication of the visual cortex. As a result, learning a meaningful convolution kernel (i.e. parameter learning) is one of the central tasks in convolutional neural networks when applied to vision tasks. This also explains why many well-trained popular models can usually perform well in other tasks after only a limited fine-tuning process: the kernels have been well trained and are universally applicable. With this understanding of the essential role the convolution operation plays in vision tasks, we proceed to investigate some major milestones along the way.§.§ The Pioneer of Convolutional Neural Networks: LeNetThis section is devoted to a model that is widely recognized as the first convolutional neural network: LeNet, invented by <cit.> (further made popular by <cit.>). It is inspired by the Neocognitron. In this section, we will introduce the convolutional neural network via LeNet. Figure <ref> shows an illustration of the architecture of LeNet. It consists of two pairs of a Convolutional Layer and a Subsampling Layer, further connected to a fully connected layer and an RBF layer for classification. §.§.§ Convolutional LayerA convolutional layer is primarily a layer that performs the convolution operation. As we have discussed previously, a clever selection of the convolution kernel can effectively simulate the task of the visual cortex. The convolutional layer introduces another operation after the convolution to make the simulation more successful: the non-linearity transform.
Consider the ReLU <cit.> non-linearity transform, defined as follows:f(x) = max(0, x)This transform removes the negative part of the input, resulting in a clearer contrast of meaningful features as opposed to the other side products the kernel produces. Therefore, this non-linearity grants the convolution more power in extracting useful features and allows it to simulate the functions of the visual cortex more closely. §.§.§ Subsampling LayerThe Subsampling Layer performs a simpler task: it only samples one value out of every region it looks into. Different sampling strategies can be considered, like max-pooling (taking the maximum value of the input), average-pooling (taking the averaged value of the input) or even probabilistic pooling (taking a random one) <cit.>. Sampling turns the input representations into smaller and more manageable embeddings. More importantly, sampling makes the network invariant to small transformations, distortions, and translations in the input image. A small distortion in the input will not change the outcome of pooling, since we take the maximum/average value in a local neighborhood. §.§.§ LeNetWith the two most important components introduced, we can stack them together to assemble a convolutional neural network. Following the recipe of Figure <ref>, we end up with the famous LeNet. LeNet is known for its ability to classify digits and can handle a variety of different problems with digits, including variances in position and scale, rotation and squeezing of digits, and even different stroke widths of the digit. Meanwhile, with the introduction of LeNet, <cit.> also introduced the MNIST database, which later became the standard benchmark in the digit recognition field.
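To make this assembly concrete before turning to the ImageNet-era models, here is a minimal LeNet-style network in PyTorch. It is a hedged sketch rather than a faithful reproduction of the original LeNet (which, for instance, used average-pooling-like subsampling and an RBF output layer); the layer sizes below are illustrative choices for MNIST-sized inputs:

```python
import torch
import torch.nn as nn

class LeNetStyle(nn.Module):
    """Two (convolution -> non-linearity -> subsampling) pairs,
    followed by fully connected layers, as in the recipe above."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5),   # 1x28x28 -> 6x24x24
            nn.ReLU(),
            nn.MaxPool2d(2),                  # 6x24x24 -> 6x12x12
            nn.Conv2d(6, 16, kernel_size=5),  # 6x12x12 -> 16x8x8
            nn.ReLU(),
            nn.MaxPool2d(2),                  # 16x8x8 -> 16x4x4
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 4 * 4, 120),
            nn.ReLU(),
            nn.Linear(120, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

logits = LeNetStyle()(torch.randn(1, 1, 28, 28))  # a dummy MNIST-sized input
print(logits.shape)  # torch.Size([1, 10])
```

The stack mirrors the recipe of Figure <ref>: feature extraction by convolution and pooling, then classification by fully connected layers.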
§.§ Milestones in ImageNet ChallengeWith the success of LeNet, the Convolutional Neural Network showed great potential in solving vision tasks. This potential attracted a large number of researchers aiming to solve object recognition tasks such as CIFAR classification <cit.> and the ImageNet challenge <cit.>. Along this path, several superstar milestones have attracted great attention and have been applied to other fields with good performance. In this section, we will briefly discuss these models. §.§.§ AlexNetWhile LeNet is the model that started the era of convolutional neural networks, AlexNet, invented by <cit.>, is the one that started the era of CNNs for ImageNet classification. AlexNet was the first evidence that CNNs can perform well on the historically difficult ImageNet dataset, and it performed so well that it led the community into a competition of developing CNNs. The success of AlexNet is due not only to its unique design of architecture but also to its clever training mechanisms. To mitigate the computationally expensive training process, AlexNet was split into two streams and trained on two GPUs. It also used data augmentation techniques consisting of image translations, horizontal reflections, and patch extractions.The recipe of AlexNet is shown in Figure <ref>. However, hardly any lessons can be learned from the architecture of AlexNet despite its remarkable performance. Even more unfortunately, the fact that this particular architecture of AlexNet does not have well-grounded theoretical support pushed many researchers to blindly burn computing resources to test for new architectures. Many models have been introduced during this period, but only a few may be worth mentioning in the future.§.§.§ VGGIn this blind competition of exploring different architectures, <cit.> showed that simplicity is a promising direction with a model named VGG. Although VGG is deeper (19 layers) than other models around that time, its architecture is extremely simple: all the layers are 3×3 convolutional layers with a 2×2 pooling layer. This simple usage of convolutional layers simulates a larger filter while keeping the benefits of smaller filter sizes, because the combination of two 3×3 convolutional layers has an effective receptive field of a 5×5 convolutional layer, but with fewer parameters. The spatial size of the input volumes at each layer decreases as a result of the convolutional and pooling layers, but the depth of the volumes increases because of the increased number of filters (in VGG, the number of filters doubles after each pooling layer). This behavior reinforces the idea of VGG: shrink the spatial dimensions, but grow the depth. VGG was not the winner of the ImageNet competition of that year (the winner was GoogLeNet, invented by <cit.>). GoogLeNet introduced several important concepts like the Inception module and concepts later used by R-CNN <cit.>, but its arbitrary/creative design of architecture barely contributed more than VGG did to the community, especially considering that the Residual Net, following the path of VGG, won the ImageNet challenge at an unprecedented level. §.§.§ Residual NetThe Residual Net (ResNet) is a 152-layer network, which was ten times deeper than what was usually seen at the time it was invented by <cit.>. Following the path VGG introduced, ResNet explores deeper structures with simple layers. However, naively increasing the number of layers only results in worse performance, for both training and testing <cit.>. The breakthrough ResNet introduced, which allows it to be substantially deeper than previous networks, is called the Residual Block. The idea behind a Residual Block is that some input of a certain layer (denoted as x) can be passed to the component two layers later, either following the traditional path, which involves a succession of convolutional layers and ReLU transforms (we denote the result as f(x)), or going through an express way that passes x there directly. As a result, the input to the component two layers later is f(x)+x instead of the usual f(x). The idea of the Residual Block is illustrated in Figure <ref>. In a complementary work, <cit.> validated that residual blocks are essential for propagating information smoothly and therefore simplify the optimization. They also extended ResNet to a 1000-layer version with success on the CIFAR data set. Another interesting perspective on ResNet is provided by <cit.>. They showed that ResNet behaves like an ensemble of shallow networks: the express way allows ResNet to perform as a collection of independent networks, each significantly shallower than the integrated ResNet itself. This also explains why the gradient can be passed through the ultra-deep architecture without vanishing. (We will talk more about the vanishing gradient problem when we discuss recurrent neural networks in the next section.) Another work, which is not directly relevant to ResNet but may help to understand it, was conducted by <cit.>. They showed that features from lower layers are informative in addition to what can be summarized from the final layer.
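The f(x)+x idea is compact enough to state directly in code. The following is a hedged, simplified sketch of a residual block (it assumes matching input and output shapes; the published architecture additionally uses batch normalization and projection shortcuts when shapes differ):

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """y = f(x) + x, where f is two convolution/ReLU stages."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.relu = nn.ReLU()

    def forward(self, x):
        f_x = self.conv2(self.relu(self.conv1(x)))  # the traditional path
        return self.relu(f_x + x)                   # the express way adds x back

x = torch.randn(1, 64, 56, 56)
print(ResidualBlock(64)(x).shape)  # torch.Size([1, 64, 56, 56])
```

Because the shortcut is an identity, the gradient of the output with respect to x always contains a direct term, which is one way to see why such blocks propagate information smoothly.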
ResNet is still not completely free of clever designs. The number of layers in the whole network and the number of layers that a Residual Block allows the identity to bypass are still choices that require experimental validation. Nonetheless, to some extent, ResNet has shown that critical reasoning can help the development of CNNs better than blind experimental trials. In addition, the idea of the Residual Block has been found in the actual visual cortex (in the ventral stream of the visual cortex, V4 can directly accept signals from the primary visual cortex), although ResNet was not designed according to this in the first place.With the introduction of these state-of-the-art neural models that are successful in these challenges, <cit.> conducted a comprehensive experimental study comparing them. Upon comparison, they showed that there is still room for improvement in fully connected layers, which show strong inefficiencies for smaller batches of images.§.§ Challenges and Chances for Fundamental Vision ProblemsResNet is not the end of the story. New models and techniques appear every day to push the limits of CNNs further. For example, <cit.> took a step further and put Residual Blocks inside Residual Blocks. <cit.> attempted to decrease the depth of the network by increasing the width. However, incremental works of this kind are not in the scope of this paper. We would like to end the story of Convolutional Neural Networks with some of the current challenges of fundamental vision problems that may not be solvable naively by investigation of machine learning techniques. §.§.§ Network Property and Vision Blindness SpotConvolutional Neural Networks have reached unprecedented accuracy in object detection. However, they may still be far from industry-reliable application due to some intriguing properties found by <cit.>. <cit.> showed that they could force a deep learning model to misclassify an image simply by adding perturbations to that image. More importantly, these perturbations may not even be observable to the naked human eye. In other words, two objects that look almost the same to a human may be recognized as different objects by a well-trained neural network (for example, AlexNet). They also showed that this property is more likely a modeling problem, in contrast to problems raised by insufficient training. On the other hand, <cit.> showed that they could generate patterns that convey almost no information to humans yet are recognized as objects by neural networks with high confidence (sometimes more than 99%). Since neural networks are typically forced to make a prediction, it is not surprising to see a network classify a meaningless pattern into something; however, this high confidence may indicate fundamental differences between how neural networks and humans learn to know this world. Figure <ref> shows some examples from the aforementioned two works. By construction, one can show that neural networks may misclassify an object, which should be easily recognized by a human, as something unusual. On the other hand, a neural network may also classify some weird patterns, which are not believed to be objects by humans, as something we are familiar with. Both of these properties may restrict the usage of deep learning in real-world applications where a reliable prediction is necessary.
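To convey the flavor of such constructions with a minimal example, the toy sketch below perturbs the input of a linear classifier along the sign of a score gradient. It is a deliberate simplification — the cited works optimize perturbations against deep networks with more careful, box-constrained procedures, and the step size here is an arbitrary choice — but it shows how a numerically tiny change can move a decision:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(10, 784))   # a toy linear "network" with 10 classes
x = rng.normal(size=784)         # a toy flattened input image
true_label = 3

def scores(v):
    return W @ v

# The gradient of the true-class score with respect to x is W[true_label];
# stepping against its sign decreases that score as fast as possible
# under a max-norm budget.
epsilon = 0.05                   # perturbation budget (arbitrary)
x_adv = x - epsilon * np.sign(W[true_label])

print(np.argmax(scores(x)), np.argmax(scores(x_adv)))  # the label may flip
print(np.max(np.abs(x_adv - x)))                       # yet each pixel moves by only 0.05
```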
Even without these examples, one may also realize that reliable prediction with neural networks could be an issue due to a fundamental property of a matrix: the existence of a null space. As long as the perturbation happens within the null space of a matrix, one may be able to alter an image dramatically while the neural network still makes the same misclassification with high confidence. The null space works like a blind spot for a matrix, and changes within the null space are never sensible to the corresponding matrix. This blind spot should not discourage the promising future of neural networks. On the contrary, it makes the convolutional neural network resemble the human vision system on a deeper level. In the human vision system, blind spots <cit.> also exist <cit.>. Interesting work may be seen linking the flaws of the human vision system to the defects of neural networks, and helping to overcome these defects in the future. §.§.§ Human Labeling PreferenceFinally, we present some of the misclassified images of ResNet in the ImageNet Challenge, hoping that some of these examples may inspire new methodologies for fundamental vision problems. Figure <ref> shows some images misclassified by ResNet when applied to the ImageNet Challenge. These labels, provided by human effort, are very unexpected even to many other humans. Therefore, the 3.6% error rate of ResNet (a typical human predicts with an error rate of 5%-10%) is probably hitting the limit, since the labeling preference of an annotator is harder to predict than the actual labels. For example, Figures <ref> (a), (b), (h) are labeled as a tiny part of the image, while there is more important content expressed by the image. On the other hand, Figures <ref> (d), (e) are annotated as the background of the image while the image is obviously centered on another object. To further improve on the performance ResNet has reached, one direction might be to model the annotators' labeling preference. One assumption could be that annotators prefer labels that make an image distinguishable. Some established work on modeling human factors <cit.> could be helpful. However, the more important question is whether it is worth optimizing the model to increase the testing results on the ImageNet dataset, since the remaining misclassifications may not be a result of the incompetency of the model, but of problems with the annotations. The introduction of other data sets, like COCO <cit.>, Flickr <cit.>, and VisualGenome <cit.>, may open a new era of vision problems with more competitive challenges. However, the fundamental problems and experiences that this section introduces should never be forgotten. § TIME SERIES DATA AND RECURRENT NETWORKS In this section, we start to discuss a new family of deep learning models that has attracted much attention, especially for tasks on time series, or sequential, data. The Recurrent Neural Network (RNN) is a class of neural networks whose connections between units form a directed cycle; this nature grants it the ability to work with temporal data. RNNs have also been discussed in literature like <cit.> and <cit.>. In this paper, we continue to offer complementary views to other surveys, with an emphasis on the evolutionary history of the milestone models, and aim to provide insights into the future direction of coming models.
§.§ Recurrent Neural Network: Jordan Network and Elman NetworkAs we have discussed previously, the Hopfield Network is widely recognized as a recurrent neural network, although its formalization is distinctly different from how recurrent neural networks are defined nowadays. Therefore, although other literature tends to begin the discussion of RNNs with the Hopfield Network, we will not treat it as a member of the RNN family, to avoid unnecessary confusion. The modern definition of “recurrent” was initially introduced by <cit.> as:If a network has one or more cycles, that is, if it is possible to follow a path from a unit back to itself, then the network is referred to as recurrent. A nonrecurrent network has no cycles. The model in <cit.> is later referred to as the Jordan Network. For a simple neural network with one hidden layer, with input denoted as X, weights of the hidden layer denoted as W_h, weights of the output layer denoted as W_y, weights of the recurrent computation denoted as W_r, hidden representation denoted as h and output denoted as y, the Jordan Network can be formulated ash^t = σ(W_hX+W_ry^t-1) y= σ(W_yh^t) A few years later, another RNN was introduced by <cit.>, who formalized the recurrent structure slightly differently. His network is now known as the Elman Network and is formalized as follows:h^t = σ(W_hX+W_rh^t-1) y= σ(W_yh^t)The only difference is whether the information of the previous time step is provided by the previous output or by the previous hidden layer. This difference is further illustrated in Figure <ref>, mainly to respect the historical contribution of these works. One may notice that there is no fundamental difference between these two structures, since y^t=W_yh^t; the only difference therefore lies in the choice of W_r. (Originally, Elman only introduced his network with W_r=𝐈, but more general cases can be derived from there.)Nevertheless, the step from the Jordan Network to the Elman Network is still remarkable, as it introduces the possibility of passing information from hidden layers, which significantly improves the flexibility of structure design in later work. §.§.§ Backpropagation through TimeThe recurrent structure makes traditional backpropagation infeasible because, with the recurrent structure, there is no end point where the backpropagation can stop. Intuitively, one solution is to unfold the recurrent structure and expand it as a feedforward neural network with a certain number of time steps, and then apply traditional backpropagation to this unfolded neural network. This solution is known as Backpropagation through Time (BPTT), independently invented by several researchers including <cit.>. However, as recurrent neural networks usually have more complex cost surfaces, naive backpropagation may not work well. Later in this paper, we will see that the recurrent structure introduces some critical problems, for example the vanishing gradient problem, which makes optimization for RNNs a great challenge for the community.
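Unfolding makes these definitions concrete. The following toy NumPy sketch runs an Elman-style forward pass over a short sequence (tanh stands in for the nonlinearity σ, and the weights are random stand-ins for parameters that BPTT would learn):

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_out, T = 4, 8, 3, 5

W_h = rng.normal(scale=0.5, size=(n_hidden, n_in))      # input -> hidden
W_r = rng.normal(scale=0.5, size=(n_hidden, n_hidden))  # hidden -> hidden (recurrent)
W_y = rng.normal(scale=0.5, size=(n_out, n_hidden))     # hidden -> output

xs = rng.normal(size=(T, n_in))   # a length-T input sequence
h = np.zeros(n_hidden)            # initial hidden state

for t in range(T):
    # Elman update: h^t = sigma(W_h x^t + W_r h^{t-1})
    h = np.tanh(W_h @ xs[t] + W_r @ h)
    y = np.tanh(W_y @ h)          # y^t = sigma(W_y h^t)
    print(t, y.round(2))

# The Jordan variant would instead feed the output back:
# h = tanh(W_h x + W_r' y), with W_r' of shape (n_hidden, n_out).
```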
§.§ Bidirectional Recurrent Neural NetworkIf we unfold an RNN, we get the structure of a feedforward neural network with infinite depth. Therefore, we can build a conceptual connection between RNNs and feedforward networks with infinite layers. Since, throughout neural network history, bidirectional neural networks have played important roles (like the Hopfield Network, RBM, and DBM), a follow-up question is what recurrent structure corresponds to the infinite-layer version of bidirectional models. The answer is the Bidirectional Recurrent Neural Network. The Bidirectional Recurrent Neural Network (BRNN) was invented by <cit.> with the goal of introducing a structure that unfolds to a bidirectional neural network. Therefore, when it is applied to time series data, not only can information be passed following the natural temporal sequence, but later information can also reversely provide knowledge to previous time steps. Figure <ref> shows the unfolded structure of a BRNN. Hidden layer 1 is unfolded in the standard way of an RNN. Hidden layer 2 is unfolded to simulate the reverse connection. Transparency (in Figure <ref>) is applied to emphasize that unfolding an RNN is only a concept used for illustration purposes. The actual model handles data from different time steps with the same single model. The BRNN is formulated as follows: h_1^t= σ(W_h1X+W_r1h_1^t-1) h_2^t= σ(W_h2X+W_r2h_2^t+1) y= σ(W_y1h_1^t + W_y2h_2^t)where the subscripts 1 and 2 denote the variables associated with hidden layers 1 and 2, respectively. With the introduction of “recurrent” connections back from the future, Backpropagation through Time is no longer directly feasible. The solution is to treat this model as a combination of two RNNs, a standard one and a reversed one, and then apply BPTT to each of them. Weights are updated simultaneously once the two gradients are computed.§.§ Long Short-Term MemoryAnother breakthrough in the RNN family was introduced in the same year as the BRNN. <cit.> introduced a new neuron for the RNN family, named Long Short-Term Memory (LSTM). When it was invented, the term “LSTM” referred to the algorithm designed to overcome the vanishing gradient problem with the help of a specially designed memory cell. Nowadays, “LSTM” is widely used to denote any recurrent network with that memory cell, which is now referred to as an LSTM cell. LSTM was introduced to overcome the problem that RNNs cannot capture long-term dependencies <cit.>. Overcoming this issue requires the specially designed memory cell, as illustrated in Figure <ref> (a). LSTM consists of several critical components. * states: values that are used to offer the information for output.⋆ input data: it is denoted as x.⋆ hidden state: values of the previous hidden layer. This is the same as in a traditional RNN. It is denoted as h.⋆ input state: values that are a (linear) combination of the hidden state and the input of the current time step. It is denoted as i, and we have: i^t = σ (W_ixx^t+W_ihh^t-1) ⋆ internal state: values that serve as the “memory”. It is denoted as m. * gates: values that are used to decide the information flow between states.⋆ input gate: it decides whether the input state enters the internal state. It is denoted as g, and we have:g^t = σ (W_gii^t) ⋆ forget gate: it decides whether the internal state forgets the previous internal state. It is denoted as f, and we have:f^t = σ (W_fii^t) ⋆ output gate: it decides whether the internal state passes its value to the output and the hidden state of the next time step. It is denoted as o, and we have:o^t = σ (W_oii^t)Finally, considering how the gates decide the information flow between states, we have the last two equations to complete the formulation of LSTM:m^t =g^t⊙ i^t + f^t⊙ m^t-1 h^t =o^t⊙ m^twhere ⊙ denotes the element-wise product. Figure <ref> describes the details of how an LSTM cell works. Figure <ref> (b) shows how the input state is constructed, as described in Equation <ref>. Figure <ref> (c) shows how the input gate and forget gate are computed, as described in Equation <ref> and Equation <ref>.
Figure <ref> (d) shows how the output gate is computed, as described in Equation <ref>. Figure <ref> (e) shows how the internal state is updated, as described in Equation <ref>. Figure <ref> (f) shows how the output and hidden state are updated, as described in Equation <ref>. All the weights are parameters that need to be learned during training.
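Collecting the equations above, one LSTM step can be sketched in a few lines of NumPy. This is a hedged sketch that follows the notation of this section exactly; widely used implementations differ in details (they typically add bias terms and compute the gates directly from x^t and h^t-1 rather than from the input state i^t):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, m_prev, params):
    """One LSTM step in the notation of this section:
    input state i, gates g/f/o, internal state m, hidden state h."""
    W_ix, W_ih, W_gi, W_fi, W_oi = params
    i = sigmoid(W_ix @ x + W_ih @ h_prev)   # input state
    g = sigmoid(W_gi @ i)                   # input gate
    f = sigmoid(W_fi @ i)                   # forget gate
    o = sigmoid(W_oi @ i)                   # output gate
    m = g * i + f * m_prev                  # internal state ("memory")
    h = o * m                               # output / next hidden state
    return h, m

rng = np.random.default_rng(0)
n_in, n_cell = 4, 8
params = (rng.normal(size=(n_cell, n_in)),) + tuple(
    rng.normal(size=(n_cell, n_cell)) for _ in range(4))
h = m = np.zeros(n_cell)
for x in rng.normal(size=(5, n_in)):        # a length-5 toy sequence
    h, m = lstm_step(x, h, m, params)
print(h.round(2))
```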
Therefore, theoretically, LSTM can learn to memorize long-term dependencies when necessary and can learn to forget the past when necessary, making itself a powerful model. With this important theoretical guarantee, many works have attempted to improve LSTM. For example, <cit.> added a peephole connection that allows the gates to use information from the internal state. <cit.> introduced the Gated Recurrent Unit, known as GRU, which simplified LSTM by merging the internal state and hidden state into one state, and merging the forget gate and input gate into a single update gate. Integrating the LSTM cell into a bidirectional RNN is also an intuitive follow-up to look into <cit.>. Interestingly, despite the novel LSTM variants proposed now and then, <cit.> conducted a large-scale experiment investigating the performance of LSTMs and concluded that none of the variants improves upon the standard LSTM architecture significantly. Probably, the improvement of LSTM lies in a direction other than updating the structure inside a cell. Attention models seem to be a direction to go.§.§ Attention ModelsAttention models are loosely based on a bionic design that simulates the behavior of the human visual attention mechanism: when humans look at an image, we do not scan it bit by bit or stare at the whole image; rather, we focus on some major part of it and gradually build the context after capturing the gist. Attention mechanisms were first discussed by <cit.> and <cit.>. Attention models mostly refer to the models introduced in <cit.> for machine translation and soon applied to many different domains, like <cit.> for speech recognition and <cit.> for image caption generation. Attention models are mostly used for sequence output prediction. Instead of seeing the whole sequential data and making one single prediction (as in, for example, a language model), the model needs to make sequential predictions from the sequential input for tasks like machine translation or image caption generation. Therefore, the attention model is mostly used to answer the question of where to pay attention, based on previously predicted labels or hidden states. The output sequence may not have to be linked one-to-one to the input sequence, and the input data may not even be a sequence. Therefore, an encoder-decoder framework <cit.> is usually necessary. The encoder is used to encode the data into representations, and the decoder is used to make sequential predictions. The attention mechanism is used to locate a region of the representation for predicting the label at the current time step. Figure <ref> shows a basic attention model under an encoder-decoder network structure. The representation the encoder produces is all accessible to the attention model, and the attention model only selects some regions to pass onto the LSTM cell for further use in prediction making. Therefore, all the magic of attention models lies in how the attention module in Figure <ref> helps to localize the informative representations. To formalize how it works, we use r to denote the encoded representation (there is a total of M regions of representation) and h to denote the hidden states of the LSTM cell. Then, the attention module can generate the unscaled weights for the ith region of the encoded representation as:β_i^t = f(h^t-1, r, {α^t-1_j}_j=1^M)where α^t-1_j are the attention weights computed at the previous time step; at the current time step they are computed with a simple softmax function:α_i^t = exp(β_i^t)∑_j^Mexp(β_j^t)Therefore, we can further use the weights α to reweight the representation r for prediction. There are two ways for the representation to be reweighted: * Soft attention: The result is a simple weighted sum of the context vectors such that:r^t = ∑_j^Mα_j^tc_j * Hard attention: The model is forced to make a hard decision by localizing only one region, sampled following a multinoulli distribution.One problem with hard attention is that sampling from a multinoulli distribution is not differentiable, so gradient-based methods can hardly be applied directly. Variational methods <cit.> or policy-gradient-based methods <cit.> can be considered. §.§ Deep RNNs and the future of RNNsIn this last section on the evolutionary path of the RNN family, we visit some ideas that have not been fully explored. §.§.§ Deep Recurrent Neural NetworkAlthough recurrent neural networks suffer many of the issues of deep neural networks because of the recurrent connections, current RNNs are still not deep models in terms of representation learning compared to models in other families. <cit.> formalized the idea of constructing deep RNNs by extending current RNNs. Figure <ref> shows three different directions for constructing a deep recurrent neural network: increasing the number of layers of the input component (Figure <ref> (a)), the recurrent component (Figure <ref> (b)) and the output component (Figure <ref> (c)), respectively. §.§.§ The Future of RNNsRNNs have been improved in a variety of different ways, such as assembling the pieces together with a Conditional Random Field <cit.>, or with CNN components <cit.>. In addition, the convolution operation can be built directly into the LSTM, resulting in ConvLSTM <cit.>, and this ConvLSTM can in turn be connected with a variety of different components <cit.>.One of the most fundamental problems in training RNNs is the vanishing/exploding gradient problem, introduced in detail in <cit.>. The problem basically states that for traditional activation functions, the gradient is bounded. When gradients are computed by backpropagation following the chain rule, the error signal decreases exponentially within the time steps the BPTT can trace back, so long-term dependencies are lost. LSTM and ReLU are known to be good solutions to the vanishing/exploding gradient problem. However, these solutions bypass the problem with clever design instead of solving it fundamentally. While these methods work well in practice, the fundamental problem for a general RNN is still to be solved. <cit.> attempted some solutions, but there is still more to be done. § OPTIMIZATION OF NEURAL NETWORKSThe primary focus of this paper is deep learning models. However, optimization is an inevitable topic in the development history of deep learning models. In this section, we briefly revisit the major topics in the optimization of neural networks. During our introduction of the models, some algorithms have already been discussed along with the models.
Here, we will only discuss the remaining methods that have not been mentioned previously.§.§ Gradient Methods Despite the fact that neural networks have been developed for over fifty years, their optimization still relies heavily on gradient descent methods within the backpropagation algorithm. This paper does not intend to introduce classical backpropagation, the gradient descent method with its stochastic and batch versions, or simple techniques like the momentum method, but starts right after these topics. Therefore, the discussion of the following gradient methods starts from vanilla gradient descent:θ^t+1 = θ^t -η▽_θ^twhere ▽_θ is the gradient of the parameter θ and η is a hyperparameter, usually known as the learning rate. §.§.§ RpropRprop was introduced by <cit.>. It is a unique method, still worth studying today, as it does not fully utilize the information of the gradient but only considers its sign. In other words, it updates the parameters following:θ^t+1 = θ^t -η I(▽_θ^t>0) + η I(▽_θ^t<0)where I(·) stands for an indicator function. This unique formalization allows the gradient method to overcome some cost curvatures that may not be easily handled by today's dominant methods. This two-decade-old method may be worth further study these days. §.§.§ AdaGradAdaGrad was introduced by <cit.>. It follows the idea of an adaptive learning rate mechanism that assigns a higher learning rate to parameters that have been updated mildly and a lower learning rate to parameters that have been updated dramatically. The measure of the degree of update is the ℓ_2 norm of the historical gradients, S^t=||▽_θ^1, ▽_θ^2, ... ▽_θ^t||_2, so we have the update rule:θ^t+1 = θ^t -ηS^t+ϵ▽_θ^twhere ϵ is a small term to avoid dividing η by zero. AdaGrad has been shown to greatly improve the robustness of the traditional gradient method <cit.>. However, the problem is that as the ℓ_2 norm accumulates, the fraction of η over the ℓ_2 norm decays to a substantially small term. §.§.§ AdaDeltaAdaDelta is an extension of AdaGrad that aims to reduce the decay rate of the learning rate, proposed in <cit.>. Instead of accumulating the gradients of each time step as in AdaGrad, AdaDelta re-weights the previous accumulation before adding the current term onto it, resulting in:(S^t)^2 = β(S^t-1)^2+(1-β)(▽_θ^t)^2where β is the re-weighting coefficient. The update rule is then the same as in AdaGrad:θ^t+1 = θ^t -ηS^t+ϵ▽_θ^twhich is almost the same as another famous gradient variant named RMSprop[It seems this method never got published; all resources trace back to Hinton's slides at http://www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf]. §.§.§ AdamAdam stands for Adaptive Moment Estimation, proposed in <cit.>. Adam is like a combination of the momentum method and AdaGrad, but each component is re-weighted at time step t. Formally, at time step t, we have:Δ_θ^t = αΔ_θ^t-1+(1-α)▽_θ^t(S^t)^2 = β(S^t-1)^2+(1-β)(▽_θ^t)^2 θ^t+1 = θ^t - ηS^t+ϵΔ_θ^t All these modern gradient variants have been published with promising claims of improved convergence over previous methods. Empirically, these methods do seem helpful; however, in many cases, a good choice among them seems to be of only limited benefit.
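For concreteness, the Adam-style update above can be condensed into a few lines of NumPy. This is a hedged sketch in the notation of this section; the published algorithm additionally applies bias-correction factors to both moving averages, which are omitted here for brevity:

```python
import numpy as np

def adam_step(theta, grad, delta, S2, eta=1e-3, alpha=0.9, beta=0.999, eps=1e-8):
    """One Adam-style update.
    delta: running (momentum-like) average of gradients, Delta_theta^t.
    S2:    running average of squared gradients, (S^t)^2."""
    delta = alpha * delta + (1 - alpha) * grad
    S2 = beta * S2 + (1 - beta) * grad**2
    theta = theta - eta / (np.sqrt(S2) + eps) * delta
    return theta, delta, S2

# Minimize f(theta) = ||theta||^2 as a toy example; grad f = 2*theta.
theta = np.array([1.0, -2.0])
delta, S2 = np.zeros_like(theta), np.zeros_like(theta)
for _ in range(2000):
    theta, delta, S2 = adam_step(theta, 2 * theta, delta, S2)
print(theta.round(4))  # approaches [0, 0]
```

Setting alpha to zero recovers an AdaDelta/RMSprop-style update, and replacing S2 with an unweighted accumulation recovers AdaGrad, which makes the family relationship among these variants explicit.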
§.§ DropoutDropout was introduced in <cit.>. The technique soon became influential, not only because of its good performance but also because of its simplicity of implementation. The idea is very simple: randomly drop out some of the units while training. More formally: on each training case, each hidden unit is randomly omitted from the network with a probability of p.As suggested by <cit.>, Dropout can be seen as an efficient way to perform model averaging across a large number of different neural networks, where overfitting can be avoided at much less computational cost. Because of the actual performance gain it introduces, Dropout became very popular upon its introduction, and much work has attempted to understand its mechanism from different perspectives, including <cit.>. It has also been applied to train other models, like SVMs <cit.>.§.§ Batch Normalization and Layer NormalizationBatch Normalization, introduced by <cit.>, is another breakthrough in the optimization of deep neural networks. It addresses the problem named internal covariate shift. Intuitively, the problem can be understood in the following two steps: 1) a learned function is barely useful if its input changes (in statistics, the inputs of a function are sometimes called covariates); 2) each layer is a function, and the changes of the parameters of the layers below change the input of the current layer. This change can be dramatic, as it may shift the distribution of the inputs. <cit.> proposed Batch Normalization to solve this issue, formally following the steps:μ_B = 1n∑_i=1^nx_iσ_B^2 = 1n∑_i=1^n(x_i-μ_B)^2 x̂_̂î = x_i-μ_Bσ_B+ϵy_i = σ_Lx̂_̂î + μ_Lwhere μ_B and σ_B denote the mean and standard deviation of the batch, and μ_L and σ_L are two parameters learned by the algorithm to rescale and shift the output. x_i and y_i are the inputs and outputs of that function, respectively. These steps are performed for every batch during training. Batch Normalization turned out to work very well in training empirically and soon became popular. As a follow-up, <cit.> proposed the technique of Layer Normalization, in which they “transpose” batch normalization into layer normalization by computing the mean and variance used for normalization from all of the summed inputs to the neurons in a layer on a single training case. Therefore, this technique has the natural advantage of being straightforwardly applicable to recurrent neural networks. However, it seems that this “transposed batch normalization” cannot be implemented as simply as Batch Normalization. Therefore, it has not become as influential as Batch Normalization.
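These steps translate directly into code. Below is a hedged NumPy sketch of the training-time computation for one batch of activations (a practical implementation, following the published algorithm, also maintains running statistics for use at inference time and places ϵ under the square root, as done here):

```python
import numpy as np

def batch_norm_train(x, gamma, beta, eps=1e-5):
    """Normalize a batch of activations x of shape (n, d) per feature,
    then rescale and shift with the learned parameters gamma and beta."""
    mu = x.mean(axis=0)                     # batch mean, one per feature
    var = x.var(axis=0)                     # batch variance, one per feature
    x_hat = (x - mu) / np.sqrt(var + eps)   # normalized activations
    return gamma * x_hat + beta             # learned rescale and shift

x = np.random.default_rng(0).normal(loc=5.0, scale=3.0, size=(32, 4))
y = batch_norm_train(x, gamma=np.ones(4), beta=np.zeros(4))
print(y.mean(axis=0).round(3), y.std(axis=0).round(3))  # ~0 and ~1
```

Layer Normalization would instead take the mean and variance along axis=1, i.e., across the features of each single training case, which is why it applies to recurrent networks without batch-dependent statistics.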
§.§ Optimization for “Optimal” Model ArchitectureIn this last section on optimization techniques for neural networks, we revisit some old methods that attempted to learn the “optimal” model architecture. Many of these methods are known as constructive network approaches. Most of them were proposed decades ago and did not raise enough impact back then. Nowadays, with more powerful computational resources, people are starting to consider these methods again. Two remarks need to be made before we proceed: 1) Obviously, most of these methods can be traced back to counterparts in the non-parametric machine learning field, but because most of them did not perform well enough to raise an impact, focusing the discussion on an evolutionary path may mislead readers. Instead, we only list these methods for readers seeking inspiration. 2) Many of these methods are not exclusively optimization techniques, because they are usually proposed together with a particularly designed architecture. Technically speaking, these methods should be distributed to previous sections according to the models they are associated with. However, because these methods can barely inspire modern modeling research, but may have a chance to inspire modern optimization research, we list them in this section. §.§.§ Cascade-Correlation LearningOne of the earliest and most important works on this topic was proposed by <cit.>. They introduced a model, together with its corresponding algorithm, named Cascade-Correlation Learning. The idea is that the algorithm starts with a minimal network and builds up towards a bigger network. Whenever another hidden unit is added, the parameters of the previous hidden units are fixed, and the algorithm only searches for optimal parameters of the newly added hidden unit. Interestingly, the unique architecture of Cascade-Correlation Learning allows the network to grow deeper and wider at the same time, because every newly added hidden unit takes the data together with the outputs of previously added units as input. Two important questions for this algorithm are: 1) when to fix the parameters of the current hidden units and proceed to add and tune a newly added one, and 2) when to terminate the entire algorithm. These two questions are answered in a similar manner: the algorithm adds a new hidden unit when there are no significant changes in the existing architecture, and terminates when the overall performance is satisfying. This training process may introduce problems of overfitting, which might account for the fact that this method is not seen much in modern deep learning research. §.§.§ Tiling Algorithm<cit.> presented the Tiling Algorithm, which learns the parameters, the number of layers, and the number of hidden units in each layer simultaneously for feedforward neural networks on Boolean functions. Later, this algorithm was extended to the multi-class version by <cit.>. The algorithm works in such a way that at every layer it tries to build a layer of hidden units that can cluster the data into different clusters, such that each cluster contains only one label. The algorithm keeps increasing the number of hidden units until such a clustering pattern is achieved, and then proceeds to add another layer. <cit.> also offered a proof of theoretical guarantees for the Tiling Algorithm. Basically, the theorem says that the Tiling Algorithm can greedily improve the performance of a neural network. §.§.§ Upstart Algorithm<cit.> proposed the Upstart Algorithm. Long story short, this algorithm is simply a neural network version of the standard decision tree <cit.>, where each tree node is replaced with a linear perceptron. Therefore, the tree is seen as a neural network because it uses the core component of neural networks as a tree node. As a result, the standard way of building a tree is advertised as building a neural network automatically. Similarly, <cit.> proposed a boosting algorithm where the weak classifiers are replaced with neurons. §.§.§ Evolutionary AlgorithmThe Evolutionary Algorithm is a family of algorithms that use mechanisms inspired by biological evolution to search a parameter space for the optimal solution. Some prominent examples in this family are the genetic algorithm <cit.>, which simulates natural selection, and the ant colony optimization algorithm <cit.>, which simulates the cooperation of an ant colony in exploring its surroundings.
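As a toy illustration of the genetic-algorithm idea in this context, the sketch below evolves a fixed-length vector of hidden-layer sizes against an arbitrary fitness function. Everything here — the encoding, the fitness, the mutation scheme — is a hypothetical stand-in for the encoding schemes discussed next, not a reproduction of any published method:

```python
import random

random.seed(0)

def fitness(arch):
    # Hypothetical stand-in: reward a moderate total size; in practice this
    # would be the validation accuracy of a network built from `arch`.
    return -abs(sum(arch) - 100) - 0.1 * len([u for u in arch if u < 4])

def mutate(arch):
    i = random.randrange(len(arch))
    child = list(arch)
    child[i] = max(1, child[i] + random.choice([-8, -4, 4, 8]))
    return child

# Each individual encodes three hidden-layer sizes as a vector.
population = [[random.randint(1, 64) for _ in range(3)] for _ in range(20)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]                      # selection
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(10)]    # mutation
best = max(population, key=fitness)
print(best, fitness(best))
```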
<cit.> offered an extensive survey of the usage of evolutionary algorithms for the optimization of neural networks, in which Yao introduced several encoding schemes that enable the neural network architecture to be learned with evolutionary algorithms. The encoding schemes basically transfer the network architecture into vectors, so that a standard algorithm can take it as input and optimize it. So far, we have discussed some representative algorithms aimed at learning the network architecture automatically. Most of these algorithms eventually faded out of modern deep learning research; we conjecture two main reasons for this outcome: 1) most of these algorithms tend to overfit the data, and 2) most of these algorithms follow a greedy search paradigm, which is unlikely to find the optimal architecture. However, with the rapid development of machine learning methods and computational resources in the last decade, we hope the constructive network methods listed here can still inspire readers to make substantial contributions to modern deep learning research.§ CONCLUSIONIn this paper, we have revisited the evolutionary paths of today's deep learning models. We revisited the paths of three major families of deep learning models — the deep generative model family, the convolutional neural network family, and the recurrent neural network family — as well as some topics in optimization techniques.This paper serves two goals: 1) First, it documents the major milestones in the history of science that have impacted the current development of deep learning. These milestones are not limited to developments in computer science fields. 2) More importantly, by revisiting the evolutionary paths of the major milestones, this paper should be able to suggest to readers how these remarkable works were developed among thousands of other contemporaneous publications. Here we briefly summarize three directions that many of these milestones pursued: * Occam's razor: While it seems that part of the community tends to favor more complex models, layering one architecture onto another and hoping backpropagation can find the optimal parameters, history says that masterminds tend to think simple: Dropout is widely recognized not only because of its performance, but more because of its simplicity in implementation and intuitive (tentative) reasoning. From the Hopfield Network to the Restricted Boltzmann Machine, models were simplified along the iterations until the RBM was ready to be piled up. * Be ambitious: If a model is proposed with substantially more parameters than contemporaneous ones, it must solve a problem that no others can solve nicely in order to be remarkable. LSTM is much more complex than a traditional RNN, but it bypasses the vanishing gradient problem nicely. The Deep Belief Network is famous not because it was the first to come up with the idea of stacking one RBM onto another, but because its authors came up with an algorithm that allows deep architectures to be trained effectively. * Widely read: Many models are inspired by domain knowledge outside of the machine learning or statistics fields. The human visual cortex has greatly inspired the development of convolutional neural networks. Even the recently popular Residual Networks can find a corresponding mechanism in the human visual cortex. The Generative Adversarial Network can also find connections with game theory, which was developed over fifty years ago.We hope these directions can help some readers to have more impact on the current community.
Readers should be able to summarize more such directions from our revisit of these milestones.§ ACKNOWLEDGEMENTSThanks to the demo from http://beej.us/blog/data/convolution-image-processing/ for a quick generation of the examples in Figure <ref>.Thanks to Bojian Han at Carnegie Mellon University for the examples in Figure <ref>.Thanks to the blog at http://sebastianruder.com/optimizing-gradient-descent/index.html for a summary of the gradient methods in Section <ref>. Thanks to Yutong Zheng and Xupeng Tong at Carnegie Mellon University for suggesting some relevant content.
http://arxiv.org/abs/1702.07800v4
{ "authors": [ "Haohan Wang", "Bhiksha Raj" ], "categories": [ "cs.LG", "cs.NE", "stat.ML" ], "primary_category": "cs.LG", "published": "20170224233008", "title": "On the Origin of Deep Learning" }
moliva@iim.unam.mxchumin@unam.mxInstituto de Investigaciones en Materiales, Universidad Nacional Autónoma de México, Apartado Postal 70-360, 04510 Mexico City, Mexico. An analytical study of low-energy electronic excited states in uniformly strained graphene is carried out up to second-order in the strain tensor. We report a new effective Dirac Hamiltonian with an anisotropic Fermi velocity tensor, which reveals the trigonal symmetry of graphene, absent in low-energy theories to first-order in the strain tensor. In particular, we demonstrate the dependence of the Dirac-cone elliptical deformation on the stretching direction with respect to the graphene lattice orientation. We further analytically calculate the optical conductivity tensor of strained graphene and its transmittance for linearly polarized light at normal incidence. Finally, the obtained analytical expression for the Dirac point shift allows a better determination and understanding of pseudomagnetic fields induced by nonuniform strains.Low-energy theory for strained graphene: an approach up to second-order in the strain tensor Chumin Wang============================================================================================§ INTRODUCTION Given the striking elastic response of graphene <cit.>, which can withstand a reversible stretching of up to 25 %, strain engineering has been widely used to improve and/or to tune its electronic, thermal, chemical and optical properties <cit.>. For instance, theoretical predictions have been made of a band-gap opening by large uniaxial strains from both the tight-binding approach <cit.> and density functional theory <cit.>, whenever the strain produces a Hamiltonian modification beyond the inequalities obtained by Hasegawa, et al. <cit.> The emergence of a pseudomagnetic field caused by a nonuniform strain is possibly the most interesting strain-induced electronic effect, due to the possibility of observing a pseudoquantum Hall effect under zero external magnetic field <cit.>. Nowadays, the transport signatures of such fictitious fields are actively investigated <cit.>. Moreover, from the viewpoint of basic research, strained graphene opens an opportunity to explore mixed Dirac–Schrödinger Hamiltonians <cit.>, fractal spectra <cit.>, superconducting states <cit.>, magnetic phase transitions <cit.>, and a metal-insulator transition <cit.>, among other exotic behaviours. The concept of strain engineering has also been extended to the optical context <cit.>. The optical properties of graphene are ultimately provided by its electronic structure, which can be modified by strain. For example, pristine graphene presents a transparency defined by fundamental constants, around 97.7%, over a broad band of frequencies <cit.>. This remarkable feature is essentially a consequence of its unusual low-energy electronic band structure around the Dirac points. Under uniform strain, such conical bands are deformed, which produces anisotropy in the electronic dynamics <cit.>. Accordingly, this effect gives rise to an anisotropic optical conductivity of strained graphene <cit.> and, therefore, a modulation of its transmittance as a function of the polarization of the incident light, as experimentally observed <cit.>. From a theoretical viewpoint, this optoelectronic behaviour of strained graphene has been quantified by continuum approaches up to first-order in the strain tensor <cit.>.
However, nowadays there are novel methods for applying uniaxial strains larger than 10% in a nondestructive and controlled manner <cit.>. Hence, a low-energy continuum theory for the electronic and optical properties of strained graphene, up to second-order in the strain tensor, seems to be needed <cit.>. In this paper, we derive the effective Dirac Hamiltonian for graphene under uniform strain up to second-order in the strain tensor. For this purpose, we start from a nearest-neighbor tight-binding model and carry out an expansion around the real Dirac point. Unlike previous approaches to first-order in the strain tensor, we show how the obtained low-energy Hamiltonian reveals the trigonal symmetry of graphene. We also calculate the optical conductivity of strained graphene and characterize its transmittance for a uniaxial strain up to second-order in the stretching magnitude. These findings describe more accurately the electronic and optical properties of strained graphene and, hence, can potentially be utilized towards novel optical characterizations of the strain state of graphene. § TIGHT-BINDING MODEL AS STARTING POINT Strain effects on the electronic properties of graphene are usually captured by using a nearest-neighbor tight-binding model <cit.>. Within this approach, one can demonstrate that the Hamiltonian in momentum space for graphene under a uniform strain is given by <cit.>H(k)=-∑_n=1^3 t_n( [ 0 e^-ik·δ_n^';e^ik·δ_n^' 0 ]),where the strained nearest-neighbor vectors are obtained by δ_n^'=(I̅+ϵ̅)·δ_n, I̅ being the (2×2) identity matrix and ϵ̅ the rank-two strain tensor, whose components are independent of the position. Here, we choose the unstrained nearest-neighbor vectors asδ_1=a_0/2(√(3),1),δ_2=a_0/2(-√(3),1), δ_3=a_0(0,-1),where a_0 is the intercarbon distance of pristine graphene. Thus, the x (y) axis of the Cartesian coordinate system is along the zigzag (armchair) direction of the honeycomb lattice. Owing to the changes in the intercarbon distances, the nearest-neighbor hopping parameters are modified. Here we consider this effect by means of the commonly used modelt_n=t_0e^-β(|δ_n^'|/a_0-1),where t_0=2.7 eV is the hopping parameter for pristine graphene and β≈3. From equation (<ref>) it follows that the dispersion relation near the Fermi energy of graphene under uniform strain is given by two bands,E(k)=±| t_1e^ik·δ_1^' + t_2e^ik·δ_2^' + t_3e^ik·δ_3^'|,which remain gapless as long as the triangular inequalities, | t_1-t_2|≤| t_3|≤| t_1+t_2|, are satisfied <cit.>. Evaluating equation (<ref>) for uniaxial strains, V. Pereira, et al., found that the minimum uniaxial deformation leading to a gap opening is about 23% <cit.>. This result is confirmed by ab initio calculations, which find that this gap in strained graphene requires deformations larger than 20% <cit.>. Therefore, the use of an effective Dirac Hamiltonian obtained from equation (<ref>) is justified for uniform deformations up to the order of 10%.For this purpose, it is important to take into account a crucial detail: the strain-induced shift of the Dirac points in momentum space. In the absence of deformation, the Dirac points K_D (determined by the condition E(K_D)=0) coincide with the corners of the first Brillouin zone. Then, to obtain the effective Dirac Hamiltonian in this case, one simply expands the Hamiltonian (<ref>) around such corners, e.g., K_0=(4π/3√(3)a_0,0). However, in the presence of deformations, the Dirac points do not coincide even with the corners of the strained first Brillouin zone <cit.>.
Thus, to obtain the effective Dirac Hamiltonian, one should no longer expand the Hamiltonian (<ref>) around K_0. As demonstrated in Ref. <cit.>, such an expansion around K_0 yields an incorrect derivation of the anisotropic Fermi velocity. The appropriate procedure is to first find the new positions of the Dirac points and then carry out the expansion around them <cit.>. § EFFECTIVE DIRAC HAMILTONIANAs a first step, we determine the new positions of the Dirac points from the condition E(K_D)=0, up to second order in the strain tensor, which is the leading order used throughout the rest of the paper. Essentially, we calculate the strain-induced shift of the Dirac point K_D from the corner K_0 of the first Brillouin zone by using the equation E(K_D)=0, which leads to∑_n=1^3t_ne^iK_D·δ_n^'=∑_n=1^3t_ne^iK_D· (I̅+ϵ̅)·δ_n=∑_n=1^3t_ne^iG·δ_n=0,where G≡(I̅+ϵ̅)·K_D is the effective Dirac point. As demonstrated in Appendix <ref>, G can be expressed asG=K_0 + A^(1) + A^(2) + 𝒪(ϵ̅^3),whereA_x^(1) + i A_y^(1)=β/2a_0(ϵ_xx-ϵ_yy - 2iϵ_xy),andA_x^(2) + i A_y^(2)=β(4β+1)/16a_0(ϵ_xx-ϵ_yy + 2iϵ_xy)^2.Notice that the correction up to first order, A^(1), coincides with the value previously reported <cit.>, which is interpreted as a gauge field for nonuniform deformations <cit.>. On the other hand, the expression (<ref>) for the second-order correction A^(2) is one of the main contributions of this work. To demonstrate its relevance, we numerically calculate the positions of G for two deformations and compare them with the analytical results given by (<ref>-<ref>). As illustrated in Fig. <ref>(a) for a uniaxial strain along the zigzag direction, the values of G_x estimated up to first order in the strain magnitude ϵ (blue solid circles) clearly differ from the exact numerical values of G_x (gray line) as ϵ increases, while the values of G_x estimated up to second order (red open circles) show a significantly better approximation. The case of a shear strain is an even more illustrative example of the relevance of A^(2). According to the first-order correction, G_x does not change under a shear strain, which is at variance with the exact numerical result displayed in Fig. <ref>(b). In contrast, the values of G_x estimated up to second order present a good agreement with the numerical values over the studied range of ϵ. Beyond the present work, the second-order correction A^(2) for nonuniform strain could be relevant for a more complete analysis of the strain-induced pseudomagnetic fields. For example, in the presence of a deformation field given by u=(u(y),0), for which ϵ̅_xx=ϵ̅_yy=0 and ϵ̅_xy=∂_yu(y)/2, the pseudomagnetic field B_ps derived from the standard expression B_ps=∇×A^(1) results equal to zero. However, if A^(2) is taken into account by means of the possible generalized expression B_ps=∇×A^(1) + ∇×A^(2), one can demonstrate that the resulting pseudomagnetic field B_ps is not zero. The implications of this issue will be discussed in detail in an upcoming work.
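The comparison just described is straightforward to reproduce numerically. The hedged Python sketch below (with illustrative values a_0=1, t_0=1, β=3, and a uniaxial zigzag strain with Poisson ratio ν=0.165) solves the condition ∑_n t_n e^{iG·δ_n}=0 for G by root finding and compares it with K_0+A^(1)+A^(2):

```python
import numpy as np
from scipy.optimize import fsolve

a0, t0, beta, nu = 1.0, 1.0, 3.0, 0.165
deltas = np.array([[np.sqrt(3)/2, 0.5], [-np.sqrt(3)/2, 0.5], [0.0, -1.0]]) * a0
K0 = np.array([4*np.pi/(3*np.sqrt(3)*a0), 0.0])

def G_numeric(eps_tensor):
    d_strained = deltas @ (np.eye(2) + eps_tensor).T   # strained neighbor vectors
    t = t0 * np.exp(-beta*(np.linalg.norm(d_strained, axis=1)/a0 - 1.0))
    def equations(G):
        s = np.sum(t * np.exp(1j * (deltas @ G)))      # sum_n t_n exp(i G . delta_n)
        return [s.real, s.imag]
    return fsolve(equations, K0)

def G_analytic(e):
    exx, eyy, exy = e[0, 0], e[1, 1], e[0, 1]
    A1 = beta/(2*a0) * (exx - eyy - 2j*exy)            # A_x^(1) + i A_y^(1)
    A2 = beta*(4*beta + 1)/(16*a0) * (exx - eyy + 2j*exy)**2
    return K0 + np.array([(A1 + A2).real, (A1 + A2).imag])

for strain in (0.05, 0.10):
    eps = strain * np.array([[1.0, 0.0], [0.0, -nu]])  # uniaxial, zigzag direction
    print(strain, G_numeric(eps), G_analytic(eps))
```

The two G values agree to second order in the strain magnitude, while dropping A^(2) from G_analytic reproduces the growing first-order discrepancy discussed above.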
Knowing the position of the Dirac point K_D through equation (<ref>), one can now proceed to the expansion of the Hamiltonian (<ref>) around K_D, by means of k=K_D+q, to obtain the effective Dirac Hamiltonian. Following this approach up to second order in the strain tensor ϵ̅, the effective Dirac Hamiltonian can be written as (see Appendix <ref>)H=ħ v_0τ·(I̅ + ϵ̅ - βϵ̅ -βϵ̅^2 - βκ̅_1 + β^2κ̅_2)·q,where v_0=3t_0a_0/2ħ is the Fermi velocity for pristine graphene, τ=(τ_x,τ_y) is a vector of (2×2) Pauli matrices describing the pseudospin degree of freedom,κ̅_1=1/8([ (ϵ_xx-ϵ_yy)^2 -2ϵ_xy(ϵ_xx-ϵ_yy); -2ϵ_xy(ϵ_xx-ϵ_yy) 4ϵ_xy^2 ]),andκ̅_2=1/4([ ϵ_xx^2-ϵ_yy^2+2ϵ_xxϵ_yy+2ϵ_xy^2 4ϵ_xxϵ_xy; 4ϵ_xxϵ_xy 2(ϵ_yy^2-ϵ_xy^2) ]). It is important to emphasize that the explicit form of equations (<ref>) and (<ref>) is a consequence of the chosen Cartesian coordinate system xy. For an arbitrary coordinate system x̃ỹ, rotated by an angle ϑ with respect to the system xy, the new expressions for κ̅_1 and κ̅_2 should be found by means of the transformation rules of a second-order Cartesian tensor <cit.>.From equation (<ref>) one can recognize the Fermi velocity tensor asv̅=v_0(I̅ + ϵ̅ - βϵ̅ -βϵ̅^2 - βκ̅_1 + β^2κ̅_2),which generalizes the expression, v_0(I̅ + ϵ̅ - βϵ̅), for the Fermi velocity tensor up to first-order in the strain tensor reported in Refs. [Oliva13,Volovik14a].As a consistency test, let us consider an isotropic uniform strain of the graphene lattice, simply given by ϵ̅=ϵI̅. Under this deformation, the new intercarbon distance a is rescaled as a=a_0(1+ϵ), whereas the new hopping parameter t, obtained by expanding equation (<ref>) up to second order in strain, is t=t_0(1-βϵ+β^2ϵ^2/2). Therefore, the new Fermi velocity, v=3t a/2ħ, obtained straight away from the nearest-neighbor tight-binding Hamiltonian, takes the value v=v_0(1-βϵ+ϵ-βϵ^2+β^2ϵ^2/2). This result can alternatively be obtained by evaluating our tensor (<ref>) for ϵ̅=ϵI̅.The tensorial character of v̅ is due to the elliptical shape of the isoenergetic curves around K_D. Notice that the principal axes of the Fermi velocity tensor up to first-order in the strain tensor, v_0(I̅ + ϵ̅ - βϵ̅), are collinear with the principal axes of ϵ̅. Therefore, within the effective low-energy Hamiltonian up to first-order in the strain tensor, the anisotropic electronic behaviour originates only from the strain-induced anisotropy. Nevertheless, the terms κ̅_1 and κ̅_2 in equation (<ref>) suggest that the second-order deformation theory might reveal the anisotropy (trigonal symmetry) of the underlying honeycomb lattice.To clarify this issue, let us consider graphene subjected to a uniaxial strain such that the stretching direction is rotated by an arbitrary angle θ with respect to the Cartesian coordinate system xy (see Fig. <ref>). In this case, the strain tensor ϵ̅ in the reference system xy readsϵ̅(θ)=ϵ( [ cos^2θ - νsin^2θ(1+ν)cosθsinθ;(1+ν)cosθsinθ sin^2θ - νcos^2θ ]),where ϵ is the strain magnitude. Note that both ϵ̅(θ) and ϵ̅(θ+180^∘) represent physically the same uniaxial strain, which can be confirmed from equation (<ref>). It is important to mention that for θ=n 60^∘ (θ=90^∘+n 60^∘), n being an integer, the stretching is along a zigzag (armchair) direction of the graphene lattice. As discussed above, under the strain (<ref>), the Fermi velocity tensor up to first-order in the strain tensor, v_0(I̅ + ϵ̅ - βϵ̅), is diagonal in the coordinate system x'y', rotated by the angle θ with respect to the coordinate system xy. However, the Fermi velocity tensor (<ref>), up to second-order in the strain tensor, is diagonal in a coordinate system x”y”, rotated by an angle θ_v such thattan2θ_v=2v_xy/v_xx-v_yy,which determines the direction of lower electronic velocity.
In the reciprocal space, the angle θ_v characterizes the pulling direction of the isoenergetic curves, i.e., the principal axis of the isoenergetic ellipses, as illustrated in Fig. <ref>(d).

In Fig. (<ref>), we show the difference Δθ = θ_v-θ, numerically calculated from equation (<ref>), as a function of the stretching direction θ for two different strain magnitudes ϵ=5% and 10%. The observed six-fold behaviour of Δθ can be analytically evaluated by

Δθ ≈-β(2β+1)(1+ν)/16(β-1)ϵsin(6θ)×(1-β(1-ν)/2ϵcos(6θ)),

in good agreement with the numerical values, as shown in Fig. (<ref>). From the last expression, it follows that the principal axes of the Fermi velocity tensor (<ref>) are collinear with the principal axes of ϵ̅(θ) only for θ=n 30^∘, i.e., when the stretching is along the zigzag or armchair crystallographic directions. This result demonstrates that our Hamiltonian (<ref>), a second-order deformation theory, reveals the trigonal symmetry of the underlying honeycomb lattice.

§ OPTICAL PROPERTIES

An anisotropic Dirac system described by the effective Hamiltonian

H=ħ v_0τ·(I̅+Δ̅)·q,

with Δ̅ a symmetric (2×2) matrix such that Δ_ij≪1, presents an anisotropic optical response captured by the conductivity tensor (see Appendix <ref>):

σ̅(ω)≈ σ_0(ω){I̅ - tr(Δ̅)I̅ + 2Δ̅ + Δ̅^2 + 1/2[(tr(Δ̅))^2 + tr(Δ̅^2)]I̅ - 2tr(Δ̅)Δ̅},

where ω is the frequency of the external electric field and σ_0(ω) is the optical conductivity of the unperturbed Dirac system, i.e., the optical conductivity of unstrained graphene. Equation (<ref>) generalizes, up to second order in Δ̅, the previous first-order expression for the optical conductivity of an anisotropic Dirac system; see equation (17) of Ref. [Oliva2016]. Now, comparing equations (<ref>) and (<ref>), the optical conductivity tensor σ̅(ω) of strained graphene is straightforwardly obtained by making the replacement

Δ̅= ϵ̅ - βϵ̅ -βϵ̅^2 - βκ̅_1 + β^2κ̅_2

in equation (<ref>). Keeping terms up to second order in the strain tensor, the result is

σ̅(ω) = σ_0(ω) [ I̅ + β̃tr(ϵ̅)I̅ - 2β̃ϵ̅ + (5β+2β̃^2/4tr(ϵ̅^2) + 4β̃^2-2β^2-β/8(tr(ϵ̅))^2)I̅ + (β̃^2-2β)ϵ̅^2 -2β̃^2tr(ϵ̅)ϵ̅ - 2βκ̅_1 +2β^2κ̅_2],

where β̃=β-1. This equation generalizes previous works <cit.>, in which the optical conductivity of graphene under uniform strain was reported up to first order in the strain tensor.

Let us check the consistency of equation (<ref>). When graphene is at half filling, i.e., when the chemical potential equals zero, the optical conductivity σ_0(ω) is frequency-independent and given by the universal value e^2/(4ħ) <cit.>. It is important to emphasize that this result is independent of the value v_0 of the Fermi velocity <cit.>. Therefore, under an isotropic uniform strain ϵ̅=ϵI̅, which only leads to a new isotropic Fermi velocity v=v_0(1-βϵ+ϵ-βϵ^2+β^2ϵ^2/2), the optical conductivity does not change and remains equal to σ_0=e^2/(4ħ), at least within the Dirac cone approximation <cit.>. In other words, any expression reported as the optical conductivity tensor of uniformly strained graphene, as a function of the strain tensor, must reduce to σ_0I̅ when evaluated at ϵ̅=ϵI̅, as indeed happens for the tensor (<ref>).

The optical conductivity up to first order in the strain tensor, σ_0[I̅ + β̃tr(ϵ̅)I̅ - 2β̃ϵ̅], under a uniaxial strain (<ref>) can be characterized by σ_∥=σ_0[1-β̃ϵ(1+ν)] and σ_⊥=σ_0[1+β̃ϵ(1+ν)], where σ_∥ (σ_⊥) is the optical conductivity parallel (perpendicular) to the stretching direction (see blue lines in Fig. <ref>).
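To make the tensor algebra above concrete, the following sketch (again our own illustration, with the same assumed β≈3.37 and ν≈0.165) evaluates Δ̅ and the second-order conductivity tensor for a uniaxial strain at an arbitrary angle, and extracts θ_v; off the zigzag/armchair axes it returns a nonzero Δθ=θ_v−θ, reproducing the six-fold modulation discussed above.

import numpy as np

beta, nu = 3.37, 0.165
I2 = np.eye(2)

def strain(theta, e):                      # uniaxial strain tensor ε̄(θ)
    c, s = np.cos(theta), np.sin(theta)
    return e*np.array([[c*c - nu*s*s, (1 + nu)*c*s],
                       [(1 + nu)*c*s, s*s - nu*c*c]])

def Delta(eps):                            # Δ̄ = ε̄ − βε̄ − βε̄² − βκ̄₁ + β²κ̄₂
    exx, eyy, exy = eps[0, 0], eps[1, 1], eps[0, 1]
    k1 = np.array([[(exx - eyy)**2, -2*exy*(exx - eyy)],
                   [-2*exy*(exx - eyy), 4*exy**2]])/8.0
    k2 = np.array([[exx**2 - eyy**2 + 2*exx*eyy + 2*exy**2, 4*exx*exy],
                   [4*exx*exy, 2*(eyy**2 - exy**2)]])/4.0
    return eps - beta*eps - beta*(eps @ eps) - beta*k1 + beta**2*k2

def sigma(D):                              # σ̄/σ₀ up to second order in Δ̄
    trD = np.trace(D)
    return (I2 - trD*I2 + 2*D + D @ D
            + 0.5*(trD**2 + np.trace(D @ D))*I2 - 2*trD*D)

theta, e = np.deg2rad(15.0), 0.10          # generic stretching direction
v = I2 + Delta(strain(theta, e))           # Fermi velocity tensor v̄/v₀
theta_v = 0.5*np.arctan2(2*v[0, 1], v[0, 0] - v[1, 1])
print(np.rad2deg(theta_v) - 15.0)          # Δθ ≠ 0 away from zigzag/armchair
print(sigma(Delta(strain(theta, e))))

Evaluating Delta at ϵ̅=ϵI̅ and feeding it to sigma returns the identity matrix, which is precisely the half-filling consistency test described above.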
Within the first-order approximation, the optical conductivity along the stretching direction decreases by the same amount by which the transverse conductivity increases, independently of θ. This behaviour is modified when second-order terms are taken into account.

In Fig. <ref>(a), we plot the components of the optical conductivity tensor (<ref>) versus the stretching magnitude ϵ for a uniaxial strain along the armchair direction. The conductivity perpendicular to the stretching direction, σ_xx(ϵ), shows no appreciable difference with respect to the linear approximation σ_0[1+β̃ϵ(1+ν)], whereas the parallel conductivity, σ_yy(ϵ), noticeably departs from σ_0[1-β̃ϵ(1+ν)] with increasing strain. On the other hand, Fig. <ref>(b) displays the opposite behaviour for a uniaxial strain along the zigzag direction. In this case, the parallel conductivity, σ_xx(ϵ), differs only slightly from σ_0[1-β̃ϵ(1+ν)], whereas the perpendicular conductivity, σ_yy(ϵ), becomes noticeably greater than σ_0[1+β̃ϵ(1+ν)] with increasing strain. This increase of σ_yy(ϵ) with respect to σ_0[1+β̃ϵ(1+ν)] may help to better understand the change in the transmission of hybrid graphene integrated microfibers elongated along their axial direction <cit.>. For example, in Figure 2(b) of Ref. [Chen2016], one can see that the experimental data for this change gradually deviate, with increasing strain, from the theoretical calculation based on the first-order linear approximation σ_0[1+β̃ϵ(1+ν)]; the description can be improved by including the second-order contribution, as shown in Fig. <ref>(b).

To complete our discussion about the emergence of the trigonal symmetry of graphene in the continuum approach presented here, we now study the transmittance of linearly polarized light through strained graphene. Treating graphene as a two-dimensional sheet with conductivity σ̅ and imposing the vacuum-graphene-vacuum boundary conditions for the electromagnetic field at the interfaces, the transmittance for normal incidence reads <cit.>

T=(1 + [σ_xxcos^2θ_i + σ_yysin^2θ_i + σ_xysin 2θ_i]/2ε_0c)^-2,

where ε_0 is the vacuum permittivity, c is the speed of light in vacuum and θ_i is the incident polarization angle. Note that for pristine graphene with σ̅=σ_0I̅, equation (<ref>) reproduces the experimentally observed constant transmittance T=(1+πα/2)^-2≈1-πα over the visible and infrared spectrum <cit.>, where α≈1/137 is the fine-structure constant. From equation (<ref>) it can be seen that an anisotropic absorbance yields a periodic modulation of the transmittance as a function of the polarization direction θ_i <cit.>. For the case of a uniaxial strain, and assuming the chemical potential equal to zero, from equations (<ref>), (<ref>) and (<ref>) it follows that the transmittance up to second order in the strain magnitude ϵ is given by

T=1 - πα + παβ̃(1+ν)ϵcos2(θ_i-θ_v) - πα/2β̃^2(1+ν)^2ϵ^2 + πα/2(1+γcos6θ)(1+ν)^2ϵ^2cos2(θ_i-θ_v),

where γ=β(2β+1)/4. Expression (<ref>) reveals two new remarkable features in comparison with the first-order theory. First, as illustrated in Fig. <ref>, the transmittance mean value, ⟨ T⟩= 1 - πα - παβ̃^2(1+ν)^2ϵ^2/2, has a negative shift with respect to the first-order average value T_0 = 1 - πα. Second, the transmittance oscillation amplitude ΔT is determined by

ΔT(θ) = 2παβ̃(1+ν)ϵ + πα(1+γcos6θ)(1+ν)^2ϵ^2.

While the first-order expression for the transmittance oscillation amplitude, 2παβ̃(1+ν)ϵ, is independent of the stretching direction θ, the ΔT of equation (<ref>) depends on θ.
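This polarization-resolved transmittance is easy to scan numerically. The short sketch below is our own illustration; it reuses strain, Delta and sigma from the previous snippet, takes σ_0=e²/(4ħ) and SI constants from scipy.constants, and prints the oscillation amplitude ΔT for two stretching directions, anticipating the zigzag/armchair comparison made next.

import numpy as np
from scipy.constants import c, e, epsilon_0, hbar

sigma0 = e**2/(4*hbar)                     # universal conductivity at half filling

def T(sig_rel, th_i):                      # transmittance at normal incidence
    s = sigma0*(sig_rel[0, 0]*np.cos(th_i)**2 + sig_rel[1, 1]*np.sin(th_i)**2
                + sig_rel[0, 1]*np.sin(2*th_i))
    return (1.0 + s/(2*epsilon_0*c))**(-2)

th_i = np.linspace(0.0, np.pi, 721)        # sweep of the polarization angle
for th_s in (0.0, np.pi/2):                # zigzag vs armchair stretching
    Tv = T(sigma(Delta(strain(th_s, 0.10))), th_i)
    print(np.rad2deg(th_s), Tv.max() - Tv.min())   # oscillation amplitude ΔT

The printout shows the zigzag amplitude exceeding the armchair one, in line with the θ-dependence of equation (<ref>).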
For example, for a uniaxial strain along the zigzag (armchair) direction with θ=n 60^∘ (θ=90^∘ + n 60^∘), ΔT takes its highest (lowest) value, as displayed in Fig. <ref>. This stretching-direction-dependent ΔT might be used to confirm experimentally the present theory up to second order in the strain tensor, as was done for small strains of less than 1% <cit.>.

§ CONCLUSION

We have analytically deduced a new effective Dirac Hamiltonian of graphene under a uniform deformation up to second order in the strain tensor, including new Dirac-point positions that are qualitatively different from those predicted by first-order approaches, as occurs for the shear strain. Moreover, based on a detailed analysis of the anisotropic Fermi velocity tensor, we demonstrated how our second-order deformation theory reveals the trigonal symmetry of graphene, unlike the previous first-order results.

We further derived, for the first time, analytical expressions for the high-frequency electric conductivity and light transmittance of strained graphene up to second order in the strain tensor. The magnitude of this transmittance oscillates with the incident light polarization, and the oscillation amplitude depends on the stretching direction, in contrast to the first-order prediction. In fact, within the first-order theory, the maximal transmittance occurs when the light polarization coincides with the stretching direction. However, the second-order theory predicts such a coincidence only for stretching along the zigzag and armchair directions. Therefore, the obtained light-transmittance results can be experimentally verified by optical absorption measurements, and they could be used to characterize the deformation states of strained graphene. In general, the analytical study presented in this article has the advantage of being concise and establishes a reference point for upcoming numerical and experimental investigations.

It is important to stress that the observed absence of lattice symmetry in the optical properties of strained graphene is due to the combination of the low-energy effective Dirac model and the first-order approximation in the strain tensor. Such an absence can be overcome by carrying out the study within the first-neighbour tight-binding model, as done for high-energy electron excitations <cit.>, or by introducing second-order effects in the strain tensor even within the simplest Dirac model, as done in this article. This finding of trigonal symmetry in the optical response reveals the capability of the low-energy effective Dirac theory to properly describe anisotropic electron behaviour in graphene under strong uniform deformations. However, a tight-binding model beyond nearest-neighbour interactions would be required to analyze both the gap opening and the electron-hole spectrum symmetry induced by lattice strain <cit.>. Finally, the present work can be extended to perform an analytical study of the pseudomagnetic fields induced by nonuniform strains.

This work has been partially supported by CONACyT of Mexico through Project 252943, and by PAPIIT of Universidad Nacional Autónoma de México (UNAM) through Projects IN113714 and IN106317. Computations were performed at Miztli of UNAM. M.O.L. acknowledges the postdoctoral fellowship from DGAPA-UNAM.

§ DIRAC POINT POSITION

Here we provide the derivation of expressions (<ref>–<ref>) of the main text.
The equation E(K_D)=0 can be rewritten as

∑_n=1^3t_ne^iK_D·δ_n^'=∑_n=1^3t_ne^iK_D· (I̅+ϵ̅)·δ_n=∑_n=1^3t_ne^iG·δ_n=0,

where G≡(I̅+ϵ̅)·K_D is the effective Dirac point associated to a pristine honeycomb lattice with strained nearest-neighbor hopping integrals t_n. To solve equation (<ref>) in a perturbative manner, we cast the position of G as

G=K_0 + A^(1) + A^(2) + 𝒪(ϵ̅^3),

where A^(1) (A^(2)) is the correction of first (second) order in the strain tensor. Similarly, we consider Taylor expansions of t_n, up to second order in the strain tensor, of the form

t_n=t_0[1 + Δ_n^(1) +Δ_n^(2) + 𝒪(ϵ̅^3)],

where Δ_n^(1) (Δ_n^(2)) are terms of first (second) order in the strain tensor.

Substituting equations (<ref>) and (<ref>) into equation (<ref>), the coefficient of the first-order strain tensor should be equal to zero, which leads to

∑_n=1^3[Δ_n^(1)+ iA^(1)·δ_n]e^iK_0·δ_n=0.

Analogously, requiring the coefficient of the second-order strain tensor to vanish yields

∑_n=1^3[ Δ_n^(2) + iΔ_n^(1)A^(1)·δ_n - (A^(1)·δ_n)^2/2 + iA^(2)·δ_n] e^iK_0·δ_n=0.

From equation (<ref>), A^(1) can be determined, and it is then used as input of equation (<ref>) to obtain A^(2). To carry out this procedure, it is necessary to know Δ_n^(1) and Δ_n^(2) explicitly as functions of the strain tensor. Expanding t_n up to second order in the strain tensor gives

t_n/t_0 = exp[-β(|δ_n^'|/a_0-1)]= exp[-β( 1/a_0^2δ_n·ϵ̅·δ_n + 1/2a_0^4(ϵ̅·δ_n)^2 - 1/2a_0^4(δ_n·ϵ̅·δ_n)^2 + 𝒪(ϵ̅^3))]=1 - β/a_0^2δ_n·ϵ̅·δ_n - β/2a_0^4(ϵ̅·δ_n)^2+ β(β+1)/2a_0^4(δ_n·ϵ̅·δ_n)^2 + 𝒪(ϵ̅^3).

Then, by comparing equations (<ref>) and (<ref>) one obtains

Δ_n^(1)= - β/a_0^2δ_n·ϵ̅·δ_n,

and

Δ_n^(2)= - β/2a_0^4(ϵ̅·δ_n)^2 + β(β+1)/2a_0^4(δ_n·ϵ̅·δ_n)^2.

Finally, substituting Δ_n^(1) into equation (<ref>), we get

A_x^(1) + i A_y^(1)=β/2a_0(ϵ_xx-ϵ_yy - 2iϵ_xy),

and consequently, using this result and the expression for Δ_n^(2), equation (<ref>) can be rewritten as

A_x^(2) + i A_y^(2)=β(4β+1)/16a_0(ϵ_xx-ϵ_yy + 2iϵ_xy)^2.

Note that equations (<ref>) and (<ref>) are the first- and second-order corrections to the Dirac point position given in equation (<ref>) of the main text.

§ EFFECTIVE DIRAC HAMILTONIAN

In order to derive the effective Dirac Hamiltonian given by equation (<ref>) in the main text, we start from the tight-binding model in momentum space for graphene under a uniform strain,

H=-∑_n=1^3 t_n( [0 e^-ik·(I+ϵ̅)·δ_n;e^ik·(I+ϵ̅)·δ_n0 ]),

and we consider momenta close to the Dirac point K_D, by means of the substitution k=K_D+q. Then, expression (<ref>) transforms as

H= ( [ 0 h^∗; h 0 ]),

where h=-∑_n=1^3 t_n e^i(K_D+q)·(I+ϵ̅)·δ_n. Now, using equation (<ref>), h can be expanded up to first order in q and second order in ϵ̅ as

h =-∑_n=1^3 t_n e^i[K_D·(I+ϵ̅)·δ_n+ q·(I+ϵ̅)·δ_n]≈ -∑_n=1^3 t_n e^iK_0·δ_n e^i(A^(1)·δ_n + A^(2)·δ_n)e^i q·(I+ϵ̅)·δ_n≈-∑_n=1^3 t_n e^iK_0·δ_n[1 + i A^(1)·δ_n + i A^(2)·δ_n - (A^(1)·δ_n)^2/2][1 + iq·(I+ϵ̅)·δ_n] ≈-∑_n=1^3 t_n e^iK_0·δ_n[1 + i A^(1)·δ_n + i A^(2)·δ_n - (A^(1)·δ_n)^2/2 + iq·(I+ϵ̅)·δ_n - (A^(1)·δ_n)(q·(I+ϵ̅)·δ_n) - (A^(2)·δ_n)(q·δ_n) - i(A^(1)·δ_n)^2(q·δ_n)/2 ],

and substituting expression (<ref>) for t_n in equation (<ref>), the expansion of h becomes

h ≈-t_0∑_n=1^3 e^iK_0·δ_n[ 1 + i A^(1)·δ_n + i A^(2)·δ_n - (A^(1)·δ_n)^2/2 + Δ_n^(1) + iΔ_n^(1)A^(1)·δ_n +Δ_n^(2)_summing over n equal to zero + iq·δ_n_h_0 + iq·ϵ̅·δ_n_h_1,a - (A^(1)·δ_n)(q·δ_n) + iΔ_n^(1)q·δ_n_h_1,b - (A^(1)·δ_n)(q·ϵ̅·δ_n) + iΔ_n^(1)(q·ϵ̅·δ_n)_h_2,a - i(A^(1)·δ_n)^2(q·δ_n)/2 - Δ_n^(1)(A^(1)·δ_n)(q·δ_n) - (A^(2)·δ_n)(q·δ_n) + iΔ_n^(2)(q·δ_n)_h_2,b].
By taking into account equation (<ref>) and ∑_n=1^3e^iK_0·δ_n=0, the q-independent terms in the last expression cancel. Thus, h can be rewritten as

h=h_0+h_1,a+h_1,b+h_2,a+h_2,b,

where

h_0= - t_0∑_n=1^3 e^iK_0·δ_n[iq·δ_n]=3t_0a_0/2(q_x+iq_y),

h_1,a =- t_0∑_n=1^3 e^iK_0·δ_n[ iq·ϵ̅·δ_n]=- t_0∑_n=1^3 e^iK_0·δ_n[ iQ·δ_n]= 3t_0a_0/2(Q_x+iQ_y)= 3t_0a_0/2[ϵ_xxq_x + ϵ_xyq_y + i(ϵ_xyq_x + ϵ_yyq_y)],

h_1,b =- t_0∑_n=1^3 e^iK_0·δ_n (q·δ_n) [- (A^(1)·δ_n) + iΔ_n^(1)]= -3t_0a_0/2β[ϵ_xxq_x + ϵ_xyq_y + i(ϵ_xyq_x + ϵ_yyq_y)],

h_2,a =- t_0∑_n=1^3 e^iK_0·δ_n (q·ϵ̅·δ_n) [-(A^(1)·δ_n) + iΔ_n^(1)]=- t_0∑_n=1^3 e^iK_0·δ_n (Q·δ_n)[-(A^(1)·δ_n) + iΔ_n^(1)]= -3t_0a_0/2β[ϵ_xxQ_x + ϵ_xyQ_y + i(ϵ_xyQ_x + ϵ_yyQ_y)]=-3t_0a_0/2β{(ϵ_xx^2+ϵ_xy^2)q_x + ϵ_xy(ϵ_xx+ϵ_xy)q_y+ i[ϵ_xy(ϵ_xx+ϵ_xy)q_x + (ϵ_xx^2+ϵ_xy^2)q_y]},

and

h_2,b =- t_0∑_n=1^3 e^iK_0·δ_n (q·δ_n) [ - i(A^(1)·δ_n)^2/2 - Δ_n^(1)(A^(1)·δ_n) - (A^(2)·δ_n) + iΔ_n^(2)]=-3t_0a_0/2·β/8{ (ϵ_xx-ϵ_yy)^2q_x + 2ϵ_xy(ϵ_xx-ϵ_yy)q_y + i [2ϵ_xy(ϵ_xx-ϵ_yy)q_x - 4ϵ_xy^2q_y] }+ 3t_0a_0/2·β^2/4{ (ϵ_xx^2-ϵ_yy^2+2ϵ_xxϵ_yy+2ϵ_xy^2)q_x +4ϵ_xxϵ_xyq_y + i [4ϵ_xxϵ_xyq_x + 2(ϵ_yy^2-ϵ_xy^2)q_y] },

where Q=q·ϵ̅. To simplify each term of h in equation (<ref>), we have used K_0=(4π/3√(3)a_0,0), equations (2) of the main text for δ_n, and the expressions obtained in the previous section for A^(1), A^(2), Δ^(1) and Δ^(2). In addition, note that the initial expression of equation (<ref>) takes the same algebraic form as equation (<ref>) once one defines Q=q·ϵ̅; the same similarity is observed between equations (<ref>) and (<ref>). In consequence, using equations (<ref>–<ref>) we obtain the contribution of each term of h to equation (<ref>) as

( [ 0 h_0^∗; h_0 0 ])= ħ v_0τ·q,

( [ 0 h_1,a^∗; h_1,a 0 ])=ħ v_0τ·ϵ̅·q,

( [ 0 h_1,b^∗; h_1,b 0 ])=-ħ v_0βτ·ϵ̅·q,

( [ 0 h_2,a^∗; h_2,a 0 ])=-ħ v_0βτ·ϵ̅^2·q,

and

( [ 0 h_2,b^∗; h_2,b 0 ])=ħ v_0τ·(- βκ̅_1 + β^2κ̅_2)·q,

where v_0=3t_0a_0/2ħ is the Fermi velocity for pristine graphene, τ=(τ_x,τ_y) is a vector of (2×2) Pauli matrices, and κ̅_1 and κ̅_2 are respectively the matrices (10) and (11) of the main text. To obtain the expressions (<ref>–<ref>) in terms of Pauli matrices, we used the identity

([0 χ_xxq_x+χ_xyq_y - i[χ_xyq_x+χ_yyq_y]; χ_xxq_x+χ_xyq_y + i[χ_xyq_x+χ_yyq_y]0 ])= τ·χ̅·q,

where χ_ij are the elements of an arbitrary (2×2) matrix χ̅, with χ̅=I̅, χ̅=ϵ̅, χ̅=-βϵ̅, χ̅=-βϵ̅^2 and χ̅=-βκ̅_1 + β^2κ̅_2 for equations (<ref>) to (<ref>), respectively. It is worth mentioning that expression (<ref>) is the effective Dirac Hamiltonian for pristine graphene, while (<ref>) and (<ref>) are the corrections to first order in ϵ̅, previously derived in Ref. [Oliva13]. The second-order corrections in ϵ̅, equations (<ref>) and (<ref>), are among the principal contributions of our work.

Finally, combining equations (<ref>), (<ref>) and (<ref>–<ref>), we obtain the effective Dirac Hamiltonian for graphene under a uniform strain, up to second order in the strain tensor ϵ̅, given by

H=ħ v_0τ·(I̅ + ϵ̅ - βϵ̅ -βϵ̅^2 - βκ̅_1 + β^2κ̅_2)·q,

which is equation (<ref>) reported in the main text.

§ OPTICAL CONDUCTIVITY OF AN ANISOTROPIC DIRAC SYSTEM

In this section, we derive the optical conductivity tensor σ̅_ij(ω) of an anisotropic Dirac system described by the effective Hamiltonian

H=ħ v_0τ·(I̅+Δ̅)·q,

where the anisotropic behaviour is expressed through the perturbation Δ̅, a symmetric (2×2) matrix such that Δ̅_ij≪1. Essentially, we now extend, up to second order in Δ̅, a previous calculation of σ̅_ij(ω) up to first order in Δ̅ reported in Ref.
[Oliva2016].

Assuming that the considered system has a linear response to an external electric field of frequency ω, its optical conductivity σ̅_ij(ω) can be calculated by combining the Hamiltonian (<ref>) with the Kubo formula. Following the approach used in Refs. [Ziegler06,Ziegler07], σ̅_ij(ω) can be expressed as a double integral with respect to two energies E, E':

σ̅_ij(ω)=ie^2/ħ∫∫Tr{ v_iδ(H-E')v_jδ(H-E)} ×1/E-E'+ħω - iα·f(E)-f(E')/E-E'dE dE',

where f(E)=(1+exp[E/(k_BT)])^-1 is the Fermi function at temperature T, Tr is the trace operator including the summation over the q-space (as defined in equation (7) of Ref. [Ziegler07]), and v_l=i [H,r_l] is the velocity operator in the l-direction, with l=x,y.

To calculate the integral (<ref>) it is convenient to make the change of variables

q=(I̅+Δ̅)^-1·q^*,

under which the Hamiltonian (<ref>) becomes H=ħ v_0τ·q^*, corresponding to an unperturbed, isotropic Dirac system such as unstrained graphene. At the same time, the velocity operator components transform as

v_x = i [H,r_x]=∂ H/∂ q_x = ∂ H/∂ q_x^*∂ q_x^*/∂ q_x + ∂ H/∂ q_y^*∂ q_y^*/∂ q_x = (1 + Δ̅_xx)v_x^* + Δ̅_xyv_y^*,

and analogously

v_y=(1 + Δ̅_yy)v_y^* + Δ̅_xyv_x^*,

where v_x^*=∂ H/∂ q_x^* and v_y^*=∂ H/∂ q_y^* are the velocity operator components of the unperturbed Dirac system.

Then, substituting equations (<ref>) and (<ref>) into equation (<ref>), we find

σ̅_xx(ω) = [(1+Δ̅_xx)^2+Δ̅_xy^2]Jσ_0(ω),

σ̅_yy(ω) = [(1+Δ̅_yy)^2+Δ̅_xy^2]Jσ_0(ω),

and

σ̅_xy(ω) = σ̅_yx(ω)= [2Δ̅_xy+Δ̅_xy(Δ̅_xx+Δ̅_yy)]Jσ_0(ω),

where J is the Jacobian determinant of the transformation (<ref>), arising from expressing the trace operator Tr of equation (<ref>) in the new variables q^*, and σ_0(ω) is the optical conductivity of the unperturbed Dirac system, i.e., the reported optical conductivity of unstrained graphene <cit.>. Note that equations (<ref>-<ref>) can be written in a compact manner as

σ̅(ω)=(I̅ + 2Δ̅ + Δ̅^2)Jσ_0(ω).

Now, expanding J up to second order in Δ̅ gives

J = det[(I̅+Δ̅)^-1] ≈det(I̅-Δ̅+Δ̅^2)≈1-tr(Δ̅) + [tr(Δ̅)]^2-det(Δ̅)≈1-tr(Δ̅) + [tr(Δ̅)]^2/2 + tr(Δ̅^2)/2,

where tr(Δ̅)=Δ̅_xx+Δ̅_yy.

Finally, substituting equation (<ref>) into (<ref>), we obtain that the optical conductivity tensor of the anisotropic Dirac system, described by the Hamiltonian (<ref>), is given by

σ̅(ω)≈ σ_0(ω){I̅ - tr(Δ̅)I̅ + 2Δ̅ + Δ̅^2 + 1/2[(tr(Δ̅))^2 + tr(Δ̅^2)]I̅ - 2tr(Δ̅)Δ̅},

where the terms beyond I̅ - tr(Δ̅)I̅ + 2Δ̅ constitute the second-order contribution of the perturbation Δ̅ to the conductivity.
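As a quick consistency check of this second-order expansion (our own sketch, not part of the paper), one can compare it against the exact change-of-variables expression (I̅+2Δ̅+Δ̅²)det[(I̅+Δ̅)^{-1}] for random small symmetric perturbations; the discrepancy should scale with the third power of the perturbation size.

import numpy as np

rng = np.random.default_rng(0)
I2 = np.eye(2)
for mag in (1e-2, 1e-1):
    A = rng.normal(size=(2, 2))
    D = mag*(A + A.T)/2.0                          # small symmetric Δ̄
    exact = np.linalg.det(np.linalg.inv(I2 + D))*(I2 + 2*D + D @ D)
    trD = np.trace(D)
    approx = (I2 - trD*I2 + 2*D + D @ D
              + 0.5*(trD**2 + np.trace(D @ D))*I2 - 2*trD*D)
    print(mag, np.max(np.abs(exact - approx)))     # error shrinks like mag**3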
http://arxiv.org/abs/1702.08365v1
{ "authors": [ "Maurice Oliva-Leyva", "Chumin Wang" ], "categories": [ "cond-mat.mes-hall" ], "primary_category": "cond-mat.mes-hall", "published": "20170227164359", "title": "Low-energy theory for strained graphene: an approach up to second-order in the strain tensor" }
Slow manifolds for stochastic systems with non-Gaussian stable Lévy noise [This work was partly supported by the NSFC grants 11301197, 11301403, 11371367, 11271290 and 0118011074.]

Shenglan Yuan^a,b,c [shenglanyuan@hust.edu.cn], Jianyu Hu^a,b,c [hujianyu@hust.edu.cn], Xianming Liu^b,c [xmliu@hust.edu.cn], Jinqiao Duan^d [duan@iit.edu]

^aCenter for Mathematical Sciences, ^bSchool of Mathematics and Statistics, ^cHubei Key Laboratory of Engineering Modeling and Scientific Computing, Huazhong University of Science and Technology, Wuhan 430074, China
^dDepartment of Applied Mathematics, Illinois Institute of Technology, Chicago, IL 60616, USA

This work is concerned with the dynamics of a class of slow-fast stochastic dynamical systems driven by non-Gaussian stable Lévy noise with a scale parameter. Slow manifolds with an exponential tracking property are constructed, eliminating the fast variables to reduce the dimension of these coupled dynamical systems. It is shown that as the scale parameter tends to zero, the slow manifolds converge to critical manifolds in distribution, which helps in understanding long-time dynamics. The approximation of slow manifolds, with error estimates in distribution, is also considered.

Keywords. Stochastic differential equations, random dynamical systems, slow manifolds, critical manifolds, dimension reduction.

§ INTRODUCTION

Stochastic effects are ubiquitous in complex systems in science and engineering <cit.>. Although random mechanisms may appear to be very small or very fast, their long-time impact on the system evolution may be delicate or even profound, as has been observed in, for example, stochastic bifurcation, stochastic optimal control, stochastic resonance and noise-induced pattern formation <cit.>. Mathematical modeling of complex systems under uncertainty often leads to stochastic differential equations (SDEs) <cit.>. The fluctuations appearing in these SDEs are often non-Gaussian (e.g., Lévy motion) rather than Gaussian (e.g., Brownian motion); see Schilling <cit.>.

We consider slow-fast stochastic dynamical systems in which the fast dynamics are driven by α-stable noise; see <cit.>. In particular, we study

{[ dx^ε=Sx^ε dt+g_1(x^ε,y^ε)dt,; dy^ε=1/εFy^ε dt+1/εg_2(x^ε,y^ε)dt+σε^-1/αdL_t^α, ].

where (x^ε,y^ε)∈ℝ^n_1×ℝ^n_2, and ε is a small positive parameter measuring the slow-fast time scale separation, in the formal sense that

|dx^ε/dt|≪|dy^ε/dt|,

where |·| denotes the Euclidean norm. Here S is an n_1×n_1 matrix all of whose eigenvalues have non-negative real part, while F is an n_2×n_2 matrix whose eigenvalues have negative real part. The nonlinearities g_i:ℝ^n_1×ℝ^n_2↦ℝ^n_i, i=1,2, are Lipschitz continuous functions with g_i(0,0)=0. {L_t^α: t∈ℝ} is a two-sided ℝ^n_2-valued α-stable Lévy process on a probability space (Ω,ℱ,ℙ), where 1<α<2 is the index of stability <cit.>. The strength of the noise in the fast equation is chosen to be ε^-1/α in order to balance the stochastic and deterministic forces, and σ>0 is the intensity of the noise.

Invariant manifolds are geometric structures in state space that are useful in investigating the dynamical behavior of stochastic systems; see <cit.>. A slow manifold is a special invariant manifold of a slow-fast system, on which the fast variable is represented in terms of the slow variable for a small scale parameter ε. Moreover, it exponentially attracts other orbits.
A critical manifold of a slow-fast system is the slow manifold corresponding to zero scale parameter <cit.>. The theory of slow manifolds and critical manifolds provides a powerful tool for analyzing the geometric structures of slow-fast stochastic dynamical systems and for reducing the dimension of such systems.

For a system like (<ref>) driven by Brownian noise (α=2), the existence of the slow manifold and its approximation have been extensively studied <cit.>. The dynamics of individual sample solution paths have also been quantified; see <cit.>. Moreover, Ren and Duan <cit.> provided a numerical simulation of the slow manifold and established its parameter estimation. The study of the dynamics generated by SDEs under non-Gaussian Lévy noise is still in its infancy, but some interesting works are emerging <cit.>.

The main goal of this paper is to investigate the slow manifold of the dynamical system (<ref>) driven by an α-stable Lévy process with α∈(1,2) in the finite dimensional setting, and to examine its approximation and structure.

We first introduce a random transformation based on the generalised Ornstein-Uhlenbeck process, such that a solution of the system of SDEs (<ref>) with α-stable Lévy noise can be represented as a transformed solution of a system of random differential equations (RDEs), and vice versa. Then we prove that, for 0<ε≪1, a slow manifold ℳ^ε with an exponential tracking property can be constructed as the fixed point of the projected RDEs by using the Lyapunov-Perron method <cit.>. As a consequence, applying the inverse transformation, we obtain the slow manifold M^ε for the original SDE system. Subsequently, we convert the above RDEs into new RDEs by the time scaling t→ε t. After that, we use the Lyapunov-Perron method once again to establish the existence of the slow manifold ℳ̃^ε for the new RDE system; in particular, we denote by ℳ^0(ω) the critical manifold corresponding to zero scale parameter. In addition, we show that ℳ^ε coincides with ℳ̃^ε in distribution, and that the distribution of ℳ^ε(ω) converges to the distribution of ℳ^0(ω) as ε tends to zero. Finally, we derive an asymptotic approximation, in distribution, of the slow manifold ℳ^ε. Moreover, as part of ongoing studies, we aim to examine the mean residence time on the slow manifold and to generalise these results to system (<ref>) in Hilbert spaces, in order to study infinite dimensional dynamics.

This paper is organized as follows. In Section 2, we recall some basic concepts of random dynamical systems and construct metric dynamical systems driven by Lévy processes with two-sided time. In Section 3, we recall random invariant manifolds and introduce the hypotheses for the slow-fast system. In Section 4, we show the existence of the slow manifold (Theorem <ref>) and quantify the rate at which the slow manifold attracts other dynamical orbits (Theorem <ref>). In Section 5, we prove that as the scale parameter tends to zero, the slow manifold converges to the critical manifold in distribution (Theorem <ref>). In Section 6, we present examples from mathematical biology to corroborate our analytical results.

§ RANDOM DYNAMICAL SYSTEMS

We are going to introduce the main tools we need to find inertial manifolds for systems of stochastic differential equations driven by α-stable Lévy noise.
These tools stem from the theory of random dynamical systems; see Arnold <cit.>.

An appropriate model for the noise is a metric dynamical system Θ=(Ω,ℱ,ℙ,θ), which consists of a probability space (Ω,ℱ,ℙ) and a flow θ={θ_t}_t∈ℝ:

θ: ℝ×Ω→Ω, (t,ω)↦θ_tω; θ_0= id_Ω, θ_s+t=θ_s∘θ_t=:θ_sθ_t.

The flow θ is jointly ℬ(ℝ)⊗ℱ-ℱ measurable. All θ_t are measurably invertible with θ_t^-1=θ_-t. In addition, the probability measure ℙ is invariant (ergodic) with respect to the mappings {θ_t}_t∈ℝ.

For example, a Lévy process with two-sided time gives rise to a metric dynamical system. Let L=(L_t)_t≥0 with L_0=0 a.s. be a Lévy process with values in ℝ^n defined on the canonical probability space (E^ℝ^+,ℰ^ℝ^+,ℙ_ℝ^+), in which E=ℝ^n is endowed with the Borel σ-algebra ℰ. We can construct the corresponding two-sided Lévy process L_t(ω):=ω(t), t∈ℝ, defined on (E^ℝ,ℰ^ℝ,ℙ_ℝ); see Kümmel <cit.>. Since the paths of a Lévy process are càdlàg <cit.>, we can define the two-sided Lévy process on the space (Ω,ℱ,ℙ) instead of (E^ℝ,ℰ^ℝ,ℙ_ℝ), where Ω=𝒟_0(ℝ,ℝ^n) is the space of càdlàg functions starting at 0, given by

𝒟_0={ω : for ∀ t∈ℝ, lim_s↑ tω(s)=ω(t-), lim_s↓ tω(s)=ω(t) exist and ω(0)=0}.

This space, equipped with Skorokhod's 𝒥_1-topology generated by the metric d_ℝ, is a Polish space. For functions ω_1, ω_2∈𝒟_0, d_ℝ(ω_1, ω_2) is given by

d_ℝ(ω_1,ω_2)=inf{ε>0:|ω_1(t)-ω_2(λ t)|≤ε, |lnarctan(λ t)-arctan(λ s)/arctan(t)-arctan(s)|≤ε for every  t, s∈ℝ and some λ∈Λ^ℝ},

where

Λ^ℝ={λ :ℝ→ℝ; λ is injective increasing, lim_t→-∞λ(t)=-∞, lim_t→∞λ(t)=∞}.

ℱ is the associated Borel σ-algebra ℬ(𝒟_0)=ℰ^ℝ∩𝒟_0, and (𝒟_0,ℬ(𝒟_0)) is a separable metric space. The probability measure ℙ is generated by ℙ(𝒟_0∩ A):=ℙ_ℝ(A) for each A∈ℰ^ℝ. The flow θ is given by

θ:ℝ×𝒟_0→𝒟_0, θ_tω(·)↦ω(·+t)-ω(t),

which is a Carathéodory function; it follows that θ is jointly measurable. Moreover, θ satisfies θ_s(θ_tω(·))=ω(·+s+t)-ω(s)-(ω(s+t)-ω(s))=θ_s+tω(·) and θ_0ω(·)=ω(·). Thus (𝒟_0,ℬ(𝒟_0),ℙ,(θ_t)_t∈ℝ) is a metric dynamical system generated by a Lévy process with two-sided time. Note that the probability measure ℙ is ergodic with respect to the flow θ=(θ_t)_t∈ℝ.

Above, we defined the metric dynamical system first; it plays the role of θ in the following complete definition of a random dynamical system, which combines a measurability property with the cocycle property.

A random dynamical system taking values in the measurable space (H,ℬ(H)) over a metric dynamical system (Ω,ℱ,ℙ,(θ_t)_t∈ℝ) with time space ℝ is given by a mapping ϕ: ℝ×Ω× H → H that is jointly ℬ(ℝ)⊗ℱ⊗ℬ(H)-ℬ(H) measurable and satisfies the cocycle property:

[ ϕ(0,ω,·)= id_H= identity on H for each ω∈Ω;; ϕ(t+s,ω,·)=ϕ(t,θ_sω,ϕ(s,ω,·)) for each s, t∈ℝ, ω∈Ω. ]

For our application, in the sequel we take H=ℝ^n=ℝ^n_1×ℝ^n_2.

Note that if ϕ satisfies the cocycle property (<ref>) only for almost all ω∈Ω\𝒩_s,t (where the exceptional set 𝒩_s,t may depend on s,t), then we say that ϕ forms a crude cocycle instead of a perfect cocycle. In this case, to obtain a random dynamical system, one can sometimes perform a perfection of the crude cocycle, so that the cocycle property holds for each and every ω∈Ω; see Scheutzow <cit.>.

We now recall some objects that help to understand the dynamics of a random dynamical system.

A random variable ω↦ X(ω) with values in H is called a stationary orbit (or random fixed point) for a random dynamical system ϕ if

ϕ(t,ω,X(ω))=X(θ_tω) for t∈ℝ, ω∈Ω.

Since the probability measure ℙ is invariant with respect to {θ_t}_t∈ℝ, the random variables ω↦ X(θ_tω) have the same distribution as ω↦ X(ω).
Thus (t,ω)↦ X(θ_tω) is a stationary process, and therefore a stationary solution of the stochastic differential equation generating the random dynamical system ϕ.

A family of nonempty closed sets M={M(ω)⊂ℝ^n,ω∈Ω} is called a random set for a random dynamical system φ if the mapping

ω↦ dist(z,M(ω)):=inf_z^'∈ M(ω)|z-z^'|

is a random variable for every z∈ℝ^n. Moreover, M is called a (positively) invariant set if

φ(t,ω,M(ω))⊆ M(θ_tω) for t∈ℝ, ω∈Ω.

Let

h:Ω×ℝ^n_1→ℝ^n_2, (ω,x)↦ h(ω,x)

be a function such that for all ω∈Ω, x↦ h(ω,x) is Lipschitz continuous, and for every x∈ℝ^n_1, ω↦ h(ω,x) is a random variable. We define

ℳ(ω)={(x,h(ω,x)):x∈ℝ^n_1},

so that ℳ={ℳ(ω)⊂ℝ^n,ω∈Ω} can be represented as the graph of h. It can be shown <cit.> that ℳ is a random set.

If the sets ℳ(ω), ω∈Ω, also satisfy (<ref>), ℳ is called a Lipschitz continuous invariant manifold. Furthermore, ℳ is said to have an exponential tracking property if for all ω∈Ω and every z∈ℝ^n there exists a z^'∈ℳ(ω) such that

|φ(t,ω,z)-φ(t,ω,z^')|≤ C(z,z^',ω)e^-ct|z-z^'|, t≥0,

where C is a positive random variable depending on z and z^', while c is a positive constant. In that case ℳ is called a random slow manifold with respect to the random dynamical system φ.

Let φ and φ̃ be two random dynamical systems. Then φ and φ̃ are called conjugate if there is a random mapping T:Ω×ℝ^n→ℝ^n such that for all ω∈Ω, (t,z)↦ T(θ_tω,z) is a Carathéodory function, for every t∈ℝ and ω∈Ω the map z↦ T(θ_tω,z) is a homeomorphism, and

φ̃(t,ω,z)=T(θ_tω,φ(t,ω,T^-1(ω,z))), z∈ℝ^n,

where T^-1:Ω×ℝ^n→ℝ^n is the corresponding inverse mapping of T. Note that T provides a random transformation from φ to φ̃ that may be simpler to treat. If M̃ is an invariant set for the random dynamical system φ̃, we define

M(·):=T^-1(·,M̃(·)).

From the properties of T, M is then also an invariant set with respect to φ.

§ SLOW-FAST DYNAMICAL SYSTEMS

The theory of invariant manifolds and slow manifolds of random dynamical systems is essential for the study of solution orbits, and we can use it to simplify dynamical systems by reducing a random dynamical system to a lower-dimensional manifold.

For the slow-fast system (<ref>) described by stochastic differential equations with α-stable Lévy noise, the state space for the slow variables is ℝ^n_1, and the state space for the fast variables is ℝ^n_2. To construct the slow manifolds of system (<ref>), we introduce the following hypotheses.

Concerning the linear part of (<ref>), we suppose:

(A_1). There are constants γ_s>0 and γ_f<0 such that

|e^Stx| ≤ e^γ_st|x|, t≤0,  x∈ℝ^n_1,

|e^Fty| ≤ e^γ_ft|y|, t≥0,  y∈ℝ^n_2.

With respect to the nonlinear parts of system (<ref>), we assume:

(A_2). There exists a constant K>0 such that for all (x_i,y_i)∈ℝ^n, i=1,2,

|g_i(x_1,y_1)-g_i(x_2,y_2)|≤ K(|x_1-x_2|+|y_1-y_2|),

which implies that the g_i are continuous and thus measurable with respect to all variables.

If g_i is only locally Lipschitz but the corresponding deterministic system has a bounded absorbing set, then by cutting off g_i to zero outside a ball containing the absorbing set, the modified system has globally Lipschitz drift <cit.>.

For the proof of the existence of a random invariant manifold parametrized by x∈ℝ^n_1, we have to assume the following spectral gap condition:

(A_3). The decay rate -γ_f of e^Ft is larger than the Lipschitz constant K of the nonlinear parts in system (<ref>), i.e.,
K<-γ_f.

Under hypothesis (A_1), the linear stochastic differential equations

dη^1/ε(t) =1/εFη^1/εdt+ε^-1/αdL_t^α, η^1/ε(0)=η_0^1/ε,

dξ(t) =Fξ dt+dL_t^α, ξ(0)=ξ_0,

have càdlàg stationary solutions η^1/ε(θ_tω) and ξ(θ_tω), defined on a θ-invariant set Ω of full measure, through the random variables

η^1/ε(ω)=ε^-1/α∫_-∞^0 e^-Fs/εdL_s^α(ω), ξ(ω)=∫_-∞^0e^-FsdL_s^α(ω),

respectively. Moreover, they generate random dynamical systems.

The SDE (<ref>) has the unique càdlàg solution

φ(t,ω,ξ_0)=e^Ftξ_0+∫_0^te^F(t-s)dL_s^α(ω);

for details see <cit.>. It follows from (<ref>) and (<ref>) that

φ(t,ω,ξ(ω)) =e^Ftξ(ω)+∫_0^te^F(t-s)dL_s^α(ω)=e^Ft∫_-∞^0e^-FsdL_s^α(ω)+∫_0^te^F(t-s)dL_s^α(ω)=∫_-∞^te^F(t-s)dL_s^α(ω).

By (<ref>), we also see that

ξ(θ_tω) =∫_-∞^0e^-FsdL_s^α(θ_tω)=∫_-∞^0e^-Fsd(L_t+s^α(ω)-L_t^α(ω))=∫_-∞^0e^-FsdL_t+s^α(ω)=∫_-∞^te^F(t-s)dL_s^α(ω).

Hence φ(t,ω,ξ(ω))=ξ(θ_tω) is a stationary orbit for (<ref>). Then we have

ξ(θ_t+sω) =∫_-∞^t+se^F(t+s-r)dL_r^α(ω)=e^F(t+s)∫_-∞^te^-F(r+s)dL_r+s^α(ω)=e^F(t+s)∫_-∞^te^-F(r+s)d(L_r+s^α(ω)-L_s^α(ω))=∫_-∞^te^F(t-r)dL_r^α(θ_sω)=ξ(θ_tθ_sω),

which implies that ξ generates a random dynamical system. Analogously, the SDE (<ref>) has the unique solution given by the generalised Ornstein-Uhlenbeck process

η^1/ε(θ_tω)=ε^-1/α∫_-∞^te^F(t-s)/εdL_s^α.

Since the α-stable process L_t^α satisfies Elog(1+|L_1^α|)<∞, η^1/ε(θ_tω) and ξ(θ_tω) are well-defined stationary semimartingales; see <cit.>.

The process η^1/ε(θ_tω) has the same distribution as the process ξ(θ_t/εω), where η^1/ε and ξ are defined in Lemma <ref>.

Since the α-stable process L_t^α is self-similar with Hurst index 1/α, i.e.,

L_ct^αd=c^1/α L_t^α,

where “d=“ denotes equivalence (coincidence) in distribution, we have

η^1/ε(θ_tω) =ε^-1/α∫_-∞^t e^F(t-s)/εdL_s^α(ω)=∫_-∞^t/εe^F(t/ε-u)ε^-1/αdL_ε u^α(ω)d=∫_-∞^t/ε e^F(t/ε-u)dL_u^α(ω)=ξ(θ_t/εω),

which proves that η^1/ε(θ_tω) and ξ(θ_t/εω) have the same distribution.

Now we transform the slow-fast stochastic dynamical system (<ref>) into a random dynamical system <cit.>. We introduce the random transformation

( [ x̂^ε;ŷ^ε;]):=T^ε(ω,x^ε,y^ε):= ( [ x^ε; y^ε-ση^1/ε(ω); ]).

Then (x̂^ε(t), ŷ^ε(t))=T^ε(θ_tω,x^ε(t),y^ε(t)) satisfies

{[ dx̂^ε=Sx̂^ε dt+g_1(x̂^ε,ŷ^ε+ση^1/ε(θ_tω))dt,; dŷ^ε=1/εFŷ^ε dt+1/εg_2(x̂^ε,ŷ^ε+ση^1/ε(θ_tω))dt. ].

This can be seen by a formal differentiation of x^ε and y^ε-ση^1/ε(θ_tω).

For the sake of simplicity, we write ĝ_i(θ_t^εω,x̂^ε,ŷ^ε) = g_i(x̂^ε,ŷ^ε+ση^1/ε(θ_tω)), i=1, 2. Since the additive term ση^1/ε does not change the Lipschitz constants of the functions on the right-hand side, the functions ĝ_i have the same Lipschitz constant as the g_i.

By hypotheses (A_1)-(A_3), system (<ref>) can be solved for every ω contained in a θ-invariant set Ω of full measure and for every initial condition (x̂^ε(0),ŷ^ε(0))=(x_0,y_0), in such a way that the cocycle property is satisfied. Then the solution mapping

(t,ω,(x_0,y_0))↦ϕ̂^ε(t,ω,(x_0,y_0))=(x̂^ε(t,ω,(x_0,y_0)),ŷ^ε(t,ω,(x_0,y_0)))∈ℝ^n

defines a random dynamical system. In fact, the mapping ϕ̂^ε is (ℬ(ℝ)⊗ℱ⊗ℬ(ℝ^n),ℬ(ℝ^n))-measurable, and for each ω∈Ω, ϕ̂^ε(·,ω):ℝ×ℝ^n→ℝ^n is a Carathéodory function.

In the following section we show that system (<ref>) generates a random dynamical system that has a random slow manifold for sufficiently small ε>0. Applying the ideas from the end of Section 2 with T:=T^ε to the solution of (<ref>), system (<ref>) also has a version satisfying the cocycle property.
Clearly,

φ^ε(t,ω,(x_0,y_0)) =(T^ε)^-1(θ_tω,φ̂^ε(t,ω,T^ε(ω,(x_0,y_0))))=ϕ̂^ε(t,ω,(x_0,y_0))+(0,ση^1/ε(θ_tω)), t∈ℝ, ω∈Ω,

is a random dynamical system generated by the original system (<ref>). Hence, by the particular structure of T^ε, if (<ref>) has a slow manifold, then so does (<ref>).

§ RANDOM SLOW MANIFOLDS

To study system (<ref>), for any β∈ℝ we introduce Banach spaces of functions with a geometrically weighted sup norm <cit.> as follows:

C_β^s,- = {ϕ^s,-:(-∞,0]→ℝ^n_1 | ϕ^s,- is continuous and sup_t∈ (-∞,0] |e^-β t ϕ_t^s,-|< ∞},

C_β^s,+ = {ϕ^s,+:[0,∞)→ℝ^n_1 | ϕ^s,+ is continuous and sup_t∈ [0,∞) |e^-β t ϕ_t^s,+|< ∞},

with the norms

||ϕ^s,-||_ C_β^s,-:=sup_t∈ (-∞,0] |e^-β t ϕ_t^s,-| and ||ϕ^s,+||_ C_β^s,+ :=sup_t∈ [0,∞) |e^-β t ϕ_t^s,+|.

Analogously, we define Banach spaces C_β^f,- and C_β^f,+ with the norms

||ϕ^f,-||_ C_β^f,-:=sup_t∈ (-∞,0] |e^-β t ϕ_t^f,-| and ||ϕ^f,+||_ C_β^f,+ :=sup_t∈ [0,∞) |e^-β t ϕ_t^f,+|.

Let C_β^± be the product space C_β^±:=C_β^s,±× C_β^f,±, (ϕ^s,±,ϕ^f,±)∈ C_β^±. Equipped with the norm

||(ϕ^s,±,ϕ^f,±)||_ C_β^±:=||ϕ^s,±||_ C_β^s,±+||ϕ^f,±||_ C_β^f,±,

C_β^± is a Banach space. Let γ>0 satisfy K<-(γ+γ_f). For the remainder of the paper, we take β=-γ/ε with ε>0 sufficiently small.

Assume that (A_1)-(A_3) hold. Then (x_0,y_0) is in ℳ^ε(ω) if and only if there exists a function ϕ̂^ε(t)=(x̂^ε(t),ŷ^ε(t))=(x̂^ε(t,ω,(x_0,y_0)),ŷ^ε(t,ω,(x_0,y_0)))∈ C_β^- with t≤0 such that

( [ x̂^ε(t);ŷ^ε(t); ]) =( [ e^Stx_0+∫_0^te^S(t-s)ĝ_1(θ_s^εω,x̂^ε(s),ŷ^ε(s))ds; 1/ε∫_-∞^te^F(t-s)/εĝ_2(θ_s^εω,x̂^ε(s),ŷ^ε(s))ds; ]),

where

ℳ^ε(ω)={(x_0,y_0)∈ℝ^n:ϕ̂^ε(·,ω,(x_0,y_0))∈ C_β^-},  ω∈Ω.

If (x_0,y_0)∈ℳ^ε(ω), then by the variation of constants formula, system (<ref>) is equivalent to the system of integral equations

{[x̂^ε(t)=e^Stx_0+∫_0^te^S(t-s)ĝ_1(θ_s^εω,x̂^ε(s),ŷ^ε(s))ds,; ŷ^ε(t)=e^F(t-u)/εŷ^ε(u)+ 1/ε∫_u^te^F(t-s)/εĝ_2(θ_s^εω,x̂^ε(s),ŷ^ε(s))ds, ].

and ϕ̂^ε(t)∈ C_β^-. Moreover, since u<0 and -(γ+γ_f)>K>0, we have

[ |e^F(t-u)/εŷ^ε(u)|≤ e^γ_f(t-u)/ε|ŷ^ε(u)|≤sup_u∈ (-∞,0]{e^-β u|ŷ^ε(u)|}e^γ_f(t-u)/εe^β u;=||ŷ^ε||_C_β^f,-e^γ_f/εte^(γ_f/ε-β)(-u)=||ŷ^ε||_C_β^f,-e^γ_f/εte^-γ+γ_f/εu→0, as u→-∞, ]

which leads to

ŷ^ε(t)=1/ε∫_-∞^te^F(t-s)/εĝ_2(θ_s^εω,x̂^ε(s),ŷ^ε(s))ds.

Thus (<ref>)-(<ref>) imply that (<ref>) holds.

Conversely, let ϕ̂^ε(t,ω,(x_0,y_0))∈ C_β^- satisfy (<ref>); then (x_0,y_0) is in ℳ^ε(ω) by (<ref>). This finishes the proof.

Assume (A_1)-(A_3) to be valid and let (x̂^ε(0),ŷ^ε(0))=(x_0,y_0). Then there exists a δ>0 such that for every ε∈(0,δ), the system (<ref>) has a unique solution ϕ̂^ε(t)=(x̂^ε(t),ŷ^ε(t))=(x̂^ε(t,ω,(x_0,y_0)),ŷ^ε(t,ω,(x_0,y_0))) in C_β^-.

For any ϕ̂^ε=(x̂^ε,ŷ^ε)∈ C_β^-, define two operators ℐ_s^ε:C_β^-→ C_β^s,- and ℐ_f^ε:C_β^-→ C_β^f,- by

[ ℐ_s^ε(ϕ̂^ε)[t]=e^Stx_0+∫_0^te^S(t-s)ĝ_1(θ_s^εω,x̂^ε(s), ŷ^ε(s))ds,;ℐ_f^ε(ϕ̂^ε)[t]=1/ε∫_-∞^te^F(t-s)/εĝ_2(θ_s^εω,x̂^ε(s),ŷ^ε(s))ds, ]

and the Lyapunov-Perron transform ℐ^ε by

ℐ^ε(ϕ̂^ε)[t] = ( [ ℐ_s^ε(ϕ̂^ε)[t]; ℐ_f^ε(ϕ̂^ε)[t];]).

Under the assumptions above, ℐ^ε maps C_β^- into itself. Indeed, taking ϕ̂^ε=(x̂^ε,ŷ^ε)∈ C_β^-, we estimate

||ℐ_s^ε(ϕ̂^ε)||_C_β^s,- ≤ Ksup_t∈ (-∞,0]{e^-β t∫_t^0e^γ_s(t-s)(|x̂^ε(s)|+|ŷ^ε(s)+ση^1/ε(θ_sω)|)ds} +sup_t∈ (-∞,0]{e^-β te^γ_st|x_0|}≤ Ksup_t∈ (-∞,0]{∫_t^0e^γ+εγ_s/ε(t-s)ds}||ϕ̂^ε||_C_β^-+C_1≤ε K/γ+εγ_s||ϕ̂^ε||_C_β^-+C_1

and

||ℐ_f^ε(ϕ̂^ε)||_C_β^f,- ≤K/εsup_t∈ (-∞,0]{e^-β t∫_-∞^te^γ_f(t-s)/ε(|x̂^ε(s)|+|ŷ^ε(s)+ση^1/ε(θ_sω)|)ds}≤K/εsup_t∈ (-∞,0]{∫_-∞^te^γ+γ_f/ε(t-s)ds}||ϕ̂^ε||_C_β^-+C_2=-K/γ+γ_f||ϕ̂^ε||_C_β^-+C_2.

Hence, by the definition of ℐ^ε, we obtain

||ℐ^ε(ϕ̂^ε)||_C_β^-≤ρ(ε)||ϕ̂^ε||_C_β^-+C_3,

where C_i, i=1,2,3, are constants and

ρ(ε):=ε K/γ+εγ_s- K/γ+γ_f.
Further, we show that ℐ^ε is a contraction. Let ϕ̂_1^ε=(x̂_1^ε,ŷ_1^ε), ϕ̂_2^ε=(x̂_2^ε,ŷ_2^ε)∈ C_β^-. Using (A_1)-(A_2) and the definition of C_β^-, we obtain

||ℐ_s^ε(ϕ̂_1^ε)-ℐ_s^ε(ϕ̂_2^ε)||_C_β^s,- ≤ Ksup_t∈ (-∞,0]{e^-β t∫_t^0e^γ_s(t-s)(|x̂_1^ε(s)-x̂_2^ε(s)|+|ŷ_1^ε(s)-ŷ_2^ε(s)|)ds}≤ Ksup_t∈ (-∞,0]{∫_t^0e^γ+εγ_s/ε(t-s)ds}||ϕ̂_1^ε-ϕ̂_2^ε||_C_β^-≤ε K/γ+εγ_s||ϕ̂_1^ε-ϕ̂_2^ε||_C_β^-

and

||ℐ_f^ε(ϕ̂_1^ε)-ℐ_f^ε(ϕ̂_2^ε)||_C_β^f,- ≤K/εsup_t∈ (-∞,0]{e^-β t∫_-∞^te^γ_f(t-s)/ε(|x̂_1^ε(s)-x̂_2^ε(s)|+|ŷ_1^ε(s)-ŷ_2^ε(s)|)ds}≤K/εsup_t∈ (-∞,0]{∫_-∞^te^γ+γ_f/ε(t-s)ds}||ϕ̂_1^ε-ϕ̂_2^ε||_C_β^-=-K/γ+γ_f||ϕ̂_1^ε-ϕ̂_2^ε||_C_β^-.

By (<ref>) and (<ref>), we have

||ℐ^ε(ϕ̂_1^ε)-ℐ^ε(ϕ̂_2^ε)||_C_β^-≤ρ(ε)||ϕ̂_1^ε-ϕ̂_2^ε||_C_β^-.

By (<ref>) and hypothesis (A_3), we have

0<ρ(0)=-K/γ+γ_f<1, ρ'(ε)=γ K/(γ+εγ_s)^2>0.

Then there is a sufficiently small constant δ>0 and a constant ρ_0∈(0,1) such that

0<ρ(ε)≤ρ_0<1 for ε∈(0,δ),

which implies that ℐ^ε is strictly contractive. Let ϕ̂^ε(t)∈ C_β^- be the unique fixed point; that is, the system (<ref>) has a unique solution ϕ̂^ε(t).

In what follows we investigate the dependence of the fixed point ϕ̂^ε(t) of the operator ℐ^ε on the initial point.

Assume the hypotheses of Lemma <ref> to be valid. Then for any (x_0,y_0),(x_0^',y_0^')∈ℝ^n, there is a δ>0 such that for ε∈(0,δ) we have

||ϕ̂^ε(t,ω,(x_0,y_0))-ϕ̂^ε(t,ω,(x_0^',y_0^'))||_C_β^-≤|x_0-x_0^'|/1-ρ(ε),

where ρ(ε) is defined in (<ref>).

Taking any (x_0,y_0),(x_0^',y_0^')∈ℝ^n, for simplicity we write ϕ̂^ε(t,ω,x_0), ϕ̂^ε(t,ω,x_0^') instead of ϕ̂^ε(t,ω,(x_0,y_0)), ϕ̂^ε(t,ω,(x_0^',y_0^')), respectively, in the following estimate. For every ω∈Ω, we have

[ ||ϕ̂^ε(t,ω,x_0)-ϕ̂^ε(t,ω,x_0^')||_C_β^-;≤ ||e^St(x_0-x_0^')+∫_0^te^S(t-s)Δĝ_1(θ_s^εω,x̂^ε(s),ŷ^ε(s))ds||_C_β^s,-; +||1/ε∫_-∞^te^F(t-s)/εΔĝ_2(θ_s^εω,x̂^ε(s),ŷ^ε(s))ds||_C_β^f,-;≤|x_0-x_0^'|+ε K/γ+εγ_s||ϕ̂^ε(t,ω,x_0)-ϕ̂^ε(t,ω,x_0^')||_C_β^- -K/γ+γ_f||ϕ̂^ε(t,ω,x_0)-ϕ̂^ε(t,ω,x_0^')||_C_β^-;= |x_0-x_0^'|+ρ(ε)||ϕ̂^ε(t,ω,x_0)-ϕ̂^ε(t,ω,x_0^')||_C_β^-, ]

where

[ Δĝ_i=ĝ_i(θ_s^εω,x̂^ε(s,ω,x_0),ŷ^ε(s,ω,x_0))-ĝ_i(θ_s^εω,x̂^ε(s,ω,x_0^'),ŷ^ε(s,ω,x_0^')), i=1, 2. ]

Then we obtain

||ϕ̂^ε(t,ω,x_0)-ϕ̂^ε(t,ω,x_0^')||_C_β^-≤1/1-ρ(ε)|x_0-x_0^'|,

which completes the proof.

By Lemma <ref>, Lemma <ref> and Lemma <ref>, we can construct the slow manifold as a random graph.

Assume that (A_1)-(A_3) hold and that ε is sufficiently small. Then the system (<ref>) has an invariant manifold ℳ^ε(ω)={(x_0,ĥ^ε(ω,x_0)): x_0∈ℝ^n_1}, where ĥ^ε(·,·):Ω×ℝ^n_1↦ℝ^n_2 is a Lipschitz continuous function with Lipschitz constant satisfying

Lip ĥ^ε(ω,·)≤-K/γ+γ_f·1/1-ρ(ε), ω∈Ω.

For any x_0∈ℝ^n_1, define

ĥ^ε(ω,x_0)= 1/ε∫_-∞^0e^-Fs/εĝ_2(θ_s^εω,x̂^ε(s),ŷ^ε(s))ds,

where (x̂^ε(s),ŷ^ε(s)) is the unique solution in C_β^- of the system (<ref>) with s≤0. It follows from Lemma <ref>, Lemma <ref>, (<ref>) and (<ref>) that

ℳ^ε(ω)={(x_0,ĥ^ε(ω,x_0)):x_0∈ℝ^n_1}.

By (<ref>) and Lemma <ref>, we have

|ĥ^ε(ω,x_0)-ĥ^ε(ω,x_0^')|≤-K/γ+γ_f·1/1-ρ(ε)|x_0-x_0^'|

for all x_0, x_0^'∈ℝ^n_1, ω∈Ω.

From Section <ref>, ℳ^ε(ω) is a random set. We now prove that ℳ^ε(ω) is invariant in the following sense:

ϕ̂^ε(s,ω,ℳ^ε(ω))⊂ℳ^ε(θ_s^εω) for s≥0.

In other words, for each (x_0,y_0)∈ℳ^ε(ω), we have ϕ̂^ε(s,ω,(x_0,y_0))∈ℳ^ε(θ_s^εω). Using the cocycle property

ϕ̂^ε(·+s,ω,(x_0,y_0))=ϕ̂^ε(·,θ_sω,ϕ̂^ε(s,ω,(x_0,y_0)))

and the fact that ϕ̂^ε(·,ω,(x_0,y_0))∈ C_β^-, it follows that ϕ̂^ε(·,θ_sω,ϕ̂^ε(s,ω,(x_0,y_0)))∈ C_β^-. Thus, ϕ̂^ε(s,ω,(x_0,y_0))∈ℳ^ε(θ_s^εω). This completes the proof.

The invariant manifold ℳ^ε(ω) is independent of the choice of γ.
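For intuition, the fixed-point construction behind ĥ^ε can be mimicked numerically. The sketch below is our own illustration (not part of the paper): it iterates a discretized Lyapunov-Perron map for the scalar pair S=1, F=-1, g_1(x,y)=sin(y)/3, g_2(x,y)=x²/6 used again in the Examples section, with a crude Euler path of the stationary process η^{1/ε}. The α-stable increments come from scipy.stats.levy_stable, and the step sizes, horizon and iteration count are ad hoc choices.

import numpy as np
from scipy.stats import levy_stable

eps, sigma, alpha, x0 = 0.05, 0.5, 1.5, 0.8
T, n = 6.0, 1501
s = np.linspace(-T, 0.0, n); ds = s[1] - s[0]

# Euler path of the stationary-type process eta on [-T, 0]:
#   d(eta) = -(1/eps) eta dt + eps**(-1/alpha) dL^alpha,  eta(-T) = 0.
dL = levy_stable.rvs(alpha, 0.0, size=n - 1, random_state=1)*ds**(1.0/alpha)
eta = np.zeros(n)
for i in range(1, n):
    eta[i] = eta[i-1] - eta[i-1]*ds/eps + dL[i-1]/eps**(1.0/alpha)

# Lyapunov-Perron iteration on a grid of (-T, 0]:
x, y = np.full(n, x0), np.zeros(n)
for _ in range(20):
    g1 = np.sin(y + sigma*eta)/3.0          # ĝ1 along the current candidate orbit
    g2 = (x**2)/6.0                         # ĝ2 (here independent of y)
    xn, yn = np.empty(n), np.empty(n)
    for i, t in enumerate(s):
        xn[i] = np.exp(t)*x0 - (np.exp(t - s[i:]) @ g1[i:])*ds      # e^{St}x0 + ∫_0^t
        yn[i] = (np.exp(-(t - s[:i+1])/eps) @ g2[:i+1])*ds/eps      # (1/ε)∫_{-∞}^t
    x, y = xn, yn

print(y[-1], x0**2/6.0)   # ĥ^ε(ω, x0) versus the leading-order graph h^0 = x0²/6

Here y[-1] is the fast coordinate of the fixed point at t=0, i.e. the numerical value of ĥ^ε(ω,x_0) for this sample path; it lands close to x_0²/6, the critical-manifold value derived in Section 5.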
Furthermore, the invariant manifold ℳ^ε(ω) exponentially attracts other dynamical orbits. Hence, ℳ^ε(ω) is a slow manifold.

Assume that (A_1)-(A_3) hold. Then the invariant manifold ℳ^ε(ω)={(x_0,ĥ^ε(ω,x_0)):x_0∈ℝ^n_1} of the slow-fast random system (<ref>) obtained in Theorem <ref> has the exponential tracking property in the following sense: for any z_0=(x_0,y_0)∈ℝ^n, there is a z_0^'=(x_0^',y_0^')∈ℳ^ε(ω) such that

|ϕ̂^ε(t,ω,z_0^')-ϕ̂^ε(t,ω,z_0)|≤ Ce^-ct|z_0^'-z_0|, t≥0,

where C>0 and c>0.

Let

[ ϕ̂^ε(t,ω,(x_0,y_0))=(x̂^ε(t,ω,(x_0,y_0)),ŷ^ε(t,ω,(x_0,y_0))),; ϕ̂^ε(t,ω,(x_0^',y_0^'))=(x̂^ε(t,ω,(x_0^',y_0^')),ŷ^ε(t,ω,(x_0^',y_0^'))) ]

be two dynamical orbits of system (<ref>) with the initial conditions

ϕ̂^ε(0,ω,(x_0,y_0))=(x_0,y_0),  ϕ̂^ε(0,ω,(x_0^',y_0^'))=(x_0^',y_0^').

Then

ψ^ε(t) =ϕ̂^ε(t,ω,(x_0^',y_0^'))-ϕ̂^ε(t,ω,(x_0,y_0))=(x̂^ε(t,ω,(x_0^',y_0^'))-x̂^ε(t,ω,(x_0,y_0)), ŷ^ε(t,ω,(x_0^',y_0^'))-ŷ^ε(t,ω,(x_0,y_0))):=(u^ε(t),v^ε(t))

satisfies the equation

{[du^ε=Su^εdt+Δĝ_1(θ_t^εω,u^ε,v^ε)dt,; dv^ε=1/εFv^εdt+1/εΔĝ_2(θ_t^εω,u^ε,v^ε)dt ].

with the nonlinear terms

Δĝ_i(θ_t^εω,u^ε,v^ε) =ĝ_i(θ_t^εω,u^ε(t)+x̂^ε(t,ω,(x_0,y_0)),v^ε(t)+ŷ^ε(t,ω,(x_0,y_0))) -ĝ_i(θ_t^εω,x̂^ε(t,ω,(x_0,y_0)),ŷ^ε(t,ω,(x_0,y_0))), i=1, 2,

and the initial condition

(u^ε(0),v^ε(0)) = (u_0,v_0) = (x_0^'-x_0,y_0^'-y_0).

By direct calculation, for t≥0, any ψ^ε(t)=(u^ε(t),v^ε(t)) satisfying

( [ u^ε(t); v^ε(t);]) =([∫_+∞^te^S(t-s)Δĝ_1(θ_s^εω,u^ε(s),v^ε(s))ds;e^Ft/εv_0+1/ε∫_0^te^F(t-s)/εΔĝ_2(θ_s^εω,u^ε(s),v^ε(s))ds;])

is a solution of (<ref>) in C_β^+.

We now use the Lyapunov-Perron transform again to prove that (<ref>) has a unique solution (u^ε(t),v^ε(t)) in C_β^+ with (x_0^',y_0^')=(u_0,v_0)+(x_0,y_0)∈ℳ^ε(ω). Clearly,

(x_0^',y_0^')∈ℳ^ε(ω)⟺y_0^'= 1/ε∫_-∞^0e^-Fs/εĝ_2(θ_s^εω, x̂^ε(s,ω,x_0^'),ŷ^ε(s,ω,x_0^'))ds⟺v_0=-y_0+1/ε∫_-∞^0e^-Fs/εĝ_2(θ_s^εω, x̂^ε(s,ω,u_0+x_0),ŷ^ε(s,ω,u_0+x_0))ds =-y_0+ĥ^ε(ω,u_0+x_0).

Taking ψ^ε=(u^ε,v^ε)∈ C_β^+, define two operators 𝒥_s^ε:C_β^+→ C_β^s,+ and 𝒥_f^ε:C_β^+→ C_β^f,+ by

[𝒥_s^ε(ψ^ε)[t]=∫_+∞^te^S(t-s)Δĝ_1(θ_s^εω,u^ε(s),v^ε(s))ds,; 𝒥_f^ε(ψ^ε)[t]= e^Ft/εv_0+1/ε∫_0^te^F(t-s)/εΔĝ_2(θ_s^εω,u^ε(s),v^ε(s))ds. ]

Moreover, the Lyapunov-Perron transform 𝒥^ε:C_β^+→ C_β^+ is given by

𝒥^ε(ψ^ε)[t] = ( [ 𝒥_s^ε(ψ^ε)[t]; 𝒥_f^ε(ψ^ε)[t]; ]).

We have the following estimates:

||𝒥_s^ε(ψ^ε)||_C_β^s,+=||∫_+∞^te^S(t-s)[ĝ_1(θ_s^εω,u^ε(s)+x̂^ε(s),v^ε(s)+ŷ^ε(s))-ĝ_1(θ_s^εω,x̂^ε(s),ŷ^ε(s))]ds||_C_β^s,+≤ Ksup_t∈ [0,∞){e^-β t∫_t^+∞e^γ_s(t-s)(|u^ε(s)|+|v^ε(s)|)ds}≤ Ksup_t∈ [0,∞){∫_t^+∞e^γ+εγ_s/ε(t-s)ds}||ψ^ε||_C_β^+≤ε K/γ+εγ_s||ψ^ε||_C_β^+

and

||𝒥_f^ε(ψ^ε)||_C_β^f,+ =||e^Ft/εv_0+1/ε∫_0^te^F(t-s)/ε[ĝ_2(θ_s^εω,u^ε(s)+x̂^ε(s),v^ε(s)+ŷ^ε(s))-ĝ_2(θ_s^εω,x̂^ε(s),ŷ^ε(s))]ds||_C_β^f,+≤K/εsup_t∈ [0,∞){e^-β t∫_0^te^γ_f(t-s)/ε(|u^ε(s)|+|v^ε(s)|)ds}+sup_t∈ [0,∞){e^-β te^γ_f/εt|v_0|}≤K/εsup_t∈ [0,∞){∫_0^te^γ+γ_f/ε(t-s)ds}||ψ^ε||_C_β^++|v_0|=-K/γ+γ_f||ψ^ε||_C_β^++|v_0|.

Hence, by (<ref>), we obtain

||𝒥^ε(ψ^ε)||_C_β^+≤ρ(ε)||ψ^ε||_C_β^++|v_0|,

where ρ(ε) is defined in (<ref>) in the proof of Lemma <ref>.

For any ψ^ε=(u^ε,v^ε), ψ̅^ε=(u̅^ε,v̅^ε)∈ C_β^+,

[||𝒥_s^ε(ψ^ε)-𝒥_s^ε(ψ̅^ε)||_C_β^s,+;= ||∫_+∞^te^S(t-s)[Δĝ_1(θ_s^εω,u^ε(s),v^ε(s)) -Δĝ_1(θ_s^εω,u̅^ε(s),v̅^ε(s))]ds||_C_β^s,+;= ||∫_+∞^te^S(t-s)[ ĝ_1(θ_s^εω,u^ε(s)+x̂^ε(s),v^ε(s)+ŷ^ε(s)) - ĝ_1(θ_s^εω,u̅^ε(s)+x̂^ε(s),v̅^ε(s)+ŷ^ε(s))]ds||_C_β^s,+;≤Ksup_t∈[0,∞){e^-β t∫_t^+∞e^γ_s(t-s)(|u^ε(s)-u̅^ε(s)|+|v^ε(s)-v̅^ε(s)|)ds};≤Ksup_t∈[0,∞){∫_t^+∞e^γ+εγ_s/ε(t-s)ds}||ψ^ε-ψ̅^ε||_C_β^+;=ε K/γ+εγ_s||ψ^ε-ψ̅^ε||_C_β^+. ]
On the one hand, by (<ref>), we have

[ |e^Ft/ε(v_0-v̅_0)|≤ e^γ_f/εt Lip ĥ^ε|u_0-u̅_0|; ≤ e^γ_f/εt Lip ĥ^ε|∫_+∞^0e^S(-s)[Δĝ_1(θ_s^εω,u^ε(s),v^ε(s))-Δĝ_1(θ_s^εω,u̅^ε(s),v̅^ε(s))]ds|; = e^γ_f/εt Lip ĥ^ε|∫_+∞^0e^S(-s)[ ĝ_1(θ_s^εω,u^ε(s)+x̂^ε(s),v^ε(s)+ŷ^ε(s))- ĝ_1(θ_s^εω,u̅^ε(s)+x̂^ε(s),v̅^ε(s)+ŷ^ε(s))]ds|; ≤ Lip ĥ^ε· Ke^γ_f/εt∫_0^+∞e^γ_s(-s)|ψ^ε(s)-ψ̅^ε(s)|ds, ]

which leads to

||e^Ft/ε(v_0-v̅_0)||_C_β^f,+ ≤ Lip ĥ^ε· K||ψ^ε(s)-ψ̅^ε(s)||_C_β^+sup_t∈[0,∞){e^(γ_f/ε-β)t∫_0^+∞e^(β-γ_s)sds}≤ε K Lip ĥ^ε/γ+εγ_s||ψ^ε(s)-ψ̅^ε(s)||_C_β^+.

On the other hand, we observe

[ ||1/ε∫_0^te^F(t-s)/ε[Δĝ_2(θ_s^εω,u^ε(s),v^ε(s))-Δĝ_2(θ_s^εω,u̅^ε(s),v̅^ε(s))]ds||_C_β^f,+; = ||1/ε∫_0^te^F(t-s)/ε[ĝ_2(θ_s^εω,u^ε(s)+x̂^ε(s),v^ε(s)+ŷ^ε(s))-ĝ_2(θ_s^εω,u̅^ε(s)+x̂^ε(s),v̅^ε(s)+ŷ^ε(s))]ds||_C_β^f,+; ≤K/εsup_t∈[0,∞){e^-β t∫_0^te^γ_f(t-s)/ε(|u^ε(s)-u̅^ε(s)|+|v^ε(s)-v̅^ε(s)|)ds}; ≤ K/εsup_t∈[0,∞){∫_0^te^γ+γ_f/ε(t-s)ds}||ψ^ε-ψ̅^ε||_C_β^+; = -K/γ+γ_f||ψ^ε-ψ̅^ε||_C_β^+. ]

Using (<ref>), (<ref>) and (<ref>), it follows that

[||𝒥_f^ε(ψ^ε)-𝒥_f^ε(ψ̅^ε)||_C_β^f,+;≤ ||e^Ft/ε(v_0-v̅_0)||_C_β^f,++||1/ε∫_0^te^F(t-s)/ε[Δĝ_2(u^ε(s),v^ε(s),θ_s^εω)-Δĝ_2(u̅^ε(s),v̅^ε(s),θ_s^εω)]ds||_C_β^f,+;≤(ε K Lip ĥ^ε/γ+εγ_s-K/γ+γ_f)||ψ^ε-ψ̅^ε||_C_β^+;≤ -(ε K^2/(γ+εγ_s)(γ+γ_f)[1-K(ε/γ+εγ_s-1/γ+γ_f)]+K/γ+γ_f)||ψ^ε-ψ̅^ε||_C_β^+. ]

By (<ref>) and (<ref>), we have

||𝒥^ε(ψ^ε)-𝒥^ε(ψ̅^ε)||_C_β^+≤ρ̅(ε)||ψ^ε-ψ̅^ε||_C_β^+

with

ρ̅(ε) =ε K/γ+εγ_s-K/γ+γ_f-ε K^2/(γ+εγ_s)(γ+γ_f)[1-K(ε/γ+εγ_s-1/γ+γ_f)].

Note that 0<K<-(γ+γ_f). Clearly, for small ε, we have 0<ρ̅(ε)<1, which implies that 𝒥^ε is a contraction in C_β^+. Thus, there is a unique fixed point ψ^ε:=(u^ε,v^ε) in C_β^+. Further, ψ^ε satisfies (x_0^',y_0^')=(u_0,v_0)+(x_0,y_0)∈ℳ^ε(ω). In fact, ψ^ε is a solution of (<ref>) in C_β^+ if and only if it is a fixed point of the Lyapunov-Perron transform (<ref>). Moreover, it follows from (<ref>) that

||ψ^ε(·)||_C_β^+≤1/1-ρ(ε)|v_0|,

which leads to

||ϕ̂^ε(t,ω,(x_0^',y_0^'))-ϕ̂^ε(t,ω,(x_0,y_0))||_C_β^+≤1/1-ρ(ε)|y_0^'-y_0|.

Then

|ϕ̂^ε(t,ω,z_0^')-ϕ̂^ε(t,ω,z_0)|≤e^-γ/ε t/1-ρ(ε)|z_0^'-z_0|,

with t≥0 and -γ/ε<0. Hence the proof is finished.

The conditions 0<ρ(ε)<1 and 0<ρ̅(ε)<1 are the crucial points in the proofs of Lemma <ref> and Theorem <ref>, respectively.

According to Theorem <ref> and Theorem <ref>, the system (<ref>) has an exponentially tracking slow manifold. By the relationship between φ^ε(t,ω) and ϕ̂^ε(t,ω), so does the slow-fast system (<ref>).

Assume that (A_1)-(A_3) hold. The slow-fast system (<ref>) with jumps has the slow manifold

M^ε(ω)=(T^ε)^-1ℳ^ε(ω)=ℳ^ε(ω)+(0,ση^1/ε(ω))={(x_0,h^ε(ω,x_0)):x_0∈ℝ^n_1}

with h^ε(ω,x_0)=ĥ^ε(ω,x_0)+ση^1/ε(ω), where T^ε is defined in (<ref>).

By the relationship between φ and φ̂ given by (<ref>), we have

φ^ε(t,ω,M^ε(ω)) =(T^ε)^-1(θ_tω,φ̂^ε(t,ω,T^ε(ω,M^ε(ω))))=(T^ε)^-1(θ_tω,φ̂^ε(t,ω,ℳ^ε(ω)))⊂(T^ε)^-1(θ_tω,ℳ^ε(θ_tω))=M^ε(θ_tω).

Note that t→η^1/ε(θ_tω) has a sublinear growth rate for 1<α<2; see <cit.>. Thus the transform (T^ε)^-1(θ_tω) does not destroy the exponential tracking property. It follows that M^ε(ω) is a slow manifold. It is worth mentioning that the dynamical orbits on M^ε(ω) are càdlàg and adapted.

Assume that (A_1)-(A_3) hold and that ε is sufficiently small. For any solution φ^ε(t,ω)=(x^ε(t,ω),y^ε(t,ω)) with initial condition z_0=(x_0,y_0) of the slow-fast system (<ref>), there exists a solution φ̅^ε(t,ω)=(x̅^ε(t,ω),y̅^ε(t,ω)) with initial point z̅_0=(x̅_0,y̅_0) on the manifold M^ε(ω), satisfying the reduced system

dx̅^ε=Sx̅^εdt+g_1(x̅^ε,h^ε(θ_tω,x̅^ε))dt,

such that

|φ^ε(t,ω)-φ̅^ε(t,ω)|≤e^-γ/ε t/1-(ε K/γ+εγ_s-K/γ+γ_f)|z_0-z̅_0|,   ω∈Ω,

with t≥0 and -γ/ε<0.
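The tracking estimate is easy to visualize by simulation. The following sketch is our own illustration (parameters are arbitrary): it integrates the scalar system used in the Examples section twice with the same α-stable sample path, once from a generic initial point and once from a point placed on the leading-order manifold y=x²/6. Since the noise is additive, it cancels in the difference of the two orbits, whose fast component then collapses at a rate of roughly e^{-t/ε}.

import numpy as np
from scipy.stats import levy_stable

eps, sigma, alpha, dt, n = 0.02, 0.5, 1.5, 1e-4, 20000
dL = levy_stable.rvs(alpha, 0.0, size=n, random_state=7)*dt**(1.0/alpha)

def step(x, y, dL_k):
    # dx = x dt + sin(y)/3 dt,   dy = -(y/eps) dt + x²/(6 eps) dt + σ ε^{-1/α} dL
    xn = x + (x + np.sin(y)/3.0)*dt
    yn = y + (-y + x**2/6.0)*dt/eps + sigma*eps**(-1.0/alpha)*dL_k
    return xn, yn

x1, y1 = 0.5, 2.0                  # generic initial point
x2, y2 = 0.5, 0.5**2/6.0           # started near the slow manifold
for k in range(n):
    x1, y1 = step(x1, y1, dL[k])
    x2, y2 = step(x2, y2, dL[k])
    if k % 5000 == 0:
        print(k*dt, abs(x1 - x2) + abs(y1 - y2))
# The gap drops from O(1) to O(ε) almost immediately; the small residual
# reflects choosing the same x(0) rather than the optimal base point z̄_0.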
§ SLOW MANIFOLDS

By the scaling t→ε t, the system (<ref>) can be rewritten as

{[ dx̂^ε=ε Sx̂^ε dt+ε g_1(x̂^ε,ŷ^ε+ση^1/ε(θ_ε tω))dt,;dŷ^ε=Fŷ^ε dt+g_2(x̂^ε,ŷ^ε+ση^1/ε(θ_ε tω))dt. ].

If we now replace η^1/ε(θ_ε tω) by ξ(θ_tω), we get the random dynamical system

{[ dx̃^ε=ε Sx̃^ε dt+ε g_1(x̃^ε,ỹ^ε+σξ(θ_tω))dt,; dỹ^ε=Fỹ^ε dt+g_2(x̃^ε,ỹ^ε+σξ(θ_tω))dt, ].

whose solution coincides in distribution with that of the system (<ref>).

The slow manifold of (<ref>) can be constructed by a procedure completely analogous to that of Section <ref>, so we omit the proof and immediately state the following theorem.

Assume that (A_1)-(A_3) hold. Given (x_0,y_0)∈ℝ^n, there exists a δ>0 such that for ε∈(0,δ) and (x̃^ε(0),ỹ^ε(0))=(x_0,y_0), the system of integral equations

( [ x̃^ε(t);ỹ^ε(t); ]) =([ e^Sε tx_0+ε∫_0^te^Sε(t-s)g_1(x̃^ε(s),ỹ^ε(s)+σξ(θ_sω))ds;∫_-∞^te^F(t-s)g_2(x̃^ε(s),ỹ^ε(s)+σξ(θ_sω))ds; ])

has a unique solution ϕ̃^ε(t,ω,(x_0,y_0))=(x̃^ε(t,ω,(x_0,y_0)),ỹ^ε(t,ω,(x_0,y_0)))∈ C_-γ^-. Further, system (<ref>) has a slow manifold

ℳ̃^ε(ω) ={(x_0,y_0)∈ℝ^n:ϕ̃^ε(·,ω,(x_0,y_0))∈ C_-γ^-}={(x_0,h̃^ε(ω,x_0)): x_0∈ℝ^n_1},

where

h̃^ε(ω,x_0)= ∫_-∞^0e^-Fsg_2(x̃^ε(s),ỹ^ε(s)+σξ(θ_sω))ds

is a Lipschitz continuous function with Lipschitz constant

Lip h̃^ε(ω,·)≤- K/(γ+γ_f)(1 -ρ(ε)), ω∈Ω.

Now we give the relationship between ℳ^ε(ω) and ℳ̃^ε(ω).

Assume (A_1)-(A_3) to be valid. The slow manifold ℳ^ε(ω) (see (<ref>)) of system (<ref>) coincides with the slow manifold ℳ̃^ε(ω) (see (<ref>)) of system (<ref>) in distribution. That is, for every x_0∈ℝ^n_1,

ĥ^ε(ω,x_0)d=h̃^ε(ω,x_0).

By the scaling s→ε s in (<ref>) and the fact that the solution of system (<ref>) coincides with that of system (<ref>) in distribution, for every x_0∈ℝ^n_1,

ĥ^ε(ω,x_0) =1/ε∫_-∞^0e^-Fs/εg_2(x̂^ε(s),ŷ^ε(s)+ση^1/ε(θ_sω))ds=∫_-∞^0e^-Fsg_2(x̂^ε(ε s),ŷ^ε(ε s)+ση^1/ε(θ_ε sω))dsd=∫_-∞^0e^-Fsg_2(x̃^ε(s),ỹ^ε(s)+σξ(θ_sω))ds=h̃^ε(ω,x_0),

which completes the proof.

We are now going to study the limiting case of the slow manifold for the system (<ref>) as ε→0 and to construct an asymptotic approximation of ℳ^ε(ω), in distribution, for sufficiently small ε>0. It also makes sense to study (<ref>) for ε=0; in that case, there still exists a slow manifold. Consider the system

dx^0(t)=0,   dy^0(t)=Fy^0(t)dt+g_2(x^0(t),y^0(t)+σξ(θ_tω))dt

with the initial condition (x^0(0),y^0(0))=(x_0,y_0). As proved in Section <ref>, we also have the following result.

The system (<ref>) has the slow manifold

ℳ^0(ω)={(x_0,h^0(ω,x_0)): x_0∈ℝ^n_1},

where

h^0(ω,x_0)=∫_-∞^0e^-Fsg_2(x_0,y^0(s)+σξ(θ_sω))ds,

whose Lipschitz constant Lip h^0 satisfies

Lip h^0≤-K/γ+γ_f+K,

and y^0(t) is the unique solution in C^f,-_-γ of the integral equation

y^0(t)=∫_-∞^te^F(t-s)g_2(x_0,y^0(s)+σξ(θ_sω))ds, t≤0.

From 0<K<-(γ+γ_f), we easily have -(γ+γ_f+K)>0.

As we will show, the slow manifold of the system (<ref>) converges to the slow manifold of the system (<ref>) in distribution. In other words, the distribution of ℳ^ε(ω) converges to the distribution of ℳ^0(ω) as ε tends to zero. The slow manifold ℳ^0(ω) is called the critical manifold for the system (<ref>).

Assume that (A_1)-(A_3) hold and that there exists a positive constant C such that |g_1(x,y)|≤ C. Then ℳ^ε(ω) converges to ℳ^0(ω) in distribution as ε→0.
In other words, for x_0∈ℝ^n_1, ω∈Ω,

ĥ^ε(ω,x_0)d=h^0(ω,x_0)+𝒪(ε)

in ℝ^n_2 as ε→0.

Applying Lemma <ref>, to prove (<ref>) we can alternatively check that

h̃^ε(ω,x_0)d⟶h^0(ω,x_0)

in ℝ^n_2 as ε→0. From (<ref>) and (<ref>), for sufficiently small ε, we have

[|h̃^ε(ω,x_0)-h^0(ω,x_0)|;= |∫_-∞^0e^-Fs[g_2(x̃^ε(s),ỹ^ε(s)+σξ(θ_sω))-g_2(x_0,y^0(s)+σξ(θ_sω))]ds|;≤ K∫_-∞^0e^-γ_fs(|x̃^ε(s)-x_0|+|ỹ^ε(s)-y^0(s)|)ds. ]

According to (<ref>), for t≤0, it follows that

[|x̃^ε(t)-x_0|= |e^Sε tx_0-x_0+ε∫_0^te^Sε(t-s)g_1(x̃^ε(s),ỹ^ε(s)+σξ(θ_sω))ds|; ≤ |e^Sε tx_0-x_0|+ε∫_t^0e^εγ_s(t-s)|g_1(x̃^ε(s),ỹ^ε(s)+σξ(θ_sω))|ds; ≤ |∫_ε t^0Sx_0e^Sudu|+ε· C·∫_t^0e^εγ_s(t-s)ds; ≤ |Sx_0|∫_ε t^0e^γ_sudu+C/γ_s(1-e^εγ_st):=C_1(1-e^εγ_st), ]

where C_1=1/γ_s(|Sx_0|+C). Using (<ref>) and (<ref>), it is clear that

[|ỹ^ε(t)-y^0(t)|= |∫_-∞^te^F(t-s)[g_2(x̃^ε(s),ỹ^ε(s)+σξ(θ_sω))-g_2(x_0,y^0(s)+σξ(θ_sω))]ds|; ≤K∫_-∞^te^γ_f(t-s)(|x̃^ε(s)-x_0|+|ỹ^ε(s)-y^0(s)|)ds; ≤K∫_-∞^te^γ_f(t-s)[C_1(1-e^εγ_s s)+|ỹ^ε(s)-y^0(s)|]ds; = KC_1(1/γ_f-εγ_se^εγ_st-1/γ_f)+K∫_-∞^te^γ_f(t-s)|ỹ^ε(s)-y^0(s)|ds. ]

Hence, we have

||ỹ^ε-y^0||_C_-γ^f,- ≤sup_t∈ (-∞,0]e^γ t[KC_1(1/γ_f-εγ_se^εγ_st-1/γ_f)+K∫_-∞^te^γ_f(t-s)|ỹ^ε(s)-y^0(s)|ds]≤ KC_1sup_t∈ (-∞,0]q(t,ε)-K/γ+γ_f||ỹ^ε-y^0||_C_-γ^f,-,

where

q(t,ε)=1/γ_f-εγ_se^(γ+εγ_s)t-1/γ_fe^γ t, t≤0.

Since

dq(t,ε)/dt =e^γ t(γ+εγ_s/γ_f-εγ_se^εγ_st-γ/γ_f) ≥ e^γ t(γ+εγ_s/γ_f-εγ_s-γ/γ_f)→0, as ε→0,

q(t,ε) is increasing with respect to the variable t for small ε>0. Then we immediately have

q(t,ε)≤ q(0,ε)=1/γ_f-εγ_s-1/γ_f, t≤0.

According to (<ref>) and (<ref>), we obtain

||ỹ^ε-y^0||_C_-γ^f,-≤ C_2(1/γ_f-εγ_s-1/γ_f), with C_2=KC_1/1+K/γ+γ_f.

Hence

|ỹ^ε(t)-y^0(t)|≤ C_2e^-γ t(1/γ_f-εγ_s-1/γ_f), t≤0.

It follows from (<ref>), (<ref>) and (<ref>) that

|h̃^ε(ω,x_0)-h^0(ω,x_0)|≤ K[C_1∫_-∞^0e^-γ_fs(1-e^εγ_ss)ds+ C_2(1/γ_f-εγ_s-1/γ_f)∫_-∞^0e^-(γ+γ_f)sds]=C_3(1/γ_f-εγ_s-1/γ_f)→0, as ε→0,

where C_3=K(C_1-C_2/γ+γ_f)=K(γ+γ_f)(|Sx_0|+C)/γ_s(γ+γ_f+K). This completes the proof.

Assume the hypotheses of Theorem <ref> to be valid. Then there exists a δ>0 such that for ε∈(0,δ), the slow manifold of system (<ref>) can be approximated in distribution as

ℳ^ε(ω)d={(x_0,h^0(ω,x_0)+ε h^1(ω,x_0)+𝒪(ε^2)): x_0∈ℝ^n_1},

where h^0(ω,x_0) is defined in (<ref>),

h^1(ω,x_0)=∫_-∞^0e^-Fs[x^1(s)g_2,x(x_0,y^0(s)+σξ(θ_sω)) +y^1(s)g_2,y(x_0,y^0(s)+σξ(θ_sω))]ds,

and (x^1(t),y^1(t)) are given by (<ref>) and (<ref>).

Applying Lemma <ref>, we can alternatively prove

h̃^ε(ω,x_0)=h^0(ω,x_0)+ε h^1(ω,x_0)+𝒪(ε^2).

For the system (<ref>), we write

[ x̃^ε(t)=x̃^0(t)+ε x^1(t)+𝒪(ε^2),  x̃^ε(0)=x_0,;ỹ^ε(t)=ỹ^0(t)+ε y^1(t)+𝒪(ε^2),  ỹ^ε(0)=y_0, ]

where x̃^0(t), ỹ^0(t), x^1(t) and y^1(t) will be determined below. The Taylor expansions of g_i(x̃^ε(t),ỹ^ε(t)+σξ(θ_tω)), i=1, 2, at the point (x̃^0(t),ỹ^0(t)+σξ(θ_tω)) are as follows:

[g_i(x̃^ε(t),ỹ^ε(t)+σξ(θ_tω));= g_i(x̃^0(t),ỹ^0(t)+σξ(θ_tω))+(x̃^ε(t)-x̃^0(t))g_i,x(x̃^0(t),ỹ^0(t)+σξ(θ_tω)); +(ỹ^ε(t)-ỹ^0(t))g_i,y(x̃^0(t),ỹ^0(t)+σξ(θ_tω))+𝒪(ε^2);=g_i(x̃^0(t),ỹ^0(t)+σξ(θ_tω))+ε x^1(t)g_i,x(x̃^0(t),ỹ^0(t)+σξ(θ_tω)); +ε y^1(t)g_i,y(x̃^0(t),ỹ^0(t)+σξ(θ_tω))+𝒪(ε^2), ]

where g_i,x(x,y) and g_i,y(x,y) denote the partial derivatives of g_i(x,y) with respect to the variables x and y, respectively.
Substituting (<ref>) into (<ref>) and equating the terms with the same power of ε, we deduce that
dx̃^0(t) =0,
dx^1(t) =Sx̃^0(t)dt+g_1(x̃^0(t),ỹ^0(t)+σξ(θ_tω))dt,
and
dỹ^0(t) =Fỹ^0(t)dt+g_2(x̃^0(t),ỹ^0(t)+σξ(θ_tω))dt,
dy^1(t) =x^1(t)g_2,x(x̃^0(t),ỹ^0(t)+σξ(θ_tω))dt+y^1(t)[F+g_2,y(x̃^0(t),ỹ^0(t)+σξ(θ_tω))]dt.
Comparing (<ref>) with (<ref>) and (<ref>), we immediately have
x^0(t)=x̃^0(t), y^0(t)=ỹ^0(t),
which implies that the system (<ref>) essentially is the system (<ref>) scaled by ε t with zero singular perturbation parameter, i.e., the system (<ref>) with ε=0. From (<ref>) and x̃^0(0)=x_0, we get
x^1(t)=Sx_0t+∫_0^tg_1(x_0,y^0(s)+σξ(θ_sω))ds.
According to (<ref>), (<ref>) and ỹ^0(0)=h^0(ω,x_0), we obtain
ỹ^0(t)=e^Fth^0(ω,x_0)+∫_0^te^-F(s-t)g_2(x_0,ỹ^0(s)+σξ(θ_sω))ds.
By (<ref>)-(<ref>) and y^1(0)=h^1(ω,x_0), we have
y^1(t) =e^{Ft+∫_0^tg_2,y(x_0,ỹ^0(s)+σξ(θ_sω))ds}h^1(ω,x_0)+∫_0^te^{-F(s-t)+∫_s^tg_2,y(x_0,ỹ^0(u)+σξ(θ_uω))du}· g_2,x(x_0,ỹ^0(s)+σξ(θ_sω))[Sx_0s+∫_0^sg_1(x_0,y^0(u)+σξ(θ_uω))du]ds.
It follows from (<ref>), (<ref>) and (<ref>) that
h̃^ε(ω,x_0)= ∫_-∞^0e^-Fsg_2(x̃^ε(s),ỹ^ε(s)+σξ(θ_sω))ds = ∫_-∞^0e^-Fs[g_2(x̃^0(s),ỹ^0(s)+σξ(θ_sω))+ε x^1(s)g_2,x(x̃^0(s),ỹ^0(s)+σξ(θ_sω)) +ε y^1(s)g_2,y(x̃^0(s),ỹ^0(s)+σξ(θ_sω))]ds+𝒪(ε^2) = ∫_-∞^0e^-Fsg_2(x_0,y^0(s)+σξ(θ_sω))ds+ε∫_-∞^0e^-Fs[x^1(s)g_2,x(x_0,y^0(s)+σξ(θ_sω))+y^1(s)g_2,y(x_0,y^0(s)+σξ(θ_sω))]ds+𝒪(ε^2)=h^0(ω,x_0)+ε h^1(ω,x_0)+𝒪(ε^2),
which concludes the proof.

§ EXAMPLES

Now we present three examples from the biological sciences to illustrate our analytical results.

Consider a two-dimensional FitzHugh-Nagumo system <cit.>
{[ dx^ε=x^ε dt+1/3sin y^ε dt,  x^ε∈ℝ; dy^ε=-1/εy^ε dt+1/6ε(x^ε)^2 dt+σε^-1/αdL_t^α,  y^ε∈ℝ ].
where x^ε is the "slow" component, y^ε is the "fast" component, γ_s=1, γ_f=-1, K<1, g_1(x^ε, y^ε)=1/3sin y^ε, g_2(x^ε, y^ε)=1/6(x^ε)^2. The scaling t→ε t in (<ref>) yields
{[ dx^ε=ε x^ε dt+1/3εsin y^ε dt,; dy^ε=-y^ε dt+1/6(x^ε)^2 dt+σ dL_t^α. ].
We can convert this two-dimensional SDE system to the following system
{[ dx̃^ε=εx̃^ε dt+ε/3sin(ỹ^ε+σξ(θ_tω))dt,; dỹ^ε=-ỹ^ε dt+1/6(x̃^ε)^2 dt, ].
where ξ(θ_tω)=∫_-∞^t e^-(t-s)dL_s^α. Denoting x̃^ε(0)=x_0, we get h^0(ω,x_0)=x^2_0/6 and
h^1(ω,x_0)=-x^2_0/3+x_0/9∫_-∞^0e^t[∫_0^tsin( x^2_0/6+σ∫_-∞^se^-(s-r)dL_r^α)ds]dt.
This produces an approximated slow manifold ℳ̃^ε(ω)={(x_0,h̃^ε(ω,x_0)): x_0∈[0,π]} of system (<ref>), where h̃^ε(ω,x_0)=h^0(ω,x_0)+ε h^1(ω,x_0)+𝒪(ε^2). Then we obtain
{[ dx̅^ε=εx̅^ε dt+1/3εsiny̅^ε dt,; y̅^ε=h̃^ε(θ_tω,x̅^ε), ].
which is the reduced system of (<ref>).

Consider a three-dimensional FitzHugh-Nagumo system
{[ dx_1^ε=1/2x_1^ε dt+(-(x_1^ε)^3+x_1^ε x_2^ε/20+y^ε/3)dt,  x_1^ε∈ℝ; dx_2^ε=1/3x_2^ε dt+(1/2sin x_1^εcos x_2^ε+(y^ε)^2/8)dt,  x_2^ε∈ℝ; dy^ε=-1/εy^ε dt-1/10εx_1^ε x_2^ε dt+σε^-1/αdL_t^α,  y^ε∈ℝ ].
where (x_1^ε,x_2^ε) is the "slow" component, y^ε is the "fast" component, γ_s=1/3, γ_f=-1, K<1, g_1(x_1^ε, x_2^ε, y^ε)=-(x_1^ε)^3+x_1^ε x_2^ε/20+y^ε/3, g_2(x_1^ε, x_2^ε, y^ε)=1/2sin x_1^εcos x_2^ε+(y^ε)^2/8, g(x_1^ε, x_2^ε, y^ε)=-1/10x_1^ε x_2^ε. We now scale the time t→ε t, so that (<ref>) can be rewritten as
{[ dx_1^ε=1/2ε x_1^ε dt+ε(-(x_1^ε)^3+x_1^ε x_2^ε/20+y^ε/3)dt,; dx_2^ε=1/3ε x_2^ε dt+ε(1/2sin x_1^εcos x_2^ε+(y^ε)^2/8)dt,; dy^ε=-y^ε dt-1/10x_1^ε x_2^ε dt+σ dL_t^α.
].
The corresponding system of random differential equations is
{[ dx̃_1^ε=1/2εx̃_1^ε dt+ε[-(x̃_1^ε)^3+x̃_1^εx̃_2^ε/20+1/3(ỹ^ε+σξ(θ_tω))]dt; dx̃_2^ε=1/3εx̃_2^ε dt+ε[1/2sinx̃_1^εcosx̃_2^ε+1/8(ỹ^ε+σξ(θ_tω))^2]dt; dỹ^ε=-ỹ^ε dt-1/10x̃_1^εx̃_2^ε dt ].
where ξ(θ_tω)=∫_-∞^t e^-(t-s)dL_s^α. Denoting (x̃_1^ε(0),x̃_2^ε(0))=(x_0,x^'_0), we get h^0(ω,(x_0,x^'_0))=-1/10x_0x_0^' and
h^1(ω,(x_0,x^'_0))=(10x_0-x_0^3/20-x_0x_0^'/12)x_0^'/10+σ(3x_0^2 -40)x_0^'/1200∫_-∞^0e^t[∫_0^t∫_-∞^se^-(s-r)dL_r^αds]dt+(x^'_0/3+1/2sin x_0cos x_0^'+(x_0x_0^')^2/800)x_0/10-σ^2 x_0/80∫_-∞^0e^t[∫_0^t(∫_-∞^se^-(s-r)dL_r^α)^2ds]dt.
We get an approximated slow manifold ℳ̃^ε(ω)={((x_0,x^'_0),h̃^ε(ω,(x_0,x^'_0))): (x_0,x^'_0)∈ℝ^2} of system (<ref>), where h̃^ε(ω,(x_0,x^'_0))=h^0(ω,(x_0,x^'_0))+ε h^1(ω,(x_0,x^'_0))+𝒪(ε^2). The reduced system of (<ref>) is given by
{[ dx̅_1^ε=1/2εx̅_1^ε dt+ε(-(x̅_1^ε)^3+x̅_1^εx̅_2^ε/20+y̅^ε/3)dt,; dx̅_2^ε=1/3εx̅_2^ε dt+ε(1/2sinx̅_1^εcosx̅_2^ε+(y̅^ε)^2/8)dt,; y̅^ε=h̃^ε(θ_tω,(x̅_1^ε,x̅_2^ε)). ].

Consider a three-dimensional FitzHugh-Nagumo system with two fast variables,
{[ dx^ε=1/3x^εdt+x^ε-(x^ε)^3+sin y_1^εcos y_2^ε/50dt,  x^ε∈ℝ; dy_1^ε=-1/εy_1^εdt+1/5εsin x^ε dt+σε^-1/α_1dL_t^α_1,  y_1^ε∈ℝ; dy_2^ε=-1/εy_2^εdt-1/16ε (x^ε)^2dt+σε^-1/α_2dL_t^α_2,  y_2^ε∈ℝ; ].
where x^ε is the "slow" component, (y_1^ε,y_2^ε) is the "fast" component, γ_s=1/3, γ_f=-1, K<1, g(x^ε, y_1^ε, y_2^ε)=x^ε-(x^ε)^3+sin y_1^εcos y_2^ε/50, g_1(x^ε, y_1^ε, y_2^ε)=1/5sin x^ε, g_2(x^ε, y_1^ε, y_2^ε)=-(x^ε)^2/16, and L_t^α_1, L_t^α_2 are independent two-sided α-stable Lévy motions on ℝ with 1<α_1,α_2<2. By using the scaling t→ε t, we have
{[ dx^ε=1/3ε x^εdt+εx^ε-(x^ε)^3+sin y_1^εcos y_2^ε/50dt,; dy_1^ε=-y_1^εdt+1/5sin x^ε dt+σ dL_t^α_1,; dy_2^ε=-y_2^εdt-1/16 (x^ε)^2dt+σ dL_t^α_2,; ].
which can be transformed into
{[ dx̃^ε=1/3εx̃^ε dt+εx̃^ε-(x̃^ε)^3+sin( ỹ_1^ε+σξ_1(θ_tω))cos(ỹ_2^ε+σξ_2(θ_tω))/50dt; dỹ_1^ε=-ỹ_1^ε dt+1/5sin x̃^ε dt; dỹ_2^ε=-ỹ_2^ε dt-1/16(x̃^ε)^2dt; ].
where ξ_i(θ_tω)=∫_-∞^t e^-(t-s)dL_s^α_i, i=1, 2. Denoting x̃(0)=x_0, we get
h^0(ω,x_0) = ( [ 1/5sin x_0; -1/16x^2_0; ])
and
h^1(ω,x_0)= ( [ 1/5cos x_0[x_0^3/50-53x_0/150+∫_-∞^0e^t∫_0^tsin( 1/5sin x_0+σ∫_-∞^se^-(s-r)dL_r^α_1)cos(-1/16x^2_0+σ∫_-∞^se^-(s-r)dL_r^α_2)/50dsdt]; -1/8x_0[x_0^3/50-53x_0/150+∫_-∞^0e^t∫_0^tsin( 1/5sin x_0+σ∫_-∞^se^-(s-r)dL_r^α_1)cos(-1/16 x^2_0+σ∫_-∞^se^-(s-r)dL_r^α_2)/50dsdt]; ]).
Then we obtain an approximated random slow manifold ℳ̃(ω)={(x_0,h̃(ω,x_0)): x_0∈ℝ} of system (<ref>), where h̃(ω,x_0)=(h̃_1(ω,x_0),h̃_2(ω,x_0))=h^0(ω,x_0)+ε h^1(ω,x_0)+𝒪(ε^2). Moreover,
{[ dx̅^ε=1/3εx̅^εdt+εx̅^ε-(x̅^ε)^3+siny̅_1^εcosy̅_2^ε/50dt,; y̅_1^ε=h̃_1(θ_tω,x̅^ε),; y̅_2^ε=h̃_2(θ_tω,x̅^ε),; ].
is the reduced system of (<ref>).

Acknowledgements. The authors are grateful to Björn Schmalfuß, René Schilling, Georg Gottwald, Jicheng Liu and Jinlong Wei for helpful discussions on stochastic differential equations driven by Lévy motions.

Ap04 D. Applebaum, Lévy Processes and Stochastic Calculus. Cambridge University Press, Cambridge, UK, 2004. Ar03 L. Arnold, Random Dynamical Systems, Springer Monographs in Mathematics, 2003. Bo89 P. Boxler, A stochastic version of center manifold theory. Probability Theory and Related Fields, 1989, 83(4): 509-545. BB06 N. Berglund, B. Gentz, Noise-induced phenomena in slow-fast dynamical systems: a sample-paths approach. Springer Science & Business Media, 2006. BSW14 B. Böttcher, R. L. Schilling, J. Wang, Lévy matters III: Lévy-type processes: construction, approximation and sample path properties. Springer, 2014. Ba98 I. Bahar, A. R. Atilgan, M. C. Demirel, B.
Erman, Vibrational dynamics of folded proteins: significance of slow and fast motions in relation to function and stability. Physical Review Letters, 1998, 80(12): 2733-2736. Ca10 T. Caraballo, J. Duan, K. Lu, B. Schmalfuß, Invariant manifolds for random and stochastic partial differential equations. Advanced Nonlinear Studies, 2010, 10(1): 23-52. CS10 I. Chueshov, B. Schmalfuß, Master-slave synchronization and invariant manifolds for coupled stochastic systems. Journal of Mathematical Physics, 2010, 51(10): 102702. CDZ14 G. Chen, J. Duan, J. Zhang, Slow foliation of a slow-fast stochastic evolutionary system. Journal of Functional Analysis, 2014, 267(8): 2663-2697. Du15 J. Duan, An introduction to stochastic dynamics. Cambridge University Press, 2015. DW14 J. Duan, W. Wang, Effective dynamics of stochastic partial differential equations. Elsevier, 2014. DLS03 J. Duan, K. Lu, B. Schmalfuß, Invariant manifolds for stochastic partial differential equations. Annals of Probability, 2003: 2109-2135. DLS04 J. Duan, K. Lu, B. Schmalfuß, Smooth stable and unstable manifolds for stochastic evolutionary equations. Journal of Dynamics and Differential Equations, 2004, 16(4): 949-972. FLD12 H. Fu, X. Liu, J. Duan, Slow manifolds for multi-time-scale stochastic evolutionary systems. Communications in Mathematical Sciences, 2013, 11(1). GL13 A. Gu, Y. Li, Synchronization of Coupled Stochastic Systems Driven by Non-Gaussian Lévy Noises. arXiv preprint arXiv:1312.2659, 2013. Ge05 C. W. Gear, T. J. Kaper, I. G. Kevrekidis, A. Zagaris, Projecting to a slow manifold: Singularly perturbed systems and legacy codes. SIAM Journal on Applied Dynamical Systems, 2005, 4(3): 711-732. IM02 P. Imkeller, A. H. Monahan, Conceptual stochastic climate models. Stochastics and Dynamics, 2002, 2(03): 311-326. Ku04 H. Kunita, Stochastic differential equations based on Lévy processes and stochastic flows of diffeomorphisms, Real and stochastic analysis. Birkhäuser Boston, 2004: 305-373. Ku16 K. Kümmel, On the dynamics of Marcus type stochastic differential equations, Doctoral thesis, Friedrich-Schiller-Universität Jena, 2016. KP13 Y. Kabanov, S. Pergamenshchikov, Two-scale stochastic systems: asymptotic analysis and control. Springer Science & Business Media, 2013. K13 X. Kan, J. Duan, I. G. Kevrekidis, A. J. Roberts, Simulating stochastic inertial manifolds by a backward-forward approach. SIAM Journal on Applied Dynamical Systems, 2013, 12(1): 487-514. LAD11 X. Liu, X. Sun, J. Duan, A Perspective on Dynamical Systems under Non-Gaussian Fluctuations. Progresses in Some Fields of Mathematical Sciences, Science Press, Beijing, 2011. L10 X. Liu, J. Duan, J. Liu, P. E. Kloeden, Synchronization of systems of Marcus canonical equations driven by α-stable noises. Nonlinear Analysis: Real World Applications, 2010, 11(5): 3437-3445. Pr04 P. E. Protter, Stochastic Integration and Differential Equations. 2nd edn. Springer, New York, 2004. PZ07 S. Peszat, J. Zabczyk, Stochastic partial differential equations with Lévy noise: An evolution equation approach. Cambridge University Press, 2007. PK08 A. Patel, B. Kosko, Stochastic resonance in continuous and spiking neuron models with Lévy noise. IEEE Transactions on Neural Networks, 2008, 19(12): 1993-2008. POC99 J. R. Pradines, G. V. Osipov, J. J. Collins, Coherence resonance in excitable and oscillatory systems: The essential role of slow and fast dynamics. Physical Review E, 1999, 60(6): 6407-10. Ro05 S.
Rong, Theory of stochastic differential equations with jumps and applications: mathematical and analytical techniques with applications to engineering. Springer, New York, 2005. Ro08 A. J. Roberts, Normal form transforms separate slow and fast modes in stochastic dynamical systems. Physica A: Statistical Mechanics and its Applications, 2008, 387(1): 12-38. RDJ15 J. Ren, J. Duan, C. K. R. T. Jones, Approximation of random slow manifolds and settling of inertial particles under uncertainty. Journal of Dynamics and Differential Equations, 2015, 27(3-4): 961-979. RDW15 J. Ren, J. Duan, X. Wang, A parameter estimation method based on random slow manifolds. Applied Mathematical Modelling, 2015, 39(13): 3721-3732. Sc96 M. Scheutzow, On the perfection of crude cocycles. Random and Computational Dynamics, 1996, 4(4): 235-256. SY83 K. Sato, M. Yamazato, Stationary processes of Ornstein-Uhlenbeck type. Probability Theory and Mathematical Statistics, 1983: 541-551. SP12 R. L. Schilling, L. Partzsch, Brownian Motion: An Introduction to Stochastic Processes. De Gruyter, Berlin, 2012. SS08 B. Schmalfuss, K. R. Schneider, Invariant manifolds for random dynamical systems with slow and fast variables. Journal of Dynamics and Differential Equations, 2008, 20(1): 133-164. Wa95 T. Wanner, Linearization of random dynamical systems, in Dynamics reported. Springer, Berlin, 1995, 203-268. Wa02 H. Wang, Minimum entropy control of non-Gaussian dynamic stochastic systems. IEEE Transactions on Automatic Control, 2002, 47(2): 398-403. WR13 W. Wang, A. J. Roberts, Slow manifold and averaging for slow-fast stochastic differential system. Journal of Mathematical Analysis and Applications, 2013, 398(2): 822-839. WR12 W. Wang, A. J. Roberts, J. Duan, Large deviations and approximations for slow-fast stochastic reaction-diffusion equations. Journal of Differential Equations, 2012, 253(12): 3501-3522. Wo01 W. A. Woyczyński, Lévy processes in the physical sciences. Lévy processes. Birkhäuser Boston, 2001: 241-266. Zi08 G. Ziglio, SPDEs with Random Dynamic Boundary Conditions, Doctoral dissertation, 2008.
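To complement the closed-form reductions in the examples above, here is a minimal numerical sketch (not part of the paper) of the scaled two-dimensional system of Example 1. It uses an Euler-Maruyama scheme with α-stable increments drawn via SciPy's levy_stable (an assumed dependency), and checks that y(t) - σξ(t) settles near the critical manifold h^0(x) = x²/6, where ξ is the stationary Ornstein-Uhlenbeck-type process driven by the same Lévy path:

```python
# Minimal sketch (not from the paper): Euler-Maruyama simulation of
#   dx = eps*(x + sin(y)/3) dt,   dy = (-y + x^2/6) dt + sigma dL_t^alpha,
# the scaled FitzHugh-Nagumo system of Example 1. The transformed fast
# variable y - sigma*xi should track h^0(x) = x^2/6, with xi the
# stationary solution of d(xi) = -xi dt + dL_t^alpha.
import numpy as np
from scipy.stats import levy_stable  # assumes SciPy is available

rng = np.random.default_rng(0)
alpha, sigma, eps = 1.5, 0.1, 0.01
dt, n_steps = 1e-3, 200_000

# Increments of an alpha-stable Levy motion over dt scale like dt**(1/alpha).
dL = levy_stable.rvs(alpha, 0.0, scale=dt**(1.0/alpha), size=n_steps,
                     random_state=rng)

x, y, xi = 1.0, 0.0, 0.0
for k in range(n_steps):
    x += eps*(x + np.sin(y)/3.0)*dt
    y += (-y + x*x/6.0)*dt + sigma*dL[k]
    xi += -xi*dt + dL[k]   # OU-type process xi(theta_t omega)

print("y - sigma*xi =", y - sigma*xi, "  vs  h0(x) = x^2/6 =", x*x/6.0)
```

Because the noise is heavy-tailed, individual runs can show large excursions right after a big jump; the agreement with the slow manifold is to be read in the sense of the distributional convergence proved above, not pathwise.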
http://arxiv.org/abs/1702.08213v3
{ "authors": [ "Shenglan Yuan", "Jianyu Hu", "Xianming Liu", "Jinqiao Duan" ], "categories": [ "math.DS" ], "primary_category": "math.DS", "published": "20170227100534", "title": "Slow manifolds for stochastic systems with non-Gaussian stable Lévy noise" }
Achievement and Friends: Key Factors of Player Retention Vary Across Player Levels in Online Multiplayer Games Kunwoo Park^*    Meeyoung Cha^**    Haewoon Kwak^†    Kuan-Ta Chen^^*Graduate School of Web Science Technology, School of Computing, KAIST, South Korea ^**Graduate School of Culture Technology, KAIST, South Korea ^†Qatar Computing Research Institute, Hamad Bin Khalifa University, Qatar ^Academia Sinica, Taiwan {kw.park,meeyoungcha}@kaist.ac.kr   haewoon@acm.org   ktchen@iis.sinica.edu.tw
==============================================================================================================================================================================================================================================================================================================================================================================================================================================

We present a robust approach for detecting intrinsic sentence importance in news, by training on two corpora of document-summary pairs. When used for single-document summarization, our approach, combined with the "beginning of document" heuristic, outperforms a state-of-the-art summarizer and the beginning-of-article baseline in both automatic and manual evaluations. These results represent an important advance because in the absence of cross-document repetition, single-document summarizers for news have not been able to consistently outperform the strong beginning-of-article baseline.

§ INTRODUCTION

To summarize a text, one has to decide what content is important and what can be omitted. With a handful of exceptions <cit.>, modern summarization methods are unsupervised, relying on on-the-fly analysis of the input text to generate the summary, without using indicators of intrinsic importance learned from previously seen document-summary pairs. This state of the art is highly unintuitive, as it stands to reason that some aspects of importance are learnable. Recent work has demonstrated that indeed supervised systems can perform well without sophisticated features when sufficient training data is available <cit.>. In this paper we demonstrate that in the context of news it is possible to learn an accurate predictor to decide if a sentence contains content that is summary-worthy. We show that the predictors built in our approach are remarkably consistent, providing almost identical predictions on a held-out test set, regardless of the source of training data. Finally we demonstrate that in the single-document summarization task our predictor, combined with preference for content that appears at the beginning of the news article, results in a summarizer significantly better than a state-of-the-art global optimization summarizer. The results hold for both manual and automatic evaluations. In applications, the detector of unimportance that we have developed can potentially improve snippet generation for news stories, detecting if the sentences at the beginning of the article are likely to form a good summary or not.
This line of investigation was motivated by our previous work showing that in many news sub-domains the beginning of the article is often an uninformative teaser which is not suitable as an indicative summary of the article <cit.>.

§ CORPORA

One of the most cited difficulties in using supervised methods for summarization has been the lack of suitable corpora of document-summary pairs where each sentence is clearly labeled as either important or not <cit.>. We take advantage of two currently available resources: archival data from the Document Understanding Conferences (DUC) <cit.> and the New York Times (NYT) corpus (<https://catalog.ldc.upenn.edu/LDC2008T19>). The DUC data contains document-summary pairs in which the summaries were produced for research purposes during the preparation of a shared task for summarization. The NYT dataset contains thousands of such pairs, and the summaries were written by information scientists working for the newspaper.

DUC2002 is the latest dataset from the DUC series in which annotators produced extractive summaries, consisting of sentences taken directly from the input. DUC2002 contains 64 document sets. The annotators created two extractive summaries for two summary lengths (200 and 400 words), for a total of four extracts per document set. In this work, a sentence from the original article that appears in at least one of the human extracts is labeled as important (summary-worthy). All other sentences in the document are treated as unlabeled. Unlabeled sentences could be truly not summary-worthy, but they may also be included in a summary by a different annotator <cit.>. We address this possibility in Section <ref>, treating the data as partially labeled.

For the NYT corpus, we work with 19,086 document-summary pairs published between 1987 and 2006 from the Business section. Table <ref> in Section 5 shows a summary from the NYT corpus. These are abstractive, containing a mix of informative sentences from the original article along with abstractive re-telling of the main points of the article, as well as some meta-information such as the type of article and a list of the photos accompanying the article. It also shows an example of a lead (opening) paragraph along with the summary created by the system we propose, InfoFilter, with the unimportant sentence removed. In order to label sentences in the input, we employ Jacana <cit.> for word alignment in a monolingual setting for all pairs of article-summary sentences. A sentence from the input is labeled as important (summary-worthy) if the alignment score between the sentence and a summary sentence is above a threshold, which we empirically set to 14 based on preliminary experiments. All other sentences in the input are treated as unlabeled. Again, an unlabeled sentence could be positive or negative.

§.§ Learning from Positive and Unlabeled Samples

Learning from positive (e.g., important in this paper) and unlabeled samples can be achieved by the methods proposed in <cit.>.
Following <cit.>, we use a two-stage approach to train a detector of sentence importance from positive and unlabeled examples. Let y be the importance prediction for a sample, where y=1 is expected for any positive sample and y=0 for any negative sample. Let o be the ground-truth labels obtained by the method described in Section <ref>, where o=1 means that the sentence is labeled as positive (important) and o=0 means unlabeled.

In the first stage, we build an estimator e, equal to the probability that a sample is labeled as positive given that it is indeed positive, p(o=1|y=1). We first train a logistic regression (LR) classifier with positive and unlabeled samples, treating the unlabeled samples as negative. Then e can be estimated as Σ_x∈ P(LR(x)/|P|), where P is the set of all labeled positive samples, and LR(x) is the probability of a sample x being positive, as predicted by the LR classifier. We then use the estimator e to calculate w=p(y=1|o=0), the probability for an unlabeled sample to be positive:
w=(LR(x)/e)/((1-LR(x))/(1-e)).
A large w means an unlabeled sample is likely to be positive, whereas a small w means the sample is likely to be negative.

In the second stage, a new dataset is constructed from the original dataset. We first make two copies of every unlabeled sample, assigning the label 1 with weight w to one copy and the label 0 with weight 1-w to the other. Positive samples remain the same, and the weight for each positive sample is 1. We call this dataset the relabeled data. We train an SVM classifier with a linear kernel on the relabeled data. This is our final detector of important/unimportant sentences.

§.§ Features

The classifiers for both stages use dictionary-derived features which indicate the types/properties of a word, along with several general features.

MRC The MRC Psycholinguistic Database <cit.> is a collection of word lists with associated word attributes according to judgements by multiple people. The degree to which a word is associated with an attribute is given as a score within a range. We divide the score range into 230 intervals. The number of intervals was decided empirically on a small development set and was inspired by prior work on feature engineering for real-valued scores <cit.>. Each interval corresponds to a feature; the value of the feature is the fraction of words in a sentence whose score belongs to this interval. Six attributes are selected: imagery, concreteness, familiarity, age-of-acquisition, and two meaningfulness attributes. In total, there are 1,380 MRC features.

LIWC LIWC is a dictionary that groups words in different categories, such as positive or negative emotions, self-reference, etc., and other language dimensions relevant in the analysis of psychological states. Sentences are represented by a histogram of categories, indicating the percentage of words in the sentence associated with each category. We employ the LIWC2007 English dictionary, which contains 4,553 words with 64 categories.

INQUIRER The General Inquirer <cit.> is another dictionary of 7,444 words, grouped in 182 general semantic categories. For instance, the word absurd is mapped to tags NEG and VICE. Again, a sentence is represented with the histogram of categories occurring in the sentence.
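The two-stage scheme of Section 3.1 is compact enough to sketch in code. The following is a minimal illustration, not the authors' implementation: it assumes scikit-learn (the paper reports using Liblinear), and X_pos, X_unl are hypothetical feature matrices for the labeled-positive and unlabeled sentences.

```python
# Minimal sketch (not the authors' code) of the two-stage PU scheme above.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC

def two_stage_pu(X_pos, X_unl):
    # Stage 1: LR with unlabeled treated as negative, then estimate
    # e = p(o=1 | y=1) as the mean LR score over the labeled positives.
    X = np.vstack([X_pos, X_unl])
    o = np.r_[np.ones(len(X_pos)), np.zeros(len(X_unl))]
    lr = LogisticRegression(max_iter=1000).fit(X, o)
    e = lr.predict_proba(X_pos)[:, 1].mean()

    # Weight of each unlabeled sample being positive,
    # w = (LR(x)/e) / ((1-LR(x))/(1-e)), clipped into [0, 1].
    p = lr.predict_proba(X_unl)[:, 1]
    w = np.clip((p / e) / ((1.0 - p) / (1.0 - e)), 0.0, 1.0)

    # Stage 2: duplicate each unlabeled sample, label 1 with weight w and
    # label 0 with weight 1-w; positives keep label 1 with weight 1.
    X2 = np.vstack([X_pos, X_unl, X_unl])
    y2 = np.r_[np.ones(len(X_pos)), np.ones(len(X_unl)), np.zeros(len(X_unl))]
    s2 = np.r_[np.ones(len(X_pos)), w, 1.0 - w]
    return LinearSVC().fit(X2, y2, sample_weight=s2)
```

The clipping of w to [0,1] is a numerical safeguard added here; the weights otherwise enter the second-stage SVM exactly as in the duplicated-copy construction described above.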
General We also include features that capture general attributes of sentences, including: total number of tokens, number of punctuation marks, and whether the sentence contains exclamation marks, question marks, colons, or double quotation marks.

§ EXPERIMENTS ON IMPORTANCE DETECTION

We train a classifier separately for the DUC2002 and the NYT 1986-2006 corpora. The DUC model is trained using the articles and summaries from the DUC2002 dataset, where 1,833 sentences in total appear in the summaries. We also randomly sample 2,200 non-summary sentences as unlabeled samples to balance the training set. According to the criteria described in the NYT corpus section, there are 22,459 (14.1%) positive sentences selected from a total of 158,892 sentences. Sentences with Jacana alignment scores less than or equal to 10 form the unlabeled set, including 20,653 (12.9%) unlabeled sentences in total. Liblinear <cit.> is used for training the two-stage classifiers.

§.§ Test Set

The test set consists of 1,000 sentences randomly selected from the NYT dataset for the year 2007. Half of the sentences are from the Business section, from which the training data was drawn. The rest are from the U.S. International Relations section (Politics for short), to test the stability of prediction across topic domains. Three students from the University of Akron annotated whether the test sentences contain important summary-worthy information. For each test (source) sentence from the original article, we first apply Jacana to align it with every sentence in the corresponding summary. The summary sentence with the highest matching score is picked as the target sentence for the source sentence. Each pair of source and target sentences is presented to students, and they are asked to mark if the sentences share information. Sentences from the original article that contribute content to the most similar summary sentence are marked as positive; those that do not are marked as negative. The pairwise annotator agreements are all above 80%, and the pairwise Kappa ranges from 0.73 to 0.79. The majority vote becomes the label of the source (article) sentence. Table <ref> presents the distribution of final labels. The classes are almost balanced, with slightly more negative pairs overall.

§.§ Evaluation Results

In the process above, we have obtained a set of article sentences that contribute to the summary (positive class) or not (negative class)[We assume that an article sentence not contributing to the summary does not contribute any content to the summary sentence that is closest to the article sentence.]. Table <ref> shows the evaluation results on the human-annotated test set. The baseline assumes that all sentences are summary-worthy. Although the unimportant class is the majority (see Table <ref>), predicting all test samples as not summary-worthy is less useful in real applications because we cannot output an empty text as a summary. Each row in Table <ref> corresponds to a model trained with one training set. We use dictionary features to build the models, i.e., NYT Model and DUC Model. We also evaluate the effectiveness of the general features by excluding them from the dictionary features, i.e., NYT w/o general and DUC w/o general. Precision, recall and F-1 score are presented for all models. Models trained on the NYT corpus and DUC corpus are both significantly better than the baseline, with p<0.0001 for McNemar's test. The NYT model is better than the DUC model overall according to F-1.
The results also show a noticeable performance drop when general features are removed. We also trained classifiers with bag-of-words (BOW) features for NYT and DUC respectively, i.e., BOW-NYT and BOW-DUC. The classifiers trained on BOW features still outperform the baseline but are not as good as the models with dictionary and general sentence-property features.

§.§ NYT Model vs. DUC Model

Further, we study the agreement between the two models in terms of prediction outcome. First, we compare the prediction outcomes from the two models using the NYT2007 test set. The Spearman's correlation coefficient between the outputs from the two models is around 0.90, showing that our model is very robust and independent of the training set. Then we repeat the study on a much larger dataset, using articles from the DUC 2004 multi-document summarization task. There are no single-document summaries in that year, but this is not a problem, because we use the data simply to study the agreement between the two models, i.e., whether they predict the same summary-worthy status for sentences, not to measure the accuracy of prediction. There are 12,444 sentences in this dataset. The agreement between the two models is very high (87%) for both test sets. Consistent with the observation above, the DUC model is predicting intrinsic importance more aggressively. Only for a handful of sentences does the NYT model predict positive (important) while the DUC model predicts negative (not important). We compute Spearman's correlation coefficients between the posterior probabilities for sentences from the two models. The correlation is around 0.90, indicating a great similarity in the predictions of the two models.

§ SUMMARIZATION

We propose two importance-based approaches to improving single-document summarization. In the first approach, InfoRank, the summary is constructed solely from the predictions of the sentence importance classifier. Given a document, we first apply the sentence importance detector on each sentence to get the probability of this sentence being intrinsically important. Then we rank the sentences by the probability score to form a summary within the required length. The second approach, InfoFilter, uses the sentence importance detector as a pre-processing step. We first apply the sentence importance detector on each sentence, in the order they appear in the article. We keep only sentences predicted to be summary-worthy until the length limit is reached. This retains the preference for sentences that appear at the beginning of the article, while filtering out sentences that appear early but are not informative.

§.§ Results on Automatic Evaluation

The model trained on the NYT corpus is used in the experiments here. Business and politics articles (100 each) with human-generated summaries from NYT2007 are used for evaluation. Summaries generated by summarizers are restricted to 100 words. Summarizer performance is measured by ROUGE-1 (R-1) and ROUGE-2 (R-2) scores <cit.>. Several summarization systems are used for comparison here, including LeadWords, which picks the first 100 words as the summary; RandomRank, which ranks the sentences randomly and then picks the most highly ranked sentences to form a 100-word summary; and Icsisumm <cit.>, a state-of-the-art multi-document summarizer <cit.>. Table <ref> shows the ROUGE scores for all summarizers. InfoRank significantly outperforms Icsisumm on R-1 score and is on par with it on R-2 score. Both InfoRank and Icsisumm outperform RandomRank by a large margin.
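Before turning to the detailed results, note that the InfoFilter procedure defined above reduces to a few lines. Here is a minimal sketch (hypothetical helper names, not the authors' code), in which `detector` wraps the Section-3 importance classifier and `sentences` is a pre-split list of strings:

```python
# Minimal sketch of InfoFilter: walk sentences in document order, keep those
# the importance detector accepts, stop once the word budget is reached.
def info_filter(sentences, detector, budget=100):
    summary, used = [], 0
    for sent in sentences:
        if not detector.is_important(sent):
            continue  # an unimportant early sentence is skipped and
                      # effectively replaced by the next summary-worthy one
        summary.append(sent)
        used += len(sent.split())
        if used >= budget:
            break     # a real system would truncate to exactly 100 words
    return " ".join(summary)
```

With a detector that accepts every sentence, this degenerates to the LeadWords baseline, which makes the two systems directly comparable.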
These results show that the sentence importance detector is capable of identifying the summary-worthy sentences. LeadWords is still a very strong baseline single-document summarizer. InfoFilter achieves the best result and greatly outperforms LeadWords in both R-1 and R-2 scores. The p value of the Wilcoxon signed-rank test is less than 0.001, indicating that the improvement is significant. Table <ref> shows an example of a lead paragraph along with the InfoFilter summary with the unimportant sentence removed. The InfoFilter summarizer is similar to the LeadWords summarizer, but it removes any sentence predicted to be unimportant and replaces it with the next sentence in the original article that is predicted to be summary-worthy. Among the 200 articles, 116 have at least one uninformative sentence removed. The most frequent number of removed sentences is two. There are 17 articles for which more than three sentences are removed.

§.§ Results on Human Evaluation

We also carry out human evaluation, to better compare the relative performance of the LeadWords and InfoFilter summarizers. Judgements are made for each of the 116 articles in which at least one sentence had been filtered out by InfoFilter. For each article, we first let annotators read the summary from the NYT2007 dataset and then the two summaries generated by LeadWords and InfoFilter, respectively. Then we ask annotators if one of the summaries covers more of the information presented in the NYT2007 summary. The annotators are given the option to indicate that the two summaries are equally informative with respect to the content of the NYT summary. We randomize the order of sentences in both LeadWords and InfoFilter summaries when presenting to annotators. The tasks are published on Amazon Mechanical Turk (AMT), and each summary pair is assigned to 8 annotators. The majority vote is used as the final label. According to human judgement, InfoFilter generates better summaries for 55 of the 116 inputs; for 39 inputs, the LeadWords summary is judged better. The result is consistent with the ROUGE scores, showing that InfoFilter is the better summarizer.

§ CONCLUSION

In this paper, we presented a detector for sentence importance and demonstrated that it is robust regardless of the training data. The importance detector greatly outperforms the baseline. Moreover, we tested the predictors on several datasets for summarization. In single-document summarization, the ability to identify unimportant content allows us to significantly outperform the strong lead baseline.
http://arxiv.org/abs/1702.07998v1
{ "authors": [ "Yinfei Yang", "Forrest Sheng Bao", "Ani Nenkova" ], "categories": [ "cs.CL" ], "primary_category": "cs.CL", "published": "20170226080726", "title": "Detecting (Un)Important Content for Single-Document News Summarization" }
http://arxiv.org/abs/1702.08458v1
{ "authors": [ "Anthony M. Charles", "Finn Larsen", "Daniel R. Mayerson" ], "categories": [ "hep-th", "gr-qc" ], "primary_category": "hep-th", "published": "20170227190003", "title": "Non-Renormalization For Non-Supersymmetric Black Holes" }
Department of Physics, McCullough Building, Stanford University, Stanford, California 94305-4045, USA
Department of Physics, McCullough Building, Stanford University, Stanford, California 94305-4045, USA

We study two Weyl semimetal generalizations in five dimensions (5d) which have Yang monopoles and linked Weyl surfaces in the Brillouin zone, respectively, and carry the second Chern number as a topological number. In particular, we show a Yang monopole naturally reduces to a Hopf link of two Weyl surfaces when the 𝐓𝐏 (time-reversal combined with space-inversion) symmetry is broken. We then examine the phase transition between insulators with different topological numbers in 5d. In analogy to the 3d case, 5d Weyl semimetals emerge as intermediate phases during the topological phase transition.

71.10.-w 73.20.At 14.80.Hv

Weyl Semimetal and Topological Phase Transition in Five Dimensions
Shou-Cheng Zhang
==================================================================

§ INTRODUCTION

The discovery of topological states of matter has greatly enriched the variety of condensed matter in nature <cit.>. These states usually undergo phase transitions involving a change of topology of the ground state wave function, which are called topological phase transitions (TPTs). In three dimensions (3d), a significant topological state is the Weyl semimetal <cit.>, which plays a key role in TPTs of 3d insulators. An example is the time-reversal invariant (TRI) transition between a noncentrosymmetric topological insulator (TI) <cit.> and a normal insulator (NI) in 3d, during which an intermediate TRI Weyl semimetal phase inevitably occurs <cit.>. Another example is the TPT between different 3d Chern insulators (CI) <cit.>, where an intermediate Weyl semimetal phase is also required <cit.>. In both examples, the topological numbers of the insulators are transferred via Weyl points of the Weyl semimetal phase, which behave as "Dirac monopoles" of the Berry curvature in the Brillouin zone (BZ). The electrons around each Weyl point obey the Weyl equation, with a chirality equal to the first Chern number C_1=±1 of the Berry curvature around the Weyl point.

Recently, there has been a revival of interest in gapless topological phases in higher dimensions, aimed at understanding roles of higher-dimensional topological numbers <cit.>. In particular, the Weyl semimetal can be generalized to 5d in two ways: the first is to promote Weyl fermions in 3d to chiral fermions in 5d, which are described by a 4-component spinor and have a 2-fold degenerate linear energy spectrum. The Dirac monopoles associated with the Weyl points in 3d become the Yang monopoles in 5d <cit.>, which carry a non-Abelian second Chern number C_2^NA=±1 of the SU(2) Berry curvature of the 2-fold degenerate valence (conduction) band <cit.>. The Yang monopole was first introduced into condensed matter physics in the construction of the four-dimensional quantum Hall effect <cit.>. The second way is to keep the energy spectrum non-degenerate, while promoting the Weyl points to linked 2d Weyl surfaces in the 5d BZ <cit.>. In this case, each Weyl surface carries an Abelian second Chern number C_2^A∈ℤ of the U(1) Berry curvature, which is equal to the sum of its linking number with all the other Weyl surfaces <cit.>.
Two natural questions are then whether the two 5d Weyl semimetal generalizations are related, and whether they play the role of intermediate phases during the TPT of certain gapped topological states of matter in 5d.

In this letter, we show the two 5d Weyl semimetal generalizations, namely, the Yang monopole and the linked Weyl surfaces in 5d, are closely related via the 𝐓𝐏 symmetry breaking, where 𝐓 and 𝐏 stand for time-reversal and space-inversion, respectively. We then demonstrate they also arise as intermediate phases in the TPT between 5d CI and NI, and between 5d TI and NI with particle-hole symmetry 𝐂 that satisfies 𝐂^2=-1 <cit.>. In analogy to 3d cases, the Weyl arcs on the boundary of the 5d Weyl semimetal <cit.> naturally interpolate between the surface states of different gapped topological phases.

§ YANG MONOPOLES AND LINKED WEYL SURFACES

In 3d, a Weyl semimetal is known as a semimetal which is gapless at several points in the BZ, i.e., Weyl points. The low energy bands near a Weyl point are generically given by a 2×2 Weyl fermion Hamiltonian H_W(𝐤)=∑_i=1^3v_i(k_i-k^W_i)σ^i up to an identity term, where 𝐤 is the momentum, and σ^i (i=1,2,3) are the Pauli matrices. The Weyl point is located at 𝐤^W, while the velocities v_i≠0 (i=1,2,3) play the role of the speed of light. By defining the U(1) Berry connection a_i(𝐤)=i⟨ u_𝐤|∂_k_i|u_𝐤⟩ of the valence (conduction) band wavefunction |u_𝐤⟩, one can show the first Chern number of the Berry curvature f_ij=∂_k_ia_j-∂_k_ja_i on a 2d sphere enclosing 𝐤^W is C_1=sgn(v_1v_2v_3)=±1, where sgn(x) is the sign of x. Therefore, the Weyl point 𝐤^W can be viewed as a Dirac monopole of the Berry connection.

The first way of generalizing the Weyl semimetal to 5d is to replace the Weyl fermions above by the chiral Dirac fermions in 5d:
H_Y(𝐤)=∑_i=1^5v_i(k_i-k^Y_i)γ^i ,
where 𝐤 is now the 5d momentum, and γ^i (1≤ i≤5) are the 4×4 Gamma matrices satisfying the anticommutation relation {γ^i,γ^j}=2δ^ij. The band structure of such a Hamiltonian is 4-fold degenerate at 𝐤^Y, and is 2-fold degenerate everywhere else with a linear dispersion. The 2-fold degeneracy enables us to define a U(2) Berry connection a^αβ_i(𝐤)=i⟨ u^α_𝐤|∂_k_i|u^β_𝐤⟩, where |u^α_𝐤⟩ (α=1,2) denote the two degenerate wavefunctions of the valence bands <cit.>. One can then show the non-Abelian second Chern number C_2^NA on a 4d sphere enclosing 𝐤^Y is
C_2^NA=∮_S^4 d^4𝐤 ϵ^ijkl[tr(f_ijf_kl)-tr(f_ij)tr(f_kl)]/32π^2=±1 ,
where f_ij=∂_k_ia_j-∂_k_ja_i-i[a_i,a_j] is the non-Abelian U(2) Berry curvature. In this calculation, only the traceless SU(2) part of f_ij contributes. Therefore, 𝐤^Y can be viewed as a Yang monopole in the BZ, which is the source of an SU(2) magnetic field in 5d <cit.>.

However, the generic 2-fold degeneracy of Hamiltonian H_Y(𝐤) requires the system to have certain symmetries. A common symmetry of this kind is the combined 𝐓𝐏 symmetry of time-reversal and inversion, which is anti-unitary and satisfies (𝐓𝐏)^2=-1 for fermions. Therefore, the Yang monopole 5d generalization is not in the same symmetry class as that of the generic 3d Weyl semimetal. We remark here that the above 5d Yang monopole, together with the 3d Weyl point and the 2d Dirac point (e.g., in graphene), correspond exactly to the quaternion (pseudoreal), complex and real classes of the Wigner-Dyson threefold way <cit.>, and the anti-unitary 𝐓𝐏 symmetry plays a key role in the classification.
Basically, a matrix Hamiltonian H(𝐤) falls into these three classes if (𝐓𝐏)^2=-1,0,+1, respectively (0 stands for no 𝐓𝐏 symmetry), and one can show d=5,3,2 are the corresponding spatial dimensions where point-like gapless manifolds in the BZ are stable. The minimal Hamiltonians of the three classes are listed in Tab. <ref>. In particular, (𝐓𝐏)^2=+1 is possible for systems with a negligible spin-orbit coupling such as graphene, where the electrons can be regarded as spinless.

The second 5d Weyl semimetal generalization requires no symmetry (other than the translational symmetry), thus is in the same symmetry class as the 3d Weyl semimetal. Its band structure is non-degenerate except for a few closed submanifolds ℳ_j called Weyl surfaces where two bands cross each other <cit.>. The effective Hamiltonian near each ℳ_j involves only the two crossing bands and takes the 2×2 form H_W(𝐤)=ξ_0(𝐤)+∑_i=1^3ξ_i(𝐤)σ^i. Therefore, ℳ_j is locally determined by 3 conditions ξ_i(𝐤)=0 (i=1,2,3). In one band α of the two associated with ℳ_j, one can define a U(1) Berry connection a^(α)_i(𝐤)=i⟨ u^α_𝐤|∂_k_i|u^α_𝐤⟩ with its wavefunction |u^α_𝐤⟩, and define the U(1) second Chern number of ℳ_j in band α on a 4d closed manifold 𝒱 that only encloses ℳ_j as
C_2^A(ℳ_j,α)=∮_𝒱 d^4𝐤 ϵ^ijklf^(α)_ijf^(α)_kl/32π^2∈ℤ ,
where f^(α)_ij is the Berry curvature of a^(α)_i. Remarkably, we showed in an earlier paper that <cit.>
C_2^A(ℳ_j,α)=∑_ℓ∈α,ℓ≠ jΦ(ℳ_j,ℳ_ℓ) ,
where Φ(ℳ_j,ℳ_ℓ) is the linking number between ℳ_j and ℳ_ℓ in the 5d BZ, and ℳ_ℓ runs over all the Weyl surfaces associated with band α.

The relation between the above two 5d generalizations can be most easily seen in the following 4-band model with Hamiltonian
H_Y'(𝐤)=∑_i=1^5(k_i-k^Y_i)γ^i+bi[γ^4,γ^5]/2 ,
where b is a real parameter that breaks the 𝐓𝐏 symmetry. When b=0, the Hamiltonian reduces to the Yang monopole Hamiltonian H_Y(𝐤) in Eq. (<ref>), where we have set all the velocities to v_i=1. When b≠0, the 𝐓𝐏 symmetry is broken, and the Yang monopole necessarily evolves into linked Weyl surfaces. This can be seen explicitly by deriving the energy spectrum ϵ^α_𝐤=±[((k̃_1^2+k̃_2^2+k̃_3^2)^1/2± b)^2+k̃_4^2+k̃_5^2]^1/2, where we have defined k̃_i=k_i-k_i^Y (1≤ i≤5). Here 1≤α≤4 denotes the α-th band in energies. Fig. <ref>(a) and Fig. <ref>(b) show the band structures for b=0 and b≠0, respectively, where k̃_2,k̃_3,k̃_4,k̃_5 are assumed zero. In the b≠0 case, one can readily identify three Weyl surfaces: ℳ_1 between bands ϵ^2_𝐤 and ϵ^3_𝐤, ℳ_2 between bands ϵ^1_𝐤 and ϵ^2_𝐤, and ℳ_2' between bands ϵ^3_𝐤 and ϵ^4_𝐤 (see Fig. <ref>(b)). ℳ_1 is a 2d sphere given by k̃_1^2+k̃_2^2+k̃_3^2=b^2 and k̃_4=k̃_5=0, while ℳ_2 and ℳ_2' coincide and are a 2d plane given by k̃_1=k̃_2=k̃_3=0. In particular, the second band ϵ_𝐤^2 (thick red line in Fig. <ref>(b)) is associated with ℳ_1 and ℳ_2, which form a Hopf link in 5d as can be seen in the 3d subspace k_3=k_5=0 plotted in Fig. <ref>(d). In the limit b→0, the radius of ℳ_1 contracts to zero, so ℳ_1 collapses onto ℳ_2 (and ℳ_2') and becomes the 4-fold degenerate Yang monopole in Fig. <ref>(c). One can add other small 𝐓𝐏 breaking terms to Eq. (<ref>), and the above picture remains topologically unchanged.

Due to the 𝐓𝐏 symmetry breaking, the U(2) gauge field a_i(𝐤) is broken down to two U(1) gauge fields a_i^(1)(𝐤) and a_i^(2)(𝐤) in bands ϵ_𝐤^1 and ϵ_𝐤^2. One can easily check the Abelian second Chern number of ℳ_1 calculated from a_i^(2)(𝐤) is C_2^A(ℳ_1,2)=1, which is defined on a 4d manifold 𝒱 with topology S^2× S^2 as shown in Fig. <ref>(d) <cit.>.
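The degeneracy structure of this 4-band model is easy to verify numerically. Below is a minimal sketch (not from the paper) using one Kronecker-product representation of the Gamma matrices — an assumption, since any representation satisfying the Clifford algebra gives the same bands:

```python
# Minimal numerical check of the spectrum of Eq. (5),
#   H'_Y(k) = sum_i k_i gamma^i + b*i*[gamma^4, gamma^5]/2,
# which should give eps = +-sqrt((kappa -+ b)^2 + k4^2 + k5^2),
# kappa = |(k1, k2, k3)|, measuring k from the monopole position.
import numpy as np

s0, sx = np.eye(2, dtype=complex), np.array([[0, 1], [1, 0]], dtype=complex)
sy, sz = np.array([[0, -1j], [1j, 0]]), np.diag([1.0 + 0j, -1.0 + 0j])
g = [np.kron(sx, sx), np.kron(sy, sx), np.kron(sz, sx),
     np.kron(s0, sy), np.kron(s0, sz)]

for i in range(5):              # verify the Clifford algebra {g_i, g_j} = 2 delta_ij
    for j in range(5):
        assert np.allclose(g[i] @ g[j] + g[j] @ g[i], 2*(i == j)*np.eye(4))

def bands(k, b):
    H = sum(ki*gi for ki, gi in zip(k, g)) + b*1j*(g[3] @ g[4] - g[4] @ g[3])/2
    return np.linalg.eigvalsh(H)

b = 0.5
print(bands([0.6, 0.8, 0, 0, 0], 0.0))  # b = 0: two-fold degenerate +-kappa
print(bands([0.6, 0.8, 0, 0, 0], b))    # b != 0: +-(kappa - b), +-(kappa + b)
print(bands([0.3, 0.4, 0, 0, 0], b))    # on M_1 (kappa = b): bands 2, 3 touch at 0
print(bands([0.0, 0.0, 0, 0, 0.7], b))  # on M_2 (kappa = 0): bands 1-2 and 3-4 touch
```

The last two prints exhibit precisely the crossings that define ℳ_1 and ℳ_2 (ℳ_2') above; sending b to 0 merges them into the 4-fold degenerate Yang monopole.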
This is closely related to the non-Abelian second Chern number C_2^NA=1 of the Yang monopole before symmetry breaking. In fact, ignoring the gauge invariance, we can still define the U(2) gauge field a_i^αβ(𝐤) using the two valence bands of Hamiltonian H_Y'(𝐤), which is singular on ℳ_1 but not on ℳ_2 (since ℳ_2 is between the two bands defining the U(2) Berry connection), and still satisfies C_2^NA=1 on a sphere S^4 enclosing ℳ_1. The sphere S^4 can be deformed adiabatically into 𝒱 in Fig. <ref>(d), so we also have C_2^NA=1 on 𝒱.

To see C_2^NA is equal to C_2^A(ℳ_1,2), we can take the limit where 𝒱 is a thin "torus" S^2× S^2, i.e., its smaller radius (distance to ℳ_1) tends to zero. In this limit, one will find ∫_𝒱 d^4𝐤 ϵ^ijklf^12_ijf^21_kl=0, namely, the off-diagonal elements of field strength f_ij do not contribute (see Appendix <ref>). So C_2^NA is solely given by the diagonal field strengths f^11_ij and f^22_ij, which can be roughly identified with U(1) Berry curvatures of bands 1 and 2. By calculations, one can show ϵ^ijklf^11_ijf^11_kl=ϵ^ijkl tr(f_ij)tr(f_kl)=0. A heuristic understanding of this is that the Berry curvature f^11_ij of band 1 sees only ℳ_2, while the U(1) trace Berry curvature tr(f_ij) sees only ℳ_1, so both of them do not see linked Weyl surfaces and have zero contribution to the second Chern number. One can then readily show C_2^NA=∫_𝒱 d^4𝐤 ϵ^ijklf^22_ijf^22_kl/32π^2 =∫_𝒱 d^4𝐤 ϵ^ijklf^(2)_ijf^(2)_kl/32π^2=C_2^A(ℳ_1,2). We note that in this limit where 𝒱 is closely attached to ℳ_1, only the diagonal elements of f_ij contribute, while in the Yang monopole case, which is spherically symmetric, the diagonal and off-diagonal elements are equally important <cit.>.

In high energy physics, a U(2) gauge symmetry can be spontaneously broken down to U(1)×U(1) via the Georgi-Glashow mechanism <cit.> with an isospin 1 Higgs field. In 5d space, SU(2) gauge fields are associated with point-like Yang monopoles, while U(1) gauge fields are associated with monopole 2-branes (codimension 3 objects). We conjecture that a gauge symmetry breaking from U(2) to U(1)×U(1) in 5d will always break an SU(2) Yang monopole into two linked U(1) monopole 2-branes ℳ_1 and ℳ_2, where ℳ_1 is coupled to one of the two U(1) gauge fields, while ℳ_2 is coupled to both U(1) gauge fields with opposite monopole charges.

§ TOPOLOGICAL PHASE TRANSITIONS IN 5D

It is known that 3d Weyl semimetals play an important role in 3d TPTs. An example is the TPT of 3d Chern insulator (CI) with no symmetry, which is characterized by three integers (n_1,n_2,n_3), with n_i being the first Chern number in the plane orthogonal to k_i in the BZ <cit.>. The CI becomes a normal insulator (NI) when all n_i=0. The TPT from a 3d NI to a (0,0,1) CI involves an intermediate Weyl semimetal phase as shown in Fig. <ref>(a)-(c). By creating a pair of Weyl points with opposite monopole charges and annihilating them after winding along a closed cycle in the k_3 direction, one creates a Berry flux quantum in the k_1-k_2 plane, and n_3 increases by one <cit.>. At the same time, a Fermi arc arises on the real space boundary connecting the projections of the two Weyl points <cit.>, which finally becomes a closed Fermi loop along k_3.

Another example is the TPT from TI to NI, which are the two phases in the ℤ_2 classification of 3d TRI insulators <cit.>. When the inversion symmetry is broken, an intermediate TRI Weyl semimetal arises <cit.>, which contains (multiples of) 4 Weyl points as shown in Fig. <ref>(d)-(f).
The TPT is done by creating two pairs of Weyl points with opposite charges, winding them along a loop that encloses a TRI point (e.g., the Γ point), then annihilating them in pairs with their partners exchanged. Meanwhile, the Fermi surface loop of the Dirac surface states of the TI breaks into two Fermi arcs connecting the 4 Weyl points, which vanish when all the Weyl points are gone.

Similarly, the 5d TPTs involve the creation of 5d Weyl semimetal phases. We first examine the TPT of 5d CIs with no symmetry, which are characterized by 5 second Chern numbers n_i in the 4d hyperplanes of the BZ orthogonal to k_i (1≤ i≤5), and 10 first Chern numbers n_ij in the 2d planes parallel to k_i and k_j (1≤ i<j≤5). The second Chern numbers n_i are even under both 𝐓 and 𝐏 transformations, while the first Chern numbers n_ij are odd under 𝐓 and even under 𝐏. Here we shall show that changes of the five n_i will involve creation and annihilation of linked Weyl surfaces in the BZ. A simple example without 𝐓𝐏 symmetry is the following 4-band model with Hamiltonian
H_QH(𝐤)=∑_i=1^5ξ_i(𝐤)γ^i+bi[γ^3,γ^4]/2 ,
where ξ_i(𝐤)=sin k_i for 1≤ i≤ 4, and ξ_5(𝐤)=m+∑_i=1^4(1-cos k_i)+η(1-cos k_5). Here m is a tuning parameter, while 0≤ b<η<1-b. We shall label each band by its order in energies, and assume the lower two bands are occupied. Through an analysis similar to the one below Eq. (<ref>), the Weyl surfaces between bands 2 and 3 are given by ξ_1^2+ξ_2^2+ξ_5^2=b^2 and ξ_3=ξ_4=0, while those between bands 1 and 2 (also 3 and 4) are given by ξ_1=ξ_2=ξ_5=0. These two kinds of Weyl surfaces are drawn as blue and red, respectively, in the 3d subspace k_2=k_4=0 of the BZ shown in Fig. <ref>, where they appear as 1d loops.

When m>b, the system is a 5d NI with all n_i and n_ij zero and no Weyl surfaces (Fig. <ref>(a)). The TPT to a 5d CI with n_5=1 is driven by decreasing m, and proceeds through the following stages: when -b<m<b, a Weyl surface between bands 2 and 3 arises around the origin, which is topologically a 2d sphere in the k_3=k_4=0 hyperplane (Fig. <ref>(b), (c)). When b-2η<m<-b, as shown in Fig. <ref>(d), the 2d sphere between bands 2 and 3 splits into two smaller spheres ℳ_1 and ℳ_3 (blue) in the k_3=k_4=0 hyperplane, while another 2d sphere Weyl surface ℳ_2 (red) between bands 1 and 2 is created in the k_1=k_2=0 plane, which is linked to both ℳ_1 and ℳ_3. As m is further decreased, ℳ_1 and ℳ_3 will move along ± k_5, respectively, and finally merge into a single Weyl surface when -b-2η<m<b-2η (Fig. <ref>(e)). This Weyl surface then shrinks to zero, and the system becomes a 5d CI with n_5=1 for b-2<m<-b-2η, leaving a cylindrical Weyl surface ℳ_2 between bands 1 and 2 (also one between bands 3 and 4, Fig. <ref>(f)). We note that if b=0, the 𝐓𝐏 symmetry is restored, and the two blue Weyl surfaces ℳ_1 and ℳ_3 will collapse into two Yang monopoles of opposite monopole charges C_2^NA. The TPT process then becomes the creation, winding and annihilation of two Yang monopoles.

This TPT is also accompanied by a surface state evolution from trivial to nontrivial. It has been shown <cit.> that a 5d Weyl semimetal with linked Weyl surfaces contains protected Weyl arcs in the 4d momentum space of surface states, which have linear dispersions in the other 3 directions perpendicular to the arc. By taking an open boundary condition along the k_3 direction, one can obtain a Weyl arc on the 4d boundary connecting the projections of the two Weyl surface Hopf links (Fig. <ref>(d), (e)).
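The staged evolution just described can be checked by brute force: scan the BZ for gap closings of H_QH between bands 2 and 3 as m decreases. A minimal sketch (not from the paper; parameter values are arbitrary but satisfy 0 ≤ b < η < 1 - b) might look as follows:

```python
# Minimal sketch: minimum gap between bands 2 and 3 of H_QH(k), Eq. (6),
# on the k2 = k3 = k4 = 0 slice, for a few values of the tuning parameter m.
# On this slice the gap should close when sin^2(k1) + xi_5^2 = b^2.
import numpy as np

s0, sx = np.eye(2, dtype=complex), np.array([[0, 1], [1, 0]], dtype=complex)
sy, sz = np.array([[0, -1j], [1j, 0]]), np.diag([1.0 + 0j, -1.0 + 0j])
g = [np.kron(sx, sx), np.kron(sy, sx), np.kron(sz, sx),
     np.kron(s0, sy), np.kron(s0, sz)]   # {g_i, g_j} = 2 delta_ij

def H_QH(k, m, b=0.2, eta=0.5):
    xi = [np.sin(k[i]) for i in range(4)]
    xi.append(m + sum(1 - np.cos(k[i]) for i in range(4)) + eta*(1 - np.cos(k[4])))
    return sum(x*gi for x, gi in zip(xi, g)) + b*1j*(g[2] @ g[3] - g[3] @ g[2])/2

ks = np.linspace(-np.pi, np.pi, 121)
for m in (0.5, 0.0, -0.6):   # NI stage / single-sphere stage / split stage
    gaps = []
    for k1 in ks:
        for k5 in ks:
            e = np.linalg.eigvalsh(H_QH([k1, 0, 0, 0, k5], m))
            gaps.append(e[2] - e[1])
    print(f"m = {m:+.1f}:  min gap(2,3) on the slice = {min(gaps):.3f}")
```

For m = 0.5 the gap stays open everywhere (the NI phase), while for m = 0.0 and m = -0.6 near-zero gaps appear at momenta tracing out the blue Weyl surfaces of Fig. <ref>(c), (d); refining the grid around those points drives the minimum gap toward zero.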
When the system becomes a CI, the Weyl arc develops into a non-contractible Weyl loop along k_5 as expected.

The second example is the TPT between 𝐓𝐏-breaking 5d insulators with particle-hole symmetry 𝐂 satisfying 𝐂^2=-1, which are shown to be classified by ℤ_2 into 5d TIs and NIs <cit.>. Here we consider an eight-band model Hamiltonian of a 5d TI as follows:
H_TI(𝐤)=∑_i=1^6ζ_i(𝐤)Γ^i+H_A ,
where Γ^i (1≤ i≤7) are the 8×8 Gamma matrices so chosen that Γ^1, Γ^2, Γ^3, Γ^7 are real and Γ^4, Γ^5, Γ^6 are imaginary, ζ_i(𝐤)=sin k_i for 1≤ i≤5, ζ_6(𝐤)=m+∑_i=1^5t_i(1-cos k_i) with t_i>0, and
H_A=iη_0Γ^1Γ^2Γ^7+η_1Γ^7sin k_5+iη_2Γ^3Γ^4Γ^5+iη_3Γ^3Γ^4
is a symmetry-breaking perturbation. The 𝐓, 𝐏 and 𝐂 transformation matrices are given by 𝒯=Γ^4Γ^5Γ^7, 𝒫=iΓ^6 and 𝒞=Γ^4Γ^5, and a Hamiltonian H(𝐤) will have these symmetries if 𝒯^† H(-𝐤)𝒯=H^*(𝐤), 𝒫^† H(-𝐤)𝒫=H(𝐤) and 𝒞^† H(-𝐤)𝒞=-H^*(𝐤), respectively. It is then easy to see that H_A respects the 𝐂 symmetry but breaks the 𝐓, 𝐏 and 𝐓𝐏 symmetries. In particular, only η_1 and η_2 break the 𝐓𝐏 symmetry.

In the absence of H_A, the system is a 5d NI if m>0, and is a 5d TI if m<0. With the symmetry-breaking term H_A, the calculation of the band structure of H_TI(𝐤) becomes more complicated. For simplicity, we shall only examine the limiting case where (t_1/t_2)^2+η_0^2<1, |η_0|≫|η_1|, |η_0|≫|η_2| and |η_0η_1|≫|η_3| (with t_1 and t_2 as defined in the expression of ζ_6(𝐤)). We shall label each band by its order in energies, and keep the Fermi energy at zero, i.e., between bands 4 and 5, as required by the 𝐂 symmetry. To a good approximation, the Weyl surfaces between bands 4 and 5 are given by ζ_1^2+ζ_2^2=η_0^2, ζ_3^2+ζ_4^2+ζ_5^2=η_2^2 and ζ_6=0, while those between bands 5 and 6 (also between 3 and 4) are given by ζ_3=ζ_4=ζ_5=0. The TPT can be driven by tuning m from negative (TI) to positive (NI), and the evolution of these low energy Weyl surfaces is illustrated in Fig. <ref>(a)-(f) in the 3d subspace k_1^2+k_2^2=η_0^2 and k_4=0. The small blue loops are the images of Weyl surfaces between bands 4 and 5, while the red loop at k_3=k_5=0 is that between bands 5 and 6. At first, two pairs of blue Weyl surfaces arise unlinked (panel (b)). As m increases, they merge into four new Weyl surfaces linked with the red Weyl surface, which then wind around the red Weyl surface and merge into unlinked pairs again with their partners exchanged (panels (c)-(e)). Finally, the four unlinked blue Weyl surfaces contract to zero, and the system becomes a 5d NI. Similar to the CI case, if η_1=η_2=0, the 𝐓𝐏 symmetry is recovered, and the 4 blue Weyl surfaces will collapse into 4 Yang monopoles. The TPT process then involves winding of Yang monopoles instead of linked Weyl surfaces.

The topological surface states of the system also involve a topological transition during the TPT. The topological surface states of a noncentrosymmetric 5d TI are generically a "Weyl ring" as shown in Fig. <ref>(g). During the TPT, the Weyl ring breaks into two Weyl arcs (panel (h)), which finally vanish upon entering the NI phase.

In conclusion, we show that 5d Weyl semimetals with Yang monopoles are protected by the 𝐓𝐏 symmetry, and generically reduce to 5d Weyl semimetals with linked Weyl surfaces in the presence of 𝐓𝐏 symmetry breaking. We therefore expect that Yang monopoles generically break into linked U(1) monopole 2-branes in 5d theories of gauge symmetry breaking from U(2) to U(1)×U(1).
As gapless states carrying the second Chern number, they emerge as intermediate phases in the TPTs between CIs and NIs or between TIs and NIs in 5d space, generalizing the connection between gapless and gapped topological phases in 3d <cit.>.

Acknowledgments. We would like to thank Jing-Yuan Chen for helpful discussions during the research. This work is supported by the NSF under grant number DMR-1305677.

§ DERIVATION OF C_2^NA=C_2^A(ℳ_1,2) ON MANIFOLD 𝒱

It is sufficient to do the calculation in the limit where 𝒱 is thin, i.e., close to the Weyl surface ℳ_1. For the model given in Eq. (<ref>), such a 4d manifold 𝒱 can be given by (κ-b)^2+k̃_4^2+k̃_5^2=ϵ^2, where κ=√(k̃_1^2+k̃_2^2+k̃_3^2), and ϵ≪ b. Using the Gamma matrices defined in <cit.>, the wave function |u^2_𝐤⟩ is given by
|u^2_𝐤⟩=(cos(θ/2)cos(α/2), sin(θ/2)cos(α/2)e^iϕ, cos(θ/2)sin(α/2)e^iψ, sin(θ/2)sin(α/2)e^i(ϕ+ψ)) ,
while |u^1_𝐤⟩ is well-approximated by
|u^1_𝐤⟩=(sin(θ/2), -cos(θ/2)e^iϕ, 0, 0) ,
where we have defined the angles α, ψ, θ, ϕ by sinα e^iψ=(k̃_4+ik̃_5)/ϵ, and sinθ e^iϕ=(k̃_1+ik̃_2)/κ. This approximation basically ignores the dependence of |u^1_𝐤⟩ on k̃_4 and k̃_5, which is valid since |k̃_4,5|<ϵ≪ b, and |u^1_𝐤⟩ is nonsingular at ℳ_1. The nonzero components of the U(2) Berry connection a^αβ_𝐤 can then be shown to be
a^11_ϕ=-(1+cosθ)/2, a^22_ϕ=-(1-cosθ)/2, a^22_ψ=-(1-cosα)/2, a^21_θ=(i/2)cosθcos(α/2), a^21_ϕ=(1/2)sinθcos(α/2),
with a^12_i=a^21*_i. It is then straightforward to calculate the non-Abelian field strengths f^αβ_ij. In particular, one can prove that ϵ^ijklf^12_ijf^21_kl=[sin2θ(1-cosα)sinα]/8, which gives 0 when integrated over the four angles. Therefore, the off-diagonal components of f_ij have no contribution to the second Chern number C_2^NA. Further, one can show f_θϕ^22=-f_θϕ^11=f_θϕ^(2)+(sin 2θcosα)/4 and f_αψ^22=f_αψ^(2)=(sinα)/2 are the only remaining nonzero terms, where f_ij^(1) and f_ij^(2) are the U(1) Berry curvatures in bands 1 and 2, respectively. Therefore, we have ϵ^ijklf^11_ijf^11_kl=ϵ^ijkl tr(f_ij)tr(f_kl)=0, and ϵ^ijklf^22_ijf^22_kl=ϵ^ijklf^(2)_ijf^(2)_kl+sin 2θsin2α. The non-Abelian second Chern number is then
C_2^NA =∫_0^π dθ∫_0^2π dϕ∫_0^π dα∫_0^2π dψ ϵ^ijklf^22_ijf^22_kl/32π^2=∮_𝒱 d^4𝐤 ϵ^ijklf^(2)_ijf^(2)_kl/32π^2=C_2^A(ℳ_1,2)=1 .
If we rewrite the non-Abelian field strength as f_ij=f_ij^at^a where t^a=(1,σ^1,σ^2,σ^3)/2 (a=0,1,2,3) are the generators of U(2), the non-Abelian second Chern number on 𝒱 can be expressed as
C_2^NA=-c^0+c^1+c^2+c^3 ,
where we have defined c^a=∫_𝒱 d^4𝐤 ϵ^ijklf^a_ijf^a_kl/64π^2. In the limit where 𝒱 is close to ℳ_1, the above calculations tell us that c^0=c^1=c^2=0, and c^3=C_2^A(ℳ_1,2)=1. In contrast, in the Yang monopole case, which is rotationally symmetric, one can show c^0=0, and c^1=c^2=c^3=1/3. Therefore, the 𝐓𝐏 symmetry breaking also breaks the symmetry between c^1, c^2 and c^3.

§ WEYL SURFACES OF MODEL HAMILTONIAN EQ. (7)

Compared to the four-band model in Eq. (6) which can be easily diagonalized, the eight-band model H_TI(𝐤) in Eq. (7) has a band structure more difficult to calculate. Here we present an easier way to examine the band structure with the assumptions (t_1/t_2)^2+η_0^2<1, |η_0|≫|η_1|, |η_0|≫|η_2| and |η_0η_1|≫|η_3|. For the moment we shall assume η_3=0.
To solve the Schrödinger equation H_TI|ψ⟩=E|ψ⟩, one can first rewrite it into H_TI^2|ψ⟩=E^2|ψ⟩, which reduces to
(E^2-∑_i=1^6ζ_i^2-η_0^2-η_1^2sin^2 k_5-η_2^2)|ψ⟩=(Λ_0+Λ_2)|ψ⟩
after making use of the properties of the Gamma matrices, where we have defined
Λ_0=2η_0(ζ_1Γ^2Γ^7-ζ_2Γ^1Γ^7+η_1sin k_5Γ^1Γ^2) , Λ_2=2η_2(ζ_3Γ^4Γ^5-ζ_4Γ^3Γ^5+ζ_5Γ^3Γ^4) .
One can easily show that Λ_0^2=4η_0^2(ζ_1^2+ζ_2^2+η_1^2sin^2 k_5)=4η_0^2χ_0^2 and Λ_2^2=4η_2^2(ζ_3^2+ζ_4^2+ζ_5^2)=4η_2^2χ_2^2, where χ_0^2≡ζ_1^2+ζ_2^2+η_1^2sin^2 k_5 and χ_2^2≡ζ_3^2+ζ_4^2+ζ_5^2, and [Λ_0,Λ_2]=0. Therefore, they can be simultaneously diagonalized, i.e., Λ_0=±2η_0χ_0, Λ_2=±2η_2χ_2. One then obtains the energy spectrum of the eight bands as
E=±√((η_0±χ_0)^2+(η_2±χ_2)^2+ζ_6^2) .
One can then see the Weyl surfaces between bands 4 and 5 are given by χ_0^2=ζ_1^2+ζ_2^2+η_1^2sin^2 k_5=η_0^2, χ_2^2=ζ_3^2+ζ_4^2+ζ_5^2=η_2^2 and ζ_6=0. Since |η_0|≫|η_1|, one can approximately ignore the η_1^2sin^2 k_5 term.

It is also easy to see that the Weyl surfaces between bands 3 and 4 are given by χ_2=0, i.e., ζ_3=ζ_4=ζ_5=0, which is exactly the k_1-k_2 plane. However, another set of Weyl surfaces is given by χ_0=0, i.e., ζ_1=ζ_2=sin k_5=0, which gives the k_3-k_4 plane touching the above Weyl surface (the k_1-k_2 plane). Such a configuration is unstable against perturbations in 5d. This touching of Weyl surfaces is removed when one adds the η_3 term. Via a perturbation analysis, one can show the η_3 term splits the above two kinds of Weyl surfaces in the k_5 direction by a distance of order |η_3E/(η_0η_1)|. The Weyl surfaces between bands 4 and 5 and between bands 5 and 6 can then be plotted according to the expressions of the functions ζ_i, as illustrated in Fig. <ref>. In particular, the condition (t_1/t_2)^2+η_0^2<1 limits the number of Weyl surfaces between bands 4 and 5 to only four.

[Qi and Zhang(2011)]Qi2011 X.-L. Qi and S.-C. Zhang, Rev. Mod. Phys. 83, 1057 (2011). [Berry(1984)]Berry1984 M. V. Berry, Proc. R. Soc. Lond. A 392, 45 (1984), ISSN 0080-4630. [Nielsen and Ninomiya(1983)]Nielsen1983 H. Nielsen and M. Ninomiya, Phys. Lett. B 130, 389 (1983). [Wan et al.(2011)Wan, Turner, Vishwanath, and Savrasov]Wan2011 X. Wan, A. M. Turner, A. Vishwanath, and S. Y. Savrasov, Phys. Rev. B 83, 205101 (2011). [Balents(2011)]Balents2011 L. Balents, Physics 4, 36 (2011). [Fu et al.(2007)Fu, Kane, and Mele]Fu2007 L. Fu, C. L. Kane, and E. J. Mele, Phys. Rev. Lett. 98, 106803 (2007). [Qi et al.(2008)Qi, Hughes, and Zhang]Qi2008 X.-L. Qi, T. L. Hughes, and S.-C. Zhang, Phys. Rev. B 78, 195424 (2008). [Murakami(2007)]Murakami2007 S. Murakami, New J. Phys. 9, 356 (2007). [Murakami et al.(2016)Murakami, Hirayama, Okugawa, and Miyake]Murakami2016 S. Murakami, M. Hirayama, R. Okugawa, and T. Miyake, ArXiv e-prints (2016), 1610.07132. [Kohmoto et al.(1992)Kohmoto, Halperin, and Wu]Kohmoto1992 M. Kohmoto, B. I. Halperin, and Y.-S. Wu, Phys. Rev. B 45, 13488 (1992). [Haldane(2004)]Haldane2004 F. D. M. Haldane, Phys. Rev. Lett. 93, 206602 (2004). [Hořava(2005)]Horava2005 P. Hořava, Phys. Rev. Lett.
volume95, pages016405 (year2005), <http://link.aps.org/doi/10.1103/PhysRevLett.95.016405>.[Zhao and Wang(2013)]Zhao2013 authorY. X. Zhao and authorZ. D. Wang, journalPhys. Rev. Lett. volume110, pages240404 (year2013), <http://link.aps.org/doi/10.1103/PhysRevLett.110.240404>.[Schnyder and Brydon(2015)]Schnyder2015 authorA. P. Schnyder and authorP. M. R. Brydon, journalJ. Phys.: Condens. Matter volume27, pages243201 (year2015), <http://stacks.iop.org/0953-8984/27/i=24/a=243201>.[Lian and Zhang(2016)]Lian2016 authorB. Lian and authorS.-C. Zhang, journalPhys. Rev. B volume94, pages041105 (year2016).[Mathai and Thiang(2017)]Mathai2017 authorV. Mathai and authorG. C. Thiang, journalJ. Phys. A: Math. Theor. volume50, pages11LT01 (year2017), <http://stacks.iop.org/1751-8121/50/i=11/a=11LT01>.[Mathai and Thiang(2016)]Mathai2016b authorV. Mathai and authorG. C. Thiang, journalArXiv e-prints(year2016), 1611.08961.[Sugawa et al.(2016)Sugawa, Salces-Carcoba, Perry, Yue, and Spielman]Sugawa2016 authorS. Sugawa, authorF. Salces-Carcoba, authorA. R. Perry, authorY. Yue, and authorI. B. Spielman, journalArXiv e-prints(year2016), 1610.06228.[Yang(1978)]Yang1978 authorC. N. Yang, journalJ. Math. Phys. volume19 (year1978).[Wilczek and Zee(1984)]Wilczek1984 authorF. Wilczek and authorA. Zee, journalPhys. Rev. Lett. volume52, pages2111 (year1984).[Zhang and Hu(2001)]Zhang2001 authorS.-C. Zhang and authorJ. Hu, volume294, pages823 (year2001), ISSN issn0036-8075, <http://science.sciencemag.org/content/294/5543/823>.[Kitaev(2009)]Kitaev2009 authorA. Kitaev, journalAIP Conference Proceedings volume1134 (year2009).[Ryu et al.(2010)Ryu, Schnyder, Furusaki, and Ludwig]Ryu2010 authorS. Ryu, authorA. P. Schnyder, authorA. Furusaki, and authorA. W. W. Ludwig, journalNew J. Phys. volume12, pages065010 (year2010), <http://stacks.iop.org/1367-2630/12/i=6/a=065010>.[Wigner(1932)]Wigner1932 authorE. P. Wigner, journalNachr. Akad. Wiss. Göttingen, Math. Physik. Kl. volume546 (year1932).[Dyson(1962)]Dyson1962 authorF. J. Dyson, journalJ. Math. Phys. volume3, pages1199 (year1962).[Georgi and Glashow(1972)]Georgi1972 authorH. Georgi and authorS. L. Glashow, journalPhys. Rev. Lett. volume28, pages1494 (year1972), <http://link.aps.org/doi/10.1103/PhysRevLett.28.1494>.
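As a quick numerical cross-check of the first appendix, the angular integral C_2^NA = ∫ ϵ^ijkl f^22_ij f^22_kl/(32π^2) = 1 can be evaluated by quadrature from the quoted field-strength components. The sketch below (Python) is not from the paper; the monopole form f^(2)_θϕ = sinθ/2 and the overall sign pairing are assumptions consistent with the Berry connections listed there, and the grid sizes are arbitrary.

```python
import numpy as np

# Grid over the two nontrivial angles; the phi and psi integrals are
# trivial here and contribute a factor (2*pi)**2.
theta = np.linspace(0.0, np.pi, 1001)
alpha = np.linspace(0.0, np.pi, 1001)
T, A = np.meshgrid(theta, alpha, indexing="ij")

# Band-2 components quoted in the appendix:
#   f^{22}_{theta phi} = f^{(2)}_{theta phi} + sin(2 theta) cos(alpha) / 4,
#   f^{22}_{alpha psi} = sin(alpha) / 2,
# with the assumed monopole curvature f^{(2)}_{theta phi} = sin(theta)/2.
f_tp = 0.5 * np.sin(T) + 0.25 * np.sin(2.0 * T) * np.cos(A)
f_ap = 0.5 * np.sin(A)

# For two antisymmetric components in disjoint planes,
# eps^{ijkl} f_{ij} f_{kl} = 8 * f_{theta phi} * f_{alpha psi}.
integrand = 8.0 * f_tp * f_ap

# Nested trapezoidal integration over theta and alpha.
inner = np.trapz(integrand, alpha, axis=1)
c2 = np.trapz(inner, theta) * (2.0 * np.pi) ** 2 / (32.0 * np.pi ** 2)
print(f"C_2^NA ~= {c2:.6f}")  # -> 1.000000
```

The sin(2θ)cosα correction integrates to zero over θ ∈ [0, π], so the result reproduces C_2^A(2, ℳ_1) = 1, as derived analytically above.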
http://arxiv.org/abs/1702.07982v2
{ "authors": [ "Biao Lian", "Shou-Cheng Zhang" ], "categories": [ "cond-mat.mes-hall" ], "primary_category": "cond-mat.mes-hall", "published": "20170226031256", "title": "Weyl Semimetal and Topological Phase Transition in Five Dimensions" }
http://arxiv.org/abs/1702.08566v1
{ "authors": [ "George S. Pogosyan", "Kurt Bernardo Wolf", "Alexander Yakhno" ], "categories": [ "math-ph", "math.MP" ], "primary_category": "math-ph", "published": "20170227222945", "title": "Superintegrable classical Zernike system" }
http://arxiv.org/abs/1702.07843v1
{ "authors": [ "T. Tatsumi", "H. Sotani" ], "categories": [ "astro-ph.HE", "hep-ph", "nucl-th" ], "primary_category": "astro-ph.HE", "published": "20170225071525", "title": "Hybrid Quark Stars With Strong Magnetic Field" }
http://arxiv.org/abs/1702.08014v1
{ "authors": [ "Simon Kohl", "David Bonekamp", "Heinz-Peter Schlemmer", "Kaneschka Yaqubi", "Markus Hohenfellner", "Boris Hadaschik", "Jan-Philipp Radtke", "Klaus Maier-Hein" ], "categories": [ "cs.CV" ], "primary_category": "cs.CV", "published": "20170226100849", "title": "Adversarial Networks for the Detection of Aggressive Prostate Cancer" }
BLTP, JINR, Dubna, Moscow Region, 141980, Russia; Dubna State University, Dubna, 141980, Russia; BLTP, JINR, Dubna, Moscow Region, 141980, Russia; Umarov Physical Technical Institute, TAS, Dushanbe, 734063, Tajikistan; Theoretical Physics Department, Indian Association for the Cultivation of Science, Jadavpur, Kolkata 700032, India; University Bordeaux, LOMA UMR-CNRS 5798, F-33405 Talence Cedex, France

We study magnetization reversal in a φ_0 Josephson junction with direct coupling between the magnetic moment and the Josephson current. Our simulations of the magnetic moment dynamics show that, by applying an electric current pulse, we can realize full magnetization reversal. We propose different protocols of full magnetization reversal based on the variation of the Josephson junction and pulse parameters, particularly the electric current pulse amplitude, the damping of the magnetization and the spin-orbit interaction. We discuss experiments which can probe the magnetization reversal in φ_0-junctions.

Magnetization reversal by superconducting current in φ_0 Josephson junctions
Yu. M. Shukrinov, I. R. Rahmonov, K. Sengupta, and A. Buzdin

Spintronics, which deals with the active control of spin dynamics in solid state systems, is one of the most rapidly developing fields of condensed matter physics <cit.>. An important place in this field is occupied by superconducting spintronics, dealing with Josephson junctions (JJs) coupled to magnetic systems <cit.>. The possibility of achieving electric control over the magnetic properties of the magnet via the Josephson current and its counterpart, i.e., achieving magnetic control over the Josephson current, has recently attracted a lot of attention <cit.>. Spin-orbit coupling plays a major role in achieving such control. For example, in superconductor/ferromagnet/superconductor (S/F/S) JJs, its presence in a ferromagnet without inversion symmetry provides a mechanism for a direct (linear) coupling between the magnetic moment and the superconducting current. In such junctions, called hereafter φ_0-junctions, time reversal symmetry is broken, and the current-phase relation is given by I = I_c sin(φ-φ_0), where the phase shift φ_0 is proportional to the magnetic moment perpendicular to the gradient of the asymmetric spin-orbit potential, and also to the applied current <cit.>. Thus such JJs allow one to manipulate the internal magnetic moment by the Josephson current <cit.>. The static properties of S/F/S structures are well studied both theoretically and experimentally; however, the magnetic dynamics of these systems has not been studied in detail beyond a few theoretical works <cit.>. The spin dynamics associated with such φ_0-junctions was studied theoretically in Ref. konschelle09. The authors considered an S/F/S φ_0-junction in a low frequency regime, which allowed the usage of a quasi-static approach to study the magnetization dynamics. It was demonstrated that a DC superconducting current produces a strong orientation effect on the magnetic moment of the ferromagnetic layer. Thus application of a DC voltage to the φ_0-junction is expected to lead to current oscillations and, consequently, magnetic precession. This precession can be monitored by the appearance of higher harmonics in the current-phase relation; in addition, it also leads to the appearance of a DC component of the current which increases near the ferromagnetic resonance <cit.>.
It is then expected that the presence of external radiation in such a system would lead to several phenomena, such as the appearance of half-integer steps in the current-voltage (I-V) characteristics of the junction and the generation of an additional magnetic precession at the frequency of the external radiation <cit.>. In this paper we study the magnetization reversal in a φ_0-junction with direct coupling between the magnetic moment and the Josephson current, and explore the possibility of electrically controllable magnetization reversal in these junctions. We carry out investigations of the magnetization dynamics for two types of applied current pulse: rectangular and Gaussian. An exact numerical simulation of the dynamics of the magnetic moment of the ferromagnetic layer in the presence of such pulses allows us to demonstrate complete magnetization reversal in these systems. Such reversal occurs for specific parameters of the junction and the pulse. We chart out these parameters and suggest a possible way for the determination of the spin-orbit coupling parameter in these systems. We discuss experiments which can test our theory.

In order to study the dynamics of the S/F/S system, we use the method developed in Ref. konschelle09. We assume that the gradient of the spin-orbit potential is along the easy axis of magnetization, taken to be along ẑ. The total energy of this system can be written as

E_tot = -(Φ_0/2π) φ I + E_s(φ,φ_0) + E_M(φ_0),

where φ is the phase difference between the superconductors across the junction, I is the external current, E_s(φ,φ_0)=E_J[1-cos(φ-φ_0)], and E_J=Φ_0 I_c/2π is the Josephson energy. Here Φ_0 is the flux quantum, I_c is the critical current, φ_0=l υ_so M_y/(υ_F M_0), υ_F is the Fermi velocity, l=4hL/(ħυ_F), L is the length of the F layer, h is the exchange field of the F layer, E_M=-K𝒱M_z^2/(2M_0^2), the parameter υ_so/υ_F characterizes the relative strength of the spin-orbit interaction, K is the anisotropy constant, and 𝒱 is the volume of the F layer. The magnetization dynamics is described by the Landau-Lifshitz-Gilbert equation <cit.> (see also the Supplementary Material), which can be written in the dimensionless form as

dm_x/dt = (1/(1+α^2)) { -m_y m_z + G r m_z sin(φ - r m_y) - α [ m_x m_z^2 + G r m_x m_y sin(φ - r m_y) ] },
dm_y/dt = (1/(1+α^2)) { m_x m_z - α [ m_y m_z^2 - G r (m_z^2 + m_x^2) sin(φ - r m_y) ] },
dm_z/dt = (1/(1+α^2)) { -G r m_x sin(φ - r m_y) - α [ G r m_y m_z sin(φ - r m_y) - m_z (m_x^2 + m_y^2) ] },

where α is a phenomenological Gilbert damping constant, r=lυ_so/υ_F, and G=E_J/(K𝒱). The components m_x,y,z = M_x,y,z/M_0 satisfy the constraint ∑_{α=x,y,z} m_α^2(t)=1. In this system of equations time is normalized to the inverse ferromagnetic resonance frequency ω_F=γK/M_0 (t → tω_F), γ is the gyromagnetic ratio, and M_0=|M|. In what follows, we obtain the time dependence of the magnetization m_x,y,z(t), the phase difference φ(t) and the normalized superconducting current I_s(t) ≡ I_s(t)/I_c = sin(φ(t)-rm_y(t)) via numerical solution of Eq. (<ref>).

Let us first investigate the effect of the superconducting current on the dynamics of the magnetic moment. Our main goal is to search for conditions allowing the full reversal of the magnetic moment by the superconducting current. In Ref. konschelle09 the authors observed a periodic reversal, realized in a short time interval. But, as we see in Fig. <ref>, during a long time interval the character of the m_z dynamics changes crucially. At long times, m⃗ becomes parallel to the y-axis, as seen from Fig. <ref>(b), which demonstrates the dynamics of m_y.
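Such long-time behavior follows from a direct numerical integration of the dimensionless system (<ref>). The snippet below (Python/SciPy) is a minimal sketch of such an integration — not the authors' code — assuming a prescribed phase evolution φ(t) = Ωt, as would correspond to a voltage-biased junction; the bias Ω and the values of G, r and α are illustrative choices, not the parameters of Fig. <ref>.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (assumptions, not the figure's values):
G, r, alpha = 10.0, 0.1, 0.01
Omega = 0.5                       # assumed bias: phi(t) = Omega * t

def llg_rhs(t, m):
    """Right-hand side of the dimensionless LLG system, Eq. (2)."""
    mx, my, mz = m
    s = np.sin(Omega * t - r * my)        # the phi_0 coupling term
    pre = 1.0 / (1.0 + alpha**2)
    dmx = pre * (-my * mz + G * r * mz * s
                 - alpha * (mx * mz**2 + G * r * mx * my * s))
    dmy = pre * (mx * mz
                 - alpha * (my * mz**2 - G * r * (mz**2 + mx**2) * s))
    dmz = pre * (-G * r * mx * s
                 - alpha * (G * r * my * mz * s - mz * (mx**2 + my**2)))
    return [dmx, dmy, dmz]

m0 = [0.0, 0.0, 1.0]                      # start along the easy axis
sol = solve_ivp(llg_rhs, (0.0, 500.0), m0, max_step=0.05, rtol=1e-8)
print("final m:", sol.y[:, -1])
print("|m| drift:", abs(np.linalg.norm(sol.y[:, -1]) - 1.0))
```

The printed norm drift is a useful consistency check, since Eq. (<ref>) conserves |m| exactly; a growing drift signals an insufficient step size rather than physics.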
The situation is reminiscent of the Kapitza pendulum (a pendulum whose point of suspension vibrates), where the external sinusoidal force can invert the stability position of the pendulum <cit.>. Detailed features of the Kapitza pendulum manifestation will be presented elsewhere. The question we put here is the following: is it possible to reverse the magnetization by an electric current pulse and then preserve this reversed state? The answer may be found by solving the system of equations (<ref>) together with the Josephson relation dφ/dt=V, written in dimensionless form. It was demonstrated in Ref. cai10 that using a specific time dependence of the bias voltage applied to the weak link leads to the reversal of the magnetic moment of the nanomagnet. The authors showed the reversal of the nanomagnet by a linearly decreasing bias voltage V=1.5-0.00075t (see Fig. 3 in Ref. cai10). The magnetization reversal in this case was accompanied by complex dynamical behavior of the phase and continued during a sufficiently long time interval. In contrast, in the present work we investigate the magnetization reversal in the system described by equations (<ref>) under the influence of an electric current pulse of rectangular or Gaussian form. The rectangular electric current pulse is modeled by I_pulse=A_s in the time interval Δt, (t_0-Δt/2, t_0+Δt/2), and I_pulse=0 otherwise. The form of the current pulse is shown in the inset to Fig. <ref>(a). Here we consider a JJ with low capacitance C (R^2C/L_J ≪ 1, where L_J is the inductance of the JJ and R is its resistance), i.e., we do not take the displacement current into account. So, the electric current through the JJ is

I_pulse = w dφ/dt + sin(φ - r m_y),

where w=V_F/(I_cR)=ω_F/ω_R, V_F=ħω_F/(2e), I_c is the critical current, R is the junction resistance, and ω_R=2eI_cR/ħ is the characteristic frequency. We solved the system of equations (<ref>) together with equation (<ref>) to describe the dynamics of the system. The time dependence of the electric current is determined through the time dependence of the phase difference φ and the magnetization components m_x, m_y, m_z.

We first study the effect of the rectangular pulse shown in the inset to Fig. <ref>(a). It is found that the reversal of the magnetic moment can indeed be realized at optimal values of the JJ (G, r) and pulse (A_s, Δt, t_0) parameters. An example of the transition dynamics for such a reversal of m_z, with residual oscillations, is demonstrated in Fig. <ref>(a); the corresponding parameter values are shown in the figure. The dynamics of the magnetic moment components, the phase difference and the superconducting current is illustrated in Fig. <ref>(b). We see that in the transition region the phase difference changes from 0 to 2π and, correspondingly, the superconducting current changes its direction twice. This is followed by a damped oscillation of the superconducting current. There are some characteristic time points in Fig. <ref>(b), indicated by vertical dashed lines. Line 1 corresponds to a phase difference of π/2 and indicates the maximum of the superconducting current I_s. Line 1′, which corresponds to the maximum of m_y and to m_z=0, has a small shift from line 1. This demonstrates that, in general, the characteristic features of the m_x and m_y time dependence do not coincide with the features of I_s(t), i.e., there is a delay in the reaction of the magnetic moment to the changes of the superconducting current. Another characteristic point corresponds to φ=π. At this time line 2 crosses the points I_s=0, m_y=0, and the minimum of m_z.
At the moment when φ=3π/2, line 3 crosses the minimum of I_s. When the pulse is switched off, the superconducting current starts to flow through the resistance, demonstrating damped oscillations and causing residual oscillations of the magnetic moment components. Note also that the time at which the current pulse ends (t=28) does not actually manifest itself immediately in the m_y (and, not shown here, m_x) dynamics; they demonstrate a continuous transition to the damped oscillating behavior.

Fig. <ref>(b) provides us with a direct way of determining the spin-orbit coupling strength in the junction via an estimation of r. For this, we note that the phase φ(t) = φ_00 + ∫_0^t V(t') dt' can be determined, up to an initial time-independent constant φ_00, in terms of the voltage V(t) across the junction. Moreover, the maxima and minima of I_s occur at times t_max and t_min (see Fig. <ref>(b)) for which sin[φ_00 + ∫_0^{t_max[t_min]} V(t') dt' - r m_y(t_max[t_min])] = +[-]1. Eliminating φ_00 from these equations, one gets

sin( (1/2) [ ∫_{t_max}^{t_min} V(t') dt' + r (m_y(t_max) - m_y(t_min)) ] ) = 1,

which allows us, in principle, to determine r in terms of the magnetization m_y at the positions of the maxima and minima of the supercurrent and the voltage V across the junction. We stress that for the experimental realization of the proposed method one would need to resolve the value of the magnetization at a time difference of the order of 10^-10 - 10^-9 s. At the present stage, the study of the magnetization dynamics with such a resolution is extremely challenging. To determine the spin-orbit coupling constant r experimentally, it may be more convenient to vary the parameters of the current pulse I(t) and study the threshold of the magnetic moment switching.

The dynamics of the system in the form of magnetization trajectories in the m_y-m_x and m_z-m_x planes during the transition time interval, at the same parameters of the pulse and JJ and at α=0, is presented in Fig. <ref>. We see that the magnetic moment makes a spiral rotation, approaching the state with m_z=-1 after the electric current pulse is switched off. The figures clearly show the specific features of the dynamics around the points B, A′ and Q, and the damped oscillations of the magnetization components (see Fig. <ref>(b) and Fig. <ref>(d)). The cusp at point B in Fig. <ref>(a) corresponds to the change from an increase of the absolute value of m_x to its decrease, and the opposite holds at point A′ in Fig. <ref>(c). The behavior of the magnetic system is sensitive to the parameters of the electric current pulse and the JJ. In the Supplement we show three additional protocols of the magnetization reversal obtained by variation of A_s, G and r.

It is interesting to compare the effect of the rectangular pulse with a Gaussian one of the form

I_pulse = A_s (1/(σ√(2π))) exp( -(t-t_0)^2/(2σ^2) ),

where σ characterizes the width of the pulse and A_s sets its amplitude, which is maximal at t=t_0. In this case we also solve numerically the system of equations (<ref>) together with equation (<ref>), using (<ref>). An example of magnetic moment reversal in this case is presented in Figure <ref>, which shows the transition dynamics of m_z for the parameters r=0.1, G=10, A_s=5, σ=2 at small dissipation α=0.01. We see that the magnetization reversal occurs more smoothly compared with the rectangular case. We also note that damping plays a very important role in the reversal phenomena.
It is described by the terms with α in the system of equations (<ref>), where α is the damping parameter. Examples of the magnetization reversal at G=50, r=0.1 and different values of α are presented in Fig. <ref>. We see that dissipation can bring the magnetic system to full reversal, even if at α=0 the system does not demonstrate reversal. Naturally, the magnetic moment shows some residual oscillations after reversal as well. We stress that the full magnetization reversal is realized in certain fixed intervals of the dissipation parameter. As expected, the variation of the phase difference by π is reflected in the maxima of the time dependence of the superconducting current; Fig. <ref> demonstrates this fact. The presented data show that the total change of the phase difference amounts to 6π, which corresponds to the six extrema in the dependence I_s(t). After the full magnetization reversal is realized, the phase difference shows only oscillations.

One important aspect of the results obtained here is the achievement of a relatively short switching time for the magnetization reversal. As we have seen in Figs. <ref>(a) and <ref>, the time taken for such a reversal is ω_F t ≃ 100, which translates to 10^-8 s for a typical ω_F ≃ 10 GHz. We note that this amounts to a switching time which is 1/20th of that obtained in Ref. cai10.

Experimental verification of our work would involve measurement of m_z(t) in a φ_0-junction subjected to a current pulse. For appropriate pulse and junction parameters, as outlined in Figs. <ref> and <ref>, we predict the observation of a reversal of m_z at late times ω_F t ≥ 50. Moreover, measurement of m_y at the times t_max and t_min where I_s reaches its maximum and minimum values, together with the voltage V(t) across the junction between these times, would allow for an experimental determination of r via Eq. <ref>. As a ferromagnet we propose to use a very thin F layer on a dielectric substrate. Its presence produces a Rashba-type spin-orbit interaction, and the strength of this interaction will be large in a metal with a large atomic number Z. An appropriate candidate is permalloy doped with Pt <cit.>. In Pt the spin-orbit interaction plays a very important role in the electronic band formation, and the parameter υ_so/υ_F, which characterizes the relative strength of the spin-orbit interaction, is υ_so/υ_F ∼ 1. On the other hand, Pt doping of permalloy up to 10% did not significantly influence its magnetic properties <cit.>, and we may then expect to reach υ_so/υ_F ∼ 0.1 in this case as well. If the length of the F layer is of the order of the magnetic decay length ħυ_F/h, i.e. l ∼ 1, we have r ∼ 0.1. Another suitable candidate may be a Pt/Co bilayer, or a ferromagnet without inversion symmetry like MnSi or FeGe. If the magnetic moment is oriented in the plane of the F layer, then the spin-orbit interaction should generate a φ_0 Josephson junction <cit.> with a finite ground-state phase difference. The measurement of this phase difference (similar to the experiments in Ref. Szombati) may serve as an independent way of evaluating the parameter r. The parameter G has been evaluated in Ref. konschelle09 for the weak magnetic anisotropy of permalloy, K ∼ 4×10^-5 K·Å^-3 (see Ref. Rusanov), and an S/F/S junction with l ∼ 1 and T_c ∼ 10 K as G ∼ 100. For stronger anisotropy we may expect G ∼ 1.

In summary, we have studied the magnetization reversal in a φ_0-junction with direct coupling between the magnetic moment and the Josephson current.
By adding the electric current pulse, we have simulated the dynamics of the magnetic moment components and demonstrated the full magnetization reversal for certain parameters of the system and the external signal. In particular, the time interval for magnetization reversal can be decreased by changing the amplitude of the signal and the spin-orbit coupling. The observed features might find applications in different fields of superconducting spintronics. They can also be considered as a fundamental basis for memory elements. See the supplementary material for a demonstration of different protocols of the magnetization reversal by variation of the Josephson junction and electric current pulse parameters.

Acknowledgment: The authors thank I. Bobkova and A. Bobkov for helpful discussions. The reported study was funded by the RFBR research projects 16-52-45011_India and 15-29-01217, the DST-RFBR grant INT/RUS/RFBR/P-249, and the French ANR project "SUPERTRONICS".

sdsrev I. Zutic, J. Fabian, and S. Das Sarma, Rev. Mod. Phys. 76, 323 (2004).
linder15 J. Linder and W. A. Jason Robinson, Nature Physics 11, 307 (2015).
buzdin05 A. I. Buzdin, Rev. Mod. Phys. 77, 935 (2005).
bergeret05 F. S. Bergeret, A. F. Volkov, and K. B. Efetov, Rev. Mod. Phys. 77, 1321 (2005).
golubov04 A. A. Golubov, M. Y. Kupriyanov, and E. Ilichev, Rev. Mod. Phys. 76, 411 (2004).
buzdin08 A. Buzdin, Phys. Rev. Lett. 101, 107005 (2008).
krive05 I. V. Krive, A. M. Kadigrobov, R. I. Shekhter, and M. Jonson, Phys. Rev. B 71, 214516 (2005).
reynoso08 A. A. Reynoso, G. Usaj, C. A. Balseiro, D. Feinberg, and M. Avignon, Phys. Rev. Lett. 101, 107001 (2008).
konschelle09 F. Konschelle and A. Buzdin, Phys. Rev. Lett. 102, 017001 (2009).
waintal02 X. Waintal and P. W. Brouwer, Phys. Rev. B 65, 054407 (2002).
braude08 V. Braude and Ya. M. Blanter, Phys. Rev. Lett. 100, 207001 (2008).
linder83 J. Linder and T. Yokoyama, Phys. Rev. B 83, 012501 (2011).
cai10 Liufei Cai and E. M. Chudnovsky, Phys. Rev. B 82, 104429 (2010).
chud2016 Eugene M. Chudnovsky, Phys. Rev. B 93, 144422 (2016).
kapitza P. L. Kapitza, Soviet Phys. JETP 21, 588-592 (1951); Usp. Fiz. Nauk 44, 7-15 (1951).
Hrabec A. Hrabec, F. J. T. Goncalves, C. S. Spencer, E. Arenholz, A. T. N'Diaye, R. L. Stamps, and Christopher H. Marrows, Phys. Rev. B 93, 014432 (2016).
Szombati D. B. Szombati, S. Nadj-Perge, D. Car, S. R. Plissard, E. P. A. M. Bakkers, and L. P. Kouwenhoven, Nature Physics 12, 568 (2016).
Rusanov A. Yu. Rusanov, M. Hesselberth, J. Aarts, and A. I. Buzdin, Phys. Rev. Lett. 93, 057002 (2004).

Supplementary Material to "Magnetization reversal by superconducting current in φ_0 Josephson junctions"

§ GEOMETRY AND EQUATIONS

The geometry of the considered φ_0-junction <cit.> is presented in Fig. <ref>. The ferromagnetic easy axis is directed along the z-axis, which is also the direction n of the gradient of the spin-orbit potential. The magnetization component m_y is coupled with the Josephson current through the phase shift term φ_0 ∼ (n·[m×∇Ψ]), where Ψ is the superconducting order parameter (∇Ψ is along the x-axis in the system considered here). In order to study the dynamics of the S/F/S system, we use the method developed in Ref. konschelle09. We assume that the gradient of the spin-orbit potential is along the easy axis of magnetization, taken to be along ẑ.
The total energy of this system is determined by expression (<ref>) in the main text. The magnetization dynamics is described by the Landau-Lifshitz-Gilbert equation <cit.>

dM/dt = -γ M × H_eff + (α/M_0) ( M × dM/dt ),

where γ is the gyromagnetic ratio, α is a phenomenological Gilbert damping constant, and M_0=|M|. The effective field experienced by the magnetization M is determined by H_eff = -(1/𝒱) ∂E_tot/∂M, so

H_eff = (K/M_0) [ G r sin(φ - r M_y/M_0) ŷ + (M_z/M_0) ẑ ],

where r=lυ_so/υ_F and G=E_J/(K𝒱). Using (<ref>) and (<ref>), we obtain the system of equations (2) of the main text, which describes the dynamics of the SFS structure.

§ MAGNETIZATION REVERSAL UNDER ELECTRIC CURRENT PULSE

The magnetic system is very sensitive to the parameters of the electric current pulse and the Josephson junction. Here we show three additional protocols of the magnetization reversal obtained by variation of A_s, G and r.

§.§ Effect of A_s-variation

Figure <ref> demonstrates the magnetization reversal under a change of the pulse parameter A_s. We see that changing the pulse amplitude from A_s=1.3 to A_s=1.4 reverses the magnetic moment. At A_s=1.5 this feature is still preserved, but it disappears at larger values.

§.§ Effect of G-variation

Figure <ref> demonstrates the magnetization reversal under a change of the Josephson junction parameter G.

§.§ Effect of r-variation

Figures <ref> and <ref> demonstrate the magnetization reversal under a change of the spin-orbit coupling parameter r. We see that there is a possibility of magnetization reversal around G=10. In this case a decrease of the spin-orbit parameter may also lead to the magnetization reversal. The magnetization reversal depends on the other parameters of the system and, naturally, the minimal value of the parameter r depends on their values. In the particular case presented here it is around 0.05.
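For concreteness, the following Python sketch puts the protocol together end to end: the dimensionless LLG system (2) coupled to the current relation (4) with a rectangular pulse, integrated by classical RK4. It is an illustration rather than the authors' code: the values of w, the pulse timing and the step size are assumptions (the timing is chosen so the pulse ends near t = 28, as in Fig. 2), while G, r and A_s lie in the ranges discussed in the text. As the paper emphasizes, whether full reversal actually occurs depends on the parameter window.

```python
import numpy as np

# Illustrative parameters (w, t0, width are assumptions; G, r, A_s are
# in the ranges discussed for the rectangular-pulse runs).
G, r, alpha, w = 10.0, 0.1, 0.1, 1.0
A_s, t0, width = 1.5, 25.0, 6.0        # rectangular pulse, ends near t = 28

def pulse(t):
    """Rectangular pulse: A_s inside (t0 - width/2, t0 + width/2)."""
    return A_s if abs(t - t0) <= width / 2 else 0.0

def rhs(t, y):
    """State y = (m_x, m_y, m_z, phi); Eqs. (2) plus the RSJ-like Eq. (4)."""
    mx, my, mz, phi = y
    s = np.sin(phi - r * my)
    pre = 1.0 / (1.0 + alpha**2)
    dmx = pre * (-my*mz + G*r*mz*s - alpha*(mx*mz**2 + G*r*mx*my*s))
    dmy = pre * (mx*mz - alpha*(my*mz**2 - G*r*(mz**2 + mx**2)*s))
    dmz = pre * (-G*r*mx*s - alpha*(G*r*my*mz*s - mz*(mx**2 + my**2)))
    dphi = (pulse(t) - s) / w          # w dphi/dt = I_pulse - sin(phi - r m_y)
    return np.array([dmx, dmy, dmz, dphi])

# Fixed-step RK4 over dimensionless time t in [0, 200].
h, t = 0.005, 0.0
y = np.array([1e-3, 0.0, np.sqrt(1.0 - 1e-6), 0.0])   # tiny tilt off +z
while t < 200.0:
    k1 = rhs(t, y);          k2 = rhs(t + h/2, y + h/2*k1)
    k3 = rhs(t + h/2, y + h/2*k2); k4 = rhs(t + h, y + h*k3)
    y, t = y + h/6*(k1 + 2*k2 + 2*k3 + k4), t + h
print("final m_z =", y[2])   # m_z -> -1 signals full reversal
```

Tracking φ(t) and sin(φ - r m_y) along the trajectory reproduces the diagnostics discussed in the main text: the 2π (or, with damping, 6π) winding of the phase and the corresponding sign changes of I_s.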
http://arxiv.org/abs/1702.08394v4
{ "authors": [ "Yu. M. Shukrinov", "I. R. Rahmonov", "K. Sengupta", "A. Buzdin" ], "categories": [ "cond-mat.supr-con" ], "primary_category": "cond-mat.supr-con", "published": "20170227174121", "title": "Magnetization reversal by superconducting current in $\\varphi_0$ Josephson junctions" }
Residual Convolutional CTC Networks for Automatic Speech Recognition

Yisen Wang^*† (wangys14@mails.tsinghua.edu.cn), Xuejiao Deng^*† (sophiadeng@tencent.com), Songbai Pu^† (johnsonpu@tencent.com), Zhiheng Huang^† (zhihhuang@tencent.com)
^† Tencent AI Lab, Tencent, Shenzhen, China; ^* Department of Computer Science and Technology, Tsinghua University, Beijing, China

Keywords: Residual CNN, CTC, System Combination, ASR

Deep learning approaches have been widely used in Automatic Speech Recognition (ASR) and have achieved a significant accuracy improvement. In particular, Convolutional Neural Networks (CNNs) have recently been revisited in ASR. However, most CNNs used in existing work have fewer than 10 layers, which may not be deep enough to capture all human speech signal information. In this paper, we propose a novel deep and wide CNN architecture, denoted RCNN-CTC, which has residual connections and the Connectionist Temporal Classification (CTC) loss function. RCNN-CTC is an end-to-end system which can exploit temporal and spectral structures of speech signals simultaneously. Furthermore, we introduce a CTC-based system combination, which is different from the conventional frame-wise senone-based one. The basic subsystems adopted in the combination are of different types and thus mutually complementary to each other. Experimental results show that our proposed single system RCNN-CTC achieves the lowest word error rate (WER) on the WSJ and Tencent Chat data sets, compared to several widely used neural network systems in ASR. In addition, the proposed system combination offers a further error reduction on these two data sets, resulting in relative WER reductions of 14.91% and 6.52% on WSJ dev93 and Tencent Chat data sets respectively.

§ INTRODUCTION

Automatic Speech Recognition (ASR) is designed to automatically transcribe human speech into text. In the past several years, deep learning <cit.> has been successfully applied in ASR to boost the recognition accuracy. Very recently, CNNs have become an attractive model in ASR, transforming speech signals into feature maps as used in computer vision <cit.>. Compared to other deep learning architectures, CNNs have several advantages: 1) CNNs are suited to exploit local correlations of human speech signals in both time and frequency dimensions. 2) CNNs have the capacity to exploit translational invariance in signals. Most previous applications of CNNs in ASR only used a few convolutional layers. One typical architecture usually contains several convolutional layers, followed by a number of recurrent layers and fully-connected feedforward layers. These CNN structures are often less than 10 layers deep[One exception is LACE <cit.>, which has about 20 layers, but it does not utilize CTC as proposed in this paper.], which may not be deep enough to capture all the information of human speech signals, especially for long sequences. As a result, their WERs may be adversely affected. Also, the convergence speed of training this type of architecture for acoustic models is too slow in practice. Traditional acoustic model training is based on the frame-wise cross entropy (CE) loss, which requires frame labels pre-generated and aligned by the hidden Markov model/Gaussian mixture model (HMM/GMM) paradigm. To simplify this process, Graves et al. introduced the CTC objective function to infer speech-label alignments automatically without any intermediate process, leading to an end-to-end system for ASR.
The CTC technique has shown promising results in Deep Speech <cit.> and EESEN <cit.>. Motivated by the above observations, a residual convolutional neural network architecture trained with the CTC loss, denoted RCNN-CTC, is proposed in this paper to boost the performance of ASR. RCNN-CTC has the following three advantages: 1) It is a CNN-based system which operates on both the time and frequency dimensions. RCNN-CTC can model temporal as well as spectral local correlations and gain translational invariance in speech signals. 2) Its network architecture can be very deep (more than 40 layers), obtaining more expressive power and better generalization capacity through residual connections between layers, as inspired by Residual Networks (ResNets) <cit.>. 3) RCNN-CTC can be trained in an end-to-end manner thanks to the CTC loss. In addition to the proposed RCNN-CTC, we propose a CTC-based system combination to further enhance the recognition accuracy. The proposed combination is different from the conventional frame-wise senone-based one, due to the fact that the former produces peaked phone/label distributions while the latter produces frame-wise senone distributions. The basic subsystems adopted in our combination are RCNN-CTC, Bidirectional Long Short Term Memory (BLSTM) <cit.> and the Convolutional Long short term memory Deep Neural Network (CLDNN) <cit.>. They have heterogeneous structures and are mutually complementary in producing transcriptions (see Section 4). Note that CTC-based system combination may be difficult, as the output of each basic subsystem is not frame-aligned and the scores are not well calibrated, so the results cannot be simply averaged. We implement a series of procedures of time normalization, alignment and voting to address this issue. In summary, our contributions in this paper are threefold: 1) We propose a residual convolutional neural network architecture paired with the CTC loss (RCNN-CTC) for the ASR task. To our knowledge, such a deep and wide network has not been applied to ASR before; 2) We propose a novel CTC-based system combination, which obtains a significant WER reduction in our experiments; 3) Empirically, our proposed single system RCNN-CTC achieves lower WERs compared with other widely used neural network ASR systems on the WSJ and Tencent Chat data sets. In addition, the proposed system combination further reduces the WERs on these two data sets.

§ RELATED WORK

In the last few years, Recurrent Neural Networks (RNNs) have been widely used for sequential modeling due to their capability of modeling long histories <cit.>. As ASR is a sequential task, Long Short Term Memory (LSTM) <cit.> and Bidirectional LSTM (BLSTM) <cit.> have also been successfully applied to it, addressing drawbacks of RNNs such as the vanishing gradient problem. However, a disadvantage of LSTM is that it needs to store multiple gating neural responses at each time step and unfold the time steps during the training and test stages, which results in a computational bottleneck for long sequences, i.e., thousands of frames in ASR. CNNs were introduced into ASR to alleviate this computational problem. In early work, only a few CNN layers were typically used: for example, one convolutional layer, one pooling layer and a few fully-connected layers, or three convolutional layers as feature preprocessing layers. It was also shown that CNN-based speech recognition using raw speech as input can be more robust.
To this end, deep CNNs (about 10 convolutional layers) showed great performance in noisy speech recognition <cit.>. Recently, ResNet <cit.> has been shown to achieve compelling convergence and high accuracy in computer vision, which is attributed to the identity mappings used as skip connections in its residual blocks. Successful attempts along this line in ASR have also been reported very recently. One such work proposed a deep convolutional network with batch normalization (BN), residual connections and a convolutional LSTM structure. Convolutional LSTM uses convolutions to replace the inner products within LSTM units. Residual connections are used to train very deep networks, and BN normalizes each layer's inputs to reduce internal covariate shift. The above techniques are employed to add more computational depth to the model while reducing the number of parameters at the same time. Another network architecture was proposed in <cit.>, i.e., a deep recurrent convolutional network with deep residual learning. It implements several recurrent layers at the bottom, followed by deep fully convolutional layers with 3×3 filters (but no pooling layers). Besides, four residual blocks are built among the CNN layers, with each residual block containing layers with the same number of feature maps to avoid extra parameters. A residual LSTM architecture was proposed in <cit.>. In addition to the inherent shortcut paths between LSTM memory cells, it employs additional spatial shortcut paths between layer outputs, and the residual LSTM architecture was shown to provide a large gain from increasing depth. However, these models still suffer from the computational bottleneck, due to the LSTM components in their network architectures. Another proposal is a deep CNN with layer-wise context expansion and a location-based attention architecture (LACE). The layer-wise context expansion and location-based attention mechanism are implemented by element-wise matrix products and convolution operations, without max-pooling or average-pooling. Moreover, it employs four residual blocks, each having an identical structure, similar to ResNet. It is worth pointing out that LACE does not employ the CTC loss; consequently, it depends on the tedious label alignment process and does not facilitate an end-to-end training framework.

§ RESIDUAL CONVOLUTIONAL CTC NETWORKS

As stated above, CNNs and CTC both have excellent characteristics for the ASR task, but the combination of these two components has not been fully explored. In this paper, we propose a novel residual convolutional CTC network architecture, namely RCNN-CTC, which is very deep (more than 40 layers) to get the full value of CNNs, residual connections and CTC.

§.§ Residual CNN

Generally speaking, deep CNNs can improve generalization and outperform shallow networks. However, they tend to be more difficult to train and slower to converge. Residual Networks (ResNets) <cit.> have been proposed recently to ease the training of very deep CNNs. A ResNet is composed of a number of stacked residual blocks, and each block contains direct links between the lower layer outputs and the higher layer inputs. The residual block (described in Figure <ref>) is defined as:

y = ℱ(x, W_i) + x,

where x and y are the input and output of the layers considered, and ℱ is the mapping function of the stacked nonlinear layers. Note that the identity shortcut connections of x add neither extra parameters nor computational complexity. With the presence of residual connections, ResNets improve the convergence speed of training.
ResNets also enjoy accuracy gains from greatly increased depth, producing results substantially better than previous networks. Recently, wide residual networks (WRNs), which widen the convolutional layers by adding more feature maps in each residual block, were shown to be superior to the commonly used narrow and very deep counterparts (original ResNets). Note that more feature maps mean more computation. In order to get a trade-off between performance and computational complexity, we adopt the network architecture with width = 2, i.e., our network is 2 times wider than the original ResNet architecture. The details of the proposed RCNN-CTC network architecture are shown in Table <ref>. In particular, we use a large 41×11 filter with 32 feature maps and width = 1 as conv1, followed by 4 groups (each with size N, width = 2) of residual blocks defined in Figure <ref>, namely ResBlock1, ResBlock2, ResBlock3 and ResBlock4 (N = 5 and 2 for the Tencent Chat and WSJ data respectively, since the former data set is larger than the latter). In general, convolutions require a context window; thus conv1 is set by considering the input feature dimension and the empirical window size. We also employ the batch normalization (BN) <cit.> technique in RCNN-CTC, which normalizes each layer's inputs to reduce internal covariate shift. BN speeds up training and acts as a regularizer. The standard formulation of BN for CNNs can be readily applied here, and we do not need the sequence-wise normalization of RNNs <cit.>. Moreover, strided convolutions are an essential element of CNNs. For RCNN-CTC, applying striding is also a natural way to reduce the computational cost along the time and frequency dimensions. We find that RCNN-CTC's performance is sensitive to the stride on the time dimension but not on the frequency dimension. Unlike ResNets used in computer vision, where ResBlock2 and ResBlock3 need to be set deeper than ResBlock1 and ResBlock4 to describe the shape or skeleton, each ResBlock has almost the same importance in ASR (i.e., N is identical for each ResBlock). In summary, our proposed RCNN-CTC has a deeper and wider network architecture compared to the existing CNN-based systems in ASR.

§.§ CTC

Traditional acoustic model training is based on frame-level labels with the cross-entropy (CE) criterion, which requires a tedious label alignment procedure. Following <cit.>, we adopt the CTC objective <cit.> to automatically learn the alignments between speech frames and their label sequences, leading to end-to-end training. To align the network outputs with the label sequences, an intermediate representation of CTC paths is introduced in <cit.>. The label sequence z can then be mapped to its corresponding CTC paths. This is a one-to-many mapping, because multiple CTC paths can correspond to the same label sequence. For example, both "A A ϕ ϕ B C ϕ" and "ϕ A A B ϕ C C" are mapped to the label sequence "A B C", where ϕ is the blank symbol. We denote the set of CTC paths for z as Φ(z). The likelihood of z can thus be evaluated as a sum of the probabilities of its CTC paths:

P(z|X) = ∑_{p∈Φ(z)} P(p|X),

where X is the utterance consisting of speech frames and p is a CTC path. Given this distribution, we can derive the objective function of sequence labeling, ln P(z|X). Since the objective function is differentiable, we can back-propagate its errors and update the network parameters.
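To make the architecture and the training objective concrete, the following PyTorch sketch shows a residual block in the sense of Eq. (<ref>) and a toy front end in the spirit of Table <ref>, wired to the CTC loss. It is a hedged illustration, not the paper's exact model: the strides, group widths, down-sampling projections and the blank index are simplifications chosen here; only the 41×11 conv1 with 32 feature maps and the 118-phone output inventory follow the text.

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """Residual block y = F(x) + x (Eq. 1): two 3x3 convs with BN + ReLU."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels))
        self.relu = nn.ReLU(inplace=True)
    def forward(self, x):
        return self.relu(self.body(x) + x)

# Toy front end: a large 41x11 conv1 over (time, frequency), then stacked
# residual blocks; stride (2, 2) is an illustrative choice.
model = nn.Sequential(
    nn.Conv2d(1, 32, kernel_size=(41, 11), stride=(2, 2), padding=(20, 5)),
    nn.BatchNorm2d(32), nn.ReLU(inplace=True),
    ResBlock(32), ResBlock(32))

x = torch.randn(4, 1, 400, 40)            # (batch, 1, frames, filterbanks)
feats = model(x)                          # -> (4, 32, 200, 20)
logits = nn.Linear(32 * 20, 119)(feats.permute(0, 2, 1, 3).flatten(2))
log_probs = logits.log_softmax(-1).transpose(0, 1)   # (T, N, C) for CTC

ctc = nn.CTCLoss(blank=0)                 # 118 phones + blank = 119 classes
targets = torch.randint(1, 119, (4, 30))
loss = ctc(log_probs, targets,
           input_lengths=torch.full((4,), 200),
           target_lengths=torch.full((4,), 30))
print(feats.shape, loss.item())
```

The (T, N, C) layout of log_probs is what nn.CTCLoss expects; the summation over alignment paths of Eq. (<ref>) is carried out internally by the loss via the CTC forward algorithm.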
§ CTC-BASED SYSTEM COMBINATION

The performance improvement of conventional system combination is small, due to the slight differences among the subsystems. Therefore, we propose a system combination method which takes the diversity and complementarity among subsystems into account. As a result, our proposed system combination obtains an absolute WER reduction of about 1% on the WSJ and Tencent Chat data sets.

§.§ Subsystems Selection

Our selection of subsystems is guided by the following principle: compared to the transcription (ground truth) G, we first figure out the correct words C_i in the decoded text of each subsystem i. We then search over combinations of subsystems and compute their union set of correct words U = ⋃_i C_i. We define the maximal correct word rate (MCWR) as the selection criterion:

MCWR = ∑_{w∈G} 𝕀(w∈U)/|G|,

where 𝕀(·) is the indicator function that takes 1 if (·) is true and 0 otherwise, and |G| is the length of the ground truth G. Our goal is to select the combination which achieves the highest MCWR while using a minimal number of subsystems. Through this method, we can find mutually complementary subsystems at the least cost. This also provides a guideline for choosing a combination which balances recognition accuracy against combination cost. In our experiments, a small held-out data set of WSJ is used to search for an optimal system combination via the MCWR metric. Therefore, the following three subsystems[On the held-out data, two subsystems cannot achieve an acceptable MCWR (0.95) but three subsystems have already obtained a very high MCWR (0.98), while four subsystems' MCWR (0.98) is almost the same as with three.] are selected: 1) the proposed RCNN-CTC of Section 3; 2) BLSTM <cit.>, which consists of several bidirectional LSTM layers; and 3) CLDNN <cit.>, which consists of convolutional layers, LSTM layers and DNN layers. Figure <ref> shows the network architectures of these subsystems. Due to the MCWR metric and their heterogeneous structures, they may be mutually complementary to each other, which is confirmed in our experiments. An illustrative example from the WSJ data set is given below to explain the complementarity of the subsystems. Here, the ground truth is:

CONTACTS STILL INSIDE OWENS CORNING HELP TOO

and the output sentences of the three subsystems are:

1. CONTACTS STILL INSIDE OWNS CORNING HELPED TOO
2. CONTACT STILL INSIDE OWENS CORNING HELPED
3. CONTACT STILL INSIDE OWNS CORNING HELP TOO

The incorrect words in the output of each subsystem are underlined. We can see that each subsystem has its own defects (i.e., incorrect words). But the incorrect words differ among the three subsystems, so they can be mutually corrected to a certain extent. For example, compared to the ground truth word "CONTACTS", the word "CONTACT" is incorrect in subsystems 2 and 3 while it is correct in subsystem 1. The hope is that, via system combination, we can leverage multiple systems to obtain more correct words. In summary, we select three different types of subsystems for combination: a CNN-based subsystem (RCNN-CTC), an LSTM-based subsystem (BLSTM) and their mixture (CLDNN). We argue that the CNN can have a global view of a long utterance via hierarchical bottom-up feature abstraction, while the LSTM can capture the sequence information contained in long sentences. The system combination can thus realize both advantages.

§.§ Challenges

Our proposed single system RCNN-CTC uses a CTC output.
The system combination is thus CTC-based and differs from the frame-wise CE-based one, since the peak responses of CTC in each subsystem may be mismatched. Besides, the output likelihoods of the subsystems are not on the same time scale, which confuses the decoding process of the Weighted Finite-State Transducer (WFST) used in our experiments. Inspired by ROVER <cit.>, we propose our CTC-based system combination method (Figure <ref>) as follows. For each subsystem, after decoding with the WFST graph (TLG), the 1-best hypothesis[We have tested top-N (N>1) hypotheses and found that the results are no better; see Section 5.4 for details.] with its confidence score is prepared for the following steps. Alignment and composition are applied to the hypotheses of the various subsystems to generate a single composite word transition network (WTN). Once the WTN is generated, we select the best scoring word from each branching path by a voting scheme to produce a new hypothesis.

§.§ Alignment

Time Normalization. For each subsystem, after searching the best lattice path, we get a hypothesis sequence in which each item involves a label, a confidence score, a starting time and a duration, which may not be on the same scale due to the CTC decoding. Therefore, we need to unify the time length and rescale the starting/duration times to the same scale before constructing a WTN.

WTN Construction. After time normalization, we can align and combine the hypothesis sequences into a single composite WTN. In particular, one of the sequences is chosen as the base WTN (WTN-BASE), and the other sequences are added to WTN-BASE word by word. Comparing each word in the sequence with the corresponding word in WTN-BASE, we adopt different operations for different conditions: 1) Correction. A branching point is created and the word transition arc is added to WTN-BASE; 2) Substitution. A branching point is created and the word transition arc is added to WTN-BASE; 3) Deletion. A branching point is created and a BLANK transition arc is added to WTN-BASE; 4) Insertion. A sub-WTN is created and inserted between the adjacent nodes in WTN-BASE to record the fact. Following the above procedure, we iteratively combine the lattice words until the final composite WTN is generated. Considering the example in Section 4.1, if we select the output of subsystem 1 as WTN-BASE, the first word in WTN-BASE is "CONTACTS" while it is "CONTACT" in subsystems 2 and 3. This satisfies the substitution condition, so we create a branching point and add the word transition arc of "CONTACT" to WTN-BASE. The remaining words are processed in a similar way until the final single composite WTN is generated, as in Figure <ref>.

§.§ Voting

Once the composite WTN has been generated, a voting module is employed to select the best scoring word sequence by searching the WTN. According to ROVER, there are three voting schemes, i.e., voting by 1) frequency of occurrence, 2) frequency of occurrence and average word confidence, and 3) frequency of occurrence and maximum confidence. Generally, the third voting scheme, i.e., frequency of occurrence and maximum confidence, reports the best results <cit.>, and it is thus adopted in our system combination. As the confidence score, we use the minimum Bayes risk score <cit.> to serve as the maximum confidence.

§ EXPERIMENTS

We analyze the performance of our proposed RCNN-CTC and CTC-based system combination on a benchmark data set, the Wall Street Journal (WSJ), and a large mobile chat data set, Tencent Chat, from the Tencent company.
The Tencent Chat data set contains about 2.3 million utterances, which account for 1400 hours of speech data.

§.§ Experimental Setup

For the WSJ data set, we use the standard configuration: si284 for training, eval92 for validation and dev93 for testing. Our input features are 40-dimensional filterbank features with the delta and delta-delta configuration. The features are normalized via mean subtraction and variance normalization on a per-speaker basis. For the Tencent Chat data set, we use about 1400 hours of internal speech data for training and an independent set of 2000 utterances for testing. Our input features are 40-dimensional filterbank features combined with 3-dimensional pitch features, normalized by per-utterance mean and variance, as there is no speaker information. We use the Kaldi recipe <cit.> to prepare the dictionary for the WSJ and Tencent Chat data sets. It uses the CMU dictionary and Sequitur G2P to prepare phone sequences for both English and Chinese words. Finally, we have 118 phones serving as acoustic model output labels. Our decoding follows the WFST-based approach in EESEN <cit.>. As for the language model, we apply the WSJ pruned trigram language model with expanded lexicon <cit.> in the ARPA format on the WSJ data set. For the Tencent Chat data set, we use a 5-gram language model trained on a corpus of about 6 billion tokens (120K vocabulary) from an internal data set. All the networks use phone-based training by stochastic gradient descent (SGD) optimization. The learning rates are initialized in the range of 4×10^-5 to 1×10^-4, and are exponentially decayed by a factor of 0.1 after every 10 epochs during training.

§.§ Results on WSJ data set

We compare our proposed single system RCNN-CTC with several commonly used neural network baseline systems in ASR, i.e., BLSTM <cit.>, CLDNN <cit.> and VGG <cit.>. BLSTM is implemented according to <cit.> and uses 4 bidirectional LSTM layers. At each layer, both the forward and the backward layers comprise 320 hidden units. CLDNN is implemented following <cit.> and contains 3 convolutional layers, 3 bidirectional LSTM layers and 2 fully-connected layers. The kernel sizes of the three convolutional layers are (11, 21), (11, 11), (3, 3), and the strides are (3, 2), (1, 2), (1, 1) respectively. Batch normalization and ReLU activation functions are also employed. Each LSTM layer consists of 896 hidden units, and the 2 fully-connected layers have 896 and 74 units respectively. VGG is implemented according to <cit.> and has 14 layers. First, there are 3 convolutional layers with small 3×3 filters and 96 feature maps, followed by a max-pooling layer. Then, 4 convolutional layers with 192 feature maps and 4 convolutional layers with 384 feature maps are added, all using 3×3 filters and max-pooling at the end. With regard to RCNN-CTC, we adopt the parameters in Table <ref> with N = 2. Table <ref> compares our proposed single system RCNN-CTC with the baseline systems on the WSJ data set. Among all systems trained with the CTC loss, we observe that RCNN-CTC obtains WERs of 5.35% and 8.99% on eval92 and dev93 respectively[Lower WER results on the WSJ data set were reported in the Kaldi speech recognition project (https://github.com/kaldi-asr/kaldi); however, those results were achieved using additional techniques including speaker-adaptive features, splice context for data preparation, and iVectors for instantaneous adaptation.], which slightly outperforms BLSTM, VGG and CLDNN.
We speculate that the slight gain may be limited by the small data size of WSJ, on which the proposed RCNN-CTC cannot demonstrate its full system strength. We will observe a much larger gain of RCNN-CTC over the other systems when the larger Chat data set is used in Section 5.3. Moreover, we show additional results in Table <ref>, where we compare systems trained with CTC and CE. Here, we only take the BLSTM system as an example; the results are similar for other systems. For BLSTM+CE, we use a GMM-HMM system <cit.> to generate the label alignments for training. The GMM-HMM system is trained with the maximum likelihood (ML) criterion and refined with the boosted maximum-mutual-information (BMMI) sequence-discriminative training criterion. As can be seen from the last two rows of Table <ref>, BLSTM+CTC slightly outperforms BLSTM+CE, and the former can be trained in an end-to-end manner while the latter requires label alignment.

We next proceed with the system combination experiments on the WSJ data set. For a fair comparison, we only consider combinations of three subsystems[As mentioned in Section 4.1, on a held-out data set, two subsystems cannot achieve an acceptable MCWR (0.95) but three subsystems have already obtained a very high MCWR (0.98), while four subsystems' MCWR (0.98) is almost the same as with three.] among the four: RCNN-CTC, VGG-CTC, CLDNN-CTC and BLSTM-CTC. Table <ref> shows all four possible combinations and their WERs on eval92 and dev93 respectively. It is worth pointing out that the WERs of the subsystems may not be a useful metric for selecting subsystems for combination. Instead, it is the complementarity among the subsystems that really matters. For example, RCNN-CTC, VGG and CLDNN are the top 3 single systems with regard to WER in Table <ref>, yet their system combination results of 4.70%/8.04% on eval92 and dev93 are the worst in Table <ref>. In contrast, our system combination achieves the lowest WERs of 4.29%/7.65%, which indicates the effectiveness of the MCWR subsystem selection method. The fact that both of the top 2 system combinations include RCNN-CTC suggests its superiority over the other systems. Moreover, we notice that the WERs of the combined systems are all lower than the single system results in Table <ref>, indicating that system combination consistently boosts the recognition accuracy. Note that our proposed system combination achieves absolute WER drops of 1.06% and 1.34% (or relative drops of 19.81% and 14.91%) on eval92 and dev93 respectively, compared to the best single system RCNN-CTC.

§.§ Results on Tencent Chat data set

In the following, we explore the performance of RCNN-CTC and the system combination on the large Chat data set. Here, we only present results for systems trained with the CTC loss, to avoid the tedious label alignment work of CE. The baseline systems are the same as those in Section 5.2, but some network parameters are slightly adjusted. CLDNN uses the same network architecture, but the kernel sizes of the three convolutional layers are (11, 11), (5, 5), (3, 3), and the strides are (3, 1), (1, 1), (1, 1) respectively. As for RCNN-CTC, we again adopt the parameters in Table <ref>, the difference being N = 5. With regard to BLSTM and VGG, the parameters of these systems are the same as in Section 5.2. Table <ref> summarizes the WERs of the single systems on the Tencent Chat data set. Compared to VGG, CLDNN and BLSTM, RCNN-CTC performs the best and obtains absolute WER reductions of 0.77%, 0.68% and 0.77%, or relative WER reductions of 5.12%, 4.55% and 5.12% respectively.
Furthermore, Table <ref> confirms the advantages of deep CNN architectures for ASR tasks on large data sets. RCNN-CTC and VGG are both CNN-based systems, but RCNN-CTC has the residual connections described in Section 3, which allow it to have a very deep network (40 layers for RCNN-CTC vs. 14 for VGG) and thus achieve higher accuracy. Similar to the experiments on the WSJ data set, we also carry out a series of experiments on the Tencent Chat data set to further assess the proposed system combination. We again consider combinations of three subsystems only, with the WERs of all combinations collectively listed in Table <ref>. As for the WSJ system combination, the combination RCNN+BLSTM+CLDNN outperforms the others, due to the maximal complementarity of these three subsystems as described by MCWR. As can be noticed, the top 2 system combinations again both include RCNN-CTC as a base subsystem, which reveals its strong capacity in ASR. WERR is the relative WER reduction of each combination with respect to the best single system RCNN-CTC in Table <ref>. Our proposed system combination achieves a WER of 13.33%, which amounts to an absolute WER drop of 0.93%, or a relative drop of 6.52%, compared to RCNN-CTC. In summary, the experimental results demonstrate the effectiveness of our proposed single system RCNN-CTC and the CTC-based system combination.

§.§ Analysis and Discussion

Choice of 1-best vs. N-best in system combination. As stated in Section 4, we choose the 1-best hypothesis for combination, because we find that N-best is no better than 1-best in our experiments, as shown in Table <ref>. Here the N-best (N = 10) distinct hypotheses of each subsystem are prepared for combination. First, if we use the voting scheme of Section 4.4, i.e., maximal confidence score voting, choosing N-best does not offer any further benefit. Although N-best hypotheses make the WTN contain more branchings and word choices, maximal confidence score voting yields almost the same result as with the 1-best hypothesis. The first two rows of Table <ref> verify these conclusions. Moreover, we conduct another experiment using the frequency of occurrence as the voting score for the N-best subsystem combination. We find that the results are close to 1-best on the WSJ data set but slightly worse on the Tencent Chat data set. This is because one subsystem's error may repeat many times in the N-best hypotheses, which distorts the subsequent frequency-based voting. Furthermore, considering the computational cost of N-best hypotheses, the 1-best hypothesis from each subsystem with the maximal confidence score may be preferred.

§ CONCLUSIONS

In this paper, we proposed a novel residual convolutional neural network architecture trained with the CTC loss (RCNN-CTC) for ASR. We argued that CNNs are suited to exploit local correlations of human speech signals in both the time and frequency dimensions, and have the capacity to exploit translational invariance in signals. In our proposed RCNN-CTC, we employ a wide and deep CNN architecture (more than 40 layers) with residual connections, which offers more expressive power and better generalization capacity. RCNN-CTC can be trained in an end-to-end manner thanks to the adoption of the CTC loss, which effectively avoids the tedious frame alignment process. Furthermore, we proposed a CTC-based system combination via subsystem selection, alignment and voting procedures. Experiments on the WSJ and Tencent Chat data sets show that, among widely used neural network systems in ASR, RCNN-CTC obtains the lowest WER.
In addition, significant WER reductions are further obtained via our proposed system combination. For example, compared to RCNN-CTC, the proposed system combination yields relative WER reductions of 14.9% and 6.52% on the WSJ dev93 and Tencent Chat data sets respectively.
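For concreteness, the voting step of the proposed combination (maximal confidence score voting over the aligned 1-best hypotheses, Section 4.4) can be sketched as follows. This is a minimal, hypothetical Java illustration rather than the system's actual implementation: the Candidate type and the slot-based representation of the word transition network are our own assumptions.

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.Comparator;
    import java.util.List;

    /** Minimal sketch of maximal confidence score voting over a word
     *  transition network whose aligned slots each hold the candidate
     *  words (with confidence scores) of the selected subsystems. */
    public final class MaxConfidenceVoting {

        /** One candidate word in an aligned slot; an empty word marks a null arc. */
        record Candidate(String word, double confidence) {}

        /** Picks, for every slot, the candidate with the highest confidence. */
        static List<String> vote(List<List<Candidate>> slots) {
            List<String> output = new ArrayList<>();
            for (List<Candidate> slot : slots) {
                Candidate best = Collections.max(
                        slot, Comparator.comparingDouble(Candidate::confidence));
                if (!best.word().isEmpty()) {   // skip null arcs (deletions)
                    output.add(best.word());
                }
            }
            return output;
        }

        public static void main(String[] args) {
            // Toy example: three subsystems aligned over two slots.
            List<List<Candidate>> slots = List.of(
                    List.of(new Candidate("speech", 0.91),
                            new Candidate("speech", 0.88),
                            new Candidate("speed", 0.84)),
                    List.of(new Candidate("recognition", 0.93),
                            new Candidate("", 0.40),
                            new Candidate("recognition", 0.90)));
            System.out.println(String.join(" ", vote(slots))); // "speech recognition"
        }
    }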
http://arxiv.org/abs/1702.07793v1
{ "authors": [ "Yisen Wang", "Xuejiao Deng", "Songbai Pu", "Zhiheng Huang" ], "categories": [ "cs.CL" ], "primary_category": "cs.CL", "published": "20170224224913", "title": "Residual Convolutional CTC Networks for Automatic Speech Recognition" }
The present paper introduces the open-source Java Event Tracer (JETracer) framework for real-time tracing of GUI events within applications based on the AWT, Swing or SWT graphical toolkits. Our framework provides a common event model for supported toolkits, the possibility of receiving GUI events in real-time, good performance in the case of complex target applications and the possibility of deployment over a network. The present paper provides the rationale for JETracer, presents related research and details its technical implementation. An empirical evaluation where JETracer is used to trace GUI events within five popular, open-source applications is also presented.

JETracer - A Framework for Java GUI Event Tracing
Arthur-Jozsef Molnar
Faculty of Mathematics and Computer Science, Babeş-Bolyai University, Cluj-Napoca, Romania
arthur@cs.ubbcluj.ro

§ INTRODUCTION

The graphical user interface (GUI) is currently the most pervasive paradigm for human-computer interaction. With the continued proliferation of mobile devices, GUI-driven applications remain the norm in today's landscape of pervasive computing. Given their virtual omnipresence and increasing complexity across different platforms, it stands to reason that tools supporting their lifecycle must keep pace. This is especially evident for many applications where GUI-related code takes up as much as 50% of all application code <cit.>. Therefore we believe that software tooling that supports the GUI application development lifecycle will take on an increasing importance in practitioners' toolboxes. Such software can assist with many activities, starting with analysis and design, as well as coding, program comprehension, software visualization and testing. This is confirmed when studying the evolution of a widely used IDE such as Eclipse <cit.>, where each new version ships with more advanced features aimed at helping professionals create higher-quality software faster. Furthermore, the creation of innovative tools is nowadays aided by the prevalence of managed, flexible platforms such as Java and .NET, which enable novel tool approaches via techniques such as reflection, code instrumentation and the use of annotations.

The supportive role of tooling is already well established in the literature. In <cit.>, the authors conduct an industry survey covering over 1400 professional developers regarding the strategies, tools and problems encountered by professionals when comprehending software. Among the most significant findings are that developers usually interact with the target application's GUI to find starting points for further investigation, and that they use IDEs in parallel with more specialized tools. Of particular note was the finding that "industry developers do not use dedicated program comprehension tools developed by the research community" <cit.>. Another important issue regards open access to state-of-the-art tooling. As we show in the following section, there exist commercial tools that incorporate some of the functionalities of JETracer. However, as they are closed-source and available against significant licensing fees, they have limited impact within academia.

Our motivation for developing JETracer is the lack of open-source software tools providing multi-platform GUI event tracing.
We believe GUI event tracing supports the development of further innovative tools. These may target program comprehension or visualization by creating runtime traces, or software testing by providing real-time information about executed code.

The present paper is structured as follows: the next section presents related work, while the third section details JETracer's technical implementation. The fourth section is dedicated to an evaluation using 5 popular open-source applications. The final section presents our conclusions together with plans for future work.

§ RELATED WORK

The first important work is the Valgrind[http://valgrind.org/] multi-platform dynamic binary instrumentation framework. Valgrind loads the target into memory and instruments it <cit.> in a manner that is similar to our approach. Among Valgrind's users we mention the DAIKON invariant detection system <cit.> as well as the TaintCheck system <cit.>. An approach related to Valgrind is the DTrace[http://dtrace.org/blogs] tool. Described as an "observability technology" by its authors <cit.>, DTrace allows observing what system components are doing during program execution.

While the first efforts targeted natively-compiled languages from the C family, the prevalence of instrumentation-friendly and object-oriented platforms such as Java and .NET spearheaded the creation of supportive tooling from platform developers and third parties alike. In this regard we mention Oracle's Java Mission Control and Flight Recorder tools <cit.> that provide monitoring for Java systems. Another important contribution is Javassist, a tool which facilitates instrumentation of Java class files, including core classes during JVM class loading <cit.>. Its capabilities and ease of use led to its widespread use in dynamic analysis and tracing <cit.>. As discussed in more detail within the following sections, JETracer uses Javassist for instrumenting key classes responsible for firing events within the targeted GUI frameworks.

The previously detailed frameworks and tools have facilitated the implementation of novel software used both in research and industry targeting program comprehension, software visualization and testing. A first effort in this direction was the JOVE tool for software visualization <cit.>. JOVE uses code instrumentation to capture snapshots of each working thread and create a program trace which can then be displayed using several visualizations <cit.>. A different approach is taken within Whyline, which proposes a number of "Why/Why not" questions about the target program's textual or graphical output <cit.>. Whyline uses bytecode instrumentation to create program traces and record the program's graphical output, with the execution history used for providing answers to the selected questions.

An important area of research where JETracer is expected to contribute targets the visualization of GUI-driven applications. This is of particular interest as an area where research results recently underwent large-scale industrial evaluation with encouraging results <cit.>. A representative approach is the GUISurfer tool, which builds a language-independent model of the targeted GUI using static analysis <cit.>. Another active area of research relevant to JETracer is GUI application testing, an area with notable results both from commercial as well as academic organizations.
The first wave of tools enabling automated testing for GUI applications is represented by capture-replay implementations such as Pounder or Marathon, which enable recording a user's interaction with the application <cit.>. The recorded actions are then replayed automatically, and any change in the response of the target application, such as an uncaught exception or an unexpected window being displayed, is interpreted as an error. The main limitation of such tools lies in their limited ability to identify graphical widgets, as changes to the target application can easily break test case replayability. More advanced tools, such as Abbot and TestNG, integrate scripting engines facilitating quick test suite creation <cit.>. However, existing open-source tools are limited to a single GUI toolkit <cit.>. Moreover, some of these tools, such as Abbot and Pounder, are no longer in active development, and using them with the latest version of Java yields runtime errors. These projects paved the way for commercial implementations such as MarathonITE[http://marathontesting.com/], a fully-featured commercial implementation of Marathon, or the Squish[http://www.froglogic.com/squish] toolkit. When compared with their open-source alternatives, these applications provide greater flexibility by supporting many GUI toolkits such as AWT/Swing, SWT, Qt, Web as well as mobile platforms. In addition, they provide more precise widget recognition algorithms, which helps with test case playback. GUI interactions can be recorded as scripts using non-proprietary languages such as Python or JavaScript, making it easier to modify or update test cases. As part of our research we employed the Squish tool for recording consistently replayable user interaction scenarios, which are described in detail within the evaluation section. From the related work surveyed, we found the Squish implementation to be the closest one to JETracer. The Squish tool consists of a server component that is contained within the IDE and a hook component deployed within the AUT <cit.>. The Java implementation of Squish uses a Java agent and employs a similar architecture to our own framework, which is detailed within the following section. In contrast to commercial implementations encumbered by restrictive and pricey licensing agreements, we designed JETracer as an open framework to which many tools can be connected without significant programming effort. Furthermore, JETracer facilitates code reuse and a modular approach so that interested parties can add modules to support other GUI toolkits.

The last, but most important body of research addressing the issue of GUI testing has resulted in the GUITAR framework <cit.>, which includes the GUIRipper <cit.> and MobiGUITAR <cit.> components able to reverse engineer desktop and mobile device GUIs, respectively. Once the GUI model is available, valid event sequences can be modelled using an event-flow graph or event-dependency graph <cit.>. Information about valid event sequences allows for automated test case generation and execution, which are also provided in GUITAR <cit.>. The importance of the GUITAR framework for our research is underlined by its positive evaluation in an industrial context <cit.>. While GUITAR and JETracer are not integrated, JETracer's creation was partially inspired by limitations within GUITAR caused by its implementation as a black-box toolset.
One of our future avenues of research consists in integrating the JETracer framework into GUITAR and using the available event information to further guide test generation and execution in a white-box process.

§ THE JETRACER FRAMEWORK

JETracer is provided under the Eclipse Public License and is freely available for download from our website <cit.>. The implementation was tested under Windows and Ubuntu Linux, using versions 6, 7 and 8 of both Oracle and OpenJDK Java. JETracer consists of two main modules: the Host Module and the Agent Module. The agent module must be deployed within the target application's classpath. The agent's role is to record fired events as they occur and transmit them to the host via network socket, while the host manages the network connection and transmits received events to subscribed handlers. JETracer's deployment architecture within a target application is shown in Figure <ref>.

To deploy the framework, the Agent Module must be added to the target application's classpath, while the Host Module must exist on the classpath of the application interested in receiving event information, which is illustrated in Figure <ref> as the Observer Application. Since communication between the modules happens via network socket, the target application and the host module need not be on the same physical machine. The framework can be extended to provide event tracing for other GUI toolkits. Interested parties must develop a new library to intercept events within the targeted GUI toolkit and transform them into the JETracer common model. Within this model, each event is represented by an instance of the EventMessage class. As it is this common model instance that is transmitted over the network to the host, the Host Module implementation does not depend on the agent, which allows reusing the host for all agents.

§.§ The Host Module

The role of the host module is to transparently manage the connection with the deployed agent, to configure the agent and to receive and forward fired events. The code sketch shown after the field list below illustrates how to initialize the host module within an application in order to establish a connection with an agent. The kind, type and granularity of recorded events can be filtered using an instance of the InstrumentConfig class, as detailed within the section dedicated to the agent module. The object is passed as the single parameter of the configure(config) method in that sketch. In order to receive fired events, at least one EventMessageListener must be created and registered with the host, as also shown in the sketch. The notification mechanism is implemented according to Java best practices, with the EventMessageListener interface having just one method. The EventMessageReceivedEvent instance received through it wraps one EventMessage object which describes a single GUI event using the following information:

* Id: Unique for each event.
* Class: Class of the originating GUI component (e.g. javax.swing.JButton).
* Index: Index in the parent container of the originating GUI component.
* X, Y, Width, Height: Location and size of the GUI component which fired the event.
* Screenshot: An image of the target application's active window at the time the event was fired.
* Type: The type of the fired event (e.g. java.awt.event.ActionEvent).
* Timers: The values for a number of high-precision timers for measurement purposes.
* Listeners: The list of event handlers registered within the GUI component that fired the event.
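The host-side code snippets referenced above are missing from this version of the text. The following is a best-effort reconstruction based solely on the names appearing in the surrounding prose (InstrumentConfig, configure(config), EventMessageListener, EventMessageReceivedEvent and EventMessage); the JETracerHost class name, the start and addEventMessageListener methods and the listener method name are hypothetical placeholders rather than the framework's verified API.

    // Hedged reconstruction; names marked as hypothetical are guesses,
    // not JETracer's verified API.
    public class ObserverApplication {

        public static void main(String[] args) throws Exception {
            JETracerHost host = new JETracerHost(); // hypothetical class name
            host.start(5555);                       // hypothetical: await the agent connection

            // Filter the kind, type and granularity of recorded events.
            InstrumentConfig config = new InstrumentConfig();
            host.configure(config);

            // Register at least one listener to receive fired events.
            host.addEventMessageListener(event -> { // hypothetical registration method
                EventMessage message = event.getEventMessage(); // assumed accessor
                System.out.println(message);
            });
        }
    }

The single-method notification interface described above would then have roughly the following shape (the method name is again a guess):

    public interface EventMessageListener {
        // Called by the host whenever the agent reports a fired GUI event.
        void eventMessageReceived(EventMessageReceivedEvent event);
    }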
We believe that the data gathered by JETracer opens the possibility of undertaking a wide variety of analyses. Recording screenshots together with component location and size allows actioned widgets to be identified visually. Likewise, recording each component's index within its parent enables components to be identified programmatically, which can help in creating replayable test cases <cit.>. Knowledge about each component's event listeners gathered at runtime has important implications for program comprehension as well as white-box testing, by showing how GUI components and the underlying code are linked <cit.>.

§.§ The Agent Module

The role of the agent module is to record fired events as they happen, gather event information and transmit it to the host. The existing agents do most of the work during class loading, when they instrument several classes from the Java platform using Javassist. The actual methods that are instrumented are toolkit-specific, but a common approach is employed. In the first phase, we studied the publicly available source code of the Java platform and identified the methods responsible for firing events. Calls to our event-recording and transmission code were then inserted at the beginning and end of each such method. The code that is instrumented belongs to the Java platform itself, which enables deploying JETracer for all applications that use or extend standard GUI controls.

As GUI toolkits generate a high number of events, excluding uninteresting events from tracing becomes important in order to avoid impacting target application performance. This is achieved in JETracer by applying the following filters:

Event granularity: Provides the possibility of recording either all GUI events or only those that have application-defined handlers triggered by the event. This filter allows tracing only those events that cause application code to run.

Event filter: Used to ignore certain event types. For example, the AWT/Swing agent records both low- and high-level events. Therefore, a key press is recorded as three consecutive events: a KeyEvent.KEY_PRESSED, followed by a KeyEvent.KEY_RELEASED and a KeyEvent.KEY_TYPED. If undesirable, this can be avoided by filtering out the unwanted events. In our empirical evaluation, we observed that ignoring certain classes of events, such as mouse movement and repaint events, clears the recorded trace of many superfluous entries and increases target application performance.

Due to differences between GUI toolkits, the AWT/Swing and SWT agents have distinct implementations. As such, our website <cit.> holds two agent versions: one that works for AWT/Swing applications and one that works for SWT. A common part exists for maintaining the communication with the host and providing support for code instrumentation, which, we hope, will enable interested contributors to extend JETracer for other GUI toolkits. The sections below detail the particularities of each agent implementation. The complete list of events traceable for each of the toolkits is available on the JETracer project website <cit.>.

§.§.§ The AWT/Swing Agent

Due to the interplay between AWT and Swing, we were able to develop one agent module that is capable of tracing events fired within both toolkits. AWT and Swing are written in pure Java and are included with the default platform distribution, so in order to record events we instrumented a number of classes within the java.awt.* and javax.swing.* packages.
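To make the instrumentation step concrete, the following is a minimal sketch of how a java.lang.instrument agent can use Javassist to wrap an event-firing platform method with recording calls at its beginning and end. The target method (javax.swing.AbstractButton.fireActionPerformed) is only one representative example, the Recorder helper is an illustrative stand-in for the agent's recording and transmission code, and the boot-classpath handling required for core classes is omitted for brevity; this is a sketch of the general technique, not JETracer's actual agent code.

    import java.lang.instrument.ClassFileTransformer;
    import java.lang.instrument.Instrumentation;
    import java.security.ProtectionDomain;
    import javassist.ClassPool;
    import javassist.CtClass;
    import javassist.CtMethod;

    public class TracingAgent {

        public static void premain(String agentArgs, Instrumentation inst) {
            inst.addTransformer(new ClassFileTransformer() {
                @Override
                public byte[] transform(ClassLoader loader, String className,
                                        Class<?> classBeingRedefined,
                                        ProtectionDomain pd, byte[] classfileBuffer) {
                    if (!"javax/swing/AbstractButton".equals(className)) {
                        return null; // leave all other classes untouched
                    }
                    try {
                        ClassPool pool = ClassPool.getDefault();
                        CtClass cc = pool.get("javax.swing.AbstractButton");
                        CtMethod m = cc.getDeclaredMethod("fireActionPerformed");
                        // Insert recording calls at the beginning and end of the
                        // method; $0 is the firing component, $1 the event object.
                        m.insertBefore("Recorder.eventFired($0, $1);");
                        m.insertAfter("Recorder.eventDispatched($0, $1);");
                        byte[] bytecode = cc.toBytecode();
                        cc.detach();
                        return bytecode;
                    } catch (Exception e) {
                        return null; // on failure, keep the original bytecode
                    }
                }
            });
        }
    }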
Instrumenting these classes proved to be a laborious undertaking due to the way event dispatch works in these toolkits, as many components have their own code for firing events as well as for maintaining lists of registered event handlers. It also required extensive testing to ensure that all event types are recorded correctly.

§.§.§ The SWT Agent

In contrast to AWT and Swing, the SWT toolkit is available within a separate library that provides a bridge between Java and the underlying system's native windowing toolkit. As such, there exist different SWT libraries for each platform as well as architecture. At the time of writing, JETracer was tested with versions 4.0 - 4.4 of SWT under both the Windows and Ubuntu Linux operating systems. In order to trace events, we have instrumented the org.eclipse.swt.widgets.EventTable class, which handles firing events within the toolkit <cit.>.

§ EMPIRICAL EVALUATION

The present section details our evaluation of the JETracer framework. Our goal is to evaluate the feasibility of deploying JETracer within complex applications and to examine the performance impact its various settings have on the target application. GUI applications are event-driven systems that employ callbacks into user code to implement most functionalities. While GUI toolkits provide a set of graphical widgets together with associated events, applications typically use only a subset of them. Furthermore, applications are free to (un)register event handlers and to update them during program execution. This variability is one of the main issues making GUI application comprehension, visualization and testing difficult. As JETracer captures events fired within the application under test, its performance is heavily influenced by factors outside our control. These include the number and types of events fired within the application, the number of registered event handlers, as well as the network performance between the agent and host components. In order to limit external threats to the validity of our evaluation, both modules were hosted on the same physical machine, a modern quad-core desktop computer running Oracle Java 7 and the Windows operating system. We found our results repeatable using different versions of Java on both Windows and Ubuntu Linux.

§.§ Target Applications

Our selection of target applications was guided by a number of criteria. First, we wanted applications that would enable covering most, if not all, GUI controls and events present within AWT/Swing and SWT. Second, we searched for complex, popular open-source applications that are in active development. Last but not least, we limited our search to applications that were easy to set up and which enabled the creation of consistently replayable interactions.

The selected applications are the aTunes media player, the Azureus torrent client, the FreeMind mind-mapping software, the jEdit text editor and the TuxGuitar tablature editor. We used recent versions of each application except Azureus, for which an older version was selected due to the inclusion of proprietary code in recent releases. aTunes, FreeMind and jEdit employ AWT and Swing, while Azureus and TuxGuitar use the SWT toolkit. These applications have complex user interfaces that include several windows and many control types, some of which are custom created. Several of them have already been used as target applications in evaluating research results.
Previous versions of both FreeMind and jEdit were employed in research targeting GUI testing <cit.>, while TuxGuitar was used in researching new approaches to teaching software testing at graduate level <cit.>.

§.§ The Evaluation Process

The evaluation process consisted of first recording and then replaying a user interaction scenario for each of the applications using different settings for JETracer. These scenarios were created to replicate the actions of a live user during an imagined session of using each application, and they cover most control types within each target application. Table <ref> illustrates the total number of events, as well as the number of handled events, that were generated when running the scenarios. Differences between the numbers of generated events are explained by the fact that the interaction scenarios were created to be of approximately equal length from a user's perspective; the actual number of fired events is specific to each application.

An important issue that affects reliable replay of user interaction sequences is flakiness, or unexpected variations in the generated event sequence due to small changes in application or system state <cit.>. For instance, the location of the mouse cursor when starting the target application is important for applications that fire mouse-related events. In order to control flakiness, user scenarios were created to leave the application in the same state in which it was when started. Furthermore, we employed the commercially available Squish capture and replay solution for recording and replaying scenarios. Each application run resulted in information regarding the event trace captured by JETracer as well as per-event overhead data. We compared this event trace with the scripted interaction scenarios in order to ensure that our framework captures all generated events in the correct order. All the artefacts required for replicating our experiments, as well as our results in raw form, are available on our website <cit.>.

§.§ Performance Benchmark

The purpose of this section is to present our initial data concerning the overhead incurred when using JETracer with various settings. The most important factors affecting performance are the number of traced events and the overhead incurred for each event. Our implementation targets achieving constant overhead in order to ensure predictable target application performance.

Each usage scenario was repeated four times in order to assess the impact of the two settings that we observed to affect performance: event granularity and screenshot recording. As GUI toolkits generate events on each mouse move, keystroke and component repaint, tracing all events provides a worst-case baseline for event throughput. During our preliminary testing we found capturing screenshots to be orders of magnitude slower than recording other event parameters, so we also explored its impact on the performance of our selected applications during event tracing. As such, the four scenarios consist of tracing all GUI events versus only those having handlers installed by the application, each time with and without recording screenshots. Overhead was recorded via high-precision timers and only includes the time required for recording event parameters and sending the message to the host via network socket.
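To illustrate how such a per-event measurement can be taken, a minimal sketch using Java's high-resolution timer is shown below; the recordAndSend callable stands in for the agent's actual record-and-transmit path and is an assumption, not JETracer's real API.

    /** Illustrative sketch: times one pass through the record-and-transmit path. */
    final class OverheadProbe {
        /** Returns the per-event overhead in nanoseconds. */
        static long timeEvent(Runnable recordAndSend) {
            long start = System.nanoTime();   // high-precision timer
            recordAndSend.run();              // gather event parameters + socket write
            return System.nanoTime() - start;
        }
    }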
In order to account for variability within the environment, we eliminated outlier values from our data analysis. Table <ref> provides information regarding the average per-event overhead obtained with screenshot recording turned off. Our data shows that per-event overhead remains consistent at around 0.1ms within all applications, with a slightly higher value when tracing handled events. These higher values are explained by the additional information that is gathered for these events, as several reflection calls are required to record event handler information. Furthermore, the standard deviation was in most cases below 0.2ms, showing good performance consistency.

From a subjective perspective, applications instrumented to trace all events did not present any observable slowdown. Because FreeMind consistently generated the highest number of events, we use it for more detailed analysis. Our interaction scenario is around 6 minutes long when replayed by a user. When tracing all events without screenshot recording, the incurred overhead was 16.5 seconds. However, as FreeMind fires many GUI events while initializing the application, most of the overhead resulted in slower application startup, followed by consistent application performance. The behaviour of the other applications was consistent with this observation. When only handled events are traced, even application startup speed is indistinguishable from an uninstrumented start.

The more interesting situation is once screenshot recording is turned on. This has a noticeable impact on JETracer's performance due to the JNI interfacing required by the virtual machine to access OS resources. As screenshot recording overhead is dependent on the size of the application window, the main windows of all applications were resized to similar dimensions, taking care not to affect the quality of user interaction. Table <ref> details the results obtained with screenshot recording enabled.

Our first observation regards the variability in the observed overhead when tracing all events. This is due to the different initialization sequences of the applications. We found that both aTunes and FreeMind do a lot of work on startup; since their GUI is not yet visible at this point, screenshots are not recorded, which lowers the reported average value. These events must still be traced, however, as they are no different from events fired once the GUI is displayed. The situation is much more balanced once only handled events are traced, in which case we observe that all applications present similar overhead, between 20 and 30ms per event.

Subjectively, turning screenshot recording on resulted in moderate performance degradation when tracing handled events, as the applications became less responsive to user input. In the case of FreeMind, the overhead added another 55 seconds to our 6-minute interaction scenario, but the application remained usable. As expected, the worst performance degradation was observed when screenshots were recorded for all fired events. This made all 5 applications unresponsive to user input over periods of several seconds due to the large number of recorded screenshots. To keep our scripts replayable without error, they had to be adjusted by inserting additional wait commands between steps. In this worst case, the added overhead was 6.5 minutes for FreeMind and over 3 minutes for both jEdit and TuxGuitar. This performance hit can be alleviated by further filtering the events to be traced.
However, a complete evaluation of this is target-application-specific and out of our scope.

An important aspect regarding target application responsiveness is the consistency of the incurred overhead. As most GUI toolkits are single-threaded, inconsistent overhead leads to perceived application slowdown while the GUI is unresponsive. We examined this issue within our selected applications and report the results for FreeMind, as we found it to be representative of the rest of the applications. First of all, with screenshot recording disabled, all events were traced under 1ms, which did not affect application performance. As such, we investigated the issue of consistency once screenshot recording was enabled. Figure <ref> illustrates the overhead distribution when tracing handled events. Each column represents one standard deviation from the recorded average of 27.95ms. Overhead clumps into two columns: the leftmost column illustrates events for which screenshots could not be captured as the GUI was not yet visible, while most other events were very close to the mean.

One of our goals when evaluating JETracer was to compare its performance against other, similar toolkits. However, during our tool survey we were not able to identify similar open-source applications that would enable an objective comparison. Existing applications that incorporate similar functionalities, such as Squish, are closed-source, so a comparative evaluation was not possible.

§ CONCLUSIONS AND FUTURE WORK

We envision JETracer as a useful tool for both academia and industry. We aim for our future work to reflect this by extending JETracer to cover other toolkits such as JavaFX, as well as Java-based mobile platforms such as Android. Second, we plan to incorporate the knowledge gained within our initial evaluation in order to further reduce the framework's performance impact on target applications. We plan to reduce the screenshot capture overhead as well as to examine the possible benefits of implementing asynchronous event transmission between agent and host.

Our plans are to build on the foundation established by JETracer. We aim to develop innovative applications for program comprehension as well as software testing by using JETracer to provide more information about the application under test. We believe that integrating our framework with already established academic tooling such as the GUITAR framework will enable the creation of new testing methodologies. Furthermore, we aim to contribute to the field of program comprehension by developing software tooling capable of using event traces obtained via JETracer. Integration with existing tools such as EclEmma will allow for the creation of new tools to shift the paradigm from code coverage to event and event-interaction coverage <cit.> in the area of GUI-driven applications.
http://arxiv.org/abs/1702.08008v1
{ "authors": [ "Arthur-Jozsef Molnar" ], "categories": [ "cs.SE", "D.2.2" ], "primary_category": "cs.SE", "published": "20170226092855", "title": "JETracer - A Framework for Java GUI Event Tracing" }
Labani Mallick labani@iucaa.in (LM) Inter-University Center for Astronomy and Astrophysics, Ganeshkhind, Pune 411007, India
Inter-University Center for Astronomy and Astrophysics, Ganeshkhind, Pune 411007, India

We present the first results from a detailed analysis of a new, long (∼100 ks) XMM-Newton observation of the narrow-line Seyfert 1 galaxy PG 1404+226, which showed large-amplitude, rapid X-ray variability by a factor of ∼7 in ∼10 ks, with an exponential rise and a sharp fall in the count rate. We investigate the origin of the soft X-ray excess emission and the rapid X-ray variability in the source through time-resolved spectroscopy and fractional root-mean-squared (rms) spectral modeling. The strong soft X-ray excess below 1 keV observed both in the time-averaged and time-resolved spectra is described by the intrinsic disk Comptonization model as well as by the relativistic reflection model, in which the emission arises predominantly from the inner regions (r_in<1.7 r_g) of an ionized accretion disk. We detected no significant UV variability, while the soft X-ray excess flux varies together with the primary power-law emission (as F_primary∝ F_excess^1.54), although with a smaller amplitude, as expected in the reflection scenario. The observed X-ray fractional rms spectrum is approximately constant with a drop at ∼0.6 keV and is described by a non-variable emission line component with an observed energy of ∼0.6 keV and two variable spectral components: a more variable primary power-law emission and a less variable soft excess emission. Our results suggest the `lamppost geometry' for the primary X-ray emitting hot corona, which illuminates the innermost accretion disk due to strong gravity and gives rise to the soft X-ray excess emission.

§ INTRODUCTION

The narrow-line Seyfert 1 (NLS1) galaxies, a subclass of active galactic nuclei (AGNs), have been the centre of interest because of their extreme variability in the X-ray band <cit.>. The defining properties of this class of AGNs are: Balmer lines with the full width at half-maximum FWHM(H_β) <2000 km s^-1 <cit.>, strong permitted optical/UV Fe II emission lines <cit.> and weaker [OIII] emission, [OIII]λ5007/H_β≤3 <cit.>. The X-ray spectra of Seyfert galaxies show a power-law-like primary continuum which is thought to arise from thermal Comptonization of optical/UV seed photons in a corona of hot electrons surrounding the central supermassive black hole (e.g. ). The optical/UV seed photons are thought to arise from an accretion disk <cit.>. However, the interplay between the accretion disk and the hot corona is not well understood. Many type 1 AGNs also show strong `soft X-ray excess' emission over the power-law continuum below ∼2 keV in their X-ray spectra. The existence of this component (∼0.1-2 keV) was discovered around 30 years ago (e.g. ), and its origin is still controversial. Initially, it was considered to be the high-energy tail of the accretion disk emission (), but the temperature of the soft X-ray excess is in the range ∼0.1-0.2 keV, which is much higher than the maximum disk temperature expected in AGNs. It was then speculated that the soft X-ray excess could result from the Compton up-scattering of disk photons in an optically thick, warm plasma (e.g. ). Currently, there are two competing models for the origin of the soft X-ray excess: optically thick, low-temperature Comptonization <cit.> and relativistic reflection from an ionized accretion disk <cit.>.
However, these models sometimes give rise to spectral degeneracy because of the presence of multiple spectral components in the energy spectra of NLS1 galaxies <cit.>. One efficient approach to overcoming the spectral model degeneracy is to study the root-mean-squared (rms) spectrum, which links the energy spectrum with variability and has been successfully applied to a number of AGNs (MCG–6-30-15: , 1H 0707–495: , RX J1633.3+4719: , Ark 120: ). Observational evidence for the emission in different bands, such as the UV, soft and hard X-rays, during large variability events may help us to probe the connection between the disk, the hot corona and the soft X-ray excess emitting regions. In this paper, we investigate the origin of the soft X-ray excess emission, the rapid X-ray variability and the disk-corona connection in PG 1404+226 with the use of both model-dependent and model-independent techniques.

PG 1404+226 is an NLS1 galaxy at a redshift z=0.098 with FWHM(H_β) ∼800 km s^-1 <cit.>. Previously, the source was observed with several earlier X-ray missions (). From an earlier observation, the 2-10 keV spectrum was found to be quite flat (Γ=1.6±0.4) with flux F_2-10∼6.4×10^-12 erg cm^-2 s^-1 <cit.>. The detection of an absorption edge at ∼1 keV was claimed in previous studies and interpreted as a high-velocity (0.2-0.3 c) outflow of ionized oxygen <cit.>. The source is well known for its strong soft X-ray excess and large-amplitude X-ray variability on short timescales (). Here we explore the X-ray light curves, the time-averaged as well as time-resolved energy spectra, the fractional rms variability spectrum and the flux–flux plot through a new ∼100 ks XMM-Newton observation.

We describe the XMM-Newton observation and data reduction in Section <ref>. In Section <ref>, we present the analysis of the X-ray light curves and hardness ratio. In Sections <ref> and <ref>, we present time-averaged and time-resolved spectral analyses with the use of both phenomenological and physical models. In Sections <ref> and <ref>, we present the flux-flux analysis and the modeling of the X-ray fractional rms variability spectrum, respectively. Finally, we summarize and discuss our results in Section <ref>. Throughout the paper, the cosmological parameters H_0=70 km s^-1 Mpc^-1, Ω_m=0.27, Ω_Λ=0.73 are adopted.

§ OBSERVATION AND DATA REDUCTION

We observed PG 1404+226 with the XMM-Newton telescope <cit.> on 25th January 2016 (Obs. ID 0763480101) for an exposure time of ∼100 ks. Here we analyze data from the European Photon Imaging Camera (EPIC-PN and MOS), Reflection Grating Spectrometer (RGS) and Optical Monitor (OM) on-board XMM-Newton. We processed the raw data with the Scientific Analysis System (SAS v.15.0.0) and the most recent (as of 2016 August 2) calibration files. The EPIC-PN and MOS detectors were operated in the large and small window modes, respectively, using the thin filter. We processed the EPIC-PN and MOS data using epproc and emproc, respectively, to produce the calibrated photon event files. We checked for photon pile-up using the task epatplot and found no pile-up in either the PN or MOS data. To filter the processed PN and MOS events, we included unflagged events with pattern≤4 and pattern≤12, respectively. We excluded the proton background flares by generating a GTI (Good Time Interval) file above 10 keV for the full field with RATE<3.1 cts s^-1, 1.3 cts s^-1 and 2.1 cts s^-1 for PN, MOS 1 and MOS 2, respectively, to obtain the maximum signal-to-noise ratio. This resulted in a filtered duration of ∼73 ks for both the cleaned EPIC-PN and MOS data.
We extracted the PN and MOS source events from circular regions of radii 35 arcsec and 25 arcsec, respectively, centered on the source, while the background events were extracted from a nearby source-free circular region with a radius of 50 arcsec for both the PN and MOS data. We produced the Redistribution Matrix File (rmf) and Ancillary Region File (arf) with the tasks rmfgen and arfgen, respectively. We extracted the deadtime-corrected source and background light curves for different energy bands and bin times from the cleaned PN and MOS event files using the task epiclccorr. We combined the background-subtracted EPIC-PN, MOS 1 and MOS 2 light curves with the FTOOLS <cit.> task lcmath. The source count rate was considerably low above 8 keV, and therefore we considered only the 0.3-8 keV band for both the spectral and timing analyses. For spectral analysis, we used only the EPIC-PN data due to their higher signal-to-noise ratio compared to the MOS data. We grouped the average PN spectrum using the HEASOFT v.6.19 task grppha to have a minimum of 50 counts per energy bin. The net count rate estimated for EPIC-PN is (0.32±0.03) cts s^-1, resulting in a total of 1.65×10^4 PN counts. Figure <ref> shows the 0.3-8 keV EPIC-PN background-subtracted source (in red circle) and background (in black square) spectra of PG 1404+226.

We processed the RGS data with the SAS task rgsproc. The response files were generated using the task rgsrmfgen. We combined the spectra and response files for the two detectors (RGS 1 and RGS 2) using the task rgscombine. Finally, we grouped the RGS spectral data using the grppha tool with a minimum of 50 counts per bin. This ensures the applicability of the χ^2 statistics.

The Optical Monitor (OM) was operated in the imaging-fast mode using only the UVW1 (λ_eff∼2910 Å) filter for a total duration of 94 ks. There is a total of 20 UVW1 exposures, and we found that only the last 14 exposures were acquired simultaneously with the filtered EPIC-PN data. We did not use the fast-mode OM data due to the presence of a variable background. We processed only the imaging-mode OM data with the SAS task omichain and obtained the background-subtracted count rate of the source, corrected for coincidence losses.

§ TIMING ANALYSIS: LIGHT CURVES AND HARDNESS RATIO

We perform the timing analysis of PG 1404+226 to investigate the time and energy dependence of the variability. Figure <ref> shows the 0.3-8 keV, background-subtracted, deadtime-corrected EPIC-PN, MOS 1 and MOS 2 light curves of PG 1404+226 with time bins of 500 s. The X-ray time series clearly shows a short-term, large-amplitude variability event in which PG 1404+226 varied by a factor of ∼7 in ∼10 ks during the 2016 observation. The fractional rms variability amplitude estimated in the 0.3-8 keV band is F_var,X=82.5±1.4%. The uncertainty on F_var was calculated in accordance with . Based on the variability pattern, we divided the entire ∼73 ks light curve into five intervals. Int 1 consists of the first 38 ks of the time series and has the lowest flux, with moderate fractional rms variability of F_var,Int1=11.6±2.8%. In Int 2, the X-ray flux increases exponentially by a factor of ∼3, with fractional rms variability of F_var,Int2=38.3±2.5%. The duration of Int 2 is ∼10 ks. During Int 3, the source was in the highest flux state, with a fractional rms amplitude of F_var,Int3=9.1±5.1%. The source was in the brightest state only for ∼6 ks, after which the count rate started decreasing. In Int 4, the source flux dropped by a factor of ∼3 in ∼6 ks, with F_var,Int4=31.3±3.6%.
Towards the end of the observation, the source was moderately variable, with F_var,Int5=8.5±5.9%. In Figure <ref>, we show the UVW1 light curve of PG 1404+226, simultaneous with the X-ray light curve. The amplitude of the observed UV variability is only ∼3% of the mean count rate on timescales of ∼62 ks. The fractional rms variability amplitude in the UVW1 band is F_var,UV∼1%, which is much smaller than the X-ray variability. The X-ray and UV variability patterns appear significantly different, suggesting the lack of any correlation between the X-ray and UV emission at zero time-lag.

The upper and middle panels of Figure <ref> show the background-subtracted, deadtime-corrected, combined EPIC-PN+MOS soft (0.3-1 keV) and hard (1-8 keV) X-ray light curves, respectively, with time bins of 2 ks. The soft band is observed to be brighter than the hard band; however, the variability pattern and amplitude (F_var,soft=88.2±1.3% and F_var,hard=88.7±5.4%) in these two bands are found to be comparable during the observation. The peak-to-trough ratio of the variability amplitude in both the soft and hard bands is of the order of ∼12. In the bottom panel of Fig. <ref>, we show the hardness ratio as a function of time. A constant model fitted to the hardness ratio curve provided a statistically poor fit (χ^2/d.o.f=46/29), implying the presence of moderate spectral variability; the source became harder at the beginning of the large-amplitude variability event.

§ SPECTRAL ANALYSIS

We perform the spectral analysis of PG 1404+226 using XSPEC v.12.8.2 <cit.>. We employ the χ^2 statistics and quote the errors at the 90% confidence limit for a single parameter, corresponding to Δχ^2=2.71, unless otherwise specified.

§.§ Phenomenological Model

§.§.§ The 0.3-8 keV EPIC-PN Spectrum

We begin our spectral analysis by fitting the 1-8 keV EPIC-PN spectrum using a continuum model (zpowerlw) multiplied by the Galactic absorption model (TBabs), using the cross-sections and solar ISM abundances of <cit.>. We fixed the Galactic column density at N_H=2.22×10^20 cm^-2 <cit.> after accounting for the effect of molecular hydrogen. This model provided χ^2=70 for 50 degrees of freedom (d.o.f) with Γ∼1.81 and can be considered a good baseline model to describe the hard X-ray emission from the source. Then we extrapolated our 1-8 keV absorbed power-law model (TBabs×zpowerlw) down to 0.3 keV. This extrapolation reveals the presence of strong soft X-ray excess emission below 1 keV, with χ^2/d.o.f = 11741/160. We show the ratio of the observed EPIC-PN data to the absorbed power-law model in Figure <ref> (top). The fitting of the full-band (0.3-8 keV) data with the absorbed power-law model (TBabs×zpowerlw) resulted in a poor fit, with χ^2/d.o.f = 1151.7/158. The residual plot demonstrates a sharp dip in the 0.8-1 keV band and an excess emission below 1 keV. Initially, we modeled the soft X-ray excess emission using a simple blackbody model (zbbody). The addition of the zbbody model improved the fit statistics to χ^2/d.o.f = 230.2/156 (Δχ^2=-921.5 for 2 d.o.f). In XSPEC, the model reads as TBabs×(zbbody+zpowerlw). We show the deviations of the observed EPIC-PN data from the absorbed blackbody plus power-law model in Fig. <ref> (middle). The estimated blackbody temperature kT_BB∼100 eV is consistent with the temperature of the soft X-ray excess emission observed in Seyfert 1 galaxies and QSOs <cit.>. To model the absorption feature, we created a warm absorber (WA) model for PG 1404+226 in XSTAR v.2.2.1 (last revised in July 2015).
The XSTAR photoionized absorption model has three free parameters: column density (N_H), redshift (z) and ionization parameter (logξ, where ξ=L/nr^2; L is the source luminosity, n is the hydrogen density and r is the distance between the source and the cloud). The inclusion of the warm absorber (WA) significantly improved the fit statistics from χ^2/d.o.f = 230.2/156 to 173.7/154 (Δχ^2=-56.5 for 2 d.o.f). To test for the presence of any outflow, we varied the redshift of the absorbing cloud, which did not improve the fit statistics. We show the deviations of the observed EPIC-PN data from the model TBabs×WA×(zbbody+zpowerlw) in Fig. <ref> (bottom). We notice significant positive residuals at ∼0.6 keV, which may be the signature of an emission feature. To model this emission feature, we added a Gaussian emission line (GL), which improved the fit statistics to χ^2/d.o.f = 154.6/152 (Δχ^2=-19.1 for 2 d.o.f). The centroid energies of the emission line in the observed and rest frames are ∼0.6 keV and ∼0.66 keV, respectively. The rest-frame 0.66 keV emission feature most likely represents the O VIII Lyman-α line. The EPIC-PN spectral data, the best-fit model TBabs×WA×(GL+zbbody+zpowerlw) and the deviations of the observed data from the best-fit model are shown in Figure <ref> (left). The best-fit values of the warm absorber column density and ionization parameter are N_H=5.2^+2.9_-2.0×10^22 cm^-2 and log(ξ/erg cm s^-1)=2.8^+0.1_-0.2, respectively.

To search for spectral variability on shorter timescales, we performed time-resolved spectroscopy. First, we generated 5 EPIC-PN spectra from the five intervals defined in Section <ref>. We grouped each spectrum so that we had a minimum of 30 counts per energy bin. The source was hardly detected above 3 keV for the lowest flux state, corresponding to Int 1. Hence we considered only the 0.3-3 keV energy band for the spectral modelling of Int 1. Then we applied our best-fit mean spectral model to all 5 EPIC-PN spectra. We tied all the parameters except the normalizations of the power-law and blackbody components, which we allowed to vary independently. This resulted in χ^2/d.o.f = 464/399, without any strong residuals. Allowing the blackbody temperature and the photon index of the power-law to vary did not significantly improve the fit (χ^2/d.o.f = 446.6/389; Δχ^2=-17.4 for 10 additional free parameters). The 5 EPIC-PN spectral data sets, the best-fit model and residuals are shown in Fig. <ref> (right). We list the best-fit spectral model parameters for both the time-averaged and time-resolved spectra in Table <ref>.

§.§.§ The 0.38-1.8 keV RGS Spectrum

To confirm the presence of the warm absorption or emission features, we performed a detailed spectral analysis of the high-resolution RGS data. Initially, we used a continuum model similar to that obtained from the EPIC-PN data, i.e. the sum of a power-law and a blackbody. To account for cross-calibration uncertainties, we multiplied the model by a constant component. All the parameter values were fixed to the best-fit EPIC-PN values, since the RGS data (0.38-1.8 keV) alone cannot constrain them. In XSPEC, the model reads as constant×TBabs×(zbbody+zpowerlw). This model provided a poor fit, with χ^2=64 for 46 d.o.f. The RGS spectral data, the fitted continuum model constant×TBabs×(zbbody+zpowerlw) and the deviations of the observed data from the model are shown in Figure <ref> (left). The residual plot shows an absorption feature at ∼0.9-1.1 keV and two emission features at ∼0.6 keV and ∼0.7 keV in the observer's frame.
We added two narrow Gaussian emission lines to model these two emission features and a warm absorber (WA) model to fit the absorption feature, which improved the fit statistics by Δχ^2=-30 for 6 d.o.f, with χ^2/d.o.f = 34/40. Allowing the redshift of the WA model to vary did not yield any significant improvement in the fit statistics. The rest-frame energies of the emission lines are 0.65^+0.01_-0.01 keV and 0.78^+0.01_-0.01 keV, which can be attributed to O VIII Lyman-α and Lyman-β, respectively. The best-fit values for the derived WA parameters are N_H=1.6^+2.1_-1.1×10^23 cm^-2 and log(ξ/erg cm s^-1)=2.4^+1.2_-0.3. The RGS spectrum, the best-fit model constant×TBabs×WA×(GL1+GL2+zbbody+zpowerlw) and the deviations of the observed data from the best-fit model are shown in Fig. <ref> (right).

§.§ Physical Model

To examine the origin of the soft X-ray excess emission, we have tested two different physical models: thermal Comptonization in an optically thick, warm medium and relativistic reflection from an ionized accretion disk. First, we used the intrinsic disk Comptonization model (optxagnf), which assumes that the gravitational energy released in the disk is radiated as blackbody emission down to the coronal radius, R_corona. Inside the coronal radius, the gravitational energy is dissipated to produce the soft X-ray excess component in an optically thick, warm (kT_SE∼0.2 keV) corona and the hard X-ray power-law tail in an optically thin, hot (kT_e∼100 keV) corona above the disk. Thus, this model represents an energetically self-consistent model. The four parameters which determine the normalization of the model are the following: black hole mass (M_BH), dimensionless spin parameter (a), Eddington ratio (L/L_E) and proper distance (d). We fitted the 0.3-8 keV EPIC-PN time-averaged spectrum with the optxagnf model modified by the Galactic absorption (TBabs). We fixed the black hole mass, outer disk radius and proper distance at 4.5×10^6 M_⊙ <cit.>, 1000R_g and 416 Mpc, respectively. We assumed a maximally rotating black hole, as concluded by <cit.>, and fixed the spin parameter at a=0.998. This model resulted in a statistically unacceptable fit, with χ^2/d.o.f = 234.9/154, a sharp dip at ∼0.9 keV and an emission feature at ∼0.6 keV in the residual spectrum. As before, we used the warm absorber (WA) model, which significantly improved the fit statistics to χ^2/d.o.f = 175/152 (Δχ^2=-59.9 for 2 d.o.f). The addition of the Gaussian emission line (GL) provided a statistically acceptable fit, with χ^2/d.o.f = 154.9/150 (Δχ^2=-20.1 for 2 d.o.f). The EPIC-PN mean spectrum, the best-fit absorbed disk Comptonization model TBabs×WA×(GL+optxagnf) and the residuals are shown in Figure <ref> (left). The best-fit values for the Eddington ratio, coronal radius, electron temperature, optical depth and spectral index are L/L_E=0.07^+0.02_-0.01, R_corona=100.0^+0p_-95.0R_g, kT_SE=104.5^+5.4_-2.2 eV, τ=100.0^+0p_-47.0 and Γ=1.65^+0.14_-0.14, respectively. Then we jointly fitted the five time-resolved spectral data sets with the absorbed disk Comptonization model and kept all the parameters tied to their mean spectral best-fit values except the Eddington ratio. This provided χ^2/d.o.f = 479.5/404, and we did not notice any strong features in the residual spectra. The 5 EPIC-PN spectral data sets, the best-fit disk Comptonization model and residuals are shown in Fig. <ref> (right).
The best-fit spectral model parameters for both the time-averaged and time-resolved spectra are listed in Table <ref>.

The soft X-ray excess emission may also arise due to relativistic reflection from an ionized accretion disk <cit.>. Hence we modeled the soft X-ray excess using the reflection model (reflionx) convolved with the relconv model <cit.>, which blurs the spectrum due to general relativistic effects close to the SMBH. We fitted the 0.3-8 keV EPIC-PN mean spectrum with the thermally Comptonized primary continuum (nthcomp) and the relativistic reflection model (relconv∗reflionx), after correcting for the Galactic absorption (TBabs). The electron temperature of the hot plasma and the disk blackbody seed photon temperature in the nthcomp model were fixed at 100 keV and 50 eV, respectively. The parameters of the reflionx model are the iron abundance (A_Fe), ionization parameter (ξ_disk=4π F/n, where F is the total illuminating flux and n is the hydrogen density), normalization (A_REF) of the reflected spectrum and photon index (Γ) of the incident power-law. The convolution model relconv has five free parameters: emissivity index (q, where the emissivity of the reflected emission is defined by ϵ∝ R^-q), inner disk radius (R_in), outer disk radius (R_out), black hole spin (a) and disk inclination angle (i^∘). We fixed the outer disk radius at R_out=1000r_g. In XSPEC, the 0.3-8 keV model reads as TBabs×(relconv∗reflionx+nthcomp), which provided a reasonably good fit with χ^2/d.o.f = 180.8/151. However, the residual spectrum shows an absorption dip at ∼0.9 keV and an excess emission at ∼0.6 keV. As before, we fitted the absorption dip with the ionized absorber (WA). The multiplication by the warm absorber model improved the fit statistics to χ^2/d.o.f = 165.1/149 (Δχ^2=-15.7 for 2 d.o.f). To model the emission feature at ∼0.6 keV, we added a Gaussian emission line (GL), which provided an improvement in the fit statistics with χ^2/d.o.f = 157.1/147 (Δχ^2=-8 for 2 d.o.f). The EPIC-PN mean spectrum, the best-fit absorbed relativistic reflection model TBabs×WA×(GL+relconv∗reflionx+nthcomp) and the residuals are shown in Figure <ref> (left). The best-fit values for the emissivity index, inner disk radius, disk ionization parameter, black hole spin, disk inclination angle and spectral index of the incident continuum are q=9.9^+0.1p_-3.8, R_in=1.27^+0.46_-0.03R_g, ξ=199^+29_-75 erg cm s^-1, a=0.998^+p_-0.006, i^∘=56.8^+1.8_-12.9 and Γ=2.1^+0.1_-0.1, respectively. We also fitted the five time-resolved spectra jointly with the absorbed relativistic reflection model and tied every parameter to its mean spectral best-fit value except the normalization (A_REF) of the reflection component. This provided an unacceptable fit, with χ^2/d.o.f = 488.4/404. We then allowed the normalization (A_NTH) of the illuminating continuum to vary between the five spectra and obtained a noticeable improvement in the fit, with χ^2/d.o.f = 465.2/399 (Δχ^2=-23.2 for 5 d.o.f). Allowing the spectral index (Γ) of the incident continuum to vary did not yield any further significant improvement. We summarize the best-fit spectral model parameters for both the time-averaged and time-resolved spectra in Table <ref>. The EPIC-PN spectral data sets, the best-fit absorbed relativistic reflection model and residuals are shown in Fig. <ref> (right).
§ FLUX-FLUX ANALYSIS

We perform the flux-flux analysis, which is a model-independent approach to distinguishing between the main components responsible for the observed spectral variability, pioneered by <cit.> and <cit.>. Based on our X-ray spectral modeling, we identified the 0.3-1 keV and 1-8 keV energy bands as representatives of the soft X-ray excess and primary power-law emission, respectively. Then, we constructed the 0.3-1 keV vs 1-8 keV flux-flux plot, which is shown in Figure <ref> (left). The mean count rates in the soft and hard bands are 0.52±0.04 counts s^-1 and 0.08±0.02 counts s^-1, respectively. We begin our analysis by fitting the flux-flux plot with a linear relation of the form y=ax+b, where y and x represent the 1-8 keV and 0.3-1 keV band count rates, respectively. The straight-line model provided a statistically unacceptable fit (χ^2/d.o.f = 59/32) and implied that the underlying relationship between the soft X-ray excess and the primary power-law emission is not linear. Therefore, we fit the flux-flux plot with a power-law plus constant (PLC) model of the form y=α x^β+c (where y≡1-8 keV and x≡0.3-1 keV count rates), following the approach of <cit.>. The PLC model improved the fit statistics to χ^2/d.o.f = 37.4/31 and explained the flux-flux plot quite well. We show the best-fit PLC model as the solid line in Fig. <ref> (left). The best-fit power-law normalization, slope and constant values are α=0.11^+0.01_-0.01 counts s^-1, β=1.54^+0.2_-0.2 and c=0.02^+0.006_-0.007 counts s^-1, respectively. The PLC best-fit slope is greater than unity, which indicates the presence of intrinsic variability in the source. The detection of the positive `c'-value in the flux-flux plot implies that there exists a distinct spectral component which is less variable than the primary X-ray continuum and contributes ∼25% of the 1-8 keV count rate at the mean flux level over the observed ∼20 hr timescale. To investigate this issue further, we computed the unabsorbed (without the Galactic and intrinsic absorption) primary continuum and soft X-ray excess fluxes in the full band (0.3-8 keV) for all five intervals using the XSPEC convolution model cflux, and plotted the intrinsic primary power-law flux as a function of the soft X-ray excess flux (the middle panel of Fig. <ref>). The best-fit normalization, slope and constant parameters, obtained by fitting the F_PL vs F_BB plot with a PLC model, are α_mod=0.26^+0.08_-0.07(×10^-12) erg cm^-2s^-1, β_mod=1.53^+0.43_-0.45 and c_mod=0.07^+0.03_-0.06(×10^-12) erg cm^-2s^-1, respectively. Interestingly, we found a steepening in the F_PL vs F_BB plot with an apparent positive constant, which is in agreement with the 0.3-1 keV vs 1-8 keV flux-flux plot. Our flux-flux analysis suggests that the primary power-law and soft X-ray excess emission are well correlated with each other, although they vary in a non-linear fashion on the observed timescales. We also investigated the variability relation between the UV and soft X-ray excess emission in PG 1404+226. Fig. <ref> (right) shows the variation of the soft X-ray excess flux as a function of the UVW1 flux, which indicates no significant correlation between the UV and soft X-ray excess emission from PG 1404+226.

§ FRACTIONAL RMS SPECTRAL MODELING

To estimate the percentage of variability in the primary power-law continuum and the soft X-ray excess emission, and also to quantify the variability relation between them, we derived and modeled the fractional rms variability spectrum of PG 1404+226.
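For reference, the fractional rms amplitudes quoted throughout are assumed here to follow the standard excess-variance estimator widely used in the X-ray timing literature (the method cited above): for a light curve with mean count rate x̄, sample variance S^2 and mean square measurement error ⟨σ_err^2⟩,

F_var = √((S^2-⟨σ_err^2⟩)/x̄^2),

so that F_var measures the intrinsic (noise-subtracted) variability as a fraction of the mean.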
First, we extracted the background-subtracted, deadtime-corrected light curves in 19 different energy bands from the simultaneous and equal length (72 ks) combined EPIC-PN+MOS data with a time resolution of Δ t=500 s. We have chosen the energy bands so that the minimum average count in each bin is around 20. Then we computed the frequency-averaged (ν∼[1.4-100]×10^-5 Hz) fractional rms, F_var, in each light curve using the method described in <cit.>. We show the derived fractional rms spectrum of PG 1404+226 in Figure <ref> (left). The shape of the spectrum is approximately constant with a sharp drop at around 0.6 keV, which can be explained in the framework of a non-variable emission line component at ∼0.6 keV and two variable spectral components: the soft X-ray excess and primary power-law emission, with the decreasing relative importance of the soft excess emission and increasing dominance of the primary power-law emission with energy. We constructed fractional rms spectral models using our best-fit phenomenological and physical mean spectral models in ISIS v.1.6.2-40 <cit.>. First, we explored the phenomenological fractional rms spectral model in which the observed 0.6 keV Gaussian emission line (GL) is non-variable, and both the soft X-ray excess (zbbody) and primary power-law emission (zpowerlw) are variable in normalization and correlated with each other. Using equation (3) of <cit.>, we obtained the expression for the fractional rms spectral model: F_var=√((Δ A_PL/A_PL f_PL)^2+(Δ A_BB/A_BB f_BB)^2+2γ(Δ A_PL/A_PL)(Δ A_BB/A_BB) f_PL f_BB)/(f_PL(A_PL,E)+f_BB(A_BB,E)+f_GL(E)) where Δ A_PL/A_PL and Δ A_BB/A_BB represent fractional changes in the normalization of the primary power-law, f_PL, and blackbody, f_BB, components respectively. γ measures the correlation or coupling between f_PL and f_BB. f_GL(E) represents the Gaussian emission line component with the observed energy of ∼0.6 keV. We then fitted the 0.3-8 keV fractional rms spectrum of PG 1404+226 using this `two-component phenomenological' model (equation <ref>) and the best-fit mean spectral model parameters as the input parameters for the above model. This model describes the data reasonably well with χ^2/d.o.f = 25/16. The best-fit rms model parameters are: Δ A_PL/A_PL=0.8±0.7, Δ A_BB/A_BB=0.78±0.03 and γ=0.68^+0.32_-0.43. We show the fractional rms variability spectrum and the best-fit `two-component phenomenological' model in Fig. <ref> (right). In PG 1404+226, the soft X-ray excess emission was modeled by two different physical models: intrinsic disk Comptonization and relativistic reflection from the ionized accretion disk. To break the degeneracy between these two possible physical scenarios, we constructed fractional rms spectral models based on our best-fit disk Comptonization and relativistic reflection models to the time-averaged spectrum. In the disk Comptonization scenario, the observed variability in PG 1404+226 was driven by variation in the source luminosity, as inferred from the joint fitting of 5 EPIC-PN spectra. Therefore, we can write the expression for the fractional rms as F_var=√(<(Δ f(L,E))^2>)/(f_optx(L,E)+f_GL(E)) where f_optx(L,E) and f_GL(E) represent the best-fit disk Comptonization (optxagnf) and 0.6 keV Gaussian emission line (GL) components, respectively. L is the source luminosity, which is the only variable free parameter in the model. The fitting of the 0.3-8 keV fractional rms spectrum using this model (equation <ref>) resulted in an enhanced variability in the hard band above ∼1 keV with χ^2/d.o.f =32/18. 
We show the fractional rms variability spectrum and the `one-component disk Comptonization' model in Figure <ref> (left). Then, we investigated the relativistic reflection scenario where the origin of the soft X-ray excess emission was explained with the disk irradiation <cit.>. In this scenario, the rapid X-ray variability in PG 1404+226 can be described by changes in the normalization of the illuminating power-law continuum and reflected inner disk emission, as evident from the time-resolved spectroscopy. Thus, we constructed the `two-component relativistic reflection' model where both the inner disk reflection (relconv∗reflionx) and illuminating continuum (nthcomp) are variable in normalization and perfectly correlated with each other. Mathematically, we can write the expression for the fractional rms as F_var=√(<(Δ f(A_NTH,A_REF,E))^2>)/f(A,E) where f(A_NTH,A_REF,E)=f_NTH(A_NTH,E)+f_REF(A_REF,E)+f_GL(E) Here f_NTH(A_NTH,E), f_REF(A_REF,E) and f_GL(E) represent the best-fit illuminating continuum (nthcomp), inner disk reflection (relconv∗reflionx) and the 0.6 keV Gaussian emission line (GL) components, respectively. The two variable free parameters of this model (equation <ref>) are A_NTH and A_REF. We then fitted the observed fractional rms spectrum using the `two-component relativistic reflection' model, which describes the data well with χ^2/d.o.f =25/17. We show the fractional rms variability spectrum and the best-fit model in Figure <ref> (right). The fractional variations in the normalization of the illuminating continuum and reflected emission are Δ A_NTH/A_NTH=0.83^+0.17_-0.20 and Δ A_REF/A_REF=0.78^+0.02_-0.03, respectively. § SUMMARY AND DISCUSSION We present the first results from our XMM-Newton observation of the NLS1 galaxy PG 1404+226. Here, we investigate the large-amplitude X-ray variability, the origin of the soft X-ray excess emission and its connection with the intrinsic power-law emission through a detailed analysis of the time-averaged as well as time-resolved X-ray spectra, and the frequency-averaged (ν∼[1.4-100]×10^-5 Hz) X-ray fractional rms spectrum. Below we summarize our results:* PG 1404+226 showed a short-term, large-amplitude variability event in which the X-ray (0.3-8 keV) count rate increased exponentially by a factor of ∼7 in about 10 ks and dropped sharply during the 2016 XMM-Newton observation. The hard X-ray (1-8 keV) Chandra/ACIS light curve also showed rapid variability (a factor of ∼2 in about 5 ks) with an exponential rise and a sharp fall in 2000 <cit.>. Such rapid X-ray variability had been observed in a few NLS1 galaxies (e.g. NGC 4051, 1H 0707–495, Mrk 335). However, the UV (λ_eff=2910Å) emission from PG 1404+226 is much less variable (F_var,UV∼1%) compared to the X-ray (0.3-8 keV) variability (F_var, X∼82%).* The source exhibited strong soft X-ray excess emission below ∼1 keV, which was fitted by both the intrinsic disk Comptonization and relativistic reflection models. The EPIC-PN spectral data revealed the presence of a highly ionized (ξ∼600 erg cm s^-1) Ne X Lyman-α absorbing cloud along the line-of-sight with a column density of N_H∼5×10^22 cm^-2 and a possible O VIII Lyman-α emission line. However, we did not detect the presence of any outflow as found by <cit.>. * The modelling of the RGS spectrum not only confirms the presence of the Ne X Lyman-α absorbing cloud and O VIII Lyman-α emission line but also reveals an O VIII Lyman-β emission line. 
* The time-resolved spectroscopy showed significant variability both in the soft X-ray excess and primary power-law flux, although there were no noticeable variations in the soft X-ray excess temperature (kT_SE∼100 eV) and photon index of the primary power-law continuum.* In the disk Comptonization scenario, the rapid X-ray variability can be attributed to a variation in the source luminosity, as indicated by the time-resolved spectroscopy. However, the modeling of the X-ray fractional rms spectrum using the `one-component disk Comptonization' model cannot reproduce the observed hard X-ray variability pattern and instead indicates a reflection origin for the soft X-ray excess emission (see Fig. <ref>, left).* In the relativistic reflection scenario, the observed large-amplitude X-ray variability was predominantly due to two components: the illuminating continuum and the smeared reflected emission, both of which are variable in normalization (see Fig. <ref>, right).* The inner disk radius and central black hole spin as estimated from the relativistic reflection model are r_in<1.7 r_g and a>0.992, respectively. <cit.> also showed that the disk reflection could successfully explain the broadband (0.3-8 keV) spectrum of PG 1404+226 with the radiation from the inner accretion disk around a Kerr black hole. The disk inclination angle estimated from the ionized reflection model is i^∘=56.8^+1.8_-12.9, which is in close agreement with that (i^∘=58^+7_-34) obtained by <cit.>. The non-detection of the 6.4 keV iron emission line could be due to its smearing into a broad shape in the spectrum. * We found that the soft (0.3-1 keV) and hard (1-8 keV) band count rates are correlated with each other and vary in a non-linear manner, as suggested by the steepening of the flux-flux plot. The fitting of the hard-vs-soft counts plot with a power-law plus constant (PLC) model reveals a significant positive offset at high energies, which can be interpreted as evidence for the presence of a less variable reflection component (probably a smeared iron emission line) in the hard band on timescales of ∼20 hr. §.§ UV/X-ray Variability and Origin of the Soft X-ray Excess Emission The observed UV variability in PG 1404+226 is weak with F_var∼1 per cent only, whereas the X-ray variability is much stronger (F_var∼82 per cent) on timescales of ∼73 ks. The UV and soft X-ray excess emission do not appear to be significantly correlated, as demonstrated in Fig. <ref> (Right). In the intrinsic disk Comptonization (optxagnf) model, the soft X-ray excess emission results from the Compton up-scattering of the UV seed photons by an optically thick, warm (kT_SE∼0.1-0.2 keV) electron plasma in the inner disk (below r_corona) itself. So, if the soft X-ray excess were the direct thermal emission from the inner accretion disk, then we would expect correlated UV/soft excess variability. However, we did not find any correlation between the UV flux and the X-ray spectral parameters. Furthermore, the modeling of the rms variability spectrum using the `one-component disk Comptonization' model could not describe the observed hard X-ray variability in PG 1404+226 (see Fig. <ref>, left). It might be possible that the UV and X-ray emitting regions interact on a timescale much longer than the duration of our observation. To explore that possibility, we calculated various timescales associated with the accretion disk. 
The light travel time between the central X-ray source and the standard accretion disk is given by the relation <cit.> t_cross=2.6×10^5(λ_eff/3000Å)^4/3(Ṁ/Ṁ_Edd)^1/3(M_BH/10^8M_⊙)^2/3 s, where Ṁ/Ṁ_Edd is the mass accretion rate scaled to the Eddington rate, M_BH is the central black hole mass in units of M_⊙ and λ_eff is the effective wavelength where the disk emission peaks. In the case of PG 1404+226, M_BH∼4.5×10^6M_⊙, Ṁ/Ṁ_Edd∼0.08 (as obtained from the optxagnf model as well as calculated from the unabsorbed flux in the energy band 0.001-100 keV using the convolution model cflux in XSPEC) and λ_eff=2910Å for the UVW1 filter. Therefore, the light crossing time between the X-ray source and the disk is ∼13.6 ks, which corresponds to the peak disk emission radius of ∼600r_g. If we consider a thin disk for which the height-to-radius ratio is h/r∼0.1 <cit.>, the viscous timescale at this emission radius (r∼600r_g) is of the order of ∼10 years, which is much longer than the time span of our XMM-Newton observation. Although both the soft and hard X-ray emission from PG 1404+226 are highly variable, the lack of any strong UV variability is in contradiction with the viscous propagation fluctuation scenario. In the relativistic reflection model, the soft X-ray excess is a consequence of disk irradiation by a hot, compact corona close to the black hole. We found a strong correlation between the soft and hard X-ray emission, which is expected in the reflection scenario. Additionally, the modeling of the fractional rms spectrum considering the `two-component relativistic reflection' model can reproduce the observed X-ray variability very well (see Fig. <ref>, Right). §.§ Origin of Rapid X-ray Variability PG 1404+226 shows strong X-ray variability with a fractional rms amplitude of F_var, X∼82% on timescales of ∼20 hr. We attempted to explain the observed rapid variability of PG 1404+226 in the framework of two possible physical scenarios: intrinsic disk Comptonization and relativistic reflection from the ionized accretion disk. In the disk Comptonization scenario, if the rapid X-ray variability is due to the variation in the source luminosity, which is favored by the time-resolved spectroscopy, then it slightly overpredicts the fractional variability in the hard band (see Fig. <ref>, left). Therefore, it is unlikely that the rapid X-ray variability is caused by variations in the source (warm plus hot coronae) luminosity only. On the other hand, the soft X-ray excess and primary continuum vary non-linearly (as F_primary∝ F_excess^1.54), which indicates that the soft X-ray excess responds to the primary continuum variations, albeit with a smaller amplitude. This is in agreement with the smeared reflection scenario, which is further supported by the high emissivity index (q∼9.9) and the non-detection of the iron line. Moreover, the fractional variability spectrum of PG 1404+226 is best described by two components: the illuminating continuum and the reflected emission, both of which are variable in flux (see Fig. <ref>, right). We interpret these rapid variations in the framework of the light bending model <cit.>, according to which the primary coronal emission is bent down onto the accretion disk due to strong gravity and forms reflection components including the soft X-ray excess emission. The nature of the rapid X-ray variability in PG 1404+226 prefers the `lamppost geometry' for the primary X-ray emitting hot corona. LM gratefully acknowledges support from the University Grants Commission (UGC), Government of India. 
The authors thank the anonymous referee for constructive suggestions in improving the quality of the paper. This research has made use of processed data of the XMM-Newton observatory through the High Energy Astrophysics Science Archive Research Center Online Service, provided by the NASA Goddard Space Flight Center. This research has made use of the NASA/IPAC Extragalactic Database (NED), which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the NASA. Figures in this manuscript were made with the graphics package pgplot and GUI scientific plotting package veusz. XMM-Newton (EPIC-PN, EPIC-MOS, RGS and OM) HEASOFT, SAS, XSPEC <cit.>, XSTAR <cit.>, FTOOLS <cit.>, ISIS <cit.>, Python, S-Lang, Veusz, PGPLOT [Arnaud et al.1985]ar85 Arnaud K. A. et al. 1985, MNRAS, 217, 105[Arnaud1996]ar96 Arnaud K. A., 1996, in Jacoby G. H., Barnes J., eds, ASP Conf. Ser. Vol. 101, Astronomical Data Analysis Software and Systems V. Astron. Soc. Pac., San Francisco, p. 17[Boroson & Green1992]bo92 Boroson T. A., Green R. F., 1992, ApJS, 80, 109[Blackburn1995]bl95 Blackburn J. K., 1995, in Shaw R. A., Payne H. E., Hayes J. J. E., eds, ASP Conf. Ser. Vol. 77, Astronomical Data Analysis Software and Systems IV. Astron. Soc. Pac., San Francisco, p. 367[Boller et al.1996]bo96 Boller T., Brandt W. N., Fink H., 1996, A&A, 305, 53[Churazov et al.2001]ch01 Churazov E., Gilfanov M., Revnivtsev M., 2001, MNRAS, 321, 759[Crummy et al.2005]cr05 Crummy J., Fabian A. C., Brandt W. N., Boller Th., 2005, MNRAS, 361, 1197[Crummy et al.2006]cr06 Crummy J., Fabian A. C., Gallo L., Ross R. R., 2006, MNRAS, 365, 1067[Czerny et al.2003]cz03 Czerny B., Nikołajuk M., Różańska A., Dumont A.-M., Loska Z., Zycki P. T., 2003, A&A, 412, 317[Czerny2006]cz06 Czerny B., 2006, ASPC, 360, 265[den Herder et al.2001]den01 den Herder J. W. et al., 2001, A&A, 365, L7[Dasgupta et al.2005]da05 Dasgupta S., Rao A. R., Dewangan G. C., Agrawal V. K., 2005, ApJ, 618, 87[Dewangan et al.2007]de07 Dewangan G. C., Griffiths R. E., Dasgupta S., Rao A. R., 2007, ApJ, 671, 1284 [Done et al.2012]done12 Done C., Davis S. W., Jin C., Blaes O., Ward M., 2012, MNRAS, 420, 1848[Dewangan et al.2015]de15 Dewangan G. C., Pawar P. K., Pal M., 2015, ASInC, 12, 57[Fabian et al.2002]fa02 Fabian A. C., Ballantyne D. R., Merloni A., Vaughan S., Iwasawa K., Boller Th., 2002, MNRAS, 331L, 35[Fabian & Vaughan2005]fa05 Fabian A. C., Vaughan S., 2005, MNRAS, 340, L28[Fabian et al.2012]fa12 Fabian A. C., Zoghbi A., Wilkins D., Dwelly T., Uttley P. et al., 2012, MNRAS, 419, 116[Garcia et al2014]ga14 García J., Dauser T., Lohfink A., Kallman T. R., et al., 2014, ApJ, 782, 76[Ghosh et al2016]gh16 Ghosh R., Dewangan, G. C., Raychaudhuri B., 2016, MNRAS, 456, 554[Gierliński & Done2004]gi04 Gierliński, M., Done, C., 2004, MNRAS, 349, 7[Gierliński & Done2006]gi06 Gierliński M., Done C., 2006, MNRAS, 371, L16[Goodrich1989]go89 Goodrich R. W., 1989, ApJ, 342, 224[Grupe et al.1999]gr99 Grupe D., Beuermann K., Mannheim K., Thomas H.-C., 1999, A&A, 350, 805[Haardt & Maraschi1991]ha91 Haardt F. & Maraschi L., 1991, ApJ, 380, 51[Haardt & Maraschi1993]ha93 Haardt F. & Maraschi L., 1993, ApJ, 413, 507[Houck & DeNicola2000]ho00 Houck J. C. & DeNicola L. A., 2000, in Manset N., Veillet C., Crabtree D., eds, ASP Conf. Ser., Vol. 216, Astronomical Data Analysis Software and Systems IX. Astron. Soc. Pac., San Francisco, p. 591[Jansen et al.2001]ja01 Jansen F. et al., 2001, A&A, 365, L1 [Janiuk et al.2001]jan01 Janiuk A., Czerny B., Madejski G. 
M., 2001, ApJ, 557, 408[Kaspi et al.2000]ka00 Kaspi S., Smith P. S., Netzer H., Maoz D., Jannuzi B. T., Giveon U., 2000, ApJ, 533, 631[Komossa & Meerschweinchen2000]ko00 Komossa S., Meerschweinchen J., 2000, A&A, 354, 411[Kallman & Bautista2001]kb01 Kallman T., & Bautista M. 2001, ApJS, 133, 221[Kammoun, Papadakis & Sabra2015]ka15 Kammoun E. S., Papadakis I. E., Sabra B. M., 2015, A&A, 582, 40[Leighly et al.1997]le97 Leighly K. M., Mushotzky R. F., Nandra K., Forster K., 1997, ApJ, 489, L25[Leighly1999a]le99a Leighly K. M., 1999, ApJS, 125, 297[Leighly1999b]le99b Leighly K. M., 1999, ApJS, 125, 317[Magdziarz et al.1998]ma98 Magdziarz P., Blaes O. M., Zdziarski A. A., Johnson W. N., Smith D. A., 1998, MNRAS, 301, 179[Mason et al.2001]ma01 Mason K.O. et al., 2001, A&A, 365, 36[Miniutti et al.2003]mi03 Miniutti G., Fabian A. C., Goyder R., Lasenby A. N., 2003, MNRAS, 344, L22[Miniutti et al.2004]mi04 Miniutti G., Fabian A. C., Miller J. M., 2004, MNRAS, 351, 466[Miniutti & Fabian2004]mf04 Miniutti G., Fabian A. C., 2004, MNRAS, 349, 1435[Miniutti et al.2007]mi07 Miniutti G., Fabian A. C., Anabuki N., Crummy J. et al., 2007, PASJ, 59S, 315[Mallick et al.2016]ma16 Mallick L., Dewangan G.C., Gandhi P. et al, 2016, MNRAS, 460, 1705[Mallick et al.2017]ma17 Mallick L., Dewangan G.C., McHardy I. M., Pahari M., 2017, MNRAS, 472, 174[Mallick et al.2018]ma18 Mallick L. et al., 2018, MNRAS, preprint (arXiv:1804.02703)[Osterbrock & Pogge1985]os85 Osterbrock D. E., Pogge R. W., 1985, ApJ, 297, 166[Papadakis et al.2007]pa07 Papadakis I. E., Brinkmann W., Page M. J., McHardy I., Uttley P., 2007, A&A, 461, 931[Ross & Fabian2005]ro05 Ross R.R., Fabian A.C., 2005, MNRAS, 358, 211[Shakura & Sunyaev1973]sh73 Shakura N. I., Sunyaev R. A., 1973, A&A, 24, 337[Singh et al.1985]si85 Singh K. P., Garmire G. P., Nousek J. 1985, ApJ, 297, 633[Strüder et al.2001]st01 Strüder L. et al., 2001, A&A, 365, L18[Turner et al.2001]tur01 Turner M. J. L. et al., 2001, A&A, 365, L27[Taylor et al.2003]ta03 Taylor R. D., Uttley P., McHardy I. M., 2003, MNRAS, 342, 31[Ulrich & Molendi1996]um96 Ulrich-Demoulin M.-H., Molendi S., 1996, ApJ, 457, 77[Vaughan et al.1999]va99 Vaughan S., Reeves J., Warwick R., Edelson R., 1999, MNRAS, 309, 113[Vaughan et al.2003]va03 Vaughan S., Edelson R., Warwick R. S., Uttley P., 2003, MNRAS, 345, 1271[Véron-Cetty et al.2001]ve01 Véron-Cetty M.-P., Véron P., Gonçalves A. C., 2001, A&A, 372, 730[Wang et al.1996]wa96 Wang T., Brinkmann W., Bergeron J., 1996, A&A, 309, 81[Wilms et al.2000]wi00 Wilms J., Allen A., McCray R., 2000, ApJ, 542, 914[Wang & Lu2001]wa01 Wang T., Lu Y., 2001, A&A, 377, 52[Willingale et al.2013]wil13 Willingale R., Starling R. L. C., Beardmore A. P., Tanvir N. R., O'Brien P. T., 2013, MNRAS, 431, 394[Wilkins et al.2015]wi15 Wilkins D. R., Gallo L. C., Grupe D., Bonson K., Komossa S., Fabian A. C., 2015, MNRAS, 454, 4440[Zdziarski, Johnson & Magdziarz1996]zd96 Zdziarski A. A., Johnson W. N., Magdziarz P., 1996, MNRAS, 283, 193
http://arxiv.org/abs/1702.08383v3
{ "authors": [ "Labani Mallick", "Gulab C. Dewangan" ], "categories": [ "astro-ph.HE" ], "primary_category": "astro-ph.HE", "published": "20170227171245", "title": "Large-amplitude rapid X-ray variability in the narrow-line Seyfert 1 galaxy PG 1404$+$226" }
General Upper Bounds for Gate Complexity and Depth of Reversible Circuits Consisting of NOT, CNOT and 2-CNOT Gates Dmitry V. Zakablukov[Post-graduate of Information Security Chair, BMSTU, Moscow, e-mail: ] December 30, 2023 ===========================================================================================================================================================================================================================================================================UDC 004.312, 519.7 This research was supported by the Russian Foundation for Basic Research (RFBR), project no. 16-01-00196 A. The paper discusses the gate complexity and the depth of reversible circuits consisting of NOT, CNOT and 2-CNOT gates in the case when the number of additional inputs is limited. We study Shannon's gate complexity function L(n, q) and depth function D(n, q) for a reversible circuit implementing a Boolean transformation f: Z_2^n → Z_2^n with 8n < q ≲ n2^n-o(n) additional inputs. The general upper bounds L(n,q) ≲ 2^n + 8n2^n / (log_2 (q-4n) - log_2 n - 2) and D(n,q) ≲ 2^n+1(2.5 + log_2 n - log_2 (log_2 (q - 4n) - log_2 n - 2)) are proved for this case. Keywords: reversible logic, gate complexity, circuit depth, computations with memory.§ INTRODUCTION Circuit complexity theory originates from Shannon's work <cit.>, in which it was proposed to measure the complexity of a Boolean function by the complexity of a minimal switching circuit implementing it. O. B. Lupanov established <cit.> the asymptotic complexity L(n) ∼ ρ2^n / n of a Boolean function of n variables in an arbitrary finite complete basis of gates with arbitrary positive weights, where ρ denotes the minimal reduced weight of the basis gates. In <cit.> he also considered circuits of gates with delays and proved that, in a regular gate basis, any Boolean function can be implemented by a circuit with delay T(n) ∼ τn, where τ is the minimum of the reduced delays over all basis gates, while preserving asymptotically optimal complexity. However, these works did not consider how the values of the functions L(n) and T(n) depend on the number of memory registers used. The question of computations with limited memory was considered by N. A. 
Karpova in <cit.>, where it is proved that, in a basis of classical gates implementing all Boolean functions, the asymptotic bound of the Shannon complexity function of a circuit with three or more memory registers depends on the value of p, but does not change as the number of memory registers used grows. This paper considers circuits consisting of the reversible gates NOT, CNOT and 2-CNOT. Definitions of such gates and circuits were given, for example, in <cit.>. The Shannon gate complexity function L(n,q) and depth function D(n,q) of a reversible circuit consisting of NOT, CNOT and 2-CNOT gates and implementing some Boolean transformation Z_2^n → Z_2^n with q additional inputs were defined in <cit.>. The following bounds on the gate complexity of a reversible circuit were obtained in <cit.>: L(n,0) ≍ n2^n / log_2 n , L(n,q_0) ≍ 2^n for q_0 ∼ n 2^n-⌈ n/ ϕ(n)⌉, where ϕ(n) ⩽ n/ (log_2 n + log_2 ψ(n)) and ψ(n) are arbitrary, arbitrarily slowly growing functions. The following bounds on the depth of a reversible circuit were obtained in <cit.>: D(n,q) ⩾ 2^n(n-2)/(3(n+q)log_2(n+q)) - n/(3(n+q)), D(n,0) ⩽ (n2^n+5/ψ(n))( 1 + ε(n) ) , where ψ(n) = log_2 n - log_2 log_2 n - log_2 ϕ(n), ϕ(n) < n/ log_2 n is an arbitrary, arbitrarily slowly growing function, and ε(n) = 1/(4ϕ(n)) + (4 + o(1))log_2 n ·log_2 log_2 n/n. For circuits with additional inputs the following depth bounds were obtained: D(n,q_1) ≲ 3n for q_1 ∼ 2^n , D(n,q_2) ≲ 2n for q_2 ∼ ϕ(n)2^n, where ϕ(n) = o(n) is an arbitrary, arbitrarily slowly growing function. Thus, to date, bounds on the gate complexity and depth of a reversible circuit are known only in two extreme cases: when the circuit uses no additional inputs at all, and when their number is very large. In this paper, by means of a synthesis algorithm based on O. B. Lupanov's standard method, we prove that for any value of q in the range 8n < q ≲ n2^n-o(n) the relations L(n,q) ≲ 2^n + 8n2^n / (log_2 (q-4n) - log_2 n - 2) and D(n,q) ≲ 2^n+1(2.5 + log_2 n - log_2 (log_2 (q - 4n) - log_2 n - 2)) hold. § BASIC NOTIONS Definitions of reversible gates, in particular NOT and k-CNOT, can be found in <cit.>. We will use the formal definition of these gates from <cit.>. Recall that N_j^n denotes the NOT gate (inverter) with n inputs, defining the transformation Z_2^n → Z_2^n of the form f_j(⟨ x_1, …, x_n ⟩) = ⟨ x_1, …, x_j ⊕ 1, …, x_n ⟩. By C_i_1,…,i_k;j^n = C_I;j^n, j ∉ I, we denote the k-CNOT gate with n inputs (a controlled inverter; the generalized Toffoli gate with k control inputs), defining the transformation Z_2^n → Z_2^n of the form f_i_1,…,i_k;j(⟨ x_1, …, x_n ⟩) = ⟨ x_1, …, x_j ⊕ x_i_1∧…∧ x_i_k, …, x_n ⟩. We consider only the NOT, CNOT (1-CNOT) and 2-CNOT gates and reversible circuits consisting of them. We denote by L(S), D(S) and Q(S) the gate complexity of a reversible circuit S, its depth and its number of additional inputs, respectively. The Shannon gate complexity function L(n,q) and depth function D(n,q) of a reversible circuit consisting of NOT, CNOT and 2-CNOT gates and implementing some Boolean transformation Z_2^n → Z_2^n with q additional inputs were defined in <cit.>. We call significant inputs of a circuit all its inputs that are not additional, and significant outputs those outputs whose values are needed for further computations. § DEPENDENCE OF THE GATE COMPLEXITY OF A REVERSIBLE CIRCUIT ON THE NUMBER OF ADDITIONAL INPUTS Consider the reversible circuit S_n in Fig. <ref>, implementing all conjunctions of n variables of the form x_1^a_1∧…∧ x_n^a_n, a_i ∈ Z_2. 
The circuit has n significant inputs and 2^n significant outputs; it contains subcircuits S_⌈ n/2 ⌉ and S_⌊ n/2 ⌋ implementing the conjunctions of fewer variables. If all 2^n conjunctions at the significant outputs of the main circuit are implemented simultaneously rather than on demand, then L(S_n) ∼ 2^n and Q(S_n) ∼ 2^n, as was proved in <cit.>. On the other hand, we can construct the conjunctions on demand one at a time rather than all at once, using only the significant outputs of the subcircuits S_⌈ n/2 ⌉ and S_⌊ n/2 ⌋ and just one additional input, which stores the value of the conjunction we need. Once all necessary operations with this significant output have been carried out, we can zero it by applying the same gates that were used to obtain it, but in reverse order. Thus, in the case under consideration, obtaining each conjunction requires at most two 2-CNOT gates, and obtaining t conjunctions (sequentially, on demand) requires at most 2t 2-CNOT gates. Consequently, L(S_n) ≲ O(2^n/2) + 2t, Q(S_n) ≲ O(2^n/2) + 1. The same approach can be applied to the subcircuits S_⌈ n/2 ⌉ and S_⌊ n/2 ⌋, as well as to the subcircuits of these subcircuits. If no intermediate values are stored at all and the conjunctions are constructed on demand, having only the inputs x_1, …, x_n, x̅_1, …, x̅_n, then obtaining each conjunction x_1^a_1∧…∧ x_n^a_n obviously requires at most 2(n-1) 2-CNOT gates, while only (n-1) additional inputs are required for all the conjunctions. For example, Fig. <ref> shows the construction of the conjunction x̅_1 x̅_2 x̅_3 x_4 x_5 x̅_6 x_7 x_8 using the intermediate values x̅_1 x̅_2, x̅_3 x_4, x_5 x̅_6 and x_7 x_8, with subsequent zeroing of the values at the insignificant outputs. Consider, in the general case, a reversible circuit S_CONJ(n,q) which implements conjunctions of n variables of the form x_1^a_1∧…∧ x_n^a_n, a_i ∈ Z_2, provided that q additional inputs are allotted for storing intermediate values and the values x̅_1, …, x̅_n have already been obtained earlier. Denote by L_CONJ(n, q, t) the gate complexity of the circuit S_CONJ(n,q) implementing, on demand, t conjunctions, not necessarily distinct; the value of t may be arbitrary, including greater than 2^n. Also denote by Q_CONJ(n, q, t) the total number of additional inputs required for such a reversible circuit. From the reasoning above one can derive the following simple bounds: L_CONJ(n, 0, t) ⩽ 2(n-1)t , Q_CONJ(n, 0, t) = n-1 . For q_0 ∼ 2^⌈ n/2 ⌉ + 2^⌊ n/2 ⌋ the relations L_CONJ(n, q_0, t) ⩽ q_0 + 2t, Q_CONJ(n, q_0, t) ⩽ q_0 + 1 hold. Let us derive how the value of L_CONJ(n, q, t) depends on the value of q. Lemma 1. For any value q > 2n, q ≲ 2^n, the following relations hold: L_CONJ(n, q, t) ⩽ q + 8nt/(log_2 q - log_2 n - 1), Q_CONJ(n, q, t) ⩽ q + n - 1 . Proof. The relation Q_CONJ(n, q, t) ⩽ q + n - 1 follows from relation (<ref>). Consider the structure of the desired reversible circuit S_CONJ(n,q) in Fig. <ref>: it is divided into K = ⌈log_2 n ⌉ levels, numbered bottom-up. Level k contains 2^k-1 reversible subcircuits; all of them have approximately the same number of significant inputs and outputs and implement all conjunctions of some subset of the variables x_1, …, x_n; the subsets for different subcircuits of the same level are disjoint, their union equals the whole set { x_1, …, x_n }, and the cardinalities of these subsets are approximately equal. To clarify the structure of the circuit S_CONJ, consider its particular case for n = 7. The circuit has K = 3 levels, and each level is described in Table <ref>. 
From this table one can see that if some subcircuit S_k;i at level k has 2^m significant outputs, then at level (k+1) there are exactly two subcircuits S_k+1;j and S_k+1;j+1 connected to it, the first of which has 2^⌊ m/2 ⌋ significant outputs and the second 2^⌈ m/2 ⌉ significant outputs. The structure of the subcircuit S_k;i is simple: it implements the conjunction of every significant output of the subcircuit S_k+1;j with every significant output of the subcircuit S_k+1;j+1 (see Fig. <ref>). Consequently, the gate complexity of such a subcircuit equals 2^⌊ m/2 ⌋ · 2^⌈ m/2 ⌉ = 2^m (only 2-CNOT gates are used). Let us return to the general circuit S_CONJ. We are given q additional inputs for storing intermediate values. It is most reasonable to spend them on storing the values at the outputs of the subcircuits of the highest levels, since it is clear that the lower the level of S_CONJ, the more additional inputs are required for storing intermediate values. Consider the case when we are able to store all intermediate values. Denote by L_k the number of gates at the k-th level of the circuit. For example, L_1 = 2^n, L_2 = 2^⌊ n/2 ⌋ + 2^⌈ n/2 ⌉. Let us bound the value of L_k. Since ⌈ n/2 ⌉ = ⌊ (n+1)/2 ⌋ ⩽ (n+1)/2, we have L_2 ⩽ 2 · max(2^⌊ n/2 ⌋, 2^⌈ n/2 ⌉) = 2 · 2^⌈ n/2 ⌉ ⩽ 2 · 2^(n/2 + 1/2). It follows that L_3 ⩽ 4 · 2^(n/4 + 1/4 + 1/2), L_4 ⩽ 8 · 2^(n/8 + 1/8 + 1/4 + 1/2), L_k ⩽ 2^k · 2^(n/2^(k-1)). Denote δ_k = 2^k · 2^(n/2^(k-1)). The variable k ranges over [1, …, K], K = ⌈log_2 n ⌉, k ∈ ℕ. Let us change the variable: k = K - s, so that s = s(k) = K - k. If k denotes the level number when numbering from the outputs to the inputs (bottom-up), then (s+1) denotes the level number when numbering from the inputs to the outputs (top-down). The variable s ranges over [0, …, K - 1], s ∈ Z_+. In this case δ_k = (2^K/2^s) · 2^((2n · 2^s)/2^K) ⩽ (2n/2^s) · 2^(2^(s+1)) = Δ_s. Consequently, we obtain the chain of inequalities L_k ⩽ δ_k ⩽ Δ_s(k) = (2n/2^s) · 2^(2^(s+1)). Let us write out the first terms of the sequence { Δ_s(k) }: { 8n, 16n, 128n, … }. Clearly, Δ_s grows ever faster as s grows. Moreover, one can claim that for any s ⩾ 1 the relation ∑_i=0^s-1 Δ_i ⩽ Δ_s/2 holds. It follows that ∑_i=0^s Δ_i ⩽ 3Δ_s/2, ∑_i=K^K-s L_i ⩽ (3n/2^s) · 2^(2^(s+1)). In other words, the total gate complexity of all subcircuits at the last (s+1) levels (numbering bottom-up) does not exceed 3n · 2^(2^(s+1) - s). Let us return once more to the general circuit S_CONJ. From Fig. <ref> it is clear that constructing on demand one significant output of the circuit S_CONJ at the first r levels uses at most (1 + 2 + 4 + … + 2^(r-1)) = (2^r - 1) 2-CNOT gates. The same number of gates is required to zero the values at the insignificant outputs. Consequently, provided that the number of levels whose subcircuits have to be constructed on demand does not exceed r, we have L_CONJ(n, q, t) ⩽ q + 2(2^r - 1) · t ⩽ q + t · 2^(r+1). The term q in this relation obviously follows from the fact that obtaining the value at one output of any subcircuit S_k;i requires exactly one 2-CNOT gate. If we can store at most q intermediate values at the subcircuit outputs, then at most q 2-CNOT gates are required to store them. It remains to bound the value of r. Suppose that for the value of q given in the statement the inequality (3n/2^s) · 2^(2^(s+1)) ⩽ q < (3n/2^(s+1)) · 2^(2^(s+2)) holds for some value s ∈ [0, …, K - 1], s ∈ Z_+. Then we can claim that this value of q suffices to store the values at all outputs of the subcircuits at the last (s+1) levels, numbering the levels bottom-up (see 
relation (<ref>)), and the number of first levels whose subcircuits have to be constructed on demand does not exceed (k-1), since (s+1) = K - (k-1). Consequently, r ⩽ k-1. From the right-hand inequality of relation (<ref>) it follows that log_2 q < log_2 3 + log_2 n - s - 1 + 2^(s+2), 2^K/2^k = 2^s > (log_2 q - (log_2 3 + log_2 n - s - 1))/4, (log_2 q - (log_2 3 + log_2 n - s - 1))2^k < 4 · 2^K, K = ⌈log_2 n ⌉ , s ⩾ 0 ⇒ 2^k < 8n/(log_2 q - log_2 n - 1) for q > 2n. Since r + 1 ⩽ k, we have 2^(r+1) < 8n/(log_2 q - log_2 n - 1) for q > 2n. From this inequality and inequality (<ref>) the bound from the statement of the Lemma follows: L_CONJ(n, q, t) ⩽ q + 8nt/(log_2 q - log_2 n - 1). The restriction q ≲ 2^n in the statement of the Lemma reflects the fact that for q ≳ 2^n no further reduction of the circuit complexity is observed for the synthesis method under consideration. Note that, by analogy with the circuit S_CONJ, one can construct a circuit S_XOR which, for given inputs x_1, …, x_n, implements at its significant outputs the values x_1 ∧ a_1 ⊕ … ⊕ x_n ∧ a_n, a_i ∈ Z_2. For this it suffices to replace every 2-CNOT gate in S_CONJ by two CNOT gates. Consequently, L_XOR(n, q, t) ⩽ 2q + 16nt/(log_2 q - log_2 n - 1), Q_XOR(n, q, t) ⩽ q + n - 1. Now we can prove the main theorem of this section. Theorem 1. For any value q > 8n, q ≲ n 2^(n-⌈ n/ϕ(n)⌉), where ϕ(n) ⩽ n/(log_2 n + log_2 ψ(n)) and ψ(n) are arbitrary, arbitrarily slowly growing functions, the following relation holds: L(n,q) ≲ 2^n + 8n2^n/(log_2 (q-4n) - log_2 n - 2). Proof. We describe a synthesis algorithm A_q, which is a modification of O. B. Lupanov's standard algorithm and is intended for synthesizing reversible circuits under a restriction on the number of additional inputs used. An arbitrary Boolean mapping f: Z_2^n → Z_2^n can be represented as some n Boolean functions f_i: Z_2^n → Z_2 of n variables: f(x) = ⟨ f_1(x), f_2(x), …, f_n(x) ⟩. Each function f_i(x) can be expanded over the last (n-k) variables: f_i(x) = ⊕_{a_(k+1), …, a_n ∈ Z_2} x_(k+1)^(a_(k+1)) ∧ … ∧ x_n^(a_n) ∧ f_i(⟨ x_1, …, x_k, a_(k+1), …, a_n ⟩) . Each of the n2^(n-k) Boolean functions f_i(⟨ x_1, …, x_k, a_(k+1), …, a_n ⟩), 1 ⩽ i ⩽ n, is a function of the k variables x_1, …, x_k; it can be obtained via an analogue of the perfect disjunctive normal form in which disjunctions are replaced by modulo-2 addition: f_i(⟨ x_1, …, x_k, a_(k+1), …, a_n ⟩) = f_(i,j) = ⊕_{σ ∈ Z_2^k : f_(i,j)(σ) = 1} x_1^(σ_1) ∧ … ∧ x_k^(σ_k). All 2^k conjunctions of the form x_1^(σ_1) ∧ … ∧ x_k^(σ_k) can be divided into groups of at most s conjunctions each. Denote by p = ⌈ 2^k/s ⌉ the number of such groups. Using the conjunctions of one group, we can implement at most 2^s Boolean functions by formula (<ref>). Denote by G_i the set of Boolean functions that can be implemented using the conjunctions of the i-th group, 1 ⩽ i ⩽ p. Then |G_i| ⩽ 2^s. Consequently, we can rewrite formula (<ref>) as follows: f_i(⟨ x_1, …, x_k, a_(k+1), …, a_n ⟩) = ⊕_{t=1 … p; g_(j_t) ∈ G_t, 1 ⩽ j_t ⩽ |G_t|} g_(j_t)(⟨ x_1, …, x_k ⟩) . It follows that f_i(x) = ⊕_{a_(k+1), …, a_n ∈ Z_2} ( ⊕_{t=1 … p; g_(j_t) ∈ G_t, 1 ⩽ j_t ⩽ |G_t|} x_(k+1)^(a_(k+1)) ∧ … ∧ x_n^(a_n) ∧ g_(j_t)(⟨ x_1, …, x_k ⟩) ) . The general structure of the reversible circuit S_f which implements the mapping f and which is synthesized by the algorithm A_q is shown in Fig. <ref>. First we implement the negations of all input values x_1, …, x_n with gate complexity 2n (one NOT and one CNOT gate per input), employing n additional inputs. We split the set of significant inputs x_1, …, x_n of the circuit into two groups: { x_1, …, x_k } and { x_(k+1), …, x_n }. 
The first group of inputs, together with their negations, is fed to the subcircuit S_1 = S_CONJ(k,q_1) to implement some t_1 conjunctions x_1^(a_1) ∧ … ∧ x_k^(a_k), a_i ∈ Z_2, allotting this subcircuit q_1 additional inputs for storing intermediate values. The second group of inputs, together with their negations, is fed to the subcircuit S_2 = S_CONJ(n-k,q_2) to implement some t_2 conjunctions x_(k+1)^(a_(k+1)) ∧ … ∧ x_n^(a_n), a_i ∈ Z_2, allotting this subcircuit q_2 additional inputs for storing intermediate values. We implement all 2^k distinct conjunctions at the significant outputs of the subcircuit S_1 sequentially. The obtained values are stored using additional inputs. As soon as the next s conjunctions have been obtained, the corresponding s significant outputs are fed to the significant inputs of the subcircuit S_3;i = S_XOR(s,q_3) to obtain the values of some t_3 functions of the variables x_1, …, x_k, allotting this subcircuit q_3 additional inputs for storing intermediate values. In total there will be at most p = ⌈ 2^k/s ⌉ distinct subcircuits S_3;i. As soon as work with the current subcircuit S_3;i is finished, we zero the values at its q_3 insignificant outputs by applying the same gates that were used to obtain the subcircuit, but in reverse order. Then we zero the values at the s significant outputs that served as significant inputs of the subcircuit S_3;i, by implementing once more the previously obtained s conjunctions by means of the subcircuit S_1 (see Fig. <ref>). In this way we avoid increasing the number of additional inputs used and reuse the same additional inputs, at the cost, however, of doubling the gate complexity of the corresponding subcircuits. It follows from formula (<ref>) that, given the value at some significant output of the subcircuit S_2 and at some significant output of the subcircuit S_3;i, we can implement one summand of the inner parentheses using exactly one 2-CNOT gate, whose controlled output is one of the n significant outputs of the circuit S_f under construction (see Fig. <ref>). The mapping f under consideration has n outputs, the number of groups of conjunctions of the first k variables x_1, …, x_k equals p, and the number of distinct conjunctions of the last (n-k) variables x_(k+1), …, x_n equals 2^(n-k). Consequently, the gate complexity of implementing the function f_i by formula (<ref>) equals p2^(n-k), and that of the mapping f as a whole equals L_4 = pn2^(n-k); exactly n additional inputs are required to store the output values of the mapping f. Thus, we can derive an inequality for L(f,q) of the following form: L(f,q) = 2n + L_CONJ(k, q_1, t_1) + L_CONJ(n-k, q_2, t_2) + 2p · L_XOR(s, q_3, t_3) + pn2^(n-k), and for Q(S_f) of the following form: Q(S_f) = q = n + Q_CONJ(k, q_1, t_1) + Q_CONJ(n-k, q_2, t_2) + Q_XOR(s, q_3, t_3) + n. Note that each of the 2^k distinct conjunctions at the significant outputs of the subcircuit S_1 is obtained exactly twice; consequently, t_1 = 2^(k+1). Since every significant output of the subcircuit S_2 is used as an input of pn 2-CNOT gates, while a significant output of the subcircuit S_3;i may be used as an input of 2^(n-k) 2-CNOT gates, there are two different ways of constructing the desired circuit S_f: * In the first case we minimize the value of t_2: for each group of conjunctions of the first k variables x_1, …, x_k we construct the current significant output of the subcircuit S_2 once, and then construct for it the n significant outputs of the subcircuit S_3;i. 
Then one can claim that t_2 = p2^(n-k), t_3 ⩽ n2^(n-k). * In the second case we minimize the value of t_3: for each group of conjunctions of the first k variables x_1, …, x_k we construct the current significant output of the subcircuit S_3;i once, and then construct for it the required significant outputs of the subcircuit S_2. There may be one such output, or there may be 2^(n-k) of them. However, we can certainly claim that t_2 ⩽ pn2^(n-k), t_3 ⩽ 2^s. Let us bound the value of L(f,q) in the general case: L(f,q) ⩽ 2n + pn2^(n-k) + q_1 + q_2 + 4p q_3 + 8k2^(k+1)/(log_2 q_1 - log_2 k - 1) + 8(n-k)t_2/(log_2 q_2 - log_2 (n-k) - 1) + 32pst_3/(log_2 q_3 - log_2 s - 1). We will look for values of k and s such that p = ⌈ 2^k/s ⌉ ∼ 2^k/s. Then L(f,q) ≲ 2n + n2^n/s + q_1 + q_2 + 4q_3 2^k/s + 8k2^(k+1)/(log_2 q_1 - log_2 k - 1) + 8(n-k)t_2/(log_2 q_2 - log_2 (n-k) - 1) + 32t_3 2^k/(log_2 q_3 - log_2 s - 1). * Suppose t_2 = p2^(n-k) ∼ 2^n/s, t_3 ⩽ n2^(n-k). In this case L(f,q) ≲ 2n + n2^n/s + q_1 + q_2 + 4q_3 2^k/s + 8k2^(k+1)/(log_2 q_1 - log_2 k - 1) + 8 · 2^n/(log_2 q_2 - log_2 (n-k) - 1) + 32n2^n/(log_2 q_3 - log_2 s - 1). Set s = n - k, k = ⌈ n/ϕ(n) ⌉, where ϕ(n) ⩽ n/(log_2 n + log_2 ψ(n)) and ψ(n) are arbitrary, arbitrarily slowly growing functions. In this case the inequality 2^k/s ⩾ ψ(n) holds. Since q_3 ≲ 2^s and q_2 < q ≲ n 2^(n-⌈ n/ϕ(n)⌉), we have q_2 = o(2^n) and 2n + n2^n/s + q_2 + 4q_3 2^k/s ≲ 2n + n2^n/(n - o(n)) + q_2 + 2^(n+2)/(n-o(n)) ≲ 2^n. Set q_1 = 0, q_2 = q_3. According to formula (<ref>), L_CONJ(n, 0, t) ⩽ 2(n-1)t; consequently, in relation (<ref>) we may replace the complexity of the subcircuit S_1 by k2^(k+2): L(f,q) ≲ 2^n + k2^(k+2) + 8 · 2^n/(log_2 q_2 - log_2 (n-k) - 1) + 32n2^n/(log_2 q_3 - log_2 s - 1). Obviously, k2^(k+2) = 4⌈ n/ϕ(n) ⌉ · 2^(⌈ n/ϕ(n) ⌉) = o(2^n) and 8 · 2^n/(log_2 q_3 - log_2 s - 1) = o(32n2^n/(log_2 q_3 - log_2 s - 1)), hence L(f,q) ≲ 2^n + 32n2^n/(log_2 q_3 - log_2 s - 1). According to Lemma <ref>, Q_CONJ(n, q, t) ⩽ q + n - 1; hence relation (<ref>) can be rewritten as q ⩽ n + k - 1 + q_2 + n - k - 1 + q_3 + s - 1 + n = 4n - k + 2q_3 - 3 < 4n + 2q_3. Consequently, log_2 q_3 > log_2 (q - 4n) - 1. This yields L(f,q) ≲ 2^n + 32n2^n/(log_2 (q - 4n) - log_2 n - 2), which holds for log_2 (q - 4n) > log_2 n + 2 ⇒ q > 8n. * Suppose t_2 ⩽ pn2^(n-k) ∼ n2^n/s, t_3 ⩽ 2^s. In this case L(f,q) ≲ 2n + n2^n/s + q_1 + q_2 + 4q_3 2^k/s + 8k2^(k+1)/(log_2 q_1 - log_2 k - 1) + 8n2^n/(log_2 q_2 - log_2 (n-k) - 1) + 32 · 2^n/(log_2 q_3 - log_2 s - 1). As in the first method, set s = n - k, k = ⌈ n/ϕ(n) ⌉, where ϕ(n) ⩽ n/(log_2 n + log_2 ψ(n)) and ψ(n) are arbitrary, arbitrarily slowly growing functions, and q_1 = 0, q_2 = q_3. Then, by the reasoning given for the first method, L(f,q) ≲ 2^n + 8n2^n/(log_2 q_2 - log_2 (n-k) - 1) + 32 · 2^n/(log_2 q_3 - log_2 s - 1). Obviously, 32 · 2^n/(log_2 q_2 - log_2 (n-k) - 1) = o(8n2^n/(log_2 q_2 - log_2 (n-k) - 1)), hence L(f,q) ≲ 2^n + 8n2^n/(log_2 q_2 - log_2 (n-k) - 1). Since log_2 q_3 > log_2 (q - 4n) - 1, we also have log_2 q_2 > log_2 (q - 4n) - 1. This yields L(f,q) ≲ 2^n + 8n2^n/(log_2 (q - 4n) - log_2 n - 2), which holds for log_2 (q - 4n) > log_2 n + 2 ⇒ q > 8n. Clearly, the second synthesis method is asymptotically better than the first. Since we have described a synthesis algorithm for a reversible circuit implementing an arbitrary mapping f, we obtain L(n,q) ⩽ L(f,q) ≲ 2^n + 8n2^n/(log_2 (q-4n) - log_2 n - 2) for q > 8n. For q ≳ n 2^(n-⌈ n/ϕ(n)⌉) no further reduction of the circuit complexity is observed for the synthesis method under consideration. 
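As an illustration of the construction used in the proofs above, the following minimal Python sketch simulates the NOT, CNOT and 2-CNOT gates on a bit register and computes a single conjunction x_1^(a_1) ∧ … ∧ x_n^(a_n) with (n-1) ancillary inputs and 2(n-1) 2-CNOT gates, zeroing the insignificant outputs afterwards. It is an illustration only: the literals are computed directly instead of being taken from precomputed negation wires, and a chain is used instead of the balanced pairing of Fig. <ref>.

```python
# Sketch: bit-level simulation of the on-demand conjunction construction.
def NOT(reg, j):            reg[j] ^= 1                 # inverter
def CNOT(reg, i, j):        reg[j] ^= reg[i]            # controlled NOT
def CCNOT(reg, i1, i2, j):  reg[j] ^= reg[i1] & reg[i2] # 2-CNOT (Toffoli)

def conjunction(x, a):
    """Return x_1^a_1 & ... & x_n^a_n using n-1 ancillas, 2(n-1) 2-CNOTs."""
    n = len(x)
    lits = [xi ^ ai ^ 1 for xi, ai in zip(x, a)]   # literal x_i^a_i
    reg = lits + [0] * (n - 1)                     # literal wires + ancillas
    gates, cur = [], 0                             # cur: wire with running AND
    for k in range(1, n):
        anc = n + k - 1
        gates.append((cur, k, anc))
        CCNOT(reg, cur, k, anc)
        cur = anc
    result = reg[cur]
    for (i1, i2, j) in reversed(gates):            # uncompute the ancillas
        CCNOT(reg, i1, i2, j)
    assert reg[n:] == [0] * (n - 1)                # insignificant outputs zeroed
    return result

print(conjunction([0, 1, 1, 0], [0, 1, 1, 0]))     # 1: all literals satisfied
```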
§ DEPENDENCE OF THE DEPTH OF A REVERSIBLE CIRCUIT ON THE NUMBER OF ADDITIONAL INPUTS From the proof of Theorem <ref> one can also obtain an upper bound for the function D(n,q) in the case q > 8n, q ≲ n 2^(n-o(n)); for this, however, an auxiliary lemma must be proved first. Lemma 2. For any value q > 2n, q ≲ 2^n, the following relations hold: D_CONJ(n, q, t) ⩽ q + 2t(2+log_2 n - log_2 (log_2 q - log_2 n - 1)) , D_CONJ(n, 0, t) ⩽ 2t · ⌈log_2 n ⌉. Proof. Consider the circuit S_CONJ(n,q) from Lemma <ref>. According to formula (<ref>), the inequality L_CONJ(n, q, t) ⩽ q + 2t · 2^r holds. The intermediate values stored on the q additional inputs can be obtained with depth at most q. It is also clear that constructing on demand one significant output of the circuit S_CONJ at the first r levels can be done with depth r, see Fig. <ref>. It follows that D_CONJ(n, q, t) ⩽ q + 2tr. According to formula (<ref>), for q > 2n the inequality 2^r < 4n/(log_2 q - log_2 n - 1) holds, whence r < 2 + log_2 n - log_2 (log_2 q - log_2 n - 1), D_CONJ(n, q, t) ⩽ q + 2t(2+log_2 n - log_2 (log_2 q - log_2 n - 1)) . The relation D_CONJ(n, 0, t) ⩽ 2t · ⌈log_2 n ⌉ follows from relation (<ref>) and the fact that a single conjunction x_1^(a_1) ∧ … ∧ x_n^(a_n) can be constructed with logarithmic depth ⌈log_2 n ⌉. Analogously, for the reversible circuit S_XOR(n,q) the inequality D_XOR(n, q, t) = 2D_CONJ(n, q, t) ⩽ 2q + 4t(2+log_2 n - log_2 (log_2 q - log_2 n - 1)) holds for any value q > 2n, q ≲ 2^n. We now prove the last theorem of this paper. Theorem 2. For any value q > 8n, q ≲ n 2^(n-⌈ n/ϕ(n)⌉), where ϕ(n) ⩽ n/(log_2 n + log_2 ψ(n)) and ψ(n) are arbitrary, arbitrarily slowly growing functions, the following relation holds: D(n,q) ≲ 2^(n+1)(2.5 + log_2 n - log_2 (log_2 (q - 4n) - log_2 n - 2)). Proof. Consider the reversible circuit S_f from the proof of Theorem <ref>, synthesized by the algorithm A_q. From relation (<ref>) an analogous relation for the depth D(f,q) can be derived: D(f,q) = 2 + D_CONJ(k, q_1, t_1) + D_CONJ(n-k, q_2, t_2) + 2p · D_XOR(s, q_3, t_3) + pn2^(n-k). Set s = n - k, k = ⌈ n/ϕ(n) ⌉, where ϕ(n) ⩽ n/(log_2 n + log_2 ψ(n)) and ψ(n) are arbitrary, arbitrarily slowly growing functions. In this case the relations p ∼ 2^k/s ⩾ ψ(n) and pn2^(n-k) ∼ n2^n/s ∼ 2^n hold. Set q_1 = 0. Since t_1 = 2^(k+1), D_CONJ(k, q_1, t_1) ⩽ 2t_1 · ⌈log_2 k ⌉ ⩽ ⌈log_2 ⌈ n/ϕ(n)⌉⌉ · 2^(⌈ n/ϕ(n) ⌉ + 2) = o(2^n). Thus, D(f,q) ≲ 2^n + D_CONJ(n-k, q_2, t_2) + (2^(k+1)/s) · D_XOR(s, q_3, t_3). Consider the same two cases for t_2 and t_3 as in the proof of Theorem <ref>. * Suppose t_2 = p2^(n-k) ∼ 2^n/s, t_3 ⩽ n2^(n-k). In this case D_CONJ(n-k, q_2, t_2) ⩽ q_2 + (2^(n+1)/s)(2+log_2 s - log_2 (log_2 q_2 - log_2 s - 1)), D_XOR(s, q_3, t_3) ⩽ 2 q_3 + n2^(n-k+2)(2+log_2 s - log_2 (log_2 q_3 - log_2 s - 1)). Set q_2 = q_3 and denote d = 2+log_2 s - log_2 (log_2 q_2 - log_2 s - 1); then D(f,q) ≲ 2^n + q_2 + d2^(n+1)/s + q_3 2^(k+2)/s + dn2^(n+3)/s. According to formula (<ref>), 2^n + q_2 + 4q_3 2^k/s ≲ 2^n. Hence D(f,q) ≲ 2^n + dn2^(n+3)/s ≲ 2^n + 2^(n+3)(2+log_2 n - log_2 (log_2 q_3 - log_2 n - 1)). From relation (<ref>) it follows that log_2 q_3 > log_2 (q - 4n) - 1. Thus, we obtain the final upper bound D(f,q) ≲ 2^n(17 + 8(log_2 n - log_2 (log_2 (q - 4n) - log_2 n - 2))), which holds for log_2 (q - 4n) > log_2 n + 2 ⇒ q > 8n. * Suppose t_2 ⩽ pn2^(n-k) ∼ n2^n/s, t_3 ⩽ 2^s. In this case D_CONJ(n-k, q_2, t_2) ⩽ q_2 + (n2^(n+1)/s)(2+log_2 s - log_2 (log_2 q_2 - log_2 s - 1)), D_XOR(s, q_3, t_3) ⩽ 2 q_3 + 2^(s+2)(2+log_2 s - log_2 (log_2 q_3 - log_2 s - 1)). Set q_2 = q_3. 
Denote d = 2+log_2 s - log_2 (log_2 q_2 - log_2 s - 1); then D(f,q) ≲ 2^n + q_2 + dn2^(n+1)/s + q_3 2^(k+2)/s + d2^(n+3)/s. According to formula (<ref>), 2^n + q_2 + 4q_3 2^k/s ≲ 2^n. Hence D(f,q) ≲ 2^n + dn2^(n+1)/s ≲ 2^n + 2^(n+1)(2+log_2 n - log_2 (log_2 q_2 - log_2 n - 1)). From relation (<ref>) it follows that log_2 q_2 > log_2 (q - 4n) - 1. Thus, we obtain the final upper bound D(f,q) ≲ 2^(n+1)(2.5 + log_2 n - log_2 (log_2 (q - 4n) - log_2 n - 2)), which holds for log_2 (q - 4n) > log_2 n + 2 ⇒ q > 8n. Clearly, the second synthesis method is asymptotically better than the first. Since we have described a synthesis algorithm for a reversible circuit implementing an arbitrary mapping f, we obtain D(n,q) ⩽ D(f,q) ≲ 2^(n+1)(2.5 + log_2 n - log_2 (log_2 (q - 4n) - log_2 n - 2)) for q > 8n. As the number of additional inputs grows from q ∼ n 2^(n-o(n)) to q ∼ 2^n, the asymptotic upper bound of the function D(n,q) decreases from exponential to linear, see relation (<ref>). However, deriving the dependence of the upper bound of D(n,q) on q in the range n 2^(n-o(n)) ≲ q ≲ 2^n is beyond the scope of this paper. § CONCLUSION In this paper we considered reversible circuits consisting of NOT, CNOT and 2-CNOT gates with various numbers of additional inputs q. We studied the Shannon gate complexity function L(n, q) and depth function D(n,q) of a reversible circuit implementing some mapping Z_2^n → Z_2^n under a restriction on the value of q. Upper bounds for the functions L(n, q) and D(n, q) were proved for the range 8n < q ≲ n2^(n-o(n)). From the obtained relations one can conclude that the use of additional memory in reversible circuits consisting of NOT, CNOT and 2-CNOT gates almost always allows a significant reduction of the gate complexity and depth of such circuits. shannon C. E. Shannon, “The synthesis of two-terminal switching circuits”, Bell System Technical Journal, 28:8 (1949), 59–98. lupanov_complexity O. B. Lupanov, “A method of circuit synthesis”, Izvestiya Vuzov, Radiofizika, 1:1 (1958), 23–26 (in Russian). lupanov_delay O. B. Lupanov, “On circuits of functional elements with delays”, Problemy Kibernetiki, vol. 23, Nauka, Moscow, 1970, 43–81 (in Russian). karpova N. A. Karpova, “On computations with limited memory”, Matematicheskie Voprosy Kibernetiki, vol. 2, Nauka, Moscow, 1989, 131–144 (in Russian). feynman R. Feynman, “Quantum Mechanical Computers”, Optic News, 11:2 (1985), 11–20. DOI: 10.1364/ON.11.2.000011. maslov_thesis D. A. Maslov, Reversible Logic Synthesis, Ph. D. Thesis, University of New Brunswick Fredericton, N. B., Canada, 2003, 165 pp. shende V. V. Shende, A. K. Prasad, I. L. Markov, J. P. Hayes, “Synthesis of Reversible Logic Circuits”, IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 22:6 (2006), 710–722. DOI: 10.1109/TCAD.2003.811448. my_dm_complexity D. V. Zakablukov, “On the gate complexity of reversible circuits consisting of NOT, CNOT and 2-CNOT gates”, Diskretnaya Matematika, 28:2 (2016), 12–26 (in Russian). my_vestnik_mgu_depth D. V. Zakablukov, “Depth estimation of reversible circuits consisting of NOT, CNOT and 2-CNOT gates”, Vestnik Moskovskogo Universiteta, Ser. Matematika. Mekhanika, 2016, no. 3, 3–12 (in Russian).
http://arxiv.org/abs/1702.08045v2
{ "authors": [ "Dmitry V. Zakablukov" ], "categories": [ "cs.CC" ], "primary_category": "cs.CC", "published": "20170226152557", "title": "General Upper Bounds for Gate Complexity and Depth of Reversible Circuits Consisting of NOT, CNOT and 2-CNOT Gates" }
Browsing through the universe of bibliographic information R. Koopman OCLC Research, Schipholweg 99, Leiden, The Netherlands Tel.: +31 71 524 6500 rob.koopman@oclc.org S. Wang OCLC Research, Schipholweg 99, Leiden, The Netherlands Tel.: +31 71 524 6500 shenghui.wang@oclc.org A. Scharnhorst DANS-KNAW, Anna van Saksenlaan 51, The Hague, The Netherlands Tel.: +31 70 349 4450 andrea.scharnhorst@dans.knaw.nl Contextualization of topics Rob Koopman Shenghui Wang Andrea Scharnhorst Received: date / Accepted: date ========================================================= This paper describes how semantic indexing can help to generate a contextual overview of topics and visually compare clusters of articles. The method was originally developed for an innovative information exploration tool, called Ariadne, which operates on bibliographic databases with tens of millions of records <cit.>. In this paper, the method behind Ariadne is further developed and applied to the research question of the special issue “Same data, different results” – the better understanding of topic (re-)construction by different bibliometric approaches. For the case of the Astro dataset of 111,616 articles in astronomy and astrophysics, a new instantiation of the interactive exploring tool, LittleAriadne, has been created. This paper contributes to the overall challenge to delineate and define topics in two different ways. First, we produce two clustering solutions based on vector representations of articles in a lexical space. These vectors are built on semantic indexing of entities associated with those articles. Second, we discuss how LittleAriadne can be used to browse through the network of topical terms, authors, journals, citations and various cluster solutions of the Astro dataset. More specifically, we treat the assignment of an article to the different clustering solutions as an additional element of its bibliographic record. Keeping the principle of semantic indexing on the level of such an extended list of entities of the bibliographic record, LittleAriadne in turn provides a visualization of the context of a specific clustering solution. It also conveys the similarity of article clusters produced by different algorithms, hence representing a complementary approach to other possible means of comparison. § INTRODUCTION What is the essence, or the boundary of a scientific field? How can a topic be defined? Those questions are at the heart of bibliometrics. They are equally relevant for indexing, cataloguing and consequently information retrieval <cit.>. Rigour and stability in bibliometrically defining boundaries of a field are important for research evaluation and consequently the distribution of funding. But, for information retrieval - next to accuracy - serendipity, broad coverage and associations to other fields are of equal importance. If researchers seek information about a certain topic outside of their areas of expertise, their information needs can be quite different from those in a bibliometric context. Among the many possible hits for a search query, they may want to know which are core works (articles, books) and which are rather peripheral. They may want to use different rankings <cit.>, get some additional context information about authors or journals, or see other closely related vocabulary or works associated with a search term. On the whole, they would have less need to define a topic and a field in a bijective, univocal way. Such a possibility to contextualize is not only important for term-based queries. 
It also holds for groups of query terms, or for the exploration of sets of documents, produced by different clustering algorithms. Contextualisation is the main motivation behind this paper. If we talk of contextualisation we still stay in the realm of bibliographic information. That is, we rely on information about authors, journals, words, references as hidden in the entirety of the set of all bibliographic records. Decades of bibliometrics research have produced many different approaches to cluster documents, or more specifically, articles. They often focus on one entity of the bibliographic record. To give one example, articles and terms within those articles (in title, abstract and/or full text) form a bipartite network. From this network we can either build a network of related terms (co-word analysis) or a network of related articles (based on shared words). The first method, sometimes called lexical <cit.>, has been applied in scientometrics to produce so-called topical or semantic maps. The same exercise can be applied to authors and articles, authors and words <cit.>, and in effect to each element of the bibliographic record for an article <cit.>. If we extend the bibliographic record of an article with the list of references contained by this article, we enter the area of citation analysis. Here, the following methods are widely used: direct citations, bibliographic coupling and co-citation maps. Hybrid methods combine citation and lexical analysis (e.g., <cit.>). We would like to note here that in an earlier comparison of citation- and word-based mapping approaches Zitt et al. (<cit.>) underline the differences both signals carry in terms of what aspect of scientific practice they represent. We come back to this in the next paragraph. Formally spoken, the majority of studies apply one method and often display unipartite networks. Sometimes analysis and visualization of multi-partite networks can be found <cit.>. Each network representation of articles captures some aspect of connectivity and structure which can be found in published work. Co-authorship networks shed light on the social dimension of knowledge production, the so-called Invisible College <cit.>. Citation relations are interpreted as traces of flows of knowledge <cit.>. By using different bibliographic elements, we obtain different models for, or representations of, a field or topic; i.e. as a conceptual, cognitive unit; as a community of practice; or as institutionalized in journals. One could also say that choosing what to measure affects the representation of a field or topic. Another source of variety beyond differences arising from choice of representations is how to analyze those representations. Fortunately, network analysis provides several classical methods to choose from, including clustering and clique analysis. However, clusters can be defined in different ways, and some clustering algorithms can be computationally expensive when used on large or complex networks. Consequently, we find different solutions for the same algorithm (if parameters in the algorithm are changed) and different solutions for different algorithms. One could call this an effect of the choice of instrument for the measurement or how to measure. Using an ideal-typical workflow, these points of choice have been further detailed and discussed in another paper of this special issue (<cit.>). The variability in each of the stages of the workflow results in ambiguity, and, if not articulated, makes it even harder to reproduce results. 
Overall, moments of choice add an uncertainty margin to the results <cit.>. Last but not least, we can ask ourselves whether clear delineations exist between topics in practice. Often in the sciences very different topics are still related to each other. There exist unsharp boundaries and almost invisible long threads in the fabric of science <cit.>, which might inhibit the finding of a contradiction-free solution in the form of a unique set of disjoint clusters. There is a seeming paradox between the fact that experts often can rather clearly identify what belongs to their field or a certain topic, and that it is so hard to quantitatively represent this with bibliometric methods. However, a closer look into science history and science and technology studies reveals that even among experts opinions regarding subject matter classification or topic identification might vary. What belongs to a field and what does not is as much an epistemic question as an object of social negotiation. Moreover, the boundaries of a field change over time, and even a defined canon or body of knowledge determining the essence of a field or a topic can still be controversial or subject to change <cit.>. Defining a topic requires a trade-off between accepting the natural ambiguity of what a topic is and the necessity to define a topic for purposes of education, knowledge acquisition, and evaluation. Since different perspectives serve different purposes, there is also a need to preserve the diversity and ambiguity described earlier. Having said this, for the sake of scientific reasoning it is equally necessary to be able to further specify the validity and appropriateness of different methods for defining topics and fields <cit.>. This paper contributes to this sorting-out-process in several ways. All are driven by the motivation to provide a better understanding of the topic re-construction results by providing context: context of the topics themselves by using a lexical approach and all elements of the bibliographical record to delineate topics; and context for different solutions in the (re-)construction of topics. We first introduce the method of semantic indexing, by which each bibliographic record is decomposed and a vector representation for each of its entities in a lexical space is built, resulting in a so-called semantic matrix. This approach is conceptually closer to classical information retrieval techniques based on Salton's vector space model <cit.> than to the usual bibliometrical mapping techniques. In particular, it is similar to Latent Semantic Indexing or Latent Semantic Analysis. In the specific case of the Astro dataset, we extend the bibliographic record with information on cluster assignments provided by different clustering solutions. For the purpose of a delineation of topics based on clustering of articles, we reconstruct a semantic matrix for articles based on the semantic indexing of their individual entities. Secondly, based on this second matrix, we produce our own clustering solutions (detailed in <cit.>) by applying two different clustering algorithms. Third, we present an interactive visual interface called LittleAriadne that displays the context around those extracted entities. The interface responds to a search query with a network visualization of most related terms, authors, journals, citations and cluster IDs. The query can consist of words or author names, but also clustering solutions.
The displayed nodes or entities around a query term represent, to a certain extent, the context of the query in a lexical, semantic space. In what follows, we address the following research questions:
Q1 How does the Ariadne algorithm, originally developed for a large corpus which contains tens of millions of articles, work on a much smaller, field-specific dataset? How can we relate the produced contexts to domain knowledge retrieved from other information services?
Q2 Can we use LittleAriadne to compare different cluster assignments of papers, by treating those cluster assignments as additional entities? What can we learn about the topical nature of these clusters when exploring them visually?
Concerning the last question, we restrict this paper to a description of the approach LittleAriadne offers, and we provide some illustrations. A more detailed discussion of the results of this comparison has been taken up as part of the comparison paper of this special issue <cit.>, which on the whole addresses different analytic methods and visual means to compare different clustering solutions.
§ DATA
The Astro dataset used in this paper contains documents published in the period 2003–2010 in 59 astrophysical journals.[For details of the data collection and cleaning process leading to the commonly used Astro dataset see <cit.>.] Originally, these documents had been downloaded from the Web of Science in the context of a German-funded research project called “Measuring Diversity of Research,” conducted at the Humboldt-University Berlin from 2009 to 2012. Based on institutional access to the Web of Science, we worked on the same dataset. Starting with 120,007 records in total, 111,616 records of the document types Article, Letter and Proceedings Paper have been treated with different clustering methods (see the other contributions to this special issue). Different clustering solutions have been shared, and eventually a selection of solutions for the comparison has been defined. In our paper we used clustering solutions from CWTS-C5 (c) <cit.>, UMSI0 (u) <cit.>, HU-DC (hd) <cit.>, STS-RG (sr) <cit.>, ECOOM-BC13 (eb), ECOOM-NLP11 (en) (both <cit.>) and two of our own: OCLC-31 (ok) and OCLC-Louvain (ol) <cit.>. The CWTS-C5 and UMSI0 are the clustering solutions generated by two different methods, Infomap and the Smart Local Moving Algorithm (SLMA) respectively, applied on the same direct citation network of articles. The two ECOOM clustering solutions are generated by applying the Louvain method to find communities among bibliographically coupled articles, where ECOOM-NLP11 also incorporates the keywords information. The STS-RG clusters are generated by first projecting the relatively small Astro dataset to the full Scopus database. After the full Scopus articles are clustered using SLMA on the direct citation network, the cluster assignments of Astro articles are collected. The HU-DC clusters are the only overlapping clusters, generated by a memetic type algorithm designed for the extraction of overlapping, poly-hierarchical topics in the scientific literature. Each article is assigned to a HU-DC cluster with a confidence value. We only took those assignments with a confidence value higher than 0.5. More detailed accounts of these clustering solutions can be found in <cit.>. Table <ref> shows their labels later used in the interface, and how many clusters each solution produced. All the clustering solutions are based on the full dataset.
However, each article is not necessarily guaranteed to have a cluster assignment in every clustering solution (see the papers about the clustering solutions for further details). The last column in Table <ref> shows how many articles of the original dataset are covered by different solutions.
§ METHOD
§.§ Building semantic representations for entities
The Ariadne algorithm was originally developed on top of the article database ArticleFirst of OCLC <cit.>. The interface, accessible at <http://thoth.pica.nl/relate>, allows users to visually and interactively browse through 35 thousand journals, 3 million authors, and 1 million topical terms associated with 65 million articles. The Ariadne pipeline consists of two steps: an offline procedure for semantic indexing and an online interactive visualization of the context of search queries. We applied the same method to the Astro dataset and built an instantiation, named LittleAriadne, accessible at <http://thoth.pica.nl/astro/relate>. To describe our method we give an example of an article from the Astro dataset in Table <ref>. We list all the fields of this bibliographic record that we used for LittleAriadne. We include the following types of entities for semantic indexing: authors, journals (ISSN), subjects, citations, topical terms, MAI-UAT thesaurus terms and cluster IDs (see Table <ref>). For the Astro dataset, we extended the original Ariadne algorithm <cit.> by adding citations as additional entities. In the short paper about the OCLC clustering solutions <cit.> we applied clustering to different variants of the vector representation of articles, including variants with and without citations. We reported there about the effect of adding citations to vector representations of articles on clustering. In Table <ref> we display the author name (and other entities) in a syntax (indicated by square brackets) that can immediately be used in the search field of the interface. Each author name is treated as a separate entity. The next type of entity is the journal, identified by its ISSN number. One can search for a single journal using its ISSN number. In the visual interface, the ISSN numbers are replaced by the journal name, which is used as label for a journal node. The next type of entities are so-called subjects. Those subjects originate from the fields “Author Keywords” and “Keywords Plus” of the original Web of Science records. Citations, references in the article, are considered as a type of entity too. Here, we use the standardized abbreviated citations in the Web of Science database. We remark that we do not apply any form of disambiguation–neither for the author names nor for the citations. Topical terms such as “mass transfer” and “quiescence” in our example are single words or two-word phrases extracted from titles and abstracts of all documents in the dataset. A multi-lingual stop-word list was used to remove unimportant words, and mutual information was used to generate two-word phrases.
Only words and phrases which occur more often than a certain threshold value were kept. The next type of entity is a set of Unified Astronomy Thesaurus (UAT)[<http://astrothesaurus.org/>] terms which were assigned by the Data Harmony's Machine Aided Indexer (M.A.I.).[<http://www.dataharmony.com/services-view/mai/>] Please refer to <cit.> for more details about the thesaurus and the indexing procedure. The last type of entity we add to each of the articles (specific for LittleAriadne) is the collection of cluster IDs corresponding to the clusters to which the article was assigned by the various clustering algorithms. For example, the article in Table <ref> has been assigned to clusters “c 19” (produced by CWTS-C5) and “u 16” (produced by UMSI0), and so on. In other words, we treat the cluster assignments of articles as if they were classification numbers or additional subject headings. Table <ref> lists the total number of different types of entities found in the Astro dataset. To summarize, we deconstruct each bibliographic record, extract a number of entities, and add some more (the cluster IDs and the topical terms). Next, we construct for each of these entities a vector in a word space built from topical terms and subject terms. We assume that the context of all entities is captured by their vectors in this space. Figure <ref> gives a schematic representation of these vectors which form the matrix C. All types of entities – topical term, subject, author, citation, cluster ID and journal – form the rows of the matrix, and their components (all topical terms and subjects) the columns. The values of the vector components are the frequencies of the co-occurrence of an entity and a specific word in the whole dataset. That is, we count how many articles contain both an entity and a certain topical term or subject. Matrix C expresses the semantics of all entities in terms of their context. Such context is then used in a computation of their similarity/relatedness. Each vector can be seen as the lexical profile of a particular entity. A high cosine similarity value between two entities indicates a large overlap of the contexts of these two entities – in other words, a high similarity between them. This is different from measuring their direct co-occurrence. For LittleAriadne, the matrix C has roughly 546K × 102K elements, and is sparse and expensive for computation. To make the algorithm scale and to produce a responsive online visual interface, we applied the method of Random Projection <cit.> to reduce the dimensionality of the matrix. As shown in Figure <ref>, we multiply C with a 102K × 600 matrix of randomly distributed –1 and 1, with equal probabilities.[More efficient random projections are available. This version is more conservative and also computationally easier.] This way, the original 546K × 102K matrix C is reduced to a Semantic Matrix C' of the size of 546K × 600. Still, each row vector represents the semantics of an entity. It has been discussed elsewhere <cit.> that with the method of Random Projection, similar to other dimension reduction methods, essential properties of the original vector space are preserved, and thus entities with a similar profile in the high-dimensional space still have a similar profile in the reduced space. A big advantage of Random Projection is that the computation is significantly less expensive than other methods, e.g., Principal Component Analysis <cit.>.
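To illustrate the reduction step, the following is a minimal sketch (our illustration, not the authors' production code), assuming NumPy and toy dimensions; the real matrix C is sparse and roughly 546K × 102K.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the entity-term co-occurrence matrix C.
n_entities, n_terms, k = 5000, 2000, 600
C = rng.poisson(0.05, size=(n_entities, n_terms)).astype(float)

# Random projection matrix: entries -1 or +1, each with probability 1/2.
R = rng.choice([-1.0, 1.0], size=(n_terms, k))

# Reduced semantic matrix C' (one 600-dimensional row per entity).
C_prime = C @ R

# Cosine similarities between rows are approximately preserved.
def cosine(u, v):
    return float(u @ v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)

print(cosine(C[0], C[1]), "~", cosine(C_prime[0], C_prime[1]))
```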
Actually, Random Projection is often suggested as a way of speeding up Latent Semantic Indexing (LSI) <cit.>, and Ariadne is similar to LSI in some ways. LSI starts from a weighted term-document matrix, where each row represents the lexical profile of a document in a word space. In Ariadne, however, the unit of analysis is not the document. Instead, each entity of the bibliographic record is subject to a lexical profile. We explain in the next section that, by aggregating over all entities belonging to one article, one can construct a vector representation for the article that represents its semantics and is suitable for further clustering processes (for more details please consult <cit.>). With the matrix C', the interactive visual interface dynamically computes the most related entities (i.e., ranked by cosine similarity) to a search query. After irrelevant entities have been filtered out by removing entities with a high Mahalanobis distance <cit.> to the query, the remaining entities and the query node are positioned in 2D so that the distance between nodes preserves the corresponding distance in the high dimensional space as much as possible. We use a spring-like force-directed graph drawing algorithm for the positioning of the nodes. Designed as an experimental, explorative tool, no other optimisation of the network layout is applied. In the on-line interface, it is possible to zoom into the visualization, to change the size of the labels (font slider) as well as the number of entities displayed (show slider). For the figures in the paper, we used snapshots, in which node labels might overlap. Therefore, we provide links to the corresponding interactive display for each of the figures. In the end, with its most related entities, the context of a query term can be effectively presented <cit.>. For LittleAriadne we extended the usual Ariadne interface with different lists of the most related entities, organized by type. This information is given below the network visualization.
§.§ From a semantic matrix of entities to a semantic matrix for articles
The Ariadne interface provides context around entities, but does not produce article clusters directly. In other words, articles contribute to the context of entities associated with them, but their own semantics needs to be reconstructed before we can apply clustering methods to identify article clusters. We describe the OCLC clustering workflow elsewhere <cit.>, but here we would like to explain the preparatory work for it. The first step is to create a vector representation of each article. For each article, we look up all entities associated with this article in the Semantic Matrix C'. We purposefully leave out the cluster IDs, because we want to construct our own clustering later independently, i.e., without already including information about clustering solutions of other teams. For each article we obtain a set of vectors. For our article example in Table <ref> we have 55 entities. The set of vectors for this article entails one vector representing the single author of this article, 12 vectors for the subjects, one vector for the journal, 21 vectors for the citations and 20 vectors for topical terms. Each article is represented by a unique set of vectors. The size of the set can vary, but each of the vectors inside a set has the same length, namely 600.
For each article we compute the weighted average of its constituent vectors as its semantic representation. Each entity is weighted by its inverse document frequency to the third power; therefore, frequent entities are heavily penalized and have little contribution to the resulting representation of the article. In the end, each article is represented by a vector of 600 dimensions which becomes a row in a new matrix M with the size of 111,616 × 600. Note that since articles are represented as vectors in the same space where other entities are also represented, it is now possible to compute the relatedness between entities and articles! Therefore in the online interface, we can present the articles most related to a query. To group these 111,616 articles into meaningful clusters, we apply standard clustering methods to M. A first choice, the K-Means clustering algorithm, results in 31 clusters. As detailed in <cit.>, with k=31, the resulting 31 clusters perform the best according to a pseudo-ground-truth built from the consensus of CWTS-C5, UMSI0, STS-RG and ECOOM-BC13. With this clustering solution the whole dataset is partitioned pretty evenly: the average size is 3600 ± 1371, and the largest cluster contains 6292 articles and the smallest 1627 articles. We also apply a network-based clustering method: the Louvain community detection algorithm. To avoid high computational cost, we first calculate for each article the top 40 most related articles, i.e., those with the highest cosine similarity. This results in a new adjacency matrix M' between articles, representing an article similarity network where the nodes are articles and the links indicate that the connected articles are very similar. We set the threshold for the cosine similarity at 0.6 to reduce links with low similarity values. A standard Louvain community detection algorithm <cit.> is applied to this network, producing 32 partitions, i.e., 32 clusters. Compared to the 31 K-Means clusters, these 32 Louvain clusters vary more in terms of cluster size, with the largest cluster containing 9464 articles while the smallest contains 86 articles. The Normalized Mutual Information <cit.> between these two solutions is 0.68, indicating that they are highly similar to each other yet different enough to be studied further. More details can be found in <cit.>.
§ EXPERIMENTS AND RESULTS
To answer the two research questions listed in the introduction, we conducted the following experiments: Experiment 1. We implemented LittleAriadne as an information retrieval tool. We searched with query terms, inspected and navigated through the resulting network visualization. Experiment 2. We visually observed and compared different clustering solutions.
§.§ Experiment 1 – Navigate through networked information
We implemented LittleAriadne, which allows users to browse the context of the 546K entities associated with 111K articles in the datasets. If the search query refers to an entity that exists in the semantic matrix, LittleAriadne will return, by default, the top 40 most related entities, which could be topical terms, authors, subjects, citations or clusters. If there are multiple known entities in the search query, a weighted average of the vectors of the individual entities is used to calculate similarities (the same way an article vector is constructed). If the search query does not contain any known entities, a blank page is returned, as there is no information about this query.
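Before walking through the examples, here is a hedged sketch of how such a query could be resolved against the rows of C'. The function and variable names are our own, and the plain average is a simplification (the interface actually uses a weighted average, as described above):

```python
import numpy as np

def resolve_query(query_entities, C_prime, entity_index, top=40):
    """Return the row indices of the `top` entities most related to the query.

    `entity_index` maps entity labels, e.g. '[cluster:ok 21]', to row numbers
    of the semantic matrix C_prime. Unknown labels are ignored; if no label
    is known, an empty result is returned (the 'blank page' case).
    """
    rows = [entity_index[e] for e in query_entities if e in entity_index]
    if not rows:
        return []
    q = C_prime[rows].mean(axis=0)          # combine multi-entity queries
    norms = np.linalg.norm(C_prime, axis=1) * np.linalg.norm(q) + 1e-12
    sims = (C_prime @ q) / norms            # cosine similarity to the query
    return list(np.argsort(-sims)[:top])    # most related entities first
```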
Figure <ref> gives a contextual view of “gamma ray.”[Available at <http://thoth.pica.nl/astro/relate?input=gamma+ray>] The search query refers to a known topical term “gamma ray,” and it is therefore displayed as a red node in the network visualization. The top 40 most related entities are shown as nodes, with the top 5 connected by the red links. The different colours reflect their types, e.g., topical terms, subjects, authors, or clusters. Each of these 40 entities is further connected to its top 5 most related entities among the rest of the entities in the visualization, with the condition that the cosine similarity is not below 0.6. A thicker link means the two linked entities are mutually related, i.e., they are among each other's top 5 list. The colour of the link takes that of the node where the link is originated. If the link is mutual and two linked entities are of different types, one of the entity colours is chosen. The displayed entities often automatically form groups depending on their relatedness to each other, whereby more related entities are positioned closer to each other. Each group potentially represents a different aspect related to the query term. The size of a node is proportional to the logarithm of its frequency of occurrences in the whole dataset. The absolute number of occurrences appears when hovering the mouse cursor over the node. Due to the fact that different statistical methods are at the core of the Ariadne algorithm, this number gives an indication of the reliability of the suggested position and links. In Figure <ref>, there are four clusters from OCLC-31, ECOOM-BC13 and ECOOM-NLP11, and CWTS. The ECOOM-BC13 cluster eb 8 and ECOOM-NLP11 cluster en 4 are directly linked to “gamma ray,” suggesting that these two clusters are probably about gamma rays. It is not surprising that they are very close to each other, because they contain 7560 and 5720 articles respectively but share 3603 articles. At the lower part, the OCLC-31 cluster ok 21 and the CWTS cluster c 15 are also pretty close to our search term. They contain 1849 and 3182 articles respectively and share 1721 articles in common, which makes them close to each other in the visualization. By looking at the topical terms and subjects around these clusters, we can have a rough idea of their differences. Although they are all about “gamma ray,” Clusters eb 8 and en 4 are probably more about “radiation mechanisms,” “very high energy,” and “observations,” while Clusters ok 21 and c 15 seem to focus more on “afterglows,” “prompt emission,” and “fireball.” Such observations will invite users to explore these clusters or subjects further. Each node is clickable, which leads to another visualization of the context of this selected node. If one is interested in cluster ok 21 for instance, after clicking the node, a contextual view of cluster ok 21 is presented,[Available at <http://thoth.pica.nl/astro/relate?input=[cluster:ok%2021]>. ] as shown in Figure <ref>. This context view provides a good indication about the content of the articles grouped together in this cluster. In the context view of cluster ok 21 we see again the cluster c 15, which was already near to ok 21 in the context view of “gamma ray.” But the two ECOOM clusters, eb 8 and en 4, that are also in the context of “gamma ray” are not visible any more. Instead, we find two more similar clusters u 11 and ol 9.
That means that, even though the clusters ok 21 and eb 8 are among the top 40 entities that are related to “gamma ray,” they are still different in terms of their content. This can be confirmed by looking at their labels in Table <ref>.[More details about cluster labelling can be found in <cit.>.] As mentioned before, in the interface one can also further refine the display. For instance, one can choose the number of nodes to be shown or decide to limit the display to only authors, journals, topical terms, subjects, citations or clusters. The former can be done by the slider show or by editing the URL string directly. For the latter options, tick boxes are given. An additional slider font allows experimenting with the font size of the labels. A display with only one type of entity enables us to see context filtered along one perspective (lexical, journals, authors, subjects), and is often useful. For example, Figure <ref>[Available at <http://thoth.pica.nl/astro/relate?input=%5Bsubject%3Ahubble+diagram%5D type=2>] shows at least three separate groups of authors who are most related to “subject:hubble diagram.” At any point of exploration, one can see the most related entities, grouped by their types and listed at the bottom of the interface. The first category shown are the related titles, the titles of the articles most relevant to a search query. Due to license restrictions, we cannot make the whole bibliography available. But when clicking on a title, one actually sees the context of a certain article. Not only titles can be clicked through; all entities at the lower part are also clickable, and such an action leads to another contextual view of the selected entity. At the top of the interface, under the search box, we find further hyperlinks behind the labels exact search and context search. Clicking on the hyperlinks automatically sends queries to other information spaces such as Google, Google Scholar, Wikipedia, and WorldCat. For exact search, the same query text is used. For context search, the system generates a selection among all topical terms related to the original query term and sends this selection as a string of terms (with the Boolean AND operation) to those information spaces behind the hyperlinks. This option offers users a potential way to retrieve related literature or web resources from a broader perspective. In turn, it also enables the user to better understand the entity-based context view provided by Ariadne. Let us now come back to our first research question: how does the Ariadne algorithm work on a much smaller, field-specific dataset? The interface shows that the original Ariadne algorithm works well on the small Astro dataset. Not surprisingly, compared with our exploration in the much bigger and more general ArticleFirst dataset, we find more consistent representations; that is, specific vocabulary is displayed, which can be cross-checked in Wikipedia, Google or Google Scholar. On the other hand, different corpora introduce different contexts for entities. For example, “young” in ArticleFirst[Available at <http://thoth.pica.nl/relate?input=young>] is associated with adults and 30 years old, while in LittleAriadne it is immediately related to young stars which are merely 5 or 10 million years old.[Available at <http://thoth.pica.nl/astro/relate?input=young>] Also, the bigger number of topical terms in the larger database leads to a situation where almost every query term produces a response.
In LittleAriadne, searches for, e.g., a writer such as Jane Austen retrieve nothing. Not surprisingly, for domain-specific entities, LittleAriadne tends to provide more accurate context. A more thorough evaluation needs to be based, as for any other topical mapping, on a discussion with domain experts.
§.§ Experiment 2 – Comparing clustering solutions
In LittleAriadne we extended the interface with the goal of observing and comparing clustering solutions visually. As discussed in Section <ref>, cluster assignments are treated in the same way as other entities associated with articles, such as topical terms, authors, etc. Each cluster ID is therefore represented in the same space and visualized in the same way. In the interface, when we use a search term, for example “[cluster:c]”, and tick the “scan” option, the interface scans all the entities in the semantic matrix which start with, in this case, “cluster:c,” and then effectively selects and visualizes all CWTS-C5 clusters.[This scan option is applicable to any other type of entities, for example, to see all subjects which start with “quantum” by using “subject:quantum” as the search term and do the scanning.] This way, we can easily see the distribution of a single clustering solution. Note that in this scanning visualization, any cluster which contains fewer than 100 articles is not shown. Figure <ref> shows the individual distribution of clusters from all eight clustering solutions. When two clusters have a relatively high mutual similarity, there is a link between them. It is not surprising to see the HU-DC clusters are highly connected, as they are overlapping and form a poly-hierarchy. Compared to CWTS-C5, UMSI and the two ECOOM solutions, the STS-RG and the two OCLC solutions have more cluster-cluster links. This suggests that these clusters overlap more in terms of their direct vocabularies and indirect vocabularies associated with their authors, journals and citations. If we scan two or more cluster entities, such as “[cluster:c][cluster:ok],” we put two clustering solutions on the same visualization so that they can be compared visually. In Figure <ref> (a) we see the high similarity between clusters from CWTS-C5 and those from OCLC-31.[Available at <http://thoth.pica.nl/astro/relate?input=%5Bcluster%3Ac%5D%5Bcluster%3Aok%5D type=S show=500>] CWTS-C5 has 22 clusters while OCLC-31 has 31 clusters. Each CWTS-C5 cluster is accompanied by one or more OCLC clusters. This indicates that they are different, probably because of the granularity aspect instead of any fundamental issue.
Figure <ref> (b) shows two other sets of clusters that partially agree with each other but clearly have different capacities in identifying different clusters.[Available at <http://thoth.pica.nl/astro/relate?input=%5Bcluster%3Au%5D%5Bcluster%3Asr%5D type=S show=500>] Figure <ref> (a) shows all the cluster entities from all eight clustering solutions.[Available at <http://thoth.pica.nl/astro/relate?input=%5Bcluster%3Ac%5D%5Bcluster%3Au%5D%5Bcluster%3Aok%5D%5Bcluster%3Aol%5D%5Bcluster%3Aeb%5D%5Bcluster%3Aen%5D%5Bcluster%3Asr%5D%5Bcluster%3Ahd%5D type=S show=500>] The STS and HU solutions have hundreds of clusters, which makes the visualization pretty cluttered. Figure <ref> (b) shows only the solutions from CWTS, UMSI, OCLC and ECOOM, whose numbers of clusters are comparable.[Available at <http://thoth.pica.nl/astro/relate?input=%5Bcluster%3Ac%5D%5Bcluster%3Au%5D%5Bcluster%3Aok%5D%5Bcluster%3Aol%5D%5Bcluster%3Aeb%5D%5Bcluster%3Aen%5D type=S show=500>] Concerning our second research question - can we use LittleAriadne to compare clustering solutions visually? - we can give a positive answer. But it is not easy to see from LittleAriadne why some clusters are similar and others are not. The visualization functions as a macroscope <cit.> and provides a general overview of all the clustering solutions, which helps to guide further investigation. It is not conclusive, but a useful heuristic device. For example, from Figure <ref>, especially <ref> (b), it is clear that there are “clusters of clusters.” That is, some clusters are detected by all of these different methods. In the future we may investigate these clusters of clusters more closely and perhaps discover that different solutions identify some of the same topics. We continue the discussion of the use of visual analytics to compare clustering solutions in the paper by Velden et al. <cit.>.
§ CONCLUSION
We present a method implemented in an interface that allows browsing through the context of entities, such as topical terms, authors, journals, subjects and citations associated with a set of articles. With the LittleAriadne interface, one can navigate visually and interactively through the context of entities in the dataset by seamlessly travelling between authors, journals, topical terms, subjects, citations and cluster IDs, as well as consult external open information spaces for further contextualization. In this paper we particularly explored the usefulness of the method for the problem of topic delineation addressed in this special issue. LittleAriadne treats cluster assignments from different solutions as additional special entities. This way we provide the contextual view of clusters as well. This is beneficial for users who are interested in travelling seamlessly between different types of entities and their related cluster assignments generated by different solutions. We also contributed two clustering solutions built on the vector representation of articles, which is different from solutions provided by other methods. We start by including references and treating them as entities with a certain lexical or semantic profile. In essence, we start from a multipartite network of papers, cited sources, terms, authors, subjects, etc. and focus on similarity in a high dimensional space. Our clusters are comparable to other solutions yet have their own characteristics. Please see <cit.> for more details. We demonstrated that we can use LittleAriadne to compare different clustering solutions visually and generate a wider overview.
This has the potential to be complementary to any other method of cluster comparison. We hope that this interactive tool supports discussion about different clustering algorithms and helps to find the right meaning of clusters. We have plans to further develop the Ariadne algorithm. The Ariadne algorithm is general enough to incorporate additional types of entities into the semantic matrix. Which entities we can add very much depends on the information in the original dataset or database. In the future, we plan to add publishers, conferences, etc. with the aim of providing a richer contextualization of entities typically found in a scholarly publication. We also plan to elaborate links to articles that contribute to the contextual visualization, thus strengthening the usefulness of Ariadne not only for the associative exploration of contexts similar to scrolling through a systematic catalogue, but also as a direct tool for document retrieval. In this context we plan to further compare LittleAriadne and Ariadne. As mentioned before, the corpora matter when talking about the context of entities. The advantage of LittleAriadne is the confinement of the dataset to one scientific discipline or field and topics within. We hope by continuing such experiments also to learn more about the relationship between genericity and specificity of contexts, and how that can be best addressed in information retrieval.
§ ACKNOWLEDGEMENT
Part of this work has been funded by the COST Action TD1210 Knowescape, and the FP7 Project ImpactEV. We would like to thank the internal reviewers Frank Havemann, Bart Thijs as well as the anonymous external referees for their valuable comments and suggestions. We would also like to thank Jochen Gläser, William Harvey and Jean Godby for comments on the text.
http://arxiv.org/abs/1702.08210v1
{ "authors": [ "Rob Koopman", "Shenghui Wang", "Andrea Scharnhorst" ], "categories": [ "cs.DL" ], "primary_category": "cs.DL", "published": "20170227100108", "title": "Contextualization of topics: Browsing through the universe of bibliographic information" }
Spatially Aware Melanoma Segmentation Using Hybrid Deep Learning Techniques M. Attia^⋆, M. Hossny^⋆, S. Nahavandi^⋆ and A. Yazdabadi^† ^⋆ Institute for Intelligent Systems Research and Innovation, Deakin University ^† School of Medicine, Deakin University ======================================================================================================================================================================================== In this paper, we propose using a hybrid method that utilises deep convolutional and recurrent neural networks for accurate delineation of skin lesions in the images supplied with the ISBI 2017 lesion segmentation challenge. The proposed method was trained using 1800 images and tested on 150 images from the ISBI 2017 challenge.
§ INTRODUCTION
Melanoma is one of the deadliest types of cancer, affecting a large sector of the population in the United States and Australia. It was responsible for more than 10,000 deaths in 2016. Clinicians diagnose melanoma by visual inspection of skin lesions and moles <cit.>. In this work, we propose a novel approach to segment lesions using deep neural networks. We compared our results to the popular deep learning semantic segmentation convolutional neural networks FCN <cit.> and SegNet <cit.>. This approach will be presented at the International Symposium on Biomedical Imaging 2017. The rest of this paper is organised as follows. Section 2 describes the related work. The proposed method is presented in Section 3. Section 4 presents results and, finally, Section 5 concludes.
§ RELATED WORK
Traditional intensity-based segmentations achieved high accuracies. However, low contrast images with high variance uni-modal histograms resulted in inaccurate delineation of borders. Most of these inaccuracies were corrected with post-processing of the images <cit.>. Deep convolutional neural networks (CNNs) with auto encoder-decoder architectures achieved great results in semantic segmentation <cit.>. Upsampling methods were proposed to recover the lost spatial resolution <cit.>. Ronneberger et al. concatenated a copy of the encoded feature map during the decoding phase to increase the spatial accuracy of the output feature maps <cit.>. Zheng et al. proposed a trainable conditional random field (CRF) module to refine the segmentation prediction map <cit.>. Visin et al. proposed a recurrent neural network (RNN) as a post-processing module for the coarsely extracted feature maps <cit.>.
§ PROPOSED HYBRID DEEP ARCHITECTURE
The main drawback of semantic segmentation with fully convolutional neural networks (FCN <cit.> and SegNet <cit.>) is over-segmentation due to the coarse output of the max-pooling layers during the encoding phase. In order to address this problem, we propose to use recurrent neural networks to learn the spatial dependencies between active neurones after the max-pool encoding <cit.>. The RNN layers are fed with flattened non-overlapping data patches to model spatial dependencies. Let D be the input data such that D∈ℝ^w×h×c, where w, h and c are width, height and channels, respectively. D is split into n×m patches P_i,j such that P_i,j∈ℝ^w_p×h_p×c, where w_p=w/n and h_p=h/m. Input patches are flattened into 1-D vectors to update the hidden state z^*_i,j, where * denotes the sweep direction ↑, ↓, → or ←.
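As an aside, a minimal NumPy sketch of the patch splitting and flattening just described (our illustration; the concrete sizes are assumptions, not taken from the paper):

```python
import numpy as np

w, h, c = 128, 128, 64     # assumed size of the encoded feature map
n, m = 8, 8                # patch grid
wp, hp = w // n, h // m    # patch size, so w_p = w/n and h_p = h/m

D = np.random.rand(w, h, c)

# Non-overlapping patches P_{i,j}, each flattened to a 1-D vector: these
# are the sequence elements fed to the directional RNN sweeps.
patches = [[D[i*wp:(i+1)*wp, j*hp:(j+1)*hp, :].reshape(-1)
            for j in range(m)] for i in range(n)]
print(len(patches), len(patches[0]), patches[0][0].shape)  # 8 8 (16384,)
```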
For every patch P_i,j, the composite activation feature map O={o^*_i,j}, i=1,…,n, j=1,…,m, is the concatenation of the output activations of two coupled directional RNNs, sweeping either horizontally (right to left and left to right) or vertically (up to down and down to up), where o^*_i,j∈ℝ^2U, for *∈{(↑,↓),(→,←)}, is the activation of the recurrent unit at position (i,j) with respect to all patches in column j in the case of the coupled vertical sweep {(↓,↑)}, and with respect to all patches in row i in the case of the coupled horizontal sweep {(→,←)}; O^↕ denotes the concatenated output of o^↓ and o^↑, similarly O^↔ that of o^← and o^→, and U is the number of recurrent units. Similarly, o^↓_i,j and the coupled horizontal sweep function can be defined. It is worth noting that both directions are computed independently. Finally, in the decoding stage, the features deeply encoded by the sequenced recurrent units are used to reconstruct the segmentation mask at the same resolution as the input. Fractionally strided convolutions were used in the reconstruction of the final output. In strided convolutions, predictions are calculated as the inner product between the flattened input and a sparse matrix, whose non-zero elements are elements of the convolutional kernel. This method is both computationally and memory efficient, supporting joint training of convolutional and recurrent neural networks <cit.>.
§ RESULTS
The proposed network was trained using 1800 lesion images provided along with ground truth. These images were provided for the first task of the ISBI 2017 challenge “Skin Lesion Analysis Toward Melanoma Detection” <cit.>. The performance of the proposed method is compared to other methods using pixel-wise metrics: Jaccard index, accuracy, sensitivity, specificity and Dice coefficient. The results shown in Fig. <ref> demonstrate the efficacy of the proposed method compared to the classical SegNet <cit.>. These results were obtained on the ISBI training dataset released in January 2017. The results tabulated in Table <ref> will be presented at ISBI 2017 <cit.>. Figure <ref> and Figure <ref> show samples of the output masks; the ground truth has not been published yet.
§ CONCLUSION
We utilised a joint architecture that incorporates both deep convolutional and recurrent neural networks for skin lesion segmentation. The results showed great potential, outperforming state-of-the-art segmentation methods on the skin melanoma delineation problem. Also, the method is immune, with high sensitivity, to artifacts such as markers, ruler marks, and hair occlusions.
§ ACKNOWLEDGEMENT
This research was fully supported by the Institute for Intelligent Systems Research and Innovation (IISRI).
http://arxiv.org/abs/1702.07963v1
{ "authors": [ "M. Attia", "M. Hossny", "S. Nahavandi", "A. Yazdabadi" ], "categories": [ "cs.CV" ], "primary_category": "cs.CV", "published": "20170226005625", "title": "Spatially Aware Melanoma Segmentation Using Hybrid Deep Learning Techniques" }
http://arxiv.org/abs/1702.08179v2
{ "authors": [ "Matania Ben-Artzi", "Guy Katriel" ], "categories": [ "math.NA", "34L16, 34B24, 41A15, 65L10" ], "primary_category": "math.NA", "published": "20170227081549", "title": "Spline functions, the discrete biharmonic operator and approximate eigenvalues" }
Carlo Brunetta (Department of Mathematics, University of Trento, Via Sommarive 14, 38100 Povo (Trento), Italy) brunetta@unitn.it
Marco Calderini (Department of Informatics, University of Bergen, Thormøhlensgate 55, 5008 Bergen, Norway; corresponding author) marco.calderini@uib.no
Massimiliano Sala (Department of Mathematics, University of Trento, Via Sommarive 14, 38100 Povo (Trento), Italy) maxsalacodes@gmail.com
Sometimes it is possible to embed an algebraic trapdoor into a block cipher. Building on previous research, in this paper we investigate an especially dangerous algebraic structure, which is called a hidden sum and which is related to some regular subgroups of the affine group. Mixing group theory arguments and cryptographic tools, we pass from characterizing our hidden sums to designing an efficient algorithm to perform the necessary preprocessing for the exploitation of the trapdoor. Keywords: hidden sums, trapdoor, mixing layer, cryptography, block ciphers.
§ INTRODUCTION
Sometimes it is possible to embed an algebraic trapdoor into a block cipher. Building on previous research, in this paper we investigate an algebraic structure, which is called a hidden sum and which is related to some regular subgroups of the affine group. To be more precise, in <cit.>, the authors study some elementary abelian regular subgroups of the affine general linear group acting on a space V=(𝔽_2)^N, in order to construct a trapdoor for a class of block ciphers. These subgroups induce alternative operations ∘ on V, such that (V,∘) is a vector space over 𝔽_2. In <cit.>, it is shown that for a class of these operations, which we will call practical hidden sums, it is possible to represent the elements with respect to a fixed basis of (V,∘) in polynomial time. Moreover, an estimate on the number of these operations is given. Using this class of hidden sums, the authors provide an attack, which works in polynomial time, on ciphers that are vulnerable to this kind of trapdoor.
In this article we continue the analysis started in <cit.>. In Section 3 we give a lower bound on the number of practical hidden sums, comparing also this value with a previous upper bound. From these bounds it is obvious that it is not feasible to generate all possible practical hidden sums, due to the large number of these even in small dimensions, e.g. for N=6 we have ∼2^23 practical hidden sums. In Section 4 we study the problem of determining the possible maps which are linear with respect to a given practical hidden sum. More precisely, we provide an algorithm that takes as input a given linear map λ (with respect to the usual XOR on V) and returns some operations ∘, which can be defined over V, that are different from the XOR and for which the map λ is linear, i.e. λ(x∘y)=λ(x)∘λ(y) for all x, y in V. Our aim is to individuate a family of hidden sums that can weaken the components of a given cipher, or to design a cipher containing the trapdoor based on hidden sums.
In the last section we apply the procedure given in Section 4 to the mixing layer of the cipher PRESENT <cit.>, yielding a set of hidden sums which linearize this permutation matrix and might in principle be used to attack the cipher.
§ PRELIMINARIES AND MOTIVATIONS
We write 𝔽_q to denote the finite field of q elements, where q is a prime power, and (𝔽_q)^s×t to denote the set of all matrices with entries over 𝔽_q with s rows and t columns. The identity matrix of size s is denoted by I_s. We use e_i=(0,…,0,1,0,…,0)∈(𝔽_q)^N to denote the unit vector, which has a 1 in the ith position, and zeros elsewhere. Let m≥1; the vector (sub)space generated by the vectors e_1,…,e_m is denoted by Span{e_1,…,e_m}. Let V=(𝔽_q)^N; we denote respectively by Sym(V), Alt(V) the symmetric and the alternating group acting on V. We will denote the translation with respect to a vector v∈V by σ_v: x↦x+v, and T(V,+) will denote the translations on the vector space (V,+), that is, T(V,+)={σ_v | v∈V}. By AGL(V,+) and GL(V,+) we denote the affine and linear groups of V. We write ⟨g_1,…,g_m⟩ for the group generated by g_1,…,g_m in Sym(V). The map 1_V will denote the identity map on V.
Let G be a finite group acting on V. We write the action of a permutation g∈G on a vector v∈V as vg. We recall that a permutation group G acts regularly (or is regular) on V if for any pair v,w∈V there exists a unique map g in G such that vg=w. Moreover, an elementary abelian group (or elementary abelian p-group) is an abelian group such that any nontrivial element has order p. In particular, the group of translations acting on V, T(V,+), is a regular elementary abelian group.
§.§ Block ciphers and hidden sums
Most modern block ciphers are iterated ciphers, i.e. they are obtained by the composition of a finite number ℓ of rounds. Several classes of iterated block ciphers have been proposed, e.g. substitution permutation network <cit.> and key-alternating block cipher <cit.>. Here we present one more recent definition <cit.> that determines a class large enough to include some common ciphers (AES <cit.>, SERPENT <cit.>, PRESENT <cit.>), but with enough algebraic structure to allow for security proofs in the context of symmetric cryptography, in particular about properties of the groups related to the ciphers.
Let V=(𝔽_2)^N with N=mb and b≥2. The vector space V is a direct sum V=V_1⊕…⊕V_b, where each V_i has the same dimension m (over 𝔽_2). For any v∈V, we will write v=v_1⊕…⊕v_b, where v_i∈V_i. Any γ∈Sym(V) that acts as vγ=v_1γ_1⊕…⊕v_bγ_b, for some γ_i's in Sym(V_i), is a bricklayer transformation (a “parallel map”) and any γ_i is a brick. Traditionally, the maps γ_i's are called S-boxes and γ a “parallel S-box”. A linear map λ: V→V is traditionally said to be a “Mixing Layer” when used in composition with parallel maps. For any I⊂{1,…,b}, with I≠∅,{1,…,b}, we say that ⊕_i∈I V_i is a wall. A linear map λ∈GL(V,+) is a proper mixing layer if no wall is invariant under λ.
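To make the notion of a proper mixing layer concrete, here is a small sketch of ours (assuming λ is given as an N×N binary matrix acting on row vectors, N=bm) that tests wall invariance by brute force over the bricks:

```python
import itertools
import numpy as np

def wall_invariant(lam, b, m, bricks):
    """True iff the wall spanned by the bricks in `bricks` is invariant
    under the mixing layer `lam` (an N x N 0/1 matrix over GF(2), N = b*m,
    acting on row vectors as v -> v @ lam mod 2)."""
    inside = [i * m + j for i in bricks for j in range(m)]
    outside = [k for k in range(b * m) if k not in inside]
    # Invariance: the image of every basis vector of the wall (a row of
    # lam) has no component outside the wall.
    return all(not lam[k, outside].any() for k in inside)

def is_proper(lam, b, m):
    """lam is proper iff no wall (nonempty proper subset of bricks) is invariant."""
    for r in range(1, b):
        for bricks in itertools.combinations(range(b), r):
            if wall_invariant(lam, b, m, bricks):
                return False
    return True

lam = np.eye(4, dtype=int)       # identity: every wall is invariant
print(is_proper(lam, b=2, m=2))  # False
```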
We can characterize the translation-based class by the following:
A block cipher 𝒞={φ_k | k∈𝒦}⊂Sym(V) over 𝔽_2, where 𝒦 is the set containing all the session keys and the φ_k are keyed permutations, is called translation based (tb) if:
* it is the composition of a finite number ℓ of rounds, such that any round ρ_k,h can be written[we drop the round indices] as γλσ_k̄, where
- γ is a round-dependent bricklayer transformation (but it does not depend on k),
- λ is a round-dependent linear map (but it does not depend on k),
- k̄ is in V and depends on both k and the round (k̄ is called a “round key”),
* for at least one round, which we call proper, we have (at the same time) that λ is proper and that the map 𝒦→V given by k↦k̄ is surjective.
For a tb cipher it is possible to define the following groups. For each round h
Γ_h(𝒞)=⟨ρ_k,h | k∈𝒦⟩⊆Sym(V),
and the round function group is given by
Γ_∞(𝒞)=⟨Γ_h(𝒞) | h=1,…,ℓ⟩.
An interesting problem is determining the properties of the permutation group Γ_∞(𝒞)=Γ_∞ that imply weaknesses of the cipher. A trapdoor (sometimes called a backdoor, see <cit.>) is a hidden structure of the cipher, whose knowledge allows an attacker to obtain information on the key or to decrypt certain ciphertexts. The first paper dealing with properties of Γ_∞ was published by Paterson <cit.>, who showed that if this group is imprimitive, then it is possible to embed a trapdoor into the cipher. On the other hand, if the group is primitive no such trapdoor can be inserted. Other works, dealing with security properties of ciphers related to group theory, also study whether the group generated by the round functions is large <cit.>. However, as shown in <cit.>, even if a given set of round functions generates a large permutation group, it might be possible to approximate these round functions by another set of round functions which generates a small group. Similarly, the primitivity of Γ_∞ does not guarantee the absence of other types of trapdoors based on the group structure of Γ_∞. For example, if the group is contained in AGL(V,+) (which is a primitive group), the encryption function is affine, and once we know the image of a basis of V and the image of the zero vector, then we are able to reconstruct the matrix and the translation that compose the map. This motivated the authors of <cit.> to study different abelian operations, which can be individuated on V in order to embed a trapdoor into a block cipher. The authors in <cit.> called these operations hidden sums. In the remainder of this section, we summarize the theory presented in <cit.>.
If T⊂Sym(V) is an elementary abelian regular group acting on V, then there exists a vector space structure (V,∘) such that T is the related translation group. In fact, since T is regular the elements of the group can be labelled T={τ_a | a∈V}, where τ_a is the unique map in T such that 0↦a. Then, the sum between two elements is defined by a∘b := aτ_b. Clearly, (V,∘) is an abelian additive group and thus a vector space over 𝔽_2. Conversely, if (V,∘) is a vector space over 𝔽_2, then its translation group, given by the maps τ_a: x↦x∘a, is an elementary abelian group acting regularly on V. In the following, with the symbol + we refer to the usual sum over the vector space V. We denote by T_+=T(V,+), AGL(V,+) and GL(V,+), respectively, the translation, affine and linear groups w.r.t. +.
We use T_∘, AGL(V,∘) and GL(V,∘) to denote, respectively, the translation, affine and linear groups corresponding to a hidden sum ∘. That is, T_∘={τ_a | a∈V} where τ_a: x↦x∘a; GL(V,∘) is the group of the maps λ such that (x∘y)λ=xλ∘yλ for all x,y∈V; and any map in AGL(V,∘) is the composition of a map λ∈GL(V,∘) and a map τ_a∈T_∘. Since we will focus on a particular class of these operations ∘, we call a hidden sum any vector space structure (V,∘), different from the usual (V,+), such that T_+⊆AGL(V,∘), while a practical hidden sum is a hidden sum such that T_∘ is also contained in AGL(V,+).
From <cit.> we have three interesting problems:
* Determine the operations ∘ (equivalently the translation groups T_∘) such that T_+⊆AGL(V,∘).
* Determine the operations ∘ such that T_∘⊆AGL(V,+) (i.e. those which are practical hidden sums).
* Given a parallel S-box γ and a mixing layer λ, determine the operations ∘ such that γ,λ∈AGL(V,∘) or γλ∈AGL(V,∘).
Problem 1 is related to identifying the class of hidden sums that could potentially contain Γ_∞(𝒞), of a given cipher 𝒞, in the group AGL(V,∘), since T_+⊆Γ_∞(𝒞); or at least to individuating hidden sums for which the XOR with a round key is affine with respect to the operation ∘. Operations with the characteristic given in Problem 2 and that of Problem 1 permit representing an element in (V,∘) efficiently, as we will see in Algorithm <ref>. The last problem permits to understand if a given block cipher could be modified to introduce the algebraic structure of a hidden sum. The following vector space plays an important role for studying these problems. Let T be any subgroup of the affine group; we can define the vector space
U(T)={v∈V | σ_v∈T}.
In <cit.> the authors proved:
Let V=(𝔽_2)^N, dim(V)=N. Let T⊆AGL(V,+) be an elementary abelian regular subgroup. If T≠T_+, then 1≤dim(U(T))≤N-2.
A characterization is given in <cit.> for the maps that generate a translation group T_∘⊆AGL(V,+) such that T_+⊆AGL(V,∘). We recall that every τ_a∈T_∘⊂AGL(V,+) can be written as κ_aσ_a for a linear map κ_a∈GL(V,+). We will denote by Ω(T_∘)={κ_a | a∈V}⊂GL(V,+). Moreover, κ_a=1_V if and only if a∈U(T_∘). We recall the following definition.
An element r of a ring R is called nilpotent if r^m=0 for some m≥1 and it is called unipotent if r-1 is nilpotent, i.e. (r-1)^m=0 for some m≥1. Let G⊆GL(V,+) be a subgroup consisting of unipotent permutations; then G is called unipotent.
Let T_∘⊆AGL(V,+). We have that Ω(T_∘) is unipotent. Moreover, for all a∈V, κ_a^2=1_V.
Let N=n+d and V=(𝔽_2)^N, with n≥2 and d≥1. Let T_∘⊆AGL(V,+) be such that U(T_∘)=Span{e_n+1,…,e_n+d}. Then, T_+⊆AGL(V,∘) if and only if for all κ_a∈Ω(T_∘) there exists a matrix B_a∈(𝔽_2)^n×d such that
κ_a=[[ I_n B_a; 0 I_d ]].
Note that we can always suppose that U(T_∘) is generated by the last vectors of the canonical basis, as any group T_∘ is conjugated to a group T_∘' such that U(T_∘')=Span{e_n+1,…,e_n+d} (see <cit.>). Indeed, let T_∘ be a translation group of a practical hidden sum, with U(T_∘)=Span{w_1,…,w_d}. We have that w_1,…,w_d are linearly independent with respect to the sum +, which implies that there exists a linear map g∈GL(V,+) such that w_ig=e_n+i for i=1,…,d. Thus, the conjugated group T_∘'=gT_∘g^-1 is such that U(T_∘')=Span{e_n+1,…,e_n+d} and ∘' is again a practical hidden sum. Practical hidden sums with U(T_∘)=Span{e_n+1,…,e_n+d} are easier to study thanks to the particular structure reported in Theorem <ref>. When U(T_∘) is generated by the last vectors of the canonical basis, then the maps τ_e_i generate T_∘, i.e.
the canonical vectors form a basis also for the vector space (V,∘). Indeed, thanks to the form of the maps τ_e_i given in Theorem <ref>, it is possible to verify that by combining the maps τ_e_i's we are able to obtain all the 2^N different maps contained in T_∘ (see <cit.> for further details). In <cit.> it turns out that for any practical hidden sum T_∘⊆AGL(V,+), any given vector v can be represented in (V,∘), with respect to the canonical vectors, in polynomial time (𝒪(N^3)). Let T_∘ be a practical hidden sum with U(T_∘)=Span{e_n+1,…,e_n+d} (dim(V)=N=n+d); then the algorithm for determining the representation of a given vector v is the following:
[i] set λ_i=v_i for 1≤i≤n;
[ii] compute v'=vτ_e_1^λ_1⋯τ_e_n^λ_n;
[iii] set λ_j=v'_j for n+1≤j≤N;
[iv] return (λ_1,…,λ_N), so that v=λ_1e_1∘⋯∘λ_Ne_N.
Correctness of Algorithm <ref>: To find the coefficients λ_1,…,λ_N is equivalent to individuating the maps τ_e_i which generate the map τ_v. So, we need to understand which maps (among the τ_e_i) are needed to send 0 to v, or equivalently v to 0 (since T_∘ is an elementary regular subgroup, τ_v is the unique map such that 0τ_v=v and vτ_v=0). Note that thanks to the form of the maps τ_e_i given in Theorem <ref>, whenever we apply a map τ_e_i to a vector v, among the first n entries of v only the ith one is changed. So, if the entry v_i is equal to 1 for 1≤i≤n, to delete it we need to apply τ_e_i. This explains step [ii]. Now, v'=vτ_e_1^λ_1⋯τ_e_n^λ_n is such that the first n entries are all zero. So, we need to delete the entry v'_j for n+1≤j≤N, whenever v'_j=1. Since τ_e_j=σ_e_j for n+1≤j≤N, we apply τ_e_j if v'_j=1. So, we have obtained a map τ∈T_∘ such that vτ=0, and since this map is unique we have τ=τ_v.
Note that if U(T_∘) is not generated by the last vectors of the canonical basis, then we can consider the conjugated group T_∘'=gT_∘g^-1, with g as above, and we need to apply Algorithm <ref> to vg, obtaining its representation in (V,∘'). This leads us to obtain the representation of v in (V,∘) with respect to the basis {e_1g^-1,…,e_Ng^-1} of (V,∘). In particular, the map g is an isomorphism of vector spaces between (V,∘) and (V,∘').
From this, the authors proved that a hidden sum trapdoor is practical whenever Γ_∞⊆AGL(V,∘) and T_∘⊆AGL(V,+). A class of these practical hidden sums is used in <cit.> to weaken the non-linearity of some APN S-boxes. In addition, in <cit.> a differential attack with respect to hidden sums is presented.
An upper bound on the number of some hidden sums is given in <cit.> and reported below. Let n≥2 and d≥1. According to Theorem <ref>, denote the matrix κ_e_i by
κ_e_i=[[ I_n B_e_i; 0 I_d ]], with B_e_i=[[ b^(i)_1,1 … b^(i)_1,d; ⋮ ⋮; b^(i)_n,1 … b^(i)_n,d ]],
for each 1≤i≤n.
Let N=n+d and V=(𝔽_2)^N, with n≥2 and d≥1. The number of elementary abelian regular subgroups T_∘⊆AGL(V,+) such that dim(U(T_∘))=d and T_+⊆AGL(V,∘) is
[N,d]_2·|𝒱(I_n,d)|,
where 𝒱(I_n,d) is the variety of the ideal I_n,d⊂𝔽_2[b^(s)_i,j | 1≤i,s≤n, 1≤j≤d] generated by G_0∪G_1∪G_2∪G_3 with
G_0={(b^(s)_i,j)^2-b^(s)_i,j | 1≤i,s≤n, 1≤j≤d},
G_1={∏_i=1^n ∏_j=1^d (1+∑_s∈S b^(s)_i,j) | S⊆{1,…,n}, S≠∅},
G_2={b^(s)_i,j-b^(i)_s,j | 1≤i,s≤n, 1≤j≤d},
G_3={b^(i)_i,j | 1≤i≤n, 1≤j≤d},
and [N,d]_q=∏_i=0^d-1 (q^(N-i)-1)/(q^(d-i)-1) is the Gaussian binomial.
Let I_n,d be defined as in Theorem <ref>; then
|𝒱(I_n,d)| ≤ 2^(d·n(n-1)/2) - 1 - ∑_r=1^n-2 C(n,r)·(2^d-1)^((n-r)(n-r-1)/2) = μ(n,d),
where C(n,r) denotes the binomial coefficient.
§ NEW LOWER BOUNDS AND ASYMPTOTIC ESTIMATES
In this section we will provide a lower bound on the cardinality of the variety 𝒱(I_n,d). Moreover, we will show that the ratio between the upper bound and the lower bound is less than e^((2^d+1)/(2^d(2^d-1))).
From Theorem <ref> and from Remark <ref>, we have that a group T_∘ with U(T_∘)=Span{e_n+1,…,e_n+d} is determined by the maps τ_e_i's, and in particular by B_e_1,…,B_e_n (B_e_i=0 for n+1≤i≤n+d).
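Before turning to the construction of the B_e_i's, here is a Python sketch of Algorithm <ref>, reconstructed by us from the correctness argument above; it assumes the B_e_i's are given as 0/1 NumPy arrays of size n×d, and the test uses the B_e_i's of the worked example below:

```python
import numpy as np

def tau_e(v, i, B, n):
    """Apply tau_{e_i} = kappa_{e_i} sigma_{e_i} to the row vector v over GF(2).
    kappa_{e_i} = [[I_n, B[i]], [0, I_d]] adds v[:n] @ B[i] to the last d
    entries (and is the identity for i >= n); sigma_{e_i} then adds e_i."""
    w = v.copy()
    if i < n:
        w[n:] = (w[n:] + w[:n] @ B[i]) % 2
    w[i] ^= 1
    return w

def represent(v, B, n):
    """Algorithm 1: coordinates (lambda_1, ..., lambda_N) of v w.r.t. the
    canonical basis of (V, o), for a practical hidden sum with
    U(T_o) = Span{e_{n+1}, ..., e_{n+d}}."""
    lam = list(v[:n])                 # steps [i]-[ii]
    w = v.copy()
    for i in range(n):
        if lam[i]:
            w = tau_e(w, i, B, n)     # clears the first n entries
    return lam + list(w[n:])          # step [iii]

# B_{e_1}, B_{e_2}, B_{e_3} from the example below (N = 5, n = 3, d = 2):
B = [np.array([[0, 0], [1, 1], [1, 1]]),
     np.array([[1, 1], [0, 0], [0, 1]]),
     np.array([[1, 1], [0, 1], [0, 0]])]
print(represent(np.array([1, 0, 1, 1, 0]), B, n=3))  # [1, 0, 1, 0, 1]
```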
Thus, we need to construct the matrices B__1,…,B__n so that: i) (U(T_∘))=d. Then for any ∈{_1,…, _n}, B_≠ 0. Indeed, ∈ U(T_∘) if and only if B_ =0. Moreover, for any =(v_1,…,v_n+d)∈ V we have _ = [[I_n ∑_i=1^n v_i B__i;0I_d ]] and so we get that B_ = ∑_i=1^n v_i B__i (see Lemma 5.4 in <cit.>). This implies that every non-null linear combination of B__1,…,B__n is non-zero.Note that, in Theorem <ref>, this condition is given by _1. ii) T_∘ is abelian. This happens if and only if theith row of B__j is equal to the jth row of B__i, obtaining the set _2. iii) T_∘ is elementary, that is, _^2=1_V for all . Then _ fixesand, in particular, __i fixes _i. This is equivalent to having the ith rowofB__i equals to 0, which is expressed by _3. Consider now any matrix B__i. Its size is n× d, thus any row of B__i can be written as an element of _2^d. Let us define the matrix 𝔅_∘=[B__1… B__n]∈ (_2^d)^n× n. From Condition i), ii) and iii), above, we have the following properties for𝔅_∘: (I) 𝔅_∘ is full rank over _2,when seen as an element of (_2)^n× nd. This guarantees that (U(T_∘)) = d.(II) 𝔅_∘ is symmetric.Since the ith row of B__j is equal to the jth row of B__i we have that the entry (i,j) of 𝔅_∘ is equal to the entry (j,i).(III) 𝔅_∘ is zero-diagonal, as the ith row of B__i is zero for all i.Let N = 5 and d=2 and consider the operation ∘ defined by the following B__i's: B__1= [ 0 0; 1 1; 1 1 ] , B__2= [ 1 1; 0 0; 0 1 ] B__3 = [ 1 1; 0 1; 0 0 ] ,B__4 = [ 0 0; 0 0; 0 0 ], B__5 = [ 0 0; 0 0; 0 0 ]Since we have n = N-d = 3, we just need to focus on B__1, B__2 and B__3.Representing the rows of the matrices as elements of 𝔽_2^2 = {0,1,α,α+1}, where α^2 = α + 1, we can rewrite the matrices asB__1= [ 0; α + 1; α + 1 ], B__2= [ α + 1; 0; α ], B__3 = [ α + 1; α; 0 ]and so we can rewrite 𝔅_∘ as𝔅_∘ = [ B__1 B__2 B__3 ] = [0 α +1 α +1; α +10α;α+1α0 ]In Theorem <ref> we have a one-to-one correspondence between the matrices B__i's and the points of (_n,d). Thanks to the 𝔅_∘ construction, we have a one-to-one correspondence between the matrices 𝔅_∘∈ (_2^d)^n× n having the above characteristics (I), (II), (III) and the groups T_∘ with U(T_∘) generated by _n+1,…,_n+d.That is, defining_n,d={𝔅_∘∈ (_2^d)^n × n|𝔅_∘ satisfies conditions (I), (II), and (III)}we have |_n,d|=|(_n,d)|. Thus, we aim to estimate |_n,d|. We recall the following result given in <cit.>.Let q be a power of 2. The number of n × n symmetric invertible matrices over _q with zero diagonal isq^n2∏_j=1^⌈n-1/2⌉( 1 - q^1 - 2j) if n is even 0if n is odd .Let N = n + d and V = (𝔽_2)^N, with n ≥ 2 and d ≥ 1. Let q = 2^d. Then we can define a lower bound ν (n,d) ≤ |_n,d| by ν(n,d) =q^n2∏_j=1^⌈n-1/2⌉( 1 - q^1 - 2j) if n iseven (q^n-1-2^n-1) q^n-12∏_j=1^⌈n-2/2⌉( 1 - q^1 - 2j) if n isoddWe require that 𝔅_∘ is full rank over _2, thus if it is invertible this condition is satisfied. Then the value given in Proposition <ref> is a lower bound on the number of all acceptable matrices 𝔅_∘. However, if n is odd from Proposition <ref> we would have only 0 as lower bound.To tackle the n-odd case, we want to reduce it to the n-even case. We show how it is possible to construct a matrix 𝔅_∘ for fixed values n and d starting from a given 𝔅_∘' defined for values n'=n-1,where n-1 is even, and d'=d. Indeed, let 𝔅_∘'∈ (_2^d)^n-1 × n-1 be such that it is full rank over _2, symmetric and zero-diagonal. 
We need to construct the first row (the first column is the transpose of this) of 𝔅_∘, setting all the others equal to the rows of 𝔅'_∘.That is, 𝔅_∘=[[0 ; ^⊺ 𝔅_∘' ]].Then, we need to verify how many possible 𝔅_∘ we can construct starting from 𝔅_∘', or equivalently how many vectors ∈ (_2^d)^n-1 we can use, in order to enforce 𝔅_∘∈_n,d.As 𝔅_∘' is full rank over _2, summing the rows of this matrix we can create 2^n-1-1 non-zero vectors. Thus, we can choose 2^d(n-1)-2^n-1 different vectorsof (_2^d)^n-1 to construct 𝔅_∘. Hence, for any n we have (2^d(n-1)-2^n-1)|_n-1,d|≤|_n,d| and, in particular, for n odd we have at least (2^d(n-1)-2^n-1) ν(n-1,d) possible matrices 𝔅_∘. Let N = n + d and V = (𝔽_2)^N. Then|ℳ_n,d|=2^d - 1 ifn = 2(2^d + 3)(2^d - 1)(2^d - 2) if n = 32^n2∏_j=1^⌈n-1/2⌉( 1 - 2^1 - 2j) if d = 1 and n is even0 if d = 1 and n is oddThe cases n=2 and n=3 were already proved in <cit.>. We restate here the proof using symmetric matrices, obtaining a shorter and clearer proof.If n=2 then𝔅_∘ = [ 0; 0 ] with ∈𝔽_2^d. Of course the only vector that we have to avoid is the zero vector. Thus we can use 2^d-1 vectorsto construct 𝔅_∘.If n=3 then𝔅_∘ = [ 0; 0; 0 ]with ,,∈𝔽_2^d such that 𝔅_∘∈_n,d. We need to find all the triples (,,) such that the matrix 𝔅_∘ is full rank over _2, that is, summing any rows of it we do not obtain the zero vector. Let us consider the different cases: * ≠ 0 : 2^d - 1 possible values for , * = 0 : so ∉{ 0,} and so we get 2^d - 2 possible values * = : so ∉{ 0,} and so we get 2^d - 2 possible values * ≠ : socan be any element, ∉{ 0,}. So 2^d(2^d-2) possible pairs (,). * = 0 : we have ≠ 0 and z ∉{ 0,y } and so we get (2^d-1)(2^d-2) possible pairs (,). Summing all the possible triple (,,), we get(2^d - 1)(2^d - 2 + 2^d - 2 + 2^d (2^d -2)) + (2^d -1)(2^d - 2) = (2^d-1)(2^d-2)(2^d + 3) Now, let n≥ 2 and fix d=1. As we consider 𝔽_2^d = 𝔽_2, we have that _n,d is the set of symmetric matrices having all zeros on the diagonal and invertible. From Proposition <ref> we have our claim. Let us compare the upper bound and the lower bound on the cardinality of _n,d.We recall a theorem on the infinite product convergence criteria, corollary of the monotone convergence theorem (more details can be found in <cit.>). ­Let {a_j}_j∈ℕ⊆ℝ_+. Then ∏_j=1^∞ a_j converges if and only if ∑_j=1^∞ln (a_j) converges.Moreover, assume a_j ≥ 1, and denotea_j = 1 + p_j. Then1 + ∑_j=1^∞ p_j≤∏_j=1^∞ (1 + p_j) ≤ e^∑_j=1^∞ p_j,that is, the infinite product converges if and only if the infinite sum of the p_j converges.Let μ(n,d) be the bound given in Proposition <ref> and let ν(n,d) be the bound given in Proposition <ref>. Let q = 2^d and d≥ 2. If n is even, thenμ(n,d)/ν(n,d)≤∏_j=1^∞1/(1-q^1-2j)≤ e^q+1/q(q-1) if n is odd, then μ(n,d)/ν(n,d)≤2∏_j=1^∞1/(1-q^1-2j)≤ 2 e^q+1/q(q-1).We haveμ(n,d)= q^n2-1-∑_r=1^n-2n r(q-1)^n-r2 ≤ q^n2.Consider the case n evenμ(n,d)/ν(n,d) ≤q^n2/q^n2∏_j=1^⌈n-1/2⌉( 1 - q^1 - 2j)= ∏_j=1^⌈n-1/2⌉1/( 1 - q^1 - 2j)≤∏_j=1^∞1/( 1 - q^1 - 2j),since we can write1/1-q^1-2j = q^2j-1/q^2j-1-1 = 1 + q/q^2j - q.Defining p_j := q/q^2j - qwe trivially have that p_n ≥ 0 and 1 + ∑_j=1^∞ p_j ≤∏_j=1^∞1/(1-q^1-2j)≤ e^∑_j=1^∞ p_j. We have for any j ≥ 2p_j ≤1/q^j.From this, we get∑_i=1^∞ p_j ≤ p_1+∑_i=2^∞1/q^j= 2/q-1-1/q=q+1/q(q-1)and so∏_j=1^∞1/(1-q^1-2j)≤ e^q+1/q(q-1). 
Now let n be odd,μ(n,d)/ν(n,d) ≤q^n2/(q^n-1-2^n-1)· q^n-12∏_j=1^⌈n-2/2⌉( 1 - q^1 - 2j).Since ji=j-1i-1+j-1i, n≥ 2 and d≥ 2we haveμ(n,d)/ν(n,d) ≤q^n-1/(q^n-1-2^n-1)·∏_j=1^⌈n-2/2⌉( 1 - q^1 - 2j) = 1/(1-2^n-1/q^n-1)·∏_j=1^⌈n-2/2⌉( 1 - q^1 - 2j)≤1/(1-2^n-1/q^n-1)∏_j=1^∞1/( 1 - q^1 - 2j)≤2∏_j=1^∞1/( 1 - q^1 - 2j)≤ 2 e^q+1/q(q-1).Note that in Proposition <ref>, the comparison for the case d=1 is avoided as the lower bound is the exact value of |_n,d|. For all possible values of n≥ 2 and d≥ 2 we havelim_n→∞μ(n,d)/ν(n,d)=∏_j=1^∞1/( 1 - q^1 - 2j)andlim_d→∞μ(n,d)/ν(n,d)=1.Consider the case n odd, then μ(n,d)/ν(n,d)= q^n2(1-1/q^n2-∑_r=1^n-2n r∏_i=1^n-r2(q-1)/q^n2)/(q^n-1-2^n-1)· q^n-12∏_j=1^⌈n-2/2⌉( 1 - q^1 - 2j) = (1-1/q^n2-∑_r=1^n-2n r∏_i=1^n-r2(q-1)/q^n2)/(1-2^n-1/q^n-1)·∏_j=1^⌈n-2/2⌉( 1 - q^1 - 2j)=(1-1/q^n2-∑_r=1^n-2n r∏_i=1^n-r2(q-1)/q^n2)/(1-2^n-1/q^n-1)∏_j=1^⌈n-2/2⌉1/( 1 - q^1 - 2j)Consider ∑_r=1^n-2n r∏_i=1^n-r2(q-1)/q^n2 =∑_r=1^n-2n r(q-1)^n-r2/q^n2≤∑_r=1^n-2n rq^n-12-n2≤2^n-2/q^n-1.This implies that the limit of (<ref>), as n approaches infinity, is 0 and then the limitlim_n→∞q^n2(1-1/q^n2-∑_r=1^n-2n r∏_i=1^n-r2(q-1)/q^n2)/(q^n-1-2^n-1)· q^n-12∏_j=1^⌈n-2/2⌉( 1 - q^1 - 2j)=∏_j=1^∞1/( 1 - q^1 - 2j). The case n even is similar.Moreover, from the proof of Proposition <ref> we have1≤μ(n,d)/ν(n,d)≤e^q+1/q(q-1)if n is even 1/(1-2^n-1/q^n-1)e^q+1/q(q-1)if n is oddand immediately we have that lim_d→∞μ(n,d)/ν(n,d)=1. From the results obtained in this section we can see that if we would tackle Problem 3. given in Section 2, then it is not possible to search among all the possible practical hidden sums of a given space V. Indeed, it will be computationally hard, given the huge amount of these operations, also in small dimensions. § ON HIDDEN SUMS FOR LINEAR MAPSIn this section we investigate Problem 3 given in Section 2 page prob3. In particular we want to see if, for a given ∈(V,+), it is possible to individuate an alternative sum ∘ such that ∈(V,∘).Let T_∘⊆(V,+) and ∈(V,+)∩(V,∘) then U(T_∘) is invariant under the action of , i.e. U(T_∘)=U(T_∘). Recall thatis linear with respect to both + and ∘. Let ∈ U(T_∘),and so ∘=+ for any . Thus, for allwe have∘=^-1∘=(^-1∘)=(^-1+ )=+ .That implies ∈ U(T_∘), and so U(T_∘)⊆ U(T_∘).Let T_∘⊆(V,+) and ∈(V,+). Thenis in (V,+)∩(V,∘) if and only if for all ∈ V we have_=_,where _=__. Letbe fixed. Then for allwe have(∘)λ = λ∘λ where (∘)λ = (_ + )λ= _λ + λ and λ∘λ = λ_λ + λ . Imposing the equality we get( _λ) = λ_λ.Vice versa let ,∈ V (∘) =_+=_+=∘. Now we will characterize the linear maps which are also linear for an operation ∘, such that U(T_∘) is generated by the last elements of the canonical basis.Let V=(_2)^N, with N=n+d, n≥ 2 and d≥ 1. Let T_∘⊆(V,+) with U(T_∘)={_n+1,...,_n+d}. Let ∈(V,+). Then ∈(V,+)∩(V,∘) if and only if=[[ Λ_1 Λ_2; 0 Λ_3 ]],with Λ_1∈((_2)^n), Λ_3∈((_2)^d), Λ_2 any matrixin (_2)^n× d and for all ∈ V B_Λ_3=Λ_1B_ (see Theorem <ref> for the notation of B_).Let =[[ Λ_1 Λ_2; Λ_4 Λ_3 ]],Λ_1∈ (_2)^n× n, Λ_3∈ (_2)^d× d, Λ_2∈ (_2)^n× d and Λ_4∈ (_2)^d× nFrom Proposition <ref> we have that Λ_4=0 as U(T_∘) is -invariant. Thus, asis invertible Λ_1∈((_2)^n) and Λ_3∈((_2)^d).By standard matrix multiplication it can be verified that (<ref>) and B_Λ_3=Λ_1B_ are equivalent. Then, we need to show that this condition does not depend on the matrix Λ_2. Indeed, let =(x_1,...,x_n,x_n+1,...,x_n+d)∈ V, and define =(x_1,...,x_n) and '=(x_n+1,...,x_n+d), i.e. =(,'). As reported in Section 3 pagecondizionei, we have _=_(,0), which implies B_=B_(,0), for all ∈ V. 
Then, B_Λ_3=Λ_1B_ for all ∈ V if and only if B_(,0)Λ_3=Λ_1B_(Λ_1,0) for all =(x_1,...,x_n). So we have [[ Λ_1 Λ_2; 0 Λ_3 ]]∈(V,∘),for any Λ_2. In the following, as Λ_2 in Proposition <ref> could be any matrix, we will consider linear maps of type[[ Λ_1 *; 0 Λ_3 ]]∈(V,∘),with Λ_1∈((_2)^n), Λ_3∈((_2)^d) and * denotes for any matrix of size n× d.From the propositions above, if we want to find an operation ∘ that linearizes a linear map ∈(V,+), i.e. we want to enforce[[ Λ_1 *; 0 Λ_3 ]]∈(V,∘),then we have to construct some matrices B_'s such that B_Λ_3=Λ_1B_ for all . Moreover, as the standard vectors _i's form a basis for the operation ∘, then we need to individuate only the matrices B__i, so that B__iΛ_ 3=Λ_1B__i, and in particular thatB__iΛ_ 3=Λ_1B__i=Λ_1(∑_i=1^nc_iB__i),where c_1,…,c_n are the first components of the vector _i. Correctness of Algorithm <ref>:Note that thanks to the form of the maps __i given in Theorem <ref>, we always have __i^2=1_V, whatever the matrix B__i is. This property, with condition 3 in the algorithm, implies __i^2=1_V, so T_∘=⟨__1,...,__n+d⟩ is elementary. Condition 4 guarantees that T_∘ is abelian. Then, T_∘ is also regular (see Corollary 3.8 in <cit.> for more details). This implies that T_∘=⟨__1,...,__n+d⟩ is a practical hidden sum.The first condition, as seen in Proposition <ref>, is equivalent to having __i=__i for all _i, and for Remark <ref>, we have also _=_ for any . Then, from Proposition <ref> we have ∈(V,∘). To conclude, condition 2 implies that U(T_∘) contains _n+1,...,_n+d.Viceversa, consider T_∘ a practical hidden sums (with __i as in (<ref>)) such that: * T_∘⊆(V,+), T_+⊆(V,∘),* U(T_∘) contains _n+1,...,_n+d,* ∈(V,∘). We need to check that T_∘ is an output of the algorithm. Equivalently, we need to check that the matrices B__i associated to this group satisfies the condition of the system in Algorithm <ref>.Since T_∘ is elementary and abelian, conditions 3 and 4 are satisfied. Also, condition 2 holds because U(T_∘) contains _n+1,...,_n+d. For the first condition, since ∈(V,∘) we have _=_ for any , which is equivalent to B_Λ_3=Λ_1B_ for all , and in particular for all _i.Note that, from Algorithm <ref> we obtain operations ∘ such that{_n+1,…_n+d}⊆ U(T_∘). Indeed, we required that for n+1≤ i≤ n+d, B__i =0, but we did not require that the combinations of B__1,...,B__n are non-zero. So, if we want to construct hidden sums with (U(T_∘))=d, we just process the solution of Algorithm <ref> and discharge the 𝔅_∘'s such that rank__2(𝔅_∘)<n.§.§ Complexity of our search algorithmTo solve the linear system in our algorithm, we represent the matrices as vectors: [ b_1,1 ⋯ b_1,d; ⋮ ⋱ ⋮; b_n,1 ⋯ b_n,d ]⟷[ 1-st rowb_1,1 , ... ,b_1,d , 2-nd rowb_2,1 ,... , b_2,d , ... , n-th rowb_n,1 , ... , b_n,d ] * From i=1,...,n, ∑_j c_jΛ_1B__j =B__iΛ_3 we have n^2d linear equation in n^2d variables. * From _i B__i = 0 and _i B__j = _j B__i we obtain n+12d linear equations, in the same variables.So, we only need to find a solution of a binary linear systemof size(n^2d + n+12d) · n^2d. Therefore, we have immediately the following result The time complexity of Algorithm <ref> is 𝒪( n^6 d^3 ) and the space complexity is 𝒪( l · 2^d-1 n^2) where l is the dimension of the solution subspace.In Table <ref> we report some timings for different dimensions of the message space V, fixing the value of d equal to 2. 
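To complement the complexity discussion, conditions (I)-(III) are easy to test mechanically. The sketch below is ours, not the authors' code: it checks whether a tuple (B_e_1,…,B_e_n), stored as an array Bs of shape (n, n, d) over GF(2), yields a matrix 𝔅_∘ ∈ _n,d, and an exhaustive loop over all tuples (feasible only for very small n and d) reproduces the counts given earlier, e.g. |_2,d| = 2^d - 1.

```python
import itertools
import numpy as np

def gf2_rank(M):
    """Rank of a 0/1 matrix over GF(2) by Gaussian elimination."""
    M = M.copy() % 2
    r = 0
    for c in range(M.shape[1]):
        piv = next((i for i in range(r, M.shape[0]) if M[i, c]), None)
        if piv is None:
            continue
        M[[r, piv]] = M[[piv, r]]
        for i in range(M.shape[0]):
            if i != r and M[i, c]:
                M[i] ^= M[r]
        r += 1
    return r

def is_admissible(Bs, n, d):
    """Conditions (I)-(III) on B_o = [B_{e_1} ... B_{e_n}]."""
    if any(Bs[i][i].any() for i in range(n)):          # (III) zero diagonal
        return False
    for i, j in itertools.combinations(range(n), 2):   # (II) symmetry
        if (Bs[i][j] != Bs[j][i]).any():
            return False
    flat = Bs.reshape(n, n * d)                        # (I) full rank over F_2
    return gf2_rank(flat) == n

def count_Mnd(n, d):
    """Exhaustive |M_{n,d}| (only feasible for tiny n*n*d)."""
    total = 0
    for bits in itertools.product((0, 1), repeat=n * n * d):
        Bs = np.array(bits, dtype=np.uint8).reshape(n, n, d)
        total += is_admissible(Bs, n, d)
    return total   # e.g. count_Mnd(2, 2) == 3 == 2**2 - 1
```

This brute force obviously does not scale — hence the closed-form bounds of the previous section and the linear-system search above — but it provides a useful unit test for small parameters.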
§ HIDDEN SUMS FOR PRESENT'S MIXING LAYER

Here we report our results on the search for a hidden sum suitable for the mixing layer of PRESENT, λ_P, which is defined by the permutation reported in Table <ref>. Algorithm <ref> requires as input a linear function in the block form

λ' = [[ Λ_1  * ; 0  Λ_3 ]]

with Λ_1 ∈ GL((𝔽_2)^n) and Λ_3 ∈ GL((𝔽_2)^d) for some integers n and d. Note that a matrix as in (<ref>) is such that the space U' = Span{e_n+1,…,e_n+d} is λ'-invariant. So, in order to transform a mixing layer λ into one as in (<ref>), it is necessary to individuate a subspace U such that Uλ = U. Then, by conjugating λ by a linear map π such that Uπ = Span{e_n+1,…,e_n+d}, we will have πλπ^-1 as in (<ref>). From a group T_∘ obtained from Algorithm <ref> for the map πλπ^-1 we will obtain a hidden sum for λ, that is, π^-1 T_∘ π. For this reason we consider the matrix given by the permutation

π_P = (1, 61)(22, 62)(43, 63) ∈ Sym({ i | i = 1,…,64 })

so that

π_P λ_P π_P^-1 = [[ Λ_1  0 ; 0  I_4 ]] = λ̂_P.

We can now apply Algorithm <ref> and obtain all the possible operations that linearize λ̂_P. The operation space will be denoted by O. The time required to compute the operation space O is ∼10.420 seconds, and it is generated by 2360 60-tuples of 60×4 matrices. So the number of operations that linearize λ̂_P, in the form described in Theorem <ref>, is |O| = 2^2360. We take a random operation ∘ (Table <ref>) obtained by our algorithm. The operation has rank_𝔽_2(𝔅_∘) = 60 = n, and so the operation is such that dim(U(T_∘)) = 4 = d. To compress the table, we represent every row of B_e_i as a number in {0,…,2^4-1}, and, as we did in Remark <ref>, row i represents the matrix B_e_i.

§ CONCLUSIONS

Continuing the study of <cit.>, here we focused on the class of hidden sums such that T_∘ ⊆ AGL(V,+) and T_+ ⊆ AGL(V,∘), which we called practical hidden sums, as these could be used to mount new attacks on block ciphers. We gave a lower bound on the number of practical hidden sums. Then we compared this lower bound with the upper bound given in <cit.>. In the second part, we dealt with the problem of individuating possible practical hidden sums for a given linear map, providing Algorithm <ref> and showing an example on the case of PRESENT's mixing layer. A dual approach is under investigation in <cit.> (some of the results are reported in <cit.>), where the author takes into consideration some S-boxes, which would be strong according to classical cryptanalysis, and identifies some practical hidden sums that significantly weaken the non-linearity properties of the S-boxes. Once the candidate hidden sums have been found, some (linear) mixing layers are constructed such that they are linear with respect to a hidden sum, and the resulting cipher turns out to be attackable by the clever attacker.

§ ACKNOWLEDGEMENTS

The authors would like to thank the anonymous referees for their stimulating suggestions, and several people for interesting discussions: R. Civino and R. Aragona. Part of the results in this paper are from the first author's Master thesis, supervised by the last author. They have been presented partially at WCC 2017. This research was partially funded by the Italian Ministry of Education, Universities and Research, with the project PRIN 2015TW9LSR "Group theory and applications".

calderini2017elementary M. Calderini, M. Sala, Elementary abelian regular subgroups as hidden sums for cryptographic trapdoors, arXiv preprint arXiv:1702.00581.
CGC-cry-art-Bogdanov2007 A. Bogdanov, L. R. Knudsen, G. Leander, C. Paar, A. Poschmann, M. Robshaw, Y.
Seurin, C. Vikkelsoe, PRESENT: An Ultra-Lightweight Block Cipher, in: Proc. of CHES 2007, Vol. 4727 of LNCS, Springer, pp. 450–466, 2007.
CGC-cry-book-stin95 D. Stinson, Cryptography, Theory and Practice, CRC Press, 1995.
CGC-cry-book-daemen2002design J. Daemen, V. Rijmen, The design of Rijndael, Information Security and Cryptography, Springer-Verlag, Berlin, 2002, AES – the Advanced Encryption Standard.
CGC-cry-art-carantisalaImp A. Caranti, F. Dalla Volta, M. Sala, On some block ciphers and imprimitive groups, AAECC 20 (5-6), pp. 229–350, 2009.
daemen2002design J. Daemen, V. Rijmen, The design of Rijndael: AES – the Advanced Encryption Standard, Springer Science & Business Media, 2002.
CGC-cry-art-serpent R. Anderson, E. Biham, L. Knudsen, SERPENT: A New Block Cipher Proposal, in: Fast Software Encryption, Vol. 1372 of LNCS, Springer, pp. 222–238, 1998.
filion A. Bannier, E. Filiol, Partition-based trapdoor ciphers, in: Partition-based Trapdoor Ciphers, InTech, 2017.
CGC-cry-art-paterson1 K. G. Paterson, Imprimitive permutation groups and trapdoors in iterated block ciphers, in: Fast Software Encryption, Vol. 1636 of LNCS, Springer, Berlin, pp. 201–214, 1999.
CGC-cry-art-sparr08 R. Sparr, R. Wernsdorf, Group theoretic properties of Rijndael-like ciphers, Discrete Appl. Math. 156, pp. 3139–3149, 2008.
CGC-cry-art-Wern2 R. Wernsdorf, The one-round functions of the DES generate the alternating group, in: Eurocrypt 1992, pp. 99–112.
court N. Courtois, The Inverse S-box, Non-linear Polynomial Relations and Cryptanalysis of Block Ciphers, in: AES 4 Conference, Bonn, May 10–12, Vol. 3373 of LNCS, Springer, pp. 170–188, 2004.
Rob C. Blondeau, R. Civino, M. Sala, Differential Attacks: Using Alternative Operations, Des. Codes Cryptogr., pp. 1–23, 2018.
Rob2 R. Civino, Differential attacks using alternative operations and block cipher design, Ph.D. thesis, University of Trento, 2018.
macwilliams69ort J. MacWilliams, Orthogonal matrices over finite fields, The American Mathematical Monthly 76 (2) (1969) 152–164. <http://www.jstor.org/stable/2317262>
stewart2015calculus J. Stewart, Calculus, Cengage Learning, 2015.
http://arxiv.org/abs/1702.08384v2
{ "authors": [ "Carlo Brunetta", "Marco Calderini", "Massimiliano Sala" ], "categories": [ "math.GR" ], "primary_category": "math.GR", "published": "20170227171326", "title": "On Hidden Sums Compatible with A Given Block Cipher Diffusion Layer" }
http://arxiv.org/abs/1702.07934v1
{ "authors": [ "Ugo Rosolia", "Francesco Braghin", "Andrew G. Alleyne", "Stijn De Bruyne", "Edoardo Sabbioni" ], "categories": [ "cs.MA", "cs.SY" ], "primary_category": "cs.MA", "published": "20170225181037", "title": "A decentralized algorithm for control of autonomous agents coupled by feasibility constraints" }
Mott metal-insulator transition in the Doped Hubbard-Holstein model
Jamshid Moradi Kurdestany and S. Satpathy
December 30, 2023
=====================================================================

§ THE REALITY OF SPECTRUM CONGESTION

It is clear that the EM spectrum is now rapidly reaching saturation, especially for frequencies below 10 GHz [www.ntia.doc.gov/files/ntia/publications/january_2016_spectrum_wall_chart.pdf]. Governments, who influence the regulatory authorities around the world, have resorted to auctioning the use of spectrum, in a sense to gauge the importance of a particular user. Billions of USD are being paid for modest bandwidths. The earth observation, astronomy and similar science-driven communities cannot compete financially with such a pressure system, so this is where governments have to step in and assess / regulate the situation.

It has been a pleasure to see a situation where the communications and broadcast communities have come together to formulate sharing of an important part of the spectrum (roughly, 50 MHz to 800 MHz) in an IEEE standard, IEEE 802.22. This standard, known as the "TV White Space Network" and built on lower-level standards, shows a way that fixed and mobile users can collaborate in geographically widespread regions, using cognitive radio and geographic databases of users. This White Space (WS) standard is well described in the literature <cit.> and is not the major topic of this short paper.

We wish to extend the idea of the WS concept to include the idea of EM sensors (such as radar) adopting this approach to spectrum sharing, providing a quantum leap in access to spectrum. We postulate that networks of sensors, using the tools developed by the WS community, can replace and enhance our present set of EM sensors. We first define what networks of sensors entail (with some history), and then go on to describe, based on a taxonomy of symbiosis defined by de Bary <cit.>, how these sensors and other users (especially communications) can co-exist. This new taxonomy is important for understanding, and should replace somewhat outdated terminologies from the radar world.

§ NETWORKS OF SENSORS

Here we discuss the migration from what might be perceived as a single or limited sensor function, such as a radar, to the concept of a network of sensors using electromagnetic waves to detect, classify and identify objects in a volume of interest. How this is achieved is, of course, highly variable in terms of architecture, operating frequency band, bandwidth and measurement method.

§.§ Early networks of sensors

Aircraft sensors using EM waves were developed by many nations in the Second World War, but the term "Radar" (Radio Detection and Ranging) evolved in the USA and has been adopted universally. Radar represents a class of sensor that emits an electromagnetic (EM) wave and, by measuring the time delay of the echo from a target, infers the distance to the target. The sensor rapidly evolved to measure Doppler (radial velocity), and, using directional antenna technology, a measure of bearing angle, or even elevation angle, became routine.

Hülsmeyer patented a sensor modelled on Marconi's communications technology that could determine the presence of a reflecting object. It did not have the capability, inferred in the term "Radar", of measuring range, and assumed that only objects in the common volume of the transmitter and receiver were important.
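The measurement principles just described — echo time delay for range, Doppler shift for radial velocity — reduce to two one-line relations. The sketch below is added purely for illustration and is not part of the original text; it assumes a monostatic radar, for which the echo path is a round trip.

```python
C = 299_792_458.0  # speed of light in m/s

def echo_range(delay_s: float) -> float:
    """Target range from round-trip echo delay: R = c * tau / 2."""
    return C * delay_s / 2.0

def radial_velocity(doppler_hz: float, carrier_hz: float) -> float:
    """Radial velocity from the Doppler shift of a monostatic radar:
    f_d = 2 * v * f_c / c, hence v = f_d * c / (2 * f_c)."""
    return doppler_hz * C / (2.0 * carrier_hz)

# A 1 GHz radar seeing an echo after 100 microseconds with a 667 Hz shift:
# echo_range(100e-6) -> ~15 km; radial_velocity(667.0, 1e9) -> ~100 m/s
```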
§.§ Taxonomy for Networks of Sensors

We have chosen the term "sensor" for the technology that will be revealed in this short paper, but more specifically, a class of sensors that uses EM waves in symbiosis with other uses of the same EM waves. To understand the subtlety of the term "symbiosis", we have adopted a classification used in biology, where a similar controversy existed in describing the way certain organisms (plants and animals) coexisted. Anton de Bary, in a paper <cit.>, defined the interaction of mutually dependent organisms as "symbiosis", with three subdivisions:

§.§.§ Symbiosis

De Bary saw three types of relationship existing between organisms, which we can explore, adding the context for sensor networks. A family tree of EM sensors fulfilling radar-type sensing functions is shown in Figure <ref>.

Parasitic Systems In nature, parasitism leads to a degradation of one organism due to the parasite harvesting resources of the host, often leading to the death of the host. There does not seem to be an analogue in the EM sensor network situation.

Commensal Systems Commensalism between organisms implies that neither system degrades the other, but in the sensor / communications symbiosis, we note that, for example, the sensor might not exist if it were not for the commensal partner. The simplest example is the case of a sensor system that utilises the FM Broadcast Band emissions to track aircraft <cit.>. In particular, Inggs et al. <cit.> describe such systems and how they are implemented in some detail.

Mutualistic Systems Here, two organisms collaborate to mutual benefit, i.e. the two functions (say telecommunications and sensing) are designed together, and compromises are made to ensure efficient operation of both functions. The White Space / telecommunications symbiosis is described in a patent <cit.>.

Thus, in the WS Band, we have shown that the EM sensing function of tracking aircraft for air traffic management (ATM) can be carried out using FM Band broadcasts. Since this is a Commensal sensor, only the presence of stable broadcast transmitters is required. We can envisage more complex scenarios, where a Mutualistic relationship exists, i.e. the sensor is in fact a mobile user of a WS network, and uses the network itself to share detections of targets between geographically dispersed nodes, thereby solving many of the tracking problems. These are discussed in other papers, but the field is wide open to exploitation.

At the bottom of the taxonomy chart, we refer to the geometric configuration of the sensor network, i.e. the spatial distribution of the transmitters and receivers. We do not elaborate further on this aspect in the paper.

§ CONCLUSIONS

We have demonstrated that EM sensors, described by a taxonomy based on the biological concepts of symbiosis, will replace the current, "hard-wired" sensors, some with heritage going back to before the 1939-1945 World War, and an unwieldy terminology. The WS collaboration has shown the way. The hope is that sensor designers will see the advantages of geographically and frequency distributed sensors that can seamlessly collaborate with other spectrum users, leading to a new world where spectrum congestion can be eased.

It is not too far-fetched to predict:

Radiometers A fleet of L Band radiometers for soil moisture observation from space, collaborating with L Band communications and radar users as the satellite passes over, to ensure a clean spectrum for the soil moisture measurements.

Air Traffic Radar A new network of radars using a single frequency, but scheduling transmissions with nearby systems for mutual convenience.

Monitoring Infirm A sensor network in a house to monitor the movement of the infirm, checking for possible falls or unexpected inactivity <cit.>.

Imaging Radar A fleet of satellite, airborne and ground-based radars that illuminate the Earth in a coordinated way, extracting different imaging information based on user needs.

We hope that papers such as this will prompt the EM sensor community to start thinking laterally about collaboration, and the financial incentives offered by sharing. Clearly, no one wishes for performance standards to be degraded, but this should not be necessary.
http://arxiv.org/abs/1702.07928v1
{ "authors": [ "Michael Inggs", "Amit Mishra" ], "categories": [ "cs.NI" ], "primary_category": "cs.NI", "published": "20170225173538", "title": "A New Taxonomy for Symbiotic EM Sensors" }
Simplified proposal for realizing multiqubit tunable phase gate in circuit QED
Wen-An Li [E-mail: liwenan@126.com] and Yuan Chen
December 30, 2023
==============================================================================

Heavy electron materials stand out in the correlated electron family because of the extraordinary variety of quantum mysteries they present. In addition to exhibiting two ordered states at low temperatures, antiferromagnetism and superconductivity, that can coexist, essentially nothing about their higher temperature normal state behavior is what one finds in "normal" materials. Not only does the interaction between a lattice of localized f electron magnetic moments and background conduction electrons give rise to the emergence, at a temperature T^* (often called the coherence temperature), of heavy electrons with masses that can become comparable to that of a muon, every other aspect of their normal state behavior produced by that interaction is anomalous. Experiments on the best studied heavy electron material, CeRhIn_5, show that as the temperature and pressure are varied, some five different temperature scales, all well below the crystal field energy levels, are needed to characterize the normal state anomalies depicted in Fig. <ref> <cit.>:

* a nuclear magnetic Knight shift that does not follow the measured spin susceptibility below T^*.
* a lower limit, T_QC, on the ln T universal behavior of the heavy electron density of states that begins at T^*.
* a maximum in the magnetic resistivity at T^max_ρ.
* a lower limit, T_0 or T_X depending on the pressure range in which it is studied, on the power law scaling behavior in the resistivity that begins at a temperature, T_QC.

It is widely believed that the source of these anomalies, and similar ones found in other heavy electron materials, are fluctuations associated with quantum critical points that mark transitions between distinct phases of matter at T=0. Although there exist microscopic theories of aspects of this quantum critical behavior [the Hertz-Millis-Moriya model for the spin fluctuation spectrum near an antiferromagnetic quantum critical point (AF QCP) <cit.>; the Abrahams-Wölfle model of critical quasiparticles at very low temperatures for materials that are very near an AF QCP <cit.>; the work of Coleman, Pepin, Senthil, and Si et al. on new critical excitations beyond the basic order parameter fluctuations <cit.>; and that of Lonzarich et al. suggesting a path forward for an improved microscopic approach to understanding the emergence of heavy electrons in Kondo lattice materials <cit.>], these do not explain all the above anomalies, not least because there is at present no microscopic theory of the behavior of the three components (light conduction electrons, heavy electrons, and local moments that have partially hybridized) that exist over much of the phase diagram. We therefore turn to phenomenology in our search for an understanding of the wide range of anomalous behavior and the temperature scales over which it is found. In what follows we show that a careful analysis of the phase diagrams expected from the presence of two competing quantum critical points, one associated with the end of antiferromagnetism, the other with the hybridization-induced end of local moment behavior, together with the phenomenological two-fluid model of that hybridization [for a review see Ref.
<cit.>], provides a framework that enables us to arrive at a more complete physical understanding of the anomalous normal state behavior of CeRhIn_5, and a number of otherwell-studied heavy electron materials.§ QCPS AND THEIR EXPECTED SCALING SIGNATURESIn the CeMIn_5 (M=Co, Rh, Ir) and similar Kondo lattice materials, we have argued that "collective" hybridization of the local moments against the background conduction electrons begins at the coherence temperature, T^*, and is complete along a line of temperatures, T_L <cit.>: above T_L we expect to find both local moments whose strength has been reduced by hybridization that can order antiferromagnetically and itinerant heavy electrons that can become superconducting; well below it we will find only heavy electrons that may superconduct. Absent superconductivity, we would then expect to find two distinct quantum critical points: an AF QCP that marks the T=0 end of local moment antiferromagnetic behavior and a hybridization quantum critical point (HY QCP) that marks the T=0 completion of collective hybridization of local moments. These QCPs can produce quantum critical local moment spin fluctuations and quantum critical heavy electron spin or charge fluctuations, and a key question is the extent to which these QCPs and the scaling behaviors to which they give rise, can be distinguished and identified experimentally.Quite generally we may expect to find the three classes of heavy electron antiferromagnetic materials shown in Fig. <ref> <cit.>. Class I materials are those in which the AF and HY QCPs appear to be identical within experimental error. Class II are those in which the HY QCP lies well within the antiferromagnetic phase; in this case, the AF QCP becomes an itinerant AF QCP associated with the magnetic instability of the itinerant heavy electrons. Class III are those in which that HY QCP lies well outside the antiferromagnetic phase; between the two QCPs there will then be a region in which a nonmagnetic non-Landau heavy electron liquid coexists with incoherent local moments.We follow Lonzarich et al. <cit.> in making the assumption that in all three classes we are dealing with two order parameters (and their associated quantum critical points and fluctuations): an HY order parameter describing the build up of the emergent heavy electrons and the more familiar AF order parameter describing the build up of local moment AF order. The magnetic and hybridization quantum critical fluctuations will often not behave independently. For example, we shall see that hybridization can be suppressed and reversed at low temperatures by local moment AF order, causing relocalization of heavy electrons and a corresponding decrease in the hybridization order parameter <cit.>. This relocalization takes place at a temperature slightly above the Néel temperature, possibly associated with thermal fluctuations of the AF order parameter. Still another possibility is that the coupling between the magnetic and hybridization quantum critical fluctuations gives rise to unconventional quantum critical scaling in their overlap regime at finite temperatures <cit.>. Moreover, the magnetic quantum critical fluctuations may also penetrate into the region where all f electrons become itinerant, causing a change in the characteristics of the quantum critical scaling from an unconventional type to an itinerant spin density wave (SDW) type. 
The latter is clearly seen in Class II materials, but may well exist in Class I materials, while in Class III materials, one may expect changes of the scaling exponent when approaching the two separated QCPs with lowering temperature.In making the plots of the impact of the HY quantum critical fluctuations shown in Fig. <ref>, we are making a key assumption about the origins of two parts of the scaling behavior seen in heavy electron liquid that have led it to be called a Kondo liquid (KL): universal scaling behavior, characterized by the energy scale, T^*, of the effective order parameter f(T) that measures its strength (see Eq. <ref>); and the scaling with ln T of the intrinsic KL state density seen in uniform magnetic susceptibility and specific heat experiments <cit.>. A central thesis of the present paper is that these two parts represent distinct scaling phenomena of distinguishable physical origins. As first shown by Yang et al. <cit.>, T^* is determined by the nearest-neighbor coupling between local moments in the Kondo lattice. Their interaction produces collective hybridization below T^* that is quite different from the single-ion Kondo hybridization (screening) found for isolated magnetic moments. In the present paper we argue that the ln T scaling behavior seen in the KL state density is brought about by the HY QCP fluctuations (and/or their associated gauge fluctuations) whose influence is cut off above T^*. Our main focus in this paper will be on Class I materials; materials belonging to the other two classes are discussed only briefly. It is in fact possible that in Class I materials the localization and magnetic QCPs are never exactly identical, since the combined effects of the HY and AF quantum critical fluctuations may act to move the AF and HY QCPs to opposite directions, reflecting the way in which hybridization fluctuations interfere with long range magnetic order and spin fluctuations interfere with collective hybridization in the vicinity of the putative identical QCP.Absent superconductivity, an analysis of a number of experiments on heavy electron materials at comparatively high temperatures (> 2K) yields the general phase diagram shown in Fig. <ref>, in which heavy electrons begin to emerge at a temperature of the order ofT^* as a result of collective hybridization of local moments with the background (light) conduction electrons, and behave like a new quantum state of matter, that exhibits HY quantum critical scaling between T^* and T_QC. Below T_QC, although one continues to have coexisting local moments and heavy electrons over much of the phase diagram, the heavy electrons no longer exhibit their KL scaling behavior but potentially display a more dramatic power law divergence because of the proximity of the AF QCP.Three other important temperature scales are shown there <cit.>: T_N, the Néel temperature at which hybridized local moments begin to exhibit long range magnetic order; T_L, the temperature at which collective hybridization of the local moments is nearly complete, so that well below it one finds only heavy electrons; and T_FL, the temperature at which those heavy electrons begin to exhibit Fermi liquid behavior. The phenomenological two-fluid model of the behavior of the coexisting KL and hybridized local moments helps one determine their relative importance for physical phenomena at any pressure or temperature in the phase diagram. 
For example, the spin susceptibility takes the form

χ = [1-f(T)] χ_SL + f(T) χ_KL,

where χ_SL and χ_KL are the intrinsic susceptibilities of the spin liquid (hybridized local moments) and the KL, respectively, and f(T), the strength of the KL component, takes the form

f(T) = f_0 (1-T/T^*)^3/2,

where f_0, the temperature independent intrinsic "hybridization strength", is the pressure dependent control parameter depicted in Figs. <ref> and <ref>. We see that for weakly hybridizing materials, characterized by f_0<1, heavy electrons coexist with hybridized local moments until one reaches T=0, with the latter ordering antiferromagnetically at T_N. f_0 must be unity at the HY QCP at which collective hybridization is complete. For strongly hybridizing (f_0>1) materials, that coexistence ends along a line of temperatures, T_L, at which the hybridization of local moments is essentially complete. Eq. <ref> yields the simple expression,

T_L = T^* (1 - f_0^-2/3).

Below T_L, these heavy electrons form a quantum liquid that exhibits anomalous quantum critical behavior between T_L and T_FL, and Landau Fermi liquid behavior below it. Some additional comments are in order:

* Not shown in Fig. <ref> is the possible emergence at very low temperatures of a second regime of quantum critical behavior, for which the microscopic theory developed by Abrahams and Wölfle <cit.> may be valid.
* Around (and slightly above) the HY QCP there will be a region in which some local moments may be present, but these can reasonably be assumed not to influence the quantum critical behavior of the vast majority of heavy electrons (as required by the Abrahams-Wölfle model); we arbitrarily take this upper limit to be ∼5%, in which case the two-fluid model tells us that this region will begin at ∼ T^*/30.
* At T∼0 we expect that well to the left of the HY QCP the Fermi surface will be "small" as it consists of those parts of the light electron Fermi surface that have not hybridized with the local moments. To the right of this QCP, the Fermi surface should be "large" as local moments are no longer present. We note that it could be possible that one may observe effectively a large Fermi surface in some regions to the left of the HY QCP in which the local moment fraction is too small to preserve the small Fermi surface.
* It is likely that many, if not all, of the lines shown in Fig. <ref> do not represent a phase transition, but are indicative of crossover behavior.
* To the extent that one is far from ferromagnetic order, one can neglect the influence of vertex corrections on the static spin susceptibility. Under these circumstances, in both the Kondo liquid and magnetic quantum critical regimes, the uniform magnetic susceptibility χ and the specific heat C depend only on the heavy electron density of states, N(0), and one expects a temperature independent Wilson ratio.
* For nearly all heavy electron materials existing experiments have yet to provide us with an unambiguous signature for T_L, the temperature below which the Knight shift once again follows the spin susceptibility, since both now originate only in heavy electrons. Instead one has to rely upon suggestive experimental results such as the crossover in resistivity exponent in CeRhIn_5, a maximum in the magneto-resistance in CeCoIn_5, a change in the Hall coefficient in YbRh_2Si_2, and the phenomenological two-fluid expression that relates T_L to the intrinsic hybridization strength, f_0, Eq. <ref> <cit.>.

§ CERHIN_5

CeRhIn_5 provides an excellent test of our proposed new framework.
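Before examining the CeRhIn_5 phase diagram in detail, it may help to see the two-fluid expressions above in executable form. The sketch below is ours, not the authors'; the parameter values are placeholders rather than fitted data, and we cap f(T) at unity since below T_L hybridization is complete.

```python
import numpy as np

def f(T, f0, T_star):
    """Kondo liquid strength f(T) = f0 * (1 - T/T*)^(3/2) for T < T*,
    zero above T* and capped at 1 (hybridization complete below T_L)."""
    raw = f0 * np.clip(1.0 - np.asarray(T) / T_star, 0.0, None) ** 1.5
    return np.minimum(raw, 1.0)

def chi_total(T, f0, T_star, chi_SL, chi_KL):
    """Two-fluid susceptibility chi = [1 - f(T)] chi_SL + f(T) chi_KL,
    with chi_SL and chi_KL passed in as callables of temperature."""
    fT = f(T, f0, T_star)
    return (1.0 - fT) * chi_SL(T) + fT * chi_KL(T)

def T_L(f0, T_star):
    """Delocalization line T_L = T* (1 - f0^(-2/3)), defined for f0 > 1."""
    return T_star * (1.0 - f0 ** (-2.0 / 3.0)) if f0 > 1.0 else None

# Illustration with hypothetical numbers: f0 = 1.2 and T* = 17 K give
# T_L(1.2, 17.0) ~ 1.9 K, below which only heavy electrons remain.
```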
At zero magnetic field, the QCPs are hidden by superconductivity <cit.>. A de Haas-van Alphen (dHvA) experiment in which a strong magnetic field acts to suppress the superconductivity reveals a jump in the Fermi surface upon crossing 2.4 GPa in the high field state <cit.>. This establishes the location of its HY QCP. Since this location is close to the AF QCP obtained by extrapolating the pressure dependence of the Néel temperature, CeRhIn_5 is likely a Class I material. We now discuss in more detail the phase diagram shown in Fig. <ref>:

* Above ∼1.5 GPa, the values of T^* estimated from the resistivity peak <cit.>, the Knight shift anomaly <cit.>, and the Hall resistivity <cit.> are in good agreement. At ambient pressure and 1 GPa, the onset of the Knight shift anomaly and the Hall resistivity enable one to determine T^*, but the peak in the resistivity does not provide a useful estimate of the onset of heavy electron KL scaling behavior at these or other pressures below 1.5 GPa <cit.>. As we see below, the anomalous behavior of resistivity below 1.5 GPa originates in local moment fluctuations that, however, do not affect the Hall resistivity and the Knight shift. The increase of T^* with pressure seen here appears to be a general characteristic of the Ce-based heavy electron materials.
* The upper boundary of the magnetic quantum critical regime, T_QC, is determined from the onset of power law scaling of the resistivity <cit.> and, when the pressure exceeds 1.5 GPa, agrees with the temperature that marks the end of the Kondo liquid scaling at high temperatures <cit.>. Curiously, it displays a pressure dependence that is quite similar to that of T^*; it is roughly given by T^*/2 (as discussed below, the range between T^* and T_QC is much larger in CeCoIn_5, allowing us to more precisely identify in that case the KL scaling behavior). Below 1.5 GPa, the power law scaling of the resistivity is seen to begin at temperatures large compared to the end of heavy electron scaling behavior at T^*; indeed at pressures less than 0.3 GPa, it persists into the local moment regime. This finding, together with the quite similar anomalous behavior of the maximum in the resistivity, tells us that these have a common physical origin, and that both likely reflect local moment fluctuations brought about by the AF QCP.
* A third temperature scale, T_0, marks the end of AF quantum critical behavior at pressures less than ∼2 GPa; it is seen in the resistivity <cit.>, in a pseudogap-like feature in the spin-lattice relaxation rate, and in peaks in the Hall resistivity measurements <cit.>. These behaviors have a common physical origin in the AF QCP, whose fluctuations are seen in the resistivity measurements, while the "relocalization" of heavy electrons it brings about is clearly visible in the Knight shift anomaly <cit.>. At ambient pressure relocalization begins at T_0∼2T_N, which decreases with increasing pressure, and the anomalous behavior of the Hall resistivity follows a very similar pattern. T_0 is also seen in the neutron scattering experiments as the onset of precursor magnetic fluctuations of the long-range AF phase, and in the transport measurements as the temperature below which the thermal resistivity and the electrical resistivity deviate from each other <cit.>.
* Another crossover temperature scale, T_X, marks the lower boundary of the power law scaling in the resistivity at pressures greater than ∼2.7 GPa <cit.>. It increases with increasing pressure and extrapolates to zero at the QCP.
The T_0 and T_X lines are candidates for a quantum critical cone that describes the limits of the quantum critical region originating in the magnetic/hybridization QCP. While T_X could mark the delocalization temperature, T_L, below which there are no f electron local moments <cit.>, since the two types of quantum critical fluctuations are coupled, T_X is likely larger than, but proportional to T_L. * At high pressures, one finds that as one lowers the temperature below T_X the resistivity of the heavy electron liquid first exhibits anomalous behavior brought about by scattering against quantum critical fluctuations; however below a crossover temperature, T_FL, it exhibits the power law n=2 behavior expected for a Fermi liquid <cit.>. Extrapolations of the T_FL line to lower pressures suggests it approaches zero at the QCP, yielding a narrow bandwidth and a heavy effective mass for the heavy electrons <cit.>. The phase diagram of CeRhIn_5 provides a clear illustration of the interplay between the magnetic and hybridization quantum critical fluctuations. What is not shown there is that below 1 K, a slightly different power law scaling is observed at the quantum critical point if one applies a magnetic field large enough to kill the superconductivity <cit.>. This crossover for the critical exponent may be another indication of the coupling between the magnetic and hybridization QCPs, and is possibly described by the Abrahams-Wölfle model <cit.>.Some desirable further experimental investigations of CeRhIn_5 include what changes when the two quantum critical points become separated by doping or other means and the measurement of quantum critical scaling in the Knight shift anomaly at low temperatures. § CECOIN_5CeCoIn_5 is another much studied Class I material in which one can follow the interplay between the magnetic and hybridization quantum critical fluctuations as one reaches a QCP by applying pressure or magnetic field. The experimental results for the phase diagrams showing changes in scaling behavior with pressure and applied magnetic field are shown in Fig. <ref> for temperatures far below T^*∼ 60K<cit.>: * At pressures around 1.6 GPa, the critical exponent seen in resistivity measurements near to or below T_QC changes from n=1 to n=3/2 <cit.>. Since the latter corresponds to that expected for a 3D SDW QCP (with "disorder"), we see that high pressure appears to tune the system from 2D to 3D. This crossover of the scaling exponent may correspond to the anticipated delocalization line T_L, since 1.6 GPa is not far from the QC pressure of 1.1 GPa <cit.>. * The low-temperature cutoff of the Kondo liquid scaling at ambient pressure seems to be consistent with the onset of resistivity scaling at about 10-20K (T_QC, not shown in Fig. <ref>) <cit.>. However, considering possible changes in the QCP in the field/pressure phase diagram <cit.>, a comparison between the two may not be proper. In order to establish the interplay between two types of quantum criticality, it will be important to have measurements of Kondo liquid scaling (from the NMR Knight shift) and power law scaling (from the magnetic resistivity) over the entire temperature-pressure/magnetic field phase diagram, following the example of measurements on CeRhIn_5. * In Fig. <ref>B, n=1 scaling behavior in the resistivity is seen on both sides of the T_L line, but its lower boundary, T_0, shows a non-monotonic field-dependence <cit.>. 
Despite the fact that both sides have the same scaling exponent, it may be argued that the right hand side is governed by itinerant 2D SDW criticality, with an onset temperature that is different for the resistivity and thermal expansion, while the left hand side is governed by the local moment quantum criticality – possibly an unconventional quantum critical scaling owing to the interplay between magnetic and hybridization quantum critical fluctuations. * This change of character in the quantum critical scaling takes place at the T_L line which is seen to pass through the point at which T_0 changes slope. It plausibly reflects a change of character of the magnetic quantum criticality due to complete hybridization, as shown in the tentative phase diagram for Class I materials in Fig. <ref>. * At high temperatures, the interplay between the magnetic and hybridization fluctuations and its variation with field and pressure have not been well studied. It has been shown at ambient pressure that over a broad temperature region the resistivity is dominated by the scattering of light electrons from isolated local moments. As first noted by Nakatsuji et al. <cit.>, at ambient pressure this scattering explains why, as the temperature is reduced, the resistivity first increases, reaches a maximum at T^*, and then falls off as 1-f(T) until one reaches a temperature ∼ 0.2T^*. Its change in scaling behavior below this temperature reflects the emerging impact of quantum critical fluctuations on local moment behavior. It is quite possible that it is only at pressures greater than the critical pressure of ∼ 1.1GPa that resistivity measurements begin to tell us about heavy electron behavior at very low temperatures. * The anomalous Knight shift has been measured at high magnetic fields and shows Kondo liquid scaling below T^*∼ 60K down to T_QC∼10K (for field along the c-axis) <cit.>, behavior that we argue arises from the HY QCP. The NMR spin-lattice relaxation rate also exhibits universal scaling and points to an AF QCP at negative pressure under high magnetic field <cit.>. * In the vicinity of the magnetic-field induced QCP, the intrinsic hybridization parameter, f_0, has a power law dependence on the magnetic field, thereby establishing the quantum critical nature of the collective hybridization <cit.>.§ DISCUSSION AND CONCLUSIONOur proposed framework explains the measured scaling behavior of CeMIn_5 and provides insight into that seen in a number of other well studied Kondo lattice materials (see supplementary materials). However, further experiments and analysis are required before we are able to establish more generally the materials for which it is applicable and, for those where it does not apply, understand why it does not. Here are a few open questions that could be answered in future experiments: * The delocalization temperature T_L marks the onset of static hybridization (in the mean-field approach). While indirect evidence for its existence has been obtained in a number of ways, its direct determination is crucial for establishing the range of hybridization quantum critical behavior. This could be done by Knight shift experiments that show a return to one component behavior or by direct measurements of a Fermi surface change across the T_L line using either dHvA or ARPES at finite temperature. * In many materials, HY and AF QCPs appear to be almost the same. Our framework may be best verified by tuning their relative locations. 
In YbRh_2Si_2, this has been done by replacing Rh with Co or Ir <cit.>, and it will be interesting to check if these replacements lead to the expected change in the quantum critical scaling. Since the critical exponent for the resistivity is not universal <cit.>, tuning the relative positions of the two QCPs may tell us if this nonuniversality is related to the interplay or competition between the two types of quantum critical fluctuations. In addition, such tuning measurements will provide information on the regions of applicability of the microscopic scaling theory of Abrahams and Wölfle as the two QCPs are separated. * While the Knight shift and the resistivity probe quantum critical scaling, direct measurements of the associated quantum critical fluctuations might provide further information. Systematic studies using neutron scattering measurements in the momentum/frequency domain or pump probe technique in the time domain are desirable to establish the existence and interplay of both magnetic and hybridization quantum critical fluctuations. Theoretically, our proposed scenario may be captured qualitatively by an effective field theory of the Kondo-Heisenberg model. The logarithmic divergence of the Kondo liquid scaling below T^* may be ascribed to a marginal Fermi liquid state due to fluctuations of the hybridization field or an emergent gauge field arising from the Kondo-Heisenberg interaction. It might be possible to develop a physical description based on a (quantum) Ginzburg-Landau model in which the order parameter field, ϕ, the modulus of which corresponds essentially to the hybridization gap, is imagined to fluctuate in space and time, taking on a well-defined value only in the low temperature limit. We suggest that the variance of the order parameter field, suitably coarse-grained, increases gradually with decreasing temperature starting perhaps well above T^* and grows towards saturation at temperatures of the order of or below T_L. In the range between T_L and T^* the variance takes on intermediate values in keeping with the existence of regions in space and time which are strongly hybridized, forming the heavy fermion fluid, together with other regions in space and time which are weakly hybridized, forming the local-moment fluid in the two-fluid model. For a more complete understanding of T^*, it would be necessary to include not only fluctuations of the hybridization field, but also of the emergent gauge field as discussed above <cit.>. In this more complete description, T^* would be associated with the combined effects of the Kondo hybridization term and the Heisenberg intersite interaction term in the Kondo-Heisenberg Hamiltonian of Kondo lattice systems. This collective hybridization may be expected to lead to values of T^* that are quite different from the conventional single-ion Kondo temperature, a prediction supported by a number of detailed studies of Kondo lattice materials <cit.>. Below T_QC, the coupling to the spin fluctuations can lead to strong-coupling behavior in which the singularity exceeds that of the marginal Fermi liquid starting point and eventually in the vicinity of AF QCP, give rise to critical quasi-particle behavior as proposed by Abrahams and Wölfle (see supplementary materials) <cit.>.Y.Y. was supported by the National Natural Science Foundation of China (grant no. 
11522435), the State Key Development Program for Basic Research of China (2015CB921303), the Strategic Priority Research Program (B) of the Chinese Academy of Sciences (XDB07020200) and the Youth Innovation Promotion Association CAS. G.L. acknowledges support from the Engineering and Physical Sciences Research Council (EPSRC grant no. EP/K012894/1) and CNPq/Science without Borders Program. This work was performed in part at the Aspen Center for Physics, which is supported by National Science Foundation (NSF) grant no. PHY-1066293, and in part at the Santa Fe Institute. We thank many colleagues at the Aspen Center for Physics and elsewhere for stimulating discussions.100Yang2016 Yang Y-F (2016) Two-fluid model for heavy electron physics. Rep Prog Phys 79(7):074501.Park2011 Park T, et al. (2011) Unconventional quantum criticality in the pressure-induced heavy-fermion superconductor CeRhIn_5. J Phys Condens Matter 23(9):094218.Yang2012 Yang Y-F, Pines D (2012) Emergent states in heavy-electron materials. Proc Natl Acad Sci USA 109(45):E3060-E3066.Lin2015 Lin CH, et al. (2015) Evolution of hyperfine parameters across a quantum critical point in CeRhIn_5. Phys Rev B 92(15):155147.Hertz1976 Hertz JA (1976) Quantum critical phenomena. Phys Rev B 14(3):1165-1184.Millis1993 Millis AJ (1993) Effect of a nonzero temperature on quantum critical points in itinerant fermion systems. Phys Rev B 48(10):7183-7196.Moriya1995 Moriya T, Takimoto T (1995) Anomalous properties around magnetic instability in heavy electron systems. J Phys Soc Jpn 64(3):960-969.Abrahams2012 Abrahams E, Wölfle P (2012) Critical quasiparticle theory applied to heavy fermion metals near an antiferromagnetic quantum phase transition.Proc Natl Acad Sci USA 109(9):3238-3242Coleman2001 Coleman P, Pépin C, Si Q, Ramazashvili R (2001) How do Fermi liquids get heavy and die? J Phys Condens Matter 13(35):R723-R738.Si2001 Si Q, Rabello S, Ingersent K, Smith JL (2001) Locally critical quantum phase transitions in strongly correlated metals. Nature 413(6858):804-808.Senthil2004 Senthil T, Vojta M, Sachdev S (2004) Weak magnetism and non-Fermi liquids near heavy-fermion critical points. Phys Rev B 69(3):035111.Paul2007 Paul I, Pépin C, Norman MR (2007) Kondo breakdown and hybridization fluctuations in the Kondo-Heisenberg lattice. Phys Rev Lett 98(2):026402.Lonzarich2017 Lonzarich G, Pines D, Yang Y-F (2017) Toward a new microscopic framework for Kondo lattice materials. Rep Prog Phys 80(2):024501.Si2010 Si Q (2010) Quantum criticality and global phase diagram of magnetic heavy fermions. Phys Status Solidi B 247(3):476-484.Coleman2010 Coleman P, Nevidomskyy AH (2010) Frustration and the Kondo effect in heavy fermion materials. J Low Temp Phys 161(1):182-202.Shirer2012 Shirer KR, et al. (2012) Long range order and two-fluid behavior in heavy electron materials. Proc Natl Acad Sci USA 109(45):E3067-E3073.Coleman2005 Coleman P, Schofield AJ (2005) Quantum criticality. Nature 433(7023):226-229.Lohneysen2007 Löhneysen Hv, Rosch A, Vojta M, Wölfle P (2007) Fermi-liquid instabilities at magnetic quantum phase transitions. Rev Mod Phys 79(3):1015-1075.Gegenwart2008 Gegenwart P, Si Q, Steglich F (2008) Quantum criticality in heavy-fermion metals. Nat Phys 4(3):186-197.Nakatsuji2004 Nakatsuji S, Pines D, Fisk Z (2004) Two fluid description of the Kondo lattice. Phys Rev Lett 92(1):016401.Yang2008 Yang Y-F, Pines D (2008) Universal behavior in heavy-electron materials. 
Phys Rev Lett 100(9):096404. Yang2008nature Yang Y-F, Fisk Z, Lee H-O, Thompson JD, Pines D (2008) Scaling the Kondo lattice. Nature 454(7204):611-613. Yang2014 Yang Y-F, Pines D (2014) Quantum critical behavior in heavy electron materials. Proc Natl Acad Sci USA 111(23):8398-8403. Shishido2005 Shishido H, Settai R, Harima H, Ōnuki Y (2005) A drastic change of the Fermi surface at a critical pressure in CeRhIn_5: dHvA study under pressure. J Phys Soc Jpn 74(4):1103-1106. Park2010 Park T, et al. (2010) Field-induced quantum critical point in the pressure-induced superconductor CeRhIn_5. Phys Status Solidi B 247(3):553-556. Zaum2011 Zaum S, et al. (2011) Towards the identification of a quantum critical line in the (p, B) phase diagram of CeCoIn_5 with thermal-expansion measurements. Phys Rev Lett 106(8):087003. Sidorov2002 Sidorov VA, et al. (2002) Superconductivity and quantum criticality in CeCoIn_5. Phys Rev Lett 89(15):157004. Paglione2006 Paglione J, et al. (2006) Nonvanishing energy scales at the quantum critical point of CeCoIn_5. Phys Rev Lett 97(10):106606. Curro2004 Curro NJ, Young B-L, Schmalian J, Pines D (2004) Scaling in the emergent behavior of heavy-electron materials. Phys Rev B 70(23):235117. Yang2009 Yang Y-F, Urbano RR, Curro NJ, Pines D, Bauer ED (2009) Magnetic excitations in the Kondo liquid: Superconductivity and hidden magnetic quantum critical fluctuations. Phys Rev Lett 103(19):197004. Friedemann2009 Friedemann S, et al. (2009) Detaching the antiferromagnetic quantum critical point from the Fermi-surface reconstruction in YbRh_2Si_2. Nat Phys 5(7):465-469.
http://arxiv.org/abs/1702.08132v2
{ "authors": [ "Yi-feng Yang", "David Pines", "Gilbert Lonzarich" ], "categories": [ "cond-mat.str-el" ], "primary_category": "cond-mat.str-el", "published": "20170227032640", "title": "Quantum critical scaling and fluctuations in Kondo lattice materials" }
McGan: Mean and Covariance Feature Matching GAN

Youssef Mroueh* (AI Foundations and Watson Multimodal Algorithms and Engines Group, IBM T.J. Watson Research Center, NY, USA; mroueh@us.ibm.com), Tom Sercu* (AI Foundations and Watson Multimodal Algorithms and Engines Group, IBM T.J. Watson Research Center, NY, USA), Vaibhava Goel (Watson Multimodal Algorithms and Engines Group, IBM T.J. Watson Research Center, NY, USA). *Equal contribution.

Keywords: Generative Adversarial Networks, Integral Probability Metrics, McGan, Covariance Feature Matching

We introduce new families of Integral Probability Metrics (IPM) for training Generative Adversarial Networks (GAN). Our IPMs are based on matching statistics of distributions embedded in a finite dimensional feature space. Mean and covariance feature matching IPMs allow for stable training of GANs, which we will call McGan. McGan minimizes a meaningful loss between distributions.

§ INTRODUCTION

Unsupervised learning of distributions is an important problem, in which we aim to learn underlying features that unveil the hidden structure in the data. The classic approach to learning distributions is by explicitly parametrizing the data likelihood and fitting this model by maximizing the likelihood of the real data. An alternative recent approach is to learn a generative model of the data without explicit parametrization of the likelihood. Variational Auto-Encoders (VAE) <cit.> and Generative Adversarial Networks (GAN) <cit.> fall under this category. We focus on the GAN approach. In a nutshell, GANs learn a generator of the data via a min-max game between the generator and a discriminator, which learns to distinguish between “real” and “fake” samples. In this work we focus on the objective function that is being minimized between the learned generator distribution ℙ_θ and the real data distribution ℙ_r. The original work of <cit.> showed that in GAN this objective is the Jensen-Shannon divergence. <cit.> showed that other φ-divergences can be successfully used. The Maximum Mean Discrepancy objective (MMD) for GAN training was proposed in <cit.>. As shown empirically in <cit.>, one can train the GAN discriminator using the objective of <cit.> while training the generator using mean feature matching. An energy based objective for GANs was also developed recently <cit.>. Finally, closely related to our paper, the recent work Wasserstein GAN (WGAN) of <cit.> proposed to use the Earth Mover distance (EM) as an objective for training GANs. Furthermore <cit.> show that the EM objective has many advantages, as the loss function correlates with the quality of the generated samples and the mode dropping problem is reduced in WGAN. In this paper, inspired by the MMD distance and the kernel mean embedding of distributions <cit.>, we propose to embed distributions in a finite dimensional feature space and to match them based on their mean and covariance feature statistics. Incorporating first and second order statistics has a better chance to capture the various modes of the distribution.
While mean feature matching was empirically used in <cit.>, we show in this work that it is theoretically grounded: similarly to the EM distance in <cit.>, mean and covariance feature matching of two distributions can be written as a distance in the framework of Integral Probability Metrics (IPM) <cit.>. To match the means, we can use any ℓ_q norm, hence we refer to the mean matching IPM as IPM_μ,q. For matching covariances, in this paper we consider the Ky Fan norm, which can be computed cheaply without explicitly constructing the full covariance matrices, and refer to the corresponding IPM as IPM_Σ. Our technical contributions can be summarized as follows: a) We show in Section <ref> that the ℓ_q mean feature matching IPM_μ,q has two equivalent primal and dual formulations and can be used as an objective for GAN training in both formulations. b) We show in Section <ref> that the parametrization used in Wasserstein GAN corresponds to ℓ_1 mean feature matching GAN (IPM_μ,1 GAN in our framework). c) We show in Section <ref> that the covariance feature matching IPM_Σ also admits two dual formulations, and can be used as an objective for GAN training. d) Similar to Wasserstein GAN, we show that mean feature matching and covariance matching GANs (McGan) are stable to train, have reduced mode dropping, and that the IPM loss correlates with the quality of the generated samples.

§ INTEGRAL PROBABILITY METRICS

We define in this Section IPMs as a distance between distributions. Intuitively, each IPM finds a “critic” f <cit.> which maximally discriminates between the distributions.

§.§ IPM Definition

Consider a compact space X in ℝ^d. Let ℱ be a set of measurable and bounded real valued functions on X. Let 𝒫(X) be the set of measurable probability distributions on X. Given two probability distributions ℙ,ℚ ∈ 𝒫(X), the Integral Probability Metric (IPM) indexed by the function space ℱ is defined as follows <cit.>: d_ℱ(ℙ,ℚ) = sup_f∈ℱ |𝔼_x∼ℙ f(x) − 𝔼_x∼ℚ f(x)|. In this paper we are interested in symmetric function spaces ℱ, i.e. ∀f∈ℱ, −f∈ℱ, hence we can write the IPM in that case without the absolute value: d_ℱ(ℙ,ℚ) = sup_f∈ℱ {𝔼_x∼ℙ f(x) − 𝔼_x∼ℚ f(x)}. It is easy to see that d_ℱ defines a pseudo-metric over 𝒫(X) (d_ℱ is non-negative, symmetric and satisfies the triangle inequality; a pseudo-metric means that d_ℱ(ℙ,ℙ)=0 but d_ℱ(ℙ,ℚ)=0 does not necessarily imply ℙ=ℚ). By choosing ℱ appropriately <cit.>, various distances between probability measures can be defined. In the next subsection, following <cit.>, we show how to use IPMs to learn generative models of distributions; we then specify a special set of functions ℱ that makes the learning tractable.

§.§ Learning Generative Models with IPM

In order to learn a generative model of a distribution ℙ_r ∈ 𝒫(X), we learn a function g_θ: Z ⊂ ℝ^n_z → X, such that for z∼p_z, the distribution of g_θ(z) is close to the real data distribution ℙ_r, where p_z is a fixed distribution on Z (for instance z∼𝒩(0,I_n_z)). Let ℙ_θ be the distribution of g_θ(z), z∼p_z.
Using an IPM indexed by a function class ℱ, we shall therefore solve the following problem: min_g_θ d_ℱ(ℙ_r,ℙ_θ). Hence this amounts to solving the following min-max problem: min_g_θ sup_f∈ℱ 𝔼_x∼ℙ_r f(x) − 𝔼_z∼p_z f(g_θ(z)). Given samples {x_i, i=1…N} from ℙ_r and samples {z_j, j=1…M} from p_z, we shall solve the following empirical problem: min_g_θ sup_f∈ℱ (1/N)∑_i=1^N f(x_i) − (1/M)∑_j=1^M f(g_θ(z_j)); in the following we consider M=N for simplicity.

§ MEAN FEATURE MATCHING GAN

In this Section we introduce a class of functions ℱ having the form ⟨v, Φ_ω(x)⟩, where the vector v ∈ ℝ^m and Φ_ω: X → ℝ^m is a non linear feature map (typically parametrized by a neural network). We show in this Section that the IPM defined by this function class corresponds to the distance between the means of the distributions in the Φ_ω space.

§.§ IPM_μ,q: Mean Matching IPM

More formally, consider the following function space: ℱ_v,ω,p = { f(x) = ⟨v, Φ_ω(x)⟩ | v ∈ ℝ^m, ||v||_p ≤ 1, Φ_ω: X → ℝ^m, ω∈Ω }, where ||·||_p is the ℓ_p norm. ℱ_v,ω,p is the space of bounded linear functions defined in the non linear feature space induced by the parametric feature map Φ_ω. Φ_ω is typically a multi-layer neural network. The parameter space Ω is chosen so that the function space ℱ is bounded. Note that for a given ω, ℱ_v,ω,p is a finite dimensional Hilbert space. We recall here simple definitions on dual norms that will be necessary for the analysis in this Section. Let p,q ∈ [1,∞], such that 1/p + 1/q = 1. By duality of norms we have ||x||_q = max_{v, ||v||_p≤1} ⟨v,x⟩ and the Hölder inequality |⟨x,y⟩| ≤ ||x||_p ||y||_q. From the Hölder inequality we obtain the following bound: |f(x)| = |⟨v, Φ_ω(x)⟩| ≤ ||v||_p ||Φ_ω(x)||_q ≤ ||Φ_ω(x)||_q. To ensure that f is bounded, it is enough to consider Ω such that ||Φ_ω(x)||_q ≤ B, ∀x∈X. Given that the space X is bounded, it is sufficient to control the norm of the weights and biases of the neural network Φ_ω by regularizing the ℓ_∞ (clamping) or ℓ_2 norms (weight decay) to ensure the boundedness of ℱ_v,ω,p. Now that we have ensured the boundedness of ℱ_v,ω,p, we look at its corresponding IPM: d_ℱ_v,ω,p(ℙ,ℚ) = sup_{f∈ℱ_v,ω,p} 𝔼_x∼ℙ f(x) − 𝔼_x∼ℚ f(x) = max_{ω∈Ω, v, ||v||_p≤1} ⟨v, 𝔼_x∼ℙ Φ_ω(x) − 𝔼_x∼ℚ Φ_ω(x)⟩ = max_{ω∈Ω} [ max_{v, ||v||_p≤1} ⟨v, 𝔼_x∼ℙ Φ_ω(x) − 𝔼_x∼ℚ Φ_ω(x)⟩ ] = max_{ω∈Ω} ||μ_ω(ℙ) − μ_ω(ℚ)||_q, where we used the linearity of the function class and expectation in the first equality and the definition of the dual norm ||·||_q in the last equality, together with our definition of the mean feature embedding of a distribution ℙ ∈ 𝒫(X): μ_ω(ℙ) = 𝔼_x∼ℙ[Φ_ω(x)] ∈ ℝ^m. We see that the IPM indexed by ℱ_v,ω,p corresponds to the Maximum mean feature Discrepancy between the two distributions, where the maximum is taken over the parameter set Ω, and the discrepancy is measured in the ℓ_q sense between the mean feature embeddings of ℙ and ℚ. In other words this IPM is equal to the worst case ℓ_q distance between mean feature embeddings of distributions. We refer in what follows to d_ℱ_v,ω,p as IPM_μ,q.
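To make the inner quantity ||μ_ω(ℙ) − μ_ω(ℚ)||_q concrete, it can be estimated from two finite samples by averaging a feature map over each batch. The following minimal numpy sketch is purely illustrative and not part of the original paper; the random feature map phi and all sizes below are placeholder assumptions standing in for a trained Φ_ω:

import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((2, 64))               # toy feature map weights

def phi(x):
    # placeholder for the learned feature map Phi_omega : R^2 -> R^64
    return np.tanh(x @ W)

def mean_discrepancy(x_p, x_q, q=2):
    # empirical || mu_omega(P) - mu_omega(Q) ||_q for a fixed omega
    mu_p = phi(x_p).mean(axis=0)
    mu_q = phi(x_q).mean(axis=0)
    return np.linalg.norm(mu_p - mu_q, ord=q)

x_real = rng.standard_normal((1000, 2)) + 2.0  # toy "real" sample
x_fake = rng.standard_normal((1000, 2))        # toy "fake" sample
print(mean_discrepancy(x_real, x_fake, q=2))   # large: distributions differ
print(mean_discrepancy(x_real, x_real, q=2))   # exactly 0 on identical samples

Maximizing this quantity over ω (the outer max) is what the adversarial training described next implements.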
§.§ Mean Feature Matching GAN

We turn now to the problem of learning generative models with IPM_μ,q. Setting ℱ to ℱ_v,ω,p in Equation (<ref>) yields the following min-max problem for learning generative models: min_g_θ max_{ω∈Ω} max_{v, ||v||_p≤1} ℒ_μ(v,ω,θ), where ℒ_μ(v,ω,θ) = ⟨v, 𝔼_x∼ℙ_r Φ_ω(x) − 𝔼_z∼p_z Φ_ω(g_θ(z))⟩, or equivalently, using the dual norm: min_g_θ max_{ω∈Ω} ||μ_ω(ℙ_r) − μ_ω(ℙ_θ)||_q, where μ_ω(ℙ_θ) = 𝔼_z∼p_z Φ_ω(g_θ(z)). We refer to formulations (<ref>) and (<ref>) as primal and dual formulation respectively. The dual formulation in Equation (<ref>) has a simple interpretation as an adversarial learning game: while the feature space Φ_ω tries to map the mean feature embeddings of the real distribution ℙ_r and the fake distribution ℙ_θ to be far apart (maximize the ℓ_q distance between the mean embeddings), the generator g_θ tries to put them close to one another. Hence we refer to this IPM as mean matching IPM. We devise empirical estimates of both formulations in Equations (<ref>) and (<ref>), given samples {x_i, i=1…N} from ℙ_r and {z_i, i=1…N} from p_z. The primal formulation (<ref>) is more amenable to stochastic gradient descent since the expectation operation appears in a linear way in the cost function of Equation (<ref>), while it is non linear in the cost function of the dual formulation (<ref>) (inside the norm). We give here the empirical estimate of the primal formulation by giving the empirical estimate ℒ̂_μ(v,ω,θ) of the primal cost function: (P_μ): min_g_θ max_{ω∈Ω, v, ||v||_p≤1} ⟨v, (1/N)∑_i=1^N Φ_ω(x_i) − (1/N)∑_i=1^N Φ_ω(g_θ(z_i))⟩. An empirical estimate of the dual formulation can also be given as follows: (D_μ): min_g_θ max_{ω∈Ω} ||(1/N)∑_i=1^N Φ_ω(x_i) − (1/N)∑_i=1^N Φ_ω(g_θ(z_i))||_q. In what follows we refer to the problems given in (P_μ) and (D_μ) as ℓ_q Mean Feature Matching GAN. Note that while (P_μ) does not need real samples for optimizing the generator, (D_μ) does need samples from both real and fake. Furthermore we will need a large minibatch of real data in order to get a good estimate of the expectation. This makes the primal formulation more appealing computationally.
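A minimal PyTorch sketch of one alternation of (P_μ), loosely following the structure of the primal algorithm; the two-layer networks, learning rate and clipping constant here are illustrative assumptions, not the paper's exact configuration:

import torch
import torch.nn as nn

m, nz = 64, 32
phi = nn.Sequential(nn.Linear(2, 128), nn.ReLU(), nn.Linear(128, m))   # Phi_omega
gen = nn.Sequential(nn.Linear(nz, 128), nn.ReLU(), nn.Linear(128, 2))  # g_theta
v = torch.zeros(m, requires_grad=True)
opt_c = torch.optim.RMSprop(list(phi.parameters()) + [v], lr=5e-5)
opt_g = torch.optim.RMSprop(gen.parameters(), lr=5e-5)

def L_mu(x_real, z):
    # empirical primal objective <v, mean Phi(x_real) - mean Phi(g(z))>
    return v @ (phi(x_real).mean(0) - phi(gen(z)).mean(0))

x_real = torch.randn(64, 2) + 2.0
for _ in range(5):                              # critic ascent steps
    opt_c.zero_grad()
    (-L_mu(x_real, torch.randn(64, nz))).backward()
    opt_c.step()
    with torch.no_grad():                       # projections keeping F bounded
        v /= max(1.0, v.norm(p=2).item())       # project v on the l2 ball (p=2)
        for w in phi.parameters():
            w.clamp_(-0.01, 0.01)               # clip omega
opt_g.zero_grad()                               # generator descent step: as noted
(-(v @ phi(gen(torch.randn(64, nz))).mean(0))).backward()  # above, (P_mu) needs
opt_g.step()                                    # no real samples for this step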
§.§ Related Work

We show in this Section that several previous works on GANs can be written within the ℓ_q mean feature matching IPM (IPM_μ,q) minimization framework: a) Wasserstein GAN (WGAN): <cit.> recently introduced Wasserstein GAN. While the main motivation of this paper is to consider the IPM indexed by Lipschitz functions on X, we show that the particular parametrization considered in <cit.> corresponds to a mean feature matching IPM. Indeed <cit.> consider the function set parametrized by a convolutional neural network with a linear output layer and weight clipping. Written in our notation, the last linear layer corresponds to v, and the convolutional neural network below corresponds to Φ_ω. Since v and ω are simultaneously clamped, this corresponds to restricting v to be in the ℓ_∞ unit ball, and to defining in Ω constraints on the ℓ_∞ norms of ω. In other words <cit.> consider functions in ℱ_v,ω,p, where p=∞. Setting p=∞ in Equation (<ref>), and q=1 in Equation (<ref>), we see that in WGAN we are minimizing d_ℱ_v,ω,∞, which corresponds to ℓ_1 mean feature matching GAN. b) MMD GAN: Let ℋ be a Reproducing Kernel Hilbert Space (RKHS) with k its reproducing kernel. For any valid PSD kernel k there exists an infinite dimensional feature map Φ: X → ℋ such that k(x,y) = ⟨Φ(x),Φ(y)⟩_ℋ. For an RKHS, Φ is usually noted k(x,·) and satisfies the reproducing property f(x) = ⟨f, Φ(x)⟩_ℋ, for all f ∈ ℋ. Setting ℱ = { f | ||f||_ℋ ≤ 1 } in Equation (<ref>), the IPM d_ℱ has a simple expression: d_ℱ(ℙ,ℚ) = sup_{f, ||f||_ℋ≤1} ⟨f, 𝔼_x∼ℙ Φ(x) − 𝔼_x∼ℚ Φ(x)⟩ = ||μ(ℙ) − μ(ℚ)||_ℋ, where μ(ℙ) = 𝔼_x∼ℙ Φ(x) ∈ ℋ is the so-called kernel mean embedding <cit.>. In this case d_ℱ is the so-called Maximum kernel Mean Discrepancy (MMD) <cit.>. Using the reproducing property, MMD has a closed form in terms of the kernel k. Note that IPM_μ,2 is a special case of MMD when the feature map is finite dimensional, with the main difference that the feature map is fixed in the case of MMD and learned in the case of IPM_μ,2. <cit.> showed that GANs can be learned using MMD with a fixed Gaussian kernel. c) Improved GAN: Building on the pioneering work of <cit.>, <cit.> suggested to learn the discriminator with the binary cross entropy criterion of GAN while learning the generator with ℓ_2 mean feature matching. The main difference of our IPM_μ,2 GAN is that both “discriminator” and “generator” are learned using the mean feature matching criterion, with additional constraints on Φ_ω.

§ COVARIANCE FEATURE MATCHING GAN

§.§ IPM_Σ: Covariance Matching IPM

As follows from our discussion of mean matching IPM, comparing two distributions amounts to comparing a first order statistic, the mean of their feature embeddings. Here we ask how to incorporate second order statistics, i.e. covariance information of feature embeddings. In this Section we will provide a function space ℱ such that the IPM in Equation (<ref>) captures second order information. Intuitively, a distribution of points represented in a feature space can be approximately captured by its mean and its covariance. Commonly in unsupervised learning, this covariance is approximated by its first k principal components (PCA directions), which capture the directions of maximal variance in the data. Similarly, the metric we define in this Section will find k directions that maximize the discrimination between the two covariances. Adding second order information would enrich the discrimination power of the feature space (see Figure <ref>). This intuition motivates the following function space of bilinear functions in Φ_ω: ℱ_U,V,ω = { f(x) = ∑_j=1^k ⟨u_j, Φ_ω(x)⟩⟨v_j, Φ_ω(x)⟩ | {u_j}, {v_j} ∈ ℝ^m orthonormal, j=1…k, ω∈Ω }. Note that the set ℱ_U,V,ω is symmetric and hence the IPM indexed by this set (Equation (<ref>)) is well defined. It is easy to see that ℱ_U,V,ω can be written as ℱ_U,V,ω = { f(x) = ⟨U^⊤Φ_ω(x), V^⊤Φ_ω(x)⟩ | U,V ∈ ℝ^{m×k}, U^⊤U = I_k, V^⊤V = I_k, ω∈Ω }; the parameter set Ω is such that the function space remains bounded. Let Σ_ω(ℙ) = 𝔼_x∼ℙ Φ_ω(x)Φ_ω(x)^⊤ be the uncentered feature covariance embedding of ℙ. It is easy to see that 𝔼_x∼ℙ f(x) can be written in terms of U, V, and Σ_ω(ℙ): 𝔼_x∼ℙ f(x) = 𝔼_x∼ℙ ⟨U^⊤Φ_ω(x), V^⊤Φ_ω(x)⟩ = Trace(U^⊤Σ_ω(ℙ)V). For a matrix A ∈ ℝ^{m×m}, we denote by σ_j(A) the singular values of A, j=1…m, in descending order. The 1-Schatten norm, or nuclear norm, is defined as the sum of the singular values, ||A||_* = ∑_j=1^m σ_j(A). We denote by [A]_k the rank-k approximation of A, and O_{m,k} = { M ∈ ℝ^{m×k} | M^⊤M = I_k }. Consider the IPM induced by this function set. Let ℙ,ℚ ∈ 𝒫(X); we have: d_ℱ_U,V,ω(ℙ,ℚ) = sup_{f∈ℱ_U,V,ω} 𝔼_x∼ℙ f(x) − 𝔼_x∼ℚ f(x) = max_{ω∈Ω} max_{U,V∈O_{m,k}} Trace[U^⊤(Σ_ω(ℙ) − Σ_ω(ℚ))V] = max_{ω∈Ω} ∑_j=1^k σ_j(Σ_ω(ℙ) − Σ_ω(ℚ)) = max_{ω∈Ω} ||[Σ_ω(ℙ) − Σ_ω(ℚ)]_k||_*, where we used the variational definition of singular values and the definition of the nuclear norm. Note that U, V are the left and right singular vectors of Σ_ω(ℙ) − Σ_ω(ℚ).
Hence d_ℱ_U,V,ω measures the worst case distance between the covariance feature embeddings of the two distributions, this distance being measured with the Ky Fan k-norm (nuclear norm of the truncated covariance difference). Hence we call this IPM covariance matching IPM, IPM_Σ.

§.§ Covariance Matching GAN

Turning now to the problem of learning a generative model g_θ of ℙ_r ∈ 𝒫(X) using IPM_Σ, we shall solve: min_g_θ d_ℱ_U,V,ω(ℙ_r,ℙ_θ). This has the following primal formulation: min_g_θ max_{ω∈Ω, U,V∈O_{m,k}} ℒ_σ(U,V,ω,θ), where ℒ_σ(U,V,ω,θ) = 𝔼_x∼ℙ_r ⟨U^⊤Φ_ω(x), V^⊤Φ_ω(x)⟩ − 𝔼_z∼p_z ⟨U^⊤Φ_ω(g_θ(z)), V^⊤Φ_ω(g_θ(z))⟩, or equivalently the following dual formulation: min_g_θ max_{ω∈Ω} ||[Σ_ω(ℙ_r) − Σ_ω(ℙ_θ)]_k||_*, where Σ_ω(ℙ_θ) = 𝔼_z∼p_z Φ_ω(g_θ(z))Φ_ω(g_θ(z))^⊤. The dual formulation in Equation (<ref>) shows that learning generative models with IPM_Σ consists in an adversarial game between the feature map and the generator: while the feature map tries to maximize the distance between the feature covariance embeddings of the distributions, the generator tries to minimize this distance. Hence we call learning with IPM_Σ covariance matching GAN. We give here an empirical estimate of the primal formulation in Equation (<ref>), which is amenable to stochastic gradient descent; the dual requires nuclear norm minimization and is more involved. Given {x_i, x_i∼ℙ_r} and {z_j, z_j∼p_z}, the covariance matching GAN can be written as follows: min_g_θ max_{ω∈Ω, U,V∈O_{m,k}} ℒ̂_σ(U,V,ω,θ), where ℒ̂_σ(U,V,ω,θ) = (1/N)∑_i=1^N ⟨U^⊤Φ_ω(x_i), V^⊤Φ_ω(x_i)⟩ − (1/N)∑_j=1^N ⟨U^⊤Φ_ω(g_θ(z_j)), V^⊤Φ_ω(g_θ(z_j))⟩.

§.§ Mean and Covariance Matching GAN

In order to match first and second order statistics we propose the following simple extension: min_g_θ max_{ω∈Ω, v, ||v||_p≤1, U,V∈O_{m,k}} ℒ_μ(v,ω,θ) + ℒ_σ(U,V,ω,θ), which has a simple dual adversarial game interpretation: min_g_θ max_{ω∈Ω} ||μ_ω(ℙ_r) − μ_ω(ℙ_θ)||_q + ||[Σ_ω(ℙ_r) − Σ_ω(ℙ_θ)]_k||_*, where the discriminator finds a feature space that discriminates between means and variances of real and fake, and the generator tries to match the real statistics. We can also give empirical estimates of the primal formulation similar to the expressions given in the paper.

§ ALGORITHMS

We present in this Section our algorithms for mean and covariance feature matching GAN (McGan) with IPM_μ,q and IPM_Σ. Mean Matching GAN. Primal P_μ: We give in Algorithm <ref> an algorithm for solving the primal IPM_μ,q GAN (P_μ). Algorithm <ref> is adapted from <cit.> and corresponds to their algorithm for p=∞. The main difference is that we allow projection of v on different ℓ_p balls, and we maintain the clipping of ω to ensure boundedness of Φ_ω. For example, for p=2, proj_{B_ℓ_2}(v) = min(1, 1/||v||_2)v. For p=∞ we obtain the same clipping as in <cit.>, proj_{B_ℓ_∞}(v) = clip(v,−c,c) for c=1. Dual D_μ: We give in Algorithm <ref> an algorithm for solving the dual formulation IPM_μ,q GAN (D_μ). As mentioned earlier, we need samples from “real” and “fake” for training both the generator and the “critic” feature space. Covariance Matching GAN. Primal P_Σ: We give in Algorithm <ref> an algorithm for solving the primal of IPM_Σ GAN (Equation (<ref>)). The algorithm performs a stochastic gradient ascent on (ω,U,V) and a descent on θ. We maintain clipping on ω to ensure boundedness of Φ_ω, and perform a QR retraction on the Stiefel manifold O_{m,k} <cit.>, maintaining orthonormality of U and V.
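Two pieces of this machinery are easy to illustrate in isolation. The numpy sketch below is our own illustration with arbitrary sizes: it first evaluates the dual quantity ||[Σ_ω(ℙ) − Σ_ω(ℚ)]_k||_* as the sum of the top-k singular values of the covariance difference, then shows a QR retraction of the kind used to keep U (or V) on the Stiefel manifold O_{m,k}:

import numpy as np

def kyfan_k_norm(feat_p, feat_q, k=16):
    # || [Sigma(P) - Sigma(Q)]_k ||_* = sum of the k largest singular values
    sigma_p = feat_p.T @ feat_p / len(feat_p)   # uncentered feature covariance
    sigma_q = feat_q.T @ feat_q / len(feat_q)
    s = np.linalg.svd(sigma_p - sigma_q, compute_uv=False)
    return s[:k].sum()

def qr_retraction(M):
    # project a gradient-updated U (or V) back onto the Stiefel manifold
    Q, R = np.linalg.qr(M)
    return Q * np.sign(np.diag(R))              # sign fix makes the result unique

rng = np.random.default_rng(0)
feat_p = rng.standard_normal((1000, 64))
feat_q = 2.0 * rng.standard_normal((1000, 64))  # same mean, different covariance
print(kyfan_k_norm(feat_p, feat_q))             # > 0: detected by IPM_Sigma
U = qr_retraction(rng.standard_normal((64, 16)))
print(np.allclose(U.T @ U, np.eye(16)))         # True: U^T U = I_k

Note that the two toy samples have approximately equal means, so a pure mean matching objective would barely separate them; the covariance term is precisely what discriminates here.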
§ EXPERIMENTS

We train McGan for image generation with both Mean Matching and Covariance Matching objectives. We show generated images on the labeled faces in the wild (lfw) <cit.>, LSUN bedrooms <cit.>, and cifar-10 <cit.> datasets. It is well-established that evaluating generative models is hard <cit.>. Many GAN papers rely on a combination of samples for quality evaluation, supplemented by a number of heuristic quantitative measures. We will mostly focus on training stability by showing plots of the loss function, and will provide generated samples to claim comparable sample quality between methods, but we will avoid claiming better sample quality. These samples are all generated at random and are not cherry-picked. The design of g_θ and Φ_ω follows DCGAN principles <cit.>, with both g_θ and Φ_ω being convolutional networks with batch normalization <cit.> and ReLU activations. Φ_ω has output size bs × F × 4 × 4; the inner product ⟨v, Φ_ω(x)⟩ can then equivalently be implemented as a linear layer acting on the flattened features or as an explicit matrix multiplication. We generate 64×64 images for lfw and LSUN and 32×32 images on cifar, and train with minibatches of size 64. We follow the experimental framework and implementation of <cit.>, where we ensure the boundedness of Φ_ω by clipping the weights pointwise to the range [-0.01, 0.01]. Primal versus dual form of mean matching. To illustrate the validity of both the primal and dual formulations, we trained mean matching GANs both in the primal and dual form, see respectively Algorithm <ref> and <ref>. Samples are shown in Figure <ref>. Note that optimizing the dual form is less efficient and only feasible for mean matching, not for covariance matching. The primal formulation of IPM_μ,1 GAN corresponds to clipping v, i.e. the original WGAN, while for IPM_μ,2 we divide v by its ℓ_2 norm if it becomes larger than 1. In the dual, for q=2 we noticed little difference between maximizing the ℓ_2 norm or its square. We observed that the default learning rates from WGAN (5e-5) are optimal for both primal and dual formulations. Figure <ref> shows the loss (i.e. the IPM estimate) dropping steadily for both the primal and dual formulations, independently of the choice of the ℓ_q norm. We also observed that during the whole training process, samples generated from the same noise vector across iterations remain similar in nature (face identity, bedroom style), while details and background evolve. This qualitative observation indicates valuable stability of the training process. For the dual formulation (Algorithm <ref>), we confirmed the hypothesis that we need a good estimate of μ_ω(ℙ_r) in order to compute the gradient of the generator ∇_θ: we needed to increase the minibatch size of real data threefold to 3×64. Covariance GAN. We now experimentally investigate the IPM defined by covariance matching. For this section and the following, we use only the primal formulation, i.e. with explicit u_j and v_j orthonormal (Algorithm <ref>). Figures <ref> and <ref> show samples and loss from lfw and LSUN training respectively. We use Algorithm <ref> with k=16 components. We obtain samples of comparable quality to the mean matching formulations (Figure <ref>), and we found training to be stable independently of hyperparameters like the number of components k, varied between 4 and 64.
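As mentioned above, with Φ_ω producing a bs × F × 4 × 4 output, evaluating the critic value ⟨v, Φ_ω(x)⟩ reduces to flattening followed by a dot product. A minimal PyTorch sketch of that step; the single convolution here is a toy stand-in for the DCGAN-style Φ_ω, and all sizes are illustrative assumptions:

import torch
import torch.nn as nn

bs, F = 64, 512
x = torch.randn(bs, 3, 32, 32)                   # a cifar-sized minibatch
conv = nn.Conv2d(3, F, kernel_size=8, stride=8)  # toy Phi_omega: output bs x F x 4 x 4
v = torch.randn(F * 4 * 4)

feat = conv(x).view(bs, -1)   # flatten to bs x (F*4*4)
f_x = feat @ v                # f(x) = <v, Phi_omega(x)> for each example
print(f_x.shape)              # torch.Size([64]) -- one critic value per image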
Covariance GAN with labels and conditioning. Finally, we conduct experiments on the cifar-10 dataset, where we leverage the additional label information by training a GAN with a conditional generator g_θ(z,y), with label y ∈ [1,K] supplied as a one-hot vector concatenated with the noise z. Similar to Infogan <cit.> and AC-GAN <cit.>, we add a new output layer, S ∈ ℝ^{K×m}, and will write the logits SΦ_ω(x) ∈ ℝ^K. We now optimize a combination of the IPM loss and the cross-entropy loss CE(x,y;S,Φ_ω) = −log[Softmax(SΦ_ω(x))_y]. The critic loss becomes ℒ_D = ℒ̂_σ − λ_D (1/N) ∑_{(x_i,y_i)∈lab} CE(x_i,y_i;S,Φ_ω), with hyper-parameter λ_D. We now sample three minibatches for each critic update: a labeled batch for the CE term, and for the IPM a real unlabeled batch plus a generated batch. The generator loss (with hyper-parameter λ_G) becomes ℒ_G = ℒ̂_σ + λ_G (1/N) ∑_{z_i∼p_z, y_i∼p_y} CE(g_θ(z_i,y_i),y_i;S,Φ_ω), which still only requires a single minibatch to compute. We confirm the improved stability and sample quality of objectives including covariance matching with inception scores <cit.> in Table <ref>. Samples corresponding to the best inception score (Sigma) are given in Figure <ref>. Using the code released with WGAN <cit.>, these scores come from the DCGAN model with a deeper generator and discriminator. More samples are in the appendix, with combinations of Mean and Covariance Matching. Notice the rows corresponding to recognizable classes, while the noise z (shared within each column) clearly determines other elements of the visual style like dominant color, across label conditioning.
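In code, the combined critic objective is just the IPM estimate minus a scaled cross-entropy over the labeled batch. A hedged sketch of that combination, written by us for illustration; λ_D = 1.0 is an arbitrary value and ipm_term stands in for the estimate ℒ̂_σ computed as in the previous sections:

import torch
import torch.nn.functional as F

def critic_loss(ipm_term, logits_lab, y_lab, lam_D=1.0):
    # L_D = L_sigma_hat - lam_D * CE ; the critic maximizes L_D, so we
    # return -L_D for use with a gradient-descent optimizer
    ce = F.cross_entropy(logits_lab, y_lab)      # -log Softmax(S Phi_omega(x))_y
    return -(ipm_term - lam_D * ce)

logits = torch.randn(64, 10, requires_grad=True) # S Phi_omega(x) for K=10 classes
y = torch.randint(0, 10, (64,))
loss = critic_loss(torch.tensor(0.5), logits, y)
loss.backward()                                  # gradients flow into S and Phi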
§ DISCUSSION

We noticed the influence of clipping on the capacity of the critic: a higher number of feature maps was needed to compensate for clipping. The question remains which alternatives to clipping of Φ_ω can ensure boundedness. For example, we successfully used an ℓ_2 penalty on the weights of Φ_ω. Other directions are to explore geodesic distances between the covariances <cit.>, and extensions of the IPM framework to the multimodal setting <cit.>.

Supplementary Material for McGan: Mean and Covariance Feature Matching GAN
Youssef Mroueh, Tom Sercu, Vaibhava Goel

§ SUBSPACE MATCHING INTERPRETATION OF COVARIANCE MATCHING GAN

Let Δ_ω = Σ_ω(ℙ) − Σ_ω(ℚ). Δ_ω is a symmetric matrix but not PSD, with the property that its eigenvalues λ_j are related to its singular values by σ_j = |λ_j|, and its left and right singular vectors coincide with its eigenvectors and satisfy u_j = sign(λ_j) v_j. One can ask here if we can avoid having both U and V in the definition of IPM_Σ, since at the optimum u_j = ±v_j. One could consider δE_ω(ℙ_r,ℙ_θ) defined as follows: max_{ω∈Ω, U∈O_{m,k}} 𝔼_x∼ℙ_r ||U^⊤Φ_ω(x)||^2 − 𝔼_z∼p_z ||U^⊤Φ_ω(g_θ(z))||^2, i.e. the energy in the subspace of the real data minus the energy in the subspace of the fake data, and then solve min_g_θ δE_ω(ℙ_r,ℙ_θ). Note that: δE_ω(ℙ_r,ℙ_θ) = max_{ω∈Ω, U∈O_{m,k}} Trace(U^⊤(Σ_ω(ℙ_r) − Σ_ω(ℙ_θ))U) = max_{ω∈Ω} ∑_i=1^k λ_i(Δ_ω). δE_ω is not symmetric; furthermore, the sum of those eigenvalues is not guaranteed to be positive, hence δE_ω is not guaranteed to be non-negative, and hence does not define an IPM. Noting that σ_i(Δ_ω) = |λ_i(Δ_ω)|, we have: IPM_Σ(ℙ_r,ℙ_θ) = ∑_i=1^k σ_i(Δ_ω) ≥ ∑_i=1^k λ_i(Δ_ω) = δE_ω(ℙ_r,ℙ_θ). Hence δE is not an IPM but can be optimized as a lower bound of IPM_Σ. This would have an energy interpretation as in the energy based GAN introduced recently <cit.>: the discriminator defines a subspace that has higher energy on real data than on fake data, and the generator maximizes its energy in this subspace.

§ MEAN AND COVARIANCE MATCHING LOSS COMBINATIONS

We report below samples for McGan, with different IPM_μ,q and IPM_Σ combinations. All results are reported for the same architecture choice for generator and discriminator, which produced qualitatively good samples with IPM_Σ (the same one reported in Section 6 of the main paper). Note that in Figure <ref>, with the same hyper-parameters and architecture choice, WGAN failed to produce good samples. In other configurations training converged.
http://arxiv.org/abs/1702.08398v2
{ "authors": [ "Youssef Mroueh", "Tom Sercu", "Vaibhava Goel" ], "categories": [ "cs.LG", "stat.ML" ], "primary_category": "cs.LG", "published": "20170227174630", "title": "McGan: Mean and Covariance Feature Matching GAN" }
Almost by definition, radical innovations create a need to revise existing classification systems. In this paper, we argue that classification system changes and patent reclassification are common and reveal interesting information about technological evolution. To support our argument, we present three sets of findings regarding classification volatility in the U.S. patent classification system. First, we study the evolution of the number of distinct classes. Reconstructed time series based on the current classification scheme are very different from historical data. This suggests that using the current classification to analyze the past produces a distorted view of the evolution of the system. Second, we study the relative sizes of classes. The size distribution is exponential so classes are of quite different sizes, but the largest classes are not necessarily the oldest. To explain this pattern with a simple stochastic growth model, we introduce the assumption that classes have a regular chance to be split. Third, we study reclassification. The share of patents that are in a different class now than they were at birth can be quite high. Reclassification mostly occurs across classes belonging to the same 1-digit NBER category, but not always. We also document that reclassified patents tend to be more cited than non-reclassified ones, even after controlling for grant year and class of origin.

Keywords: patents, classification, reclassification. JEL codes: O30, O39.

§ INTRODUCTION

The U.S. patent system contains around 10 million patents classified in about 500 main classes. However, some classes are much larger than others, some classes are much older than others, and more importantly none of these classes can be thought of as a once-and-for-all well defined entity. Due to its important legal role, the U.S. Patent and Trademark Office (USPTO) has constantly devoted resources to improve the classification of inventions, so that the classification system has greatly evolved over time, reflecting contemporaneous technological evolution. Classifications evolve because new classes are created but also because existing classes are abolished, merged and split. In fact, all current classes in 2015 have been established in the U.S. Patent Classification System (USPCS) after 1899, even though the first patent was granted in 1790 and the first classification system was created in 1829-1830. To give just another example, out of all patents granted in 1976, 40% are in a different main class now than they were in 1976. To maintain the best possible level of searchability, the USPTO reclassifies patents so that at every single moment in time the patents are classified according to a coherent, up-to-date taxonomy. The downside of this is that the current classification is not meant to reflect the historical description of technological evolution as it unfolded. In other words, while the classification system provides a consistent classification of all the patents, this consistency is not time invariant.
Observers at different points in time have a different idea of what is a consistent classification of the past, even when classifying the same set of past patents. In this paper, we focus on the historical evolution of the U.S. patent classification. We present three sets of findings. First, we study the evolution of the number of distinct classes, contrasting current and historical classification systems. Recent studies <cit.> have shown that it is possible to reconstruct the long-run evolution of the number of subclasses using the current classification system. This allowed them to obtain interesting results on the types of recombinations and on the relative rates of introduction of new subclasses and new combinations. An alternative way to count the number of distinct categories is to go back to the archives and check how many classes actually existed at different points in the past. We found important differences between the historical and reconstructed evolution of the classification system. In particular, we find that historically the growth of the number of distinct classes has been more or less linear, with about two and a half classes added per year. By contrast, the reconstructed evolution – which considers how many current classes are needed to classify all patents granted before a given date – suggests a different pattern, with most classes created in the 19^th century and a slowdown in the rate of introduction of novel classes afterwards. Similarly, using the historical classes we find that the relationship between the number of classes and the number of patents is compatible with Heaps' law, a power law scaling of the number of categories with the number of items, originally observed between the number of different words and the total number of words in a text <cit.>. Using the reconstructed evolution, Heaps' law does not hold over the long run. Knowing the number of distinct classes, the next question is about their growth and relative size (in terms of the number of patents). Thus our second set of findings concerns the size distribution of classes. We find that it is exponential, confirming a result of <cit.> on a much more restricted sub-sample. We also find that there is no clear relationship between the size and the age of classes, which rules out an explanation of the exponential distribution in terms of simple stochastic growth models in which classes are created once and for all. Third, we hypothesize that new technology fields and radical innovations tend to be associated with a higher reclassification activity. This suggests that the history of reclassification contains interesting information on the most transformative innovations. Our work here is related to <cit.>, who study how a range of metrics (claims, references, extensions, etc.) correlate with reclassification for 3 million utility patents since 1994. We used the data since 1976, for which we observe the class of origin and the citation statistics. It appears that reclassified patents are more cited than non-reclassified patents. We also construct a reclassification flow diagram, with aggregation at the level of NBER patent categories <cit.>. This reveals that a non-negligible share of patents are reclassified across NBER categories. We find that patents in “Computers” and in “Electronics” are often reclassified in other NBER categories, which is not the case for other categories such as “Drugs”.
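As an aside on the Heaps' law statement above, the exponent β of a relationship C = aN^β between the number of classes C and the cumulative number of patents N can be estimated by a simple log-log regression. A minimal Python sketch, where the counts are made-up illustrative placeholders and not the paper's actual data:

import numpy as np

# illustrative cumulative counts only -- placeholders for the historical series
patents = np.array([1e4, 1e5, 1e6, 5e6, 1e7])   # cumulative number of patents
classes = np.array([30, 80, 200, 380, 474])      # number of distinct classes

# Heaps' law C = a * N^beta is linear in log-log coordinates
beta, log_a = np.polyfit(np.log(patents), np.log(classes), 1)
print(f"estimated Heaps exponent beta = {beta:.2f}")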
We then discuss three examples of new classes (Fabric, Combinatorial Chemistry and Artificial Intelligence). Finally, we argue that it is not possible to explain the observed patterns without accounting for reclassification. We develop a simple model in which classes grow according to preferential attachment but have a probability of being split. The model's only inputs are the number of patents and classes in 2015 and the Heaps' law exponent. Despite this extreme parsimony, the model is able to reproduce i) the historical and reconstructed patterns of growth of the number of classes, ii) the size distribution and (partially) the lack of age-size relationship, and iii) the time evolution of the reclassification rates. The empirical evidence that we present and the assumptions we need to make for the model make it clear that the USPCS has evolved considerably and it is hardly possible to think of patent classes as technological domains with a stable definition. The classification system cannot be well understood as a system in which categories are created once-and-for-all and accumulate patents over time. Instead, it is better understood as a system that is constantly re-organized. Because of this, using the current classification system to study a set of older patents is akin to looking at the past with today's glasses. In this paper, we not only show the differences between the historical and reconstructed reality, but we also explain how these differences emerged. The paper is organized as follows. Section <ref> details our motivation, gives some background on categorization and reviews the literature on technological categories. Section <ref> describes the USPCS and our data sources. Section <ref> presents our results on the evolution of the number of classes. Section <ref> discusses the size distribution of classes. Section <ref> presents our results on reclassification since 1976. Section <ref> presents a model that reproduces the main empirical patterns discovered in the previous sections. The last section discusses the results, motivates further research and concludes.

§ WHY IS STUDYING CLASSIFICATION SYSTEMS IMPORTANT?

Classification systems are pervasive because they are extremely useful. At a fundamental level, categorization is at the basis of pattern recognition, learning, and sense-making. Producing a discourse regarding technologies and their evolution is no exception. As a matter of fact, theoretical and a fortiori empirical studies almost always rely on some sort of grouping – or aim at defining one. Historically, the interest in technology classifications has been mostly driven by the need to match technological and industrial activities <cit.>. Since patented technologies are classified according to their function, not their industry of use or origin, this problem is particularly difficult. Clearly, a good understanding of both industry and patent classification systems is crucial to build a good crosswalk. Here we highlight the need to acknowledge that both classification systems change.
For this reason our results give a strong justification for automated, probabilistic, data-driven approaches to the construction of concordance tables, such as the recent proposal by <cit.>, which essentially works by looking for keywords of industry definitions in patents to construct technology-industry tables. With the rise of interest in innovation itself, many studies have used existing patent classifications to study spillovers across technology domains, generally considering classification as static. For instance, <cit.> studied the growth and distribution of patent classes since 1976; <cit.>, <cit.>, <cit.> and <cit.> studied co-classification patterns; and <cit.> and <cit.> studied the patterns of citations across USPCS or NBER technology classes. Similarly, technological classification systems are used to estimate technological distance, typically between firms or inventors in the “technology space”, based on the classification of their patent portfolio <cit.>. Additional methodological contributions include <cit.>, who have pointed out that using all the codes listed on patents increases the sample size and thus reduces bias in measuring proximity, and <cit.>, who argues for using the hierarchical structure of the classification system[In a related context (how professional diversity scales with city size), <cit.> and <cit.> exploited the different layers of industry and occupation classification systems to identify resolution-independent quantities. Measuring diversity depends on which layer of the classification system one uses, but in such a way that the infinite resolution limit (deepest classification layer) exists and can be used to characterise universal quantities.]. In spite of this wide use of the current patent classification system, there have been no quantitative studies of the historical evolution of the system apart from the counts of the number of distinct classes by <cit.> and <cit.>, which we update here. Recently though, <cit.> originated a renewed interest in patent classification by arguing that the classification of patents in multiple fields is indicative of knowledge recombination. Using the complete record of US patents classified according to the current classification system, <cit.> studied the subclasses (“technology codes”). They found that the number of subclasses used up to a given year is proportional to the cumulative number of patents until about 1870, but grew more and more slowly afterwards. Remarkably, however, this slowdown in the “introduction” of new subclasses does not apply to new combinations of subclasses. <cit.> found that the number of combinations has been consistently equal to 60% of the number of patents. This finding confirms the argument of <cit.> that patent classifications contain useful information to understand technological change over the long run. Furthermore, the detailed study of combinations can reveal the degree of novelty of specific patents <cit.>. Besides their use for simplifying the analysis and creating crosswalks, technology taxonomies are also interesting per se. A particularly interesting endeavour would be to construct systematic technology phylogenies showing how a technology descends from others <cit.> (for specific examples, see <cit.> for cornets and <cit.> for programming languages). But categories are not simply useful to describe reality, they are often used to construct it <cit.>. When categories are created as nouns, they can have a predicate and become a subject.
As a result, classification systems are institutions that allow agents to coordinate and agree on how things should be called and on where boundaries should be drawn. Furthermore, classification systems may create a feedback on the system they describe, for instance by legitimizing the items that they classify, or more simply by biasing which items are found through search and reused in recombination to create other items. Categorization thus affects the future evolution of the items and their relation (boundaries) with other items. Along this line of argument, the process of categorization is performative. In summary, data on the evolution of technological classification systems provides a window on how society understands its technological artefacts and legitimizes them through the process of categorization. According to <cit.>, social scientists should not over-impose their own categories on the actors that they analyze. Instead, a researcher should follow the actors and see how they create categories themselves. <cit.> described technological evolution as the co-evolution of a body of practice and a body of understanding. The role of the body of understanding is to “rationalize” the practice. According to him, this distinction has important implications for understanding evolutionary dynamics, since each body has its own selection criteria. Our argument here is that the evolution of the USPCS reflects how the beliefs of the community of technologists about the mesoscale structure of technological systems coevolve with technological advancements. We consider patent categorization as a process of codification of an understanding concerning the technological system. To see why studying patent categories goes beyond studying patents, it is useful to remember that examiners and applicants do not need to prove that a technology improves our understanding of a natural phenomenon; they simply need to show that a device or process is novel and effective at solving a problem. However, to establish a new class, it is necessary to agree that bringing together inventions under this new header actually improves understanding, and thus the searchability of the patent system. In that sense we believe that the dynamics of patent classes constitute a window on the “community of technologists”.[Patent officers are generally highly skilled workers. Besides anecdotal evidence on particularly smart patent examiners (Albert Einstein), patent officers are generally highly qualified (often PhDs). That said, <cit.> mention that classification work was not particularly attractive and that the Classification division had difficulties attracting volunteers. More recently, <cit.> alludes to “high turnover, less than ideal wages and heavy workloads”. There is an emerging literature on patent officers' biases and incentives <cit.>, but it is focused on the decision to grant the patent. Little is known about biases in classification.] Since classification systems are designed to optimize search, they reflect how search takes place, which in turn is indicative of what thought processes are in place. These routines are an integral part of normal problem-solving within a paradigm. As a result, classification systems must be affected by paradigm-switching radical innovations. As noted by e.g. <cit.> and <cit.>, a new technology which fits perfectly in the existing classification scheme may be considered an incremental innovation, as opposed to a radical innovation which challenges existing schemes.
A direct consequence is that the historical evolution of the classification system contains a great deal of information on technological change, beyond the information contained in the patents[In labor economics, some studies have exploited classification system changes. <cit.> finds that new goods, as measured by changes to the SIC system, have a higher skill intensity than existing goods. <cit.> and <cit.> used changes in the index of industries and the dictionary of occupational titles to evaluate new work at the city level.]. We now describe our attempt at reconstructing the dynamics of the U.S. patent classification system.

§ THE DATA: THE USPCS

We chose the USPCS for several reasons. First of all, we chose a patent system because of our interest in technological evolution, but also because, due to their important legal role, patent systems benefit from the resources necessary to be maintained up to date. Among the patent classification systems, the USPCS is the oldest still in use (as of a couple of years ago) <cit.>. It is also fairly well documented, and in English. Moreover, additional files are available: citation files, digitized text files of the original patents from which to get the classification at birth, files on current classification, etc. Finally, it is one of the most, if not the most, used patent classification systems in studies of innovation and technological change. The major drawback of this choice is that the USPCS is now discontinued. This means that the latest years may include classificatory dynamics that anticipate the transition to the Cooperative Patent Classification[<http://www.cooperativepatentclassification.org/index.html>], and also implies that our research will not be updated and cannot make predictions specific to this system that can be tested in the future. More generally, we do recognize that nothing guarantees external validity; one could even argue that if the USPCS is discontinued and other classification systems are not, it shows that the USPCS has specificities and therefore is not representative of other classification systems. Nevertheless, we think that the USPCS had a major influence on technology classifications and is the best case study to start with.

§.§ The early history of the USPCS

The U.S. patent system was established on 31st July 1790, but the need for examination was abolished 3 years later and reestablished only in 1836. As a result, there was no need to search for prior art and therefore the need for a classification was weak. The earliest known official subject matter classification appeared in 1823 as an appendix to the Secretary of State's report to the Congress for that year <cit.>. It classified 635 patent models in 29 categories such as “Bridges and Locks”, 1184 in a category named “For various purposes”, and omitted those which were not “deemed of sufficient importance to merit preservations”. In 1829, a report from the Superintendent proposed that with the prospect of the new, larger apartments for the Patent office, there would be enough room for a systematic arrangement and classification of models.
He appended a list of 14 categories to the report.[The main titles were Agriculture, Factory machine, Navigation, Land works, Common trades, Wheel carriages, Hydraulicks (the spelling of which was changed in 1830), Calorific and steam apparatus, Mills, Lever and screw power, Arms, Mathematical instruments, Chemical compositions and Fine arts.]In 1830 the House of representatives ordered the publication of a list of all patents, which appeared in December 1830/January 1831 with a table of contents organizing patents in 16 categories, which were almost identical to the 14 categories of 1829 plus “Surgical instruments” and “Horology”.[An interesting remark on this classification <cit.> is that it already contained classes based on industry categories (agriculture, navigation, …) and classes based on a “specific mechanical force system” (such as Lever and screw power).]In July 1836, the requirement of novelty examination came into effect, making the search for prior art more pressing. Incidentally, in December the Patent office was completely destroyed by a fire. In 1837, a new classification system of 21 classes was published, including a Miscellaneous class and a few instances of cross noting[The first example given by <cit.> is a patent for a pump classified in both “Navigation” and in “Hydraulics and Hydrostatics”]. The following year another schedule was published, with some significant reorganization and a total number of classes of 22.A new official classification appeared in 1868 and contained 36 main classes. Commenting on this increase in the number of classes, the Commissioner of patents wrote that <cit.>“The number of classes has risen from 22 to 36, a number of subjects being now recognized individually which were formally merged with others under a more generic title. Among these are builder's hardware, felting, illumination, paper, and sewing machines, to each of which subject so much attention has been directed by inventors that a division became a necessity to secure a proper apportionment of work among the corps of examiners.” Clearly, one of the rationale behind the creation and division of classes is to balance the class sizes, but this was not only to facilitate search. This class schedule was designed with administrative problems in mind, including the assignment of patent applications to the right examiners and the “equitable apportionment of work among examiners” <cit.>.Shortly after 1868 a parallel classification appeared, containing 176 classes used in the newly set up patent subscription service. This led to a new official classification containing 145 classes and published as a book in 1872. The number of classes grew to 158 in 1878 and 164 in 1880. <cit.> note that the 1880 classification did not contain any form of cross-noting and cross references, by contrast to the 1872 classification. In 1882 classification reached 167 classes and introduced indentation of subclasses at more than one level. The classification of 1882 also introduced a class called “Electricity”, long before this general purpose technology fully reached its potential.In 1893 it was made clear in the annual report that a Classification division was required “so that [the history of invention] would be readily accessible to searchers upon the novelty of any alleged invention”. After that, the need for a classification division (and the associated claim for extra budget) was consistently legitimated by this need to “oppose the whole of prior art” to every new application. 
In 1898 the “Classification division” was created with a head, two assistants and two clerks, with the purpose of establishing clearer classification principles and reclassifying all existing patents. This marked the beginning of professional classification at the USPTO. Since then the classification division has been very active and the patent classification system has evolved considerably, as we document extensively in this paper. But before that, we need to explain the basic organizing principles of the classification system.

§.§ Rationale and organization of the modern USPCS

The USPCS attributes to each patent at least one subject matter. A subject matter includes a main class, delineating the main technology, and a subclass, delineating processes, structural features and functional features. All classes and most subclasses have a definition. Importantly, it is the patent claims which are classified, not the whole patent itself. The patent inherits the classification of its claims; its main classification is the classification of its main (“most comprehensive”) claim. There are different types of patents, and they are translated into different types of classes. According to the USPTO[<http://www.uspto.gov/web/offices/pac/mpep/s1502.html>], “in general terms, a utility patent protects the way an article is used and works, while a design patent protects the way an article looks.” The “classification of design patents is based on the concept of function or intended use of the industrial design disclosed and claimed in the Design patent.”[<http://www.uspto.gov/page/seven-classification-design-patents>]. During the 19^th century, classification was based on which industry or profession was using the invention, for instance “Bee culture” (449) or “Butchering” (452). The example of choice <cit.> is that of cooling devices, which were classified separately if they were used to cool different things, such as beer or milk. Today's system would classify both as cooling devices into the class “Heat exchange” (165), which is the utility or function of the invention. Another revealing example <cit.> is that a subclass dealing with the dispensing of liquids contains both a patent for a water pistol and one for a holy water dispenser. This change in the fundamental principles of classification took place at the turn of the century, with the establishment of the Classification division <cit.>. Progressively, the division undertook to redesign the classification system so that inventions would be classified according to their utility. The fundamental principle which emerged is that of “utility classification by proximate function” <cit.>, where the emphasis on “proximate” means that it is the fundamental function of the invention that matters, not some example application in a particular device or industry. For instance, “Agitating” (366) is the relevant class for inventions which perform agitation, whether this is to wash clothes, churn butter, or mix paint <cit.>. Another classification by utility is the classification by effect or product, where the result may be tangible (e.g. Semiconductors device and manufacture, 438) or intangible (e.g. Audio signal system, 381). Finally, the classification by structure (“arrangement of components”) is sometimes used for simple subject matter having a general function. This rationale is most often used for chemical compounds and stock material.
It is rarely used for classes and more often used at the subclass level <cit.>. Even though the classification by utility is the dominant principle, the three classification rationales (by industry, utility and structure) coexist. Each class “reflects the theories of classification that existed at the time it was reclassified” <cit.>. In addition, the system keeps evolving as classes (and even more so subclasses) are created, merged and split. New categories emerge when the need is felt by an examiner and approved by the appropriate Technology Center; in this case the USPCS is revised through a “Classification order” and all patents that need to are reclassified <cit.>. An example of how subclasses are created is through alpha subclasses. Alpha subclasses were originally informal collections created by patent examiners themselves to help their work, but were later incorporated into the USPC. They are now created and used as temporary subclasses until they become formalized <cit.>. When a classification project is completed, a classification order is issued, summarising the changes officially, and all patents that need to are, in principle, reclassified. One of the latest classes to have been created is “Nanotechnology” (977), in October 2004. As noted by <cit.>, using the current classification system one finds that after reclassification the first nanotechnology patent was granted much earlier[1986 for <cit.>, 1978 for <cit.> and 1975 according to <cit.> and to the data that we use here (US3896814). Again, these differences reflect the importance of reclassification.]. According to <cit.>, large federal research funding led to the emergence of “nanotechnology” as a unifying term, which became reflected in scientific publications and patents. Because nanotechnologies were new, received lots of applications and required interdisciplinary knowledge, it was difficult to ensure that prior art was reviewed properly. The USPTO engaged in a classification project in 2001, which started by defining nanotechnologies and establishing their scope, through an internal process as well as by engaging with other stakeholders such as users or other patent offices. In 2004 the Nanotechnology cross-reference digest was established; cross-reference means that this class cannot be used as a primary class. <cit.> argues that class 977 has been defined with too low a threshold of 1 to 100 nanometers. Also, reclassification has been encouraged but is not systematic, so that many important nanopatents granted before 2004 may not be classified as such. Another example of class creation worth mentioning is given by <cit.>, who argue that the creation in 1997 of “Fabric (woven, knitted, or nonwoven textile or cloth, etc.)” (442) could have been predicted based on a clustering analysis of citations. <cit.> recently generalized this approach by formulating it as a classical machine learning classification problem: patent clusters are characterized by sets of features (citations, claims, etc.), and only some patent clusters are later on recognized as “emerging technology” by being reclassified into a new USPCS main class. In this sense, USPCS experts are labelling data, and <cit.> developed a method to create clusters and train machine learning algorithms on the data labelled by USPCS experts. Finally, a last example is that of organic chemistry[see <http://www.uspto.gov/page/addendum-reclassification-classes-518-585>].
Class 260 used to contain the largest array of patent documents, but it was decided that this class needed to be reclassified “because its concepts did not necessarily address new technology and several of its subclasses were too difficult to search because of their size”. To make smaller reclassification projects immediately available, it was decided to split the large class into many individual classes in the range of Classes 518-585. Each of these classes is “considered an independent class under the Class 260 umbrella”; many of these classes have the same general name, such as “Organic compounds – part of the class 532-570 series”[These classes also have a hierarchy indicated by their number, as subclasses within a class schedule usually do.].

As argued by <cit.>, this procedure of introducing new codes and modifying existing ones ensures that the current classification of patents is consistent and makes it possible to study the development of technologies over a long period of time. However, while looking at the past with today's glasses ensures that we look at different periods of the past in a consistent way, it is not the same as reporting what the past was in the eyes of those who lived it. In this sense, we believe that it is also interesting to try to reconstruct the classification systems that were in place in the past. We now describe our preliminary attempt to do so, by listing available sources and constructing a simple count of the number of classes used in the past.

§.§ Dataset construction

Before describing the data construction in detail, let us state clearly three important caveats.

First, we focus on main classes, due to the difficulty of collecting historical data at the subclass level. This is an important omission and an avenue for further research. Investigating the complete hierarchy could add significant insight, for instance by contrasting “vertical” and “horizontal” growth of the classification tree, or by exploiting the fact that different layers of the system play different roles in search <cit.>.

Second, we limit our investigations to Primary (“OR”) classes, essentially for simplicity. Multiple classifications are indeed very interesting and would therefore warrant a complete independent study. Clearly, the fact that multiple classifications can be used is a fundamental feature of the current USPCS. In fact it is a key feature of its evolution: as noted above, “cross-noting” was common in some periods and absent in others, and a recent example of a novel class – Nanotechnology – happens to be an XR-only class (i.e., used only as secondary classification). Here we have chosen to use only OR classes because it allows us to show the main patterns in a relatively simple way. Of course some of our results, in particular those of Section <ref>, are affected by this choice, and further research will be necessary to evaluate their robustness. That said, OR classifications, which are used on patent applications to find the most appropriate examining division <cit.>, are arguably the most important.

Third, we limit our investigation to the USPCS, as justified in the beginning of Section <ref>. We have good reasons for choosing the USPCS in this study, which aims at giving a long-run picture.
However, for studying the details of reclassification patterns and firmly establishing reclassification and classification system changes as novel and useful indicators of technological change, future research will need to establish similar patterns in the IPC or CPC.

As a result of these choices, our aim is to build a database[Our data is available at <https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/ZJCDCE>] of 1) the evolution of the USPCS primary classes, and 2) the reclassification of patents from one class to another. To do this we relied on several sources.

First, our most original data collection effort concerns the historical number of classes. For the early years our main sources are <cit.> and <cit.>, complemented by <cit.> and the “Manual of Classification” for the 5 years within the period 1908–1923. For the 1950s–60s, we mostly used a year-specific source named “General information concerning Patents”, which contained a sentence like “Patents are classified into x classes”. Unfortunately, starting in 1969 the sentence becomes “Patents are classified into more than 310 classes”. We therefore switched to another source named “Index of patents issued from the United States Patent Office”. Starting in 1963, it contains the list of classes with their name and number on a separate page[We had to make some assumptions. In the 1960's, Designs appeared subdivided into “Industrial arts” and “Household, personal and fine arts”, so we assumed that the number of design classes is 2, up to the year 1977, when Design classes appear with their name and number. We implicitly assume that prior to 1977 the design classes were actually subclasses, since in 1977 there were 39 Design classes, whereas the number of (sub)classes used for design patents in 1976 was more than 60. It should be noted though that according to the dates established, some of the current design classes were created in the late 60's. Another issue was that for 1976 the number of Organic compound classes was not clear; we assumed it was 6, as listed in 1977. Finally, we sometimes had two slightly different values for the same year due to contradictory sources or because the sources refer to a different month.]. For 1985, we used a report of the Office of Technology Assessment and Forecast (OTAF) of the Patent and Trademark Office <cit.>. For the years 2001 to 2013, we collected data from the Internet Archive.[<https://archive.org/index.php>, where we can find the evolution of the url <http://www.uspto.gov/web/patents/classification/selectnumwithtitle.htm>. We added the class “001” to the count.] As of February 2016 there are 440 utility classes (including the miscellaneous class 001 and the “Information storage” class G9B, established in 2008), 33 design classes, and the class PLT “Plant”, giving a total of 474 classes.[The list of classes available with their dates established contains 476 classes, but it does not contain 001, and it contains 364, 389, and 395, which have been abolished. We removed the abolished classes, and for Figs <ref> and <ref> we assumed 001 was established in 1899.]

Second, to obtain reclassification data we matched several files. We obtained “current” classifications from the Master Classification File (version mcfpat1506) for patents granted up to the end of June 2015.
We matched this with the Patent Grant Authority File (version 20160130) to obtain grant years[We first removed 303 patents with no main (OR) classification, and then 92 patents dated January 1st 1800. We kept all patent kinds.]. To obtain the classification at birth, we used the file “Patent Grant Bibliographic (Front Page) Text Data (January 1976 – December 2015)”, provided by the USPTO[at <https://bulkdata.uspto.gov/> (Access date: January 7, 2018)], from which we also gathered citation data.

§ DYNAMICS OF THE NUMBER OF CLASSES AND HEAPS' LAW

Our first result concerns the growth of the number of classes (Fig. <ref>), which we have computed using three different methods.

First, we used the raw data collected from the historical sources mentioned in Section <ref>. Quite unexpectedly, the data suggests linear growth, with appreciable fluctuations mainly due to the introduction of an entirely new system in 1872 and of design classes in 1977 (see footnote <ref>). The grey line shows the linear fit with an estimated slope of 2.41 (s.e. 0.06) and an R^2 of 0.96 (we treat years with no data as NA, but filling them with the figure from the last observed year does not dramatically affect the results).

Second, we have computed, using the Master Classification File for June 2015, the number of distinct classes in which the patents granted up to year t are classified (black line). To do so, we have used all classes in which patents are classified (i.e. including cross-reference classes).[The (reconstructed) number of classes is slightly lower if we consider only Primary classes, because some classes are used only as a cross-reference, never as a primary class. These classes are 902: Electronic funds transfer, 903: Hybrid electric vehicles, 901: Robots, 930: Peptide or protein sequence, 977: Nanotechnology, 976: Nuclear technology, 968: Horology, 987: Organic compounds containing a bi, sb, as, or p atom or containing a metal atom of the 6th to 8th group of the periodic system, 984: Musical instruments, G9B: Information storage based on relative movement between record carrier and transducer.] The pattern of growth is quite different from the historical data. If we consider only the post-1836 data, the growth of the number of classes is sublinear – fewer and fewer classes are introduced every year. Before 1836, the trend was linear or perhaps exponential, giving a somewhat asymmetric S-shape to the overall picture.

Third, we computed the growth of the number of classes based on the dates at which all current classes were established (blue line)[Collected from <https://www.uspto.gov>, page USPCS dates-established]. According to this measure, the first class was created in 1899, when the reorganization of classification started with the creation of the classification division[“Buckles, Buttons, clasps, etc.” is an example of a class that was created early under a slightly different name (1872 according to <cit.>, see <cit.> for details) but has a posterior “date established” (1904 according to the USPTO). Another example is “Butchering”.].

Fig. <ref> displays the number of classes against the number of patents on a log-log scale. In many systems, it has been found that the number of categories grows as a power law of the number of items that they classify, a result known as Heaps' law (for an example based on a classification system – the medical subject headings – instead of a language, see <cit.>).
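Operationally, estimating the Heaps exponent b in C = C_0 t^b amounts to an ordinary least squares fit in log-log space. A minimal sketch in Python (the function and array names are ours; the inputs are the cumulative number of patents and the corresponding number of classes):

```python
import numpy as np

def heaps_fit(n_patents, n_classes):
    """Fit C = C0 * t**b (Heaps' law) by OLS on log C = log C0 + b * log t."""
    log_t = np.log(np.asarray(n_patents, dtype=float))
    log_c = np.log(np.asarray(n_classes, dtype=float))
    b, log_c0 = np.polyfit(log_t, log_c, 1)  # returns (slope, intercept)
    return b, np.exp(log_c0)
```

On the historical series used here, a fit of this kind yields an exponent of roughly 0.38, as reported below.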
Here we find that, using the 2015 classification, Heaps' law is clearly violated[It is possible to obtain a good fit by limiting the fit to the latest periods, but this is arbitrary and gives a very low Heaps' exponent, leaving unexplained the creation of the vast majority of classes.]. Using the historical data, Heaps' law appears as a reasonable approximation. We estimate the Heaps' exponent to be 0.378 with a standard error of 0.010 and R^2=0.95. The inset on the bottom right of Fig. <ref> shows that for the latest years Heaps' law fails: for the latest 2 million patents (about 20% of the total), almost no classes were created. We do not know whether this slowdown in the introduction of classes is due to a slowdown of radical innovation, or to a more institutionally-driven reason such as a lack of investment in the USPCS due to the expected switch to the Cooperative Patent Classification. Since the joint classification system was first announced on 25 October 2010 <cit.>, we show this date (more precisely, patent number 7818817 issued on the 26^th) as a suggestive indicator (dashed line on the inset). Another consideration is that the system may be growing more “vertically”, in terms of the number of layers of subclasses – unfortunately here we have to focus on classes, so we are not able to test for this.

§ THE SIZE DISTRIBUTION AND THE AGE-SIZE RELATIONSHIP

Besides the creation and reorganization of technological categories, we are interested in their growth and relative sizes. More generally, our work is motivated by the Schumpeterian idea that the economy is constantly reshaping itself by introducing novelty <cit.>. The growth of technological domains has been deeply scrutinized in the economics of technical change and development <cit.>. A recurring theme in this literature is the high heterogeneity among sectors. When sectors or technological domains grow at different rates, structural change occurs: the relative sizes of different domains change. To study this question in a parsimonious way, one may opt for a mesoscale approach, that is, study the size distribution of categories. Our work here is most directly related to <cit.>, who first showed on data for 1963–1999 that the size distribution of classes is close to exponential. This is an interesting and at first surprising finding because, under the assumption that all domains grow at the same average rate, stochastic growth models such as <cit.> or <cit.> predict a log-normal or a Pareto distribution, which are much more fat-tailed. Instead, we do not see the emergence of relatively very large domains, and this may at first suggest that older sectors do not keep growing as fast as younger ones, perhaps due to technology life-cycles <cit.>. However, as we will discuss, we are able to explain the exponential size distribution by keeping Gibrat's law, but assuming that categories are split randomly.

§.§ The size distribution of categories

In this section we study the size distribution of classes, where size is the number of patents in 2015 and classes are defined using the current classification system. We use only the primary classification, so we have only 464 classes. Fig. <ref> suggests a linear relationship between the size of a class and the log of its rank, that is, class sizes are exponentially distributed[For simplicity we used the (continuous) exponential distribution instead of the more appropriate (discrete) geometric distribution, but this makes no difference to our point.
We have not rigorously tested whether or not the exponential hypothesis can be rejected, because the proper hypothesis is geometric and classical test statistics such as Kolmogorov-Smirnov do not easily apply to discrete distributions. Likelihood ratio tests interpreted at the 5% level showed that it is possible to obtain better fits using two-parameter distributions that extend the exponential/geometric, namely the Weibull and the negative binomial, especially after removing the two smallest categories, which are outliers (they contain 4 and 6 patents) and are part of larger series (532 and 520).]. To see this, let p(k) be the probability density of the sizes k. If it is exponential, it is p(k) = λ e^{-λ k}. By definition, the rank r(k) of a class of size k is the number of classes that have a larger size, which is r(k) = N ∫_k^∞ λ e^{-λ x} dx = N e^{-λ k}, where N is the number of classes. This is equivalent to size being linear in the logarithm of the rank. We estimated the parameter λ by maximum likelihood and obtained λ̂ = 4.71 × 10^{-5} with standard error 0.22 × 10^{-5}. Note that λ̂ is one over the mean size, 21223. We use this estimate to plot the resulting fit in Fig. <ref>.

It is interesting to find an exponential distribution, since one may have expected a power law, which is quite common as a size distribution and often appears together with Heaps' law <cit.>. Since the exponential distribution is a good representation of the data, it is worth looking for a simple mechanism that generates this distribution, which we will do in Section <ref>. But since many models can generate an exponential distribution, we first need to present additional empirical evidence that will allow us to discriminate between different candidate models.

§.§ The age-size relationship

To determine whether older classes contain more patents than younger ones, we first need to note that there are two ways of measuring age: the official date at which the class was established, and the year in which its first patent was granted. As expected, it appears that the year in which a class is established is always posterior to the date of its first patent[Apart from class 532. We confirmed this by manually searching the USPTO website. 532 is part of the Organic compound classes, which have been reorganized heavily, as discussed in Section <ref>.]. Since these two ways of measuring age can be quite different, we show the age-size (or rather size-birth date) relationship for both in Fig. <ref>. If stochastic growth models without reclassification were valid, we would observe a negative slope; that is, newer classes should have fewer patents because they have had less time for accumulation from random growth. Instead, we find no clear relationship. In the case of the year established, linear regressions indicated a positive relationship significant at the 10% but not at the 5% confidence level, whether or not the two “outliers” were removed. Using a log-linear model, we found a significant coefficient of 0.004 after removing the two outliers. In the case of the year of the first patent, the linear model indicated no significant relationship, but the log-linear model delivered a highly significant negative coefficient of -0.005 (which halves and becomes significant only at the 10% level once the two outliers are removed). In all 8 cases (two different age variables and two different models, removing outliers or not) the R^2 was between 0.001 and 0.029.
We conclude that these relationships are at best very weak, and in one case of the “wrong” sign (with classes established in recent years being on average larger). Whether they are significant or not, our point here is that their magnitudes and goodness of fit are much lower than what one would expect from growth-only models such as <cit.>, or its modification with uniform attachment (to match the exponential size distribution). We will come back to the discussion of models later, but first we want to show another empirical pattern and explain why we think reclassification and classification system changes are interesting indicators of technological change.

§ RECLASSIFICATION ACTIVITY AS AN INDICATOR OF TECHNOLOGICAL CHANGE

It seems almost tautological to say that a radical innovation is hard to categorize when it appears. If an innovation is truly “radical”, it should profoundly change how we think about a technology, a technological domain, or a set of functions performed by technologies. If this is the case, a patent related to a radical innovation is originally hard to classify. It is likely that it will have to be reclassified in the future, when a more appropriate set of concepts has been developed and institutionalized (that is, when the community of technologists has codified a novel understanding of the radical innovation). It is also well accepted that radical innovations may create a new wave of additional innovations, which may or may not cluster in time <cit.>, but when they are general purpose we do expect a rise in innovative activity <cit.>. A less often noted consequence of the emergence and diffusion of General Purpose Technologies (GPTs) is that, both due to the sheer increase in the number of patents in this technology and due to the impact of this technology on others, we should expect higher classification volatility. Classification volatility is to be expected particularly in relation to GPTs because, by definition, GPTs interact with existing technologies and create or reorganize interactions among existing technologies. From the point of view of the classification, the very definition of the objects and their boundaries are transformed. In short, some categories become too large and need to be split; some definitions become obsolete and need to be changed; and the “best” grouping of technologies is affected by the birth and death of conceptual relationships between the function, industry of origin or application, and structural features of technologies.

In this section we provide a preliminary study. First, we establish that this indicator does exist (reclassification rates can be quite high, reaching 100% if we look far enough in the past). Second, we show that reclassified patents are more cited. Third, we show that reclassification can take place across fairly distant technological domains, as measured by 1-digit NBER categories. Fourth, we discuss three examples of novel classes.

§.§ Reclassification rates

How many patents have been reclassified? To start with, since no classification existed prior to 1829, all patents published before that have been “(re)classified” in the sense that their category has been determined several, and potentially many, years after being granted. The same applies to all patents granted at times when completely different classification systems prevailed, which is the case before 1899.
In modern times, classification has evolved, but as discussed in Section <ref>, the overall classification framework put in place at the turn of the century stayed more or less the same. For the period after 1976, we know the original classification of each patent because we can read it on the digitized version of the original paper (see Section <ref>). After extensive efforts in parsing the data and a few manual corrections, we found an original class for 99.45% of the post-1976 patents in the Master Classification File mcfpat1506. Out of these 5,615,525 patents, 412,724 (7.35%) have been reclassified. There are 789 distinct original classes, including 109 with only 1 patent (apart from data errors, these can come from original classes in which almost no post-1976 patents were classified). All current classes have been used as original classes except “001”, which is only used as a miscellaneous class into which patents are reclassified[We removed US6481014.]. Figure <ref> shows the evolution of the reclassification rate, defined as the share of patents granted in year t which have a different classification in 2015 than in t. It appears that as much as 40% of the patents granted in 1976 belong to a different class now than when they first appeared. The reclassification rate declines sharply after that, reaching about 10% in the 1990s and almost zero thereafter. This is an expected result, since the longer the time since a patent was granted, the higher the chance that the classification system has changed.

§.§ Are reclassified patents more cited?

Since there is an established relationship between patent value and the number of citations received <cit.>, it is interesting to check whether reclassified patents are more cited. Of course, we are only observing correlations, and the relationship between citations and reclassification can work in multiple ways. A plausible hypothesis is that the more active a technological domain is (in terms of new patents and thus new citations being made), the more likely it is that there will be a need for reclassification, if only to keep the classes at a manageable size[Relatedly, as noted by a referee, if patent examiners are also responsible for reclassification, then their prior art search might be oriented towards patents that they have re-classified, for which their memory is more vivid.]. Another hypothesis is that highly innovative patents are intrinsically ambiguous in terms of the classification system existing when they first appear. In any case, since we only have the class number at birth and the class number in 2015, we cannot make subtle distinctions between different mechanisms. However, we can check whether reclassified patents are on average more cited, and we can do so after controlling for the grant year and class at birth. Table <ref> shows basic statistics[We count citations made to patents for which we have reclassification data, from patents granted until June 2015. We removed duplicated citations.]. Reclassified patents constitute 7.35% of the sample and have received on average more than 24 citations, which is more than twice as many as the non-reclassified patents. We expect this result to be largely driven by the fact that older patents have both a higher chance of having been reclassified and a higher chance of having accumulated many citations.
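For concreteness, the two statistics just discussed (the yearly reclassification rate and the citation comparison) reduce to a few lines of pandas; the following is an illustrative sketch in which the file name and column names are hypothetical:

```python
import pandas as pd

# One row per post-1976 patent (column names are illustrative only).
df = pd.read_csv("matched_patents.csv")  # grant_year, class_birth, class_2015, cites
df["reclassified"] = df["class_birth"] != df["class_2015"]

# Share of each grant year's patents whose 2015 class differs from the birth class
reclass_rate = df.groupby("grant_year")["reclassified"].mean()

# Mean citations received, reclassified vs. non-reclassified patents
cite_stats = df.groupby("reclassified")["cites"].agg(["size", "mean"])
print(reclass_rate.tail(), cite_stats, sep="\n")
```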
To investigate the relationship between reclassification and citations in more detail, we regressed the log of total citations received in 2015 on the reclassification dummy and on dummies for the class at birth, for each year separately (keeping only the patents with at least one citation received, 76.6%):

log(c_i) = α_t + β_t R_i + ∑_{j=1}^{J_t - 1} γ_{j,t} D_{i,j},

where c_i is the number of citations received by patent i between its birth (time t) and (June) 2015, R_i is a dummy that takes the value of 1 if patent i has a main class code in 2015 different from the one it had when it appeared (i.e. in year t), J_t is the number of distinct classes in which the patents born in year t were classified at birth, and D_{i,j} is a dummy that takes the value of 1 if patent i was classified in class j at birth. Note that we estimate this equation separately for every grant year. We include the class-at-birth dummies because this allows us to consider patents that are “identical twins” in the sense of being born in the same class in the same year. The coefficient β_t then shows whether reclassified patents have on average received more citations. The results are reported in Fig. <ref>, showing good evidence that reclassification is associated with more citations received. As expected, recent years[2015 is excluded because no patents had been reclassified.] are not significant, since there has not been enough time for reclassification to take place and citations to accumulate (the bands represent standard approximate 95% confidence intervals). We also note that controlling for the class at birth generally weakens the effect (red dashed line compared to black solid line).

§.§ Reclassification flows

To visualize the reclassification flows, we consider only the patents that have been reclassified. As in <cit.>, we want to construct a bipartite graph showing the original class on one side and the current class on the other side. Since we identify classes by their code number, a potentially serious problem may arise if classes are renumbered, although we believe this tends to be rare given the limited time span 1976–2015. An example of this is “Bee culture”, which was class number 6 but has been class number 449 since 1988; class number 6 no longer exists. However, even in this case, although these two classes have the same name, we do not know whether they are meant to encompass the same technological domain and have just been “renumbered”, or whether other considerations prevailed and the renumbering coincides with a more substantive reorganisation. An interesting extension of our work would be to use natural language processing techniques on class definitions to define a measure of reclassification distance more precisely and exclude mere renumbering.

To make the flow diagram readable and easier to interpret, we aggregate by using the NBER categories[For more details on the NBER categories, see the historical reference <cit.> and the recent effort by <cit.> to attribute NBER (sub)categories to patent applications.]. To assign each class to a NBER category, we used the 2006 version of the NBER classification, which we modified slightly by classifying the Design classes separately, and classifying USPCS 850 (Scanning probe techniques and apparatus) in NBER 4 (Electrical) and USPCS PLT (Plant) in NBER 6 (Others). Fig. <ref> shows the results[See the online version at <http://danielykim.me/visualizations/PatentReclassificationHJTcategory/>].
The share of a category is the fraction of reclassified patents whose primary class is in that NBER category. The width of the lines between an original category i and a current category j is proportional to the number of reclassified patents whose original class is in category i and whose current class is in category j. Line colors indicate the original category.

We can see that patents originally classified in the category Chemical tend to be reclassified into another class of the category Chemical. The same pattern is observed for the category Drugs. By contrast, the categories Computers & Communications and Electrical & Electronics display more cross-reclassifications, in line with the findings of <cit.> on a restricted dataset. This may indicate that the NBER categories related to computers and electronics are not as crisply defined as those related to Chemical and Drugs, and may be suggestive of the general purpose nature of computers. This could also suggest that these domains were going through a lot of upheaval during this time period. While there is some ambiguity in interpreting these patterns, they are not a priori obvious and point to the same phenomenon as the correlation between citations and reclassifications: dynamic, impactful, really novel, general purpose fields are associated with more taxonomic volatility.

§.§ Three examples of novel classes

We now complement the study by providing three examples of novel classes, chosen among recently created classes (and excluding cross-reference-only classes). We proceed by looking at the origin of patents reclassified into the new class when it is created. We approximate this by looking at the patents that were granted in a year preceding the birth year of a class and now appear as reclassified into it. Note that we can determine the class of origin only for patents granted after 1976. We also give as an example the oldest reclassified (utility) patent we can find. We discuss each class separately (see Table <ref> for basic statistics on each of the three example classes, and Table <ref> for the source classes in each case; “Date” is the date at which an “origin” class was established).

Motivated by the study of <cit.> showing that the emergence of a new class (442) could have been predicted by citation clustering, we study class 442, “Fabric (woven, knitted, or nonwoven textile or cloth, etc.)”. The class definition indicates that it is “for woven, knitted, nonwoven, or felt article claimed as a fabric, having structural integrity resulting from forced interassociation of fibers, filaments, or strands, the forced interassociation resulting from processes such as weaving, knitting, needling hydroentangling, chemical coating or impregnation, autogenous bonding (…) or felting, but not articles such as paper, fiber-reinforced plastic matrix materials (PPR), or other fiber-reinforced materials (…)”. This class is “an integral part of Class 428 [and as such it] incorporates all the definitions and rules as to subject matter of Class 428.” The oldest patent reclassified in it was a patent by Charles Goodyear describing how applying caoutchouc to a woven cloth led to a material with “peculiar elasticity” (US4099, 1845, no classification on the paper file). A first remark is that this class was relatively large at birth. Second, an overwhelming majority of patents came from the “parent” class 428.
Our interpretation is that this is an example of an old branch of knowledge, textile, that due to continued development needs to be more finely defined to allow better classification and retrieval – note that the definition of 442 is not only about what the technologies are, but also about what they are not (paper and PPR).

Our second example is motivated by the qualitative study in <cit.> of the process of creation of an IPC class, in which the USPTO participated. <cit.> describes that the process of class creation was initiated because of a high number of incoming patents on the subject matter. Her main conclusion is that disputes regarding class delineation were resolved by evaluating the size of the newly created category under certain definitions. Class 506, “Combinatorial chemistry technology: method, library, apparatus”, includes in particular “Methods specially adapted for identifying the exact nature (e.g., chemical structure, etc.) of a particular library member” and “Methods of screening libraries or subsets thereof for a desired activity or property (e.g., binding ability, etc.)”. The oldest reclassified patent is US3814732 (1974), “modified solid supports for solid phase synthesis”. It claims polymeric hydrocarbon resins that are modified by the introduction of other compounds. It was reclassified from class 260, “Chemistry of carbon compounds”. In contrast to 442 or 706, reviewed below, the reclassified patents are drawn relatively uniformly from several categories. Our interpretation is that this is an example of a mid-age technology (chemistry) which, due to its interactions with other technologies (computers), develops a novel branch that is largely cross-cutting, but specific enough to warrant the creation of a new class.

Our last example is 706, “Data processing – Artificial Intelligence”, which is a “generic class for artificial intelligence type computers and digital data processing systems and corresponding data processing methods and products for emulation of intelligence (…); and including systems for reasoning with uncertainty (…), adaptive systems, machine learning systems, and artificial neural networks”. We chose it because we possess at least some domain knowledge. The oldest reclassified AI patent is US3103648 (1963), which is an “adaptive neuron having improved output”, nicely echoing the recent surge of interest in neural networks for machine learning (deep learning). It was originally classified in class 340, “Communications: electrical”. In contrast to the other two examples, we find that the two largest sources were classes that have since been abolished (we recovered the names of 395 and 364 from the “1996 Index to the US patent classification”; their dates established were available from the “Date Established” file documented in Section <ref>). Other classes with the “Data processing” header were created during the period, showing that the USPTO had to completely reorganize its computer-related classes around the turn of the millennium. Our interpretation is that this is an example of a highly novel technology, emerging within the broader context of the third and perhaps fourth industrial revolution. Because computers are relatively recent and general purpose, it is very difficult to create taxonomies with stable boundaries.

These three examples show strikingly different patterns of technological development and its associated classification volatility.
An old branch of knowledge which is deepening (textile), a mid-age branch of knowledge that develops novel interactions with others (chemistry), and a new branch of knowledge (computers) for which classification officers strive to find useful organizational schemes. We acknowledge that these are only examples – presumably, some other examples of new classes would follow similar patterns, but other patterns may exist. We have found that about two thirds of post-1976 new classes have more than 90% of their pre-birth (and post-1976) reclassified patents coming from a single origin (pre-existing) class, suggesting that a form of “branching” or “class splitting” is fairly common, at least when looking at OR classes only. We do not want to put too much weight on these early results, which will have to be systematised, developed further using subclasses and multiple classifications, and, crucially, compared against results obtained using the IPC/CPC. We do think that such a systematic study of classification re-organizations would tell a fairly detailed story of the evolution of technology, but rather than embarking on such a detailed study here, we propose to summarize most of what we have learned so far into a simple theoretical model.

§ A SIMPLE MODEL

In this section, we propose a very simple model that reproduces several facts described above. As compared to other recent models for size distributions and Heaps' law in innovation systems <cit.>, the key assumption that we will introduce is that classes are sometimes split and their items reclassified. We provide basic intuition instead of a rigorous discussion[For instance, we do not claim that the model in general produces a certain type of pattern such as a lack of age-size relationship. We simply show that under a specific parametrisation taken from the empirical data (say ∼10 million patents, 500 classes, and a Heaps exponent of 0.38), it produces patterns similar to the empirical data.].

Let us start with the well-known model of <cit.>. A new patent arrives every period. The patent creates a new category with probability α; otherwise it goes to an existing category, which is chosen with probability proportional to its size. The former assumption is meaningful because in reality the number of categories grows over time. The second assumption is meaningful too, because this “preferential attachment”/“cumulative advantage” is related to Gibrat's law: categories grow at a rate independent of their size, so that their probability of getting the next patent is proportional to their size.

There are three major problems with this model. First, it gives the Yule-Simon distribution for the size distribution of classes. This is basically a power law, so it has much fatter tails than the exponential law that we observe. In other words, it overpredicts the number of very large categories by a large margin. Second, since older categories have more time to accumulate patents, it predicts a strong correlation between age and size. Third, since at each time step categories are created with probability α and patents are created with probability 1, the relationship between the number of categories α t and the number of patents t is linear, instead of Heaps' constant-elasticity relation.

A solution to make the size distribution exponential instead of a power law is to replace preferential attachment with uniform random attachment, that is, to choose each category with equal probability.
Besides the fact that this new assumption may seem less intuitive than Gibrat's law, this would not solve the second problem, because it would still be the case that older categories accumulate more patents. The solution is to acknowledge that categories are not entities that are defined once and for all; instead, they are frequently split and their patents are reclassified.

We therefore turn to the model proposed by <cit.>. It assumes that new categories are introduced over time by splitting existing ones. In its original form the model postulates a linear arrangement of stars and bars. Each star represents a patent, and bars mark the boundaries between classes. For instance, if there are 3 patents in class 1 and 1 patent in class 2, we have |***|*|. Now imagine that between any two symbols there is a space. At each period, we choose a space uniformly at random and fill it with either a bar (with probability α) or a star (with complementary probability). When a star is added, it means that an existing category acquires a new patent. When a bar is added, it means that an existing category is split into two categories. It turns out that the resulting size distribution is exponential, as desired. But before we can evaluate the age-size relationship, we need to decide how to measure the age of a category. To do this we propose to reformulate the model as follows.

We start with one patent in one category. At each period, we first select an existing category j with probability proportional to its size k_j and add one patent to it. Next, with probability α we create two novel categories by splitting the selected category uniformly at random; that is, we draw a number s from a uniform distribution ranging from 1 to k_j. Each patent in j is then assigned to the first new category with probability s/k_j, or to the second new category otherwise. This procedure leads to a straightforward interpretation: the patents are reclassified from j to the first or the second new category. These two categories are established at this step of the process, and since patents are created sequentially one by one, we also know the date of the first patent of each new category. To give a date in calendar years to patents and categories, we can simply use the dates of the real patents.

Since α is constant, as in Simon's original model, we are left with the third problem (Heaps' power law is violated). We propose to make α time-dependent to solve this issue[An interesting alternative (instead of using the parameter α) would be to model separately the process by which the number of patents grows and the process by which patent classification officers split categories.]. Denoting the number of categories by C_t and the number of patents by t (since there is exactly one new patent per period), we want to have C_t = C_0 t^b (Heaps' law). This means that C_t should grow at a per-period rate of dC_t/dt = C_0 b t^{b-1}. Since we have measured b ≈ 0.378 and we want the number of categories to be 474 when the number of patents is 9,847,315, we can calculate C_0 = C_t/t^b = 1.07. This gives α_t = 1.07 × 0.378 t^{0.378-1}, which we take to be 1 when t=1.[There is a small inconsistency arising because the model is about primary classification only, but the historical number of classes and Heaps' law are measured using all classes, because we could not differentiate cross-reference classes in historical data. Another point of detail is that we could have used the estimated C_0=0.17 instead of the calculated one.
These details do not fundamentally change our point.]

Note how parsimonious the model is: its only inputs are the current number of patents and categories, and the Heaps exponent. Here we do not attempt to study it rigorously; we provide simulation results under specific parameter values. Fig. <ref> shows the outcome of a single simulation run (black dots and lines), compared to empirical data (red crosses). The first pair of panels (a and b) shows the same (empirical) data as Fig. <ref> and <ref> using red crosses. The results from the simulations are the curves. The simulation reproduces Heaps' law well, by direct construction (the grey middle curve on panel b). But it also reproduces fairly well the evolution of the reconstructed number of classes, both the one based on the “date of first patent” and the one based on the “dates established”, and both against calendar time (years) and against the cumulative number of patents.

The second pair of panels (c and d) shows the age-size relationships, with the same empirical data as in Fig. <ref>. Panel c shows that the model seems to produce categories whose sizes are not strongly correlated with the year in which they were established, as in the empirical data. However, in panel d, in our model there is a fairly strong negative correlation between size and the year of the first patent, while this correlation is absent (or is much weaker) in the empirical data. These results for one single run are confirmed by Monte Carlo simulations. We ran the model 500 times and recorded the estimated coefficient of a simple linear regression between the log of size and each measure of age. The insets show the distribution of the estimated coefficients, with a vertical line showing the coefficient estimated on the empirical data.

The next panel (e) shows the size distribution in rank-size form, as in Fig. <ref>. As expected, the model reproduces this feature of the empirical data fairly well. However, the empirical data is not exactly exponential and may be slightly better fitted by a negative binomial model (which has one more parameter and recovers the exponential when its shape parameter equals one). The top right histogram shows the distribution of the estimated negative binomial shape parameter. The empirical value departs only slightly from the Monte Carlo distribution.

Finally, the last panel (f) shows the evolution of the share of reclassified patents, with the empirical data from Fig. <ref> augmented by values of 1 between 1790 and 1899 (since no current categories existed prior to 1899, all patents have been reclassified). Here again, the model reproduces the empirical pattern fairly well. All or almost all patents from early years have been reclassified, and the share is falling over time. That said, for recent years (post-1976), the specific shape of the curve is different.

Overall, we think that given its simplicity the model reproduces a surprisingly high number of empirical facts. It allows us to understand the differences between the different patterns of growth of the reconstructed and historical number of classes. Without a built-in reclassification process it would not have been possible to match all these empirical facts – if only because without reclassification the historical and reconstructed evolution coincide. This shows how important it is to consider reclassification when we look at the mesoscale evolution of the patent system.
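To make the model fully explicit, the following is a minimal Python sketch of a single simulation run (our own illustrative implementation of the process described above; the handling of degenerate splits, in which one of the two new categories would be empty, is not specified in the text and is an assumption here):

```python
import numpy as np

def simulate(T=100_000, b=0.378, C0=1.07, seed=0):
    """One run of the splitting model: Gibrat-style growth plus random
    category splits occurring with probability alpha_t = C0 * b * t**(b - 1)."""
    rng = np.random.default_rng(seed)
    sizes = [1]        # start with one patent in one category
    established = [1]  # period at which each category was created
    for t in range(2, T + 1):
        # preferential attachment: pick a category with prob. proportional to size
        p = np.asarray(sizes, dtype=float)
        j = rng.choice(len(sizes), p=p / p.sum())
        sizes[j] += 1
        # time-dependent splitting probability (alpha_1 is taken to be 1 in the text)
        alpha_t = min(1.0, C0 * b * t ** (b - 1))
        if rng.random() < alpha_t:
            k = sizes[j]
            s = rng.integers(1, k + 1)      # uniform split point, 1..k
            left = rng.binomial(k, s / k)   # each patent reclassified independently
            if 0 < left < k:                # assumption: skip degenerate empty splits
                sizes[j], established[j] = left, t
                sizes.append(k - left)
                established.append(t)
    return np.array(sizes), np.array(established)

sizes, established = simulate()
print(len(sizes), "categories;", sizes.sum(), "patents")
```

Running it with T set to the number of real patents, and mapping periods to the grant dates of real patents, yields the simulated counterparts of the quantities compared to the data in Fig. <ref>.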
On the other hand, much more could be done to make the model more interesting and realistic, for instance by also modelling subclasses and requiring that reclassification takes place within a certain distance.

§ CONCLUSION

In this paper, we have presented a quantitative history of the evolution of the main patent classes within the U.S. Patent Classification System. Our main finding is that the USPCS underwent regular and important changes. For academic researchers, these changes may be perceived as a source of problems, because they suggest that it may not always be legitimate to think that a given patent belongs to one and the same category forever. This means that results obtained using the current classification system may change in the future, when a different classification system is used, even if the very same set of patents is considered. That said, we do not think the effect would be strong. Besides, using the current classification system is still often the best thing to do because of its consistency. Our point here is not to critique the use of the current classification, but to argue that historical changes to the classification system itself contain interesting information that has not been exploited.

Our first result is that different methods to compute the growth of the number of classes give widely different results, establishing that the changes to the classification system are very important. Our second result suggests that we do not see very large categories in empirical data because categories are regularly split, leading to an exponential size distribution with no relationship between the age and size of a category. Our third result is that reclassification data contains useful information to understand technological evolution. Our fourth result is that a very simple model that can explain many of the observed patterns needs to include the splitting of classes and the reclassification of patents. Taken together, these results show that it is both necessary and interesting to understand the evolution of classification systems.

An important limitation of our study is that it is highly limited in scope: we study the US, at the class level, using main classifications only. A contrasting example we have found is the French patent classification of 1853, which contained 20 groups and was revised multiple times in the 19^th century; while subclasses were added, it kept a total of 20 classes even in the “modern” classification of 1904. Similarly, while direct comparison is difficult, our preliminary exploration of other classification systems, such as the IPC and CPC, suggests that they do not feature the same size distribution, perhaps pointing to a different mode of evolution than the one proposed in our model.

We believe that our findings are interesting for all researchers working with economic and technological classifications, because we characterized quantitatively the volatility of the patent classification system. We do not know whether classification systems are unstable because collective representations of technological artefacts are context-dependent, or because, as more items are introduced and resources invested in classifying them appropriately, collective discovery of the “true” mesoscale partition takes place.
But clearly, when interpreting results which rely upon a static snapshot of a classification system, one should bear in mind that classification systems are dynamic.

A case in point is the use of technological classes to produce forecasts: how can we predict the evolution of a given class or set of classes several decades ahead, when we know these classes might not even exist in the future? In this paper, we are not proposing a solution to this forecasting issue – only raising conceptual problems that classification system changes pose. Further, even if we consider that today's categorization will not change, a subtle issue arises in the production of correct forecasting models. To see this, consider developing a time series model describing the growth of some particular classes. To test the forecasting ability of the model, one should perform out-of-sample tests, as e.g. <cit.> did for technology performance time series. Part of the past data is used to predict more recent data, and the data which is not used for estimation is compared to the forecasts. Now, note that when we use the current classification, we effectively use data from the present; that is, the delineation of categories for past patents uses knowledge from the present, and it is therefore not entirely valid to evaluate forecasts (there is “data snooping” in the sense that one uses knowledge of the future to predict the future).

Classification system changes pose serious problems for forecasting but may also bring opportunities: if classification changes reflect technological change, then one can in principle construct quantitative theories of that change. Since the patterns described here could be roughly understood using an extremely simple model, it may be possible to make useful forecasts with more detailed models and data, for instance predicting new classes <cit.>. This could be useful because patent classification changes are more frequent than changes to other classification systems such as industries, products and occupations. An interesting avenue for future research would be to use the changes of the patent classification system to predict the changes of industry and occupation classification systems, thus predicting the types of jobs of the future.

Beyond innovation studies, with the rising availability of very large datasets, digitized and carefully recorded classifications and classification changes will become available. It will be possible to explore classifications as an evolving network and track the splitting, merging, birth and death of categories. This is an exciting new area of research, but the big data that we will accumulate will only (or mostly) cover recent years. This makes historical studies such as the present one all the more important.
JOKARUS—Design of a compact optical iodine frequency reference for a sounding rocket mission

Vladimir Schkolnik^1,†, Klaus Döringshoff^1,†, Franz Balthasar Gutsch^1, Markus Oswald^2, Thilo Schuldt^3, Claus Braxmaier^2,3, Matthias Lezius^4, Ronald Holzwarth^4, Christian Kürbis^5, Ahmad Bawamia^5, Markus Krutzik^1, Achim Peters^1 (^† these authors contributed equally to this work)

^1 Institut für Physik, Humboldt-Universität zu Berlin, Newtonstr. 15, 12489 Berlin, Germany (vladimir.schkolnik@physik.hu-berlin.de)
^2 Zentrum für angewandte Raumfahrttechnologie und Mikrogravitation (ZARM), Universität Bremen, Am Fallturm, 28359 Bremen, Germany
^3 Deutsches Zentrum für Luft- und Raumfahrt (DLR), Institut für Raumfahrtsysteme, Linzer Straße 1, 28359 Bremen, Germany
^4 Menlo Systems GmbH, Am Klopferspitz 19a, 82153 Martinsried, Germany
^5 Ferdinand-Braun-Institut, Leibniz-Institut für Höchstfrequenztechnik, Gustav-Kirchhoff-Str. 4, 12489 Berlin, Germany

We present the design of a compact absolute optical frequency reference for space applications based on hyperfine transitions in molecular iodine, with a targeted fractional frequency instability of better than 3 × 10^{-14}. It is based on a micro-integrated extended cavity diode laser with integrated optical amplifier, fiber-pigtailed second-harmonic-generation waveguide modules, and a quasi-monolithic spectroscopy setup with operating electronics. The instrument described here is scheduled for launch at the end of 2017 aboard the TEXUS 54 sounding rocket as an important qualification step towards space application of iodine frequency references and related technologies. The payload will operate autonomously, and its optical frequency will be compared to an optical frequency comb during its space flight.

§ SOUNDING ROCKETS AS STEPPINGSTONE FOR SPACE-BORNE LASER SYSTEMS

Frequency-stable laser systems are a mandatory key technology for future space missions using optical and quantum-optical technologies aiming at Earth observation, tests of fundamental physics and gravitational wave detection. Proposed and projected space missions like STE-QUEST <cit.>, CAL <cit.>, QWEP <cit.> and Q-TEST <cit.> aim at the observation of Bose-Einstein condensates at unprecedented expansion times, quantum gas physics in the picokelvin regime, and dual-species atom interferometry for future precision tests of the equivalence principle with quantum matter <cit.>. Such experiments involving light-atom interaction, e.g., for laser cooling or atom interferometry, require laser systems whose optical frequency is stabilized to specific atomic transitions. Moreover, precise frequency control with high demands on frequency stability and agility, as well as intensity control, is mandatory. Planned gravitational wave observatories, such as LISA <cit.>, use inter-satellite laser ranging with laser systems at 1064 nm for the detection of gravitational waves in a spectral window between 0.1 mHz and 1 Hz. Next generation gravity missions (NGGM) might use similar laser ranging techniques for global mapping of temporal variations of Earth's gravitational field <cit.>. The requirements on laser frequency noise of these missions can be met by laser frequency stabilization to optical cavities or to atomic or molecular transitions.
Related technologies have been or are currently being developed in the context of missions aiming at space-borne atom interferometry <cit.> or atomic clocks <cit.>. In addition to qualification in environmental testing facilities, the deployment of laser systems in realistic scenarios and relevant environments, offered by sounding rockets or zero-g parabolic flights, allows for rapid iterative tests and further development of related technologies. Sounding rocket missions in particular close the gap between ground and space applications <cit.>, but also enable scientific, pioneering pathfinder experiments, as shown with the recent MAIUS mission, which demonstrated the first realization of a Bose-Einstein condensate of ^87Rb in space. Sounding rocket missions based on VSB 30 motors, as used in the TEXUS program, allow for a 16 min ballistic flight reaching an apogee of about 250 km after 4.2 min of ascent, followed by about 6 min of μg time <cit.>. Typically, four experiment modules can be launched altogether on one mission as independent payloads, each with a diameter of 43.8cm, that can be covered in a pressurized dome. The total scientific payload mass is usually limited to 260kg, with a total payload length of 3.4m.

In the JOKARUS mission, we aim to demonstrate an absolute optical frequency reference at 1064 nm on a sounding rocket. The JOKARUS laser system is based on modulation transfer spectroscopy of the hyperfine transition R(56)32-0:a_10 in molecular iodine at 532 nm, using a frequency-doubled extended cavity diode laser (ECDL). Iodine frequency standards realized with frequency-doubled Nd:YAG lasers locked to this hyperfine transition have been investigated in detail for many years as optical frequency standards <cit.>. Thanks to the strong absorption and the narrow natural linewidth of 144 kHz <cit.>, these systems exhibit fractional frequency instabilities as low as 3 × 10^{-15} <cit.> and an absolute frequency reproducibility of a few kHz <cit.>. These features make them promising candidates for future space missions targeting the detection of gravitational waves, such as LISA, or the monitoring of Earth's gravitational potential, such as NGGM <cit.>, which rely on laser-interferometric ranging with frequency-stable laser systems at 1064 nm distributed on remote satellites that need to be precisely synchronized. Different realizations of iodine references for space missions have been proposed and investigated on a breadboard level <cit.>, and prototypes were built that fulfill the requirements on frequency stability for such missions <cit.>. The JOKARUS mission will, for the first time, demonstrate an autonomous, compact, ruggedized iodine frequency reference using a micro-integrated high-power ECDL during a space flight.
Moreover, the laser frequency was compared to a Cesium (Cs) reference using an optical frequency comb (Menlo Systems) during flight, making the FOKUS mission a demonstrator for a null test of the gravitational red shift between the optical transition in Rb and a microwave Cs clock <cit.>. The FOKUS payload was flown again on the 17th of January 2016 aboard the TEXUS 53 sounding rocket under the name FOKUS Re-Flight, together with KALEXUS. FOKUS Re-Flight, shown in Fig. <ref>(b), was updated from FOKUS to a system based on modulation transfer spectroscopy (MTS) using a fiber-pigtailed phase modulator. The KALEXUS mission featured two micro-integrated extended cavity diode lasers (ECDLs) <cit.> operating at 767 nm that were alternately offset-locked to each other and stabilized to potassium using FMS. This way, redundancy and autonomy concepts for future space missions were demonstrated <cit.>. The successful FOKUS and KALEXUS missions constitute important qualification steps for the MAIUS mission and towards its follow-on missions MAIUS II and III, aiming at dual-species atom interferometry with Rb and K in space, as well as planned satellite missions. JOKARUS is based on the laser system heritage of these successful sounding rocket missions and is planned to operate on a sounding rocket mission at the end of 2017. The next section describes the JOKARUS system design and the estimated performance in the context of laser-interferometric ranging missions.

§ JOKARUS PAYLOAD—SYSTEM DESIGN

The JOKARUS payload, schematically shown in Fig. <ref>, features a laser system based on an ECDL with an integrated optical amplifier operating at 1064 nm, a spectroscopy module including the quasi-monolithic setup for modulation transfer spectroscopy of molecular iodine and two periodically poled lithium niobate (PPLN) waveguide modules for second harmonic generation (SHG), as well as RF and control electronics for frequency stabilizing the ECDL laser to the spectroscopy setup. The individual subsystems are presented in the following sections.

§.§ Laser System

The laser system is housed in a module as shown in Fig. <ref> a) and includes the laser, an electro-optic modulator (EOM) and an acousto-optic modulator (AOM) for preparation of the spectroscopy beams. The laser is a micro-integrated master-oscillator-power-amplifier module (MOPA) developed and assembled by the Ferdinand-Braun-Institut (FBH), see Fig. <ref> b). The MOPA consists of a narrow-linewidth ECDL master oscillator operating at 1064 nm and a high-power amplifier. A previous generation of the laser module is described in <cit.>, and details on the performance of the laser module and the technology applied for its integration will be given elsewhere <cit.>. The ECDL master oscillator provides an emission linewidth of less than 50 kHz (1 ms, FWHM) and allows for high-bandwidth frequency control via control of the injection current. The laser module provides a fiber-coupled optical power of 500 mW at an injection current of 1500 mA. This module was subject to a vibration qualification according to the requirements of the TEXUS program (8.8 g_rms), while similar laser modules of previous generations even passed 29 g_rms and 1500 g pyroshock tests <cit.>. In the laser system, see Fig. <ref>, the MOPA is followed by an optical isolator (Thorlabs) and a 99:1 fiber splitter, where a few mW are separated from the main beam for frequency measurements with the frequency comb that is part of another payload aboard the TEXUS 54 mission.
A second fiber splitter divides the main beam into pump and probe beams for modulation transfer spectroscopy (MTS). The probe beam is connected to a fiber-coupled PPLN waveguide module (NTT Electronics) for second harmonic generation. The pump beam is frequency shifted by 150 MHz using a fiber-coupled AOM (Gooch & Housego) to shift spurious interference between pump and probe beam outside the detection bandwidth of the MTS signal. The frequency-shifted beam is then phase-modulated at ≈300 kHz by a fiber-coupled EOM (Jenoptik) and will later also be frequency-doubled by a second SHG module. Taking nominal losses of the components, splice connections and the conversion efficiency of the SHG modules into account, we expect an optical power of 10 mW and 3 mW for the pump and probe beam at 532 nm, respectively, which is sufficient for saturation spectroscopy. The power of the pump beam can be stabilized by using a voltage-controlled attenuator (VCA) and a feedback loop (see Fig. <ref>). Several fiber taps are used for power monitoring at various positions in the laser system during flight. The pump and probe beams are finally guided from the laser module to the spectroscopy module by polarization-maintaining fibers at 1064 nm and mating sleeve connectors. All components of the laser system were qualified at a random vibration level of 8.8 g_rms (hard-mounted) to ensure their integrity after the boost phase of the rocket launch.

§.§ Spectroscopy Module

The spectroscopy setup is housed in a separate module as shown in Fig. <ref>, together with the SHG modules (cf. Fig. <ref>). It is based on previous iterations of an iodine reference for deployment in space missions, developed at ZARM Bremen, DLR Bremen and the Humboldt-Universität zu Berlin <cit.>. The optical setup is realized using a special assembly integration technique <cit.>, where the optical components are bonded directly on a base plate made from fused silica with a footprint of 246 mm × 145 mm, resulting in a quasi-monolithic, mechanically and thermally stable spectroscopy setup as shown in Fig. <ref> (b). An iodine setup using this assembly technique was subjected to environmental tests including vibrational loads up to 29 g_rms and thermal cycling from -20 °C to +60 °C <cit.>. In the JOKARUS MTS setup, the pump beam is launched from a fiber collimator (Schäfter + Kirchhoff) with a beam diameter of 2 mm and is guided twice through an iodine cell with a length of 15 cm, resulting in an absorption length of 30 cm. Behind the cell, the pump beam is reflected at a thin-film polarizer (TFP) and focused on a photo detector for optional power stabilization. The probe beam is launched with the same beam diameter and is split using a TFP into a probe and a reference beam for balanced detection using a noise-canceling detector adapted from <cit.>, as shown in Fig. <ref>. The iodine cell is provided by the Institute of Scientific Instruments of the Academy of Sciences of the Czech Republic (ISI) in Brno, filled with an unsaturated vapor pressure of ≈1 Pa <cit.>.

§.§ Electronics

The electronic system for JOKARUS is segmented into three functional units. The first comprises the RF electronics for the optical modulators shown in Fig. <ref>. It is based on a direct digital synthesizer (DDS9m, Novatech Instruments), referenced to an oven-controlled crystal oscillator. The DDS provides a 150 MHz signal for the AOM and two signals for phase modulation of the pump beam via the EOM and analog demodulation of the MTS signal.
The second unit is a stack of compact electronic cards by Menlo Systems, based on the FOKUS flight electronics <cit.>, used for temperature control of the SHG modules and the diode laser, as a current source for the ECDL-MOPA, and for realizing the feedback control for laser frequency stabilization. The cards are controlled by an ARM-based embedded system via a CAN interface, which also provides an interface to higher-level data acquisition. The third unit contains a 16-bit DAQ card that is used for data acquisition and the x86-based flight computer (exone IT). It runs the experiment control software that provides coarse tuning of the laser frequency, identification of the transition R(56)32-0, as well as invoking and controlling a PID feedback control for frequency stabilization to the selected hyperfine transition.

§.§ Payload Assembly

The subsystems presented above are integrated in individual housings made from aluminum that share a common frame as a support structure, shown in Fig. <ref>. A water-cooled heat sink is integrated into the base frame for temperature control until liftoff. During flight, we expect an average temperature increase of about 3 K throughout the mechanical structure, based on a nominal power consumption of 100 W. The optical fiber connections between the laser and spectroscopy units are realized via mating sleeves. The total payload has dimensions of 345 mm × 270 mm × 350 mm and a total mass of 25 kg, which allows for integration into the TEXUS sounding rocket format.

We estimated the performance of the JOKARUS system in terms of the amplitude spectral density (ASD) of frequency noise in comparison to the frequency stability of an iodine reference developed, characterized and reported previously <cit.>, called the elegant breadboard model (EBB). The frequency noise achieved with the EBB is shown in Fig. <ref> (green graph) together with the frequency noise of the free-running Nd:YAG laser (blue graph) used in this setup. The EBB fulfills the requirement on the frequency noise of planned space missions like LISA and NGGM. For the JOKARUS instrument we expect to achieve a frequency noise on a level of 10 Hz/√Hz (red dashed graph), which corresponds to a fractional frequency instability of 2.4×10^-14/√τ. The performance was estimated from the frequency noise of the free-running ECDL (orange graph) using a control bandwidth of ≈100 kHz and taking into account a factor of three shorter absorption length in JOKARUS compared to the EBB. We therefore expect the JOKARUS instrument to fulfill the requirements on the frequency noise of space missions like LISA and NGGM.
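The quoted conversion between the frequency-noise floor and the fractional instability can be checked with a short calculation. The following sketch (Python, purely illustrative) assumes pure white frequency noise at the 1064 nm carrier and uses the standard white-FM relation σ_y(τ) = √(h_0/2τ) with h_0 = S_ν/ν_0²; it reproduces the quoted 2.4×10^-14/√τ level to within a few percent.

```python
import numpy as np

C = 299_792_458.0        # speed of light (m/s)
NU0 = C / 1064e-9        # 1064 nm carrier frequency (Hz), ~282 THz

def sigma_y_white_fm(asd_hz, tau):
    """Allan deviation for a pure white frequency-noise floor.

    asd_hz : one-sided frequency-noise ASD in Hz/sqrt(Hz)
    tau    : averaging time in seconds
    """
    h0 = (asd_hz / NU0) ** 2        # white-FM level of the fractional frequency
    return np.sqrt(h0 / (2.0 * tau))

for tau in (1.0, 10.0, 100.0):
    print(f"tau = {tau:6.1f} s -> sigma_y = {sigma_y_white_fm(10.0, tau):.2e}")
# tau = 1 s yields ~2.5e-14, in line with the quoted 2.4e-14/sqrt(tau)
```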
§ CONCLUSION

We presented the design of an absolute optical frequency reference developed as a payload for a sounding rocket mission and a potential candidate for space applications that require frequency-stable laser systems at 1064 nm. The payload will be part of a sounding rocket mission planned for launch at the end of 2017, where the optical frequency of the JOKARUS frequency reference will be compared to an optical frequency comb during a 6-minute space flight. JOKARUS will demonstrate autonomous operation of an absolute optical frequency reference whose performance is expected to meet the requirement on the frequency noise of laser systems for future space missions such as LISA or NGGM. The assembly integration technology used for the iodine spectroscopy setup is a very promising technology for the realization of compact and ruggedized space optical systems. Future space missions such as NGGM, LISA or MAQRO <cit.> can benefit from this technology heritage. As demonstrated with the FOKUS and KALEXUS missions operating laser frequency references at 780 nm and 767 nm, respectively, this project further shows the versatility of the micro-integrated diode laser technology for the realization of compact and efficient laser systems for applications in the field. Future space missions using laser or atom interferometry or the development of space optical clocks may benefit from this technology heritage and its first applications in space missions.

§ AUTHOR CONTRIBUTIONS

VS conceived and designed the payload and authored this manuscript. KD conceived the design of the laser system and spectroscopy setup, estimated system performance and authored this manuscript. FBG conceived the experiment control concept. MO, TS and CB conceived and designed the iodine spectroscopy setup. ML and RH conceived and designed the electronics. AB and CK contributed to the concept of the laser module. MK and AP conceived the payload. All authors revised this manuscript.

§ ACKNOWLEDGMENTS

The authors thank Ulrich Johann and Alexander Sell from Airbus DS (Friedrichshafen) for support within the Laboratory of Enabling Technologies, where the iodine spectroscopy unit was integrated. This work is supported by the German Space Agency DLR with funds provided by the Federal Ministry for Economic Affairs and Energy under grant numbers DLR 50 WM 1646, 50 WM 1141, 50 WM 1545.

Aguilera2014 Aguilera DN, Ahlers H, Battelier B, Bawamia A, Bertoldi A, Bondarescu R, et al. STE-QUEST test of the universality of free fall using cold atom interferometry. Class Quantum Gravity. 2014 jun;31(11):115010.

Dincao2017 D'Incao J, Krutzik M, Eliott E, Williams J. Enhanced association and dissociation of heteronuclear Feshbach molecules in a microgravity environment. Physical Review A. 2017;.

CAL CAL project webpage;. Accessed: 2016-12-30. http://coldatomlab.jpl.nasa.gov/.

TINO2013 Tino GM, Sorrentino F, Aguilera D, Battelier B, Bertoldi A, Bodart Q, et al. Precision Gravity Tests with Atom Interferometry in Space. Nuclear Physics B - Proceedings Supplements. 2013;243:203 – 217.

Williams2016 Williams J, Chiow SW, Yu N, Müller H. Quantum test of the equivalence principle and space-time aboard the International Space Station. New J Phys. 2016;(18).

Stamminger2015 Stamminger A, Ettl J, Grosse J, Hörschgen-Eggers M, Jung W, Kallenbach A, et al. MAIUS-1 - Vehicle, Subsystems Design and Mission Operations. In: Proc. 22nd ESA Symp. Eur. Rocket Balloon Program. Relat. Res.. vol. SP-730. ESA Communications; 2015. p. 183–190.

Vitale2014 Vitale S. Space-borne gravitational wave observatories. General Relativity and Gravitation. 2014;46(5):1730.

Kawamura2008a Kawamura S, Ando M, Nakamura T, Tsubono K, Tanaka T, Funaki I, et al. The Japanese space gravitational wave antenna - DECIGO. Journal of Physics: Conference Series. 2008;122:012006.

Schuldt:17 Schuldt T, Döringshoff K, Kovalchuk EV, Keetman A, Pahl J, Peters A, et al. Development of a compact optical absolute frequency reference for space with 10^-15 instability. Appl Opt. 2017 Feb;56(4):1101–1106.

Schkolnik2016 Schkolnik V, Hellmig O, Wenzlawski A, Grosse J, Kohfeldt A, Döringshoff K, et al. A compact and robust diode laser system for atom interferometry on a sounding rocket. Applied Physics B. 2016;122(8):217.

Dinkelaker:16 Dinkelaker AN, Schiemangk M, Schkolnik V, Kenyon A, Krutzik M, Peters A.
KALEXUS - a Potassium Laser System with Autonomous Frequency Stabilization on a Sounding Rocket. In: Frontiers in Optics 2016. Optical Society of America; 2016. p. FF1H.1.Lezius:16 Lezius M, Wilken T, Deutsch C, Giunta M, Mandel O, Thaller A, et al. Space-borne frequency comb metrology. Optica. 2016 Dec;3(12):1381–1387.Leve2015 Lévèque T, Faure B, Esnault FX, Delaroche C, Massonnet D, Grosjean O, et al. PHARAO laser source flight model: Design and performances. Review of Scientific Instruments. 2015;86(3):033104.Leveque2014 Lévèque T, Antoni-Micollier L, Faure B, Berthon J. A laser setup for rubidium cooling dedicated to space applications. Applied Physics B. 2014;116(4):997–1004.Laurent2015540 Laurent P, Massonnet D, Cacciapuoti L, Salomon C. The ACES/PHARAO space mission. Comptes Rendus Physique. 2015;16(5):540 – 552. The measurement of time / La mesure du temps.Swierad2016 Świerad D, Häfner S, Vogt S, Venon B, Holleville D, Bize S, et al. Ultra-stable clock laser system development towards space applications. Scientific Reports. 2016 sep;6:33973.Grosse2014 Grosse J, Seidel S, Krutzik M, Scharringhausen M, van Zoest T. Thermal and mechanical design of the MAIUS atom interferometer sounding rocket payload. In: AIAA SPACE 2014 Conference and Exposition, SPACE Conferences and Exposition. ESA Communications; 2014. .Ye1999 Jun Ye, Robertsson L, Picard S, Long-Sheng Ma, Hall JL. Absolute frequency atlas of molecular I_2 lines at 532 nm. IEEE Transactions on Instrumentation and Measurement. 1999 apr;48(2):544–549.Nevsky2001a Nevsky AY, Holzwarth R, Reichert J, Udem T, Hänsch TW, Zanthier Jv, et al. Frequency comparison and absolute frequency measurement of I_2-stabilized lasers at 532 nm. Optics Communications. 2001 jun;192(3-6):263–272.Eickhoff1995a Eickhoff ML, Hall JL. Optical frequency standard at 532 nm. IEEE Transactions on Instrumentation and Measurement. 1995 apr;44(2):155–158.Hong2001b Hong FL, Ishikawa J, Zhi-Yi Bi, Jing Zhang, Seta K, Onae A, et al. Portable I_2-stabilized Nd:YAG laser for international comparisons. IEEE Transactions on Instrumentation and Measurement. 2001 apr;50(2):486–489.Musha2012 Musha M, Nakagawa K, Ueda Ki. Developments of a space-borne stabilized laser for DECIGO and DPF. International Conference on Space Optical Systems and Applications. 2012;12:10–12.Leonhardt2006 Leonhardt V, Camp JB. Space interferometry application of laser frequency stabilization with molecular iodine. Applied Optics. 2006;45(17):4142.Argence2010 Argence B, Halloin H, Jeannin O, Prat P, Turazza O, de Vismes E, et al. Molecular laser stabilization at low frequencies for the LISA mission. Physical Review D. 2010 apr;81(8):1–8.Schuldt:16a Schuldt T, Döringshoff K, Milke A, Sanjuan J, Gohlke M, Kovalchuk EV, et al. High-Performance Optical Frequency References for Space. Journal of Physics: Conference Series. 2016;723(1):012047.Duncker:14 Duncker H, Hellmig O, Wenzlawski A, Grote A, Rafipoor AJ, Rafipoor M, et al. Ultrastable, Zerodur-based optical benches for quantum gas experiments. Appl Opt. 2014 Jul;53(20):4468–4474.Schiemangk2015 Schiemangk M, Lampmann K, Dinkelaker A, Kohfeldt A, Krutzik M, Kürbis C, et al. High-power, micro-integrated diode laser modules at 767 and 780 nm for portable quantum gas experiments. Applied optics. 2015 Jun;54(17):5332–8.Luvsandamdin:14 Luvsandamdin E, Kürbis C, Schiemangk M, Sahm A, Wicht A, Peters A, et al. Micro-integrated extended cavity diode lasers for precision potassium spectroscopy in space. Opt Express. 
2014 Apr;22(7):7790–7798.Kuerbis2017 Kürbis C, et al. Extended cavity diode laser master-oscillator-power-amplifier for precision iodine spectroscopy in space. in preparation. 2017;.res10 Ressel S, Gohlke M, Rauen D, Schuldt T, Kronast W, Mescheder U, et al. Ultrastable assembly and integration technology for ground- and space-based optical systems. Appl Opt. 2010;49(22):4296–4303.Hobbs1997a Hobbs PC. Ultrasensitive laser measurements without tears. Applied optics. 1997 Feb;36(4):903–20.Hrabina2014b Hrabina J, Šarbort M, Acef O, Burck FD, Chiodo N, Holá M, et al. Spectral properties of molecular iodine in absorption cells filled to specified saturation pressure. Applied Optics. 2014 nov;53(31):7435.Kaltenbaek2016 Kaltenbaek R, Aspelmeyer M, Barker PF, Bassi A, Bateman J, Bongs K, et al. Macroscopic Quantum Resonators (MAQRO): 2015 update. EPJ Quantum Technology. 2016;3(1):5.
http://arxiv.org/abs/1702.08330v1
{ "authors": [ "Vladimir Schkolnik", "Klaus Döringshoff", "Franz Balthasar Gutsch", "Markus Oswald", "Thilo Schuldt", "Claus Braxmaier", "Matthias Lezius", "Ronald Holzwarth", "Christian Kürbis", "Ahmad Bawamia", "Markus Krutzik", "Achim Peters" ], "categories": [ "physics.atom-ph", "physics.ins-det" ], "primary_category": "physics.atom-ph", "published": "20170227153101", "title": "JOKARUS - Design of a compact optical iodine frequency reference for a sounding rocket mission" }
Adaptive Neural Networks for Efficient Inference

Tolga Bolukbasi (Boston University, Boston, MA, USA; tolgab@bu.edu), Joseph Wang (Amazon, Cambridge, MA, USA), Ofer Dekel (Microsoft Research, Redmond, WA, USA), Venkatesh Saligrama (Boston University, Boston, MA, USA)

Keywords: deep neural networks, object recognition, budgeted learning, resource efficient prediction, conditional computation, efficient inference

We present an approach to adaptively utilize deep neural networks in order to reduce the evaluation time on new examples without loss of accuracy. Rather than attempting to redesign or approximate existing networks, we propose two schemes that adaptively utilize networks. We first pose an adaptive network evaluation scheme, where we learn a system to adaptively choose the components of a deep network to be evaluated for each example. By allowing examples correctly classified using early layers of the system to exit, we avoid the computational time associated with full evaluation of the network. We extend this to learn a network selection system that adaptively selects the network to be evaluated for each example. We show that computational time can be dramatically reduced by exploiting the fact that many examples can be correctly classified using relatively efficient networks and that complex, computationally costly networks are only necessary for a small fraction of examples. We pose a global objective for learning an adaptive early exit or network selection policy and solve it by reducing the policy learning problem to a layer-by-layer weighted binary classification problem. Empirically, these approaches yield dramatic reductions in computational cost, with up to a 2.8x speedup on state-of-the-art networks from the ImageNet image recognition challenge with minimal (<1%) loss of top-5 accuracy.

§ INTRODUCTION

Deep neural networks (DNNs) are among the most powerful and versatile machine learning techniques, achieving state-of-the-art accuracy in a variety of important applications, such as visual object recognition <cit.>, speech recognition <cit.>, and machine translation <cit.>. However, the power of DNNs comes at a considerable cost, namely, the computational cost of applying them to new examples. This cost, often called the test-time cost, has increased rapidly for many tasks (see Fig. <ref>) with ever-growing demands for improved performance in state-of-the-art systems. As a point of fact, the Resnet152 <cit.> architecture with 152 layers realizes a substantial 4.4% accuracy gain in top-5 performance over GoogLeNet <cit.> on the large-scale ImageNet dataset <cit.> but is about 14x slower at test time. The high test-time cost of state-of-the-art DNNs means that they can only be deployed on powerful computers, equipped with massive GPU accelerators. As a result, technology companies spend billions of dollars a year on expensive and power-hungry computer hardware. Moreover, high test-time cost prevents DNNs from being deployed on resource-constrained platforms, such as those found in Internet of Things (IoT) devices, smart phones, and wearables. This problem has given rise to a concentrated research effort to reduce the test-time cost of DNNs. Most of the work on this topic focuses on designing more efficient network topologies and on compressing pre-trained models using various techniques (see related work below). We propose a different approach, which leaves the original DNN intact and instead changes the way in which we apply the DNN to new examples.
We exploit the fact that natural data is typically a mix of easy examples and difficult examples, and we posit that the easy examples do not require the full power and complexity of a massive DNN. We pursue two concrete variants of this idea. First, we propose an adaptive early-exit strategy that allows easy examples to bypass some of the network's layers. Before each expensive neural network layer (e.g., convolutional layers), we train a policy that determines whether the current example should proceed to the next layer, or be diverted to a simple classifier for immediate classification. Our second approach, an adaptive network selection method, takes a set of pre-trained DNNs, each with a different cost/accuracy trade-off, and arranges them in a directed acyclic graph <cit.>, with the cheapest model first and the most expensive one last. We then train an exit policy at each node in the graph, which determines whether we should rely on the current model's predictions or predict the most beneficial next branch to forward the example to. In this context we pose a global objective for learning an adaptive early exit or network selection policy and solve it by reducing the policy learning problem to a layer-by-layer weighted binary classification problem.

We demonstrate the merits of our techniques on the ImageNet object recognition task, using a number of popular pre-trained DNNs. The early exit technique speeds up the average test-time evaluation of GoogLeNet <cit.> and Resnet50 <cit.> by 20-30% within reasonable accuracy margins. The network cascade achieves a 2.8x speedup compared to the pure Resnet50 model at 1% top-5 accuracy loss and a 1.9x speedup with no change in model accuracy. We also show that our method can approximate an oracle policy that can see the true error suffered for each instance. In addition to reducing the average test-time cost of DNNs, it is worth noting that our techniques are compatible with the common design of large systems of mobile devices, such as smart phone networks or smart surveillance-camera networks. These systems typically include a large number of resource-constrained edge devices that are connected to a central and resource-rich cloud. One of the main challenges involved in designing these systems is determining whether the machine-learned models will run on the devices or in the cloud. Offloading all of the work to the cloud can be problematic due to network latency, limited cloud ingress bandwidth, cloud availability and reliability issues, and privacy concerns. Our approach can be used to design such a system, by deploying a small inaccurate model and an exit policy on each device and a large accurate model in the cloud. Easy examples would be handled by the devices, while difficult ones would be forwarded to the cloud. Our approach naturally generalizes to a fog computing topology (where resource-constrained edge devices are connected to a more powerful local gateway computer, which in turn is connected to a sequence of increasingly powerful computers along the path to the data center). Such designs also allow our method to be used in memory-constrained settings, due to the offloading of complex models from the device.

§ RELATED WORK

Past work on reducing the evaluation time of deep neural networks has centered on reductions in precision and arithmetic computational cost, design of efficient network structures, and compression or sparsification of networks to reduce the number of convolutions, neurons, and edges.
The approach proposed in this paper is complementary. Our approach does not modify network structure or training and can be applied in tandem with these approaches to further reduce computational cost. The early efforts to compress large DNNs used a large teacher model to generate an endless stream of labeled examples for a smaller student model <cit.>. The wealth of labeled training data generated by the teacher model allowed the small student model to mimic its accuracy. Reduced-precision networks <cit.> have been extensively studied to reduce the memory footprint of networks and their test-time cost. Similarly, computationally efficient network structures have also been proposed to reduce the computational cost of deep networks by exploiting efficient operations to approximate complex functions, such as the inception layers introduced in GoogLeNet <cit.>. Network sparsification techniques attempt to identify and prune away redundant parts of large neural networks. A common approach is to remove unnecessary nodes/edges from the network <cit.>. In convolutional neural networks, the expensive convolution layers can be approximated <cit.> and redundant computation can be avoided <cit.>. More recently, researchers have designed spatially adaptive networks <cit.> where nodes in a layer are selectively activated. Others have developed cascade approaches <cit.> that allow early exits based on confidence feedback. Our approach can be seen as an instance of conditional computation, where we seek computational gains through layer-by-layer and network-level early exits. However, we propose a general framework which optimizes a novel system risk that includes computational costs as well as accuracy. Our method does not require within-layer modifications and works with directed acyclic graphs that allow multiple model evaluation paths. Our techniques for adaptive DNNs borrow ideas from the related sensor selection problem <cit.>. The goal of sensor selection is to adaptively choose sensor measurements or features for each example.

§ ADAPTIVE EARLY EXIT NETWORKS

Our first approach to reducing the test-time cost of deep neural networks is an early exit strategy. We first frame a global objective function and reduce the policy training for optimizing the system-wide risk to a layer-by-layer weighted binary classification (WBC) problem. We denote a labeled example as (x,y) ∈ R^d × {1,…,ℒ}, where d is the dimension of the data and {1,…,ℒ} is the set of classes represented in the data. We define the distribution generating the examples as 𝒳×𝒴. For a predicted label ŷ, we denote the loss L(ŷ,y). In this paper, we focus on the task of classification and, for exposition, on the indicator loss L(ŷ,y)=1_{ŷ≠y}. In practice we upper bound the indicator functions with the logistic loss for computational efficiency. As a running DNN example, we consider the AlexNet architecture <cit.>, which is composed of 5 convolutional layers followed by 3 fully connected layers. During evaluation of the network, computing each convolutional layer takes more than 3 times longer than computing a fully connected layer, so we consider a system that allows an example to exit the network after each of the first 4 convolutional layers. Let ŷ(x) denote the label predicted by the network for example x and assume that computing this prediction takes a constant time of T.
Moreover, let σ_k(x) denote the output of the k^th convolutional layer for example x and let t_k denote the time it takes to compute this value (from the time that x is fed to the input layer). Finally, let ŷ_k(x) be the predicted label if we exit after the k^th layer. After computing the k^th convolutional layer, we introduce a decision function γ_k that determines whether the example should exit the network with a label of ŷ_k(x) or proceed to the next layer for further evaluation. The input to this decision function is the output of the corresponding convolutional layer σ_k(x), and the value of γ_k(σ_k(x)) is either -1 (indicating an early exit) or 1. This architecture is depicted on the right-hand side of Fig. <ref>.

Globally, our goal is to minimize the evaluation time of the network such that the error rate of the adaptive system is no more than some user-chosen value B greater than that of the full network:

min_{γ_1,...,γ_4} E_{x∼𝒳}[ T_{γ_1,…,γ_4}(x) ]  subject to  E_{(x,y)∼𝒳×𝒴}[ (L(ŷ_{γ_1,...,γ_4}(x),y) − L(ŷ(x),y))_+ ] ≤ B.

Here, T_{γ_1,...,γ_4}(x) is the prediction time for example x in the adaptive system and ŷ_{γ_1,...,γ_4}(x) is the label predicted by the adaptive system for example x. In practice, the time required to predict a label and the excess loss introduced by the adaptive system can be recursively defined. As in <cit.> we can reduce the early exit policy training for minimizing the global risk to a WBC problem. The key idea is that, for each input, a policy must identify whether or not the future reward (expected future accuracy minus computational loss) outweighs the current-stage accuracy.

To this end, we first focus on the problem of learning the decision function γ_4, which determines if an example should exit after the fourth convolutional layer or whether it will be classified using the entire network. The time it takes to predict the label of example x depends on this decision and can be written as

T_4(x,γ_4) = T + τ(γ_4) if γ_4(σ_4(x)) = 1, and T_4(x,γ_4) = t_4 + τ(γ_4) if γ_4(σ_4(x)) = -1,

where τ(γ_4) is the computational time required to evaluate the function γ_4. Our goal is to learn a system that trades off the evaluation time and the induced error:

argmin_{γ_4∈Γ} E_{x∼𝒳}[ T_4(x,γ_4) ] + λ E_{(x,y)∼𝒳×𝒴}[ (L(ŷ_4(x),y) − L(ŷ(x),y))_+ 1_{γ_4(σ_4(x))=-1} ],

where (·)_+ is the function (z)_+ = max(z,0) and λ ∈ R^+ is a trade-off parameter that balances between evaluation time and error. Note that the function T_4(x,γ_4) can be expressed as a sum of indicator functions:

T_4(x,γ_4) = (T+τ(γ_4)) 1_{γ_4(σ_4(x))=1} + (t_4+τ(γ_4)) 1_{γ_4(σ_4(x))=-1} = T 1_{γ_4(σ_4(x))=1} + t_4 1_{γ_4(σ_4(x))=-1} + τ(γ_4).

Substituting for T_4(x,γ_4) allows us to reduce the problem to an importance-weighted binary learning problem:

argmin_{γ_4∈Γ} E_{(x,y)∼𝒳×𝒴}[ C_4(x,y) 1_{γ_4(σ_4(x))≠β_4(x)} ] + τ(γ_4),

where β_4(x) and C_4(x,y) are the optimal decision and cost at stage 4 for the example (x,y), defined as

β_4(x) = -1 if T > t_4 + λ(L(ŷ_4(x),y) − L(ŷ(x),y))_+, and β_4(x) = 1 otherwise,

and C_4(x,y) = |T − t_4 − λ(L(ŷ_4(x),y) − L(ŷ(x),y))_+|.
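To illustrate this reduction, the pseudo-labels β_4 and importance weights C_4 can be generated directly from per-example losses and timing constants, after which any off-the-shelf weighted classifier can be trained. The sketch below is a minimal Python example using scikit-learn; the function name, feature matrix and loss/time inputs are placeholders, and logistic regression (on, e.g., entropy features) is one concrete choice for the family Γ.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_exit_policy(features, loss_exit, loss_full, T_full, t_exit, lam):
    """Train one early-exit decision as weighted binary classification.

    features  : (n, d) policy inputs, e.g. entropy of the exit classifier
    loss_exit : (n,) per-example loss when exiting at this layer
    loss_full : (n,) per-example loss of the full network
    """
    excess = np.maximum(loss_exit - loss_full, 0.0)
    # beta_4: exit (-1) when the full evaluation time outweighs the time
    # of exiting plus the weighted excess loss; continue (+1) otherwise
    beta = np.where(T_full > t_exit + lam * excess, -1.0, 1.0)
    # C_4: importance of classifying this example correctly
    cost = np.abs(T_full - t_exit - lam * excess)
    clf = LogisticRegression()  # assumes both decisions occur in the data
    clf.fit(features, beta, sample_weight=cost)
    return clf
```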
Note that the regularization term, τ(γ_4), is important for choosing the optimal functional form of γ_4 and provides a natural mechanism to define the structure of the early exit system. Rather than limiting the family of functions Γ to a single functional form such as a linear function or a specific network architecture, we assume the family of functions Γ is the union of multiple functional families, notably including the constant decision functions γ_4(x)=1, ∀x ∈ |𝒳|. Although this constant function does not allow for adaptive network evaluation at the specific location, it additionally does not introduce any computational overhead, that is, τ(γ_4)=0. By including this constant function in Γ, we guarantee that our technique can only decrease the test-time cost. Empirically, we find that the most effective policies operate on classifier confidences such as classification entropy. Specifically, we consider the family of functions Γ as the union of three functional families: the aforementioned constant functions, linear classifiers on confidence features generated from linear classifiers applied to σ_4(x), and linear classifiers on confidence features generated from deep classifiers applied to σ_4(x).

Rather than optimizing jointly over all three families, we leverage the fact that the optimal solution to Eqn. (<ref>) can be found by optimizing over each of the three families of functions independently. For each family of functions, the policy evaluation time τ(γ_4) is constant, and therefore solving (<ref>) over a single family of functions is equivalent to solving an unregularized learning problem. We exploit this by solving the three unregularized learning problems and taking the minimum over the three solutions. In order to learn the sequence of decision functions, we consider a bottom-up training scheme, as previously proposed for sensor selection <cit.>. In this scheme, we learn the deepest (in time) early exit block first, then fix its outputs. Fixing the outputs of this trained function, we then train the early exit function immediately preceding the deepest early exit function (γ_3 in Fig. <ref>). For a general early exit system, we recursively define the future time, T_k(x,γ_k), and the future predicted label, ỹ_k(x,γ_k), as

T_k(x,γ_k) = T + τ(γ_k) if γ_k(σ_k(x)) = 1 and k = K; T_k(x,γ_k) = T_{k+1}(x,γ_{k+1}) + τ(γ_k) if γ_k(σ_k(x)) = 1 and k < K; and T_k(x,γ_k) = t_k + τ(γ_k) if γ_k(σ_k(x)) = -1;

and

ỹ_k(x,γ_k) = ŷ(x) if k = K+1; ỹ_k(x,γ_k) = ŷ(x) if k = K and γ_k(σ_k(x)) = 1; ỹ_k(x,γ_k) = ỹ_{k+1}(x,γ_{k+1}) if k < K and γ_k(σ_k(x)) = 1; and ỹ_k(x,γ_k) = ŷ_k(x) if γ_k(σ_k(x)) = -1.

Using these definitions, we can generalize Eqn. (<ref>). For a system with K early exit functions, the k^th early exit function can be trained by solving the supervised learning problem

argmin_{γ_k∈Γ} E_{(x,y)∼𝒳×𝒴}[ C_k(x,y) 1_{γ_k(σ_k(x))≠β_k(x)} ] + τ(γ_k),

where the optimal decision β_k(x) and cost C_k(x,y) are defined as

β_k(x) = -1 if k < K and T_{k+1}(x,γ_{k+1}) ≥ t_k + λ(L(ŷ_k(x),y) − L(ỹ_{k+1}(x),y))_+; β_k(x) = -1 if k = K and T ≥ t_k + λ(L(ŷ_k(x),y) − L(ŷ(x),y))_+; and β_k(x) = 1 otherwise;

and

C_k(x,y) = |T_{k+1}(x,γ_{k+1}) − t_k − λ(L(ŷ_k(x),y) − L(ỹ_{k+1}(x),y))_+| for k < K, and C_k(x,y) = |T − t_k − λ(L(ŷ_k(x),y) − L(ŷ(x),y))_+| for k = K.

Eqn. (<ref>) allows for efficient training of an early exit system by sequentially training early exit decision functions from the bottom of the network upwards. Furthermore, by including constant functions in the family of functions Γ and training early exit functions in all potential stages of the system, the early exit architecture can also naturally be discovered. Finally, in the case of a single option at each exit, the layer-wise learning scheme is equivalent to jointly optimizing all the exits with respect to the full system risk.

§ NETWORK SELECTION

As shown in Fig. <ref>, the computational time has grown dramatically with respect to classification performance. Rather than attempting to reduce the complexity of the state-of-the-art networks, we instead leverage this non-linear growth by extending the early exiting strategy to the regime of network selection.
Conceptually, we seek to exploit the fact that many examples are correctly classified by relatively efficient networks such as Alexnet and GoogLeNet, whereas only a small fraction of examples are correctly classified by computationally expensive networks such as Resnet152 and incorrectly classified by GoogLeNet and Alexnet. As an example, assume we have three pre-trained networks, N_1, N_2, and N_3. For an example x, denote the predictions of the networks as N_1(x), N_2(x), and N_3(x). Additionally, denote the evaluation times of the networks as τ(N_1), τ(N_2), and τ(N_3). As in Fig. <ref>, the adaptive system is composed of two decision functions that determine which network is evaluated for each example. First, κ_1:|𝒳|→{N_1,N_2,N_3} is applied after evaluation of N_1 to determine if the classification decision from N_1 should be returned or if network N_2 or network N_3 should be evaluated for the example. For examples that are evaluated on N_2, κ_2:|𝒳|→{N_2,N_3} determines if the classification decision from N_2 should be returned or if network N_3 should be evaluated.

Our goal is to learn the functions κ_1 and κ_2 that minimize the average evaluation time subject to a constraint on the average loss induced by adaptive network selection. As in the adaptive early exit case, we first learn κ_2 to trade off between the average evaluation time and the induced error:

min_{κ_2∈Γ} E_{x∼𝒳}[ τ(N_3) 1_{κ_2(x)=N_3} ] + τ(κ_2) + λ E_{(x,y)∼𝒳×𝒴}[ (L(N_2(x),y) − L(N_3(x),y))_+ 1_{κ_2(x)=N_2} ],

where λ ∈ R^+ is a trade-off parameter. As in the adaptive network usage case, this problem can be posed as an importance-weighted supervised learning problem:

min_{κ_2∈Γ} E_{(x,y)∼𝒳×𝒴}[ W_2(x,y) 1_{κ_2(x)≠θ_2(x)} ] + τ(κ_2),

where θ_2(x) and W_2(x,y) are the optimal decision and cost at this stage for the example/label pair (x,y), defined as

θ_2(x) = N_2 if τ(N_3) > λ(L(N_2(x),y) − L(N_3(x),y))_+, and θ_2(x) = N_3 otherwise,

and W_2(x,y) = |τ(N_3) − λ(L(N_2(x),y) − L(N_3(x),y))_+|.

Once κ_2 has been trained according to Eqn. (<ref>), the evaluation time for examples that pass through N_2 and are routed by κ_2 can be defined as T_{κ_2}(x) = τ(N_2) + τ(κ_2) + τ(N_3) 1_{κ_2(x)=N_3}. As in the adaptive early exit case, we train and fix the last decision function, κ_2, then train the earlier function, κ_1. As before, we seek to trade off between evaluation time and error:

min_{κ_1∈Γ} E_{x∼𝒳}[ τ(N_3) 1_{κ_1(x)=N_3} + τ(N_2) 1_{κ_1(x)=N_2} ] + τ(κ_1) + λ E_{(x,y)∼𝒳×𝒴}[ (L(N_2(x),y) − L(N_3(x),y))_+ 1_{κ_1(x)=N_2} + (L(N_1(x),y) − L(N_3(x),y))_+ 1_{κ_1(x)=N_1} ].

This can be reduced to a cost-sensitive learning problem:

min_{κ_1∈Γ} E_{(x,y)∼𝒳×𝒴}[ R_3(x,y) 1_{κ_1(x)=N_3} + R_2(x,y) 1_{κ_1(x)=N_2} + R_1(x,y) 1_{κ_1(x)=N_1} ] + τ(κ_1),

where the costs are defined as

R_1(x,y) = (L(N_1(x),y) − L(N_3(x),y))_+,
R_2(x,y) = (L(N_2(x),y) − L(N_3(x),y))_+ + τ(N_2),
R_3(x,y) = τ(N_3).
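The same recipe applies to each routing function in the network-selection graph. The following sketch mirrors the definitions of θ_2 and W_2 above for a single stay-or-forward decision; all inputs are per-example quantities that would be precomputed on training data, and the names are illustrative rather than part of any released implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_router(features, loss_stay, loss_next, tau_next, lam):
    """One stay-or-forward step of the cascade, mirroring theta_2 / W_2.

    loss_stay : (n,) loss of the current network's predictions
    loss_next : (n,) loss of the next (more expensive) network
    tau_next  : evaluation time of the next network
    """
    gain = np.maximum(loss_stay - loss_next, 0.0)  # accuracy bought by forwarding
    theta = np.where(tau_next > lam * gain, 0, 1)  # 0: stay, 1: forward
    weight = np.abs(tau_next - lam * gain)         # importance weight W
    clf = LogisticRegression()
    clf.fit(features, theta, sample_weight=weight)
    return clf
```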
§ EXPERIMENTAL SECTION

We evaluate our method on the ImageNet 2012 classification dataset <cit.>, which has 1000 object classes. We train using the 1.28 million training images and evaluate the system using 50k validation images. We use the pre-trained models from the Caffe Model Zoo for Alexnet, GoogLeNet and Resnet50 <cit.>. For preprocessing we follow the same routines proposed for these networks and verify the final network performances within a small margin (<0.1%). Note that it is common to use ensembles of networks and multiple crops to achieve maximum performance. These methods add minimal gain in accuracy while increasing the system cost dramatically. As the speedup margin increases, it becomes trivial for the policy to show significant speedups within the same accuracy tolerance. We believe such speedups are not useful in practice and focus on the single-crop, single-model case.

Temporal measurements: We measure network times using the built-in tool in the Caffe library on a server that utilizes an Nvidia Titan X Pascal with CuDNN 5. Since our focus is on the computational cost of the networks, we ignore the data loading and preprocessing times. The reported times are actual measurements including the policy overhead.

Policy form and meta-features: In addition to the outputs of the convolutional layers of earlier networks, we augment the feature space with the entropy of the prediction probabilities. We relax the indicators in equations (<ref>) and (<ref>) and learn a linear logistic regression model on these features for our policy. We experimented with pooled internal representations, but in practice, inclusion of the entropy feature with a simple linear policy significantly outperforms more complex policy functions that exclude the entropy feature.

§.§ Network Selection

Baselines: Our full system, depicted in Figure <ref>, starts with Alexnet. Following the evaluation of Alexnet, the system determines for each example either to return the prediction, route the example to GoogLeNet, or route the example to Resnet50. For examples that are routed to GoogLeNet, the system either returns the prediction output by GoogLeNet or routes the example to Resnet50. As baselines, we compare against a uniform policy and a myopic policy which learns a single threshold based on model confidence. We also report the performance of different system topologies. To provide a bound on the achievable performance, we show the performance of a soft oracle. The soft oracle has access to the classification labels and sends each example to the fastest model that correctly classifies the example. Since having access to the labels is too strong, we made the oracle softer by adding two constraints. First, it follows the same network topology; also, it cannot make decisions without observing the model feedback first, incurring the same overhead. Second, it can only exit at a cheaper model if all later models agree on the true label. This second constraint is added due to the fact that our goal is not to improve the prediction performance of the system but to reduce the computational time, and therefore we prevent the oracle from "correcting" mistakes made by the most complex networks. We sweep the cost trade-off parameter in the range 0.0 to 0.1 to achieve different budget points. Note that due to the weights in our cost formulation, even when the pseudo-labels are identical, policy behavior can differ. Conceptually, the weights balance the importance of the samples that gain in classification loss in future stages versus samples that gain in computational savings by exiting at early stages.

The results are demonstrated in Figure <ref>. We see that both the full tree and the a->g->r50 cascade achieve a significant (2.8x) speedup over using Resnet50 while maintaining its accuracy within 1%. The classifier feedback for the policy has a dramatic impact on its performance. Although Alexnet introduces much less overhead compared to GoogLeNet (≈0.2 ms vs ≈0.7 ms), the a->r50 policy performs significantly worse in lower budget regions. Our full tree policy learns to choose the best order for all budget regions.
Furthermore, the policy matches the soft oracle performance in both the high and low budget regions. Note that GoogLeNet is very well positioned at a 0.7 ms per-image budget, probably due to its efficiency-oriented architectural design with inception blocks <cit.>. For low budget regions, the overhead of the policy is a detriment: even though it can learn to send almost half the samples to Alexnet instead of GoogLeNet with marginal loss in accuracy, the extra 0.23 ms Alexnet overhead brings the balance point, ≈0.65 ms, very close to using only GoogLeNet at 0.7 ms. The ratio between network evaluation times is a significant factor for our system. Fortunately, as mentioned before, for many applications the ratio between different models can be very high (cloud computing upload times, Resnet versus Alexnet differences, etc.).

We further analyzed the network usage and runtime proportion statistics for samples at different budget regions. Fig. <ref> demonstrates the results at three different budget levels. The full tree policy avoids using GoogLeNet altogether for high budget regions. This is the expected behavior since the a->r50 policy performs just as well in those regions and using GoogLeNet in the decision adds too much overhead. At mid-level budgets the policy distributes samples more evenly. Note that the sum of the overheads is close to the useful runtime of the cheaper networks in this region. This is possible since the earlier networks are very lightweight.

§.§ Network Early Exits

To output a prediction following each convolutional layer, we train a single-layer linear classifier after a global average pooling for each layer. We added global pooling to minimize the policy overhead in earlier exits. For Resnet50 we added an exit after the output layers of 2a, 2c, 3a, 3d, 4a and 4f. The dimensionalities of the exit features after global average pooling are 256, 256, 512, 512, 1024 and 1024, in the same order as the layer names. For GoogLeNet we added the exits after the concatenated outputs of every inception layer. Table <ref> shows the early exit performance for different networks. The gains are more marginal compared to network selection. Fig. <ref> shows the accuracy gains per evaluation time for different layers. Interestingly, the accuracy gain per time is more linear within the same architecture compared to different network architectures. This explains why the adaptive policy works better for network selection compared to early exits.

§.§ Network Error Analysis

Fig. <ref> shows, over all examples, the distribution of which networks correctly label each example. Notably, 50% and 77% of the examples are correctly classified by all networks with respect to top 1 and top 5 error, respectively. Similarly, 18% and 5% of the examples are incorrectly classified by all networks with respect to their top 1 and top 5 error, respectively. These results verify our hypothesis that for a large fraction of data, there is no need for costly networks. In particular, for the 68% and 82% of data with no change in top 1 and top 5 error, respectively, the use of any network apart from Alexnet is unnecessary and only adds unnecessary computational time. Additionally, it is worth noting the balance between examples incorrectly classified by all networks, 18% and 5% respectively for top 1 and top 5 error, and the fraction of examples correctly classified by either GoogLeNet or Resnet50 but not Alexnet, 25.1% and 15.1% for top 1 and top 5 error, respectively.
This behavior supports our observation that the entropy of classification decisions is an important feature in making policy decisions, as examples likely to be incorrectly classified by Alexnet are likely to be classified correctly by a later network. Note that our system is trained using the same data used to train the networks. Generally, the resulting evaluation error for each network on training data is significantly lower than the error that arises on test data, and therefore our system is biased towards sending examples to more complex networks that generally show negligible training error. Practically, this problem is alleviated through the use of validation data to train the adaptive systems. In order to maintain the reported performance of the networks without expansion of the training set, we instead utilize the same data for training both the networks and the adaptive systems; however, we note that the performance of our adaptive systems is generally better when trained on data excluded from the network training.

§ CONCLUSION

We proposed two different schemes to adaptively trade off model accuracy with model evaluation time for deep neural networks. We demonstrated that significant gains in computational time are possible through our novel policy, with negligible loss in accuracy, on the ImageNet image recognition dataset. We posed a global objective for learning an adaptive early exit or network selection policy and solved it by reducing the policy learning problem to a layer-by-layer weighted binary classification problem. We believe that adaptivity is very important in the age of growing data for models with high variance in computational time and quality. We also showed that our method approximates an oracle-based policy that has the benefit of access to the true error for each instance from all the networks.

§ ACKNOWLEDGEMENTS

This material is based upon work supported in part by NSF Grants CCF: 1320566, NSF Grant CNS: 1330008, NSF CCF: 1527618, the U.S. Department of Homeland Security, Science and Technology Directorate, Office of University Programs, under Grant Award 2013-ST-061-ED0001, and by ONR contract N00014-13-C-0288. The views and conclusions contained in this document are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of the NSF, U.S. DHS, ONR or AF.
http://arxiv.org/abs/1702.07811v2
{ "authors": [ "Tolga Bolukbasi", "Joseph Wang", "Ofer Dekel", "Venkatesh Saligrama" ], "categories": [ "cs.LG", "cs.CV", "cs.NE", "stat.ML" ], "primary_category": "cs.LG", "published": "20170225002251", "title": "Adaptive Neural Networks for Efficient Inference" }
Gran Sasso Science Institute (INFN), Viale Francesco Crispi 7, I-67100 L'Aquila, Italy Dipartimento di Fisica, Università di Roma "Tor Vergata", Via della Ricerca Scientifica 1, I-00133, Roma, Italy Sezione INFN, Università di Roma "Tor Vergata", Via della Ricerca Scientifica 1, I-00133, Roma, Italy sandeep.haridasu@gssi.infn.it

Strong evidence for an accelerating universe

Balakrishna S. Haridasu<ref>, Vladimir V. Luković<ref>,<ref>, Rocco D'Agostino<ref>,<ref>, Nicola Vittorio<ref>,<ref>

Received / Accepted

A recent analysis of the Supernova Ia data claims 'marginal' (∼3σ) evidence for cosmic acceleration. This result has been complemented with a non-accelerating R_h=ct cosmology, which was presented as a valid alternative to the ΛCDM model. In this paper we use the same analysis to show that non-marginal evidence for acceleration is actually found. We compare the standard Friedmann models to the R_h=ct cosmology by complementing SN Ia data with the Baryon Acoustic Oscillation, Gamma Ray Burst and Observational Hubble datasets. We also study the power-law model, which is a functional generalisation of R_h=ct. We find that the evidence for late-time acceleration is beyond refutable at a 4.56σ confidence level from SN Ia data alone, and at an even stronger confidence level (5.38σ) from our joint analysis. Also, the non-accelerating R_h=ct model fails to compete statistically with the ΛCDM model, having a Δ(AIC)∼30.

§ INTRODUCTION

The very first evidence for an accelerated expansion of the universe was obtained using the SN Ia observations in <cit.> and <cit.>, and it has been further confirmed with the most recent supernova data <cit.>. Other low-redshift data, such as the Observational Hubble parameter data (OHD) <cit.> and Baryon Acoustic Oscillations (BAO) <cit.>, also support an accelerating universe. As an independent observation, the Cosmic Microwave Background (CMB) radiation has been in excellent concert with these results and has provided the most stringent constraints on the cosmological models <cit.>. These observations have established ΛCDM as the concordance model of cosmology, and the late-time acceleration has become a well-accepted phenomenon. However, <cit.> have used a modified statistical model for the analysis of the supernova JLA dataset <cit.> and claimed that the evidence for the acceleration is marginal (≲3σ). The modification has been done by assuming an intrinsic variation in the SN absolute magnitude and in the light curve (colour and stretch) corrections, which were modelled as Gaussian. More recently, <cit.> have strongly criticised this approach as incomplete and suggested using redshift-dependent ad hoc functions for these corrections, presenting the evidence for acceleration to be ∼4.2σ. In this paper we want to show that even with the less flexible modelling of <cit.> the evidence for acceleration is very strong. We also extend the joint analysis done in <cit.> with the inclusion of the Gamma Ray Burst (GRB) dataset <cit.>.

The marginal evidence for an accelerating universe quoted in <cit.> implies a scenario with very low dark matter and dark energy densities. As this scenario converges towards the Milne model, it has been complemented with the R_h=ct cosmology <cit.>, which features a non-accelerating linear expansion of the universe.
This model essentially advocates that the Hubble sphere is the same as the particle horizon of the universe <cit.>. Also, the R_h=ct model has often been regarded as a Milne universe <cit.>, and several physical issues against this interpretation have been raised <cit.>. In fact, this model preserves the linear expansion by imposing the constraint on the total equation of state (EoS), ρ_tot + 3p_tot = 0, without requiring ρ_tot = p_tot = 0. It is interesting to note that this model also coincides with the linear coasting models that are discussed in the context of modified gravity <cit.>. In a recent work, <cit.> presented a linear coasting model in f(R) gravity, which is functionally equivalent to the R_h=ct model. A linear expansion model can be generalised to a power-law model with an exponent n, which was brought up as an alternative to the standard model as it does not have the flatness and horizon problems <cit.>. In addition, it has been shown in <cit.> that classical fields coupling to spacetime curvature can give rise to a back-reaction from singularities, which can change the nature of expansion from exponential to power-law.

The R_h=ct model has been tested time and again, and contradictory conclusions have been presented. <cit.> have pointed out some of the problems in this model using the cosmic chronometer data from <cit.> and radial BAO measurements, along with several model-independent diagnostics, to show that R_h=ct is not a viable model. In <cit.> several claims against R_h=ct were refuted, and a list of works favouring R_h=ct over ΛCDM was compiled in <cit.>. Power-law cosmologies with n ≥ 1 have been explored against data in several works such as <cit.>, consistently finding n ∼ 1.5. In a more recent work <cit.>, power-law and R_h=ct models were tested with SN Ia and BAO datasets and were found to be highly disfavoured against ΛCDM. As most of these works have used older data for their analyses, we believe this is a good occasion to revise the constraints using more recent data and hence statistically verify the viability of these models against ΛCDM.

The present paper is structured as follows. A brief introduction to the models is given in <Ref>. We describe the data and method used for our joint analysis in <Ref>. Our results and discussion are given in <Ref>.

§ MODELS

In this section we briefly describe the standard ΛCDM model and the power-law and R_h=ct cosmologies, which we test to assess the late-time acceleration. The dominant components of the ΛCDM model at late times are cold dark matter (CDM), treated as dust, and a dark energy (DE) fluid with an EoS parameter w=-1. The corresponding Friedmann equation is given by

H(z)^2 = H_0^2 ( Ω_m(1+z)^3 + Ω_k(1+z)^2 + Ω_Λ(1+z)^{3(1+w)} ),

where H_0 is the present expansion rate, while Ω_m, Ω_Λ and Ω_k are the dimensionless density parameters. For the flat ΛCDM model, Ω_m = 1 − Ω_Λ. We test the extensions of the ΛCDM model, namely the kΛCDM model with the constraint Ω_m = 1 − Ω_Λ − Ω_k, and the flat wCDM model with w as a free parameter. The second Friedmann equation, ä/a = −(4πG/3) Σ_i ρ_i(1+3w_i), gives us insight into the necessary conditions to be satisfied for assessing the dynamics of the expansion rate. The criteria for acceleration can be derived as Ω_m/2 ≤ Ω_Λ for kΛCDM and w ≤ −1/(3Ω_Λ) for wCDM. We can assess the evidence for acceleration by estimating the confidence level with which these criteria are satisfied. In a flat, power-law cosmological model the scale factor evolves in time as a(t) ∝ t^n, with the Hubble equation H(z) = H_0(1+z)^{1/n}.
Here, n>1 implies an accelerated scenario. Although motivated physically with a total EoS parameter w_tot=-1/3, the flat R_h=ct model coincides with the power-law model for n=1. It is worthwhile noting that <Ref> reduces to the functional form of a power-law model for selected parameter values: Ω_m = 0, Ω_Λ = 1 and w = (2−3n)/(3n). The luminosity distance for all these models with their corresponding H(z) is written as

D_L(z) = (1+z) c ∫_0^z dξ/H(ξ)  for Ω_k = 0, and
D_L(z) = (1+z) c/(H_0 √(−Ω_k)) sin( √(−Ω_k) H_0 ∫_0^z dξ/H(ξ) )  for Ω_k ≠ 0.

The theoretical distance modulus is defined as μ_th = 5 log_10[D_L(Mpc)] + 25. The angular diameter distance is D_A(z) = D_L(z)/(1+z)^2, which is used in the modelling of the BAO data.

§ DATA AND METHOD

We test the models described in <Ref> against data in the redshift range 0 < z ≲ 8. We use the SN Ia, BAO, OHD and GRB observables, which are mutually uncorrelated. We perform a joint analysis using all the datasets together by defining a combined likelihood function. We keep the description of the data to a minimum, as we refer to <cit.> for most of it. We use the JLA dataset <cit.>, consisting of 740 SN, which already provides an empirical correction to the absolute magnitude,

M^corr_B = M_B − α s + β c.

In the statistical method we implement <cit.>, the stretch s and colour c corrections and the absolute magnitude M_B are all considered random Gaussian variables without any redshift dependence. Such an assumption does not account for the selection effects in the s and c corrections. The JLA dataset has been corrected for the selection bias only in the apparent magnitude <cit.>, which is why the corrections in s and c have to be explicitly included when they are modelled as distributions. As anticipated in the introduction, we use the less flexible modelling of <cit.> to show that even in this case the evidence for acceleration is very strong. Different methods for treating the selection bias in the SN data and their shortcomings have been discussed in <cit.>. It is of high importance to study these effects, which we shall address in a forthcoming paper. The SN Ia likelihood L_SN used here is described in <cit.>.

The BAO data are available for the compound observable D_V defined in <cit.>,

D_V(z) = [ (1+z)^2 D_A^2(z) c z / H(z) ]^{1/3}.

It is important to note that the observable D_V is usually presented as a ratio with r_d, the sound horizon at the drag epoch. For the purpose of model selection we fit r_d as a free parameter instead of using the standard fit function for the drag epoch z_d <cit.>, as it is not very straightforward to use it for the power-law cosmologies. A similar approach was also implemented in <cit.>. Hence, the parameters H_0 and r_d are now degenerate, and the BAO data by itself is only able to constrain the combination r_d × H_0 and Ω_m. To avoid correlations among different BAO data points, we use only six measurements taken from <cit.>, also summarised in Table 1 of <cit.>. A simple likelihood for the uncorrelated data is then implemented as

L_BAO ∝ exp[ −(1/2) Σ_{i=1}^{6} ( (r_d/D_V^i − r_d/D_V(z_i)) / σ_{r_d/D_V}^i )^2 ].

The measurements of the expansion rate have been estimated using the differential age (DA) method suggested in <cit.>, which considers pairs of passively evolving red galaxies at similar redshifts to obtain dz/dt. We use a compilation of 30 uncorrelated DA points taken from <cit.>, obtained using BC03 models. We implement a simple likelihood function assuming all the data are uncorrelated:

L_OHD ∝ exp[ −(1/2) Σ_{i=1}^{30} ( (H_i − H(z_i)) / σ_{H_i} )^2 ].
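For concreteness, the model predictions entering these likelihoods follow from <Ref> by a one-dimensional quadrature. The sketch below (Python with SciPy; the parameter values are illustrative, not our best fits) evaluates D_L and μ_th for the Friedmann models, handling both the flat and the curved cases.

```python
import numpy as np
from scipy.integrate import quad

C_KMS = 299_792.458   # speed of light (km/s)

def hubble(z, h0, om, ok, ol, w=-1.0):
    """H(z) for the (k/w)LCDM family; h0 in km/s/Mpc."""
    return h0 * np.sqrt(om*(1+z)**3 + ok*(1+z)**2 + ol*(1+z)**(3*(1+w)))

def lum_dist(z, h0, om, ok, ol, w=-1.0):
    """Luminosity distance in Mpc, flat or curved."""
    I, _ = quad(lambda x: 1.0 / hubble(x, h0, om, ok, ol, w), 0.0, z)
    if abs(ok) < 1e-8:
        return (1+z) * C_KMS * I
    s = np.sqrt(abs(ok)) * h0 * I
    chi = np.sin(s) if ok < 0 else np.sinh(s)   # closed vs open geometry
    return (1+z) * C_KMS / (h0 * np.sqrt(abs(ok))) * chi

mu_th = 5*np.log10(lum_dist(0.5, 70.0, 0.3, 0.0, 0.7)) + 25  # illustrative values
print(f"mu_th(z=0.5) = {mu_th:.2f}")
```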
Finally, we use the GRB dataset comprising 109 observations compiled with the well-known Amati relation <cit.>. The dataset has 50 GRBs at z<1.4 and 59 GRBs at z>1.4, in a total range of 0.1<z<8.1. The dataset is given in tables I and II of <cit.>. The distance modulus μ_GRB and the corresponding standard deviation can be defined as, μ_GRB = (5/2)(log_10[ (1+z)/(4π) (E_p,i/300)^b S_bolo^{-1}/100^2] + λ) and σ_μ_GRB^2 = (5/(2log(10)))^2[(b σ_E_p,i/E_p,i)^2 + (σ_S_bolo/S_bolo)^2 + σ_sys^2]. We adopt σ_sys = 0.7571, following the model-independent calibration done in <cit.>. The likelihood for the GRB is defined as, L_ GRB ∝ exp[-(1/2)∑_i=1^109((μ_GRB^i - μ_th^i)/σ_μ_GRB^i)^2]. The joint likelihood for these four independent observables is given as L_ tot=L_ SN L_ OHD L_ BAO L_ GRB. We use the two most common criteria for model comparison in cosmology, namely the Akaike Information Criterion (AIC) <cit.> and the Bayesian Information Criterion (BIC) <cit.>. The AIC and BIC values for a large number of measurements are defined as, AIC = -2log L^max + 2 N_p and BIC = -2log L^max + N_p log(N_data), where N_p and N_data are the number of parameters and data points, respectively. The Δ(AIC) = AIC - AIC_ref criterion takes into account the number of parameters to estimate the amount of information lost in one model when compared to a reference model, in our case ΛCDM. We define Δ(BIC) similarly to Δ(AIC). A negative value of Δ(AIC) or Δ(BIC) indicates that the model in comparison performs better than the reference model. § RESULTS AND DISCUSSION In this Section we present the results obtained from our joint analysis for the models and data given in the earlier Sections. We first present our assessment of the current accelerated state of the universe and then comment on the model comparison using the AIC and BIC statistics. The SN Ia Hubble diagram was claimed to be consistent with a uniform rate of expansion in <cit.>, as the analysis in the kΛCDM model evades the non-accelerating criterion by only ≲ 3σ. We reproduce this result and agree with this statement (see top panel of <Ref>). However, there is a strong prejudice for a flat universe from the CMB data (<cit.>), and hence it is important to analyse SN Ia and other cosmological data in the context of a flat wCDM model. We find that the evidence for acceleration in the Ω_m - w plane is much more significant (≥ 4.56σ) compared to the marginal (≥ 2.88σ) found in the Ω_m - Ω_Λ plane (see <Ref>). The claimed marginal evidence for acceleration that corresponds to a very low matter density becomes more significant in both kΛCDM and wCDM models when more physical values of Ω_m are considered. In <Ref> contours for the best-fit regions of the supernova dataset and our joint analysis are shown. The joint analysis improves the evidence for acceleration in the kΛCDM model to 4.98σ and in the wCDM model to 5.38σ. The parameter space Ω_m - w allows us to explore points that correspond to the functional forms of R_h=ct and power-law models for specific values of the parameters in wCDM. The point (Ω_m,w)=(0,-1/3), shown in red in the bottom panel of <Ref>, phenomenologically reproduces the expansion law of the R_h=ct model (cf. <Ref>). Similarly, the point (Ω_m,w)=(0,-0.38) corresponds to the best-fit value of n=1.08 in the power-law model from our joint analysis (cf. <Ref> and <Ref>). Using SN data alone, the best-fit model for the power-law cosmology is found to be n = 1.28 (corresponding to w = -0.48), implying an accelerating universe (cf. <Ref>).
In the Ω_m - w plane this point lies within the 2σ SN confidence region, in contrast to the R_h=ct point at 4.56σ. However, when the joint analysis is performed, these points are at 5σ and 5.5σ for the power-law and R_h=ct models, respectively. The best-fit parameter values for the models considered in this work are presented in <Ref>. The values for the fitted r_d parameter are consistent with the estimates (see <cit.>) using the fit function for the drag epoch from <cit.>. In our analysis, the best-fit value for the index n is driven towards unity. This result is quite different from the previous n estimates <cit.>, which consistently suggest n ∼ 1.5. The modification in the statistical method for the SN analysis enables this change. While the value we find for BAO data alone (n=0.94) is consistent with the value given by <cit.> (n=0.93), for BAO+SN data we find n=1.11 in contrast to his n=1.52. Our best-fit value for the joint analysis (n = 1.08 ± 0.02) is now consistent with n = 1.14 ± 0.05 found by <cit.> using X-ray cluster data. It is clear that the H_0 estimates for the power-law and R_h=ct models are highly in tension with the direct estimate in <cit.>, which is already a well-established problem for ΛCDM <cit.>. In this work, the joint analysis provides H_0=66.4±1.8 km s^-1/Mpc for the ΛCDM scenario. We note that this value is consistent with our previous estimate <cit.> but with a higher error due to the difference in the BAO analysis (see <Ref>). In any case, this value still remains in tension with the direct estimate at 2.7σ. We want to stress that the Milne model with Ω_m=Ω_Λ=0 and Ω_k=1 does not correspond to the flat R_h=ct model. In fact, these two models share the same Hubble equation and EoS (ρ + 3 p = 0), but do not have the same D_L, as the negative curvature in the Milne model corresponds to D_L ∝ (1+z) sinh(log(1+z)) with Ω_k=1, whereas in the R_h=ct model D_L ∝ (1+z) log(1+z). In the SN Ia Hubble diagram it is difficult to see any significant difference among the ΛCDM, Milne and R_h=ct model predictions; however, their performance can be more effectively tested with the information criteria. The analysis of SN Union 2.1 data done by <cit.> calls for a non-accelerating scenario, as the R_h=ct model was claimed to perform on par with ΛCDM. This point was also taken by <cit.>, who analysed the JLA dataset with their improved statistical method. In our work, using the same technique for analysing the JLA data we find that R_h=ct performs poorly when compared to ΛCDM with Δ(AIC)_SN∼ 22, while a power-law model performs as well as ΛCDM with n∼1.28 from SN data. Our results are consistent with the previous work by <cit.>. The values of Δ(AIC)_SN obtained from the SN data alone are shown in <Ref>. Note that the Milne model was claimed to perform marginally worse in comparison to ΛCDM using the SN data alone <cit.>. In our work, we find for this model Δ(AIC)_SN = 9.8, high enough to reject a model. In any case, the Milne model fails to keep up when the high-redshift GRB (Δ(AIC)_GRB∼20) and BAO (Δ(AIC)_BAO∼38) data are used, yielding a total Δ(AIC)_Joint∼66.4 (see <Ref>). The AIC statistics disfavours the power-law model by Δ(AIC)_Joint∼ 28 and the R_h=ct model by Δ(AIC)_Joint∼ 30 in comparison to the ΛCDM model. In any case, our joint analysis shows that all the three models (Milne, power-law and R_h=ct) are strongly disfavoured with respect to ΛCDM (<Ref>).
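For reference, the Δ(AIC) and Δ(BIC) bookkeeping behind the numbers quoted above amounts to a few lines; the sketch below is illustrative only, with placeholder log-likelihood values rather than our fitted ones.

```python
import numpy as np

def aic(logLmax, n_par):
    # AIC = -2 ln L^max + 2 N_p
    return -2.0 * logLmax + 2.0 * n_par

def bic(logLmax, n_par, n_data):
    # BIC = -2 ln L^max + N_p ln(N_data)
    return -2.0 * logLmax + n_par * np.log(n_data)

# hypothetical usage: Delta(AIC/BIC) of a test model relative to LCDM
logL_lcdm, logL_model = -352.0, -367.0   # placeholder best-fit values
n_data = 885                             # 740 SN + 6 BAO + 30 OHD + 109 GRB
print(aic(logL_model, 2) - aic(logL_lcdm, 3))
print(bic(logL_model, 2, n_data) - bic(logL_lcdm, 3, n_data))
```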
§ CONCLUSIONS Contrary to the claim by <cit.>, we find that the SN data alone indicate an accelerating universe at more than 4.56σ confidence level. This evidence becomes even stronger (5.38σ) when we perform the joint analysis combining SN, BAO, OHD and GRB data. The non-accelerating R_h=ct model fails to explain these data at once, resulting in Δ(AIC)_Joint∼ 30 with respect to ΛCDM. Although the power-law model performs slightly better than the R_h=ct model, it similarly fails, with Δ(AIC)_Joint∼ 28. Our analysis shows that the possibility of having models with a uniform rate of expansion is excluded given the current low-redshift data. In conclusion, on one hand we re-assert that the current expansion of our universe is accelerated, and on the other hand that ΛCDM still constitutes the baseline for a concordance model in cosmology.
http://arxiv.org/abs/1702.08244v1
{ "authors": [ "Balakrishna S. Haridasu", "Vladimir V. Luković", "Rocco D'Agostino", "Nicola Vittorio" ], "categories": [ "astro-ph.CO" ], "primary_category": "astro-ph.CO", "published": "20170227114107", "title": "Strong evidence for an accelerating universe" }
Maximum Size of a Family of Pairwise Graph-Different Permutations Louis Golowich, Chiheon Kim, Richard Zhou Two permutations of the vertices of a graph G are called G-different if there exists an index i such that the i-th entries of the two permutations form an edge in G. We bound or determine the maximum size of a family of pairwise G-different permutations for various graphs G. We show that for all balanced bipartite graphs G of order n with minimum degree n/2 - o(n), the maximum number of pairwise G-different permutations of the vertices of G is 2^(1-o(1))n. We also present examples of bipartite graphs G with maximum degree O(log n) that have this property. We explore the problem of bounding the maximum size of a family of pairwise graph-different permutations when an unlimited number of disjoint vertices is added to a given graph. We determine this exact value for the graph of 2 disjoint edges, and present some asymptotic bounds relating to this value for graphs consisting of the union of n/2 disjoint edges.§ INTRODUCTION For any graph G, let two permutations of the vertices of G be G-different if there exists some index i such that the i-th entries of the two permutations form an edge in G. Let F(G) be the maximum size of a family of pairwise G-different permutations of the vertices of G. The value of F(G) has been studied for many graphs G. One of the most studied such graphs is the path on n vertices P_n (pairs of P_n-different permutations are also called colliding permutations in <cit.>). Körner and Malvenuto <cit.> conjectured that F(P_n) = \binom{n}{⌊ n/2 ⌋}. The authors' results implied that F(G) ≤ \binom{n}{⌊ n/2 ⌋} for all n-vertex balanced bipartite graphs G, and F(K_⌊ n/2 ⌋, ⌈ n/2 ⌉) = \binom{n}{⌊ n/2 ⌋}, where K_⌊ n/2 ⌋, ⌈ n/2 ⌉ is the complete balanced bipartite graph on n vertices. The current asymptotic bounds on F(P_n) stand at 1.81 ≤ lim_n →∞ (F(P_n))^1/n ≤ 2; the lower bound was shown in <cit.>. It is conjectured that F(P_n) = F(K_⌊ n/2 ⌋, ⌈ n/2 ⌉), which is surprising; the complete balanced bipartite graph has many more edges than the path, which is one of the sparsest connected balanced bipartite graphs. Therefore we investigate F(G) for balanced bipartite graphs G denser than the path but less dense than the complete balanced bipartite graph. In this paper, we present new bounds on F(G) for various bipartite graphs G, thereby potentially making progress towards determining this value for the path. We show that for all dense enough n-vertex balanced bipartite graphs G, F(G) is near F(K_⌊ n/2 ⌋, ⌈ n/2 ⌉). We also present a smaller family of much sparser bipartite graphs, which have average degree O(log n), for which this growth holds. In comparison, the path graph has average degree approximately 2. We investigate the properties of families of pairwise graph-different permutations where arbitrarily many disjoint vertices are added to a graph. We develop new methods for bounding this quantity for the matching graph and determine its exact value for the 4-vertex matching graph (the graph of 2 independent edges). In related work, Körner, Malvenuto, and Simonyi <cit.> bounded F(G) for various graphs G with arbitrarily many isolated vertices, and completely determined this value for stars. Cohen and Malvenuto <cit.> presented bounds on F(C_n), where C_n denotes the n-vertex cycle. Their bounds are similar to the current bounds on F(P_n).
Körner, Simonyi, and Sinaimeri <cit.> investigated distance graphs, as well as specific graphs G with n vertices such that F(G) does not grow exponentially with n, in contrast to the majority of the results in this field. Frankl and Deza <cit.> looked at a slightly different problem, in which they bounded the maximum number of pairwise t-intersecting permutations, where two permutations are t-intersecting if they share t common positions. The organization of this paper is as follows. In Section <ref>, we present classes of balanced bipartite graphs for which F is near the upper bound given in (<ref>). In Section <ref>, we investigate the properties of families of pairwise matching-different permutations. We present implications and potential future extensions of our work in Section <ref>.§ DENSE BIPARTITE GRAPHS: LOWER BOUNDS In Section <ref>, we present lower bounds on F for n-vertex bipartite graphs with three different maximum degrees of the bipartite complement; namely, 1, any positive constant, and o(n). In Section <ref>, we present lower bounds for the graph consisting of the union of small disjoint balanced bipartite graphs. §.§ Bipartite Graphs with Specified Maximum Degree of Complement It was shown in <cit.> that F(P_n) ≤ \binom{n}{⌊ n/2 ⌋}; from their proof it immediately follows that F(K_a, n-a) = \binom{n}{a}. The following trivial lemma shows that this upper bound applies to all bipartite graphs. If G is a subgraph of H, then F(G) ≤ F(H). Any pair of G-different permutations must also be H-different by definition; therefore any family of pairwise G-different permutations is also pairwise H-different. If G is a subgraph of K_a,n-a, then F(G) ≤ \binom{n}{a}. Because it is conjectured that the upper bound in Corollary <ref> is tight for the n-vertex path, it is natural to try to show that this bound is tight for other classes of non-complete bipartite graphs. We present such a class of graphs in the following result. Let G(n, a) be the graph obtained by removing a maximal matching from K_a,n-a. (By this definition, G(n, n/2) is the crown graph on n vertices.) We use induction on n and a to determine F(G(n,a)) for all n and a in the below theorem. For all nonnegative integers a ≤ n, F(G(n,a)) = 1 if n < 3, and F(G(n,a)) = \binom{n}{a} if n ≥ 3. If n < 3, there are at most 2 vertices in G(n, a), so the graph does not have any edges by definition. Therefore no two permutations are G(n,a)-different, so F(G(n,a)) = 1. We now assume that n ≥ 3. It suffices to show that F(G(n,a)) ≥ \binom{n}{a}, as F(G(n,a)) ≤ \binom{n}{a} by Corollary <ref>. We prove the result by induction. For the base case, note that for any nonnegative integer n, F(G(n,0)) = F(G(n,n)) = 1 = \binom{n}{0} = \binom{n}{n}. This is because G(n,0) and G(n,n) both have no edges, so no two permutations are G(n,0)-different or G(n,n)-different. Additionally, F(G(3,1)) = F(G(3,2)) = 3 = \binom{3}{1} = \binom{3}{2}. This is because G(3,1) and G(3,2) each have 3 vertices and 1 edge, so it suffices to show that F(H) ≥ 3, where H is a graph with vertices labeled 1,2,3 and with an edge between 1 and 2. The 3 permutations (1, 2, 3), (3, 1, 2), (2, 3, 1) are pairwise H-different, so F(H) ≥ 3, and therefore F(H) = 3. For the inductive step, assume n > 3, 0 < a < n, and F(G(n-1, d)) = \binom{n-1}{d} for all 0 ≤ d ≤ n-1. It follows that G(n,a) is not an empty graph, so let x and y be vertices in the first and second subsets of G(n,a) respectively such that there is an edge between x and y. If x is removed from G(n,a), the resulting graph is either G(n-1,a-1) or a supergraph of G(n-1,a-1).
Likewise, if y is removed from G(n,a), the resulting graph is either G(n-1,a) or a supergraph of G(n-1,a). Then by the inductive hypothesis, there exists a family of at least F(G(n-1,a-1)) permutations of V(G(n,a)) - {x} that are pairwise G(n,a)-different, and there exists a family of at least F(G(n-1,a)) permutations of V(G(n,a)) - {y} that are pairwise G(n,a)-different; let these families be ℱ_x and ℱ_y respectively. Let ℱ be the family that consists of the union of x concatenated to all elements of ℱ_x and y concatenated to all elements of ℱ_y. Then ℱ is pairwise G(n,a)-different, so F(G(n,a)) ≥ |ℱ| = |ℱ_x| + |ℱ_y| = F(G(n-1,a-1)) + F(G(n-1,a)). By this induction, F(G(n,a)) = \binom{n}{a} for n ≥ 3, as \binom{n}{a} = \binom{n-1}{a-1} + \binom{n-1}{a} = F(G(n-1,a-1)) + F(G(n-1,a)). Therefore for all n ≥ 3, there exist non-complete bipartite graphs on n vertices that are subgraphs of K_a,n-a for which the upper bound of \binom{n}{a} is exactly equal to F. However, the graphs G(n,a) considered in the above theorem are such that the maximum degrees of their bipartite complements are 1. As the path is much sparser, we want to extend this result to apply to graphs with larger maximum bipartite complement degree. We make the following definition in order to consider such graphs. Let F(n, a, Δ) be the minimum value of F(G) over all n-vertex bipartite graphs G that are subgraphs of K_a,n-a, such that the maximum degree of the bipartite complement of G is Δ. We can now generalize Theorem <ref> as follows. For all nonnegative integers n, a, and Δ such that n ≥ 2 Δ and Δ ≤ a ≤ n - Δ, F(n, a, Δ) ≥ \binom{n - 2Δ}{a - Δ}. We show the result by induction on n and a, just as in Theorem <ref>. For the base case, it suffices to note the trivial observation that F(n, Δ, Δ) ≥ 1 and F(n, n - Δ, Δ) ≥ 1 for all n ≥ 2 Δ. For the inductive step, let n > 2 Δ. Assume that for all Δ ≤ d ≤ n - 1 - Δ, F(n-1, d, Δ) ≥ \binom{n - 1 - 2Δ}{d - Δ}. It remains to be shown that for any Δ < a < n - Δ, F(n, a, Δ) ≥ \binom{n - 2Δ}{a - Δ}. Let G be any bipartite graph with n vertices that is a subgraph of K_a,n-a, such that the maximum degree of the bipartite complement of G is Δ. We show that F(G) ≥ F(n-1, a-1, Δ) + F(n-1, a, Δ), as then it would follow that F(G) ≥ \binom{n - 1 - 2Δ}{a - 1 - Δ} + \binom{n - 1 - 2Δ}{a - Δ} = \binom{n - 2Δ}{a - Δ} by the inductive hypothesis. First, note that G has more than 2 Δ vertices, so it must have at least one subset with more than Δ vertices, and therefore, because Δ is the maximum degree of the bipartite complement graph, G must have at least one edge. Let this edge connect vertices x and y in the first and second subsets respectively. Removing a vertex from a graph cannot increase the maximum degree of the complement graph. Therefore, F(G - {x}) ≥ F(n-1, a-1, Δ) and F(G - {y}) ≥ F(n-1, a, Δ). It follows by definition that there exist pairwise G-different families ℱ_x and ℱ_y of permutations of V(G) - {x} and V(G) - {y} respectively such that |ℱ_x| ≥ F(n-1, a-1, Δ) and |ℱ_y| ≥ F(n-1, a, Δ). Let ℱ be the family of permutations of V(G) consisting of all permutations in ℱ_x concatenated to x and all permutations in ℱ_y concatenated to y. Then ℱ is pairwise G-different by construction, so F(G) ≥ |ℱ| = F(n-1, a-1, Δ) + F(n-1, a, Δ).
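For small instances, the equality F(G(n,a)) = \binom{n}{a} of the theorem above can be verified by exhaustive search. The sketch below (our illustration; it assumes networkx for the clique enumeration) computes F(G) exactly by building the graph whose vertices are all n! permutations and whose edges are the G-different pairs; for G(4,2), obtained from K_{2,2} on parts {1,2} and {3,4} by removing the matching {(1,3),(2,4)}, it returns \binom{4}{2} = 6.

```python
from itertools import permutations
import networkx as nx

def brute_force_F(n, edges):
    """F(G) for a graph on vertices 1..n with the given edge set."""
    eset = {frozenset(e) for e in edges}
    def different(p, q):
        return any(frozenset((a, b)) in eset for a, b in zip(p, q))
    perms = list(permutations(range(1, n + 1)))
    H = nx.Graph()
    H.add_nodes_from(perms)
    H.add_edges_from((p, q) for i, p in enumerate(perms)
                     for q in perms[i + 1:] if different(p, q))
    # F(G) is the clique number of H
    return max(len(c) for c in nx.find_cliques(H))

print(brute_force_F(4, [(1, 4), (2, 3)]))   # G(4,2): expected 6
```

The same routine, run on other small G(n,a), reproduces the binomial values above; the factorial growth of the search space of course restricts it to very small n.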
For all nonnegative integers n and Δ such that n ≥ 2 Δ, F(n, ⌊ n/2 ⌋, Δ) ≥ 2^{-2Δ} \binom{n}{⌊ n/2 ⌋}. It is easy to see by expanding the binomial coefficient that F(n, ⌊ n/2 ⌋, Δ) ≥ \binom{n - 2Δ}{⌊ n/2 ⌋ - Δ} ≥ 2^{-2Δ} \binom{n}{⌊ n/2 ⌋}. Although the lower bound in Theorem <ref> does not quite reach the upper bound given in Corollary <ref>, it comes within a constant factor of the upper bound for balanced bipartite graphs when Δ is a constant, as shown in Corollary <ref>. This constant factor is due to the difficulty of finding sufficient base cases for the induction on n and a for large Δ. Although for many Δ better base cases are easily found (as in Δ = 1), it is difficult to find general base cases for all Δ. Because lim_n→∞ (1/n) log_2 \binom{n}{⌊ n/2 ⌋} = 1 by Stirling's formula, the function F(K_⌊ n/2 ⌋, ⌈ n/2 ⌉) = \binom{n}{⌊ n/2 ⌋} grows exponentially on the order of 2^n. We therefore remain primarily interested in showing that F grows on the order of 2^n for various classes of balanced bipartite graphs, and thereby showing that the upper bound on F of \binom{n}{⌊ n/2 ⌋} is met asymptotically. We now use Corollary <ref> to show that F(n, ⌊ n/2 ⌋, Δ) grows on the order of 2^n if Δ = o(n). lim_n→∞ (1/n) log_2 F(n, ⌊ n/2 ⌋, o(n)) = 1. First note that lim_n→∞ (1/n) log_2 F(n, ⌊ n/2 ⌋, o(n)) ≤ lim_n→∞ (1/n) log_2 F(K_⌊ n/2 ⌋, ⌈ n/2 ⌉) = 1 by Stirling's formula. It therefore suffices to show the opposite inequality to prove a lower bound of 1. By Corollary <ref>, lim_n→∞ (1/n) log_2 F(n, ⌊ n/2 ⌋, o(n)) ≥ lim_n→∞ (1/n) log_2 ( 2^{-2·o(n)} \binom{n}{⌊ n/2 ⌋}) ≥ lim_n→∞ (1/n) log_2 \binom{n}{⌊ n/2 ⌋} + lim_n→∞ (1/n) log_2 2^{-2·o(n)} = 1 - lim_n→∞ 2·o(n)/n = 1. This theorem is particularly interesting because it presents a very large class of graphs such that any graph G in this class has the property that F(G) is near 2^n. However, as Δ = o(n), these graphs are relatively dense. In the next section we present specific but much sparser graphs G for which F(G) is near 2^n. §.§ Union of Small Dense Balanced Bipartite Graphs In this section we show that F grows on the order of 2^n for graphs consisting of the union of small complete balanced bipartite graphs. We first present the following well-known lemma, which provides a method for placing lower bounds on F(G) for graphs G composed of disjoint subgraphs. An equivalent result is shown in <cit.>, but we present the proof as it is related to future proofs in this paper. Let G be the union of disjoint graphs G_1 and G_2. Then F(G) ≥ F(G_1) · F(G_2). Let ℱ_1 = {π_1, …, π_F(G_1)} and ℱ_2 = {σ_1, …, σ_F(G_2)} be families of pairwise G_1-different and G_2-different permutations respectively of maximum size, so that |ℱ_1| = F(G_1) and |ℱ_2| = F(G_2). Then let ℱ be the family of permutations consisting of π_i concatenated to σ_j for all 1 ≤ i ≤ F(G_1) and 1 ≤ j ≤ F(G_2). Then ℱ is pairwise G-different, as for any two permutations π_i_1σ_j_1 and π_i_2σ_j_2, if i_1 ≠ i_2, π_i_1 and π_i_2 are G-different; otherwise j_1 ≠ j_2 and σ_j_1 and σ_j_2 are G-different. Therefore F(G) ≥ |ℱ| = F(G_1) · F(G_2). Intuitively, if F(G_1) ≈ 2^|V(G_1)| and F(G_2) ≈ 2^|V(G_2)|, then F(G_1 + G_2) ≈ 2^{|V(G_1)| + |V(G_2)|} = 2^|V(G_1 + G_2)| by Lemma <ref>. Because we want to find bipartite graphs G for which F(G) ≈ 2^|V(G)|, this idea is very useful, and forms the basis of the theorem below. Let B(n,k(n)) be the balanced bipartite graph of order n consisting of the union of k(n) disjoint balanced complete bipartite graphs, each of order ⌊ n/k(n) ⌋ or ⌈ n/k(n) ⌉.
If k(n) = O(n/log_2 n), then lim_n→∞ (1/n) log_2 F(B(n,k(n))) = 1. For some given n, let k = k(n), B = B(n,k(n)), and let the k disjoint subgraphs of B be B_1, …, B_k with orders n_1, …, n_k respectively. By Lemma <ref>, F(B) ≥ ∏_i=1^k F(B_i) = ∏_i=1^k \binom{n_i}{⌊ n_i/2 ⌋}. The right side can be expanded by Stirling's formula, which is easily applied to show that there exists a positive constant l for which \binom{x}{⌊ x/2 ⌋} ≥ l · 2^x/√(x) holds for all positive integers x. (The actual value of l is not relevant to us, but it is easily bounded.) Then, as ∑_i=1^k n_i = n, and by the AM-GM inequality, F(B) ≥ ∏_i=1^k \binom{n_i}{⌊ n_i/2 ⌋} ≥ ∏_i=1^k l · 2^{n_i}/√(n_i) = l^k · 2^n/√(∏_i=1^k n_i) ≥ l^k · 2^n/√((n/k)^k) = l^k · 2^n/2^{(k/2)log_2(n/k)} = l^k · 2^{n - (k/2)log_2(n/k)}. Therefore F(B(n,k(n))) ≥ l^{k(n)} · 2^{n - (k(n)/2)log_2(n/k(n))}. If k(n) = O(n / log_2 n), it is easily verifiable from (<ref>) that lim_n→∞ (1/n) log_2 F(B(n,k(n))) ≥ 1. The opposite inequality is trivial as B(n,k(n)) is bipartite. Note that the proof of Theorem <ref> holds even if the k(n) disjoint balanced complete bipartite graphs have different orders; however, the union is sparsest when the disjoint subgraphs are near in order. Theorem <ref> provides an n-vertex bipartite graph B(n,k(n)) with maximum degree O(log_2 n) such that F(B(n,k(n))) grows on the order of 2^n, or more formally, such that F(B(n,k(n))) = 2^(1-o(1))n. This graph is the sparsest balanced bipartite graph we currently know with this property. § FAMILIES OF PAIRWISE MATCHING-DIFFERENT PERMUTATIONS In Section <ref>, we primarily dealt with relatively dense bipartite graphs G such that F(G) was near 2^n. Now we examine very sparse bipartite graphs. The most prominently studied of these is the path; improvements on the lower bound on F(P_n) were made in <cit.>. In this paper we investigate F for the matching graph on n vertices, which we denote M(n). (We will assume n to be even whenever referencing M(n) in this section.) As the matching is a subgraph of the path, any lower bounds on F(M(n)) would also apply to the n-vertex path. Additionally, the matching consists of the union of n/2 disjoint edges, giving it a special structure relating to Lemma <ref>. We first generalize the function F and show how the generalization is related to the original function. Let F_b (G) be the maximum size of a family of pairwise G-different permutations of the vertices of G with an additional b blank spaces. Here a blank space can be thought of as an isolated vertex added to G. For example, consider the family ℱ shown below. (1, 2, *), (*, 1, 2), (2, *, 1). We say ℱ is a family of 3 pairwise M(2)-different permutations of the vertices of M(2) with 1 blank space; the blank space in each permutation is denoted by '*' and simply serves as a placeholder. By this definition, it is clear that F_b (G) ≤ F_c (G) if b ≤ c for any graph G. We now extend Definition <ref> to account for families of permutations with unlimited blank spaces; an equivalent definition was made in <cit.>. For any graph G, assign each element of V(G) to a unique natural number. Let two infinite permutations of ℕ be G-different if at some position their corresponding elements are both assigned to vertices in G and form an edge in G. Then let F_∞(G) be the maximum size of a family of pairwise G-different infinite permutations of ℕ. Körner, Malvenuto, and Simonyi <cit.> showed that for any graph G on n vertices, F_∞ (G) ≤ (χ(G))^n, where χ(G) denotes the chromatic number of G.
Therefore, for graphs with finitely many vertices, F_∞ (G) is finite, so it follows that there exists a sufficiently large constant b for which F_b (G) = F_∞ (G). (For example, it is easy to verify that b = n (χ(G))^n satisfies this equation.) We can therefore think of F_∞ (G) as the maximum size of a family of pairwise G-different permutations of the vertices of G with arbitrarily many blank spaces, rather than in terms of infinite permutations of the natural numbers.If G(n) is a graph defined for all positive integer n, then letρ_b(G) = lim sup_n →∞1/nlog_2 F_b(G(n)),and let ρ(G) = ρ_0(G). (In this paper G(n) will usually be the first n vertices of an infinite graph, as is the case with M(n).) Therefore ρ_b (G) measures the asymptotic behavior of F_b(G(n)). Although it may seem that F_∞ (G(n)) should be much larger than F(G(n)), the following two lemmas show that for certain graphs G(n) such as the matching graph M(n), ρ(G) and ρ_∞ (G) are equal. Very similar results were shown in <cit.> for the path; we present a generalization of these proofs below. If ℱ is a family of permutations of the vertices of some graph G with any number of blank spaces (or with no blank spaces), let H_ℱ, G be the graph whose vertices are the permutations in ℱ and whose edges are all pairs of permutations σ, π∈ℱ such that σ and π are G-different.Note that if ℱ is pairwise G-different, then H_ℱ, G is complete, or equivalently, α(H_ℱ, G) = 1.If ℱ is a family of G_0-different permutations of the vertices of G_0 with unlimited blank spaces, then ρ(G) ≥1/|V(G_0)|log_2 |ℱ|, where G(n) consists of the union of n/|V(G_0)| copies of G_0, for all n which are divisible by |V(G_0)|.We omit the proof of this lemma, which is similar to the proof of Lemma <ref>. A nearly identical result is shown in <cit.>, which is specific to the path but easily generalizable.Let G(n) be a graph of order n defined for all positive n such that G(n_1) + G(n_2) is a subgraph of G(n_1 + n_2). If either ρ(G) or ρ_∞ (G) exists (that is, either of their limits exist and is not ∞), then both values exist and ρ(G) = ρ_∞ (G). Clearly if ρ_∞ (G) exists, then ρ(G) exists and ρ_∞ (G) ≥ρ(G), as F_∞ (G(n)) ≥ F(G(n)) for all n. We now show by contradiction that if ρ(G) exists, then ρ_∞ (G) exists and ρ(G) ≥ρ_∞ (G). Assume that ρ(G) exists and is not ∞, but that ρ_∞ (G) > ρ(G) or that ρ_∞ (G) = ∞. Then, in both of these cases, there exists an N such that there is a family ℱ of pairwise G(N)-different permutations of V(G(N)) with unlimited blanks, where 1/Nlog_2 |ℱ| > ρ(G). However, by assumption the union of k copies of G(N) is a subgraph of G(k N) for all positive integers k. Therefore, ρ(G) ≥1/Nlog_2 |ℱ| by Lemma <ref>, which is a contradiction. Lemma <ref> shows that ρ_∞ (M) = ρ(M). As M(n) is bipartite, ρ(M) ≤ 1, so ρ_∞ (M) ≤ 1. Therefore F_∞ (M(n)) grows exponentially on the order of at most 2^n. This bound was improved in <cit.>, where it was shown that √(3)^n ≤ F_∞ (M(n)) ≤ 2^n. The upper bound of 2^n was shown as part of the more general result that F_∞ (G) ≤ (χ(G))^|V(G)|, where χ(G) denotes the chromatic number of G. We use a different approach for bounding F_∞ (M(n)); we first bound α(H_ℱ, M(2)) for families ℱ of permutations of the vertices of M(2), then we use this result to bound α(H_ℱ, M(n)) for larger n. This approach helps determine F_∞ (M(n)) for small n and provides a slightly stronger upper bound on F_∞ (M(n)) for all n (better than 2^n by a constant factor). 
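As a quick illustration of the blank-space machinery used throughout this section, a few lines (a hedged sketch of ours, not taken from the references) verify that the three-permutation family with one blank space displayed earlier is indeed pairwise M(2)-different:

```python
from itertools import combinations

def m2_different(p, q, edges={frozenset((1, 2))}):
    # blanks ('*') never contribute to a difference
    return any(frozenset((a, b)) in edges
               for a, b in zip(p, q) if a != '*' and b != '*')

family = [(1, 2, '*'), ('*', 1, 2), (2, '*', 1)]
print(all(m2_different(p, q) for p, q in combinations(family, 2)))  # True
```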
We also present some interesting constructions of families ℱ with relatively good upper bounds on α(H_ℱ, M(n)); these results mark progress towards potentially improving the lower bounds on F_∞ (M(n)). Let ℱ be a family of permutations of the vertices of M(2) with b blank spaces, and let c = b + 2 be the length of the permutations in ℱ. Then α(H_ℱ, M(2)) ≥ (2^{c-2}/(2^c - 2)) · |ℱ|. Assume the vertices of M(2) are labeled 1 and 2, so that all permutations in ℱ consist of 1, 2, and c-2 blanks. We first observe that an independent set in H_ℱ, M(2) cannot contain permutations π and σ such that π(j) = 1 and σ(j) = 2 for some position j. Therefore, an independent set I in H_ℱ, M(2) is characterized by a string s of 1's and 2's of length c. A permutation π ∈ ℱ is in I only if π(j) = s(j) for every position j at which π does not have a blank space. There are 2^c possible labelings of the c positions with 1's and 2's, but 2 of these (all 1's and all 2's) always correspond to empty independent sets. Therefore let I_1, …, I_{2^c - 2} be the 2^c - 2 independent sets of maximal size in H_ℱ, M(2) corresponding to strings of 1's and 2's of length c. Each permutation π ∈ ℱ belongs to exactly 2^{c-2} of these independent sets, as each of the c-2 blank spaces in π may be labeled 1 or 2 in the string s. Therefore ∑_{i=1}^{2^c - 2} |I_i| = 2^{c-2} · |ℱ|. It follows by the pigeonhole principle that there exists some k for which |I_k| ≥ (2^{c-2}/(2^c - 2)) · |ℱ|. α(H_ℱ, M(2)) > (1/4) · |ℱ|. The inequality in Lemma <ref> is only significantly stronger than that in Corollary <ref> for families of permutations that are very short in length. It is therefore desirable to be able to consider families of permutations with as few blank spaces as possible. The following lemma shows that families of permutations with sufficiently many blank spaces can be condensed to equivalent families with fewer blank spaces. Let ℱ = {π_1, π_2, …, π_p} be a family of p permutations of the vertices of M(2) with b blank spaces, and let c = b + 2 be the length of each permutation in ℱ. If p < \binom{c}{2}, then there exists a family ℱ' = {π_1', π_2', …, π_p'} of p permutations of the vertices of M(2) with b-1 blank spaces such that H_ℱ, M(2) is a subgraph of H_ℱ', M(2). There are \binom{c}{2} pairs of positions j_1, j_2 in the permutations in ℱ. If p = |ℱ| < \binom{c}{2}, then by the pigeonhole principle there must be some pair of positions j_1, j_2 (1 ≤ j_1 < j_2 ≤ c) such that each permutation in ℱ has a blank space in at least one of these positions. Then for each π_i, let π_i' be the permutation consisting of π_i with the entry at position j_2 removed, and let π_i'(j_1) take on the value of whichever of π_i(j_1) or π_i(j_2) is not a blank space. In other words, position j_2 was merged into position j_1 for each permutation π_i to obtain π_i'. Then ℱ' = {π_1', π_2', …, π_p'} satisfies the desired properties. We now apply Lemma <ref> and Lemma <ref> to determine the value of F_∞ (M(4)) and to improve the existing upper bound on F_∞ (M(n)). F_∞ (M(4)) = 9. We show that a family of 10 permutations of V(M(4)) with unlimited blank spaces cannot be pairwise M(4)-different. Specifically, it suffices to show that there is no family ℱ_1 of 10 permutations of the vertices of M(2) with unlimited blanks such that α(H_ℱ_1, M(2)) ≤ 3 and |E(H_ℱ_1, M(2))| > 22. To see this, label the vertices on the two edges in M(4) (1, 2) and (3, 4) respectively.
Then, for any family ℱ = {π_1, …, π_10} of 10 permutations of the vertices of M(4) with unlimited blanks, let ℱ_1 = {σ_1, 1, …, σ_1, 10} be the family ℱ with all 3's and 4's replaced by blank spaces, and let ℱ_2 = {σ_2, 1, …, σ_2, 10} be the family ℱ with all 1's and 2's replaced by blank spaces. By this definition, (π_i_1, π_i_2) ∈ E(H_ℱ, M(4)) if and only if (σ_1, i_1, σ_1, i_2) ∈ E(H_ℱ_1, M(2)) or (σ_2, i_1, σ_2, i_2) ∈ E(H_ℱ_2, M(2)). Therefore, if S ⊆ {1, …, 10} and if {σ_1, i : i ∈ S} is an independent set in H_ℱ_1, M(2), then {σ_2, i : i ∈ S} must be a clique in H_ℱ_2, M(2) in order for H_ℱ, M(4) to be complete; the same applies for independent sets in H_ℱ_2, M(2) and cliques in H_ℱ_1, M(2). Because the largest clique in both H_ℱ_1, M(2) and in H_ℱ_2, M(2) has order at most F_∞ (M(2)) = 3, the independence number of both graphs must be 3 (it cannot be less than 3 by Lemma <ref>). Furthermore, the complete graph on 10 vertices has 45 edges, so either H_ℱ_1, M(2) or H_ℱ_2, M(2) must have at least 23 edges in order for H_ℱ, M(4) to be complete. As \binom{5}{2} = 10, it is only necessary to consider families with at most 3 blank spaces by Lemma <ref>. The case of 0 blanks is trivial; for families ℱ_1 of permutations of the vertices of M(2) with 1 blank space, note that by Lemma <ref>, α(H_ℱ_1, M(2)) ≥ (2^{3-2}/(2^3 - 2)) · 10 = 10/3 > 3. If ℱ_1 has 3 blank spaces, then the permutations have length 5, so each of the 10 pairs of positions j_1, j_2 must correspond to the 1 and the 2 of some permutation in ℱ_1; otherwise the family could be condensed by Lemma <ref>. Therefore for each 1 ≤ j ≤ 5, exactly 4 permutations in ℱ_1 do not have a blank space at position j. Among these 4 permutations, there are at most 2 · 2 = 4 pairs of M(2)-different permutations (π_i_1, π_i_2) which correspond to edges in H_ℱ_1, M(2). Therefore H_ℱ_1, M(2) has at most 5 · 4 = 20 edges. The only remaining case is when the permutations have 2 blank spaces. We used a brute force computer search for this case, and found that no family of 10 permutations of the vertices of M(2) with 2 blanks has independence number 3. F_∞ (M(n)) < 9 · 2^{n-4} for even n > 4. It suffices to show that for even n > 4, F_∞ (M(n)) < 4 · F_∞ (M(n-2)). To show this, we use the same method of separating out the independent edges that we used in Theorem <ref>. Label the vertices of the n/2 edges in M(n) (1,2), (3,4), …, (n-1,n). Let ℱ be some family of pairwise M(n)-different permutations of the vertices of M(n) with unlimited blank spaces. Let ℱ_1 be the family ℱ with all numbers other than 1's and 2's replaced with blank spaces, and let ℱ_2 be the family ℱ with all 1's and 2's replaced with blank spaces. (By this definition, ℱ_1 contains permutations of the vertices of M(2) and ℱ_2 contains permutations of the vertices of M(n-2)). Because any independent set in H_ℱ_1, M(2) must correspond to a clique of equal size in H_ℱ_2, M(n-2), the clique number of H_ℱ_2, M(n-2) must be at least α(H_ℱ_1, M(2)) > (1/4) · |ℱ|. By definition, F_∞ (M(n-2)) is an upper bound on the clique number of H_ℱ_2, M(n-2), so F_∞ (M(n-2)) > (1/4) · |ℱ|. As this inequality holds for all pairwise M(n)-different families ℱ of permutations of the vertices of M(n) with unlimited blanks, it must be that F_∞ (M(n-2)) > (1/4) · F_∞ (M(n)).
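A sketch of the kind of exhaustive check cited in the last step of the proof of F_∞(M(4)) = 9 is given below (our reconstruction, restricted to distinct arrangements and assuming networkx; the independence number is computed as the clique number of the complement graph). Among the \binom{12}{10} = 66 families of 10 distinct length-4 arrangements of (1, 2, *, *), none yields an M(2)-difference graph with independence number 3.

```python
from itertools import permutations, combinations
import networkx as nx

def different(p, q):
    return any({a, b} == {1, 2} for a, b in zip(p, q))

arrangements = sorted(set(permutations((1, 2, '*', '*'))), key=str)  # 12 strings
alphas = []
for fam in combinations(arrangements, 10):                           # 66 families
    H = nx.Graph()
    H.add_nodes_from(range(10))
    H.add_edges_from((i, j) for i, j in combinations(range(10), 2)
                     if different(fam[i], fam[j]))
    # independence number of H = clique number of its complement
    alphas.append(max(len(c) for c in nx.find_cliques(nx.complement(H))))
print(min(alphas))   # expected > 3, consistent with the proof
```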
To conclude this section we present results which were motivated by the problem of improving the lower bound on F_∞ (M(n)). We first observe that there exist families ℱ of permutations of the vertices of M(2) such that α(H_ℱ, M(2)) is very close to (1/4) · |ℱ|. Specifically, for any integer c ≥ 2, let 𝒜_c be the family of the c(c-1) distinct permutations of the vertices of M(2) with c-2 blank spaces. Let s be a string of 1's and 2's of length c characterizing an independent set I in H_𝒜_c, M(2). If s has x 1's and y 2's, then |I| = xy by the definition of 𝒜_c. Therefore the size of the largest independent set in H_𝒜_c, M(2) is α(H_𝒜_c, M(2)) = ⌊ c/2 ⌋ · ⌈ c/2 ⌉. It follows that lim_c→∞ α(H_𝒜_c, M(2))/|𝒜_c| = lim_c→∞ ⌊ c/2 ⌋ · ⌈ c/2 ⌉/(c(c-1)) = 1/4. This construction shows that the bound in Lemma <ref> is nearly optimal. We now apply the ideas in Lemma <ref> and in (<ref>) to get an interesting result. Let ℱ be a family of p pairwise M(n)-different permutations of the vertices of M(n) with unlimited blank spaces, and once again label the vertices of the edges in M(n) (1,2), (3,4), …, (n-1, n). Let ℰ_2, ℰ_4, …, ℰ_n be defined such that ℰ_k = {σ_k, 1, …, σ_k, p} consists of the family ℱ with all non-blank entries other than k-1 and k replaced by blank spaces in each permutation. Let ℱ_0 = {π_0, 1, …, π_0, p} be a family of p empty permutations (or permutations of the vertices of M(0)). It follows that H_ℱ_0, M(0) is empty and α(H_ℱ_0, M(0)) = p. Then let ℱ_2 = {π_2, 1, …, π_2, p} be defined so that π_2, j consists of π_0, j concatenated to σ_2, j, and in general, let ℱ_k = {π_k, 1, …, π_k, p} be such that π_k, j consists of π_k-2, j concatenated to σ_k, j. (This definition is such that H_ℱ_n, M(n) = H_ℱ, M(n).) Note that for any positive even k and for any indices i_1 and i_2, (π_k, i_1, π_k, i_2) ∈ E(H_ℱ_k, M(k)) if and only if either (π_k-2, i_1, π_k-2, i_2) ∈ E(H_ℱ_k-2, M(k-2)) or (σ_k, i_1, σ_k, i_2) ∈ E(H_ℰ_k, M(2)). It follows by Lemma <ref> that for any independent set I in H_ℱ_k-2, M(k-2), there exists an independent set I' in H_ℱ_k, M(k), which is a subset of I, such that |I'| > (1/4) · |I|. Therefore α(H_ℱ_k, M(k)) > (1/4) · α(H_ℱ_k-2, M(k-2)), so by induction α(H_ℱ_k, M(k)) > p/2^k for all positive even k. As shown in (<ref>), for k = 2 there exist families ℱ_2 (namely, 𝒜_c for large c) such that α(H_ℱ_2, M(2))/|ℱ_2| is arbitrarily close to 1/2^k = 1/4. However, if p ≈ 2^n, then α(H_ℱ_k, M(k)) must be approximately p/2^k for all 1 ≤ k ≤ n, as α(H_ℱ_n, M(n)) = 1. More generally, if ρ_∞(M) ≥ (1/2) log_2 a for some constant a, then there must exist families ℰ_2, …, ℰ_n such that their corresponding families ℱ_0, …, ℱ_n satisfy α(H_ℱ_k, M(k)) ≈ p/a^{k/2} and n ≈ log_√(a) p (loosely speaking). Below, we present a construction which partially answers this question by providing families ℰ_2, …, ℰ_l such that for certain a > 3, α(H_ℱ_k, M(k)) is within a constant factor of p/a^{k/2} for 0 ≤ k ≤ l, where l grows logarithmically as a function of p (but slower than log_√(a) p). We later explain how this result could potentially be extended to improve the lower bound on ρ_∞ (M). Let 𝒜 be some family of permutations of the vertices of M(2) with unlimited blank spaces, and let p be some positive integer. Let l = 2 · ⌈ log_|𝒜| p ⌉. Then there exists a family ℱ_l of p permutations of M(l) such that α(H_ℱ_l, M(l)) ≤ P · a^{l/2}, where P is the least power of |𝒜| not less than p and a = α(H_𝒜, M(2))/|𝒜|. Let q = |𝒜| and let 𝒜 = {A_1, A_2, …, A_q}. For even k where 2 ≤ k ≤ l, let ℰ_k = {σ_k, 1, …, σ_k, P} consist of the pattern A_1, …, A_1 (repeated q^{k/2-1} times), A_2, …, A_2 (q^{k/2-1} times), ⋯, A_q, …, A_q (q^{k/2-1} times), with the whole block repeated P/q^{k/2} times. Let ℱ_k = {π_k, 1, …, π_k, P} be defined the same as before: π_k, j = π_k-2, j σ_k, j. It remains to be shown that α(H_ℱ_l, M(l)) = P · a^{l/2}.
We first observe that any independent set with indices I_l of maximum size in H_ℱ_l, M(l) is constructed in the following manner. Let I_0 = {1, …, P} represent the indices of the independent set π_0, 1, …, π_0, P in H_ℱ_0, M(0). For each 2 ≤ k ≤ l, choose some independent set ℬ_k in H_𝒜, M(2) of size α(H_𝒜, M(2)). Then let I_k = {j ∈ I_k-2 : σ_k, j ∈ ℬ_k}. By this construction, α(H_𝒜, M(2)) out of every q elements of I_k-2 will be in I_k. Therefore by induction, as I_k is of maximum size by assumption, α(H_ℱ_k, M(k)) = |I_k| = P · (α(H_𝒜, M(2))/q)^{k/2} = P · a^{k/2} for all even 0 ≤ k ≤ l.§ CONCLUSION In this paper, we develop new methods for bounding the maximum size of a family of pairwise graph-different permutations for various bipartite graphs. For specific non-complete bipartite graphs G with vertex subsets of size a and b, we show that the upper bound on F(G) of F(K_a,b) is tight. We show that if G(n) is any balanced bipartite graph on n vertices with minimum degree n/2 - o(n), then F(G(n)) grows on the same exponential order as F(K_⌊ n/2 ⌋, ⌈ n/2 ⌉) when n →∞. We also show that this growth is achieved for certain much sparser balanced bipartite graphs. We present several new bounds on F_∞ for the matching graph M(n). Specifically, we determine the exact value of F_∞ (M(4)), and improve the upper bound on F_∞ (M(n)). Our new methods and bounds make potential progress towards determining the value of F(P_n).§ ACKNOWLEDGEMENTS We would like to thank Dr. Tanya Khovanova for her helpful comments and suggestions on the paper. We would also like to thank the MIT PRIMES program for providing us with the opportunity to perform this research.
http://arxiv.org/abs/1702.08579v1
{ "authors": [ "Louis Golowich", "Chiheon Kim", "Richard Zhou" ], "categories": [ "math.CO", "05D05" ], "primary_category": "math.CO", "published": "20170227232618", "title": "Maximum Size of a Family of Pairwise Graph-Different Permutations" }
Entanglement and squeezing in continuous-variable systems manuel.gessner@ino.it QSTAR, INO-CNR and LENS, Largo Enrico Fermi 2, I-50125 Firenze, Italy QSTAR, INO-CNR and LENS, Largo Enrico Fermi 2, I-50125 Firenze, Italy QSTAR, INO-CNR and LENS, Largo Enrico Fermi 2, I-50125 Firenze, Italy We introduce a multi-mode squeezing coefficient to characterize entanglement in N-partite continuous-variable systems. The coefficient relates to the squeezing of collective observables in the 2N-dimensional phase space and can be readily extracted from the covariance matrix. Simple extensions further permit revealing entanglement within specific partitions of a multipartite system. Applications with nonlinear observables allow for the detection of non-Gaussian entanglement. § INTRODUCTION Entangled quantum states play a central role in several applications of quantum information theory <cit.>. Their detection and microscopic characterization, however, pose a highly challenging problem, both theoretically and experimentally <cit.>. A rather convenient approach to detect and quantify multipartite entangled states is available for spin systems via the concept of spin squeezing <cit.>. Entanglement criteria based on spin-squeezing coefficients are determined by suitable expectation values and variances of collective spin operators. This renders these criteria experimentally accessible in atomic systems <cit.>, without the need for local measurements of the individual subsystems. Spin-squeezing based entanglement criteria were initially introduced for spin-1/2 particles <cit.> and more recently extended to particles of arbitrary spin <cit.> and systems with a fluctuating number of particles <cit.>. Furthermore, in Ref. <cit.>, spin squeezing has been related to Bell correlations. However, so far, entanglement criteria based on spin squeezing have been developed only for discrete-variable systems. A direct extension for continuous-variable systems is not available since the restriction to collective observables prohibits applications in unbounded Hilbert spaces. In this article, we construct a bosonic multi-mode squeezing coefficient via a combination of locally and collectively measured variances. The additional information provided by local measurements allows us to introduce a generalized squeezing coefficient for arbitrary-dimensional systems <cit.>. This coefficient is able to detect continuous-variable entanglement and is readily determined on the basis of the covariance matrix. A suitable generalization is further able to reveal entanglement in a specific partition of the multipartite system. As illustrated by examples, our coefficient can be interpreted in terms of squeezed observables in the collective 2N-dimensional phase space of the N-partite system. We point out that, in continuous-variable systems, local access to the individual modes is routinely provided, most prominent examples being multi-mode photonic states <cit.> and atomic systems with homodyne measurements <cit.>. Our results provide an experimentally feasible alternative to standard entanglement detection techniques based on uncertainty relations or the partial transposition criterion, which are particularly well-suited for Gaussian states <cit.>.
We show how our approach, as well as other variance-based strategies, can be interpreted as different modifications of Heisenberg's uncertainty relation, which are only satisfied by separable states. The squeezing coefficient presented here is furthermore directly related to the Fisher information <cit.>. In situations where the Fisher information can be extracted experimentally, this relation can be exploited to devise a sharper entanglement criterion <cit.>, which is expected to be relevant for the detection of non-Gaussian states.§ ENTANGLEMENT DETECTION WITH LOCAL VARIANCES A unified approach to entanglement detection in arbitrary-dimensional multipartite systems was recently proposed in <cit.>. This approach is based on a combination of the Fisher information with suitable local variances. In the following we will briefly review the general ideas. Later in this article, this technique will be applied in the context of continuous-variable systems to derive a bosonic squeezing coefficient with the ability to detect mode entanglement. In an N-partite Hilbert space ℋ=ℋ_1⊗⋯⊗ℋ_N, any separable quantum state, ρ̂_sep=∑_γp_γρ̂^(γ)_1⊗⋯⊗ρ̂^(γ)_N, characterized by probabilities p_γ and local quantum states ρ̂^(γ)_i, must satisfy <cit.> F_Q[ρ̂_sep,Â] ≤4Var(Â)_Π(ρ̂_sep), where Â=∑_i=1^NÂ_i is a sum of arbitrary local operators Â_i that act on the ℋ_i, respectively. We have further introduced the quantum mechanical variance Var(Â)_ρ̂=⟨Â^2⟩_ρ̂ -⟨Â⟩_ρ̂^2 and expectation value ⟨Â⟩_ρ̂=Tr[Âρ̂]. The uncorrelated product state Π(ρ̂)=ρ̂_1⊗⋯⊗ρ̂_N is constructed from the reduced density operators ρ̂_i of ρ̂, i.e., it describes the state ρ̂ after the removal of all correlations between the subsystems ℋ_i, for i=1,…,N. On the left-hand side of Eq. (<ref>) appears the quantum Fisher information F_Q[ρ̂,∑_i=1^N Â_i], which expresses how sensitively the state ρ̂(θ)=e^-i∑_j=1^N Â_jθρ̂ e^i∑_j=1^N Â_jθ changes upon small variations of θ <cit.>. This quantity is compared to the sum of local variances of the Â_i that generate the unitary transformation e^-i∑_j=1^N Â_jθ: The term on the right-hand side can be expressed as Var(∑_i=1^NÂ_i)_Π(ρ̂)=∑_i=1^NVar(Â_i)_ρ̂. The combination of the Fisher information with local variances provides two decisive advantages. First, it renders the separability criterion (<ref>) stricter than those that are based on a state-dependent upper bound for the local variances <cit.>. Indeed, the condition (<ref>) is necessary and sufficient for separability of pure states: for every pure entangled state it is possible to find a set of local operators Â_i such that Eq. (<ref>) is violated <cit.>. Second, the local variances lead to a finite bound for the Fisher information, even in the presence of unbounded operators. This is of particular importance for applications in the context of continuous-variable systems. The Fisher information has been extracted in certain (discrete-variable) experiments with a state-independent method <cit.>. In the present article we focus mostly on simpler entanglement criteria that are available directly from the covariance matrix of the quantum state. To this end, we make use of the lower bound <cit.> F_Q[ρ̂,Â]≥|⟨ [Â,B̂]⟩_ρ̂|^2/Var(B̂)_ρ̂, which holds for arbitrary states ρ̂ and an arbitrary pair of operators Â, B̂. In combination with Eq. (<ref>), we obtain the variance-based separability criterion <cit.> Var(Â)_Π(ρ̂_sep)Var(B̂)_ρ̂_sep≥|⟨ [Â,B̂]⟩_ρ̂_sep|^2/4, where B̂ is an arbitrary operator and Â=∑_i=1^NÂ_i, as before.
Based on the separability condition (<ref>) a family of entanglement witnesses may be formulated in terms of generalized squeezing coefficients <cit.>ξ^2_Â,B̂(ρ̂)=4Var(Â)_Π(ρ̂)Var(B̂)_ρ̂/|⟨ [Â,B̂]⟩_ρ̂|^2.For separable states, the attainable values of ξ^2_Â,B̂ are bounded byξ^2_Â,B̂(ρ̂_sep)≥ 1,as a consequence of Eq. (<ref>). Since this bound holds for arbitrary choices of Â=∑_i=1^NÂ_i and B̂, these operators can be optimized to obtain the most efficient entanglement witness. In the case of spin systems, Eq. (<ref>) indeed reproduces the spin-squeezing coefficient, and several sharpened generalizations thereof as special cases <cit.>. In Sec. <ref>, we will derive such a coefficient for bosonic continuous-variable systems.We further remark that the coefficient (<ref>) can be made more sensitive to entanglement if knowledge of the Fisher information is available <cit.>. The variance-assisted Fisher density <cit.>f_Â(ρ̂)=F_Q[ρ̂,Â]/4Var(Â)_Π(ρ̂),satisfies f_Â(ρ̂_sep)≤ 1 for separable states ρ̂_sep, due to Eq. (<ref>). This criterion is stronger than Eq. (<ref>) since f_Â(ρ̂)≥ξ^-2_Â,B̂(ρ̂) holds as a consequence of Eq. (<ref>). We will discuss in Sec. <ref> how both coefficients (<ref>) and (<ref>) can be adjusted to reveal entanglement in a specific partition, rather than just anywhere in the system.§ MULTI-MODE CONTINUOUS VARIABLE SYSTEMSWe now apply the above entanglement criteria to a bosonic continuous-variable system of N modes. We will focus particularly on the squeezing coefficient (<ref>). The associated entanglement criterion (<ref>) manifests as a modification of Heisenberg's uncertainty relation for separable states, allowing us to compare our method to other existing techniques. The optimal choice of the operators  and B̂ in Eq. (<ref>) can be found in terms of 2N-dimensional phase space vectors. These lead to a geometrical interpretation and associate the coefficient to multi-mode squeezing in phase space. §.§ Comparison to related entanglement criteria and the uncertainty principleWe begin by comparing the criterion (<ref>) to current state-of-the-art criteria for continuous-variable systems.Most of the existing entanglement criteria for continuous variables are formulated in terms of second moments, i.e., functions of the covariance matrix <cit.>. Standard methods for entanglement detection are based on Gaussian tests of the positive partial transposition (PPT) criterion <cit.>, which are applicable to bipartite systems and yield a necessary and sufficient condition for separability of two-mode Gaussian states <cit.>. If both subsystems consist of more than one mode, this condition is no longer sufficient as bound entanglement can arise <cit.>. The PPT criterion is further intimately related to separability conditions <cit.> based on Heisenberg-Robertson uncertainty relations <cit.>, which can be sharpened with an entropic formulation of the uncertainty principle <cit.>; see also <cit.>.The most general formulation of these criteria for bipartite systems was provided in Ref. <cit.>. We point out that the proof presented there can be straight-forwardly generalized to a multipartite scenario (see Appendix <ref>), yielding the separability conditionVar(Â(α))_ρ̂_sepVar(B̂(β))_ρ̂_sep≥(∑_i=1^N|α_iβ_i⟨[Â_i,B̂_i]⟩_ρ̂_sep|)^2/4,where Â(α)=∑_i=1^Nα_iÂ_i and B̂(β)=∑_i=1^Nβ_iB̂_i are sums of arbitrary operators Â_i and B̂_i that act on the local Hilbert spaces ℋ_i, with real coefficients α=(α_1,…,α_N) and β=(β_1,…,β_N). 
For two-mode states (N=2), the criterion (<ref>) was shown <cit.> to be stronger than criteria that employ the sum (rather than the product) of two variances <cit.>, which in turn were extended to multipartite systems in <cit.>; see also <cit.>. When the operators Â_i and B̂_i are limited to quadratures, the family of criteria (<ref>), for N=2, is equivalent to the PPT criterion <cit.>.The criterion (<ref>) can now be compared to the entanglement criteria that follow from the approach presented in Sec. <ref> in the case of continuous-variable systems. Following Eq. (<ref>) we find that any separable continuous-variable state ρ̂_sep must obey the boundVar(Â(α))_Π(ρ̂_sep)Var(B̂(β))_ρ̂_sep≥|∑_i=1^Nα_iβ_i⟨[Â_i,B̂_i]⟩_ρ̂_sep|^2/4.In contrast to Eq. (<ref>), the uncertainty principle is not used to derive Eq. (<ref>). We further remark that, while in Eq. (<ref>), the operator B̂(β) needs to be a sum of local operators, the criterion (<ref>) can be sharpened by using more general operators instead of B̂(β).The two necessary conditions for separability (<ref>) and (<ref>) should be compared to the general boundVar(Â(α))_ρ̂Var(B̂(β))_ρ̂≥|∑_i=1^Nα_iβ_i⟨[Â_i,B̂_i]⟩_ρ̂|^2/4,provided by the Heisenberg-Robertson uncertainty relation for arbitrary states <cit.>. We observe that both methods can be interpreted as restrictions on the uncertainty relation, which lead to different conditions that are only satisfied by separable states. On the one hand, in Eq. (<ref>) the modification of Eq. (<ref>) is given by a tighter upper bound for separable states, (∑_i=1^N|α_iβ_i⟨[Â_i,B̂_i]⟩_ρ̂|)^2≥|∑_i=1^Nα_iβ_i⟨[Â_i,B̂_i]⟩_ρ̂|^2. On the other hand, in Eq. (<ref>), the variance Var(Â(α))_Π(ρ̂_sep) is obtained for the product state Π(ρ̂_sep), and thus differs from the uncertainty relation (<ref>), despite the fact that the right-hand side of the two inequalities coincide.In the following we focus on the separability condition (<ref>). We reformulate Eq. (<ref>) in terms of multidimensional quadratures (Sec. <ref>) and covariance matrices, which leads to the definition of a multi-mode squeezing coefficient (Sec. <ref>). §.§ Formulation in terms of multidimensional quadraturesLet us briefly illustrate how the criteria (<ref>) can be used with specific choices of accessible operators. In the discrete-variable case, the most general non-trivial local operator can be parametrized in terms of a finite number of generating operators, allowing one to systematically optimize the spin-squeezing coefficient as a function of these parameters <cit.>. In infinite-dimensional Hilbert spaces such a parametrization would involve an infinite number of parameters. The entanglement criteria that can be constructed using Eqs. (<ref>) and (<ref>) therefore depend on a predefined set of accessible operators <cit.>. A common choice for such a set are the local position x̂_1,…,x̂_N and momentum operators p̂_1,…,p̂_N, or more general quadratures of the type q̂_j(θ_j)=1/√(2)(e^iθ_jâ_j^†+e^-iθ_jâ_j). Here, â_i is the annihilation operator of the mode i and the quadrature q̂_j(θ_j) includes the special cases q̂_j(0)=x̂_j and q̂_j(π/2)=p̂_j. Let us considerM̂(𝐯) =∑_i=1^N(n_ix̂_i+m_ip̂_i)=∑_i=1^N1/√(2)(v_iâ^†_i+v_i^*â_i),where 𝐯=(v_1,…,v_N), v_j=n_j+im_j, and the coefficients n_j and m_j are chosen real. We obtain the following separability criterion from (<ref>):Var(M̂(𝐯))_Π(ρ̂_sep)Var(M̂(𝐰))_ρ̂_sep ≥|∑_i=1^NIm(v^*_iw_i)|^2/4where Im denotes the imaginary part. 
As a special case we may constrain both 𝐯 and 𝐰 to the unit circle: v_j=e^iθ_j and w_j=e^iϕ_j. We then obtain the multi-mode quadrature operators, e.g., Q̂_θ=∑_j=1^Nq̂_j(θ_j)=1/√(2)∑_j=1^N(e^iθ_jâ^†_j+e^-iθ_jâ_j),and with Eq. (<ref>), we find the separability boundVar(Q̂_θ)_Π(ρ̂_sep)Var(Q̂_ϕ)_ρ̂_sep ≥|∑_i=1^Nsin(θ_i-ϕ_i)|^2/4.Alternatively, 𝐯 and 𝐰 may be chosen purely imaginary and purely real, respectively. Introducing the notationX̂_𝐧 =∑_i=1^Nn_ix̂_i, P̂_𝐦=∑_i=1^Nm_ip̂_i,we obtain the separability conditionVar(X̂_𝐧)_Π(ρ̂_sep)Var(P̂_𝐦)_ρ̂_sep≥|𝐧·𝐦|^2/4,where 𝐧·𝐦=∑_i=1^Nn_im_i, and the roles of X̂_𝐧 and P̂_𝐦 may be exchanged.So far, we restricted to operators of first order in â_i and â^†_i. An example of a second-order operator is given by the number operator N̂=∑_i=1^Nn̂_i. The commutation relations[X̂_𝐦,N̂]=iP̂_𝐦, [P̂_𝐦,N̂]=iX̂_𝐦,however, render these combinations of operators less suitable for entanglement detection. The reason is that the expectation values of X̂_𝐦 and P̂_𝐦, which play the role of the upper separability bound in Eq. (<ref>), can be set to zero with local operations. A family of second-order quadrature operators will be discussed in Sec. <ref>. §.§ Multi-mode squeezing coefficient as entanglement witnessIn general, the criteria of the type (<ref>) are conveniently expressed using the covariance formalism of continuous variables <cit.>. Introducing the 2N-dimensional vector 𝐫̂=(r̂_1,…,r̂_2N)=(x̂_1,p̂_1,…,x̂_N,p̂_N), we define the covariance matrix of the state ρ̂ element-wise as (γ_ρ̂)_αβ=Cov(r̂_α,r̂_β)_ρ̂.Arbitrary local operatorsM̂(𝐠)=∑_i=1^2Ng_ir̂_i,with the real vector 𝐠=(g_1,…,g_2N) now generate the commutation relations[M̂(𝐡),M̂(𝐠)]=∑_i,j=1^2Nh_ig_j[r̂_i,r̂_j]=i𝐡^TΩ𝐠,where Ω=⊕_i=1^Nω is the symplectic form with ω=([01; -10 ]). Furthermore, the variances of such operators are given in terms of the covariance matrix as:Var(M̂(𝐠))_ρ̂=∑_i,j=1^2Ng_ig_jCov(r̂_i,r̂_j)_ρ̂=𝐠^Tγ_ρ̂𝐠.We can thus rewrite the separability criterion Eq. (<ref>) in the equivalent form(𝐡^Tγ_Π(ρ̂_sep)𝐡)(𝐠^Tγ_ρ̂_sep𝐠)≥(𝐡^TΩ𝐠)^2/4,where the correlation-free covariance matrix γ_Π(ρ̂_sep) is obtained from γ_ρ̂_sep by setting all elements to zero except for the local 2× 2 blocks on the diagonal. Again, we may compare Eq. (<ref>) to the Heisenberg-Robertson uncertainty relation (<ref>), which here reads(𝐡^Tγ_ρ̂𝐡)(𝐠^Tγ_ρ̂𝐠)≥(𝐡^TΩ𝐠)^2/4,and holds for arbitrary ρ̂.The number of degrees of freedom in Eq. (<ref>) can be halved by considering only pairs of operators that maximize the right-hand side, which is given by the scalar product of the two vectors 𝐡 and Ω𝐠. For two vectors of fixed length, the scalar product reaches its maximum value when they are parallel or anti-parallel, hence, if 𝐡=±Ω𝐠 (recall also that ΩΩ=-𝕀_2N). Generally, we call the directions ±Ω𝐠 maximally non-commuting with respect to 𝐠 since pairs of operators M̂(𝐠) and M̂(±Ω𝐠) maximize the absolute value of Eq. (<ref>). This yields the following necessary condition for separabilityξ^2(ρ̂_sep)≥ 1,where, based on the general formulation (<ref>), we introduced the bosonic multi-mode squeezing coefficientξ^2(ρ̂):=min_𝐠ξ^2_𝐠(ρ̂),withξ^2_𝐠(ρ̂):=4(𝐠^TΩ^Tγ_Π(ρ̂)Ω𝐠)(𝐠^Tγ_ρ̂𝐠)/(𝐠^T𝐠)^2.Notice that ξ^2_𝐠 does not depend on the normalization of 𝐠. The coefficient ξ^2_𝐠(ρ̂) can be immediately obtained from any given covariance matrix. The optimization involved in Eq. (<ref>) contains 2N free parameters, whereas the normalization of 𝐠 reduces this number by one. 
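Since ξ^2_𝐠 is an explicit function of the covariance matrix, its evaluation takes only a few lines. The sketch below (our illustration) implements the coefficient defined above together with a simple minimization heuristic that scans the eigenvectors of γ_ρ̂, anticipating the remark made next; for the two-mode squeezed vacuum state treated in the examples below it reproduces ξ^2 = (1+e^{-4r})/2.

```python
import numpy as np

def symplectic_form(N):
    w = np.array([[0.0, 1.0], [-1.0, 0.0]])
    Omega = np.zeros((2 * N, 2 * N))
    for i in range(N):                      # direct sum of N copies of w
        Omega[2*i:2*i+2, 2*i:2*i+2] = w
    return Omega

def decorrelate(gamma):
    # gamma_Pi: keep only the local 2x2 blocks on the diagonal
    out = np.zeros_like(gamma)
    for i in range(gamma.shape[0] // 2):
        out[2*i:2*i+2, 2*i:2*i+2] = gamma[2*i:2*i+2, 2*i:2*i+2]
    return out

def xi2(gamma, g):
    # 4 (g^T Omega^T gamma_Pi Omega g)(g^T gamma g) / (g^T g)^2
    Om = symplectic_form(gamma.shape[0] // 2)
    gp = decorrelate(gamma)
    return 4.0 * (g @ Om.T @ gp @ Om @ g) * (g @ gamma @ g) / (g @ g)**2

def xi2_eigenscan(gamma):
    # heuristic minimization over the eigenvectors of gamma; a full
    # 2N-parameter minimization may in general do better
    _, vecs = np.linalg.eigh(gamma)
    return min(xi2(gamma, vecs[:, i]) for i in range(gamma.shape[0]))

# two-mode squeezed vacuum (see the example below), r = 0.5
r = 0.5
R, S = np.cosh(2 * r), np.sinh(2 * r)
gamma2 = 0.5 * np.array([[R, 0, S, 0],
                         [0, R, 0, -S],
                         [S, 0, R, 0],
                         [0, -S, 0, R]])
print(xi2_eigenscan(gamma2), 0.5 * (1 + np.exp(-4 * r)))   # both ~ 0.568
```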
In the two examples discussed below, we notice that a reasonable point of departure for finding the 𝐠 that minimizes Eq. (<ref>) is to choose 𝐠 as an eigenvector of γ_ρ̂ with minimal eigenvalue, i.e., to focus first on the minimization of the last factor in Eq. (<ref>).

The coefficients ξ^2_𝐠(ρ̂) allow for an interpretation that creates a notion of bosonic multi-mode squeezing and relates it to entanglement. To this end, one interprets the vectors 𝐠 as `directions' in the 2N-dimensional phase space. A small value of 𝐠^Tγ_ρ̂𝐠 can be interpreted as a squeezed variance along the direction 𝐠 (here considered to be normalized to one), as by Eq. (<ref>) it reflects a small variance of the operator M̂(𝐠), in close analogy to the case of spins <cit.>. In order to satisfy Eq. (<ref>), the squeezing along 𝐠 entails anti-squeezing along the maximally non-commuting direction Ω𝐠. If the anti-squeezing along Ω𝐠 can be suppressed by removing mode correlations (as is done by the operation Π), it is possible to achieve small values of ξ^2, and, in the presence of mode entanglement, to violate the bound (<ref>).

§.§ Examples

We now discuss the examples of N-mode squeezed states for N=2,3 and show that they violate the separability criterion (<ref>) for any finite squeezing. Moreover, the entanglement of a non-Gaussian squeezed state is detected with the aid of nonlinear observables.

§.§.§ Two-mode squeezed states

We first consider two-mode squeezed vacuum states |Ψ^(2)_r⟩=Ŝ_12[r]|0,0⟩, generated by the operation Ŝ_12[ξ]=e^ξâ_1^†â_2^†-ξ^*â_1â_2 <cit.>, where |0⟩ denotes the vacuum. One easily confirms that the states |Ψ^(2)_r⟩ violate Eq. (<ref>), which reveals their inseparability. To see this, we notice first that these states are Gaussian, and therefore fully characterized by their covariance matrix

γ_|Ψ^(2)_r⟩= 1/2[ R^(2) 0 S^(2) 0; 0 R^(2) 0 -S^(2); S^(2) 0 R^(2) 0; 0 -S^(2) 0 R^(2) ],

where R^(2) = cosh(2r) and S^(2) = sinh(2r). For r>0, it describes position correlations and momentum anti-correlations: Var(x̂_1-x̂_2)_|Ψ^(2)_r⟩=Var(p̂_1+p̂_2)_|Ψ^(2)_r⟩=e^-2r. We take 𝐠_xp=(c_x,c_p,-c_x,c_p) with arbitrary numbers c_x and c_p, which is an eigenvector of the matrix (<ref>) with minimum eigenvalue (R^(2) - S^(2))/2. This choice corresponds to the operator M̂(𝐠) = c_x(x̂_1-x̂_2) + c_p(p̂_1+p̂_2) and leads to 𝐠^T γ_|Ψ^(2)_r⟩𝐠=(c_x^2 + c_p^2)e^-2r. We can obtain the correlation-free covariance matrix from Eq. (<ref>) by setting all elements except the diagonal 2× 2-blocks to zero, yielding

γ_Π(|Ψ^(2)_r⟩)= 1/2[ R^(2) 0 0 0; 0 R^(2) 0 0; 0 0 R^(2) 0; 0 0 0 R^(2) ].

Thus, 𝐠_xp^TΩ^Tγ_Π(|Ψ^(2)_r⟩)Ω𝐠_xp=(c_x^2+c_p^2)R^(2) and with 𝐠_xp^T𝐠_xp=2(c_x^2+c_p^2), we finally find

ξ^2_𝐠_xp(|Ψ^(2)_r⟩)=1/2(1+e^-4r),

which violates Eq. (<ref>) for all r>0. The choice of 𝐠_xp is indeed confirmed as optimal by observing in Fig. <ref> that ξ^-2(|Ψ^(2)_r⟩)=ξ^-2_𝐠_xp(|Ψ^(2)_r⟩), where ξ^-2(|Ψ^(2)_r⟩) was obtained by minimizing the quantity (<ref>) numerically.

§.§.§ Three-mode squeezed states

A three-mode squeezed state can be generated by single-mode vacuum squeezing of all three modes followed by two consecutive two-mode mixing operations <cit.>. Specifically, we consider the states

|Ψ^(3)_r⟩ =B̂_23[π/4]B̂_12[arccos(1/√(3))]×Ŝ_3[r]Ŝ_2[r]Ŝ_1[-r]|0,0,0⟩,

where Ŝ_i[ξ]=e^1/2(ξâ_i^† 2-ξ^*â_i^2) is the single-mode squeezing operator of mode i and B̂_ij[θ]=e^(â_iâ_j^†-â_i^†â_j)θ mixes the modes i and j with angle θ∈ℝ.
These states are also Gaussian, with covariance matrix <cit.>

γ_|Ψ^(3)_r⟩=1/2[ R_+^(3) 0 S^(3) 0 S^(3) 0; 0 R_-^(3) 0 -S^(3) 0 -S^(3); S^(3) 0 R_+^(3) 0 S^(3) 0; 0 -S^(3) 0 R_-^(3) 0 -S^(3); S^(3) 0 S^(3) 0 R^(3)_+ 0; 0 -S^(3) 0 -S^(3) 0 R_-^(3) ],

where R^(3)_±=cosh(2r)±1/3sinh(2r) and S^(3)=-2/3sinh(2r). The eigenspace of the lowest eigenvalue (1/2)e^-2r is spanned by the three vectors 𝐠_0=(1,0,1,0,1,0), 𝐠_1=(0,-1,0,-1,0,2) and 𝐠_2=(0,1,0,-1,0,0). These vectors thus identify 2N-dimensional directions in phase space along which the states |Ψ^(3)_r⟩ are squeezed.

Let us first focus on 𝐠_0. A maximally non-commuting direction with respect to 𝐠_0 is given by -Ω𝐠_0=(0,1,0,1,0,1). This identifies a direction of anti-squeezing in phase space since the vector -Ω𝐠_0 is a (non-normalized) eigenvector of γ_|Ψ^(3)_r⟩ with maximal eigenvalue (1/2)e^2r. This, of course, is required to satisfy the uncertainty relation (<ref>). The particular form of the directions 𝐠_0 and -Ω𝐠_0 allows for an interpretation in terms of N-dimensional position and momentum quadratures, as introduced in Eq. (<ref>). We can identify N-dimensional position squeezing along 𝐦_0=(1,1,1) with

Var(X̂_𝐦_0)_|Ψ^(3)_r⟩=𝐠_0^Tγ_|Ψ^(3)_r⟩𝐠_0=3/2e^-2r,

and momentum anti-squeezing with

Var(P̂_𝐦_0)_|Ψ^(3)_r⟩=𝐠_0^TΩ^Tγ_|Ψ^(3)_r⟩Ω𝐠_0=3/2e^2r.

However, for the separability condition (<ref>), we compare to the correlation-free covariance matrix, again obtained by removing the off-diagonal correlation blocks of γ_|Ψ^(3)_r⟩, leading to

γ_Π(|Ψ^(3)_r⟩)=1/2[ R_+^(3) 0 0 0 0 0; 0 R_-^(3) 0 0 0 0; 0 0 R_+^(3) 0 0 0; 0 0 0 R_-^(3) 0 0; 0 0 0 0 R^(3)_+ 0; 0 0 0 0 0 R_-^(3) ].

The removal of the correlations has indeed suppressed the anti-squeezing along -Ω𝐠_0, or equivalently for P̂_𝐦_0: we find

Var(P̂_𝐦_0)_Π(|Ψ^(3)_r⟩)=𝐠_0^TΩ^Tγ_Π(|Ψ^(3)_r⟩)Ω𝐠_0=3/2R^(3)_-,

and, together with Eq. (<ref>), upon insertion into Eq. (<ref>),

ξ^2_𝐠_0(|Ψ^(3)_r⟩)=1/3(1+2e^-4r).

Hence, ξ^2_𝐠_0(|Ψ^(3)_r⟩)<1 for all r>0, which violates the separability condition (<ref>).

The violation of the separability condition is also observed for the other eigenvectors 𝐠_1 and 𝐠_2; however, they produce smaller values of the coefficient ξ^-2_𝐠. Specifically, they lead to ξ^2_𝐠_1(|Ψ^(3)_r⟩)=ξ^2_𝐠_2(|Ψ^(3)_r⟩)=(2 + e^-4r)/3. The reason for this is that the anti-squeezing along the respective maximally non-commuting direction is not in all cases reduced with equal effectiveness by the removal of correlations. In other words, 𝐠^TΩ^Tγ_Π(ρ̂)Ω𝐠 is not necessarily small when 𝐠^Tγ_ρ̂𝐠 is.

§.§.§ Non-Gaussian entanglement from higher-order squeezing

Squeezing is a second-order process that is able to generate entangled Gaussian states, such as the two examples discussed above. In order to illustrate how the squeezing coefficient can be adapted to non-Gaussian states, we consider a fourth-order squeezing evolution as a direct extension of the (conventional) second-order squeezing discussed in Sec. <ref>. Specifically, we consider states of the form

|ψ_r⟩=X̂_12[r]|0,0⟩,

where X̂_12[ξ]=e^1/2(ξâ^† 2_1â^† 2_2-ξ^*â_1^2â_2^2). The fourth-order generator in X̂_12 produces a squeezing effect which is not captured by the variances of linear observables, i.e., the covariance matrix. Hence, the criteria (<ref>) and (<ref>) are not able to detect the state's entanglement if the operators Â_i and B̂_i are limited to quadratures.
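This failure of quadrature-based criteria can be made explicit numerically. The sketch below is our own illustrative code, not from the original work; it assumes a truncated Fock-space representation with cutoff d per mode (convergence in d should be checked for larger r). It constructs |ψ_r⟩ and shows that its quadrature covariance matrix is diagonal, with all entries at or above the vacuum value 1/2, so no coefficient built from it, such as ξ^2, can drop below one.

```python
import numpy as np
from scipy.linalg import expm

d = 20                                    # Fock cutoff per mode
a = np.diag(np.sqrt(np.arange(1, d)), 1)  # truncated annihilation operator (real, a^dag = a.T)
I = np.eye(d)
a1, a2 = np.kron(a, I), np.kron(I, a)

r = 0.1
A = a1 @ a1 @ a2 @ a2                     # a_1^2 a_2^2
psi = expm(0.5 * r * (A.T - A))[:, 0]     # |psi_r> = X_12[r]|0,0>; column 0 is the two-mode vacuum

x1, p1 = (a1 + a1.T) / np.sqrt(2), 1j * (a1.T - a1) / np.sqrt(2)
x2, p2 = (a2 + a2.T) / np.sqrt(2), 1j * (a2.T - a2) / np.sqrt(2)
quads = [x1, p1, x2, p2]

def cov(q1, q2):
    """Symmetrized covariance <(q1 q2 + q2 q1)/2> - <q1><q2> in the state psi."""
    m1 = np.real(psi.conj() @ q1 @ psi)
    m2 = np.real(psi.conj() @ q2 @ psi)
    return np.real(psi.conj() @ (q1 @ q2 + q2 @ q1) @ psi) / 2 - m1 * m2

gamma = np.array([[cov(q1, q2) for q2 in quads] for q1 in quads])
print(np.round(gamma, 6))  # diagonal, entries >= 1/2: no quadrature squeezing or correlations
```

Intuitively, the state only has support on the Fock states |2n,2n⟩, so all first and second moments of the quadratures that would signal correlations vanish identically.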
Consequently, Simon's PPT criterion for Gaussian states <cit.> also cannot reveal this non-Gaussian entanglement. We therefore extend the squeezing coefficient (<ref>) by employing nonlinear observables

D̂(μ) =μ_1(â_1^2+â_1^† 2)+μ_2 i(â_1^† 2-â_1^2)+μ_3(â_2^2+â_2^† 2)+μ_4 i(â_2^† 2-â_2^2),

characterized by a vector μ=(μ_1,μ_2,μ_3,μ_4)∈ℝ^4. From Eq. (<ref>), we find that the non-Gaussian squeezing coefficient

χ_μ,ν(ρ̂)=4 Var(D̂(μ))_ρ̂Var(D̂(ν))_Π(ρ̂)/|⟨[D̂(μ),D̂(ν)]⟩_ρ̂|^2

must satisfy

χ_μ,ν(ρ̂_sep)≥ 1 ∀μ,ν

for all separable states ρ̂_sep.

This novel criterion identifies the entanglement of the states

ρ̂_r,s=1/1+s(|ψ_r⟩⟨ψ_r|+s|0,0⟩⟨ 0,0|),

as is shown in Fig. <ref>. By mixing the state (<ref>) incoherently with the vacuum, we further test the applicability of the criterion to mixed continuous-variable states. The optimal squeezing directions in the “phase space” of second-order variables, i.e., those that lead to the maximum value χ_μ_0,ν_0(ρ̂_r,s), are given by μ_0=(c_1,c_2,-c_1,c_2) and ν_0=Ωμ_0 for all values of r and s, where c_1 and c_2 are arbitrary real numbers. All these observations are in complete analogy to the results of Sec. <ref> once the conventional covariance matrix, generated by (linear) quadratures â_i, is replaced by the nonlinear covariance matrix, generated by second-order quadratures â^2_i. With the help of the second-order observables (<ref>), the entanglement of ρ̂_r,s can also be revealed by violation of condition (<ref>) with numerically optimal directions μ_0'=μ_0 and ν_0'=(c_2,-c_1,-c_2,-c_1).

§ DETECTING ENTANGLEMENT IN A FIXED PARTITION

So far, our analysis has focused on detecting inseparability in a multipartite system, without information about which of the parties are entangled. It should be noticed that the variance on the right-hand side of the separability criterion (<ref>) is calculated on the state Π(ρ̂)=⊗_i=1^Nρ̂_i, where ρ̂_i is the reduced density operator of ρ̂ for subsystem i. The replacement of ρ̂ by Π(ρ̂) removes all correlations between the subsystems, see Sec. <ref>. In a multipartite system this is just one of the possible ways of separating the full system into uncorrelated, possibly coarse-grained subsystems.

The general criterion (<ref>) can be extended to probe for entanglement in a specific partition of the system. To see this, let us introduce the generalized set of operations

Π_{𝒜_1,…,𝒜_M}(ρ̂)=⊗_l=1^Mρ̂_𝒜_l,

where the {𝒜_1,…,𝒜_M} label some coarse-grained partition of the N subsystems and ρ̂_𝒜_l denotes the reduced density operator of ρ̂ on the subsystems labeled by 𝒜_l. We introduce the term {𝒜_1,…,𝒜_M}-separable for quantum states ρ̂_{𝒜_1,…,𝒜_M} that can be decomposed in the form

ρ̂_{𝒜_1,…,𝒜_M}=∑_γp_γρ̂^(γ)_𝒜_1⊗…⊗ρ̂^(γ)_𝒜_M,

where the ρ̂^(γ)_𝒜_l describe arbitrary quantum states on 𝒜_l. We will now show that Eq. (<ref>) can be extended to yield a condition for {𝒜_1,…,𝒜_M}-separability. Any {𝒜_1,…,𝒜_M}-separable state must satisfy

F_Q[ρ̂_{𝒜_1,…,𝒜_M},Â] ≤ 4Var(Â)_Π_{𝒜_1,…,𝒜_M}(ρ̂),

where Â=∑_i=1^NÂ_i. The proof is provided in Appendix <ref>.

The criterion (<ref>) allows us to devise witnesses for entanglement among specific, possibly coarse-grained subsets of the full system. Instead of removing all off-diagonal blocks from the covariance matrix, as was done in the case of full separability, one only puts certain off-diagonal blocks, as specified by the chosen partition, to zero. Different partitions then lead to different bounds on the Fisher information.
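At the covariance-matrix level, the operation Π_{𝒜_1,…,𝒜_M} amounts to zeroing exactly those 2×2 blocks that couple modes in different subsets. The following sketch (our own code, anticipating the partition-resolved squeezing coefficient made precise below) implements this; applied to the three-mode squeezed state of the previous section with the partition {(1,2),(3)}, it reproduces the closed-form value (5+4e^-4r)/9 derived in the next section. Modes are indexed from 0 in the code.

```python
import numpy as np

def partition_cov(gamma, partition):
    """Covariance matrix of Pi_{A_1,...,A_M}(rho): zero every 2x2 block that
    couples modes belonging to different subsets of the partition."""
    label = {m: l for l, subset in enumerate(partition) for m in subset}
    out = np.zeros_like(gamma)
    n = gamma.shape[0] // 2
    for i in range(n):
        for j in range(n):
            if label[i] == label[j]:
                out[2*i:2*i+2, 2*j:2*j+2] = gamma[2*i:2*i+2, 2*j:2*j+2]
    return out

def xi2_partition(gamma, g, partition):
    omega = np.kron(np.eye(gamma.shape[0] // 2), [[0.0, 1.0], [-1.0, 0.0]])
    og = omega @ g
    return 4 * (og @ partition_cov(gamma, partition) @ og) * (g @ gamma @ g) / (g @ g) ** 2

# Three-mode squeezed state, partition {(1,2),(3)}:
r = 0.5
Rp = np.cosh(2*r) + np.sinh(2*r) / 3
Rm = np.cosh(2*r) - np.sinh(2*r) / 3
S = -2 * np.sinh(2*r) / 3
gamma = 0.5 * np.array([[Rp, 0, S, 0, S, 0], [0, Rm, 0, -S, 0, -S],
                        [S, 0, Rp, 0, S, 0], [0, -S, 0, Rm, 0, -S],
                        [S, 0, S, 0, Rp, 0], [0, -S, 0, -S, 0, Rm]])
g0 = np.array([1.0, 0, 1, 0, 1, 0])
print(xi2_partition(gamma, g0, [(0, 1), (2,)]), (5 + 4*np.exp(-4*r)) / 9)  # both ~0.616
```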
In the extreme case of the trivial partition that contains all subsystems in a single set, we recover the upper bound F_Q[ρ̂,Â]≤ 4Var(Â)_ρ̂, which holds for all quantum states ρ̂ and is saturated by pure states <cit.>. In the opposite limit, when the partition separates all of the N subsystems, we obtain Eq. (<ref>).

Notice that for probing {𝒜_1,…,𝒜_M}-separability, one may also employ partially non-local operators, i.e., operators that can be non-local on the potentially entangled subsets and local between separable subsets. We will call the general class of such operators {𝒜_1,…,𝒜_M}-local. Let us consider the case of a pure state, for which F_Q[|Ψ⟩,Â]= 4Var(Â)_|Ψ⟩ holds. If condition (<ref>) is satisfied for all possible {𝒜_1,…,𝒜_M}-local operators, we can conclude that there are no correlations between the different subsystems <cit.>. Since this implies a product state, we observe that Eq. (<ref>) represents a necessary and sufficient condition for {𝒜_1,…,𝒜_M}-separability of pure states. More precisely, for each pure {𝒜_1,…,𝒜_M}-inseparable state, a {𝒜_1,…,𝒜_M}-local operator exists that violates Eq. (<ref>).

As these results are independent of the dimensions of the local Hilbert spaces ℋ_i, they apply to arbitrary multipartite systems of discrete, continuous and hybrid variables. Hence, a simple substitution of the product state Π(ρ̂) by Π_{𝒜_1,…,𝒜_M}(ρ̂) leads to a generalization of the criterion (<ref>). The methods developed in Refs. <cit.>, which follow from Eq. (<ref>), can thus be easily extended to render them susceptible to entanglement in a specific partition. In particular, we may use this to modify the squeezing coefficients (<ref>) and variance-assisted Fisher densities (<ref>) accordingly, in order to detect states that are not {𝒜_1,…,𝒜_M}-separable.

In the continuous-variable case, a generalization of the squeezing coefficient (<ref>) is found as

ξ_{𝒜_1,…,𝒜_M}^2(ρ̂):=min_𝐠4(𝐠^TΩ^Tγ_Π_{𝒜_1,…,𝒜_M}(ρ̂)Ω𝐠)(𝐠^Tγ_ρ̂𝐠)/(𝐠^T𝐠)^2,

and an observation of ξ_{𝒜_1,…,𝒜_M}^-2(ρ̂)>1 indicates inseparability in the {𝒜_1,…,𝒜_M}-partition.

Let us illustrate this with the aid of the three-mode squeezed state |Ψ^(3)_r⟩, which was introduced in Sec. <ref>. To check for inseparability within the {(1,2),(3)}-partition, we employ the covariance matrix of the state Π_{(1,2),(3)}. This is obtained from the full covariance matrix (<ref>) by removing all correlations between the subsystems (1,2) and (3), while retaining correlations between (1) and (2). This yields

γ_Π_{(1,2),(3)}(|Ψ^(3)_r⟩)=1/2[ R_+^(3) 0 S^(3) 0 0 0; 0 R_-^(3) 0 -S^(3) 0 0; S^(3) 0 R_+^(3) 0 0 0; 0 -S^(3) 0 R_-^(3) 0 0; 0 0 0 0 R^(3)_+ 0; 0 0 0 0 0 R_-^(3) ].

The phase-space direction 𝐠_0 leads to

ξ_{(1,2),(3)}^2(|Ψ^(3)_r⟩)=1/9(5 + 4e^-4r),

which reveals inseparability in the {(1,2),(3)}-partition for all r>0. We obtain the same result for the partitions {(1),(2,3)} and {(1,3),(2)}.

Let us remark, however, that in general, entanglement in all the possible partitions does not necessarily imply genuine multipartite entanglement <cit.>. A state of the form ρ̂=p_1ρ̂_{1,2}⊗ρ̂_3+p_2ρ̂_1⊗ρ̂_{2,3}+p_3ρ̂_{1,3}⊗ρ̂_2 with 0<p_1,p_2,p_3<1 and entangled states ρ̂_{1,2}, ρ̂_{2,3} and ρ̂_{1,3} is not separable in any of the partitions {(1,2),(3)}, {(1),(2,3)}, or {(1,3),(2)}. Yet, it only contains bipartite entanglement.

§ CONCLUSIONS

We introduced a multi-mode squeezing coefficient to detect entanglement in continuous-variable systems. This coefficient is based on squeezing of a collective observable in the 2N-dimensional phase space.
The entanglement of two- and three-mode squeezed states is successfully revealed. Interestingly, in both examples the inverse squared squeezing coefficient coincides with the number of squeezed modes in the limit of strong squeezing (see Fig. <ref>). Generalizations to higher-order phase-space variables further allow for the detection of entanglement in non-Gaussian states.

The multi-mode squeezing coefficient can be directly measured in continuous-variable photonic cluster states, where the full covariance matrix can be extracted <cit.>. Homodyne measurement techniques can also be realized with systems of cold atoms <cit.>. In the future, cold bosonic atoms in optical lattices under quantum gas microscopes may provide an additional platform for continuous-variable entanglement, where local access to individual modes is available <cit.>. Similarly, phonons in an ion trap may also provide a locally controllable continuous-variable platform for quantum information <cit.>.

There are several ways to improve the performance of entanglement detection by modifying the definition of the multi-mode squeezing coefficient (<ref>). As was illustrated in Sec. <ref>, the inclusion of local observables that are nonlinear in â_i and â_i^† can yield stronger criteria. Furthermore, only the variance that is measured without correlations needs to pertain to a sum of local operators. Employing Eq. (<ref>) with a more general choice of B̂(β) is further expected to provide tighter bounds for the Fisher information. The strongest criteria will be obtained when the Fisher information is measured directly <cit.>. Based on the intuition obtained from the study of spin systems <cit.>, this is expected to be particularly important to detect entanglement in non-Gaussian states. However, since the properties of the (quantum) Fisher information are largely unexplored in the context of continuous-variable systems, a detailed understanding of the relationship among the entanglement criteria discussed here and in the literature <cit.> is presently still lacking. For instance, it is not known whether the squeezing criterion is equivalent to the PPT condition <cit.> in the case of quadrature observables.

We have extended our techniques beyond the detection of full inseparability to witness entanglement in a particular partition of the multipartite system. A simple generalization of the squeezing coefficient presented here (and similarly of related methods developed in Refs. <cit.>) provides an additional tool to characterize the entanglement structure in a multipartite system on a microscopic level.

Finally, the approach presented here can be combined with the concept of generalized spin squeezing which was developed in <cit.> to define a similar squeezing parameter and entanglement criterion for hybrid combinations of discrete- and continuous-variable systems <cit.>.

M.G. acknowledges support from the Alexander von Humboldt foundation. We thank G. Ferrini for useful discussions.

§ EXTENSION OF THE ENTANGLEMENT CRITERION BY GIOVANNETTI ET AL. <cit.> TO MULTI-MODE SYSTEMS

Here we show that the proof presented for bipartite systems in <cit.> can be straightforwardly extended to multipartite systems, leading to Eq. (<ref>).
First notice that for a separable state (<ref>), the variance of a sum of local operators Â(α)=∑_i=1^Nα_iÂ_i is bounded by

Var(Â(α))_ρ̂_sep≥∑_γp_γVar(Â(α))_ρ̂^(γ)_1⊗⋯⊗ρ̂^(γ)_N= ∑_γp_γ∑_i,j=1^Nα_iα_jCov(Â_i,Â_j)_ρ̂^(γ)_1⊗⋯⊗ρ̂^(γ)_N= ∑_γp_γ∑_i=1^Nα_i^2Var(Â_i)_ρ̂^(γ)_i.

Analogously, we find Var(B̂(β))_ρ̂_sep≥∑_γp_γ∑_i=1^Nβ_i^2Var(B̂_i)_ρ̂^(γ)_i. The weighted sum of these two variances with w_1,w_2≥ 0 is bounded by

w_1Var(Â(α))_ρ̂_sep+w_2Var(B̂(β))_ρ̂_sep≥∑_γp_γ∑_i=1^N[w_1α_i^2Var(Â_i)_ρ̂^(γ)_i+w_2β_i^2Var(B̂_i)_ρ̂^(γ)_i]≥∑_γp_γ∑_i=1^N[w_1α_i^2Var(Â_i)_ρ̂^(γ)_i+w_2β_i^2|⟨ [Â_i,B̂_i]⟩_ρ̂^(γ)_i|^2/4Var(Â_i)_ρ̂^(γ)_i]≥∑_γp_γ∑_i=1^N√(w_1w_2)|α_iβ_i⟨ [Â_i,B̂_i]⟩_ρ̂^(γ)_i|,

where we used the Heisenberg-Robertson uncertainty relation,

Var(B̂_i)_ρ̂^(γ)_i≥|⟨ [Â_i,B̂_i]⟩_ρ̂^(γ)_i|^2/4Var(Â_i)_ρ̂^(γ)_i,

and the fact that the function f(x)=c_1x+c_2/x over x>0 attains its minimum value 2√(c_1c_2). Next we use that for arbitrary operators O and ρ̂_i=∑_γp_γρ̂^(γ)_i, we have |⟨ O⟩_ρ̂_i|=|∑_γp_γ⟨ O⟩_ρ̂^(γ)_i|≤∑_γp_γ|⟨ O⟩_ρ̂^(γ)_i|. Since the operators Â_i and B̂_i act locally on ℋ_i, we may substitute ⟨ [Â_i,B̂_i]⟩_ρ̂_i=⟨ [Â_i,B̂_i]⟩_ρ̂_sep. We obtain

w_1Var(Â(α))_ρ̂_sep+w_2Var(B̂(β))_ρ̂_sep≥√(w_1w_2)∑_i=1^N|α_iβ_i⟨ [Â_i,B̂_i]⟩_ρ̂_sep|.

Maximizing the bound (<ref>) over √(w_1/w_2) <cit.> finally yields

Var(Â(α))_ρ̂_sepVar(B̂(β))_ρ̂_sep≥(∑_i=1^N|α_iβ_i⟨[Â_i,B̂_i]⟩_ρ̂_sep|)^2/4,

which is the desired result, Eq. (<ref>).

§ PROOF OF EQ. (<ref>)

In order to prove Eq. (<ref>), we follow the proof presented for the main result in Ref. <cit.>. We obtain

F_Q[ρ̂_{𝒜_1,…,𝒜_M},∑_i=1^NÂ_i]≤∑_γp_γF_Q[ρ̂^(γ)_𝒜_1⊗…⊗ρ̂^(γ)_𝒜_M,∑_i=1^NÂ_i]≤ 4∑_γp_γVar(∑_i=1^NÂ_i)_ρ̂^(γ)_𝒜_1⊗…⊗ρ̂^(γ)_𝒜_M= 4∑_γp_γ∑_l=1^MVar(∑_i∈𝒜_lÂ_i)_ρ̂^(γ)_𝒜_l≤ 4∑_l=1^MVar(∑_i∈𝒜_lÂ_i)_ρ̂_𝒜_l=4Var(∑_i=1^NÂ_i)_Π_{𝒜_1,…,𝒜_M}(ρ̂),

where we used the convexity and additivity properties of the Fisher information <cit.>, the concavity of the variance, as well as the relation F_Q[ρ̂,Ĥ]≤ 4Var(Ĥ)_ρ̂ <cit.> and ρ̂_𝒜_l=∑_γp_γρ̂^(γ)_𝒜_l. Notice that the result can be extended to a more general class of {𝒜_1,…,𝒜_M}-local generators: Instead of the fully local operator ∑_i=1^NÂ_i considered here, one may employ operators of the form ∑_l=1^MÔ_𝒜_l, where Ô_𝒜_l is an arbitrary operator on the subsets contained in 𝒜_l.

References

[1] M. A. Nielsen and I. L. Chuang, Quantum Computation and Quantum Information (Cambridge University Press, New York, NY, 2000).
[2] L. Pezzè, A. Smerzi, M. K. Oberthaler, R. Schmied, and P. Treutlein, Non-classical states of atomic ensembles: fundamentals and applications in quantum metrology, arXiv:1609.01609 [quant-ph] (2016).
[3] R. Horodecki, P. Horodecki, M. Horodecki, and K. Horodecki, Rev. Mod. Phys. 81, 865 (2009).
[4] F. Mintert, A. R. Carvalho, M. Kuś, and A. Buchleitner, Phys. Rep. 415, 207 (2005).
[5] O. Gühne and G. Tóth, Phys. Rep. 474, 1 (2009).
[6] D. J. Wineland, J. J. Bollinger, W. M. Itano, F. L. Moore, and D. J. Heinzen, Phys. Rev. A 46, R6797 (1992).
[7] M. Kitagawa and M. Ueda, Phys. Rev. A 47, 5138 (1993).
[8] A. Sørensen, L. M. Duan, J. I. Cirac, and P. Zoller, Nature 409, 63 (2001).
[9] A. S. Sørensen and K. Mølmer, Phys. Rev. Lett. 86, 4431 (2001).
[10] G. Tóth, C. Knapp, O. Gühne, and H. J. Briegel, Phys. Rev. A 79, 042334 (2009).
[11] J. Ma, X. Wang, C. Sun, and F. Nori, Phys. Rep. 509, 89 (2011).
[12] G. Tóth, C. Knapp, O. Gühne, and H. J. Briegel, Phys. Rev. Lett. 99, 250405 (2007).
[13] G. Vitagliano, P. Hyllus, I. L. Egusquiza, and G. Tóth, Phys. Rev. Lett. 107, 240502 (2011).
[14] P. Hyllus, L. Pezzé, A. Smerzi, and G. Tóth, Phys. Rev. A 86, 012337 (2012).
[15] R. Schmied, J.-D. Bancal, B. Allard, M. Fadel, V. Scarani, P. Treutlein, and N. Sangouard, Science 352, 441 (2016).
[16] M. Gessner, L. Pezzè, and A. Smerzi, Phys. Rev. A 95, 032326 (2017).
[17] S. Yokoyama, R. Ukai, S. C. Armstrong, C. Sornphiphatphong, T. Kaji, S. Suzuki, J.-i. Yoshikawa, H. Yonezawa, N. C. Menicucci, and A. Furusawa, Nat. Photon. 7, 982 (2013).
[18] M. Chen, N. C. Menicucci, and O. Pfister, Phys. Rev. Lett. 112, 120505 (2014).
[19] J. Roslund, R. M. de Araújo, S. Jiang, C. Fabre, and N. Treps, Nat. Photon. 8, 109 (2014).
[20] C. Gross, H. Strobel, E. Nicklas, T. Zibold, N. Bar-Gill, G. Kurizki, and M. K. Oberthaler, Nature 480, 219 (2011).
[21] C. D. Hamley, C. S. Gerving, T. M. Hoang, E. M. Bookjans, and M. S. Chapman, Nat. Phys. 8, 305 (2012).
[22] J. Peise, I. Kruse, K. Lange, B. Lücke, L. Pezzé, J. Arlt, W. Ertmer, K. Hammerer, L. Santos, A. Smerzi, and C. Klempt, Nat. Commun. 6, 8984 (2015).
[23] R. Simon, Phys. Rev. Lett. 84, 2726 (2000).
[24] L.-M. Duan, G. Giedke, J. I. Cirac, and P. Zoller, Phys. Rev. Lett. 84, 2722 (2000).
[25] V. Giovannetti, S. Mancini, D. Vitali, and P. Tombesi, Phys. Rev. A 67, 022320 (2003).
[26] S. L. Braunstein and P. van Loock, Rev. Mod. Phys. 77, 513 (2005).
[27] G. Adesso and F. Illuminati, J. Phys. A 40, 7821 (2007).
[28] C. Weedbrook, S. Pirandola, R. García-Patrón, N. J. Cerf, T. C. Ralph, J. H. Shapiro, and S. Lloyd, Rev. Mod. Phys. 84, 621 (2012).
[29] L. Pezzé and A. Smerzi, Phys. Rev. Lett. 102, 100401 (2009).
[30] L. Pezzè, Y. Li, W. Li, and A. Smerzi, Proc. Natl. Acad. Sci. 113, 11459 (2016).
[31] M. Gessner, L. Pezzè, and A. Smerzi, Phys. Rev. A 94, 020101(R) (2016).
[32] S. L. Braunstein and C. M. Caves, Phys. Rev. Lett. 72, 3439 (1994).
[33] M. G. Paris, Intl. J. Quant. Inf. 7, 125 (2009).
[34] V. Giovannetti, S. Lloyd, and L. Maccone, Nat. Photon. 5, 222 (2011).
[35] L. Pezzé and A. Smerzi, in Atom Interferometry, Proceedings of the International School of Physics "Enrico Fermi", Course 188, Varenna, edited by G. Tino and M. Kasevich (IOS Press, Amsterdam, Netherlands, 2014).
[36] H. Strobel, W. Muessel, D. Linnemann, T. Zibold, D. B. Hume, L. Pezzè, A. Smerzi, and M. K. Oberthaler, Science 345, 424 (2014).
[37] J. G. Bohnet, B. C. Sawyer, J. W. Britton, M. L. Wall, A. M. Rey, M. Foss-Feig, and J. J. Bollinger, Science 352, 1297 (2016).
[38] P. van Loock and A. Furusawa, Phys. Rev. A 67, 052315 (2003).
[39] A. A. Valido, F. Levi, and F. Mintert, Phys. Rev. A 90, 052321 (2014).
[40] R. Y. Teh and M. D. Reid, Phys. Rev. A 90, 062337 (2014).
[41] E. Shchukin and P. van Loock, Phys. Rev. A 92, 042328 (2015).
[42] S. Gerke, J. Sperling, W. Vogel, Y. Cai, J. Roslund, N. Treps, and C. Fabre, Phys. Rev. Lett. 114, 050501 (2015).
[43] R. F. Werner and M. M. Wolf, Phys. Rev. Lett. 86, 3658 (2001).
[44] W. Heisenberg, Z. Phys. 43, 172 (1927).
[45] H. P. Robertson, Phys. Rev. 34, 163 (1929).
[46] S. P. Walborn, B. G. Taketani, A. Salles, F. Toscano, and R. L. de Matos Filho, Phys. Rev. Lett. 103, 160505 (2009).
[47] Y. Huang, IEEE Trans. Inf. Theory 59, 6774 (2013).
[48] E. Shchukin and P. van Loock, Phys. Rev. Lett. 117, 140504 (2016).
[49] A. Ferraro, S. Olivares, and M. G. A. Paris, Gaussian States in Continuous Variable Quantum Information (Bibliopolis, Napoli, 2005).
[50] T. Aoki, N. Takei, H. Yonezawa, K. Wakui, T. Hiraoka, A. Furusawa, and P. van Loock, Phys. Rev. Lett. 91, 080404 (2003).
[51] M. Seevinck and J. Uffink, Phys. Rev. A 65, 012107 (2001).
[52] R. Medeiros de Araújo, J. Roslund, Y. Cai, G. Ferrini, C. Fabre, and N. Treps, Phys. Rev. A 89, 053828 (2014).
[53] G. Ferrini, J. Roslund, F. Arzani, Y. Cai, C. Fabre, and N. Treps, Phys. Rev. A 91, 032314 (2015).
[54] W. S. Bakr, J. I. Gillen, A. Peng, S. Folling, and M. Greiner, Nature 462, 74 (2009).
[55] J. F. Sherson, C. Weitenberg, M. Endres, M. Cheneau, I. Bloch, and S. Kuhr, Nature 467, 68 (2010).
[56] C. F. Roos, T. Monz, K. Kim, M. Riebe, H. Häffner, D. F. V. James, and R. Blatt, Phys. Rev. A 77, 040302 (2008).
[57] A. Abdelrahman, O. Khosravani, M. Gessner, H.-P. Breuer, A. Buchleitner, D. J. Gorman, R. Masuda, T. Pruttivarasin, M. Ramm, P. Schindler, and H. Häffner, Nat. Commun. 8, 15712 (2017).
[58] H. Jeong, A. Zavatta, M. Kang, S.-W. Lee, L. S. Costanzo, S. Grandi, T. C. Ralph, and M. Bellini, Nat. Photon. 8, 564 (2014).
[59] O. Morin, K. Huang, J. Liu, H. Le Jeannic, C. Fabre, and J. Laurat, Nat. Photon. 8, 570 (2014).
http://arxiv.org/abs/1702.08413v3
{ "authors": [ "Manuel Gessner", "Luca Pezzè", "Augusto Smerzi" ], "categories": [ "quant-ph" ], "primary_category": "quant-ph", "published": "20170227181708", "title": "Entanglement and squeezing in continuous-variable systems" }
[pages=1-last]miccai_paper_ver3.pdf
http://arxiv.org/abs/1702.08379v3
{ "authors": [ "Paul Jaeger", "Sebastian Bickelhaupt", "Frederik Bernd Laun", "Wolfgang Lederer", "Daniel Heidi", "Tristan Anselm Kuder", "Daniel Paech", "David Bonekamp", "Alexander Radbruch", "Stefan Delorme", "Heinz-Peter Schlemmer", "Franziska Steudle", "Klaus H. Maier-Hein" ], "categories": [ "cs.CV" ], "primary_category": "cs.CV", "published": "20170227170620", "title": "Revealing Hidden Potentials of the q-Space Signal in Breast Cancer" }
http://arxiv.org/abs/1702.07840v1
{ "authors": [ "T. Tatsumi", "R. Yoshiike", "T. -G. Lee" ], "categories": [ "hep-ph", "nucl-th" ], "primary_category": "hep-ph", "published": "20170225065902", "title": "Fluctuations In The Inhomogeneous Chiral Transition" }
Convex Relaxations of Chance Constrained AC Optimal Power Flow

Andreas Venzke, Student Member, IEEE, Lejla Halilbasic, Student Member, IEEE, Uros Markovic, Student Member, IEEE, Gabriela Hug, Senior Member, IEEE, and Spyros Chatzivasileiadis, Member, IEEE

A. Venzke, L. Halilbasic and S. Chatzivasileiadis are with the Department of Electrical Engineering, Technical University of Denmark, Kongens Lyngby, Denmark. U. Markovic and G. Hug are with the Power Systems Laboratory, ETH Zurich, Zurich, Switzerland.

Abstract: High penetration of renewable energy sources and the increasing share of stochastic loads require the explicit representation of uncertainty in tools such as the optimal power flow (OPF). Current approaches follow either a linearized approach or an iterative approximation of non-linearities. This paper proposes a semidefinite relaxation of a chance constrained AC-OPF which is able to provide guarantees for global optimality. Using a piecewise affine policy, we can ensure tractability, accurately model large power deviations, and determine suitable corrective control policies for active power, reactive power, and voltage. We state a tractable formulation for two types of uncertainty sets. Using a scenario-based approach and making no prior assumptions about the probability distribution of the forecast errors, we obtain a robust formulation for a rectangular uncertainty set. Alternatively, assuming a Gaussian distribution of the forecast errors, we propose an analytical reformulation of the chance constraints suitable for semidefinite programming. We demonstrate the performance of our approach on the IEEE 24 and 118 bus system using realistic day-ahead forecast data and obtain tight near-global optimality guarantees.

Index Terms: AC optimal power flow, convex optimization, chance constraints, semidefinite programming, uncertainty.

§ INTRODUCTION

Power system operators have to deal with higher degrees of uncertainty in operation and planning. If uncertainty is not explicitly considered, increasing shares of unpredictable renewable generation and stochastic loads, such as electric vehicles, can lead to higher costs and jeopardize system security. The scope of this work is to introduce a convex AC optimal power flow (OPF) formulation which is able to accurately model the effect of forecast errors on the power flow, can define a priori suitable corrective control policies for active power, reactive power, and voltage, and can provide near-global optimality guarantees.

Chance constraints are included in the OPF formulation to account for uncertainty in power injections, defining a maximum allowable probability of constraint violation. It is generally agreed that the non-linear nature of the AC-OPF along with the probabilistic constraints render the problem for most instances intractable <cit.>. To ensure tractability of these constraints, either a data-driven or scenario-based approach is applied, or the assumption of specific uncertainty distributions is required for an analytical reformulation of the chance constraints.
To deal with the higher complexity of chance constrained OPF, existing approaches either assume a DC-OPF <cit.>, a linearized AC-OPF <cit.>, or solve iteratively linearized instances of the non-linear AC-OPF <cit.>. A chance constrained DC-OPF results in a faster and more scalable algorithm, but it is an approximation that neglects losses, reactive power, and voltage constraints, and can exhibit substantial approximation errors <cit.>.

Refs. <cit.> and <cit.> formulate a chance constrained DC-OPF assuming a Gaussian distribution of the forecast errors. The work in <cit.> relies on a cutting-plane algorithm to solve the resulting optimization problem, whereas the work in <cit.> states a direct analytical reformulation of the same chance constraints. This framework is further extended by the work in <cit.>, which assumes uncertainty sets for both the mean and the variance of the underlying Gaussian distributions to obtain a more distributionally robust formulation. The work in <cit.> formulates a robust multi-period chance constrained DC-OPF assuming interval bounds on uncertain wind infeeds. These works <cit.> include corrective control of the generation units to restore the active power system balance as a function of the forecast errors. The work in <cit.> extends this corrective control framework to include HVDC converter active power set-points and phase shifting transformers in an N-1 security context.

Alternatively, the works in <cit.> use a linearization of the AC power flow equations based on <cit.> to achieve a tractable formulation of the chance constraints. As the operating point is not known a priori, the linearization is performed around a flat start or no-load voltage, and not the actual operating point. These works <cit.> focus on low-voltage distribution systems with a high share of photovoltaic (PV) production and minimize PV curtailment subject to chance constraints on voltage magnitudes. Scenario-based methods are applied to achieve a tractable formulation. In this framework, line flow limits and corrective control from conventional generation are not considered. Furthermore, the utilized linearization in <cit.> is designed for radial distribution grids and assumes no voltage control capability of generation units.

In Ref. <cit.>, an iterative back-mapping and linearization of the full AC power flow equations is used to solve the chance constrained AC-OPF. The recent work in <cit.> uses an iterative procedure to calculate the full Jacobian, which is the exact AC power flow linearization around the operating point. Assuming a Gaussian distribution of the forecast errors, an analytical reformulation of the chance constraints on voltage magnitude and current line flow is proposed. Although this approach can be shown to scale well, it is not convex and does not guarantee convergence.

In this work, we formulate convex relaxations of a chance constrained AC-OPF, which allow us to provide guarantees for the optimality of the solution, or otherwise to upper-bound the distance to the global optimum of the original non-linear problem. Besides that, we include chance constraints for all relevant state variables, namely active and reactive power, voltage magnitudes, and active and apparent branch flows. Two tractable formulations of the chance constraints are proposed. First, based on realistic forecast data and making no prior assumptions about the probability distributions, we formulate a rectangular uncertainty set and, subsequently, the associated chance constraints.
Second, assuming a Gaussian distribution of the forecast errors, we provide an analytical reformulation of the chance constraints.

§.§ Convex Relaxations and Relaxation Gap

In general, the AC-OPF is a non-convex, non-linear problem. As a result, identified solutions are not guaranteed to be globally optimal and the distance to the global optimum cannot be specified. Recent advancements in the area of convex optimization with polynomials have made it possible to relax the non-linear, non-convex optimal power flow problem and transform it into a convex semidefinite (SDP) or second-order cone problem <cit.>. Formulating a convex optimization problem results in tractable solution algorithms that can determine the global minimum. Within power systems, finding the global minimum has two important implications. First, from an economic point of view, it can result in substantial cost savings <cit.>. Second, from a technical point of view, the global optimum determines a lower or an upper bound of the required control effort.

The term relaxation gap denotes the difference between the minimum obtained through the convex relaxation and the global minimum of the original non-convex problem. A relaxation is tight if the relaxation gap is small. A relaxation is exact if the relaxation gap is zero, i.e., zero relaxation gap is achieved when the minimum of the convex relaxation coincides with the global minimum of the original non-convex, non-linear problem. Since the work in <cit.> has shown cases in which the semidefinite relaxation of <cit.> fails, it is necessary to investigate the relaxation gap of the obtained solution, and examine the conditions under which we can obtain zero relaxation gap. In the work <cit.>, a reactive power penalty is introduced, which allows upper-bounding the distance to the global optimum. In this work, we develop a penalized semidefinite formulation for a chance constrained AC-OPF, which likewise allows us to determine an upper bound on the distance to the global optimum. In Fig. <ref> we illustrate the previously explained concepts in the context of our work. With relaxation gap, we refer to the gap between the semidefinite relaxation and a non-linear chance constrained AC-OPF which uses the affine policy to parametrize the solution space.

§.§ Main Contributions

In this work we propose a framework for a convex chance constrained AC-OPF. The work in <cit.> makes a first step towards such a formulation, which takes into account security constraints and uncertainty. The change of the system state is described with an affine policy as an explicit function of the forecast errors. A combination of the scenario approach and robust optimization is used to ensure tractability of the chance constraints <cit.>. For the convex relaxations we build upon the SDP AC-OPF formulation proposed in <cit.>. The contributions of our work are the following:
* In this paper, we introduce a penalty term on power losses which allows us to obtain near-global optimality guarantees and we investigate the conditions under which we can obtain a zero relaxation gap.We show that this penalty term is small in practice, leading to tight near-global optimality guarantees of the obtained solution. * We formulate tractable chance constraints suitable for semidefinite programming for two types of uncertainty sets. First, using a piecewise affine policy, we state a tractable formulation of the chance constrained AC-OPF with convex relaxations that makes no prior assumptions on the type of probability distribution. Using existing data or scenarios, we determine a rectangular uncertainty set; as the set and the chance constraints are affine or convex, we can account for the whole set by enforcing the chance constraints only at its vertices <cit.>. Second, assuming Gaussian distributions, we formulate tractable chance constraints for the optimal power flow equations that are suitable for semidefinite programming. In that, we also assume the correlation of different uncertain variables. To the best of our knowledge, this is the first paper that introduces a tractable reformulation of the chance constrained AC-OPF with convex relaxations for Gaussian distributions. * The proposed framework includes corrective control policies related to active and reactive power, and voltage. * Based on realistic forecast data and the IEEE 118 bus test case, we compare our approach for both uncertainty sets to the chance constrained DC-OPF formulation in <cit.>, and the iterative AC-OPF in <cit.>. Compared to the DC-OPF formulation, we find that the formulations proposed in this paper are more accurate and significantly decrease constraint violations. For the rectangular uncertainty set, the affine policy complies with all considered chance constraints and outperforms all other methods having the lowest number of constraint violations. At the same time, we obtain tight near-global optimality guarantees which ensure that the distance to the global optimum is smaller than 0.01% of the objective value. For a Gaussian distribution, both the iterative AC-OPF and our approach satisfy the constraint violation limit, with our approach achieving slightly lower costs due to the corrective control capabilities. As the realistic forecast data we used do not follow a Gaussian distribution, we also observed that both approaches may exceed the constraint violation limit at certain timesteps for that dataset. The remainder of this work is structured as follows: In Section <ref> the convex relaxation of the chance constrained AC-OPF problem is formulated. Section <ref> introduces the piecewise affine policy, defines corrective control policies and states the tractable OPF formulation for both uncertainty sets. Section <ref> states an alternative approach using a linearization based on power transfer distribution factors (PTDFs). Section <ref> investigates the relaxation gap for a IEEE 24 bus system and presents numerical results for a IEEE 118 bus system using realistic forecast data. Section <ref> concludes the paper. The nomenclature is provided in Table <ref>. An underline and overline denote, respectively, the upper and lower bound of a variable.§ OPTIMAL POWER FLOW FORMULATION§.§ Convex Relaxation of AC Optimal Power FlowFor completeness, we outline the semidefinite relaxation of the AC-OPF problem as formulated in <cit.>. A power grid consists of 𝒩 buses and ℒ lines. 
The set of generator buses is denoted with 𝒢. The following auxiliary variables are introduced for each bus k ∈𝒩 and line (l,m) ∈ℒ:

Y_k := e_k e_k^T Y
Y_lm := (y̅_lm + y_lm) e_l e_l^T - (y_lm) e_l e_m^T
Y_k := 1/2 [ Re{Y_k + Y_k^T}  Im{Y_k^T - Y_k}; Im{Y_k - Y_k^T}  Re{Y_k + Y_k^T} ]
Y_lm := 1/2 [ Re{Y_lm + Y_lm^T}  Im{Y_lm^T - Y_lm}; Im{Y_lm - Y_lm^T}  Re{Y_lm + Y_lm^T} ]
Y̅_k := -1/2 [ Im{Y_k + Y_k^T}  Re{Y_k - Y_k^T}; Re{Y_k^T - Y_k}  Im{Y_k + Y_k^T} ]
M_k := [ e_k e_k^T  0; 0  e_k e_k^T ]
X := [ Re{V}  Im{V} ]^T

Matrix Y denotes the bus admittance matrix of the power grid, e_k the k-th basis vector, y̅_lm the shunt admittance and y_lm the series admittance of line (l,m) ∈ℒ, and V the vector of complex bus voltages. The non-linear AC-OPF problem can be written using (<ref>) – (<ref>) as

min_W ∑_k ∈𝒢{ c_k2 (Tr{Y_k W} + P_D_k)^2 + c_k1(Tr{Y_k W} + P_D_k) + c_k0}

subject to the following constraints for each bus k ∈𝒩 and line (l,m) ∈ℒ:

P̲_G_k - P_D_k ≤ Tr{Y_k W} ≤ P̅_G_k - P_D_k
Q̲_G_k - Q_D_k ≤ Tr{Y̅_k W} ≤ Q̅_G_k - Q_D_k
V̲_k^2 ≤ Tr{M_k W} ≤ V̅_k^2
-P̅_lm ≤ Tr{Y_lm W} ≤ P̅_lm
Tr{Y_lm W}^2 + Tr{Y̅_lm W}^2 ≤ (S̅_lm)^2
W = XX^T

The objective (<ref>) minimizes generation cost, where c_k2, c_k1 and c_k0 are quadratic, linear and constant cost coefficients associated with the power production of generator k ∈𝒢.[In case renewable curtailment costs are assumed, this could introduce negative linear costs, which may not result in a tight relaxation.] The terms P_D_k and Q_D_k denote the active and reactive power consumption at bus k. Constraints (<ref>) and (<ref>) include the nodal active and reactive power flow balances; P̲_G_k, P̅_G_k, Q̲_G_k and Q̅_G_k are generator limits for minimum and maximum active and reactive power, respectively. The bus voltages are constrained by (<ref>) with corresponding lower and upper limits V̲_k, V̅_k. The active and apparent power branch flow P_lm and S_lm on line (l,m) ∈ℒ are limited by P̅_lm (<ref>) and S̅_lm (<ref>), respectively. To obtain an optimization problem linear in W, the objective function is reformulated using Schur's complement:

min_W, α ∑_k ∈𝒢 α_k
[ c_k1 Tr{Y_k W} + a_k  √(c_k2) Tr{Y_k W} + b_k; √(c_k2) Tr{Y_k W} + b_k  -1 ] ≼ 0

where a_k := -α_k + c_k0 + c_k1 P_D_k and b_k := √(c_k2) P_D_k. In addition, the apparent branch flow constraint (<ref>) is rewritten:

[ -(S̅_lm)^2  Tr{Y_lm W}  Tr{Y̅_lm W}; Tr{Y_lm W}  -1  0; Tr{Y̅_lm W}  0  -1 ] ≼ 0

The non-convex constraint (<ref>) can be expressed by:

W ≽ 0
rank(W) = 1

The convex relaxation is introduced by dropping the rank constraint (<ref>), relaxing the non-linear, non-convex AC-OPF to a convex semidefinite program (SDP). The work in <cit.> proves that if the rank of W obtained from the SDP relaxation is 1, then W is the global optimum of the non-linear, non-convex AC-OPF and the optimal voltage vector can be computed following the procedure described in <cit.>.
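As an illustration of these definitions, the following is a minimal NumPy sketch (our own code; the function name and the random 3-bus data are illustrative assumptions, not from the paper's case studies) that builds Y_k, Y̅_k and M_k from a complex bus admittance matrix. With X = [Re{V} Im{V}]^T and W = XX^T, one can check numerically that Tr{Y_k W}, Tr{Y̅_k W} and Tr{M_k W} reproduce the net active power injection, the net reactive power injection, and the squared voltage magnitude at bus k.

```python
import numpy as np

def auxiliary_matrices(Y, k):
    """Real-valued matrices Y_k (active power), Ybar_k (reactive power) and
    M_k (squared voltage magnitude) for bus k, from the complex bus
    admittance matrix Y."""
    n = Y.shape[0]
    e = np.zeros((n, 1)); e[k] = 1.0
    Yk = e @ e.T @ Y                       # complex building block e_k e_k^T Y
    Aplus, Aminus = Yk + Yk.T, Yk - Yk.T
    Yk_r = 0.5 * np.block([[Aplus.real, -Aminus.imag],
                           [Aminus.imag, Aplus.real]])
    Ybar_r = -0.5 * np.block([[Aplus.imag, Aminus.real],
                              [-Aminus.real, Aplus.imag]])
    Mk = np.kron(np.eye(2), e @ e.T)
    return Yk_r, Ybar_r, Mk

# Numerical check on a random 3-bus example:
n = 3
rng = np.random.default_rng(1)
Yb = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
V = rng.normal(size=n) + 1j * rng.normal(size=n)
X = np.concatenate([V.real, V.imag])
W = np.outer(X, X)
k = 1
Yk_r, Ybar_r, Mk = auxiliary_matrices(Yb, k)
S_k = V[k] * np.conj(Yb @ V)[k]            # complex power injection at bus k
print(np.trace(Yk_r @ W) - S_k.real)       # ~0
print(np.trace(Ybar_r @ W) - S_k.imag)     # ~0
print(np.trace(Mk @ W) - abs(V[k]) ** 2)   # ~0
```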
§.§ Inclusion of Chance Constraints

Renewable energy sources and stochastic loads introduce uncertainty in power system operation. To account for uncertainty in bus power injections, we extend the presented OPF formulation with chance constraints. A number of n_W wind farms are introduced in the power grid at buses k ∈𝒲 and modeled as

P_W_k = P_W_k^f + ζ_k

where P_W are the actual wind infeeds, P_W^f are the forecasted values and ζ are the uncertain forecast errors. To simplify notation, the resulting upper and lower bounds on the net active and reactive power injections are written in compact form as:

P̲_k := P̲_G_k - P_D_k + P^f_W_k + ζ_k,  P̅_k := P̅_G_k - P_D_k + P^f_W_k + ζ_k
Q̲_k := Q̲_G_k - Q_D_k,  Q̅_k := Q̅_G_k - Q_D_k

The convex chance constrained AC-OPF problem includes chance constraints for each bus k ∈𝒩 and line (l,m) ∈ℒ:

min_W, α ∑_k∈𝒢 α_k
s.t. (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), (<ref>) for W = W_0
ℙ{ P̲_k ≤ Tr{Y_k W(ζ)} ≤ P̅_k,
Q̲_k ≤ Tr{Y̅_k W(ζ)} ≤ Q̅_k,
V̲_k^2 ≤ Tr{M_k W(ζ)} ≤ V̅_k^2,
-P̅_lm ≤ Tr{Y_lm W(ζ)} ≤ P̅_lm,
[ -(S̅_lm)^2  Tr{Y_lm W(ζ)}  Tr{Y̅_lm W(ζ)}; Tr{Y_lm W(ζ)}  -1  0; Tr{Y̅_lm W(ζ)}  0  -1 ] ≼ 0,
W(ζ) ≽ 0 } ≥ 1-ϵ

The parameter ϵ∈ (0,1) defines the upper bound on the violation probability of the chance constraints (<ref>) – (<ref>). The function W(ζ) denotes the system state as a function of the forecast errors. The chance constrained AC-OPF problem (<ref>) – (<ref>) is an infinite-dimensional problem, as it optimizes over W(ζ), which is a function of the continuous uncertain variable ζ <cit.>. This renders the problem intractable and makes it necessary to identify a suitable approximation for W(ζ) <cit.>. In the following, an approximation that expresses the dependence of W(ζ) on the forecast errors explicitly is presented.

§ PIECEWISE AFFINE POLICY

We present a formulation of the chance constraints using a piecewise affine policy, which approximates the system change as a linear function of the forecast errors. This allows us to include corrective control policies for active and reactive power, and voltages. We propose a tractable formulation for two types of uncertainty sets. First, using an approach based on randomized and robust optimization, and making no prior assumption on the underlying probability distributions, we determine a rectangular uncertainty set. For that, it is sufficient to enforce the chance constraints at its vertices. Second, assuming a Gaussian distribution of the forecast errors, we can provide an analytical reformulation of the linear chance constraints and a suitable approximation of the semidefinite chance constraints.

§.§ Formulation of Chance Constraints

The main idea is to describe the matrix W(ζ) as the sum of the forecasted system operating state W_0 and the change of the system state B_i due to each forecast error. Similar to <cit.>, the matrix W(ζ) is approximated using the affine policy

W(ζ) = W_0 + ∑_i=1^n_w ζ_i B_i

where W_0 and B_i are matrices modeled as decision variables. Eq. (<ref>) provides an affine parametrization of the solution space for the products of the real and imaginary parts of the bus voltages described by W(ζ). The main advantages of the affine policy are that it resembles affine corrective control policies and naturally allows including these as well. Furthermore, as the system change depends linearly on the forecast errors, an analytical reformulation can be applied if a Gaussian distribution is assumed, as we will show in Section <ref>. Inserting (<ref>) in (<ref>) – (<ref>) yields:

ℙ{ P̲_k ≤ Tr{Y_k W_0} + ∑_i^n_w ζ_i Tr{Y_k B_i} ≤ P̅_k
Q̲_k ≤ Tr{Y̅_k W_0} + ∑_i^n_w ζ_i Tr{Y̅_k B_i} ≤ Q̅_k
V̲^2_k ≤ Tr{M_k W_0} + ∑_i^n_w ζ_i Tr{M_k B_i} ≤ V̅^2_k
-P̅_lm ≤ Tr{Y_lm W_0} + ∑_i^n_w ζ_i Tr{Y_lm B_i} ≤ P̅_lm
[ -(S̅_lm)^2  Ξ_lm^P  Ξ_lm^Q; Ξ_lm^P  -1  0; Ξ_lm^Q  0  -1 ] ≼ 0
W_0 + ∑_i^n_w ζ_i B_i ≽ 0 } ≥ 1-ϵ

The terms Ξ_lm^P := Tr{Y_lm W_0} + ∑_i=1^n_W ζ_i Tr{Y_lm B_i} and Ξ_lm^Q := Tr{Y̅_lm W_0} + ∑_i=1^n_W ζ_i Tr{Y̅_lm B_i} denote the active and reactive power flow on transmission line (l,m) ∈ℒ as a function of the forecast errors.
Note that the chance constraints (<ref>) – (<ref>) are convex and can be classified into two groups: the constraints (<ref>) – (<ref>) are linear scalar chance constraints and the constraints (<ref>) – (<ref>) are semidefinite chance constraints.

§.§ Corrective Control Policies

The affine policy allows the inclusion of corrective control policies related to active power, reactive power, and voltage in the AC-OPF formulation. In this work, the implemented policies are generator active power control, generator voltage control, and wind farm reactive power control.

Throughout the transmission system operation, generation has to match demand and system losses. If an imbalance occurs, automatic generation control (AGC) restores the system balance <cit.>. Hence, designated generators in the power grid will respond to changes in wind power by adjusting their output as part of secondary frequency control. The generator participation factors are defined in the vector d_G ∈ℝ^n_b, where n_b denotes the number of buses. The sum of the changes in the generator active power set-points should compensate the deviation in wind generation, i.e., ∑_k ∈𝒢 d_G_k = 1. The wind vector d^i_W ∈ℝ^n_b for each wind infeed i in [1,n_W] has a -1 entry corresponding to the bus where the i-th wind farm is located. The other entries are zero. The line losses of the AC power grid vary non-linearly with changes in wind infeeds. To compensate for this change in system losses, we add a slack variable γ_i to the generator set-points. This results in the following constraints on each matrix B_i, bus k ∈𝒩 and wind infeed i in [1,n_W]:

Tr{Y_k B_i} = d_G_k (1 + γ_i) + d^i_W_k

As a result of (<ref>), it is ensured that each generator compensates the non-linear change in system losses according to its participation factor. To constrain the magnitude of the slack variable, a penalty term is added to the objective function (<ref>), where the term μ ≥ 0 is a penalty weight:

min_W, α, γ ∑_k ∈𝒢 α_k + μ ∑_i^n_w γ_i

This penalty guides the optimization to a physically meaningful solution, i.e., it allows us to obtain rank-1 solution matrices. The increase in losses due to deviations in wind infeeds is minimized. With this penalized semidefinite AC-OPF formulation, near-global optimality guarantees can be derived specifying the maximum distance to the global optimum <cit.>. The numerical results show that while this penalty is necessary to obtain zero relaxation gap, in practice the deviation from the global optimum is very small. This is investigated in detail in Section <ref>.

In power systems, automatic voltage regulators (AVRs) are installed as part of the control unit of generators. They keep the voltages at the generator terminals at a value fixed by the operator or a higher-level controller <cit.>. The voltage set-point at each generator k ∈𝒢 is changed as a function of the forecast errors <cit.> and can be retrieved using:

V_k(ζ)^2 = Tr{M_k W_0} + ∑_i=1^n_w ζ_i Tr{M_k B_i}

According to recent revisions in grid codes <cit.>, renewable generators such as wind farms have to be able to provide or absorb reactive power up to a certain extent. This is often specified in terms of a power factor cosϕ := √(P^2/(P^2 + Q^2)). In this paper, we include the reactive power capabilities of the wind farms in the optimization. Note that these vary depending on the magnitude of the actual wind infeed. For each k ∈𝒲 the constraints (<ref>) and (<ref>) are replaced by:

Q̅_k := Q̅_G_k - Q_D_k + τ(P^f_W_k + ζ_k)
Q̲_k := Q̲_G_k - Q_D_k - τ(P^f_W_k + ζ_k)

where τ := √((1 - cos^2ϕ)/cos^2ϕ).
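As a small worked example of this capability constraint (a sketch with our own illustrative numbers, not from the paper's case study), the symmetric reactive power band implied by a power factor requirement can be computed directly:

```python
import numpy as np

def wind_q_limits(p_wind, cos_phi=0.95):
    """Symmetric reactive power capability of a wind farm operating at
    active power p_wind (MW) for a minimum power factor cos_phi."""
    tau = np.sqrt((1.0 - cos_phi ** 2) / cos_phi ** 2)
    return -tau * p_wind, tau * p_wind

print(wind_q_limits(100.0))  # ~(-32.9, 32.9) MVAr at cos(phi) = 0.95
```

Note that the band scales with the realized infeed P^f_W_k + ζ_k, which is exactly why the bounds above depend on the forecast error.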
Using this procedure, active and reactive power set-points of FACTS devices and HVDC converters can also be included in the optimization.

§.§ Piecewise Affine Policy

In Fig. <ref> the affine policy for a wind infeed P_W_i is depicted. By choosing an affine policy in the form of (<ref>), the maximum and minimum bounds of the uncertainty set are linearly connected using the matrix B_i, and the OPF solution at the bounds can be recovered. As the OPF is a non-linear problem, the true system variation will likely not coincide with the linearization. Hence, the affine policy of <cit.> is not exact at the operating point W_0, but returns only an estimate W_0^', i.e., a non-physical higher-rank solution. To obtain an exact solution for W_0, i.e., a rank-2 solution, we introduce a modification to the conventional affine policy by separating the linearization between the maximum and minimum value into an upper part B_i^u and a lower part B_i^l, thereby introducing a piecewise affine policy. Thus, we linearize between the operating point and the maximum and minimum value of the uncertainty set, respectively. We extend the work of <cit.> by ensuring that the obtained solution is exact at the operating point. An additional benefit of our approach is that we get a closer approximation of the true system behavior, while the obtained control policies are piecewise linear.

§.§ Tractable Formulation for Rectangular Uncertainty Set

In this section, we provide a tractable formulation of the chance constraints for a rectangular uncertainty set. The proposed procedure is a combination of robust and randomized optimization from <cit.>, which is applied to chance constrained AC-OPF in <cit.>. A scenario-based method, which does not make any assumption on the underlying distribution of the forecast errors, is used to compute the bounds of the uncertainty set. Two parameters need to be specified: ϵ∈ (0,1) is the allowable violation probability of the chance constraints and β∈ (0,1) a confidence parameter. Then, the minimum-volume hyper-rectangular set is computed, which with probability 1-β contains 1-ϵ of the probability mass. According to <cit.>, it is necessary to include at least the following number of scenarios N_s to specify the uncertainty set:

N_s ≥ (1/ϵ) (e/(e-1)) (ln(1/β) + 2n_W - 1)

The term e is Euler's number. The minimum and maximum bounds on the forecast errors ζ_i ∈ [ζ̲_i, ζ̅_i] are retrieved by a simple sorting operation among the N_s scenarios, and the vertices, i.e., the corner points, of the rectangular uncertainty set can be defined.

To obtain a tractable formulation of the chance constraints, the following result from robust optimization is used: if the constraint functions are linear, monotone or convex with respect to the uncertain variables, then the system variables will only take their maximum values at the vertices of the uncertainty set <cit.>. The chance constraints (<ref>) – (<ref>) are linear and the semidefinite chance constraints (<ref>), (<ref>) are convex. Hence, it suffices to enforce the chance constraints at the vertices v ∈𝒱 of the uncertainty set. The vector ζ_v ∈ℝ^n_W collects the forecast error bounds for each vertex, i.e., the entries of this vector correspond to the deviation of each wind farm for a specific vertex v. For each vertex, a corresponding slack variable γ_v is defined. Based on our experience with the SDP solvers, we introduce the following more numerically robust formulation:

W_v := W_0 + ∑_i=1^n_W ζ_v_i B_i

The matrix W_v denotes the power flow solution at the corresponding vertex v.
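The scenario requirement and the bound extraction are straightforward to implement. The sketch below (Python, hypothetical names) reproduces the calculation of N_s and recovers the per-wind-farm forecast-error bounds by the min/max operation described above; with ϵ = 5%, β = 10⁻³ and n_W = 2 it returns N_s = 314, matching the value used in the IEEE 118 bus case study below.

```python
import numpy as np

def required_scenarios(eps, beta, n_w):
    """Minimum number of scenarios N_s for the hyper-rectangular set."""
    e = np.e
    return int(np.ceil((1.0 / eps) * (e / (e - 1.0)) * (np.log(1.0 / beta) + 2 * n_w - 1)))

def rectangular_bounds(scenarios):
    """Per-wind-farm forecast-error bounds from an (N_s, n_w) scenario array."""
    return scenarios.min(axis=0), scenarios.max(axis=0)

print(required_scenarios(0.05, 1e-3, 2))  # -> 314
```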
The active and reactive power limits for each bus k ∈𝒩 and vertex v ∈𝒱 can be written as:

Q̅^v_k := Q̅_G_k - Q_D_k + τ(P_W_k^f + ζ_v_k)

Q̲^v_k := Q̲_G_k - Q_D_k - τ(P_W_k^f + ζ_v_k)

P̅^v_k := P̅_G_k - P_D_k + P^f_W_k + ζ_v_k

P̲^v_k := P̲_G_k - P_D_k + P^f_W_k + ζ_v_k

We provide a tractable formulation of the chance constraints (<ref>) – (<ref>) for each vertex v ∈𝒱, bus k ∈𝒩 and line (l,m) ∈ℒ:

P̲^v_k ≤ Tr{Y_k W_v} ≤ P̅^v_k

Q̲^v_k ≤ Tr{Y̅_k W_v} ≤ Q̅^v_k

V̲^2_k ≤ Tr{M_k W_v} ≤ V̅^2_k

- P̅_lm ≤ Tr{Y_lm W_v} ≤ P̅_lm

[ -(S̅_lm)^2   Tr{Y_lm W_v}   Tr{Y̅_lm W_v};  Tr{Y_lm W_v}   -1   0;  Tr{Y̅_lm W_v}   0   -1 ] ≼ 0

W_v ≽ 0

Tr{Y_k (W_v - W_0)} = ∑_i=1^n_W ζ_v_i (d_G_k (1 + γ_v) + d_W_k^i)

The constraint (<ref>) links the forecasted system state to each of the vertices. To enforce the semidefinite chance constraint (<ref>) for the uncertainty set, it suffices that W_v is positive semidefinite at the vertices of the uncertainty set, i.e., that (<ref>) is fulfilled. For illustrative purposes, in Fig. <ref> a rectangular uncertainty set is depicted for two uncertain wind infeeds P_W_1 and P_W_2. The resulting optimization problem for a rectangular uncertainty set of dimension n_W minimizes the objective (<ref>) subject to constraints (<ref>) and (<ref>) – (<ref>). Note that the proposed formulation holds for an arbitrarily high-dimensional rectangular uncertainty set.

§.§ Tractable Formulation for Gaussian Uncertainty Set

In the following, it is assumed that the forecast errors ζ are random variables following a Gaussian distribution with zero mean and covariance matrix Λ. Assuming a Gaussian distribution can be helpful when there is an insufficient amount of data at hand, as it can provide a suitable approximation of the power system operation under uncertainty. At the same time, through the covariance matrix, geographical correlations between wind farms, solar PV plants, or other types of uncertainty can be captured. We give a direct tractable formulation of the chance constrained AC-OPF, as the work in <cit.> presented for the chance constrained DC-OPF.

For a defined confidence interval 1-ϵ, the uncertainty set for a Gaussian distribution of the forecast errors is an ellipsoid. First, the direction of linearization of the B matrices is rotated to correspond to the ellipsoid axes, which are described by the eigenvectors η_i of the covariance matrix. The eigenvalues λ_i describe the squared dimensions of the ellipsoid in the direction of its axes. Similar to the rectangular uncertainty set, we introduce the following auxiliary variables for each ellipsoid axis i in [1,n_W] and bus k ∈𝒲:

d̃_G := d_G ||η_i||,  d̃^i_W_k := η_i_k,  ζ̃_i := √(λ_i)

With B̃_i we denote the matrices of the affine policy rotated in the direction of the ellipsoid axes, and (<ref>) has to hold:

Tr{Y_k B̃_i} = d̃_G_k (1 + γ_i) + d̃^i_W_k

Second, we use theoretical results on chance constraints from the work in <cit.>, which presents the theory for an analytical reformulation of linear scalar chance constraints. To apply the reformulation, we approximate the joint violation probability of the chance constraints (<ref>)–(<ref>) with the violation probability of each individual chance constraint.
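The rotation to the ellipsoid axes is a standard eigendecomposition of the covariance matrix. A minimal sketch (assuming Λ and ϵ are given) computes the eigenvectors η_i, the semi-axis lengths ζ̃_i = √λ_i, and the margins κ_i = Φ⁻¹(1-ϵ)ζ̃_i that appear in the reformulation that follows:

```python
import numpy as np
from scipy.stats import norm

def ellipsoid_axes(Lam, eps):
    """Axes and margins of the Gaussian confidence ellipsoid."""
    lams, etas = np.linalg.eigh(Lam)         # columns of `etas` are the eigenvectors eta_i
    zeta_tilde = np.sqrt(lams)               # semi-axis lengths zeta_tilde_i = sqrt(lambda_i)
    kappa = norm.ppf(1 - eps) * zeta_tilde   # uncertainty margins kappa_i used below
    return etas, zeta_tilde, kappa
```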
Applying the reformulation to the chance constraints (<ref>) – (<ref>) yields for each bus k ∈𝒩 and line (l,m) ∈ℒ:

P̲_k ≤ Tr{Y_k W_0} ± √(∑_i=1^n_W κ_i^2 Tr{Y_k B̃_i}^2) ≤ P̅_k

Q̲_k ≤ Tr{Y̅_k W_0} ± √(∑_i=1^n_W κ_i^2 Tr{Y̅_k B̃_i}^2) ≤ Q̅_k

V̲^2_k ≤ Tr{M_k W_0} ± √(∑_i=1^n_W κ_i^2 Tr{M_k B̃_i}^2) ≤ V̅^2_k

- P̅_lm ≤ Tr{Y_lm W_0} ± √(∑_i=1^n_W κ_i^2 Tr{Y_lm B̃_i}^2) ≤ P̅_lm

The term κ_i := Φ^-1(1-ϵ) ζ̃_i is introduced, where Φ^-1 denotes the inverse Gaussian function. The chance constraint (<ref>) is a linear matrix inequality which ensures that the matrix W_0 + ∑_i=1^n_W ζ̃_i B̃_i is positive semidefinite inside a confidence interval 1-ϵ. An analytical reformulation of this type of constraint is not known <cit.>. As a safe approximation, it suffices to enforce that W_0 + ∑_i=1^n_W ζ̃_i B̃_i is positive semidefinite at the maximum corresponding deviations ±κ_i to ensure that (<ref>) is fulfilled. We include the following semidefinite constraints for each ellipsoid axis i ∈ [1, n_W]:

W_0 ± κ_i B̃_i ≽ 0

This results in (<ref>) holding for the outer rectangular approximation of the ellipsoid uncertainty set. The semidefinite chance constraint on the apparent branch power flow can be conservatively approximated by enforcing it for the smallest rectangular set enclosing the ellipsoid, i.e., by including the constraint (<ref>) in the optimization. The assumption of a multivariate Gaussian distribution of the forecast errors leads to an uncertainty set which in two dimensions can be described as an ellipse. For the case of two wind farms with uncertain infeeds P_W_1 and P_W_2 this configuration is depicted in Fig. <ref>. Incorporating the results on the modification of the affine policy presented in Section <ref>, we add the constraints (<ref>) – (<ref>) not for B̃_i but for both B̃_i^u and B̃_i^l and each of their combinations, splitting the uncertainty set into four quadrants (I) – (IV) as depicted in Fig. <ref>. The resulting optimization problem corresponds to minimizing the objective function (<ref>) subject to constraints (<ref>), (<ref>), (<ref>) and (<ref>) – (<ref>) for each quadrant of the ellipsoid.

§ LINEARIZATION USING PTDFS

In the following, an alternative approach is presented which is used as a benchmark for comparison with the rest of the approaches presented in this paper. To describe the system change as a function of the forecast errors, in this section we introduce a linear approximation based on DC power flow. This linear approximation uses the so-called power transfer distribution factors (PTDFs) to estimate the change in line loading due to a change in active power injections. This approach has been used in the works in <cit.> and <cit.> in the context of DC- and AC-OPF, respectively.

The PTDFs use the DC power flow representation, i.e., assuming that the voltage magnitudes of all buses are equal to 1 p.u. and neglecting the resistances of branches. Hence, line losses are neglected and the generator participation factors are defined without including the slack term γ. As we assume constant voltage magnitudes, the semidefinite (<ref>), the voltage (<ref>) and the reactive power (<ref>) chance constraints are dropped, and the focus is on approximating the chance constraints for the active power bus injection and active power branch flow, Eqs. (<ref>) and (<ref>). The admittance matrix B_DC is constructed using only the line reactances x_lm. The resulting matrix is singular. Thus, one column and the corresponding row are removed to obtain B̃_DC.
The vectors d_G and d^i_W collect the generator participation factors and wind injections, and d̃_G and d̃^i_W denote the corresponding vectors with the first entry removed. The PTDF for each line (l,m) ∈ℒ is defined as follows:

PTDF_lm = (1/x_lm) (e_l - e_m)^T B̃_DC^-1

The PTDFs provide an approximate linear relation between a change in bus power injections and the change of the active power flow over a transmission line. Assuming the maximum and minimum bounds of the forecast errors are described by a rectangular uncertainty set with vertices ζ_v from the previously described scenario-based approach, we formulate a tractable approximation of (<ref>) and (<ref>) for each bus k ∈𝒩, line (l,m) ∈ℒ and vertex v ∈𝒱:

P̲_k^v ≤ Tr{Y_k W_0} + ∑_i=1^n_W ζ_v_i (d_G_k + d^i_W_k) ≤ P̅_k^v

- P̅_lm ≤ Tr{Y_lm W_0} + ∑_i=1^n_W PTDF_lm ζ_v_i (d̃_G + d̃^i_W) ≤ P̅_lm

Assuming the forecast errors follow a Gaussian distribution with zero mean and covariance matrix Λ, we formulate a tractable approximation of (<ref>) and (<ref>) for each bus k ∈𝒩 and line (l,m) ∈ℒ:

P̲_k ≤ Tr{Y_k W_0} ± Φ^-1(1-ϵ) √(d_G_k^2 1^T Λ 1) ≤ P̅_k

- P̅_lm ≤ Tr{Y_lm W_0} ± Φ^-1(1-ϵ) √(Ψ^T Λ Ψ) ≤ P̅_lm

The term 1 ∈ℝ^n_W denotes the vector of ones. The vector Ψ ∈ℝ^n_W contains for each wind feed-in i ∈ [1,n_W] the approximated change in line loading:

Ψ_i = PTDF_lm (d̃_G + d̃^i_W)

§ SIMULATION AND RESULTS

In this section, we first describe the simulation setup. Subsequently, using the IEEE 24 bus test case, we investigate the relaxation gap of the obtained solution matrices as a function of the penalty weight. Detailed results on the IEEE 118 bus test case using realistic forecast data are provided, and our proposed approaches are compared to two alternative approaches described in the literature.

§.§ Simulation Setup

The optimization problem is implemented in Julia using the optimization toolbox JuMP <cit.> and the SDP solver MOSEK 8 <cit.>. A small resistance of 10^-4 has to be added to each transformer, which is a condition for obtaining zero relaxation gap <cit.>. To investigate whether the relaxation gap of an obtained solution matrix W is zero, the ratio ρ of the 2^nd to 3^rd eigenvalue is computed, a measure proposed by <cit.>. This value should be around 10^5 or larger for zero relaxation gap to hold, which means that the obtained solution matrix is rank-2. The respective rank-2 solution can be retrieved by following the procedure described in <cit.>. According to <cit.>, the obtained solution is then a feasible solution to the original non-linear AC-OPF problem.

The work in <cit.> proposes the use of the following measure to evaluate the degree of near-global optimality of a penalized SDP relaxation. Let f̃_1(x) be the generation cost of the convex OPF without a penalty term and f̃_2(x) the generation cost of the convex OPF with a penalty weight sufficiently high to obtain rank-2 solution matrices. Then, the near-global optimality can be assessed by computing the parameter δ_opt := f̃_1(x)/f̃_2(x) · 100%. The closer this parameter is to 100%, the closer the solution is to the global optimum. Note that this distance is an upper bound on the distance from global optimality.

§.§ Investigating the Relaxation Gap

This section investigates the relaxation gap of the obtained matrices. With relaxation gap, we refer to the gap between the SDP relaxation and a non-linear chance constrained AC-OPF which uses the affine policy to parametrize the solution space. The IEEE 24 bus system with parameters specified in <cit.> is used. The allowable violation probability is selected to be ϵ = 5%.
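The two solution-quality measures just introduced — the eigenvalue ratio ρ and δ_opt — are simple to compute once a solution matrix and the two objective values are available. A minimal sketch (hypothetical inputs) mirroring the definitions above:

```python
import numpy as np

def eigenvalue_ratio(W):
    """rho = (2nd largest eigenvalue) / (3rd largest); ~1e5 or more indicates rank-2."""
    lams = np.sort(np.linalg.eigvalsh(W))[::-1]   # eigenvalues in descending order
    return lams[1] / lams[2]

def near_global_optimality(cost_unpenalized, cost_penalized):
    """delta_opt = f1/f2 * 100%; the closer to 100%, the closer to the global optimum."""
    return cost_unpenalized / cost_penalized * 100.0
```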
In this test case, two wind farms with a forecasted infeed of 50 MW and 150 MW and a maximum power of 150 MW and 400 MW are introduced at buses 8 and 24, respectively. For illustrative purposes, the forecast error for the rectangular uncertainty set is assumed to be bounded within ± 50% of the forecasted value with 95% probability. For the Gaussian uncertainty set, a standard deviation of 25% of the forecasted value and no correlation between the two wind farms is assumed. Each generator adjusts its active power proportionally to its maximum active power to react to deviations in wind power output.

For the rectangular uncertainty set, Fig. <ref> shows the eigenvalue ratios ρ of the matrices W_0 - W_4 as a function of the penalty weight μ. A certain minimum value for the weight, μ = 175, is necessary to obtain solution matrices with rank 2, i.e., eigenvalue ratios ρ higher than 10^5, at the operating state W_0 and the four vertices of the rectangular uncertainty set W_1 - W_4. The near-global optimality at μ = 175 for this test case evaluates to a tight upper bound of 99.74%. If the penalty weight is increased beyond μ = 375, a higher-rank solution is obtained for the forecasted system state. A similar observation can be made if a Gaussian distribution is assumed for the forecast errors. Fig. <ref> shows the eigenvalue ratios ρ as a function of the penalty weight μ for the Gaussian uncertainty set. A certain minimum value for the weight, μ = 10, is necessary to obtain solution matrices with rank 2 at the operating state W_0 and the four end-points of the ellipsoid axes. The generation cost is almost flat with respect to increasing penalty weight, and the near-global optimality at μ = 10 for this test case evaluates to an upper bound larger than 99.99%. As is also observed, the necessary magnitude of the penalty weight μ to obtain rank-2 solution matrices depends on the test case and configuration.

§.§ IEEE 118 Bus Test Case

In this section, our proposed approaches using the affine policy and PTDFs are compared with two alternative approaches described in the literature <cit.>. We use the IEEE 118 bus test case with realistic forecast data for the wind farms, and Monte Carlo simulations to evaluate the constraint violations.

§.§.§ Simulation Setup

We use the IEEE 118 bus specifications from <cit.> with the following modifications: the bus voltage limits are set to 0.94 p.u. and 1.06 p.u. As the upper branch flow limits are specified in MW, the active line flow limit is considered for branch flows. The line flow limits are decreased by 30% and the load is increased by 30% to obtain a more constrained system. Two wind farms with a rated power of 300 MW and 600 MW are placed at buses 5 and 64. The rated wind power corresponds to 24.1% of the total load demand. Realistic day-ahead wind forecast scenarios from <cit.> and <cit.> are used for both wind farms. To create the scenarios, the methodology described in <cit.> is used. The forecasts are based on wind power measurements in the Western Denmark area from 15 different control zones collected by the Danish transmission system operator Energinet. We select control zone 1 to correspond to the wind farm at bus 5 and zone 7 to the wind farm at bus 64. We allow a constraint violation probability of ϵ = 5% for all considered approaches. In order to construct the rectangular uncertainty set, the confidence parameter β = 10^-3 is selected. Then, a minimum of 314 scenarios is required according to (<ref>). The forecast is computed as the mean value of the scenarios.
For the Gaussian uncertainty set, we compute the covariance matrix based on these 314 scenarios. Fig. <ref> shows the forecast data from hour 1 to hour 5 with the upper and lower bounds specified by the maximum and minimum scenario values, respectively. In Fig. <ref> the rectangular and Gaussian uncertainty sets for hour 4 are shown.

In the following, the parameters for the corrective control policies are specified. A participation factor of 0.25 is selected for the generators at buses i = {12, 26, 54, 61}, i.e., d_G_i = 0.25. Wind farms have a reactive power capability of 0.95 inductive to 0.95 capacitive according to recent Grid Codes <cit.>. The approaches using PTDFs assign a fixed power factor cosϕ to each wind farm. The affine policy includes generator voltage and wind farm reactive power corrective control, assigning an updated set-point to generators and wind farms based on the actual realization of the forecast errors.

To facilitate comparability, we use the same scenarios for all approaches to compute the respective uncertainty sets. We evaluate the constraint violations using Monte Carlo simulations with 10'000 scenarios and MATPOWER AC power flows <cit.>. We enable the enforcement of generator reactive power limits in the power flow, i.e., PV buses are converted to PQ buses once the limits are reached, as otherwise high non-physical overloading of the limits can occur <cit.>. Furthermore, we distribute the loss mismatch from the active generator set-points among the generators according to their participation factors and rerun the power flow to mimic the response of automatic generation control (AGC).

§.§.§ Numerical Comparison to Alternative Approaches

In the following, the main modeling assumptions of the respective approaches and the types of chance constraints they include are outlined. All approaches considering chance constraints include corrective control of the active generator set-points.

* Chance constrained DC-OPF <cit.> (DC-OPF): A robust formulation based on DC-OPF includes chance constraints on active generator power and active branch flow. Interval bounds on the forecast errors are assumed. Hence, we use the scenarios to compute the interval bounds. A power factor of 1 is assumed for wind farms.

* Iterative chance constrained AC-OPF <cit.> (Iterative): At each iteration the Jacobian is computed and the uncertainty margins resulting from the chance constraints are updated until convergence is reached. The forecast errors are assumed to follow a Gaussian distribution. The covariance matrix constructed from the N_s scenarios is used. Chance constraints on active and reactive generator limits, voltage magnitudes and apparent line flows are included in the formulation. A power factor of 1 is assumed for wind farms, as no reactive power corrective control is included in <cit.>.

These two approaches are compared to the following approaches based on the formulations presented in this work:

* AC-OPF with convex relaxations but without chance constraints (AC-OPF) <cit.>

* Chance constrained AC-OPF with convex relaxations, using an affine policy for a Gaussian uncertainty set (AP (Gauss)), including corrective control for wind farms, generator voltages and generator active power.

* Chance constrained AC-OPF with convex relaxations, using an affine policy for a rectangular uncertainty set (AP (Rect)), including corrective control for wind farms, generator voltages and generator active power.
* Chance constrained AC-OPF with convex relaxations, using PTDFs (PTDF (Gauss)) for a Gaussian uncertainty set.

* Chance constrained AC-OPF with convex relaxations, using PTDFs (PTDF (Rect)) for a rectangular uncertainty set.

In Table <ref> the cost of uncertainty for the different approaches and considered time steps is shown. The cost of uncertainty represents the additional cost incurred by considering the stochastic variables, and is defined as the difference between the solution of the chance constrained formulation and a baseline. In this paper, the AC-OPF with convex relaxations but without consideration of uncertainty is assumed as the baseline cost. From Table <ref>, we make the following observations. First, the DC-OPF (with chance constraints) leads to a cost reduction, as no losses are considered compared to the AC-OPF. Second, the approaches stemming from robust optimization lead to a cost increase of approximately 0.8% for time step 1, compared to an increase of approximately 0.5% for the same time step for the approaches assuming a Gaussian distribution. This shows that the Gaussian uncertainty set is less conservative, as indicated in Fig. <ref>. For the rectangular uncertainty set, the affine policy reduces the cost compared to the approach using PTDFs. Comparing the approaches for the Gaussian uncertainty set, again the affine policy results in the lowest cost of uncertainty compared to the approach using PTDFs and the iterative chance constrained AC-OPF. The reason for this is that the affine policy includes corrective control for voltages and both active and reactive power.

In Table <ref> the violation probabilities of the chance constraints on active power, voltages, and active branch flows are shown. Monte Carlo simulations using 10'000 scenarios with MATPOWER AC power flows are conducted. A minimum violation limit of 10^-3 p.u. for active generator limits and 0.1% for voltage and line flow limits is considered to exclude numerical errors. In all considered time steps, the AC-OPF without consideration of uncertainty leads to insecure instances and violates constraints on line and generator limits on active power.

First, investigating the robust approaches using the rectangular uncertainty set, the following observations can be made: the robust DC-OPF formulation in <cit.> leads to insecure instances for all time steps and violates both voltage and generator active power constraints. The AC-OPF approach using PTDFs for the chance constraints reduces the voltage violations but does not comply with the 5% confidence interval. The AC-OPF using the affine policy complies with the chance constraints for all time steps while slightly decreasing the generation cost compared to the approach using PTDFs. As the scenario-based method is conservative, there are nearly zero violations occurring for the considered 10'000 samples for the approach using the affine policy.

Second, we compare the different approaches which assume a Gaussian distribution of the forecast errors. The affine policy improves upon the approach using PTDFs and results in secure operation for time steps 1 to 3. For time steps 4 and 5 we observe a slight violation of the active power line and bus voltage limits. This is due to the fact that we do not sample from a Gaussian distribution but from a set of realistic forecast scenarios, which apparently are not Gaussian distributed.
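The violation probabilities reported in the tables are empirical frequencies over the Monte Carlo samples. A minimal sketch of this estimate, where `limit_excess` is a hypothetical helper returning the worst-case limit excess of one AC power flow solution:

```python
def empirical_violation(samples, limit_excess, tol):
    """Fraction of Monte Carlo samples violating a limit by more than `tol`.

    e.g., tol = 1e-3 p.u. for generator limits, 0.1% for voltage/line-flow limits.
    """
    violations = sum(1 for s in samples if limit_excess(s) > tol)
    return violations / len(samples)
```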
The iterative approach results in secure operation for time steps 1 to 4 and slightly violates the active generator and branch flow limits in time step 5. In order to verify whether these violations occur due to the mismatch between the actual distribution and the assumed Gaussian, we repeat the 10'000 scenario evaluations for both the affine policy and the iterative chance constrained AC-OPF from <cit.>, this time sampling from the Gaussian distribution assumed for the uncertainty set. The results are shown in Table <ref>. For all 5 time steps, both approaches comply with the 5% violation probability. Hence, the violations in Table <ref> stem from the mismatch between the Gaussian distribution and the actual probability distribution. As shown in Table <ref>, the affine policy results in a slightly lower generation cost than the iterative AC-OPF, as it includes corrective control policies.

This leads us to the following conclusions. First, if the forecast errors do follow a normal distribution, both approaches demonstrate good performance and do not exceed the violation limit. If the data are not normally distributed, as is the case for the results shown in Table <ref>, neither of the two methods can guarantee that the violation probability will be below ϵ. The differences in performance in that case are, as would be expected, data- and system-specific. However, independent of whether the underlying probability distribution is Gaussian or not, one difference that remains is that the approach proposed in this paper is more rigorous, since it provides guarantees regarding the global optimality of the obtained solution and allows the inclusion of corrective control policies related to reactive power and voltage.

Table <ref> lists the penalty weights and obtained near-global optimality guarantees for the 5 time steps. Note that it is sufficient to define for both uncertainty sets a penalty weight of μ = 100 p.u. to obtain zero relaxation gap, i.e., rank-2 solution matrices, and a near-global optimality guarantee larger than 99.99%. This means that the maximum deviation from the global optimum is smaller than 0.01% of the objective value.

Table <ref> lists the computational times of the different approaches. The optimization problems are solved on a desktop computer with an Intel Xeon CPU E5-1650 v3 @ 3.5 GHz and 32 GB RAM. For all optimization problems except the iterative approach, MOSEK V8 <cit.> is used. The iterative approach utilizes the MATPOWER AC-OPF. The DC-OPF formulation is the fastest, as the optimization problem is a linear program. The computational time increases with increasing constraint complexity. The second-order cone (SOC) constraints in the formulation for the Gaussian uncertainty set are computationally the most challenging. We observe that the iterative approach, despite the need for computing a number of iterations, converges faster than all approaches that utilize convex relaxations and an SDP solver. Current trends point to the need for more rigorous optimal power flow approaches in the future, e.g. approaches that can guarantee a global minimum. In that case, the need for further research to improve both the optimization solvers and the convex formulations of the AC-OPF problem is apparent. Possible directions to increase the computational speed of the proposed approaches are the chordal decomposition technique, outlined in <cit.>, and distributed optimization techniques, e.g. the alternating direction method of multipliers (ADMM) for sparse semidefinite problems in <cit.>.
The chordal decomposition technique can be applied to reduce the computational burden of the semidefinite constraints on W_0 and B_i (<ref>). As shown in <cit.>, a speed-up by several orders of magnitude can then be expected for large systems.

§ CONCLUSIONS

In this work, a convex formulation for a chance constrained AC-OPF is presented which is able to provide near-global optimality guarantees. The OPF formulation considers chance constraints for all relevant variables and has an explicit representation of corrective control policies. Two tractable formulations are proposed: first, a scenario-based method is applied in combination with robust optimization; second, assuming a Gaussian distribution of forecast errors, we provide an analytical reformulation of the chance constraints. Detailed case studies on the IEEE 24 and 118 bus test systems are presented. For the latter, we used realistic forecast data and Monte Carlo simulations to evaluate constraint violations. Compared to a chance constrained DC-OPF formulation, we find that the formulations proposed in this paper are more accurate and significantly decrease constraint violations. Compared with iterative non-convex AC-OPF formulations, both our piecewise affine control policy and the iterative AC-OPF do not exceed the constraint violation limit for the Gaussian uncertainty set. Most importantly, our proposed approach obtains tight near-global optimality guarantees which ensure that the distance to the global optimum is smaller than 0.01% of the objective value. In our future work, besides investigating chordal decomposition techniques, we will include security constraints in the proposed formulation by defining a matrix W^s(ζ) for each outage s of a generation unit or transmission line.

§ ACKNOWLEDGMENT

The authors would like to thank Pierre Pinson for sharing the forecast data, Line Roald for providing an updated version of the code from <cit.>, and Martin S. Andersen and Daniel K. Molzahn for fruitful discussions.

Andreas Venzke (S'16) received the M.Sc. degree in Energy Science and Technology from ETH Zurich, Zurich, Switzerland in 2017. He is currently working towards the Ph.D. degree at the Department of Electrical Engineering, Technical University of Denmark (DTU), Kongens Lyngby, Denmark. His research interests include power system operation under uncertainty and convex relaxations of optimal power flow.

Lejla Halilbasic (S'15) received the M.Sc. degree in Electrical Engineering from the Technical University of Graz, Austria in 2015. She is currently working towards the Ph.D. degree at the Department of Electrical Engineering, Technical University of Denmark (DTU), Denmark. Her research interests include optimization of power system operation and its applications to electricity markets.

Uros Markovic (S'16) received the M.Sc. degree in Electrical Engineering and Information Technology from ETH Zurich, Zurich, Switzerland in 2016. He is currently working towards the Ph.D. degree at the Power Systems Laboratory, ETH Zurich, which he joined in March 2016. His research interests include modeling, control and optimization of inverter-based power systems with low rotational inertia.

Gabriela Hug (S'05, M'08, SM'14) was born in Baden, Switzerland. She received the M.Sc. degree in electrical engineering in 2004 and the Ph.D. degree in 2008, both from the Swiss Federal Institute of Technology (ETH), Zurich, Switzerland. After the Ph.D.
degree, she worked in the Special Studies Group of Hydro One, Toronto, ON, Canada, and from 2009 to 2015 she was an Assistant Professor at Carnegie Mellon University, Pittsburgh, PA, USA. She is currently an Associate Professor in the Power Systems Laboratory, ETH Zurich. Her research is dedicated to control and optimization of electric power systems.

Spyros Chatzivasileiadis (S'04, M'14) is an Assistant Professor at the Technical University of Denmark (DTU). Before that he was a postdoctoral researcher at the Massachusetts Institute of Technology (MIT), USA, and at Lawrence Berkeley National Laboratory, USA. Spyros holds a PhD from ETH Zurich, Switzerland (2013) and a Diploma in Electrical and Computer Engineering from the National Technical University of Athens (NTUA), Greece (2007). In March 2016 he joined the Center for Electric Power and Energy at DTU. He is currently working on power system optimization and control of AC and HVDC grids, including semidefinite relaxations, distributed optimization, and data-driven stability assessment.
http://arxiv.org/abs/1702.08372v4
{ "authors": [ "Andreas Venzke", "Lejla Halilbasic", "Uros Markovic", "Gabriela Hug", "Spyros Chatzivasileiadis" ], "categories": [ "cs.SY" ], "primary_category": "cs.SY", "published": "20170227165130", "title": "Convex Relaxations of Chance Constrained AC Optimal Power Flow" }
Iterative Local Voting for Collective Decision-making in Continuous Spaces Nikhil Garg Stanford University Vijay Kamble University of Illinois at Chicago Ashish Goel Stanford University David Marn University of California, Berkeley Kamesh Munagala Duke University December 30, 2023 ===========================================================================

Many societal decision problems lie in high-dimensional continuous spaces not amenable to the voting techniques common for their discrete or single-dimensional counterparts. These problems are typically discretized before running an election or decided upon through negotiation by representatives. We propose an algorithm called Iterative Local Voting for collective decision-making in this setting. In this algorithm, voters are sequentially sampled and asked to modify a candidate solution within some local neighborhood of its current value, as defined by a ball in some chosen norm, with the size of the ball shrinking at a specified rate.

We first prove the convergence of this algorithm under appropriate choices of neighborhoods to Pareto optimal solutions with desirable fairness properties in certain natural settings: when the voters' utilities can be expressed in terms of some form of distance from their ideal solution, and when these utilities are additively decomposable across dimensions. In many of these cases, we obtain convergence to the societal welfare maximizing solution.

We then describe an experiment in which we test our algorithm for the decision of the U.S. Federal Budget on Mechanical Turk with over 2,000 workers, employing neighborhoods defined by ℒ^1, ℒ^2 and ℒ^∞ balls. We make several observations that inform future implementations of such a procedure.

Supported by NSF grant nos. CCF-1408784, CCF-1637397, CCF-1637418, and IIS-1447554, ONR grant no. N00014-15-1-2786, ARO grant no. W911NF-14-1-0526, and the NSF Graduate Research Fellowship under grant no. DGE-114747. This work benefited from many helpful discussions with Oliver Hinder.

§ INTRODUCTION

Methods and experiments to increase large-scale, direct citizen participation in policy-making have recently become commonplace as an attempt to revitalize democracy. Computational and crowdsourcing techniques involving human-algorithm interaction have been a key driver of this trend <cit.>. Some of the most important collective decisions, whether in government or in business, lie in high-dimensional, continuous spaces – e.g. budgeting, taxation brackets and rates, collectively bargained wages and benefits, urban planning, etc. Direct voting methods originally designed for categorical decisions are typically infeasible for collective decision-making in such spaces. Although there has been some theoretical progress on designing mechanisms for continuous decision-making <cit.>, in practice these problems are usually resolved using traditional approaches – they are either discretized before running an election, or are decided upon through negotiation by committee, such as in a standard representative democracy <cit.>. One of the main reasons for the current gap between theory and practice in this domain is the challenge of designing practically implementable mechanisms.
We desire procedures that are simple enough to explain and use in practice, and that result in justifiable solutions while being robust to the inevitable deviations from ideal models of user behavior and preferences. To address this challenge, a social planner must first make practically reasonable assumptions on the nature and complexity of feedback that can be elicited from people and then design simple algorithms that operate effectively under these conditions. Further, while robustness to real-world model deviations may be difficult to prove in theory, it can be checked in practice through experiments.

We first tackle the question of what type of feedback voters can give. In general, for the types of problems we wish to solve, a voter cannot fully articulate her utility function. Even if voters in a voting booth had the patience to state their exact utility for a reasonably large number of points (e.g. how much they liked each candidate solution on a scale from one to five), there is no reason to believe that they could do so in any consistent manner. On the other hand, we posit that it is relatively easy for people to choose their favorite amongst a reasonably small set of options, or articulate how they would like to locally modify a candidate solution to better match their preferences. Such an assumption is common and is a central motivation in social choice, especially implicit utilitarian voting <cit.>.

In this paper, we study and experimentally test a type of algorithm for large-scale preference aggregation that effectively leverages the possibility of asking voters such easy questions. In this algorithm, which we call Iterative Local Voting (ILV), voters are sequentially sampled and are asked to modify a candidate solution to their favorite point within some local neighborhood, until a stable solution is obtained (if at all). With a continuum of voters, no one votes more than once. The algorithm designer has flexibility in deciding how these local neighborhoods are defined – in this paper we focus on neighborhoods that are balls in the ℒ^q norm, and in particular on the cases where q = 1, 2 or ∞. (For M < ∞ dimensional vectors, the ℒ^q norm is ||x||_q ≜ (∑_m |x_m|^q)^{1/q}. q = 1, 2 and ∞ neighborhoods correspond to bounds on the sum of absolute values of the changes, the sum of the squares of the changes, and the maximum change, respectively.)

More formally, consider an M-dimensional societal decision problem in 𝒳⊂ℝ^M and a population of voters 𝒱, where each voter v ∈𝒱 has bounded utility f_v(x) ∈ℝ, ∀ x ∈𝒳. Then we consider the class of algorithms described in Algorithm <ref>. We study the algorithm class under two plausible models of how voters respond to query (<ref>), which asks for the voter's favorite point in a local region.

* Model A: One possibility is that voters exactly perform the maximization asked of them, responding with their favorite point in the given ℒ^q norm constraint set. In other words, they return a point in arg max_{x∈{s : ||s - x_t-1||_q ≤ r_t}} f_v_t(x). Note that by definition of this movement, the algorithm is myopically incentive compatible: if a voter is the last voter and no projections are used, then truthfully performing this movement is the dominant strategy. In general, the mechanism is not globally incentive compatible, nor incentive compatible with projections onto the feasible region. Simple examples of manipulations in both instances exist.

* Model B: On the other hand, voters may not actually search within the constraint set to find their favorite point inside of it.
Rather, a voter v may have an idea about how to best improve the current point and then move in that direction to the boundary of the given constraint set. This model leads to a voter moving the current solution in the direction of the gradient of her utility function, returning the point x_t-1 + r_t g_t/||g_t||_q, for some g_t ∈∂ f_v_t(x_t-1). Note that ∂ f(x) denotes the set of subgradients of a function f at x, i.e. g ∈∂ f(x) if ∀ y, f(y) - f(x) ≥ g^T(y - x).

ILV is directly inspired by the stochastic approximation approach to solving optimization problems <cit.>, especially stochastic gradient descent (SGD) and the stochastic subgradient method (SSGM). The idea is that if (a) voter preferences are drawn from some probability distribution and (b) the response of a voter to the query (<ref>) moves the solution approximately in the direction of her utility gradient, then this procedure almost implements stochastic gradient descent for minimizing negative expected utility. The caveat is that although the procedure can potentially obtain the direction of the gradient of the voter utilities, it cannot in general obtain any information about its magnitude, since the movement norm is chosen by the procedure itself. However, we show that for certain plausible utility and voter response models, the algorithm does indeed converge to a unique point with desirable properties, including cases in which it converges to the societal optimum.

Note that with such feedback and without any additional assumptions on voter preferences (e.g. that voter utilities are normalized to the same scale), no algorithm has any hope of finding a desirable solution that depends on the cardinal values of voters' utilities, e.g., the social welfare maximizing solution (the solution that maximizes the sum of agent utilities). This is because an algorithm that uses only ordinal information about voter preferences is insensitive to any scaling or even monotonic transformations of those preferences.

§.§ Contributions

This work is a step in extending the vast literature in social choice to continuous spaces, taking into account the feedback that voters can actually give. Our main theoretical contributions are as follows:

* Convergence for ℒ^p normed utilities: We show that if the agents' cost functions can be expressed as the ℒ^p distance from their ideal solutions, and if agents correctly respond to query (<ref>), then an interesting duality emerges: for p = 1, 2 or ∞, using ℒ^q neighborhoods, where q = ∞, 2 and 1 respectively, results in the algorithm converging to the unique social welfare optimizing solution. Whether such a result holds for general (p,q), where q is the dual norm to p (i.e. 1/p + 1/q = 1), is an open question. However, we show that such a general result holds if, in response to query (<ref>), the voter instead moves the current solution in the direction of the gradient of her utility function to the neighborhood boundary.
* Convergence for other utilities: Next, we show convergence to a unique solution in two cases: (a) when the voter cost can be expressed as a weighted sum of ℒ^2 distances over sub-spaces of the solution space, under ℒ^2 neighborhoods – in which case the solution is also Pareto efficient, and (b) when the voter utility can be additively decomposed across dimensions, under ℒ^∞ neighborhoods – in which case the algorithm converges to the median of the ideal solutions of the voters on each dimension.

We then build a platform and run the first large-scale experiment in voting in multi-dimensional continuous spaces, in a budget allocation setting. We test three variants of ILV: with ℒ^1, ℒ^2 and ℒ^∞ neighborhoods. Our main findings are as follows:

* We observe that the algorithm with ℒ^∞ neighborhoods is the only alternative that satisfies the first-order concern for real-world deployability: consistent convergence to a unique stable solution. Both ℒ^1 and ℒ^2 neighborhoods result in convergence to multiple solutions.

* The consistent convergence under ℒ^∞ neighborhoods in experiments strongly suggests the decomposability of voter utilities for the budgeting problem. Motivated by this observation, we propose a general class of decomposable utility functions to model user behavior for the budget allocation setting.

* We make several qualitative observations about user behavior and preferences. For instance, voters have large indifference regions in their utilities, with potentially larger regions in dimensions they care less about. Further, we show that asking voters for their ideal budget allocations and how much they care about a given item is fraught with UI biases and should be carefully designed.

We remark that an additional attractive feature of such a constrained local update algorithm in a large population setting is that strategic behavior from the voters is less of a concern: even if a single voter is strategic, her effect on the outcome is negligible. Further, it may be difficult for a voter, or even a coalition of voters, to vote strategically; one must reason over the possible future trajectories of the algorithm over the randomness of future voters. In one coalition strategy for ℒ^2 and ℒ^∞ neighborhoods, voters trade votes on different dimensions with one another; we leave robustness to such strategies to future work.

The structure of the paper is as follows. After discussing related work in Section <ref>, we present convergence results for our algorithm under different settings in Section <ref>. In Section <ref>, we introduce the budget allocation problem and describe our experimental platform. In Section <ref>, we analyze the experiment results, and then we conclude the paper in Section <ref>.

§ RELATED WORK

Our work relates to various strands of literature. We note that a conference version of this work appeared previously <cit.>. Furthermore, the term “iterative voting” is also used in other works to denote unrelated methods <cit.>.

Stochastic Gradient Descent As discussed in the introduction, we draw motivation from the stochastic subgradient method (SSGM), and our main proof technique is mapping our algorithm to SSGM. Beginning with the original stochastic approximation algorithm by Robbins and Monro <cit.>, a rich literature surrounds SSGM; for instance, see <cit.>.
Iterative local voting A version of our algorithm, with ℒ^2 norm neighborhoods, has been proposed independently several times <cit.> and is referred to as Normalized Gradient Ascent (NGA). Instead of directly asking voters to perform query (<ref>), the movement ∇f_v(x_t-1)/||∇f_v(x_t-1)||_2 would be estimated through population surveys to try to compute the fixed point where 𝔼_v[∇f_v(x)/||∇f_v(x)||_2] = 0. (Note that we work with distributions of voters, and for strictly concave utility functions the movement for each voter is well-defined for all but a measure-0 set. Then, given a bounded density function of voters, the expectation is well-defined.) This fixed point has been called a Directional Equilibrium (DE) in the recent literature <cit.>. The movement is equivalent to the movement in this work in the case that voters respond according to Model B and with ℒ^2 neighborhoods, and we show in Section <ref> that, in such cases, the algorithm converges to a Directional Equilibrium when it converges. We further conjecture that even under voter Model A, if Algorithm <ref> converges, the fixed point is a Directional Equilibrium.

Several properties of the fixed point have been studied, starting from <cit.> to, more recently, <cit.> and <cit.>: it exists under mild assumptions, is Pareto efficient, and has important connections to the Majority Core literature in economics. Showing that an iterative algorithm akin to ours converges to such a point has been challenging; indeed, except for special cases such as quadratic utilities f_v(x) = -(x - x^v)^T Ω (x - x^v), with a society-wide Ω that encodes the relative importance and relationships between issues <cit.>, convergence is an open question.

Our algorithm differs from NGA in a few crucial directions, even in the case that the movement is equivalent: by relating our algorithm to SGD, we are able to characterize the step-size behavior necessary for convergence and show convergence even when each step is made by a single voter, rather than after an estimate of the societal normalized gradient. One can also characterize the convergence rate of the algorithm <cit.>. Furthermore, the literature has referred to the ℒ^2 norm (or “quadratic budget”) constraint as “central to their strategic properties” <cit.>. In this work, this limitation is relaxed – the same strategic property, myopic incentive compatibility, holds for the other norm constraints in their respective cases.

Finally, because we are primarily interested in designing implementable voting mechanisms, we focus on somewhat different concerns than the directional equilibria literature. However, we believe that the ideas in this work, especially the connections to the optimization literature, may prove useful for work on NGA.

To the best of our knowledge, no work studies such an algorithm with other neighborhoods and under ordinal feedback, or implements such an algorithm.

Optimization without gradients Because we are concerned with optimization without access to voters' utility functions or their gradients, this work seems to be in the same vein as the recent literature on convex optimization without gradients – such as with comparisons or with pairs of function evaluations <cit.>. However, in the social choice or human optimization setting, we can estimate each voter's utility function or gradients only up to a scaling term, and yet we would like to find some point with good societal properties.
This limitation prevents the use of strategies from such works. <cit.>, for example, present an optimal coordinate-descent based algorithm to find the optimum of a function for the case in which noisy comparisons are available on that function; in our setting, such an algorithm could be used to find the optimal value for each voter, but not the societal optimum, because each voter can independently scale her utility function. <cit.> present a distributed optimization algorithm where each node (voter) has access to its own subgradients and a few of its neighbors, but in our case each voter can arbitrarily scale her utility function and thus her subgradients. Similar problems emerge in applying results from the work of <cit.>. In our work, such scaling does not affect the point to which the algorithm converges.

Participatory Budgeting The experimental setting for this work, and a driving motivation, is Participatory Budgeting, in which voters are asked to help create a government budget. Participatory budgeting has been among the most successful programs of Crowdsourced Democracy, with deployments throughout the world allocating hundreds of millions of dollars annually, and studies have shown its civic engagement benefits <cit.>. In a typical election, community members propose projects, which are then refined and voted on by either their representatives or the entire community, through some discrete aggregation scheme. In no such real-world election, to our knowledge, can the amount of money to allocate to a project be determined in a continuous space within the voting process, except through negotiation by representatives. <cit.> propose a “Knapsack Voting” mechanism in which each voter is asked to create a valid budget under the budget constraint; the votes are then aggregated using K-approval aggregation on each dollar in the budget, allowing for fully continuous allocation in the space. This mechanism is strategy-proof under some voter utility models. In comparison, our mechanism works in more general spaces and is potentially easier for voters to use.

Implicit Utilitarian Voting With a finite number of candidates, the problem of optimizing some societal utility function (based on the cardinal values of voter utilities) given only ordinal feedback is well-studied, with the same motivation as in this work: ordinal feedback such as rankings and subset selections is relatively easy for voters to provide. The focus in such work, referred to as implicit utilitarian voting, is to minimize the distortion of the output selected by a given voting rule, over all possible utility functions consistent with the votes, i.e. to minimize the worst-case error achieved by the algorithm due to an under-determination of utility functions when only using the provided inputs <cit.>. In this work, we show convergence of our algorithm under certain implicit utility function forms. However, we do not characterize the maximum distortion of the resulting fixed point (or even the convergence to any fixed point) under arbitrary utility functions consistent with the given feedback, leaving such analysis for future work.

§ CONVERGENCE ANALYSIS

In this section, we discuss the convergence properties of ILV under various utility and behavior models. For the rest of the technical analysis, we make the following assumptions on our model.

* (C_1) The solution space 𝒳⊆ℝ^M is non-empty, bounded, closed, and convex.

* (C_2) Each voter v has a unique ideal solution x_v ∈𝒳.
* (C_3) The ideal point x_v of each voter is drawn independently from a probability distribution with a bounded and measurable density function h_𝒳.

Under this model, for a solution x ∈𝒳, the societal utility is given by 𝔼_v[f_v(x)], and the social optimal (SO) solution is any x^* ∈ arg max_{x∈𝒳} 𝔼_v[f_v(x)].

“Convergence” of ILV refers to the convergence of the sequence of random variables {x_t}_t≥1 to some x ∈𝒳 with probability 1, assuming that the algorithm is allowed to run indefinitely (this notion of convergence also implies the termination of the algorithm with probability 1). In the following subsections, we present several classes of utility functions for which the algorithm converges, summarized in Table <ref>.

§.§ Spatial Utilities

Here we consider spatial utility functions, where the utilities of the voters can be expressed in the form of some kind of spatial distance from their ideal solutions. First, we consider the following kind of utilities.

ℒ^p normed utilities. The voter utility function is ℒ^p normed if f_v(x) = -||x - x_v||_p, ∀ x ∈𝒳.

Under such utilities, for p = 1, 2 and ∞, restricting voters to a ball in the dual norm leads to convergence to the societal optimum.

Theorem 1. Suppose that conditions C_1, C_2, and C_3 are satisfied, the voter utilities are ℒ^p normed, and voters respond to query (<ref>) according to either Model A or Model B. Then, ILV with ℒ^q neighborhoods converges to the societal optimal point w.p. 1 when (p,q) = (2, 2), (1, ∞), or (∞, 1).

The proof is contained in the appendix. A sketch of the proof is as follows. For the given pairs (p, q), we show that, except in certain `bad' regions, the update rule x_t+1 = arg min_x { ||x - x_v_t||_p : ||x - x_t||_q ≤ r_t } is equivalent to the stochastic subgradient method (SSGM) update rule x_t+1 = x_t - r_t g_t, for some g_t ∈∂ ||x_t - x_v_t||_p, and that the probability of being in a `bad' region decreases fast enough as a function of r_t. We then leverage a standard SSGM convergence result to finish the proof.

One natural question is whether the result extends to general dual norms (p,q), where 1/p + 1/q = 1. Unfortunately, the update rule is not equivalent to SSGM in general, and we leave the convergence to the societal optimum for general (p,q) as an open question. Further, note that even if each voter could scale her utility function arbitrarily, the algorithm would converge to the same point.

However, the general result does hold for general dual norms (p, q) if one assumes the alternative behavior model.

Theorem 2. Suppose that conditions C_1, C_2, and C_3 are satisfied, the voter utilities are ℒ^p normed, and voters respond to query (<ref>) according to Model B. Then, ILV with ℒ^q neighborhoods converges to the societal optimal point w.p. 1 for any p > 0 and q > 0 such that 1/p + 1/q = 1.

The proof is contained in the appendix. It uses the following property of ℒ^p normed utilities: the ℒ^q norm of the gradient of these utilities at any point other than the ideal point is constant. This fact, along with the voter behavior model, allows the algorithm to implicitly capture the magnitude of the gradient of the utilities, and thus a direct mapping to SSGM is obtained. Note that the above result holds even if we assume that a voter moves to her ideal point x_v in case it falls within the neighborhood (since, as explained earlier, the probability of sampling such a voter decreases fast enough).
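The (p,q) = (2,2) case of Theorem 1 is easy to simulate: under Model A, the exact response to query (<ref>) with an ℒ^2 ball is to move the solution a distance r_t toward the voter's ideal point (or to the ideal point itself if it lies inside the ball). The sketch below (Python, with a finite voter pool standing in for the distribution in C_3, and all parameter values illustrative) implements this update with r_t = r_1/t; the iterate approaches the societal optimum, which for ℒ^2 normed utilities is the geometric median of the ideal points.

```python
import numpy as np

rng = np.random.default_rng(0)
M, T, r1 = 2, 50000, 1.0
ideal_points = rng.normal(loc=(3.0, -1.0), size=(5000, M))  # voter ideal points x_v

x = np.zeros(M)
for t in range(1, T + 1):
    x_v = ideal_points[rng.integers(len(ideal_points))]     # sample a voter
    d = x_v - x
    r_t = r1 / t
    if np.linalg.norm(d) <= r_t:
        x = x_v.copy()                       # ideal point inside the ball
    else:
        x = x + r_t * d / np.linalg.norm(d)  # Model A for p = q = 2: step toward x_v
# x now approximates the geometric median of the ideal points (~ (3, -1) here)
```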
Next, we introduce another general class of utility functions, which we call Weighted Euclidean utilities, for which one can obtain convergence to a unique solution.

Weighted Euclidean utilities. Let the solution space 𝒳 be decomposable into K different sub-spaces, so that x = (x^1, …, x^K) for each x ∈𝒳 (where ∑_k=1^K dim(x^k) = M). Suppose that the utility function of voter v is

f_v(x) = -∑_k=1^K (w^k_v/||w_v||_2) ||x^k - x^k_v||_2,

where w_v is a voter-specific weight vector; then the function is a Weighted Euclidean utility function. We further assume that w_v ∈𝒲⊂ℝ_+^K and x_v are independently drawn for each voter v from a joint probability distribution with a bounded and measurable density function, with 𝒲 nonempty, bounded, closed, and convex.

This utility function can be interpreted as follows: the decision-making problem is decomposable into K sub-problems, and each voter v has an ideal point x^k_v and a weight w^k_v for each sub-problem k, so that the voter's disutility for a solution is the weighted sum of the Euclidean distances to the ideal points in the sub-problems. Such utility functions may emerge in facility location problems, for example, where voters have preferences on the locations of multiple facilities on a map. This utility form is also the one most closely related to the existing literature on Directional Equilibria and Quadratic Voting, in which preferences are linear. To recover the weighted linear preferences case, set K = M, with each sub-space of dimension 1. In this case, the following holds:

Proposition 1. Suppose that conditions C_1, C_2, and C_3 are satisfied, the voter utilities are Weighted Euclidean, and voters correctly respond to query (<ref>) according to either Model A or Model B. Then, ILV with ℒ^2 neighborhoods converges with probability 1 to the societal optimal point.

The intuition for the result is as follows: as long as the neighborhood does not contain the ideal point of the sampled voter, the correct response to query (<ref>) under Weighted Euclidean preferences is to move the solution in the direction of the ideal point to the neighborhood boundary, which, as it turns out, is the same as the direction of the gradient. Thus with radius r_t, the effective movement is r_t ∇f_v(x_t)/||∇f_v(x_t)||_2. With (normalized) Weighted Euclidean utilities, ||∇f_v(x_t)||_2 = 1 everywhere. As before, even if the utilities were not normalized (i.e. not divided by ||w_v||_2), the algorithm would converge to the same point, as if the utility functions were normalized.

§.§ Decomposable Utilities

Next, consider the general class of decomposable utilities, motivated by the fact that ℒ^∞ neighborhoods are of special interest since they are easy for humans to understand: one can change each dimension up to a certain amount, independent of the others.

Decomposable utilities. A voter utility function is decomposable if there exist concave functions f^m_v for m ∈ {1 … M} such that f_v(x) = ∑_m=1^M f^m_v(x^m).

If the utility functions of the voters are decomposable, then we can show that our algorithm under ℒ^∞ neighborhoods converges to the vector of medians of voters' ideal points on each dimension. Suppose that h_𝒳^m is the marginal density function of the random variable x^m_v, and let x̅^m be the set of medians of x^m_v.
Proposition. Suppose that conditions 𝒜_1, 𝒜_2, and 𝒜_3 are satisfied, the voter utilities are decomposable, and voters respond to query (<ref>) according to either Model A or Model B. Then, ILV with ℒ^∞ neighborhoods converges with probability 1 to a point in the set of medians x̅. Although simply eliciting each agent's optimal solution and computing the vector of median allocations on each dimension is a viable approach in the case of decomposable utilities, deciding an optimal allocation across multiple dimensions is a more challenging cognitive task than deciding whether one wants to increase or decrease each dimension relative to the current solution (see Section <ref> for experimental evidence). In fact, in this case, the algorithm can be run separately for each dimension, so that each voter expresses her preferences on only one dimension, drastically reducing the cognitive burden of decision-making on the voter, especially in high dimensional settings like budgeting.§.§ Equivalence to Directional EquilibriumAs discussed in Section <ref>, our algorithm, with ℒ^2-norm neighborhoods, is related to an algorithm, NGA, proposed in the literature to find what are called Directional Equilibria. Prior work mostly focuses on the properties of the fixed point, with discussion of the proposed algorithm limited to simulations. We show that with the radius decreasing as 𝒪(1/t), the algorithm indeed finds directional equilibria in the following sense: if, under a few conditions, a trajectory of the algorithm converges to a point, then that point is a directional equilibrium. Theorem. Suppose that 𝒜_1, 𝒜_2, and 𝒜_3 are satisfied, and let G(x) ≜ 𝔼_v[∇f_v(x)/‖∇f_v(x)‖_2]. Suppose G(x) is uniformly continuous, ℒ^2 movement norm constraints are used, and voters move according to Model B. If a trajectory {x_t}_t=1^∞ of the algorithm converges to x^*, i.e. x_t → x^*, then x^* is a directional equilibrium, i.e. G(x^*) = 0. The proof is in the appendix. It relies heavily on the continuity assumption: if a point x is not a directional equilibrium, then the algorithm with step sizes 𝒪(1/t) will with probability 1 leave any small region surrounding x: the net drift of the voter movements is away from the region. We note that the necessary assumptions hold for all utility functions for which convergence holds using the ℒ^2 norm algorithm (e.g. Weighted Euclidean utilities). It is further possible to characterize other utility functions for which the equivalence holds: with appropriate conditions on the distribution of voters and on how f differs among voters, the conditions on G can be met.We further conjecture that even under voter Model A, if Algorithm <ref> converges, the fixed point is a Directional Equilibrium. Note that as r_t → 0, f_v(y) can be approximated by the first-order Taylor expansion around x, for y ∈{ s : ‖s - x‖_2 ≤ r_t}. Then, to maximize f_v(y) in the region, if the region does not contain x_v, voter v chooses y^* s.t. y^* - x ≈ r_t ∇f_v(x)/‖∇f_v(x)‖_2, i.e., the voter moves the solution approximately in the direction of her gradient to the neighborhood boundary. A single step of our algorithm with ℒ^2 neighborhoods is similar to Quadratic Voting <cit.> for the same reason. Independently of our work, <cit.> formalize the relationship between the Normalized Gradient Ascent mechanism and Quadratic Voting. 
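As a hedged numerical companion (our sketch, with illustrative voter distributions), one can estimate G at a candidate point by Monte Carlo and check whether its norm is near zero:

```python
# Sketch (ours): estimate G(x) = E_v[grad f_v(x) / ||grad f_v(x)||_2] for
# f_v(x) = -||x - x_v||_2, whose normalised gradient points from x toward x_v.
import numpy as np

def G_hat(x, ideals, n=100000, seed=2):
    rng = np.random.default_rng(seed)
    pick = ideals[rng.integers(len(ideals), size=n)]
    g = pick - x
    g /= np.linalg.norm(g, axis=1, keepdims=True)
    return g.mean(axis=0)

rng = np.random.default_rng(3)
ideals = rng.normal(size=(50000, 2))
print(np.linalg.norm(G_hat(np.zeros(2), ideals)))  # ~0: the centre is a directional equilibrium
print(np.linalg.norm(G_hat(np.ones(2), ideals)))   # clearly non-zero away from it
```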
§ EXPERIMENTS WITH BUDGETS We built a voting platform and ran a large scale experiment, along with several extensive pilots, on Amazon Mechanical Turk (<https://www.mturk.com>). Over 4,000 workers participated in total, counting pilots and the final experiment, with over 2,000 workers participating in the final experiment. The design challenges we faced and the voter feedback we received provide important lessons for deploying such systems in a real-world setting.First, we present a theoretical model for our setting. We consider a budget allocation problem on M items, where the items may include both expenditures and incomes. One possibility is to define 𝒳 as the space of feasible allocations, such as those below a spending limit, and to run the algorithm as defined, with projections. However, in such cases, it may be difficult to theorize about how voters behave; e.g., if voters knew their answers would be projected onto a budget-balanced set, they might respond differently.Rather, we consider an unconstrained budget allocation problem, one in which a voter's utility includes a term for the budget deficit. Let ℰ⊆{1 … M}, ℐ = {1 … M}∖ℰ be the expenditure and income items, respectively. Then the general budget utility function is f_v(x)= g_v(x) - d(∑_e∈ℰ x^e - ∑_i∈ℐ x^i), where d is an increasing function of the deficit.For example, suppose a voter's disutility were proportional to the square of the budget deficit (she especially dislikes large budget deficits); then this term adds complex dependencies between the budget items. In general, nothing is known about the convergence of Algorithm <ref> with such utilities, as the deficit term may couple the dimensions in complex ways. However, if the voter utility functions are decomposable across the dimensions and ℒ^∞ neighborhoods are used, then the results of Section <ref> can be applied. We propose the following class of decomposable utility functions for the budgeting problem, obtained by assuming that the cost for the deficit is linear, and call the class “decomposable with a linear cost for deficit," or DLCD. Let f_v(x) be DLCD if f_v(x)=∑_m=1^M f^m_v(x^m) - w_v(∑_e ∈ℰ x^e - ∑_i ∈ℐ x^i), where f^m_v is a concave function for each m and w_v ∈ℝ_+.In the budget-setting experiments discussed below, ILV consistently and robustly converges with ℒ^∞ norm neighborhoods. Further, it approximately converges to the medians of the optimal solutions (which are elicited independently), as theorized in Section <ref>. Such a convergence pattern suggests the validity of the DLCD model, though we do not formally analyze this claim.§.§ Experimental Setup We asked voters to vote on the U.S. Federal Budget across several of its major categories: National Defense; Healthcare; Transportation, Science, & Education; and Individual Income Tax. (Note that the US Federal Government cannot simply decide to set tax receipts to some value; we asked workers to assume tax rates would be increased or decreased at proportional rates in hopes of affecting receipts.) This setting was deemed the most likely to be meaningful to the largest cross-section of workers and to yield a diversity of opinion, and we consider budgets a prime application area in general. The specific categories were chosen because they make up a substantial portion of the budget and are among the most-discussed items in American politics. 
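As a concrete illustration of the DLCD form above, here is a minimal sketch (ours; the concave per-item pieces and all numbers are purely illustrative):

```python
# Sketch (ours): a DLCD utility with illustrative concave pieces
# f_v^m(x) = -a_m |x - x_v^m| and a linear cost w_v on the deficit; because the
# deficit term is linear, the whole utility stays decomposable across items.
import numpy as np

def dlcd_utility(x, ideal, item_weights, w_deficit, is_expenditure):
    per_item = -np.sum(item_weights * np.abs(x - ideal))
    deficit = np.sum(x[is_expenditure]) - np.sum(x[~is_expenditure])
    return per_item - w_deficit * deficit

x = np.array([600.0, 1100.0, 300.0, 1800.0])     # defense, health, transport/sci/ed, tax (illustrative $B)
ideal = np.array([550.0, 1200.0, 350.0, 1900.0])
is_exp = np.array([True, True, True, False])
print(dlcd_utility(x, ideal, np.ones(4), 0.2, is_exp))
```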
We make no normative claims about running a vote in this setting in reality, and Participatory Budgeting has historically been more successful at a local level.One major concern was that with no way to validate that a worker actually performed the task (since no or little movement is a valid response if the solution presented to the worker was near her ideal budget), we may not receive high-quality responses. This issue is especially important in our setting because a worker's actions influence the initial solution future workers see. We thus restricted the experiment to workers with a high approval rate and who have completed over 500 tasks on Mechanical Turk (MTurk). Further, we offered a bonus to workers for justifying their movements well, and more than 80% of workers qualified, suggesting that we also received high-quality movements. The experiment was restricted to Americans to best ensure familiarity with the setting. Turkprime (<https://www.turkprime.com>) was used to manage postings and payment. §.§ Experimental Parameters Our large scale experiment included 2,000 workers and ran over a week in real-time. Participants of any of the pilots were excluded. We tested the ℒ^1, ℒ^2, and ℒ^∞ mechanisms, along with a “full elicitation” mechanism in which workers reported their ideal values for each item, and a “weight” in [0,10] indicating how much they cared about the actual spending in that item being close to their stated value. To test repeatability of convergence, each of the constrained mechanisms had three copies, given to three separate groups of people. Each group consisted of two sets with different starting points, with each worker being asked to vote in each set in her assigned group. Each worker only participates as part of one group, and cannot vote multiple times. We used a total of three different sets of starting points across the three groups, such that each group shared one set of starting points with each of the other two groups. This setup allowed testing for repeatability across different starting points and observing each worker's behavior at two points. Workers in one group in each constrained mechanism type were also asked to do the full elicitation after submitting their movements for the constrained mechanism, and such workers were paid extra. These copies, along with the full elicitation, resulted in 10 different mechanism instances to which workers could be allocated, each completed by about 200 workers. To update the current point, we waited for 10 submissions and then updated the point to their average. This averaging explains the step-like structure in the convergence plots in the next section. The radius was decreased approximately every 60 submissions, r_t≊r_0/⌈ t/60 ⌉. The averaging and slow radius decay rate were implemented in response to observing in the pilots that the initial few voters with a large radius had a disproportionately high impact, as there were not enough subsequent voters to recover from large initial movements away from an eventual fixed point (though in theory this would not be a problem given enough voters). We note that the convergence results for stochastic subgradient methods trivially extend to cover these modifications: the average movement over a batch of submissions starting at the same point is still in expectation a subgradient, and the stepped radius decrease still meets the conditions for valid step-sizes. §.§ User Experience As workers arrived, they were randomly assigned to a mechanism instance. 
They had a roughly equal probability of being assigned to each instance, with slight deviations in case an instance was “busy” (another user was currently doing the potential 10^th submission before an update of the instance's current point) and to keep the number of workers in each instance balanced. Upon starting, workers were shown mechanism instructions. We showed the instructions on a separate page so as to be able to separately measure the time it takes to read & understand a given mechanism, and the time it takes to do it, but we repeated the instructions on the actual mechanism page as well for reference. On the mechanism page, workers were shown the current allocation for each of the two sets in their group. They could then move, through sliders, to their favorite allocation under the movement constraint. We explained the movement constraints in text and also automatically calculated for them the number of “credits” their current movements were using, and how many they had left. Next to each budget item, we displayed the percentage difference of the current value from the 2016 baseline federal budget, providing important context to workers (The 2016 budget estimate was obtained from <http://federal-budget.insidegov.com/l/119/2016-Estimate> and <http://atlas.newamerica.org/education-federal-budget>). We also provided short descriptions of what goes into each budget item as scroll-over text. The resulting budget deficit and its percent change were displayed above the sliders, assuming other budget items are held constant.For the full elicitation mechanism, workers were asked to move the sliders to their favorite points with no constraints (the sliders went from $0 to twice the 2016 value in that category), and then were asked for their “weights” on each budget item, including the deficit. Figure <ref> shows part of the interface for the ℒ^2 mechanism, not including instructions, with similar interfaces for the other constrained mechanisms. The full elicitation mechanism additionally included sliders for items' weights. On the final page, workers were asked for feedback on the experiment. A full walk-through of the experiment with screenshots and link to an online demo is available in the Appendix. We plan on posting the data, including feedback. In general, workers seemed to like the experiment, though some complained about the constraints, and others were generally confused. Some expressed excitement about being asked their views in an innovative manner and suggested that everyone could benefit from participating as, at the least, a thought exercise. The feedback and explanations provided by workers were much longer than we anticipated, and they convince us of the procedure's civic engagement benefits.§ RESULTS AND ANALYSIS We now discuss the results of our experiments.§.§ Convergence One basic test of a voting mechanism is whether it produces a consistent and unique solution, given a voting population and their behaviors. If an election process can produce multiple, distinct solutions purely by chance, opponents can assail any particular solution as a fluke and call for a re-vote. The question of whether the mechanisms consistently converge to the same point thus must be answered before analyzing properties of the equilibrium point itself. 
In this section, we show that the ℒ^2 and ℒ^1 algorithms do not appear to converge to a unique point, while the ℒ^∞ mechanism converges to a unique point across several initial points and with distinct worker populations.The solutions after each voter, for each set of starting points and across the three separate groups of people for each constrained mechanism, are shown in Figure <ref>. Each plot shows all the trajectories with the given mechanism type, along with the median of the ideal points elicited from the separate voters who only performed the full elicitation mechanism. Observe that the three mechanisms have remarkably different convergence patterns. In the ℒ^1 mechanism, not even the sets done by the same group of voters (in the same order) converged in all cases. In some cases, they converged for some budget items but then diverged again. In the ℒ^2 mechanism, sets done by the same voters starting from separate starting points appear to converge, but the three groups of voters seem to have settled at two separate equilibria in each dimension. Under the ℒ^∞ neighborhood, on the other hand, all six trajectories, performed by three groups of people, converged to the same allocation very quickly and remained together throughout the course of the experiment. Furthermore, the final points, in all dimensions except Healthcare, correspond almost exactly to the median of values elicited from the separate set of voters who did only the full elicitation mechanism. For Healthcare, the discrepancy could result from biases in full elicitation (see Section <ref>), though we make no definitive claims. These patterns shed initial insight on how the use of ℒ^2 constraints may differ from theory in prior literature and offer justification for the use of DLCD utility models and the ℒ^∞ constrained mechanism.One natural question is whether these mechanisms really have converged, or whether, if we let the experiment continue, the results would change. This question is especially salient for the ℒ^2 trajectories, where trajectories within a group of people converged to the same point, but trajectories between groups did not. Such a pattern could suggest that our results are a consequence of the radius decreasing too quickly over time, or that the groups had, by chance, different distributions of voters, which would have been corrected with more voters. However, we argue that such does not seem to be the case, and that the mechanism truly found different equilibria. We can test whether the final points for each trajectory are stable by checking the net movement in a window, normalized by each voter's radius, i.e. (1/N)∑_s=t-N^t (x_s - x_s-1)/r_s, for some N. If voters in a window are canceling each other's movements, then this value would go to 0, and the algorithm would be stable even if the radius does not decrease. The notion is thus robust to apparent convergence just due to decreasing radii.The net normalized movement in a sliding window of 30 voters, for each dimension and mechanism, is shown in Figure <ref>. It dies down for almost all mechanisms and budget items, except for a few cases, which do not change the conclusion. We conclude it likely that the mechanisms have settled into equilibria which are unlikely to change given more voters. 
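A short sketch (ours) of this stability diagnostic, assuming the trajectory and the per-voter radii have been logged:

```python
# Sketch (ours): net movement in a sliding window of N voters, normalised by
# each voter's radius; values near 0 indicate voters cancel one another out,
# so the diagnostic is robust to apparent convergence caused by shrinking radii.
import numpy as np

def window_drift(xs, rs, N=30):
    """xs: (T+1, M) array of iterates; rs: length-T array of radii."""
    steps = (xs[1:] - xs[:-1]) / rs[:, None]   # per-voter normalised movement
    kernel = np.ones(N) / N
    return np.apply_along_axis(
        lambda c: np.convolve(c, kernel, mode="valid"), 0, steps)

# usage: drift = window_drift(trajectory, radii); plot np.abs(drift) per dimension
```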
§.§ Understanding Voter Behavior A mechanism's practical impact depends on more than whether it consistently converges, however. We now turn our attention to understanding how voters behave under each mechanism and whether we can learn anything about their utility functions from that behavior. We find that voters understood the mechanisms but that their behaviors suggest large indifference regions, and that the full elicitation scheme is susceptible to biases that can skew the results.§.§.§ Voter understanding of mechanisms One important question is whether, given very little instruction on how to behave, voters understand the mechanisms and act approximately optimally under their (unknown to us) utility function. This section shows that voters' behavior follows what one would expect in one important respect: how much of one's movement budget the voter used on each dimension, given the constraint type. Regardless of the exact form of the utility function, one would expect that, in the ℒ^1 constrained mechanism, a voter would use most of her movement credits in the dimension about which she cares most. In fact, in either the Weighted Euclidean preferences case (with each `sub-space' being a single dimension) or with a small radius under ℒ^1 constraints, a voter would move only on one dimension. With ℒ^2 constraints, one would expect a voter to apportion her movement more equally, because she pays an increasing marginal cost to move more in one dimension (people were explicitly informed of this consequence in the instructions). Under the Weighted Euclidean preferences model with ℒ^2 constraints, a voter would move in each dimension in proportion to her weight in that dimension. Finally, with ℒ^∞ constraints, a voter would move, in all dimensions in which she is not indifferent, to her favorite point in the neighborhood for that dimension (most likely an endpoint), independently of other dimensions. One would thus expect a more equal distribution of movements. Figure <ref> shows the average movement (as a fraction of the voter's total movement) by each voter for the dimension she moved most, second, third, and fourth, respectively, for each constrained mechanism. We reserve discussion of the full elicitation weights for Section <ref>. The movement patterns indicate that voters understood the constraints and moved accordingly – with more equal movements across dimensions in ℒ^2 than in ℒ^1, and more equal movements still in ℒ^∞. We dig deeper into user utility functions next, but can conclude that, regardless of their exact utility functions, voters responded to the constraint sets appropriately.§.§.§ Large indifference regions Although it is difficult to extract a voter's full utility function from her movements, the separability of dimensions (except through the deficit term) under the ℒ^∞ constraint allows us to test whether voters behave according to some given utility model in a dimension, without worrying about the dependency on other dimensions. Figure <ref> shows, for the ℒ^∞ mechanism, a histogram of the movement on a dimension as a fraction of the radius (we find no difference between dimensions here). Note that a large percentage of voters moved very little on a dimension, even in cases where their ideal point in that dimension was far away (defined as being unreachable under the current radius). This result cannot be explained away by workers clicking through without performing the task: almost all workers moved on at least one dimension, and, given that a worker moved in a given dimension, clicking through would not explain smaller movements being more common than larger movements. 
That this pattern occurs in the ℒ^∞ mechanism is key – if a voter feels any marginal disutility in a dimension, she can move the allocation without paying a cost of more limited movement in other dimensions. We conclude that, though voters may share a single ideal point for a dimension when asked for it, they are in fact relatively indifferent over a potentially large region – and their actions reflect this.We further analyze this claim in Appendix Section <ref>, looking at the same distribution of movement but focusing on workers who provided a text explanation longer than (and shorter than, separately) the median explanation of 197 characters. (We assume that the voters who invested time in providing a more thorough explanation than the average worker also invested time in moving the sliders to a satisfactory point, though this assumption cannot be validated.) Though there are some differences (those who provide longer explanations also tend to use more of their movement), the general pattern remains the same; only about 40% of workers who provided a long explanation and were far away from their ideal point on a dimension used the full movement budget. This pattern suggests that voters are relatively indifferent over large regions. Furthermore, this lack of movement is correlated with a voter's weights when she was also asked to do the full elicitation mechanism. Conditioned on being far from her ideal point, when a voter ranked an item as one of her two most important items (not counting the deficit term), she moved an average of 74% of her allowed movement in that dimension; when she ranked an item as one of her two least important items, she moved an average of 61%, and the difference is significant under a two-sample t-test with p = .013. We find no significant difference in movement within the top two ranked items or within the bottom two ranked items. This connection suggests that one can potentially determine which dimensions a voter cares about by observing these indifference regions and movements, even in the ℒ^∞ constrained case. One caveat is that the differences in effects are not large, and so at the individual level inference of how much an individual cares about one dimension over another may be noisy. On the aggregate, however, such determination may prove useful. Furthermore, we note that while such indifference regions conflict with the utility models under which the ℒ^2 constrained mechanism converges in theory, they fit within the DLCD framework introduced in Section <ref>. §.§.§ Mechanism timeIn this section, we note one potential problem with schemes that explicitly elicit voters' optimal solutions – for instance, to find the component-wise median – as compared to the constrained elicitation used in ILV: it seems to be cognitively difficult for voters. In Figure <ref>, the median time per page, aggregated across each mechanism type, is shown. The “Mechanism” time includes a single user completing both sets in each of the constrained mechanism types, but does not include the time to also do the extra full elicitation task in cases where a voter was asked to do both a constrained mechanism and the full elicitation. The full elicitation bars include only voters who did only the full elicitation mechanism, and so the bars are completely independent. 
On average, it took longer to do the full elicitation mechanism than it took to do two sets of any of the constrained mechanisms, suggesting some level of cognitive difficulty in articulating one's ideal points and weights on each dimension – even though understanding what the instructions are asking was simple, as demonstrated by the shorter instruction reading time for the full elicitation mechanism. The ℒ^∞ mechanism took the least time to both understand and do, while the ℒ^2 mechanism took the longest to do, among the constrained mechanisms. This result is intuitive: it is easier to move each budget item independently when the maximum movement is bounded than it is to move the items when the sum of the changes, or the sum of the squared changes, is bounded (even when these values are calculated for the voter). In practice, with potentially tens of items on which constituents are voting, these relative time differences would grow even larger, potentially rendering full elicitation or ℒ^2 constraints unpalatable to voters.One potential caveat to this finding is that the Full Elicitation mechanism potentially provides more information than do the other mechanisms. From a polling perspective, it is true that more information is provided by full elicitation – one can see the distribution of votes, the disagreement, and the correlation across issues, among other things. However, from a voting perspective, in which the aggregation (winner) is the only thing reported, it is not clear that this extra information is useful. Further, much of the information that full elicitation provides can reasonably be extracted from the movements of voters, especially the movements of those who are given a starting point close to the eventual equilibrium.§.§.§ UI biases We now turn our attention to the question of how workers behaved under the full elicitation mechanism and highlight some potential problems that may affect results in real deployments. Figures <ref> and <ref> show the histograms of values and weights, respectively, elicited from all workers who did the full elicitation mechanism. Note that in the histogram of values, in every dimension, the largest peak is at the slider's default value (at the 2016 estimated budget), and the histograms seem to undergo a phase shift at that peak, suggesting that voters are strongly anchored at the slider's starting value. This anchoring could systematically bias the medians of the elicited values. A similar effect occurs in eliciting voter weights on each dimension. Observe that in Figure <ref> the full elicitation weights appear far more balanced than the weights implied by any of the mechanisms (for the full elicitation mechanism, the plot shows the average weight over the sum of the weights for each voter). From the histogram of full elicitation weights, however, we see that this result is a consequence of voters rarely moving a dimension's weight down from the default of 5, but rather moving others up.One potential cause of this behavior is that voters might think that putting high weights on each dimension would mean their opinions would count more, whereas in any aggregation one would either ignore the weights (calculate the unweighted median) or normalize the weights before aggregating. In future work, one potential fix could be to add a “normalize” button for the weights, which would re-normalize the weights, or to automatically normalize the weights as voters move the sliders. 
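A one-line version of that fix (our sketch): renormalise each voter's weight vector before aggregation, so that uniformly inflating one's weights has no effect on one's influence.

```python
# Sketch (ours): renormalise elicited weights so each voter's weights sum to 1.
import numpy as np

def normalise_weights(w, eps=1e-12):
    w = np.asarray(w, dtype=float)
    return w / max(w.sum(), eps)

print(normalise_weights([5, 5, 9, 10, 5]))   # e.g. default-anchored raw weights
```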
These elicitation patterns demonstrate the difficulty in eliciting utilities from voters directly; even asking voters how much they care about a particular budget item is extremely susceptible to the user interface design. Though such anchoring to the slider default undoubtedly also occurs in the ℒ^∞ constrained mechanism, it would only slow the rate of convergence, assuming the anchoring affects different voters similarly. These biases can potentially be overcome by changing the UI design, such as by providing no default slider value. Such design choices must be carefully thought through before deploying real systems, as they can have serious consequences.§ CONCLUSIONWe evaluate a natural class of iterative algorithms for collective decision-making in continuous spaces that makes practically reasonable assumptions on the nature of human feedback. We first introduce several cases in which the algorithm converges to the societal optimum point, and others in which the algorithm converges to other interesting solutions. We then experimentally test such algorithms in the first work to deploy such a scheme. Our findings are significant: even with theoretical backing, two variants fail the basic test of being able to give a consistent decision across multiple trials with the same set of voters. On the other hand, a variant that uses ℒ^∞ neighborhoods consistently leads to convergence to the same solution, which has attractive properties under a likely model for voter preferences suggested by this convergence. We also make certain observations about other properties of user preferences – most saliently, that they have large indifference regions on dimensions about which they care less.In general, this work takes a significant step within the broad research agenda of understanding the fundamental limitations on the quality of societal outcomes posed by the constraints of human feedback, and in designing innovative mechanisms that leverage this feedback optimally to obtain the best achievable outcomes.§ MECHANICAL TURK EXPERIMENT ADDITIONAL INFORMATION In this section, we provide additional information regarding our Amazon Mechanical Turk experiment, including a walk-through of the user experience. Furthermore, we have a live demo that can be accessed at: <http://gargnikhil.com/projectdetails/IterativeLocalVoting/>. This demo will remain online for the foreseeable future.Figures <ref> through <ref> show screenshots of the experiment. We now walk through the experiment: * Figure <ref> – Welcome page. Arriving from Amazon Mechanical Turk, the workers read an introduction and the consent agreement. * Figure <ref> – Instructions (shown are the ℒ^2 instructions). The workers read the instructions, which are also provided on the mechanism page. There is a 5 minute limit for this page. * Figures <ref>, <ref> – Mechanism page for ℒ^2 and Full Elicitation, respectively. For the former, workers are asked to move to their favorite point within a constraint set, for 2 different budget points. The “Current Credit Allocation” encodes the constraint set – as workers move the budget bars, it shows how much of their movement budget they have spent, and on which items. The other constrained movement mechanisms are similar. For the Full Elicitation mechanism, voters are simply asked to indicate their favorite budget point and weights. The instructions are repeated at the top of the mechanism page as well. There is a 10 minute limit for this page. * Second mechanism, 30% of workers. 
Some workers were asked to do both one of the ℒ^1, ℒ^2, or ℒ^∞ mechanisms and the Full Elicitation mechanism. For these workers, the Full Elicitation mechanism shows up after the constrained mechanism. * Figure <ref> – Feedback page. Finally, workers are asked to provide feedback, after which they are shown a code and return to the Mechanical Turk website.§ INDIFFERENCE REGIONS ADDITIONAL INFORMATION We now present some additional data for the claim in Section <ref> that voters have large indifference regions on the space. In particular, Figures <ref> and <ref> reproduce Figure <ref> but with workers who provided explanations longer (and shorter) than the median response, respectively. This split can (roughly) correspond to workers who may have answered more or less sincerely to the budgeting question. We find that the response distributions, as measured by the fraction of possible movement one used when far away from one's ideal point on a given dimension, are similar. § PROOFSIn this appendix, we include proofs for all the theorems in the paper.§.§ Known SSGM Results Theorem <cit.>. Let θ∈Θ be a random vector with distribution P. Let f̅(x) = 𝔼[f(x, θ)] = ∫_Θ f(x, θ) dP(θ), for x∈𝒳, a non-empty bounded closed convex set, and assume the expectation is well-defined and finite valued. Suppose that f(·,θ), θ∈Θ, is convex and that f̅(·) is continuous and finite valued in a neighborhood of a point x. For each θ, choose any g(x, θ) ∈∂ f(x, θ). Then there exists g̅(x) ∈∂f̅(x) s.t. g̅(x) = 𝔼_θ[g(x, θ)]. This theorem says that the expected value of the subgradient of the utility at any point x across voters is a subgradient of the societal utility at x, irrespective of how the voters choose the subgradient when there are multiple subgradients, i.e., when the utility function is not differentiable.This key result allows us to use the subgradient of a sampled voter's utility function as an unbiased estimate of the societal subgradient.Now, consider a convex function f on a non-empty bounded closed convex set 𝒳⊂ℝ^M, and use [·]_𝒳 to designate the projection operator. Starting with some x_0∈𝒳, consider the SSGM update rule x_t = [x_t-1 - r_t (g̅_t + z_t + b_t)]_𝒳, where z_t is a zero-mean random vector, b_t is a bias term, and g̅_t ∈∂ f(x_t). Let 𝔼_t[·] be the conditional expectation given ℱ_t, the σ-field generated by x_0, x_1, …, x_t. Then we have the following convergence result.Theorem <cit.>. Consider the above update rule, and suppose that: (i) f(·) has a unique minimizer x^* ∈𝒳; (ii) r_t > 0, ∑_t r_t = ∞, and ∑_t r_t^2 < ∞; (iii) there exists C_1 < ∞ s.t. ‖g‖_2 ≤ C_1 for all g ∈∂ f(x) and all x ∈𝒳; (iv) there exists C_2 < ∞ s.t. 𝔼_t[‖z_t‖_2^2] ≤ C_2 for all t; (v) there exists C_3 < ∞ s.t. ‖b_t‖_2 ≤ C_3 for all t; and (vi) ∑_t r_t ‖b_t‖_2 < ∞ w.p. 1. Then x_t → x^* w.p. 1 as t →∞. Note: <cit.> prove the result for gradients, though the same proof follows for subgradients: only the inequality (x^* - x_t)^T g_t ≤ f(x^*) - f(x_t), for the gradient g_t at iteration t, is used, and it holds for subgradients. <cit.> provide a general discussion of subgradient methods, along with similar results. <cit.>, in Theorem 46, provide a convergence proof for the stochastic subgradient method without projections and the extra noise terms. 
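For concreteness, a generic projected SSGM step of this form looks as follows (our sketch; the box projection stands in for a general closed convex 𝒳, and all constants are illustrative):

```python
# Sketch (ours): projected stochastic subgradient step x_t = [x_{t-1} - r_t g]_X,
# here with X = [lo, hi]^M, minimising E||x - x_v||_1 via the subgradient sign(x - x_v).
import numpy as np

def ssgm_step(x, g, r_t, lo, hi):
    return np.clip(x - r_t * g, lo, hi)

rng = np.random.default_rng(4)
x = np.full(3, 0.9)
for t in range(1, 5001):
    x_v = rng.beta(2.0, 5.0, size=3)
    x = ssgm_step(x, np.sign(x - x_v), 1.0 / t, 0.0, 1.0)
print(np.round(x, 3))   # approaches the Beta(2,5) coordinate-wise median (~0.26)
```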
§.§ Mapping ILV to SSGM As described in Section <ref>, suppose that h_𝒳 is the induced probability distribution on the ideal values of the voters. In the following discussion, we will refer to voters and their ideal solutions interchangeably. Next, we restate ILV without the stopping condition so that it takes the form of the stochastic subgradient method. Consider Algorithm <ref>; we want to minimize the societal cost f̅(x) = 𝔼_v[f_v(x)]. If each voter v could articulate a subgradient of her cost function at any x, i.e. g̃_v(x) ∈∂ f_v(x), then from Theorem <ref> we could immediately conclude that the algorithm converges. However, users may not be able to articulate such a subgradient. Instead, when the voters respond correctly to query (<ref>) (i.e., move to their favorite point in the given ℒ^q neighborhood), we haveg̃_v_t(x_t) = (x_t - argmin_x[ f_v_t(x) : ‖x - x_t‖_q ≤ r_t ]) / r_t.Furthermore, for all the proofs, we assume the following. (𝒜_1) The solution space 𝒳⊂ℝ^M is non-empty, bounded, closed, and convex. (𝒜_2) Each voter v has a unique ideal solution x_v∈𝒳. (𝒜_3) The ideal point x_v of each voter is drawn independently from a probability distribution with a bounded and measurable density function h_𝒳 on M dimensions: there exists C s.t. h_𝒳(x) ≤ C for all x. This assumption allows us to bound the probability of errors that occur in small regions of the space. §.§ Proof of Theorem <ref> Let the disutility, or cost, to voter v ∈𝒱 be f_v(x) = ‖x - x_v‖_p for all x∈𝒳. We use the following technical lemma: Lemma. For q ∈{1, 2, ∞}, there exists K_2 ∈ℝ^+ s.t. ‖g̃_v(x) - g_t‖_2 ≤ K_2 for all g_t ∈∂ f_v(x), for any v and x. The lemma bounds the error of the movement direction relative to the gradient direction, by noting that both the movement direction and the gradient direction have bounded norms.We also need the following lemma, which is proved separately for each case in the following sections. Lemma. Suppose that f_v(x) ≜‖x_v - x‖_p and define A_t ≜𝕀{g̃_v_t(x_t) ∉∂ f_v_t(x_t)}, where g̃_v_t(x_t) is as defined in (<ref>). Then there exists C ∈ℝ s.t. for all t, ℙ(A_t = 1 | ℱ_t) ≤ C r_t, when (p,q) = (2, 2), (1, ∞), or (∞, 1). The lemma can be interpreted as follows: A_t indicates a `bad' event, in which a voter may not be providing a true subgradient of her cost function. However, the probability of the event occurring vanishes with r_t, which, as we will see below, is the right rate for the algorithm to converge. Proof. We will show that Algorithm <ref> meets the conditions in Theorem <ref>. Let b_t ≜𝔼_t[g̃_v_t(x_t)] - g̅_t and z_t ≜g̃_v_t(x_t) - 𝔼_t[g̃_v_t(x_t)], for some g̅_t∈∂f̅(x_t). Then g̃_v_t(x_t) can be written as g̃_v_t(x_t) = g̅_t + z_t + b_t. We show that b_t and z_t meet the conditions in the theorem, and so the algorithm converges. Let A_t be the indicator function described in Lemma <ref>. Then, for some g̅_t∈∂f̅(x_t), b_t = 𝔼_t[g̃_v_t(x_t)] - g̅_t = 𝔼_t[g̃_v_t(x_t)] - 𝔼_t[g_t] (by Theorem <ref> and the i.i.d. sampling of v) = ℙ(A_t = 1 | ℱ_t)(𝔼_t[g̃_v_t(x_t) | A_t = 1] - 𝔼_t[g_t | A_t = 1]) + ℙ(A_t = 0 | ℱ_t)(𝔼_t[g̃_v_t(x_t) | A_t = 0] - 𝔼_t[g_t | A_t = 0]) = ℙ(A_t = 1 | ℱ_t)(𝔼_t[g̃_v_t(x_t) | A_t = 1] - 𝔼_t[g_t | A_t = 1]), so that ‖b_t‖_2 ≤ C r_t ‖𝔼_t[g̃_v_t(x_t) | A_t = 1] - 𝔼_t[g_t | A_t = 1]‖_2 by Lemma <ref>. Combining this with Lemma <ref> and the fact that r_t = r_0/t, we have ∑_t r_t ‖b_t‖_2 < ∞, and there exists C_1 < ∞ s.t. ‖b_t‖_2 ≤ C_1 for all t. Finally, note that z_t ≜g̃_v_t(x_t) - 𝔼_t[g̃_v_t(x_t)] is bounded for each t because g̃_v_t(x_t) is bounded as defined. Thus, all the conditions in Theorem <ref> are met for both b_t and z_t, and the algorithm converges.§.§ Proof of Theorem <ref>Instead of moving to their favorite point on the ball, voters now move in the direction of the gradient of their utility function to the boundary of the given neighborhood. In this case, we have g̃_v_t(x_t) = g_v_t/‖g_v_t‖_q for g_v_t∈∂ f_v_t(x_t). The key to the proof is the following observation: the q norm of the gradient of the p norm, except at the ideal points on each dimension, is constant. 
This observation is formalized in the following lemma: Lemma. For all (p, q) s.t. p>0, q>0, and 1/p + 1/q = 1, ‖∇‖x - x_v‖_p‖_q = 1 for all x s.t. x^m ≠ x^m_v for every m. Proof (of Theorem <ref>). Since the probability of picking a voter v such that x_t^m = x^m_v for some dimension m is 0, we have g̃_v_t(x_t) = g_v_t for g_v_t = ∇ f_v_t(x_t). Thus we obtain the gradient exactly, and hence Theorem <ref> applies with b_t = 0 for all t. §.§ Proofs of the PropositionsWe now turn our attention to the case of Weighted Euclidean utilities and show that Algorithm <ref> converges to the societal optimum. The analogue of Lemma <ref> for this case is (proved in the following subsection): Lemma. Suppose that f_v(x) ≜∑_k=1^K (w_v^k/‖w_v‖_2)‖x^k - x^k_v‖_2, and define A_t ≜𝕀{g̃_v_t(x_t) ∉∂ f_v_t(x_t)}, where g̃_v_t(x_t) is as defined in (<ref>) for q=2. Then there exists C ∈ℝ s.t. for all t, ℙ(A_t = 1 | ℱ_t) ≤ C r_t. Proof. The proof is then similar to that of Theorem <ref>, and the algorithm converges to x^* = argmin_x 𝔼_v[∑_k=1^K (w_v^k/‖w_v‖_2)‖x^k - x^k_v‖_2].Now, we sketch the proof for fully decomposable utility functions and ℒ^∞ neighborhoods. Proof. Consider each dimension separately. If x^m_t-1 < x^m_v, then the sampled voter increases x^m_t-1 by r_t as long as x^m_t-1 + r_t ≤ x^m_v. On the other hand, if x^m_t-1 > x^m_v, then the sampled voter decreases x^m_t-1 by r_t as long as x^m_t-1 - r_t ≥ x^m_v. Thus, except when a voter's ideal solution is too close to the current point, the algorithm can be seen as performing SSGM on each dimension separately, as if the utility function were ℒ^1 (the absolute value) on each dimension. Thus a proof akin to that of Theorem <ref> with p=1, q=∞ holds.§.§ Proof of Theorem <ref>We now show that the algorithm finds directional equilibria in the following sense: if, under a few conditions, a trajectory of the algorithm converges to a point, then that point is a directional equilibrium. Proof. Suppose x^* is not a directional equilibrium, i.e., there exists ε > 0 s.t. ‖G(x^*)‖_2 = ε. Consider a δ-ball around x^*, B_δ ≜{x : ‖x^* - x‖_2 < δ}, with δ, ε_2 > 0 chosen such that there exists m ∈{1 … M} s.t. for all x ∈ B_δ, sign(G_m(x)) = sign(G_m(x^*)) and |G_m(x)| > ε_2, i.e., the gradient in the m-th dimension does not change sign and has magnitude bounded below. Such δ, ε_2 exist by the continuity assumption (if x^* is not a directional equilibrium, at least one dimension of G(x^*) is non-zero, and thus one can construct a ball around x^* such that G(x), x∈ B_δ, satisfies the conditions in that dimension). Now, one can show that the probability of leaving neighborhoods around x^* goes to 1: for all t>0 and 0<δ_2<δ, w.p. 1 there exists τ≥ t s.t. ‖x_τ - x^*‖_2 > δ_2. Suppose x_t ∈ B_δ_2 (otherwise τ = t satisfies the claim), and let r_k = 1/k. Write x_τ = x_t + ∑_k=t^τΔx_k, where Δx_k ≜ -r_k ∇ f^v_k(x_k)/‖∇ f^v_k(x_k)‖_2. Then ‖x_τ - x^*‖_2 = ‖x_t - x^* + ∑_k=t^τΔx_k‖_2 ≥‖∑_k=t^τΔx_k‖_2 - ‖x_t - x^*‖_2 ≥‖∑_k=t^τΔx_k‖_2 - δ_2, and ‖∑_k=t^τΔx_k‖_2 ≥ |∑_k=t^τΔx_k,m| (by the definition of ‖·‖_2) = |𝔼_v[∑_k=t^τΔx_k,m] + ∑_k=t^τΔx_k,m - 𝔼_v[∑_k=t^τΔx_k,m]| ≥ |𝔼_v[∑_k=t^τΔx_k,m]| - |∑_k=t^τΔx_k,m - 𝔼_v[∑_k=t^τΔx_k,m]|. By Hoeffding's inequality, ℙ(∑_k=t^τΔx_k,m - 𝔼_v[∑_k=t^τΔx_k,m] ≥ε_3) ≤ exp[-2(τ-t)^2 ε_3^2 / (2∑_k=t^τ 1/k)] → 0 as τ→∞. Furthermore, by the continuity assumption, |𝔼_v[∑_k=t^τΔx_k,m]| ≜ |∑_k=t^τ r_k G_m(x_k)| →∞ as τ→∞ while x_k ∈ B_δ_2. Thus ℙ(‖x_τ - x^*‖_2 > δ_2) → 1 as τ→∞, and hence, if an infinite trajectory converges to x^*, then w.p. 1 x^* is a directional equilibrium.§.§ Proofs of Lemmas Lemma <ref>. For q ∈{1, 2, ∞}, there exists K_2 ∈ℝ^+, K_2 < ∞, s.t. ‖g̃_v_t(x_t) - g_t‖_2 ≤ K_2 for all g_t ∈∂ f_v_t(x_t) and all v_t, x_t. 
Proof. ‖g̃_v_t(x_t) - g_t‖_2 ≤‖g̃_v_t(x_t)‖_2 + ‖g_t‖_2 = ‖x_t - argmin_x[ ‖x - x_v_t‖_p : ‖x - x_t‖_q ≤ r_t ]‖_2 / r_t + ‖g_t‖_2 ≤ K_1 + ‖g_t‖_2 ≤ K_2 for some K_1, K_2 ∈ℝ^+. The second inequality follows from the fact that for finite M-dimensional vector spaces, ‖y‖_2 ≤‖y‖_1 and ‖y‖_2 ≤√(M)‖y‖_∞. The third follows from the norms of the subgradients of the p norm being bounded. Lemma <ref>, case (p = 2, q = 2). Recall that A_t ≜𝕀{g̃_v_t(x_t) ∉∂ f_v_t(x_t)}. Let B_t = 𝕀{‖x_v_t - x_t‖_2 ≤ r_t}. We show that (A) B_t = 0 implies A_t = 0, and (B) there exists C ∈ℝ s.t. ℙ(B_t = 1 | ℱ_t) ≤ C r_t. Together, these give a C ∈ℝ s.t. ℙ(A_t = 1 | ℱ_t) ≤ C r_t. Part A (B_t = 0 implies g̃_v_t(x_t) = g_t for some g_t ∈∂ f_v_t(x_t)): First, note that ∂ f_v_t(x) = ∂‖x - x_v_t‖_2 = {(x - x_v_t)/‖x_v_t - x‖_2} if x ≠ x_v_t, and {g : ‖g‖_2 ≤ 1} if x = x_v_t. If ‖x_v_t - x_t‖_2 > r_t, then argmin_x[ ‖x - x_v_t‖_2 : ‖x - x_t‖_2 ≤ r_t ] = x_t + r_t (x_v_t - x_t)/‖x_v_t - x_t‖_2. Then g̃_v_t(x_t) = (x_t - argmin_x[ ‖x - x_v_t‖_2 : ‖x - x_t‖_2 ≤ r_t ])/r_t (by definition) = (x_t - (x_t + r_t (x_v_t - x_t)/‖x_v_t - x_t‖_2))/r_t = (x_t - x_v_t)/‖x_v_t - x_t‖_2 ∈∂ f_v_t(x_t). Part B (there exists C ∈ℝ s.t. ℙ(B_t = 1 | ℱ_t) ≤ C r_t): ℙ(B_t = 1 | ℱ_t) = ℙ(‖x_v_t - x_t‖_2 ≤ r_t | ℱ_t) = ∫_x ∈{x : ‖x - x_t‖_2 ≤ r_t} h_𝒳|ℱ_t(x) dx = ∫_x ∈{x : ‖x - x_t‖_2 ≤ r_t} h_𝒳(x) dx (v is drawn independently of the history) ≤ C r_t^2 (bounded h_𝒳) ≤ C r_t (since r_t ≤ 1 eventually) for some C < ∞. Note that C depends on the volume of a sphere in M dimensions. Lemma <ref>, case (p = 1, q = ∞). Let h_v_t(x_t) ≜ [ sign(x_v_t^1 - x_t^1), …, sign(x_v_t^m - x_t^m), …, sign(x_v_t^M - x_t^M) ]^T, and let B_t = 𝕀{∃ m, |x_v_t^m - x_t^m| ≤ r_t}. We show the same two parts as in the above proof. Part A (B_t = 0 implies g̃_v_t(x_t) = g_t for some g_t ∈∂ f_v_t(x_t)): First, note that the subgradients are ∂ f_v_t(x) = ∂‖x - x_v_t‖_1 = {g : ‖g‖_∞≤ 1, g^T(x - x_v_t) = ‖x - x_v_t‖_1}. If |x_v_t^m - x_t^m| > r_t for all m, then argmin_x[ ‖x - x_v_t‖_1 : ‖x - x_t‖_∞≤ r_t ] = x_t + r_t h_v_t(x_t). Then g̃_v_t(x_t) = (x_t - argmin_x[ ‖x - x_v_t‖_1 : ‖x - x_t‖_∞≤ r_t ])/r_t (by definition) = (x_t - (x_t + r_t h_v_t(x_t)))/r_t = -h_v_t(x_t) ∈∂ f_v_t(x_t). Part B (there exists C ∈ℝ s.t. ℙ(B_t = 1 | ℱ_t) ≤ C r_t): ℙ(B_t = 1 | ℱ_t) = ℙ(∃ m : |x_v_t^m - x_t^m| ≤ r_t | ℱ_t) = ∫_x ∈{x : ∃ m, |x^m - x_t^m| ≤ r_t} h_𝒳|ℱ_t(x) dx = ∫_x ∈{x : ∃ m, |x^m - x_t^m| ≤ r_t} h_𝒳(x) dx (v is drawn independently of the history) ≤ C r_t (bounded h_𝒳, fixed M, bounded 𝒳) for some C < ∞. In the last line, C ≊ 2M·diameter(𝒳), based on the volume of the slices around the ideal points on each dimension. Lemma <ref>, case (p = ∞, q = 1). Let m̅_t ∈ argmax_m |x_v_t^m - x_t^m|, let h_v_t(x_t) ≜ [ 0, 0, …, 0, sign(x_t^m̅_t - x_v_t^m̅_t), 0, …, 0, 0 ]^T (the non-zero entry in position m̅_t), and let B_t ≜𝕀{∃ m ≠m̅_t : |x_v_t^m̅_t - x_t^m̅_t| < |x_v_t^m - x_t^m| + r_t}. We show the same two parts as in the above proofs. Part A (B_t = 0 implies g̃_v_t(x_t) = g_t for some g_t ∈∂ f_v_t(x_t)): First, note that when B_t = 0, the set of subgradients is ∂ f_v_t(x) = ∂‖x - x_v_t‖_∞ = {h_v_t(x_t)}. Also, when B_t = 0, argmin_x[ ‖x - x_v_t‖_∞ : ‖x - x_t‖_1 ≤ r_t ] = x_t - r_t h_v_t(x_t). Then g̃_v_t(x_t) = (x_t - argmin_x[ ‖x - x_v_t‖_∞ : ‖x - x_t‖_1 ≤ r_t ])/r_t (by definition) = (x_t - (x_t - r_t h_v_t(x_t)))/r_t = h_v_t(x_t) ∈∂ f_v_t(x_t). Part B (there exists C ∈ℝ s.t. ℙ(B_t = 1 | ℱ_t) ≤ C r_t): ℙ(B_t = 1 | ℱ_t) = ℙ(∃ m ≠m̅_t : |x_v_t^m̅_t - x_t^m̅_t| < |x_v_t^m - x_t^m| + r_t | ℱ_t) = ∫_x ∈{x : ∃ m ≠m̅_t s.t. |x^m̅_t - x_t^m̅_t| < |x^m - x_t^m| + r_t} h_𝒳|ℱ_t(x) dx = ∫_x ∈{x : ∃ m ≠m̅_t s.t. |x^m̅_t - x_t^m̅_t| < |x^m - x_t^m| + r_t} h_𝒳(x) dx (v is drawn independently of the history) ≤ C r_t (bounded h_𝒳, fixed M, bounded 𝒳) for some C < ∞. Note that C ≊ 2M^2·diameter(𝒳), based on the volume of the slices around each dimension. Lemma <ref>. For all (p, q) s.t. p > 0, q > 0, 1/p + 1/q = 1: ‖∇‖x - x_v‖_p‖_q = 1 for all x s.t. x^m ≠ x^m_v for all m. 
Proof. If x^m ≠ x^m_v for all m, then ∇_m ‖x - x_v‖_p = ∇_m (∑_m |x^m - x^m_v|^p)^1/p = (1/p) ∇_m |x^m - x^m_v|^p / (∑_m |x^m - x^m_v|^p)^1-1/p = |x^m - x^m_v|^p-1 (∇_m |x^m - x^m_v|) / ‖x - x_v‖_p^p-1. Then ‖∇‖x - x_v‖_p‖_q = ‖ |x^m - x^m_v|^p-1 (∇_m |x^m - x^m_v|) / ‖x - x_v‖_p^p-1‖_q = (1/‖x - x_v‖_p^p-1) (∑_m | |x^m - x^m_v|^p-1 (∇_m |x^m - x^m_v|) |^q)^1/q = (1/‖x - x_v‖_p^p-1) (∑_m |x^m - x^m_v|^(p-1)q)^1/q = ‖x - x_v‖_p^p/q / ‖x - x_v‖_p^p-1 = 1, since (p-1)q = p. Lemma <ref>. Suppose that f_v(x) ≜∑_k=1^K (w_v^k/‖w_v‖_2)‖x^k - x^k_v‖_2, and define A_t ≜𝕀{g̃_v_t(x_t) ∉∂ f_v_t(x_t)}, where g̃_v_t(x_t) is as defined in (<ref>) for q = 2. Then there exists C ∈ℝ s.t. for all t, ℙ(A_t = 1 | ℱ_t) ≤ C r_t. Proof. Let B_t = 𝕀{∃ k s.t. ‖x^k_v_t - x^k_t‖_2 ≤ r_t}. We show the same two parts for B_t as in the proofs of Lemma <ref>. Part A (B_t = 0 implies g̃_v_t(x_t) = g_t for some g_t ∈∂ f_v_t(x_t)): First, note that when B_t = 0, ∂_m f_v_t(x_t) = ∂_m ∑_k=1^K (w_v^k/‖w_v‖_2)‖x^k - x^k_v‖_2 = (w_v^k_m/‖w_v‖_2)(x_t^m - x_v_t^m)/‖x^k_m_v_t - x^k_m_t‖_2, where k_m is the sub-space that contains the m-th dimension. Also, if B_t = 0, then argmin_x[ ∑_k=1^K (w_v^k/‖w_v‖_2)‖x^k - x^k_v_t‖_2 : ‖x - x_t‖_2 ≤ r_t ] = x_t + r_t [ …, (w_v^k_m/‖w_v‖_2)(x_v_t^m - x_t^m)/‖x^k_m_v_t - x^k_m_t‖_2, … ]. Then g̃_v_t(x_t) = (x_t - argmin_x[ · ])/r_t ∈∂ f_v_t(x_t) by definition. Part B (there exists C ∈ℝ s.t. ℙ(B_t = 1 | ℱ_t) ≤ C r_t): ℙ(B_t = 1 | ℱ_t) = ℙ(∃ k s.t. ‖x^k_v_t - x^k_t‖_2 ≤ r_t | ℱ_t) = ∫_x ∈{x : ∃ k s.t. ‖x^k - x^k_t‖_2 ≤ r_t} h_𝒳|ℱ_t(x) dx = ∫_x ∈{x : ∃ k s.t. ‖x^k - x^k_t‖_2 ≤ r_t} h_𝒳(x) dx (v is drawn independently of the history) ≤ C r_t^2 (bounded h_𝒳) ≤ C r_t (since r_t ≤ 1 eventually) for some C < ∞. Note that C depends on K and M.
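As a quick numerical sanity check of the dual-norm gradient identity proved above (our sketch; the random test points are illustrative):

```python
# Sketch (ours): for dual exponents (p, q), the L^q norm of the gradient of
# x -> ||x - x_v||_p equals 1 whenever x^m != x_v^m on every coordinate.
import numpy as np

def grad_pnorm(x, x_v, p):
    d = x - x_v
    return np.abs(d) ** (p - 1) * np.sign(d) / np.linalg.norm(d, p) ** (p - 1)

rng = np.random.default_rng(5)
x, x_v = rng.normal(size=4), rng.normal(size=4)
for p in (1.5, 2.0, 3.0, 4.0):
    q = p / (p - 1.0)
    print(p, np.linalg.norm(grad_pnorm(x, x_v, p), q))   # ~1.0 each time
```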
http://arxiv.org/abs/1702.07984v3
{ "authors": [ "Nikhil Garg", "Vijay Kamble", "Ashish Goel", "David Marn", "Kamesh Munagala" ], "categories": [ "cs.MA", "cs.CY", "cs.GT" ], "primary_category": "cs.MA", "published": "20170226032431", "title": "Iterative Local Voting for Collective Decision-making in Continuous Spaces" }
This work has been submitted to the IEEE Photonics Technology Letters for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible. Upper and Lower Bounds for the Ergodic Capacity of MIMO Jacobi Fading Channels Amor Nafkha, Senior Member, IEEE, and Rémi Bonnefoi, Member, IEEE A. Nafkha and R. Bonnefoi are with the SCEE/IETR research team, CentraleSupélec, Avenue de la Boulaie, 35576 Cesson-Sévigné, France. E-mail: {amor.nafkha,Remi.Bonnefoi}@centralesupelec.fr. A. Nafkha is also with the B-com Institute of Research and Technology, 1219 Avenue des Champs Blancs, 35510 Cesson-Sévigné, France. In multi-(core/mode) optical fiber communication, the transmission channel can be modeled as a complex sub-matrix of a Haar-distributed unitary matrix (complex Jacobi unitary ensemble). In this letter, we present new analytical expressions for upper and lower bounds on the ergodic capacity of multiple-input multiple-output (MIMO) Jacobi-fading channels. Recent results on the determinant of the Jacobi unitary ensemble are employed to derive a tight lower bound on the ergodic capacity. We use Jensen's inequality to provide an analytical closed-form upper bound on the ergodic capacity at any signal-to-noise ratio (SNR). Closed-form expressions for the ergodic capacity at low and high SNR are also derived. Simulation results are presented to validate the accuracy of the derived expressions. Ergodic capacity, multi-(core/mode) optical fiber, space-division multiplexing, MIMO Jacobi-fading channels.§ INTRODUCTIONTo accommodate the exponential growth of data traffic over the last few years, space-division multiplexing (SDM) based on multi-core or multi-mode optical fiber <cit.> is expected to overcome the capacity limit of single-core fiber <cit.>. The main challenge in SDM arises from the in-band crosstalk between the multiple parallel transmission channels (cores or modes). This non-negligible crosstalk can be dealt with by using multiple-input multiple-output (MIMO) signal processing techniques. Assuming strong crosstalk between channels (cores or modes), negligible backscattering, and near-lossless propagation, we can model the transmission channel as a random complex unitary matrix <cit.>. In <cit.>, the authors introduced the Jacobi unitary ensemble to model the propagation channel of the fiber-optic MIMO channel, and they gave an analytical expression for the ergodic capacity. However, to the best of the authors' knowledge, no bounds on the ergodic capacity of the uncorrelated MIMO Jacobi-fading channel exist in the literature so far. 
The two main contributions of this work are: (i) the derivation of lower and upper bounds on the ergodic capacity of an uncorrelated MIMO Jacobi-fading channel with independent and identically distributed input symbols, and (ii) the derivation of simple asymptotic expressions for the ergodic capacity in the low and high SNR regimes.The rest of this paper is organized as follows: Section <ref> introduces the MIMO Jacobi-fading channel model and includes the definition of the ergodic capacity. In Section <ref>, we derive a lower and an upper bound, valid at any SNR, and approximations to the ergodic capacity in the high and low SNR regimes. The theoretical and simulation results are discussed in Section <ref>. Finally, Section <ref> provides the conclusion.§ PROBLEM FORMULATION Consider a single-segment m-channel lossless optical fiber system; the propagation through the fiber may be analyzed through its 2m × 2m scattering matrix, given byS = [ R_ll T_rl; T_lr R_rr ],where the sub-matrices T_lr and T_rl correspond to the signals transmitted from left to right and from right to left, respectively. The sub-matrices R_ll and R_rr represent the signals reflected from left to left and from right to right. Moreover, R_ll = R_rr ≈ 0_m×m, given that the backscattering in the optical fiber is negligible, and T = T_lr = T_rl^†, because the two fiber ends are not distinguishable. The notation (.)^† is used to denote the conjugate transpose matrix. The energy conservation principle implies that the scattering matrix S is a unitary matrix (i.e., S^-1 = S^†, where (.)^-1 denotes the inverse matrix). As a consequence, the four Hermitian matrices T_lrT_lr^†, T_rlT_rl^†, I_m - R_llR_ll^†, and I_m - R_rrR_rr^† have the same set of eigenvalues λ_1, λ_2, …, λ_m. Each of these m transmission eigenvalues is a real number between 0 and 1. Without loss of generality, the transmission matrix T will be modeled as a Haar-distributed unitary random matrix of dimension m × m <cit.>.In this paper, we consider that m_t ≤ m transmitting channels and m_r ≤ m receiving channels are coherently excited at the input and output sides of the m-channel lossless optical fiber. Therefore, we only consider a truncated version of the transmission matrix T, which we denote by H, since not all transmitting or receiving channels may be available to a given link. Without loss of generality, the effective transmission channel matrix H is the m_r × m_t upper-left corner of the transmission matrix T <cit.>. As a result, the corresponding multiple-input multiple-output channel for this system is given byy = Hx + z,where y∈ℂ^m_r×1 is the received signal, x∈ℂ^m_t×1 is the emitted signal with 𝔼[x^†x] = (𝒫/m_t) I_m_t, and z ∼𝒩(0, σ^2 I_m_r) is circularly-symmetric complex Gaussian noise. We denote by 𝔼[W] the mathematical expectation of a random variable W. The variable 𝒫 is the total transmit power across the m_t modes/cores, and σ^2 is the Gaussian noise variance. We know from <cit.> that when the receiver has complete knowledge of the channel matrix, the ergodic capacity is given by C_m_t,m_r^m,ρ = 𝔼[ln det(I_m_t + (ρ/m_t) H^†H)] = 𝔼[ln det(I_m_r + (ρ/m_t) HH^†)],where ln is the natural logarithm function and ρ = 𝒫/σ^2 is the average signal-to-noise ratio (SNR).In this paper, we consider the case where m_r ≥ m_t and m_t+m_r ≤ m. The other case, where m_r < m_t and m_t+m_r ≤ m, can be treated by defining m_t' = m_r and m_r' = m_t. 
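To make the channel model concrete, a minimal Monte Carlo sketch (ours, not the letter's code; function names are illustrative) samples T Haar-uniformly via the QR decomposition of a complex Ginibre matrix, truncates it to the m_r × m_t corner H, and estimates the ergodic capacity numerically:

```python
# Sketch (ours): draw a Haar unitary T, take the m_r x m_t upper-left corner H,
# and average ln det(I + (rho/m_t) H^dagger H) over realisations (nats/s/Hz).
import numpy as np

def haar_unitary(m, rng):
    z = (rng.normal(size=(m, m)) + 1j * rng.normal(size=(m, m))) / np.sqrt(2.0)
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))          # phase correction yields the Haar measure

def ergodic_capacity_mc(m, mt, mr, rho, trials=5000, seed=6):
    rng = np.random.default_rng(seed)
    cap = 0.0
    for _ in range(trials):
        H = haar_unitary(m, rng)[:mr, :mt]
        G = np.eye(mt) + (rho / mt) * (H.conj().T @ H)
        cap += np.log(np.linalg.det(G).real)   # G is Hermitian PD, so det is real
    return cap / trials

print(ergodic_capacity_mc(6, 2, 2, 10.0))      # m=6, mt=mr=2 at SNR = 10 dB (rho = 10)
```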
In the case where m_t+m_r > m, it was shown in <cit.> that the ergodic capacity can be deduced from (<ref>) as follows:C_m_t,m_r^m,ρ = (m_t+m_r-m) ln(1+ρ) + C_m-m_r,m-m_t^m,ρ.The ergodic capacity is defined as an average with respect to the joint distribution of the eigenvalues of the channel covariance matrix J = (1/m_t)H^†H. The random matrix J follows the Jacobi distribution, and its ordered eigenvalues λ_1 ≥λ_2 ≥…≥λ_m_t have the joint density given by ℱ_a,b,m(λ) = χ^-1∏_1≤ j ≤ m_tλ_j^a (1-λ_j)^b V(λ)^2,where a = m_r - m_t, b = m - m_r - m_t, λ = (λ_1, …, λ_m_t), V(λ) = ∏_1≤ j<k≤ m_t |λ_k-λ_j|, and χ is a normalization constant evaluated using the Selberg integral <cit.>, given by:χ = ∏_j=1^m_t Γ(a+1+j)Γ(b+1+j)Γ(2+j) / [Γ(a+b+m_t+j+1)Γ(2)].§ TIGHT BOUNDS ON THE ERGODIC CAPACITY In order to obtain simplified closed-form expressions for the ergodic capacity of the Jacobi MIMO channel, we consider classical inequalities such as Jensen's inequality and Minkowski's inequality. Moreover, we use the concavity of the ln(.) function, given that the channel covariance matrix J is a positive definite matrix <cit.>. §.§ Upper bound The following theorem presents a new tight upper bound on the ergodic capacity of the Jacobi MIMO channel. Theorem. Let m_t ≤ m_r and m_t + m_r ≤ m. The ergodic capacity of the uncorrelated MIMO Jacobi-fading channel, with receiver CSI and no transmitter CSI, is upper bounded by C_m_t,m_r^m,ρ≤ m_t ln(1 + ρ m_r/m). Proof of Theorem <ref>: We use the well-known Jensen's inequality <cit.> to obtain an upper bound for the ergodic capacity. According to this inequality and the concavity of the ln(.) function, a tight upper bound on the ergodic capacity (<ref>) is given by:C_m_t,m_r^m,ρ≤ m_t ln(1 + ρ𝔼[λ_1]).Now, the density of λ_1 is given by <cit.> asf_λ_1(λ_1) = (1/m_t) ∑_k=0^m_t-1 e_k,a,b^-1λ_1^a (1-λ_1)^b (P_k^(a,b)(1-2λ_1))^2,where e_k,a,b = Γ(k+a+1)Γ(k+b+1) / [k! (2k+a+b+1) Γ(k+a+b+1)] and the P_k^(a,b)(x) are the Jacobi polynomials <cit.>. They are orthogonal with respect to the Jacobi weight function ω^a,b(x) := (1-x)^a (1+x)^b over the interval I=[-1,1], where a,b>-1, and they satisfy∫_-1^1 (1-x)^a (1+x)^b P_n^(a,b)(x) P_m^(a,b)(x) dx = 2^a+b+1 e_n,a,bδ_n,m,where δ_n,m is the Kronecker delta function. Using (<ref>), we can write the expectation of λ_1 as𝔼[λ_1] = ∑_k=0^m_t-1 (e_k,a,b^-1/m_t) ∫_0^1 λ_1^a+1 (1-λ_1)^b (P_k^(a,b)(1-2λ_1))^2 dλ_1.By taking u = 1-2λ_1, we can write𝔼[λ_1] = (1/(m_t 2^a+b+2)) ∑_k=0^m_t-1 e_k,a,b^-1∫_-1^1 (1-u)^a (1+u)^b P_k^(a,b)(u) (P_k^(a,b)(u) - u P_k^(a,b)(u)) du.We recall from <cit.> the following three-term recurrence relation of the Jacobi polynomials:u P_k^(a,b)(u) = P_k+1^(a,b)(u)/A_k + C_k P_k-1^(a,b)(u)/A_k - B_k P_k^(a,b)(u)/A_k, k>0,where A_k = (2k+a+b+1)(2k+a+b+2) / [2(k+1)(k+a+b+1)], B_k = (a^2-b^2)(2k+a+b+1) / [2(k+1)(k+a+b+1)(2k+a+b)], and C_k = (k+a)(k+b)(2k+a+b+2) / [(k+1)(k+a+b+1)(2k+a+b)]. Then, by employing (<ref>), (<ref>), and (<ref>) together with the orthogonality relation, the expectation of λ_1 can be expressed as𝔼[λ_1] = (1/(2m_t)) ∑_k=0^m_t-1 (1 + B_k/A_k) = m_r/m.Finally, the upper bound on the ergodic capacity can be expressed as:C_m_t,m_r^m,ρ≤ m_t ln(1 + ρ m_r/m).This completes the proof of Theorem <ref>.In low-SNR regimes, the proposed upper bound expression is very close to the ergodic capacity; thus, we derive the following corollary. Corollary. Let m_t ≤ m_r and m_t + m_r ≤ m. 
In low-SNR regimes, the ergodic capacity of the uncorrelated MIMO Jacobi-fading channel can be approximated as

C_m_t,m_r^m,ρ ≈ m_t m_r ρ/m,  ρ ≪ 1.

Proof of Corollary <ref>: In low-SNR regimes (ρ ≪ 1), the function ln(1 + m_r ρ/m) can be approximated by m_r ρ/m. When the sum of transmit and receive modes, m_t + m_r, is larger than the total available modes, m, the upper bound expression of the ergodic capacity can be deduced from (<ref>). §.§ Lower bound The following theorem gives a tight lower bound on the ergodic capacity of Jacobi MIMO channels. Let m_t ≤ m_r and m_t + m_r ≤ m. The ergodic capacity of the uncorrelated MIMO Jacobi-fading channel, with receiver CSI and no transmitter CSI, is lower bounded by

C_m_t,m_r^m,ρ ≥ m_t ln(1 + ρ/(F_m_t,m_r^m)^1/m_t)

where F_m_t,m_r^m = ∏_j=0^m_t-1 ∏_k=0^m-m_r-1 exp(1/(m_r+k-j)). Proof of Theorem <ref>: We start from Minkowski's inequality <cit.>, which we recall here for simplicity. Let A and B be two n × n positive definite matrices; then det(A+B)^1/n ≥ det(A)^1/n + det(B)^1/n, with equality iff A is proportional to B. Applying this inequality to (<ref>), a lower bound on the ergodic capacity can be obtained as

C_m_t,m_r^m,ρ ≥ m_t 𝔼[ln(1 + ρ exp((1/m_t) ln det(J)))].

Recalling that ln(1 + c e^x) is convex in x for c > 0, we apply Jensen's inequality <cit.> to further lower bound (<ref>):

C_m_t,m_r^m,ρ ≥ m_t ln(1 + ρ exp((1/m_t) 𝔼[ln det(J)])).

Using Kshirsagar's theorem <cit.>, it has been shown in <cit.> and <cit.> that the determinant of the Jacobi ensemble can be decomposed into a product of independent beta-distributed variables. We infer from <cit.> that

ln det(J) (d)= ∑_j=1^m_t ln T_j

where (d)= stands for equality in distribution, and T_j, j=1,…,m_t, are independent with T_j (d)= Beta(m_r-j+1, m-m_r), where Beta(α,β) is the beta distribution with shape parameters (α,β). Taking the expectation over all channel realizations of the random variable U = ln det(J), we get

𝔼[U] = ∑_j=0^m_t-1 ψ(m_r-j) - ψ(m-j)

where ψ(n) is the digamma function. For a positive integer n, the digamma function, also called the Psi function, is given by <cit.> ψ(1) = -γ and ψ(n) = -γ + ∑_k=1^n-1 1/k for n ≥ 2, where γ ≈ 0.5772 is the Euler-Mascheroni constant. Now, we can finish the proof of Theorem <ref> as follows:

C_m_t,m_r^m,ρ ≥ m_t ln(1 + ρ exp((1/m_t) ∑_j=0^m_t-1 ψ(m_r-j) - ψ(m-j))) = m_t ln(1 + ρ/(F_m_t,m_r^m)^1/m_t)

where F_m_t,m_r^m = ∏_j=0^m_t-1 ∏_k=0^m-m_r-1 exp(1/(m_r+k-j)). This completes the proof of Theorem <ref>. In high-SNR regimes, the proposed lower bound expression is close to the ergodic capacity. Thus, we derive the following corollary. Let m_t ≤ m_r and m_t + m_r ≤ m. In high-SNR regimes, the ergodic capacity of the uncorrelated MIMO Jacobi-fading channel can be approximated as

C_m_t,m_r^m,ρ ≈ m_t ln(ρ) - ∑_j=0^m_t-1 ∑_k=0^m-m_r-1 1/(m_r+k-j),  ρ ≫ 1.

Proof of Corollary <ref>: In high-SNR regimes (ρ ≫ 1), the function ln(1 + ρ/(F_m_t,m_r^m)^1/m_t) can be approximated by ln(ρ) - (1/m_t) ln(F_m_t,m_r^m). § SIMULATION RESULTS In this section, we present numerical results to further investigate the resulting analytical equations. The tightness of the derived expressions is clearly visible in Figs. <ref>–<ref>. In Fig. <ref>(a), we have plotted the exact ergodic capacity obtained by computer simulation and the corresponding lower and upper bounds for the uncorrelated MIMO Jacobi-fading channels with (m_t=m_r=2, m=6) and (m_t=4, m_r=10, m=16). At very low SNR (typically below 2 dB), the exact curves and the upper bounds are practically indistinguishable. The gaps between the exact curves of the ergodic capacity and the lower bounds considerably vanish at moderate to high SNR (typically above 20 dB).
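The closed-form bounds and the two asymptotic expressions above are elementary to evaluate, so comparisons like those in the figures can be reproduced directly from them. A small illustrative sketch (Python with NumPy; naming is our own, capacities in nats):

import numpy as np

def ln_F(m, m_t, m_r):
    # ln F = sum_{j=0}^{m_t-1} sum_{k=0}^{m-m_r-1} 1/(m_r + k - j)
    return sum(1.0 / (m_r + k - j) for j in range(m_t) for k in range(m - m_r))

def upper_bound(m, m_t, m_r, rho):
    # Theorem: C <= m_t ln(1 + rho m_r / m)
    return m_t * np.log(1.0 + rho * m_r / m)

def lower_bound(m, m_t, m_r, rho):
    # Theorem: C >= m_t ln(1 + rho / F^(1/m_t)) = m_t ln(1 + rho exp(-ln_F/m_t))
    return m_t * np.log(1.0 + rho * np.exp(-ln_F(m, m_t, m_r) / m_t))

def low_snr_approx(m, m_t, m_r, rho):
    return m_t * m_r * rho / m

def high_snr_approx(m, m_t, m_r, rho):
    return m_t * np.log(rho) - ln_F(m, m_t, m_r)

for rho_db in (0, 10, 20, 30):
    rho = 10.0 ** (rho_db / 10.0)
    print(rho_db, lower_bound(6, 2, 2, rho), upper_bound(6, 2, 2, rho))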
We can observe that the expression in (<ref>) matches perfectly with the ergodic capacity expression in (<ref>). Figure <ref>(b) shows the ergodic capacities of uncorrelated MIMO Jacobi fading channels, and it proves by numerical simulations the validity of the high-SNR regimes lower-bound approximation given in (<ref>). Results are shown for different numbers of transmitted/received modes, with m = 4, m = 8, and m=16. We see that the ergodic capacities approximations are accurate over a large range of high SNR values. Figure <ref>(a) shows the ergodic capacity and the analytical low-SNR upper bound expression in Eq. (<ref>) for several uncorrelated MIMO Jacobi-fading channels configurations. It is clearly seen that our expression is almost exact at very low SNR and that it gets tighter at low SNR as the number of available modes (m) increases.Figure <ref>(b) shows the comparison of the ergodic capacity of the uncorrelated MIMO Jacobi-fading channels and the derived expressions of the upper and lower bounds where the number of available modes is equal to 128. As can be seen in Fig. <ref>(b), the derived upper and lower bounds of the ergodic capacity are close to the exact expression given in (<ref>). We verify that our upper and lower bounds give good approximations of the ergodic capacity even for very large number of available modes (i.e. m=128).In Fig. <ref>(a), we investigate how close the ergodic capacity is to its upper and lower bounds in cases where m_t+m_r > m. We address this particular case using (<ref>). It can be observed that the proposed upper bound on the ergodic capacity is extremely tight for all SNR regimes when m_r is larger than m_t. It is important to note that there exists a constant gap between the lower bound and the exact ergodic capacity at all SNR levels. When m_t is larger than m_r, such upper and lower bounds are close to ergodic capacity at all SNR regimes. For comparison purposes, we have depicted in Fig. <ref>(b) the ergodic capacity of the MIMO Jacobi-fading channel obtained by computer simulation, the upper/lower bounds and the high/low SNR approximations when the sum of transmit and receive modes, m_t + m_r, is larger than the total available modes, m. In the high SNR regimes, the ergodic capacity and its high SNR approximation curves are almost indistinguishable. Similarly, we observe that there is almost no difference between the ergodic capacity and its low SNR approximation in the low SNR regions, while there is a significant difference in the high SNR regimes. This difference can be explained by the fact that the first order Taylor's expansion of ln(1+x) is not valid for high values of x.§ CONCLUSIONIn this paper, we derive new analytical expressions of the lower-bound and upper-bound on the ergodic capacity for uncorrelated MIMO Jacobi fading channels assuming that transmitter has no knowledge of the channel state information. Moreover, we derive accurate closed-form analytical approximations of ergodic capacity in the high and low SNR regimes. The simulation results show that the lower-bound and upper-bound expressions are very close to the ergodic capacity.1 MMF1 K. Ho, and J. Kahn, “Statistics of group delays in multimode fiber with strong mode coupling,” J. Lightwave Technol. 29(21), 3119–3128, 2011. MMF2 C. Lin, I. B. Djordjevic, and D. Zou, “Achievable information rates calculation for optical OFDM transmission over few-mode fiber long-haul transmission systems,” Optical Express 23(13), 16846–16856, 2015. SDM D. J. Richardson, J. M. 
Fini, and L. E. Nelson, “Space-division multiplexing in optical fibres,” Nat. Photonics 7(5), 354–362, 2013. DFS R. Dar, M. Feder, M. Shtaif, “The Jacobi MIMO channel,” IEEE Trans. on Inf. Theory 59(4), 2426–2441, 2013. Winzer P. J. Winzer, G. J. Foschini, “MIMO capacities and outage probabilities in spatially multiplexed optical transport systems,” Optical Express 19(17), 16680-16696, 2011. Aris A. Karadimitrakis, A. L. Moustakas, P. Vivo, “Outage capacity for the optical MIMO channel,” IEEE Trans. on Inf. Theory 60(7), 4370–4382, 2014.telatar E. Telatar, “Capacity of multi-antenna Gaussian channels,” Europ. Trans. Telecommun. 10, 585–596, 1999.Kan J. Kaneko, “Selberg integrals and hypergeometric functions associated with Jack polynomials,” SIAM J. Math. Anal. 24, 1086–1110, 1993. Jiang2009 T. Jiang, “Approximation of Haar distributed matrices and limiting distributions of eigenvalues of Jacobi ensembles,” Prob. Theory and Related Fields 144, 221–246, 2009. EIT T. M. Cover, J. A. Thomas, Elements of information theory (John Wiley & Sons, New Jersey, 2006). Cvetkovski Z. Cvetkovski, Inequalities: theorems, techniques and selected problems (Springer, 2012). Ism M. E. H. Ismail, Classical and quantum orthogonal polynomials in one variable (Cambridge Univ. Press., 2005).Kshirsagar A. M. Kshirsagar, “The noncentral multivariate beta distribution,” Ann. Math. Statist. 32, 104–111, 1961. Muirhead R. J. Muirhead, Aspects of multivariate statistical theory (Wiley, 2005). Rouault A. Rouault, “Asymptotic behavior of random determinants in the Laguerre, Gram and Jacobi ensembles,” ALEA Lat. Am. J. Probab. Math. Stat, 2007. bookdigamma M. Abramowitz, I. A. Stegun, Handbook of mathematical functions with formulas, graphs, and mathematical tables, Dover, 1970.
http://arxiv.org/abs/1702.08258v1
{ "authors": [ "Amor Nafkha", "Remi Bonnefoi" ], "categories": [ "cs.IT", "math.IT" ], "primary_category": "cs.IT", "published": "20170227125146", "title": "Upper and Lower Bounds for the Ergodic Capacity of MIMO Jacobi Fading Channels" }
Activation Ensembles for Deep Neural Networks. Mark Harmon and Diego Klabjan, Northwestern University, Evanston, IL (MarkHarmon2012@u.northwestern.edu). Keywords: deep learning, activation functions, ensemble, convex. Many activation functions have been proposed in the past, but selecting an adequate one requires trial and error. We propose a new methodology of designing activation functions within a neural network at each layer. We call this technique an "activation ensemble" because it allows the use of multiple activation functions at each layer. This is done by introducing additional variables, α, at each activation layer of a network to allow for multiple activation functions to be active at each neuron. By design, the activations with larger α values at a neuron are the ones that contribute with the largest magnitude. Hence, those higher-magnitude activations are "chosen" by the network. We implement the activation ensembles on a variety of datasets using an array of Feed Forward and Convolutional Neural Networks. By using the activation ensemble, we achieve superior results compared to traditional techniques. In addition, because of the flexibility of this methodology, we more deeply explore activation functions and the features that they capture. § INTRODUCTION Most of the recent advancements in the mathematics of neural networks come from four areas: network architecture <cit.> <cit.>, optimization methods (AdaDelta <cit.> and Batch Normalization <cit.>), activation functions, and objective functions (such as Mollifying Networks <cit.>). Highway Networks <cit.> and Residual Networks <cit.> both use the approach of adding data from a previous layer to one further ahead for more effective learning. On the other hand, others use Memory Networks <cit.> to more effectively remember past data and even answer questions about short paragraphs. These new architectures move the field of neural networks and machine learning forward, but one of the main driving forces that brought neural networks back into popularity is the rectifier linear unit (ReLU) <cit.> <cit.>. Because of trivial calculations in both the forward and backward steps and its ability to more effectively help a network learn, ReLU's revolutionized the way neural networks are studied. One technique that universally increases accuracy is ensembling multiple predictive models. There are books and articles that explain the advantages of using multiple models rather than a single large model <cit.>. When neural networks garnered more popularity in the 90's and early 2000's, researchers used the same technique of ensembles <cit.>. Additionally, other techniques may identify as an ensemble. For example, the nature of Dropout <cit.> trains many different networks together by dropping nodes from the entirety of a network. We focus on activation functions rather than expanding the general architectures of networks. Our work can be seen as a layer or neuron ensemble that can be applied to any variety of deep neural network via activation ensembles.
We do not focus on creating yet another unique activation function, but rather ensemble a number of proven activation functions in a compelling way.The end result is a novel activation function that is a combination of existing functions.Each activation function, from ReLU to hyperbolic tangent contain advantages in learning.We propose to use the best parts of each in a dynamic way decided by variables configuring contributions of each activation function.These variables put weights on each activation function under consideration and are optimized via backpropagation.As data passes through a deep neural network, each layer transforms the data to better interpret and gather features.Therefore, the best possible function at the top of a network may not be optimal in the middle or bottom of a network.The advantage of our architecture is that rather than choosing activations at specified layers or over an entire network, one can give the network the option to choose the best possible activation function of each neuron at each layer.Creating an ensemble of activation functions presents some challenges in how to best combine the activation functions to extract as much information from a given dataset as possible.In the most basic model ensembles, one can average the given probabilities of multiple models.This is only feasible because the range of values in each model are the same, which is not replicated in most activation functions.The difficulty lies in restricting or expanding the range of these functions without losing their inherent performance. An activation ensemble consists of two important parts. The first is the main α parameter attached to each activation function for each neuron. This variable assigns a weight to each activation function considered, i.e., it designs a convex combination of activation functions.The second are a set of “offset" parameters, η and δ, which we use to dynamically offset normalization range for each function.Training of these new parameters occurs during typical model training and is done through backpropagation.Our work contains two significant contributions.First,we implement novel activation functions as convex combinations of known functions with interesting properties based upon current knowledge of activation functions and learning.Second, we improve the learning of any network considered herein, including the well-established residual network in the Cifar-100 learning task. § RELATED WORK There is a plethora of work in building the best activation functions for neural networks.Before ReLU activations were commonly used, deep neural networks were nearly impossible to train since the neurons would get stuck in the upper and lower areas of sigmoid and hyperbolic tangent functions. Some work has focused on improving these activation functions, such as <cit.>, who introduce a stochastic variable to the sigmoid and hyperbolic tangent functions.Since sigmoid and hyperbolic tangent function both contain areas of high saturation for values in large magnitude, the stochastic variable can aid in pushing the activation functions out of high saturation areas.In our work, rather than introducing stochasticity, we introduce several activations at each neuron, from which the network can chose a combination.Thus, it can reap the benefits of the sigmoid and hyperbolic tangent function without being limited to these functions at each layer. 
On the other hand, <cit.> create a smoother leaky ReLU function dubbed the exponential linear function in order to take advantage of including negative values. <cit.> further generalize the exponential linear unit of Clevert. The new exponential linear unit is called the Parametric Exponential Linear Unit (PELU):

f(x) = (a/b) x for x ≥ 0, and f(x) = a(e^x/b - 1) for x ≤ 0, with a, b > 0.

The extra parameters a, b increase the robustness of this activation function, similar to that of the leaky ReLU compared to a regular rectifier unit. They find that this more general format dramatically increases accuracy. Our work differs from those who utilize different activation functions by being able to use as many activation functions as a user's computational capabilities allow. Rather than choosing one for a network, we allow the network to not only choose the best function for each layer, but also for each single neuron in each of the layers. We grant activation functions with negative values in our network, but we restrict this after a simple transformation to ensure that our functions are similar in magnitude. <cit.> take a different approach from typical work that focuses on activation functions. Rather than combining inputs, they use multiple biases to find features hidden within the magnitudes of activation functions. In this way, they can threshold various outputs to find hidden features and help filter out noise from the data. We restrict the range of our activation functions, which helps the neural network find features that may be hidden within the magnitude of another activation function. Thus, we are able to find hidden features via known activation functions without introducing multiple biases. <cit.> create an activation function during the training phase of the model. However, they use a cubic spline interpolation rather than using the basis of the rectifier unit. Their work differs from ours in that we use the many different available activation functions rather than creating an entirely new function via interpolation. It is important to note that we restrict our activation function to a particular set and then allow the network to choose the best one or some combination of those available rather than have activation functions with open parameters (though this is possible in our architecture). Maxout Networks, <cit.>, allow the network to choose the best activation function similar to our work. However, Maxout Networks require many more weights to train. For each activation function, there is an entirely new weight matrix, while our extra activations only require a few parameters. In addition, Maxout Networks find the maximum value for the activation function rather than combining the activation functions into a novel function as done in our work. Thus, features in a Maxout Network are lost due to not being represented after the maximization function. <cit.> approach the problem of activation functions through the lens of ReLU's. Similar to PReLU's, which are leaky ReLU's with varying slope on the negative end, they create unique activation functions for rectifier units. However, they combine the training of the network and the learning of the activation functions, similar to our work and that of <cit.>. Their work restricts the combination to a set of linear functions with open parameters, while ours does not have a restriction on the type of functions. Since the work of <cit.> has this restriction, they cannot combine functions of various magnitude like the work we present here. <cit.> uses multiple activation functions in a neural
network for each neuron in the field of stochastic control. Similar to our work, he combines functions such as the ReLU, hyperbolic tangent, and sigmoid. Chen uses Neuroevolution of Augmenting Topologies (NEAT) to train his neural network for control purposes. However, he simply adds together the activation functions without capturing magnitude and does not allow the network to choose an optimal set of activations for each neuron. Work by <cit.>, which is closest to our approach, explores the construction of activation functions during the training phase of a neural network. The general framework of their activation function is the following:

h_i(x) = max(0,x) + ∑_s=1^S a_i^s max(0, -x + b_i^s)

where i denotes the neuron, S is a hyperparameter chosen by the user, and a and b are trained variables. Thus, rather than combining various activation functions together via an optimization process, <cit.> take the baseline foundation of the ReLU function and add additional training variables to the function through a and b. The difference in our work is that we want to capture the features extracted by the various activation functions actively used to find the best combination available. § ACTIVATION ENSEMBLES Ensemble Layers were created with the idea of allowing a network to choose its own activation for each neuron and for each layer of the network. Overall, the network takes the output of a previous layer, for example from a convolutional step, applies its various activations, normalizes these activations, and places weights on each activation function. We first go through each step of the process of making such a layer. The first naive approach is to simply take the input, use a variety of activation functions, and add these activation functions together.
We denote the set of activation functions we use to train a network by f^j ∈ F. Using this method, a network may reap the benefits of having more than one activation function, which may extract different features from the input. However, simply adding poses a problem for most functions. Many functions, like the sigmoid and hyperbolic tangent, possess different ranges, but they can be easily scaled to have the same range. However, other functions, such as the rectifier family and the inverse absolute value function, cannot be easily adjusted due to their unbounded ranges. If we simply add together the functions, the activation functions with the largest absolute value will dominate the network, leaving the other functions with minimal input. To solve this issue, we need to normalize the activation functions with respect to one another in order to have relatively equal contributions to learning. One option would be to use mean and standard deviation normalization; however, this would not equalize contribution. Therefore, we scale the functions to [0,1]. While building our method, we additionally performed tests using the range of [-1,1] for each activation function. We found that the performance was either close to that of [0,1] or slightly worse. In addition, allowing negative values causes additional issues when choosing the best activation functions with the α parameter we introduce later. Simply adding activations together and forcing them to have equal contribution does not solve our second goal of finding the best possible activation functions for particular problems, networks, and layers. Therefore, we apply an additional weight value, α, to each activation for each neuron. Thus, for the output of each neuron i, with m being the number of activation functions, we have the following activation function for each neuron:

h_i^j(z) = (f^j(z) - min_k f^j(z_ki)) / (max_k f^j(z_ki) - min_k f^j(z_ki) + ϵ)

y_i(z) = ∑_j=1^m α_i^j h_i^j(z)

Here, ϵ is a small number, k goes through all training samples (in practice k varies over the samples in a minibatch), and z_ki is the input to neuron i with training example k. We consider α_i^j as part of our network optimization. In order to again avoid one activation growing much larger than the others, we must also limit the magnitude of α. Since the activation functions are contained in [0,1], the most obvious choice is to limit the values of α to lie within the same range. Thus, the network also has the ability to choose a particular activation function f^j ∈ F if the performance of one far outweighs the others. For example, during our experiments, there were many neurons that we found heavily favored the ReLU function after training, signified by the α^j for the ReLU function being magnitudes larger than the others. In order to force the network to choose amongst the presented activation functions, we further require that all the weights for each neuron add to one. This then gives us another optimization problem to solve when updating the weights. In what follows, the values α̂^j are the proposed weight values for activation function j after a gradient update, and α^j are the new values we find after solving the projection problem. We omit the neuron index i for simplicity, i.e., this problem must be solved for each neuron:

minimize over α: ∑_j=1^m (1/2)(α̂^j - α^j)^2
subject to: ∑_j=1^m α^j = 1, α^j ≥ 0, j = 1, 2, …, m.

This optimization problem is convex, and is readily solvable via KKT conditions. It yields the following solution: α^j = α̂^j + λ if α̂^j + λ > 0, and α^j = 0 otherwise, where λ is chosen such that
∑_j=1^m max(0, α̂^j + λ) = 1.

This problem is very similar to the water-filling problem, and has an O(m) solution. Algorithm 1 exhibits the procedure for solving this problem. The projection sub-problem must be solved for each neuron of each layer to which our multiple activations are applied. We borrow two elements in our approach from Batch Normalization <cit.>. We record running minimum and maximum values while training over each minibatch. Thus we transform the data using only a small portion of it (dictated by the batch size during training). The other element that we introduce is the two parameters, η and δ, which allow for the possibility of the network choosing to leave the activation at its original state. Below is the final equation that we use for the activation function at each neuron:

y_i = ∑_j=1^m α^j (η^j h_i^j + δ^j).

Therefore, the final algorithm for our network involves both maintaining running maximum and minimum values and solving the projection subproblem during training. Then, in the test phase, we use the approximate minimum and maximum values we obtain during the training phase. The weights learned during the training phase, namely our new parameters α, η, and δ, remain as is during testing. In summary, rather than creating a new weight matrix for each activation function, which adds a huge amount of overhead, we allow the network to change its weights according to the activation function that is optimal for each neuron. We next provide derivatives for the new parameters. Let ℓ denote the cost function for our neural network. We backpropagate through our loss function ℓ with the chain rule to find the following:

∂ℓ/∂α^j = ∂ℓ/∂y_i · (η^j h_i^j + δ^j)
∂ℓ/∂η^j = ∂ℓ/∂y_i · α^j h_i^j
∂ℓ/∂δ^j = ∂ℓ/∂y_i · α^j

As a note, activation ensembles and the algorithms above work in the same way for CNN's. The only difference is that instead of using a set of activations for a specific neuron, we use a set of activations for a specific filter. §.§ Activation Sets To explore the strength of our ensemble method, we create three sets of activations to take advantage of the weaknesses of individual functions. The first set is a number of activation functions seen in networks today. One of the functions, the exponential linear unit, garners favorable results with datasets such as CIFAR-100. Others include the less popular inverse absolute value function and the sigmoid function (which is primarily relegated to recurrent networks in most literature). One omission we must mention is the adaptive piecewise linear unit. Since our goal is to create an activation from common functions, a function that focuses on adapting via weights is not included; however, this does not mean it would not work within our model.

* Sigmoid Function: 1/(1+e^-z)
* Hyperbolic Tangent: tanh(z)
* Soft Linear Rectifier: ln(1 + e^z)
* Linear Rectifier: max(0,z)
* Inverse Absolute Value: z/(1+|z|)
* Exponential Linear Function: z for z ≥ 0, and e^z - 1 for z ≤ 0
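To make the pieces above concrete, the following sketch (Python with NumPy; illustrative only, with names and minibatch handling that are our own simplifications rather than the authors' Theano code) implements the per-neuron forward pass y = ∑_j α^j(η^j h^j + δ^j) for this first activation set, together with the Euclidean projection onto the simplex used for the α update:

import numpy as np

# The first activation set; each function acts elementwise on a minibatch.
ACTS = [
    lambda z: 1.0 / (1.0 + np.exp(-z)),          # sigmoid
    np.tanh,                                      # hyperbolic tangent
    lambda z: np.logaddexp(0.0, z),               # soft rectifier ln(1 + e^z)
    lambda z: np.maximum(z, 0.0),                 # linear rectifier
    lambda z: z / (1.0 + np.abs(z)),              # inverse absolute value
    lambda z: np.where(z >= 0, z, np.expm1(z)),   # exponential linear
]

def project_simplex(a_hat):
    # Euclidean projection of a_hat onto {a : a >= 0, sum(a) = 1};
    # the threshold lam below plays the role of -lambda in the text.
    u = np.sort(a_hat)[::-1]
    css = np.cumsum(u)
    ks = np.arange(1, a_hat.size + 1)
    k = ks[u - (css - 1.0) / ks > 0].max()
    lam = (css[k - 1] - 1.0) / k
    return np.maximum(a_hat - lam, 0.0)

def ensemble_forward(z, alpha, eta, delta, eps=1e-6):
    # z: pre-activations of one neuron over a minibatch, shape (batch,).
    # alpha, eta, delta: per-activation parameters, shape (len(ACTS),).
    y = np.zeros_like(z)
    for j, f in enumerate(ACTS):
        fz = f(z)
        h = (fz - fz.min()) / (fz.max() - fz.min() + eps)  # scale to [0, 1]
        y = y + alpha[j] * (eta[j] * h + delta[j])
    return y

# Illustrative parameter values (alpha starts uniform on the simplex):
alpha = project_simplex(np.full(len(ACTS), 1.0 / len(ACTS)))
print(ensemble_forward(np.linspace(-2, 2, 8), alpha,
                       np.ones(len(ACTS)), np.zeros(len(ACTS))))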
The next two sets of activations are designed to take advantage of the ensemble method's ability to join similar functions to create a better function for each neuron.Since ReLU functions are widely regarded as the best performing activations functions for most datasets and network configurations, we introduce an ensemble of varying ReLU functions.Since rectifier neurons can "die" when the value drops below zero, our ensemble uses rectifiers with various intercepts of form f^b (z) = max(0,z+b), where b∈ℝ.We settle with five values for b, [-1.0, -0.5, 0, 0.5, 1.0], to balance around the traditional rectifier unit.The final activation set is a reformation of the absolute value function.It only consists of two mirrored ReLu functions of the form:f^1 (z) = max(0,-z) f^2 (z) = max(0,z)The behavior of the graphed function is very similar to that of an absolute value; however, our function contains individual slopes that vary by the value of α_1 and α_2.We create this function to capture elements of the data that may not necessarily react positively with respect to a rectifier unit, but still carry important information for prediction.§ EXPERIMENTS For our experiments, we use the datasets MNIST, ISOLET, CIFAR-100, and STL-10.For MNIST and ISOLET, we use small designed feed forward and convolutional neural networks.For CIFAR-100, we apply a residual neural network.Last, we design an auto-encoder strategy for STL-10.In addition, we use Theano and Titan X GPU's for all experiments.We found that the optimization function AdaDelta performs the best for all of the datasets.For each network, we set the learning rate for AdaDelta to 1.0, which is the suggested value by the authors for most cases.In each network that was designed in-house, we implemented batch normalization before each ensemble layer. We describe the architecture used for each dataset under their respective sections. The nomenclature we use for network descriptions is (16)3c-2p-10f where (16)3c describes a convolutional layer with 16 filters of size 3x3, 2p for a max-pooling layer with filter size 2x2, and 10f for 10 neurons of a feed foward layer.The weights at the ensemble layer were initialized at 1/m where m is the number of activation functions at a neuron. Furthermore, we set η = 0 and δ=1 and initialized the traditional neural network weights with the Glorot method.In addition, we initialize the residual network using the He Normal method.We train on our three activation sets as well as the same networks with rectifier units for the original networks since they are the standard in most cases.Our stopping criterion is based upon the validation error for each network except for the residual network, in which the suggested number of epochs is 82.Also, we apply AdaDelta for each optimization step for all new variables as well, α, η, and δ for each minibatch.In Table 1 below, we summarize the test accuracy (reconstruction loss for STL-10) of our datasets and various models.Each number is an average over five runs with different random seeds.Note that the largest improvement is found for the ISOLET dataset. 
Table 1: Final model results for models with and without activation ensembles (test accuracy; reconstruction loss for STL-10).

Dataset    Model     Original / Ensemble
MNIST      FFN       97.73% / 98.37%
MNIST      CNN       99.34% / 99.40%
ISOLET     FFN       95.16% / 96.28%
CIFAR-100  Residual  73.64% / 74.20%
STL-10     CAE       0.03524 / 0.03502

§.§ Comparing Activation Functions We first explore the α parameter values of our activation functions. We primarily concentrate on the first set of activations (Sigmoid, Tanh, ReLU, Soft ReLU, ExpLin, InvAbs). Since ReLU is the most common activation function in the literature, we expect it to be chosen the most by our networks. As seen in Figures 1, 3, 4, and 5, we find this to be true. However, neurons that are deeper may not choose any particular activation. In fact, at some neurons in the bottom layers, the parameters for choosing a function are nearly equal. Figure 1 illustrates the differences between layers of our activation ensembles for the in-house Feed-Forward Network model applied to the MNIST dataset. Each data point is taken at the end of an epoch during training. The neurons in the images were chosen randomly. The top two images of Figure 1, which come from the top layer of the network, show that the ReLU becomes the leading function very quickly. The other functions, with the exception of the softplus and exponential linear functions, rapidly approach zero. The bottom two images, which are neurons from the next layer in the network, still choose the rectifier unit, but not as resolutely as in the previous layer. It is interesting that the inverse absolute value function becomes very important. We experimented as well with momentum, and found that with a step of 0.1, the network still chooses a leading activation function. We find that this is a general trend for FFN's and the MNIST dataset. However, as we discuss below, this does not hold for all models or datasets. Next we compare the functions that the ensembles create. Figure 2 depicts the various sets of activations for our ensemble. The images are of the same five random neurons taken from the second layer of the FFN for the MNIST dataset in the first two figures, while the final image is taken from the second layer of the same model trained on the ISOLET dataset. The images are made by taking the final α's for each activation function, inputting a uniform vector from [-1,1], and adding the functions together. Note that we leave out the normalization technique used for activation ensembles and the values for η and δ. The first set of activations, presented in the top left of Figure 2, behaves very similarly to the exponential linear unit. However, this function appears closer to an x^3 function because of its inflection point near the origin. The second set of functions clearly forms a piecewise ReLU unit with the various pieces clearly visible. The last set appears much like an absolute value function, but differentiates itself by having varying slopes on either side of the y-axis. We observe that the best activation function for each model is the third set, with the exception of the residual network.
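The composite functions of Figure 2 can be reproduced from trained parameters in a few lines, following the recipe just described: apply the final α's to each activation over a uniform grid on [-1,1] and sum, omitting the normalization and the η and δ values. A sketch (Python with NumPy; the α values below are hypothetical placeholders, not trained weights):

import numpy as np

acts = [lambda z: 1.0 / (1.0 + np.exp(-z)),         # sigmoid
        np.tanh,                                     # hyperbolic tangent
        lambda z: np.logaddexp(0.0, z),              # soft rectifier
        lambda z: np.maximum(z, 0.0),                # linear rectifier
        lambda z: z / (1.0 + np.abs(z)),             # inverse absolute value
        lambda z: np.where(z >= 0, z, np.expm1(z))]  # exponential linear

def composite_activation(alphas, n=201):
    # Evaluate sum_j alpha_j f_j(z) on a uniform grid over [-1, 1].
    z = np.linspace(-1.0, 1.0, n)
    return z, sum(a * f(z) for a, f in zip(alphas, acts))

alphas = [0.05, 0.15, 0.10, 0.45, 0.20, 0.05]   # hypothetical learned weights
z, y = composite_activation(alphas)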
§.§ MNIST We solve the MNIST dataset using two networks, a FFN and a CNN.The FFN has the form 400f-400f-400f-10f while the CNN is of the form (32)3c-2p-(32)3c-2p-400f-10f.MNIST is not a particularly difficult dataset to solve and is one of the few image-based problems that classical FFN's can solve with ease.Thus, this problem is of particular interest since we may compare results between two models that can easily solve the classification problem with our activation ensembles.One of the aspects we explore is whether or not different activations fancy certain models.In Figure 3, we present histograms representing the share of α values for each activation function in the ensemble.On the left are histograms representing the α values for each neuron when using the activations from set 1 and on the right are for the second set of activation functions.The top left image in Figure 1 shows what we expect, namely that the ReLU dominates the layer.As we move towards the bottom of the network, the ReLU is still the most favored function; however, the difference becomes marginal.Also note that the sigmoid function is the lowest value function in all 3 layers and that the exponential linear function has the smallest range of all the functions.On the right are the set of rectifier units with varying intercepts.The dominant rectifier units in the first layer are the first and second rectifier units (max(0,x-1.0) and max(0,x-0.5)) while the final layer is dominated by the third and fourth rectifier unit (max(0,x) and max(0,x+0.5)).Also observe that the most dominant activation set in our second set does not hold nearly as high values as seen in the first set.In Figure 4, we present the same histograms as Figure 3, but for the CNN model applied to MNIST.Unsurprisingly, we observe that ReLU's dominate the top layer of the network. However, the story changes towards the bottom of the network. In fact, the hyperbolic tangent and inverse absolute value functions overtake the rectifier unit in importance. Interestingly, this third layer is also a feed forward layer after the prior convolutional layers. Thus, it appears that the transition of one type of layer to another changes the favored activation functions.Another interesting behavior is for the second set of activations.For the first two convolutional layers, there is not a dominant function. At the final activation ensemble layer, we find that the original ReLU is the chosen activation function.Therefore, we see that given a single dataset, the favored activation depends on the layer and model used for classification. 
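The per-layer histograms of Figures 3 and 4 amount to simple bookkeeping over the trained α matrix of a layer; a minimal sketch of that summary (Python with NumPy; the names and the stand-in Dirichlet data are illustrative, not the trained values):

import numpy as np

def alpha_shares(alpha_layer, act_names, bins=20):
    # alpha_layer: shape (n_neurons, n_activations); each row lies on the simplex.
    # Returns, per activation, its mean share and a histogram of shares over [0, 1].
    summary = {}
    for j, name in enumerate(act_names):
        col = alpha_layer[:, j]
        counts, _ = np.histogram(col, bins=bins, range=(0.0, 1.0))
        summary[name] = {"mean": float(col.mean()), "hist": counts}
    return summary

names = ["sigmoid", "tanh", "softplus", "relu", "invabs", "elu"]
rng = np.random.default_rng(0)
fake_alphas = rng.dirichlet(np.ones(len(names)), size=400)  # stand-in for one layer
print({k: round(v["mean"], 3) for k, v in alpha_shares(fake_alphas, names).items()})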
§.§ ISOLET ISOLET is a simple letter classification problem. The data consists of audio recordings of people uttering letters. The goal is to predict the letter said for each example (a-z). ISOLET has 7797 examples and 617 attributes. For training and testing, we split the data into 70% for training and 30% for testing. In addition, we use our in-house FFN described in the previous section (400f-400f-400f-26f). We train both a network with only ReLU's at each neuron and a network with activation ensembles. For ISOLET, we implement all three classes of ensembles we mention in the previous section. In Figure 5, we present the same images provided for the MNIST dataset. In the first set of images on the left, we see very different behavior of the α values than we did with the MNIST dataset. Most values are clumped up near one another. Though the ReLU is the leading activation function, it is not by much in the first two layers. Similar to the CNN for MNIST, the hyperbolic tangent and inverse absolute value functions move up in importance. For the second set of activation functions, we see that the α values are very close together and the network does not choose a dominant activation, even more so than for the first set. However, at the final layer, the network singles out the first rectifier unit (max(0,x-1.0)) as the worst of the set. Thus, we see that even with the same network and sets of activation functions, the model chooses different optimal activation functions based upon each individual dataset. §.§ CIFAR-100 In order to get a broad scope of networks and test the strength of activation ensembles, we decide to use a residual network for CIFAR-100, since it both garners very good results for image-based datasets and offers an additional challenge for implementing the ensemble within its structure. The network we use comes from the Lasagne recipe Github <<https://github.com/Lasagne/Recipes/tree/master/papers>>. It was originally designed for the CIFAR-10 dataset, but we modify the model for CIFAR-100. The model has three residual stacks, each of which includes five residual blocks, followed by a global pooling layer. Challenges arise from two areas while implementing activation ensembles on residual networks. First, residual networks are specifically designed for ReLU's via the residual element and initialization. Second, the placement of each activation ensemble needs to be carefully constructed to avoid disrupting the flow of information within each residual block. We attempt various approaches to activation ensembles on residual networks. The failed ideas include having an ensemble after each residual block and after the second stack but before adding in the residual. The implementation that works best is to incorporate the activation ensemble in the middle of a residual block and not at the end of the block or after. In addition, this is the only network in which the second class of activations (the 5 ReLU's with various intercepts) works best. This is an expected result because of a residual network's dependence on the ReLU. §.§ STL-10 The CAE we implement for this problem is very similar to one done by <cit.>.
However, our network contains fewer filters at each layer than their network due to insufficient GPU memory on our graphics cards. Our structure is the following for the encoder portion of the autoencoder: (32)5c-2p-(64)5c-2p-(128)5c-6p. Some CAE's use a ReLU at the end for prediction purposes, but we use the sigmoid function. We keep normalization very simple with this network by only scaling the data by 255 at the input level. The images reproduced by the network are then easy to compare by using the sigmoid function and mean-squared error. During preliminary experiments we saw that certain classes of activation functions tend to perform better, and thus we implemented only the third ensemble set, which resembles an absolute value function. We additionally tie all the weights for the encoder and decoder to restrain the model from simply finding the identity function. Since tying the weights is suitable for autoencoders, we also tie the weights associated with our activation ensembles (this includes the α, δ, and η values). We choose not to tie the maximum and minimum values we find during the normalization stage, as those are simply used for the purpose of our projection algorithm for the α variables. § CONCLUSION In our work, we introduce a new concept that we call an activation ensemble. Similar to common ensembling techniques in general machine learning, an activation ensemble is a combination of activation functions at each neuron in a neural network. We describe the implementation for standard feed-forward networks, convolutional neural networks, residual networks, and convolutional autoencoders. We create a convex combination of activation functions yet to be seen in the literature, with an algorithm to solve the new projection problem associated with our model. In addition, we find that these ensembles improve classification accuracy for MNIST, ISOLET, and CIFAR-100, and the reconstruction loss for STL-10. To gain more insight into activation functions and their relationships with the neural network model implemented for a given dataset, we explore the α's for each activation function of each model. Further, we examine one dataset, MNIST, with two models, a FFN and a CNN. This enables us to discover that the optimal activation varies between a FFN and a CNN on the same dataset (MNIST). While the top two layers of both favor the rectifier unit, the bottom layer fancies the hyperbolic tangent and inverse absolute value functions. When comparing the activation ensemble results between ISOLET and MNIST on the FFN, we observe that the optimal activation functions between datasets are also different. While MNIST esteems the ReLU, ISOLET does not favor a particular activation until the bottom layer, in which it chooses the hyperbolic tangent.
http://arxiv.org/abs/1702.07790v1
{ "authors": [ "Mark Harmon", "Diego Klabjan" ], "categories": [ "stat.ML", "cs.LG" ], "primary_category": "stat.ML", "published": "20170224223029", "title": "Activation Ensembles for Deep Neural Networks" }
In hyperbolic space, the angle of intersection and the distance classify pairs of totally geodesic hyperplanes. A similar algebraic invariant classifies pairs of hyperplanes in the Einstein universe. In dimension 3, symplectic splittings of a 4-dimensional real symplectic vector space model Einstein hyperplanes, and the invariant is a determinant. The classification contributes to a complete disjointness criterion for crooked surfaces in the 3-dimensional Einstein universe. § INTRODUCTION Polyhedra bounded by crooked surfaces form fundamental domains in the Einstein universe for Lorentzian Kleinian groups (<cit.>, <cit.>). Crooked surfaces are assembled from pieces of certain hypersurfaces, namely light cones and Einstein tori. This motivates our study of these hypersurfaces, and how they intersect. The theory of crooked planes, in the context of Minkowski space, has been very successful in understanding and classifying discrete groups of affine transformations acting properly on ℝ^3 (<cit.>, <cit.>, <cit.> and <cit.>). Crooked planes are piecewise linear surfaces in Minkowski 3-space which bound fundamental domains for proper affine actions. In 2003, Frances <cit.> studied the boundary at infinity of these quotients of Minkowski space by introducing the conformal compactification of a crooked plane. In this paper, we call conformally compactified crooked planes crooked surfaces. Recently, Danciger-Guéritaud-Kassel <cit.> have adapted crooked planes to the negatively curved anti-de Sitter space. In a note shortly following the DGK paper, Goldman <cit.> unified crooked planes and anti-de Sitter crooked planes. More precisely, Minkowski space and anti-de Sitter space can be conformally embedded in the Einstein universe in such a way that crooked planes in both contexts are subsets of a crooked surface. A crooked surface is constructed using three pieces: two wings and a stem. The wings are parts of light cones, and the stem is part of an Einstein torus. In order to understand the intersection of crooked surfaces, we first focus on Einstein tori. Our first result classifies their intersections. Let T_1, T_2 ⊂ be Einstein tori. Suppose that T_1 ≠ T_2. Then T_1 ∩ T_2 is nonempty, and exactly one of the following possibilities occurs: * T_1 ∩ T_2 is a union of two photons which intersect in exactly one point. * T_1 ∩ T_2 is a spacelike circle and the intersection is transverse. * T_1 ∩ T_2 is a timelike circle and the intersection is transverse. A single geometric invariant η(T_1,T_2), related to the Maslov index, distinguishes the three cases. * η(T_1,T_2) = 1 if and only if T_1 ∩ T_2 is a union of two photons which intersect in exactly one point, * η(T_1,T_2) > 1 if and only if T_1 ∩ T_2 is spacelike, and * η(T_1,T_2) < 1 if and only if T_1 ∩ T_2 is timelike. We next show how to further interpret this result in the three-dimensional case. The Lagrangian Grassmannian in dimension 4 is a model of the 3-dimensional Einstein universe. The relationship between the two models was studied extensively in <cit.>. We develop the theory of Einstein tori in the space of Lagrangians and characterize η as the determinant of a linear map. A simple consequence of Theorem <ref> is: Let T_1, T_2 be a pair of Einstein tori. Then T_1 ∩ T_2 is non-contractible as a subset of T_1 or T_2.
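Before proceeding, a quick numerical illustration of the invariant (ours, not part of the original text): represent each Einstein torus by a unit spacelike normal in R^{3,2}, here taken with the diagonal form diag(+,+,+,-,-), and read the trichotomy of the theorem off η = |s_1 · s_2|.

import numpy as np

Q = np.diag([1.0, 1.0, 1.0, -1.0, -1.0])   # a form of signature (3,2) on R^5

def form(u, v):
    return u @ Q @ v

def classify_pair(s1, s2, tol=1e-9):
    # s1, s2: unit spacelike normals of two distinct Einstein tori.
    assert abs(form(s1, s1) - 1.0) < tol and abs(form(s2, s2) - 1.0) < tol
    eta = abs(form(s1, s2))
    if abs(eta - 1.0) < tol:
        return eta, "two photons meeting in one point"
    return eta, ("spacelike circle" if eta > 1.0 else "timelike circle")

s1 = np.array([1.0, 0.0, 0.0, 0.0, 0.0])
s2 = np.array([0.0, 1.0, 0.0, 0.0, 0.0])            # orthogonal normals: eta = 0
s3 = np.array([np.sqrt(2.0), 0.0, 0.0, 1.0, 0.0])   # unit spacelike: 2 - 1 = 1
print(classify_pair(s1, s2))   # (0.0, 'timelike circle')
print(classify_pair(s1, s3))   # (1.414..., 'spacelike circle')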
We use this corollary to prove a complete disjointness criterion for crooked surfaces, generalizing the construction in Charette-Francoeur-Lareau-Dussault <cit.> and the criterion for disjointness of anti-de Sitter crooked planes in Danciger-Guéritaud-Kassel <cit.>: Two crooked surfaces C, C' are disjoint if and only if the four photons on the boundary of the stem of C are disjoint from C', and the four photons on the boundary of the stem of C' are disjoint from C. The Lagrangian model of the Einstein universe allows us to express the condition in this theorem explicitly in terms of symplectic products. In that model, a pair of simple inequalities guarantees that a photon does not intersect a crooked surface. Finally, we show that the criterion in Theorem <ref> reduces to the criterion for disjointness of anti-de Sitter crooked planes from <cit.> when specializing to crooked surfaces adapted to an anti-de Sitter patch. § NOTATIONS AND TERMINOLOGY If V is a vector space, denote the associated projective space by (V), defined as the space of all 1-dimensional linear subspaces of V. If v̌ ∈ V is a nonzero vector in a vector space V, then denote the corresponding point (projective equivalence class) in the projective space (V) by [v̌] ∈ (V). We call a real vector space endowed with a nondegenerate bilinear form a bilinear form space. If v̌ ∈ V is a nonzero vector in a bilinear form space (V, ), then

v̌^⊥ := {w̌ ∈ V | v̌w̌ = 0}

is a linear hyperplane in V. When v̌ is non-null, v̌^⊥ is nondegenerate and defines an orthogonal decomposition V = v̌ ⊕ v̌^⊥. More generally, if S ⊂ V is a subset, then define

S^⊥ := {w̌ ∈ V | v̌w̌ = 0, ∀ v̌ ∈ S}.

§ EINSTEIN GEOMETRY This section briefly summarizes the basics of the geometry of . For more details, see <cit.>. §.§ The bilinear form space ^n,2 Let  be an (n+2)-dimensional real vector space endowed with a signature (n,2) symmetric bilinear form

 ×  ⟶ ,  (ǔ, v̌) ↦ ǔv̌.

Define the null cone:

() := {v̌ ∈  | v̌v̌ = 0}.

The Einstein universe is the projectivization of ():

 := (()).

 carries a natural conformal Lorentzian structure coming from the product on . More precisely, smooth cross-sections of the quotient map () ⟶  determine Lorentzian structures on . Furthermore, these Lorentzian structures are conformally equivalent to each other. The orthogonal group Ø(n,2) of  acts conformally and transitively on . In fact, the group of conformal automorphisms of  is exactly Ø(n,2). §.§ Photons and light cones A photon is the projectivization (P) of a totally isotropic 2-plane P ⊂ . It corresponds to a lightlike geodesic in the conformal Lorentzian metric of . A spacelike circle (respectively timelike circle) is the projectivized null cone ((S)) of a subspace S ⊂  which has signature (2,1) (respectively signature (1,2)). A light cone is the projectivized null cone ((H)) of a degenerate hyperplane H ⊂ . Such a degenerate hyperplane H = ň^⊥ for some null vector ň ∈ (). In terms of the synthetic geometry of , the light cone defined by p = [ň] ∈  equals the union of all photons containing p. We will denote it by (p). One can consider a different homogeneous space, the space of photons of , denoted n. It admits a natural contact structure (see <cit.>) in which the photons in a light cone form a Legendrian submanifold. The contact geometry of photon space is intimately related to the conformal Lorentzian geometry of the Einstein universe. This relation stems from the incidence relation between the two spaces. We say that a point p ∈  is incident to a photon ϕ ∈ n whenever p ∈ ϕ.
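The incidence relation just described is easy to probe numerically; a short illustrative computation (Python with NumPy, again using the diagonal form diag(+,+,+,-,-) of signature (3,2)):

import numpy as np

Q = np.diag([1.0, 1.0, 1.0, -1.0, -1.0])   # bilinear form of signature (3,2)

def ip(u, v):
    return u @ Q @ v

n1 = np.array([1.0, 0.0, 0.0, 1.0, 0.0])
n2 = np.array([0.0, 1.0, 0.0, 0.0, 1.0])
# span(n1, n2) is totally isotropic, so its projectivization is a photon:
print(ip(n1, n1), ip(n2, n2), ip(n1, n2))    # 0.0 0.0 0.0

rng = np.random.default_rng(1)
a, b = rng.standard_normal(2)
p = a * n1 + b * n2                          # an arbitrary point of the photon
print(abs(ip(p, p)) < 1e-12)                 # True: every point of a photon is null
# [n1] is incident to this photon, and the light cone of [n1] consists of the
# null directions orthogonal to n1:
print(abs(ip(n1, p)) < 1e-12)                # True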
By extension, two points p,q∈ are called incident when they are incident to a common photon, and two photons ϕ,ψ∈n are called incident when they intersect in a common point. §.§ Minkowski patches The complement inof a light cone is a Minkowski patch. Its natural structure is Minkowski space , an affine space with a parallel Lorentzian metric. Any geodesically complete simply-connected flat Lorentzian manifold is isometric to . As such it is the model space for flat Lorentzian geometry.Following <cit.>, forwe use quadratic form of signature (n,2) given byv̌v̌ := v_1^2 + v_2^2 + … + v_n-1^2 - v_n^2 - v_n+1v_n+2and work in the embedding of Minkowski space⟶n v̌v^n⟼v̌v^n‖v̌‖^2 - (v^n)^21.In the expression above,v̌ :=v^1⋮v^n-1 ∈is a vector in Euclidean space with Euclidean norm ‖v̌‖, and the Lorentzian norm in is:(v̌, v^n) ⟼‖v̌‖^2 - (v^n)^2. The complement of this embedding ofis a light cone, and we will denote its vertex by p_∞. This vertex is called the improper point in <cit.>, and its coordinates in a basis as above are:p_∞⟷0̌0 1 0 .The closure inof every non-null geodesic γ incontains p_∞ and the union γ∪{p_∞} is a spacelike circle or a timelike circle according to the nature of γ. Conversely every timelike or spacelike circle which contains p_∞ is the closure of a timelike or spacelike geodesic in .The light cone of a point which is not p_∞, but belongs to its light cone, intersects the Minkowski patchin an affine hyperplane upon which the Lorentzian structure onrestricts to a field of degenerate quadratic forms,that is,a null hyperplane.If we choose an origin p_0 for a Minkowski patch, then we get an identification of the patch with a Lorentzian vector space. The trichotomy of vectors into timelike, spacelike and lightlike has an intrinsic interpretation with respect to p_0 and p_∞:A point is : * timelike if it lies on some timelike circle through p_0 and p_∞,* spacelike if it lies on some spacelike circle through p_0 and p_∞, and* lightlike if it lies on a photon through p_0. One and only one of these three happens for every point in the Minkowski patch. §.§ Einstein hyperplanes An Einstein hyperplaneH corresponds to a linear hyperplane ℓ^⊥⊂ orthogonal to a spacelike line ℓ⊂. A linear hyperplane ℓ^⊥ is conveniently described by a normal vector š⊂ℓ,which we may assume satisfies šš = 1. In that case š is determined up to multiplication by ± 1.The hyperplane š^⊥ is a bilinear form space isomorphic to ^n-1,2 and its projectivized null cone is a model for n-1. In dimension n=3, an Einstein hyperplane is homeomorphic to a 2-torus S^1× S^1 so we will call it an Einstein torus. Under the embedding (<ref>), an Einstein hyperplane which passes through the point p_∞ meets the Minkowski patchin an affine hyperplane upon which the Lorentzian structure onrestricts to a Lorentzian metric, that is, a timelike hyperplane.Since an Einstein torus is a totally geodesic embedded copy of 2, it has a pair of natural foliations by photons. This is because the light cone of a point in 2 is a pair of photons through that point. As described in Goldman <cit.> for n=3, the complement of an Einstein hyperplane has the natural structure of the double covering of anti-de Sitter space. This identification is presented in more detail in section <ref>.§ PAIRS OF EINSTEIN HYPERPLANESThe purpose of this section is to define the invariant η≥ 0 characterizing pairs of hyperplanes inand to prove theorem <ref>. We describe the moduli space of equivalence classes of pairs, and reduce to the case n=3. 
Then <ref> reinterpretsin terms of symplectic geometry using the local isomorphism ⟶Ø(3,2).§.§ Pairs of positive vectorsA linearly independent pair of two unit-spacelike vectors š_1, š_2 spans a 2-plane ⟨š_1,š_2⟩⊂ which is: * Positive definite ⟺ |š_1š_2| < 1;* Degenerate ⟺ |š_1š_2| = 1;* Indefinite ⟺ |š_1š_2| > 1.The positive definite and indefinite cases respectively determine orthogonal splittings :≅^n,2 = ^2,0⊕^n-2,2≅^n,2 = ^1,1⊕^n-1,1.In the degenerate case, the null space is spanned by š_1 ±š_2, whereš_1š_2 = ∓ 1.By replacing š_2 by -š_2 if necessary, we may assume that š_1š_2 = 1. Then š_1 - š_2 is null. Sinceitself is nondegenerate, there exists v̌_3∈ such that(š_1 -š_2)v̌_3 = 1.Thenš_1,š_2,v̌_3 span a nondegenerate 3-plane of signature (2,1).In all three cases, there is a 5-dimensional subspace of signature (3,2) containing š_1 and š_2. For that reason, the discussion of pairs of Einstein hyperplanes can be reduced to the case of n=3.The absolute value of the productη(H_1, H_2) := |š_1·š_2 |is a nonnegative real number, depending only on the pair of Einstein hyperplanes H_1 and H_2. Specifying the above discussion to the case n=3 we have proved Theorem <ref> :* If the span of š_1,š_2 is positive definite (η(H_1,H_2)<1), then the intersection of the corresponding Einstein tori is the projectivised null cone of a signature (1,2) subspace, which is a timelike circle. * If the span of š_1,š_2 is indefinite (η(H_1,H_2)>1), then the intersection is the projectivised null cone of a signature (2,1) subspace, which is a spacelike circle. * Finally, if the span of š_1,š_2 is degenerate (η(H_1,H_2)=1), the span š_1 + š_2 is a degenerate 2-plane in š_1 + š_2 + v̌_3 ≅^2,1. The orthogonal complement of this ^2,1 is of signature (1,1) and is contained in (š_1 + š_2)^⊥ = š_1^⊥∩š_2^⊥. Since this last subspace is of dimension 3 and must also contain the degenerate direction of š_1 + š_2, it is a degenerate subspace with signature (+,-,0). Its null cone is exactly the union of two isotropic planes intersecting in the degenerate direction, so when projectivising we get a pair of photons intersecting in a point. The intersection of two Einstein tori is non-contractible in each of the two tori.An Einstein torus is a copy of the 2-dimensional Einstein universe. Explicitly, we can write it as ℙ() whereis the null cone in ℝ^2,2. A computation shows that all timelike circles are homotopic, all spacelike circles are homotopic and these two homotopy classes together generate the fundamental group of the torus. Similarly, photons are homotopic to the sum of these generators.§.§ Involutions in Einstein toriOrthogonal reflection in š defines an involution ofwhich fixes the corresponding hyperplane H=š^⊥.The orthogonal reflection in a positive vector š is defined by:R_š(v̌) = v̌ - 2v̌š/ššš. We compute the eigenvalues of the composition R_šR_š'̌, where š,š'̌ are unit spacelike vectors, and relate this to the invariant η.The orthogonal subspace to the plane spanned by š and š'̌ is fixed pointwise by this composition. Therefore, 1 is an eigenvalue of multiplicity n. In order to determine the remaining eigenvalues, we compute the restriction of R_šR_š'̌ to the subspace š + š'̌. R_šR_š'̌(š)= R_š(š - 2(šš'̌)š'̌)=-š - 2(šš'̌)(š'̌-2(š'̌š)š)=(4 (š'̌š)^2-1)š -2(š'̌š)š'̌. 
R_šR_š'̌(š'̌)= R_š(-š'̌)= -š'̌ + 2(šš'̌)š.The matrix representation of R_šR_š'̌ in the basis š,š'̌ is therefore:[ 4 (š'̌š)^2-12(šš'̌); -2(š'̌š) -1 ].The eigenvalues of this matrix are:2(šš'̌)^2 - 1 ± 2 (šš'̌)√((šš'̌)^2-1).We observe that they only depend on the invariant η=|šš'̌|. The composition of involutions has real distinct eigenvalues when the intersection is spacelike, complex eigenvalues when the intersection is timelike, and a double real eigenvalue when the intersection is a pair of photons.The case when š_1·š_2 = 0 is special: in that case the two involutions commute and we will say that the Einstein hyperplanes are orthogonal. As observed at the end of section <ref>, the complement of an Einstein torus inis a model for the double covering space of anti-de Sitter space ^3, which has a complete Lorentzian metric of constant curvature -1. In this conformal model of ^3 (see <cit.>), indefinite totally geodesic 2-planes are represented by tori which are orthogonal to ∂^3. § THE SYMPLECTIC MODELWe describe a model for Einstein 3-space in terms of 4-dimensional symplectic algebra, an alternative approach which is simpler for some calculations.Let (,ω) be a 4-dimensional real symplectic vector space, that is,is a real vector space of dimension 4 and × is a nondegenerate skew-symmetric bilinear form. Let 𝗏𝗈𝗅∈Λ^4(V) be the element defined by the equation (ω∧ω)(𝗏𝗈𝗅)=-2. The second exterior power Λ^2() admits a nondegenerate symmetric bilinear form · of signature (3,3) defined by (ǔ∧v̌)∧(ǔ'∧v̌')=(ǔ∧v̌)·(ǔ'∧v̌')𝗏𝗈𝗅. The kernel:= (ω)⊂Λ^2()inherits a symmetric bilinear form which has signature (3,2).Define the vector ω^*∈Λ^2 to be dual to ω by the equationω^*·(ǔ∧v̌) = ω(ǔ,v̌),for all ǔ,v̌∈. Because of our previous choice of 𝗏𝗈𝗅, we have ω^*ω^*=-2. The bilinear form , together with the vector ω^* define a reflection𝖱_ω^*:Λ^2() →Λ^2() α ↦α + (αω^*)ω^*.The fixed set of this reflection is exactly the vector subspaceorthogonal to ω^*.The Plücker embedding ι : 𝖦𝗋(2,) →(Λ^2()) maps 2-planes into lines in Λ^2(). We say that a plane inis Lagrangian if the form ω vanishes identically on pairs of vectors in that plane. If we restrict ι to Lagrangian planes, then the image is exactly the set of null lines in .The form ω yields a relation of (symplectic) orthogonality on 2-planes in . Lagrangian planes are orthogonal to themselves, and non-Lagrangian planes have a unique orthogonal complement which is also non-Lagrangian. The following proposition relates orthogonality inwith an operation on Λ^2(). A pair of 2-dimensional subspaces S,T⊂ are orthogonal with respect to ω if and only if [𝖱_ω^*(ι(S))] = [ι(T)].First, assume S is Lagrangian. This means that S=S^⊥, and that ι(S)∈ω^*⊥. Hence,𝖱_ω^*(ι(S)) = ι(S)=ι(S^⊥). Next, if S is not Lagrangian, then we can find bases (ǔ,v̌) of S and (ǔ',v̌') of S^⊥ satisfying ω(ǔ,v̌)=ω(ǔ',v̌')=1 and all other products between these four are zero. Then,𝗏𝗈𝗅=-ǔ∧v̌∧ǔ'∧v̌'andω^*=-ǔ∧v̌ -ǔ'∧v̌'.Consequently,[𝖱_ω^*(ι(S))] = [ǔ∧v̌ + ω(ǔ,v̌)ω^*] = [-ǔ'∧v̌'] = [ι(S^⊥)].§.§ Symplectic interpretation of Einstein space and photon spaceThe natural incidence relation betweenand 3 is described in the two algebraic models ( and ) as follows. 
A point p∈ and a photon ϕ∈3 are incident if and only if (p,ϕ) satisfies one of the two equivalent conditions: * The null line incorresponding to p lies in the isotropic 2-plane incorresponding to ϕ.* The Lagrangian 2-plane incorresponding to p contains the line incorresponding to ϕ.These two are equivalent because of the following proposition : Let P,Q⊂ be two-dimensional subspaces. Then, P∩ Q= 0 if and only if ι(P)ι(Q)≠0.Choose bases ǔ,v̌ of P and ǔ',v̌' of Q. Then,ǔ∧v̌∧ǔ'∧v̌' ≠ 0if and only if ǔ,v̌,ǔ',v̌' spanwhich is equivalent to P and Q being transverse.The light cone (p) of a point p∈ is the union of all photons containing p. It corresponds to the orthogonal hyperplane [p]^⊥⊂ of the null line corresponding to p. In photon space (), the photons containing p form the projective space (L) of the Lagrangian 2-plane L corresponding to p.§.§ Timelike or spacelike triples and the Maslov indexFixing a pair of non-incident points in the Einstein universe induces a trichotomy on points, as explained in section <ref>. The corresponding data in the Lagrangian model is related to the Maslov index of a triple of Lagrangians.Two non-incident points correspond to a pair of transverse Lagrangians L,L'. This induces a splitting =L⊕ L'. Together with the symplectic form ω, this splitting defines a quadratic form defined byq_L,L'(v̌) := ω(π_L(v̌),π_L'(v̌)). The Maslov index of a triple of pairwise transverse Lagrangians L,P,L' is the integer m(L,P,L')=sign(q_L,L'|_P), where sign(q)is the difference between the number of positive and negative eigenvalues of q. Transversality implies that q_L,L' restricted to P is nondegenerate. This index classifies orbits of triples of pairwise transverse Lagrangians <cit.>.Lagrangians which are nontransverse to L correspond to lightlike points, Lagrangians P with |m(L,P,L')|=2 correspond to timelike points, and Lagrangians P with m(L,P,L')=0 correspond to spacelike points.§.§ Nondegenerate planes and symplectic splittings We describe the algebraic structures equivalent to an Einstein torus in . As a reminder, these are hyperplanes of signature (2,2) inside ≅^3,2, and describe surfaces in 3 homeomorphic to a 2-torus.In symplectic terms, an Einstein torus corresponds to a splitting ofas a symplectic direct sum of two nondegenerate 2-planes. Let us detail this correspondence.Define a 2-dimensional subspace S⊂ to be nondegenerate if and only if the restriction ω|_S is nondegenerate. A nondegenerate 2-plane S⊂ determines a splitting as follows. The plane S^⊥ := {v̌∈|ω(v̌,S) = 0}is also nondegenerate, and defines a symplectic complement to S. In other words,splits as an (internal) symplectic direct sum:= S ⊕ S^⊥.The corresponding Einstein torus is then the set of Lagrangians which are non-transverse to S (and therefore also to S^⊥).The lines in S determine a projective line in 3 which is not Legendrian. Conversely, non-Legendrian projective lines in 3 correspond to nondegenerate 2-planes. This non-Legendrian line in 3, as a set of photons, corresponds to one of the two rulings of the Einstein torus. The other ruling corresponds to the line (S^⊥). In order to make explicit the relationship between the descriptions of Einstein tori in the two models, define a map μ as follows:μ : 𝖦𝗋(2,)→() S↦[ι(S) + 1/2ω(ι(S))ω^*],where ι(S) is any representative inof the projective class ι(S).The map μ is the composition of the Plücker embedding ι with the orthogonal projection onto . 
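As an illustration of the Maslov index (this numerical sketch is ours, not part of the original text), the following Python fragment computes m(L,P,L') for triples of pairwise transverse Lagrangian planes in (ℝ^4, ω), with ω(e_1,e_3) = ω(e_2,e_4) = 1; the test planes P = ⟨e_1 + a e_3, e_2 + b e_4⟩ are illustrative choices that realize the indices 2, 0 and -2.

import numpy as np

# Symplectic form on R^4: omega(e1,e3) = omega(e2,e4) = 1, other basis products zero.
Omega = np.zeros((4, 4))
Omega[0, 2] = Omega[1, 3] = 1.0
Omega[2, 0] = Omega[3, 1] = -1.0

def omega(u, v):
    return u @ Omega @ v

def maslov(L, P, Lp):
    """Maslov index m(L, P, L') = signature of q_{L,L'} restricted to P.

    L, P, Lp are 4x2 matrices whose columns span pairwise transverse
    Lagrangian planes; q_{L,L'}(v) = omega(pi_L v, pi_{L'} v)."""
    B = np.hstack([L, Lp])              # basis adapted to V = L + L'
    C = np.linalg.inv(B)                # coordinates in that basis
    piL  = B[:, :2] @ C[:2, :]          # projection onto L along L'
    piLp = B[:, 2:] @ C[2:, :]          # projection onto L' along L
    # Gram matrix of q_{L,L'} restricted to P, in the column basis of P
    Q = np.array([[omega(piL @ P[:, i], piLp @ P[:, j])
                   for j in range(2)] for i in range(2)])
    Q = (Q + Q.T) / 2                   # polarization of the quadratic form
    ev = np.linalg.eigvalsh(Q)
    return int(np.sum(ev > 1e-12) - np.sum(ev < -1e-12))

e = np.eye(4)
L  = np.column_stack([e[0], e[1]])      # span(e1, e2), Lagrangian
Lp = np.column_stack([e[2], e[3]])      # span(e3, e4), Lagrangian
for a, b in [(1, 1), (1, -1), (-1, -1)]:
    P = np.column_stack([e[0] + a * e[2], e[1] + b * e[3]])  # Lagrangian
    print((a, b), maslov(L, P, Lp))     # prints 2, 0, -2

In accordance with the trichotomy above, the planes with |m| = 2 correspond to timelike points with respect to the pair (L, L'), and the plane with m = 0 to a spacelike point.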
For S a nondegenerate plane, the image of μ is always a spacelike line, and μ(S)=μ(S^⊥).For the first part, let š be any vector representative of the line ι(S). Then,(š + 1/2ω(š)ω^*) ∧(š +1/2ω(š)ω^*) = 1/2ω(š)^2 𝗏𝗈𝗅,and therefore μ(S) is spacelike.The second part is a consequence of the correspondence between orthogonal complements and reflection in ω^* (Proposition <ref>) and the fact that a vector and its reflected copy have the same orthogonal projection to the hyperplane of reflection.The map μ induces a bijection between spacelike lines inand symplectic splittings of . Under the Plücker embedding ι, the Einstein torus defined by the symplectic splitting S⊕ S^⊥ is sent to the Einstein torus defined by the spacelike vector μ(S)∈.Let ǔ∈ be a spacelike vector normalized so that ǔ·ǔ= 2. Then, both vectors ǔ±ω^* are null. By the fact that null vectors in Λ^2() are decomposable, each ǔ±ω^* corresponds to a 2-plane in . These 2-planes are nondegenerate since(ǔ±ω^*) ∧ω^* = -ω(ǔ±ω^*)𝗏𝗈𝗅 = 2≠ 0.The two planes ǔ±ω^* are orthogonal since they are the images of each other by the reflection 𝖱_ω^*, and so they are the summands for a symplectic splitting of .This map is inverse to the projection μ defined above.To prove the last statement in the proposition, we apply proposition <ref>. The Einstein torus defined by the splitting S,S^⊥ is the set of Lagrangian planes which intersect S (and S^⊥) in a nonzero subspace. Let P be such a plane. Then, ι(S)ι(P)=0, which means that(ι(S) + 1/2 (ι(S)ω^*)ω^*)ι(P)=0,so ι(P) is in the Einstein torus defined by the orthogonal projection μ(S). Similarly, if ι(P) is orthogonal to μ(S) then P intersects S in a nonzero subspace. §.§ Graphs of linear mapsNow we describe pairs of Einstein tori in terms of symplectic splittings of (,ω) more explicitly.Let A,B be vector spaces of dimension 2 and A⊕ B their direct sum. If AB is a linear map, then the graph of f is the linear subspace (f)⊂ A⊕ B consisting of all ǎ⊕ f(ǎ), where ǎ∈ A. Every 2-dimensional linear subspaceL ⊂ A⊕ B which is transverse to B = 0 ⊕ B⊂ A⊕ B equals (f) for a uniquef. Furthermore, L = (f) is transverse to A = A ⊕ 0 if and only if f is invertible, in which case L = (f^-1) for the inverse map BA.Now, suppose A and B are endowed with nondegenerate alternating bilinear forms ω_A, ω_B, respectively. Let AB be a linear map. Its adjugate is the linear mapBAdefined as the compositionBB^*A^*Awhereω_A^#, ω_B^# are isomorphisms induced by ω_A, ω_B respectively, and f^† is the transpose of f. If ǎ_1,ǎ_2 and b̌_1,b̌_2 are bases of A and B respectively withω_A(ǎ_1,ǎ_2)= 1ω_B(b̌_1,b̌_2)= 1,then the matrices representing f and (f) in these bases are related by:f_11f_12f_21f_22 =f_22-f_12-f_21f_11.In particular, if f is invertible, then(f)=(f) f^-1where (f) is defined by f^*(ω_B) = (f) ω_A. Let = S ⊕ S^⊥. Let SS^⊥ be a linear map and let P = (f)⊂ be the corresponding 2-plane inwhich is transverse to S^⊥. * P is nondegenerate if and only if (f) ≠ -1.* If P is nondegenerate, then its complement P^⊥ is transverse to S, and equals the graphP^⊥ = (-(f) ),of the negative of the adjugate map to fS^⊥S. Choose a basis ǎ, b̌ for S. Then ǎ⊕ f(ǎ) and b̌⊕ f(b̌) define a basis for P, andω(ǎ⊕ f(ǎ),b̌⊕ f(b̌)) = ω(ǎ,b̌) + ω( f(ǎ),f(b̌))= (1 + (f)) ω(ǎ,b̌),since, by definition,ω( f(ǎ),f(b̌)) = (f) ω(ǎ,b̌).Thus P is nondegenerate if and only if 1 + (f)≠ 0, as desired.For the second assertion, suppose that P is nondegenerate. 
Since P, P^⊥, S, S^⊥ ⊂ V are each 2-dimensional, the following conditions are equivalent: * P is transverse to S^⊥;* P ∩ S^⊥ = 0;* P^⊥ + S = V;* P^⊥ is transverse to S.Thus P^⊥ = Graph(g) for a linear map g : S^⊥ → S. We express the condition that ω(P, P^⊥) = 0 in terms of f and g: for š ∈ S and ť ∈ S^⊥, the symplectic product is zero if and only if ω(š + f(š), ť + g(ť)) = ω(š, g(ť)) + ω(f(š), ť) vanishes. This condition easily implies that g = -adj(f), as claimed.The following proposition relates the invariant η defined for a pair of spacelike vectors with the determinant invariant associated to a pair of symplectic splittings.Let S ⊕ S^⊥ be a symplectic splitting and f : S → S^⊥ be a linear map with det(f) ≠ -1. Let T = Graph(f) be the symplectic plane defined by f. Then, η(μ(S), μ(T)) = |1 - det(f)|/|1 + det(f)|. Let ǔ, v̌ be a basis for S such that ω(ǔ,v̌) = 1. Then ǔ + f(ǔ), v̌ + f(v̌) is a basis for T. Moreover, ǔ∧v̌∧(ǔ + f(ǔ))∧(v̌ + f(v̌)) = ǔ∧v̌∧f(ǔ)∧f(v̌).We can compute which multiple of 𝗏𝗈𝗅 this last expression represents by using the normalization (ω∧ω)(𝗏𝗈𝗅) = -2 and the computation (ω∧ω)(ǔ∧v̌∧f(ǔ)∧f(v̌)) = 2 det(f).We deduce that ǔ∧v̌∧f(ǔ)∧f(v̌) = -det(f) 𝗏𝗈𝗅. Using the product formula from lemma <ref>, we find that √2 ǔ∧v̌ + ω^*/√2 is a unit spacelike representative of μ(S), and √2 (ǔ + f(ǔ))∧(v̌ + f(v̌))/(1 + det(f)) + ω^*/√2 is a unit spacelike representative of μ(T). Their product is -2det(f)/(1 + det(f)) + 1 = (1 - det(f))/(1 + det(f)), proving the proposition.§ EXAMPLES OF PAIRS OF TORIThe purpose of this section is to describe a few basic examples illustrating our general theory. These examples include indefinite affine planes (Einstein tori containing the distant point) and one-sheeted hyperboloids. §.§ The inner product space associated to (V,ω)Let e_1, e_2, e_3, e_4 ∈ V be a basis for which the symplectic form ω is defined by ω(e_1,e_3) = ω(e_2,e_4) = 1, with the other basis products zero. We define the inner product on 𝖶 = ker(ω) ⊂ Λ^2(V), which has signature (3,2).Let e^1, e^2, e^3, e^4 ∈ V^* be the basis dual to e_1, e_2, e_3, e_4; in this basis ω = e^1∧e^3 + e^2∧e^4, and the corresponding volume form is: ω∧ω = -2 e^1∧e^2∧e^3∧e^4.Denote the element ω^* ∈ Λ^2(V) dual to ω by: ω^* := (e_1∧e_3 + e_2∧e_4)/2, so that ω(ω^*) = 1. For α ∈ Λ^2(V), α∧ω^* = ω(α)(-e_1∧e_2∧e_3∧e_4/2).For α, β ∈ Λ^2(V), define their inner product α·β by: α∧β =: (α·β)(e_1∧e_2∧e_3∧e_4/2), so that ω^*·ω^* = -1 and ω(α) = -ω^*·α.Projection onto 𝖶 = ker(ω) = (ω^*)^⊥ is: Λ^2(V) ⟶ 𝖶, α ⟼ α - ω(α)ω^*.The inner product (<ref>) on Λ^2(V) has signature (3,3). Since ω^*·ω^* = -1, the induced bilinear form on its orthogonal complement 𝖶 is nondegenerate with signature (3,2). §.§ Symplectic splittings from positive vectorsIf v ∈ 𝖶 is a positive vector, say with v·v = 1, then both vectors v ± ω^* are null (since ω^*·ω^* = -1 and v·ω^* = 0). By the well-known fact that null vectors in Λ^2(V) are decomposable, each v ± ω^* corresponds to a 2-plane in V. This 2-plane is nondegenerate since (v ± ω^*)∧ω^* = -ω(v ± ω^*)(e_1∧e_2∧e_3∧e_4/2) ≠ 0.These two planes are the summands for a symplectic splitting of V. §.§ Some specific examplesThe reference <cit.> uses an inner product space (denoted ℝ^{3,2}) which has coordinates (X,Y,Z,U,V) and whose inner product has the associated quadratic form X^2 + Y^2 - Z^2 - UV; compare (<ref>).
(Reference <cit.> uses the standard diagonal form X^2 + Y^2 + T^2 - Z^2 - W^2, where the coordinates (X,Y,T,Z,W) correspond to T := (U-V)/2, W := (U+V)/2.)The coordinate basis of Λ^2(V) projects to the following basis for 𝖶: { e_1∧e_2, e_1∧e_3 - e_2∧e_4, e_1∧e_4, e_2∧e_3, e_3∧e_4 }, and (X,Y,Z,U,V) ∈ ℝ^{3,2} corresponds to: (1/2)( X(e_1∧e_3 - e_2∧e_4) + Y(e_1∧e_2 + e_3∧e_4) + Z(e_1∧e_2 - e_3∧e_4) - U e_1∧e_4 + V e_2∧e_3 ).The inner product is then (X_1,Y_1,Z_1,U_1,V_1)·(X_2,Y_2,Z_2,U_2,V_2) = X_1X_2 + Y_1Y_2 - Z_1Z_2 - (U_1V_2 + V_1U_2)/2, as desired.§.§ Planes parallel to the yz-plane Let x_0 ∈ ℝ. The plane defined by x = x_0 in Minkowski space completes to an Einstein torus denoted 𝒳(x_0). In ℝ^{3,2}, it corresponds to the hyperplane defined by X = x_0 V, or, equivalently, (X,Y,Z,U,V)·ξ_{x_0} = 0, where ξ_{x_0} = (1,0,0,2x_0,0) = (1/2)(e_1∧e_3 - e_2∧e_4 - 2x_0 e_1∧e_4) is unit positive.Now we apply the construction from <ref>: since ξ_{x_0} + ω^* = e_1∧e_3 - x_0 e_1∧e_4 = e_1∧(e_3 - x_0 e_4), ξ_{x_0} - ω^* = -e_2∧e_4 - x_0 e_1∧e_4 = -(e_2 + x_0 e_1)∧e_4, the corresponding symplectic splitting V = S_{x_0} ⊕ S_{x_0}^⊥ is defined by: S_{x_0} := ⟨e_1, e_3 - x_0 e_4⟩, S_{x_0}^⊥ := ⟨e_2 + x_0 e_1, e_4⟩.These nondegenerate 2-planes are graphs of linear maps as follows: S_{x_0} = Graph(f_{x_0}) for f_{x_0} : S_0 → S_0^⊥ given by e_1 ⟼ 0, e_3 ⟼ -x_0 e_4.However, S_{x_0}^⊥ is not transverse to S_0^⊥ and cannot be represented as such a graph. On the other hand, S_{x_0}^⊥ is transverse to S_0, and S_{x_0}^⊥ = Graph(f_{x_0}^⊥) for the map f_{x_0}^⊥ : S_0^⊥ → S_0 given by e_2 ⟼ x_0 e_1, e_4 ⟼ 0.Similarly, S_{x_0} is not transverse to S_0 and cannot be represented as a graph of a map S_0^⊥ → S_0. In all of these cases det(f) = 0.For x_0 ≠ 0, the tori 𝒳(x_0) and 𝒳(0) intersect in a pair of ideal photons intersecting at the distant point; these photons correspond to the isotropic planes {(0, Y, ±Y, U, 0) | Y, U ∈ ℝ} ⊂ ℝ^{3,2}. §.§ The xz-planeDenote by 𝒴 the Einstein torus corresponding to the xz-plane y = 0. It corresponds to the hyperplane Y = 0 in ℝ^{3,2}, the orthogonal complement of ξ := (1/2)(e_1∧e_2 + e_3∧e_4).Since ξ + ω^* = (1/2)(e_1 - e_4)∧(e_2 + e_3), ξ - ω^* = -(1/2)(e_1 + e_4)∧(-e_2 + e_3), the corresponding symplectic splitting is: V = ⟨e_1 - e_4, e_2 + e_3⟩ ⊕ ⟨e_1 + e_4, -e_2 + e_3⟩.Let V = S ⊕ S^⊥ be the symplectic splitting corresponding to 𝒳(0). Give S and S^⊥ the ordered bases (e_1, e_3), (e_2, e_4), respectively. In terms of these bases of S and S^⊥, the summands corresponding to 𝒴 are the graphs of [0 -1; 1 0] and [0 1; -1 0], respectively. 𝒳(0) meets this torus in the z-axis defined by x = y = 0, a timelike geodesic (particle) in Minkowski space. §.§ The one-sheeted hyperboloid Another interesting torus is the one-sheeted hyperboloid ℋ defined by x^2 + y^2 - z^2 = 1 in Minkowski space. This corresponds to the hyperplane U = V in ℝ^{3,2}. The unit positive vector normal to this hyperplane is: ξ(ℋ) := (0,0,0,-1,1) ⟷ (1/2)(e_1∧e_4 + e_2∧e_3) ∈ 𝖶.The corresponding symplectic splitting is V = H ⊕ H^⊥, where h_1 := e_1 + e_2, h_2 := e_3 + e_4 and h_1^⊥ := e_1 - e_2, h_2^⊥ := e_3 - e_4 are bases of H and H^⊥ respectively. Since h_1 + h_1^⊥ ∈ S_{x_0} and h_2 + ((1+x_0)/(1-x_0)) h_2^⊥ ∈ S_{x_0}, we have S_{x_0} = Graph(f) for f : H → H^⊥ with matrix f = [1 0; 0 (1+x_0)/(1-x_0)].The intersection ℋ ∩ 𝒳(x_0) will be a pair of photons if x_0 = ±1, a timelike circle if -1 < x_0 < 1, and a spacelike circle if |x_0| > 1 (see fig. <ref>). The determinant det(f) is never -1 (since all the 2-planes are nondegenerate). The ruled torus corresponds to one of the two summands, and the torus itself corresponds to the splitting, that is, to the pair consisting of f and the negative of its adjugate.
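The examples above can be checked mechanically. The short Python sketch below is ours (the numerical values of x_0 are arbitrary); it verifies, for the pair ℋ and 𝒳(x_0), that the invariant η computed from the inner product of the unit positive vectors coincides with |1 - det f|/|1 + det f| for f = diag(1, (1+x_0)/(1-x_0)), and classifies the intersection accordingly.

import numpy as np

def ip(a, b):
    """Inner product on R^{3,2} with quadratic form X^2 + Y^2 - Z^2 - UV."""
    X1, Y1, Z1, U1, V1 = a
    X2, Y2, Z2, U2, V2 = b
    return X1*X2 + Y1*Y2 - Z1*Z2 - (U1*V2 + V1*U2) / 2

for x0 in [0.3, 0.5, 2.0]:
    xi_plane = np.array([1, 0, 0, 2*x0, 0])   # torus of the plane x = x0
    xi_hyp   = np.array([0, 0, 0, -1, 1])     # torus of x^2 + y^2 - z^2 = 1
    assert abs(ip(xi_plane, xi_plane) - 1) < 1e-12   # both unit positive
    assert abs(ip(xi_hyp, xi_hyp) - 1) < 1e-12
    eta = abs(ip(xi_plane, xi_hyp))           # invariant of the pair of tori
    d = (1 + x0) / (1 - x0)                   # det f for the graph map H -> H_perp
    assert abs(eta - abs(x0)) < 1e-12
    assert abs(eta - abs(1 - d) / abs(1 + d)) < 1e-12
    kind = "pair of photons" if abs(eta - 1) < 1e-12 else \
           ("timelike" if eta < 1 else "spacelike")
    print(f"x0 = {x0}: eta = {eta:.3f}, intersection: {kind} circle")

The script confirms directly that ξ(ℋ)·ξ_{x_0} = -x_0, so η = |x_0|, in agreement with the determinant formula of the proposition above.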
§.§ Vertical translatesTranslating ℋ by the vertical vector (0,0,z_0) yields ℋ(z_0) := { x^2 + y^2 - (z - z_0)^2 = 1 }, which is defined by the unit positive vector ξ(ℋ(z_0)) := (0, 0, z_0, -(1+z_0^2), 1).These vectors have inner products: ξ(ℋ(z_1))·ξ(ℋ(z_2)) = 1 + (1/2)(z_1 - z_2)^2.In particular, ℋ(z_0) ∩ ℋ(-z_0) = { x^2 + y^2 = 1 + z_0^2, z = 0 }, the horizontal circle of radius √(1+z_0^2), and det(f) = -(1 + z_0^2)/z_0^2.Here is a slight variant, where the intersection is the horizontal unit circle x^2 + y^2 - 1 = z = 0.Take the unit positive vectors: ξ_± := ±(0, 0, ±z_0, -√(1+z_0^2), √(1+z_0^2)).The corresponding tori are the hyperboloids defined by: x^2 + y^2 - (z ∓ z_0/√(1+z_0^2))^2 = 1/(1+z_0^2).The intersection of these two tori equals the horizontal unit circle. The inner product ξ_+·ξ_- = -(1 + 2z_0^2) < -1.The corresponding determinant is d = -z_0^2/(1 + z_0^2), for which -1 < d < 0.§ DISJOINT CROOKED SURFACES In this section we apply the techniques developed above in order to prove a full disjointness criterion for pairs of crooked surfaces.We work in the symplectic framework of section <ref> with the symplectic vector space (V,ω).Let ǔ_+, ǔ_-, v̌_+, v̌_- be four vectors in V such that ω(ǔ_+,v̌_-) = ω(ǔ_-,v̌_+) = 1 and all other products between these four vanish. This means that we have Lagrangians P_0 := ⟨v̌_+, v̌_-⟩, P_∞ := ⟨ǔ_+, ǔ_-⟩, and P_± := ⟨v̌_±, ǔ_±⟩ representing the points of intersection of the photons associated to [ǔ_+], [ǔ_-], [v̌_+], [v̌_-]. We call this configuration of four points and four photons a lightlike quadrilateral.The crooked surface C determined by this configuration is a subset of the Einstein universe consisting of three pieces: two wings and a stem (see Figure <ref>). The two wings are foliated by photons, and we will denote by 𝒲_+, 𝒲_- the sets of photons covering the wings. Each wing is a subset of the light cone of P_+ and P_-, respectively. Identifying points of ℙ(V) with the photons they represent, the foliations are as follows: 𝒲_+ = {[t ǔ_+ + s v̌_+] | ts ≥ 0}, 𝒲_- = {[t ǔ_- + s v̌_-] | ts ≤ 0}.We will sometimes abuse notation and use the symbol 𝒲_± to denote the collection of points in the Einstein universe which is the union of these collections of photons.The stem 𝒮 is the subset of the Einstein torus determined by the splitting S_1 ⊕ S_2 := ⟨ǔ_+, v̌_-⟩ ⊕ ⟨ǔ_-, v̌_+⟩ consisting of timelike points with respect to P_0, P_∞: 𝒮 = { L = ⟨w̌, w̌'⟩ | w̌ ∈ S_1, w̌' ∈ S_2, |m(P_0,L,P_∞)| = 2 }.Note that this definition gives only the interior of the stem as defined in <cit.>. A crooked surface is the closure in the Einstein universe of a crooked plane in the Minkowski patch defined by the complement of the light cone of P_∞ (see <cit.>). Let C_1, C_2 be two crooked surfaces such that their stems intersect. Then the stem of C_1 intersects a wing of C_2 or vice versa. That is, crooked surfaces cannot intersect in their stems only.The stem consists of two disjoint, contractible pieces. To see this, note that this set is contained in the Minkowski patch defined by P_∞. There, the Einstein torus containing the stem is a timelike plane through the origin, and the timelike points in this plane form two disjoint quadrants. Let K be the intersection of the two Einstein tori containing the stems of C_1 and C_2. Then K is non-contractible in either torus (Corollary <ref>), so it cannot be contained in the interior of the stem. Therefore, K must intersect the boundary of the stem, which is part of the wings.Let p_0, p_∞, p be three points in the Einstein universe.
The point p is timelike with respect to p_0,p_∞ if and only if the intersection of the three light cones of p,p_0,p_∞ is empty.We work in the model ofgiven by lightlike lines in a vector space of signature (3,2). If p is timelike with respect to p_0,p_∞, then it lies on a timelike curve which means that the subspace generated by p,p_0,p_∞ has signature (1,2). Therefore, its orthogonal complement is positive-definite and contains no lightlike vectors, so the intersection of the light cones is empty. The converse is similar. A photon represented by a vector p̌∈ is disjoint from the crooked surface C if and only if the following two inequalities are satisfied:ω(p̌,v̌_+)ω(p̌,ǔ_+)>0 ω(p̌,v̌_-)ω(p̌,ǔ_-)<0. Write p̌ in the basis ǔ_+,ǔ_-,v̌_+,v̌_- :p̌ = a ǔ_+ + b ǔ_- + c v̌_+ + d v̌_-.Then,a=ω(p̌,v̌_-)b=ω(p̌,v̌_+) c=-ω(p̌,ǔ_-)d=-ω(p̌,ǔ_+). The photon p̌ is disjoint from 𝒲_+ if and only if the following equation has no solutions with ts≥ 0:ω(p̌, t ǔ_+ + s v̌_+) = 0.This happens exactly when b d < 0. Similarly, p̌ is disjoint from 𝒲_- if and only if a c >0. These two equations are equivalent to the ones in the statement of the Lemma, therefore it remains only to show that under these conditions, p̌ is disjoint from the stem.The Lagrangian plane P representing the intersection of p̌ with the Einstein torus containing the stem is generated by p̌ and a ǔ_+ + d v̌_-. This is because a ǔ_+ + dv̌_- represents the unique photon in one of the foliations of the Einstein torus which intersects the photon p̌, and hence the span p̌ + (a ǔ_+ + d v̌_-) is their intersection point, the Lagrangian P. We want to show that P is not timelike with respect to P_0,P_∞. By Lemma <ref>, this is equivalent to showing that the triple intersection of the lightcones of P_0, P, and P_∞ is non-empty.The intersection of the light cones of P_0 and P_∞ consists of planes of the form: (s ǔ_+ + t ǔ_-) + ( s' v̌_+ + t' v̌_-) where st' + ts'=0. We want to show that no point represented by such a plane is incident to P. Two Lagrangian planes are incident when their intersection is a non-zero subspace. Equivalently, they are incident if they do not span . We have :(p̌,a ǔ_+ + d v̌_-, s ǔ_+ + t ǔ_-, s'v̌_+ + t' v̌_-)= (-bdss' + catt')(ǔ_+,ǔ_-,v̌_+,v̌_-) = k(bds^2 + act^2)(ǔ_+,ǔ_-,v̌_+,v̌_-),where t'=kt, s'=-ks, k≠ 0. There exist t,s making this determinant vanish because bd,ac have different signs. This means that the point where p̌ intersects the Einstein torus containing the stem is not timelike and therefore outside the stem. Two crooked surfaces C,C' given respectively by the configurations ǔ_+,ǔ_-,v̌_+,v̌_- and ǔ'_+,ǔ'_-,v̌'_+,v̌'_- are disjoint if and only if the four photons ǔ'_+,ǔ'_-,v̌'_+,v̌'_- do not intersect C and the four photons ǔ_+,ǔ_-,v̌_+,v̌_- do not intersect C'. Since the four photons on the boundary of the stem are part of the crooked surface,the forward implication is clear.We now show the reverse implication. Assume that the four photons ǔ'_+,ǔ'_-,v̌'_+,v̌'_- do not intersect C and the four photons ǔ_+,ǔ_-,v̌_+,v̌_- do not intersect C'. Let us first show that the wing 𝒲_+ of C does not intersect C'. 
By lemma <ref>, it suffices to show thatω(tǔ_+ + sv̌_+, v̌'_+)ω(tǔ_+ + sv̌_+, ǔ'_+) > 0andω(tǔ_+ + sv̌_+, v̌'_-)ω(tǔ_+ + sv̌_+, ǔ'_-) < 0for all s,t∈ such that st≥ 0 (with s and t not both zero).We haveω(tǔ_+ + sv̌_+, v̌'_+)ω(tǔ_+ + sv̌_+, ǔ'_+) =t^2ω(ǔ_+,v̌'_+)ω(ǔ_+,ǔ'_+) + stω(ǔ_+,v̌'_+)ω(v̌_+,ǔ'_+) + stω(v̌_+,v̌'_+)ω(ǔ_+,ǔ'_+) + s^2ω(v̌_+,v̌'_+)ω(v̌_+,ǔ'_+).By hypothesis, neither ǔ_+, v̌_+ intersect C', and neither ǔ'_+, v̌'_+ intersect C. Therefore, using again lemma <ref> and st≥ 0, we see that each term in this sum is non-negative and that at least one of them must be strictly positive. Therefore,ω(tǔ_+ + sv̌_+, v̌'_+)ω(tǔ_+ + sv̌_+, ǔ'_+) > 0.The proof thatω(tǔ_+ + sv̌_+, v̌'_-)ω(tǔ_+ + sv̌_+, ǔ'_-) < 0is similar. Therefore, 𝒲_+ does not intersect C'.In an analogous way, one can show that 𝒲_- does not intersect C'. Therefore, the wings of the crooked surface C do not intersect C'. Hence, to show that C and C' are disjoint, it only remains to show that the stem of C does not intersect C'.By symmetry, the wings of C' do not intersect C, which means in particular that they do not intersect the stem of C. Consequently, the stem of C can only intersect the stem of C'. However, according to theorem <ref>, if the stem of C intersects the stem of C', it must necessarily intersect its wings as well, which is not the case here. Therefore, we conclude that C and C' must be disjoint. By lemma <ref>, this disjointness criterion can be expressed explicitly as 16 inequalities (two for each of the 8 photons defining the two crooked surfaces). There is some redundancy in these inequalities, but there does not seem to be a natural way to reduce the system.§ ANTI-DE SITTER CROOKED PLANESIn this section, we show that the criterion for disjointness of anti-de Sitter crooked planes described in <cit.> is a special case of theorem <ref>, when embedding the double cover of anti-de Sitter space in the Einstein universe.The 3-dimensional Anti-de Sitter space, denoted , is the manifold (2,) ≅() endowed with the bi-invariant Lorentzian metric given by the Killing form. We now recall the definition of a (right)crooked plane.Let ℓ be a geodesic in the hyperbolic plane . The rightcrooked plane based at the identity associated to ℓ is the set of g∈(2,) such that g has a nonattracting fixed point in ℓ⊂∪∂. In other words, the isometries g∈(2,) which make up the crooked plane are : * elliptic elements centered on a point of ℓ,* parabolic elements with fixed point in ∂ℓ, and* hyperbolic elements with repelling fixed point in ∂ℓ. A rightcrooked plane based at g∈(2,) is a left-translate of one based at the identity. We will say that such a crooked plane is defined by the pair (g,ℓ).A leftcrooked plane is defined the same way, replacing nonattracting fixed point by nonrepelling fixed point. Since a rightcrooked plane and a leftcrooked plane always intersect, we will assume in what follows that all ourcrooked planes are of the first type. Let ℓ,ℓ' be geodesic lines ofand g∈(2,). Then, the rightcrooked planes defined by (I,ℓ) and (g,ℓ') are disjoint if and only if for any endpoints ξ of ℓ and ξ' of ℓ', we have ξ≠ξ' and d(ξ,gξ')-d(ξ,ξ')<0.In this criterion, the difference d(p,gq)-d(p,q) for p,q∈∂ is defined as follows : choose sufficiently small horocycles C,D through p,q respectively. Then, d(p,gq)-d(p,q):=d(C,GD)-d(C,D) and this quantity is independent of the choice of horocycles. §.§ AdS as a subspace of EinLet _0 be a real two dimensional symplectic vector space with symplectic form ω_0. 
Denote bythe four dimensional symplectic vector space =_0 ⊕_0 equipped with the symplectic form ω = ω_0 ⊕ -ω_0. This vector spacewill have the same role as in section <ref>.The Lie group (_0)=(_0) is a model for the double cover of anti-de Sitter 3-space. We will show how to embed this naturally inside the Lagrangian Grassmannian model of the Einstein Universe in three dimensions. To do this, definei : (_0) →Gr(2,) f ↦(f) The graph of f ∈(_0) is a Lagrangian subspace of =_0⊕_0. This means that i((_0))⊂() ≅. This map is equivariant with respect to the homomorphism:(_0) ×(_0) →() (A,B) ↦ B ⊕ A ,where the action of (_0) ×(_0) on (_0) is by (A,B)· X = AXB^-1. The involution ofinduced by the linear mapI ⊕ -I : _0⊕_0 ↦_0 ⊕_0,where I denotes the identity map on _0, preserves the image of i. It corresponds to the two-fold covering (_0)→(_0). The fixed points of this involution are exactly the complement of the image of i, corresponding to the conformal boundary of AdS (see section 2 of <cit.> for details).§.§ Crooked surfaces and AdS crooked planes As in <cit.>, we say that a crooked surface is adapted to an AdS patch if it is invariant under the involution I ⊕ -I. Goldman proves in <cit.> that a crooked surface is adapted to anpatch if and only if it is the closure in 3 of a left or rightcrooked plane in that patch. Moreover, twocrooked planes in the same patch are disjoint if and only if their closures in 3 are disjoint. If a crooked surface is invariant under I ⊕ -I, then its corresponding lightlike quadrilateral is invariant. Two of the opposite vertices are fixed (they lie on the boundary of AdS) and the two others are swapped. If we denote the four photons by ǔ_-,ǔ_+,v̌_-,v̌_+, this means v̌_- = (I ⊕ -I) ǔ_- and v̌_+ = (I ⊕ -I) ǔ_+.§.§.§ AdS crooked planes based at the identityFor concreteness, choose a basis ofto identify it with ^4. We will represent a plane in ^4 by a 4 × 2 matrix whose columns generate the plane, up to multiplication on the right by an invertible 2×2 matrix. For example, (f) corresponds to the matrix:[ I; f ].The identity element of (_0) maps to the plane[ I; I ]and its image under the involution I ⊕ -I is[I; -I ].The intersection of the lightcones of the two Lagrangians (I) and (-I) consists of Lagrangians which have the form[v_1v_1;v_2v_2;v_1 -v_1;v_2 -v_2 ]for some v_1,v_2∈ not both zero.Therefore, the lightlike quadrilaterals containing as opposite vertices (I) and (-I) are parameterized by pairs of distinct nonzero vectors ǎ,b̌∈_0 (2× 1 column vectors), up to projective equivalence. The four vertices of the lightlike quadrilateral are then:[ I; I ], [ǎǎ;ǎ -ǎ ], [b̌b̌;b̌ -b̌ ], [I; -I ].We will say that such a lightlike quadrilateral is based at I and defined by the vectors ǎ,b̌. We choose as representatives of its lightlike edges the vectors:v̌_- = [ ǎ; ǎ ], ǔ_- = [ -ǎ;ǎ ] v̌_+ = [ b̌; b̌ ], ǔ_+ = [b̌; -b̌ ]. With the definition of the wings using the sign choices of section <ref>, we will see that the intersection of the associated crooked surface with thepatch is a rightcrooked plane.Indeed, the definition of the photons foliating the wing 𝒲_+ was𝒲_+ = {[t ǔ_+ + s v̌_+]   |   t s ≥ 0}. Suppose that the graph Lagrangian [ I; f ] for some f∈(2,) is on such a photon, equivalently that it contains a vector of the form[ (t+s)b̌; (t-s)b̌ ]with ts≥ 0. This is equivalent tofb̌ = (t-s/t+s)b̌.When t s ≥ 0, the quantity |t-s/t+s| ≤ 1, hence the point [b̌]∈∂ is a nonattracting fixed point of f. 
By a similar calculation, we can show that 𝒲_- consists of elements (f) such that f has a nonattracting fixed point at [ǎ]∈∂.§.§.§ AdS crooked planes based at fIn order to get an AdS crooked plane based at a different point f∈(_0), we map the crooked plane by an element of the isometry group (_0) ×(_0) ⊂(). The easiest way is to use an element of the form :[ I 0; 0 f ].This corresponds to left multiplication by f in ().Applying f to a lightlike quadrilateral, we get a lightlike quadrilateral with vertices of the form:[ I; f ],[I; -f ] , [ǎ -ǎ; fǎ fǎ ], [ b̌ b̌;fb̌ -fb̌ ] and edges of the form:[ǎ; fǎ ], [ -ǎ; fǎ ] [b̌; fb̌ ], [ b̌; -fb̌ ].§.§ Disjointness The disjointness criterion for crooked surfaces in the Einstein Universe is given by 16 inequalities. Using the symmetries imposed by an AdS patch, we can reduce them to 4 inequalities.Using the involution defining the AdS patch, we can immediately reduce the number of inequalities by half. This is because both surfaces are preserved by the involution, and their defining photons are swapped in pairs. (So for example, we only have to check that ǔ_+ and ǔ_- are disjoint from the other surface, for each surface.)The second reduction comes from the fact that for AdS crooked planes, we only need to check that the four photons from the first crooked surface are disjoint from the second, and then the four from the second are automatically disjoint from the first.For a crooked surface based at the identity with lightlike quadrilateral defined by the vectors ǎ,b̌∈_0 and another based at f with quadrilateral defined by ǎ',b̌'∈_0, the inequalities reduce to: ω_0(ǎ',b̌)^2 > ω_0(fǎ',b̌)^2 ω_0(ǎ',ǎ)^2 > ω_0(fǎ',ǎ)^2 ω_0(b̌',b̌)^2 > ω_0(fb̌',b̌)^2 ω_0(b̌',ǎ)^2 > ω_0(fb̌',ǎ)^2. What remains is to interpret these four inequalities in terms of hyperbolic geometry. We first define an equivariant map from ℙ(_0) to ∂ℍ^2. As a model of the boundary of ℍ^2, we use the projectivized null cone for the Killing form in 𝔰𝔩(_0)≅𝔰𝔩(2,). Choose a basis of _0 in which ω_0 is given by the matrix J=[01; -10 ] and defineη : _0→(𝔰𝔩(2,)) ǎ ↦ -ǎǎ^T J,where ǎ is a column vector representing a point in ℙ(_0). This map associates to the vector ǎ the tangent vector to at identity of the photon between I and the boundary point [ǎǎ;ǎ -ǎ ]. Note that the image of η is contained in the upper part of the null cone. η is equivariant with respect to the action of (_0). η(Aǎ) = -Aǎ(Aǎ)^TJ = -Aǎǎ^T A^T J = -A ǎǎ^T J A^-1 = A η(ǎ) A^-1. We will denote by K the trace form on 𝔰𝔩(2,)K(X,Y) = (XY).Its value is 1/8 times the Killing form. Let ǎ,b̌∈_0. Then, ω_0(ǎ,b̌)^2 = -K(η(ǎ),η(b̌)). ω_0(ǎ,b̌)^2= -ǎ^T J b̌b̌^T J ǎ= ǎ^T J η(b̌) ǎ= ( ǎ^T J η(b̌) ǎ)= (ǎǎ^T J η(b̌))= -(η(ǎ)η(b̌))= -K(η(ǎ),η(b̌)).Note that the expression ω_0(ǎ,b̌) is not projectively invariant, but the sign of ω_0(ǎ,b̌)^2 - ω_0(ǎ, fb̌)^2 is.The following inequalities are equivalentω_0(ǎ,b̌)^2 - ω_0(ǎ,fb̌)^2>0, K(η(ǎ),fη(b̌)f^-1) > K(η(ǎ),η(b̌)). Finally, we want to show that the four inequalities (<ref>) are equivalent to the DGK criterion (Theorem <ref>).Let A,B,A',B' denote respectively η(ǎ),η(b̌),η(ǎ'),η(b̌'). Then, A, B, A', B' represent endpoints of two geodesics g,g' in the hyperbolic plane. We want to showd(ξ,fξ'f^-1) - d(ξ,ξ')<0for ξ∈{A,B} and ξ'∈{A',B'}.We use the hyperboloid model of , {X∈𝔰𝔩(2,) | K(X,X)=-1}. Consider horocycles C_ξ(r)={X∈ |  K(X,ξ)=-r} and C_ξ'(r') = {X∈ |  K(X,ξ')=-r'} at ξ and ξ' respectively. 
The distance between these two horocycles is given by the formula d(C_ξ(r),C_ξ'(r')) = arccosh( -(1/2)( K(ξ,ξ')/(2rr') + 2rr'/K(ξ,ξ') ) ).Similarly, d(C_ξ(r), fC_ξ'(r')f^-1) = arccosh( -(1/2)( K(ξ,fξ'f^-1)/(2rr') + 2rr'/K(ξ,fξ'f^-1) ) ).We know that K(ξ,fξ'f^-1) > K(ξ,ξ'). If r, r' are sufficiently small, then, since the function x ↦ x + 1/x is increasing for x > 1 and arccosh is increasing, we conclude that d(C_ξ(r),C_ξ'(r')) > d(C_ξ(r), fC_ξ'(r')f^-1), which is what we wanted.
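To see the criterion in action, here is a small Python sketch (ours, not part of the paper); the vectors ǎ, b̌, ǎ', b̌' and the prescribed images defining f ∈ SL(2,ℝ) are arbitrary test data. The script evaluates the four disjointness inequalities for AdS crooked planes and also checks numerically the identity ω_0(ǎ,b̌)^2 = -K(η(ǎ),η(b̌)) used in the proof.

import numpy as np

J = np.array([[0., 1.], [-1., 0.]])          # omega_0(u, v) = u^T J v

def omega0(u, v):
    return u @ J @ v

def eta_map(a):
    """eta(a) = -a a^T J, a trace-free matrix, null for the trace form."""
    return -np.outer(a, a) @ J

def K(X, Y):                                  # trace form on sl(2,R)
    return np.trace(X @ Y)

a, b   = np.array([1., 0.]), np.array([0., 1.])   # quadrilateral at the identity
ap, bp = np.array([2., 1.]), np.array([1., 2.])   # quadrilateral of the second plane

# Build f in SL(2,R) sending ap, bp to prescribed, componentwise smaller images.
fa, fb = np.array([1.7, -0.4]), np.array([0.4, 1.7])
scale = np.sqrt(np.linalg.det(np.column_stack([ap, bp])) /
                np.linalg.det(np.column_stack([fa, fb])))
f = (scale * np.column_stack([fa, fb])) @ np.linalg.inv(np.column_stack([ap, bp]))
assert abs(np.linalg.det(f) - 1) < 1e-12      # f is in SL(2,R)

# The four disjointness inequalities for the pair of AdS crooked planes:
pairs = [(ap, b), (ap, a), (bp, b), (bp, a)]
disjoint = all(omega0(x, y)**2 > omega0(f @ x, y)**2 for x, y in pairs)
print("crooked planes disjoint:", disjoint)

# Consistency with the lemma omega_0(a,b)^2 = -K(eta(a), eta(b)):
for x, y in pairs:
    assert abs(omega0(x, y)**2 + K(eta_map(x), eta_map(y))) < 1e-9

For these test data all four inequalities hold and the script prints True; whether a given f yields disjoint crooked planes is exactly what the inequalities decide.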
http://arxiv.org/abs/1702.08414v2
{ "authors": [ "Jean-Philippe Burelle", "Virginie Charette", "Dominik Francoeur", "William Goldman" ], "categories": [ "math.DG", "math.SG" ], "primary_category": "math.DG", "published": "20170227181855", "title": "Einstein tori and crooked surfaces" }
Optical appearance of a compact binary system in the neighbourhood of supermassive black hole

A. Gorbatsievich (E-mail: gorbatsievich@bsu.by), S. Komarov (E-mail: staskomarov@tut.by), A. Tarasenko (E-mail: tarasenk@tut.by)

Theoretical Physics Department, Belarusian State University, Nezavisimosti av., 4, 220030 Minsk, Belarus

December 30, 2023

A.G. acknowledges support from the A. von Humboldt Foundation in the framework of the Institute Linkage Programme.

The optical appearance of a compact binary star in the field of a supermassive black hole is modeled in the strong-field regime. Expressions for the redshift, the magnification coefficient and the pulsar extinction time are derived. Using the vierbein formalism, we derive the equations of motion of a compact binary star in an external gravitational field. We analyse both the evolution of the redshift of an optical ray from an ordinary star or white dwarf and the times of arrival of the pulses of a pulsar. The results are illustrated by a calculation for a model binary system in the external gravitational field of a Schwarzschild black hole. The obtained results can be used for fitting timing data of X-ray pulsars that move in the neighbourhood of the Galactic Center (Sgr A*).

PACS: 04.25.dg

§ INTRODUCTION

Since the discovery of the binary pulsar B1913+16 by Hulse and Taylor in 1975 (see <cit.> for a review), many new possibilities for testing theories of gravity have appeared. Most of these tests, however, probe only the weak-field regime. One of the most promising ways to test gravity in the strong-field regime is to study the motion of astrophysical objects near a supermassive black hole. Recent observations performed with the space observatories Chandra and XMM-Newton provide evidence for the existence of a supermassive black hole in the Galactic Center (Sgr A*) <cit.>. They have also provided evidence for a large population of binary pulsars and stars in this region <cit.>. In a volume of 1 pc^3 around Sgr A*, there are ∼ 10^4 compact objects of about one stellar mass <cit.>; presumably, about half of these objects are bound in binary systems (NS-NS, NS-BH and BH-BH). Therefore it is possible to perform gravity experiments with binary neutron stars moving in a strong external gravitational field. In the case of a small velocity of the center of mass of the binary and a weak external gravitational field, such systems can be investigated with the well-known pulsar timing techniques (see, e. g., <cit.>). Approaches to the calculation of the times of arrival of pulses from pulsars moving in an external gravitational field have been discussed in several papers (see, e. g., <cit.>). Most of them, however, use a post-Newtonian expansion of the times of arrival, which deviates strongly from the exact result when the source moves near the horizon of the supermassive black hole. To study the motion in such strong-field regimes it is necessary to improve the existing methods or to develop new approaches to this problem. Another useful observable is the gravitational radiation of the pulsar. The first direct detection of gravitational waves <cit.> has demonstrated the importance of investigating this characteristic of the binary <cit.>.
Therefore, consideration of the problem of motion of binary systems in the field of the supermassive black hole is very important for the calculation of gravitational and electromagnetic radiation from such binary systems. Unfortunately, there is no hope in any foreseeable future of finding exact solutions describing the motion of three massive bodies, so we have to adopt some sort of approximation scheme for solving the Einstein equations in order to study such problems.The equations of motion of isolated binary systems (see for example the review <cit.>) are commonly derived by means of a post-Newtonian expansion of the Einstein equations in powers of v/c (post-Newtonian approach), where v is the characteristic velocity of the bodies and c is the vacuum speed of light, or in powers of G (post-Minkowskian approach), where G is the gravitational constant. Despite the fact that the 2-body problem has received considerable attention in the literature and has been solved up to 3.5PN order, the n-body problem is still much less investigated. It has been solved by Kopeikin <cit.> up to 1PN order under the assumption that the fields are weak and the motion of the bodies is non-relativistic.It is clear that the problem of binary motion in the field of a supermassive black hole may be solved by an approximate treatment of the 3-body problem. Namely, the problem corresponds to the case m_1,2/M ≪ 1, where m_1, m_2 are the masses of the stars in the binary system and M is the mass of the black hole. However, this approach seems to be unnecessarily complicated and, in the case of relativistic motion of the binary's center of inertia, even more tedious. At the same time, the high mass of the black hole suggests a rather simple approximate method. It is based on the fact that in the vicinity of the binary system one may introduce a comoving reference frame in which the equations of relative motion of the stars are close to Newtonian. The conditions under which this approximation is adequate will be formulated in Sec. <ref>, together with numerical estimates for real binary systems of neutron stars.In this paper we derive the equations of motion of a binary system that moves in an external gravitational field. These equations can be applied to any metric which changes on a scale that is larger than the spatial size of the binary. We also derive an expression for the times of arrival of the pulses coming from a pulsar in a binary system in an external gravitational field. By using this expression it is possible to fit pulsar timing data in order to find the parameters of motion.The value of the angular momentum of the Galactic Center black hole is not known precisely; it may lie anywhere in the range 0 < a/M < 1. However, for the case a = 0 (Schwarzschild black hole) the formulas are much simpler than in the general case (a ≠ 0). This makes it possible to analyse in a simple way the results that are needed for solving the inverse problem (obtaining the parameters of motion from the redshift data). Because of this we illustrate our approach with the example of a source in a binary pulsar that moves near a Schwarzschild black hole. The results can be applied to the analysis of timing data of a pulsar moving in the vicinity of Sgr A*. § EQUATIONS OF MOTION OF A COMPACT BINARY SYSTEM IN THE FIELD OF THE SUPERMASSIVE BLACK HOLE§.§ Equations of motion in a comoving reference frameIt is known that in general relativity the equations of motion of a many-body system can be obtained from the Einstein field equations.
For the first time this idea was realized by Einstein and Grommer <cit.>. It received further development by Einstein, Infeld, and Hoffmann <cit.>, Fock <cit.>, Infeld and Plebański <cit.>, Will <cit.> and many other authors. Using the method of Einstein–Infeld–Hoffmann, we will derive the equations of motion of the binary system in the field of the supermassive black hole. We assume that the relative motion of the stars in this binary system is non-relativistic (the motion of the binary system as a whole relative to the SBH can be relativistic or even ultrarelativistic). We can simplify our calculations essentially by the use of a comoving reference frame, i.e. the reference frame which is connected to the center of mass of the binary system. Let us consider a gravitationally bound compact system which is freely moving in the field of a supermassive black hole.Let us make the following assumptions about this system:* The mass M of the supermassive BH is much greater than the masses of both stars: M ≫ m_1,2.* The mean distance ϱ between the stars is much greater than their own sizes R_0: ϱ ≫ R_0 (e.g. for neutron stars ϱ ∼ 44 km, R_0 ∼ 10÷20 km). It means that we can consider the stars in a good approximation as point-like masses m_1 and m_2.* The relative motion of the stars with respect to each other is non-relativistic: v/c ≪ 1.* The characteristic length scale of the external field inhomogeneity is larger than the size of our binary star system: r_g ≫ ϱ, where r_g = 2M is the gravitational radius of the black hole.Under the assumptions (<ref>) gravitational radiation almost does not affect the orbital motion of the binary (around the black hole) or the relative motion of the stars. In particular, estimates based on the quadrupole formula show that the relative decrease of the radius of a circular orbit of a binary neutron star in a flat background due to gravitational radiation would be of order Δϱ/ϱ ∼ 10^-13 per period, if v/c ∼ 10^-2 for ϱ ∼ 44 km, m_1,2 ∼ m_⊙, where m_⊙ is the mass of the Sun. Hence the effects of gravitational radiation will not be taken into account in this paper. However, one must be aware that as the distance between the stars decreases to the order of 100 km, gravitational radiation causes a rapid collapse of the two stars onto each other <cit.>. The assumptions (<ref>) allow us to simplify the calculations greatly by the use of a comoving reference frame.§.§ Comoving reference frameAs a comoving reference frame we choose the reference frame of a single observer <cit.>. This reference frame is determined by the motion of a single mass point. The world line of this mass point, x^i = ξ^i(τ) (τ is the proper time) ("single observer"), is called the basis. Using the world line ξ^i(τ) of the center of mass of the binary star as the basis, we obtain a convenient comoving reference frame for the binary star system. Let us give a brief description of this reference frame.Along the basic line ξ(τ) we establish an orthonormal vierbein (tetrad) h_(m)^i, defined by h_(4)^i = (1/c) u^i, u^i ≡ dξ^i(τ)/dτ, h_(i)^k h_(j)k = η_(i)(j), with η_(i)(j) = diag(1, 1, 1, -1) being the Minkowski tensor and c the speed of light [Latin indices run from 1 to 4, Greek ones from 1 to 3. The signature of space-time is (+,+,+,-).].The introduced vierbein is determined up to three-dimensional rotations. The three-dimensional physical space is given by a geodesic spacelike hypersurface f (related to τ), which lies orthogonally to the basic world line.
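For a nonrotating frame, the vierbein is carried along the basis by Fermi–Walker transport, and this can be illustrated numerically. The Python sketch below is ours (flat spacetime, c = 1, a uniformly rotating observer; all parameter values are illustrative): it integrates dh^i/dτ = u^i(a_k h^k) - a^i(u_k h^k) and checks that the tetrad stays orthonormal and that h_(4) follows u.

import numpy as np

eta = np.diag([1., 1., 1., -1.])      # signature (+,+,+,-), c = 1
R, Om = 1.0, 0.5                      # circular worldline, speed R*Om < 1
g = 1.0 / np.sqrt(1 - (R * Om)**2)    # Lorentz factor

def u(tau):                           # 4-velocity, u.u = -1
    ph = Om * g * tau
    return np.array([-R*Om*g*np.sin(ph), R*Om*g*np.cos(ph), 0., g])

def a(tau):                           # 4-acceleration du/dtau
    ph = Om * g * tau
    return np.array([-R*Om**2*g**2*np.cos(ph), -R*Om**2*g**2*np.sin(ph), 0., 0.])

def fw_rhs(tau, H):
    """Fermi-Walker transport: dh/dtau = u (a.h) - a (u.h)."""
    ut, at = u(tau), a(tau)
    return np.array([ut * (at @ eta @ h) - at * (ut @ eta @ h) for h in H])

# initial orthonormal tetrad with h_(4) = u(0)
H = np.array([[1., 0., 0., 0.],
              [0., g,  0., R*Om*g],
              [0., 0., 1., 0.],
              u(0.0)])

tau, dtau = 0.0, 1e-3                 # simple RK4 integration
while tau < 20.0:
    k1 = fw_rhs(tau, H)
    k2 = fw_rhs(tau + dtau/2, H + dtau/2 * k1)
    k3 = fw_rhs(tau + dtau/2, H + dtau/2 * k2)
    k4 = fw_rhs(tau + dtau, H + dtau * k3)
    H += dtau / 6 * (k1 + 2*k2 + 2*k3 + k4)
    tau += dtau

gram = H @ eta @ H.T                  # should remain diag(1, 1, 1, -1)
print("max |gram - eta| :", np.abs(gram - eta).max())
print("max |h_(4) - u|  :", np.abs(H[3] - u(tau)).max())

Both printed deviations stay at the level of the integration error, as expected: Fermi–Walker transport preserves inner products and carries u into itself.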
In order to arithmetize the hypersurface f, at each point P∈ f we fix a set of three scalarsX^(α)=σ_Ph^(α) _iη^i ,where σ_P is the value of the canonic parameter σ at P, defined along a spacelike geodesic in f and going through the point P, η^i is the tangent unit vector to that geodesic (η_iη^i=1), defined at the point on the basis line (σ=0) (see Fig. <ref>).For a nonrotating frame (i.e. when the vectors h_(n) ^i are displaced along the basis line (<ref>) according to the Fermi-Walker transport) the quantities {X^(α),cτ} correspond to the Fermi normal coordinates (see for instance <cit.>). Analogous quantitiesx^α̂=X^(α) ,x^4̂=cτwe treat as rotating Fermi coordinates. In these coordinates the metric tensor g_îĵ becomesg_îĵ=η_(i)(j)+ε_(i)(j) ,where η_(i)(j)=diag (1,1,1,-1) andε_(α)(β)=-13 R_(α)(μ)(β)(ν) X^(μ)X^(ν)+O(ϱ^3) , ε_α (4)=1c K_α+Θ_α+O(ϱ^3) , ε_(4)(4)=-(2Θ+2ζ+ζ^2-1c^2 K_αK^α)+O(ϱ^3) ; ϱ≡√(X^X_) .K_=ϵ_X^ω^ ,ζ=1/c^2 W_X^ ,Θ_=23 R_X^X^+O(ϱ^3) ,Θ= 12 R_ X^X^ +O(ϱ^3) . Here, we used the following notations:W^=h^ _iu^iτandω^=1/2 ϵ^h_ ih_ ^iτare the acceleration and the angular velocity of the reference frame, respectively.From the last relations it follows, that the size of a world tube in which geodesic hypersurfaces are regular and in which the expansion (<ref>) is valid, are determined from the following conditions.ϱ≪Min{ c^2|W_(ν)| , c^2|ω^(ν)| , 1|R_(m)(n)(i)(j)|^1/2 , 1|R_(m)(n)(i)(j);(k)|^1/3 , |R_(m)(n)(i)(j);(k)||R_(m)(n)(i)(j);(k);(l)| } .The present calculation is carried out up to ϱ^2. §.§ Newtonian-like (non-relativistic) approximation in the comoving reference frameAs the first step we shall consider the Newtonian-like approximation in the comoving reference frame of a single observer which had been described above. In particular,in the Fermi coordinates {x^î} the metric tensor, describing a gravitational field of SBH + binary star, will be sought in the formg_îĵ=η_(i)(j)+ε_(i)(j)+φ_(i)(j)where ε_(i)(j) (background metric) is given by (<ref>), and unknown functions φ_(i)(j) can be determined from Einstein equationsR_îĵ-12 R g_îĵ=κ T_îĵwith usual expression for the stress-energy tensor T_îĵ describing two mass points m_1 and m_2. Further we shall restrict ourconsideration to non-relativistic motion of this mass points (stars) relatively to each other. In this case from equation (<ref>) we obtain immediately the Poisson-like equation for non-relativistic relative motion of the starsφ(X^α)=4π G(m_1δ^(3)(X^α-X_1^α) + m_2δ^(3)(X^α-X_2^α))+ O(ε_(i)(j),1/c^2) .Here φ≡ -c^2/2 φ_(4)(4) is analogue of Newtonian potential. The solution of the last equation corresponding toboundary conditions has the shapeφ(X^)=G(m_1/|X^-X^_1|+m_2/|X^-X^_2|)+O(ε_(i)(j) , 1/c^2) . One can obtain the equations of motion of both stars fromthe equationT^îĵ_; ĵ=0 ,which follows from Einstein equations (<ref>). Using the expression (<ref>) it is easy to show that this equations of motion can be written as Lagrange equation with the following Lagrangian:ℒ=m_1v_1^22+m_2v_2^22+Gm_1m_2|r_1-r_2| + ε_ ω^(m_1X_1^v_1^+m_2X_2^v_2^) +2c3 R_(4)[m_1X_1^X_1^v_1^+m_2X_2^X_2^v_2^]-W_(m_1X_1^+m_2X_2^) +D_(m_1X_1^X_1^+m_2X_2^X_2^) ,where the following abbreviations were used:D_=-c^22 R_(4)(4) +12 (δ_ω^2-ω_ω_),with ω^2=ω_ω^;v^_1,2=X^_1,2T .HereT=1/c x^4̂ denotesthe time coordinate in the comoving reference frame, i. e. the proper time of observer (<ref>), which coincides with the proper time of the center of mass of the binary system. 
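Before passing to center-of-mass variables, note that the internal dynamics encoded in this Lagrangian is easy to probe numerically. The Python sketch below is ours: it integrates the two-body relative motion with the Newtonian attraction plus a frozen quadratic tidal term built from D_(α)(β), taking for c^2 R_(α)(4)(β)(4) the static diagonal form (M/r_0^3) diag(-2, 1, 1) (an assumption of this sketch) and neglecting the rotational and mixed terms; G = c = 1 and all quantities are in units of the black-hole mass.

import numpy as np

# Toy relative motion: dv/dT = -grad(phi) - C x, with C the frozen tidal matrix.
# Assumptions of the sketch: static tidal field at fixed distance r0, omega = 0.
M_tot = 2.8e-6                     # m1 + m2 for two ~1.4 m_sun stars, M ~ 1e6 m_sun
r0 = 50.0                          # distance of the binary from the black hole
C = (1.0 / r0**3) * np.diag([-2.0, 1.0, 1.0])   # ~ c^2 R_(a)(4)(b)(4), static form

def acc(x):
    return -M_tot * x / np.linalg.norm(x)**3 - C @ x

x = np.array([1e-2, 0.0, 0.0])                   # initial separation rho/M = 0.01
v = np.array([0.0, np.sqrt(M_tot / 1e-2), 0.0])  # near-circular Newtonian orbit

dT, v_max, rho_max = 1e-3, 0.0, 0.0
for _ in range(100_000):                         # leapfrog (kick-drift-kick)
    v += 0.5 * dT * acc(x)
    x += dT * v
    v += 0.5 * dT * acc(x)
    v_max = max(v_max, np.linalg.norm(v))
    rho_max = max(rho_max, np.linalg.norm(x))

print("v/c up to", v_max, " rho/M up to", rho_max)  # compare with the assumptions above

With these illustrative parameters the tidal acceleration is many orders of magnitude below the Newtonian one and the orbit stays close to circular, consistent with the smallness conditions formulated above.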
After the transformation { X_1^=X^+m_2m_1+m_2 x^X_2^=X^-m_1m_1+m_2 x^.and{ v_1^=V^+m_2m_1+m_2 v^v_2^=V^-m_1m_1+m_2 v^. . into the reference frame of Newtoniancenter of mass (in the Fermi coordinates) we obtain for the Lagrangian (<ref>)ℒ=[v^22+G m_1m_2r+(ε_ω^ x^v^ +2c3 R_(4)m_2-m_1m_1+m_2x^x^v^ +D_x^x^)] + (m_1+m_2)[12 V^2+ε_ω^X^V^ +2c3 R_ (4)X^X^V^+D_X^X^ -W_X^]+c3 (R_(4)+R_(4)) ×(2x^X^v^+x^x^v^) ,whereϱ=r_1-r_2 ,x^=X_1^-X_2^ ,and ϱ=|ϱ|=√(x^x_) , R=m_1 r_1+m_2 r_2m_1+m_2andX^=m_1 X_1^+m_2 X_2^m_1+m_2 , v=ϱT ,andV=RT , =m_1 m_2/m_1+m_2 .The equations of motion of the center of mass (equation for R) and the equations of motion of both stars (equation for ϱ) relative to each other can be written in the Lagrange form T(ℒV^)-ℒX^=0 andT(ℒv^)-ℒx^=0 , respectively. After simple calculations we obtain from (<ref>)(m_1+m_2) V^T= (m_1+m_2)(2ε^ _V^ω^ -2cR^ _ (4)X^V^+2 D^ _ X^-W^)-2m^*cR^ _ (4)x^ v^ , v^T= (G(m_1+m_2)/r)_-2ε^ _ω^ v^ -2c(m_2-m_1)/(m_1+m_2) R^ _ (4)x^ v^+2 D^ _ x^+4/3 c S_(X^ v^+x^ V^) ,where the abbreviationS_=1/2 (R_ 4+R_ (4))is used. In order to use the reference frame, comoving with the center of mass,we letX^=0 , V^=0 .It is obvious that these conditions will make sense, if the equation dV^/dt=0 will follow from (<ref>) and(<ref>). Taking into account the equation (<ref>) (i.e. the explicit form of equation (<ref>)), we obtain the following expression for 4-acceleration of the center of masses in the comoving Fermi coordinates:W^=-c/m_1+m_2 R^_ (4)ε^_M^ -c/3(m_1+m_2) t(R^_(4)Q^) .In the above formula,M^=m_1ε^_X^_1v^_1+ m_2ε^_X^_2v^_2 =ε^_x^v^is the intrinsic angular momentum of the binary, which is calculated with respect to its center of mass,Q^=∑_a=1^2m_a(3X_a^X_a^-r^2_aδ^)= (3x^x^-ϱ^2δ^)(if R^=0)denotes the quadrupole moment tensor. Let's notice that in the approximation used here we can putT (R^ _(4)Q^)= R^ _(4)T Q^ , T (S_x^X^x^)= S_T(x^X^x^)and soon. In other words one can say that the center of mass of the binary star satisfies in good approximation the following equations <cit.>(m_1+m_2)u^iτ=-1/2c R^i_skmu^s ε^mkbnM_bu_n -1/3 h^i_sτ(R^s_klmQ^klu^m) ,where ε^mkbn is the Levi-Civita pseudotensor (ε^1234=1/√(-g); g=(g_ij)); h^i_s=g^i_s+(1/c^2)u^iu_s is the projective tensor. Taking into account the conditions (<ref>), thus we obtain from (<ref>) the following equation of relative motion in Newtonian approximation and in the comoving Fermi coordinates <cit.>:vT=-∇(ϕ+Υ)+2v×ω+A ,whereΥ=-D_x^x^=c^2/2 R_(4)(4)x^x^ -1/2 [ϱ^2ω^2- (ϱ·ω)^2] , ϕ=-G(m_1+m_2)/r , A^=c(m_1-m_2)/m_1 m_2 [R^ _ (4)ε^ _M^ -1/3 T(R^ _(4)Q^)] . It should be noticed that equations of motion (<ref>), derived in this section, are in accordance with those obtained by J. Anandan by the use of action based approach <cit.>.§ REDSHIFTElectromagnetic radiation is a unique source of information about the motion of compact binaries in external gravitational field.A typical wavelength of radiation used for observations λ≲ 10^3m is much less than the scale of gravitational inhomogeneities M∼ 10^9m. Because of this we use the geometrical optics approximation (see e. g.<cit.>). We will consider two characteristics of electromagnetic radiation:times of arrival of the pulses t_TOA and redshift z. Times of arrival is the moment of observation of pulses of pulsar and these is commonly used in the analysis of pulsar timing (see e. g. <cit.>). Radiation of usual stars (main sequence stars, white dwarfs or giants) is usually described by redshift (see e. g. <cit.>). 
Redshift is related to the times of arrival as follows (see <cit.>): δλ/λ = z = (t_TOA - t_TOA')/T_p - 1, where λ is the wavelength of the emitted light, δλ is the difference between the wavelengths of the arriving light and the emitted light, t_TOA - t_TOA' is the difference between the times of arrival of two consecutive pulses of the pulsar, and T_p is the period of the pulsar in the pulsar reference frame. Because the redshift z and the difference of the times of arrival t_TOA - t_TOA' are interconnected, we can choose the former as the radiation characteristic.The redshift of a radiation source which moves in an external gravitational field can be calculated using the following expression (see <cit.>): z + 1 = (k^i u_i)_s/(k^i u_i)_o, where u^i is the 4-velocity vector of the source (subscript "s") or the observer (subscript "o"), and k^i is the wave vector of the ray at the corresponding points.From observations we know the redshift as a function of the observer time: z = z_t(t). But in calculations it is more convenient to use the redshift as a function of the proper time of the source, z(τ). The transition from the function z(τ) to z_t(t) can be accomplished by using the following expression: t(τ) = ∫_0^τ dτ'/(z(τ')+1), where we choose the initial proper time of the source such that t(0) = 0. Usually the obtained function is monotonic, so we can find the inverse function τ(t). Then we have z_t(t) = z(τ(t)). Therefore, for the calculation of the redshift it is enough to know only the function z(τ).Since the observer is far from the Galactic Center, we can use the Minkowski metric and Galilean coordinates (t, x^α) in the vicinity of the observer. Then we obtain (k^i u_i)_o/A ≈ (c + n_α v^α)/√(1 - v_α v^α/c^2), where v^α = dx^α/dt and n_α is the unit vector in the direction of the Galactic Center. A is an integral of motion, and its value depends only on the parametrization of the ray. Because of this, without loss of generality we can set A = 1 (see Appendix A). Then the redshift can be expressed as z + 1 = (z_∞ + 1)/(k^i u_i)_o.In the formula (<ref>) only the part z_∞ + 1 is of interest to us. If one knows this part, the whole expression for the redshift (<ref>) can be obtained by using the ephemerides of the Earth. The quantity z_∞ can be expressed as z_∞ + 1 = (k^i u_i)_s. The influence of the external gravitational field on the binary can be approximately described by an effective potential u_eff = m_1 c^2 · R_(α)(4)(β)(4) x^(α) x^(β) (see formula (<ref>)). For the system to be stable, this potential must be much less than the Newtonian potential. In the present work we assume that |m_1 c^2 · R_(α)(4)(β)(4) x^(α) x^(β)| ≪ G m_1 m_2/r.Therefore T/T_0 ≪ 1, where T_0 and T are the characteristic timescales of the motion of the binary in the external field and of the relative motion of the stars in the binary, respectively.It follows from (<ref>) that the whole redshift consists of two parts: the first one oscillates rapidly (on a timescale ∼ T) with a relatively small magnitude (of order v/c), and the second one changes on a timescale of order T_0 and has a magnitude of order 1 (see Sec. <ref>).§ SCHWARZSCHILD METRIC Let us consider the case of the external gravitational field of a Schwarzschild black hole in order to apply the formalism that has been developed in this work. This field can be used as an approximation of the gravitational field of the supermassive black hole in the Galactic Center. The Schwarzschild metric has the following form (see e. g.
<cit.>): ds^2 = g_ij^Sch dx^i dx^j = dr^2/(1 - 2M/r) + r^2 dθ^2 + r^2 sin^2θ dϕ^2 - (1 - 2M/r) d(ct)^2, where (r, θ, ϕ, t) are the Schwarzschild coordinates.The components of the 4-velocity of a timelike geodesic can be written as <cit.>: u^3 = dϕ/dτ = Lc/r^2, (1/c)u^4 = dt/dτ = E/(1 - 2M/r), u^θ = 0, (1/c)u^1 = ±√(E^2 - (1 - 2M/r)(1 + L^2/r^2)), where E is the mechanical energy and L is the angular momentum per unit mass of the test particle. In the chosen coordinate system θ(τ) = π/2.The 4-wave vector k^i of an isotropic geodesic is given by (we choose the parametrization with A = 1, see Sec. <ref>): k^3 = D/(r^2 c); k^4 = 1/((1 - 2M/r)c); k^θ = 0; k^1 = ±(1/c)√(1 - (D^2/r^2)(1 - 2M/r)), where the integral of motion D is the impact parameter of the ray. Taking into account (<ref>), (<ref>) and (<ref>), from (<ref>) we get: * The timelike geodesic: 1/r_s(ϕ_s) = (1 - e)/ℓ + (2e/ℓ) sn^2( ((ϕ_s - ϕ_s0)/2)√(1 - 6M/ℓ + 2eM/ℓ), k ), with k = √( 4e/(ℓ/M - 6 + 2e) ). * The isotropic geodesic: 1/r_r(ϕ_r) = 1/P - (Qk^2/(2PM)) cn^2( (ϕ_r/2)√(Q/P) + F(arccos(√(2M/(Qk^2))), k), k ), with k = √( (Q - P + 6M)/(2Q) ).Here F[ϕ,k] is the elliptic integral of the first kind and sn, cn are the Jacobi elliptic functions. The lower indices r and s denote the light ray and the radiation source, respectively. The angles ϕ_r and ϕ_s are measured in the planes of the ray and the source, respectively, as shown in Fig. <ref>; ϕ_s0 is an initial angle of the orbit. P denotes the pericenter distance of the ray; it is related to the impact parameter by D^2 = P^3/(P - 2M). The initial point of the isotropic geodesic is chosen to be at spatial infinity, where ϕ_r = 0. Formula (<ref>) is valid only for rays that have a pericenter, which corresponds to D > 3√3 M. The integrals of motion are related to the apocenter distance s_1 and the pericenter distance s_2: { L = s_1 s_2 √2 / √((s_2/M - 2)(s_1^2 + s_1 s_2) - 2s_2^2); E = √( (s_2 - 2M)(s_1 - 2M)(s_1 + s_2) / ((s_2 - 2M)(s_1^2 + s_1 s_2) - 2M s_2^2) ). }Also we use the abbreviations: ℓ = 2 s_1 s_2/(s_1 + s_2); e = (ℓ/2)(s_1 - s_2)/(s_1 s_2); Q = √(P^2 + 4MP - 12M^2).§ CALCULATION OF THE REDSHIFT The purpose of this section is to describe a method of calculation of the redshift of a source in a binary star system that moves in the gravitational field of a supermassive black hole. The redshift z of a point source of radiation is given by (<ref>). To apply this formula it is necessary to know the law of motion of the source. It follows as the solution of equations (<ref>) and (<ref>).As a first approximation it is possible to solve numerically the equations (<ref>) under the assumption X^(α) = 0, V^(α) = 0. For the following approximations it is not difficult to solve the equations of motion of the center of mass (<ref>). But the analysis of equations (<ref>) shows that their right-hand side is negligible (it is of the order of ρv/r^2), so we can consider the motion of the center of mass as geodesic (X^(α) = 0, V^(α) = 0).Let us denote by z_0 the redshift at infinity of a (fictitious) source moving along the world line of the center of mass of the binary system. The redshift of light from this source can be found as 1 + z_0 = k_i u^i, where the components of the velocity vector are calculated from (<ref>), (<ref>), (<ref>). The binary star system must be described as a finite-size object. Therefore the redshift for a source in a binary system is (see Appendix B) 1 + z_∞ = (1 + z_0)(1 - d/dτ(n_(α)X_1^(α))) + O(ρ^2).Here X_1^α(τ) = -x^α(τ) m_2/(m_1+m_2) (we consider that the radiation of the component with index 1 is observed) are the vierbein components of the deviation of the source trajectory from the world line ξ(τ). The coordinate functions x^α(τ) must be calculated from (<ref>).To describe the orientation of the orbit of the center of mass, we use the orbital inclination i and the longitude of periastron ω_0 (see Fig. <ref>).
These two angles, together with the integrals of motion s_1, s_2, form a full set of parameters of the motion of the center of mass.We have the following expression for the angle ϕ_r: ϕ_r = arccos(cos ϕ_s sin i).It is convenient to choose the following vierbein: { h_(4)^1 = √(E^2 - (1 - 2M/r - 2ML^2/r^3) - L^2/r^2) × sign(dr/dτ); h_(4)^2 = 0; h_(4)^3 = L/r^2; h_(4)^4 = Er/(r - 2M)·(1/r)·r = Er/(r(r - 2M))·r = E/(1 - 2M/r)/r·r }, more explicitly h_(4)^4 = E/(1 - 2M/r); { h_(1)^1 = L√( (E^2 r^3 - (r - 2M)(r^2 + L^2)) / (L^2 r^3 + r^5) ) × sign(dr/dτ); h_(1)^2 = 0; h_(1)^3 = √(L^2 + r^2)/r^2; h_(1)^4 = LEr/((r - 2M)√(L^2 + r^2)) }; { h_(2)^1 = -Er/√(L^2 + r^2); h_(2)^2 = 0; h_(2)^3 = 0; h_(2)^4 = -( √(E^2 r^4 - r(r - 2M)(r^2 + L^2)) / ((r - 2M)√(L^2 + r^2)) ) × sign(dr/dτ) }; h_(3)^i = {0, -1/r, 0, 0}.The angular velocity (<ref>) has one non-zero component: ω = ω^(3) = LE/(L^2 + r^2).The non-zero components of the curvature tensor are: R_(2)(4)(2)(4) = -(3L^2 + 2r^2)/r^5, R_(3)(4)(3)(4) = (3L^2 + r^2)/r^5, R_(1)(4)(1)(4) = 1/r^3, R_(1)(3)(3)(4) = -3L√(L^2 + r^2)/r^5 = -R_(1)(2)(2)(4).For a given trajectory of the center of mass one knows the integrals L and E. To determine the impact parameter of the ray it is necessary to solve a boundary value problem. In our case it reduces to the following non-linear equation: r_r(ϕ_r(ϕ_s)) = r_s(ϕ_s).By using the representations (<ref>) and (<ref>) of the radial functions, one can find the impact parameter D for all ϕ_s. The components of the 4-wave vector k^i in a reference frame rotating relative to the Schwarzschild reference frame are: k'^1 = k^1; k'^2 = -k^3 sin Ω; k'^3 = k^3 cos Ω; k'^4 = k^4, where the components k^i are given by (<ref>), (<ref>), (<ref>), and Ω is the angle between the ray plane and the orbital plane of the center-of-mass motion: Ω = arccos(sin ϕ_s sin i / sin ϕ_r).The redshift corresponding to the motion of the center of mass, z_0, has the form: z_0 = ±√( E^2 r/(r - 2M) - (1 + L^2/r^2) ) √( r/(r - 2M) - P^3/(r^2(P - 2M)) ) + ( L√(P^3)/(r^2 √(P - 2M)) ) cos Ω + E/(1 - 2M/r) - 1.The vierbein components of the wave vector have the form k^(α) = h^(α)_i k'^i.We thus find the redshift as a function of ϕ_s and τ. In order to calculate the redshift as a function of the proper time of the source, one must solve the differential equation dϕ_s/dτ = L/r_s^2(ϕ_s).This equation for the function ϕ_s(τ) can be solved numerically.To find the relative motion of the stars, we have solved equations (<ref>) numerically. Choosing the parameters of the relative motion as in Table <ref>, it is easy to check that the conditions (<ref>) and (<ref>) are satisfied: the numerical calculations give v/c < 0.01 and ρ/M < 0.02.An example of the calculated redshift is presented in Figure <ref>. The parameters of motion that have been used are summarised in Table <ref>.§ THE MAGNIFICATION FACTOR AND EXTINCTION OF PULSES OF THE PULSARLet us consider the times of arrival of the pulses of a pulsar in a binary system that moves in the gravitational field of a supermassive black hole. The times of arrival of these pulses can be obtained directly from the redshift (<ref>). Due to the precession of the rotation axis of the pulsar and the deviation of the wave vector in the curved space-time, an observer can see the pulses only during finite intervals of time (see e. g. <cit.>). The formalism developed in this paper makes it possible to calculate these time intervals on a timescale of several periods of the orbital motion. We introduce a spherical system of coordinates whose center coincides with the center of the pulsar; the polar axis of this system is perpendicular to the plane of the motion of the center of mass.
Then, the unit vector of the rotation axis n_p^(α) has the following vierbein components: n_p^(α)={sin θ_p cos(ϕ_p-ϕ_ω(τ)), sin θ_p sin(ϕ_p-ϕ_ω(τ)), cos θ_p}, where ϕ_ω(τ)=∫_0^τ ω(r(τ'))dτ' is the rotation angle of the vierbeins, and θ_p and ϕ_p are the initial spherical angles of the vector n_p^(α). A pulse can be received by the observer if and only if the unit vector n_(α) lies between the two cones: √(1-(n_(β)n_p^(β))^2)/tan(α_2/2) < n_(β)n_p^(β) < √(1-(n_(β)n_p^(β))^2)/tan(α_1/2), where α_1 and α_2 are the cone angles that bound the cones of the pulsar beam (see, e.g., <cit.>). The results of calculations of the observation times of the pulsar for a particular choice of its orientation parameters are presented in Figure <ref>. If the parameters of motion of the binary pulsar are known, one can extract the unknown parameters (θ_p, ϕ_p, α_1, α_2) by fitting this diagram, and then use them to predict observation times of this pulsar in the future. The magnification coefficient of these pulses can be calculated using the following formula (for the definition and derivation see <cit.>): I_p/I_0 ≡ K = (1/(r^2 sinϕ_r)) × D/((z+1)^4 √(1-(1-2M/r)D^2/r^2)) × |dD/dϕ_r|. Light from a source can reach the observer along an infinite number of trajectories (see, e.g., <cit.>), but the largest intensity belongs to the main ray, which is received with the minimal angle ϕ. The intensity of the other rays decreases as ∼ e^-2πn, where n is the number of the ray <cit.>. Because of this, the most intense light reaches the observer along the main trajectory, which allows one to distinguish the light of this ray from the others. As an example, in Fig. <ref> we plot the magnification factor (<ref>) for the first two rays for the set of motion parameters of Table <ref>. § DISCUSSION In the present work a technique for modeling the electromagnetic radiation of a binary system in the field of a supermassive black hole has been presented. We have calculated the redshift, magnification coefficient and extinction times for a model binary system. The results suggest that for a sufficiently close binary system the redshift can be of the order of unity, so that a weak-field approximation may not be used. The redshift of a compact binary has two components: a slowly changing one and a rapidly changing but small one. They are connected with the motion of the system as a whole around the black hole and with the relative motion of the stars in the system, respectively. Using the redshift data, the timescales and amplitudes of both components can be estimated, which provides constraints on the orbital parameters of the binary system. The part of the redshift that corresponds to the motion of the system as a whole is the more interesting one, because this motion can be relativistic. It follows from numerical calculations that changes in the different parameters of motion lead to characteristic changes of the redshift curve (see Fig. <ref>). This makes it possible to reconstruct the motion of the binary system using the redshift as a function of time obtained from observations. The equations of motion of a binary star presented in this work can be applied to any external gravitational field that varies on scales much larger than the size of the compact binary system; the described method can therefore be applied to other external gravitational fields as well. The use of the Fermi coordinate formalism also allows a simple derivation of the condition for the extinction times of the pulsar (<ref>).
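To make that extinction condition concrete, the two-cone visibility criterion stated earlier in this section is elementary to evaluate once the beam axis n_p^(α) and the ray direction n_(α) are known. The snippet below is our own illustration, with hypothetical orientation parameters (θ_p, ϕ_p, α_1, α_2 are not values from the paper):

# Minimal sketch (ours) of the two-cone pulse-visibility criterion.
import numpy as np

def beam_axis(theta_p, phi_p, phi_omega):
    # Vierbein components of the pulsar rotation axis n_p, as in the expression above
    return np.array([np.sin(theta_p) * np.cos(phi_p - phi_omega),
                     np.sin(theta_p) * np.sin(phi_p - phi_omega),
                     np.cos(theta_p)])

def pulse_visible(n, n_p, alpha1, alpha2):
    # True when the angle theta between n and n_p satisfies alpha1/2 < theta < alpha2/2
    c = float(np.dot(n, n_p))            # cos(theta)
    s = np.sqrt(max(1.0 - c * c, 0.0))   # sin(theta)
    return s / np.tan(alpha2 / 2) < c < s / np.tan(alpha1 / 2)

n = np.array([1.0, 0.0, 0.0])            # unit vector towards the observer (hypothetical)
for phi_omega in np.linspace(0.0, 2 * np.pi, 12, endpoint=False):
    n_p = beam_axis(theta_p=1.3, phi_p=0.0, phi_omega=phi_omega)
    print("phi_omega = %5.2f  visible: %s" % (phi_omega, pulse_visible(n, n_p, 0.6, 1.2)))

Scanning ϕ_ω over one rotation period of the vierbeins in this way reproduces the alternation of visible and extinct intervals shown in the paper's diagram.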
In many works a Lorentz-transformation approach has been used for this purpose (see <cit.>). However, this approach does not include the geodesic precession in the field of a supermassive black hole and cannot be used to calculate the extinction times for pulsar motion in the close vicinity of the black hole. For practical applications it is much more interesting to solve the full inverse problem: given the redshift as a function of time, find the orbital parameters of the binary system. In some works the inverse problem for a source moving in the field of a supermassive black hole has been solved <cit.> by using additional data, such as the magnification factor of the ray. It would be interesting to solve the inverse problem for a binary star in an external gravitational field using the redshift data only, since these can be obtained with high accuracy (other characteristics of the electromagnetic radiation, such as the magnification coefficient, are obtained with much lower accuracy). We leave this for a separate paper. A series expansion of the redshift of a compact radiation source has been obtained. This expansion can be useful not only for binary stars, but for any compact source in an external gravitational field for which the law of internal motion is obtained separately in a comoving reference frame. § WAVE VECTOR PROPERTIES Consider a radiation source moving along a trajectory x^i=x^i(τ), and an observer staying at the point x=(r=∞,θ_O,φ_O) at infinity. Let k^i(x^l) be the wave vector of the light ray that was emitted by the source at the point x^i and will be received by the observer. Using the geodesic equation, k^i(x^l) can be obtained for every point of the spacetime (when there are several such rays, a particular one is considered, so that k^i is a smooth function of the coordinates). The vector k^i is isotropic and satisfies the geodesic equation: k^i k_i = 0, k^l k_i;l = 0. Let ξ^i be a Killing vector (ξ_i;k+ξ_k;i=0). Equations (<ref>) and (<ref>) imply that k^i (ξ^l k_l)_,i = 0, which means that ξ^l k_l is constant along the ray. If the space-time is static, ∂/∂t is a Killing vector and we have k_4=A=const. By an appropriate parametrization of the isotropic geodesic we can set A=1, and therefore k_4=1 in the whole space-time. For a static spherically-symmetric spacetime k_i satisfies the following relations: k_i;j-k_j;i=0. To prove this, introduce spherical coordinates x^i=(r,θ,φ,ct) so that the observer is located at the pole θ=0. In this case k_i depends neither on time nor on the angle φ. The trajectory of each ray lies in a plane φ=const, hence k^3=0, and k_4(x)=1 by its definition. Equations (<ref>) and (<ref>) imply that k^l(k_i,l-k_l,i)=0, which leads to k_1,2-k_2,1=0; k_3,j=0; k_4,j=0. The covariant form of the relations (<ref>) gives (<ref>), which completes the proof. § REDSHIFT OF A FINITE-SIZE RADIATION SOURCE For a finite-size source equation (<ref>) can be applied to each part of the source. In this case k^i is calculated at the location of the corresponding part of the source. For a compact source the redshift can be expanded into a series using ρ/r as a small parameter, where ρ is the characteristic size of the source and r is the characteristic distance, i.e. the distance to the field center. Let us introduce an inner "point" C of the body of the source, with coordinates x^i_C, that moves along the world line x^i_C=ξ^i(τ), and let the part under consideration be located at a point P with coordinates x^i_P.
Our aim is to express the redshift of the rays emitted from the point P in terms of quantities that are defined on the world line ξ^i(τ). Consider a geodesic x^i(σ) that is orthogonal to the world line ξ^i(τ): d^2x^i/dσ^2+Γ^i_kl (dx^k/dσ)(dx^l/dσ)=0, u_i (dx^i/dσ)(x^j_C)=0, where Γ^i_kl are the Christoffel symbols, u^i=dξ^i/dτ, and σ is the parameter equal to the geodesic distance. Denote η^i=dx^i/dσ(x^j_C). Equations (<ref>)-(<ref>) allow us to express x^i_P using η^i: x^i_P=ξ^i(τ)+η^iσ_P+O(ρ^2), where σ_P denotes the geodesic distance from x^i_C to x^i_P; it is of order ρ. The vectors k^i(x^i_P) and u^i(x^i_P) that enter (<ref>) must be calculated on the world line of the source. To find the corresponding quantities on ξ^i(τ), it is necessary to introduce the vector fields k^i_F(x^j) and u^i_F(x^j) on the interval between x^i_C and x^i_P, by parallel transporting these vectors along the geodesic that connects the points x^i_C and x^i_P. Therefore we obtain k^i_F(P)=k^i(P); u^i_F(P)=u^i(P); k^i_F(C)=k^i_F(P)+Γ^i_sl k^s_F(P)η^lσ_P+O(ρ^2), u^i_F(C)=u^i_F(P)+Γ^i_sl u^s_F(P)η^lσ_P+O(ρ^2). Apart from the fields introduced above, we have another vector field k^i(x^j). This field can be defined as the field of tangent vectors to the (first-order) null geodesics that lead to the observer (see Appendix A). We also have k^i_F(P)=k^i(P)=k^i(C)+k^i_,j(C)h^j_(α)X_1^(α)+O(ρ^2), where X_1^(α) are the Fermi coordinates of the source (P) relative to the center of mass (C). By substituting (<ref>) into (<ref>), we have k^i_F(C)=k^i(C)+k^i_;j(C)h^j_(α)X_1^(α)+O(ρ^2). From (<ref>) and the equation σ_Pη^i=h^i_(α)X^(α), for the 4-velocity of the source we have u^i(P)=dx^i_P/dτ=dx^i_C/dτ+d/dτ(h^i_(α)X_1^(α))=h^i_(4)+h^i_(α)v^(α)+e_(α)^(β)(γ)h^i_(β)ω_(γ)X_1^(α)-Γ^i_sl h^s_(α)X_1^(α)h^l_(4). Then u^i_F(C)=h^i_(4)+h^i_(α)v^(α)+e_(α)^(β)(γ)h^i_(β)ω_(γ)X_1^(α). Since the fields k^i_F(x^j), u^i_F(x^j) and g_ij(x^j) are covariantly constant along the geodesic from x^i_P to x^i_C, we have (k^iu_i)_s=g_ij(P)k^i_F(P)u^j_F(P)=g_ij(C)k^i_F(C)u^j_F(C). By substituting (<ref>) and (<ref>) into (<ref>), with the metric in the form g_ij(C)=g_ij^Sch+ϕ_(l)(m)h^(l)_i h^(m)_j (see also (<ref>)) and (<ref>), we obtain z_∞(τ+τ_ret)+1=h_(4)ik^i+v_(α)k^(α)+e_(α)(β)(γ)k^(α) X_1^(γ)ω^(β)+h_(4)ik^i_;jh^j_(α)X_1^(α)+2ϕ_(4)(4)k^(4)+O(ρ^2,v^2,ρ v,ϕρ,ϕ v), where k^(l)=h^(l)_ik^i. All terms in (<ref>) must be calculated on the world line ξ(τ) at the proper time τ. The term ϕ_(4)(4)k^(4) in this expression has the meaning of the Shapiro delay due to the gravitational field of each of the binary components. It can be written as 2ϕ_(4)(4)k^(4)=Δ_1+Δ_2, where Δ_1=(4m_1/R_0)(z_0+1), Δ_2=(m_2/√(x^(α)x_(α)))(z_0+1). The term Δ_2 is of order m/ρ, which is usually ≲ 10^-6 for a pulsar, and we therefore neglect it. The other term, Δ_1, has the form Δ_1=const·(z_0+1). For a pulsar this leads only to a rescaling of the quantity T_p (see Sec. <ref>), and therefore we do not consider this term either. The time dilation due to the finite size of the compact system we denote by τ_ret. It can be found from the relation k_i dx^i/dλ=const that holds along an isotropic geodesic (see e.g. <cit.>). Here dx^i/dλ is a vector connecting two close geodesics in a sheaf, and λ is a global parameter that labels the geodesics in this sheaf.
We obtain (k_iu^i)_o(z+1)τ_ret=(k_iη^i)_sσ_P+O(ρ^2), which gives τ_ret=k_(α)(C)X_1^(α)/(A(z+1))+O(ρ^2). By substituting (<ref>) into (<ref>) we obtain z_∞(τ)+1=h_(4)ik^i-(1/(z_0+1))(dz_0/dτ)k_(α) X_1^(α)+v_(α)k^(α)+e_(α)(β)(γ)k^(α) X_1^(γ)ω^(β)+h_(4)ik^i_;jh^j_(α)X_1^(α)+O(ρ^2,v^2,ρ v), where z_0 is defined by the relation z_0+1=k_i dx^i_C/dτ. Taking into account the relation k_i;j=k_j;i (see Appendix A), we can rewrite (<ref>) as z_∞(τ)+1=(z_0+1)(1-d/dτ(n_(α)X_1^(α)))+O(ρ^2,v^2,ρ v), where n^(α)=k^(α)/√(k_(β)k^(β))=k^(α)/(z_0+1) is the unit vector of the ray. All terms on the right-hand side of equation (<ref>) must be calculated at the proper time τ. Note that formula (<ref>) can be rewritten in covariant form: z_∞(τ)+1=(z_0+1)(1-d/dτ(n_iη^iσ_p))+O(ρ^2,v^2,ρ v), where η^iσ_p≈δ x^i is the coordinate distance between the source and the center of mass at the same proper time, and n_i=k_i/(z_0+1).
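A schematic numerical illustration of this final formula (our sketch; the input curves below are placeholders, not the solutions computed in the paper): given z_0(τ) for the centre of mass and the projection q(τ) = n_(α)X_1^(α) of the source offset on the ray direction, the observed redshift follows with a finite-difference derivative.

# Schematic sketch (ours): applying 1 + z_inf = (1 + z_0)(1 - dq/dtau).
import numpy as np

tau = np.linspace(0.0, 1000.0, 20001)
# Placeholder stand-ins for quantities that the paper obtains from the geodesic solution:
z0 = 0.30 + 0.05 * np.sin(2 * np.pi * tau / 800.0)  # slow part: motion around the black hole
q = 1.5 * np.sin(2 * np.pi * tau / 40.0)            # fast part: internal motion of the binary

z_inf = (1.0 + z0) * (1.0 - np.gradient(q, tau)) - 1.0
print("z_0 in [%.3f, %.3f], z_inf in [%.3f, %.3f]"
      % (z0.min(), z0.max(), z_inf.min(), z_inf.max()))

The output shows the fast, small-amplitude component riding on the slow one — exactly the two-component structure discussed above; with real inputs, q(τ) would come from the Fermi-coordinate solution x^α(τ) and the unit ray vector n_(α).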
http://arxiv.org/abs/1702.08381v1
{ "authors": [ "Alexander Gorbatsievich", "Stanislav Komarov", "Alexander Tarasenko" ], "categories": [ "gr-qc" ], "primary_category": "gr-qc", "published": "20170227170815", "title": "Optical appearance of a compact binary system in the neighbourhood of supermassive black hole" }
We report on experimental investigations of an electrically driven WSe_2 based light-emitting van der Waals heterostructure. We observe a threshold voltage for electroluminescence significantly lower than the corresponding single particle band gap of monolayer WSe_2. This observation can be interpreted by considering the Coulomb interaction and a tunneling process involving excitons, well beyond the picture of independent charge carriers. An applied magnetic field reveals pronounced magneto-oscillations in the electroluminescence of the free exciton emission intensity with a 1/B-periodicity. This effect is ascribed to a modulation of the tunneling probability resulting from the Landau quantization in the graphene electrodes. A sharp feature in the differential conductance indicates that the Fermi level is pinned and allows for an estimation of the acceptor binding energy. A new step of complexity has recently been undertaken in the field of two-dimensional crystals, by deterministically placing atomically thin layers of different materials on top of each other. The resulting stacks are referred to as van der Waals (vdW) heterostructures <cit.>. Based on this idea, a few prototype devices, such as tunneling transistors <cit.> and/or light-emitting tunneling diodes <cit.>, have been fabricated and successfully tested. However, further work is necessary in order to better characterize such structures, to learn more about their electronic and optical properties, with the aim to properly design device operation. Here, we unveil new facets of light emitting vdW heterostructures, with reference to the issue of the alignment of electronic bands, effects of Coulomb interaction and a subtle but still active role of the graphene electrodes in these devices. We report on optoelectronic measurements performed on a WSe_2-based tunneling light-emitting diode. The differential tunneling conductance of our structure shows a large zero bias anomaly (peak), which we ascribe to pinning of the Fermi energy at the WSe_2 impurity/acceptor level. A conceivable scenario for the evolution of the band alignment as a function of the bias voltage is proposed. Strikingly, the bias-potential onset for the electroluminescence is found to coincide with the energy of the free exciton of the WSe_2 monolayer (and not with the energy of a single-particle bandgap). This fact points out the relevant role of Coulomb interactions between electrically injected carriers on the tunneling processes in our device. Furthermore, pronounced magneto-oscillations are observed in the electroluminescence emission intensity measured as a function of magnetic field applied perpendicularly to the layer planes. These oscillations, periodic with the inverse of the magnetic field, reflect the modulation of the efficiency of carrier tunneling and are caused by the Landau quantization of the two-dimensional graphene electrodes.We studied a light-emitting diode structure <cit.> that is based on a WSe_2 monolayer as the active part. The layer sequence for this device was Si / SiO_2 / hBN / graphene / hBN / WSe_2 / hBN / graphene. The emission area of the structure is presented on the microscope image in Figure <ref> (a). Figure <ref> (b) depicts a schematic drawing of the layered structure. The two hBN spacers that separate the WSe_2 monolayer from the graphene electrodes are two layers thick. A detailed description of the fabrication process can be found in Ref. 
[Withers2015]. The optoelectronic characteristics of the sample were studied by recording the electroluminescence (EL) signal as a function of bias voltage and in magnetic fields up to 14 T. Current-voltage curves were measured to study the tunneling processes, and photoluminescence (PL) mapping of the sample was performed to additionally characterize the structure. All measurements were performed with an optical-fiber-based insert placed in a superconducting coil. The investigated sample was located on top of an x-y-z piezo-stage kept in helium gas at T=4.2 K. The laser light from a continuous wave Ar+ laser (λ=514.5 nm) was coupled to an excitation fiber of 5 μm diameter and focused on the sample by an aspheric lens. The signal was detected with a 50 μm core fiber (collection spot diameter of ∼ 10 μm) by a 0.5 m long monochromator equipped with a charge-coupled device (CCD) camera. Electrical measurements were performed using a Keithley 2400 source-measure unit. A vertical current was observed upon the application of a bias voltage (V_b) between the two graphene electrodes. Such a charge transfer from one graphene layer to the other can only be achieved via tunneling. To describe the electronic transport perpendicular to the structure, we present in Figure <ref> (c) the variations of the differential conductance G=dI/dV_b as a function of the bias applied between the two graphene electrodes. The optical emission was monitored at the same time. The corresponding integrated EL intensity is presented in Figure <ref> (d). To interpret the optoelectronic behavior of this device, it is crucial to know the band alignment of the heterostructure. To this end one has to rely both on theoretical estimations <cit.> and on the spectroscopic experimental works targeting these band offsets <cit.>. A schematic illustration of the bands is shown in Figure <ref> (e). The drawing depicts the two graphene electrodes, represented by Dirac cones, which are separated from the WSe_2 monolayer by the hBN barriers. The WSe_2 layer is schematically illustrated by the parabolic bands around the K-point of the Brillouin zone. In addition, donor/acceptor-like bands are depicted by horizontal green lines in WSe_2. Transport measurements of the tunneling current in hBN/graphene/hBN structures found the valence band of hBN to be offset by about 1.4-1.5 eV <cit.> from the graphene Dirac point. The band alignment of monolayer WSe_2 and graphene has recently been studied using μ-ARPES, and an offset of 0.70 eV between the Dirac point and the WSe_2 valence band edge has been reported <cit.>. Assuming a direct band gap for monolayer WSe_2 of about 2-2.2 eV <cit.>, one can conclude that the energy separation between the Dirac point of graphene and the valence band edge of WSe_2 should be significantly smaller than that to the conduction band. This finding is also in agreement with theoretical estimations <cit.>. The results are summarized qualitatively in the sketch in Figure <ref> (e). Using the proposed band alignment scenario, we can divide the differential conductance (Figure <ref> (c)) into three distinct tunneling regimes. The first one occurs around zero bias, where a pronounced peak is observed. We ascribe this feature to tunneling through impurity donor/acceptor bands in WSe_2 that pin the Fermi level (Figure <ref> (e)).
With an increase in bias voltage, the tunneling through the impurity band ceases to be resonant and a decrease in differential conductance is observed, giving rise to a symmetric peak-like shape. Our measurements cannot directly determine whether these impurities are of donor or acceptor type. However, within the expected band alignment, the Dirac point of graphene is much closer in energy to the valence band edge of WSe_2, a material which preferentially shows p-type conductivity <cit.>. Hence, we assume that the dominant impurities in the investigated WSe_2 monolayer are of acceptor type, and that the Fermi level is pinned to this impurity band at zero applied bias. A small applied bias is then sufficient to move the Fermi levels of the graphene electrodes out of resonance with this band, producing a symmetric differential conductance feature centered at zero applied bias. The peak at zero bias was also observed for other similar WSe_2 devices; however, it was found that its magnitude can vary strongly from device to device (see supporting information), and it can also be absent. This variation can be understood in terms of different unintentional initial doping of WSe_2, which might vary from flake to flake. The second regime, indicated by the green color in Figure <ref> (c), shows an overall increase of conductance with two peak-like features. The first feature, around V_h ∼± 0.7 V, originates from the onset of hole tunneling into the valence band of WSe_2. This process becomes efficient when the Fermi level of one graphene layer is moved by the amount of the acceptor binding energy, to coincide with the valence band edge of WSe_2. The situation is schematically depicted in Figure <ref> (f). At this point one should mention that another possible reason for the above-mentioned peak at zero bias could be resonant effects due to direct graphene-graphene tunneling <cit.>. However, the graphene electrodes were not intentionally aligned, making the appearance of resonant effects very improbable. Another argument against this alternative scenario is that, with a Fermi level close to the Dirac point, one would roughly need to apply a voltage corresponding to twice the valence band offset to enable hole tunneling, which does not fit the observation of V_h ∼± 0.7 V. The second feature in this regime is the peak at larger bias voltage (V_d ∼± 1.2 V). The origin of this peak is still unclear; a possible explanation could be tunneling involving mid-gap impurity states in WSe_2. The above discussion yields three conditions: an onset voltage for hole tunneling of V_h ∼± 0.7 V, a valence band offset for monolayer WSe_2 of E_VB∼ 0.7 eV, and a Fermi level that is pinned at zero applied bias to the acceptor level. These conditions, together with simple considerations regarding the band structure and the electric field in the sample, allow us to estimate an acceptor binding energy of E_acc∼ 250 meV (see supporting information). At larger bias (third regime), the increase in voltage will mostly drop across the graphene/hBN junction that does not permit tunneling into the WSe_2 layer. In order to observe EL, both electrons and holes must be present in the WSe_2 layer. This condition is satisfied in the voltage region around V_e ∼± 1.7 V, above which EL is observed. The voltage dependence of the spectrally integrated EL signal, shown in Figure <ref> (d), displays a steep onset of emission in that bias range.
We can therefore ascribe the strong increase in conductance to the tunneling of electrons into the WSe_2 monolayer (compare Figure <ref> (g)). Additional data for a similar device showing the same behavior are presented in the supporting information. Strikingly, the onset for EL of V_e ∼± 1.7 V is significantly smaller than the direct band gap of a WSe_2 monolayer, which is about 2-2.2 eV <cit.>. Because the base temperature of our experiment, T=4.2 K, implies a thermal energy below 400 μeV, and given the relative alignment of the graphene electronic bands with respect to those of hBN, the large difference can hardly be explained in terms of thermal activation of carriers or a lowering of the effective hBN barrier caused by the electric field. However, the EL onset at about V_e ∼± 1.7 V corresponds well with the emitted free exciton energy of ∼ 1.72 eV. Based on our experiments, the most probable scenario involves tunneling <cit.> directly into the excitonic states of the WSe_2 monolayer. Because the tunneling of holes starts at bias voltages close to V_h ∼± 0.7 V, a population of holes is already present in the valence band when electrons start to tunnel, directly forming excitons. Such processes were indeed observed for resonant electron tunneling into p-doped GaAs quantum wells (QWs) <cit.>. In the case of the WSe_2 monolayer, the exciton binding energies are large (∼ 0.4 eV <cit.>) compared to excitons in GaAs QW systems, which gives rise to the observed large differences. Moreover, it was shown that excitons can persist in such materials up to large carrier concentrations <cit.>, with an estimated several 10^13 cm^-2 required for the quenching of the excitonic resonances <cit.>. Figure <ref> (a) shows representative PL and EL spectra. The highest energy band (E ∼ 1.72 eV), labelled X^0, can be attributed to the neutral, free A exciton resonance. As observed in EL, the X^0 feature has a full width at half maximum (FWHM) close to 20 meV, hence 3 to 4 times larger than in PL (red dashed line in Fig. <ref> (a)). The large FWHM originates from inhomogeneous broadening, which is more apparent in EL than in PL since, in the case of the former, the signal is collected from the entire flake. At lower energies, a complex broad band is observed, typical for monolayer WSe_2 samples, which has been attributed to charged and localized (bound) excitons <cit.>. This large broad band indicates the presence of a significant amount of impurities in our device. The presence of defects, as evidenced by the optical measurements, fits into the scenario of a pinned Fermi level for this device. A magnetic field was applied perpendicular to the surface of the structure in order to study its impact on the EL signal. First, a strong magneto-resistance develops in the structure and significantly shifts the threshold bias for EL emission to larger voltages with increasing applied magnetic field. We ascribe this additional resistance to the in-plane magneto-resistance of the graphene contacts, which serve as conductors between the metal contacts and the active area <cit.>. This effect hinders measurements at constant applied voltage. To compensate for the additional voltage drop, a constant current was maintained during the magnetic field sweeps, which yielded stable EL measurement conditions. Figure <ref> (b) shows a three-dimensional false color plot of the raw EL signal as a function of magnetic field.
We observed a very intense modulation of the X^0 line, with an intensity and a shape that vary as a function of the magnetic field. This modulation of the exciton emission is not a simple on-off effect; rather, due to the large width of the X^0 feature in EL, an energy-dependent modulation of the X^0 emission can be observed. In order to establish the origin of the modulations, a cut at a constant energy of the X^0 feature, plotted versus 1/B, is presented in Figure <ref> (d). A 1/B-periodicity is apparent, which is further supported by the results of a Fourier analysis of this graph, giving a well-defined peak at a frequency of 93 T, i.e. a period Δ(1/B)=(1/93) T^-1 (see inset). Assuming Landau quantization in the graphene electrodes to be responsible for the observed behavior, one obtains the following Landau level (LL) spectrum <cit.>: E_n=sign(n)v_f√(2eħ B|n|), where v_f=1 · 10^6 m/s is the Fermi velocity and n the Landau level index. For a constant energy cut across the LL spectrum of graphene, one obtains oscillations with a 1/B periodicity. Hence, by using the extracted periodicity one can determine the energy above the Dirac point by calculating E_osc=v_f√(2eħ/Δ(1/B)). This consideration yields an energy separation of about E_osc=350 meV. In Figure <ref> (c) we present an overlay of the graphene Landau levels, with the Dirac point located 350 meV below the energy of the X^0 line, and the measured EL spectra. We find an excellent agreement, since the spacing as well as the energy dependence of the modulations are fully described. Consequently, we conclude that the oscillations are related to the quantized density of states (DOS) of the graphene electrodes. This quantization leads to oscillations in the population of holes in the WSe_2 valence band, since the injection process via tunneling from graphene is modulated by the LL spectrum. At lower energies, no signatures of an energy-dependent modulation could be observed for the broad localized (bound) exciton emission band. However, an attenuation of the broad band that oscillates with the magnetic field but not with the energy is apparent when the exciton is affected by the above-discussed tunneling process. Such a behavior can be observed in Figure <ref> (b) and (c) in the form of lines on top of the broad emission band. This effect is more markedly shown in Figure <ref>, which presents an EL false color map at lower injection current. This effect confirms that the population of these localized states is fed by the population of free excitons: electrons are injected into the WSe_2 layer via tunneling and directly bind to holes to first form excitons, which can then scatter to the localized excitonic bands at lower energy <cit.>. As a consequence, instead of an energy-dependent modulation, as was observed for the broadened X^0 line, one would expect the oscillation frequency to be transferred from the free exciton to the localized bands. Another characteristic observation is the appearance of a single frequency, although there are two tunneling processes (holes and electrons) with different tunneling barriers. Eqn. <ref> describes a Landau level fan chart with an energy spacing between consecutive LLs that decreases with increasing LL index n. This implies that for a high Fermi energy the LL spectrum can be significantly smeared out, so that no clear oscillatory behavior can be expected for the electron tunneling.
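Both numbers used above — the ∼350 meV energy scale extracted from the 1/B period and the shrinking of the LL spacing with index — follow directly from the LL spectrum. A short numerical check (our illustration, not part of the paper):

# Quick check (ours) of E_osc and of the decreasing Landau-level spacing in graphene.
import numpy as np

e, hbar = 1.602176634e-19, 1.054571817e-34   # SI units
v_f = 1.0e6                                   # graphene Fermi velocity, m/s
B_F = 93.0                                    # Fourier peak, T; Delta(1/B) = 1/B_F

E_osc = v_f * np.sqrt(2 * e * hbar * B_F)
print("E_osc = %.0f meV" % (E_osc / e * 1e3))         # ~350 meV

B = 10.0                                      # an illustrative field value
E_n = lambda n: np.sign(n) * v_f * np.sqrt(2 * e * hbar * B * abs(n))
print("at B = %.0f T: E_1 - E_0 = %.0f meV, E_10 - E_9 = %.0f meV"
      % (B, (E_n(1) - E_n(0)) / e * 1e3, (E_n(10) - E_n(9)) / e * 1e3))

The spacing drops from ∼115 meV between the lowest levels to below 20 meV around n = 10 at 10 T, which is why a Fermi level lying deep in the electron-injecting electrode washes out the oscillations.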
This observation of a single frequency modulating the exciton emission intensity can therefore be seen as additional evidence that the graphene electrodes are responsible for the oscillations. The modulation in tunneling that was registered optically should in principle also be present in the electrical characteristics. Instead, the experiment shows no clear modulation in the measured voltage as a function of magnetic field. This discrepancy can be understood by taking into account the actual processes that influence the two measurement techniques. The EL signal is only sensitive to the excitonic population, which is a result of the injection of electrons and holes via tunneling. The electrical measurement, however, is a sum over all possible tunneling pathways and also includes leakage and parasitic components, which can mask the effect. With the magneto-EL measurement we therefore gained information difficult to access with standard magneto-transport tunneling experiments. In summary, we report on the optoelectronic properties of a WSe_2 based tunneling light-emitting vdW heterostructure in magnetic fields. We propose a conceivable scenario for the band alignment in the structure, which allows us to estimate an acceptor binding energy. The Landau quantization in the graphene electrodes is shown to strongly modulate the injection of holes into the valence band of the active WSe_2 monolayer, which in turn modulates the EL signal. The observed oscillations of the neutral exciton intensity as a function of the magnetic field show a pronounced 1/B periodicity, which was used to deduce an effective band offset between graphene's Dirac point and the valence band edge of the WSe_2 monolayer. Our results hence show that the role of graphene electrodes in vdW heterostructures goes far beyond that of a semitransparent electrode with a low density of states. In addition, we observed EL emission for applied voltages well below the corresponding band gap of monolayer WSe_2, which was explained in terms of direct tunneling of carriers into excitonic states in WSe_2. We found the EL signal to be more sensitive to the quantized hole injection than magneto-transport, which illustrates the advantage of optoelectronic tunneling measurements. Our findings highlight the importance of excitonic states for the tunneling processes in vdW heterostructures, giving rise to sub-bandgap EL, which could be a key aspect for future optoelectronic device engineering. This work was supported by European Research Council Synergy Grant Hetero2D, EC-FET European Graphene Flagship (no. 604391), The Royal Society, Royal Academy of Engineering, U.S. Army, Engineering and Physical Sciences Research Council (UK), U.S. Office of Naval Research, U.S. Air Force Office of Scientific Research and the European Research Council (MOMB project no. 320590). § SUPPORTING INFORMATION Band structure and electric field considerations; additional data on the zero bias anomaly in other samples; evolution of the electroluminescence as a function of bias voltage for a different sample. § SUPPORTING INFORMATION §.§ Band structure and electric field considerations This section describes the approach employed to obtain the estimate of the acceptor binding energy given in the main text (E_acc∼ 250 meV). To extract this value we made the following assumptions: * The electric field is homogeneous across the structure for voltages up to V_h, i.e. no free carriers are present in the WSe_2 monolayer.
This condition makes it reasonable to simplify the situation by introducing an effective dielectric constant, weighted by the thicknesses of the layers, to describe the electric fields. * The offset between the Dirac point of the graphene electrodes and the valence band of the WSe_2 layer is E_VB=-0.7 eV. As described in the main text, this value is based on the literature. * The Fermi level is pinned to the acceptor states at zero applied voltage, in accordance with our interpretation of the feature centered at 0 V observed in the differential conductance (see main text). * The onset of hole tunnelling corresponds to a voltage of V_h=± 0.7 V, as extracted from our measurements. As a first step we define an effective dielectric constant ϵ_eff for the whole hBN/WSe_2/hBN stack (using ϵ_hBN=4 <cit.>, ϵ_WSe_2=7.2 <cit.>, d_hBN=1.34 nm, d_WSe_2=0.65 nm): ϵ_eff=(ϵ_hBN· d_hBN+ϵ_WSe_2· d_WSe_2)/d_stack∼ 5, where ϵ_hBN and ϵ_WSe_2 are the dielectric constants of hBN and of WSe_2, respectively. We use ϵ_eff to estimate the electric field F according to F=V_h/(d_stack·ϵ_eff). This field causes an energy shift of the Dirac cones, which can be written as E_field=e· F· d_stack=e· V_h/ϵ_eff. The definitions of the relevant energies needed to estimate the acceptor energy E_acc are presented in Figure <ref>. The application of a voltage equal to V_h=± 0.7 V leads to the build-up of an electric field across the structure (see Figure <ref>). As assumed above, we estimate E_field by dividing the applied voltage by the effective dielectric constant, giving E_field=0.14 eV. The Dirac cones of the graphene electrodes should each shift by |E_field/2|, since we assumed the screening inside the stack to be zero. Please note that we choose the origin to be exactly in the center of the symmetric structure (long dashed line in Figure <ref>). We can now calculate the Fermi energy for holes, E_fh, since we know that it has to coincide with the valence band edge of WSe_2 at the applied voltage V_h: E_fh=-(|E_VB|-|E_field/2|)=-0.63 eV, which gives a hole concentration (with v_f=1· 10^6 m/s) of n_h=2.92· 10^13 cm^-2. The Fermi level in the second graphene electrode can be obtained since we apply a constant voltage, which corresponds to a constant energy difference between the quasi Fermi levels E_fh and E_fe. Using this fact we obtain E_fe=-(|e· V_h|-|E_field|-|E_fh|)=-0.07 eV, and hence a hole concentration of n_e=3.54· 10^11 cm^-2. Charge conservation allows us to obtain the initial hole carrier concentration of the pinned Fermi level, which yields n_0=(n_e+n_h)/2=1.48· 10^13 cm^-2. This concentration corresponds to an initial Fermi level of E_f0=-0.45 eV. Having obtained the position of the pinned Fermi level, we can estimate the acceptor binding energy to be E_acc=|E_VB|-|E_f0|=0.25 eV. §.§ Zero bias anomaly As mentioned in the main text, a peak at zero bias was also observed for other similar graphene/hBN/WSe_2/hBN/graphene heterostructures. Figure <ref> presents the differential conductance curves for two additional devices showing this peak. §.§ Additional electroluminescence data Figure <ref> presents the electroluminescence (EL) versus bias voltage behavior for another device. The differential conductance curve for this device is presented above as the black trace in Figure <ref>.
The map in Figure <ref> clearly illustrates that the onset for EL is slightly above V∼± 1.7 V, in agreement with the interpretation given in the main text that excitonic states take part in the tunneling. Figure <ref> shows the EL spectra for several bias voltages corresponding to the map in Figure <ref>. For V∼± 1.8 V a clear EL signal can be observed, showing that the onset is well below the band gap of monolayer WSe_2 of about 2-2.2 eV <cit.>. The bulk crystal used for the exfoliation of monolayer WSe_2 was purchased from HQ graphene.
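As a compact cross-check of the supporting-information estimate above, the following short script (ours, not part of the paper) reproduces the arithmetic step by step, converting graphene Fermi energies to carrier densities with the ideal Dirac-cone relation n = E^2/(πħ^2v_f^2):

# Numerical re-derivation (ours) of the acceptor binding energy estimate.
import numpy as np

hbar, e = 1.054571817e-34, 1.602176634e-19   # SI units
v_f = 1.0e6                                   # graphene Fermi velocity, m/s

eps_hBN, eps_WSe2 = 4.0, 7.2
d_hBN, d_WSe2 = 1.34e-9, 0.65e-9              # m
eps_eff = (eps_hBN * d_hBN + eps_WSe2 * d_WSe2) / (d_hBN + d_WSe2)

V_h, E_VB = 0.7, 0.7                          # onset voltage (V), valence-band offset (eV)
E_field = V_h / eps_eff                       # potential drop across the stack, eV
E_fh = E_VB - E_field / 2                     # |Fermi level| of the hole-injecting electrode, eV

density = lambda E_eV: (E_eV * e / (hbar * v_f))**2 / np.pi   # m^-2
energy = lambda n_m2: hbar * v_f * np.sqrt(np.pi * n_m2) / e  # eV

n_h = density(E_fh)
n_e = density(abs(V_h - E_field - E_fh))      # second electrode, |E_fe| = 0.07 eV
n_0 = (n_h + n_e) / 2.0                       # charge conservation
E_f0 = energy(n_0)
print("eps_eff = %.1f, E_field = %.2f eV" % (eps_eff, E_field))
print("n_h = %.2e cm^-2, n_e = %.2e cm^-2" % (n_h * 1e-4, n_e * 1e-4))
print("E_f0 = %.2f eV  ->  E_acc = %.2f eV" % (E_f0, E_VB - E_f0))

Running it returns eps_eff ≈ 5, n_h ≈ 2.9·10^13 cm^-2, n_e ≈ 3.5·10^11 cm^-2, E_f0 ≈ 0.45 eV and E_acc ≈ 0.25 eV, matching the values quoted above.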
http://arxiv.org/abs/1702.08333v1
{ "authors": [ "J. Binder", "F. Withers", "M. R. Molas", "C. Faugeras", "K. Nogajewski", "K. Watanabe", "T. Taniguchi", "A. Kozikov", "A. K. Geim", "K. S. Novoselov", "M. Potemski" ], "categories": [ "cond-mat.mes-hall" ], "primary_category": "cond-mat.mes-hall", "published": "20170227154154", "title": "Sub-bandgap voltage electroluminescence and magneto-oscillations in a WSe2 light-emitting van der Waals heterostructure" }
R. Koopman OCLC Research, Schipholweg 99, Leiden, The Netherlands Tel.: +31 71 524 6500 rob.koopman@oclc.org S. Wang OCLC Research, Schipholweg 99, Leiden, The Netherlands Tel.: +31 71 524 6500 shenghui.wang@oclc.org Mutual Information based labelling and comparing clusters Rob Koopman Shenghui Wang Received: date / Accepted: date ========================================================= After a clustering solution is generated automatically, labelling these clusters becomes important in order to help understand the results. In this paper, we propose to use a Mutual Information based method to label clusters of journal articles. Topical terms which have the highest Normalised Mutual Information (NMI) with a certain cluster are selected to be the labels of that cluster. A discussion of the labelling technique with a domain expert was used as a check that the labels are discriminating not only lexically but also semantically. Based on a common set of topical terms, we also propose to generate lexical fingerprints as a representation of individual clusters. Eventually, we visualise and compare these fingerprints of different clusters from either one clustering solution or different ones. § INTRODUCTION Identifying thematic structures in science (so-called topics) is the shared goal of all the different methods described in this special issue "Same data, different results?" Every method produced a set of clusters, with each cluster grouping similar or relevant articles together, to reflect certain thematic structures in the same dataset. Comparing different clustering solutions is, however, a challenge in itself, as reported in <cit.>. Whatever numeric measures we apply, such as shared documents across different solutions or size distributions, those measures give little insight into the meaning of the differences between clustering solutions. For the interpretation of clustering solutions, and to relate them to the research fields that people know about, it seems more natural and intuitive if we could assign human-understandable labels to describe the content of those clusters or the topics they represent. This paper presents a method to first assign labels and second to compare them at a more abstract yet still meaningful level. Although the approach has been developed as part of the "Same data, different results?" collaboration, we believe that it could also be applied to the clustering of other objects. It is not straightforward to pick descriptive, human-understandable labels to summarize the content of clusters produced by an automated clustering algorithm <cit.>. Labelling clusters is often done by those producing them, on the basis of common or specific tacit knowledge. When doing automatic clustering of documents, which also contain lexical information in titles, keywords, abstracts, publication venues and the like, measures based on frequency and co-occurrence of terms come to mind. For example, one could look at the most frequent terms in the bibliographic metadata of the documents belonging to a cluster. Or, one could consider using terms that occur frequently in the centroid (the middle of a cluster) or in the documents that lie closest to the centroid. For this paper we follow another labelling approach, namely to use measures such as mutual information to compare the distribution of terms in one cluster with that in other clusters. Those terms are extracted from the lexical information of the documents, in our case the titles and abstracts of the articles.
We call this a differential cluster labelling approach, because it selects those terms which are frequent in one cluster but not frequent in others as potential labels for this cluster. The advantage of this method is that it is independent of the clustering method, because it only uses the terms extracted from the articles' titles and abstracts against the final cluster assignments. Furthermore, it can be applied independently of the availability of assigned keywords or subject headings, which are commonly used. Having labels based on significant terms which are human-understandable should contribute to the comparison of clustering solutions. We further extend the labelling approach and identify a set of terms which are most informative for all clustering solutions that we need to compare. We then generate a fingerprint for each cluster by measuring its Normalised Mutual Information against these labels. We order these selected terms by solving the Travelling Salesman Problem <cit.>, resulting in a one-dimensional word-space where each term has a specific coordinate. Now we can visualize the fingerprints of all clusters in terms of the Normalised Mutual Information and compare them using those common label-based coordinates. This gives us direct insight into the qualitative differences between clusters across different clustering solutions. The main goal of this paper is to apply methods based on Normalised Mutual Information to label and compare clusters. In the first part we describe our experiment of using the Normalised Mutual Information to identify labels for individual clusters from different clustering solutions. To test the meaningfulness of the labels we discussed them with a domain expert. The second part of the paper is about comparing clusters based on their label-based fingerprints. § MUTUAL INFORMATION BASED LABELLING Mutual Information is a common technique for labelling clusters <cit.>. The Mutual Information measures how much information the presence/absence of a term t contributes to making the correct clustering decision on a cluster c. In other words, the Mutual Information represents the reduction in uncertainty about the cluster c given the knowledge of the term t. A high Mutual Information between a term t and a cluster c suggests that this term describes a large part of the content of this cluster; therefore this term could be a candidate for labelling this cluster. Formally, the mutual information between a term t and a cluster c is calculated as follows: I(t,c) = ∑_i=0^1∑_j=0^1 P(T_i, U_j) log_2 [P(T_i, U_j)/(P(T_i) P(U_j))], where T_0 indicates that an article does not contain the term t and T_1 that it does, U_0 indicates that an article does not belong to the cluster c and U_1 that it does, P(T_i, U_j) is the probability that T_i and U_j happen together within one article, and P(T_i) and P(U_j) are the probabilities that these events happen independently. The probabilities are estimated by dividing the frequency of the observed event (the article contains or does not contain the term t, or it is or is not in the cluster c) by the total number of articles. We then normalize this mutual information I(t,c) by the average of the entropies of the term and the cluster, i.e., NMI(t,c)=2× I(t,c)/(H(t)+H(c)), where H(t)=-∑_i=0^1 P(T_i) log_2 P(T_i) and H(c)=-∑_j=0^1 P(U_j) log_2 P(U_j). This Normalised Mutual Information (NMI) score is non-negative. When a term and a cluster are completely independent from each other, the NMI score is 0.
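A minimal implementation of Eqs. (<ref>)-(<ref>), including the sign convention described next, could look as follows (our illustration in Python; the array sizes and proportions are made up for the example):

# Sketch (ours) of the signed NMI between a term and a cluster.
import numpy as np

def entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def signed_nmi(term, cluster):
    # term, cluster: boolean arrays over all articles
    joint = np.array([[np.mean(~term & ~cluster), np.mean(~term & cluster)],
                      [np.mean(term & ~cluster), np.mean(term & cluster)]])
    pt, pc = joint.sum(axis=1), joint.sum(axis=0)        # marginals P(T_i), P(U_j)
    mi = sum(joint[i, j] * np.log2(joint[i, j] / (pt[i] * pc[j]))
             for i in (0, 1) for j in (0, 1) if joint[i, j] > 0)
    nmi = 2.0 * mi / (entropy(pt) + entropy(pc))
    # negative sign when the term occurs in the cluster less often than expected
    return nmi if joint[1, 1] >= term.mean() * cluster.mean() else -nmi

rng = np.random.default_rng(0)
cluster = rng.random(10000) < 0.1                 # a cluster holding ~10% of the articles
term = cluster & (rng.random(10000) < 0.8)        # a term concentrated in that cluster
print("signed NMI = %.3f" % signed_nmi(term, cluster))

Ranking all terms of a cluster by this score and keeping the top ones yields the labels; the solution-level variant introduced later only replaces the binary cluster variable by the m-valued cluster assignment.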
When a term occurs frequently in one cluster but rarely in others, the NMI score between this term and this cluster is high. When a relatively frequent term has an extraordinarily low occurrence in one cluster, the NMI score between this term and the cluster is also rather high, indicating that the cluster is not about this term. In order to distinguish the positive and negative associations between terms and clusters, we assign a negative sign to those NMI scores for which the occurrence of a term is less than expected.[If a cluster covers 10% of the total dataset, then 10% of the term's occurrences over the total dataset are expected to fall within this cluster.] For each cluster in a specific clustering solution, we calculate the NMI scores between this cluster and all of the 60 thousand topical terms,[The topical terms were extracted from the titles and abstracts of the articles in the Astro dataset. Please refer to <cit.> for more details.] and rank these terms based on their NMI scores. The terms which have the highest NMI scores are good candidates for labelling this particular cluster. Table <ref> shows the topical terms with the 10 highest NMI scores for our K-Means clusters <cit.>. The same method was used to label all the clusters from the different solutions described in this special issue. As shown here, this method is independent of how these clusters are generated, as it only uses the information from the articles which are in the clusters. Also, these topical terms are extracted automatically from the titles and abstracts of all articles <cit.>. As said before, this labelling method does not depend on the availability of the pre-assigned keywords or subject headings which are commonly used. It is actually generalisable to label any collection of articles. Because it is a data-driven approach based on lexical information, these topical terms are sometimes only understandable to domain experts, and most likely part of the very specific vocabulary of this domain. Our impression, alternating between articles and the identified labels, was that the labels do reflect potential topics, and do so on a lower level of abstraction, i.e. more specifically than keywords chosen by authors or librarians. Analysis with a domain expert With the produced labels for the clusters we can now compare clusters based on those labels. Still, there is one problem we cannot solve: there is no objective ground truth to evaluate the resulting labels. Not being experts in astrophysics, we cannot really judge how meaningful the selected labels are compared with the content of the clusters. As a cross-check, we discussed the labels for one clustering solution, shown in Table <ref>, with a domain expert. The fact that this expert also has a background in bibliometrics helped in the discussion. We explained the labelling procedure and provided the domain expert with the table, without providing the articles assigned to each cluster. In this discussion, he remarked on some inner logic between the labels and subsequently the clusters. Based on his knowledge of the Astrophysics field, he developed a concept map to order those clusters. He started with six categories of astrophysical objects at different scales: Cosmology, Galaxies, Compact Objects, Stars, Planets, Elementary Particles, plus a general category about Observation Techniques. He used "is part of" relations to draw a categorical backbone of the whole field.
He then assigned each cluster to these categories and produced the concept map shown in Figure <ref>. Each cluster is connected to one of the main categories by a solid blue arrow, representing a "deals with" relation. This relation indicates that these clusters mostly belong to that particular category. For example, the clusters ok 0, ok 1 and ok 7 are all about Galaxies. Indeed, we find terms such as "galactic" or "galaxy" among the labels for these clusters. Some clusters in the concept map also have an extra dotted red link to another category, indicating that they are related to that category as well. For example, the cluster ok 13 is mostly about Elementary Particles, but is also related to Cosmology. This is not much of a surprise, because, as we reported in <cit.>, there are not always clear boundaries between clusters. So it is quite possible that some clusters are actually bridging two major categories. This exercise with one expert is not a representative evaluation. A more systematic evaluation, with more experts carefully checking the articles in these clusters, would of course be more informative. Still, this exercise gave us some confidence in the appropriateness and meaningfulness of those automatically generated labels. They were obviously specific, discriminating and, in the end, informative enough for a domain expert to draw a knowledge landscape about these clusters along the seven categories with good confidence. For us it was encouraging enough to trust the labels and engage in a comparison exercise between clusters from different solutions. § QUANTITATIVE COMPARISON OF CLUSTERING SOLUTIONS BY LABELS Since each cluster in each solution is labelled by its most significant or informative topic terms, it is possible to find the most informative topic terms across all clusters in one solution. Those labels would then represent the whole clustering solution. Therefore, we further extend the formula (Eq. <ref>) to measure the NMI scores between a whole clustering solution and all the 60K topical terms. For a clustering solution C with m clusters ({c_1, c_2, …, c_m}), we compute I(t,C) = ∑_k=1^m ∑_i=0^1 P(T_i, U_k) log_2 [P(T_i, U_k)/(P(T_i) P(U_k))], where t is a topical term, T_0 indicates that an article does not contain the term t and T_1 that it does, and U_k indicates that an article belongs to the cluster c_k. We normalise it as follows: NMI(t,C)=2× I(t,C)/(H(t)+H(C)), where H(t)=-∑_i=0^1 P(T_i) log_2 P(T_i) and H(C)=-∑_i=1^m P(U_i) log_2 P(U_i). The sign of the final NMI score is assigned in the same way as described in the previous section. The probabilities are estimated by dividing the frequency of the observed event by the total number of articles which have a cluster assignment. This is different from the labelling of individual clusters described in the previous section. The reason is that not all clustering solutions have a full coverage of the whole dataset.[Please see Table 1 in <cit.>.] We only look at the part of the dataset which is covered by the clustering solution and consider that the rest of the dataset does not contribute to the information of the clusters. For a clustering solution, we can now compute the topic terms which have the highest NMI scores, i.e., which contain the most information about all the clusters in this solution. Table <ref> gives the top ten terms, in descending order of their NMI scores, for the seven clustering solutions described in this special issue. As Table <ref> shows, the most informative terms for these clustering solutions are very similar.
That means these methods actually use similar information to make major clustering decisions, while their differences lie more in how they handle the less informative terms which are not listed in this table. The order of these terms, based on their NMI scores, is also similar for most of the solutions, with "galaxies" as the most informative term. However, ECOOM-NLP11 ranks "black hole" and "gamma ray" at the top, which is different from the others. And compared to the other solutions, "dark matter" is ranked relatively higher for the two ECOOM solutions. This qualitative analysis invited us to do a quantitative comparison of individual clusters based on their information content. To be able to compare all the clusters using the same coordinates, we collected the top 50 labels computed from the seven clustering solutions listed in Table <ref>. We kept all terms occurring in at least two lists of labels for clustering solutions and removed all duplicates. This leads to 61 different labels in total. In the next step we ordered these labels in such a way that the sum of the distances between neighbouring labels is minimal, i.e. similar labels are positioned close to each other. In other words, we deal with a Travelling Salesman Problem (TSP) <cit.>. Distances between labels are calculated based on their vectorial representations in the semantic matrix <cit.>. Then we apply one of the standard algorithms for the TSP <cit.> and implement a simplified version of the Chained Lin-Kernighan heuristic <cit.>. In the last step, we re-computed the NMI scores between these labels and all the clusters from all the solutions, using Eq. <ref>. Eventually each cluster is represented by a 61-dimensional vector, which we call the fingerprint of the cluster. We use this vector as a global lexical coordinate system, based on which we can now compare individual clusters, within a solution or across solutions, in a more visual way. In Figure <ref>, we visualise selected K-Means and Louvain clusters in terms of their fingerprints. The selection of these clusters is based on the fact that their most informative labels have a very high absolute NMI score. Given that most of the labels have a very low or even negative NMI value, the high peaks at a very few, and sometimes unique, labels are very informative about what these clusters are about. For example, we are almost certain that ok 19 is about "inflation," ok 18 is about "white dwarf," etc. Actually, the 7 clusters in the upper figure for each solution, for example ok 19, ok 18, etc. for K-Means, have more distinguishing labels than those in the figure below, i.e., their highest NMI scores are higher than those in the figure below. We are more certain about the topics of these clusters. The clusters in the lower figures for both solutions have relatively less distinguishing (multiple peaks with lower NMI scores) yet still quite informative labels. It is also possible to group the individual clusters from different solutions based on their fingerprints. Figure <ref> shows two groups of clusters. In Figure <ref> (a), four clusters from four clustering solutions have highly similar fingerprints, and they all have one single focus: "grb" (gamma ray burst). Looking at these four clusters more carefully, they share a large overlap in terms of the articles they contain. The average Jaccard similarity coefficient among them is 0.64.
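For concreteness, here is a simplified stand-in (ours) for the label-ordering step described above. The paper uses a Chained Lin-Kernighan heuristic; below, greedy nearest-neighbour construction plus 2-opt operates on cosine distances between randomly generated stand-ins for the label vectors from the semantic matrix:

# Sketch (ours): order 61 labels so that neighbouring labels are similar (open-path TSP).
import numpy as np

rng = np.random.default_rng(1)
vecs = rng.normal(size=(61, 50))                 # stand-in for the semantic label vectors
vecs /= np.linalg.norm(vecs, axis=1, keepdims=True)
D = 1.0 - vecs @ vecs.T                          # cosine distances

def path_length(order):
    return sum(D[order[i], order[i + 1]] for i in range(len(order) - 1))

# Greedy nearest-neighbour construction
order, left = [0], set(range(1, len(D)))
while left:
    nxt = min(left, key=lambda j: D[order[-1], j])
    order.append(nxt)
    left.remove(nxt)

# 2-opt improvement: reverse segments while that shortens the path
improved = True
while improved:
    improved = False
    for i in range(1, len(order) - 1):
        for j in range(i + 1, len(order)):
            new = order[:i] + order[i:j + 1][::-1] + order[j + 1:]
            if path_length(new) < path_length(order):
                order, improved = new, True
print("ordered path length: %.2f" % path_length(order))

With the labels ordered, each cluster's 61 NMI scores can be plotted along this one-dimensional coordinate as its fingerprint; grouping the fingerprints, as done next, then only requires, e.g., scikit-learn's AffinityPropagation with default settings.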
In Figure <ref> (b), the seven clusters also have very similar but more complicated fingerprints, with the top three peaks at "galaxy," "redshift" and "active galactic." Their average Jaccard similarity coefficient is 0.47. It is not surprising to see that the fingerprints calculated from the NMI scores are consistent with the simple set-based similarity between clusters: if two clusters overlap more, then their fingerprints are more similar. In fact, after applying a simple Affinity Propagation clustering algorithm[We applied the Python package provided at <http://scikit-learn.org/stable/modules/generated/sklearn.cluster.AffinityPropagation.html> with all default parameter settings. Further investigation of this clustering exercise is out of the scope of this paper.] over these fingerprints of clusters from different clustering solutions, there are 22 "clusters of clusters," including the two shown in Figure <ref>. This may suggest that there exist core articles which have a tendency to always cluster together and are recognised by different methods; in other words, they are prototypical articles which clearly represent certain topics agreed upon by different methods, while other articles are in between topics, forming the fuzzy boundaries among topics. Different methods have different ways of deciding the boundaries, which makes the study in this special issue interesting. More comparisons between different methods can be found in <cit.>. § CONCLUSION In this paper, we showed that using Normalised Mutual Information between clusters and topical terms extracted from titles and abstracts is an effective way to identify important topical terms to describe clusters. It is a data-driven approach which can clearly scale, and it has the advantage that the process is independent of the clustering methods, and so probably more objective than human judgement. The discussion with a domain expert also showed that these labels represent information he could interact with and which relates to his own understanding of the field. The chosen labels are meaningful and useful in follow-up human interpretation and ordering. However, having said this, other labels chosen by other techniques might have a similar function. The aim of this exercise was not to find the best labels, but labels which can claim some representativeness and meaningfulness, and labels which enable further semantic interpretation. Once a selection of labels is determined to serve as a lexical reference or lexical coordinates, we can compare different clustering solutions at a global level and also map single clustering solutions against each other. We showed how such common label-based coordinates can be used to visually compare different clusters or generate "clusters of clusters". This way of visual comparison is intuitive and straightforward. It is surprising yet understandable that different clustering methods, despite their differences in data models or algorithms, do share a fair amount of the terms which are most informative about their clustering results. The most important result of this paper is a method that uses NMI measures based on lexical information from the documents (articles) which are clustered. We have shown that such a lexically based comparison complements the comparison of clusters in LittleAriadne <cit.>. Its findings are more or less consistent with other methods of comparison <cit.>.
We would like to thank Marcus John, who acted as domain expert, for his valuable analysis of the cluster labels. We would also like to thank Michael Heinz for discussions on the method.
http://arxiv.org/abs/1702.08199v1
{ "authors": [ "Rob Koopman", "Shenghui Wang" ], "categories": [ "cs.IR", "cs.DL" ], "primary_category": "cs.IR", "published": "20170227092346", "title": "Mutual Information based labelling and comparing clusters" }
Stable charge density wave phase in a 1T-TiSe_2 monolayer

Bahadur Singh, Chuang-Han Hsu, Wei-Feng Tsai, Vitor M. Pereira, and Hsin Lin

Centre for Advanced 2D Materials and Graphene Research Centre, National University of Singapore, Singapore 117546; Department of Physics, National University of Singapore, Singapore 117542; School of Physics, Sun Yat-sen University, Guangzhou, China 510275. Corresponding author: vpereira@nus.edu.sg

Charge density wave (CDW) phases are symmetry-reduced states of matter in which a periodic modulation of the electronic charge frequently leads to drastic changes of the electronic spectrum, including the emergence of energy gaps. We analyze the CDW state in a 1T-TiSe_2 monolayer within a density functional theory framework and show that, similarly to its bulk counterpart, the monolayer is unstable towards a commensurate 2×2 periodic lattice distortion (PLD) and CDW at low temperatures. Analysis of the electron and phonon spectrum establishes the PLD as the stable T=0 K configuration with a narrow band gap, whereas the undistorted and semi-metallic state is stable only above a threshold temperature. The lattice distortions as well as the unfolded and reconstructed band structure in the CDW phase agree well with experimental results. We also address evidence in our results for the role of electron-electron interactions in the CDW instability of 1T-TiSe_2 monolayers.

Recent years have witnessed remarkable progress in the study of two-dimensional (2D) materials owing to their diverse properties and potential applications <cit.>. Due to the reduced dimensionality, their physics can differ significantly from that of the bulk counterparts, while providing greater flexibility for tuning their electronic properties through changing the number of layers or the chemical composition, or by their integration in heterostructures <cit.>. The 2D thin films of transition metal dichalcogenides (TMDs) with chemical formula MX_2 (where M is a transition metal and X is a chalcogen) are particularly appealing because they offer a wealth of electronic properties ranging from insulating to semiconducting to metallic or semimetallic, depending on the choice of transition metal or chalcogen <cit.>. Their different electronic behavior generally arises from the partially filled d-bands of the transition metal ion. In addition, some of the layered TMDs are found to exhibit generic instabilities towards the symmetry-lowering charge density wave (CDW) state and superconductivity and, therefore, are ideal platforms to investigate the interplay between these phases in a controlled manner <cit.>. 1T-TiSe_2 (henceforth TiSe_2, for simplicity) is among the most studied TMDs due to its simple commensurate 2×2×2 CDW state below T_c ≃ 200 K in the bulk <cit.>.
It has been established that the CDW order in the bulk is weakened by either Cu intercalation or pressure, and that a dome-like superconducting phase appears near the point of CDW suppression in either phase diagram, indicating a tight interplay between the CDW order and superconductivity in bulk TiSe_2 <cit.>. The dominant underlying mechanism for the CDW transition in this material has been a subject of intense theoretical and experimental study for more than three decades, and the question remains unsettled. Several experimental studies have suggested that an excitonic interaction and/or a band Jahn-Teller effect is responsible for the CDW instability <cit.>. The difficulty in reaching a consensus possibly arises from the 3D nature of the CDW order, which makes it hard to identify the exact gap locations in the 3D Brillouin zone (BZ). In contrast, this problem seems to be more tractable in the case of the TiSe_2 monolayer because of its intrinsically 2D band structure, which facilitates the experimental analysis of spectral weight transfer and gap opening. Recently, thin films of TiSe_2 have been fabricated and experimentally found to exhibit a 2×2 CDW ordering below a critical temperature that can be controlled in few-layer samples by changing the film thickness and/or by field effect <cit.>. Furthermore, the superconducting dome remains in the thin films, and field-effect doping can be used to reach it and tune the superconducting transition temperature <cit.>. These studies suggest that monolayer TiSe_2 will not only help explain the CDW mechanism in the bulk, but also constitutes an interesting system on its own as a prototypical 2D material to investigate the interplay between these collective phenomena. In this paper, we describe systematic ab-initio electronic structure calculations to investigate the periodic lattice distortion (PLD) and CDW ordering, as well as their underlying mechanism in the TiSe_2 monolayer. The normal 1×1 phase is seen to be unstable at low temperatures due to the softening of a zone-boundary phonon mode at the M-point. This phonon's frequency depends strongly on the electronic smearing parameter (electronic temperature), indicating a structural phase transition with temperature. We find that the 2×2 superstructure is the ground state at T=0 K, whereas the normal 1×1 structure is stabilized only at higher temperatures. The unfolded band structure shows an energy gap coinciding with the Fermi level E_F, as well as clear backfolded bands at the M-point, in excellent agreement with experimental energy dispersions obtained from angle-resolved photoemission spectroscopy (ARPES). Our results clearly demonstrate that the CDW formation in the TiSe_2 monolayer is intimately associated with a robust structural phase transition that reduces the lattice symmetry at low temperatures. § METHODOLOGY Electronic structure calculations were performed with the projector augmented wave method <cit.> within the density functional theory (DFT) <cit.> framework, using the VASP code <cit.>. According to our calculations, results obtained with the GGA (generalized gradient approximation) <cit.> functional for the exchange-correlation (XC) effects agree better with the experimental data (namely, the magnitude of the atomic displacements and the energy spectrum reconstruction, to be discussed in detail below). Therefore, we report here the results obtained with the GGA except where explicitly stated otherwise. The spin-orbit coupling was included self-consistently.
Lattice parameters and ionic positions were optimized until the residual forces on each ion were less than 1.0×10^-3 eV/Å. We obtained an optimized in-plane lattice constant of a = 3.536 Å, in close agreement with the experimental value (a = 3.538 Å) <cit.> and earlier theoretical results for the bulk <cit.>. This good agreement leads to negligible spurious external pressure in the calculations which, as noted by Olevano et al. <cit.>, is crucial for a reliable prediction of the CDW instability based on phonon calculations. In view of this agreement, all calculations were performed using a slab model with the experimental lattice constant and fully optimized ionic positions. We used a vacuum region of 12 Å to avoid interaction between the periodically repeated slabs, and a plane-wave cutoff energy of 380 eV throughout the calculations. The phonon dispersion curves were computed using density functional perturbation theory (DFPT) <cit.> as implemented in the PHONOPY code <cit.> with a 2×2 supercell. Convergence tests with respect to k-point sampling within the normal and the superstructure BZ were carried out for both electronic and vibrational properties. Convergence was reached with a Γ-centered 16 × 16 × 1 k-mesh, which is the one ultimately used in the calculations reported below. All the ground-state calculations were done with a small smearing parameter (σ = 0.001 eV), which was well converged with respect to different smearing functions <cit.>. Unfolding of the band structure was done using a home-built code based on Ref. <cit.>. As this method has been thoroughly discussed in earlier publications <cit.>, for the sake of brevity and to avoid repetition, we do not discuss its details here.
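For orientation, the settings above can be collected in a short driver script. The following is a minimal sketch assuming the ASE package's VASP interface; the structure builder mx2 and the Se-Se layer thickness value are illustrative assumptions of ours and not part of the paper:

    from ase.build import mx2
    from ase.calculators.vasp import Vasp

    # 1T-TiSe2 monolayer slab with the experimental in-plane lattice constant
    # (a = 3.538 A) and 12 A of vacuum; the Se-Se thickness is an illustrative guess.
    slab = mx2(formula="TiSe2", kind="1T", a=3.538, thickness=3.1, vacuum=12.0)

    # Settings stated in the Methodology: GGA (PBE) with PAW via VASP, 380 eV
    # plane-wave cutoff, Gamma-centred 16 x 16 x 1 k-mesh, small smearing,
    # and self-consistent spin-orbit coupling.
    calc = Vasp(xc="pbe", encut=380, kpts=(16, 16, 1), gamma=True,
                ismear=0, sigma=0.001, lsorbit=True)
    slab.calc = calc
    energy = slab.get_potential_energy()  # runs the self-consistent cycle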
§ THE NORMAL PHASE The TiSe_2 monolayer has a hexagonal Bravais lattice with the space group D^3_3d (P3̅m1, 164) <cit.>. It consists of three sublayers stacked in the order Se-Ti-Se within a single unit cell, as shown in Figs. <ref>(a)-<ref>(c). The two Se sublayers are strongly bonded with the Ti plane in the middle (bond length ≃ 2.56 Å), and Ti has an octahedral prismatic coordination as illustrated in Fig. <ref>(c). The first BZ is hexagonal with three high-symmetry points Γ, M, and K, as shown in Fig. <ref>(d). This BZ can be obtained by projecting the 3D bulk BZ onto the (001) surface and, therefore, the points corresponding to the CDW wavevector in the bulk (L and M in the 3D BZ) map into the M points of the 2D BZ. The PLD doubles the original lattice periodicity, forming a 2×2 superstructure, and hence reduces the 2D BZ, which is shown by broken blue lines in Fig. <ref>(d). Due to the BZ folding, the original M points of the 1×1 BZ are mapped into the zone center (Γ^*) of the 2×2 BZ. Beginning with a monolayer in the normal phase, a simplistic ionic insulator model suggests a strong tendency for electron transfer from the Ti 3d and 4s orbitals to the 4p states of Se. Valence and conduction bands would then arise from Se 4p and Ti 3d states, respectively. However, in reality, the crystalline environment and the spatial extent of the d orbitals increase their bandwidth such that conduction and valence bands overlap. Figure <ref>(e) shows that, near the Fermi level, the Se 4p-derived valence bands are centered at the Γ-point whereas the Ti 3d-derived conduction bands lie at the M-point. These overlap in energy and make TiSe_2 a semimetal in the normal phase. The Fermi contours reveal a pair of hole pockets at the Γ-point, whereas a single elongated elliptical electron pocket forms at the M-point, as shown in Fig. <ref>(f). In order to quantify a band overlap or a band gap on an equal footing, we define the indirect gap as the difference between the minimum of the conduction band at the M-point and the maximum of the valence band at Γ. A negative value of the gap therefore represents a semimetal with an indirect band overlap, while an insulator will have a positive gap. We find an indirect band gap of -0.446 eV using the GGA, and a smaller value of -0.242 eV with an HSE functional. Although these values differ from some experimentally reported values of ∼+0.098 eV <cit.>, they agree well with earlier band structure calculations <cit.>. The difference with respect to the experimental data may arise because our DFT calculations assume T = 0 K, whereas the 1×1 normal structure, in reality, exists only at higher temperatures. The phonon dispersion of the normal phase is shown in Fig. <ref> for different values of the electronic smearing parameter σ using a Methfessel-Paxton smearing <cit.>. This parameter determines the smearing width and is normally used as a technical tool to accelerate convergence in DFT calculations. However, when used with the Fermi-Dirac distribution, it mimics the electronic temperature and thus describes the occupation probability of the electronic states <cit.>. By varying σ, we can qualitatively estimate the changes expected to occur in the phonon spectrum with increasing temperature, thereby monitoring the structural stability at different temperatures. In Fig. <ref>(a) we show the phonon spectrum at a small smearing parameter. It is clear that the system is dynamically unstable, with a Kohn-type <cit.> soft mode at the M-point. The partial phonon density of states is shown in Fig. <ref>(e) and demonstrates that the imaginary frequencies are inherent to the Ti atoms, suggesting that the CDW is associated with the Ti sublayers. The structural instability at the M-point is consistent with the 2×2 commensurate PLD observed in experiments at low temperature. As we increase σ, the range over which the soft mode has imaginary frequencies shrinks, and the instability finally disappears for σ ∼ 0.4-0.5 eV [see Figs. <ref>(a)-<ref>(d)]. This dependence of the soft-mode frequency on σ indicates a structural phase transition with temperature and confirms that the 1×1 structure is stable only above a threshold temperature <cit.>. Phonons and electrons are inseparably intertwined in a crystal, which prevents, in principle, the simplistic assignment of the CDW/PLD tendency to an instability of the electronic or phonon subsystems (through electron-phonon coupling) independently. In a simplistic description of this “hierarchy”, in a transition driven entirely by electronic interactions the PLD would be regarded as “secondary”, a readjustment of the ions to a modified Born-Oppenheimer potential <cit.>, or it might not even occur at all, as seems to be the case, for example, in the layered purple bronze K_0.9Mo_6O_17 <cit.>. The non-uniform charge density would therefore emerge regardless of whether or not the ions are clamped at their high-symmetry positions <cit.>. Figures <ref>(f)-<ref>(g) show the charge density distribution in various crystal planes in an undistorted 2×2 superstructure of the TiSe_2 monolayer. They show no appreciable charge redistribution in the presence of a doubled lattice periodicity, similarly to the previously studied case of the bulk material <cit.>. This, however, must be interpreted with care, as a consistency check, and not as confirmation that the role of electronic interactions is secondary.
Such a conclusion would be rather simplistic because, on the one hand, systems where CDWs arise only from electronic interactions are usually strongly correlated and, on the other, the GGA XC functional cannot capture correlations at that level. In fact, in bulk TiSe_2, the inclusion of many-body corrections at the level of the GW approximation has been shown to capture the spectral reconstruction and gapped state seen experimentally even without any deformation (i.e., in a 1×1 cell calculation) <cit.>, which underscores that electronic correlations are indeed of crucial importance to describe the electronic state through the CDW transition. The charge distributions seen in Fig. <ref>(g) are spherically symmetric near the Ti atoms and nearly constant in the interstitial region. This is reminiscent of a metallic-type ionic environment and suggests that the Ti atoms are more likely to displace within their hexagonal plane to find an energy minimum. § THE DISTORTED PHASE We now investigate the 2×2 superstructure to find the equilibrium configuration in the distorted phase. We have allowed the positions of all ions to relax using both the LDA and GGA functionals. Figures <ref>(a)-<ref>(b) illustrate the atomic movement in the fully relaxed 2×2 superstructure. Both the GGA and LDA predict the CDW instability, with an energy reduction of ∼4.7 meV and ∼3.7 meV per chemical unit, respectively. Interestingly, although the relaxation was unconstrained, our results show that all atoms move only in their respective atomic planes, without any out-of-plane distortion. The in-plane atomic displacements give rise to two different local octahedral structures in the 2×2 superstructure: in one octahedron, the Ti atom remains at the center of the coordination unit while the top (bottom) Se atoms are displaced clockwise (counterclockwise) in a circular fashion without affecting the original Ti-Se bond length, as depicted in the figure. In contrast, the Ti atoms in the second octahedron are displaced off-center, giving rise to a distorted octahedron with three different Ti-Se bond lengths, as shown in the lower part of Fig. <ref>(b). Even though our calculations with either the GGA or LDA functional yield similar atomic displacement patterns, the magnitude of the atomic displacements depends strongly on the functional used. The calculated atomic displacements δ_Ti (δ_Se) for Ti (Se) are 0.090 (0.029) Å and 0.076 (0.016) Å with the GGA and LDA functional, respectively. The atomic displacement ratio δ_Ti/δ_Se with the GGA functional is ≈ 3.10, which agrees well with experiments <cit.>. Fig. <ref>(d) presents a complementary analysis of the relative stability of the distorted and undistorted configurations in terms of the energies per chemical unit of each phase obtained for different smearing parameters σ. The distorted phase has lower energy than the normal phase for small σ, and is thus more stable at lower temperature. As we increase σ, the energy of the distorted phase increases and surpasses that of the normal phase at σ ∼ 0.4-0.5 eV. This behavior is in accord with the normal-phase phonon analysis presented above [cf. Fig. <ref>] and further confirms the stability of the 1×1 normal phase structure above a threshold temperature. The phonon spectrum of the fully relaxed 2×2 superstructure is shown in Fig. <ref>(e). The absence of imaginary frequencies in the whole 2D BZ reflects the dynamical stability of this configuration at T=0 K, and indicates that it is the ground-state structure of the TiSe_2 monolayer.
This is further supported by the local energy landscape of the CDW phase, which we analyzed by changing the atomic distortion δ_Ti manually and computing the changes in total energy; the results are shown in Fig. <ref>(c). It should be noted that the Se atoms were fully relaxed for each manually set δ_Ti. The energy is minimal at a finite value of δ_Ti (in the plane), and a spatial reversal of the distortion yields a degenerate configuration. The system can then freeze in either configuration at low temperature. At higher temperature, however, thermal effects allow the system to fluctuate between configurations, giving rise to an “average” 1×1 structure characteristic of the normal state. Figure <ref>(a) displays the electronic band structure of the relaxed 2×2 superlattice and shows the emergence of a full band gap in the BZ [see Fig. <ref>(b)] at the Fermi level. The orbital character of the Bloch states is represented by the color map superimposed on each curve. It is clear that the coupling between the predominantly Ti-derived conduction band orbitals at M and the predominantly Se-derived valence orbitals at Γ lifts the band overlap that is present in the normal state and lowers (raises) the energy of the filled (empty) states in the vicinity of E_F, which becomes gapped. In order to facilitate direct comparison with experimental dispersions <cit.>, in Figs. <ref>(c)-<ref>(d) we unfolded the superlattice band structure to the original 1×1 BZ. The most significant feature in this representation is the clear presence of back-folded bands at the M-point which, despite their smaller spectral weight, provide a prominent signature of the new periodic potential in the CDW phase. The CDW phase in TiSe_2 has recently been investigated experimentally by ARPES, whose spectra reveal the formation of a 2×2 superlattice with a band gap of ≃ 153 meV at 10 K and two back-folded bands at the M-point <cit.>. The spectral weight of the back-folded bands at the M-point is smaller than that of the bands at Γ <cit.>. These experimental results are well captured by our first-principles calculations: the insulating electronic state, with a band gap of 82 meV (325 meV) with the GGA (HSE), as well as the location and intensity of the two back-folded bands at the M-point, are in reasonable agreement with those experiments. Finally, we highlight that, in addition to obtaining magnitudes of the lattice distortion and electronic gap that are accurate in comparison with experiments, we also obtain the non-trivial restructuring of the bands around E_F that has been analyzed in detail on the basis of ARPES spectra in bulk TiSe_2 <cit.>. This is best seen in the close-up of Fig. <ref>(d), which shows the lowest conduction band around M remaining parabolic, whereas the valence band acquires a Mexican-hat-type dispersion. Combined with finite-temperature broadening, the latter causes a flattening of the top of the valence band, an effect that has been clearly seen by ARPES in bulk samples <cit.>. The qualitative significance of this reconstruction within a DFT calculation was first highlighted by Cazzaniga et al. in studies of the distorted phase in the bulk <cit.>. In that case, GW corrections on top of the LDA are seen to capture the experimental Mexican-hat reconstruction even in an undistorted 1×1 cell. It is very interesting that, contrary to the bulk case, the monolayer shows this reconstruction already at the GGA level, Fig. <ref>(c), without the need to add many-body corrections beyond the XC functional.
As argued in detail in Ref. <cit.>, this fact strongly supports a built-in tendency of this electronic system towards an excitonic-insulator state, a scenario that predicts precisely this type of spectral reconstruction <cit.>. It is noteworthy that such physics is captured already at the GGA level in the monolayer, which, providing a more rudimentary account of the electronic interactions than the GW approximation, perhaps suggests a stronger tendency for the excitonic instability in the monolayer in comparison with its bulk counterpart. § DISCUSSION AND CONCLUDING REMARKS The results presented above provide a careful and comprehensive analysis of the stability of a TiSe_2 monolayer, and establish the 2×2 PLD as the stable structure in the ground state. Fig. <ref> provides a schematic distillation of the essential physics following from our calculations. Fermi surface nesting is certainly excluded as the cause of this PLD/CDW instability because of the ill-defined nesting of the circular/elliptical Fermi surfaces seen both in our results and in experiments <cit.>. The absence of any charge density redistribution in a clamped-ion 2×2 superlattice [cf. Figs. <ref>(f) and <ref>(g)] indicates that the CDW and PLD are intimately related in the monolayer, similarly to the bulk. It is then clear that the electron-phonon coupling is significant in this system because of the large lattice distortions it attains in comparison with similar CDW-prone TMDs <cit.>. Even though the problem is unavoidably interacting and self-consistent, there is a long-standing interest in establishing to what extent the CDW is driven here primarily by an electronic instability, or by a strong electron-phonon coupling with negligible influence of electronic correlations (as happens, for example, in metallic TMDs such as NbSe_2 <cit.>). This is especially important for formulating analytical microscopic models capable of describing the CDW and superconductivity seen in TiSe_2 as a function of electron doping, because the presence of strong electronic interactions can affect the pairing instability both quantitatively and qualitatively. It is generally difficult to answer this question from a purely DFT perspective, much less to quantify precisely the role of electron-electron interactions, because of their approximate treatment in any practical implementation. Nevertheless, combined with the experience and evidence learned from earlier studies of bulk TiSe_2, the present results reinforce the view that interactions play a rather consequential role here. One line of evidence arises from the fact, highlighted earlier, that our calculated distortions and restructured band dispersions in this phase are in good agreement with experimental results, but these properties are seen to depend strongly and quantitatively on the type of XC approximation used. Similar considerations apply to the stability of the PLD phase at T = 0, which is not reproduced at the LDA level, for example <cit.>. Since our calculations rely on fully relaxed ions and explicitly converged k-sampling, we are confident that this variation directly reflects the different treatment of interaction effects in those implementations of the XC functional. The other line of evidence is related to the spectral reconstruction in the distorted phase, and the reproduction within the GGA of the Mexican-hat profile characteristic of the ARPES quasiparticle spectra.
Such a band structure is expected as the self-consistent ground state in the excitonic-insulator scenario, and can be obtained from a mean-field-type analysis of the Coulomb interactions between holes at Γ and electrons at M based on effective non-interacting bands for the reference (normal) state <cit.>. The reproduction of this at the DFT level is a non-trivial outcome and, in fact, was previously seen only in the electronic structure of bulk TiSe_2 after the inclusion of GW many-body corrections <cit.> (and apparently in no other electronic system to date). That the monolayer shows such a spectral reconstruction without many-body corrections to the GGA bolsters the view that excitonic correlations do play a key role in the CDW transition. This study underlines that these monolayers are an exciting material platform to study CDW phases in general, have an interesting phase diagram of their own, and will contribute to illuminating the dominant and long-sought mechanism responsible for the CDW instability, both in bulk and monolayer TiSe_2. § ACKNOWLEDGMENTS The work at the National University of Singapore was supported by the Singapore National Research Foundation under the NRF fellowship Award No. NRF-NRFF2013-03 (HL), by the Singapore Ministry of Education Academic Research Fund Tier 2 under Grant No. MOE2015-T2-2-059 (VMP), and benefited from the HPC facilities of the NUS Centre for Advanced 2D Materials. WFT is supported by the National Thousand-Young-Talents Program, China.

[2D_Nov2005] K. S. Novoselov et al., Proc. Natl. Acad. Sci. USA 102, 10451 (2005).
[2DO_Butler2013] S. Z. Butler et al., ACS Nano 7, 2898 (2013).
[2DH_Nov2016] K. S. Novoselov, A. Mishchenko, A. Carvalho, and A. H. Castro Neto, Science 353, aac9439 (2016).
[TMDC_hetrostr] H. Terrones, F. Lopez-Urias, and M. Terrones, Sci. Rep. 3, 1549 (2013).
[MoS2_NewM] K. F. Mak, C. Lee, J. Hone, J. Shan, and T. F. Heinz, Phys. Rev. Lett. 105, 136805 (2010).
[MoS2_DIBansil] Y. Zhang et al., Nat. Nanotechnol. 9, 111 (2014).
[TMDC_CDWReview] K. Rossnagel, J. Phys.: Cond. Mat. 23, 213001 (2011).
[TaS2_Yu2015] Y. Yu et al., Nat. Nanotechnol. 10, 270 (2015).
[TMDC_CDWThin2015] J.-A. Yan, M. A. D. Cruz, B. Cook, and K. Varga, Sci. Rep. 5, 16646 (2015).
[FermiSurfNest] M. D. Johannes and I. I. Mazin, Phys. Rev. B 77, 165135 (2008).
[TiSe2M_superCE2016] L. J. Li et al., Nature 529, 185 (2016).
[TiSe2M_chrialSup] R. Ganesh, G. Baskaran, J. van den Brink, and D. V. Efremov, Phys. Rev. Lett. 113, 177001 (2014).
[MoS2_Wang2012] Q. H. Wang, K. Kalantar-Zadeh, A. Kis, J. N. Coleman, and M. S. Strano, Nat. Nanotechnol. 7, 699 (2012).
[MoS2_ValleyDirac] D. Xiao, G.-B. Liu, W. Feng, X. Xu, and W. Yao, Phys. Rev. Lett. 108, 196802 (2012).
[TMDC_FET] N. R. Pradhan et al., ACS Nano 8, 7923 (2014).
[TiSe2B_expN] F. J. Di Salvo, D. E. Moncton, and J. V. Waszczak, Phys. Rev. B 14, 4321 (1976).
[TiSe2B_expARPES] R. Z. Bachrach, M. Skibowski, and F. C. Brown, Phys. Rev. Lett. 37, 40 (1976).
[TiSe2B_expARPES_EHC] T. E. Kidd, T. Miller, M. Y. Chou, and T.-C. Chiang, Phys. Rev. Lett. 88, 226402 (2002).
[TiSe2B_JahnTell] K. Rossnagel, L. Kipp, and M. Skibowski, Phys. Rev. B 65, 235101 (2002).
[TiSe2B_ThVCD] J. von Boehm, H. Isomäki, and P. Krusius, Physica Scripta 22, 523 (1980).
[TiSe2B_ThSm] D. L. Duong, M. Burghard, and J. C. Schön, Phys. Rev. B 92, 245131 (2015).
[TiSe2B_Th] R. Bianco, M. Calandra, and F. Mauri, Phys. Rev. B 92, 094107 (2015).
[Olevano2014] V. Olevano et al., Phys. Rev. Lett. 112, 049701 (2014).
[TiSe2B_ThMech] Z. Zhu, Y. Cheng, and U. Schwingenschlögl, Phys. Rev. B 85, 245133 (2012).
[Cazzaniga2012] M. Cazzaniga et al., Phys. Rev. B 85, 195111 (2012).
[TiSe2B_exp_Exct] H. Cercellier et al., Phys. Rev. Lett. 99, 146403 (2007).
[TiSe2B_supCu] E. Morosan et al., Nat. Phys. 2, 544 (2006).
[TiSe2B_supPress] A. F. Kusmartseva, B. Sipos, H. Berger, L. Forró, and E. Tutiš, Phys. Rev. Lett. 103, 236401 (2009).
[TiSe2M_raman] P. Goli, J. Khan, D. Wickramaratne, R. K. Lake, and A. A. Balandin, Nano Lett. 12, 5941 (2012).
[TiSe2M_CDWSTM] J.-P. Peng et al., Phys. Rev. B 91, 121113 (2015).
[TiSe2M_CDWNC] P. Chen et al., Nat. Commun. 6, 8943 (2015).
[TiSe2M_CDWNano] K. Sugawara et al., ACS Nano 10, 1341 (2016).
[vasp] G. Kresse and J. Furthmüller, Phys. Rev. B 54, 11169 (1996).
[paw] G. Kresse and D. Joubert, Phys. Rev. B 59, 1758 (1999).
[kohan_dft] P. Hohenberg and W. Kohn, Phys. Rev. 136, B864 (1964).
[pbe] J. P. Perdew, K. Burke, and M. Ernzerhof, Phys. Rev. Lett. 77, 3865 (1996).
[phonon_DFPT] S. Baroni, P. Giannozzi, and A. Testa, Phys. Rev. Lett. 58, 1861 (1987).
[phonopy] A. Togo, F. Oba, and I. Tanaka, Phys. Rev. B 78, 134106 (2008).
[Sm_FD] N. D. Mermin, Phys. Rev. 137, A1441 (1965).
[Sm_gauss] P. E. Blöchl, O. Jepsen, and O. K. Andersen, Phys. Rev. B 49, 16223 (1994).
[Sm_MethPax] M. Methfessel and A. T. Paxton, Phys. Rev. B 40, 3616 (1989).
[unfold_popeZung] V. Popescu and A. Zunger, Phys. Rev. B 85, 085201 (2012).
[unfold_Chicheng] W. Ku, T. Berlijn, and C.-C. Lee, Phys. Rev. Lett. 104, 216401 (2010).
[unfold_paulo1] P. V. C. Medeiros, S. Stafström, and J. Björk, Phys. Rev. B 89, 041407(R) (2014).
[unfold_paulo2] P. V. C. Medeiros, S. S. Tsirkin, S. Stafström, and J. Björk, Phys. Rev. B 91, 041116(R) (2015).
[kohn_anoM] W. Kohn, Phys. Rev. Lett. 2, 393 (1959).
[footnote1] Our calculated critical smearing parameter σ strongly depends on the smearing method used in the calculations, consistent with Ref. <cit.>. It should be noted that while σ can be used to access qualitative changes in the phonon spectrum and stability with temperature, its direct relation with the temperature relies on the smearing function used <cit.>.
[footnote2] Note that the phonon spectrum is calculated within the harmonic approximation, whereas an estimation of the exact transition temperature (T_c) explicitly needs to consider quasi-harmonic effects <cit.>. The σ dependence only changes the filling of energy bands without inducing noticeable structural changes <cit.>. It will be interesting to estimate T_c in the future, either by calculating the volume dependence of the phonon spectrum <cit.> or by using finite-difference phonon results with large displacements <cit.>.
[Mou2016] D. Mou et al., Phys. Rev. Lett. 116, 196401 (2016).
[Su2016] L. Su, C.-H. Hsu, H. Lin, and V. M. Pereira, Phys. Rev. Lett. 118, 257601 (2017).
[qha_Antolin] N. Antolin, O. D. Restrepo, and W. Windl, Phys. Rev. B 86, 054119 (2012).
[qha_lazar] P. Lazar, J. Martincová, and M. Otyepka, Phys. Rev. B 92, 224104 (2015).
[Kohn_EI] D. Jérome, T. M. Rice, and W. Kohn, Phys. Rev. 158, 462 (1967).
[Zhu_CDW] X. Zhu et al., Proc. Natl. Acad. Sci. USA 112, 2367 (2015).
[Wezel:2010] J. van Wezel, P. Nahai-Williamson, and S. S. Saxena, Phys. Rev. B 81, 165109 (2010).
[Money:2009] C. Monney, H. Cercellier, F. Clerc, C. Battaglia, E. F. Schwier, C. Didiot, M. G. Garnier, H. Beck, P. Aebi, H. Berger, L. Forro, and L. Patthey, Phys. Rev. B 79, 45116 (2009).
http://arxiv.org/abs/1702.08329v2
{ "authors": [ "Bahadur Singh", "Chuang-Han Hsu", "Wei-Feng Tsai", "Vitor M. Pereira", "Hsin Lin" ], "categories": [ "cond-mat.mes-hall" ], "primary_category": "cond-mat.mes-hall", "published": "20170227152939", "title": "Stable charge density wave phase in a 1T-TiSe$_2$ monolayer" }
An update on statistical boosting in biomedicine

Andreas Mayr^1,3, Benjamin Hofner^2, Elisabeth Waldmann^1, Tobias Hepp^1, Olaf Gefeller^1, Matthias Schmid^3

Address for correspondence: Andreas Mayr, Institut für Medizininformatik, Biometrie und Epidemiologie, Friedrich-Alexander-Universität Erlangen-Nürnberg, Waldstr. 6, 91054 Erlangen. Email: andreas.mayr@fau.de

[1] Institut für Medizininformatik, Biometrie und Epidemiologie, Friedrich-Alexander-Universität Erlangen-Nürnberg, Germany
[2] Section Biostatistics, Paul-Ehrlich-Institut, Langen, Germany
[3] Institut für medizinische Biometrie, Informatik und Epidemiologie, Rheinische Friedrich-Wilhelms-Universität Bonn, Germany

Statistical boosting algorithms have triggered a lot of research during the last decade. They combine a powerful machine-learning approach with classical statistical modelling, offering various practical advantages like automated variable selection and implicit regularization of effect estimates. They are extremely flexible, as the underlying base-learners (regression functions defining the type of effect for the explanatory variables) can be combined with any kind of loss function (target function to be optimized, defining the type of regression setting). In this review article, we highlight the most recent methodological developments on statistical boosting regarding variable selection, functional regression and advanced time-to-event modelling. Additionally, we provide a short overview of relevant applications of statistical boosting in biomedicine.

§ INTRODUCTION Statistical boosting algorithms are one of the advanced methods in the toolbox of a modern statistician or data scientist <cit.>. They offer multiple advantages in the presence of high-dimensional data, as they can deal with more potential candidate variables than observations (p > n situations) while still yielding classical statistical models with well-known interpretability <cit.>. Key features in this context are automated variable selection and model choice <cit.>. This field of research is methodologically situated between the worlds of statistics and computer science: statistical boosting algorithms bridge the gap between two rather different points of view on how to gather information from data <cit.>. On the one hand, there is the classical statistical modelling community, which focuses on models describing and explaining the outcome in order to find an approximation to the underlying stochastic data-generating process. On the other hand, there is the machine learning community, which focuses primarily on algorithmic models predicting the outcome while treating the nature of the underlying process as unknown. Statistical boosting algorithms have their roots in machine learning <cit.> but were later adapted in order to estimate classical statistical models <cit.>. A pivotal aspect of these algorithms is that they incorporate data-driven variable selection and a shrinkage of effect estimates similar to that of classical penalized regression <cit.>. In a review some years ago <cit.>, we highlighted this evolution of boosting from machine learning to statistical modelling. Furthermore, we emphasized the similarity of two boosting approaches – gradient boosting <cit.> and likelihood-based boosting <cit.> – introducing statistical boosting as a generic term for this kind of algorithm. Throughout this article, we will use this term to reflect both approaches.
The earlier review <cit.> was accompanied by a second article <cit.> highlighting the multiple variants by which the basic algorithms have been extended towards (i) enhanced variable selection properties, (ii) new types of predictor effects, and (iii) new regression settings. The substantial new methodological developments on statistical boosting algorithms throughout the last years (e.g., stability selection <cit.>), opening the door for the growing community to new model classes and frameworks (e.g., joint models <cit.> and functional data <cit.>), make it necessary to provide an update on the available extensions. This article is structured as follows: In Section 2 we briefly highlight the basic structure and properties of statistical boosting algorithms and point to their connections to classical penalization approaches like the lasso. In Section 3 we focus on new developments regarding variable selection, which can also be combined with the boosted functional regression models presented in Section 4. Section 5 focuses on advanced survival models, before we briefly summarize in Section 6 what other relevant developments and applications have been proposed for the framework of statistical boosting. § STATISTICAL BOOSTING §.§ From machine learning to statistical models The original boosting concept by Schapire <cit.> and Freund <cit.> emerged from the field of supervised learning, focusing on boosting the accuracy of weak classifiers (base-learners) by iteratively applying them to re-weighted data to obtain stronger results. Even if the base-learners individually only slightly outperform random guessing, the combined ensemble solution can often be boosted to a perfect classification <cit.>. The introduction of AdaBoost <cit.> was the breakthrough for boosting in the field of supervised machine learning, allegedly leading Leo Breiman to praise its performance: “Boosting is the best off-the-shelf classifier in the world” <cit.>. The main target of classical machine-learning approaches is predicting observations y_new of the outcome Y given one or more input variables X = {X_1, …, X_p}. The estimation of the prediction or generalization function is based on an observed sample (y_1, x_1), …, (y_n, x_n). However, since the underlying nature of the data-generating process is treated as unknown, the focus is not on quantifying or describing this process, but solely on predicting ŷ_new for new observations x_new as accurately as possible. As a consequence, many machine-learning approaches (including the original AdaBoost with trees or stumps) should mainly be seen as black-box prediction schemes. Although typically yielding accurate predictions <cit.>, they do not offer much insight into the structure of the relationship between the explanatory variables X and the outcome Y. Statistical regression models, on the other hand, particularly aim at describing and explaining the underlying relationship in a structured way. The impact of single explanatory variables can not only be quantified in terms of variable importance measures <cit.>, but the actual effect of these variables is interpretable. The work of Friedman et al. <cit.> laid the groundwork for understanding the concept of boosting from a statistical perspective and for adapting the general idea in order to estimate statistical models. §.§ General model structure The aim of statistical boosting algorithms is to estimate and select the effects in structured additive regression models.
The most important model class is that of generalized additive models (“GAM”, <cit.>), where the conditional distribution of the response variable is assumed to follow an exponential family distribution. The expected response is then modeled, given the observed value x of one or more explanatory variables, using a link function g as

g(E(Y|X=x)) = f(x).

In the typical case of multiple explanatory variables, the function f(·) is called the additive predictor and consists of the additive effects of the single predictors,

f(x) = β_0 + f_1(x_1) + ⋯ + f_p(x_p),

where β_0 represents a common intercept and the functions f_j(x_j), j = 1, …, p, are the partial effects of the variables x_j. The generic notation f_j(x_j) may comprise different types of predictor effects, such as classical linear effects x_j β_j, smooth non-linear effects constructed via regression splines, spatial effects or random effects of the explanatory variable x_j, to name but a few. In statistical boosting algorithms, the different partial effects are estimated by separate base-learners h_1(·), ..., h_p(·) (component-wise boosting, <cit.>), which are typically simple regression-type prediction functions. §.§ Gradient boosting Gradient boosting <cit.> is one of the two important approaches in the context of statistical boosting. For a generic overview of the structure of statistical boosting algorithms, see Box <ref>. In gradient boosting, the iterative procedure fits the base-learners h_1(x_1), ..., h_p(x_p) one by one to the negative gradient of the loss function ρ(y, f(·)), evaluated at the previous iteration:

u = ( -∂/∂f ρ(y_i, f(·)) |_f = f̂^[m-1](·) )_i = 1,...,n.

The loss function describes the discrepancy between the observed outcome y and the additive predictor f(·), and is the target function that should be minimized to obtain an optimal fit for f(·). In the case of GAMs, the loss function is typically the negative log-likelihood of the corresponding exponential family. For Gaussian distributed outcomes, this reduces to the L_2 loss ρ(y, f(·)) = (y - f(·))^2, where the gradient vector u is simply the vector of residuals y - f(·) from the previous iteration; boosting hence corresponds to iterative refitting of residuals. In each boosting iteration, only the best-fitting base-learner is selected, based on the residual sum of squares of the base-learner fit:

j^* = argmin_1 ≤ j ≤ p ∑_i=1^n (u_i - ĥ_j(x_ij))^2.

Only this base-learner h_j^* is added to the current additive predictor. In order to ensure small updates, only a small proportion of the base-learner fit (typically with step length ν = 0.1 <cit.>) is actually added. Note that the base-learner h_j(·) can be selected and updated several times; the partial effect of variable x_j is the sum of all corresponding base-learner fits selected over the iterations:

f̂_j(x_j) = ∑_m ν · ĥ_j^[m](x_j) · I(j = j^*[m]),

where I(·) denotes the indicator function. This component-wise procedure of fitting the base-learners one by one to the current gradient of the loss function can be described as gradient descent in function space <cit.>, where the function space is spanned by the base-learners. The algorithm effectively optimizes the loss function step by step, eventually converging to the minimum. In order to avoid overfitting and to ensure variable selection, the algorithm is typically stopped before convergence (based on the predictive performance evaluated via cross-validation or resampling <cit.>), which leads to an implicit penalization <cit.>.
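To make the component-wise procedure concrete, the following is a minimal, self-contained sketch of gradient boosting with the L_2 loss and simple univariate linear base-learners. The function name and the toy data are our own illustration; this is not the mboost implementation:

    import numpy as np

    def componentwise_l2_boost(X, y, m_stop=100, nu=0.1):
        # Component-wise gradient boosting with the L2 loss and univariate
        # linear base-learners (no intercepts; assumes centred columns of X).
        n, p = X.shape
        coef = np.zeros(p)
        offset = y.mean()                 # initialisation (offset)
        f = np.full(n, offset)
        for m in range(m_stop):
            u = y - f                     # negative gradient of the L2 loss = residuals
            # fit every base-learner to u by least squares
            betas = (X * u[:, None]).sum(axis=0) / (X ** 2).sum(axis=0)
            rss = ((u[:, None] - X * betas) ** 2).sum(axis=0)
            j = int(rss.argmin())         # select the best-fitting base-learner
            coef[j] += nu * betas[j]      # weak update with step length nu
            f += nu * betas[j] * X[:, j]
        return offset, coef

    # toy example: only two of ten covariates carry signal
    rng = np.random.default_rng(1)
    X = rng.standard_normal((200, 10))
    y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.standard_normal(200)
    offset, coef = componentwise_l2_boost(X, y, m_stop=250)
    # components that are never selected remain exactly zero
    # (implicit variable selection); selected ones are shrunken

Early stopping corresponds to the choice of m_stop, typically made via resampling; smaller values yield sparser models and stronger shrinkage.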
Gradient boosting is implemented in the add-on package mboost <cit.> for the open source programming environment R <cit.>, providing a large number of pre-implemented loss functions for various regression settings, as well as different base-learners to represent various types of effects (see <cit.> for an overview). Recent changes in the software, which were introduced after the comprehensive mboost tutorial <cit.>, are provided in Appendix <ref>. §.§ Likelihood-based boosting Likelihood-based boosting <cit.> is the other general approach in the framework of statistical boosting algorithms. It follows a structure very similar to that of gradient boosting (see Box <ref>), although both approaches coincide only in special cases such as classical Gaussian regression via the L_2 loss <cit.>. In contrast to gradient boosting, the base-learners are estimated directly by optimizing the overall likelihood, using the additive predictor from the previous iteration as an offset. In the case of the L_2 loss, this has a similar effect to refitting the residuals. In every step, the algorithm hence optimizes regression models as base-learners one by one by maximizing the likelihood (using one-step Fisher scoring), selecting only the base-learner j^* that leads to the largest increase in the likelihood. In order to obtain small boosting steps, a quadratic penalty term is attached to this likelihood. This has a similar effect to multiplying the fitted base-learner by a small step-length factor, as in gradient boosting. Likelihood-based boosting for generalized linear and additive regression models is provided by the R add-on package GAMBoost <cit.>, and an adapted version for boosting Cox regression is provided with CoxBoost <cit.>. For a comparison of both statistical boosting approaches, i.e., likelihood-based and gradient boosting in the case of Cox proportional hazards models, we refer to <cit.>. §.§ Connections to L_1-regularization Statistical boosting algorithms result in regularized models with shrunken effect estimates, although they apply only an implicit penalization <cit.> by stopping the algorithm before convergence. By performing regularization without the use of an explicit penalty term, boosting algorithms clearly differ from other direct regularization techniques such as the lasso <cit.>. However, both approaches sometimes result in very similar models after being tuned to a comparable degree of regularization <cit.>. This close connection was first noted between the lasso and forward stagewise regression, which can be viewed as a special case of the gradient boosting algorithm (Box <ref>), and led, along with the development of least angle regression (LARS), to the formulation of the positive cone condition (PCC) <cit.>. If this condition holds, LARS, the lasso, and forward stagewise regression coincide.
Figuratively speaking, the PCC requires that all coefficient estimates increase or decrease monotonically as the degree of regularization is relaxed; it applies, for example, in low-dimensional settings with orthogonal X. It should be noted that the PCC is connected to the diagonal dominance condition for the inverse covariance matrix of X, which allows for a more convenient way to investigate the equivalence of these approaches in practice <cit.>. Given that the solution of the lasso is optimal with respect to the L_1-norm of the coefficient vector, these findings led to the notion of boosting as some “sort of L_1-sparse” regularization technique <cit.>, but it remained unclear which optimality constraints possibly apply to forward stagewise regression if the PCC is violated. By further extending X with a negative version of each variable and enforcing only positive updates in each iteration, Hastie et al. <cit.> demonstrated that forward stagewise regression always approximates the solution path of a similarly modified version of the lasso. From this perspective, they showed that forward stagewise regression minimizes the loss function subject to the L_1-arc-length

∑_j=1^p ∫_0^t |∂β_j(s)/∂s| ds ≤ t.

This means that the travelled path of the coefficients is penalized (allowing as little overall change in the coefficients as possible, regardless of its direction), whereas the L_1-norm considers only the absolute sum of the current set of estimates. In the same article, Hastie et al. <cit.> further showed that these properties hold for general convex loss functions and therefore apply not only to forward stagewise regression but also to the more general gradient boosting method (in the case of logistic regression models as well as many other generalized linear regression settings). The consequence of these differing optimization constraints can be observed in the presence of strong collinearity, where the lasso estimates tend to be very unstable across different degrees of regularization, while boosting approaches avoid too many changes in the coefficients as they consider the overall travelled path <cit.>. It has to be acknowledged, however, that direct regularization approaches such as the lasso are applied more often in practice <cit.>. Statistical boosting, on the other hand, is far more flexible due to its modular nature, allowing any base-learner to be combined with any type of loss function <cit.>.
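This different behaviour of the two regularization paths can be inspected directly. A small sketch under a collinear toy design of our own construction, reusing componentwise_l2_boost from Section 2.3 together with scikit-learn's lasso_path:

    import numpy as np
    from sklearn.linear_model import lasso_path

    rng = np.random.default_rng(3)
    X = rng.standard_normal((150, 8))
    X[:, 1] = X[:, 0] + 0.1 * rng.standard_normal(150)   # strong collinearity
    y = X[:, 0] + 0.5 * X[:, 2] + rng.standard_normal(150)

    # lasso path over a grid of penalty values (shape: n_features x n_alphas)
    alphas, lasso_coefs, _ = lasso_path(X, y)

    # boosting path over the number of iterations m_stop
    boost_coefs = np.array([componentwise_l2_boost(X, y, m_stop=m)[1]
                            for m in range(0, 301, 10)])
    # plotting both paths typically shows jumpier lasso estimates for the
    # collinear pair, while the boosting coefficients change more gradually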
§ ENHANCED VARIABLE SELECTION Early stopping of statistical boosting algorithms via cross-validation approaches plays a vital role in ensuring a sparse model with optimal prediction performance on new data. Resampling, i.e., random sampling of the data drawn without replacement, tends to result in sparser models compared to other sampling schemes <cit.>, including the popular bootstrap <cit.>. By using base-learners of comparable complexity (in terms of degrees of freedom), selection bias can be strongly reduced <cit.>. The resulting models have optimal prediction accuracy on the test data. Yet, despite regularization, the final models are often relatively rich <cit.>. §.§ Stability selection Meinshausen and Bühlmann <cit.> proposed a generic approach called stability selection to further refine the models and enhance sparsity. This approach was then transferred to boosting <cit.>. In general, stability selection can be combined with any variable selection approach and is especially useful for high-dimensional data with many potential predictors. To assess how stable the selection of a variable is, B random subsets that comprise half of the data are drawn. On each of these subsets, the model is fitted until q base-learners are selected. Usually, B = 100 subsets are sufficient. Computing the relative frequency of the random subsamples in which a specific base-learner was selected gives a notion of how stable the selection is with respect to perturbations of the data. Base-learners are considered to be of importance if their selection frequency exceeds a pre-specified threshold level π_thr ∈ [0.5, 1]. Meinshausen and Bühlmann <cit.> showed that this approach controls the per-family error rate (PFER), i.e., it provides an upper bound for the expected number of false positive selections (V):

E(V) ≤ q^2 / ((2π_thr - 1) p),

where p is the number of base-learners. This upper bound is rather conservative and hence was further refined by Shah and Samworth <cit.> under specific assumptions on the distribution of the selection frequencies. Stability selection with all available error bounds is implemented for a variety of modelling techniques in the R package stabs <cit.>. An important issue is the choice of the hyper-parameters of stability selection. A fixed value of q should be chosen large enough to select all hypothetically influential variables <cit.>. A sensible value for q should usually be smaller than or equal to the number of base-learners selected via early stopping with cross-validation. In general, the size of q is of minor importance as long as it is in a sensible range. With a fixed q, either the threshold π_thr for stable effects can be chosen additionally or, as can be seen from Equation (<ref>) with equality, the upper bound for the PFER can be pre-specified and the threshold derived accordingly. The latter would be the preferred choice if error control is of major importance, the former if error control is considered just a by-product (see, e.g., <cit.>). For an interpretation of the PFER, particularly with regard to standard error rates such as the per-comparison error rate or the family-wise error rate, we refer to Hofner et al. <cit.>. Note that for a fixed q, it is computationally very easy to change either of the other two parameters (π_thr or the upper bound for the PFER), as the resampling results can be reused <cit.>. Please note that the base-learners selected via stability selection might not reflect any model that can be derived with a specific penalty parameter using the original modelling approach. This means that for boosting, there might be no m_stop value that results in a model with exactly the stably selected base-learners; the provided set of stable base-learners is a fundamentally new solution.
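A minimal sketch of this subsampling scheme, building on the component-wise boosting example from Section 2.3 (our own illustration, not the stabs implementation):

    import numpy as np

    def first_q_selected(X, y, q, nu=0.1, max_iter=10000):
        # run component-wise L2 boosting until q distinct base-learners are selected
        n, p = X.shape
        f = np.full(n, y.mean())
        selected = set()
        for _ in range(max_iter):
            u = y - f
            betas = (X * u[:, None]).sum(axis=0) / (X ** 2).sum(axis=0)
            rss = ((u[:, None] - X * betas) ** 2).sum(axis=0)
            j = int(rss.argmin())
            selected.add(j)
            if len(selected) == q:
                break
            f += nu * betas[j] * X[:, j]
        return selected

    def stability_selection(X, y, q=5, B=100, pi_thr=0.75, seed=2):
        rng = np.random.default_rng(seed)
        n, p = X.shape
        freq = np.zeros(p)
        for _ in range(B):
            idx = rng.choice(n, size=n // 2, replace=False)  # subsample, no replacement
            freq[list(first_q_selected(X[idx], y[idx], q))] += 1.0 / B
        pfer_bound = q ** 2 / ((2 * pi_thr - 1) * p)         # upper bound for E(V)
        stable = np.flatnonzero(freq >= pi_thr)
        return stable, freq, pfer_bound

Instead of fixing π_thr, one can pre-specify the PFER bound and solve the displayed inequality (with equality) for the threshold, as described above.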
§.§ Extension and application of boosting with stability selection Variable selection is especially important for high-dimensional gene expression data and other large-scale biomedical data sources. Recently, stability selection with boosting was successfully applied to select a small number of informative biomarkers for the survival of breast cancer patients <cit.>. The model was derived based on a novel boosting approach that optimizes the concordance index <cit.>. Hence, the resulting prediction rule was optimal with respect to its ability to discriminate between patients with longer and shorter survival, i.e., its discriminatory power. Thomas et al. <cit.> derived a modified algorithm for boosted generalized additive models for location, scale and shape (GAMLSS, <cit.>) to allow a combination of this very flexible model class with stability selection. The basic idea of GAMLSS is to model all parameters of the conditional distribution by their own additive predictor and associated link function. Extensive simulation studies showed that the new fitting algorithm leads to models comparable to those of the previous algorithm <cit.> but is superior regarding computational speed, especially in combination with cross-validation approaches. Furthermore, simulations showed that this algorithm can be successfully combined with stability selection to select sparser models, identifying a smaller subset of truly informative variables in high-dimensional data. The current algorithm is implemented in the R add-on package gamboostLSS <cit.>; the modified version is currently available on GitHub <cit.>. §.§ Further approaches for sparse models In order to construct risk prediction signatures from molecular data, such as DNA methylation, Sariyar et al. <cit.> proposed an adaptive likelihood-based boosting algorithm. The authors included a step-size modification factor c_f, an additional tuning parameter that adaptively controls the size of the updates. In sparse settings, the approach decreases the shrinkage of effect estimates (by using a larger step length), leading to a smaller bias. In settings with larger numbers of informative variables, the approach allows models with a lower degree of sparsity to be fitted, when necessary, via smaller updates. The modification factor c_f has to be selected together with m_stop via cross-validation or resampling on a two-dimensional grid. Zhang et al. <cit.> argue that, in practice, variable ranking is more favourable than variable selection, as ranking makes it easy to apply a thresholding rule in order to identify a subset of informative variables. The authors implemented a pseudo-boosting approach, which is technically not based on statistical boosting but is adapted to rank and select variables for statistical models. Note that stability selection can also be seen as a variable ranking scheme based on selection frequencies, as its selection feature is only triggered by imposing the threshold π_thr. Following a gradient-based approach, Huang et al. <cit.> adapted the sparse boosting approach by Bühlmann and Yu <cit.> in order to promote similarity of model sparsity structures in the integrative analysis of multiple data sets, which surely is an important topic given the trend toward big data. § FUNCTIONAL REGRESSION Due to technological developments, more and more data are measured continuously over time. Over the last years, a lot of methodological research has focused on regression methods for this type of functional data. A groundbreaking work in this new and evolving field of statistics is provided by Ramsay and Silverman <cit.>. Functional regression models can contain functional responses (defined on a continuous domain), functional covariates, or both. This leads to three basic classes of functional regression models, i.e., function-on-scalar (functional response), scalar-on-function (functional explanatory variable) and function-on-function regression.
For a recent review on general methodological developments in functional regression, see Morris <cit.>. §.§ Boosting functional data The first statistical boosting algorithm for functional regression, allowing for data-driven variable selection, was proposed by Brockhaus et al. <cit.>. The authors' approach focused on linear array models <cit.>, providing a unified framework for all three settings outlined above. Since the general structure of their gradient boosting algorithm is similar to the one in Box <ref>, the resulting models still have the same form as in (<ref>), except that the response Y and the covariates may be functions. The underlying functional partial effects h_j(x_j, t) can be represented using a tensor product basis

h_j(x_j)(t) = (b_j(x_j)^⊤ ⊗ b_Y(t)^⊤) θ_j,

where θ_j is the vector of coefficients, b_j and b_Y are basis functions, and ⊗ denotes the Kronecker product. This functional array model is limited in two ways: (i) the functional responses need to be measured on a common grid, and (ii) the covariates need to be constant over the domain of the response. As the second assumption in particular might often not be fulfilled in practice, Brockhaus et al. <cit.> soon after proposed a general framework for boosting functional regression models that avoids this assumption and drops the linear array structure. This newer framework <cit.> also comprises all three model classes outlined above and particularly focuses on historical effects, where the functional response and the functional covariates are observed over the same time interval. The underlying assumption is that observations of the covariate affect the response only up to the corresponding time point,

E(Y(t) | X = x) = ∑_j = 1^J ∫_t_1^t x_j(s) β_j(s,t) ds,

where s represents the time points at which the covariate was observed. In other words, only the part of the covariate function lying in the past (not the future) can affect the present response. This is a sensible restriction in most practical applications and thus not a severe limitation. Both approaches for boosting functional regression are implemented in the R add-on package FDboost <cit.>, which relies on the fitting methods and infrastructure of mboost. §.§ Extensions of boosting functional regression Boosting functional data can be combined with stability selection (see Section <ref>) in order to enhance the variable selection properties of the algorithm <cit.>. The boosting approach for functional data was already extended towards the model class of generalized additive models for location, scale and shape (GAMLSS) in a scalar-on-function setting by Brockhaus et al. <cit.>. This functional approach was named signal regression models for location, scale and shape <cit.>. The estimation via gradient boosting is based on the corresponding gamboostLSS algorithm for boosting GAMLSS <cit.>. In an approach to analyse the functional relationship between bioelectrical signals like electroencephalography (EEG) and facial electromyography (EMG), Rügamer et al. <cit.> focused on extending the framework of boosting functional regression by incorporating factor-specific historical effects, similar to (<ref>).
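To illustrate a historical effect as in the model displayed above, one can discretize the integral ∫_t_1^t x(s) β(s, t) ds on a common grid; the lower-triangular weight matrix below enforces that only past covariate values enter the prediction. This is a toy construction of our own, not the FDboost implementation:

    import numpy as np

    def historical_effect(x, beta, grid):
        # Discretize integral_{t_1}^{t} x(s) beta(s, t) ds on a common grid.
        # x:    (G,)   functional covariate evaluated on the grid
        # beta: (G, G) coefficient surface beta(s, t)
        # returns the (G,) contribution to E(Y(t)) at each grid point t
        G = len(grid)
        ds = np.gradient(grid)               # quadrature weights
        past = np.tril(np.ones((G, G)))      # s <= t: the historical constraint
        # entry (t, s) of the weight matrix: beta(s, t) * ds(s) if s <= t, else 0
        W = past * beta.T * ds
        return W @ x

    # toy example: beta(s, t) = exp(-(t - s)) for s <= t (recent past matters most)
    grid = np.linspace(0.0, 1.0, 101)
    S, T = np.meshgrid(grid, grid, indexing="ij")
    beta = np.exp(-(T - S))
    x = np.sin(2 * np.pi * grid)
    effect = historical_effect(x, beta, grid)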
The authors argued that the potential benefits of these flexible models (like richer interpretation and more flexible structures) are not yet well understood by practitioners and that further efforts are necessary to promote the actual usage of these novel techniques.
§ BOOSTING ADVANCED SURVIVAL MODELS While Cox regression is still the dominant model class for boosting time-to-event data (see <cit.> for a comparison of two different boosting algorithms, and <cit.> for different general approaches to estimate Cox models in the presence of high-dimensional data), over the last years several alternatives emerged <cit.>. In this section we will particularly focus on boosting joint models of time-to-event outcomes and longitudinal markers but will also briefly refer to other recent extensions.
§.§ Boosting joint models The concept of joint modelling of longitudinal and time-to-event data has found its way into the statistical literature in the last few years, as it gives a very complete answer to questions on continuous data recorded over time and event times related to this continuous data. Modelling these two processes independently, as was common before the joint modelling idea was suggested <cit.>, leads to misspecified models prone to bias. There are various joint modelling approaches and thus also various different model equations. The models we refer to in this review are of the following type:
y_ij = η_l(x_ij) + η_ls(x_i, t_ij) + ε_ij
λ(t | α, η_s(x_i, t), η_ls(x_i, t)) = λ_0(t) exp(η_s(x_i, t) + α η_ls(x_i, t)),
where y_ij is the j-th observation of the i-th individual with i = 1,…,n and j = 1,…,n_i, and λ(t | α, η_s(x_i, t), η_ls(x_i, t)) is the hazard function for individual i at time point t. Both outcomes, the longitudinal measurement as well as the event time, are modeled based on two sub-predictors each: one that is supposed to have an impact on only one of them (the longitudinal sub-predictor η_l(x_ij) for the longitudinal outcome and the survival sub-predictor η_s(x_i, t) for the event time) and one that is shared by both parts of the model (the shared sub-predictor η_ls(x_i, t)). All these sub-predictors are functions of different, possibly time-dependent covariates x_i. In many cases the shared sub-predictor consists of, or at least includes, some type of random effects. The function λ_0(t) is the baseline hazard. Most approaches for joint models are based on likelihood or Bayesian inference using the joint likelihood that results as the product of the likelihoods of the two model parts <cit.>. These approaches are, however, unable to conduct variable selection and cannot deal with high-dimensional data. Waldmann et al. <cit.> suggested a boosting algorithm tackling these challenges. The model used in this paper was a reduced version of (<ref>) in which no survival sub-predictor was considered and a fixed baseline hazard λ_0 was used. The algorithm is a version of the classical boosting algorithm as represented in Box <ref>, adapted to the special case of having to estimate a set of different sub-predictors (similar to <cit.>). The algorithm is therefore composed of three steps which are performed cyclically. In the first step, a regular boosting step to update the longitudinal sub-predictor η_l(x_ij) is performed and the parameters of the shared sub-predictor are treated as fixed. In the second step, the parameters of the longitudinal sub-predictor are fixed and a boosting step for the shared sub-predictor η_ls(x_i, t) is conducted.
The third step is a simple optimization step: based on the current values of the parameters in both sub-predictors, the likelihoods are optimized with respect to λ_0, σ^2 and α (cf., <cit.>). The number of iterations now depends on two stopping iterations m_stop,l and m_stop,ls, which have to be optimized on a two-dimensional grid via cross-validation. Waldmann et al. <cit.> showed that the benefits of the boosting algorithm (automated variable selection and handling of p > n situations) can be transferred to joint modelling and hence laid the groundwork for further extensions of joint modelling approaches. The code for the approach presented here is available in the R add-on package JMboost <cit.>, currently on GitHub.
§.§ Other new approaches on boosting survival data Reulen and Kneib <cit.> extended the framework of statistical boosting towards multi-state models for patients exposed to competing risks (e.g., adverse events, recovery, death or relapse). The approach is implemented in the gamboostMSM package <cit.>, relying on the infrastructure of mboost. Möst and Hothorn <cit.> focused on boosting patient-specific survivor functions based on conditional transformation models <cit.> incorporating inverse probability of censoring weights <cit.>. When statistical boosting algorithms are used to estimate survival models, the motivation is most often the presence of high-dimensional data. De Bin et al. <cit.> investigated several approaches (including gradient boosting and likelihood-based boosting) to incorporate both clinical and high-dimensional omics data to build prediction models. Guo et al. <cit.> proposed a new adaptive likelihood-based boosting algorithm to fit Cox models, incorporating a direct lasso-type L_1 penalization in the fitting process in order to avoid the inclusion of variables with small effects. This general motivation is similar to that of the boosting algorithm with step-length modification factor proposed by Sariyar et al. <cit.>. In another approach, Sariyar et al. <cit.> combined a likelihood-based boosting approach for the Cox model with random forests in order to screen for interaction effects in high-dimensional data. Hieke et al. <cit.> combined likelihood-based boosting with resampling in an approach to identify prognostic SNPs in potentially small clinical cohorts.
§ NEW FRONTIERS AND APPLICATIONS There were even more new topics that have been incorporated into the framework of statistical boosting, but not all of them can be presented in detail here. However, we want to give a short overview of the most relevant developments; notably, many of them were actually motivated by biomedical applications. Weinhold et al. <cit.> proposed to analyse DNA methylation data (signal intensities M and U) via a “ratio of correlated gammas” model. Based on a bivariate gamma distribution for the M and U values, the authors derived the density for the ratio M/(M+U) and optimized it via gradient boosting. A boosting algorithm for differential item functioning in Rasch models was developed by Schauberger and Tutz <cit.> for the broader area of psychometrics, while Casalicchio et al. focused on boosting subject-specific Bradley-Terry-Luce models <cit.>. Napolitano et al. <cit.> developed a sampled boosting algorithm for the analysis of brain perfusion images: gradient boosting is carried out multiple times on different training sets.
Each base-learner refers to a voxel, and after every sampling iteration a fixed fraction of the selected voxels is randomly left out of the following boosting fit, in order to force the algorithm to select new voxels. The final model is then computed as the global sum of all solutions. Feilke et al. <cit.> proposed a voxelwise boosting approach for the analysis of contrast-enhanced magnetic resonance imaging data (DCE-MRI), which was additionally enhanced to account for the regional structure of the voxels via a spatial penalty. Pybus et al. <cit.> proposed a hierarchical boosting algorithm for classification in an approach to detect positive selection in genomic regions (cf., <cit.>). Truntzer et al. <cit.> compared the classification performance of gradient boosting with other methods combining clinical variables and high-dimensional mass spectrometry data and concluded that the variable selection properties of boosting also led to very good performance regarding prediction accuracy. Regarding boosting location and scale models (modelling both the expected value and the variance in the spirit of GAMLSS <cit.>), Messner et al. <cit.> proposed a boosting algorithm for predictor selection in ensemble postprocessing to better calibrate ensemble weather forecasts. The idea of ensemble forecasting is to account for model errors and to quantify forecast uncertainty. Mayr et al. <cit.> used boosted location and scale models in combination with permutation tests to simultaneously assess systematic bias and random measurement errors of medical devices. The use of a permutation test tackles one of the remaining problems of statistical boosting approaches in practical biomedical research: the lack of standard errors for effect estimates makes it necessary to incorporate resampling procedures to construct confidence intervals or to assess the significance of effects. The methodological development in <cit.>, analogously to many of the extensions presented in this article, was motivated by the applied analysis of biomedical data. Statistical boosting algorithms, however, have also been applied over the last few years in various biomedical applications without the need for methodological extensions that could be described here. Most applications focus on prediction modelling or variable selection. We want to briefly mention a selection of the most recent ones from the last two years: the research questions comprise the development of birth weight prediction formulas <cit.> for particularly small babies, the prediction of smoking cessation and its relapse in HIV-infected patients <cit.>, Escherichia coli fed-batch fermentation modelling <cit.>, the prediction of cardiovascular death of older patients in the emergency department <cit.> and the identification of factors influencing therapeutic decisions regarding rheumatoid arthritis <cit.>.
§ DISCUSSION After Friedman et al. <cit.> discovered the link between boosting and additive modelling in their seminal paper, most research on boosting methods focused on the development of methodology within the univariate GAM framework. This line of research included, among many other achievements, the estimation of smooth predictor effects via spline base-learners <cit.> and the extension of boosting to other GAM families than binary classification and Gaussian regression <cit.>.
We have summarized these methods and described their relationships in an earlier review <cit.>. In this article, we have highlighted several new research areas in the field of statistical boosting that leave the traditional GAM modeling approach. A particularly active research area during the last few years addresses the development of boosting algorithms for new model classes extending the GAM framework. These include, among others, the simultaneous modelling of location, scale and shape parameters within the GAMLSS framework <cit.>, the modelling of functional data <cit.>, and, recently, the class of joint models for longitudinal and survival data <cit.>. It goes without saying that these developments will make boosting algorithms available for practical use in much more sophisticated clinical and epidemiological applications than before. Another line of research, which we described in detail in Sections <ref> and <ref>, aims at exploring the connections between statistical boosting methods and machine learning techniques that were originally developed independently of boosting. An important example is stability selection, a generic methodology that, at the time of its development, mainly focussed on penalized regression models such as the lasso. Only in recent years has stability selection been adapted to become a tool for variable selection within the boosting framework (e.g. <cit.>). Other work in this context is the analysis of the connections between boosting and penalized regression <cit.> and the work by Sariyar et al. <cit.> exploring a combination of boosting and random forest methods. Finally, as already noted by Hothorn <cit.>, boosting may not solely be regarded as a framework for regularized model fitting but also as a generic optimization tool in its own right. In particular, boosting constitutes a robust algorithm for the optimization of objective functions that, due to their structure or complexity, may pose problems for Newton-Raphson-type and related methods. This was, for example, the motivation for the use of boosting in the articles by Hothorn et al. <cit.> and Weinhold et al. <cit.>. Regarding future research, a huge challenge for the use of boosting algorithms in biomedical applications arises from the era of big data. Unlike other machine learning methods like random forests, the sequential nature of boosting methods hampers the use of parallelization techniques within the algorithm, which may result in issues with the fitting and tuning of complex models with multidimensional predictors and/or sophisticated base-learners like splines or larger trees. To overcome these problems in classification and univariate regression, Chen and Guestrin <cit.> developed the extremely fast and sophisticated xgboost environment. However, for the more recent extensions discussed in this paper, big data solutions for statistical boosting still have to be developed.
§.§ Conflict of interests The authors declare that there is no conflict of interest regarding the publication of this paper.
§.§ Acknowledgements The authors thank Corinna Buchstaller for her help with the literature search. The work on this article was supported by the Deutsche Forschungsgemeinschaft (DFG), grant no.
SCHM 2966/1-2 (grant to MS and OG) and the Interdisciplinary Center for Clinical Research (IZKF) of the Friedrich-Alexander-University Erlangen-Nürnberg via the Projects J49 (grant to AM) and J61 (grant to EW).
§ DEVELOPMENTS REGARDING THE MBOOST PACKAGE This appendix describes important changes during the last years that were implemented in the R package mboost after the tutorial paper <cit.> on its use was published. Starting from mboost 2.2, the default for the degrees of freedom was changed; they are now defined as df(λ) = trace(2S - S^⊤S), with smoother matrix S = X(X^⊤X + λK)^-1 X^⊤. Analyses have shown that this leads to a reduced selection bias, see <cit.>. Earlier versions used the trace of the smoother matrix as degrees of freedom, i.e., df(λ) = trace(S). One can change back to the old definition by setting the corresponding package option. For parallel computations of cross-validated stopping values, mboost now uses the package parallel, which is included in the standard R installation. The behavior of the linear base-learner was changed when its covariate is a factor: the intercept is simply dropped from the design matrix and the coding can be specified as usual for factors. Additionally, a new contrast was introduced (see the package manual for details). Finally, the computation of the B-spline basis at the boundaries was changed such that equidistant boundary knots are used per default. With mboost 2.3, constrained effects <cit.> are fitted per default using quadratic programming methods, improving the speed of computation drastically. In addition to monotonic, convex and concave effects, new constraints were introduced to fit further types of constrained effects, including effects with boundary constraints (see the manual for details). Additionally, a new function to assign m_stop values to a model object was added, as well as two new distribution families <cit.> and <cit.>. Finally, a new option was implemented to allow for stopping based on out-of-bag data during fitting. With mboost 2.4, bootstrap confidence intervals were implemented in a novel function <cit.>. The stability selection procedure was moved to a dedicated package stabs <cit.>, while a specific function for gradient boosting was implemented in package mboost. From mboost 2.5 onward, cross-validation does not stop on errors in single folds anymore and was sped up when parallel computations are used. Documentation was added for the function that visualizes model results. Values outside the boundary knots are now forbidden during fitting, while linear extrapolation is used for prediction. With mboost 2.6, a lot of bug fixes and small improvements were provided. Most notably, the development of the package is now hosted entirely on GitHub in the collaborative project https://github.com/boost-R/mboost/boost-R/mboost and the package maintainer changed. The current CRAN version mboost 2.7 provides a new family <cit.>, variable importance measures and improved plotting facilities. Changes in the current development version, which will be deployed to CRAN with the next release of mboost, include major changes to distribution families, allowing users to specify link functions. The binomial family will additionally provide an alternative implementation of binomial regression models along the lines of the classic implementation in base R. This family also works with a two-column matrix containing the number of successes and number of failures. Furthermore, models with zero steps (i.e., models containing only the offset) will be supported, as well as cross-validated models without base-learners.
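To make the two degrees-of-freedom definitions from the beginning of this appendix concrete, the following minimal Python/NumPy sketch computes both versions for a generic ridge-type smoother. This is an illustration only, not mboost code; the design matrix, penalty matrix and λ are made-up values for the example.

```python
import numpy as np

def smoother_matrix(X, K, lam):
    """Ridge-type smoother S = X (X'X + lam * K)^{-1} X'."""
    return X @ np.linalg.solve(X.T @ X + lam * K, X.T)

def df_new(S):
    """Default since mboost 2.2: df(lam) = trace(2S - S'S)."""
    return np.trace(2 * S - S.T @ S)

def df_old(S):
    """Earlier definition: df(lam) = trace(S)."""
    return np.trace(S)

# toy example: polynomial design with a second-order difference penalty
x = np.linspace(0.0, 1.0, 50)
X = np.vander(x, 4)                     # 50 x 4 design matrix
D = np.diff(np.eye(4), n=2, axis=0)     # second-order difference operator
K = D.T @ D                             # 4 x 4 penalty matrix
S = smoother_matrix(X, K, lam=10.0)

# for a symmetric S with eigenvalues in [0, 1], trace(2S - S'S) >= trace(S)
print(df_new(S), df_old(S))
```

In mboost itself, the degrees of freedom are specified through the base-learners; the sketch merely illustrates why the two definitions yield different values for the same λ.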
http://arxiv.org/abs/1702.08185v1
{ "authors": [ "Andreas Mayr", "Benjamin Hofner", "Elisabeth Waldmann", "Tobias Hepp", "Olaf Gefeller", "Matthias Schmid" ], "categories": [ "stat.AP", "stat.CO", "stat.ML" ], "primary_category": "stat.AP", "published": "20170227083326", "title": "An update on statistical boosting in biomedicine" }
Adaptive Ensemble Prediction for Deep Neural Networks based on Confidence Level
Hiroshi Inoue <inouehrs@jp.ibm.com>, IBM Research - Tokyo
Ensembling multiple predictions is a widely used technique for improving the accuracy of various machine learning tasks. One obvious drawback of ensembling is its higher execution cost during inference. In this paper, we first describe our insights on the relationship between the probability of prediction and the effect of ensembling with current deep neural networks; ensembling does not help mispredictions for inputs predicted with a high probability even when there is a non-negligible number of mispredicted inputs. This finding motivated us to develop a way to adaptively control the ensembling. If the prediction for an input reaches a high enough probability, i.e., a high enough output from the softmax function, on the basis of the confidence level, we stop ensembling for this input to avoid wasting computation power. We evaluated the adaptive ensembling by using various datasets and showed that it reduces the computation cost significantly while achieving accuracy similar to that of static ensembling using a pre-defined number of local predictions. We also show that our statistically rigorous confidence-level-based early-exit condition reduces the burden of task-dependent threshold tuning compared with naive early exit based on a pre-defined threshold, in addition to yielding a better accuracy with the same cost.
§ INTRODUCTION The huge computation power of today's computing systems, equipped with GPUs, special ASICs, FPGAs, or multi-core CPUs, makes it possible to train deep networks by using tremendously large datasets. Although such high-performance systems can be used for training, actual inference in the real world may be executed on small devices, such as handheld devices or embedded controllers. Hence, various techniques (such as <cit.> and <cit.>) for achieving a high prediction accuracy without increasing computation time have been studied to enable more applications to be deployed in the real world. Ensembling multiple predictions is a widely used technique for improving the accuracy of various machine learning tasks (e.g., <cit.>, <cit.>) at the cost of more computation power. In image classification tasks, for example, accuracy is significantly improved by ensembling local predictions for multiple patches extracted from an input image to make a final prediction. Moreover, accuracy is further improved by using multiple networks trained independently to make local predictions. <cit.> averaged 10 local predictions using 10 patches extracted from the center and the 4 corners of input images with and without horizontal flipping in their Alexnet paper. They also used up to seven networks and averaged the predictions to increase the accuracy. GoogLeNet by <cit.> averaged up to 1,008 local predictions by using 144 patches and 7 networks. Some ensemble methods use meta-learning during training to learn how to best mix multiple local predictions from the networks (e.g., <cit.>). In the Alexnet and GoogLeNet papers, however, significant improvements were obtained by just averaging local predictions without meta-learning. In this paper, we do not use meta-learning either. Although the benefits of ensemble prediction are quite significant, one obvious drawback is its higher execution cost during inference.
If we make a final prediction by ensembling 100 predictions, we need to make 100 local predictions, and, hence, the execution cost will be 100 times as high as that without ensembling. This higher execution cost limits the real-world use of ensembling, especially on small devices, even though using it is almost the norm for winning image classification competitions that emphasize prediction accuracy. In this paper, we first describe our insights on the relationship between the probability of prediction and the effect of ensembling with current deep neural networks; ensembling does not help mispredictions for inputs predicted with a high probability, i.e., a high output from the softmax, even when there is a non-negligible number of mispredicted inputs. To exploit this finding to speed up ensembling, we developed adaptive ensemble prediction, which maintains the benefits of ensembling with much smaller additional costs. During the ensembling process, we calculate the confidence level of the probability obtained from the local predictions for each input. If an input reaches a high enough confidence level, we stop ensembling and making more local predictions for this input to avoid wasting computation power. We evaluated our adaptive ensembling by using four image classification datasets: ILSVRC 2012, CIFAR-10, CIFAR-100, and SVHN. Our results showed that adaptive ensemble prediction reduces the computation cost significantly while achieving accuracy similar to that of static ensemble prediction with a fixed number of local predictions. We also showed that our statistically rigorous confidence-level-based early-exit condition yields a better accuracy with the same cost (or a lower cost for the same accuracy), in addition to reducing the burden of task-dependent threshold tuning compared with a naive early-exit condition based on a pre-defined threshold on the probability.
§ ENSEMBLING AND PROBABILITY OF PREDICTION This section describes the observations that motivated us to develop our proposed technique: how ensemble prediction improves the accuracy of predictions with different probabilities.
§.§ Observations To show the relationship between the probability of prediction and the effect of ensembling, we evaluated the prediction accuracy for the ILSVRC 2012 dataset with and without the ensembling of two predictions made by two independently trained networks. Figure 1(a) shows the results of this experiment with GoogLeNet; the two networks follow the design of GoogLeNet and use exactly the same configurations (hence, the differences come only from the random number generator). In the experiment, we 1) evaluated the 50,000 images from the validation set of the ILSVRC 2012 dataset by using the first network without ensembling, 2) sorted the images in terms of the probability of prediction, and 3) evaluated the images with the second network and assessed the accuracy after ensembling the two local predictions by using the arithmetic mean. The x-axis of Figure 1(a) shows the percentile of the probability from high to low, i.e., going left (right), the first local predictions become more (less) probable. The gray dashed line shows the average probability for each percentile class. Overall, ensembling improved the accuracy well, although we only averaged two predictions. Interestingly, we can observe that the improvements only appear on the right side of the figure.
There were almost no improvements made by ensembling two predictions on the left side, i.e., for input images with highly probable local predictions, even when there was a non-negligible number of mispredicted inputs. For example, in the 50- to 60-percentile range with GoogLeNet, the top-1 error rate was 29.6% and was not improved by averaging two predictions from different networks. For more insight into these characteristics, Figure 2(a) shows the breakdown of the 5,000 samples in each 10-percentile range into 4 categories based on 1) whether the first prediction was correct or not and 2) whether the two networks made the same prediction or different predictions. When a prediction with a high probability was made first, we can observe that the other local prediction tended to be the same regardless of its correctness. In the highest 10-percentile range, for instance, the two independently trained networks made the same misprediction for all 43 mispredicted samples. The two networks made different predictions only for 2 out of the 5,000 samples even when we included the correct predictions. In the 10- to 20-percentile range, the two networks generated different predictions only for 3 out of 139 mispredicted samples. Ensembling does not work well when local predictions tend to make the same mispredictions. To determine whether or not this characteristic of ensembling is unique to the GoogLeNet architecture, we conducted the same experiment using Alexnet as another network architecture and show the results in Figures 1(b) and 2(b). Although the prediction error rate was higher for Alexnet than for GoogLeNet, we observed similar characteristics of the improvements made by ensembling. When we combined Alexnet (for the first prediction) and GoogLeNet (for the second prediction), ensembling the local prediction from GoogLeNet, which yielded much higher accuracy than the first prediction by Alexnet, did not produce a significant gain in the 0- to 20-percentile range. We discuss this in more depth in the appendix. Also, we show the results with ResNet50 <cit.> in the appendix. Even with ResNet, which has higher accuracy than GoogLeNet, the improvements made by ensembling were only observed on the right side of the figure, i.e., for images with low probabilities. These characteristics of the improvements made by ensembling are not unique to the ILSVRC dataset; we have observed similar trends in other datasets.
§.§ Why This Happens To understand why ensembling does not work for inputs predicted with high probabilities, we investigated a simplified classification task, which we detail in the appendix, and found a common type of misclassification that ensembling does not help. Trained neural networks (or any classifiers in general) often make a prediction for a sample near the decision boundary of class A and class B with a low probability; the classifier assigns similar probabilities to both classes, and the class with the higher probability (but with a small margin) is selected as the result. For such samples, ensembling works efficiently by reducing the effects of random perturbations. While ensembling works near a decision boundary that is properly learned by a classifier, some decision boundaries can be totally missed by a trained classifier due to insufficient expressiveness of the model that is used or a lack of appropriate training data. Figure 3 shows an example of true decision boundaries in 2-D feature space and classification results with a classifier that is not capable of capturing all of these boundaries.
In this case, the small region of class A in the top-left was totally missed in the classification results obtained with a trained network, i.e., the samples in this region were mispredicted with high probabilities. Typically, such mispredictions cannot be avoided by ensembling predictions from another classifier trained with different random numbers, since these mispredictions are caused by the poor expressiveness of the model rather than by the perturbations that come from random numbers. These results motivated us to develop our adaptive ensemble prediction method for reducing the additional cost of ensembling while keeping the benefit of improved accuracy. Once we obtain a high enough prediction probability for an input image, further local prediction and ensembling will waste computation power without improving accuracy. The challenge is how to identify the condition under which ensembling is terminated early. As described later, we find that an early-exit condition based on the confidence level of the probability works well for all tested datasets.
§ RELATED WORK Various prediction methods that ensemble the outputs from many classifiers (e.g., neural networks) have been widely studied to achieve higher accuracy in machine learning tasks. Boosting (Freund and Schapire 1996) and bagging (<cit.>) are famous examples of ensemble methods. Boosting and bagging produce enough variance among the classifiers included in an ensemble by changing the training set for each classifier. Another technique for improving binary classification using multiple classifiers is soft-cascade (e.g., <cit.>, <cit.>). With soft-cascade, multiple weak sub-classifiers are trained to reject a portion of the negative inputs. Hence, when these classifiers are combined to make one strong classifier, many easy-to-reject inputs are rejected in the early stages without consuming a huge amount of computation time. Compared with boosting, bagging, or soft-cascade, ours is an inference-time technique and does not affect the training phase. In recent studies on image classification with deep neural networks, the random numbers (e.g., for initialization or for ordering input images) used in the training phase can give sufficient variance among networks even with the same training set for all classifiers (networks). Hence, we use networks trained by using the same training set and network architecture in this study, and we assume that the capabilities of the local classifiers are not that different. If a classifier is much weaker than the later classifiers, as in soft-cascade, the ensembling goes differently from our observations discussed in Section 2; inputs mispredicted by a weak classifier may be predicted correctly by later, more powerful classifiers even if they are predicted with high probabilities in the local prediction made by the weak classifier. For example, a later classifier that is capable of capturing the top-left region of class A in Figure 3 may predict the sample in the top-left correctly. Another series of studies on accelerating classification tasks with two or a few classes is based on the dynamic pruning of majority voting (e.g., <cit.>, <cit.>). Like our technique, dynamic pruning uses a certain confidence level to prune ensembling with a sequential voting process to avoid wasting computation time.
We show that the confidence-level-based approach is quite effective at accelerating ensembling by averaging local predictions in many-class classification tasks with deep neural networks when we use the output of the softmax as the probability of the local predictions. COMET by <cit.> stops ensembling for random forest classifiers on the basis of the confidence interval. We also use the confidence interval, but in a different way; COMET stops ensembling for binary classification tasks when the unobserved proportion of positive votes falls on the same side of 0.5 as the current observed mean with a certain confidence level. We use the confidence interval to confirm that the predicted label has a higher probability than the other labels with a certain confidence. We cannot naively follow GLEE since we do not target binary classification tasks. Also, COMET cannot take our approach because random forests, unlike neural network classifiers, do not provide probability information for each local prediction. <cit.> decided the order of local predictions and also the thresholds for early stopping by solving a combinatorial optimization problem for binary classification tasks. In our study, we fixed the order of local predictions since our local predictions are based on the same network architecture and ordering is not that important. The basic idea of using the confidence level for the early-exit condition does not depend on this specific order of local predictions. Some existing classifiers with a deep neural network (e.g., <cit.>, <cit.>, <cit.>) take an early-exit approach in ensembling similar to ours or take an early exit from one neural network. In our study, we investigate in detail how the early-exit condition affects the execution time and the accuracy, and we show that our confidence-level-based condition works better than naive threshold-based conditions. The higher execution cost of ensembling is a known problem, so we are not the first to attack it. For example, <cit.> also tackled the high execution cost of ensembling. Unlike us, they trained a new smaller network by distilling the knowledge from an ensemble of networks, following <cit.>. To improve the performance of deep neural network inference by producing small and efficient executables from trained networks, suitable for handheld devices, graph compilers and optimizers such as NNVM/TVM (<cit.>) and Glow (<cit.>) were recently developed. In our technique, we use the probability of predictions to control ensembling during inference. Typically, the probability of predictions generated by the softmax is used during the training of a network; the cross entropy of the probabilities is often used as the objective function of the optimization. However, using the probability for purposes other than the target of optimization is not unique to us. For example, <cit.> used probabilities from the softmax while distilling knowledge from an ensemble of multiple models to create a smaller network for deployment. As far as we know, ours is the first study focusing on the relationship between the probability of prediction and the effect of ensembling with deep neural networks. <cit.> made an important observation related to ours. They showed that a large part of the gain of ensembling comes from the ensembling of the first few local predictions.
Our observation discussed in the previous sections extends Opitz's observation from a different perspective: most of the gain of ensembling comes from inputs with low probabilities in prediction.
§ ADAPTIVE ENSEMBLE PREDICTION
§.§ Basic Idea This section details our proposed adaptive ensemble prediction method. As shown in Figure 1, ensembling typically does not improve the accuracy of predictions if a local prediction is highly probable. Hence, we terminate ensembling without processing all N local predictions on the basis of the probabilities of the predictions. We execute the following steps.
* start from i = 1
* obtain the i-th local prediction, i.e., the probability for each class label. We denote by p_L,i the probability for label L in the i-th local prediction
* calculate the average probability for each class label, <p_L>_i = ∑_j=1^i p_L,j / i
* if i < N and the early-exit condition is not satisfied, increment i and repeat from step 2
* output the class label that has the highest average probability, argmax_L(<p_L>_i), as the final prediction.
§.§ Confidence-level-based Early Exit For the early-exit condition in step 4, we propose a condition based on the confidence level. One could use a naive condition based on a pre-determined static threshold T to terminate the ensembling, i.e., just comparing the highest average probability max_L(<p_L>_i) against the threshold T. If the average probability exceeds the threshold, we do not execute further local predictions for ensembling. As we empirically show later, the best threshold T heavily depends on the task. To avoid this difficult tuning of the threshold T, we propose a dynamic and more statistically rigorous condition in this paper. Instead of a pre-defined threshold, we use confidence intervals (CIs) as the early-exit condition. We first find the label that has the highest average probability (the predicted label). Then, we calculate the CI of the probabilities using the i local predictions. If the calculated CI of the predicted label does not overlap with the CIs of the other labels, i.e., the predicted label is the best prediction with a certain confidence level, we terminate the ensembling and output the predicted label as the final prediction. We calculate the confidence interval for the probability of label L using i local predictions as
<p_L>_i ± z (1/√(i)) √(∑_j=1^i (p_L,j - <p_L>_i)^2 / (i-1)).
Here, z is defined such that a random variable Z that follows the Student's t distribution with i-1 degrees of freedom satisfies the condition Pr[Z ≤ z] = 1 - α/2, where α is the significance level and (1 - α) is the confidence level. We can read the value z from a precomputed table at runtime. To compute the confidence interval with a small number of samples (i.e., local predictions), it is known that the Student's t distribution is more suitable than the normal distribution. When the number of local predictions increases, the Student's t distribution approaches the normal distribution. We could do pair-wise comparisons between the predicted label and all other labels. However, computing CIs for all labels is costly, especially when there are many labels. To avoid the excess costs of computing CIs, we compare the probability of the predicted label against the total of the probabilities of the other labels.
Since the total of the probabilities of all labels (including the predicted label) is 1.0 by definition, the total of the probabilities of the labels other than the predicted label is 1 - <p_L>_i, and its CI has the same size as that of the probability of the predicted label. Hence, our early-exit condition is
<p_L>_i - (1 - <p_L>_i) > 2z (1/√(i)) √(∑_j=1^i (p_L,j - <p_L>_i)^2 / (i-1)).
We avoid computing the CI if <p_L>_i < 0.5, because in such cases the early-exit condition above cannot be met and the computation would be wasted. Since the CI cannot be calculated from only one local prediction (the denominator i - 1 would cause a division by zero), we use a hybrid of the two early-exit conditions. We use the static-threshold-based condition only for the first local prediction, with a quite conservative threshold (99.99% in the current implementation), to terminate ensembling only for trivial inputs as early as possible; after the second local prediction is calculated, the confidence-level-based condition above is used. We also performed evaluations with pair-wise comparisons of CIs against all other labels or against the label having the second-highest probability. However, the differences in the results obtained with pair-wise comparisons were mostly negligible. There can be many other criteria for early-exit conditions, but our approach with the confidence level is a reasonable choice for balancing multiple objectives, including accuracy, computation cost, and ease of tuning.
§ EXPERIMENTS
§.§ Implementation In this section, we investigate the effects of adaptive ensemble prediction on the prediction accuracy and execution cost with various image classification tasks: the ILSVRC 2012, Street View House Numbers (SVHN), CIFAR-10, and CIFAR-100 (with fine and coarse labels) datasets. For the ILSVRC 2012 dataset, we used GoogLeNet as the network architecture and trained the network by using stochastic gradient descent with momentum as the optimization method. For the other datasets, we used a network that has six convolutional layers with batch normalization (<cit.>) followed by two fully connected layers with dropout. We used the same network architecture except for the number of neurons in the output layer. We trained the network by using Adam (<cit.>) as the optimizer. For each task, we trained two networks independently. During the training, we used data augmentation by extracting a patch from a random position of an input image and using random horizontal flipping. Since adaptive ensemble prediction is an inference-time technique, network training is not affected. We averaged up to 20 local predictions in the ensembling. We created 10 patches from each input image by extracting from the center and the four corners of the image, with and without horizontal flipping, following Alexnet. For each patch, we made two local predictions using the two networks. The patch size was 224 x 224 for the ILSVRC 2012 dataset and 28 x 28 for the other datasets. We made local predictions in the following order: (center, no flip, network 1), (center, no flip, network 2), (center, flipped, network 1), (center, flipped, network 2), (top-left, no flip, network 1), ..., (bottom-right, flipped, network 2). Since averaging local predictions from different networks typically yields better accuracy, we used this order for both our adaptive ensembling and the fixed-number static ensembling.
As far as we tested, the order of local predictions slightly affected the error rates, but it did not change the overall comparisons shown in the evaluations.
§.§ Results To study the effects of our adaptive ensembling on the computation cost and accuracy, we show the relationship between them for the ILSVRC 2012 and CIFAR-10 datasets in Figure <ref>. We used two networks in this experiment, i.e., up to 20 predictions were ensembled. In the figure, the x-axis is the number of ensembled predictions, so smaller means faster. The y-axis is the improvement in classification error rate over the baseline (no ensemble), so higher means better. We evaluated static ensembling (averaging a fixed number of predictions) with varying numbers of predictions, as well as our adaptive ensembling. For the adaptive ensembling, we evaluated two early-exit conditions: with a naive static threshold and with the confidence interval. We tested the static-threshold-based condition by changing the threshold T and drew the resulting lines in the figure. Similarly, we evaluated the confidence-level-based condition with three confidence levels frequently used in statistical testing: 90%, 95%, and 99%. From the figure, there is an obvious trade-off between the accuracy and the computation cost. The static ensemble with 20 predictions was at one end of the trade-off because it never exited early. The baseline, for which ensembling was not executed, was at the other end; it always terminated at the first prediction regardless of the probability. Our adaptive ensembling with the confidence-level-based condition achieved better accuracy with the same computation cost (or a smaller cost for the same accuracy) compared with the static ensembling or the naive adaptive ensembling with a static threshold. The gain with the confidence-level-based condition over the static-threshold-based condition was significant for CIFAR-10, whereas it was marginal for ILSVRC 2012. These two datasets showed the largest and smallest gains of the confidence-level-based condition over the static-threshold-based condition; the other datasets showed improvements between those of the two datasets shown in Figure <ref>. When comparing the two early-exit conditions in adaptive ensembling, the confidence-level-based condition reduced the burden of parameter tuning compared with the naive threshold-based condition, in addition to the benefit of the reduced computation cost. Obviously, how to decide the best threshold T is the most important problem for the static-threshold-based condition. The threshold T can be used as a knob to control the trade-off between accuracy and computation cost, but tuning the static threshold is highly dependent on the dataset and task. From Figure <ref>, for example, T = 90% or T = 95% seems to be a reasonable choice for ILSVRC 2012 but a problematic one for CIFAR-10. For the confidence-level-based condition, the confidence level also controls the trade-off. However, the differences in the computation cost and the improvements in accuracy due to the choice of the confidence level were much less significant and less sensitive to the task at hand than the differences due to the static threshold. Hence, task-dependent fine-tuning of the confidence level is not as important as the tuning of the static threshold.
The easier (or no) tuning of the parameter is an important advantage of the confidence-level-based condition. Tables 1 and 2 show in more detail how adaptive ensemble prediction affected the prediction accuracy and the execution costs for five datasets. Here, for our adaptive ensembling, we used the confidence-level-based early-exit condition with a 95% confidence level for all datasets, based on the results of Figure <ref>. We tested two different configurations: with one network (i.e., up to 10 local predictions) and with two networks (up to 20 local predictions). In all datasets, the ensembling improved the accuracy in trade-off for the increased execution costs, as expected. Using two networks doubled the maximum number of local predictions (from 10 to 20) and increased both the benefit and the drawback. If we were to use more local predictions (e.g., the original GoogLeNet averaged up to 1,008 predictions), the benefit and cost would become much more significant. Comparing our adaptive ensembling with static ensembling, our adaptive ensembling achieved similar improvements in accuracy while reducing the number of local predictions used in the ensembles; the reductions were up to 6.9 and 12.7 times for the one-network and two-network configurations. Since the speed-up of our adaptive technique over static ensembling becomes larger as the maximum number of predictions to ensemble increases, the benefit of our adaptive technique will become more impressive if we use larger ensemble configurations.
§ CONCLUSION In this paper, we described our adaptive ensemble prediction with a statistically rigorous confidence-level-based early-exit condition to reduce the computation cost of ensembling many predictions. We were motivated to develop this technique by our observation that ensembling does not improve the prediction accuracy if predictions are highly probable. Our experiments using various image classification tasks showed that our adaptive ensembling makes it possible to avoid wasting computing power without significantly sacrificing prediction accuracy by terminating ensembles on the basis of the probabilities of the local predictions. The benefit of our technique will become larger if we use more predictions in an ensemble. Hence, we expect our technique to make ensemble techniques more valuable for real-world systems by reducing the total computation power required while maintaining good accuracy and throughput.
§ APPENDIX Here, we show additional results and discussion on mispredictions with high probabilities.
§.§ How mispredictions with high probability happen In Section 2.2, we discussed insufficient expressiveness of the used model as one possible cause of mispredictions with high probabilities. We show a simple example in Figure 5. We built a three-layer perceptron for a simple binary classification problem which maps a 2-D feature into class A or class B, as shown in Figure 5(a). We label a sample with feature (x, y), where 0 ≤ x < 1 and 0 ≤ y < 1, as
label(x, y) = class A if (x ≤ 0.2 and y ≤ 0.2), class A if (x + y ≥ 1.2, x ≥ 0.4 and y ≥ 0.4), and class B otherwise.
We use the three-layer perceptron with 10 or 100 neurons in the hidden layer as the classifier and the sigmoid function as the activation function. The output layer has only one neuron, and the classification results are indicated by a high or low output value. We generated 1,000 random samples as the training data.
The training is done by using stochastic gradient descent as the optimizer for 10,000 epochs. Figure 5(b) depicts the classification results with 10 hidden neurons. In this case, the top-left region of class A is not captured by the classifier at all due to the poor expressiveness of the network, even though we have enough training samples in this region. As a result, this (weak) classifier misclassifies the samples in this region as class B with almost 100% probability. These mispredictions cannot be avoided with 10 hidden neurons even when we repeat the training using different random number seeds. Hence, ensembling multiple local predictions from this classifier cannot fix this type of misprediction in the top-left region. The decision boundary between class A and class B in the bottom-right region is not sharp, and its shape differs from run to run due to the random numbers. Ensembling can statistically reduce the mispredictions near this boundary by reducing the effects of the random numbers. When we increase the number of hidden neurons from 10 to 100, the top-left region of class A is captured, as shown in Figure 5(c). So the expressiveness of the used model matters for avoiding mispredictions with high probabilities. Another type of misprediction with high probability can happen if we do not have enough training data in small regions, e.g., the top-left region of class A in the above example. In such cases, of course, even highly capable classifiers cannot learn the decision boundary around the small region and mispredict the samples in this region with high probability. Ensembling multiple local predictions from multiple local classifiers does not help with either type of misprediction, and hence stopping ensembling for them is effective for avoiding wasted computation power without increasing the overall error rate.
§.§ Ensembling and Probability of Prediction
§.§.§ Results with ResNet50 In Section 2 of the paper, we showed the relationship between the probability of prediction and the effect of ensembling using GoogLeNet and Alexnet. To show that our observations are still valid with newer network architectures, the result with ResNet50 <cit.> is shown in Figure 6. In addition to the random cropping and flipping data augmentation used for the experiments with GoogLeNet and Alexnet, we also employ the sample pairing data augmentation technique <cit.> to achieve further improvements in accuracy. Our observation that ensembling does not help mispredictions for inputs predicted with a high probability is still valid for a newer network architecture and with a more advanced data augmentation technique, as we can see by comparing the results for ResNet (Figure 6) with those for GoogLeNet and Alexnet (Figure 1).
§.§.§ Ensemble using different networks In Section 2, we showed that ensembling two predictions from two local classifiers of the same network architecture (Alexnet <cit.> or GoogLeNet <cit.>) can improve the prediction accuracy only for samples that have low probabilities of prediction. Here, we show the results when we mix the predictions from Alexnet and GoogLeNet. Figure 7 shows the result when we use Alexnet for the first prediction and GoogLeNet for the second. The x-axis shows the percentile of the probability of the prediction by Alexnet from high to low, as in the figures in the main paper.
The basic characteristics with two different networks are consistent with the cases using two identical network architectures discussed in the paper, although the improvements from the ensemble are much more significant since the second local classifier (GoogLeNet) is more powerful than the first one (Alexnet). For the leftmost region, i.e., the 0- to 20-percentile samples, the ensemble of the two different networks does not improve the accuracy over the results with only the first local classifier. For the rightmost region, the ensemble reduces the error rate significantly. Here, the ensemble improves the accuracy over much wider regions compared with the cases with two identical network architectures. For the 20- to 40-percentile range, ensembling two local predictions from Alexnet does not improve the accuracy, as shown in Figure 1(b), while ensembling local predictions from Alexnet and GoogLeNet yields improvements, as shown in Figure 7. As discussed above using Figures 5(b) and 5(c), a more powerful classifier can avoid some mispredictions with high probabilities. GoogLeNet, which has higher capability than Alexnet, can correctly classify some samples in this range that are misclassified with high probabilities by Alexnet. However, GoogLeNet cannot classify better than Alexnet in the 0- to 20-percentile range.
§.§ Prediction Accuracy and Computation Cost with 20 networks In Section 4, we showed our evaluations using two networks and ten patches for each network. Here, we show the result of an evaluation for CIFAR-10 using 20 independently trained networks for the 20 local predictions in Figure 8. All local predictions use the same patch, extracted from the center of the input image without horizontal flipping. From the figure, the improvements from the ensemble are more significant than the improvements with two networks because local predictions from different models are less correlated with each other than local predictions from the same network. Even in this configuration, our adaptive ensemble with the proposed confidence-level-based condition achieves better accuracy than the static-threshold-based conditions with the same average number of evaluations.
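As a concrete illustration of the early-exit procedure described in Sections 4.1 and 4.2 of this paper, the following minimal Python sketch implements the averaging loop with the confidence-level-based exit. The `predictors` interface is a hypothetical stand-in for the per-patch, per-network local classifiers; the 99.99% first-step threshold and the <p_L>_i < 0.5 shortcut follow the text above, while everything else is an assumed skeleton rather than the authors' implementation.

```python
import numpy as np
from scipy import stats

def adaptive_ensemble(predictors, x, confidence=0.95, max_preds=20):
    """Return (final label, number of local predictions used)."""
    q = 1.0 - (1.0 - confidence) / 2.0       # Pr[Z <= z] = 1 - alpha/2
    probs = []                                # local softmax outputs p_{L,i}
    for i, predict in enumerate(predictors[:max_preds], start=1):
        probs.append(predict(x))              # predictors is assumed non-empty
        mean = np.mean(probs, axis=0)         # running averages <p_L>_i
        top = int(np.argmax(mean))            # predicted label
        if i == 1:
            if mean[top] > 0.9999:            # conservative first-step exit
                break
            continue                          # CI needs at least 2 samples
        if mean[top] < 0.5:                   # the condition below cannot hold
            continue
        z = stats.t.ppf(q, df=i - 1)          # Student's t quantile
        half_width = z * np.std([p[top] for p in probs], ddof=1) / np.sqrt(i)
        if mean[top] - (1.0 - mean[top]) > 2.0 * half_width:
            break                             # predicted label wins with confidence
    return top, i
```

For instance, `adaptive_ensemble([net1_center, net2_center, ...], image)` (hypothetical callables) would consume local predictions in the fixed order described above and stop as soon as the exit condition fires.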
http://arxiv.org/abs/1702.08259v3
{ "authors": [ "Hiroshi Inoue" ], "categories": [ "cs.LG", "cs.CV", "stat.ML" ], "primary_category": "cs.LG", "published": "20170227125454", "title": "Adaptive Ensemble Prediction for Deep Neural Networks based on Confidence Level" }
^1 Nuclear Physics Division, Bhabha Atomic Research Centre, Trombay, Mumbai-85, India
^2 Theoretical Physics Section, Bhabha Atomic Research Centre, Trombay, Mumbai-85, India
^3 Department of Physics, Birla Institute of Technology & Science, Pilani, Goa, 403726, India
^4 Department of Physics, Indian Institute of Science Education and Research, Bhopal, 462066, India
1: zahmed@barc.gov.in, 2: sachinv@barc.gov.in, 3: achint1994@gmail.com, 4: mohai@iiserb.ac.in
The sub-barrier pairs of energy levels of a Hermitian one-dimensional symmetric double-well potential are known to merge into one if the inter-well distance (a) is increased slowly. The energy at which the doublets merge is the ground state eigenvalue of the independent wells (ϵ_0). We show that if the double-well is perturbed mildly by a complex PT-symmetric potential, the merging of levels turns into the coalescing of two levels at an exceptional point a = a_*. For a > a_*, the real part of the complex-conjugate eigenvalues coincides with ϵ_0 again. This is an interesting and rare connection between two phenomena in two domains: Hermiticity and complex PT-symmetry.
Coalescing versus merging of energy levels in one-dimensional potentials
Zafar Ahmed^1, Sachin Kumar^2, Achint Kumar^3, Mohammad Irfan^4
In one-dimensional quantum mechanics there is a one-to-one correspondence between eigenvalues and eigenstates; degeneracy is absent. So when a parameter of the Hamiltonian is varied slowly, energy curves cannot cross, but they can come quite close and then diverge from each other (avoided crossing, AC). Crossings and avoided crossings of levels are commonly observed in the spectra of two- or three-dimensional systems. Mostly, in one-dimensional systems, if a parameter of the potential is varied slowly, the eigenvalues increase or decrease monotonically. For a particle in an infinitely deep well of width a, the eigenvalues E_n = n^2 π^2 ħ^2/(2ma^2) decrease as functions of a. For the harmonic oscillator potential, E_n = (n + 1/2) ħω increase linearly as functions of the frequency parameter ω. Two levels coming very close may either merge or display an avoided crossing. The former is well known to occur in symmetric double-well potentials, wherein the sub-barrier doublets of energy levels merge [1,2] into the levels of the independent wells when the inter-well distance is increased slowly. On the other hand, AC is observed rather rarely in one-dimensional systems. Recently, it has been shown [2,3] that in double-well potentials, if the width or depth of the potential is varied slowly, very interesting level crossings can be observed; notably, the double-well becomes asymmetric. For one-dimensional non-Hermitian Hamiltonians, it is known that two complex eigenvalues may become real at one special value of the parameter (λ = λ_*) of the potential; after this point these two eigenvalues may again be complex. Such special values of the parameter are called exceptional points (EPs) [4]. More interestingly, when a potential is complex PT-symmetric (invariant under the joint action of parity: x → -x, and time-reversal: i → -i), two discrete eigenvalues make a transition from real to complex-conjugate or vice versa. For instance, for the complex PT-symmetric potential V(x) = -V_1 sech^2 x + i|V_2| sech x tanh x, the value |V_2| = V_1 + 1/4 = V_c (2m = 1 = ħ^2) is the EP of this potential: when |V_2| ≤ V_c the eigenvalues are real; otherwise they form complex-conjugate pairs [5].
In Fig. 2(a,b,c), we show the parametric evolution of the eigenvalues E_n(g) for various PT-symmetric potentials [6,7] and for their Hermitian counterparts. For the double Dirac delta potential (DDDP, Fig. 1(a)), the formula for finding the bound-state eigenvalues [2] can be written as

4p^2 - 2pu = (u^2 + g^2)(e^{-2pa} - 1),   p = √(-2μE)/ħ.   (1)

The solution of the Schrödinger equation for the double Dirac-delta well between two rigid walls (Fig. 1(b)) is given by

ψ(-a<x<-b) = A sinh p(x+a),  ψ(-b ≤ x ≤ b) = B e^{px} + C e^{-px},  ψ(b<x ≤ a) = D sinh p(x-a).   (2)

Here V_1 = u + ig, V_2 = u - ig and d = a - b. Matching ψ at x = ∓b and accounting for the jump of ψ' across the delta functions there gives

B e^{-pb} + C e^{pb} = A sinh pd,   (V_1+p)B e^{-pb} + (V_1-p)C e^{pb} = A p cosh pd,
B e^{pb} + C e^{-pb} = -D sinh pd,   (V_2-p)B e^{pb} + (V_2+p)C e^{-pb} = -D p cosh pd.

Eliminating A, B, C, D yields the eigenvalue condition

e^{2pb}[V_1 - p(1 + coth pd)][V_2 - p(1 + coth pd)] = e^{-2pb}[V_1 + p(1 - coth pd)][V_2 + p(1 - coth pd)].   (3)

The solution of the Schrödinger equation for the square double-well potential (Fig. 1(c)) is given by

ψ(x<-a) = A e^{px} + B e^{-px},  ψ(-a ≤ x ≤ -b) = C e^{iqx} + D e^{-iqx},  ψ(-b<x<b) = F e^{px} + G e^{-px},
ψ(b ≤ x ≤ a) = H e^{irx} + K e^{-irx},  ψ(x>a) = L e^{px} + M e^{-px},   (4)

where q = √(2μ(E+u+ig))/ħ and r = √(2μ(E+u-ig))/ħ. Matching ψ and ψ' at x = ∓a and x = ∓b gives

A e^{-pa} + B e^{pa} = C e^{-iqa} + D e^{iqa},   A p e^{-pa} - B p e^{pa} = iCq e^{-iqa} - iDq e^{iqa},
C e^{-iqb} + D e^{iqb} = F e^{-pb} + G e^{pb},   iCq e^{-iqb} - iDq e^{iqb} = F p e^{-pb} - G p e^{pb},
F e^{pb} + G e^{-pb} = H e^{irb} + K e^{-irb},   F p e^{pb} - G p e^{-pb} = iHr e^{irb} - iKr e^{-irb},
H e^{ira} + K e^{-ira} = L e^{pa} + M e^{-pa},   iHr e^{ira} - iKr e^{-ira} = L p e^{pa} - M p e^{-pa}.   (5)

In matrix form,

M_1 (A, B)^T = M_2 (C, D)^T,  M_3 (C, D)^T = M_4 (F, G)^T,  M_5 (F, G)^T = M_6 (H, K)^T,  M_7 (H, K)^T = M_8 (L, M)^T,   (6)

so that

(A, B)^T = M_1^{-1}M_2 M_3^{-1}M_4 M_5^{-1}M_6 M_7^{-1}M_8 (L, M)^T = (m_ij(E)) (L, M)^T,  i, j = 1, 2.   (7)

For bound states we demand B = 0 = L. From Eq. (7) we have B = L m_21(E) + M m_22(E); setting L = 0 with M ≠ 0, the condition m_22(E) = 0 gives the bound-state eigenvalues of the double well in Fig. 1(c).

Using Eqs. (1,3,6), we obtain the eigenvalues of the double-well potentials (Fig. 1(a,b,c)) for both g = 0 and g ≠ 0. In Fig. 2, solid curves show the evolution of the eigenvalues E_n(g) for the three PT-symmetric potentials; dashed curves show the same for the Hermitian counterparts of these potentials, obtained by replacing g with ±ig. Notice the coalescing of eigenvalues at special values of g; these special values are the EPs. Note also that at or around the EPs, the evolution of the eigenvalues along the dashed lines shows no special feature. So the spectra of the Hermitian potentials and of their complex PT-symmetric counterparts do not relate to each other well, except for very small values of g, where the eigenvalues of both coincide approximately.

In Figs. 3-5, we present the variation of the eigenvalues when the distance between the wells in Fig. 1(a,b,c) is increased slowly. The blue lines display the coalescing of two levels when the total potential is mildly complex PT-symmetric (g small). The red dashed lines, for the Hermitian double well (g = 0), represent the merging of two levels. The green dotted and blue dot-dashed lines arise when the non-Hermiticity parameter becomes large; the coalesced levels are then not contained between the merging levels (red dashed lines).
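The condition m_22(E) = 0 lends itself to a direct numerical treatment. The following sketch (ours, not from the paper) builds the transfer matrix of Eq. (7) for the square double well of Fig. 1(c) and brackets the Hermitian (g = 0) bound states by bisection, for which m_22(E) is real up to rounding. It assumes units ħ = 2μ = 1, so p = √(-E); the well parameters u, a, b are illustrative choices, not values used in the paper.

```python
# Numerical sketch of the transfer-matrix condition m_22(E) = 0 for the
# square double well of Fig. 1(c), in assumed units hbar = 2*mu = 1, so
# p = sqrt(-E), q = sqrt(E + u + i*g), r = sqrt(E + u - i*g).
import numpy as np

def m22(E, u, g, a, b):
    E = complex(E)
    p = np.sqrt(-E)
    q = np.sqrt(E + u + 1j * g)
    r = np.sqrt(E + u - 1j * g)

    def wave(k, x):
        # Rows (psi, psi') of the basis functions e^{kx}, e^{-kx} at x.
        return np.array([[np.exp(k * x), np.exp(-k * x)],
                         [k * np.exp(k * x), -k * np.exp(-k * x)]])

    # (A, B)^T = M1^{-1} M2 M3^{-1} M4 M5^{-1} M6 M7^{-1} M8 (L, M)^T, Eq. (7).
    M = (np.linalg.inv(wave(p, -a)) @ wave(1j * q, -a)
         @ np.linalg.inv(wave(1j * q, -b)) @ wave(p, -b)
         @ np.linalg.inv(wave(p, b)) @ wave(1j * r, b)
         @ np.linalg.inv(wave(1j * r, a)) @ wave(p, a))
    return M[1, 1]

# Hermitian case (g = 0): scan for sign changes of the (real) m_22 on a
# grid of E in (-u, 0), then refine by bisection.  Illustrative parameters.
u, g, a, b = 10.0, 0.0, 2.0, 0.5
Es = np.linspace(-u + 1e-6, -1e-6, 2000)
vals = [m22(E, u, g, a, b).real for E in Es]
for E1, E2, v1, v2 in zip(Es, Es[1:], vals, vals[1:]):
    if v1 * v2 < 0:
        lo, hi = E1, E2
        for _ in range(60):
            mid = 0.5 * (lo + hi)
            if m22(mid, u, g, a, b).real * m22(lo, u, g, a, b).real < 0:
                hi = mid
            else:
                lo = mid
        print(f"bound state near E = {0.5 * (lo + hi):.6f}")
```

For g ≠ 0 the root E is complex and a simple bisection no longer applies; one would instead minimize |m_22(E)| or use a complex Newton iteration over E.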
Lastly, we conclude that the models discussed here bring the spectral phenomena of coalescing and merging of energy levels closer together; we know, however, that they occur in two different domains: Hermitian and complex PT-symmetric. Further investigations in this regard are welcome.

§ REFERENCES

[1] E. Merzbacher, Quantum Mechanics (Wiley, New York, 1970), pp. 128-139.
[2] Z. Ahmed, S. Kumar, M. Sharma and V. Sharma, Eur. J. Phys. 37 (2016) 045406.
[3] Z. Ahmed, S. Pavaskar, D. Sharma and L. Prakash, arXiv:1508.00661 [quant-ph].
[4] T. Kato, Perturbation Theory for Linear Operators (Springer, New York, 1980).
[5] Z. Ahmed, Phys. Lett. A 282, 343 (2001); 295, 287 (2001).
[6] Z. Ahmed, Phys. Lett. A 364, 12 (2007).
[7] Z. Ahmed, Pramana J. Phys. 73, 323 (2009); F. M. Fernandez, arXiv:1512.09326 [quant-ph].
http://arxiv.org/abs/1702.08335v2
{ "authors": [ "Zafar Ahmed", "Sachin Kumar", "Achint Kumar", "Mohammad Irfan" ], "categories": [ "quant-ph" ], "primary_category": "quant-ph", "published": "20170227154348", "title": "Coalescing versus merging of energy levels in one-dimensional potentials" }