Evaluation of Charged Particle Evaporation Expressions in Ultracold Plasmas

Craig Witte and Jacob L. Roberts
Colorado State University, Fort Collins, CO 80523

Updated: March 2017

===========================================================================

Electron evaporation plays an important role in the electron temperature evolution, and thus the expansion rate, of low-density ultracold plasmas. In addition, evaporation is useful as a potential tool for obtaining colder electron temperatures and characterizing plasma parameters. Evaporation theory has been developed for atomic gases and has been applied to a one-component plasma system. We numerically investigate whether such an adapted theory is applicable to ultracold neutral plasmas. We find that it is not, owing to the violation of fundamental assumptions of the model. The details of our calculations are presented, along with a discussion of the implications for a simple description of the electron evaporation rate in ultracold plasmas.

§ INTRODUCTION

Ultracold plasmas (UCPs) offer the opportunity to study plasma physics within a unique range of plasma parameters.<cit.> Their low temperatures and controllable initial conditions make these plasmas good candidates for exploring fundamental plasma physics as well as strong coupling physics. Electron temperatures play a critical role in establishing electron equilibration times, screening, and strong coupling effects<cit.> in the electron component of the UCP. There are many influences on the electron temperature in UCPs, including three-body recombination<cit.>, continuum lowering heating<cit.>, disorder-induced heating<cit.>, cooling via UCP expansion<cit.>, and evaporative cooling<cit.>. Electron evaporation occurs when electrons escape from the UCP's confining potential and leave the plasma. Since only high-energy electrons are able to escape from the UCP, evaporation leads to a net energy loss from the trapped electron cloud that results in a lower electron temperature.
Electron evaporation is especially important at low densities, where a large fraction of electrons are able to escape<cit.>. Electron evaporation, when properly understood, should offer a variety of insights into some of the fundamental plasma physics associated with UCPs. Evaporation can have a sizable impact on the expansion dynamics of UCPs, as electron evaporation results in a cooling of the electron component, reducing the rate of expansion<cit.>. Furthermore, electron evaporation leads to higher levels of charge imbalance, resulting in additional Coulomb forces that distort the plasma expansion<cit.>. Additionally, strong coupling physics can be accessed in UCPs due to their low temperatures, and evaporation-induced cooling of the electron component should allow access to a greater degree of strong coupling. Finally, electron evaporation is a good candidate for probing the temperature of the electron cloud. Techniques for measuring UCP ion temperatures are well established<cit.>, but such techniques are not applicable to UCP electrons, and typically electron temperatures are estimated based on theoretical interpretations of ion expansion<cit.> or derived from other plasma properties<cit.>. In the absence of other effects, electron temperatures would be determined solely by the photon energy of the ionizing laser pulse. However, such a simple estimate ignores various heating mechanisms such as continuum lowering, disorder-induced heating, and three-body recombination<cit.>. Theoretical predictions quantifying these heating mechanisms do exist and can be incorporated into temperature estimates, but tests of the predicted heating rates are not available across all UCP parameter ranges. Furthermore, it is not clear how well these heating predictions extend into the strongly coupled regime. The development of an independent electron temperature measurement would offer the ability to test the net temperature change due to the previously listed effects.
UCP electron evaporation is a good candidate for such a measurement over a wide range of UCP conditions, since a strong electron temperature dependence is expected in the electron evaporation rate, making a measurement of the rate potentially sensitive to the electron temperature. For the electron evaporation rate to be used to determine the electron temperature, the evaporation rate and the electron temperature need to be linked. It would be ideal if analytical expressions describing the functional form of the electron evaporation rate as a function of electron temperature were known. In the context of ultracold atomic gases, such expressions have long been established<cit.>. More recently, the ALPHA collaboration has adapted these expressions to be applicable to the antiproton plasmas present in their experimental system<cit.>. Naively, we would expect that such expressions would also be applicable to UCP systems. However, these expressions make two assumptions that are not clearly applicable to UCPs. First, it is implicitly assumed that the average electron mean free path is much greater than the spatial size of the UCP. Second, this treatment ignores the possibility that larger-angle Coulomb collisions play a significant role in evaporation. Evaluating the validity of these assumptions is one of the main topics of this work. We find that neither assumption is valid under a typical set of UCP conditions, and we characterize the degree to which these assumptions are broken. Our results indicate that a simple analytic description of the UCP electron evaporation rate would be highly challenging to formulate.
The other main part of this work is the model that we have developed, which allows for making quantitative predictions of electron evaporation rates and can be directly applied to experimentally relevant parameters with straightforward modification. In Sec. II, we discuss the theoretical treatment of evaporation in ultracold atomic systems, how this treatment has previously been adapted to plasma systems, and potential issues that could arise from applying this adapted treatment to ultracold plasmas. Sec. III gives an overview of how our theoretical evaporation model functions. In Sec. IV, we report our numerical results and use these results to test whether the previous plasma evaporation treatment can be applied to ultracold plasma systems. Finally, in Sec. V, we present our conclusions and possibilities for future work.

§ THEORY

Evaporation is a commonly used experimental technique in ultracold atomic physics, where collections of atoms are routinely trapped inside potential wells produced by external optical or magnetic fields<cit.>. In these systems, evaporation occurs when an atom gains sufficient energy to escape from the trapping potential. The threshold energy necessary for escape is equivalent to the overall depth of the trapping potential and will henceforth be referred to as the potential barrier, U. While confined, atoms in the cloud collide with each other. These collisions transfer energy between the atoms and can leave an atom with energy greater than the barrier. If an atom is assumed to escape once its energy exceeds the barrier, U, the evaporation rate would in that case be the rate at which collisions excite atoms above the barrier. Integrating over all possible elastic collisions in a thermal gas is difficult, making the determination of the evaporation rate a non-trivial endeavor. Traditionally, in neutral atom systems, this difficulty has been mitigated by utilizing the principle of detailed balance.
Since collisions between neutral atoms are predominantly large-angle collisions, it is reasonable to assume that the vast majority of collisions involving an atom with energy exceeding U will cause that atom to lose energy and fall back below the barrier. Detailed balance indicates that the rate at which particles fall below the barrier is equivalent to the rate at which atoms are excited above the barrier in thermal equilibrium. This leads to the following approximate general expression for the rate of change in particle number from evaporation<cit.>:

Ṅ = -nσ N_he v_U = -N_he ν_col    (1)

where n is the particle density, σ is the collisional cross-section, N_he is the number of high-energy atoms with energy exceeding the barrier, v_U is the atom velocity that corresponds to the barrier energy, and ν_col is the average collision frequency for high-energy atoms in the distribution. If the atoms are assumed to follow a Maxwellian distribution with three degrees of freedom, and the ratio of barrier energy to average thermal energy is sufficiently high, the fraction of atoms above the barrier is approximately 2√(W/π)e^(-W). In this limit Eq. (1) takes the following form<cit.>:

Ṅ = -nσ W e^(-W) v̄ N    (2)

where v̄ is the average velocity in the particle distribution and W is the scaled barrier height, defined as U/k_bT. In plasmas, the presence of charged particles substantially alters the dynamics of evaporation. In general, Coulomb collisions have a much larger interaction range than the hard-sphere collisions of neutrals. This leads to a much larger collisional cross-section, which in turn results in a substantially higher collision frequency. Furthermore, Coulomb collision cross-sections are velocity dependent, necessitating additional care when calculating Ṅ. Finally, the Coulomb interaction leads predominantly to small-angle deflections from collisions<cit.>.
This last point has a profound impact with respect to the previous detailed balance assumptions, since it is no longer reasonable to assume that a single collision will knock a high-energy particle below the barrier. In a recent adaptation of atomic evaporation theory to charged particles, an alternative assumption was made: only a small amount of energy is transferred between colliding particles. In the context of electron evaporation, this allows the evaporation rate to be expressed in the following way<cit.>:

Ṅ = (dN/dt)|_(v=v_U) = (dN/dv · dv/dt)|_(v=v_U)    (3)

where dN/dv is simply the electrons' velocity distribution and dv/dt is a velocity damping rate. The negative sign enters the above equation from detailed balance. For a Maxwell-Boltzmann distribution with three degrees of freedom, dN/dv takes the following form<cit.>:

dN/dv = 4πN v² (m/2πk_bT)^(3/2) e^(-mv²/2k_bT)    (4)

In the limit of v much larger than the average thermal velocity, dv/dt can be calculated via conservation of momentum. The result of this calculation is<cit.>:

dv/dt = -2e⁴ n ln(Λ) / (4πϵ₀² μ² v²)    (5)

where e is the fundamental electron charge, μ is the reduced mass of the two colliding particles, and ln(Λ) is the Coulomb logarithm that results from averaging over all possible Coulomb collision angles in the typical treatment of Coulomb collisions in a plasma. Combining Eqs. (3)-(5) yields the following expression for evaporation<cit.>:

Ṅ = -N e^(-W) √2 e⁴ n ln(Λ) / (π^(3/2) ϵ₀² √μ (k_bT)^(3/2))    (6)

As mentioned above, Eq. (6) has two underlying assumptions which are not obviously applicable to UCPs. First, it is assumed that once an electron's energy exceeds U, it immediately escapes from the plasma. However, it is easy to imagine that once an electron is excited above the barrier, subsequent collisions could knock that electron back below the barrier before it is able to escape. Implicitly, immediate escape assumes that the average mean free path for an electron is much larger than the spatial size of the UCP.
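To make the scaling of Eq. (6) concrete, it can be evaluated numerically. The sketch below (Python) is our own illustration, not code from the paper; the density matches the value used later in the text, while the temperature, the W value, and the Coulomb-logarithm estimate (the log of the Debye length over a 90-degree impact parameter) are illustrative assumptions:

```python
import math

EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m
E = 1.602176634e-19       # elementary charge, C
KB = 1.380649e-23         # Boltzmann constant, J/K
ME = 9.1093837015e-31     # electron mass, kg

def coulomb_log(n, T):
    """Estimate ln(Lambda) as ln(lambda_D / b_90); an assumed, standard choice."""
    lam_d = math.sqrt(EPS0 * KB * T / (n * E**2))
    b_90 = E**2 / (4.0 * math.pi * EPS0 * KB * T)
    return math.log(lam_d / b_90)

def evap_rate_per_electron(n, T, W, mu=ME / 2.0):
    """|Ndot|/N from Eq. (6); mu = m_e/2 is the e-e reduced mass, W = U/(k_B T)."""
    return (math.exp(-W) * math.sqrt(2.0) * E**4 * n * coulomb_log(n, T)
            / (math.pi**1.5 * EPS0**2 * math.sqrt(mu) * (KB * T)**1.5))

# Example: low-density UCP conditions (n from the text; T and W assumed)
rate = evap_rate_per_electron(1.35e13, 5.0, 6.0)
```

At fixed T, Eq. (6) predicts a pure e^(-W) dependence on the scaled barrier height; this is exactly the property that the simulations described later put to the test.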
A naive estimate of the mean free path, c, would simply be v²/(dv/dt), the velocity multiplied by the effective velocity damping time constant. Such an estimate yields a mean free path for typical experimental conditions roughly an order of magnitude larger than typical UCP sizes, apparently satisfying the prior assumption. However, such an estimate implies that an average electron slowing by a factor of 1/e is the relevant amount of velocity decrease for determining an effective mean free path for electrons with kinetic energy just greater than U. Considering that the overwhelming majority of electrons will be substantially closer to the barrier than a factor of e, it seems likely that the average electron will have to travel a significantly shorter distance than the estimated mean free path before it falls below the barrier due to collisions. Thus, this naive estimate is likely not relevant for evaporation considerations. Without a more sophisticated calculation, it is not immediately obvious what the relevant mean free path is, and thus it is unclear whether this underlying assumption is indeed met. We provide such a calculation below. Second, it is not clear whether the assumed functional form in Eq. (3) accounts for all collisions appropriately. That form only includes evaporation contributions from electrons with kinetic energy right at the barrier, and ignores the contributions from large-angle collisions involving higher-energy electrons not right at the barrier. It is unclear whether the contributions from these electrons are negligible for a typical set of experimental conditions. Furthermore, the assumed functional form relies on an average velocity slowing rate.
Considering that the evaporation rate is the sum of discrete electron escapes, it is unclear whether utilizing an average slowing rate is appropriate.

§ MODEL OVERVIEW

In principle, a full molecular dynamics model of UCP electrons could be constructed to determine the electron evaporation rate in UCPs, but the O(N²) force calculations naively required make such a model computationally expensive. However, by assuming that the UCP electrons are in thermal equilibrium, approximations can be made that lead to drastically faster computational run times. Electrons in thermal equilibrium are, by definition, distributed in a Maxwell-Boltzmann distribution. In the limit of U >> k_bT, only electrons in the upper tail of the Maxwellian distribution are able to escape from the confining potential. To take advantage of this fact, our simulation only tracks positions and velocities for electrons above a certain tracking threshold energy. Threshold energies were chosen to be low enough not to interfere with evaporation mechanics, but high enough to minimize computation time. A diagram illustrating the relationship between the tracking threshold and the overall electron distribution is given in Fig. 1. The thermal equilibrium assumption also allows direct force calculations to be approximated by a series of random Coulomb collisions. Tracked electrons are assumed to be in contact with a reservoir of Maxwellian electrons, representing the complete electron distribution in the UCP. Tracked electrons collide with the reservoir electrons via a Monte Carlo collision operator, leading to changes in the momentum and energy of the tracked electron. In a similar fashion, tracked electrons also collide with a reservoir of infinitely massive UCP ions, where treating the ions as having infinite mass is a reasonable approximation in UCPs.
By utilizing such a method, an O(N²) process is reduced to an O(N) process, greatly reducing computation time. Occasionally, collisions cause tracked electrons to lose sufficient energy to fall below the tracking threshold. When this occurs, these electrons are discarded from the simulation. To maintain detailed balance, a certain number of additional highly energetic electrons are generated at chosen time intervals. The process by which this was implemented is discussed later in this section. The tracked electrons in the model are uniformly distributed across the volume of a sphere, with the surface of the sphere acting as a “hard wall” potential barrier. When an electron comes in contact with the barrier, it either escapes if it has sufficient energy or is reflected back toward the center if it does not. The rate at which electrons escape from the system, the evaporation rate, can thus be calculated. In binary collision theory, the probability of a collision occurring in a time period dt is nσ<v>dt, where n is the particle density, σ is the collisional cross-section, and <v> is the average relative speed between a tracked electron and particles in either the electron or ion distribution. For electron-electron collisions, <v> can be expressed as a function of the tracked electron velocity, v:

<v> = (k_bT/(mv) + v) erf(v√(m/2k_bT)) + √(2k_bT/(πm)) e^(-mv²/2k_bT)    (7)

For electron-ion collisions, <v> is set simply equal to v. For elastic collisions, when a collision occurs, the center-of-mass velocity of the two colliding particles is rotated by the angle χ, changing the momentum of each particle<cit.>.
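The <v> expression can be checked directly against a brute-force average. This sketch (Python; our own illustration, with the closed form written out from the expression above) compares it with a Monte Carlo average of |v⃗₁ - v⃗₂| over bath velocities drawn from a Maxwellian:

```python
import math, random

KB = 1.380649e-23      # Boltzmann constant, J/K
ME = 9.1093837015e-31  # electron mass, kg

def mean_rel_speed(v, T, m=ME):
    """Closed-form <|v1 - v2|> for a test particle of speed v against a
    Maxwellian bath at temperature T (the <v> expression in the text)."""
    a = math.sqrt(m / (2.0 * KB * T))
    return ((KB * T / (m * v) + v) * math.erf(a * v)
            + math.sqrt(2.0 * KB * T / (math.pi * m)) * math.exp(-(a * v) ** 2))

def mean_rel_speed_mc(v, T, m=ME, nsamp=200_000, seed=1):
    """Monte Carlo check: draw bath velocities component-wise from a
    Maxwellian and average |v1 - v2| with the test velocity along z."""
    rng = random.Random(seed)
    s = math.sqrt(KB * T / m)   # per-component thermal speed
    total = 0.0
    for _ in range(nsamp):
        vx, vy, vz = (rng.gauss(0.0, s) for _ in range(3))
        total += math.sqrt(vx * vx + vy * vy + (vz - v) ** 2)
    return total / nsamp
```

A useful sanity check is the v → 0 limit, where the closed form reduces to the Maxwellian mean speed √(8k_bT/πm), and the v → ∞ limit, where it approaches v itself.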
The angle χ is defined by the following relationship:

χ = 2 arctan(q₁q₂ / (4πϵ₀ μ |v⃗₁ - v⃗₂|² b))    (8)

where μ is the reduced mass, b is the impact parameter, q₁ and q₂ are the charges of the two particles, and v⃗₁ and v⃗₂ are the two particle velocities. Following the standard treatment of Coulomb collisions in a plasma, impact parameters are assumed not to exceed a maximum cutoff. For the purposes of this work, the standard λ_D cutoff is assumed, where λ_D is the Debye length. Once a collision has been determined to have occurred, b is randomly generated. Because ions in the model are assumed to be stationary, the deflection angle, χ, for electron-ion collisions is solely a function of this generated impact parameter. However, for electron-electron collisions χ is also a function of the relative velocity of the two colliding electrons, |v⃗₁ - v⃗₂|. The probability distribution of the relative velocity as a function of v⃗₁ and v⃗₂ is as follows:

|v⃗₂ - v⃗₁| e^(-m v⃗₁·v⃗₁/2kT) e^(-m v⃗₂·v⃗₂/2kT) d³v⃗₁ d³v⃗₂    (9)

Since, in the context of the model, v⃗₁ is known, Eq. (9) can be reduced to a probability distribution over v⃗₂ alone:

|v⃗₂ - v⃗₁| e^(-m v⃗₂·v⃗₂/2kT) d³v⃗₂    (10)

By randomly generating a value for v⃗₂, in addition to an impact parameter, the deflection angle, χ, is fully determined. As mentioned previously, modeling these collisions will result in electrons' energies falling below the tracking energy threshold. Relatively quickly, this would mean that no electrons above the tracking energy would remain. In steady state, however, the fraction of tracked electrons should be approximately constant. Thus, there needs to be some mechanism for “creating” electrons above the tracking threshold on a regular basis to maintain a Maxwellian distribution in the absence of electron evaporation. Care needed to be taken to generate a proper Maxwellian distribution.
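A minimal sketch of the collision-angle sampling just described (Python; our own illustration of the deflection-angle relation with the λ_D cutoff, assuming an electron-electron collision so that q₁q₂ = e²):

```python
import math, random

EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m
E = 1.602176634e-19       # elementary charge, C
KB = 1.380649e-23         # Boltzmann constant, J/K
ME = 9.1093837015e-31     # electron mass, kg

def debye_length(n, T):
    return math.sqrt(EPS0 * KB * T / (n * E**2))

def deflection_angle(v_rel, b, mu=ME / 2.0):
    """Eq. (8) for an e-e collision: chi = 2 arctan(e^2 / (4 pi eps0 mu v_rel^2 b))."""
    return 2.0 * math.atan(E**2 / (4.0 * math.pi * EPS0 * mu * v_rel**2 * b))

def sample_collision(v_rel, n, T, rng):
    """Draw b uniformly over the disk of radius lambda_D (p(b) ~ b db)
    and return the pair (b, chi)."""
    b = debye_length(n, T) * math.sqrt(rng.random())
    return b, deflection_angle(v_rel, b)
```

Because p(b) ∝ b db, most draws land near the λ_D cutoff, where χ is small — reproducing the predominance of small-angle deflections noted in Sec. II.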
We elected to find a distribution of discrete velocities, henceforth known as a production function, that would lead to a Maxwellian steady-state distribution in model simulations. The process for developing a production function was as follows. First, a number of simulations were run in which electrons were not allowed to escape. Electrons were added into the system at regular intervals with random positions and random velocity directions, but a fixed velocity magnitude, v_i. As the simulation ran, some electrons were removed as they fell below the threshold velocity. Eventually, the rates of electrons being added and removed balanced out, and the system reached a steady-state number of electrons, resulting in the ith simulation having the electron velocity distribution f_i(v) for tracked electrons. The resulting velocity distributions from all of the simulations were then fit to a Maxwellian distribution through the following sum:

f(v) = A Σ_i a_i f_i(v)    (11)

where f(v) is the appropriate Maxwellian distribution, the a_i's are normalized fit coefficients, and A is a proportionality constant. Once the values of the a_i's were found, the production function, F(v), could be described:

F(v) = Σ_i a_i δ(v - v_i) dv    (12)

where the best-fit a_i values represent the probability that an electron with the ith velocity magnitude will be generated. Once these values were determined, the production function could be utilized in simulations in which evaporation was included.

§ RESULTS

A number of model simulations were run to determine the impact of different plasma parameters on the electron evaporation rate, Ṅ. Specifically, evaporation rates were calculated as a function of plasma spatial size, plasma depth, and electron temperature. For all simulations, ion and electron densities were both held constant at 1.35×10¹³ m⁻³, corresponding to a set of low-density UCP experimental parameters. Timesteps were 10 ns long, and each simulation lasted 300 timesteps.
Evaporation rates were extracted by averaging the number of escapes occurring in the last 200 timesteps, during which the plasma was in a steady state. Simulation results were used to test the veracity of the scaling rules presented above. For instance, if T is held constant, Eq. (6) can be expressed in the functional form:

Ṅ = -A W^α e^(-W)    (13)

where A and α are constants. The additional dependence on W in Eq. (13) is typically present in expressions for evaporation in ultracold atomic gases. To see if this form was reasonable, α and A can be treated as fit parameters. From Eq. (6), α would be expected to be 0. To test whether Eq. (13) expresses the proper functional form, evaporation rates were calculated for a series of different W values at a constant T, and the resultant curve was fit to the functional form in Eq. (13). This process was then repeated for a series of different electron temperatures. The resultant α parameters from these fits can be seen in Table I. These results show that α is consistently greater than 0, indicating that evaporation is less strongly dependent on W than Eq. (6) would predict. Additionally, since α increases with decreasing temperature, evaporation becomes more weakly dependent on W as T decreases. These results show that the functional form of the electron evaporation rate in Eq. (13) is itself incorrect, and thus evaporation cannot be modeled in such a manner. This point is further illustrated in Fig. 2. Simulation results were also used to test the temperature dependence of evaporation at a constant depth-to-temperature ratio (i.e., constant W). At a constant W, Eq. (6) reduces to the following functional form:

Ṅ = -k₁ T^(-k₂) ln(k₃ T^(3/2))    (14)

where k₁, k₂, and k₃ are constants. Since we are only testing scaling laws at this point, these constants can be treated as fitting parameters. By using a method analogous to the constant-T case, the T^(-3/2) scaling implied by Eqs. (6) and (14) can again be tested.
This leads to the results shown in Table II. These results are inconsistent with a T^(-3/2) scaling, given the fitted values of k₂. Additionally, Eq. (14) implies that k₂ and k₃ should stay constant as a function of W, which is also contradicted by our calculated results. These results show that the functional form of the electron evaporation rate in Eq. (14) is incorrect for the given experimental conditions. Examples of these fits can be seen in Fig. 3. The failure of Eq. (6) to properly predict the parameter scaling of the evaporation rate suggests that at least one of the two underlying assumptions of the theory is incorrect for typical UCP experimental conditions. The first of these assumptions states that when a collision excites an electron above the barrier, the electron is immediately considered to have escaped. However, if the plasma size is larger than the effective mean free path, it becomes likely that a secondary collision de-excites that same electron back below the barrier, preventing the electron from escaping. The only exception is if the electron is at the edge of the UCP. Since it is unclear what the effective mean free path is in the context of evaporation, it is questionable whether or not this assumption is valid for typical UCP conditions. To characterize the impact that the mean free path has on evaporation, we found it useful to consider two limiting cases. In the case where the mean free path is much larger than the size of the plasma, the absolute evaporation rate should scale linearly with the electron number, without any dependence on the plasma radius, R, assuming a constant electron density. Conversely, if the plasma is much larger than the mean free path, electrons in close proximity to the plasma edge should contribute much more heavily to the evaporation rate than other electrons. Presumably, in the limit of an infinitely large UCP, evaporation should scale with the UCP surface area.
However, if the electron density is held constant, the electron number scales with the UCP volume, which leads to the evaporation rate per electron, Ṅ/N, scaling as 1/R in this surface-dominated limit. To determine where plasmas with the parameters used in the simulation fall between these two limits, we investigated how Ṅ/N varied with the plasma spatial size, R. Simulations were run varying the UCP size and electron number in a manner consistent with a constant density, and a per-electron evaporation rate vs. UCP size curve was generated. The results of these calculations, for one set of conditions, can be seen in Fig. 4. The figure shows that the per-particle evaporation rate scaled roughly as 1/R at larger values of R, which is consistent with the typical mean free path being much smaller than the plasma spatial size. At smaller plasma sizes, this scaling became shallower. Constant evaporation scaling with R, the scaling implicitly assumed in Eq. (6), was not observed over the investigated range of parameters. These results show that the underlying assumption in Eq. (6) about the mean free path is incorrect. Furthermore, for typical simulation plasma parameters, the evaporation rate is consistent with mean free path effects being dominant. It is therefore desirable to quantify the magnitude of these effects. To do this, we introduce the concept of an effective electron evaporation source density, n_eff. In general, electrons that are closer to the edge are more likely to escape the plasma. The purpose of the effective density is to weight these more-likely-to-leave electrons more heavily than their counterparts near the plasma center, and to quantify how this weighting changes as a function of R. To do this we utilized the following simple approximate model:

n_eff = n e^(-(R-r)/c)    (15)

where n is the electron density, R is the plasma size, r is the standard radial coordinate, and c is an effective evaporation skin depth quantifying the degree to which mean free path effects impact evaporation.
Utilizing this approximate description, we assume that Ṅ takes the following form:

Ṅ = -κ N_eff = -4πκ ∫₀^R r² n_eff dr    (16)

where κ is a proportionality constant and N_eff is the effective number derived from n_eff. Combining Eq. (15) and Eq. (16) and evaluating the integral yields the following expression:

Ṅ = -4π n κ c (R² - 2Rc + 2c² - 2c² e^(-R/c))    (17)

By treating κ and c as fit parameters, Eq. (17) was fit to results similar to those in Fig. 4, and an effective evaporation skin depth was extracted. The resultant fit parameters, κ and c, can give insight into the applicability of the assumed analytical evaporation functional form, even in the absence of mean free path considerations. In the limit of R << c, the model predicts a per-electron evaporation rate of κ. If the functional form of Eq. (6) is correct, its predicted evaporation rate should match our calculated κ. This was not the case, however, as we observed κ to be 3-7 times smaller than would be implied by Eq. (6). This was, at least in part, due to the electron velocity distribution. Eq. (6) assumes a Maxwellian, but a collection of electrons in a potential well will not form a Maxwellian distribution in steady state. At low electron energies, these distributions are roughly the same, but the Maxwellian significantly overestimates the number of electrons near the barrier. This leads to an overestimation of dN/dv in Eq. (6), and subsequently an overestimation of the evaporation rate. The mean free path fit parameters were also used to test the scaling of the dv/dt component of the evaporation functional form in the limit of R << c. We compared our calculated values of c to an estimate involving the form of dv/dt in Eq. (5) above. In the limit of small velocity changes, for a given initial velocity, v₀, above the barrier, the characteristic time, τ, it takes to decay to the barrier velocity, v_U, can be defined:

τ(v₀) = (v₀ - v_U) / |(dv/dt)|_(v=v₀)|    (18)

This implies an approximate velocity-dependent mean free path of c(v) = vτ(v).
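The effective-number integral above is straightforward to check numerically. The sketch below (Python; our own consistency check, with arbitrary parameter values) compares a midpoint-rule quadrature of 4π∫₀^R r² n e^(-(R-r)/c) dr against the closed form — note the overall factor of c that the integration produces — and verifies the small-R limit 4πnR³/3, in which Ṅ reduces to -κN:

```python
import math

def neff_integral_numeric(R, c, n=1.0, steps=100_000):
    """Midpoint-rule evaluation of 4*pi * integral_0^R r^2 * n*exp(-(R-r)/c) dr."""
    h = R / steps
    total = 0.0
    for i in range(steps):
        r = (i + 0.5) * h
        total += r * r * n * math.exp(-(R - r) / c)
    return 4.0 * math.pi * total * h

def neff_integral_closed(R, c, n=1.0):
    """Closed form: 4*pi*n*c*(R^2 - 2*R*c + 2*c^2 - 2*c^2*exp(-R/c))."""
    return 4.0 * math.pi * n * c * (R * R - 2.0 * R * c + 2.0 * c * c
                                    - 2.0 * c * c * math.exp(-R / c))
```

In the R << c limit the bracketed terms reduce to R³/(3c), so the closed form approaches 4πnR³/3 = N, consistent with the per-electron rate κ quoted in the text.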
By integrating over the velocities of all of the electrons above the barrier, an average c can be calculated. For typical experimental conditions, such a calculation resulted in a c on the order of a millimeter, an order of magnitude larger than suggested by the results from the prior mean free path fit. The results of this calculation suggest that the average electron velocity slowing rate, dv/dt, is not the relevant rate with regard to evaporation. This is likely due to dv/dt being an average quantity. The evaporation rate is the sum of discrete electron escape events, and it is not immediately obvious that an average rate would properly account for the relevant physics. In addition, the average escape path for model electrons can often be much longer than R. A typical escaping electron will undergo a number of deflecting collisions over the course of its escape, which presumably lengthen the electron's escape path. Unfortunately, the previously developed analytical expressions appear not to be applicable to ultracold plasmas, at least under the conditions studied. Eq. (6) incorrectly predicted electron evaporation rates and did not scale correctly. While analytical expressions that accurately predict the electron evaporation rate could presumably be developed, such expressions do not currently exist. Thus, a numerical model, such as the one developed in this work, is needed to properly calculate the electron evaporation rate in ultracold plasma systems.

§ CONCLUSION

We have developed a model that calculates the rate of evaporation from an ultracold plasma. Model results were compared to previously developed analytical expressions for evaporation. These expressions proved inconsistent with our results, as the model scaled with plasma parameters differently than the simple evaporation expressions predicted.
Furthermore, we demonstrated that this discrepancy can, at least partially, be explained by the finite size of the plasmas examined in this work, and that the absence of size considerations is a limitation of these previously developed expressions. This work demonstrates that such simple scaling rules are not accurate when used to calculate evaporation in UCPs, and that a model like the one developed in this work is needed.

§ ACKNOWLEDGMENTS

We acknowledge support from the Air Force Office of Scientific Research (AFOSR), grant number FA9550-12-1-0222.
http://arxiv.org/abs/1703.08610v1
Thompson Sampling for Linear-Quadratic Control Problems

Marc Abeille, Alessandro Lazaric
Inria Lille - Nord Europe, Team SequeL

We consider the exploration-exploitation tradeoff in linear quadratic (LQ) control problems, where the state dynamics is linear and the cost function is quadratic in states and controls. We analyze the regret of Thompson sampling (TS) (a.k.a. posterior-sampling for reinforcement learning) in the frequentist setting, i.e., when the parameters characterizing the LQ dynamics are fixed. Despite its empirical and theoretical success in a wide range of problems from multi-armed bandits to linear bandits, we show that when studying the frequentist regret in control problems, we need to trade off the frequency of sampling optimistic parameters and the frequency of switches in the control policy. This results in an overall regret of O(T^(2/3)), which is significantly worse than the regret O(√T) achieved by the optimism-in-face-of-uncertainty algorithm in LQ control problems.

§ INTRODUCTION

One of the most challenging problems in reinforcement learning (RL) is how to effectively trade off exploration and exploitation in an unknown environment. A number of learning methods have been proposed for finite Markov decision processes (MDPs), and they have been analyzed in the PAC-MDP (see e.g., <cit.>) and the regret framework (see e.g., <cit.>). The two most popular approaches to addressing the exploration-exploitation trade-off are the optimism-in-face-of-uncertainty (OFU) principle, where optimistic policies are selected according to upper-confidence bounds on the true MDP parameters, and the Thompson sampling (TS) strategy[In the RL literature, TS was introduced by <cit.> and is often referred to as posterior-sampling for reinforcement learning (PSRL).], where random MDP parameters are selected from a posterior distribution and the corresponding optimal policy is executed.
Despite their success in finite MDPs, extensions of these methods and their analyses to continuous state-action spaces are still rather limited. <cit.> study how to randomize the parameters of a linear function approximator to induce exploration and prove regret guarantees in the finite MDP case. <cit.> develops a specific method applied to the more complex case of neural architectures with significant empirical improvements over alternative exploration strategies, although with no theoretical guarantees. In this paper, we focus on a specific family of continuous state-action MDPs, the linear quadratic (LQ) control problems, where the state transition is linear and the cost function is quadratic in the state and the control. Despite their specific structure, LQ models are very flexible and widely used in practice (e.g., to track a reference trajectory). If the parameter θ defining dynamics and cost is known, the optimal control can be computed explicitly as a linear function of the state with an appropriate gain. On the other hand, when θ is unknown, an exploration-exploitation trade-off needs to be solved. <cit.> and <cit.> first proposed an optimistic approach to this problem, showing that the performance of an adaptive control strategy asymptotically converges to the optimal control. Building on this approach and the OFU principle, <cit.> proposed an OFU-based learning algorithm with O(√(T)) cumulative regret. <cit.> further studied how the TS strategy could be adapted to work in the LQ control problem. Under the assumption that the true parameters of the model are drawn from a known prior, they show that the so-called Bayesian regret matches the O(√(T)) bound of OFU. In this paper, we analyze the regret of TS in LQ problems in the more challenging frequentist case, where θ is a fixed parameter, with no prior assumption on its value.
The analysis of OFU relies on three main ingredients: 1) optimistic parameters, 2) lazy updates (the control policy is updated only a logarithmic number of times) and 3) concentration inequalities for the regularized least-squares estimate of the unknown parameter θ. While we build on previous results for the least-squares estimates of the parameters, points 1) and 2) should be adapted for TS. Unfortunately, the Bayesian regret analysis of TS in <cit.> does not apply in this case, since no prior is available on θ. Furthermore, we show that the existing frequentist regret analysis for TS in linear bandit <cit.> cannot be generalized to the LQ case. This requires deriving a novel line of proof in which we first prove that TS has a constant probability to sample an optimistic parameter (i.e., an LQ system whose optimal expected average cost is smaller than the true one) and then we exploit the LQ structure to show how being optimistic allows us to directly link the regret to the controls operated by TS over time and eventually bound them. Nonetheless, this analysis reveals a critical trade-off between the frequency with which new parameters are sampled (and thus the chance of being optimistic) and the regret accumulated every time the control policy changes. In OFU this trade-off is easily solved by construction: the lazy update guarantees that the control policy changes very rarely, and whenever a new policy is computed, it is guaranteed to be optimistic. On the other hand, TS relies on the random sampling process to obtain optimistic models, and if this is not done frequently enough, the regret can grow unbounded. This forces TS to favor short episodes, and we prove that this leads to an overall regret of order O(T^2/3) in the one-dimensional case (i.e., both states and controls are scalars), which is significantly worse than the O(√(T)) regret of OFU.§ PRELIMINARIES The control problem. We consider the discrete-time infinite-horizon linear quadratic (LQ) control problem.
Let x_t∈ℝ^n be the state of the system and u_t∈ℝ^d be the control at time t; an LQ problem is characterized by linear dynamics and a quadratic cost function x_t+1 = A_* x_t + B_* u_t + ϵ_t+1, c(x_t,u_t) = x_t^⊤ Q x_t + u_t^⊤ R u_t, where A_* and B_* are unknown matrices and Q and R are known positive definite matrices of appropriate dimensions. We summarize the unknown parameters in θ_*^⊤ = (A_*, B_*). The noise process ϵ_t+1 is zero-mean and satisfies the following assumption. {ϵ_t}_t is an ℱ_t-martingale difference sequence, where ℱ_t is the filtration representing the information available up to time t. In LQ control, the objective is to design a closed-loop control policy π: ℝ^n →ℝ^d mapping states to controls that minimizes the average expected cost J_π(θ_*) = lim sup_T →∞1/T𝔼[∑_t=0^T c(x_t,u_t)], with x_0=0 and u_t =π(x_t). Standard theory for LQ control guarantees that the optimal policy is linear in the state and that the corresponding average expected cost is the solution of a Riccati equation. Under Asm. <ref> and for any LQ system with parameters θ^⊤ = (A, B) such that (A,B) is stabilizable[(A,B) is stabilizable if there exists a control gain matrix K s.t. A + B K is stable (i.e., all its eigenvalues are in (-1,1)).] and p.d. cost matrices Q and R, the optimal solution of Eq. <ref> is given by π(θ)(x_t) = K(θ) x_t, J(θ) = tr(P(θ)), K(θ) = -(R + B^⊤ P(θ) B)^-1 B^⊤ P(θ) A, P(θ) = Q + A^⊤ P(θ) A + A^⊤ P(θ) B K(θ), where π(θ) is the optimal policy, J(θ) is the corresponding average expected cost, K(θ) is the optimal gain, and P(θ) is the unique solution of the Riccati equation associated with the control problem. Finally, we also have that A + B K(θ) is asymptotically stable. For notational convenience, we use H(θ) = (I K(θ)^⊤)^⊤, so that the closed-loop dynamics A + B K(θ) can be equivalently written as θ^⊤ H(θ). We introduce further assumptions about the LQ systems we consider.
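To make the Riccati quantities concrete, here is a minimal numerical sketch (not from the paper; the matrices below are arbitrary illustrative choices) that uses SciPy's discrete algebraic Riccati solver to recover P(θ), the optimal gain K(θ), and J(θ) = tr(P(θ)):

```python
import numpy as np
from scipy.linalg import solve_discrete_are

def lq_solution(A, B, Q, R):
    """Solve the Riccati equation of an LQ system and return (P, K, J).

    P : solution of the discrete algebraic Riccati equation
    K : optimal gain, K = -(R + B' P B)^{-1} B' P A
    J : optimal average expected cost, J = tr(P) (unit noise covariance)
    """
    P = solve_discrete_are(A, B, Q, R)
    K = -np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    return P, K, np.trace(P)

# Illustrative scalar example (n = d = 1); the values of A and B are arbitrary.
A = np.array([[0.9]])
B = np.array([[1.0]])
Q = np.array([[1.0]])
R = np.array([[1.0]])
P, K, J = lq_solution(A, B, Q, R)
# The closed loop A + B K should be stable, as the theorem above guarantees.
assert abs(float(A + B @ K)) < 1
```

The final assertion checks the asymptotic stability of A + B K(θ) stated in the theorem; the residual of the Riccati fixed point Q + AᵀPA + AᵀPBK = P can be checked the same way.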
We assume that the LQ problem is characterized by parameters (A_*,B_*,Q,R) such that the cost matrices Q and R are symmetric p.d., and θ_* ∈𝒮, where[Even if P(θ) is not defined for every θ, we extend its domain of definition by setting tr(P(θ)) = +∞.] 𝒮 = {θ∈ℝ^(n+d)× n s.t. tr(P(θ)) ≤ D and tr(θθ^⊤) ≤ S^2 }. While Asm. <ref> basically guarantees that the linear model in Eq. <ref> is correct, Asm. <ref> restricts the control parameters to the admissible set 𝒮. This is used later in the learning process and it replaces Asm. A2-4 in <cit.> in a synthetic way, as shown in the following proposition. Given an admissible set 𝒮 as defined in Asm. <ref>, we have 1) 𝒮⊂{θ^⊤ = (A,B) s.t. (A,B) is stabilizable}, 2) 𝒮 is compact, and 3) there exist positive constants ρ < 1 and C < ∞ such that ρ = sup_θ∈𝒮 A + B K(A,B) _2 and C = sup_θ∈𝒮 K(θ) _2.[We use · and ·_2 to denote the Frobenius norm and the 2-norm respectively.] As an immediate result, any system with θ∈𝒮 is stabilizable, and therefore Asm. <ref> implies that Prop. <ref> holds. Finally, we derive a result about the regularity of the Riccati solution, which we later use to relate the regret to the controls performed by TS. Under Asm. <ref> and for any LQ system with parameters θ^⊤ = (A, B) and cost matrices Q and R satisfying Asm. <ref>, let J(θ) = tr(P(θ)) be the optimal solution of Eq. <ref>. Then, the mapping θ∈𝒮→ tr(P(θ)) is continuously differentiable. Furthermore, let A_c(θ) = θ^⊤ H(θ) be the closed-loop matrix; then the directional derivative of J(θ) in a direction δθ, denoted as ∇ J(θ)^⊤δθ, where ∇ J(θ) ∈ℝ^(n+d)× n is the gradient of J, is the solution of the Lyapunov equation ∇ J(θ)^⊤δθ = A_c(θ)^⊤∇ J(θ)^⊤δθ A_c(θ) + C(θ,δθ) + C(θ,δθ)^⊤, where C(θ,δθ) = A_c(θ)^⊤ P(θ) δθ^⊤ H(θ). The learning problem. At each time t, the learner chooses a policy π_t, executes the induced control u_t = π_t(x_t) and suffers the cost c_t = c(x_t,u_t).
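Membership in the admissible set 𝒮 above can likewise be tested numerically: θ is admissible when the Riccati solution exists with tr(P(θ)) ≤ D and tr(θθᵀ) ≤ S². A hedged sketch (the helper name and the constants D, S below are illustrative assumptions, not fixed by the paper):

```python
import numpy as np
from scipy.linalg import solve_discrete_are

def is_admissible(theta, Q, R, D, S):
    """Return True if theta lies in the set S of the assumption above.

    theta is an (n+d) x n matrix: theta^T = (A, B).
    """
    n = theta.shape[1]
    A, B = theta[:n].T, theta[n:].T
    if np.trace(theta @ theta.T) > S**2:
        return False
    try:
        # The solver may fail when (A, B) is not stabilizable, in which
        # case we treat tr(P(theta)) as +infinity, as in the footnote.
        P = solve_discrete_are(A, B, Q, R)
    except Exception:
        return False
    return np.trace(P) <= D

# Illustrative scalar check (D and S chosen arbitrarily).
Q = R = np.eye(1)
theta = np.array([[0.9], [1.0]])          # A = 0.9, B = 1.0
print(is_admissible(theta, Q, R, D=5.0, S=2.0))   # expect True for this theta
```

This is the membership test that the rejection-sampling operator ℛ_𝒮, introduced later for TS, would repeatedly invoke.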
The performance is measured by the cumulative regret up to time T, R_T = ∑_t=0^T (c^π_t_t - J_π_*(θ_*)), where at each step we measure the difference between the cost c^π_t_t of the controller and the expected average cost J_π_*(θ_*) of the optimal controller π_*. Let (u_0,…,u_t) be a sequence of controls and (x_0,x_1,…,x_t+1) the corresponding states; then θ^⋆ can be estimated by regularized least-squares (RLS). Let z_t = (x_t,u_t)^⊤; for any regularization parameter λ∈ℝ_+^*, the design matrix and the RLS estimate are defined as V_t = λ I + ∑_s=0^t-1 z_s z_s^⊤; θ̂_t = V_t^-1∑_s=0^t-1 z_s x_s+1^⊤. For notational convenience, we use W_t = V_t^-1/2. We recall a concentration inequality for RLS estimates. We assume that the ϵ_t are conditionally and component-wise sub-Gaussian with parameter L and that 𝔼(ϵ_t+1ϵ_t+1^⊤ | ℱ_t) = I. Then for any δ∈ (0,1) and any ℱ_t-adapted sequence (z_0,…,z_t), the RLS estimator θ̂_t is such that tr( (θ̂_t - θ_*)^⊤ V_t (θ̂_t - θ_*) ) ≤β_t(δ)^2, w.p. 1-δ (w.r.t. the noise {ϵ_t}_t and any randomization in the choice of the controls), where β_t(δ) = n L√(2 log( det(V_t)^1/2/det(λ I)^1/2)) + λ^1/2 S. Further, when z_t≤ Z, log(det(V_t)/det(λ I)) ≤ (n+d) log( 1 + T Z^2/λ (n+d) ). At any step t, we define the ellipsoid ℰ^RLS_t = {θ∈ℝ^d | θ - θ̂_t_V_t≤β_t(δ^') } centered at θ̂_t with orientation V_t and radius β_t(δ^'), where δ^' = δ / (4T). Finally, we report a standard result for RLS which, together with Prop. <ref>, shows that the prediction error on the points z_t used to construct the estimator θ̂_t is cumulatively small. Let λ≥ 1; for any arbitrary ℱ_t-adapted sequence (z_0, z_1, …, z_t), let V_t+1 be the corresponding design matrix; then ∑_s=0^t min( z_s_V_s^-1^2, 1 ) ≤ 2 log(det(V_t+1)/det(λ I)).
Moreover, when z_t≤ Z for all t ≥ 0, then ∑_s=0^t z_s _V_s^-1^2 ≤ 2 Z^2/λ (n + d) log(1 + (t+1) Z^2/λ (n+d)).§ THOMPSON SAMPLING FOR LQR We introduce a specific instance of TS for learning in LQ problems, obtained as a modification of the algorithm proposed in <cit.>, where we replace the Bayesian structure and the Gaussian prior assumption with a generic randomized process and we modify the update rule. The algorithm is summarized in Alg. <ref>. At any step t, given the RLS-estimate θ̂_t and the design matrix V_t, TS samples a perturbed parameter θ̃_t. In order to ensure that the sampled parameter is indeed admissible, we re-sample it until a valid θ̃_t ∈𝒮 is obtained. Denoting as ℛ_𝒮 the rejection sampling operator associated with the admissible set 𝒮, we define θ̃_t as θ̃_t = ℛ_𝒮 ( θ̂_t + β_t(δ^') W_t η_t), where W_t = V_t^-1/2 and every coordinate of the matrix η_t∈ℝ^(n+d)× n is a random sample drawn i.i.d. from 𝒩(0,1). We refer to this distribution as the sampling distribution of TS. Notice that such a sampling does not need to be associated with an actual posterior over θ^⋆; it just needs to randomize parameters coherently with the estimate and the uncertainty captured in V_t. Let γ_t(δ) = β_t(δ^') n √(2 (n+d) log( 2 n (n+d) / δ)); then the high-probability ellipsoid ℰ^TS_t = {θ∈ℝ^d | θ - θ̂_t_V_t≤γ_t(δ^')} is defined so that any parameter θ̃_t belongs to it with probability 1-δ/8. Given the parameter θ̃_t, the gain matrix K(θ̃_t) is computed and the corresponding optimal control u_t = K(θ̃_t) x_t is applied. As a result, the learner observes the cost c(x_t,u_t) and the next state x_t+1, and V_t and θ̂_t are updated accordingly. Similar to most RL strategies, the updates are not performed at each step, and the same estimated optimal policy K(θ̃_t) is kept constant throughout an episode.
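One iteration of the sampling scheme just described (RLS estimation followed by a perturbed, rejection-sampled parameter) can be sketched as follows. This is a schematic illustration, not the authors' implementation: the acceptance test is abstracted into an is_admissible callback and the number of rejection attempts is capped purely as a safety device.

```python
import numpy as np
from scipy.linalg import sqrtm

def rls_estimate(zs, xs_next, lam=1.0):
    """RLS: V_t = lam*I + sum z z^T and theta_hat = V_t^{-1} sum z x'^T."""
    Z = np.asarray(zs)            # shape (t, n+d), rows z_s = (x_s, u_s)
    X = np.asarray(xs_next)       # shape (t, n), rows x_{s+1}
    V = lam * np.eye(Z.shape[1]) + Z.T @ Z
    return np.linalg.solve(V, Z.T @ X), V

def ts_sample(theta_hat, V, beta, is_admissible, rng, max_tries=1000):
    """theta_tilde = R_S(theta_hat + beta * V^{-1/2} eta), eta_ij ~ N(0,1)."""
    W = np.real(sqrtm(np.linalg.inv(V)))       # W_t = V_t^{-1/2}
    for _ in range(max_tries):                 # rejection sampling into S
        eta = rng.standard_normal(theta_hat.shape)
        theta = theta_hat + beta * W @ eta
        if is_admissible(theta):
            return theta
    raise RuntimeError("rejection sampling failed")

# Tiny demo on synthetic scalar data; a = 0.9, b = 1.0 are arbitrary choices.
rng = np.random.default_rng(0)
a, b, x = 0.9, 1.0, 0.0
zs, xs_next = [], []
for _ in range(2000):
    u = rng.standard_normal()
    zs.append([x, u])
    x = a * x + b * u + 0.1 * rng.standard_normal()
    xs_next.append([x])
theta_hat, V = rls_estimate(zs, xs_next)
theta_tilde = ts_sample(theta_hat, V, beta=0.5,
                        is_admissible=lambda t: True, rng=rng)
```

Here the admissibility test is a stand-in for membership in 𝒮, and beta plays the role of β_t(δ').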
Let V_0 be the design matrix at the beginning of an episode; then the episode is terminated upon two possible conditions: 1) the determinant of the design matrix is doubled (i.e., det(V_t) ≥ 2 det(V_0)) or 2) a maximum length condition is reached. While the first condition is common to all RL strategies, here we need to force the algorithm to interrupt episodes as soon as their length exceeds τ steps. The need for this additional termination condition is intrinsically related to the nature of TS and it is discussed in detail in the next section. § THEORETICAL ANALYSIS We prove the first frequentist regret bound for TS in LQ systems of dimension 2 (n=1, d=1). In order to isolate the steps which explicitly rely on this restriction, whenever possible we derive the proof in the general (n+d)-dimensional case. Consider the LQ system in Eq. <ref> of dimension n=1 and d=1. Under Asm. <ref> and <ref>, for any 0 < δ < 1, the cumulative regret of TS over T steps is bounded w.p. at least 1-δ as [Further details can be recovered from the proof.] R(T) = O( T^2/3√(log (1/δ) )). This result is in striking contrast with previous results in multi-armed and linear bandit, where the frequentist regret of TS is O(√(T)), and with the Bayesian analysis of TS in control problems, where the regret is also O(√(T)). As discussed in the introduction, the frequentist regret analysis in control problems introduces a critical trade-off between the frequency of selecting optimistic models, which guarantees small regret in bandit problems, and the reduction of the number of policy switches, which leads to small regret in control problems. Unfortunately, this trade-off cannot be easily balanced and this leads to a final regret of O(T^2/3). Sect. <ref> provides a more detailed discussion on the challenges of bounding the frequentist regret of TS in LQ problems.§.§ Setting the Stage Concentration events. We introduce the following high probability events. Let δ∈(0,1), δ'=δ/(8T) and t∈[0,T].
We define the event (RLS estimate concentration) Ê_t = {∀ s ≤ t, θ̂_s - θ^⋆_V_s≤β_s(δ') } and the event (the sampled parameter θ̃_s concentrates around θ̂_s) Ẽ_t = {∀ s ≤ t, θ̃_s - θ̂_s_V_s≤γ_s(δ') }. We also introduce a high probability event on which the states x_t are bounded almost surely. Let δ∈(0,1), let X,X^' be two problem-dependent positive constants and t∈[0,T]. We define the event (bounded states) E̅_t = {∀ s ≤ t, x_s≤ X log X^'/δ}. Then we have that Ê := Ê_T⊂…⊂Ê_1, Ẽ := Ẽ_T⊂…⊂Ẽ_1 and E̅ := E̅_T⊂…⊂E̅_1. We show that these events do hold with high probability. ℙ(Ê∩Ẽ) ≥ 1 - δ/4. On Ê∩Ẽ, ℙ(E̅ ) ≥ 1 - δ/4. Thus, ℙ(Ê∩Ẽ∩E̅) ≥ 1 - δ/2. Lem. <ref> leverages Prop. <ref> and the sampling distribution to ensure that Ê∩Ẽ holds w.h.p. Furthermore, Corollary <ref> ensures that the states remain bounded w.h.p. on the event Ê∩Ẽ.[This non-trivial result is directly collected from the bounding-the-state section of <cit.>.] As a result, the proof can be derived considering that both parameters concentrate and that states are bounded, which we summarize in the sequence of events E_t = Ê_t ∩Ẽ_t ∩E̅_t, which holds with probability at least 1- δ/2 for all t∈[0,T]. Regret decomposition. Conditioned on the filtration ℱ_t and event E_t, we have θ^⋆∈ℰ_t^RLS, θ̃_t ∈ℰ_t^TS and x_t≤ X. We directly decompose the regret and bound it on this event as <cit.> R(T) = ∑_t=0^T { J(θ̃_t) - J(θ_*) } 1{E_t}_R^TS + (R^RLS_1 + R^RLS_2 + R^RLS_3) 1{E_t}_R^RLS, where R^RLS is decomposed into the three components R^RLS_1 = ∑_t=0^T {𝔼 ( x_t+1^⊤ P(θ̃_t+1) x_t+1 | ℱ_t) - x_t^⊤ P(θ̃_t) x_t}, R^RLS_2 = ∑_t=0^T 𝔼[ x_t+1^⊤ (P(θ̃_t) - P(θ̃_t+1) ) x_t+1 | ℱ_t], R^RLS_3 = ∑_t=0^T { z_t^⊤θ̃_t P(θ̃_t) θ̃_t^⊤ z_t - z_t^⊤θ_* P(θ̃_t) θ_*^⊤ z_t}. Before entering into the details of how to bound each of these components, in the next section we discuss the main challenges in bounding the regret.§.§ Related Work and Challenges Since the estimator is the same in both TS and OFU, the regret terms R^RLS_1 and R^RLS_3 can be bounded as in <cit.>.
In fact, R^RLS_1 is a martingale by construction and it can be bounded by Azuma's inequality. The term R^RLS_3 is related to the difference between the true next expected state θ_⋆^⊤ z_t and the predicted next expected state θ̃_t^⊤ z_t. A direct application of RLS properties makes this difference small by construction, thus bounding R^RLS_3. Finally, the R^RLS_2 term is directly affected by the changes in model between any two time instants (i.e., θ̃_t and θ̃_t+1), while R^TS measures the difference in optimal average expected cost between the true model θ_* and the sampled model θ̃_t. In the following, we denote by R^RLS_2,t and R^TS_t the elements at time t of these two regret terms and we refer to them as the consistency regret and the optimality regret respectively. Optimistic approach. OFU explicitly bounds both regret terms directly by construction. In fact, the lazy update of the control policy allows the consistency regret R^RLS_2,t to be set to zero at all steps except when the policy changes between two episodes. Since in OFU an episode terminates only when the determinant of the design matrix is doubled, it is easy to see that the number of episodes is bounded by O(log(T)), which bounds R^RLS_2 as well (with a constant depending on the bound X on the state and other parameters specific to the LQ system).[Notice that the consistency regret is not specific to LQ systems but is common to all regret analyses in RL (see e.g., UCRL <cit.>), except for episodic MDPs, and it is always bounded by keeping under control the number of switches of the policy (i.e., the number of episodes).] At the same time, at the beginning of each episode an optimistic parameter θ̃_t is chosen, i.e., J(θ̃_t) ≤ J(θ_*), which directly ensures that R^TS_t is upper bounded by 0 at each time step. Bayesian regret. The lazy PSRL algorithm in <cit.> has the same lazy update as OFUL and thus it directly controls R^RLS_2 through a small number of episodes. On the other hand, the random choice of θ̃_t no longer guarantees optimism at each step.
Nonetheless, the regret is analyzed in the Bayesian setting, where θ_* is drawn from a known prior and the regret is evaluated in expectation w.r.t. the prior. Since θ̃_t is drawn from a posterior constructed from the same prior as θ_*, in expectation its associated J(θ̃_t) is the same as J(θ_*), thus ensuring that 𝔼[R^TS_t]=0. Frequentist regret. When moving from Bayesian to frequentist regret, this argument does not hold anymore and the (positive) deviations of J(θ̃_t) w.r.t. J(θ_*) have to be bounded in high probability. <cit.> exploits the linear structure of LQ problems to reuse arguments originally developed in the linear bandit setting. Similarly, we could try to leverage the analysis of TS for linear bandit by <cit.> to derive a frequentist regret bound. <cit.> partition the (potentially infinite) arms into saturated and unsaturated arms depending on their estimated value and their associated uncertainty (i.e., an arm is saturated when the uncertainty of its estimate is smaller than its performance gap w.r.t. the optimal arm). In particular, the uncertainty is measured using confidence intervals derived from a concentration inequality similar to Prop. <ref>. This suggests using a similar argument and classifying policies as saturated and unsaturated depending on their value. Unfortunately, this proof direction cannot be applied in the case of the LQR. In fact, in an LQ system θ the performance of a policy π is evaluated by the function J_π(θ), and the policy uncertainty should be measured by a confidence interval constructed on |J_π(θ_*) - J_π(θ̃_t)|. Despite the concentration inequality in Prop. <ref>, we notice that neither J_π(θ_*) nor J_π(θ̃_t) may be finite, since π may not stabilize the system θ_* (or θ̃_t) and thus incur an infinite cost. As a result, it is not possible to introduce the notion of saturated and unsaturated policies in this setting and another line of proof is required.
Another key element in the proof of <cit.> for TS in linear bandit is to show that TS has a constant probability p of selecting optimistic actions and that this contributes to reducing the regret of any non-optimistic step. In our case, this translates into requiring that TS selects a system θ̃_t whose corresponding optimal policy is such that J(θ̃_t) ≤ J(θ_*). Lem. <ref> shows that this happens with a constant probability p. Furthermore, we can show that optimistic steps reduce the regret of non-optimistic steps, thus effectively bounding the optimality regret R^TS. Nonetheless, this is not compatible with a small consistency regret. In fact, we need optimistic parameters θ̃_t to be sampled often enough. On the other hand, bounding the consistency regret R^RLS_2 requires reducing the number of policy switches (i.e., the number of episodes) as much as possible. If we keep the same number of episodes as with the lazy update of OFUL (i.e., about log(T) episodes), then the number of sampled points is as small as T/(T-log(T)). While OFU guarantees that any policy update is optimistic by construction, with TS only a fraction T/(p(T-log(T))) of steps would be optimistic on average. Unfortunately, such a small number of optimistic steps is no longer enough to derive a bound on the optimality regret R^TS. Summarizing, in order to derive a frequentist regret bound for TS in LQ systems, we need the following ingredients: 1) a constant probability of optimism, 2) a connection between optimism and R^TS without using the saturated/unsaturated argument, 3) a suitable trade-off between lazy updates, to bound the consistency regret, and frequent updates, to guarantee a small optimality regret. §.§ Bounding the Optimality Regret R^TS R^TS decomposition. We define the “extended” filtration ℱ_t^x = (ℱ_t-1, x_t).
Let K be the (random) number of episodes up to time T, {t_k}_k=1^K be the steps when the policy is updated, i.e., when a new parameter θ̃ is sampled, and let T_k be the associated length of each episode; then we can further decompose R^TS as R^TS = ∑_k=0^K T_k ( J(θ̃_t_k) - 𝔼[J(θ̃_t_k) | ℱ_t_k^x, E_t_k] ) 1{E_t_k}_R^TS,1_t_k + ∑_k=0^K T_k {𝔼[J(θ̃_t_k) | ℱ_t_k^x, E_t_k] - J(θ_*) } 1{E_t_k}_R^TS,2_t_k. We focus on the second regret term, which we redefine as R^TS,2_t_k = Δ_t for any t = t_k for notational convenience. Optimism and expectation. Let Θ^opt = {θ : J(θ) ≤ J(θ_*) } be the set of optimistic parameters (i.e., LQ systems whose optimal average expected cost is lower than the true one). Then, for any θ∈Θ^opt, the per-step regret Δ_t is bounded by Δ_t ≤ ( 𝔼[J(θ̃_t) | ℱ_t^x, E_t] - J(θ) ) 1{E_t} ≤ | J(θ) - 𝔼[J(θ̃_t) | ℱ_t^x, E_t]| 1{E_t}, which implies that Δ_t ≤𝔼[| J(θ) - 𝔼[J(θ̃_t) | ℱ_t^x, E_t] | 1{E_t} | ℱ_t^x, E_t, E̅_t, θ∈Θ^opt], where we first use the definition of the optimistic parameter set, then bound the resulting quantity by its absolute value, and finally switch to the expectation over the optimistic set, since the inequality is true for any θ∈Θ^opt. While this inequality is true for any sampling distribution, it is convenient to select it to coincide with the sampling distribution of TS. Thus, we set θ = ℛ_𝒮(θ̂_t + β_t(δ^') W_t η), where η is a component-wise Gaussian 𝒩(0,1) perturbation, and obtain Δ_t ≤𝔼[| J(θ̃_t) - 𝔼[J(θ̃_t) | ℱ_t^x, E_t] | 1{E_t} | ℱ_t^x, E_t, E̅_t, θ̃_t ∈Θ^opt] ≤𝔼[| J(θ̃_t) - 𝔼[J(θ̃_t) | ℱ_t^x, E_t] | 1{E_t} | ℱ_t^x, E_t, E̅_t ] / ℙ( θ̃_t ∈Θ^opt | ℱ_t^x, E_t ). At this point we need to show that the probability of sampling an optimistic parameter θ̃_t is constant at any step t. This result is proved in the following lemma. Let Θ^opt := {θ∈ℝ^d | J(θ) ≤ J(θ^⋆) } be the set of optimistic parameters and θ̃_t = ℛ_𝒮 (θ̂_t + β_t(δ^') W_t η) with η component-wise normal 𝒩(0,1); then in the one-dimensional case (n=1 and d=1), ∀ t≥ 0, ℙ( θ̃_t ∈Θ^opt | ℱ^x_t, E_t ) ≥ p, where p is a strictly positive constant.
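The lemma's claim that an optimistic parameter is sampled with constant probability can be illustrated with a small Monte-Carlo experiment in the scalar case. Everything below is illustrative: the system, the perturbation scale standing in for β_t(δ')W_t, and the omission of the rejection step onto 𝒮 are all simplifying assumptions.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

def J(theta, Q=np.eye(1), R=np.eye(1)):
    """Optimal average cost J(theta) = tr(P(theta)) of a scalar LQ system."""
    A, B = theta[:1].T, theta[1:].T
    return np.trace(solve_discrete_are(A, B, Q, R))

rng = np.random.default_rng(0)
theta_star = np.array([[0.9], [1.0]])     # arbitrary true system
J_star = J(theta_star)

# For illustration, center the sampling at theta_star with a small isotropic
# perturbation (a crude stand-in for beta * W_t) and count optimistic draws,
# i.e., draws with J(theta_tilde) <= J(theta_star).
beta = 0.05
draws = [J(theta_star + beta * rng.standard_normal((2, 1)))
         for _ in range(500)]
p_hat = np.mean([j <= J_star for j in draws])
print(f"estimated probability of optimism: {p_hat:.2f}")
```

In this idealized setting the estimated probability stays bounded away from zero, which is the qualitative content of the lemma; the actual constant p in the paper depends on the geometry of Θ^opt and of the confidence ellipsoid.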
Integrating this result into the previous expression gives Δ_t ≤ (1/p) 𝔼[| J(θ̃_t) - 𝔼[J(θ̃_t) | ℱ_t^x, E_t] | | ℱ_t^x, E_t ]. The most interesting aspect of this result is that the constant probability of being optimistic allows us to bound the worst-case non-stochastic quantity 𝔼[J(θ̃_t) | ℱ_t^x] - J(θ_*), which depends on J(θ_*), by the expectation 𝔼[| J(θ̃_t) - 𝔼[J(θ̃_t) | ℱ_t^x] | | ℱ_t^x ], up to a multiplicative constant (we drop the events E for notational convenience). The last term is the conditional absolute deviation of the performance J w.r.t. the sampling distribution. This connection provides a major insight into the functioning of TS, since it shows that TS does not need to have an accurate estimate of θ_*, but should rather reduce the estimation errors of θ_* only in the directions that may translate into larger errors in estimating the objective function J. In fact, we show later that at each step TS chooses a sampling distribution that tends to minimize the expected absolute deviations of J, thus contributing to reducing the deviations in R^TS_t. Variance and gradient. Let d' = √(n(n+d)); we introduce the mapping f_t from the ball ℬ(0,d') to ℝ_+ defined as f_t(η) = J(θ̂_t + β_t(δ^') W_t η) - 𝔼[J(θ̃_t) | ℱ_t^x, E_t], where the restriction to the ball ensures that the sample lies within the ℰ^TS_t confidence ellipsoid. Since the perturbation η is drawn independently of the past, we can rewrite Eq. <ref> as Δ_t ≤𝔼_η[ |f_t(η)| | η∈ℬ(0,d'), θ̂_t + β_t(δ^') W_t η∈𝒮]. We now need to show that this formulation of the regret is strictly related to the policy executed by TS. We prove the following result (proof in the supplement). Let Ω⊂ℝ^d be a convex domain with finite diameter diam(Ω). Let p be a non-negative log-concave function on Ω with continuous derivatives up to the second order. Then, for all u ∈ W^1,1(Ω)[W^1,1(Ω) is the Sobolev space of order 1 in L^1(Ω).]
such that ∫_Ω u(z) p(z) dz = 0, one has ∫_Ω |u(z)| p(z) dz ≤ 2 diam(Ω) ∫_Ω ||∇ u(z)|| p(z) dz. Before using the previous result, we relate the gradient of f_t to the gradient of J. For any η and the corresponding θ = θ̂_t + β_t(δ^') W_t η, we have ∇ f_t(η) = β_t(δ^') W_t ∇ J(θ). To obtain a bound on the norm of ∇ f_t, we apply Prop. <ref> (derived from Lem. <ref>) to get a bound on ∇ J(θ) _W_t^2: ∇ J(θ) _W_t^2 ≤ A_c(θ)_2^2 ∇ J(θ) _W_t^2 + 2 P(θ) A_c(θ)_2 H(θ)_W_t^2. Making use of M ≤ tr(M) for any positive definite matrix M, together with tr(P(θ)) ≤ D (Asm. <ref>) and A_c(θ)_2 ≤ρ (Prop. <ref>), we get ∇ J(θ) _W_t^2 ≤ρ^2 ∇ J(θ) _W_t^2 + 2 D ρ H(θ)_W_t^2, which leads to ∇ J(θ) _W_t^2 ≤ 2 D ρ/(1 - ρ^2) H(θ)_W_t^2. We are now ready to use the weighted Poincaré inequality of Lem. <ref> to link the expectation of |f_t| to the expectation of its gradient. From Lem. <ref>, we have f_t ∈ W^1,1(Ω) and its expectation is zero by construction. On the other hand, the rejection sampling procedure imposes that we condition the expectation on θ̂_t + β_t(δ^') W_t η∈𝒮, which is unfortunately not a convex constraint. However, we can still apply Lem. <ref> by considering the function f̃_t(η) = f_t(η) 1 (θ̂_t + β_t(δ^') W_t η∈𝒮 ) and diameter diam = d'. As a result, we finally obtain Δ_t ≤γ𝔼[ H(θ̃_t) _W_t^2 | ℱ_t^x ], where γ = 8 √(n (n+d)) β_T(δ^') D ρ / (p(1 - ρ^2)). From gradient to actions. Recalling the definition of H(θ) = (I K(θ)^⊤)^⊤, we notice that the previous expression bounds the regret Δ_t with a term involving the gain K(θ̃) of the optimal policy for the sampled parameter θ̃. This shows that the R^TS regret is directly related to the policies chosen by TS. To make this relationship more apparent, we now elaborate the previous expression to reveal the sequence of state-control pairs z_t induced by the policy with gain K(θ̃_t). We first plug the bound on Δ_t back into Eq. <ref> as R^TS ≤∑_k=1^K T_k ( R^TS,1_t_k + γ𝔼[ H(θ̃_t_k) _V_t_k^-1 | ℱ_t_k^x ]) 1{E_t_k}.
We remove the expectation by adding and subtracting the actual realizations of θ̃_t_k as R^TS,3_t_k = 𝔼[ H(θ̃_t_k) _V_t_k^-1 | ℱ_t_k^x ] - H(θ̃_t_k) _V_t_k^-1. Thus, one obtains R^TS ≤∑_k=1^K T_k (R^TS,1_t_k + R^TS,3_t_k + γ H(θ̃_t_k) _V_t_k^-1) 1{E_t_k}. Now we want to relate the cumulative sum of the last regret term to ∑_t=1^T z_t_V^-1_t. This quantity represents the prediction error of the RLS estimate, and we know from Prop. <ref> that it is bounded w.h.p. We now focus on the one-dimensional case, where x_t is just a scalar value. Noticing that z_t_V_t^-1 = |x_t| H(θ̃_t)_V_t^-1, one has: ∑_t=0^T z_t_V^-1_t = ∑_k=1^K ( ∑_t = t_k^t_k+1-1 |x_t| ) H(θ̃_t_k)_V_t^-1. Intuitively, this means that over each episode, the more the states are excited (i.e., the larger ∑_t = t_k^t_k+1-1 |x_t|), the more V_t^-1 shrinks in the direction H(θ̃_t_k). As a result, to ensure that the term ∑_k=1^K T_k H(θ̃_t_k)_V_t^-1 in R^TS is small, it would be sufficient to show that ∑_t = t_k^t_k+1-1 |x_t| ∼ T_k, i.e., that the states provide enough information to learn the system in each chosen direction H(θ̃_t_k). More formally, let us assume that there exists a constant α such that T_k ≤α∑_t = t_k^t_k+1-1 |x_t| for all k ≤ K. Then, ∑_k=1^K T_k H(θ̃_t_k)_V_t_k^-1≤α∑_t=0^T z_t_V_t_k^-1≤ 2α∑_t=0^T z_t_V_t^-1, where we use that det(V_t) ≤ 2 det(V_t_k), as guaranteed by the termination condition. Unfortunately, the intrinsic randomness of x_t (triggered by the noise ϵ_t) is such that the assumption above is violated w.p. 1. However, in the one-dimensional case, the regret over episode k can be conveniently written as R_k(T) = ( ∑_t = t_k^t_k+1-1 |x_t|^2 ) ( Q + K(θ̃_t_k)^2 R ) - T_k J(θ_*). As a result, if we set α := X (Q+RC^2)/J(θ_*) ≥ X (Q+R K(θ̃_t_k)^2)/J(θ_*), whenever ∑_t=t_k^t_k+1-1 |x_t| ≤ (1/α) T_k we can directly conclude that R_k(T) is non-positive.
On the other hand, in the opposite case we have T_k ≤α∑_t = t_k^t_k+1-1 |x_t| and thus we can upper bound the last term in R^TS as R^TS ≤∑_k=1^K T_k (R^TS,1_t_k + R^TS,3_t_k) 1{E_t_k} + 2 γα∑_t=0^T z_t_V_t^-1.§.§ Final bound Bounding R^RLS_1 and R^RLS_3. These two terms can be bounded following similar steps as in <cit.>. We report the detailed derivation in the supplement, while here we simply report the final bounds R^RLS_1 ≤ [2 D X^2 √(2log(4/δ))]_:=γ_1√(T), and R^RLS_3 ≤ [4 S D √( (1 + C^2) X^2 )μ_T(δ^')]_:=γ_3∑_t=0^T z_t _V_t^-1 1{E_t}, where μ_T(δ^') = β_T(δ^') + γ_T(δ^'). Bounding R^RLS_2. Since the policy is updated only from time to time, the difference of the optimal values P(θ̃_t) - P(θ̃_t+1) is zero except when the parameters are updated. When this is the case, thanks to the rejection sampling procedure, which ensures that every parameter belongs to the set 𝒮 of Asm. <ref>, it is trivially bounded by 2D. Therefore, on event E, one has: R^RLS_2 ≤ 2 X^2 D K, where K is the (random) number of episodes. By definition of TS, the updates are triggered either when det(V_t) increases by a factor 2 or when the length of the episode is greater than τ. Hence, the number of updates can be split into K = K^det + K^len, where K^det and K^len are the numbers of updates triggered by the two conditions respectively. From Cor. <ref>, one gets: K ≤(T/τ + (n+d) log_2 ( 1 + T X^2 (1+C^2)/λ) ), and thus R^RLS_2 ≤ [2 X^2 D (n+d) log_2 ( 1 + T X^2 (1+C^2)/λ)]_:=γ_2 T/τ. Plugging everything together. We are now ready to bring all the regret terms together and obtain R(T) ≤ (2 γα + γ_3 ) ∑_t=0^T z_t_V^-1_t 1{E_t} + γ_2 T/τ + γ_1√(T) + ∑_k=1^K T_k (R^TS,1_t_k + R^TS,3_t_k) 1{E_t_k}. At this point, the regret bound is decomposed into several parts: 1) the first term can be bounded as ∑_t=0^T z_t _V_t^-1 = Õ(√(T)) on E using Prop. <ref> (see App. <ref> for details), 2) two terms which are already conveniently bounded as T/τ and √(T), and 3) two remaining terms from R^TS that are almost exact martingales. In fact, T_k is random w.r.t.
ℱ_t_k and thus the terms T_k R^TS,1_t_k and T_k R^TS,3_t_k are not proper martingale difference sequences. However, we can leverage the fact that on most of the episodes the length T_k is not random, since the termination of the episode is triggered by the (deterministic) condition T_k ≤τ. Let α_k = (R^TS,1_t_k + R^TS,3_t_k) 1{E_t_k}, and let 𝒦^det and 𝒦^len be two sets of indices, of cardinality K^det and K^len respectively, corresponding to the episodes terminated by the determinant condition and by the length condition respectively. Then, we can write ∑_k=1^K T_k α_k = ∑_k ∈𝒦^det T_k α_k + τ∑_k ∈𝒦^lenα_k ≤∑_k ∈𝒦^det T_k α_k + ∑_k ∈𝒦^lenτα_k + ∑_k ∈𝒦^detτα_k ≤ 2 τ∑_k ∈𝒦^detα_k + τ∑_k=1^K α_k. The first term can be bounded using Lem. <ref>, which implies that the number of episodes triggered by the determinant condition is only logarithmic. On the other hand, the remaining term ∑_k=1^K α_k is now a proper martingale and, together with the boundedness of α_k on the event E, Azuma's inequality directly applies. We obtain ∑_k=1^K T_k (R^TS,1_t_k + R^TS,3_t_k) 1{E_t_k} = Õ(τ√(K)) w.p. 1 - δ/2. Grouping all higher-order terms w.r.t. T and applying Cor. <ref> to bound K, we finally have R(T) ≤ C_1 T/τ + C_2 τ√(T/τ), where C_1 and C_2 are suitable problem-dependent constants. This final bound is optimized by τ = O(T^1/3), which induces the final regret bound R(T) = O(T^2/3). More details are reported in App. <ref>.§ DISCUSSION We derived the first frequentist regret bound for TS in LQ control systems. Despite the existing results for optimistic approaches (OFU) in LQ, the Bayesian analysis of TS in LQ, and the frequentist analysis of TS in linear bandit, we showed that controlling the frequentist regret induced by the randomness of the sampling process in LQ systems is considerably more difficult, and it requires developing a new line of proof that directly relates the regret of TS to the controls executed over time.
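The final optimization of τ can be checked numerically: grid-minimizing the bound C_1 T/τ + C_2 τ√(T/τ) confirms that the minimizer scales as T^(1/3) and the minimum as T^(2/3) (C_1 = C_2 = 1 below are placeholder constants, not values from the paper):

```python
import numpy as np

def best_tau(T, C1=1.0, C2=1.0):
    """Grid-minimize the bound C1*T/tau + C2*tau*sqrt(T/tau) over tau."""
    taus = np.arange(1.0, T + 1.0)
    bound = C1 * T / taus + C2 * taus * np.sqrt(T / taus)
    return taus[np.argmin(bound)], bound.min()

for T in [10**4, 10**6]:
    tau, val = best_tau(T)
    # The ratios below should stay roughly constant if tau* ~ T^(1/3)
    # and the optimized bound ~ T^(2/3).
    print(T, tau / T ** (1 / 3), val / T ** (2 / 3))
```

A quick calculus check agrees: with C_1 = C_2 = 1 the bound is T/τ + √T·√τ, whose stationary point is τ* = 2^(2/3) T^(1/3) with value 3·2^(-2/3) T^(2/3).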
Furthermore, we show that TS has to solve a trade-off between frequently updating the policy, to guarantee enough optimistic samples, and reducing the number of policy switches, to limit the regret incurred at each change. This gives rise to a final bound of O(T^2/3). This opens a number of questions. 1) The current analysis is derived in the general n/d-dimensional case except for Lem. <ref> and the steps leading to the introduction of the state in Sect. <ref>, where we set n=d=1. We believe that these steps can be extended to the general case without affecting the final result. 2) The final regret bound is in striking contrast with previous results for TS. While we provide a rather intuitive explanation for the source of this extra regret, it is an open question whether a different TS strategy or analysis could allow the regret to be improved to O(√(T)), or whether this result reveals an intrinsic limitation of the randomized approach of TS. Acknowledgement This research is supported in part by a grant from CPER Nord-Pas de Calais/FEDER DATA Advanced data science and technologies 2015-2020, CRIStAL (Centre de Recherche en Informatique et Automatique de Lille), and the French National Research Agency (ANR) under project ExTra-Learn n.ANR-14-CE24-0010-01.§ CONTROL THEORY§.§ Proof of Prop. <ref>* When θ^⊤ = (A,B) is not stabilizable, there exists no linear control K such that the controlled process x_t+1 = A x_t + B K x_t + ϵ_t+1 is stationary. Thus, the positive definiteness of Q and R implies J(θ) = tr(P(θ)) = +∞. As a consequence, θ^⊤∉𝒮.* The mapping θ→ tr(P(θ)) is continuous (see Lem. <ref>). Thus, 𝒮 is compact as the intersection between a closed set and a compact set.* The continuity of the mapping θ→ K(θ) together with the compactness of 𝒮 ensures that the positive constants ρ and C are finite. Moreover, since every θ∈𝒮 is a stabilizable pair, ρ < 1.§.§ Proof of Lem. <ref> Let θ^⊤ = (A,B), where A and B are matrices of size n × n and n × d respectively.
Let ℛ : ℝ^n+d,n×ℝ^n,n→ℝ^n,n be the Riccati operator defined by:ℛ (θ, P) := Q - P + A^ P A - A^ P B (R + B^ P B)^-1 B^ P A,where Q,R are positive definite matrices. Then, the solution P(θ) of the Riccati equation of Thm. <ref> is the solution of ℛ(θ,P) = 0. While Prop. <ref> guarantees that there exists a unique admissible solution as soon as θ∈𝒮, addressing the regularity of the function θ→ P(θ) requires the use of the implicit function theorem.

Let E and F be two Banach spaces, let Ω⊂ E × F be an open subset, and let f : Ω→ F be a C^1-map. Let (x_0,y_0) be a point of Ω such that f(x_0,y_0) = 0. We denote by d_y f(x_0,y_0) : F → F the differential of the function f with respect to the second argument at the point (x_0,y_0). Assume that this linear transformation is bounded and invertible. Then, there exist * two open subsets U and V such that (x_0,y_0) ∈ U × V ⊂Ω, * a function g : U → V such that f(x, g(x)) = 0 for all x ∈ U, and such that any (x,y) ∈ U × V with f(x,y) = 0 satisfies y = g(x). Moreover, g is C^1 and d g(x)= - d_y f(x,g(x))^-1 d_x f(x,g(x)) for all x ∈ U.

Since R is positive definite, the Riccati operator is clearly a C^1-map. Moreover, thanks to Thm. <ref>, for any θ∈𝒮 there exists an admissible P such that ℛ(θ,P) = 0. Thanks to Thm. <ref>, a sufficient condition for θ→ P(θ) to be C^1 on 𝒮 is that the linear map d_Pℛ(θ,P(θ)) : ℝ^n× n→ℝ^n× n is a bounded invertible transformation, i.e.,

* Bounded. There exists M such that, for any P ∈ℝ^n × n, d_Pℛ(θ,P(θ)) ( P) ≤ MP.

* Invertible. There exists a bounded linear operator S : ℝ^n × n→ℝ^n × n such that S ∘ d_Pℛ(θ,P(θ)) and d_Pℛ(θ,P(θ)) ∘ S are both the identity on ℝ^n × n.

Let θ^ = (A,B) and ℛ be the Riccati operator defined in equation (<ref>). Then, the differential of ℛ w.r.t. P, taken at (θ, P(θ)) and denoted d_Pℛ(θ,P(θ)), is defined by:d_Pℛ(θ,P(θ))( δ P) := A_c^T δ P A_c - δ P, for any δ P ∈ℝ^n × n,where A_c = A - B (R + B^ P B)^-1 B^ P(θ) A.
The proof is straightforward using the standard composition/multiplication/inverse operations for the differential operator together with an appropriate rearrangement. Clearly, d_Pℛ(θ,P(θ)) is a bounded linear map. Moreover, thanks to Lyapunov theory, for any stable matrix A_c (i.e., with A_c_2 < 1) and for any matrix Q, the Lyapunov equation A_c^T X A_c - X = Q admits a unique solution. From Thm. <ref>, the optimal matrix P(θ) is such that the corresponding A_c is stable. This implies that d_Pℛ(θ,P(θ)) is an invertible operator, and θ→ P(θ) is C^1 on 𝒮. Therefore, the differential of θ→ P(θ) can be deduced from the implicit function theorem. After tedious yet standard operations, one gets that for any θ∈𝒮 and direction δθ∈ℝ^(n+d)× n: d J(θ)(δθ) =(d P(θ)(δθ)) = ( ∇ J(θ)^δθ ), where ∇ J(θ) ∈ℝ^(n+d)× n is the Jacobian matrix of J at θ. For any δθ∈ℝ^(n+d)× n, one has:∇ J(θ)^δθ = A_c(θ)^∇ J(θ)^δθA_c(θ)+ C(θ,δθ) +C(θ,δθ)^, where C(θ,δθ) = A_c(θ)^ P(θ) δθ^ H(θ).

For any θ∈𝒮 and any positive definite matrix V, one has the following inequality for the weighted norm of the gradient of J:∇ J(θ) _V ≤A_c(θ)_2^2 ∇ J(θ) _V + 2P(θ)A_c(θ)_2H(θ)_V.

Let θ∈𝒮 and let V ∈ℝ^(n+d)×(n+d) be any positive definite matrix. Applying (<ref>) to δθ = V ∇ J(θ) leads to:∇ J(θ)^ V ∇ J(θ) = A_c(θ)^∇ J(θ)^ V ∇ J(θ) A_c(θ) + C(θ,V ∇ J(θ)) + C(θ,V ∇ J(θ))^,where C(θ,V ∇ J(θ))^ = ( V^1/2 H(θ) )^ V^1/2∇ J(θ) P(θ) A_c(θ). Let ⟨ A , B ⟩ =A^ B be the Frobenius inner product; then taking the trace of the above equality, one gets:∇ J(θ) ^2_V = ∇ J(θ) A_c(θ) ^2_V + 2 ⟨ V^1/2 H(θ) , V^1/2∇ J(θ) P(θ) A_c(θ) ⟩. Using the Cauchy-Schwarz inequality and the fact that the Frobenius norm is sub-multiplicative, together with (M_1M_2) ≤M_1_2 (M_2) for any M_1,M_2 symmetric positive definite matrices, one obtains:∇ J(θ) ^2_V ≤A_c(θ)_2^2 ∇ J(θ) ^2_V + 2H(θ)_V P(θ)A_c(θ)_2 ∇ J(θ)_V. Finally, dividing by ∇ J(θ)_V provides the desired result.

§ MATERIAL

Let { M_s }_s ≥ 0 be a super-martingale such that | M_s - M_s-1 | ≤ c_s almost surely.
Then, for all t > 0 and all ϵ> 0, ℙ( | M_t - M_0| ≥ϵ) ≤ 2 exp(- ϵ^2/ 2 ∑_s=1^t c_s^2).

Let K^det be the number of changes in the policy of Algorithm <ref> due to the determinant trigger (V_t) ≥ 2 (V_0). Then, on E, K^det is at most K^det≤(n+d) log_2 ( 1 + T X^2 (1+C^2)/ λ).

Let K be the number of policy changes of Algorithm <ref>, K^det be defined as in Lem. <ref> and K^len= K - K^det be the number of policy changes due to the length trigger t ≥ t_0 + τ. Then, on E, K is at most K ≤ K^det + K^len≤ (n+d) log_2 ( 1 + T X^2 (1+C^2)/ λ) + T/τ. Moreover, assuming that T ≥λ/X^2 (1+C^2), one gets K ≤ (n+d) log_2 ( 1 + T X^2 (1+C^2)/ λ) T/τ.

Let X∼𝒩(0,1). For any t≥ 0, ℙ(|X| ≥ t ) ≤ 2 exp(-t^2/2).

Proof of Lem. <ref>. Let δ^' = δ/8T.

* From Prop. <ref>, ℙ( θ_t - θ_*_V_t≤β_t(δ^') ) ≥ 1 - δ^'. Hence, ℙ( E)= ℙ( ⋂_t =0^T ( θ_t - θ_*_V_t≤β_t(δ^') ) )= 1- ℙ( ⋃_t =0^T ( θ_t - θ_*_V_t≥β_t(δ^') ) ) ≥ 1 - ∑_t=0^Tℙ( θ_t - θ_*_V_t≥β_t(δ^') )≥ 1 - Tδ^'≥ 1 - δ/8

* From Lem. <ref>, let η∼ then, for any ϵ > 0, making use of the fact that η≤ n √(n+d)max_i≤ n+d,j≤ n|η_i,j|, ℙ( η≤ϵ) ≥ℙ( n √(n+d)max_i,j |η_i,j| ≤ϵ)≥ 1 - ∑_i,jℙ( |η_i,j | ≥ϵ/n √(n+d)) ≥ 1 - n(n+d) ℙ_X ∼𝒩(0,1)( |X| ≥ϵ/n √(n+d)).Hence,ℙ( E)= ℙ( ⋂_t =0^T ( θ_t - θ_t_V_t≤γ_t(δ^') ) )= 1- ℙ( ⋃_t =0^T ( θ_t - θ_t_V_t≥γ_t(δ^') ) ) ≥ 1 - ∑_t=0^Tℙ( θ_t - θ_t_V_t≥γ_t(δ^') ) ≥ 1 - ∑_t=0^Tℙ( η≥γ_t(δ^')/β_t(δ^') )≥ 1 - ∑_t=0^Tℙ( η≥ n √(2 (n+d) log( 2 n (n+d) / δ^')) )≥ 1 - Tδ^'≥ 1 - δ/8.

* Finally, a union bound argument ensures that ℙ(E∩E) ≥ 1 - δ/4.

Proof of Cor. <ref>. This result comes directly from Sec. 4.1. and App. D of <cit.>. The proof relies on the fact that, on E, because θ_t is chosen within the confidence ellipsoid ^_t, the number of time steps during which the true closed-loop matrix A_* + B_* K(θ_t) is unstable is small. Intuitively, the reason is that as soon as the true closed-loop matrix is unstable, the state process explodes and the confidence ellipsoid is drastically changed.
As the ellipsoid can only shrink over time, the state is well controlled except for a small number of time steps. Since the only difference is that, on E∩E, θ_t ∈^_t, the same argument applies and the same bound holds replacing β_t with γ_t. Therefore, there exist appropriate problem-dependent constants X,X^' such that ℙ( E̅ | E∩E ) ≥ 1 - δ/4. Finally, a union bound argument ensures that ℙ(E∩E∩E̅) ≥ 1 - δ/2.

§ PROOF OF LEM. <REF>

We prove here that, on E, the sampling θ∼ℛ_𝒮 ( θ_t + β_t(δ^') V^1/2_t ) guarantees a fixed probability of sampling an optimistic parameter, i.e. one which belongs to Θ_t^ := {θ∈^d|J(θ) ≤ J(θ^⋆) }. However, our result only holds for the 1-dimensional case, as we rely heavily on the geometry of the problem. Figure <ref> synthesizes the properties of the optimal value function and the geometry of the problem w.r.t. the probability of being optimistic.

* First, we introduce a simpler subset of optimistic parameters which involves hyperplanes rather than complicated J level sets. Without loss of generality we assume that A_* + B_* K_* = ρ_* ≥ 0 and introduce H_* = [ 1; K_* ]∈ℝ^2 so that A_* + B_* K_* = θ_*^ H_*. Let Θ^lin, = {θ∈^d| | θ^ H_* | ≤ρ_* }. Intuitively, Θ^lin, consists of the set of systems θ which are more stable under the control K_*. The following proposition ensures that those systems are optimistic. Θ^lin,⊂Θ_t^. Leveraging the expression of J, one has, when n=d=1, J(θ) =(P(θ)) = P(θ) = lim_T →∞1/T∑_t=0^T x_t^2 (Q + K(θ)^2 R) =(Q + K(θ)^2 R) 𝕍(x_t),where 𝕍(x_t) = (1 - |θ^ H(θ) |^2)^-1 is the steady-state variance of the stationary first-order autoregressive process x_t+1 = θ^ H(θ) x_t + ϵ_t+1, where ϵ_t is zero-mean noise of variance 1 and H(θ) = [1; K(θ) ].
Thus,J(θ) = ( Q + K(θ)^2 R )(1 - | θ^ H(θ) |^2 )^-1.Hence, for any θ∈Θ^lin,, (1 - |θ^ H_* |^2)^-1≤ (1 - | θ_*^ H_* |^2)^-1 which implies that (Q + K_*^2 R ) (1 - |θ^ H_* |^2)^-1≤(Q + K_*^2 R )(1 - |θ_*^ H_* |^2)^-1 = J(θ_*).However, since K(θ) is the optimal control associated with θ, J(θ)= (Q + K(θ) ^2 R ) (1 - |θ^ H(θ) |^2)^-1= min_K(Q + K^2 R ) (1 - |[ 1 K ]θ |^2)^-1≤ (Q + K_*^2 R ) (1 - |θ^ H_* |^2)^-1≤ J(θ_*) As a result,ℙ( θ_t ∈Θ^ | ^x_t,E_t ) ≥ℙ(θ_t ∈ Θ^lin, | ^x_t,E_t ) and we can focus on Θ^lin,. * To ensure the sampling parameter to be admissible, we perform a rejection sampling until θ_t ∈𝒮. Noticing that Θ^lin,⊂Θ^⊂𝒮 by construction, the rejection sampling is always favorable in terms of probability of being optimistic. Since we seek for a lower bound, we can get rid of it and consider θ_t = θ_t + β_t(δ^') V^-1/2_t η where η∼𝒩(0,I_2).[In the 1-dimensional case, η is just a 2d standard gaussian r.v.] * On E_t, θ_⋆∈^_t, where ^_t is the confidence RLS ellipsoid centered in θ_t. Since θ_* is fixed (by definition), we lower bound the probability by considering the worst possible θ_t such that E_t holds. Intuitively, we consider the worst possible center for the RLS ellipsoid such that θ_⋆ still belong in ^_t and that the probability of being optimistic is minimal. Formally,ℙ(θ_t ∈Θ^lin, | ^x_t,E_t )=ℙ_θ_t∼𝒩(θ_t, β_t^2(δ^') V_t^-1)(θ_t ∈Θ^lin, | ^x_t,E_t ) ≥min_θ : θ - θ_*_V_t≤β_t(δ^')ℙ_θ_t∼𝒩(θ, β_t^2(δ^') V_t^-1)(θ_t∈Θ^lin, | ^x_t)Moreover, by Cauchy-Schwarz inequality, for any θ, | (θ - θ_* )^ H_* | ≤θ - θ_* _V_tH_* _V_t^-1≤β_t(δ^') H_*_V_t^-1,thus,ℙ(θ_t ∈Θ^lin, | ^x_t,E_t ) ≥min_θ : θ - θ_*_V_t≤β_t(δ^')ℙ_θ_t∼𝒩(θ, β_t^2(δ^') V_t^-1)(θ_t ∈Θ^lin, | ^x_t) ≥min_θ : | (θ - θ_*)^ H_* |≤β_t(δ^')H_*_V_t^-1ℙ_θ_t∼𝒩(θ, β_t^2(δ^') V_t^-1)(θ_t ∈Θ^lin, | ^x_t) = min_θ : | θ^ H_* - ρ_*| ≤β_t(δ^')H_*_V_t^-1ℙ_θ_t∼𝒩(θ, β_t^2(δ^') V_t^-1)(| θ_t^ H_* | ≤ρ_*| ^x_t)Cor. <ref> provides us with an explicit expression of the worst case ellipsoid. 
Introducing x=θ_t^ H_*, one has x ∼𝒩(x̅,σ^2_x) with x̅ = θ H_* and σ_x = β_t(δ^') H_* _V_t^-1. Applying Cor. <ref> with α = ρ_*, ρ = ρ_* and β = β_t(δ^') H_*_V_t^-1, inequality (<ref>) becomesℙ(θ_t ∈Θ^lin, | ^x_t,E_t ) ≥min_θ : | θ^ H_* - ρ_*| ≤β_t(δ^') H_*_V_t^-1ℙ_η∼𝒩(0,I_2)(| θ^ H_* + β_t(δ^') η^ V_t^-1/2 H_* | ≤ρ_*| ^x_t)≥ℙ_η∼𝒩(0,I_2)(| ρ_* +β_t(δ^') H_*_V_t^-1 + β_t(δ^') η^ V_t^-1/2 H_* | ≤ρ_*| ^x_t)Introducing the vector u_t =β_t(δ^') V_t^-1/2 H_*, one can simplify| ρ_* +β_t(δ^') H_*_V_t^-1 + β_t(δ^') η^ V_t^-1/2 H_* | ≤ρ_*,⇔-ρ_* ≤ρ_* +u_t + η^ u_t ≤ρ_*,⇔-2ρ_*/u_t -1≤η^u_t/u_t≤ -1.Since η∼𝒩(0,I_2) is rotationally invariant, ℙ(θ_t ∈Θ^lin, | ^x_t,E_t )≥ℙ_ϵ∼𝒩(0,1) ( ϵ∈[1, 1+2 ρ_*/ u_t ]| ^x_t,E_t ). Finally, for all t≤ T, u_t is almost surely bounded: u_t≤β_T(δ^') √((1+C^2)/λ). Therefore,ℙ(θ_t ∈Θ^lin, | ^x_t,E_t )≥ℙ_ϵ∼𝒩(0,1) ( ϵ∈[1, 1+ 2 ρ_*/β_T(δ^') √((1+C^2)/λ)] ) := p

For any ρ,σ_x> 0 and any α, β≥ 0, the minimum min_x̅ : | x̅ - α | ≤βℙ_x ∼𝒩(x̅,σ_x^2)( |x| ≤ρ) is attained at x̅ = α+ β. This corollary is a direct consequence of the following property of Gaussian random variables. For any ρ, σ_x > 0, let f : ℝ→ [0,1] be the continuous mapping defined by f(x̅) = ℙ_x ∼𝒩(x̅,σ_x^2)( |x| ≤ρ). Then, f is increasing on ℝ_- and decreasing on ℝ_+. Without loss of generality, one can assume that σ_x = 1/√(2) (otherwise, rescale ρ) and that x̅≥ 0 (by symmetry). Denoting by Φ and erf the standard Gaussian cdf and the error function, one has:f(x̅)=ℙ_x ∼𝒩(x̅,σ_x^2)( - ρ≤ x ≤ρ),= ℙ_x ∼𝒩(x̅,σ_x^2)( x ≤ρ)- ℙ_x ∼𝒩(x̅,σ_x^2)( x ≤ -ρ), =ℙ_x ∼𝒩(x̅,σ_x^2)( (x - x̅)/σ_x ≤ (ρ - x̅)/σ_x )- ℙ_x ∼𝒩(x̅,σ_x^2)( (x - x̅)/σ_x≤ (-ρ - x̅)/σ_x), = Φ ((ρ - x̅)/σ_x)- Φ (- (ρ + x̅)/σ_x), = 1/2 + 1/2 erf( (ρ - x̅)/√(2)σ_x) -1/2 - 1/2 erf( -(ρ + x̅)/√(2)σ_x), = 1/2( erf(ρ - x̅) - erf(-(ρ + x̅)) ).Since erf is odd, one obtains f(x̅) =1/2( erf(ρ - x̅) + erf(ρ + x̅) ).
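The closed form for f can be checked against simulation. The sketch below fixes σ_x = 1/√2, so that the erf arguments simplify as above; the sample size, seed, and tolerance are arbitrary choices.

```python
import math
import random

def f_closed_form(xbar, rho):
    # f(xbar) = 1/2 * (erf(rho - xbar) + erf(rho + xbar)), valid for sigma_x = 1/sqrt(2)
    return 0.5 * (math.erf(rho - xbar) + math.erf(rho + xbar))

def f_monte_carlo(xbar, rho, n=200_000, seed=0):
    # empirical P(|x| <= rho) for x ~ N(xbar, sigma_x^2) with sigma_x = 1/sqrt(2)
    rng = random.Random(seed)
    sigma = 1.0 / math.sqrt(2.0)
    hits = sum(1 for _ in range(n) if abs(rng.gauss(xbar, sigma)) <= rho)
    return hits / n

xbar, rho = 0.3, 1.0
print(abs(f_closed_form(xbar, rho) - f_monte_carlo(xbar, rho)))  # small
```

The same helper also exhibits the monotonicity used in the corollary: f decreases as the mean x̄ moves away from zero.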
The error function is differentiable with erf^'(z) = 2/√(π) e^-z^2, thus f^'(x̅) = 1/√(π)( exp( - (ρ + x̅)^2 ) - exp( -(ρ - x̅)^2 ) )= - 2/√(π) e^-(ρ^2 + x̅^2)sinh( 2 ρx̅) ≤ 0 for x̅≥ 0. Hence, f is decreasing on ℝ_+ and, by symmetry, increasing on ℝ_-.

§ WEIGHTED L1 POINCARÉ INEQUALITY (PROOF OF LEM. <REF>)

This result is built upon the following theorem, which links a function to its gradient in L^1 norm: Let W^1,1(Ω) be the Sobolev space on Ω⊂ℝ^d. Let Ω be a bounded convex domain with diameter D and f ∈ W^1,1(Ω) of zero average on Ω; then∫_Ω |f(x)| dx ≤D/2∫_Ω || ∇ f(x) || dx

Lem. <ref> is an extension of Thm. <ref>. In practice, we show that their proof still holds for log-concave weights.

Let L > 0 and let ρ be any non-negative log-concave function on [0,L]. Then for any f ∈ W^1,1(0,L) such that ∫_0^L f(x) ρ(x) dx = 0 one has:∫_0^L |f(x)| ρ(x) dx ≤ 2 L ∫_0^L |f^'(x)| ρ (x) dx

The proof is based on the following inequality for log-concave functions. Let ρ be any non-negative log-concave function on [0,1] such that ∫_0^1 ρ(x) dx = 1; then ∀ x∈ (0,1),H(ρ,x) := 1/ρ(x)∫_0^x ρ(t) dt ∫_x^1 ρ(t) dt ≤ 1

Since any non-negative log-concave function on [0,1] can be rewritten as ρ(x) = e^ν(x), where ν is a concave function on [0,1], and since x → e^x is increasing, the monotonicity of ν is preserved; as for concave functions, ρ can be either increasing, decreasing, or increasing then decreasing on [0,1]. Hence, ∀ x ∈ (0,1), either* ρ(t) ≤ρ(x) for all t ∈ [0,x],* ρ(t) ≤ρ(x) for all t ∈ [x,1]. Assume that ρ(t) ≤ρ(x) for all t ∈ [0,x] without loss of generality. Then,∀ x∈ (0,1),H(ρ,x):= 1/ρ(x)∫_0^x ρ(t) dt ∫_x^1 ρ(t) dt= ∫_0^x ρ(t)/ρ(x)∫_x^1 ρ(t) dt ≤∫_0^x dt ∫_x^1 ρ(t) dt ≤ x ∫_0^1 ρ(t) dt ≤ x ≤ 1

This proof is exactly the same as <cit.>, where we use lemma <ref> instead of a concavity inequality. We provide it for the sake of completeness. A scaling argument ensures that it is enough to prove it for L = 1.
Moreover, dividing both sides of (<ref>) by ∫_0^1 ρ(x)dx, we can assume without loss of generality that ∫_0^1 ρ(x) dx = 1. Since ∫_0^1 f(x) ρ(x) dx = 0, integration by parts yields: f(y) = ∫_0^y f^'(x) ∫_0^x ρ(t) dt dx - ∫_y^1 f^'(x) ∫_x^1 ρ(t) dt dx, hence |f(y)|≤∫_0^y |f^'(x)| ∫_0^x ρ(t) dt dx + ∫_y^1 |f^'(x)| ∫_x^1 ρ(t) dt dx. Multiplying by ρ(y), integrating over y, and applying Fubini's theorem leads to ∫_0^1 |f(y)| ρ(y) dy ≤ 2 ∫_0^1 | f^'(x)| ∫_0^x ρ(t) dt ∫_x^1 ρ(t) dt dx, and applying (<ref>) of lemma <ref> ends the proof.

While theorem <ref> provides a one-dimensional weighted Poincaré inequality, we actually seek one in ℝ^d. The idea of <cit.> is to use arguments of <cit.> to reduce the d-dimensional problem to a one-dimensional problem by splitting any convex set Ω into subspaces Ω_i thin in all but one direction and such that an average property is preserved. We simply state their result. Let Ω⊂ℝ^d be a convex domain with finite diameter D and u ∈ L^1(Ω) such that ∫_Ω u = 0. Then, for any δ > 0, there exists a decomposition of Ω into a finite number of convex domains Ω_i satisfyingΩ_i ∩Ω_j = ∅ for i ≠ j, Ω̅ = ⋃Ω̅_i, ∫_Ω_i u = 0, and each Ω_i is thin in all but one direction, i.e., in an appropriate rectangular coordinate system (x,y) = (x,y_1,…,y_d-1) the set Ω_i is contained in { (x,y) : 0 ≤ x ≤ D, 0 ≤ y_i ≤δ for i = 1,…,d-1 }.

This decomposition together with theorem <ref> allows us to prove the d-dimensional weighted Poincaré inequality. By density, we can assume that u ∈ C^∞(Ω̅). Hence, u p ∈ C^2(Ω̅). Let M be a bound for u p and all its derivatives up to the second order. Given δ > 0, decompose the set Ω into Ω_i as in lemma <ref> and express z ∈Ω_i in the appropriate rectangular basis z =(x,y), where x ∈ [0,d_i], y ∈ [0,δ]^d-1. Define ρ(x_0) as the (d-1)-volume of the intersection between Ω_i and the hyperplane {x = x_0}.
Since Ω_i is convex, ρ is concave, and from the smoothness of u p one has: | ∫_Ω_i | u(x,y) | p(x,y) dx dy - ∫_0^d_i |u(x,0)| p(x,0) ρ(x) dx | ≤ (d-1) M |Ω_i| δ| ∫_Ω_i | ∂ u/∂ x(x,y) | p(x,y) dx dy - ∫_0^d_i |∂ u/∂ x (x,0)| p(x,0) ρ(x) dx | ≤ (d-1) M |Ω_i| δ| ∫_Ω_iu(x,y)p(x,y) dx dy - ∫_0^d_i u(x,0) p(x,0)ρ(x) dx | ≤ (d-1) M |Ω_i| δ

These equations allow us to switch from d-dimensional integrals to one-dimensional integrals, to which we could apply theorem <ref> provided that ∫_0^d_i u(x,0) p(x,0) ρ(x) dx = 0 (which is not satisfied here). Instead, we can apply theorem <ref> to g(x) = u(x,0) - ∫_0^d_i u(x,0) p(x,0) ρ(x) dx/ ∫_0^d_i p(x,0) ρ(x) dx with weight function x → p(x,0) ρ(x). Indeed, x → p(x,0) is log-concave (as the restriction of a log-concave function along one direction), x →ρ(x) is log-concave (as a concave function), and so is x → p(x,0) ρ(x) (as a product of log-concave functions). Moreover, g ∈ W^1,1(0,d_i) and ∫_0^d_i g(x) p(x,0) ρ(x) dx = 0 by construction. Therefore, applying theorem <ref> one gets: ∫_0^d_i | g(x) | p(x,0) ρ(x) dx ≤ 2 d_i ∫_0^d_i |g^' (x) |p(x,0) ρ(x) dx∫_0^d_i |u(x,0)| p(x,0) ρ(x) dx ≤ 2 d_i ∫_0^d_i |∂ u/∂ x(x,0)| p(x,0) ρ(x) dx - | ∫_0^d_i u(x,0) p(x,0) ρ(x) dx | ∫_0^d_i |u(x,0)| p(x,0) ρ(x) dx ≤ 2 d_i ∫_0^d_i |∂ u/∂ x(x,0)| p(x,0) ρ(x) dx + (d-1) M |Ω_i| δ, where we use equation (<ref>) together with ∫_Ω_i u(z) p(z) dz = 0 to obtain the last inequality. Finally, from (<ref>)∫_Ω_i | u(x,y) | p(x,y) dx dy ≤∫_0^d_i |u(x,0)| p(x,0) ρ(x) dx+(d-1) M |Ω_i| δ, from (<ref>)∫_Ω_i | u(x,y) | p(x,y) dx dy ≤ 2 d_i ∫_0^d_i |∂ u/∂ x(x,0)| p(x,0) ρ(x) dx+(d-1) M |Ω_i| δ (1 + 2 d_i), from (<ref>)∫_Ω_i | u(x,y) | p(x,y) dx dy ≤ 2 d_i ∫_Ω_i | ∂ u/∂ x(x,y) | p(x,y) dx dy+(d-1) M |Ω_i| δ (1 + 4 d_i), and hence ∫_Ω_i | u(x,y) | p(x,y) dx dy ≤ 2 d_i ∫_Ω_i || ∇ u(x,y) || p(x,y) dx dy +(d-1) M |Ω_i| δ (1 + 4 d_i). Summing over the Ω_i leads to∫_Ω |u(z)|p(z) dz ≤ 2 D ∫_Ω || ∇ u(z) || p(z) dz + (d-1) M | Ω | δ (1 + 4 D), and since δ is arbitrary one gets the desired result.
§ REGRET PROOFSBounding R^_1. On E, x_t≤ X for all t ∈ [0,T]. Moreover, since θ_t ∈𝒮 for all t∈[0,T] due to the rejection sampling, (P(θ_t)) ≤ D.From the definition of the matrix 2-norm, sup_x≤ X x^ P(θ_t) x ≤ X^2P(θ_t)^1/2^2_2. Since for any A ∈ℝ^m,n, A_2 ≤A, one has P(θ_t)^1/2^2_2 ≤ P(θ_t)^1/2^2 =P(θ_t). As a consequence, for any t ∈ [0,T], sup_x≤ X x^ P(θ_t) x ≤ X^2 D and the martingale increments are bounded almost surely on E by 2 D X^2. Applying Thm. <ref> to R^_1 with ϵ = 2 D X^2 √(2 T log(4/δ)) one obtains that R^_1= ∑_t=0^T {𝔼 ( x_t+1^ P(θ_t+1) x_t+1|ℱ_t) - x_t^ P(θ_t) x_t}{E_t }≤2 D X^2 √(2 T log(4/δ))with probability at least 1 - δ/2.Bounding R^_3. The derivation of this bound is directly collected from <cit.>. Since our framework slightly differs, we provide it for the sake of completeness. The whole derivation is performed conditioned on the event E. R^_3= ∑_t=0^T { z_t^θ_t P(θ_t) θ_t^ z_t- z_t^⊤θ_* P(θ_t) θ_*^ z_t}= ∑_t=0^T {θ_t^ z_t ^2_P(θ_t)-θ_*^ z_t ^2_P(θ_t)}, = ∑_t=0^T ( θ_t^ z_t _P(θ_t)-θ_*^ z_t _P(θ_t))( θ_t^ z_t _P(θ_t)+θ_*^ z_t _P(θ_t))By the triangular inequality, θ_t^ z_t _P(θ_t)-θ_*^ z_t _P(θ_t)≤ P(θ_t)^1/2 ( θ_t^ z_t -θ_*^ z_t ) ≤P(θ_t)(θ_t^ -θ_*^)z_t. Making use of the fact that θ_t ∈𝒮 by construction of the rejection sampling, θ_⋆∈𝒮 by Asm. <ref> and that sup_t∈[0,T] z_t≤√( (1 + C^2) X^2 ) thanks to the conditioning on E and Prop. <ref>, one gets:R^_3 ≤∑_t=0^T ( √(D) (θ_t^ -θ_*^)z_t ) ( 2 S √(D)√( (1 + C^2) X^2 ))≤ 2 S D √( (1 + C^2) X^2 )∑_t=0^T (θ_t^ -θ_*^)z_tand one just has to bound ∑_t=0^T (θ_t^ -θ_*^)z_t. Let τ(t) ≤ t be the last time step before t when the parameter was updated. 
Using Cauchy-Schwarz inequality, one has:∑_t=0^T (θ_t^ -θ_*^)z_t = ∑_t=0^T (V^1/2_τ(t) (θ_τ(t) -θ_*))^V_τ(t)^-1/2 z_t ≤∑_t=0^T θ_τ(t) -θ_*_V_τ(t)z_t _V_τ(t)^-1However, on E, θ_τ(t) -θ_*_V_τ(t)≤θ_τ(t) -θ_τ(t)_V_τ(t)+ θ_* -θ_τ(t)_V_τ(t)≤β_τ(t)(δ^') + γ_τ(t)(δ^') ≤β_T(δ^') + γ_T(δ^') and, thanks to the lazy update rule z_t _V_τ(t)^-1≤z_t _V_t^-1(V_t)/(V_τ(t))≤ 2z_t _V_t. Therefore, R^_3 ≤ 4 S D √( (1 + C^2) X^2 )( β_T(δ^') + γ_T(δ^') ) ∑_t=0^T z_t _V_t^-1. Bounding ∑_k=1^K T_k α_k. From section <ref>,∑_k=1^K T_k α_k ≤ 2 τ∑_k ∈𝒦^denα_k+ τ∑_k=1^K α_k.First, it is clear from α_k= (R^,1_t_k + R^,3_t_k ){E_t_k}= (J(θ_t_k)- 𝔼[J(θ_t_k)| ℱ^x_t_k,E_t_k] ) {E_t_k},+ ( 𝔼[ [I; K(θ_t_k)^⊤ ]_V_t_k^-1|ℱ^x_t_k] -[I; K(θ_t_k)^⊤ ]_V_t_k^-1),that the sequence {α_k}_k=1^K is a martingale difference sequence with respect to ℱ^x_t_k. Moreover, since θ_t_k∈𝒮 for all k∈ [1,K], α_k≤ 2 D + 2 √( (1 + C^2 ) /λ). Therefore, * ∑_k ∈𝒦^denα_k ≤( 2 D + 2 √( (1 + C^2 ))) |K^den|,* with probability at least 1-δ/2, Azuma's inequality ensures that ∑_k=1^K α_k ≤( 2 D + 2 √( (1 + C^2 ))) √(2 |K| log (4/δ)).From Lem. <ref> and Cor. <ref>, |K^det| ≤ (n+d) log_2 ( 1 + T X^2 (1+C^2)/ λ) and |K| ≤ (n+d) log_2 ( 1 + T X^2 (1+C^2)/ λ) T/τ. Finally, one obtains:∑_k=1^K T_k α_k ≤4 ( 2 D + 2 √( (1 + C^2 )))(n+d) log_2 ( 1 + T X^2 (1+C^2)/ λ) √(log (4/δ))T/τ Bounding ∑_t=0^T z_t _V_t^-1. On E, for all t ∈ [0,T], z_t^2 ≤ (1 + C^2) X^2. Thus, from Cauchy-Schwarz inequality and Prop. <ref>,∑_t=0^T z_t _V_t^-1≤√(T)( ∑_t=0^T z_t ^2_V_t^-1)^1/2≤√(T)√(2(n + d) (1 + C^2) X^2 /λ)log^1/2( 1 + T (1 + C^2) X^2/λ(n+d)).
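As an illustration, the two inequalities used in this last step (Cauchy-Schwarz, then the logarithmic bound on the design-matrix potential) can be checked directly in the scalar case, where V_t = λ + ∑_s<t z_s^2. Taking λ ≥ X^2 keeps each ratio z_t^2/V_t below 1, so that z_t^2/V_t ≤ 2 log(1 + z_t^2/V_t) telescopes into 2 log(V_T/λ). The constants and seed below are arbitrary.

```python
import math
import random

rng = random.Random(42)
X, lam, T = 1.0, 1.0, 5_000          # lam >= X**2 keeps z**2 / V <= 1
z = [rng.uniform(-X, X) for _ in range(T)]

V = lam
lhs = 0.0          # sum_t |z_t| / sqrt(V_t)
quad = 0.0         # sum_t z_t**2 / V_t
for zt in z:
    lhs += abs(zt) / math.sqrt(V)
    quad += zt * zt / V
    V += zt * zt   # V_{t+1} = V_t + z_t**2

cs_bound = math.sqrt(T * quad)                       # Cauchy-Schwarz step
log_bound = math.sqrt(T * 2.0 * math.log(V / lam))   # logarithmic potential step
print(lhs <= cs_bound <= log_bound)  # True
```

Both inequalities hold deterministically here, matching the √T log-type growth of the bound above.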
The distribution of exoplanet masses is not primordial. After the initial stage of planet formation is complete, the gravitational interactions between planets can lead to the physical collision of two planets, or the ejection of one or more planets from the system. When this occurs, the remaining planets are typically left in more eccentric orbits. Here we use present-day eccentricities of the observed exoplanet population to reconstruct the initial mass function of exoplanets before the onset of dynamical instability. We developed a Bayesian framework that combines data from N-body simulations with present-day observations to compute a probability distribution for the planets that were ejected or collided in the past. Integrating across the exoplanet population, we obtained an estimate of the initial mass function of exoplanets. We find that the ejected planets are primarily sub-Saturn type planets.
While the present-day distribution appears to be bimodal, with peaks around ∼ 1 and ∼ 20, this bimodality does not seem to be primordial. Instead, planets around ∼ 60 appear to be preferentially removed by dynamical instabilities.Attempts to reproduce exoplanet populations using population synthesis codes should be mindful of the fact that the present population has been depleted of intermediate-mass planets.Future work should explore how the system architecture and multiplicity might alter our results.planets and satellites: dynamical evolution and stability – planets and satellites: gaseous planets – planets and satellites: formation § INTRODUCTIONThe past twenty years of exoplanet observations have revealed a large diversity of planetary systems <cit.>, including a large number of planets with orbital eccentricities much higher than those found in the solar system. These eccentricities challenged our understanding of planet formation occurring in a disk, where eccentricities are dampened by planet-disk interactions. Several authors have suggested that these high eccentricities might have a dynamical origin <cit.>. In this view, the eccentricities arise from planet-planet interactions which can cause a planetary system to become dynamically unstable. During a dynamical instability, close encounters between planets lead to either collisions or planet-planet scatterings. This dynamically active period ends only with a collision, or the ejection of one of the planets from the system. <cit.> showed that this type of process can reproduce the observed distribution of exoplanet eccentricities (at least those above e = 0.2) for a wide range of possible initial conditions. A similar result was obtained by <cit.>; these authors also showed how the eccentricities can be subsequently dampened by a planetesimal belt for less massive planets (m_1 + m_2 < M_J). 
Several authors have noticed a mass-eccentricity correlation among exoplanets: low-mass planets tend to be on less eccentric orbits than high-mass planets <cit.>. <cit.> proposed that planet masses in the same system are correlated. Specifically, systems that form high-mass planets also tend to form equal-mass planets, while low-mass planet systems produce a wide diversity of mass ratios. The present work is an extension of this idea. We use the present-day eccentricities of observed exoplanets to estimate the masses of the planets that were ejected or collided. The result of this process is the exoplanet initial mass function (IMF).This paper is organized as follows. In Section <ref> we summarize the theory behind dynamical stability and discuss previous work. In Section <ref> we present an introduction to Approximate Bayesian Computation, and describe how the algorithm can be adapted to the problem we want to solve. We describe our core simulations in Section <ref>, including our data selection method. The results of these simulations are presented in Section <ref>. In Section <ref> we discuss possible caveats and suggest opportunities for future research. Finally, we summarize and conclude in Section <ref>.§ DYNAMICAL INSTABILITYIn this section we cover some important background and key results regarding the stability of planetary systems. In principle, the eccentricity of the surviving planet is determined by the need to conserve energy and angular momentum. For example, in the simple case of two planets, on coplanar orbits the energy E and angular momentum Λ of each planet is, E=- G M m/2a Λ =M m √(G a (1 - e^2)/M + m),where M is the stellar mass, and m, a, and e are the mass, semimajor axis, and eccentricity of the planet <cit.>. In practice, this has limited utility because the problem is degenerate, even in the simplest case of a two-planet system. After a dynamical instability, the escaping planet carries an uncertain amount of angular momentum <cit.>. 
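The two conserved quantities are straightforward to evaluate. The sketch below uses approximate SI values for a Sun-Jupiter-like pair; the numerical constants are rounded reference values, not precise ephemerides.

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2

def orbital_energy(M, m, a):
    # E = -G M m / (2 a)
    return -G * M * m / (2.0 * a)

def orbital_angular_momentum(M, m, a, e):
    # Lambda = M m sqrt(G a (1 - e^2) / (M + m))
    return M * m * math.sqrt(G * a * (1.0 - e * e) / (M + m))

M_sun, m_jup = 1.989e30, 1.898e27          # kg
a_jup, e_jup = 7.785e11, 0.049             # m, dimensionless
E = orbital_energy(M_sun, m_jup, a_jup)
L = orbital_angular_momentum(M_sun, m_jup, a_jup, e_jup)
print(E, L)   # E is negative (bound orbit), L is positive
```

For these inputs E is of order -1.6e35 J and Λ of order 2e43 kg m² s⁻¹, consistent with Jupiter's orbit.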
Even if the angular momentum of the ejected planet was known, one would still have only two equations for five unknowns (the mass of the ejected planet, and the initial mass and semimajor axis of the two planets). In sections <ref> and <ref> we describe the statistical approach that we use to tackle this degeneracy.There are two common definitions of dynamical stability. In Hill stability, the ordering of the planets, in terms of their proximity to the central star, is fixed. In other words, the planet orbits never cross, but the outermost planet is allowed to escape the system. The other, more stringent definition, is Lagrange stability. In it, planets retain their ordering (i.e. are Hill stable), remain bound to the star, and variations in semimajor axis and eccentricity remain bounded. While Lagrange stability is the more useful definition, it has proven more difficult to solve mathematically than Hill stability.There is no analytic solution for the evolution of three gravitating bodies, but in some cases it is possible to derive analytic constraints on the motion of the planets <cit.>. These constraints can be understood as limitations on angular momentum exchange between planets. For example, <cit.> showed that, to first order, a pair of planets on coplanar orbits are guaranteed to be Hill stable whenever α^-3( μ_1 + μ_2/δ^2) (μ_1 γ_1 + μ_2 γ_2 δ)^2 > 1 + 3^4/3μ_1 μ_2/α^4/3where δ = √(a_2/a_1) γ_k= √(1 - e_k^2) μ_k = m_k/M α = μ_1 + μ_2and where M is the mass of the star, m_k is the mass of a planet, and a and e are the planet semimajor axis and eccentricity in barycentric coordinates. Therefore, for given planet masses and eccentricities, there is a critical value of δ that guarantees Hill stability <cit.>.<cit.> refer to the entire left-hand-side of Equation (<ref>) as β, and the right-hand-side as β_ crit. 
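The criterion translates directly into code; in the sketch below the function and argument names are ours, and the planets are assumed coplanar as in the expression above.

```python
def hill_stability_ratio(mu1, mu2, a1, a2, e1=0.0, e2=0.0):
    """Return beta / beta_crit for a coplanar two-planet system.

    mu_k are planet-star mass ratios; Hill stability is guaranteed
    (to first order) when the returned value exceeds 1.
    """
    alpha = mu1 + mu2
    delta = (a2 / a1) ** 0.5
    g1 = (1.0 - e1 * e1) ** 0.5
    g2 = (1.0 - e2 * e2) ** 0.5
    beta = alpha ** -3 * (mu1 + mu2 / delta ** 2) * (mu1 * g1 + mu2 * g2 * delta) ** 2
    beta_crit = 1.0 + 3.0 ** (4.0 / 3.0) * mu1 * mu2 / alpha ** (4.0 / 3.0)
    return beta / beta_crit

# two Jupiter-mass planets (mu = 1e-3) on circular orbits at 1 and 2 au:
print(hill_stability_ratio(1e-3, 1e-3, 1.0, 2.0))  # > 1, Hill stable
```

Moving the outer planet inward to 1.1 au drops the ratio below 1, so Hill stability is no longer guaranteed.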
They showed that planet-planet scatterings naturally lead to planetary systems at the edge of dynamical stability, with β/β_ crit just above 1, and that this result is in agreement with observation. This is further evidence that dynamical instabilities driven by planet-planet scatterings occur frequently in planetary systems.For a system of two low-mass planets with near-circular coplanar orbits, the Hill stability criterion (Equation (<ref>)) is approximately Δ > 2 √(3) <cit.>, where Δ is the semimajor axis separation measured in mutual Hill radii, Δ = a_2 - a_1/R_H, R_H= ( m_1 + m_2/3 M)^1/3( a_1 + a_2/2),where m_1, m_2, a_1, and a_2 are the masses and semimajor axes of the two planets. For systems with more than two planets there is no analytic solution, but <cit.> showed that these systems are probably unstable for separations up to Δ = 10, at least for equal-mass planets. The time before planets experience their first close encounter (t_ ce) grows exponentially with Δ. For a given value of Δ, t_ ce seems to depend weakly on the number of planets (for systems with more than two planets) and the planet masses <cit.>. For a planetary system with ten equal-mass planets, <cit.> found log_10( t_ ce/ yr) ∼ -5 - log_10( μ/10^-3) + 1.44 Δ( μ/10^-3)^1/12,where μ = m/M is the planet-star mass ratio. This is an extremely steep dependence on Δ, meaning that two systems with similar Δ values can have very different lifetimes.§ APPROXIMATE BAYESIAN COMPUTATIONWe saw in section <ref> that it is not possible to exactly determine the mass of an ejected exoplanet given the present-day observables. In this section we explain how one can use modern statistical methods together with N-body simulations to obtain a probability distribution for the ejected planet mass. Our tool of choice is Approximate Bayesian Computation, or ABC <cit.>. 
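For a concrete sense of how steep the close-encounter scaling above is, the fit can be evaluated directly; the helper below assumes the ten-planet, equal-mass fit and should not be extrapolated outside the regime in which it was calibrated.

```python
import math

def log10_t_ce(delta, mu):
    # log10(t_ce / yr) ~ -5 - log10(mu / 1e-3) + 1.44 * delta * (mu / 1e-3)**(1/12)
    m = mu / 1e-3
    return -5.0 - math.log10(m) + 1.44 * delta * m ** (1.0 / 12.0)

# ten Jupiter-mass-ratio planets (mu = 1e-3):
for delta in (4.0, 6.0, 8.0):
    print(delta, log10_t_ce(delta, 1e-3))
# widening the spacing by 2 mutual Hill radii buys roughly 2.9 orders of
# magnitude in lifetime, which is why similar spacings can give very
# different lifetimes
```
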
Like all Bayesian algorithms, ABC is a way to solve the problem P(θ|D)∝P(D|θ) P(θ), where D is some data set, θ is a set of model parameters, and P(θ) is the Bayesian prior. In its simplest form, the ABC algorithm repeatedly draws a candidate θ' from the prior P(θ), simulates a synthetic dataset D' from the model with parameters θ', and accepts θ' whenever ρ(D, D') < ϵ. The function ρ is a distance measure that compares the synthetic dataset D' and the observed data D, and ϵ is some critical value. The resulting collection of values {θ_i} approximately follows the distribution P(θ|D).

§.§ Application: two-planet instability

The simplest problem is a two-planet instability that always results in an ejection (i.e. planets are not allowed to collide). In this case, the data D = (m, e) is the mass and final eccentricity of the observed planet, and the model parameter θ = q is the mass ratio of the ejected and remaining planets (q = m_ ej / m_ rem). We choose a uniform prior for P(q) (we justify this choice in <ref>). The ABC algorithm then amounts to repeatedly drawing q from the prior, running an N-body simulation with that mass ratio, and accepting the draw whenever the simulated system reproduces the observed (m, e) to within the tolerance ϵ. In this investigation we collect N = 1000 accepted draws. Here we have overlooked the fact that the result of the N-body simulation depends on the initial separation between the two planets. Our approach is to choose a range of orbital separations that are consistent with planet formation: we select only the runs that remain stable for at least 0.5 Myr (see section <ref>). Algorithmically, this adds an acceptance condition on t', where t' is the time to the ejection. The final step is to allow collisions between planets and record whether or not a collision occurred. Let C be a boolean value that is true when the simulation ends in a collision, so that each accepted draw becomes a pair (q_i, C_i). Given the list of {(q_i, C_i)}, one can construct a probability distribution of the masses of the two planets. The set of values {m_1,i, m_2,i} approximately follows the posterior distribution P(m_1, m_2| m, e). Finally, we repeat this process for every observed exoplanet that has a measured mass, and for each exoplanet we obtain a set {m_1,i, m_2,i}.
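The accept/reject loop described in this section can be sketched as follows. Here `run_nbody` is a stand-in for the actual N-body integration; the toy formulas inside it are arbitrary placeholders so that the loop is runnable, not real dynamics.

```python
import random

rng = random.Random(1)

def run_nbody(q):
    """Toy surrogate for an N-body run with mass ratio q = m_ej / m_rem.

    Returns (final_eccentricity, time_to_ejection_Myr, collided).
    The formulas here are arbitrary placeholders, NOT real dynamics.
    """
    e_final = q / (1.0 + q) + rng.gauss(0.0, 0.02)
    t_eject = rng.uniform(0.0, 5.0)            # Myr
    collided = rng.random() < 0.1
    return e_final, t_eject, collided

def abc_two_planet(e_obs, n_accept=1000, eps=0.05, t_min=0.5):
    accepted = []
    while len(accepted) < n_accept:
        q = rng.uniform(0.0, 1.0)              # uniform prior P(q)
        e_sim, t_eject, collided = run_nbody(q)
        # keep runs that stay stable for at least t_min and match the data
        if t_eject >= t_min and abs(e_sim - e_obs) < eps:
            accepted.append((q, collided))
    return accepted

sample = abc_two_planet(e_obs=0.3)
q_mean = sum(q for q, _ in sample) / len(sample)
print(round(q_mean, 2))  # posterior mean of q; for this toy model it
                         # concentrates where q/(1+q) is close to e_obs
```

Swapping the surrogate for a real integrator, and mapping each accepted (q_i, C_i) to planet masses, yields the per-planet posterior sets described above.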
Because all the sets have the same size (N = 1000), it is straightforward to combine them together. The result is an estimate of the exoplanet IMF. In section <ref> we explain our simulations in greater detail, and we give a rationale for choosing a uniform prior for P(q). We also explain how the stability requirement is implemented in practice.

§ OUR SIMULATIONS

In this section we describe how we selected our list of exoplanets, and we show that exoplanet observations support the use of a uniform prior P(q) as described in section <ref>. We also explain our simulations in detail, and explain how exoplanet observations inform our choices of initial conditions. At the end of the section we explain how the stability criterion is implemented in detail.

§.§ Data selection

We took the exoplanet catalogue from on 16 August 2016. We selected the planets that have mass measurements (i.e. we did not use any mass-radius relation). This means that our sample is dominated by planets discovered by the Radial Velocity method, meaning that we only have the planet's minimum mass. We then exclude planets with orbital periods shorter than 10 days, as many of those may have experienced high-eccentricity migration and tidal circularization. Our resulting dataset included six pulsar planets, so we removed those as well. This left us with 553 exoplanets. Figure <ref> shows the semimajor axes and eccentricities of these planets; Table <ref> also gives an overview of the dataset. In this investigation we limit the dynamical simulations to giant planets. To define “giant exoplanet” we use the planet-star mass ratio m/M_⋆, which we normalize with the Jupiter-Sun mass ratio for convenience: μ = (m/M_⋆) / (M_J/M_⊙), where M_J is the mass of Jupiter. We define “giant exoplanets” as those with 0.05 < μ < 10. As a point of reference, Saturn has μ = 0.30, Neptune has μ = 0.054, and a super-Earth with m = 10 M_⊕ orbiting the Sun would have μ = 0.031.
Table <ref> lists the seven planets that were discovered by a method other than radial velocity or transit.

§.§ Prior distribution of q / mass of planet 2

Our goal is to use an informed prior P(q). Our dataset includes 171 planets with giant planet companions. Figure <ref> shows the cumulative distribution of the mass ratios of neighbouring planets in our dataset. The Kolmogorov-Smirnov test cannot distinguish this distribution from the uniform distribution (p-value of 0.96). Therefore, a uniform Bayesian prior is a reasonable choice. We choose 10 values for q spread uniformly between 0 and 1, q ∈ {0.1, 0.2, 0.3, ⋯, 1.0}, and set m_2 = q m_1.

§.§ Mass and semimajor axis of planet 1

The collision cross section between two planets is σ = π R^2 (1 + v_esc^2/v_∞^2) = π R^2 (1 + Θ), where R is the radius of the planet, v_esc is the escape speed at the planet surface, and v_∞ is the relative speed of the two planets before gravitational focusing. The focusing factor Θ = v_esc^2 / v_∞^2 is commonly known as the Safronov number. In the case of a dynamical instability, where planets approach each other at speeds comparable to the orbital speed, the Safronov number becomes Θ ∼ (1/2) (m/M_⋆) (a/R). When Θ ≪ 1, close encounters between planets are likely to lead to collisions, and when Θ ≫ 1 ejections are more common <cit.>. The difference is important because collisions are more likely to leave planets in less eccentric orbits than ejections, since collisions conserve the total angular momentum in the system while ejected planets carry angular momentum away <cit.>. We would like our simulations to have planet masses (in terms of μ) and Safronov numbers comparable to the observed population, so that our runs have the correct proportion of collision and ejection events. Unfortunately, most planets in our sample do not have measured radii.
Therefore, we make the simplifying assumption that most planets have a density similar to Jupiter, and introduce the quantity θ = (m/M_J)^(2/3) (M_⋆/M_⊙)^(-1) (a / 1 AU). With this definition, a Jupiter-mass planet orbiting a Sun-like star at 1 AU has θ = 1. As a point of reference, table <ref> compares Θ and θ for the giant planets in the solar system. The two quantities are broadly comparable, but θ can be calculated for the planets in our sample. Figure <ref> shows the distribution of μ and θ for the exoplanets in our dataset. Clearly there is a wide range of values for both μ and θ. For this reason, we decided to do three different sets of runs (calling the sets A, B, and C) with the mass and semimajor axis of planet 1 spread along the best-fit line in figure <ref>. Planet 1 is the more massive planet in each run, and is the one more likely to survive. Our choices for m_1 and a_1 — shown in table <ref> — are a compromise between covering the full range of observed μ and θ, while also having our runs concentrated in the regions of the parameter space that have more planets. As it turns out, 56% of the planets in our sample are closer to set B (on a log scale). Therefore, we perform most of our runs in set B (see table <ref>).

§.§ Other orbital elements

We have already covered how we select m_1, a_1, and m_2. The remaining orbital elements are chosen to make our simulations consistent with planet formation. Giant planets form inside a protoplanetary disk, with typical lifetimes on the order of ∼ 3-6 Myr <cit.>. Orbital configurations that lead to orbit crossing on a timescale significantly shorter than 3 Myr would lead to ejections and collisions during the disk phase, with enough time for the disk to subsequently dampen the eccentricity of the remaining planet. For this reason, we require that the system experience no collisions or ejections for at least 0.5 Myr. To conduct our simulations, we select a range of semimajor axes for planet 2 near the Hill stability limit.
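Two small computations used in setting up the run grid can be sketched as follows (names ours). The form of θ is reconstructed from its stated normalization, θ = 1 for a Jupiter-mass planet around a Sun-like star at 1 AU; and because R_H is linear in a_1 + a_2, the relation Δ = (a_2 - a_1)/R_H can be inverted for a_2 in closed form:

```python
def theta(m_over_mjup, mstar_over_msun, a_au):
    """Proxy Safronov number assuming Jupiter-like bulk density:
    theta = (m/M_J)^(2/3) (Mstar/Msun)^(-1) (a / 1 AU)."""
    return m_over_mjup ** (2.0 / 3.0) / mstar_over_msun * a_au

def a2_from_delta(a1, m1, m2, mstar, delta):
    """Invert Delta = (a2 - a1)/R_H, with
    R_H = ((m1+m2)/(3M))^(1/3) (a1+a2)/2, to place planet 2 on the
    Delta grid. Masses in solar masses, semimajor axes in au."""
    s = 0.5 * delta * ((m1 + m2) / (3.0 * mstar)) ** (1.0 / 3.0)
    return a1 * (1.0 + s) / (1.0 - s)   # assumes s < 1

theta_jupiter = theta(1.0, 1.0, 1.0)              # = 1 by construction
a2 = a2_from_delta(1.0, 1e-3, 1e-3, 1.0, 3.0)     # two Jupiters, Delta = 3
```

The inversion is exact: substituting a_2 = a_1(1+s)/(1-s) back into the definition of Δ recovers the input value.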
The exact range of a_2 varies depending on the stability criterion. Figure <ref> shows the values of Δ (equation (<ref>)) for set B. The Δ values are equally spaced, in steps of 0.1. For each Δ, we solve for a_2 and we run ten N-body simulations using the hybrid integrator in mercury <cit.>. Both planets are initially in circular orbits, but we give them mutual inclinations of a few degrees (similar to the solar system). We assign each planet a random inclination I ∈ [0^∘, 5^∘], random longitude of ascending node Ω ∈ [0^∘, 360^∘], and random mean longitude λ ∈ [0^∘, 360^∘]. Mutual inclinations are important because an overly flat system will have an unrealistic number of collisions. The simulations initially run for 10 Myr. If there is an ejection or collision in less than 0.5 Myr, we do not use the run. If there are no collisions or ejections, we extend the runs in 20 Myr increments, up to 70 Myr. Figure <ref> shows a bar plot for each value of (Δ, m_2) that indicates the fraction of runs that became unstable in less than 0.5 Myr (red), in 0.5-70 Myr (green), or had no instability at the end of the 70 Myr run. Notice that smaller m_2 requires larger Δ to be stable for 0.5 Myr. This is the reason why the range of a_2 varies with m_2. In a similar way, changing m_1 also affects the stability region, so the ranges of a_2 that are consistent with planet formation change as well.

§ RESULTS OF PLANET-PLANET SCATTERING

In this section we present the results of our N-body simulations. In section <ref> we apply the Bayesian algorithm described in sections <ref> and <ref> to our two-planet simulations and we construct an estimate for the exoplanet IMF. Then in section <ref> we present a more limited set of simulations with three giant planets and discuss how a three-planet instability differs from the two-planet case.

§.§ Two-planet systems

Here we present our main results.
Figure <ref> shows the final eccentricity of the surviving planet for every single run in set B that had an instability after 0.5 Myr. By far the most common outcome was for the lower-mass planet to be ejected from the system. In some cases the two planets collided, and in others it was the inner, more massive planet that was ejected. As a general rule, collisions are associated with low-eccentricity outcomes, even for equal-mass planets. The connection between collisions and dynamically quiet histories has already been noted by other authors in the context of three-planet instabilities <cit.>. Perhaps the most important feature of Figure <ref> is the correlation between the eccentricity of the remaining planet e and the ratio of the mass of the planet that was ejected or destroyed to the mass of the surviving planet. In most cases q = m_2 / m_1, where planet 2 is the outer planet; but in the runs where the inner (more massive) planet is ejected, we write q = m_1 / m_2. The correlation between e and q is not surprising, but it is important — it is what allows us to produce a probability distribution of q. Figure <ref> makes this point more concrete. Here we choose four sample eccentricities (e_fin ∈ {0.2, 0.4, 0.6, 0.8}), and for each one we compute a probability distribution k(q) using the data in set B (Figure <ref>). Conceptually, for any given value of e one can draw a vertical line in Figure <ref> and select all the points near that line. We use kernel density estimation (KDE) to convert these points into a smooth distribution. A KDE is conceptually equivalent to replacing each point with a Gaussian distribution with standard deviation h (called the bandwidth). In Figure <ref> we chose a wide bin (e_fin ± 0.1) to focus on the overall shape of the curve, but notice that in Figure <ref> there is a relatively small number of points for any given e_fin. As the number of simulations continues to increase, our statistics will gradually improve.
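The select-and-smooth step just described can be sketched in a few lines (names ours; the uniform toy scatter stands in for the actual simulation output, and a real analysis would tune the bandwidth h rather than fix it):

```python
import math
import random

def gaussian_kde_1d(samples, h):
    """Return a density estimate built by placing a Gaussian of
    standard deviation h (the bandwidth) on every sample point."""
    norm = 1.0 / (len(samples) * h * math.sqrt(2.0 * math.pi))
    def k(q):
        return norm * sum(math.exp(-0.5 * ((q - s) / h) ** 2) for s in samples)
    return k

# Toy stand-in for the (e_fin, q) scatter of a simulation set:
rng = random.Random(1)
runs = [(rng.uniform(0.0, 1.0), rng.uniform(0.0, 1.0)) for _ in range(2000)]

e_fin = 0.4
selected = [q for e, q in runs if abs(e - e_fin) <= 0.1]  # the e_fin +/- 0.1 bin
k = gaussian_kde_1d(selected, h=0.05)                     # smooth k(q)
```

Evaluating `k` on a grid of q values gives the smooth curve analogous to those shown for each sample eccentricity.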
Finally, to produce the exoplanet IMF, we generalize the procedure:

* For planets with μ ≥ 0.05 and e ≥ 0.05 we randomly select N = 1000 runs (see section <ref>) with |e_sim - e_obs| < 0.02 and compute the initial planet masses m_1 and m_2.

* For planets with μ < 0.05 or e < 0.05 we assume that there was no planet-planet scattering, and we simply copy the planet mass N = 1000 times.

Together, these sets give a synthetic population that approximately follows the initial mass function of the exoplanets in our dataset. As in Figure <ref>, we compute the kernel density k(μ). Figure <ref> shows the final result for all the planets in our dataset, and for the planets orbiting Sun-like stars. One interesting feature of these plots is that the observed exoplanet population appears to have a “valley” at around μ = 0.2 (sub-Saturn planets), but the synthetic IMF suggests that this valley is not primordial, but was carved out later by planet-planet scatterings. If this interpretation is correct, there should be a population of free-floating Saturn-like planets that were ejected from their host systems by a more massive planet. However, note that <cit.> have shown that dynamical instabilities cannot be the primary source of the observed population of free-floating giant planets <cit.>, as that would require an implausible number of planetary ejections per system.

§.§ Three-planet systems

Our results so far only apply to two-planet systems, and it is important to understand to what extent they generalize to multiple-planet systems. In this section we present a set of N-body simulations with three giant planets, and we discuss the similarities and differences between two-planet and three-planet systems. All our three-planet simulations have a Jupiter-mass planet at 1 au, and two exterior planets with half the mass of Jupiter. We call this set . The planets have a uniform separation in terms of their mutual Hill radii (equal Δ; see Equation (<ref>)).
We compare these runs against the two-planet runs with q = 1 (we call them ) and q = 0.5 (we call them ). The three-planet runs are generally less stable, and they need to be more widely spaced (Δ > 4) to last 0.5 Myr. That said, in terms of orbital energy and angular momentum the difference is not very large. For example, in a three-planet system with Δ = 4.1 the two outer planets have 79% of the binding energy and 113% of the angular momentum of the outer planet in a two-planet system with Δ = 2.5. Figure <ref> shows the final eccentricity of the surviving planets in each set of runs. Broadly speaking, the three-planet runs produce eccentricities similar to those in one of the two-planet sets, and somewhat lower than those in the other. In the three-planet runs there are a few cases where only one planet survived. These planets are more eccentric, and are generally consistent with the latter set.

§ SOME CAVEATS

In this section we discuss sources of bias or uncertainty that might affect the accuracy of our results. When possible, we discuss ways that these sources of error might be corrected or at least measured.

§.§ Observational errors and biases

The exoplanet sample used in this investigation is plagued with observational biases. There are completeness issues because massive planets are easier to detect than lower-mass planets. In addition, eccentricities from radial velocity surveys are unreliable, and the RV signal caused by two planets on circular orbits can be difficult to distinguish from the signal caused by a single planet on an eccentric orbit. De-biasing RV surveys is a difficult problem that lies beyond the scope of the present investigation. We did explore the completeness issue with a reduced subset of observed planets. We selected planets in the range of periods 10 to 300 days and masses between 0.1 and 10 , and repeated our procedure.
This subset suffers from small-number statistics, as there are only 27 planets in the set; nonetheless, our key conclusion continues to hold for this dataset — that the observed exoplanet population has a deficiency of sub-Saturn-mass planets, and that this gap is not primordial, but is the result of ejections.

§.§ System architecture

This investigation is limited to planetary systems with two dynamically dominant planets. In section <ref> we quantified some of the key differences between 2-planet and 3-planet systems. That said, a full investigation is not feasible because the problem is too degenerate when there are more than two giant planets in the system. In all our simulations the inner planet is the most massive. It is possible that if the order of the planets was reversed, the simulation results would be different. To test this, we conducted 550 simulations with the same parameters (i.e. set B with q = 0.5), but we reversed the order of the planets, so that the inner planet has a mass of 0.5 and the outer planet has 1 . We call this new set . The two sets of runs give similar results. The eccentricity distributions are similar, and in both cases it is always the less massive planet that becomes ejected. One difference is that one set is more collisional than the other: one set has 2.5 ejections per collision, while the other has only 1.3 ejections per collision. Figure <ref> gives an overview of the simulation results. The two sets of runs have the same mean eccentricity, but the reversed set may have a wider eccentricity distribution.

§.§ Eccentricity damping

The next complication is that gravitational interactions with a planetesimal belt can dampen a planet's eccentricity <cit.>. In fact, some authors suggest that this might have occurred in the solar system at the time of the Late Heavy Bombardment <cit.>. In this way, a planetesimal belt may erase information about the dynamical history of the planetary system.
Fortunately, this is mainly a concern for low-mass (sub-Saturn) planets on wide orbits <cit.>. Hence, it is unlikely that this type of damping has a major impact on our results. Nonetheless, it is desirable to quantify how often this occurs. <cit.> suggested a search for K ≈ 5 m s^-1 planets with long periods (P ∼ 10 yr). Another possibility is to look for planetary systems with highly depleted debris belts, which is an independent indicator of past dynamical instability <cit.>, and measure how often these systems have low eccentricities (which could suggest eccentricity damping).

§.§ Secular effects

Finally, in a multiple-planet system, planets can exchange angular momentum over secular timescales. For example, if a planetary system with three giant planets ejects one, the two remaining planets will exchange eccentricity periodically. In other words, some of the eccentricities that we observe today may differ from those at the end of the dynamical instability. To tackle this, one could model the secular evolution of the remaining planets and note the range of eccentricities that the planets acquire. This, in turn, gives a range of possible solutions for the ejected planet mass. We will revisit this idea in future work.

§ SUMMARY AND CONCLUSIONS

In this work we developed a novel Bayesian framework, supported by N-body simulations, to estimate the probable masses of planets that have suffered collisions or ejections, using the present-day masses and orbits of the surviving planets. When applied to the entire exoplanet population, this technique yields the exoplanet initial mass function. We then demonstrated the use of this technique using present-day observations and several thousand N-body integrations. The observed exoplanet population has a paucity of sub-Saturn-mass planets. We find that this gap is not primordial, and is mainly the result of dynamical instabilities where more massive giant planets eject less massive ones.
This in turn implies that there is a population of free-floating sub-Saturn planets that might be detectable by the upcoming WFIRST telescope through its micro-lensing survey <cit.>.

§ ACKNOWLEDGEMENTS

The authors are supported by the project grant “IMPACT” from the Knut and Alice Wallenberg Foundation (KAW 2014.0017), as well as the Swedish Research Council (grants 2011-3991 and 2014-5775), and the European Research Council Starting Grant 278675-PEBBLE2PLANET. D.C. acknowledges Dimitri Veras for helpful discussions and guidance in calculating the Hill stability limit. Computer simulations were performed using resources provided by the Swedish National Infrastructure for Computing (SNIC) at the Lunarc Center for Scientific and Technical Computing at Lund University. Some simulation hardware was purchased with grants from the Royal Physiographic Society of Lund.
Photometric redshift estimation is an indispensable tool of precision cosmology. One problem that plagues the use of this tool in the era of large-scale sky surveys is that the bright galaxies that are selected for spectroscopic observation do not have properties that match those of (far more numerous) dimmer galaxies; thus, ill-designed empirical methods that produce accurate and precise redshift estimates for the former generally will not produce good estimates for the latter. In this paper, we provide a principled framework for generating conditional density estimates (i.e. photometric redshift PDFs) that takes into account selection bias and the covariate shift that this bias induces. We base our approach on the assumption that the probability that astronomers label a galaxy (i.e. determine its spectroscopic redshift) depends only on its measured (photometric and perhaps other) properties and not on its true redshift. With this assumption, we can explicitly write down risk functions that allow us to both tune and compare methods for estimating importance weights (i.e. the ratio of densities of unlabeled and labeled galaxies for different values of x) and conditional densities. We also provide a method for combining multiple conditional density estimates for the same galaxy into a single estimate with better properties.
We apply our risk functions to an analysis of ≈10^6 galaxies, mostly observed by SDSS, and demonstrate through multiple diagnostic tests that our method achieves good conditional density estimates for the unlabeled galaxies.

galaxies: distances and redshifts – galaxies: fundamental parameters – galaxies: statistics – methods: data analysis – methods: statistical

§ INTRODUCTION

Photometric redshift (or photo-z) estimation is an indispensable tool of precision cosmology. The planners of current and future large-scale photometric surveys such as the Dark Energy Survey () and the Large Synoptic Survey Telescope (), which combined will observe over one billion galaxies, require accurate and precise redshift estimates in order to fully leverage the constraining power of cosmological probes such as baryon acoustic oscillations and weak gravitational lensing. Numerous estimators currently exist that achieve “good" point estimates of photo-z redshifts at low redshifts (z ≲ 0.5), where “good" means that photo-z and spectroscopic (or spec-z) estimates for the same galaxy largely match, with only a small percentage of catastrophic outliers. These estimators are conventionally divided into two classes: template fitters, oft-used examples of which include BPZ () and EAZY (), and empirical methods such as ANNz ().[See, e.g.
<cit.>, <cit.>, and <cit.>, who compare and contrast numerous estimators from both classes, and references therein.] The former utilize sets of galaxy SED templates that are redshifted until a best match with a galaxy's observed photometry is found, whereas the latter utilize spectroscopically observed galaxies to train machine learning methods to predict the redshifts of those galaxies that are only observed photometrically. Less well established within the field of photo-z estimation, however, are methods that (1) produce conditional density estimates (or error estimates) of individual galaxy redshifts and at the same time (2) properly take into account the discrepancy between the populations of spectroscopically observed galaxies (roughly closer and brighter) and those observed via photometry only (farther and fainter). Regarding point (1): the error distributions of photo-z estimates are often asymmetric and/or multi-modal, so that single-number summary statistics such as the mean or median are insufficient to describe their shapes. Furthermore, the use of such statistics leads to biased estimation of parameters in downstream cosmological analyses (e.g. ); for instance, <cit.> demonstrate that the use of the conditional density estimate f̂(z|x) (often denoted p(z) in the astronomical literature and often called the probability density estimate, or PDF) reduces systematic error in galaxy-galaxy weak lensing analyses. (Here, x can represent magnitudes and/or colours and/or other ancillary information measured for a galaxy.) Several other works have touted the use of f̂(z|x) as well, often as a step towards better estimates of ensemble redshift distributions (usually denoted N(z)) in tomographic studies (e.g.
, , , , , , , ), and standard methods such as the aforementioned BPZ, EAZY, and ANNz provide f̂(z|x) as an available output. Regarding point (2): it is a well-established truism that in large-scale surveys there is selection bias, wherein rare and bright galaxies are preferentially selected for spectroscopic observation. This bias induces a covariate shift, since the properties of these bright galaxies do not match those of more numerous dimmer galaxies (see e.g. Figure <ref>). This shift affects the accuracy and precision of empirical photo-z estimates. One can mitigate covariate shift by estimating importance weights β(x) = f_U(x)/f_L(x), the ratio of the density of galaxies without redshift labels to that of those observed spectroscopically. For instance, <cit.> attempt to directly estimate N(z) in a covariate shift setting with a k-nearest-neighbor-based estimator of the importance weights, an estimator since utilized by <cit.>, <cit.>, and <cit.>. <cit.>, who propose a weighted kernel density estimator for f(z|x), offer two other methods for computing the weights (quantile regression forest and ordinal classification PDF). All these weight estimators feature parameters that one must tune for proper performance. One would generally tune estimators by minimizing an estimate of risk using a validation dataset, but the authors listed above skirt the issue of tuning by setting the number of nearest neighbors a priori, or, in the case of <cit.>, by utilizing a plug-in bandwidth estimate via Scott's rule (see their equation 24). In this paper, we describe a principled and unified framework for generating conditional density estimates f̂(z|x) in a selection bias setting: specifically, we provide a suite of appropriate risk estimators, methods for tuning and assessing models, and diagnostic tests that allows one to create accurate density estimates from raw data x.[One may find R functions implementing our framework at github.com/pefreeman/CDESB.]
In <ref>, we define both the problem and our notation. In <ref>-<ref> we show that if we assume that the probability that a galaxy is labeled depends only on its photometry and not on its true redshift, which is a valid assumption within the redshift regime probed by shallow surveys such as the Sloan Digital Sky Survey (SDSS; ), we can write down risk functions that allow one to properly tune estimators of both β(x) and f(z|x). These risk functions also allow us to choose from among competing estimators. In <ref> we show how one can combine estimators of conditional density to improve upon the results achieved by any one estimator alone. In <ref> we provide diagnostic tests that one may use to determine the absolute performance of conditional density estimators. In <ref> we demonstrate our methods by applying them to SDSS data. Finally, in <ref> we summarize our results. In future works, we will provide methods for variable selection (i.e. the selection of the most informative colours, etc., to retain from a large set of possible covariates) and explore methods in which we relax the galaxy-labeling assumption stated above.

§ PROBLEM STATEMENT: SELECTION BIAS

The data in a conventional photometric redshift estimation problem consist of covariates x ∈ ℝ^d (photometric colours and/or magnitudes, etc.) and redshifts z. We have access to two data samples: an independent and identically distributed (i.i.d.) sample x^U_1, …, x^U_n_U consisting of photometric data without associated labels (i.e. redshifts), and an i.i.d. labeled sample (x^L_1, z^L_1), …, (x^L_n_L, z^L_n_L) constructed from follow-up spectroscopic studies. (For computational efficiency, in our analyses these datasets are samples taken from larger pools of available labeled and unlabeled data.) Our goal is to construct a photo-z conditional density estimator, f̂(z|x), that performs well when applied to the unlabeled data (where “well" can be defined by its performance with respect to a number of metrics; see e.g.
<ref> for two examples). An issue that arises when constructing f̂(z|x) via empirical techniques is that of selection bias. A standard assumption in statistics and machine learning is that labeled and unlabeled data are sampled from similar distributions, which we denote ℙ_L and ℙ_U respectively. However, as Figure <ref> demonstrates, these two distributions can differ greatly for sky surveys that mix spectroscopy and photometry; brighter galaxies are more likely to be selected for follow-up spectroscopic observation. To model how selection bias affects learning methods, one needs to invoke additional assumptions about the relationship between ℙ_L and ℙ_U (e.g. , ). In this work, we assume that the probability that a galaxy is labeled with a spectroscopic redshift depends only upon x (in accord with and ); i.e. P(S=1|x,z) = P(S=1|x), where the random variable S equals 1 if a datum is labeled and 0 otherwise. This assumption implies covariate shift, defined as f_L(x) ≠ f_U(x), f_L(z|x) = f_U(z|x), and thus is, as shown below, critical for establishing the risk function estimators that ultimately allow us to compute conditional density estimates f̂(z|x). Following the discussion of Section 2.3 of <cit.>, we point out that assuming P(S=1|x,z) = P(S=1|x) can be problematic, for instance when only colours are used in analyses in which the training data are selected in limited magnitude regimes. In this work we apply our framework to galaxies at SDSS depth using colours only; for optimal performance, one should incorporate those covariates that act in concert with z to affect selection, e.g., morphology, size, surface brightness, environment, etc. Nothing in the current framework prevents the incorporation of these covariates. At first glance, it may seem that covariate shift would not pose a problem for density estimation; if f(z|x) is the same for both labeled and unlabeled samples, one might infer that a good estimator of f(z|x) based on labeled data would also perform well for unlabeled data.
However, this is generally untrue. The estimation of f(z|x) depends on the marginal distribution f(x), so an estimator that performs well with respect to f_L(x) may not perform well with respect to f_U(x). We can mitigate selection bias by preprocessing the labeled data so as to ensure that sufficient labeled data lie where the unlabeled data lie. This allows us to compute expected values with respect to the distribution ℙ_U using ℙ_L, similar to the idea of importance sampling in Monte Carlo methods. Carrying this mitigation out in practice involves two steps: first, we estimate importance weights as a function of the predictors x: β(x) = f_U(x)/f_L(x); and second, we utilize these weights when estimating conditional densities f(z|x). There are a myriad of estimators both for importance weighting and for conditional density estimation (see e.g. , ); what we provide here are rigorous procedures for tuning their parameters and for choosing among them. We note here that our overall procedure can be qualitatively summarized by the following dictum: one needs good estimates of importance weights at labeled data points in order to achieve good conditional density estimates at unlabeled data points. (One can observe how this dictum plays out mathematically in equation <ref> below: note how the importance weight estimates at labeled points enter into it, in the second term, whereas the importance weight estimates at unlabeled points do not enter into it at all.)

§ IMPORTANCE WEIGHT ESTIMATION

A naive method for computing importance weights β(x) would involve estimating f_U and f_L separately and computing the ratio of these densities, but this approach can enhance errors in the individual density estimates, particularly in regimes where f_L → 0 <cit.>. Many authors have thus proposed direct estimators of the ratio β(x) (e.g. , , , , , , ).
As an example, the estimator of <cit.> and <cit.> is β̂(x) = (1/k)(n_L/n_U) ∑_i=1^n_U 𝕀(x_i^U ∈ V_k^L), where V_k^L is the region of covariate space containing points that are closer to x than its k^th nearest labeled neighbor, and 𝕀(·) is the indicator function. To choose between importance weight estimators, one needs first to optimally tune the parameters of each using the training and validation data (model selection), and then assess their performance using the test data (model assessment). We determine optimal values of tuning parameters by minimizing a risk function (equation <ref> below). Generating estimates β̂(x) (and by extension f̂(z|x), below) implicitly requires smoothing the observed data, with the smoothing bandwidth set such that estimator bias and variance are optimally balanced. (See e.g. .) For instance, too much smoothing (e.g. adopting a value of k that is too large in nearest-neighbor-based methods) yields estimates with low variance and high bias, i.e. when applied to independent datasets sampled from the same distribution, the estimates will all look similar (i.e. will have low variance) but will be offset from the truth (i.e. will have high bias). Too little smoothing (e.g. k too small) conversely yields high-variance and low-bias estimates that overfit the training data. When estimating importance weights, we apply the risk function <cit.> R(β̂, β) := ∫ (β̂(x) - β(x))^2 dP_L(x) = ∫ β̂^2(x) dP_L(x) - 2∫ β̂(x) β(x) dP_L(x) + ∫ β^2(x) dP_L(x) = ∫ β̂^2(x) dP_L(x) - 2∫ β̂(x) dP_U(x) + C(β), where dP_U(x) = β(x) dP_L(x) and C(β) is a term that does not depend on the estimate β̂. (We note that here the calculation of risk is with respect to the labeled dataset distribution ℙ_L; this is in accord with the dictum stated above that we need good estimates of importance weights for the labeled data in order to achieve good estimates of conditional densities for the unlabeled data. See equation <ref> below.)
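A deliberately one-dimensional sketch of the nearest-neighbor weight estimator quoted above (names ours; a real application would use Euclidean distances in a multi-dimensional colour space):

```python
def knn_importance_weight(x, labeled, unlabeled, k):
    """Estimate beta(x) = f_U(x)/f_L(x) by counting unlabeled points
    inside V_k^L, the ball reaching x's k-th nearest labeled neighbour:
    beta_hat(x) = (1/k)(n_L/n_U) * (# unlabeled points in V_k^L)."""
    # radius to the k-th nearest labeled neighbour of x (1-d distance here)
    r_k = sorted(abs(xl - x) for xl in labeled)[k - 1]
    # count unlabeled points strictly closer than that radius
    n_inside = sum(1 for xu in unlabeled if abs(xu - x) < r_k)
    return (len(labeled) / (k * len(unlabeled))) * n_inside

labeled = [float(i) for i in range(10)]       # labeled covariates, sparse
unlabeled = [0.5 * i for i in range(10)]      # unlabeled, dense near zero
w = knn_importance_weight(0.0, labeled, unlabeled, k=2)
```

Where the unlabeled sample is locally denser than the labeled one, the weight exceeds 1; where unlabeled data are absent, it falls to 0, which is the behavior needed to reweight the labeled sample toward ℙ_U.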
While we utilize an L²-loss function above in (<ref>) and below in (<ref>), one could in principle substitute other functions based on F-divergences, log-densities, or notions of L¹ loss. However, functions based on F-divergences and log-densities are overly sensitive to distribution tails and are generally not appropriate for density estimation (see, e.g. <cit.>), while estimating a risk based on L¹ loss requires knowledge of the true f(z|x). Since in model selection and assessment we can ignore C(β), we rewrite the above as

J(β̂) = ∫ β̂²(x) dP_L(x) − 2∫ β̂(x) dP_U(x),

which we estimate as

J̃(β̂) = (1/ñ_L) ∑_{k=1}^{ñ_L} β̂²(x_k^L) − (2/ñ_U) ∑_{k=1}^{ñ_U} β̂(x_k^U),

where the tildes indicate that the risk is evaluated using either validation data (during model selection) or test data (during model assessment). (Here we use J to represent a risk function in which the constant term C(β) is ignored.) Among multiple estimators of β(x), we choose the one that yields the smallest value of the estimated risk when applied to test data.

§ CONDITIONAL DENSITY ESTIMATION

Given an estimate β̂(x) of the importance weight, the ratio of the densities of the unlabeled and labeled data at the point x, our next step is to compute estimates of the conditional density f̂(z|x). Conditional density estimators include those of <cit.> and <cit.>; see e.g. <cit.> for more details. To build intuition here, we write down the estimator of <cit.>, as it is particularly simple:

f̂(z|x) ∝ ∑_{i ∈ 𝒩_N(x)} β̂(x_i^L) 𝕀(z_i^L ∈ b(z)),

where 𝒩_N(x) denotes the N neighbors nearest to x among the labeled data, and b(z) denotes the a priori defined bin to which z belongs. This estimator up-weights (down-weights) labeled data in regions where f_U(x) is larger (smaller) than f_L(x). In a selection bias setting where ℙ_L ≠ ℙ_U, the goal of conditional density estimation is to minimize

R(f̂, f) := ∬ (f̂(z|x) − f(z|x))² dP_U(x) dz,

i.e. the risk with respect to the unlabeled data.
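The weighted nearest-neighbor histogram estimator above admits a short sketch. This is our hedged illustration (function names, the binning convention, and the normalization to a proper density are ours):

```python
import math

def nn_histogram_cde(x, x_labeled, z_labeled, weights, n_neighbors, bins):
    """Weighted NN histogram estimate of f(z|x).

    bins: sorted bin edges for z; returns one density value per bin,
    normalized so the estimate integrates to 1 over the binned range.
    """
    # indices of the n_neighbors labeled points nearest to x
    order = sorted(range(len(x_labeled)),
                   key=lambda i: math.dist(x, x_labeled[i]))
    neigh = order[:n_neighbors]

    counts = [0.0] * (len(bins) - 1)
    for i in neigh:
        for b in range(len(bins) - 1):
            if bins[b] <= z_labeled[i] < bins[b + 1]:
                counts[b] += weights[i]  # importance-weighted vote
                break
    total = sum(counts)
    if total == 0:
        return [0.0] * len(counts)
    widths = [bins[b + 1] - bins[b] for b in range(len(bins) - 1)]
    return [c / (total * w) for c, w in zip(counts, widths)]
```

Neighbors with large importance weights (regions over-represented in the unlabeled sample) dominate the histogram, which is exactly the reweighting described in the text.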
Under the covariate shift assumption f_U(z|x) = f_L(z|x), one can rewrite the modified risk (<ref>) up to a constant as

R(f̂, f) = ∬ f̂²(z|x) dP_U(x) dz − 2∬ f̂(z|x) f(z|x) dP_U(x) dz + ∬ f²(z|x) dP_U(x) dz
= ∬ f̂²(z|x) dP_U(x) dz − 2∬ f̂(z|x) β(x) dP_L(z,x) + C(f),

where the second equality follows from

f_U(z|x) dP_U(x) dz = f_L(z|x) β(x) dP_L(x) dz = β(x) dP_L(z,x).

Again, this risk depends upon unknown quantities; we ignore C(f) and estimate the other terms via the equation

J̃(f̂) = (1/ñ_U) ∑_{k=1}^{ñ_U} [∫ f̂²(z|x_k^U) dz] − (2/ñ_L) ∑_{k=1}^{ñ_L} f̂(z_k^L|x_k^L) β̂(x_k^L),

where again the tildes indicate use of validation data in model selection and test data in model assessment.

§ COMBINING ESTIMATORS

Typical photometric redshift estimation methods utilize one method for computing ẑ(x) or f̂(z|x). However, one can improve upon the prediction performances of individual estimators by combining them. Suppose that f̂_1(z|x), …, f̂_p(z|x) are p separate estimators of f(z|x). We define the weighted average to be

f̂^α(z|x) = ∑_{k=1}^p α_k f̂_k(z|x),

where the weights minimize the empirical risk R̃(f̂^α) under the constraints α_i ≥ 0 and ∑_{i=1}^p α_i = 1. One can determine the solution α̂ = [α̂_i]_{i=1}^p by solving a standard quadratic programming problem:

α̂ = argmin_{α : α_i ≥ 0, ∑_{i=1}^p α_i = 1} α′𝔹α − 2α′b,

where 𝔹 is the p × p matrix

[ (1/ñ_U) ∑_{k=1}^{ñ_U} ∫ f̂_i(z|x_k^U) f̂_j(z|x_k^U) dz ]_{i,j=1}^p

and b is the vector

[ (1/ñ_L) ∑_{k=1}^{ñ_L} f̂_i(z_k^L|x_k^L) β̂(x_k^L) ]_{i=1}^p;

the tildes here indicate use of the validation data.

§ DIAGNOSTIC TESTS FOR ESTIMATORS

Risk estimates, such as those given in equations <ref> and <ref>, allow us to tune estimators and to choose between estimators, but they do not ultimately convey how well the estimator performs in an absolute sense. Below we describe diagnostic tests that one can use to more closely assess the quality of different models. Similar tests can be found in the time series literature (see, e.g., <cit.>).
§.§ Assessing uniformity using empirical CDFs

Let F̂(z|x_i) denote the estimated conditional cumulative distribution function, i.e., let

U_i = F̂(z_i|x_i) = ∫_0^{z_i} f̂(y|x_i) dy.

If the chosen estimator performs well, then the empirical cumulative distribution function (CDF) of the values U_1, …, U_n will be consistent with the CDF of the uniform distribution. We can test this hypothesis via, e.g., the Cramér–von Mises, Anderson–Darling, and Kolmogorov–Smirnov tests. If the p value output by the test is > 0.05, then we fail to reject the null hypothesis that the data U_1, …, U_n are sampled from a uniform distribution.

§.§ Assessing uniformity using quantiles

We can use the values U_1, …, U_n defined above to build a quantile–quantile (or QQ) plot, by determining the number of data in bins of width ΔU. Let J be the number of bins, each of which has midpoint c_j and observed fraction of data ĉ_j. The QQ plot is that of the values ĉ_j against c_j; if the chosen estimators perform well, the points in this plot will approximately lie on the line ĉ = c. We assess consistency with uniformity via the chi-square goodness of fit (GoF) test, which utilizes the chi-square statistic

χ²_obs = ∑_{j=1}^J (nĉ_j − n/J)² / (n/J).

We conclude that the data are consistent with uniformity if the p value — the fraction of the time that a value of χ² greater than χ²_obs would be observed if the null hypothesis is true — is > 0.05. Note that the off-the-shelf GoF test, which allows one to compute the p value by taking the tail integral of an appropriate chi-square distribution, requires that the number of expected counts in each bin be ≳ 5. When that condition is violated, one should use simulations to estimate p values.
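The chi-square statistic above is direct to compute from the PIT values U_i. A minimal sketch (the equal-width binning of [0,1] and the clamping of U = 1 into the last bin are our conventions):

```python
def chi_square_uniformity(u_values, n_bins):
    """Chi-square GoF statistic for uniformity of values on [0, 1]:
    sum_j (n*chat_j - n/J)^2 / (n/J), with chat_j the observed fraction
    in bin j and n/J the expected count under the uniform null."""
    n = len(u_values)
    expected = n / n_bins
    observed = [0] * n_bins
    for u in u_values:
        j = min(int(u * n_bins), n_bins - 1)  # clamp u == 1.0 into last bin
        observed[j] += 1
    return sum((o - expected) ** 2 / expected for o in observed)
```

Perfectly uniform PIT values give a statistic of zero; values piled in one bin (e.g. systematically too-narrow density estimates) inflate it rapidly.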
§.§ Assessing uniformity in interval coverage

For every α_j in a grid of values on [0,1] of length J, and for every observation i in the labeled test sample, we determine the smallest interval A_ij = [z_i^lo, z_i^hi] such that

∫_{A_ij} f̂(z|x_i) dz = α_j.

Then, for every α_j, we determine the proportion of redshifts that lie within A_ij, i.e. we compute

α̂_j = (1/|S|) ∑_{i ∈ S} 𝕀(z_i^L ∈ A_ij),

where S is the labeled test sample. If the chosen estimators perform well, then α̂_j ≈ α_j. We plot the values of α̂ against the corresponding values of α and assess how close the plotted points are to the line α̂ = α; we can test for consistency with that line using the chi-square GoF test, as described above. Note that the construction of coverage plots is also proposed by <cit.>, who conclude that the f̂(z|x) produced by the template-based BPZ and EAZY codes are consistently too narrow and approximately correct, respectively.

§ APPLICATION TO SDSS DATA

§.§ Data

To demonstrate the efficacy of our conditional density estimation method, we apply it to ≈10^6 galaxies that are mostly from Data Release 8 of the Sloan Digital Sky Survey <cit.>. To build our unlabeled (i.e. photometric) dataset, we initially extract model magnitudes for 540,235 objects in the sky patch RA ∈ [168°,192°] and δ ∈ [−1.5°,1.5°]. After filtering these data in the manner of <cit.>, namely limiting our selection to those data whose ugriz magnitudes were all between 15 and 29, and then further limiting ourselves to data for which

u < 21   or   g < 22   or   r < 22   or   i < 20.5   or   z < 20.1,

we obtain a sample of 538,974 objects. We use the labeled (i.e. spectroscopic) dataset of <cit.> (E. Sheldon, private communication). This dataset includes 435,875 objects from SDSS DR8 and 31,835 objects from eight other sources, or 467,710 objects in all. As noted by <cit.>, this dataset contains a small number of stars. We remove these by excluding all data with spectroscopic redshift z_s = 0; after this, we are left with 465,790 objects. The steps of our analysis are given in Algorithms <ref> and <ref>.
As noted in Section <ref>, the labeled and unlabeled data that we analyse are samples from the larger pools of available data described immediately above. (In this work we set n_L = n_U = 15,000.) This is for computational efficiency, both from a standpoint of time and of memory; for instance, if we utilize matrices for storing distances between data points, we are currently limited to samples of size ∼10^4 when utilizing typical desktop computers.

Algorithm: Preprocessing Labeled (i.e. Spectroscopic) Data

§.§ Data preprocessing: construction of the labeled sample

As shown in equation <ref>, the estimation of conditional densities is partially a function of β̂(x^L), so there is a distinction to be drawn between the stated labeled sample size (e.g. n_L = 15,000, drawn from a pool of size 465,790) and the effective size (the number of data that contribute to estimation, i.e. the number for which β̂(x^L) > 0). Thus one important step of our method involves preprocessing the labeled data to increase their effective size.

The preprocessing of the labeled data requires the specification of a threshold importance weight β̂_thr. Given n_L and n_U labeled and unlabeled data, respectively, and having specified a minimum number of unlabeled data u that would have to lie closer to a random labeled point x^L (drawn from the larger pool) than the k^th nearest labeled neighbor to that point, we keep x^L as part of our new labeled dataset if

β̂(x^L) ≥ β̂_thr = u/k.

The value β̂_thr is not tunable, per se, as different thresholds yield different labeled datasets, leading to estimated risks that are not directly comparable. One might conjecture that larger values of β̂_thr are better, in that the distribution of the labeled data will more closely resemble that of the unlabeled data (see Figures <ref> and <ref>). However, as we demonstrate below, our results are not highly sensitive to the choice of β̂_thr, so long as β̂_thr > 0.
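The keep/discard rule is a simple threshold on the estimated weights. A hedged sketch (names ours; the full preprocessing pipeline also re-estimates the weights on the retained sample afterwards):

```python
def preprocess_labeled(x_labeled, weights, u, k):
    """Keep labeled points whose estimated importance weight meets the
    threshold beta_thr = u / k, i.e. points with at least u unlabeled
    neighbors closer than their k-th labeled neighbor."""
    thr = u / k
    return [x for x, w in zip(x_labeled, weights) if w >= thr]
```

With u = 6 and k = 20, for instance, the threshold is 0.3: labeled points whose neighborhoods contain too few unlabeled data are dropped before any density estimation is done.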
Once preprocessing is complete, we repeat the estimation of the importance weights β̂(x^L) and apply these values when estimating conditional densities (e.g. equation <ref>, and equation <ref>).

§.§ Results

Once we generate our new labeled dataset, we split the labeled and unlabeled data into training (n_train = 7000), validation (n_valid = 3000), and test (n_test = 5000) sets. We forego model assessment for importance weight estimators in this work; <cit.> demonstrate that the estimator given in equation <ref> consistently performs better than five other competing methods over a number of different levels of covariate shift. We apply equations <ref> and <ref> to the training and validation data to determine the optimal number of nearest neighbors k and the importance weights β̂(x^L) given k. We then apply these importance weights and the estimated risk in equation <ref> to the training and validation data within the context of three CDE estimators detailed in <cit.>: NN-CS, the estimator of <cit.>; kerNN-CS, a kernelized variation of NN-CS; and Series-CS, the spectral series estimator of <cit.>.[To expand upon the description of the bias-variance tradeoff in Section <ref>: we note that any estimates β̂(x) and f̂(z|x) made near (potentially sharp) parameter-space boundaries will suffer from some amount of additional “boundary bias." Mitigating boundary biases in photo-z estimation is an important topic that we will pursue in a future work.] In Figure <ref> we demonstrate the importance of preprocessing to generate the labeled dataset: without it, the construction of conditional density estimates for the unlabeled data would be effectively limited to the regime β̂(x^U) ≲ 0.5. The fraction of unlabeled test set data with β̂(x^U) ≤ 0.5 is approximately 7.5%; in contrast, the fraction for β̂(x^U) ≤ 3 is 88.5%. In Figure <ref> we demonstrate that for our particular SDSS data, the kerNN-CS estimator on average outperforms the other two.
(Note that the risk can be negative because we ignore the positive additive constant C(f); see equation <ref>.) We use the method outlined in Section <ref> to optimally combine the conditional density estimates from kerNN-CS and Series-CS, and we determine that combining these estimators, on average, indeed yields better estimates of the conditional densities f̂(z|x).

We make a preliminary assessment of the noise properties of the estimates f̂(z|x) as follows. We create n bootstrap samples of the labeled training data and use them to generate n f̂(z|x) curves for each labeled test datum. (See Figure <ref>.) Then we compute the quantile of the true redshift given each curve, so that for each labeled test datum we have n quantile values. We use the mean of the standard deviations of each set of quantile values as a metric of uncertainty. For our particular SDSS data, we determine this mean uncertainty to be ≈ 0.065. We will examine the noise properties of the estimates f̂(z|x) at greater depth in a future work.

In Figure <ref> we demonstrate, using QQ plots, the tradeoff that is inherent in mitigating covariate shift via the incorporation of importance weights into estimation. If we do not mitigate covariate shift, we achieve good conditional density estimates within those portions of covariate space in which β̂(x^L) ≲ 0.5 (orange dashed-dotted line); however, ≲ 8% of the unlabeled data reside within these portions. Mitigation of covariate shift leads to a worsening of the CDEs within these portions of covariate space (black dashed line), but allows one to make good estimates throughout the remaining space (blue solid line). In Figures <ref>, <ref> and <ref> we show the results of applying hypothesis tests based on QQ plots (Section <ref>), coverage plots (Section <ref>) and the assumption of uniformity (Section <ref>).
In the first two figures we show the results of using the GoF test to determine the consistency of expected and observed quantiles at each unique value of β̂(x^L), in the manner outlined in Section <ref>. These results are generated using the labeled test set data, assuming a preprocessing threshold β̂_thr = 0.3. (The optimal number of nearest neighbors given this threshold is 20, hence the unique values of β̂(x^L) are 0, 0.05, 0.1, etc.) In the middle and bottom panels, we observe that for β̂(x^L) ≲ 0.3, the chi-square values are much larger, and the p values much smaller, than what we would expect if ĉ = c: we do not achieve good behavior in this regime. (This is consistent with the behavior of the QQ plots shown in Figure <ref>.) For β̂(x^L) ≳ 0.3, on the other hand, the p values are generally > 0.05. We thus conclude that our method generates useful conditional density estimates in the regime β̂(x^L) ≳ 0.3. (We note that we come to similar conclusions if we use the preprocessing thresholds β̂_thr = 0.1 or 0.2 instead.) Figure <ref> shows the results of applying the Cramér–von Mises, Anderson–Darling, and Kolmogorov–Smirnov tests (from top to bottom, respectively) to the data U generated from the CDFs of the conditional density estimates f̂(z|x) of the labeled test set data, again assuming β̂_thr = 0.3. We observe similar behavior for the p values here as observed in Figure <ref>, with two differences: (1) the p values are generally above 0.05 for β̂(x^L) ≳ 0.5 as opposed to 0.3, and (2) there are more numerous deviations from uniformity in the regime β̂(x^L) ≳ 0.5 than seen in the bottom panel of Figure <ref>, particularly in the case of the AD test (middle panel, Figure <ref>).

§ SUMMARY

In this paper, we provide a principled method for generating conditional density estimates f̂(z|x) (elsewhere commonly denoted p(z) and dubbed the “photometric redshift PDF") that takes into account selection bias and the covariate shift that this bias induces.
(See Figure <ref> for an example of both: a bias towards brighter galaxies leads to shifted distributions of colours between spectroscopic and photometric data samples. See also Algorithms <ref> and <ref>.) If not mitigated, covariate shift leads to situations where models fit to labeled (i.e. spectroscopic) data will not produce scientifically useful fits to the far more numerous unlabeled (i.e. photometric-only) data, lessening the impact of photo-z estimation within the context of precision cosmology. Here, we mitigate covariate shift by first estimating importance weights, the ratio β(x) between the densities of unlabeled and labeled data at point x (Section <ref>), and then applying these weights to conditional density estimates f̂(z|x) (Section <ref>). For our two-step procedure to succeed, ultimately, we require good estimates of β(x) at labeled data points in order to achieve good estimates f̂(z|x) at unlabeled ones. We thus need both rigorously defined risk functions that allow us to tune the free parameters of our importance weight and conditional density estimators, and diagnostic tests that allow us to determine the quality of the estimates f̂(z|x). Our method is based on the assumption that the probability that astronomers label a galaxy (i.e. determine its spectroscopic redshift) depends only on its (photometric and perhaps other) properties and not on its true redshift, an assumption currently valid for redshifts ≲ 0.5. This is equivalent to assuming that the conditional densities for labeled and unlabeled data match (f_L(z|x) = f_U(z|x)), even if the marginal distributions differ (f_L(x) ≠ f_U(x)), which allows us ultimately to substitute out the true unknown quantities β(x) and f(z|x) in specifications of risk functions (equations <ref> and <ref>). These risk functions, and their estimates (equations <ref> and <ref>), are the backbone of our method: they allow us to tune parameters in a principled manner (e.g.
what is the optimal number of nearest labeled neighbors when estimating β(x) via equation <ref>?), as well as to choose between competing estimators (e.g. which is better: the NN-CS, kerNN-CS or Series-CS estimator of conditional density?). An important question to answer in future work is whether we can relax our central assumption and still be able to write down estimated risks that lead to useful estimates of conditional densities in higher redshift regimes.

In Section <ref>, we demonstrate that once we generate p separate conditional density estimates f̂_k(z|x) (e.g. via the kerNN-CS and Series-CS estimators), tuned via the estimated risk given in equation <ref>, we can combine them to achieve better predictions (i.e. smaller values of risk). The method we propose utilizes a weighted linear combination, with the weights determinable via quadratic programming, but this is not the only possible way to combine estimates; see, e.g., <cit.>, who discuss three methods for combining estimates, including one (Method 2) that adds estimates together as we do, except that while we determine optimal coefficients by minimizing estimated risk, they combine estimates so that 68.3% of the spectroscopic redshifts in their sample fall within their final 1σ confidence intervals.

It is not sufficient to generate estimates f̂(z|x) by minimizing risk; one also needs to demonstrate that the estimates are scientifically useful. There is no unique way to demonstrate the quality of conditional density estimates. In Section <ref>, we provide alternatives that test (1) whether estimated cumulative densities, evaluated at actual redshifts, are distributed uniformly; (2) whether observed quantiles match expectations, via QQ plots and the chi-square GoF test; and (3) whether empirical coverage is consistent with nominal interval coverage.
The jury is still out as to which of these diagnostics will play a central role in future photo-z analyses; for now, we consider it sufficient to demonstrate in any analysis that these diagnostics yield similar qualitative results.

In Section <ref>, we demonstrate our method using ≈500,000 galaxies with, and ≈500,000 without, spectroscopic redshifts, mostly from the Sloan Digital Sky Survey (see <cit.> for details). For computational efficiency, we sample 15,000 galaxies from both pools of data. While our initial labeled sample is chosen randomly from the larger pool of labeled data, we implement a preprocessing scheme (see Algorithm <ref>) to generate a new labeled sample with a larger effective size, whose photometry also more closely resembles that of the unlabeled sample (see Figure <ref>). The preprocessing scheme requires the specification of an importance weight threshold (β̂_thr) whose value cannot be optimized via tuning (since different thresholds yield different labeled datasets, and thus yield estimated risks that are not directly comparable). We demonstrate that our results are generally insensitive to the choice of threshold within the regime β̂_thr ≲ 0.3. Those results include that (1) as expected, the conditional density estimates of the Combined estimator, constructed from those of the kerNN-CS and Series-CS estimators, provide the best estimates as quantified via the risk estimate in equation <ref>, and (2) via our diagnostic tests, we determine that our Combined estimates exhibit good behavior in the regimes β̂(x) ≳ 0.3, for QQ-based tests, and ≳ 0.5, for tests of coverage and cumulative densities. Our results thus demonstrate that our method achieves good, i.e. scientifically useful, conditional density estimates for unlabeled galaxies.

§ ACKNOWLEDGEMENTS

The authors would like to thank Jeff Newman (University of Pittsburgh) for helpful discussions about photometric redshift estimation.
This work was partially supported by NSF DMS-1520786, and the National Institute of Mental Health grant R37MH057881. RI further acknowledges the support of the Fundação de Amparo à Pesquisa do Estado de São Paulo (2014/25302-2).

[Aihara et al.2011]Aihara11 Aihara H., et al., 2011, ApJS, 193(2):29 [Ball & Brunner2010]Ball10 Ball N. M., Brunner R. J., 2010, International Journal of Modern Physics D, 19, 1049 [Benítez2000]Benitez00 Benítez N., 2000, ApJ, 536, 571 [Bonnett2015]Bonnett15 Bonnett C., 2015, MNRAS, 449, 1043 [Brammer, von Dokkum & Coppi2008]Brammer08 Brammer G., von Dokkum P., Coppi P., 2008, ApJ, 686, 1503 [Budavári2009]Budavari09 Budavári T., 2009, ApJ, 695, 747 [Carliles et al.2010]Carliles10 Carliles S., Budavári T., Heinis S., Priebe C., Szalay A. S., 2010, ApJ, 712, 511 [Carrasco Kind & Brunner2013]CarrascoKind13 Carrasco Kind M., Brunner R., 2013, MNRAS, 432, 1483 [Carrasco Kind & Brunner2014]CarrascoKind14 Carrasco Kind M., Brunner R., 2014, MNRAS, 438, 3409 [Collister & Lahav2004]Collister04 Collister A. A., Lahav O., 2004, PASP, 116, 345 [Corradi & Swanson2006]Corradi06 Corradi V., Swanson N. R., 2006, in Elliott G., Granger C., Timmermann A., eds, Handbook of Economic Forecasting. Elsevier, Amsterdam, p. 197 [Csabai et al.2007]Csabai07 Csabai I., Dobos L., Trencséni M., Herczegh G., Jósza P., Purger N., Budavári T., Szalay A. S., 2007, Astronomische Nachrichten, 328, 852 [Cunha et al.2009]Cunha09 Cunha C. E., Lima M., Oyaizu H., Frieman J., Lin H., 2009, MNRAS, 396, 2379 [Dahlen et al.2013]Dahlen13 Dahlen T., et al., 2013, ApJ, 775, 93 [De Vicente, Sánchez & Sevilla-Noarbe2016]DeVicente16 De Vicente J., Sánchez E., Sevilla-Noarbe I., 2016, MNRAS, 459, 3078 [Flaugher2005]Flaugher05 Flaugher B., 2005, Int. J. Modern Phys. A, 20, 3121 [Gretton et al.2008]Gretton08 Gretton A., Smola A., Huang J., Schmittfull M., Borgwardt K., Schölkopf B., 2008, in Quionero-Candela J., Sugiyama M., Schwaighofer A., Lawrence N.
D., eds, Dataset Shift in Machine Learning. MIT Press, Cambridge [Hall1987]Hall87 Hall P., 1987, The Annals of Statistics, 15, 1491 [Hildebrandt et al.2010]Hildebrandt10 Hildebrandt H., et al., 2010, A&A, 523, A31 [Hoyle et al.2015]Hoyle15 Hoyle B., Rau M. M., Bonnett C., Seitz S., Weller J., 2015, MNRAS, 450, 305 [Ivezić et al.2008]Ivezic08 Ivezić Ž., 2008, arXiv:0805.2366 [Izbicki, Lee & Freeman2016]Izbicki16 Izbicki R., Lee A., Freeman P. E., 2016, Annals of Applied Statistics, submitted (arXiv:1604.01339) [Izbicki & Lee2015]Izbicki15 Izbicki R., Lee A. B., 2015, Journal of Computational and Graphical Statistics [Izbicki, Lee & Schafer2014]Izbicki14 Izbicki R., Lee A., Schafer, C. M., 2014, Journal of Machine Learning Research W&CP (AISTATS track), 33 [James et al.2014]James14 James G., Witten D., Hastie T., Tibshirani, R., 2014, An Introduction to Statistical Learning: With Applications in R, Springer, New York [Kanamori, Hido & Sugiyama2009]Kanamori09 Kanamori T., Hido S., Sugiyama, M., 2009, Journal of Machine Learning Research, 10, 1391 [Kanamori, Suzuki & Sugiyama2012]Kanamori12 Kanamori T., Suzuki T., Sugiyama, M., 2012, Machine Learning, 86(3), 335 [Kremer et al.2015]Kremer15 Kremer J., Gieseke F., Steenstrup Pedersen K., Igel C., 2015, Astronomy and Computing, 12, 67 [Lima et al.2008]Lima08 Lima M., Cunha C. E., Oyaizu H., Frieman J., Lin H., Sheldon E., 2008, MNRAS, 390, 118 [Loog2012]Loog12 Loog M., 2012, in 2012 IEEE International Workshop on Machine Learning for Signal Processing [Mandelbaum et al.2008]Mandelbaum08 Mandelbaum R., 2008, MNRAS, 386, 781 [Moreno-Torres et al.2012]MorenoTorres12 Moreno-Torres J. G., Raeder T., Alaíz-Rodríguez R., Chawla N. V., Herrera F., 2012, Pattern Recognition, 45, 521[Rau et al.2015]Rau15 Rau M. M., Seitz S., Brimiouelle F., Frank E., Friedrich O., Gruen D., Hoyle B., 2015, MNRAS, 452, 3710 [Sánchez et al.2014]Sanchez14 Sánchez C., et al., 2014, MNRAS, 445, 1482 [Sheldon et al.2012]Sheldon12 Sheldon E. S., Cunha C. 
E., Mandelbaum R., Brinkmann J., Weaver B. A., 2012, ApJS, 201(2):32 [Sugiyama et al.2008]Sugiyama08 Sugiyama M., Suzuki T., Nakajima S., Kashima H., Bünau P., Kawanabe M., 2008, Annals of the Institute of Statistical Mathematics, 60(4):699 [Wasserman2006]Wasserman06 Wasserman L., 2006, All of Nonparametric Statistics, Springer, New York [Wittman2009]Wittman09 Wittman D., 2009, ApJ, 700, L174 [Wittman, Bhaskar & Tobin2016]Wittman16 Wittman D., Bhaskar R., Tobin R., 2016, MNRAS, 457, 4005 [York et al.2000]York00 York D., et al., 2000, AJ, 120, 1579
*Corresponding author

^1Fluminense Federal University, Niterói, RJ, Brazil
^2PETROBRAS, Rio de Janeiro, RJ, Brazil

E-mails: gutocnet@ic.uff.br / fabio@ic.uff.br

ABSTRACT. The Minimum Coloring Cut Problem is defined as follows: given a connected graph G with colored edges, find an edge cut E' of G (a minimal set of edges whose removal renders the graph disconnected) such that the number of colors used by the edges in E' is minimum. In this work, we present two approaches based on Variable Neighborhood Search to solve this problem. Our algorithms are able to find all the optimum solutions described in the literature.

Keywords: Minimum Coloring Cut Problem, Combinatorial Optimization, Graph Theory, Variable Neighborhood Search, Label Cut Problem.

§ INTRODUCTION

The Minimum Coloring Cut Problem (MCCP) has as input a connected (undirected) graph G=(V,E), with colored (or labeled) edges. Each color is assigned to one or more edges, but each edge e has a unique color c(e). The aim of the MCCP is to find an edge cut E' of G (a minimal set E' of edges such that G'=(V,E∖E') is disconnected) with the following property: the set of colors used by the edges in E' has minimum size. Formally:

Minimum Coloring Cut Problem (MCCP)
Input: a connected (undirected) graph G = (V,E,C) such that V is the set of nodes of G, E is the set of edges of G, and C={c(e) | e∈E} is the set of colors (or edge labels).
Goal: Find a subset E'⊆E such that G'=(V,E∖E') is disconnected and the set of colors C'={c(e) | e∈E'} has minimum size.
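To make the definition concrete, the following is a hedged brute-force sketch (names ours): it enumerates color subsets in increasing size and returns the first subset whose removal disconnects the graph. It is exponential in the number of colors and intended only to illustrate the objective on toy instances.

```python
from itertools import combinations

def min_coloring_cut(n_nodes, colored_edges):
    """Smallest set of colors whose removal disconnects the graph.

    colored_edges: list of (u, v, color), nodes numbered 0..n_nodes-1.
    Returns the winning color set, or None if the graph has no edges
    left to remove (cannot happen for a connected input).
    """
    colors = sorted({c for _, _, c in colored_edges})

    def connected(removed):
        # DFS from node 0 over edges whose color survives
        adj = [[] for _ in range(n_nodes)]
        for a, b, c in colored_edges:
            if c not in removed:
                adj[a].append(b)
                adj[b].append(a)
        seen, stack = {0}, [0]
        while stack:
            v = stack.pop()
            for w in adj[v]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        return len(seen) == n_nodes

    for size in range(1, len(colors) + 1):
        for removed in combinations(colors, size):
            if not connected(set(removed)):
                return set(removed)
    return None
```

On a triangle with two red edges and one blue edge, removing the single color red isolates the middle node: the coloring cut has value 1 even though an ordinary minimum edge cut needs two edges.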
Figure <ref> shows a simple example.

Note that if all the edge colors are distinct then the MCCP amounts to finding a usual minimum cut, a task that can be easily performed in polynomial time using max-flow algorithms. However, the complexity of the MCCP remains a theoretical open question. Intuitively, the MCCP is unlikely to be solvable in polynomial time, because the related problem of finding an s-t cut with the minimum number of colors is NP-hard <cit.>. This fully justifies the design of heuristic algorithms to solve the MCCP.

Colored cut problems are related to the vulnerability of multilayer networks, since they provide tight lower bounds on the number of failures that can disconnect totally or partially a network <cit.>. The Minimum Color s-t Cut Problem (MCstCP for short) is closely related to the MCCP. The input of the MCstCP consists of a connected edge-colored graph G=(V,E) and two nodes s,t∈V, and its objective is to find the minimum number of colors whose removal separates s and t in the remaining graph (where `removing a color' means removing all the edges with that color). <cit.> considered the MCstCP for the first time; they prove its NP-hardness and present approximation hardness results. However, five years before, <cit.> had already observed that the MCstCP is NP-hard via a simple reduction from the Minimum Hitting Set Problem.

The papers by <cit.> and <cit.> approach the MCCP and the MCstCP with the goal of measuring the network's capability of remaining connected when sets of links share risks. For instance, in a WiFi network, an attacker could drop all links on a certain frequency by adding a strong noise signal to it. Another example occurs when two links use the same physical environment. Another potential application of the MCCP is in transportation planning systems, where nodes represent locations served by bus and edge colors represent bus companies.
In this case, a solution of the MCCP gives the minimum number of companies that must stop working in order to create pairs of locations not reachable by bus from one another. Such an application is more suitably modeled by allowing a multigraph as the input of the MCCP, since two locations can be connected by bus services offered by more than a single company.

<cit.> shows that the MCCP can be solved in polynomial time when the input graph is planar, has bounded treewidth, or has a small value of f_max (the maximum number of edges to which a color is assigned). In <cit.>, exact methods to solve the MCCP are presented. The authors propose three different integer programming formulations over which branch-and-cut and branch-and-bound approaches are developed. To evaluate their algorithms, they use the instances generated by <cit.>.

In some sense, the MCCP is the dual of the Minimum Labelling Spanning Tree Problem (MLSTP), which aims at finding a minimum set C' of colors such that the edges with colors in C' form a connected, spanning subgraph H of G. For information on the MLSTP, we refer the reader to <cit.> and <cit.>. Note that any spanning tree T of H uses at most |C'| colors, and thus is a spanning tree of G using a minimum number of colors, i.e., a solution of the MLSTP with input G. An analogous argument can be applied to the MCCP: one can first find a disconnecting set E' of edges (not necessarily a cut) that uses a minimum number of colors, and then easily return a minimal disconnecting set E''⊆E' as the solution of the MCCP.

Another way of viewing the MCCP is: find a maximum set C' of colors such that G'=(V,E') is disconnected, where E'={e∈E | c(e)∈C'}, and then pick all the colors in the complementary set C∖C'. Such a strategy is employed by the two new algorithms proposed in this work. The algorithms try to include new colors in the set of current colors, so that adding the edges with those new colors to the current subgraph still keeps it disconnected.
When no new color can be included in this way, the colors in C∖C' correspond to a solution of the MCCP. Our algorithms are based on the Variable Neighborhood Search (VNS) metaheuristic <cit.>. As we shall see, the former algorithm uses a greedy, deterministic approach to choose new colors to be included in the current set of colors, while the latter uses a probabilistic approach.

The remainder of this work is structured as follows. In Section <ref> we describe in detail all the functions and procedures used in our algorithms. Section <ref> presents the computational results, where we compare the quality of the solutions obtained by our algorithms with the ones produced by the exact methods described in <cit.>. Section <ref> contains our concluding remarks.

§ DESCRIPTION OF THE ALGORITHMS

In this section we first describe the general algorithm (Algorithm 1), which is the basic structure for both the greedy, deterministic approach (“VNS-Greedy”) and the probabilistic approach (“VNS-Probabilistic”). Next, we describe in detail each of its subroutines. Some subroutines (Generate-Initial-Solution, New-Solution, and Local-Search) have a “greedy version” and a “probabilistic version”. Running Algorithm 1 using the greedy versions of such subroutines produces the VNS-Greedy algorithm, while running it using the probabilistic versions produces the VNS-Probabilistic algorithm. The remaining subroutines are common to both approaches.

The description of the general algorithm is as follows:
As mentioned in the introduction, we follow the strategy of finding a maximum disconnected solution. To be consistent with this approach, C' is a feasible solution if and only if G' is disconnected. The complementary set of colors C \ C' is called the complementary space of solution C'.

Below we discuss the notation used in Algorithm 1:

* C^* is the current best solution. In line 29, the returned value |C| - |C^*| is the number of colors in the disconnecting set consisting of all the edges whose colors are in C \ C^*.

* MaxNeighborhood is a variable that controls the neighborhoods (see line 11) in the core of the VNS strategy (lines 10 to 23).

* S and S' are auxiliary solutions, explained later.

* Number-of-Components(S') (line 14) is a standard function that returns the number of connected components of solution S'. It is implemented using the well-known disjoint-set (or union-find) data structure with the weighted-union heuristic and path compression. Details can be found in <cit.>.

An initial solution C^* is generated in line 1; next, MaxNeighborhood is set to the number of colors not in C^* (line 2). The main loop (lines 3 to 28) is executed until the stop condition is met. The stop condition (maximum running time) is defined empirically according to the instance size (number of nodes |V|). After some initial tests, we obtained the values shown in Table 1 below.

In lines 4 to 9, a new candidate solution S is generated at the beginning of a new iteration. First, S is generated using subroutine New-Solution (line 4). If S is better than C^*, then C^* and MaxNeighborhood are updated and another candidate solution S is generated by New-Solution. The while loop (lines 5 to 9) ends when the number of colors of the candidate solution is not greater than the number of colors of the current best solution.

Lines 10 to 23 contain the core of the basic VNS strategy <cit.>.
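As a concrete illustration, the Number-of-Components function mentioned above can be realized with a disjoint-set structure. The following is a minimal Python sketch with our own naming (the paper's implementation is in C++): edges whose colors belong to the solution are united, using union by size (the weighted-union heuristic) and path compression, and the number of remaining roots is the component count.

```python
def number_of_components(n, edges, color, solution):
    """Count connected components of the spanning subgraph of an
    n-vertex graph that keeps only edges whose color is in `solution`.
    Disjoint-set with union by size and path compression (halving)."""
    parent = list(range(n))
    size = [1] * n

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v

    components = n
    for (u, v) in edges:
        if color[(u, v)] in solution:
            ru, rv = find(u), find(v)
            if ru != rv:
                if size[ru] < size[rv]:   # union by size
                    ru, rv = rv, ru
                parent[rv] = ru
                size[ru] += size[rv]
                components -= 1            # one merge, one fewer component
    return components
```

For example, a 4-cycle whose edges all carry color 0 has one component when {0} is the chosen solution, and |V| components when the solution is empty.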
For each candidate solution S, S' is set to S (line 12), and then the shaking and local search procedures are executed over S' for k iterations, where k controls the neighborhoods and ranges over 1..MaxNeighborhood. If shaking and local search are able to improve S' so that |S'| > |S|, then S is updated and k is reset to 1, i.e., a new cycle of k iterations begins.

When k is equal to MaxNeighborhood, the current best solution C^* is compared with S and updated if necessary (lines 24 to 27). The execution stops if the maximum running time is reached (line 28); otherwise, it returns to the candidate solution generation step.

When the stop condition is true, the value |C| - |C^*| is returned. The subset of edges E' = {e ∈ E | c(e) ∈ C \ C^*} is a disconnecting set using |C| - |C^*| colors. If needed, a cut can be obtained by finding any minimal disconnecting set E'' ⊆ E'.

In the next subsections we describe in detail the subroutines used in Algorithm 1. When applicable, the greedy and probabilistic versions of a subroutine are presented.

§.§ Generate-Initial-Solution

This subroutine has a greedy version (Algorithm 2) and a probabilistic version (Algorithm 3). In the greedy version, the initial solution is constructed iteratively, color by color. At each step, a color c not appearing in the current solution S is greedily chosen so that the number of connected components of S ∪ {c} is maximized. The subroutine stops when every remaining color would turn the current solution connected if added to it.

Adding a color that maximizes the number of connected components (line 4 in Algorithm 2) usually guides the subroutine to locally optimal solutions. This strategy is precisely the deterministic approach used by <cit.> and other authors for the MLSTP. To avoid local optima, we use an adapted Boltzmann function that allows a probabilistic color choice at each iteration.
Such an adapted Boltzmann function is inspired by the Simulated Annealing cooling schedule described in <cit.>, and is used not only in subroutine Generate-Initial-Solution, but also in subroutines New-Solution and Local-Search. We remark that the probabilistic versions of subroutines Generate-Initial-Solution, New-Solution and Local-Search differ from the greedy ones precisely in the strategy for choosing the colors to be included in the current solution.

The probability P(c) of a color c being included in the current solution S is directly proportional to the number of connected components of S ∪ {c}. Let γ ∈ C \ S be the color that maximizes Number-of-Components(S ∪ {γ}). The probabilities P(c) are normalized by the Boltzmann function values exp(Δ(c)/T), where:

∙ Δ(c) = Number-of-Components(S ∪ {c}) - Number-of-Components(S ∪ {γ})

∙ T is a parameter referred to as temperature that controls the function's dynamics; in our experiments we use T=1.

§.§ New-Solution

New-Solution is a subroutine used to generate a candidate solution S at the beginning of a new iteration in the repeat loop (lines 3 to 28) of Algorithm 1. It is implemented as a local search <cit.> on the colors in C \ C^*, in an attempt to raise the diversity factor, since the complementary space of C^* is a completely different search zone with respect to the current best solution.

Algorithms 4 and 5 are, respectively, the greedy and probabilistic versions of subroutine New-Solution. Our tests revealed that both algorithms produce an immediate peak of diversification as the local search evolves. In order to extract a feasible solution from C \ C^*, an iterative process of inclusion of new colors is performed as follows. Solution S is initialized as an empty set of colors (line 1 in both algorithms).
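The Boltzmann-weighted color choice used by the probabilistic subroutines can be sketched as follows. This is a Python illustration with our own function names, not the paper's pseudocode; `gain(c)` stands in for Number-of-Components of the current solution extended with c.

```python
import math
import random

def boltzmann_pick(candidates, gain, T=1.0, rng=random):
    """Pick a color with probability proportional to exp(Delta(c)/T),
    where Delta(c) = gain(c) - gain(best color). Returns the chosen
    color and the full probability vector (for inspection)."""
    best = max(gain(c) for c in candidates)
    weights = [math.exp((gain(c) - best) / T) for c in candidates]
    total = sum(weights)
    probs = [w / total for w in weights]
    # roulette-wheel selection over the normalized probabilities
    x, acc = rng.random(), 0.0
    for c, p in zip(candidates, probs):
        acc += p
        if x <= acc:
            return c, probs
    return candidates[-1], probs  # guard against floating-point drift
```

Since Δ(c) ≤ 0 by construction, the best color always has the largest (but not certain) selection probability, which is what lets the search escape local optima.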
Note that the number of connected components of S at this moment is |V| (corresponding to a spanning subgraph containing only isolated vertices). The first while loop (lines 2 to 8 in Algorithm 4, and 2 to 13 in Algorithm 5) generates a partial solution S color by color, and stops in two cases:

(a) the set (C \ C^*) \ S of unused colors is empty;

(b) every remaining color in (C \ C^*) \ S would generate an infeasible (connected) solution if added to the current solution S.

The second while loop (lines 9 to 15 in Algorithm 4, and 14 to 25 in Algorithm 5) works in the same way, but tries to add to the current solution S colors from C^* instead. It stops when no color in C^* \ S is able to produce a feasible solution when added to S.

§.§ Shake

This subroutine is common to both VNS-Greedy and VNS-Probabilistic. It consists of finding a new solution by adding/removing k colors randomly from the current solution S', in order to diversify the range of solutions and try to escape from a local optimum. The total number of operations (additions plus removals) depends on k (a parameter passed from the main algorithm), which is the size of the neighborhood. The value of k ranges from 1 to the maximum neighborhood size (variable MaxNeighborhood). In line 2, δ is a random value in [0,1]. In line 3, it is necessary to check whether |S'| > 0 before removing a color from S'. At the end of Algorithm 6, the symmetric difference between solutions S and S' contains exactly k colors, i.e., |(S \ S') ∪ (S' \ S)| = k.

We remark that, after the shaking, the new solution S' may be infeasible (connected). The purpose of subroutine Fix (explained in the next subsection) is to deal with such an event.

§.§ Fix

This subroutine is also common to VNS-Greedy and VNS-Probabilistic. If after the shaking procedure S' is infeasible (line 14 in Algorithm 1), subroutine Fix is invoked. It consists of iteratively removing colors at random from S' until it turns into a feasible solution.
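A possible Python rendering of the shaking step is given below. This is our own simplified sketch, not a transcription of Algorithm 6: it assumes k ≤ |C| and toggles each chosen color only once, which is one way to guarantee the symmetric-difference property stated above.

```python
import random

def shake(S, all_colors, k, rng=random):
    """Return a copy of S with exactly k colors toggled (added or
    removed), so the symmetric difference between input and output
    has size exactly k. Assumes k <= len(all_colors)."""
    S_new = set(S)
    touched = set()
    for _ in range(k):
        delta = rng.random()                  # delta in [0,1] decides add vs remove
        removable = S_new - touched
        addable = (all_colors - S_new) - touched
        if (delta < 0.5 and removable) or not addable:
            c = rng.choice(sorted(removable))  # remove a not-yet-touched color
            S_new.discard(c)
        else:
            c = rng.choice(sorted(addable))    # add a not-yet-touched color
            S_new.add(c)
        touched.add(c)
    return S_new
```

Because each iteration toggles a distinct color, k iterations yield |(S \ S') ∪ (S' \ S)| = k, matching the invariant of the Shake subroutine.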
§.§ Local-Search

The subroutine Local-Search has a greedy version (Algorithm 8) and a probabilistic version (Algorithm 9). In the greedy version, after solution S' is submitted to subroutines Shake and Fix, new colors are greedily added to S' until no longer possible. The probabilistic version is similar, but the choice of new colors follows the strategy already described in the probabilistic versions of subroutines Generate-Initial-Solution (Algorithm 3) and New-Solution (Algorithm 5).

§ COMPUTATIONAL RESULTS

The experiments were performed on an Intel Core i7 4 GHz with 32 GB RAM, running the Linux Ubuntu x64 14.04 operating system. The algorithms were implemented in C++ and compiled using the optimization flag -O3.

Our experiments were performed using the 720 problem instances created by <cit.>, divided into 72 datasets containing 10 randomly generated instances each. All the 10 instances in a single dataset have the same number of nodes |V|, number of colors |C|, and edge density d; that is, each dataset is characterized by a prescribed triple (|V|,|C|,d). The expected number of edges |E| of an instance is d |V| (|V|-1)/2; thus, within the same dataset, instances may have slightly different values of |E|. The value of |V| ranges over the set {50,100,200,400,500,1000}, while the value of d ranges over {0.2, 0.5, 0.8} (corresponding, respectively, to low, medium, or high density). The value of |C| varies according to the instance size. For example, if |V|=50 then |C| ∈ {12,25,50,62}. Tables 2 to 7 show all the combinations (|V|,|C|,d) used in our tests. Each row in a table corresponds to the 10 instances of a single dataset.
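The instance model just described can be reproduced with a short sketch (ours, for illustration only; the actual benchmark instances were generated by the authors of <cit.>): each of the |V|(|V|-1)/2 possible edges is included independently with probability d, and each included edge receives a color drawn uniformly from C.

```python
import random

def random_mccp_instance(n_vertices, n_colors, density, rng=random):
    """Erdos-Renyi-style edge-colored instance: each possible edge is
    kept with probability `density` and painted a uniformly random
    color, so E[|E|] = density * n_vertices * (n_vertices - 1) / 2."""
    edges, color = [], {}
    for u in range(n_vertices):
        for v in range(u + 1, n_vertices):
            if rng.random() < density:
                edges.append((u, v))
                color[(u, v)] = rng.randrange(n_colors)
    return edges, color
```

With |V| = 50 and d = 0.5 the expected edge count is 612.5, which matches the formula above.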
For each dataset, solution quality is evaluated as the average solution value (number of colors in the solution) calculated over the 10 problem instances. Maximum allowed CPU times were chosen as stop conditions for the algorithms, determined according to instance sizes (see Table 1 in Section 2.1).

In Tables 2 to 7, our results are compared with the results obtained by the three exact methods proposed in <cit.>. In all the tables, the first and second columns show, respectively, the number of colors and the density; in the third column, each entry shows the average solution value obtained by the exact methods over the 10 instances of the corresponding row (a symbol `-' means that the methods were unable to find the optima); in the fourth column, each entry shows the average computational time of the exact method that best deals with the 10 instances of the corresponding row (a symbol `-' means that the runs were aborted after reaching a time limit); columns 5 and 6 (resp., 7 and 8) have the same meaning as columns 3 and 4, but refer to our VNS greedy (resp., VNS probabilistic) approach.

For instances with the same number of nodes, the tests show, as expected, that low density instances converge faster than medium/high density instances, because the latter have larger search spaces. The exact methods proposed in <cit.> are able to find optimum solutions only for |V| ≤ 200. In this scenario (see Tables 2 to 4), both the VNS greedy and VNS probabilistic approaches reach all the optimum solutions, in lower computational times. For |V| ∈ {400,500,1000} (see Tables 5 to 7), the VNS greedy and VNS probabilistic approaches found exactly the same average solution value for all datasets. The VNS probabilistic approach is faster for 50-node instances (see Table 2). For other values of |V| (see Tables 3 to 7), no algorithm clearly outperforms the other in terms of computational times.

§ CONCLUSIONS

In this paper we described new VNS-based algorithms for the MCCP.
Prior to this work, no results for the MCCP besides the ones obtained by <cit.> were known for instances with up to 200 nodes (to the best of the authors' knowledge). Our algorithms reach all the known optimal solutions in lower computational times. For instances with unknown optima, our algorithms provide the same solutions, in reasonable computational times.

Computational experiments were performed using two different approaches, greedy and probabilistic, in order to evaluate how the algorithms are influenced by the color choice strategy. Computational results showed that the two approaches exhibit the same behavior in terms of solution quality, and no significant difference in terms of computational times.
hp.deoliveira@pq.cnpq.br

Departamento de Física Teórica - Instituto de Física A. D. Tavares, Universidade do Estado do Rio de Janeiro, R. São Francisco Xavier, 524. Rio de Janeiro, RJ, 20550-013, Brazil

We present a single domain Galerkin-Collocation method to calculate puncture initial data sets for single and binary black holes, in either the trumpet or wormhole geometries. The combination of aspects belonging to the Galerkin and the Collocation methods, together with the adoption of spherical coordinates in all cases, proves to be very effective. We have proposed a unified expression for the conformal factor to describe trumpet and spinning black holes. In particular, for the spinning trumpet black holes, we have exhibited the deformation of the limit surface due to the spin, from a sphere to an oblate spheroid. We have also revisited the energy content in the trumpet and wormhole puncture data sets. The algorithm can be extended to describe binary black holes.

Puncture black hole initial data: a single domain Galerkin-Collocation method for trumpet and wormhole data sets

P. C. M. Clemente and H. P. de Oliveira

December 30, 2023

§ INTRODUCTION

The precise characterization of the gravitational and matter fields on some spatial hypersurface constitutes the initial data problem in numerical relativity <cit.>. In this instance, it is possible to identify whether there are interacting black holes and neutron stars, together or not with some other distribution of matter, setting up an ideal scenario to simulate astrophysical situations in which the strong gravitational field plays a central role. In parallel to the decades-long effort to directly detect gravitational radiation, which has been accomplished recently <cit.>, there has also been an endeavor to predict gravitational wave signals from compact binaries using numerical simulations.
These simulations <cit.> start with initial data that, in general, contain black hole binaries. In more precise terms, the initial data problem in General Relativity consists in specifying the spatial metric and extrinsic curvature, γ_ij and K_ij, respectively, on a given spatial hypersurface. These quantities must satisfy the constraint equations, namely the Hamiltonian and momentum constraint equations, that arise from the Cauchy formulation of the field equations <cit.>. The most important strategy for solving the constraint equations is to introduce a conformal transformation of the spatial metric to a known background metric, γ̅_ij, and a similar transformation involving the extrinsic curvature <cit.>. Then,

γ_ij = Ψ^4 γ̅_ij, A_ij = Ψ^-2 A̅_ij,

where Ψ is the conformal factor and A_ij is the traceless part of the extrinsic curvature such that,

K_ij = A_ij + 1/3 γ_ij K,

with K being the trace of K_ij. In this formulation, the set of functions (Ψ, γ̅_ij, A̅_ij, K) specified on the initial hypersurface characterizes the initial data. These quantities are not fixed by the constraint equations but must satisfy them. We assume here the Bowen-York <cit.> scheme, that is, the additional requirements of conformal flatness, maximal slicing, K=0, and vacuum, yielding the decoupling of the Hamiltonian and momentum constraints, which, respectively, become,

∇̅^2 Ψ + 1/8 Ψ^-7 A̅^ij A̅_ij = 0, D̅_i A̅^ij = 0,

where D̅_i = γ̅_ij ∇^j is the covariant derivative associated with the flat background metric γ̅_ij and ∇̅^2 is the flat-space Laplacian operator. Remarkably, Eq. (<ref>) can be solved analytically to describe boosted and spinning black holes denoted by A̅^ij_𝐏 and A̅^ij_𝐒, whose corresponding expressions are,

A̅^ij_𝐏 = 3/2r^2 [2P^(i n^j) - (η^ij - n^i n^j) 𝐧·𝐏],

A̅^ij_𝐒 = 6/r^3 n^(i ϵ^j)_mp J^m n^p,

where 𝐏 and 𝐉 are, respectively, the ADM linear and angular momenta carried by the black hole <cit.>. The quantity n^k = x^k/r is the normal vector pointing away from the black hole located at r=0.
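As a quick numerical sanity check, one can build the boosted Bowen-York curvature at a point and verify that it is symmetric and traceless, as required of the traceless part of the extrinsic curvature. The sketch below is ours (Python, flat metric η^ij = δ^ij, geometric units) and is not taken from the paper's code.

```python
import numpy as np

def bowen_york_boost(x, P):
    """Conformal extrinsic curvature of a boosted puncture at the origin:
    A^ij = 3/(2 r^2) [ P^i n^j + P^j n^i - (delta^ij - n^i n^j) (n.P) ]."""
    x = np.asarray(x, float)
    P = np.asarray(P, float)
    r = np.linalg.norm(x)
    n = x / r                       # unit normal pointing away from the puncture
    nP = n @ P
    return (3.0 / (2.0 * r**2)) * (np.outer(P, n) + np.outer(n, P)
                                   - (np.eye(3) - np.outer(n, n)) * nP)
```

The trace is (3/2r^2)[2 n·P - 2 n·P] = 0 analytically, so the numerical trace should vanish to round-off at any field point.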
Due to the linearity of the momentum constraint, we can construct spacetimes containing a boosted-spinning black hole, or multiple black holes, by superposing several conformal extrinsic curvatures given by Eqs. (<ref>) and (<ref>).

In general, the Hamiltonian constraint (<ref>) is solved numerically for the conformal factor after specifying the extrinsic curvature A̅_ij. To guarantee that there are black holes in the initial hypersurface, it is necessary to satisfy appropriate boundary conditions, which are dictated by the excision or puncture methods. We focus here on the puncture method, which consists <cit.> in decomposing the conformal factor into two pieces: the background component, containing the black hole singularities and usually given analytically, and the regular component, which is obtained by solving the Hamiltonian constraint numerically. Accordingly, we have,

Ψ = Ψ_0 + u.

Considering a single black hole, Ψ_0 is taken as the Schwarzschild black hole in its wormhole representation, or equivalently on a slice of constant Schwarzschild time. It means that,

Ψ_0 = 1 + m_0/2r,

where r=0 locates the puncture and m_0 is a free parameter. It can be verified that the above expression is the solution of the Hamiltonian constraint for A̅_ij=0 and u=0, and in this situation the parameter m_0 is the ADM mass. The substitution of Eqs. (<ref>) and (<ref>) into the Hamiltonian constraint (<ref>) results in an elliptic equation for the regular component u. We can construct initial data with multiple black holes by a direct generalization of the background conformal factor to

Ψ_0 = 1 + ∑_k m_k/2r_k,

where each puncture with mass parameter m_k is located at r_k=0. Of particular interest is the case of binary black holes, for which most of the initial data used in the simulations adopt the puncture method <cit.>. There is another representation of the Schwarzschild black hole, based on spatial slices that terminate at non-zero areal radius, known as the trumpet representation.
The interest in constructing trumpet initial data has increased after the advent of the moving puncture method <cit.>. It has been shown that the Schwarzschild wormhole puncture data evolves in such a way that the numerical slices tend to a spatial slice with finite areal radius, i.e., a trumpet <cit.>. Therefore, it is motivating to construct initial trumpet data for single and binary black holes endowed with spin and linear momentum. In this direction, we mention the derivation of the analytical solutions for maximally sliced and 1+log trumpet Schwarzschild black holes in Refs. <cit.>, respectively. The initial data for spinning and boosted, single and binary trumpets were studied by Hannam et al. <cit.> and by Immerman and Baumgarte <cit.> for the maximally sliced case. More recently, Dietrich and Brügmann <cit.> constructed 1+log sliced initial data for single and binary systems.

We present here a single domain algorithm based on the Galerkin-Collocation spectral method <cit.> to obtain wormhole and trumpet initial data sets. The algorithm is distinct from other spectral codes <cit.>, but nonetheless very efficient and simple. We believe that this task is valuable in its own right. The selection of the radial and angular basis functions is of crucial importance; we have the spherical harmonics as the most natural basis functions for the angular domain, whereas the radial basis functions are expressed as appropriate linear combinations of the Chebyshev polynomials to satisfy the boundary conditions. The algorithm is well suited to describe a spinning or boosted single black hole, and a wormhole or trumpet binary system.

The paper is organized as follows. After the Introduction in Section 1, we focus on presenting the basic equations for constructing trumpet initial data sets. We use the maximally sliced analytical solution of Baumgarte and Naculich <cit.> to establish a convenient expression for the conformal factor describing single or binary trumpets.
The numerical scheme is detailed in Section 3. We present the numerical tests and discuss some cases of interest in Section 4. In particular, we highlight the proposed unified description of single spinning and boosted trumpet black holes. For a single spinning black hole, we show the influence of the spin in altering the minimal surface from a sphere to an oblate spheroid. We also consider wormhole and trumpet binaries to illustrate the feasibility of the algorithm in more general cases. Finally, in Section 5 we conclude and outline some directions of the present investigation.

§ TRUMPET AND WORMHOLE PUNCTURE DATA SETS

The starting point to construct maximally sliced puncture trumpet initial data is to establish the trumpet slicing of the Schwarzschild spacetime. Baumgarte and Naculich <cit.> have derived the corresponding exact conformal factor as a function of the areal radius R = r Ψ_0^2 (cf. Appendix A). With the exact solution, they have shown the following asymptotic behaviors,

Ψ_0 = (3m_0/2r)^1/2, r → 0,
Ψ_0 = 1 + m_0/2r, r → ∞,

where m_0 is the Schwarzschild mass. The corresponding expression for the traceless part of the extrinsic curvature is,

A̅_0^ij = 3√(3) m_0^2/4r^3 (γ̅^ij - 3 n^i n^j).

In the case of wormhole data we have A̅_0^ij = 0. With the above expression it can be shown that the momentum constraint D̅_i A̅_0^ij = 0 is satisfied, along with the validity of the Hamiltonian constraint,

∇̅^2 Ψ_0 + 1/8 Ψ_0^-7 A̅^ij_0 A̅^0_ij = 0,

where,

A̅^ij_0 A̅^0_ij = 81 m_0^4/8r^6.

For the trumpet initial data sets, we propose the following puncture-like expression for the conformal factor,

Ψ = Ψ_0 (1+u),

where Ψ_0 is the trumpet Schwarzschild solution.
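The stated contraction can be cross-checked numerically: contracting A̅_0^ij with itself (flat metric, so indices are raised and lowered with the identity) must reproduce 81 m_0^4/(8 r^6) at any field point. The sketch below is our own Python illustration of this check.

```python
import numpy as np

def trumpet_A0(x, m0):
    """Background trumpet curvature:
    A0^ij = 3*sqrt(3)*m0^2/(4 r^3) * (delta^ij - 3 n^i n^j)."""
    x = np.asarray(x, float)
    r = np.linalg.norm(x)
    n = x / r
    return (3.0 * np.sqrt(3.0) * m0**2 / (4.0 * r**3)) * (
        np.eye(3) - 3.0 * np.outer(n, n))

# Contraction A0_ij A0^ij at an arbitrary point versus the closed form.
x, m0 = np.array([0.7, -0.4, 1.1]), 1.3
A0 = trumpet_A0(x, m0)
contraction = np.sum(A0 * A0)
expected = 81.0 * m0**4 / (8.0 * np.linalg.norm(x)**6)
```

Analytically, (δ - 3nn)_ij (δ - 3nn)^ij = 3 - 6 + 9 = 6, so the prefactor squared times 6 gives exactly 81 m_0^4/(8 r^6); the tensor is also traceless since δ^ij has trace 3.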
Introducing the new conformal factor into the Hamiltonian constraint (<ref>), we have,

∇̅^2 u + (2/Ψ_0) D̅_iΨ_0 D̅^i u + A̅^ij A̅_ij/[8 Ψ_0^8 (1+u)^7] - [(1+u)/(8 Ψ_0^8)] A̅_0^ij A̅^0_ij = 0,

where the total traceless part of the extrinsic curvature is given by,

A̅^ij = A̅^ij_0 + A̅^ij_𝐏 + A̅^ij_𝐒,

due to the linearity of the momentum constraint equation. In the case of the wormhole data sets, the conformal factor is expressed by (<ref>) and the Hamiltonian equation becomes,

∇̅^2 u + 1/8 (Ψ_0 + u)^-7 A̅^ij A̅_ij = 0,

with A̅^ij_0 = 0 and Ψ_0 given by Eq. (<ref>).

The main reason for not adopting the usual decomposition of the conformal factor (Eq. (<ref>)) for trumpet black hole data sets is to provide a unified framework for describing spinning and boosted black holes with regular functions u. For instance, for a single trumpet spinning black hole in which Ψ = Ψ_0 + u, it can be shown <cit.> that u ∼ 𝒪(r^-1/2) near r=0, while for a single boosted black hole u ∼ 𝒪(r). On the other hand, by considering the new decomposition (<ref>), we have followed the analysis of Immerman and Baumgarte <cit.> of the behavior of u near the puncture at r=0 for a boosted (u_P) and a spinning (u_S) black hole in the axisymmetric case. Assuming that u ≪ 1, the corresponding Hamiltonian constraints are approximated by,

∇̅^2 u_P - (1/r) ∂u_P/∂r ≈ √(3) P cosθ/(3 m_0^2 r) + 2u_P/r^2,

∇̅^2 u_S - (1/r) ∂u_S/∂r ≈ -4 J^2 sin^2θ/(9 m_0^4 r^2) + (1 + 28 J^2 sin^2θ/(9 m_0^4)) u_S/r^2.

From these equations one can show that near the origin u_P ∼ 𝒪(r) and u_S ∼ 𝒪(1). The above behaviors near the origin can be dealt with numerically without difficulties. To guarantee that the spacetime is asymptotically flat, the function u must satisfy the following asymptotic condition,

u = δm/r + 𝒪(r^-2),

where δm = δm(θ,ϕ) in general, after adopting spherical coordinates. As indicated in the sequel, the function δm is the contribution of the angular and linear momenta to the ADM mass, which is calculated from,

M_ADM = -1/2π lim_{r→∞} ∫_Ω r^2 Ψ_,r dΩ.
Assuming the conformal factor expressed either by Eq. (<ref>) or Eq. (<ref>), and taking into account the behavior of u and Ψ_0 for r → ∞, we obtain,

M_ADM = m_0 + 1/2π ∫_0^2π ∫_0^π δm(θ,ϕ) sinθ dθ dϕ.

According to the numerical scheme of the next Section, we can read off an analytical expression for δm(θ,ϕ), and the ADM mass is calculated straightforwardly. In the case of multiple black holes, we have to replace m_0 → ∑ m_i in the above expression.

§ THE GALERKIN-COLLOCATION ALGORITHM

We present here the Galerkin-Collocation scheme to solve the Hamiltonian constraint (<ref>) or (<ref>) for trumpet and wormhole data sets. The centerpiece of the numerical treatment is the spectral approximation of the function u(r,θ,ϕ) given by,

u_a(r,θ,ϕ) = ∑_{k,l=0}^{N_x,N_y} ∑_{m=-l}^{l} c_klm χ_k(r) Y_lm(θ,ϕ).

Here c_klm represents the unknown coefficients or modes, and N_x and N_y are, respectively, the radial and angular truncation orders that limit the number of terms in the above expansion. The angular patch has the spherical harmonics, Y_lm(θ,ϕ), as the basis functions. The choice of spherical coordinates together with the adoption of spherical harmonics basis functions is quite natural and, as we are going to show, computationally very efficient.

Concerning the radial basis functions, χ_k(r), we have followed the prescription of the Galerkin method, in which each basis function satisfies the boundary conditions. Usually, this is done by establishing an appropriate combination of Chebyshev polynomials. Near r=0 we have χ_k(r) ∼ 𝒪(r) or χ_k(r) ∼ 𝒪(1), in accordance with the boundary conditions (<ref>). The asymptotic behavior of each basis function is χ_k(r) ∼ 𝒪(r^-1). To satisfy these boundary conditions, we define each radial basis function as,

χ_k(r) = 1/2 (TL_{k+2}(r) - TL_k(r)),
χ_k(r) = 1/2 (TL_{k+1}(r) - TL_k(r)),

for boosted and spinning black holes, respectively. For the wormhole case the basis function is given by expression (<ref>).
Here TL_k(r) represents the rational Chebyshev polynomials defined by,

TL_k(r) = T_k(x = (r - L_0)/(r + L_0)),

where T_k(x) is the Chebyshev polynomial of kth order and L_0 is the map parameter that connects -1 ≤ x < 1 to 0 ≤ r < ∞ through the algebraic map <cit.> r = L_0(1+x)/(1-x).

The spherical harmonics are complex functions, implying that the coefficients c_klm must be complex, but satisfying symmetry conditions to guarantee that the conformal factor is a real function. The symmetry conditions are c^*_{kl-m} = (-1)^{-m} c_klm, due to the symmetry relation of the spherical harmonics, Y^*_{l-m}(θ,ϕ) = (-1)^{-m} Y_lm(θ,ϕ). Consequently, the number of independent modes is (N_x+1)(N_y+1)^2.

We now establish the residual equation associated with the Hamiltonian constraint by substituting the spectral approximation (<ref>) into the Hamiltonian constraint (<ref>) (or (<ref>)). In addition, we have taken into account the differential equation for the spherical harmonics to get rid of the derivatives with respect to θ and ϕ. After a straightforward calculation, we have arrived at the following expression,

Res(r,θ,ϕ) = ∑_{k,n,p} c_knp [1/r^2 ∂/∂r (r^2 ∂χ_k/∂r) - n(n+1)χ_k/r^2] Y_np(θ,ϕ) + (2/Ψ_0) (∂Ψ_0/∂R)(∂R/∂r) ∂u_a/∂r - [(1+u_a(r,θ,ϕ))/(8Ψ_0^8)] (A̅^ij A̅_ij)_0 + [(1+u_a(r,θ,ϕ))^-7/(8Ψ_0^8)] A̅^ij A̅_ij.

In the case of binary systems with trumpet punctures, it is necessary to modify the second term on the RHS to include the angular dependence that appears in the background solution Ψ_0.

The next and final step is to describe the procedure to obtain the coefficients c_klm. From the method of weighted residuals <cit.>, these coefficients are evaluated by forcing the residual equation to be zero in an average sense. It means that,

<Res, R_j(r) S_lm(θ,ϕ)> = ∫_𝒟 Res R^*_j(r) S^*_lm(θ,ϕ) w_r w_θ w_ϕ dr dΩ = 0,

where the functions R_j(r) and S_lm(θ,ϕ) are called the test functions, while w_r, w_θ and w_ϕ are the corresponding weights.
We have chosen the radial test functions as prescribed by the Collocation method, R_j(r) = δ(r - r_j), the Dirac delta function, where r_j represents the radial collocation points and w_r = 1. Following the Galerkin method, we identify the angular test functions S_lm(θ,ϕ) as the spherical harmonics, and consequently w_θ = w_ϕ = 1. Therefore, Eq. (<ref>) becomes,

<Res(r,θ,ϕ), Y_lm(θ,ϕ)>_{r=r_j} = 0,

where j=0,1,..,N_x, l=0,1,..,N_y and m=0,1,..,l. The N_x+1 radial collocation points are,

r_j = L_0 (1 + x̃_j)/(1 - x̃_j),

with the Chebyshev-Gauss collocation points x̃_j in the computational domain,

x̃_j = cos[(2j+1)π/(2N_x+2)], j=0,1,..,N_x.

We have excluded the point at infinity (x̃ = 1) since the residual equation (<ref>) is identically satisfied asymptotically due to the choice of the radial basis functions. Notice that the origin is also excluded. In Fig. 1 we show schematically the spatial domain spanned by the new coordinates (x̃, y = cosθ, ϕ).

We are now in a position to present schematically the set of equations resulting from the relations (<ref>). The integration over the angular domain takes into account the orthogonality of the spherical harmonics in the first three terms of the residual equation (<ref>), whose result is,

<Res, Y_lm(θ,ϕ)>_{r_j} = ∑_k c_klm [1/r^2 ∂/∂r (r^2 ∂χ_k/∂r) - l(l+1)χ_k/r^2]_{r_j} + (2/Ψ_0 ∂Ψ_0/∂R ∂R/∂r)_{r_j} ∑_k c_klm (∂χ_k/∂r)_{r_j} - ((A̅^ij A̅_ij)_0/(8Ψ_0^8))_{r_j} (2√(π) δ_0l δ_0m + ∑_k c_klm χ_k(r_j)) + <(A̅^ij A̅_ij)/[8Ψ_0^8 (1+u_a)^7], Y_lm(θ,ϕ)>_{r_j} = 0,

with j=0,1,..,N_x, l=0,1,..,N_y and m=-l,..,l. The last term is calculated using quadrature formulae as indicated below,

<(..), Y_lm(θ,ϕ)>_{r_j} ≈ ∑_{k,n=0}^{N_1,N_2} (..) Y^*_lm(θ_k,ϕ_n) v^θ_k v^ϕ_n,

where (θ_k,ϕ_n), k=0,1,..,N_1, n=0,1,..,N_2, are the quadrature collocation points, and v^θ_k, v^ϕ_n are the corresponding weights <cit.>. To achieve better accuracy we have set N_1 = N_2 = 2N_y + 1, but this is not mandatory, since it is possible to use simply N_1 = N_2 = N_y.
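The radial grid construction can be sketched in a few lines (a Python illustration of ours, not the paper's code): the Chebyshev-Gauss points x̃_j are mapped to physical radii r_j, and TL_k at those radii reduces to T_k(x̃_j) = cos(k arccos x̃_j), so the construction can be verified by round-tripping the map.

```python
import numpy as np

def radial_collocation(Nx, L0):
    """Chebyshev-Gauss points x_j = cos((2j+1) pi / (2 Nx + 2)) mapped
    to r_j = L0 (1 + x_j)/(1 - x_j); excludes both r = 0 and r = inf."""
    j = np.arange(Nx + 1)
    x = np.cos((2 * j + 1) * np.pi / (2 * Nx + 2))
    r = L0 * (1 + x) / (1 - x)
    return x, r

def TL(k, r, L0):
    """Rational Chebyshev polynomial TL_k(r) = T_k((r - L0)/(r + L0))."""
    x = (r - L0) / (r + L0)
    return np.cos(k * np.arccos(np.clip(x, -1.0, 1.0)))
```

Since the Gauss points satisfy -1 < x̃_j < 1 strictly, all r_j are finite and positive, consistent with the exclusion of the origin and of infinity noted above.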
In summary, we have to solve the set of (N_x+1)(N_y+1)^2 nonlinear algebraic equations indicated by expression (<ref>) for an equal number of coefficients c_klm. To that end, the Newton-Raphson algorithm was employed.

§ APPLICATIONS

§.§ Single spinning and boosted black holes

We begin by considering a single spinning or boosted black hole located at the origin r=0. In each case the angular and linear momenta lie on the z-axis, that is, 𝐉 = (0,0,J_0) and 𝐏 = (0,0,P_0). The quantities A̅_ij A̅^ij corresponding to spinning and boosted black holes are given, respectively, by,

A̅_ij A̅^ij = 18 J_0^2 sin^2θ/r^6 + 81 m_0^4/8r^6,

A̅_ij A̅^ij = 9 P_0^2 (1 + 2cos^2θ)/2r^4 + 81 m_0^4/8r^6 - 27√(3) m_0^2 P_0 cosθ/2r^5.

The resulting Hamiltonian constraint in each case is axisymmetric, due to the absence of any dependence on the azimuthal angle ϕ. Thus, in the spectral approximation of the function u(r,θ) (cf. Eq. (<ref>)) the spherical harmonics are replaced by Legendre polynomials as the angular basis functions.

We have adopted the convergence of the ADM mass, evaluated according to Eq. (<ref>), as the main numerical test. From the spectral approximation (<ref>) we can obtain δm(θ) from -lim_{r→∞} r^2 ∂u_a(r,θ)/∂r, without approximating infinity by some finite radius r_max. We have established the convergence of the ADM mass by calculating the difference between the ADM masses of approximate solutions with fixed N_y=12 and varying N_x=5,10,15,.., such that δM(N_x) = |M_ADM(N_x+5) - M_ADM(N_x)|. As reported previously <cit.>, the value of the map parameter can improve the convergence of δM. Figs. 2 and 3 show the convergence tests for spinning and boosted black holes, respectively, where in both cases m_0=1.0; the spin parameter is J_0=0.5m_0^2, while the boost is P_0=1.0m_0. In Fig. 2 the results are displayed for L_0=2.0 and L_0=0.2 for the trumpet data sets, to illustrate the role of L_0 in the convergence rate. Notice that the improvement of the convergence rate is achieved when L_0=0.2.
For the spinning wormhole, the best map parameter is L_0=0.5, and the convergence is better than in the trumpet case. Fig. 3 shows the convergence of the ADM mass for trumpet and wormhole boosted black holes with their respective best map parameters, L_0=0.1 and L_0=2.0.

Spinning trumpet black holes alter the geometry of the minimal surface characterized by r=0 from a sphere to an oblate spheroid. It is instructive to quantify this change by evaluating the eccentricity of the spheroid as a function of the spin parameter J_0. The eccentricity of the minimal surface is defined by,

ϵ = √(1 - R^2_min(J_0,θ=0)/R^2_min(J_0,θ=π/2)),

where R_min(J_0,θ) = lim_{r→0} r Ψ_0^2 (1+u)^2. We have expressed the eccentricity as a function of J_0/m_0^2 and J_0/M_ADM^2 in Fig. 4. Notice that the eccentricity tends to a limit value of ϵ ≈ 0.439. We have included an inset plot with the eccentricity calculated from the approximate solution due to Immerman and Baumgarte <cit.> (continuous line), valid for small J_0, and the corresponding numerical eccentricities (circles). As expected, the disagreement between both results becomes evident as the spin increases.

We have revisited the estimate of the radiation content, or junk radiation, present in the trumpet and wormhole initial data sets, which has been considered in Refs. <cit.>. The radiation content, E_rad, is estimated as <cit.>,

E_rad = √(M_ADM^2 - P^2) - M_BH,

where M_BH is given by,

M_BH^2 = M^2_irr + J^2/4M_irr^2,

with J^2 = J_i J^i, and the irreducible mass M_irr is,

M_irr = √(A/16π),

where A is the area of the apparent horizon. After solving the apparent horizon equation for spinning and boosted black holes (see Appendix B), A can be calculated, allowing us to determine the ratio e_rad ≡ E_rad/M_BH as a function of j_0 = J_0/M_BH^2 and p_0 = P_0/M_BH, respectively. We have noticed that for spinning black holes the radiation content in the trumpet and wormhole data is nearly the same.
However, there is a slight exception for small j_0, for which (e_rad)_trumpet > (e_rad)_wormhole (cf. Fig. 5). On the other hand, the amount of radiation in the trumpet and wormhole boosted black holes is indistinguishable according to Fig. 5. To illustrate an application of the Galerkin-Collocation algorithm to a simple three-dimensional case, we have considered a trumpet puncture located at the origin with linear and intrinsic angular momenta characterized, respectively, by 𝐏 = (P_0,0,0) and 𝐒 = (0,0,J_0). In this case, A̅_ijA̅^ij = 18 J_0^2/r^6sin^2θ + 9 P_0^2/2r^4(1+2 sin^2θcos^2ϕ) + 81 m_0^4/8 r^6 - 18 J_0 P_0/r^5sinθsinϕ - 27 √(3)P_0 m_0^2/2r^5sinθcosϕ. We have adopted the conformal factor as given by Eq. (<ref>) due to the presence of spin, and the relevant parameters are m_0=1 and P_0=0.2 m_0, with the spin parameter assuming the values J_0 = 0.1 m_0^2, 0.2 m_0^2, …, 0.5 m_0^2. The influence of increasing the spin parameter on the regular part of the conformal factor, 1+u(r,θ,ϕ), can be seen in Fig. 6, which shows the projection of 1+u on the plane y=z=0. Notice the deformation produced by increasing J_0 by inspecting the curves from bottom to top.

§.§ Binary black holes

We discuss here a boosted binary formed by trumpet punctures lying on the z-axis at the coordinate locations 𝐂_1=(0,0,-a) and 𝐂_2=(0,0,a), with 2a the coordinate separation between the punctures. We have adopted a simpler form of the conformal factor <cit.>, Ψ = Ψ_1 + Ψ_2 - 1 + u, where Ψ_1 and Ψ_2 have the same form as Ψ_0 (see Appendix A) but are centered on 𝐂_1 and 𝐂_2 <cit.>, respectively. Since the momentum constraint is a linear equation, the extrinsic curvature A̅_ij is given by A̅_ij = A̅_ij^0(1) + A̅_ij^0(2) + A̅_ij^𝐏_1 + A̅_ij^𝐏_2, and the Hamiltonian constraint becomes ∇̅^2 u + 1/8(Ψ_1 + Ψ_2 - 1 + u)^-7A̅^ijA̅_ij - 1/8 Ψ_1^7(A̅_0^ijA̅^0_ij)^(1) - 1/8 Ψ_2^7(A̅_0^ijA̅^0_ij)^(2) = 0.
The expression for A̅_ijA̅^ij is shown in Appendix C, and (A̅^0_ijA̅_0^ij)^(1,2)=81 m_1,2^4/8 r_1,2^6. The algorithm presented in the last section is straightforwardly adapted to solve the Hamiltonian constraint for trumpet binary punctures, with the function u approximated as indicated in Eq. (<ref>) and the radial basis functions given by Eq. (<ref>). To test the algorithm, we have verified the convergence of the ADM mass for the axisymmetric binary system after setting m_1=m_2=0.5, 𝐏_1=(0,0,P_0), 𝐏_2=(0,0,-P_0), together with a=3 and P_0=0.4 m_1. For the convergence test, we have fixed N_y=14, while the radial truncation order varies as N_x=20,25,30,…,100. Fig. 7 shows the exponential convergence of the ADM mass, calculated according to Eq. (<ref>) with m_0 replaced by m_1+m_2. In this case, the best choice for the map parameter is the coordinate separation between the punctures, L_0=2a. For the sake of illustration, we have included in Fig. 7 the plot of 1+u(r,θ) in the plane x=y=0 for the binary black hole under consideration. As the last application, we have considered a three-dimensional binary formed by boosted wormhole punctures with 𝐏_1=(P_0,0,0) and 𝐏_2=(-P_0,0,0). The conformal factor is expressed in the same way as in Eq. (<ref>), Ψ = 1 + 1/2(m_1/r_1 + m_2/r_2) + u. Here the Hamiltonian constraint and the function u are given by Eqs. (<ref>) and (<ref>), respectively, and the corresponding expression for A̅_ijA̅^ij is given in Appendix C. The values of the parameters are the same as in Ref. <cit.>: a=3.0 M, m_1=m_2=0.5 M and P_0=0.2 M, where M=m_1+m_2. In Fig. 8 we show two- and three-dimensional plots of 1+u(x,y=0,z).
We have used truncation orders N_x=40 and N_y=16, which means 40 radial collocation points and a grid of 33 × 33 angular collocation points for the quadrature formulae given by expression (<ref>).

§ FINAL REMARKS

We have presented a single-domain algorithm using the Galerkin-Collocation method to solve the Hamiltonian constraint for trumpet and wormhole puncture data sets, with emphasis on the former. We have considered Bowen-York data including spinning, boosted, single and binary black holes. Some features of the algorithm are worth mentioning. The spatial domain is covered by spherical coordinates (r,θ,ϕ). In all cases, the regular part of the conformal factor is approximated by Eq. (<ref>), with the radial basis functions satisfying the appropriate boundary conditions and the spherical harmonics taken as the angular basis functions. To describe trumpet data corresponding to a single spinning or boosted black hole, we have proposed a puncture-like approach with a new form of the conformal factor given by expression (<ref>). We have also taken the analytical solution describing the trumpet Schwarzschild black hole found by Baumgarte and Naculich <cit.> as the background solution. This procedure is analogous to making explicit use of the background solution Ψ_0=1+m_0/2r in the case of a single wormhole Schwarzschild black hole. We have tested the algorithm successfully by checking the exponential convergence of the ADM mass, which was present in most of the cases. We then applied the algorithm to situations of interest. Of particular importance is the case of a single spinning trumpet black hole, for which we have shown the influence of the spin in deforming the minimal surface from a sphere to an oblate spheroid by evaluating the eccentricity of the resulting surface. The eccentricity attains a limiting value of about 0.439 for large spin parameters.
Interestingly, this value is approximately half the eccentricity of the ergosphere of the extremal Kerr black hole. We have revisited the amount of radiation content present in the trumpet and wormhole single spinning and boosted black holes. In general, the radiation content is nearly the same in both families of initial data sets, as indicated by Fig. 5. We have also presented the profiles of the regular function u(r,θ,ϕ) for the single trumpet black hole with spin and boost. By fixing the boost parameter P_0 and decreasing the spin J_0, we noticed that the profile approaches that of a single boosted black hole, as expected. For the last and most illustrative applications of the algorithm, we have considered initial data for trumpet and wormhole binaries. Trumpet data constituted by binary boosted black holes were considered in the axisymmetric case; the ADM mass converges exponentially. For a more general case, we have generated initial data with wormhole boosted black holes with the same parameters as in Ref. <cit.> but with truncation orders N_x=40 and N_y=16, which means 40 radial collocation points and a grid of 33 × 33 angular points for the quadrature formulae (<ref>). The Galerkin-Collocation method is a viable alternative to solve the Hamiltonian constraint for the trumpet and wormhole initial data sets. We point out two directions to follow. The first is to consider 1+log trumpet data sets, for which the maximal slicing condition is relaxed <cit.>. The second is to extend the present algorithm to more than one domain using the technique of domain decomposition.

§ ACKNOWLEDGEMENTS

The authors acknowledge the financial support of the Brazilian agencies CNPq, CAPES and FAPERJ. HPO thanks FAPERJ for support within the grant BBP (Bolsas de Bancada para Projetos). We also would like to thank Thomas W.
Baumgarte for comments on the manuscript.

§ BACKGROUND SCHWARZSCHILD TRUMPET EXACT SOLUTION

The exact expression corresponding to the maximally sliced trumpet of the Schwarzschild spacetime was derived by Baumgarte and Naculich <cit.>: Ψ_0 = [4 R/2R+m_0+√(4R^2+4m_0R+3m_0^2)]^1/2×[8R+6m_0+3√(8R^2+8m_0R+6m_0^2)/(4+3√(2))(2R-3m_0)]^1/(2√(2)), where the isotropic radial coordinate r is r = [2R+m_0+√(4R^2+4m_0R+3m_0^2)/4] ×[(4+3√(2))(2R-3m_0)/8R+6m_0+3√(8R^2+8m_0R+6m_0^2)]^1/√(2). We have located the binary punctures along the z-axis (𝐂_1,2=(0,0,± a)) for the sake of convenience. The background conformal factors have the same form as Eq. (A1), but with Ψ_1=Ψ_1(R_1) and Ψ_2=Ψ_2(R_2). The relation between the areal radius R_1 and the coordinates (r,θ) is √(r^2+2arcosθ+a^2) = [2R_1+m_1+√(4R_1^2+4m_1R_1+3m_1^2)/4] ×[(4+3√(2))(2R_1-3m_1)/8R_1+6m_1+3√(8R_1^2+8m_1R_1+6m_1^2)]^1/√(2), and a similar expression connects R_2 with (r,θ).

§ THE APPARENT HORIZON

The apparent horizon for axisymmetric systems satisfies the following ordinary differential equation: ∂^2_θ h = -Γ_BC^A m_A u^B u^C - (ds/dθ)^2 γ^ϕϕΓ^A_ϕϕ m_A - (γ^(2))^-1/2 ds/dθ u^A u^B K_AB - (γ^(2))^-1/2(ds/dθ)^3 γ^ϕϕ K_ϕϕ, where r=h(θ) describes the apparent horizon surface, m_i=(1,-∂_θ h,0), u^i=(∂_θ h,1,0) and (ds/dθ)^2 = γ_AB u^A u^B; the capital indices run over the coordinates r,θ. Since K=0, it follows that K_ij = A_ij = Ψ^2 A̅_ij. The conformal factor is obtained from the numerical solution of the Hamiltonian constraint and inserted into the apparent horizon equation. We have introduced ỹ=cosθ and transformed the apparent horizon equation into a non-autonomous dynamical system of the type ∂_ỹ h = v, ∂_ỹ v = f(h,v,ỹ), whose solution must satisfy the boundary conditions ∂_θ h=0 for θ=0,π, or equivalently v √(1-ỹ^2) = 0 for ỹ=-1,1.
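A common way to integrate a two-point boundary-value problem of this kind is a shooting method: integrate the first-order system from one pole with a trial value h(0) and adjust it until the regularity condition at the other pole is satisfied. The sketch below (here parametrized by θ for simplicity) uses a hand-rolled RK4 integrator and bisection, applied to a toy right-hand side with the known solution h ≡ 1; the actual apparent-horizon right-hand side f(h,v,ỹ) is not reproduced here.

```python
import numpy as np

def rk4(f, y0, t):
    """Classical fourth-order Runge-Kutta over the grid t; returns all states."""
    y = np.asarray(y0, dtype=float)
    ys = [y.copy()]
    for t0, t1 in zip(t[:-1], t[1:]):
        dt = t1 - t0
        k1 = f(t0, y)
        k2 = f(t0 + dt / 2, y + dt / 2 * k1)
        k3 = f(t0 + dt / 2, y + dt / 2 * k2)
        k4 = f(t1, y + dt * k3)
        y = y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        ys.append(y.copy())
    return np.array(ys)

def shoot(rhs, h_lo, h_hi, n=400):
    """Find h(0) such that v(pi) = 0, starting from v(0) = 0 (regularity
    at the poles), by bisection on the shooting parameter h(0)."""
    t = np.linspace(0.0, np.pi, n)
    def v_end(h0):
        return rk4(rhs, [h0, 0.0], t)[-1, 1]
    for _ in range(60):
        mid = 0.5 * (h_lo + h_hi)
        if v_end(h_lo) * v_end(mid) <= 0.0:
            h_hi = mid
        else:
            h_lo = mid
    return 0.5 * (h_lo + h_hi)

# Toy right-hand side with exact solution h(theta) = 1:  d^2h/dtheta^2 = h - 1.
toy_rhs = lambda t, y: np.array([y[1], y[0] - 1.0])
h0 = shoot(toy_rhs, 0.5, 2.0)
```

The same driver applies once `toy_rhs` is replaced by the apparent-horizon right-hand side built from the numerically obtained conformal factor.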
§ EXTRINSIC CURVATURE FOR BINARY BLACK HOLES

The quantity A̅_ijA̅^ij for trumpet boosted punctures with 𝐏_1=(0,0,P_1), 𝐏_2=(0,0,P_2), located at 𝐂_1=(0,0,-a) and 𝐂_2=(0,0,a), respectively, is given by A̅_ijA̅^ij = 9 P_1^2/2 r_1^6[(1+2cos^2θ)r^2 + 6arcosθ + 3a^2] + 9 P_2^2/2 r_2^6[(1+2cos^2θ)r^2 - 6arcosθ + 3a^2] + 9 P_1 P_2/2 r_1^5 r_2^5[(1+2cos^2θ)r^6 + (2cos^4θ-14cos^2θ+3)a^2r^4 + (8cos^2θ+1)a^4r^2 - 3a^6] + 81m_1^4/8 r_1^6 + 81m_2^4/8 r_2^6 + 81 m_1^2 m_2^2/4r_1^5r_2^5(2a^2r^2cos^2θ+a^4-4a^2r^2+r^4) - 27√(3)m_1^2 P_1/2r_1^6(rcosθ+a) - 27√(3)m_2^2 P_2/2r_2^6(rcosθ-a) - 27√(3)m_2^2P_1/2r_1^5r_2^5[a^5+a^4rcosθ-2a^3r^2+(2cos^2θ-4)cosθ a^2r^3+(2cos^2θ-1)ar^4+r^5cosθ] + 27√(3)m_1^2 P_2/2r_1^5 r_2^5[a^5-a^4rcosθ-2a^3r^2+(-2cos^2θ+4)a^2r^3cosθ+(2cos^2θ-1)ar^4-r^5cosθ], where r_1=√(r^2+2arcosθ+a^2) and r_2=√(r^2-2arcosθ+a^2). For the case of wormhole boosted punctures located at 𝐂_1, 𝐂_2 with 𝐏_1=(P_1,0,0), 𝐏_2=(P_2,0,0), we have A̅_ijA̅^ij = 9 P_1^2/2 r_1^6[2arcosθ+a^2+r^2+2r^2(1-cos^2θ)cos^2ϕ] + 9 P_2^2/2 r_2^6[-2arcosθ+a^2+r^2+2r^2(1-cos^2θ)cos^2ϕ] + 9 P_1 P_2/2 r_1^3 r_2^3[r^2-a^2+2r^2(1-cos^2θ)(r^4-a^4-a^2r^2(1-cos^2θ))/(2arcosθ+a^2+r^2)(r^2-2arcosθ+a^2)cos^2ϕ]

cook G. B. Cook, Initial Data for Numerical Relativity, Liv. Rev. Relativity 3, 5 (2000).
LIGO_gws B. P. Abbott et al. (LIGO Scientific Collaboration and Virgo Collaboration), Phys. Rev. Lett. 116, 061102 (2016).
pretorius F. Pretorius, Phys. Rev. Lett. 95, 121101 (2005).
campanelli M. Campanelli, C. O. Lousto, P. Marronetti and Y. Zlochower, Phys. Rev. Lett. 96, 111101 (2006).
baker J. G. Baker, J. Centrella, D.-I. Choi, M. Koppitz and J. van Meter, Phys. Rev. Lett. 96, 111102 (2006).
adm R. Arnowitt, S. Deser and C. W. Misner, in Gravitation: an Introduction to Current Research, edited by L. Witten (Wiley, 1962), p. 227.
york_1979 J. W. York, Jr., in Sources of Gravitational Radiation, edited by L. L. Smarr (Cambridge University Press, London, 1979), p. 83.
bowen_york J. M. Bowen and J. W. York, Phys. Rev.
D 21, 2047 (1980).
baumgarte_shapiro T. W. Baumgarte and S. L. Shapiro, Numerical Relativity: Solving Einstein's Equations on the Computer (Cambridge University Press, 2010).
brandt_brugmann S. Brandt and B. Brugmann, Phys. Rev. Lett. 78, 3606 (1997).
brugmann_04 B. Brugmann, W. Tichy and N. Jansen, Phys. Rev. Lett. 92, 211101 (2004).
baumg_2000 T. W. Baumgarte, Phys. Rev. D 62, 024018 (2000).
diener_06 P. Diener, F. Herrmann, D. Pollney, E. Schnetter, E. Seidel, R. Takahashi, J. Thornburg and J. Centrella, Phys. Rev. Lett. 96, 121101 (2006).
baker_06 J. G. Baker, J. Centrella, D.-I. Choi, M. Koppitz and J. R. van Meter, Phys. Rev. D 73, 104002 (2006).
meter_06 J. R. van Meter, J. G. Baker, M. Koppitz and D.-I. Choi, Phys. Rev. D 73, 124011 (2006).
bode_09 T. Bode, P. Laguna, D. M. Shoemaker, I. Hinder, F. Herrmann and B. Vaishnav, Phys. Rev. D 80, 024008 (2009).
hannan_mov_punct M. Hannam, S. Husa, N. O. Murchadha, B. Brugmann, J. A. Gonzalez and U. Sperhake, J. Phys. Conf. Series 66, 012047 (2007).
hannan_mov_punct2 M. Hannam, S. Husa, D. Pollney, B. Brugmann and N. O. Murchadha, Phys. Rev. Lett. 99, 241102 (2007).
brown J. D. Brown, Phys. Rev. D 77, 044018 (2008).
hannan_mov_punct3 M. Hannam, S. Husa, F. Ohme, B. Brugmann and N. O. Murchadha, Phys. Rev. D 78, 064020 (2008).
baumg_nac T. W. Baumgarte and S. G. Naculich, Phys. Rev. D 75, 067502 (2007).
denninson_baumg K. A. Dennison and T. W. Baumgarte, Class. Quantum Grav. 31, 117001 (2014).
hanann_id_trumpet M. Hannam, S. Husa and N. O. Murchadha, Phys. Rev. D 80, 124007 (2009).
immer_baumg J. D. Immerman and T. W. Baumgarte, Phys. Rev. D 80, 061501(R) (2009).
boyd J. P. Boyd, Chebyshev and Fourier Spectral Methods (Dover Publications, 2001).
dietrich_brugmann T. Dietrich and B. Brugmann, Phys. Rev. D 89, 024014 (2014).
deol_rod_bondi H. P. de Oliveira and E. L. Rodrigues, Class. Quant. Grav. 28, 235011 (2011).
deol_rod_RT H. P. de Oliveira, E. L. Rodrigues and J. F. E. Skea, Phys. Rev.
D 84, 044007 (2011).
deol_rod_idata2 H. P. de Oliveira and E. L. Rodrigues, Phys. Rev. D 86, 064007 (2011).
pfeiffer_CPC H. P. Pfeiffer, L. E. Kidder, M. A. Scheel and S. Teukolsky, Comp. Phys. Commun. 152, 253 (2003).
ansorg_1 M. Ansorg, B. Brugmann and W. Tichy, Phys. Rev. D 70, 064011 (2004).
ansorg_07 M. Ansorg, Class. Quant. Grav. 24, S1-14 (2007).
ossokine S. Ossokine, F. Foucart, H. P. Pfeiffer, M. Boyle and B. Szilagyi, Class. Quant. Grav. 32, 245010 (2015).
finlayson B. A. Finlayson, The Method of Weighted Residuals and Variational Principles (Academic Press, New York, 1972).
fornberg B. Fornberg, A Practical Guide to Pseudospectral Methods, Cambridge Monographs on Applied and Computational Mathematics (Cambridge University Press, 1998).
cook_phd G. B. Cook, Ph.D. thesis, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina (1990).
cook_york G. B. Cook and J. W. York, Phys. Rev. D 41, 1077 (1990).
arXiv:1703.09131v1 [gr-qc], 27 March 2017. P. C. M. Clemente and H. P. de Oliveira, "Puncture black hole initial data: a single domain Galerkin-Collocation method for trumpet and wormhole data sets".
Cross section and transverse single-spin asymmetry of muons from open heavy-flavor decays in polarized p+p collisions at √(s)=200 GeV

L. Zou (PHENIX Collaboration; Spokesperson: akiba@rcf.rhic.bnl.gov)

December 30, 2023

The cross section and transverse single-spin asymmetries of μ^- and μ^+ from open heavy-flavor decays in polarized p+p collisions at √(s)=200 GeV were measured by the PHENIX experiment during 2012 at the Relativistic Heavy Ion Collider. Because heavy-flavor production is dominated by gluon-gluon interactions at √(s)=200 GeV, these measurements offer a unique opportunity to obtain information on the trigluon correlation functions. The measurements are performed at forward and backward rapidity (1.4<|y|<2.0) over the transverse momentum range of 1.25<p_T<7 GeV/c for the cross section and 1.25<p_T<5 GeV/c for the asymmetry measurements. The obtained cross section is compared to a fixed-order-plus-next-to-leading-log perturbative-quantum-chromodynamics calculation. The asymmetry results are consistent with zero within uncertainties, and a model calculation based on twist-3 three-gluon correlations agrees with the data.

§ INTRODUCTION

Transverse single-spin asymmetry (TSSA) phenomena have gained substantial attention in both experimental and theoretical studies in recent years. The existence of TSSAs has been well established in the production of light mesons at forward rapidity in transversely polarized p+p collisions at energies ranging from the Zero Gradient Synchrotron up to the Relativistic Heavy Ion Collider (RHIC). Surprisingly large but oppositely-signed TSSA results were first observed in π^+ and π^- production at large Feynman-x (x_F) in transversely polarized collisions at √(s) = 4.9 GeV <cit.>.
These results surprised the quantum-chromodynamics (QCD) community because they disagreed with the expectation from naive perturbative QCD of very small spin asymmetries <cit.>. The large TSSA of pion production has subsequently been observed in hadronic collisions over a range of energies extending up to √(s) = 500 GeV for π^0 (√(s) = 200 GeV for π^±) <cit.>. Furthermore, the TSSA in η meson production has also been studied at forward rapidity <cit.>. The results are consistent with the observed π^0 asymmetries at various energies in the overlapping x_F regions. Two theoretical formalisms within the perturbative QCD framework have been proposed to explain the origin of these large TSSAs at forward rapidity. Both formalisms connect the TSSA to the transverse motion of the partons inside the transversely polarized nucleon and/or to spin-dependent quark fragmentation.

One framework is based on the transverse-momentum-dependent (TMD) parton distribution and fragmentation functions, called TMD factorization. The initial-state contributions originate from the Sivers function <cit.>, which describes the correlation between the transverse spin of the nucleon and the parton transverse momentum in the initial state. The final-state contribution originates from the quark transversity distribution and the Collins <cit.> fragmentation function, which describes the fragmentation of a transversely polarized quark into a final-state hadron with nonzero transverse momentum relative to the parton direction. This framework requires two observed scales, where only one needs to be hard, and both effects have been observed in SIDIS measurements <cit.>. However, TMD factorization cannot be used in the interpretation of hadron production in p+p collisions, as only one hard scale is available <cit.>.

A second framework, applicable to our study, follows the QCD collinear factorization approach.
The collinear, higher-twist effects become more important in generating a large TSSA when there is only one observed momentum scale that is much larger than the nonperturbative hadronic scale Λ_QCD≈ 200 MeV <cit.>. A large TSSA can be generated from the twist-3, transverse-spin-dependent, multi-parton correlation functions in the initial state or fragmentation functions in the final state.

At RHIC energies, gluon-gluon interaction processes dominate heavy-quark production <cit.>, so heavy quarks serve to isolate the gluon contribution to the asymmetries. PHENIX has measured the TSSA (A_N) of J/ψ at central and forward rapidity <cit.>. Theoretical predictions of the J/ψ single-spin asymmetry are complicated by the lack of a good understanding of the J/ψ production mechanism <cit.>. In addition, there are feed-down contributions from higher resonance states in inclusive J/ψ production <cit.>. On the other hand, the effect of pure gluonic correlation functions on D-meson production in transversely polarized collisions has been extensively studied within the twist-3 mechanism in the framework of collinear factorization <cit.>. However, it is difficult to constrain the trigluon correlation functions due to the lack of experimental results. Future measurements including D-meson production are proposed at the Large Hadron Collider <cit.>.

This paper reports on measurements of the cross section and TSSA for muons from open heavy-flavor decays in polarized p+p collisions at √(s)=200 GeV. Results are presented for muons from semi-leptonic decays of open heavy-flavor hadrons, mainly D→μ + X and B→μ + X, in the forward and backward rapidity regions (1.4<|y|<2.0); the accessible momentum fraction of gluons in the proton is 0.0125–0.0135 and 0.08–0.14 in the backward (x_F<0) and forward (x_F>0) regions with respect to the polarized beam direction, respectively. Sec. <ref> describes the RHIC polarized proton beams and the PHENIX experimental setup.
The detailed analysis of muons from open heavy-flavor decays, including cross sections and TSSAs, is described in Sec. <ref>, and the results are presented in Sec. <ref>. Finally, a discussion of the results and their possible implications is provided in Sec. <ref>.

§ EXPERIMENTAL SETUP

§.§ The PHENIX experiment

The PHENIX detector comprises two central arms at midrapidity and two muon arms at forward and backward rapidity <cit.>. As shown in Fig. <ref>, the two muon spectrometers cover the full azimuthal angle in the pseudorapidity ranges 1.2<η<2.4 (north arm) and -2.2<η<-1.2 (south arm). In front of each muon arm, there is about 7 interaction lengths (λ_I) of copper-and-iron absorber, which provides a rejection factor of 1000 for charged pions; an additional stainless-steel absorber (2 λ_I in total) installed in 2011 contributes to further suppress the hadronic background <cit.>. Each muon arm has three stations of cathode strip chambers, the muon tracker (MuTr), for momentum measurement, and five layers (labeled Gap0 to Gap4) of proportional tube planes, the muon identifier (MuID), for muon identification. Each MuID gap comprises a plane of absorber (∼1λ_I) and two planes of Iarocci tubes whose orientation is along either the horizontal or the vertical direction in each plane. The MuID also provides a trigger for events containing one or more muon candidates.

The minimum bias (MB) trigger is provided by the beam-beam counters (BBC) <cit.>, which comprise two arrays of 64 quartz Čerenkov detectors to detect charged particles at high pseudorapidity. Each detector is located at z=±144 cm from the interaction point and covers the pseudorapidity range 3.1<|η|<3.9. The BBC also determines the collision-vertex position (z_vtx) along the beam axis, with a resolution of roughly 2 cm in p+p collisions.
§.§ RHIC polarized beams

RHIC is a unique polarized-proton collider located at Brookhaven National Laboratory. RHIC comprises two counter-circulating storage rings, in each of which as many as 120 polarized-proton bunches can be accelerated to a maximum energy of 255 GeV per proton. In the 2012 run, the beam injected into RHIC typically consisted of 109 filled bunches in each ring. The bunches collided with a one-to-one correspondence with a 106 ns separation. Pre-defined polarization patterns for every 8 bunches were changed fill-by-fill in order to reduce systematic effects. Two polarimeters are used to determine the beam polarizations. One is a hydrogen-jet polarimeter, which takes several hours to measure the absolute polarization <cit.>. The other is a fast proton-carbon polarimeter, which measures relative changes in the magnitude of the polarization and any variations across the transverse profile of the beam several times per fill <cit.>. During the √(s)=200 GeV run in 2012, the polarization direction in the PHENIX interaction region was transverse. The average clockwise-beam (known as blue beam) polarization for the data used in this analysis was P=0.64±0.03, and the average counter-clockwise-beam (yellow beam) polarization was P=0.59±0.03. There is a 3.4% global scale uncertainty on the measured A_N due to the polarization uncertainty.

§ DATA ANALYSIS

§.§ Data set

We analyzed a data set from transversely polarized p+p collisions at √(s)=200 GeV collected with the PHENIX detector in 2012, with an integrated luminosity of 9.2 pb^-1. These data were recorded using the MuID trigger in coincidence with the BBC trigger. The BBC trigger requires at least one hit in both BBCs. The BBC trigger efficiency for MB events (events containing muons from open heavy-flavor decays) is 55% (79%) <cit.>, determined with the van der Meer scan technique <cit.>. The MuID trigger serves to select events containing at least one MuID track reaching Gap3 or Gap4.
§.§ Yield of muons from open heavy-flavor decays

PHENIX has reported several measurements of muons from open heavy-flavor decays in various collision systems <cit.>. Similar methods to those developed in the previous analyses for background estimation are used in this analysis. Thanks to the additional absorber material, the measurement of positively charged muons from open heavy-flavor decays is possible in PHENIX for the first time with these data.

§.§.§ Muon-candidate selection

We choose tracks penetrating through all the MuID gaps as good muon candidates from events for which the BBC z-vertex is within ±25 cm. Track quality cuts, shown in Table <ref>, are also required to reject background tracks. DG0 is the distance between the projected positions of a MuTr track and a MuID track at the z position of MuID Gap0. DDG0 is the angular difference between the two projected positions used in DG0. r_ref is the distance between the interaction point and the projected position of a MuID track at z=0. p·(θ_MuTr - θ_vtx) is the polar scattering angle of a track inside the absorber scaled by the momentum, where θ_vtx is the angle at the vertex and θ_MuTr is the angle at MuTr Station 1. Two cuts, on p·(θ_MuTr - θ_vtx) and on χ^2 at z_vtx, are effective for rejecting tracks suffering from large multiple scattering or decaying to muons inside the absorber. Track quality cuts are determined with the help of a Monte Carlo simulation with geant4 <cit.>; the cut values vary with the momentum of the track.

In this analysis, we also use tracks that stopped at MuID Gap3 for background estimation, although these tracks are not considered muon candidates. After applying a proper p_z cut (p_z∼3.8 GeV/c), we obtain a data sample enriched in hadrons (called stopped hadrons) <cit.>.
These tracks are used to determine the punch-through hadron background, which arises from hadrons traversing all MuID layers without decaying; this background is described in more detail in the next section.

§.§.§ Background estimation

The primary sources of background tracks are charged pions and kaons. Decay muons from π^± and K^± are the dominant background for p_T<5 GeV/c, while the fraction of punch-through hadrons becomes larger at p_T>5 GeV/c. Another background component is muons from J/ψ decays. The contribution from J/ψ decay is small in the low-p_T region but increases up to 20% of muons from inclusive heavy-flavor decays at p_T∼5 GeV/c. Backgrounds from light resonances (ϕ, ρ, and ω) or other quarkonium states (χ_c, ψ^', and Υ) are negligible <cit.>. Therefore, the number of muons from open heavy-flavor decays is obtained as N_HF = N_incl/ε_trig - N_DM - N_PH - N_J/ψ→μ, where N_HF is the number of muons from open heavy-flavor decays, N_incl is the number of muon candidates passing all track quality cuts in Table <ref>, ε_trig is the efficiency of the MuID trigger, N_DM is the estimated number of decay muons from π^± and K^±, N_PH is the estimated number of punch-through hadrons, and N_J/ψ→μ is the estimated number of muons from J/ψ decay. The trigger-efficiency correction must be applied before subtracting the background, because the simulation of the backgrounds does not include any inefficiency of the MuID trigger. The MuID trigger efficiency is evaluated with data by measuring the fraction of MuID triggers in non-MuID-triggered events containing tracks at MuID Gap3 or Gap4.

To estimate the hadronic background (N_DM and N_PH), the hadron-cocktail method developed for the previous analysis <cit.> is used. Initial particle distributions for the hadron-cocktail simulation are estimated from measurements of charged pions and kaons at midrapidity <cit.>. The pythia
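The subtraction in the equation above is simple arithmetic once the background estimates are in hand. A minimal sketch (the numbers below are hypothetical, not PHENIX values):

```python
def n_heavy_flavor(n_incl, eps_trig, n_decay_mu, n_punch_through, n_jpsi_mu):
    """N_HF = N_incl / eps_trig - N_DM - N_PH - N_{J/psi -> mu}.

    The trigger-efficiency correction is applied to the inclusive yield
    first, because the simulated backgrounds carry no trigger inefficiency.
    """
    return n_incl / eps_trig - n_decay_mu - n_punch_through - n_jpsi_mu

# Illustrative numbers only.
n_hf = n_heavy_flavor(n_incl=1000, eps_trig=0.8,
                      n_decay_mu=400, n_punch_through=100, n_jpsi_mu=50)
```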
The pythiaevent generator <cit.> is used to extrapolate the spectra at midrapidity to the forward rapidity region. Toobtain enough statistics of reconstructed tracks in the high-region,a p_T^3 weight is applied to the estimated spectra for thesimulation and the simulation output is reweighted by 1/p_T^3 fora proper comparison with the data. Based on these initial hadrondistributions, a full chain of detector simulation withgeant4 <cit.> and track reconstruction is performed.Due to uncertainties in the estimation of input distributions andhadron-shower simulation with the thick absorber in front of the MuTr,an additional, data-driven, tuning procedure of the simulation is neededto determine the background more precisely. Two methods, describedbelow, are used to tune the hadron-cocktail simulation: Normalized z_ vtx distribution: The z_ vtxdistribution of tracks (dN_μ/dz_ vtx) normalized by thez_ vtx distribution of MB events (dN_ evt/dz_ vtx)provides a good constraint on the decay muon background.Because thedistance from z_ vtx to the front absorber is relatively shortcompared to the decay length of π^± and K^±, theproduction of decay muons shows a linear dependence on z_ vtx.Therefore, the number of decay muons can be estimated by matching theslope in the normalized z_ vtx distribution at MuID Gap4 for eachbin. More details are described in <cit.>. Stopped hadrons: Hadrons stopping at MuID Gap3 can be removedwith an appropriate momentum cut (p_z∼3.8  GeV/c) asdescribed in the previous section. The remaining stopped muons are lessthan 10% in the tracks at MuID Gap3, based on the simulation study. 
The punch-through hadron background at the last MuID gap can be estimated by matching the distribution of stopped hadrons at MuID Gap3.

After tuning the hadron-cocktail simulation, the decay muons (N_DM) from the normalized-z_vtx matching and the punch-through hadrons (N_PH) from the stopped-hadron matching are combined for the final estimate of the background from light hadrons. For the decay muons at p_T>3 GeV/c and the punch-through hadrons, the difference between the two methods of tuning is assigned as the systematic uncertainty. More details on the hadron-cocktail simulation and the tuning procedure are given in <cit.>.

Muons from J/ψ decays are also subtracted in order to obtain the number of muons from open heavy-flavor decays. From the measurement of the invariant J/ψ cross section in the forward region <cit.> and a decay simulation, the number of muons from J/ψ decay (N_J/ψ→μ) can be estimated <cit.>. The contribution of muons from J/ψ to the muons from inclusive heavy-flavor decays is ∼2% at low p_T and increases up to ∼20% at p_T>5 GeV/c. Because there is a B→J/ψ contribution in the inclusive J/ψ measurement, a fraction of B decays is included in N_J/ψ→μ and subtracted as background. However, this fraction, N_B→J/ψ→μ/N_HF, is quite small based on the measurements of the B→J/ψ fraction <cit.>.

Figure <ref> shows the p_T spectra of inclusive muon tracks and the estimated background components; the relative contribution from each source varies with p_T. After subtraction of the backgrounds from light hadrons and J/ψ, the spectra of muons from open heavy-flavor decays are obtained. Figure <ref> shows the signal-to-background ratio (N_HF/(N_DM + N_PH + N_J/ψ→μ)) of negatively (top panel) and positively (bottom panel) charged tracks; blue open circle (red closed rectangle) points represent the results in the South (North) arm. Vertical bars (boxes) around the data points are statistical (systematic) uncertainties; details on the systematic uncertainties are described in the following section.
Because K^+ has a longer nuclear interaction length than other light hadrons, the signal-to-background ratio of positively charged tracks is smaller than that of negatively charged tracks.

§.§.§ Acceptance and efficiency correction

The acceptance and efficiency correction is evaluated using a single-muon simulation. The same simulation procedure as for the hadron cocktail is used, and reconstructed muons are filtered with the same track quality cuts and fiducial cuts as applied to the data. Because the detector performance throughout the data-taking period was stable, one reference run is used to calculate the correction factors. The variation of the number of muon candidates per event throughout the data-taking period is 8.1% (4.6%) for the South (North) arm, and the quadratic sum of this variation with the systematic uncertainties on the MuTr (4%) and MuID (2%) is assigned as the systematic uncertainty on the acceptance and efficiency correction.

§.§.§ Systematic uncertainty

There are three major sources of systematic uncertainty: the background estimation (δ_bkg), the acceptance and efficiency correction (δ_Aε), and the BBC efficiency (δ_BBC). The sources of δ_bkg are listed here:

δ_trig: A 5% (15%) systematic uncertainty is assigned to the MuID trigger efficiency for tracks at MuID Gap4 (Gap3), considering the statistical uncertainty of tracks in the non-MuID-triggered events; this uncertainty is included in the systematic uncertainty on N_DM (Gap4) and N_PH (Gap3).

δ_sim: The hadron-cocktail simulation with the thick absorber (∼13λ_I) can be a source of systematic uncertainty. For N_DM at p_T<3 GeV/c, where the background can be constrained with muons, a 10% systematic uncertainty is assigned conservatively, due to the extraction of the slope in the normalized z_vtx distributions. The difference between the two methods of tuning described in Sec. <ref> is assigned as the systematic uncertainty on N_DM at p_T>3 GeV/c and on N_PH.
The systematic uncertainty on N_ DM (N_ PH) is 10–15% (10–40%), depending on p_T.

δ_ input Because there is no precise measurement of π^± and K^± production at forward rapidity, a 30% systematic uncertainty is assigned to the estimation of the K/π ratio based on the systematic uncertainty of measurements at midrapidity <cit.>. The impact on N_ HF is evaluated by performing the hadron-cocktail tuning procedure with various initial K/π ratios, and the variation of N_ HF is less than 10%. The uncertainty on the shape of the p_T distribution is negligible, because the tuning of the hadron-cocktail simulation can take a p_T dependence into account. A 10% systematic uncertainty is conservatively assigned to N_ HF.

δ_J/ψ→μ The upper and lower limits of the systematic uncertainty on the J/ψ cross section measurement are taken into account for the systematic uncertainty on N_J/ψ→μ. The contribution from B decays is also considered. A 3% systematic uncertainty is assigned to N_ HF due to the uncertainty on N_J/ψ→μ.

For the systematic uncertainty on N_ HF, the δ_ trig and δ_ sim on N_ DM (N_ PH) are propagated into N_ HF with the ratio N_ DM/N_ HF (N_ PH/N_ HF). This propagated uncertainty is combined in quadrature with the δ_ input and δ_J/ψ→μ on N_ HF. The δ_bkg is 8–40%, depending on p_T.

There are also systematic uncertainties on the acceptance and efficiency correction (δ_Aε) and the BBC efficiency (δ_ BBC); see the discussion in <cit.>. For the δ_Aε, all sources described in Sec. <ref> are added in quadrature, and 9.3% and 6.4% systematic uncertainties are assigned to the South and North arm, respectively. Table <ref> summarizes the systematic uncertainties on the cross section of muons from open heavy-flavor decays, and the quadratic sum of the three components is the final systematic uncertainty.
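One plausible reading of the δ_bkg propagation above can be sketched in a few lines: the relative uncertainties on the background yields are scaled by their yield ratio to N_ HF and then combined in quadrature with the K/π-input (10%) and J/ψ (3%) terms. The specific yields and background uncertainties below are invented for illustration; this is a sketch of the propagation logic, not the PHENIX analysis code.

```python
# Sketch of delta_bkg propagation: background uncertainties scaled by their
# yield ratio to N_HF, combined in quadrature with the 10% K/pi-input and
# 3% J/psi terms from the text. All yields here are toy numbers.
import math

def delta_bkg(rel_dm, rel_ph, n_dm, n_ph, n_hf,
              rel_input=0.10, rel_jpsi=0.03):
    term_dm = rel_dm * n_dm / n_hf    # propagated decay-muon term
    term_ph = rel_ph * n_ph / n_hf    # propagated punch-through term
    return math.sqrt(term_dm**2 + term_ph**2 + rel_input**2 + rel_jpsi**2)

# e.g. 10% on N_DM and 20% on N_PH with toy yields
result = delta_bkg(0.10, 0.20, n_dm=4000, n_ph=1500, n_hf=4000)
print(round(result, 3))  # ~0.163, i.e. a 16% background uncertainty
```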
§.§ Transverse Single-Spin Asymmetry

§.§.§ Determination of the TSSA

Both proton beams are transversely polarized at the interaction point. The TSSA (A_N) in the yield of muons from heavy-flavor decays is obtained for each beam separately by summing over the spin information of the other beam. The final asymmetry is calculated as the weighted average of the asymmetries for the two beams.

The maximum likelihood method is used for this measurement. The likelihood ℒ is defined as

ℒ = ∏ (1 + P · A_N sin(ϕ_ pol - ϕ_i)),

where P is the polarization, ϕ_ pol is the direction of the beam polarization (+π/2 or -π/2), and ϕ_i is the azimuthal angle of each track in the PHENIX lab frame. The unbinned likelihood method is used in this study so that the result is not biased by low-statistics bins. The likelihood function is usually written in logarithmic form,

logℒ = ∑log(1 + P · A_N sin(ϕ_ pol - ϕ_i)).

The A_N value is determined by maximizing logℒ. The statistical uncertainty of the log-likelihood estimator is related to its second derivative,

σ^2(A_N) = (-∂^2 logℒ/∂ A_N^2)^-1.

§.§.§ Inclusive- and background-asymmetry estimation

We study tracks that penetrate to the last MuID gap (Gap4); these tracks are created by muons from open heavy-flavor decays, punch-through hadrons, muons from light hadrons, and muons from J/ψ decay. The contribution from other sources is negligible, as discussed in Sec. <ref>. To obtain the asymmetry of muons from open heavy-flavor decays (A_N^ HF), the asymmetry of the background from light hadrons (A_N^ h) and of muons from J/ψ (A_N^J/ψ→μ) must be eliminated from the asymmetry of inclusive muon candidates (A_N^ incl). Because hadron tracks can be selected with the p_z cut, A_N^ h is obtained from the asymmetry of stopped hadrons at MuID Gap3. Possible differences between the A_N of stopped hadrons at MuID Gap3 and that of the mixture of decay muons and punch-through hadrons at MuID Gap4 are studied with the hadron-cocktail simulation. The details are described in Sec.
<ref>.

For the estimation of A_N^J/ψ→μ, a previous PHENIX A_N^J/ψ measurement <cit.> is used. The asymmetry of single muons from J/ψ decay (A_N^J/ψ→μ) is estimated from a decay simulation with the initial A_N^J/ψ from <cit.> (A_N^J/ψ=-0.002±0.026 at x_F<0, and -0.026±0.026 at x_F>0). The initial p_T and rapidity distributions of the J/ψ are taken from <cit.>. The obtained A_N^J/ψ→μ is -0.002^+0.018_-0.022 at x_F<0 and -0.019^+0.019_-0.025 at x_F>0. A possible effect from J/ψ polarization is tested by assuming maximum polarization, and the resulting variation of A_N^J/ψ→μ is <0.001. Because the variation due to J/ψ polarization is much smaller than the variation from the uncertainty of A_N^J/ψ, the J/ψ polarization effect is not included in the evaluation of A_N^J/ψ→μ and its systematic uncertainty.

Once A_N^ h and A_N^J/ψ→μ are determined, the A_N of muons from open heavy-flavor decays and its uncertainty can be obtained as

A_N^ HF=(A_N^ incl-f_ h· A_N^ h-f_J/ψ· A_N^J/ψ→μ)/(1-f_ h-f_J/ψ),

δ A_N^ HF=√((δ A_N^ incl)^2+f_ h^2· (δ A_N^ h)^2+f_J/ψ^2· (δ A_N^J/ψ→μ)^2)/(1-f_ h-f_J/ψ),

where f_ h=(N_ DM+N_ PH)/N_ incl is the fraction of the light-hadron background, and f_J/ψ=N_J/ψ→μ/N_ incl is the fraction of muons from J/ψ. Both fractions (f_ h and f_J/ψ) are determined from the background estimation described above. δ A_N^J/ψ→μ, estimated from the previous PHENIX measurement, is included in the systematic uncertainty.

§.§.§ Systematic Uncertainty

The systematic uncertainty is determined from the variation of A_N^ HF between the upper and lower limits of each background source. An additional systematic uncertainty is derived from the comparison between the two A_N^ HF calculation methods: the maximum likelihood method (Eq. (<ref>)) and the polarization formula (Eq. (<ref>)). The final systematic uncertainty is calculated as the quadratic sum of the systematic uncertainties from each source (δA_N^δ f_ h, δA_N^ h, δA_N^J/ψ→μ, and δA_N^ method), described here:

δA_N^δ f_ h The systematic uncertainty on the fraction of the light-hadron background (δ f_ h) from Fig.
<ref> is an important source of systematic uncertainty on A_N^ HF. The upper and lower limits of A_N^ HF are calculated using Eq. (<ref>) with the upper and lower limits of the fraction of the light-hadron background (f_ h±δ f_ h).

δA_N^ h The asymmetry of the light-hadron background (A_N^ h) at MuID Gap4 is estimated by using stopped hadrons at MuID Gap3. Due to the decay kinematics, the A_N^ h at MuID Gap4 can differ from the A_N^ h measured at MuID Gap3. In order to quantify the difference, a simulation study using the decay kinematics of light hadrons from the hadron cocktail in Sec. <ref> and an input asymmetry (A_N^ input) is performed. A_N^ input is taken as 0.02×p_T (with p_T in GeV/c) at p_T<5  GeV/c and 0.1 at p_T>5  GeV/c, based on the most extreme case of A_N^ h measured at MuID Gap3. The detailed procedure is as follows:

* Generate a random spin direction (↑,↓) for all tracks.

* Apply a weight (1± A_N^ input·cosϕ_0) to each track based on the manually assigned initial asymmetry (A_N^ input). The sign is determined from the random polarization direction in step 1, and ϕ_0 is the azimuthal angle of the track at the generation level.

* Extract A_N^ reco of the tracks at MuID Gap3 and Gap4 with the azimuthal angle and momentum of the reconstructed tracks by fitting the asymmetry of the two polarization cases with A_N^ reco·cosϕ_0.

The largest difference between A_N^ reco at MuID Gap3 and Gap4 is ∼0.008 over the entire p_T range, so ±0.008 is assigned as the systematic uncertainty. In the case of x_F binning, the difference of A_N^ reco at MuID Gap3 and Gap4 is quite small (<0.001).

δA_N^J/ψ→μ The systematic uncertainty from A_N^J/ψ→μ is determined from the J/ψ→μ simulation with the upper and lower limits of A_N^J/ψ in <cit.>. The propagation to A_N^ HF is calculated using Eq. (<ref>). The effect from B→ J/ψ is negligible due to its small fraction in the inclusive J/ψ.

δA_N^ method The A_N^ incl results from the maximum likelihood method of Eq.
(<ref>) are compared with the results using the polarization formula of Eq. (<ref>). Because the measurement of A_N^ h using tracks at MuID Gap3 suffers from large statistical fluctuations, the difference between the two methods with inclusive tracks at MuID Gap4 is used for both the A_N^ incl and A_N^ h variations using Eq. (<ref>). A_N(ϕ) of inclusive tracks for each p_T or x_F bin is calculated as

A_N(ϕ)=(σ^↑(ϕ)-σ^↓(ϕ))/(σ^↑(ϕ)+σ^↓(ϕ))=(1/P)·(N^↑(ϕ)-R· N^↓(ϕ))/(N^↑(ϕ)+R· N^↓(ϕ)),

where P is the average beam polarization, σ^↑, σ^↓ are the cross sections for each polarization, N^↑, N^↓ are the yields for the two polarizations, and R = L^↑/L^↓ is the relative luminosity, where the luminosity (L^↑, L^↓) is measured by the BBC detectors. A_N^ incl is calculated by fitting the A_N(ϕ) distribution with a function ± A_N·cosϕ, where the ± depends on the beam direction. The systematic uncertainty on A_N^ HF is evaluated by propagating the variations of A_N^ incl and A_N^ h between the maximum likelihood method and the polarization formula.

§ RESULTS

§.§ Cross section of muons from open heavy-flavor decays

The invariant cross section of muons from open heavy-flavor decays is calculated as

Ed^3σ/dp^3=1/(2π p_T Δp_T Δ y)·(N_ HF/ε_ BBC^ HF)·σ_pp^ inel/((N_ evt/ε_ BBC^ MB)· Aε),

where Δp_T and Δ y are the bin widths in p_T and y, N_ evt is the number of sampled MB events, ε_ BBC^ MB (ε_ BBC^ HF) is the BBC correction factor for the trigger efficiency of MB events (events containing muons from open heavy-flavor decays), Aε is the detector acceptance and track reconstruction efficiency, and σ_pp^ inel=42±3  mb is the inelastic cross section of p+p collisions at √(s)=200  GeV.

Figure <ref> shows the invariant cross section of positively- (open square) and negatively-charged (open circle) muons from open heavy-flavor decays as a function of p_T in p+p collisions at √(s)=200  GeV. Vertical bars (boxes) correspond to the statistical (systematic) uncertainties. The previous PHENIX results for negatively charged muons <cit.> are also shown, with vertical bars representing the total uncertainties.
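The invariant cross-section formula above can be evaluated numerically as a sanity check. Every input in the sketch below (yields, BBC factors, acceptance, bin choice) is an invented toy value, not a PHENIX number; only the structure of the formula follows the text.

```python
# Toy evaluation of E d^3sigma/dp^3 as defined in the text:
# (1 / (2 pi p_T dp_T dy)) * (N_HF / eps_HF) * sigma_inel
#                          / ((N_evt / eps_MB) * A_eff).
# All inputs are illustrative placeholders.
import math

def invariant_xsec(n_hf, eps_bbc_hf, n_evt, eps_bbc_mb, acc_eff,
                   pt, dpt, dy, sigma_inel=42e-3):  # sigma_inel in barns
    sampled = (n_evt / eps_bbc_mb) * acc_eff   # corrected sampled events
    corrected_yield = n_hf / eps_bbc_hf        # BBC-corrected signal yield
    return corrected_yield * sigma_inel / (2 * math.pi * pt * dpt * dy * sampled)

xsec = invariant_xsec(n_hf=4000, eps_bbc_hf=0.79, n_evt=1e9,
                      eps_bbc_mb=0.55, acc_eff=0.05,
                      pt=2.0, dpt=0.5, dy=1.2)
print(xsec)  # barn-scale invariant yield for the toy inputs
```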
The bottom panel shows the ratio between positively- and negatively-charged muons from open heavy-flavor decays (red open circles); the two spectra are consistent within the systematic uncertainties, which are dominated by the uncertainty from the hadron contamination. The comparison with the previous PHENIX results for negative muons is also presented as a ratio (black diamonds); the fit function in <cit.> is used to form the ratio at p_T>4.0  GeV/c. The uncertainties from the new results are included in the ratio, and the two results are in good agreement.

§.§ Transverse single-spin asymmetry

The TSSA of muons from open heavy-flavor decays is calculated by using Eq. (<ref>), and the statistical uncertainty is determined by using Eq. (<ref>). Figures <ref> and <ref> present the TSSA of negatively- (A_N^μ^-) and positively-charged (A_N^μ^+) muons from open heavy-flavor decays as a function of p_T in the forward (x_F>0) and backward (x_F<0) regions with respect to the polarized-proton beam direction. Figure <ref> shows the TSSA versus x_F of muons from open heavy-flavor decays. Vertical bars (boxes) represent statistical (systematic) uncertainties; a scale uncertainty from the polarization (3.4%) is not included. A_N^μ^+ in the negative x_F region, shown in the left panel of Fig. <ref>, shows some indication of a negative asymmetry; in the combined range of 2.5<p_T<5.0  GeV/c the asymmetry is -0.117±0.048 (stat)±0.037 (syst). However, the combined asymmetries for all p_T or x_F bins are consistent with zero within the total uncertainties. The other results, A_N^μ^+ at positive x_F and A_N^μ^- in all kinematic regions, are consistent with zero within statistical uncertainties. The results are tabulated in Tables <ref> and <ref>, while Tables <ref> and <ref> list the systematic uncertainties from each source.

§ DISCUSSION

Figure <ref> shows the charge-combined invariant cross section of muons from open heavy-flavor decays as a function of p_T. Vertical bars (boxes) correspond to the statistical (systematic) uncertainties.
The solid line in Fig. <ref> represents the fixed-order-plus-next-to-leading-log (FONLL) calculation of muons from open heavy-flavor decays from charm and bottom <cit.>, and the band around the line represents the systematic uncertainty from the renormalization scale, the factorization scale, and the heavy (c and b) quark masses. The bottom panel shows the ratio between the data and the FONLL calculation. In general, the agreement between the data and the FONLL prediction improves with increasing p_T, where the systematic uncertainties of both are decreasing. At p_T<4  GeV/c, where the charm contribution is larger than that from bottom, the measured yield is larger than the FONLL calculation, but the systematic uncertainties are large in both the data and the theoretical calculation. Recently, a theoretical approach within the gluon saturation (Color-Glass-Condensate) framework also presented the cross section of leptons from heavy-flavor decays in p+p and p+A collisions <cit.>.

A recent theoretical calculation <cit.> incorporating the collinear factorization framework makes predictions for A_N in the production of D mesons (A_N^D) produced by the gluon-fusion (gg→ cc̅) process and is therefore sensitive to the trigluon correlation functions, which depend on the momentum fraction of the gluon in the proton in the infinite-momentum frame (x-Bjorken). Two model calculations for the nonperturbative functions participating in the twist-3 cross section for A_N^D, assuming either a linear x-dependence (Model 1 in Figs. <ref>, <ref>, and <ref>) or a √(x)-dependence (Model 2 in Figs. <ref>, <ref>, and <ref>), are introduced to compare their behavior in the small-x region, and the overall A_N^D scale is determined by assuming |A_N^D|≤0.05 at |x_F|<0.1. To compare with our results for A_N^μ, the decay kinematics and cross section of D→μ from pythia <cit.> have been used to convert A_N^D into A_N^μ.
The theory calculations of the x_F and p_T dependence of A_N for D^0, D̅^̅0̅, D^+, and D^- at -0.6<x_F^D<0.6 and 1<p_T^D<10  GeV/c are used as the input A_N^D to the simulation. A procedure similar to that described in the systematic-uncertainty evaluation for δ A_N^ h is used. A weight of (1± A_N^D(p_T^D,x_F^D)·sin(ϕ^D-ϕ_ pol)) is applied to each muon from a D meson, and the sign is determined with a random polarization direction (↑,↓). Then, A_N^μ is extracted by fitting the asymmetry of the two polarization cases with A_N^μ·cosϕ^μ.

Figure <ref> shows the p_T and |x_F| distributions of D mesons which decay into muons in the kinematic range of this measurement (1.25<p_T^μ<5.0  GeV/c, 0.0<|x_F^μ|<0.2, and 1.4<|y^μ|<2.0); the accepted charm hadrons comprise D^0 (18.7%), D̅^̅0̅ (20.3%), D^+ (24.2%), D^- (26.1%), and others (D_s^+, D_s^-, and baryons). Because A_N^D^0 and A_N^D^+ (A_N^D̅^̅0̅ and A_N^D^-) are very close in both models, the effect of a potentially different abundance of D mesons between the data and pythia is negligible. In addition, the modification of A_N due to azimuthal smearing from the D decay is quite small (<5% relative difference between A_N^D and A_N^μ) at p_T^μ>1.25  GeV/c. One notes that muons from charm and bottom are combined in the data, and the contribution from bottom is about 2% (55%) at p_T=1  GeV/c (5  GeV/c) according to the FONLL calculation shown in Fig. <ref>. Therefore, the charm contribution is expected to be dominant except in the last p_T bin of A_N^μ (3.5<p_T<5  GeV/c). In addition, subprocesses other than gluon fusion can contribute to the measured yield of muons from heavy-flavor decays. The converted A_N of muons from D mesons is shown in Figs. <ref>, <ref>, and <ref>, and both calculations are in agreement with the data within the statistical uncertainties.
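The weighting-and-refit procedure above can be illustrated with a stripped-down toy: each track gets a random spin direction and a (1 ± A_N cosϕ) weight (for ϕ_ pol=±π/2 the sin(ϕ-ϕ_ pol) factor reduces to ∓cosϕ), and the asymmetry is recovered from the two spin samples by a cosϕ projection. The input asymmetry, track count, binning, and seed are arbitrary choices for illustration, not the analysis configuration.

```python
# Toy version of the spin-weighting procedure: random spin assignment,
# (1 +/- A_N cos phi) weights, and a cos(phi) projection to recover A_N.
# All parameters below are invented for the sketch.
import math
import random

random.seed(1)
A_INPUT = 0.02       # assumed input asymmetry
NBINS = 12
n_up = [0.0] * NBINS
n_down = [0.0] * NBINS
for _ in range(200000):
    phi = random.uniform(0.0, 2.0 * math.pi)
    w = 1.0 + A_INPUT * math.cos(phi)
    b = int(phi / (2.0 * math.pi) * NBINS) % NBINS
    if random.random() < 0.5:
        n_up[b] += w           # spin-up sample: weight (1 + A_N cos phi)
    else:
        n_down[b] += 2.0 - w   # spin-down sample: weight (1 - A_N cos phi)

centers = [2.0 * math.pi * (i + 0.5) / NBINS for i in range(NBINS)]
asym = [(u - d) / (u + d) for u, d in zip(n_up, n_down)]
# cos(phi) projection == least-squares cos(phi) amplitude for uniform bins
a_reco = 2.0 * sum(a * math.cos(c) for a, c in zip(asym, centers)) / NBINS
print(a_reco)  # statistically consistent with the input 0.02
```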
The difference between the two models becomes larger with increasing |x_F|, but it is hard to distinguish the two models due to the limited x_F coverage of this measurement (⟨|x_F^μ|⟩=0.04, 0.07).

§ SUMMARY

We have reported the cross section and transverse single-spin asymmetry of muons from open heavy-flavor decays at 1.4<|y|<2.0 in transversely-polarized p+p collisions at √(s)=200  GeV. Compared with previous measurements by PHENIX, the cross section and asymmetry for positively-charged muons from open heavy-flavor decays are measured for the first time with the help of additional absorber material in the PHENIX muon arms. In the comparison with the FONLL calculation, the FONLL prediction is smaller than the measured cross section at low p_T, where both the experimental and theoretical systematic uncertainties are large, but it shows agreement at p_T>4  GeV/c within systematic uncertainties.

Following the cross section results, we have measured the single-spin asymmetry of muons from open heavy-flavor decays for the first time. There is no clear indication of a nonzero asymmetry in the results, which have relatively large statistical uncertainties. Theoretical calculations of A_N for D-meson production which take into account trigluon correlations are converted into A_N for muons with the help of pythia to compare directly with the data. The calculations are in agreement with the data within the experimental uncertainties. Future studies with improved statistics (6.5 times the integrated luminosity of this analysis), using data taken with the PHENIX detector at RHIC in 2015, could provide further constraints on the trigluon correlation functions.

§ ACKNOWLEDGEMENTS

We thank the staff of the Collider-Accelerator and Physics Departments at Brookhaven National Laboratory and the staff of the other PHENIX participating institutions for their vital contributions. We also thank S. Yoshida and Y. Koike for the theory calculation.
We acknowledge support from the Office of Nuclear Physics in the Office of Science of the Department of Energy, the National Science Foundation, Abilene Christian University Research Council, Research Foundation of SUNY, and Dean of the College of Arts and Sciences, Vanderbilt University (U.S.A.), Ministry of Education, Culture, Sports, Science, and Technology and the Japan Society for the Promotion of Science (Japan), Conselho Nacional de Desenvolvimento Científico e Tecnológico and Fundação de Amparo à Pesquisa do Estado de São Paulo (Brazil), Natural Science Foundation of China (People's Republic of China), Croatian Science Foundation and Ministry of Science and Education (Croatia), Ministry of Education, Youth, and Sports (Czech Republic), Centre National de la Recherche Scientifique, Commissariat à l'Énergie Atomique, and Institut National de Physique Nucléaire et de Physique des Particules (France), Bundesministerium für Bildung und Forschung, Deutscher Akademischer Austausch Dienst, and Alexander von Humboldt Stiftung (Germany), National Science Fund, OTKA, EFOP, and the Ch. Simonyi Fund (Hungary), Department of Atomic Energy and Department of Science and Technology (India), Israel Science Foundation (Israel), Basic Science Research Program through NRF of the Ministry of Education (Korea), Physics Department, Lahore University of Management Sciences (Pakistan), Ministry of Education and Science, Russian Academy of Sciences, Federal Agency of Atomic Energy (Russia), VR and Wallenberg Foundation (Sweden), the U.S.
Civilian Research and Development Foundation for the Independent States of the Former Soviet Union, the Hungarian American Enterprise Scholarship Fund, and the US-Israel Binational Science Foundation.

§ APPENDIX: DATA TABLES

[1] R. D. Klem, J. E. Bowers, H. W. Courant, H. Kagan, M. L. Marshak, E. A. Peterson, K. Ruddick, W. H. Dragoset, and J. B. Roberts, Measurement of Asymmetries of Inclusive Pion Production in Proton Proton Interactions at 6 and 11.8 GeV/c, Phys. Rev. Lett. 36, 929 (1976).
[2] G. L. Kane, J. Pumplin, and W. Repko, Transverse Quark Polarization in Large-p_T Reactions, e^+e^- Jets, and Leptoproduction: A Test of Quantum Chromodynamics, Phys. Rev. Lett. 41, 1689 (1978).
[3] C. E. Allgower et al., Measurement of analyzing powers of π^+ and π^- produced on a hydrogen and a carbon target with a 22-GeV/c incident polarized proton beam, Phys. Rev. D 65, 092008 (2002).
[4] J. Antille, L. Dick, L. Madansky, D. Perret-Gallix, M. Werlen, A. Gonidec, K. Kuroda, and P. Kyberd, Spin dependence of the inclusive reaction p+p(polarized)→π^0+X at 24 GeV/c for high-p_T π^0 produced in the central region, Phys. Lett. B 94, 523 (1980).
[5] D. L. Adams et al. (FNAL-E581/E704 Collaboration), Comparison of spin asymmetries and cross sections in π^0 production by 200 GeV polarized antiprotons and protons, Phys. Lett. B 261, 201 (1991).
[6] D. L. Adams et al. (FNAL-E704 Collaboration), Analyzing power in inclusive π^+ and π^- production at high x_F with a 200 GeV polarized proton beam, Phys. Lett. B 264, 462 (1991).
[7] I. Arsene et al. (BRAHMS Collaboration), Single Transverse Spin Asymmetries of Identified Charged Hadrons in Polarized pp Collisions at √(s)=62.4 GeV, Phys. Rev. Lett. 101, 042001 (2008).
[8] J. Adams et al. (STAR Collaboration), Cross Sections and Transverse Single-Spin Asymmetries in Forward Neutral-Pion Production from Proton Collisions at √(s)=200 GeV, Phys. Rev. Lett. 92, 171801 (2004).
[9] B. I. Abelev et al. (STAR Collaboration), Forward Neutral-Pion Transverse Single-Spin Asymmetries in p+p Collisions at √(s)=200 GeV, Phys. Rev. Lett. 101, 222001 (2008).
[10] M. M. Mondal (STAR Collaboration), Measurement of the Transverse Single-Spin Asymmetries for π^0 and Jet-like Events at Forward Rapidities at STAR in p+p Collisions at √(s)=500 GeV, Proc. Sci. DIS2014, 216 (2014).
[11] S. Heppelmann (STAR Collaboration), Preview from RHIC Run 15 pp and pAu Forward Neutral Pion Production from Transversely Polarized Protons, in Proceedings, 7th International Workshop on Multiple Partonic Interactions at the LHC (MPI@LHC 2015) (2016), p. 228.
[12] A. Adare et al. (PHENIX Collaboration), Measurement of transverse-single-spin asymmetries for midrapidity and forward-rapidity production of hadrons in polarized p+p collisions at √(s)=200 and 62.4 GeV, Phys. Rev. D 90, 012006 (2014).
[13] A. Adare et al. (PHENIX Collaboration), Cross section and transverse single-spin asymmetry of η mesons in p^↑+p collisions at √(s)=200 GeV at forward rapidity, Phys. Rev. D 90, 072008 (2014).
[14] L. Adamczyk et al. (STAR Collaboration), Transverse Single-Spin Asymmetry and Cross Section for π^0 and η Mesons at Large Feynman-x in Polarized p+p Collisions at √(s)=200 GeV, Phys. Rev. D 86, 051101 (2012).
[15] D. W. Sivers, Single Spin Production Asymmetries from the Hard Scattering of Point-Like Constituents, Phys. Rev. D 41, 83 (1990).
[16] D. W. Sivers, Hard scattering scaling laws for single spin production asymmetries, Phys. Rev. D 43, 261 (1991).
[17] J. C. Collins, Fragmentation of transversely polarized quarks probed in transverse momentum distributions, Nucl. Phys. B 396, 161 (1993).
[18] A. Airapetian et al. (HERMES Collaboration), Effects of transversity in deep-inelastic scattering by polarized protons, Phys. Lett. B 693, 11 (2010).
[19] C. Adolph et al. (COMPASS Collaboration), Experimental investigation of transverse spin asymmetries in μ-p SIDIS processes: Collins asymmetries, Phys. Lett. B 717, 376 (2012).
[20] T. C. Rogers and P. J. Mulders, No Generalized TMD-Factorization in Hadro-Production of High Transverse Momentum Hadrons, Phys. Rev. D 81, 094006 (2010).
[21] A. V. Efremov and O. V. Teryaev, QCD Asymmetry and Polarized Hadron Structure Functions, Phys. Lett. B 150, 383 (1985).
[22] J.-W. Qiu and G. F. Sterman, Single transverse spin asymmetries, Phys. Rev. Lett. 67, 2264 (1991).
[23] E. Norrbin and T. Sjöstrand, Production and hadronization of heavy quarks, Eur. Phys. J. C 17, 137 (2000).
[24] A. Adare et al. (PHENIX Collaboration), Measurement of Transverse Single-Spin Asymmetries for J/ψ Production in Polarized p+p Collisions at √(s)=200 GeV, Phys. Rev. D 82, 112008 (2010); Erratum: Phys. Rev. D 86, 099904 (2012).
[25] F. Yuan, Heavy Quarkonium Production in Single Transverse Polarized High Energy Scattering, Phys. Rev. D 78, 014024 (2008).
[26] A. Adare et al. (PHENIX Collaboration), Ground and excited charmonium state production in p+p collisions at √(s)=200 GeV, Phys. Rev. D 85, 092004 (2012).
[27] Y. Koike and S. Yoshida, Probing the three-gluon correlation functions by the single spin asymmetry in p^↑p→ DX, Phys. Rev. D 84, 014026 (2011).
[28] Z.-B. Kang, J.-W. Qiu, W. Vogelsang, and F. Yuan, Accessing tri-gluon correlations in the nucleon via the single spin asymmetry in open charm production, Phys. Rev. D 78, 114013 (2008).
[29] S. J. Brodsky, F. Fleuret, C. Hadjidakis, and J. P. Lansberg, Physics Opportunities of a Fixed-Target Experiment using the LHC Beams, Phys. Rept. 522, 239 (2013).
[30] K. Adcox et al. (PHENIX Collaboration), PHENIX detector overview, Nucl. Instrum. Methods Phys. Res., Sec. A 499, 469 (2003).
[31] H. Akikawa et al. (PHENIX Collaboration), PHENIX muon arms, Nucl. Instrum. Methods Phys. Res., Sec. A 499, 537 (2003).
[32] S. Adachi et al., Trigger electronics upgrade of PHENIX muon tracker, Nucl. Instrum. Methods Phys. Res., Sec. A 703, 114 (2013).
[33] M. Allen et al. (PHENIX Collaboration), PHENIX inner detectors, Nucl. Instrum. Methods Phys. Res., Sec. A 499, 549 (2003).
[34] H. Okada et al., Measurement of the analyzing power in pp elastic scattering in the peak CNI region at RHIC, Phys. Lett. B 638, 450 (2006).
[35] I. Nakagawa et al., p-carbon polarimetry at RHIC, AIP Conf. Proc. 980, 380 (2008).
[36] H. Huang and K. Kurita, Fiddling carbon strings with polarized proton beams, AIP Conf. Proc. 868, 3 (2006).
[37] S. S. Adler et al. (PHENIX Collaboration), Mid-rapidity Neutral Pion Production in Proton-Proton collisions at √(s)=200 GeV, Phys. Rev. Lett. 91, 241803 (2003).
[38] K. A. Drees, Z. Xu, B. Fox, and H. Huang, Results from Vernier Scans at RHIC during the pp Run 2001-2002, in Proceedings of the PAC2003 Conference, Portland (2003), p. 1688.
[39] A. Adare et al. (PHENIX Collaboration), Nuclear-Modification Factor for Open-Heavy-Flavor Production at Forward Rapidity in Cu+Cu Collisions at √(s_NN)=200 GeV, Phys. Rev. C 86, 024909 (2012).
[40] A. Adare et al. (PHENIX Collaboration), Cold-Nuclear-Matter Effects on Heavy-Quark Production at Forward and Backward Rapidity in d+Au Collisions at √(s_NN)=200 GeV, Phys. Rev. Lett. 112, 252301 (2014).
[41] S. Agostinelli et al. (GEANT4 Collaboration), GEANT4: A Simulation toolkit, Nucl. Instrum. Methods Phys. Res., Sec. A 506, 250 (2003).
[42] A. Adare et al. (PHENIX Collaboration), Heavy Quark Production in p+p and Energy Loss and Flow of Heavy Quarks in Au+Au Collisions at √(s_NN)=200 GeV, Phys. Rev. C 84, 044905 (2011).
[43] A. Adare et al. (PHENIX Collaboration), Identified charged hadron production in p+p collisions at √(s)=200 and 62.4 GeV, Phys. Rev. C 83, 064903 (2011).
[44] G. Agakishiev et al. (STAR Collaboration), Identified hadron compositions in p+p and Au+Au collisions at high transverse momenta at √(s_NN)=200 GeV, Phys. Rev. Lett. 108, 072302 (2012).
[45] T. Sjöstrand, S. Mrenna, and P. Z. Skands, PYTHIA 6.4 Physics and Manual, J. High Energy Phys. 05, 026 (2006).
[46] C. Aidala et al. (PHENIX Collaboration), B-meson production at forward and backward rapidity in p+p and Cu+Au collisions at √(s_NN)=200 GeV, arXiv:1702.01085.
[47] M. Cacciari, M. Greco, and P. Nason, The p_T spectrum in heavy flavor hadroproduction, J. High Energy Phys. 05, 007 (1998).
[48] H. Fujii and K. Watanabe, Leptons from heavy-quark semileptonic decay in pA collisions within the CGC framework, Nucl. Phys. A 951, 45 (2016).
[49] T. Sjöstrand, S. Ask, J. R. Christiansen, R. Corke, N. Desai, P. Ilten, S. Mrenna, S. Prestel, C. O. Rasmussen, and P. Z. Skands, An Introduction to PYTHIA 8.2, Comput. Phys. Commun. 191, 159 (2015).
http://arxiv.org/abs/1703.09333v2
{ "authors": [ "C. Aidala", "N. N. Ajitanand", "Y. Akiba", "R. Akimoto", "J. Alexander", "M. Alfred", "K. Aoki", "N. Apadula", "H. Asano", "E. T. Atomssa", "T. C. Awes", "C. Ayuso", "B. Azmoun", "V. Babintsev", "A. Bagoly", "M. Bai", "X. Bai", "B. Bannier", "K. N. Barish", "S. Bathe", "V. Baublis", "C. Baumann", "S. Baumgart", "A. Bazilevsky", "M. Beaumier", "R. Belmont", "A. Berdnikov", "Y. Berdnikov", "D. Black", "D. S. Blau", "M. Boer", "J. S. Bok", "K. Boyle", "M. L. Brooks", "J. Bryslawskyj", "H. Buesching", "V. Bumazhnov", "C. Butler", "S. Butsyk", "S. Campbell", "V. Canoa Roman", "C. -H. Chen", "C. Y. Chi", "M. Chiu", "I. J. Choi", "J. B. Choi", "S. Choi", "P. Christiansen", "T. Chujo", "V. Cianciolo", "B. A. Cole", "M. Connors", "N. Cronin", "N. Crossette", "M. Csanád", "T. Csörgő", "T. W. Danley", "A. Datta", "M. S. Daugherity", "G. David", "K. DeBlasio", "K. Dehmelt", "A. Denisov", "A. Deshpande", "E. J. Desmond", "L. Ding", "J. H. Do", "L. D'Orazio", "O. Drapier", "A. Drees", "K. A. Drees", "M. Dumancic", "J. M. Durham", "A. Durum", "T. Elder", "T. Engelmore", "A. Enokizono", "S. Esumi", "K. O. Eyser", "B. Fadem", "W. Fan", "N. Feege", "D. E. Fields", "M. Finger", "M. Finger, Jr.", "F. Fleuret", "S. L. Fokin", "J. E. Frantz", "A. Franz", "A. D. Frawley", "Y. Fukao", "Y. Fukuda", "T. Fusayasu", "K. Gainey", "C. Gal", "P. Garg", "A. Garishvili", "I. Garishvili", "H. Ge", "F. Giordano", "A. Glenn", "X. Gong", "M. Gonin", "Y. Goto", "R. Granier de Cassagnac", "N. Grau", "S. V. Greene", "M. Grosse Perdekamp", "Y. Gu", "T. Gunji", "H. Guragain", "T. Hachiya", "J. S. Haggerty", "K. I. Hahn", "H. Hamagaki", "S. Y. Han", "J. Hanks", "S. Hasegawa", "T. O. S. Haseler", "K. Hashimoto", "R. Hayano", "X. He", "T. K. Hemmick", "T. Hester", "J. C. Hill", "K. Hill", "R. S. Hollis", "K. Homma", "B. Hong", "T. Hoshino", "N. Hotvedt", "J. Huang", "S. Huang", "T. Ichihara", "Y. Ikeda", "K. Imai", "Y. Imazu", "J. Imrek", "M. Inaba", "A. Iordanova", "D. Isenhower", "A. 
Isinhue", "Y. Ito", "D. Ivanishchev", "B. V. Jacak", "S. J. Jeon", "M. Jezghani", "Z. Ji", "J. Jia", "X. Jiang", "B. M. Johnson", "K. S. Joo", "V. Jorjadze", "D. Jouan", "D. S. Jumper", "J. Kamin", "S. Kanda", "B. H. Kang", "J. H. Kang", "J. S. Kang", "D. Kapukchyan", "J. Kapustinsky", "S. Karthas", "D. Kawall", "A. V. Kazantsev", "J. A. Key", "V. Khachatryan", "P. K. Khandai", "A. Khanzadeev", "K. M. Kijima", "C. Kim", "D. J. Kim", "E. -J. Kim", "M. Kim", "Y. -J. Kim", "Y. K. Kim", "D. Kincses", "E. Kistenev", "J. Klatsky", "D. Kleinjan", "P. Kline", "T. Koblesky", "M. Kofarago", "B. Komkov", "J. Koster", "D. Kotchetkov", "D. Kotov", "F. Krizek", "S. Kudo", "K. Kurita", "M. Kurosawa", "Y. Kwon", "R. Lacey", "Y. S. Lai", "J. G. Lajoie", "E. O. Lallow", "A. Lebedev", "D. M. Lee", "G. H. Lee", "J. Lee", "K. B. Lee", "K. S. Lee", "S. H. Lee", "M. J. Leitch", "M. Leitgab", "Y. H. Leung", "B. Lewis", "N. A. Lewis", "X. Li", "X. Li", "S. H. Lim", "L. D. Liu", "M. X. Liu", "V. -R. Loggins", "S. Lokos", "D. Lynch", "C. F. Maguire", "T. Majoros", "Y. I. Makdisi", "M. Makek", "M. Malaev", "A. Manion", "V. I. Manko", "E. Mannel", "H. Masuda", "M. McCumber", "P. L. McGaughey", "D. McGlinchey", "C. McKinney", "A. Meles", "M. Mendoza", "B. Meredith", "W. J. Metzger", "Y. Miake", "T. Mibe", "A. C. Mignerey", "D. E. Mihalik", "A. Milov", "D. K. Mishra", "J. T. Mitchell", "G. Mitsuka", "S. Miyasaka", "S. Mizuno", "A. K. Mohanty", "S. Mohapatra", "T. Moon", "D. P. Morrison", "S. I. M. Morrow", "M. Moskowitz", "T. V. Moukhanova", "T. Murakami", "J. Murata", "A. Mwai", "T. Nagae", "K. Nagai", "S. Nagamiya", "K. Nagashima", "T. Nagashima", "J. L. Nagle", "M. I. Nagy", "I. Nakagawa", "H. Nakagomi", "Y. Nakamiya", "K. R. Nakamura", "T. Nakamura", "K. Nakano", "C. Nattrass", "P. K. Netrakanti", "M. Nihashi", "T. Niida", "R. Nouicer", "T. Novák", "N. Novitzky", "R. Novotny", "A. S. Nyanin", "E. O'Brien", "C. A. Ogilvie", "H. Oide", "K. Okada", "J. D. Orjuela Koop", "J. D. Osborn", "A. 
Oskarsson", "K. Ozawa", "R. Pak", "V. Pantuev", "V. Papavassiliou", "I. H. Park", "J. S. Park", "S. Park", "S. K. Park", "S. F. Pate", "L. Patel", "M. Patel", "J. -C. Peng", "W. Peng", "D. V. Perepelitsa", "G. D. N. Perera", "D. Yu. Peressounko", "C. E. PerezLara", "J. Perry", "R. Petti", "M. Phipps", "C. Pinkenburg", "R. P. Pisani", "A. Pun", "M. L. Purschke", "H. Qu", "P. V. Radzevich", "J. Rak", "I. Ravinovich", "K. F. Read", "D. Reynolds", "V. Riabov", "Y. Riabov", "E. Richardson", "D. Richford", "T. Rinn", "N. Riveli", "D. Roach", "S. D. Rolnick", "M. Rosati", "Z. Rowan", "J. Runchey", "M. S. Ryu", "B. Sahlmueller", "N. Saito", "T. Sakaguchi", "H. Sako", "V. Samsonov", "M. Sarsour", "K. Sato", "S. Sato", "S. Sawada", "B. Schaefer", "B. K. Schmoll", "K. Sedgwick", "J. Seele", "R. Seidl", "Y. Sekiguchi", "A. Sen", "R. Seto", "P. Sett", "A. Sexton", "D. Sharma", "A. Shaver", "I. Shein", "T. -A. Shibata", "K. Shigaki", "M. Shimomura", "K. Shoji", "P. Shukla", "A. Sickles", "C. L. Silva", "D. Silvermyr", "B. K. Singh", "C. P. Singh", "V. Singh", "M. J. Skoby", "M. Skolnik", "M. Slunečka", "K. L. Smith", "S. Solano", "R. A. Soltz", "W. E. Sondheim", "S. P. Sorensen", "I. V. Sourikova", "P. W. Stankus", "P. Steinberg", "E. Stenlund", "M. Stepanov", "A. Ster", "S. P. Stoll", "M. R. Stone", "T. Sugitate", "A. Sukhanov", "J. Sun", "S. Syed", "A. Takahara", "A Takeda", "A. Taketani", "Y. Tanaka", "K. Tanida", "M. J. Tannenbaum", "S. Tarafdar", "A. Taranenko", "G. Tarnai", "E. Tennant", "R. Tieulent", "A. Timilsina", "T. Todoroki", "M. Tomášek", "H. Torii", "C. L. Towell", "R. S. Towell", "I. Tserruya", "Y. Ueda", "B. Ujvari", "H. W. van Hecke", "M. Vargyas", "S. Vazquez-Carson", "E. Vazquez-Zambrano", "A. Veicht", "J. Velkovska", "R. Vértesi", "M. Virius", "V. Vrba", "E. Vznuzdaev", "X. R. Wang", "Z. Wang", "D. Watanabe", "K. Watanabe", "Y. Watanabe", "Y. S. Watanabe", "F. Wei", "S. Whitaker", "S. Wolin", "C. P. Wong", "C. L. Woody", "M. Wysocki", "B. Xia", "C. Xu", "Q. 
Xu", "Y. L. Yamaguchi", "A. Yanovich", "P. Yin", "S. Yokkaichi", "J. H. Yoo", "I. Yoon", "Z. You", "I. Younus", "H. Yu", "I. E. Yushmanov", "W. A. Zajc", "A. Zelenski", "S. Zharko", "S. Zhou", "L. Zou" ], "categories": [ "hep-ex", "nucl-ex" ], "primary_category": "hep-ex", "published": "20170327224752", "title": "Cross section and transverse single-spin asymmetry of muons from open heavy-flavor decays in polarized $p$+$p$ collisions at $\\sqrt{s}=200$ GeV" }
From molecules to YSC in M33 INAF-Osservatorio Astrofisico di Arcetri, Largo E. Fermi, 5, 50125 Firenze, Italy edvige@arcetri.astro.it, bandiera@arcetri.astro.it Laboratoire d'Astrophysique de Bordeaux, Univ. Bordeaux, CNRS, B18N, allée Geoffroy Saint-Hilaire, 33615 Pessac, France jonathan.braine@u-bordeaux.fr, nathalie.brouillet@u-bordeaux.fr, clement.druard@gmail.com, pierre.gratier@u-bordeaux.fr, jimmy.mata@hotmail.fr Observatoire de Paris, LERMA (CNRS: UMR 8112), 61 Av. de l'Observatoire, 75014, Paris, France francoise.combes@obspm.fr Institut de Radioastronomie Millimétrique, 300 rue de la Piscine, Domaine Universitaire, 38406 Saint Martin d'Hères, France schuster@iram.fr Institute for Astronomy, Astrophysics, Space Applications & Remote Sensing, National Observatory of Athens, P. Penteli, 15236, Athens, Greece xilouris@astro.noa.gr We study the association between Giant Molecular Clouds (GMCs) and Young Stellar Cluster Candidates (YSCCs), to shed light on the time evolution of local star formation episodes in the nearby galaxy M33. The CO (J=2-1) IRAM all-disk survey was used to identify and classify 566 GMCs with masses between 2× 10^4 and 2× 10^6 M_⊙ across the whole star-forming disk of M33. In the same area there are 630 YSCCs, identified using Spitzer 24 μm data. Some YSCCs are embedded star-forming sites, while the majority have GALEX-UV and Hα counterparts with estimated cluster masses and ages. The GMC classes correspond to different cloud evolutionary stages: inactive clouds are 32% of the total, while classified clouds with embedded and exposed star formation are 16% and 52% of the total, respectively. Across the regular southern spiral arm, inactive clouds are preferentially located in the inner part of the arm, possibly suggesting a triggering of star formation as the cloud crosses the arm.
The spatial correlation between YSCCs and GMCs is extremely strong, with a typical separation of 17 pc, less than half the CO(2–1) beamsize, illustrating the remarkable physical link between the two populations. GMCs and YSCCs follow the HI filaments, except in the outermost regions, where the survey finds fewer GMCs than YSCCs, likely due to undetected, low CO-luminosity clouds. The distribution of the non-embedded YSCC ages peaks around 5 Myrs, with only a few being as old as 8–10 Myrs. These age estimates, together with the number of GMCs in the various evolutionary stages, lead us to conclude that 14 Myrs is a typical lifetime of a GMC in M33, prior to cloud dispersal. The inactive and embedded phases are short, lasting about 4 and 2 Myrs, respectively. This underlines that embedded YSCCs rapidly break out from the clouds and become partially visible in Hα or UV long before cloud dispersal. From molecules to Young Stellar Clusters: the star formation cycle across the M33 disk Edvige Corbelli1 Jonathan Braine2 Rino Bandiera1 Nathalie Brouillet2 Françoise Combes3 Clement Druard2 Pierre Gratier2 Jimmy Mata2 Karl Schuster4 Manolis Xilouris5 Francesco Palla1 (deceased) Received .....; accepted ....
===========================================================================================================================================================================================================================================================================================================§ INTRODUCTION The formation of giant molecular clouds (hereafter GMCs) in the bright disks of spiral galaxies requires the onset of instabilities and the ability of the gas to cool and fragment. Within gravitationally unstable clouds, the process of fragmentation continues to smaller scales, yielding a distribution of clumps and prestellar cores which later collapse to form stars. The evolution of these clouds is then driven by the Young Stellar Clusters (hereafter YSC) which have formed. As the YSC evolves, it could disrupt the cloud or trigger new episodes of star formation. Our Galaxy is the natural laboratory where these processes have been studied in detail, because observations can be carried out with unbeatable spatial resolution. Galactic observations, however, suffer from limitations due to the fact that we reside within the star-forming disk. Moreover, galaxies with different masses, morphologies, or metal contents, or in different environments or at different cosmic times, might transform the gas into stars over different timescales and with different efficiencies. The molecular gas fragments, from GMCs to protostellar clumps, do not necessarily follow the same mass spectrum as Galactic clouds nor share their characteristics. These considerations have triggered great interest in studying molecular clouds in nearby galaxies, with the support of millimeter telescopes which are steadily improving in resolution and sensitivity (e.g.
ALMA, NOEMA). Validating a cloud formation and evolution model requires an unbiased survey of molecular clouds in a galaxy and of its star-forming sites. All-disk surveys of the ^12CO J=1-0 or J=2-1 line emission in nearby galaxies are the most commonly used to provide a census of molecular complexes down to a certain sensitivity limit. The Large Magellanic Cloud in the southern hemisphere, and M33 in the northern hemisphere, have been targets of several observing campaigns of molecular gas emission, as they are nearby, gas rich, and actively star forming <cit.>. The NANTEN group studied the LMC and identified 168 clouds <cit.>. <cit.> surveyed the inner 2 kpc of M33 and detected 38 GMCs with a 7" synthesized beam. The ^12CO J=1-0 survey of the M33 disk with the BIMA interferometer, with a 13" synthesized beam, has provided a catalogue of 148 clouds out to R=6 kpc, complete down to 1.5×10^5 M_⊙ <cit.>. <cit.> analyzed the IRAM CO J=2-1 survey of a large area of the disk of M33 and catalogued 337 GMCs. The star-forming disk of M33 was observed in the CO J=1-0 and J=3-2 lines with an angular resolution of 25" by <cit.>, who identified 71 GMCs, 70 of which were already in the <cit.> or <cit.> catalogues. The data used here are from the deep CO(J=2-1) whole-disk survey carried out with the IRAM-30m telescope <cit.> at 12" resolution (49 pc), from which 566 clouds were identified. Molecular clouds are not located around or very close to optically visible stellar clusters, as will be shown in this paper.
In the early phases of star formation (hereafter SF), protostars and stars are embedded in the gas and can be detected via imaging with infrared telescopes, owing to the high extinction provided by the molecular material. The Spitzer Space Telescope has surveyed the LMC and M33 in the Mid-Infrared (hereafter MIR) with sufficient spatial resolution to provide a detailed view of where the hot dust emission is located. Emission from hot dust, typically detected at 24 μm, is an excellent tracer of star-forming sites where massive or intermediate mass stars have formed. The detection of protostars and of the earliest phases of SF is not yet feasible in M33, since it requires radio and far-infrared surveys with far higher resolution than what is currently available. Searches for embedded star-forming sites in M33 are limited to less compact sites, i.e. to stellar clusters within clouds which are visible at 24 μm in the Spitzer survey of the whole galaxy <cit.>. A catalogue of MIR-emitting sites in M33 is now available <cit.>, and a list of candidate star-forming sites has been selected to investigate their mass and spatial distributions in the disk of M33. After formation, stars may clear some of the gas such that, depending on viewing angle, optical counterparts to the MIR emission can be found. The YSC is still compact and may still suffer from modest extinction, such that individual members cannot be resolved. In this context, we study the relationship between Young Stellar Cluster Candidates (hereafter YSCCs, referring here only to sources which have been selected via their MIR emission and are listed in Table 6) and the molecular cloud population in M33.
A detailed analysis of the spatial correlation between GMCs and YSCCs is carried out across the whole star-forming disk of M33. Using essentially the same classification scheme as in <cit.>, we define classes of GMCs in terms of their star formation. Combined with the classification and age determination of YSCCs, it is possible to estimate durations for the various phases of the star formation process and the GMC lifetime in this nearby galaxy. This is important in order to link the physical conditions within GMCs to the processes which regulate star formation, its efficiency, and possible time variations <cit.>. The plan of the paper is as follows. In Section 2 we describe the new GMC catalogue and introduce the cloud classification scheme. In Section 3 we describe the MIR-source catalogue and introduce the YSCC classification scheme. In Section 3 we also discuss the association between the GMCs and YSCCs, and in Section 4 the association between GMCs and optically visible YSCs and other sources related to the SF cycle. The catalogues of GMCs and YSCCs and their classification are provided in the on-line Tables. The properties of molecular cloud classes and of YSCC classes are presented in Sections 5 and 6. Molecular cloud lifetimes across the M33 disk are analyzed in Section 7. Section 8 summarizes the main results. § THE MOLECULAR CLOUD POPULATION M33 is a low-luminosity spiral galaxy, the third most luminous member of the Local Group, with a well determined distance D=840 kpc <cit.>. The GMC catalogue is a product of the IRAM-30m all-disk CO J=2-1 survey of M33 presented in <cit.>, using a modified version of the CPROPS package, originally developed by <cit.>, which is described in detail by <cit.>. The CO datacube has a spatial resolution of 12 arcsec which, for the adopted distance of M33, corresponds to a physical scale of 49 pc (hence, GMCs are not well resolved). The spectral resolution of the datacube is 2.6 km s^-1, and the pixel size is 3 arcsec (i.e.
12 pc). CPROPS identifies continuous regions of CO emission in the datacube, and details on our use can be found in <cit.>. In Figure <ref> we show the location of the clouds on the CO (J=2-1) map. The cloud mass is computed either by converting the total CO line luminosity of the cloud into mass, referred to as the luminous mass, or by using the virial relation, referred to as the virial mass. The luminous mass of the GMC, M_H_2, is computed for a hydrogen fraction f_h of 73%, using a CO-to-H_2 conversion factor X=N(H_2)/I_CO(1-0)=4×10^20 cm^-2/(K km s^-1) <cit.>, twice the standard Galactic value, and an intrinsic line ratio R^2-1_1-0=I_2-1/I_1-0=0.8 <cit.>. The luminous mass is a function of the CO (J=2-1) line luminosity, L_CO, which is the CO J=2-1 line intensity integrated over the cloud. Another way of estimating cloud masses is to assume virial equilibrium. The virial mass, M_H_2^vir, is a function of the deconvolved effective cloud radius, r_e, and of the CO (J=2-1) line dispersion <cit.>. The line dispersion is measured by fitting a gaussian function to the cloud integrated line profile. If Δ V_FWHM^gau is the full width at half maximum of the fitted line profile, corrected for the finite channel width (by subtracting in quadrature 2.6 km s^-1), then σ_v^gau = Δ V_FWHM^gau/√(8 ln 2). We apply the following relations, which include chemical elements heavier than hydrogen; the numerical coefficient of the luminous mass follows from X, R^2-1_1-0, and the mean mass 2 m_p/f_h associated with each H_2 molecule:

M_H_2/M_⊙ = 10.9 L_CO(2-1)/(K km s^-1 pc^2)

M_H_2^vir/M_⊙ = 1040 (r_e/pc) (σ_v^gau/(km s^-1))^2

Typical cloud linewidths are of order 6-10 km s^-1, with σ_v ∼ 3-4 km s^-1. Cloud radii vary between 10 and 100 pc, typical of GMCs and complexes. However, given the spatial resolution of the survey, cloud radii have large uncertainties and in some cases they might be overestimated.
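The scale conversion and the two mass relations above can be sketched numerically. The distance, coefficients, and channel width are those quoted in the text; the function names are our own illustrative choices, not part of the CPROPS package.

```python
import math

D_KPC = 840.0                             # adopted distance of M33 (kpc)
ARCSEC_RAD = math.pi / (180.0 * 3600.0)   # 1 arcsec in radians
CHANNEL_KMS = 2.6                         # spectral channel width (km/s)

def arcsec_to_pc(theta_arcsec, d_kpc=D_KPC):
    """Small-angle conversion of an angular size to a physical scale (pc)."""
    return theta_arcsec * ARCSEC_RAD * d_kpc * 1.0e3

def sigma_gau(fwhm_kms, channel_kms=CHANNEL_KMS):
    """Dispersion from a gaussian-fit FWHM, with the finite channel width
    removed in quadrature, as described in the text."""
    corrected = math.sqrt(fwhm_kms**2 - channel_kms**2)
    return corrected / math.sqrt(8.0 * math.log(2.0))

def luminous_mass(l_co21):
    """Luminous mass (He included) from L_CO(2-1) in K km/s pc^2,
    using the quoted coefficient 10.9."""
    return 10.9 * l_co21

def virial_mass(r_e_pc, sigma_kms):
    """Virial mass from effective radius (pc) and line dispersion (km/s)."""
    return 1040.0 * r_e_pc * sigma_kms**2

print(round(arcsec_to_pc(12.0)))           # beam: 49 pc
print(round(arcsec_to_pc(3.0)))            # pixel: 12 pc
print(round(luminous_mass(5700.0)))        # completeness limit: 62130 Msun
```

The last line recovers the quoted completeness limit of about 6.3×10^4 M_⊙ from the 5700 K km s^-1 pc^2 luminosity threshold discussed below.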
As a consequence, virial masses often turn out to be larger than luminous masses, and scaling relations may apply only to smaller clouds within a GMC complex. We shall use the luminous mass definition when we refer to the GMC mass, unless stated otherwise. The algorithm finds 566 GMCs (see Fig. <ref>) with luminous masses between 2×10^4 and 2×10^6 M_⊙ and virial masses between 2×10^4 and 6×10^6 M_⊙. The completeness limit for the luminous masses is about 2/3 of what has been estimated by <cit.>, due to the lower rms noise of the <cit.> full-disk survey and to the revision of the telescope efficiency. Hence we estimate a completeness limit of 5700 K km s^-1 pc^2. Given the assumed CO-to-H_2 conversion factor and J=2-1/J=1-0 line ratio, this corresponds to a total cloud mass completeness limit of 6.3×10^4 M_⊙, including He. Figure <ref> shows the cumulative distribution of the 566 GMC masses. In the left panel of Figure <ref> we show the radial distribution of the luminous masses of the GMCs, while in the right panel we plot the luminous and virial masses of the clouds. The average luminous mass decreases with radius because the CO cloud luminosity is a decreasing function of galactocentric radius <cit.>. There are fewer massive cloud complexes beyond 4.5 kpc. The virial mass shows a marginal dependence on galactocentric radius because the velocity dispersion (line width) is weakly anti-correlated with the galactocentric distance of the cloud <cit.>. Clearly the luminous and virial masses are correlated, even though the dispersion is non-negligible. The right panel of Figure <ref> seems to suggest that GMCs may simply be gravitationally bound entities, not necessarily in virial equilibrium. §.§ Cloud Classification Molecular clouds are classified in three broad categories – clouds without obvious star formation (A), clouds with embedded star formation (B), and clouds with exposed star formation (C).
Clouds with embedded or exposed SF are identified from the presence of emission at 8 or 24 μm, which in C-type clouds is associated with Hα and often with Far-UV emission peaks, while B-type clouds have no optical counterpart. There are a few ambiguous cases, which were classified as D-type. This classification follows the spirit of the <cit.> procedure, in which maps at four wavelengths were made of each cloud and the region surrounding it (see catalogue in <cit.>), as displayed in Figure <ref>. The wavebands were chosen to probe a variety of optical depths, at a higher angular resolution than the CO data, in order to locate the SF region within the molecular cloud. The 566 clouds were visually inspected using the maps as shown in Figure <ref>. Strict automatic criteria are extremely difficult to use in a reliable way, as the general flux levels decrease greatly with galactocentric radius, making a common threshold impractical: one misses regions of star formation in the outer disk. Furthermore, crowded regions (center, arms) are difficult to analyse without visual inspection. It is not possible to know whether continuum emission is from the cloud or simply along the line of sight through the disk. When the emission is from a region near the center of the cloud, or there is line emission at the same location, the association is assumed to be real. In Figure <ref>, we show examples of the images used to classify each of the 566 GMCs. The CO J=2-1 integrated intensity contours for each GMC (solid white line, first contour at 80 mK km s^-1 and following contours stepped by 330 mK km s^-1) are plotted on maps of Hα (upper left panel, units give emission measure in pc cm^-6), Spitzer 8 μm (upper right panel, MJy/sr), GALEX FUV (lower left panel, counts), and Spitzer 24 μm (lower right panel, MJy/sr). The flux levels are different for each image. The scaling used is plotted as a color bar at the top of each panel.
One can see that for GMC 461 (upper four panels) there are no visible sources in any band, and hence it has been classified as A-type. GMC 147 (middle four panels) has a weak infrared source visible at 8 and 24 μm, with no Hα or FUV emission peak at the MIR peak location, and hence it has been classified as B-type. This cloud is located close to the M33 center, in a crowded region, and the weak MIR-source is not present in the <cit.> catalogue discussed in the next Section. This underlines the need for visual inspection for a reliable cloud classification. At the location of GMC 15 (bottom four panels) there is a source visible at all selected wavelengths, which corresponds to source 8 in the catalogue of <cit.> (see next Section), and hence GMC 15 has been classified as a C-type cloud. A star-forming region on the distant side of a cloud will be classified as B, although an observer, in say M31, might see the cloud as exposed (C). Thus, a cloud could be more evolved than its classification shows, either due to geometry (the case above) or if only low-mass star formation is taking place. We estimate that the maps are sensitive enough to detect a single B0 main sequence star. There are some differences with respect to the <cit.> classification. A slightly higher fraction of A clouds is found, due to the classification of some clouds with very weak and diffuse IR emission as A rather than B or C. A slightly higher fraction of C-type clouds is also found because, with the idea that an apparently embedded cloud could be exposed from a different vantage point, we pushed ambiguous B/C cases into the C class. However, the agreement with <cit.> is generally excellent. Table <ref> summarizes the classification scheme and the number of GMCs in each class.
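The band logic underlying the A/B/C scheme can be condensed into a short sketch. This encodes only the nominal criteria: the paper stresses that the actual classification relied on visual inspection of all four maps, with ambiguous cases placed in class D; the function name and boolean inputs are illustrative.

```python
def classify_gmc(mir_emission, optical_peak):
    """Nominal A/B/C logic. `mir_emission` flags 8 or 24 um emission from
    the cloud; `optical_peak` flags an Halpha or FUV peak at the MIR peak
    location. Ambiguous B/C cases were pushed into class C in the paper."""
    if not mir_emission:
        return "A"  # no obvious star formation
    if optical_peak:
        return "C"  # exposed star formation
    return "B"      # embedded: MIR emission without optical counterpart

# the three example clouds discussed in the text:
print(classify_gmc(False, False))  # GMC 461 -> A
print(classify_gmc(True, False))   # GMC 147 -> B
print(classify_gmc(True, True))    # GMC 15  -> C
```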
In Table <ref> we list for each cloud the cloud type and the following properties: celestial coordinates, galactocentric radius R, cloud deconvolved effective radius r_e, CO(2-1) line velocity dispersion from CPROPS σ_v and its uncertainty, line velocity dispersion from the gaussian fit σ_v^gau (corrected for the finite channel width), CO luminous mass M_H_2 and its uncertainty, and virial mass from the gaussian fit M_H_2^vir. The uncertainties on the velocity dispersion from CPROPS can be considered upper limits to the uncertainties on the velocity dispersion from the gaussian fit. Using these, and the uncertainties on the cloud radius, one can verify the large uncertainties on the virial mass estimates. There are no estimates of σ_v^gau and M_H_2^vir when the gaussian fit to the cloud integrated profile results in a full width at half maximum comparable to the spectral resolution. In addition, we give in Table <ref> the number of the YSCC associated with the MIR-source and with the GMC, as described in the next Section, that lies within the 80 mK km s^-1 GMC boundary. Clouds of A-type should not host any source, and in fact only 5 clouds in this category have a YSCC associated with them, which lies at the cloud boundary. The cloud classification was done by seven testers without knowledge of the MIR-source positions. At a later time, the possible presence of sources at the GMC positions was checked by inspecting the whole M33 maps at 8 μm, 24 μm, Hα, and GALEX-FUV, each with a uniform contrast, and by analyzing statistically the spatial correlation of catalogued YSCCs and GMCs (see next Section). § YOUNG STELLAR CLUSTER CANDIDATES IN THE DISK OF M33 AND THEIR ASSOCIATION WITH GMCS In the star formation cycle, YSCs form out of molecular gas; as the cluster evolves, stellar activity removes the molecular material and light starts to escape from the clouds. Eventually, shocks due to massive stars may compress the nearby gas and trigger star formation anew.
Before the cold gas is removed from the stellar birth place, newborn stars heat the dust in the surroundings, with consequent emission in the MIR. Therefore we expect the presence of YSCCs at the location of MIR-sources associated with the M33 disk, and a spatial correlation between YSCCs and star-forming GMCs. The establishment of the association between YSCCs and GMCs is done following 3 different methods with different levels of accuracy; each method is described in the following subsections. Using the Spitzer satellite 24 μm data, <cit.> have selected 915 MIR-sources in the area covered by the M33 disk. Complementing the mid- and far-infrared Spitzer data with UV data from the GALEX satellite and with Hα data, it has been possible to build up the Spectral Energy Distribution (SED) for most of the sources. The optical images cover a smaller region than the Spitzer images, and hence for about 60 MIR-sources at large galactic radii the SED could not be derived. The presence in the sample of a few AGBs, the weakness of the emission in the Hα or UV bands of some sources, and some large photometric errors have further limited to 648 the number of MIR-sources with available SEDs <cit.>. We find that 738 out of the 915 MIR-sources catalogued by <cit.> are within the area of the CO survey. By visually inspecting these 738 sources in several bands and with the available stellar catalogues, we eliminated obvious AGB and Milky Way stars as well as background galaxies. The SDSS has been used to inspect the source optical morphology or the photometric redshift when the associated Hα emission is weak, below ∼10^36 erg s^-1.
We have excluded sources with a reliable redshift determination (with a χ^2<3, as given by SDSS), and a few MIR-sources with X-ray counterparts, since these are probably background sources (QSOs, galaxies, etc.). We have a final sample of 630 MIR-sources which are strong candidates for being star-forming sites over the area covered by the IRAM CO all-disk survey, and we shall refer to the young stellar clusters associated with these MIR-sources as YSCCs. The purpose of the identification of YSCCs in this paper is to associate them with GMCs, to study cloud and star formation properties across M33. The YSCCs may have an optical counterpart, or may be fully embedded, detected only in the infrared while stars are still forming. Soon after stars of moderate mass are born, the dust in the surrounding molecular material absorbs almost all the UV and optical emission of the recently born stars and re-emits the radiation in the Mid- and Far-IR. Hence, MIR-sources without optical or UV counterparts might indicate the presence of recently born stars still in their embedded phase. Furthermore, one has to bear in mind that small star-forming sites might be below the critical mass to fully populate the IMF, and only occasionally form a massive stellar outlier with Hα or FUV luminosity above the detection threshold. This implies that for the purpose of this paper, i.e. the association of YSCCs with molecular clouds, MIR-sources both with and without UV or Hα counterparts are of interest. Ages and masses are available for 506 YSCCs with UV and Hα emission, and have been determined by <cit.>. §.§ The association between GMCs and YSCCs: filamentary structures across M33 In Figure <ref> we plot the position of the 566 GMCs over the 24 μm image to show their large-scale distribution over a MIR-Spitzer image of M33.
Even though the correspondence between MIR-peaks, which are YSCCs, and GMCs is not one-to-one, the majority of GMCs lie along filaments traced by the MIR emission. There are no GMCs in areas devoid of MIR emission. There are, however, some regions where MIR filaments are present but no GMCs have been found. Similarly, some GMCs are present along tenuous and diffuse MIR filaments but do not overlap with emission peaks, i.e. with compact sources such as those detected by the <cit.> extraction algorithm. Even using the Spitzer 8 μm map at 3 arcsec resolution (better than the 6 arcsec resolution of the 24 μm map), some of these clouds seem associated only with diffuse MIR emission. In Figure <ref> we plot the GMC positions over the HI map at a spatial resolution of 10 arcsec, very similar to the CO map resolution. The 21-cm map, presented in <cit.>, is obtained by combining VLA and GBT data. There is an extraordinary spatial correspondence between the GMCs and the distribution of atomic hydrogen overdensities, underlined also by <cit.> and quantified by <cit.>. This correspondence seems to weaken at large galactocentric radii. Here, in fact, we notice the presence of bright HI filaments in areas devoid of GMCs. This may be due to a decrease of the CO J=2-1 line brightness far from the galaxy center because of CO dissociation, or to a gradient in metallicity <cit.> or gas density (which implies a lower CO J=2-1/J=1-0 line ratio). Another possibility is that fewer GMCs are formed in the absence of spiral arms. Spiral arms may favor the growth of GMCs by collisional aggregation of smaller clouds. In the absence of the arms, only individual molecular clouds of lower mass and size than GMCs may be found, undetected by the survey because of beam dilution. In the outer regions most of the CO J=2-1 emission is in fact diffuse at the 12" resolution of our CO data, which may be due to low mass clouds. For R<4 kpc most of the detected CO emission is due to GMCs in the catalogue <cit.>.
Furthermore, as pointed out already by <cit.>, the presence of a high HI surface density is a necessary condition but not a sufficient one for the formation of molecular clouds: the atomic gas might just not be converted into molecules if the hydrostatic pressure and the dust content decrease, as happens going radially outwards in a spiral disk. In Section 7 we will discuss further what can cause the drop in the number density of GMCs in the outer disk of M33, by examining the association of GMCs with YSCCs and the GMC lifetime.

§.§ The association between GMCs and YSCCs: a close inspection of cloud boundaries and the YSCC classification

The spatial correspondence between the position of GMCs and that of YSCCs can be studied by an accurate inspection of the area covered by each GMC. We start by searching for YSCCs which are within 1.5 cloud radii of all GMCs listed in Table <ref>. Since GMCs are often not spherical, we used a search radius larger than the cloud radius and subsequently checked the association by visually inspecting the GMC contours drawn over the M33 Spitzer images at 8 and 24 μm (as for the cloud classification). If in projection a YSCC and a GMC overlap, we claim they are associated. We searched for optical or UV counterparts to YSCCs by analyzing the Hα and GALEX-FUV images of M33 and by checking the SDSS image at the location of each MIR-source. Taking into account whether a YSCC is associated with a GMC or not, and whether or not it has an optical counterpart, we place the 630 YSCCs into 4 different categories. We describe these categories briefly below and discuss them in more detail in the following sections.
* class b: YSCCs associated with GMCs, with no optical counterpart (unidentifiable in SDSS and with no Hα emission)
* class c: YSCCs with optical (SDSS and/or Hα) counterpart
* c1: YSCCs associated with GMCs, with coincident Hα and MIR emission peaks but FUV emission peaks spatially shifted or absent
* c2: YSCCs associated with GMCs, with coincident Hα, FUV and MIR emission peaks
* c3: YSCCs not associated with GMCs but star forming, with optical and FUV counterparts; these often have weak Hα emission
* class d: YSCCs associated with GMCs which are ambiguous between the b and c1/c2 classes
* class e: YSCCs not associated with GMCs, with no Hα emission and no or only a weak red optical counterpart in SDSS; some FUV may be present

The lowercase letters used for the YSCC categories are such that if there are GMCs associated with them, these are mostly placed in the corresponding capital letter class. This is why we do not have a-type MIR-sources: A-type GMCs are not associated with YSCCs. Similarly, we do not have E-type GMCs because the YSCCs in class e are not associated with GMCs. After an automated search we inspected the images and checked that a YSCC associated with a GMC lies within the cloud boundary and whether it has Hα, FUV, and/or optical counterparts. We find that 243 YSCCs lie beyond any catalogued cloud border, and 104 of these have no Hα counterpart and weak or no emission in the UV. We place these sources in class e. Some of them might not be star forming sites but foreground or background objects; however, in class e we may also find some small, embedded star forming regions whose associated GMC has a brightness below the survey detection threshold. We place the remaining 139 YSCCs not associated with GMCs in the c3 class. Optically these look like SF regions.
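The category assignment above can be summarized as a simple decision procedure. The sketch below is illustrative only: the boolean flags (has_gmc, has_optical, fuv_peak_coincident, ambiguous) are our own shorthand for the outcomes of the visual inspection, not quantities from the catalogue:

```python
def classify_yscc(has_gmc, has_optical, fuv_peak_coincident=False,
                  ambiguous=False):
    """Assign one of the YSCC classes defined in the text.

    All arguments are hypothetical booleans summarizing the inspection:
    has_gmc             -- the YSCC overlaps a catalogued GMC in projection
    has_optical         -- an SDSS and/or Halpha counterpart is present
    fuv_peak_coincident -- the FUV peak coincides with the Halpha/MIR peaks
    ambiguous           -- inspection cannot decide between b and c1/c2
    """
    if has_gmc:
        if ambiguous:
            return "d"
        if not has_optical:
            return "b"            # fully embedded candidate
        return "c2" if fuv_peak_coincident else "c1"
    # not associated with any catalogued GMC
    return "c3" if has_optical else "e"
```

For instance, a source inside a GMC contour with coincident Hα, FUV and MIR peaks comes out as c2, while an isolated source with no optical counterpart falls in class e.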
Some might be YSCs associated with smaller clouds which are not in the catalogue, and others might be associated with more evolved sources whose original cold gas reservoir has been mostly dissipated. By looking at the ages and masses of c3-type YSCCs, we find ages similar to those of c1- or c2-type YSCCs, while masses are smaller on average. Hence it is likely that the majority of the YSCCs of class c3 are associated with molecular clouds of smaller mass, undetected by the survey. We have 387 YSCCs (61% of the total) which have a high probability of being linked to catalogued GMCs, since they are within the cloud boundary. A few sources are associated with more than one cloud (since they spatially overlap with two clouds which are at different velocities, or lie at the boundary of two clouds). We classified 368 of these 387 YSCCs as c1, c2, or b-type according to the presence of an optical counterpart. We place the remaining 19 YSCCs, ambiguous between b- and c-type sources, in class d. We find an optical counterpart for the majority of YSCCs associated with GMCs: 271 out of 368. Only 97 YSCCs, i.e. 26% of the YSCCs associated with GMCs, do not have an optical counterpart and are candidates for being YSCCs still in their fully embedded phase. The 8 and 24 μm images of a b-type YSCC can be found in the central panels of Figure <ref>. Each of the 271 YSCCs associated with a GMC has been classified as c1- or c2-type according to whether the FUV emission peak is absent/shifted with respect to the PSF of the 24 μm source or not. An example of a c2-type YSCC is shown at the center of the four bottom panels of Figure <ref> at various wavelengths. The summary of the YSCC classification is given in Table 2. The 630 YSCCs which are within the CO-all-disk survey map boundary are listed in Table <ref> with their type (from b to e) as described in the previous paragraph, and the number of the GMC associated with them, if any. In this case, we also list the corresponding cloud class (A, B, C, D).
In addition, for each YSCC we give the celestial coordinates, the bolometric, total infrared, FUV and Hα luminosities, the estimated mass and age, the visual extinction, the galactocentric radius, the source size, and its flux at 24 μm. The estimates of all quantities given in Table <ref> and their uncertainties are discussed in <cit.>. Photometric errors on source luminosities are smaller than 0.1 dex. However, since for a source of a given size we perform surface photometry with a fixed aperture at all wavelengths, the errors due to nearby source contamination can increase the uncertainties in crowded fields (FUV in the center and spiral arms, for example) and are hard to quantify. The uncertainties of the 24 μm flux, as given by SExtractor, are available in the on-line Tables of <cit.>. As described by <cit.>, we apply an average correction to the stellar masses of low-luminosity YSCCs for the IMF incompleteness. While uncertainties on the distribution of cluster masses and on the mass of bright individual YSCs are of order 0.1 dex, the IMF incompleteness implies larger uncertainties on individual YSCC masses when L_bol<10^40 erg s^-1 <cit.>. As expected, YSCCs are associated with GMCs of B, C, or D type. We have only 5 YSCCs which lie at the boundary of A-type GMCs, and the cloud testers considered that these peaks were probably not associated with the GMCs. We expect a correspondence between b-type YSCCs and B-type GMCs, or c-type YSCCs and C-type GMCs. And that is indeed what happens, even though the cloud classification was not based on the correspondence with MIR-sources in the <cit.> catalogue. However, there are a few exceptions, and the apparent non-correspondence between the cloud and the associated YSCC classification needs some explanation. Sometimes a cloud hosts more than one source: if a b-type YSCC and a c-type YSCC are associated with one cloud, then the cloud is classified as C-type.
Sometimes a YSCC is identified with a MIR-source which is effectively a blend of two or more sources and only one of these has an optical counterpart. In this case the cloud and the YSCC might not belong to a similar class. There are also a few clouds classified as B-type or C-type with no associated YSCC. Clearly, for most of these cases the MIR emission at 24 μm was too weak for the emitting area to be classified as a "source". In a few cases, blending/confusion with nearby sources might have caused the failure of the source extraction algorithm (see <cit.> for references and a description of the SExtractor software). Only 18 out of 87 GMCs of B-type are not associated with YSCCs, and 46 GMCs out of the 286 of C-type do not host a YSCC. We can summarize that 332 GMCs are associated with at least one YSCC. We have 58 clouds with a weak, non-catalogued MIR-source (with Hα counterpart) and 176 clouds which do not have associated emission in the UV, optical or MIR and are considered inactive.

§.§ The association between GMCs and YSCCs: the spatial correlation function

In order to quantify the link between GMCs and YSCCs, we analyse the association statistically by computing the positional correlation function of the two distributions. We then compare this to what we expect for a random distribution. This approach is fully justified provided that the distribution of each class of sources, taken separately, is spatially homogeneous. This is not true for two reasons: (i) the density of objects changes with the galactocentric distance, and (ii) GMCs and YSCCs are mostly located along the spiral arms, which implies that even the clouds which have not yet formed stars are expected to lie closer to a YSCC than a randomly distributed population in the disk would. However, since the average distance between sources of these two classes is much smaller than these large scale variations, the analysis that we are going to present is reasonably justified.
In our treatment we take into account the dependence of the density on the galactocentric distance, but we do not apply any correction for the spiral pattern modulation; we only briefly discuss it in our analysis. In the next Section we compare with the positional correlation between GMCs and other populations in the disk, in order to check that indeed the correlation with YSCCs selected via MIR emission is the strongest. To compute the expected positional correlation function of YSCCs and GMCs in a galaxy disk, one has to consider first f_YSCC(R), the radial density distribution of YSCCs. This is an azimuthal average of the deprojected local surface density, i.e. of the radial surface density on the galaxy plane. One can then compute N(d,R), the average number of YSCCs within a circle of radius d centered on a randomly selected GMC, and P(d,R), the probability of finding the closest YSCC to a GMC at a distance d. Having P(d,R), it is then straightforward to retrieve the cumulative probability function C(d,R), which is the probability of finding a YSCC within a distance d of a GMC, or equivalently the fraction of GMCs which have at least one YSCC within a distance d. We can then compare C(d,R) with what is observed in a galaxy disk. Since we have a limited number of YSCCs and GMCs in the M33 disk, we take into account the density dependence on the galactocentric distance by dividing M33 into 3 radial intervals: 1) R<1.5 kpc; 2) 1.5≤ R < 4 kpc; 3) R≥ 4 kpc. The numbers of YSCCs are 105, 290, and 236 in zones 1, 2, and 3 respectively, and in each zone we compute the mean density of YSCCs as the ratio between the number of YSCCs and the disk area of the zone.
We refer to the mean YSCC density using the symbol ⟨ f_YSCC⟩_i, where i=1, 2, 3 for zones 1, 2, and 3 respectively, and the angle brackets indicate that it is an average over the zone. We define d̅ as the radius of a circle inside which there is on average one YSCC for a randomly distributed population. The length scale d̅ is a typical separation length for YSCCs randomly distributed in a plane, and in general it is a function of R. For each zone we determine ⟨d̅⟩_i=1/√(π⟨ f_YSCC⟩_i), an average value of the separation length, and find the following values: 146 pc, 218 pc, 374 pc for zones 1, 2, and 3 respectively. It is useful to define this length scale because reasoning in terms of normalized distances allows us to better compare results obtained for the different subsamples. Therefore, in the absence of correlations, the average number of YSCCs within a circle of radius d centered on a randomly selected GMC, N(d,R), can be approximated in each zone as:

N_i(d) = π d^2 ⟨ f_YSCC⟩_i = (d/⟨d̅⟩_i)^2,

The probability P(d,R) of finding the closest source to a GMC at a distance d and the corresponding cumulative probability C(d,R) can be retrieved in general from N(d,R) as:

P(d,R)=e^(-N(d,R)); C(d,R)=1-e^(-N(d,R)).

In the absence of correlations, we use Eq.<ref> and Eq.<ref> to compute in each zone the mean expected positional correlation functions of GMCs and YSCCs for random distributions. These are shown as dashed lines in Figure <ref> as a function of the separation between a GMC and a YSCC given in parsecs.
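The quoted separation lengths can be recovered numerically from the YSCC counts. In the sketch below the zone areas are simple deprojected annuli on the galaxy plane; the outer survey radius of about 7 kpc for zone 3 is our assumption, since the text only states R ≥ 4 kpc:

```python
import math

# YSCC counts per zone and annulus boundaries in kpc (outer 7 kpc assumed).
counts = [105, 290, 236]
bounds = [(0.0, 1.5), (1.5, 4.0), (4.0, 7.0)]

dbar_pc = []
for n, (r_in, r_out) in zip(counts, bounds):
    area = math.pi * (r_out**2 - r_in**2)      # zone area on the plane, kpc^2
    f_yscc = n / area                          # mean YSCC surface density
    dbar = 1.0 / math.sqrt(math.pi * f_yscc)   # separation length, kpc
    dbar_pc.append(1000.0 * dbar)

print([round(d) for d in dbar_pc])             # [146, 218, 374]
```

With these boundaries the counts reproduce the quoted 146, 218 and 374 pc, which supports the annulus reading of the three zones.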
In the same Figure we plot the observed fraction of GMCs with at least one YSCC at a separation d, using black squares for the inner region, open blue circles for the middle region and red crosses for the outer region. Assuming that a YSCC and a GMC are closely associated if they are separated by no more than a cloud radius, which is typically r_e≃ 50 pc, we expect to find a fraction of GMCs of order r^2_e/⟨d̅⟩_i^2 with a YSCC closer than 50 pc if YSCCs are randomly distributed. On average we should find 12%, 5% and 2% of GMCs with a YSCC within r_e for zones 1, 2, and 3 respectively. Instead we observe GMC fractions of 53%, 45%, and 43% for zones 1, 2 and 3 respectively. This implies that we have about 4 times more YSCCs in zone 1, 9 times more YSCCs in zone 2, and 21 times more YSCCs in zone 3 in close association with GMCs than what would be expected if GMCs and YSCCs were randomly distributed. One can also display the distributions of Eq.<ref> as a function of the normalized distance d/⟨d̅⟩, and in this case only one curve is necessary to pin down the randomly distributed population, independently of the disk zone. But before doing this we would like to introduce some weighted average quantities to better take into account radial variations of GMC densities, as one goes from the crowded central areas of M33 to the disk outskirts. If YSCCs and GMCs are randomly but non-uniformly distributed in the disk, we can compute the mean separations and densities of YSCCs as seen by the GMC population. In this case a good approximation to the density of YSCCs in each zone is a weighted mean, with the weights given by the GMC number densities. We call this weighted mean ⟨ f_YSCC⟩^w_i (with i=1, 2, 3 for zones 1, 2, and 3 respectively). Analogously to f_YSCC(R), we define f_GMC(R), the radial density distribution of GMCs in the galactic plane.
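The chance-association estimate is simply N_i(r_e) = (r_e/⟨d̅⟩_i)². A minimal check with the unweighted separation lengths quoted above (the observed fractions are copied from the text for comparison):

```python
r_e = 50.0                      # typical cloud radius, pc
dbar = [146.0, 218.0, 374.0]    # unweighted separation lengths per zone, pc
observed = [0.53, 0.45, 0.43]   # observed GMC fractions quoted in the text

expected = [(r_e / d) ** 2 for d in dbar]   # random-distribution fractions

for e, o in zip(expected, observed):
    print(f"random: {100 * e:.0f}%   observed: {100 * o:.0f}%")
# random fractions come out as 12%, 5% and 2%
```

The excess factors quoted in the text follow from the same comparison, modulo rounding and the exact separation length adopted.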
The weighted average density of YSCCs in the surroundings of GMCs for a random distribution can then be estimated as:

⟨ f_YSCC⟩^w_i=∫_R_min,i^R_max,i R f_GMC(R) f_YSCC(R) dR/∫_R_min,i^R_max,i R f_GMC(R) dR

where R_min,i and R_max,i are the radial boundaries of each disk zone. We can then estimate the mean separation between YSCCs and GMCs for a random distribution as ⟨d̅⟩^w_i=1/√(π⟨ f_YSCC⟩^w_i) and use this to normalize d. We have ⟨d̅⟩^w_1=145 pc, ⟨d̅⟩^w_2=210 pc, ⟨d̅⟩^w_3=309 pc. Thus, the difference with respect to the earlier calculations is small and mostly relevant for zone 3. Using the weighted average YSCC density for each zone, as in Eq.<ref>, the quantities N(d), P(d) and C(d) can be computed from Eq. <ref> and <ref>. In Figure <ref> the random distribution is shown as a function of d/⟨d̅⟩^w. The fraction of GMCs which have at least one YSCC at a distance ⟨d̅⟩^w for a random distribution is 0.63, and almost all YSCCs are at a distance d< 2 ⟨d̅⟩^w from a GMC. The crosses in Figure <ref> show the observed cumulative functions in the three disk zones. As already stated, the true distributions are far from random, since about half of the YSCCs are within a normalized distance d/⟨d̅⟩^w=0.25 of a GMC. In what follows we model the observed positional correlation function of Figure <ref> and determine the correlation length. For a simplified approach, we propose the following form for the observed average density of YSCCs at a distance d from a randomly chosen GMC:

F(d)=c_0/(πd̅^2)+c_1/(2πλ_c^2) e^(-d/λ_c);

where λ_c plays the role of a correlation length, and the length scale d̅ is, as stated before, the typical separation length for YSCCs randomly distributed in a plane. In the absence of correlation, we should expect c_0=1 and c_1=0; in the case of a positive correlation, instead, we expect to have c_1>0, as well as c_0<1 to balance the average density on randomly chosen positions.
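Integrating 2π d′ F(d′) out to d gives N(d) in closed form. The sketch below checks this numerically with a midpoint rule; the parameter values are illustrative, not the fitted values from the Table:

```python
import math

c0, c1 = 0.8, 1.5        # illustrative amplitudes
dbar, lam = 218.0, 17.0  # pc; lam stands for the correlation length lambda_c

def F(d):
    """Model surface density of YSCCs at distance d from a random GMC."""
    return (c0 / (math.pi * dbar**2)
            + c1 / (2.0 * math.pi * lam**2) * math.exp(-d / lam))

def N_closed(d):
    """Closed-form integral of 2*pi*d'*F(d') from 0 to d."""
    return c0 * d**2 / dbar**2 + c1 * (1.0 - (1.0 + d / lam) * math.exp(-d / lam))

# Midpoint-rule integration confirms the closed form term by term.
d, n = 50.0, 20000
h = d / n
N_num = sum(2.0 * math.pi * (h * (k + 0.5)) * F(h * (k + 0.5))
            for k in range(n)) * h
assert abs(N_num - N_closed(d)) < 1e-4
C = 1.0 - math.exp(-N_closed(d))   # cumulative probability of a YSCC within d
```

The first term of N(d) reproduces the random-distribution result (d/d̅)², and the second term saturates at c_1 for d much larger than λ_c.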
The quantity N(d) can be computed by integration, giving:

N(d)=∫_0^d 2π d' F(d') dd' = c_0 (d^2/d̅^2)+c_1(1-(1+d/λ_c)e^(-d/λ_c)).

Using the values of the weighted mean YSCC separation ⟨d̅⟩^w_i in Eq. <ref>, we derive as usual the expected P(d) and C(d) for each zone. By comparing the modelled C(d) to the observed fractions of GMCs which have at least one YSCC within a given value of d/⟨d̅⟩^w, we retrieve the average values of c_0, c_1 and λ_c in each zone. A least squares fitting method is used to determine c_0, c_1 and λ_c in the modelled C(d). The values of ⟨d̅⟩^w and all the parameters derived from our fits for the three zones are listed in Table <ref>. The continuous lines in Figure <ref> show the very good quality of the fits and, for comparison, also the expectation in the absence of correlation (dashed lines). It is easy to see that introducing a correlation one obtains far better fits than without it. The correlation length comes out to be very similar in the three zones: 15.8, 17.7, and 17.9 pc, which seems to outline the physical relation between the GMC and the YSCC within the same star-forming region. There is a highly statistically significant clustering of GMCs and YSCCs also at larger distances. The relative density contrast of the correlated pairs within a circle of radius 3λ_c (around 50 pc in all three cases) is 82.3%, 87.7%, and 91.4% respectively. The reason why c_0 is larger than unity in all cases, while it should have been lower than that, is likely an effect of radial density variations within each zone.
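The least-squares determination of c_0, c_1 and λ_c can be sketched by fitting the modelled C(d) to synthetic "observed" fractions. As a stand-in for the fitting method actually used, the sketch below does a brute-force grid search; all parameter values and grids are illustrative:

```python
import math

dbar = 218.0  # weighted mean separation for the zone, pc (illustrative)

def C_model(d, c0, c1, lam):
    """Modelled cumulative probability C(d) = 1 - exp(-N(d))."""
    N = c0 * d**2 / dbar**2 + c1 * (1.0 - (1.0 + d / lam) * math.exp(-d / lam))
    return 1.0 - math.exp(-N)

# Synthetic "observed" fractions generated from known parameters.
true = (1.1, 1.5, 17.0)
ds = [10.0 * k for k in range(1, 41)]          # separations 10..400 pc
obs = [C_model(d, *true) for d in ds]

# Brute-force least squares over a parameter grid.
best, best_sse = None, float("inf")
for c0 in [0.9 + 0.05 * i for i in range(9)]:          # 0.9 .. 1.3
    for c1 in [1.0 + 0.1 * j for j in range(11)]:      # 1.0 .. 2.0
        for lam in [14.0 + 0.5 * k for k in range(13)]:  # 14 .. 20 pc
            sse = sum((C_model(d, c0, c1, lam) - o) ** 2
                      for d, o in zip(ds, obs))
            if sse < best_sse:
                best, best_sse = (c0, c1, lam), sse
print(best)   # the grid point matching the input parameters
```

On noiseless synthetic data the search lands on the generating parameters; with real binned fractions a standard least-squares minimizer would refine these grid values.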
The mean separations vary across the disk and have been averaged combining regions with rather different local source densities. This is especially true at large galactocentric distances, and it is enhanced by filamentary overdensities, justifying why the largest value of c_0 is found for the outer region (here the radial density of sources drops and density variations are higher). The steeper increase of the positional correlation function around 100-200 pc in the three zones is likely due to the presence of spiral arms or gas rich filaments. These filaments host many CO clouds and YSCCs and have thicknesses of 100-200 pc. The positional coincidence of YSCCs and GMCs in the outer disk is extraordinary, and chance alignments are much less likely than in the inner disk. It would be interesting to analyze the effects of both GMC and YSCC crowding in the spiral arms. The present treatment is however not suited to investigate such effects quantitatively, because the correlation is taken to be isotropic, while in the presence of a spiral pattern this assumption should be relaxed. We plan to devise a 2-dimensional extension of the present analysis in a future work, aimed at outlining also the correlation pattern at intermediate distances (of order 100-200 pc). The short-range correlation presented here is however not appreciably affected by spiral arm crowding, since it corresponds to distances much smaller than the typical width of the arms. If we subdivide the clouds into a high-mass sample and a low-mass sample, according to whether the GMC has a luminous mass higher or lower than 2×10^5 M_⊙, we find that 69% of the high-mass GMCs have a YSCC within 50 pc, while this happens only for 44% of the low-mass GMCs. This implies that it is rarer to find inactive GMCs of high mass than of low mass.
For a random association the percentage is much lower, of order 3%.

§ THE ASSOCIATION BETWEEN GMCS AND OTHER TYPES OF SOURCES IN THE DISK

In nearby galaxies the spatial resolution of today's observations is high enough to identify the various products of the gas-star formation cycle, such as Hα regions, massive stars, and embedded or optically visible stellar clusters. <cit.> used the BIMA survey of the inner disk of M33 to correlate the position of 148 GMCs with HII regions identified through Hα emission. They found a significant clustering, especially for the high mass GMCs. Given the high number of identified HII regions (about 3000 in the innermost 4 kpc), the difference in the number of HII regions closer than 50 pc to a GMC with respect to the random distribution is only a factor 2. Hence the clustering of our sample of YSCCs around GMCs is much stronger than that of HII regions. This can be easily understood, since the YSCCs have been identified through MIR emission, present in the early phases of star formation, while less compact Hα sources, like shells and filaments, are formed at a later stage during the gas dispersal phase. <cit.> analyzed the location of 65 massive GMCs in the inner disk of M33 with respect to massive stars, which they identify through the optical surveys of <cit.> in various optical bands. The aim is to study the evolution and lifetime of GMCs by estimating the ages of the closest bright stars using stellar evolution models. In particular, they confirm a scenario of recursive star formation, since dense molecular gas, the fuel for the next stellar generation, is found around previously generated massive stars. The <cit.> clusters, with estimated stellar masses in the range 10^3.5-10^4.7 M_⊙ and ages between 4 and 31 Myrs, are on average older and more massive than the YSCCs associated with MIR sources.
Unfortunately the authors do not list the positions of OB associations and clusters and do not determine the statistical significance of the clustering around GMCs. To check whether optically visible stellar clusters lie near GMCs, we use the compilation of <cit.>, who identified 707 stellar cluster candidates. They determined ages and masses of 671 of them through UBVRI photometry using archival images of the Local Group Galaxies Survey <cit.>. Some of these clusters have ages ≤ 100 Myrs. We find that 668 of the 707 clusters lie in the area of the CO survey, and of these only 64, about 10%, are separated by less than one cloud radius from the nearest GMC. The estimated ages of these 64 clusters vary between 5 Myrs and 10 Gyrs, with a mean value of about 50 Myrs. Since the expected fraction of GMCs within 50 pc of a stellar cluster is only a factor 3 smaller for a random distribution, the statistical significance of the association between GMCs and optically visible clusters is far lower than for the YSCCs associated with MIR sources. We also check the association considering only the optically visible stellar clusters whose ages are less than 100 Myrs (about 1/3 of the sample), as displayed by the thicker lines in Figure <ref>. The ratio between the observed fraction of GMCs in the proximity of optically selected YSCs, and the expected fraction for a random distribution, is larger than for the whole sample of optically selected clusters. Even in this case, however, optically visible clusters are less correlated with GMCs than the infrared selected YSCCs examined in this paper. Moreover, the peak of the excess of the observed fraction of GMCs, with respect to a random distribution, is at distances of order 200-300 pc, much larger than a typical cloud radius, as shown in Figure <ref>. The weak correlation found is then likely driven by the location of GMCs and YSCs along gaseous filaments or spiral arms rather than by a one-to-one correspondence.
§ CLOUD CLASSES AND THEIR PROPERTIES

In this Section we examine the cloud classes as representative of different stages of cloud evolution, and their properties, such as their location across the disk and the associated luminosities. We have classified the population of molecular clouds in M33 into 3 categories according to whether they are non-active, A-type; have MIR emission without optical counterpart, B-type; or have some MIR emission with associated Hα emission, C-type. These 3 classes may correspond to three different stages of molecular cloud evolution. The 172 clouds classified as A-type are considered inactive because they may have just formed but have not yet fragmented to form stars. In this category we might have some clouds which are left over from a previous episode of star formation; the stars break out from the cloud, and the most massive stars could enhance the formation of molecular material close to bright Hα filaments by compressing the ISM through winds and expanding bubbles. Some radiation from nearby stellar associations might heat the dust, triggering some diffuse MIR emission. We associate the 87 clouds of B-type with the early phases of star formation, when the radiation of massive stars is not yet visible in the optical or UV bands because it is absorbed by the dust of the surrounding cloud material. In this case the radiation heats the dust in the clouds, which emit in the MIR in localized areas. In the 286 clouds of class C, ultraviolet emission from young stars or Hα line emission from ionized gas is visible within the cloud contours. In this case, winds from evolved stars can sweep out the gas in their vicinity and UV light escapes from the cloud. The Hα radiation, less absorbed by dust than the UV continuum, may become detectable when the HII region is still compact. Winds from young massive stars can ultimately disrupt the parent cloud and quench star formation.
If the stellar cluster is of small mass, massive stars are not always present <cit.>. In this case, the lack of Hα radiation might push the classification of evolved clouds into class B and cause the MIR emission to be weak. Moreover, since clouds are not spherically symmetric and stellar clusters are not necessarily born at cloud centers, geometrical effects may play a role in mixing somewhat the B and C types. On the other hand, emission on the line of sight to the cloud but unrelated to it (i.e. foreground or background) could result in an overestimate of the evolutionary state of the cloud. This implies that in each class there will be a few clouds whose evolutionary stage might be inappropriate. There are two evident differences in the location of the 3 types of clouds across the M33 disk. The GMCs are plotted in Figure <ref> over the HI image of M33 and in Figure <ref> as a function of galactocentric radius. The A-type clouds have a median galactocentric distance over 3 kpc, with only 20% within 2 kpc of the center, where the B- and C-type clouds are more numerous (see Fig. <ref>). Both the A- and C-type clouds are found along HI filaments and along the northern and southern spiral arms of this flocculent galaxy. Along the southern arm, which is more regular and less disturbed than its northern counterpart, the A-type clouds are found in the inner part of the arm, while C-type clouds are more often found on the arm, where the peak of the HI and Hα emission is. This is in agreement with the theories of formation of molecular complexes across spiral arms <cit.>: the atomic gas experiences a first compression in the inner part of the arm and becomes molecular; then, as it enters the arm, the supersonic turbulence enhances the fragmentation and the process of star formation. This scenario is not seen for the less regular northern arm. A-type clouds are also more clustered than C-type clouds, which are more coarsely placed along the gaseous filaments.
Clouds of high mass are very rare at large galactocentric radii, possibly because there are no spiral arms that trigger their growth. The B-type clouds are rarely found close to the spiral arms or the brightest 21-cm peaks, but they are more numerous along filaments of moderate HI surface density. They are present at all radii smaller than 4 kpc and are never a dominant population. If they represent a transition phase between inactive and active clouds, the lack of these clouds along the arms means that this transition must be a rapid one. Lower mass stellar clusters, not bright in the UV or Hα, might be associated with these clouds, but the star formation process may continue to increase the YSCC mass at a later stage. Although there might be some contamination among the MIR-sources with no optical counterpart, due to obscured distant galaxies in the background of M33, the presence of CO emission at the same location as the MIR-sources ensures that most of these are YSCCs associated with M33. Their clustering along HI overdensities underlines again their association with the M33 ISM. We now examine the CO luminosities of GMCs in the three classes as well as their emission at 24 μm. The mean GMC mass increases progressively from 1.3×10^5 M_⊙ for A-type clouds, to 2.1×10^5 M_⊙ for B-type, to 3.6×10^5 M_⊙ for C-type clouds. Hence, B-type clouds are typically of intermediate mass, and not necessarily of low mass. If we interpret the sequence of clouds from A-type, to B-type, to C-type as a time sequence, the increase of the average cloud mass going from A-type to B- and to C-type implies that GMCs collect more gas as they age. The trend remains if we look only at inner disk clouds (at R<3.5 kpc) or outer disk clouds (at R≥ 3.5 kpc). Clouds of C-type extend to higher molecular masses than B-type clouds, and Figure <ref> shows the total luminous masses of the clouds versus the YSCC bolometric luminosity (for GMCs hosting these), extinction, and flux density at 24 μm.
We compute the MIR flux densities of YSCCs associated with B- or C-type sources by dividing the flux as measured by <cit.> by the source area (defined at the 8-sigma isophote). The A-type clouds have very low MIR flux densities, except a few which are contaminated by nearby sources. For these, and for clouds with no associated YSCC, we measure the emission at 24 μm in apertures of 3, 5, and 8 pixels (1 px = 1.6 arcsec) in radius centered on the cloud. After background subtraction, we choose the aperture which gives the largest flux density; this is derived by dividing the flux in the aperture by the aperture area. With this procedure we underestimate the MIR flux density of the emitting area because we cannot measure the effective source size (the emission is indeed so weak that a source size is hard to define, and in fact the flux levels are below those of the sources found by the <cit.> procedure). The GMCs associated with catalogued sources have flux densities higher than 0.01 mJy arcsec^-2 (425 MJy/sr). This flux limit for catalogued sources, which is evident in panel (c) of Figure <ref>, is a result of the source extraction algorithm <cit.>. GMCs with no catalogued sources are mostly of A-type, but there are also a few C-type clouds with weak MIR emission, perhaps in the process of dissolving due to the evolution of the newborn cluster. A correlation between the cloud CO luminous mass and the source extinction is found for sources of c2-type, as shown in Figure <ref>. The same figure also shows that there is no correlation between the GMC mass and the YSCC bolometric luminosity. Similarly, there is no correspondence between GMC masses and YSCC masses when, through successful SED fits, the stellar masses have been determined <cit.>. This implies variations of the amount of gas turned into stars within giant complexes, where star formation is not uniformly spread out. There is no difference in the CO line-width for clouds of class A, B, C (3.5±1.5, 3.3±1.1 and 3.4±1.2 km s^-1 respectively).
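The aperture procedure described above (background-subtracted fluxes in 3-, 5- and 8-pixel radii, keeping the aperture with the largest flux density) can be sketched as below. The image array, coordinates and per-pixel background estimate are simplified assumptions:

```python
import numpy as np

def mir_flux_density(image, x0, y0, background,
                     radii_px=(3, 5, 8), px_arcsec=1.6):
    """Largest background-subtracted flux density (per arcsec^2) among
    circular apertures centered on (x0, y0).  `image` is a 2-D map in flux
    units per pixel; `background` is a per-pixel background estimate."""
    yy, xx = np.indices(image.shape)
    r2 = (xx - x0) ** 2 + (yy - y0) ** 2
    best = -np.inf
    for r in radii_px:
        mask = r2 <= r * r
        npix = int(mask.sum())
        flux = image[mask].sum() - background * npix   # background subtraction
        area = npix * px_arcsec ** 2                   # aperture area, arcsec^2
        best = max(best, flux / area)
    return best
```

For a point-like source on a flat background the smallest aperture wins, since the enclosed flux is the same while the area grows with the radius.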
The slightly larger dispersion for clouds of A-type is due to a secondary peak in the width distribution at narrow linewidths (σ_v^gau ∼ 2 km s^-1). Similarly, there is not much difference in size between complexes of different types, except that GMCs of large size are of C-type. Sizes are distributed between 10 and 100 pc, with a peak at 30 pc and a mean value of order 40 pc for A- and B-type clouds, and of 50 pc for C-type clouds.

§ YSCC CLASSES AND THEIR PROPERTIES

In Section 3.1 we defined classes of YSCCs. YSCCs of b-, c1-, c2-, and d-type are associated with GMCs, while those of c3- or e-type are not. The YSCCs of b- and e-type do not have an optical counterpart, or show only very faint emission in the SDSS images. YSCCs of c-type are associated with blue light in SDSS images and have Hα and FUV emission. Sources of d-type are ambiguous between b- and c-type. Since sources of b-type are associated with GMCs, they presumably represent the early stages of SF, when stars are fully embedded in the cloud. e-Type YSCCs may be proto-clusters in their embedded phase associated with lower mass molecular clouds, which have not been detected by the IRAM survey. Like b-type sources, the e-type sources tend to lie along HI overdensities, but not along the brightest ones. The MIR-sources identified as e-type YSCCs often lie near a SF region, and it is not clear whether they host protostars or whether the dust is heated by young stars in their proximity. A few are rather isolated and may be background obscured objects. They are more numerous than b-type YSCCs in regions far from the center. The MIR emission of e-type YSCCs is on average lower than that of b-type YSCCs, and hence they might be associated with small clouds <cit.>, with a lower metallicity <cit.> and dust content, undetectable through the IRAM CO survey.

If the sequence b, c1, c2, c3 represents a time sequence in the evolution of a star forming region, we should find progressively less extinction and older ages for the associated clusters.
In Figure <ref> we show the YSCC FUV luminosity versus extinction A_v, as given by Eq. (8) of <cit.>, similar to the visual extinction definition used by <cit.>. In Figures <ref> and <ref> we can see that YSCCs of b- and c1-type are the most embedded, having the highest values of A_v. There are, however, two differences between these types of sources: some YSCCs of c1-type have high bolometric luminosities and hence may have already formed massive stars. On the birthline diagram <cit.> the most luminous c1-type YSCCs occupy the same area as c2-type YSCCs, while the low luminosity ones tend to overlap with the region where b-type sources are. This implies that c1-type YSCCs are intermediate between fully embedded clusters in the process of formation (b-type) and very young stellar clusters (c-type). It is likely that c1-type YSCCs do not have FUV emission at the location of the MIR peak because of extinction. The YSCCs of c2-type have A_v values always lower than 2, and often lower than 1, with an average value of 0.62±0.38. Their infrared luminosity correlates very well with the FUV luminosity. There are no YSCCs with very low Total Infrared (hereafter TIR) or FUV luminosity in this class <cit.>. The lack or faintness of Hα emission for b-type sources is consistent with the picture that in these sites massive stars have not formed yet.

The YSCCs not associated with GMCs are of c3-type, star forming with an optical counterpart, or of e-type, without an optical counterpart. The YSCCs of c3-type mostly have low extinction, and this is consistent with the fact that they are not associated with GMCs. They are not of low luminosity, and hence they might be slightly more evolved than c2-type sources or associated with low mass clouds highly efficient in making stars. In Figure <ref> we plot the location of all YSCCs, color coded according to their type, superimposed on the HI map of M33. There are some differences between the various YSCC classes: c1- or c2-type YSCCs lie over bright inner HI filaments or arms.
The b-type YSCCs populate the inner disk, and there are very few beyond 4.5 kpc, but they avoid the brightest HI ridges. The c3- and e-type YSCCs are the dominant population at large galactocentric radii. It is likely that at these radii molecular clouds are of too low a mass to be detected in the IRAM all-disk survey. e-Type YSCCs have more diffuse MIR emission, often mixed with faint Hα filaments, and hence it is unclear whether any star formation will ever take place at the location of these sources.

We now examine the luminosities of the YSCCs for the various classes. In Figure <ref> we plot the bolometric luminosity <cit.> as a function of galactocentric radius for the various classes, together with the linear fit to the log distributions. We have included all YSCCs with an estimated bolometric luminosity which lie at galactocentric distances R<7 kpc. We have not considered the uncertainties on the bolometric luminosities because they are hard to quantify, as they are not dominated by photometric errors but by uncertainties related to the modelling. On average, and at all galactocentric radii, the class with the highest luminosities is the c2 class, followed by the c1, c3, b, and e classes. Since YSCCs of c-type are likely to be associated with clusters which have completed the formation process, it is conceivable that they are more luminous than b- and e-type sources linked to the embedded phase of star formation, when not all cluster members have been formed. In particular, this finding is in agreement with non-instantaneous cluster formation theories, which predict that massive stars are fully assembled at a later stage and that their quick evolution switches off the cluster formation process <cit.>. A similar trend to that shown in Figure <ref> is found if we plot the TIR luminosity.

Through SED fits, <cit.> have estimated the ages of YSCCs of c-type, which are shown in Figure <ref>.
Given the uncertainties in age determination (of order 0.1 dex for bright, rich YSCCs and larger for dim ones due to IMF incompleteness), the mean ages of c1-, c2-, and c3-type YSCCs are consistent. It is important to underline that the vast majority of the YSCCs have ages between 3.5 and 8 Myrs, and only 15 of them, less than 4%, are older than 8 Myrs.

§ MOLECULAR CLOUD LIFETIME AND THE STAR-FORMATION CYCLE

Stars form from gas and leave their imprint on the gas through cloud dispersal or gas compression for the next generation of stars. During the formation phase, and for the following few Myrs, the dust associated with the molecular material is heated by the energetic photons emitted by the YSC. The CO survey of the LMC at a resolution of 40 pc demonstrated the good correlation of molecular clouds with young stellar clusters of age ≤ 10 Myr <cit.>. The timescale of each evolutionary stage has then been estimated by subdividing GMCs into different classes corresponding to different ages of the associated YSC <cit.>. The LMC is an irregular, metal poor galaxy, interacting with the Milky Way, and hence it is not clear that the same timescales for GMC evolution apply as for M33 or the Milky Way. One difficulty we have to face for M33 is that, despite the excellent spatial resolution of the CO survey, which gives a physical scale comparable to that of the LMC CO survey, there is no high resolution radio continuum database which can be used to locate thermal and non-thermal sources, to obtain information on the very early phases of stellar cluster formation. Dedicated optical surveys of the LMC have produced a catalogue of clusters and OB associations, 137 of which are of age < 10 Myrs. In M33 there are only 16 optically identified clusters with age < 10 Myrs, and none of these are within the cloud contour of identified GMCs. The MIR-source catalogue of <cit.>, with numerous YSCCs, is a substitute for the lack of optically identified YSCs in M33.
However, the use of these data implies that we have more information on the intermediate stages of YSC formation but less on the dissipation phase, when the YSC emerges from the GMC and is optically identified. The YSC candidates in the <cit.> catalogue have reliable age estimates, especially if they are bright and with coincident peaks in the various bands, such as c2-type YSCCs. All 216 c2-type YSCCs have ages ≤ 10 Myrs, 90% of them between 3.5 and 8 Myrs, with a marked peak around 5 Myrs. Similar ages are found for c3-type YSCCs, not associated with GMCs but with an optical counterpart as well. Figure <ref> shows a histogram of the age distribution of c-type YSCCs.

In the previous section we have seen that the number of YSCCs associated with GMCs drops when their age is > 8 Myrs. If 8 Myrs is the typical age of the cluster when it breaks through the cloud, we can say that phase C lasts 8 Myrs, and we hence define 8 Myrs as the timescale for a GMC of C-type. During this stage YSCs are fully assembled, including massive star formation. Shortly after this stage the YSC dissipates the associated GMC. We can estimate how long the inactive A-type and the embedded B-type phases last based on the number of clouds in the catalogue. Considering only GMCs above our survey completeness limit, the total number of classified clouds is 474. Of these, 127 are of A-type, 79 of B-type, and 268 of C-type. Assuming a continuous rate of star formation in M33 and that the C-type phase lasts 8 Myrs, we estimate that the B-type phase lasts 2.4 Myrs and the A-type phase about 3.8 Myrs.
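The steady-state bookkeeping behind these numbers, in which the duration of each phase is proportional to the number of clouds observed in that phase once the C-type phase is pinned at 8 Myrs, can be checked directly:

```python
# Counts of classified GMCs above the completeness limit, from the text
n_A, n_B, n_C = 127, 79, 268

t_C = 8.0               # Myr, set by the YSCC age distribution
t_B = t_C * n_B / n_C   # embedded B-type phase, ~2.4 Myr
t_A = t_C * n_A / n_C   # inactive A-type phase, ~3.8 Myr

# Total GMC lifetime quoted in Section 5, ~14.2 Myr
lifetime = t_A + t_B + t_C
```

This reproduces the 2.4 and 3.8 Myrs quoted above, and their sum with the 8 Myrs C-type phase gives the 14.2 Myrs lifetime discussed in the next paragraph.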
The shortest phase for GMCs in M33 is the totally embedded phase, when the newborn cluster has no Hα or optical counterpart: this is a phase which was not considered explicitly by <cit.> and <cit.>, but it is included in their inactive, Type-I phase. We define the GMC lifetime as the time interval between the stage when most of the GMC mass has been assembled but the cloud is still inactive, and the start of the gas dissipation phase, when most of the molecular gas is dispersed into the interstellar medium and the YSC has no large molecular clumps nearby. This lifetime includes the cloud inactive phase, the embedded star-forming phase, and the time when the YSC breaks through the cloud and has an optical counterpart. The cloud growth and dissipation times, which involve a substantial change of the GMC mass, have not been considered due to the GMC survey completeness limits. We therefore have a lifetime of about 14.2 Myrs for GMCs in M33, before they are dissipated. This is somewhat shorter than the lifetime derived by <cit.>, who used a sample of GMCs of masses and effective spatial resolution similar to those of our survey. The Type I (similar to our A- and B-type clouds) and Type II (C-type) evolutionary sequence of <cit.> lasts 19 Myrs. In the Milky Way the GMC lifetime has been estimated to be in the range 10-20 Myrs <cit.>, shorter than earlier estimates of 30-40 Myrs <cit.>.

Considering only GMCs above a limiting mass of 10^5 M_⊙, we have 115 clouds of A-type, 58 clouds of B-type, and 221 of C-type. Since the cluster age is not a function of the associated cloud mass, assuming that C-type clouds last 8 Myrs prior to gas dispersal, we estimate that GMCs spend 4.2 Myrs in class A and 2.1 Myrs in class B. This gives a total lifetime of 14.3 Myrs, very close to our previous estimate for the whole sample of GMCs above the completeness limit. In M33, the GMC lifetime prior to gas dispersal is comparable with estimates of GMC lifetimes in the Milky Way.
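Repeating the proportional-duration estimate for the subsample above 10^5 M_⊙ shows how insensitive the result is to the mass cut:

```python
# GMCs above the 10^5 M_sun mass limit, from the text
n_A, n_B, n_C = 115, 58, 221

t_C = 8.0                      # Myr, as for the full sample
t_B = t_C * n_B / n_C          # ~2.1 Myr in class B
t_A = t_C * n_A / n_C          # ~4.2 Myr in class A
lifetime = t_A + t_B + t_C     # ~14.3 Myr, vs 14.2 Myr for the full sample
```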
We now subdivide the galaxy into three zones, as in Section 3: 1) R<1.5 kpc; 2) 1.5≤ R < 4 kpc; 3) R≥ 4 kpc. The number of classified clouds and sources in the 3 zones is given in Table <ref> (unclassified objects, such as D-type clouds or d-type sources, are not considered here). We give in parentheses the percentage of the various GMC types for each radial zone and, in addition, the number of GMCs in that zone whose mass is above the completeness limit. We can estimate the cloud lifetime for the three zones considering only GMCs above the completeness limit, and find that in zone 2 GMCs have the longest lifetime, of order 15.4 Myrs, due to the longer time clouds spend in the inactive phase. In zones 1 and 3 the GMC lifetime is of order 13.4 Myrs and 12.5 Myrs, respectively. In the intermediate radial range, where spiral arms are found, molecular clouds have a longer quiescent time, as more A-type clouds are found. The quiescent time is short for GMCs in the inner regions, about half of that inferred for zone 2. In the outer regions, instead, the embedded phase lasts less than elsewhere, about 1.5 Myrs. However, as we will discuss in the next paragraph, the large drop in GMC number density in zone 3 and the lower average stellar mass of the associated YSCCs increase the uncertainties in the estimated cloud type fractions and YSC ages, and therefore in the GMC lifetime.

Going radially outwards from zone 2 to zone 3, the number density of GMCs decreases more rapidly than the number density of YSCCs. This can be explained either by an unseen population of clouds of smaller mass, undetected by the IRAM all-disk survey, or by a quicker evolution of GMCs (once stars are formed, GMCs dissolve in a shorter time as the stellar cluster evolves). The number of YSCCs without any associated GMC (c3-type sources) is in fact the dominant population in zone 3, while in the other zones the dominant YSCC population is of c2-type.
Let us suppose that the larger drop in the density of GMCs at large galactocentric radii with respect to the YSCC density is due to an unseen population of molecular clouds, either GMCs with weaker CO emission or molecular clouds of lower mass (which have not been detected by the IRAM survey). Suppose that the percentage of missing clouds is the same for each cloud class, that the scalelength of the molecular gas surface density is the same at all radii (i.e. 2 kpc), and that the cloud mass spectrum is also radially constant. In zone 3 we should then find a number of clouds equal to 75% of the number of clouds of zone 2, i.e. 222 clouds. Hence there are 72 clouds which escaped detection (or even more if their mass decreases with galactocentric radius). Of these, about 42 clouds would be of C-type, 8 clouds of B-type, and 22 of A-type. Consequently, the number of b-type sources would be 31 (13%), the number of c1+c2 sources would be 117 (51%), the number of c3-type sources would be 41 (18%), and the number of e-type sources would be 40. In this case the percentage of optically visible YSCs associated with clouds would be the same as in zone 2, and also the percentages of sources with or without clouds would be in closer agreement. So, if the hypothesis of the unseen population is correct, there is not much difference in the timescale of cloud dissipation across the M33 disk.

§ SUMMARY

In this paper we present the largest database of GMCs and candidate YSCs across a galactic disk. Using the IRAM-30m CO J=2-1 datacube of M33, we identify 566 GMCs, and we select 630 MIR-sources from the <cit.> list which are young stellar clusters in the early formation and evolutionary phases. We classified the GMCs as non-starforming (class A), with embedded SF (B), or with exposed SF (C). The YSCCs were put in classes based on their emission in the MIR, FUV, and Hα bands and according to their association with GMCs. Most of the YSCCs with optical and UV counterparts have estimated ages and masses.
The classification helps in drawing a possible evolutionary sequence and the relative timescales. The results of the classification, together with the most relevant parameters of the GMCs and YSCCs, can be found in the on-line tables. Since M33 has a non-uniform star forming disk, with varying population densities and structures going radially outwards, we examine three distinct radial ranges: R<1.5 kpc, 1.5≤ R<4 kpc, and R≥ 4 kpc, and refer to these as zone 1 (inner disk), zone 2 (spiral arm dominated), and zone 3 (outer disk). Below we summarize the main results discussed in the paper.

* The GMC catalogue comprises 566 clouds with masses between 2× 10^4 and 2× 10^6 M_⊙ and radii between 10 and 100 pc. 490 clouds are above the survey completeness limit of L_CO≥ 5700 K km s^-1 pc^2 (M_H_2≥ 6.3× 10^4 M_⊙). By examining the 8, 24 μm, FUV, and Hα emission within each cloud, 545 of them have been classified as A-, B-, or C-type clouds. The remaining 21 clouds could not be assigned unambiguous classes. More than half of the catalogued GMCs have exposed star formation (C-type), with emission peaks at several wavelengths within the cloud contours, and these are the most massive ones. About 32% are inactive (A-type), with no sources at any wavelength, and only 16% have embedded or low mass star formation (B-type), with emission peaks at MIR wavelengths only.

* The peak of the distribution of A-type clouds is near 4 kpc. Beyond 2 kpc their number density is comparable to that of C-type clouds, which is the most numerous group. Both the A- and C-type clouds are found along HI filaments or spiral arms. On the southern arm (but not the northern), the A-type clouds are found entering the arm while C-type clouds are found on the arm, suggesting the arm environment may play a role in triggering star formation. The average CO luminosity increases going from A- to B- to C-type clouds, and this suggests that the GMC mass may continue to grow as clouds evolve from the inactive phase to the formation of massive stars.
* We classified 611 of the 630 YSCCs into five categories (b-, c1-, c2-, c3-, and e-type) according to the presence and location of an optical, Hα, or UV counterpart within a GMC boundary. The majority of these sources lie (in projection) within a GMC boundary, especially in zone 1 and zone 2. The largest class of YSCCs has coincident peaks in the UV and Hα bands, which made it possible to estimate the age and mass of the associated YSC. There is an extraordinary spatial correspondence between the GMCs and the distribution of atomic hydrogen overdensities in zones 1 and 2. In zone 3 there are fewer GMCs, possibly because of a steepening of the molecular cloud mass spectrum, with a larger fraction of clouds being below the survey completeness limit.

* We find that GMCs classified as B- or C-type are associated with catalogued MIR-sources classified as YSCCs of b- or c-type, with only a few exceptions. The physical association between GMCs and YSCCs is established in the three zones considering the whole sample of GMCs and YSCCs, independently of their prior classification. We analyse the association visually and statistically by generating the positional correlation function of the two distributions, and indeed the correlation is remarkable and stronger than with other populations in the M33 disk. If d̅ is the typical separation length between YSCCs, we would expect to find only 20% of the GMCs with a YSCC at a distance less than 0.5d̅ if they were randomly distributed. Instead we find fractions of order 60-70%. The correlation length is of order 17 pc, but there is highly statistically significant clustering out to larger distances. There is little or no correlation between the mass of the GMCs and that of the YSCCs.

* The extinction estimates are higher for b-type sources with weak or no UV/optical counterpart, which likely represent the early phases of SF.
The c1-type YSCCs have visible Hα but not FUV emission and show on average higher extinction than the c2-type YSCCs, where FUV emission is also detected. The c1-type YSCCs may represent YSCs at an earlier stage than c2-type YSCCs, even though the YSC age determination is not precise enough to separate these two classes. The most luminous YSCCs are of c2-type and are clusters which have likely completed the formation process.

* Estimated ages of most YSCCs are between 3.5 and 8 Myrs, with a marked peak around 5 Myrs, and are associated with C-type GMCs. Using the cluster ages and the fractions of GMCs in each class, i.e. in each evolutionary phase, we have estimated the GMC lifetime in M33 as being 14.2 Myrs, from when they are assembled to the time when the YSC breaks through the cloud, prior to gas dispersal. Even though the lifetime may be slightly longer if the cloud dispersal time is included, our estimate for the GMC lifetime in M33 seems comparable to GMC lifetimes in the Milky Way and somewhat shorter than the estimated GMC lifetime in the LMC. The embedded phase, where MIR emission is visible but no Hα or FUV emission is detected, is the shortest phase from when the cloud is assembled and inactive to the switch-off of the stellar cluster formation process.

The analysis of the largest available sample of GMCs and YSCCs and their association across the whole star forming disk of M33 provides reliable estimates of GMC lifetimes and evolutionary timescales, necessary for understanding the gas-star formation cycle across spiral galaxy disks.

We would like to thank the referee, Christine Wilson, for her useful comments to improve the original version of the manuscript.

[Bash et al.(1977)Bash, Green, & Peters]1977ApJ...217..464B Bash, F. N., Green, E., & Peters, III, W. L. 1977, , 217, 464 [Blitz & Shu(1980)]1980ApJ...238..148B Blitz, L. & Shu, F. H.
1980, , 238, 148[Braine et al.(2010)Braine, Gratier, Kramer, Schuster, Tabatabaei, & Gardan]2010A A...520A.107B Braine, J., Gratier, P., Kramer, C., et al. 2010, , 520, A107[Calzetti(2001)]2001PASP..113.1449C Calzetti, D. 2001, , 113, 1449[Churchwell(2002)]2002ARA A..40...27C Churchwell, E. 2002, , 40, 27[Corbelli et al.(2011)Corbelli, Giovanardi, Palla, & Verley]2011A A...528A.116C Corbelli, E., Giovanardi, C., Palla, F., & Verley, S. 2011, , 528, A116[Corbelli et al.(2014)Corbelli, Thilker, Zibetti, Giovanardi, & Salucci]2014A A...572A..23C Corbelli, E., Thilker, D., Zibetti, S., Giovanardi, C., & Salucci, P. 2014, , 572, A23[Corbelli et al.(2009)Corbelli, Verley, Elmegreen, & Giovanardi]2009A A...495..479C Corbelli, E., Verley, S., Elmegreen, B. G., & Giovanardi, C. 2009, , 495, 479[Druard(2014)]Druardthesis Druard, C. 2014, PhD thesis, Université Bordeaux 1[Druard et al.(2014)Druard, Braine, Schuster, Schneider, Gratier, Bontemps, Boquien, Combes, Corbelli, Henkel, Herpin, Kramer, van der Tak, & van der Werf]2014A A...567A.118D Druard, C., Braine, J., Schuster, K. F., et al. 2014, , 567, A118[Elmegreen(2007)]2007ApJ...668.1064E Elmegreen, B. G. 2007, , 668, 1064[Engargiola et al.(2003)Engargiola, Plambeck, Rosolowsky, & Blitz]2003ApJS..149..343E Engargiola, G., Plambeck, R. L., Rosolowsky, E., & Blitz, L. 2003, , 149, 343[Fan & de Grijs(2014)]2014ApJS..211...22F Fan, Z. & de Grijs, R. 2014, , 211, 22[Freedman et al.(1991)Freedman, Wilson, & Madore]1991ApJ...372..455F Freedman, W. L., Wilson, C. D., & Madore, B. F. 1991, , 372, 455[Fukui et al.(2008)Fukui, Kawamura, Minamidani, Mizuno, Kanai, Mizuno, Onishi, Yonekura, Mizuno, Ogawa, & Rubio]2008ApJS..178...56F Fukui, Y., Kawamura, A., Minamidani, T., et al. 2008, , 178, 56[Fukui et al.(2001)Fukui, Mizuno, Yamaguchi, Mizuno, & Onishi]2001PASJ...53L..41F Fukui, Y., Mizuno, N., Yamaguchi, R., Mizuno, A., & Onishi, T. 
2001, , 53, L41[Fukui et al.(1999)Fukui, Mizuno, Yamaguchi, Mizuno, Onishi, Ogawa, Yonekura, Kawamura, Tachihara, Xiao, Yamaguchi, Hara, Hayakawa, Kato, Abe, Saito, Mano, Matsunaga, Mine, Moriguchi, Aoyama, Asayama, Yoshikawa, & Rubio]1999PASJ...51..745F Fukui, Y., Mizuno, N., Yamaguchi, R., et al. 1999, , 51, 745[Gieren et al.(2013)Gieren, Górski, Pietrzyński, Konorski, Suchomska, Graczyk, Pilecki, Bresolin, Kudritzki, Storm, Karczmarek, Gallenne, Calderón, & Geisler]2013ApJ...773...69G Gieren, W., Górski, M., Pietrzyński, G., et al. 2013, , 773, 69[Gratier et al.(2012)Gratier, Braine, Rodriguez-Fernandez, Schuster, Kramer, Corbelli, Combes, Brouillet, van der Werf, & Röllig]2012A A...542A.108G Gratier, P., Braine, J., Rodriguez-Fernandez, N. J., et al. 2012, , 542, A108[Gratier et al.(2010)Gratier, Braine, Rodriguez-Fernandez, Schuster, Kramer, Xilouris, Tabatabaei, Henkel, Corbelli, Israel, van der Werf, Calzetti, Garcia-Burillo, Sievers, Combes, Wiklind, Brouillet, Herpin, Bontemps, Aalto, Koribalski, van der Tak, Wiedner, Röllig, & Mookerjea]2010A A...522A...3G Gratier, P., Braine, J., Rodriguez-Fernandez, N. J., et al. 2010, , 522, A3[Gratier et al.(2016)Gratier, Braine, Schuster, Rosolowsky, Boquien, Calzetti, Combes, Kramer, Henkel, Herpin, Israel., Koribalski, Mookerjea, Tabatabaei, Röllig, van der Tak, van der Werf, & Wiedner]2016arXiv160903791G Gratier, P., Braine, J., Schuster, K., et al. 2016, ArXiv e-prints [Heyer et al.(2004)Heyer, Corbelli, Schneider, & Young]2004ApJ...602..723H Heyer, M. H., Corbelli, E., Schneider, S. E., & Young, J. S. 2004, , 602, 723[Kawamura et al.(2009)Kawamura, Mizuno, Minamidani, Filipović, Staveley-Smith, Kim, Mizuno, Onishi, Mizuno, & Fukui]2009ApJS..184....1K Kawamura, A., Mizuno, Y., Minamidani, T., et al. 2009, , 184, 1[Larson(1981)]1981MNRAS.194..809L Larson, R. B. 
1981, , 194, 809[Magrini et al.(2010)Magrini, Stanghellini, Corbelli, Galli, & Villaver]2010A A...512A..63M Magrini, L., Stanghellini, L., Corbelli, E., Galli, D., & Villaver, E. 2010, , 512, A63[Massey et al.(2006)Massey, Olsen, Hodge, Strong, Jacoby, Schlingman, & Smith]2006AJ....131.2478M Massey, P., Olsen, K. A. G., Hodge, P. W., et al. 2006, , 131, 2478[Miura et al.(2012)Miura, Kohno, Tosaki, Espada, Hwang, Kuno, Okumura, Hirota, Muraoka, Onodera, Minamidani, Komugi, Nakanishi, Sawada, Kaneko, & Kawabe]2012ApJ...761...37M Miura, R. E., Kohno, K., Tosaki, T., et al. 2012, , 761, 37[Mizuno et al.(2001)Mizuno, Yamaguchi, Mizuno, Rubio, Abe, Saito, Onishi, Yonekura, Yamaguchi, Ogawa, & Fukui]2001PASJ...53..971M Mizuno, N., Yamaguchi, R., Mizuno, A., et al. 2001, , 53, 971[Murray(2011)]2011ApJ...729..133M Murray, N. 2011, , 729, 133[Roberts(1969)]1969ApJ...158..123R Roberts, W. W. 1969, , 158, 123[Rosolowsky et al.(2003)Rosolowsky, Engargiola, Plambeck, & Blitz]2003ApJ...599..258R Rosolowsky, E., Engargiola, G., Plambeck, R., & Blitz, L. 2003, , 599, 258[Rosolowsky et al.(2007)Rosolowsky, Keto, Matsushita, & Willner]2007ApJ...661..830R Rosolowsky, E., Keto, E., Matsushita, S., & Willner, S. P. 2007, , 661, 830[Rosolowsky & Leroy(2006)]2006PASP..118..590R Rosolowsky, E. & Leroy, A. 2006, , 118, 590[Sharma et al.(2011)Sharma, Corbelli, Giovanardi, Hunt, & Palla]2011A A...534A..96S Sharma, S., Corbelli, E., Giovanardi, C., Hunt, L. K., & Palla, F. 2011, , 534, A96[Solomon et al.(1987)Solomon, Rivolo, Barrett, & Yahil]1987ApJ...319..730S Solomon, P. M., Rivolo, A. R., Barrett, J., & Yahil, A. 1987, , 319, 730[Verley et al.(2009)Verley, Corbelli, Giovanardi, & Hunt]2009A A...493..453V Verley, S., Corbelli, E., Giovanardi, C., & Hunt, L. K. 2009, , 493, 453[Verley et al.(2007)Verley, Hunt, Corbelli, & Giovanardi]2007A A...476.1161V Verley, S., Hunt, L. K., Corbelli, E., & Giovanardi, C. 2007, , 476, 1161[Wilson & Scoville(1990)]1990ApJ...363..435W Wilson, C. D. 
& Scoville, N. 1990, , 363, 435[Yamaguchi et al.(2001)Yamaguchi, Mizuno, Mizuno, Rubio, Abe, Saito, Moriguchi, Matsunaga, Onishi, Yonekura, & Fukui]2001PASJ...53..985Y Yamaguchi, R., Mizuno, N., Mizuno, A., et al. 2001, , 53, 985[Zinnecker & Yorke(2007)]2007ARA A..45..481Z Zinnecker, H. & Yorke, H. W. 2007, , 45, 481
The polynomial eigenvalue problem arises in many applications and has received a great deal of attention over the last decade. The use of root-finding methods to solve the polynomial eigenvalue problem dates back to the work of Kublanovskaya (1969, 1970) and has received a resurgence due to the work of Bini and Noferini (2013). In this paper, we present a method which uses Laguerre iteration for computing the eigenvalues of a matrix polynomial. An effective method based on the numerical range is presented for computing initial estimates to the eigenvalues of a matrix polynomial. A detailed explanation of the stopping criteria is given, and it is shown that under suitable conditions we can guarantee the backward stability of the eigenvalues computed by our method. Then, robust methods are provided for computing both the right and left eigenvectors and the condition number of each eigenpair. Applications for Hessenberg and tridiagonal matrix polynomials are given, and we show that both structures benefit from substantial computational savings. Finally, we present several numerical experiments to verify the accuracy of our method and its competitiveness for computing the roots of a polynomial and solving the tridiagonal eigenvalue problem.

Matrix polynomial, polynomial eigenvalue problem, root-finding algorithm, Laguerre's method

15A22, 15A18, 47J10, 65F15

THOMAS R.
CAMERON AND NIKOLAS STECKLEY, THE POLYNOMIAL EIGENVALUE PROBLEM

§ INTRODUCTION

The polynomial eigenvalue problem consists of computing the eigenvalues, and often eigenvectors, of an n× n matrix polynomial of degree d:

P(λ) = ∑_{i=0}^{d} λ^i A_i,  where A_i ∈ ℂ^{n× n} and A_d ≠ 0.

An eigenvalue of P(λ) is any scalar λ∈ℂ such that det P(λ)=0. Any nonzero vector x in the null space of P(λ) is an eigenvector corresponding to λ. The algebraic multiplicity of λ is its multiplicity as a root of det P(λ), and the geometric multiplicity of λ is the dimension of the null space of P(λ). Throughout this paper we assume that the matrix polynomial is regular, that is, det P(λ) is not identically zero, and therefore the set of all eigenvalues is a subset of the extended complex plane with cardinality nd. Infinite eigenvalues of (<ref>) can occur if the leading coefficient matrix is singular and are defined as the zero eigenvalues of the reversal polynomial,

∑_{i=0}^{d} ρ^{d-i} A_i.

Computing an eigenpair (λ,x) is useful for a large range of applications <cit.>. Of extreme importance are special cases of the polynomial eigenvalue problem, such as finding the roots of a scalar polynomial (n=1) and solving the linear eigenvalue problem (d=1). What's more, the established techniques for solving these special case problems motivate two current approaches for solving the polynomial eigenvalue problem: linearization and root-finding methods. The linearization of a matrix polynomial results in an equivalent linear eigenvalue problem, which is often solved using QZ iteration. Algorithms which adopt this approach have computational complexity O(d^3 n^3) and include the popular MATLAB functions QUADEIG <cit.> and POLYEIG <cit.>. More recently, it was shown that exploiting the inherent structure in the companion linearization results in an O(d^2 n^3) algorithm <cit.>. However, the original matrix polynomial often comes with structure worth exploiting and, in general, the companion linearization does not preserve this structure.
Furthermore, the conditioning of the larger linear problem can be worse than that of the original problem <cit.>.

To our knowledge, Kublanovskaya was the first to use root-finding methods to solve the polynomial eigenvalue problem <cit.>, when she suggested the use of the QR decomposition with column pivoting and Newton's method to compute an eigenvalue of the matrix polynomial. Improvements to this method were given, and quadratic convergence was shown, by Jain, Singhal, and Huseyin <cit.>. More recently, a cubically convergent algorithm using the Ehrlich-Aberth method to compute the eigenvalues of a matrix polynomial was presented <cit.>. These root-finding methods are rather inefficient, though, as they compute one eigenvalue at a time, each requiring several O(n^3) factorizations. However, certain structures in the original problem can be exploited, thus increasing the efficiency of these methods. In addition to preserving the structure, root-finding methods have the advantage of preserving the size and conditioning of the original problem.

Root-finding methods exhibit a high level of accuracy, thus making them useful in the context of iterative refinement of computed eigenvalues and eigenvectors. They have been shown to be cost-efficient for solving large degree polynomial eigenvalue problems <cit.>, and are the driving force behind what is perhaps the fastest and most accurate algorithm for solving the nonsymmetric tridiagonal eigenvalue problem <cit.>. Furthermore, both Laguerre's method and the Ehrlich-Aberth method have been used as accurate and efficient methods for solving the quadratic tridiagonal eigenvalue problem <cit.>. In this paper, we propose a root-finding algorithm which uses Laguerre iteration to solve the polynomial eigenvalue problem. Our method is motivated by the previous work of Bini and Noferini <cit.>, Gary <cit.>, and Parlett <cit.>. In <ref> we present a method for computing the Laguerre iterate of an approximate eigenvalue.
We provide robust methods for computing the corresponding right and left eigenvectors, backward error, and condition estimates. Both Hessenberg and tridiagonal structures are considered, and it is shown that Hyman's method can be used to obtain significant computational savings. In <ref> we develop a method based on the numerical range for computing initial estimates to the eigenvalues of a matrix polynomial. Under suitable conditions these initial estimates are no bigger in absolute value than the upper Pellet bounds. Finally, an a priori check for both zero and infinite eigenvalues is implemented and comparisons are made to the approach developed in <cit.>. In <ref> we discuss the stability of our method. Specifically, we show that our method is robust against overflow and that under suitable conditions we can guarantee the backward stability of the computed eigenvalues. In <ref> numerical experiments are provided to verify the accuracy and cost analysis of our method. Additionally, comparisons are made to the methods in <cit.> and <cit.> to verify the effectiveness of our method for computing the roots of a polynomial and solving the tridiagonal polynomial eigenvalue problem, respectively.

§ LAGUERRE'S METHOD APPLIED TO THE POLYNOMIAL EIGENVALUE PROBLEM

Laguerre's method has a rich history originating with the work of Edmond Laguerre <cit.>. Laguerre's method has incredible virtues, including guaranteed global convergence when all roots are real <cit.>, and when the zeros are simple this method is known to exhibit local cubic convergence. In practice, the complex iterations seem as powerful as the real ones <cit.>. Both Numerical Recipes (zroots) and the NAG 77 Library (C02AFF) employ a modified Laguerre method to compute the roots of a scalar polynomial. In 1964-65, Laguerre's method was applied to the linear eigenvalue problem, both in the monic <cit.> and non-monic <cit.> cases.
Now we apply Laguerre's method to the polynomial eigenvalue problem. Since P(λ) is assumed to be regular, the polynomial p(λ)=det P(λ) has N_1≤ nd roots, where N_1+N_2=nd and N_2 is the number of infinite eigenvalues. Given an approximation λ to one of the roots of p(λ), Laguerre's method uses p(λ), p'(λ), and p''(λ) to obtain a better approximation. Following the development in <cit.>, we define the following: S_1(λ)=p'(λ)/p(λ)=∑_{i=1}^{N_1}1/(λ-r_i), where r_1,…,r_{N_1} are the roots of p(λ), and S_2(λ)=-(p'(λ)/p(λ))'=∑_{i=1}^{N_1}1/(λ-r_i)^2. Then the next approximation is given by λ̂=λ-N_1/(S_1±√((N_1-1)(N_1S_2-S_1^2))), where the sign of the square root is chosen to maximize the magnitude of the denominator. We call λ̂ the Laguerre iterate of λ. Once the roots r_1,…,r_k have been found, we deflate the problem by subtracting ∑_{i=1}^{k}1/(λ-r_i) and ∑_{i=1}^{k}1/(λ-r_i)^2 from equations (<ref>) and (<ref>), respectively. The undesirable numerical properties of the determinant are well known, and it is for these reasons that we do not work with the polynomial p(λ) directly. Rather, an effective method for computing equations (<ref>)–(<ref>) can be derived from Jacobi's formula: p'(λ)/p(λ)=trace(X_1(λ)), -(p'(λ)/p(λ))'=trace(X_1^2(λ)-X_2(λ)), where P(λ)X_1(λ)=P'(λ) and P(λ)X_2(λ)=P''(λ). The first formula in (<ref>) can be found in <cit.>. The second formula follows from the first by using the derivative product rule and noting that (P^{-1}(λ))'P'(λ)=-X_1^2(λ). Note that only the diagonal entries of X_1^2(λ) are needed in (<ref>), which is significantly less expensive than computing the full matrix product. In general, the method we propose begins with initial estimates to the eigenvalues of the matrix polynomial. Then, proceeding one at a time, the Laguerre iterate of each eigenvalue approximation is computed, which requires solving the matrix equations in (<ref>). Each eigenvalue is updated until at least one of the stopping criteria is met (see <ref>).
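In the scalar case (n=1, so N_1=d when a_d≠0), the update above can be sketched directly from p, p', and p''; a minimal illustration without deflation, assuming a simple, well-separated root (all numbers are made up for the example):

```python
import numpy as np

def laguerre_step(p, dp, ddp, lam, N1):
    """One Laguerre iterate for a polynomial with N1 finite roots."""
    S1 = dp(lam) / p(lam)                      # S_1 = p'/p
    S2 = S1 ** 2 - ddp(lam) / p(lam)           # S_2 = -(p'/p)' = (p'/p)^2 - p''/p
    root = np.sqrt(complex((N1 - 1) * (N1 * S2 - S1 ** 2)))
    # sign chosen to maximize the magnitude of the denominator
    denom = S1 + root if abs(S1 + root) >= abs(S1 - root) else S1 - root
    return lam - N1 / denom

coef = np.array([1.0, -6.0, 11.0, -6.0])       # x^3 - 6x^2 + 11x - 6 = (x-1)(x-2)(x-3)
p   = lambda z: np.polyval(coef, z)
dp  = lambda z: np.polyval(np.polyder(coef), z)
ddp = lambda z: np.polyval(np.polyder(coef, 2), z)

lam = 3.5 + 0.0j
for _ in range(20):
    if abs(p(lam)) < 1e-13:                    # residual is tiny: converged
        break
    lam = laguerre_step(p, dp, ddp, lam, N1=3)
```

Starting from 3.5, the iteration settles on the nearby root at 3, illustrating the fast local convergence the text describes.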
Locally, if the root is simple, convergence is cubic; otherwise, it is linear. Furthermore, in practice, the total number of iterations needed to compute all eigenvalues is proportional to the product nd; therefore our method has computational complexity O(dn^4+d^2n^3). In <ref>, we show that significant computational savings can be obtained from Hyman's method for both Hessenberg and tridiagonal matrix polynomials. In addition, the general method can easily be specialized for scalar polynomials, and we are left with a method that has computational complexity O(d^2).

§.§ Eigenvectors, Stopping Criteria, and Condition Numbers

Denote by λ∈ℂ an approximate eigenvalue, and define QR=P(λ)E, where E is a permutation matrix such that |r_11|≥⋯≥|r_nn|. If |r_nn|<τ, where τ is some predetermined tolerance, then we say that the approximate eigenvalue has converged. This constitutes our first stopping criterion. In <ref> we define τ and show that the first stopping criterion guarantees that the backward error in the approximate eigenpair is very small. Given that λ has converged, we compute the corresponding right and left eigenvectors by x=Ex̂ and y=Qe_n, where R(1:n-1,1:n-1)x̂(1:n-1)=-R(1:n-1,n), x̂(n)=1, and e_n is the nth standard basis vector. This approach works well when |r_nn|<τ, and while this is sufficient to guarantee that the approximate eigenvalue has converged, it is not necessary. Indeed, there exist upper triangular matrices that are “nearly” rank deficient, yet none of the main diagonal entries are extremely small <cit.>.
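The first stopping criterion and the eigenvector extraction x=Ex̂ can be sketched as follows; the column-pivoted QR here is a plain textbook Householder implementation standing in for the library routine one would use in practice, and the quadratic example is contrived so that λ=1 is an exact eigenvalue:

```python
import numpy as np

def qr_col_pivot(A):
    """Householder QR with column pivoting: A[:, perm] = Q @ R."""
    R = A.astype(complex).copy()
    n = R.shape[0]
    Q = np.eye(n, dtype=complex)
    perm = np.arange(n)
    for k in range(n):
        # greedy pivot: move the trailing column of largest norm to position k
        norms = np.linalg.norm(R[k:, k:], axis=0)
        j = k + int(np.argmax(norms))
        R[:, [k, j]] = R[:, [j, k]]
        perm[[k, j]] = perm[[j, k]]
        x = R[k:, k]
        nx = np.linalg.norm(x)
        if nx == 0:
            continue
        v = x.copy()
        v[0] += np.exp(1j * np.angle(x[0])) * nx   # stable sign choice
        v /= np.linalg.norm(v)
        R[k:, :] -= 2.0 * np.outer(v, v.conj() @ R[k:, :])
        Q[:, k:] -= 2.0 * np.outer(Q[:, k:] @ v, v.conj())
    return Q, R, perm

def eigvec_from_qr(Plam):
    """Right eigenvector x = E xhat with xhat(n) = 1, by back-substitution in R."""
    n = Plam.shape[0]
    Q, R, perm = qr_col_pivot(Plam)
    xhat = np.zeros(n, dtype=complex)
    xhat[-1] = 1.0
    for i in range(n - 2, -1, -1):
        xhat[i] = -(R[i, i + 1:] @ xhat[i + 1:]) / R[i, i]
    x = np.zeros(n, dtype=complex)
    x[perm] = xhat                                 # undo the column permutation
    return x / np.linalg.norm(x)

lam = 1.0
M = np.array([[1.0, 2.0], [3.0, 4.0]])
Plam = M @ np.diag([lam**2 - 3*lam + 2, lam**2 - 5*lam + 6])  # singular at lam = 1
x = eigvec_from_qr(Plam)
resid = np.linalg.norm(Plam @ x)
```

Because the pivoting pushes the negligible column to the last position, the back-substitution recovers a vector with ‖P(λ)x‖ at roundoff level.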
For this reason, we introduce a second stopping criterion based on an upper bound estimate of the backward error in the eigenvalue approximation. If an approximate eigenvector has not been computed, then it follows from <cit.>[Lemma 3] that for any nonzero vector b∈ℂ^n the backward error in the eigenvalue approximation is bounded above by ‖ b‖_2/(α‖ P(λ)^{-1}b‖_2), where α=∑_{i=0}^{d}|λ|^i‖ A_i‖_2. Suppose λ is an approximate eigenvalue such that the upper bound on its backward error, and therefore its backward error, is less than double precision unit roundoff ϵ=2^{-53}; then we say that λ has converged. This constitutes our second stopping criterion. In practice we take the minimum of (<ref>) over three nonzero vectors b∈ℂ^n. If none of the diagonal entries in the matrix R is less than τ, then (<ref>) is not suitable for computing the corresponding eigenvectors. Rather, we compute the singular vectors corresponding to the smallest singular value of P(λ). With the QR factorization with column pivoting in (<ref>), we apply inverse iteration to E(R^*R)E^T and Q(RR^*)Q^*, to compute the right and left singular vectors, respectively. Our experience indicates that using (<ref>) to form initial estimates for the inverse iterations results in quick convergence to excellent eigenvector approximations. Now, suppose that λ is an approximate eigenvalue which satisfies |λ̂-λ|<ϵ|λ|, where λ̂ is the Laguerre iterate defined in (<ref>). Then no significant change to the current eigenvalue approximation is made, and we say that λ has converged. This constitutes our third stopping criterion. In this case, or in the case where some predefined maximum number of iterations has been reached, we cannot make a strong statement about the approximation's backward error.
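The upper bound on the backward error is cheap to estimate, since each trial vector b costs only one linear solve against the already-factored P(λ); a sketch with made-up data (sizes, coefficients, and the trial point are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n, lam = 4, 0.7                                       # illustrative size and trial point
A = [rng.standard_normal((n, n)) for _ in range(3)]   # A_0, A_1, A_2 (d = 2)
Plam = A[0] + lam * A[1] + lam**2 * A[2]
alpha = sum(abs(lam)**i * np.linalg.norm(A[i], 2) for i in range(3))

# upper bound ||b|| / (alpha ||P(lam)^{-1} b||), minimized over a few b
bounds = []
for _ in range(3):
    b = rng.standard_normal(n)
    bounds.append(np.linalg.norm(b) / (alpha * np.linalg.norm(np.linalg.solve(Plam, b))))
est = min(bounds)

# the quantity it bounds from above: 1 / (alpha ||P(lam)^{-1}||_2)
true_err = 1.0 / (alpha * np.linalg.norm(np.linalg.inv(Plam), 2))
```

Since ‖P(λ)^{-1}b‖ ≤ ‖P(λ)^{-1}‖‖b‖ for every b, the estimate can never fall below the true backward error measure, which is exactly what makes it a safe convergence test.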
At this point, our best option is to proceed by computing the singular vectors corresponding to the smallest singular value of P(λ) using the inverse iteration described in (<ref>). In summary, given an approximate eigenvalue, we compute the QR factorization with column pivoting in (<ref>). If any of the three stopping criteria are met, or the maximum number of iterations is reached, then we cease to update the eigenvalue approximation and compute corresponding right and left eigenvectors. Otherwise, we use the QR factorization to update the eigenvalue approximation by solving (<ref>) and computing the Laguerre iterate. Several remarks are in order. First, the norms of the matrix coefficients are computed only once, and in practice, we replace the matrix 2-norm with the Frobenius norm. Second, the definition we've given for α in (<ref>) results in a relative normwise measurement of the backward error (see <ref>). Finally, we note that the addition of computing the eigenvectors for each approximate eigenvalue has not changed the computational complexity of our method, which is O(dn^4+d^2n^3). Once the approximate eigenvalue has converged and the corresponding right and left eigenvectors have been computed, we report each eigenvalue's condition number. It follows from <cit.>[Theorem 5] that the normwise condition number of a nonzero finite simple eigenvalue is given by κ(λ,P)=α‖ x‖_2‖ y‖_2/(|λ||y^*P'(λ)x|). For simple zero and infinite eigenvalues, we report ‖ x‖_2‖ y‖_2/|y^*x| as the condition number, where x and y are right and left eigenvectors corresponding to the zero eigenvalues of the matrices A_0 and A_d, respectively.

§.§ Initial Estimates

A root-finding method's performance is greatly influenced by its initial estimates. In <cit.>, it is suggested to use the Newton polygon of a polynomial formed from the norms of the coefficient matrices to obtain initial estimates to the eigenvalues of the matrix polynomial P(λ).
In this section, we review this Newton polygon approach, since we will use it to form initial estimates in the scalar case. However, for matrix polynomials we propose a new method, motivated by the numerical range of the matrix polynomial, for computing the initial estimates.

§.§.§ Newton Polygon

The Newton polygon approach works by placing initial estimates on circles of suitable radii. We quantify what constitutes suitable radii via the Pellet bounds for matrix polynomials. Let P(λ) be an n× n matrix polynomial of degree d≥ 2, where A_0≠ 0. For each k∈{0,1,…,d} such that A_k is nonsingular, consider the equation ‖ A_k^{-1}‖^{-1}μ^k=∑_{i≠ k}‖ A_i‖μ^i, where ‖·‖ is any induced matrix norm. * If k=0, there exists one real positive solution r, and P(λ) has no eigenvalues of moduli less than r. * If 0<k<d, there are either no real positive solutions or two real positive solutions r_1≤ r_2. In the latter case, P(λ) has no eigenvalues in the annulus 𝒜(r_1,r_2)={z∈ℂ : r_1<|z|<r_2}. * If k=d, then there exists one real positive solution R, and P(λ) has no eigenvalues of moduli greater than R. A proof of Theorem <ref> can be found in <cit.>. Moreover, it was noted in <cit.> that the bounds in Theorem <ref> can be sharpened if (<ref>) is replaced by μ^k=∑_{i≠ k}‖ A_k^{-1}A_i‖μ^i. Let k_0,…,k_q be the values of k such that A_k is nonsingular and there exist real positive solution(s) s_{k_i}≤ t_{k_i} to (<ref>). Then t_{k_{i-1}}≤ s_{k_i} for i=1,…,q, and there are n(k_i-k_{i-1}) eigenvalues of P(λ) in the closure of the annulus 𝒜(t_{k_{i-1}},s_{k_i}). If for k=0 or k=d the matrix A_k is singular, then t_0=0 or s_d=∞, respectively. Computing the values of s_k and t_k is expensive, since it requires solving several matrix and polynomial equations. However, a cheap algorithm for approximating s_k and t_k was proposed in <cit.>. For the scalar case (n=1) there is an alternative to computing the values of s_k and t_k. Consider the polynomial w(λ)=∑_{i=0}^{d}a_iλ^i, where a_0a_d≠ 0.
The Newton polygon associated with this polynomial is the upper convex hull of the discrete set {(i,log|a_i|) : i=0,1,…,d}. Let 0=k_0<k_1<⋯<k_q=d denote the abscissas of the vertices of the Newton polygon, and define the radii r_i=|a_{k_{i-1}}/a_{k_i}|^{1/(k_i-k_{i-1})}, for i=1,…,q. Then (k_i-k_{i-1}) initial estimates to the roots of w(λ) are placed on circles centered at 0 with radius r_i. In <cit.> it is shown that these estimates lie within the Pellet bounds for the polynomial w(λ), and in <cit.> the efficiency of these initial estimates for computing the roots of a polynomial is established. In <cit.> this approach is generalized to a specific class of matrix polynomials, and in <cit.> to general matrix polynomials. In practice, the idea is simple. Let w(λ)=∑_{i=0}^{d}‖ A_i‖λ^i. Then n(k_i-k_{i-1}) initial estimates to the eigenvalues of P(λ) are placed on circles centered at zero with radius r_i, for i=1,…,q, where both k_i and r_i are defined as in (<ref>) with reference to the Newton polygon associated with the polynomial w(λ).

§.§.§ Numerical Range

The numerical range of a matrix polynomial is the set W(P)={λ∈ℂ : x^*P(λ)x=0, for some nonzero vector x∈ℂ^n}, which clearly contains the set of all eigenvalues. Under suitable conditions, see Theorem <ref>, the roots of the quadratic form x^*P(λ)x, where x∈ℂ^n is of unit length, are no bigger in absolute value than the upper Pellet bound, see Theorem <ref>. In practice, we make use of the columns of Q=[q_j]_{j=1}^{n}, already obtained from the QR factorization of the constant and leading coefficient matrices, see <ref>. Initial estimates to the finite eigenvalues are computed as the roots of q_j^*P(λ)q_j for j=1,…,n. If P(z)=zI-A, then W(P) coincides with the classical numerical range (field of values) of the matrix A, which has wonderful properties including convexity and connectedness.
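The Newton polygon construction just described amounts to an upper convex hull computation; a sketch (the helper name is hypothetical), applied to a polynomial with roots of very different magnitudes so the two radii are easy to check:

```python
import numpy as np

def newton_polygon_radii(a):
    """Abscissas and radii from the upper convex hull of {(i, log|a_i|)}.

    a: coefficients [a_0, ..., a_d] in ascending order, a_0 * a_d != 0.
    """
    pts = [(i, np.log(abs(c))) for i, c in enumerate(a) if c != 0]
    hull = []                                   # upper convex hull, left to right
    for p in pts:
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            # drop hull[-1] if it lies on or below the chord hull[-2] -> p
            if (y2 - y1) * (p[0] - x1) <= (p[1] - y1) * (x2 - x1):
                hull.pop()
            else:
                break
        hull.append(p)
    ks = [k for k, _ in hull]
    radii = [abs(a[ks[i - 1]] / a[ks[i]]) ** (1.0 / (ks[i] - ks[i - 1]))
             for i in range(1, len(ks))]
    return ks, radii

# p(x) = (x - 1e-3)(x - 1e3) = x^2 - (1e3 + 1e-3) x + 1
ks, radii = newton_polygon_radii([1.0, -(1e3 + 1e-3), 1.0])
```

For this example the polygon has vertices at abscissas 0, 1, 2, and the radii come out close to the two root magnitudes 1e-3 and 1e3, which is exactly why these circles make good starting points.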
In general, however, the numerical range of a matrix polynomial need not have these properties and is bounded if and only if the field of values of the leading coefficient matrix does not contain the origin. For a detailed introduction to the numerical range of a matrix polynomial and its geometric properties see <cit.>. It is highly nontrivial to give a complete description of the set W(P). Despite this, we have experienced great success using elements from the numerical range as initial estimates for the eigenvalues we wish to compute. This seems to be a consequence of the habitual nature of elements from the numerical range to adhere to the geometric structure of the spectrum. To exemplify this statement, consider the hyperbolic matrix polynomial P(λ), which by definition has a numerical range that satisfies W(P)⊂ℝ. Then, it is clearly advantageous to use initial estimates from the numerical range over elements on a circle in the complex plane.Even more revealing, the numerical range of a hyperbolic matrix polynomial is split into d “spectral regions” each containing a root of x^*P(λ)x. Each spectral region is an interval (possibly degenerate) on the real line that contains n eigenvalues of P(λ) <cit.>. In general, singling out a part of W(P) containing precisely k roots of x^*P(λ)x for any unit vector x∈ℂ^n and separated from the rest of W(P) by a circle establishes the existence of a spectral divisor of order k whose spectrum lies in that region <cit.>[ 26.4]. For simplicity, we also reference this region as a spectral region.In what follows, we provide three example problems from the NLEVP package <cit.> to illustrate the potential competitive advantage to be had from using the numerical range. The first two examples are of hyperbolic matrix polynomials, but the third is not. In each case, it is clear that the roots of the quadratic form are adhering to some spectral region in the plane. 
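Harvesting initial estimates from the numerical range reduces to scalar root finding on the quadratic forms q_j^*P(λ)q_j; a sketch on an arbitrary random example (the identity leading coefficient is chosen only so that each scalar form stays monic):

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 3, 2
A = [rng.standard_normal((n, n)) for _ in range(d)] + [np.eye(n)]  # A_0, A_1, A_2 = I
Q, _ = np.linalg.qr(A[0])                # orthonormal columns q_1, ..., q_n

estimates = []
for j in range(n):
    q = Q[:, j]
    # scalar polynomial q^* P(lam) q of degree d; highest-degree coefficient
    # first, as np.roots expects
    c = [q @ A[i] @ q for i in range(d, -1, -1)]
    estimates.extend(np.roots(c))
estimates = np.array(estimates)          # nd points of W(P), used as initial estimates
```

Each of the nd estimates is by construction a root of some quadratic form x^*P(λ)x and therefore lies in W(P).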
Each example contains a plot of the initial estimates using both the numerical range and Newton polygon, as well as the approximated eigenvalues. [Figure: Spring problem.] [Figure: CD Player problem.] The earlier examples highlight the advantage the numerical range has to offer, especially when the eigenvalues are real. This advantage cuts the computation time in half when solving the Spring problem, and by a quarter when solving the CD Player problem. In the following example, the eigenvalues are complex, but the advantage of the numerical range is still evident. Note how the elements from the numerical range clearly identify the 4 spectral regions in the complex plane. [Figure: Butterfly problem.] Not only do the elements of the numerical range adhere to the spectrum better than points on a circle in the complex plane, they are often, in practice, within the Pellet bounds from Theorem <ref>. We can make the following precise statement. Let P(λ) be a self-adjoint matrix polynomial. Then for any λ∈ W(P), |λ| is no bigger than the upper Pellet bound. Let x∈ℂ^n be a vector of unit length. The upper Pellet bound on the roots of the polynomial x^*P(λ)x is the unique real positive solution to the equation |x^*A_dx|μ^d=∑_{i=0}^{d-1}μ^i|x^*A_ix|. For any self-adjoint matrix A, it is well-known that ‖ A‖_2=sup_{x^*x=1}|x^*Ax|. Therefore, |x^*A_dx|μ^d≤∑_{i=0}^{d-1}μ^i‖ A_i‖_2. Let R denote the upper bound on the roots of x^*P(λ)x and R̂ denote the upper bound on the eigenvalues of P(λ). Then, by Theorem <ref>, R≤R̂ and the result follows.

§.§.§ Zero and Infinite Eigenvalues

Laguerre's method experiences local cubic convergence if the root is simple; otherwise, convergence is linear. In practice, it is most common to have multiple zero and infinite eigenvalues. Therefore, to avoid poor performance when dealing with multiple roots, we employ an a priori identification of zero and infinite eigenvalues.
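As a stand-in for this a priori check, the multiplicities can be read off from the numerical ranks of A_0 and A_d; the sketch below uses an SVD for brevity where the text uses a QR factorization with column pivoting, and assumes (as the method does) that the zero and infinite eigenvalues are semi-simple:

```python
import numpy as np

def numerical_rank(A, tol=1e-10):
    """Numerical rank via SVD; a pivoted-QR rank check is analogous."""
    s = np.linalg.svd(A, compute_uv=False)
    return int(np.sum(s > tol * s[0])) if s[0] > 0 else 0

# Quadratic example with singular A_0 and singular A_2:
A0 = np.diag([0.0, 1.0, 2.0])          # rank 2 -> one zero eigenvalue
A1 = np.eye(3)
A2 = np.diag([1.0, 1.0, 0.0])          # rank 2 -> one infinite eigenvalue
n, d = 3, 2

n_zero = n - numerical_rank(A0)        # multiplicity of lam = 0
n_inf  = n - numerical_rank(A2)        # multiplicity of lam = infinity
n_finite = n * d - n_zero - n_inf      # eigenvalues left for the Laguerre iteration
```

Detecting these eigenvalues up front sidesteps the linear convergence that multiple roots would otherwise inflict on the iteration.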
During this identification process, we assume that the zero and infinite eigenvalues are semi-simple, and thus our problem turns into a familiar one: determining the rank of the matrices A_0 and A_d. In order to determine the rank of a matrix A, we perform a QR factorization with column pivoting. Let QR=AE, where R=[ R_11 R_12; 0 R_22 ], R_11 is k_1× k_1, R_22 is k_2× k_2, k_1+k_2=n, and E is a permutation matrix such that the diagonal entries of R occur in non-increasing order. Our aim is to determine an index k_1 such that R_11 is well-conditioned and R_22 is negligible. If k_1<n, then the matrix A is rank deficient and the dimension of its null space is k_2. We compute a basis for the right and left null spaces by x_j=Ex̂_j and y_j=Qe_j, where R(1:k_1,1:k_1)x̂_j(1:k_1)=-R(1:k_1,j), x̂_j(k_1+1:j-1)=0, x̂_j(j)=1, x̂_j(j+1:n)=0, and e_j is the jth standard basis vector, for j=k_1+1,…,n. The above process is performed on both matrices A_0 and A_d, thereby computing the geometric multiplicity of the zero and infinite eigenvalues, respectively, and their corresponding right and left eigenvectors. Once this is done, the columns of the matrix Q are then used to compute initial estimates to the remaining finite eigenvalues via the roots of the quadratic form q_j^*P(λ)q_j for j=1,…,n. We compute the roots of each polynomial using Laguerre's method, specialized for the scalar polynomial, as outlined previously. Note that the computation of the QR factorization along with solving the n polynomial equations has a computational complexity of O(n^3+nd^2) and is therefore in accordance with the computational complexity of our method.

§.§ Hessenberg and Tridiagonal Form

We are motivated to consider the case where the coefficients of the matrix polynomial are in Hessenberg or tridiagonal form. The Hessenberg case is of both theoretical and practical importance.
In light of the original development of Hyman's method, we will consider this method for upper Hessenberg matrix polynomials and note the tridiagonal matrix polynomial as a special case. What's more, every matrix polynomial can be reduced to Hessenberg form <cit.>. While no numerically stable algorithm currently exists to perform this reduction, there exist applications where this Hessenberg structure arises naturally; for example, the Bilby problem in <cit.>. With regards to the tridiagonal case, previous developments have focused on the linear and quadratic polynomial eigenvalue problem <cit.>, whereas our development is applicable to any degree polynomial eigenvalue problem. §.§.§ Hyman's MethodHyman's method, a method for evaluating the characteristic polynomial and its derivatives at a point, is attributed to a conference presentation given by M.A. Hyman of the Naval Ordnance Laboratory in 1957 <cit.>. The backward stability of this method has been shown <cit.>, and this method has been used to evaluate the characteristic polynomial of a matrix <cit.> and matrix pencil <cit.>. Here we generalize these approaches in order to apply Hyman's method to the matrix polynomial.We denote an upper Hessenberg matrix polynomial as followsP(λ)=[p_11(λ)p_12(λ)⋯p_1n(λ);p_21(λ)p_22(λ)⋯p_2n(λ); ⋱⋱⋮; p_n,n-1(λ)p_nn(λ) ],where p_ij(λ) is a scalar polynomial of degree at most d. Note that in the tridiagonal case p_ij(λ)=0 for j>i+1. The insightful observation that Hyman made was that P(λ) has the same determinant as[p_11(λ)p_12(λ)⋯ b(λ);p_21(λ)p_22(λ)⋯0; ⋱⋱⋮; p_n,n-1(λ)0 ],provided thatP(λ)[ x_1(λ);⋮; x_n-1(λ);1 ]=[ b(λ);0;⋮;0 ].If we let p(λ)= P(λ), thenp(λ)=(-1)^n-1b(λ)q(λ),where q(λ)=∏_j=1^n-1p_j+1,j(λ). Given a fixed scalar λ, all unknown values in (<ref>) can be computed in O(n^2) time for Hessenberg P(λ) and in O(n) time for tridiagonal P(λ). 
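The forward substitution just described can be sketched as follows for a numeric upper Hessenberg matrix (evaluating at a fixed λ, so P(λ) is simply a matrix H; the function name is illustrative):

```python
import numpy as np

def hyman_det(H):
    """Determinant of an upper Hessenberg matrix via Hyman's method.

    Solves H @ [x_1, ..., x_{n-1}, 1]^T = [b, 0, ..., 0]^T using rows
    n, n-1, ..., 2 for back-substitution, then row 1 for b, so that
    det(H) = (-1)^(n-1) * b * prod(subdiagonal entries).
    """
    n = H.shape[0]
    x = np.zeros(n, dtype=complex)
    x[n - 1] = 1.0
    for i in range(n - 1, 0, -1):              # row i+1 in 1-based indexing
        x[i - 1] = -(H[i, i:] @ x[i:]) / H[i, i - 1]
    b = H[0, :] @ x
    q = np.prod(np.diag(H, -1))                # product of subdiagonals
    return (-1) ** (n - 1) * b * q

H = np.array([[2.0, 1.0, 0.5, 0.3],
              [1.0, 3.0, 1.0, 0.2],
              [0.0, 2.0, 1.0, 0.7],
              [0.0, 0.0, 1.5, 2.0]])
val = hyman_det(H)
```

The back-substitution touches each row once, which is the source of the O(n^2) (Hessenberg) and O(n) (tridiagonal) costs quoted in the text.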
The values of x_1(λ),…,x_{n-1}(λ) are then used to solve the following equation: P(λ)[ x_1'(λ); ⋮; x_{n-1}'(λ); 0 ]=[ b'(λ); 0; ⋮; 0 ]-P'(λ)[ x_1(λ); ⋮; x_{n-1}(λ); 1 ]. Then the values of x_1(λ),…,x_{n-1}(λ) and their derivatives are used to compute b''(λ): P(λ)[ x_1''(λ); ⋮; x_{n-1}''(λ); 0 ]=[ b''(λ); 0; ⋮; 0 ]-2P'(λ)[ x_1'(λ); ⋮; x_{n-1}'(λ); 0 ]-P''(λ)[ x_1(λ); ⋮; x_{n-1}(λ); 1 ]. Once b(λ), b'(λ), and b''(λ) have been computed, an efficient computation of the Laguerre correction term can be obtained from the following: p'(λ)/p(λ)=(b'(λ)+b(λ)(q'(λ)/q(λ)))/b(λ), -(p'(λ)/p(λ))'=(p'(λ)/p(λ))^2-(b''(λ)+2b'(λ)(q'(λ)/q(λ))+b(λ)(q''(λ)/q(λ)))/b(λ). Note that we have carefully avoided the potentially hazardous product in computing q(λ) and its derivatives by replacing it with q'(λ)/q(λ)=∑_{j=1}^{n-1}p_{j+1,j}'(λ)/p_{j+1,j}(λ), and q''(λ)/q(λ)=(q'(λ)/q(λ))'+(q'(λ)/q(λ))^2, where (q'(λ)/q(λ))'=∑_{j=1}^{n-1}(p_{j+1,j}''(λ)/p_{j+1,j}(λ)-(p_{j+1,j}'(λ)/p_{j+1,j}(λ))^2). Several remarks are in order. First, if any subdiagonal of P(λ) is zero, then solving (<ref>)-(<ref>) will require division by zero. Fortunately, we can replace any zero subdiagonal with double precision unit roundoff ϵ and maintain the backward stability of Hyman's method <cit.>. Second, Hyman's method significantly reduces the cost of each iteration, and the resulting cost of our method is O(dn^3+d^2n^3) for Hessenberg matrix polynomials and O(d^2n^2) for tridiagonal matrix polynomials.

§.§.§ Eigenvectors, Stopping Criteria, and Condition Numbers

Let λ∈ℂ be an approximate eigenvalue and define QR=P(λ). This factorization can be done in O(n^2) time for Hessenberg P(λ) and in O(n) time for tridiagonal P(λ). Let j denote the index that minimizes |r_jj|. If |r_jj|<τ, where τ is some predetermined tolerance, then we say that the approximate eigenvalue has converged. This constitutes our first stopping criterion.
Given that λ has converged, we compute the corresponding right and left eigenvectors usingx=x̂  and  y=Qŷ,where R(1:j-1,1:j-1)x̂(1:j-1)=-R(1:j-1,j),R(j+1:n,j+1:n)ŷ(j+1:n)=-R(j+1:n,j),x̂(j)=1, x̂(j+1:n)=0, ŷ(j)=1, and ŷ(1:j-1)=0.If there exists no index j such that |r_jj|<τ, then we compute an upper bound for the backward error of the eigenvalue approximation via (<ref>). If the backward error of λ is less than ϵ, then we say that λ has converged. This constitutes our second stopping criterion. We then apply inverse iteration to R^*R  and  QRR^*Q^*,to compute the right and left singular vectors, respectively. Using (<ref>) to form initial estimates for the inverse iteration results in quick convergence to excellent eigenvector approximations. As was done in <ref>, we also check if the approximate eigenvalue λ satisfies (<ref>). In this case, no significant change to the current eigenvalue approximation is made, and we say that λ has converged. This constitutes our third stopping criterion. In summary, given an approximate eigenvalue, we compute the QR factorization in (<ref>). If any of the three stopping criteria are met, or the maximum number of iterations allowed is reached, then we cease to update the eigenvalue approximation and compute corresponding right and left eigenvectors. Otherwise, we use Hyman's method to compute the Laguerre iterate. Once the approximate eigenvalue has converged and the corresponding right and left eigenvectors are computed, we report each eigenvalue's condition number (<ref>). §.§.§ Initial EstimatesJust as was done with the general matrix polynomial, initial estimates consist of computing the geometric multiplicity of the zero and infinite eigenvalues, a basis for the corresponding eigenspace, and initial estimates to the remaining finite eigenvalues via the numerical range. For the Hessenberg case, there is no difference whatsoever, since we can accomplish all of the above while adhering to the cost of the method. 
However, for the tridiagonal case we must make several changes in order to align with the method's cost. When computing the geometric multiplicity of the zero and infinite eigenvalues, we must settle for a QR factorization of the coefficient matrices A_0 and A_d without pivoting, since the column pivoting has the potential to destroy the tridiagonal structure and make this method too expensive. Therefore, we cannot expect the diagonal entries of the upper triangular R to appear in descending order. It is for this reason that we identify the pivots of R one row at a time. By keeping track of the location of the previous pivot and utilizing the structure of R, we can identify whether or not each row has a pivot, and the location of said pivot, in O(n^2) time. Then, the dimension of the corresponding eigenspace is (n-k), where k is the number of rows with a pivot. If (n-k)>1, then we use the location of each non-pivot column to compute a basis for the eigenspace in O(n^2) time. The QR factorization of the tridiagonal matrices A_0 and A_d is computed using plane rotations, and therefore each column vector of Q can be computed in O(n) time. Furthermore, each quadratic form x^*P(λ)x can be computed in O(dn) time and the roots of each scalar polynomial can be computed in O(d^2) time. It follows that the initial estimates of the tridiagonal matrix polynomial can be found in O(d^2n^2) time.

§ STABILITY

The stability of any numerical method is of the utmost importance. In this section, we provide a detailed account of why our method is robust against the potentially harmful overflow in the evaluation of the matrix polynomial and its derivatives.
Furthermore, we identify the predetermined tolerance used in the stopping criteria (<ref>) and show that if either the first or second stopping criterion holds then we can guarantee the backward stability of our eigenvalue approximation.

§.§ Robustness against Overflow

Both the computation of the Laguerre iterate and the corresponding eigenvector approximation are driven by a QR factorization of P(λ), with column pivoting for the general matrix polynomial, where λ is the current eigenvalue approximation. The evaluation of P(λ) can be done efficiently using Horner's method, but for large degree matrix polynomials this computation is prone to overflow. It is for this reason that when |λ|>1 we opt to work with the reversal polynomial (<ref>), with ρ=1/λ. One may argue that now our computation is prone to underflow, but this is not harmful; as λ→∞, rev P(ρ)→ A_d, which is aligned with our definition of the infinite eigenvalues of P(λ) as the zero eigenvalues of rev P(ρ). Now, the general Laguerre correction term in (<ref>) becomes: p'(λ)/p(λ)=ρ·trace(dI-ρ X_3(ρ)), -(p'(λ)/p(λ))'=ρ^2·trace(dI-2ρ X_3(ρ)+ρ^2(X_3^2(ρ)-X_4(ρ))), where rev P(ρ)X_3(ρ)=rev P'(ρ) and rev P(ρ)X_4(ρ)=rev P''(ρ). By using (<ref>) when |λ|≤ 1 and (<ref>) when |λ|>1, we have a method for computing the Laguerre iterate of a matrix polynomial which is robust against overflow. This is similar to the approach in <cit.> for evaluating polynomials, but to our knowledge, we are the first to apply this to matrix polynomials. For Hessenberg matrix polynomials (tridiagonal case included), we apply Hyman's method to the reversal polynomial in order to obtain the values of r'(ρ)/r(ρ) and r''(ρ)/r(ρ), where r(ρ)=det rev P(ρ).
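The reversal identities can be sanity-checked numerically in the scalar case (n=1, so nd=d), where p(λ)=λ^d r(ρ) with ρ=1/λ gives p'(λ)/p(λ)=ρ(d-ρ r'(ρ)/r(ρ)); a sketch with arbitrary coefficients:

```python
import numpy as np

a = np.array([2.0, -1.0, 0.0, 3.0, 5.0])   # a_0 ... a_d, with d = 4
d = len(a) - 1
p = np.poly1d(a[::-1])                      # p(lam) = sum_i a_i lam^i
r = np.poly1d(a)                            # reversal r(rho) = sum_i a_i rho^(d-i)

lam = 7.3                                   # |lam| > 1: work with rho = 1/lam
rho = 1.0 / lam
direct  = p.deriv()(lam) / p(lam)                    # may overflow for huge degrees
via_rev = rho * (d - rho * r.deriv()(rho) / r(rho))  # overflow-safe form
```

Both expressions agree to roundoff; for very large degrees only the second stays representable, which is the point of the switch at |λ|=1.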
The Laguerre correction term in (<ref>) then becomes: p'(λ)/p(λ)=ρ(nd-ρ r'(ρ)/r(ρ)), -(p'(λ)/p(λ))'=ρ^2(nd-2ρ r'(ρ)/r(ρ)+ρ^2((r'(ρ)/r(ρ))^2-r''(ρ)/r(ρ))). By using (<ref>) when |λ|≤ 1 and (<ref>) when |λ|>1, we have a method for computing the Laguerre iterate of an upper Hessenberg matrix polynomial (tridiagonal case included) which is both efficient and robust against overflow. For nonzero λ we have P(λ)=λ^d rev P(ρ), where ρ=1/λ. Therefore, if |λ|>1 then we replace P(λ) with rev P(ρ) in both (<ref>), for the general matrix polynomial, and (<ref>), for the Hessenberg matrix polynomial (which includes the tridiagonal case). The discussion on computing corresponding right and left eigenvectors in <ref>, for the general matrix polynomial, and <ref>, for the Hessenberg matrix polynomial, carries over naturally, with the exception that the upper bound on the backward error in the eigenvalue approximation from (<ref>) becomes: ‖ b‖_2/(β‖rev P(ρ)^{-1}b‖_2), where β=∑_{i=0}^{d}|ρ|^{d-i}‖ A_i‖_2 and b∈ℂ^n is nonzero. In addition, the normwise condition number from (<ref>) becomes: κ(λ,P)=β‖ x‖_2‖ y‖_2/|y^*(d·rev P(ρ)-ρ rev P'(ρ))x|.

§.§ Backward Stability

Let λ∈ℂ be an approximate eigenvalue, x a corresponding right eigenvector, and y a corresponding left eigenvector. Following the development in <cit.>, we define the normwise backward error of the right eigenpair by η(λ,x)=min{ϵ : [P(λ)+Δ P(λ)]x=0, ‖Δ A_i‖_2≤ϵ‖ A_i‖_2, i=0,1,…,d}, where Δ P(λ)=∑_{i=0}^{d}λ^iΔ A_i. This definition of the normwise backward error is concerned with a relative measurement of perturbation in the coefficients of the matrix polynomial. The normwise backward error for the left eigenpair (λ,y) is defined similarly. The first stopping criterion outlined in Sections <ref> and <ref> is concerned with the smallest diagonal entry of R being less than τ. We define τ=αϵ if |λ|≤ 1 and τ=βϵ otherwise, where α=∑_{i=0}^{d}|λ|^i‖ A_i‖_2, β=∑_{i=0}^{d}|ρ|^{d-i}‖ A_i‖_2, ρ=1/λ, and ϵ is double precision unit roundoff.
If the first stopping criterion holds, then the approximate right eigenpair has a backward error bounded above by ϵ(2n+1)+O(ϵ^2). From definition (<ref>) and <cit.>[Theorem 1] it follows that we may compute the normwise backward error for the right eigenpair (λ,x) by η(λ,x)=‖ P(λ)x‖_2/(α‖ x‖_2) if |λ|≤ 1, and η(λ,x)=‖rev P(ρ)x‖_2/(β‖ x‖_2) otherwise. Without loss of generality we assume that |λ|≤ 1 for the remainder of the proof. Denote by QR the QR factorization, with column pivoting for general matrix polynomials, of P(λ). Denote by x the corresponding eigenvector; for the general matrix polynomial see (<ref>) and for the Hessenberg matrix polynomial (including the tridiagonal case) see (<ref>). Then the computed right eigenvector satisfies (R+δ R)x̂=b, where ‖ b‖_2<τ. It follows from <cit.>[Corollary 2.7.9] that ‖δ R‖_F≤ 2nϵ‖ R‖_F+O(ϵ^2), where F denotes the Frobenius norm. Therefore, ‖ Rx̂‖_2≤τ+‖δ R‖_F‖x̂‖_2. Recall that, in practice, we replace the matrix 2-norm in the definition of α with the Frobenius norm, and note that ‖ R‖_F≤α. Thus, the result follows from dividing both sides of the above inequality by α‖x̂‖_2 to give ‖ Rx̂‖_2/(α‖x̂‖_2)≤ϵ(1+2n)+O(ϵ^2). If an approximate eigenvector has not been computed, then an appropriate measure of the backward error is given by η(λ)=1/(α‖ P(λ)^{-1}‖_2) if |λ|≤ 1, and η(λ)=1/(β‖rev P(ρ)^{-1}‖_2) otherwise. Again, without loss of generality, we assume that |λ|≤ 1. If λ is an approximate eigenvalue for which the second stopping criterion holds, then there exists a nonzero vector b∈ℂ^n such that ‖ b‖_2/(α‖ P(λ)^{-1}b‖_2)<ϵ; it follows that the backward error in the approximate eigenvalue (<ref>) is bounded above by ϵ. The corresponding right eigenvector is computed as an approximate right singular vector of P(λ) corresponding to the smallest singular value, and therefore minimizes the backward error in the right eigenpair (<ref>). Note that the results in this section hold naturally for left eigenpairs.
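The backward error η(λ,x) above is directly computable; a sketch on a contrived quadratic constructed so that (1, e_1) is an exact eigenpair:

```python
import numpy as np

M = np.array([[1.0, 2.0], [3.0, 4.0]])
# P(lam) = M @ diag(lam^2 - 3 lam + 2, lam^2 - 5 lam + 6): eigenpair (1, e_1)
A = [M @ np.diag([2.0, 6.0]),      # A_0
     M @ np.diag([-3.0, -5.0]),    # A_1
     M.copy()]                     # A_2

def eta(lam, x):
    """Normwise backward error ||P(lam) x|| / (alpha ||x||), |lam| <= 1 branch."""
    Plam = sum(lam**i * A[i] for i in range(3))
    alpha = sum(abs(lam)**i * np.linalg.norm(A[i], 2) for i in range(3))
    return np.linalg.norm(Plam @ x) / (alpha * np.linalg.norm(x))

x = np.array([1.0, 0.0])
eta_exact = eta(1.0, x)            # exact eigenpair: backward error is zero
eta_pert  = eta(1.0 + 1e-6, x)     # perturbed eigenvalue: small but nonzero
```

The scaling by α is what makes this a relative, coefficient-wise measure: perturbing λ by 1e-6 moves η from zero to roughly the size of the perturbation divided by the coefficient norms.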
Additionally, the result in Theorem <ref> is a worst-case scenario, and typically the factor of n can be ignored. Finally, even though we can only guarantee the backward stability of our eigenvalue approximation if the first or second stopping criterion holds, in practice it is highly unlikely to experience anything but backward stability.

§ NUMERICAL EXPERIMENTS

We have implemented the algorithm for solving the polynomial eigenvalue problem via Laguerre's method in the software package LMPEP. This package contains our implementation in FORTRAN 90 and can be freely downloaded from GitHub by visiting <https://github.com/Nick314159/LMPEP>. In this section, we provide numerical experiments to verify the computational complexity, stability, and accuracy of our methods. All tests were performed on a computer running CENTOS 7 with an Intel Core i5 processor, where the code was compiled with the GNU Fortran (GCC) 4.8.5 20150623 (Red Hat 4.8.5-11) compiler.

§.§ Complexity

We first verify the asymptotic complexity of the method. In <ref> it was shown that the computational complexity of the method for general matrix polynomials is O(dn^4+d^2n^3), and therefore for scalar polynomials the expected computational complexity is O(d^2). In addition, in <ref> it was shown that the computational complexity of the method for Hessenberg matrix polynomials is O(dn^3+d^2n^3) and for tridiagonal matrix polynomials is O(dn^2+d^2n^2). Four tests were executed: * For the general matrix polynomial, we verify the quadratic complexity in d by fixing n=2 and computing the eigenvalues of random matrix polynomials of degree d=50, 100, …, 1600. We also verify the quartic complexity in n by fixing d=2 and computing the eigenvalues of random matrix polynomials of size n=20, 40, …, 320. * For the scalar polynomial, we verify the quadratic complexity by computing the roots of random polynomials of degree d=50, 100, …, 6400. We compare the timings with POLZEROS from <cit.> and AMVW from <cit.>.
* For the Hessenberg matrix polynomial, we verify the quadratic complexity in d by fixing n=2 and computing the eigenvalues of random Hessenberg matrix polynomials of degree d=50, 100, …, 1600. We also verify the cubic complexity in n by fixing d=2 and computing eigenvalues of random Hessenberg matrix polynomials of size n=20, 40, …, 320. * For the tridiagonal matrix polynomial, we verify the quadratic complexity in d by fixing n=2 and computing the eigenvalues of random tridiagonal matrix polynomials of degree d=50, 100, …, 1600. We also verify the quadratic complexity in n by fixing d=2 and computing eigenvalues of random tridiagonal matrix polynomials of size n=20, 40, …, 320. [Figure: Test of the quadratic complexity in degree d and quartic complexity in size n of the general matrix polynomial. The tests are averaged over 5 runs.] [Figure: Test of the quadratic complexity in degree d of the scalar polynomial. The tests are averaged over 5 runs and runtimes are reported for POLZEROS and AMVW.] [Figure: Test of the quadratic complexity in degree d and cubic complexity in size n of the Hessenberg matrix polynomial. The tests are averaged over 5 runs.] [Figure: Test of the quadratic complexity in degree d and quadratic complexity in size n of the tridiagonal matrix polynomial. The tests are averaged over 5 runs.] §.§ Stability and Accuracy We next verify the stability of our method. In <ref> it was shown that our method is robust against overflow and that if either the first or second stopping criterion is met, then the approximate eigenpair has a tiny backward error. This, together with a well-conditioned problem, implies that our method is highly accurate. Four tests were executed: * For the scalar polynomial, we verify the accuracy of our method by computing the roots of random polynomials of degree d=50,100,…,6400.
We compare the forward error with POLZEROS from <cit.> and AMVW from <cit.>. * For the tridiagonal matrix polynomial, we verify the accuracy of our method by computing the eigenvalues of selected problems from both <cit.> and <cit.>. The forward error in our method is compared to the forward error in each respective method. * For the general matrix polynomial, we verify the stability of our method by solving selected problems from the NLEVP package <cit.> and comparing the backward error in our approximations to the backward error in QUADEIG. * For the general matrix polynomial, we verify the accuracy of our method by comparing the forward error in our approximations to those from QUADEIG for selected problems from the NLEVP package <cit.>. [Figure: Test of the maximum forward error in the approximation of the roots of a scalar polynomial of degree d. The tests are averaged over 5 runs and average forward errors are reported for POLZEROS and AMVW.] [Figure: Average forward error and elapsed time comparisons between our method and a relevant method from the QEP3D package <cit.>. The problems all come from the QEP3D package, with the exception of the spring problem from the NLEVP package; note that our method is competitive, even though it was designed to handle more general tridiagonal polynomial eigenvalue problems.] [Figure: Average forward error and elapsed time comparisons between our method and the method EIGEN <cit.>. The problems come from the EIGEN package; note that our method is competitive even though it was designed for more general tridiagonal polynomial eigenvalue problems.] [Figure: Comparison of the average and maximum backward error in our method and QUADEIG for many problems from the NLEVP package. The problems which QUADEIG is unable to solve are marked by NA.]
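The scalar-polynomial tests above exercise the core Laguerre iteration; a minimal single-root sketch in Python (deflation and the paper's initial estimates are omitted, and the stopping test is a simple residual check rather than the criteria of the previous section):

```python
import numpy as np

def laguerre_root(c, x, maxit=100, tol=1e-13):
    """Find one root of the polynomial with coefficients c (highest degree
    first, np.polyval convention) using Laguerre's method."""
    d = len(c) - 1
    dc, ddc = np.polyder(c), np.polyder(np.polyder(c))
    for _ in range(maxit):
        p = np.polyval(c, x)
        if abs(p) < tol:
            break
        G = np.polyval(dc, x) / p                  # p'/p
        H = G * G - np.polyval(ddc, x) / p         # (p'/p)^2 - p''/p
        sq = np.sqrt(complex((d - 1) * (d * H - G * G)))
        denom = max(G + sq, G - sq, key=abs)       # larger denominator => smaller step
        x = x - d / denom
    return x

# p(x) = (x-1)(x-2)(x-3) = x^3 - 6x^2 + 11x - 6
c = [1.0, -6.0, 11.0, -6.0]
r = laguerre_root(c, 0.9 + 0.1j)
print(min(abs(r - t) for t in (1.0, 2.0, 3.0)) < 1e-8)  # converged to a root
```

Starting from a complex point avoids stalling at real critical points; the package's numerical-range-based initial estimates serve the analogous purpose in the matrix polynomial setting.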
[Figure: Comparison of the forward error in all eigenvalue approximations between our method and QUADEIG on the dirac and damped_beam problems from the NLEVP package.] [Figure: Comparison of the forward error in all eigenvalue approximations between our method and QUADEIG on the speaker_box and wiresaw2 problems from the NLEVP package.] § CONCLUSION Leveraging the inherent strengths of Laguerre's method and the numerical range, we have proposed a versatile, stable, and efficient method for solving the polynomial eigenvalue problem, supported by numerical experiments. Furthermore, we have demonstrated the effectiveness of our initial estimates (<ref>), as well as the robustness (<ref>) and backward stability (<ref>) of our method. To our knowledge, we are the first to apply Laguerre's method with enough generality to cover such a wide range of polynomial eigenvalue problems. Our method is also alone in its use of the numerical range for initial estimates. In Section <ref> we argue that these initial estimates adhere naturally to the geometry of the spectrum, and we show that under suitable conditions they are no larger in absolute value than the upper Pellet bound (Theorem <ref>). Implemented in the FORTRAN package LMPEP, numerical results attest to our method's computational complexity of O(d^2) in the scalar case, O(d^2n^2) in the tridiagonal case, O(dn^3+d^2n^3) in the Hessenberg case, and O(dn^4+d^2n^3) in the general case. Moreover, numerical results verify the backward stability of our method and exhibit its unprecedented level of accuracy. We eagerly await the formal release of the complete code in <cit.>, so that we can make additional comparisons with our method, especially for solving large-degree polynomial eigenvalue problems. It would be remiss not to mention some open questions and areas worth exploration.
In Theorem <ref>, we show that the roots of the quadratic form, under a vector of unit length, are no larger in absolute value than the upper Pellet bound. We conjecture that they are also no smaller than the lower Pellet bound, but at this time we are unable to produce a proof. We also conjecture that there are easily constructible vectors x such that the corresponding quadratic forms x^*P(λ)x are distinct and their roots are within some minimal distance of the eigenvalues of P(λ). However, we know of no such construction at the time of this writing. In summary, we have proposed a new method for solving the polynomial eigenvalue problem that is strong in its virtues, capable of high degrees of accuracy, relatively unconstrained in its domain of operability, and promising in its possibilities for future advancement. § ACKNOWLEDGMENTS The authors wish to acknowledge conversations with David Watkins and Dario Bini which helped construct the ideas in this paper, and we wish to thank Zdenek Strakos and an anonymous referee whose comments helped improve this paper. Acton1970 F. S. Acton, Numerical Methods that Work, Harper and Row, New York, 1970. Aurentz2016 J. L. Aurentz, T. Mach, L. Robol, R. Vandebril, and D. S. Watkins, Fast and backward stable computation of the eigenvalues of matrix polynomials, arXiv preprint, (2016). Aurentz2015 J. L. Aurentz, T. Mach, R. Vandebril, and D. S. Watkins, Fast and backward stable computation of roots of polynomials, SIAM J. Matrix Anal. Appl., 36 (2015), pp. 942–973. Betcke2013 T. Betcke, N. J. Higham, V. Mehrmann, C. Schröder, and F. Tisseur, NLEVP: a collection of nonlinear eigenvalue problems, ACM Trans. Math. Software, 39 (2013), p. 28. Bini1996 D. A. Bini, Numerical computation of polynomial zeros by means of Aberth's method, Numer. Algorithms, 13 (1996), pp. 179–200. Bini2005 D. A. Bini, L. Gemignani, and F. Tisseur, The Ehrlich-Aberth method for the nonsymmetric tridiagonal eigenvalue problem, SIAM J. Matrix Anal.
Appl., 27 (2005), pp. 153–175. Bini2013-1 D. A. Bini and V. Noferini, Solving polynomial eigenvalue problems by means of the Ehrlich-Aberth method, Linear Algebra Appl., 439 (2013), pp. 1130–1149. Bini2013-2 D. A. Bini, V. Noferini, and M. Sharify, Locating the eigenvalues of matrix polynomials, SIAM J. Matrix Anal. Appl., 34 (2013), pp. 1708–1727. Cameron2015 T. R. Cameron, Spectral bounds for matrix polynomials with unitary coefficients, Electronic Journal of Linear Algebra, 30 (2015), pp. 585–591. Cameron2016 ———, On the reduction of matrix polynomials to Hessenberg form, Electronic Journal of Linear Algebra, 31 (2016), pp. 321–334. Dedieu2003 J.-P. Dedieu and F. Tisseur, Perturbation theory for homogeneous polynomial eigenvalue problems, Linear Algebra Appl., 358 (2003), pp. 71–94. Gary1965 J. Gary, Hyman's method applied to the general eigenvalue problem, Mathematics of Computation, 19 (1965), pp. 314–316. Hammarling2013 S. J. Hammarling, C. J. Munro, and F. Tisseur, An algorithm for the complete solution of quadratic eigenvalue problems, ACM Trans. Math. Software, 39 (2013), p. 19. Jain1983 N. K. Jain, K. Singhal, and K. Huseyin, On roots of functional lambda matrices, Comput. Meth. Appl. Mech. Engrg., 40 (1983), pp. 277–292. Kublanovskaya1970 V. Kublanovskaya, On an approach to the solution of the generalized latent value problem for λ-matrices, SIAM J. Numer. Anal., 7 (1970), pp. 532–537. Laguerre1898 E. Laguerre, Oeuvres de Laguerre, Gauthier-Villars, Paris, 1898. Li1994 C.-K. Li and L. Rodman, Numerical range of matrix polynomials, SIAM J. Matrix Anal. Appl., 15 (1994), pp. 1256–1265. Mackey2015 D. S. Mackey, N. Mackey, and F. Tisseur, Polynomial eigenvalue problems: theory, computation, and structure, (2015), pp. 319–348. Markus1988 A. Markus, Introduction to the Spectral Theory of Polynomial Operator Pencils, AMS Translations of Mathematical Monographs, 1988. Meerbergen2001 K. Meerbergen and F.
Tisseur, The quadratic eigenvalue problem, SIAM Review, 43 (2001), pp. 235–286. Melman2013 A. Melman, Generalization and variations of Pellet's theorem for matrix polynomials, Linear Algebra Appl., 439 (2013), pp. 1550–1567. Melman2014 ———, Implementation of Pellet's theorem, Numerical Algorithms, 65 (2014), pp. 293–304. Noferini2015 V. Noferini, M. Sharify, and F. Tisseur, Tropical roots as approximations to eigenvalues of matrix polynomials, SIAM J. Matrix Anal. Appl., 36 (2015), pp. 138–157. Parlett1964 B. Parlett, Laguerre's method applied to the matrix eigenvalue problem, Mathematics of Computation, 18 (1964), pp. 464–485. Plestenjak2006 B. Plestenjak, Numerical methods for the tridiagonal hyperbolic quadratic eigenvalue problem, SIAM J. Matrix Anal. Appl., 28 (2006), pp. 1157–1172. Tisseur2000 F. Tisseur, Backward error and condition of polynomial eigenvalue problems, Linear Algebra Appl., 309 (2000), pp. 339–361. Watkins2010 D. S. Watkins, Fundamentals of Matrix Computations, John Wiley and Sons, New Jersey, 3rd ed., 2010. Wilkinson1963 J. Wilkinson, Rounding Errors in Algebraic Processes, Prentice-Hall, New Jersey, 1963.
http://arxiv.org/abs/1703.08767v1
{ "authors": [ "Thomas R. Cameron", "Nikolas I. Steckley" ], "categories": [ "math.NA" ], "primary_category": "math.NA", "published": "20170326044143", "title": "On the application of Laguerre's method to the polynomial eigenvalue problem" }
The Superradiant Instability in AdS Joseph M.U. Sullivan May 5, 2016 =================================== We consider the intermediate and end state behavior of the superradiantly perturbed Kerr black hole. Superradiant scattering in an asymptotically flat background is considered first. The case of a Kerr black hole in an Anti de-Sitter background is then discussed. Specifically we review what is known about the superradiant instability arising in AdS and its possible end state behavior. § ACKNOWLEDGEMENTS I would like to dedicate this essay to the memory of Professor Steve Detweiler. It was during our many conversations during my time as an undergraduate that he first spurred my interest in the physics of black holes. I am just one of many students who has benefited immensely from his guidance and insight. I would like to thank Dr. Jorge Santos for shepherding me toward a better understanding of the problems at the heart of this essay, for taking the time to thoroughly review & critique my work and for the introduction to AdS. I also owe Dr. Mike Blake a debt of gratitude for the mathematical clarifications he provided. While I'm at it, I should probably also thank my mother. § INTRODUCTION This essay is concerned with the phenomenon of superradiance in the setting of the Kerr black hole (Kerr BH). Superradiance is a wave phenomenon in which an ingoing wave scatters off an object and in the process extracts some energy; the scattered wave is more energetic than the incident one. This is a particularly interesting process in the context of black hole physics because it provides an outlet for energy dissipation by a BH. Much of our discussion will be concerned with the results of <cit.> and <cit.>. In <cit.> superradiant scattering was studied in the spacetime of a Kerr BH in a Minkowski background.
Frequency dependent conditions for superradiance and calculations of the extent of amplification were obtained; we will review these. We also consider superradiance in a spacetime comprised of a Kerr BH in an Anti de-Sitter (AdS) background. While superradiant scattering in a Minkowski background is interesting in its own right, we will see that the corresponding problem in AdS is much richer and more complex. This is due in large part to the box-like nature of AdS. A scattered wave can now reflect off the boundary at infinity and return, in finite time, to the Kerr BH to extract more energy. This process can repeat many times, suggesting that the Kerr-AdS BH is susceptible to a superradiant instability. This motivates a plethora of fascinating questions: can we characterize this instability, is there a relationship between general instabilities and these superradiant ones, what is the end state of the superradiantly perturbed Kerr-AdS BH, etc.? In an effort to answer these questions we will draw heavily from the results of <cit.>. Rather than dive right into a discussion of superradiance in the two Kerr BH spacetimes, we first provide an introduction to some of the objects and concepts fundamental to the problem. To begin we give a treatment of the Kerr BH. Next we discuss the ergoregion of the Kerr BH and the associated phenomenon of the Penrose Process to motivate the idea of Kerr BHs being susceptible to energy extraction. To aid our discussion of the superradiant instability we give an overview of some of the important properties of AdS. We next introduce the Teukolsky formalism and comment on its importance to Kerr BH perturbation theory. Finally, with the Teukolsky formalism in our toolbox, we give a generic analysis of BH superradiance. §.§ The Kerr Black Hole A generic uncharged rotating BH in a Minkowski background belongs to the Kerr family.
Remarkably, this is just a two-parameter family of solutions, characterized by M and J, which describe the mass and angular momentum of the BH. In natural units, with c=1 and G=1, the metric <cit.> is given by: ds^2 = -Δ/ρ^2(dt-asin^2θ dϕ)^2 +ρ^2/Δdr^2 + ρ^2 dθ^2 + sin^2θ/ρ^2(adt-(r^2+a^2)dϕ)^2 with Δ = r^2 + a^2 -2Mr = (r-r_+)(r-r_-),   ρ^2 = r^2 + a^2cos^2θ where the mass of the spinning BH is given by M and the angular momentum is given by J=aM. We see that the metric has a coordinate singularity at the roots of Δ, with the larger root r_+ = M+√(M^2 - a^2) determining the event horizon and the smaller root r_- = M-√(M^2 -a^2) corresponding to a Cauchy horizon. §.§ Ergoregion and Energy Extraction Note that the spacetime has the Killing vector fields (KVFs) k^a=(∂/∂ t)^a,    m^a=(∂/∂ϕ)^a. Observe that k^ak_a = g_tt(r,θ) which, while monotone with respect to r and negative for sufficiently large r, is not strictly negative in the region r>r_+,   θ∈ [0,π]. Thus, finding the roots of g_tt, we see k^a is timelike in the region r> r_erg = M+√(M^2 -a^2cos^2θ), null at r = r_erg and actually spacelike in the region r_+ <r<r_erg. This latter region defines the ergoregion of the Kerr BH. A static observer, i.e. a person with 4-velocity parallel to k^a, is not allowed in the ergoregion, as curves with tangent vector ∝ k^a are spacelike when r<r_erg. Interpreting this physically, an observer cannot simply sit still in the ergoregion but is forced to rotate with the BH. On the other hand we can consider a stationary observer at constant (r,θ) with 4-velocity v^μ = (ṫ,0,0,ϕ̇) = ṫ(1,0,0,Ω). Such an observer can exist provided he/she travels on a timelike curve or, equivalently, v^2<0. This provides a condition for the existence of a stationary observer: v^2 ∝ g_tt + 2Ω g_tϕ + Ω^2g_ϕϕ <0. The zeros of the above expression are given by Ω_± = (-g_tϕ±√(g^2_tϕ - g_ttg_ϕϕ))/g_ϕϕ = (-g_tϕ±√(Δ)sinθ)/g_ϕϕ. Note that Ω_± are complex for r_-<r<r_+ (since Δ<0 there), hence there cannot be a stationary observer in this region.
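The horizon and ergosurface radii above are easy to evaluate numerically; a small sketch (geometric units; the mass, spin, and angle are arbitrary illustrative values):

```python
import numpy as np

M, a = 1.0, 0.7  # illustrative mass and spin parameter, a < M

r_plus = M + np.sqrt(M**2 - a**2)   # event horizon
r_minus = M - np.sqrt(M**2 - a**2)  # Cauchy horizon

def r_erg(theta):
    """Outer boundary of the ergoregion (the ergosurface)."""
    return M + np.sqrt(M**2 - a**2 * np.cos(theta)**2)

def g_tt(r, theta):
    """tt-component of the Kerr metric in Boyer-Lindquist coordinates,
    g_tt = -(Delta - a^2 sin^2(theta)) / rho^2."""
    rho2 = r**2 + a**2 * np.cos(theta)**2
    delta = r**2 + a**2 - 2 * M * r
    return -(delta - a**2 * np.sin(theta)**2) / rho2

theta = 1.0  # some angle away from the rotation axis
# k^a is spacelike (g_tt > 0) between the horizon and the ergosurface,
# and timelike (g_tt < 0) outside the ergosurface:
inside = 0.5 * (r_plus + r_erg(theta))
print(g_tt(inside, theta) > 0, g_tt(2 * r_erg(theta), theta) < 0)
print(r_plus <= r_erg(theta) <= 2 * M)
```

The ergosurface touches the horizon on the axis (θ=0, π) and reaches its maximum radius 2M in the equatorial plane, consistent with the expressions above.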
The permissible values are Ω∈ [Ω_-, Ω_+]. At r_+, Ω_-=Ω_+, meaning there is only one possible angular velocity for a stationary observer at the event horizon. To motivate superradiance we now give an example of how rotating BHs allow energy extraction. Suppose we have a particle with 4-momentum P^a = μ u^a which approaches the Kerr BH along a geodesic. The energy of the particle as measured by a static observer at infinity is conserved along the geodesic: E =-k · P. Now suppose that inside the ergosphere the particle decays into two other particles with momenta P_1^a   &   P_2^a. Momentum must be conserved, so P=P_1+P_2 and hence E = E_1 + E_2. But since k^a is spacelike in the ergoregion it is possible that E_1 <0, which implies that E_2 = E-E_1>E. It can be shown that p_1 must fall into the BH while p_2 can escape to infinity with greater energy than the incident particle. Hence the BH will actually decrease in mass and energy will be extracted. There are limits to the amount of energy which can be extracted in this way. A particle crossing H^+ must have P_μ (k^μ + Ω_H m^μ) ≤ 0, as both are future directed and causal. Defining L = m· P, one has E-Ω_H L ≥0. So the particle carries energy E and angular momentum L into the BH. Hence δ M = E and δ J = L. Our inequality gives δ J ≤δ M/Ω_H = 2M(M^2 + √(M^4 - J^2))/J δ M. Defining M_irr = (1/2 [M^2 +√(M^4 - J^2)])^1/2 we clearly see that M ≥ M_irr, so there is a bound to how much energy can be extracted from the BH. One can show that the horizon area satisfies A = 16π M_irr^2. §.§ Anti de-Sitter Space Because one of our interests is superradiance in Kerr-AdS, we give here a quick introduction to Anti de-Sitter space and its properties. The simplest vacuum solutions of Einstein's equation with a cosmological constant, G_ab + Λ g_ab = 0, are spacetimes of constant curvature. They are locally characterized by the condition R_abcd =R/(d-1)d(g_acg_bd-g_adg_bc) where d is the dimension of spacetime.
Making use of these expressions we see that G_ab = -R(d-2)/(2d) g_ab, hence the Ricci scalar is the constant R = 2d/(d-2) Λ. We are in the domain of AdS when Λ<0. In d=4, which is our primary dimension of interest, the metric can be written as ds^2 = -(1+r^2/L^2)dt^2 + (1+r^2/L^2)^-1dr^2 + r^2(dθ^2 + sin^2θ dϕ^2). The quantity L in eq <ref> is the radius of curvature of the spacetime and is related to Λ by Λ = -3/L^2. This is a maximally symmetric solution to Einstein's equation. One of the most famous properties of AdS, and one which is central to the discussion of the superradiant instability later on, is that it acts like a “box". To demonstrate this, consider the norm of the tangent vector of a radially outgoing null geodesic: 0 = g_μνẋ^μẋ^ν= -(1+r^2/L^2)ṫ^2 + (1+r^2/L^2)^-1ṙ^2. From this we have dt/dr= 1/(1+r^2/L^2) and hence Δ t=∫^∞_0 dr/(1+r^2/L^2) = (π/2)L. We see that it takes a finite time for a radial null geodesic to reach the boundary, and so we can think of AdS in some sense as an enclosed space. Note that it follows from this that AdS is not globally hyperbolic. For any hypersurface one can always construct a timelike curve which reaches the boundary before it is able to intersect the surface. Hence when evolving AdS initial data, boundary conditions (BCs) become very important. §.§ Perturbations of Kerr Black Holes Black hole perturbation theory is an incredibly complex and rich subject. Here we will simply introduce what is relevant to the Kerr BH. For brevity we have left out a thorough discussion of the Newman-Penrose (NP) formalism. For those unfamiliar, we highly recommend that the reader consult chapters 2, 6 and 7 of <cit.>. Newman-Penrose formalism is a tetrad formalism in which the basis vectors are selected so as to emphasize the lightcone structure of the spacetime.
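Returning briefly to the light-crossing computation above, the integral Δt = (π/2)L is easy to verify numerically; a sketch using scipy (the value of L is an arbitrary illustrative choice):

```python
import numpy as np
from scipy.integrate import quad

L = 2.5  # illustrative AdS radius of curvature

# Coordinate time for a radial null geodesic to reach the boundary:
# Delta_t = int_0^infinity dr / (1 + r^2/L^2) = (pi/2) * L.
dt, _ = quad(lambda r: 1.0 / (1.0 + r**2 / L**2), 0, np.inf)
print(np.isclose(dt, np.pi * L / 2))  # the boundary is reached in finite time
```

The same integral with a flat-space integrand (constant 1) diverges, which is the sense in which the AdS potential well acts as a confining box.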
We pick an "isotropic tetrad" {e_1,e_2,e_3,e_4}={l,n,m,m̅} with l, n real valued and m complex valued, such that the only non-zero inner products are l^μ n_μ =1=-m^μm̅_μ. Given a tensor T_ij we can project onto the tetrad frame and express the object in tetrad coordinates: T_ab = e_a^ie_b^jT_ij. We can pass freely in either direction, considering the problem in whichever frame provides the most simplification. The original metric of the spacetime can be recovered via g_μν = 2[l_(μn_ν) - m_(μm̅_ν)]. For electromagnetic and gravitational perturbations we can consider the relevant tensors F_μν and C_αβγδ in the tetrad frame. Because m, m̅ are complex valued we can actually express the 6 and 10 independent components of the above tensors as a set of 3 and 5 complex valued scalars, respectively. The information contained in the Maxwell tensor is encoded in the following: φ_0 = F_μνl^μm^ν,  φ_1 = 1/2F_μν(l^μn^ν - m^μm̅^ν),  φ_2 = F_μνm̅^μn^ν, and the Weyl tensor is distilled into the 5 complex scalars Ψ_0=-C_1313 = -C_αβγδ l^α m^β l^γ m^δ, Ψ_1=-C_1213, Ψ_2=-C_1342, Ψ_3=-C_1242, Ψ_4=-C_2424 = - C_αβγδn^αm̅^β n^γm̅^δ. Maxwell's equations manifest in the NP formalism as a system of equations involving the φ_i, the derivative operators given by the tetrad basis, e_i = e_i^μ∂_μ, and a set of 12 spin coefficients which are related to the structure constants of the tetrad basis under the bracket operation. The Weyl tensor shares the symmetries of the Riemann tensor and has the further restriction of tracelessness. This gives a similar set of equations, this time involving Ψ_i in place of φ_i. The form of these tetrads, corresponding to the Kerr geometry, was discovered by Kinnersley and is given by: l = ((r^2+a^2)/Δ, 1, 0, a/Δ), n=1/(2(r^2 +a^2cos^2θ))(r^2+a^2, -Δ, 0, a), m = 1/(√(2)(r+iacosθ))(iasinθ, 0, 1, i/sinθ). In the effort to obtain linearized perturbation equations, a natural first approach would be to start with the Einstein equation and let g_μν→ g_μν + h_μν for a metric perturbation h_μν.
Expanding the field equations to first order in h_μν yields a set of linear equations. In the setting of the Kerr geometry, however, this approach is complicated. The fewer symmetries, relative to say the Schwarzschild solution, mean that the resulting PDEs in r and θ are not separable. Fortunately the NP formalism provides a simpler alternative approach. It can be shown that when studying electromagnetic <cit.> and gravitational <cit.> perturbations of the Kerr geometry it suffices to consider the NP scalars {φ_0,φ_2} and {Ψ_0,Ψ_4} respectively. Further, it was shown by Teukolsky <cit.> that the linear perturbations of the Kerr BH could be described by a single master equation: [(r^2+a^2)^2/Δ-a^2sin^2θ] ∂_t^2ψ + 4Mar/Δ∂_t∂_ϕψ +[a^2/Δ-1/sin^2θ]∂_ϕ^2 ψ-Δ^-s∂_r[Δ^s+1∂_rψ] - 1/sinθ∂_θ[sinθ∂_θψ]-2s[a(r-M)/Δ+icosθ/sin^2 θ]∂_ϕψ -2s[M(r^2-a^2)/Δ -r-iacosθ]∂_t ψ +(s^2cot^2θ - s)ψ=0, with ψ and s related as follows: s=1 and s=-1 correspond to ψ=φ_0 and ψ=ρ^-2φ_2, while s=2 and s=-2 correspond to ψ=Ψ_0 and ψ=ρ^-4Ψ_4, where ρ = -1/(r-iacosθ). Further, by Fourier decomposing ψ with the form ψ = 1/2π∫ dω e^-iω te^imϕ S(θ)R(r), Teukolsky was able to separate eq <ref> into the following ODEs for R and S: Δ^-sd/dr(Δ^s+1dR/dr) + {[(r^2+a^2)^2ω^2-4aMmω r +a^2m^2 +2ia(r-M)ms -2iM(r^2-a^2)ω s]Δ^-1 + 2iω r s -λ}R = 0 and 1/sinθd/dθ(sinθdS/dθ) -(a^2ω^2sin^2θ + m^2/sin^2θ +2aω scosθ +2mscosθ/sin^2θ +  s^2cot^2θ -s)S + λ S=0. The separation constant λ is constrained when BCs are imposed, leading to a complex eigenvalue problem. §.§ Superradiance We will now outline the theory of superradiant scattering of test fields on a BH background. For concreteness and simplicity we will consider an asymptotically flat spacetime (so not AdS). It should be noted that fluctuations of order 𝒪(ϵ) in the fields induce a change in the geometry of order 𝒪(ϵ^2), so one is justified in fixing the BH geometry. Let us assume that our spacetime is stationary and axisymmetric, as in the case of the Kerr BH. As we have seen above, for such a spacetime various types of perturbations can be expressed in terms of a master variable Ψ.
It can be shown that Ψ obeys a Schrodinger-like equation d^2 Ψ/dr_∗ ^2 + V_effΨ = 0, with V_eff(r) dependent on the curvature of the background and the test field properties. We let r_∗ be a coordinate which maps (r_+,∞) →ℝ. Consider the scattering of a monochromatic wave of frequency ω with t   &  ϕ dependence given (because of the ∂_t   &  ∂_ϕ isometries) by e^-i(ω t - mϕ). Supposing V_eff is constant at the boundaries, the asymptotics of eq <ref> give Ψ∼𝒯e^-ik_+ r_∗ as   r → r_+ and Ψ∼ℛe^ik_∞ r_∗ + ℐe^-ik_∞ r_∗ as  r→∞, where k^2_+ = V_eff(r→ r_+) and k^2_∞ = V_eff (r→∞). The event horizon imposes the boundary condition of a one-way membrane. We have a wave incident from spatial infinity of amplitude ℐ which, upon reaching the boundary at r_+, gives rise to a transmitted wave of amplitude 𝒯 and a reflected wave of amplitude ℛ. Superradiance corresponds to the condition that |ℛ|^2 > |ℐ|^2. As a further simplification let us assume that V_eff is real. The symmetries of the field equations imply that there is another solution Ψ̅ satisfying the complex conjugate of the above BCs. Note Ψ and Ψ̅ are linearly independent, which implies that their Wronskian, W, does not depend on r_∗. Hence -2ik_+|𝒯|^2 = W(r_+) = W(∞) = 2ik_∞(|ℛ|^2 - |ℐ|^2), which gives |ℛ|^2 = |ℐ|^2 - k_+/k_∞|𝒯|^2. We see that superradiance occurs when k_+/k_∞<0. § SUPERRADIANCE AND THE KERR BH In this section we discuss superradiant scattering in the Minkowski background. As we have seen, superradiant scattering, like the Penrose Process, is a means of extracting energy from a Kerr BH. For an incident wave satisfying suitable conditions, reflection off of the event horizon occurs and the outgoing wave is more energetic (i.e. has greater amplitude) than the ingoing wave. It should be emphasized that there is no change in frequency involved in superradiance; it is not a Doppler phenomenon. Waves of arbitrary spin may be considered by introducing the appropriate field term in the Einstein-Hilbert field action.
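The Wronskian relation above can be illustrated in the simplest possible setting: a single sharp step in V_eff, with plane waves matched at the step. This toy model (the wavenumbers are arbitrary, chosen so that k_+/k_∞<0) reproduces |ℛ|^2 = |ℐ|^2 - (k_+/k_∞)|𝒯|^2 and exhibits |ℛ|>|ℐ|:

```python
import numpy as np

k_inf, k_plus = 1.0, -0.4  # toy wavenumbers with k_plus/k_inf < 0

# Matching Psi and Psi' at a single step located at x = 0:
#   Psi = R e^{i k_inf x} + I e^{-i k_inf x}   for x > 0  (asymptotic region)
#   Psi = T e^{-i k_plus x}                    for x < 0  (horizon side)
# Continuity gives T = R + I; the derivative gives -k_plus*T = k_inf*(R - I).
I = 1.0
T = 2 * k_inf / (k_inf + k_plus)
R = (k_inf - k_plus) / (k_inf + k_plus)

# Flux identity |R|^2 = |I|^2 - (k_plus/k_inf) |T|^2 from the Wronskian:
lhs, rhs = abs(R)**2, abs(I)**2 - (k_plus / k_inf) * abs(T)**2
print(np.isclose(lhs, rhs), abs(R) > abs(I))  # identity holds; superradiance
```

With k_plus and k_inf of the same sign the same matching gives |ℛ|<|ℐ|, i.e. ordinary partial absorption.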
For our purposes, as we are mainly reviewing the work of <cit.>, we will discuss waves of spin s=0, 1   &  2, i.e. scalar, electromagnetic and gravitational perturbations. We will discuss some of the conditions for superradiance to occur. We will also study the magnitude of the reflection and its dependence on the relevant quantities associated with the scattered wave. §.§ Perturbations of the Kerr Metric Consider for a moment an electromagnetic perturbation. The total energy flux per steradian at infinity is given by d^2E/dtdΩ = lim_r→∞ r^2T^r_t. Now the Maxwell stress tensor can be expressed in terms of the NP scalars as follows: 4π T_ij = φ_0φ̅_̅0̅n_in_j + φ_2φ̅_̅2̅l_il_j+2φ_1φ̅_̅1̅[l_(in_j)+m_(im̅_j)] -4φ̅_̅0̅φ_1n_(im_j) -4φ̅_̅1̅φ_2l_(im_j) + φ_2φ̅_̅0̅m_im_j + the complex conjugates of the preceding terms. Using the Kinnersley tetrad <ref> we have T^r_t = -1/4φ_0 φ̅_̅0̅ + φ_2 φ̅_̅2̅, with the first term corresponding to an ingoing wave and the second to an outgoing wave. Thus we interpret the terms as (d^2E/dtdΩ)_in = lim_r→∞r^2/8π|φ_0|^2,   (d^2E/dtdΩ)_out = lim_r→∞r^2/2π|φ_2|^2. In the case of gravitational perturbations we can get at the desired energy fluxes by a similar method, only with the use of the Landau-Lifshitz pseudotensor. The results obtained are (d^2E/dtdΩ)_in = lim_r→∞r^2/64πω^2|Ψ_0|^2,  (d^2E/dtdΩ)_out = lim_r→∞r^2/4πω^2|Ψ_4|^2. We need only consider the s=1, 2 scalars φ_0 and Ψ_0, as it turns out that the s=-1, -2 scalars follow from these. Recalling the discussion of the Teukolsky master equation, we use the ansatz φ_0, Ψ_0 = R(r)S(θ)e^imφ - i ω t with frequency ω and angular momentum z-component m. The S(θ) ODE <ref>, combined with the physically desirable BCs |S(0)|<∞ and |S(π)|<∞, yields an eigenvalue problem for λ=  _sλ_l^m(aω)=  _sλ_l^-m(-aω), where l is some whole number such that l≥max(|m|,|s|). For aω = 0 (either Schwarzschild or a wave of zero frequency) one has _sλ_l^m(0) = (l -s)(l+s+1) and the eigenfunctions are spin-weighted spherical harmonics.
For aω≠ 0,   λ is not analytically expressible as a function of l, m   &   aω. It remains to treat the R(r) ODE <ref>. If we introduce the coordinate y defined by dy/dr=(r^2+a^2)/Δ, y∈(-∞, ∞), the asymptotic solutions of eq <ref> are given by R(r→∞) ∼ℐe^-iω r/r + ℛe^iω r/r^2s+1 and R(r→ r_+) ∼ 𝒯e^-i(ω-mΩ)y/Δ^s, where Ω=a/(r_+^2 +a^2). Note that as r→∞, φ_0 →(ℐe^-iω r/r + ℛe^iω r/r^2s+1)S_in(θ)e^imϕ - iω t. §.§ Conditions for Superradiant Scattering We now present the frequency conditions necessary for superradiance arrived at in <cit.>. We also discuss the dependence of the strength of the reflection R on ω and the spin number s. Comparing equations <ref> and <ref> to the results of our discussion of superradiance in the introduction, we see that k_H = ω -mΩ. The frequency dependent condition for superradiant scattering of an m-mode wave is thus k_H =ω - mΩ<0. It follows that the condition is the same for all integral values of s and hence the same for scalar, vector and gravitational waves. The convention that ω>0 and the observation that the Schwarzschild solution corresponds to Ω = 0 imply that superradiant scattering does not occur for a Schwarzschild BH. As a check of physical plausibility it is a good idea to make sure the condition just stated adheres to the laws of BH thermodynamics. In particular, when superradiance occurs energy is extracted from the BH, causing M and a to decrease. At first glance this might seem ominous, since a decrease in mass tends to shrink the horizon, but the 2nd law of BH mechanics requires that S_hor increase with time. We will show that M and a decrease in such a way that S_hor actually increases under the process of superradiant scattering. Let I be the energy flux of an incident wave of frequency ω and multipole order m.
Then the energy flux of the reflected wave is RI and dM/dt = -(R-1)I,   dL/dt = -m/ω(R-1)I. These expressions make sense; RI - I is just Ė_̇ḟ - Ė_̇i̇, the energy flux gained by the wave, which is the negative of the energy flux lost by the BH (i.e. -dM/dt). Recall the discussion of the irreducible mass M_irr in section 1; specifically the relation S_hor = 4π(r_+^2+a^2)=16π M_irr^2. It also follows from the definition of M_irr that M^2 = M_irr^2 + L^2/4M_irr^2. By considering the total derivative of M(M_irr^2,L), making use of ∂ M/∂ L = L/(4MM_irr^2)= aM/(M(r_+^2 + a^2))= Ω and eq <ref>, it follows that dS_hor/dt = 16πd/dtM_irr^2 = 32πM/(1-a^2/r_+^2)(Ṁ - ΩL̇)= 32πMI/(1-a^2/r_+^2)(mΩ/ω-1)(R-1) ≥ 0. We see that the process is reversible, that is dS/dt = 0, only when a<M (which ensures r_±∈ℝ and the BH is not extremal) and ω = mΩ. We will show below that this implies R=1  (&  Ṁ = 0). So reversibility corresponds to a perfectly reflected wave. It is apparent that we may get as close to reversibility as we wish by choosing ω arbitrarily close to mΩ. We will now study the behavior of R in the neighborhoods of ω =0 and ω = mΩ. The amplification factor R can be determined numerically by integrating equations <ref> and <ref>. If we restrict attention to the low frequency realm the problem has also been solved analytically <cit.>. In what follows we simply give the analytically obtained expressions for R without derivation; see Appendix B of <cit.> if curious. For a wave of quantum numbers s, l, m we have reflection coefficient _sR_lm. It can be shown that R is of the form _sR_lm -1 = (_0R_lm-1)[(l-s)!(l+s)!/(l!)^2]^2, with the reflection for the scalar wave given by _0R_lm-1 = -8Mr_+(ω-mΩ)ω^2l+1(r_+-r_-)^2l [(l!)^2/((2l)!(2l+1)!!)]^2∏_k=1^l[1+M^2/k^2((ω-mΩ)/(π r_+ T_H))^2], where T_H = (r_+ -r_-)/(4π r^2_+) is the temperature of the BH. The above expressions are valid in the region a ≤ M, ω M ≪ 1 and for any spin s.
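The spin-dependent factor [(l-s)!(l+s)!/(l!)^2]^2 in the relation above is easy to evaluate; a quick check for the lowest electromagnetic and gravitational modes:

```python
from math import factorial

def spin_factor(s, l):
    """Ratio (R_slm - 1)/(R_0lm - 1) = [(l-s)! (l+s)! / (l!)^2]^2."""
    return (factorial(l - s) * factorial(l + s) / factorial(l)**2) ** 2

# Lowest allowed modes: l=1 for s=1 (electromagnetic), l=2 for s=2 (gravitational)
print(spin_factor(1, 1), spin_factor(2, 2))  # 4.0 36.0
```

Because the factor tends to 1 as l grows at fixed s, the enhancement is strongest at the lowest multipoles.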
Furthermore the expression is physically valid even when the superradiant condition ω<mΩ is not satisfied. In that setting eq <ref> describes the absorption cross section of a rotating BH. Note that _sR_lm -1>0 when ω<mΩ, for any s and l. For a given s we see that for l≫ s^2, _sR_lm≈   _0R_lm. Restricting our focus to the physically relevant cases s =1, 2 we see_1R_lm -1 = (_0R_lm-1)[(l+1)/l]^2and_2R_lm -1 = (_0R_lm-1)[(l+1)(l+2)/((l-1)l)]^2 so at most the electromagnetic and gravitational waves are amplified a factor of 4 and 36 times more than the corresponding {l,m} scalar wave, respectively. Letting ω→ 0, we need only keep the lowest order terms in ω. Hence we see that for m ≠ 0,  _sR_lm -1 ∼ω^2l+1. Now consider the ω→mΩ case and the quantity α = 1 - ω/mΩ. One can show that if a<M then _sR_lm -1 ∼α in the region |α|Q_1 ≪ 1, where Q_1 = am/(r_+-r_-). Further constraining a≪ M and m ≪M/a places us in the realm of small ω, allowing us to use eq <ref> to compute the coefficient of α.For the extremal Kerr BH, a=M, as ω→ mΩ we have two cases. Letδ^2 = 2m^2 - λ-(s+1/2)^2.If δ^2<0 then_sR_lm -1 = 4sgn(α) |δ|^2(2m^2|α|)^2|δ||Γ(1/2 + s + |δ|+im)|^2|Γ(1/2 - s + |δ|+im)|^2/Γ(1+2|δ|)^4e^π m[1-sgn(α)]which is continuous and varies monotonically in the vicinity of α = 0. For δ^2>0 on the other hand, in the region |α| ≪ m^-4max(|δ|^2, 1) we have: (_sR_lm -1)^-1 = sgn(α)e^-π m[1-sgn(α)]/sinh(2πδ)^2{cosh(π[m-δ])^2e^-πδ[1-sgn(α)] + cosh(π[m+δ])^2e^πδ[1-sgn(α)] -2cosh(π[m-δ])cosh(π[m+δ])cos[γ_0 -2δlog(2m^2|α|)] } where γ_0 is a function involving the argument of the Γ terms. See <cit.> for the exact form. Note that δ^2>0 is satisfied by the majority of modes. For example if s=1 it holds for all l=m≥1 and if s=2 it holds for all l=m≥2. In the vicinity of the onset of superradiant scattering, α =0, the reflection coefficient R has an infinite number of oscillations in the region |α|m^2≪1. Aside from the case when m=1 and πδ≤ 1, these oscillations have small amplitude and can be ignored.
In the case α>0 we have_sR_lm -1 ≈ e^2π(δ - m)and the amplification factor is discontinuous near the onset of superradiance. For α<0, min  _sR_lm =0, suggesting that the barrier can be totally transparent, as one would expect for the regime unable to superradiantly scatter. Switching our attention to the non-extremal Kerr BH, if a ≠ M but M-a ≪ M and m≪√(M/(M-a)) then R(δ,m,s,α) is described by equations <ref> or <ref>, depending on the sign of δ^2, in the region Q_1^-1≪ |α| ≪ m^-1. In the region |α|≪ Q_1^-1,    R-1 ∼α. Hence R is continuous at α =0 when a≠ M.Using our expressions for R, calculations of the magnitude of R yield R_em-1<0.1; in particular for a=M, ω = Ω-0 we get _1R_11 - 1 ≈ 0.02. For gravitational waves with a=M, ω=2Ω-0 we have _2R_22 -1 =1.37, so the energy of a reflected gravitational wave can be more than double that of the incident wave! In general, for fixed s and m →∞ the effect decreases as an mth-power exponential; when a=M, ω=mΩ -0 and m≫ s^2 we get_sR_mm-1 ≈ e^-mπ(2-√(3))§ KERR-ADS AND THE SUPERRADIANT INSTABILITYNow we shift our focus to studying gravitational perturbations in a Kerr-AdS background. Because of the box-like nature of AdS we will see that superradiance tends to lead to instabilities; a superradiantly reflected wave is free to bounce off the boundary and return to the ergoregion in finite time. We will see <cit.> that generic instabilities in Kerr-AdS are always superradiant in nature. Finally we will explore the possible evolution of the superradiantly perturbed Kerr-AdS BH.§.§ Kerr-AdSIn this section we give a brief overview of the properties of Kerr-AdS BHs. For purposes of studying instability it is useful to use a variation of Boyer-Lindquist coordinates, {T,r,θ,φ}, introduced by Chambers and Moss <cit.>, given by{t=Ξ T,r,χ = a cosθ, ϕ}where a is the rotation parameter of the solution, L is the radius of curvature of the AdS background and Ξ = 1-a^2/L^2.
In this coordinate system the Kerr-AdS metric is given by ds^2 = -Δ_r/((r^2+χ^2)Ξ^2)(dt-(a^2-χ^2)/a dϕ)^2 + Δ_χ/((r^2+χ^2)Ξ^2)(dt-(a^2+r^2)/a dϕ)^2+ (r^2+χ^2)/Δ_r dr^2 + (r^2+χ^2)/Δ_χ dχ^2whereΔ_r = (r^2+a^2)(1 + r^2/L^2) -2Mr,  Δ_χ= (a^2-χ^2)(1- χ^2/L^2) .In this frame the horizon angular velocity and temperature areΩ_H = a/(a^2+r_+^2),   T_H = 1/Ξ[r_+/2π(1+r_+^2/L^2)1/(r^2_+ +a^2)-1/4π r_+(1-r_+^2/L^2)].The Kerr-AdS BH asymptotically approaches global AdS with radius of curvature L. This is not obvious when one looks at eq <ref> because the coordinate frame {t,r,χ,ϕ} rotates at infinity with Ω_∞ = -a/(L^2 Ξ). If one introduces the coordinate changeT=t/Ξ,   Φ = ϕ + a/L^2 t/Ξ,   R = √(L^2(a^2+r^2)-(L^2+r^2)χ^2)/(L√(Ξ)),   cosΘ = Lr√(Ξ)χ/(a√(L^2(a^2+r^2)-(L^2+r^2)χ^2))and then considers the limit as r→∞ one getsds^2 = -(1+ R^2/L^2)dT^2 + dR^2/(1+ R^2/L^2) + R^2(dΘ^2 + sin^2(Θ)dΦ^2) = ds^2_AdSwhich we recognize from the section introducing AdS. Hence the conformal boundary of the bulk spacetime is the Einstein static universe ℝ× S^2: lim_R→∞L^2/R^2 ds^2_AdS = -dT^2 + dΘ^2 + sin^2Θ dΦ^2. The ADM mass and angular momentum of the BH are related to the parameters M and a by M_ADM = M/Ξ^2 and J_ADM = Ma/Ξ^2. We can express the angular velocity and temperature in the manifestly globally AdS coordinates in terms of those obtained in Chambers-Moss (CM) coordinates:T_h = Ξ T_H and Ω_h = ΞΩ_H + a/L^2As in the Kerr BH, the event horizon is located at the largest real root of Δ_r,  r=r_+, and is a Killing horizon generated by the KVF K=∂_T + Ω_h ∂_Φ. We can express the mass parameter in terms of a, r_+ and L as M =(r_+^2 +a^2)(r_+^2+L^2)/(2L^2 r_+).Any regular BH solution must obey T_H≥0 and a/L<1, which gives us restrictions on r_+/L and a/L:a/L≤ (r_+/L)√((L^2+3r_+^2)/(L^2-r_+^2)),   for  r_+/L < 1/√(3) a/L<1,  for  r_+/L≥1/√(3) In discussing superradiance it is useful to parametrize the BH by gauge-invariant variables associated with its onset: (R_+, Ω_h), where R_+ = √((r^2_+ + a^2)/Ξ).
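As a quick consistency check on the regularity bounds above (reading the first bound as a/L ≤ (r_+/L)√((L²+3r_+²)/(L²-r_+²)), an assumption about the flattened formula), the two branches should join at r_+/L = 1/√3:

```python
import math

def a_over_L_max(x):
    # Upper bound on a/L from T_H >= 0, with x = r_+/L < 1/sqrt(3)
    return x * math.sqrt((1 + 3 * x**2) / (1 - x**2))

x_c = 1 / math.sqrt(3)
assert abs(a_over_L_max(x_c) - 1.0) < 1e-12   # joins the a/L < 1 branch continuously
for x in (0.1, 0.3, 0.5):
    assert a_over_L_max(x) < 1                # T_H >= 0 is the tighter bound for small r_+
```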
The extremal curve, where T_H=0, is given by|Ω_h^extr| = 1/(LR_+)√((L^2+R^2_+)(L^2+3R_+^2)/(2L^2+3R^2_+)) .Note that R_+ is just the square root of the area of a spatial section of the event horizon divided by 4π. §.§ Teukolsky Master EquationIn general, the study of linearized gravitational perturbations of the Kerr BH involves solving a coupled system of linear PDEs, obtained from the linearized Einstein equation, for the metric perturbation. This is hard to do. Fortunately, as we have already discussed, in d=4 the approach of Teukolsky simplifies the problem immensely. By studying gauge-invariant scalar variables we can reduce the problem to solving a single PDE. Furthermore, by making use of harmonic decomposition we can use separation of variables to further reduce the problem to two ODEs. It should be noted that in the setting of an AdS background the curvature slightly alters the terms in the ODEs <ref> and <ref>. Further, in asymptotically AdS we use the Chambers-Moss null tetradl = 1/(√(2)√(r^2+χ^2))(Ξ(a^2+r^2)/√(Δ_r),√(Δ_r),0,aΞ/√(Δ_r)),n =1/(√(2)√(r^2+χ^2))(Ξ(a^2+r^2)/√(Δ_r),-√(Δ_r),0,aΞ/√(Δ_r))m=-i/(√(2)√(r^2+χ^2))(Ξ(a^2-χ^2)/√(Δ_χ),0, i√(Δ_χ),aΞ/√(Δ_χ))rather than the Kinnersley tetrad. Still, the information about gravitational perturbations with spin s= -2 is encoded in the perturbations of the Weyl scalar ψ_4 = C_abcd n^a m̄^b n^c m̄^d. The equation of motion for δψ_4 is given by the Teukolsky master equation <cit.>. We expect something of the form:δψ_4 = (r-iχ)^-2 e^-iω̂te^imϕR^(-2)_ω̂lm(r)S^(-2)_ω̂lm(χ)where S and R satisfy ∂_χ(Δ_χ∂_χ S^(-2)_ω̂lm) = -[-(K_χ +Δ^'_χ)^2/Δ_χ+(6χ^2/L^2 + 4K^'_χ + Δ^''_χ) + λ] S^(-2)_ω̂lmand ∂_r (Δ_r∂_r R^(-2)_ω̂lm) =- [(K_r -iΔ^'_r)^2/Δ_r+(6r^2/L^2 + 4iK^'_r + Δ^''_r) - λ]R^(-2)_ω̂lm whereK_r = Ξ[ma - ω̂(a^2+r^2)],   K_χ= Ξ[ma-ω̂(a^2-χ^2)]The eigenfunctions S_ω̂lm^(-2)(χ) are the spin-weighted s=-2 AdS spheroidal harmonics. The positive integer l specifies the mode along the polar direction; the number of zeros of S in that direction is l - max(|m|,|s|).
Note that for the eigenfunction of interest to us, S^(-2), l_min = |s| = 2 and -l≤ m ≤ l. These equations implicitly contain 5 parameters {a,r_+,ω̂, m,l}. Considering a particular Kerr-AdS BH amounts to fixing {a,r_+}. To study the physical problem of interest we need to solve equations <ref> and <ref>, but we also need to impose BCs to restrict the solutions to those which are physically meaningful. For example, at infinity we want the perturbations to preserve global AdS. At the horizon, it is not possible to have waves coming out from r<r_+, hence the BC is such that only ingoing modes are allowed. A Frobenius analysis at the horizon gives two independent solutions:R_ω̂lm^(-2)∼ A_in(r-r_+)^1-i(ω̂-mΩ_H)/4π T_H[1+𝒪(r-r_+)] + A_out(r-r_+)^-1+i(ω̂-mΩ_H)/4π T_H[1+𝒪(r-r_+)] .One can extend the solution through the horizon by introducing ingoing Eddington-Finkelstein coordinates {v,r,χ,φ} defined byt=v - Ξ∫(r^2+a^2)/Δ_r dr,  ϕ = φ - ∫aΞ/Δ_r dr  .Imposing the BC then amounts to requiring that the metric perturbation is regular in these ingoing EF coordinates. It follows <cit.> that this is the case iff R(r)|_H behaves as R(r)|_H ∼ R_IEF(r)|_H(r-r_+)^1-i(ω̂-mΩ_H)/4π T_H for a smooth function R_IEF(r)|_H. Thus the appropriate boundary condition yieldsR_ω̂lm^(-2)∼ A_in(r-r_+)^1-iω̅[1+𝒪(r-r_+)]where ω̅ = (ω̂-mΩ_H)/4π T_H; note the relevance of the sign of this quantity to superradiance. Shifting our attention to the boundary at infinity, a Frobenius analysis of the radial Teukolsky equation yieldsR_ω̂lm^(-2)|_r→∞ = B_+^(-2)L/r + B_-^(-2)L^2/r^2 + 𝒪(L^3/r^3)  .We are interested in perturbations which preserve the asymptotic global AdS background. As shown in <cit.> the following Robin BC ensures this preservation: B_-^(-2) = iβ B_+^(-2)where β has two possible values, β_s and β_v, for "scalar" and "vector" sector perturbations respectively. It should be emphasized that the terms scalar and vector do not refer to s=0, 1 perturbations.
In the limit a=0 one can map the solutions of the Kerr-AdS perturbations as given by the Teukolsky formalism to the perturbations of the AdS-Schwarzschild background. When this is done it turns out that the solutions with the β_s BC correspond to the scalar harmonics and those with β_v to the vector harmonics. As noted previously, the CM coordinates, {t,r,χ,ϕ}, rotate at infinity. The coordinates {T,R,Θ,Φ} are better suited to discussing the global AdS structure of the background at the boundary. Remember that the reason it is sufficient to solve equations <ref> and <ref> is that in the CM frame ∂_t and ∂_ϕ are isometries of the background geometry, so any linear perturbation can be Fourier decomposed in these directions as e^-iω̂te^imϕ. In the {T,R,Θ,Φ} frame one measures the frequency ω≡Ξω̂ + ma/L^2, with perturbation decomposition e^-iω Te^imΦ. The quantity ω can be viewed as the natural frequency, as it measures the frequency with respect to a frame which does not rotate at infinity. We will often refer to ω in plots because of its natural physical significance. In particular, note that we can express ω̅ in terms of ω by way ofω̅ = (ω̂ - mΩ_H)/(4π T_H) = (Ξω̂ + ma/L^2 - m(ΞΩ_H + a/L^2))/(4πΞ T_H) = (ω - mΩ_h)/(4π T_h)where only quantities measured in {T,R,Θ,Φ} appear in the final expression. §.§ QNMs and Superradiance in AdSRecall that for fixed {a,r_+} (or equivalently {R_+/L,Ω_hL}) and quantum numbers {l,m}, the equations <ref> and <ref> along with our BCs give us a complex eigenvalue problem for λ. The radial and angular ODEs are coupled through ω̂ and λ and cannot be solved analytically when M, a ≠ 0. In <cit.> numerical methods were used to find solutions of equations <ref> and <ref> subject to the scalar and vector BCs <ref>. As with the Minkowski background, there is a region where an approximate analytical solution can also be obtained for the frequency spectrum.
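The chain of equalities for ω̅ can be checked mechanically. The snippet below uses arbitrary (hypothetical) numbers for the CM-frame quantities and verifies that (ω̂ - mΩ_H)/(4πT_H) agrees with (ω - mΩ_h)/(4πT_h) under the maps ω = Ξω̂ + ma/L², Ω_h = ΞΩ_H + a/L², T_h = ΞT_H:

```python
import math

a, L, m = 0.4, 1.0, 2                      # hypothetical sample values
Xi = 1 - a**2 / L**2
w_hat, Omega_H, T_H = 0.37, 0.29, 0.11     # CM-frame frequency, angular velocity, temperature

w       = Xi * w_hat + m * a / L**2        # frequency in the frame that does not rotate at infinity
Omega_h = Xi * Omega_H + a / L**2
T_h     = Xi * T_H

lhs = (w_hat - m * Omega_H) / (4 * math.pi * T_H)
rhs = (w - m * Omega_h) / (4 * math.pi * T_h)
assert math.isclose(lhs, rhs)              # the same superradiance ratio in either frame
```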
This analytical treatment is valid when we have a small horizon radius and a still smaller rotation parameter,a/L≪ r_+/L≪ 1,and for perturbations with wavelength much bigger than the BH length scales, or equivalently in the low-frequency limit (which we recognize from the treatment of the Minkowski background case). We now discuss some of the numerical results obtained in <cit.>. Consider a Kerr-AdS BH lying somewhere in the phase diagram given by {R_+/L, Ω_h L}. The stability of a generic perturbation δψ_4 ∼ e^-iω T is clearly dictated by the sign of Im(ω). In the case Im(ω)<0 we have a decaying perturbation, i.e. a quasinormal mode or QNM. Unstable modes, on the other hand, grow exponentially and have Im(ω)>0. We also have the case in which the imaginary part vanishes, Im(ω) = 0. The amplitude of such a perturbation neither decays nor grows with time. For a given pair {l,m} in parameter space {R_+/L, Ω_h L} we can plot the onset curve (OC) of points for which the mode has Im(ω)=0. Essentially, for a given value of R_+ we find the value of Ω_h for which the eigenfrequency of equations <ref> and <ref> has Im(ω)=0. We trace out a curve in the phase space of Kerr-AdS BHs which admit an {l,m} mode with an amplitude constant in time. Note that this, a priori, tells us nothing about Re(ω).To better understand the nature of the unstable perturbations it is helpful to consider the real part of ω, or more specifically Re(ω) - mΩ_h. Recall that this quantity determines the sign of the energy flux through ℋ_+. In particular, superradiant modes have negative energy flux at the future horizon. Vanishing flux, perfect reflection in other words, occurs when Re(ω)=mΩ_h. In <cit.> the (numerically) obtained spectrum of ω revealed the following relationships between the real and imaginary parts of ω. It was found that perfect reflection occurred whenever Im(ω)=0 and that Re(ω)-mΩ_h <0 whenever Im(ω)>0. Hence unstable modes in Kerr-AdS are unstable precisely because of the superradiant instability.
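The role of the sign of Im(ω) can be seen directly from the mode's time dependence, since |e^{-iωT}| = e^{Im(ω)T}; a minimal illustration:

```python
import cmath

def amplitude(omega, T):
    # |e^{-i omega T}| for a complex eigenfrequency omega
    return abs(cmath.exp(-1j * omega * T))

T = 5.0
assert amplitude(1.0 - 0.2j, T) < 1              # Im(w) < 0: decaying QNM
assert amplitude(1.0 + 0.2j, T) > 1              # Im(w) > 0: exponentially growing (unstable) mode
assert abs(amplitude(1.0 + 0j, T) - 1) < 1e-12   # Im(w) = 0: marginal mode on the onset curve
```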
In fig <ref> we plot the onset curves of the first few l=m "scalar" modes in the phase space of {R_+/L,Ω_hL}. In the {R_+/L, Ω_hL} phase space each point in the blue shaded region, bounded above by the extremal curve (black), represents a Kerr-AdS BH which is stable when no perturbation is present. For a given l=m scalar perturbation, all BHs above the l=m onset curve are unstable to that particular type of perturbation. That is to say, if a point lies above an {l,m} onset curve then the {l,m} mode eigenfrequency for that BH will be such that Im(ω)>0 and Re(ω)<mΩ_h. On the other hand, points beneath the {l,m} onset curve are stable and such perturbations manifest as QNMs with Im(ω)<0 and Re(ω) >mΩ_h. As R_+/L→ 0 we consider the BH as it becomes very small, approaching the global AdS limit in which the BH disappears. In the global AdS limit the scalar mode frequencies and angular eigenvalues can be computed analytically. Hence by following the OCs back to R_+ = 0 and comparing we get a good check of the numerics used.Analytically the global AdS eigenfrequencies are given by:Lω_s^AdS = 1 +l + 2p,   λ = l(l+1)-2 . where p= 0, 1, 2,... is the number of radial nodes (the radial overtone). To obtain the values Ω_h|_R_+=0 we use the superradiant onset condition to find Ω_h|_R_+=0 = ω_s^AdS/m, and setting p=0 and l=m givesLΩ_h|_R_+ = 0 = 1 + 1/mAll the OCs plotted have zero radial overtone; for each pair {l,m} there is actually an OC for each value of p, but the p>0 curves always lie above the p=0 curve. Hence the p=0 modes are the first to become unstable as the rotation Ω_h is increased. Note that all of the onset curves monotonically approach LΩ_h=1 from above in the asymptotic limit R_+/L →∞, bunching up in the process. Hence only BHs with LΩ_h>1 can be superradiantly unstable, as was previously convincingly argued by Hawking and Reall <cit.>. A visually suggestive aspect of fig <ref> should also be addressed.
There is a value of R_+ where the l=2=m curve dips below all the others and remains below as R_+ →∞. This might seem to suggest that there exists a (non-compact) region in phase space which is superradiantly unstable to l=2=m but stable to all other superradiant modes. In reaching this conclusion we have jumped the gun, however. Each onset curve is monotone decreasing with respect to R_+/L, and LΩ_h|_R_+=0 decreases monotonically towards 1 with respect to m, as we previously noted. Thus for any point {LΩ_h^',R_+^'/L} on the l=2=m onset curve we can find an m such that for the curve l=m,    1<LΩ_h|_R_+=0<LΩ_h^', which clearly implies the BH associated with {LΩ_h^',R_+^'/L} is superradiantly unstable to perturbations l=m.In order to further explore the stability properties of these BHs it is illuminating to consider a specific scalar mode. In <cit.> the mode l=2=m was considered. We will follow their lead and discuss the results they obtained. In fig <ref> the l=2=m eigenfrequency ω = ω(r_+,a) is considered; Im(ω L) and Re(ω̅ L) are plotted with respect to r_+/L and a/L. Recall that Re(ω̅), and not Re(ω), is what is relevant to superradiance. In the figure the 2-d surface formed by the plot has been marked to indicate physically important regions: an auxiliary plane marks Re(ω̅) = 0 and Im(ω)=0. The black curves indicate paths of constant r_+. The onset curve is shown in blue. In the Im(ω) plot the points on the surface above the auxiliary plane correspond to superradiantly unstable modes, and in the Re(ω̅) plot the superradiant modes are those below the plane.The red dot in fig <ref>, representing the BH most susceptible to l=2=m mode perturbations, corresponds to the point {R_+/L, Ω_hL}∼{0.914,1.295} in fig <ref>. Note that this occurs close to extremality but not at it.
In fact, moving from the onset of instability the growth rate increases until achieving a maximum near extremality, and then decreases as the T_H = 0 Kerr-AdS BH is approached.We can now consider gravitational vector modes obeying the BC with β = β_v. Consider fig <ref>. As with the scalar modes, the values of the onset curves at R_+/L=0 describe the vector normal modes of the global AdS limit:Lω^AdS_v = 2+l+2p,  λ = l(l+1)-2 .In conjunction with the superradiant onset condition Ω_h|_R_+=0 = ω_v^AdS/m, with p=0 and l=m, we obtainLΩ_h|_R_+=0 =1+2/m .As in the scalar case, the vector onset curves are bounded below by the condition Ω_hL>1. Unlike the scalar case, though, the vector OCs never cross one another and asymptotically approach the extremal curve. If a BH is unstable to the l=2=m modes it is necessarily unstable to all l=m≥ 3 modes. The value of R_+/L at which an OC hits extremality increases monotonically with l=m; the curves all then approach the line Ω_h L =1. As m→∞ extremality is obtained only as R_+/L →∞.As was done in the scalar case, we consider the specific case l=2=m for vector modes. Again an auxiliary plane divides the surface into the stable (Im(ω)<0 and Re(ω)>mΩ_h) and unstable (Im(ω)>0 and Re(ω)<mΩ_h) BHs. The red dot in fig <ref>, representing the BH most susceptible to l=2=m mode perturbations, corresponds to the point {R_+/L, Ω_hL}∼{0.539,1.687} in fig <ref>. Note that, moving along a curve of constant r_+/L, the maximum of the vector superradiant instability is achieved much closer to the extremal curve than in the scalar case. The instability growth rate of scalar and vector modes is of the same order, with the vector rate being approximately twice as large as the scalar rate. Comparing the most unstable case in the vector and scalar modes, we see that the BH corresponding to maximal vector instability is smaller (in terms of R_+/L) but rotates faster than the corresponding scalar BH.
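The R_+ → 0 intercepts of the scalar and vector onset curves follow from the onset condition Ω_h = ω^AdS/m with p = 0 and l = m; a few lines make the comparison explicit:

```python
def L_Omega_onset(m, kind):
    # R_+ -> 0 intercept of the p = 0, l = m onset curve
    L_omega = (1 if kind == "scalar" else 2) + m   # L*omega_AdS for p = 0, l = m
    return L_omega / m

assert L_Omega_onset(2, "scalar") == 1.5           # 1 + 1/m
assert L_Omega_onset(2, "vector") == 2.0           # 1 + 2/m
# Vector intercepts sit above the scalar ones for every m ...
assert all(L_Omega_onset(m, "vector") > L_Omega_onset(m, "scalar") for m in range(2, 30))
# ... and both families approach the bound L*Omega_h = 1 from above as m grows
assert all(1 < L_Omega_onset(m, "scalar") < L_Omega_onset(2, "scalar") for m in range(3, 30))
```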
§.§ Bifurcation at the onset of SuperradianceWe have seen that the Kerr-AdS BH is not stable to superradiance. For waves with the appropriate ω_R and ω_I conditions, amplification occurs until Ω_h is sufficiently reduced, by which time the mode has accumulated enough energy to backreact on the Kerr-AdS background. A natural question to ask then is: what does this instability evolve into?As we have already seen, by appealing to monotonicity, any Kerr-AdS BH with Ω_hL>1 is superradiantly unstable to some gravitational perturbation mode {l,m}. For a given m, at the onset of superradiance we have an exact zero mode with Im(ω) = 0 and Re(ω) = mΩ_h: since the perturbation is proportional to e^-iω T+imΦ, the amplitude of the zero mode is constant with respect to T. These perturbation modes are of the forme^-iω T+imΦ= e^-imΩ_hT + imΦand so have the special property of being invariant under the horizon-generating KVF, K=∂_T + Ω_h∂_Φ. For a given m it was suggested <cit.> that the OC of the instability should mark the merger of the Kerr-AdS BH with a new family of BH solutions, stable to m superradiant modes and invariant under a single KVF, K=∂_T + Ω_h ∂_Φ. This has been explored for the case of scalar field perturbations. BHs with a similar helical KVF that merge with the Kerr-AdS family were found to have scalar hair orbiting the central core. For those unfamiliar, "hairy black hole" is a blanket term referring to a BH with a characterizing parameter other than {a,M,μ}, i.e. a BH which does not belong to the Kerr family. With the scalar field case as motivation we expect that an analogous family of single-KVF BHs with "lumpy gravitational hair" merges with Kerr-AdS at the OC of gravitational superradiance. In <cit.> these single-KVF BHs were constructed numerically; these "black resonators" are periodic and single out a particular frequency.
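The invariance of the zero mode under the horizon-generating KVF can be made explicit: dragging the mode along the flow of K = ∂_T + Ω_h∂_Φ leaves it unchanged exactly when ω = mΩ_h. A small check with hypothetical numbers:

```python
import cmath

m, Omega_h = 2, 1.3                         # hypothetical mode number and angular velocity
T, Phi, lam = 0.7, 0.4, 2.5                 # base point and flow parameter

def mode(w, T, Phi):
    return cmath.exp(-1j * w * T + 1j * m * Phi)

# Flow of K = d_T + Omega_h d_Phi: (T, Phi) -> (T + lam, Phi + Omega_h * lam)
w_zero = m * Omega_h                        # the zero mode, Re(w) = m * Omega_h
assert cmath.isclose(mode(w_zero, T + lam, Phi + Omega_h * lam), mode(w_zero, T, Phi))

w_generic = 0.9 * m * Omega_h               # any other frequency picks up a phase along the flow
assert not cmath.isclose(mode(w_generic, T + lam, Phi + Omega_h * lam), mode(w_generic, T, Phi))
```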
Rather than discuss this numerical construction, we will present an explicit construction carried out in <cit.> which approximates the black resonator and gives its leading-order thermodynamic properties. This approximation is illuminating as it gives a heuristic insight into the stability properties of these black resonators. An important takeaway from this discussion is that Kerr-AdS BHs are not the only stationary BHs in Einstein-AdS gravity. Recall the fundamental behavior of superradiance in global AdS. The amplitude of a mode e^-iω T+imΦ can be increased by scattering off a rotating BH with angular velocity Ω_h when ω<mΩ_h. In the setting of asymptotically global AdS the boundary allows reflection, and so the process of energy extraction repeats, leading to an instability. The modes extract enough energy to backreact. The energy gained by the modes has the effect of decreasing Ω_h, eventually leading to a BH with "lumpy hair" rotating around it. Heuristically speaking, lumpy, possibly inhomogeneous clumps of energy co-rotating with the BH would tend to destroy any axisymmetry formerly present in the system; as you move in the Φ direction you may encounter a more concentrated region of energy. Furthermore, the same can be said of time symmetry. Thus we do not expect ∂_T or ∂_Φ to be KVFs of the system. However, if we simply follow the same clump of hair around we would not expect the metric to vary; i.e. the co-rotating vector field ∂_T + Ω_h ∂_Φ will be a KVF. We see that the instability naturally leads to a BH with a single periodic KVF. An object fundamental to the evolution of the superradiant instability is the geon. A geon is a lump of light or energy which is dense enough to be gravitationally bound. Geons can be thought of as nonlinear normal modes of AdS and are solutions that contain only a single Killing field. Any gravitational radiation emitted by the geon is balanced by absorption of waves reflected from the AdS boundary.
They are of interest to us because in <cit.> the black resonator was approximated by placing a small Kerr BH "on top" of a geon. We will review this construction. We first introduce and review some properties of geons and Kerr-AdS: Geons have harmonic time dependence e^-iω T + i m Φ in which the centrifugal force balances gravitational attraction. They are horizon-free, nonsingular and asymptotically globally AdS. Geons are specified by l and m: the number of zeros of the solution along the polar direction and the azimuthal quantum number. They are a 1-parameter family of solutions parameterized by the frequency. At linear order, a geon is a small perturbation around a global AdS background. The energy and angular momentum of the geon are related by E_g = (ω/m)J_g + 𝒪(J^2_g); they have zero entropy and an undefined temperature. Note that the first law of thermodynamics is obeyed: dE_g = (ω/m) dJ_g. For a Kerr-AdS BH with small E and J the leading and next-to-leading order thermodynamic quantities are given by:E_K ≊ (r_+/2)(1+(r_+^2/L^2)(1+Ω_h^2L^2) ) + 𝒪(r_+^4/L^4),   J_K ≊1/2 r_+^3Ω_h + 𝒪(r_+^4/L^4)S ≊π r_+^2(1+Ω_h^2r_+^2) + 𝒪(r_+^5/L^5),   T_h ≊ (1/4π r_+)(1 +(3-2Ω_h^2L^2)r_+^2/L^2) +𝒪(r_+^2/L^2)Note that this also obeys the first law of thermodynamics up to next-to-leading order: dE_K = Ω_h dJ_K + T_h dS.The general idea of the construction is that to leading order the two objects do not interact. The single KVF is inherited from the geon and the charges E, J of the system are given by E=E_g+E_K and J=J_g+J_K. The entropy and temperature of the final BH are clearly controlled by the Kerr-AdS component, as the geon has zero entropy and undefined temperature. The single KVF chooses the partition of charges between the geon and Kerr-AdS components so as to extremize the total entropy of the system.
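The claimed next-to-leading-order first law for the small Kerr-AdS component can be checked by finite differences. The parenthesization of the expansions below is an assumption about the flattened formulas above; with that reading, the check passes to the stated accuracy:

```python
import math

L, Om = 1.0, 0.5       # AdS radius and horizon angular velocity (hypothetical values)

# Small-r_+ expansions of the Kerr-AdS thermodynamic quantities quoted above
E = lambda r: (r / 2) * (1 + (r**2 / L**2) * (1 + Om**2 * L**2))
J = lambda r: 0.5 * r**3 * Om
S = lambda r: math.pi * r**2 * (1 + Om**2 * r**2)
T = lambda r: (1 / (4 * math.pi * r)) * (1 + (3 - 2 * Om**2 * L**2) * r**2 / L**2)

# First law along r_+: dE = Omega_h dJ + T_h dS, valid up to O(r_+^4 / L^4) corrections
r, h = 1e-2, 1e-7
dE = (E(r + h) - E(r - h)) / (2 * h)
dJ = (J(r + h) - J(r - h)) / (2 * h)
dS = (S(r + h) - S(r - h)) / (2 * h)
assert math.isclose(dE, Om * dJ + T(r) * dS, rel_tol=1e-6)
```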
In particular, maximizing S=S_K(E-E_g,J-J_g) with respect to J_g and considering the first laws for the geon and Kerr-AdS BH shows that the partition is such that the angular velocities of the two components are the same: Ω_h=ω/m. Hence the two components are in thermodynamic equilibrium. To see this one can also make the following heuristic argument: since the geon has only one KVF, given by K=∂_T + (ω/m)∂_Φ, and the Kerr-AdS BH is placed at its center, the geon KVF must coincide with the horizon generator of the BH, given by K=∂_T+Ω_h∂_Φ.The various restrictions on the system yield the following distribution of charges amongst the components: {J_g,E_g}={J,ω/mJ},  {J_K,E_K}={0,E-ω/mJ}, S=4π(E-ω/mJ)^2,   T_h=(1/8π)(E-ω/mJ)^-1We see that at leading order the rotation of the system is carried by the geon and the entropy is stored in the Kerr-AdS BH component. These relations obey the first law, dE=T_hdS+Ω_hdJ, up to order 𝒪(M,J), with Ω_h = ω/m, where ω is given by eq <ref> for scalar gravitational perturbations and eq <ref> for vector gravitational perturbations. The single-KVF BH merges with the Kerr-AdS family at an m-mode onset curve. This occurs when the superradiant condition, ω<mΩ_h, is saturated. Here {ω,m} are the frequency and azimuthal number of the linearized geon component of the single-KVF BH. At the superradiant merger the Kerr-AdS BH and single-KVF BH thermodynamics coincide, so the Kerr-AdS BH thermodynamics (with Ω_h=ω/m) can be used to determine the charges of the final system:E|_merg≊r_+/2 + r_+^3/2L^2(1+ω^2L^2/m^2),   J|_merg≊1/2(ω/m)r_+^3 In fig <ref> we plot a phase diagram {E,J} for the l=2=m perturbation mode, with the above curve {E_merg,J_merg} determining the upper bound of the region where single-KVF BHs exist. In the light gray region we have stable Kerr-AdS BHs and hence no single-KVF BHs. The blue and black curves signify the onset of instability and extremality, respectively, for the Kerr-AdS BH.
Hence in the light blue region we have only single-KVF BHs. The dashed curve represents the scalar l=2=m geon, described by E=(ω/m)J with Lω = Lω_s =1+l. We see that the black resonators bridge the gap between the onset of superradiance and the geons. In the middle dark gray/blue region we have both Kerr-AdS BHs and single-KVF BHs; i.e. there exist BH pairings (a Kerr-AdS BH and a hairy BH) with the same masses and angular momenta but different entropies. Here we have only considered the l=2=m mode, but similar behavior is expected for higher {l,m}, so we actually have a countably infinite number of examples of non-uniqueness for rotating BHs in AdS! In <cit.> numerics were used to compare the entropies of a Kerr-AdS BH and the corresponding black resonator with the same asymptotic charges E and J. We see from fig <ref> that of the two, the black resonator is the more entropically favorable.§.§ Endpoint of the Superradiant InstabilityThe superradiant instability of Kerr-AdS naturally motivates the question of what the end state of the perturbed Kerr BH in AdS is. We have seen that at the onset of superradiance the Kerr-AdS BH merges with a family of single-KVF BHs, of which the so-called black resonator is one type. These single-KVF BHs are certainly a possible intermediate state in the evolution, but could they represent the end state? The answer to this question is no. We have seen that any stable candidate for the endpoint must satisfy Ω_h L≤ 1, but it was found numerically that Ω_h L >1 for the black resonators constructed in <cit.>. It is not difficult to see that a single-KVF BH associated with a given mode m can only be metastable. Note that while the single-KVF BH is stable to a particular mode m, it is not stable to other superradiant modes m^'>m which are excited in the time evolution because of the nonlinearities in the Einstein equation. To convince yourself of this, recall that the black resonator is approximated by a Kerr BH placed inside of a geon.
This Kerr BH is the problem, because while it will be stable to m, as we have seen it will not generally be stable to m^'. So what is the end state then? At this point we simply do not know. As just mentioned, typically a BH which is stable to perturbation modes m is unstable to modes m^' >m, so one logically permissible possibility is that the system just continues to evolve to black resonators of higher and higher order m. This idea was explored in <cit.> and it was shown that the m →∞ limit of the black resonator is not a possible end state. Such a solution saturates the bound E ≥J/L, required of generic asymptotically AdS solutions with energy-momentum tensor adhering to the dominant energy condition. For details see <cit.>, but it can be shown that this bound is saturated iff the solution is supersymmetric (i.e. admits a Killing spinor). The authors were then able to prove that the only supersymmetric vacuum solution which is asymptotically AdS is global AdS itself. It seems, at present, there are no viable candidates for the end point of the superradiant instability. This leaves two possible outcomes: a singular solution is settled upon in finite time, or the system never settles down to a solution. The former violates cosmic censorship as it admits a naked singularity. For the latter, the development of smaller and smaller structure is driven by the entropically favorable evolution through higher and higher order m black resonators. The ever-decreasing scale means that at some point quantum gravitational effects need to be considered. This may be interpreted as being at odds with cosmic censorship, at least in spirit, because initial data which is well-described classically evolves to a system requiring a quantum mechanical description. Santos V. Cardoso, O. J. C. Dias, G. S. Hartnett, L. Lehner, and J. E. Santos, "Holographic thermalization, quasinormal modes and superradiance in Kerr-AdS", JHEP 1404 (2014) 183. russ A. A. Starobinsky and S. M.
Churilov, "Amplification of electromagnetic and gravitational waves scattered by a rotating black hole", Zh. Eksp. Teor. Fiz. 65 (1973) 3 (Sov. Phys. JETP 38, 1, 1973). Kerr R. P. Kerr, "Gravitational field of a spinning mass as an example of algebraically special metrics", Phys. Rev. Lett. 11 (1963) 237-238. chandra S. Chandrasekhar, The Mathematical Theory of Black Holes, Oxford University Press, 1983. supover R. Brito, V. Cardoso and P. Pani, "Superradiance", arXiv:1501.06570v3 [gr-qc], 4 Sep 2015. coscen B. E. Niehoff, J. E. Santos and B. Way, "Towards a Violation of Cosmic Censorship", arXiv:1510.00709v1 [hep-th], 2 Oct 2015. blackres O. J. C. Dias, J. E. Santos and B. Way, "Black holes with a single Killing vector field: black resonators", arXiv:1505.04793v1 [hep-th], 18 May 2015. cham C. M. Chambers and I. G. Moss, "Stability of the Cauchy horizon in Kerr de-Sitter spacetimes", Class. Quant. Grav. 11 (1994) 1035 [gr-qc/9404015]. hawk S. W. Hawking and H. S. Reall, "Charged and rotating AdS black holes and their CFT duals", Phys. Rev. D 61 (2000) 024014 [arXiv:hep-th/9908109]. Teuk S. A. Teukolsky, "Perturbations of a rotating black hole. 1. Fundamental equations for gravitational electromagnetic and neutrino field perturbations", Astrophys. J. 185, 635 (1973). ipser E. D. Fackerell and J. Ipser, Phys. Rev. D 5 (1972) 2455. wald R. Wald, "On perturbations of a Kerr black hole", J. Math. Phys. 14 (1973) 1453. BC O. J. C. Dias and J. E. Santos, "Boundary Conditions for Kerr-AdS Perturbations", JHEP 1301 (2013) 156 [arXiv:1302.1580]. Reall H. K. Kunduri, J. Lucietti and H. S. Reall, "Gravitational perturbations of higher dimensional rotating black holes: Tensor Perturbations", Phys. Rev. D 74 (2006) 084021.
The Deep Poincaré Map: A Novel Approach for Left Ventricle Segmentation

Yuanhan Mo^1, Fangde Liu^1, Douglas McIlwraith^1, Guang Yang^2, Jingqing Zhang^1, Taigang He^3, Yike Guo^1

December 30, 2023
================================================================================================

This paper considers a distributed multi-agent optimization problem, with the global objective consisting of the sum of local objective functions of the agents. The agents solve the optimization problem using local computation and communication between adjacent agents in the network. We present two randomized iterative algorithms for distributed optimization. To improve privacy, our algorithms add "structured" randomization to the information exchanged between the agents. We prove deterministic correctness (in every execution) of the proposed algorithms despite the information being perturbed by noise with non-zero mean. We prove that a special case of a proposed algorithm (called function sharing) preserves privacy of individual polynomial objective functions under a suitable connectivity condition on the network topology.

§ INTRODUCTION

Distributed optimization has received a lot of attention in the past couple of decades. It involves a system of networked agents that optimize a global objective function f(x) ≜∑_i f_i(x), where f_i(x) is the local objective function of agent i. Each agent is initially only aware of its own local objective function. The agents solve the global optimization problem in an iterative manner. Each agent maintains "a state estimate", which it shares with its neighbors in each iteration, and then updates its state estimate using the information received from the neighbors. A distributed optimization algorithm must ensure that the state estimates maintained by the agents converge to an optimum of the global cost function.
Emergence of networked systems has led to the application of the distributed optimization framework in several interesting contexts, such as machine learning, resource allocation and scheduling, and robotics <cit.>. In a distributed machine learning scenario, partitions of the dataset are stored among several different agents (such as servers or mobile devices <cit.>), and these agents solve a distributed optimization problem in order to collaboratively learn the most appropriate "model parameters". In this case, f_i(x) at agent i may be a loss function computed over the dataset stored at agent i, for a given choice x of the model parameters (i.e., here x denotes a vector of model parameters). Distributed optimization can reduce the communication requirements of learning, since the agents communicate information that is often much smaller in size than each agent's local dataset that characterizes its local objective function. The scalability of distributed optimization algorithms, and their applicability to geo-distributed datasets, have made them a desirable choice for distributed learning <cit.>. Distributed optimization algorithms rely on the exchange of information between agents, making them vulnerable to privacy violations. In particular, in the case of distributed learning, the local objective function of each agent is derived from a local dataset known only to that agent. Through the information exchanged between agents, information about an agent's local dataset may become known to other agents. Therefore, privacy concerns have emerged as a critical challenge in distributed optimization <cit.>. In this paper we present two algorithms that use "structured randomization" of the state estimates shared between agents. In particular, our structured randomization approach obfuscates the state estimates by adding correlated random noise. Introduction of random noise into the state estimates allows the agents to improve privacy.
Correlation (as elaborated later) helps to ensure that our algorithms asymptotically converge to a true optimum, despite perturbation of the state estimates with non-zero mean noise. We also prove strong privacy guarantees for a special case of our algorithm for a distributed polynomial optimization problem. The contributions of this paper are as follows:

* We present Randomized State Sharing (RSS) algorithms for distributed optimization that use structured randomization. Our first algorithm, named RSS-NB, introduces noise that is Network Balanced (NB), as elaborated later, and the second algorithm, RSS-LB, introduces Locally Balanced (LB) noise. We prove deterministic convergence (in every execution) to an optimum, despite the use of randomization.
* We consider a special case of RSS-NB (called "Function Sharing" or FS), where the random perturbations added to local iterates are state-dependent. State-dependent random perturbations simulate the obfuscation of the objective function using a noise function. We argue that the FS algorithm achieves a strong notion of privacy.
* We use the RSS-NB and RSS-LB algorithms to train a deep neural network for digit recognition using the MNIST dataset, and to train a logistic regression model for document classification of the Reuters dataset. The experiments validate our theoretical results, and we show that we can obtain high-accuracy models despite introducing randomization to improve privacy.

Related Work: Many distributed optimization algorithms have appeared in the literature over the past decade, including Sub-gradient Descent <cit.>, Dual Averaging <cit.>, Incremental Algorithms <cit.>, Accelerated Gradient <cit.>, ADMM <cit.> and EXTRA <cit.>.
Solutions to distributed optimization of convex functions have been proposed for myriad scenarios involving directed graphs <cit.>, communication link failures and losses <cit.>, asynchronous communication models <cit.>, and stochastic objective functions <cit.>. Privacy-preserving methods for optimization and learning can be broadly classified into cryptographic and non-cryptographic approaches <cit.>. Cryptography-based privacy-preserving optimization algorithms <cit.> tend to be computationally expensive. Non-cryptographic approaches have gained popularity in recent years. ϵ-differential privacy is a probabilistic technique that involves the use of randomized perturbations <cit.> to minimize the probability of uncovering specific records from databases. Differential privacy methods, however, suffer from a fundamental trade-off between the accuracy of the solution and the privacy margin (parameter ϵ) <cit.>. Transformation is another non-cryptographic technique that involves converting a given optimization problem into a new problem via algebraic transformations such that the solution of the new problem is the same as the solution of the old problem <cit.>. This enables agents to conceal private data effectively while the quality of the solution is preserved. Transformation approaches in the literature, however, cater only to a relatively small set of problem classes.

§ NOTATION AND PROBLEM FORMULATION

We consider a synchronous system consisting of n agents connected using a network of undirected (i.e., bidirectional) communication links. The communication links are always reliable. The set of agents is denoted by 𝒱; thus, |𝒱| = n. Although all the links are undirected, for convenience, we represent each undirected link using a pair of directed edges. Define ℰ as the set of directed edges corresponding to the communication links in the network: ℰ = {(u,v) : u,v ∈𝒱 and u communicates with v}. Thus, the communication network is represented using a graph 𝒢 = (𝒱, ℰ).
The neighbor set of agent v is defined as the set of agents that are connected to agent v. By convention, N_v includes v itself, i.e., N_v = {u | (u,v) ∈ℰ}∪{v}. We assume that the communication graph 𝒢 is strongly connected. We impose an additional connectivity constraint later when analyzing privacy in Section <ref>.

The focus of this paper is on iterative algorithms for distributed optimization. Each agent maintains a state estimate, which is updated in each iteration of the algorithm. The state estimate at agent i at the start of iteration k is denoted by x^i_k. We assume that the argument x of f_i(x) is constrained to be in a feasible set 𝒳⊂ℝ^D. The state estimate of each agent is initialized to an arbitrary vector in 𝒳. For z∈ℝ^D, we define the projection operator P_𝒳 as

P_𝒳(z) = argmin_y∈𝒳 ‖z - y‖.

Problem <ref> below formally defines the goal of distributed optimization.

Given local objective function f_i(x) at each agent i∈𝒱, and feasible set 𝒳⊂ℝ^D (i.e., the set of feasible x), design a distributed algorithm such that, for some

x^* ∈ argmin_x ∈𝒳 ∑_i=1^n f_i(x),

we have

lim_k→∞ x^i_k = x^*,     ∀ i∈𝒱.

Let f^* denote the optimal value of f(x), i.e., f^* = inf_x ∈𝒳 f(x). Let 𝒳^* denote the set of all optima of f(x), i.e., 𝒳^* = {x | x ∈𝒳, f(x) = f^*}. Let ‖·‖ denote the Euclidean norm. For any matrix A, ‖A‖_2 = √(λ_max(A^† A)), where A^† denotes the conjugate transpose of matrix A, and λ_max is the maximum eigenvalue. We make the following assumptions.

[Objective Function and Feasible Set]
* The feasible set 𝒳 is a non-empty, convex, and compact subset of ℝ^D.
* The objective function f_i : 𝒳→ℝ, ∀ i∈𝒱, is a convex function. Thus, f(x) := ∑_i=1^n f_i(x) is also a convex function.
* The set of optima 𝒳^* is non-empty and bounded.

[Gradient Bound and Lipschitzness]
* The gradients are norm-bounded, i.e., ∃ L > 0 such that ‖∇ f_i(x)‖ ≤ L, ∀ x ∈𝒳 and ∀ i∈𝒱.
* The gradients are Lipschitz continuous, i.e., ∃ N > 0 such that ‖∇ f_i(x) - ∇ f_i(y)‖ ≤ N‖x - y‖, ∀ x,y ∈𝒳 and ∀ i∈𝒱.
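For simple feasible sets the projection operator P_𝒳 has a closed form. The sketch below is our illustration, not part of the paper's algorithms: it implements Euclidean projection onto a norm ball, one common choice of compact convex 𝒳.

```python
import numpy as np

def project_ball(z, radius=1.0):
    """Euclidean projection P_X(z) onto the ball X = {x : ||x|| <= radius}.

    argmin_{y in X} ||z - y|| is z itself when z is already feasible,
    and the radially rescaled boundary point otherwise.
    """
    norm = np.linalg.norm(z)
    if norm <= radius:
        return z
    return (radius / norm) * z

# A feasible point is left unchanged; an infeasible one lands on the boundary.
inside = project_ball(np.array([0.3, 0.4]))    # unchanged
outside = project_ball(np.array([3.0, 4.0]))   # rescaled to [0.6, 0.8]
```

The same operator is what the projected gradient step of the algorithms below applies after each descent update.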
§ DISTRIBUTED ALGORITHMS

This section first presents an iterative Distributed Gradient Descent (DGD) algorithm from prior literature <cit.>. Later we modify DGD to improve privacy. In particular, we present two algorithms based on Randomized State Sharing (RSS).

§.§ DGD Algorithm <cit.>

Iterative distributed algorithms such as Distributed Gradient Descent (DGD) use a combination of consensus dynamics and local gradient descent to distributedly find a minimizer of f(x). More precisely, in each iteration, each agent receives state estimates from its neighbors and performs a consensus step followed by descent along the direction of the gradient of its local objective function. The pseudo-code for the DGD algorithm is presented below as Algorithm <ref>. The algorithm presents the steps performed by any agent j∈𝒱. The different agents perform their steps in parallel. Lines 4-5 are intentionally left blank in Algorithm <ref>, to facilitate comparison with other algorithms presented later in the paper.

As shown on Lines 6 and 7 of Algorithm <ref>, in the k-th iteration, each agent j first sends its current estimate x_k^j to its neighbors, and then receives the estimates from all its neighbors. Using these estimates, as shown on Line 8, each agent performs a consensus step (also called information fusion), which involves computing a convex combination of the state estimates. The resulting convex combination is named v_k^j. Matrix B_k used in this step is a doubly stochastic matrix <cit.>, which can be constructed by the agents using previously proposed techniques,[B_k has the property that entries B_k[i,j] and B_k[j,i] are non-zero if and only if i∈N_j. Recall that the underlying network is assumed to consist of bidirectional links. Therefore, i∈N_j implies j∈N_i.] such as Metropolis weights <cit.>.
The Metropolis weights are:

B_k[i,j] =
  1/(1+max(|N_i|,|N_j|))   if j ∈N_i and j ≠ i,
  1 - ∑_l ≠ i B_k[i,l]     if i = j,
  0                         otherwise.

Agent j performs the projected gradient descent step (Line 9, Algorithm <ref>) involving descent from v_k^j along the local objective function's gradient ∇ f_j(v^j_k), followed by projection onto the feasible set 𝒳. This step yields the new state estimate at agent j, namely x_k+1^j. The quantity α_k used on Line 9 is called the step size. The sequence α_k, k≥ 1, is a non-increasing sequence such that ∑_k=1^∞α_k=∞ and ∑_k=1^∞α_k^2<∞.

Prior work <cit.> has shown that DGD Algorithm <ref> solves Problem 1, that is, the agents' state estimates asymptotically reach consensus on an optimum in 𝒳^*.

DGD is not designed to be privacy-preserving, and an adversary may learn information about an agent's local objective function by observing the information exchange between the agents. We now introduce algorithms that perturb the state estimates before the estimates are shared between the agents. The perturbations are intended to hide the true state estimate values and improve privacy.

§.§ RSS-NB Algorithm

The first proposed algorithm, named Randomized State Sharing-Network Balanced (RSS-NB), is a modified version of Algorithm <ref>. The pseudo-code for algorithm RSS-NB is presented as Algorithm <ref> below.

Random variables s_k^i,j are used to compute the perturbations. We will discuss the procedure for computing the perturbations after describing the rest of the algorithm. As we will discuss in more detail later, on Line 4 of Algorithm <ref>, agent j computes the perturbation d_k^j to be used in iteration k. On Line 5, the perturbation is weighted by step size α_k and added to state estimate x_k^j to obtain the perturbed state estimate w_k^j of agent j. That is,

w_k^j = x_k^j + α_k d^j_k.

α_k here is the step size, which is also used in the information fusion step in Line 8.
Properties satisfied by α_k are identical to those in the DGD Algorithm <ref>. Having computed the perturbed estimate w_k^j, each agent then sends the perturbed estimate w_k^j to its neighbors (Line 6) and receives the perturbed estimates of its neighbors (Line 7, Algorithm <ref>). Similar to Algorithm <ref>, Steps 8 and 9 of the RSS-NB algorithm also perform information fusion using a doubly stochastic matrix B_k, followed by projected gradient descent.

Now we describe how the perturbation d_k^j∈ℝ^D is computed on Line 4 of Algorithm <ref>. The strategy for computing the perturbation is motivated by a secure distributed averaging algorithm in <cit.>. In iteration k, the computation of the perturbation d_k^j at agent j uses variables s_k^j,i and s_k^i,j, i∈N_j, which take values in ℝ^D. As shown on Line 4, the perturbation d_k^j is computed as follows:

d^j_k = ∑_i ∈N_j s^i,j_k - ∑_i ∈N_j s^j,i_k.

Initially, as shown on Line 1, s_1^i,j=s_1^j,i is the 0 vector (i.e., all elements 0) for all i∈N_j. Thus, the perturbation d_1^j computed in iteration 1 is also the 0 vector. As shown on Line 6 of Algorithm <ref>, in iteration k≥ 1, agent j sends to each neighbor i a random vector s_k+1^j,i and then (on Line 7) receives random vector s_k+1^i,j from each neighbor i. These random vectors are then used to compute the random perturbations in Line 4 of the next iteration. Due to the manner in which d_k^j is computed, we obtain the following invariant for all iterations k≥ 1:

∑_j∈𝒱 d_k^j = 0.

The distribution from which the random vectors s_k^j,i are drawn affects the privacy achieved with this algorithm. In our analysis, we will assume that ‖s^j,i_k‖ ≤Δ/(2n) for all i,j,k, where the constant Δ is a parameter of the algorithm, and n=|𝒱| is the number of agents. The procedure for computing the perturbation d_k^j, as shown in (<ref>), then implies that ‖d_k^j‖ ≤Δ.
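The network-balanced construction can be sanity-checked numerically. The sketch below is our illustration on a hypothetical 3-agent triangle; the particular bounded random draw is our choice, since the paper only requires ‖s^j,i_k‖ ≤ Δ/(2n). It builds each d^j_k as received-minus-sent exchanges and exhibits the invariant ∑_j d^j_k = 0.

```python
import numpy as np

rng = np.random.default_rng(0)

def bounded_vector(D, bound):
    """Random vector with norm at most `bound` (one possible choice of draw)."""
    v = rng.standard_normal(D)
    return v * (bound * rng.uniform() / np.linalg.norm(v))

def rss_nb_perturbations(edges, n, D, Delta):
    """d^j = (sum of received s^{i,j}) - (sum of sent s^{j,i}),
    following the construction described above."""
    s = {(j, i): bounded_vector(D, Delta / (2 * n)) for (j, i) in edges}
    d = np.zeros((n, D))
    for (j, i), vec in s.items():
        d[i] += vec   # agent i adds what it received from j
        d[j] -= vec   # agent j subtracts what it sent to i
    return d

# Directed edge pairs of a triangle: every agent exchanges with every other.
edges = [(0, 1), (1, 0), (0, 2), (2, 0), (1, 2), (2, 1)]
d = rss_nb_perturbations(edges, n=3, D=2, Delta=0.5)
# The perturbations cancel over the network, and each norm is at most Delta.
```

Because every exchanged vector appears once with a plus sign and once with a minus sign, the network-wide sum telescopes to zero regardless of the distribution used.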
As elaborated later, there is a trade-off between privacy and the convergence rate of the algorithm, with larger Δ resulting in a slower convergence rate.

§.§ RSS-LB Algorithm

Our second algorithm is called the Randomized State Sharing-Locally Balanced algorithm (RSS-LB). Recall that in RSS-NB Algorithm <ref>, each agent shares an identical perturbed estimate with all its neighbors. Instead, in RSS-LB, each agent shares potentially distinct perturbed state estimates with different neighbors. The pseudo-code for RSS-LB is presented as Algorithm <ref>.

On Line 4 of Algorithm <ref>, in iteration k, agent j chooses a noise vector d^j,i_k∈ℝ^D for each i∈N_j such that d^j,j_k = 0 and ‖d^j,i_k‖ ≤Δ, where the constant Δ is a parameter of the algorithm, and

∑_i ∈N_j B_k[i,j] d^j,i_k = 0.

For convenience, for i∉N_j, define d^j,i_k = 0; that is, the perturbations for non-neighbors are zero. Here, matrix B_k is identical to that used in the information fusion step in Line 8. Observe that each agent j uses B_k[j,i], i∈N_j, in the information fusion step, and B_k[i,j], i∈N_j, in the computation of the above noise vectors. In both cases, the matrix elements used by agent j correspond only to its neighbors in the network. Since the random vectors generated by each agent j are locally balanced, as per (<ref>) above, the agents do not need to cooperate in generating the perturbations (unlike in the RSS-NB algorithm).

Using d_k^j,i as the perturbation for neighbor i, in Line 5 of Algorithm <ref>, agent j computes the perturbed state estimate w^j,i_k to be sent to neighbor i as follows:

w^j,i_k = x^j_k + α_k d^j,i_k.

α_k here is the step size, which is also used in the information fusion step in Line 8. Properties satisfied by α_k are identical to those in the DGD Algorithm <ref>.

Next, in Lines 6 and 7 of Algorithm <ref>, agent j sends w^j,i_k to each neighbor i and receives perturbed estimate w^i,j_k from each neighbor i.
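One way to realize this locally balanced construction (our sketch with illustrative numbers; the paper does not prescribe a particular procedure) is to draw random vectors for all but one neighbor and solve for the last so that the B_k[i,j]-weighted sum vanishes exactly:

```python
import numpy as np

rng = np.random.default_rng(1)

def rss_lb_noise(B_col_j, neighbors, j, D, Delta):
    """Perturbations d^{j,i} with d^{j,j} = 0, ||d^{j,i}|| <= Delta, and
    sum_i B[i,j] d^{j,i} = 0 (locally balanced), generated by agent j alone."""
    others = [i for i in neighbors if i != j]
    d = {i: rng.standard_normal(D) for i in others[:-1]}
    # Solve for the last neighbor so the weighted sum cancels exactly.
    d[others[-1]] = -sum((B_col_j[i] * d[i] for i in others[:-1]),
                         np.zeros(D)) / B_col_j[others[-1]]
    d[j] = np.zeros(D)
    # Rescale everything to respect the norm bound Delta; rescaling by a
    # common factor preserves the weighted-sum-zero property.
    scale = Delta / max(np.linalg.norm(v) for v in d.values())
    return {i: scale * v for i, v in d.items()}

B = np.array([[0.50, 0.25, 0.25],    # an illustrative doubly stochastic
              [0.25, 0.50, 0.25],    # weight matrix for a 3-agent
              [0.25, 0.25, 0.50]])   # complete graph
d = rss_lb_noise(B[:, 0], neighbors=[0, 1, 2], j=0, D=2, Delta=0.3)
```

Since only column j of B_k and agent j's own randomness enter, no coordination with neighbors is needed, matching the remark above.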
Agent j performs the information fusion step in Line 8, followed by projected gradient descent in Line 9, similar to the previous algorithms. Figure <ref> shows the perturbed state exchange for RSS-LB. By definition, w^j,j_k = x^j_k. This implies that perturbed estimates are shared with neighbors and never used by the agent itself (unlike in the RSS-NB algorithm).

§.§ FS Algorithm

The function sharing algorithm (FS) presented in this section can be viewed as a special case of the RSS-NB algorithm. In this special case of RSS-NB, the random vector s_k^j,i computed by agent j is a function of its state estimate x_k^j, where the function is independent of k. Thus, the function sharing algorithm uses state-dependent random vectors. The pseudo-code for function sharing is presented in Algorithm <ref> below using random functions, instead of state-dependent random vectors. However, the behavior of Algorithm <ref> is equivalent to using state-dependent noise in RSS-NB.

In Line 1 of Algorithm <ref>, each agent j selects a function s^j,i(x) to be sent to neighbor i in Line 2. These functions are exchanged by the agents. Agent j then uses them in Line 3 to compute its noise function, which is, in turn, used to compute an obfuscated local objective function f̃_j(x). Finally, the agents perform DGD Algorithm <ref> with each agent j using f̃_j(x) as its objective function. We assume that the s^j,i(x) have bounded and Lipschitz gradients. This implies that the obfuscated functions f̃_j(x) satisfy assumption A2. The obfuscated objective function f̃_j(x) is not necessarily convex.
Despite this, the correctness of the algorithm can be proved using the following observations:

∑_j∈𝒱 p_j(x) = 0,  and hence  ∑_j∈𝒱 f̃_j(x) = ∑_j∈𝒱 f_j(x) = f(x),

where p_j(x) denotes the noise function computed by agent j. Effectively, Algorithm <ref> minimizes a convex sum of non-convex functions. Distributed optimization of a convex sum of non-convex functions, albeit with an additional assumption of strong convexity of f(x), was also addressed in <cit.>, wherein the correctness is shown using Lyapunov stability arguments. However, <cit.> does not address how privacy may be achieved. Additionally, our approach for improving privacy is more general than function sharing, as exemplified by algorithms RSS-NB and RSS-LB.

§ MAIN RESULTS

The specification of Problem <ref> in Section <ref> identifies the requirement for correctness of the proposed algorithms. The proof of Theorem <ref> below is outlined in Section <ref> and presented in detail in Appendix <ref>.

Under Assumptions <ref> and <ref>, RSS-NB Algorithm <ref>, RSS-LB Algorithm <ref> and FS Algorithm <ref> solve distributed optimization Problem <ref>.

Theorem <ref> implies that the sequence of iterates {x^j_k} generated by each agent j converges to an optimum in 𝒳^* asymptotically, despite the introduction of perturbations.

Now we discuss the privacy improvement achieved by our algorithms. We consider an adversary that compromises a set of up to f agents, denoted as A (thus, |A| ≤ f). The adversary can observe everything that each agent in A observes. In particular, the adversary has knowledge of the local objective functions of agents in A, their states, and their communication to and from all their neighbors. Furthermore, the adversary knows the network topology. The goal here is to prevent the adversary from learning the local objective function of any agent i∉A. The introduction of perturbations in the state estimates helps improve privacy by creating an ambiguity in the following sense.
To be able to exactly determine f_i(x) for any i∉A, the adversary's observations of the communication to and from agents in A must be compatible with the actual f_i(x), but not with any other possible choice for the local objective function of agent i. The larger the set of feasible local objective functions of agent i that are compatible with the adversary's observations, the greater the ambiguity. The introduction of noise naturally increases this ambiguity, with larger Δ (the noise parameter) resulting in greater privacy. However, this improved privacy comes with a performance cost, as Theorem <ref> will show. Before we discuss Theorem <ref>, we first present more precise claims of privacy for the FS algorithm.

The adversary compromises agents in A⊂𝒱, where |A|≤ f. Since all the communication to and from the agents in 𝒱-A is with the agents in A, under the strong adversary model above, the adversary can learn (within a constant) the function ∑_i∈𝒱-A f_i(x). The ideal goal for a privacy-preserving algorithm is then to ensure that the adversary cannot learn any information about the following sum for any I⊂𝒱-A (i.e., I a strict subset of 𝒱-A):

f_I(x) ≜ ∑_i∈I f_i(x).

Intuitively, privacy is achieved if, even after observing the execution of the optimization protocol, the adversary finds significant ambiguity in determining f_I(x) for any I⊂𝒱-A.

Privacy Claims: Let F denote the set of all feasible instances of Problem <ref>, characterized by sets of local objective functions. Thus, each element of F, say {g_1(x),g_2(x),⋯,g_n(x)}, corresponds to an instance of Problem <ref> in which g_i(x) is the local objective function of agent i. When each agent's local objective function is restricted to be a polynomial of bounded degree, the set of feasible functions forms an additive group.
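This additive-group structure is what makes function sharing work for polynomials: exchanged noise polynomials telescope away in the network-wide sum while masking each summand. A minimal numerical sketch (our illustrative triangle network with quadratic objectives, represented as coefficient arrays):

```python
import numpy as np

rng = np.random.default_rng(2)

# Local objectives f_i(x) = (x - c_i)^2, as coefficient arrays [c^2, -2c, 1].
c = [0.0, 1.0, 2.0]
f = [np.array([ci**2, -2.0 * ci, 1.0]) for ci in c]

# One random degree-2 noise polynomial s^{j,i} per directed edge of a triangle.
edges = [(0, 1), (1, 0), (0, 2), (2, 0), (1, 2), (2, 1)]
s = {e: rng.standard_normal(3) for e in edges}

# Obfuscated objective: f~_j = f_j + (sum received) - (sum sent).
f_tilde = []
for j in range(3):
    g = f[j].copy()
    for (a, b), coef in s.items():
        if b == j:
            g = g + coef    # received s^{a,j}
        if a == j:
            g = g - coef    # sent s^{j,b}
    f_tilde.append(g)

# Individually the f~_j are randomized, but their sum equals sum_j f_j.
```

Each f~_j looks like an arbitrary degree-2 polynomial to an observer, yet the coefficients of ∑_j f~_j match ∑_j f_j exactly, so the global optimization problem is unchanged.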
Theorem <ref> makes a claim regarding the privacy achieved using function sharing in this case. Recall that F is the set of all possible instances of Problem <ref>. The adversary's observations are said to be compatible with problem instance {g_1(x),g_2(x),⋯,g_n(x)}∈F if the information available to the adversary may be produced when agent i's local objective function is g_i(x) for each i∈𝒱.

Let the local objective function of each agent be restricted to be a polynomial of bounded degree. Consider an execution of the FS algorithm in which the local objective function of each agent i is f_i(x). Then the FS algorithm provides the following privacy guarantees:

(P1) Let the network graph 𝒢 have minimum degree ≥ f+1. For any agent i∉A, choose any feasible local objective function g_i(x) ≠ f_i(x). The adversary's observations in the above execution are compatible with at least one feasible problem in F in which agent i's local objective function equals g_i(x). In other words, the adversary cannot learn the function f_i(x) for i∉A.

(P2) Let the network graph 𝒢 have vertex connectivity ≥ f+1. For each I⊂𝒱-A, choose a feasible local objective function g_i(x)≠ f_i(x) for each i∈I. The adversary's observations in the above execution are compatible with at least one feasible problem in F wherein, for i∈I, agent i's local objective function is g_i(x). In other words, the adversary cannot learn ∑_i∈I f_i(x).

The proof of property (P2) in Theorem <ref> is sketched in Section <ref> and detailed in Appendix <ref>. Property (P1) can be proved similarly.

Convergence-Privacy Trade-off: Adding perturbations to the state estimates can improve privacy; however, it also degrades the convergence rate. Analogous to the finite-time analysis presented in <cit.>, the theorem below assumes α_k=1/√(k), and provides a convergence result for a weighted time-average of the state estimates, x^j_T, defined below.

Let the estimates {x^j_k} be generated by RSS-NB or RSS-LB with α_k = 1/√(k).
For each j∈𝒱, let

x^j_T = ∑_k=1^T α_k x^j_k / ∑_k=1^T α_k.

Then,

f(x^j_T) - f(x^*) = 𝒪((1 + Δ^2) log(T)/√(T)).

Section <ref> presents the proof. The above theorem shows that the gap between the function value at the time-averaged state estimate x^j_T and the optimal function value is quadratic in the noise bound Δ. The dependence on time T in the convergence result above is similar to that for DGD in <cit.>, and is a consequence of the consensus-based local gradient method used here. The quadratic dependence on Δ is a consequence of the structured randomization. Larger Δ results in slower convergence; however, it also results in larger randomness in the iterates, improving privacy. The random perturbations used in algorithms RSS-NB and RSS-LB cause a slowdown in convergence, but do not introduce an error in the outcome. This is different from ϵ̃-differential privacy, where perturbations result in a slowdown in addition to an error of the order of 𝒪(1/ϵ̃^2) <cit.>.

§ PERFORMANCE ANALYSIS

We sketch the analysis of RSS-NB here. The analysis of RSS-LB and FS follows a similar structure. For brevity, only key results are presented here; detailed proofs are available in the Appendix (also <cit.>). We often refer to the state estimate of an agent as its iterate. Define the iterate average x̅_k at iteration k, and the disagreement δ^j_k of iterate x^j_k of agent j with x̅_k, as

x̅_k = (1/n)∑_j=1^n x^j_k,  and  δ^j_k = x^j_k - x̅_k.

The computation on Line 8 of RSS-NB Algorithm <ref> can be represented using a "true state", denoted v̅^j_k, and a perturbation e^j_k, as follows:

v̅^j_k = ∑_i=1^n B_k[j,i] x^i_k,  e^j_k = ∑_i=1^n B_k[j,i] d^i_k,  v^j_k = ∑_i=1^n B_k[j,i] w^i_k = v̅^j_k + α_k e^j_k.

Since ∑_j d^j_k = 0 and B_k is doubly stochastic, we get ∑_j e^j_k = 0 (see Appendix for details). Similarly for RSS-LB, the computation on Line 8 of RSS-LB Algorithm <ref> can be represented using a "true state" v̅^j_k and a perturbation e^j_k, as follows:
v̅^j_k = ∑_i=1^n B_k[j,i] x^i_k,  e^j_k = ∑_i=1^n B_k[j,i] d^i,j_k,  v^j_k = ∑_i=1^n B_k[j,i] w^i,j_k = v̅^j_k + α_k e^j_k.

Following the construction of the noise (Line 4 in Algorithm <ref>), we can show that ∑_j e^j_k = 0 (see Appendix for details). Now we can represent the projected gradient descent step (Line 9 of Algorithm <ref> or Line 9 of Algorithm <ref>) as

x^j_k+1 = P_𝒳[ v̅^j_k - α_k ( ∇ f_j(v^j_k) - e^j_k ) ].

The same equation holds for both RSS-NB and RSS-LB, where e^j_k is defined by Eq. <ref> for RSS-NB and Eq. <ref> for RSS-LB. In the above expression, the perturbation can be viewed simply as noise in the gradient. This perspective is useful for the analysis. Using a result from <cit.> on the linear convergence of products of doubly stochastic matrices, we obtain a bound on the disagreement δ^j_k in Lemma <ref> below.

For constants β < 1 and θ that both depend only on the network 𝒢 and the doubly stochastic matrices B_k, the iterates x^j_k generated by RSS-NB and RSS-LB satisfy, for k ≥ 1,

max_j∈𝒱 ‖δ^j_k+1‖ ≤ n θ β^k max_i∈𝒱 ‖x^i_1‖ + 2 α_k (L + Δ) + n θ (L+Δ) ∑_l=2^k β^k+1-l α_l-1.

The proof of Lemma <ref> is presented in Appendix <ref>. Lemma <ref> can be used to show that the iterates maintained by the different agents asymptotically reach consensus. Lemma <ref> below provides a bound on the distance between the iterates and the optimum.

For iterates x^j_k generated by RSS-NB or RSS-LB, y ∈𝒳, and k≥ 1, the following holds:

η_k+1^2 ≤ (1 + F_k) η_k^2 - 2 α_k (f(x̅_k) - f(y)) + H_k,  where
η_k^2 = ∑_j=1^n ‖x^j_k - y‖^2,
F_k = α_k N ( max_j∈𝒱 ‖δ^j_k‖ + α_k Δ ),  and
H_k = 2 α_k n (L + N/2 + Δ) max_j∈𝒱 ‖δ^j_k‖ + α_k^2 n (NΔ + (L+Δ)^2).

The proof of Lemma <ref> is presented in Appendix <ref>. The expressions in Lemma <ref> have the same structure as the supermartingale convergence result from <cit.>. We can show that ∑_k H_k < ∞ and ∑_k F_k < ∞. Then, using the result from <cit.>, asymptotic convergence of the iterate average x̅_k to an optimum x^*∈𝒳^* can be proved, proving Theorem <ref>.
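The overall scheme can be exercised numerically. The sketch below is our toy instance, not the paper's experiments: three agents on a path graph with quadratic objectives f_j(x) = (x - c_j)^2, Metropolis weights, step size α_k = 1/k (which satisfies the summability conditions), zero-sum perturbations standing in for the pairwise-exchange construction, and projection omitted since the iterates stay bounded. The iterates reach consensus near the optimum despite the noise.

```python
import numpy as np

rng = np.random.default_rng(3)

def metropolis_weights(adj):
    """Doubly stochastic weights from an undirected adjacency matrix
    (degrees counted without the self-loop, the usual Metropolis rule)."""
    n = adj.shape[0]
    deg = adj.sum(axis=1)
    B = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if adj[i, j]:
                B[i, j] = 1.0 / (1.0 + max(deg[i], deg[j]))
        B[i, i] = 1.0 - B[i].sum()
    return B

adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)  # path graph
B = metropolis_weights(adj)
c = np.array([0.0, 1.0, 2.0])        # optimum of sum_j (x - c_j)^2 is x* = 1
x = np.array([5.0, -3.0, 4.0])       # arbitrary initial estimates
Delta = 1.0

for k in range(1, 2001):
    alpha = 1.0 / k
    d = rng.uniform(-Delta, Delta, 3)
    d -= d.mean()                    # zero-sum draw: stands in for the
                                     # pairwise network-balanced exchanges
    w = x + alpha * d                # perturbed estimates
    v = B @ w                        # information fusion
    x = v - alpha * 2.0 * (v - c)    # gradient step for f_j(x) = (x - c_j)^2

# All agents end up in consensus near the optimum x* = 1 despite the noise.
```

Because the perturbations sum to zero and B is doubly stochastic, the network average evolves exactly as in noiseless DGD, which is the mechanism behind the deterministic correctness claimed above.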
Proof of Theorem <ref>: Next, we sketch the proof of Theorem <ref>, which uses Lemma <ref>. The detailed proof is presented in Appendix <ref>. As discussed earlier, Theorem <ref> assumes α_k=1/√(k). Recall the definition of x^j_T in Theorem <ref>. Let x̅_T = (1/n)∑_j=1^n x^j_T. Observing that x̅_T also equals ∑_k=1^T α_k x̅_k / ∑_k=1^T α_k, and using the fact that f(x) is convex, we get

f(x̅_T) - f^* ≤ ∑_k=1^T α_k f(x̅_k)/∑_k=1^T α_k - f^* = ∑_k=1^T α_k (f(x̅_k) - f^*)/∑_k=1^T α_k.

Lemma <ref> and the observation ∑_k=1^T α_k ≥√(T) yield

f(x̅_T) - f^* ≤ ∑_k=1^T ((1+F_k)η^2_k - η^2_k+1 + H_k)/(2∑_k=1^T α_k) ≤ (η_1^2 + ∑_k=1^T (F_k η^2_k + H_k))/(2√(T)).

Next we bound ∑_k=1^T F_k and ∑_k=1^T H_k:

∑_k=1^T F_k = 2N ∑_k=1^T α_k max_j ‖δ^j_k‖ + 2NΔ∑_k=1^T α_k^2 ≤ 2N ∑_k=1^T α_k max_j ‖δ^j_k‖ + 2NΔ(log(T) + 1),

∑_k=1^T H_k ≤ 2n(L+N/2+Δ) ∑_k=1^T α_k max_j ‖δ^j_k‖ + n[(L+Δ)^2 + NΔ](log(T)+1).

Using Lemma <ref> to bound ∑_k=1^T α_k max_j ‖δ^j_k‖, we arrive at

f(x̅_T) - f^* ≤ (C_0 + C_1 log(T) + C_2 log(T-1))/√(T).

We then use the Lipschitzness of f(x) to obtain

f(x^j_T) - f^* = f(x^j_T) - f(x̅_T) + f(x̅_T) - f^* ≤ L ‖x^j_T - x̅_T‖ + (C_0 + (C_1+C_2)log(T))/(2√(T)) = 𝒪((1+Δ^2) log(T)/√(T)),

since C_1, C_2 = 𝒪(Δ^2). □

§ PRIVACY WITH FUNCTION SHARING

In this section we consider a special case of Problem <ref>. Assume that all objective functions f_i(x) are polynomials of degree ≤ d. Consequently, f(x) is a polynomial with deg(f(x)) ≤ d. We now prove property (P2) in Theorem <ref>; the proof of property (P1) can be obtained similarly.

(Necessity) We prove this by contradiction. Suppose that κ(𝒢) = f, where κ(𝒢) denotes the vertex connectivity of 𝒢. Then, if f nodes are deleted (along with their edges), the graph becomes disconnected and splits into components I_1 and I_2 (see Figure <ref>). Suppose that the f nodes that were deleted are compromised by an adversary. Due to the strong assumption on adversarial knowledge, the adversary can estimate the obfuscated function f̃_i(x) at each node. The agents generate correlated noise functions using Eq. <ref>.
All the perturbation functions s^j,i(x) shared by nodes in I_1 with agents outside I_1 are observed by the adversary. This follows from the fact that all the neighbors of I_1 (nodes outside I_1) are compromised. Hence, the true objective functions of the subset of nodes in I_1 can be estimated using

∑_l ∈ I_1 f_l(x) = ∑_l ∈ I_1 f̃_l(x) - ( ∑_j∈A, i∈I_1 s^j,i(x) - ∑_j∈A, i∈I_1 s^i,j(x) ).

This gives us a contradiction: if κ(𝒢) = f, then the privacy of I_1 can be broken. Hence, κ(𝒢) > f is necessary.

(Sufficiency) This direction follows from the constructive proof of (P2) below.

Proof of Theorem <ref> (P2): We use a constructive approach to show that, given any execution, any feasible candidate for the objective functions of the nodes in any subset I⊂𝒱-A is compatible with the adversary's observations. Suppose that the actual local objective function of each agent i in a given execution is f_i(x). Consider a subset I⊂𝒱-A, and consider any feasible local objective function g_i(x) for each i∈I. Now, for any i∈A, let g_i(x)=f_i(x) (the adversary observes its own objective functions). Also, since the functions are polynomials of bounded degree, it is easy to see that, for each i∈𝒱-A-I, we can find local objective functions g_i(x) such that ∑_i∈𝒱-A g_i(x)=∑_i∈𝒱-A f_i(x) for all x∈𝒳. Thus, for the given functions g_i(x), i∈I, we have found feasible local objective functions g_i(x) for all agents such that: i) the local objective functions of the compromised agents are identical to those in the actual execution, and ii) the sum of the objective functions of the "good" nodes is preserved, ∑_i∈𝒱-A g_i(x) = ∑_i∈𝒱-A f_i(x). Recall that the function sharing algorithm adds noise functions to obtain the perturbed function f̃_i(x) at each agent i∈𝒱.
In particular, agent j sends to each neighboring agent i a noise function, say s^j,i(x), and subsequently computes f̂_j(x) using the noise functions it sent to neighbors and the noise functions received from the neighbors. When the vertex connectivity of the graph is at least f+1, it is easy to show that, for the local objective functions {g_1(x),g_2(x),⋯,g_n(x)} defined above, each agent j∈V-A can select noise functions, say t^j,i(x) for each neighbor i, with the following properties: * For each j∈V-A and neighbor i of j such that i∈A, t^j,i(x)=s^j,i(x) for all x∈X. That is, the noise functions exchanged with agents in A are unchanged.* For each j∈V-A, g_j(x) + ∑_i ∈N_j t^i,j(x) - ∑_i ∈N_j t^j,i(x) = f̂_j(x). That is, the obfuscated function of each agent in V-A remains the same as that in the original execution. Due to the above two properties, the observations of the adversary in the above execution will be identical to those in the original execution. Thus, the adversary cannot distinguish between the two executions. This, in turn, implies property (P2) in Theorem <ref>. Property (P1) can be proved similarly. □ The construction can be summarized as follows: * The adversary knows the objective functions of compromised nodes, f_i(x), ∀ i ∈A (and correspondingly h_i(x)), and the noise functions transmitted/received by compromised nodes, s^i,j(x) with i ∈A or j ∈A (and correspondingly t^i,j(x))* A strong adversary can estimate f̂_i(x), ∀ i, by observing the algorithm execution at every node* Construct a spanning tree over G* Assign arbitrary polynomials (feasible, bounded degree) s^i,j(x) (and correspondingly t^i,j(x)) to all edges (i,j) that do not belong to the spanning tree * Compute noise functions for the spanning tree edges (a unique solution exists since a spanning tree ensures that there are n-1 unknowns with n-1 equations) This procedure provides us with two separate problems P_1 and P_2 (and correspondingly noise functions s^i,j(x) and t^i,j(x)) that give the same execution.
Since the choices of noise polynomials for non-spanning-tree edges are infinite, for each such selection we get a new problem that corresponds to the same execution f̂_i(x). Note that since the vertex connectivity of the graph is > f, every node and any strict subset of good nodes has ≥ f+1 neighbors. Hence, setting the noise function in Step 1 still leaves us with the freedom to choose (Step 4) or compute (Step 5) the noise polynomial for at least one more edge. The inability of the adversary to differentiate between infinitely many such problems P_i ∈F (i = 1,2,…) gives privacy. We get privacy from the fact that an adversary can estimate f̂_i(x) (∀ i) by observing the execution of the DGD algorithm invoked in Line 3 of Algorithm <ref>. However, the adversary cannot correctly resolve the perturbation function p_i(x) when the vertex connectivity of the graph exceeds f. Hence, all possible problem instances in F are equally likely for the adversary. Note that we are concerned with protecting the privacy of a strict subset of “good" nodes. Under strong adversarial knowledge, the adversary can observe both the sum of all functions ∑_j f_j(x) and the functions of compromised nodes. The adversary can easily estimate the sum of functions of all “good" nodes by computing ∑_(ℓ∈V-A)f_ℓ(x) = ∑_j f_j(x) - ∑_ℓ∈Af_ℓ(x). RSS-NB and RSS-LB, with the additional assumption that the random perturbations depend on the state x^j_k, are also privacy preserving in the sense of Definition <ref>. Privacy for RSS-NB: The gradient descent update for the RSS-NB algorithm can be rewritten to incorporate the perturbation term -e^j_k into the gradient (Eq. <ref>), where ∑_j e^j_k = 0. The RSS-NB algorithm simulates a scenario where erroneous gradients are used in the DGD algorithm. Now consider that the perturbations added to the state in Eq. <ref> are state dependent, i.e. for the same v^j_k we have the same error in every execution.
Under this added assumption of state-dependent randomness, we can construct a function p_j(x) such that ∇ p_j(v^j_k) = -e^j_k ≜ - ∑_i=1^n B_k[j,i] d^i_k, ∀ k. Adding -e^j_k to the gradient is effectively perturbing the objective function with p_j(x). ∑_j e^j_k = 0, ∀ k, implies ∑_j∇ p_j(v^j_k) = 0, ∀ v^j_k. Hence, RSS-NB and FS are equivalent if the random perturbations are state dependent, and we have privacy when κ(G)>f. The proof for RSS-LB is similar to the proof for RSS-NB and is excluded due to space constraints. The privacy analysis can be extended to other machine learning problems; however, that is beyond the scope of this paper. We show via simulations that the algorithms in this paper provide high-accuracy models. § EXPERIMENTAL RESULTS We now provide some experimental results for the RSS-NB and RSS-LB algorithms. We present two sets of experiments. First, we show that RSS-NB and RSS-LB correctly solve distributed optimization of polynomial objective functions. Next, we apply our algorithms in the context of machine learning for handwritten digit classification (using the MNIST dataset) and document classification (using the Reuters dataset). Polynomial Optimization: We solve polynomial optimization on a network of 5 agents that form a cycle. The objective functions of the 5 agents are chosen as f_1(x) = x^2, f_2(x) = x^4, f_3(x) = x^2+x^4, f_4(x) = x^2 + 0.5x^4, and f_5(x) = 0.5 x^2 + x^4. The aggregate function is f(x) = 2.5(x^2+x^4). We consider X = [-30,30]. Simulation results in Figure <ref> show that the two algorithms converge to the optimum x^* = 0 for two different values of Δ. A large Δ results in larger perturbations, and the convergence is slower. For smaller Δ, as expected, the performance of both RSS-NB and RSS-LB is closer to DGD. Machine Learning: We consider two classification problems.
We use a deep neural network <cit.> for digit recognition using the MNIST dataset <cit.> and regularized logistic regression for the Reuters dataset <cit.>. We use two graph topologies: a cycle of 5 agents (namely, C_5) and a complete graph of 5 agents (namely, K_5). Due to the high overhead of computing gradients on full datasets, we evaluate versions of the different algorithms that are adapted to perform stochastic gradient descent (SGD) on minibatches of local, non-overlapping datasets, to solve the distributed machine learning problem. Also, by performing consensus only once every 40 gradient steps, we decrease network overhead while still retaining accuracy. Figure <ref> shows convergence results for DGD, RSS-NB, RSS-LB, and a centralized algorithm, SGD-C, which demonstrate that our algorithms can achieve high accuracy despite the introduction of perturbations in the state estimates.* MNIST: Our algorithms converge quickly despite the deep learning problem being non-convex and despite using stochastic gradient descent.* Reuters: We achieve testing accuracy comparable to the centralized solution SGD-C. DGD works best, followed by RSS-NB and RSS-LB, for the cycle topology. § CONCLUSION In this paper, we develop and analyze iterative distributed optimization algorithms RSS-NB, RSS-LB and FS that exploit structured randomness to improve privacy while maintaining accuracy. We prove convergence and characterize the trade-off between the convergence rate and the bound on the perturbation. We provide claims of privacy for the FS algorithm, which is a special case of RSS-NB. We apply versions of RSS-NB and RSS-LB to distributed machine learning and evaluate their effectiveness for training with the MNIST and Reuters datasets. § NOTATION SUMMARY A summary of the symbols and constants used in the analysis is presented in Table <ref>. We will analyze both RSS-NB and RSS-LB simultaneously.
Before we move on to the proofs, we first define key bounds on the error for both algorithms. We first establish the boundedness of the error term in Eq. <ref> (for RSS-NB) and in Eq. <ref> (for RSS-LB). We also show that the error adds to zero over the network. These relationships will be critical for proving the convergence of our algorithms.* RSS-NB * Boundedness (e^j_k≤Δ): The noise perturbations s^i,j_k are bounded by Δ/2n. It follows that d^j_k = ∑_i ∈N_j s^i,j_k - ∑_i ∈N_j s^j,i_k will also be bounded, d^j_k≤Δ, since node j may have at most n-1 neighbors. Following Eq. <ref>, we can show, e^j_k = ∑_i ∈N_j B_k[j,i]d^i_k≤∑_i ∈N_j B_k[j,i] d^i_k since B_k[j,i] is non-negative. e^j_k≤∑_i =1^n B_k[j,i] d^i_k≤Δ∑_i =1^n B_k[j,i] = Δ. * Aggregate Randomness (∑_j e^j_k = 0): We prove this using Eq. <ref>, followed by the fact that the matrix B_k is doubly stochastic. ∑_j=1^n e^j_k = ∑_j=1^n ( ∑_i ∈N_j B_k[j,i] d^i_k ) = ∑_i=1^n ( ∑_j=1^n B_k[j,i]) d^i_k = ∑_i=1^n d^i_k = 0. (from Eq. <ref>) * RSS-LB * Boundedness (e^j_k≤Δ): The noise perturbations d^i,j_k are bounded by Δ (by construction) i.e. d^i,j_k≤Δ. Following Eq. <ref>, we have, e^j_k = ∑_i ∈N_j B_k[j,i]d^i,j_k≤∑_i ∈N_j B_k[j,i] d^i,j_k since B_k[j,i] is non-negative and B_k is doubly stochastic. e^j_k≤∑_i =1^n B_k[j,i] d^i,j_k≤Δ∑_i =1^n B_k[j,i] = Δ. * Aggregate Randomness (∑_j e^j_k = 0): We prove this using Eq. <ref> and Eq. <ref>. ∑_j=1^n e^j_k = ∑_j=1^n ( ∑_i ∈N_j B_k[j,i] d^i,j_k ) = 0. (from Eq. <ref>) Next, we prove an important relationship between the iterate-average and the average of the “true state” defined in Eq. <ref> (for RSS-NB) and in Eq. <ref> (for RSS-LB). v̅_k ≜1/n∑_j=1^n v̂^j_k= 1/n∑_j=1^n ( ∑_i=1^n B_k[j,i] x^i_k ) =1/n∑_i=1^n ( ∑_j=1^n B_k[j,i] ) x^i_k= 1/n∑_i=1^n x^i_k= x̅_k, since ∑_j=1^n B_k[j,i] = 1 (B_k is column stochastic). We have shown that the iterate-average is preserved under convex averaging. This proof is adapted from <cit.>.
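The average-preservation argument above can be checked numerically. The following is a minimal sketch, assuming numpy is available; the 4-agent, 3-dimensional setup and the Sinkhorn-style construction of a doubly stochastic B_k are hypothetical illustration choices, not part of the paper.

```python
import numpy as np

# Numerical sketch: convex averaging with a doubly stochastic B_k
# preserves the iterate average (v_bar_k = x_bar_k).
rng = np.random.default_rng(0)
n, D = 4, 3

# Build an (approximately) doubly stochastic matrix by alternately
# normalizing rows and columns; the final step makes columns exact.
B = rng.random((n, n))
for _ in range(200):
    B /= B.sum(axis=1, keepdims=True)   # rows sum to 1
    B /= B.sum(axis=0, keepdims=True)   # columns sum to 1

x = rng.random((n, D))   # agent iterates x^j_k, one row per agent
v_hat = B @ x            # fused states v_hat^j_k = sum_i B[j,i] x^i_k

# Column stochasticity (1^T B = 1^T) is exactly what preserves the average.
assert np.allclose(B.sum(axis=0), 1.0)
assert np.allclose(v_hat.mean(axis=0), x.mean(axis=0))
```

The same check applies to any doubly stochastic weight matrix, which is why the analysis can treat RSS-NB and RSS-LB uniformly at this step.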
We define the transition matrix Φ(k,s) as the product of the doubly stochastic weight matrices B_k, Φ(k,s) = B_k B_k-1… B_s+1 B_s,(∀ k ≥ s > 0). We first note two important results from the literature. The first result relates to the convergence of non-negative sequences (Lemma <ref>) and the second result describes the linear convergence of the transition matrix to 1/n11^T (Lemma <ref>). Let {ζ_k} be a non-negative scalar sequence. If ∑_k=0^∞ζ_k < ∞ and 0 < β < 1, then ∑_k=0^∞( ∑_j=0^k β^k-jζ_j ) < ∞. Let the graph G be connected. Then, * lim_k →∞Φ(k,s) = 1/n11^T for all s > 0.* |Φ(k,s)[i,j] - 1/n| ≤θβ^k-s+1 for all k ≥ s > 0, where θ = (1 - ρ/4n^2)^-2 and β = (1 - ρ/4n^2). The well-known non-expansive property (cf. <cit.>) of Euclidean projection onto a non-empty, closed, convex set 𝒳 is represented by the following inequality, ∀x, y ∈ℝ^D, 𝒫_𝒳[x] - 𝒫_𝒳[y] ≤x - y. We state the deterministic version of a known result on the convergence of non-negative almost supermartingales <cit.>. Let {F_k}, {E_k}, {G_k} and {H_k} be non-negative real sequences. Assume that ∑_k=0^∞ F_k < ∞, ∑_k=0^∞ H_k < ∞, and E_k+1≤(1+F_k) E_k - G_k + H_k. Then, the sequence {E_k} converges to a non-negative real number and ∑_k=0^∞ G_k < ∞. The proofs in this report follow a structure similar to <cit.>, with the key difference being in the proofs of Lemma <ref> and Theorem <ref>. §.§ Proof of Lemma <ref> (Disagreement Lemma) Define, for all j ∈𝒱 and all k, z^j_k+1 = x^j_k+1 - ∑_i=1^n B_k[j,i] x^i_k x^j_k+1 =z^j_k+1 + ∑_i=1^n B_k[j,i] x^i_k We then unroll the iterations to get x^j_k+1 as a function of z^j_k+1, x^i_k, and the doubly stochastic weight matrices at the current and previous iterations. x^j_k+1 = z^j_k+1 + ∑_i=1^n [ B_k[j,i] ( z^i_k + ∑_l=1^n B_k-1[i,l] x^l_k-1) ] We perform the above-mentioned unrolling successively and use the definition of the transition matrix Φ(k,s), x^j_k+1 = z^j_k+1 + ∑_i=1^n Φ(k,1)[j,i] x^i_1 + ∑_l=2^k [∑_i=1^n Φ(k,l)[j,i] z^i_l]. Note that Φ(1,1) = B_1.
We verify the expression for k = 1, where we get the relationship x^j_2 = z^j_2 +∑_i=1^n Φ(1,1)[j,i] x^i_1. We can write the relation for the iterate average x̅_k and use the doubly stochastic nature of B_k to get, x̅_k+1 = 1/n∑_j=1^n x^j_k+1 = 1/n∑_j=1^n ( ∑_i=1^n B_k[j,i] x^i_k + z^j_k+1) = 1/n( ∑_i=1^n ( ∑_j=1^n B_k[j,i]) x^i_k + ∑_j=1^n z^j_k+1), = x̅_k + 1/n∑_j=1^n z^j_k+1 = x̅_1 + 1/n∑_l=2^k+1∑_j=1^n z^j_l. We use the relations for x̅_k+1 (Eq. <ref>) and x^j_k+1 (Eq. <ref>) to get an expression for the disagreement. We further use the triangle inequality for norms to get, x^j_k+1 - x̅_k+1 ≤∑_i=1^n |1/n - Φ(k,1)[j,i] |x^i_1 + ∑_l=2^k∑_j=1^n |1/n - Φ(k,l)[j,i]|z^j_l + z^j_k+1 + 1/n∑_j=1^n z^j_k+1. We use Lemma <ref> to bound terms of the type |1/n - Φ(k,l)[j,i] | and max_i∈Vx^i_1 to bound x^i_1, to get, x^j_k+1 - x̅_k+1 ≤ n θβ^kmax_i∈Vx^i_1 + θ∑_l=2^k β^k+1-l∑_i=1^n z^i_l + z^j_k+1 + 1/n∑_j=1^n z^j_k+1. We now bound each of the norms z^j_k, using the fact that v^j_k ∈𝒳, the non-expansive property of the projection operator (see Eq. <ref>), and Assumption <ref>, z^j_k+1 = 𝒫_𝒳 [v̂^j_k - α_k ( ∇f_j(v^j_k) - e^j_k )] - v̂^j_k Projected Descent for RSS-NB and RSS-LB, Eq. <ref>≤α_k ∇f_j(v^j_k) - e^j_k≤α_k (∇f_j(v^j_k) +e^j_k) Triangle inequality≤α_k (L + Δ) e^j_k≤Δ (Eq. <ref> for RSS-NB and Eq. <ref> for RSS-LB) Note that we used the boundedness of the perturbation e^j_k to obtain the above relation. We showed this boundedness earlier in the report: Eq. <ref> for RSS-NB and Eq. <ref> for RSS-LB. Next, recall the definition of δ^j_k from Eq. <ref>. Combining Eq. <ref> and Eq. <ref>, max_j ∈Vδ^j_k+1 = max_j ∈Vx^j_k+1 - x̅_k+1 ≤ n θβ^kmax_i∈Vx^i_1 + n θ (L+Δ) ∑_l=2^k β^k+1-lα_l-1 + 2 α_k ( L + Δ)§.§ Proof of Lemma <ref> (Iterate Lemma) Recall the relationships established in the appendix for our algorithms RSS-NB and RSS-LB. * RSS-NB (Eq. <ref>, <ref>) *Boundedness e^j_k≤Δ *Aggregate Randomness ∑_j e^j_k = 0 * RSS-LB (Eq.
<ref>, <ref>) *Boundedness e^j_k≤Δ *Aggregate Randomness ∑_j e^j_k = 0 In summary, for both RSS-NB and RSS-LB, we have shown that e^j_k≤Δ and ∑_j e^j_k = 0. These relationships are used for proving Lemma <ref> and thereby proving convergence. They allow us to perform a unified analysis of both our algorithms despite the inherent differences between RSS-NB and RSS-LB. To simplify the analysis, we adopt the following notation, η_k^2= ∑_j=1^n x^j_k - y^2,ξ_k^2= ∑_j=1^n v̂^j_k - y^2. Note that η_k and ξ_k are both functions of y; however, for simplicity we do not explicitly show this dependence. Note that 𝒫_𝒳[y] = y for all y ∈𝒳. Using the non-expansive property of the projection operator (Eq. <ref>), and the projected gradient descent expression in Eq. <ref>, we get, x^j_k+1 - y^2= 𝒫_𝒳[v̂^j_k - α_k (∇f_j(v^j_k) - e^j_k ) ] - y^2 ≤v̂^j_k - α_k (∇f_j(v^j_k) - e^j_k ) - y^2= v̂^j_k - y^2 + α_k^2 ∇f_j(v^j_k) - e^j_k^2 - 2 α_k ( ∇f_j(v^j_k) - e^j_k )^T (v̂^j_k - y) Now we add the inequalities in Eq. <ref> over all agents j = 1, 2, …, n and use the expressions for η_k and ξ_k (Eq. <ref>, Eq. <ref>). Next, we use the boundedness of gradients (Assumption <ref>) and perturbations (Eq. <ref> for RSS-NB or Eq. <ref> for RSS-LB), to get the following inequality, η_k+1^2≤ξ_k^2 + ∑_j=1^n α_k^2 ∇f_j(v^j_k) - e^j_k^2 - 2 α_k ∑_j=1^n ( ∇f_j(v^j_k) - e^j_k )^T (v̂^j_k - y) ≤ξ_k^2 + ∑_j=1^n α_k^2 (∇f_j(v^j_k) + e^j_k)^2 - 2 α_k ∑_j=1^n ( ∇f_j(v^j_k) - e^j_k )^T (v̂^j_k - y) Triangle Inequality η_k+1^2≤ξ_k^2 + α_k^2 n (L +Δ)^2 - 2 α_k ∑_j=1^n (∇f_j(v^j_k) - e^j_k)^T (v̂^j_k - y) Assumption <ref> and Eq. <ref> or Eq. <ref> We use the consensus relationship used for information fusion (Eq. <ref> for RSS-NB, or Eq. <ref> for RSS-LB). We know that in D dimensions the consensus step can be rewritten using the Kronecker product of the D-dimensional identity matrix (I_D) and the doubly stochastic weight matrix (B_k) <cit.>. Consider the following notation of vectors.
We use bold font to denote a vector that is stacked by its coordinates. As an example, consider three vectors in ℝ^3 given by a = [a_x,a_y,a_z]^T, b = [b_x,b_y,b_z]^T, c = [c_x,c_y,c_z]^T. Let 𝐚 be a vector of a, b and c stacked by coordinates, then it is defined as 𝐚 = [a_x,b_x,c_x,a_y,b_y,c_y,a_z,b_z,c_z]^T. Similarly we can write stacked model parameter vector as, 𝐱_k = [x^1_k[1], x^2_k[1], …, x^n_k[1], x^1_k[2], x^2_k[2], …, x^n_k[2], …, x^1_k[D], …, x^n_k[D]]^T. Next, we write the consensus term using the new notation and Kronecker products and compare norms of both sides (2-norm),𝐯̂_k = (I_D ⊗ B_k) 𝐱_k 𝐯̂_k - 𝐲 = (I_D ⊗ B_k) (𝐱_k - 𝐲)𝐯̂_k - 𝐲_2^2= (I_D ⊗ B_k) (𝐱_k - 𝐲)_2^2≤(I_D ⊗ B_k)_2^2 (𝐱_k - 𝐲)_2^2 We use the property of eigenvalues of Kronecker product of matrices. The eigenvalues of I_D ⊗ B_k are essentially D copies of eigenvalues of B_k. Since B_k is a doubly stochastic matrix, its eigenvalues are upper bounded by 1. Recall that A_2 = √(λ_max (A^† A)) where A^† represents the conjugate transpose of matrix A and λ_max represents the maximum eigenvalue. Observe that I_D ⊗ B_k is a doubly stochastic matrix and (I_D ⊗ B_k)^†(I_D ⊗ B_k) is also doubly stochastic matrix since product of two doubly stochastic matrices is also doubly stochastic. Clearly, (I_D ⊗ B_k)_2^2 = λ_max((I_D ⊗ B_k)^†(I_D ⊗ B_k)) ≤ 1.[An alternate way to prove this inequality would be to follow the same process used to prove Eq. <ref> except that we start with squared terms and use the doubly-stochasticity of B_k. Detailed derivation in Appendix <ref>]ξ_k^2 = 𝐯̂_k - 𝐲_2^2 ≤(𝐱_k - 𝐲)_2^2 = η_k^2 Merging the inequalities in Eq. <ref> and Eq. <ref>, we get,η_k+1^2≤η_k^2 + α_k^2 n (L + Δ)^2- 2 α_k ∑_j=1^n (∇f_j(v^j_k) - e^j_k)^T (v̂^j_k - y)_Λ. Typically, at this step one would use convexity of f_j(x) to simplify the term Λ in Eq. <ref>. 
However, since the gradient of f_j(x) is perturbed by the noise e^j_k, we need to follow a few more steps before we arrive at the iterate lemma. Consider the fused state iterates v̂^j_k, the average v̅_k ≜ (1/n) ∑_j=1^n v̂^j_k and the deviation of the iterate from the average, q^j_k = v̂^j_k - v̅_k. We now derive a simple inequality that will be used later. We use the fact that x̅_k = v̅_k proved earlier in the appendix (Eq. <ref>). q^j_k = v̂^j_k - v̅_k = ∑_i=1^n B_k[j,i]x^i_k - v̅_k from Eq. <ref> (RSS-NB) or Eq. <ref> (RSS-LB)= ∑_i=1^n B_k[j,i]x^i_k - x̅_k x̅_k = v̅_k, Eq. <ref>≤∑_i=1^n B_k[j,i] x^i_k - x̅_k≤( ∑_i=1^n B_k[j,i] ) max_i∈Vx^i_k -x̅_k≤max_j ∈Vδ^j_k Eq. <ref> Note that similarly, we can derive another inequality that will be used later. ∑_j=1^n v̂^j_k - y = ∑_j=1^n ∑_i=1^n B_k[j,i]x^i_k - y from Eq. <ref> (RSS-NB) or Eq. <ref> (RSS-LB)= ∑_j=1^n ∑_i=1^n B_k[j,i] ( x^i_k - y ) B_k is row stochastic≤∑_j=1^n ∑_i=1^n B_k[j,i] x^i_k - y= ∑_i=1^n ( ∑_j=1^n B_k[j,i] ) x^i_k -y≤∑_i=1^n x^i_k - y B_k is column stochastic We use the gradient Lipschitzness assumption and write the following relation, ∇ f_j (v^j_k) = ∇f_j(v̅_k) + l^j_k, where l^j_k is the (vector) difference between the gradient computed at v^j_k (i.e. ∇ f_j(v^j_k)) and the gradient computed at v̅_k (i.e. ∇ f_j(v̅_k)). Next, we bound the vector l^j_k using the Lipschitzness of gradients, max_j ∈Vl^j_k = max_j ∈V∇ f_j (v^j_k) - ∇f_j(v̅_k), ≤max_j ∈V N v^j_k - v̅_k, Assumption <ref>≤max_j ∈V{N v̂^j_k - v̅_k +α_k N e^j_k}, Eq. <ref> or Eq. <ref> and Triangle Inequality≤ N ( max_j ∈Vq^j_k + α_k Δ) Eq. <ref> and e^j_k≤Δ We use the above expressions to bound the term Λ in Eq. <ref>. We use v̂^ j_k = v̅_k + q^j_k from Eq. <ref> and the gradient relation in Eq.
<ref> to get,Λ = -2 α_k ∑_j=1^n(∇f_j(v^j_k) - e^j_k)^T (v̂^j_k - y)= 2 α_k ∑_j=1^n [ (∇f_j(v̅_k) - e^j_k + l^j_k)^T (y - v̅_k - q^j_k) ]Λ = 2 α_k [T_1 + T_2 + T_3], where,T_1 = ∑_j=1^n ( ∇f_j(v̅_k) - e^j_k )^T (y - v̅_k), T_2= ∑_j=1^n ( ∇f_j(v̅_k) - e^j_k )^T (-q^j_k),andT_3 = ∑_j=1^n (l^j_k)^T(y - v̅_k - q^j_k) =∑_j=1^n (l^j_k)^T(y - v̂^j_k). Individually T_1, T_2 and T_3 can be bound as follows, T_1= ∑_j=1^n (∇f_j(v̅_k) - e^j_k )^T (y - v̅_k) = ∇ f(v̅_k)^T (y-v̅_k) Eq. <ref>, Eq. <ref>, ∑_j=1^n e^j_k = 0and ∑_j=1^n ∇ f_j(v̅_k) = ∇ f(v̅_k)≤ f(y) - f(v̅_k)f(x) is convexT_2= ∑_j=1^n ( ∇f_j(v̅_k) - e^j_k )^T (-q^j_k) ≤∑_j=1^n ∇f_j(v̅_k) - e^j_k(-q^j_k)Cauchy-Schwarz Inequality≤ (L+Δ) nmax_j ∈Vq^j_kTriangle Inequality and Eq. <ref>, Eq. <ref>, Assumption <ref>≤ (L+Δ) nmax_j ∈Vδ^j_kfrom Eq. <ref>T_3= ∑_j=1^n (l^j_k)^T (y - v̂^j_k) ≤max_j ∈Vl^j_k∑_j=1^n v̂^j_k - y≤N ( max_j ∈Vq^j_k + α_k Δ) ∑_j=1^n v̂^j_k - yfrom Eq. <ref>≤N ( max_j ∈Vδ^j_k + α_k Δ) ∑_j=1^n v̂^j_k - yfrom Eq. <ref>≤ N ( max_j ∈Vδ^j_k + α_k Δ)[∑_j=1^n x^j_k - y] from Eq. <ref>We further use 2 a≤ 1 + a^2 to bound term T_3.≤N/2( max_j ∈Vδ^j_k + α_k Δ)[∑_j=1^n ( 1 + x^j_k - y^2 ) ]2 a≤ 1 + a^2We combine the bounds on T_1, T_2 and T_3 (Eq. <ref>, <ref> and <ref>) to get,Λ ≤ 2 α_k (f(y) - f(v̅_k)) + 2 α_k n(L+Δ) max_j ∈Vδ^j_k + α_k N( max_j ∈Vδ^j_k + α_k Δ)[∑_j=1^n ( 1 + x^j_k - y^2 ) ]Λ ≤ -2 α_k ( f(v̅_k) - f(y) ) + 2 α_k n(L+Δ) max_j ∈Vδ^j_k+ α_k N ( max_j ∈Vδ^j_k + α_k Δ)[n + η_k^2 ] Eq. <ref> Recall from Eq. <ref>,η_k+1^2≤η_k^2 + α_k^2 n (L + Δ)^2- 2 α_k ∑_j=1^n (∇f_j(v^j_k) - e^j_k)^T (v̂^j_k - y)_Λ.We replace Λ with its bound from Eq. <ref>, and use the fact that x̅_k = v̅_k (Eq. 
<ref>) to replace f(v̅_k) with f(x̅_k), η_k+1^2≤η_k^2 + α_k^2 n (L + Δ)^2 -2 α_k ( f(v̅_k) - f(y) ) + 2 α_k n(L+Δ) max_j ∈Vδ^j_k+ α_k N ( max_j ∈Vδ^j_k + α_k Δ)[n + η_k^2 ]≤(1 + α_k N ( max_j ∈Vδ^j_k + α_k Δ)) η_k^2-2 α_k ( f(v̅_k) - f(y) ) + 2 α_k n(L+N/2+Δ) max_j ∈Vδ^j_k+ α_k^2 n N Δ+ α_k^2 n (L + Δ)^2 ≤(1 + F_k )η_k^2- 2 α_k (f(x̅_k) - f(y) ) + H_k, where F_k = α_k N ( max_j ∈Vδ^j_k + α_k Δ) and H_k = 2 α_k n (L+N/2+Δ) max_j ∈Vδ^j_k+α_k^2 n [NΔ +(L +Δ)^2 ]. We first state a claim about the asymptotic behavior of the iterates x^j_k and, correspondingly, of max_j δ^j_k. The claim is proved in Appendix <ref> after the Proof of Theorem <ref>.[Consensus]All agents asymptotically reach consensus, lim_k →∞max_j δ^j_k = 0 and lim_k →∞x^i_k - x^j_k = 0, ∀ i, j. See Appendix <ref>.§.§ Proof of Theorem <ref> We prove convergence using Lemma <ref>. We begin by using the relation between iterates given in Lemma <ref> with y = x^* ∈𝒳^*, and for k ≥ 1, η_k+1^2≤(1 + F_k )η_k^2 - 2 α_k (f(x̅_k) - f(y) ) + H_k We check if the above inequality satisfies the conditions in Lemma <ref>, viz. ∑_k=1^∞ F_k < ∞ and ∑_k=1^∞ H_k < ∞. F_k and H_k are defined in Lemma <ref> (Eq. <ref>). Note that F_k and H_k are non-negative real sequences. We first show that ∑_k = 1^∞α_k max_j ∈Vδ^j_k < ∞ using the expression for the state disagreement from the average given in Lemma <ref>. ∑_k = 1^∞ α_kmax_j ∈Vδ^j_k = α_1 max_j ∈Vδ^j_1 + ∑_k=1^∞α_k+1max_j ∈Vδ^j_k+1≤α_1 max_j ∈Vδ^j_1_U_0 + n θmax_i∈Vx^i_1∑_k=1^∞α_k+1β^k_U_1 + n θ (L+Δ) ∑_k=1^∞α_k+1∑_l=2^k β^k+1-lα_l-1_U_2 + 2 (L+Δ) ∑_k=1^∞α_k α_k+1_U_3(from Lemma <ref>) The first term U_0 is finite since max_j ∈Vδ^j_1 and α_1 are both finite. The second term U_1 can be shown to be convergent by using the ratio test. We observe that, lim sup_k →∞α_k+2β^k+1/α_k+1β^k = lim sup_k →∞α_k+2β/α_k+1 < 1 ⇒∑_k=1^∞α_k+1β^k < ∞, since α_k+1≤α_k and β < 1. Now we move on to show that U_2 is finite.
It follows from α_k≤α_l for l ≤ k, Lemma <ref>, and ∑_kα_k^2 < ∞, that ∑_k=1^∞α_k+1∑_l=2^k β^k+1-lα_l-1≤∑_k=1^∞∑_l=2^k β^k+1-lα_l-1^2 < ∞. U_3 is finite because U_3 ≤ 2(L+Δ)∑_k=1^∞α_k^2 < ∞. Since we have shown U_1 < ∞, U_2 < ∞, and U_3 < ∞, we conclude ∑_k=1^∞α_k max_j ∈Vδ^j_k < ∞. Clearly, ∑_k=1^∞ F_k< ∞ and ∑_k=1^∞ H_k< ∞, since we proved that ∑_k=1^∞α_k max_j ∈Vδ^j_k < ∞ and we know that ∑_kα_k^2 < ∞. We can now apply Lemma <ref> to Eq. <ref> and conclude ∑_k=1^∞ 2 α_k ( f(x̅_k) - f(x^*)) < ∞. We use ∑_k=1^∞ 2 α_k ( f(x̅_k) - f(x^*)) < ∞ to show the convergence of the iterate-average to the optimum. Since we know ∑_k=1^∞α_k = ∞, it follows directly that liminf_k →∞ f(x̅_k) = f(x^*) = f^* (an alternate proof for this statement is provided later in Appendix <ref>). Also note that Lemma <ref> states that η_k^2 has a finite limit. Let lim_k →∞η_k^2 = η_x^* (∀ x^* ∈X^*). lim_k →∞η_k^2= lim_k →∞∑_i=1^n x^i_k - x^*^2 = lim_k →∞∑_i=1^n x̅_k + δ^i_k - x^*^2 = lim_k →∞∑_i=1^n [ x̅_k - x^*^2 + δ^i_k^2 + 2 (x̅_k - x^*)^T δ^i_k ] = lim_k →∞[ n x̅_k - x^*^2 + ∑_i=1^n δ^i_k^2 + 2 (x̅_k - x^*)^T ( ∑_i=1^n δ^i_k ) ] = n lim_k →∞x̅_k - x^*^2 + lim_k →∞∑_i=1^n δ^i_k^2∑_i δ^i_k = 0 by definition of δ^j_k= n lim_k →∞x̅_k - x^*^2 ≜η_x^* lim_k →∞max_j ∈Vδ^j_k =0 from Claim <ref> From the statement above, we know lim_k →∞x̅_k - x^* = √(η_x^*/n). This, along with liminf_k →∞ f(x̅_k) = f(x^*), proves that x̅_k converges to a point in X^*. We know from Claim <ref> that the agents agree to a parameter vector asymptotically (i.e. x^j_k→ x^i_k,∀ i ≠ j as k →∞). Hence, all agents agree to the iterate average. This, along with the convergence of the iterate-average to an optimal solution, gives us that all agents converge to a point in the optimal set 𝒳^* (i.e. x^j_k→ x^* ∈𝒳^*,∀ j, as k →∞). This completes the proof of Theorem <ref>.
{f(x̅_k)} is a sequence of real numbers and we know that liminf always exists for this sequence (it is either a real number or ±∞). Let us assume that lim inf_k →∞ f(x̅_k) = f(x^*) + δ for some δ > 0. Note that δ cannot be less than 0 since f(x^*) is the minimum. Now, from the definition of lim inf, ∀ϵ > 0, ∃ K_0 ∈ℕ such that ∀ k ≥ K_0, lim inf_l →∞ f(x̅_l) - ϵ ≤ f(x̅_k), f(x^*) + δ - ϵ ≤ f(x̅_k). Taking ϵ = δ/2, ∃ K_0 such that, ∀ k ≥ K_0, f(x^*) + δ/2≤ f(x̅_k), i.e., f(x̅_k) - f(x^*) ≥δ/2. Let us consider C ≜∑_k=1^∞α_k (f(x̅_k) - f(x^*)) < ∞. C ≜∑_k=1^∞α_k (f(x̅_k) - f(x^*))= ∑_k=1^K_0α_k (f(x̅_k) - f(x^*)) + ∑_k=K_0+1^∞α_k (f(x̅_k) - f(x^*)) ≥∑_k=1^K_0α_k (f(x̅_k) - f(x^*))_T1 + ∑_k=K_0+1^∞α_k δ/2_T2 T1 is finite since it is a sum of finitely many finite terms. T2 grows unbounded since ∑_k = K_0+1^∞α_k = ∞. Substituting both T1 and T2 in Eq. <ref>, we get C = ∞, a contradiction. Hence δ = 0, implying lim inf_k →∞ f(x̅_k) = f(x^*) = f^*. §.§ Proof of Claim <ref> (Consensus Claim) We begin with the iterate disagreement relation in Lemma <ref>, max_j ∈Vδ^j_k+1 ≤n θβ^kmax_i∈Vx^i_1_V_1 + n θ (L+Δ) ∑_l=2^k β^k+1-lα_l-1_V_2 + 2 α_k ( L+ Δ)_V_3 The first term V_1 decreases exponentially with k. Hence, for any ϵ > 0, ∃ K_1 = ⌈log_βϵ/3 n θmax_i∈Vx^i_1⌉ such that, ∀ k > K_1, we have V_1 < ϵ/3. For a given ξ = ϵ (1 - β)/6 β n θ (L+Δ), ∃ K_2 such that α_k < ξ, ∀ k ≥ K_2, due to the non-increasing property of α_k and ∑_k α_k^2 < ∞. Observe that, ∑_i = 1^k-1( α_i β^k-i)=( α_1 β^k-1 +… + α_K_2-1β^k-K_2+1)_A + (α_K_2β^k-K_2 + … + α_k-1β^1 )_B We can bound the terms A and B. A=α_1 β^k-1 + α_2 β^k-2 + … + α_K_2-1β^k-K_2+1≤α_1 (β^k-1 +… + β^k-K_2+1)α_1 ≥α_i∀ i ≥ 1 ≤α_1 β^k-K_2+1(1 - β^K_2-1/1-β) ≤α_1 β^k-K_2+1/1-ββ < 1 B= α_K_2β^k-K_2 + … + α_k-1β^1< ξβ( 1-β^k-K_2/1-β) ≤ξβ/1-βα_i < ξ,∀ i ≥ K_2 The right side of the inequality in Eq. <ref> is monotonically decreasing in k (β < 1) with limit 0 as k →∞. Hence ∃ K_3 > K_2 such that A < ϵ/6n θ (L+Δ), ∀k ≥ K_3.
We know α_i < ξ = ϵ (1 - β)/6 β n θ (L+Δ) for all i ≥K_2. Hence, following Eq. <ref>, B ≤ξβ/1-β = ϵβ (1 - β)/6 (1-β) β n θ (L+Δ) <ϵ/6n θ (L+Δ), for all k ≥ K_2. Hence, ∃ K_4 = max{K_2,K_3} such that V_2 = n θ (L+Δ) (A+B) < ϵ/3 for all k > K_4. The third term of Eq. <ref>, V_3, decreases at the same rate as α_k. Hence, for any ϵ>0, ∃ K_6 = min{k | α_k < ϵ/6 (L+Δ)}, such that ∀ k > K_6, we have V_3 < ϵ/3. We have convergence based on the ϵ-δ definition of limits. For any ϵ>0, there exists K_max = max{K_1, K_4, K_6} such that max_j ∈Vδ^j_k≤ V_1 + V_2 + V_3 < ϵ for all k ≥ K_max. This implies that, lim_k →∞max_j ∈Vδ^j_k = lim_k →∞max_j ∈Vx^j_k - x̅_k≤ 0. Since max_j ∈Vδ^j_k≥ 0, the above statement implies lim_k →∞max_j ∈Vδ^j_k = lim_k →∞max_j ∈Vx^j_k - x̅_k = 0. Now note that lim_k →∞max_j ∈Vx^j_k - x̅_k = 0 implies lim_k →∞x^j_k - x̅_k = 0, ∀ j. Hence, we can also show that the following relationship holds, lim_k →∞x^j_k - x^i_k = lim_k →∞(x^j_k - x̅_k) + (x̅_k - x^i_k)≤lim_k →∞ (x^j_k - x̅_k + x̅_k - x^i_k) Triangle Inequality= lim_k →∞x^j_k - x̅_k + lim_k →∞x̅_k - x^i_k= 0 Since x^j_k - x^i_k≥ 0 for all i,j, the above statement implies lim_k →∞x^j_k - x^i_k = 0.§.§ Proof of Privacy Result (Theorem <ref>) Proof for P2. (Necessity) Let us assume that κ(G)≤ f. Hence, if f nodes are deleted (along with their edges) the graph becomes disconnected to form components I_1 and I_2 (see Figure <ref>). Now suppose that the f deleted nodes are compromised by an adversary. Agents generate correlated noise functions using Eq. <ref>. All the perturbation functions s^j,i(x) shared by nodes in I_1 (with agents outside I_1) are observed by the coalition. Hence, the true objective function is easily estimated by using, ∑_l ∈ I_1 f_l(x) = ∑_l ∈ I_1f̂_l(x) - (∑_j = 1, i∈ I_1^j=f s^j,i(x) - ∑_j = 1, i∈ I_1^j=f s^i,j(x) ). This gives us a contradiction. Hence, κ(G)>f is necessary.
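The necessity attack above can be sketched numerically. The 5-node graph, seed, and variable names below are hypothetical illustrations (not from the paper), and each objective function is reduced to a single scalar coefficient so that the noise functions s^j,i(x) become scalars:

```python
import numpy as np

# Hypothetical graph with vertex connectivity 1: deleting node 2
# disconnects I_1 = {0, 1} from I_2 = {3, 4}.
rng = np.random.default_rng(1)
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (2, 4), (3, 4)]
f = rng.random(5)                      # true coefficients f_i

s = {}                                 # directed noise s[(j, i)] sent j -> i
for (a, b) in edges:
    s[(a, b)] = rng.random()
    s[(b, a)] = rng.random()

# Obfuscation: f_hat_i = f_i + (noise received) - (noise sent).
f_hat = f.copy()
for (j, i), val in s.items():
    f_hat[i] += val
    f_hat[j] -= val

# Node 2 is compromised, so every noise function crossing the cut is
# known to the coalition; noise exchanged inside I_1 cancels in the sum.
I1 = [0, 1]
cross_in = sum(s[(2, i)] for i in I1)    # entering I_1 from outside
cross_out = sum(s[(i, 2)] for i in I1)   # leaving I_1
recovered = f_hat[I1].sum() - cross_in + cross_out

assert np.isclose(recovered, f[I1].sum())
```

Since all noise entering and leaving I_1 passes through the single compromised cut node, the coalition recovers ∑_l ∈ I_1 f_l(x) exactly, which is the privacy breach the necessity argument exploits.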
(Sufficiency) We present a constructive method to show that given an execution (and corresponding observations), any estimate of objective functions made by the adversary is equally likely. We conservatively assume that the adversary can observe the obfuscated functions, f̂_i(x), the private objective functions of corrupted nodes, f_a(x) (a ∈𝒜), and the arbitrary functions transmitted from and received by each of the coalition members, s^a,J and s^K,a (J ∈𝒩_a and K such that a ∈𝒩_K, for all a ∈𝒜). Since the corrupted nodes also follow the same protocol (Algorithm <ref>), the adversary is also aware of the fact that the private objective functions have been obfuscated by the function sharing approach (Eq. <ref>). f̂_i(x) = f_i(x) + ∑_k:i ∈𝒩_k s^k,i(x) - ∑_j ∈𝒩_i s^i,j(x) Clearly, one can rewrite this transformation using the signed incidence matrix of the bidirectional graph 𝒢 <cit.> <cit.>. 𝐟̂ = f + 𝐁S, where 𝐟̂ = [ f̂_1(x), f̂_2(x), …, f̂_S(x) ]^T is an S × 1 vector of obfuscated functions f̂_i(x) for i = {1, 2, …, S}, and f = [ f_1(x), f_2(x), …, f_S(x) ]^T is an S × 1 vector of private (true) objective functions, f_i(x). B = [ B_C, -B_C ], where B_C (of dimension S × |ℰ|/2) is the incidence matrix of a directed graph obtained by considering only one of the directions of every bidirectional edge in graph 𝒢[This represents an orientation of graph 𝒢 <cit.>.]. Each column of 𝐁 represents a directed communication link between any two agents. Hence, any bidirectional edge between agents i and j is represented as two directed links, i to j ((i,j) ∈ℰ) and j to i ((j,i) ∈ℰ), and corresponds to two columns in 𝐁. S represents an |ℰ| × 1 vector consisting of the functions s^i,j(x). Each entry in the vector S, a function s^i,j(x), corresponds to a column of 𝐁 which, in turn, corresponds to the link (i,j) ∈ℰ; and similarly, the function s^j,i(x) corresponds to a different column of 𝐁 which, in turn, corresponds to the link (j,i) ∈ℰ. Note that the ℓ^th row of the vector S corresponds to the ℓ^th column of the incidence matrix 𝐁.
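The relation 𝐟̂ = f + 𝐁S can be sketched numerically. The 4-node cycle and all names below are hypothetical illustration choices; the key structural fact is that every column of the signed incidence matrix contains one +1 and one -1, so 1^T𝐁 = 0 and obfuscation preserves the aggregate function:

```python
import numpy as np

# Build the signed incidence matrix B for a hypothetical 4-node cycle,
# with one column per directed link (j, i): +1 at receiver i, -1 at sender j.
rng = np.random.default_rng(2)
n = 4
directed_edges = []
for (a, b) in [(0, 1), (1, 2), (2, 3), (3, 0)]:
    directed_edges += [(a, b), (b, a)]        # both directions of each edge

B = np.zeros((n, len(directed_edges)))
for col, (j, i) in enumerate(directed_edges):
    B[i, col] = +1.0
    B[j, col] = -1.0

f = rng.random(n)                    # true functions, one coefficient each
S = rng.random(len(directed_edges))  # arbitrary noise functions s^{j,i}
f_hat = f + B @ S                    # obfuscated functions

assert np.allclose(B.sum(axis=0), 0.0)   # each column sums to zero
assert np.isclose(f_hat.sum(), f.sum())  # aggregate objective preserved
```

This zero-column-sum property is why function sharing leaves the global objective ∑_i f_i(x) intact while hiding the individual f_i(x).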
We will show that two different sets of true objective functions (𝐟 and 𝐟^o) and, correspondingly, two different sets of arbitrary functions (S and 𝐆) can lead to exactly the same execution and observations for the adversary[f and f^o are dissimilar and arbitrarily different.]. We want to show that both these cases can result in the same obfuscated objective functions. That is, 𝐟̂ =f +BS =f^o +BG. We will show that given any set of private objective functions 𝐟^o, by suitably selecting arbitrary functions g^i,j(x) corresponding to links incident at “good" agents, it is possible to make 𝐟^o indistinguishable from the original private objective functions 𝐟, solely based on the execution observed by the corrupted nodes. We do so by determining entries of G, which are arbitrary functions that are dissimilar from s^i,j(x) when i and j are both “good". We design 𝐆 such that the obfuscated objective functions 𝐟̂ are the same for both situations. Since corrupted nodes observe the arbitrary functions corresponding to edges incident to and from them, we set the arbitrary functions corresponding to edges incident on corrupted nodes as g^k,a = s^k,a and the arbitrary functions corresponding to edges incident away from the corrupted nodes as g^a,j = s^a,j (where k: a ∈𝒩_k and j ∈𝒩_a, for all a ∈𝒜). Now, we define 𝐆̃ as the vector containing all elements of 𝐆 except those corresponding to the edges incident to and from the corrupted nodes[The only entries of 𝐆 that are undecided at this stage are included in 𝐆̃. These are functions g^i,j such that i, j are both “good".]. Similarly, we define 𝐁̃ to be the new incidence matrix obtained after deleting all edges that are incident on the corrupted nodes (i.e. deleting the columns corresponding to the links incident on corrupted nodes from the old incidence matrix 𝐁). We eliminate g^a,j(x) and g^k,a(x) (∀ a ∈𝒜) by subtracting them from [𝐟̂ - 𝐟^𝐨] (in Eq.
<ref>) to get the effective function difference, denoted by [𝐟̂ - 𝐟^𝐨]_ eff, as follows, [𝐟̂ - 𝐟^𝐨] = 𝐁𝐆 = 𝐟 - 𝐟^o +𝐁S, … (From Eq. <ref>) [𝐟̂ - 𝐟^𝐨]_ eff= [𝐟̂ - 𝐟^o] - ∑_a ∈𝒜[ ∑_k:a ∈𝒩_k g^k,a(x) - ∑_j∈𝒩_a g^a,j(x) ] = 𝐁̃𝐆̃, where, if d entries of 𝐆 were fixed[Total number of edges incident to and from corrupted nodes is d. We fixed them to be the same as corresponding entries from S, since the coalition can observe them.] then 𝐆̃ is a (|ℰ|-d) × 1 vector and 𝐁̃ is a matrix with dimension S × (|ℰ| - d). The columns deleted from 𝐁 correspond to the edges that are incident to and from the corrupted nodes. Hence, 𝐁̃ represents the incidence of a graph with these edges deleted. We know from the f-admissibility of the graph that 𝐁̃ connects all the non-adversarial agents into a connected component[The adversarial nodes become disconnected due to the deletion of edges incident on corrupted nodes (previous step).]. Since the remaining edges form a connected component, the edges can be split into two groups: a group with edges that form a spanning tree over the good nodes (agents), and a group with all other edges (see Remark <ref> and Figure <ref>). Let 𝐁̃_ ST represent the incidence matrix[Its columns correspond to the edges that form the spanning tree.] of the spanning tree and 𝐆̃_ ST the arbitrary functions corresponding to the edges of the spanning tree. 𝐁̃_ EE represents the incidence matrix formed by all other edges and 𝐆̃_ EE represents the arbitrary functions related to all other edges. [𝐟̂ - 𝐟^𝐨]_ eff = [ 𝐁̃_ ST 𝐁̃_ EE ][ 𝐆̃_ ST; 𝐆̃_ EE ]=𝐁̃_ ST𝐆̃_ ST + 𝐁̃_ EE𝐆̃_ EE. We now arbitrarily assign functions to the elements of 𝐆̃_ EE and then compute the functions 𝐆̃_ ST. We know that the columns of 𝐁̃_ ST are linearly independent, since 𝐁̃_ ST is the incidence matrix of a spanning tree (cf. Lemma 2.5 in <cit.>). Hence, the left pseudoinverse[A^† represents the pseudoinverse of matrix A.]
of 𝐁̃_ ST exists and 𝐁̃_ ST^ †𝐁̃_ ST = 𝕀, giving us the solution for 𝐆̃_ ST [An alternate way to look at this is to see that 𝐁̃_ ST^T 𝐁̃_ ST represents the edge Laplacian <cit.> of the spanning tree. The edge Laplacian of an acyclic graph is non-singular, and this also proves that the left pseudoinverse of 𝐁̃_ ST exists.]. Moreover, no adversary can estimate any sum of functions associated with a strict subset of “good" agents. 𝐆̃_ ST = 𝐁̃_ ST ^ †[ [𝐟̂ - 𝐟^𝐨]_ eff - 𝐁̃_ EE𝐆̃_ EE]. Using the construction shown above, for any f^o we can construct G such that the execution as seen by the corrupted nodes is exactly the same as in the original problem, where the objective is f and the arbitrary functions are S. A strong PC coalition cannot distinguish between the two executions involving f^o and f. Hence, no coalition can estimate f_i(x) (i ∉𝒜).  Proof for P1. (Necessity) The proof of necessity here (for P1) follows the proof of necessity for P2. We prove this statement by contradiction. Assume that a node i has degree f and we have |𝒜| = f adversaries. Consider that I_1 = i in Figure <ref>. Agents generate correlated noise functions using Eq. <ref>. All the perturbation functions s^j,i(x) shared by nodes in I_1 (with agents outside I_1) are observed by the adversary. Hence, the true objective function is easily estimated using, f_i(x) = f̂_i(x) - (∑_j = 1^j=f s^j,i(x) - ∑_j = 1^j=f s^i,j(x) ). This gives us a contradiction. Hence, degree > f is necessary. (Sufficiency) We can use a construction similar to the sufficiency proof for P2. Instead of considering all objective functions 𝐟, we consider f_i(x). We present an example for the construction used in the above proof. Let us consider a system of S=7 agents communicating under a topology with κ(G)>2 (see Figure <ref>). An adversary with two corrupted nodes (𝒜 = {6, 7}, f=2) is a part of the system.
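The core of this construction — fixing 𝐆̃_EE arbitrarily and solving for 𝐆̃_ST via the left pseudoinverse — can be checked numerically. Below is a small self-contained sketch of our own (not from the paper); scalar weights stand in for the arbitrary functions, and the graph over the good agents is a toy instance.

```python
import numpy as np

# Toy instance of the spanning-tree solve: scalar weights stand in for the
# arbitrary functions g^{i,j}(x) over a 4-node "good" subgraph.
rng = np.random.default_rng(1)
n_good = 4

def incidence(links, n):
    # Column for link (i, j): -1 at sender i, +1 at receiver j.
    B = np.zeros((n, len(links)))
    for c, (i, j) in enumerate(links):
        B[i, c], B[j, c] = -1.0, 1.0
    return B

tree_links = [(0, 1), (1, 2), (1, 3)]   # spanning tree over the good agents
extra_links = [(2, 3), (3, 0)]          # all remaining links
B_st = incidence(tree_links, n_good)
B_ee = incidence(extra_links, n_good)

# Any realisable effective difference lies in the column space of the full
# incidence matrix (its entries sum to zero across agents).
full = np.hstack([B_st, B_ee])
target = full @ rng.normal(size=full.shape[1])

g_ee = rng.normal(size=len(extra_links))   # chosen arbitrarily
# Left pseudoinverse exists because spanning-tree columns are independent.
g_st = np.linalg.pinv(B_st) @ (target - B_ee @ g_ee)

# The effective difference is matched exactly, whatever g_ee was chosen.
assert np.allclose(B_st @ g_st + B_ee @ g_ee, target)
```

The key point the sketch illustrates is that the extra-edge functions are free parameters: for any choice of them, the spanning-tree functions absorb the remainder exactly, which is what makes the two executions indistinguishable.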
We can divide the task of constructing 𝐆 into three steps: * Fix g^a,l and g^l,a (links incident on corrupted nodes) to be the corresponding entries in S, * Arbitrarily select the functions corresponding to non-spanning-tree edges (𝐆_ EE), and * Solve for the functions corresponding to the spanning tree (𝐆_ ST) using Eq. <ref>. We first follow Step 1 and fix g^k,a = s^k,a and g^a,j = s^a,j (where k: a ∈𝒩_k and j ∈𝒩_a, for all a ∈𝒜). Step 1 follows from the fact that the adversary observes s^k,a and s^a,j, and hence they need to be the same in both executions. This is followed by substituting the known entries in 𝐆 and subtracting them from the left-hand side, as shown in Eq. <ref>. This corresponds to the deletion of all incoming and outgoing edges from the corrupted nodes. The incidence matrix of this new graph is denoted by 𝐁̃. The edges in the new graph can be decomposed into two groups: a set containing edges that form a spanning tree and a set that contains all other edges. This is seen in Figure <ref>, where the red edges are all the remaining links (incidence matrix 𝐁̃_ EE), and Figure <ref>, where the green edges form a spanning tree (incidence matrix 𝐁̃_ ST) with Agent 1 as the root and all other “good" agents as its leaves (agents 2, 3, 4, 5). §.§ Proof of Theorem <ref> The proof of Theorem <ref> follows from Lemma <ref> and a few elementary results on sequences and series. We begin our analysis by considering the time-weighted average of the state, x^j_T = ∑_k=1^T α_k x^j_k/∑_k=1^T α_k, and noting that its network average satisfies x̅_T = 1/n∑_j=1^n x^j_T = ∑_k=1^T α_k x̅_k/∑_k=1^T α_k, where x̅_k = 1/n∑_j=1^n x^j_k. Since f(x) is convex, we get, f(x̅_T) - f^* = f(∑_k=1^T α_k x̅_k/∑_k=1^T α_k) - f^* ≤∑_k=1^T α_k f(x̅_k)/∑_k=1^T α_k - f^* = ∑_k=1^T α_k (f(x̅_k) - f^*)/∑_k=1^T α_k Next, we take y ∈X^* in Lemma <ref> and use it to bound the expression in Eq. <ref>. f(x̅_T) - f^* ≤∑_k=1^T((1+F_k)η^2_k - η^2_k+1 + H_k)/2∑_k=1^T α_k Canceling the telescoping terms in Eq.
<ref>, we get,
f(x̅_T) - f^*≤η_1^2 - η_T+1^2 + ∑_k=1^T (F_k η^2_k + H_k)/2∑_k=1^T α_k≤η_1^2 + ∑_k=1^T (F_k η^2_k + H_k)/2∑_k=1^T α_k
If α_k = 1/√(k), we have, from the comparison test, ∑_k=1^T α_k ≥√(T). This gives us, from Eq. <ref>,
f(x̅_T) - f^* ≤η_1^2 + ∑_k=1^T (F_k η^2_k + H_k)/2√(T)
Let us define the maximum value of η_k^2 as D_0, i.e. D_0 ≜ n max_x,y ∈X‖x - y‖^2. Note that due to the compactness of X⊆ℝ^D, the bound D_0 is finite.
f(x̅_T) - f^* ≤D_0 + D_0∑_k=1^T F_k + ∑_k=1^T H_k/2√(T)
Next we bound ∑_k=1^T F_k and ∑_k=1^T H_k.
∑_k=1^T F_k = N ∑_k=1^T α_k max_j δ^j_k + NΔ∑_k=1^T α_k^2 = N ∑_k=1^T α_k max_j δ^j_k + NΔ∑_k=1^T 1/k≤ N ∑_k=1^T α_k max_j δ^j_k + N Δ (log(T) + 1)    (since ∑_k=1^T 1/k < log(T)+1)
∑_k=1^T H_k = 2n(L+N/2+Δ) ∑_k=1^T α_k max_j δ^j_k + n[(L+Δ)^2 + N Δ] ∑_k=1^T α_k^2 = 2n(L+N/2+Δ) ∑_k=1^T α_k max_j δ^j_k + n[(L+Δ)^2 + N Δ] ∑_k=1^T 1/k≤ 2n(L+N/2+Δ) ∑_k=1^T α_k max_j δ^j_k + n[(L+Δ)^2 + N Δ] (log(T)+1)
We use Lemma <ref> to bound ∑_k=1^T α_k max_j δ^j_k.
∑_k=1^Tα_k max_j δ^j_k = α_1 max_j δ^j_1 + ∑_k=1^T-1α_k+1max_j δ^j_k+1≤α_1 max_j δ^j_1 + nθmax_i ‖x^i_1‖∑_k=1^T-1α_k+1β^k + n θ (L+Δ) ∑_k=1^T-1( α_k+1∑_l=2^kβ^k+1-lα_l-1) + 2(L+Δ) ∑_k=1^T-1α_k+1α_k ≤α_1 max_j δ^j_1 + nθmax_i ‖x^i_1‖∑_k=1^T-1α_k+1β^k + n θ (L+Δ) ∑_k=1^T-1( α_k+1∑_l=2^kβ^k+1-lα_l-1) + 2(L+Δ) ∑_k=1^T-1α^2_k ≤α_1 max_j δ^j_1 + n θ C_0 max_i ‖x^i_1‖ + n θ (L+Δ) C_1 (log(T) + 1) + 2(L+Δ) (log(T-1)+1) ≤α_1 max_j δ^j_1 + n θ C_0 max_i ‖x^i_1‖ + 2(L+Δ) + n θ (L+Δ) C_1 + ( n θ (L+Δ) C_1 + 2(L+Δ) ) log(T)
where C_0 = ∑_k=1^T-1α_k+1β^k and C_1 = β (1-β^T-1)/(1-β). The existence of C_0 and the bound on C_1 are established in the next few lines. The existence of C_0 can be proved using the ratio test for series convergence. Since α_k+1≤α_k and β < 1, we get,
lim sup_k →∞α_k+2β^k+1/α_k+1β^k = lim sup_k →∞α_k+2β/α_k+1 < 1 ⇒∑_k=1^∞α_k+1β^k < ∞.
Note that C_0 ≤∑_k=1^∞α_k+1β^k < ∞; hence, C_0 is a finite constant. Next, we estimate a bound on ∑_k=1^T-1( α_k+1∑_l=2^kβ^k+1-lα_l-1).
∑_k=1^T-1( α_k+1∑_l=2^kβ^k+1-lα_l-1) ≤∑_k=1^T-1( ∑_l=2^kβ^k+1-lα^2_l-1)    (α_k+1≤α_l-1, ∀ l ≤ k)
= ∑_k=1^T-1( ∑_l=2^kβ^k+1-l/l-1)    (α^2_l-1 = 1/(l-1))
= ∑_k=1^T-1(∑_j=1^T-1-kβ^j/k)    (rearranging terms)
= ∑_k=1^T-1(β1-β^T-k/1-β/k)    (sum of the geometric series: ∑_j=1^T-1-kβ^j = β1-β^T-k/1-β)
≤β1-β^T-1/1-β∑_k=1^T-11/k
≤ C_1 log(T-1) + C_1    (C_1 ≜β1-β^T-1/1-β and ∑_k=1^T-11/k≤log(T-1)+1)
≤ C_1 log(T) + C_1    (log(T-1) < log(T))
We use the bound on ∑_k=1^T α_k max_j ∈Vδ^j_k to get the bound on ∑_k=1^T F_k (in Eq. <ref>) and ∑_k=1^T H_k (in Eq. <ref>).
∑_k=1^T F_k≤ N ∑_k=1^T α_k max_j δ^j_k + N Δ (log(T) + 1) ≤ C_2 + ( C_3 + C_4 )log(T)
where the constants C_2, C_3 and C_4 are defined as,
C_2= NΔ + N (α_1 max_j δ^j_1 + nθ C_0 max_i ‖x^i_1‖ + n θ (L+Δ) C_1 + 2(L+Δ) ), C_3= NΔ, C_4= 2N(L+Δ) + Nn θ (L+Δ) C_1.
Next we construct a bound on ∑_k=1^T H_k.
∑_k=1^T H_k≤ 2n(L+N/2+Δ) ∑_k=1^T α_k max_j δ^j_k + n[(L+Δ)^2 + N Δ] (log(T)+1) ≤ C_5 + (C_6+C_7) log(T)
where the constants C_5, C_6 and C_7 are defined as,
C_5= 2n(L+N/2+Δ) ( α_1 max_j δ^j_1 + nθ C_0 max_i ‖x^i_1‖ + n θ (L+Δ) C_1 + 2(L+Δ) ) + n[(L+Δ)^2 + N Δ], C_6= n[(L+Δ)^2 + N Δ], C_7= 2n(L+N/2+Δ) ( n θ (L+Δ) C_1 + 2(L+Δ)).
We use the bounds in Eq. <ref> and Eq. <ref> and combine them with the finite-time relation in Eq. <ref>.
f(x̅_T) - f^* ≤C_8 + (C_9 + C_10)log(T)/2√(T)
where, C_8= D_0 C_2+C_5+D_0, C_9= D_0C_3+C_6, C_10 = D_0 C_4+C_7. Next we use the Lipschitzness of f(x) to arrive at the statement of Theorem <ref>.
f(x^j_T) - f^*= f(x^j_T) - f(x̅_T) + f(x̅_T) - f^* ≤ L ‖x^j_T - x̅_T‖ + C_8 + (C_9 + C_10)log(T)/2√(T)≤ L [ ∑_k=1^T α_k‖x^j_k - x̅_k‖/∑_k=1^T α_k] + C_8 + (C_9 + C_10)log(T)/2√(T)≤ L [ ∑_k=1^T α_kmax_j δ^j_k/∑_k=1^T α_k] + C_8 + (C_9 + C_10)log(T)/2√(T)≤[ C_11 + C_12log(T)/√(T)] + C_8 + (C_9 + C_10)log(T)/2√(T)
where, C_11 = Lα_1 max_j δ^j_1 + nθ L C_0 max_i ‖x^i_1‖ + 2L(L+Δ), C_12 = n L θ (L+Δ) C_1 + 2L(L+Δ). Observe that C_9 = 𝒪(Δ^2).
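The proof above leans on a few elementary series facts: ∑_k=1^T 1/√k ≥ √T, ∑_k=1^T 1/k < log(T)+1 (for T ≥ 2), and the geometric tail bound behind C_1. These can be spot-checked numerically; the script below is our own sanity check, with β chosen arbitrarily in (0, 1).

```python
import math

# Spot-check of the elementary series bounds used in the proof,
# with alpha_k = 1/sqrt(k) and an arbitrary beta in (0, 1).
beta = 0.7
for T in [2, 10, 100, 1000]:
    # Comparison-test bound: sum of alpha_k dominates sqrt(T).
    assert sum(1 / math.sqrt(k) for k in range(1, T + 1)) >= math.sqrt(T)
    # Harmonic-sum bound (strict for T >= 2).
    harm = sum(1 / k for k in range(1, T + 1))
    assert harm < math.log(T) + 1
    # Every inner geometric sum is dominated by C_1 = beta(1-beta^{T-1})/(1-beta).
    C1 = beta * (1 - beta ** (T - 1)) / (1 - beta)
    for k in range(1, T):
        geo = sum(beta ** j for j in range(1, T - k))   # sum_{j=1}^{T-1-k} beta^j
        assert geo <= C1 + 1e-12
```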
We can rewrite the result as,
f(x^j_T) - f^* = O( (1+Δ^2) log(T)/√(T))
§.§ Alternate Proof of Eq. <ref> We begin with the consensus update equation in Eq. <ref> (RSS-NB) and Eq. <ref> (RSS-LB), and then subtract a vector y ∈𝒳 from both sides.
v̂^i_k = ∑_j=1^n B_k[i,j] x^j_k
v̂^i_k - y= ∑_j=1^n B_k[i,j] (x^j_k - y)    (because ∑_j=1^n B_k[i,j] = 1)
Let us define ẑ^i_k = v̂^i_k - y and z^i_k = x^i_k - y, ∀ i = {1, 2, …, n}. Then we can rewrite the above equation as,
ẑ^i_k = ∑_j=1^n B_k[i,j] z^j_k
We now take the norm of both sides of the equality and use the fact that the norm of a sum is at most the sum of the norms.
‖ẑ^i_k‖ = ‖∑_j=1^n B_k[i,j] z^j_k‖≤∑_j=1^n B_k[i,j] ‖z^j_k‖    (since B_k[i,j] ≥ 0)
Squaring both sides of the above inequality, followed by algebraic expansion, we get,
‖ẑ^i_k‖^2≤(∑_j=1^n B_k[i,j] ‖z^j_k‖)^2 = ∑_j=1^n B_k[i,j]^2 ‖z^j_k‖^2 + 2 ∑_m<j B_k[i,j]B_k[i,m]‖z^j_k‖‖z^m_k‖
Now we use the property a^2 + b^2 ≥ 2 a b (for any a, b) on the latter term of the expansion above,
‖ẑ^i_k‖^2≤∑_j=1^n B_k[i,j]^2 ‖z^j_k‖^2 + ∑_m < j B_k[i,j]B_k[i,m](‖z^j_k‖^2 + ‖z^m_k‖^2).
Rearranging and using the row stochasticity of the B_k matrix, we get,
‖ẑ^i_k‖^2≤∑_j=1^n [ ‖z^j_k‖^2 (B_k[i,j]^2 + ∑_m≠ j B_k[i,j]B_k[i,m] ) ] ≤∑_j=1^n [ ‖z^j_k‖^2 (B_k[i,j] ( B_k[i,j] + ∑_m≠ j B_k[i,m] ) ) ] ≤∑_j=1^n B_k[i,j] ‖z^j_k‖^2    (row stochasticity of B_k gives B_k[i,j] + ∑_m≠ j B_k[i,m] = 1)
Summing the inequality over all servers, i = 1, 2, …, n,
∑_i=1^n ‖ẑ^i_k‖^2≤∑_i=1^n ∑_j=1^n B_k[i,j] ‖z^j_k‖^2 = ∑_j=1^n (‖z^j_k‖^2 [ ∑_i=1^n B_k[i,j] ] ) ≤∑_j=1^n ‖z^j_k‖^2    (column stochasticity of B_k gives ∑_i=1^n B_k[i,j] = 1)
This gives us Eq. <ref>,
ξ_k^2 = ∑_j=1^n ‖v̂^j_k - y‖^2 ≤∑_j=1^n ‖x^j_k - y‖^2 = η_k^2
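The chain of inequalities above uses only the double stochasticity of B_k. The following sanity check is our own toy construction: a convex combination of permutation matrices is doubly stochastic (Birkhoff–von Neumann), and the summed squared distance to any reference point y cannot increase under the consensus step.

```python
import numpy as np

# Averaging with a doubly stochastic matrix cannot increase the summed
# squared distance to any y (the non-expansiveness proved above).
rng = np.random.default_rng(2)
n, d = 5, 3

# Doubly stochastic B_k: convex combination of identity and a cyclic shift.
P = np.roll(np.eye(n), 1, axis=0)
B = 0.6 * np.eye(n) + 0.4 * P
assert np.allclose(B.sum(axis=0), 1) and np.allclose(B.sum(axis=1), 1)

x = rng.normal(size=(n, d))     # row i holds the iterate x^i_k
y = rng.normal(size=d)
v_hat = B @ x                   # consensus step: v-hat^i_k = sum_j B[i,j] x^j_k

lhs = ((v_hat - y) ** 2).sum()  # xi_k^2
rhs = ((x - y) ** 2).sum()      # eta_k^2
assert lhs <= rhs + 1e-12
```

Row stochasticity drives the per-row Jensen step, while column stochasticity is what makes the sum over agents close; dropping either would break the inequality.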
When a recurrent neural network language model is used for caption generation, the image information can be fed to the neural network either by directly incorporating it in the RNN – conditioning the language model by `injecting' image features – or in a layer following the RNN – conditioning the language model by `merging' image features. While both options are attested in the literature, there is as yet no systematic comparison between the two. In this paper we empirically show that it is not especially detrimental to performance whether one architecture is used or another. The merge architecture does have practical advantages, as conditioning by merging allows the RNN's hidden state vector to shrink in size by up to four times. Our results suggest that the visual and linguistic modalities for caption generation need not be jointly encoded by the RNN as that yields large, memory-intensive models with few tangible advantages in performance; rather, the multimodal integration should be delayed to a subsequent stage. § INTRODUCTION Image caption generation[Throughout this paper we refer to textual descriptions of images as captions, although technically a caption is text that complements an image with extra information that is not available from the image. Specifically, the descriptions we talk about are `concrete' and `conceptual' image descriptions <cit.>.] is the task of generating a natural language description of the content of an image <cit.>, also known as a caption. One way to do this is to use a neural language model, typically in the form of a recurrent neural network, or RNN, which is used to generate text (illustrated in Figure <ref>). Given a sentence prefix, a neural language model will predict which words are likely to follow.
With a small modification, this simple model can be extended into an image caption generator, that is, a language model whose predictions are conditioned on image features. To do this, the neural language model must somehow accept as input not only the sentence prefix, but also the image being captioned. This raises the question: At which stage should image information be introduced into a language model?Recent work on image captioning has answered this question in different ways, suggesting different views of the relationship between image and text in the caption generation task. To our knowledge, however, these different models and architectures have not been systematically compared. Yet, the question of where image information should feature in captioning is at the heart of a broader set of questions concerning how language can be grounded in perceptual information, questions which have been addressed by cognitive scientists <cit.> and AI practitioners <cit.>. As we will show in more detail in Section <ref>, differences in the way caption generation architectures treat image features can be characterised in terms of three distinct sets of design choices: Conditioning by injecting versus conditioning by merging: A neural language model can be conditioned by injecting the image (Figure <ref>) or by merging the image (see Figure <ref>). In `inject' architectures, the image vector (usually derived from the activation values of a hidden layer in a convolutional neural network) is injected into the RNN, for example by treating it on a par with a `word' and including it as part of the caption prefix. The RNN is trained to encode the image-language mixture into a single vector in such a way that this vector can be used to predict the next word in the prefix. On the other hand, in the case of `merge' architectures, the image is left out of the RNN subnetwork, such that the RNN handles only the caption prefix, that is, handles only purely linguistic information. 
After the prefix has been encoded, the image vector is then merged with the prefix vector in a separate `multimodal layer' which comes after the RNN subnetwork. Merging can be done by, for example, concatenating the two vectors together. In this case, the RNN is trained to only encode the prefix and the mixture is handled in a subsequent feedforward layer.In the terminology adopted in this paper: if an RNN's hidden state vector is somehow influenced by both the image and the words then the image is being injected, otherwise it is being merged. Early versus late inclusion of image features: As the foregoing description suggests, merge architectures tend to incorporate image features somewhat late in the generation process, that is, after processing the whole caption prefix. On the other hand, some inject architectures tend to incorporate image features early in the generation process. Other inject architectures incorporate image features for the whole duration of the generation process. Different architectures can make visual information influence linguistic choices at different stages. Fixed versus modifiable image features: For each word predicted, some form of visual information must be available to influence the likelihood of each word. Merge architectures typically use the exact same image representation for every word output. On the other hand, injecting the image features into the RNN allows the internal representation of the image inside the hidden state vector to be changed by the RNN's internal updates after each time step. Different architectures allow for different degrees of modification in the image features for each generated word.The main contribution of this paper is to present a systematic comparison of the different ways in which the `conditioning' of linguistic choices based on visual information can be carried out, studying their implications for caption generator architectures. 
Thus, rather than seeking new results that improve on the state of the art, we seek to determine, based on an exhaustive evaluation of inject and merge architectures on a common dataset, where image features are best placed in the caption generation and image retrieval process.[All the code used in our experiments is available at <https://github.com/mtanti/where-image2>.]From a scientific perspective, such a comparison would be useful for shedding light on the way language can be grounded in vision. Should images and text be intermixed throughout the process, or should they initially be kept separate before being combined in some multimodal layer? Many papers speak of RNNs as `generating' text. Is this the case or are RNNs better viewed as encoders which vectorise a linguistic prefix so that the next feedforward layer can predict the next word, conditioned on an image? Answers to these questions would help inform theories of how caption generation can be performed. The architectures we compare provide different answers to these questions. Hence, it is important to acquire some insights into their relative merits.From an engineering perspective, insights into the relative performance of different models could provide rules of thumb for selecting an architecture for the task of image captioning, possibly for other tasks as well such asmachine translation. This would make it easier to develop new architectures and new ways to perform caption generation.The remainder of this paper is structured as follows. We first give an overview of published caption generators based on neural language models, focusing in particular on the architectures used. Section <ref> discusses the architectures we compare, followed by a description of the data and experiments in Section <ref>. Results are presented and discussed in Section <ref>. 
We conclude with some general discussion and directions for future work.§ BACKGROUNDIn this section we discuss a number of recent image caption generation models with emphasis on how the image conditions the neural language model, based on the distinction between inject and merge architectures illustrated in Figure <ref>. Before we discuss these models, we first outline four broad sub-categories of architectures that we have identified in the literature. §.§ Types of architectures In Section <ref>, we made a high-level distinction between architectures that merge linguistic and image features in a multimodal layer, and those that inject image features directly into the caption prefix encoding process. We can in fact distinguish four theoretical possibilities arising from these, as illustrated in Figure <ref> and described below. * Init-inject: The RNN's initial hidden state vector is set to be the image vector (or a vector derived from the image vector). It requires the image vector to have the same size as the RNN hidden state vector. This is an early binding architecture and allows the image representation to be modified by the RNN. * Pre-inject: The first input to the RNN is the image vector (or a vector derived from the image vector). The word vectors of the caption prefix come later. The image vector is thus treated as a first word in the prefix. It requires the image vector to have the same size as the word vectors. This too is an early binding architecture and allows the image representation to be modified by the RNN.[In addition to the above, there is an additional, theoretical possibility, which we might refer to as `post-inject'. Post-inject architectures would put the the image vector (or a vector derived from the image vector) at the end of each prefix rather than at the beginning as is done in pre-inject. This would be a late binding architecture which allows minimal modification in the image representation by the RNN. 
In practice, it would only be possible by structuring the training set as a collection of `sentence prefix - next word' pairs and training the language model using minibatches of individual prefixes rather than full captions at once. No attested work actually adopts this architecture, to our knowledge; hence, we shall not refer to it further in what follows.] * Par-inject: The image vector (or a vector derived from the image vector) serves as input to the RNN in parallel with the word vectors of the caption prefix, such that either (a) the RNN takes two separate inputs; or (b) the word vectors are combined with the image vector into a single input before being passed to the RNN. The image vector doesn't need to be exactly the same for each word (such as is the case with attention-based neural models); nor does it need to be included with every word. This is a mixed binding architecture and, whilst allowing some modification in the image representation, it will be harder for the RNN to do so if the same image is fed to the RNN at every time step due to its hidden state vector being refreshed with the original image each time. * Merge: The RNN is not exposed to the image vector (or a vector derived from the image vector) at any point. Instead, the image is introduced into the language model after the prefix has been encoded by the RNN in its entirety. This is a late binding architecture and it does not modify the image representation with every time step. With these distinctions in mind, we next discuss a selection of recent contributions, placing them in the context of this classification. Table <ref> provides a summary of these published architectures. Init-inject architectures: Architectures conforming to the init-inject model treat the image vector as the initial hidden state vector of an RNN <cit.>. 
<cit.> combine two RNNs in parallel, both initialized with the same image.A similar architecture to init-inject is used in traditional deep learning machine translation systems <cit.> where a source sentence is encoded into a vector and used to condition a language model to generate a sentence in another language. This is the basis for the system described by <cit.>, who first extract a sequence of attributes from an image, then translate this sequence into a caption.It is also used in attention mechanisms in order to provide a vector representing information about the whole image whilst parts of the image that are attended differently during each time step are provided via par-injection. For example <cit.> initialize the RNN with the centroid of all image parts before attending to some parts as needed. Pre-inject architectures: Pre-inject models treat the image as though it were the first word in the prefix <cit.>. Image attributes are sometimes used instead of image vectors <cit.>. <cit.> also try passing an image as the first two words instead of just one word by using the image vector as the first word and image attributes as a second, or vice versa.Just like init-inject, pre-inject is also used to provide information about the whole image in attention mechanisms <cit.>.<cit.> generate paragraph-length captions in two stages. First, an RNN is used to convert the image vector into a sequence of image vectors by incorporating the image at every time step. This sequence of vectors represents sentence topics, each of which is to be converted into a separate sentence by conditioning a language model using pre-inject. Par-inject architectures: Par-injection inputs the image features into the RNN jointly with each word in the caption. It is by far the most common architecture used and has the largest variety of implementation. For example <cit.> do this with two RNNs in series and find that it is better to inject the image in the second RNN than the first. 
<cit.> par-inject the image whilst pre-injecting image attributes (or vice versa); and <cit.> par-inject attributes from the image whilst init-injecting the image vector. Other, less common instantiations include par-injecting the image, but only with the first word (this is not pre-inject as the image is not injected on a separate time step) <cit.>; and passing the words through a separate RNN, such that the resulting hidden state vectors are what is combined with the image vector <cit.>.Many times this architecture is used in order to pass a different representation of the same image with every word so that visual information changes for different parts of the sentence being generated. For example <cit.> perform element-wise multiplication of the image vector with the last generated word's embedding vector in order to attend to different parts of the image vector. <cit.> pass the image through its own RNN for as many times as there are words in order to use a different image vector for every word. <cit.> use a simple RNN to try to predict what the image vector looks like given a prefix. This predicted image is then used as a second image representation which is par-injected together with the actual image vector.More commonly, modified image representations come from attention mechanisms <cit.>. <cit.> inject the image not as an input to the RNN but use a modified long short term memory network <cit.>, or LSTM, which allows them to inject the attended image directly inside the input gated expression (the part of the LSTM which is multiplied by the input gate).Like init-inject and pre-inject, par-inject is sometimes used to provide information about the whole image in attention mechanisms whilst the attended image regions are merged <cit.>. Merge architectures: Rather than combining image features together with linguistic features from within the RNN, merge architectures delay their combination until after the caption prefix has been vectorised <cit.>. 
<cit.> use a merge architecture in order to keep the image out of the RNN and thus be able to train the part of the neural network that handles images and the part that handles language separately, using images and sentences from separate training sets.Some work on attention mechanisms also uses merge architectures with attention mechanisms by merging a different image representation at every time step. <cit.> and <cit.> merge as well as par-inject the attended visual regions, whilst <cit.> only merge the regions whilst par-injecting a fixed image representation.Though they do not use an RNN and hence are not focussed on in this review, caption generators that use log-bilinear models <cit.> usually merge the image with the prefix representation <cit.>. §.§ Summary and outlook While the literature on caption generation now provides a rich range of models and comparative evaluations, there is as yet very little explicit systematic comparison between the performance of the architectures surveyed above, each of which represents a different way of conditioning the prediction of language sequences on visual information. Work that has tested both par-inject and pre-inject, such as by <cit.>, reports that pre-inject works better. The work of <cit.> compares inject and merge architectures and concludes that merge is better than inject. However Mao et al.'s comparison between architectures is a relatively tangential part of their overall evaluation, and is based only on the BLEU metric <cit.>.Answering the question of which architecture is best is difficult because different architectures perform differently on different evaluation measures, as shown for example by <cit.>, who compared architectures with simple RNNs and LSTMs. 
Although the state of the art systems in caption generation all use inject-type architectures, it is also the case that they are more complex systems than the published merge architectures and so it is not fair to conclude that inject is better than merge based on a survey of the literature alone.In what follows, we present a systematic comparison between all the different architectures discussed above. We perform these evaluations using a common dataset and a variety of quality metrics, covering (a) the quality of the generated captions; (b) the linguistic diversity of the generated captions; and (c) the networks' capabilities to determine the most relevant image given a caption.§ ARCHITECTURESIn this section we go over the different architectures that are evaluated in this paper. A diagram illustrating the main architecture schema, which is the basis of every tested architecture in this work, is shown in Figure <ref>. The schema is based on the architecture described in <cit.>, without the ensemble. This architecture was chosen for its simplicity whilst still being the best performing system in the 2015 MSCOCO image captioning challenge.[See: <http://mscoco.org/dataset/#captions-leaderboard>] Word embeddings: Word embeddings, that is, the vectors that represent known words prior to being fed to the RNN, consist of vectors that have been randomly initialised. No precompiled vector embeddings such as word2vec <cit.> were used. Instead, the embeddings are trained as part of the neural network in order to learn the best representations of words for the task. Recurrent neural network: The purpose of the RNN is to take a prefix of embedded words (with image vector in inject architectures) and produce a single vector that represents the sequence. A gated recurrent unit <cit.>, or GRU, was used in our experiments for the simple reason that it is a powerful RNN that only has one hidden state vector. By contrast, an LSTM has two state vectors (hidden and cell states). 
This would make architecture comparisons more complex, as the presence of two state vectors raises the possibility of multiple versions of the init-inject architecture. By using an RNN with a single hidden state vector, there is only one way to implement init-inject. Image: Prior to training, all images were vectorised using the activation values of the penultimate layer of the VGG OxfordNet 19-layer convolutional neural network <cit.>, which is trained to perform object recognition and returns a 4096-element vector. The convolutional neural network is not influenced by the caption generation training. During training, a feed forward layer of the neural network compresses this vector into a smaller vector. Output: Once the image and the caption prefix have been vectorised and mixed into a single vector, the next step is to use them to predict the next word in the caption. This is done by passing the mixed vector through a feed-forward layer with a softmax activation function that outputs the probability of each possible next word in the vocabulary. Based on this distribution, the next word that comes after the prefix is selected. The four architectures discussed in the previous section are evaluated in our experiments as follows: * init-inject: The image vector is treated as an initial hidden state vector for the RNN. After initialising the RNN, the vectors in the caption prefix are then fed to the RNN as usual. * pre-inject: The image vector is used as the first `word' in the caption prefix. This makes the image vector the first input that the RNN will see. * par-inject: The image vector is concatenated to every word vector in the caption prefix in order to make the RNN take a mixed word-image vector. Every word would have the exact same image vector concatenated to it. * merge: The image vector and caption prefix vector are concatenated into a single vector before being fed to the output layer. We now discuss the architecture in a more formal notation.
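The four bullet points can be condensed into a shape-level sketch. The code below is our own illustration, not the experimental Tensorflow code: it folds a caption prefix into a vector with the GRU update given in the formal notation, and the variants differ only in where the image vector enters. All dimensions and the random weight initialisation are arbitrary choices for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
d_word, d_img, d_state = 8, 8, 8   # equal sizes so init- and pre-inject type-check

def make_gru(d_in, d_s):
    """Random small-weight GRU parameters for inputs of size d_in (illustrative)."""
    W = {g: rng.normal(size=(d_in, d_s)) * 0.1 for g in ('xr', 'xu', 'xc')}
    W.update({g: rng.normal(size=(d_s, d_s)) * 0.1 for g in ('sr', 'su', 'sc')})
    b = {g: np.zeros(d_s) for g in ('r', 'u', 'c')}
    return W, b

def sig(z):
    return 1.0 / (1.0 + np.exp(-z))

def gru_encode(inputs, s0, params):
    """Fold a sequence into one state vector using the GRU update."""
    W, b = params
    s = s0
    for x in inputs:
        r = sig(x @ W['xr'] + s @ W['sr'] + b['r'])
        u = sig(x @ W['xu'] + s @ W['su'] + b['u'])
        c = np.tanh(x @ W['xc'] + (r * s) @ W['sc'] + b['c'])
        s = u * s + (1.0 - u) * c
    return s

gru = make_gru(d_word, d_state)
gru_wide = make_gru(d_word + d_img, d_state)   # par-inject sees word ++ image

words = [rng.normal(size=d_word) for _ in range(4)]   # embedded caption prefix
img = rng.normal(size=d_img)                          # compressed image vector
zeros = np.zeros(d_state)

v_init = gru_encode(words, img, gru)                            # image as initial state
v_pre = gru_encode([img] + words, zeros, gru)                   # image as first 'word'
v_par = gru_encode([np.concatenate([w, img]) for w in words],
                   zeros, gru_wide)                             # image with every word
v_merge = np.concatenate([gru_encode(words, zeros, gru), img])  # image outside the RNN

# The three inject variants hand the softmax layer a d_state-sized vector;
# merge hands it a (d_state + d_img)-sized one.
assert v_init.shape == v_pre.shape == v_par.shape == (d_state,)
assert v_merge.shape == (d_state + d_img,)
```

The final assertions make the practical difference visible: in merge, the image never passes through the recurrent weights, so the RNN state can be sized for language alone and the image is appended only at the output stage.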
As a matter of notation, we treat vectors as horizontal. The GRU model is defined as follows:

r_t = sig(x_t W_xr + s_t-1 W_sr + b_r)
u_t = sig(x_t W_xu + s_t-1 W_su + b_u)
c_t = tanh(x_t W_xc + (r_t ⊙ s_t-1) W_sc + b_c)
s_t = u_t ⊙ s_t-1 + (1 - u_t) ⊙ c_t

where x_t is the t^th input, s_t is the hidden state vector after t inputs, r_t is the reset gate after t inputs, u_t is the update gate after t inputs, W_αβ is the weight matrix between α and β, b_α is the bias vector for α, and ⊙ is the elementwise vector multiplication operator. In the above, `sig' refers to the sigmoid function, which is defined as:

sig(x) = 1/(1 + e^-x)

The feedforward layers used for the image and output are defined as

z = x W + b

where z is the net vector, x is the input vector, W is the weight matrix, and b is the bias vector. The net vector can then be passed through an activation function, such as the softmax function, which is defined as

softmax(z)_i = e^z_i / ∑_j e^z_j

where softmax(z)_i refers to the i^th element of the new vector. Another activation function is the rectified linear unit function, or ReLU, which is defined as

relu(z)_i = max(z_i, 0)

where relu(z)_i refers to the i^th element of the new vector.

§ EXPERIMENTS

This section describes the experiments conducted in order to compare the performance of the different architectures described in the previous section. Tensorflow[See: <https://www.tensorflow.org/>] v1.2 was used to implement the neural networks.

§.§ Datasets

The datasets used for all experiments were the versions of Flickr8K <cit.>, Flickr30K <cit.>, and MSCOCO <cit.> distributed by <cit.>.[See: <http://cs.stanford.edu/people/karpathy/deepimagesent/>] All three datasets consist of images taken from Flickr combined with between five and seven manually written captions per image. The provided datasets are split into a training, validation, and test set using the following numbers of images respectively: Flickr8K - 6000, 1000, 1000; Flickr30K - 29000, 1014, 1000; MSCOCO - 82783, 5000, 5000.
The images are already vectorised into 4096-element vectors via the activations of layer `fc7' (the penultimate layer) of the VGG OxfordNet 19-layer convolutional neural network <cit.>, which was trained for object recognition on the ImageNet dataset <cit.>.

The known vocabulary consists of all the words in the captions of the training set that occur at least 5 times. This amounts to 2539 tokens for Flickr8K, 7415 tokens for Flickr30K, and 8792 tokens for MSCOCO. These words are used both as inputs, which are embedded and fed to the RNN, and as outputs, which are assigned probabilities by the softmax function. Any other word which is not part of the vocabulary is replaced with an UNKNOWN token.

§.§ Hyperparameter tuning

For the results to be reliable, it is important to find the best (within practical limits) hyperparameters for each architecture so that we can judge the performance of the architectures when they are optimally tuned, rather than using one-size-fits-all hyperparameter settings which might cause some architectures to under-perform. For this reason we used a multi-step process of hyperparameter tuning, which is described below. We optimized the hyperparameters in order to maximize caption quality on the Flickr8K validation set, using beam search as a generation method and CIDEr as the objective function. The optimal hyperparameters were then fixed across all datasets. <cit.> also used Flickr8K for hyperparameter tuning, and CIDEr was shown by <cit.> to be a useful metric to optimise on, yielding an improvement on other quality metrics when used as the objective function.

The following hyperparameters were fixed across all architectures:

* Parameter optimization is performed using the Adam algorithm <cit.> with its hyperparameters kept as suggested in the original paper: α = 0.001, β_1 = 0.9, β_2 = 0.999, and ϵ = 10^-8.
* The loss function is the mean of the cross-entropy of each word in each caption in a minibatch.
The cross-entropy of the t^th word in a caption is defined as follows:

crossentropy(P, I, C_0 …t-1, C_t) = -ln(P(C_t | C_0 …t-1, I))

where P is the trained neural network that gives the probability of a particular word being the next word in a caption prefix, C is a caption with |C| words, and I is an image described by caption C. Note that C_t is the t^th word in C and C_0 …t-1 are the first t-1 words in C plus the START token.
* An early stopping criterion is used, such that the geometric mean of the language model perplexity on the validation set is measured and, as soon as one epoch results in a worse perplexity than the previous epoch, the training stops. A maximum number of epochs is still used to prevent training from going on for too long (more on this later).
* During caption generation, the caption must be between 5 and 50 words long. Beam search will not end a sentence before there are at least 5 words in it, and will stop abruptly, using the partial sentence, once it reaches 50 words.
* All biases are initialized to zeros.

The following are hyperparameters that were tuned (the ranges of values were minimized in order to keep the search space tractable):

* The weights initialization procedure (normal distribution or xavier <cit.> with normal distribution).
* The weights initialization range (-0.1 to 0.1 or -0.01 to 0.01).
* The size of the layers for embedding, image projection (FF^img in Figure <ref>), and RNN hidden state vector (64, 128, 256, or 512), all three of which are constrained to be equal[Note that if we allowed each layer to change freely from the other layers, init-inject would still require that the image size and RNN size be equal and pre-inject would still require that the image size and the embedding size be equal, whilst par-inject and merge would have no such size restrictions.
This would make the former two architectures have significantly fewer hyperparameter combinations to explore, which would likely result in an unfair advantage after hyperparameter tuning.].
* Whether to normalize the image vector before passing it to the neural network.
* Whether to use ReLU after the image projection (FF^img in Figure <ref>) or to leave it linear.
* Whether to use an all-zeros vector as an initial RNN hidden state vector or to use a learnable vector (not applicable to init-inject since its initial hidden state vector is the image projection).
* Whether to use L2 weights regularization with a weighting constant of 10^-8.
* Whether to apply dropout regularisation at different points in the architecture (in Figure <ref>: after `image', after `FF^img', after `embed', and/or after `RNN'). Each application of dropout (if any) has a dropout rate of 0.5.
* The minibatch size (32, 64, or 128).

The following steps were followed in order to tune these hyperparameters, which were evaluated by training a neural network for a maximum of 10 epochs, generating captions with a beam width of 2, and evaluating the captions using CIDEr:

1. Randomly generate 100 unique hyperparameter combinations and record their performance.
2. Use Bayesian optimization via the library GPyOpt[See: <http://sheffieldml.github.io/GPyOpt/>] for 100 iterations and record each generated candidate combination's performance. Use the combinations from step 1 to initialize the search.
3. Use trees of Parzen estimators via the library hyperopt[See: <https://jaberg.github.io/hyperopt/>] for 100 iterations and record each generated candidate combination's performance.
4. Take the best combination found in all of the previous steps and fine-tune it using greedy hill climbing, and record each modified combination. This is to check if changing any one hyperparameter will improve the performance.
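Step 1 of this procedure (the random stage) can be sketched in a few lines; the later stages replace the random proposal with GPyOpt's or hyperopt's suggestion mechanism. In this sketch, `space` and `score` are stand-ins of our own naming: in the real pipeline, `score` would train a network for up to 10 epochs, generate captions with beam width 2, and return a CIDEr score.

```python
import random

def random_search(space, score, n=100, seed=0):
    # Evaluate n unique random hyperparameter combinations.
    # `space` maps each hyperparameter name to its list of allowed values;
    # `score` evaluates one combination (a dict) and returns its quality.
    # Assumes n does not exceed the number of possible combinations.
    rng = random.Random(seed)
    results = {}
    while len(results) < n:
        combo = tuple((k, rng.choice(v)) for k, v in sorted(space.items()))
        if combo not in results:           # skip duplicates
            results[combo] = score(dict(combo))
    return results
```

For example, `random_search({'minibatch': [32, 64, 128], 'beam': [1, 2]}, score, n=6)` exhaustively covers that small space; the 100-combination run in step 1 samples a tiny fraction of the full space instead.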
The previous steps do not have very reliable CIDEr scores associated with them, as each score was produced using just one training and generation run and so might coincidentally be an unusual score (far from the mean score we would obtain if we trained the same neural network several times). Ideally we would have tested each hyperparameter combination three times and taken the mean of the resulting CIDEr scores. Ideally we would have also tried different values for the maximum number of epochs and beam width. This, however, would have been extremely time consuming. Thus, we only apply this procedure to a subset of the best performing combinations from the previous steps. We ensure that the subset is diverse by only choosing combinations that are dissimilar from each other, as follows:

5. Take all duplicate combinations generated in all of the previous steps and replace them with a single combination with their average CIDEr score. Take the top 10 scoring combinations.
6. Out of the selected 10 combinations take the three combinations that are most different from each other in terms of Hamming distance. Ensure that one of these three combinations is the best combination found in the previous step.
7. Take the three combinations selected and try different maximum epochs (10 and 100) and beam widths (1, 2, 3, 4, 5, and 6) on them. Each evaluation is measured using the average CIDEr score of three independent training and generation runs.
8. Return the best combination found in the previous step.

In Section <ref> we will discuss the optimal hyperparameters found.

§.§ Evaluation metrics

To evaluate the different architectures, the test set captions (which are shared among all architectures) are used to measure the architectures' quality using metrics that fall into three classes, described below.

Generation metrics: These metrics quantify the quality of the generated captions by measuring the degree of overlap between generated captions and those in the test set.
We use the MSCOCO evaluation code[See: <https://github.com/tylin/coco-caption>] which measures the standard evaluation metrics BLEU-(1,2,3,4) <cit.>, ROUGE-L <cit.>, METEOR <cit.>, and CIDEr <cit.>.

Diversity metrics: Apart from measuring the caption similarity to the ground truth, we also measure the diversity of the vocabulary used in the generated captions. This is intended to shed light on the extent to which the captions produced by models are `stereotyped', that is, the extent to which a model re-uses (sub-)strings from case to case, irrespective of the input image. As a limiting case, consider a caption generator which always outputs the same caption. Such a generator would have the lowest possible diversity score. In order to quantify this we measure the percentage of known vocabulary words used in all generated captions, and the entropy of the unigram and bigram frequencies in all the generated captions together, which is calculated as:

entropy(F) = -∑_i=1^|F| P_i(F) log_2 P_i(F)

P_i(F) = F_i / ∑_j=1^|F| F_j

where F is the frequency distribution over generated unigrams or bigrams, with |F| different types of unigrams or bigrams, and P_i is the maximum likelihood estimate of the probability of encountering unigram or bigram i. Note that F_i is the frequency of the unigram or bigram i. Entropy gives a measure of how uniform the frequency distributions are (with higher entropy for more uniform distributions). The more uniform, the more likely that each unigram or bigram was used in equal proportion, rather than the same few words being used the majority of the time, and hence the greater the variety of words used.

Finally we also measure the percentage of generated captions that already exist in the training set, as an estimate of the extent to which a model evinces `parroting', or wholesale caption reuse from the training set. For these diversity metrics, we obtain a ceiling estimate by computing the same measures on the test set captions themselves.
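The entropy measure can be computed directly from the generated caption strings; a minimal sketch (whitespace tokenisation is assumed here for simplicity):

```python
import math
from collections import Counter

def ngram_entropy(captions, n=1):
    # Entropy (in bits) of the n-gram frequency distribution over all
    # generated captions, following the entropy(F) definition above.
    freq = Counter()
    for cap in captions:
        words = cap.split()
        freq.update(tuple(words[i:i + n]) for i in range(len(words) - n + 1))
    total = sum(freq.values())
    return -sum((f / total) * math.log2(f / total) for f in freq.values())
```

A degenerate generator that always emits the same caption gets entropy 0; four equally frequent unigrams give entropy 2 bits, and so on.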
We take the first caption out of the group of human-written captions available for each image in the test set and apply these diversity metrics to them.

Retrieval metrics: Retrieval metrics quantify how well the architectures perform when retrieving the correct image, out of all the images in the test set, given a corresponding caption. A conditioned language model can be used for retrieval by measuring the degree of relevance each image has to the given caption. Relevance is measured as the probability of the whole caption given the image (by multiplying together each word's probability). Different images will give different probabilities for the same caption; the more probable the caption is, the more relevant the image. We use the standard R@n recall measures <cit.>, and report recall at 1, 5, and 10. Recall at n is the percentage of captions whose correct image is among the top n most relevant images. Since this process takes time proportional to the number of captions multiplied by the number of images, in order to reduce the evaluation time the pool of possible captions to consider during retrieval excluded all captions except the first out of the group of captions available for each image. For MSCOCO we only used the first 1000 test set images out of 5000 for the same reason, making it similar to Flickr8K and Flickr30K, which only have 1000 images each.

We also include the language model perplexity. The perplexity of a sentence/image pair is calculated as:

perplexity(P, C, I) = 2^H(P, C, I)

H(P, C, I) = -1/|C| ∑_t=1^|C| log_2( P(C_t | C_0 …t-1, I) )

where P is the trained neural network that gives the probability of a particular word being the next word in a caption prefix, C is a caption with |C| words, I is an image described by caption C, and H is the entropy function.
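A sketch of this perplexity calculation, together with the geometric-mean aggregation over the test set, assuming the per-word probabilities have already been read off the trained network's softmax:

```python
import math

def caption_perplexity(probs):
    # probs[t] is P(C_t | C_0..t-1, I) for each of the |C| words in the
    # caption, as produced by the trained network.
    H = -sum(math.log2(p) for p in probs) / len(probs)
    return 2.0 ** H

def aggregate_perplexity(per_caption):
    # Geometric mean of the per-caption perplexities over the test set.
    return math.exp(sum(math.log(p) for p in per_caption) / len(per_caption))
```

A caption predicted with probability 0.5 at every word has perplexity 2; a perfectly predicted caption has perplexity 1 (the floor of the measure).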
Note that C_t is the t^th word in C and C_0 …t-1 are the first t-1 words in C plus the START token. In order to aggregate the caption perplexity of the entire test set of captions into a single number, we report the geometric mean of all the captions' scores.

§ RESULTS AND DISCUSSION

Three runs of each experiment, on each of the three datasets, were performed. For the various evaluation measures, we report the mean together with the standard deviation (reported in parentheses) over the three runs. For each run, the initial model weights, minibatch selections, and dropout selections are different since these are randomly determined. Everything else is identical across runs.

§.§ Optimal hyperparameters

We start by discussing the optimal hyperparameters found for each architecture, which are listed in Table <ref>. It is interesting to note that, in every architecture's optimal hyperparameters, the RNN output needs to be regularized with dropout, the image vector should not have a non-linear activation function or be regularized with dropout, and the image input vector must be normalized before being fed to the neural network. Par-inject seems to need the most help in terms of regularization and even in terms of beam width, whilst the small size of merge means that it needs the least amount of regularization. The most interesting observation is that the merge architecture is much `leaner' overall. In terms of RNN size, it needs half of what par-inject needs, and only a quarter of what init-inject and pre-inject require for optimal performance. This makes sense, since merge only needs the RNN for storing linguistic information, whilst the other architectures need to additionally store visual information from the image. Using a larger RNN with the merge architecture would likely lead to overfitting.
The implication is that init-inject and pre-inject are much more memory-hungry architectures that require large RNN hidden state vectors in order to function well, whilst merge is more efficient. In fact, the number of parameters for merge is between 3 and 4 times smaller than the number of parameters for init-inject and pre-inject. Merge is also about 2 or 3 times faster to train. Of the inject architectures, par-inject has the smallest optimal RNN size. This is probably due to the fact that, in this model, the image is present at all time steps, thereby necessitating less memory to be allocated to `remember' visual information together with linguistic information, compared to early-binding architectures. It is interesting to note that the par-inject RNN size is equal to the size of the concatenated image and RNN hidden state vector in the merge architecture.

§.§ Quality of generated captions

Table <ref> and Table <ref> display the metrics that measure the quality of generated captions, calculated using the MSCOCO evaluation toolkit and averaged over the three experimental runs. Does merge's small size impact its performance when generated captions are compared to corpora? The metrics reported here show considerable variability in the ranking of the various architectures depending on the dataset. For example, CIDEr scores place init-inject at the top for both Flickr8K and MSCOCO, but merge outperforms it on this measure on Flickr30K. Comparing ROUGE-L, METEOR and CIDEr, init-inject seems to be ranked highest over most datasets (the situation is far more variable with the BLEU scores in Table <ref>, however). That said, the differences among architectures are very small. This is especially true for the larger MSCOCO dataset.
Thus, though init-inject often comes out on top, the other architectures are not lagging behind by a wide margin.

§.§ Image retrieval

Image retrieval results across the three datasets are shown in Table <ref>. When it comes to retrieving the most relevant image for a caption, we once again see merge ranked first on Flickr30K, while init-inject is at the top on Flickr8K and MSCOCO, on practically all R@n measures, as well as median rank. Interestingly, in the two sets of cases where init-inject outperforms other architectures, merge is a close second, at least for R@1. In terms of perplexity, the general picture is in favour of inject models, with merge evincing marginally greater perplexity on all datasets. Overall, however, the outcomes mirror those of the previous sub-section: differences among architectures do not seem compelling and, although the init-inject model outperforms merge in a number of instances, merge is a close second.

§.§ Caption diversity metrics

Next, we turn to the caption diversity metrics, shown in Table <ref>. These diversity metrics evince the most dramatic performance differences. If we focus on the proportion of generated captions that were found in the training set, on MSCOCO this figure ranges from just over 40% for merge to over 60% for par-inject. With the exception of Flickr8K, merge has the lowest proportion of caption reuse overall. If these results are compared to those in the preceding sub-sections, the fact that the models with the greatest tendency to reuse captions tend to perform well on corpus-based metrics such as CIDEr suggests that the datasets under consideration are highly stereotyped, perhaps with a significant amount of redundancy and lack of variety. A similar observation has been made by <cit.>.
In a comparison of retrieval-based and neural architectures for image captioning, these authors found that corpus-based metrics (especially BLEU) tend to give higher scores on test instances where the images were very similar to training instances. Neural architectures performed better for more similar images overall. The results obtained for the human captions (bottom rows of Table <ref>) suggest that the level of caption reuse by humans is extremely low compared to the models under consideration, though it stands at 7% on MSCOCO.

Turning to the extent to which architectures use their training vocabulary, the picture that emerges is consistent with the above. While humans used between 29% and 47% of the known vocabulary (taken from the training set) to describe the test set images, none of the evaluated systems used more than 14%. The merge architecture tops the ranks for all datasets by a small margin, although unigram and bigram entropy is highest for pre-inject (Flickr8K and Flickr30K) and init-inject (MSCOCO). We interpret these results as showing that neural caption generators require seeing a word in the training set very often in order to learn to use it. From a methodological perspective, this further implies that setting an even higher frequency threshold, below which words are mapped to the UNKNOWN token (the current experiments set the threshold at five), would be feasible and would make relatively little difference to the results.

§.§ Visual information retention

As noted in Section <ref>, one of the differences between the architectures under consideration is whether they incorporate the image features early or late. This raises the possibility of differences in the degree to which visual information is retained by each architecture in the multimodal vector, that is, the input to `FF^out' in Figure <ref>. This is where information about visual and linguistic input is combined, and it is the information bottleneck that the output depends on.
The question we want to answer is: Do (early-binding) inject architectures tend to `forget' about the image as more words are input into the RNN? Given that the RNN's memory is finite, it should be difficult to retain information about all inputs as the length of the sequence increases, so information about the image might start fading away as the input sequence gets longer. Merge architectures do not have this problem with visual information, as it is kept outside of the RNN and so is fully retained in the multimodal vector regardless of the number of time steps.

To measure how much visual information is retained as the number of time steps grows, we do the following:

* Take a trained neural network and input an image and a matching caption.
* Record the multimodal vector in the neural network at every time-step.
* Replace the image from the original neural network in step 1 with a randomly selected image, paired with the original caption, thus introducing an image-caption mismatch.
* Record the new, adulterated multimodal vector at every time-step for the new caption-image combination.
* Compare the original and adulterated vectors: if these converge as more words are fed to the model, it implies that the multimodal vector is losing image information, as it would be getting influenced less by the image and more by the prefix.

As a measure of distance between original and adulterated vectors, we use the mean absolute difference, that is, we take the absolute difference between each corresponding dimension in the two multimodal vectors and then take the mean of these differences. Mean absolute difference avoids giving a larger distance to larger vectors and is also intuitive as a measure of difference between vectors.
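The distance measure itself is a one-liner; a sketch, taking the two multimodal vectors at a single time step:

```python
import numpy as np

def mean_abs_diff(original, adulterated):
    # Mean absolute difference between the original and image-swapped
    # (adulterated) multimodal vectors at one time step. Dividing by the
    # vector length keeps the measure comparable across vector sizes.
    a = np.asarray(original, dtype=float)
    b = np.asarray(adulterated, dtype=float)
    return float(np.mean(np.abs(a - b)))
```

If the two vectors are identical (the image swap had no effect), the distance is 0; larger values mean the multimodal vector still depends strongly on which image was supplied.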
It also keeps the distance between time steps exactly equal for merge, which is desirable since merge does not lose visual information across time steps.

For this set of experiments, we used all 20-word captions in the MSCOCO test set and measured the mean distance over all 21 time steps (the 20 words plus the START token). 20-word captions are long enough to see a trend without ending up with too few captions (the mean caption length on the MSCOCO test set is about 10.4). To create a more reliable mean, we repeat this procedure 100 times, so that the mean is over all images in the test set, using 100 random images per instance. The results are shown in Figure <ref>.

None of the inject architectures maintained a consistent distance between the original and adulterated multimodal vectors. Crucially, the merge architecture also has the largest distance among all architectures, demonstrating that, in this architecture, the words in a caption exhibit a greater dependency on the image to which they pertain (hence, adulterating the multimodal vector with an irrelevant image alters the representation considerably). Par-inject comes in second place in terms of multimodal vector distance. This suggests that it retains more visual information than the other inject architectures, though not as much as merge. It seems that the amount of retention across time steps changes somewhat unpredictably, but tends to decrease overall, which means that information gets lost over time (though not to the extent of init-inject and pre-inject). Init-inject comes third in visual information retention, followed by pre-inject, both of which decrease over time.
It seems that, in a GRU trained for caption generation, the initial hidden state vector exerts more influence on the final hidden state vector than the first input. These results predict that if the generated captions needed to be very long, late-binding architectures would produce better captions, as they retain visual information over longer time steps, maintaining a tighter coupling between visual and linguistic information.

§ CONCLUSION

This paper presented a systematic evaluation of a number of variations on architectures for image caption generation and retrieval. The primary focus was on the distinction between what we have termed `inject' and `merge' architectures. The former type of model mixes image and language information by training an RNN to encode an image-prefix mixture. By contrast, merge architectures maintain a separation between an RNN subnetwork, which encodes a linguistic string, and the image vector, merging them late in the process, prior to a prediction step. These models are therefore compatible with approaches to image caption generation using a `multimodal' layer <cit.>. While both types of architectures have been discussed in the literature, the inject architecture has been more popular. Yet, there has been little systematic evaluation of its advantages compared to merge.

Our experiments show that on standard corpus-based metrics such as CIDEr, the difference in performance between architectures is rather small. Init-inject tends to be better at generation and retrieval measures. Thus, from the perspective of corpus similarity, early binding of image features in models that view such features as “modifiable” (in the sense outlined in the introduction) appears to be better than the alternatives. Crucially, however, we also show that inject architectures are much more likely to re-generate captions wholesale from the training data and evince less vocabulary variation.
Hence, from the perspective of variation, late-binding models that treat image features as fixed (i.e. not mixed with linguistic features) are better. While this is due in part to the nature of the available corpora, the superior performance of merge on this measure does suggest that, by encoding information from the two modalities separately, merge architectures might be producing less generic and stereotyped captions, exploiting their multimodal resources more effectively.

Our experiments on visual information retention show that, over time, inject architectures tend to loosen the coupling between visual and linguistic features, so that the difference between actual and adulterated multimodal vectors gets smaller. This too supports the view that inject models may, especially for longer captions, tend towards more generic and less image-specific captions, a finding that echoes the observations of <cit.>, to some extent. In any case, late merging is, by definition, not susceptible to this problem.

From an engineering perspective, there is a significant difference between the required sizes of the RNN hidden state vectors. Whilst merge only requires a hidden state vector size sufficient to `remember' caption prefixes, which depends on the length and complexity of the training set captions, inject architectures require additional memory to also store image information. This means that merge architectures make better use of their RNN memory. They also require less regularization whilst maintaining similar performance to the other architectures.

The work presented here opens up some avenues for future research.
In future work, we hope to investigate whether the results in this paper would remain similar when the experiments are repeated on other applications of conditioned neural language models, such as neural machine translation or question answering. Furthermore, by keeping language and image information separate, merge architectures lend themselves to potentially greater portability and ease of training. For example, it should be possible in principle to take the parameters of the RNN and embedding layers of a general text language model and transfer them to the corresponding layers in a caption generator. This would reduce training time as it would avoid learning the RNN weights and the embedding weights of the caption generator from scratch. As understanding of deep learning architectures evolves in the NLP community, one of our goals should be to maximise the degree of transferability among model components.

§ ACKNOWLEDGEMENTS

The research in this paper is partially funded by the Endeavour Scholarship Scheme (Malta). Scholarships are part-financed by the European Union - European Social Fund (ESF) - Operational Programme II – Cohesion Policy 2014-2020 “Investing in human capital to create more opportunities and promote the well-being of society”.
arXiv:1703.09137v2: Marc Tanti, Albert Gatt, Kenneth P. Camilleri, "Where to put the Image in an Image Caption Generator" (2017).
paul.hockett@nrc.ca

National Research Council of Canada, 100 Sussex Drive, Ottawa, K1A 0R6, Canada

Angle-resolved RABBIT: theory and numerics

Paul Hockett

§ ABSTRACT

Angle-resolved (AR) RABBIT measurements offer a high information content measurement scheme, due to the presence of multiple, interfering, ionization channels combined with a phase-sensitive observable in the form of angle- and time-resolved photoelectron interferograms. In order to explore the characteristics and potentials of AR-RABBIT, a perturbative 2-photon model is developed; based on this model, example AR-RABBIT results are computed for model and real systems, for a range of RABBIT schemes. These results indicate some of the phenomena to be expected in AR-RABBIT measurements, and suggest various applications of the technique in photoionization metrology.

Article history
* arXiv (this version)
* Original article (Authorea): <https://www.authorea.com/users/71114/articles/152997-angle-resolved-rabbit-theory-and-numerics/_show_article>, DOI: 10.22541/au.149037518.89916908

See also
* AR-RABBIT results presentation, DOI: 10.6084/m9.figshare.4702804
* Background material on angle-resolved photoionization (and refs. therein), DOI: 10.6084/m9.figshare.c.3511731

§ INTRODUCTION

The RABBIT methodology - “reconstruction of attosecond harmonic beating by interference of two-photon transitions" <cit.> - essentially defines a scheme in which XUV pulses are combined with an IR field, and the two fields are applied to a target gas. The gas is ionized, and the photoelectrons detected. In the typical case, the IR field is at the same fundamental frequency ω as the field used to drive harmonic generation, and the XUV field generated is an attosecond pulse train with harmonic components nω, with odd-n only.
In this case, if the intensity of the IR field is low to moderate, the resultant photoelectron spectrum will be comprised of discrete bands corresponding to direct 1-photon XUV ionization, and sidebands corresponding to 2-photon XUV+IR transitions <cit.>. (The energetics of this situation are illustrated in fig. <ref>.) Temporally, if the XUV pulses are short relative to the IR field cycle, the sidebands will also show significant time-dependence, since they will be sensitive to the optical phase difference between the XUV and IR fields, with an oscillatory frequency of 2ω. In this case, a measurement which is angle-integrated, or made at a single detection geometry, can be viewed as a means to characterising the properties of the XUV pulses (spectral content and optical phase), provided that the ionizing system is simple or otherwise well-characterised <cit.>; RABBIT can therefore be utilised as a pulse metrology technique <cit.>, and this is the typical usage.

Conversely, RABBIT can also be regarded as a photoelectron metrology technique, since it is sensitive to the magnitudes and phases of the various photoionization pathways accessed. In contrast to most traditional (energy-resolved) photoelectron spectroscopy techniques, RABBIT has the distinction of interfering pathways resulting from different 1-photon transition energies: it is thus sensitive to the energy-dependence of the photoionization dynamics, as well as to the partial-wave components within each pathway. An angle-resolved (AR) RABBIT measurement is particularly powerful in this regard, since the partial-wave phases are encoded in the angular part of the photoelectron interferogram.
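The 2ω sideband beating can be illustrated with a minimal two-path interference model: two 2-photon pathways (one IR photon absorbed from harmonic 2q-1, one emitted from harmonic 2q+1) reach the same sideband, and the XUV-IR delay τ enters the two paths with opposite sign. The unit-magnitude amplitudes and the single lumped phase `phi` (standing in for the XUV and ionization phase differences) are purely illustrative - this sketch contains none of the partial-wave structure developed in this work:

```python
import numpy as np

def sideband_signal(tau, omega, phi=0.0):
    # Two interfering two-photon pathways into the same sideband.
    # The delay tau contributes +omega*tau on the absorption path and
    # -omega*tau on the emission path, so the intensity beats at 2*omega.
    a_up = np.exp(1j * omega * tau)            # harmonic (2q-1) + IR photon
    a_down = np.exp(-1j * (omega * tau - phi)) # harmonic (2q+1) - IR photon
    return np.abs(a_up + a_down) ** 2          # = 2 * (1 + cos(2*omega*tau - phi))

taus = np.linspace(0.0, 2.0 * np.pi, 201)      # delay, in units where omega = 1
signal = sideband_signal(taus, omega=1.0)
```

The resulting signal oscillates between 0 and 4 at twice the IR frequency; shifting `phi` translates the oscillation in delay, which is exactly the observable used to read out spectral/ionization phases in RABBIT.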
Although this is a potentially powerful technique, the underlying photoionization dynamics may be extremely complicated, hence quantitative analysis of experimental results is challenging.
In essence, AR-RABBIT can therefore be considered as a technique which combines traditional photoionization and scattering physics with an additional (time-dependent) perturbation in the form of the IR laser field. This field provides additional couplings between the 1-photon (XUV) channels. In the usual RABBIT intensity regime, these two steps can be decoupled, allowing the XUV absorption to be treated as a weak-field bound-free transition (photoionization), followed by absorption of an IR photon - this latter step is a transition purely between different free electron states in the continuum, often termed continuum-continuum coupling. This scheme is illustrated in the energy-domain in fig. <ref>(left). Therefore, the problem becomes one of dealing with a two-photon matrix element, describing these two sequential light-matter interactions. Furthermore, if the continuum-continuum coupling is assumed to be at long-range (i.e. temporally and spatially distinct from the bound-continuum coupling of the first, bound-free, step, and at the asymptotic limit of the continuum wavefunction), then a simplified treatment can be developed for this second transition. In this vein, Dahlström, L'Huillier and coworkers have done significant work, including angle-integrated resonant cases and extensive theoretical treatments of the problem.
See, for instance, Introduction to attosecond delays in photoionization <cit.> and Study of attosecond delays using perturbation diagrams and exterior complex scaling <cit.> for general background theory and perturbative treatments similar to those discussed herein, Phase measurement of resonant two-photon ionization in helium <cit.> for a specific example (angle-integrated), and On the angular dependence of the photoemission time delay in helium <cit.> for work on this specific angle-resolved case. In this work, the same basic conceptual path to modelling RABBIT as a sequential two-photon process is followed, but the emphasis is placed on the role of the photoionization dynamics.
This provides a route to the modelling and analysis of angle-resolved RABBIT, based on canonical photoionization theory and employing a full partial-wave treatment of the continuum.
Following a similar treatment to that of ref. <cit.>, which investigated sequential 3-photon ionization in a time-dependent IR field (conceptually similar to a RABBIT scheme), the electric fields are modelled in a circular basis to allow for arbitrary field polarization states. The treatment is general, and applicable to any atomic or molecular system, provided that the IR field can be neglected for the first step. Essentially, within this framework angle-resolved RABBIT can be considered as an extension of traditional angle-resolved photoelectron measurements, and many of the same fundamental considerations and potential applications apply <cit.>. As usual, in cases where the XUV and/or IR field is strong, only full numerical treatments are capable of correctly describing the coupled light-matter system (see, for instance, refs.
<cit.>), and this regime is not within the scope of the perturbative model discussed herein.
In the following, a framework for AR-RABBIT modelling is defined in terms of the general form of the required photoionization matrix elements, the final continuum wavefunctions and the resultant observables (sect. <ref>). This framework is then applied to simple model cases (sect. <ref>), in order to develop a phenomenological understanding of AR-RABBIT measurements. To explore the application of the framework to real systems (sect. <ref>), numerical treatments for the radial matrix elements are detailed (sect. <ref>), and the framework is applied to model a range of specific AR-RABBIT measurements of neon.

§ THEORY

In this section, a basic theoretical framework for AR-RABBIT is defined. Further numerical details are discussed in sect. <ref>.

§.§ 1-photon ionization by the XUV field

The dipole matrix element for 1-photon ionization by the XUV field, corresponding to direct ionization from an initial bound state |n_il_im_i⟩ to a final continuum state |l_fm_f; 𝐤⟩, is given as:
d_xuv(𝐤,t) =⟨𝐤;l_fm_f|μ̂_if.E(Ω,t,q)|n_il_im_i⟩ = R_l_il_f(k)E_xuv^q(Ω,t)⟨ l_fm_f,1q|l_im_i⟩
where μ̂_if is the dipole operator. In the second line, the matrix element is decomposed in terms of radial and geometric parts. Here R_l_il_f(k) denotes the radial integrals, which depend on the magnitude k of the photoelectron wavevector 𝐤; ⟨ l_fm_f,1q|l_im_i⟩ is a Clebsch-Gordan coefficient which describes the angular momentum coupling for single photon absorption, where the field polarization (circular basis) is defined by q, and the spectral (Ω) and temporal (t) properties of each polarization component by E_xuv^q(Ω, t).
This matrix element is essentially identical to canonical treatments for 1-photon ionization (e.g. Cooper & Zare <cit.> for atomic photoionization, Dill <cit.> for fixed-molecule photoionization), apart from the inclusion of a time-dependent E-field.
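The geometric part of this decomposition is analytic; as a brief illustration (a pure-Python evaluation of the Clebsch-Gordan coefficients via the standard Racah factorial formula - not part of the treatment above, and restricted to integer angular momenta for simplicity), the one-photon selection rules follow directly:

```python
from math import factorial, sqrt

def cg(j1, m1, j2, m2, J, M):
    """Clebsch-Gordan coefficient <j1 m1, j2 m2 | J M> (integer j only),
    via the Racah factorial formula."""
    if m1 + m2 != M or not abs(j1 - j2) <= J <= j1 + j2 or abs(M) > J:
        return 0.0
    # Triangle coefficient
    delta = (factorial(j1 + j2 - J) * factorial(j1 - j2 + J) *
             factorial(-j1 + j2 + J) / factorial(j1 + j2 + J + 1))
    pref = sqrt((2 * J + 1) * delta *
                factorial(j1 + m1) * factorial(j1 - m1) *
                factorial(j2 + m2) * factorial(j2 - m2) *
                factorial(J + M) * factorial(J - M))
    total = 0.0
    for k in range(0, j1 + j2 - J + 1):
        denoms = [k, j1 + j2 - J - k, j1 - m1 - k, j2 + m2 - k,
                  J - j2 + m1 + k, J - j1 - m2 + k]
        if any(d < 0 for d in denoms):
            continue
        term = 1.0
        for d in denoms:
            term *= factorial(d)
        total += (-1) ** k / term
    return pref * total

# Document convention: <l_f m_f, 1 q | l_i m_i>. From an s state (l_i = 0),
# only l_f = 1 couples; e.g. a d <- s one-photon path is forbidden:
open_channel = cg(1, 0, 1, 0, 0, 0)    # nonzero (= -1/sqrt(3))
closed_channel = cg(2, 0, 1, 0, 0, 0)  # 0.0, triangle condition fails
```

This simply encodes the Δl = ±1 selection rule discussed below; the nontrivial physics lives entirely in the radial factors R_l_il_f(k).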
In this decomposition, the Clebsch-Gordan coefficients can be calculated analytically, the E-field can be defined analytically or numerically, and the (complex) R_l_il_f(k) require numerical solution for a given ionizing system.
Essentially, the analytical part of the solution encodes the angular momentum selection rules, while the R_l_il_f(k) provide the amplitude and phase coefficients for each partial-wave channel for a specific problem (ionizing system and energy). The notation used here implicitly assumes that the radial integrals R_l_il_f(k) are independent of m_i and m_f. For atomic systems this is a good approximation, and allows for a simplified treatment of the photoionization dynamics, but for molecules this assumption does not hold (due to the loss of spherical symmetry in the core region) and all m components must be treated explicitly (see, e.g., refs. <cit.>).

§.§ Continuum-continuum coupling

The transition between two continuum states, i and f, further labelled by energy and angular momentum, coupled by 1-photon absorption or emission from the IR field, can be similarly given as:
d_ir(𝐤_𝐢, 𝐤_𝐟,t) =⟨𝐤_𝐟;l_fm_f|μ̂_if.E(Ω,t,q)|𝐤_𝐢;l_im_i⟩ = R_l_il_f(k_i,k_f)E_ir^q(Ω,t)⟨ l_fm_f,1q|l_im_i⟩
Note that, as for bound-free ionization, the radial part of the matrix element, R_l_il_f(k_i,k_f), is not defined explicitly here, but must be considered specifically for the problem at hand.

§.§ Final state wavefunctions

The final continuum states populated are given by expansions in continuum partial-waves |l_fm_f; 𝐤⟩.
The expansion parameters are defined by the matrix elements given above, for the various pathways of interest in a RABBIT scheme, as:
* One photon (XUV) final states
Ψ_xuv(𝐤,t)=∑_l_fm_f,l_im_id_xuv(𝐤,t)|l_fm_f; 𝐤⟩
* Two photon (XUV+IR) final states
Ψ_±(𝐤,t)=∑_l_fm_f,l_vm_v,l_im_id_xuv(𝐤_𝐯,t)d_ir(𝐤_𝐯, 𝐤,t)|l_fm_f; 𝐤⟩
where the ± refers to absorption or emission of an IR photon, and v denotes the intermediate 1-photon continuum states.
* Generic channel summed and partial-wave resolved final states. This notation simply indicates a final state which is the resultant sum over various ionization channels c, each decomposed into a set of final |l_fm_f⟩ waves, and serves as a general short-hand.
Ψ(𝐤,t)=∑_cΨ_c(𝐤,t)=∑_c∑_l_fm_fψ^c_l_fm_f(𝐤,t)
In this case, the number of angular momentum components (l, m) involved depends on the ionizing system. For centro-symmetric systems (e.g. hydrogen), l is a good quantum number and only bound-free transitions with Δ l=±1 are allowed; this is also usually a reasonable approximation for multi-electron atomic systems. However, as alluded to previously, for molecular systems many angular momentum components are typically expected, due to the loss in symmetry of the scattering potential at short range, and the problem becomes more complex; for discussion on this topic see, for instance, refs. <cit.>.
In this treatment, t denotes the temporal dependence of the final states, due to both laser fields E(Ω,t,q). This dependence can be simplified to a dependence upon only the relative XUV to IR field delay, τ, under the assumption that the XUV field is short relative to the rate of change of the IR field. In the limiting case, the time-dependence of the XUV field is a δ-function, and only the instantaneous properties of the IR field at t=τ are important, hence no temporal integration is required.

§.§ Observables

The energy and angle resolved photoelectron measurements, as a function of the XUV-IR delay τ, are then given by:
* One photon transitions, single path - direct ionization, the usual case for odd-harmonic bands in standard RABBIT experiments (odd-harmonics only in the XUV spectrum).
Note that this signal is effectively time-independent, since it does not depend on τ:
I_1(E,θ,ϕ)=Ψ_xuv(𝐤,t)Ψ_xuv^*(𝐤,t)
* Two photon matrix elements, with two paths - usual RABBIT sidebands for an XUV spectrum with odd-harmonics only
I_2(E,θ,ϕ, τ) = (Ψ_+(𝐤, τ)+Ψ_-(𝐤, τ))× c.c.
* One & two photon paths - all photoelectron bands for “extended" RABBIT experiments, when the XUV spectrum also contains even-harmonics
I_3(E,θ,ϕ, τ) = (Ψ_+(𝐤, τ)+Ψ_-(𝐤, τ)+Ψ_xuv(𝐤, τ))× c.c.
In all cases the resultant observable, for each photoelectron band observed in a RABBIT scheme, centered at photoelectron energy E, can be described by an expansion in spherical harmonics Y_L,M with time-dependent expansion parameters β_L,M; E(τ), and a Gaussian radial function:
I(E,θ,ϕ, τ) = ∑_L,Mβ_L,M; E(τ)Y_L,M(θ,ϕ)G(E,σ)
In practice, the Gaussian features centered at energies E, of width σ, are defined from the experimental harmonic spectrum. In principle, the energy dependence of the matrix elements across each photoelectron band, and the effect of this dependence on the radial spectrum, should be considered; however, in many cases it is reasonable to assume that the matrix elements are smoothly varying as a function of energy, and may be approximated as constant for each discrete photoelectron band (typically spanning a few 100 meV) - hence a single energy point at the peak of the band is assumed to be representative of the band. This is essentially the “smoothly varying continuum" (SVC) approximation, but will clearly break down in the presence of any sharp features such as autoionizing resonances. For more general discussion, in the context of photoionization and the energy-dependence of PADs, see refs. <cit.>.

§ MODEL SYSTEMS

While general, the preceding treatment does not offer much direct insight, since many details remain to be defined - specifically the angular momentum states which play a role, and the radial integrals.
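For reference, the β_L,M parameters of the observable expansion above can be recovered from a computed (or measured) interferogram by angular quadrature, β_L,M = ∫ I Y*_L,M dΩ. A minimal sketch for a cylindrically symmetric (M=0) case, with illustrative amplitudes and hard-coded 5-point Gauss-Legendre quadrature (exact for the band-limited integrands used here):

```python
from math import sqrt, pi

# Real spherical harmonics for M = 0, as functions of x = cos(theta)
def Y00(x): return 1.0 / sqrt(4 * pi)
def Y20(x): return sqrt(5 / (16 * pi)) * (3 * x * x - 1)

# 5-point Gauss-Legendre nodes/weights on [-1, 1] (exact to polynomial degree 9)
NODES = [-0.9061798459386640, -0.5384693101056831, 0.0,
         0.5384693101056831, 0.9061798459386640]
WEIGHTS = [0.2369268850561891, 0.4786286704993665, 0.5688888888888889,
           0.4786286704993665, 0.2369268850561891]

def beta(I, Y):
    """beta_L0 = 2*pi * integral of I(x)*Y_L0(x) over x; the phi integral is
    trivial for M = 0 (cylindrical symmetry)."""
    return 2 * pi * sum(w * I(x) * Y(x) for x, w in zip(NODES, WEIGHTS))

# Model interferogram |c0*Y00 + c2*Y20|^2 with illustrative amplitudes
c0, c2 = 1.0, 0.5
I = lambda x: (c0 * Y00(x) + c2 * Y20(x)) ** 2

b00 = beta(I, Y00)  # = (c0^2 + c2^2)/sqrt(4*pi) by orthonormality
```

The same quadrature applied per delay step τ yields the time-dependent β_L,M; E(τ) directly.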
In order to proceed, one must model specific cases, thus selecting appropriate initial states and computing the relevant integrals - for example, Dahlström et al. have presented specific results for a hydrogenic treatment, including different levels of approximation <cit.>. Herein, the cases of “standard" and “extended" RABBIT are explored, starting with a basic model system to provide physical insight, while sect. <ref> details specific real cases.

§.§ Sidebands in standard AR-RABBIT

The “usual" RABBIT sidebands result from two interfering pathways, corresponding to 2-photon transitions via H(n)+IR and H(n+2)-IR, where H(n) denotes a harmonic of order n. The corresponding wavefunctions were denoted by Ψ_+ and Ψ_- above. To model this, and explore paradigmatic behaviours, the required dipole matrix elements can be set as model parameters, and the energy dependence of the pathways neglected. This provides a model in which the angular interferograms, and temporal behaviour, can be probed. Fig. <ref> illustrates a basic RABBIT scheme, for the simplest model system. Ionization is from a pure s-state, resulting in ionization pathways s → p → s+d. To model this case, identical radial matrix elements were set for each 2-photon channel (denoted c), with variable phases:
R_s→ p^c= 1e^iϕ_s,p^c
R_p→ d^c= 1e^iϕ_p,d^c
R_p→ s^c= 0.3e^iϕ_p,s^c
Fig. <ref> shows the results for this case, in which the laser fields are set to q=0 only (linear polarization), and the XUV phases are set to zero.
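This model can be sketched numerically as follows (a schematic implementation with the amplitudes above and all phases zero - not the code used for the figures):

```python
from cmath import exp
from math import sqrt, pi, cos

# Final-state spherical harmonics (m = 0, theta-dependence only)
def Y00(t): return 1 / sqrt(4 * pi)
def Y20(t): return sqrt(5 / (16 * pi)) * (3 * cos(t) ** 2 - 1)

W_IR = 1.0  # IR angular frequency (arbitrary units)

# Model radial matrix elements for channels c = 1 (H(n)+IR) and c = 2 (H(n+2)-IR);
# magnitudes as in the text, phases set to zero here
R_sp = {1: 1.0, 2: 1.0}   # s -> p (XUV step)
R_pd = {1: 1.0, 2: 1.0}   # p -> d (IR step)
R_ps = {1: 0.3, 2: 0.3}   # p -> s (IR step)

def psi(theta, tau, c, sign):
    """2-photon wavefunction for channel c; sign = +1 for IR absorption, -1 for
    emission, carrying the relative optical phase exp(i*sign*w*tau)."""
    return exp(1j * sign * W_IR * tau) * R_sp[c] * (
        R_pd[c] * Y20(theta) + R_ps[c] * Y00(theta))

def sideband(theta, tau):
    """Coherent sum of the two interfering 2-photon pathways."""
    return abs(psi(theta, tau, 1, +1) + psi(theta, tau, 2, -1)) ** 2
```

Since the two pathways here share identical angular content, the PAD simply breathes in intensity while the total yield oscillates at 2ω_ir (period π/ω_ir), consistent with the discussion of the standard sidebands.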
The phases of the dipole matrix elements were varied to probe the behaviour of the sidebands, and the three example cases have the following phases set:
(a) All phases set to 0.
(b) ϕ_s,p^2=π/2 - an overall phase-shift in the second path.
(c) ϕ_s,p^2=π/2 and ϕ_p,d^1=π/4 - an overall phase-shift in the second path, plus a phase-shift of the d-wave for channel 1.
Physically, intra- and inter-channel magnitude and phase differences of the partial-wave components are expected purely from the energy-dependence of the ionization dynamics. Contributions from the harmonic phase, or from other physical processes such as resonances at the 1-photon level in specific channels, can also play a role. Depending on the physical origin, such phase effects might shift all partial waves in a given channel (the simplest case of an optical phase shift in the XUV), or affect the photoionization dynamics in more complex and subtle ways. For further discussion on and around this point see, for example: refs. <cit.> and <cit.> for a general discussion and observation of resonant phase effects in photoionization, ref. <cit.> for a similar observation in RABBIT measurements, and ref. <cit.> for the case of autoionizing resonances in RABBIT-type measurements (recently demonstrated experimentally <cit.>); ref. <cit.> for the related case of control over multi-path ionization schemes, specifically with l-wave parity breaking due to interfering 1 and 2-photon pathways, and ref. <cit.> for application in an AR-RABBIT type experiment (see also sects. <ref> and <ref> herein); ref. <cit.> discusses conceptually similar cases of time-domain control schemes in photoionization, including temporal and polarization control in multi-photon ionization schemes.
For the usual sidebands, the amplitude of the resultant wavefunction will oscillate at 2ω_ir, with a total phase defined by the interfering partial-waves for each channel (including any contribution from the XUV optical phase).
Within the approximations described above, the angular form of the sidebands will not show any time-dependence, since this requires a change in the relative phases of the contributing paths as a function of time. In this simple case, there are no dynamics which affect these quantities, and it is only the absolute amplitudes which vary as the IR-laser field oscillates. Hence, the angle-resolved interferograms will appear to simply breathe (in intensity) as a function of time. However, the presence of any time-dependence to the dipole matrix elements - e.g. Stark shifts affecting the ionizing states as the IR field cycles - would create additional time-dependence in the angular content, and might be expected in the strong-field regime.
Thus, in the usual regime, although the shape of the angular distribution is sensitive to the relative phases of the matrix elements, it is time-invariant; the total photoelectron yields are, however, sensitive to both the phases of the matrix elements and the instantaneous laser field. In particular, the phase shift of the yields relative to the laser field is sensitive to both the relative phases of the channels (hence may be used to probe the effect of resonances in one channel, as per ref. <cit.>), and the partial-wave phases within each channel. In this manner, the angular information provides a phase-sensitivity which is otherwise lost in an angle-integrated measurement.

§.§ Sidebands in AR-RABBIT with non-linearly polarized light

As illustrated in fig. <ref>, the use of polarization states other than linear (and a parallel polarization geometry) will result in population of different m states. In the most general case, where the XUV and IR fields have different polarization states, many additional pathways may play a role.
Here a simplified case is illustrated, in which the XUV and IR fields are assumed to have the same ellipticity ξ, in order to illustrate the general concepts and trends with polarization state.
The results are shown in fig. <ref>. In these calculations, the model system detailed above is utilized, incorporating the set of phase shifts (c) (sect. <ref>). The three columns in the figure show the results for different ellipticities, defined mathematically by the phase shift between the two Cartesian components of the E-fields (ϕ_y) (see ref. <cit.> for details), and illustrated by the two spherical components of the IR field (q=± 1). The effect of the polarization state is quite clear here: as the polarization state moves from linear (equal magnitudes for the q=± 1 components) towards pure circular polarization (q=+1 in this example), the continuum wavefunction becomes increasingly dominated by the |d,m=2⟩ component. In this (relatively) simple example, this is a direct consequence of the selection of pathway by the polarization state of the light: the handedness of the light is approximately mapped onto the m=± 2 final states.
The most interesting case is, therefore, shown in fig. <ref>(b), where the presence of both q=± 1 components breaks the cylindrical symmetry of the distribution.
In contrast, fig. <ref>(c), pure q=+1 light, produces a much simpler angular distribution, with only the |d,m=2⟩ continuum state contributing. The symmetry breaking is also present in fig. <ref>(a), but is less pronounced, with only a slight difference in the magnitudes of the m=± 2 states.
In this simple case, the additional pathways accessible with q=± 1 allow for breaking of the cylindrical symmetry of the angular distribution when the E-fields are elliptical, thus providing additional interferences, hence information content, in such measurements. Generally, the mapping between q and the final observable is less direct, since many more states typically play a role.
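The pathway selection by q can be sketched for the dominant d-wave components alone, with the 2-photon amplitudes for the m=±2 final states taken (schematically) as the square of the corresponding field components; the φ-dependence in the polarization plane then shows the symmetry breaking directly:

```python
from cmath import exp
from math import sin, pi, sqrt

def Y2m(theta, phi, m):
    """Y_{2,m} for m = +/-2 only (the states reached by two q = m/2 photons)."""
    return sqrt(15 / (32 * pi)) * sin(theta) ** 2 * exp(1j * m * phi)

def pad(theta, phi, e_plus, e_minus):
    """|d, m = +/-2> interference; 2-photon amplitudes taken as (E_q)^2 for each
    handedness (schematic - radial factors set to unity)."""
    amp = (e_plus ** 2 * Y2m(theta, phi, +2) +
           e_minus ** 2 * Y2m(theta, phi, -2))
    return abs(amp) ** 2

def anisotropy(e_plus, e_minus, n=64):
    """Max-min of the phi-dependence in the polarization plane (theta = pi/2):
    zero for cylindrically symmetric distributions."""
    vals = [pad(pi / 2, 2 * pi * i / n, e_plus, e_minus) for i in range(n)]
    return max(vals) - min(vals)
```

Pure q=+1 light (`anisotropy(1, 0)`) gives a φ-independent (cylindrically symmetric) distribution, while any admixture of q=-1 (elliptical polarization) produces the cos(4φ) interference term and breaks the cylindrical symmetry, as in the figure.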
Examples for a more realistic case are given in sect. <ref>. In traditional ionization studies, the use of polarization state and geometry is a powerful tool, and has been used in a variety of methodologies, for example in photoelectron metrology <cit.> and control problems, including time-domain polarization-multiplexed schemes <cit.>. Recently, the related case of XUV field polarization effects on photoionization in the strong field regime has been investigated by Yuan, Bandrauk and co-workers (see, for example, refs. <cit.>); of particular note in that case is the presence of a strong radial (energy) dependence of the angular interferogram within a single photoelectron energy band, and asymmetries in the molecular frame. Polarization geometry in XUV-XUV 2-photon transitions has been investigated theoretically by the same authors <cit.>, and XUV-IR schemes with polarization control have also recently been investigated experimentally <cit.>.

§.§ Sidebands in extended AR-RABBIT with even harmonics

Additional interferences in the final state wavefunction can be created by adding ionization channels.
In a RABBIT experiment the addition of even-harmonics is the simplest scheme which achieves this, and is illustrated in Fig. <ref>. Adding an interfering 1-photon channel has two effects: (1) time-dependence of the angular interferograms is now present, since the 1-photon channel is not coupled to the IR field, hence remains an invariant reference throughout the measurement; (2) the mixing of channels with odd and even photon order provides a route to parity breaking via the mixing of odd- and even-l waves. In more detail, (1) implies that this scheme can be considered as a heterodyne measurement, in which the 1-photon channel acts as a local reference for the 2-photon channels. This implies that additional information may be gained on the photoionization dynamics, since the usual sidebands are now additionally referenced to this 1-photon channel.
In the usual case, the phase of the photoelectron yield provides relative phase information on the interfering 2-photon paths, referenced to the IR field. In this extended case, the overall phase remains referenced to the IR field, but the individual partial-wave phases play a more significant role in the time-dependence of the observed angular interferogram. In essence, one expects to see different features of the angular interferogram at different delays, and a much more complex time-dependence than the basic breathing mode of the usual RABBIT sidebands.
Generally, (2) applies to any scheme which mixes channels of odd and even photon order, providing a route to parity breaking via the mixing of odd- and even-l waves. While this type of final state control can be achieved in a number of ways (see, for example, refs. <cit.>), in a RABBIT experiment the addition of even-harmonics is the simplest and most appropriate route <cit.>. In this specific case, one can view the temporal dependence of the resulting interferograms as a form of control, since this is nothing but a shift of the relative phase of the pathways defined by τ; however, it is a relatively weak form of control, since the amplitudes of the 2-photon pathways are also dependent on the IR field. The use of additional E-fields, different polarization states, or shaped pulses, could all potentially provide more powerful means of interferogram control.
The basic concept of phase control is illustrated in Fig. <ref>, which shows the concept for a simplified two-channel model. In this case, path 1 has only odd-l components (as per the previous example, outlined in Sect. <ref>), and path 2 has only even-l components. The phases of all components are set to zero, but a relative phase between the paths is varied in the model. The resultant wavefunction therefore takes the form Ψ=Ψ_1+Ψ_2e^iϕ^2.
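The effect of this odd/even-l mixing can be sketched directly from Ψ=Ψ_1+Ψ_2e^iϕ^2, here with Ψ_1 a pure p-wave and Ψ_2 a pure s-wave (schematic unit amplitudes): the up/down asymmetry of the interferogram is proportional to cos(ϕ^2), and vanishes entirely without the channel mixing.

```python
from cmath import exp
from math import cos, sqrt, pi

def Y00(t): return 1 / sqrt(4 * pi)                 # even-l (s-wave)
def Y10(t): return sqrt(3 / (4 * pi)) * cos(t)      # odd-l (p-wave)

def intensity(theta, phi2):
    """|psi_1 + psi_2*exp(i*phi2)|^2 with psi_1 = Y10 (path 1, odd-l) and
    psi_2 = Y00 (path 2, even-l); schematic unit amplitudes."""
    return abs(Y10(theta) + Y00(theta) * exp(1j * phi2)) ** 2

def asymmetry(phi2, theta=0.6):
    """Up-down asymmetry I(theta) - I(pi - theta): nonzero only via the
    interference of the odd- and even-l waves, and proportional to cos(phi2)."""
    return intensity(theta, phi2) - intensity(pi - theta, phi2)
```

Scanning ϕ^2 shifts the lobes of the interferogram up and down, the simple version of the phase-control effect shown in the figure.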
In this case, the change in the relative phase of the paths (ϕ^2) results in different regions of constructive and destructive interference, with lobes in the final interferogram shifting as a function of phase. Again, this phase could be the result of the time-dependence of one path, as for (1) above, but could also be the result of another form of phase-control, or result from other dynamic effects. The full time-dependence of the angular interferograms in this class of scheme is discussed further in Sect. <ref>.

§ REAL SYSTEMS

In order to treat real systems within the framework defined herein, a numerical treatment for the photoionization matrix elements (specifically, the radial integrals) for a given ionizing system is required. In this work, the bound-free matrix elements are computed using the ePolyScat suite <cit.>, and the continuum-continuum matrix elements are treated as hydrogenic (similar to the treatment of ref. <cit.>). This specific choice of numerical treatment is general, since ePolyScat is capable of accurate calculations for both atomic and molecular scattering systems, but is expected to be poor at low energies, where the assumption of hydrogenic continuum-continuum transitions does not hold.

§.§ Numerical details

As discussed above, in order to model real systems numerical methods must be employed to determine the radial matrix elements (as distinct from the model cases above, in which the radial matrix elements are set as model parameters). In order to achieve this, a combination of numerical treatments was used:
* Bound-free matrix elements. For a given ionizing system and ionizing orbital, ePolyScat (ePS) can be used to compute dipole matrix elements. ePS takes electronic structure input from standard quantum chemistry codes, solves the continuum wavefunctions variationally with a Lippmann-Schwinger approach, and computes dipole integrals based on these wavefunctions; for further details, see refs.
<cit.>.
* Continuum-continuum matrix elements. Absorption of an IR photon in the continuum is modelled using Coulomb functions, in a similar manner to ref. <cit.>; see sec. <ref> for details.
* VMI measurements. To model the experimental VMI measurements, the input harmonic spectrum (800 nm driving field) was estimated as a series of Gaussians. Photoelectron energies then follow from the photon energies so defined, and the IP of the ionizing system. This procedure also provided specific photoelectron energy points for the ePS and continuum-continuum calculations, and the matrix elements were assumed to be constant over the width of the spectral features and as a function of the laser field intensity. See sect. <ref> for details.
In this manner, ionization of any given system, at a given photon energy, can be accurately computed (ePS), while the continuum-continuum coupling is approximated assuming Coulombic (asymptotic) continuum wavefunctions.

§.§.§ Continuum-continuum coupling with Coulomb wavefunctions

The continuum wavefunctions in this case are, as previously (eqn. <ref>), given by a general expansion, which can be written in radial and angular functions. For the Coulombic case this is usually given as (see, e.g., ref. <cit.>):
ψ_lm(𝐤,𝐫)=ϕ_l(k,r)Y_lm(θ,ϕ)=A_l(k,r)F_l(r)Y_lm(θ,ϕ)
where,
A_l=(2l+1)/(kr) i^l e^iσ_l
σ_l=arg Γ[l+1-iZ_1Z_2/k]
Here F_l is a regular Coulomb function <cit.>, σ_l is the (Coulomb) scattering phase, Z_1 and Z_2 are the charges on the scattering centre and scattered particle, and Γ is the gamma function.
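Since Γ(z+1)=zΓ(z), the relative Coulomb phases σ_l − σ_0 follow from a simple recurrence, σ_{l+1} − σ_l = arg(l+1+iη), with η = −Z_1Z_2/k in the convention above; these relative phases are what enter the partial-wave interference. A minimal sketch (illustrative k value, atomic units):

```python
from math import atan2

def coulomb_phase_rel(l, eta):
    """sigma_l - sigma_0 for sigma_l = arg Gamma(l + 1 + i*eta), built up from
    the recurrence sigma_{l+1} - sigma_l = atan2(eta, l + 1)."""
    return sum(atan2(eta, s) for s in range(1, l + 1))

# Attractive electron-ion case: Z1*Z2 = -1, so eta = 1/k in the document
# convention sigma_l = arg Gamma[l + 1 - i*Z1*Z2/k]
k = 0.5            # photoelectron momentum (illustrative, atomic units)
eta = 1.0 / k
phases = [coulomb_phase_rel(l, eta) for l in range(4)]  # sigma_l - sigma_0
```

The absolute σ_0 itself requires a complex Γ evaluation, but cancels in any intra-channel phase difference.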
Solutions of these equations can be computed numerically, as herein; analytical approximations have also been derived <cit.>.
The explicit form of the continuum-continuum radial matrix element, for specific initial and final states defined by |k,l,m⟩, is then given by:
R_l_il_f(k_i,k_f)=∫dr ϕ_l_f(k_f,r).r.ϕ_l_i(k_i,r)
Of note in this case is the assumption of an m-independence to the scattering problem, which is correct over all r for a Coulombic scatterer (point charge), but only correct asymptotically in general: hence this continuum-continuum form is, in general, appropriate only for overlap integrals at long-range from the ionic core.
For general discussion on short and long-range scattering, see ref. <cit.>; for discussion of far-field onset in multipolar systems see ref. <cit.>; for discussion in the context of RABBIT see ref. <cit.>. Physically, the characteristic ranges of the problem will depend on the scattering system and the precise details of the potential (which may additionally be affected by the IR field in cases of moderate to strong fields), and may need to be evaluated for specific cases when a high degree of accuracy is sought.
Finally, it is interesting to note that Dahlström et al. <cit.> analyse these matrix elements analytically, and derive some approximate forms. Of particular interest is that the phase contribution from the continuum-continuum transition can be approximated as:
ϕ_cc(k_i,k_f)≡arg{(2k_f)^iZ/k_f/(2k_i)^iZ/k_i(Γ[2+iZ(1/k_f-1/k_i)]+γ(k_i,k_f))/(k_f-k_i)^iZ(1/k_f-1/k_i)}
γ(k_i,k_f)=iZ(k_f-k_i)(k_f^2-k_i^2)/2k_f^2k_i^2Γ[1+iZ(1/k_f-1/k_i)]
Here Z is the nuclear charge, and the term γ(k_i,k_f) is a long-range amplitude correction. This form, according to ref. <cit.>, “leads to an excellent agreement with the exact calculation at high energies". However, this comparison with exact results also indicated that it is not expected to work well at low energies, < 8 eV.
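This approximate phase can be evaluated directly with a complex Γ function; the sketch below uses a pure-Python Lanczos implementation, and assumes the bracketed expression groups as prefactor × (Γ+γ)/(k_f−k_i)^{iδ} with δ=Z(1/k_f−1/k_i) (an assumption on the flattened form quoted above, following the Dahlström et al. treatment):

```python
import cmath
from math import pi, sqrt

# Lanczos approximation (g = 7, standard coefficients) for complex Gamma
_P = [0.99999999999980993, 676.5203681218851, -1259.1392167224028,
      771.32342877765313, -176.61502916214059, 12.507343278686905,
      -0.13857109526572012, 9.9843695780195716e-6, 1.5056327351493116e-7]

def cgamma(z):
    z = complex(z)
    if z.real < 0.5:
        return pi / (cmath.sin(pi * z) * cgamma(1 - z))  # reflection formula
    z -= 1
    x = _P[0] + sum(p / (z + i) for i, p in enumerate(_P[1:], start=1))
    t = z + 7.5
    return sqrt(2 * pi) * t ** (z + 0.5) * cmath.exp(-t) * x

def gamma_corr(ki, kf, Z):
    """Long-range amplitude correction gamma(k_i, k_f)."""
    d = Z * (1 / kf - 1 / ki)
    return (1j * Z * (kf - ki) * (kf ** 2 - ki ** 2) /
            (2 * kf ** 2 * ki ** 2)) * cgamma(1 + 1j * d)

def phi_cc(ki, kf, Z=1.0):
    """Approximate continuum-continuum phase (long-range, hydrogenic): the arg
    of the bracketed expression, with the term grouping assumed as noted."""
    d = Z * (1 / kf - 1 / ki)
    pref = (2 * kf) ** (1j * Z / kf) / (2 * ki) ** (1j * Z / ki)
    return cmath.phase(pref * (cgamma(2 + 1j * d) + gamma_corr(ki, kf, Z)) /
                       (kf - ki) ** (1j * d))
```

For Z=0 (no Coulomb field) the phase correctly vanishes, and it varies smoothly with the momentum step, consistent with the smooth energy-dependent phase-shift behaviour noted in the text.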
Also of note in this approximation is that the continuum-continuum transition simply defines an energy-dependent phase-shift, with no l-dependence.

§.§.§ Velocity Map Image (VMI) Simulation

In order to provide visceral results, and a more direct comparison with experimental measurements, the calculated photoelectron interferograms can be used to simulate velocity map imaging (VMI) measurements.
Numerically, this involves calculating a volumetric (3D) space, simulating the photoelectron distribution and summing to form 2D image planes: full details of the approach can be found in ref. <cit.>.
In the current model the radial (energy) spectrum is not calculated directly, so measured or estimated harmonic spectra are used to determine a set of Gaussian radial functions G(k), as discussed above, which are then mapped to velocity space and used to describe each band in the measured photoelectron spectrum. An example is given in figure <ref>, where the main features correspond to direct 1-photon ionization (labelled as `DB') by the input harmonic spectrum (odd-harmonics from an 800 nm driving field), and the minor bands correspond to the position of the 2-photon RABBIT sidebands (labelled as `SB') and even-harmonics (if present). These radial distributions are combined with the modelled angular distributions to determine the final photoelectron distribution on a 200x200x200 voxel array, and consequent 2D projections on a 200x200 pixel grid.

§.§ AR-RABBIT results

Following the prescription of sec. <ref>, model results for photoionization of neon, and RABBIT measurements, were calculated.
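The band energetics assumed in these calculations can be checked in a few lines (800 nm fundamental, and the neon IP of 21.56 eV as quoted in the text; the harmonic orders here are illustrative):

```python
# Photoelectron band positions for an 800 nm driving field and the neon IP
HC_EV_NM = 1239.842   # hc in eV*nm
IP_NEON = 21.56       # neon 1st ionization energy, eV (from the text)

w_ir = HC_EV_NM / 800.0  # fundamental photon energy, ~1.55 eV

def band_energy(n):
    """Photoelectron energy for total absorbed photon energy n*w_ir:
    odd n -> direct (1-photon XUV) bands, even n -> 2-photon sidebands."""
    return n * w_ir - IP_NEON

direct = {n: band_energy(n) for n in (15, 17, 19, 21)}      # DB positions
sidebands = {n: band_energy(n) for n in (16, 18, 20, 22)}   # SB positions
```

H13 lies below threshold, so H15 gives the lowest direct band; each sideband sits one IR photon above the direct band below it, and adjacent direct bands are spaced by 2ω_ir, as in the energetics figure.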
In modelling this case, ionization from a single initial state |p,m=0⟩ was assumed for simplicity, corresponding to one component of the 2p valence orbital.
Experimentally, one would assume that all degenerate m states contribute equally; however, the general phenomenology and form of the results is unchanged by incorporating the degenerate m=± 1 initial states.
Physically, the choice of a single m state corresponds to a choice of reference frame and, potentially, a form of alignment: in the atomic case this can be considered as orbital polarization, while in the molecular case it may correspond to an aligned molecular ensemble, or to the molecular frame <cit.>.
The calculated photoionization matrix elements for the 1- and 2-photon transitions are given in the appendix (sect. <ref>).
The positions of the direct and sidebands calculated follow those shown in fig. <ref>, which assumes an 800 nm driving field and the 1st ionization energy of neon (21.56 eV <cit.>, http://physics.nist.gov/PhysRefData/Handbook/Tables/neontable1.htm). The lowest energy feature, SB1, is not accurately modelled in this case, since the Ψ_+ pathway corresponds to direct (and possibly resonant) 2-photon ionization, which is not defined by the simple 2-step model. (For discussion of the similar case of RABBIT measurements in He, which also involved a resonant channel, see ref. <cit.>.) However, this pathway was approximated by using the lowest energy bound-free matrix elements, and is included here to emphasize the velocity mapping effect, which causes this central feature to perceptually dominate the final VMI measurements. All other direct and sidebands are expected to be within the range of applicability of the model, although the accuracy of the model is expected to vary slightly as a function of energy due to the form of the continuum-continuum matrix elements assumed.
Figs. <ref> - <ref> provide a summary of the results. Fig.
<ref> provides the (angle-integrated) photoelectron yields, I_2(τ), for the four sidebands, and the corresponding, time-invariant, angular interferograms are shown in fig. <ref> for both contributing channels, and the resultant (channel-summed) observable. Fig. <ref> illustrates a set of iso-velocity (Newton) spheres from the full 3D photoelectron distribution, with each sphere corresponding to one band in the photoelectron spectrum, and the 2D projections of the full distribution.
A number of features are of note from these results:
* As expected, the sideband phases vary according to the ionization dynamics (as a function of energy), incorporating both the direct ionization phase and the continuum-continuum phase.
* The angular interferograms reflect the changing magnitudes and phases of the Ψ_+ and Ψ_- channels, and this is particularly apparent for SB3 and SB4. In these cases, it is primarily the relative phase of the SBs which contributes to the change in the final observable. The PADs change form significantly, and the temporal traces show a phase difference of approximately π/2.
* The resultant PADs indicate structures with L higher than the usual symmetry-imposed laboratory frame (LF) limit (for an isotropic initial state distribution) of L≤ 2N, where N is the photon-order of the process <cit.>. However, these structures only follow from the assumption of polarized orbitals (m=0 selection), which allows a specific definition of the frame of reference. Additional m-state averaging over all initial |pm ⟩ components would reinstate the usual symmetry restriction; conversely, the presence of these structures in experimental measurements would provide evidence for orbital polarization, and this effect has recently been observed in AR-RABBIT measurements <cit.>.
As mentioned above, it is of note that these considerations are analogous to those for laboratory versus molecular frame measurements <cit.> and angular distributions from aligned molecules <cit.>.
* As discussed above, the simulated VMI measurements show a perceptual dominance of the lowest order bands due to the non-linear mapping from energy to velocity space, despite the fact that all bands are modelled in an identical fashion. Essentially, the energy resolution of VMI is non-uniform over the image, with the central region magnified relative to the outer region. This feature of VMI has previously been utilized to enable high-resolution spectroscopy <cit.> and combined with field ionization for “photoionization microscopy" experiments <cit.>.
Overall, these model results indicate some of the expected features of AR-RABBIT, as measured using VMI. Of particular note in this case is the fact that this modelling was motivated by recent work on neon AR-RABBIT measurements <cit.>, in which aspects of the key features shown here were observed. In particular, the experimental measurements, performed at IR field intensities of ∼10^13 W/cm^2, revealed a 6-fold central structure, suggesting orbital polarization and selection in the strong laser field. It is, however, of note that this observation may also indicate that higher-order photon processes (N > 2) than those expected contribute to the observable: in general, careful intensity-dependence studies are required to determine which effect plays the key role <cit.>.

§.§ Elliptically polarized light

Following from the above, example AR-RABBIT results were also computed for an elliptically polarized IR field (ξ=0.4, as shown in fig. <ref>(b)), and a circularly polarized IR field. In these cases the XUV field was assumed to be linearly polarized, and a crossed polarization geometry was also assumed.
In this geometry, again assuming a single initial |p,m=0⟩ state, the XUV ionization accesses only m=0 states, while the IR field additionally accesses m=± 1 states. Essentially, this case allows for some, but limited, m-state mixing in the continuum-continuum transition. Results are shown in fig. <ref> for four sidebands. In the observables for the elliptically polarized case, the frame rotation between the XUV and IR field polarization vectors, and subsequent m-mixing in the continuum-continuum transition, results in “twisted" structures (with specific handedness) appearing in the resultant distributions in most cases. It is of note that 2D VMI projections (fig. <ref>) will usually obscure such symmetry breaking, see e.g. ref. <cit.> and references therein for discussion; furthermore, other experimental factors which break spatial symmetry (e.g. a strong laser field) may also lift the m state degeneracy in practice, and may thus constitute other mechanisms of spatial symmetry breaking. For the circularly polarized case, the lack of m-state interferences - since only m=+1 states are accessed - results in a distinct, but cylindrically symmetric, distribution. Experiments utilizing this geometry are therefore particularly sensitive to any effects which break the m-state symmetry, such as a slight ellipticity in the XUV field or m-state mixing in a strong IR field.

§.§ Extended AR-RABBIT

Extended AR-RABBIT, in which even harmonics also contribute, presents the most information-rich measurement. In this case, the interference between the time-dependent and time-independent channels provides an additional phase reference, creates interferences between channels with different photon orders, and results in a time-dependent angular interferogram.
This provides the potential for control and metrology schemes analogous to many explored in previous energy-domain studies, such as odd-even parity mixing <cit.> and bound state resonance measurements <cit.>: indeed, related concepts have already been explored in the time-domain <cit.>. In principle, it may also be possible to obtain a full set of partial wave magnitudes and phases using this technique (cf. “complete" photoionization studies, e.g. refs. <cit.>) for a large number of partial waves, and the concept has recently been demonstrated for the atomic case <cit.>; equivalently, one can consider the technique as a means of obtaining full angle-resolved Wigner delays <cit.>. The same model methodology as outlined above was employed, but with the addition of even harmonics in the XUV spectrum. Example results are shown in fig. <ref>, which shows the resultant observables I(θ,τ; E) and associated β_LM(t; E) for three different photoelectron bands. In all cases complex behaviours can be observed, with multiple l-waves and phase contributions in the 3-path photoionization interferometer leading to highly structured observables. Across all of the bands, a similar structural motif is observed in the I(θ,τ) plots, with the lobes along the laser polarization axis (θ=90°, 270°) dominant, and weaker higher-order lobes. This structure is particularly clear in the polar plots given at discrete time-steps, and the corresponding β_L,M(t) parameters, which contain both even and odd L terms. The time-dependence of the observables now contains two frequency components: even L terms which oscillate at 2ω, and odd L terms which oscillate at ω. This basic behaviour has previously been observed and modeled by Laurent et al.
<cit.>. However, the oscillation of the even terms corresponds to the same “breathing" mode as described in the 1-colour case (since no additional cross-terms between the 1 and 2-photon pathways contribute), in which the photoelectron yield oscillates, but the angular distribution shows no time-dependence. Hence, normalisation of the angular interferograms by the total yield removes the time-dependence, and reveals time-invariant even L terms. For this reason, no oscillations are observed in the even L terms shown in fig. <ref> (right column), and this part of the angular interferogram is time-invariant as for the “usual" RABBIT case. The odd L terms are more interesting, and result from the interferences between even and odd l-waves, correlated with the 1 and 2 photon transitions respectively. The effect of these interferences is, as noted above, to create up-down asymmetries in the observables. Clearly, the resultant interferograms are complicated, and the exact form of the observables is sensitive to the relative contributions and phases of the l-waves contributing to each of the three pathways. The relative phases observed in the β_LM(t) can be considered as a probe of this behaviour, since different l-waves contribute to different L terms <cit.>; AR-RABBIT thus suggests a route to disentangling different phase contributions, related to the contributing ionization paths and l-waves, for use in phase-sensitive metrology scenarios. Of particular interest in this vein are “complete" photoionization experiments, and angle-resolved Wigner delays, as noted previously. Also noteworthy is the apparent temporal asymmetry of the observable in some cases: this is particularly apparent in the higher energy bands (e.g. band at 7.9 eV, fig. <ref>(c)), with arrow-like structures spreading from the central lobes.
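The ω/2ω frequency content described above can be illustrated with a minimal numerical sketch. All amplitudes, mixing coefficients, and the relative phase below are hypothetical stand-ins (not the neon matrix elements computed in this work): a time-independent even-l direct path interferes with two odd-l sideband paths carrying delay phases e^{±iωτ}, so the total yield oscillates at 2ω while the up-down asymmetry oscillates at ω.

```python
import numpy as np

# Toy extended-RABBIT band: one time-independent 1-photon path (even l)
# plus two 2-photon paths (XUV +/- IR, odd l) with delay phases
# exp(+/- i*w*tau). Coefficients and the phase phi are hypothetical.
theta = np.linspace(0.0, np.pi, 512)
x = np.cos(theta)
P = np.polynomial.legendre.Legendre
amp_even = 0.8 * P.basis(0)(x) + 0.5 * P.basis(2)(x)   # s + d waves (1-photon)
amp_odd = 1.0 * P.basis(1)(x) + 0.4 * P.basis(3)(x)    # p + f waves (2-photon)
phi = 0.7                                              # assumed relative phase

n_tau = 64
wt = 2 * np.pi * np.arange(n_tau) / n_tau              # omega*tau over one IR cycle
I = np.array([np.abs(amp_even
                     + amp_odd * (np.exp(1j * t) + np.exp(1j * (phi - t))))**2
              for t in wt])                            # I(tau, theta)

# total yield (even-L content) vs up-down asymmetry (odd-L content)
up, down = I[:, :256].sum(axis=1), I[:, 256:].sum(axis=1)
tot, asym = up + down, up - down
f_tot = np.abs(np.fft.rfft(tot - tot.mean()))
f_asym = np.abs(np.fft.rfft(asym - asym.mean()))
print(f_tot.argmax(), f_asym.argmax())   # -> 2 1  (2*omega and omega)
```

The 2ω component of the total yield here is exactly the "breathing" mode noted in the text: dividing I(θ,τ) by the yield leaves the even-L shape τ-invariant, while the odd-L asymmetry from the 1-photon/2-photon cross terms survives at ω.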
This characteristic of the observable is a result of distinct temporal dependencies of the phases of the l-waves from different channels, leading to a skew in the temporal behaviour in some cases. Similar behaviour has previously been predicted based on a 2-path interferometer mediated by a vibronic wavepacket <cit.>, which resulted in analogous l-wave interferences; however, in that case the asymmetry was not cleanly observed experimentally due to the temporal resolution of the measurement, although the results did strongly suggest such asymmetry was present. The presence of this type of temporal asymmetry in experimental measurements can therefore be regarded as a (relatively) direct phenomenological signature of significant phase-shifts between different l-waves. This characteristic is potentially useful as a means to observe experimentally-mediated changes in l-wave phases (e.g. due to laser intensity or wavelength) without the necessity of a full theoretical analysis of the results.

§ SUMMARY AND CONCLUSIONS

In this work, some general properties of AR-RABBIT measurements have been investigated via a multi-channel, 2-photon ionization model in the perturbative regime. A range of interesting phenomena are observed in this case, due to the range of interferences contributing to the final observable. Of particular note is the fact that RABBIT type schemes mix bound-free matrix elements of different energies, which cannot be interfered in usual (energy-resolved) photoionization studies; furthermore, angle-resolved RABBIT provides observables which are also highly sensitive to the l-wave amplitudes and phases (in direct analogy with traditional angle-resolved photoelectron spectroscopy). As discussed in the introduction, this presents AR-RABBIT as a potentially interesting methodology for any metrology schemes requiring phase-sensitivity to the ionization matrix elements as a function of angular-momentum and energy.
Studies of photoionization dynamics in the energy and time-domain (Wigner delays) both come under this category, as does polarization-sensitive XUV pulse metrology. Experiments investigating the effects of bound-state or continuum resonances are one clear application of AR-RABBIT, and such effects can also be investigated as a function of the IR field intensity. The capabilities of “extended" AR-RABBIT schemes, utilizing even harmonics, are most interesting here, since 1 and 2-photon channels are interfered in this case, providing a heterodyne-type measurement, with the direct 1-photon channel as a time-independent phase reference. This scheme also allows for control over the resultant photoelectron interferogram, since up-down asymmetry can be broken as a function of IR field phase (i.e. XUV-IR time-delay). Some of these concepts have already been investigated using RABBIT or AR-RABBIT techniques, but much work is open to fruitful exploration in this vein. Since VMI apparatus, along with other angle-resolved charged particle techniques (e.g. COLTRIMS), have proliferated in recent years, angle-resolved photoelectron measurements are now routine for many experimenters. This has led to a range of novel studies utilising the related high-information content observable of photoelectron angular distributions <cit.>, and the outlook and utility of AR-RABBIT is similarly promising.

§ ACKNOWLEDGEMENTS

Special thanks to Hiromichi Niikura and David Villeneuve, for presentation and discussion of AR-RABBIT experimental results, which both suggested and motivated this study <cit.>. Thanks also to Ruaridh Forbes for suggesting the extension to elliptical polarization states, and Varun Makhija and Albert Stolow for general discussion.

§ APPENDIX - MATRIX ELEMENTS

The full set of matrix elements for the neon calculations is shown in figure <ref>. As detailed in sect.
<ref>, the 1-photon bound-free matrix elements were computed using ePolyScat, while the continuum-continuum elements assume Coulomb wavefunctions. In all cases the matrix elements are shown as a function of the final photoelectron energy. For the 2-photon bands the calculations assume an 800 nm IR field, hence hν=1.55 eV, and this is the energy difference assumed between the final and intermediate (1-photon) states in the calculation.
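As a numerical aside, the phases associated with such Coulomb wavefunctions are easy to evaluate. The sketch below (atomic units, assuming a unit residual charge Z = 1; this is an illustration, not the specific implementation used for the matrix elements above) computes the Coulomb phase shift σ_l = arg Γ(l + 1 + iη) with Sommerfeld parameter η = -Z/k:

```python
import numpy as np
from scipy.special import loggamma

def coulomb_phase(l, k, Z=1.0):
    """Coulomb phase shift sigma_l = arg Gamma(l + 1 + i*eta) for a
    photoelectron of momentum k (a.u.) in an attractive -Z/r potential,
    with Sommerfeld parameter eta = -Z/k."""
    eta = -Z / k
    return float(np.imag(loggamma(l + 1 + 1j * eta)))

# phases of the lowest few partial waves at 1 eV above threshold
k = np.sqrt(2 * 1.0 / 27.211)                 # 1 eV kinetic energy in a.u.
sigmas = [coulomb_phase(l, k) for l in range(4)]
```

A convenient sanity check is the recurrence σ_l - σ_{l-1} = arctan(η/l), which follows from Γ(z+1) = zΓ(z).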
http://arxiv.org/abs/1703.08586v1
{ "authors": [ "Paul Hockett" ], "categories": [ "quant-ph", "physics.atom-ph" ], "primary_category": "quant-ph", "published": "20170324195934", "title": "Angle-resolved RABBIT: theory and numerics" }
Research Laboratory for Quantum Materials, Singapore University of Technology and Design, Singapore 487372, Singapore
Department of Applied Physics, Key Laboratory of Micro-nano Measurement-Manipulation and Physics (Ministry of Education), Beihang University, Beijing 100191, China
Research Laboratory for Quantum Materials, Singapore University of Technology and Design, Singapore 487372, Singapore
School of Physics and Technology, Wuhan University, Wuhan 430072, China
hmweng@iphy.ac.cn
Beijing National Laboratory for Condensed Matter Physics, and Institute of Physics, Chinese Academy of Sciences, Beijing 100190, China
Collaborative Innovation Center of Quantum Matter, Beijing, China
shengyuan_yang@sutd.edu.sg
Research Laboratory for Quantum Materials, Singapore University of Technology and Design, Singapore 487372, Singapore

We reveal a class of three-dimensional d-orbital topological materials in the antifluorite Cu_2S family. Derived from the unique properties of low-energy t_2g states, their phases are solely determined by the sign of spin-orbit coupling (SOC): topological insulator for negative SOC, whereas topological semimetal for positive SOC; both having Dirac-cone surface states but with contrasting helicities. With broken inversion symmetry, the semimetal becomes one with a nodal box consisting of butterfly-shaped nodal lines that are robust against SOC. Further breaking the tetrahedral symmetry by strain leads to an ideal Weyl semimetal with four pairs of Weyl points. Interestingly, the Fermi arcs coexist with a surface Dirac cone on the (010) surface, as required by a Z_2-invariant.

71.20.-b, 73.20.-r, 31.15.A-

d-Orbital Topological Insulator and Semimetal in Antifluorite Cu_2S Family: Contrasting Spin Helicities, Nodal Box, and Hybrid Surface States

Shengyuan A.
Yang
December 30, 2023

When individual atoms are brought together to form crystalline solids, the atomic orbitals overlap and form extended Bloch states. In the energy-momentum space, discrete atomic levels evolve into dispersive electronic bands. The interaction between orbitals, and with further coupling to spin, may generate inverted band ordering and lead to topological states of matter, which is a focus of recent physics research <cit.>. It is now established that nontrivial topology can occur for both gapped (insulator) and gapless (semimetal) systems. For topological insulators (TIs), an invariant is defined for the bulk valence bands below the gap <cit.>; whereas for topological semimetals (TSMs), the characterization is on the topology of band-crossings near the Fermi level <cit.>, leading to a variety of TSMs, among which the Weyl semimetal and nodal-line semimetal states with respective 0D and 1D band-crossings are attracting great interest and actively searched for <cit.>. The nontrivial bulk topology manifests on the sample surface as the existence of protected surface states: TIs have Dirac-cone like surface states with spin-momentum-locking <cit.>; whereas Weyl semimetals possess open Fermi arcs connecting pairs of projected Weyl points on the surface <cit.>. In forming topological band structures, the orbital character of bands plays an important role, e.g., it determines the band inversion, the low-energy quasiparticle dispersion, and furthermore, the type and strength of the effective spin-orbit coupling (SOC). For almost all the TIs identified so far, the low-energy bands are of s- and/or p-orbital character <cit.>. Meanwhile, it is known that d-orbitals could host interesting SOC physics. For instance, with tetrahedral coordination, the five d-orbitals will split into an e_g doublet and a t_2g triplet.
With SOC, the t_2g states can exhibit a unique property that their effective SOC is negative, with the j=1/2 doublet energetically higher than the j=3/2 quartet. This mechanism has inspired works in exploring TIs with negative SOC. However, in the few examples predicted to date <cit.>, the low-energy bands are still dominated by s and p characters, while the d-bands are away from the Fermi level and only act indirectly. Thus, one may wonder whether we can find a genuine d-orbital TI, and how would the d character produce any new physics? Here, we answer the above questions by revealing intriguing d-orbital topological phases in the Cu_2S material family with antifluorite structure. By first-principles calculations, we show that the low-energy bands in these materials are dominated by the t_2g-orbitals, of which the sign of effective SOC can be made either negative or positive by tuning the orbital interaction. Remarkably, this sign completely determines the phase. As illustrated in Fig. <ref>, enforced by band filling, the system must be an insulator (semimetal) when the sign of SOC is negative (positive). Moreover, in both cases, there is band inversion between the cation s-band and the t_2g-band, hence both phases are topological. We explicitly show that the Dirac-cone surface states in the two phases have opposite helicities in spin-momentum-locking, consistent with their sign of SOC. Furthermore, novel features are observed for the TSM phase. In ternary compounds with intrinsic inversion asymmetry, the system becomes a novel semimetal with a nodal box comprising butterfly-shaped nodal lines. In known examples of nodal-line materials, the nodal lines are unstable when SOC is included; in contrast, the butterfly nodal line here is stabilized with SOC. With further lowering of cubic symmetry by strain, an ideal Weyl semimetal emerges with four pairs of Weyl points lying exactly at the Fermi level.
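The t_2g level ordering invoked here can be checked with a minimal atomic model: diagonalizing H = λ ℓ·s in the six-dimensional t_2g ⊗ spin space (effective ℓ = 1, ħ = 1; λ = ∓1 is used purely for illustration, not a fitted material parameter) gives a j = 3/2 quartet at λ/2 and a j = 1/2 doublet at -λ, so a negative λ indeed places the doublet on top.

```python
import numpy as np

def jmat(j):
    """Angular momentum matrices (Jx, Jy, Jz) in the |j, m> basis (hbar = 1)."""
    m = j - np.arange(int(2 * j + 1))                 # m = j, j-1, ..., -j
    # <j, m | J+ | j, m-1> = sqrt(j(j+1) - m(m-1)) on the superdiagonal
    jp = np.diag(np.sqrt(j * (j + 1) - m[1:] * (m[1:] + 1)), 1)
    return (jp + jp.T) / 2, (jp - jp.T) / (2 * 1j), np.diag(m)

def t2g_soc_levels(lam):
    """Eigenvalues of H = lam * L.S for the t2g triplet (effective l = 1)
    coupled to spin 1/2: a j = 3/2 quartet at lam/2, a j = 1/2 doublet at -lam."""
    L, S = jmat(1.0), jmat(0.5)
    H = lam * sum(np.kron(La, Sa) for La, Sa in zip(L, S))
    return np.sort(np.linalg.eigvalsh(H))

# negative SOC: four states at -0.5 (j = 3/2) below two at +1.0 (j = 1/2)
print(t2g_soc_levels(-1.0))
```

Flipping the sign of λ (the Cu_2Se case discussed below) simply inverts the ordering, putting the half-filled quartet on top.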
We find the phenomenon of hybrid surface states, i.e., coexisting Fermi-arc and Dirac-cone surface states on certain surfaces, as required by a bulk Z_2 invariant. Our predicted features in the bulk band structure and the surface states (including the spin texture) can be readily probed by the ARPES experiment.The Cu_2S-family materials typically have three structures (denoted as α, β, and γ). Here, we focus on the α-phase structure <cit.>, also known as the antifluorite structure, having the space group Fm3̅m (No. 225). The conventional unit cell has a cubic shape and the structure can be regarded as a double nested zinc-blende lattice. As shown in Fig. <ref>(a), for binary compounds like Cu_2S and Cu_2Se, an inversion center is preserved. However, inversion symmetry is broken for ternary compounds like CuAgSe (see Fig. <ref>(a)) where the Cu sites in one of the nested zinc-blende lattices are occupied by Ag. We perform first-principles calculations based on the density functional theory (DFT). The calculation details and the structural parameters are in the Supplemental Material <cit.>. In the following discussion, we shall mainly focus on the representatives Cu_2S, Cu_2Se, and CuAgSe.Negative SOC: TI.—In Cu_2S, each Cu is surrounded by a tetrahedron of S atoms. The tetrahedral crystal field splits the Cu 3d-orbitals into e_g and t_2g states, with t_2g having a higher energy. Focusing on the t_2g states, the interaction between the two Cu atoms in a primitive cell leads to bonding and antibonding states, with the latter energetically higher than the former. In Cu_2S, the Cu d-orbitals have higher energy than that of S-3p orbitals, so that the p-d hybridization further pushes the t_2g antibonding states up to the Fermi level. The t_2g triplet (containing d_xy, d_xz, and d_yz orbitals) have an effective orbital moment ℓ=1, which are then split by the SOC λℓ· s into a j=1/2 doublet and a j=3/2 quartet. 
As mentioned, a unique feature for t_2g is that the SOC can be negative with λ<0, opposite to the SOC splitting of p-orbitals <cit.>. Indeed, as shown in Fig. <ref>, our DFT result confirms the above picture. The low-energy bands near the Fermi level are mainly from the t_2g states, and the S-3p bands are below -4 eV. Due to combined inversion symmetry and time reversal symmetry, each band is spin degenerate. Around Γ-point, one observes that: (i) the originally degenerate t_2g states are split by SOC, and the j=1/2 doublet is higher than the j=3/2 quartet in energy, showing a negative SOC; (ii) the Cu-4s states dive below the t_2g states by about 1.1 eV, indicating an inverted band ordering. Band filling dictates that the Fermi level lies exactly in the gap (∼ 55 meV) between the j=1/2 and j=3/2 states (Fig. <ref>(c)). The band inversion at a single time reversal invariant momentum (TRIM) point directly indicates that the system is a strong TI. To further confirm the nontrivial band topology, we calculate the Z_2 invariant for the bulk band structure. With inversion symmetry, the task is simplified by analyzing the product of parity eigenvalues at the eight TRIM points <cit.>. We find that this product is positive at Γ and negative at other TRIMs, in accordance with our analysis of the band inversion, which leads to a strong TI with Z_2 indices (1;000). The hallmark of a TI is the existence of protected Dirac-cone surface states with spin-momentum-locking. In Fig. <ref>(d), we plot the calculated surface energy spectra for the (001) surface, clearly showing a single surface Dirac-cone. Here the Dirac point is buried in the bulk valence bands. Notice that for a constant energy above the Dirac point, the surface states show a right-handed spin-momentum-locking pattern (see Fig. <ref>(e)). This is consistent with the negative SOC, and is in contrast with almost all other TIs (which have left-handed helicity due to positive SOC) <cit.>.
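The index count (1;000) follows from the parity products alone. A few-line sketch of the Fu-Kane criterion, using the δ values quoted above (+1 at Γ, -1 at the seven other TRIMs; the (n1, n2, n3) labelling of TRIMs in half-reciprocal-lattice-vector units is our bookkeeping convention for this illustration):

```python
import numpy as np
from itertools import product

# Fu-Kane criterion: delta(TRIM) = product of parity eigenvalues of the
# occupied Kramers pairs; here +1 at Gamma and -1 at the other seven
# TRIMs, labelled n = (n1, n2, n3) with TRIM = (n1*G1 + n2*G2 + n3*G3)/2.
delta = {n: (1 if n == (0, 0, 0) else -1) for n in product((0, 1), repeat=3)}

# strong index: (-1)^nu0 = product over all eight TRIMs
nu0 = int(np.prod(list(delta.values())) == -1)
# weak indices: product over the four TRIMs in each n_i = 1 plane
nu = tuple(int(np.prod([d for n, d in delta.items() if n[i] == 1]) == -1)
           for i in range(3))
print(nu0, nu)   # -> 1 (0, 0, 0): a strong TI
```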
The same feature is also observed for other surfaces. All this evidence confirms that Cu_2S is a d-orbital TI.

Positive SOC: TSM.—The effective SOC strength λ could be tuned by varying the crystal environment, especially through the interaction with the anion p-orbitals. The effect depends on both the interaction strength and the SOC of the p-orbitals. Consider Cu_2Se, which has the same structure as Cu_2S. Compared with S-3p, the Se-4p orbitals are closer to the Cu-t_2g orbitals in energy, and they also possess a stronger SOC (which is positive). Consequently, one expects that the interaction with Se-4p would decrease the negativeness of λ for the Cu-t_2g states. This speculation is confirmed by our DFT result. We find that the composition of the low-energy bands in Cu_2Se is similar to that of Cu_2S; however, its j=3/2 quartet is above the j=1/2 doublet (see Fig. <ref>(a)), indicating that SOC has changed from negative to positive. Band filling dictates that the quartet is half-filled, so that the Fermi level intersects with the degenerate states and the system must be a semimetal. Note that the band inversion near Γ-point between the Cu-4s states and the t_2g states is preserved, not affected by the sign change of λ. Hence the surface states still exist for Cu_2Se, as shown in Fig. <ref>(b), although they are submerged in the bulk states. Importantly, the spin-momentum-locking pattern here becomes left-handed (Fig. <ref>(c)), which is consistent with the positive SOC.

Nodal-box and Weyl semimetals.—Breaking inversion symmetry would split the spin-degenerate bands with SOC, adding new ingredients to the physics. Consider CuAgSe in which the inversion symmetry is naturally broken (see Fig. <ref>(a)). Similar to Cu_2Se, its t_2g states have an effectively positive SOC, hence the system is also a semimetal, as shown in Fig. <ref>(b).
In Cu_2Se, the conduction band and the valence band touch at a single point, but we shall see that in CuAgSe, the splitting of the j=3/2 quartet near Γ-point leads to nodal-line band-crossings. The symmetry group contains six mirror planes which may be collectively denoted as M_{110}. In Fig. <ref>(b), one observes that the four states in the quartet fully split along the Γ-K line, on which the two middle bands cross each other at a point K' near the Fermi level. Meanwhile, along the Γ-L direction, two middle bands become degenerate, which then cross the upper band at a triply-degenerate point (labeled as L'). A careful scan of the band structure around Γ-point shows that these crossing points are not isolated (see Fig. <ref>(c)). Remarkably, in each M_{110} mirror plane, the crossings near the Fermi level form a butterfly-shaped nodal line, which is possible since it is formed by the pair-wise crossings of three bands. This is shown in Fig. <ref>(d): along the diagonal (i.e. Γ-L) direction, the crossing is protected and pinned by the C_3v symmetry such that the two states at each point on this line form a 2D irreducible representation Λ_4 (E_1/2); whereas for the arcs connecting the L' points, they are protected by the mirror plane because the two crossing bands have opposite mirror eigenvalues ± i (see Fig. <ref>(b)). Note that unlike most previously identified nodal lines which are unstable under SOC <cit.>, the butterfly nodal line here is robust against SOC, and actually it appears only when SOC is included. Combining the `butterflies' from all the mirror planes leads to the interesting nodal-box pattern shown in Fig. <ref>(e). It has been theoretically argued that a Weyl semimetal phase must occur during the transition of an inversion-asymmetric system from a TI phase to a normal insulator phase <cit.>. For CuAgSe, we find that it is a TI under a small uniaxial compression, and it becomes a normal insulator under a tensile strain.
Hence there must exist a Weyl semimetal phase in-between. We carefully monitor the band structure change under uniaxial strains and find that the butterfly nodal lines quickly disappear upon applying a tensile strain. The reduction of the cubic symmetry leads to a dramatic change in the low-energy bands. Figure <ref>(a) shows the band structure at 3% strain, in which the gap almost closes along the Γ-Z line. We perform a scan of the possible band-crossing points using a dense k-mesh and reveal that there exist four pairs of Weyl points at (±0.001235, 0, ±0.1012) and (0, ±0.001235, ±0.1012) in units of the reciprocal lattice vectors. As schematically shown in Fig. <ref>(b), the points in each time reversal pair (i.e., at opposite k-points) have the same chirality, and the four pairs are further connected by the two remaining mirror planes M_(110) and M_(11̅0) (two points connected by a mirror have opposite chiralities), so all the eight points are tied by symmetry and must lie at the Fermi level. Moreover, there are no other coexisting electron/hole pockets. Such Weyl semimetals are referred to as “ideal" <cit.>, and are regarded as good platforms for studying the interesting Weyl physics. The surface of a Weyl semimetal features Fermi arcs connecting the surface-projected Weyl points. On the (001) surface, there are four projected points, each having a chirality of ± 2 (see Fig. <ref>(b)), hence there must be two arcs connected to each point. This is verified by the surface spectrum in Fig. <ref>(c), where the Fermi arcs are connected into a loop. Meanwhile, for the (010) surface, there are six projected Weyl points, of which the middle ones on the k_z-axis have chirality +2 and the other four have chirality -1 (Fig. <ref>(b)). In the slice at the Fermi energy (Fig. <ref>(d)), one observes the nearly straight Fermi arcs connecting each (-1,+2,-1) triplet. But surprisingly, there appears an additional Fermi circle, which looks like the TI Dirac-cone surface states.
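The chirality bookkeeping above can be illustrated with a generic two-band toy model (not the DFT Hamiltonian of CuAgSe): for H(k) = d(k)·σ, the topological charge of a node is the degree of the map k̂ → d̂ over a small sphere enclosing it, evaluated numerically in the sketch below for a pair of unit-charge Weyl points of opposite chirality.

```python
import numpy as np

def weyl_charge(d, n=300):
    """Chirality of a node at k = 0 of H(k) = d(k).sigma, computed as the
    degree of the map k_hat -> d_hat over a small sphere:
    C = (1/4pi) * integral of d_hat . (d_theta d_hat x d_phi d_hat)."""
    th = np.linspace(1e-4, np.pi - 1e-4, n)
    ph = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
    T, P = np.meshgrid(th, ph, indexing="ij")
    k = np.stack([np.sin(T) * np.cos(P), np.sin(T) * np.sin(P), np.cos(T)])
    dv = np.asarray(d(*k), dtype=float)
    dv /= np.linalg.norm(dv, axis=0)                 # normalize d_hat
    dth = np.gradient(dv, th, axis=1)                # d(d_hat)/d(theta)
    dph = np.gradient(dv, ph, axis=2)                # d(d_hat)/d(phi)
    flux = np.einsum("iab,iab->ab", dv, np.cross(dth, dph, axis=0))
    return flux.sum() * (th[1] - th[0]) * (ph[1] - ph[0]) / (4 * np.pi)

print(int(round(weyl_charge(lambda kx, ky, kz: (kx, ky, kz)))))    # -> 1
print(int(round(weyl_charge(lambda kx, ky, kz: (kx, ky, -kz)))))   # -> -1
```

A mirror operation flips one component of d(k), which is exactly why the text's mirror-related Weyl points carry opposite chiralities; the chirality-2 projections quoted for the (001) and (010) surfaces arise from two unit charges projecting onto the same surface momentum.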
By checking the surface state variation at constant energies around the Fermi level, one confirms that the circle is indeed separated from the Fermi arcs (see Fig. <ref>(e)). To explain its existence, we note that the k_z=0 time reversal invariant plane is fully gapped in the bulk and hence carries a Z_2 invariant. Using the Wilson loop method <cit.>, we find that its Z_2=1, indicating that there must be a pair of gapless states in the edge spectrum of the k_z=0 plane. This means that, in Fig. <ref>(d) and <ref>(e), there must exist a time reversal pair on the k_z=0 line. Therefore, the existence of the Dirac-cone is in fact required by the Z_2 invariant. In retrospect, one notes that planes such as (110) and (11̅0) also have nontrivial Z_2, but this does not lead to an additional Dirac-cone for the (001) surface because the pattern of Fermi arcs in Fig. <ref>(c) already satisfies the requirement.

Discussion.—In this work, we have revealed an interesting family of topological materials in which the sign of SOC controls the topological phase. On the negative SOC side, these materials are the first d-orbital TIs discovered to date. We note that a few materials like TlN and HgS have been proposed as TIs with negative SOC <cit.>, but their low-energy bands are still dominated by s and p orbitals. The sign of SOC determines the helicity of Dirac-cone surface states like in Fig. <ref>(e) and <ref>(c), which can be detected by spin-resolved ARPES experiment. The ability to control the helicity gives us additional freedom in utilizing these states for spintronics applications. Interestingly, it has been proposed that gapless interface states will emerge when two surfaces with opposite helicities are contacted <cit.>. The effect may be explored in our identified materials. As mentioned, almost all the identified nodal lines are unstable under SOC. Certain kinds of robust nodal lines require nonsymmorphic symmetries <cit.>.
The butterfly nodal line discovered here is distinct in that: (i) it is formed under SOC; (ii) it does not require nonsymmorphic symmetries; and (iii) it is from the pair-wise crossings of three bands, not just two bands. Moreover, the multiple `butterflies' constitute a novel nodal box under tetragonal symmetry. Finally, the coexistence of Fermi arcs and Dirac-cone surface states was proposed at an interface between a Weyl semimetal and a TI <cit.>, and is also proposed in a model study <cit.>. Our work predicts the first material for its realization. The intriguing surface-state patterns can be directly probed via the ARPES experiment. Furthermore, the interference between the Fermi arcs and the Dirac-cone may also lead to signatures in the quasiparticle interference pattern detectable via scanning tunneling microscopy <cit.>.

§ SUPPLEMENTAL MATERIAL

§.§ Computational Methods

The results presented in the main text are obtained by first-principles calculations as implemented in the Vienna ab initio simulation package (VASP) <cit.> with the projector augmented wave (PAW) method <cit.>. The generalized gradient approximation (GGA) with the Perdew-Burke-Ernzerhof (PBE) <cit.> realization was adopted for the exchange-correlation potential. The plane-wave cutoff energy was taken as 500 eV. The Monkhorst-Pack k-point mesh <cit.> of size 10×10× 10 was used for Brillouin zone sampling. The crystal structures were optimized until the forces on the ions were less than 0.01 eV/Å. As the transition metal d orbitals may have notable correlation effects, we also validate our results by the GGA+U method <cit.>. On-site Hubbard U parameters ranging from 1 eV to 7 eV were tested. We find that the key features are qualitatively the same as the GGA results, which also agree with previous studies <cit.>. So in the main text, we focus on the GGA results.
From the DFT results, we construct the maximally localized Wannier functions (MLWF) <cit.> for the Cu (Ag) s, d and S (Se) p orbitals, and effective model Hamiltonians for the bulk and the semi-infinite layer are built to investigate the surface states. In this work, we focus on the antifluorite structure (α-phase, space group Fm3̅m (No. 225)) of the Cu_2S-family materials. For the band structure calculations, we take the experimental lattice parameters a=5.725 Å for Cu_2S <cit.>, a=5.787 Å for Cu_2Se <cit.>, and a=5.96 Å for CuAgS <cit.>. For CuAgSe, since we also need to analyze its results under strain, we used its optimized lattice parameter a=6.17 Å in the calculation.

§.§ Band structures with modified Becke-Johnson potential

Considering the possible underestimation of the band gap by GGA, we further check the band structure by the hybrid functional approach with the modified Becke-Johnson (mBJ) potential <cit.> as implemented in the WIEN2K package <cit.>. We find that the band inversion features are maintained. The band inversion energies between the t_2g and Cu-4s bands are about 0.1 eV, 0.24 eV, 0.53 eV and 0.26 eV for Cu_2S, Cu_2Se, CuAgSe and CuAgS, respectively. The topological phases are still maintained. As shown in Fig. <ref>, Cu_2S is still a negative SOC TI, with a direct (indirect) band gap of 65 meV (30 meV); CuAgS also keeps the negative SOC band structure with a direct (indirect) band gap of 71 meV (38 meV); Cu_2Se and CuAgSe are still topological semimetals.

§.§ Z_2 indices by Wilson loop method

Band inversion is a necessary condition for a topologically nontrivial band structure, but it is not sufficient to identify a TI. A topological invariant is a global character of the electronic structure in the whole Brillouin zone. The product of parity eigenvalues of occupied bands at TRIM points is a convenient method to distinguish TIs with inversion symmetry, but it cannot be used for inversion-asymmetric systems such as CuAgSe. The Wilson loop method can be employed for such cases.
It traces the evolution of the Wannier function centers <cit.>. The Wannier-center evolution for four representative planes (in the Brillouin zone) of CuAgSe is shown in Fig. <ref>, from which we find that the Z_2 indices are 1 and 0 for the k_z=0 and k_z=π planes, respectively, indicating that there must be a pair of gapless states in the edge spectrum of the k_z=0 plane. As discussed in the main text, this dictates the existence of a Dirac cone on the (010) surface. For the two mirror planes (110) and (11̅0), we find Z_2=1. However, this does not lead to an additional Dirac cone for the (001) surface, because the pattern of the Fermi arcs already satisfies the requirement.

M. Z. Hasan and C. L. Kane, Rev. Mod. Phys. 82, 3045 (2010).
X.-L. Qi and S.-C. Zhang, Rev. Mod. Phys. 83, 1057 (2011).
A. Bansil, H. Lin, and T. Das, Rev. Mod. Phys. 88, 021004 (2016).
G. E. Volovik, The Universe in a Helium Droplet (Clarendon Press, Oxford, 2003).
Y. X. Zhao and Z. D. Wang, Phys. Rev. Lett. 110, 240404 (2013).
X. Wan, A. M. Turner, A. Vishwanath, and S. Y. Savrasov, Phys. Rev. B 83, 205101 (2011).
S. Murakami, New J. Phys. 9, 356 (2007).
H. Weng, C. Fang, Z. Fang, B. A. Bernevig, and X. Dai, Phys. Rev. X 5, 011029 (2015).
S.-M. Huang, S.-Y. Xu, I. Belopolski, C.-C. Lee, G. Chang, B. Wang, N. Alidoust, G. Bian, M. Neupane, C. Zhang, et al., Nat. Commun. 6 (2015).
B. Q. Lv, H. M. Weng, B. B. Fu, X. P. Wang, H. Miao, J. Ma, P. Richard, X. C. Huang, L. X. Zhao, G. F. Chen, et al., Phys. Rev. X 5, 031013 (2015).
S.-Y. Xu, I. Belopolski, N. Alidoust, M. Neupane, G. Bian, C. Zhang, R. Sankar, G. Chang, Z. Yuan, C.-C. Lee, et al., Science 349, 613 (2015).
S.-Y. Xu, N. Alidoust, I. Belopolski, Z. Yuan, G. Bian, T.-R. Chang, H. Zheng, V. N. Strocov, D. S. Sanchez, G. Chang, et al., Nat. Phys. 11, 748 (2015).
L. X. Yang, Z. K. Liu, Y. Sun, H. Peng, H. F. Yang, T. Zhang, B. Zhou, Y. Zhang, Y. F. Guo, M. Rahn, et al., Nat. Phys. 11, 728 (2015).
H. Weng, Y. Liang, Q. Xu, R. Yu, Z. Fang, X. Dai, and Y. Kawazoe, Phys. Rev. B 92, 045108 (2015).
Y. Kim, B. J. Wieder, C. L. Kane, and A. M. Rappe, Phys. Rev. Lett. 115, 036806 (2015).
R. Yu, H. Weng, Z. Fang, X. Dai, and X. Hu, Phys. Rev. Lett. 115, 036807 (2015).
Y. Chen, Y. Xie, S. A. Yang, H. Pan, F. Zhang, M. L. Cohen, and S. Zhang, Nano Lett. 15, 6974 (2015).
H. Weng, C. Fang, Z. Fang, and X. Dai, Phys. Rev. B 93, 241202 (2016).
H. Weng, C. Fang, Z. Fang, and X. Dai, Phys. Rev. B 94, 165201 (2016).
Z. Zhu, G. W. Winkler, Q. Wu, J. Li, and A. A. Soluyanov, Phys. Rev. X 6, 031003 (2016).
W. Feng and Y. Yao, Sci. China Phys. Mech. Astron. 55, 2199 (2012).
Y. Ando, J. Phys. Soc. Jpn. 82 (2013).
J. Vidal, X. Zhang, V. Stevanović, J.-W. Luo, and A. Zunger, Phys. Rev. B 86, 075316 (2012).
X.-L. Sheng, Z. Wang, R. Yu, H. Weng, Z. Fang, and X. Dai, Phys. Rev. B 90, 245308 (2014).
F. Virot, R. Hayn, M. Richter, and J. van den Brink, Phys. Rev. Lett. 106, 236806 (2011).
S. Djurle, Acta Chem. Scand. 12, 1415 (1958).
K. Yamamoto and S. Kashida, J. Solid State Chem. 93, 202 (1991).
D. M. Trots, A. Senyshyn, D. A. Mikhailova, M. Knapp, C. Baehtz, M. Hoelzel, and H. Fuess, J. Phys. Condens. Matter 19, 136204 (2007).
See Supplemental Material.
M. Cardona, Phys. Rev. 129, 69 (1963).
K. Shindo, A. Morita, and H. Kamimura, J. Phys. Soc. Jpn. 20, 2054 (1965).
L. Fu and C. L. Kane, Phys. Rev. B 76, 045302 (2007).
W. Zhang, R. Yu, H.-J. Zhang, X. Dai, and Z. Fang, New J. Phys. 12, 065013 (2010).
H. Zhang, C.-X. Liu, and S.-C. Zhang, Phys. Rev. Lett. 111, 066801 (2013).
J. Liu and D. Vanderbilt, Phys. Rev. B 90, 155316 (2014).
J. Ruan, S.-K. Jian, H. Yao, H. Zhang, S.-C. Zhang, and D. Xing, Nat. Commun. 7, 11136 (2016), arXiv:1511.08284.
J. Ruan, S.-K. Jian, D. Zhang, H. Yao, H. Zhang, S.-C. Zhang, and D. Xing, Phys. Rev. Lett. 116, 226801 (2016).
R. Yu, X. L. Qi, A. Bernevig, Z. Fang, and X. Dai, Phys. Rev. B 84, 075119 (2011).
A. A. Soluyanov and D. Vanderbilt, Phys. Rev. B 83, 035108 (2011).
H. Weng, X. Dai, and Z. Fang, MRS Bull. 39, 849 (2014).
R. Takahashi and S. Murakami, Phys. Rev. Lett. 107, 166805 (2011).
C. Fang, H. Weng, X. Dai, and Z. Fang, Chin. Phys. B 25, 117106 (2016).
A. G. Grushin, J. W. F. Venderbos, and J. H. Bardarson, Phys. Rev. B 91, 121109 (2015).
A. Lau, J. van den Brink, and C. Ortix, arXiv:1701.01660.

G. Kresse and J. Furthmüller, Phys. Rev. B 54, 11169 (1996).
G. Kresse and D. Joubert, Phys. Rev. B 59, 1758 (1999).
P. E. Blöchl, Phys. Rev. B 50, 17953 (1994).
J. P. Perdew, K. Burke, and M. Ernzerhof, Phys. Rev. Lett. 77, 3865 (1996).
H. J. Monkhorst and J. D. Pack, Phys. Rev. B 13, 5188 (1976).
S. L. Dudarev, G. A. Botton, S. Y. Savrasov, C. J. Humphreys, and A. P. Sutton, Phys. Rev. B 57, 1505 (1998).
Y. Zhang, Y. Wang, L. Xi, R. Qiu, X. Shi, P. Zhang, and W. Zhang, J. Chem. Phys. 140 (2014).
M. Råsander, L. Bergqvist, and A. Delin, J. Phys. Condens. Matter 25, 125503 (2013).
A. A. Mostofi, J. R. Yates, G. Pizzi, Y.-S. Lee, I. Souza, D. Vanderbilt, and N. Marzari, Comput. Phys. Comm. 185, 2309 (2014).
A. D. Becke and E. R. Johnson, J. Chem. Phys. 124, 221101 (2006).
P. Blaha, K. Schwarz, G. Madsen, K. D., and L. J., WIEN2k, An Augmented Plane Wave + Local Orbitals Program for Calculating Crystal Properties (2001), ISBN 3-9501031-1-2.
http://arxiv.org/abs/1703.09040v1
{ "authors": [ "Xian-Lei Sheng", "Zhi-Ming Yu", "Rui Yu", "Hongming Weng", "Shengyuan A. Yang" ], "categories": [ "cond-mat.mtrl-sci" ], "primary_category": "cond-mat.mtrl-sci", "published": "20170327124828", "title": "$d$-Orbital Topological Insulator and Semimetal in Antifluorite Cu$_2$S Family: Contrasting Spin Helicities, Nodal Box, and Hybrid Surface States" }
Service Overlay Forest Embedding for Software-Defined Cloud Networks Jian-Jhih Kuo1, Shan-Hsiang Shen12, Ming-Hong Yang13, De-Nian Yang1, Ming-Jer Tsai4 and Wen-Tsuen Chen14 1Inst. of Information Science, Academia Sinica, Taipei, Taiwan 2Dept. of Computer Science & Information Engineering, National Taiwan University of Science & Technology, Taipei, Taiwan 3Dept. of Computer Science & Engineering, University of Minnesota, Minneapolis MN, USA 4Dept. of Computer Science, National Tsing Hua University, Hsinchu, Taiwan E-mail: {lajacky,sshen3,curtisyang,dnyang,chenwt}@iis.sinica.edu.tw and mjtsai@cs.nthu.edu.tw December 30, 2023 =========================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== The B VI lightcurves of seven recent novae have been extensively mapped with daily robotic observations from Atacama (Chile). They are V1534 Sco, V1535 Sco, V2949 Oph, V3661 Oph, MASTER OT J010603.18-744715.8, TCP J1734475-240942 and ASASSN-16ma. Five belong to the Bulge, one to the SMC, and another is a Galactic disk object. The two program novae detected in γ-rays by Fermi-LAT (TCP J1734475-240942 and ASASSN-16ma) are Bulge objects with unevolved companions. They distinguish themselves in showing a double-component optical lightcurve. The first component to develop is the fireball from freely expanding, ballistically launched ejecta, whose time of passage through maximum is strongly dependent on wavelength (∼1 day delay between the B and I bands). The second component, emerging simultaneously with the nova
detection in γ-rays and for this reason termed gamma, evolves at a slower pace, its optical brightness being proportional to the γ-ray flux, and its passage through maximum not dependent on wavelength. The fact that γ-rays are detected from novae at the distance of the Bulge, and at peak flux levels differing by 4×, seems to contradict some common beliefs, namely that only normal novae close to the Sun are detected by Fermi-LAT, that most normal novae emit γ-rays, and that they emit γ-rays in similar amounts. The advantages offered by high-quality photometric observations collected with only one telescope (as opposed to data provided by a number of different instruments) are discussed in connection with the actual local realization of the standard filter bandpasses and the spectrum of novae dominated by emission lines. It is shown how, for the program novae, such high-quality, single-telescope optical photometry is able to disentangle effects like the wavelength dependence of a fireball expansion, the recombination in the flashed wind of a giant companion, the subtle presence of hiccups and plateaus, tracing the super-soft X-ray phase, and determining the time of its switch-off. The non-detection by 2MASS of the progenitor excludes a giant or a sub-giant being present in four of the program novae (V2949 Oph, V3661 Oph, TCP J18102829-2729590, and ASASSN-16ma). For the remaining three objects, by modelling the optical-IR spectral energy distribution in quiescence it is shown that V1534 Sco contains an M3III giant, V1535 Sco a K-type giant, and MASTER OT J010603.18-744715.8 a sub-giant. novae, cataclysmic variables § INTRODUCTION The outburst of a nova originates from thermonuclear runaway (TNR) on the surface of a white dwarf when material accreted from a companion reaches critical conditions for ignition. The accreted envelope is electron degenerate, a fact that leads to violent mass ejection into surrounding emptiness (if the donor is a dwarf) or into thick circumstellar material (if the WD orbits
within the wind of a cool giant companion). The variety of nova phenomena is further enriched by the dependence on WD mass (such as the velocity and amount of ejected material), the diffusion and mixing of underlying WD material into the accreted envelope (outburst strength and chemistry), the common-envelope interaction with the companion star (such as the 3D morphology of the ejecta and the duration of the post-TNR stable nuclear burning), and the viewing angle (especially for highly structured ejecta composed of bipolar flows, equatorial tori, diffuse prolate components and winds or bow-shocks). Bode & Evans (2012) and Woudt & Ribeiro (2014) have provided extensive, recent reviews about classical novae. Given the range of observable phenomena, a comprehensive description of a nova would obviously benefit from the widest wavelength and epoch coverage, including the pre-outburst properties of the progenitor. While rapidly advancing fields like X-ray imaging/spectra (e.g. Ness 2012), GeV γ-ray detection (Ackermann et al. 2014) and high angular-resolution radio maps (Chomiuk et al. 2014) are changing our understanding of novae, good multi-band lightcurves are still an essential contribution. Especially during the initial optically thick phase, but also during the following optically thin advanced decline, the ejecta and pre-existing circumstellar matter reprocess at longer wavelengths (optical/IR) the energetic input of phenomena developing primarily at much higher energies. An accurate, multi-band lightcurve of a nova can thus track and reveal a lot about the powering engine and the physical conditions in the intervening and reprocessing medium. To be of the highest diagnostic value, a multi-band lightcurve should (i) be densely mapped (daily), (ii) start immediately after nova discovery and extend well into the advanced decline, stopped only by Solar conjunction or limited by telescope diameter, (iii) pursue the highest external photometric accuracy, i.e. the combination of the highest recorded
flux with the most accurate transformation from the instantaneous local photometric system to the standard one, and (iv) be obtained entirely (or in large part) with a single instrument, rather than being the result of the combination of sparse data from a variety of different telescopes. This last point is discussed in more detail in Sect. 3 below. The aim of the present paper is to present extensive B VI lightcurves of seven recent novae, all appearing at deep southern declinations, well below those accessible with the Asiago telescopes that we regularly use to spectroscopically follow novae appearing north of -25^∘. The photometric observations presented in this paper have been obtained with a robotic telescope we operate in Atacama (Chile). The program novae are listed in Table 1, together with their equatorial and Galactic coordinates, spectral class (FeII or He/N) and the date when their discovery was announced. All of them, except MASTER OT J010603.18-744715.8 appearing toward the SMC, erupted within a few degrees of the Galactic center. Most of the program novae have been targeted and detected in the radio, X-rays and/or γ-rays. Nothing comprehensive has yet been published on these exciting observations other than preliminary announcements in ATels, and only for the two oldest program novae (V1534 Sco and V1535 Sco) has a comprehensive study of their near-IR spectra been accomplished (Joshi et al. 2015, Srivastava et al. 2015). For none of the program novae has a detailed report on their optical properties and multi-band lightcurve been published so far, and thus the aim of this paper is to fill in this gap by providing and discussing high-accuracy, daily-mapped B VI lightcurves for all of them. In addition to allowing by themselves some physical discussion of the properties of the program novae, our lightcurves are meant to provide useful support information to future studies based on other wavelength domains. Our photometric mapping started within one day of nova
announcement and extended until Solar conjunction set in or the nova completed its evolution. All times given in the paper are UT unless otherwise noted. Our photometry is strictly tied to the Landolt (2009) system of equatorial standards, thus the I band should be properly written as I_ C (Cousins' system). For simplicity we will drop the "C" suffix in the rest of the paper, and write the adopted photometric bands as B VI. § OBSERVATIONS B VI optical photometry of the program novae was obtained with ANS Collaboration robotic telescope 210, located in San Pedro de Atacama, Chile. All novae were observed ∼daily for as long as Solar conjunction allowed after their discovery. Telescope 210 is a 40cm f/6.8 Optimized Dall-Kirkham (ODK). It mounts a FLI cooled CCD camera equipped with a 4k×4k Kodak 16803 sensor of 9 μm pixel size. The photometric B VI filters are of the multi-layer dielectric type and are manufactured by Astrodon. Technical details and operational procedures of the ANS Collaboration network of telescopes are presented by Munari et al. (2012). Detailed analysis of the photometric performances and multi-epoch measurements of the actual transmission profiles for all the photometric filter sets in use at all ANS telescopes is presented by Munari & Moretti (2012). Data collected on the program novae with ANS telescope 210 were ftp-transferred daily to the central ANS server, where data reduction was carried out in real time to check on nova progress and instrument performance. Data reduction involved all the usual corrections for bias/dark/flat/pixel map, with fresh new calibration frames obtained regularly in spite of the highly stable conditions of the instrumentation at the remote desert site. Transformation from the instantaneous local photometric system to the standard one is carried out on all individual observations by color equations whose coefficients are χ^2-calibrated against a local photometric sequence imaged together with the nova. The local photometric sequence is
extracted from the APASS survey (Henden et al. 2012, Henden & Munari 2014) using the transformation equations calibrated in Munari et al. (2014a,b). The APASS survey is strictly linked to the Landolt (2009) and Smith et al. (2002) systems of equatorial standards. The local photometric sequences around the program novae are selected to fully cover the whole range of colors spanned by each individual nova and are kept fixed during the whole observing campaign, so as to ensure the highest (internal) consistency. Re-observing the local photometric sequences along with the respective novae has allowed us to refine their magnitudes to extreme precision (well beyond the original APASS), including pruning of subtle variable stars hidden in the sequences. These pruned and refined local photometric sequences are freely available (e-mail the first author) to anyone interested in using them to calibrate further optical photometry of the novae considered in this paper. All measurements were carried out with aperture photometry, with the aperture radius and the inner/outer radii for the sky annulus χ^2-optimized on each image so as to minimize the dispersion of the stars of the local photometric sequences around the transformation equations from the local instantaneous to the standard system. On average, the aperture radius was ∼1.0×FWHM of the seeing profile, and the inner and outer radii for the sky annulus were ∼3× and ∼4×FWHM, respectively. Finally, colors and magnitudes are obtained separately during the reduction process, and are not derived one from the other. Our measurements for the program novae are listed in Table 2, available in full only electronically. The quoted uncertainties are total error budgets, adding quadratically the Poissonian contribution on the nova to the uncertainty (measured on the stars of the local photometric sequence) in the transformation from the instantaneous local photometric system to the standard one. § SINGLE-TELESCOPE VS MULTI-TELESCOPE LIGHTCURVES The advantages
offered by building the lightcurve of a nova from data provided by a single telescope (and thus not the result of the combination of sparse data from a variety of different telescopes) are relevant, even if frequently overlooked or not fully appreciated. We do not refer — although very relevant themselves — to differences in the quality of data acquisition/reduction/calibration carried out at each telescope, nor to the effect of differences in focal length and PSF purity in the crowded fields where novae usually appear. We restrict ourselves here to the ideal case in which all aspects of data acquisition and reduction have been carried out state-of-the-art and crowding is not an issue. The spectral energy distribution of a nova is dominated by strong emission lines, the more so as the nova declines. While standard color-equations (for either the all-sky or the local photometric sequence approach) can essentially null the differences (for normal stars and standard filter sets) between the standard photometric system and its local instantaneous realization (with the time- and λ-dependent atmospheric transmission as a key component of the optical train), this is hardly so for objects whose spectra are dominated by emission lines. Let us consider the Landolt V band for example, with similar reasoning applicable to other bands or other photometric systems. As discussed in detail in Munari et al. (2013a), much of the flux through the V band during the optically thin phase of FeII novae comes from the [OIII] nebular doublet. The doublet is located on the steeply rising long-pass edge of the V band profile, where small differences in the transmission of the photometric filters cause large deviations in the flux collected from the nova. Similarly, during the optically thick phase of heavily reddened novae of both the FeII and He/N types, a non-negligible fraction of the flux through the V band comes from the Hα emission line. Hα is located at the red wing of the V band, where the
transmission of an actual filter can go from null up to several % of the peak value. In the top panel of Figure 1 we plot the transmission profile of the V band as locally realized by some of the most popular attempts to match and standardize the original Johnson & Morgan (1953) UBV photometric system (Cousins 1980, Graham 1982, Bessell 1990, Straizys 1992, Landolt 1992). The differences along the whole band profile are quite obvious. Yet, proper handling of the color-equations can essentially null such differences when dealing with the smooth, continuum-dominated and black-body-like spectral energy distribution of normal stars. The lower three panels of Figure 1 overplot on the spectra of three novae the transmission profiles of the set of V filters measured in the laboratory by Munari & Moretti (2012) with a spectrometer over the 2000 Å to 1.1 μm range (so as to check for either blue or red leaks). These filters come from the main manufacturers in the field and, prior to measurement, had been subject to at least one year of continuous operation at the telescope (thus exposed to large and continuous changes in barometric pressure, temperature and humidity). The filters are of two types. Those following the Bessell (1990) recipe for a sandwich of Schott colored glasses (2mm of GG495 + 3mm of BG39) are plotted in orange; the others are of the multi-layer dielectric type and are plotted in red. The nova spectra are examples taken from our long-term monitoring of all novae accessible with the Asiago telescopes. Nova Aql 2013 and Nova Oph 2009 are two heavily reddened novae, of the FeII and He/N types respectively, as observed during the early decline from maximum. Nova Mon 2012 is a low-reddening FeII nova as observed during the optically thin phase, at a time when the super-soft phase was over. To accurately simulate actual observations of these novae with the different V filters plotted in the lower panels of Figure 1 we proceeded in the following way. The nights when these
spectra of the three novae were observed were clear and all-sky photometric. Several blue and red spectrophotometric standard stars were observed at different airmasses during each night. On the fully extracted, but not flux-calibrated, spectra of the standards, we computed the instrumental magnitudes in the UBV filters whose transmission profiles are fully covered by our 3200-7700 Å spectra. These instrumental magnitudes (plus reference tabular values) were used with normal photometric data-reduction techniques to solve the color-equations to transform from the local to the standard photometric system. While the U and B band profiles were kept fixed to those tabulated by Landolt (1992), for the V band profile we adopted in turn each one of those plotted in the bottom panels of Figure 1. The all-sky inter-calibration of the standard stars provided results stable at the 0.01 mag level in all bands, whatever the choice for the V profile was. The V magnitudes derived in the same way for the novae changed instead from one V filter to another. The range of the computed V magnitudes is (cf. Figure 1) 0.05 mag for Nova Aql 2013, 0.11 mag for Nova Oph 2009, and 0.51 mag for Nova Mon 2012. For the heavily reddened Nova Aql 2013 and Nova Oph 2009, the bluer wavelengths going through the V passband contribute essentially nothing to the recorded flux. In such conditions, whether the transmission profile of a given filter is null or still transmits something at the Hα wavelength is the reason for the different magnitudes derived for the nova. Obviously, the wider the equivalent width of Hα, the larger Δ V is, as the comparison of Nova Aql 2013 (e.w.(Hα)=330 Å) and Nova Oph 2009 (e.w.(Hα)=770 Å) clearly illustrates. For the nebular spectrum of Nova Mon 2012 (lowest panel of Figure 1), the line responsible for much of the filter-to-filter differences is [OIII], which dominates (with its e.w. of 8800 Å) the flux going through the V band.
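The filter-to-filter sensitivity described above can be reproduced with a toy synthetic-photometry calculation: an idealized V-like passband whose red cutoff is varied, applied to a flat continuum plus a Gaussian Hα emission line. All profiles and numbers here are illustrative assumptions, not the measured Astrodon/Schott curves of Figure 1.

```python
import numpy as np

wl = np.arange(4500.0, 7000.0, 1.0)   # wavelength grid, Angstrom (1 A steps)

def filter_T(red_cut):
    # Idealized V-like passband: flat core from 5000 to 5900 A, plus a
    # linear red tail whose cutoff wavelength differs from filter to filter
    T = np.where((wl >= 5000) & (wl <= 5900), 1.0, 0.0)
    tail = (wl > 5900) & (wl < red_cut)
    T[tail] = (red_cut - wl[tail]) / (red_cut - 5900.0)
    return T

def nova_spectrum(halpha_ew):
    # Flat continuum (F_lambda = 1) plus a Gaussian Halpha emission line
    # scaled to the requested equivalent width (in A)
    cont = np.ones_like(wl)
    line = np.exp(-0.5 * ((wl - 6563.0) / 10.0) ** 2)
    line *= halpha_ew / line.sum()
    return cont + line

def synth_mag(F, T):
    # Synthetic magnitude (arbitrary zero point)
    return -2.5 * np.log10(np.sum(F * T) / np.sum(T))

def delta_V(halpha_ew):
    # Magnitude difference between a filter that cuts off before Halpha
    # (6400 A) and one whose red tail still leaks at 6563 A (6800 A)
    F = nova_spectrum(halpha_ew)
    return synth_mag(F, filter_T(6400.0)) - synth_mag(F, filter_T(6800.0))
```

With these toy profiles the magnitude difference between the two filters grows from zero for a pure continuum to several hundredths of a magnitude for Hα equivalent widths comparable to those of Nova Aql 2013 and Nova Oph 2009, mirroring the behavior measured on the real spectra.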
The [OIII] doublet is located right on the steeply ascending branch of the V band transmission profile, where filter-to-filter differences are the largest. The conclusion seems straightforward. If state-of-the-art photometry is collected with only one telescope (always the same filters, detector, comparison sequence, and data-reduction procedures), any glitch present in a densely mapped lightcurve will probably be real, i.e. connected to an actual change in the physical conditions experienced by the nova. On the contrary, if the lightcurve of a nova is built from data obtained independently at different telescopes observing at different epochs, there is a serious risk that a given feature is an artefact caused by the mixed data sources and is not intrinsic to the nova. In addition to those recorded by the ANS Collaboration, an excellent example of single-telescope lightcurves of novae are those obtained by the SMARTS project (Walter et al. 2012). Application of medium- and narrow-band filters to the photometry of novae would overcome the problems caused by the mixed presence of both continuum and emission lines within the transmission profile of broad photometric bands, segregating the contribution of the pure continuum from that of the emission lines. The evolution of Nova Del 2013 during the first 500 days of its eruption has been monitored by Munari et al. (2015) simultaneously in Landolt broad-band B and V, Stromgren medium-band b and y, and narrow-band Hα and [OIII] line filters. This study highlights the great diagnostic potential of such a combined approach in carrying out the photometry of nova outbursts.
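Before turning to the individual objects, the decline-time and distance machinery used in the next section can be sketched numerically. The linear-MMRD coefficients below are placeholders standing in for the Downes & Duerbeck (2000) calibrations, the extinction term follows the Fiorucci & Munari (2003) expression quoted later in the text, and the decline-time routine assumes a monotonically fading lightcurve after maximum (a real application would also need the S-curve form and non-monotonic data handling).

```python
import numpy as np

def t_decline(t, V, dmag):
    """Days after maximum for the V lightcurve to fade by dmag magnitudes.

    Assumes V fades monotonically after maximum (required by np.interp).
    """
    i = np.argmin(V)                  # maximum light = brightest (smallest) mag
    t0, Vmax = t[i], V[i]
    return np.interp(Vmax + dmag, V[i:], t[i:]) - t0

def mmrd_distance(Vmax, t2, EBV, alpha=-11.32, beta=2.55):
    """Distance (kpc) from a linear MMRD M = alpha + beta*log10(t2).

    alpha, beta are placeholder coefficients standing in for the
    Downes & Duerbeck (2000) calibration; the extinction expression is
    the one quoted in the text (Fiorucci & Munari 2003, R_V = 3.1).
    """
    M = alpha + beta * np.log10(t2)           # absolute magnitude at maximum
    A_V = 3.26 * EBV + 0.033 * EBV**2         # extinction for a nova at maximum
    mu = Vmax - A_V - M                       # extinction-corrected modulus
    return 10 ** (mu / 5.0 + 1.0) / 1000.0    # pc -> kpc
```

For a synthetic lightcurve fading at 0.1 mag/day this returns t_2 = 20 d and t_3 = 30 d, and a plausible few-kpc distance for Bulge-like input values.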
§ THE PROGRAM NOVAE §.§ Decline rates, reddening and distances For all the program novae, the collected B VI photometric data allow us to derive decline rates, reddenings and distances with the popular methods summarized in this section. The results are listed in Table 3 and in the sections below devoted to the individual objects. The characteristic rates t_2^V and t_3^V are the times (in days) that a nova takes to decline in the V band by 2 and 3 magnitudes, respectively, below maximum brightness. This quantity is obviously wavelength-dependent, considering the significant color evolution presented by a nova around maximum. Duerbeck (2008) proposed a mean relation t_3^V = 1.75 × t_2^V. For our program novae we obtain t_3^V = 1.54 × t_2^V (σ=0.22). Photometric reddening is computed by comparison with the intrinsic colors given by van den Bergh and Younger (1987). From a sample of well-studied novae, they derived the mean intrinsic values (B-V)_∘=+0.23(±0.06) at the time of V-band maximum, and (B-V)_∘=-0.02(±0.04) at t_2^V. The reddenings we computed for the program novae at these two epochs are in good mutual agreement. Because of the peculiar spectral energy distribution of novae, the intrinsic values given by van den Bergh and Younger cannot be ported to other color combinations using transformations calibrated on normal stars, as discussed in Munari (2014). The distance to a nova is usually estimated via calibrated relations (called MMRD) between the absolute magnitude at maximum and the rate of decline t_n^λ, either in the linear form M_max^λ = α_n log t_n^λ + β_n, or via the stretched S-shaped curve M_max^λ = γ_n - δ_n arctan[(ϵ_n - log t_n^λ)/ζ_n] first introduced by Capaccioli et al. (1989). In computing the distances to the program novae, we have adopted the latest available calibration by Downes & Duerbeck (2000) for the MMRD as a function of t_2^V and t_3^V, as well as for the S-shaped curve. These three values for the absolute magnitude are in good mutual agreement (the mean deviation from the mean value is 0.15 mag), with perhaps a
slight tendency to be fainter for the values computed from t_2^V. The distance given in Table 3 is computed from the mean value of the absolute magnitude as provided by the three t_2^V, t_3^V and S-curve methods. Buscombe & de Vaucouleurs (1955) noted how the absolute magnitude 15 days after optical maximum is similar for novae of all speed classes. We adopt for this the value M_15^V=-6.05 calibrated by Downes & Duerbeck (2000). On average, the distance computed in Table 3 from the brightness at 15 days is similar to that provided by the t_2^V, t_3^V and S-curve methods. In estimating the distances, the correction for extinction is computed from the derived E_B-V reddening and the standard R_V=3.1 law by Fitzpatrick (1999), following the expression A(V)=3.26× E_B-V + 0.033× E_B-V^2 (Eq. 1) computed by Fiorucci and Munari (2003) for the energy distribution of a nova at the time of maximum brightness. Compared to the total extinction along the line of sight provided by the maps of Schlegel, Finkbeiner & Davis (1998, hereafter SFD98) and Schlafly & Finkbeiner (2011, SF11), the extinction computed from Eq. (1) is nearly identical for four program novae, and a fraction of it for another two. The remaining program nova (V2949 Oph) has no B-band photometry useful for the computation of E_B-V. §.§ V1534 Sco V1534 Sco (= Nova Sco 2014 = TCP J17154683-3128303) was discovered at (unfiltered) 10.1 mag on 2014 Mar 26.85 by K. Nishiyama and F. Kabashima (CBET 3841). Spectroscopic classification as an He/N nova was obtained on Mar 27.8 by Ayani & Maeno (2014), who reported FWHM=7000 km/s for Hα (see also Jelinek et al. 2014). Joschi et al.
(2015) discuss the results from near-IR spectroscopy covering the first 19 days of the outburst, following on the preliminary report by Joschi et al. (2014). The near-IR spectra confirm the He/N classification, and show emission lines characterized by a rectangular shape with FWZI∼9500 km/s and a narrow component on top. The positional coincidence with a bright 2MASS cool source (J=11.255, H=10.049, Ks=9.578) and the presence of first-overtone absorption bands of CO at 2.29 microns (as seen in M giants) led Joschi et al. (2014) to suggest that V1534 Sco is a nova originating from a symbiotic binary system, similar to V407 Cyg, RS Oph and V745 Sco. X-ray emission from the nova was observed by the Swift satellite within a few hours of optical discovery (Kuulkers et al. 2014), corresponding to an absorbed optically thin emission of kT = 6.4 (+3.8/-2.1) keV and N_H = 5.8 (+1.2/-1.0) ×10^22 cm^-2 (with most of the absorption intrinsic to the source). X-ray emission was also recorded on the following days (Page, Osborne and Kuulkers 2014), with the softer counts increasing as a result of a decreasing absorption column, a behavior consistent with a shock emerging from the wind of the secondary star, as expected for a nova erupting within a symbiotic system. The presence of a cool giant in nova V1534 Sco has, however, to face some inconsistencies: (a) the Joschi et al. (2015) sequence of near-IR spectra shows emission lines of constant width, not the rapid shrinking associated with the deceleration of the ejecta expanding through the pre-existing wind of the cool giant companion, as observed in the template V407 Cyg case (Munari et al. 2011); (b) the narrow peak observed by Joschi et al. (2015) to sit on top of the broad emission lines, and taken to represent the flash-ionized wind of the cool giant, does not quickly disappear as a consequence of the rapid recombination driven by the high electron density, as instead observed in other novae erupting within symbiotic binaries; and
(c) by analogy with V407 Cyg and V745 Sco, γ-ray emission would have been expected to arise from the high velocity ejecta slamming onto the wind of the cool companion (Ackermann et al. 2014), but no γ-ray detection of V1534 Sco has been reported to date. §.§.§ The lightcurve The lightcurve of V1534 Sco is presented in Figure 2 and the basic parameters extracted from it are listed in Table 3. Our photometric monitoring commenced within one day of the announcement of its discovery. Our observations actually continued for a longer period than shown in the figure, but we refrain from plotting or tabulating such noisy late data, which are best described as a rapid flattening of the lightcurve toward the asymptotic values B∼19.6, V∼18.3, and I∼14.3. This flattening is artificial and can be ascribed to (1) the stable contribution from the 2MASS cool source, which dominates in the I band, and (2) the unresolved contribution in the B and V bands by several unrelated field stars which lie within ∼4 arcsec of the nova. The crowding is so severe in the immediate surroundings of the nova that attempts to disentangle it via PSF-fitting failed to converge on our images. The time and brightness of maximum in the I band are well constrained in Figure 2, and by similarity we assume the earliest two V points plotted in Figure 2 to mark the actual maximum in the V band. Lacking B-band data for the maximum, the E_B-V reddening can be estimated only from the B-V color at t_2^V, providing A_V=3.66 from Eq. (1). From this and the extremely fast decline times listed in Table 3, the distance to this nova would turn out uncomfortably large, ≥30 kpc, placing it far beyond the Galactic Bulge against which it is seen projected, at a height of ≥2 kpc above the Galactic plane. This is clearly an unlikely location.
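The ≥30 kpc figure follows from combining an MMRD absolute magnitude with an extinction-corrected distance modulus. A minimal sketch of the arithmetic, assuming a generic linear MMRD (the `alpha`, `beta` coefficients and input values below are placeholders for illustration, not the Downes & Duerbeck 2000 calibration or the V1534 Sco data; A(V) follows Eq. (1) of the text):

```python
import math

def mmrd_distance_kpc(m_max, t2, alpha, beta, ebv):
    """Distance from a linear MMRD, M_max = alpha*log10(t2) + beta,
    with the extinction A(V) of Eq. (1) (Fiorucci & Munari 2003)."""
    M = alpha * math.log10(t2) + beta          # absolute magnitude at maximum
    a_v = 3.26 * ebv + 0.033 * ebv ** 2        # Eq. (1)
    mu = m_max - M - a_v                       # extinction-corrected modulus
    return 10 ** (mu / 5.0 + 1.0) / 1000.0     # pc -> kpc

# placeholder calibration and inputs, for illustration only
print(round(mmrd_distance_kpc(10.0, 5.0, 2.5, -10.5, 1.1), 1))
```

A faster nova (smaller t_2) is intrinsically brighter, so overestimating the extinction or underestimating t_2 directly inflates the derived distance, as in the V1534 Sco case.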
The B-V color measured for this nova at t_2^V seems strongly influenced by the presence of the cool giant, the severe crowding and the contribution by the recombining flash-ionized wind of the companion, to the point of fooling the comparison with the van den Bergh and Younger (1987) intrinsic colors. For similar reasons the brightness at 15 days (V=16.35 mag) appears useless in estimating the distance. If for the extinction we adopt instead the values given by SFD98 and SF11 and reported in Table 3, the distance to the nova turns out to be 5.4 and 9.4 kpc, respectively. In Table 3 we list the average 7.4 kpc value, well compatible with a partnership to the Galactic Bulge. It is worth noting that the lightcurve of V1534 Sco displays a distinctive hiccup, marked by the red arrow in Figure 2. Its strength is wavelength-dependent, decreasing from the B to the I band. For the sake of discussion, we have fitted the V-band lightcurve of V1534 Sco with the combination of two sources: the flash-ionized wind of the cool giant and the expanding nova ejecta. The latter is obtained as the difference (computed in flux space) between the observed lightcurve and the exponential decline from the flash-ionized wind. In Figure 3 we present the results for a recombination e-folding time of 3 days, corresponding to an electron density of 1.5×10^7 cm^-3 at the peak of the ionization and assuming an electron temperature of 10 000 K. The fit looks excellent, but this is hardly a proof of its uniqueness. Given the fact that the near-IR observations by Joschi et al. (2015) did not detect a deceleration of the ejecta, it makes sense to treat the lightcurve derived in Figure 3 for the expanding ejecta as that of the nova proper. In this case the V band maximum was reached on JD=2456747.0 at magnitude ∼13.8, with a decline rate t_2^V∼13 days. Adopting the larger extinction from SFD98, the distance (∼10 kpc) would still be compatible with a partnership to the Bulge (the fainter apparent magnitude is compensated for by a
similarly fainter absolute value implied by the slower decline rate). §.§ V1535 Sco V1535 Sco (= Nova Sco 2015 = PNV J17032620-3504140) was discovered by T. Kojina on 2015 Feb 11.837 (CBET 4078) and soon confirmed spectroscopically by Walter (2015) as an He/N nova. Nelson et al. (2015) performed X-ray and radio observations within a few days of discovery, and found the initial presence of hard, absorbed X-rays and synchrotron radio emission, suggesting that the nova erupted in a symbiotic binary, with the collision between the ejecta and the cool giant wind shock-heating the plasma and accelerating particles. The suggestion about the presence of a cool giant companion was also made by Walter (2015). The synchrotron radio component rapidly declined during the following days while more conventional thermal free-free emission emerged (Linford et al. 2015). Near-IR spectral monitoring by Srivastava et al. (2015a,b) revealed a progressive narrowing of the never-too-broad emission lines from FWHM∼2000 down to 500 km/s, indicating a decelerating shock as the nova ejecta collide with and are slowed down by the wind of the giant companion. The extra brightness of the progenitor on quiescence Hα Super-COSMOS images was taken by Srivastava et al. (2015b) as further evidence of a symbiotic nature. Linear polarization measurements in BVRI bands at seven consecutive dates in February were reported by Muneer, Anupama and Raveendran (2015), who concluded that, even if not corrected for interstellar polarization, the data support intrinsic polarization. §.§.§ The lightcurve The lightcurve resulting from our year-long monitoring of V1535 Sco is presented in Figure 4, and the basic nova parameters are summarized in Table 3. The overall lightcurve of V1535 Sco looks pretty standard, with a faster decline during the initially optically thick conditions followed by a slower descent during the later optically thin phase. The transition from optically thick to thin ejecta occurred 27 days past and Δmag=3.60 below
the V-band maximum. Such a well-behaved transition is usually seen in FeII novae (e.g. McLaughlin 1960), but much less frequently in He/N novae. The latter usually expel less material at larger velocity and higher ionization compared to their FeII counterparts, and their ejecta reach optically thin conditions much closer to maximum brightness. A noteworthy feature of the lightcurve is the temporary dip that the nova went through around April 8 (JD=2457121), which developed at constant colors during the optically thin phase, as if for some time the ejecta were exposed to a lower flux of ionizing photons from the central source. A second, wider, and stronger dip, which occurred in mid-September (around JD=2457280), was instead strongly color-dependent. Their interpretation would require access to a detailed spectroscopic monitoring that we lack. Overall, the lightcurve of V1535 Sco during the optically thin phase has been "bumpy", well beyond the measurement errors. Figure 5 zooms in on the early portion of the V-band lightcurve, to highlight the plateau lasting a couple of days around Feb 24, or ten days past and 1.5 mag below optical maximum. Two possible interpretations come to mind, but both have their share of problems.
First, the plateau could represent the same type of transition discussed in Figure 3 above for V1534 Sco, namely the emission from the expanding ejecta overtaking that of the flash-ionized wind of the cool companion. This contrasts with the long delay past maximum, requiring a rise time to maximum for the ejecta (≥10 days) which is more typical of FeII events than of He/N ones, for which it is generally an order of magnitude faster. This could be countered by noting that (i) the initial He/N spectral classification could have been fooled by the dominating emission from the flash-ionized wind, and (ii) the narrowness of the emission lines, their Gaussian-like shapes and the presence of P-Cyg absorptions observed in the near-IR by Srivastava et al. (2015b) are more typical of FeII novae, while He/N novae tend to show much broader and rectangular emission lines with no P-Cyg absorption components (Banerjee & Ashok 2012). It will be interesting to carefully inspect, when eventually published, optical spectra taken over a protracted interval of time, to ponder the spectral classification of the expanding nova ejecta separately from that of the flash-ionized wind. Secondly, a similar plateau has sometimes been observed in novae during the super-soft phase, when the optically thin ejecta are exposed to the hard radiation field of the central white dwarf still undergoing nuclear burning at its surface. The consequent input of ionizing photons spreading through the ejecta counter-balances the recombination of ions. The plateau is usually terminated either by rapid dilution in fast expanding and low mass ejecta (as observed during the 2016 outburst of the recurrent nova LMC 1968, Munari et al. 2016a), or by the switching off of nuclear burning on the WD (as in U Sco, Osborne et al. 2010). The problem in this case is that the plateau occurred two weeks before the ejecta turned optically thin on day 27 past optical maximum. A way out could be a highly structured, non-spherical shape of the
ejecta, with the optical thickness strongly dependent on the angular coordinates. Hints in favor of such an arrangement are the fact that the nova erupted within the pre-existing wind of the giant companion, and the optical (Walter 2015) as well as near-IR (Srivastava et al. 2015b) spectra, which present weak emission components separated from the corresponding main ones. The reddening estimated from the nova colors and the total extinction along the line of sight deduced from the SFD98 and SF11 maps are in excellent agreement (Table 3), and the derived distance places V1535 Sco at the distance of the Galactic Bulge against which it is seen projected. §.§ V2949 Oph V2949 Oph (= TCP J17344775-2409042 = Nova Oph 2015 N.2) was discovered on 2015 Oct 11.41 by K. Nishiyama and F. Kabashima (CBET 4150), and confirmed spectroscopically on Oct 12.42 by Ayani (2015). A low expansion velocity, heavy reddening and an FeII spectral class were reported by Campbell et al. (2015) from Oct 11 spectroscopic observations, while Littlefield and Garnavich (2015), from Oct 11.99 observations, estimated the FWHM of the Hα emission at 900 km/s and the velocity of its P-Cyg absorption component at -800 km/s.
§.§.§ The lightcurve Our lightcurve for V2949 Oph is presented in Figure 6. We began the observations as night settled on Oct 12.98, soon after the spectroscopic confirmation was circulated, and continued them until Nov 9, when Solar conjunction prevented further data from being collected. This is the only program nova that was not also observed in the B band. The lightcurve shows the nova fluctuating by Δ V∼2 mag around maximum brightness. A similar peak brightness was reached on Oct 12.38 at V=11.41 and on Nov 7.99 at V=11.76. The first has been taken - somewhat arbitrarily - as the true maximum, so that the brightness 15 days past it can be used to estimate a distance of 8.4 kpc (cf. Table 3), which places the nova right at the distance of the Galactic center. The reddening resulting from the B-V color around maximum (cf. Figure 6) indicates an extinction A_V∼4.9, uncomfortably in excess of the total value along the line of sight, A_V∼3.39 from the SFD98 and A_V∼2.88 from the SF11 maps. Because the very few B magnitudes used in this exercise are not ours and come instead from VSNET observers (who did not provide details on their data reduction procedures and adopted comparison sequence), we will make no further use of these B-band data.
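The 8.4 kpc estimate above rests on the M_15^V=-6.05 calibration of Sect. 3.1, and the arithmetic reduces to an extinction-corrected distance modulus. A sketch (the input magnitudes below are illustrative, not the values adopted for V2949 Oph):

```python
def m15_distance_kpc(v15, a_v, m15_abs=-6.05):
    """Distance from the V brightness 15 days after maximum, using the
    M_15^V = -6.05 calibration of Downes & Duerbeck (2000)."""
    mu = v15 - m15_abs - a_v                  # extinction-corrected modulus
    return 10 ** (mu / 5.0 + 1.0) / 1000.0   # pc -> kpc

# illustrative inputs: V=12.0 at day 15 with A_V=3.4 gives a Bulge-like distance
print(round(m15_distance_kpc(12.0, 3.4), 1))
```

The method is attractive because it sidesteps the decline rate entirely; its weakness, evident here, is its full sensitivity to the adopted extinction.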
§.§ V3661 Oph V3661 Oph (= PNV J17355050-2934240 = Nova Oph 2016) was discovered in outburst by H. Yamaoka on Mar 11.81 (CBET 4265). A preliminary spectroscopic classification as a nova was derived by Munari et al. (2016b) from a very low S/N spectrum, with later IR and optical spectra by Srivastava et al. (2016) and Frank et al. (2016) fixing the spectral class as FeII. All three spectral sources concur on a highly reddened continuum, FWHM∼1000/1400 km/s for the Balmer emission lines and a velocity separation of ∼950 km/s between the emission and absorption components of the P-Cyg profile affecting most of the lines. A pre-discovery OGLE-IV observation at I=12.15 on March 8.31 was reported by Mroz & Udalski (2016a), who noted the absence of the progenitor in OGLE deep template images, meaning it was fainter than 22 mag in the I band. A pre-discovery observation of the nova by ASAS-SN on March 10.85 has been noted by Chomiuk et al. (2016). Finally, Muneer & Anupama (2016) reported significant linear polarization in VRI photometric observations of V3661 Oph obtained from March 13 to 19, which they interpret as arising primarily in the interstellar medium given the high reddening suffered by the nova. §.§.§ The lightcurve Our lightcurve for V3661 Oph is presented in Figure 7, and the basic nova parameters are summarized in Table 3 as for the other program objects. The lightcurve looks particularly well behaved, almost a textbook example for an FeII nova. The clear dependence on wavelength of the time of maximum brightness will be discussed in sect. 5 below, in parallel with the similar case of TCP J18102829-2729590.
The transition from optically thick to thin ejecta occurred 6.0 days past and Δmag=3.35 below the V-band maximum. With t_2^V=3.9 and t_3^V=5.7 days, V3661 Oph is probably the fastest known nova of the FeII type, and a very fast one even compared with He/N recurrent novae like U Sco. It is by far the nova with the reddest colors and therefore the highest extinction among the program ones, with a mean observed color B-I∼6.25, as averaged along the whole lightcurve. The SFD98 and SF11 maps also suggest an extremely large total extinction along the line of sight to V3661 Oph. Finally, the short distance derived for this nova places it much closer than the Bulge and within the Galactic disk. §.§ MASTER OT J010603.18-744715.8 MASTER OT J010603.18-744715.8 was discovered (at unfiltered 10.9 mag) on 2016 Oct 14.19 by the MASTER-OAFA autodetection system and announced by Shumkov et al. (2016) on Oct 14.34. ANS Collaboration monitoring began on Oct 14.51. Detection of the progenitor at mean I=20.84 and (V-I)=+0.16 on archival OGLE-IV observations was reported by Mroz and Udalski (2016b), with hints of semi-regular variability on a timescale of 20-30 days. Lipunov et al. (2016) found pre-discovery MASTER images which show that the nova was already declining from maximum when first noticed. An image for Oct 9.81 recorded the nova at (unfiltered) 8.5 mag, declining to 8.9 mag on Oct 11.07 and 9.3 mag on Oct 12.16. Robotic DSLR-camera monitoring of the SMC was inspected by Jablonski & Oliveira (2016) to obtain the (unfiltered) brightness profile of the rise toward maximum of the nova during Oct 9. The nova was fainter than 13.2 mag on Oct 9.197, first detected at 12.9 mag on Oct 9.210, and last measured at 9.90 mag on Oct 9.325.
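As a quick consistency check, the Jablonski & Oliveira (2016) timings quoted above imply an extremely steep pre-maximum rise:

```python
# unfiltered magnitudes on Oct 9 from Jablonski & Oliveira (2016), as quoted above
t_first, m_first = 9.210, 12.9   # first detection
t_last, m_last = 9.325, 9.90     # last measurement on the rise

rate = (m_first - m_last) / (t_last - t_first)   # brightening rate, mag/day
print(round(rate, 1))   # about 26 mag/day
```

A rise of this order, roughly a magnitude per hour, explains why the maximum of this nova was so briefly and narrowly constrained.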
Spectroscopic confirmation was obtained by Williams & Darnley (2016a) on Oct 14.70. They measured FWHM∼3700 km/s for the Balmer lines and classified the nova as He/N. Following their description of the observed emission lines, the signatures in favor of an He/N class appear however weaker than typical for this type, with some room left for an FeII classification. Darnley & Williams (2016b) reported on their continued spectroscopic monitoring of the nova till Oct 29, noting the disappearance of the P-Cyg absorptions and the emergence of HeI 5876, 7065 and of [OIII] 4959/5007, from which they infer the nova had entered the nebular phase. It should be noted that the presence of HeI emission lines at the time [OIII] emerges is standard for FeII novae, and that the presence of P-Cyg absorptions and [OIII] nebular lines is more typical of FeII than He/N novae (Williams 1992). In addition, the FWHM∼3700 km/s observed for the Balmer lines is close to the lower limit for typical He/N novae while still well suited for FeII ones. Nova MASTER OT J010603.18-744715.8 has been intensively monitored in X-rays. Early Swift observations on Oct 15 failed to detect X-ray emission (Kuin et al. 2016). Rapidly brightening soft X-ray emission was detected by Swift starting on Nov 7 (Page et al. 2016). Chandra observations for Nov 17-18 (Orio et al. 2016a) confirmed the super-soft bright emission, which continued well into the Chandra observation for 2017 Jan 4 (Orio et al.
2016b), when a preliminary fit to the spectra supported an increase from 650,000 to 750,000 K in the temperature of the white dwarf. §.§.§ The lightcurve The lightcurve of MASTER OT J010603.18-744715.8 is particularly simple and smooth, and it is presented in Figure 8, with the basic parameters extracted from it summarized in Table 3. Figure 9 zooms in on the phase of maximum, which was very brief and preceded by an extremely fast rise, as the pre-discovery observations by Lipunov et al. (2016) and Jablonski & Oliveira (2016) help to constrain. These unfiltered observations (i.e. white light, and therefore strongly skewed toward red wavelengths where the CCD sensitivity peaks) require a large color correction to be properly plotted on the V-band plane, because of the remarkably blue colors of the nova resulting from the very low reddening toward the SMC. The color corrections are given in Figure 9, and have been derived by continuity in comparison with our properly calibrated photometry. The latest observations listed by both Lipunov et al. (2016) and Jablonski & Oliveira (2016) are clearly off the otherwise well-behaved lightcurve of the nova (the two arrows in Figure 9 point at them), and are ignored as erroneous data. §.§.§ Super-soft X-rays and rate of decline Our B VI data are transformed into absolute fluxes (erg cm^-2 s^-1) and log-log plotted against time in the left panel of Figure 10. For comparison, the same is done in the right panel for Nova Mon 2012 (data from Munari et al. 2013a). The shaded area in the figure marks the time after the super-soft X-ray emission had ceased (Nelson et al. 2012, Page et al. 2013). The phase of super-soft X-ray emission corresponds to optically thin ejecta permeated by the hard radiation from the central white dwarf undergoing stable nuclear burning at its surface (Krautter 2008, Schwarz et al. 2011). Such ionizing radiation partially counter-balances the recombination in the expanding ejecta, keeping their emissivity high and flattening the decline
rates. When this hard radiation input ends with the end of the super-soft phase, the emissivity of the ejecta rapidly settles onto the pure recombination rate ∝ t^-3, which is precisely what Nova Mon 2012 duly did. For MASTER OT J010603.18-744715.8, as soon as it entered the nebular phase and super-soft X-ray emission emerged (Williams and Darnley 2016b, Page et al. 2016), the decline in flux rapidly settled onto rates that remained stable for the whole period covered by our observations: ∝ t^-1.8, ∝ t^-1.6 and ∝ t^-2.2 for B, V and I, respectively. The rates are slightly different from band to band, depending on the fractional contribution of continuum and emission lines, which decline at different speeds as the degree of ionization and the electron density change through the ejecta. The fact that these rates are much flatter than ∝ t^-3 is interpreted as evidence that nuclear burning was still up and running on the surface of the central WD at the time of our last observations. When the nuclear burning eventually ends, the decline in brightness of MASTER OT J010603.18-744715.8 is expected to accelerate to ∝ t^-3, as seen in Nova Mon 2012.
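The ∝ t^-n rates quoted above are slopes in the log-log plane of Figure 10. A minimal sketch of how such a slope is measured by least squares (with synthetic fluxes following a pure t^-3 recombination decline, not the actual photometry):

```python
import math

# synthetic decline: flux ∝ t^-3 (pure recombination), not the actual photometry
t = [10.0, 20.0, 40.0, 80.0, 160.0]                 # days past maximum
flux = [1e-9 * (ti / 10.0) ** -3.0 for ti in t]     # arbitrary flux units

# least-squares slope in the log-log plane, as in Figure 10
x = [math.log10(ti) for ti in t]
y = [math.log10(fi) for fi in flux]
n = len(x)
xm, ym = sum(x) / n, sum(y) / n
slope = sum((xi - xm) * (yi - ym) for xi, yi in zip(x, y)) / \
        sum((xi - xm) ** 2 for xi in x)
print(round(slope, 2))   # -3.0: recovers the recombination rate
```

Applied to real photometry, a fitted slope significantly flatter than -3, as found here for B, V and I, signals an extra energy input on top of pure recombination.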
§.§ TCP J18102829-2729590 TCP J18102829-2729590 was discovered on 2016 October 20.383 at 10.7 mag by K. Itagaki (cf. CBET 4332). Mroz et al. (2016) derived astrometric coordinates from OGLE-IV I-band images as RA=18:10:28.29 and DEC=-27:29:59.3, and noted that the progenitor is undetected in pre-outburst OGLE deep template images, meaning I>22 mag. Spectroscopic classification as an FeII-class nova was obtained by Luckas (2016). γ-ray emission from this nova has been detected by Fermi-LAT (Li & Chomiuk 2016). §.§.§ The lightcurve Our daily-mapped lightcurve of TCP J18102829-2729590 is shown in Figure 11. It extends over a whole month and fully covers the phase of maximum brightness and the decline well past t_3^V. It is remarkably smooth and characterized by a rapid initial rise and two distinct maxima. Our monitoring was stopped by Solar conjunction when the nova was still bright. As for the other novae, the parameters extracted from the lightcurve are listed in Table 3. The distances derived from t_2^V, t_3^V and V_15 depend on which of the two maxima is taken as reference. The average of the values listed in Table 3 is 8.1 kpc, right at that of the Bulge against which the nova is seen projected. The partnership to the Bulge is confirmed by the photometric reddening of the nova, which equals the total extinction along the line of sight from the maps of SFD98 and SF11. There is a striking difference between the two maxima displayed by TCP J18102829-2729590: the first one is markedly wavelength-dependent, the other is not. The wavelength dependence of the first maximum manifests itself as a time delay of ∼1 day between peak brightness in the B and I bands, as noted above for V3661 Oph. As discussed in sect. 5 below, this is a characteristic of the initial fireball expansion of the ejecta, with maximum representing the time of largest angular extension for the pseudo-photosphere that is optically thick at the given wavelength. The independence from wavelength of the second maximum suggests it is of a
different physical nature, which will be discussed in sect. 6 below. §.§ ASASSN-16ma ASASSN-16ma was discovered at V∼13.7 in ASASSN-CTIO images obtained on 2016 Oct 25.02, brightened to V∼11.6 a day later, and was undetected (V>17.3) on Oct 20.04 (Stanek et al. 2016). Its coordinates were originally given as RA=18:20:52.12 and DEC=-28:22:13.52, which Saito et al. (2016) adopted to identify the likely progenitor in the VVV Survey as a source possibly consisting of two unresolved components of combined brightness z = 18.8, Y = 18.5, J = 18.1, H = 17.8, and K_s = 17.6 mag. Mroz et al. (2016) remeasured the position of the nova on OGLE-IV I-band images and derived a different astrometric position, RA=18:20:52.25 and DEC=-28:22:12.1, which is 2.4 arcsec away from the initial ASASSN-CTIO one. The progenitor is not visible in pre-outburst OGLE-IV survey images, meaning that it was fainter than I=22 mag and, therefore, the star proposed by Saito et al. is an unrelated field star. A low resolution spectrum obtained on Oct 27.5 by Luckas (2016) showed the object to be an FeII-class nova. A month later, on Nov 23.1, Rudy et al. (2016) obtained an optical/near-IR spectrum of ASASSN-16ma that confirmed the FeII classification and was characterized by prevailingly low expansion velocities and low-excitation conditions. γ-ray emission from ASASSN-16ma was discovered by Li, Chomiuk & Strader (2016) while they were monitoring with Fermi-LAT the nearby nova TCP J18102829-2729590, described in the section above. ASASSN-16ma remained undetected by Fermi-LAT until Nov 8 (JD=2457701), when it suddenly turned into a strong γ-ray source, remaining active (although declining) for the following 9 days (Li et al. 2016). §.§.§ The lightcurve Our daily-mapped lightcurve of ASASSN-16ma is shown in Fig. 12.
It extends over a whole month and covers the initial rise, the phase of maximum, and the decline well past t_3^V. Our monitoring was stopped by Solar conjunction when the nova was still bright. A zoomed view of the initial rise in brightness is given in Figure 13, where our V-band observations are combined with literature data. The lightcurve of ASASSN-16ma started as a simple one. The two observations for November 4.0 and 5.0 (JD=2457696.5 and 97.5) are highlighted with filled dots in Figure 12. They are characterized by the same dependence on wavelength as for the maximum brightness of V3661 Oph (Figure 7) and the first maximum of TCP J18102829-2729590 (Figure 11), namely a time delay of ∼1 day between the maximum in the B and I bands. We believe these filled dots trace the normal fireball maximum ASASSN-16ma initially went through. In support of this interpretation, it is worth noticing that for the two γ-ray program novae, both belonging to the Bulge and affected by a similarly low reddening, the first maximum occurred at a similar brightness: V=8.1 for ASASSN-16ma and V=7.6 for TCP J18102829-2729590. Soon after the passage through the fireball maximum, ASASSN-16ma rose to a second and brighter maximum, composed of two peaks. As for TCP J18102829-2729590, the distance to ASASSN-16ma derived from t_2^V, t_3^V and V_15 depends on which of these two peaks is taken as reference. Choosing the first one (at JD=2457700.5) returns a distance shorter than that of the Bulge, and selecting the second (at JD=2457707.5) a larger one. The average is 8.3 kpc, placing ASASSN-16ma too at the distance of the Bulge against which it is seen projected. Similarly to TCP J18102829-2729590, the partnership to the Bulge is confirmed by the photometric reddening of ASASSN-16ma, which equals the total extinction along the line of sight from the maps of SFD98 and SF11. As for TCP J18102829-2729590, the second maximum displayed by ASASSN-16ma is discussed in section 6 below. § THE FIREBALL EXPANSION
The initial photometric evolution of a nova is characterized by the rise toward maximum, the maximum itself, and the settling onto decline. The rise toward maximum is rarely mapped at optical wavelengths (Seitter 1990), because it is usually very fast (a matter of a few days or even hours) and the discovery of a nova has a higher chance to occur when the object is at peak brightness (especially so in crowded fields). Nonetheless, sometimes the conditions are just right to cover the transit of a nova through optical maximum. For the seven program novae, this is the case for TCP J18102829-2729590 and V3661 Oph, and marginally so for others. Figure 14 presents a zoom on their lightcurves around optical maximum. The obvious feature is how the maximum brightness occurs at later times with increasing wavelength. The flux density emitted by the ejecta of the nova expanding as a homogeneous, ionized plasma is: f_ν = B_ν (d/D)^2 (1-e^-τ_ν) where B_ν is the Planck function, d is the linear dimension of the ejecta, which scales as ∼v_ej(t-t_∘), D is the distance to the nova, and τ_ν is the free-free optical depth from bremsstrahlung by electrons. Following Altenhoff et al. (1960) and Mezger & Henderson (1967), τ_ν goes as τ_ν ≈ 0.08235 T_e^-1.35 ν^-2.1 ∫ N_e^2 dl where N_e and T_e are the electron density (cm^-3) and temperature (K), ν is the frequency in units of 10^9 Hz, and the emission measure ∫ N_e^2 dl is in pc cm^-6.
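The free-free optical depth above is straightforward to evaluate numerically; a sketch (the emission-measure value used is an illustrative assumption, not a fitted quantity):

```python
def tau_ff(t_e, nu_ghz, em):
    """Free-free optical depth (Altenhoff et al. 1960; Mezger & Henderson 1967):
    t_e in K, nu_ghz in units of 1e9 Hz, em (emission measure) in pc cm^-6."""
    return 0.08235 * t_e ** -1.35 * nu_ghz ** -2.1 * em

# a T_e = 1e4 K plasma is optically thick (tau > 1) at 1 GHz once the
# emission measure exceeds roughly 3e6 pc cm^-6 (illustrative value)
print(tau_ff(1.0e4, 1.0, 3.3e6) > 1.0)   # True
```

The steep ν^-2.1 dependence is what drives the band-to-band delays discussed next: as the expanding ejecta dilute, the surface where τ_ν∼1 is crossed later at longer wavelengths.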
The time delay between the maximum brightness reached in different photometric bands can then be expressed as t_max^V - t_max^B = 0.35×Θ (days), t_max^R - t_max^V = 0.25×Θ, t_max^I - t_max^V = 0.70×Θ, t_max^J - t_max^V = 1.55×Θ, t_max^H - t_max^V = 2.15×Θ, t_max^K - t_max^V = 2.90×Θ, with Θ = (T_e/10^4 K)^-0.27 (M_ej/10^-4 M_⊙)^+0.4 (v_ej/1000 km/sec)^-1 where M_ej is the ejected mass and v_ej the ejection velocity. The same delay is the reason why the maximum of the thermal radio emission is reached ∼years past the optical maximum (Hjellming 1974). The timings given in Figure 14 correspond to t_max^I - t_max^B = 0.85 and ≥0.90 days for TCP J18102829-2729590 and V3661 Oph, respectively. This is close to what is expected from Eq. (4) for typical values of T_e, M_ej and v_ej adopted in computing Θ. § A SECOND LIGHTCURVE COMPONENT PARALLELING THE EMISSION IN γ-RAYS The lightcurves of the two γ-ray program novae, TCP J18102829-2729590 and ASASSN-16ma, are characterized by the distinct presence of two components, which are highlighted in Figure 15. The initial or fireball component produces a passage through maximum that is wavelength-dependent, as described in the previous section. The second component appears at a later time and peaks simultaneously with the detection of the nova in γ-rays (for which reason we term it gamma), giving origin to a second maximum which is not wavelength-dependent. The gamma component of the optical lightcurve behaves synchronously with the emission observed in γ-rays. The preliminary analysis by Li et al. (2016) of the daily-averaged γ-ray behavior of ASASSN-16ma shows a sudden detection coincident with the peak flux on November 8 (JD=2457700.5), followed by a general decline along the following nine days, with a significant 1-day γ-ray flux dip observed on November 13 (JD=2457705.5). The gamma component of the optical lightcurve in Figure 15 presents exactly the same behavior: a maximum on November 8 and a general decline for the following nine days with a 1-day
brightness dip centered on November 13. Not only the shapes but also the flux ratios behaved in parallel. In fact, during the nine days of general decline, the γ-ray flux changed by a factor of 3, from 9.7 (±1.3) to 3.4 (±2.1) ×10^-7 ph cm^-2 s^-1 (Li et al. 2016), and over the same period of time the flux through the V-band also declined by a factor of 3, from V=5.89 to V=7.02. Once the daily γ-ray behavior of TCP J18102829-2729590 becomes available, it will be interesting to explore whether a similar degree of parallelism with the gamma component of its optical lightcurve was followed too. As further evidence of the link between the γ-ray emission and the gamma component of the lightcurve, it is worth noticing that the reported mean γ-ray flux of ASASSN-16ma (Li and Chomiuk 2016) is 2.5× higher than for TCP J18102829-2729590 (Li et al. 2016). Remarkably, the reddening-corrected mean flux of the gamma component of the two novae in Figure 15 stands in exactly the same 2.5× ratio, with <(V)_∘>=5.19 and <(V)_∘>=6.15 for ASASSN-16ma and TCP J18102829-2729590, respectively. A difference of 2.5× in the mean γ-ray flux for the two program novae, both belonging to the Bulge, seems to disprove the common belief (e.g. Ackermann et al. 2014) that (i) the intrinsic γ-ray brightness is similar among normal novae, (ii) they can be detected by Fermi-LAT over only limited distances from the Sun, and therefore (iii) γ-ray emission is a widespread (if not general) property of novae. Judging from ASASSN-16ma and TCP J18102829-2729590, it appears instead that novae can be firmly detected by Fermi-LAT up to and beyond the Galactic Bulge, and that their intrinsic brightness in γ-rays can differ greatly. Combining this with the relatively low number of normal novae detected to date by Fermi-LAT (6 novae in total have been detected in γ-rays, in contrast to the 69 discovered optically in the same period, cf. Morris et al. 2017), it is tempting to conclude that γ-ray emission is not a wide-spread property for them. The
two-component lightcurve here described for the program γ-ray novae brings to mind the two-component ejecta adopted to model the radio-interferometric observations of some recent novae (Chomiuk et al. 2014, Weston et al. 2016), in which a faster polar wind collides with a slower (and pre-existing?) equatorial density enhancement. This scenario, however, applies to radio observations extending for months past the initial eruption. The second or gamma component of the optical lightcurve of the program γ-ray novae develops instead within a few days of the initial fireball component, and could therefore trace something different in the kinematical and geometric arrangement of the ejecta. We postpone to a future paper a quantitative modeling of our two-component lightcurve for γ-ray novae, in order to include similar data for additional objects and thereby reinforce the statistics.

§ PROGENITORS

At the position of the four program novae V2949 Oph, V3661 Oph, TCP J18102829-2729590, and ASASSN-16ma no progenitor is visible in deep OGLE I-band images or DSS plates, which sets the minimal outburst amplitudes listed in Table 3. For all of them, a progenitor containing a giant or a sub-giant companion would have been brighter in the K band than the completeness limit of the 2MASS survey in the respective areas, suggesting their donor star is a dwarf. For the remaining three program novae (V1534 Sco, V1535 Sco and MASTER OT J010603.18-744715.8) a progenitor has been proposed based on positional coincidence with pre-outburst surveys. We consider these three novae in turn.

§.§ V1534 Sco

Joshi et al. (2014) proposed 2MASS 17154687-3128303 as the progenitor of nova V1534 Sco. At J=11.255(±0.042), H=10.049(±0.039), and K=9.578(±0.035), it lies at 0.6 arcsec from the position of the nova reported by SIMBAD. By fitting only its 2MASS and WISE infrared energy distribution with a black-body, Joshi et al. (2014) classified the star as an M5III giant, reddened by E_B-V=0.9. In Figure 16 we present the observed spectral energy
distribution (SED) of the progenitor of V1534 Sco. To the 2MASS JHK and WISE W_1W_2W_3 infrared data considered by Joshi et al. (2014), we add I=14.22 mag from the DENIS and R=17.20 mag from the SuperCOSMOS catalogues. We have not been able to find quiescence B and V data. As noted above in Sect. 4.2.1, at the latest stages the lightcurve of V1534 Sco became completely flat, with asymptotic values B∼19.6, V∼18.3, and I∼14.3. The latter is practically identical to the pre-outburst DENIS I=14.22 mag value, suggesting that these asymptotic values could be viable proxies for the brightness in quiescence. We therefore added the asymptotic B and V values to the SED of Figure 16. There we over-plot on the nova the SEDs of G3III-M6III giants, compiling their optical/IR intrinsic colors from Koornneef (1983), Bessell (1990), and Fluks et al. (1994). The SEDs of the giants are reddened according to the total extinction along the line of sight to V1534 Sco (cf. Table 3) as derived from the 3D maps of SFD98 and SF11. We have already seen in Sect. 4.2.1 how these values for the extinction lead to a correct distance to the nova. They have been transformed into the corresponding E_B-V and A_λ following the relations calibrated by Fiorucci & Munari (2003) for M-type giants. The best fit to the RIJHKW_1W_2W_3 data in Figure 16 is obtained with an M3III for E_B-V=1.63 and an M1III for E_B-V=1.95. Overall, the fit with the M3III is somewhat better. The fit is minimally dependent on the KW_1W_2W_3 bands, while RIJ are far more relevant. The fit with the M3III provides a distance of 8.2 kpc, while that with the M1III drops down to 5.0 kpc. Considering the partnership of the nova with the Bulge, we conclude that the progenitor of nova V1534 Sco is well represented by an M3III cool giant reddened by E_B-V=1.63. The B and V points lie above both fit attempts in Figure 16. There are at least three suitable explanations for this: (1) the asymptotic B and V values are still influenced by emission from the nova ejecta, (2) the severe crowding
which hampered the derivation of E_B-V from nova photometry in Sect. 4.2 is affecting the B and V brightness of the progenitor too, and (3) ionization of the cool giant wind by the WD produces extra flux at B and V wavelengths. It is in fact well known that the UBV colors of symbiotic stars are much bluer than those of the M giants they harbour (cf. the UBVRI-JHKL photometric surveys of known symbiotic stars by Munari et al. 1992 and Henden & Munari 2008), because of the contribution at shorter wavelengths by the emission from circumstellar ionized gas.

§.§ V1535 Sco

Srivastava et al. (2015) proposed 2MASS 17032617-3504178 (J=13.40, H=12.53, and K=12.22) as the progenitor. This star is positionally coincident to better than 0.1 arcsec with the nova, with a Gaia Data Release 1 source of G=14.392 mag and a DENIS counterpart with I=15.24 mag. In Figure 17 we plot the observed SED of the progenitor of V1535 Sco, combining 2MASS JHK and WISE W1W2 infrared data, to which we have added I=15.24 mag from DENIS, V=17.05 mag from the YB6 Catalog (USNO, unpublished; accessed via VizieR at CDS) and R=16.33 from SuperCOSMOS. We have considered the fit with the same family of energy distributions of G3III-M6III giants already used in Figure 16 for V1534 Sco, this time reddened by the same E_B-V=1.03 derived and discussed above for the nova. The fit is clearly unsatisfactory at optical wavelengths, implying quite blue intrinsic colors for the progenitor. At the distance given in Table 3 for the nova, the absolute magnitude of the progenitor would be M(K)=-2.9, which is that expected for a K3-4III giant. Such a classification was one of the alternatives (the other being an M4-5III) considered by Srivastava et al. (2015). Although rare, symbiotic stars with K giants account for ∼10% of the total in the catalog by Belczyński et al. (2000).
The optical colors of some of them (cf. Munari et al. 1992, Henden & Munari 2008) are strongly affected by the blue emission of the K giant wind ionized by the radiation from the WD companion, and this could easily be a viable interpretation for the progenitor of V1535 Sco.

§.§ MASTER OT J010603.18-744715.8

Mroz et al. (2016) reported that the progenitor was clearly visible in OGLE-IV survey images at equatorial coordinates R.A.=01:06:03.27, Decl.=-74:47:15.8 (J2000.0), with I=20.84 mean magnitude and V-I=+0.16 color. They add that it showed semi-regular variability on a timescale of 20-30 days. The blue color is reflected in the non-detections by the 2MASS and WISE infrared surveys. At a distance of 1.03 arcsec from the OGLE position there is a GALEX source of magnitudes FUV=20.529(±0.331) and NUV=20.573(±0.205), the second closest GALEX source being 30 arcsec away. The astrometric proximity and compatible magnitudes and colors suggest that the OGLE and GALEX sources are the same star, with blue colors consistent with those of a disk-dominated source. Adopting the E_B-V=0.08 reddening and 61 kpc distance to the SMC listed by Mateo (1998), the absolute magnitude of the progenitor is M(V)=+1.8, which suggests a sub-giant as the donor star. A giant of the T CrB type would shine at M_V∼-0.5 (Sowell et al. 2007), while the mean magnitude for novae with dwarf companions is M_V∼4.5 (Warner 1995). The presence of a sub-giant is consistent with the non-detection of the progenitor during the 2MASS survey.

§ ACKNOWLEDGEMENTS

We would like to thank S. Dallaporta, F. Castellani, G. Alsini and R. Belligoli for some check observations carried out on the program targets.

[Ackermann et al.2014]2014Sci...345..554A Ackermann M., et al., 2014, Sci, 345, 554 [Altenhoff et al.1960]Altenhoff Altenhoff W., Mezger P.
G., Strassl H., Wendker H., Westerhout G., 1960, Veroff Sternwarte Bonn 59, 48 [Ayani2015]2015IAUC.9279....3A Ayani K., 2015, IAUC, 9279, 3[Ayani & Maeno2014]2014CBET.3841....1A Ayani, K., Maeno, S., 2014, CBET, 3841, 1 [banerjee2012]b73 Banerjee D.P.K., Ashok, N.M., 2012, BASI, 40, 243 [Bessell1990]1990PASP..102.1181B Bessell M. S., 1990, PASP, 102, 1181[Belczyński et al.2000]2000A AS..146..407B Belczyński K., Mikołajewska J., Munari U., Ivison R. J., Friedjung M., 2000, A&AS, 146, 407[Bode & Evans2012]2012clno.book.....B Bode M. F., Evans A., 2012, eds., Classical Novae, Cambridge University Press [Buscombe & de Vaucouleurs1955]1955Obs....75..170B Buscombe W., de Vaucouleurs G., 1955, Obs, 75, 170[Campbell et al.2015]2015ATel.8155....1C Campbell H., et al., 2015, ATel, 8155, [Capaccioli et al.1989]1989AJ.....97.1622C Capaccioli M., della Valle M., Rosino L., D'Onofrio M., 1989, AJ, 97, 1622[Chomiuk et al.2014]2014Natur.514..339C Chomiuk L., et al., 2014, Nature, 514, 339[Chomiuk et al.2016]2016ATel.8841....1C Chomiuk L., Strader J., Stanek K. Z., Kochanek C. S., Holoien T. W.-S., Shappee B. J., Prieto J. L., Dong S., 2016, ATel, 8841, [Cousins1980]1980SAAOC...1..166C Cousins A. W. J., 1980, SAAOC, 1, 166[Downes & Duerbeck2000]2000AJ....120.2007D Downes R. A., Duerbeck H. W., 2000, AJ, 120, 2007[Duerbeck2008]Duerbeck Duerbeck H. W., 2008, in Classical Novae, M.-F. Bode and Evans eds., Cambridge University Press, pag. 1 [Fiorucci & Munari2003]2003A A...401..781F Fiorucci M., Munari U., 2003, A&A, 401, 781[Fitzpatrick1999]1999PASP..111...63F Fitzpatrick E. L., 1999, PASP, 111, 63[Fluks et al.1994]1994A AS..105..311F Fluks M. A., Plez B., The P. S., de Winter D., Westerlund B. E., Steenman H. C., 1994, A&AS, 105, 311 [Frank et al.2016]2016ATel.8817....1F Frank S., Wagner R. M., Starrfield S., Woodward C. E., Neric M., 2016, ATel, 8817, [Graham1982]1982PASP...94..244G Graham J. A., 1982, PASP, 94, 244[Henden et al.2012]2012JAVSO..40..430H Henden A. A., Levine S. 
E., Terrell D., Smith T. C., Welch D., 2012, JAVSO, 40, 430[Henden & Munari2008]2008BaltA..17..293H Henden A., Munari U., 2008, BaltA, 17, 293[Henden & Munari2014]2014CoSka..43..174M Henden A., Munari U., 2014, in Observing Techniques, Instrumentation and Science for Meter-Class Telescopes, T. Pribulla ed., CoSka, 43, 518[Hjellming1974]1974gegr.book..159H Hjellming R. M., 1974, in Galactic and Extra-Galactic Radio Astronomy, G.L. Verschuur and K.I. Kellermann eds., Springer-Verlag New York, pag. 159 [Jablonski & Oliveira2016]2016ATel.9684....1J Jablonski F., Oliveira A., 2016, ATel, 9684, [Jelinek et al.2014]2014ATel.6025....1J Jelinek M., Cunniffe R., Castro-Tirado A. J., Rabaza O., Hudec R., 2014, ATel, 6025, [Johnson & Morgan1953]1953ApJ...117..313J Johnson H. L., Morgan W. W., 1953, ApJ, 117, 313 [Joshi et al.2014]2014ATel.6032....1J Joshi V., Banerjee D. P. K., Venkataraman V., Ashok N. M., 2014, ATel, 6032, [Joshi et al.2015]2015MNRAS.452.3696J Joshi V., Banerjee D. P. K., Ashok N. M., Venkataraman V., Walter F. M., 2015, MNRAS, 452, 3696[Koornneef1983]1983A A...128...84K Koornneef J., 1983, A&A, 128, 84[Krautter2008]2008ASPC..401..139K Krautter J., 2008, ASPC, 401, 139[Kuin et al.2016]2016ATel.9635....1K Kuin N. P. M., Page K. L., Williams S. C., Darnley M. J., Shore S. N., Walter F. M., 2016, ATel, 9635, [Kuulkers et al.2014]2014ATel.6015....1K Kuulkers E., Page K. L., Saxton R. D., Ness J.-U., Kuin N. P., Osborne J. P., 2014, ATel, 6015, [Landolt1992]1992AJ....104..340L Landolt A. U., 1992, AJ, 104, 340[Landolt2009]2009AJ....137.4186L Landolt A. U., 2009, AJ, 137, 4186[Li & Chomiuk2016]2016ATel.9699....1L Li K.-L., Chomiuk L., 2016, ATel, 9699, [Li, Chomiuk, & Strader2016]2016ATel.9736....1L Li K.-L., Chomiuk L., Strader J., 2016, ATel, 9736, [Li et al.2016]2016ATel.9771....1L Li K.-L., Chomiuk L., Strader J., Cheung C. C., Jean P., Shore S. 
N., Fermi Large Area Telescope Collaboration, 2016, ATel, 9771, [Linford et al.2015]2015ATel.7194....1L Linford J., et al., 2015, ATel, 7194, [Lipunov et al.2016]2016ATel.9631....1L Lipunov V., et al., 2016, ATel, 9631, [Littlefield & Garnavich2015]2015ATel.8156....1L Littlefield C., Garnavich P., 2015, ATel, 8156, [Luckas2016]2016ATel.9678....1L Luckas P., 2016, ATel, 9678, [Lukas2016]2016ATel.9658....1L Lukas P., 2016, ATel, 9658, [McLaughlin1960]1960stat.conf..585M McLaughlin D. B., 1960, in Stellar Atmospheres. J. L. Greenstein ed., University of Chicago Press, 585[Mateo1998]1998ARA A..36..435M Mateo M. L., 1998, ARA&A, 36, 435 [Mezger & Henderson1967]1967ApJ...147..471M Mezger P. G., Henderson A. P., 1967, ApJ, 147, 471 [Morris et al.2017]2017MNRAS.465.1218M Morris P. J., Cotter G., Brown A. M., Chadwick P. M., 2017, MNRAS, 465, 1218 [Mroz & Udalski2016a]2016ATel.8811....1M Mroz P., Udalski A., 2016a, ATel, 8811, [Mroz & Udalski2016b]2016ATel.9622....1M Mroz P., Udalski A., 2016b, ATel, 9622, [Mroz et al.2016]2016ATel.9683....1M Mroz P., Udalski A., Pietrukowicz P., 2016, ATel, 9683, [Munari et al.1992]1992A AS...93..383M Munari U., Yudin B. F., Taranova O. G., Massone G., Marang F., Roberts G., Winkler H., Whitelock P. A., 1992, A&AS, 93, 383[Munari et al.2011]2011MNRAS.410L..52M Munari U., et al., 2011, MNRAS, 410, L52[Munari et al.2012]2012BaltA..21...13M Munari U., et al., 2012, BaltA, 21, 13[Munari & Moretti2012]2012BaltA..21...22M Munari U., Moretti S., 2012, BaltA, 21, 22[Munari et al.2013a]2013MNRAS.435..771M Munari U., Dallaporta S., Castellani F., Valisa P., Frigo A., Chomiuk L., Ribeiro V. A. R. M., 2013a, MNRAS, 435, 771[Munari2014]2014ASPC..490..183M Munari U., 2014, in Stella Novae: Past and Future Decades, P. A. Woudt and V. A. R. M. Ribeiro eds., ASP Conf. Ser. 
490, 183 [Munari et al.2014a]2014JAD....20....4M Munari U., Henden A., Frigo A., Dallaporta S., 2014a, JAD, 20, [Munari et al.2014b]2014AJ....148...81M Munari U., et al., 2014b, AJ, 148, 81[Munari et al.2015]2015NewA...40...28M Munari U., Maitan A., Moretti S., Tomaselli S., 2015, NewA, 40, 28 [Munari et al.2016a]2016IBVS.6162....1M Munari U., Walter F. M., Hambsch F.-J., Frigo A., 2016a, IBVS, 6162, 1[Munari et al.2016b]2016IAUC.9280....2M Munari U., Sollecchia U., Hambsch F.-J., Frigo A., 2016b, IAUC, 9280, [Muneer & Anupama2016]2016ATel.8853....1M Muneer S., Anupama G. C., 2016, ATel, 8853, [Muneer, Anupama, & Raveendran2015]2015ATel.7161....1M Muneer S., Anupama G. C., Raveendran A. V., 2015, ATel, 7161, [Nelson et al.2012]2012ATel.4590....1N Nelson T., Mukai K., Sokoloski J., Chomiuk L., Rupen M., Mioduszewski A., Page K., Osborne J., 2012, ATel, 4590, [Nelson et al.2015]2015ATel.7085....1N Nelson T., et al., 2015, ATel, 7085, [Ness2012]2012BASI...40..353N Ness J. U., 2012, BASI, 40, 353[Orio et al.2016a]2016ATel.9810....1O Orio M., Behar E., Rauch T., Zemk P., 2016a, ATel, 9810, [Orio et al.2016b]2016ATel.9970....1O Orio M., Rauch T., Zemko P., Behar E., 2016b, ATel, 9970, [Osborne et al.2010]2010ATel.2442....1O Osborne J. P., et al., 2010, ATel, 2442 [Page et al.2013]2013ATel.4845....1P Page K. L., et al., 2013, ATel, 4845, [Page et al.2016]2016ATel.9733....1P Page K., Osborne J., Kuin P., Shore S., Williams S., Darnley M. J., 2016, ATel, 9733, [Page, Osborne, & Kuulkers2014]2014ATel.6035....1P Page K. L., Osborne J. P., Kuulkers E., 2014, ATel, 6035, [Rudy, Crawford, & Russell2016]2016ATel.9849....1R Rudy R. J., Crawford K. B., Russell R. W., 2016, ATel, 9849,[Saito et al.2016]2016ATel.9680....1S Saito R. K., Minniti D., Catelan M., Angeloni R., 2016, ATel, 9680, [Schlafly & Finkbeiner2011]2011ApJ...737..103S Schlafly E. F., Finkbeiner D. P., 2011, ApJ, 737, 103 (SF11) [Schlegel, Finkbeiner, & Davis1998]1998ApJ...500..525S Schlegel D. J., Finkbeiner D. 
P., Davis M., 1998, ApJ, 500, 525 (SFD98) [Schwarz et al.2011]2011ApJS..197...31S Schwarz G. J., et al., 2011, ApJS, 197, 31[Seitter1990]1990LNP...369...79S Seitter W. C., 1990, in Physics of Classical Novae, A. Cassatella and R. Viotti eds., Springer-Verlag, Berlin, pag. 79 [Shumkov et al.2016]2016ATel.9621....1S Shumkov V., et al., 2016, ATel, 9621, [Smith et al.2002]2002AJ....123.2121S Smith J. A., et al., 2002, AJ, 123, 2121 [Soker & Livio1989]1989ApJ...339..268S Soker N., Livio M., 1989, ApJ, 339, 268[Sowell et al.2007]2007AJ....134.1089S Sowell J. R., Trippe M., Caballero-Nieves S. M., Houk N., 2007, AJ, 134, 1089[Srivastava et al.2015a]2015ATel.7236....1S Srivastava M., Ashok N. M., Banerjee D. P. K., Venkataraman V., 2015a, ATel, 7236, [Srivastava et al.2015b]2015MNRAS.454.1297S Srivastava M. K., Ashok N. M., Banerjee D. P. K., Sand D., 2015b, MNRAS, 454, 1297[Srivastava et al.2016]2016ATel.8809....1S Srivastava M., Joshi V., Banerjee D. P. K., Ashok N. M., 2016, ATel, 8809, [Stanek et al.2016]2016ATel.9669....1S Stanek K. Z., et al., 2016, ATel, 9669, [Straižys1992]1992msp..book.....S Straižys V., 1992, Multicolor Stellar Photometry, Pachart Publishing House (Tucson) [van den Bergh & Younger1987]1987A AS...70..125V van den Bergh S., Younger P. F., 1987, A&AS, 70, 125[Walter et al.2012]2012PASP..124.1057W Walter F. M., Battisti A., Towers S. E., Bond H. E., Stringfellow G. S., 2012, PASP, 124, 1057[Walter2015]2015ATel.7060....1W Walter F., 2015, ATel, 7060, [Warner1995]1995CAS....28.....W Warner B., 1995, Cataclysmic Variable Stars, Cambridge Univ. Press [Weston et al.2016]2016MNRAS.457..887W Weston J. H. S., et al., 2016, MNRAS, 457, 887 [Williams1992]b131 Williams R.E., 1992, AJ, 104, 725 [Williams & Darnley2016a]2016ATel.9628....1W Williams S. C., Darnley M. J., 2016a, ATel, 9628, [Williams & Darnley2016b]2016ATel.9688....1W Williams S. C., Darnley M. J., 2016b, ATel, 9688, [Woudt & Ribeiro2014]2014ASPC..490.....W Woudt P. A., Ribeiro V. A. R. 
M., 2014, eds., Stella Novae: Past and Future Decades, ASP Conf. Ser. 490
We propose a simulated annealing algorithm specifically tailored to optimise total retrieval times in a multi-level warehouse under complex pre-batched picking constraints. Experiments on real data from a picker-to-parts order picking process in the warehouse of a European manufacturer show that optimal storage assignments do not necessarily display features presumed in heuristics, such as clustering of positively correlated items or ordering of items by picking frequency. In an experiment run on more than 4000 batched orders with 1 to 150 items per batch, the storage assignment suggested by the algorithm produces a 21% reduction in the total retrieval time with respect to a frequency-based storage assignment.

§ INTRODUCTION

Warehouses play a key role in modern supply chains <cit.> and are a significant cost factor to a company: according to the European Logistics Association/AT Kearney report <cit.>, the capital and operation costs of warehouses represented about 25% of the surveyed companies' logistics costs in 2003, while figures for the USA <cit.> indicate that warehousing contributed to the total logistics costs with a share of 22%. Order picking, generally defined as the process of retrieving products from storage in response to a specific customer request, is the most labour-intensive operation in warehouses with manual systems, and a very capital-intensive operation in warehouses with automated systems <cit.>. Estimates of the share of order picking costs in the total warehouse costs range from 55% in Drury <cit.> and Bartholdi et al. <cit.> to 65% in Coyle et al. <cit.>.
For these reasons, warehousing professionals consider order picking the highest-priority area for productivity improvements. Over the last twenty years, many papers have studied order picking processes and optimal strategies or heuristics to optimise subprocesses such as warehouse layout, storage assignment, order batching, order release method and picker routing. However, there is little published research on how to combine these subprocesses optimally: we mention here <cit.>, which compares the performance of S-shaped and Largest Gap routing heuristics for batches of 3 and 4 items, and <cit.>, which focuses on heuristics for order batching to improve the overall performance of order picking systems. The contribution of this work is to propose a combined optimisation of the storage assignment and a routing heuristic for correlated batched orders of large size (up to 102 unique parts in our tests). This is achieved via a hand-tailored algorithm based on a simulated annealing combinatorial search, which incorporates variable parameters depending on the routing heuristic. The algorithm is designed for multi-level warehouses; the routing heuristic is based on a warehouse design with wide aisles, but it can be adapted to narrow aisles.

§ WAREHOUSE ORDER PICKING

In human-operated warehouses, the most common system for order picking is the picker-to-parts class, where the (human) order picker walks or drives along the aisles to pick items <cit.>. Despite their ubiquity, picker-to-parts order-picking systems have not received comparable attention from researchers, perhaps because of their variety and complexity <cit.>. An optimal strategy for a picker-to-parts picking process will minimize the time needed for picking all orders in a given time frame. The precise quantity to minimize is the total retrieval time, which is defined as the sum of pick time, travel time and time due to delays.
By using s-shaped routing or other routing rules which make the aisles unidirectional, the delay part (which is usually due to congestion) can be removed from the model a priori. The main components to be minimized are therefore (1) the time needed to travel the warehouse to collect the items and (2) the time needed to perform one pick. If we assume that the microdesign of the storage racks has already been optimised and appropriate tools are used by the picker, the latter is basically a constant, its size depending on the level of the warehouse where the item is located. Minimizing the time needed to travel the warehouse can be achieved by various ansatzes; the most common ones involve solving the storage location assignment problem (SLAP), the routing problem, or allocating optimal batched orders for one tour of the picker.

§.§ Storage assignment policies

Early scientific contributions to the storage location assignment problem in warehouses include a taxonomy of possible storage location assignment policies, where the classification into dedicated storage, randomized storage and class-based storage was introduced; see <cit.> and the references therein. In our problem, we deal with a warehouse where the individual pieces used in the production of certain products are stored. Therefore, the orders arrive already pre-batched according to the needs of the assembly line. The order batches are quite large (on average 30 items per batch in the data we used for testing our algorithm). We therefore seek a solution to the combined SLAP for a warehouse with a dedicated storage policy and the routing problem for such pre-batched orders. The dedicated storage policy assigns a fixed location to each product, meaning that this location is reserved even for products that are out of stock.
While this may result in low space utilisation, which may imply higher maintenance costs and possibly longer routing times, dedicated storage locations are an advantage in warehouses for production units, since every stored item is needed regularly. Moreover, dedicated storage locations help increase the orientation of the order pickers, leading to increased routing velocity and fewer wrong picks. Dedicated storage policies allow for the logical grouping of storage items, which is often advantageous in retail warehouses <cit.> and in general for items of very different weight, which can be stored in order of decreasing weight along the standard picking route, implying a good stacking sequence. However, as pointed out in the survey <cit.>, analytical models for optimising dedicated storage assignment in manual-pick order-picking systems are still lacking. Existing studies mainly focus on random storage assignments. The random storage policy (see for example <cit.>) reduces the total space required. This does not automatically improve routing times, as travel distance might increase; see <cit.>. It needs a computer-controlled environment to be efficient, as a lack of automation or technical equipment assisting the picking process can lead to slow travelling times due to disorientation and higher percentages of picking errors. It was observed <cit.> that a separation of the pick stock (forward area) from the bulk stock (reserve area) can lead to a significant improvement in picking times: in the forward area, a dedicated storage policy is applied, while the bulk area can follow a random storage policy. In this way, the advantages of dedicated storage still hold and the disadvantages are reduced. Indeed, this policy is already adopted in many warehouses attached to production units, such as the one in our reference case. At the interface between research and industry, several papers, such as <cit.>, describe algorithms that solve the SLAP with real-life constraints.
In particular, the paper <cit.> also deals with a multi-level warehouse situation, pointing out that the storage assignment system needs to reflect the structure of the orders.

§.§ Routing policies

Storage assignment has an impact on the performance of the routing method <cit.>. The problem of optimal routing for order picking is a Steiner Travelling Salesman Problem, which is in general not solvable in polynomial time. However, for a special warehouse aisle configuration, Ratliff and Rosenthal <cit.> showed that there does exist an algorithm that solves the problem in running time linear in the number of aisles and the number of pick locations. This algorithm was extended to other situations in <cit.>. However, algorithms are not yet available for every specific layout, and there remain the unsolved problems of aisle congestion by pickers following different routes and of pickers deviating from routes that they deem illogical <cit.>. Because of this, the problem of routing order pickers is mainly solved using a heuristic, such as the s-shape method. In <cit.>, a routing strategy for a warehouse with a U-shaped layout was introduced and proven to be more efficient under certain conditions. Most heuristic methods for routing order pickers in single-block warehouses assume that the aisles of the warehouse are narrow enough to allow the order picker to retrieve products from both sides of the aisle without changing position <cit.>. A polynomial-time algorithm for routing order pickers in wide aisles was proposed in <cit.>. One of the strengths of the algorithm we propose here is that it gives efficient heuristics both for narrow and wide aisles: the routing strategy, the right-hand rule and the aisle length are incorporated as variable parameters and can easily be changed, see formula (<ref>).
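The "variable parameters" mentioned above suggest that switching between aisle types is a matter of configuration rather than of algorithm. The following sketch illustrates that idea; it is not the authors' code, and all names and numeric values are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class RoutingParameters:
    """Variable parameters of a routing heuristic (illustrative names).

    Switching between wide aisles (right-hand pick rule, turn around at the
    farthest pick) and narrow aisles (unidirectional, traversed completely)
    only changes these values, not the surrounding optimisation algorithm.
    """
    tau_aisle: float            # time to change from one aisle to the next
    tau_s: float                # time to travel one subsection within an aisle
    subsections_per_aisle: int  # aisle length in subsections
    wide_aisles: bool           # True: right-hand rule; False: unidirectional s-shape

    def in_aisle_distance(self, farthest_subsection: int) -> int:
        """Subsections travelled inside one entered aisle."""
        if self.wide_aisles:
            # walk to the farthest pick and back out the same end
            return 2 * farthest_subsection
        # narrow aisle: the whole aisle must be traversed
        return self.subsections_per_aisle

wide = RoutingParameters(tau_aisle=15.0, tau_s=2.0,
                         subsections_per_aisle=10, wide_aisles=True)
narrow = RoutingParameters(tau_aisle=15.0, tau_s=2.0,
                           subsections_per_aisle=10, wide_aisles=False)
print(wide.in_aisle_distance(3), narrow.in_aisle_distance(3))  # 6 10
```

With this parameterisation, the same cost evaluation can serve both aisle layouts, which is what makes the combined optimisation flexible.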
§.§ Simulated annealing

The simulated annealing (SA) method is an optimisation algorithm introduced by Kirkpatrick in 1983 <cit.> and independently by Černý in 1985 <cit.>. The idea behind this method is to emulate the congelation of a crystal with many atoms. This process starts at very high temperatures, at which all atoms are free to move. By slowly decreasing the temperature, atoms start to "feel" the presence of others and arrange themselves into a structure; more precisely, the interaction between atoms forces them to settle into a crystal shape in which the overall interaction energy is minimized. The slow cooling process allows the system to explore many configurations, enabling it to find a near-optimal or optimal configuration. Transferring this method to optimisation problems, one has to define an energy or cost function f. In our case this cost function is the overall picking time in a warehouse. Changing the positions of items (such an assignment of positions is also called a configuration) leads to an increase or decrease of the cost function. At each iteration of this changing procedure (called an update), the simulated annealing algorithm checks whether the new configuration is better or worse than the one before and applies the Metropolis criterion <cit.>, which states the following: an improvement, in this case a change of configuration which leads to a decrease of the cost function, will be accepted, while a worse configuration will only be adopted with the Boltzmann probability

ℙ(Δf, T) = e^-Δf/T

where Δf is the difference between the values of the cost function for the two configurations and T a control parameter similar to a temperature. When performing an optimisation, one first chooses a large control parameter T and performs a finite number of updates to the configuration. Then the parameter T is slightly decreased and again a finite number of updates are realised. Repeating this step transforms the update method from a random walk at large T into a local search at small T.
The last configuration obtained is the one with the least energy. Simulated annealing algorithms are nowadays widely used in economics, for example in packaging problems <cit.>, the production scheduling problem <cit.>, the corridor allocation problem <cit.> <cit.> and solving the SLAP <cit.>. A simulated annealing heuristic to minimize total retrieval time involving order batching and sequencing was introduced in <cit.>. Their algorithm uses a geometric cooling schedule <cit.>, also adopted in <cit.>. The algorithm proposed in <cit.> uses a single flexible heuristic based on random moves in a structured manner, in comparison to the multiple deterministic neighbourhood search heuristics often found in the literature, and is comparably very fast. Recent results <cit.> show that solutions obtained by simulated annealing, Iterated Local Search or the Attribute-Based Hill Climber <cit.> may allow order picking systems to operate more efficiently compared to those obtained with standard constructive heuristics such as the Earliest Due Date rule.

§ PROBLEM DESCRIPTION AND MODELLING

We consider a multi-level warehouse of a production site. Incoming orders consist of large batches of individual items, which are picked manually into a bin using a picker-to-parts order picking system and delivered to a couple of output locations. The first two levels of the warehouse are easily accessible, while all levels above can only be accessed by a lifting device, resulting in a higher picking time per item. The items ordered in a batch are correlated with each other, as each batch corresponds to the parts used in one step of the assembly line of the production site. The difficulty of this SLAP is due to the high number of possible optional features among which the customer can choose when ordering one of the (very few) main products: the number of items which are needed in all options of a main product line is relatively low compared to the number of items corresponding to one or more options.
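The SA loop with the Metropolis criterion and a geometric cooling schedule (T → αT) described above can be sketched in a few lines. This is a minimal generic sketch, not the authors' implementation: the toy cost function and swap move are illustrative stand-ins for the warehouse-specific configuration and update routine.

```python
import math
import random

def simulated_annealing(cost, initial, propose, t_start=100.0, t_end=0.01,
                        alpha=0.95, updates_per_temp=100, seed=0):
    """Generic SA with Metropolis acceptance and geometric cooling T -> alpha*T."""
    rng = random.Random(seed)
    config = initial
    f = cost(config)
    temperature = t_start
    while temperature > t_end:
        for _ in range(updates_per_temp):
            candidate = propose(config, rng)
            delta = cost(candidate) - f
            # Metropolis criterion: always accept improvements, accept a
            # worse configuration with Boltzmann probability exp(-delta/T).
            if delta <= 0 or rng.random() < math.exp(-delta / temperature):
                config, f = candidate, f + delta
        temperature *= alpha  # geometric cooling schedule
    return config, f

# Toy example: move a permutation towards the identity (a stand-in for a
# storage assignment; the cost is the total displacement of all items).
def toy_cost(perm):
    return sum(abs(i - p) for i, p in enumerate(perm))

def toy_swap(perm, rng):
    i, j = rng.sample(range(len(perm)), 2)
    new = list(perm)
    new[i], new[j] = new[j], new[i]
    return new

best, best_cost = simulated_annealing(toy_cost, list(range(10))[::-1], toy_swap)
print(best_cost)  # final cost; for this toy instance the optimum is 0
```

At large T the loop behaves like a random walk over configurations, while at small T the exponential acceptance probability vanishes and only improving swaps survive, exactly the random-walk-to-local-search transition described above.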
The correlations between the individual items to be picked are therefore quite complex and cannot be treated by a purely frequency-based algorithm as proposed in <cit.>. Another level of complexity is added by the different storage container classes used in the warehouse: as they are of different sizes, they cannot be exchanged at random. Moreover, the number of different items stored per aisle depends on the composition of container classes in this aisle. Note that standard simulated annealing algorithms are unable to deal with combinatorial constraints as posed by storage containers of different sizes.

§ SOLUTION APPROACH

We build a simulated annealing algorithm which finds the configuration of warehouse items in a multi-level, multi-container-class warehouse that minimizes the total retrieval time of batched orders under a given routing in a reference time frame. For this, we need to specify (1) how we calculate the retrieval time and (2) which update routine we implement in our SA algorithm to gradually improve retrieval times.

§.§ Construction of retrieval time

As we consider pre-batched orders, meaning several items collected into the same bin during one route, we have to calculate the retrieval times per ordered bin. The total retrieval time t in the reference time frame is simply the sum over the individual retrieval times t_bin,i for each bin (labelled by i) ordered in the reference time frame:

t = ∑_i=1^N_bin t_bin,i

As mentioned above, our algorithm achieves this by both optimising the storage assignment and using a heuristic to minimize routing times. The retrieval time per bin t_bin is therefore split into the pick time t_p and the routing time t_r. These are calculated independently from one another, and the retrieval time per bin then reads

t_bin = t_p + t_r.

§.§.§ Routing heuristic

The routing time is the time that a picker needs to physically move to all locations in the warehouse where the goods for one bin are stored.
The goal is to reduce the overall routing time for all bins. It is easy to see that for a warehouse with only parallel aisles, the s-shaped routing heuristic can be applied both for wide aisles, enforcing a right-hand pick rule, and for narrow aisles (no right-hand rule). However, while unidirectional narrow aisles have to be traveled completely, the routing heuristic in wide aisles is to pick with the right hand until the last item in the batch for the right part of the aisle is reached, and then to turn around and pick from the "left" part of the aisle, exiting at the same point where the aisle was entered.

We present a routing heuristic optimised for wide aisles. This formula can readily be adapted to an s-shaped routing in unidirectional narrow aisles. Dividing every aisle into subsections, the time needed to travel through one wide aisle while applying a right-hand pick rule is given by the travel time per subsection τ_s multiplied by the distance d of the subsection which is farthest away from the entrance of the aisle [We call d the distance, as it can be calculated by any norm which the user considers suitable. The easiest notion of distance is twice the maximum norm, so that d = 2 max{ x_j - x }, where x is the entrance point of the aisle and x_j the j-th item to be picked in this aisle.]. In the case of narrow aisles, the time needed to travel through one aisle is constant, namely just the number of subsections per aisle multiplied by τ_s; in other words, d is a constant and not a variable.

The routing time t_r for one bin is then calculated as

t_r = N_aisle·τ_aisle + d·τ_s

where N_aisle is the number of aisles that have to be entered to collect all items in this batch, τ_aisle is the time needed to change from one aisle to the next, and τ_s is the time it takes to move one subsection within one aisle.

One might ask why this heuristic should give a good result.
In fact, considering an isolated routing optimisation, as described in the literature review above, a simple s-shaped heuristic does not always give the best results, even though it helps to avoid congestion. However, in our case, it is precisely the combination of a solution to the SLAP with implicit routing heuristics which produces an optimal solution conditioned on this wide-aisle s-shaped routing heuristic.

§.§.§ Multi-level picking

The literature often distinguishes between low-level and high-level picking. In low-level picking systems the picker can directly collect the items from the storage racks, while high-level picking or "man-aboard order-picking" indicates the use of a lifting order-pick truck or crane; see <cit.> for a detailed exposition. We design our algorithm in such a way that the pick times depend on the level where the item is stored. In the simplest case, when only a distinction between low-level and high-level picking is made, we therefore work with two picking times, namely τ_l for the lower level(s) and τ_u for "upper level" picking. The pick time per bin can then be calculated as

t_p = N_l·τ_l + N_u·τ_u + Θ(N_u)·τ_lift

where N_l is the number of items located in the lower levels and N_u the number of items located in the upper levels, respectively. The last term in equation (<ref>) adds the time τ_lift needed to fetch or adjust the lifting device to the upper level(s). In the simplest situation of only one level change, the function Θ(N_u) returns one if there are any elements to retrieve from the upper levels and zero if the picker has no need to visit the upper levels.

Formula (<ref>) is only a special case of the general multi-level picking time formula

t̃_p = (L-1)·τ_lift + ∑_j=1^L N_j·τ_j

where L is the number of levels actually visited by the picker and τ_j the picking time for an item stored in level j.
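The retrieval-time construction above can be sketched end to end. This is an illustrative encoding of our own (the helper names and the data layout are assumptions); we read the formulas as in the special case, i.e. the lift time is charged once per additional level actually visited, and the per-aisle distances d are accumulated over all visited aisles.

```python
def routing_time(items_per_aisle, tau_aisle, tau_s):
    """t_r = N_aisle * tau_aisle + d * tau_s for one bin.

    items_per_aisle maps an aisle id to the subsection indices (counted
    from the aisle entrance) holding items of this bin; with the
    right-hand pick rule the picker walks to the farthest pick and back,
    so each visited aisle contributes d = 2 * max(index)."""
    d = sum(2 * max(indices) for indices in items_per_aisle.values())
    return len(items_per_aisle) * tau_aisle + d * tau_s

def pick_time(item_counts, pick_times, tau_lift):
    """General multi-level pick time t~_p = (L-1)*tau_lift + sum_j N_j*tau_j,
    summing over the L levels the picker actually visits."""
    visited = [(n, t) for n, t in zip(item_counts, pick_times) if n > 0]
    lifts = max(len(visited) - 1, 0)
    return lifts * tau_lift + sum(n * t for n, t in visited)

def bin_retrieval_time(items_per_aisle, item_counts, pick_times,
                       tau_aisle=30.0, tau_s=2.0, tau_lift=120.0):
    """t_bin = t_p + t_r; summing this over all ordered bins gives the
    cost function t of the annealing."""
    return (pick_time(item_counts, pick_times, tau_lift)
            + routing_time(items_per_aisle, tau_aisle, tau_s))
```

With two levels and item counts (N_l, N_u), pick_time reduces to N_l·τ_l + N_u·τ_u + Θ(N_u)·τ_lift, recovering the special case.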
By specifying different picking times for different levels or other special situations, the proposed algorithm can be adapted to combined low- and high-level picking, and by adjusting τ_lift, τ_aisle and τ_s, the algorithm also applies to differently shaped warehouses and variable routing heuristics.

§.§ Construction of moves

The total retrieval time constructed in (<ref>) is the cost function which our simulated annealing algorithm has to minimize. As explained above, such an algorithm repeatedly changes the location of the storage containers (in which the items are stored), trying to find configurations with a lower total cost. These continuous changes are called moves. It is of key importance for the performance of the algorithm to choose the correct type of moves.

The specific design of our test warehouse adds an extra difficulty: the presence of storage containers of different sizes in the warehouse translates into combinatorial compatibility conditions, i.e. it has to be ensured that the algorithm does not exchange two storage containers of different sizes. The crucial solution step here was to observe that the admissible combinations of container classes form subsections in the aisles. Therefore, two update routines were implemented in the presented algorithm: the first routine exchanges two random containers from the same size category, the second routine swaps whole subsections in two randomly chosen levels. The first move is tasked with clustering all items that are strongly correlated and minimizes the individual item picking time for one bin. The second routine both ensures that a solution found is admissible and accelerates the optimisation by searching for a more suitable location of a group of already clustered items; e.g. a subsection consisting of items used to assemble a highly popular product option should be placed in a part of the warehouse which is quickly reachable by the picker.
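The two update routines can be sketched as follows. The data layout (a dict mapping container ids to stored items plus a size-class lookup, and a flat list of subsections) is an illustrative assumption of our own, not the paper's actual implementation.

```python
import random

def swap_containers(config, size_class, rng):
    """Move 1: exchange the stored items of two random containers of the
    same size class, so the configuration stays admissible by construction."""
    cls = rng.choice(sorted({size_class[c] for c in config}))
    candidates = [c for c in config if size_class[c] == cls]
    a, b = rng.sample(candidates, 2)
    config[a], config[b] = config[b], config[a]

def swap_subsections(subsections, rng):
    """Move 2: swap two whole subsections; since the admissible
    container-class combinations form subsections, this move also always
    yields an admissible configuration."""
    i, j = rng.sample(range(len(subsections)), 2)
    subsections[i], subsections[j] = subsections[j], subsections[i]
```

Both moves permute existing content rather than creating new placements, which is what keeps every intermediate configuration feasible without an explicit constraint check.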
The combination of these two moves allows the simulated annealing algorithm to find warehouse configurations that minimize the average picking time for each bin, while keeping in mind the overall picking performance for the whole range of product options.

§ VALIDATION AND RESULTS

§.§ Case description

We test our algorithm with real data from the production site of a medium-size European company offering highly customizable products with a long lifespan. When ordering a product, the customer chooses from a large number of possible options for the product of his choice, which are assembled at the production site. The individual parts for the product are prefabricated by subcontractors, shipped to the company and stored in the warehouse until needed; there is no just-in-time delivery. Orders to the warehouse arrive as a batch, which is picked in a single journey. The picker stores the items of the batch in a bin, which includes small trays for small pieces and a dedicated space for heavy items, so there is no issue of considering heavy or delicate items when deciding the routing strategy. When all items are collected, the picker delivers them to certain input points along the assembly line. The content of one batched order varies not only according to the output delivery point, which is the input point of a specific step in the assembly line, but also depends highly on the end-configuration of the product chosen by the customer. In other words, customised product options lead to complex correlations between the items stored in the warehouse. Positively correlated items are more likely to be found in one batched order arriving at the warehouse.
§.§.§ Sample warehouse design used in the algorithm

Our algorithm was designed to be general enough to cope with several features appearing in the warehouse of the abovementioned company, namely variable aisle length, multiple storage levels, and different container types with combinatorial restrictions due to their size. As visualised in figure <ref>, the sample warehouse of the manufacturer has 7 aisles of variable length (due to the constraints of the production site). Each aisle is 4 levels high and is divided into (at most) 20 subsections. Individual items are stored in containers, of which two main container types, namely large and regular size, are used; of these types, both high and low containers are available. Each subsection of an aisle is wide enough to hold two large or three regular containers, so up to six different items (six items meaning three regular-size containers of low height stacked on three other regular-size containers of low height) can be stored at each level of a subsection.

A total of 1268 different components are stored in the warehouse of the manufacturer whose data we used. We considered only the primary pick location of these 1268 individual items, as only this primary pick location is mentioned on the order sheet given to the picker. The secondary location of a component is in the less frequented "cold" area of the warehouse, where a random storage policy is applied (surplus and refill storage). An order batch contains between one and 150 items. As mentioned, the bin has designated storage options for different items. For example, small items go on the trays, so that no predefined sequence of picking is needed. The delivery locations are designated spots at the edge of the assembly line.
As the retrieval time for one order batch is longer by orders of magnitude than the travelling time to the delivery points along the assembly line, the influence of those delivery points on the warehouse storage locations can be neglected.

§.§.§ Routing heuristics in the company

Our algorithm is designed to adapt to several routing situations. The routing heuristic used in our sample warehouse is as follows: pickers start with picking from the lower two levels of a wide aisle. They follow a right-hand rule, which means that they pick only on their right while traveling along the aisle. Once the last item to their right is reached, they turn around and pick the other side of the aisle until they arrive back at the entrance of this aisle and change to the next aisle. After completing the routing in the lower levels, a lifting order-pick truck is fetched and the picker starts picking the first upper level. After all items of the batch stored in the first upper level are picked, the height of the order-pick truck is adjusted and the picker continues the same routing on the second upper storage level. Using this routing heuristic, the company arranged their storage allocation based on the picking frequency.

§.§.§ Individual pick times

Note that the time required for each pick (denoted by τ_j in formula (<ref>)) changes depending on the level from which items are picked. In our sample warehouse, we consider the simple situation of only two different picking times, τ_l for the lower levels and τ_u for the upper levels; see formula (<ref>). The time required for each pick in the upper levels is significantly longer than for the lower levels, as more careful steering is needed and the picker is less mobile. The pick times and the times for aisle and level changes used in the experiments with our algorithm are listed below.
Please recall equations (<ref>) and (<ref>) for the definitions. It is worth mentioning that the solution quality does not depend on the exact times as long as they are in reasonable proportion to each other.

Variable | Time in seconds
τ_aisle | 30
τ_s | 2 per subsection
τ_l | 15
τ_u | 30
τ_lift | 120

§.§ Validation of method

The experiments were carried out with 4192 pre-batched orders, representing the products assembled in the reference time frame. All experiments were run single-threaded on an Intel(R) Core(TM) i7-4790 CPU running at 3.60GHz and with 8 GB of memory. We used a geometric cooling schedule starting at T = 10^7 and stopping when convergence is reached, with a cooling constant α = 0.95, resulting in about 2·10^6 iterations. The total running time for one annealing was about 5 hours.

§.§ Results

After optimisation of the storage locations by our algorithm, the total retrieval time for all 4192 pre-batched orders in the reference time frame was calculated to be approximately 38% lower than for a random item distribution and 21% lower than the total retrieval time with the pre-optimised storage location configuration used by the company. This is a significant improvement, in particular as the initial storage allocation used a frequency-based heuristic similar to <cit.>: fixing the routing heuristic, the company's logistics division had already re-arranged the items used more often into the first aisles, while those used less were put in the back aisles. However, as our results show, this simple heuristic has its drawbacks when handling large batched orders.

§.§.§ Detailed analysis - global distribution

Figure <ref> shows the optimised warehouse design. Red items have the highest picking frequency, blue items the lowest. In comparison to the starting situation (Figure <ref>), the lower levels shown in a) and b) are better stocked. They contain the more frequently picked items (red, orange, yellow/green pixels), while the upper levels c) and d) contain more blue/green items.
In fact, the algorithm removed all orange/red items from the upper levels, which is to be expected, since the higher picking time in the top levels and the need for a lifting device have a significant impact on the optimisation. The canonical entrance to the warehouse is at the bottom-right corner of the picture. The first aisle from the right, which is the first aisle visited, initially had a lot of very frequently picked items on the first two levels, as this was the preferred storage location in the heuristic used by the company. After optimisation by our algorithm, most of these items were moved, as this aisle is a very short one and therefore inefficient to visit from the global optimisation point of view taken by our algorithm.

§.§.§ Detailed analysis - batch-induced correlations

As already discussed above, the picking process of large batched orders results in complex correlations between the individual items in the warehouse. The correlations are not necessarily related to the frequency of picks of a single item: some items are very basic and are used for every single product that this company is producing, independently of the product option chosen by the customer, while other items are specific to one product option and appear in exactly one batch if and only if this product option was ordered. One might conjecture that clustering the items according to their correlation with each other leads to a lower total retrieval time. By clustering we mean that highly correlated items are stored in neighbouring storage containers. To check whether our algorithm does indeed cluster correlated items, we visualise in figure <ref> the changes in correlation between neighbouring items.
The visualisation is done via the average Jaccard similarity coefficient between the batches which contain item i and the batches which contain a direct neighbour of item i, which we call j. Roughly speaking, a high Jaccard similarity coefficient means that item i and its neighbour j are positively correlated in the picking process, i.e. a large percentage of the batches which contain i also contain j.

The Jaccard similarity coefficient is calculated as follows. Denote by {B_i} the set of batches in which i occurs and by {B_j} the set of batches in which j occurs. The Jaccard similarity coefficient measures the "similarity" between the two finite sample sets {B_i} and {B_j} and is defined by

sim({B_i},{B_j}) = |{B_i}⋂{B_j}| / |{B_i}⋃{B_j}|.

Moreover, define the set of neighbours of item i as the set of items j which are in the same or an adjacent subsection of an aisle. We require that j be stored in the same level category as i, meaning that if i is stored in a lower level, then j also has to be stored in a lower level to qualify as a neighbour.

Figure <ref> shows that the heuristic approach taken by the company (plot a) of figure <ref>) exhibits a high similarity between a few items, i.e. objects which are often picked into the same batch are stored in neighbouring containers. However, the optimised configuration (plot b) of figure <ref>) does not display areas of high similarity. Consequently, and in contrast to intuition, clustering of correlated items does not necessarily lead to a reduction in retrieval times. This phenomenon can be explained by the relatively low impact of the item distance on the total retrieval time of a large batch, in relation to the sum of the picking times of the individual items. This behaviour is also present in the other warehouse levels.
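The similarity computation just described can be sketched directly. This is an illustrative implementation of our own, with batches represented as sets of item ids.

```python
def jaccard(Bi, Bj):
    """sim({B_i},{B_j}) = |B_i ∩ B_j| / |B_i ∪ B_j|, where B_i and B_j
    are the sets of batches containing items i and j, respectively."""
    union = Bi | Bj
    return len(Bi & Bj) / len(union) if union else 0.0

def avg_neighbour_similarity(item, neighbours, batches):
    """Average Jaccard similarity between `item` and its neighbours.
    `batches` is a list of sets of item ids; `neighbours` is assumed to
    already respect the same-level-category requirement."""
    def occurrences(x):
        return {k for k, batch in enumerate(batches) if x in batch}
    Bi = occurrences(item)
    sims = [jaccard(Bi, occurrences(j)) for j in neighbours]
    return sum(sims) / len(sims) if sims else 0.0
```

A value near 1 means almost every batch containing i also contains its neighbour; a value near 0 means the two items are rarely ordered together.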
To conclude, the analysis of correlations of neighbouring items shows that the examined warehouse cannot simply be divided into "hot areas" and "cold areas" with respect to batched orders: the complex structure of the batches leads to highly non-trivial optimised configurations.

§.§.§ Detailed analysis - individual picking times

Another important question is whether an optimal total retrieval time for a large number of batches might come at the expense of very long retrieval times for a few batches. The last step in our analysis is therefore to check the individual picking times per batch. For better visibility, the displayed picking times are normed to the maximum batch picking time in the original configuration. The histogram of individual picking times is shown in figure <ref>: before the optimisation, the distribution of individual picking times is centred around a value of 0.16 with a fat right tail. The algorithm is able to shift the distribution to shorter times and eliminates parts of the fat tail. This shows that the optimal storage assignment obtained by our algorithm carries a lower probability of extremely long retrieval times for arbitrary batches.

§ DISCUSSION/CONCLUSION

This paper presents a simulated annealing algorithm to reduce picking times of potentially large sets of batched orders by a combined optimisation of the storage assignments in a multi-level warehouse and an implicit s-shape routing heuristic. Our main contribution is a general method for the solution of the storage location assignment problem under adaptable routing methods, which is flexible enough to be used in a multi-level warehouse setting. While the current algorithm was designed for single-block warehouse layouts, extensions to more general warehouse settings can easily be achieved by adapting formula (<ref>).
Notably, this algorithm can also be applied to parts of the warehouse without resulting in phantom constraints or other unnatural solutions to this combined optimisation problem. To our knowledge, our article is the first to investigate the impact of large batched orders on the optimal storage assignment. Contrary to intuition, we give evidence that, even for batched orders of very large size (e.g. over 100 items in one batch), clustering of correlated items in specific parts of the warehouse does not necessarily lead to a reduction in retrieval times. Moreover, we show that clustering heuristics are not even beneficial for reducing the probability of extremely long retrieval times for rarely occurring batches.

We tested our algorithm on real data, optimising the storage assignments in a four-level warehouse of a manufacturer with pre-batched orders of 1-150 items per order. The optimal storage assignment suggested by the algorithm reduces the total retrieval time by 21% compared to heuristics based on the picking frequency of individual pieces. Compared to a random item distribution in the warehouse, the simulated annealing algorithm reduces the overall picking time by approximately 38%. These savings are achieved without changing the routing heuristic and without splitting large pre-batched orders. The simultaneous delivery of all items, even of a very large batch, is crucial to maintaining the efficiency of the assembly line. Note that the algorithm is very fast, as all parts of the algorithm are designed and optimised for multi-level warehouses; no black-box packages are used.

The main novelty of our method is to provide a structured approach to the storage location assignment problem with very general pre-batching constraints in multi-level warehouse settings. In particular, the algorithm is able to deal with complex correlated batches and to find highly non-trivial optimised configurations.
§ ACKNOWLEDGEMENTS

We gratefully acknowledge the financial, data and feedback contributions from our partner company. A.E. and C.G. thank Stiftung der Deutschen Wirtschaft for partial funding of this project. C.G.'s research is supported by ERC grant no. 277749 "EPSILON".

§ REFERENCES

New batch construction heuristics to optimise the performance of order picking systems. International Journal of Production Economics, 131(2):618–630, 2011.
H. Ahonen, A. G. de Alvarenga, and A. Amaral. Simulated annealing and tabu search approaches for the corridor allocation problem. European Journal of Operational Research, 232(1):221–233, 2014.
A. R. Amaral. The corridor allocation problem. Computers & Operations Research, 39(12):3325–3330, 2012.
E. Atmaca and A. Ozturk. Defining order picking policy: A storage assignment model and a simulated annealing solution in AS/RS systems. Applied Mathematical Modelling, 37(7):5069–5079, 2013.
J. J. Bartholdi III and S. T. Hackman. Warehouse & distribution science: release 0.92. Atlanta, GA, The Supply Chain and Logistics Institute, School of Industrial and Systems Engineering, Georgia Institute of Technology, 2011.
C. Battista, A. Fumi, F. Giordano, and M. Schiraldi. Storage location assignment problem: implementation in a warehouse design optimization tool. In Proceedings of the Conference Breaking Down the Barriers between Research and Industry, 2011.
H. Brynzer and M. Johansson. Storage location assignment: Using the product structure to reduce order picking times. International Journal of Production Economics, 46:595–603, 1996.
V. Černý. Thermodynamical approach to the traveling salesman problem: An efficient simulation algorithm. Journal of Optimization Theory and Applications, 45(1):41–51, 1985.
F. T. Chan and H. K. Chan. Improving the productivity of order picking of a manual-pick and multi-level rack distribution warehouse through the implementation of class-based storage. Expert Systems with Applications, 38(3):2686–2700, 2011.
K. Choe and G. Sharp. Small parts order picking: design and operation. Online tutorial, 1991.
H. Cohn and M. Fielding. Simulated annealing: searching for an optimal temperature schedule. SIAM Journal on Optimization, 9(3):779–802, 1999.
J. J. Coyle, E. J. Bardi, C. J. Langley, et al. The management of business logistics, volume 6. West Publishing Company, St Paul, MN, 1996.
M. De Koster, E. S. Van der Poort, and M. Wolters. Efficient orderbatching methods in warehouses. International Journal of Production Research, 37(7):1479–1504, 1999.
R. De Koster. How to assess a warehouse operation in a single tour. Technology Report, Erasmus University, Netherlands, 2004.
R. De Koster, T. Le-Duc, and K. J. Roodbergen. Design and control of warehouse order picking: A literature review. European Journal of Operational Research, 182(2):481–501, 2007.
R. De Koster and A. Neuteboom. The logistics of supermarket chains: a comparison of seven chains in the Netherlands. Elsevier Business Information, 2001.
R. De Koster and E. Van der Poort. Routing order pickers in a warehouse: a comparison between optimal and heuristic solutions. IIE Transactions, 30(5):469–480, 1998.
R. Dekker, M. De Koster, K. J. Roodbergen, and H. Van Kalleveen. Improving order-picking response time at Ankor's warehouse. Interfaces, 34(4):303–313, 2004.
J. Drury. Towards more efficient order picking. IMM Monograph, 1, 1988.
European Logistics Association and AT Kearney Management Consultants. Differentiation for performance excellence in logistics, 2004.
E. Frazelle. Supply chain strategy: the logistics of supply chain management. McGraw-Hill, 2002.
E. Frazelle and G. P. Sharp. Correlated assignment strategy can improve any order-picking operation. Industrial Engineering, 21(4):33–37, 1989.
E. H. Frazelle. Stock location assignment and order picking productivity. Georgia Tech Theses and Dissertations, 1989.
N. Gademann and S. Velde. Order batching to minimize total travel time in a parallel-aisle warehouse. IIE Transactions, 37(1):63–75, 2005.
M. Goetschalckx and J. Ashayeri. Classification and design of order picking. Logistics World, 2(2):99–106, 1989.
M. Goetschalckx and H. D. Ratliff. An efficient algorithm to cluster order picking items in a wide aisle. Engineering Costs and Production Economics, 13(4):263–271, 1988.
A. M. Gomes and J. F. Oliveira. Solving irregular strip packing problems by hybridising simulated annealing and linear programming. European Journal of Operational Research, 171(3):811–829, 2006.
R. W. Hall. Distance approximations for routing manual pickers in a warehouse. IIE Transactions, 25(4):76–87, 1993.
W. H. Hausman, L. B. Schwarz, and S. C. Graves. Optimal storage assignment in automatic warehousing systems. Management Science, 22(6):629–638, 1976.
S. Henn, S. Koch, H. Gerking, and G. Wäscher. A U-shaped layout for manual order-picking systems. Logistics Research, 6(4):245–261, 2013.
S. Henn and V. Schmid. Metaheuristics for order batching and sequencing in manual order picking systems. Computers & Industrial Engineering, 66(2):338–351, 2013.
S. Hong, A. L. Johnson, and B. A. Peters. Batch picking in narrow-aisle order picking systems with consideration for picker blocking. European Journal of Operational Research, 221(3):557–570, 2012.
Establish, Inc. and Herbert W. Davis & Co. Logistic cost and service, 2005.
S. Kirkpatrick. Optimization by simulated annealing: Quantitative studies. Journal of Statistical Physics, 34(5-6):975–986, 1984.
R. Kutzelnigg. Optimal allocation of goods in a warehouse: Minimizing the order picking costs under real-life constraints. In 3rd IEEE International Symposium on Logistics and Industrial Informatics, pages 65–70, Aug 2011.
T. Loukil, J. Teghem, and P. Fortemps. A multi-objective production scheduling case study solved by simulated annealing. European Journal of Operational Research, 179(3):709–722, 2007.
M. Matusiak, R. de Koster, L. Kroon, and J. Saarinen. A fast simulated annealing method for batching precedence-constrained customer orders in a warehouse. European Journal of Operational Research, 236(3):968–977, 2014.
N. Metropolis, A. W. Rosenbluth, M. N. Rosenbluth, A. H. Teller, and E. Teller. Equation of state calculations by fast computing machines. The Journal of Chemical Physics, 21(6):1087–1092, 1953.
V. R. Muppani and G. K. Adil. Efficient formation of storage classes for warehouse storage location assignment: a simulated annealing approach. Omega, 36(4):609–618, 2008.
C. G. Petersen. An evaluation of order picking routeing policies. International Journal of Operations & Production Management, 17(11):1098–1111, 1997.
C. G. Petersen. The impact of routing and storage policies on warehouse efficiency. International Journal of Operations & Production Management, 19(10):1053–1064, 1999.
H. D. Ratliff and A. S. Rosenthal. Order-picking in a rectangular warehouse: a solvable case of the traveling salesman problem. Operations Research, 31(3):507–521, 1983.
K.-J. Roodbergen. Layout and routing methods for warehouses. Erasmus University Rotterdam, 2001.
K. J. Roodbergen and R. De Koster. Routing order pickers in a warehouse with a middle aisle. European Journal of Operational Research, 133(1):32–43, 2001.
K. J. Roodbergen and R. Koster. Routing methods for warehouses with multiple cross aisles. International Journal of Production Research, 39(9):1865–1883, 2001.
J. A. Tompkins, J. A. White, Y. A. Bozer, and J. M. A. Tanchoco. Facilities planning (third ed.). John Wiley & Sons, 2003.
J. P. Van den Berg, G. P. Sharp, A. N. Gademann, and Y. Pochet. Forward-reserve allocation in a warehouse with unit-load replenishments. European Journal of Operational Research, 111(1):98–113, 1998.
J. P. van den Berg and W. Zijm. Models for warehouse management: Classification and examples. International Journal of Production Economics, 59(1):519–528, 1999.
I. M. Whittley and G. D. Smith. The attribute based hill climber. Journal of Mathematical Modelling and Algorithms, 3(2):167–178, 2004.
Sneha Chaubey, Department of Mathematics, University of Illinois, 1409 West Green Street, Urbana, IL 61801; chaubey2@illinois.edu

Elena Fuchs, Department of Mathematics, UC Davis, One Shields Avenue, Davis, CA 95616; efuchs@math.ucdavis.edu

Robert Hines, Department of Mathematics, University of Colorado, Campus Box 395, Boulder, Colorado 80309-0395; robert.hines@colorado.edu

Katherine E. Stange, Department of Mathematics, University of Colorado, Campus Box 395, Boulder, Colorado 80309-0395; kstange@colorado.edu

2010 Mathematics Subject Classification: Primary 11A55, 11J70, 11E20, 37A45, 52C26.

The work of the fourth author was sponsored by the National Security Agency under Grant Number H98230-14-1-0106 and NSF DMS-1643552. The United States Government is authorized to reproduce and distribute reprints notwithstanding any copyright notation herein. The work of the second author was supported by NSF DMS-1501970 and a Sloan Research Fellowship.

We examine a pair of dynamical systems on the plane induced by a pair of spanning trees in the Cayley graph of the Super-Apollonian group of Graham, Lagarias, Mallows, Wilks and Yan. The dynamical systems compute Gaussian rational approximations to complex numbers and are "reflective" versions of the complex continued fractions of A. L. Schmidt. They also describe a reduction algorithm for Lorentz quadruples, in analogy to work of Romik on Pythagorean triples. For these dynamical systems, we produce an invertible extension and an invariant measure, which we conjecture is ergodic. We consider some statistics of the related continued fraction expansions, and we also examine the restriction of these systems to the real line, which gives a reflective version of the usual continued fraction algorithm. Finally, we briefly consider an alternate setup corresponding to a tree of Lorentz quadruples ordered by arithmetic complexity.
The Dynamics of Super-Apollonian Continued Fractions
December 30, 2023
====================================================

§ INTRODUCTION

This paper unites three distinct lines of research that have appeared over the past 40 years. In what was chronologically the first, Asmus Schmidt developed a theory of Gaussian complex continued fractions based on Farey circles and triangles analogous to the Farey subdivision of the real line <cit.>. In the second, Graham, Lagarias, Mallows, Wilks and Yan laid much of the algebraic groundwork for the study of Apollonian circle packings and super-packings <cit.>. In the third, Romik studied a dynamical system on the tree of Pythagorean triples which is conjugate to a Euclidean algorithm related to the Gauss map for real continued fractions <cit.>.

In this work, we define a dynamical system on the complex plane which can be viewed in at least three distinct ways: as a “reflective” version of Gaussian complex continued fractions; as a system of reduction on the Descartes quadruples of all Apollonian circle packings; or as a dynamical system on the tree of Lorentz quadruples, i.e. solutions to x^2 + y^2 + z^2 = t^2. Restricted to the real line, it produces a reflective continued fraction algorithm and Euclidean algorithm.

Our work fits into a long line of work on the number theoretical aspects of Apollonian circle packings. One of the first papers on this subject was <cit.>, paving the way for many subsequent works on the arithmetic of such packings. Indeed, many of the results in these subsequent works started as conjectures in <cit.> (see <cit.> and <cit.> for a summary of these results). By and large, the arithmetic questions inspired by the work of Graham et al.
have been of the following nature. First, one observes that certain Apollonian circle packings are integral, meaning their curvatures (inverse radii) are all integral. Consider the set of integer curvatures (counted with or without multiplicity) appearing in such a fixed primitive integral Apollonian packing. What can one say about this set of integers: how many primes of bounded size are there, can one describe the integers using local conditions alone, and what should these local conditions be, if so? While the question of rational approximation of complex numbers is quite different in flavor than the number theoretic questions just described, it turns out both questions hinge on studying the same symmetry group. In the study of integer curvatures this is the Apollonian group, which lives naturally inside the Super-Apollonian group of <cit.>, which, in turn, describes the collection of all integral Apollonian circle packings simultaneously. However, the Super-Apollonian group can be realized as an index 48 subgroup of the extended Bianchi group PSL_2(Z[i]) ⋊ ⟨𝔠⟩. This Bianchi group, analogous to PSL_2(Z), is the natural setting in which to study Gaussian complex continued fraction expansions. As an indication of this phenomenon, the Farey circles and triangles of Schmidt (see Figure <ref>), when iterated, actually produce an Apollonian super-packing as illustrated in <cit.> or Figure <ref>. The Super-Apollonian continued fraction expansion we describe in this paper was, however, initially inspired by a process described by Romik in <cit.>. In that paper, Romik used the group generated by
γ_1 = ( [ -1 2 2; -2 1 2; -2 2 3 ]), γ_2 = ( [ 1 2 2; 2 1 2; 2 2 3 ]), γ_3 = ( [ 1 -2 2; 2 -1 2; 2 -2 3 ])
to create a dynamical system on the positive quadrant of the unit circle. This is a subgroup of the orthogonal group O_F(ℤ) where F(x,y,z) = x^2 + y^2 - z^2, and any point (x,y) on the unit circle corresponds to the solution (x,y,1) of the equation F(x,y,z) = 0, or the Pythagorean equation. Indeed, the dynamical system Romik
constructed acts naturally to walk to the root of a tree of primitive integer Pythagorean triples as depicted in Figure <ref>. The fact that Pythagorean triples form a tree has been observed many times; see <cit.>. Given any primitive Pythagorean triple (a,b,c), which corresponds to the rational point (a/c, b/c) on the unit circle, Romik's dynamical system allows one to quickly determine the finite word in γ_1, γ_2, γ_3 which describes the path in the tree from (3,4,5) to (a,b,c). It also associates an infinite word or expansion in γ_1, γ_2, γ_3 to any irrational point on the first quadrant of the unit circle, and this gives a type of continued fraction expansion. We refer the reader to <cit.> for further details on this beautiful work. The present paper can be thought of as a version of Romik's work in four dimensions. As we discuss below, Apollonian packings are naturally connected to primitive Lorentz quadruples, or coprime quadruples of integers satisfying x^2 + y^2 + z^2 = t^2. In this higher dimensional setting, a number of new difficulties arise: for example, the analog of the ternary Pythagorean tree above is no longer a tree, and there is not a unique path from any given vertex to the root (see Figure <ref> for a picture of the graph). We will now briefly describe the idea behind Super-Apollonian continued fractions. An Apollonian packing is obtained by starting with four pairwise tangent circles, and repeatedly inscribing into every region bounded by three existing pairwise tangent circles the unique circle which is tangent to all three. The number theoretical interest in these packings comes from the fact that if the starting four circles all have integer curvature (i.e. the reciprocal of each of the radii is an integer), then all of the circles in the packing have integer curvature. This process is depicted in Figure <ref>.
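The action of the γ_i on triples is easy to check directly. The following Python sketch (our own illustration, not code from the paper) generates children of (3,4,5) in the tree and walks back toward the root using the inverse matrices, which can be computed as γ^(-1) = Gγ^T G with G = diag(1,1,-1) the Gram matrix of F:

```python
# Romik's generators for the tree of primitive Pythagorean triples
G1 = [[-1, 2, 2], [-2, 1, 2], [-2, 2, 3]]
G2 = [[ 1, 2, 2], [ 2, 1, 2], [ 2, 2, 3]]
G3 = [[ 1, -2, 2], [ 2, -1, 2], [ 2, -2, 3]]

def act(M, t):
    # matrix-vector action on a triple
    return tuple(sum(M[i][j] * t[j] for j in range(3)) for i in range(3))

def inv(M):
    # For M in O_F with F = x^2 + y^2 - z^2: M^{-1} = G M^T G, G = diag(1,1,-1)
    g = (1, 1, -1)
    return [[g[i] * g[j] * M[j][i] for j in range(3)] for i in range(3)]

def is_pyth(t):
    return t[0] ** 2 + t[1] ** 2 == t[2] ** 2

children = [act(M, (3, 4, 5)) for M in (G1, G2, G3)]
# children == [(15, 8, 17), (21, 20, 29), (5, 12, 13)]
```

Applying the appropriate inverse generator to a child recovers the parent triple, so repeated reduction walks any primitive triple back to (3,4,5).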
These packings have a symmetry which can be described both algebraically and geometrically. The algebraic interpretation stems from Descartes' theorem from 1643: if four pairwise tangent circles have curvatures a, b, c, d, then
Q(a,b,c,d) = 2(a^2 + b^2 + c^2 + d^2) - (a + b + c + d)^2 = 0;
the symmetries of the packing are symmetries of this quadratic form. On the other hand, geometrically, one can embed the Apollonian circle packing in the complex plane and describe the symmetries as Möbius transformations. While the curvatures of circles in any one primitive integral Apollonian packing give only a thin subset of all primitive integer solutions to Q(a,b,c,d) = 0, one can use the geometric interpretation of the algebraic symmetry to choose a natural extension to the Super-Apollonian group of <cit.>. The Super-Apollonian group acts on Descartes quadruples. The orbit of one Descartes quadruple under this larger group will now include all primitive integral Apollonian circle packings, nested one inside another densely in the complex plane; see Figure <ref>. To approximate a complex number z ∈ C, we approach z with a sequence of Descartes quadruples of circles, in the sense that the tangency points of the quadruples (all of which are rational) approach z. The sequence of quadruples is generated by repeated applications of generators of the Super-Apollonian group, so that the continued fraction expansion is a word in the generators. Figure <ref> depicts this process when approximating the number π + ei.
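Descartes' relation can be verified numerically; the sketch below (ours, assuming nothing beyond the formula in the text) evaluates Q on the classical quadruple (-1,2,2,3) and solves Q(a,b,c,d) = 0 for the fourth curvature, d = a+b+c ± 2√(ab+bc+ca):

```python
import math

def Q(a, b, c, d):
    # the Descartes quadratic form from the text
    return 2 * (a * a + b * b + c * c + d * d) - (a + b + c + d) ** 2

def tangent_curvatures(a, b, c):
    # solving Q(a,b,c,d) = 0 as a quadratic in d gives d = a+b+c +/- 2*sqrt(ab+bc+ca)
    s = 2.0 * math.sqrt(a * b + b * c + c * a)
    return a + b + c + s, a + b + c - s

# (2,2,3) admits the two tangent circles of curvature 15 (inscribed)
# and -1 (the enclosing circle, negatively oriented)
```

The two roots reflect the two circles tangent to a given mutually tangent triple, and the difference of the roots explains why integrality propagates through the packing.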
In contrast to the real or Pythagorean case, since the Super-Apollonian group is not free, the Cayley graph is not a tree, and there are multiple approximation sequences for a given z ∈ C; a natural choice is given by the normal form of Graham, Lagarias, Mallows, Wilks and Yan <cit.>. The Apollonian super-packing of Figure <ref> is the same fractal subdivision of the plane used in the Gaussian continued fraction algorithm of Schmidt. However, Schmidt's setup involves a different choice of group and generators to navigate the subdivision. We develop a dynamical system, based on the Super-Apollonian group, which acts on the plane to generate the continued fraction algorithm; define an invertible extension to an action on hyperbolic geodesics; and find an invariant measure. This process closely follows the development of Schmidt's system by Schmidt and Nakada, as detailed in an appendix. Having described these dynamical systems, we go on to analyse some statistics of the continued fraction expansion. We also include an analysis of the restriction of the Super-Apollonian dynamics to the real line, where we recover “reflective” real continued fractions. Finally, in part to emphasize the variability in possible systems, we briefly consider another dynamical system with the goal of organizing Lorentz quadruples by arithmetic complexity. In future work, we plan to provide a proof of ergodicity for our continued fraction algorithm, and to develop parallel theories for other imaginary quadratic fields such as Q(√(-2)). The paper proceeds as follows. Section <ref> contains background information on classical continued fractions, Lorentz and Descartes quadruples, the Apollonian and Super-Apollonian groups and their geometric interpretation as Möbius transformations of the complex plane. This section includes a discussion of Graham, Lagarias, Mallows, Wilks and Yan's swap normal form and the associated spanning tree of the Cayley graph of the Super-Apollonian group. Section <ref> defines a dynamical
system associated to this choice of spanning tree. First, a pair of dynamical systems on the complex plane are described, which compute a Gaussian continued fraction expansion of a complex number. We describe an invertible extension and invariant measure. In Section <ref>, the closely related dynamical systems of Lorentz and Descartes quadruples are described. These systems travel to the root along the swap normal form and invert normal form spanning trees. We relate these to the Reduction Algorithm for Descartes quadruples of Graham, Lagarias, Mallows, Wilks and Yan. In Section <ref>, we give some consequences of the conjectural ergodicity of the systems, and compare to experimental data. We plan to demonstrate ergodicity in a follow-up paper. In Section <ref>, we restrict this system to the real line, and recover a “reflective” variation on the classical continued fraction algorithm. In Section <ref>, we briefly consider an alternate dynamical system which respects the arithmetic complexity of Lorentz quadruples. Finally, in an appendix, we review the Gaussian continued fraction algorithm of Schmidt and Nakada, and its explicit connection to our work.

§.§ A note on figures

Figures and experimental data were created with Sage Mathematics Software <cit.> and Mathematica <cit.>.

§.§ Acknowledgements

We would like to thank Dan Romik for writing the paper that inspired this work, and for several helpful conversations. We also thank Jayadev Athreya for his helpful comments and suggestions.

§ QUADRUPLES AND THE SUPER-APOLLONIAN GROUP

§.§ Simple continued fractions

We begin with a brief overview of simple continued fractions on the real line, described from the perspective and in the language we plan to use for complex continued fractions in the remainder of this paper. The purpose is to provide an explicit analogy for much of what follows.
Over the integers, the Euclidean algorithm is the iteration of the division algorithm, which, for a, b ∈ Z, returns q, r ∈ Z so that a = bq + r, 0 ≤ r < |b|. Replacing the pair (a,b) with (b,r), we repeat. We can illustrate this as (a,b) q↦ (b,r), e.g.
(355,113) 3↦ (113,16) 7↦ (16,1) 16↦ (1,0).
The transformation taking (a,b) to (b,r) can be written in terms of matrices as
[ b; r ] = ( [ 0 1; 1 0 ]) ( [ 1 -q; 0 1 ]) [ a; b ] = ( [ 0 1; 1 -q ]) [ a; b ].
If a, b are coprime, the Euclidean algorithm terminates at (1,0). We can work backwards from (1,0) to recover (a,b), e.g.
[ 355; 113 ] = ( [ 3 1; 1 0 ]) ( [ 7 1; 1 0 ]) ( [ 16 1; 1 0 ]) [ 1; 0 ].
The Euclidean algorithm gives rise to a map on rationals
b/a q↦ r/b = a/b - q
(say with a, b coprime), and the algorithm produces expressions such as
355/113 = 3 + 1/(7 + 1/16).
We can extend this map to the interval (0,1), i.e. the Gauss map T(x) = {1/x} mapping x ∈ (0,1) to the fractional part of 1/x, which is defined piecewise by Möbius transformations
T(x) = (-qx + 1)/x, x ∈ [1/(q+1), 1/q).
The effect of this is an infinite continued fraction
x = [a_0; a_1, …, a_n, …] = a_0 + 1/(a_1 + 1/(a_2 + …)), a_0 = ⌊ x ⌋, a_n = ⌊ 1/T^n-1(x - a_0) ⌋,
and a sequence of convergents p_n/q_n to x:
( [ p_n p_n-1; q_n q_n-1 ]) = ( [ a_0 1; 1 0 ]) ( [ a_1 1; 1 0 ]) ⋯ ( [ a_n 1; 1 0 ]), lim_n→∞ p_n/q_n = x.
For irrational x ∈ (0,1), T is the left shift map on N^N, T([0; a_1, a_2, …]) = [0; a_2, a_3, …]. The Gauss map is ∞-to-1, but we can find an invertible extension T defined on pairs (y,x) ∈ (-∞,-1)×(0,1),
T(y,x) = (1/y - ⌊ 1/x ⌋, T(x)).
The pairs (y,x) can be identified with oriented geodesics in the hyperbolic plane, realized as the upper half plane above the real line. Specifically, the points y and x are the past and future points on the ideal boundary in the upper half-plane model. The extension T acts piecewise by isometries, Isom(H^2) ≅ PGL_2(R).
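The worked example above can be scripted directly; this short Python sketch (ours) runs the Euclidean algorithm to extract the continued fraction digits of 355/113 and reconstructs the pair from the matrix product:

```python
def cf(a, b):
    # quotients of the Euclidean algorithm = continued fraction digits of a/b
    digits = []
    while b:
        q, r = divmod(a, b)
        digits.append(q)
        a, b = b, r
    return digits

def convergent_matrix(digits):
    # running product of the matrices [[a_k, 1], [1, 0]];
    # the first column of the product is (p_n, q_n)
    M = [[1, 0], [0, 1]]
    for q in digits:
        M = [[M[0][0] * q + M[0][1], M[0][0]],
             [M[1][0] * q + M[1][1], M[1][0]]]
    return M
```

For 355/113 this recovers the digits [3, 7, 16] and the matrix product [[355, 22], [113, 7]], whose first column is the original pair.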
(The space of geodesics (-∞,-1)×(0,1) corresponds to Gauss' reduced indefinite binary quadratic forms.) The space of geodesics carries an isometry invariant measure dxdy/(x-y)^2 coming from hyperbolic area or Haar measure on SL_2(R), which restricts to a T-invariant measure on (-∞,-1)×(0,1) since T is defined piecewise by isometries. Pushing this measure forward to the second coordinate gives the T-invariant Gauss measure dμ(x) = dx/(1+x) on (0,1), which is ergodic:
dμ(x) = dx ∫_-∞^-1 dy/(x-y)^2.
For more on continued fractions, see <cit.> (basic properties), <cit.> (Diophantine approximation), <cit.> (ergodicity), <cit.> (connection to the geodesic flow and hyperbolic dynamics). For an excellent exposition of this view of the Gauss measure, see <cit.>.

§.§ Lorentz and Descartes quadruples

We consider the Lorentz (1,3)–form
Q_L(𝐱) = x_0^2 - x_1^2 - x_2^2 - x_3^2,
and the solutions to Q_L(𝐱) = 0 are called Lorentz quadruples. Consider the following matrices:
L_1 := [ 2 -1 -1 -1; 1 0 -1 -1; 1 -1 0 -1; 1 -1 -1 0 ], L_2 := [ 2 -1 1 1; 1 0 1 1; -1 1 0 -1; -1 1 -1 0 ], L_3 := [ 2 1 -1 1; -1 0 1 -1; 1 1 0 1; -1 -1 1 0 ], L_4 := [ 2 1 1 -1; -1 0 -1 1; -1 -1 0 1; 1 1 1 0 ],
L_1^⊥ := [ 2 1 1 1; -1 0 -1 -1; -1 -1 0 -1; -1 -1 -1 0 ], L_2^⊥ := [ 2 1 -1 -1; -1 0 1 1; 1 1 0 -1; 1 1 -1 0 ], L_3^⊥ := [ 2 -1 1 -1; 1 0 1 -1; -1 1 0 1; 1 -1 1 0 ], L_4^⊥ := [ 2 -1 -1 1; 1 0 -1 1; 1 -1 0 1; -1 1 1 0 ].
We create these matrices by conjugating L_1 by all eight possible diagonal matrices with diagonals (1, ±1, ±1, ±1). Each of the eight resulting matrices is an involution, and preserves Q_L in the sense that
L_i^T G_L L_i = G_L, (L_i^⊥)^T G_L L_i^⊥ = G_L
for the Gram matrix
G_L = [ 1 0 0 0; 0 -1 0 0; 0 0 -1 0; 0 0 0 -1 ]
of Q_L. We create a Cayley graph, 𝒞_L, whose vertices are the elements of the group generated by the L_i, L_i^⊥ and where two vertices M_1 and M_2 are joined by an edge exactly when M_1 M_2^-1 is one of the L_i, L_i^⊥. If one takes a quotient of this Cayley graph by considering the action of the L_i, L_i^⊥ on Lorentz quadruples, one obtains Figure <ref>.
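The construction of the eight generators can be checked mechanically. The following Python sketch (our own verification) conjugates L_1 by the sign-diagonal matrices and confirms that each result is an involution preserving Q_L:

```python
from itertools import product

L1 = [[2, -1, -1, -1], [1, 0, -1, -1], [1, -1, 0, -1], [1, -1, -1, 0]]
GL = [[1, 0, 0, 0], [0, -1, 0, 0], [0, 0, -1, 0], [0, 0, 0, -1]]
IDENT = [[1 if i == j else 0 for j in range(4)] for i in range(4)]

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def transpose(A):
    return [list(r) for r in zip(*A)]

def conj_by_signs(signs):
    # D L1 D for D = diag(1, e1, e2, e3); entrywise (D L1 D)_ij = d_i d_j (L1)_ij
    d = (1,) + signs
    return [[d[i] * d[j] * L1[i][j] for j in range(4)] for i in range(4)]

gens = []
for signs in product((1, -1), repeat=3):
    M = conj_by_signs(signs)
    if M not in gens:
        gens.append(M)
```

The eight conjugates are pairwise distinct, square to the identity, and satisfy M^T G_L M = G_L, as the text asserts.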
The form Q_L is equivalent to the Descartes form
Q_D(𝐲) := (y_0 + y_1 + y_2 + y_3)^2 - 2(y_0^2 + y_1^2 + y_2^2 + y_3^2)
by the relationship Q_D(J𝐱) = 2Q_L(𝐱), where
J = (1/2) [ 1 1 1 1; 1 1 -1 -1; 1 -1 1 -1; 1 -1 -1 1 ].
Note that J^-1 = J. The Descartes form has Gram matrix
G_D = 2 J^T G_L J = [ -1 1 1 1; 1 -1 1 1; 1 1 -1 1; 1 1 1 -1 ]
and is central to the study of Apollonian circle packings. The solutions to the Descartes form Q_D(𝐲) = 0 are called Descartes quadruples. If one considers the matrices above under the change of variables
L_i ↦ J L_i J, L_i^⊥ ↦ J L_i^⊥ J,
one obtains the Super-Apollonian group 𝒜^S of <cit.>, whose generators are (in order corresponding to the L_i above):
S_1 := [ -1 2 2 2; 0 1 0 0; 0 0 1 0; 0 0 0 1 ], S_2 := [ 1 0 0 0; 2 -1 2 2; 0 0 1 0; 0 0 0 1 ], S_3 := [ 1 0 0 0; 0 1 0 0; 2 2 -1 2; 0 0 0 1 ], S_4 := [ 1 0 0 0; 0 1 0 0; 0 0 1 0; 2 2 2 -1 ],
S_1^⊥ := [ -1 0 0 0; 2 1 0 0; 2 0 1 0; 2 0 0 1 ], S_2^⊥ := [ 1 2 0 0; 0 -1 0 0; 0 2 1 0; 0 2 0 1 ], S_3^⊥ := [ 1 0 2 0; 0 1 2 0; 0 0 -1 0; 0 0 2 1 ], S_4^⊥ := [ 1 0 0 2; 0 1 0 2; 0 0 1 2; 0 0 0 -1 ].
The group 𝒜^S preserves Q_D in the sense that M^T G_D M = G_D for all M ∈ 𝒜^S. The corresponding Cayley graph 𝒞_D is isomorphic to 𝒞_L. One presentation of the Super-Apollonian group is <cit.>
𝒜^S = ⟨ S_1, S_2, S_3, S_4, S^⊥_1, S^⊥_2, S^⊥_3, S^⊥_4 : S_j^2 = (S^⊥_j)^2 = 1, S_j S^⊥_k = S^⊥_k S_j (j ≠ k) ⟩.
It is a right-angled hyperbolic reflection group generated by the involutions S_i, S_i^⊥. The group 𝒜^S is of index 48 in the full orthogonal group of Q_D <cit.>.
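The change of variables between the Lorentz and Descartes pictures can also be verified with exact arithmetic. The sketch below (ours) checks S_1 = J L_1 J, the invariance of G_D under S_1, and the normalization relating the two forms (with the stated G_D = 2 J^T G_L J, one finds Q_D(J𝐱) = 2 Q_L(𝐱)):

```python
from fractions import Fraction

L1 = [[2, -1, -1, -1], [1, 0, -1, -1], [1, -1, 0, -1], [1, -1, -1, 0]]
GL = [[1, 0, 0, 0], [0, -1, 0, 0], [0, 0, -1, 0], [0, 0, 0, -1]]
GD = [[-1, 1, 1, 1], [1, -1, 1, 1], [1, 1, -1, 1], [1, 1, 1, -1]]
S1 = [[-1, 2, 2, 2], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
h = Fraction(1, 2)
J = [[h, h, h, h], [h, h, -h, -h], [h, -h, h, -h], [h, -h, -h, h]]

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def transpose(A):
    return [list(r) for r in zip(*A)]

def quad(G, v):
    # value of the quadratic form with Gram matrix G at v
    return sum(v[i] * G[i][j] * v[j] for i in range(4) for j in range(4))

def apply(M, v):
    return tuple(sum(M[i][j] * v[j] for j in range(4)) for i in range(4))

JL1J = mul(J, mul(L1, J))
```

Here (3,1,2,2) is a Lorentz quadruple used as a null test vector, and (1,2,3,4) a generic one.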
§.§ Descartes quadruples, normal form and a spanning tree

The motivation for studying 𝒜^S came from the study of Apollonian circle packings; in this section we describe this geometry. We consider circles in Ĉ = C ∪ {∞}, identified with P^1(C), having projective equation
a X X̅ - b̅ X Y̅ - b X̅ Y + c Y Y̅ = 0, [X:Y] ∈ P^1(C), ac - bb̅ = -1.
Such a circle is said to have curvature a (the inverse of the radius in [X:1]), co-curvature c (the curvature of the circle in [1:Y]), and curvature-center b (center times curvature). We denote it by a four-dimensional real vector (c, a, b_1, b_2), where b = b_1 + b_2 i. These are known as ACC coordinates (augmented curvature-center coordinates) <cit.>. When a = 0, the zero set in [X:1] is a line and b becomes a unit normal vector. Four circles C_i, taken as the row vectors of a 4×4 matrix C in ACC coordinates, are said to be in Descartes configuration if
C^T G_D C = ( [ 0 -4 0 0; -4 0 0 0; 0 0 2 0; 0 0 0 2 ]).
It is a theorem of Graham, Lagarias, Mallows, Wilks and Yan that circles are in Descartes configuration according to this algebraic condition if and only if, as circles, they are all mutually tangent with disjoint interiors, where the sign of the curvature indicates orientation (hence interior) <cit.> and <cit.>. This is nicely explained by interpreting the Descartes form as a bilinear pairing on circles which measures angle of intersection or hyperbolic distance; see for example <cit.>. Since it preserves Q_D, the Super-Apollonian group 𝒜^S acts on Descartes quadruples, in the form of such 4×4 matrices, from the left. There is a nice interpretation of this action geometrically <cit.>.
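To make the ACC coordinates concrete, here is a small sketch (ours, with an example configuration we choose: the lines Im z = 0 and Im z = 1 together with the circles of radius 1/2 centered at i/2 and 1 + i/2). It computes (c, a, b_1, b_2) for the circles, checks the determinant condition ac - |b|^2 = -1, and verifies Descartes' relation on the curvatures:

```python
import math

def acc_circle(x0, y0, r):
    # ACC vector (c, a, b1, b2) of the circle with center x0 + i*y0 and radius r
    a = 1.0 / r
    b1, b2 = a * x0, a * y0
    c = (b1 * b1 + b2 * b2 - 1.0) / a
    return (c, a, b1, b2)

def descartes_Q(k):
    a, b, c, d = k
    return 2 * (a * a + b * b + c * c + d * d) - (a + b + c + d) ** 2

# For a line (a = 0), b is the unit normal and c is twice the signed distance
# from the origin; these two lines are "tangent" at infinity.
line0 = (0.0, 0.0, 0.0, -1.0)   # Im z = 0, normal -i
line1 = (2.0, 0.0, 0.0, 1.0)    # Im z = 1, normal i
circ0 = acc_circle(0.0, 0.5, 0.5)
circ1 = acc_circle(1.0, 0.5, 0.5)
```

The four circles are mutually tangent, and their curvature vector (0, 0, 2, 2) is a zero of the Descartes form.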
For such a configuration of circles, there is a dual Descartes quadruple consisting of circles passing orthogonally through the first quadruple, sharing the same set of tangency points (Figure <ref>). Then S_i acts as what we call a “swap", replacing C_i with its inversion in the dual circle orthogonal to the other three, and S_i^⊥ fixes C_i while replacing the other three circles with their inversions in C_i (we refer to the action of S_i^⊥ as simply an “inversion”). See Figure <ref>. There are two “natural” ways of uniquely writing elements of 𝒜^S given the commutation relations in the group (i.e. two natural spanning trees for the Cayley graph of 𝒜^S with respect to the given generators), and we will be working with both of them. These were first defined in <cit.>. A word W = M_n M_n-1 ⋯ M_1 in the Super-Apollonian generators is in swap normal form if M_i ≠ M_i+1 and if whenever M_i = S_j and M_i+1 = S_k^⊥, then j = k; i.e. the “swaps” are pushed as far left as possible (equivalently, the “inversions” are as far right as possible). A word W = M_n M_n-1 ⋯ M_1 in the Super-Apollonian generators is in invert normal form if M_i ≠ M_i+1 and if whenever M_i = S_j^⊥ and M_i+1 = S_k, then j = k; i.e. the “inversions” are pushed as far left as possible (equivalently, the “swaps” are as far right as possible). The swap (invert) normal form of an element of the Super-Apollonian group is unique <cit.>. In the Cayley graph 𝒞_Q, travelling a path of length n to the origin, one can read off labels as M_n, ⋯, M_1, where M_n is the distal edge and M_1 the proximal (to the origin). This gives a word M_n ⋯ M_1 associated to the path. Define the subgraph 𝒞_S of 𝒞_Q to be the union of all paths to the origin labelled by swap normal form words. This is called the swap down tree.
The reason for the terminology “swap down tree” is that, as one travels toward the origin in 𝒞_Q along the swap down tree, if one has a choice of S_j^⊥ followed by S_i or S_i followed by S_j^⊥ (both two-move sequences having the same endpoint closer to the origin), one must choose the latter, which is to say, one must swap before inverting. The following proposition asserts that, besides being a spanning tree, it is minimal in a certain way. The swap down tree 𝒞_S is a spanning tree of 𝒞_Q, and, from any vertex, the path to the origin along the tree is of minimal length among paths to the origin in 𝒞_Q. Every word can be put into a unique normal form, without increasing its length, by cancelling any double letters and moving each S_i^⊥ as far to the right as possible using the commutativity relations <cit.>. Therefore there is a unique path to the origin in 𝒞_S from any vertex of 𝒞_Q. We may conclude that 𝒞_S is connected, is a tree, and spans 𝒞_Q. Minimality follows from the observation that changing to swap normal form never increases length. There are exactly analogous statements for the corresponding invert down tree.

§.§ Geometric realization of the Super-Apollonian group

The group PSL_2(Z[i]) acts on the extended complex plane Ĉ = C ∪ {∞} by the Möbius action
[ α γ; β δ ] · z = (α z + γ)/(β z + δ).
This action can be extended to include complex conjugation,
𝔠 · z = z̅,
giving rise to the group B[-1] = PSL_2(Z[i]) ⋊ ⟨𝔠⟩ of Möbius transformations, the extended Bianchi group, a maximal discrete subgroup of PSL_2(C) ⋊ ⟨𝔠⟩ ≅ Isom(H^3). The group B[-1] acts on the collection of circles of Ĉ (recall that lines are circles through ∞). In what follows, we will identify Ĉ with P^1(C) when convenient. The orbit of the circle R̂ = R ∪ {∞} under PSL_2(Z[i]) is a dense collection of nested circles called a Schmidt arrangement. An image is shown in Figure <ref>. For more on Schmidt arrangements, see <cit.>.
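The normalization procedure in the proof can be phrased as a terminating rewriting system. Below is a sketch (our own; words are encoded left to right as M_n ⋯ M_1, with a letter ('s', j) for a swap S_j and ('p', j) for an inversion S_j^⊥). It cancels doubled letters and commutes each inversion rightward past swaps with a different index, and by construction never increases word length:

```python
def normalize(word):
    # rewrite to swap normal form: cancel squares, move inversions right
    w = list(word)
    changed = True
    while changed:
        changed = False
        i = 0
        while i < len(w) - 1:
            if w[i] == w[i + 1]:
                del w[i:i + 2]          # X^2 = 1
                changed = True
                i = max(i - 1, 0)
            elif w[i][0] == 'p' and w[i + 1][0] == 's' and w[i][1] != w[i + 1][1]:
                w[i], w[i + 1] = w[i + 1], w[i]   # S_j S_k^perp = S_k^perp S_j
                changed = True
                i += 1
            else:
                i += 1
    return w

def is_swap_normal_word(w):
    return all(w[i] != w[i + 1] and
               not (w[i][0] == 'p' and w[i + 1][0] == 's' and w[i][1] != w[i + 1][1])
               for i in range(len(w) - 1))
```

Each pass either shortens the word or strictly moves an inversion to the right, so the loop terminates in swap normal form.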
To describe our dynamical system, we choose a particular Descartes quadruple R_B and its dual R_A, whose circles are the rows of
R_B = ( [ 0 0 0 -1; 2 0 0 1; 0 2 0 1; 2 2 2 1 ]), R_A = R_B^⊥ = ( [ 2 2 1 2; 0 2 1 0; 2 0 1 0; 0 0 -1 0 ])
in ACC coordinates (see Figure <ref>), respectively. We call these the dual base quadruples. The terminology refers to the fact that R_A consists of the unique Descartes quadruple orthogonal to R_B and having the same intersection points (and vice versa). In particular, the “swaps” of R_A are the “inversions” of R_B and vice versa. Other choices of base quadruple would of course be possible, but the choice here coincides with a natural subset of the Schmidt arrangement and has particularly simple tangency points. We embed the Super-Apollonian group into PSL_2(C) ⋊ ⟨𝔠⟩ using a form of the exceptional isomorphism PSL_2(C) ⋊ ⟨𝔠⟩ ≅ O_3,1^+(R). To do so, we map each element of 𝒜^S to the Möbius transformation which acts the same way on R_B. The Möbius transformations corresponding to the Super-Apollonian generators are
s_1 = ((1+2i)z̅ - 2)/(2z̅ - 1 + 2i), s_2 = z̅/(2z̅ - 1), s_3 = -z̅ + 2, s_4 = -z̅,
s_1^⊥ = z̅, s_2^⊥ = z̅ + 2i, s_3^⊥ = z̅/(-2iz̅ + 1), s_4^⊥ = ((1-2i)z̅ + 2i)/(-2iz̅ + 1 + 2i)
(the s_i are inversions in the circles of R_A and the s_i^⊥ are inversions in the circles of R_B). Let Γ denote the Möbius group generated by these generators; it is isomorphic to 𝒜^S. Considering the Poincaré extension of the Möbius action to the upper-half-space model of hyperbolic space, one sees that Γ is the finite-covolume Kleinian group generated by reflections in the sides of a right-angled ideal octahedron whose faces lie on the geodesic planes defined by the circles of R_B and R_A. The orbit of the Super-Apollonian group on a particular Descartes quadruple is known as an Apollonian super-packing <cit.>. For this choice of base quadruple, as a collection of circles, the corresponding Apollonian super-packing coincides with the Schmidt arrangement of Q(i) <cit.>.
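The eight Möbius maps and the relations of the presentation can be sanity-checked in floating point. The sketch below (ours) implements each generator as z ↦ (αz̅ + γ)/(βz̅ + δ) and tests the involution and commutation relations at a few sample points chosen away from the poles:

```python
def moebius(mat):
    # anti-holomorphic Moebius map z -> (a*conj(z) + b)/(c*conj(z) + d)
    a, b, c, d = mat
    return lambda z: (a * z.conjugate() + b) / (c * z.conjugate() + d)

s1 = moebius((1 + 2j, -2, 2, -1 + 2j))
s2 = moebius((1, 0, 2, -1))
s3 = moebius((-1, 2, 0, 1))
s4 = moebius((-1, 0, 0, 1))
s1p = moebius((1, 0, 0, 1))          # plain conjugation
s2p = moebius((1, 2j, 0, 1))
s3p = moebius((1, 0, -2j, 1))
s4p = moebius((1 - 2j, 2j, -2j, 1 + 2j))

GENS = [s1, s2, s3, s4, s1p, s2p, s3p, s4p]
PTS = [0.3 + 0.4j, -1.2 + 0.7j, 0.9 + 2.3j]
```

Each generator squares to the identity, and s_j commutes with s_k^⊥ for j ≠ k, as the presentation of 𝒜^S requires.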
This orbit gives a sequence of partitions P_n of the plane into triangles and circles, each refining the last (see Figure <ref>). The regions of the partition P_n are indexed by the swap normal form words of length n in the Super-Apollonian generators; see Figure <ref>. A word W' ∈ P_n+1 refines W ∈ P_n if and only if W is an initial segment of W' (right initial, speaking about 𝒜^S). This will allow us to coordinatize the plane by infinite words in swap normal form. The coordinates of a point will be produced by a dynamical system described below. If W = M_n M_n-1 ⋯ M_1, M_i ∈ {S_j, S_j^⊥}, is a word in the Super-Apollonian generators and Q = W R_B, then in terms of the Möbius transformations the circles of Q are m_1 m_2 … m_n c_i, where the c_i are the circles of R_B. The swap normal form for 𝒜^S passes to words in the Möbius generators, reversing order as just noted. Compare the following definition to Definition <ref>, noting the order reversal. A word w = m_1 m_2 … m_n in the Super-Apollonian Möbius generators is in swap normal form if m_i ≠ m_i+1 and if whenever m_i = s_j and m_i+1 = s_k^⊥, then j = k; i.e. the “swaps” are pushed as far right as possible (equivalently, the “inversions” as far left as possible). A word w = m_1 m_2 … m_n is in invert normal form if m_i ≠ m_i+1 and if whenever m_i = s_j^⊥ and m_i+1 = s_k, then j = k; i.e.
the “inversions” are as far right as possible. Finally, we note that s_i^⊥ = d s_i d, where
d = (z̅ - 1 + i)/((1-i)z̅ + i) = d^-1
is the isometry of the octahedron switching opposite faces, the Möbius version of the “duality operator”
D = (1/2) ( [ -1 1 1 1; 1 -1 1 1; 1 1 -1 1; 1 1 1 -1 ])
from <cit.>. This defines an involution on Super-Apollonian words in swap normal form, namely taking the transpose (or reversing order and conjugating by D):
M = M_n M_n-1 … M_1, M^⊥ = M_1^⊥ M_2^⊥ … M_n^⊥.
On the level of Möbius transformations, m^⊥ = d m^-1 d. See <cit.>, <cit.> for more information.

§ A PAIR OF DYNAMICAL SYSTEMS

§.§ Dynamics on P^1(C)

We now define a pair of dynamical systems on P^1(C) associated to the base quadruples R_B and R_A. Let B_i, B_i' be the open circular and closed triangular regions of the plane coming from the base quadruple R_B (A_i and A_i' are defined similarly; see Figure <ref>), and define
T_B(z) = { s_i z if z ∈ B_i'; s_i^⊥ z if z ∈ B_i },  T_A(w) = { s_i w if w ∈ A_i; s_i^⊥ w if w ∈ A_i' }.
In words, if the point z is in one of the four closed triangular regions, we swap, and if z is in one of the open circular regions, we invert in that circle. Each of T_A and T_B has six fixed points, the points of tangency {0, 1, ∞, i, i+1, 1/(1-i)}. Under the dynamical systems T_A and T_B, every Gaussian rational z ∈ ℚ(i) reaches one of the fixed points in finite time. The fixed point reached is determined by the “parity” of the numerator and denominator of z = p/q, i.e. one of the six equivalence classes under the equivalence relation
p/q ∼ r/s ⟺ ps ≡ qr (mod 2).
Recall that B[-1] = PSL_2(Z[i]) ⋊ ⟨𝔠⟩. The group Γ is the kernel of the surjective map
PSL_2(Z[i]) ⋊ ⟨𝔠⟩ → PSL_2(Z[i]/(2)),
since Γ is in the kernel and both are of index 48 in B[-1] (we have [B[-1]:Γ] = 48 by comparing fundamental domains, and |PSL_2(Z[i]/(2))| = 48 by direct computation). Hence Γ preserves parity, e.g.
s_1(p/q) = ((1+2i)p̅ - 2q̅)/(2p̅ + (-1+2i)q̅) ≡ p/q (mod 2).
Termination in finite time follows from the following version(s) of the Euclidean algorithm in Z[i]. “Homogenizing” T_A and T_B gives dynamical systems on pairs (0,0) ≠ (p,q) ∈ Z[i] × Z[i] that terminate when p/q ∈ {0, 1, ∞, i, 1+i, 1/(1-i)}, i.e. they act on pairs via complex conjugation and the matrices implied in the definitions of the s_i, s_i^⊥. For instance, T_B(p,q) := (p̅, 2p̅ - q̅) for p/q ∈ B_2', where T_B(p/q) = s_2(p/q) = (p/q)/(2(p/q) - 1). We'll consider the case of T_B, noting that the proof for T_A is nearly identical:
(p,q) ↦
s_1(p,q) = ((1+2i)p̅ - 2q̅, 2p̅ + (2i-1)q̅) for p/q ∈ B_1' ∖ {i, 1+i, 1/(1-i)};
s_2(p,q) = (p̅, 2p̅ - q̅) for p/q ∈ B_2' ∖ {0, 1, 1/(1-i)};
s_3(p,q) = (2q̅ - p̅, q̅) for p/q ∈ B_3' ∖ {1, 1+i, ∞};
s_4(p,q) = (-p̅, q̅) for p/q ∈ B_4' ∖ {0, i, ∞};
s_1^⊥(p,q) = (p̅, q̅) for p/q ∈ B_1 ∖ {0, 1, ∞};
s_2^⊥(p,q) = (p̅ + 2iq̅, q̅) for p/q ∈ B_2 ∖ {i, i+1, ∞};
s_3^⊥(p,q) = (p̅, q̅ - 2ip̅) for p/q ∈ B_3 ∖ {0, i, 1/(1-i)};
s_4^⊥(p,q) = ((1-2i)p̅ + 2iq̅, (2i+1)q̅ - 2ip̅) for p/q ∈ B_4 ∖ {1, 1+i, 1/(1-i)}.
The inequalities defining the regions B_i, B_i' show that |q| is reduced whenever s_1, s_2, s_3^⊥, or s_4^⊥ is applied. For example, when applying s_1, the fact that p/q is in the triangle B_1' ∖ {i, 1+i, 1/(1-i)} (or the circle A_1) shows that |q| is reduced, as follows:
p/q ∈ B_1' ∖ {i, 1+i, 1/(1-i)} ⟹ |p/q - (1+2i)/2|^2 < 1/4 ⟹ |2p̅ + (2i-1)q̅| < |q|.
Note that the inequalities above define A_1, but since B_1' is contained in A_1 they hold true for B_1' as well. Similarly, application of s_3 and s_2^⊥ both reduce |p|. Applying s_4 maps B_4' onto B_1' ∪ B_2' ∪ B_4 ∪ B_3' (from which one of |p|, |q| will be reduced as just discussed). Finally, s_1^⊥ maps B_1 onto the union of the other seven regions. Hence the algorithm terminates.
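The homogenized map is straightforward to implement exactly. The following sketch is our own, and it assumes the geometry of the base quadruples as we read it from the ACC rows: the circles of R_B are the lines Im z = 0 and Im z = 1 and the circles of radius 1/2 centered at i/2 and 1 + i/2, while triangle membership is tested through the dual circles of R_A (Re z = 0, Re z = 1, and the circles of radius 1/2 centered at 1/2 and 1/2 + i). Gaussian integers are encoded as integer pairs (re, im):

```python
def cadd(u, v): return (u[0] + v[0], u[1] + v[1])
def csub(u, v): return (u[0] - v[0], u[1] - v[1])
def cmul(u, v): return (u[0] * v[0] - u[1] * v[1], u[0] * v[1] + u[1] * v[0])
def conj(u): return (u[0], -u[1])

# the six fixed points as fractions a/b of Gaussian integers
FIXED = [((0, 0), (1, 0)), ((1, 0), (1, 0)), ((1, 0), (0, 0)),
         ((0, 1), (1, 0)), ((1, 1), (1, 0)), ((1, 1), (2, 0))]

def is_fixed(p, q):
    return any(cmul(p, b) == cmul(q, a) for a, b in FIXED)

def step(p, q):
    z = complex(*p) / complex(*q)
    pc, qc = conj(p), conj(q)
    if z.imag < 0:                      # B_1 (below Im z = 0): s_1^perp
        return 's1p', pc, qc
    if z.imag > 1:                      # B_2 (above Im z = 1): s_2^perp
        return 's2p', cadd(pc, cmul((0, 2), qc)), qc
    if abs(z - 0.5j) < 0.5:             # B_3: s_3^perp
        return 's3p', pc, csub(qc, cmul((0, 2), pc))
    if abs(z - (1 + 0.5j)) < 0.5:       # B_4: s_4^perp
        return ('s4p', cadd(cmul((1, -2), pc), cmul((0, 2), qc)),
                csub(cmul((1, 2), qc), cmul((0, 2), pc)))
    if z.real <= 0:                     # triangle B_4': s_4
        return 's4', (-pc[0], -pc[1]), qc
    if z.real >= 1:                     # triangle B_3': s_3
        return 's3', csub(cmul((2, 0), qc), pc), qc
    if abs(z - (0.5 + 1j)) <= 0.5:      # triangle B_1' (inside A_1): s_1
        return ('s1', csub(cmul((1, 2), pc), cmul((2, 0), qc)),
                cadd(cmul((2, 0), pc), cmul((-1, 2), qc)))
    return 's2', pc, csub(cmul((2, 0), pc), qc)   # triangle B_2'

def reduce_pair(p, q, max_steps=500):
    word = []
    while not is_fixed(p, q):
        name, p, q = step(p, q)
        word.append(name)
        if len(word) > max_steps:
            raise RuntimeError("did not terminate")
    return word, p, q
```

For example, (31 + i)/13 reduces in four steps to the fixed point 1 + i, and the parity class is preserved along the way.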
Iteration of the map T_B or T_A with input z produces a word 𝐳 = m_1 ⋯ m_n ⋯ in the Möbius generators s_1, s_2, s_3, s_4, s_1^⊥, s_2^⊥, s_3^⊥, s_4^⊥, where m_n is defined by T_B^n z = m_n(T_B^n-1 z). We take the word to be finite for z ∈ Q(i), ending when a fixed point is reached. An example of this process is shown in Figure <ref>. The collection 𝒜^S(n) of length n Super-Apollonian words in swap normal form partitions the plane into a collection of 9·5^(n-1) - 1 triangles and circles, which we call Farey circles and triangles following Schmidt <cit.>. Specifically, we associate to each word in 𝒜^S(n) an open circular or closed triangular region, the notation being
F_B(𝐦) = m_1 ⋯ m_n-1 B_i (circular), m_1 ⋯ m_n-1 B_i' (triangular),
for 𝐦 = m_1 ⋯ m_n with m_n = s_i^⊥ (circular) or m_n = s_i (triangular). This definition is set up so that the words of length one correspond to the eight regions of the base quadruple. A word 𝐳 = m_1 m_2 ⋯ produced by iteration of T_B (respectively T_A) on z ∈ P^1(C) is in swap (respectively invert) normal form. Furthermore:
* If z is rational, then z = 𝐳(b) for b ∈ {0, 1, ∞, i, 1+i, 1/(1-i)} matching z in parity as described in Theorem <ref>.
* If z is not rational, then 𝐳 is an infinite word with the property that {z} = ⋂_n≥1 F(m_1 ⋯ m_n).
This gives a bijection z ↔ 𝐳, under which T_B (respectively T_A) can be considered to act on words, and this action is via the left shift, T_B(m_1 m_2 ⋯) = m_2 m_3 ⋯. That 𝐳 is in swap normal form is clear; the only circular region in s_i(B_i') is B_i. Two Farey sets are either disjoint or one is contained in the other: if 𝐦 is a (left) initial segment of 𝐧, which we denote 𝐦 ≤ 𝐧, then F_B(𝐦) ⊇ F_B(𝐧). For z ∈ C ∖ Q(i), the infinite swap normal form word 𝐳 = m_1 m_2 ⋯ produced by iterating T_B determines z, since {z} = ⋂_n F(m_1 … m_n). For rational points, to determine z we need both the finite word and the parity of the rational: then z = 𝐳(b), where b is the element of {0, 1, ∞, i, 1+i, 1/(1-i)} of the specified parity.
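The region count can be checked against a brute-force enumeration of swap normal form words; the sketch below (ours) encodes a word left to right as m_1 ⋯ m_n, with the adjacency rule that no letter repeats and a swap s_j is never immediately followed by an inversion s_k^⊥ with k ≠ j:

```python
from itertools import product

LETTERS = [('s', i) for i in range(1, 5)] + [('p', i) for i in range(1, 5)]

def admissible(word):
    # swap normal form condition on adjacent letters
    for x, y in zip(word, word[1:]):
        if x == y or (x[0] == 's' and y[0] == 'p' and x[1] != y[1]):
            return False
    return True

counts = [sum(admissible(w) for w in product(LETTERS, repeat=n))
          for n in range(1, 5)]
# counts == [8, 44, 224, 1124], matching 9*5^(n-1) - 1
```

A swap has 4 admissible successors and an inversion has 7, which gives the linear recurrence behind the closed form 9·5^(n-1) - 1.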
From now on, we consistently use the variables z, 𝐳 for the B coordinate system, and w, 𝐰 for the A coordinate system, since we will be using both codings simultaneously.

§.§ Covering the boundary of the ideal octahedron and the first approximation constant for Q(i)

The purpose of this section and the next is to relate the Super-Apollonian continued fraction algorithm to classical statements of Diophantine approximation. In this section, we give the first value of the Lagrange spectrum for complex approximation by Gaussian rationals. With this as a point for comparison, in the second section we describe the goodness of the approximations obtained by the algorithm. The “good” rational approximations to an irrational z ∈ C,
|z - p/q| ≤ C/|q|^2,
are determined by the collection of horoballs
B_C(p/q) = {(z,t) ∈ H^3 : |z - p/q|^2 + (t - C/|q|^2)^2 ≤ C^2/|q|^4},
B_C(∞) = {(z,t) ∈ H^3 : t ≥ 1/(2C)},
through which the geodesic ∞z passes (or through which any geodesic wz eventually passes). Over Q(i), the smallest value of C with the property that every irrational z has infinitely many rational approximations satisfying the above inequality was determined by Ford in <cit.>. Here we give a short proof of this fact using the geometry of the ideal octahedron. Every z ∈ C ∖ Q(i) has infinitely many rational approximations p/q ∈ Q(i) such that
|z - p/q| ≤ C/|q|^2, C = 1/√(3) ≃ 0.577350269…
The constant 1/√(3) is the smallest possible, as witnessed by z = (1+√(-3))/2.
The value of C for which the horoballs with parameter C based at the ideal vertices {0, 1, ∞, i, 1+i, 1/(1-i)} cover the boundary of the fundamental octahedron is easily found to be 1/√(3) (one need only cover the face with vertices {0, 1, ∞}; see Figure <ref>). Hence, as we follow the geodesic ∞z through the tessellation by octahedra, at least one of the six vertices of each octahedron satisfies the above inequality with C = 1/√(3), one or two as it enters and one or two as it exits (some of which may coincide). This gives the smallest value of C for which the inequality above has infinitely many solutions for all irrational z, noting that the geodesic from e^-πi/3 to e^πi/3 passes orthogonally through the “centers” of the opposite faces of the octahedra through which it passes. The sequence of octahedra we consider in our continued fraction algorithm is not necessarily along the geodesic path, but we do capture all rationals with |z - p/q| < C/|q|^2 with C = 1/(1 + 1/√(2)), as detailed in the next section.

§.§ Quality of rational approximation

To any complex number we associate six sequences of Gaussian rational approximations by following the inverse orbit of the six points of tangency of our base quadruples R_A, R_B. Namely, if 𝐳 = ∏_i=1^∞ z_i and 𝐰 = ∏_i=1^∞ w_i are the words of the point z = w in the two codings, then the convergents p^A_n,α/q^A_n,α, p^B_n,α/q^B_n,α are given by
p^A_n,α/q^A_n,α = (∏_i=1^n w_i)(α), p^B_n,α/q^B_n,α = (∏_i=1^n z_i)(α), α ∈ {0, 1, ∞, i, i+1, 1/(1-i)},
with the property that
lim_n→∞ p^A_n,α/q^A_n,α = w, lim_n→∞ p^B_n,α/q^B_n,α = z
for all α and w ∈ C ∖ Q(i), z ∈ C ∖ Q(i). The following theorem is equivalent to a statement about approximation by Schmidt's continued fractions, given as Theorem 2.5 in <cit.>. In particular, the approximations given by Schmidt's algorithm and the Super-Apollonian algorithm coincide. In <cit.>, it is stated without proof; here we provide a proof.
If p/q is such that |z_0-p/q|<C/|q|^2,C=√(2)/(1+√(2))≃ 0.585786437…,then p/q is a convergent to z_0 (with respect to both T_A and T_B). Moreover, the constant C is the largest possible.

Note that the Apollonian super-packings associated to the root quadruples R_A, R_B, are invariant under the action of _2(Z[i])⋊⟨c⟩. Consider the quadruple where p/q first appears as a convergent to z_0, and let γ(z)=-Qz+P/-qz+p∈_2(Z[i]) take this quadruple to the base quadruple (say R_B) with p/q mapping to infinity and infinity mapping to Q/q. For any value of C, the disk of radius C/|q|^2 centered at p/q gets mapped by γ to the exterior of the disk of radius 1/C centered at Q/q:w =-Qz+P/-qz+p⇒ |z-p/q|=1/|w-Q/q||q|^2, C/|q|^2 ≥|z-p/q|=1/|w-Q/q||q|^2⇒ |w-Q/q|≥1/C.Consider the ways in which p/q can first appear as a convergent to z_0 in the sequence of partitions of the plane.

* We might invert into a circle containing z_0. In particular, then, p/q is in the interior of the circle of inversion (since it is its first appearance as a convergent). In this case, by the discussion in Section <ref>, all z inside the circle also include this inversion in their expansion. Therefore, all z inside the circle will have p/q as a convergent. Our goal is to show that the circle of radius 1/|q|^2 around p/q is contained in the circle of inversion. Under γ above (perhaps after applying some binary tetrahedral symmetry of the base quadruple), the circles A, B, get mapped to A', B' as in the figures below, with Q/q=γ(∞) lying in the triangle inside B' as shown. The exterior of a disk of radius one centered at Q/q does not meet the interior of B', hence, applying γ^-1, the disk of radius 1/|q|^2 centered at p/q does not meet B. Therefore p/q is a convergent to any z with |z-p/q|<1/|q|^2.

* We might swap into a triangle containing z_0 producing p/q as a point of tangency on an edge of this triangle. In the image, z_0 is in the quadrangle with sides formed by A,B,C,D; this is the
union of two triangles. The dotted circles in the figures indicate the two possible swaps associated to the initial creation of p/q as a convergent. In this case, any z in the indicated quadrangle will have p/q as a convergent. We aim to show that a circle of radius C/|q|^2 around p/q is contained in this region. Under γ above (perhaps after applying some binary tetrahedral symmetry of the base quadruple), the circles A, B, C, D, and E are mapped to A', B', C', D', and E', the three circles tangent to p/q are mapped to the lines in the second picture, with Q/q=γ(∞) lying in the intersection of the disks defined by B' and E'. The exterior of any circle of radius 1/C=1+1/√(2) centered inside E' avoids the interiors of A', B', C', and D'. Applying γ^-1 shows that the disk of radius C/|q|^2 around p/q does not meet A, B, C, or D, so that p/q is a convergent to any z with |z-p/q|<C/|q|^2.

§.§ Invertible extension and invariant measures

In this section, we derive an invertible extension T of T_B, with the property that T^-1 extends T_A, along with an invariant measure for T. This is done with a goal of eventually producing an ergodic measure preserving system, as is done both in <cit.> and <cit.>. For this purpose, let H^3 denote hyperbolic 3-space, having boundary C∼ℙ^1(C) (e.g. the Poincaré upper half space model). The space of oriented geodesics in H^3, identified with pairs in ℙ^1(C)×ℙ^1(C)∖Δ, where Δ denotes the diagonal, carries an isometry invariant measure|z-w|^-4du dv dx dy, w=u+iv, z=x+iy.
We restrict this measure to geodesics in the set𝒢=(⋃_iA_i× B_i)⋃(⋃_i≠ jA_i'× B_j)⋃(⋃_i≠ jA_i× B_j')⋃(⋃_i,jA_i'× B_j'),consisting of geodesics between disjoint A and B regions of the base quadruples (see Figure <ref>).In what follows we use A coordinates for w in the first coordinate and B coordinates for z in the second coordinate.Define T:𝒢→𝒢 byT(w,z)={[(s_iw,s_iz)z∈ B_i,z=s_i…;(s_i^⊥w,s_i^⊥z) z∈ B_i',z=s_i^⊥…;].where z is the Möbius transformation corresponding to z as described in Theorem <ref>. Equivalently,T^-1(w,z)={[(s_iw,s_iz)w∈ A_i,w=s_i…;(s_i^⊥w,s_i^⊥z) w∈ A_i',w=s_i^⊥…;]..In other words, T is applying T_B diagonally depending on the second coordinate.In terms of the shifts on pairs (w,z)=(∏_iw_i,∏_iz_i) corresponding to (w,z), we haveT(w,z)=(z_1w,∏_i=2^∞z_i)=(z_1w,T_B(z)),T^-1(w,z)=(∏_i=2^∞w_i,w_1z)=(T_A(w),w_1z).See Figure <ref> for a visualization of the invertible extension.Define the following regions (see Figure <ref>):𝒜_i =B_i∪(∪_j≠ iB_j') 𝒜_i' =(∪_j B_j')∪(∪_j≠ iB_j) ℬ_i =A_i∪(∪_j≠ iA_j') ℬ_i' =(∪_j A_j')∪(∪_j≠ iA_j) Here 𝒜_i consists of the B regions not intersecting A_i, and so on. The function T: 𝒢→𝒢 is a measure-preserving bijection. Consequently, the push-forward of this measure onto the first or second coordinate gives invariant measures μ_A and μ_B for T_A and T_B.Specifically, we havedμ_A(w)=f_A(w) du dv={[ f_A_i(u,v) du dv=du dv∫_𝒜_i|z-w|^-4 dx dy, w∈ A_i; f_A_i'(u,v) du dv=du dv∫_𝒜_i'|z-w|^-4 dx dy,w∈ A_i';].anddμ_B(z)=f_B(z) dx dy={[ f_B_i(x,y) dx dy=dx dy∫_ℬ_i|z-w|^-4 du dv, z∈ B_i; f_B_i'(x,y) dx dy=dx dy∫_ℬ_i'|z-w|^-4 du dv,z∈ B_i';].. See Figure <ref> for a graph of the invariant density f_B. 
Note that 𝒢=∪_i(A_i×𝒜_i∪ A_i'×𝒜_i')=∪_i(ℬ_i× B_i∪ℬ_i'× B_i'), which is readily seen in Figure <ref>.SinceT(ℬ_i× B_i) =A_i'×𝒜_i',T(ℬ_i'× B_i') =A_i×𝒜_i,we immediately get that T is a bijection.As T is a bijection defined piecewise by isometries, it preserves the measure described above.Theorem <ref> defines the functions f_A and f_B implicitly.We now compute what they are explicitly, starting with f_B.Computing the relevant integrals in Theorem <ref> gives π/4 times hyperbolic area on the triangular regions B_i':f_B(x,y)={[π/4(1/4-d^2)^2 z∈ B_1',d^2=(x-1/2)^2+(y-1)^2;π/4(1/4-d^2)^2 z∈ B_2',d^2=(x-1/2)^2+y^2;π/4(1-x)^2 z∈ B_3';π/4x^2 z∈ B_4'; ].,and on the circular regions B_i we havef_B(x,y)={[ H(x,y) z∈ B_1; H(x,1-y) z∈ B_2; G(x,y) z∈ B_3; G(1-x,y) z∈ B_4;].,whereH(x,y) =h(x,y)+h(1-x,y)+h(x^2-x+y^2,y),G(x,y) =h(x,y^2-y+x^2)+h(x^2-x+y^2,y^2-y+x^2)+h(x^2-x+(1-y)^2,y^2-y+x^2),h(x,y) =arctan(x/y)/4x^2-1/4xy.Furthermore, we have the relationship f_A(w)=f_B(ρ w)=f_B(dw)where ρ is rotation by π/2 around 1/(1-i) and d is the isometry switching opposite faces of the octahedron (the duality operator <ref>).The measures μ_A, μ_B have S_3 symmetry on each of the A_i, A_i', B_i, B_i'.For instance the isometries permuting {0,1,∞} on A_1', B_1 preserve the measure (generators shown for a transposition and three-cycle)(0,1)∼-z̅+1,(0,1,∞)∼-1/z-1.The measures also have S_4 symmetry on C, permuting the pairs {A_i,A_i'}, {B_i,B_i'} (transpositions (i,i+1) shown)(1,2)∼z̅+i,(2,3)∼1/z̅,(3,4)∼-z̅+1.The total measure assigned to each of the A_i, A_i', B_i, B_i', is π^2/4, so to normalize μ_A, μ_B, we divide by 2π^2.All of the above should be compared with Nakada's extension of Schmidt's system, described in an appendix. 
The following lemmas compare the measures of some Farey circles and triangles via the involutions ⊥ and ^-1, simplifying computations in Section 5.For use in the proofs, we note the equality of regionss_i^⊥ℬ_i =A_i'=dB_i', s_iℬ_i' =A_i=dB_i,s_i𝒜_i =B_i'=dA_i', s_i^⊥𝒜_i' =B_i=dA_i,where d is as in (<ref>).For m in swap normal form, and n in invert normal form, there are equalities of measure μ_B(F_B(m))=μ_B(F_B(m^⊥)), μ_A(F_A(n))=μ_A(F_A(n^⊥)).Consider the case where m=s_i…s_j so that F_B(m)=ms_jB_j'⊆ B_i' and F_B(m^⊥)=m^⊥s_i^⊥B_i⊆ B_j (all other cases are analogous) and let ω=|z-w|^-4 du dv dx dy be the invariant form on the space of geodesics.Recall m^⊥=dm^-1d from (<ref>).We haveμ_B(F_B(m))=∫_ℬ_i'∫_F_B(m)ω=∫_ℬ_i'∫_ms_jB_j'ωwhileμ_B(F_B(m^⊥)) =∫_ℬ_j∫_F(m^⊥)ω=∫_ℬ_j∫_m^⊥s_i^⊥B_iω =∫_ℬ_j∫_dm^-1ds_i^⊥B_iω=∫_mdℬ_j∫_ds_i^⊥B_iω=∫_mdℬ_j∫_s_idB_iω=∫_ms_jB_j'∫_s_iA_iω=∫_ms_jB_j'∫_ℬ_i'ω.Here we are using the fact that ω=|z-w|^-4 du dv dx dy is the invariant form and the relations in (<ref>). For m in swap normal form, and n in invert normal form, there are equalities of measureμ_B(F_B(m))=μ_A(F_A(m^-1)), μ_A(F_A(n))=μ_B(F_B(n^-1)).Consider the case where m=s_i…s_j so that F_B(m)=ms_jB_j'⊆ B_i' and F_A(m^-1)=m^-1s_iA_i⊆ A_j (all other cases are analogous) and let ω=|z-w|^-4 du dv dx dy be the invariant form on the space of geodesics.Thenμ_B(F(m))=∫_ℬ_i'∫_F_B(m)ω=∫_ℬ_i'∫_ms_jB_j'ω =∫_s_iA_i∫_m𝒜_jω=∫_m^-1s_iA_i∫_𝒜_jωwhileμ_A(F(m^-1))=∫_F_A(m^-1)∫_𝒜_jω=∫_m^-1s_iA_i∫_𝒜_jω.Here we are using the fact that ω=|z-w|^-4 du dv dx dy is the invariant form and the relations in (<ref>). § DYNAMICAL SYSTEMS ON LORENTZ AND DESCARTES QUADRUPLES The dynamical systems introduced in the previous section can be translated to ones on Lorentz and Descartes quadruples.In this section, we explore these systems as well as how they interact with individual Apollonian packings. 
§.§ Dynamics on Lorentz quadruples

In analogy to Romik <cit.>, we now define a dynamical system on integer Lorentz quadruples a^2=b^2+c^2+d^2 with a>0 which decreases the value of a and terminates at one of the six quadruples(g,± g,0,0),(g,0,± g,0),(g,0,0,± g),g=gcd(a,b,c,d).

Define the following dynamical system on C = { (a,b,c,d) : a^2 = b^2 + c^2 + d^2, a > 0 }⊂^4:T_L(a,b,c,d)={[ L_1^⊥(a,b,c,d)^t if 2a+b+c+d≤ a; L_2^⊥(a,b,c,d)^t if 2a+b-c-d≤ a; L_3^⊥(a,b,c,d)^t if 2a-b+c-d≤ a; L_4^⊥(a,b,c,d)^t if 2a-b-c+d≤ a; if none of the above then; L_1(a,b,c,d)^t if 2a-b-c-d ≤ a; L_2(a,b,c,d)^t if 2a-b+c+d≤ a; L_3(a,b,c,d)^t if 2a+b-c+d≤ a; L_4(a,b,c,d)^t if 2a+b+c-d≤ a; ].. Iteration of T_L produces a word W_1 W_2 ⋯ W_n ⋯ defined by T_L^k(a,b,c,d) = W_k T_L^k-1(a,b,c,d).

The word W_1 W_2 ⋯ W_n produced by iteration of T_L on a primitive integer Lorentz quadruple (a,b,c,d) is in swap normal form and satisfies (a,b,c,d)^t = W_1 W_2 ⋯ W_k 𝐛^t where 𝐛 is one of the following simplest Lorentz quadruples: (1,1,0,0), (1,0,1,0), (1,0,0,1), (1,-1,0,0), (1,0,-1,0), (1,0,0,-1). In other words, any primitive integer Lorentz quadruple eventually reaches one of these under iteration of T_L. See Figure <ref>.

The map is well defined unless a ± b ± c ± d > 0 for all choices of signs. In this case a > |b| + |c| + |d|, which is impossible given a^2 = b^2 + c^2 + d^2. Note that the map preserves gcd(a,b,c,d), so it is well-defined on the primitive integral Lorentz quadruples in C. Therefore, it suffices to verify the following:T_L( W_1 W_2 ⋯ W_n 𝐛^t) =W_2 W_3 ⋯ W_n 𝐛^t.We do this by constructing an explicit intertwining map demonstrating that (ℙ^1(Q(i)), T_B) and (C, T_L) are conjugate. Then the result will follow from Theorem <ref>.

In analogy to work of Romik on Pythagorean triples, it is possible to scale this system to act on a sphere.
Scaling so that a=1 (X=b/a, Y=c/a, Z=d/a) (call this projection π), we get a system on the sphere T_sph(X,Y,Z)={[ (-1-Y-Z,-1-X-Z,-1-X-Y)/2+X+Y+Z if1+X+Y+Z<0; (-1+Y+Z,1+X-Z,1+X-Y)/2+X-Y-Z if1+X-Y-Z<0; (1+Y-Z,-1+X+Z,1-X+Y)/2-X+Y-Z if1-X+Y-Z<0; (1-Y+Z,1-X+Z,-1+X+Y)/2-X-Y+Z if1-X-Y+Z<0; if none of the above, then;(1-Y-Z,1-X-Z,1-X-Y)/2-X-Y-Z if1-X-Y-Z<0;(1+Y+Z,-1+X-Z,-1+X-Y)/2-X+Y+Z if1-X+Y+Z<0;(-1+Y-Z,1+X+Z,-1-X+Y)/2+X-Y+Z if1+X-Y+Z<0;(-1-Y+Z,-1-X+Z,1+X+Y)/2+X+Y-Z if1+X+Y-Z<0;]..The regions defined on the sphere (Figure <ref>) are given by the intersection of the sphere with the hyper-ideal tetrahedron defined by the linear inequalities1+X+Y+Z≥0, 1+X-Y-Z≥0, 1-X+Y-Z≥0, 1-X-Y+Z≥0in the Klein projective model of hyperbolic space, and the cases of T_sph are reflections in those geodesic planes and the planes of the dual tetrahedron.The systems (S^2,T_sph) and (^1(C),T_B) are conjugate, moving between the projective and the upper half-space models of hyperbolic space.Specifically, after stereographic projection (X,Y,Z)↦(X/1-Z,Y/1-Z)=z, rotating (e^-π i/4z), scaling (z/√(2)), shifting (z+1+i/2), and switching two circles (z̅/-iz̅+1), we obtain an intertwining mapϕ(X,Y,Z)=iz̅+1/z̅+1=(1+Y-Z)+iX/(1+X-Z)-iY,T_B∘ϕ=ϕ∘ T_sph.Under the composition ϕ∘π, one can associate a Lorentz quadruple (a,b,c,d) with the complex pointz = ϕ∘π(a,b,c,d)^t :=a+c-d + bi /a+b-d-ci,where π is the scaling projection from above, the correspondence between fixed points being(1,±1,0,0)↦1/1-i, ∞,(1,0,±1,0)↦ 1+i,0,(1,0,0,±1)↦ i,1. Then,s_i ∘ϕ∘π ( a,b,c,d )^t = ϕ∘π∘ L_i (a,b,c,d)^t, s_i^⊥∘ϕ∘π ( a,b,c,d )^t = ϕ∘π∘ L_i^⊥ (a,b,c,d)^t,and the conditions defining the cases of T_L and T_B correspond.Therefore, T_B∘ϕ∘π=ϕ∘π∘ T_L. 
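The stated correspondence of fixed points can be checked directly from the formula z = (a+c-d+bi)/(a+b-d-ci). The following sketch does so in Python; the quadruple (1,-1,0,0), whose image is ∞ (the denominator vanishes), is omitted.

```python
# Check that z = (a+c-d+bi)/(a+b-d-ci) sends the fixed Lorentz quadruples
# to the stated fixed points of T_B.  The quadruple (1,-1,0,0), which
# maps to infinity, is omitted.
def phi_pi(a, b, c, d):
    return complex(a + c - d, b) / complex(a + b - d, -c)

assert abs(phi_pi(1, 1, 0, 0) - 1 / (1 - 1j)) < 1e-12   # -> 1/(1-i)
assert abs(phi_pi(1, 0, 1, 0) - (1 + 1j)) < 1e-12       # -> 1+i
assert abs(phi_pi(1, 0, -1, 0) - 0) < 1e-12             # -> 0
assert abs(phi_pi(1, 0, 0, -1) - 1) < 1e-12             # -> 1
```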
To summarize the above: the following diagram commutes, ϕ is invertible, and there is a unique primitive representative in π^-1(q) for rational q∈ S^2.

C ----T_L----> C
|π            |π
v             v
S^2 --T_sph--> S^2
|ϕ            |ϕ
v             v
ℙ^1(C) --T_B--> ℙ^1(C)

§.§ Dynamics on Descartes quadruples

Under the change of variables of Section <ref>, the dynamical system of the last section becomes a dynamical system on primitive integer Descartes quadruples. Define the following:T_S(a,b,c,d)={[ S_1^⊥(a,b,c,d)^t if a<0; S_2^⊥(a,b,c,d)^t if b<0; S_3^⊥(a,b,c,d)^t if c<0; S_4^⊥(a,b,c,d)^t if d<0; if none of the above then; S_1(a,b,c,d)^t if b+c+d<a; S_2(a,b,c,d)^t if a+c+d<b; S_3(a,b,c,d)^t if a+b+d<c; S_4(a,b,c,d)^t if a+b+c<d; ].. Iteration of T_S produces a word W_1 W_2 ⋯ W_n defined by T_S^k(a,b,c,d)^t = W_k T_S^k-1(a,b,c,d)^t.

The word W_1 W_2 ⋯ W_n produced by iteration of T_S on a primitive integer Descartes quadruple (a,b,c,d) is in swap normal form and satisfies (a,b,c,d)^t = W_1 W_2 ⋯ W_k σ(1,1,0,0)^t where σ is some permutation of the entries of the vector. In other words, any primitive integral Descartes quadruple (a,b,c,d) eventually reaches one of the so-called simplest Descartes quadruples (1,1,0,0), (1,0,1,0), (1,0,0,1), (0,1,1,0), (0,1,0,1), (0,0,1,1) under iteration of T_S.

The proof is immediate from the previous sections, using conjugation by J defined in (<ref>).

We now state an analogous system conjugate to T_A. The proof is similar. DefineT_I(a,b,c,d)={[ S_1(a,b,c,d)^t if b+c+d<a; S_2(a,b,c,d)^t if a+c+d<b; S_3(a,b,c,d)^t if a+b+d<c; S_4(a,b,c,d)^t if a+b+c<d; if none of the above then; S_1^⊥(a,b,c,d)^t if a<0; S_2^⊥(a,b,c,d)^t if b<0; S_3^⊥(a,b,c,d)^t if c<0; S_4^⊥(a,b,c,d)^t if d<0; ].. Iteration of T_I produces a word W_1 W_2 ⋯ W_n defined by T_I^k(a,b,c,d) = W_k T_I^k-1(a,b,c,d).
The word W_1 W_2 ⋯ W_n produced by iteration of T_I on a primitive integer Descartes quadruple (a,b,c,d) is in invert normal form and satisfies (a,b,c,d)^t = W_1 W_2 ⋯ W_k σ(1,1,0,0)^t where σ is some permutation of the entries of the vector. In other words, any primitive integral Descartes quadruple (a,b,c,d) eventually reaches one of the so-called simplest Descartes quadruples (1,1,0,0), (1,0,1,0), (1,0,0,1), (0,1,1,0), (0,1,0,1), (0,0,1,1) under iteration of T_I.

§.§ Dynamics on Apollonian circle packings

Those interested in Apollonian circle packings will want to consider how the dynamical systems interact with individual packings. Each Apollonian circle packing has a root quadruple, i.e. the largest four pairwise tangent circles in the packing. The main result of this section is to show that the invert normal form word W_1 W_2 ⋯ W_n produced by the dynamical system T_I is such that the longest substring W_1 W_2 ⋯ W_k consisting only of swaps will end with the root quadruple of the packing containing the initial quadruple. In other words, the dynamical system T_I, or equivalently T_A, moves any quadruple to the root of its Apollonian circle packing, then inverts, then moves to the root, then inverts, etc.
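This behaviour can be observed concretely. The sketch below implements T_I in Python under the assumption that S_i replaces the i-th curvature by twice the sum of the others minus itself, and that S_i^⊥ negates the i-th entry and adds twice its value to the other three (a convention consistent with the explicit S_2^⊥(a,b,c,d)^t=(2b+a,-b,2b+c,2b+d)^t computed in this section); whether this matches the paper's normalization exactly is an assumption of the sketch.

```python
# Sketch of T_I on Descartes quadruples: swaps take priority, then inversions.
# Conventions (an assumption of this sketch):
#   S_i:      replace the i-th entry x_i by 2*(sum of the others) - x_i
#   S_i^perp: negate the i-th entry and add 2*x_i to the other three

def swap(q, i):
    s = sum(q) - q[i]
    return tuple(2 * s - x if j == i else x for j, x in enumerate(q))

def co_swap(q, i):
    return tuple(-x if j == i else x + 2 * q[i] for j, x in enumerate(q))

def T_I(q):
    a, b, c, d = q
    others = [b + c + d, a + c + d, a + b + d, a + b + c]
    for i in range(4):              # swaps first
        if others[i] < q[i]:
            return swap(q, i), 'swap'
    for i in range(4):              # then inversions
        if q[i] < 0:
            return co_swap(q, i), 'invert'
    return None, None               # a simplest quadruple: halt

# Follow (2,3,6,23); the quadruple held just before the first inversion
# should be a root quadruple: sorted, a <= 0 <= b <= c <= d with a+b+c >= d.
q, last_before_invert = (2, 3, 6, 23), None
while True:
    nxt, kind = T_I(q)
    if kind != 'swap' and last_before_invert is None:
        last_before_invert = q
    if nxt is None:
        break
    q = nxt

a, b, c, d = sorted(last_before_invert)
assert a <= 0 <= b <= c <= d and a + b + c >= d    # (-1,2,2,3) is a root
assert q in {(1,1,0,0),(1,0,1,0),(1,0,0,1),(0,1,1,0),(0,1,0,1),(0,0,1,1)}
```

The orbit here is (2,3,6,23) → (2,3,6,-1) → (2,3,2,-1) (the root, up to reordering) → (0,1,0,1), illustrating swap, swap, invert.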
The dynamical system T_I moves a quadruple to the root of its packing via swaps before inverting. In particular, while it remains in a single Apollonian circle packing, the dynamical system T_I agrees with the Reduction Algorithm for Descartes quadruples of <cit.>, until the last step (when Graham, Lagarias, Mallows, Wilks and Yan reorder the quadruple).

The proof requires several lemmas. A Descartes quadruple (a,b,c,d) is a root quadruple of the Apollonian packing in which it resides if a≤ 0≤ b≤ c≤ d, and a+b+c≥ d. Furthermore, it exists if the packing is integral, and is unique <cit.>.

Let (x,y,z,w) be a Descartes quadruple such that the i-th coordinate is not maximal in the quadruple. Let S_ii=S_i^⊥ S_i. Then S_ii(x,y,z,w)^t is a root quadruple of the packing in which it resides, possibly after re-ordering the coordinates. We haveS_11=( [ 1 -2 -2 -2; -2 5 4 4; -2 4 5 4; -2 4 4 5 ]) S_22=( [ 5 -2 4 4; -2 1 -2 -2; 4 -2 5 4; 4 -2 4 5 ]) S_33=( [ 5 4 -2 4; 4 5 -2 4; -2 -2 1 -2; 4 4 -2 5 ]) S_44=( [ 5 4 4 -2; 4 5 4 -2; 4 4 5 -2; -2 -2 -2 1 ]).

We prove the lemma for the case i=1 and note that the other cases are identical. We haveS_11(x,y,z,w)^t=(x-2y-2z-2w,-2x+5y+4z+4w, -2x+4y+5z+4w, -2x+4y+4z+5w)^t.Since x is not maximal among x,y,z,w, we have that the first coordinate of S_1(x,y,z,w)^t is ≥0 (if x is negative, -x+2y+2z+2w is a sum of positive numbers and hence it is positive; if x is nonnegative, it is enough to argue that the first coordinate of S_1(x,y,z,w)^t is at least x). Hence x-2y-2z-2w≤ 0≤ -2x+5y+4z+4w, -2x+4y+5z+4w, -2x+4y+4z+5w. Without loss of generality, assume y≤ z≤ w, so thatx-2y-2z-2w≤ 0≤ -2x+5y+4z+4w≤ -2x+4y+5z+4w≤ -2x+4y+4z+5w.In order to show that the above is a root quadruple, we need only show that(x-2y-2z-2w)+(-2x+5y+4z+4w)+(-2x+4y+5z+4w)-(-2x+4y+4z+5w)=-x+3y+3z+w≥0.Using Descartes' theorem, and the fact that w is maximal, we have thatw=(2x+2y+2z+√(16xy+16xz+16yz))/2=x+y+z+2√(xy+xz+yz).So the expression in (<ref>) can be rewritten as4y+4z+2√(xy+xz+yz).If 0≤ y≤ z, then this is clearly nonnegative and we
are done. Suppose y<0. Then the circle corresponding to y in the Descartes quadruple (x,y,z,w) contains the one of curvature z, and so the circle of curvature z has smaller radius and hence larger curvature than the one of curvature y. Thus 4y+4z>0 and the expression in (<ref>) is nonnegative as desired.

Let (a,b,c,d) be a root quadruple of some packing. Then S_i^⊥(a,b,c,d)^t is a root quadruple of another packing, after re-ordering, for any 2≤ i≤ 4.

We consider the case where i=2 and note that the other cases are identical. We have S_2^⊥(a,b,c,d)^t=(2b+a,-b,2b+c,2b+d)^t, and -b≤ 0≤ 2b+a≤ 2b+c≤ 2b+d. We compute(-b)+(2b+a)+(2b+c)-(2b+d)=a+b+c-d≥ 0since (a,b,c,d) is a root quadruple, and hence (2b+a,-b,2b+c,2b+d) is also a root quadruple after reordering.

The system T_I generates a word satisfying (a,b,c,d)^t = W_1 W_2 ⋯ W_n b where b is a simplest Descartes quadruple. In this word, swaps are as far left as possible and inversions as far right as possible. Therefore if the leftmost inversion S_i^⊥ occurs at some position k, i.e. W_k = S_i^⊥, it is because it is followed by S_i, or because it occurs as the last letter (k=n), or else because it is followed by another inversion S_j^⊥.
We assume that the leftmost inversion is W_1, and will show that (a,b,c,d)^t is a root quadruple. The theorem will follow.

In the first case, (a,b,c,d)^t = S_i^⊥ S_i (x,y,z,w)^t, where the i-th coordinate is not maximal (since it is a result of S_i in the application of T_S). Therefore, by the first lemma, (a,b,c,d) is a root quadruple. In the second case, (a,b,c,d) is created by an inversion from a simplest Descartes quadruple, which is, in particular, a root quadruple. But it is not an inversion in a circle of curvature 0, since that would not change the Descartes quadruple. Therefore the second lemma applies. In the third case, (a,b,c,d)^t = S_i^⊥ S_j^⊥ (x,y,z,w)^t, where i ≠ j. Then, by induction (with the previous two cases as base cases and the second lemma as an inductive step), (a,b,c,d)^t is a root quadruple. (Since i ≠ j, we know the i-th circle is not the largest in S_j^⊥(x,y,z,w)^t.)

§ TYPICAL EXPANSIONS OF A POINT FROM TWO PERSPECTIVES

What can one say about the chain of swaps and inversions that are used in the rational approximation of a typical point under iteration? First, we consider the question as a limiting question on finite expansions. In the second section, we consider the question for all expansions. Finally, we provide some numerical data.
§.§ Digit probabilities in finite expansions

We consider the behavior of the expansions of rational points (those points with finite expansion). We use a measure of the height of a rational point. For example, in the case of Lorentz quadruples, we may define the set of points of height N to beX_N:={(a,b,c,d)∈ℤ^4 : a^2=b^2+c^2+d^2, 0<a ≤ N, (b,c,d)=1}.We then consider X_N to be a discrete probability space with uniform probability measure. Under this measure, we can ask about the distribution of the n-th digit in the expansion, as N →∞. In this section, we prove the following theorem.

Write δ(X,Y,Z) for the letter produced by applying T_sph to (X,Y,Z). Let dμ=1/4πdA be the normalized uniform area measure on the sphere. Under the uniform probability measure for X_N, the distribution of the n-th digit in the expansion of a random Lorentz quadruple (a,b,c,d)∈ X_N converges to the distribution of δ under the measureℱ^n-1(𝐈)dμ, where 𝐈 is the constant function 1 and ℱ is the transfer operator of T_sph. In particular, the distributions of the 1st and 2nd digits converge to the distribution of δ under the measuresdμ, ( ∑1/(2 ± X ± Y ± Z)^4 ) dμ,respectively. (The sum is over all choices of sign combinations.)

For example, the probabilities of possible first digits approach the proportional areas of the circular and triangular regions on the sphere in Figure <ref>. These are, respectively, the probability the first digit is an inversion, 1/π∫_1/√(2)^∞∫_-∞^∞4dxdy/(1+x^2+y^2)^2=2(1-1/√(3))= 0.84529946…(the integrand is the pushforward of surface area on the sphere to the plane under stereographic projection); and the probability it is a swap,1/4π (4π-8π(1-1/√(3)))=(2/√(3)-1)= 0.15470053….
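These two probabilities are easy to test numerically: sample points uniformly on the sphere and classify the first digit by the four inversion conditions of T_sph. A Monte Carlo sketch in Python:

```python
# Monte Carlo check of the first-digit probability: a uniformly random
# point on the sphere should begin with an inversion with probability
# 2(1 - 1/sqrt(3)) ~ 0.8453.  The four inversion branches of T_sph are
# 1+X+Y+Z<0, 1+X-Y-Z<0, 1-X+Y-Z<0, 1-X-Y+Z<0.
import math
import random

random.seed(1)
n, inversions = 200_000, 0
for _ in range(n):
    # uniform point on S^2 via normalized Gaussians
    x, y, z = (random.gauss(0, 1) for _ in range(3))
    r = math.sqrt(x * x + y * y + z * z)
    X, Y, Z = x / r, y / r, z / r
    if (1 + X + Y + Z < 0 or 1 + X - Y - Z < 0
            or 1 - X + Y - Z < 0 or 1 - X - Y + Z < 0):
        inversions += 1

exact = 2 * (1 - 1 / math.sqrt(3))      # 0.84529946...
assert abs(inversions / n - exact) < 0.01
```

With 200,000 samples the standard error is below 0.001, so the 0.01 tolerance is comfortable.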
Note that the same result applies, via application of the change of coordinates J, to Descartes quadruples chosen uniformly among those whose sum of curvatures is less than N. It is also worth remarking that this sort of result is limited in the sense that a slightly different way of measuring height, say the maximum of the curvatures is less than N, may potentially lead to different probabilities.

The transfer operator ℱ:L^1(S^2,μ)→ L^1(S^2,μ) arising from the transformation T_sph is defined in the following way: for f∈ L^1(S^2,μ), we have(ℱf)(X̃)=∑_Ỹ∈ T_sph^-1(X̃)g(Ỹ)f(Ỹ),where g is the inverse of the Jacobian of T_sph.

By definition, the transfer operator is given by ( ℱf)(X,Y,Z) =1/(2+X+Y+Z)^4f(-1-Y-Z/2+X+Y+Z,-1-X-Z/2+X+Y+Z,-1-X-Y/2+X+Y+Z)+ 1/(2+X-Y-Z)^4f(-1+Y+Z/2+X-Y-Z,1+X-Z/2+X-Y-Z,1+X-Y/2+X-Y-Z)+ 1/(2-X+Y-Z)^4f(1+Y-Z/2-X+Y-Z,-1+X+Z/2-X+Y-Z,1-X+Y/2-X+Y-Z)+1/(2-X-Y+Z)^4f(1-Y+Z/2-X-Y+Z,1-X+Z/2-X-Y+Z,-1+X+Y/2-X-Y+Z) +1/(2-X-Y-Z)^4f(1-Y-Z/2-X-Y-Z,1-X-Z/2-X-Y-Z,1-X-Y/2-X-Y-Z) +1/(2-X+Y+Z)^4f(1+Y+Z/2-X+Y+Z,-1+X-Z/2-X+Y+Z,-1+X-Y/2-X+Y+Z) +1/(2+X-Y+Z)^4f(-1+Y-Z/2+X-Y+Z,1+X+Z/2+X-Y+Z,-1-X+Y/2+X-Y+Z) +1/(2+X+Y-Z)^4f(-1-Y+Z/2+X+Y-Z,-1-X+Z/2+X+Y-Z,1+X+Y/2+X+Y-Z).Therefore the second part of the theorem follows from the first.

The result follows if we show that for a given region R in S^2, lim_N→∞|{(a,b,c,d)∈ X_N : (b/a,c/a,d/a)∈ R}|/|X_N| = μ(R).This would show that the random vector (b/a,c/a,d/a) converges in distribution to μ. Therefore, the transformation T_sph(X,Y,Z) has distribution (ℱ(𝐈))(X,Y,Z) dμ.
It suffices to consider regions R on the unit sphere given by R={ (sinθcosϕ,sinθsinϕ,cosθ):s_1<θ<s_2,t_1<ϕ<t_2}.We let S_s_1,s_2,t_1,t_2 be the three dimensional region given by S_s_1,s_2,t_1,t_2={ (x,y,z):x^2+y^2+z^2≤ 1,s_1<arccos(z/√(x^2+y^2+z^2))<s_2, t_1<arctan(y/x)<t_2}.Now |{(a,b,c,d)∈ X_N : (b/a,c/a,d/a)∈ R}| / |X_N| = #{ (a,b,c,d)∈ℤ^4:a>0, (b,c,d)=1,b^2+c^2+d^2≤ N^2,(b/a,c/a,d/a)∈ R}/#{ (a,b,c,d)∈ℤ^4:a>0, (b,c,d)=1,b^2+c^2+d^2≤ N^2}=#{ (b,c,d)∈ℤ^3: (b,c,d)=1,(b/N,c/N,d/N)∈ S_s_1,s_2,t_1,t_2}/#{ (b,c,d)∈ℤ^3: (b,c,d)=1,(b/N,c/N,d/N)∈ S_0,π,0,2π}=( 1+o(1))(1/ζ(3)) N^3 vol(S_s_1,s_2,t_1,t_2)/(1/ζ(3)) N^3 vol(S_0,π,0,2π)=( 1+o(1))(1/3)(cos s_1-cos s_2)(t_2-t_1)/4π/3=( 1+o(1))1/4π(cos s_1-cos s_2)(t_2-t_1)=( 1+o(1))μ(R),which proves (<ref>).

§.§ Expansion statistics assuming ergodicity

The results in this section depend on the conjectural ergodicity of the systems (ℙ^1(ℂ), T_A,μ_A) and (ℙ^1(ℂ), T_B,μ_B) constructed in the previous section. These systems are similar to the system constructed by Schmidt in <cit.>, which is ergodic (the main result of Schmidt's <cit.>). Similarly, Romik constructs a measure-preserving system on the first quadrant of the unit circle in <cit.> which he proves to be ergodic. In both Romik's and Schmidt's work, ergodicity is then applied to provide information about the expansions of a typical point. While we leave the proof of ergodicity to a subsequent paper, we state it as a conjecture here.

The systems (ℙ^1(ℂ), T_A,μ_A) and (ℙ^1(ℂ), T_B,μ_B) are ergodic.
Then standard theorems of ergodic theory would imply that for almost all z ∈ℙ^1(ℂ), lim_n→∞1/n∑_i=1^nϕ(T_A^iz)=∫ϕ dμ with ϕ an indicator function for various Farey circles and triangles (and similarly with T_B in place of T_A). In particular, by judicious choice of an indicator function, we can compute the frequencies of certain chains of swaps and inversions occurring in the typical expansion of a point.

§.§.§ Example: Two or three swaps in a row

The frequency of two swaps in a row {S_iS_j : i,j∈[1,4], i≠ j}, as well as the frequency of two inversions in a row {S_i^⊥ S_j^⊥ : i,j∈[1,4], i≠ j}, is 0.345299…. The frequency of three swaps (or inversions) in a row, {S_iS_jS_k : i,j,k∈[1,4], i≠ j, j≠ k}, is 0.246913….

With reference to Figure <ref>, there are twelve triangular regions corresponding to two swaps in a row. The total measure of these triangular regions, as a proportion of the total measure of the plane, is the frequency we wish to compute. As noted in Section <ref>, the total area with respect to μ_A or μ_B of all eight Farey circles and triangles (the entire plane) is 2π^2. The area of a region with respect to μ_A and μ_B is exactly hyperbolic area multiplied by a factor of π/4.
The hyperbolic area of a Euclidean disk of radius r in the upper half-plane whose center has y-coordinate b>0 isI(α)=∫_-1^12√(1-x^2)/α^2-(1-x^2)dx=2π(α/√(α^2-1)-1),where α=b/r>1. The hyperbolic area of a translate (in the y direction by α) of the ideal triangle with vertices 0,1,∞ isJ(α)=∫_0^1dx/α+√(x(1-x))=π-2arccos(1/2α)/√(1-(1/2α)^2).Note: this is Φ(1/2α) where Φ is as in Lemma 3.3 from <cit.>.

Putting this together, we have that the frequency of two swaps in a row is 12·(π/4· J(1))/(2π^2) = 0.345299…,the frequency of three swaps in a row is(12/(8π))(J(1)-I(4))=0.246913…,and so on. We may use Lemmas <ref> and <ref> to translate integrals over Farey circles to integrals over Farey triangles, and vice versa. In particular, the frequency of n swaps will equal the frequency of n inversions.

§.§.§ Example: Certain strings of Schmidt

The frequency in the expansion of almost every z∈ℂ of a string of alternating swaps or inversions of length n,S_i/j… S_jS_iS_jS_i,S_i/j^⊥… S_j^⊥S_i^⊥S_j^⊥S_i^⊥ (i≠ j fixed),is J(n-1)/8π, where J(α) is as in (<ref>). These are probabilities associated to a string of only swaps or only inversions having a common fixed point. Consider the probability that one of the vertices is fixed for exactly n iterations by a string of only swaps or only inversions, i.e. the frequency of stringsMS_i/j… S_jS_iS_jS_iN,MS_i/j^⊥… S_j^⊥S_i^⊥S_j^⊥S_i^⊥N(i≠ j fixed),where M,N≠ S_i,S_j on the left and M,N≠ S_i^⊥,S_j^⊥ on the right. This frequency is (cf.
<cit.>) (1/(8π))(J(n-1)-2J(n)+J(n+1)). In particular, the conjectured values for length 1, 2 and 3 Schmidt strings are 0.084117…, 0.007180…, 0.002249… respectively.

§.§ Experimental data

We ran two experiments. In the first, we generated all Lorentz quadruples with a < 200 and computed the frequencies of certain substrings, as well as the frequencies of the first several digits. The former are shown in Table <ref>. The frequency of swaps and inversions in the first and second digits are shown in Table <ref>. These results are to be compared to Section <ref>. In the second, we approximated the frequencies in a random expansion as follows: we selected 100 points uniformly at random from the [0,1]×[0,1] square and computed the first 100 letters of the expansion iterating T_A or T_B, and tabulated the frequency of certain substrings. These results are to be compared to Section <ref>.

§ RESTRICTION TO THE REAL LINE AND ITS ERGODIC INVARIANT MEASURE

§.§ The dynamical system and its ergodic invariant measure

In this section, we consider the action of T_A and T_B on the real line, thus obtaining a version of the Euclidean algorithm. On ℙ^1(R), T_A and T_B agree, the map being t(x):={[ a(x)=-x x∈ A=[-∞,0]; b(x)=x/(2x-1) x∈ B=[0,1]; c(x)=2-x x∈ C=[1,∞]; ]..The Möbius transformations a,b,c are reflections in the sides of the ideal hyperbolic triangle with vertices {0,1,∞}, generating a group Γ_R isomorphic to the free product Z/(2)*Z/(2)*Z/(2). The fixed points of t are {0,1,∞} and iteration of t takes rationals to one of the fixed points in finite time, depending on the parity of the numerator and denominator, preserving parity as Γ_R is the kernel of the map _2(Z)→SL_2(Z/(2)) (index 6). See the next subsection for a proof that orbits are finite on rationals. Iteration of t with input x produces a word x=m_1… in {a,b,c}, m_i(t^i-1(x))=t(t^i-1(x)), such that m_i≠ m_i+1. We take the word to be finite if x is rational. We obtain rational approximations to x by following the inverse orbit of
{0,1,∞}:p_n,α/q_n,α=(∏_i=1^nx_i)(α),α∈{0,1,∞},which include the usual convergents from simple continued fractions. The convergents can be constructed starting from (1/1,±1/0,0/1), according as x is positive or negative, and taking mediants, moving through the Farey tree. Applying a, b, or c updates the first, second, or third position by taking the mediant p/q⊕ r/s:=(p+r)/(q+s) of the other two entries. For example for a random numberx=0.4189513796210592…,x=bacabcacbcacacababac…,the first 20 convergents are (mediants in red, see Figure <ref>)(1/1,1/0,0/1),(1/1,1/2,0/1),(1/3,1/2,0/1),(1/3,1/2,2/5),(3/7,1/2,2/5),(3/7,5/12,2/5), (3/7,5/12,8/19),(13/31,5/12,8/19),(13/31,5/12,18/43),(13/31,31/74,18/43), (13/31,31/74,44/105),(75/179,31/74,44/105), (75/179,31/74,106/253), (137/327,31/74,106/253),(137/327,31/74,168/401), (199/475,31/74,168/401), (199/475,367/876,168/401), (535/1277,367/876,168/401),(535/1277,703/1678,168/401), (871/2079,703/1678,168/401), (871/2079,703/1678,1574/3757)=(0.4189514…,0.4189511…,0.4189512…).
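The digit word and the table of convergents above can be reproduced mechanically. The following Python sketch generates the word by iterating t and rebuilds the convergent triples by mediants; pairs (p,q) are used so that 1/0 can represent ∞.

```python
# Reproduce the digit word and mediant convergents for
# x = 0.4189513796210592 (floating point suffices for 20 digits).
x = 0.4189513796210592
word, triple = "", [(1, 1), (1, 0), (0, 1)]   # (p, q) pairs; (1, 0) is infinity
pos = {"a": 0, "b": 1, "c": 2}
for _ in range(20):
    letter = "a" if x < 0 else ("b" if x < 1 else "c")
    word += letter
    i = pos[letter]
    (p1, q1), (p2, q2) = [triple[j] for j in range(3) if j != i]
    triple[i] = (p1 + p2, q1 + q2)            # mediant of the other two entries
    x = -x if letter == "a" else (x / (2 * x - 1) if letter == "b" else 2 - x)

assert word == "bacabcacbcacacababac"
assert triple == [(871, 2079), (703, 1678), (1574, 3757)]
```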
The map T(y,x)=(m(y),m(x)), where m is the branch of t determined by x (so m(x)=t(x)), extends t in the second coordinate and is a bijection on the space of geodesicsG_R=A× B∪ A× C∪ B× C∪ B× A∪ C× A∪ C× B∖diag,where there is an isometry invariant measure dx dy |x-y|^-2. Pushing forward to the second coordinate gives the infinite t-invariant measuref(x) dx=dμ(x)={[ dx/-x x<0,; dx/x(1-x) 0<x<1,; dx/(x-1) x>1,; ]..This dynamical system is clearly a cross-section of billiards in the ideal hyperbolic triangle: the bi-infinite word y^-1x in {a,b,c} corresponding to a geodesic (y,x) records the sequence of collisions with the walls. The return time, say for a geodesic (y,x)∈[-∞,0]×[1,∞], is given byr(y,x)=1/2log(x(1-y)/y(1-x)).The return time is integrable with respect to (x-y)^-2 dy dx, for instance1/2∫_-∞^0∫_1^∞log(x(1-y)/y(1-x))dx dy/(y-x)^2=π^2/6.Since this triangle reflection group has finite covolume, the system (G_R,T,(x-y)^-2 dy dx) is ergodic, implying ergodicity of (ℙ^1(R),t,μ).

§.§ A Euclidean algorithm

By homogenizing t above, we get a dynamical system on pairs of integers (p,q)∈Z^2 which halts when p=q or one of p, q is zero.(p,q)↦{[ a(p,q)=(-p,q) q<0<p or p<0<q, i.e. p/q<0,; b(p,q)=(p,2p-q) 0<p<q or q<p<0, i.e. 0<p/q<1,; c(p,q)=(2q-p,q) 0<q<p or p<q<0, i.e. p/q>1.; ].The b step reduces |q|, the c step reduces |p|, and after applying a, one of either b or c follows. Hence the algorithm terminates. If (p,q)≠(0,0) then the non-zero entry of the output is ±gcd(p,q), and working backwards allows us to write the gcd as a linear combination of p and q (using only the coefficients -1 and 2 at each step).
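The homogenized algorithm can be sketched in a few lines of Python; the orbit of (246,113) computed here is the one displayed for comparison with Romik's algorithm in a later subsection.

```python
# The homogenized map on integer pairs: a variant of the Euclidean
# algorithm that halts when p = q or one of p, q is zero.
def step(p, q):
    if p * q < 0:                    # p/q < 0
        return (-p, q)
    if 0 < p < q or q < p < 0:       # 0 < p/q < 1
        return (p, 2 * p - q)
    return (2 * q - p, q)            # p/q > 1

def gcd_orbit(p, q):
    orbit = [(p, q)]
    while p != q and p != 0 and q != 0:
        p, q = step(p, q)
        orbit.append((p, q))
    return orbit

orbit = gcd_orbit(246, 113)
assert orbit[-1] == (0, 1)       # the nonzero entry is gcd(246, 113) = 1
assert len(orbit) == 16          # 15 steps in total
```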
§.§ Dynamics on triples

In this section, we make some remarks relating the triangle reflection group to a “dual Apollonian group” on the line and conjugate this group to act on Pythagorean triples (obtaining a system on the circle conjugate to (ℙ^1(R),t,μ)). Here is a lower dimensional version of Descartes' theorem on curvatures. Consider three real numbers, a<b<c (one of which may be ∞, ordered as on a circle). The inverses of the lengths of the intervals they define (infinite length if one endpoint is ∞, negative length if the interval contains ∞) satisfy (x+y+z)^2-(x^2+y^2+z^2)=0.

In analogy with the dual Apollonian group ⟨s_i^⊥ : 1≤ i≤4⟩, we can define a group acting on ordered triples of mutually tangent oriented intervals as “inversions” in the equivalent of ACC coordinates using indefinite binary quadratic forms. IfF=( [ a -b; -b c ]), a,b,c∈R, det(F)=-1,then the zero set of F, (x-b/a)^2=1/a^2, defines an oriented interval with curvature a, co-curvature c, and curvature-center b (if a=0, then b is ±1 depending on orientation). Three forms F_i (considered as geodesics of the upper half-plane model of H^2) are in “triangular” configuration if they define an ideal hyperbolic triangle with proper orientation. The analogous group has generatorsA=( [ -1 0 0; 2 1 0; 2 0 1 ]) B=( [ 1 2 0; 0 -1 0; 0 2 1 ]) C=( [ 1 0 2; 0 1 2; 0 0 -1 ])each defining an inversion in the given interval. The group generated by A,B,C preserves the quadratic form xy+xz+yz, i.e.M^tQM=Q, Q=1/2( [ 0 1 1; 1 0 1; 1 1 0 ]), M∈⟨ A,B,C⟩.Three intervals/forms/geodesics R are in triangular configuration if they satisfyR^tQR=( [ 0 -2 0; -2 0 0; 0 0 -1 ]).The group Γ_R is then a geometric realization of this group with respect to the base triple representing the ideal hyperbolic triangle with vertices 0, 1, ∞ (rows in (c,a,b) coordinates)( [ 0 0 -1; 0 2 1; -2 0 1 ]).
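That the generators preserve Q can be verified by direct matrix arithmetic; the sketch below works with 2Q to stay in integers.

```python
# Check M^t Q M = Q for the generators A, B, C; we use 2Q to avoid
# fractions.  Plain nested lists stand in for 3x3 matrices.
def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transpose(X):
    return [list(row) for row in zip(*X)]

Q2 = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]          # 2Q
A = [[-1, 0, 0], [2, 1, 0], [2, 0, 1]]
B = [[1, 2, 0], [0, -1, 0], [0, 2, 1]]
C = [[1, 0, 2], [0, 1, 2], [0, 0, -1]]

for M in (A, B, C):
    assert mat_mul(transpose(M), mat_mul(Q2, M)) == Q2
```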
The form Q is rationally equivalent to the “Pythagorean” diagonal form. For instance P=J^tQJ, P=( [ 1 0 0; 0 -1 0; 0 0 -1; ]), J=( [ 1 1 1; 0 -1 0; 1 1 -1; ]). Conjugating A, B, C by J gives A^J=( [ 3 2 2; -2 -1 -2; -2 -2 -1; ]), B^J=( [ 1 0 0; 0 -1 0; 0 0 1; ]), C^J=( [ 3 2 -2; -2 -1 2; 2 2 -1; ]). We can define a dynamical system on triples of integers (a,b,c) such that a^2=b^2+c^2, a>0, which reduces the value of a and terminates at one of (g,-g,0), (g,0,± g), where g is the GCD of a, b, and c: (a,b,c)↦{[ (3a+2b+2c,-2a-b-2c,-2a-2b-c) b<0,c<0,; (a,-b,c) b>0,; (3a+2b-2c,-2a-b+2c,2a+2b-c) b<0,c>0.; ]. Dividing by a, we obtain a system on the circle x^2+y^2=1, conjugate to t: (x,y)↦{[ ((-2-x-2y)/(3+2x+2y),(-2-2x-y)/(3+2x+2y)) x<0,y<0,; (-x,y) x>0,; ((-2-x+2y)/(3+2x-2y),(2+2x-y)/(3+2x-2y)) x<0,y>0.; ]. §.§ Comparing to Romik's Work Romik's action on Pythagorean triples also gives a dynamical system on the real line with an ergodic invariant measure and another variation on the Euclidean algorithm <cit.>. For example, Romik's Euclidean algorithm in section 3.2 of <cit.> is a dynamical system on nonnegative pairs of integers (p,q) with p>q where (p,q)↦{[ a(p,q)=(p-2q,q) p-2q>q,; b(p,q)=(q,p-2q) q≥ p-2q>0,; c(p,q)=(q,2q-p) p-2q≤0.; ].
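Both maps are easy to iterate; the sketch below (function names are ours) implements the triple system, whose first branch is the row action of A^J on column vectors (a,b,c)^t, together with Romik's algorithm, so the runs compared next can be reproduced.

```python
def triple_step(a, b, c):
    """One step of the system on triples with a^2 = b^2 + c^2, a > 0;
    returns None at the terminal states (g,-g,0) and (g,0,+-g)."""
    if b < 0 and c < 0:
        return (3*a + 2*b + 2*c, -2*a - b - 2*c, -2*a - 2*b - c)
    if b > 0:
        return (a, -b, c)
    if b < 0 and c > 0:
        return (3*a + 2*b - 2*c, -2*a - b + 2*c, 2*a + 2*b - c)
    return None  # b == 0, or b < 0 with c == 0: terminal

def triple_orbit(a, b, c):
    orbit = [(a, b, c)]
    while (nxt := triple_step(*orbit[-1])) is not None:
        assert nxt[0] ** 2 == nxt[1] ** 2 + nxt[2] ** 2  # invariant preserved
        orbit.append(nxt)
    return orbit

def romik_step(p, q):
    """One step of Romik's Euclidean algorithm on pairs with p > q >= 0."""
    if q == 0:
        return None
    r = p - 2 * q
    if r > q:
        return (r, q)          # a
    if 0 < r <= q:
        return (q, r)          # b
    return (q, 2 * q - p)      # c, when p - 2q <= 0

def romik_orbit(p, q):
    orbit = [(p, q)]
    while (nxt := romik_step(*orbit[-1])) is not None:
        orbit.append(nxt)
    return orbit
```

For instance, `triple_orbit(5, 3, 4)` descends (5,3,4) → (5,-3,4) → (1,1,0) → (1,-1,0), reaching the terminal form (g,-g,0) with g = 1.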
For the sake of comparison, here is the greatest common divisor of 246 and 113 in the system described in section <ref>: (246,113) → (-20,113)→(20,113)→(20,-73)→(-20,-73)→(-20,33)→(20,33)→ (20,7)→(-6,7)→(6,7)→(6,5)→(4,5)→(4,3)→(2,3)→(2,1)→(0,1), while Romik's algorithm runs more quickly as follows: (246,113) → (113,20)→(73,20)→(33,20)→(20,7)→(7,6)→(6,5)→ (5,4)→(4,3)→(3,2)→(2,1)→(1,0). § LORENTZ QUADRUPLES BY SIZE: ANOTHER DYNAMICAL SYSTEM In contrast to the approach taken so far, it may be desirable to prioritize the feature that the dynamical system moves to the root by travelling to the arithmetically simplest among the adjacent Lorentz quadruples. This leads to a different dynamical system. We say a Lorentz quadruple (a,b,c,d) is normalized if a ≥ b ≥ c ≥ d ≥ 0. We can normalize a quadruple by changing signs of its entries and reordering entries. Order the normalized Lorentz quadruples lexicographically, so that the first, or least, is (1,1,0,0). We will refer to the position of the quadruples in this ordering as their height; quadruples with the same normalization have the same height. The height of a possibly non-normalized Lorentz quadruple is the height of its normalization. The purpose of this section is to construct a dynamical system on quadruples which travels to the root by passing at each step to the adjacent quadruple of least height. The paths to the origin according to this system form a tree. Writing quadruples in normalized form, one obtains a tree organizing quadruples by height which is shown in Figure <ref>. Let _+ ⊆ represent the Lorentz quadruples (a,b,c,d) that satisfy a,b,c,d ≥ 0. We define a dynamical system on _+. Write A = 2a-b-c-d, B = a-c-d, C = a-b-d and D = a-b-c. Define T_D(a,b,c,d) = (|A|, |B|, |C|, |D|) = (A, |B|, |C|, |D|). Observe that A > 0, i.e.
2a > b+c+d for any Lorentz quadruple.Define the seven matrices:D_1 :=[21 -1 -1; -1011; -1 -101; -1 -110 ], D_2 := [2 -11 -1; -10 -11; -1101; -11 -10;],D_3 := [2 -1 -11; -101 -1; -110 -1; -1110;] D_4 := [2 -1 -1 -1; -1011; -1101; -1110 ] D_5 := [2 -111; -10 -1 -1; -110 -1; -11 -10 ] D_6 := [21 -11; -101 -1; -1 -10 -1; -1 -110;] D_7 := [211 -1; -10 -11; -1 -101; -1 -1 -10 ]The dynamical system generates a word M_1M_2⋯ from left-to-right in the D_i, since if T_D(a,b,c,d) = (|A|,|B|,|C|,|D|), then D_i(A,B,C,D)^t = (a,b,c,d)^t for exactly one i (it is not possible that a > c+d, b+d, b+c).In other words exactly one of the D_i undoes T_D.Explicitly, we let D_i be{[ D_1 D,C < 0, B> 0; D_2 D,B < 0, C> 0; D_3 C,B < 0, D> 0; D_4 B,C,D < 0; D_5 D,C > 0, B< 0; D_6 D,B > 0, C< 0; D_7 B,C > 0, D< 0; ]. We give ℒ_+ the structure of a graph as follows:let two quadruples 𝐛_1 and 𝐛_2 be joined by an edge whenever 𝐛_2 is obtained by L_i 𝐛_1, followed by taking absolute values. For any (a,b,c,d) ∈_+, T_D^(n)(a,b,c,d) = 𝐛 for some integer n, and some 𝐛∈{ (1,1,0,0), (1,0,1,0), (1,0,0,1) }. Then the word M = M_1⋯ M_n generated by T_D satisfies (a,b,c,d) = M𝐛. 
Furthermore, the path (T_D^(k)(a,b,c,d))_k=0^n to 𝐛 on _+ is of minimal length in the graph ℒ_+.Finally, the path is the same as the path obtained by always travelling to the adjacent quadruple of smallest height.Label ℒ^+ according to direction of decrease of height.Then * All edges are directed.*the origin vertices 𝐛∈{(1,1,0,0),(1,0,1,0),(1,0,0,1)} have no outward directed edges,*aside from the origins, every vertex has at least one outward directed edge, *the only minimal cycles in the graph are squares, and*any square is isomorphic to∙[r] [d]∙[d]∙[r]∙ Of the adjacent quadruples, the smallest is that with first entry 2a - b - c - d.We have 2a -b-c-d ≥ a if and only if a ≥ b + c+d.However, since a^2 = b^2+c^2+d^2, and we are assuming b,c,d ≥ 0, this would entail b^2+c^2+d^2 ≥ (b+c+d)^2, which can only occur if at least two of b,c,d are zero.Hence, in the primitive case, this only occurs when the vertex is an origin.Therefore the origins have no outward directed edges, but every other vertex does have an outward directed edge.An edge is undirected if and only if the quadruples are the same up to reordering the non-negative quantities b,c,d.The adjacent quadruples have largest entry 2a ± b ± c ± d.Therefore an undirected edge can occur only if a = ± b ± c ± d for some choice of signs.But since a ≥ b,c,d ≥ 0, this is only possible with at most one negative sign.If a=b+c+d, we are in the case of the root, as above.Otherwise, if a=b+c-d, an undirected edge implies that a ≥ b=c ≥ d, and the two quadruples are the same, including ordering, hence the same vertex. This shows that the graph has no undirected edges.Observe that ℒ_+ is by definition a union of images of the Cayley graph 𝒞_L under graph homomorphism. 
In particular, the presentation of the Super-Apollonian group implies that the only minimal cycles in 𝒞_L are squares. So the minimal cycles in ℒ_+ consist of images of squares. Up to permuting the second through fourth coordinates, or changing their signs, the homomorphic image of a square in 𝒞_L is always labelled with first coordinates as follows, where edges are labelled with differences in the direction shown:

    2a-b+c+d ---(a-b+c-d)--> 3a-2b+2c
        ^                        ^
        |(a-b+c+d)               |(a-b+c+d)
        |                        |
        a ------(a-b+c-d)--> 2a-b+c-d

Since in a square of 𝒞_L the edge labels are non-zero, the direction of the edges is determined by these values, and it is evident that parallel edges must be directed in the same way, from which the assertion about directions on squares follows. Finally, the non-existence of triangles in ℒ_+ follows from the diagram above, since such a triangle must be a homomorphic image of a square, and the edge labellings demonstrate that if one side of the square collapses, the opposite side also collapses. By items (<ref>) and (<ref>) of Lemma <ref>, and the well-ordering principle for heights, we know that from any quadruple there is a path to the origin which respects the direction of the graph. The path of greatest decrease is unique since there is a unique adjacent quadruple of least height. This last fact arises as follows: suppose 2a - b - c - d = 2a ± b ± c ± d for some other choice of signs. Then one of b,c,d is zero. If two entries are zero, we are at the root. If one entry is zero, comparing matrices L_i shows that the two quadruples are equal, so there is just one quadruple of least height. Therefore the graph formed of fastest-dropping paths consists of three trees, which together span the graph. It remains to show minimality of the path lengths.
Compare the path of greatest descent to the image of the swap normal form path. Both descend to the origin, and both respect the direction of the graph (as observed in the proof of Theorem <ref>). The moves that take one path to the other consist of square crossings (by Lemma <ref> (<ref>)). Let the weight of a path be given by the sum of 1 for each edge traversed respecting the graph direction, and -1 for each edge traversed against the graph direction. Then square crossings preserve weight, by Lemma <ref> (<ref>), so both paths have equal weight. Since both paths respect graph direction, this implies they are of the same length. § APPENDIX: SUMMARY OF A. L. SCHMIDT'S CONTINUED FRACTIONS Here we give a quick summary of Asmus Schmidt's continued fraction algorithm <cit.>, its ergodic theory <cit.>, and further results of Hitoshi Nakada concerning these <cit.>, <cit.>, <cit.>. We also explicitly relate Schmidt's system to ours. Define the following matrices in PGL(2,ℤ[i]): V_1 =( [ 1 i; 0 1; ]), V_2=( [ 1 0; -i 1; ]), V_3=( [ 1-i i; -i 1+i; ]), E_1 =( [ 1 0; 1-i i; ]), E_2=( [ 1 -1+i; 0 i; ]), E_3=( [ i 0; 0 1; ]), C=( [ 1 -1+i; 1-i i; ]).
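These matrices can be entered as 2×2 arrays of Python complex numbers; the sketch below (helper names are ours) checks, up to the scalar ambiguity of PGL, that conjugation by the order-three elliptic element S = (0,-1; 1,-1) appearing in the relations of the next paragraph permutes the V_i among themselves, permutes the E_i among themselves, and fixes C.

```python
def mul(X, Y):
    """2x2 complex matrix product; matrices are tuples of row tuples."""
    return ((X[0][0]*Y[0][0] + X[0][1]*Y[1][0],
             X[0][0]*Y[0][1] + X[0][1]*Y[1][1]),
            (X[1][0]*Y[0][0] + X[1][1]*Y[1][0],
             X[1][0]*Y[0][1] + X[1][1]*Y[1][1]))

def proj_eq(X, Y):
    """Equality in PGL(2, C): X equals a non-zero scalar multiple of Y."""
    for i in range(2):
        for j in range(2):
            if Y[i][j] != 0:
                u = X[i][j] / Y[i][j]
                return u != 0 and all(abs(X[a][b] - u * Y[a][b]) < 1e-9
                                      for a in range(2) for b in range(2))
    return False

I = 1j
V = [((1, I), (0, 1)), ((1, 0), (-I, 1)), ((1 - I, I), (-I, 1 + I))]
E = [((1, 0), (1 - I, I)), ((1, -1 + I), (0, I)), ((I, 0), (0, 1))]
C = ((1, -1 + I), (1 - I, I))
S = ((0, -1), (1, -1))       # order three in PGL(2, Z[i])
S_inv = ((-1, 1), (-1, 0))

conj_V = [mul(mul(S_inv, M), S) for M in V]
conj_E = [mul(mul(S_inv, M), S) for M in E]
```

Each conjugate in `conj_V` matches exactly one of the V_i (and similarly for the E_i), confirming the cyclic permutation of indices modulo 3.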
Note that SV_iS^-1=V_i+1, SE_iS^-1=E_i+1, SCS^-1=C (indices modulo 3), where S is the order three elliptic element ( [ 0 -1; 1 -1; ]), and that c∘ m∘ c=m^-1 for the Möbius transformations m induced by {V_i,E_i,C} (here c is complex conjugation). In <cit.> Schmidt uses infinite words in these letters to represent complex numbers as infinite products z=∏_nT_n, T_n∈{V_i,E_i,C} in two different ways. Let M_N=∏_n=1^NT_n. We have regular chains M_N=±1⇒ T_N+1∈{V_i,E_i,C}, M_N=± i⇒ T_N+1∈{V_i,C}, representing z in the upper half-plane ℐ (the model circle) and dually regular chains M_N=± i⇒ T_N+1∈{V_i,E_i,C}, M_N=± 1⇒ T_N+1∈{V_i,C}, representing z∈{0≤ x≤1, y≥0, |z-1/2|≥1/2}=:ℐ^* (the model triangle). The model circle is a disjoint union of four triangles and three circles, and the model triangle is a disjoint union of three triangles and one circle (pictured in Figure <ref>): ℐ =𝒱_1∪𝒱_2∪𝒱_3∪ℰ_1∪ℰ_2∪ℰ_3∪𝒞, ℐ^* =𝒱_1^*∪𝒱_2^*∪𝒱_3^*∪𝒞^*, where 𝒱_i=v_i(ℐ), ℰ_i=e_i(ℐ^*), 𝒞=c(ℐ^*), 𝒱_i^*=v_i(ℐ^*), 𝒞^*=c(ℐ) (lowercase letters indicating the Möbius transformation associated to the corresponding matrix). By considering z=∏_nT_n we obtain rational approximations p_i^(N)/q_i^(N) to z by M_N ( [ 1 0 1; 0 1 1; ]) = ( [ p^(N)_1 p^(N)_2 p^(N)_3; q^(N)_1 q^(N)_2 q^(N)_3; ]) (which are the orbits of ∞,0,1 under the partial products m_N=t_1∘…∘ t_N). In <cit.> a quality of approximation is given: if |z-p/q|<1/((1+1/√(2))|q|^2), then p/q is a convergent to z. The shift map T on X=ℐ∪ℐ^*={chains, dual chains} maps X to itself via Möbius transformations, specifically (mapping 𝒱_i, 𝒞^* onto ℐ and 𝒱_i^*, ℰ_i, 𝒞 onto ℐ^*) T(z)= {[ v_i^-1z z∈𝒱_i∪𝒱_i^*; e_i^-1z z∈ℰ_i; c^-1z z∈𝒞∪𝒞^*; ].
The shift T:X→ X is shown to be ergodic (<cit.>) with respect to the following probability measure f̃(z)= {[ 1/2π^2(h(z)+h(sz)+h(s^2z)) z=x+yi∈ℐ; 1/2π1/y^2 z=x+yi∈ℐ^* ]. where h(z)=1/xy-1/x^2arctan(x/y). By inducing to X∖∪_i(V_i∪V_i^*) Schmidt gives “faster” convergents p̂^(n)_α/q̂^(n)_α and a sequence of exponents e_n (1 for E_i, C, and the return time k for V_i^k). He then gives results analogous to those of simple continued fractions via the pointwise ergodic theorem, including the arithmetic and geometric mean of the exponents which exist for almost every z (<cit.>): lim_n→∞(∏_i=1^ne_i)^1/n=1.26…, lim_n→∞1/n∑_i=1^ne_i=1.6667…. In <cit.>, Nakada constructs an invertible extension of T on a space of geodesics in two copies of three-dimensional hyperbolic space. In one copy we take geodesics from I^* to I and in the other the geodesics from I to I^*, where the overline indicates complex conjugation. The regions are pictured in Figure <ref>. The extension acts as Schmidt's T depending on the second coordinate. Nakada doesn't provide a second proof of ergodicity, but quotes Schmidt's result. Also in <cit.>, results about the density of Gaussian rationals p/q that appear as convergents and satisfy |z-p/q|<c/|q|^2 are obtained. For instance, according to <cit.>, for almost every z∈ X and 0<c<1/(1+1/√(2)), it holds that lim_N→∞1/N#{p/q∈Q(i) : p/q=p_i^(n)/q_i^(n), 1≤ n≤ N, i=1,2,3, |z-p/q|<c/|q|^2}=c^2/π. In <cit.> Nakada describes the rate of convergence of Schmidt's convergents. Namely, for almost every z, lim_n→∞1/nlog|q_i^(n)|=E/π, lim_n→∞1/nlog|z-p_i^(n)/q_i^(n)|=-2E/π, E=∑_k=0^∞(-1)^k/(2k+1)^2. For reference, the relationship between the Super-Apollonian Möbius generators {s_i,s_i^⊥} and Schmidt's {v_i,e_i,c} is s_1=c^2∘c, s_2=e_1^2∘c, s_3=e_2^2∘c, s_4=e_3^2∘c, s_1^⊥=1∘c, s_2^⊥=v_1^2∘c, s_3^⊥=v_2^2∘c, s_4^⊥=v_3^2∘c. [P]Price H.L. Price, The Pythagorean Tree: A New Species (2008), <https://arxiv.org/abs/0809.4324>. [Be]Berggren B.
Berggren, Pytagoreiska trianglar, Tidskrift för elementär matematik, fysik och kemi 17 (1934), 129–139. [Ba]Barning F.J.M. Barning, On Pythagorean and quasi-Pythagorean triangles and a generation process with the help of unimodular matrices, Math. Centrum Amsterdam Afd. Zuivere Wisk. ZW-011 (1963), 37 pp. [Bi]B P. Billingsley, Ergodic Theory and Information, J. Wiley, 1965. [EW]EW M. Einsiedler and T. Ward, Ergodic Theory with a view towards Number Theory, Graduate Texts in Mathematics 259 (Springer-Verlag London Limited 2011). [F]F L.R. Ford, On the closeness of approach of complex rational fractions to a complex irrational number, Trans. Amer. Math. Soc. 27 (1925), 146–154. [Fu]Fuchsbull E. Fuchs, Counting problems in Apollonian packings, Bull. Amer. Math. Soc., 50 (2013), 229–266. [GLMWY1]GLMWY0 R.L. Graham, J.C. Lagarias, C.L. Mallows, A.R. Wilks, C.H. Yan, Apollonian circle packings: number theory, Journal of Number Theory 100 (2003), 1–45. [GLMWY2]GLMWY1 R.L. Graham, J.C. Lagarias, C.L. Mallows, A.R. Wilks, C.H. Yan, Apollonian Circle Packings: Geometry and Group Theory I. The Apollonian Group, Discrete Comput. Geom. 34 (2005), no. 4, 547–585. [GLMWY3]GLMWY2 R.L. Graham, J.C. Lagarias, C.L. Mallows, A.R. Wilks, C.H. Yan, Apollonian Circle Packings: Geometry and Group Theory II. Super-Apollonian Group and Integral Packings, Discrete Comput. Geom. 35 (2006), no. 1, 1–36. [LMW]LMW J.C. Lagarias, C.L. Mallows, A.R. Wilks, Beyond the Descartes circle theorem, Amer. Math. Monthly 109 (2002), no. 4, 338–361. [H]Hall A. Hall, Genealogy of Pythagorean triads, Math. Gaz. 54 (1970), 377–379. [Ka]Kanga A.R. Kanga, The family tree of Pythagorean triples, Bull. Inst. Math. Appl. 26 (1990), no. 1-2, 15–17. [Ke]titbit M. Keane, A continued fraction titbit, Symposium in Honor of Benoit Mandelbrot (Curaçao, 1995), Fractals 3 (1995), no. 4, 641–650. [Kh]Kh A. Ya. Khinchin, Continued Fractions, 1997, Dover. [Kon]Kontorovichbull A.
Kontorovich, From Apollonius to Zaremba: Local-global phenomena in thin orbits, Bull. Amer. Math. Soc. 50 (2013), 187–228. [Koc]Kocik J. Kocik, A theorem on circle configurations (2007), <https://arxiv.org/abs/0706.0372>. [N1]N1 H. Nakada, On Ergodic Theory of A. Schmidt's Complex Continued Fractions over Gaussian Field, Mh. Math. 105 (1988), 131–150. [N2]N2 H. Nakada, The metrical theory of complex continued fractions, Acta Arith. 56 (1990), no. 4, 279–289. [N3]N3 H. Nakada, On metrical theory of Diophantine approximation over imaginary quadratic field, Acta Arith. 51 (1988), no. 4, 399–403. [R]R D. Romik, The dynamics of Pythagorean triples, Trans. Amer. Math. Soc. 360 (2008), 6045–6064. [Sa]SAGE SageMath, the Sage Mathematics Software System (Version 7.3),The Sage Developers, 2016, http://www.sagemath.org. [Sc1]S1 A.L. Schmidt, Diophantine Approximation of Complex Numbers, Acta Math. 134 (1975), 1–85. [Sc2]S2 A.L. Schmidt, Ergodic Theory for Complex Continued Fractions, Mh. Math. 93 (1985), 39–62. [Sc3]S3 A.L. Schmidt, Farey triangles and Farey quadrangles in the complex plane, Math. Scand., 21 (1967), 241–295. [Sc4]S4 A.L. Schmidt, Diophantine approximation in the field ℚ(i√(2)), Journal of Number Theory 131 (2011), 1983–2012. [wS]wS W. Schmidt, Diophantine Approximation, Lecture Notes in Mathematics no. 785, 1996, Springer-Verlag.[St1]stange1 K.E. Stange, Visualizing the arithmetic of imaginary quadratic fields, Int. Math. Res. Not. (2017). [St2]stange2 K.E. Stange, The Apollonian structure of Bianchi groups, to appear in Trans. Amer. Math. Soc., <http://arxiv.org/abs/1505.03121>. [V]V L.Y. Vulakh, Diophantine approximation on Bianchi groups, J. Number Theory 54 (1995), 73–80. [W]MATH Wolfram Research, Inc., Mathematica, Version 10.0, Champaign, IL (2014).
arXiv:1703.08616v1 (math.NT, math.DS, math.MG): S. Chaubey, E. Fuchs, R. Hines, K.E. Stange, The Dynamics of Super-Apollonian Continued Fractions, March 24, 2017.
On the Braess Paradox with Nonlinear Dynamics and Control Theory. Rinaldo M. Colombo, INDAM Unit, University of Brescia, Via Branze 38, I–25123 Brescia, Italy (Rinaldo.Colombo@Ing.UniBs.It, http://dm.ing.unibs.it/rinaldo/). Helge Holden, Department of Mathematical Sciences, Norwegian University of Science and Technology, NO–7491 Trondheim, Norway, and Centre of Mathematics for Applications, University of Oslo, P.O. Box 1053, Blindern, NO–0316 Oslo, Norway (holden@math.ntnu.no, http://www.math.ntnu.no/~holden). 2010 Mathematics Subject Classification: Primary 35L65; Secondary 90B20. Partially supported by the Research Council of Norway and by the Fund for International Cooperation of the University of Brescia. We show the existence of the Braess paradox for a traffic network with nonlinear dynamics described by the Lighthill–Whitham–Richards model for traffic flow. Furthermore, we show how one can employ control theory to avoid the paradox. The paper offers a general framework applicable to time-independent, uncongested flow on networks. These ideas are illustrated through examples. § INTRODUCTION Consider the following scenario: We have a simple network consisting of two routes connecting A to B, see Figure <ref>. Each route consists of two roads. Roads a and d are identical, as are roads b and c. Traffic is unidirectional in the direction from A to B. Travel time along roads a and d is given by ρ/100, where ρ is the number of vehicles on that road, while the travel time is 45 for each of roads b and c, irrespective of the number of vehicles on that road. In equilibrium, vehicles will distribute evenly between the two routes connecting A and B, i.e., roads a & b and c & d. Assuming that initially m=4000 vehicles start from A, we find a travel time of 65 along each of the two routes.
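The equilibrium just described is easy to confirm numerically; the sketch below (our discretization over integer splits) finds the split at which the two route times coincide.

```python
M = 4000  # vehicles travelling from A to B

def times(n_ab):
    """Route travel times when n_ab vehicles take route a & b and the rest
    take route c & d: rho/100 on roads a and d, a constant 45 on b and c."""
    return n_ab / 100 + 45, (M - n_ab) / 100 + 45

# Wardrop equilibrium: the split at which the two route times coincide.
eq = min(range(M + 1), key=lambda n: abs(times(n)[0] - times(n)[1]))
```

At the even split each route takes 2000/100 + 45 = 65, as claimed.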
Add a road e as given in Figure <ref>, and assume that the travel time is zero along this road.Drivers will start using the new road, reducing their travel time from 65 to 40. However, as more and more drivers use the new road, their travel time will increase to 80. Now, no driver will have an incentive to use the old roads, i.e., avoiding road e, as the travel time along those roads will be 85. Thus all drivers are worse off than before, in spite of having a new road. This is the Braess paradox in a nutshell: Adding a new road to a network may make travel times worse for all. In both cases the equilibrium is a Wardrop equilibrium (i.e., all routes used have the same travel time, and all unused routes have longer travel times) as well as a Nash equilibrium.This is the simplest example of the Braess paradox, introduced (with a different example) by Braess in 1968 <cit.>, see also <cit.>. This example and some generalizations have been studied in, e.g., <cit.>. In spite of the unrealistic assumptions in the prevalent example above, the paradox has turned out to be ubiquitous and intrinsic to dynamical networks. The paradox also appears in other situations not modeling traffic flow <cit.>, see, e.g., <cit.> for an example involving mesoscopic electron systems, and <cit.> for an example with mechanical springs. Furthermore, the paradox can be reformulated in the context of game theory. In addition, there are well documented examples of the paradox occurring in real-life traffic situations, e.g., in Seoul <cit.> and Stuttgart <cit.>, see also <cit.>. Not surprisingly, the paradox has been well described also in general media, see, e.g., <cit.> and on Wikipedia as well as YouTube. The extensive discussion about the Braess paradox makes a complete reference list impossible, see, however, <cit.>.In this paper we only refer to articles directly related to the research at hand.Here we want to study the Braess paradox with a more realistic nonlinear dynamics. 
More specifically, we want to model unidirectional traffic along roads by a macroscopic model where only densities of vehicles are considered. We believe this to be novel.In this class of models, introduced by Lighthill–Whitham <cit.> and Richards <cit.> (hereafter denoted the LWR model), vehicles, described by a density ρ rather than individually, drive with a velocity determined by the density alone; higher density yields slower speed while low density lets vehicles approach the speed limit. At a maximum density with bumper-to-bumper vehicles, traffic comes to a halt.The dynamics is well described by the nonlinear partial differential equation∂_tρ+∂_x(ρv(ρ))=0,see, e.g., <cit.>.The function q(ρ)=ρ v(ρ) is denoted the flux function, or, in the context of traffic flow, the fundamental diagram. It is in general a concave function that equals zero when ρ vanishes and when ρ equals the maximum possible road density.Hyperbolic conservation laws, as equations of the type (<ref>) are called, have been used to study traffic on a network, starting with Holden and Risebro <cit.>, see, e.g., the book by Garavello and Piccoli <cit.>. Related results on a game theoretic approach to network traffic through the LWR model, see <cit.>. For general theory concerning hyperbolic conservation laws we refer to <cit.>.However, the Braess paradox describes an equilibrium situation, and it is not relevant to include time variation. Rather, we want to study stationary solutions where the velocity is a given function of the density of vehicles on the road. At a junction, the differential equation (<ref>) will in general, if the two roads have different properties, establish a complicated wave pattern, creating waves that emanate from the junction in both directions.However, in the equilibrium situation, this cannot happen, as it would create time-dependent waves. 
Thus, we will set up the example in such a way that no waves are created at junctions.In this paper we analyze the same simple network as described above, but with much more realistic dynamics. More general examples are of course possible using the same methods. However, calculations become more cumbersome and less transparent, and we here focus on presenting the ideas of the model, exemplified on the simple network in Figures <ref> and <ref>. For another approach to the Braess paradox, see, e.g., <cit.>.The prevalence of the Braess paradox is unwanted, and one would like to take measures to prevent its occurrence. In the example in the present paper, we use the velocity of the road e as a control parameter. By properly adjusting the speed limit on road e, one can force the Braess paradox to disappear, and make the social optimum coincide with the Nash equilibrium.This can be illustrated in the simple example in the beginning of the introduction. Given a “benevolent dictator” who wants to reduce the total travel time and reach the social optimum, a short calculation shows that, with m=4000, 1750 vehicles should follow each of the routes a & b and c & d, and the remaining 500 vehicles should follow the route a, e, and d.Although a social optimum, this situation is neither a Wardrop nor a Nash equilibrium.This paper offers a framework applicable to general networks. The input is, in addition to the network itself, the length and velocity fields of each road as well as the influx.We assume that traffic is in the uncongested, or free, phase. This will prevent waves from emanating from the junctions.§ A DYNAMIC VERSION OF THE BRAESS PARADOX §.§ Notation and basic definitions Below, we denote ^+ = [0, +∞) and S^n = {θ∈ [0,1]^n|∑_j θ_j ≤ 1} is the standard simplex in ^n. The sphere centered at θ with radius r is denoted by B_r (θ).Two points A and B are connected through a network of roads. Along each road, traffic is described through the LWR model (<ref>). 
At each junction, the total flow exiting the junction equals the incoming one, so that the total quantity of vehicles is conserved. The macroscopic description obtained solving (<ref>) along each road also provides the full microscopic portrait of the network. Indeed, once ρ = ρ (t,x) is known along the road r connecting, say, the junction at A to that at B, the single vehicle leaving from A at time t_o travels along r according to {[ ẋ = v(ρ(t, x (t))),; x (t_o) = A. ]. The travel time τ_r (t_o) along the road r is then implicitly defined by x (τ_r (t_o)) = B. To compute τ_r (t_o), in general, one has first to provide (<ref>) with initial and boundary data, then solve the resulting initial-boundary value problem to obtain ρ = ρ (t,x), use this latter expression to solve the ordinary differential equation (<ref>) and finally solve the equation (<ref>). Observe that the right-hand side in the ordinary differential equation in (<ref>) is in general discontinuous; nevertheless, in the present setting it is well-posed, see <cit.>. In the present stationary framework, this procedure can be pursued explicitly, as we detail below in Example <ref>. Remark that, in a stationary regime, all travel times are independent of the starting time t_o. For the above travel times to be a reliable measure of the network efficiency, it is necessary that they are independent of any particular initial data. Also, the standard initial-boundary value problem for (<ref>) with zero initial density on the whole network is unsatisfactory, since it would give results that depend on the transient period necessary to fill the network. We are thus bound to select stationary solutions, assigning a constant inflow at A for all times t ∈.
Moreover, to allow for stationary solutions, we also assume that the total flow incoming at any junction never exceeds the total capacity of the roads exiting that junction.In the general LWR model (<ref>), the flux function q = q (ρ) is a concave function that vanishes at zero density and at ρ_M, the maximum density. The flux has a unique maximum for some value ρ_m ∈ (0,ρ_M). As usual, we refer to densities below ρ_m as the uncongested, or free, phase, and for densities above ρ_m as the congested phase. In the remaining part of the paper, to obtain stationary solutions, we need to remain in the free phase only, so that ρ∈ [0, ρ_m] throughout the network. In order to simplify the notation we will use the normalization ρ_m = 1 for all roads. We will not make any assumptions on, or reference to, q above this value. Hence, on the flow function we pose the following assumption: (q) q ∈3 ([0,1]; ^+), q (0) = 0, q' > 0 and q”≤ 0.Clearly, if q satisfies (q), then the speed law v (ρ) = q (ρ) / ρ is well-defined, continuous, strictly positive and weakly decreasing, see Lemma <ref>. As a result, the travel along a road segment is a convex and increasing function of the inflow.Let q satisfy (q) with q”' ≤ 0 and call ϕ = q (1). Then, the travel time τ (θ), which is defined by x(τ (θ)) = B whereẋ = v(ρ (t, x (t))), x (0) = A, ∂_t ρ + ∂_x q (ρ) = 0, q(ρ(t, A)) = θϕ,is of class 2 ([0, 1]; ^+), weakly increasing and convex. The proof follows directly from Lemma <ref>.When γ is a route consisting of the adjacent roads r_1, r_2, r_3, …, the travel time τ_γ (t_o) along γ is then defined as the sum ∑_i τ_r_i of the travel times of all roads.A network consists of several routes connecting A to B. 
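The lemma can be checked on a concrete flux satisfying (q) with q”' ≤ 0; our choice below is q(ρ) = ρ - ρ²/4 on a road of unit length, so that ϕ = q(1) = 3/4. In the stationary regime the density, and hence the speed, is constant along the road, so the travel time reduces to 1/v(ρ(θ)); the sketch verifies monotonicity and convexity by finite differences.

```python
from math import sqrt

def rho(theta):
    """Stationary density solving q(rho) = theta*phi for q(r) = r - r*r/4
    and phi = q(1) = 3/4: the root of r*r - 4*r + 3*theta = 0 lying in [0, 1]."""
    return 2 - sqrt(4 - 3 * theta)

def tau(theta):
    """Travel time on a unit road: 1 / v(rho) with v(r) = q(r)/r = 1 - r/4."""
    return 1 / (1 - rho(theta) / 4)

taus = [tau(k / 100) for k in range(101)]
diffs = [b - a for a, b in zip(taus, taus[1:])]
```

The first differences are positive and increasing, i.e. τ is increasing and convex, in agreement with the lemma; τ(0) = 1 (free flow) and τ(1) = 4/3 (full inflow).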
To describe it, we enumerate each single road (or edge) and construct the matrix Γ setting Γ_ij = 1 if the road r_i belongs to the route γ_j, and 0 otherwise. We now assign a constant total inflow ϕ at A and call θ_j the fraction of the drivers that reach B along the route γ_j. A single road may well belong to more than one route, so that the flow along the road r_i is ϕΓ_i θ = ϕ∑_j Γ_ijθ_j and the travel time along that road is then τ_r_i (Γ_i θ). The total travel time τ_γ_j along the jth route is in general a function of all partition parameters, more precisely τ_γ_j (θ) = ∑_i Γ_ij τ_r_i( Γ_iθ). From a global point of view, it is natural to evaluate the quality of a network through the mean global travel time[Also called average latency of the system or social cost of the network.] T (θ) = ∑_j θ_jτ_γ_j (θ) or, using matrix notation τ_r (Γθ) = [τ_r_1 (Γ_1θ) ⋯ τ_r_n (Γ_nθ)], we find T (θ) = τ_r (Γθ)Γθ. We call globally optimal[Also called social optimum for the system.] a state θ_G ∈ S^n that minimizes T over S^n, i.e., θ_G = argmin_θ∈ S^n T (θ). This social optimum state conforms to Wardrop's Second principle, see <cit.>. Let all road travel times τ_r_1, …, τ_r_m be of class 2 ([0,1]; ^+), weakly increasing and convex. Then, the map T is in 2 ([0,1];^+) and is convex. The proof is deferred to the Appendix. For brevity, we call relevant those travel times τ_i such that θ_i ≠ 0. A state θ̅∈ S^n is an equilibrium state if all relevant travel times coincide, i.e., for all i,j ∈{1, …, n} with θ̅_i ≠ 0 and θ̅_j ≠ 0, τ_i(θ̅) = τ_j(θ̅) = τ̅, the common value τ̅ of the travel times being the equilibrium time. In other words, at equilibrium all drivers need the same time to go from A to B. A common criterion for optimality goes back to Pareto. An equilibrium state θ^P ∈ S^n is a local Pareto point if there exists a positive δ such that for all θ∈ B_δ (θ^P) ∩ S^n, if there exists a j such that τ_γ_j (θ) < τ_γ_j (θ^P), then there exists also a k such that τ_γ_k (θ) > τ_γ_k (θ^P).
In other words, no (small) perturbation of a Pareto point may reduce all travel times. However, from a “selfish” point of view, each driver aims at reducing his/her own travel time. It is then natural to introduce the following definition. An equilibrium state θ^N ∈ S^n is a local Nash point if there exists a positive δ such that for all ϵ∈ (0, δ] and all j,k = 1, …, n, θ^N + ϵe_j - ϵe_k∈ S^n ⟹ τ_γ_j (θ^N + ϵe_j - ϵe_k) > τ_γ_k (θ^N), where e_j is the unit vector directed along the jth axis. In other words, it is not convenient for ϵ drivers to change from route k to route j, for any j,k = 1,…, n. Consider the simple case of the network in Figure <ref>, and assume that its dynamics is described as follows:

Road a: length 3/2, density ρ, model ∂_t ρ + ∂_x (ρ v (ρ)) = 0, flow q (ρ) = (-1+√(1+8ρ))/4.
Road b: length 1, density R, model ∂_t R + ∂_x (R V (R)) = 0, flow Q (R) = -1 + √(1+R).

The maximal inflow ϕ at A that, for any θ∈ [0,1], can be partitioned in θϕ along a and (1-θ)ϕ along b is min{q (1), Q (1)} = √(2)-1. With this constant inflow as left boundary data in (<ref>), the resulting (stationary) densities are ρ = (1 + 2θϕ)θϕ along road a, and R = (2 + (1-θ)ϕ)(1-θ)ϕ along road b. The corresponding constant traffic speeds v (ρ) = (1+2θϕ)^-1 along road a, and V (R) = (2 + (1-θ)ϕ)^-1 along road b, inserted in (<ref>), lead to the following travel times on the two roads: τ_a (θ) = 3(1 + 2θϕ)/2 along road a, and τ_b (1-θ) = 2 + (1-θ)ϕ along road b. Finally, the mean global travel time defined at (<ref>) is T (θ) = 2 + ϕ - ((1+4ϕ)/2)θ + 4θ^2ϕ. According to Definition <ref>, we have a unique Nash point at θ^N and a unique globally optimal state at θ_G, where θ^N = 1 for ϕ∈[0, 1/6) and θ^N = (1+2ϕ)/(8ϕ) for ϕ∈[1/6, √(2)-1], while θ_G = 1 for ϕ∈[0, 1/12) and θ_G = (1+4ϕ)/(16ϕ) for ϕ∈[1/12, √(2)-1]. Clearly, θ^N is also a Pareto point according to Definition <ref>. Note that the globally optimal state may well differ from the Nash optimal one and both depend on the total inflow ϕ, see Figure <ref>.
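The closed forms in this example are easily cross-checked; the sketch below (our code) verifies, for a sample inflow ϕ = 0.3, that the interior Nash point (1+2ϕ)/(8ϕ) equalizes the two road travel times and that the minimizer of T matches (1+4ϕ)/(16ϕ).

```python
def tau_a(theta, phi):
    """Travel time on road a when it carries the flow theta*phi."""
    return 1.5 * (1 + 2 * theta * phi)

def tau_b(theta, phi):
    """Travel time on road b when it carries the flow theta*phi."""
    return 2 + theta * phi

def T(theta, phi):
    """Mean global travel time: the fraction theta of the inflow phi takes
    road a and the remaining fraction 1 - theta takes road b."""
    return theta * tau_a(theta, phi) + (1 - theta) * tau_b(1 - theta, phi)
```

A grid search over θ ∈ [0,1] then recovers the globally optimal state to within the grid spacing.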
§.§ The case of four roads Consider the network in Figure <ref>. The network is given by two routes, denoted α and β, connecting A and B. The route α consists of roads a and b, the route β consists of roads c and d. Roads a and d have the same length ℓ and the same fundamental diagram q. Similarly, roads b and c share the same length L and the same flow density relation. Traffic is always assumed to be unidirectional from A to B, and no obstructions, e.g., traffic lights, are encountered at the junctions. Along each road, the dynamics of traffic is described by the LWR model (<ref>) with flux functions that lead to the travel times τ_a (θ) = τ_d (θ) and τ_b (θ) = τ_c (θ), so that the travel time τ_α (θ) along the route α and τ_β (1-θ) along the route β are τ_α(θ) = τ_a (θ) + τ_b (θ) and τ_β(1-θ) = τ_a (1-θ) + τ_b (1-θ). Then, θ↦τ_α (θ) is (weakly) increasing, while θ↦τ_β (1-θ) is (weakly) decreasing. Since τ_α (1/2) = τ_β (1/2), we have that θ^N = 1/2 is a Nash (and also Pareto) point for this system. It is easy to verify that (θ^N,θ^N) is also globally optimal, since it is the argument that minimizes T (θ_1,θ_2) over the simplex S^2. §.§ The case of five roads We now introduce a new road in Figure <ref>, passing to the network described in Figure <ref>. The new road e, which has the direction from a to d, has length ℓ̃ and its dynamics is characterized by a flow function q̃ satisfying (q). The presence of the road e allows us to consider the route γ connecting A to B consisting of the roads a, e, and d. For all θ_1, θ_2 ∈ [0,1] such that θ_1 + θ_2 ≤ 1, we now let the inflow θ_1ϕ enter α, θ_2ϕ enter β and the remaining (1-θ_1-θ_2)ϕ enter γ. The travel times along the three routes are then: τ_α (θ_1, θ_2) = τ_a (1-θ_2) + τ_b (θ_1), τ_β (θ_1, θ_2) = τ_b (θ_2) + τ_a (1-θ_1), τ_γ (θ_1, θ_2) = τ_a (1-θ_2) + τ_e (1-θ_1-θ_2) + τ_a (1-θ_1). Observe that τ_α (θ, θ) = τ_β (θ, θ). The mean global travel time is T (θ_1,θ_2) = θ_1τ_α (θ_1, θ_2) + θ_2τ_β (θ_1, θ_2) + (1-θ_1-θ_2)τ_γ (θ_1, θ_2).
§.§ The Braess paradox

We now compare the travel times obtained in the two cases described by Figures <ref> and <ref>. To this end, observe that the travel times τ_α^(4) and τ_β^(4) in the case of four roads, and referring to Figure <ref>, are obtained from those in the five-road case by setting

τ_α^(4)(θ) = τ_α(θ, 1-θ) and τ_β^(4)(θ) = τ_β(θ, 1-θ).

Let the travel times τ_a, τ_b, τ_e ∈ C^0([0,1]; ℝ^+) be non decreasing and assume that τ_a or τ_b is not constant. If the travel times defined in (<ref>) satisfy

τ_α(1/2, 1/2) < τ_γ(0,0) < τ_α(0,0),

then:
* θ^N ≡ (0,0) is the unique local Nash point for the network with five roads in Figure <ref>;
* the corresponding equilibrium time τ_γ(0,0) is worse than the globally optimal configuration for the network with four roads in Figure <ref>.

Under the above conditions we have the occurrence of the Braess paradox. Observe that the point θ^P ≡ (1/2, 1/2) is the unique Pareto point for the five-road network. Condition (<ref>) allows us to construct several examples illustrating the Braess paradox. With the notation in Figure <ref>, choose

Roads | Length | Density | Flow
a, d | 1 | ρ | q(ρ) = ln(1+ρ)
b, c | 1 | R | Q(R) = R V (V ∈ ℝ^+)
e | 1 | ρ̃ | q̃(ρ̃) = ρ̃ ṽ (ṽ ∈ ℝ^+)

Condition (<ref>) then becomes

(e^ϕ - 1)/ϕ < 1/V - 1/ṽ < (2/ϕ)(e^ϕ - e^(ϕ/2)),

and, for any ϕ ∈ (0, min{ln 2, V, ṽ}], it can easily be met for suitable V, ṽ, see Figure <ref>.

§ CONTROL THEORY FOR THE NOVEL ROAD — OR HOW TO COPE WITH THE BRAESS PARADOX

Our next aim is proving that in the case of the network in Figure <ref>, a carefully chosen speed limit imposed on the novel road e makes the Nash optimal state coincide with the globally optimal one. We use the same notation as in Section <ref>, but we use the travel time τ̃ along the road e as control parameter. Equivalently, we impose a constant speed along the road e, so that

τ_e(θ_1, θ_2) = τ̃.

The next theorem says that there exists an optimal control. Let the travel times τ_a, τ_b ∈ C^0([0,1]; ℝ^+) be non decreasing and convex, one of the two being strictly convex.
Then, there exists a constant travel time τ̃ ∈ ℝ^+ such that the network in Figure <ref> admits a partition (θ_*, θ_*) which is a Nash optimal state and also globally minimizes the mean global travel time. Thus, by carefully selecting the travel time, or, equivalently, adjusting the maximum speed, one can avoid the occurrence of the Braess paradox. Moreover, the Nash equilibrium is steered to become globally optimal.

§ TECHNICAL DETAILS

Let q satisfy (q). Then, the speed v = v(ρ) defined by

v(ρ) = q'(0) if ρ = 0, and v(ρ) = q(ρ)/ρ if ρ > 0,

is well-defined, continuous in [0, ρ_m], strictly positive and weakly decreasing. Continuity follows from l'Hôpital's rule. By straightforward computations we find

v'(ρ) = (ρ q'(ρ) - q(ρ))/ρ² for ρ > 0, and v'(0) = (1/2) q''(0);
v''(ρ) = q''(ρ)/ρ - 2 q'(ρ)/ρ² + 2 q(ρ)/ρ³ for ρ > 0, and v''(0) = (1/3) q'''(0).

By the concavity of q, we have q'(0) ≥ q(ρ)/ρ ≥ q'(ρ), implying that v' ≤ 0.

Let q satisfy (q). Then, the map θ ↦ ρ(θ) defined by q(ρ(θ)) = θϕ satisfies:
* ρ ∈ C²([0,1]; [0,1]) and ρ(0) = 0;
* ρ'(θ) > 0 and ρ''(θ) ≥ 0 for all θ ∈ [0,1];
* if q is strictly concave, then ρ''(θ) > 0 for all θ ∈ [0,1].

Existence and regularity of ρ are immediate. Moreover, by (q) and q(ρ(θ)) = θϕ, it follows that

ρ(0) = 0, ρ'(θ) = ϕ/q'(ρ(θ)) > 0, ρ''(θ) = -ϕ² q''(ρ(θ))/(q'(ρ(θ)))³ ≥ 0,

and the latter inequality is strict as soon as q is strictly concave.

Let q satisfy (q). Then, the map θ ↦ 1/v(ρ(θ)) is weakly increasing. If, moreover, q'''(ρ) ≤ 0 for all ρ ∈ [0,1], then the map θ ↦ 1/v(ρ(θ)) is convex. Indeed, we find

d/dθ (1/v(ρ(θ))) = - v'(ρ(θ)) ρ'(θ)/(v(ρ(θ)))² ≥ 0.

Moreover, substituting the explicit expressions of v' and ρ' above,

d/dθ (1/v(ρ(θ))) = ( 1/(q(ρ(θ)) q'(ρ(θ))) - ρ(θ)/(q(ρ(θ)))² ) ϕ,

d²/dθ² (1/v(ρ(θ))) = - (2 ρ'(θ) ϕ/(q(ρ(θ)))³) [ (1/2)(q(ρ(θ))/q'(ρ(θ)))² q''(ρ(θ)) + q(ρ(θ)) - ρ(θ) q'(ρ(θ)) ].

Call f(ρ) = (1/2)(q(ρ)/q'(ρ))² q''(ρ) + q(ρ) - ρ q'(ρ).
Observe that f(0) = 0 and

f'(ρ) = (1/2)(q(ρ)/q'(ρ))² q'''(ρ) + (q(ρ) - ρ q'(ρ)) q''(ρ)/q'(ρ) - (q(ρ))² (q''(ρ))²/(q'(ρ))³ ≤ 0,

thereby completing the proof. The assumption that q'''(ρ) ≤ 0 is sufficient, but not necessary, to obtain convexity of the travel time.

Observe that if f ∈ C²(ℝ^+; ℝ) is convex and increasing, then also the map x ↦ x f(x) is convex and increasing. By Lemma <ref>, for all i = 1, …, m, the map ξ ↦ τ_r_i(ξ) ξ is convex for ξ ∈ [0,1]. Hence, also the map θ ↦ ∑_i τ_r_i(θ_i) θ_i is convex for θ ∈ [0,1]^n. Since Γ_ij ∈ {0,1}, also the map θ ↦ T(θ) is convex.

By Definition <ref>, the configuration θ^N with θ^N_1 = θ^N_2 = 0 is clearly an equilibrium, the only relevant time being the equilibrium time

τ̄ = τ_γ(0,0) = 2 τ_a(1) + τ_e(1) = 2 ℓ/v(ρ(1)) + ℓ̃/ṽ(ρ̃(1)).

By (<ref>), it is also a Nash point, since τ_α(0,0) = τ_β(0,0) > τ̄ and, by continuity, the same inequality holds in a neighborhood of θ^N. Assume there exists another equilibrium point θ̄ in the interior of S². Then, by symmetry, θ̄_1 = θ̄_2 and, by Definition <ref>,

τ_b(θ̄_1) - τ_a(1-θ̄_1) = τ_e(1-2θ̄_1).

By assumption, the left-hand side above is a strictly increasing function of θ̄_1, while the right-hand side is weakly decreasing, so that

τ_e(1-2θ̄_1) ≤ τ_e(1) < τ_b(0) + τ_a(0) - 2τ_a(1) ≤ τ_b(0) + τ_a(0) - 2τ_a(0) ≤ τ_b(0) - τ_a(0) ≤ τ_b(θ̄_1) - τ_a(1-θ̄_1),

which contradicts (<ref>).

To complete the proof of the uniqueness of the Nash points, consider the configuration (1,0). In this case, the only relevant time is τ_α(1,0) and

τ_α(1,0) = τ_a(1) + τ_b(1) > τ_a(0) + τ_b(0) = τ_β(1,0),

proving that (1,0) is not a Nash point. The case of (0,1) is entirely analogous. Finally, observe that the globally optimal time for the case of four roads is τ_α(1/2,1/2) = τ_β(1/2,1/2) and the leftmost bound in (<ref>) allows us to complete the proof.

Let the travel times τ_a, τ_b ∈ C^0([0,1]; ℝ^+) be non decreasing and convex, at least one of the two being strictly convex.
Then, there exists a map Θ ∈ C^0(ℝ^+; [0, 1/2]) such that the partition (Θ(τ̃), Θ(τ̃)) is the point of global minimum of the mean travel time T defined in (<ref>), (<ref>), (<ref>) over S^n.

The travel time T is convex by Proposition <ref>. By symmetry, its minimum is attained at a point (θ, θ) and, if θ ∈ (0, 1/2), this point satisfies d/dθ T(θ,θ) = 0. Straightforward computations give

T(θ,θ) = 2(1-θ) τ_a(1-θ) + 2θ τ_b(θ) + (1-2θ) τ̃,
d/dθ T(θ,θ) = 2( -τ_a(1-θ) - (1-θ) τ_a'(1-θ) + τ_b(θ) + θ τ_b'(θ) - τ̃ ),
d²/dθ² T(θ,θ) = 2( 2 τ_a'(1-θ) + (1-θ) τ_a''(1-θ) + 2 τ_b'(θ) + θ τ_b''(θ) ),

hence d²/dθ² T(θ,θ) > 0, which shows that the map θ ↦ T(θ,θ) is strictly convex. Hence it admits a unique point of minimum Θ(τ̃) in (0, 1/2). The standard Implicit Function Theorem ensures that Θ is continuous.

Let the travel times τ_a, τ_b ∈ C^0([0,1]; ℝ^+) be non decreasing and convex, at least one of the two being strictly convex. Then, there exists a map T̃ ∈ C^0([0, 1/2]; ℝ^+) such that assigning the travel time T̃(θ) on road e makes the configuration (θ,θ) the unique local Nash point in the sense of Definition <ref>.

Given θ ∈ [0, 1/2], we seek a τ̃ such that (θ,θ) is an equilibrium point. To this aim, we solve

τ_α(θ,θ) = τ_β(θ,θ) and τ_α(θ,θ) = τ_γ(θ,θ).

By symmetry considerations, the former equality is certainly satisfied for any θ ∈ [0, 1/2]. The latter is equivalent to:

τ_a(1-θ) + τ_b(θ) = 2 τ_a(1-θ) + τ̃.

Therefore, we set

T̃(θ) = τ_b(θ) - τ_a(1-θ) if τ_b(θ) ≥ τ_a(1-θ), and T̃(θ) = 0 if τ_b(θ) < τ_a(1-θ).
By construction, (θ,θ) is an equilibrium configuration in the sense of Definition <ref>, once the travel time τ̃ along the road e is set equal to T̃(θ). When θ ∈ (0, 1/2), to prove that (θ,θ) is a local Nash point, thanks to the present symmetries, it is sufficient to check that for all small ϵ > 0 we have

τ_α(θ+ϵ, θ) > τ_γ(θ,θ),
τ_α(θ+ϵ, θ-ϵ) > τ_β(θ,θ),
τ_γ(θ-ϵ, θ) > τ_α(θ,θ),

or, equivalently,

τ_b(θ+ϵ) - τ_b(θ) + τ_a(1-θ) - τ_a(1-θ-ϵ) > 0,
τ_a(1-θ+ϵ) - τ_a(1-θ-ϵ) + τ_b(θ+ϵ) - τ_b(θ-ϵ) > 0,
τ_a(1-θ+ϵ) - τ_a(1-θ) > 0,

and all these inequalities hold by the monotonicity of the travel times.

Let Θ and T̃ be the maps defined in Lemma <ref> and Lemma <ref>, respectively. Define

Υ: [0, 1/2] → [0, 1/2], Υ = Θ ∘ T̃,

and call θ_* a fixed point for Υ. By construction, (θ_*, θ_*) is a local Nash point, once τ̃_* = T̃(θ_*) is fixed as the travel time along road e.

[ArnottSmall] R. Arnott and K. Small. Dynamics of traffic congestion. Amer. Scientist 82 (1994) 446–455.
[Baker] L. Baker. Removing roads and traffic lights speeds urban travel. Scientific American, January 28, 2009.
[BraessParadox] D. Braess. Über ein Paradoxon aus der Verkehrsplanung. Unternehmensforschung 12 (1968) 258–268. English translation: On a paradox of traffic planning. Transp. Science 39 (2005) 446–450.
[BressanHan2012] A. Bressan and K. Han. Nash equilibria for a model of traffic flow with several groups of drivers. ESAIM Control Optim. Calc. Var. 18:4 (2012) 969–986.
[BressanHan2013] A. Bressan and K. Han. Existence of optima and equilibria for traffic flow on networks. Netw. Heterog. Media 8:3 (2013) 627–648.
[ColomboMarson] R. M. Colombo and A. Marson. A Hölder continuous ODE related to traffic flow. Proc. Roy. Soc. Edinburgh Sect. A 133:4 (2003) 759–772.
[CohenHorowitz] J. E. Cohen and P. Horowitz. Paradoxical behaviour of mechanical and electrical networks. Nature 352 (1991) 699–701.
[DafermosNagurney] S. Dafermos and A. Nagurney. On some traffic equilibrium theory paradoxes. Transp. Science 18B (1984) 101–110.
[Easley] D. Easley and J.
Kleinberg. Networks, Crowds, and Markets: Reasoning about a Highly Connected World. Cambridge University Press, 2010.
[Frank] M. Frank. The Braess paradox. Math. Programming 20 (1981) 283–302.
[GaravelloPiccoli] M. Garavello and B. Piccoli. Traffic Flow on Networks. American Institute of Mathematical Sciences, 2006.
[HagstromAbrams] J. N. Hagstrom and R. A. Abrams. Characterizing Braess's paradox for traffic networks. In: Proceedings of the IEEE 2001 Conference on Intelligent Transportation Systems, pp. 837–842.
[HoldenRisebro_network] H. Holden and N. H. Risebro. A mathematical model of traffic flow on a network of unidirectional roads. SIAM J. Math. Anal. 26 (1995) 999–1017.
[HoldenRisebro] H. Holden and N. H. Risebro. Front Tracking for Hyperbolic Conservation Laws. Springer-Verlag, New York, 2007, second corrected printing.
[Knodel] W. Knödel. Graphentheoretische Methoden und ihre Anwendungen. Springer-Verlag, 1969.
[Kolata] G. Kolata. What if they closed 42nd Street and nobody noticed? New York Times, December 25, 1990.
[LighthillWhitham] M. J. Lighthill and G. B. Whitham. On kinematic waves. II. A theory of traffic flow on long crowded roads. Proc. Roy. Soc. London Ser. A 229 (1955) 317–345.
[NagurneyBoyce] A. Nagurney and D. Boyce. Preface to "On a paradox of traffic planning". Transp. Science 39 (2005) 443–445.
[Pala] M. G. Pala, S. Baltazar, P. Liu, H. Sellier, B. Hackens, F. Martins, V. Bayot, X. Wallart, L. Desplanque, and S. Huant. Transport inefficiency in branched-out mesoscopic networks: An analog of the Braess paradox. Phys. Rev. Lett. 108 (2012) 076802.
[Richards] P. I. Richards. Shock waves on the highway. Operations Res. 4 (1956) 42–51.
[Roughgarden] T. Roughgarden. Selfish Routing and the Price of Anarchy. MIT Press, Cambridge, 2005.
[Roughgarden_paper] T. Roughgarden. On the severity of Braess's paradox: Designing networks for selfish users is hard. J. Comput. Syst. Sci. 72 (2006) 922–953.
[RoughgardenTardos] T. Roughgarden and É. Tardos. How bad is selfish routing? J. ACM 49 (2002) 236–259.
[SteinbergZangwill] R.
Steinberg and W. I. Zangwill. The prevalence of Braess' paradox. Transp. Science 17 (1983) 301–318.
[Vidal] J. Vidal. Heart and soul of the city. The Guardian, November 1, 2006.
[Wardrop] J. G. Wardrop. Some theoretical aspects of road traffic research. In: Proceedings of the Institution of Civil Engineers, Part II, Vol. 1, pp. 325–378, 1952.
[YounGastnerJeong] H. Youn, M. T. Gastner, and H. Jeong. Price of anarchy in transportation networks: Efficiency and optimality control. Phys. Rev. Lett. 101 (2008) 128701. Erratum, loc. cit. 102 (2009) 049905.
arXiv:1703.09803v1 [math.AP] — Rinaldo M. Colombo and Helge Holden, "On the Braess Paradox with Nonlinear Dynamics and Control Theory", March 27, 2017.
IEEE TRANSACTIONS ON INFORMATION THEORY

Adversarial Source Identification Game with Corrupted Training

Mauro Barni, Fellow, IEEE, Benedetta Tondi, Student Member, IEEE

M. Barni is with the Department of Information Engineering and Mathematics, University of Siena, Via Roma 56, 53100 - Siena, ITALY, phone: +39 0577 234850 (int. 1005), e-mail: barni@dii.unisi.it; B. Tondi is with the Department of Information Engineering and Mathematics, University of Siena, Via Roma 56, 53100 - Siena, ITALY, e-mail: benedettatondi@gmail.com.

We study a variant of the source identification game with training data in which part of the training data is corrupted by an attacker. In the addressed scenario, the defender aims at deciding whether a test sequence has been drawn according to a discrete memoryless source X ∼ P_X, whose statistics are known to him through the observation of a training sequence generated by X. In order to undermine the correct decision under the alternative hypothesis that the test sequence has not been drawn from X, the attacker can modify a sequence produced by a source Y ∼ P_Y up to a certain distortion, and corrupt the training sequence either by adding some fake samples or by replacing some samples with fake ones.
We derive the unique rationalizable equilibrium of the two versions of the game in the asymptotic regime, assuming that the defender bases its decision only on the first-order statistics of the test and training sequences. By mimicking Stein's lemma, we derive the best achievable performance for the defender when the type I error probability is required to tend to zero exponentially fast with an arbitrarily small, yet positive, error exponent. We then use such a result to analyze the ultimate distinguishability of any two sources as a function of the allowed distortion and the fraction of corrupted samples injected into the training sequence.

Hypothesis testing, adversarial signal processing, cybersecurity, game theory, source identification, optimal transportation theory, earth mover distance, adversarial learning, Sanov's theorem.

§ INTRODUCTION

Adversarial Signal Processing (AdvSP) is an emerging discipline aiming at modelling the interplay between a defender wishing to carry out a certain processing task, and an attacker aiming at impeding it <cit.>. Binary decision in an adversarial setup is one of the most recurrent problems in AdvSP, due to its importance in many application scenarios. Among binary decision problems, source identification is one of the most studied subjects, since it lies at the heart of several security-oriented disciplines, like multimedia forensics, anomaly detection, traffic monitoring, steganalysis and so on.

The source identification game has been introduced in <cit.> to model the interplay between the defender and the attacker by resorting to concepts drawn from game and information theory. According to the model put forward in <cit.>, the defender and the attacker have a perfect knowledge of the to-be-distinguished sources. In <cit.> the analysis is pushed a step forward by considering a scenario in which the sources are known only through the observation of a training sequence.
Finally, <cit.> introduces the security margin concept, a synthetic parameter characterising the ultimate distinguishability of two sources under adversarial conditions. In this paper, we extend the analysis further by considering a situation in which the attacker may interfere with the learning phase by corrupting part of the training sequence. Adversarial learning is a rather novel concept, which has been studied for some years from a machine learning perspective <cit.>. Due to the natural vulnerability of machine learning systems, in fact, the attacker may gain a significant advantage if no countermeasures are adopted by the defender. The use of a training sequence to gather information about the statistics of the to-be-distinguished sources can be seen as a very simple learning mechanism, and the analysis of the impact that an attack carried out in such a phase has on the performance of a decision system may help shed new light on this important problem. To be specific, we extend the game-theoretic framework introduced in <cit.> and <cit.> to model a situation in which the attacker is given the possibility of corrupting part of the training sequence. By adopting a game-theoretic perspective, we derive the optimal strategy for the defender and the optimal corruption strategy for the attacker when the lengths of the training sequence and the observed sequence tend to infinity. Given such optimum strategies, expressed in the form of a game equilibrium point, we analyse the best achievable performance when the type I and II error probabilities tend to zero exponentially fast. Specifically, we study the distinguishability of the sources as a function of the fraction of training samples corrupted by the attacker and when the test sequence can be modified up to a certain distortion level.
The results of the analysis are summarised in terms of blinding corruption level, defined as the fraction of corrupted samples making a reliable distinction between the two sources impossible, and security margin, defined as the maximum distortion of the observed sequence for which a reliable distinction is possible (see <cit.>). The analysis is applied to two different scenarios wherein the attacker is allowed, respectively, to add a certain amount of fake samples to the training sequence and to selectively replace a fraction of the samples of the training sequence with fake samples. As we will see, the second case is more favourable to the attacker, since a lower distortion and a lower number of corrupted training samples are enough to prevent a correct decision.

Given the above general framework, the main results proven in this paper can be summarised as follows:

* We rigorously define the source identification game with addition of corrupted training samples ( game) and show that such a game is a dominance solvable game admitting an asymptotic equilibrium point when the lengths of the training and test sequences tend to infinity (Theorem <ref> and following discussion in Section <ref>);
* We evaluate the payoff of the game at the equilibrium and derive the expression of the indistinguishability region, defined as the region containing the sources Y that cannot be distinguished from X because of the attack (Theorems <ref> and <ref>, Section <ref>);
* Given any two sources X and Y, we derive the security margin and the blinding corruption level, defined as the maximum distortion introduced into the test sequence and the maximum fraction of fake training samples introduced by the attacker still allowing the distinction of X and Y while ensuring positive error exponents for the two kinds of errors of the test (Theorem <ref> and Definition <ref> in Section <ref>);
* We repeat the entire analysis for the source identification game with selective replacement of training samples ( game), and compare the two versions of the game (Theorem <ref> and subsequent discussion in Section <ref>);
* The main proofs of the paper rely on a generalised version of Sanov's theorem <cit.>, which is proven in Appendix <ref>. In fact, Theorem <ref>, and its use to simplify some of the proofs in the paper, can be seen as a further methodological contribution of our work.

This paper considerably extends the analysis presented in <cit.>, by providing a formal proof of the results anticipated in <cit.>[We also give a more precise formulation of the problem, by correcting some inaccuracies present in <cit.>.] and makes a step forward by studying a more complex corruption scenario in which the attacker has the freedom to replace a given percentage of the training samples rather than simply adding some fake samples to the original training sequence.

The paper is organised as follows. Section <ref> summarises the notation used throughout the paper, gives some definitions and introduces some basic concepts of Game Theory that will be used in the sequel. Section <ref> gives a rigorous definition of the  game, explaining the rationale behind the various assumptions made in the definition. In Section <ref>, we prove the main theorems of the paper regarding the asymptotic equilibrium point of the  game and the payoff at the equilibrium. Section <ref> leverages on the results proven in Section <ref> to introduce the concepts of blinding corruption level and security margin, evaluating them in the setting provided by the  game. Section <ref> introduces and solves the  game, paying attention to compare the results of the analysis with the corresponding results of the  game. The paper ends in Section <ref>, with a summary of the main results proven in the paper and the description of possible directions for future work.
In order to avoid burdening the main body of the paper, the most technical details of the proofs are gathered in the Appendix.

§ NOTATION AND DEFINITIONS

In this section, we introduce the notation and definitions used throughout the paper. We will use capital letters to indicate discrete memoryless sources (e.g. X). Sequences of length n drawn from a source will be indicated with the corresponding lowercase letters (e.g. x^n); accordingly, x_i will denote the i-th element of a sequence x^n. The alphabet of an information source will be indicated by the corresponding calligraphic capital letter (e.g. 𝒳). The probability mass function (pmf) of a discrete memoryless source X will be denoted by P_X. The calligraphic letter 𝒫 will be used to indicate the class of all the probability mass functions, namely, the probability simplex in ℝ^|𝒳|. The notation P_X will be also used to indicate the probability measure ruling the emission of sequences from a source X, so we will use the expressions P_X(a) and P_X(x^n) to indicate, respectively, the probability of symbol a ∈ 𝒳 and the probability that the source X emits the sequence x^n, the exact meaning of P_X being always clearly recoverable from the context wherein it is used. We will use the notation P_X(A) to indicate the probability of A (be it a subset of 𝒳 or 𝒳^n) under the probability measure P_X. Finally, the probability of a generic event will be denoted by Pr{·}.

Our analysis relies extensively on the concepts of type and type class, defined as follows (see <cit.> and <cit.> for more details). Let x^n be a sequence with elements belonging to a finite alphabet 𝒳. The type P_x^n of x^n is the empirical pmf induced by the sequence x^n, i.e. ∀ a ∈ 𝒳, P_x^n(a) = (1/n) ∑_i=1^n δ(x_i, a), where δ(x_i, a) = 1 if x_i = a and zero otherwise. In the following, we indicate with 𝒫^n the set of types with denominator n, i.e. the set of types induced by sequences of length n. Given P ∈ 𝒫^n, we indicate with T(P) the type class of P, i.e.
the set of all the sequences in 𝒳^n having type P. We denote by 𝒟(P||Q) the Kullback-Leibler (KL) divergence between two distributions P and Q defined on the same finite alphabet 𝒳 <cit.>:

𝒟(P||Q) = ∑_a ∈ 𝒳 P(a) log_2 (P(a)/Q(a)).

Most of our results are expressed in terms of the generalised log-likelihood ratio function h (see <cit.>), which for any two given sequences x^n and t^m is defined as:

h(P_x^n, P_t^m) = 𝒟(P_x^n || P_r^n+m) + (m/n) 𝒟(P_t^m || P_r^n+m),

where P_r^n+m denotes the type of the sequence r^n+m obtained by concatenating x^n and t^m, i.e. r^n+m = x^n t^m. The intuitive meaning behind the above definition is that P_r^n+m is the pmf which maximises the probability that a memoryless source generates two independent sequences belonging to T(P_x^n) and T(P_t^m), and that such a probability is equal to 2^-n h(P_x^n, P_t^m) at the first order in the exponent (see <cit.> or Lemma 1 in <cit.>).

Throughout the paper, we will need to compute limits and distances in 𝒫. We can do so by choosing one of the many available distances defined over ℝ^|𝒳| and for which 𝒫 is a bounded set, for instance the L_p distance, for which we have:

d_L_p(P,Q) = ( ∑_a ∈ 𝒳 |P(a) - Q(a)|^p )^1/p.

Without loss of generality, we will prove all our results by adopting the L_1 distance, the generalisation to different L_p metrics being straightforward. In the sequel, distances between pmf's in 𝒫 will be simply indicated as d(·, ·) as a shorthand for d_L_1(·, ·)[Throughout the paper, we will use the symbol d(·, ·) to indicate both the distortion between two sequences in 𝒳^n and the L_1 distance between two pmf's in 𝒫, the exact meaning being always clear from the context.].

We also need to introduce the Hausdorff distance as a way to measure distances between subsets of a metric space <cit.>. Let S be a generic space and d a distance measure defined over S. For any point x ∈ S and any non-empty subset A ⊆ S, the distance of x from the subset A is defined as:

d(x,A) = inf_a ∈ A d(a,x).
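These quantities are straightforward to evaluate on raw sequences; the sketch below (our helper names; base-2 logarithms as in the definition of 𝒟) computes types, KL divergences and the generalised log-likelihood ratio h.

```python
from collections import Counter
import math

def emp_type(seq, alphabet):
    # empirical pmf (type) of a sequence over a finite alphabet
    n = len(seq)
    counts = Counter(seq)
    return {a: counts[a] / n for a in alphabet}

def kl(P, Q):
    # D(P || Q) in bits, with the convention 0 * log(0/q) = 0
    return sum(p * math.log2(p / Q[a]) for a, p in P.items() if p > 0)

def h(x, t, alphabet):
    # generalised log-likelihood ratio between the types of x^n and t^m
    n, m = len(x), len(t)
    P_x, P_t = emp_type(x, alphabet), emp_type(t, alphabet)
    P_r = emp_type(list(x) + list(t), alphabet)  # type of the concatenation r^{n+m}
    return kl(P_x, P_r) + (m / n) * kl(P_t, P_r)
```

As expected from the definition, h vanishes exactly when the two sequences share the same type and grows as their empirical pmfs diverge.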
Given the above definition, the Hausdorff distance between any two subsets of S is defined as follows. For any two subsets A and B of S, let us define δ_B(A) = sup_b ∈ B d(b,A). The Hausdorff distance δ_H(A,B) between A and B is given by:

δ_H(A,B) = max{δ_A(B), δ_B(A)}.

If the sets A and B are bounded with respect to d, then the Hausdorff distance always takes a finite value. The Hausdorff distance does not define a true metric, but only a pseudometric, since δ_H(A,B) = 0 implies that the closures of the sets A and B coincide, namely cl(A) = cl(B), but not necessarily that A = B. For this reason, in order for δ_H to be a metric, we need to restrict its definition to closed subsets[Note that in this case the inf and sup operations involved in the definition of the Hausdorff distance can be replaced with min and max, respectively.]. Let then ℒ(S) denote the space of non-empty closed and bounded subsets of S and let δ_H: ℒ(S) × ℒ(S) → [0, ∞). Then, the space ℒ(S) endowed with the Hausdorff distance is a metric space <cit.> and we can give the following definition: Let {K_n} be a sequence of closed and bounded subsets of S, i.e., K_n ∈ ℒ(S) ∀ n. We use the notation K_n H→ K to indicate that the sequence has limit in (ℒ(S), δ_H) and the limiting set is K.

§.§ Basic notions of Game Theory

In this section, we introduce some basic notions and definitions of Game Theory. A 2-player game is defined as a quadruple (𝒮_1, 𝒮_2, u_1, u_2), where 𝒮_1 = {s_1,1, …, s_1,n_1} and 𝒮_2 = {s_2,1, …, s_2,n_2} are the sets of strategies the first and the second player can choose from, and u_l(s_1,i, s_2,j), l = 1,2, is the payoff of the game for player l when the first player chooses the strategy s_1,i and the second chooses s_2,j. A pair of strategies (s_1,i, s_2,j) is called a profile. When u_1(s_1,i, s_2,j) = -u_2(s_1,i, s_2,j), the win of a player is equal to the loss of the other and the game is said to be a zero-sum game. The sets 𝒮_1, 𝒮_2 and the payoff functions are assumed to be known to both players.
Throughout the paper we consider strategic games, i.e., games in which the players choose their strategies beforehand without knowing the strategy chosen by the opponent player. The final goal of game theory is to determine the existence of equilibrium points, i.e. profiles that in some sense represent the best choice for both players <cit.>. The most famous notion of equilibrium is due to Nash. A profile is said to be a Nash equilibrium if no player can improve its payoff by changing its strategy unilaterally. Despite its popularity, the practical meaning of Nash equilibrium is often unclear, since there is no guarantee that the players will end up playing at the equilibrium. A particular kind of games for which stronger forms of equilibrium exist are the so-called dominance solvable games <cit.>. To be specific, a strategy is said to be strictly dominant for one player if it is the best strategy for the player, i.e., the strategy which corresponds to the largest payoff, no matter how the other player decides to play. When one such strategy exists for one of the players, he will surely adopt it. In a similar way, we say that a strategy s_l,i is strictly dominated by strategy s_l,j if the payoff achieved by player l choosing s_l,i is always lower than that obtained by playing s_l,j, regardless of the choice made by the other player. The recursive elimination of dominated strategies is a common technique for solving games. In the first step, all the dominated strategies are removed from the set of available strategies, since no rational player would ever play them. In this way, a new, smaller game is obtained. At this point, some strategies that were not dominated before may be dominated in the remaining game, and hence are eliminated. The process goes on until no dominated strategy exists for any player. A rationalizable equilibrium is any profile which survives the iterated elimination of dominated strategies <cit.>.
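The iterated elimination procedure can be sketched in a few lines of code; the payoff table below (a standard Prisoner's Dilemma, chosen by us purely for illustration) yields a dominance solvable game whose unique surviving profile is (D, D).

```python
def dominated(own, other, u):
    # strategies in `own` that are strictly dominated by some other strategy
    # of the same player; u(s, o) is that player's payoff when he plays s
    # and the opponent plays o
    return {s for s in own
            if any(t != s and all(u(t, o) > u(s, o) for o in other) for t in own)}

def iterated_elimination(S1, S2, u1, u2):
    # recursively remove strictly dominated strategies of both players
    S1, S2 = set(S1), set(S2)
    while True:
        d1, d2 = dominated(S1, S2, u1), dominated(S2, S1, u2)
        if not d1 and not d2:
            return S1, S2
        S1, S2 = S1 - d1, S2 - d2

# Prisoner's Dilemma payoffs: 'D' strictly dominates 'C' for both players
U = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}
u = lambda s, o: U[(s, o)]
```

Running `iterated_elimination({'C', 'D'}, {'C', 'D'}, u, u)` leaves only the profile (D, D), which is therefore the only rationalizable equilibrium of this toy game.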
If at the end of the process only one profile is left, the remaining profile is said to be the only rationalizable equilibrium of the game. The corresponding strategies are the only rational choice for the two players and the game is said to be dominance solvable.

§ SOURCE IDENTIFICATION GAME WITH ADDITION OF CORRUPTED TRAINING SAMPLES ()

In this section, we give a rigorous definition of the source identification game with addition of corrupted training samples. Given a discrete and memoryless source X ∼ P_X and a test sequence v^n, the goal of the defender (D) is to decide whether v^n has been drawn from X (hypothesis H_0) or not (alternative hypothesis H_1). By adopting a Neyman-Pearson perspective, we assume that D must ensure that the false positive error probability (P_fp), i.e., the probability of rejecting H_0 when H_0 holds (type I error), is lower than a given threshold. Similarly to the previous versions of the game studied in <cit.> and <cit.>, we assume that D relies only on first order statistics to make a decision. For mathematical tractability, as in earlier papers, we study the asymptotic version of the game when n → ∞, by requiring that P_fp decays exponentially fast when n increases, with an error exponent at least equal to λ, i.e. P_fp ≤ 2^-λn. On its side, the attacker aims at increasing the false negative error probability (P_fn), i.e., the probability of accepting H_0 when H_1 holds (type II error). Specifically, A takes a sequence y^n drawn from a source Y ∼ P_Y and modifies it in such a way that D decides that the modified sequence z^n has been generated by X. In doing so, A must respect a distortion constraint requiring that the average per-letter distortion between y^n and z^n is lower than L.

Players A and D know the statistics of X through a training sequence; however, the training sequence can be partly corrupted by A. Depending on how the training sequence is modified by the attacker, we can define different versions of the game.
In this paper, we focus on two possible cases: in the first case, hereafter referred to as source identification game with addition of corrupted samples, the attacker can add some fake samples to the original training sequence. In the second case, analysed in Section <ref>, the attacker can replace some of the training samples with fake values (source identification game with replacement of training samples). It is worth stressing that, even if the goal of the attacker is to increase the false negative error probability, the training sequence is corrupted regardless of whether H_0 or H_1 holds; hence, in general, this part of the attack also affects the false positive error probability. As it will be clear later on, this forces the defender to adopt a worst case perspective to ensure that P_fp is surely lower than 2^-λn.

As to Y, we assume that the attacker knows P_Y exactly. For a proper definition of the payoff of the game, we also assume that D knows P_Y. This may seem a too strong assumption; however, we will show later on that the optimum strategy of D does not depend on P_Y, thus allowing us to relax the assumption that D knows P_Y. With the above ideas in mind, we are now ready to give a formal definition of the  game.

§.§ Structure of the game

A schematic representation of the  game is given in Fig. <ref>. Let τ^m_1 be a sequence drawn from X. We assume that τ^m_1 is accessible to A, who corrupts it by concatenating to it a sequence of fake samples τ^m_2. Then A reorders the overall sequence in a random way so as to hide the position of the fake samples. Note that reordering does not alter the statistics of the training sequence, since the sequence is supposed to be generated from a memoryless source[By using the terminology introduced in <cit.>, the above scenario can be referred to as a causative attack with control over training data.].
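This addition-and-shuffle corruption is easy to simulate; in the sketch below (our naming; `draw_fake` is a placeholder for any fake-sample generator) m_2 is chosen so that the final fraction of fakes equals α, and the shuffle hides their positions without altering the first-order statistics of the sequence.

```python
import random

def corrupt_training(tau_m1, draw_fake, alpha, seed=0):
    """Append m2 fake samples to the genuine part tau^{m1}, then shuffle.

    alpha = m2 / (m1 + m2) is the fraction of fakes in the final
    training sequence, hence m2 = alpha * m1 / (1 - alpha).
    """
    rng = random.Random(seed)
    m1 = len(tau_m1)
    m2 = round(alpha * m1 / (1.0 - alpha))
    t = list(tau_m1) + [draw_fake(rng) for _ in range(m2)]
    rng.shuffle(t)  # reordering leaves the type of t^m unchanged
    return t
```

For instance, with m_1 = 90 genuine samples and α = 0.1, the routine appends m_2 = 10 fakes, yielding a corrupted training sequence of length m = 100.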
In the following, we denote by m the final length of the training sequence (m = m_1 + m_2), and by α = m_2/(m_1+m_2) the fraction of fake samples within it. The corrupted training sequence observed by D is indicated by t^m. Finally, we assume a linear relationship between the lengths of the test and the corrupted training sequence, i.e. m = cn, for some constant value c[In this paper, we are interested in studying the equilibrium point of the source identification game when the length of the test and training sequences tend to infinity. Strictly speaking, we should ensure that when n grows, all the quantities m, m_1 and m_2 are integer numbers for the given c and α. In practice, we will neglect such an issue, since when n grows the ratios m/n and m_2/(m_1 + m_2) can approximate any real values c and α. More rigorously, we could consider only rational values of c and α, and focus on subsequences of n including only those values for which m/n = c and m_2/(m_1 + m_2) = α.]. The goal of D is to decide if an observed sequence v^n has been drawn from the same source that generated t^m (H_0) or not (H_1). We assume that D knows that a certain percentage of samples in the training sequence are corrupted, but he has no clue about the position of the corrupted samples. The attacker can also modify the sequence generated by Y so as to induce a decision error. The corrupted sequence is indicated by z^n. With regard to the two phases of the attack, we assume that A first corrupts the training sequence and then modifies the sequence y^n. This means that, in general, z^n will depend both on y^n and t^m, while t^m (notably τ^m_2) does not depend on y^n.
Stated another way, the corruption of the training sequence can be seen as a preparatory part of the attack, whose goal is to ease the subsequent camouflage of y^n. For a formal definition of the  game, we must define the sets of strategies available to D and A (respectively _D and _A) and the corresponding payoffs.

§.§ Defender's strategies

The basic assumption behind the definition of the space of strategies available to D is that, to make his decision, D relies only on the first order statistics of v^n and t^m. This assumption is equivalent to requiring that the acceptance region for hypothesis H_0, hereafter referred to as Λ^n × m, is a union of pairs of type classes[We use the superscript n × m to indicate explicitly that Λ^n × m refers to n-long test sequences and (m = cn)-long training sequences.], or, equivalently, pairs of types (P,R), where P ∈^n and R ∈^m. To define Λ^n × m, D follows a Neyman-Pearson approach, requiring that the false positive error probability is lower than a certain threshold. Specifically, we require that the false positive error probability tends to zero exponentially fast with a decay rate at least equal to λ. Given that the pmf P_X ruling the emission of sequences under H_0 is not known, and given that the corruption of the training sequence is going to impair D's decision under H_0, we adopt a worst-case approach and require that the constraint on the false positive error probability holds for all possible P_X and for all the possible strategies available to the attacker. Given the above setting, the space of strategies available to D is defined as follows: _D = {Λ^n × m⊂^n ×^m: max_P_X ∈𝒫 max_s ∈_A P_fp ≤ 2^-λ n}, where the inner maximization is performed over all the strategies available to the attacker. We will refine this definition at the end of the next section, after the exact definition of the space of strategies of the attacker.

§.§ Attacker's strategies

With regard to A, the attack consists of two parts.
Given a sequence y^n drawn from P_Y, and the original training sequence τ^m_1, the attacker first generates a sequence of fake samples τ^m_2 and mixes them up with those in τ^m_1, producing the training sequence t^m observed by D. Then he transforms y^n into z^n, trying to generate a pair of sequences (z^n, t^m)[While reordering is essential to hide the position of the fake samples from D, it does not have any impact on the position of (z^n, t^m) with respect to Λ^n × m, since we assumed that the defender bases his decision only on the first order statistics of the observed sequences. For this reason, we omit the reordering operator σ in the attacking procedure.] whose types belong to Λ^n × m. In doing so, he must ensure that d(y^n, z^n) ≤ nL for some distortion function d. Let us consider the corruption of the training sequence first. Given that the defender bases his decision only on the type of t^m, we are only interested in the effect that the addition of the fake samples has on P_t^m. By considering the different lengths of τ^m_1 and τ^m_2, we have: P_t^m = α P_τ^m_2 + (1-α) P_τ^m_1, where P_t^m∈^m, P_τ^m_1∈^m_1 and P_τ^m_2∈^m_2. The first part of the attack, then, is equivalent to choosing a pmf in ^m_2 and mixing it with P_τ^m_1. By the same token, it is reasonable to assume that the choice of the attacker depends only on P_τ^m_1 rather than on the single sequence τ^m_1. Arguably, the best choice of the pmf in ^m_2 will depend on P_Y, since the corruption of the training sequence is instrumental in letting the defender think that a sequence generated by Y has been drawn from the same source that generated t^m. To describe the part of the attack applied to the test sequence, we follow the approach used in <cit.>, based on transportation theory <cit.>. Let us indicate by n(i,j) the number of times that the i-th symbol of the alphabet is transformed into the j-th one as a consequence of the attack.
Similarly, let S^n_YZ(i,j) = n(i,j)/n be the relative frequency with which such a transformation occurs. In the following, we refer to S^n_YZ as the transportation map. For any additive distortion measure, the distortion introduced by the attack can be expressed in terms of n(i,j) and S^n_YZ. In fact, we have: d(y^n, z^n) = ∑_i,j n(i,j) d(i,j), d(y^n, z^n)/n = ∑_i,j S^n_YZ(i,j) d(i,j), where d(i,j) is the distortion introduced when symbol i is transformed into symbol j. The map S^n_YZ also determines the type of the attacked sequence. In fact, by indicating with P_z^n(j) the relative frequency of symbol j in z^n, we have: P_z^n(j) = ∑_i S^n_YZ(i,j) ≜ S^n_Z(j). Finally, we observe that the attacker cannot change more symbols than there are in the sequence y^n; as a consequence, a map S^n_YZ can be applied to a sequence y^n only if S^n_Y(i) ≜∑_j S^n_YZ(i,j) = P_y^n(i). Sometimes we find it convenient to make explicit the dependence of the map chosen by the attacker on the types of t^m and y^n, and hence we will also adopt the notation S^n_YZ(P_t^m, P_y^n). By remembering that Λ^n × m depends on v^n only through its type, and given that the type of the attacked sequence depends on y^n only through S^n_YZ, we can define the second phase of the attack as the choice of a transportation map among all admissible maps, a map being admissible if: S^n_Y = P_y^n and ∑_i,j S^n_YZ(i,j) d(i,j) ≤ L. Hereafter, we will refer to the set of admissible maps as ^n(L, P_y^n). With the above ideas in mind, the set of strategies of the attacker can be defined as follows: _A = _A,T×_A,O, where _A,T and _A,O indicate, respectively, the part of the attack affecting the training sequence and the observed sequence, and are defined as: _A,T = { Q(P_τ^m_1): ^m_1→^m_2}, _A,O = { S^n_YZ(P_y^n, P_t^m): ^n×^m→^n(L, P_y^n) }. Note that the first part of the attack (_A,T) is applied regardless of whether H_0 or H_1 holds, while the second part (_A,O) is applied only under H_1.
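The admissibility conditions above are easy to check numerically. The sketch below (a toy example with a hypothetical three-symbol alphabet, the cost d(i,j) = |i - j|, and illustrative numbers of our own choosing) verifies that a candidate map has row marginal equal to the type of y^n, computes the induced per-letter distortion, and recovers the type of the attacked sequence from the column marginal:

```python
import numpy as np

# Hypothetical 3-symbol alphabet with per-letter distortion d(i, j) = |i - j|.
K = 3
d = np.abs(np.subtract.outer(np.arange(K), np.arange(K))).astype(float)

P_y = np.array([0.5, 0.3, 0.2])   # type of the attacker's sequence y^n (illustrative)

# Candidate transportation map S(i, j): fraction of symbols i turned into j.
S = np.array([[0.4, 0.1, 0.0],
              [0.0, 0.3, 0.0],
              [0.0, 0.1, 0.1]])

# Admissibility: the row marginal must equal the type of y^n...
assert np.allclose(S.sum(axis=1), P_y)

# ...and the induced per-letter distortion must stay below L.
L = 0.3
distortion = float((S * d).sum())
assert distortion <= L            # here: 0.2 <= 0.3

# The column marginal is the type of the attacked sequence z^n.
P_z = S.sum(axis=0)
assert np.allclose(P_z, [0.4, 0.5, 0.1])
```

The two marginal computations mirror the constraints S^n_Y = P_y^n and the distortion budget that define the admissible set ^n(L, P_y^n).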
We also stress that the choice of Q(P_τ^m_1) depends only on the training sequence τ^m_1, while the transportation map used in the second phase of the attack is a function of both y^n and τ^m_1 (through t^m). Finally, we observe that, with these definitions, the set of strategies of the defender can be redefined by explicitly indicating that the constraint on the false positive error probability must be verified for all possible choices of Q(·) ∈_A,T, since this is the only part of the attack affecting P_fp. Specifically, we can rewrite (<ref>) as _D = {Λ^n × m⊂^n ×^m: max_P_X max_Q(·) ∈_A,T P_fp ≤ 2^-λ n}.

§.§ Payoff

The payoff is defined in terms of the false negative error probability, namely: u(Λ^n × m, (Q(·), S^n_YZ(·, ·))) = -P_fn. Of course, D aims at maximising u while A wants to minimise it.

§.§ The  game with targeted corruption ( game)

The  game is difficult to solve directly because of the two-step attacking strategy. We will work around this difficulty by first tackling a slightly different version of the game, namely the source identification game with targeted corruption of the training sequence, , depicted in Fig. <ref>. Whereas the strategies available to the defender remain the same, for the attacker the choice of Q(·) is targeted to the counterfeiting of a given sequence y^n. In other words, we will assume that the attacker corrupts the training sequence τ^m_1 to ease the counterfeiting of a specific sequence y^n rather than to increase the probability that the second part of the attack succeeds. This means that the part of the attack aiming at corrupting the training sequence also depends on y^n, that is: _A,T = {Q(P_τ^m_1, P_y^n): ^m_1×^n →^m_2}.
Even if this setup is not very realistic and is more favourable to the attacker, who can exploit the exact knowledge of y^n (rather than its statistical properties) also for the corruption of the training sequence, in the next section we will show that, for large n, the  game is equivalent to the non-targeted version of the game we are interested in. With the above ideas in mind, the  game is formally defined as follows.

§.§.§ Defender's strategies

_D = {Λ^n × m⊂^n ×^m: max_P_X max_Q(·,·) ∈_A,T P_fp ≤ 2^-λ n}.

§.§.§ Attacker's strategies

_A = _A,T×_A,O with _A,T and _A,O defined as in (<ref>) and (<ref>), respectively.

§.§.§ Payoff

The payoff is still equal to the false negative error probability: u(Λ^n × m, (Q(·, ·), S^n_YZ(·, ·))) = -P_fn.

§ ASYMPTOTIC EQUILIBRIUM AND PAYOFF OF THE  AND  GAMES

In this section, we derive the asymptotic equilibrium point of the  and the  games when the length of the test and training sequences tends to infinity, and evaluate the payoff at the equilibrium.

§.§ Optimum defender's strategy

We start by deriving the asymptotically optimum strategy for D. As we will see, a strategy that is both dominant and universal with respect to P_Y exists for D. In other words, the optimum choice of D depends on neither the strategy chosen by the attacker nor P_Y. In addition, since the constraint on the false positive probability must be satisfied for all the attacker's strategies, the optimum strategy for the defender is the same for both the targeted and non-targeted versions of the game. As a first step, we look for an explicit expression of the false positive error probability. Such a probability depends on P_X and on the strategy used by A to corrupt the training sequence. In fact, the mapping of y^n into z^n does not have any impact on D's decision under H_0. We carry out our derivations by focusing on the game with targeted corruption.
It will be clear from our analysis that the dependence on y^n has no impact on P_fp, and hence the same results hold for the game with non-targeted corruption. For a given P_X and Q(·, ·), P_fp is equal to the probability that Y generates a sequence y^n and X generates two sequences x^n and τ^m_1, such that the pair of type classes (P_x^n, α Q(P_τ^m_1, P_y^n) + (1-α) P_τ^m_1) falls outside Λ^n × m. Such a probability can be expressed as: P_fp = Pr{(P_x^n, α Q(P_τ^m_1, P_y^n) + (1-α) P_τ^m_1) ∈Λ̅^n× m} = ∑_P_y^n∈^n P_Y(T(P_y^n)) ·∑_(P_x^n, P_t^m) ∈Λ̅^n× m P_X(T(P_x^n)) ·∑_P_τ^m_1∈^m_1: α Q(P_τ^m_1, P_y^n) + (1-α) P_τ^m_1 = P_t^m P_X(T(P_τ^m_1)), where Λ̅^n × m is the complement of Λ^n × m, and where we have exploited the fact that under H_0 the training sequence τ^m_1 and the test sequence x^n are generated independently by X. Given the above formulation, the set of strategies available to D can be rewritten as: _D = {Λ^n × m : max_P_X max_Q(·, ·)∑_P_y^n∈^n P_Y(T(P_y^n)) ·∑_(P_x^n, P_t^m) ∈Λ̅^n× m P_X(T(P_x^n)) ·∑_P_τ^m_1∈^m_1: α Q(P_τ^m_1, P_y^n) + (1-α) P_τ^m_1 = P_t^m P_X(T(P_τ^m_1)) ≤ 2^-λ n}. We are now ready to prove the following lemma, which describes the asymptotically optimum strategy for the defender for both versions of the game. Let Λ^n × m,* be defined as follows: Λ^n × m,* = {(P_v^n, P_t^m) : min_Q ∈^m_2 h(P_v^n, (P_t^m-α Q)/(1-α)) ≤ λ - δ_n} with δ_n = || log[(n+1)((1-α)nc+1)]/n, where || is the cardinality of the source alphabet and where the minimisation over Q is limited to all the Q's such that P_t^m-α Q is nonnegative for all the symbols in . Then:

* max_P_X max_s ∈_A P_fp ≤ 2^-n(λ - ν_n), with lim_n → ∞ ν_n = 0;

* ∀Λ^n × m ∈ _D, we have Λ̅^n × m⊆Λ̅^n × m,*.

To prove the first part of the lemma, we see that from the expression of the false positive error probability given by eq.
(<ref>), we can write: max_P_X max_Q(·,·) P_fp ≤ max_P_X∑_P_y^n∈^n P_Y(T(P_y^n)) ·∑_(P_x^n, P_t^m)∈Λ̅^n × m,* P_X(T(P_x^n)) ·max_Q(·, ·)∑_P_τ^m_1∈^m_1: α Q(P_τ^m_1, P_y^n) + (1-α) P_τ^m_1 = P_t^m P_X(T(P_τ^m_1)). Let us consider the term within the inner summation. For each P_τ^m_1 such that α Q(P_τ^m_1, P_y^n) + (1-α) P_τ^m_1 = P_t^m, we have[It is easy to see that the bound (<ref>) holds also for the non-targeted game, when Q depends on the training sequence only (Q(P_τ^m_1)).]: P_X(T(P_τ^m_1)) ≤ max_Q ∈^m_2 P_X( T ( (P_t^m - α Q)/(1 - α) ) ), with the understanding that the maximisation is carried out only over the Q's such that P_t^m - α Q is nonnegative for all the symbols in . Thanks to the above observation, we can upper bound the false positive error probability as follows: max_P_X max_Q(·, ·) P_fp ≤ max_P_X∑_P_y^n∈^n P_Y(T(P_y^n)) ·∑_(P_x^n, P_t^m)∈Λ̅^n × m,* P_X(T(P_x^n)) · |^m_1| ·max_Q ∈^m_2 P_X ( T ( (P_t^m - α Q)/(1 - α) ) ) (a)= max_P_X∑_(P_x^n, P_t^m)∈Λ̅^n × m,* P_X(T(P_x^n)) |^m_1| max_Q ∈^m_2 P_X ( T ( (P_t^m - α Q)/(1 - α) ) ) ≤ |^m_1|∑_(P_x^n, P_t^m)∈Λ̅^n × m,* max_Q ∈^m_2 max_P_X P_X(T(P_x^n)) P_X ( T ( (P_t^m - α Q)/(1 - α) ) ), where in (a) we exploited the fact that the rest of the expression no longer depends on P_y^n. From this point, the proof goes along the same lines as the proof of Lemma 2 in <cit.>, by observing that max_P_X P_X(T(P_x^n)) P_X ( T ( (P_t^m - α Q)/(1 - α) ) ) is upper bounded by 2^-n h(P_x^n, (P_t^m - α Q)/(1 - α)), and that for each pair of types in Λ̅^n× m,*, h(P_x^n, (P_t^m - α Q)/(1 - α)) is larger than λ - δ_n for every Q by the very definition of Λ^n× m,*. We now pass to the second part of the lemma. Let Λ^n × m be a strategy in _D, and let (P_x^n, P_t^m) be a pair of types contained in Λ̅^n × m.
Given that Λ^n × m is an admissible decision region (see (<ref>)), the probability that X emits a test sequence belonging to T(P_x^n) and a training sequence τ^m_1 such that after the attack (τ^m_1 || τ^m_2) ∈ T(P_t^m) must be lower than 2^-λ n for all P_X and all possible attacking strategies, that is: 2^-λ n > max_P_X max_Q(·, ·)∑_P_y^n∈^n P_Y(T(P_y^n)) ·[ P_X(T(P_x^n)) ·∑_P_τ^m_1 : α Q(P_τ^m_1, P_y^n) + (1-α) P_τ^m_1 = P_t^m P_X(T(P_τ^m_1))] (a)= max_P_X∑_P_y^n∈^n P_Y(T(P_y^n)) ·[ P_X(T(P_x^n)) ·max_Q(·, P_y^n)∑_P_τ^m_1 : α Q(P_τ^m_1, P_y^n) + (1-α) P_τ^m_1 = P_t^m P_X(T(P_τ^m_1)) ] (b)≥ max_P_X∑_P_y^n∈^n P_Y(T(P_y^n)) ·[ P_X(T(P_x^n)) ·max_Q(P_τ^m_1, P_y^n) P_X ( T ( (P_t^m-α Q(P_τ^m_1, P_y^n))/(1-α) ) ) ] (c)= max_P_X P_X(T(P_x^n)) max_Q ∈^m_2 P_X( T ( (P_t^m-α Q)/(1-α) ) ), where (a) is obtained by replacing the maximisation over all possible strategies Q(·,·) with a maximisation over Q(·,P_y^n) for each specific P_y^n, and (b) is obtained by considering only one term P_τ^m_1 of the inner summation and optimising Q(P_τ^m_1,P_y^n) for that term. Finally, (c) follows by observing that the optimum Q(·,P_y^n) is the same for any P_y^n.
As usual, the maximization over Q in the last expression is restricted to the Q's for which P_t^m-α Q ≥ 0 for all the symbols in [It is easy to see that the same lower bound can be derived also for the non-targeted case, as the optimum Q in the second-to-last expression does not depend on P_y^n.]. By lower bounding the probability that a memoryless source X generates a sequence belonging to a certain type class (see <cit.>, chapter 12), we can continue the above chain of inequalities as follows: 2^-λ n > max_P_X max_Q ∈^m_2 2^-n[(P_x^n||P_X)+m_1/n( (P_t^m-α Q)/(1-α) || P_X)]/(n+1)^||(m_1+1)^|| ≥ 2^-n min_Q ∈^m_2 min_P_X[(P_x^n||P_X)+m_1/n( (P_t^m-α Q)/(1-α) || P_X)] /(n+1)^||(m_1+1)^|| (a)= 2^-n min_Q ∈^m_2 h( P_x^n , (P_t^m - α Q)/(1 - α))/(n+1)^||(m_1+1)^||, where (a) derives from the minimization properties of the generalised log-likelihood ratio function h() (see Lemma 1 in <cit.>). By taking the log of both sides we have: min_Q ∈^m_2 h( P_x^n , (P_t^m - α Q)/(1 - α)) > λ - δ_n, thus completing the proof of the lemma. Lemma 1 shows that the strategy Λ^n × m,* is asymptotically admissible (point 1) and optimal (point 2), regardless of the attack. From a game-theoretic perspective, this means that such a strategy is a dominant strategy for D and implies that the game is dominance solvable <cit.>. Moreover, the optimum strategy is a semi-universal one, since it depends on P_X but does not depend on P_Y. It is clear from the proof of Lemma 1 that the same optimum strategy holds for the targeted and non-targeted versions of the game. The situation is rather different with regard to the optimum strategy for the attacker. Despite the existence of a dominant strategy for the defender, in fact, the identification of the optimum attacker's strategy for the  game is not easy due to the two-step nature of the attack. For this reason, in the following sections, we will focus on the targeted version of the game, which is easier to study.
We will then use the results obtained for the  game to derive the best achievable performance for the case of a non-targeted attack.

§.§ The  game: optimum attacker's strategy and equilibrium point

Given the dominant strategy of D, for any given τ^m_1 and y^n, the optimum attacker's strategy for the  game boils down to the following double minimisation: (Q^*(P_τ^m_1, P_y^n), S^n,*_YZ(P_y^n, P_t^m)) = min_Q ∈^m_2, S^n_YZ∈^n(L, P_y^n)(min_Q' h( P_z^n , ((1-α)P_τ^m_1 + α Q - α Q')/(1 - α)) ), where P_z^n is obtained by applying the transportation map S^n_YZ to P_y^n, and where P_t^m = (1-α)P_τ^m_1 + α Q. As usual, the minimisation over Q' is limited to the Q' such that all the entries of the resulting pmf are nonnegative. As a remark, for L = 0 (corruption of the training sequence only), we get: Q^*(P_τ^m_1,P_y^n) = min_Q ∈^m_2[ min_Q' h( P_y^n , P_τ^m_1 + α/(1 - α)(Q - Q')) ], while, for α = 0 (classical setup, without corruption of the training sequence), we have: S^n,*_YZ(P_y^n, P_t^m) = min_S^n_YZ∈^n(L, P_y^n) h(P_z^n, P_t^m), falling back to the known case of source identification with uncorrupted training, already studied in <cit.>. Having determined the optimum strategies of both players, it is immediate to state the following: The  game is a dominance solvable game, whose only rationalizable equilibrium corresponds to the profile (Λ^n × m,*, (Q^*(·, ·), S^n,*_YZ(·, ·))). The theorem is a direct consequence of the fact that Λ^n × m,* is a dominant strategy for D. We remind that the concept of rationalizable equilibrium is much stronger than the usual notion of Nash equilibrium, since the strategies corresponding to such an equilibrium are the only ones that two rational players may adopt <cit.>.
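To make the defender's test concrete, the sketch below implements the generalized log-likelihood ratio h_c (in the form used in the asymptotic analysis: h_c(P,P') = D(P||U) + c D(P'||U) with U = (P + cP')/(1+c)) and approximates the acceptance condition min_Q h(P_v^n, (P_t^m - αQ)/(1-α)) ≤ λ by a brute-force grid search over the fake-sample pmf Q. The alphabet size, the grid resolution, and all numerical values are illustrative assumptions, and the vanishing correction δ_n is ignored:

```python
import itertools
import numpy as np

def kl(p, q):
    """Kullback-Leibler divergence D(p || q) in bits, with 0 log 0 = 0."""
    mask = p > 0
    return float(np.sum(p[mask] * np.log2(p[mask] / q[mask])))

def h_c(P, R, c):
    """Generalized log-likelihood ratio h_c(P, R) = D(P||U) + c D(R||U),
    with U = (P + c R) / (1 + c)."""
    U = (P + c * R) / (1 + c)
    return kl(P, U) + c * kl(R, U)

def accepts(P_v, P_t, c, alpha, lam, grid=20):
    """Accept H0 iff the minimum over a grid of fake-sample pmfs Q of
    h_c(P_v, (P_t - alpha*Q) / (1 - alpha)) stays below lam."""
    K = len(P_v)
    best = np.inf
    for q in itertools.product(range(grid + 1), repeat=K - 1):
        if sum(q) > grid:
            continue
        Q = np.array(list(q) + [grid - sum(q)]) / grid
        cand = P_t - alpha * Q
        if np.all(cand >= -1e-12):          # P_t - alpha*Q must stay nonnegative
            best = min(best, h_c(P_v, np.clip(cand, 0, None) / (1 - alpha), c))
    return best <= lam

P_v = np.array([0.5, 0.3, 0.2])    # type of the test sequence (illustrative)
P_t = np.array([0.4, 0.4, 0.2])    # type of the corrupted training sequence
# Q = (0, 0.8, 0.2) makes (P_t - alpha*Q)/(1 - alpha) match P_v exactly,
# so the worst-case minimum is essentially 0 and the pair is accepted.
assert accepts(P_v, P_t, c=1.0, alpha=0.2, lam=0.05)
```

The grid search stands in for the exact minimisation over ^m_2; it is only meant to show how the worst-case guess over the removed samples enters the defender's decision.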
§.§ The  game: payoff at the equilibrium

In this section, we derive the asymptotic value of the payoff at the equilibrium, to see who is going to win the game and under which conditions. To start with, we identify the set of pairs (P_y^n,P_τ^m_1) for which, as a consequence of A's action, D accepts H_0: Γ^n(λ,α, L) = {( P_y^n, P_τ^m_1) : ∃ (P_z^n, P_t^m) ∈Λ^n × m,* s.t. P_t^m = (1 - α) P_τ^m_1 + α Q and P_z^n = S_Z^n for some Q ∈^m_2 and S_YZ^n ∈(L, P_y^n)}. If we fix the type of the non-corrupted training sequence (P_τ^m_1), we obtain: Γ^n( P_τ^m_1,λ,α,L) = {P_y^n: ∃ P_z^n∈Λ^n,*((1 - α) P_τ^m_1 + α Q) s.t. P_z^n = S_Z^n for some Q ∈^m_2 and S_YZ^n ∈(L, P_y^n)}, where Λ^n,*(P) denotes the acceptance region for a fixed type of the training sequence in ^m. It is interesting to notice that, since in the current setting A has two degrees of freedom, the attack has a twofold effect: the sequence y^n is modified in order to bring it inside the acceptance region Λ^n,*(P_t^m), and the acceptance region itself is modified so as to facilitate the former action. To go on, we find it convenient to rewrite the set Γ^n(P_τ^m_1,λ,α,L) as follows: Γ^n( P_τ^m_1, λ,α,L) = { P_y^n: ∃ S_PV^n ∈ (L, P_y^n) s.t. S_V^n ∈ Γ^n_0(P_τ^m_1, λ, α)}, where Γ^n_0(P_τ^m_1, λ,α) = {P_y^n: ∃ Q ∈ ^m_2 s.t. P_y^n∈Λ^n,*((1 - α) P_τ^m_1 + αQ) } is the set containing all the test sequences (or, equivalently, test types) for which it is possible to corrupt the training set in such a way that they fall within the acceptance region. As the subscript 0 suggests, this set corresponds to the set in (<ref>) when A cannot modify the sequence drawn from Y (i.e. L = 0) and then tries to hamper the decision by corrupting the training sequence only.
By considering the expression of the acceptance region, the set Γ^n_0(P_τ^m_1, λ, α) can be expressed in a more explicit form as follows: Γ^n_0(P_τ^m_1,λ,α) = {P_y^n: ∃ Q,Q' ∈ ^m_2 s.t. h(P_y^n, P_τ^m_1 +α/(1 - α) (Q - Q')) ≤ λ - δ_n}, where the second argument of h() denotes a type in ^m_1 obtained from the original training sequence τ^m_1 by first adding m_2 samples and later removing (in a possibly different way) the same number of samples. Note that in this formulation Q accounts for the fake samples introduced by the attacker and Q' for the worst-case guess made by the defender of the position of the corrupted samples. We also observe that, since we are treating the  game, in general Q will depend on P_y^n. As usual, we implicitly assume that Q and Q' are chosen in such a way that P_τ^m_1 +α/(1 - α) (Q - Q') is nonnegative and smaller than or equal to 1 for all the alphabet symbols. We are now ready to derive the asymptotic payoff of the game by following a path similar to that used in <cit.>, <cit.>. First of all, we generalise the definition of the sets Λ^n × m,*, Γ^n and Γ^n_0 so that they can be evaluated for a generic pmf in  (that is, without requiring that the pmf's are induced by sequences of finite length). This step passes through the generalization of the h function. Specifically, given any pair of pmf's (P,P') ∈×, we define: h_c(P,P') = (P||U) + c (P'||U); U = 1/(1+c) P + c/(1+c) P', where c ∈ [0,1]. Note that when (P,P') ∈^n ×^n, h_c(P,P') = h(P,P'). The asymptotic version of Λ^n× m,* is: Λ^* = {(P, R) : min_Q h_c (P, (R - α Q)/(1-α)) ≤ λ}. In a similar way, we can derive the asymptotic versions of Γ^n and Γ^n_0 in (<ref>) and (<ref>)-(<ref>). To do so, we first observe that the transportation map S_YZ^n depends on the sources only through the pmfs.
By denoting with S_PV^n a transportation map from a pmf P ∈^n to another pmf V ∈^n, and rewriting the set Γ^n accordingly, we can easily derive the asymptotic version of the set as follows: Γ(R, λ,α,L) = {P ∈: ∃ S_PV∈(L, P) s.t. V ∈Γ_0(R, λ, α)}, with Γ_0(R,λ,α) = {P ∈: ∃ Q ∈ s.t. P ∈Λ^*((1 - α) R + α Q) } = {P ∈: ∃ Q,Q' ∈ s.t. h_c(P, R +α/(1 - α) (Q - Q')) ≤ λ}, where the definitions of S_PV and 𝒜(L,P) derive from those of S_PV^n and 𝒜^n(L,P) by relaxing the requirement that the terms S_PV(i,j) and P(i) are rational numbers with denominator n. We now have all the necessary tools to prove the following theorem. For the  game, the false negative error exponent at the equilibrium is given by ε = min_R [ (1-α)c (R || P_X) + min_P ∈Γ(R, λ, α, L) (P || P_Y)]. Accordingly:

* if P_Y ∈ Γ(P_X, λ, α, L), then ε = 0;

* if P_Y ∉ Γ(P_X, λ, α, L), then ε > 0.

The theorem could be proven going along the same lines as the proof of Theorem 4 in <cit.>. We instead provide a proof based on the extension of Sanov's theorem provided in the Appendix (see Theorem <ref>). In fact, Theorem <ref>, as well as Theorem 4 in <cit.>, can be seen as an application of such a generalized version of Sanov's theorem. Let us consider P_fn = ∑_(P_y^n, P_τ^m_1) ∈Γ^n(λ, α, L) P_X(T(P_τ^m_1)) P_Y(T(P_y^n)) = ∑_R ∈^m_1 P_X(T(R)) ∑_P∈Γ^n(R,λ, α,L) P_Y(T(P)) = ∑_R ∈^m_1 P_X(T(R)) P_Y(Γ^n(R,λ, α,L)). We start by deriving an upper bound on the false negative error probability. We can write: P_fn ≤ ∑_R ∈^m_1 P_X(T(R)) ∑_P ∈Γ^n(R, λ, α, L) 2^- n (P || P_Y) ≤ ∑_R ∈^m_1 P_X (T(R)) (n +1)^|𝒳| 2^- n min_P ∈Γ^n (R, λ,α, L)(P || P_Y) ≤ ∑_R ∈^m_1 P_X (T(R)) (n + 1)^|𝒳| 2^-n min_P ∈Γ(R, λ, α, L)(P || P_Y) ≤ (n + 1)^|𝒳| (m_1 + 1)^|𝒳| · 2^- n min_R ∈^m_1 [m_1/n(R || P_X) + min_P ∈Γ(R, λ, α, L)( P ||P_Y)] ≤ (n + 1)^|𝒳| (m_1 + 1)^|𝒳| · 2^- n min_R ∈ [(1-α)c(R || P_X) + min_P ∈Γ(R, λ, α, L)( P ||P_Y)], where the use of the minimum instead of the infimum is justified by the fact that Γ^n(R, λ, α, L) and Γ(R, λ, α, L) are compact sets.
By taking the log and dividing by n, we find: - log P_fn/n ≥ min_R ∈[ (1-α)c ( R || P_X) + min_P ∈Γ(R, λ, α, L)( P || P_Y)] - β_n, where β_n = || log[(n+1)((1 - α)nc +1)]/n tends to 0 when n tends to infinity. We now turn to the analysis of a lower bound for P_fn. Let R^* be the pmf achieving the minimum in the outer minimisation of eq. (<ref>). Due to the density of the rational numbers in the reals, we can find a sequence of pmfs R_m_1∈^m_1 (m_1 = (1-α) nc) that tends to R^* when n (and hence m_1) tends to infinity. We can write: P_fn = ∑_R ∈^m_1 P_X(T(R)) P_Y (Γ^n(R, λ, α,L)) ≥ P_X(T(R_m_1)) P_Y (Γ^n(R_m_1, λ, α, L)) ≥ 2^- m_1 (R_m_1 || P_X)/(m_1+1)^|| P_Y (Γ^n(R_m_1, λ, α, L)), where in the first inequality we have replaced the sum with the single element of the subsequence R_m_1 defined previously, and where the second inequality derives from the well-known lower bound on the probability of a type class <cit.>. From (<ref>), by taking the log and dividing by n, we obtain: -log P_fn/n ≤ (1-α)c (R_m_1 || P_X) - 1/n log P_Y(Γ^n(R_m_1, λ, α, L)) + β_n', where β_n' = || log(m_1+1)/n tends to 0 when n tends to infinity. In order to compute the probability P_Y(Γ^n(R_m_1, λ, α, L)), we resort to Corollary <ref> of the generalised version of Sanov's theorem given in Appendix <ref>. To apply the corollary, we must show that Γ^n(R_m_1, λ, α, L) H→Γ(R^*, λ, α, L). First of all, we observe that, by exploiting the continuity of the h_c function and the density of the rational numbers in the reals, it is easy to prove that Γ_0^n(R_m_1, λ, α) H→Γ_0(R^*, λ, α). Then the Hausdorff convergence of Γ^n(R_m_1, λ, α, L) to Γ(R^*, λ, α, L) follows from the regularity properties of the set of transportation maps stated in Appendix <ref>. To see how, we observe that any transformation S_PV∈Å(L,P) mapping P into V can be applied in inverse order through the transformation S_VP(i,j) = S_PV(j,i).
It is also immediate to see that S_VP introduces the same distortion introduced by S_PV, that is S_VP∈Å(L,V). Let now P be a point in Γ(R^*, λ, α, L). By definition, we can find a map S_PV∈Å(L,P) such that V ∈Γ_0(R^*, λ, α). Since Γ_0^n(R_m_1, λ, α) H→Γ_0(R^*, λ, α), for large enough n we can find a point V' ∈Γ_0^n(R_m_1, λ, α) which is arbitrarily close to V. Thanks to the second part of Theorem <ref> in Appendix <ref>, we know that a map S_V'P'∈Å^n(L,V') exists such that P' is arbitrarily close to P and P' ∈^n. By applying the inverse map S_P'V' to P', we see that P' ∈Γ^n(R_m_1,λ, α,L), thus permitting us to conclude that, when n increases, δ_Γ(R^*, λ, α,L)(Γ^n(R_m_1, λ, α, L)) → 0. In a similar way, we can prove that δ_Γ^n(R_m_1, λ, α, L)(Γ(R^*, λ, α,L)) → 0, hence permitting us to conclude that Γ^n(R_m_1, λ, α, L) H→Γ(R^*, λ, α, L). We can now apply the generalised version of Sanov's theorem, as expressed in Corollary <ref> of Appendix <ref>, to conclude that: - lim_n →∞ 1/n log P_Y(Γ^n(R_m_1, λ, α, L)) = min_P ∈Γ(R^*, λ, α, L)(P||P_Y). Going back to equation (<ref>), and by exploiting the continuity of the divergence function, we can say that for large n we have: -log P_fn/n ≤ (1 - α)c (R^* || P_X) + min_P ∈Γ(R^*, λ, α, L)(P || P_Y) + ν_n, where the sequence ν_n tends to zero when n tends to infinity. By coupling equations (<ref>) and (<ref>) and by letting n → ∞, we eventually obtain: -lim_n → ∞ log P_fn/n = min_R [ (1-α)c ·(R || P_X) + min_P ∈Γ(R, λ, α,L) (P || P_Y)], thus proving the theorem. As an immediate consequence of Theorem <ref>, the set Γ(P_X, λ, α, L) defines the indistinguishability region of the test, that is, the set of all the sources for which A induces D to decide in favour of H_0 even if H_1 holds.

§.§ Analysis of the  game

We now focus on the  game.
For a given choice of Q(P_τ^m_1) ∈_A,T (and hence t^m), and given a sequence y^n, the optimum choice of the second part of the attack derives quite easily from the definition of Λ^n× m,*, namely S^n,*_YZ(P_y^n,P_t^m) = min_S^n_YZ∈^n(L, P_y^n)(min_Q ∈^m_2 h( P_z^n , (P_t^m - α Q)/(1 - α))). Now the point is to determine the strategy Q(P_τ^m_1) which maximises the probability that the attack in (<ref>) succeeds. For this purpose, of course, the attacker must exploit the knowledge of P_Y. Since solving such a maximisation problem is not an easy task, we will proceed in a different way. We first introduce a simple (and possibly suboptimum) strategy; then we argue that such a strategy is asymptotically optimum, in that the set of the sources that cannot be distinguished from X with this choice is the same set that we have obtained for the  setup, which is known to be more favourable to the attacker. More specifically, we consider the following two-step attacking strategy. In the first step of the attack, A does not know y^n; hence he trusts the law of large numbers and optimises Q(P_τ^m_1) by using P_Y as a proxy for P_y^n. To do so, he applies equation (<ref>), replacing P_y^n with P_Y. Specifically, by indicating with Q^† the resulting strategy for the first step of the attack, we have Q^†(P_τ^m_1) = min_Q∈^m_2, Q' ∈^m_2, S_YZ∈(L, P_Y) h_c( P_Z , P_τ^m_1 + α/(1 - α) (Q - Q')). As a by-product of the above minimisation, the attacker also finds the map S^n, †_YZ representing the optimum attack when P_y^n = P_Y.
Let us indicate the result of the application of such a map to P_Y by P^†_Z. In the second part of the attack, A tries to move P_y^n as close as possible to P^†_Z, that is: S^n,†_YZ(P_y^n, P_t^m^†) = min_S^n_YZ∈^n(L, P_y^n) d(S_Z^n, P_Z^†), where S^n,†_YZ(P_y^n, P_t^m^†) depends upon the corrupted training sequence obtained after the application of the first part of the attack, namely P_t^m^† = (1-α)P_τ^m_1 + α Q^†(P_τ^m_1), through P_Z^†. The asymptotic optimality of the strategy (Q^†(P_τ^m_1), S^n,†_YZ(P_y^n, P_t^m^†)) derives from the following theorem. The indistinguishability region of the  game is equal to that of the  game (see eq. (<ref>)) and is asymptotically achieved by the attacking strategy (Q^†(P_τ^m_1), S^n,†_YZ(P_y^n, P_t^m^†)). The theorem derives from the observation that, due to the law of large numbers, when n grows, P_y^n tends to P_Y; hence, for large enough n, optimising the first part of the attack by replacing P_y^n with P_Y does not introduce a significant performance loss. The rigorous proof goes along similar lines to those used to prove Theorem <ref> and ultimately relies on the continuity of the h_c function and the regularity properties of the set ^n(L, P_y^n). The details of the proof are omitted for the sake of brevity. Given the asymptotic equivalence of the  and the  games, in the rest of the paper we will generally refer to the  game without specifying whether we are considering the targeted or non-targeted case.

§ SOURCE DISTINGUISHABILITY FOR THE  GAME

In this section, we study the behaviour of the  game when we vary the decay rate of the false positive error probability λ. By letting λ tend to zero, in fact, we can derive the best achievable performance of the defender when we require only that P_fp tends to zero exponentially fast, regardless of the decay rate.
Then, we use such a result to derive the conditions under which a reliable distinction between two sources is possible, in terms of the fraction of corrupted training samples α and the maximum allowed distortion L.

§.§ Ultimate achievable performance of the game

As we said, the goal of this section is to study the limit of the indistinguishability region when λ→ 0. This limit, in fact, determines all the pmf's P_Y that cannot be distinguished from P_X while ensuring that the two types of error probabilities tend to zero exponentially fast (with vanishingly small, yet positive, error exponents).

We start by exploiting optimal transport theory to rewrite the indistinguishability region as: Γ(P_X,λ,α,L) = { P:  ∃ V ∈Γ_0(P_X, λ, α) s.t. EMD(P,V) ≤ L}, where EMD (Earth Mover's Distance) is the term used in computer vision to denote the minimum transportation cost <cit.>, that is EMD(P,V) = min_S_PV : S_P = P, S_V = V ∑_i,j S_PV(i,j) d(i,j).

With this definition, the main result of this section is stated by the following theorem.

Given two sources X and Y, a maximum allowed average per-letter distortion L and a fraction α of training samples provided by the attacker, the maximum achievable false negative error exponent ε for the   game is: lim_λ→ 0 lim_n →∞ - 1/n log P_fn = min_R [(1-α)c (R || P_X) + min_P ∈Γ(R, α, L) (P || P_Y)], where Γ(R, α, L) = Γ(R, λ=0, α, L). Accordingly, the ultimate indistinguishability region is given by: Γ(P_X,α, L)={P :  ∃ V ∈Γ_0(P_X, α) s.t. EMD(P,V) ≤ L}, where Γ_0(P_X, α) = Γ_0(P_X, λ = 0, α). Moreover, Γ(P_X,α, L) can be rewritten as: Γ(P_X,α, L)={ P : min_V:EMD(P,V) ≤ L ∑_i[V(i) - P_X(i)]^+ ≤α/(1 - α)} = { P : min_V:EMD(P,V) ≤ L d_L_1(V,P_X) ≤ 2α/(1 - α)}, with [a]^+ = max{a,0}.

The proof of the first part goes along the same steps used in the proof of Theorems 3 and 4 in <cit.> and is not repeated here.
We show, instead, that Γ(P_X,α, L) can be rewritten as in (<ref>). By observing that h_c(P,Q) = 0 if and only if P=Q, it is immediate to see that the set Γ_0(P_X, λ=0,α) takes the following expression: Γ_0(P_X, α) = {P :  ∃ Q, Q' ∈ s.t. P = P_X + α/(1 - α) (Q - Q')}. Expression (<ref>) can be rewritten by avoiding the introduction of the auxiliary pmf's Q and Q'. To do so, we observe that Q(i) must be larger than Q'(i) for all the bins i for which P(i) > P_X(i) (and vice versa). In addition, Q and Q' must be valid pmf's, hence we have ∑_i [ Q(i) - Q'(i)]^+ = ∑_i [ Q'(i) - Q(i)]^+ ≤ 1. Then, it is easy to see that (<ref>) is equivalent to the following definition: Γ_0(P_X, α) ={P :  ∑_i[P(i) - P_X(i) ]^+≤α/(1 - α)} ={P :  d_L_1(P,P_X) ≤2α/(1 - α)}, where the second equality follows by observing that d_L_1(P,P_X) = ∑_i [P(i) - P_X(i)]^+ + ∑_i [P_X(i) - P(i)]^+. Finally, equation (<ref>) derives immediately from the expression of Γ_0(P_X, α) given in (<ref>).

According to Theorem <ref>, Γ(P_X,α, L) provides the ultimate indistinguishability region of the test, that is, the set of all the pmf's for which A wins the game. Before going on, we pause to discuss the geometrical meaning of the set Γ_0(P_X, α) in (<ref>). To do so, we introduce the set Λ_0^*, obtained from Λ^* by letting λ→ 0: Λ_0^* = { (P, P'):  ∃ Q s.t. P' = (P - α Q)/(1-α)}. As usual, we can fix the pmf P and define: Λ_0^*(P) = { P':  ∃ Q s.t. P' = (P - α Q)/(1-α)}. By referring to Figure <ref> (left part), we can geometrically interpret Λ_0^*(P) as the set of the pmf's P' such that P is a convex combination (with coefficient α) of P' with a point Q of the probability simplex. Starting from (<ref>), we can then rewrite Γ_0(P_X, α) as follows: Γ_0(P_X, α)= {P :  ∃ Q ∈ s.t. P ∈Λ_0^*((1 - α) P_X + α Q)}. Accordingly, Γ_0(P_X, α) is geometrically obtained as the union of the acceptance regions built from the points which can be written as a convex combination of P_X with some point Q in the simplex.
As shown in the right part of Figure <ref>, such a region corresponds to a hexagon centred at P_X, which, in the probability simplex, is equivalent to the set of points whose L_1 distance from P_X is smaller than or equal to 2α/(1-α) (as stated in (<ref>)). Of course, only the points of the hexagon that lie inside the simplex are valid pmf's and thus must be accounted for. A pictorial representation of the set Γ(P_X, α,L) is given in Figure <ref>.

§.§ Security margin and blinding corruption level (α_b)

By a closer inspection of the ultimate indistinguishability region Γ(P_X,α, L), we can derive some interesting parameters characterising the distinguishability of two sources in an adversarial setting. Let X ∼ P_X and Y ∼ P_Y be two sources. Let us focus first on the case in which the attacker can not modify the test sequence (L = 0). In this situation, the ultimate indistinguishability region boils down to Γ_0(P_X,α). Then we conclude that D can tell the two sources apart if d_L_1(P_Y, P_X) > 2α/(1 - α). On the contrary, if d_L_1(P_Y, P_X) ≤ 2α/(1 - α), A is able to make the sources indistinguishable by corrupting the training sequence. Clearly, the larger α is, the easier it is for A to win the game. We can define the blinding corruption level α_b as the minimum value of α for which two sources X and Y can not be distinguished. Specifically, we have: α_b(P_X, P_Y) = d_L_1(P_Y, P_X)/(2 + d_L_1(P_Y, P_X)) = ∑_i [P_Y(i) - P_X(i)]^+/(1 + ∑_i [P_Y(i) - P_X(i)]^+). From (<ref>) it is easy to see that α_b is always lower than 1/2, with the limit case α_b = 1/2 corresponding to a situation in which P_X and P_Y have completely disjoint supports[We recall that for any pair of pmf's (P,Q), d_L_1(P,Q) ≤ 2.].

It is interesting to notice that α_b is symmetric with respect to the two sources. Since the attacker is allowed only to add samples to the training sequence without removing existing samples, this might seem a counterintuitive result.
Actually, the symmetry of α_b is a consequence of the worst case approach adopted by the defender. In fact, D itself discards a subset of samples from the training sequence in such a way as to maximise the probability that the remaining part of the training sequence and the test sequence have been drawn from the same source.

Let us now consider the more general case in which L ≠ 0. For a given α < α_b, we look for the maximum distortion allowed to A for which it is possible to reliably distinguish between the two sources. From equation (<ref>), we see that the attack does not succeed if: min_V:EMD(P_Y,V) ≤ L d_L_1(V,P_X) > 2α/(1 - α). This leads to the following definition, which extends the concept of security margin, introduced in <cit.>, to the more general setup considered in this paper.

Let X ∼ P_X and Y ∼ P_Y be two discrete memoryless sources. The maximum distortion allowed to the attacker for which the two sources can be reliably distinguished in the   setup with a fraction α of possibly corrupted samples is called Security Margin and is given by _α(P_X, P_Y) = L_α^*, where L_α^* = 0 if P_Y ∈Γ_0(P_X, α), while, if P_Y ∉Γ_0(P_X, α), L_α^* is the quantity which satisfies min_V :EMD(P_Y,V) ≤ L_α^* d_L_1(V, P_X) = 2α/(1 - α).

A geometric interpretation of L^*_α is given in Figure <ref>. By focusing on the case P_Y ∉Γ_0(P_X, α), and by observing that min_V :EMD(P_Y,V) ≤ L d_L_1(V, P_X) is a monotonically non-increasing function of L, the security margin can be expressed in explicit form as _α (P_X, P_Y) = argmin_L' | min_V:EMD(P_Y,V) ≤ L' d_L_1(V,P_X) - 2α/(1 - α) |. When L > _α(P_X, P_Y), it is not possible for D to distinguish between the two sources with positive error exponents of the two kinds. By looking at the behavior of the security margin as a function of α, we see that _α_b(P_X, P_Y) = 0, meaning that, whenever the fraction of corrupted samples reaches the critical value, the sources can not be distinguished even if the attacker does not introduce any distortion.
On the contrary, setting α = 0 corresponds to studying the distinguishability of the sources with uncorrupted training; in this case we have _0(P_X,P_Y) = EMD(P_X,P_Y), in agreement with <cit.>. With reference to Figure <ref>, it is easy to see that when α = 0 the hexagon representing Γ_0(P_X, α) collapses into the single point P_X and the security margin corresponds to the Earth Mover's Distance between Y and X. Finally, we notice that, for α > 0, the value of the security margin in (<ref>) is less than EMD(P_X,P_Y). This is also an expected behaviour, since the general setting considered in this paper is more favourable to the attacker than the setting in <cit.>.

By looking at (<ref>), we can argue that the Security Margin is symmetric with respect to the two sources X and Y, that is, _α(P_Y,P_X) = _α(P_X,P_Y). To show that this is the case, we observe that the pmf V' associated with the minimum L, for which we have EMD(P_Y, V')= _α(P_X,P_Y), can be obtained through the application of a map S_P_Y V that works as follows: it does not modify a portion α/(1-α) of P_Y and moves the remaining mass into an equal amount of P_X in a convenient way (i.e., in such a way as to minimise the overall distance between the masses). The inverse map can be applied to bring the same quantity of mass from P_X to P_Y, while leaving the remaining mass as is, thus obtaining a V” which satisfies EMD(P_X, V”)=EMD(P_Y, V') (because of the symmetry of the per-symbol distortion d) and d_L_1(V”, P_Y)= d_L_1(V', P_X) = 2α/(1-α). Arguably, V” is the pmf for which EMD(P_X, V”)= _α(P_Y,P_X); hence, _α(P_Y,P_X) = _α(P_X,P_Y).

§.§.§ Bernoulli sources

To gain some insight into the practical meaning of α_b and _α, we consider the simple case of two Bernoulli sources with parameters q = P_X(1) and p = P_Y(1). Assuming that no distortion is allowed to the attacker, the minimum fraction of samples that A must add to induce a decision error is, according to (<ref>), α_b = |p - q|/(1 + |p - q|).
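The equivalent expressions for α_b (the general positive-part form, the L_1-distance form, and the Bernoulli specialisation) are easy to cross-check numerically. The following Python sketch is ours, for illustration only; the parameter values are arbitrary examples:

```python
# Illustrative cross-check of the blinding corruption level alpha_b
# (our own sketch; parameter values are arbitrary examples).

def pos_part_sum(P, Q):
    """sum_i [P(i) - Q(i)]^+ ."""
    return sum(max(a - b, 0.0) for a, b in zip(P, Q))

def d_l1(P, Q):
    """L1 distance between two pmfs on the same alphabet."""
    return sum(abs(a - b) for a, b in zip(P, Q))

# Bernoulli sources with q = P_X(1) and p = P_Y(1)
q, p = 0.7, 0.3
P_X = [1.0 - q, q]
P_Y = [1.0 - p, p]

s = pos_part_sum(P_Y, P_X)
alpha_b_general   = s / (1.0 + s)                            # positive-part form
alpha_b_l1        = d_l1(P_Y, P_X) / (2.0 + d_l1(P_Y, P_X))  # L1-distance form
alpha_b_bernoulli = abs(p - q) / (1.0 + abs(p - q))          # Bernoulli closed form

print(alpha_b_general)  # ~0.2857 for p = 0.3, q = 0.7
```

With p = 0.3 and q = 0.7 all three expressions give α_b ≈ 0.286, consistent with the Bernoulli example discussed next.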
For instance, and rather obviously, when |p - q| = 1, to win the game A must introduce a number of fake samples equal to the number of samples of the correct training sequence, i.e. α = 0.5. With regard to , we have: _α(p,q) = |q - p| - α/(1 - α) if α < α_b, and _α(p,q) = 0 if α ≥ α_b. Figure <ref> illustrates the behavior of _α(p,q) as a function of α when p = 0.3 and q = 0.7. The blinding corruption value is α_b = 0.286.

§ SOURCE IDENTIFICATION GAME WITH REPLACEMENT OF TRAINING SAMPLES

In this section, we study a variant of the game with corrupted training, in which A observes the training sequence and can replace a selected fraction of samples. Let τ^m indicate the original m-sample long training sequence drawn from X and let  be a subset of m_2 = α m indices in [1, 2, …, m]. The attacker can choose the index set  and replace the corresponding samples with m_2 fake samples. More formally, given the original training sequence τ^m, the training sequence seen by the defender is t^m = σ(τ_^m_1|| τ^m_2), where  is the complement of  in [1, 2, …, m], τ_^m_1 is the set of original (non-attacked) samples, and τ^m_2 is the sequence with the fake samples introduced by the attacker. Figure <ref> illustrates the adversarial setup considered in this section for the case of a targeted attack. Arguably, this scenario is more favourable to the attacker than the  game.

§.§ Formal definition of the  game

In the sequel, we formally define the source identification game with replacement of selected samples, namely the  game.
As anticipated, we focus on a version of the game in which the corruption of the training samples depends on the to-be-attacked sequence y^n (targeted attack); the extension to the non-targeted case can easily be obtained by following the same approach used in Section <ref>.

§.§.§ Defender's strategies

As in the  game, in order to be sure that the false positive error probability is lower than 2^-nλ, the defender adopts a worst case strategy and considers the maximum of the false positive error probability over all the possible P_X and over all the possible attacks that the training sequence may have undergone, yielding: _D = {Λ^n × m⊂^n ×^m:  max_P_X ∈𝒫 max_s ∈_A,T P_fp ≤ 2^-λ n}. While the above expression is formally equal to that of the  game (see eq. (<ref>)), the maximisation over _A,T is now more cumbersome, due to the additional degree of freedom available to the attacker, who can selectively remove the samples of the original training sequence. In fact, even if D knew the position of the corrupted samples, simply throwing them away would not guarantee that the remaining part of the sequence would follow the same statistics as X, since the attacker might have deliberately altered them by selectively choosing the samples to replace.

§.§.§ Attacker's strategies

With regard to the attacker, the part of the attack working on the test sequence y^n is the same as in the  case, while the part regarding the corruption of the training sequence must be redefined. To this purpose, we observe that the corrupted training sequence may be any sequence t^m for which d_H(t^m, τ^m) ≤α m, where d_H denotes the Hamming distance. Given that the defender bases his decision on the type of t^m, it is convenient to rewrite the constraint on the Hamming distance between sequences as a constraint on the L_1 distance between the corresponding types.
In fact, by looking at the empirical distribution of the corrupted sequence, searching for a sequence t^m s.t. d_H(t^m, τ^m) ≤α m is equivalent to searching for a pmf P_t^m∈^m for which d_L_1(P_t^m, P_τ^m) ≤ 2α (see the proof of Lemma 2 in <cit.>). Therefore, the set of strategies of the attacker is defined by _A = _A,T×_A,O, where _A,T = {Q(P_τ^m, P_y^n):  ^m ×^n →^m such that d_L_1(Q(P_τ^m, P_y^n), P_τ^m) ≤ 2α}, _A,O = {S^n_YZ(P_y^n, P_t^m):  ^n ×^m →^n(L, P_y^n) }. Note that, in this case, the function Q(·, ·) gives the type of the whole training sequence observed by D (not only the fake subpart, as it was in the   game), that is, P_t^m = Q(P_τ^m, P_y^n).

In the following, we will find it convenient to express the attacking strategies in _A,T in an alternative way. Since the attacker replaces the samples of a subpart of the training sequence, the corruption strategy is equivalent to first removing a subpart of the training sequence and then adding a fake subsequence of the same length. Then, the sequence is reordered to hide the position of the fake samples. By focusing on the type of the observed training sequence, we can write: P_t^m = P_τ^m - α Q_R(P_τ^m, P_y^n) + α Q_A(P_τ^m, P_y^n), where Q_R(P_τ^m, P_y^n) and Q_A(P_τ^m, P_y^n) (both belonging to ^m_2) are the types of the removed and injected subsequences, respectively. To simplify the notation, in the following we will not indicate explicitly the dependence of Q_R(P_τ^m, P_y^n) and Q_A(P_τ^m, P_y^n) on P_τ^m and P_y^n, and will indicate them as Q_R() and Q_A(). Furthermore, we will use the notation Q_R and Q_A whenever the dependence on the arguments is not relevant. By varying Q_R and Q_A, we obtain all the pmf's that can be produced from P_τ^m by first removing and later adding m_2 samples. Of course, not all pairs (Q_R, Q_A) are admissible, since the P_t^m resulting from eq. (<ref>) must be a valid pmf, i.e.
it must be nonnegative for all the symbols of the alphabet .

§.§.§ Payoff

As usual, the payoff function is defined as u(Λ^n × m, (Q_R(), Q_A(), S^n_YZ())) = -P_fn.

§.§ Equilibrium point and payoff at the equilibrium

In order to ensure that P_fp is always lower than 2^-λ n, it is convenient to use the attack formulation given in (<ref>). For a given P_X, Q_R and Q_A, P_fp is the probability that X generates two sequences x^n and τ^m, such that the pair of type classes (P_x^n, P_τ^m - α(Q_R() - Q_A())) falls outside Λ^n × m. Accordingly, the set of strategies available to D can be rewritten as: _D= {Λ^n × m : max_P_X ∈ max_Q_R(),Q_A() ∑_P_y^n∈^n P_Y(T(P_y^n)) ·∑_(P_x^n, P_t^m) ∈Λ̅^n × m P_X(T(P_x^n)) ·∑_P_τ^m∈^m:P_τ^m - α(Q_R() - Q_A()) = P_t^m P_X(T(P_τ^m)) ≤ 2^-λ n}.

By proceeding as in the proof of Lemma <ref>, it is easy to prove that the asymptotically optimum strategy for the defender corresponds to the following: Λ^n × m,* = { (P_x^n, P_t^m): min_Q_R,Q_A ∈^m_2 h(P_x^n,P_t^m + α (Q_R - Q_A) ) ≤ λ - δ_n}, where δ_n tends to 0 as n →∞ and the minimization is limited to Q_R and Q_A in ^m_2 such that P_t^m + α (Q_R - Q_A) is a valid pmf. Consequently, the optimum attacking strategy is given by: (Q^*(P_τ^m, P_y^n), S^n,*_YZ(P_y^n,P_t^m)) = argmin_P_t^m s.t. d_L_1(P_t^m, P_τ^m) ≤ 2α, S^n_YZ∈^n(L, P_y^n) [ min_Q_R,Q_A h( P_z^n , P_t^m + α (Q_R - Q_A) ) ], hence resulting in the following theorem.

The   game with targeted corruption is a dominance solvable game, whose only rationalizable equilibrium corresponds to the profile (Λ^n × m,*, (Q^*(),  S^n,*_YZ())) given by equations (<ref>) and (<ref>).

In order to study the asymptotic payoff of the   game at the equilibrium, we parallel the analysis carried out in Sec. <ref>. By considering the case L=0, the set of pairs of types for which D will accept H_0 as a consequence of the attack on the training sequence is given by Γ_0^n(λ,α) = {(P_y^n, P_τ^m) : ∃ P_t^m s.t. d_L_1(P_t^m, P_τ^m) ≤ 2α and (P_y^n, P_t^m) ∈ Λ^n × m,*}.
If we fix the type of the original training sequence, we get: Γ_0^n(P_τ^m, λ,α) = {P_y^n:  ∃ P_t^m s.t. d_L_1(P_t^m, P_τ^m) ≤ 2α and P_y^n∈Λ^n,*(P_t^m) } = { P_y^n : ∃ P_t^m,  ∃ Q, Q' ∈^m_2, s.t. d_L_1(P_t^m, P_τ^m) ≤ 2α and h(P_y^n, P_t^m - α Q' + α Q) ≤ λ - δ_n}. By letting n go to infinity, we obtain the asymptotic counterpart of the above set, which, for a generic R ∈, takes the following expression: Γ_0(R, λ, α) = { P:   ∃ P', Q, Q', s.t. d_L_1(P', R) ≤ 2α and h_c(P, P' - α Q' + α Q) ≤ λ}. When L ≠ 0, we obtain: Γ(R,λ,α,L) = {P :  ∃ V ∈Γ_0(R,λ,α) s.t. EMD (P,V) ≤ L}. With the above definitions, it is straightforward to extend Theorem <ref> to the   case, thus proving that the set in (<ref>) evaluated at R = P_X represents the indistinguishability region of the   game.

§.§ Security margin and blinding corruption level

As a last contribution, we are interested in studying the ultimate distinguishability of two sources X and Y in the   setting and comparing it with the result we have obtained for the   case. To do so, we consider the behaviour of the indistinguishability region when λ tends to 0. We have: Γ(P_X,α,L) = {P : ∃ V ∈ Γ_0 (P_X,α) s.t. EMD (P,V) ≤ L}, where Γ_0(P_X, α) = {P:  ∃ P', Q, Q' s.t. d_L_1(P', P_X) ≤ 2α and P = P' + α(Q - Q') } = {P:  ∃ P' s.t. d_L_1(P', P_X) ≤ 2α and d_L_1(P,P') ≤ 2α}. The set in (<ref>) can be equivalently rewritten as Γ_0(P_X,α) = {P:  d_L_1(P, P_X) ≤ 4α}.

To see why, we first notice that the set in (<ref>) is contained in (<ref>). Indeed, from the triangle inequality we have that, for any P', d_L_1(P,P_X) ≤ d_L_1(P,P') + d_L_1(P',P_X). Then, if P belongs to Γ_0(P_X,α) in (<ref>), it also belongs to the set in (<ref>). To see that the two sets are indeed equivalent, it is sufficient to show that the reverse implication also holds. To this purpose, we observe that, whenever d_L_1(P, P_X) ≤ 4α, a type P^* can be found such that its distance from both P and P_X is at most 2α.
In fact, by letting P^* = (P + P_X)/2, we have d_L_1(P,P^*) = d_L_1(P^*,P_X) = ∑_i |P(i) - P_X(i)|/2, while d_L_1(P,P_X) = ∑_i |P_X(i) - P(i)| = 2 d_L_1(P,P^*). If d_L_1(P, P_X) ≤ 4α, then d_L_1(P,P^*) = d_L_1(P^*,P_X) = d_L_1(P,P_X)/2 ≤ 2α, permitting us to conclude that the sets in (<ref>) and (<ref>) are equivalent.

Upon inspection of equation (<ref>), we can conclude that, as expected, the indistinguishability region for L=0 (and hence, also for the case L ≠ 0) is larger than that of the   game (see (<ref>)), thus confirming that the game with sample replacement is more favourable to the attacker (a graphical comparison between the indistinguishability regions for the two setups is shown in Figure <ref>). As a matter of fact, for the attacker, the advantage of the   game with respect to the   game depends on α. For small α and for α close to 1/2, the indistinguishability regions of the two games are very similar, while for intermediate values of α the indistinguishability region of the   game is considerably larger than that of the   game (the maximum difference between the two regions is obtained for α ≈ 0.3). When α = 1/2 the attacker always wins, since he is able to bring any pmf inside the acceptance region regardless of the game version, while for α = 0 we fall back into the source identification game without corruption of the training sequence, thus making the two versions of the game equivalent.

Given two sources X and Y, the blinding corruption level now takes the expression: α_b = d_L_1(P_Y,P_X)/4. Since d_L_1(P_Y,P_X) ≤ 2 for any couple (P_Y, P_X) (the maximum value 2 is taken when the two distributions have disjoint supports), the blinding value for the   game is lower than the blinding value of the  game. The two expressions are identical when the two sources have disjoint supports, in which case α_b = 1/2.
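The midpoint argument behind the equivalence of the two characterisations of Γ_0(P_X,α) can be checked numerically: for any P, the pmf P^* = (P+P_X)/2 sits at L_1-distance d_L_1(P,P_X)/2 from both endpoints. A small Python sketch of ours (random pmfs, arbitrary alphabet size):

```python
import random

def d_l1(P, Q):
    """L1 distance between two pmfs on the same alphabet."""
    return sum(abs(a - b) for a, b in zip(P, Q))

def random_pmf(k, rng):
    """A random pmf on a k-letter alphabet."""
    w = [rng.random() for _ in range(k)]
    s = sum(w)
    return [x / s for x in w]

rng = random.Random(0)
for _ in range(1000):
    P = random_pmf(5, rng)
    P_X = random_pmf(5, rng)
    P_star = [(a + b) / 2.0 for a, b in zip(P, P_X)]
    half = d_l1(P, P_X) / 2.0
    # the midpoint is equidistant from P and P_X, at half the distance,
    # so d_L1(P, P_X) <= 4*alpha puts both legs within 2*alpha
    assert abs(d_l1(P, P_star) - half) < 1e-12
    assert abs(d_l1(P_star, P_X) - half) < 1e-12
```

Note that P^* is automatically a valid pmf, being a convex combination of two pmfs, so it is an admissible intermediate point in the chain P → P^* → P_X.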
When the attacker can also corrupt the test sequence, the ultimate indistinguishability region of the game is: Γ(P_X,α,L) = {P: min_V:EMD(P,V) ≤ L d_L_1(V, P_X) ≤ 4α}. Starting from (<ref>) we can define the security margin in the   setup.

Let X ∼ P_X and Y ∼ P_Y be two discrete memoryless sources. The maximum distortion for which the two sources can be reliably distinguished in the   setup is called Security Margin and is given by _α(P_X, P_Y) = L_α^*, where L_α^* is the quantity which satisfies the relation min_V:EMD(P_Y,V) ≤ L_α^* d_L_1(V,P_X) = 4α if P_Y ∉Γ_0(P_X, α), and L_α^* = 0 otherwise.

Considering again the case of two Bernoulli sources and adopting the same notation as in Section <ref>, we have α_b = |p - q|/4, while the security margin is _α(p,q) = |q - p| - 2α if α < α_b, and _α(p,q) = 0 if α ≥ α_b. Figure <ref> plots _α as a function of α when p = 0.3 and q = 0.7. The blinding value is α_b = 0.1, which, as expected, is lower than the value we found for the   setup.

§ CONCLUSIONS

We studied the distinguishability of two sources in an adversarial setup when the sources are known through training data, part of which can be corrupted by the attacker himself. We considered two different scenarios. In the first one, the attacker simply adds fake samples to the original training sequence, while in the second one, the attacker replaces a selected subset of training samples with fake ones. We formalised both cases in a game-theoretic setup; we then derived the equilibrium point of the games and analysed the (asymptotic) payoff at the equilibrium. The result of the game can be summarised in a compact and elegant way by introducing two parameters, namely the Security Margin under corruption of the training sequence, and the blinding corruption level α_b, defined as the portion of fake samples the attacker must introduce to make any reliable distinction between the sources impossible.
Based on these two parameters, the performance of the two games with corruption of the training data can be easily compared.

Though rather theoretical, our findings can guide more practical research in several fields belonging to the emerging areas of adversarial signal processing <cit.> and secure machine learning <cit.>. In many cases, in fact, the defender must take into account the possibility that the data he is using to tune the system he is working on, or the data used during the learning phase, has been corrupted by the attacker. The analysis carried out in this paper can be extended in several ways, for instance by considering continuous sources, or by assuming that the sources X and Y are not memoryless, but still amenable to being studied by using the method of types <cit.>. Following the analysis in <cit.>, we could also consider a more general setup in which the attacker is active under both H_0 and H_1. An interesting generalisation consists in studying a symmetric setup in which the training and the test sequences can be corrupted by applying the same kinds of processing. For instance, the attacker could be allowed to replace samples in both the training and the test sequences, or he could be allowed to modify the training sequence up to a certain distortion. Other kinds of attacks on the training data could also be considered, like sample removal with no addition of fake samples. As a matter of fact, the kind of attack strongly depends on the application scenario, and it is arguable that the availability of a large variety of theoretical models would help bridge the gap between theory and practice.

§ ACKNOWLEDGMENT

This work has been partially supported by research sponsored by DARPA and the Air Force Research Laboratory (AFRL) under agreement number FA8750-16-2-0173. The U.S. Government is authorised to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon.
The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of DARPA and the Air Force Research Laboratory (AFRL) or the U.S. Government.

§.§ Generalized Sanov's theorem

Let us consider a sequence of n i.i.d. discrete random variables taking values in a finite alphabet  and distributed according to a pmf P. We denote with P_n the empirical pmf of the sequence. Let E ⊆𝒫 be a set of pmf's. Sanov's theorem <cit.> states that inf_Q ∈ E (Q||P) ≤ -limsup_n →∞ 1/n log P(P_n ∈ E) ≤ -liminf_n →∞ 1/n log P(P_n ∈ E) ≤ inf_Q ∈ int E (Q||P), where int S denotes the interior of the set S. When cl(E) = cl(int(E))[cl(E) denotes the closure of E. Clearly, cl(E) ≡ E if E is a closed set.], or, E ⊆ cl(int(E)), the left- and right-hand sides of (<ref>) coincide and we get the exact rate: -lim_n →∞ 1/n log P(P_n ∈ E) = inf_Q ∈ E (Q||P). If we define the set E_n = E ∩𝒫^n, we have P(P_n ∈ E) = P(P_n ∈ E_n) and we can rewrite Sanov's theorem as: inf_Q ∈ E (Q||P) ≤ -limsup_n →∞ 1/n log P(P_n ∈ E_n) ≤ -liminf_n →∞ 1/n log P(P_n ∈ E_n) ≤ inf_Q ∈ int E (Q||P). Note that, by construction, we have cl(E) = cl(∪_n E_n).

In the following, we extend the formulation of Sanov's theorem given in (<ref>) to more general sequences of sets E_n for which it does not necessarily hold that E_n = E ∩𝒫^n for some set E. We start by introducing the notion of convergence for sequences of subsets due to Kuratowski, which is a more general notion of convergence than the one based on the Hausdorff distance. Let (S, d) be a metric space.
We first provide the definition of the lower closed limit, or Kuratowski limit inferior <cit.>. A point p belongs to the lower limit Li_n →∞ K_n (or simply Li K_n) of a sequence of sets K_n if every neighborhood of p intersects all the K_n's from a sufficiently large index n onward.

Given the above definition, the expression p ∈ Li_n →∞ K_n is equivalent to the existence of a sequence of points {p_n} such that p = lim_n →∞ p_n, with p_n ∈ K_n. Stated in another way, Li K_n is the set of the accumulation points of sequences in K_n. As an alternative, equivalent, definition we can let: Li_n →∞ K_n = {p ∈ S s.t. limsup_n →∞ d(p, K_n) = 0}.

Similarly, we have the following definition of the upper closed limit, or Kuratowski limit superior <cit.>. A point p belongs to the upper limit Ls_n →∞ K_n (or simply Ls K_n) of a sequence of sets K_n if every neighborhood of p intersects an infinite number of terms of K_n.

The expression p ∈ Ls_n →∞ K_n is equivalent to the existence of a subsequence of points {p_k_n} such that k_1 < k_2 < …, p = lim_n →∞ p_k_n, with p_k_n∈ K_k_n. As an alternative, equivalent, definition we can let: Ls_n →∞ K_n = {p ∈ S s.t. liminf_n →∞ d(p, K_n) = 0}. It can be proven that the Kuratowski limits inferior and superior are always closed sets (see <cit.>).

Given the above, we can state the following: the sequence of sets {K_n} is said to be convergent to K in the sense of Kuratowski, that is K_n K→ K, if Li K_n = K = Ls K_n, in which case we write K = Lim K_n. We observe that Kuratowski convergence is weaker than convergence in the Hausdorff metric; in fact, given a sequence of closed sets {K_n}, K_n H→ K implies K_n K→ K <cit.>. For compact metric spaces, the reverse implication also holds and the two kinds of convergence coincide.

In this work, we are interested in the space  of probability mass functions defined over a finite alphabet , i.e., the probability simplex in ℝ^||, equipped with the L_1 metric. Being a closed subset of ℝ^||,  is a complete set.
In addition, with the L_1 metric,  is bounded. The space (, d_L_1), then, is a compact metric space, and thus, for our purposes, Kuratowski and Hausdorff convergence are equivalent. We are now ready to prove the following generalisation of Sanov's theorem.

Let {E_(n)} be a sequence of sets in , such that Li (E_(n)∩^n) ≠ ∅. Then: min_Q ∈ Ls E_(n) (Q||P) ≤ -limsup_n →∞ 1/n log P(P_n ∈ E_(n)) ≤ -liminf_n →∞ 1/n log P(P_n ∈ E_(n)) ≤ min_Q ∈ Li (E_(n)∩^n) (Q||P). If, in addition, Ls E_(n) = Li (E_(n)∩^n), the generalized Sanov's limit exists as follows: -lim_n →∞ 1/n log P(P_n ∈ E_(n)) = min_Q ∈ Lim E_(n) (Q||P).

We first prove the expression for the lower bound. Let E_n = E_(n)∩^n. We have: P(E_(n)) = ∑_Q ∈ E_n P(T(Q)) ≤ (n + 1)^|| 2^-n min_Q ∈ E_n (Q ||P) ≤ (n + 1)^|| 2^-n inf_Q ∈ E_(n) (Q || P) = (n + 1)^|| 2^-n min_Q ∈ cl(E_(n)) (Q || P). In the last equality we exploited the fact that each E_(n) is a bounded subset of  and that the divergence is continuous and lower bounded, so that the infimum over E_(n) corresponds to the minimum over its closure. By taking the logarithm of each side and dividing by n, we get: 1/n log P(E_(n)) ≤ -min_Q ∈ cl(E_(n)) (Q || P) + log(n+1)^||/n.

We now prove that, for any δ and for sufficiently large n, we have min_Q ∈ cl(E_(n))(Q || P) ≥ min_Q ∈ Ls E_(n)(Q || P) - δ. First, according to the properties of the limit superior, Ls E_(n) = Ls ( cl(E_(n))) <cit.>, hence proving (<ref>) is equivalent to showing that min_Q ∈ cl(E_(n))(Q || P) ≥ min_Q ∈ Ls ( cl(E_(n)))(Q || P) - δ. Let Q_n be the sequence of points achieving the minimum on the left-hand side of (<ref>) (for simplicity we assume that the minimum is unique, the extension to the more general case being straightforward). Let Q_n(j) be the subsequence of Q_n formed only by the elements of Q_n that do not belong to Ls ( cl(E_(n)))[n(i) > n(j), ∀ i > j]. If the number of elements in Q_n(j) is finite, then for n large enough Q_n ∈ Ls ( cl(E_(n))) and eq. (<ref>) is verified with δ = 0.
If the number of elements in Q_n(j) is infinite, then, due to the boundedness of , the elements of Q_n(j) must have at least one accumulation point (Bolzano-Weierstrass theorem). Let the A_i's be the accumulation points of Q_n(j). By definition of Ls, all the A_i's belong to Ls ( cl(E_(n))). In addition, for any radius ρ, from a certain j on, all the points in Q_n(j) belong to ℛ = ⋃_i (A_i, ρ)[(A_i, ρ) is a ball with radius ρ centred at A_i.]. For large enough n, then, we have: min_Q ∈ cl(E_(n))(Q || P) ≥ min_Q ∈ Ls ( cl(E_(n))) ∪ℛ(Q || P) ≥ min_Q ∈ Ls ( cl(E_(n)))(Q || P) - δ, where the second inequality derives from the continuity of the divergence function and the arbitrariness of ρ. By inserting equation (<ref>) in (<ref>), we have that, for large n, 1/n log P(E_(n)) ≤ -min_Q ∈ Ls E_(n)(Q ||P) + log(n+1)^||/n + δ, and hence, by the arbitrariness of δ, -limsup_n →∞ 1/n log P(E_(n)) ≥ min_Q ∈ Ls E_(n)(Q ||P).

We now pass to the upper bound. Let Q^* be a point achieving the minimum of the divergence over the set Li E_n. By definition of limit inferior, there exists a sequence of points {Q_n}, Q_n ∈ E_n, such that Q_n → Q^* as n →∞. Then, by exploiting the continuity of , it follows that: (Q_n|| P) ≤ D(Q^*|| P) + γ, where γ can be made arbitrarily small for large n. We can then write: P(E_(n)) = ∑_Q ∈ E_n P(T(Q)) ≥ P(T(Q_n)) ≥ 2^-n (Q_n || P)/(n + 1)^||. Hence, we get 1/n log P(E_(n)) ≥ -(Q_n ||P) - ||log(n + 1)/n ≥ -(Q^* ||P) - γ - ||log(n + 1)/n ≥ -min_Q ∈ Li E_n(Q || P) - γ - ||log(n + 1)/n, and then, by the arbitrariness of γ, -liminf_n →∞ 1/n log P(E_(n)) ≤ min_Q ∈ Li E_n(Q ||P), which concludes the proof of the first part (relation (<ref>)).

For the proof of the second part, we observe that, when Ls E_(n) = Li (E_(n)∩^n), the two bounds in (<ref>) coincide. Moreover, the following chain of inclusions holds: Li E_(n) ⊆ Ls E_(n) = Li (E_(n)∩^n) ⊆ Li E_(n), and then Li E_(n) = Ls E_(n) = Lim E_(n), yielding (<ref>).
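As a numerical sanity check of the classical Sanov bound that the proof builds on, one can compute the exact probability that the empirical frequency of a Bernoulli(p) sequence exceeds a threshold t > p and compare the resulting exponent with the binary divergence D(Ber(t)||Ber(p)). The following Python sketch is ours, with arbitrary illustrative parameters; it works in log-space to avoid underflow:

```python
import math

def kl_bern(t, p):
    """Binary divergence D(Ber(t) || Ber(p)) in nats."""
    return t * math.log(t / p) + (1.0 - t) * math.log((1.0 - t) / (1.0 - p))

def log_binom_tail(n, p, t):
    """Exact log P( empirical frequency >= t ) for n i.i.d. Bernoulli(p)
    samples, computed in log-space from the binomial pmf."""
    k0 = math.ceil(n * t)
    logs = [math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
            + k * math.log(p) + (n - k) * math.log(1.0 - p)
            for k in range(k0, n + 1)]
    m = max(logs)  # log-sum-exp trick for numerical stability
    return m + math.log(sum(math.exp(x - m) for x in logs))

p, t, n = 0.3, 0.5, 2000              # illustrative values only
rate = -log_binom_tail(n, p, t) / n   # exact finite-n exponent
target = kl_bern(t, p)                # Sanov exponent D(Ber(t)||Ber(p))
print(rate, target)
```

By the Chernoff bound the finite-n rate is never below the Sanov exponent, and the gap is only a polynomial O(log n / n) correction, so for n = 2000 the two printed values agree to within about one percent.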
We observe that, in general, the Kuratowski convergence of E_(n) is a necessary condition for the existence of the generalized Sanov limit in (<ref>), but it is not sufficient. In fact, we could have Li E_(n) ⊃ Li(E_(n) ∩ 𝒫_n), in which case the lower and upper bounds in (<ref>) do not coincide. It is also interesting to notice that when E_(n) ⊆ 𝒫_n is a sequence of sets of types, then Sanov's limit holds whenever E_(n) K→ E for some set E, or, by exploiting the compactness of 𝒫, E_(n) H→ E. Based on the above observation, we can state the following corollary:

Let E_(n) be a sequence of sets in 𝒫_n, such that E_(n) H→ E. Then:

-lim_{n → ∞} 1/n log P(P_n ∈ E_(n)) = min_{Q ∈ E} D(Q||P).

§.§ Regularity properties of the set of admissible maps

To prove the theorems on the asymptotic behaviour of the payoff in the two versions of the source identification game studied in this paper, we need some regularity results on the set of admissible maps. To start with, we need to define a distance between transportation maps, that is, a function d_s : ℝ^{|𝒳|×|𝒳|} × ℝ^{|𝒳|×|𝒳|} → ℝ^+. In accordance with the rest of the paper, we choose the L_1 distance, that is, given two maps (S_PV, S_QR), we define d_s(S_PV, S_QR) = ∑_{i,j} |S_PV(i,j) - S_QR(i,j)|.

Our first result regards the regularity of 𝒜(L, P) as a function of P. Let P ∈ 𝒫 and let P' be any pmf in the neighbourhood of P of radius τ, i.e., P' ∈ ℬ(P, τ). Then

δ_H(𝒜(L,P), 𝒜(L,P')) ≤ τ

and hence lim_{τ → 0} δ_H(𝒜(L,P), 𝒜(L,P')) = 0, uniformly in 𝒫. Moreover, if we insist that P' ∈ 𝒫_n, the following result holds: ∀ε > 0, ∃ τ^* and n^* such that ∀τ < τ^* and n > n^*,

δ_H(𝒜(L,P), 𝒜^n(L,P')) ≤ ε, ∀ P' ∈ ℬ(P,τ) ∩ 𝒫_n, ∀ P ∈ 𝒫.

From a general perspective, the lemma follows from the fact that 𝒜^n(L, P_{y^n}) (and 𝒜(L,P)) is built by imposing a number of linear constraints on the admissible transportation maps (see eq. (<ref>)), i.e. 𝒜(L,P) is a convex polytope <cit.>.
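The method-of-types bound behind the corollary can be checked by brute force for tiny alphabets, enumerating every type and minimising the divergence over the constraint set. The sketch below is ours (function names and the example set are illustrative, not from the paper); it computes the decay exponent min_{Q ∈ E ∩ 𝒫_n} D(Q||P):

```python
import itertools
import math

def kl(q, p):
    """Kullback-Leibler divergence D(q || p) in bits, with the convention 0*log(0) = 0."""
    return sum(qi * math.log2(qi / pi) for qi, pi in zip(q, p) if qi > 0)

def types(n, k):
    """All types (empirical pmfs of length-n sequences) over a k-symbol alphabet."""
    for counts in itertools.product(range(n + 1), repeat=k):
        if sum(counts) == n:
            yield tuple(c / n for c in counts)

def sanov_exponent(p, in_E, n):
    """Decay exponent min over Q in E ∩ P_n of D(Q || P), from the method of types."""
    return min(kl(q, p) for q in types(n, len(p)) if in_E(q))

# Example: E = {Q : Q(0) >= 0.7}, P uniform on two symbols, n = 20.
p = (0.5, 0.5)
exponent = sanov_exponent(p, lambda q: q[0] >= 0.7, n=20)
```

P(P_n ∈ E) then decays as 2^{-n·exponent}, up to the polynomial factor (n+1)^|𝒳| appearing in the bounds above.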
By considering a P' close to P, we are perturbing the vector of known terms of the linear constraints which define the admissibility set. Instead of invoking the above general principle, in the following we give an explicit proof of the lemma.

Given P ∈ 𝒫 and P' ∈ ℬ(P, τ), let τ(i) = P(i) - P'(i) be the excess (or defect) of mass of P with respect to P' in bin i. For any map S_PV ∈ 𝒜(L,P), we can choose a map S_P'V' that works as follows: for the bins i such that τ(i) ≤ 0, let S_P'V'(i,j) = S_PV(i,j) for j ≠ i, while for j = i we let S_P'V'(i,i) = S_PV(i,i) + |τ(i)|. For the bins i for which τ(i) > 0, we first sort the index set {j: S_PV(i,j) ≠ 0} in decreasing order with respect to the amount of distortion introduced per unit of mass delivered from i to j (d(i,j)). Then, starting from the first index in the ordered list, we let S_P'V'(i,j) = max(0, S_PV(i,j) - τ(i)). If S_P'V'(i,j) = 0, we update τ(i) to a new value τ'(i) = τ(i) - S_PV(i,j), and iterate the previous procedure by subtracting the updated value τ'(i) from the second S_PV(i,j) in the list. This procedure goes on until the subtraction gives S_P'V'(i,j) ≠ 0, that is, when we have removed all the excess mass from the i-th row of S_PV(i,j).

It is easy to see that the map built in this way satisfies the distortion constraint; in fact, by construction the distortion associated with S_P'V' is no larger than that introduced by S_PV. Then, S_P'V' ∈ 𝒜(L, P'). In addition, by construction, ∑_j |S_P'V'(i,j) - S_PV(i,j)| ≤ |τ(i)|, and hence ∑_{i,j} |S_P'V'(i,j) - S_PV(i,j)| ≤ τ. Accordingly, we have:

δ_{𝒜(L,P)}(𝒜(L,P')) = max_{S_PV ∈ 𝒜(L,P)} min_{S_P'V' ∈ 𝒜(L,P')} d_s(S_PV, S_P'V') ≤ τ,

since, as we have shown with the preceding construction, the inner minimum is always less than or equal to τ. By repeating the same argument exchanging the roles of 𝒜(L,P) and 𝒜(L,P'), we find that δ_H(𝒜(L, P'), 𝒜(L, P)) ≤ τ, thus concluding the first part of the proof.
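The mass-reallocation construction used in the proof can be sketched directly in code. The matrix conventions (rows indexed by source bins, row i of S summing to P(i)) and all names below are ours; this is an illustration of the proof's procedure, not an implementation from the paper:

```python
import numpy as np

def perturb_map(S, P_prime, d):
    """Adapt a transportation map S (row i sums to P(i)) to a nearby pmf P',
    following the proof: deficit bins gain mass on the diagonal (zero distortion),
    excess bins shed mass starting from the highest-distortion cells d(i, j)."""
    S2 = S.astype(float).copy()
    P = S.sum(axis=1)
    for i, tau in enumerate(P - P_prime):
        if tau <= 0:                     # P'(i) >= P(i): park the extra mass at j = i
            S2[i, i] += -tau
        else:                            # P'(i) < P(i): remove tau from row i
            for j in np.argsort(-d[i]):  # highest per-unit distortion first
                take = min(tau, S2[i, j])
                S2[i, j] -= take
                tau -= take
                if tau <= 0:
                    break
    return S2

# Toy 3-bin example with |i - j| distortion and a small perturbation of P.
d = np.abs(np.subtract.outer(np.arange(3), np.arange(3))).astype(float)
S = np.array([[0.2, 0.1, 0.0],
              [0.0, 0.3, 0.1],
              [0.1, 0.0, 0.2]])
P = S.sum(axis=1)
P_prime = P + np.array([0.05, -0.03, -0.02])
S2 = perturb_map(S, P_prime, d)
```

By construction S2 has row sums P', moves at most ∑_i |τ(i)| of mass in L_1, and never increases the total distortion ∑_{i,j} S(i,j) d(i,j) — the three facts the proof relies on.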
In the second part of the lemma, we require that P' ∈ 𝒫_n and that the map produce an output pmf in 𝒫_n. The proof is easily achieved by exploiting the first part of the lemma, according to which for any map S_PV in 𝒜(L,P) we can find a map S_P'V' in 𝒜(L,P') which is arbitrarily close to S_PV, and then approximating S_P'V' with a map S^n_P'V' ∈ 𝒜^n(L,P'). Due to the density of the rational numbers in the reals, such an approximation can be made arbitrarily accurate by increasing n, thus completing the proof.

Given a transformation S_PV mapping P into V, Lemma <ref> states that, for any pmf P' close to P, we can find a map S_P'V' close to S_PV. The following theorem extends such a result to the pmf resulting from the application of the mapping.

Let P ∈ 𝒫, and let P' be any pmf in the neighbourhood of P of radius τ, i.e., P' ∈ ℬ(P,τ). Let S_PV ∈ 𝒜(L,P). Then, we can always find a map S_P'V' ∈ 𝒜(L, P') such that V' ∈ ℬ(V, τ). Similarly, for any ε > 0, there exist τ^* and n^* such that ∀τ < τ^* and n > n^*, given a P ∈ 𝒫, a map S_PV ∈ 𝒜(L, P) and P' ∈ 𝒫_n ∩ ℬ(P,τ), we can find a map S^n_P'V' in 𝒜^n(L, P') such that V'_n ∈ ℬ(V, ε) ∩ 𝒫_n.

For any two maps S_PV and S_P'V', we have:

V'(j) = ∑_i S_P'V'(i,j) = ∑_i (S_PV(i,j) + (S_P'V'(i,j) - S_PV(i,j))) ≤ V(j) + ∑_i |S_P'V'(i,j) - S_PV(i,j)|,

and

V'(j) = ∑_i S_P'V'(i,j) = ∑_i (S_PV(i,j) + (S_P'V'(i,j) - S_PV(i,j))) ≥ V(j) - ∑_i |S_P'V'(i,j) - S_PV(i,j)|,

yielding:

|V'(j) - V(j)| ≤ ∑_i |S_P'V'(i,j) - S_PV(i,j)|.

By summing over j and exploiting Lemma <ref>, we can choose S_P'V' so that:

∑_j |V'(j) - V(j)| ≤ ∑_{i,j} |S_P'V'(i,j) - S_PV(i,j)| ≤ δ_H(𝒜(L, P'), 𝒜(L, P)) ≤ τ,

and hence V' ∈ ℬ(V, τ). Similarly to the second part of Lemma <ref>, the second part of the theorem follows immediately from the density of the rational numbers in the real line.
Hamzeh.Khanpour@mail.ipm.ir, Sara.Taheri@ipm.ir, Atashbar@ipm.ir
^(1)Department of Physics, University of Science and Technology of Mazandaran, P.O. Box 48518-78195, Behshahr, Iran
^(2)School of Particles and Accelerators, Institute for Research in Fundamental Sciences (IPM), P.O. Box 19395-5531, Tehran, Iran
^(3)Department of Physics, Faculty of Basic Science, Islamic Azad University Central Tehran Branch (IAUCTB), P.O. Box 14676-86831, Tehran, Iran
^(4)Independent researcher, P.O. Box 1149-8834413, Tehran, Iran

We extract polarized parton distribution functions (PPDFs), referred to as “KTA17,” together with the highly correlated strong coupling α_s from recent and up-to-date g_1 and g_2 polarized structure function world data at next-to-next-to-leading order in perturbative QCD. The stability and reliability of the results are ensured by including nonperturbative target mass corrections as well as higher-twist terms, which are particularly important in the large-x region at low Q^2. Their role in extracting the PPDFs of the nucleon is studied. Sum rules are discussed and compared with other results from the literature. The analysis is carried out by means of the Jacobi polynomials expansion technique for the DGLAP evolution. The uncertainties on the observables and on the PPDFs throughout this paper are computed using standard Hessian error propagation, which serves to provide a more realistic estimate of the PPDF uncertainties. 13.60.Hb, 12.39.-x, 14.65.Bt

Nucleon spin structure functions at NNLO in the presence of target mass corrections and higher twist effects
S. Atashbar Tehrani^4
December 30, 2023
============================================================================================================

§ INTRODUCTION

Hadrons are complex systems consisting of quarks and gluons. The determination of parton densities and understanding the details of their x and Q^2 dependence is one of the most important challenges in high energy physics.
A straightforward calculation of the cross section is available via the collinear factorization theorem in perturbative QCD (pQCD). Particularly interesting is the investigation of polarized processes, which provides information about the decomposition of the nucleon's spin into its quark and gluon constituent parts. In recent years, the deep inelastic scattering (DIS) of polarized leptons off polarized nucleons has played an important role in the study of the nucleon spin structure functions. The spin structure of the nucleon is still one of the major unresolved issues in hadronic physics <cit.>. While the combined quark and antiquark spin contributions to the nucleon spin have been measured to be about 30%, the contribution of the gluon spin to the spin of the nucleon is still insufficiently constrained after more than two decades of intense study. The last few years have witnessed tremendous experimental and phenomenological progress in our understanding of the spin structure of the nucleon. There are several QCD analyses of the polarized DIS data, along with estimations of their uncertainties, in the literature <cit.>. Current phenomenological spin-dependent parton distribution function (PDF) analyses use the spin-dependent DIS measurements of g_1^p,n,d(x, Q^2) and g_2^p,n,d(x, Q^2); see Table <ref>. Besides these data sets, one can also include the recent PHENIX measurement of neutral-pion π^0 production <cit.> at √(s) = 200, 510 GeV and inclusive jet production from the STAR Collaboration <cit.> in polarized proton-proton collisions at the Relativistic Heavy Ion Collider (RHIC). The longitudinal single-spin asymmetries in W^± weak boson production <cit.> from polarized proton-proton collisions can also be used.
These data sets may lead to a better determination of the polarized gluon, sea quark and antiquark distributions at small x. The precision of polarized parton distribution function (PPDF) determinations in QCD analyses has steadily improved over recent years, mainly due to refined theory predictions for the hard parton scattering reactions and also more accurate experimental observables. Recently, the COMPASS Collaboration at CERN <cit.> extracted the spin-dependent structure function of the proton, g_1^p(x, Q^2), and the longitudinal double-spin asymmetries A_1^p(x, Q^2) from scattering of polarized muons off polarized protons in the region of low x (down to 0.0025) and high photon virtuality Q^2. Although significant progress has been made, the gluon polarization, a fundamental ingredient describing the inner structure of the nucleon, suffers from large uncertainties and remains poorly constrained. Worse, the gluon distributions obtained by different collaborations exhibit significant differences. In our latest analysis, TKAA16 <cit.>, we performed the first detailed pQCD analysis using the Jacobi polynomials approach at next-to-leading-order (NLO) and next-to-next-to-leading-order (NNLO) approximation. All the available and up-to-date g_1^p,n,d(x, Q^2) world data, including recent COMPASS measurements <cit.>, were considered, which led to a new parametrization of the spin-dependent parton densities. In that paper <cit.>, we simply assumed the equality g_1(x, Q^2) ≡ g_1^τ2(x, Q^2), while the information on quark-gluon correlations is encoded in the higher-twist parts of g_1(x, Q^2) and g_2(x, Q^2). Here, τ2 means twist 2 and HT refers to higher twist. Although these dynamical effects are suppressed by inverse powers of Q^2 in the HT expansion of g_1(x, Q^2), they appear to be as important as the twist-2 part in g_2(x, Q^2).
This special property makes measurements of g_2(x, Q^2) particularly sensitive probes of multiparton correlations in the nucleon. Furthermore, the g_2(x, Q^2) observables lie mostly in the low-Q^2 region, where target mass corrections (TMCs) and HT effects become significant. In the current analysis, which we refer to as “KTA17,” we develop a precise analysis by including TMCs and HT contributions in both the g_1(x, Q^2) and g_2(x, Q^2) structure functions. The role of these corrections in PPDF estimation using pQCD fits to the data is discussed. Studies of the moments of spin-dependent structure functions provide an opportunity to test our understanding of pQCD, such as through the Bjorken sum rule. We also demonstrate once more the reliability and validity of the Jacobi polynomials expansion approach at the NNLO approximation for extracting the PPDFs from polarized DIS structure functions.

The remainder of this article is organized as follows: In Sec. <ref>, we review the theoretical formalism underpinning the KTA17 analysis of the polarized DIS structure function, the Jacobi polynomials approach, target mass corrections and higher-twist effects. Section <ref> provides an overview of the method of the analysis, data selection, χ^2 minimization and error calculation. The results of the present NNLO polarized PDF fits and a detailed comparison with available observables are discussed in Sec. <ref>. We compute and compare the associated polarized sum rules in Sec. <ref>. A short discussion of the present status of polarized PDF global analyses is given in Sec. <ref>. Finally, Sec. <ref> contains the summary and concluding remarks. In Appendix <ref>, we present a FORTRAN package containing results for the KTA17 polarized structure functions at NNLO approximation together with the corresponding uncertainties. Appendix <ref> provides the analytical expressions for the polarized NNLO quark-quark and gluon-quark splitting functions.
§ THEORETICAL FRAMEWORK

In this section, we review the basic theoretical framework for the polarized DIS structure functions on which the KTA17 PPDF analysis is based. After a brief review of the leading-twist structure functions at NNLO approximation, we present the Jacobi polynomials expansion method, which was already used to extract the KTA17 PPDFs at NNLO approximation from polarized DIS data <cit.>. Our approach to taking into account TMCs and HT corrections is discussed in the following subsections.

§.§ Leading-twist polarized DIS structure function

In the light-cone operator-product expansion (OPE), the leading-twist (twist τ = 2) contributions correspond to scattering off asymptotically free partons, while the higher-twist contributions emerge due to multiparton correlations. The leading-twist spin-dependent proton and neutron structure functions, g_1^p,n(x, Q^2), at NNLO can be expressed as a linear combination of polarized parton densities and coefficient functions as <cit.>

g_1^p(x, Q^2) = 1/2 ∑_q e^2_q {Δq_v(x, Q^2) ⊗ [1 + α_s(Q^2)/2π ΔC^(1)_q + (α_s(Q^2)/2π)^2 ΔC^(2)_ns] + (Δq_s + Δq̅_s)(x, Q^2) ⊗ [1 + α_s(Q^2)/2π ΔC^(1)_q + (α_s(Q^2)/2π)^2 ΔC^(2)_s]} + 2/9 Δg(x, Q^2) ⊗ [α_s(Q^2)/2π ΔC^(1)_g + (α_s(Q^2)/2π)^2 ΔC^(2)_g].

Here, Δq_v, Δq_s and Δg are the polarized valence, sea and gluon densities, respectively. The pQCD evolution kernel for PPDFs is now available at NNLO in Ref. <cit.>. The ΔC^(1)_q and ΔC^(1)_g are the NLO spin-dependent quark and gluon hard scattering coefficients, calculable in pQCD <cit.>. We applied the hard scattering coefficients extracted at NNLO approximation; at this order the Wilson coefficients differ for quarks and antiquarks, and we used ΔC^(2)_ns and ΔC^(2)_s <cit.>. The typical convolution in x space is represented by the symbol ⊗. Assuming isospin symmetry, the corresponding neutron structure functions can be obtained.
The leading-twist deuteron structure function can be obtained from g_1^p and g_1^n via the relation g_1^τ2(d)(x, Q^2) = 1/2 {g_1^p(x, Q^2) + g_1^n(x, Q^2)} × (1 - 1.5 w_D), where w_D = 0.05 ± 0.01 is the probability to find the deuteron in a D state <cit.>. The leading-twist polarized structure function g_2^τ2(x, Q^2) is fully determined by g_1^τ2(x, Q^2) via the Wandzura-Wilczek (WW) term <cit.>:

g_2^τ2(x, Q^2) = g_2^WW(x, Q^2) = -g_1^τ2(x, Q^2) + ∫_x^1 dy/y g_1^τ2(y, Q^2).

This relation remains valid at leading twist even when target mass corrections are included <cit.>. The leading-twist definitions of g_1^τ2(x, Q^2) and g_2^τ2(x, Q^2) are valid in the Bjorken limit, i.e. Q^2 → ∞, x fixed, while at moderately low Q^2 (∼ 1-5 GeV^2) and W^2 (4 GeV^2 < W^2 < 10 GeV^2), TMCs and HT contributions must be fully taken into account in studies of the nucleon structure functions. As we have already mentioned, the most significant improvement of the KTA17 analysis in comparison to Ref. <cit.> is the treatment of target mass corrections and higher-twist contributions to the spin-dependent structure functions. They will be discussed in detail in the following subsections.

§.§ Jacobi polynomials approach

The method we employ in this paper is based on the Jacobi polynomials expansion of the polarized structure functions. Practical aspects of this method, including its major advantages, are presented in our previous studies <cit.> and also in other literature <cit.>. Here, we give a brief review of this method. In the polynomial fitting procedure, the evolution equation is combined with the truncated series to perform a direct fit to structure functions. According to this method, one can expand the polarized structure function x g_1^QCD(x, Q^2) in terms of the Jacobi polynomials Θ_n^α,β(x) as follows:

x g_1^τ2(x, Q^2) = x^β (1 - x)^α ∑_{n=0}^{N_max} a_n(Q^2) Θ_n^α,β(x),

where N_max is the maximum order of the expansion.
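As a numerical illustration of the WW relation (our code, using a toy twist-2 g_1 rather than a fitted one), one can evaluate g_2^WW on a grid and check the well-known consequence ∫_0^1 g_2^WW dx = 0, which holds up to the truncation of the grid at small x:

```python
import numpy as np

def g2_ww(g1, x):
    """Wandzura-Wilczek relation g2(x) = -g1(x) + ∫_x^1 g1(y) dy/y,
    evaluated on a grid with trapezoidal integration (g1 given as grid values)."""
    integrand = g1 / x
    # running integral from x[0] up to each x[k], then subtract to get the tail to x = 1
    running = np.concatenate(([0.0], np.cumsum(
        0.5 * (integrand[1:] + integrand[:-1]) * np.diff(x))))
    return -g1 + (running[-1] - running)

x = np.linspace(1e-3, 1.0, 2000)
g1 = x**0.5 * (1.0 - x)**3          # toy twist-2 g1, not a fitted result
g2 = g2_ww(g1, x)
```

The sign structure is visible directly: g_2^WW is positive at small x, where the integral term dominates, and tends to -g_1 as x → 1.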
The parameters α and β are free parameters of the Jacobi polynomials, normally fixed at their best values. These parameters have to be chosen so as to achieve the fastest convergence of the series on the right-hand side of Eq. (<ref>). The Q^2 dependence of the polarized structure functions is encoded in the Jacobi polynomial moments a_n(Q^2). The x dependence is provided by the weight function w^α,β(x) ≡ x^β (1 - x)^α and the Jacobi polynomials Θ_n^α,β(x), which can be written as

Θ_n^α,β(x) = ∑_{j=0}^{n} c_j^(n)(α, β) x^j,

where the coefficients c_j^(n)(α, β) are combinations of Gamma functions in terms of n, α and β. These Jacobi polynomials satisfy the orthogonality relation

∫_0^1 dx x^β (1 - x)^α Θ_n^α,β(x) Θ_l^α,β(x) = δ_{n,l}.

Consequently, one can obtain the Jacobi moments a_n(Q^2) using the above orthogonality relation as

a_n(Q^2) = ∫_0^1 dx x g_1^τ2(x, Q^2) Θ_n^α,β(x) = ∑_{j=0}^{n} c_j^(n)(α, β) M[x g_1^τ2, j + 2](Q^2),

where the Mellin transform M[x g_1^τ2, N] is defined as

M[x g_1^τ2, N](Q^2) ≡ ∫_0^1 dx x^{N-2} x g_1^τ2(x, Q^2).

Finally, having the QCD expressions for the Mellin moments M(Q^2), we can reconstruct the polarized structure function x g_1^τ2(x, Q^2). Using the Jacobi polynomials expansion method, x g_1^τ2(x, Q^2) can be constructed as

x g_1^τ2(x, Q^2) = x^β (1 - x)^α ∑_{n=0}^{N_max} Θ_n^α,β(x) ∑_{j=0}^{n} c_j^(n)(α, β) M[x g_1^τ2, j + 2](Q^2).

We have shown in our previous analyses that setting N_max = 9, α = 3 and β = 0.5 yields optimal convergence of this expansion throughout the whole kinematic region constrained by the polarized DIS data. If α is allowed to vary in the fit procedure, it takes values close to 3, with neither a change in the PPDF parameter values nor a significant improvement in the χ^2/d.o.f. By contrast, in the absence of sufficient data to constrain β directly, we prefer to fix β to the value 0.5 suggested by Regge arguments at low x.
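The expansion above can be checked numerically. The sketch below is ours: for simplicity it builds the orthonormal polynomials by Gram-Schmidt on a quadrature grid rather than from the closed-form c_j^(n) coefficients, and reconstructs a toy x g_1 from its moments with α = 3, β = 0.5 and N_max = 9:

```python
import numpy as np

def jacobi_basis(alpha, beta, nmax, npts=400):
    """Polynomials Theta_n orthonormal on [0,1] w.r.t. w(x) = x^beta (1-x)^alpha,
    built by (repeated) Gram-Schmidt on monomials over a Gauss-Legendre grid."""
    nodes, gl = np.polynomial.legendre.leggauss(npts)
    x = 0.5 * (nodes + 1.0)            # map [-1, 1] -> [0, 1]
    dx = 0.5 * gl                      # plain quadrature weights on [0, 1]
    w = x**beta * (1.0 - x)**alpha     # Jacobi weight function
    basis = []
    for n in range(nmax + 1):
        v = x**n
        for _ in range(2):             # second orthogonalization pass for stability
            for u in basis:
                v = v - (dx * w * u * v).sum() * u
        basis.append(v / np.sqrt((dx * w * v * v).sum()))
    return x, dx, w, np.array(basis)

alpha, beta, nmax = 3.0, 0.5, 9
x, dx, w, theta = jacobi_basis(alpha, beta, nmax)

# Toy stand-in for x*g1(x): moments a_n = ∫ f Θ_n dx, rebuild f = w(x) Σ_n a_n Θ_n(x).
f = x**0.5 * (1.0 - x)**3 * (1.0 + 2.0 * x)
a = theta @ (dx * f)
f_rec = w * (a @ theta)
```

Because f/w is a degree-1 polynomial here, only a_0 and a_1 are nonzero and the reconstruction is exact; for a realistic x g_1, the truncation at N_max = 9 is precisely what the convergence study in the text addresses.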
For the chosen α and β values, the rate of convergence is adequate for all practical purposes. In principle N_max can become arbitrarily large, and the freedom to increase N_max can compensate for injudiciously chosen values of the constants α and β; in practice, however, we want to reduce the number of expansion terms and find the most practical form. To study the dependence of the fit results on the value of N_max, we also allow it to vary. In practice, we find that at Q_0^2 = 1 GeV^2, for α = 3 and β = 0.5, no improvement is achieved by allowing the polynomial expansion to vary between seven and nine terms. Inserting the Jacobi polynomials expansion of g_1^τ2(x, Q^2) from Eq. (<ref>) into the WW relation, Eq. (<ref>), leads to an analytical result for the g_2^τ2(x, Q^2) structure function.

§.§ Target mass corrections and the threshold problem

In the low-Q^2 region, the nucleon mass correction cannot be neglected, and the power-suppressed corrections to the structure functions can make important contributions in some kinematic regions. Unlike dynamical HT effects, the TMCs can be calculated in closed form. We follow the method suggested by Georgi and Politzer <cit.> for the unpolarized structure function, which was generalized by Blumlein and Tkabladze <cit.> to all polarized structure functions. These corrections were presented both in terms of the integer moments and via Mellin inversion to x space. The explicit twist-2 expression of g_1 with TMCs is <cit.>

g_1^τ2+TMCs(x, Q^2) = x g_1^τ2(ξ, Q^2; M=0) / [ξ (1 + 4M^2x^2/Q^2)^{3/2}] + [4M^2x^2/Q^2] (x + ξ) / [ξ (1 + 4M^2x^2/Q^2)^2] ∫_ξ^1 dξ'/ξ' g_1^τ2(ξ', Q^2; M=0) - [4M^2x^2/Q^2] (2 - 4M^2x^2/Q^2) / [2 (1 + 4M^2x^2/Q^2)^{5/2}] ∫_ξ^1 dξ'/ξ' ∫_{ξ'}^1 dξ''/ξ'' g_1^τ2(ξ'', Q^2; M=0).

Here, M is the nucleon mass.
Similarly, the target-mass-corrected structure function g_2 at twist 2 is given by

g_2^τ2+TMCs(x, Q^2) = -x g_1^τ2(ξ, Q^2; M=0) / [ξ (1 + 4M^2x^2/Q^2)^{3/2}] + x (1 - 4M^2xξ/Q^2) / [ξ (1 + 4M^2x^2/Q^2)^2] ∫_ξ^1 dξ'/ξ' g_1^τ2(ξ', Q^2; M=0) + (3/2) [4M^2x^2/Q^2] / (1 + 4M^2x^2/Q^2)^{5/2} ∫_ξ^1 dξ'/ξ' ∫_{ξ'}^1 dξ''/ξ'' g_1^τ2(ξ'', Q^2; M=0),

where the Nachtmann variable <cit.> is given by

ξ = 2x / (1 + √(1 + 4M^2x^2/Q^2)).

The maximum kinematic value of ξ is less than unity, which means that neither the polarized nor the unpolarized target-mass-corrected leading-twist structure functions vanish at x = 1. This long-standing threshold problem appears in the presence of TMCs and violates momentum and energy conservation. The kinematics where this problem becomes relevant are limited to the nucleon resonance region. Many efforts have been made to avoid this unphysical behavior by considering various prescriptions; they have been discussed at length in the literature <cit.>, and these solutions are not unique <cit.>. Accardi and Melnitchouk <cit.> introduced limitations on the virtuality of the struck quark to enforce an abrupt cutoff at x = 1, whereas Georgi and Politzer <cit.>, Piccione and Ridolfi <cit.>, and also the authors of <cit.> argued that higher-twist terms must be taken into account in the region of large x to prevent the threshold problem. Furthermore, D'Alesio et al. <cit.> defined the maximum kinematically allowed region of x by imposing the probability for hadronization as θ(x_TH - x), with

x_TH = Q^2 / (Q^2 + μ(2M + μ)).

Here, μ is the mass of the lightest particle accessible in the process of interest. In this paper, we follow the latter prescription to tame this paradox.

§.§ Higher-twist effects

In addition to the purely kinematical TMCs, the polarized structure functions in the OPE receive remarkable contributions from HT terms. At large values of x, their contributions are increasingly important.
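The two kinematic quantities just introduced are easy to evaluate; a small sketch (our code, with M set to the proton mass and μ to the pion mass, both in GeV):

```python
import math

M_PROTON = 0.9383   # GeV
M_PION = 0.1396     # GeV

def nachtmann_xi(x, Q2, M=M_PROTON):
    """Nachtmann variable xi = 2x / (1 + sqrt(1 + 4 M^2 x^2 / Q^2))."""
    return 2.0 * x / (1.0 + math.sqrt(1.0 + 4.0 * M * M * x * x / Q2))

def x_threshold(Q2, mu=M_PION, M=M_PROTON):
    """Hadronization cutoff x_TH = Q^2 / (Q^2 + mu (2M + mu)) of D'Alesio et al."""
    return Q2 / (Q2 + mu * (2.0 * M + mu))

# At Q^2 = 2 GeV^2 the target mass matters: xi sits well below x at large x,
# and the theta(x_TH - x) cutoff removes the region near x = 1.
xi = nachtmann_xi(0.8, 2.0)
xth = x_threshold(2.0)
```

As Q^2 grows, ξ → x and x_TH → 1, so both corrections switch off in the Bjorken limit, as they must.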
The study of HT corrections provides direct insight into the nature of long-range dynamical multigluon exchange, i.e. parton correlations in the nucleon. Similar to the TMCs, HT terms contribute at low values of Q^2 and vanish at large Q^2. Both the g_1 and g_2 structure functions involve nonperturbative contributions from quark and gluon correlations. In the case of the g_1 structure function, these correlations enter with inverse powers of Q^2 and are thus suppressed. The g_2(x, Q^2) structure function can be written as <cit.>

g_2(x, Q^2) = g_2^τ2(x, Q^2) + g̅_2(x, Q^2),

where

g̅_2(x, Q^2) = -∫_x^1 ∂/∂y [m_q/M h_T(y, Q^2) + ζ(y, Q^2)] dy/y.

The function h_T(x, Q^2) denotes the leading-twist transverse polarization density; its contribution is suppressed by the ratio of the quark to nucleon masses, m_q/M. The twist-3 term ζ(x, Q^2) is associated with nonperturbative multiparton interactions. There is no direct interpretation of these nonperturbative contributions, and they can only be calculated in a model-dependent manner. We utilize the HT parametrization suggested by Braun, Lautenschlager, Manashov, and Pirnay (BLMP) <cit.>. To this end, we construct the higher-twist parton distributions in a nucleon at some reference scale as

g_2^τ3(x) = A_HT [ln(x) + (1 - x) + 1/2 (1 - x)^2] + (1 - x)^3 [B_HT + C_HT (1 - x) + D_HT (1 - x)^2 + E_HT (1 - x)^3],

where the coefficients {A_HT, B_HT, C_HT, D_HT, E_HT} for the proton, neutron and deuteron are obtained by fitting to data. Using g_2^τ3(n) = ∫_0^1 g_2^τ3(x) x^{n-1} dx, one obtains the Mellin moments. The Q^2 dependence of g_2^τ3 is obtained through nonsinglet perturbative QCD evolution as

g_2^τ3(n, Q^2) = M^NS(n, Q^2) g_2^τ3(n).

This method is compared with the exact evolution equations for the gluon-quark-antiquark correlation in Ref. <cit.>; the results are almost the same, since the HT contributions are especially important in the large-x region.
We note that, by modifying the large-x behavior, the small-x polarized parton densities could be affected through the momentum sum rule. Using the Jacobi polynomials technique presented in Eq. (<ref>), one can reconstruct the twist-3 part of the spin-dependent structure functions, x g_2^τ3(x, Q^2), from its Q^2-dependent Mellin moments. Through the integral relation

g_1^τ3(x, Q^2) = 4x^2M^2/Q^2 [g_2^τ3(x, Q^2) - 2 ∫_x^1 dy/y g_2^τ3(y, Q^2)],

the twist-3 part of the spin-dependent structure functions, g_1^τ3(x, Q^2), can also be obtained <cit.>. Finally, the spin-dependent structure functions including TMCs and HT terms read:

x g_1,2^Full(x, Q^2) = x g_1,2^τ2+TMCs(x, Q^2) + x g_1,2^τ3(x, Q^2),

with "Full" denoting pQCD+TMC+HT. A particular feature of x g_2^Full(x, Q^2) is that its twist-3 term is not suppressed by inverse powers of Q^2, so it is as important as the twist-2 contribution. Here, we neglected the effect of TMCs on the τ3 terms, similarly to JAM13 <cit.>. Given the current level of accuracy, this estimation seems reasonable. Of course, with the new generation of data coming from the 12 GeV Jefferson Lab experiments <cit.>, our analysis should be extended to include TMCs for τ3, but for now it stands reasonably well.

§ KTA17 NNLO QCD ANALYSIS AND PARAMETRIZATION

Motivated by the interest in studying the effects of HT terms and TMCs, we carried out the following new global analysis of PPDFs. We show that our predictions are consistent with the results obtained in recent studies. In this section, we discuss the method of the KTA17 analysis, including the functional form we use, the data sets considered in the analysis and the method of error calculation. The determination of the polarized PDF uncertainties also follows the method given in this section.

§.§ Parametrization

Various functional forms have been proposed for the polarized PDFs in pQCD analyses.
Throughout our analysis, we adopt exactly the same conventions as in the TKAA16 global fit <cit.>. In the present analysis, we take into account the following parametrization at the initial scale Q_0^2 = 1 GeV^2:

x Δq(x, Q_0^2) = N_q η_q x^{α_q} (1 - x)^{β_q} (1 + γ_q x),

where the normalization factors N_q are determined from

1/N_q = (1 + γ_q α_q/(α_q + β_q + 1)) B(α_q, β_q + 1).

The label Δq = {Δu_v, Δd_v, Δq̅, Δg} corresponds to the polarized up-valence, down-valence, sea and gluon distributions, respectively. Charm and bottom quark contributions play no role for any of the presently available data. B(α_q, β_q + 1) is the Euler beta function. Assuming SU(3) flavor symmetry, and due to the absence of semi-inclusive DIS (SIDIS) data in the KTA17 analysis, we fit only Δq̅ ≡ Δu̅ = Δd̅ = Δs̅ = Δs, although we would allow for an SU(3) symmetry-breaking term through a factor κ such that Δs̅ = Δs = κ Δq̅; no improvement is achieved for any specific choice of κ. Referring to the inclusive polarized DIS world data only, this strategy for the evolution of valence and sea quark distributions has previously been applied by Blumlein and Bottcher <cit.>, by the LSS group in <cit.> and also in our earlier studies <cit.>. The normalization factors N_q are chosen such that the parameters η_q are the first moments of Δq_i(x, Q_0^2), i.e. η_i = ∫_0^1 dx Δq_i(x, Q_0^2). The present polarized DIS data are not accurate enough to determine all the shape parameters with sufficient precision. Equation (<ref>) includes 14 free parameters in total, and we further reduce their number in the final minimization. The first moments of the polarized valence distributions can be described in terms of the axial charges of the octet baryons, F and D, measured in hyperon and neutron β decays. These constraints lead to the values η_u_v = 0.928 ± 0.014 and η_d_v = -0.342 ± 0.018 <cit.>. We fix the two valence first moments at their central values.
The parameters η_q̅ and η_g are determined from the fit. We find that the factor (1 + γ_q x) provides flexibility to achieve a good description of the data, especially for the valence densities {γ_u_v, γ_d_v}. The relevance of the parameters γ_q̅ and γ_g has been investigated by fixing all of them to zero and releasing them separately to test all possible combinations; given the present accuracy of the polarized DIS data, no improvement is observed, and we prefer to set them to zero. The parameters {A_HT, B_HT, C_HT, D_HT, E_HT} from Eq. (<ref>) specify the functional forms of g_2^τ3 and consequently g_1^τ3. They are extracted from a simultaneous fit to the polarized observables.

§.§ Overview of data sets

The core of all polarized PDF fits comprises the DIS data obtained at the electron-proton collider and in fixed-target experiments on proton, neutron and heavier targets such as the deuteron. Besides the polarized DIS data, a significant amount of fixed-target SIDIS data <cit.> and data from longitudinally polarized proton-proton (pp) collisions at RHIC have only recently become available, for a limited range of momentum fractions, 0.05 < x < 0.4 <cit.>. In the KTA17 analysis, we focus on the polarized DIS data samples. However, as only inclusive DIS data are included in the fit, it is not possible to separate quarks from antiquarks. We include the g_2 structure function in the KTA17 fitting procedure; it has traditionally been neglected due to the technical difficulty of operating the required transversely polarized target. We use all available g_1^p data from the E143, HERMES98, SMC, EMC, E155, HERMES06, COMPASS10 and COMPASS16 experiments <cit.>; g_1^n data from HERMES98, E142, E154, HERMES06, JLab03, JLab04 and JLab05 <cit.>; and finally the g_1^d data from E143, SMC, HERMES06, E155, COMPASS05 and COMPASS06 <cit.>.
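The normalization in the parametrization above can be verified in a few lines: with 1/N_q as given, the first moment of Δq equals η_q for any shape parameters, since ∫_0^1 x^{α-1}(1-x)^β(1+γx) dx = B(α, β+1) + γ B(α+1, β+1) and B(α+1, β+1) = B(α, β+1) α/(α+β+1). The parameter values below are purely illustrative, not fit results:

```python
import math

def beta_fn(a, b):
    """Euler beta function B(a, b)."""
    return math.gamma(a) * math.gamma(b) / math.gamma(a + b)

def first_moment(eta, alpha, beta, gamma):
    """First moment ∫_0^1 Δq(x) dx of x Δq = N η x^α (1-x)^β (1 + γ x),
    with 1/N = (1 + γ α/(α+β+1)) B(α, β+1) as in the text."""
    N = 1.0 / ((1.0 + gamma * alpha / (alpha + beta + 1.0)) * beta_fn(alpha, beta + 1.0))
    # ∫_0^1 x^(α-1) (1-x)^β (1 + γ x) dx = B(α, β+1) + γ B(α+1, β+1)
    return N * eta * (beta_fn(alpha, beta + 1.0) + gamma * beta_fn(alpha + 1.0, beta + 1.0))

m = first_moment(0.928, 0.7, 3.2, 7.5)   # returns η = 0.928 up to rounding
```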
The DIS data for g_2^p,n,d from E143, E142, JLab03, JLab04, JLab05, E155, HERMES12 and SMC <cit.> are also included. These data sets are summarized in Table <ref>, which lists the kinematic coverage, the number of data points for each target, and the fitted normalization shifts N_i. To fully avoid the region of higher-twist effects, a cut on the hadronic mass W^2 would be required; the sensitivity to the choice of cuts on W^2 is discussed in Ref. <cit.>. It is impossible to apply such a cut to the present data on the spin-dependent structure functions without losing too much information, and in any case we want to stay inside the region of higher-twist corrections. Following Eq. (<ref>), the maximum kinematically allowed region of x is considered in our analysis. Moreover, due to the restriction to perturbative QCD, our KTA17 analysis is limited to the region Q^2 ≥ 1 GeV^2. It is well known that a reasonable choice of Q_0^2 is required. The DGLAP equations allow one to move in Q^2, given a perturbatively calculable boundary condition. The choice of Q_0^2 is typically the smallest value of Q^2 at which one trusts pQCD, because backward evolution in the DGLAP equations induces larger errors than forward evolution. Like most of the fitting programs on the market, which solve the DGLAP evolution equations in Mellin space, the KTA17 analysis algorithm computes the Q^2 evolution and extracts the structure function in x space using the Jacobi polynomials approach.

§.§ χ^2 minimization

To determine the best fit at NNLO, we minimize the χ^2_global function with respect to the free PPDF parameters together with Λ_QCD. χ^2_global(p) quantifies the goodness of fit to the data for a set of independent parameters p that specifies the polarized PDFs at Q_0^2 = 1 GeV^2.
This function is expressed as follows, χ_ global^2 ( p) = ∑_n=1^N_ exp w_n χ_n^2 , where w_n is a weight factor for the nth experiment and χ_n^2 ( p) = ( 1 - N_n /Δ N_n)^2 + ∑_i=1^N_n^ data( N_n g_(1,2), i^ Exp - g_(1,2), i^ Theory (p) / N_n Δ g_(1,2), i^ Exp)^2 . The minimization of the above χ_ global^2 ( p) function is performed using the CERN program library MINUIT <cit.>. In the above equation, the main contribution comes from the difference between the model and the DIS data within the statistical precision. In the χ_n^2 function, g^ Exp, Δ g^ Exp, and g^ Theory denote the experimental measurement, the experimental uncertainty (statistical and systematic combined in quadrature) and the theoretical value for the ith data point, respectively. N_n is the overall normalization factor for the data of experiment n, and Δ N_n is the corresponding experimental normalization uncertainty. We allow for a relative normalization factor N_n between different experimental data sets, within the uncertainties Δ N_n quoted by the experiments. The normalization factors appear as free parameters in the fit; they are determined simultaneously with the parameters of the functional forms in a prefitting procedure and then fixed at their best values. §.§ PPDF uncertainties A robust treatment of uncertainties is desirable throughout a full NNLO analysis. In this section, we briefly review the method we use to extract the polarized PDF uncertainties. Methodologies for the estimation of uncertainties are essential for understanding the accuracy of collider predictions, both for precision measurements and for new physics searches. Three approaches are available to propagate the statistical precision of the experimental data to the fit results, based on the diagonalization of the Hessian error matrix, on Lagrange multipliers, and on Monte Carlo sampling of parton distributions <cit.>. The Hessian and Monte Carlo techniques are the most commonly used methods.
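As an illustrative numerical sketch of the χ_ global^2 and χ_n^2 definitions above (the function names and toy numbers are ours; they are not the actual KTA17/MINUIT implementation):

```python
import math

def chi2_experiment(g_exp, dg_exp, g_theory, N_n, dN_n):
    """Per-experiment chi^2: a penalty for the normalization shift N_n
    plus the data-theory residuals rescaled by N_n."""
    chi2 = ((1.0 - N_n) / dN_n) ** 2
    for g, dg, t in zip(g_exp, dg_exp, g_theory):
        chi2 += ((N_n * g - t) / (N_n * dg)) ** 2
    return chi2

def chi2_global(experiments, weights):
    """chi^2_global = sum_n w_n chi^2_n over all experiments."""
    return sum(w * chi2_experiment(*args)
               for w, args in zip(weights, experiments))
```

In the fit, N_n would float within Δ N_n during the prefitting step; here, a data set in perfect agreement with theory at N_n = 1 contributes zero, while a 1-σ normalization shift contributes exactly one unit.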
The adequacy of the parametrization of Eq. (<ref>) at the reference scale Q_0^2 = 1 GeV^2 for given N_ max, α and β is investigated with the Hessian matrix method, which is fully discussed in Refs. <cit.>. In the Hessian method, the uncertainty on a polarized PDF, Δ q(x), is obtained from linear error propagation, [Δ q (x)]^2 = Δχ^2_ global× [ ∑_i (∂Δ q(x, â)/∂ a_i)^2 C_i i + ∑_i ≠ j ( ∂Δ q(x, â)/∂ a_i∂Δ q(x, â)/∂ a_j ) C_i j ], where a_i (i = 1, 2, ..., N) denotes the free parameters of each distribution in Eq. (<ref>), N is the number of optimized parameters and â_i are the optimized parameters. C ≡ H_i, j^-1 are the elements of the covariance (or error) matrix determined in the QCD analysis at the scale Q^2_0, and T = Δχ^2_ global is the tolerance for the required confidence level (C.L.). In order to compare the uncertainties of the polarized PDFs obtained from the present KTA17 analysis with those obtained by other groups, we follow the standard parameter-fitting criterion and take T = Δχ^2_ global = 1 for the 68% (1-σ) C.L. It is worth noting that the various groups use different approaches to set the C.L. criterion for the value of χ^2 in the goodness-of-fit test <cit.>; the difference originates from the quality of the experimental data sets. One approach is to fit a very wide set of data (so that a tolerance criterion for Δχ^2 must be introduced), while the other rejects inconsistent data sets (Δχ^2 = 1). It should also be stressed that the NNPDF <cit.> and JAM <cit.> groups use a Monte Carlo method to estimate the PDF uncertainty, which allows a robust extraction of polarized PDFs with statistically rigorous uncertainties. In Sec. <ref>, we discuss the polarized PDF uncertainties in the kinematic region covered by the polarized inclusive DIS data used in this analysis.
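The linear error propagation above can be sketched as follows. The gradient and the toy two-parameter covariance matrix below are hypothetical illustrations, not the actual KTA17 Hessian:

```python
import math

def pdf_uncertainty(grad, cov, tolerance=1.0):
    """Hessian error propagation:
    [Delta q]^2 = T * sum_ij (dq/da_i) C_ij (dq/da_j), with C = H^{-1}
    the covariance matrix and T the tolerance Delta chi^2_global."""
    n = len(grad)
    variance = sum(grad[i] * cov[i][j] * grad[j]
                   for i in range(n) for j in range(n))
    return math.sqrt(tolerance * variance)
```

With T = 1 (the 68% C.L. criterion adopted here) only the quadratic form itself matters; choosing a larger tolerance, as some groups do, simply rescales the band by sqrt(T).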
§ DISCUSSION OF FIT RESULTS To distinguish the effects of TMCs and the HT contribution, we perform three analyses: the pQCD, `pQCD+TMC', and `pQCD+TMC+HT' scenarios. In the pQCD analysis, we consider only the leading-twist contributions to the g_1 and g_2 structure functions, Eqs. (<ref>, <ref>, and <ref>), while in the pQCD+TMC analysis the TMCs are included, Eqs. (<ref> and <ref>). The pQCD+TMC+HT analysis, which we refer to as KTA17, includes both the TMC and HT contributions, Eq. (<ref>). As discussed earlier, the parameters {η_u_v,η_d_v,γ_q̅,γ_g} from Eq. (<ref>) are frozen in the first minimization step. We start by minimizing the χ_ global^2 value with the 12 unknown fit parameters of Eq. (<ref>) and the 15 HT parameters of Eq. (<ref>), plus the undetermined coupling constant. Then, in the final minimization step, we fix {γ_u_v,γ_d_v,β_q̅,β_g} together with {A_ HT,B_ HT,C_ HT,D_ HT,E_ HT} for the proton, neutron, and deuteron at the optimal values determined in the prefitting step. As mentioned in Sec. <ref>, due to the lack of precise data, some of the parameters have to be fixed at their best values after an initial minimization step. The KTA17 results are presented in Tables <ref> and <ref>, where parameters marked with ^* are fixed. Accordingly, there are nine free parameters, including the strong coupling constant, which provide enough flexibility for a reliable fit. The χ^2/ d.o.f. of the pQCD+TMC+HT analysis is lower than that of both the pQCD+TMC and pQCD scenarios, indicating the significance of the small-Q^2 corrections. The large χ^2/ d.o.f. of the pQCD fit confirms our theoretical expectation that the leading-twist part should be accompanied by both TMC and HT terms. As shown in Table <ref>, all the extracted strong coupling constants at the Z mass are consistent with the world average value of 0.1185 ± 0.0006 <cit.>.
The α_s (M_Z^2) based on the `pQCD+TMC+HT' scenario receives 2.07% (0.68%) corrections from including the TMC+HT (HT) effects. §.§ NNLO polarized PDFs The effect of considering TMCs and HT terms on the KTA17 PPDFs, xΔ u_v (x, Q^2), xΔ d_v (x, Q^2), xΔq̅ (x, Q^2) and xΔ g (x, Q^2), is illustrated individually in Fig. <ref>. Including TMCs has a significant effect over the whole x region for the sea quark density, while the valence and gluon densities are mainly affected in the large-x region. Comparing the pQCD+TMC and pQCD+TMC+HT curves, we observe that all densities are practically identical in the small-x region (except for the sea quark density); small differences appear in the behavior around their peaks. Figure <ref> illustrates the evolution of the KTA17 polarized parton distributions for a selection of Q^2 values of 5, 30, and 100 GeV^2. We observe that the evolution tends to flatten out the peaks of all the distributions with increasing Q^2, except for the gluon distribution, which increases in the large-x kinematic region. §.§ Polarized PDFs comparison We present the KTA17 PPDFs along with the corresponding uncertainty bands as a function of x at Q_0^2 = 1 GeV^2 in Fig. <ref>. Various parameterizations of NNPDF <cit.>, KATAO <cit.>, BB10 <cit.>, DSSV09 <cit.>, AAC09 <cit.>, AKS14 <cit.>, LSS06 <cit.> and THK14 <cit.> at the NLO approximation, and of TKAA16 <cit.> at the NNLO approximation, are shown for comparison. In the polarized PDF sets that include SIDIS and/or W boson production in polarized pp collisions (NNPDF, LSS and DSSV), Δu̅ differs from Δd̅, and both in turn differ from 1/2 (Δ s + Δs̅); we therefore consider Δq̅=1/2 (Δu̅ + Δd̅) in Fig. <ref>. Our uncertainty estimation is based on the Hessian method, for a tolerance of Δχ^2 = 1. The xΔ u_v and xΔ d_v polarized PDFs are the best determined distributions from the inclusive polarized DIS data, with a relatively smaller uncertainty band for the xΔ u_v distribution.
As one can see, our xΔ u_v is broadly compatible with the other results, while the xΔ d_v, xΔq̅ and xΔ g densities behave differently. In the extrapolated regions, x < 10^-3 and x > 0.8, where the PPDFs are not directly constrained by the data, all valence distributions behave similarly. The polarized gluon distribution is the most complicated case for PPDF uncertainties and parameterizations, and results for xΔ g from the various fits show a considerable spread. As illustrated in Fig. <ref>, the difficulty in constraining the polarized gluon distribution is clearly revealed by the spread of xΔ g among the various global PPDF parametrizations. All the gluon distributions are positive over the whole x range, except for KATAO, DSSV and NNPDF, which indicate a sign change. The NNPDF gluon density behaves differently in the small-x region, and the xΔ g distributions of the other groups tend to zero less quickly than the KTA17 result. Large differences are visible over the whole x range for the sea quark distribution, which is not well constrained by the present polarized DIS data. It should be stressed again that in both of our NNLO analyses we used inclusive DIS data to constrain the polarized parton distributions. In contrast, the fits of the LSS and DSSV collaborations include semi-inclusive DIS (SIDIS) data, which are sensitive to the individual quark flavours, while the quark-antiquark separation is achieved in NNPDF thanks to W boson production in polarized pp collisions. A detailed PPDF comparison is presented in Fig. <ref>, in which we plot KTA17 together with TKAA16 (NLO and NNLO), AKS14 and LSS06 at Q^2 = 10 GeV^2 as a function of x. As in the previous comparisons, the gluon density remains puzzling: the gluons from all PPDF sets are positive except for the AKS14 group, which shows a sign change.
The xΔ u_v and xΔ d_v polarized PDFs of TKAA16 (NLO and NNLO), AKS14 and LSS06 are qualitatively similar, though the LSS06 xΔ u_v is typically larger at medium x. §.§ Polarized structure function comparison Several efforts to study the nucleon structure have been developed, aiming to predict the behavior of the polarized PDFs at small and large x. In order to assess the precision of the obtained polarized PDFs, and also to test whether the DIS data favor or disfavor them, a detailed comparison of the extracted structure functions with the available polarized DIS data is required. It should be stressed that much more numerous and more accurate data at both small and large x are required to discriminate among the analyses of the different groups. We will return to this subject in Sec. <ref>, considering the planned and proposed high-energy polarized colliders. In Figs. <ref>, <ref>, and <ref>, the KTA17 theory predictions for the polarized structure functions of the proton xg_1^p (x, Q^2), neutron xg_1^n(x, Q^2) and deuteron xg_1^d(x, Q^2) are compared with the fixed-target DIS experimental data from E143, E154 and SMC. As mentioned, KTA17 refers to the pQCD+TMC+HT scenario. The results from the KATAO analysis in the NLO approximation <cit.> and the TKAA16 analysis in the NNLO approximation <cit.> are also shown. Our curves are presented for selected values of Q^2 = 2, 3, 5, and 10 GeV^2 as a function of x. In general, we find good agreement with the experimental data over the entire range of x and Q^2, and our results accord with other determinations. In Fig. <ref>, we check the consistency of KTA17 with the newly improved statistical-precision data of COMPASS16 in the low-x region. Further illustrations of the fit quality are presented in Figs. <ref>, <ref> and <ref> for the x g_2^i = p, n, d(x, Q^2) polarized structure functions obtained from Eq. (<ref>).
Generally, the g_2 data have larger uncertainties than the g_1 data, reflecting the more limited knowledge of the g_2 structure function. At the current level of accuracy, KTA17 agrees with the data within their uncertainties, except for the E155 data for x g_2^d(x, Q^2). A precise quantitative extraction of x g_2(x, Q^2) requires a large number of data points with higher precision; our results capture the general characteristics of x g_2(x, Q^2). §.§ Higher-twist contributions Figure <ref> compares our xg_1^tw-3(x,Q^2) with those of LSS <cit.> and JAM13 <cit.>. LSS split the measured x region into seven bins to determine the HT correction to g_1; they extracted the HT contribution in a model-independent way, while its scale dependence was ignored. The JAM group parametrized an analytical form for the twist-3 part of g_2 and calculated g_1^tw-3 via the integral relation of Eq. (<ref>) in a global fit at the NLO approximation. The twist-3 part of g_2, together with those of the JAM13 <cit.> and BLMP <cit.> groups and the E143 experimental data <cit.>, is presented in Fig. <ref>. Keeping terms up to twist 3, the E143 Collaboration at SLAC reported the twist-3 contribution to the proton spin structure function x g_2^p with relatively large errors; within the experimental precision, however, the g_2 data are well described by the twist-2 contribution alone. The precision of the current data is not sufficient to discriminate between models. As illustrated in Fig. <ref>, the twist-3 part of g_2 makes a significant contribution even at large Q^2. Comparing with Fig. <ref>, we find that xg_1^τ3 vanishes rapidly for Q^2 > 5 GeV^2, while xg_2^τ3 remains nonzero even in the limit Q^2 →∞. Finally, the KTA17 QCD fit results for xg_1 are compared to experimental measurements in Fig. <ref>. These measurements come from the Compass10, Compass16, E143, E155, EMC, HERMES06, HERMES98 and SMC experiments.
The curves are given vs Q^2 at several values of x and are compared to the data. As can be seen, the theory predictions are in good agreement with the data. § SUM RULES Sum rules are powerful tools for investigating fundamental properties of the nucleon structure, such as the total momentum fraction carried by partons or the total contribution of the parton spins to the spin of the nucleon. We explore how the inclusion of TMCs and HT terms in the NNLO polarized structure function analysis improves the precision of the PPDF determination as well as of the QCD sum rules. In the following, almost all of the important polarized sum rules, together with the available experimental data, are briefly discussed. §.§ Bjorken sum rule The nonsinglet spin structure function is defined as g_1^ NS(x, Q^2)=g_1^p(x, Q^2) - g_1^n(x,Q^2) . The polarized Bjorken sum rule expresses the integral over the spin distributions of quarks inside the nucleon in terms of its axial charge times a coefficient function <cit.>, Γ_1^ NS(Q^2) = Γ_1^p(Q^2) - Γ_1^n(Q^2) = ∫_0^1[g_1^p(x, Q^2) - g_1^n(x, Q^2)]dx = 1/6  |g_A|  C_Bj[α_s(Q^2)] + HT corrections . Here, g_A is the nucleon axial charge as measured in neutron β decay. The coefficient function C_Bj[α_s(Q^2)] has been calculated to four-loop order in pQCD in the massless <cit.> and, very recently, massive cases <cit.>. The Bjorken sum rule potentially provides a very precise handle on the strong coupling constant: the value of α_s can be extracted from experimental data via the C_Bj[α_s(Q^2)] expression. α_s is also available from other accurate methods, such as the decay widths of the τ lepton and of the Z boson into hadrons, and comparing these values offers an important consistency test of QCD. As previously reported in Ref.
<cit.>, the determination of α_s from the Bjorken sum rule suffers from small-x extrapolation ambiguities. Our results for the Bjorken sum rule are compared with the experimental measurements of E143 <cit.>, SMC <cit.>, HERMES06 <cit.> and COMPASS16 <cit.> in Table <ref>. §.§ Proton helicity sum rule The decomposition of the proton spin among its constituents is a compelling question still driving the field of nuclear physics <cit.>. In order to get an accurate picture of the quark and gluon helicity densities, a precise extraction of the PPDFs entering the proton's spin sum rule is required. In a general approach, the spin of the nucleon can be decomposed among its constituents as 1/2 = 1/2ΔΣ(Q^2) + ΔG(Q^2) + L(Q^2). Here, Δ G(Q^2)=∫_0^1dx Δ g(x,Q^2) has the interpretation of the gluon spin contribution, and ΔΣ(Q^2)=∑_i∫_0^1dx (Δ q(x,Q^2)+Δq̅(x,Q^2)) denotes the flavor-singlet spin contribution. L(Q^2) is the total contribution from the quark and gluon orbital angular momentum; finding a way to measure it is a real challenge beyond the scope of this paper. Each individual term in Eq. (<ref>) is a function of Q^2, but the sum is not. The values of the singlet-quark and gluon first moments at the scale Q^2=10 GeV^2 are listed in Table <ref>. Results are compared to those from NNPDFpol1.0 <cit.>, NNPDFpol1.1 <cit.> and DSSV08 <cit.> over both the truncated and full x regions. In Table <ref>, the KTA17 results are presented and compared at Q^2=4 GeV^2 with the DSSV08 <cit.>, BB10 <cit.>, LSS10 <cit.> and NNPDFpol1.0 <cit.> results. Comparing the results, we see that for ΔΣ the KTA17 values are consistent within uncertainties with those of the other groups, mainly because the first moments of the polarized densities are fixed by semileptonic decays. Turning to the gluon, very different values are reported.
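Inverting the spin sum rule above for L(Q^2) and combining the moment uncertainties in quadrature can be sketched as follows. The numerical inputs below are hypothetical placeholders, not the table entries of this analysis:

```python
import math

def orbital_angular_momentum(delta_sigma, d_sigma_err, delta_g, d_g_err):
    """Invert 1/2 = (1/2)*DeltaSigma + DeltaG + L for L, propagating
    the uncorrelated input uncertainties in quadrature."""
    L = 0.5 - 0.5 * delta_sigma - delta_g
    dL = math.sqrt((0.5 * d_sigma_err) ** 2 + d_g_err ** 2)
    return L, dL

# Hypothetical moments (for illustration only):
L, dL = orbital_angular_momentum(0.30, 0.02, 0.20, 0.05)
```

Because Δ G enters with unit coefficient while ΔΣ enters with a factor 1/2, a given gluon-moment uncertainty dominates the error on L, which is the situation described below for the fitted moments.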
The large uncertainty prevents reaching a firm conclusion about the full first moment of the gluon. Let us finally discuss the proton spin sum rule based on the extracted values presented in Table <ref>. The contribution of the total orbital angular momentum to the spin of the proton is L(Q^2=4  GeV^2) =0.256 ± 0.069. The gluon uncertainty is clearly dominant; because of this large uncertainty, originating mainly from the gluons, we cannot yet come to a definite conclusion about the contribution of the total orbital angular momentum to the spin of the proton. Improving the current level of experimental accuracy is required for a precise determination of each individual contribution. §.§ Twist-3 reduced matrix element d_2 Under the OPE, one can study the effect of quark-gluon correlations via the moments of g_1 and g_2, d_2(Q^2)=3 ∫_0^1 x^2 g̅_̅2̅(x,Q^2) dx = ∫_0^1 x^2 [3 g_2(x, Q^2) + 2g_1(x, Q^2)]  dx, which follows from the relation g̅_2=g_2-g_2 ^WW. Thus, the twist-3 reduced matrix element of spin-dependent operators in the nucleon measures the deviation of g_2 from g_2 ^τ 2 [see Eq. (<ref>)]. The quantity d_2(Q^2) is especially sensitive to the large-x behavior of g̅_̅2̅ (due to the x^2 weighting factor). The extraction of d_2 is particularly interesting as it provides insight into the size of the multiparton correlation terms. Our results, together with other theoretical and experimental values, are presented in Table <ref>. This notably nonzero value of d_2 implies the significance of considering higher-twist terms in QCD analyses. The most reliable determination of the higher-twist moment d_2 was performed in JAM15 <cit.>, since they are the only group that implemented TMCs for the twist-3 part. In the near future, the expected data from the 12 GeV Jefferson Lab experiments <cit.> may enable the d_2 moments to be determined more precisely in the DIS region at higher Q^2 values. QCD analyses of this new generation of reduced-uncertainty data will require including TMCs in all higher-twist terms.
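The moment defined above can be illustrated numerically. The sketch below uses a toy g_1 shape and a hypothetical twist-3 admixture (neither taken from any fit); it demonstrates that the twist-2 Wandzura-Wilczek piece g_2^WW drops out of d_2, so that the two forms of the integral agree, and it also checks that the first moment of g_2^WW vanishes:

```python
def g1(x):
    """Toy twist-2 input shape; NOT a fit result."""
    return x * (1.0 - x)

def g2_ww(x):
    # Wandzura-Wilczek piece g2^WW(x) = -g1(x) + \int_x^1 dy g1(y)/y;
    # for this toy g1 the inner integral is done analytically.
    return -g1(x) + (1.0 - x) - (1.0 - x * x) / 2.0

def twist3_piece(x):
    """Hypothetical twist-3 admixture gbar_2 = g_2 - g2^WW."""
    return x * (1.0 - x) * (1.0 - 2.0 * x)

def trapezoid(f, n=4000):
    """Trapezoidal estimate of \int_0^1 f(x) dx."""
    h = 1.0 / n
    return h * (0.5 * f(0.0) + sum(f(i * h) for i in range(1, n)) + 0.5 * f(1.0))

bc_ww = trapezoid(g2_ww)  # first moment of the WW piece -> 0
d2_direct = trapezoid(
    lambda x: x * x * (3.0 * (g2_ww(x) + twist3_piece(x)) + 2.0 * g1(x)))
d2_reduced = 3.0 * trapezoid(lambda x: x * x * twist3_piece(x))
```

For these toy inputs, d2_reduced = 3 * (-1/60) = -0.05 analytically, and d2_direct reproduces it to the accuracy of the quadrature, confirming that only the twist-3 part contributes.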
§.§ Burkhardt-Cottingham (BC) sum rule The first moment of g_2 is predicted to vanish at all Q^2, as shown by Burkhardt and Cottingham (BC) from virtual Compton scattering dispersion relations <cit.>, Γ_2 = ∫_0^1 dx g_2(x, Q^2) = 0 . It appears as a trivial consequence of the WW relation for g_2^τ2, and the BC sum rule is also satisfied for the target-mass-corrected structure functions. Therefore, a violation of the BC sum rule would imply the presence of HT contributions <cit.>. Our Γ_2 results, together with data from the E143 <cit.>, E155 <cit.>, HERMES2012 <cit.>, RSS <cit.>, and E01012 <cit.> groups for the proton, deuteron and neutron, are presented in Table <ref>. Any conclusion depends on the low-x behavior of g_2, which has not yet been precisely measured. §.§ Efremov-Leader-Teryaev sum rule The Efremov-Leader-Teryaev (ELT) sum rule <cit.> integrates the valence part of g_1 and g_2 over x. Considering that the sea quarks are the same in protons and neutrons, the ELT sum rule can be derived, similarly to the Bjorken sum rule, as ∫_0^1 dx   x[g_1^V(x) + 2 g_2^V (x)]= ∫_0^1 dx   x[g_1^p(x) - g_1^n(x) + 2(g_2^p(x) - g_2^n(x))]=0. This sum rule receives quark mass corrections and is thus only exact for massless quarks <cit.>, but it is preserved in the presence of target mass corrections <cit.>. Combining the data of E143 <cit.> and E155 <cit.> leads to -0.011 ± 0.008 at Q^2=5 GeV^2; we extract the value 0.0063 ± 0.0003 at the same Q^2. § POLARIZED PDFS IN THE HIGH-PRECISION ERA OF COLLIDER PHYSICS Several determinations of the polarized PDFs of the proton are presently available at NLO <cit.> and also at the NNLO approximation <cit.>. They differ mostly in the polarized data sets included, the procedure applied to determine the PPDFs from these data sets, and the method used to extract the corresponding uncertainties.
Most of the analyses use the Lagrange multiplier or the Hessian approach to estimate the uncertainties, while the NNPDF collaboration has developed a Monte Carlo methodology to control them. The available analyses use experimental information from neutral-current DIS and SIDIS to constrain the total quark combinations and the individual quark and antiquark flavors, respectively. The gluon distribution is constrained rather weakly by both DIS and SIDIS data, because of the small Q^2 range covered. In addition to the DIS and SIDIS fixed-target data, a remarkable amount of data from longitudinally polarized proton-proton collisions at the RHIC has become available recently <cit.>. The RHIC data can be expected to further constrain the gluon helicity distribution, especially at small momentum fractions, down to x ∼ 0.01 <cit.>. The double-helicity asymmetries for jet and π^0 production are directly sensitive to the gluon helicity distribution over a small range of x, because of the dominance of gluon-gluon and quark-gluon initiated subprocesses in the kinematic range accessed by PHENIX at the RHIC <cit.>. In recent helicity PDF fits <cit.>, the RHIC measurements of the double-longitudinal spin asymmetry in hadron production <cit.> and in inclusive jet production in pp collisions <cit.>, as well as the single-longitudinal spin asymmetry measurements in W^± boson production <cit.>, have already been used. These data can increase the sensitivity to the sign of the gluon density in present and future pQCD helicity PDF fits.
In addition to the data mentioned above, the inclusion of the Hall-A and CLAS measurements at JLab leads to a reduction in the PDF errors for the polarized valence and sea quark densities, as well as in the gluon polarization uncertainty, at x⩾ 0.1 <cit.>. The COMPASS Collaboration at CERN performed new measurements of the longitudinal double-spin asymmetry and the longitudinal spin structure function of the proton <cit.> as well as of the deuteron <cit.>. The COMPASS measurements provide the lowest accessible values of x and the largest Q^2 values at any given x; consequently, they lead to a better determination of the sea quark and gluon helicity distributions, including the corresponding uncertainties. These data improve the statistical precision of g^p_1(x) by about a factor of 2 in the region x⩽ 0.02. Despite these achievements, the QCD analysis of polarized data suffers from both the limited kinematic coverage and the insufficient precision of the available inclusive data; consequently, our understanding of the nucleon spin structure is still far from complete. The most up-to-date 200 GeV data from the COMPASS16 experiment do not change the general trend of the polarized PDFs much, but a reduction of the uncertainties on almost all parton species is observed. Finally, it should be stressed that a future polarized electron-ion collider (EIC) would allow a major breakthrough toward understanding the proton spin. The EIC is expected to open up the kinematic domain to significantly lower values of x (x ≈ 10^-4) at center-of-mass energies up to ∼ 10^4 GeV^2, significantly reducing the uncertainty on the contributions from the unmeasured small-x region. The EIC will likely be the only facility able to study the spin structure of the proton with the highest precision <cit.>. § SUMMARY AND CONCLUSIONS The main goal of the present KTA17 analysis is to determine the nucleon spin structure functions g_1(x,Q^2) and g_2(x,Q^2) and their moments, which are essential in testing QCD sum rules.
We have enriched our recent NNLO formalism <cit.> with TMCs and HT terms and extended it to include more experimental observables. These corrections play a significant role in the large-x region at low Q^2. We achieved an excellent description of the fitted data and provide unified and consistent PPDFs. Our helicity distributions compare reasonably well with other extractions, within the known very large uncertainties arising from the lack of constraining data. We also studied the effects of TMCs and HT terms on several sum rules in the NNLO approximation, since they are relevant in the region of low Q^2. The Bjorken sum rule is related to the polarized g_1 structure functions. We also present our results for the reduced matrix element d_2 in the NNLO approximation. More accurate data are required to scrutinize the BC and ELT sum rules. The future polarized EIC will have a huge impact on our knowledge of spin physics; the decreased uncertainties would help settle the question of how the spin and orbital angular momentum of partons contribute to the overall proton spin. Concluding, in light of the upcoming developments in experimental projects, phenomenological efforts to increase our knowledge of the nucleon structure functions and their moments are particularly important. § ACKNOWLEDGMENTS We would like to thank Elliott Leader and Emanuele Nocera for reading the manuscript and for helpful discussions. We thank Alberto Accardi for a detailed discussion on the evolution of higher-twist terms and Fabienne Kunne for detailed comments on the COMPASS16 polarized DIS data. We are also thankful to the School of Particles and Accelerators, Institute for Research in Fundamental Sciences, for financially supporting this project. Hamzeh Khanpour acknowledges the University of Science and Technology of Mazandaran for the financial support provided for this research and is grateful for the hospitality of the Theory Division at CERN, where this work was completed. S.
Taheri Monfared gratefully acknowledges partial support of this research provided by the Islamic Azad University, Central Tehran Branch. § FORTRAN PACKAGE OF KTA17 NNLO POLARIZED PDFS A FORTRAN package containing the KTA17 NNLO spin-dependent PDFs as well as the polarized structure functions x g_1^i = p, n, d(x, Q^2) for the proton, neutron and deuteron can be obtained via email from the authors upon request. This package also includes an example program to illustrate the use of the routines. § NNLO SPLITTING FUNCTIONS In this section, for completeness, we present the NNLO Mellin-N space splitting functions used for the evolution of the longitudinally polarized parton densities in our analysis. Their x-space forms are available in Ref. <cit.>. FORTRAN files of our analytical results can be obtained from the authors upon request. These functions can be written in terms of the harmonic sums as <cit.>, s_1= γ _E + ψ (n+1) , s_2= ζ (2) - ψ'(n+1) , s_3= ζ (3) + 0.5ψ''(n+1) , s_4= ζ (4) - 1/6ψ'''(n+1), where γ _E = 0.577216 is the Euler constant, ψ(n) = dlnΓ(n)/dn is the digamma function, and ζ (2) = π^2/6, ζ (3) = 1.20206 and ζ(4) = 1.08232. The analytical expression for the polarized NNLO quark-quark splitting function is given by Δ p_qq^(2)=1295.47+928/27 n^5-640/3 n^4+798.4/n^3-1465.2/n^2+1860.2/n-3505/1+n+297/2+n-433.2/3+n+ 1174.898(1/n-s_1)-714.1 s_1/n+684(s_1/n^2+-ζ (2)+s_2/n)+ f (-173.933+512/27 n^4-2144/27 n^3+172.69/n^2-216.62/n+6.816/(1+n)^4+406.5/1+n+77.89/2+n+. .34.76/3+n-183.187 (1/n-s_1)+5120 s_1/81 n-65.43 (s_1/n^2+-ζ (2)+s_2/n))+ 32/3 f^2 (-17/72+3-2 n-12 n^2+2 n^3+12 n^4/27 n^3 (1+n)^3+2 s_1/27+10 s_2/27-2 s_3/9)+ 502.4(-s_1/n^3+ζ (2)-s_2/n^2--ζ (3)+s_3/n) . For the gluon-quark splitting functions we have Δ p_qg^(2)=f (-1208/n^5+2313.84/n^4-1789.6/n^3+1461.2/n^2-2972.4/n+ 439.8/(1+n)^4+2290.6/(1+n)^3+4672/1+n-.
1221.6/2+n-18/3+n-278.32 s_1/n-90.26 (s_1^2+s_2)/n+825.4 (s_1/n^2+-ζ (2)+s_2/n)+ f (128/3 n^5-184.434/n^4+393.92/n^3-526.3/n^2+499.65/n- 61.116/(1+n)^4+358.2/(1+n)^3-. 432.18/1+n-141.63/2+n-11.34/3+n+6.256 s_1/n+7.32 (s_1^2+s_2)/n-47.3 (s_1/n^2+-ζ (2)+s_2/n)+ .0.7374 (-s_1^3-3 s_1 s_2-2 s_3)/n)-5.3 (-s_1^3-3 s_1 s_2-2 s_3)/n+ .3.784 (s_1^4+6 s_1^2 s_2+3 s_2^2+8 s_1 s_3+6 s_4)/n) , Δ p_gq^(2)=92096/27 n^5-5328.018/n^4+4280/n^3-4046.6/n^2+6159/n-1050.6/(1+n)^4 -1701.4/(1+n)^3-3825.9/1+n+ 1942/2+n-742.1/3+n-1843.7 s_1/n+451.55 (s_1^2+s_2)/n-1424.8(s_1/n^2+-ζ (2)+s_2/n)+ f (-1024/9 n^5+236.3232/n^4-404.92/n^3+308.98/n^2-301.07/n+180.138/(1+n)^4 -253.06/(1+n)^3-. 296/1+n+406.13/2+n-101.62/3+n+171.78 s_1/n-47.86 (s_1^2+s_2)/n-16.18(s_1/n^2+-ζ (2)+s_2/n)+ 16/27 f (-12/n+10/1+n+2 /1+n(-1/1+n-s_1)-8 ss_1/n+6 (s_1^2+s_2)/n-. ..3 /1+n(1/(1+n)^2+(1/1+n+s_1)^2+s_2))-4.963 (-s_1^3-3 s_1 s_2-2 s_3)/n)+ 59.3 (-s_1^3-3 s_1 s_2-2 s_3)/n+5.143(s_1^4+6 s_1^2 s_2+3 s_2^2+8 s_1 s_3+6 s_4)/n . Finally, the polarized third-order gluon-gluon splitting function reads: Δ p_gg^(2)=4427.762+12096/n^5-22665/n^4+21804/n^3-23091/n^2+30988/n- 7002/(1+n)^4-1726/(1+n)^3- 39925/1+n+13447/2+n-4576/3+n+2643.521 (1/n-s_1)-3801 s_1/n- 13247 (-1/1+n(-1/1+n-s_1)-s_1/n)-12292 (s_1/n^2+-ζ (2)+s_2/n)+ f (-528.536-6128/9 n^5+2146.788/n^4-3754.4/n^3+3524/n^2-1173.5/n-786/(1+n)^4+. 1226.2/(1+n)^3+2648.6/1+n-2160.8/2+n+1251.7/3+n-412.172 (1/n-s_1)+295.7 s_1/n- 6746 (-1/1+n(-1/1+n-s_1)-s_1/n)-7932 (s_1/n^2+-ζ (2)+s_2/n)+ f (6.4607+7.0854/n^4-13.358/n^3+13.29/n^2-16.606/n+31.528/(1+n)^3+ 32.905/1+n-. ..18.3/2+n+2.637/3+n-16/9(1/n-s_1)+0.21 s_1/n-16.944(s_1/n^2+-ζ (2)+s_2/n))) . For completeness, we also include the polarized NNLO pure singlet contribution, Δ p_ps^(2)=f (-344/27(24/n^5-24/(1+n)^5)-90.9198(-6/n^4+6/(1+n)^4) -368.6(2/n^3-2/(1+n)^3)-. 
739 (-1/n^2+1/(1+n)^2)-1362.6(1/n-1/1+n)-81.5 (-6/(1+n)^4+6/(2+n)^4)+ 349.9 (2/(1+n)^3-2/(2+n)^3)+1617.4(1/1+n-1/2+n)-674.8 (1/2+n-1/3+n)+ 167.41 (1/3+n-1/4+n)-204.76 (--1/1+n-s_1/1+n-s_1/n)+ 232.57 (s_1/n^2-1/1+n+s_1/(1+n)^2+-ζ (2)+s_2/n-1/(1+n)^2-ζ (2)+s_2/1+n)- 12.61 (s_1^2+s_2/n-1/(1+n)^2+(1/1+n+s_1)^2+s_2/1+n)+ f (1.1741(-6/n^4+6/(1+n)^4)+13.287 (2/n^3-2/(1+n)^3)+45.482 (-1/n^2+1/(1+n)^2)+. 49.13 (1/n-1/1+n)-0.8253(-6/(1+n)^4+6/(2+n)^4)+ 10.657 (2/(1+n)^3-2/(2+n)^3)-30.77 (1/1+n-1/2+n)-4.307 (1/2+n-1/3+n)- 0.5094 (1/3+n-1/4+n)+9.517 (-1/1+n(-1/1+n-s_1)-s_1/n)+ .1.7805(s_1^2+s_2/n-1/1+n(1/(1+n)^2+(1/1+n+s_1) ^2+s_2)))- 6.541(-s_1^3-3 s_1 s_2-2 s_3/n-. ..1/1+n(-(1/1+n+s_1)^3-3 (1/1+n+s_1) (1/(1+n)^2+s_2)-2 (1/(1+n)^3+s_3)))). 99Aidala:2012mv C. A. Aidala, S. D. Bass, D. Hasch and G. K. Mallot,http://dx.doi.org/10.1103/RevModPhys.85.655Rev. Mod. Phys.85, 655 (2013).Ball:2016spl R. D. Ball, E. R. Nocera and J. Rojo, “The asymptotic behaviour of parton distributions at small and large x,” http://dx.doi.org/10.1140/epjc/s10052-016-4240-4Eur. Phys. J. C. 76, 383 (2016). deFlorian:2009vb D. de Florian, R. Sassot, M. Stratmann and W. Vogelsang, “Extraction of Spin-Dependent Parton Densities and Their Uncertainties,” http://dx.doi.org/10.1103/PhysRevD.80.034030Phys. Rev. D 80, 034030 (2009). Hirai:2008aj M. Hirai et al. [Asymmetry Analysis Collaboration], “Determination of gluon polarization from deep inelastic scattering and collider data,” http://dx.doi.org/10.1016/j.nuclphysb.2008.12.026Nucl. Phys. B 813, 106 (2009).Blumlein:2010rn J. Blumlein and H. Bottcher, “QCD Analysis of Polarized Deep Inelastic Scattering Data,” http://dx.doi.org/10.1016/j.nuclphysb.2010.08.005Nucl. Phys. B 841, 205 (2010).Leader:2010rb E. Leader, A. V. Sidorov and D. B. Stamenov, “Determination of Polarized PDFs from a QCD Analysis of Inclusive and Semi-inclusive Deep Inelastic Scattering Data,” http://dx.doi.org/10.1103/PhysRevD.82.114018Phys. Rev. D 82, 114018 (2010). Nocera:2014gqa E. R. 
1Computer Center, Guangdong University of Petrochemical Technology, Maoming 525000, China; 2Dept. of Computer Science, Jinan University, Guangzhou 510632, China; 3Sino-France Joint Laboratory for Astrometry, Dynamics and Space Science, Jinan University, Guangzhou 510632, China

Fringes often appear in a CCD frame, especially when a thin CCD chip and an R or I filter are used. 88 CCD frames of the two open clusters NGC 2324 and NGC 1664, taken with a Johnson I filter on the 2.4-m telescope at Yunnan Observatory, are used to study the impact of fringes on the astrometry and photometry of stars. A novel technique proposed by Snodgrass & Carry is applied to remove the fringes in each CCD frame, and an appraisal of this technique is performed to estimate the fringes' effects on the astrometry and photometry of stars. Our results show that the astrometric and photometric precision of stars can be improved effectively after the removal of fringes, especially for faint stars.

§ INTRODUCTION

Large-aperture optical telescopes have great potential for deep-sky exploration owing to their powerful light-collecting capability. However, faint stars are subject to systematic errors, such as bad pixels or columns, quantum-efficiency variations, fringes, etc. Although some of these systematic errors can be eliminated by standard data-processing techniques (bias subtraction, flat-fielding, bad-pixel masking, etc.), special care must be taken to remove fringes, since fringes depend on the CCD itself and on the wavelength of the light. At wavelengths beyond 600∼700 nm, the absorption efficiency of the CCD silicon gradually decreases with increasing wavelength. This leads to fringes, which result from multiple reflections and interference between CCD surfaces.
Variations in the thickness of the CCD surface lead to corresponding variations in the fringes' amplitude, phase and quantum efficiency as a function of pixel position on the CCD frame <cit.>. Besides, the observed fringe pattern also depends on the wavelength range of the light. Monochromatic illumination at a suitable wavelength produces a strong fringe pattern, while broad-band illumination produces weaker fringes, as the range of wavelengths washes out the appearance of the fringes <cit.>. The night-sky emission lines are particularly strong in the red part of the optical spectrum (e.g. the I band) and primarily drive the fringe pattern in ground-based observations.

As such, the fringe pattern changes little over time for a given filter. It can be derived by median filtering a stack of many dithered exposures <cit.> or by neon-lamp flat-fielding <cit.>. Nonetheless, the fringes' amplitudes vary from frame to frame, most likely depending on exposure times, air masses, weather conditions, etc. Snodgrass & Carry <cit.> presented a simple but effective way, based on a series of pixel pairs, to remove the fringes in original frames automatically.

Fringes' effects have also been studied systematically with the flat-field frames used by the WFC3 calibration pipeline <cit.>. Among a series of flat-field data taken in different filters, the F953N flat fields showed the strongest fringes <cit.>. Since we lack a neon lamp, we prefer to study the fringes' impact with actual science data rather than laboratory experiments.

In this paper, we carry out a study of the fringes' effects on the astrometry and photometry of practical observations, based on CCD frames taken with the 2.4-m telescope at Yunnan Observatory. The geometric distortion (called GD hereafter) correction is also applied for high-precision astrometry <cit.>.

The contents of this paper are arranged as follows. In Section 2, the observations are described.
In Section 3, we give details of the data reduction, mainly concentrating on the defringing procedure and the GD solution. The results for astrometry and photometry are discussed in Section 4 and Section 5, respectively. Finally, the conclusions are drawn in Section 6.

§ OBSERVATIONS

Our observations were obtained with the 2.4-m telescope, equipped with an E2V CCD42-90 chip, at Yunnan Observatory on January 3, 2011. The two open clusters NGC 2324 and NGC 1664 were observed using a dithering scheme. There were 44 exposures for each of the open clusters in a Johnson I filter. The typical seeing of the observations is about 1.5 arcsec (FWHM). The brightest star on a frame was just saturated; the exposure times were 30 seconds for NGC 2324 and 19 seconds for NGC 1664. A clipping process was applied to the raw CCD frames to discard the ineffective boundary, leaving an area of 1900×1900 pixels. Specifications of the 2.4-m telescope and its CCD chip are listed in Table <ref>.

§ DATA REDUCTION

Before further reduction, a series of calibrations is performed: (1) bias subtraction, flat-fielding and removal of cosmic rays; (2) removal of fringes; (3) derivation of the GD pattern, which is then used to correct the pixel positions of stars. The details of the defringing and the GD solution are given in the following subsections.

§.§ REMOVAL OF FRINGES

In order to derive a fringe pattern, we stack a series of frames taken with a dithering scheme. A median filter is applied at each pixel position of the stack, and the fringe pattern is finally composed of the median values <cit.>. However, only the 44 frames of NGC 2324 are used to derive the fringe pattern, because their signal-to-noise ratio (S/N) is higher than that of the CCD frames of NGC 1664.

We follow the procedure first developed by Snodgrass and Carry (2013) to derive the fringes' amplitude in an original frame; the positions of the pixel pairs are chosen to avoid errors resulting from the flux of stars.
To be specific, a series of pixel pairs is set between the fringes' bright areas and dark areas on the original frame. The pixel pairs are also set at the same positions on the fringe pattern (the red lines in the left panel of Figure <ref>). For the i^th pixel pair, the flux difference between the bright and dark areas on the original frame (noted O) and on the fringe pattern (noted F) is calculated as follows:

δO_i = O^i_bright - O^i_dark

δF_i = F^i_bright - F^i_dark

We usually set 30∼40 pixel pairs on an original frame for its fringe removal. After a 3σ clip, the list of δO_i/δF_i (i=1,2,⋯,N) for an original frame is shown in the right panel of Figure <ref>. We take the median of the δO_i/δF_i as the fringe scaling factor, so that large discrepancies with a high probability of being outliers are suppressed. An example of fringe removal is shown in Figure <ref>.

§.§ SOLUTION OF GD

Since the optical system of a telescope is not a perfect pin-hole model, GD effects are inevitable to some extent. For the 2.4-m telescope at Yunnan Observatory, whose effective field of view is only 9×9 arcmin, the maximum GD can reach up to 1 pixel <cit.>. In order to preserve good astrometric precision, we solve the GD pattern from the defringed frames as follows. First, two-dimensional Gaussian fitting and aperture photometry are applied to measure each star's pixel position and instrumental magnitude (the zero point is the 25^th magnitude). Then we identify each star in the PPMXL catalog <cit.>, which can be downloaded from the VizieR database. About 800 reference stars are matched on each frame. The theoretical positions of the reference stars from the catalog are transformed to pixel positions on the frames through a four-parameter model, after all astrometric effects (topocentric apparent position, atmospheric refraction, etc.) are taken into account.
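The scaling-factor estimate described above can be sketched as follows. This is a minimal illustration, not the authors' code: the array names are invented, and the 3σ clip is done in a single pass before taking the median.

```java
import java.util.Arrays;

// Sketch of the fringe scaling factor estimate: per-pair ratios
// deltaO_i / deltaF_i are 3-sigma clipped, and the median of the
// surviving ratios is taken as the scaling factor.
class FringeScale {

    // Median of an array (a copy is sorted, so the input stays untouched).
    static double median(double[] v) {
        double[] s = v.clone();
        Arrays.sort(s);
        int n = s.length;
        return n % 2 == 1 ? s[n / 2] : 0.5 * (s[n / 2 - 1] + s[n / 2]);
    }

    // deltaO and deltaF hold the bright-minus-dark flux differences of each
    // pixel pair on the original frame and on the fringe pattern, respectively.
    static double scalingFactor(double[] deltaO, double[] deltaF) {
        double[] r = new double[deltaO.length];
        for (int i = 0; i < r.length; i++) r[i] = deltaO[i] / deltaF[i];

        double mean = Arrays.stream(r).average().orElse(0);
        double sd = Math.sqrt(Arrays.stream(r)
                .map(x -> (x - mean) * (x - mean)).average().orElse(0));
        // Keep only ratios within 3 sigma, then take their median.
        double[] kept = Arrays.stream(r)
                .filter(x -> Math.abs(x - mean) <= 3 * sd).toArray();
        return median(kept.length > 0 ? kept : r);
    }
}
```

Taking the median after the clip is what makes the factor robust: a single discrepant pixel pair barely moves the result.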
The difference between the positions directly measured from the frames and those indirectly computed from the catalog (observed minus calculated; O - C) can be decomposed into three sources: GD, catalog error and measurement error. As the same star in dithered exposures falls at different pixel positions on the CCD chip, the GD at specific pixel positions can be derived by canceling out the catalog errors and compressing the measurement errors <cit.>.

The field of view of the CCD chip is divided into 19×19 cells (100×100 pixels per cell) and the GD in each cell is solved as follows. As mentioned above, if the same star is located at different positions in many CCD frames, its GD in a specific cell can be solved from the mean of the differences of the (O - C) residuals. The remaining GD in a cell can be estimated by the mean of the GDs derived from the stars falling in that cell. In each iteration, the observed pixel positions are updated through a bilinear interpolation of the newly derived GD pattern, and the GD pattern is re-calculated until the GD corrections fall under a given threshold, such as 0.01 pixel. The left panel of Figure <ref> shows the final GD pattern. We also solve the GD pattern from the original frames, as shown in the middle panel, which is similar to the left one. Finally, we subtract the two GD patterns (shown in the right panel) to check the effect of defringing on the GD solution. The differences are negligible in most cells, except in the lower right corner of the field of view, where they can reach up to 0.1 pixel (≈0.029 arcsec). Although fringes cause an uneven sky background, we find that the removal of fringes has only a negligible effect on the GD solution, since only faint stars are affected by the fringes.

§ THE EFFECT ON ASTROMETRY

We adopt the GD pattern derived from the defringed frames to correct the pixel positions measured from the original and the defringed CCD frames, respectively.
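The two building blocks of the iterative GD solution, the per-cell estimate and the stopping test, can be sketched as follows. This is a simplification with invented names; the real pipeline also performs the bilinear interpolation of the GD pattern at each iteration.

```java
// Sketch of the per-cell GD estimate and the iteration stop test.
class GDSolver {

    // GD of one cell: mean of the (O - C) residual differences of the stars
    // falling in that cell. Averaging cancels catalog errors shared across
    // exposures and beats down random measurement errors.
    static double cellGD(double[] residuals) {
        double sum = 0;
        for (double r : residuals) sum += r;
        return sum / residuals.length;
    }

    // Iterate until the largest GD correction drops below the threshold
    // (0.01 pixel in the text).
    static boolean converged(double[] corrections, double threshold) {
        double max = 0;
        for (double c : corrections) max = Math.max(max, Math.abs(c));
        return max < threshold;
    }
}
```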
In order to study the fringes' effects on the astrometric measurement of faint stars, we analyze the results as a function of the magnitude M_I of the PPMXL catalog (since the equivalent Johnson I filter is used). It should be noted that the source of M_I is the I magnitude from USNO-B; moreover, the USNO-B magnitude system is not recalibrated in the PPMXL catalog, as there are discrepancies in the magnitude system from field to field and from early to late epochs <cit.>. Hence M_I represents only a reference magnitude rather than a reliable standard magnitude for a star in our procedure. The magnitude M_I is unknown for many faint stars in the PPMXL catalog. Following the usual transformation of an observed instrumental magnitude to a standard system <cit.>, we assume a linear relationship between instrumental magnitudes and catalog magnitudes, which can be expressed as M_I = a + b × m_inst, where m_inst represents the average instrumental magnitude of a star measured more than once. We can solve for a and b by least-squares fitting. The value of the slope b is about 0.9 for both clusters, and the intercept a is about 0.3 for NGC 2324 and 1.1 for NGC 1664, respectively.

The standard deviations (called SD hereafter) of the same stars' (O - C) positional residuals for the two open clusters, before and after defringing, are shown in Figure <ref> and Figure <ref> as a function of the calculated M_I. We assume that faint stars are more susceptible to the fringes, and we roughly divide the bright and faint stars according to the difference before and after defringing in Figures 4 and 5. Similar divisions between bright and faint stars are made in Section 5. The fringes have a greater effect on the faint stars (M_I>15). Moreover, the improvement is more significant in R.A. than in decl. We attribute this mainly to the fringes' clear vertical trend (see the left panel of Figure <ref>), while the x axis is almost aligned with R.A.
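The linear calibration M_I = a + b × m_inst has a closed-form ordinary least-squares solution, sketched below with invented names:

```java
// Ordinary least-squares fit of M_I = a + b * m_inst.
class MagFit {

    // Returns {a, b} minimizing the sum of squared residuals
    // sum_i (mI[i] - (a + b * mInst[i]))^2.
    static double[] fit(double[] mInst, double[] mI) {
        int n = mInst.length;
        double sx = 0, sy = 0, sxx = 0, sxy = 0;
        for (int i = 0; i < n; i++) {
            sx  += mInst[i];
            sy  += mI[i];
            sxx += mInst[i] * mInst[i];
            sxy += mInst[i] * mI[i];
        }
        double b = (n * sxy - sx * sy) / (n * sxx - sx * sx);
        double a = (sy - b * sx) / n;
        return new double[]{a, b};
    }
}
```

For data lying exactly on M_I = 0.3 + 0.9 × m_inst, the fit recovers a = 0.3 and b = 0.9.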
(a star-trailing operation was done before the observations). Based on the magnitude, we list the detailed statistics in Table <ref> and Table <ref>. For stars fainter than 15 in M_I, the improvement is significant in both directions. After defringing, the precisions in the two coordinates are almost at the same level.

§ THE EFFECT ON PHOTOMETRY

We compare the magnitudes of the same star images measured on the original frames with those measured on the defringed frames. Figure <ref> shows how the differences (original minus defringed) change with magnitude (marked m_inst). Detailed statistics are given in Table <ref>. For an exposure time as long as 30 seconds, a decrease of about 0.6% in flux is found for a faint star image at m_inst>18. This shows that the fringes make an additive contribution to the flux counting of a star image, as the fringes are formed by long-wavelength photons which are hardly absorbed by the antireflection coating of the CCD. Some bright star images' magnitudes also show a clear difference after defringing; we find that they are located in crowded fields with faint neighbors.

We then derive the photometric errors for the observations before and after defringing as follows. Since the instrumental magnitude of the same star varies between exposures because of different observational conditions, a standardization process is applied. Suppose there is a common star k in two neighboring exposures, e_i and e_j. We compute the instrumental-magnitude differences of many common stars in the two neighboring exposures and derive the mean value of these differences, as shown in Equation <ref>. The mean of the instrumental-magnitude differences is taken as the baseline difference between the two exposures.
For the star k, its instrumental magnitude in e_j can be calibrated relative to e_i, and its photometric error in e_j is then derived as in Equation <ref>.

Δ_i,j = 1/n ∑_k=1^n (mag_i,k - mag_j,k)

err^j_k = mag_j,k + Δ_i,j - mag_i,k

We calculate the standard deviation of the photometric errors for the same star in different frames; the results are shown in Figure <ref> as a function of the stars' average instrumental magnitude (noted m_inst). There is a more significant improvement in the photometric measurement of NGC 2324 after defringing, since fringes appear more clearly as the exposure time increases and hence have a greater impact on the background determination. The large discrepancies for some bright stars are due to saturation or blended stars. We analyze the standard deviation of the photometric errors separately by the stars' instrumental magnitudes, as listed in Table <ref> and Table <ref>. The fringes' impacts on photometry are negligible for bright stars at short exposure times, while they grow significantly for both bright and faint stars at long exposure times.

§ CONCLUSIONS

We investigate the effects of fringes on astrometry and photometry based on 88 CCD frames of NGC 2324 and NGC 1664, which were taken with the 2.4-m telescope at Yunnan Observatory. After defringing, the astrometric precision of faint stars (M_I>15) is significantly improved, especially in the R.A. direction, which corresponds to the x axis of the CCD plate, as the fringe pattern shows a stronger trend in the y direction. For an exposure time as long as 30 seconds, a faint star image at m_inst>18 becomes about 0.006 magnitude fainter than before, as the fringes make an additive contribution to the flux counting of a star image. Meanwhile, the photometric errors are reduced by about 20% for stars fainter than the 16^th instrumental magnitude.

We acknowledge the support of the staff of the 2.4-m telescope at Yunnan Observatory.
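The two-exposure standardization described above can be sketched as follows. The method and array names are invented for illustration; a real pipeline would chain the baselines across all neighboring exposure pairs.

```java
// Sketch of the relative photometric calibration between two exposures.
class PhotCal {

    // Delta_{i,j}: mean instrumental-magnitude difference over the n common
    // stars of exposures e_i and e_j.
    static double baseline(double[] magI, double[] magJ) {
        double sum = 0;
        for (int k = 0; k < magI.length; k++) sum += magI[k] - magJ[k];
        return sum / magI.length;
    }

    // err^j_k: photometric error of star k in e_j after calibration to e_i.
    static double error(double magJk, double magIk, double delta) {
        return magJk + delta - magIk;
    }
}
```

A star whose magnitude difference matches the baseline has zero error; only the deviation from the mean exposure-to-exposure offset counts as photometric error.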
Funding for the telescope has been provided by CAS and the People's Government of Yunnan Province. The work is financially supported by the National Natural Science Foundation of China (Grant No. U1431227, No. 11273014) and partly by the Fundamental Research Funds for the Central Universities. Our deepest gratitude goes to the anonymous reviewers for their careful reading and constructive suggestions, which have helped improve this paper substantially.

[Da Costa(1992)] Da Costa, G. S. 1992, ASPC, 23, 90-104
[Gullixson(1992)] Gullixson, C. A. 1992, , 23, 130
[Howell(2012)] Howell, S. B. 2012, , 124(913), 263-267
[Peng et al.(2012)] Peng, Q. Y., Vienne, A., & Zhang, Q. F., et al. 2012, , 144, 170
[Röeser et al.(2010)] Röeser, S., Demleitner, M., & Schilbach, E. 2010, , 139, 2440-2447
[Snodgrass & Carry(2013)] Snodgrass, C., & Carry, B. 2013, The Messenger, 152, 14
[Wong(2010)] Wong, M. H. 2010, WFC3 ISR 2010-04
[Zhang et al.(2012)] Zhang, Q. F., Peng, Q. Y., & Zhu, Z. 2012, Res. Astron. Astrophys., 12(10), 1451
Automatic Detection of GUI Design Smells: The Case of Blob Listener

Valéria Lelli, University of Ceará, Brazil, valerialelli@great.ufc.br
Arnaud Blouin, INSA Rennes, France, arnaud.blouin@irisa.fr
Benoit Baudry, Inria, France, benoit.baudry@inria.fr
Fabien Coulon, Inria, France, fabien.coulon@inria.fr
Olivier Beaudoux, ESEO, France, olivier.beaudoux@eseo.fr

December 30, 2023

Graphical User Interfaces (GUIs) intensively rely on event-driven programming: widgets send GUI events, which capture users' interactions, to dedicated objects called controllers. Controllers implement several GUI listeners that handle these events to produce GUI commands. In this work, we conducted an empirical study on 13 large Java Swing open-source software systems. We study to what extent the number of GUI commands that a GUI listener can produce has an impact on the change- and fault-proneness of the GUI listener code. We identify a new type of design smell, called Blob Listener, that characterizes GUI listeners that can produce more than two GUI commands. We show that 21% of the analyzed GUI controllers are Blob Listeners. We propose a systematic static code analysis procedure that searches for Blob Listeners, which we implement in InspectorGuidget. We conducted experiments on six software systems for which we manually identified 37 instances of Blob Listener. InspectorGuidget successfully detected 36 Blob Listeners out of 37.
The results exhibit a precision of 97.37% and a recall of 97.59%. Finally, we propose coding practices to avoid the use of Blob Listeners.

F.3.3 [Studies of Program Constructs]: Control primitives; H.5.2 [User Interfaces]: Graphical user interfaces (GUI); D.3.3 [Language Constructs and Features]: Patterns

§ INTRODUCTION

Graphical User Interfaces (GUIs) are the visible and tangible vector that enables users to interact with software systems. While GUI design and qualitative assessment are handled by GUI designers, integrating GUIs into software systems remains a software engineering task. Software engineers develop GUIs following widespread architectural design patterns, such as MVC <cit.> or MVP <cit.> (Model-View-Controller/Presenter), that consider GUIs as first-class concerns (i.e., the View in these two patterns). These patterns clarify the implementation of GUIs by clearly separating concerns, thus minimizing the "spaghetti of call-backs" <cit.>. These implementations rely on event-driven programming where events are treated by controllers (resp. presenters[For simplicity, we use the term controller to refer to any kind of component of MV* architectures that manages events triggered by GUIs, such as Presenter (MVP) or ViewModel (MVVM <cit.>).]), as depicted by <Ref>. In this code example, the AController controller manages three widgets, b1, b2, and m3. To handle the events that these widgets trigger in response to users' interactions, the GUI listener ActionListener is implemented in the controller. One major job of GUI listeners is the production of GUI commands, i.e., a set of statements executed in reaction to a GUI event produced by a widget (<Ref>). Like any code artifact, GUI controllers must be tested and maintained, and are prone to evolution and errors. In particular, software developers are free to develop GUI listeners that can produce a single or multiple GUI commands.
In this work, we investigate the effects of such development practices on the code quality of the GUI listeners.

Listing (caption: Code example of a GUI controller):

class AController implements ActionListener {
  JButton b1;
  JButton b2;
  JMenuItem m3;

  public void actionPerformed(ActionEvent e) {
    Object src = e.getSource();
    if(src == b1) {
      // Command 1
    } else if(src == b2) {
      // Command 2
    } else if(src instanceof AbstractButton &&
        ((AbstractButton) src).getActionCommand().equals(m3.getActionCommand())) {
      // Command 3
    }
  }
}

In many cases GUI code is intertwined with the rest of the code. We thus propose a static code analysis for detecting the GUI commands that a GUI listener can produce. Using this code analysis, we then conduct a large empirical study on Java Swing open-source GUIs. We focus on the Java Swing toolkit because of its popularity and the large quantity of Java Swing legacy code. We empirically study to what extent the number of GUI commands that a GUI listener can produce has an impact on the change- or fault-proneness of the GUI listener code, considered in the literature as negative impacts of a design smell on the code <cit.>. Based on the results of this experiment, we define a GUI design smell we call the Blob Listener: a GUI listener that can produce more than two GUI commands. For example, in <Ref>, the GUI listener implemented in AController manages events produced by three widgets, b1, b2, and m3 (<Ref>), that produce one GUI command each. 21% of the analyzed GUI controllers are Blob Listeners. We provide an open-source tool, InspectorGuidget[<https://github.com/diverse-project/InspectorGuidget>], that automatically detects Blob Listeners in Java Swing GUIs. To evaluate the ability of InspectorGuidget at detecting Blob Listeners, we considered six representative Java software systems. We manually retrieved all instances of Blob Listener in each application to build a ground truth for our experiments: we found 37 Blob Listeners. InspectorGuidget detected 36 Blob Listeners out of 37. The experiments show that our algorithm has a precision of 97.37% and a recall of 97.59% in detecting Blob Listeners.
Our contributions are:

* an empirical study on 13 Java Swing open-source software systems. This study investigates the current coding practices of GUI controllers. The main result of this study is the identification of a GUI design smell we call the Blob Listener.
* a precise characterization of the Blob Listener. We also discuss the different coding practices of GUI listeners we observed in listeners having fewer than three commands.
* an open-source tool, InspectorGuidget, that embeds a static code analysis to automatically detect the presence of Blob Listeners in Swing GUIs. We evaluated the ability of InspectorGuidget at detecting Blob Listeners.

The paper is organized as follows. <Ref> describes an empirical study that investigates coding practices of GUI controllers. Based on this study, <Ref> describes an original GUI design smell we call the Blob Listener. <Ref> then introduces an algorithm to detect Blob Listeners, evaluated in <Ref>. The paper ends with related work (<Ref>) and a research agenda (<Ref>).

§ AN EMPIRICAL STUDY ON GUI LISTENERS

All the material of the experiments is freely available on the companion web page.

§.§ Independent Variables

GUI listeners are core code artifacts in software systems. They receive and treat GUI events produced by users while interacting with GUIs. In reaction to such events, GUI listeners produce GUI commands, which can be defined as follows:

A GUI command <cit.>, a.k.a. action <cit.>, is a set of statements executed in reaction to a user interaction, captured by an input event, performed on a GUI.

GUI commands may be supplemented with: a pre-condition checking whether the command fulfills the prerequisites to be executed; undo and redo functions for, respectively, canceling and re-executing the command.

Number of GUI Commands (CMD). This variable measures the number of GUI commands a GUI listener can produce. To measure this variable, we developed a dedicated static code analysis (see <Ref>).

GUI listeners are thus in charge of the relations between a software system and its GUI.
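The command concept above, a pre-condition plus undo/redo functions, is commonly captured by a small interface. The following is a hedged sketch of that idea, not the paper's formal model; the names are ours.

```java
// Sketch of a GUI command with an optional pre-condition and undo/redo.
interface Command {
    default boolean canDo() { return true; } // pre-condition
    void execute();
    void undo();
    default void redo() { execute(); }       // re-execute by default
}

// Example command: appends text to a buffer; undo removes it again.
class AppendCommand implements Command {
    final StringBuilder target;
    final String text;

    AppendCommand(StringBuilder target, String text) {
        this.target = target;
        this.text = text;
    }

    @Override public boolean canDo() { return !text.isEmpty(); }
    @Override public void execute()  { target.append(text); }
    @Override public void undo()     { target.setLength(target.length() - text.length()); }
}
```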
As any code artifact, GUI code has to be tested and analyzed to provide users with high-quality (from a software engineering point of view) GUIs. In this work, we specifically focus on a coding practice that affects the code quality of GUI listeners: we want to determine whether the number of GUI commands that GUI listeners can produce has an effect on the code quality of these listeners. Indeed, software developers are free to develop GUI listeners that can produce a single or multiple GUI commands, since no coding practices or GUI toolkits provide coding recommendations. To do so, we study to what extent the number of GUI commands that a GUI listener can produce has an impact on the change- and fault-proneness of the GUI listener code. Such a correlation has already been studied to evaluate the impact of several antipatterns on code quality <cit.>.

We formulate the research questions of this study as follows:

RQ1: To what extent does the number of GUI commands per GUI listener have an impact on the fault-proneness of the GUI listener code?
RQ2: To what extent does the number of GUI commands per GUI listener have an impact on the change-proneness of the GUI listener code?
RQ3: Does a threshold value, i.e., a specific number of GUI commands per GUI listener, that can characterize a GUI design smell exist?

§.§ Dependent Variables

To answer the previously introduced research questions, we measure the following dependent variables.

Average Commits (COMMIT). This variable measures the average number of commits per line of code (LoC) of GUI listeners. This variable will permit evaluating the change-proneness of GUI listeners. The measure of this variable implies that the objects of this study, i.e., the software systems that will be analyzed, must have a large and accessible change history. To measure this variable, we automatically count the number of commits that concern each GUI listener.

Average Fault Fixes (FIX). This variable measures the average number of fault fixes per LoC of GUI listeners.
This variable will permit evaluating the fault-proneness of GUI listeners. The measure of this variable implies that the objects of this study must use a large and accessible issue-tracking system. To measure this variable, we manually analyze the log of the commits that concern each GUI listener. We count the commits whose log refers to a fault fix, i.e., logs that point to a bug report of an issue-tracking system (using a bug ID or a URL) or that contain the term "fix" (or a synonym).

Both COMMIT and FIX rely on the ability to get the commits that concern a given GUI listener. For each software system, we use all the commits of its history. To identify the start and end lines of GUI listeners, we developed a static code analysis. This code analysis uses the definition of the GUI listener methods provided by GUI toolkits (e.g., void actionPerformed(ActionEvent)) to locate these methods in the code. Moreover, commits may change the position of GUI listeners in the code (by adding or removing LoCs). To get the exact position of a GUI listener while studying its change history, we use the Git tool git-log[<https://git-scm.com/docs/git-log>]. The git-log tool has options that permit to: follow the file to log across file renames (option -M); trace the evolution of a given line range across commits (option -L). We then manually check the logs for errors.

§.§ Objects

The objects of this study are a set of large open-source software systems. The dependent variables previously introduced impose several constraints on the selection of these software systems. They must use an issue-tracking system and the Git version control system. Their change history must be large (i.e., they must have numerous commits) to make the analysis of the commits relevant. In this work, we focus on Java Swing GUIs because of the popularity and the large quantity of Java Swing legacy code. We thus selected from the GitHub platform[<https://github.com/>] 13 large Java Swing software systems.
The average number of commits of these software systems is approximately 2500. The total size of Java code is 1414k Java LoCs; their average size is approximately 109k Java LoCs.

§.§ Results

We can first highlight that the total number of GUI listeners producing at least one GUI command identified by our tool is 858, i.e., an average of 66 GUI listeners per software system. This approximately corresponds to 20 kLoCs, i.e., around 1.33% of their Java code. <Ref> shows the distribution of the listeners according to their number of GUI commands. Most of the listeners (465) can produce one command (we will call these one-command listeners). 211 listeners can produce two commands, 81 listeners can produce three commands, and 101 listeners can produce at least four commands. To obtain representative data, we will consider in the following analyses four categories of listeners: one-command listeners, two-command listeners, three-command listeners, and four+-command listeners.

Besides, a first analysis of the data exhibits many outliers, in particular for the one-command listeners. To understand the presence of these outliers, we manually scrutinized some of them and their change history. We observed that some of these outliers are GUI listeners whose size has been reduced over the commits. For instance, we identified outliers that contained multiple GUI commands before commits reduced them to one- or two-command listeners. Such listeners distort the analysis of the results by treating listeners that used to be large as one- or two-command listeners. We thus removed those outliers from the data set, since outlier removal, when justified, may bring benefits to the data analysis <cit.>. We compute the box-plot statistics to identify and then remove the outliers.

<Ref> depicts the number of fault fixes per LoC (FIX) of the analyzed GUI listeners. We observe an increase of the fault fixes per LoC when CMD≥3. These results are detailed in <Ref>.
The mean value of FIX constantly increases with CMD. Because these data follow a monotonic relationship, we use the Spearman's rank-order correlation coefficient to assess the correlation between the number of fault fixes per LoC and the number of GUI commands in GUI listeners <cit.>. We also use a 95% confidence level (p-value<0.05). This test exhibits a low correlation (0.4438), statistically significant with a p-value of 2.2e-16.

Regarding RQ1, on the basis of these results we can conclude that the number of GUI commands per GUI listener does not have a strong negative impact on the fault-proneness of the GUI listener code. This result is surprising given the global increase that can be observed in <Ref>. One possible explanation is that the mean of the number of bugs per LoC slowly increases with the number of commands, as shown in the first row of <Ref>. On the contrary, the range of the box plots of <Ref> strongly increases for 3-command listeners. This means that the 3+-command data sets are more variable than the 1- and 2-command data sets.

<Ref> depicts the number of commits per LoC (COMMIT) of the analyzed GUI listeners. These results are also detailed in <Ref>. We observe that COMMIT does not constantly increase with CMD. This observation is supported by the absence of correlation between these two variables (0.0570), even if this result is not statistically significant, with a p-value of 0.111. We can, however, observe in <Ref> an increase of COMMIT for the three-command listeners.

Regarding RQ2, on the basis of these results we can conclude that there is no evidence of a relationship between the number of GUI commands per GUI listener and the change-proneness of the GUI listener code.

Regarding RQ3, we observe a significant increase of the fault fixes per LoC for 3+-command listeners. We observe a mean of 0.004 bugs per LoC for 1- and 2-command listeners, against a mean of 0.024 bugs per LoC for 3+-command listeners, as highlighted by <Ref>.
We apply the independent-samples Mann-Whitney test to compare 1- and 2-command listeners against 3+-command listeners and obtain a p-value of 2.2e-16 (p-value<0.05). We observe a similar, but not significant, increase in the commits per LoC for the three-command listeners. We thus state that a threshold value, i.e., a specific number of GUI commands per GUI listener, that characterizes a GUI design smell exists. On the basis of the results, we define this threshold as three GUI commands per GUI listener. Of course, this threshold value is an indication and, as for any design smell, it may vary depending on the context. Indeed, as noticed in several studies, threshold values of design smells must be customizable to give system experts the possibility to adjust them <cit.>. Using the threshold value of 3, the concerned GUI listeners represent 21% of the analyzed GUI listeners and 0.54% of the Java code of the analyzed software systems. Besides, the average size of the 3+-command listeners is 42 LoCs, i.e., less than the long-method design smell, defined as between 100 and 150 LoCs in mainstream code analysis tools <cit.>.

To conclude this empirical study, we highlight the main findings. The correlation between the number of bug fixes and the number of GUI commands is too low to draw strong conclusions; future work will include a larger empirical study to investigate this relation in more depth. However, a significant increase of the fault fixes per LoC for 3+-command listeners is observed. We thus set to three the number of GUI commands beyond which a GUI listener is considered as badly designed. This threshold value is an indication and, as for any design smell, it may be adjusted by system experts according to the context. We show that 0.54% of the Java code of the analyzed software systems is affected by this new GUI design smell, which concerns 21% of the analyzed GUI listeners.
The threats to validity of this empirical study are discussed in <Ref>.

§ BLOB LISTENER: DEFINITION & ILLUSTRATION

This section introduces the GUI design smell we call the Blob Listener, identified in the previous section, and illustrates it through real examples.

§.§ Blob Listener

We define the Blob Listener as follows: a Blob Listener is a GUI listener that can produce more than two GUI commands. Blob Listeners can produce several commands because of the multiple widgets they have to manage. In such a case, Blob Listeners' methods (such as actionPerformed) may be composed of a succession of conditional statements that: 1) check whether the widget that produced the event to treat is the expected one, i.e., the widget that responds to a user interaction; 2) execute the command once the widget is identified. We identified three variants of the Blob Listener. The variants differ in the way the widget that produced the event is identified. These three variants are described and illustrated as follows.

Comparing a property of the widget. <Ref> is an example of the first variant of the Blob Listener: the widgets that produced the event (<ref>) are identified with a String associated to the widget and returned by getActionCommand (<ref>). Each of the three if blocks forms a GUI command to execute in response to the event triggered by the corresponding widget (<ref>).

Listing (caption: Widget identification using widget's properties in Swing):

public class MenuListener implements ActionListener, CaretListener {
  protected boolean selectedText;

  public void actionPerformed(ActionEvent e) {
    Object src = e.getSource();
    if(src instanceof JMenuItem || src instanceof JButton) {
      String cmd = e.getActionCommand();
      if(cmd.equals("Copy")) { // Command #1
        if(selectedText)
          output.copy();
      } else if(cmd.equals("Cut")) { // Command #2
        output.cut();
      } else if(cmd.equals("Paste")) { // Command #3
        output.paste();
      } // etc.
    }
  }

  public void caretUpdate(CaretEvent e) {
    selectedText = e.getDot() != e.getMark();
    updateStateOfMenus(selectedText);
  }
}

In Java Swing, the properties used to identify widgets are mainly the name or the action command of these widgets.
The action command is a string used to identify the kind of commands the widget will trigger. <Ref>, related to <Ref>, shows how an action command (<ref>) and a listener (<ref>) can be associated to a widget in Java Swing during the creation of the user interface.

[xleftmargin=5.0ex,language=MyJava, caption=Initialization of Swing widgets to be controlled by the same listener, label=lst.init]
menuItem = new JMenuItem();
menuItem.setActionCommand("Copy");
menuItem.addActionListener(listener);

button = new JButton();
button.setActionCommand("Cut");
button.addActionListener(listener);
//...

Checking the type of the widget. The second variant of Blob listener consists of checking the type of the widget that produced the event. <Ref> depicts such a practice, where the type of the widget is tested using the operator instanceof (<Ref>). One may note that such if statements may have nested if statements to test properties of the widget, as explained in the previous point.

[xleftmargin=5.0ex,language=MyJava, caption=Widget identification using the operator instanceof, label=lst.instanceof]
public void actionPerformed(ActionEvent evt) {
  Object target = evt.getSource();
  if (target instanceof JButton) {
    //...
  } else if (target instanceof JTextField) {
    //...
  } else if (target instanceof JCheckBox) {
    //...
  } else if (target instanceof JComboBox) {
    //...
  }
}

Comparing widget references. The last variant of Blob listener consists of comparing widget references to identify the one at the origin of the event. <Ref> illustrates this variant, where getSource returns the source widget of the event, which is compared to widget references contained by the listener (<ref>).

[xleftmargin=5.0ex,language=MyJava, caption=Comparing widget references, label=lst.fitsAllGWT]
public void actionPerformed(ActionEvent event) {
  if(event.getSource() == view.moveDown) {
    //...
  } else if(event.getSource() == view.moveLeft) {
    //...
  } else if(event.getSource() == view.moveRight) {
    //...
  } else if(event.getSource() == view.moveUp) {
    //...
  }
  else if(event.getSource() == view.zoomIn) {
    //...
  } else if(event.getSource() == view.zoomOut) {
    //...
  }
}

In these three variants, multiple if statements are successively defined. Such successions are required when one single GUI listener gathers events produced by several widgets. In this case, the listener needs to identify the widget that produced the event to process. The three variants of the Blob listener design smell also appear in other Java GUI toolkits, namely SWT, GWT, and JavaFX. Examples for these toolkits are available on the companion webpage of this paper.

§ AUTOMATIC DETECTION OF GUI COMMANDS AND BLOB LISTENERS

§.§ Approach Overview

<Ref> describes the process we propose to automatically detect Blob listeners. The detection process includes three main steps. First, GUI listeners that contain conditional blocks (conditional GUI listeners) are automatically detected in the source code through a static analysis (<Ref>). Then, the GUI commands, produced while interacting with widgets, that compose conditional GUI listeners are automatically detected using a second static analysis (<Ref>). This second static analysis permits to spot the GUI listeners that are Blob listeners, i.e. those having more than two commands. Our tool uses Spoon, a library for transforming and analyzing Java source code <cit.>, to support the static analyses.

§.§ Detecting Conditional GUI Listeners

We define a conditional GUI listener as follows: A conditional GUI listener is a listener composed of conditional blocks used to identify the widget that produced an event to process. Such conditional blocks may encapsulate a command to execute in reaction to the event. For instance, five nested conditional blocks (<Ref>) compose the listener method actionPerformed in <Ref> (<Ref>). The first conditional block checks the type of the widget that produced the event (<Ref>). This block contains three other conditional blocks that identify the widget using its action command (<Ref>).
Each of these three blocks encapsulates one command to execute in reaction to the event. <Ref> details the detection of conditional GUI listeners. The inputs are all the classes of an application and the list of listener classes of a GUI toolkit. First, the source code classes are processed to identify the GUI controllers. When a class implements a GUI listener (<Ref>), all the implemented listener methods are retrieved (<Ref>). For example, a class that implements the MouseMotionListener interface must implement the listener methods mouseDragged and mouseMoved. Next, each GUI listener is analyzed to identify those having at least one conditional statement (<Ref>). All listeners with such statements are considered as conditional GUI listeners (<Ref>).

§.§ Detecting Commands in Conditional GUI Listeners

<Ref> details the detection of GUI commands. The input is a set of conditional GUI listeners. The statements of conditional GUI listeners are processed to detect commands. First, we build the control-flow graph (CFG) of each listener (<Ref>). Second, we traverse the CFG to gather all the conditional statements that compose a given statement (<Ref>). Next, these conditional statements are analyzed to detect any reference to a GUI event or widget (<Ref>). Typical references we found are for instance:

[language=MyJava,numbers=none]
if(e.getSource() instanceof Component) ...
if(e.getSource() == copy) ...
if(e.getActionCommand().contains("copy")) ...

where e refers to a GUI event, Component to a Swing class, and copy to a Swing widget. The algorithm recursively analyzes the variables and class attributes used in the conditional statements until a reference to a GUI object is found in the controller class. For instance, the variable actionCmd in the following code excerpt is also considered by the algorithm.

[language=MyJava,numbers=none]
String actionCmd = e.getActionCommand();
if("copy".equals(actionCmd)) ...
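The actual analysis performs this search on Spoon's typed AST. As a deliberately simplified, standard-library-only illustration of the heuristic (the class name and regular expression are ours, not the tool's implementation), the check can be approximated on the text of a conditional expression:

```java
import java.util.List;
import java.util.regex.Pattern;

public class GuiRefHeuristic {

    // Textual stand-in for the AST-level check: does a conditional expression
    // reference a GUI event or widget? (the real tool resolves types with Spoon)
    private static final Pattern GUI_REF = Pattern.compile(
            "\\bgetSource\\b|\\bgetActionCommand\\b|instanceof\\s+J\\w+");

    static boolean refersToGui(String conditionalExpr) {
        return GUI_REF.matcher(conditionalExpr).find();
    }

    // Count the conditional statements of a listener that qualify as
    // potential commands under this heuristic
    static long potentialCommands(List<String> conditionalExprs) {
        return conditionalExprs.stream()
                .filter(GuiRefHeuristic::refersToGui)
                .count();
    }
}
```

Unlike this textual sketch, the type-resolved analysis also follows aliases (such as the actionCmd variable above) back to their GUI-typed origin, which plain pattern matching cannot do reliably.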
When a reference to a GUI object is found in a conditional statement, it is considered as a potential command (<Ref>). These potential commands are then analyzed more precisely to remove irrelevant ones, as discussed below. A conditional block statement can be surrounded by other conditional blocks. Potential commands detected in the function getPotentialCmds can thus be nested within other commands. We define such commands as nested commands. In such a case, the algorithm analyzes the nested conditional blocks to detect the most representative command. We observed two cases:

* A potential command contains only a single potential command, recursively. The following code excerpt depicts this case. Two potential commands compose this code. Command #1 has a set of statements (i.e. command #2) to be executed when the widget labeled "Copy" is pressed. However, command #2 only checks whether there is text typed into the widget "output" to then allow the execution of command #1. So, command #2 works as a precondition to command #1, which is the command executed in reaction to the interaction. In this case, only the first one will be considered as a GUI command.

[xleftmargin=5.0ex,language=MyJava]
if(cmd.equals("Copy")) { //Potential command #1
  if(!output.getText().isEmpty()) { //Potential command #2
    output.copy();
  }
}

* A potential command contains more than one potential command. The following code excerpt depicts this case. Four potential commands compose this code (<Ref>). In this case, the potential commands that contain multiple commands are not considered. In our example, the first potential command (<Ref>) is ignored. One may note that this command checks the type of the widget, which is a variant of Blob listener (see <Ref>).
The three nested commands, however, are the real commands triggered on user interactions.

[xleftmargin=5.0ex,language=MyJava]
if(src instanceof JMenuItem) { //Potential command #1
  String cmd = e.getActionCommand();
  if(cmd.equals("Copy")) { //Potential command #2
    //...
  } else if(cmd.equals("Cut")) { //Potential command #3
    //...
  } else if(cmd.equals("Paste")) { //Potential command #4
    //...
  }
}

These two cases are described in <Ref>. Given a potential command, all its nested potential commands are gathered (<Ref>). The function getNestCmds analyzes the commands by comparing their code line positions and statements. So, if one command C contains other commands, they are marked as nested to C. Then, for each potential command and its nested ones: if the number of nested commands equals 1, the single nested command is ignored (<Ref>); if the number of nested commands is greater than 1, the root command is ignored (<Ref>). Finally, GUI listeners that can produce more than two commands are marked as Blob listeners. Our tool allows the setting of this threshold value to give system experts the possibility to adjust it, as suggested by several studies <cit.>.

§ EVALUATION

To evaluate the efficiency of our detection algorithm, we address the two following research questions:

RQ4 To what extent is the detection algorithm able to detect GUI commands in GUI listeners correctly?
RQ5 To what extent is the detection algorithm able to detect Blob listeners correctly?

The evaluation has been conducted using our tool, an implementation of the Blob listener detection algorithm. It is an Eclipse plug-in that analyzes Java Swing software systems. It leverages the Eclipse development environment to raise warnings in the Eclipse Java editor on detected Blob listeners and their GUI commands. Initial tests have been conducted on software systems not reused in this evaluation. The tool and all the material of the evaluation are freely available on the companion web page.
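The two pruning rules of the command detection described above can be sketched as a small recursive procedure. This is a simplified model with hypothetical class names, not the tool's actual code:

```java
import java.util.ArrayList;
import java.util.List;

public class NestedCommandResolver {

    // Minimal model of a potential command and the commands nested within it
    static class Cmd {
        final String name;
        final List<Cmd> nested = new ArrayList<>();
        Cmd(String name) { this.name = name; }
        Cmd add(Cmd c) { nested.add(c); return this; }
    }

    // Rule 1: a command with a single nested command keeps the root (the nested
    //         block is only a precondition). Rule 2: a command with several nested
    //         commands is a widget dispatch; the root is dropped, leaves are kept.
    static List<Cmd> resolve(Cmd cmd) {
        List<Cmd> kept = new ArrayList<>();
        if (cmd.nested.size() <= 1) {
            kept.add(cmd);
        } else {
            for (Cmd c : cmd.nested)
                kept.addAll(resolve(c));
        }
        return kept;
    }
}
```

A listener whose resolved command count exceeds the (configurable) threshold of two would then be flagged as a Blob listener.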
§.§ Objects

We conducted our evaluation by selecting six well-known or large open-source software systems based on the Java Swing toolkit: FastPhotoTagger, GanttProject, JAxoDraw, Jmol, TerpPaint, and TripleA. We use software systems other than those used in our empirical study (<Ref>) to diversify the data set used in this work and assess the validation of the detection algorithm on other systems. Only GanttProject is part of both experiments, since it is traditionally used in experiments on design smells. <Ref> lists these systems and some of their characteristics, such as their number of GUI listeners.

§.§ Methodology

The accuracy of the static analyses that compose the detection algorithm is measured by the recall and precision metrics <cit.>. We ran our tool on each software system to detect GUI listeners, commands, and Blob listeners. We assume as a precondition that only GUI listeners are correctly identified by our tool. Thus, to measure the precision and recall of our automated approach, we manually analyzed all the GUI listeners detected by our tool to:

* Check conditional GUI listeners. For each GUI listener, we manually checked whether it contains at least one conditional GUI statement. The goal is to answer RQ4 and RQ5 more precisely, by verifying whether all the conditional GUI listeners are statically analyzed to detect commands and Blob listeners.

* Check commands. We analyzed the conditional statements of GUI listeners to check whether they encompass commands. Then, recall measures the percentage of relevant commands that are detected (<Ref>). Precision measures the percentage of detected commands that are relevant (<Ref>).

Recall_cmd (%) = |{RelevantCmds} ∩ {DetectedCmds}| / |{RelevantCmds}| × 100
Precision_cmd (%) = |{RelevantCmds} ∩ {DetectedCmds}| / |{DetectedCmds}| × 100

RelevantCmds corresponds to all the commands defined in GUI listeners, i.e. the commands that should be detected by our tool. Recall and precision are calculated over the number of false positives (FP) and false negatives (FN).
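Equivalently, with true positives TP = |Relevant ∩ Detected|, both formulas reduce to counts of TP, FP, and FN. A minimal sketch with toy numbers (not the study's results):

```java
public class AccuracyMetrics {

    // Recall: percentage of relevant items that were detected (TP vs missed FN)
    static double recall(int tp, int fn) {
        return 100.0 * tp / (tp + fn);
    }

    // Precision: percentage of detected items that are relevant (TP vs spurious FP)
    static double precision(int tp, int fp) {
        return 100.0 * tp / (tp + fp);
    }

    public static void main(String[] args) {
        // toy counts: 9 true positives, 1 false negative, 1 false positive
        System.out.println(recall(9, 1));     // -> 90.0
        System.out.println(precision(9, 1));  // -> 90.0
    }
}
```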
A command incorrectly detected by our tool while it is not a command is classified as a false positive. A false negative is a command not detected by our tool.

* Check Blob listeners. To check whether a GUI listener is a Blob listener, we stated whether the commands it contains concern several widgets. We use the same metrics as for command detection to measure the accuracy of Blob listener detection:

Recall_blob (%) = |{RelevantBlobs} ∩ {DetectedBlobs}| / |{RelevantBlobs}| × 100
Precision_blob (%) = |{RelevantBlobs} ∩ {DetectedBlobs}| / |{DetectedBlobs}| × 100

RelevantBlobs are all the GUI listeners that handle more than two commands (see <Ref>). Detecting Blob listeners is therefore dependent on the command detection accuracy.

§.§ Results and Analysis

RQ4: Command Detection Accuracy. <Ref> shows the number of commands successfully detected per software system. TripleA presented the highest number of GUI listeners (559), conditional GUI listeners (174), and commands (152). One can notice that despite the low number of conditional GUI listeners in TerpPaint (4), this software system has 34 detected commands. So, according to the sample we studied, the number of commands does not seem to be correlated to the number of conditional GUI listeners. <Ref> also reports the number of FN and FP commands, and the values of the recall and precision metrics. TripleA and Jmol revealed the highest numbers of FN, whereas TerpPaint presented the lowest number of FN. The precision of the command detection is 99.10%. Most of the commands (437/441) detected by our algorithm are relevant. We, however, noticed 76 relevant commands not detected, leading to an average recall of 85.89%. Thus, our algorithm is less accurate in detecting all the commands than in detecting the relevant ones. For example, TripleA revealed 44 FN commands and no false positive result, leading to a recall of 77.55% and a precision of 100%.
The four FP commands have been observed in JAxoDraw (2) and Jmol (2), leading to a precision of 98.02% and 98.10%, respectively. <Ref> classifies the 76 FN commands according to the cause of their non-detection. 28 commands were not detected because of the use of widgets inside block statements rather than inside the conditional statements. For example, their conditional expressions refer to boolean or integer types rather than widget or event types. 16 other commands were not detected since they rely on ad hoc widgets or GUI listeners. These widgets are developed for a specific purpose and rely on specific user interactions and complex data representation <cit.>. Thus, our approach cannot identify widgets that are not developed under the Java Swing toolkit. All the FN commands reported in this category concern TripleA (14) and Jmol (2), which use several ad hoc widgets. Similarly, we found eight FN commands that use classes defined outside the Swing class hierarchy. A typical example is the use of widgets' models (e.g. the classes ButtonModel or TableModel) in GUI listeners. Also, we identified 24 FN commands caused by an incorrect code analysis (either bugs in our tool or in the Spoon library). This result was mainly affected by Jmol, which has a listener with 14 commands not detected. To conclude on RQ4, our approach is efficient for detecting the GUI commands that compose GUI listeners, even if some improvements are possible.

RQ5: Blob Listener Detection Accuracy. <Ref> gives an overview of the results of the Blob listener detection per software system. The highest numbers of detected Blob listeners concern TripleA (11), Jmol (11), and JAxoDraw (7). Only one false positive and one false negative have been identified, against 37 Blob listeners correctly detected. The average recall is 97.59% and the average precision is 97.37%. The average time spent to analyze a software system is 10810. It includes the time that Spoon takes to process all classes plus the time to detect GUI commands and Blob listeners. The worst case is measured in TripleA, i.e. the largest system, with 16732.
Spoon takes a significant time to load the classes of large software systems (12437 out of 16732 in TripleA). Similarly to the command detection, we did not observe a correlation between the numbers of conditional GUI listeners, commands, and Blob listeners. So, regarding the recall and the precision, our approach is efficient for detecting Blob listeners. Regarding the single FN Blob listener, located in the Jmol software system, this FN is due to an error in our implementation. Because of a problem in the analysis of variables in the code, 14 GUI commands were not detected. <Ref> gives an example of the FP Blob listener detected in JAxoDraw. It is composed of three commands based on checking the states of widgets. For instance, the three commands rely on the selection of a list (Lines <ref>, <ref>, and <ref>).

[xleftmargin=5.0ex,language=MyJava, caption=GUI code excerpt from JAxoDraw, label=lst.codeJaxoDraw]
public final void valueChanged(ListSelectionEvent e) {
  if (!e.getValueIsAdjusting()) {
    final int index = list.getSelectedIndex();
    if (index == -1) { //Command #1
      removeButton.setEnabled(false);
      packageName.setText("");
    } else if ((index == 0) || (index == 1) //Command #2
        || (index == 2)) {
      removeButton.setEnabled(false);
      packageName.setText("");
    } else { //Command #3
      removeButton.setEnabled(true);
      String name = list.getSelectedValue().toString();
      packageName.setText(name);
    }
  }
}

§ DISCUSSION

In the next three subsections, we discuss the threats to validity of the experiments detailed in this paper, the scope of our tool, and alternative coding practices that can be used to limit Blob listeners.

§.§ Threats to validity

External validity. This threat concerns the possibility to generalize our findings. We designed the experiments using multiple Java Swing open-source software systems to diversify the observations. These unrelated software systems are developed by different people and cover various user interactions.
Several of the selected software systems have been used in previous research work, e.g. GanttProject <cit.>, Jmol <cit.>, and TerpPaint <cit.>, which have been extensively used to evaluate GUI testing tools. Our implementation and our empirical study (<Ref>) focus on the Java Swing toolkit only. We focus on the Java Swing toolkit because of its popularity and the large quantity of Java Swing legacy code. We provide on the companion web page examples of Blob listeners in other Java GUI toolkits, namely GWT, SWT, and JavaFX.

Construct validity. This threat relates to the perceived overall validity of the experiments. Regarding the empirical study (<Ref>), we used our tool to find GUI commands in the code. The tool might not have detected all the GUI commands. We show in the validation of this tool (<Ref>) that its precision (99.10%) and recall (86.05%) limit this threat. Regarding the validation of our tool, the detection of FNs and FPs required a manual analysis of all the GUI listeners of the software systems. To limit errors during this manual analysis, we added a debugging feature in our tool for highlighting GUI listeners in the code. We used this feature to browse all the GUI listeners and identify their commands to state whether these listeners are Blob listeners. During our manual analysis, we did not notice any error in the GUI listener detection. We also manually determined whether a listener is a Blob listener. To reduce this threat, we carefully inspected each GUI command highlighted by our tool.

§.§ Scope of the Approach

Our approach has the following limitations. First, our tool currently focuses on GUIs developed with the Java Swing toolkit. This is a design decision, since we leverage Spoon, a library to analyze Java source code. However, our solution is generic and can be used to support other GUI toolkits.
Second, our solution is limited to the analysis of GUI listeners and associated class attributes. We identified several GUI listeners that dispatch the event processing to other methods. Our static analyses can be extended to traverse these methods to improve their performance. Last, the criteria for the Blob listener detection should be augmented by inferring the related commands. For example, when a GUI listener is a Blob listener candidate, our algorithm should analyze its commands by comparing their commonalities (e.g. shared widgets and methods). The goal is to detect commands that form in fact a single command.

§.§ Alternative Practices

We scrutinized GUI listeners that are not Blob listeners to identify alternative practices that may limit Blob listeners. In most of the cases, these practices consist of producing one command per listener by managing one widget per listener.

Listeners as anonymous classes. <Ref> is an example of this good practice. A listener, defined as an anonymous class (<Ref>), registers with one widget (<Ref>). The methods of this listener are then implemented to define the command to perform when an event occurs. Because such listeners have to handle only one widget, the if statements used to identify the involved widget are no longer needed, simplifying the code.

[xleftmargin=5.0ex,language=MyJava, caption=Good practice for defining controllers: one widget per listener, label=lst.good]
private void registerWidgetHandlers() {
  view.resetPageButton().addActionListener(
    new ActionListener() {
      public void actionPerformed(ActionEvent e) {
        requestData(pageSize, null);
      }
    });
  view.previousPageButton().addActionListener(
    new ActionListener() {
      public void actionPerformed(ActionEvent e) {
        if(hasPreviousBookmark())
          requestData(pageSize, getPreviousBookmark());
      }
    });
  //...
}

Listeners as lambdas. <Ref> illustrates the same code as <Ref> but using lambdas, supported since Java 8. Lambdas simplify the implementation of anonymous classes that have a single method to implement.
[xleftmargin=5.0ex,language=MyJava, caption=Same code as in Listing <ref> but using Java 8 lambdas, label=lst.goodJava8]
private void registerWidgetHandlers() {
  view.resetPageButton().addActionListener(
    e -> requestData(pageSize, null));
  view.previousPageButton().addActionListener(e -> {
    if (hasPreviousBookmark())
      requestData(pageSize, getPreviousBookmark());
  });
  //...
}

Listeners as classes. In some cases, listeners have to manage different intertwined methods. This case notably appears when developers want to combine several listeners, or several methods of a single listener, to develop a more complex user interaction. For example, <Ref> is a code excerpt that describes a mouse listener where different methods are managed: mouseClicked (<Ref>), mouseReleased (<Ref>), and mouseEntered (<Ref>). Data are shared among these methods (isDrag, <Ref>).

[xleftmargin=5.0ex,language=MyJava, caption=A GUI listener defined as a class, label=lst.listClass]
class IconPaneMouseListener implements MouseListener {
  public void mouseClicked(MouseEvent e) {
    if(!isDrag) {
      //...
    }
  }
  public void mouseReleased(MouseEvent e) {
    isDrag = false;
    //...
  }
  public void mouseEntered(MouseEvent e) {
    isMouseExited = false;
  }
  // ...
}

§ RELATED WORK

Work related to this paper falls into two categories: design smell detection, and GUI maintenance and evolution.

§.§ Design Smell Detection

The characterization and detection of object-oriented (OO) design smells have been widely studied <cit.>. For instance, research works have characterized various OO design smells associated with code refactoring operations <cit.>. Multiple empirical studies have been conducted to observe the impact of several OO design smells on the code. These studies show that OO design smells can have a negative impact on maintainability <cit.>, understandability <cit.>, and change- or fault-proneness <cit.>. While developing seminal advances on OO design smells, these research works focus on OO concerns only.
Improving the validation and maintenance of GUI code implies a research focus on GUI design smells, as we propose in this paper. Related to GUI code analysis, Silva et al. propose an approach to inspect GUI source code as a reverse engineering process <cit.>. Their goal is to provide developers with a framework supporting the development of GUI metrics and code analyses. They also applied standard OO code metrics to GUI code <cit.>. Closely, Almeida et al. propose a first set of usability smells <cit.>. These works do not focus on GUI design smells and empirical evidence about their existence, unlike the work presented in this paper.

The automatic detection of design smells involves two steps. First, a source code analysis is required to compute source code metrics. Second, heuristics are applied on the basis of the computed metrics to detect design smells. Source code analyses can take various forms, notably static, as we propose, and historical. Regarding historical analysis, Palomba et al. propose an approach to detect design smells based on change history information <cit.>. Future work may also investigate whether analyzing code changes over time can help in characterizing Blob listeners. Regarding detection heuristics, the use of code metrics to define detection rules is a mainstream technique. Metrics can be assembled with threshold values defined empirically to form detection rules <cit.>. Search-based techniques are also used to exploit OO code metrics <cit.>, as well as machine learning <cit.> or Bayesian networks <cit.>. Still, these works do not cover GUI design smells. In this paper, we focus on static code analysis to detect GUI commands to form a Blob listener detection rule. To do so, we use a Java source code analysis framework that permits the creation of specific code analyzers <cit.>. Future work may investigate other heuristics and analyses to detect GUI design smells. Several research works on design smell characterization and detection are domain-specific.
For instance, Moha et al. propose a characterization and a detection process of service-oriented architecture anti-patterns <cit.>. Garcia et al. propose an approach for identifying architectural design smells <cit.>. Similarly, this work aims at showing that GUIs form another domain concerned by specific design smells that have to be characterized. Research studies have been conducted to evaluate the impact of design smells on system quality <cit.> or how they are perceived by developers <cit.>. Future work may focus on how software developers perceive Blob listeners.

§.§ GUI maintenance and evolution

Unlike object-oriented design smells, less research work focuses on GUI design smells. Zhang et al. propose a technique to automatically repair broken workflows in Swing GUIs <cit.>. Static analyses are proposed. This work highlights the difficulty "for a static analysis to distinguish UI actions [GUI commands] that share the same event handler [GUI listener]". In our work, we propose an approach to accurately detect the GUI commands that compose GUI listeners. Staiger also proposes a static analysis to extract GUI code, widgets, and their hierarchies in C/C++ software systems <cit.>. The approach, however, is limited to finding relationships between GUI elements and thus does not analyze GUI controllers and their listeners. Zhang et al. propose a static analysis to find violations in GUIs <cit.>. These violations occur when GUI operations are invoked by non-UI threads, leading to a GUI error. The static analysis is applied to infer a static call graph and check the violations. Frolin et al. propose an approach to automatically find inconsistencies in MVC JavaScript applications <cit.>. GUI controllers are statically analyzed to identify consistency issues (e.g. inconsistencies between variables and controller functions). This work is highly motivated by the weakly-typed nature of JavaScript.

§ CONCLUSION

In this paper, we investigate a new research area on GUI design smells.
We detail a specific GUI design smell, which we call Blob listener, that can affect GUI listeners. The empirical study we conducted exhibits that a specific number of GUI commands per GUI listener that characterizes a Blob listener exists. We define this threshold as three GUI commands per GUI listener. We show that 21% of the analyzed GUI listeners are affected by Blob listeners. We propose an algorithm to automatically detect Blob listeners. This algorithm has been implemented in a publicly available tool and then evaluated. Next steps of this work include a behavior-preserving algorithm to refactor detected Blob listeners. We will conduct a larger empirical study to investigate in more depth the relation between the number of bug fixes and the number of GUI commands. We will study different GUI coding practices to identify other GUI design smells. We will investigate whether some GUI faults <cit.> are accentuated by GUI design smells.

§ ACKNOWLEDGEMENTS

This work is partially supported by the French BGLE Project CONNEXION. We thank Yann-Gaël Guéhéneuc for his insightful comments on this paper.
^1Department of Physics, Emory University, Atlanta, GA, USA.

We utilize a nanoscale magnetic spin-valve structure to demonstrate that current-induced magnetization fluctuations at cryogenic temperatures result predominantly from the quantum fluctuations enhanced by the spin transfer effect. The demonstrated spin transfer due to quantum magnetization fluctuations is distinguished from the previously established current-induced effects by a non-smooth, piecewise-linear dependence of the fluctuation intensity on current. It can be driven not only by the directional flows of spin-polarized electrons, but also by their thermal motion and by scattering of unpolarized electrons. This effect is expected to remain non-negligible even at room temperature, and entails a ubiquitous inelastic contribution to the spin-polarizing properties of magnetic interfaces.

Spin transfer due to quantum magnetization fluctuations
Sergei Urazhdin^1
March 27, 2017
=======================================================

Spin transfer <cit.> – the transfer of angular momentum from spin-polarized electrical current to magnetic materials – has been extensively researched as an efficient mechanism for the electronic manipulation of the static and dynamic states in nanomagnetic systems, advancing our understanding of nanomagnetism and electronic transport, and enabling the development of energy-efficient magnetic nanodevices <cit.>. This effect can be understood based on the argument of spin angular momentum conservation for spin-polarized electrons scattered by a ferromagnet whose magnetization M⃗ is not aligned with the direction of polarization. The component of the electron spin transverse to M⃗ becomes absorbed, exerting a torque on the magnetization termed the spin transfer torque (STT). In nanomagnetic devices such as spin valve nanopillars [Fig. <ref>(a)], STT can enhance thermal fluctuations of magnetization [Fig.
<ref>(b)], resulting in its reversal <cit.> or auto-oscillation <cit.>, which can be utilized in memory, microwave generation, and spin-wave logic <cit.>. The approximation of the magnetization as a thermally fluctuating classical vector M⃗ provides an excellent description of the quasi-uniform magnetization dynamics <cit.>. However, the short-wavelength dynamical modes of the magnetization, whose frequencies extend into the THz range <cit.>, become frozen out at low temperatures, and the effects of spin transfer on them cannot be described in terms of the enhancement or suppression of thermal fluctuations. Short-wavelength modes are not readily accessible to the common electronic spectroscopy and magneto-optical techniques, and their role in spin transfer remains largely unexplored. Here, we introduce a frequency-non-selective magnetoelectronic measurement approach allowing us to demonstrate that at low temperatures the current-dependent magnetization fluctuations arise predominantly from the enhancement of quantum fluctuations by spin transfer. The observed effect is analogous to the well-studied spontaneous emission of a photon by a two-level system, also caused by quantum fluctuations, which occurs even when there are no photons to stimulate the emission. In the studied magnetic system, the role of photons is played by magnons – the quanta of the dynamical magnetization modes. Our results indicate that the contribution of quantum fluctuations enhanced by spin transfer remains larger than that of thermal fluctuations at temperatures up to over 100 K, and remains non-negligible even at room temperature. The demonstrated effect also entails a ubiquitous inelastic contribution to the spin-polarizing properties of magnetic interfaces. The effects of STT on thermal magnetization fluctuations [Fig.
<ref>(b)] can be described in terms of their current-dependent spectral intensity, or equivalently a current-dependent population of magnons <cit.>,

<N> = N_0/(1 - I/I_c),

where N_0 is the magnon population in thermal equilibrium, and I_c is the critical current for the onset of the dynamical instability <cit.>. The dependence Eq. (<ref>) has been verified by magneto-optical <cit.> and magnetoelectronic techniques <cit.>. We utilized the latter to verify the established effects of STT in the Permalloy (Py)-based magnetic nanopillars used in our study. The nanopillars were based on the multilayer with structure Cu(40)Py(10)Cu(4)Py(5)Au(2), where thicknesses are in nanometers. We used a combination of e-beam lithography and Ar ion milling to pattern the "free" layer F1=Py(5) and the Cu(4) spacer into a cylindrical shape with a 70 nm diameter, while the thicker "polarizer" F2=Py(10) was only partially patterned, allowing the magnons generated in this layer due to spin transfer to escape from the active area. Thus, spin transfer affected only the fluctuations of the free layer F1, while the role of F2 was limited to polarizing the electron current flowing through the nanostructure. The nanopillars were contacted with a Cu(80) top electrode, electrically isolated from the bottom electrode by an insulating SiO_2(15) layer. Magnetoelectronic measurements were performed in a pseudo-four-probe geometry by the lock-in detection technique, with an ac current of 50 μA rms at a frequency of 1.3 kHz superimposed on the dc bias current. The dependence of resistance on the magnetic field for our test structure is typical for the giant magnetoresistance (GMR) <cit.> in magnetic nanopillars, Fig. <ref>(c). The current-dependent differential resistance exhibits a sharp peak consistent with the onset of the dynamical instability at the critical current I_c <cit.>, Fig. <ref>(d).
The dependence of I_c on the magnetic field agrees with the calculation based on the Kittel formula for the ferromagnetic resonance (FMR) mode <cit.>, inset in Fig. <ref>(d).

To introduce our approach to magnetoelectronic measurements of current-dependent magnetization fluctuations, we analyze the relationship between GMR and magnon population. The GMR results in a sinusoidal dependence of resistance R on the angle θ between the magnetizations of the "free" layer F1 and the polarizer F2, R(θ) = R(0) + Δ R sin^2(θ/2) ≈ R(0) + Δ R θ^2/4, where R(0) is the resistance minimum, and Δ R is the total magnetoresistance <cit.>. The quadratic dependence R(θ) at small θ can be viewed as the lowest-order Taylor expansion. By symmetry, a quadratic relationship is also expected for non-uniform states, albeit somewhat rescaled by the electron diffusion across magnetically inhomogeneous regions.

To analyze the relation between θ and the magnon population, we note that each magnon has spin 1, regardless of the spatial characteristics of the corresponding dynamical mode <cit.>. Therefore, for a ferromagnet with total spin L = MV/μ_B, the total magnon population is related to the average θ by N = MV sin^2(θ/2)/μ_B <cit.>. Here, V is the volume of the nanomagnet, and μ_B is the Bohr magneton. Thus, the resistance is proportional to the total magnon population, R(θ) = R(0) + CNμ_BΔ R/(MV), with the coefficient C of order 1 reflecting the contribution of magnetic inhomogeneity to the GMR signal. Therefore, resistance variations due to GMR directly reflect the total magnon population in the nanopillar, not limited to quasi-uniform dynamical modes.

At subcritical currents, the resistance of the studied nanopillar exhibits an unusual piecewise-linear dependence, with a weak singularity at I=0, and a slope at I>0 larger than at I<0, Fig. <ref>(a). The variations of the applied field shift the curves, without noticeably affecting their slopes.
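The relation R(θ) = R(0) + CNμ_BΔR/(MV) can be inverted to estimate the total magnon population from a measured resistance increase. A sketch under the stated assumptions (the function name and the numerical values in the test are illustrative, not the experimental parameters):

```python
MU_B = 9.274e-24  # Bohr magneton, J/T

def magnons_from_gmr(dR, dR_total, M, V, C=1.0):
    """Invert R = R(0) + C*N*mu_B*dR_total/(M*V) for the magnon number N.

    dR       -- measured resistance increase above the parallel-state minimum
    dR_total -- total magnetoresistance Delta R
    M, V     -- magnetization and volume of the free layer (total spin L = M*V/mu_B)
    C        -- order-one coefficient accounting for magnetic inhomogeneity
    """
    return dR * M * V / (C * MU_B * dR_total)
```

The same proportionality works in either direction, so a calculated magnon population can equally be converted into a predicted resistance shift for comparison with the data.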
The shift can be explained by the magnon freeze-out, as illustrated in Fig. <ref>(b), which shows the field dependence of resistance at I=0, together with the calculated total thermal magnon population. The calculation was performed in the exchange approximation ħω = Dk^2, using the stiffness D = 4×10^-40 J m^2 for Permalloy <cit.>. The magnon population was determined by summing up the contributions N_0 = 1/[exp(ħω/kT)-1] of each mode, with the allowed values of the wavevector k determined using the pinned-magnetization boundary conditions for a square shape with dimensions 70×70×5 nm. The overall agreement between the variations of resistance and the calculated magnon population confirms the relationship between them established above, with the scaling coefficient C ≈ 0.2 reflecting a reduced sensitivity to short-wavelength modes. The observed dependence R(B) is somewhat weaker than expected based on the calculation of N_0(B), likely because the exchange approximation overestimates the frequencies, and thus underestimates the populations, of short-wavelength modes <cit.>.

Since the field does not noticeably affect the slopes of the curves in Fig. <ref>(a), the observed piecewise-linear dependence cannot be associated with thermal fluctuations, whose intensity is controlled by the field, see Fig. <ref>(b). It cannot be explained by Joule heating, because the dissipated power is quadratic in current, so the increase of resistance due to heating must also be at least quadratic in current. It is also inconsistent with the analytical expression Eq. (<ref>) of the spin torque theory. Electronic shot noise exhibits a similar linear increase of power with bias <cit.>.
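The mode sum behind the freeze-out calculation can be reproduced numerically. A rough sketch (the mode cutoff nmax is our simplification, and the Zeeman gap from the applied field is neglected, so absolute numbers are only indicative):

```python
import math

KB = 1.380649e-23  # Boltzmann constant, J/K

def total_thermal_magnons(T, D=4e-40, dims=(70e-9, 70e-9, 5e-9), nmax=40):
    """Sum the Bose factors N_0 = 1/[exp(hbar*omega/kT) - 1] over exchange
    modes hbar*omega = D*k^2, with pinned-magnetization boundary conditions
    k_i = n_i*pi/L_i (n_i = 1, 2, ...) for the given sample dimensions."""
    Lx, Ly, Lz = dims
    total = 0.0
    for nx in range(1, nmax + 1):
        for ny in range(1, nmax + 1):
            for nz in range(1, nmax + 1):
                energy = D * ((nx * math.pi / Lx) ** 2
                              + (ny * math.pi / Ly) ** 2
                              + (nz * math.pi / Lz) ** 2)
                x = energy / (KB * T)
                if x < 40.0:  # modes far above kT are frozen out
                    total += 1.0 / math.expm1(x)
    return total
```

With the Permalloy stiffness and the 70×70×5 nm geometry above, the lowest thickness-quantized mode already costs several kT at liquid-helium temperatures, which is the freeze-out effect the calculation illustrates.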
However, shot noise (or the fluctuating electron current) can contribute to the measured differential resistance only by inducing magnetization fluctuations, which in the absence of thermal fluctuations is forbidden by the angular momentum conservation argument of spin torque theory.

We conclude that a previously unrecognized contribution to spin transfer, not described as an enhancement of thermal magnetization fluctuations, results in a linear-in-current increase of the magnon population. To interpret our observations, we note that even if thermal fluctuations are negligible at low temperature, the spin polarization of electrons scattered by the magnetic system cannot be perfectly aligned with the magnetization because of the quantum fluctuations of the latter, which drive electron spin dynamics and result in spin transfer. The proposed quantum effect must be distinct from the established spin torque effects described by Eq. (<ref>). Indeed, quantum fluctuations cannot be affected by scattering of the majority electrons, since in contrast to thermal fluctuations they cannot be suppressed [Fig. <ref>(c), top]. However, they can be enhanced by scattering of the minority electrons [Fig. <ref>(c), bottom].

There is no established theory for the effects of quantum magnetization fluctuations on spin transfer, although the latter has been analyzed in the context of the quantum theory of magnetism <cit.>. Here, we present a simple model that allows us to extend the spin-angular momentum conservation argument underlying the spin torque theory <cit.> to quantum magnetization states. We can describe the FMR mode by the dynamical states of a quantum macrospin L⃗ representing the magnetization <cit.>, whose projection L_z on the z-axis directed opposite to B⃗ characterizes the magnon population N = L - L_z. An electron with spin s⃗ = (a,b) scattered by the magnetic layer experiences the exchange interaction H_ex = J_ex s⃗·L⃗/L, where J_ex is the s-d exchange energy.
This interaction results in the precession of both L⃗ and s⃗ around the total angular momentum J⃗ = L⃗ + s⃗ conserved by the exchange Hamiltonian. This description is a natural extension of the electron spin precession around the magnetization analyzed in the spin torque theory. Following the dephasing argument originally proposed by Slonczewski <cit.>, we can assume that the precession phases are randomized due to variations among electron trajectories. Under these assumptions, one can determine the change of L_z, and thus the average number <Δ N> of magnons generated by the scattered electron. At N ≪ L, we obtain <cit.> <Δ N> = -<Δ L_z> ≈ b^2/L + b^2 N/L - a^2 N/L. This equation can be interpreted by analogy to the interaction between a two-level system and the electromagnetic field. The two-level system is the spin of the scattered electron, and the role of photons is played by magnons. The first term describes spontaneous emission of magnons, which can occur even in the absence of magnons at N=0. The second and the third terms describe stimulated emission and absorption, respectively, with probabilities proportional to the number of magnons. This interpretation closely follows the ideas of Berger <cit.>, who described spin transfer in terms of stimulated and spontaneous magnon emission, but ultimately neglected the spontaneous contribution in the analysis of degenerate long-wavelength modes. Without the spontaneous contribution, Eq. (<ref>) is equivalent to the result obtained in the spin torque theory, ∂θ/∂t|_STT = (Ig/eL) sinθ, Eq. (17) in Ref. <cit.>. Here, g is a function of order one determined by the polarization of the current I. Indeed, using Δn_e = IΔt/e to represent the number of electrons scattered by the ferromagnet and N = L(1-cosθ), we obtain ΔN/Δn_e = g sin^2θ ≈ 2gN/L, consistent with the contribution of stimulated processes in Eq.
(<ref>).

In the steady state, the magnon population is determined by the balance between spin transfer driven by the current I and the dynamical relaxation. Describing the latter by the Landau damping, or equivalently for small N by the relaxation time approximation ∂N/∂t|_D = -(N-N_0)/τ <cit.> with τ = 1/(2αω), we obtain <cit.> <N(I)> = [N_0 + (|I|/p + I)/(2I_c)]/(1 - I/I_c), where p = a^2 - b^2 describes the current polarization. The unusual non-analytical form of Eq. (<ref>) originates from the asymmetry of Eq. (<ref>) with respect to exchanging a and b, which describes the current reversal. Equation (<ref>) reduces to the STT result Eq. (<ref>) in the classical limit, at N_0 ≫ 1 [Fig. <ref>(d), top], when the stimulated contribution in Eq. (<ref>) is dominant. In the quantum limit, at N_0 ≪ 1, we obtain a piecewise-linear dependence [Fig. <ref>(d), bottom]. The data in Fig. <ref>(a) are consistent with the dominant quantum contribution once we account for the imperfect electron spin polarization, p<1 in Eq. (<ref>), resulting in spontaneous magnon generation at both positive and negative currents.

We emphasize that the contribution of quantum fluctuations is negligible for the degenerate quasi-uniform dynamical modes. However, at 3.4 K the modes with frequencies above 300 GHz are frozen out. Since the exchange interaction between the electron spin and the magnetization underlying spin transfer is local, Eq. (<ref>) with appropriate values of I_c must also be applicable to these modes. A similar argument has been put forward in spin torque theory <cit.>. A direct summation of Eq. (<ref>) over the entire magnon spectrum confirmed that the quantum contribution to the current-dependent magnon population is dominant at 3.4 K, consistent with our interpretation of Fig. <ref>(a) <cit.>.

The role of quantum fluctuations in spin transfer was further elucidated by measurements at higher temperatures, where we observe a rapid broadening of the zero-current singularity [Fig. <ref>(a)].
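The per-electron balance and the resulting steady-state population above are simple enough to evaluate directly. A minimal numerical sketch (function and parameter names are ours; only the formulas come from the text):

```python
def magnons_per_electron(N, L, a2, b2):
    """Average magnon number generated per scattered electron at N << L:
    <dN> ~ b^2/L (spontaneous) + b^2*N/L (stimulated) - a^2*N/L (absorption),
    where a2 = a^2 and b2 = b^2 are the squared electron spin amplitudes
    and L is the total spin of the ferromagnet."""
    return b2 / L + b2 * N / L - a2 * N / L

def steady_state_magnons(I, N0, Ic, p=1.0):
    """Steady-state population <N(I)> = [N0 + (|I|/p + I)/(2*Ic)]/(1 - I/Ic),
    with p = a^2 - b^2 the current spin polarization."""
    return (N0 + (abs(I) / p + I) / (2.0 * Ic)) / (1.0 - I / Ic)
```

In the classical limit N0 ≫ 1 the second function approaches N0/(1 - I/Ic); in the quantum limit N0 ≪ 1 with p = 1 it vanishes for negative currents and grows piecewise-linearly for positive ones, reproducing the two regimes discussed above.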
This broadening cannot be attributed to the increasing role of thermal magnetization fluctuations, since the piecewise-linear dependence is still apparent at larger currents even at 20 K. To analyze this effect, we fit the data with a piecewise-linear dependence convolved with a Gaussian. The extracted broadening width closely follows a linear dependence ΔI = (1.9 ± 0.1)kT/(eR_0), inset in Fig. <ref>(a). A calculation based on the summation of Eq. (<ref>) convolved with a Gaussian of width 1.9kT/(eR_0) [curves in Fig. <ref>(a)] reproduces the thermal broadening effect, but somewhat exaggerates the classical contribution at higher temperatures, likely due to the overestimated frequencies, and thus relaxation rates, of high-frequency magnons in the exchange approximation for the magnon dispersion used in our calculation. We note that this calculation used only the parameter values extracted from matching the calculated magnon populations with resistance at T=3.4 K, Fig. <ref>(b). The good agreement with the data, achieved at elevated temperatures without any fitting parameters, supports the validity of our model.

The observed thermal broadening is consistent with the proposed quantum mechanism. The bias current shifts the electron distribution in the magnetic nanopillar, driving the spin transfer [Fig. <ref>(b), top]. At finite temperature, the electron distribution becomes thermally broadened, resulting in scattering of thermally excited electrons and holes [Fig. <ref>(b), bottom], equivalent to a distribution of width ΔV = kT/e of the bias voltage applied to F1, facilitating spin transfer even in the absence of directional current flow. The relation ΔI = (1.9 ± 0.1)kT/(eR_0) obtained by fitting the data [inset in Fig. <ref>(a)] is consistent with approximately equal contributions of layers F1 and F2 to the total resistance R_0, such that ΔV ≈ IR_0/2.
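The fitting function used here, a piecewise-linear dependence convolved with a Gaussian, has a closed form in terms of the error function; a sketch (the helper names and the slope/width parameters are ours):

```python
import math

def smoothed_ramp(x, sigma):
    """max(x, 0) convolved with a Gaussian of standard deviation sigma."""
    if sigma == 0:
        return max(x, 0.0)
    phi = math.exp(-0.5 * (x / sigma) ** 2) / math.sqrt(2.0 * math.pi)
    Phi = 0.5 * (1.0 + math.erf(x / (sigma * math.sqrt(2.0))))
    return sigma * phi + x * Phi

def broadened_resistance(I, R0, s_pos, s_neg, dI):
    """Piecewise-linear R(I) with slope s_pos for I > 0 and magnitude s_neg
    for I < 0, thermally broadened by convolution with a Gaussian of width
    dI (e.g. dI ~ 1.9*kT/(e*R0))."""
    return R0 + s_pos * smoothed_ramp(I, dI) + s_neg * smoothed_ramp(-I, dI)
```

A convenient property of this form is that the slope at I = 0 after convolution is the average of the two asymptotic slopes, independent of the broadening width.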
Thermal broadening washes out the singular piecewise-linear dependence, but the contribution of quantum fluctuations to spin transfer remains significant even at elevated temperatures.

Since the slopes of the piecewise-linear dependence are different for positive and negative currents, the value of dN/dI at I=0 remains finite even in the presence of thermal broadening. By convolving the dependence N(I) [Eq. (<ref>)] with a Gaussian and differentiating with respect to I, we obtain dN/dI = 1/(2I_c) at I=0, independent of temperature. For STT facilitated by thermal fluctuations, the slope is dN/dI = N_0/I_c, see Eq. (<ref>). Since the quantum contribution to spin transfer is independent of T at I=0, the corresponding value of dN/dI for the total magnon population is independent of T, horizontal line in Fig. <ref>(a). For the STT contribution facilitated by thermal fluctuations, the calculated value increases almost linearly with temperature (blue curve), indicating that this contribution is dominated by the degenerately populated low-frequency modes described by the Rayleigh-Jeans law. At B=1 T, the calculated crossover from the predominantly quantum to the classical (thermal) spin transfer regime occurs at the temperature T_x=38 K. The slope dR/dI at I=0, determined from our measurements, increases linearly with temperature [Fig. <ref>(b)], in agreement with our model. The T=0 intercept represents the quantum contribution, and the slope reflects the classical one. The value T_x=160 K extrapolated from these data is larger than the calculated value, likely because the quantum contribution is underestimated in the model based on the parabolic magnon dispersion.
Based on our data, we estimate the characteristic frequency of magnons involved in spin transfer, f_0 = k_B T_x/h ≈ 3.5 THz.

Quantum fluctuations can significantly contribute to current-induced phenomena whenever highly nonuniform dynamical states are involved, for example in reversal via fast domain wall motion in technologically important nanomagnets with perpendicular magnetic anisotropy <cit.>. More generally, the demonstrated magnon generation mechanism can decrease the effective magnetization, lowering the reversal barriers. The relative contribution of quantum fluctuations to current-induced phenomena in antiferromagnets <cit.> is likely larger than in ferromagnets, since the characteristic magnon frequencies are almost two orders of magnitude higher, resulting in significantly smaller thermal magnon populations. Quantum fluctuations may contribute to other phenomena involving the interaction between magnetization and conduction electrons, including spin-orbit effects <cit.>, optically driven effects <cit.>, and spin-caloritronic effects <cit.>. They may also provide a significant contribution to the spin-polarizing properties of ferromagnets. Indeed, according to Eq. (<ref>), the probability for a spin-down conduction electron to spin-flip while generating an FMR magnon at T=0 is 1/L, where L is the total spin of the ferromagnet. Since the total number of magnetic modes is approximately L, and each mode contributes to such spin flipping, the total probability of a spin flip is of order one, providing a ubiquitous inelastic contribution to the spin polarization of electrons flowing through ferromagnets. This may explain why even advanced spin-dependent band structure calculations generally underestimate electron spin-flip rates at magnetic interfaces <cit.>.
By tailoring the magnon spectrum via material and geometry engineering, it may become possible to control the effects of quantum magnetization fluctuations on magnetoelectronic phenomena in nanomagnetic systems.

We acknowledge support from NSF Grant Nos. ECCS-1509794 and DMR-1504449.
http://arxiv.org/abs/1703.09335v2
{ "authors": [ "Andrei Zholud", "Ryan Freeman", "Rongxing Cao", "Ajit Srivastava", "Sergei Urazhdin" ], "categories": [ "cond-mat.mtrl-sci" ], "primary_category": "cond-mat.mtrl-sci", "published": "20170327230529", "title": "Spin transfer due to quantum fluctuations of magnetization" }
http://arxiv.org/abs/1703.08784v1
{ "authors": [ "Saeedeh Moloudi", "Michael Lentmaier", "Alexandre Graell i Amat" ], "categories": [ "cs.IT", "math.IT" ], "primary_category": "cs.IT", "published": "20170326083034", "title": "A Unified Ensemble of Concatenated Convolutional Codes" }
A New Paradigm for Robotic Dust Collection: Theorems, User Studies, and a Field Study

Rachel Holladay, Robotics Institute, Carnegie Mellon University, rmh@andrew.cmu.edu
Siddhartha S. Srinivasa, Robotics Institute, Carnegie Mellon University, siddh@cs.cmu.edu

December 30, 2023
=================================================================================================================================================================================

We pioneer a new future in robotic dust collection by introducing passive dust-collecting robots that, unlike their predecessors, do not require locomotion to collect dust. While previous research has focused exclusively on active dust-collecting robots, we show that these robots fail with respect to practical and theoretical aspects, as well as human factors. By contrast, passive robots, through their unconstrained versatility, shine brilliantly in all three metrics. We present a mathematical formalism of both paradigms followed by a user study and a field study.

§ INTRODUCTION

There has been renewed recent interest in the design of efficient and robust dust-collecting robots <cit.>. The oppression of constant dust raining over our heads calls out for immediate attention. Furthermore, the increased cost of legal human labor, and increased penalties for employing illegal immigrants, has made dust collection all the more critical to automate <cit.>. However, all of the robotic solutions have focused exclusively on what we define (see def:active_robot for a precise mathematical definition) as active dust-collecting robots. Informally, these are traditional robotic solutions, where the robot locomotes to collect dust.
It is understandable why this seems like a natural choice, as humans equipped with vacuum cleaners are, after all, also active dust-collectors.

Unfortunately, active dust collection presents several challenges: (1) Practical: they require locomotion, which requires motors and wheels, which are expensive and subject to much wear and tear; (2) Theoretical: most active dust-collectors are wheeled robots, which are subject to nonholonomic constraints on motion, demanding complex nonlinear control even for seemingly simple motions like moving sideways <cit.>; (3) Human factors: several of the users in our user study expressed disgust, skepticism, and sometimes terror about the prospect of sentient robots wandering around their homes, for example: "I don't want a f*cking robot running around all day in my house."

In this paper, we propose a completely new paradigm for dust collection: passive dust-collecting robots (see def:passive_robot for a precise mathematical definition). Informally, these are revolutionary new solutions that are able to collect dust without any locomotion! As a consequence, passive dust-collecting robots address all of the above challenges: (1) Practical: because they have no moving parts like wheels or motors, they are both inexpensive and incur no wear and tear; (2) Theoretical: because passive dust-collectors can be trivially parallel transported to the identity element of the 𝕊𝔼(2) Lie group, they require no explicit motion planning (in situations where parallel transport is inefficient, the robot can be physically transported to the identity element); (3) Human factors: as passive dust-collecting robots are identical to other passive elements in our homes and workplaces (like walls, tables, desks, lamps, carpets), their adoption into our lifespace is seamless.

In addition, we present and analyze a mathematical model of dust collection. Using our model, we can, for the first time, answer which robot type is more efficient.
This is a critical question to consider in order to inform future cleaning choices. Our analysis reveals that, for a certain choice of constants, a passive dust cleaning robot is more efficient than its active counterpart. Through a user study, we contrast this with users' perception of robot efficiency and what factors influence their choices. To explore what choices are actually made, we leveraged a field study of Carnegie Mellon's Robotics Institute to determine the prevalence of each robot type. This study reveals that passive dust collecting allows for a wider range of morphologies, suggesting that passive dust collecting is a more inclusive characterization. Furthermore, we see that rather than two paradigms there is a continuum of dust collecting robots.

Our work makes the following contributions: Mathematical Formulation. We present a model of active and passive dust collecting robots followed by an efficiency tradeoff analysis. Preference User Study. We surveyed college students to determine what kind of robot they preferred and which they perceived to be more efficient. Field Study. Using data on the robots of the Robotics Institute, we investigate the more popular robotic paradigms. We believe our work takes a first step in launching a new discussion concerning the nature of robotic dust collection, paving the way for future cleanliness.

§ A MATHEMATICAL MODEL FOR DUST COLLECTION

In order to compare and analyze active and passive dust collecting robots, we present a mathematical model of their dust collection capabilities. With this model, we dare to ask: which robot is more efficient?

§.§ Dust Model

We model dust as a pressureless perfect fluid, which has a positive mass density but vanishing pressure.
Under this assumption, we can model the interaction of dust particles by solving the Einstein field equation, whose stress-energy tensor can be written in this simple and elegant form: T^μν = ρ U^μ U^ν, where the world lines of the dust particles are the integral curves of the four-velocity U^μ, and the matter density is given by the scalar function ρ. Remarkably, unless subjected to cosmological radiation of a nearby black hole, or a near-relativistic photonic Mach cone, this equation can be solved analytically, resulting in dust falling at a constant rate of α. We model our robots as covering a 1 unit^2 area of space-time. We present our models for passive and active robots before performing comparative analysis.

§.§ Passive Robot Model

We provide the following formalism: We define a passive dust collecting robot as a robot that does not move, collecting the dust that falls upon it. The dust-collecting capability of a passive dust-collecting robot is given by D_passive = α.

§.§ Active Robot Model

We provide the following formalism: We define an active dust collecting robot as a robot that moves around the space, actively collecting dust. We model our active robot as driving at speed β. We assume that our robot can only actively collect dust of height h. This assumption is drawn from iRobot's Roomba, which reportedly can get stuck on cords and cables. As a simplifying assumption, we will assume that the robot always collects dust of height h, implying that there is always at least dust of height h prior to the robot's operation. The dust-collecting capability of an active dust-collecting robot is given by D_active = hβ^3 + α/β. It is obvious that the robot actively collects hβ^3 dust. However, this is not the entire story. As the robot drives, actively collecting dust, it also passively collects the dust that happens to fall on it. To model this, we consider the robot passing over some fixed line. Some portion of the robot is occluding this line for 1/β seconds.
Thus the robot passively collects α/β dust. Combining the active and passive components, our active robot collects: D_active = hβ^3 + α/β.

§.§ Model Comparison

We next compare the tradeoffs between passive and active dust cleaning robots. We pose this as the question: when are passive dust cleaning robots more efficient than their active counterparts? Hence, when is D_passive > D_active? We are now ready to prove our main theorem. The dust-collecting capability of a passive robot exceeds the dust-collecting capability of an active robot when α > hβ^4/(β-1). Using eqn:passive and eqn:active we get: D_passive > D_active, i.e., α > hβ^3 + α/β. With some simple arithmetic this becomes: α > hβ^4/(β-1). fig:model_comparison shows this function over a variety of βs and a few choices of h. The y-axis can be viewed as a measure of efficiency. A passive robot's efficiency corresponds to a straight line across the y-axis at its α value. As the h value increases, the active robot's efficiency increases, which follows from the fact that as it drives, it can collect more dust. While we see an initial drop in efficiency due to a β increase, owing to the fact that the active robot collects less dust passively, this effect is then dwarfed by a faster moving robot that can cover more ground.

§ USER STUDY

Having developed a model of passive and active dust collecting robots, we used a user study to evaluate people's opinions of each type of robot's efficiency. This is critical in developing effective robots, as we need to explore the possible discrepancies between perceived versus actual robot capability <cit.>.

§.§ Experimental Setup

We created an online form to evaluate users' opinions of passive and active dust collecting robots. Having provided users with def:passive_robot and def:active_robot, we then asked them the following questions: Which type of robot do you think collects more dust: an active dust collecting robot or a passive dust collecting robot? Why?
Which robot would you prefer to have? For q:prefer the options were: Active dust collecting robot, Passive dust collecting robot, Whichever robot is the most efficient at collecting dust. Our goal in asking this was to determine what people value more, the illusion of efficiency or actual efficiency. Participants: We recruited 23 Carnegie Mellon students (14 males, 9 females, aged 21-23) through online sources.

§.§ Analysis

The results of our user study can be seen in fig:user_results. While people believe that the active robot collects more dust, people would prefer to have the most efficient robot, regardless of its capabilities. What is perhaps more telling is the variety of user responses to why they believed each robot would collect more dust. Those who supported passive dust collecting robots listed a variety of reasons, with many people concerned that active dusting robots disperse and upset more dust than they collect. One user rationalized his choice by the nature of dust, saying "I've observed that the stuff that collects the most dust in my place are the items that are static, therefore I would assume that the static robot might collect more dust." Still other users took a more global view, with one user, as mentioned above, claiming that they "don't want a f*cking robot running around all day" and another, accepting the harsh realities of time, remarked "All robots ultimately become a passive dust-collecting robot." For every supporter of passive robots, there were still more who argued for active robots. Almost every person, in explaining their choice, argued that active robots, due to their mobility, would be able to cover a larger space. This highlights the dichotomy between efficiency and coverage. While our passive dust collecting robot can provide superior efficiency, its lack of locomotion greatly reduces its potential coverage.
By contrast, the active robot has the ability to move around, potentially covering all of the room, given some amount of time.

§ FIELD STUDY

Given the results of our user study in sec:user_study, we next probe into how these preferences are reflected in reality. Carnegie Mellon's Robotics Institute is home to a large variety of robots, and using the 2010 robot census we analyzed what kinds of dust collecting robots we actually see <cit.>. Of the 261 robots listed on the census[The original census data was provided directly from its author.] with complete information, we see that none of them are designed to collect dust actively. However, we can assume many of them collect dust passively. Twenty were listed as having no mobility, making them official passive dust collecting robots. Even the eighty-six robots that have wheeled mobility are unlikely to be driving most of the time and therefore spend much of their life as passive dust collecting robots. In fact, broadening out, despite the variety in morphologies and mobilities, from wheeled to winged, from manipulation to entertainment to competition, most, if not all, of the robots at the Robotics Institute spend large quantities of their tenure as passive dust collecting robots. While active dust collecting robots are constrained by their function to have certain properties, passive dust collecting robotics is an all-inclusive, all-accepting genre that allows for nearly any characterization. We see a huge variety of robots in fig:robot_montage. They can be old or new, outrageously expensive or dirt cheap, beautifully crafted or hastily thrown together. Yet, if they do nothing, they all have the ability inside of them to be passive dust collecting robots. Given the guidelines provided by our model in sec:dust_model, these robots have the capacity to be more efficient than their try-hard active collection counterparts.
Based on the results of our study (sec:user_study), this makes them more desirable. From these insights, it is now clear why the CMU Robotics Institute does not have any active dust collecting robots on record. They have been surpassed by their more efficient, more inclusive, more desirable counterparts: passive dust collecting robots.

§ DISCUSSION

While our analysis presented in sec:dust_model outlines two classes of robots, our field study from sec:field_study reveals a continuum of dust collecting robots. Robots that do not actively collect dust but are not entirely stationary, such as robots that are simply underused, represent the middle ground of dust collection. We can even think of air filters as dust collecting robots that actively collect dust but do not do so by moving themselves. This adds a new dimension to what it means for a robot to be active. This work also aims to highlight the underappreciated advantages of passive dust collecting robots. Passive robots, unconstrained by a need for explicit dust collecting capabilities, afford a wide range of morphologies. This allows for incredible flexibility in designing the possible human-robot interaction schemes, which is critical to a cleaning robot's acceptance <cit.>. While we focused on dust collecting robots, our model generalizes to other situations, such as moving in the rain. Specifically, our model can be used to model whether you would get more wet by standing still or running through the rain. We hope that this work will raise awareness for passive dust collecting robots and raise further discussion on the nature of dust collection.

§ ACKNOWLEDGMENTS

This material is based upon work supported by the infinite discretionary money-bag. We do not thank the members of the Personal Robotics Lab for helpful discussion and advice, as this project was kept entirely super secret from them.
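As a final numerical aside, the efficiency criterion of our main theorem (sec:dust_model) can be checked directly; a sketch with arbitrary illustrative parameter values:

```python
def passive_rate(alpha):
    """Dust collected per unit time by a passive robot: D_passive = alpha."""
    return alpha

def active_rate(alpha, beta, h):
    """Dust collected per unit time by an active robot:
    D_active = h*beta^3 (active sweeping) + alpha/beta (passive fall-on)."""
    return h * beta ** 3 + alpha / beta

def passive_wins(alpha, beta, h):
    """True iff D_passive > D_active, i.e. alpha > h*beta^4/(beta - 1)."""
    return passive_rate(alpha) > active_rate(alpha, beta, h)
```

For beta = 2 and h = 1 the theorem's threshold is h*beta^4/(beta - 1) = 16, so any dustfall rate alpha above 16 favors the passive robot.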
[Cha et al., 2015] Elizabeth Cha, Anca D. Dragan, and Siddhartha S. Srinivasa. Perceived robot capability. In RO-MAN, pages 541–548. IEEE, 2015.
[Choset, 2001] Howie Choset. Coverage for robotics - a survey of recent results. Annals of Mathematics and Artificial Intelligence, 31(1):113–126, 2001.
[Doh et al., 2007] Nakju Lett Doh, Chanki Kim, and Wan Kyun Chung. A practical path planner for the robotic vacuum cleaner in rectilinear environments. Transactions on Consumer Electronics, 53(2), 2007.
[Fiorini and Prassler, 2000] Paolo Fiorini and Erwin Prassler. Cleaning and household robots: A technology survey. Autonomous Robots, 9(3):227–235, 2000.
[Forlizzi and DiSalvo, 2006] Jodi Forlizzi and Carl DiSalvo. Service robots in the domestic environment: a study of the Roomba vacuum in the home. In SIGCHI/SIGART, pages 258–265. ACM, 2006.
[Hendriks et al., 2011] Bram Hendriks, Bernt Meerbeek, Stella Boess, Steffen Pauws, and Marieke Sonneveld. Robot vacuum cleaner personality and behavior. IJSR, 3(2):187–195, 2011.
[Jones, 2006] Joseph L. Jones. Robots at the tipping point: the road to iRobot Roomba. IEEE Robotics & Automation Magazine, 13(1):76–78, 2006.
[Prassler et al., 2000] Erwin Prassler, Arno Ritter, Christoph Schaeffer, and Paolo Fiorini. A short history of cleaning robots. Autonomous Robots, 9(3):211–226, 2000.
[Schackner, 2010] Bill Schackner. CMU student wants to know how many are on campus; so far she's up to 547. October 2010. [Online].
[Tribelhorn and Dodds, 2007] Ben Tribelhorn and Zachary Dodds. Evaluating the Roomba: A low-cost, ubiquitous platform for robotics research and education. In ICRA, pages 1393–1399. IEEE, 2007.
[Ulrich et al., 1997] Iwan Ulrich, Francesco Mondada, and J.-D. Nicoud. Autonomous vacuum cleaner. Robotics and Autonomous Systems, 19(3-4):233–245, 1997.
Department of Pure and Applied Sciences, University of Tokyo, Komaba, Meguro-ku, Tokyo 153-8902, Japan
Department of Pure and Applied Sciences, University of Tokyo, Komaba, Meguro-ku, Tokyo 153-8902, Japan
Center for Materials Research by Information Integration, National Institute for Materials Science, 1-2-1 Sengen, Tsukuba, Ibaraki 305-0047, Japan
02.50.Tt, 07.05.Kf, 68.37.Ef

A sparse modeling approach is proposed for analyzing scanning tunneling microscopy topography data, which contains numerous peaks corresponding to surface atoms. The method, based on the relevance vector machine with L_1 regularization and k-means clustering, enables separation of the peaks and atomic center positioning with accuracy beyond the resolution of the measurement grid. The validity and efficiency of the proposed method are demonstrated using synthetic data in comparison to the conventional least-squares method. An application of the proposed method to experimental data of a metallic oxide thin film clearly indicates the existence of defects and corresponding local lattice deformations.

Real-space analysis of scanning tunneling microscopy topography datasets using sparse modeling approach
Koji Hukushima
December 30, 2023
=========================================================================================================

§ INTRODUCTION

Scanning tunneling microscopy (STM) is an experimental technique that enables observation of a material surface at atomic-scale resolution <cit.>. An electron-density topography map is obtained with STM by measuring the tunneling current between the surface to be observed and an atomic-scale conducting tip with an applied bias voltage. Since the invention of STM, various types of scanning probe microscopies, such as the atomic force microscope, have been developed and used for measuring surface topography and physical properties of material surfaces.
Several interesting phenomena on surfaces have been shown to be caused by local strain induced by impurities and/or defects. For example, the critical temperature of high-T_c cuprate superconductors significantly depends on local strain <cit.>. Fourier transforms are often used for extracting certain properties of surface structures, such as the set of lattice vectors of a surface reconstruction structure <cit.>. For a clean crystalline surface structure, Fourier transforms can be used to accurately estimate the atomic positions and the associated local strain relative to the perfect lattice structure. However, thin films of metallic oxides generally do not have clean surface structures, and it is difficult to extract local structural information from STM topography data. In fact, the desired local information for thin-film structures can be obscured behind noise in the Fourier transform. Hence, a new methodology for performing real-space data analysis beyond the Fourier transform is highly desired.

In this study, we propose a data-analysis methodology for extracting the atomic arrangement from noisy STM topography. Our method is based on the fact that the STM topography data for a given surface can be represented by a superposition of suitable basis functions with noise, each of which is spatially localized with a center corresponding to the location of an atom. Each basis function is characterized by a set of parameters, which includes its center position, amplitude, and shape. Our strategy decomposes a given STM data set into the basis functions by determining the set of parameters in the data model. This strategy could be accomplished using the least-squares method. In fact, when the number of peaks N_peak and a shape parameter of the basis function are known in advance, this simple strategy is effective.
However, because the typical number of atoms assumed here could be more than ten thousand and the associated number of data points could be more than one million, it is difficult to know N_peak beforehand. Also, the shape parameters of the basis function are generally unknown a priori. To establish a methodology for analyzing STM topography with an unspecified number of atoms, we use a relevance vector machine (RVM) <cit.> as the data model and a maximum a posteriori (MAP) estimation, based on the framework of Bayesian inference, to determine the model parameters. As the prior distribution in the MAP estimation, we introduce a Laplace prior, which is equivalent to the least absolute shrinkage and selection operator (LASSO) regression <cit.>. Depending on the measurement resolution, the number of data points is typically much larger than the number of atoms, so the variables that we extract from the data can be “sparse.” Using LASSO permits model inference with emphasis on the sparsity of the data. Recently, sparse modeling has been applied to a wide range of problems dealing with high-dimensional data. Our proposed method is regarded as a sparse modeling approach for STM topography data analysis.

In this paper, we present the procedures used for extracting the atomic positions and peak amplitudes included in STM topography images; we discuss not only the method for determining the model parameters but also the method for validating the models. First, we apply our method to synthetic data, and we examine the accuracy of our estimation. Then, we report the results of applying our model to actual STM topography data obtained from a metallic oxide thin film.

§ MODEL AND METHOD

§.§ Data model

Typical topography data obtained by STM measurements of SrVO_3 is shown in Fig. <ref>. The STM topography picture typically shown in the literature is the top view shown on the left of Fig. <ref>.
The surface of SrVO_3 is relatively clean and flat in comparison to the surfaces of other metallic oxide compounds. Nevertheless, it can be noticed from the bird's-eye view (Fig. <ref> (b)) that the STM image has a rugged structure. This structure may be due to atomic-scale fluctuations or the STM tip condition. Each peak in the figure is considered to correspond to an atom, and dark spots often indicate the existence of atomic defects. Note that STM generally probes the electronic state underlying the surface, not the atom itself. Our aim in this work is to decompose such STM topography data into the peaks, assumed to result from each atom.

This process is formally similar to the peak decomposition of spectral data measured in various natural science experiments. Recently, a statistical analysis technique based on Bayesian inference was used to successfully extract a finite number of peaks from a one-dimensional data spectrum with noise <cit.>. In this sense, our problem might be considered as peak decomposition in a two-dimensional (2D) data spectrum. The number of peaks in this study, however, is much larger than that attained in the previous study. Thus, a different numerical calculation strategy is required for treating the large set of data.

In this study, our framework is based on the RVM. The pixel data is denoted as y=(y_1,⋯,y_D), with D being the total number of pixels, and the vector x represents the weights of the STM source signal to be estimated. The weight x_i is defined on an artificial array point, which is generally different from the original pixel array for y. The dimension of the vector x, which is the number of array points introduced, is denoted by N. The number N can be chosen independently of D, depending on the resolution of the estimation.
Assuming an explicit functional form of the measurement matrix, which we discuss later, our task is reduced to inferring the relevant components of the vector x for a given vector y. Using a D × N measurement matrix Â, our data model is expressed as y = Âx+ϵ, where ϵ is a noise vector with dimension D associated with the observation. For simplicity, the elements of the vector ϵ are assumed to be iid Gaussian random variables with zero mean and variance σ_ϵ^2: P(ϵ) = ∏_d=1^D 1/√(2 πσ_ϵ^2)exp( -ϵ_d^2/2σ_ϵ^2). In other words, the noise property is independent of the pixel position, and coherent noise such as that induced by the STM tip or by the surface condition is not considered. In our method, the vector x is estimated by a posterior distribution P(x|y) for given pixel data y. With Bayes' theorem, the posterior distribution is expressed as P(x|y) = P(y|x) P(x)/∑_x P(y|x) P(x), where P(y|x) and P(x) are the likelihood function and a prior distribution, respectively. We employ the MAP estimation, in which the value of x is chosen by maximizing the posterior distribution. The likelihood function is given by the noise distribution of Eq. (<ref>) as P(y|x) = P(ϵ) = ∏_d=1^D 1/√(2 πσ_ϵ^2)exp(-(y_d-(Âx)_d)^2/2σ_ϵ^2) = (2πσ_ϵ^2)^-D/2exp(-‖y-Âx‖_2^2/2σ_ϵ^2), where ‖⋯‖_2 denotes the L_2 norm. The prior distribution in Eq. (<ref>) used here is a Laplace prior over x: P(x) ∝exp(-λ|x|_1 ), where λ is a hyperparameter and |⋯|_1 is the L_1 norm. This prior distribution reduces the number of non-zero elements of the vector x. We assume sparsity of the vector x based on the reasonable assumption that the number of signal sources from existing atoms is significantly smaller than the number of pixel arrays. The present approach is called sparse modeling. It is emphasized that our framework does not specify the number of peaks N_peak at the present stage.
All the elements of x could be peak centers in principle, and the sparse modeling is used to find a sparse solution for x with a small number of non-zero elements.

§.§ Measurement matrix and MAP estimate

Our data model of Eq. (<ref>), represented by a linear relation with additive noise, means that the observation vector y is a superposition of the basis functions and that the relevance vector x is a weight factor. In this work, we assume the kernel function is an isotropic 2D Gaussian function, in which the element of the measurement matrix in Eq. (<ref>) is given by A_di(r_di;σ) = 1/√(2πσ)exp(-r_di^2/2σ^2), where A_di is an element of the measurement matrix Â, σ represents the width, and r_di is the spatial distance between the position of measurement y_d and that of signal source x_i. Note that there is no theoretical or physical basis for choosing the 2D Gaussian function. In Ref. Gai, a 2D isotropic Gaussian function with the covariance matrix Σ=σI, where I is an identity matrix, is used to fit peaks in STM topography data. In the field of optics, the width of the point spread function, which corresponds to our basis function, can be measured a priori by an independent experiment. In that case, an algorithm based on the maximum-likelihood method works well <cit.>. However, it is difficult to know the value of σ from a calibration experiment in STM because the target surfaces as well as the tip states are sensitive to experimental conditions. Our problem is more difficult than a peak decomposition problem with known σ in the sense that simultaneous inference of the peaks and the value of σ must be carried out from the input data y. The MAP estimate with respect to x is equivalent to the minimization of the cost function E(x;y,λ, μ), E(x; y,λ, μ) = 1/2σ_ϵ^2‖y-Âx‖_2^2 + λ|x|_1, where μ denotes a set of unknown parameters in the measurement matrix Â.
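As a concrete illustration, the Gaussian measurement matrix and the L_1-regularized cost function above can be sketched in a few lines of numpy. This is our own minimal sketch (all function and variable names are ours, not from the paper); the noise variance is absorbed into the regularization strength `lam`.

```python
import numpy as np

def measurement_matrix(pixel_pos, source_pos, sigma):
    """A[d, i] = exp(-r_di^2 / (2 sigma^2)) / sqrt(2 pi sigma), with r_di the distance
    between pixel position d and source grid position i (the paper's Gaussian kernel)."""
    diff = pixel_pos[:, None, :] - source_pos[None, :, :]   # (D, N, 2) pairwise offsets
    r2 = np.sum(diff ** 2, axis=-1)                         # squared distances r_di^2
    return np.exp(-r2 / (2.0 * sigma ** 2)) / np.sqrt(2.0 * np.pi * sigma)

def cost(x, y, A, lam):
    """E(x) = ||y - A x||_2^2 / 2 + lam * |x|_1, noise variance absorbed into lam."""
    r = y - A @ x
    return 0.5 * r @ r + lam * np.sum(np.abs(x))
```

For a D-pixel image and an N-point estimation grid, `pixel_pos` is a (D, 2) array and `source_pos` an (N, 2) array, so A has shape (D, N).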
The inference scheme with this prior distribution is known as the least absolute shrinkage and selection operator (LASSO) <cit.>, and the hyperparameter λ determines the strength of the sparsity. Our inference scheme is the vector machine with L_1 regularization, which is equivalent to the so-called L1VM <cit.>. In our case, the parameter μ includes the width σ in Eq. (<ref>). Without loss of generality, the unit of the cost function is set to σ_ϵ^-2, and thus the cost function is represented as a function of x, λ, and μ. The resulting problem is an optimization problem. In this work, we use the fast iterative shrinkage-thresholding algorithm (FISTA) <cit.>, which is widely used in L_1 optimization problems, to minimize the cost function. For λ = 0, minimization of the cost function reduces to the least-squares method, and for a sufficiently large value of λ, the trivial solution x = 0 is obtained. Therefore, an appropriate value of λ is expected to exist between these two extremes, and λ can be determined as a consequence of the competition between the data fit and the sparsity of x. Unfortunately, an appropriate value of λ is not known a priori. It would be desirable to choose the value of λ so as to reduce the prediction error; however, the prediction error is difficult to estimate. Instead, a promising method for determining the hyperparameter λ, as well as the unknown parameters in the likelihood function, is cross validation (CV).

§.§ Cross validation and hyperparameter selection

In K-fold CV, the data set y is divided into K subsets, denoted by {y^(k)} = {y_Λ^(k)_1, …, y_Λ^(k)_D/K} with k = 1, …, K. Here, Λ^(k) is an index set of the elements contained in the k-th subset. The subsets are chosen randomly from the original y, and each element of y appears once in the subsets. Using the data set y^(k) = y∖y^(k) as a training set, we obtain the optimal solution x^(k) that minimizes the cost function E(x; y^(k), λ, μ).
Then, for each test set y^(k), we calculate the CV error as L^(k)(λ,μ) = 1/2‖y^(k)-Â^(k)x^(k)‖_2^2, where the measurement matrix Â^(k) for the partial data set y^(k) is given by Â^(k) = (A_Λ^(k)_1, …, A_Λ^(k)_D/K)^T with A_Λ^(k)_i=(A_Λ^(k)_i 1, …, A_Λ^(k)_i N). Averaging over the possible test data sets, the averaged CV error is defined by L^K(λ,μ) = 1/K∑_k=1^KL^(k)(λ,μ). Regarding the CV error as an estimate of the prediction error, the hyperparameter and the unknown parameters are determined by minimizing the averaged CV error. The CV error L^K is known to approach the true prediction error in the large-K limit, so ideally, we should choose a sufficiently large K. In particular, the case K=D corresponds to so-called leave-one-out cross validation (LOOCV), which requires D minimization calculations for a given set of λ and μ. This CV becomes time-consuming as the data size D increases. Recently, Obuchi and Kabashima <cit.> proposed a simplified method for performing LOOCV. Once the minimization of the cost function is computed for the total data set, the LOOCV error L^LOO is estimated by the approximate formula L^LOO(λ, μ) = (N/N_0(ϵ^th))^2 ∑_d=1^D (y_d - ∑_i=1^N A_dix_i )^2, where N_0(ϵ^th) is the number of elements of x below the threshold ϵ^th. The value of ϵ^th may depend on the solver used for minimizing the cost function Eq. (<ref>). Using FISTA, ϵ^th is unambiguously obtained as ϵ_th = λ/L, where L is a Lipschitz constant of the cost function (see Appendix <ref>). We performed the K-fold CV procedure for typical STM data with varying K and confirmed that L^K(λ, μ) is almost independent of K for K ≥ 10. Thus, in the following sections, we present the results of both 10-fold CV and the approximated LOOCV for comparison. In our data model, two parameters are to be determined by CV: the LASSO tuning parameter λ and the width σ of the Gaussian function, μ={σ}.
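The approximated LOOCV error is simple to evaluate once the full-data solution is available. A minimal numpy sketch (our own naming; `eps_th` plays the role of the threshold ϵ^th = λ/L from the text):

```python
import numpy as np

def loocv_error_approx(y, A, x, eps_th):
    """Approximate LOOCV error: (N / N_0)^2 * sum_d (y_d - (A x)_d)^2,
    where N_0 counts the components of x whose magnitude is below eps_th."""
    N = x.size
    n0 = np.count_nonzero(np.abs(x) < eps_th)
    r = y - A @ x
    return (N / n0) ** 2 * (r @ r)
```

The appeal of this estimator is that it requires only a single optimization on the full data set, instead of one optimization per fold.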
We first determine λ for a fixed value of σ according to the one-standard-error rule <cit.> often used in LASSO analysis, that is, λ^*(σ) = max_λ{λ| ‖L^K(λ) - L^K(λ̂)‖_2 < SE(L^(k)(λ̂)) }, where λ̂ is given by λ̂ = argmin_λL^K(λ) and SE(⋯) is the standard error of the K-fold CV error. After choosing λ^* as a function of σ, we choose a suitable σ as the minimizer of the CV error, that is, σ^*=argmin_σL^K (λ^*(σ), σ).

§.§ Estimation of the peak position

Our goal is to determine the positions of the atoms with a reasonable resolution in order to quantify any local distortion of the positions. The non-zero elements of the estimated vector x lead to the central peak positions. The resolution of each position is, however, limited by the grid size of our data model. Some non-zero elements of the optimized value x_i are localized and separated from each other. Therefore, we can extract the centers of the peaks from x with higher resolution than the L1VM grid size using the k-means clustering method. We suppose that the number of peaks N_peak in the k-means clustering is countable for the estimated vector x. This assumption is based on the fact that the non-zero elements of x are highly localized in the L1VM grid space. The center of the k-th peak r_k, with k=1, …, N_peak, is initially chosen as a certain pixel i at which the element x_i takes a maximum value within the radius R around pixel i. The value of R is appropriately set as a mean distance of the localized elements. Then, an attributed variable z_i is allocated for each pixel i as z_i = argmin_k d(i, r_k), where i is the position vector of pixel i on the L1VM grid, i=(i_x, i_y), and d(i, r_k) denotes the Euclidean distance between pixel i and the center r_k. Using the attributed variables, the center is defined by r_k = ∑_i δ_k, z_iθ(x_i) x_i i/∑_i δ_k,z_iθ(x_i) x_i, where θ(x) is a Heaviside step function, meaning that an element with a negative value is not considered in this analysis.
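The assignment and weighted-centroid updates above can be sketched as follows (a simplified implementation with our own naming; the initial centers are assumed to have been chosen beforehand as described in the text):

```python
import numpy as np

def weighted_cluster_centers(grid_pos, x, centers, n_iter=20):
    """Iterate the assignment z_i = argmin_k d(i, r_k) and the update
    r_k = sum_i delta(k, z_i) theta(x_i) x_i i / sum_i delta(k, z_i) theta(x_i) x_i.
    grid_pos: (N, 2) grid positions; x: (N,) amplitudes; centers: (K, 2) initial centers."""
    w = np.where(x > 0, x, 0.0)                 # theta(x_i) x_i: negative weights dropped
    centers = np.asarray(centers, dtype=float).copy()
    for _ in range(n_iter):
        # assignment step: nearest current center for every grid point
        d2 = np.sum((grid_pos[:, None, :] - centers[None, :, :]) ** 2, axis=-1)
        z = np.argmin(d2, axis=1)
        # update step: amplitude-weighted centroid of each cluster
        for k in range(len(centers)):
            mask = (z == k) & (w > 0)
            if mask.any():
                centers[k] = np.average(grid_pos[mask], axis=0, weights=w[mask])
    return centers
```

Because the weights enter the centroid, the returned centers are not restricted to the grid points, which is what yields sub-grid resolution.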
Here, r_k is a weighted average of the pixel positions when the amplitudes x are regarded as weights. Solving Eqs. (<ref>) and (<ref>) iteratively, we obtain the centers of the clusters r^*_k.

§ NUMERICAL RESULTS

§.§ Synthetic data and typical examples of estimated data

First, we examine the validity and reliability of the proposed method using a synthetic data set. The synthetic data are generated by the following procedure. For given primitive basis vectors a_1 and a_2 of the 2D lattice, the lattice vector r_i of the i-th lattice point is defined as r_i = ma_1+na_2+ξ_i, where m and n are integers and ξ_i is a uniformly random vector representing a local lattice distortion. All atoms are allocated at the lattice points in the region [0, ℓ] × [0, ℓ], where the length unit of the lattice data is set to 1 px. Some lattice points are attributed to vacancy sites, which are randomly chosen with probability ρ_vac. The number of peaks N_peak is given by N_peak = (1 - ρ_vac) N_tot, with N_tot being the number of lattice points in the region under consideration. Thus, the atom positions to be inferred from the imaging data are determined as {r̂_k} (k = 1, …, N_peak). The amplitude {x̂_k} (k = 1, …, N_peak) of a peak is set as a Gaussian random variable with mean 1 and variance σ_x. Using the set of parameters {x̂,r̂}, the synthetic data y(x̂,r̂) is generated through the measurement matrix by Eq. (<ref>). We fix the following parameters: σ = σ_true ≡ 2.25, ℓ = 64 px, ρ_vac = 0.02, R_center = 0.15 px, σ_x = 0.01, and σ_ϵ = 5×10^-4. The synthetic data used in this section is shown in Fig. <ref>.

For this synthetic data, we first perform the optimization using the least-squares method (λ = 0) for fixed σ = σ_true (= 2.25). As shown in Fig. <ref>, the optimized vector x contains both positive and negative values and fluctuates extensively with a huge amplitude (x_i ≈ 100) compared with the original signal's amplitude (y_d ≈ 0.01).
This result demonstrates that the least-squares method overfits the data y, and a non-sparse solution for x is obtained when any regularization term is absent. Fig. <ref> shows some typical results of the L_1 optimization with several values of λ for fixed σ=σ_true. Some of the relevant variables take negative values for a relatively low value of λ, such as λ=10^-6 shown in Fig. <ref>(a), although the true values x̂ have no negative values. Moreover, the variables are noisy in the higher-λ regime, such as λ=10^-3, in comparison with the λ=10^-4 case. Therefore, there must be a suitable value of λ between these two extremes, which is to be determined using the CV.

§.§ Result of Cross Validation

For the synthetic data, we performed 10-fold CV and LOOCV in order to determine the suitable parameters λ^* and σ^*. The results of 10-fold CV and LOOCV are shown in Fig. <ref> (a) and (b), respectively. There are no apparent quantitative differences between these results in the regime 10^-6 < λ < 10^-3, indicating that the approximated LOOCV error provides a good estimator of large-K K-fold CV errors. We then choose the optimal λ^*(σ) for each σ in accordance with the one-standard-error rule. Next, we study the σ-dependence of the 10-fold CV error L^(10)(σ, λ^*(σ)) and the LOOCV error L^LOO(σ, λ^*(σ)) shown in Fig. <ref>. Both CV errors take a minimal value at around σ^* = 2.225. The error bars displayed in Fig. <ref> represent the standard error of each CV error. The mean value L in the parameter regime σ = 2.225 ± 0.050 is within one standard error of the minimum at σ^* = 2.225. Hence, this result is consistent with the true value σ_true=2.25. Choosing the (hyper)parameters by CV, the optimized amplitude x^*(λ^*; σ^*) is obtained with λ^* = 2×10^-5 and σ^*=2.225, which is shown in Fig. <ref>. The peaks are separated from each other.
Thus, we are able to count the number of peaks and find N_peak=153 in this case, which is consistent with the number in the synthetic data.

§.§ Result of estimated atom position

The final solution x^*(σ^*, λ^*) is sparse, and x_i has a finite value only near the true peak positions, as shown in Fig. <ref>. In Fig. <ref>, we show the amplitudes of the true peaks x̂ and the estimated amplitudes of the L1VM for a portion of the grid. For each true peak, there are still several “active” pixels with non-zero elements x_i. The number of active pixels ranges from two to five, depending on the resolution of the L1VM grid. We then obtain the peak positions r̂^* by applying the above-mentioned k-means clustering method to the optimized L1VM solution x^*(σ^*, λ^*). In Fig. <ref>, we show the obtained positions r̂^* together with the true positions r̂. We also show the differences between the true positions and their corresponding estimated positions on the right side of Fig. <ref>. No significant differences are observed in the figures. In fact, the accuracy of our estimation is within 1 px, meaning that the positions of the peaks are extracted from the STM data with accuracy beyond the resolution of the input signal. This is the main claim of this paper.

§.§ Application to real experimental data

The presented results for the synthetic data are useful for examining the validity of our method. Before applying our scheme to real experimental data sets, some issues must be addressed. For example, the choice of the basis function is one of the essential problems because the basis function must depend on the surface materials. Nevertheless, assuming a Gaussian basis function, we apply our scheme to experimental data from STM topography measurements of a SrVO_3 thin film. Fig. <ref> presents the tentative results obtained by our scheme. Many defects are clearly observed on the square lattice, and the local lattice distortion is enhanced around the defects.
Since our method is not based on Fourier transformations, it should be possible to directly detect real-space properties such as local distortion and/or strain. Details of the physical properties of the material are discussed in a separate paper.

§ CONCLUDING REMARKS

In this study, we propose an efficient data analysis method for STM topography datasets, which allows highly accurate extraction of peak centers. Technically, our main problem belongs to the class of 2D peak decomposition problems with a large, unspecified number of peaks. Examples of such problems include NMR spectral data and X-ray or neutron beam diffraction pattern data. Therefore, our scheme could be applicable to a wide range of datasets by changing the basis function.

First, we discuss the computational cost of our method. An elementary step of the L_1 optimization consists of FISTA. For estimating an N-dimensional vector x^*, the computational cost of FISTA is O(N^2) due to the matrix-vector product. The typical computation time required for convergence of the x^*(σ, λ) estimation in the analysis of 64 × 64 pixel data is about 30 sec on a standard single-core laptop computer. In this case, the dimension of the measurement matrix Â is 4096 × 4096. The typical size of STM topography data is 512 × 512 pixels, so the measurement matrix becomes tremendously large. However, a suitable cutoff length decreases the number of relevant elements in the measurement matrix when the basis function is spatially localized, such as the Gaussian basis function used in this study. We succeeded in a preliminary analysis of 512 × 512 pixels of real data using a set of cluster machines.

In our method, most of the computational time is devoted to the hyperparameter estimation by cross validation. As shown in Figs. <ref> and <ref>, our results indicate that the approximated LOOCV error proposed by Obuchi and Kabashima agrees well with the results of the 10-fold CV error.
Thus, using the approximated LOOCV, which requires roughly one-tenth the computational time of 10-fold CV, is computationally efficient. When we estimate the center positions of the peaks from topography data, we simultaneously obtain the amplitude values x^* of the L1VM variables. As shown in Fig. <ref>, however, our analysis provides a bundle of peaks for each true peak. In our analysis, the summation of the peak amplitudes for each cluster is easily calculated from the optimized variables x^* using the attributed variable z_i as x̂^*_k = ∑_i=1^N δ_k,z_iθ(x_i) x^*_i. In Fig. <ref>, we compare the accumulated amplitude for each peak to the true amplitude. The estimation of the peak amplitude is not as accurate as the estimation of the peak position, although our method is a significant improvement over the naive least-squares method shown on the right side of Fig. <ref>. The mean values and standard deviations of the peak amplitudes are shown for our estimate and the true values in Table <ref>. The mean value of the estimated amplitude x̂^* is comparable to that of the true value x̂, but the standard deviation of x̂^* is about three times larger than that of x̂. This discrepancy may be due to the limited resolution of the L1VM. We expect that the accuracy of the amplitude estimation would be improved by increasing the dimension N of x so that N is larger than the input dimension D. Another practical way to improve the accuracy might be to re-evaluate the peak amplitudes using the knowledge of the peak positions extracted by our scheme.

Finally, the choice of the basis function remains an important problem in analyzing experimental datasets. In the preliminary results shown in Fig. <ref>, typical STM topography data of a SrVO_3 thin film is analyzed with a 2D isotropic Gaussian function. However, situations exist where this basis function choice is not suitable.
To apply our scheme to more general cases, we will utilize machine learning techniques to estimate suitable basis functions from the obtained datasets.

We are grateful to Y. Okada and T. Hitosugi for providing STM data and useful discussions. We also thank M. Okada, Y. Kabashima, M. Ohzeki, T. Obuchi and Y. Nakanishi-Ohno for useful discussions. This research was supported by the Grants-in-Aid for Scientific Research from the JSPS, Japan (No. 25120010 and 25610102) and the Grant-in-Aid for Challenging Exploratory Research from the MEXT, Japan (No. 15596332). This work was also supported by the “Materials research by Information Integration” Initiative (MI^2I) project of the Support Program for Starting Up Innovation Hub from the Japan Science and Technology Agency (JST).

§ FAST ITERATIVE SHRINKAGE-THRESHOLDING ALGORITHM (FISTA)

In our study, we use FISTA to minimize the L_1-regularized cost function E(x)=‖y-Âx‖_2^2/2+λ |x|_1. The optimal solution x^* is determined by FISTA by solving iterative equations with an auxiliary variable β and vector w. One characteristic feature of the algorithm is its use of a soft-thresholding function in the iterative procedure, which is defined by S_ϵ(x)=x - ϵ (x > ϵ), 0 (-ϵ≤ x ≤ϵ), x + ϵ (x < - ϵ), with threshold ϵ. Setting the initial conditions β_0=1 and w_0=x_0, the update procedure is given by the following equations: x_t+1 = S_λ/L(w_t+ Â^T (y - Âw_t) / L), β_t+1 = (1+√(1+4 β_t^2))/2, w_t+1 = x_t+1 + (β_t - 1)/β_t+1(x_t+1 - x_t), where L is a Lipschitz constant of the differential of the squared error, f(x)=‖y-Âx‖_2^2/2; that is, L is a positive constant that satisfies the condition ‖∇ f(x)-∇ f(y)‖_2≤ L ‖x-y‖_2. Thus, it is natural to choose the threshold ϵ_th as ϵ_th=λ/L in Eq. (<ref>). The Lipschitz constant L is given as L=‖Â^TÂ‖, with ‖⋯‖ being the operator norm of a matrix. The value of L is directly computable when the matrix is small, say, smaller than a 100 × 100 matrix. For a larger matrix, L can be estimated using the backtracking algorithm <cit.>.
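The update equations above translate directly into code. A minimal numpy sketch (our own naming; the Lipschitz constant is computed exactly via the spectral norm, which is feasible for small matrices):

```python
import numpy as np

def soft_threshold(v, eps):
    """Componentwise soft-thresholding S_eps(v)."""
    return np.sign(v) * np.maximum(np.abs(v) - eps, 0.0)

def fista(y, A, lam, n_iter=500):
    """FISTA for E(x) = ||y - A x||_2^2 / 2 + lam * |x|_1.
    L = ||A^T A|| is the squared spectral norm of A."""
    L = np.linalg.norm(A, ord=2) ** 2
    x = np.zeros(A.shape[1])
    w = x.copy()
    beta = 1.0
    for _ in range(n_iter):
        x_new = soft_threshold(w + A.T @ (y - A @ w) / L, lam / L)
        beta_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * beta ** 2))
        w = x_new + (beta - 1.0) / beta_new * (x_new - x)
        x, beta = x_new, beta_new
    return x
```

The threshold applied at each step is exactly λ/L, which is the ϵ_th used in the approximated LOOCV formula.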
Moreover, the sum of each column of our measurement matrix Â takes a value close to unity, yielding ‖Â^TÂ‖≈ 1 for a large matrix Â by a simple calculation of linear algebra.

[STM1] G. Binnig, H. Rohrer, Ch. Gerber, and E. Weibel, Phys. Rev. Lett. 49, 57 (1982).
[STM2] G. Binnig, H. Rohrer, Ch. Gerber, and E. Weibel, Phys. Rev. Lett. 50, 120 (1983).
[TersoffHamann] J. Tersoff and D. R. Hamann, Phys. Rev. B 31, 805 (1985).
[YOkada] Y. Okada, T.-R. Chang, G. Chang, R. Shimizu, S.-Y. Shiau, H.T. Jeng, S. Shiraki, A. Bansil, H. Lin, and T. Hitosugi, preprint (arXiv:1604.07334).
[Saini] N.L. Saini, H. Oyanagi, and A. Bianconi, Physica C 357-360, 117 (2001).
[Deutscher] G. Deutscher, J. Appl. Phys. 111, 112603 (2012).
[Zeljkovic] I. Zeljkovic, J. Nieminen, D. Huang, T.-R. Chang, T. He, H.-T. Jeng, Z. Xu, J. Wen, G. Gu, H. Lin, R.S. Markiewicz, A. Bansil, and J.E. Hoffman, Nano Lett. 14, 6749 (2014).
[Gai] Z. Gai, W. Lin, J.D. Burton, K. Fuchigami, P.C. Snijders, T.Z. Ward, E.Y. Tsymbal, J. Shen, S. Jesse, S.V. Kalinin, and A.P. Baddorf, Nat. Commun. 5, 4528 (2014).
[RVM] M. E. Tipping, J. Machine Learn. Res. 1, 211 (2001).
[LASSO] R. Tibshirani, J. Royal. Stat. Soc. Ser. B 58, 267 (1996).
[Nagata] K. Nagata, S. Sugita, and M. Okada, Neural Networks 28, 82 (2012).
[Igarashi] Y. Igarashi, K. Nagata, T. Kuwatani, T. Omori, Y. Nakanishi-Ohno, and M. Okada, J. Phys.: Conf. Series 699, 012001 (2016).
[Ashida] Y. Ashida and M. Ueda, Optics Lett. 41, 72 (2016).
[L1VM] K.P. Murphy, Machine Learning: A Probabilistic Perspective (Cambridge, MA: MIT Press, 2012).
[FISTA] A. Beck and M. Teboulle, SIAM J. Imaging Sci. 2, 183 (2009).
[ObuchiKabashima] T. Obuchi and Y. Kabashima, J. Stat. Mech.: Theory and Experiment 2016, 053304 (2016).
[onese] T. Hastie, R. Tibshirani, and J. Friedman, The Elements of Statistical Learning (Berlin: Springer, 2001).
Multiple Access for 5G New Radio: Categorization, Evaluation, and Challenges
Hyunsoo Kim, Student Member, IEEE, Yeon-Geun Lim, Student Member, IEEE, Chan-Byoung Chae, Senior Member, IEEE, and Daesik Hong, Senior Member, IEEE
H. S. Kim and D. S. Hong are with the School of Electrical and Electronic Engineering, Yonsei University, Korea. Y.-G. Lim and C.-B. Chae are with the School of Integrated Technology, Yonsei University, Korea (e-mail: hyunsookim, yglim, cbchae, daesik@yonsei.ac.kr). The corresponding author is D. S. Hong.
December 30, 2023
=================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================

Next generation wireless networks require massive uplink connections as well as high spectral efficiency. It is well known that, theoretically, it is not possible to achieve the sum capacity of multi-user communications with orthogonal multiple access. To meet the challenging requirements of next generation networks, researchers have explored non-orthogonal and overloaded transmission technologies, known as new radio multiple access (NR-MA) schemes, for fifth generation (5G) networks. In this article, we discuss the key features of the promising NR-MA schemes for massive uplink connections. The candidate NR-MA schemes can be characterized by multiple access signatures (MA-signatures), such as codebooks, sequences, and interleavers/scramblers. At the receiver side, advanced multi-user detection (MUD) schemes are employed to extract each user's data from the non-orthogonally superposed data according to the MA-signatures.
Through link-level simulations, we compare the performances of the NR-MA candidates under the same conditions. We further evaluate the sum rate performances of the NR-MA schemes with a system-level simulator based on a 3-dimensional (3D) ray tracing tool that reflects realistic environments. Lastly, we offer tips for system operation and call attention to the remaining technical challenges. non-orthogonal multiple access (NOMA), overloading, massive connectivity, 3D ray tracing, and 5G networks.

§ INTRODUCTION

Wireless communication systems have evolved through successive generations, each supplanting its predecessor. Accompanying this evolution has been the progression of multiple access (MA) technologies. For example, from the first generation (1G) to 4G LTE, conventional communication systems orthogonally assigned radio resources to multiple users in the time, frequency, and code domains. Experts believe that by 2020 (5G), mobile data traffic will have grown a thousand-fold (1000x) <cit.>. Explosive growth in data traffic will be propelled by the Internet of things (IoT) and massive machine-type communications (mMTC). At the ITU-R WP 5D meeting, the target number of maximum link connections was determined to be one million per square kilometer <cit.>. The orthogonal multiple access (OMA) system, however, has limited capability in supporting such massive numbers of devices. Regarding sum capacity, it is well known that the OMA system cannot approach the Shannon limit <cit.>. A great number of industrial and academic researchers have shifted the focus of their studies from orthogonality to non-orthogonality. In fact, in the Third Generation Partnership Project (3GPP), NTT DOCOMO introduced power domain non-orthogonal multiple access (NOMA), which allows multiple users to share the same radio resources <cit.>. Another movement has proposed various non-orthogonal and overloaded transmission techniques known as new radio multiple access (NR-MA) schemes.
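As a toy illustration of the power-domain sharing just mentioned, the following sketch superposes two BPSK users with an asymmetric power split and recovers both with successive interference cancellation (SIC) over a noiseless channel. The power split, symbols, and channel model are illustrative assumptions, not values from any 3GPP proposal.

```python
import numpy as np

# Toy power-domain NOMA: two users share one resource, the far
# (weak-channel) user gets more power, and the receiver applies SIC.
# All parameters below are illustrative assumptions.
rng = np.random.default_rng(0)

p_near, p_far = 0.2, 0.8                  # power split (sums to 1)
bits_near = rng.integers(0, 2, 8)
bits_far = rng.integers(0, 2, 8)

def bpsk(bits):
    return 2.0 * bits - 1.0               # {0,1} -> {-1,+1}

# Superposed transmit signal on the shared resource.
x = np.sqrt(p_near) * bpsk(bits_near) + np.sqrt(p_far) * bpsk(bits_far)

# SIC receiver: decode the high-power (far) user first, treating the
# near user as noise; subtract it, then decode the near user.
bits_far_hat = (x > 0).astype(int)
residual = x - np.sqrt(p_far) * bpsk(bits_far_hat)
bits_near_hat = (residual > 0).astype(int)
```

Because the far user's amplitude dominates every superposed sample here, both bit streams are recovered exactly in this noiseless setting; with noise, the decoding order and power split determine each user's error rate.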
The candidates of NR-MA can be characterized by multiple access signatures (MA-signatures), which are identifiers that distinguish user-specific patterns of data transmission. The candidates of NR-MA can be grouped into three categories: i) codebook-based schemes <cit.>, ii) sequence-based schemes <cit.>, and iii) interleaver/scrambler-based schemes <cit.>. Here, advanced multi-user detection (MUD) algorithms are considered so as to recover the superposed signals of NR-MA. In previous studies, however, performance evaluations with different system parameters have been carried out scheme by scheme, making it difficult to compare performances fairly or to extract meaningful insights into system design.

In this article, we elaborate on the basic principles of the NR-MA schemes. Then, through this categorization, we explain the key features of and distinguishing points among the schemes. In an effort to evaluate how well the candidate schemes perform at overloading users and at sustaining inter-user interference, we conduct link-level simulations under common conditions such as transmission power, the number of resource blocks and superposed users' signals, and channel environments. Furthermore, through 3-dimensional (3D) ray tracing-based system-level simulations, we evaluate the sum rate performance of the NR-MA schemes in realistic environments. To the best of our knowledge, this is the first work that fairly evaluates the potential NR-MA schemes being discussed for 5G. Finally, we discuss meaningful insights into system design and point out research challenges to operating NR-MA schemes in practice.

§ OVERVIEW OF NEW RADIO MULTIPLE ACCESS

§.§ Basic Principles

NR-MA schemes may be described as user overloading technologies that expand the capacity region. The key is how well a large number of users' signals can be superimposed and recovered within a controllable and acceptable amount of interference.
An important metric for NR-MA in this regard is the overloading factor, defined as the ratio of the number of overloaded signals to the number of orthogonal resource grids.

The candidate schemes proposed in the 3GPP share two common features. First, a transmitter spreads data using one or more MA-signatures; the signatures include power, codebook, sequence, interleaver, scrambler, etc. The signal spreading can be operated at the bit level, the symbol level, or both. User-specific spreading and resource mapping make each user's signal distinguishable and more robust to inter-user interference. Second, a receiver employs advanced MUD algorithms with moderate computational complexity, where powerful forward-error-correction (FEC) coding makes up for the insufficient decoding performance of the MUD algorithms.

§.§ Categorization

In this subsection, we categorize the candidate schemes introduced in the 3GPP, and discuss their common architectures and key features. As shown in Table <ref>, the technologies can be categorized according to the MA-signature they predominantly use. Explanations of each category are given below.

* Category 1: Codebook-based MA As illustrated in Fig. 1a, the key feature of codebook-based MA schemes is the direct mapping of each user's data stream into a multi-dimensional codeword in a codebook. The codeword has two characteristics: i) signal spreading to obtain diversity/shaping gain, and ii) `zero' elements in the codeword to suppress inter-user interference in a sparse manner. The positions of the zero elements in different codebooks are distinct so as to avoid a collision of any two users <cit.>.
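The sparse mapping just described can be sketched as a toy codebook-style assignment: 6 users share 4 resource elements, each codeword carries exactly 2 non-zero entries, and every user gets a distinct zero-position pattern. The dimensions and symbol values are illustrative assumptions, not an optimized SCMA codebook.

```python
import itertools
import numpy as np

# Toy codebook-style sparse mapping: 6 users on 4 resource elements,
# each codeword with exactly 2 non-zero entries, all C(4,2)=6 distinct
# zero-position patterns, so no two users collide on both resources.
# Overloading factor = 6 signals / 4 resources = 150 percent.
n_res, n_nonzero = 4, 2
patterns = list(itertools.combinations(range(n_res), n_nonzero))
n_users = len(patterns)                    # 6 distinct patterns
overloading = n_users / n_res              # 1.5, i.e. 150 percent

def codeword(user, symbol):
    """Place a user's complex symbol on its assigned resources."""
    cw = np.zeros(n_res, dtype=complex)
    cw[list(patterns[user])] = symbol
    return cw

# Superpose all users on the shared resource grid.
symbols = [np.exp(2j * np.pi * u / n_users) for u in range(n_users)]
rx = sum(codeword(u, s) for u, s in enumerate(symbols))
```

Each resource element ends up carrying only a subset of the users, which is the sparsity that the message passing receiver exploits.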
In this scheme, the spreading factor is equivalent to the dimension of a codeword, and the overloading factor is determined by the ratio of the number of multiplexed users to the spreading factor. To recover users' data streams effectively, the scheme adopts an iterative message passing algorithm (MPA) as a near-optimal solution <cit.>. Since the MPA receiver is based on maximum likelihood (ML) detection, it has high computational complexity relative to the other categories.

Two schemes that belong to this category are sparse code multiple access (SCMA) and pattern division multiple access (PDMA) <cit.>. The main difference between SCMA and PDMA is the resource-utilization pattern. SCMA codewords in all codebooks have the same number of zero/non-zero elements as a regular pattern. In contrast, the PDMA system allocates a different number of non-zero elements by considering each user's channel state. For example, to obtain a high diversity gain for a user with a weak channel gain, PDMA can assign a codebook with more non-zero elements than those of another codebook. However, the codebook optimization of PDMA is highly complicated and remains an open problem. Another distinguishing point is that PDMA can utilize a power domain MA-signature. Similar to power domain NOMA, the intended near-far effect from power control can help eliminate interference effectively.

* Category 2: Sequence-based MA Novel sequence-based MA techniques utilize non-orthogonal complex number sequences to overlap multi-user signals, as shown in Fig. 1b. This method contrasts with the use of orthogonal pseudo noise sequences in a code division multiple access (CDMA) system. Similar to SCMA, the overloading factor is determined by the length of the spreading sequence and the number of overloaded users. A family of complex sequences with short lengths is chosen to enable simple multi-user interference cancellation <cit.>.
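Spreading with short non-orthogonal complex sequences can be sketched as below; the {-1, 0, 1} element alphabet follows the MUSA design, while the sequence length, user count, and noise level are illustrative assumptions.

```python
import numpy as np

# Sketch of sequence-based spreading with MUSA-style short complex
# sequences whose real and imaginary parts are drawn from {-1, 0, 1}.
# Sequence length, user count, and noise level are assumptions.
rng = np.random.default_rng(1)
spread_len, n_users = 4, 8                 # 8/4 = 200 percent overloading

# Draw sequences, avoiding the (rare) all-zero sequence.
while True:
    seqs = (rng.integers(-1, 2, (n_users, spread_len))
            + 1j * rng.integers(-1, 2, (n_users, spread_len)))
    if (np.abs(seqs).sum(axis=1) > 0).all():
        break
seqs = seqs / np.linalg.norm(seqs, axis=1, keepdims=True)  # unit energy

# Spread one QPSK symbol per user and superpose the chips.
symbols = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], n_users) / np.sqrt(2)
A = seqs.T                                  # 4 x 8 effective channel (unit gains)
rx = A @ symbols

# Linear MMSE estimate of all users' symbols; with 8 users on 4 chips,
# rank(A) <= 4, so separation is necessarily imperfect.
sigma2 = 0.1
est = A.conj().T @ np.linalg.solve(A @ A.conj().T + sigma2 * np.eye(spread_len), rx)
```

The rank limit in the last comment is the same effect discussed later for high overloading factors: the linear receiver cannot fully separate more users than the spreading length provides dimensions.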
For this, minimum-mean-square-error receivers with successive/parallel interference cancellation (MMSE-SIC/PIC) have been considered as applicable receivers <cit.>. As the MMSE-SIC/PIC are linear-type receivers, they are advantageous in terms of computational complexity.

The key issue in sequence-based MA is how to design and assign non-orthogonal sequence sets to users. In multi-user shared access (MUSA), the real and imaginary parts of the sequence elements are randomly generated from {-1, 0, 1}. Thanks to the zero elements in the sequences, inter-user interference is efficiently mitigated as in the codebook-based MA. Non-orthogonal coded multiple access (NCMA) obtains non-orthogonal sequences by solving a Grassmannian line packing problem <cit.>. Non-orthogonal coded access (NOCA) has been proposed to utilize the LTE sequences defined for uplink reference signals <cit.>. Group orthogonal coded access (GOCA) exploits a dual sequence: a non-orthogonal sequence for group separation and an orthogonal sequence for user separation within a group <cit.>.

* Category 3: Interleaver/Scrambler-based MA Figure 1c illustrates a simplified block diagram for an interleaver/scrambler-based MA system. The repetition coding part determines the spreading factor. The interleaver/scrambler part creates diverse superposition patterns, and thereby obtains an interference averaging effect. The representative technology of interleaver/scrambler-based MA is interleave-division multiple access (IDMA). In an IDMA system, inter-user interference is suppressed through a user-specific bit-level interleaver, which makes the overlapped signals appear mutually random. While the schemes in Categories 1 and 2 spread data in small units using short codewords/sequences, the interleaver enables dispersion of data across a long signal stream. With the spread signals widely distributed, it is difficult to apply MMSE filtering or an MPA due to the high computational complexity at the receiver side.
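A user-specific bit-level interleaver of this kind can be sketched in a few lines; the stream length, repetition factor, and random permutations are illustrative assumptions, not an actual IDMA design.

```python
import numpy as np

# Minimal sketch of IDMA-style user-specific bit-level interleaving:
# each user's repetition-coded bit stream is permuted by its own
# interleaver, so the superposed streams look mutually random.
# Stream length, repetition factor, and permutations are assumptions.
rng = np.random.default_rng(2)
n_users, n_bits, rep = 3, 4, 4
stream_len = n_bits * rep

perms = [rng.permutation(stream_len) for _ in range(n_users)]

def interleave(bits, user):
    coded = np.repeat(bits, rep)           # simple repetition "coding"
    return coded[perms[user]]              # user-specific permutation

def deinterleave(chips, user):
    out = np.empty_like(chips)
    out[perms[user]] = chips               # invert the permutation
    return out

bits = rng.integers(0, 2, n_bits)
chips = interleave(bits, user=1)
recovered = deinterleave(chips, user=1)
```

It is exactly this wide dispersion across a long stream that makes MMSE filtering or an MPA too costly, motivating the chip-level estimator discussed next.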
In this regard, a receiver of IDMA exploits an elementary signal estimator with PIC (ESE-PIC), which permits chip-by-chip soft interference estimation and cancellation with moderate computational complexity <cit.>.

By expanding the IDMA system, interleave-grid multiple access (IGMA) supplements a sparse symbol-mapping process known as grid mapping. After the bit-level interleaver and symbol modulation, `zero' symbol padding and resource mapping algorithms are additionally operated. In repetition division multiple access (RDMA), researchers have introduced a user-specific cyclic repetition pattern for the purpose of avoiding highly correlated interleaving patterns among users <cit.>. Rather than using the interleaving technique, resource spread multiple access (RSMA) relies on a combination of a low-rate channel coding scheme and a user-specific scrambling. Without a joint decoding process at the receiver side, RSMA has the potential to allow grant-free transmission and asynchronous multiple access <cit.>.

§ PERFORMANCE EVALUATION FOR UPLINK SCENARIO

For each category, we select the base schemes that best represent the key features: SCMA, MUSA, and IDMA.[Note that candidate schemes have different characteristics despite being in the same category. In this article, however, we focus on fundamental and base schemes to make useful observations and identify system design insights for each category.] Since, in previous studies, experiments were performed with different system parameters scheme by scheme, they were limited in their capacity to show how the schemes' performances compare with one another or to provide insight into system design. In this article, we attempt to overcome such limitations by evaluating the schemes under the same conditions at both the link- and system-level.
The detailed system parameters and assumptions are described in Table <ref>.

§.§ Link-level Simulation

* Comparisons of BLER Performance Figures 2a-2c show the block error rate (BLER) performance with diverse overloading factors and code rates. The performance of the OMA system is represented as a baseline. Without loss of generality, we assume that the received power per resource element and the spectral efficiency per user are kept the same for all MA schemes, including OMA. For example, when the spreading factor is eight, the power of each chip is normalized by 1/8. In terms of spectral efficiency, to compare against a 200 percent overloaded NR-MA scheme with QPSK, the OMA system uses 16QAM to serve users twice as fast.

As displayed in Fig. 2a, all candidates outperform OMA when users are overloaded at 150 percent. In particular, the BLER curves of SCMA and IDMA almost overlap at code rates 0.2 and 0.4. Moreover, due to the diversity gain from sparse codeword mapping or bit stream interleaving, the BLER curves have steeper slopes than the OMA curves. MUSA, by contrast, shows a different tendency. Unlike in SCMA and IDMA, in MUSA the MMSE linear receiver is adopted to mitigate inter-user interference. It suppresses the interference but also reduces the power of the desired signal; as a result, diversity gain is lost.

Figure 2b exhibits the 200 percent overloading case, in which IDMA has a unique tendency. The BLER curve of IDMA saturates when a moderate code rate (0.4) is applied. This is because IDMA treats the inter-user interference as noise in the initial stage of the ESE-PIC receiver <cit.>. If the total interference exceeds the permissible amount in this stage, the superposed signals experience severe performance degradation due to error propagation in the PIC stage.
Even so, IDMA can overcome this problem with the aid of powerful FEC coding with a low rate (0.2), showing performance similar to that of SCMA.

Figure 2c verifies how much interference each technique can endure. SCMA is able to withstand highly aggregated interference well at both low and moderate code rates. MUSA, on the other hand, does not work with a high overloading factor (300 percent) and a moderate code rate. Since the spreading factor is much smaller than the number of overloaded users, the effective channel matrix for the MMSE receiving filter does not have sufficient rank to distinguish the larger number of users' signals <cit.>. This means that it is more difficult to separate each user's signal as the overloading factor increases.[Note that even though the rank of the effective channel matrix can be extended by using multiple receiving antennas, this leads to an unfair comparison by using additional orthogonal spatial resources. In this article, additional orthogonal spatial resources are not considered.] For the same reason as in the 200 percent case, IDMA cannot be effectively operated in the 300 percent case either.

In summary, SCMA demonstrates excellent performance over a wide SNR range. In particular, with the aid of a high-complexity MPA receiver, diverse overloading factors and code rates can be applied to the SCMA system. IDMA shows the best performance with 150 and 200 percent overloading and a low code rate. Since the amount of tolerable interference is relatively low, IDMA works poorly, in contrast, with high overloading factors. Finally, even though the diversity gain of MUSA is relatively low compared to that of the other schemes, it provides good BLER performance with low and moderate overloading factors.
To support high overloading, MUSA requires low code rates.

* Average Sum Spectral Efficiency We design modulation and coding scheme (MCS) levels for the system-level evaluation, and calculate the achievable average sum spectral efficiency according to the MCS levels. The design criterion is to find the overloading factors and code rates that satisfy the target BLER = 0.1 and maximize the sum user throughput at a given SNR. In the design of the MCS levels, we consider a finite set of candidate overloading factors and code rates as follows: {150, 200, 300 percent} and {0.1, 0.2, ⋯, 0.9}. Through Monte-Carlo simulation, we select the MCS that maximizes spectral efficiency, as represented in Figs. 2d-2f. As noted in Figs. 2a-2c, SCMA can operate over a wide range of overloading factors and code rates, making it possible to achieve an average sum spectral efficiency of up to 4.8 bps/Hz. MUSA, in turn, shows superior performance when 200 percent of users are overloaded, with a spectral efficiency of up to 2.1 bps/Hz. The MCS levels for IDMA are composed of a 150 percent overloading factor with code rates 0.1∼0.7, yielding a spectral efficiency of up to 2 bps/Hz.

Even though IDMA and MUSA cannot obtain higher spectral efficiency by raising the overloading factor, they can provide higher throughput than SCMA in the low SNR regime.

§.§ System-level Simulation

* System Modeling Figures 3a and 3b show the simulation environments. We model two realistic 3D digital maps to validate the potential of NR-MA schemes in practice. One is an actual urban area: the buildings surrounding Gangnam Station in Seoul, South Korea. The other, for an indoor scenario, is inside the Veritas-C building at Yonsei University, South Korea. In the urban scenario, we deploy 4 base-stations (BSs) with Kathrein antennas typically used in LTE and position 7 sectors in the area of interest. This can be considered a form of a 7-cell hexagonal layout, but more practical.
In the indoor scenario, 23 BSs with omnidirectional antennas are located in the middle of a hall on every floor of the building (about 6 BSs per floor), as illustrated in Fig. 3b. A massive number of users are uniformly distributed on the ground and on every floor of the building at a density of 10^6/km^2/floor.

* Simulation Procedure System-level simulations are conducted via the following procedure. First, to generate communication links from users to BSs, we utilize a 3D ray-tracing tool, Wireless System Engineering (WiSE), developed by Bell Laboratories. We measure the power-delay profiles and root-mean-square delays. Based on these measurements, a BS calculates the channel quality of each user. The BS then generates the group sets of users with the same MCS level for group scheduling.[Even though the MCS design and scheduling algorithms for NR-MA are still immature and remain open problems, we can derive meaningful observations from this tractable approach, which considers backward and forward compatibility. To be specific, the NR-MA systems perform group scheduling over 8 RBs while the OMA system conducts user scheduling per RB. For the OMA system evaluation, we utilize the MCS defined in Table 7.2.3-1, 3GPP TS36.213 <cit.>.] The BS randomly schedules a group for NR-MA, and link-level simulations are performed for the scheduled group. Lastly, we obtain the sum rate per transmission-time-interval (TTI, 1 millisecond).

* Simulation Results * Full Buffer Traffic Scenario Figure 3c shows the system-level performance in the urban micro area. The users' SINRs are widely distributed from -20 dB to 70 dB. In particular, there are many strong outdoor-to-outdoor links; as a result, more than 35 percent of users have high SINR (>20 dB). In this environment, the NR-MA systems outperform the OMA system in the low and middle SINR regimes. The MUSA system with 200 percent overloading significantly improves the sum rate in the low SINR regime.
IDMA achieves superior performance with 150 percent overloading and high code rates in the middle SINR regime. Ultimately, with 300 percent overloading, SCMA can provide a higher maximum sum rate than the other NR-MA schemes. In the high SINR regime, however, the OMA system outperforms the NR-MA by using high order modulation techniques. This is because the achievable spectral efficiency of the NR-MA is significantly restricted by the amount of controllable interference even in the high SINR regime. The OMA system, on the other hand, can obtain a high peak rate by employing 64QAM and high-rate coding schemes.

The inside of a building, meanwhile, is a contrasting type of environment. We assume that there are 23 dense sensor networks in the building, which has a width of 60 m, a length of 15 m, and a height of 16 m. Due to the interference-limited environment, devices suffer from strong interference caused by massive numbers of neighboring devices. Consequently, almost all users have low- and middle-range SINRs (-10∼15 dB). Figure 3d depicts the cumulative distribution function (CDF) of the user sum rate in the indoor scenario. As shown in the figure, the NR-MA schemes significantly enhance the sum capacity. Also, the NR-MA schemes are able to achieve a peak sum rate nearly equivalent to that of the OMA. In comparing the candidates, the overall tendency is similar to the urban micro scenario: MUSA shows superior performance in the 10th percentile, and IDMA in the 50th and 90th percentiles. A summary of these evaluation results can be found in Fig. 3e.

* Non-full Buffer Traffic Scenario The number of target applications of the 5G system has been rapidly growing, and data transmission patterns have become much more diverse. In this regard, the 3GPP standardization groups determined target packet sizes from 20 bytes to 200 bytes for the mMTC scenario. Figure 4 presents the average sum rate performance of the NR-MA systems.
With the smaller packet sizes, the NR-MA systems obtain greater sum rate gains than does the OMA system. When a packet is small enough to be contained in one resource block (RB), there are unused resource elements (REs) in the RB, which leads to inefficient resource utilization. In this case, the NR-MA systems significantly improve spectral efficiency by overloading a large number of users' packets into 8 RBs. From a time resource perspective, this also means that the NR-MA systems can deal with the given traffic faster than OMA. With high overloading, SCMA in particular is the most favorable for bursty small packet transmissions in the massive uplink scenario.

§.§ Insights for System Design

From the link-level and system-level evaluations, we observe that the candidate schemes of NR-MA tremendously improve the sum rate performance in massive uplink communication systems. In particular, the NR-MA systems are specialized for small packet transmissions in interference-limited environments such as crowded sensor networks. Moreover, in the NR-MA systems, signaling overhead can be reduced by replacing user-specific information with group-specific information. For example, the NR-MA systems can perform group scheduling, which enables users within a group to utilize the same transmission parameters such as the positions of scheduled RBs, the MCS level, and the number of repetitions. To support this group-basis operation, a group cell radio network temporary identifier (group C-RNTI) for NR-MA should be newly investigated in 5G networks. In the initial random access stage, instead of a C-RNTI, a BS can assign a group C-RNTI based on the users' channel qualities and controllable interference. Through a group physical downlink control channel (PDCCH) with a group C-RNTI, common downlink control information (DCI) can be delivered to the group.
In addition, since the aggregation level of the DCI can be determined up to the spreading factor of NR-MA, the aggregation of DCI allows cell edge users to decode the DCI more reliably.

Another noteworthy point is that no single NR-MA scheme overwhelms the others, including OMA, across all environments. Fortunately, a massive number of users coexist in 5G networks for various applications with different requirements. Therefore, a smart multiple access strategy is needed that operates opportunistically according to the target scenario. In a situation that requires high data rate communications with a small number of users, such as urban cellular networks, the OMA schemes can be adopted. For applications that support a large number of concurrent small packet transmissions, like smart metering networks, the codebook-based MA with a powerful MPA can be beneficial for highly overloading data. The sequence-based MA can be more advantageous for areas that require many link connections in addition to coverage enhancement. Also, the interleaver/scrambler-based MA can be applied to an interference-limited area with a heavy traffic load. Through flexible system operation, we anticipate that NR-MA schemes will be a promising capacity booster for 5G.

§ CHALLENGES

While we have discovered that system capability is greatly enhanced by NR-MA schemes, we recognize several research challenges remaining to be resolved.

Resource and MA-signature Allocation: Since we target NR-MA systems that deal with massive link connections, handling dynamic user scheduling is difficult. More efficient resource management techniques are needed. We can consider two alternatives: i) grant-free access and ii) group scheduling-based access. Grant-free access in particular needs no explicit scheduling permission from the BSs. It does need, however, advanced collision recovery and user behavior detection algorithms.
In the group scheduling case, a group management protocol should be investigated with consideration of transmission patterns and target quality-of-service (QoS). Another critical issue is MA-signature allocation. Pre-configured MA-signature allocation or random MA-signature selection can be regarded as candidate approaches. Also, to implement the NR-MA systems, researchers should study the handling of MA-signature collisions.

Link Adaptation: Most prior work has focused on scenarios in which all the overloaded users transmit with the same MCS. This simple group-basis operation has advantages in terms of overhead reduction. The strong restriction on the MCS, however, might be an obstacle to achieving the theoretical sum capacity of an uplink multi-user channel. Furthermore, the BSs have to discover user groups with the same channel quality, thus reducing the flexibility of system operation. To resolve these problems, before providing guidelines for NR-MA system operation, researchers should first theoretically analyze user overloading and achievable rates. In addition, since the lengths and components of MA-signatures significantly impact the diversity gain and the interference averaging effect, researchers may consider signature re-assignment algorithms as a new form of link adaptation. Based on comprehensive analyses, researchers should investigate link adaptation specialized for NR-MA systems.

Channel Estimation: In the MUD receivers of NR-MA schemes, overall decoding performance relies heavily on the accuracy of the channel information. For reliable channel estimation, as many pilots as the number of overloaded users are needed. Allocating resources orthogonally for each user's pilot is a heavy burden in the mMTC scenario. To minimize this resource overhead, researchers should study quasi-orthogonal or non-orthogonal pilots with advanced channel estimation techniques such as MMSE with interference cancellation.
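The estimation problem with non-orthogonal pilots can be sketched as follows: with fewer pilot resources than users, plain least squares is ill-posed, but a linear MMSE estimator of the kind mentioned above still produces channel estimates. All dimensions, the pilot matrix, and the noise statistics are illustrative assumptions.

```python
import numpy as np

# Sketch of pilot-based channel estimation with non-orthogonal pilots:
# y = P h + n, where P is a wide (n_pilot < n_users) pilot matrix.
# A linear MMSE estimator regularizes the otherwise ill-posed inverse.
# All dimensions and statistics are illustrative assumptions.
rng = np.random.default_rng(3)
n_pilot, n_users, sigma2 = 4, 6, 0.01

def crandn(*shape):
    """Circularly symmetric complex Gaussian samples, unit variance."""
    return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

P = crandn(n_pilot, n_users) / np.sqrt(n_pilot)   # non-orthogonal pilots
h = crandn(n_users)                               # users' channel gains
y = P @ h + np.sqrt(sigma2) * crandn(n_pilot)     # received pilot signal

# MMSE estimate assuming h ~ CN(0, I):
#   h_hat = P^H (P P^H + sigma2 I)^{-1} y
h_hat = P.conj().T @ np.linalg.solve(P @ P.conj().T + sigma2 * np.eye(n_pilot), y)
```

Since rank(P) is at most the number of pilot resources, only the component of h in that subspace can be estimated, which is why interference-cancellation-aided estimators are of interest here.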
Synchronization: According to the agreement made during the 3GPP RAN1 #85 meeting, the uplink synchronization for grant-free NR-MA is assumed to be the same as the downlink transmission timing. In realistic environments, however, this assumption cannot always be guaranteed due to differences in the users' geometrical positions. Without a timing advance process, situations may arise in which the timing offsets between users are greater than the length of a cyclic prefix. Since the synchronization problem is also directly related to the MUD complexity as well as to the decoding performance, researchers should investigate receiving techniques and frame structures that are robust to timing errors.

§ CONCLUSION

This article has investigated multiple access technologies for 5G new radio. We have discussed the basic principles, categorization, and key features of the transceiver structures. Through the link-level simulations, it has been observed that the tolerable amount of inter-user interference is affected by the characteristics of the MA-signatures and the receiving algorithms. Further, by modeling realistic urban and indoor areas with a 3D ray tracing tool, we have verified the great potential and feasibility of the new multiple access schemes. From these evaluations, we have identified favorable conditions for each scheme, and drawn insights for multiple access system operation. Lastly, we have introduced key challenges for implementing NR-MA, such as resource management, link adaptation, channel estimation, and synchronization. We expect our investigation to help the NR-MA schemes gain a promising technical position in next generation wireless networks.

15GMA L. Dai, B. Wang, Y. Yuan, S. Han, C. l. I, and Z. Wang, “Non-orthogonal multiple access for 5G: solutions, challenges, opportunities, and future research trends," IEEE Comm. Mag., vol. 53, no. 9, pp. 74-81, Sep.
2015.ITU IMT-2020 PG, “IMT Vision towards 2020 and beyond," ITU-R WP5D ITU-R 2020 Vision Workshop, February 12th, 2014.NOMA S. M. R. Islam, N. Avazov, O. A. Dobre, and K. S. Kwak, “Power-domain non-orthogonal multiple access (NOMA) in 5G systems: potentials and challenges," IEEE Comm. Surveys & Tutorials, early accepted.SCMA L. Lu, Y. Chen, W. Guo, H. Yang, Y. Wu, and S. Xing, “Prototype for 5G new air interface technology SCMA and performance evaluation," China Comm., vol. 12, no. Supplement, pp. 38-48, December 2015.SCMA_Pattern Y. Wu, S. Zhang, and Y. Chen, “Iterative multiuser receiver in sparse code multiple access systems," IEEE Int. Conf. on Comm. (ICC), pp. 2918-2923, London, 2015.PDMA S. Chen, B. Ren, Q. Gao, S. Kang, S. Sun, and K. Niu, “Pattern division multiple access (PDMA) - A novel non-orthogonal multiple access for 5G radio networks," IEEE Trans. on Vehicular Tech., 2016.MUSA Z. Yuan, G. Yu, W. Li, Y. Yuan, X. Wang and J. Xu, “Multi-user shared access for Internet of things," IEEE Vehicular Tech. Conf. (VTC Spring), pp. 1-5, 2016.MUSA_Rx R1-166404, “Receiver Details and Link Performance for MUSA," 3GPP TSG RAN WG1 Meeting #86, Gothenburg, Sweden, 22-26 August, 2016.NCMA R1-162517, “Considerations on DL/UL Multiple Access for NR," 3GPP TSG RAN WG1 Meeting #84bis, Busan, Korea, 11-15 April, 2016.NOCA R1-162517, “Non-orthogonal Multiple Access for New Radio," 3GPP TSG-RAN WG1 #85, Nanjing, China, 23-27 May, 2016.GOCA_RDMA R1-167535, “New Uplink Non-orthogonal Multiple Access Schemes for NR," 3GPP TSG RAN WG1 Meeting #86, Gothenburg, Sweden, 22-26 August, 2016.IDMA Li Ping, Lihai Liu, Keying Wu, and W. K. Leung, “Interleave-Division Multiple-Access," IEEE Trans. on Wireless Comm., vol. 5, no. 4, pp. 
938-947, April 2006.IGMA R1-163992, “Non-orthogonal Multiple Access Candidate for NR," 3GPP TSG-RAN WG1#85, Nanjing, China, 23-27 May, 2016.RSMA R1-163510, “Candidate NR Multiple Access Schemes," 3GPP TSG RAN WG1 Meeting #84bis, Busan, Korea, 11-15 April, 2016.MCS 3GPP TS 36.213 version 13.0.0, “Evolved Universal Terrestrial Radio Access (E-UTRA); Physical layer procedures," 3GPP TSG RAN, 2016.Hyunsoo Kim (S'12) received the B.S. degree from the School of Electrical and Electronic Engineering, Yonsei University, Seoul, Korea, in 2012. He also received the Scholarship of the National Research Foundation of Korea during his B.S. studies. He is working toward the Ph.D. degree in Electrical Engineering at Yonsei University. His research interests are new waveform, non-orthogonal multiple access, licensed assisted access, and full-duplex. Yeon-Geun Lim (S'12) received his B.S. degree in Information and Communications Engineering from Sungkyunkwan University, Korea in 2011. He is now with the School of Integrated Technology, Yonsei University, Korea and is working toward the Ph.D. degree. His research interests include massive MIMO, new waveform, full-duplex, and system level simulation for 5G networks. Chan-Byoung Chae (SM'12) is Underwood Distinguished Professor in the School of Integrated Technology, Yonsei University, Korea. Before joining Yonsei University, he was with Bell Labs, Alcatel-Lucent, Murray Hill, New Jersey, as a Member of Technical Staff, and Harvard University, Cambridge, Massachusetts, as a Post-doctoral Research Fellow. He received his Ph.D.
degree in Electrical and Computer Engineering from The University of Texas at Austin, TX, USA in 2008. He was the recipient/co-recipient of the Yonam Research Award from LG Foundation (2016), the Best Young Professor Award from Yonsei University (2015), the IEEE INFOCOM Best Demo Award (2015), the IEIE/IEEE Joint Award for Young IT Engineer of the Year (2014), the KICS Haedong Young Scholar Award (2013), the IEEE Signal Processing Magazine Best Paper Award (2013), the IEEE ComSoc AP Outstanding Young Researcher Award (2012), the IEEE Dan. E. Noble Fellowship Award (2008), and two Gold Prizes (1st) in the 14th/19th Humantech Paper Contest. He currently serves as an Editor for the IEEE Comm. Mag., the IEEE Trans. on Wireless Comm., the IEEE Wireless Comm. Letters, the IEEE/KICS Jour. Comm. Nets, and the IEEE Trans. on Molecular, Biological, and Multi-scale Comm. Daesik Hong (S'86, M'90, SM'05) received the B.S. and M.S. degrees in Electronics Engineering from Yonsei University, Seoul, Korea, in 1983 and 1985, respectively, and the Ph.D. degree from the School of Electronics Engineering, Purdue University, West Lafayette, IN, in 1990. He joined Yonsei University in 1991, where he is currently the Dean of the College of Engineering and a Professor with the School of Electrical and Electronic Engineering. He is also currently President of the Institute of Electronics and Information Engineering, Korea. He has been serving as Chair of the Samsung-Yonsei Research Center for Mobile Intelligent Terminals. He also served as a Vice-President of Research Affairs and a President of the Industry-Academic Cooperation Foundation, Yonsei University, from 2010 to 2011. He also served as a Chief Executive Officer (CEO) for Yonsei Technology Holding Company in 2011, and served as a Vice-Chair of the Institute of Electronics Engineers of Korea (IEEK) in 2012. Dr. Hong is a senior member of the IEEE. He served as an editor of the IEEE Transactions on Wireless Communications from 2006 to 2011.
He currently serves as an editor of the IEEE Wireless Comm. Letters. He was appointed as the Underwood/Avison distinguished professor at Yonsei University in 2010, and received the Best Teacher Award at Yonsei University in 2006 and 2010. He was also a recipient of the Hae-Dong Outstanding Research Awards of the Korean Institute of Communications and Information Sciences (KICS) in 2006 and the Institute of Electronics Engineers of Korea (IEEK) in 2009. His current research activities are focused on future wireless communication including new waveform, non-orthogonal multiple access, full-duplex, energy harvesting, and vehicle-to-everything communication systems. More information about his research is available at http://mirinae.yonsei.ac.kr.
http://arxiv.org/abs/1703.09042v1
{ "authors": [ "Hyunsoo Kim", "Yeon-Geun Lim", "Chan-Byoung Chae", "Daesik Hong" ], "categories": [ "cs.IT", "math.IT" ], "primary_category": "cs.IT", "published": "20170327125028", "title": "Multiple Access for 5G New Radio: Categorization, Evaluation, and Challenges" }
Department of Physics, The University of Tokyo, Hongo, Bunkyo-ku, Tokyo 113-0033, JapanExperiments using nuclei to probe new physics beyond the Standard Model, such as neutrinoless ββ decay searches testing whether neutrinos are their own antiparticle, and direct detection experiments aiming to identify the nature of dark matter, require accurate nuclear physics input for optimizing their discovery potential and for a correct interpretation of their results. This demands a detailed knowledge of the nuclear structure relevant for these processes. For instance, neutrinoless ββ decay nuclear matrix elements are very sensitive to the nuclear correlations in the initial and final nuclei, and the spin-dependent nuclear structure factors of dark matter scattering depend on the subtle distribution of the nuclear spin among all nucleons. In addition, nucleons are composite and strongly interacting, which implies that many-nucleon processes are necessary for a correct description of nuclei and their interactions. It is thus crucial that theoretical studies and experimental analyses consider β decays and dark matter interactions with a coupling to two nucleons, called two-nucleon currents. Nuclear physics insights for new-physics searches using nuclei:Neutrinoless ββ decay and dark matter direct detection Javier Menéndez1 menendez@nt.phys.s.u-tokyo.ac.jp December 30, 2023 ======================================================================================================================= § INTRODUCTIONNeutrinos and dark matter are two of the most promising candidates for new physics beyond the Standard Model of particle physics. Because they are both charge neutral and massive, it is possible that neutrinos and antineutrinos would be the same particle, in which case neutrinos would be labeled as Majorana particles <cit.>. 
This property —very hard to test because neutrinos are so light— would imply the violation of lepton number, a relation that goes in both directions: observing the violation of lepton number would establish that neutrinos are Majorana particles. In turn, this may have important consequences for the understanding of the baryonic matter-antimatter asymmetry observed in the universe, as in most models the difference between baryon and lepton number is conserved. Unveiling the origin of dark matter stands as one of the biggest challenges in physics. The existence of dark matter has been established by very different astrophysical observations —galactic rotation velocities, gravitational lensing, anisotropies of the cosmic microwave background— but the nature of dark matter is still unknown <cit.>. Observations have constrained some of its properties: it must be neutral under the electromagnetic interaction —to a very good approximation at least— it should be cold or warm to allow for galaxy structure formation, and it amounts to more than 80% of the mass content of the universe, and roughly a quarter of its energy content. Ideally one would like to answer these questions on the nature of neutrinos and dark matter in the laboratory. 
Experimental programs searching for neutrinoless ββ decay —the lepton number violating process most likely to be observed— and the direct detection of dark matter are being pursued vigorously, and impressive advances are continually being reported, with present experimental sensitivities reaching half-lives longer than T_1/2^0νββ=10^26 years <cit.> for neutrinoless ββ decay, and probing scattering cross-sections off nuclei as small as σ_χ𝒩=10^-40 cm^2 <cit.> for dark matter searches. Further improvements are expected in the near future, as next generation experiments are planned to use over a tonne of source or target material with increasingly reduced backgrounds. Neutrinoless ββ decay and dark matter direct detection experiments have in common that they are looking for the decay of and the scattering off atomic nuclei, respectively. Therefore, the design —for example, the choice of source or target material— and the interpretation of the experimental results in principle depend on the nuclear physics of the process under study. In the case of neutrinoless ββ decay, the value of the nuclear matrix element driving the transition relies on the accurate nuclear structure description of the initial and final nuclei, and on the weak-interaction diagrams considered at the nucleon level <cit.>. In the case of dark matter detection, a correct interpretation of the experimental results taking into account all relevant nuclear structure factors depends on considering all possible interactions of nuclei with dark matter particles. In particular, β decays and dark matter interactions with a coupling to two nucleons, in addition to the leading contributions which only involve a single nucleon, can be significant. 
§ NEUTRINOLESS ΒΒ DECAY §.§ ββ decay: two-neutrino and neutrinoless cases The existence of ββ decay is a consequence of the nuclear pairing interaction, which makes nuclei with an even number of protons, or an even number of neutrons, more bound than nuclei with one or two —a proton and a neutron— unpaired nucleons. As a result, in some cases it is energetically favorable for a nucleus to decay along a given isobaric chain —the set of nuclei with the same number of nucleons— via a second-order ββ decay, instead of the usual single-β decay channel. For the case of ^76Ge the decay scheme is shown in figure <ref>. ββ decay with the emission of two antineutrinos besides two electrons, a lepton-number-conserving process permitted by the weak interaction, has been observed in a dozen cases, favored by a larger energy difference between the initial and final nuclei. The measured half-lives are of the order of T_1/2^2νββ∼10^19-10^21 years <cit.>. Neutrinoless ββ decay does not involve the emission of neutrinos, and it therefore violates lepton number. It requires neutrinos to be Majorana particles. The neutrinoless case is at least five or six orders of magnitude slower than the two-neutrino ββ decay permitted by the Standard Model. In the standard scenario, where the decay is mediated by the exchange of the three known light neutrinos, this is because the decay rate is proportional to the square of the neutrino masses, which are tiny compared to those of any other lepton. In other scenarios involving new physics, the reason is the large mass of the exchanged particles, or the small coupling of the new physics with the Standard Model sector. 
In the standard case the neutrinoless ββ decay half-life can be written as <cit.> [T^0νββ_1/2]^-1 = G^0νββ |M^0νββ|^2 m_ββ^2, which naturally includes a phase space factor G^0νββ that takes into account the kinematics, a nuclear matrix element M^0νββ that contains the relevant nuclear physics of the decay, and a third part m_ββ=∑_k m_k U_ek^2 that encodes the new-physics scale —the neutrino masses m_k— and also includes the mixing of electron neutrinos with other flavors, U_ek. The nuclear matrix element can be decomposed according to the spin structure of the operator <cit.>: M^0νββ = M^GT - (g_V/g_A)^2 M^F + M^T, where the dominant term is the so-called Gamow-Teller component, M^GT. §.§ Nuclear matrix elements: nuclear structure Neutrinoless ββ decay nuclear matrix elements M^0νββ have to be obtained by nuclear structure calculations evaluating the transition operator between the initial and final nuclear states. The present status of these calculations is illustrated in figure <ref>. Unfortunately, different nuclear structure approaches disagree in their predicted matrix elements for every ββ decay candidate by up to a factor of three. This is clear evidence that the unavoidable approximations present in solving the nuclear many-body problem are not under control when studying neutrinoless ββ decay <cit.>. In contrast, it should be noted that the same many-body methods in general agree when studying other nuclear structure properties such as excitation spectra or electromagnetic transitions. It is thus critical to clarify the actual value of neutrinoless ββ decay nuclear matrix elements. A first avenue for doing so is to test the calculations by finding correlations of matrix elements with other measured quantities. Despite efforts in this direction <cit.>, no single observable has been found to be especially correlated with neutrinoless ββ decay. 
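Since the half-life above scales as 1/|M^0νββ|^2, the factor-of-three spread in calculated matrix elements translates into roughly an order of magnitude in predicted half-lives. A minimal numerical sketch of this sensitivity, with purely illustrative values of G^0νββ, M^0νββ and m_ββ (none taken from the text):

```python
# Sensitivity of the 0vbb half-life to the nuclear matrix element,
# following [T_1/2]^-1 = G |M|^2 (m_bb / m_e)^2.
# All numerical inputs below are illustrative assumptions.

M_E_EV = 511e3          # electron mass in eV (m_bb is quoted in eV)

def half_life_years(G_per_year, M, m_bb_eV):
    """Half-life in years for phase-space factor G (1/yr),
    dimensionless matrix element M and effective Majorana mass m_bb (eV)."""
    return 1.0 / (G_per_year * M**2 * (m_bb_eV / M_E_EV)**2)

G = 2.4e-15             # assumed phase-space factor, 1/yr
m_bb = 0.05             # assumed effective Majorana mass, eV

t_small = half_life_years(G, 2.0, m_bb)   # pessimistic matrix element
t_large = half_life_years(G, 6.0, m_bb)   # optimistic, 3x larger

# A factor 3 in M gives a factor 9 in the predicted half-life:
print(t_small / t_large)   # ~9, up to floating-point rounding
```

With these assumed inputs the predicted half-life is of order 10^27 years, so a factor-of-three uncertainty in the matrix element can span the difference between a decay within and beyond the reach of next-generation experiments.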
A promising process is two-neutrino ββ decay, which shares with the neutrinoless case the initial and final states and for which there is experimental data. It must be taken into account, however, that the relevant momentum transfers are very different in the two-neutrino and neutrinoless decays, q∼1 MeV for the former and q∼100 MeV for the latter. Unfortunately, most many-body approaches cannot predict two-neutrino ββ decay, because an accurate calculation involves dealing with the intermediate odd-proton–odd-neutron system —for instance, the nucleus ^76As in figure <ref>— and this is a more involved nuclear structure calculation than the one needed for even-even nuclei. Other approaches, like the quasiparticle random phase approximation method, use the two-neutrino ββ decay half-life to fix a free parameter in their model and are therefore not predictive for this decay. The only remaining many-body approach is the shell model, which actually predicted the two-neutrino ββ decay rate of ^48Ca <cit.> in good agreement with the subsequent measurement a few years later <cit.>. However, the accepted experimental value has been challenged by a recent measurement <cit.>, which would turn the shell model prediction into an overestimate of the corresponding matrix element. The description of the two-neutrino ββ decay rate of ^136Xe within the shell model is also under discussion <cit.>. In recent years an effort has been made to understand the origin of the differences between many-body approaches by comparing systematic calculations of matrix elements, even if the associated ββ decays make little or no sense experimentally. 
For instance, a comparison between shell model and energy density functional matrix elements —which in general disagree the most among the different methods, as shown in figure <ref>— restricted to uncorrelated states —fully composed of proton-proton and neutron-neutron angular momentum J=0 pairs in the shell model, and limited to spherical states in the energy density functional calculation— showed that the disagreement between the uncorrelated nuclear matrix elements was limited to 30% or less <cit.>. This is illustrated by figure <ref> and is a very significant improvement over the factor-of-three disagreement in figure <ref>. The matrix element values are fixed by the strength of the —neutron-neutron and proton-proton— pairing interaction. This finding suggests that it is the different way in which the many-body approaches include nuclear structure correlations that is behind much of the disagreement in neutrinoless ββ decay nuclear matrix elements. Proton-neutron pairing has been known to be important for β and ββ decays for a long time <cit.>. Recent shell model calculations based on a separable effective interaction have confirmed this point, showing that nuclear matrix elements are overestimated if proton-neutron pairing —more precisely, isoscalar pairing— is excluded <cit.>. Since at the moment isoscalar pairing is not fully captured by energy density functional and interacting boson model calculations, this may be a reason for the discrepancies shown in figure <ref>. Dedicated studies incorporating these correlations in the corresponding nuclear matrix element calculations are needed. Another cause of missing correlations is the limitation of the configuration space, that is, the set of single-particle orbitals that nucleons are permitted to occupy. 
For instance, the shell model and the interacting boson model only solve the nuclear many-body problem explicitly in a limited configuration space around the Fermi surface, and include the effect of the remaining configurations approximately. In order to quantify the effect of the missing correlations, a many-body perturbation theory estimate found relatively moderate increases of less than 50% for the lightest ββ decay emitters ^48Ca, ^76Ge and ^82Se, for which the estimation is easier <cit.>. A more rigorous estimation is the recent computation of the ββ decay of ^48Ca extending the shell model configuration space from one to two major harmonic oscillator shells —limited to 2ħω excitations— and reducing the core of the shell model calculation from ^40Ca to ^16O. By doing so, many previously excluded configurations were permitted, and the dimension of the problem increased from less than 10^6 to over 10^9. The impact on the nuclear matrix element is, however, moderate, with a 30% enhancement due to previously-missing cross-shell pairing correlations <cit.>. Interestingly, a cancellation occurs between the general enhancement produced by additional pairing correlations and the contribution of particle-hole excitations. A similar cancellation is expected to be at play for other ββ decay candidates as well. Nevertheless, explicit calculations are needed. Shell model calculations in extended configuration spaces can benefit from using the Monte Carlo shell model technique, which recently has studied configuration spaces with dimension over 10^23 <cit.>. Finally, many-body approaches other than those represented in figure <ref> can shed light on ββ decay nuclear matrix elements. 
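Returning to the configuration-space dimensions quoted above: their order of magnitude can be estimated with simple combinatorics, as a product of binomial coefficients counting the ways the valence protons and neutrons can occupy the available single-particle m-states. A rough sketch, assuming standard shell model bookkeeping for the shell sizes (the result is an upper bound illustrating the scaling, not a reproduction of the quoted calculation):

```python
from math import comb

def mscheme_estimate(n_proton_states, n_valence_protons,
                     n_neutron_states, n_valence_neutrons):
    """Upper bound on the M-scheme dimension: all ways of placing the
    valence nucleons in the available single-particle m-states."""
    return (comb(n_proton_states, n_valence_protons)
            * comb(n_neutron_states, n_valence_neutrons))

# ^48Ca in the pf shell (core ^40Ca): 0 valence protons, 8 valence
# neutrons; f7/2 + p3/2 + f5/2 + p1/2 give 8+4+6+2 = 20 m-states.
d_one_shell = mscheme_estimate(20, 0, 20, 8)
print(d_one_shell)        # 125970, i.e. ~10^5

# Opening a second major shell (adding the sd shell, core ^16O:
# 12 extra m-states and 12 extra valence nucleons of each species)
# makes the count explode by many orders of magnitude.
d_two_shells = mscheme_estimate(32, 12, 32, 20)
print(d_two_shells > 1e9)  # True
```

The actual quoted dimensions are smaller because symmetry constraints (total M, parity) and truncations such as 2ħω cut the count down, but the qualitative jump from below 10^6 to beyond 10^9 when enlarging the configuration space is exactly the growth described above.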
In particular, in the last decade nuclear ab initio calculations —those solving the many-body problem for all nucleons in the system, with nuclear forces fitted only to light nuclei— have been able to perform calculations up to medium-mass isotopes, in many cases achieving very good agreement with experiment <cit.>. Even before ab initio calculations are available for ββ decay emitters, these many-body techniques can be used to benchmark ββ decay matrix elements in lighter or less-correlated systems to gain insight on the relevant physics for this process. Not only are ab initio calculations more controlled than the phenomenological ones available so far for ββ decay, but they also in principle allow for the estimation of theoretical nuclear matrix element uncertainties, very valuable information for the interpretation of the experimental results <cit.>. §.§ Two-body currents The nuclear matrix elements discussed so far are based on the standard one-body operator: each weak interaction vertex involves only one nucleon. However, because nucleons are composite particles that interact strongly with each other, the single-β decay hadronic current takes the general form <cit.> J = ∑_i J_i,1b + ∑_i<j J_ij,2b + ⋯ Diagrams involving two nucleons, represented in figure <ref>, are a correction to the leading one-body terms. Two-body corrections have been intensively explored in nuclei lighter than those undergoing ββ decay, where they have been found to be relevant for electromagnetic <cit.> and weak <cit.> transitions. Two-body weak currents have been mostly studied in the context of small momentum transfers, such as single-β decay. Since the calculations are quite demanding, most works have been limited to relatively light nuclei, up to about oxygen, or to relatively rough approximations, such as the normal ordering of the two-body currents with respect to an isospin-symmetric Fermi gas. 
These studies indicate that two-body currents tend to cancel the contribution of the leading one-body operator, which means that they contribute to the so-called "g_A quenching". The "g_A quenching" stands for the empirical fact that nuclear many-body calculations need to reduce the strength of the spin-isospin operator in order to agree with the experimental half-lives of Gamow-Teller β transitions. However, the size of the two-body contributions is unclear, with results ranging from less than 10% for carbon and oxygen isotopes <cit.> to about 30% in larger systems, the latter obtained with the rougher normal-ordering approximation <cit.>. Especially important are the uncertainties in the short-range two-body currents —not shown in figure <ref>. The "g_A quenching" should be carefully studied in lighter systems where dedicated ab initio calculations with different approaches are feasible. In contrast to single-β decay, where the relevant momentum transfer is q∼1 MeV, in neutrinoless ββ decay transferred momenta can reach q∼100 MeV, because of the virtual nature of the exchanged neutrinos. This different momentum-transfer regime can have important consequences for the effect of two-body currents, because several pion-exchange (figure <ref> a) and pion-pole (figure <ref> b) terms enter at finite q <cit.>. Present results show that q-dependent two-body contributions partially cancel other two-body terms <cit.>, resulting in a smaller reduction of Gamow-Teller matrix elements in neutrinoless ββ decay than in single-β and two-neutrino ββ decays. § DARK MATTER SCATTERING OFF NUCLEI §.§ Dark matter-nucleon interactions and scattering cross-section Direct detection dark matter searches are motivated by weakly interacting massive particles (WIMPs), promising dark matter constituents that are predicted to naturally account for the observed dark matter density. The expected WIMP masses are M_χ∼1-1000 GeV, the mass scale of nuclei. 
Therefore, experiments sensitive to these dark matter masses use nuclei as targets. Similarly, lighter masses M_χ∼1 MeV are probed via the scattering of dark matter off electrons <cit.>. WIMPs can interact with nuclei in many ways. However, interactions suppressed by the small WIMP velocities —v/c∼10^-3— or by the momentum transfer of the scattering —much smaller than M_χ and the nucleon mass— are commonly not considered. This leads to two possibilities <cit.>: the direct coupling of the dark matter and nuclear densities, called spin-independent scattering, and the coupling of the dark matter and nuclear spins, referred to as spin-dependent scattering. Spin-independent scattering is favored from the nuclear physics side because it is coherent: at vanishing momentum transfer it receives contributions from all nucleons in the nucleus. In contrast, in spin-dependent responses, on average only the spin of one nucleon contributes, because the spins of pairs of nucleons tend to couple to spin zero due to the pairing interaction. Other things being equal, coherent scattering is therefore expected to be enhanced by a factor A^2 with respect to the spin-dependent case. A more complete description of the possible WIMP interactions with nuclei has been worked out in a nonrelativistic effective field theory (EFT) <cit.>. By constructing all possible interactions that can be built from the WIMP and nucleon spins, the momentum transfer and the WIMP's relative velocity, a set of independent operators 𝒪_i is derived at the nucleon level. At the nuclear level, however, not all operators 𝒪_i leave distinct signatures, because there are only six independent nuclear responses, which may interfere among themselves. In addition, the interactions of dark matter with two nucleons need to be considered <cit.>, see figure <ref>. 
Two-body currents can contribute significantly to dark matter scattering, both for a coherent response and for spin-dependent interactions. In general, the WIMP-nucleus cross-section can be written as <cit.> dσ_χ𝒩/d𝐪^2∝|∑_i c_iζ_i ℱ_i |^2 +|∑_i ĉ_iζ̂_i ℱ̂_i |^2+⋯ Here ζ,ζ̂ are kinematic factors, c,ĉ encode the hadronic physics and particle physics —for instance the Wilson coefficients coupling WIMPs with quarks and gluons— and ℱ,ℱ̂ represent the nuclear physics: their squares give the nuclear structure factors. As shown in Eq. (<ref>), contributions may or may not interfere. In particular, the usual spin-independent and spin-dependent terms do not interfere. §.§ Coherent (spin-independent) scattering Spin-independent —coherent— scattering can be generalized by considering all one-nucleon operators proposed by the nonrelativistic EFT. The most relevant terms are characterized by an enhancement of the cross-section, driven by the nuclear physics, and reflected in the structure factors ℱ^2,ℱ̂^2. In the nonrelativistic EFT there are two nuclear responses which can be coherent <cit.>. First, the standard spin-independent response, denoted as M, corresponding to the operator 𝒪_1 and other subleading operators. Second, the nuclear response associated with the operator 𝒪_3, denoted by Φ”, which is partially coherent. In this case, all nucleons with spin aligned with the angular momentum l contribute coherently. Nucleons with spin antiparallel to l cancel this contribution, but single-particle states with parallel spin are lowered in energy due to the nuclear spin-orbit force, leaving part of the antiparallel-spin states empty and preventing a complete cancellation. Since this coherent term is spin dependent, the standard terminology must be generalized: it is more appropriate to speak of coherent scattering instead of spin-independent scattering. Besides the nuclear structure aspects —coherence— the hadronic physics is also crucial to set the hierarchy among the different terms. 
Chiral EFT <cit.>, an effective theory of the underlying interaction that binds nucleons, quantum chromodynamics (QCD), is valid at the energy and momentum scales of WIMP scattering and incorporates the physics of the chiral symmetry of QCD. Chiral EFT also describes consistently the interactions with external probes <cit.>. By formulating the WIMP-nucleon interactions in the chiral EFT framework, including scalar, pseudoscalar, vector and axial contributions in the WIMP and hadronic sectors, the leading operators are predicted and can be matched into the nonrelativistic EFT basis <cit.>. In addition, chiral EFT predicts the consistent interactions of WIMPs with two nucleons —two-body currents. The relative importance of the different contributions can be studied by calculating the corresponding structure factors for the one- and two-body operators, assuming similar contributions from the particle physics Wilson coefficients.Figure <ref> shows the structure factors for the most important one- and two-nucleon contributions to the coherent WIMP scattering off ^132Xe. This is the most abundant isotope in xenon, the target used in experiments giving the present best limits on WIMP-nucleus scattering. Solid lines in figure <ref> represent the most important individual contributions, which are the isoscalar 𝒪_1 term —routinely used in experimental analyses— its isovector counterpart —with opposite coupling to protons and neutrons, usually included to account for apparently conflicting results in experiments using different isotopes— and two different couplings of the WIMP to two nucleons: through a scalar coupling, and through a coupling to the trace anomaly of the energy-momentum tensor —θ term. 
As a result, the following extension of the cross-section for spin-independent scattering is proposed <cit.> dσ_χ𝒩^SI/d𝐪^2 =1/4π𝐯^2|c_+^Mℱ_+^M(𝐪^2)+c_-^Mℱ_-^M(𝐪^2) +c_πℱ_π(𝐪^2)+c_π^θℱ_π^θ(𝐪^2)|^2, with each c coefficient sensitive to a different corner of the parameter space of new-physics models. The dashed lines of figure <ref> take into account the interferences with the dominant term. When these are included, two additional contributions appear: radius, or momentum-dependent, corrections to the leading operator —that nevertheless probe a different combination of new-physics parameters— and the partially coherent operator 𝒪_3, which turns out to give the leading correction among all the one-nucleon operators proposed in the nonrelativistic EFT. A further generalization of the coherent cross-section is therefore suggested <cit.>: dσ_χ𝒩^SI/d𝐪^2=1/4π𝐯^2 |(c_+^M-𝐪^2/m_N^2 ċ_+^M)ℱ_+^M(𝐪^2)+c_πℱ_π(𝐪^2) +c_π^θℱ_π^θ(𝐪^2)+(c_-^M-𝐪^2/m_N^2 ċ_-^M)ℱ_-^M(𝐪^2) +𝐪^2/2m_N^2[c_+^Φ”ℱ_+^Φ”(𝐪^2)+c_-^Φ”ℱ_-^Φ”(𝐪^2)]|^2. Note that not all terms are independent, as for instance for a Majorana (Dirac) spin 1/2 WIMP there are only 4 (7) independent Wilson coefficients, so that a correlated analysis of several experiments would be required. A more practical analysis with data from a single experiment and taking limits on one operator at a time —for instance on Eq. (<ref>)— should take this carefully into account. §.§ Spin-dependent and inelastic scattering Coherent —spin-independent— scattering is not very sensitive to the detailed nuclear structure of the target nuclei. This is because at vanishing momentum transfer, the structure factor of the leading term is ℱ^M_+ = A^2, simply counting the number of nucleons. The momentum-transfer dependence is given by the nuclear density <cit.>. 
In contrast, a careful nuclear structure calculation is needed for spin-dependent scattering, which is very sensitive to the nuclear spin distribution among all nucleons <cit.>. Only odd-A nuclei are sensitive to spin-dependent interactions: stable even-A nuclei have spin zero due to nuclear pairing. Therefore, for a given nucleus this interaction is mostly sensitive to the nucleon species present in odd number, either protons or neutrons. For the odd-A xenon isotopes ^129Xe and ^131Xe, with atomic number Z=54, neutrons carry most of the spin, and the so-called "structure factor for neutrons" is orders of magnitude larger than the "structure factor for protons". This separation is actually not general, as it simply refers to different combinations of isoscalar —equal for neutrons and protons— or isovector —opposite— couplings. When considering only one-nucleon operators, however, the separation is valid at vanishing momentum transfer <cit.>: ℱ_1b^SD(q=0)∝|(a_0+a_1)⟨ S_p⟩+(a_0-a_1)⟨ S_n⟩|^2, with a_0/1 the isoscalar/isovector couplings and ⟨ S_p/n⟩ the proton/neutron spin expectation values. Two-body currents spoil the validity of this separation. As illustrated in figure <ref>, two-nucleon interactions do not distinguish between neutrons and protons, and therefore it is not possible to disentangle proton and neutron contributions. The structure factor can be generalized as <cit.> ℱ_1b+2b^SD(q=0)∝|(a_0+a_1[1+δ])⟨ S_p⟩ +(a_0-a_1[1+δ])⟨ S_n⟩|^2, where δ∼-0.2 <cit.> encodes the two-nucleon contributions and can be calculated with chiral EFT. As a result, with two-body currents, the so-called "structure factor for protons" —the coupling combination in Eq. (<ref>) for which only ⟨ S_p⟩ contributes at the one-body level— is also sensitive to neutrons, and increases by over an order of magnitude with respect to the one-nucleon case because for xenon isotopes ⟨ S_n⟩≫⟨ S_p⟩. 
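A small numerical sketch of the generalized structure factor above, using illustrative spin expectation values ⟨S_n⟩≈0.3 and ⟨S_p⟩≈0.01 for a neutron-odd xenon isotope (these numbers are assumptions, not taken from the text), and choosing the couplings so that only ⟨S_p⟩ contributes at the one-body level:

```python
def sd_structure_factor(a0, a1, Sp, Sn, delta=0.0):
    """|(a0 + a1[1+delta]) <S_p> + (a0 - a1[1+delta]) <S_n>|^2,
    the q=0 expression above; delta=0 recovers the one-body-only case."""
    a1_eff = a1 * (1.0 + delta)
    return ((a0 + a1_eff) * Sp + (a0 - a1_eff) * Sn) ** 2

Sp, Sn = 0.01, 0.3      # assumed: neutrons carry most of the spin
a0, a1 = 0.5, 0.5       # "proton" coupling: a0 - a1 = 0 removes <S_n> at one body

f_1b = sd_structure_factor(a0, a1, Sp, Sn)              # only <S_p> enters
f_2b = sd_structure_factor(a0, a1, Sp, Sn, delta=-0.2)  # <S_n> leaks in

print(f_2b / f_1b)       # more than an order of magnitude larger
```

With the two-body correction the "proton" structure factor inherits a ⟨S_n⟩ term of size |a_1 δ|, which dominates whenever ⟨S_n⟩≫⟨S_p⟩: this is the mechanism behind the order-of-magnitude increase described above.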
This has important practical consequences, because it makes the exclusion limits obtained in experiments using xenon —which is more sensitive to neutrons— competitive in "proton cross-sections" with searches using target nuclei with an odd number of protons —thus more sensitive to protons— such as fluorine <cit.>. Once dark matter has been detected, the next step will be to address the nature of the dark matter-nucleus interaction. Spin-dependent scattering can be useful in this respect, because it could be observed in both the elastic and inelastic channels. The experimental inelastic signature is distinct from the elastic one —the nucleus γ decays to the ground state— and can be realized if the target nucleus has low-lying excited states, such as the 40 keV and 80 keV first excited states in ^129Xe and ^131Xe <cit.>. For coherent scattering, the inelastic channel is always suppressed by a factor A^2 with respect to the elastic channel <cit.>, making it presently undetectable in practice. Therefore, the observation of an inelastic signal would clearly point to a spin-dependent interaction. § SUMMARY Impressive experimental efforts are being made to unveil the nature of neutrinos and dark matter in low-energy experiments using nuclei as a source or target. To make the most of these searches, comparable theoretical efforts are needed to understand the nuclear physics driving these processes. Neutrinoless ββ decay nuclear matrix element calculations differ, but the nuclear structure correlations to which the decay is most sensitive have been identified, and calculations in larger configuration spaces are underway. The effect of the weak interaction involving two nucleons can also be significant, and explains part of the so-called "g_A quenching". Ab initio calculations in lighter systems can be performed to fully understand this "quenching". Analyses of dark matter searches should consider all possible interactions of WIMPs with nuclei. 
In particular, the coupling to two nucleons can have a significant impact on both coherent and spin-dependent scattering. The observation of inelastic scattering is a promising way to determine the nature of the dark matter interaction with nuclei. § ACKNOWLEDGEMENTS I would like to thank my collaborators J. Engel, D. Gazit, N. Hinohara, M. Hoferichter, Y. Iwata, P. Klos, G. Martínez-Pinedo, T. Otsuka, A. Poves, T. R. Rodríguez, N. Shimizu, A. Schwenk, Y. Utsuno and L. Vietze for very enlightening discussions and for making use of our common work for these proceedings. This work has been supported by an International Research Fellowship of the Japan Society for the Promotion of Science (JSPS) and JSPS Grant-in-Aid for Scientific Research No. 26·04323.
Avignone08 F. T. Avignone III, S. R. Elliott, and J. Engel, Rev. Mod. Phys. 80, 481 (2008)
Baudis:2016qwx L. Baudis, J. Phys. G 43, 044001 (2016)
KamLAND-Zen:2016pfg A. Gando et al. [KamLAND-Zen Collaboration], Phys. Rev. Lett. 117, 082503 (2016)
Akerib:2015rjg D. S. Akerib et al. [LUX Collaboration], Phys. Rev. Lett. 116, 161301 (2016)
Engel:2016xgb J. Engel and J. Menéndez, arXiv:1610.06548
Barabash15 A. Barabash, Nucl. Phys. A 935, 52 (2015)
Vaquero13 N. López Vaquero, T. R. Rodríguez and J. L. Egido, Phys. Rev. Lett. 111, 142501 (2013)
Yao15 J. M. Yao, L. S. Song, K. Hagino, P. Ring and J. Meng, Phys. Rev. C 91, 024316 (2015)
Yao16 J. M. Yao and J. Engel, Phys. Rev. C 94, 014306 (2016)
Hyvarinen15 J. Hyvärinen and J. Suhonen, Phys. Rev. C 91, 024613 (2015)
Simkovic13 F. Šimkovic, V. Rodin, A. Faessler and P. Vogel, Phys. Rev. C 87, 045501 (2013)
Fang15 D. L. Fang, A. Faessler and F. Šimkovic, Phys. Rev. C 92, 044301 (2015)
Mustonen13 M. T. Mustonen and J. Engel, Phys. Rev. C 87, 064302 (2013)
Barea15 J. Barea, J. Kotila and F. Iachello, Phys. Rev. C 91, 034304 (2015)
Horoi16 M. Horoi and A. Neacsu, Phys. Rev. C 93, 024308 (2016)
Menendez09 J. Menéndez, A. Poves, E. Caurier and F. Nowacki, Nucl. Phys. A 818, 139 (2009)
Iwata16 Y. Iwata, N. Shimizu, T. Otsuka, Y. Utsuno, J. Menéndez, M. Honma and T. Abe, Phys. Rev. Lett. 116, 112502 (2016)
Freeman12 S. J. Freeman and J. P. Schiffer, J. Phys. G: Nucl. Part. Phys. 39, 124004 (2012)
Caurier90 E. Caurier, A. P. Zuker, and A. Poves, Phys. Lett. B 252, 13 (1990)
Balysh96 A. Balysh et al., Phys. Rev. Lett. 77, 5186 (1996)
Nemo316 R. Arnold et al. [NEMO-3 Collaboration], Phys. Rev. D 93, 112008 (2016)
Caurier12 E. Caurier, F. Nowacki, and A. Poves, Phys. Lett. B 711, 62 (2012)
Horoi13 M. Horoi and B. A. Brown, Phys. Rev. Lett. 110, 222502 (2013)
Vogel86 P. Vogel and M. R. Zirnbauer, Phys. Rev. Lett. 57, 3148 (1986)
Menendez14 J. Menéndez, T. R. Rodríguez, G. Martínez-Pinedo and A. Poves, Phys. Rev. C 90, 024311 (2014)
Menendez16 J. Menéndez, N. Hinohara, J. Engel, G. Martínez-Pinedo and T. R. Rodríguez, Phys. Rev. C 93, 014305 (2016)
Holt13 J. D. Holt and J. Engel, Phys. Rev. C 87, 064315 (2013)
Kwiatkowski14 A. A. Kwiatkowski et al., Phys. Rev. C 89, 045502 (2014)
Togashi16 T. Togashi, Y. Tsunoda, T. Otsuka and N. Shimizu, Phys. Rev. Lett. 117, 172502 (2016)
Hebeler15 K. Hebeler, J. D. Holt, J. Menéndez and A. Schwenk, Ann. Rev. Nucl. Part. Sci. 65, 457 (2015)
Park:2002yp T. S. Park et al., Phys. Rev. C 67, 055206 (2003)
MGS2011 J. Menéndez, D. Gazit, and A. Schwenk, Phys. Rev. Lett. 107, 062501 (2011)
Hoferichter:2015ipa M. Hoferichter, P. Klos and A. Schwenk, Phys. Lett. B 746, 410 (2015)
Menendez:2016kkg J. Menéndez, arXiv:1605.05059
Prezeau:2003sv G. Prézeau, A. Kurylov, M. Kamionkowski and P. Vogel, Phys. Rev. Lett. 91, 231301 (2003)
Cirigliano:2012pq V. Cirigliano, M. L. Graesser and G. Ovanesyan, JHEP 1210, 025 (2012)
Menendez:2012tm J. Menéndez, D. Gazit and A. Schwenk, Phys. Rev. D 86, 103511 (2012)
Hoferichter:2016nvd M. Hoferichter, P. Klos, J. Menéndez and A. Schwenk, Phys. Rev. D 94, 063505 (2016)
Bacca:2014tla S. Bacca and S. Pastore, J. Phys. G 41, 123002 (2014)
Gazit09 D. Gazit, S. Quaglioni, and P. Navrátil, Phys. Rev. Lett. 103, 102502 (2009)
Ekstrom:2014iya A. Ekström et al., Phys. Rev. Lett. 113, 262504 (2014)
eng14 J. Engel, F. Šimkovic and P. Vogel, Phys. Rev. C 89, 064308 (2014)
Engel:1992bf J. Engel, S. Pittel and P. Vogel, Int. J. Mod. Phys. E 1, 1 (1992)
Fitzpatrick:2012ix A. L. Fitzpatrick, W. Haxton, E. Katz, N. Lubbers and Y. Xu, JCAP 1302, 004 (2013)
Anand:2013yka N. Anand, A. L. Fitzpatrick and W. C. Haxton, Phys. Rev. C 89, 065501 (2014)
Epelbaum:2008ga E. Epelbaum, H. W. Hammer and U.-G. Meißner, Rev. Mod. Phys. 81, 1773 (2009)
Machleidt:2011zz R. Machleidt and D. R. Entem, Phys. Rept. 503, 1 (2011)
Vietze:2014vsa L. Vietze, P. Klos, J. Menéndez, W. C. Haxton and A. Schwenk, Phys. Rev. D 91, 043520 (2015)
Klos:2013rwa P. Klos, J. Menéndez, D. Gazit and A. Schwenk, Phys. Rev. D 88, 083516 (2013)
Akerib_sd16 D. S. Akerib et al. [LUX Collaboration], Phys. Rev. Lett. 116, 161302 (2016)
Baudis:2013bba L. Baudis, G. Kessler, P. Klos, R. F. Lang, J. Menéndez, S. Reichard and A. Schwenk, Phys. Rev. D 88, 115014 (2013)
McCabe:2015eia C. McCabe, JCAP 1605, 033 (2016)
http://arxiv.org/abs/1703.08921v1
{ "authors": [ "Javier Menéndez" ], "categories": [ "nucl-th", "astro-ph.CO", "hep-ex", "hep-ph", "nucl-ex" ], "primary_category": "nucl-th", "published": "20170327040247", "title": "Nuclear physics insights for new-physics searches using nuclei: Neutrinoless $ββ$ decay and dark matter direct detection" }
[footnoteinfo]This work was supported by the Australian Research Council's Discovery Projects funding scheme under Project DP130101658 and Laureate Fellowship FL110100020. 1]Qi Yu 1]Daoyi Dong 1]Ian R. Petersen 1,2]Qing Gao [1]School of Engineering and Information Technology, University of New South Wales, Canberra, ACT 2600, Australia (e-mail: qi.yu@student.adfa.edu.au; i.r.petersen@gmail.com; daoyidong@gmail.com) [2]Department of Mechanical and Biomedical Engineering, City University of Hong Kong, Hong Kong SAR, China (e-mail: qing.gao.chance@gmail.com) A filtering problem for a class of quantum systems disturbed by a classical stochastic process is investigated in this paper. The classical disturbance process, which is assumed to be described by a linear stochastic differential equation, is modeled by a quantum cavity model. The hybrid quantum-classical system is then described by a combined quantum system consisting of two quantum cavity subsystems. Quantum filtering theory and a quantum extended Kalman filter method are employed to estimate the states of the combined quantum system. An estimate of the classical stochastic process is derived from the estimate of the combined quantum system. The effectiveness and performance of the proposed methods are illustrated by numerical results. Keywords: quantum filtering, hybrid quantum-classical system, quantum extended Kalman filter. § INTRODUCTION Characterizing unknown quantum states has been a fundamental task in quantum computation, quantum metrology and quantum control. To estimate an unknown static quantum state, state tomography methods such as maximum likelihood estimation (<cit.>), Bayesian mean estimation (<cit.>) and linear regression estimation (<cit.>) have been developed. For estimating a dynamic quantum state, a quantum filtering theory has been developed (<cit.>). Quantum filtering theory was introduced by Belavkin in the 1980s, as documented in a series of articles (<cit.>).
The basic premise is to build a non-commutative counterpart of classical probability theory so that approaches to deriving the classical filtering equation can be adapted to quantum dynamical systems. The main difference between this theory and classical filtering theory is that non-commutative observables in quantum systems cannot be jointly represented on a single classical probability space. Quantum filtering theory enables us to optimally estimate the quantum system state using non-demolition measurements. It plays a crucial role in many areas such as quantum control (<cit.>, <cit.>). Recently, quantum filtering theory has been successfully applied in experimental designs such as trapped ions <cit.>, cavity QED systems <cit.>, and optomechanical systems <cit.>. In practice, physical quantum systems are unavoidably affected by classical signals <cit.>, and a number of researchers are becoming interested in the filtering problem for `hybrid' quantum-classical systems where the quantum systems are subject to a classical process. Relevant results can be found in, e.g., Tsang's work on quantum smoothing (<cit.>), where a concept of hybrid quantum-classical density operator was used as the main technical tool. Recently, <cit.> developed a quantum-classical Bayesian inference approach to solve fault-tolerant quantum filtering and fault detection problems for a class of quantum optical systems subject to stochastic faults. In this paper, we extend the previous work <cit.> to the case where the disturbance process has a continuous value space, and our main goal is to estimate both the quantum state and the classical process using non-demolition quantum measurements. We consider a system-probe model with a time-varying Hamiltonian that depends on a classical stochastic process. This hybrid quantum-classical stochastic system is analyzed by building a quantum analog of the classical stochastic process; see also <cit.>.
The idea of using an artificial quantum system to model noise has been considered before <cit.>. However, that author only considers the disturbance to be quantum noise. In our case, quantum filtering theory can then be utilized to investigate the filtering problem. The estimation tasks are accomplished by using a quantum extended Kalman filter (QEKF) approach. The structure of this paper is as follows. In Section <ref>, we briefly introduce quantum probability theory and quantum filtering theory. Section <ref> is devoted to the modeling of the classical signal using a quantum cavity model. A stochastic master equation (SME) is then obtained to solve the filtering problem. A QEKF approach is also employed to estimate both the quantum state and the classical process in Section 4. In Section <ref>, we present a numerical example to demonstrate the performance and also compare the QEKF algorithm with the SME method. Section <ref> concludes this paper. Notation: A_m× n denotes an m × n matrix; A^† denotes the conjugate transpose of A; A^⊤ is the transpose of A; A^* is the conjugate of A; Tr(A) is the trace of A; X is used to denote any operator and x is a vector of such operators; ρ is a density operator representing a quantum state; â is the estimate of a; i denotes the imaginary unit, i.e., i=√(-1). § PRELIMINARIES §.§ Quantum probability theory We briefly present a preliminary discussion on quantum probability theory. For a detailed treatment, one can refer to the paper <cit.>. Denote the Hilbert space under consideration as ℋ. The system observables, which represent the physical properties of the system, are represented by self-adjoint operators on ℋ. The quantum state, which describes the status of a physical system, is specified by a density operator ρ∈𝒮(ℋ), where 𝒮 is the class of unit trace operators on the associated Hilbert space <cit.>. In this paper, the evolution of the quantum system is mostly described in the Heisenberg picture.
That means that any system observable evolves with time as A(t)= U(t)^† A U(t) while the density operator ρ remains unchanged. Then any simple measurement of A(t) yields values within the spectrum of A(t) with a certain probability distribution, and the expectation of the measurement is given by ⟨ A(t) ⟩=Tr[ρ A(t)] <cit.>. The key point of the quantum probability formalism is that any single realization of a quantum measurement corresponds to a particular choice of a commutative *-algebra of observables, and any commutative *-algebra is equivalent to a classical (Kolmogorov) probability space <cit.>. For the finite-dimensional case, the set spec(A)= {a_j} of eigenvalues of A is called the spectrum of A, and A can be written as A = ∑_a∈spec(A)aP_a, where P_a is the projection operator of A. The following theorem has been presented in <cit.>. <cit.> (Spectral theorem, finite-dimensional case). Let 𝒜 be a commutative *-algebra of operators on a finite-dimensional Hilbert space, and let ℙ be a state on 𝒜. Then there is a probability space (Ω,ℱ,P) and a map ι from 𝒜 onto the set of measurable functions on Ω that is a *-isomorphism, i.e., a linear bijection with ι (AB) = ι(A) ι(B) (pointwise) and ι(A^*)=ι(A)^*, and moreover ℙ(A)=E_P(ι(A)). For the infinite-dimensional case, a system operator can be expressed in terms of its spectral measure by A = ∫_R λ P_A (dλ). The corresponding spectral theorem for the infinite-dimensional case is stated as follows <cit.>: <cit.> (Spectral Theorem). Let 𝒞 be a commutative von Neumann algebra. Then there is a measure space (Ω, ℱ,μ) and a *-isomorphism ι from 𝒞 to L^∞(Ω, ℱ,μ), the algebra of bounded measurable complex functions on Ω up to μ - a.s. equivalence.
Moreover, a normal state ℙ on 𝒞 defines a probability measure P, which is absolutely continuous with respect to μ, such that ℙ(C)=E_P(ι(C)) for all C ∈𝒞. The spectral theorem above allows us to treat any set of commutative observables as a set of classical random variables defined on a single classical probability space. In other words, any quantum probabilistic concept can be directly extended to its classical counterpart. Therefore, classical statistical analysis methods can be applied directly in analyzing quantum systems. The following concept of quantum conditional expectation is defined in a similar way to classical conditional expectation and is very useful in quantum filtering theory <cit.>. <cit.> (Conditional expectation). Let (𝒩,ℙ) be a quantum probability space and let 𝒜⊂𝒩 be a commutative von Neumann subalgebra. Then the map ℙ(.|𝒜): 𝒜^'→𝒜 is called (a version of) the conditional expectation from 𝒜^' onto 𝒜 if ℙ(ℙ(B|𝒜)A)=ℙ(BA) for all A∈𝒜, B∈𝒜^'. Here 𝒜^' is used to denote the commutant of 𝒜. ℙ(B|𝒜) is the projection of B onto the algebra 𝒜 and represents the maximum information about B that can be extracted from the observation 𝒜. §.§ Quantum filtering theory We use quantum stochastic differential equations (QSDEs) to describe the dynamics of an open quantum system with driving noises. Three fundamental noise processes are described using the annihilation process A_t, the creation process A_t^* and the Poisson (conservation) process Λ_t. The quantum Itô integral is defined for the calculation of a quantum stochastic integral. With a corresponding conditional quantum expectation, we can estimate an arbitrary quantum observable A(t) which commutes with the observation process Y(t). That means A(t)∈𝒴_t^' <cit.>. A typical scenario in quantum optics demonstrating quantum filtering theory is a collection of atoms interacting with an electromagnetic field that is assumed to be in a vacuum state.
The quantum dynamics of the atomic system is described by the following quantum stochastic differential equation <cit.>: dU_t = { LdA_t^* - L^* dA_t - 1/2L^*Ldt -iHdt }U_t, U_0 = I, which is driven by the noncommuting white-noise processes A_t and A_t^*. The evolution of a system observable X is given by X → U^*(t)(X⊗ I)U(t). Then X(t), which is denoted by j_t(X), satisfies: dj_t(X)= j_t (ℒ_L,H(X))dt + j_t ([L^*,X])dA_t + j_t ([X,L])dA_t^*, where ℒ is the quantum Lindblad generator <cit.> such that ℒ_L,H(X)=i[H,X] + L^*XL - 1/2(L^*LX+XL^*L). There are two main types of measurement in quantum optics: homodyne detection and photon counting measurement. In our case, we adopt the homodyne detection scheme. The dynamic equation of the observation is dY_t= j_t(L+L^*)dt +dA_t +dA^*_t. Quantum filtering theory aims to provide an optimal estimate of any system observable using the observation process. From Section 2.1, this can be achieved if one can calculate the recursive equation satisfied by the conditional expectation π_t(X)=ℙ(j_t(X)|𝒴_t). This recursive quantum stochastic equation is then the quantum filter we obtain. Using the reference probability method or the characteristic function method, one has <cit.> dπ_t(X)=π_t (ℒ_L,H(X))dt + (π_t(L^*X+XL)-π_t(L^* + L)π_t(X))(dY_t-π_t(L^* +L)dt), or its SME form dρ_t = -i[H,ρ_t]dt + (Lρ_tL^* - 1/2L^*Lρ_t - 1/2ρ_tL^*L)dt + (Lρ_t + ρ_t L^* -Tr[(L+L^*)ρ_t]ρ_t)dW_t, where the stochastic process dW_t=dY(t)-Tr[(L+L^*)ρ_t]dt is a standard Wiener process. Equations (<ref>) and (<ref>) are quantum filter equations for open systems whose dynamics can be described by (<ref>) and (<ref>). § DESCRIPTION OF HYBRID QUANTUM-CLASSICAL SYSTEM In this paper, we consider a quantum cavity system disturbed by a classical diffusion stochastic process; see the schematic in Fig. 1.
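As a quick numerical sanity check on the SME form of the filter from Section 2.2, one can integrate it with an Euler-Maruyama scheme. The sketch below (Python/numpy; the qubit H and L are illustrative assumptions, not the cavity model of this paper) exploits the fact that every increment of the SME is traceless and Hermitian, so Tr ρ_t = 1 and ρ_t = ρ_t^† should hold up to roundoff along any simulated path.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, n_steps = 1e-4, 5000

# illustrative two-level choices for H and L (assumptions for this sketch)
H = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)   # (1/2) sigma_x
L = np.array([[0, 1], [0, 0]], dtype=complex)         # lowering operator
rho = np.array([[1, 1], [1, 1]], dtype=complex) / 2   # (I + sigma_x) / 2

for _ in range(n_steps):
    dW = np.sqrt(dt) * rng.standard_normal()          # innovation increment
    Ld = L.conj().T
    # Lindblad term: L rho L* - (1/2){L* L, rho}
    lind = L @ rho @ Ld - 0.5 * (Ld @ L @ rho + rho @ Ld @ L)
    # innovation term: L rho + rho L* - Tr[(L + L*) rho] rho
    innov = L @ rho + rho @ Ld - np.trace((L + Ld) @ rho) * rho
    rho = rho + (-1j * (H @ rho - rho @ H) + lind) * dt + innov * dW

trace_err = abs(np.trace(rho) - 1.0)
herm_err = np.abs(rho - rho.conj().T).max()
```

Both errors stay at roundoff level because each increment of the SME is exactly traceless and Hermitian.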
The classical disturbance process is assumed to evolve according to the following first-order linear stochastic differential equation (SDE) d q = -uq dt -vdw_t, where w_t is a classical Wiener process with zero mean and unit variance; u and v are arbitrary real numbers, while u is assumed to be positive. The disturbance signal influences the cavity system S_1 by changing its Hamiltonian such that H_1(t) =q(t)a^†(t) a(t), where a(t) is the annihilation operator of cavity system S_1. In our case, we consider a cavity mode disturbed by an external signal <cit.>. The dynamics of this hybrid system is different from that in the standard quantum filtering problem. Later we will show how to transform the problem into a standard quantum filtering problem so that the filtering equations (<ref>) and (<ref>) apply. Rather than using the hybrid quantum-classical density operator method in <cit.> and <cit.>, we build a quantum analog of the classical stochastic process and use quantum probability theory to analyze the combined quantum system consisting of two quantum subsystems. To be specific, we consider a cavity system with a quantum disturbance as in Fig. <ref>, where the quantum disturbance system S_2 is used to model the classical disturbance signal q. The corresponding analogous quantum signal with respect to q is Q_2=(b +b^†)/2, which is a real quadrature of the system S_2. That is, we write q ∼Q_2/α, where α is a scalar that depends on the dynamic equations of q and Q_2. Then we can obtain an estimate of q by using the relationship q̂=π_t(Q_2)/α=Tr[Q_2ρ̂_2]/α, where π_t(Q_2)=ℙ(Q_2|𝒴_t) represents the quantum estimate of Q_2 given the measurement 𝒴_t. We assume that the Hamiltonians of system S_2 and system S_1 are H_2 = 0 and H_1 =Q_2a^† a/α, respectively. The coupling operators are L_2 = √(K_2) b and L_1 = √(K_1) a, where K_1,K_2 > 0 are parameters which indicate the coupling strength to each channel.
Open quantum systems with multiple field channels can be characterized by the parameter list G=(S,L,H), where S is a scattering matrix which satisfies S^† S=I, L is the coupling vector that specifies the interface between the system and the fields, and H is the Hamiltonian of the system. For the combined system S_c, the (S,L,H) model <cit.> is given as S= I, H= H_1 + H_2 =Q_2/αa^† a, L= ( [ √(K_1) a; √(K_2) b ]). We have now obtained a quantum system S_2 as the analogue of the classical disturbance system. The classical signal q is now equivalently represented by Q_2/α=(b +b^†)/2α. We can derive the stochastic properties of q once we obtain an estimate of Q_2. Moreover, we have obtained a model for the combined system S_c consisting of subsystems S_1 and S_2. Note that [b,H] = [b, b+b^†/2αa^† a] =[b, b+b^†/2α]a^† a=[b, b^†]a^† a/2α=a^† a/2α, and that [a,H] = [a, b+b^†/2αa^† a] =b+b^†/2α[a, a^† a]=b+b^†/2α a=Q_2 a/α. Then the QSDEs used to describe the disturbance system S_2 and the cavity system S_1 are listed below: db= - K_2/2 bdt - ia^† a/2α dt - √(K_2)d w_2, d b^† = - K_2/2 b^† dt + ia^† a/2α dt - √(K_2)d w_2^†, da= - K_1/2 adt - iQ_2a/α dt - √(K_1)d w_1, d a^† = - K_1/2 a^† dt + ia^†Q_2/α dt - √(K_1)d w_1^†. The real quadratures corresponding to the position and momentum of the two systems, respectively, are: Q_1=a+a^†/2, P_1=a-a^†/2i, [Q_1,P_1]=-i/2, Q_2=b+b^†/2, P_2=b-b^†/2i, [Q_2,P_2]=-i/2. From Q_2 ∼α q, dQ_2 = - K_2/2 Q_2 dt - √(K_2)/2(dw_2+dw_2^*), dq = -uqdt-vdw_t, we have α = √(2u)/2v, K_2=2u =4(α v)^2. The fact K_2 > 0 requires u>0. A vector of operators is defined to describe the combined system S_c: x = [ x_1; x_2; x_3; x_4 ]= [ Q_1; P_1; Q_2; P_2 ].
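The identification above can be exercised numerically. The sketch below (Python/numpy; the values u = 1/4, v = 1/8 used later in the numerical example are taken here as illustrative assumptions) checks the algebra K_2 = 2u = 4(αv)^2 and verifies, via an Euler-Maruyama simulation, that the classical process dq = -uq dt - v dw_t reaches the stationary variance v^2/(2u) implied by its drift and diffusion coefficients.

```python
import numpy as np

u, v = 0.25, 0.125                 # dq = -u q dt - v dw_t (illustrative values)
alpha = np.sqrt(2 * u) / (2 * v)   # alpha = sqrt(2u) / (2v)
K2 = 2 * u

# algebraic consistency of the identification K_2 = 2u = 4 (alpha v)^2
assert abs(K2 - 4 * (alpha * v) ** 2) < 1e-12

# Euler-Maruyama check: the OU process has stationary variance v^2 / (2u)
rng = np.random.default_rng(1)
dt, T, n_paths = 0.01, 40.0, 20000
q = np.zeros(n_paths)
for _ in range(int(T / dt)):
    q += -u * q * dt - v * np.sqrt(dt) * rng.standard_normal(n_paths)
var_emp, var_theory = q.var(), v ** 2 / (2 * u)
```

With these values one finds alpha = 2*sqrt(2) and K_2 = 0.5, matching the settings of the numerical example.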
The QSDE satisfied by x is d x = f(x) dt+G dz_x + G^* d z_x^*, where z_x = [ w_1; w_2 ], z_x^*= [ w_1^†; w_2^† ]; G =[ -√(K_1)/2 0; -√(K_1)/2 i 0; 0 -√(K_2)/2; 0 -√(K_2)/2 i ], G^* = [ -√(K_1)/2 0; -i √(K_1)/2 0; 0 -√(K_2)/2; 0 -i √(K_2)/2 ]; f(x) = [ -K_1/2 x_1+ x_2x_3/α; -K_1/2 x_2-x_1x_3/α; -K_2/2 x_3; -K_2/2 x_4-1/2α x_1^2-1/2α x_2^2-1/4α ]. The homodyne detection method is used to continuously monitor the scattered field from the cavity S_1, which generates an observation process satisfying d y = (L_1 + L_1^† )dt + dw_y +dw_y^*. Let C = (2 √(K_1) 0 0 0), h(x)=Cx and dz_y=dw_y +dw_y^*. Equation (<ref>) can then be rewritten in the following compact form: d y = h(x) dt + dz_y. The evolution of the system S_c in the Heisenberg picture can thus be described by the diffusive QSDEs (<ref>) and (<ref>). Since the combined system consists of two quantum subsystems, quantum filtering theory can be directly applied to the combined system, and the standard quantum filter is described by the stochastic master equation (SME) dρ_t =-i[H,ρ_t]dt + (Lρ_tL^* - 1/2L^*Lρ_t - 1/2ρ_tL^*L)dt + (Lρ_t + ρ_t L^* -Tr[(L+L^*)ρ_t]ρ_t)(dY(t)-Tr[(L+L^*)ρ_t]dt), where the corresponding H and L are given in (<ref>). § EXTENDED KALMAN FILTER The fact that the computation time in simulating the filter SME scales exponentially with the dimension of the Hilbert space makes it difficult to implement the filter in real time. A quantum extended Kalman filter (QEKF) was introduced in <cit.>, aiming to reduce the computational complexity of the quantum filter. For the QEKF, the constraint that elements of the observable operator vector x(t) belong to a commutative von Neumann algebra is not required, and there can be non-commuting operators in the dynamic equation. Conversely, if the commutativity of all the observables and operators is given, then the QEKF reduces to a classical EKF, since the filtering problem can be transformed into a single classical probability space using the *-isomorphism.
This is the main difference between the classical EKF method and the QEKF method. A commutative operator approximation of the non-commutative nonlinear QSDE is used to estimate the system observables given nondemolition measurements. Keeping the first order term of the Taylor series, the Kalman filter gain can be effectively calculated. This method was proposed to solve the filtering problem for a class of multiple output channel open quantum systems whose evolution can be described by the following QSDE <cit.>: dx_t = f(x_t)dt + G(x_t)dA_t^* + G(x_t)^*dA_t, where f(x_t)=ℒ(x_t) and G(x_t) = [x_t,𝕃_t]𝕊_t^*. The operators 𝕃 and 𝕊 are the parameters from the (𝕊,ℍ,𝕃) model which can be used to describe a multiple channel open quantum system <cit.>. The measurement dynamic equation is given by dy_t = h(x_t)dt + L(x_t)dA_t^* + L(x_t)^*dA_t +N_tdα_t, with h(x_t)=E_t^*𝕃+E_t𝕃^* + N_t1_t; L(x_t)=(E_t+N_t𝕃)𝕊_t^*, where 𝕃_t = 𝕃 = L and 1_t=L^*L in our case. E_t and N_t are real matrices, and dα_t = diag(𝕊_t d Λ𝕊_t^⊤), where Λ is the conservation process that represents the photon counting measurement. E_t indicates output channels which are subject to homodyne detection, while N_t indicates photon counting measurement channels. For example, if a quantum system is observed by a homodyne detector and a photon counting measurement, we have E=[ 1 0; 0 0 ], N=[ 0 0; 0 1 ]. According to <cit.>, E_t and N_t have to satisfy the condition of Theorem 3.1 in <cit.>. In our case, this condition is satisfied since we have E=1 and N=0, which means the system is observed using only one homodyne detector. Let 𝒞_op^1(I) denote the Banach *-algebra of 𝒞^1-functions on the compact interval I such that the corresponding Hilbert space operator function T → f(T), for T=T^* and the spectrum of T satisfying sp(T)∈ I, is Fréchet differentiable <cit.>.
The Fréchet derivative of an operator differentiable function f ∈𝒞_op^1 (I) can then be constructed as in the following lemma: <cit.> If f ∈𝒞_op^1 (I), for any two elements S,T ∈𝒰, a unital commutative 𝒞^*-algebra, then the corresponding Fréchet derivative satisfies D_(f,T)S = f^' (T)S, where D_(f,T) is the Fréchet derivative and f^' (·) denotes the normed derivative of f(·). This lemma can be used to calculate the partial derivative of the nonlinear quantum Markovian process generator f(x) in case f(x)∈𝒞_op^1 (I). According to <cit.>, we have 𝒞^2 ⊆𝒞_op^1 ⊆𝒞^1, which means that if the function f ∈𝒞^2, then its operator extension is operator differentiable. Given (<ref>) and letting f(x_t)= ℒ_H,L(x_t), we have f(x_t) ∈𝒞^2(R), which results in f(x_t) ∈𝒞_op^1 (R). Note that, using the QEKF method, x̂_t is no longer the projection of x_t onto 𝒴_t. That is, x̂_t ≠𝔼_ℙ[x_t|𝒴_t]. However, if x̂_0 ∈𝒴_0, then we still have x̂_t ∈𝒴_t and the elements of x̂_t commute with one another. According to Lemma 3, one can calculate F(x)=f^'(x) =[ -K_1/2 x_3/α x_2/α 0; -x_3/α -K_1/2 -x_1/α 0; 0 0 -K_2/2 0; -x_1/α -x_2/α 0 -K_2/2 ], H(x)=h^'(x) = [ 2√(K_1) 0 0 0 ]. As in <cit.>, let the variances of the system observables and measurements be denoted as Q_t and R_t, respectively. Also, the cross-correlation matrix of the system observables and measurements is denoted as S_t, such that Q_t =1/2dt𝔼_ℙ[{dx_t,dx_t}|𝒴_t]; R_t =1/2dt𝔼_ℙ[{dy_t,dy_t}|𝒴_t]; S_t =1/2dt𝔼_ℙ[{dx_t,dy_t}|𝒴_t]. The anti-commutator above is given by {x,y}=xy^⊤ + (yx^⊤)^⊤.
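The analytic Jacobian F(x) can be cross-checked against finite differences of the drift f(x), interpreting x as commuting classical variables. In the sketch below (Python/numpy), the parameter values K_1 = 0.55, K_2 = 0.5 and α = 2√2 from the later numerical example are taken as illustrative assumptions; since f is quadratic, central differences agree with F(x) up to roundoff.

```python
import numpy as np

K1, K2, alpha = 0.55, 0.5, 2 * np.sqrt(2)   # illustrative parameter values

def f(x):
    # classical (commutative) version of the drift in dx = f(x) dt + ...
    x1, x2, x3, x4 = x
    return np.array([-K1 / 2 * x1 + x2 * x3 / alpha,
                     -K1 / 2 * x2 - x1 * x3 / alpha,
                     -K2 / 2 * x3,
                     -K2 / 2 * x4 - (x1**2 + x2**2) / (2 * alpha)
                     - 1 / (4 * alpha)])

def F(x):
    # analytic Jacobian f'(x)
    x1, x2, x3, _ = x
    return np.array([[-K1 / 2, x3 / alpha, x2 / alpha, 0],
                     [-x3 / alpha, -K1 / 2, -x1 / alpha, 0],
                     [0, 0, -K2 / 2, 0],
                     [-x1 / alpha, -x2 / alpha, 0, -K2 / 2]])

rng = np.random.default_rng(2)
x0, eps = rng.standard_normal(4), 1e-6
F_num = np.column_stack([(f(x0 + eps * e) - f(x0 - eps * e)) / (2 * eps)
                         for e in np.eye(4)])
err = np.abs(F_num - F(x0)).max()
```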
In our case of the combined system S_c, (<ref>) yields Q_t =1/2dt𝔼_ℙ[(GG^† + (GG^†)^⊤) dt|𝒴_t] = 1/2(GG^† + (GG^†)^⊤); R_t =1/2dt𝔼_ℙ[2 Idt|𝒴_t] = I; S_t =0. To apply the QEKF in our case, the following constraints should be satisfied: (i) the covariance and cross-correlation matrices Q_t, R_t, S_t are single valued (see Definition 2.4 in <cit.> for the definition of a single valued operator); (ii) R_t is invertible; (iii) initially, x̂_0 ∈𝒴_0. It can be verified that the first two constraints are satisfied in our case. As a result, we only have to make sure that x̂_0 ∈𝒴_0. Then the quantum EKF can be given as dx̂_t = f(x̂_t) dt + K_t (dy_t - dŷ_t), where P_t is defined as P_t≜1/2𝔼_ℙ[{x̃_t,x̃_t}|𝒴_t] and x̃_t=x_t - x̂_t. P_t evolves according to the following Riccati differential equation <cit.>: dP_t/dt = F(x̂_t)P_t + P_t F(x̂_t)^⊤ +Q_t - [P_t H(x̂_t)^⊤ + S_t] R_t^-1 [P_t H(x̂_t)^⊤ + S_t]^⊤. Without loss of generality, we assume that P_0 ∈𝒴_0. Consider an open quantum system described by the QSDEs given in (<ref>) subject to the measurements given in (<ref>). By the result in <cit.>, there exists a Kalman gain K_t ∈𝒴_t, K_t = [P_t H(x̂_t)^⊤ + S_t]R_t^-1, such that if the quantum extended Kalman filter is given by (<ref>), then x̂_t ∈𝒴_t for all t⩾ 0 and P_t evolves according to (<ref>) upon neglecting the residual terms of the Taylor series. To implement numerical calculations using the QEKF method, one needs to transform (<ref>) into a classical stochastic differential equation. This is feasible since we are only concerned with the mean value and covariance of x(t) for our application. Recall that 𝒴_t is a commutative von Neumann algebra generated by the measurement dy_t. By Theorem 2, there exists a *-isomorphism ι from 𝒴_t to L^∞(Ω, ℱ,μ). Denoting ι(·)_t,w as (·)_t,w, the following classical SDE is satisfied <cit.>: dx̂_t,w = [f(x̂_t,w) - K_t,wh(x̂_t,w)]dt + K_t,wdy_t,w, for all w∈Ω, t≥ 0.
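A minimal Euler discretization of the classical-SDE form of the filter might look as follows; this is a sketch, not the authors' implementation. The helper name qekf_euler and the surrogate measurement record are our own illustrative choices, and Q = diag(K_1, K_1, K_2, K_2)/4 is the closed form obtained by evaluating (1/2)(GG^† + (GG^†)^⊤) for the G given above.

```python
import numpy as np

def qekf_euler(xh, P, dy, dt, f, F, h, H, Q, R):
    """One Euler step of the QEKF in classical-SDE form (with S_t = 0)."""
    Hx = H(xh)
    K = P @ Hx.T @ np.linalg.inv(R)                         # Kalman gain
    xh_new = xh + (f(xh) - K @ h(xh)) * dt + K @ dy         # state update
    Fx = F(xh)
    P_new = P + (Fx @ P + P @ Fx.T + Q - K @ R @ K.T) * dt  # Riccati update
    return xh_new, P_new

K1, K2, alpha = 0.55, 0.5, 2 * np.sqrt(2)
C = np.array([[2 * np.sqrt(K1), 0.0, 0.0, 0.0]])
f = lambda x: np.array([-K1 / 2 * x[0] + x[1] * x[2] / alpha,
                        -K1 / 2 * x[1] - x[0] * x[2] / alpha,
                        -K2 / 2 * x[2],
                        -K2 / 2 * x[3] - (x[0]**2 + x[1]**2) / (2 * alpha)
                        - 1 / (4 * alpha)])
F = lambda x: np.array([[-K1 / 2, x[2] / alpha, x[1] / alpha, 0],
                        [-x[2] / alpha, -K1 / 2, -x[0] / alpha, 0],
                        [0, 0, -K2 / 2, 0],
                        [-x[0] / alpha, -x[1] / alpha, 0, -K2 / 2]])
h = lambda x: C @ x
H = lambda x: C
Q = np.diag([K1, K1, K2, K2]) / 4   # (1/2)(G G^dag + (G G^dag)^T)
R = np.eye(1)

rng = np.random.default_rng(3)
dt = 1e-3
xh, P = np.array([0.5, 0.0, 0.35, 0.0]), 0.1 * np.eye(4)
for _ in range(5000):
    # surrogate classical record (pure-noise innovations): smoke test only
    dy = h(xh) * dt + np.sqrt(dt) * rng.standard_normal(1)
    xh, P = qekf_euler(xh, P, dy, dt, f, F, h, H, Q, R)
sym_err = np.abs(P - P.T).max()
```

Since Q, R and P_0 are symmetric and the Riccati increment preserves symmetry, P remains symmetric along the run, and both the state and the covariance stay bounded.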
The dynamic equation (<ref>) for P_t can also be written as a classical SDE in the same way. Since x̂_t,w, dy_t,w and P_t,w are classical random variables in the same probability space L^∞(Ω, ℱ,μ), the previous constraints x̂_0 ∈𝒴_0 and P_0 ∈𝒴_0 can all be satisfied (for details, see <cit.>). The QEKF method is thus suitable for our problem. § NUMERICAL EXAMPLE In our example, the evolution of both the system S_1 and the system S_2 can be represented by the annihilation operators a(t) and b(t). The aim is to estimate the real quadrature Q_2=(b+b^*)/2 of system S_2. Then we can obtain an estimate of both the quantum quadrature Q_1 and the classical signal q using the relationship q̂=π_t(Q_2)/α=Tr[Q_2ρ̂_2]/α. The basic settings are listed below: dq = - 1/4q dt-1/8 dw_t, which means that α= √(2u)/(2v)=2√(2) and K_2=2u=0.5. The initial quantum states for S_1 and S_2 are ρ_1=ρ_2=1/2(I + σ_x). We also choose K_1=0.55. The initial value of q is set to be 1/(4√(2)). Fig. 3 illustrates the trajectories of Q̂_2/α obtained by using the SME and QEKF methods, respectively. In the SME method, each cavity is approximated by a two-level system. The red line is the average value of q over 500 realizations. It can be seen that the estimate q̂=Q̂_2/α for both methods converges to the true expected value of q. Fig. 4 demonstrates the estimate of the quantum real quadrature Q_1 using the SME method and the QEKF method, respectively. In order to test the robustness of our method, a set of perturbations on the initial state is considered in Fig. 5. A certain level of initial error can affect the performance of the QEKF method, but convergence is still guaranteed. § CONCLUSION By modeling a classical stochastic process using a quantum cavity model, we solve the filtering problem of a class of hybrid quantum-classical systems using the standard quantum filtering method and a quantum extended Kalman filtering method. A performance comparison between these two methods is provided using numerical results.
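The stated initial value of q is consistent with the relation q̂ = Tr[Q_2 ρ̂_2]/α: in the two-level approximation, Tr[Q_2 ρ_2] = 1/2 for ρ_2 = (I + σ_x)/2, so q̂(0) = 1/(4√2). A quick numpy check of this arithmetic:

```python
import numpy as np

alpha = 2 * np.sqrt(2)                                  # sqrt(2u)/(2v) for u=1/4, v=1/8
b = np.array([[0, 1], [0, 0]], dtype=complex)           # two-level truncation of b
Q2 = (b + b.conj().T) / 2                               # real quadrature (b + b^dag)/2
rho2 = np.array([[1, 1], [1, 1]], dtype=complex) / 2    # (I + sigma_x)/2

q_hat0 = np.trace(Q2 @ rho2).real / alpha               # = 1/(4 sqrt(2))
```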
Future work includes extending our method to more general quantum systems and to classical signals with nonlinear dynamics. Discussions with Muhammad Fuady Emzir are gratefully acknowledged.
http://arxiv.org/abs/1703.08976v1
{ "authors": [ "Qi Yu", "Daoyi Dong", "Ian R. Petersen", "Qing Gao" ], "categories": [ "quant-ph", "cs.SY" ], "primary_category": "quant-ph", "published": "20170327091506", "title": "Hybrid Filtering for a Class of Quantum Systems with Classical Disturbances" }
1] Vincent Runge[E-mail: runge.vincent@gmail.com] [1]LaMME - Laboratoire de Mathématiques et Modélisation d'Evry. UEVE - Université d'Evry-Val-d'Essonne. The Limit Imbalanced Logistic Regression by Binary Predictors and its fast Lasso computation. In this work, we introduce a modified (rescaled) likelihood for imbalanced logistic regression. This new approach makes the use of exponential priors and the computation of the lasso regularization path easier. Precisely, we study a limiting behavior for which class imbalance is artificially increased by replication of the majority class observations. If some strong overlap conditions are satisfied, the maximum likelihood estimate converges towards a finite value close to the initial one (intercept excluded), as shown by simulations with binary predictors. This solution corresponds to the extremum of a strictly concave function that we refer to as the "rescaled" likelihood. In this context, the use of exponential priors has a clear interpretation as a shift on the predictor means for the minority class. Thanks to the simple binary structure, some random designs give analytic path estimators for the lasso regularization problem. An effective approximate path algorithm by piecewise logarithmic functions based on matrix inversions is also presented. This work was motivated by its potential application to spontaneous reports databases in a pharmacovigilance context. Keywords: path estimator, pharmacovigilance model, piecewise logarithmic approximate path, limit class imbalance, rescaled likelihood, spontaneous reports database, square exact solution. MSC classification: Primary 62J12, 62F12, 62F15; secondary 34E05, 49M29, 62P10. § INTRODUCTION If the response y=1 is very rare compared with the response y=0, we are in the presence of a rare-event configuration, also called class imbalance.
This problem recently got computer scientists' attention: they aimed at reducing computational costs by bypassing the class imbalance with resampling methods <cit.> <cit.> <cit.>. With these methods, the variance in estimating model parameters increases. Statisticians are aware of this problem, and complex procedures such as local case-control sampling were proposed <cit.> (a method initiated in epidemiology <cit.>). In a recent work (2007) by Art B. Owen <cit.>, the opposite approach is considered: the class imbalance is infinitely increased in order to reach the theoretical distribution of the majority class observations. Owen proved that under some overlap conditions the model parameters are finite (apart from the intercept) and built a limit system of equations related to exponential tilting, whose solution is the new estimate. The resulting equations include the distribution of the infinite class expressed through integrals, which are not easy to infer. This may explain why this work was largely ignored (the author found it when Sections <ref> and <ref> were already completed). In our approach, the observations of the majority class are infinitely replicated and Owen's limit distribution becomes the observed distribution. This situation is a kind of degenerate case between resampling (we repeat observations) and infinite class imbalance (the observed distribution is chosen as the theoretical one). Unlike Owen's result, our limit normal equations can be interpreted as the first order conditions of a new likelihood. The idea of this work comes from the analysis of highly imbalanced binary spontaneous reports databases. Such databases are gathered by many countries and institutions (FDA, MHRA, WHO,...). Imbalanced logistic regression with binary predictors gives a maximum likelihood estimate (MLE) very close to its limit imbalanced counterpart.
This result makes possible the study of the lasso-type regularization problem and the development of effective algorithms for model selection. So far, only disproportionality methods are routinely used <cit.> for spontaneous report databases: predictors are analysed one by one, leading to a great number of false positive signals <cit.>. Mathematical tools adjusted to binary data for regression are surprisingly underdeveloped (only boolean matrices have been studied by some authors <cit.>). This results in an inflation of empirical methods using lasso regularization in recent years (from <cit.> to <cit.>). This is a worrying trend, because recommendations made by these experts shift towards more complicated experimental methods and time-consuming algorithms, not towards a deeper mathematical understanding. This work is motivated by the need to better analyse this kind of applied problem. The paper contains three main sections in which we present the following results: * In Section <ref>, we investigate the properties of the logistic normal equations with binary predictors. Simple existence and uniqueness conditions of Silvapulle's type are found and some exact solutions presented. An invariance property in presence of an intercept links this particular solution (called the "square solution") to the limit imbalanced problem. We then acquaint ourselves with the issue of variance inflation in the imbalanced problem by computing the Fisher information. * In Section <ref>, we derive Owen-type equations with a first order term evaluating the convergence rate. For the limit system of equations, the existence and uniqueness of the solution are proved with a new method leading to the minimization of a Kullback-Leibler divergence under linear constraints. A rescaling procedure on the initial likelihood and the previously found divergence justify the introduction of a rescaled likelihood corresponding to our limit imbalanced logistic regression problem.
In a Bayesian framework, the Jeffreys penalty does not significantly decrease the variance of the estimator, but other, more appropriate priors (chosen according to the situation), such as exponential ones, could help to reduce it. The closeness in simulation between limit estimates and classical estimates compels us to go one step further with the study of regularization paths, in particular if the model is known to be sparse. * In Section <ref>, we look at a lasso regularization problem for the rescaled likelihood, which has a clear interpretation as a shift on the predictor means for the class of interest. We succeed in finding some path estimators in a few particular cases (independence and orthogonal design). In the presence of correlation, we present an effective path following algorithm by piecewise logarithmic functions giving precise estimates. We conclude by explaining the need for an analysis of the correlation structure between predictors. This leads to simple algorithmic procedures with small computational costs for which many different prior penalties can be easily tested. Two examples are given using the French spontaneous reports database.
The expressions "infinitely imbalanced" and "limit imbalanced" are considered as synonymous, although we recommend the use of the second one in our context, due to the simple unique limit we impose and an analogy with hydrodynamic limits (in fluid dynamics), while the first expression is related to the underlying distribution introduced by Owen. We conclude this article by discussing the many opportunities that arise with the introduction of a rescaled likelihood in a Bayesian context and of the path following algorithm by logarithmic functions.§ THE LOGISTIC REGRESSION BY BINARY PREDICTORS§.§ Logistic normal equations The binary logistic regression (BLR) problem consists in the determination of coefficients β̂ maximizing a smooth and concave likelihood function given by the relationL(β|I_0,I_1,n^0,n^1) =∏_i = 1^q_1(e^(I_1β)_i/(1+e^(I_1β)_i))^n_i^1∏_i = 1^q_0(1+e^(I_0β)_i)^-n_i^0 ,where β = (β_i) ∈ℝ^p+1 is indexed from zero with β_0 corresponding to the intercept. Binary design matrices I_1 ∈ℳ_q_1 × (p+1)(𝔹) and I_0 ∈ℳ_q_0 × (p+1)(𝔹) with 𝔹 = {0,1} are of full rank: they aggregate the p binary predictors. Vectors of weights n^0=(n_1^0,...,n_q_0^0)^T ∈ (ℕ^*)^q_0 and n^1=(n_1^1,...,n_q_1^1)^T ∈ (ℕ^*)^q_1 save repetitions for distinct observations in response classes 0 and 1 separately. The binary structure favours repetitions in the sequence of observations, which justifies these notations. Moreover (I_0β)_i is the i-th component of vector I_0β∈ℝ^q_0 (the same for (I_1β)_i). We introduce other notations thereafter used within this article. The modulus of a vector denotes its l^1 norm, while the overline sign on lower cases stands for l^1 normalization. For example |n^1| = ∑_i=1^q_1n_i^1 and n̄_i^1 = n_i^1/|n^1| gives the vector n̄^1. A_i is the i-th row of the matrix A and its roman upper case equivalent 𝙰 is the matrix A in which the first column filled by ones (associated to the intercept) was removed. 
We also need N^1 = 𝙸_1^Tn^1 ∈ℝ^p with T standing for the matrix transpose operator. An important feature in our study is the normalized predictor means vector N̄^1 for class 1, obtained by the relation 𝙸^T_1 n̄^1 = N̄^1. For vectors of same size u, v ∈ℝ^q, uv (resp. u/v) is the vector with components u_k v_k (resp. u_k/v_k), k ∈{1,...,q}. β̃ is the vector β without the intercept coefficient β_0. From Subsection <ref>, the notations I and 𝙸 for matrices I_0 and 𝙸_0 respectively are often used (as well as q for the integer q_0). For ease of calculation, we consider the opposite of the log-likelihood. If I_1=I_0=ℐ, we have q_1=q_0=q and we can introduce vectors n = n^1 + n^0 and Δ n = n^1 - n^0. In this latter case, we write l(β)=-log(L(β)) =|n|log 2 + ∑_i = 1^q( -Δ n_i(1/2(ℐβ)_i ) + n_ilogcosh(1/2(ℐβ)_i)) ,and first order conditions are computed, differentiating l with respect to each β_j coefficient. We obtain0 = ∂ l(β)/∂β_j = ∑_i = 1^q( -Δ n_i(1/2ℐ_ij ) + 1/2 n_iℐ_ijtanh(1/2(ℐβ)_i)) , j ∈{0,...,p} ,or in matrix formℐ^T Δ n = ℐ^T (n tanh(1/2ℐβ )) .In a general framework with non-identical matrices I_0 and I_1, we likewise deriveI_1^Tn^1 - I_0^Tn^0 = I_1^T(n^1 tanh(1/2I_1β))+I_0^T(n^0 tanh(1/2I_0β)) . This system of equations (<ref>) gathers the so-called logistic normal equations and will be widely used within this article. These equations are usually presented with a logistic function, but we chose another expression to highlight the link with existence and uniqueness conditions.§.§ Existence and uniqueness Necessary and sufficient conditions to ensure existence and uniqueness of the MLE are well known; they were established by Silvapulle in 1981 <cit.>. They consist in satisfying an overlap condition C_1 ∩ C_0 ≠ ∅ between the cones C_1 = { I_1^T u_1|u_1 ∈ (ℝ^*_+)^q_1} and C_0 = { I_0^T u_0|u_0 ∈ (ℝ^*_+)^q_0} . 
For the BLR problem, a more convenient description is possible:The BLR problem admits a unique solution if and only if there exist n^+ ∈ (ℕ^*)^q_1 and n^* ∈ (ℕ^*)^q_0 such that I_1^T n^+=I_0^T n^*. Looking at equations (<ref>), this theorem means that an MLE exists and is unique if one can find a couple (n^+, n^*) of observations of the rows in I_1 and in I_0, with |n^+| = |n^*|, vanishing all the regression coefficients (intercept included). An easy necessary condition to check is that at least one 0 and one 1 are present in each column of I_0 and I_1 (with the exception of the first column of ones corresponding to the intercept).If I_1^T n^+ = I_0^T n^*, Silvapulle's condition is immediately verified. Reciprocally, C_1 ∩ C_0 is an open subset of ℝ^p+1 with positive measure because I_0 and I_1 are full rank matrices. By a density argument, there exist q ∈ (ℚ∩ ]0,1[)^p+1, λ∈ (ℝ^*_+)^q_0 and μ∈ (ℝ^*_+)^q_1 satisfying I^T_0 λ = I^T_1 μ = q. We reorder the rows in I_0 and I_1 such that the first p+1 rows are linearly independent. Let H_0 in ℳ_q_0 × (q_0-p-1)(ℝ) and H_1 in ℳ_q_1 ×(q_1-p-1)(ℝ) be orthogonal matrices to I_0 and I_1 respectively. Because of the reorganization of the rows in I_i (i ∈{0,1}) we can choose an H_i whose last q_i-p-1 rows form an identity matrix 𝕀_q_i-p-1. For all α_0 ∈ℝ^q_0-p-1 and α_1 ∈ℝ^q_1-p-1 we have the relation I^T_0(λ +H_0α_0) = I^T_1 (μ+H_1α_1) = q. Again with a density argument, we find α_0 such that λ_i + (H_0α_0)_i ∈ℚ^*_+ for all i ∈{p+2,...,q_0} and satisfying the constraint λ +H_0α_0 ∈ (ℝ_+^*)^q_0. For a matrix A ∈ℳ_n × m(ℝ), a vector v ∈ℝ^m and J ⊂{1,...,m}, let [Av]_J denote the vector A_Jv_J, where A_J (resp. v_J) corresponds to the submatrix of A (resp. subvector of v) obtained by removing from A (resp. from v) the columns (resp. rows) that do not correspond to the indices in J. With this notation, we have [I^T_0(λ +H_0α_0)]_{1,...,p+1} = q - [I^T_0(λ +H_0α_0)]_{p+2,...,q_0}∈ℚ^p+1. 
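Since the cones C_1 and C_0 are invariant under positive scaling, the overlap condition can be checked mechanically as a plain linear feasibility problem. The sketch below uses two hypothetical toy designs (not from the paper) and scipy's `linprog`; strict positivity of the cone coefficients is approximated by a lower bound of 1, which is harmless inside a cone:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical small designs, first column = intercept.
I1 = np.array([[1, 0, 1],
               [1, 1, 0],
               [1, 1, 1]], dtype=float)
I0 = np.array([[1, 0, 0],
               [1, 1, 1],
               [1, 0, 1]], dtype=float)

# Feasibility: find u1, u0 >= 1 with I1^T u1 = I0^T u0  (overlap C_1 ∩ C_0 ≠ ∅).
q1, q0 = I1.shape[0], I0.shape[0]
A_eq = np.hstack([I1.T, -I0.T])     # unknowns stacked as (u1, u0)
b_eq = np.zeros(I1.shape[1])
res = linprog(c=np.zeros(q1 + q0), A_eq=A_eq, b_eq=b_eq,
              bounds=[(1, None)] * (q1 + q0))
print(res.success)  # feasible -> the MLE exists and is unique
```

A feasible rational point can then be scaled to integer weights (n^+, n^*), exactly as in the proof.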
The binary matrix (I^T_0)_{1,...,p+1} is then nonsingular and using its inverse in ℳ_(p+1)× (p+1)(ℚ) we obtain (λ +H_0α_0)_{1,...,p+1}∈ (ℚ_+^*)^p+1. Finally α_0^* = λ + H_0 α_0 ∈ (ℚ_+^*)^q_0. The same arguments lead to a set of coefficients α_1^* = μ + H_1 α_1 ∈ (ℚ_+^*)^q_1. Multiplying the vector (α_0^*,α_1^*) by the least common multiple of all its denominators proves the result.§.§ The square case The situation with identical square design matrices I_0 and I_1 is worthwhile in itself because it leads to explicit analytic formulae for the MLE and their variance (in the asymptotic case). In particular, we focus on the introduction of imbalance between n^1 and n^0 to emphasize the simple solution for the MLE and the problem of variance inflation.If I_0 = I_1 = I is a square matrix, we have the following closed form for the maximum likelihood estimator:β̂ = I^-1log(n^1/n^0) .The matrix I verifies the condition q=p+1 and is nonsingular with I^-1 its inverse (because I is of full rank). The vector T is defined as T = tanh(1/2Iβ) ∈ℝ^p+1, i.e. Iβ = log((1+T)/(1-T)). Multiplying (<ref>) by (I^-1)^T = (I^T)^-1, we get T = Δ n/n. Hence, β = I^-1log((n+Δ n)/(n-Δ n)), which completes the proof.If one of the components in the vectors of weights n^1 or n^0 vanishes, some of the regression coefficients become infinite (but not necessarily all of them).To our knowledge, this is the first general closed form found in the resolution of a logistic regression. There exist partial results for a single categorical predictor presented by Lipovetsky in 2014 <cit.>. An explanation for the lack of such a simple result lies in the poorly studied finite observation structure made possible through binary predictors with repetitions. In Appendix <ref>, some particular solutions to equations (<ref>) are presented. §.§.§ Invariance with intercept We establish an invariance property making a link with the imbalanced problem. 
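As a quick numerical illustration of the closed form (the square design and the weights below are made up, not taken from the paper), one can verify that β̂ = I^-1 log(n^1/n^0) solves the logistic normal equations:

```python
import numpy as np

# Hypothetical 3x3 binary design, first column = intercept, full rank.
I = np.array([[1, 0, 0],
              [1, 1, 0],
              [1, 1, 1]], dtype=float)
n1 = np.array([3.0, 5.0, 2.0])   # weights for class 1
n0 = np.array([4.0, 1.0, 6.0])   # weights for class 0

# Closed-form MLE of the square case: beta = I^{-1} log(n1/n0).
beta = np.linalg.solve(I, np.log(n1 / n0))

# Check the logistic normal equations I^T(dn) = I^T(n * tanh(I beta / 2)).
n, dn = n1 + n0, n1 - n0
lhs = I.T @ dn
rhs = I.T @ (n * np.tanh(I @ beta / 2.0))
print(np.allclose(lhs, rhs))  # True: beta solves the score equations
```

The check works because I@beta = log(n^1/n^0), so tanh(·/2) reduces exactly to Δn/n.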
In the square case with intercept, multiplying all the components of n^1 or n^0 by the same integer does not change the value of the MLE apart from the intercept.The inverse of a matrix with an intercept column verifies the relation I^-1 (1,...,1)^T = (1,0,...,0)^T , which means that we can rewrite equations (<ref>) asβ̂_i = log(∏_j=0^p(n^1_j/n^0_j)^a_ij) ,i ∈{0,...,p} , a_ij = (I^-1)_ij and ∑_j=0^p a_ij = δ_0i . Substituting n_j^0 by s × n_j^0 (or n_j^1 by s × n_j^1) with s ∈ℕ^* gives the same result for all β_i , i ∈{1,...,p}. §.§.§ Asymptotic variance To conclude this section, we study the asymptotic behavior of the estimator for large |n^1| and |n^0|. Since the MLE (intercept excluded) remains the same with or without a class imbalance (see the invariance property), we have a glimpse of a general property in class imbalance.In the square case BLR problem, the variance of the maximum likelihood estimator is approximately given by the relationsV(β̂_i) ≈∑_j=0^pa_ij^2(1/n_j^1+1/n_j^0) , i ∈{0,...,p} , a_ij = (I^-1)_ij .We compute the observed Fisher information ℐ(β̂) = I^TDI with D a diagonal matrix with elements n_ip̂_i(1-p̂_i) and p̂_i = 1/(1+e^-(Iβ̂)_i). Its inverse gives the desired result, knowing that n_ip̂_i = n_i^1 and n_i(1-p̂_i) = n_i^0.Another method uses the closed form (<ref>) to perform variance and bias estimations by Taylor expansions with the multinomial random vector (n^1,n^0). We obtain V(β̂_i) ≈∑_j=0^pa_ij^2(1/n_j^1+1/n_j^0-2/|n|) and Bias(β̂_i) ≈∑_j=0^p(a_ij/2)(1/n_j^0-1/n_j^1) , i ∈{0,...,p}. However, simulations give inaccurate results and only the Fisher information method should be retained.We investigate the variation of the variance with respect to the sample size |n| and the value of the intercept β_0 for a simple fixed model (β_1,...,β_5) = (-0.5,-0.25,0,0.25,0.5). With these two parameters given, we simulate 10^4 data sets with a different random binary square matrix I and different random vectors n^1 and n^0 for each of them (but |n^1|+|n^0| is fixed). 
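The invariance property is easy to observe numerically with the closed form. In this hypothetical example (same made-up design and weights as above, chosen for illustration), class 0 is replicated seven times and only the intercept moves, by exactly -log 7:

```python
import numpy as np

# Hypothetical square design with intercept column, and class weights.
I = np.array([[1, 0, 0],
              [1, 1, 0],
              [1, 1, 1]], dtype=float)
n1 = np.array([3.0, 5.0, 2.0])
n0 = np.array([4.0, 1.0, 6.0])

beta = np.linalg.solve(I, np.log(n1 / n0))
beta_scaled = np.linalg.solve(I, np.log(n1 / (7.0 * n0)))  # class 0 replicated 7x

# Slope coefficients are invariant; the intercept shifts by -log 7.
print(np.allclose(beta[1:], beta_scaled[1:]))             # True
print(np.isclose(beta_scaled[0], beta[0] - np.log(7.0)))  # True
```

This is the relation I^-1(1,...,1)^T = (1,0,...,0)^T at work: the constant -log 7 added to log(n^1/n^0) is absorbed entirely by the intercept.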
In table <ref>, we compare the estimated standard deviation (sd.) with the Fisher standard deviation given in Proposition <ref> (F.sd.), accompanied by an estimation of the bias (bias), for coefficient β_4 = 0.25.These simulations highlight the accuracy of the "Fisher variance" in all configurations, which is very close to the estimated one. Bias is negligible compared with variance. For a constant number of observations |n|, the variance increases when the imbalance between classes strengthens. This variance inflation is a key issue in class imbalance; we further explain how one can easily add prior information to a rescaled likelihood to deal with this problem (see Subsection <ref>).§ LIMIT IMBALANCED STUDY §.§ Owen-type equations The limit case consists in infinitely replicating the majority class observations as if the theoretical distribution of this class was the observed one. This is a degenerate case of Owen's study, which is why we know that the intercept coefficient tends to minus infinity whereas the other regression coefficients are finite if a stronger overlap condition is satisfied <cit.>. For the limit equations, an information reduction for the minority class occurs: only the means of the predictors matter; the correlation structure in this class of interest "disappears".The following proposition presents the logistic normal equations (<ref>) in a new form with a remainder term arising in case of class imbalance.For an imbalanced binary logistic regression with a class size for response y=0 's' times greater than the one for response y=1, we obtain the system of equations n_1^0/n_1^0= N^1 + 1/s(n_2^0/(n_1^0)^2(n_2^0/n_2^0-N^1 ) - n_1^1/n_1^0(n_1^1/n_1^1-N^1 ))+ o(1/s) ,with s = |n^0|/|n^1|≫ 1. 
We used the notations:n_1^0 = ∑_i=1^q_0n^0_i e^(𝙸_0 β̃)_i , n_1^1 = ∑_i=1^q_1n^1_i e^(𝙸_1 β̃)_i , n_2^0 = ∑_i=1^q_0n^0_i e^2(𝙸_0 β̃)_i , n_2^1 = ∑_i=1^q_1n^1_i e^2(𝙸_1 β̃)_i ,and for vectors in ℝ^p:n_1^0 = 𝙸_0^T (n^0 e^𝙸_0 β̃) ,n_1^1 = 𝙸_1^T (n^1 e^𝙸_1 β̃) ,n_2^0 = 𝙸_0^T (n^0 e^2𝙸_0 β̃) ,n_2^1 = 𝙸_1^T (n^1 e^2𝙸_1 β̃) . The technical proof of this result is given in Appendix <ref>. As shown by simulations (see table <ref>), the first order and remainder terms are negligible quantities with binary predictors, even if there is no imbalance! This suggests the introduction of the following limit imbalanced equations, obtained with s = + ∞ in Proposition <ref>. For an infinitely imbalanced binary logistic regression verifying a strong overlap condition (see Theorem <ref>), the following system of p limit imbalanced equations holds[With non-binary design matrices X_1 and X_0 and no vectors of weights, we obtain 𝚇_0^T( e^𝚇_0β̃/∑_i e^(𝚇_0β̃)_i) = N^1 . These equations also differ from Owen's <cit.>. ]𝙸_0^T( n^0 e^𝙸_0β̃/∑_i n^0_ie^(𝙸_0β̃)_i) = N^1 .Notice that the β̃ coefficients do not depend on the structure in rows of the design matrix associated to response y=1 but only on the means of ones for each predictor: N^1.We give a simple direct proof, avoiding the complicated previous proof of Appendix <ref>. For x near minus infinity, the hyperbolic tangent has the following first order expansion:tanh(x/2) = -1 + 2 e^x + o(e^x) .From <cit.> we know that the intercept term tends to minus infinity, so with x = I_0β or x = I_1β, we use the previous expansion neglecting the remainder term. Thus, equations (<ref>) becomeI_1^Tn^1 = I_1^T(n^1 e^I_1β)+I_0^T(n^0 e^I_0β) ,and factoring by exp(β_0) in the first equation of this system we haveexp(β_0)= |n^1|/(∑_i=1^q_1 n_i^1 e^(𝙸_1β̃)_i+∑_i=1^q_0 n_i^0 e^(𝙸_0β̃)_i)≈|n^1|/(∑_i=1^q_0 n_i^0 e^(𝙸_0β̃)_i) ,because |n^0|/|n^1|→ + ∞. 
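The limit imbalanced equations are the stationarity conditions of the strictly convex function β̃ ↦ log ∑_i n^0_i e^(𝙸_0β̃)_i − N^1·β̃ (studied below), so any generic unconstrained minimizer recovers their solution. A minimal sketch, with made-up numbers chosen so that N^1 is surrounded by the rows of the design:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical class-0 design (intercept column removed) with weights n0,
# and class-1 predictor means N1 strictly inside the hull of the rows.
I0 = np.array([[0, 0],
               [1, 0],
               [0, 1],
               [1, 1]], dtype=float)
n0 = np.array([5.0, 3.0, 4.0, 2.0])
N1 = np.array([0.4, 0.5])

def neg_F(beta):
    # -F_{N1}(beta) = log sum_i n0_i e^{(I0 beta)_i} - N1 . beta
    z = I0 @ beta
    return np.log(np.sum(n0 * np.exp(z))) - N1 @ beta

beta = minimize(neg_F, np.zeros(2), method="BFGS").x
w = n0 * np.exp(I0 @ beta)
moments = I0.T @ (w / w.sum())     # weighted softmax moments of the rows
print(np.allclose(moments, N1, atol=1e-4))  # the limit equations hold
```

Note that the row structure of the class-1 design never enters: only N^1 does, exactly as the corollary states.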
Looking back at (<ref>) without the first equation, we have𝙸_1^T n̄^1 = 𝙸_1^T(n^1 e^𝙸_1β̃/∑_i=1^q_0 n_i^0 e^(𝙸_0β̃)_i)+𝙸_0^T(n^0 e^𝙸_0β̃/∑_i=1^q_0 n_i^0 e^(𝙸_0β̃)_i) ,butn^1 e^𝙸_1β̃/∑_i=1^q_0 n_i^0 e^(𝙸_0β̃)_i = |n^1|/|n^0| · n̄^1 e^𝙸_1β̃/∑_i=1^q_0n̄_i^0 e^(𝙸_0β̃)_i→ 0because |n^1|/|n^0|→ 0, and we obtain the desired result.In table <ref>, we present simulation results based on the limit imbalanced equations (<ref>) compared with classical logistic regression (<ref>). The sampling procedure is the same as the one used for table <ref> except that we fix the sample size at |n| = |n^1|+|n^0|= 10^4 and vary the dimension of the matrix I_0 (we chose q_0=10, 21, 32). The two estimates β̂_4 for standard and imbalanced regressions are very close to each other as shown by the mean of the l^1 norm – even if the problem is not imbalanced – so that standard deviation and bias are almost the same. This means that, if interesting properties can be established with the limit equations, this context will be appropriate to highlight new features in classical logistic regression. The 1/s first order term in Proposition <ref> should be estimated to understand how good the limit imbalanced approximation is, without having to estimate the standard regression coefficients. Simulations show that this term is very small and we choose not to dwell on this intermediate situation, but it could be a more important result if non-binary design matrices are involved.§.§ Strong overlap condition and rescaled likelihoodExistence and uniqueness conditions to solve (<ref>) are well known <cit.>; they consist of an overlap condition slightly stronger than the one given by Silvapulle. In fact, we need the point N^1 to be surrounded by the rows of 𝙸_0 (hereafter denoted by the letter 𝙸). We give this result in the framework of the binary problem (simpler than Owen's general case) and establish a new proof leading to a minimum relative entropy problem. 
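The convergence suggested by these simulations can be reproduced in a few lines: fit the classical regression with class 0 replicated s times and compare the slopes with the limit ones. The toy designs and weights below are made up, and both fits use a generic unconstrained minimizer rather than the paper's algorithms:

```python
import numpy as np
from scipy.optimize import minimize

I1 = np.array([[1, 0], [0, 1], [1, 1]], dtype=float)   # class-1 rows, no intercept col
I0 = np.array([[0, 0], [1, 0], [0, 1], [1, 1]], dtype=float)
n1 = np.array([2.0, 3.0, 1.0])
n0 = np.array([5.0, 3.0, 4.0, 2.0])

def classical_slopes(w0):
    # minimize -log L over (beta0, beta_tilde); w0 = (possibly replicated) class-0 weights
    X1 = np.hstack([np.ones((3, 1)), I1])
    X0 = np.hstack([np.ones((4, 1)), I0])
    def nll(b):
        return -(n1 @ (X1 @ b - np.log1p(np.exp(X1 @ b)))) \
               + w0 @ np.log1p(np.exp(X0 @ b))
    return minimize(nll, np.zeros(3), method="BFGS").x[1:]

def limit_slopes():
    N1 = I1.T @ (n1 / n1.sum())
    def neg_F(b):
        return np.log(np.sum(n0 * np.exp(I0 @ b))) - N1 @ b
    return minimize(neg_F, np.zeros(2), method="BFGS").x

lim = limit_slopes()
gaps = [np.linalg.norm(classical_slopes(s * n0) - lim) for s in (1.0, 10.0, 100.0)]
print(gaps)  # expected to shrink roughly like 1/s as the imbalance s grows
```

The 1/s behaviour of the gap is exactly the first order term of the proposition at work.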
From there and using duality, we build the corresponding rescaled likelihood, also justified by a rescaling on the initial likelihood.There exists a unique finite solution to the limit imbalanced BLR problem if and only if there exists λ∈ (ℝ_+^*)^q such that 𝙸^T λ = N^1 and ∑_i=1^qλ_i = 1. (If present, the null row (such that 𝙸_i = (0,...,0)) is removed[in order to have non-zero coefficients λ as for the overlap condition in Theorem <ref>.].)The condition 𝙸^T λ = N^1 means that we have I^T_0 λ = I^T_1 n̄^1 with λ∈ (ℝ_+^*)^q_0 and n̄^1 ∈ (ℝ_+^*)^q_1, so that C_1 ∩ C_0 ≠ ∅. In other words, the existence and uniqueness of a solution for the limit problem implies existence and uniqueness for its associated BLR problem. Our proof of this theorem is based on the following three lemmas. The log-sum-exp function h : ℝ^q ↦ℝ, defined by h(z)= log(∑_i=1^q e^z_i), is a convex, continuous, increasing function on ℝ^q. The function f : ℝ^p ↦ℝ, f(β̃)= log(∑_i=1^qn^0_i e^(𝙸β̃)_i) is continuous and convex on ℝ^p.Function h has a positive semi-definite Hessian and is therefore convex. Furthermore for all y,z ∈ℝ^q such that y_i ≤ z_i, i ∈{1,...,q}, we have h(y) ≤ h(z) and the function is increasing on ℝ^q. The composition with an affine mapping preserves continuity and convexity. Thus, with z = 𝙸β̃ + b and n^0 = e^b we obtain a convex continuous f(β̃) = h(𝙸β̃ + b) and dom f = ℝ^p.The function f : ℝ^p ↦ℝ, f(β̃)= log(∑_i=1^qn^0_i e^(𝙸β̃)_i) is strictly convex on ℝ^p.The Hessian H of h : ℝ^q ↦ℝ, h(z)= log(∑_i=1^q e^z_i), is the following:H_ij = δ_ije^z_i/∑_k=1^qe^z_k-e^z_i/∑_k=1^qe^z_ke^z_j/∑_k=1^qe^z_k , i,j ∈{1,...,q}.For all v=(v_1,...,v_q)^T ∈ℝ^q, we have∑_i,j=1^q v_iH_ijv_j = ((∑_k=1^qe^z_kv_k^2)(∑_k=1^qe^z_k)-(∑_k=1^qe^z_kv_k)^2)/(∑_k=1^qe^z_k)^2 ,which is non-negative due to the Cauchy-Schwarz inequality. This expression is equal to zero if and only if there exists λ∈ℝ such that e^z_kv_k^2 = λ e^z_k ,∀ k ∈{1,...,q}. 
Thus, the function h is affine only in the constant direction z_k(t) = t + z_k(0) , k ∈{1,...,q}, t ∈ℝ; in any other direction, this function is strictly convex.Suppose that there exists a family of parameters F_a = {β̃(t) ∈ℝ^p, t ∈ [0,a], a >0} such that z(t)=𝙸β̃(t) + b = t + z(0) and e^b = n^0. This means that along the path described by β̃(t) the function f is affine. We obtain 𝙸(β̃(t)-β̃(0))=t and with t ≠ 0, we have γ = (-t,β̃(t)-β̃(0))^T ∈ℝ^p+1∖{0}^p+1 such that Iγ = 0. This is impossible because the matrix I is of full rank, which proves the lemma. We present a corollary to a theorem on the Legendre-Fenchel transform of convex composite functions presented in <cit.>. If functions g_i : ℝ^p ↦ℝ, i ∈{1,...,q} are convex and continuous with domg_i = ℝ^p and h : ℝ^q ↦ℝ is convex, continuous and increasing with domh = ℝ^q, then the convex conjugate of h(g_1,...,g_q) is given by[h(g_1,...,g_q)]^*(m) = min_m_1+...+m_q = mα_1 ≥ 0, ..., α_q ≥ 0(h^*(α_1,...,α_q) + ∑_i=1^qα_i g_i^*(m_i/α_i) ) ,with m ∈ (ℝ^p)^T.Let us define the function F_m such thatF_m : {[ℝ^p →ℝ ,; β̃ ↦ m ·β̃ - log(∑_i n^0_i e^(𝙸β̃)_i) . ]. F_m is differentiable on ℝ^p and the first order equations ∂/∂β̃_j(∑_i=1^pm_i ·β̃_i - f(β̃)) = 0 ,j ∈{1,...,p} ,are equal to the system (<ref>) with N^1 = m^T. Function F_m is strictly concave as the sum of a concave function and a strictly concave function (see Lemma <ref>). Consequently, the solution γ to ∇ F_N^1(γ) = 0 is unique.We now introduce the convex conjugate of the function f: f^* : {[(ℝ^p)^T ↦ℝ ,;m ↦sup_β̃∈ℝ^p(m ·β̃ - log(∑_i n^0_i e^(𝙸β̃)_i)) = sup_β̃∈ℝ^p( F_m(β̃)) . ]. We will prove that the three following sets are identicalA = { m ∈ (ℝ^p)^T| ∃β̃∈ℝ^p , 𝙸^T( n^0 e^𝙸β̃/∑_i n^0_ie^(𝙸β̃)_i) = m^T } ,B = { m ∈ (ℝ^p)^T|f^*(m) < + ∞} ,C = { m ∈ (ℝ^p)^T| ∃ λ∈ (ℝ^*_+)^q ,m = λ^T 𝙸 , ∑_i=1^qλ_i = 1 } . i) A ⊂ B. If m_0 ∈ A, there exists a solution β̃∈ℝ^p to (<ref>), that is ∇ F_m_0(β̃) = 0. Moreover f^*(m_0) = F_m_0(β̃) because of the strict concavity of F_m_0. Thus m_0 ∈ B. 
ii) B ⊂ C. We use Lemma <ref> with g_i(β̃) = (𝙸β̃)_i +b_i and h the log-sum-exp function verifying the necessary conditions (Lemma <ref>). We have the convex conjugate g_i^*(u_i)= -b_i if u_i = 𝙸_i and +∞ elsewhere (we do not consider the presence of a null row 𝙸_i=(0,...,0)). The only way to obtain a finite result is to impose the constraint u_i = m_i/α_i = 𝙸_i for all i ∈{1,...,q}. Therefore, knowing thath^*(α_1,...,α_q) = {[∑_i=1^qα_i log(α_i) if α_1 ≥ 0, ..., α_q ≥ 0 , α_1+...+α_q = 1 ,; +∞otherwise , ].we havef^*(m)=[h(g_1,...,g_q)]^*(m) = min_α_1𝙸_1+...+α_q𝙸_q = mα_1+...+α_q = 1α_1 ≥ 0, ..., α_q ≥ 0(∑_i=1^qα_i log(α_i) + ∑_i=1^qα_i (-b_i) ) = min_α_1𝙸_1+...+α_q𝙸_q = mα_1+...+α_q = 1α_1 ≥ 0, ..., α_q ≥ 0(∑_i=1^qα_i log(α_i/n_i^0) ) .We minimize a Kullback–Leibler divergence between two distributions under linear constraints. If one of the α_i is zero, the convention 0 · g_i^*(m_i/0) = σ_dom g_i(m_i) applies, which equals 0 if m_i=0 and +∞ otherwise (see <cit.>), and the previous equalities remain true with m_i = α_i𝙸_i. The KKT conditions of this problem impose the constraint α_i >0 for all i ∈{1,...,q}. Thus,f^*(m)=[h(g_1,...,g_q)]^*(m) =min_α_1𝙸_1+...+α_q𝙸_q = mα_1+...+α_q = 1α_1 > 0, ..., α_q > 0(∑_i=1^qα_i log(α_i/n_i^0) ) .This minimum exists: this is a linear restriction of a convex and continuous function on a simplex, and therefore B = dom f^* ⊂ C.iii) C ⊂ B ⊂ A. If m_0 ∈ C, then there exists λ∈ (ℝ^*_+)^q such that ∑_i=1^qλ_i = 1 and λ^T𝙸 = m_0, so thatf^*(m_0) = sup_β̃∈ℝ^p(m_0 ·β̃ - log(∑_i n^0_i e^(𝙸β̃)_i)) = sup_β̃∈ℝ^p(log(e^∑_i λ_i(𝙸β̃)_i/∑_i n^0_i e^(𝙸β̃)_i) ) .If the supremum is reached, there is a maximizing element γ∈ℝ^p and m_0 ∈ B; this element is the solution to the system (<ref>) and thus m_0 ∈ A. To establish this result, it is enough to show that -F_m_0 is coercive. Let ϵ∈ℝ^p ∖{0}^p be an arbitrary vector and β̃ = x ϵ with x ∈ℝ. Then,F_m_0(x ϵ) = log(e^∑_i λ_i(𝙸ϵ)_i x/∑_i n^0_i e^(𝙸ϵ)_i x) = log( e^∑_i λ_i ω_i x/∑_i n^0_i e^ω_i x) ,with ω = 𝙸ϵ. 
Notice that the vector ω cannot satisfy the relations ω_1 = ... =ω_q because I is of full rank. Thus, if W = max_i ∈{1,...,q}(ω_i), we haveF_m_0(x ϵ) = log( e^(∑_i( λ_i ω_i) - W) x/∑_i n^0_i e^(ω_i-W) x) ,and ∑_i( λ_i ω_i) - W<0 because ∑λ_i = 1. Therefore, with Ω = {i ∈{1,...,q} | ω_i = W},lim_x→ +∞ e^(∑_i( λ_i ω_i) - W) x = 0 ,lim_x→ +∞∑_i n^0_i e^(ω_i-W) x = ∑_i ∈Ωn^0_i >0 ,which proves that the function -F_m_0 is coercive when m_0 ∈ C and completes the proof. The expression f^*(m) in (<ref>) is the minimization of a relative entropy between the class 0 distribution and a kind of ghost class 1 distribution (built on the I = I_0 design matrix). With the duality property, we can introduce a new likelihood. The following proposition leads to the same "limit" likelihood and justifies the use of the adjective "rescaled". Indeed:The limit imbalanced equations arise from the following rescaled likelihood:L^*(β̃|𝙸_0,𝙸_1,n^0,n^1) = ∏_j=1^q_1( e^(𝙸_1β̃)_j/∑_i n^0_ie^(𝙸_0β̃)_i)^n^1_j . With the initial likelihoodL(β|I_0,I_1,n^0,n^1) =∏_j = 1^q_1(e^(I_1β)_j/(1+e^(I_1β)_j))^n_j^1∏_j = 1^q_0(1+e^(I_0β)_j)^-n_j^0 ,and the relation exp(β_0) = |n^1|/(∑_i=1^q_0 n_i^0 e^(𝙸_0β̃)_i+∑_i=1^q_1 n_i^1 e^(𝙸_1β̃)_i)= (|n^1|/|n^0|) C(|n^1|/|n^0|) (see <ref>), we obtain the following expression for the likelihood, using the notation x = |n^1|/|n^0|:x^|n^1|∏_j=1^q_1(e^(𝙸_1β̃)_jC(x))^n^1_j∏_j=1^q_1( 1/(1 + x C(x)e^(𝙸_1β̃)_j))^n^1_j∏_j=1^q_0( 1/(1 + x C(x)e^(𝙸_0β̃)_j))^n^0_j .We assume that |n^0| is large enough to consider the limit "x → 0" (with |n^1| fixed) and to make the approximations(e^(𝙸_1β̃)_jC(x))^n_j^1→(e^(𝙸_1β̃)_j/∑_i n^0_ie^(𝙸_0β̃)_i)^n_j^1 ,∏_j=1^q_1( 1/(1 + x C(x)e^(𝙸_1β̃)_j))^n^1_j→ 1 ,and∏_j=1^q_0( 1/(1 + x C(x)e^(𝙸_0β̃)_j))^n^0_j→ e^-|n^1| ,thus, using a rescaling term,L(β) ×(|n^0|/|n^1|)^|n^1|e^|n^1|→ L^*(β̃) ,as β_0 tends to minus infinity because |n^0|/|n^1|→ + ∞.The reader can see an analogy in physics with the existence of different scales of modelization. 
For example, the discrete microscopic N-body problem is changed into the mesoscopic Boltzmann equation using the Boltzmann-Grad limit. See the book <cit.> for further information on hydrodynamic limits.This new likelihood now makes it possible to consider a wide range of problems, related to variance reduction using simple prior penalties (Subsection <ref>) or regularization (Section <ref>).§.§ The relative entropy dual problemWith a likelihood and an entropy, we benefit from two points of view in order to numerically estimate the regression coefficients. The classical approach using a Newton-Raphson algorithm associated to the likelihood can be challenged by other algorithms on the primal or dual problems as described in <cit.> and <cit.> for classical logistic regression. We present here the dual problem and its link with the initial regression coefficients. We leave the numerical analysis to another study.The regression coefficients of the limit imbalanced regression are given by the formulaeβ̂̃̂ = (P^TP)^-1P^T log( (n^*/n^0) e^-A) ,where n^* is the probability distribution solving a relative entropy problem with linear constraintsn^* = argmin_𝙸^T α = N^1, α_1+...+α_q = 1, α_1 > 0, ..., α_q > 0 DL(α || n^0) ,A = ∑_i=1^qn^*_i log(n^*_i/n_i^0) and P = 𝙸 - M, where M is the matrix whose rows all equal (N^1)^T, so that P_ij=𝙸_ij-N^1_j.With the existence of a unique solution (see Subsection <ref>), there exists a solution n^* ∈ (ℝ^*_+)^q such that 𝙸^T n^* = N^1, ∑_i n^*_i = 1, andN^1 ·β̃ - log(∑_i n^0_i e^(𝙸β̃)_i) = ∑_i=1^qn^*_i log(n^*_i/n_i^0) = A .Then, using equations (<ref>) we obtain𝙸^T n^* = 𝙸^T (n^0 e^Ae^Pβ̃) .Let H in ℳ_q × (q-p-1)(ℝ) be an orthogonal matrix to I (the previous relation remains true with I instead of 𝙸) and γ∈ℝ^q-p-1, such that we can remove 𝙸 to obtain the relationn^* + H γ = n^0 e^Ae^Pβ̃ ,hence,-∑_i=1^qn^*_i log(n^*_i/n_i^0)+ log((n^*_k+(Hγ)_k)/n_k^0) = ∑_j=1^p(𝙸_kj-N^1_j)β̃_j , k ∈{1,...,q} .Summing all these relations with weights n^*_k+(Hγ)_k, using the fact that ∑_k=1^q(Hγ)_k=0, gives ∑_i=1^qn^*_i 
log(n^*_i/n_i^0)- ∑_k=1^q(n^*_k+(Hγ)_k)log((n^*_k+(Hγ)_k)/n_k^0) = 0 .Due to the convexity of the Kullback-Leibler divergence, we have a unique minimum obtained (by definition of n^*) at γ =0. Therefore P β̃ = log((n^*/n^0)e^-A) ,and the result is proved if P is of full rank. Suppose that this is not the case. Then, there exists γ∈ℝ^p ∖{0}^p such that P γ = (𝙸-M)γ = 0, therefore 𝙸γ = C with C a vector with identical components all equal to ∑_i=1^pN^1_i γ_i. Consequently, the matrix I (that is 𝙸 with the intercept column of ones) is no longer of full rank, which is, by definition of I = I_0, impossible.§.§ Priors for variance reduction and a priori informationThe rare events structure of class imbalance goes hand in hand with the problem of precision for estimates. A classical solution consists in introducing an a priori distribution in a Bayesian context. This can be done using a Jeffreys non-informative prior <cit.> allowing both first order bias removal and variance shrinkage <cit.>. Thus, we have to maximize the expressionL^*_J(β̃|𝙸_0,𝙸_1,n^0,n^1) = ∏_j=1^q_1( e^(𝙸_1β̃)_j/∑_i n^0_ie^(𝙸_0β̃)_i)^n^1_j× |ℐ(β̃)|^1/2 ,with |ℐ| the determinant of the Fisher information matrix. This approach is implemented in the R package logistf for logistic regressions. In the imbalanced case, we search for a method preserving the shape of the limit equations while achieving variance reduction at the same time: we choose the following approximation1/2log(|ℐ(β̃)|) ≈1/2∑_i=1^plog( 𝙸_i^T(n^0 e^𝙸β̃)/∑_j n^0_je^(𝙸β̃)_j-(𝙸_i^T(n^0 e^𝙸β̃)/∑_j n^0_je^(𝙸β̃)_j)^2) ,supposing an absence of correlation between predictors in a random design framework (see Section <ref>). 
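The proposition suggests a two-step numerical recipe: solve the constrained entropy problem with a generic solver, then read off the slopes by least squares. A hedged sketch with hypothetical numbers, where scipy's SLSQP stands in for a dedicated algorithm:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical class-0 design (intercept column removed) and weights.
I0 = np.array([[0, 0],
               [1, 0],
               [0, 1],
               [1, 1]], dtype=float)
n0 = np.array([5.0, 3.0, 4.0, 2.0])
N1 = np.array([0.4, 0.5])          # class-1 predictor means

# Entropy problem: minimize sum_i alpha_i log(alpha_i / n0_i)
# subject to I0^T alpha = N1 and sum(alpha) = 1.
cons = [{"type": "eq", "fun": lambda a: I0.T @ a - N1},
        {"type": "eq", "fun": lambda a: a.sum() - 1.0}]
res = minimize(lambda a: np.sum(a * np.log(a / n0)), np.full(4, 0.25),
               bounds=[(1e-9, 1.0)] * 4, constraints=cons, method="SLSQP")
n_star = res.x

# Recover the slopes: P beta = log((n*/n0) e^{-A}) with P_i = I0_i - N1.
A = np.sum(n_star * np.log(n_star / n0))
P = I0 - N1
beta = np.linalg.lstsq(P, np.log(n_star / n0) - A, rcond=None)[0]

# Sanity check: beta solves the limit imbalanced equations.
w = n0 * np.exp(I0 @ beta)
print(np.allclose(I0.T @ (w / w.sum()), N1, atol=1e-3))
```

The least-squares step is exactly (P^TP)^-1P^T applied to the log-ratio vector, since P has full column rank.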
With this hypothesis, we derive the first order equationsN^1_i - |n^1| 𝙸_i^T(n^0 e^𝙸β̃)/∑_j n^0_je^(𝙸β̃)_j + 1/2(1 - 2𝙸_i^T(n^0 e^𝙸β̃)/∑_j n^0_je^(𝙸β̃)_j) = 0 , i ∈{1,...,p} ,thus,𝙸^T(n^0 e^𝙸β̃/∑_j n^0_je^(𝙸β̃)_j) = (N^1 + 1/2)/(|n^1|+1) = N^1_J .In table <ref>, we simulate data sets as previously done with the length for n^0 = (n^0_1,...,n^0_10)^T fixed (to 10) and we compare estimated bias and variance for coefficient β_4 = 0.25 with three different methods: a classical logistic regression (bias and sd.), the imbalanced case with means N^1_J (im. bias and im. sd.) and the Jeffreys exact penalty (J. bias and J. sd.). Variance reduction is about 2 percent with the Jeffreys prior, and about half as much with its easily computable approximation in class imbalance. Bias was already small and gets a little smaller. The shrinkage of the variance is limited by the Cramér-Rao bound (see Fisher variance in table <ref>) and no miraculous reduction was conceivable. In the next section, we consider path following methods to complete regularization and highlight its "simplicity" with binary data. The initial parameters being the maximum a posteriori estimate (MAP), this estimation is a central problem of the limit imbalanced study. The benefit of the rescaled likelihood compared with the standard one is in the easy use of exponential a priori penalties. Indeed, with the penalty[P could be written as a probability distribution with a normalization term (the support of regression coefficients is finite).]P(β̃) = exp(∑_i=1^pϵ_i β̃_i) ,where ϵ∈ℝ^p, we maintain the shape of the likelihood by only perturbing the predictor means vector N^1 by ϵ/|n^1| (the MAP exists if and only if N^1+ϵ/|n^1| is surrounded by the rows of I, see Theorem <ref>). § PATH ESTIMATORS FOR LASSO-TYPE REGULARIZATIONIn this section, we consider that each observation 𝙸_i (i ∈{1,...,q}) is generated by a random binary vector X_i^T = (X_i1,...,X_ip)^T ∈{0,1}^p with 𝔼[X_ij]=b_j∈ ]0,1[, j ∈{1,...,p}. 
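The equivalence between an exponential prior and a shift of the means is easy to check numerically. In this made-up example (hypothetical design, weights and ϵ), the penalized objective and the shifted-target MLE agree:

```python
import numpy as np
from scipy.optimize import minimize

# Sketch: exp(eps . beta) on the rescaled likelihood == shifting N1 by eps/|n1|.
I0 = np.array([[0, 0], [1, 0], [0, 1], [1, 1]], dtype=float)
n0 = np.array([5.0, 3.0, 4.0, 2.0])
N1 = np.array([0.4, 0.5])
n1_tot = 20.0                       # |n^1|
eps = np.array([0.5, -0.5])

def f(b):
    # f(beta) = log sum_i n0_i e^{(I0 beta)_i}
    return np.log(np.sum(n0 * np.exp(I0 @ b)))

# Route 1: maximize log L* + eps . beta directly.
direct = minimize(lambda b: n1_tot * (f(b) - N1 @ b) - eps @ b,
                  np.zeros(2), method="BFGS").x
# Route 2: plain limit MLE with the shifted target N1 + eps/|n1|.
shifted = minimize(lambda b: f(b) - (N1 + eps / n1_tot) @ b,
                   np.zeros(2), method="BFGS").x
print(np.allclose(direct, shifted, atol=1e-3))
```

Both stationarity conditions reduce to the softmax moments equalling N^1 + ϵ/|n^1|, which is why the shapes coincide.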
With this modelization, we find many path estimators depending on the underlying correlation structure of the random design.§.§ Limit lasso propertiesThe well-known lasso regularization consists in introducing a positive parameter λ defining the strength of a Laplace prior distribution <cit.>. We search for the maximum of the expressionL (β, λ) = L^*(β)×exp( - λ∑_i=1^p |β_i|) ,which verifies the following simple first order conditions. Notice that we use, from now on, the notation β instead of β̃ to facilitate the reading. The limit imbalanced BLR problem with lasso penalty leads to the system of equations𝙸^T( n^0 e^𝙸β/∑_i n^0_ie^(𝙸β)_i) = N^1 - t ν(β),with t = λ/|n^1| and ν_j(β) = sign(β_j) if β_j ≠ 0, ν_j(β) ∈ [-1,1] if β_j = 0, for all j ∈{1,...,p} (ν is the subgradient of the l^1 norm).Thus, the lasso has a clear interpretation as a shift operating on the observed proportions N^1. Thereafter, we often use the vector p(t) ∈ℝ^p defined as p(t) = N^1 - tν(β). If the strong overlap condition in Theorem <ref> is satisfied, then the functionβ̂ : {[ ℝ→ℝ^p ,; t ↦argmax_β∈ℝ^p( L (β,t)) , ]. is continuous for all t ≥ 0 and there exists T ∈ [0,1], such that β(t)=0 , ∀ t ≥ T.With the positivity of L, we have argmax_β∈ℝ^p( L (β,t)) = argmin_β∈ℝ^p(-log ( L (β,t))) ,and for all t ≥ 0, -log ( L) is a strictly convex and coercive function in β if the strong overlap condition is satisfied (see proof of Theorem <ref>). Therefore, the function β̂ is well defined for all t ≥ 0. Furthermore, this function is continuous because of the continuity in (β,t) of log ( L) and its strict concavity in β. The equations (<ref>) with t=1 have no solution if one of the components of ν(β) is equal to -1 or +1, therefore β̂_j(1) = 0 for all j ∈{1,...,p}.Using the law of large numbers, the family of model parameters {β(t)}_t ≥ 0 solves the system of equations 𝔼[X_1je^X_1β(t)]/𝔼[e^X_1β(t)] = p_j(t) , j ∈{1,...,p} ,with 𝔼 being the expectation operator. 
This previous system of equations takes the same form as in (<ref>) because X_1 is a discrete random vector, and therefore the path estimator {β(t)}_t ≥ 0 is continuous. Notice that the function ν∘β: ℝ^+ →ℝ^p is also continuous in t. §.§ Path estimatorsThanks to this previous remark, we are able to find precise analytic estimators of the path in the case of independent and orthogonal random designs. Notice that such solutions already exist in the framework of linear regression (see <cit.>). From now on, the strong overlap condition is considered to be always satisfied at t=0.If the random vector X generating the observations 𝙸 has independent components, a precise path estimator {β̂(t)}_t ≥ 0 is given by the formulaeβ̂_j(t) = β̂_j(0) + log( (1-t sign(β̂_j)/N_j^1)/(1+t sign(β̂_j)/(1-N_j^1)) ) , j ∈{1,...,p} , if t ∈[ 0,t_0j] , t_0j = N_j^1|1-e^-β̂_j(0)| / (1+(N_j^1/(1-N_j^1))e^-β̂_j(0)) andβ̂_j(t) = 0 ,if t > t_0j .The coefficients β̂(0) are given by the classical MLE (solution of equations (<ref>) without intercept) if we want to estimate the path obtained by an (imbalanced) logistic regression. If we use the limit equations, we need the MLE of the rescaled likelihood (Proposition <ref> and equations (<ref>)) and in this case:β̂_j(0) = log( (N^1_j/(1-N^1_j))·((1-N^0_j)/N^0_j) ) , j ∈{1,...,p} . For all j ∈{1,...,p} we use the hypothesis of independence:p_j(t) = 𝔼[X_je^Xβ(t)]/𝔼[e^Xβ(t)] = 𝔼[X_je^X_jβ_j(t)]𝔼[∏_kje^X_kβ_k(t)]/𝔼[e^X_jβ_j(t)]𝔼[∏_kje^X_kβ_k(t)]= 𝔼[X_je^X_jβ_j(t)]/𝔼[e^X_jβ_j(t)] = e^β_j(t)P(X_j = 1)/(e^β_j(t)P(X_j = 1) + P(X_j = 0)) = e^β_j(t) b_j/(e^β_j(t)b_j + (1-b_j)) ,and the solution isβ_j(t)= log(p_j(t)/(1-p_j(t))) - log( b_j/(1-b_j)) ,t ∈[0, |N_j^1-b_j|] ,and β_j(t) = 0 if t > |N_j^1-b_j|. Indeed, β̇_j is negative in the region β_j >0 and positive in the region β_j <0. β_j(0) = log((N_j^1/(1-N_j^1))·((1-b_j)/b_j)) for a random design with independent predictors (see Appendix <ref> with p=1). We replace all the b_j by the frequencies of observations N^0_j to obtain the estimator. 
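The closed-form path under an independent design is straightforward to implement. The function below is a sketch of the estimator version, with the b_j replaced by the class-0 frequencies N^0_j; the means are hypothetical:

```python
import numpy as np

def lasso_path_independent(N1, N0, t):
    """Hedged sketch of the closed-form lasso path under an independent
    binary design. N1, N0: class-wise predictor means; t = lambda/|n^1|."""
    s = np.sign(N1 - N0)                  # sign of beta_j(0)
    t0 = np.abs(N1 - N0)                  # coefficient j vanishes past t0_j
    p = N1 - np.minimum(t, t0) * s        # shifted target means, clipped at N0
    return np.log(p / (1 - p)) - np.log(N0 / (1 - N0))

N1 = np.array([0.6, 0.3, 0.5])
N0 = np.array([0.4, 0.5, 0.5])
print(lasso_path_independent(N1, N0, 0.0))   # the unpenalized limit MLE
print(lasso_path_independent(N1, N0, 0.1))   # shrunk toward zero
print(lasso_path_independent(N1, N0, 0.3))   # all coefficients at zero
```

Each coordinate is handled separately, which is exactly what independence buys: the shift t·ν_j acts on N^1_j alone, and the coefficient dies once the shifted mean reaches N^0_j.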
The orthogonal case, when the inner product between columns of the design matrix vanishes (X_1jX_1k = 0, j ≠ k), is also tractable.

If the random design is orthogonal, we have 𝙸 ∈ ℳ_(p+1)×p(𝔹) filled by zeros except at positions (i+1,i), i = 1,...,p, and the derivative of the path estimator takes the form β̇̂_i(t) = d/dt log( p_i(t)/(1 - ∑_s∈S_t p_s(t)) ), i ∈ S_t, t ≥ 0, with {S_t}_t ≥ 0 a family of subsets of {1,...,p} containing the indexes of the non-zero coefficients of the vector β at time t ≥ 0. The algorithm that gives the positions of the change-points in S_t is described in the proof.

With the hypothesis of orthogonality, equations (<ref>) are reduced to b_1e^β_1(t)/(b_0 + b_1e^β_1(t) + ⋯ + b_pe^β_p(t)) = N^1_1 - t ν_1(t), ..., b_pe^β_p(t)/(b_0 + b_1e^β_1(t) + ⋯ + b_pe^β_p(t)) = N^1_p - t ν_p(t), and we obtain e^β_i(t) = (b_0/b_i) · p_i(t)/(1 - ∑_j=1^p p_j(t)), i = 1,...,p.

Let S̄_t = {0,1,...,p} ∖ S_t and S̄^*_t = S̄_t ∖ {0}; then e^β_i(t) = (b_0/b_i) · (N^1_i - t sign(β_i)) / (1 - ∑_j=1^p N^1_j + t∑_s∈S_t sign(β_s) + t∑_s∈S̄^*_t ν_s(t)), i ∈ S_t, and 1 = (b_0/b_i) · (N^1_i - t ν_i(t)) / (1 - ∑_j=1^p N^1_j + t∑_s∈S_t sign(β_s) + t∑_s∈S̄^*_t ν_s(t)), i ∈ S̄^*_t.

After computation, we have explicit formulae for the continuous functions β_i and ν_i (i ∈ {1,...,p}): e^β_i(t) = (b^S/b_i) · p_i(t)/(1 - ∑_s∈S_t p_s(t)), i ∈ S_t, and ν_i(t) = (1/t) · (b^S N^1_i - b_i N^S)/b^S - b_i R^S/b^S, i ∈ S̄^*_t, with b^S = ∑_s∈S̄_t b_s, N^S = ∑_s∈S̄_t N^1_s and R^S = ∑_s∈S_t sign(β_s). These functions are monotone; we need the change-points to draw the path, that is, the finite sequence of different models {S_t}_t ≥ 0 = {S_t_0, S_t_1, ..., S_t_m}, m ∈ ℕ^*.
For all i ∈ {0,...,m-1}, the set S_t is constant and equal to a unique subset on [t_i, t_{i+1}). If t ∈ [t_i, t_{i+1}) and we know S_t, we determine S_{t_{i-1}}, S_{t_{i+1}}, t_i and t_{i+1} by solving: β_i(u_i) = 0 ⇔ u_i = (b^S N^1_i - b_i N^S)/(b^S sign(β_i) + b_i R^S), i ∈ S_t; ν_i(v_i^+) = 1 ⇔ v_i^+ = (b^S N^1_i - b_i N^S)/(b^S + b_i R^S), i ∈ S̄^*_t; ν_i(v_i^-) = -1 ⇔ v_i^- = (b^S N^1_i - b_i N^S)/(-b^S + b_i R^S), i ∈ S̄^*_t.

We define W = {w_i} = {u_i, v_j^+, v_j^-, i ∈ S_t, j ∈ S̄^*_t} and the two adjacent change-points are given by t_{i+1} = min_j {w_j | w_j > t} and t_i = max_j {w_j | w_j ≤ t}. Therefore, S_{t_{i+1}} = S_t ∪ V_{i+1} ∖ U_{i+1} and S_{t_{i-1}} = S_t ∪ V_i ∖ U_i, with U_i = {j ∈ {1,...,p} | u_j = t_i} and V_i = {j ∈ {1,...,p} | v_j^+ = t_i or v_j^- = t_i}.

The path can be built forward or backward. If we choose the path following approach (forward), S_{t_0} is found using the MLE of the rescaled likelihood (see Section <ref>) and t_0 = 0. In the other configuration (backward), we have S_{t_m} = ∅ and for t > t_m, b^S = N^S = 1 and R^S = 0, so that t_m = max_{i∈{1,...,p}} |N^1_i - b_i|. Simulations with this type of design show that each path usually vanishes only once (and does not reappear), and thus m > p is a very rare (perhaps impossible) configuration.

The opposite situation to orthogonality is inclusion. For example, if X_12 is included in X_11, meaning that for the observed data 𝙸_i1 = 1 if 𝙸_i2 = 1, we find an analytic description of the estimator given by the formulae β̇̂_1(t) = d/dt log( p_1(t)/(p_2(t)-p_1(t)) ) and β̇̂_2(t) = d/dt log( (p_2(t)-p_1(t))/(1-p_2(t)) ), for all t such that β̂_1(t)β̂_2(t) ≠ 0. This solution is likely generalizable (with a design in stairs as presented in Appendix <ref>); however, this case is meaningless in the analysis of spontaneous reports databases and is therefore left aside. We give examples of plots of path estimates compared with a standard (using L and not L^*) lasso path for different imbalance strengths in Appendix <ref>.
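The forward change-point recursion for the orthogonal design described above can be sketched in a few lines. The function name and data layout below are ours, and reactivations (the v_i^± points) are ignored, since a path reappearing after vanishing seems very rare with this design; index 0 denotes the intercept-only cell.

```python
def orthogonal_changepoints(b, N1):
    """Forward change-point sequence for the orthogonal design (sketch).

    b  = (b_0,...,b_p):  class-0 cell frequencies,
    N1 = (N^1_0,...,N^1_p): class-1 cell frequencies,
    with index 0 the intercept-only cell.  Returns the list of
    (vanishing time u_j, index j) in increasing order of t.
    """
    p = len(b) - 1
    active = set(range(1, p + 1))
    # sign of beta_i(0), read off e^{beta_i(0)} = (b_0/b_i) * N1_i / N1_0
    sign = {i: 1.0 if (b[0] / b[i]) * (N1[i] / N1[0]) > 1.0 else -1.0
            for i in active}
    bS, NS = b[0], N1[0]        # sums over the inactive set (bar S_t)
    t, changepoints = 0.0, []
    while active:
        R = sum(sign[i] for i in active)
        u = {i: (bS * N1[i] - b[i] * NS) / (bS * sign[i] + b[i] * R)
             for i in active}
        j = min((i for i in active if u[i] > t), key=lambda i: u[i])
        t = u[j]
        changepoints.append((t, j))
        active.remove(j)
        bS += b[j]              # the vanished cell joins the inactive set
        NS += N1[j]
    return changepoints
```

On the last segment the recursion reproduces t_m = max_i |N^1_i - b_i|, as stated above.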
The results highlight the high quality of the analytic path estimators, even in the absence of class imbalance.

Another regularization method, called the elastic net penalization, uses, in addition to the lasso, a second penalty term of ridge (or Tikhonov) type <cit.>: L(β|y) = L^*(β|y) × exp( -λ[ α∑_i=1^p |β_i| + ((1-α)/2)∑_i=1^p β_i^2 ] ), with α ∈ ]0,1]. In the case of independence in the random vector X, we have an explicit formula for t with respect to β: t = N^1/(α sign(β̂) + (1-α)β̂) · (1 - e^-β̂(0)+β̂)/(1 + (N^1/(1-N^1)) e^-β̂(0)+β̂), for β̂ between 0 and β̂(0). The coefficients vanish when t_0^en = (1/α) t_0^lasso. The proof of this result is a simple adaptation of the proof for the lasso in Theorem <ref>.

§.§ Negative correlation structure

If the random design verifies the relations 𝔼[X_1jX_1k e^X_1β]𝔼[e^X_1β] ≤ 𝔼[X_1j e^X_1β]𝔼[X_1k e^X_1β], ∀ j ≠ k, ∀ t ≥ 0, this in-between situation of a β-dependent negative correlation between the variables X_j (j = 1,...,p) is also tractable and particularly interesting in the sparse context of near-zero components for the vector N^1[Spontaneous reports databases are an example of such sparsity with negative correlation.]. We find two estimators that surround the real path.

The path estimator in the β-dependent negative correlation case is surrounded by estimators whose derivatives are given by d/dt log( p_j(t)/(1 - ∑_s∈S^+_t p_s(t)) ) ≤ β̇̂_j(t) ≤ d/dt log( p_j(t)/(1 - ∑_s∈S^-_t p_s(t)) ), j ∈ S_t, with S_t^+ = {j ∈ S_t | sign(β_j) > 0} and S_t^- = {j ∈ S_t | sign(β_j) < 0}.

With the rare occurrence of resurgence of a coefficient after vanishing, we neglect this possibility and we easily find the p vanishing points and thus the family of subsets {S_t}_t ≥ 0. We differentiate equations (<ref>) with respect to t, considering only the equations verifying the condition β_j(t) ≠ 0, i.e. j ∈ S_t.
We obtain at time t: β̇_j(t) ( 𝙸_j^T(n^0e^𝙸β)/∑_i n^0_ie^(𝙸β)_i - ( 𝙸_j^T(n^0e^𝙸β)/∑_i n^0_ie^(𝙸β)_i )^2 ) + ∑_{k≠j, k∈S_t} β̇_k(t) ( ∑_i 𝙸_ij𝙸_ik n^0_ie^(𝙸β)_i/∑_i n^0_ie^(𝙸β)_i - ( 𝙸_j^T(n^0e^𝙸β)/∑_i n^0_ie^(𝙸β)_i )( 𝙸_k^T(n^0e^𝙸β)/∑_i n^0_ie^(𝙸β)_i ) ) = -sign(β_j), or written differently, β̇_j(t) p_j(t)(1-p_j(t)) + ∑_{k≠j, k∈S_t} β̇_k(t) ( R_jk(t) - p_j(t)p_k(t) ) = ṗ_j(t), where R_jk(t) = ∑_i 𝙸_ij𝙸_ik n^0_ie^(𝙸β)_i/∑_i n^0_ie^(𝙸β)_i is a t-dependent proportion of rows with a one in both columns j and k.

With only negative correlations or independence between components of X, as long as R_jk(t) ≤ p_j(t)p_k(t) we define the matrix F(t) ∈ ℳ_r×r([0,1]), with r = #S_t, by R_jk(t) - p_j(t)p_k(t) = (F_jk(t)-1) p_j(t)p_k(t), if the observations 𝙸 give such a matrix F. We obtain (𝕀_r - (D-F)P)β̇ = P^-1Ṗ(1), with P a diagonal matrix filled with the elements {p_i(t), i ∈ S_t}. The matrix D is the correlation-track matrix containing ones at positions (j,k) if F_jk(0) < 1, and we have[The non-singularity of the matrix C(t) in (<ref>), C(t)β̇ = d/dt log p(t), will be proven with Proposition <ref>.] β̇(t) = ∑_i=0^+∞ ((D-F)P)^i P^-1Ṗ(1) = P^-1Ṗ(1) + ∑_i=1^+∞ ((D-F)P)^i P^-1Ṗ(1), so that, using the positivity of all the elements in the matrix D-F: d/dt( log(P(1)) - log(1-DP^+(1)) ) ≤ β̇(t) ≤ d/dt( log(P(1)) - log(1-DP^-(1)) ), with P^+ the diagonal matrix filled with the vector p^+(t) = (max(sign(β_i),0) p_i(t))_i∈S_t and P^- with the vector p^-(t) = (max(-sign(β_i),0) p_i(t))_i∈S_t. Finally, d/dt log( p_j(t)/(1-(Dp^+(t))_j) ) ≤ β̇̂_j(t) ≤ d/dt log( p_j(t)/(1-(Dp^-(t))_j) ), j ∈ S_t. In the presence of sparsity (small components in N^1), 0 < p_j(t) ≪ 1-(Dp^-(t))_j and 0 < p_j(t) ≪ 1-(Dp^+(t))_j, which makes the previous upper and lower bounds good path estimators.
The p (or more) change-points are determined step by step as in the previous subsection and the estimated path β(t) is stuck between a lower path and an upper path.

§ EFFICIENT ALGORITHMS FOR LASSO REGULARIZATION

In this last section, we propose two new algorithms drawing piecewise logarithmic approximate paths derived from a small number of matrix inversions (p or more). The logarithmic function naturally arose in the expression of all previously found path estimators; consequently, we build approximations involving this function. The main benefit of our algorithms is the direct computation of the sequence {t_i}, as done by the LARS <cit.> for linear regression. Our first algorithm follows the path (t increases) and is a simplified procedure adapted to data with a low correlation structure. The second algorithm is a backward procedure (t decreases toward zero) and can challenge the classic coordinate descent approach <cit.>. The efficiency of the algorithms is eventually illustrated on pharmacovigilance data.

§.§ Cauchy problem

The derivative of the first-order equations for the lasso with respect to t leads to a Cauchy problem.

The lasso regularization path is described by the following system of differential equations: β̇(t) = C(t)^-1 d/dt log p(t), t > 0, with C(t) ∈ ℳ_r_t×r_t(ℝ) (r_t = #S_t), β ∈ ℝ^r_t, log p(t) ∈ ℝ^r_t and C_jk(t) = ∑_u 𝙸_uj𝙸_uk n^0_u e^(𝙸β)_u(t) / 𝙸_j^T( n^0 e^𝙸β(t) ) - p_k(t), j,k ∈ S_t ⊂ {1,...,p}.

Equations (<ref>) are divided by the vector p(t) and we obtain the desired equations. It remains to prove the non-singularity of the matrix C(t) for all t > 0.
With the diagonal matrix P ∈ ℳ_r_t×r_t(ℝ) filled by the elements (p_j)_j∈S_t, we build a matrix C̃ = PC whose elements are: (PC)_jk = C̃_jk = ∑_u 𝙸_uj𝙸_uk n^0_u e^(𝙸β)_u/∑_u n_u^0 e^(𝙸β)_u - p_j(t)p_k(t), j,k ∈ S_t. Suppose that this matrix C̃(t) is singular; then there exists a not identically null vector γ ∈ ℝ^r_t such that C̃(t)γ = 0, or written component-by-component, (C̃γ)_j = ∑_u 𝙸_uj(∑_k 𝙸_ukγ_k) n^0_u e^(𝙸β)_u/∑_u n_u^0 e^(𝙸β)_u - p_j(t) ∑_u(∑_k 𝙸_ukγ_k) n^0_u e^(𝙸β)_u/∑_u n_u^0 e^(𝙸β)_u = 0, j ∈ S_t. We compute the linear combination ∑_j γ_j (C̃γ)_j = 0 to obtain after computations ( ∑_u J_u^2 n^0_u e^(𝙸β)_u )( ∑_u n_u^0 e^(𝙸β)_u ) - ( ∑_u J_u n^0_u e^(𝙸β)_u )^2 = 0, with J_u = ∑_l 𝙸_ulγ_l. This relation is expanded and simplified into ∑_uv (J_u-J_v)^2 n^0_u n^0_v e^(𝙸β)_u+(𝙸β)_v = 0. This is a sum of nonnegative terms equal to zero, meaning that each term vanishes, and we get J_u = const for all u = 1,...,n. Thus 𝙸γ = const, which is impossible because the matrix 𝙸 is a full rank matrix.

§.§ The piecewise logarithmic approximate path: a first simple algorithm

Path following algorithms <cit.> are methods competing with the more widely used coordinate descent algorithms <cit.> <cit.>. We here present a simple algorithm for an increasing regularization parameter t. Within this procedure, we are able to estimate at each step the value of t at which the next component of the vector β(t) vanishes, thus speeding up the classical Newton-Raphson step <cit.>. We consider that the correlation between predictors is "low", so that the emergence of a coefficient along the path after vanishing is not taken into account (but this case is included in the second algorithm). The path following algorithm for limit imbalanced logistic regression by binary predictors (with low correlation) is the following:

i = 0, t_0 = 0, β(t_0) = β(0) given.
S_0 = {j | β_j(0) ≠ 0, j = 1,...,p}.
WHILE r_i = #S_t_i ≠ 0 DO
t_{i+1} = t_i + min ΔT_i, ΔT_i = { Δt_j | Δt_j = (1-e^-β_j(t_i)) / (C_i^-1( sign(β)/p(t_i) ))_j > 0, j ∈ S_t_i }, with C_i ∈ ℳ_r_i×r_i(ℝ) such that (C_i)_jk(t_i) = ∑_u 𝙸_uj𝙸_uk n^0_u e^(𝙸β)_u(t_i) / 𝙸_j^T( n^0 e^𝙸β(t_i) ) - p_k(t_i), j,k ∈ S_t_i ⊂ {1,...,p}.
The path, on the segment [t_i, t_{i+1}], is given by β(t) - β(t_i) = log( 1 - C_i^-1( sign(β)/p(t_i) )(t-t_i) ), t ∈ [t_i, t_{i+1}], and S_t_{i+1} = S_t_i ∖ U_i, U_i = {j ∈ S_t_i | Δt_j = min ΔT_i}.
i becomes i+1.
END DO.

Equations (<ref>) take the form C(t)β̇ = d/dt log(p(t)), with C(t) called the correction matrix, C_jk(t) = ∑_u 𝙸_uj𝙸_uk n^0_u e^(𝙸β)_u / 𝙸_j^T( n^0 e^𝙸β ) - p_k(t), j,k ∈ S_t. Between two annulations of regression coefficients along the path (t_i and t_{i+1}), we consider this matrix to be constant (C(t_i) = C_i). In this case, β(t_{i+1}) - β(t_i) = C_i^-1[ log(p(t_{i+1})) - log(p(t_i)) ]. We have t_0 = 0, but the sequence of values {t_i} is unknown. However, we iteratively approximate them as follows. We write β(t_{i+1}) - β(t_i) = C_i^-1 log( 1 - ( sign(β)/p(t_i) )(t_{i+1}-t_i) ) ≈ log( 1 - C_i^-1( sign(β)/p(t_i) )(t_{i+1}-t_i) ), because |C_i^-1( sign(β)/p(t_i) )(t_{i+1}-t_i)| is small for a relatively small step t_{i+1}-t_i. We obtain the piecewise logarithmic path: β(t) - β(t_i) = log( 1 - C_i^-1( sign(β)/p(t_i) )(t-t_i) ), t ∈ [t_i, t_{i+1}], with t_{i+1} = t_i + min ΔT_i, ΔT_i = { Δt_j | Δt_j = (1-e^-β_j(t_i)) / (C_i^-1( sign(β)/p(t_i) ))_j > 0, j ∈ S_t_i }, i = 0,1,...

ΔT_i is the set of values for t_{i+1}-t_i solving (<ref>) with β_j(t_{i+1}) = 0 (for each j ∈ S_t_i). The set U_i gives at each step the indexes of the regression coefficients to remove from S_t_i. Other approximations could be performed, for example using a second-order term in the previous approximation (<ref>). Simulation tests show that our choice seems to give better results. We notice that the size of the matrix C_i decreases during this procedure, speeding up the computation at each new step t_i. This algorithm has two main computational advantages.
Firstly, the sequence {t_i}_i=1,...,p is directly determined, whereas other algorithms use a regular discretization on a logarithmic scale (coordinate descent) or Newton-Raphson steps (path following). Secondly, the sum ∑_i n^0_i e^(𝙸β)_i does not appear in the C_i matrices, which can highly reduce the computational cost, especially if the matrix 𝙸 is sparse (0.03% of ones in the French spontaneous reports database): this algorithm handles sparsity!

To explore the efficiency of the algorithm, we simulate data sets with different correlation structures. Model selection is often provided with the BIC <cit.>, which requires knowing the different models arising along the path. Hence, we decide to evaluate the algorithm's accuracy using a simple indicator: a comparison of the sequences of coefficients in the order of vanishing along the path. The indicator is p'/p if a simulation with our algorithm gives p' coefficients at the same index as in the sequence obtained by a classical lasso algorithm (coordinate descent in the R package glmnet). The correlation coefficient (from r = 0 to r = 0.9) means that we chose the initial R_jk = (1-r)b_jb_k + r min(b_j,b_k). We simulate 10^3 paths for each number nb and each r, nb being the number of predictors in correlation. For each path, β_0 = -5 and the 10 regression coefficients (β_1,...,β_10) are always the same and chosen on a regular scale between -0.5 and 0.5. With nb = 3, 5 or 8 correlated predictors over the 10 used, the exact solution under the assumption of independence (i) (see Theorem <ref>) deteriorates with the increase in correlation (r), which is (almost) not the case if we use our algorithm (a). Notice that, with a result around 0.8, the approximate path is often very close to the exact one; this is due to the inversion in the sequence of two close t_i terms (see (<ref>)).

§.§ A new algorithm

The second algorithm presented in this section computes forward selection.
It is more suitable for problems with a large number of predictors (when we are looking for a sparse model) and/or in the presence of a strong correlation structure.

The standard approach for computing a regularization path by decreasing t with logistic regression consists in using a quadratic approximation of the log-likelihood between two consecutive close solutions (that is, in practice, two parameters t_i and t_{i+1} such that t_{i+1}-t_i < 0 is small in absolute value). Using small steps for the parameter sequence {t_i} to ensure a good approximation, the path is drawn by the cyclical coordinate method (see <cit.> and the R package glmnet). Our new algorithm is a kind of equivalent of the LARS algorithm for logistic regression: we compute large steps in t. Furthermore, in comparison with the cyclic coordinate descent algorithm, there is no loop at a fixed parameter t. After presenting the algorithm, we challenge the glmnet package with our approach. The backward algorithm for limit imbalanced logistic regression by binary predictors is the following:

i = 0, t_0 = max_i{|N^1_i - N^0_i|} = |N^1_k - N^0_k|, β(t_0) = (0,...,0)^T ∈ ℝ^p and ε > 0 given. S_t_0 = {k}.
WHILE (t_i > ε or #S_t_i < p) DO
t_{i+1} = t_i + max{ΔT_i, Δ̄T_i}, with ΔT_i = { Δt_u | Δt_u = (1-e^-β_u(t_i))/Φ_iu < 0, u ∈ S_t_i } and Δ̄T_i = { Δt_j^+, Δt_j^- | Δt_j^+ = t_i (1 - ν_j(t_i))/(-1 + Ψ_ij p_j(t_i)) < 0, Δt_j^- = t_i (1 + ν_j(t_i))/(-1 - Ψ_ij p_j(t_i)) < 0, j ∈ S̄^*_t_i }.
Definitions for the matrices Φ and Ψ are given in the proof.
Notice that Φ = Φ(t_i, β(t_i)) (likewise for Ψ).
The path, on the segment [t_{i+1}, t_i], is given by β_j(t) - β_j(t_i) = log( 1 - Φ_ij(t-t_i) ), t ∈ [t_{i+1}, t_i], j ∈ S_t_i, and for the subgradients ν_j(t) = (t_i/t) ν_j(t_i) + (1 - t_i/t) Ψ_ij p_j(t_i), t ∈ [t_{i+1}, t_i], j ∈ S̄^*_t_i.
The new set S_t_{i+1} is given by S_t_{i+1} = (S_t_i ∖ U_i) ∪ U'_i, with U_i = {j ∈ S_t_i | Δt_j = max{ΔT_i, Δ̄T_i}} and U'_i = {u ∈ S̄^*_t_i | Δt_u^+ or Δt_u^- = max{ΔT_i, Δ̄T_i}}.
i becomes i+1.
END DO.

We differentiate equations (<ref>) for all j in {1,...,p} (see also (<ref>)): ∑_k∈S_t ( ∑_i 𝙸_ij𝙸_ik n^0_i e^(𝙸β)_i / 𝙸_j^T(n^0e^𝙸β) - p_k(t) ) β̇_k(t) = d/dt log(p_j(t)), or in matrix form with C(t) ∈ ℳ_r×r(ℝ), D(t) ∈ ℳ_(p-r)×r(ℝ), r = #S_t, and the vectors p^≠(t) = (p_j(t))_j∈S_t^T, p^=(t) = (p_j(t))_j∈S̄_t^T and β^≠(t) = (β_j(t))_j∈S_t^T, we get C(t)β̇^≠(t) = d/dt log(p^≠(t)) and D(t)β̇^≠(t) = d/dt log(p^=(t)). C(t) is a square non-singular matrix for all t in [0,t_0) (see Remark <ref>). Between two consecutive values t_i and t_{i+1} (t_{i+1} < t_i) of the t sequence, we consider that C(t) ≈ C(t_i) and D(t) ≈ D(t_i); thus β̇^≠(t) ≈ C^-1(t_i) d/dt log(p^≠(t)) and d/dt log(p^=(t)) ≈ D(t_i)C^-1(t_i) d/dt log(p^≠(t)) = E(t_i) d/dt log(p^≠(t)), with E(t_i) ∈ ℳ_(p-r_i)×r_i(ℝ) and r_i = #S_t_i. The system of equations involving the matrix C^-1 is solved as in the proof of Proposition <ref> and we get β_j(t) - β_j(t_i) = log( 1 - Φ_ij(t-t_i) ), t ∈ [t_{i+1}, t_i], j ∈ S_t_i, with Φ_ij = (C_i^-1( sign(β^≠)/p^≠(t_i) ))_j. The second set of equations gives log(p^=(t)) - log(p^=(t_i)) = E_i( log(p^≠(t)) - log(p^≠(t_i)) ), and using the usual approximation log(p_j(t)) - log(p_j(t_i)) ≈ log( 1 - Ψ_ij(t-t_i) ), j ∈ S̄^*_t_i, with Ψ_ij = (E_i( sign(β^≠)/p^≠(t_i) ))_j, we find ν_j(t) = (t_i/t) ν_j(t_i) + (1 - t_i/t) Ψ_ij p_j(t_i), t ∈ [t_{i+1}, t_i], j ∈ S̄^*_t_i. We solve 2r_i + (p-r_i) = p + r_i equations (ν_j(t_{i+1}) = ±1, j ∈ S̄^*_t_i, and β_j(t_{i+1}) = 0, j ∈ S_t_i) to find the possible values for t_{i+1}-t_i.
The maximum of the obtained negative values within the p + r_i results is used to build the t sequence.

To visualize what happens during the algorithm, we define the linear functions B^=_j : t ↦ t ν_j(t) and B^≠_j : t ↦ e^β_j(t) - 1 + sign(β_j) t, leading to the p functions B_j (j = 1,...,p) such that B_j(t) = B_j^=(t) if |B_j^=(t)| ≤ t, and B_j(t) = B_j^≠(t) if |B_j^≠(t)| > t. The functions B_j are all piecewise linear and can be drawn in the plane shown in Figure <ref>.

§.§ Path reconstruction with the French spontaneous reports database

We illustrate the efficiency of the limit path construction by piecewise logarithmic functions on the French spontaneous reports database. We look at two examples, a first one with no evidence of correlation and a second one with strong correlations. The database contains about 330000 reports in 2016 and the imbalance is high or very high for all the adverse effects <cit.>. In the following graphs, the dotted lines represent results obtained by our algorithm; the solid ones result from the classical glmnet package. Figure <ref> shows common features encountered with other examples. The path of the exponential of the coefficients shapes a set of piecewise linear functions and the algorithm remains efficient even if the number of predictors is high (150 for example). It seems that there is no case of a path with a curve reappearing after a first cancelling (due to a strong correlation between predictors with opposite signs of initial coefficients). Thus, the sets A_i in the algorithm do not have to be determined. We notice that the accuracy of this path following algorithm can easily be increased by adding intermediate steps (in the variable t). The main computing limitation being the matrix inversion, one could study the inner product (Gram) matrix for class 0 and reorder rows and columns to reveal patterns and form a block diagonal matrix.
These blocks could result from a statistical study of the Gram matrix[To that end, see the literature on the block clustering problem <cit.>.] (finding the pairwise independent predictors) as well as from pharmacological assumptions (medical treatments also shape patterns). Thus, computational costs become a marginal problem and one can concentrate on the bias correction by adding priors related to temporal bias, under-reporting or the introduction of similarity modifying the R matrix[With a similarity matrix S ∈ ℳ_p×p([0,1]), the similarity is defined as follows: the coefficient R_jk becomes (1-S_jk)R_jk + S_jk min(p_j,p_k).].

§ CONCLUSION AND PERSPECTIVES

The central novelty of this work is the introduction of a rescaled likelihood for the limit imbalanced logistic regression problem. The expression of this likelihood could have some connections with the well-known likelihoods of the self-controlled case series method <cit.> and of the proportional hazards model <cit.> used in epidemiology. Most results exposed for binary data can be extended to other data types. However, simulations have been done only with binary data, having in mind the underlying applied problem of pharmacovigilance. The new estimate is always very close to the initial MLE because the data are located on the vertices of the hypercube and are thus "close" to one another. A convergence study of all possible existing algorithms for the primal and dual problems could be performed with different class imbalances, together with an evaluation of the first-order term.

The variance reduction is a central issue that has to be treated in a Bayesian framework. Whereas the prior to add in the standard logistic regression is unclear, the rescaled likelihood takes a well-adapted form for exponential priors. We considered model selection using the BIC and the lasso to answer this question. Due to binary data, the lasso regularization problem became easier to understand in our limit imbalanced case: we found many precise estimators.
Piecewise logarithmic approximate paths are built by an effective path following procedure which determines step by step the vanishing time of each path, does not use any loops as in coordinate descent algorithms, and computes expressions only involving non-zero data. Moreover, this algorithm can take into account the correlation structure between predictors to further shrink computational costs. The values for N^1, n^0 and for the matrix R could be shifted in order to incorporate absolute bias, temporal bias, under-reporting and similarity or correlation corrections.

§ A PHARMACOVIGILANCE PROJECT?

Within this paper, we have had in mind the pharmacovigilance context, as this work was carried out in parallel with a one-year engineering job at the French National Institute of Health and Medical Research[B2PHI laboratory UMR 1181, INSERM, UVSQ, Institut Pasteur, Villejuif 94807, France]. We hope this article can contribute a little to the development of mathematical tools for pharmacovigilance purposes. The science of drug safety at the postmarketing level is nearly non-existent in France as in many other countries: the reporting process of spontaneous reports is inadequate and the resulting databases are badly processed with unadapted tools. Public health scandals related to medication are steadily increasing and the spotlights are turned towards big pharmaceutical companies, while patient associations should first require public authorities to establish a modern drug safety structure. To that end, the statistical community has a major role to play by proposing trustworthy decision-support tools, opposing science to political and financial influences.
Creating a useful tool was the guideline of this present work and the author hopes that other mathematicians will embrace the direction initiated by this article. We would like to conclude by giving our opinion about the work that remains to be done to obtain an operational tool (in five points), hoping that it will inspire epidemiologists.

1) Building priors related to bias (temporal bias, under-reporting...) with the help of pharmacologists. 2) Developing the proposed regularization algorithms, evaluating their complexity and accuracy levels. 3) Introducing simple indicators to control the quality of the limit approximation. 4) Working on path visualization and new indicators (that are not thresholds). 5) Evaluating the obtained tool in the hands of pharmacologists (the use of reference sets is, to our mind, inadequate).

§ ACKNOWLEDGMENT

I would like to deeply thank Laetitia Comminges from the Paris-Dauphine University for relevant comments that greatly improved the manuscript. I also thank my colleague Mohammed Sedki from the INSERM laboratory of Villejuif for his constant encouragement to complete this work.

§ EXACT SOLUTIONS

We give a collection of examples consisting of simple solutions of the equation (<ref>).

§.§ No intercept

If there is no intercept and no interaction between the regressors, the matrix 𝙸 equals the identity matrix 𝕀_p and β_1 = log(n_1^1/n_1^0), ..., β_p = log(n_p^1/n_p^0). If one row contains other ones, the inverse matrix is the same matrix with the added ones transformed into their opposites.

§.§ Intercept

If the square matrix I_p+1 is the following I_p+1 = [ 1 0 ⋯ ⋯ 0; ⋮ 1 0 ⋯ 0; ⋮ 0 ⋱ ⋱ ⋮; ⋮ ⋮ ⋱ ⋱ 0; 1 0 ⋯ 0 1 ], then I_p+1^-1 = [ 1 0 ⋯ ⋯ 0; -1 1 0 ⋯ 0; ⋮ 0 ⋱ ⋱ ⋮; ⋮ ⋮ ⋱ ⋱ 0; -1 0 ⋯ 0 1 ] for the inverse matrix, so that the β coefficients take the form β_0 = log(n_0^1/n_0^0), β_1 = log( (n_1^1/n_1^0)(n_0^0/n_0^1) ), ..., β_p = log( (n_p^1/n_p^0)(n_0^0/n_0^1) ).

§.§ Intercept with one correlation

The first row of the following matrix I_p+1 = [ 1 ?
⋯ ⋯ ?; ⋮ 1 0 ⋯ 0; ⋮ 0 ⋱ ⋱ ⋮; ⋮ ⋮ ⋱ ⋱ 0; 1 0 ⋯ 0 1 ] defines the set K = {j ∈ {0,...,p}, I_1j = 1}. The case #K = 2 is left out because it does not correspond to a non-singular matrix. The case #K = 1 corresponds to the previous example. The easiest way to solve this example is to look at the initial equations (<ref>). We write down the p+1 equations, where only the first one has a different form: ∑_i=0^p (n_i^1-n_i^0) = ∑_i=1^p (n_i^1+n_i^0) tanh( (β_0+β_i)/2 ) + (n_0^1+n_0^0) tanh( (1/2)∑_j∈K β_j ), and for k ∈ {1,...,p}, (n_k^1-n_k^0) + 1_k∈K (n_0^1-n_0^0) = (n_k^1+n_k^0) tanh( (β_0+β_k)/2 ) + 1_k∈K (n_0^1+n_0^0) tanh( (1/2)∑_j∈K β_j ).

Subtracting all the p equations from the first one, we obtain (1-#K)(n_0^1-n_0^0) = (1-#K)(n_0^1+n_0^0) tanh( (1/2)∑_j∈K β_j ), which can be used to simplify equations (<ref>) into (n_k^1-n_k^0) = (n_k^1+n_k^0) tanh( (β_0+β_k)/2 ). Finally, we have exp( ∑_j∈K β_j ) = n_0^1/n_0^0 and exp(β_0+β_k) = n_k^1/n_k^0, k ∈ {1,...,p}, so that we deduce the following closed form for the β coefficients: exp(β_0) = ( (n_0^0/n_0^1) ∏_{j∈K, j≠0} (n_j^1/n_j^0) )^1/(#K-2), exp(β_i) = (n_i^1/n_i^0) ( (n_0^1/n_0^0) ∏_{j∈K, j≠0} (n_j^0/n_j^1) )^1/(#K-2), i ∈ {1,...,p}.

Notice that the regression coefficients behave in a very unpredictable way. It is sufficient to see this on an example with p = 2 and #K = 3. The matrix I is [ 1 1 1; 1 1 0; 1 0 1 ], and we have exp(β_0) = (n_0^0/n_0^1)(n_1^1 n_2^1)/(n_1^0 n_2^0), exp(β_1) = (n_0^1/n_0^0)(n_2^0/n_2^1), exp(β_2) = (n_0^1/n_0^0)(n_1^0/n_1^1). The first intuition is to think that the coefficients β_1 and β_2 depend on the couples (n_1^0,n_1^1) and (n_2^0,n_2^1) respectively, but it is not the case!

§.§ Stairs

With I_p+1 = [ 1 0 ⋯ 0; 1 ⋱ ⋱ ⋮; ⋮ ⋱ ⋱ 0; 1 ⋯ 1 1 ], we find I_p+1^-1 = [ 1 0 ⋯ ⋯ 0; -1 ⋱ ⋱ ⋮; 0 ⋱ ⋱ ⋱ ⋮; ⋮ ⋱ ⋱ ⋱ 0; 0 ⋯ 0 -1 1 ] and then β_0 = log(n_0^1/n_0^0), β_i = log( (n_i^1/n_i^0)(n_{i-1}^0/n_{i-1}^1) ), i ∈ {1,...,p}.

§ PROOF OF PROPOSITION 3.1

The result is proved with a succession of Taylor expansions of degree 1 or 2 in 1/s.
We use tanh(x/2) = -1 + 2e^x - 2e^2x + o(e^2x); then with (<ref>), we have 𝙸_1^T n^1 = 𝙸_1^T( n^1 e^𝙸_1β ) + 𝙸_0^T( n^0 e^𝙸_0β ) - ( 𝙸_1^T( n^1 e^2𝙸_1β ) + 𝙸_0^T( n^0 e^2𝙸_0β ) ) + o(|n| e^2β_0).

The first equation of this system gives |n^1| = ∑ n^1_i e^(𝙸_1β)_i + ∑ n^0_i e^(𝙸_0β)_i - ( ∑ n^1_i e^2(𝙸_1β)_i + ∑ n^0_i e^2(𝙸_0β)_i ) + o(|n| e^2β_0); therefore, using the notations introduced in the proposition, e^2β_0 (n_2^1 + s n_2^0) - e^β_0 (n_1^1 + s n_1^0) + 1 - o(1/s) = 0, and e^β_0 = (1/2) (n_1^1 + s n_1^0)/(n_2^1 + s n_2^0) - (1/2) (n_1^1 + s n_1^0)/(n_2^1 + s n_2^0) √( 1 - 4(1 - o(1/s)) (n_2^1 + s n_2^0)/(n_1^1 + s n_1^0)^2 ). The coefficient s is defined as s = |n^0|/|n^1|. Now we have to find the Taylor expansion of e^β_0 of degree two in 1/s and reinject it in (<ref>). We have (n_2^1 + s n_2^0)/(n_1^1 + s n_1^0)^2 = (n_2^1 + s n_2^0)/(s^2 (n_1^0)^2) · 1/( n_1^1/(s n_1^0) + 1 )^2 = (n_2^1 + s n_2^0)/(s^2 (n_1^0)^2) · ( 1 - 2n_1^1/(s n_1^0) + o(1/s) ) = (1/s) n_2^0/(n_1^0)^2 + (1/s^2)( n_2^1/(n_1^0)^2 - 2n_1^1 n_2^0/(n_1^0)^3 ) + o(1/s^2), and √( 1 - 4(1 - o(1/s))( (1/s) n_2^0/(n_1^0)^2 + (1/s^2)( n_2^1/(n_1^0)^2 - 2n_1^1 n_2^0/(n_1^0)^3 ) + o(1/s^2) ) ) = 1 - 2( (1/s) n_2^0/(n_1^0)^2 + (1/s^2)( n_2^1/(n_1^0)^2 - 2n_1^1 n_2^0/(n_1^0)^3 + (n_2^0)^2/(n_1^0)^4 ) ) + o(1/s^2).

Therefore e^β_0 = (n_1^1 + s n_1^0)/(n_2^1 + s n_2^0) ( (1/s) n_2^0/(n_1^0)^2 + (1/s^2)( n_2^1/(n_1^0)^2 - 2n_1^1 n_2^0/(n_1^0)^3 + (n_2^0)^2/(n_1^0)^4 ) + o(1/s^2) ) = (n_1^1 + s n_1^0)/(s n_2^0) ( 1 - n_2^1/(s n_2^0) + o(1/s) ) ( (1/s) n_2^0/(n_1^0)^2 + (1/s^2)( n_2^1/(n_1^0)^2 - 2n_1^1 n_2^0/(n_1^0)^3 + (n_2^0)^2/(n_1^0)^4 ) ) + o(1/s^2) = ( n_1^0/n_2^0 + (1/s)( n_1^1/n_2^0 - n_2^1 n_1^0/(n_2^0)^2 ) ) ( (1/s) n_2^0/(n_1^0)^2 + (1/s^2)( n_2^1/(n_1^0)^2 - 2n_1^1 n_2^0/(n_1^0)^3 + (n_2^0)^2/(n_1^0)^4 ) ) + o(1/s^2), so that e^β_0 = 1/(s n_1^0) + (1/(s n_1^0))^2 ( n_2^0/n_1^0 - n_1^1 ) + o(1/s^2).

The system of equations (<ref>) without its first equation is N^1 = 𝙸_1^T n^1/|n^1| = e^β_0 (n_1^1 + s n_1^0) - e^2β_0 (n_2^1 + s n_2^0) + o(1/s), and we use the previous expression for e^β_0: N^1 = ( 1/(s n_1^0) + (1/(s n_1^0))^2 ( n_2^0/n_1^0 - n_1^1 ) )(n_1^1 + s n_1^0) - (1/(s n_1^0))^2 (n_2^1 + s
n_2^0) + o(1/s), which gives N^1 = n_1^0/n_1^0 + (1/s)( (n_2^0/(n_1^0)^2)[ n_1^0/n_1^0 - n_2^0/n_2^0 ] + (n_1^1/n_1^0)[ n_1^1/n_1^1 - n_1^0/n_1^0 ] ) + o(1/s), or N^1 + (1/s)( n_2^0/(n_1^0)^2 - n_1^1/n_1^0 ) = (n_1^0/n_1^0)( 1 + (1/s)( n_2^0/(n_1^0)^2 - n_1^1/n_1^0 ) ) + o(1/s). Then, using again a Taylor expansion, n_1^0/n_1^0 = N^1 + (1/s)( (n_2^0 - N^1 n_2^0)/(n_1^0)^2 - (n_1^1 - N^1 n_1^1)/n_1^0 ) + o(1/s), or n_1^0/n_1^0 - N^1 = (1/s)( (n_2^0/(n_1^0)^2)( n_2^0/n_2^0 - N^1 ) - (n_1^1/n_1^0)( n_1^1/n_1^1 - N^1 ) ) + o(1/s).

§ PATH SIMULATIONS

In the following graphs, the dotted lines are obtained by the exact path corresponding to Theorem <ref> for examples 1 and 2, Theorem <ref> for examples 3 and 4, and the inclusion case for examples 5 and 6. The solid lines are always given by a coordinate descent algorithm for the standard logistic regression. We change the scale for t (by a linear rescaling) in order to have the same max(t) (see (<ref>)) for the exact and algorithmic paths.

Ahmed Ahmed Ismail, Pariente Antoine, Tubert-Bitter Pascale (2016) Class-imbalanced subsampling lasso algorithm for discovering adverse drug reactions. Statistical Methods in Medical Research.
Beziz Beziz et al. (2016) Spontaneous adverse drug reaction reporting in France: A retrospective analysis of reports made to the French medicines agency from 2002 to 2014. Revue d'Épidémiologie et de Santé Publique, 64.
Caster Caster et al. (2010) Large-Scale Regression-Based Pattern Discovery: The Example of Screening the WHO Global Drug Safety Database. Stat. Anal. Data Min., 3, no. 4, 197–208.
Cox Cox David Roxbee (1975) Partial likelihood. Biometrika, 62, no. 2, 269–276.
Efron Efron Bradley, Hastie Trevor, Johnstone Iain, Tibshirani Robert (2004) Least angle regression. Annals of Statistics, 32, no. 2, 407–499.
Elrahman Elrahman Shaza, Abraham Ajith (2013) A Review of Class Imbalance Problem. Journal of Network and Innovative Computing, 1, 332–340.
Firth Firth David (1993) Bias reduction of maximum likelihood estimates. Biometrika, 80, no. 1, 27–38.
Fithian Fithian William, Hastie Trevor (2014) Local case-control sampling: efficient subsampling in imbalanced data sets. Ann. Statist., 42, no. 5, 1693–1724.
Fri1 Friedman Jerome, Hastie Trevor, Tibshirani Rob (2010) Regularization Paths for Generalized Linear Models via Coordinate Descent. Journal of Statistical Software, 33, no. 1, 1–22.
Fri2 Friedman Jerome, Hastie Trevor, Höfling Holger, Tibshirani Robert (2007) Pathwise coordinate optimization. Annals of Applied Statistics, 1, no. 2, 302–332.
Govaert Govaert Gérard, Nadif Mohamed (2008) Block clustering with Bernoulli mixture models: comparison of different approaches. Comput. Statist. Data Anal., 52, no. 6, 3233–3245.
Guo Guo Xinjian et al. (2008) On the Class Imbalance Problem. Proceedings of the Fourth International Conference on Natural Computation, 4, 192–201.
Harpaz Harpaz et al. (2013) Performance of pharmacovigilance signal-detection algorithms for the FDA adverse event reporting system. Clin. Pharmacol. Ther., 6.
Hiriart Hiriart-Urruty Jean-Baptiste (1981) A note on the Legendre-Fenchel transform of convex composite functions. Nonsmooth Mechanics and Analysis, Adv. Mech. Math., Springer, New York, 12, 35–46.
Jeffreys Jeffreys Harold (1946) An invariant form for the prior probability in estimation problems. Proc. Roy. Soc. London Ser. A, 186, 453–461.
Ki Ki Hang Kim (1982) Boolean matrix theory and applications. Monographs and textbooks in pure and applied mathematics. ISBN-13: 978-0824717889.
Lip Lipovetsky Stan (2015) Analytical closed-form solution for binary logit regression by categorical predictors. J. Appl. Stat., 42, no. 1, 37–49.
Madigan Madigan David, Ryan Patrick, Simpson Shawn, Zorych Ivan (2011) Bayesian methods in pharmacovigilance. With discussion by William DuMouchel. Oxford Univ. Press, Bayesian Statistics 9, 421–438.
Mantel Mantel, N. and Haenszel, W. (1959) Statistical aspects of the analysis of data from retrospective studies of disease. J. Natl.
Cancer Inst.,22, 719–748.Minka Minka Thomas P. (2007)A comparison of numerical optimizers for logistic regression. URL: http://research.microsoft.com/en-us/um/people/minka/papers/logreg/Oommen Oommen Thomas et al.(2011) Sampling Bias and Class Imbalance in Maximum-likelihood Logistic Regression. Mathematical Geosciences.,43, no. 1, 99–120.Owen Owen Art B. (2007)Infinitely Imbalanced Logistic Regression. J. Mach. Learn. Res.,8, 761–773.Rosset Rosset Saharon (2004). Following Curved Regularized Optimization Solution Paths.Advances in Neural Information Processing Systems. 17, 1153–1160. Rosset2 Rosset S. and Zhu J. (2007). Piecewise linear regularized solution paths.The Annals of Statistics. 35, no. 3, 1012–1030.Saint Saint-Raymond Laure (2009). Hydrodynamic Limits of the Boltzmann Equation. Springer. Lecture Notes in Mathematics. ISBN: 978-3-540-92846-1Schwarz Schwarz Gideon (1978)Estimating the dimension of a model. The Annals of Statistics,6, no. 2, 461–464. Silva Silvapulle Mervyn J. (1981)On the Existence of Maximum Likelihood Estimators for the Binomial Response Models. J. Roy. Statist. Soc. Ser. B,43, no. 3, 310–313.Simpson Simpson Shawn E, Madigan David, Zorych Ivan, Schuemie Martijn J., Ryan Patrick B., Suchard Marc A. (2013)Multiple self-controlled case series for large-scale longitudinal observational databases. Biometrics,69, no. 4, 893–902.Tibshirani Tibshirani Robert (1996)Regression shrinkage and selection via the lasso. J. Roy. Statist. Soc. Ser. B,58, no. 1, 267–288. Yu Yu Hsiang-Fu, Huang Fang-Lan, Lin Chih-Jen (1981)Dual coordinate descent methods for logistic regression and maximum entropy models. Mach. Learn.,85, no. 1-2, 41–75. Zou Zou Hui, Hastie Trevor (2005)Regularization and variable selection via the elastic net. J. R. Stat. Soc. Ser. B Stat. Methodol.,67, no. 2, 301–320.
Experiments with the dynamics of the Riemann zeta function
Nobutaka Nakazono
December 30, 2023
==========================================================

We collect experimental evidence for several propositions, including the following: (1) For each Riemann zero ρ (trivial or nontrivial) and each zeta fixed point ψ there is a nearly logarithmic spiral s_ρ, ψ with center ψ containing ρ. (2) s_ρ, ψ interpolates a subset B_ρ, ψ of the backward zeta orbit of ρ comprising a set of zeros of all iterates of zeta. (3) If zeta is viewed as a function on sets, ζ(B_ρ, ψ) = B_ρ, ψ ∪ { 0 }. (4) B_ρ, ψ has nearly uniform angular distribution around the center of s_ρ, ψ. We will make these statements precise.

§ INTRODUCTION

§.§ Overview.

For complex w, let ζ^∘ -(w) denote the backward zeta orbit of w, so that ζ^∘ -(w) = {s ∈ C s.t. some iterate of zeta takes s to w }. If the sequence B = (a_0, a_1, a_2, ... ) satisfies a_0 = w and ζ(a_n) = a_n-1 for all n ≥ 1, we say that B is a branch of ζ^∘ -(w). If B converges with lim B = λ, say, we conjecture that B is unique, and we write B as B_w, λ. We collect numerical evidence supporting the following claims, which will be made precise below: (1) that for each of a countable set of non-real zeta fixed points ψ and each nontrivial Riemann zero ρ, B_ρ, ψ exists, is unique, and is interpolated by a nearly logarithmic spiral, say s_ρ, ψ, with center ψ; (2) that the members of B_ρ, ψ are distributed nearly uniformly on s_ρ, ψ; (3) and that there is another set of real zeta fixed points ψ = ψ_-2n near the trivial zeros -2n, -2n ≤ -20, such that consecutive members of B_ρ, ψ_-2n rotate around ψ_-2n through an angle of ≈ π or 2π, depending upon the parity of n, so that the members of B_ρ, ψ_-2n lie on a curve that is very nearly a straight line passing through both ψ_-2n and ρ. We treat in detail relationships between zeta basins of attraction and the branches of ζ^∘ -(ρ) that we have observed experimentally.
The resulting plots are included here because they suggested the presence of the spirals which are the main subject of the article, and so, we speculate, they may eventually also suggest the ideas needed to analyze these spirals; for we have not proved any theorems. Our experiments were done with Mathematica and spot-checked with Sage. Data files and Mathematica notebooks are posted on the ResearchGate site <cit.>.

§.§ Possible bearing on the Riemann hypothesis.

Here are several scenarios. (1) A clear understanding of the spirals s_ρ, ψ with zeta fixed point centers ψ and passing through Riemann zeros ρ might lead to a sort of dictionary, so that the Riemann hypothesis might be put in a form that speaks of zeta fixed points instead of zeta zeros. (2) If (as we conjecture) the spirals s_ρ, ψ are approximated by logarithmic spirals, it might be possible to confine the s_ρ, ψ to lie within spiral-shaped “error bands” about logarithmic spirals. Because each zero ρ lies on each s_ρ, ψ, each zero would lie in each one of a countable collection of these error bands in the complex plane (one for each zeta fixed point ψ); then the zeros would be confined to the intersection of these error bands, and this region might be small. Our very incomplete knowledge of the s_ρ, ψ is founded on prior knowledge of the locations of the zeros, so that this scheme is tainted with circularity; but perhaps this taint might in some way be removed. (3) Under the conjectures stated in this article, measuring from the fixed points ψ, a zero ρ lies at the “first” intersection (in terms of arc length, say) of s_ρ, ψ with the critical line; and so the Riemann hypothesis might be restated in terms of these intersections. Each ρ appears to lie on all of the s_ρ, ψ, and so we might eventually obtain a countable family of conditions on these intersections, which could, possibly, be in some way usefully combined.
Section 6.1.3 discusses some data that support Conjecture 4, which codifies a part of this scenario.

§.§ More definitions.

Let ζ denote the Riemann zeta function and let us write the iterates of a function f as f^∘ 0(z) = z and f^∘ (n+1)(z) = f(f^∘ n(z)) for n = 0, 1, .... An n-cycle for f is an n-tuple (c_0, ... , c_n-1) such that f(c_n-1) = c_0 and f(c_k) = c_k+1 when k ≠ n - 1. The forward orbit of w under f is the sequence (w, f(w), f^∘ 2(w), ... ). The backward orbit of w under f is the set of complex numbers s such that f^∘ n(s) = w for some integer n ≥ 0. Let the symbol f^∘ -(w) denote this backward orbit; if w does not belong to a cycle, f^∘ -(w) carries the structure of a rooted tree in which the root is w and the children of s ∈ f^∘ -(w) are the solutions t of f(t) = s. We will call any path in f^∘ -(w) that begins at w a branch of f^∘ -(w) (also: “a branch of the inverse of f”.) Such a branch, then, is a sequence (a_0, a_1, a_2, ....) with a_0 = w and a_n = f(a_n+1) for each non-negative integer n. Since the Riemann hypothesis has been verified within the range of our observations, we write without ambiguity ρ_k for the k^th nontrivial Riemann zero ordered by height above the real axis and ρ_-k for its complex conjugate. If ζ^∘ n(z) = 0 and ζ^∘ n-1(z) is a nontrivial Riemann zero, we call z a nontrivial zero of ζ^∘ n. For typographical reasons, we will occasionally write ζ_n for ζ^∘ n; within our article, there should be no confusion with other common uses of this symbol. For z ∈ C ∪ {∞}, A_z := {u ∈ C s.t. lim_n →∞ ζ^∘ n(u) = z} (the “basin of attraction” of z under zeta iteration.) Let ϕ ≈ -.295905 be the largest negative zeta fixed point. Then A_ϕ is a fractal <cit.>; each nontrivial Riemann zero appears to lie in an irregularly shaped bulb of A_ϕ (Figures 1.1, 3.2, and section 3 more generally.)
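The attracting fixed point ϕ can be checked with a modest double-precision computation; nothing here needs Mathematica. The sketch below is ours, not the authors': it assumes Borwein's alternating-series acceleration of the Dirichlet eta function for real s > 0 and the functional equation for real s < 0, then runs the forward orbit of 0, which settles onto ϕ ≈ -0.295905.

```python
import math

# Borwein coefficients d_k for the eta-series acceleration (computed once).
_N = 40
_D, _acc = [], 0.0
for _i in range(_N + 1):
    _acc += math.factorial(_N + _i - 1) * 4**_i / (
        math.factorial(_N - _i) * math.factorial(2 * _i))
    _D.append(_N * _acc)

def _zeta_pos(s):
    """zeta(s) for real s > 0, s != 1, via Borwein's eta acceleration."""
    eta = sum((-1)**k * (_D[k] - _D[_N]) / (k + 1)**s for k in range(_N)) / -_D[_N]
    return eta / (1.0 - 2.0**(1.0 - s))

def zeta_real(s):
    """Real zeta(s); uses the functional equation for s < 0, zeta(0) = -1/2."""
    if s > 0.0:
        return _zeta_pos(s)
    if s == 0.0:
        return -0.5
    return (2.0**s * math.pi**(s - 1.0) * math.sin(math.pi * s / 2.0)
            * math.gamma(1.0 - s) * _zeta_pos(1.0 - s))

# Forward orbit of 0: zeta(0) = -1/2, and further iterates stay in a
# small real interval, converging to the attracting fixed point phi.
x = 0.0
for _ in range(60):
    x = zeta_real(x)
print(x)  # ~ -0.295905
```

The convergence of this orbit is a direct check that 0 lies in A_ϕ, as asserted (with a citation) in section 3.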
For a spiral s with center γ, let α_s be the point of the intersection of the critical line and s closest to γ. We define a real-valued function θ(z) on complex numbers z ∈ s, |z-γ| ≤ |α_s - γ|, by requiring that θ(z) ≡ arg(z - γ) (mod 2π), and that θ(z) increases continuously and monotonically as z moves around s from α_s in the direction of decreasing |z-γ|. In other words, θ(z) behaves up to a multiplicative constant like a winding number. If a sequence (a_1, a_2, ...) lies on a spiral s with center γ and for some pair of real numbers A > 0, B > 0 and all k = 1, 2, ..., |θ(a_k) - θ(a_k+1)| < A e^-Bk, we will say that the a_k are distributed nearly uniformly around s. For complex z, let r(z) := |z-γ|. Let m, b be real numbers, so that r(z) = exp (mθ(z) + b) describes a logarithmic spiral with center γ and typical element exp (mθ(z) + b) exp (i θ(z)). Suppressing the dependence on m and b, let d_rel(γ, z) := |(z - γ - exp (mθ(z) + b) exp (i θ(z)))/(z - γ)|. We say that s is c-nearly logarithmic for real positive c if max_z ∈ s, 0 < |z - γ| ≤ |α_s - γ| d_rel(γ, z) < c for some m and b. We require a notion of “very nearly a straight line.” Suppose (1) a complex curve C of finite arc length has an initial point z_I and terminates at a point z_T, (2) that there are real numbers m and b such that lim_z ∈ C, z → z_T |(Im(z) - (m Re(z) + b))/Im(z)| = 0, and (3) that the convergence has exponential decay as |z - z_T| decreases from |z_I - z_T| to zero. Then we say that C is very nearly a straight line. We need measures of the absolute and relative deviations of points a_k in a branch B_ρ,ψ of the backward orbit of a nontrivial Riemann zero ρ from a logarithmic spiral fitted to that branch using Mathematica's FindFit command. Suppose the a_k are interpolated by a spiral s_ρ, ψ centered at ψ such that for z ∈ s_ρ, ψ, r(z) = |z - ψ| and θ(z) = arg(z - ψ). Further, suppose that s_ρ, ψ has a log-linear model r̃(z) = exp (mθ(z) + b) for real numbers m and b, in which we have fitted the points (θ(a_k), log r(a_k)) to a straight line.
Then we will write d_abs(ρ,ψ,k) := |a_k - ψ - r̃(a_k) e^i θ(a_k)| and d_rel(ρ,ψ,k) := |(a_k - ψ - r̃(a_k) e^i θ(a_k))/(a_k - ψ)|. (This is an abuse of our earlier notation for d_rel(γ, z) which should not be confusing.)

§.§ Summary of observations.

We express some of our observations as explicit conjectures. For each pair of positive integers L and n, there is a Riemann zeta L-cycle Λ such that the λ ∈ Λ with maximum imaginary part is close to the n^th nontrivial zero ρ_n (but we will not define this use of the term “close” more precisely in this draft) and with the following properties. (1) Each λ ∈ Λ is a repelling fixed point of ζ^∘ L lying in the intersection of boundaries ∂ A_ϕ ∩ ∂ A_∞ in the usual topology on C. (2) For each λ ∈ Λ there is a complex number z_λ and a natural number 0 ≤ j_λ ≤ L-1 such that (a) ζ^∘ j_λ(z_λ) = ρ_n. (b) For some small positive c, λ is the center of a c-nearly logarithmic spiral s_z_λ, λ interpolating a branch B_z_λ,λ of (ζ^∘ L)^∘ -(z_λ) (but we will not be more precise about this use of the term “small” in this draft.) (c) lim B_z_λ,λ = λ. (d) ⋃_λ ∈ Λ B_z_λ,λ is a branch B_ρ_n,Λ of ζ^∘ -(ρ_n). (e) The members of B_z_λ,λ are distributed nearly uniformly on s_z_λ, λ. (3) If z ∈ ζ^∘ -(ρ_n) then for some positive integer j and some positive integer L and some L-cycle Λ, ζ^∘ j(z) ∈ B_ρ_n,Λ. Conjecture 1 is essentially a description of patterns we observed reliably in numerous experiments; clause 2e, in particular, is plausible on its face in view of the spiral plots exhibited in Figures 5.2, 5.3, 5.4, 5.7, 5.9, and 6.1. Figure 5.5 supports, in particular, our use in this clause of the term “nearly uniform” as we have defined it above. (a) If L = 1, so that the unique element of Λ is close to ρ_n, then we write Λ = {ψ_n }, and we have that j_ψ_n = 0 and z_ψ_n = ρ_n. In this situation, in clause (b) of Conjecture 1 we can take c < e^-4; furthermore, if L = 1, then the infimum of the valid values of c goes to zero as n → ∞. Some evidence for this conjecture appears in section 6.1.1.
When L = 1 we have restricted our claims in this conjecture to spirals s_ρ_n, ψ_n because that is the case we have checked most thoroughly, but we have also checked less thoroughly the case in which the zero is fixed (usually, ρ_1) and ψ_n varies over positive n. It appears to us that the conjecture generalizes to all pairs ρ = ρ_n_1, ψ = ψ_n_2, (n_1, n_2) ∈ Z^≥ 1 × Z^≥ 1. Because λ ∈ Λ would be repelling, it would also be an attracting fixed point of a local branch of the functional inverse of ζ^∘ L, and then the convergence would follow from standard results, for example, Theorem 2.6 of <cit.>. (1) There are repelling zeta fixed points ψ_-2n near the trivial zeros -2n ≤ -20. (2) For any nontrivial Riemann zero ρ, members of B_ρ, ψ_-2n lie on a curve which is very nearly a straight line segment. If n is even, the endpoints are ρ and ψ_-2n; otherwise, the curve passes through ψ_-2n and terminates at ρ. (3) If 2n ≡ 0 (mod 4), then dζ(ψ_-2n)/dz ≈ 2π; if 2n ≡ 2 (mod 4), then dζ(ψ_-2n)/dz ≈ π. (See Figure 5.7. Some evidence for this conjecture appears in section 6.1.2.) These observations are consistent, of course, with the proposition that B_ρ, ψ_-2n is interpolated by a spiral. Our computations of ζ(x) - x, x real, indicate that the ψ_-2n are real (the graph crosses the x-axis near each trivial zero we examined.) It was conceivable that there might be a broader relationship of the same kind between the derivative of zeta at a given fixed point and the structure of the associated spirals centered at those fixed points elsewhere in the complex plane, but our experiments have not verified any such relationship. It is suggestive, of course, that these relationships are exact when zeta is considered as a function of real numbers and ψ_-2n is replaced by -2n. The following conjecture codifies in part the scenario of section 1.2. (1) The relative deviation d_rel(ρ_n,ψ_n,0) of the nontrivial Riemann zero ρ_n from a logarithmic spiral fitted to the elements of B_ρ_n,ψ_n = (a_0, a_1, ...)
satisfies lim_n → ∞ d_rel(ρ_n,ψ_n,0) = 0. In particular, if we write D_rel(N) = log 1/N ∑_n=1^N d_rel(ρ_n,ψ_n,0), then there exist two exponents 0 < e_1 < e_2 < 1 such that -(log N)^e_1 < D_rel(N) < -(log N)^e_2 for N = 1, 2, .... (2) The absolute deviation d_abs(ρ_n,ψ_n,0) satisfies lim_n → ∞ d_abs(ρ_n,ψ_n,0) = 0. In particular, if we write D_abs(N) = log 1/N ∑_n=1^N d_abs(ρ_n,ψ_n,0), then 1/N < D_abs(N) < √(1/N). We have provided some support for clause (1) in Figure 6.5 of section 6.1.3, using e_1 = .8 and e_2 = .85; clause (2) is supported by the plot in the right panel of Figure 6.7 in the same section. We examined the possibility that branches of ζ^∘ -(z), ζ(z) ≠ 0, for arbitrary z on the critical line converge to the same fixed points as branches of ζ^∘ -(ρ), ρ a non-trivial zero. We tested various such z and found spiral branches converging to the fixed points of zeta. So it ought to be possible to explain the spirals with a theory that avoids any appeal to special properties of the Riemann zeros. We are not going to describe these experiments in any further detail in this article. An analogy from fluid mechanics led us to check for invariance of branches of ζ^∘ -(z) for these z under rotation about the fixed points at their centers. We found that the deviation from this sort of invariance is systematic and can itself be described by referring to (other) logarithmic spirals. We made a brief survey of functions other than zeta to gauge the extent of the spiral phenomenon, which we will not describe in any further detail than the following. Functions as simple as cosine appear to exhibit this behavior. We also observed it in, for example, the Ramanujan L-function. We hope to carry out another survey with a different software package.

§.§ Prior work.

Many authors have examined the Riemann zeta function with computers. Notable citations from the perspective of this article are Arias-de-Reyna <cit.>, Broughan <cit.>, Cloitre <cit.>, Kawahira <cit.>, King <cit.> and Woon <cit.>.
§ METHODS

§.§ Quadrant plots.

We will be displaying colored plots (say, “quadrant plots”) depicting, for a point w of C and a meromorphic function f, the quadrant of f(w). We use quadrant plots in three ways: (1) to determine small squares containing exactly one solution of an equation of interest, so that this information can be used by standard equation-solving routines to find a solution to several hundred digits of precision (which we find is necessary, for example, to locate zeta cycles) lying in a particular region; (2) to superimpose quadrant plots upon plots of the basin of attraction A_ϕ. These two kinds of plot typically interlock in a way that helps us to understand the meaning of many small irregular features of A_ϕ; and (3) to show how the quadrant plots spiral as we reduce the size of the plot window about a fixed point of zeta or one of its iterates. Observation of these spiral motions was our first indication that forward orbits near fixed points do lie on spirals. In quadrant plots, the boundaries of single-colored regions are f pre-images of the axes, curves corresponding to the zero sets of Re(f(s)) and Im(f(s)); the apparent intersections signal the presence of zeros or poles of f. By adjusting the color scheme to distinguish between regions where |f| is large or small, we can try to distinguish zeros from poles. Some apparent intersections are revealed to be illusions by a change of scale. Similar but colorless methods for plotting zeros were put to use in <cit.>. The visualized region is partitioned into small squares, each of which is represented by a pixel. We choose a test point s in each square. The pixel representing the square is colored according to the rules in Table 1. In the table, the region D is a disk with center s = 0 and large radius r (chosen as may be convenient.) We denote the complement of D as -D.
Location of f(s) → Color of pixel depicting region containing s
real and imaginary axes → black
D ∩ Quadrant I → rich blue
-D ∩ Quadrant I → pale blue
D ∩ Quadrant II → rich red
-D ∩ Quadrant II → pale red
D ∩ Quadrant III → rich yellow
-D ∩ Quadrant III → pale yellow
D ∩ Quadrant IV → rich green
-D ∩ Quadrant IV → pale green

table 1: coloring scheme for quadrant plots

The junction of four rich colors represents a zero, the junction of four pale colors represents a pole, and the boundary of two appropriately-colored regions is an f pre-image of an axis. An example is shown in Figure 2.1: s ↦ (s - 1)^2 (s - i) (s + 1)^5/(s+i)^3. (We have superimposed a pair of axes on this quadrant plot.)

§.§ High precision equation solving under geometric constraints.

We illustrate the application of quadrant plots to solve equations under geometric constraints by showing how we found the three-cycle Λ described in section 5.4. In Figure 2.2, we have superimposed plots of A_ϕ and quadrant plots of s ↦ ζ^∘ 3(s) - s on small squares near the first non-trivial Riemann zero. The resolution has been kept low to speed up the computations; high resolution is not particularly helpful in this situation. The left panel is a square with side length 20 and center ρ_1. The central panel depicts a square with side length 2 and center ρ_1 + 3.4 + .1i; we have adjusted the center to keep in view a particular four-color junction visible in the left panel. It represents a solution of ζ^∘ 3(s) - s = 0, namely the three-cycle element λ_1 we are trying to compute. At this stage, if we used, for example, the Mathematica command FindRoot constrained to search within this square, it might land on any one of the several four-color junctions we see in the central panel. So we change the center of the plot again, this time to ρ_1 + 3.46 + .103 i, and we make a square plot centered there with side length .02. This is shown in the right panel of Figure 2.2.
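The coloring rule of Table 1 is easy to state in code. The sketch below is ours: it classifies the value f(s) for the example function of Figure 2.1, with the disk radius r = 10 and the axis tolerance chosen arbitrarily by us (the article leaves r free).

```python
def f(s):
    """The example function of Figure 2.1."""
    return (s - 1)**2 * (s - 1j) * (s + 1)**5 / (s + 1j)**3

def quadrant_color(w, r=10.0, axis_tol=1e-12):
    """Pixel color for a test point whose image is w, per Table 1.
    r is the radius of the disk D; r and axis_tol are our own choices."""
    if abs(w.real) < axis_tol or abs(w.imag) < axis_tol:
        return "black"                            # image lies on an axis
    shade = "rich" if abs(w) <= r else "pale"     # inside D vs. outside D
    if w.imag > 0:
        quad = "blue" if w.real > 0 else "red"    # quadrants I / II
    else:
        quad = "yellow" if w.real < 0 else "green"  # quadrants III / IV
    return shade + " " + quad

print(quadrant_color(f(2 + 0j)))    # large |f|, quadrant III
print(quadrant_color(f(0.5 + 0j)))  # small |f|, quadrant II
```

A full quadrant plot would simply apply this classifier to a grid of test points, one per pixel.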
Next we use a slow, “handmade” routine to approximate λ_1 by searching within this square. Then, using this approximation as the beginning value for a search with FindRoot, we obtain a solution with 500 digits of precision: 3.9589623348847434673516458439896123461477039951866801455882506555054 331235719797619129160432526832126428515417856326242408422124490775895 37219976674458409141742662175701089081252727395073714398968532356378 12255138302084634149524670809965144703541657360428502230820135428609 38536894453944241116438492746243199878001238993540770158034816978947 866042863811536518002674033394246742728451523022955079328623947833520 567532298244004442294156837342370982002330874074322076777185746207730 323482406094614280046 +14.23622856322181332287122301085588169299871236208494399568695437825 6069322896396761526007136189745757467102551375667154010366364994538731 7916271113823253751110948972775762217941663830770714262041755664035323 9671078789529204404394764315531582588051352309327292004343654135172820 7780017861238006999109644383198471665302823015355865202971277187847669 1974168218415293165267046606327405458655765280027732495125802150527924 57282410834191507107658393848458313664113623935800293262678700791600125 465010766853i. Let us denote this approximation of λ_1 as a. A numerical check indicates that |ζ^∘ 3(a) - a| agrees with zero to 495 decimal places. Our main reason for requiring so much precision is that we will be repeatedly solving equations of the form ζ(u) = v for u, in each case replacing v with the previous u, to construct lists of (usually) 100 elements of a branch of the backward orbit of a nontrivial Riemann zero, looking for the u's near pre-selected fixed points. As the procedure repeats 100 times, there is an accumulation of numerical error, and in this situation very high precision is needed to maintain enough accuracy to “see” the spirals formed by these branches in our plots.
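The branch-building loop can be run in miniature on the real axis, where double precision suffices and nothing needs 500 digits. The sketch below is ours: Borwein's series stands in for Mathematica's zeta, each equation ζ(u) = v is solved by bisection on (1, ∞) (where zeta is a decreasing bijection onto itself, so no delicate seeding is needed), and the resulting branch of the backward orbit of a_0 = 3/2 converges to the repelling real fixed point of zeta near 1.83, echoing the convergence to repelling fixed points reported in section 4.

```python
import math

# Borwein coefficients d_k for the eta-series acceleration (computed once).
_N = 40
_D, _acc = [], 0.0
for _i in range(_N + 1):
    _acc += math.factorial(_N + _i - 1) * 4**_i / (
        math.factorial(_N - _i) * math.factorial(2 * _i))
    _D.append(_N * _acc)

def zeta(s):
    """Real zeta(s), s > 1, via Borwein's eta-series acceleration."""
    eta = sum((-1)**k * (_D[k] - _D[_N]) / (k + 1)**s for k in range(_N)) / -_D[_N]
    return eta / (1.0 - 2.0**(1.0 - s))

def solve(v):
    """Solve zeta(u) = v by bisection; zeta decreases from +oo to 1 on (1, oo)."""
    lo, hi = 1.0001, 60.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if zeta(mid) > v:
            lo = mid        # value still too big: move right
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Branch (a_0, a_1, ...) of the backward orbit of a_0 = 3/2:
# zeta(a_{k+1}) = a_k at every step.
branch = [1.5]
for _ in range(120):
    branch.append(solve(branch[-1]))
fix = branch[-1]
print(fix)  # real fixed point of zeta on (1, oo), near 1.83
```

A finite-difference check shows |ζ'| > 1 at the limit, so the fixed point is repelling for zeta and attracting for the inverse branch, which is why the backward orbit converges to it.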
§ A TOUR OF A_ϕ

We are interested in A_ϕ because plots of this set make visible the underlying structure of the network of ζ^∘ n pre-images of the critical line for all n at once: (1) the nontrivial zeros of the ζ^∘ n lie in bulbs of A_ϕ on filaments F decorating the border of A_ϕ, and (2) one ζ^∘ n_F pre-image of the critical line transects each such F. (Claims 1 and 2 are not, of course, logically equivalent; we are summarizing computer observations that we will describe in more detail below.) Thus the structure of union of rooted trees visible in plots of A_ϕ is apparently graph-isomorphic to a corresponding structure for the point set ⋃_Re(z) = 1/2 ζ^∘ -(z) = 𝒰. This observation informs our discussion of the trees T in the next section. We pretend that we have stated a rigorous definition of the decoration notion and definite conditions for the membership of a given complex number in a given filament. In view of the relationship between 𝒰 and A_ϕ, this should not cause problems: each filament F may be identified with one (of the many) ζ^∘ n_F pre-images of the critical line, the definition of which could be made precise. But we should say explicitly that “A decorates B” is a transitive relation and that the filaments are subsets of A_ϕ. In Figure 3.1, for example, the points at the junctions of four colors represent zeros of ζ^∘ 2; the zeros in the long filaments are nontrivial. The right panel of Figure 3.2 shows a quadrant plot of s ↦ ζ(s) - s superimposed on a plot of A_ϕ; the fixed points of zeta appear as the junction of four colors. The left panel depicts the nontrivial Riemann zeros using the same scheme (a quadrant plot of zeta.) The filled Julia set of zeta (the points in C with bounded orbit under iteration by zeta) is C - A_∞. The basin A_ϕ appears to be dense in C - A_∞.
The sets C - A_∞ (Figure 3.3) and A_ϕ, regarded as regions in the complex plane, are indistinguishable in our plots but they are not identical. For example, there is an infinite number of real zeta fixed points (<cit.>, Theorem 1) that belong to (C - A_∞) - A_ϕ. In addition, there appear to be infinite families of non-real zeta k-cycles for each integer k ≥ 1 in (C - A_∞) - A_ϕ. Zero lies in A_ϕ (<cit.>, Theorem 1.) This set is a fractal decorated with numerous long filaments (Figure 3.1.) Zeroes of the ζ^∘ n lie on the filaments. Because zero is an element of A_ϕ, we know that the whole backward orbit ζ^∘ -(0) lies in A_ϕ. Because the pre-images of nontrivial Riemann zeros under iterates of zeta lie on the filaments, the itinerary of a point in the backward orbit of a nontrivial zero ρ visits several filaments at the edge of A_ϕ before coming to ρ. (Some but not all pre-images of the trivial zeros also lie on filaments.) The set A_ϕ seems to comprise (1) a heart-shaped, seven-lobed central body, which we will call the main cardioid. (2) two major filaments of bulbs of various irregular shapes that emanate from the main cardioid, transected by the critical line and containing one nontrivial Riemann zero in each bulb (right panel of Figure 3.2.) (3) infinitely many blunt processes and long filaments decorating the main cardioid and each of the irregular bulbs. The filaments comprise smaller copies of the bulbs, which, in turn, are decorated with similar filaments, ad infinitum. (Figure 3.1.) Thus, when we plot them, the set of filaments decorating A_ϕ exhibits a visible tree structure. The visible features described in (1) - (3) were evident in Woon's plots of C - A_∞ <cit.>. The filaments appear to be zeta-iterate pre-images (close copies) of the two major filaments. Pre-images of the real axis pass through the blunt processes and contain pre-images of the trivial zeros.
For example, the right panel of Figure 3.1 superimposes a quadrant plot of ζ^∘ 2 on the left panel, so that junctions of four differently-colored regions each represent a zero of ζ^∘ 2. There are three long filaments depicted in this image containing zeros, the immediate zeta images of which are nontrivial Riemann zeros; but between the lower two such filaments is a blunt process transected by a zeta pre-image of the real axis, and we can see another series of ζ^∘ 2-zeros lying along this curve. These are zeta pre-images of the negative even numbers. (4) at each trivial zero < -18, a microscopic, more or less distorted copy (zeta-iterate pre-image) of the entire assemblage described in (1) - (3). (By “microscopic” features we mean features so small that they can only be visualized by a change of scale from that of Figure 3.3.) In Figure 3.4, we show copies of the main cardioid near the trivial zeros -28, -26, -24, -22 superimposed on quadrant plots of ζ^∘ 2 in the same squares. The size of these features decays exponentially with distance from zero. Their left-right orientation alternates. We speculate that the alternation can be derived from the alternating sign of the real derivative dζ(x)/dx|_x = -2n. Because these copies exist on the left half of the real axis, its zeta pre-images also contain complete copies of A_ϕ. The upper left panel of Figure 3.5 depicts the first bulb in the major filament in the upper half plane. It is a 10 by 10 square centered at ρ_1. Along its border we see an apparently infinite set of filaments alternating with an apparently infinite set of blunt processes. Our tests demonstrate that the filaments are transected by ζ^∘ n pre-images of the critical line for (n = 1, 2, 3, ...), and that the blunt processes are transected similarly by ζ^∘ n pre-images of the real axis. The other three panels depict a small copy of A_ϕ to the right of the largest blunt process on the right side of the bulb shown in the upper left panel.
In the lower panels, a quadrant plot of ζ^∘ 3 in the left panel and of ζ^∘ 4 in the right panel have been superimposed upon this copy. Evidently, it is a ζ^∘ 3 pre-image of A_ϕ. (5) Our observations indicate that for each filament F decorating A_ϕ there is a positive integer k_F (say, the degree of F) such that each bulb of F contains one nontrivial ζ^∘ k_F zero, and no nontrivial zeros of ζ^∘ k for any k ≠ k_F. Even under the Riemann hypothesis, it would not be necessary from first principles that degree k filaments are transected by ζ^∘ k-1 pre-images of the critical line, even though that is the simplest possibility. But it seems to be the case. In Figure 3.6, the ζ^∘ k-1 pre-images of the critical line transecting degree k filaments decorating the main cardioid are shown for k = 1, 2, 3 and 4.

§ BRANCHES INTERPOLATED BY SPIRALS

For any integer L > 0 there appears to be an infinite set of zeta cycles Λ = (λ_0, ..., λ_L-1) that pick out a linearly ordered subset B_ρ, Λ = (a_0, a_1, a_2, ...) of ζ^∘ -(ρ) such that (1) a_0 = ρ, (2) a_n = ζ(a_n+1), n = 0, 1, 2, ...., and (3) for each j = 0, 1, 2, ..., L-1, the subsequences b_j = (a_j, a_j + L, a_j + 2L, ...) appear to converge to λ_j. The sequence b_j is a branch of ζ_L^∘ -(a_j). In most of the cases that we have examined, each b_j appears to be interpolated by a spiral s_ρ_N,λ_j with center λ_j = lim b_j. The λ_j are repelling fixed points of ζ_L. Now we offer (speaking loosely) a geometric description of some of the b_j in terms of the basin of attraction A_ϕ. (It applies to most, but not all, instances we have examined to date.) A variety of filaments decorate A_ϕ, but here we restrict attention to those that decorate the main cardioid. We assign the set of filaments a structure of union of rooted trees T as follows.
A filament F ∈ T is the parent of a filament G ∈ T if and only if G decorates F and there is no intermediate filament H ∈ T such that G decorates H and H decorates F. The filaments containing nontrivial Riemann zeros have no ancestors, but they are not unique in this respect. Now fix integers L ≥ 1, N ≠ 0. There is an infinite set of zeta L-cycles Λ = (λ_0, λ_1, ..., λ_L-1) such that for each integer Δ = 0, 1, 2, ..., L-1, there is a tree T_Δ of filaments decorating the main cardioid of A_ϕ, and a path P_Δ = (F_0, F_1, ...) in T_Δ with k_F_m = Δ + mL and such that (if m > 0) F_m decorates the |N|^th bulb of its parent F_m-1. As in the first column of Figure 4.1, the filaments in P_Δ spiral around λ_Δ. In our graphic visualizations, the apparent size of F_m decays exponentially with m. Something like this would seem to be a necessary condition of the relation λ_j = lim b_j we mentioned above. Each bulb of F_m contains a nontrivial zero w of ζ^∘ Δ + mL, and ζ^∘ Δ + mL - 1(w) is a nontrivial Riemann zero. Which one? Let w_m, N be the nontrivial ζ^∘ Δ + mL zero belonging to the |N|^th bulb of the filament F_m in P_Δ. For m > 0, ζ_L(w_m, N) = w_m - 1, N, so the sequence (w_0, N, w_1, N, ...) is a branch of ζ_L^∘ -(w_0, N). In our observations, the zeta image of bulb |N| of a filament F in T_Δ with k_F > 1 is bulb |N| of its parent filament in T_Δ. Therefore ζ^∘ Δ + mL - 1(w_m, N) = ρ_± N. The observation that ζ_L(w_m, N) = w_m - 1, N suggests that there should be graph isomorphisms between subgraphs of the rooted tree graphs associated to the ζ_L^∘ - on one side and subgraphs of the trees T decorating the main cardioid of A_ϕ on the other. We should mention that the situation for copies of the main cardioid such as the ones illustrated in Figures 3.4 and 3.5 is different; except to say that the zeta images of copies are also copies, we will not discuss it further in the present article.
§ SPIRALS INTERPOLATING A BRANCH OF THE BACKWARD ZETA ORBIT OF A RIEMANN ZERO

§.§ Single spirals interpolating a branch.

When L = 1, Λ = {λ_1 } where λ_1 is a repelling zeta fixed point; there appear to be at least three categories of such points: ψ_-2n (say) lying near the trivial zeros -2n = -20, -22, ...; zeta fixed points ψ_ρ^* near each nontrivial Riemann zero ρ^*, and eight fixed points lying at the boundary of the main cardioid (right panel, Figure 3.2.) How near? In the case of the ρ^*, one can form an impression by keeping in mind that this figure depicts a 120 by 120 square (section 8.) The distances |-2n - ψ_-2n| are a great deal smaller; we omit the details. There are exactly two filaments F with degree k_F = 1 decorating the main cardioid; one of them contains ρ_N in its |N|^th bulb = β_N, say. Our observations are consistent with the following proposition. The point ψ_ρ_N lies at the border of the |N|^th bulb of a filament F^* with k_F^* = 2 decorating β_N. This ramifies: if ψ_ρ_N lies at the border of the |N|^th bulb of a filament F^*, then there is a child filament F' of F^* such that ψ_ρ_N lies at the border of the |N|^th bulb of F' and k_F' = k_F^* + 1. This is illustrated by the left column of Figure 5.1 for N = 1, 2, 3, 4. It depicts quadrant plots of s ↦ ζ(s) - s, so that ψ_ρ_N shows up as four-color junctions on the depicted filament (say, F_2.) The right column shows quadrant plots of ζ^∘ 3, ζ^∘ 4, ζ^∘ 5, ζ^∘ 6 in rows 1, 2, 3 and 4, respectively, all superposed on plots of A_ϕ.
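The omitted distances |-2n - ψ_-2n| can be estimated in double precision, at least for the real fixed points. The sketch below is ours: using Borwein's series together with the functional equation for real s < 0, it brackets ψ_-20 by the sign change of ζ(x) - x just left of -20 (ζ(-20) = 0, while ζ drops steeply toward a large negative value near -21) and confirms that this fixed point is repelling.

```python
import math

# Borwein coefficients d_k for the eta-series acceleration (computed once).
_N = 40
_D, _acc = [], 0.0
for _i in range(_N + 1):
    _acc += math.factorial(_N + _i - 1) * 4**_i / (
        math.factorial(_N - _i) * math.factorial(2 * _i))
    _D.append(_N * _acc)

def _zeta_pos(s):
    """zeta(s), real s > 0, s != 1 (Borwein's eta acceleration)."""
    eta = sum((-1)**k * (_D[k] - _D[_N]) / (k + 1)**s for k in range(_N)) / -_D[_N]
    return eta / (1.0 - 2.0**(1.0 - s))

def zeta(s):
    """Real zeta(s), extended to s < 0 by the functional equation."""
    if s > 0.0:
        return _zeta_pos(s)
    return (2.0**s * math.pi**(s - 1.0) * math.sin(math.pi * s / 2.0)
            * math.gamma(1.0 - s) * _zeta_pos(1.0 - s))

# g(x) = zeta(x) - x is negative at -20.5, positive at -20.001
# (zeta(-20) = 0), and increasing in between: bisect for psi_-20.
lo, hi = -20.5, -20.001
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if zeta(mid) - mid < 0.0:
        lo = mid
    else:
        hi = mid
psi = 0.5 * (lo + hi)
print(psi)  # a repelling real zeta fixed point a little below -20
```

The finite-difference derivative at ψ_-20 comes out well above 1, consistent with the claim in section 1.4 that the ψ_-2n are repelling.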
The squares have side length .2, .02, .002, and .0002 in rows 1, 2, 3 and 4, respectively. The center of the squares in row N is ψ_N, so the panels depict the region around this point at smaller and smaller scales. Figure 4.1 displays the original indications we had that some branches of the inverse of zeta lie on spirals. It zooms in on the illustration of ψ_ρ_1 in the left panel of the top row of Figure 5.1. The center of that panel is the fixed point, lying on the border of a filament F_3 (say) decorating the lower border of the largest full bulb. In the right panel of the top row of Figure 4.1, a quadrant plot of ζ^∘ 3 has been superimposed on a plot of the corresponding region of A_ϕ; we see that F_3 contains zeros of ζ^∘ 3. (Simple tests show that they are nontrivial zeros in the sense of the introduction.) In the lower rows, the zoom is repeated, and ψ_ρ_1 is seen to lie near a still-smaller filament decorating the bulb near the center of the figure just above it. The right column depicts quadrant plots of ζ^∘ 4, ζ^∘ 5 and ζ^∘ 6 for the squares opposite them in the left column. So in Figure 4.1 we are seeing zeros of these functions (again nontrivial). They also appear (in virtue of the shapes of the underlying A_ϕ-bulbs) to be ζ^∘ 3, ζ^∘ 4, ζ^∘ 5 pre-images of nontrivial Riemann zeros. The rapid reduction of scale from one row to the next attests to a similar reduction of the distances of these pre-images from ψ_ρ_1 (which, as we have remarked, is not surprising). The possibility that they may be traveling on spirals emerges from a look at the angles that the filaments F_3 and (say) F_4, F_5, F_6 make with the horizontal. These observations led us to do the numerical tests described in the last section. We made a survey of the spirals s_ρ,ψ_ρ^* for various choices of ρ and ρ^*. We made a table of ψ_ρ_n, 1 ≤ n ≤ 100, with 500 digits of precision, and we used a table of nontrivial Riemann zeros with 300 digits of precision made by Andrew Odlyzko <cit.>.
We made tables of the z_k in various B_ρ,ψ_ρ^* to high precision, proceeding inductively. We set z_1 = ρ and, given a value of z_k, after using Mathematica's FindRoot command to solve ζ(s) = z_k in the vicinity of ψ_ρ^*, we set z_k+1 equal to the solution. We began with 300 z_k for each B and used tests of the reliability of each z_k to truncate the list; typically, we ended up with at least 100 consecutive z_k. We use polar coordinates (r(z), θ(z)) to denote a typical point z on s_ρ,ψ_ρ^* such that (1) r(z) = |z - ψ_ρ^*|, (2) θ(z) is chosen so that θ(z) ≡ arg(z - ψ_ρ^*) mod 2π, and (3) θ(z) varies continuously and monotonically as z moves around the spiral in a fixed direction. In other words, θ(z) behaves up to a multiplicative constant like a winding number. Then r(z) appears to decay exponentially with θ(z). (Of course, we are only able to check this for z ∈ B_ρ,ψ_ρ^*, that is, for the z_k we propose are interpolated by s_ρ,ψ_ρ^*, because no other test for membership in s_ρ,ψ_ρ^* is available to us.) Therefore it was not practical to plot the spirals s_ρ,ψ_ρ^* without re-scaling, so we plotted the points (log r(z), θ(z)) instead. This procedure everts the apparent spirals: if k, j are such that r(z_k) < 1 and r(z_j) < 1, then r(z_j) < r(z_k) implies that the plotted point (log r(z_j), θ(z_j)) is further from the center of the re-scaled interpolating spiral than the point (log r(z_k), θ(z_k)): the reverse of the situation before re-scaling. (The direction of winding of the spiral is also reversed because the logarithms take negative values.) The points near the center of the re-scaled spiral depict z_k for smaller values of k, for which z_k is closer to ρ and further from ψ_ρ^*. They are crowded so closely, in spite of our re-scaling, that the interpolating curve near ρ is obscured.
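The inductive table-building procedure just described can be sketched in Python with the mpmath library standing in for Mathematica's FindRoot. This is a low-precision illustration only (40 digits rather than the paper's 500), and the starting guess for the fixed point near ρ_1 is taken from the figure appendix, ψ_ρ_1 ≈ -2.3859 + 16.271i.

```python
# Sketch of the backward-orbit computation: locate the repelling fixed
# point psi near rho_1, then build the branch z_1 = rho with z_{k+1}
# solving zeta(s) = z_k in the vicinity of psi.
from mpmath import mp, mpc, zeta, zetazero, findroot

mp.dps = 40  # modest working precision (the paper uses 500 digits)

rho = zetazero(1)  # first nontrivial Riemann zero, ~0.5 + 14.1347i

# Fixed point of zeta near rho_1; initial guess from the figure appendix.
psi = findroot(lambda s: zeta(s) - s, mpc("-2.3859", "16.271"))

branch = [rho]
guess = psi
for _ in range(5):
    # Solve zeta(s) = branch[-1], starting from the previous solution.
    z = findroot(lambda s, w=branch[-1]: zeta(s) - w, guess)
    branch.append(z)
    guess = z
```

Each returned z_{k+1} satisfies ζ(z_{k+1}) = z_k to working precision; raising mp.dps and the iteration count reproduces longer stretches of the tables described above.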
Figure 5.2 depicts branches of backward orbits of ρ_n (1 ≤ n ≤ 5) spiraling around two fixed points, ≈ -14.613 + 3.101i (left column) and ≈ -5.28 + 8.803i (right column), on the border of the main cardioid; we omit the 500-digit decimal expansions, which are easy to compute using Mathematica's FindRoot command. Figure 5.3 depicts branches of backward orbits of ρ_n spiraling around ψ_ρ_n (1 ≤ n ≤ 10). (We omit their precise expansions for the same reason.)

§.§ An example.

The first panel of Figure 5.4 plots the point set B_ρ_1,ψ_ρ_1 (re-scaled as described, and shifted to place the apparent spiral's center at the origin). It offers the appearance that the z_k (in red) form arms something like those of a spiral galaxy; this seems to be a result of nearly regular growth of θ(z_k) with k. But z_k for consecutive k do not lie in adjacent positions on these arms; in the second panel, the z_k are connected by chords in the same order as they appear in the sequence B_ρ_1,ψ_ρ_1: vertices v, w representing z_v and w = ζ(z_v) are connected by a chord. The lower left panel of Figure 5.4 is a plot of log |z_k - ψ_ρ_1| vs. k; it is clear that the distances of the z_k (colored red) from the fixed point ψ_ρ_1 at the center of the spiral are decaying exponentially. The lower right panel of Figure 5.4 depicts a spiral curve (colored blue) that approximately interpolates the z_k; we found this curve using the NonlinearModelFit command in Mathematica. The equation of the curve is

log r(z) = a + b θ(z) + c exp(d θ(z)),

with a ≈ 0.05575203301551956560399459579161529353, b ≈ -2.39481894384498740085074310912697832305, c ≈ -2.8680355917721941635331399485184884 × 10^-120, and d ≈ 0.97375124237020440301256901292961731822. The absolute value of c is so small that this is quite close to being the equation of a logarithmic spiral. In the section on error terms below we will compare directly the loci of the z_k with logarithmic spirals. In Figure 5.5, we study the variation in θ(z_k).
The left panel plots δ_k = θ(z_k+1) - θ(z_k) against k and shows that the θ(z_k) are very nearly periodic in k. The right panel, which plots log |δ_k+1 - δ_k| against k, shows that the departure from periodicity in θ(z_k) actually appears to decay exponentially with k. However, for other choices of ρ and ρ^* this no longer holds, and so it is an open question whether or not it would hold even in this example for very large k, that is, very close to the center of the spiral. We remark that the z_k could, of course, be distributed along a nearly-logarithmic spiral while also being distributed in a completely irregular or at least non-periodic way in the theta aspect, so the two questions are at least superficially independent. Now suppose (r_1, θ_1) and (r_2, θ_2) lie on a true logarithmic spiral log r = a + bθ. The constants a, b are determined by any two points of the spiral; hence, if two pairs of points determine different values of a and b, then the curve on which the three (or four) points comprising the pairs lie is not a logarithmic spiral. We used this idea to test B_ρ_1,ψ_ρ_1 = {z_1, z_2, z_3, ...} for the property of being interpolated by a logarithmic spiral. We performed the test by solving for a and b using the pairs (z_1, z_k), k = 2, 3, .... In Figure 5.6 we have plotted the resulting values of a against b, b against k, and a against k. Evidently a is roughly linear in b, and both a and b appear to converge as k grows without bound. Thus the interpolating curve is not a logarithmic spiral, for then a and b would be constants. But the convergence of a and b suggests that the interpolating spiral s_ρ_1,ψ_ρ_1 resembles a logarithmic spiral more and more closely as it winds inward towards ψ_ρ_1.

§.§ Backward orbits near the trivial zeros.

There appear to be real zeta fixed points ψ_-2n near each trivial zero -2n ≤ -20. Whether they lie slightly to the right or to the left of -2n along the real axis appears to depend upon the parity of n.
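The two-point test described above can be made concrete on synthetic data (the actual z_k require high-precision zeta evaluations, so the points below are generated from an exact logarithmic spiral with hypothetical constants; on such a spiral, every pair (z_1, z_k) must return the same a and b).

```python
# Two points (theta_i, r_i) on log r = a + b*theta determine a and b;
# on a true logarithmic spiral every pair of points gives the same
# constants, which is the test applied to B_{rho_1, psi_{rho_1}}.
import math

def ab_from_pair(p1, p2):
    """Solve log r = a + b*theta from two (theta, r) points."""
    (t1, r1), (t2, r2) = p1, p2
    b = (math.log(r2) - math.log(r1)) / (t2 - t1)
    a = math.log(r1) - b * t1
    return a, b

a_true, b_true = 1.3, -0.38             # hypothetical spiral constants
pts = [(0.5 * math.pi * k, math.exp(a_true + b_true * 0.5 * math.pi * k))
       for k in range(12)]

# Pairs (pts[0], pts[k]) play the role of the pairs (z_1, z_k).
estimates = [ab_from_pair(pts[0], pts[k]) for k in range(1, 12)]
```

For the true branches, the analogous estimates drift with k, which is exactly how Figure 5.6 detects the departure from a logarithmic spiral.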
This reflects the alternating left-right orientations of copies (zeta pre-images) of the basin of attraction A_ϕ we see in Figure 3.4. A branch B_ρ,ψ_-2n of the backward orbit of each nontrivial Riemann zero ρ lies on a curve appearing to pass through or terminate at ψ_-2n: if -2n ≡ 0 (mod 4), then the curve appears to terminate at ψ_-2n. These curves closely resemble straight line segments. Error terms are discussed in section 6. If -2n ≡ 2 (mod 4), then (supposing, for the moment, that the curve really is a line segment) ψ_-2n lies near its midpoint. In Figure 5.7, several of the backward orbits are depicted, re-scaled logarithmically as above. This observation is consistent with the hypothesis that B_ρ,ψ_-2n is interpolated by a spiral such that the a_k ∈ B_ρ,ψ_-2n satisfy |arg(a_k - ψ_-2n) - arg(a_k+1 - ψ_-2n)| ≈ 2π for all k if -2n ≡ 0 (mod 4), or |arg(a_k - ψ_-2n) - arg(a_k+1 - ψ_-2n)| ≈ π if -2n ≡ 2 (mod 4). We discuss this further in sections 5.5 and 6.2.

§.§ Several spirals that together interpolate a branch.

Figure 5.8 depicts members of zeta n-cycles (zeros of s ↦ ζ^∘ n(s) - s) near the bulb of A_ϕ containing ρ_1 for 2 ≤ n ≤ 5. As n increases, the pattern of distribution of these zeros becomes more and more obscure. The situation near ρ_1 appears to be typical of that near all nontrivial zeros of zeta iterates. Figure 5.9 illustrates the branch B_ρ_1,Λ of the backward orbit ζ^∘ -(ρ_1) induced by a 3-cycle Λ = (λ_1, λ_2, λ_3) with λ_1 ≈ 3.95896 + 24.2362i. The red vertices of a given chord of the graph represent points in the branch. Geometrically, Figure 5.9 is doubly abstract: (1) the spirals have been positioned so that their centers are placed at 1000 + 0i, 1000(cos(2π/3) + i sin(2π/3)) and 1000(cos(4π/3) + i sin(4π/3)) for the sake of legibility; and (2) the spirals are everted: they have been re-scaled logarithmically, so that points closer to the center appear, in the figure, to be farther from the center.
The row 2, column 2 panel shows vertices representing elements of B_ρ_1,Λ and edges between vertex pairs (v, ζ(v)), while the other panels depict the spirals separately; these are portraits of b_j (j = 1, 2, 3). In these three figures, each vertex pair (v, ζ^∘ 3(v)) is connected by an edge.

§.§ Angular distribution of branches along the spirals.

A structural invariant, which seems to determine the number of arms visible in our plots of branches of f^∘ -(z), is the function δ_f,z,ψ: k ↦ arg(a_k - ψ) - arg(a_k+1 - ψ), where the a_k are members of a particular branch of f^∘ -(z) converging to ψ. The values of δ_k in the case discussed in section 5.2 correspond in the present notation to those of δ_ζ,ρ_1,ψ_1(k). They appear to change very slowly with k, and this behavior seems to be what gives rise to the appearance of discrete arms in plots of branches of ζ^∘ -(ρ_n). Among the zeta fixed points very near trivial zeros, the only values of δ_ζ,ρ_n,ψ_-2n-18(k) that we see (Figure 5.7) are ≈ π and ≈ 2π, distributed, as we have noted above, according to the mod 4 residue classes of the zeros. Because δ_ζ,ρ_n,ψ_ρ_n(k) apparently converges rapidly as k → ∞ (as in Figure 5.5, where n = 1), we take the value of δ_ζ,ρ_n,ψ_ρ_n(100) as a proxy for lim_k→∞ δ_ζ,ρ_n,ψ_ρ_n(k). Then our calculations are consistent with the proposition that lim_k→∞ δ_ζ,ρ_n,ψ_ρ_n ≈ π/2 (Figure 5.10). Very small differences in this limit as n varies appear to determine very different shapes for the discrete arms visible in our plots. We have observed in all of our experiments that the visible structure of a branch of f^∘ -(z) depends upon the fixed point at its center and not on z, so δ_f,z,ψ should depend only upon f and ψ. Contrary to the impression suggested by our notation, it should be independent of z, but we cannot exclude the possibility that there are counterexamples to this idea.

§.§ Logarithmic models of spirals interpolating branches of the backward orbit of zeta.

The branches B_ρ,ψ = (a_0 = ρ, a_1, a_2, ...)
of ζ^∘ -(ρ) for nontrivial Riemann zeros ρ, converging to zeta fixed points ψ, are interpolated by curves that resemble logarithmic spirals. We carried out experiments in which we looked for approximations of these interpolating curves by such spirals. We chose branches of the argument function, varying with k and evaluated at a_k - ψ, such that the angle θ_k assigned to a_k - ψ was the least such angle > c + max_j<k θ_j, for c = 0 or 1. The angle θ_0 was the value of arg(a_0 - ψ) from the branch of the argument chosen automatically by Mathematica. The choice of c was dictated by requiring that θ_k act like a winding number about ψ evaluated at the points a_k. For ψ near a trivial zero, c = 1 was chosen; for ψ near a nontrivial zero, c = 0. For r_k = |a_k - ψ|, plots of the sets of pairs (θ_k, log r_k), k ≥ 0, appear to lie on curves resembling straight lines. For spirals centered at zeta fixed points ψ_ρ_n near the ρ_n, we approximated these lines using Mathematica's FindFit command. Figure 5.11 is a pair of plots of m_n and b_n against n for models |z - ψ_ρ_n| = exp(m_n θ + b_n) fitted to branches of ζ^∘ -(ρ_n), 1 ≤ n ≤ 600. In particular, we write r̂_k = exp(m_n θ_k + b_n) for our estimate of r_k. These seem to be first-order approximations to genuine interpolating curves; the error term will be discussed in the next section. The investigation of spirals centered at the zeta fixed points ψ_-2n that lie near the trivial zeros was carried out in a different way. We were able to collect a substantial amount of data (meaning the first 200 members of the branches) for the ψ_ρ_n, n ≤ 600, using 500-digit precision. On the other hand, even with 1000-digit precision, we were able to collect data only on the first twenty elements of branches centered at the ψ_-2n for n ≤ 30 before the use of the FindFit command to get a linear model for the pairs (θ_k, log r_k) produced error messages from Mathematica. Fortunately, the branches spiraling about the ψ_-2n appear to be better behaved than the ones spiraling about the ψ_ρ_n.
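The winding-number-style angle assignment just described, together with the angular increments δ of the previous subsection, can be sketched in a few lines. The branch below is synthetic (a contracting spiral advancing by π/2 per step, the value observed in the limit for the ψ_ρ_n), since reproducing the true branches needs high-precision zeta evaluations.

```python
# The angle assignment of section 5.5: theta_k is the least angle
# congruent to arg(a_k - psi) mod 2*pi that exceeds c + max_{j<k} theta_j
# (c = 0 near nontrivial zeros, c = 1 near trivial zeros), so that
# theta_k acts like a winding number about psi.
import cmath
import math

def unwrapped_angles(branch, psi, c=0.0):
    thetas = []
    for a in branch:
        t = cmath.phase(a - psi)        # principal value in (-pi, pi]
        if thetas:
            lower = c + max(thetas)
            while t <= lower:           # lift to the required branch of arg
                t += 2 * math.pi
        thetas.append(t)
    return thetas

# Synthetic branch: contracts toward psi while turning by pi/2 per step.
psi = -2.4 + 16.3j                      # hypothetical center
branch = [psi + 3.0 * 0.8 ** k * cmath.exp(1j * math.pi / 2 * k)
          for k in range(20)]
thetas = unwrapped_angles(branch, psi)
# Angular increments: the delta function of section 5.5, up to orientation.
deltas = [b - a for a, b in zip(thetas, thetas[1:])]
```

For this branch the increments come out at a constant π/2 in magnitude, matching the limit reported for the ψ_ρ_n; increments near 2π (the trivial-zero case) require the c = 1 lifting rather than the principal value.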
For each pair (n, n^*), the branch B_ρ_n,ψ_-2n^* is apparently interpolated both by a curve very nearly a logarithmic spiral and by another curve which is very nearly a straight line passing through the points ρ_n and ψ_-2n^*. As we will see in the next section, the fit of the branches to the straight line passing through these two points is so good that we use it as our second model, together with the assumption that θ_k = aπk + (a constant depending only on n and n^*), with a = 1 or 2 depending, as we have explained, only on the parity of n^*. It was feasible to find linear models for the maps k ↦ log r_k. Combining the assumptions about the θ_k with the linear models we construct for the log r_k gives logarithmic models for the interpolating spirals. We test these models in the next section. We chose to examine the behavior of branches B_ρ_n,ψ_-2n-18 of ζ^∘ -(ρ_n) because, among zeta fixed points close to the trivial zeros -2n, the greatest one (the one that lies rightmost along the real axis) is very close to -20. Figure 5.12 is a plot corresponding to Figure 5.11 for the zeta fixed points ψ_-2n-18. Here we have B_ρ_n,ψ_-2n-18 = (a_0 = ρ_n, a_1, a_2, ...). Writing r_k = |ψ_-2n-18 - a_k|, we took m_n and b_n to be, respectively, the means of the slopes and intercepts of the chords connecting consecutive pairs p_k = (k, log |a_k - ψ_-2n-18|). Thus our model for r_k = |a_k - ψ_-2n-18| is r̂_k = exp(m_n k + b_n). We discuss it further in the following section.

§ ERROR TERMS

We will take the phrase "error term" to encompass complex-valued deviations from a given estimate as well as their absolute values. Like the original estimates, the curves followed by the complex-valued deviations appear to have the form of logarithmic spirals.
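The chord-mean construction of m_n and b_n can be sketched directly; the input below is synthetic and exactly log-linear, so the model recovers the generating constants, which is the sense in which the construction is a sanity check rather than a fit to real branch data.

```python
# Chord-mean linear model for k -> log r_k: m and b are the means of
# the slopes and intercepts of the chords through consecutive points
# (k, log r_k); the model is then r_hat_k = exp(m*k + b).
import math

def chord_mean_model(rs):
    logs = [math.log(r) for r in rs]
    slopes = [logs[k + 1] - logs[k] for k in range(len(logs) - 1)]
    # Intercept of the chord through (k, logs[k]) with slope slopes[k].
    intercepts = [logs[k] - slopes[k] * k for k in range(len(slopes))]
    return sum(slopes) / len(slopes), sum(intercepts) / len(intercepts)

# Synthetic distances r_k = exp(-0.9*k + 0.4), exactly log-linear.
rs = [math.exp(-0.9 * k + 0.4) for k in range(21)]
m, b = chord_mean_model(rs)
```

On genuinely log-linear data every chord has the same slope, so the means reproduce the constants; the deviations measured in the next section quantify how far the real r_k depart from this ideal.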
This raises the prospect of an infinite regress, which might perhaps lead to an exact expression for the best interpolating curves, but we have postponed any investigation of this idea.

§.§ Deviation of backward orbit branches from logarithmic spirals.

§.§.§ Branches converging to fixed points near non-trivial zeros.

(This subsection provides some of our evidence for Conjecture 1.) For nontrivial Riemann zeros ρ_n and the corresponding zeta fixed points ψ_n, we plotted the relative complex-valued deviations d_rel(ρ_n,ψ_n,a_k). The runs depicted in Figure 6.1 portray both kinds of plots for ρ = ρ_n, ψ = ψ_ρ_n for n = 1, 28, and 48. It seems noteworthy that the two kinds of plots resemble each other so closely, but inspection demonstrates that they are not identical. The values of log d_rel(ρ_n,ψ_n,a_k) for n = 1, 28, and 48 are also plotted in Figure 6.1 (column 3). The magnitude of the d_rel(ρ_n,ψ_n,a_k) appears to decay exponentially for k < roughly 130; for larger k, however, the magnitude of the deviations appears to grow exponentially without exceeding e^-6 for k ≤ 200. We think that the shape of this curve, which is typical, is an artificial effect of the FindFit command on a file of 200 pieces of data: the fit is best near the center of the data file. For a fixed point ψ = ψ_ρ_n near a nontrivial zero ρ = ρ_n, we used the initial 200 elements of each branch B_ρ,ψ as a proxy for B_ρ,ψ to study the relative deviations of the curve interpolating it from a logarithmic spiral.
Taking β = 200 for the moment, let us set max_n,β := max_1≤k≤β d_rel(ρ_n,ψ_n,a_k) and max_n,β^* := √(n/log n) × max_n,β; let mean_n,β denote the mean of the d_rel(ρ_n,ψ_n,a_k), k = 1, 2, ..., β, and let mean_n,β^* := √(n/log n) × mean_n,β. (We exclude k = 0 in these definitions; thus the values of these four numbers tell us nothing about the fit of a_0 = ρ to the spiral in question.) The panel in row 1, column 1 of Figure 6.2 is a plot of 600 values of mean_n,β^*. The panel in row 1, column 2 is a plot of max_n,β^*. The plots in row 2 are smoothings of the plots in row 1: for each n, they depict means of mean_j,β^* and max_j,β^* over the range 1 ≤ j ≤ n. These plots are consistent with the proposition that, for β = 200, mean_n,β and max_n,β are both O(√(log n/n)) with both implied constants < 1. More optimistically, perhaps, the plots are consistent with the hypothesis that mean_n,β and max_n,β are both o(√(log n/n)). We tested the same idea after replacing √(log n/n) with powers (log n/n)^ϵ for 1/2 < ϵ < 1. It seems possible that the supremum of ϵ for which these statements might be true lies in the half-open interval [.8, .9). It also seems possible that this supremum is a decreasing function of n. We omit the relevant plots.

§.§.§ Branches converging to fixed points near the trivial zeros.

(This subsection provides some of our evidence for Conjecture 2.) Let the branch B_ρ_n,ψ_-2n-18 of ζ^∘ -(ρ_n) be (a_0 = ρ_n, a_1, a_2, ...). Before we test a logarithmic model for the decay of r_k = |a_k - ψ_-2n-18|, we want to assess how well B_ρ_n,ψ_-2n-18 fits the straight line passing through ρ_n and ψ_-2n-18. We measured the vertical deviation of the a_k ∈ B_ρ_n,ψ_-2n-18 from the straight line passing through both ρ_n and ψ_-2n-18 as a fraction of the heights of the a_k.
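This vertical-deviation measurement can be sketched on synthetic points; the fixed-point value used below is a rough placeholder, and "heights" are read as imaginary parts, which is our reading of the definitions that follow.

```python
# Deviation of points from the line through rho_n and psi_{-2n-18}:
# M, B are the slope and intercept of the real line through the two
# complex points, and a point a deviates by
# |Im(a) - (M*Re(a) + B)| / |Im(a)|.
def line_through(p, q):
    """Slope and intercept of the line through complex points p, q."""
    m = (q.imag - p.imag) / (q.real - p.real)
    return m, p.imag - m * p.real

def vertical_deviation(a, m, b):
    return abs((a.imag - (m * a.real + b)) / a.imag)

rho = 0.5 + 14.134725j                 # approximately rho_1
psi = -20.15 + 0.0j                    # placeholder fixed point near -20
m, b = line_through(rho, psi)

# Points exactly on the segment deviate by (numerically) zero.
on_line = [psi + t * (rho - psi) for t in (0.2, 0.5, 0.8)]
devs = [vertical_deviation(a, m, b) for a in on_line]
```

The observation in the paper is that the measured deviations of the real a_k are small and shrink rapidly with n, which is what justifies the straight-line second model.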
We make the following definitions: M_n and B_n are the slope and intercept, respectively, of the straight line passing through ρ_n and ψ_-2n-18, and d^trivial(n,k) := |(Im(a_k) - (M_n Re(a_k) + B_n))/Im(a_k)|; mean^trivial_n,β = the mean of the d^trivial(n,k), 1 ≤ k ≤ β, and max^trivial_n,β = max_1≤k≤β d^trivial(n,k). The left panel of Figure 6.3 is a plot of log mean^trivial_n,β, 1 ≤ n ≤ 32 and β = 20. The right panel is a corresponding plot of log max^trivial_n,β. Evidently, the a_k lie near the specified lines, and agreement with the lines improves rapidly as n increases. Next we define, for a_k ∈ B_ρ_n,ψ_-2n-18, r_n,k = |a_k - ψ_-2n-18|; for k > 1, m_n,k = the slope of the chord connecting the ordered pairs (k, log r_n,k) and (k - 1, log r_n,k-1) in R^2, and m_n,β = the mean of the m_n,k, 1 ≤ k ≤ β; we define a y-intercept function b_n,β analogously. The error functions are defined as follows: d^model(n,k,β) := |(log r_n,k - (m_n,β k + b_n,β))/log r_n,k|, mean^model_n,β = the mean of the d^model(n,k,β), 1 ≤ k ≤ β, and max^model_n,β = max_1≤k≤β d^model(n,k,β). The left panel of Figure 6.4 is a plot of log mean^model_n,β, 1 ≤ n ≤ 32 and β = 20. The right panel is a corresponding plot of log max^model_n,β. Once more the fit is good and improves rapidly as n increases.

§.§.§ Deviation of the Riemann zeros from fitted logarithmic spirals.

The plots in Figure 6.5 display numerical information that supports Conjecture 4 and the scenario described in section 6.1.3. They indicate the possibility that, as n → ∞, the nontrivial Riemann zeros ρ_n become better fitted to the logarithmic spirals we have in turn fitted to the branches B_ρ_n,ψ_n. The left panel is a plot of log d_rel(ρ_n,ψ_n,0) against n, 1 ≤ n ≤ 600; the right panel plots D_rel(N), the log of the running mean of the d_rel(ρ_n,ψ_n,0), as defined in Conjecture 4, 1 ≤ N ≤ 600.
Figure 6.6 shows the corresponding plots for d_rel(ρ_n,ψ_n+1,0). The next two plots treat absolute deviations d_abs, which suggest the narrowing of the widths of the "error bands" mentioned in the speculations we ventured in the Introduction. Figure 6.7 corresponds to the plots of relative deviations in Figure 6.5. Figure 6.8 displays information on absolute deviations for fixed points ψ_-2n nearest to the trivial zeros -2n, with spirals terminating at ρ_n (left panel) and ρ_n+1 (right panel).

§.§ Deviation from rotational invariance.

It did not seem plausible to us that something special about the Riemann zeros ρ should force the branches ζ^∘ -(z), z = ρ in particular, to be attracted to repelling fixed points ψ along logarithmic spirals. As we remarked in the introduction, dynamical systems theory leads one to expect that the ψ should attract all of the nearby branches ζ^∘ -(z), whether or not ζ(z) = 0, and (one speculated) probably along roughly similar curves. Logarithmic spirals appear in fluid mechanics (see, e.g., <cit.>, v. 2, pp. 186-188 or <cit.>, p. 358). By analogy with the streamlines of a vortex in a fluid, we speculated that the existence of spiral curves connecting zeros of zeta to repelling zeta fixed points might be a consequence of a scenario in which there is an infinite family of such spirals related by rotations around the fixed point. By this we mean a family of spirals s parameterized by real numbers x, varying continuously with x in a sense made explicit by condition (1) below, such that if s_x and s_x+θ are two such spirals with common center a zeta fixed point ψ, then (1) s_x+θ - ψ = e^iθ(s_x - ψ) (condition (1) being an equation of homotheties) and (2) z ∈ s_x ⇒ (i) ζ(z) ∈ s_x and (ii) there exists a branch B_z ⊂ s_x of ζ^∘ -(z) such that lim B_z = ψ. In this scenario, the spiral curves would be congruent in the sense of Euclidean geometry, and exactly one spiral would intersect the critical line at each ρ_n without appealing to special properties of the zeros.
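Condition (1) describes rotations about ψ. A minimal sketch of the rotation map and of the commutation discrepancy studied in the next paragraph; z ↦ z² is used as a dependency-free stand-in for ζ, and the numerical values are hypothetical.

```python
# The rotation R_{theta,psi}(z) = e^{i*theta} (z - psi) + psi, and the
# commutation discrepancy R(f(u)) - f(R(u)); f = z -> z**2 stands in
# for zeta so that no special-function library is needed.
import cmath

def rotate(z, theta, psi):
    return cmath.exp(1j * theta) * (z - psi) + psi

def discrepancy(u, theta, psi, f):
    return rotate(f(u), theta, psi) - f(rotate(u, theta, psi))

psi = -2.4 + 16.3j                     # hypothetical fixed point
u = 1.0 + 13.5j
f = lambda z: z * z
```

The rotation is rigid (it preserves |z - ψ| and is undone by rotating back), and the discrepancy vanishes at θ = 0; for the true branches the paper observes that the discrepancies themselves decay along curves resembling logarithmic spirals.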
We have verified the existence of spiral branches B_z of ζ^∘ -(z) for various z on the critical line other than Riemann zeros without meeting a counterexample. Like the spirals we have already described, they are approximately logarithmic; we omit the relevant plots. Condition (1) imposes rotational invariance on the s_x. This suggests the possibility that the branches of ζ^∘ - interpolated by them enjoy the same property. Suppose u and ζ(u) lie on s_x with center ψ, and let R_θ,ψ(z) := e^iθ(z - ψ) + ψ be the function that takes z to its image under rotation by an angle θ around ψ. Under perfect rotational invariance, not only of the spirals s_x but of the branches of ζ^∘ -(z) for particular z that they interpolate, the numbers R_θ,ψ(ζ(u)) - ζ(R_θ,ψ(u)) must vanish. Therefore we studied the R_θ,ψ(ζ(u)) - ζ(R_θ,ψ(u)). We restricted ourselves to u ∈ ζ^∘ -(z) for various z, not only because these are the main objects of interest, but because our only reliable information about the spirals comes from their interpolation of the branches B_ψ = (a_0, a_1, a_2, ...) of ζ^∘ -(z), and so our only useful candidates for points in s are the members of such branches. Logarithmically scaled plots (which we omit) of the discrepancies (say) R_θ,ψ(ζ(a_n)) - ζ(R_θ,ψ(a_n)) indicate that these numbers decay in modulus exponentially and rotate around the origin in a nearly linear fashion with n. In other words, they themselves describe curves that are approximated by logarithmic spirals.

§ APPENDIX: THE FIGURES

§.§ Figure 1.1

Figure 1.1 depicts a 120 by 120 square with center 1 + 0i.

§.§ Figure 2.1

Figure 2.1 depicts a 6 by 6 square with center zero.

§.§ Figure 3.1

Figure 3.1 shows an 8 by 8 square with center -5 + 9.5i.

§.§ Figure 3.2

Figure 3.2 shows a 120 by 120 square with center zero.

§.§ Figure 3.3

Figure 3.3 depicts a 60 by 60 square with center zero.

§.§ Figure 3.4

The upper left panel of Figure 3.4 shows a 2.4 × 10^-5 by 2.4 × 10^-5 square centered at -28.
The upper right panel shows a 2.4 × 10^-4 by 2.4 × 10^-4 square centered at -26. The lower left panel shows a .004 by .004 square centered at -24. The lower right depicts a .07 by .07 square centered at -22.

§.§ Figure 3.5

The panel in row 1, column 1 of Figure 3.5 depicts a 10 by 10 square centered at ρ_1. The other panels show a square with side length .006 and center ρ_1 + 4.1215 - .4015i ≈ 4.6215 + 13.7332i.

§.§ Figure 3.6

Each panel of Figure 3.6 is a 30 by 30 square with center -5.

§.§ Figure 4.1

The squares depicted in Figure 4.1 have side length .2, .02, .002, and .0002 in rows 1, 2, 3 and 4, respectively. The center of each square is ψ_ρ_1 ≈ -2.3859 + 16.271i.

§.§ Figure 5.1

All the panels of Figure 5.1 depict A_ϕ in 2 by 2 squares. In rows 1-4, the centers are ψ_ρ_1 - ψ_ρ_4, respectively, where ψ_ρ_2 ≈ -2.0369 + 21.9931i, ψ_ρ_3 ≈ -1.6935 + 26.5283i, and ψ_ρ_4 ≈ -1.7496 + 30.8158i.

§.§ Figure 5.2

In Figure 5.2, column 1 depicts branches of ζ^∘ -(ρ_n), 1 ≤ n ≤ 5, centered at a zeta fixed point ≈ -14.613 + 3.108i; column 2 depicts branches of ζ^∘ -(ρ_n), 1 ≤ n ≤ 5, centered at a zeta fixed point ≈ -5.279 + 8.803i.

§.§ Figure 5.3

In Figure 5.3 (referring to the caption), the value of n in row a, column b is n = 2a + b - 2.

§.§ Figure 5.8

Figure 5.8 depicts four views of a 12 by 12 square with center ρ_1.

§.§ Figure 5.9

Figure 5.9 depicts the branch B_ρ_1,Λ of the backward orbit of ζ^∘ -(ρ_1) induced by a 3-cycle Λ = (λ_1, λ_2, λ_3) with λ_1 ≈ 3.95896 + 24.2362i.
http://arxiv.org/abs/1703.08779v2
{ "authors": [ "Barry Brent" ], "categories": [ "math.NT", "11-04" ], "primary_category": "math.NT", "published": "20170326072526", "title": "Experiments with the dynamics of the Riemann zeta function" }
^1 State Key Laboratory of Advanced Optical Communication Systems and Networks, School of Electronics Engineering and Computer Science, and Center for Quantum Information Technology, Peking University, Beijing 100871, China ^2 State Key Laboratory of Information Photonics and Optical Communications, Beijing University of Posts and Telecommunications, Beijing 100876, China ^3 School of Electronic Engineering, Beijing University of Posts and Telecommunications, Beijing 100876, China hongguo@pku.edu.cn

When the image mutual information is used to assess the quality of the reconstructed image in pseudo-thermal light ghost imaging, a negative exponential behavior with respect to the measurement number is observed. Based on information theory and a few simple and verifiable assumptions, a semi-quantitative model of the image mutual information under varying measurement numbers is established. It is the Gaussian character of the bucket detector output probability distribution that leads to this negative exponential behavior. Designed experiments verify the model.

Negative exponential behavior of image mutual information for pseudo-thermal light ghost imaging: Observation, modeling, and verification

Hong Guo^1

December 30, 2023
===========================================================================

§ INTRODUCTION

Image quality assessment is known to be difficult <cit.>. Besides traditional image quality measures based on error estimation, assessments of different types have been introduced, first by the signal processing community (cf. <cit.> for a review).
Among others, the mutual information (MI), representing the amount of information shared by two random variables in information theory <cit.>, was introduced to account for the similarity between images <cit.>, and has been successfully applied in different circumstances to assess image quality <cit.>. Being different from the usual "single snapshot" imaging process, ghost imaging (GI) is built on a large number of consecutive measurements of two quantities: the light intensity registered by a "bucket" detector with no spatial resolution, and a spatial profile that never reaches the object—either an "idler" reference light field <cit.>, a modulation pattern <cit.>, or the calculated diffraction profile of that field <cit.>. As a consecutive process, modeling the performance under varying measurement numbers is of great significance to GI. One would naturally expect the image quality to improve with increasing measurement number n, and to converge when n is quite large, suggesting an upper limit of image quality when n → ∞. Unfortunately, previous studies of image quality focus on the influence of either the noise level <cit.> or the relative spatial/temporal scale <cit.>, and no quantitative analysis concerning the measurement number has been published to our knowledge, except for a few qualitative observations <cit.> and an untight lower bound <cit.>. In this contribution, we use the image mutual information (IMI) between the object O and the reconstructed image Y to assess the image quality of pseudo-thermal light GI. Semi-quantitative fitting shows that the IMI I(O;Y) is a negative exponential function of the measurement number n. An information-theory-based model explains this behavior. All the assumptions are validated. Designed further experiments demonstrate high agreement with the predictions of the model.

§ METHODS AND OBSERVATION

§.§ Experiment setup

A conventional GI setup is implemented as in Fig. <ref>.
Output of a 532 nm laser passes through a rotating ground glass (R.G.G., Edmund 100 mm diameter 220 grit ground glass diffuser), turning it into pseudo-thermal light <cit.>, whose intensity fluctuates randomly in both the space and time domains. This pseudo-thermal light is then split into two arms by the beam splitter (BS). The signal arm penetrates a transmissive object mask, followed by a focusing lens, and is registered as a whole into a temporal intensity sequence B(t) by a bucket detector which has no spatial resolution. The spatial profile of the reference arm, R(x;t), which never reaches the object, is recorded by a commercial CMOS camera (Thorlabs DCC3240C) synchronously with the bucket detector. The second-order fluctuation correlation (2^nd FC) <cit.> between corresponding B(t) and R(x;t) yields the reconstructed image Y(x),

Y(x) ∝ ⟨[R(x;t) - ⟨R(x;t)⟩_t] × [B(t) - ⟨B(t)⟩_t]⟩_t / ⟨R(x;t)⟩_t⟨B(t)⟩_t,

where ⟨·⟩_t denotes the average over all the measurements.

§.§ Image mutual information

Mutual information In information theory, the MI between two random variables A and B is defined as

I(A;B) = H(A) - H(A|B),

where H(A) = -∑_a p_A(a) log_2 p_A(a) is the Shannon entropy of A with probability distribution function (PDF) p_A(a), denoting the amount of information one reveals upon gaining full knowledge of p_A(a), and H(A|B) is the conditional entropy of A given B, representing the amount of information about A that remains unknown even when the probability distribution of B is totally determined,

H(A|B) = -∑_a ∑_b p_A,B(a,b) log_2 p_A|B(a|b),

where p_A,B(a,b) is the joint probability of A = a and B = b, and p_A|B(a|b) is the conditional probability of A = a given B = b. Eq.
(<ref>) shows that I(A;B) denotes the amount of information shared by the two parties A and B, and thus can be a measure of how similar the two variables are, since identical variables have the largest MI, while totally independent ones have the smallest.

Image mutual information When IMI is applied, an image A(x) of N pixels is treated as a one-dimensional random variable A of length N. The IMI between two images A(x) and B(x) is defined as the MI between the two random variables A and B, which denotes the similarity between the two images. If A(x) and B(x) are set to be the object and the image of an imaging system, respectively, IMI can assess the image quality, since the goal of imaging is to produce a duplicate as similar to the object as possible. In fact, by maximizing IMI, imaging distortion and relative displacement can be corrected to a large extent, a technique known as image registration (cf. <cit.> for a review). Here we want to note that, in order to reduce the influence of image distortion or relative displacement on the image quality assessment, the image and object should first be aligned when IMI is used to assess image quality. What is more, unlike other assessments, e.g., the mean square error (MSE), IMI is insensitive to the relative coordinate x within the area of interest (AOI), i.e., a rearrangement of pixels, which would totally change the image content, makes no difference with respect to IMI. This unique property, on the one hand, emphasizes the importance of image alignment, and on the other suggests the potential to develop a content-free image quality assessment, i.e., a general measure of image quality regardless of the specific pattern of the image.

§.§ Observation of negative exponential behavior

The reconstructed image Y(x) is recorded against different measurement numbers. The AOI contains 120 × 120 pixels.
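The definitions in Eqs. (2)-(4) can be computed directly from a joint histogram of two quantized sequences; the toy sequences below are illustrative, and the identities checked (identical sequences give I = H, empirically independent ones give I = 0) are the extreme cases mentioned above.

```python
# I(A;B) = H(A) - H(A|B), computed from a joint histogram of two
# equal-length discrete sequences; for images, each sequence would be
# the flattened, quantized pixel values over the AOI.
import math
from collections import Counter

def entropy(seq):
    n = len(seq)
    return -sum((c / n) * math.log2(c / n) for c in Counter(seq).values())

def mutual_information(a, b):
    n = len(a)
    h_joint = -sum((c / n) * math.log2(c / n)
                   for c in Counter(zip(a, b)).values())
    return entropy(a) + entropy(b) - h_joint   # equals H(A) - H(A|B)

a = [0, 0, 1, 1, 2, 2, 3, 3]       # toy quantized "image"
b_ind = [0, 1, 0, 1, 0, 1, 0, 1]   # empirically independent of a
```

In practice the quantization bit length sets the alphabet over which these histograms are taken, which is why it is fixed (to 9 bits here) before I(O;Y) is evaluated.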
To ensure alignment, the most over-sampled image ( n=50000 ), Y_∞( x ), serves as an almost-identical approximation of the object O ( x ), assuming that after so many measurements the image has become a stable, nearly perfect duplicate of the object. The IMI between O ( x ) and Y ( x ), I( O;Y), is calculated for varying n to assess the image quality—the higher I( O;Y) is, the better the image quality. For our system, the image quantization bit length used when calculating IMI is set to 9, according to the Appendix (Sect. <ref>). The result is shown in Fig. <ref>. Curve fitting with both linear and nonlinear regression shows that the negative exponential function fits the experimental result best, i.e.,I(O;Y) = C_1 - C_2exp(- n/C_3),where the fitting parameter C_1 denotes the upper limit of I(O;Y) when n →∞, and the parameter C_3 represents the convergence speed, i.e., C_3 measurements are required to reduce the uncertainty between image and object to 1/e of its initial value. The larger C_3 is, the more measurements one needs to achieve the same level of image quality. § MODELING §.§ GI equivalent model In order to explain Eq. (<ref>), an equivalent model of GI concerning only the data processing part is established, as shown in Fig. <ref>. The light field in the signal arm penetrates the object and is registered by the bucket detector. Under a classical approximation, this light field is identical to its counterpart in the reference arm, up to a propagator, since they are split from the same pseudo-thermal light before the beam splitter in Fig. <ref>. Therefore, in each measurement, O ( x ) is “encoded” by the current reference light field R ( x;t) into the bucket detector output B ( t ). It is the n realizations of R ( x;t) and B ( t ), i.e., r_1,r_2, …r_n and b_1,b_2, …b_n, respectively, that lead to the reconstructed image Y( x ) through the second order fluctuation correlation of R ( x;t) and B ( t ), i.e., Y = Y( {b_n},{r_n}).
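The fluctuation-correlation reconstruction described by this equivalent model can be sketched numerically. The object shape, speckle statistics, and measurement number below are illustrative assumptions, not the experimental values:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 8x8 binary object O(x); purely illustrative.
obj = np.zeros((8, 8))
obj[2:6, 3:5] = 1.0

n = 20000                                  # assumed number of measurements
R = rng.exponential(1.0, size=(n, 8, 8))   # pseudo-thermal speckle intensities r_i
B = (R * obj).sum(axis=(1, 2))             # bucket outputs b_i = sum_x r_i(x) O(x)

# Second order fluctuation correlation:
# Y(x) ∝ <ΔR(x;t) ΔB(t)>_t / (<R(x;t)>_t <B(t)>_t)
dR = R - R.mean(axis=0)
dB = B - B.mean()
Y = (dR * dB[:, None, None]).mean(axis=0) / (R.mean(axis=0) * B.mean())
```

With independent speckle pixels, the correlator picks out the object support: Y(x) is proportional to O(x) up to statistical noise that shrinks as n grows.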
Therefore, any quantity concerning O ( x ) and Y ( x ) should be fully determined by the random variables O, {b_n} and {r_n}. §.§ Assumptions Based on the GI equivalent model, the derivation of Eq. (<ref>) is given in the Appendix (Sect. <ref>). Here we summarize and double-check all of the assumptions involved. Fixed object This assumes that the object O ( x ) is fixed, so that H( O ) is a constant. This is true since we use a static object, and the most over-sampled image Y_∞( x ) is a good-enough approximation of O ( x ). Independent object This assumes that the object O ( x ) is independent of the image Y ( x ), so that the conditional probability p_O| Y .( o| y .) = p_O( o ) for all o and y. This is true since the object is fixed, no matter what the image is. Moreover, since Y = Y( {b_n},{r_n}), one has p_O| Y .( o| Y .) = p_O| B,R.( o| {b_n},{r_n}.) = p_O( o ) for any o, Y, and {b_n}, {r_n}; that is, the fixed object is not affected by the bucket detector values or the reference light field. Independent reference light field This assumes that the reference light field R ( x;t) is independent of the bucket detector output B ( t ), so that the conditional probability p_R| B .( r| b .) = p_R( r ) for all b and r. This is true due to the random nature of the reference light field, in both the spatial and time domains, caused by its pseudo-thermal property. We calculate the normalized mutual information (NMI) between the time sequences of the bucket detector output {b_n} and the reference light field {r_n} (for the value of r_n, both the summation over the whole AOI and the intensity in a fixed pixel are used) to verify this independence,I_nor( B;R) = 2I(B;R)/H( B ) + H( R ) - I(B;R),where the denominator is the total amount of information, or joint entropy, of {b_n} and {r_n}. Totally correlated random variables yield the maximal NMI, while totally independent ones yield zero.
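A minimal sketch of this NMI, implementing the printed normalization and estimating I(B;R) from a joint histogram of the two sequences (the bin count is an assumption of the sketch):

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits of a normalized histogram."""
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def normalized_mi(x, y, bins=16):
    """I_nor(B;R) = 2 I(B;R) / (H(B) + H(R) - I(B;R)), with I(B;R)
    estimated from the joint histogram of the two sequences."""
    joint, _, _ = np.histogram2d(np.ravel(x), np.ravel(y), bins=bins)
    p_xy = joint / joint.sum()
    h_x = entropy(p_xy.sum(axis=1))
    h_y = entropy(p_xy.sum(axis=0))
    i_xy = h_x + h_y - entropy(p_xy.ravel())
    return 2.0 * i_xy / (h_x + h_y - i_xy)
```

For finite samples the histogram estimate of I(B;R) carries a small positive bias, so independent sequences give a value near (not exactly) zero.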
For varying measurement number n, | I_nor( B;R)| ≤10^ - 14, indicating the independence of {r_n} from {b_n}. Gaussian PDF of bucket detector output This assumes that the output of the bucket detector has a Gaussian probability distribution function,p_B( b_i) ∝exp[- ( b_i - b̅)^2/2σ ^2],i = 1,2, … ,n,where b̅ and σ are the expectation and standard deviation of {b_n}, respectively. The strict derivation for a general case is complex, since the specific object pattern is involved. The experimental result is shown in Fig. <ref>. A similar result has been reported in Ref. <cit.>. Bucket detector output iid. This assumes that the bucket detector output values are independent and identically distributed (iid.), so that p_B( {b_n}) = Π_i = 1^n p_B( b_i). This is guaranteed by the pseudo-thermal nature of the reference light field. Since R ( x;t) is random in both the space and time domains <cit.>, and b_i = b_i( O,r_i) for any i ∈[ 1,n], as mentioned in Sect. <ref>, the iid. property of {b_n} follows naturally. Large measurement number This assumes n ≫ 1, which is fulfilled in our experiment. Image alignment This assumes alignment of the image to the object. Instead of the real object pattern, we use the most over-sampled image of the same consecutive measurement sequence as the approximation, which ensures perfect alignment as long as the imaging system is stable. Image registration is also conducted. This assumption is not necessary to derive Eq. (<ref>), but is vital for IMI to be a good image quality assessment, as mentioned in Sect. <ref>. §.§ Further verification Experiments are designed to verify the above model from different perspectives. Post-selection on bucket fluctuation The derivation in the Appendix (Sect. <ref>) uses the propertylim_n →∞∑_i = 1^n ( b_i - b̅)^2/n = σ ^2,which holds for the {b_n} as a whole. Suppose instead that one selects only the measurements whose bucket detector output b_i' satisfies ( b_i' - b̅)^2≥m_1σ ^2, i.e., those with larger fluctuations away from the mean value b̅.
The variance of the selected b_i' then satisfiesσ'^2 = ∑_i = 1^n'( b_i' - b̅)^2/n' = m_2σ ^2,in which m_2> 1. Returning to Eq. (<ref>), one has C_3' = C_3/m_2, which indicates that if the average fluctuation of the selected measurements increases, the number of measurements required for IMI to converge, i.e., to reach an image of the same quality, decreases in equal proportion. In other words, one can achieve the same image quality with fewer measurements after post-selection on the fluctuation amplitude of the bucket detector output. A similar phenomenon has been reported in Ref. <cit.>; here we present the quantitative relationship that this post-selection process should obey,σ'^2∝1/C_3'.At the same time, according to Sect. <ref> and the fixed object approximation, no matter how m_1 varies,C_1 = H( O ) = const.The experimental results are shown in Fig. <ref>(a). The good agreement with Eq. (<ref>) and Eq. (<ref>) further validates our model. Varying AOI Ref. <cit.> reported that for thermal light GI, the ultimate imaging signal-to-noise ratio (SNR) after many measurements should be a rational function of the size of the AOI, approximately inversely proportional to the square root of the AOI scale parameter l. We vary the AOI by choosing only the l × l pixels in the middle of the recorded reference light field in each measurement, and calculate the corresponding IMI against measurement number n—the same as in Sect. <ref>. The experiments show that Eq. (<ref>) holds for all the different AOIs. The results are shown in Fig. <ref>(b). The fitting parameter C_1, which stands for the ultimate IMI when n →∞, fits a power-law function of the AOI length l (equivalent to the “resolution parameter R” in Ref. <cit.>) well,C_1∝l^a,where a =- 0.49352±0.01991. This result agrees nicely with Ref. <cit.>, suggesting that IMI behaves very similarly to a usual image quality assessment—the imaging SNR.
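A power-law fit of this kind can be sketched as a linear regression in log-log space; the sample values below are arbitrary assumptions, not the measured C_1 data:

```python
import numpy as np

def fit_power_law(l, c1):
    """Fit C_1 = k * l**a by linear regression of log(C_1) on log(l);
    returns (a, k), with a the power-law exponent."""
    a, log_k = np.polyfit(np.log(l), np.log(c1), 1)
    return float(a), float(np.exp(log_k))
```

On exact power-law data the exponent is recovered to machine precision; with noisy data the regression returns the least-squares exponent and prefactor.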
Moreover, according to our model, the convergence parameter C_3 should be independent of the size of the AOI—which is somewhat counter-intuitive—indicating that the number of measurements required before the image quality converges and the image becomes stable is the same for both large and small images. The experiment verifies this conjecture, further validating our model. Contribution of reference field Our model suggests that the reference light field makes no direct contribution to the negative exponential behavior of the IMI vs. n relationship. To verify this, a comparative experiment is designed in which the bucket detector output is replaced by the summation of the reference light field intensities within the AOI in each measurement. We calculate the IMI between the object and the images reconstructed this way, Y_rf( x ), against measurement number n, as shown in Fig. <ref>(c). No explicit change of the IMI under varying n is found, which supports our argument. Furthermore, we define the “differential” IMI asI_diff( O;Y) = I( O;Y) - I( O;Y_rf),which is the difference between the ordinary IMI and the IMI obtained by replacing the bucket detector output with the reference field intensity summation. We expect I_diff( O;Y) to represent the part of the IMI revealed “solely” by the bucket detector; the name “differential” comes from the differential entropy in information theory <cit.>. I_diff( O;Y) also satisfies Eq. (<ref>), dominating the negative exponential behavior, with one more advantage over I ( O;Y)—it has a zero initial value when n → 0, making it closer to practical applications, since one has no knowledge of the object before conducting any measurements. This makes I_diff( O;Y) another promising image quality assessment. § DISCUSSION AND CONCLUSION Comparison with correspondence GI The results of the bucket fluctuation post-selection in Sect. <ref> are quite similar to those reported in correspondence GI <cit.>.
For example, using only the measurements with large fluctuations can reduce the number of measurements involved in the reconstructed image calculation, and the larger the fluctuations are, the fewer measurements are needed. Rather than a coincidence, we believe this is due to the same underlying dynamics, even though the image calculation formulas differ. In both systems, the bucket detector output has a Gaussian PDF, which decreases as the fluctuation increases. In our case, this means that measurements with larger fluctuations from the mean bucket detector output value are rarer, and in the language of information theory, one reveals more information when these measurements appear. We think this explanation also applies to their case. Furthermore, Fig. <ref>(d) shows the relationship between the convergence parameter C_3, which denotes the number of measurements required in the bucket fluctuation post-selection calculation, and the “effective” ratio—the fraction of all completed measurements that fulfill the varying fluctuation requirement. The fitting result suggests a linear relationship. Since the total number of measurements stays the same, one can see that the reduction of the required measurement number is merely due to the fact that fewer measurements can meet the harsher fluctuation requirement—which makes the satisfying measurements rarer—and cannot reduce the total number of measurements to be conducted in the first place. Therefore, correspondence GI can only reduce the number of post-selected measurements, while still needing the same number of measurements to be completed. Applicable range One may think that the above negative exponential behavior of the image quality versus measurement number relationship is limited to situations with a Gaussian PDF, or to taking mutual information as the image quality assessment. Some of our very recent contributions (arXiv: 1603.00371, 1604.02515, 1702.08687) suggest otherwise.
Our model applies to quite general scenarios, thanks to the fundamental nature of information theory. Significance Our work is important to the GI community in several ways. First, we provide a semi-quantitative model of the image quality versus measurement number relationship, enabling accurate prediction of the expected image quality after a certain number of measurements, and of the measurement number necessary to meet any given image quality requirement. Second, the explicit connections between GI parameters and concepts in information theory may lead to a new perspective of interdisciplinary research, which we hope could benefit the comparatively new and less developed study of GI in return. Last but not least, the behavior of image mutual information is similar to that of the usual image quality assessments, e.g., mean square error, contrast-to-noise ratio, and imaging signal-to-noise ratio, while its unique insensitivity to the specific coordinates within the AOI suggests that IMI (and differential IMI) is a promising candidate for a content-free image quality assessment. In conclusion, we report the observation, modeling, and verification of the negative exponential behavior of image mutual information, used to assess the image quality of pseudo-thermal light ghost imaging, with respect to the measurement number. Rooted in fundamental information theory, the model applies to a much more general scenario. § APPENDIX §.§ Number of image quantization levels When calculating the Shannon entropy or mutual information of (quasi-)continuous random variables, discretization is a practical problem, i.e., one must decide how many bits should be used in the analog-to-digital conversion. We quantize the most over-sampled reconstructed image Y_∞( x ), which we used as the approximate object profile in Sect. <ref>, into 2^L equally-spaced levels, and calculate its image entropy H( Y_∞) under varying quantization bit length L, to determine the optimum L, as shown in Fig.
<ref>. Unfortunately, the ever-growing H ( Y_∞) provides no optimum L other than L →∞, which means that the more bits one uses, the more information one can reveal from the quantized image. We therefore turn to an economy consideration. We define the redundant bit ratio asR_bit = [ L - H( Y_∞) ]/L,which is the ratio of “redundant” bits among all L bits when applying entropy coding in the quantization process <cit.>. A small R_bit means that it is nearly impossible to compress the quantization bit length; in other words, few of the L bits are wasted. The R_bit vs. L relation is also given in Fig. <ref>, from which we find that 9 is the optimum quantization bit length. §.§ Derivation of Eq. (<ref>) According to Eq. (<ref>),I( O;Y) = H( O ) - H( O| Y .).H( O ) equals a constant C_1 according to the fixed object assumption. Following Eq. (<ref>) and the independent object assumption,[H( O| Y .) = - ∑_o ∑_y p_O,Y( o,y)log_2p_O| Y .( o| y .); = - ∑_o [ log_2p_O( o ) ·∑_y p_O,Y( o,y)] , ]which suggests that the two-fold summation in Eq. (<ref>) can be done in two steps: first ∑_y p_O,Y( o,y) over all the possible values y of the image Y ( x ), then over all o. Notice that the second step is irrelevant to any specified y, and by definition,∑_y p_O,Y( o,y)= p_O,Y(o,Y).The equivalent GI model in Sect. <ref> suggests Y = Y( {b_n},{r_n}). Together with the independent object assumption, one has[p_O,Y( o,Y)=p_O,B,R( o,{b_n},{r_n}); = p_O| B,R.( o| {b_n},{r_n}.)p_B,R( {b_n},{r_n}); = p_O( o )p_B,R( {b_n},{r_n}). ]The independent reference light field assumption leads to[ p_B,R( {b_n},{r_n}) = p_B( {b_n})p_R| B .( {r_n}| {b_n}.); = p_B( {b_n})p_R( {r_n}), ]in which p_R( {r_n}) should be a constant due to the spatial and temporal randomness of the reference light field, a consequence of its pseudo-thermal characteristics <cit.>. Under the Gaussian PDF and iid.
assumptions of the bucket detector output,[p_B( {b_n})=Π_i = 1^n p_B( b_i); ∝ Π_i = 1^n {exp[- ( b_i - b̅)^2/2σ ^2]}; =exp[- ∑_i = 1^n ( b_i - b̅)^2/2σ ^2]. ]Noticing Eq. (<ref>), when the measurement number is large, i.e., n ≫ 1,∑_i = 1^n ( b_i - b̅)^2= nσ ^2.Substituting Eq. (<ref>) to (<ref>) into Eq. (<ref>),[H( O| Y .) = - ∑_o [ p_O( o )log_2p_O( o ) ·C_4exp(- n/C_3)]; =H( O ) ·C_4exp(- n/C_3). ]Back to Eq. (<ref>), one gets Eq. (<ref>). The above derivation shows that it is the probability distribution of bucket detector output that determines the negative exponential behavior of IMI regarding measurement number n, which is reasonable, given a fixed object that never changes, a random reference light field which is basically irrelevant to the object, and an iid. bucket output with Gaussian PDF that contains all the information passed to the image from the object. §.§ Conflict of interestThe authors declare that they have no conflict of interest. §.§ FundingNational Natural Science Foundation of China (61631014, 61401036, 61471051, 61531003); National Science Fund for Distinguished Young Scholars of China (61225003); China Postdoctoral Science Foundation (2015M580008); Youth Research and Innovation Program of BUPT (2015RC12).amsplainImageAssessment Eskicioglu AM, Fisher PS (1995) Image quality measures and their performance. IEEE Trans Commun 43:2959–2965 SignalAssessment Wang Z, Bovik AC (2009) Mean squared error: love it or leave it? A new look at signal fidelity measures. IEEE Signal Process Mag 26:98–117 InformationTheoryBook Cover TM, Thomas JA (1991) Elements of Information Theory. Wiley-Interscience, New York IMI1 Maes F, Collignon A, Vandermeulen D, Marchal G, Suetens P (1997) Multimodality image registration by maximization of mutual information. IEEE Trans Medical Imaging 16:187–198 IMI2 Viola P, Wells III WM (1997) Alignment by maximization of mutual information. 
Int J Comput Vision 24:137–154 IMIApp1 Pluim JPW, Maintz JBA, Viergever MA (2003) Mutual-information-based registration of medical images: a survey. IEEE Trans Medical Imaging 22:986–1004 IMIApp2 Guo B, Gunn SR, Damper RI, Nelson JDB (2006) Band selection for hyperspectral image classification using mutual information. IEEE Geosci Remote Sens Lett 3:522–526 Shih95 Pittman TB, Shih YH, Strekalov DV, Sergienko AV (1995) Optical imaging by means of two-photon quantum entanglement. Phys Rev A 52:R3429 IEEE08 Duarte MF, Davenport MA, Takbar D, Laska JN, Sun T, Kelly KF, Baraniuk RG (2008) Single-pixel imaging via compressive sampling. IEEE Signal Process Mag 25:83–91 Shapiro08 Shapiro JH (2008) Computational ghost imaging. Phys Rev A 78:061802 IterativeGI Wang W, Wang YP, Li J, Yang X, Wu Y (2014) Iterative ghost imaging. Opt Lett 39:5150–5153 CSLowerBound Sarvotham S, Baron D, Baraniuk RG (2006) Measurements vs. bits: Compressed sensing meets information theory. In Allerton Conference on Communication, Control and Computing SNR02 Jain A, Moulin P, Miller MI, Ramchandran K (2002) Information-theoretic bounds on target recognition performance based on degraded image data. IEEE Trans Pattern Anal Mach Intell 24:1153–1166 SNR07 Neifeld MA, Ashok A, Baheti PK (2007) Task-specific information for imaging system analysis. J Opt Soc Am A 24:B25–B41 SNR08 Ashok A, Baheti PK, Neifeld MA (2008) Compressive imaging system design using task-specific information. Appl Opt 47:4457–4471 SNR11 Brida G, Chekhova MV, Fornaro GA, Genovese M, Lopaeva ED, Berchera IR (2011) Systematic analysis of signal-to-noise ratio in bipartite ghost imaging with classical and quantum light. Phys Rev A 83:063807 PseudothermalLight Martienssen W, Spiller E (1964) Coherence and fluctuations in light beams. Am J Phys 32:919–926 PNFC Chen H, Peng T, Shih YH (2013) 100% correlation of chaotic thermal light. Phys Rev A 88:023808 R2 Miles J (2014) R squared, adjusted R squared. 
Wiley StatsRef: Statistics Reference Online. CorrespondenceGI Luo KH, Huang BQ, Zheng WM, Wu LA (2012) Nonlocal imaging by conditional averaging of random reference measurements. Chin Phys Lett 29:074216
UdeM-GPP-TH-17-255; WSU-HEP-1703

akalok@iitj.ac.in Indian Institute of Technology Jodhpur, Jodhpur 342011, India
bhujyo@wayne.edu Department of Physics and Astronomy, Wayne State University, Detroit, MI 48201, USA
dinesh@phy.iitb.ac.in Indian Institute of Technology Bombay, Mumbai 400076, India; Department of Physics, University of Rajasthan, Jaipur 302004, India
jka@tifr.res.in Department of High Energy Physics, Tata Institute of Fundamental Research, 400 005, Mumbai, India
london@lps.umontreal.ca Physique des Particules, Université de Montréal, C.P. 6128, succ. centre-ville, Montréal, QC, Canada H3C 3J7
uma@phy.iitb.ac.in Indian Institute of Technology Bombay, Mumbai 400076, India

At present, there are several measurements of B decays that exhibit discrepancies with the predictions of the SM, and suggest the presence of new physics (NP) in b → s μ^+ μ^- transitions. Many NP models have been proposed as explanations. These involve the tree-level exchange of a leptoquark (LQ) or a flavor-changing Z' boson. In this paper we examine whether it is possible to distinguish the various models via CP-violating effects in B→ K^(*)μ^+μ^-. Using fits to the data, we find the following results. Of all possible LQ models, only three can explain the data, and these are all equivalent as far as b → s μ^+ μ^- processes are concerned. In this single LQ model, the weak phase of the coupling can be large, leading to some sizeable CP asymmetries in B→ K^(*)μ^+μ^-. There is a spectrum of Z' models; the key parameter is g_L^μμ, which describes the strength of the Z' coupling to μ^+μ^-. If g_L^μμ is small (large), the constraints from B^0_s-B̅^0_s mixing are stringent (weak), leading to a small (large) value of the NP weak phase, and correspondingly small (large) CP asymmetries.
We therefore find that the measurement of CP-violating asymmetries in B→ K^(*)μ^+μ^- can indeed distinguish among NP b → s μ^+ μ^- models.

New Physics in b → s μ^+ μ^-: Distinguishing Models through CP-Violating Effects S. Uma Sankar

§ INTRODUCTION At present, there are several measurements of B decays involving b → s μ^+ μ^- that suggest the presence of physics beyond the standard model (SM). These include * B → K^* μ^+μ^-: Measurements of B → K^* μ^+μ^- have been made by the LHCb <cit.> and Belle <cit.> Collaborations. They find results that deviate from the SM predictions. The main discrepancy is in the angular observable P'_5<cit.>. Its significance depends on the assumptions made regarding the theoretical hadronic uncertainties <cit.>. The latest fits to the data <cit.> take into account the hadronic uncertainties, and find that a significant discrepancy is still present, perhaps as large as ∼ 4σ. * B^0_s →ϕμ^+ μ^-: The LHCb Collaboration has measured the branching fraction and performed an angular analysis of B^0_s →ϕμ^+ μ^-<cit.>. They find a 3.5σ disagreement with the predictions of the SM, which are based on lattice QCD <cit.> and QCD sum rules <cit.>. * R_K: The ratio R_K ≡ B(B^+ → K^+ μ^+ μ^-)/ B(B^+ → K^+ e^+ e^-) has been measured by the LHCb Collaboration in the dilepton invariant mass-squared range 1 GeV^2≤ q^2 ≤ 6 GeV^2<cit.>, with the result R_K^expt = 0.745^+0.090_-0.074  (stat)± 0.036  (syst) .This differs from the SM prediction of R_K^SM = 1 ± 0.01<cit.> by 2.6σ, and thus is a hint of lepton flavor non-universality.While any suggestions of new physics (NP) are interesting, what is particularly intriguing about the above set of measurements is that they can all be explained if there is NP in b → s μ^+ μ^- [Early model-independent analyses of NP in b → s μ^+ μ^- can be found in Refs. <cit.> (CP-conserving observables) and <cit.> (CP-violating observables).].
To be specific, b → s μ^+ μ^- transitions are defined via the effective Hamiltonian H_ eff =- α G_F/√(2)π V_tb V_ts^* ∑_a = 9,10 ( C_a O_a + C'_a O'_a )  , O_9(10) =[ s̅γ_μ P_L b ] [ μ̅γ^μ (γ_5) μ ]  ,where the V_ij are elements of the Cabibbo-Kobayashi-Maskawa (CKM) matrix. The primed operators are obtained by replacing L with R, and the Wilson coefficients (WCs) C^(')_a include both SM and NP contributions. Global analyses of the b → s μ^+ μ^- anomalies have been performed <cit.>. It was found that there is a significant disagreement with the SM, possibly as large as 4σ, and it can be explained if there is NP in b → s μ^+ μ^-. Ref. <cit.> gave four possible explanations: (I) C_9^μμ( NP) < 0, (II) C_9^μμ( NP) = - C_10^μμ( NP) < 0, (III) C_9^μμ( NP) = - C_9^'μμ( NP) < 0, (IV) C_9^μμ( NP) = - C_10^μμ( NP) = -C_9^'μμ( NP) = - C_10^'μμ( NP) < 0.Numerous models have been proposed that generate the correct NP contribution to b → s μ^+ μ^- at tree level[The anomalies can also be explained using a scenario in which the NP enters in the b → c c̅ s transition, but constraints from radiative B decays and B^0_s-B̅^0_s mixing must be taken into account; see Ref. <cit.>.]. Most of them use solution (II) above, though a few use solution (I). These models can be separated into two categories: those containing leptoquarks (LQs) <cit.>, and those with a Z' boson <cit.>. But this raises an obvious question: assuming that there is indeed NP in , which model is the correct one? In other words, short of producing an actual LQ or Z' experimentally, is there any way of distinguishing the models?A first step was taken in Ref. <cit.>, where it was shown that the CP-conserving, lepton-flavor-violating decays Υ(3S) →μτ and τ→ 3μ are useful processes for differentiating between LQ and Z' models. In the present paper, we compare the predictions of the various models for CP-violating asymmetries in B → K^(*)μ^+μ^-.CP-violating effects require the interference of two amplitudes with a relative weak (CP-odd) phase.
(For certain CP-violating effects, a relative strong (CP-even) phase is also required.) In the SM, b → s μ^+ μ^- is dominated by a single amplitude, proportional to V_tb V_ts^* [see Eq. (<ref>)]. In order to generate CP-violating asymmetries, it is necessary that the NP contribution to b → s μ^+ μ^- have a sizeable weak phase. As we will see, this does not hold in all NP models, so that CP-violating asymmetries in B → K^(*)μ^+μ^- can be a powerful tool for distinguishing the models. (The usefulness of CP asymmetries for identifying NP was also discussed in Ref. <cit.>.)We perform both model-independent and model-dependent analyses. In the model-independent case, we assume that the NP contributes to a particular set of WCs (and we consider several different sets). But if a particular model is used, one can work out which WCs are affected. In either case, a fit to the data is performed to establish (i) whether a good fit is obtained, and (ii) what are the best-fit values and allowed ranges of the real and imaginary pieces of the WCs. In the case of a good fit, the predictions for CP-violating asymmetries in B → K^(*)μ^+μ^- are computed.The data used in the fits include all CP-conserving observables involving b → s μ^+ μ^- transitions. The processes are B^0 → K^*0 (→ K^+ π^-) μ^+ μ^-, B^+ → K^*+μ^+ μ^-, B^+ → K^+ μ^+ μ^-, B^0 → K^0 μ^+ μ^-, B^0_s →ϕμ^+ μ^-, B → X_s μ^+ μ^-, and B^0_s →μ^+ μ^-. For the first process, a complete angular analysis of B^0 → K^*0 (→ K^+ π^-) μ^+ μ^- was performed in Refs. <cit.>. It was shown that this decay is completely described in terms of twelve angular functions. By averaging over the angular distributions of B and B̅ decays, one obtains CP-conserving observables. There are nine of these. Most of the observables are measured in different q^2 bins, so that there are a total of 106 CP-conserving observables in the fit.For the model-independent fits, only the b → s μ^+ μ^- data is used. However, for the model-dependent analyses, additional data may be taken into account.
That is, in a specific model, there may be contributions to other processes, such as b → s νν̅, B^0_s-B̅^0_s mixing, etc. The choice of additional data is made on a model-by-model basis. Because the model-independent and model-dependent fits can involve different experimental (and theoretical) constraints, they may yield significantly different results.CP-violating asymmetries are obtained by comparing B and B̅ decays. In the case of B → K μ^+ μ^-, there is only the direct partial rate asymmetry. For B^0 → K^*0 (→ K^+ π^-) μ^+ μ^-, one compares the B and B̅ angular distributions. This leads to seven CP asymmetries. There are therefore a total of eight CP-violating effects that can potentially be used to distinguish among the NP b → s μ^+ μ^- models.For the LQs, we will show that there are three models that can explain the b → s μ^+ μ^- data. The LQs of these models contribute differently to b → s ν_μν̅_μ, so that, in principle, they can be distinguished by the measurements of b → s νν̅. However, the constraints from these measurements are far weaker than those from b → s μ^+ μ^-, so that all three LQ models are equivalent, as far as the b → s μ^+ μ^- data are concerned.We find that some CP asymmetries in B → K^(*)μ^+ μ^- can be large in this single LQ model.In Z' models, there are g_L^bss̅γ^μ P_L b Z'_μ and g_L^μμμ̅γ^μ P_L μ Z'_μ couplings, leading to a tree-level Z' contribution to b → s μ^+ μ^-. In order to explain the b → s μ^+ μ^- anomalies, the product of couplings g_L^bs g_L^μμ must lie within a certain (non-zero) range. If g_L^μμ is small, g_L^bs must be large, and vice-versa. The Z' also contributes at tree level to B^0_s-B̅^0_s mixing, proportional to (g_L^bs)^2. Measurements of the mixing constrain the magnitude and phase of g_L^bs. If g_L^bs is large, the constraint on its phase is significant, so that this Z' model cannot generate sizeable CP asymmetries.
On the other hand, if g_L^bs is small, the constraints from B^0_s-B̅^0_s mixing are not stringent, and large CP-violating effects are possible.The upshot is that it may be possible to differentiate Z' and LQ models, as well as different Z' models, through measurements of CP-violating asymmetries in B → K^(*)μ^+ μ^-.We begin in Sec. 2 with a description of our method for fitting the data and for making predictions about CP asymmetries. The b → s μ^+ μ^- data used in the fits are given in the Appendix. We perform a model-independent analysis in Sec. 3. In Sec. 4, we perform model-dependent fits in order to determine the general features of the LQ and Z' models that can explain the b → s μ^+ μ^- anomalies. We present the predictions of the various models for the CP asymmetries in Sec. 5. We conclude in Sec. 6.§ METHOD The method works as follows. We suppose that the NP contributes to a particular set of b → s μ^+ μ^- WCs. This can be done in a “model-independent” way, in the sense that no particular underlying NP model is assumed, or it can be done in the context of a specific NP model. In either case, all observables are written as functions of the WCs, which contain both SM and NP contributions.Given values of the WCs, we use flavio<cit.> to calculate the observables. By comparing the computed values of the observables with the data, the χ^2 can be found.The program MINUIT<cit.> is used to find the values of the WCs that minimize the χ^2. It is then possible to determine whether or not the chosen set of WCs provides a good fit to the data. This is repeated for different sets of b → s μ^+ μ^- WCs.We are interested in NP that leads to CP-violating effects in B → K^(*)μ^+ μ^-. As noted in the introduction, this requires that the NP contribution to b → s μ^+ μ^- have a weak phase.
With this in mind, we allow the NP WCs to be complex (other fits generally take the NP contributions to the WCs to be real), and determine the best-fit values of both the real and imaginary parts of the WCs.In the case where a particular NP model is assumed, the main theoretical parameters are the couplings of the NP particles to the SM fermions. At low energies, these generate four-fermion operators.The first step is therefore to determine which operators are generated in the NP model. This in turn establishes which observables are affected by the NP. The fit yields preferred values of the WCs, and these can be converted into preferred values for the real and imaginary parts of the couplings.We note that caution is needed as regards the results of the model-independent fits. In such fits it is assumed that the NP contributes to a particular set of WCs. One might think that the results will apply to all NP models that contribute to the same WCs. However, this is not true. The point is that a particular model may have additional theoretical or experimental constraints. When these are taken into account, the result of the fit might be quite different. That is, the “model-independent” fits do not necessarily apply to all models.Indeed, in the following sections we will see several examples of this.Finally, for those sets of WCs that provide good fits to the data, we compute the predictions for the CP-violating asymmetries in B → K^(*)μ^+ μ^-. §.§ Fit The χ^2 is a function of the WCs C_i, and is constructed as follows:χ^2(C_i) = (𝒪_th(C_i) -𝒪_exp)^T𝒞^-1(𝒪_th(C_i) -𝒪_exp)  .Here 𝒪_th(C_i) are the theoretical predictions for the various observables used as constraints. These predictions depend upon the WCs.𝒪_exp are the corresponding experimental measurements.We include all available theoretical and experimental correlations in our fit. The total covariance matrix 𝒞 is obtained by adding the individual theoretical and experimental covariance matrices, respectively 𝒞_th and 𝒞_exp.
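The χ^2 construction above, with the combined covariance 𝒞 = 𝒞_th + 𝒞_exp, can be sketched as follows (the observable vectors and covariance matrices here are placeholders, not the fit inputs):

```python
import numpy as np

def chi2(obs_th, obs_exp, cov_th, cov_exp):
    """chi^2 = r^T C^{-1} r, with r = O_th - O_exp and C = C_th + C_exp."""
    r = np.asarray(obs_th, float) - np.asarray(obs_exp, float)
    cov = np.asarray(cov_th, float) + np.asarray(cov_exp, float)
    # Solve C x = r instead of forming C^{-1} explicitly (better conditioned).
    return float(r @ np.linalg.solve(cov, r))
```

For uncorrelated observables this reduces to the familiar sum of squared pulls, Σ_i (O_th,i − O_exp,i)^2/σ_i^2.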
The theoretical covariance matrix is obtained by randomly generating all input parameters and then calculating the observables for these sets of inputs <cit.>. The uncertainty is then defined by the standard deviation of the resulting spread in the observable values. In this way the correlations are generated among the various observables that share some common parameters <cit.>. Note that we have assumed 𝒞_th to be independent of the WCs. This implies that we take the SM covariance matrix to construct the χ^2 function. As far as experimental correlations are concerned, these are only available (bin by bin) among the angular observables in B → K^(*)μ^+ μ^- <cit.>, and among the angular observables in B_s →ϕμ^+ μ^- <cit.>. For χ^2 minimization, we use the MINUIT library <cit.>. The errors on the individual parameters are defined as the change in the values of the parameters that modifies the value of the χ^2 function such that Δχ^2 = χ^2 - χ^2_min = 1. However, to obtain the 68.3% and 95% CL 2-parameter regions, we use Δχ^2 equal to 2.3 and 6.0, respectively <cit.>. The fit includes all CP-conserving b → s μ^+ μ^- observables. These are * B^0 → K^*0μ^+ μ^-: The CP-averaged differential angular distribution for B^0 → K^*0 (→ K^+ π^-) μ^+ μ^- can be derived using Refs. <cit.>; it is given by <cit.> 1/[d(Γ + Γ̅)/dq^2] · d^4(Γ + Γ̅)/(dq^2 dΩ⃗) = 9/32π [ 3/4 (1 - F_L) sin^2 θ_K^* + F_L cos^2 θ_K^* + 1/4 (1 - F_L) sin^2 θ_K^* cos 2θ_ℓ - F_L cos^2 θ_K^* cos 2θ_ℓ + S_3 sin^2 θ_K^* sin^2 θ_ℓ cos 2ϕ + S_4 sin 2θ_K^* sin 2θ_ℓ cos ϕ + S_5 sin 2θ_K^* sin θ_ℓ cos ϕ + 4/3 A_FB sin^2 θ_K^* cos θ_ℓ + S_7 sin 2θ_K^* sin θ_ℓ sin ϕ + S_8 sin 2θ_K^* sin 2θ_ℓ sin ϕ + S_9 sin^2 θ_K^* sin^2 θ_ℓ sin 2ϕ ] . Here q^2 represents the invariant mass squared of the dimuon system, and Ω⃗ represents the solid angle constructed from θ_ℓ, θ_K^*, and ϕ. There are therefore nine observables in the decay: the differential branching ratio, F_L, A_FB, S_3, S_4, S_5, S_7, S_8 and S_9, all measured in various q^2 bins.
The experimental measurements are given in Tables <ref> and <ref> in the Appendix. In the introduction it was mentioned that the main discrepancy with the SM is in the angular observable P'_5. This is defined as <cit.> P'_5 = S_5/√(F_L (1 - F_L)) . * The differential branching ratio of B^+ → K^*+μ^+ μ^-. The experimental measurements <cit.> are given in Table <ref> in the Appendix. * The differential branching ratio of B^+ → K^+ μ^+ μ^-. The experimental measurements <cit.> are given in Table <ref> in the Appendix. When integrated over q^2, this provides the numerator in R_K ≡ B(B^+ → K^+ μ^+ μ^-)/B(B^+ → K^+ e^+ e^-). Thus, the measurement of R_K [Eq. (<ref>)] is implicitly included here[Previous studies (Ref. <cit.> and references therein) have indicated that the R_K anomaly can be accommodated side-by-side with several other anomalies in b → s μ^+μ^- if new physics only affects transitions involving muons. Following this lead, in this paper we therefore study models that modify the b → s μ^+μ^- transition while leaving the b → s e^+ e^- decays unchanged.]. * The differential branching ratio of B^0 → K^0 μ^+ μ^-. The experimental measurements <cit.> are given in Table <ref> in the Appendix. * B_s →ϕμ^+ μ^-: The experimental measurements of the differential branching ratio and the angular observables <cit.> are given respectively in Tables <ref> and <ref> in the Appendix. * The differential branching ratio of B → X_s μ^+ μ^-. The experimental measurements <cit.> are given in Table <ref> in the Appendix. * BR(B_s →μ^+ μ^-) = (2.9 ± 0.7) × 10^-9 <cit.>. In computing the theoretical predictions for the above observables, we note the following: * For B → K^* μ^+ μ^- and B_s →ϕμ^+ μ^-, we use the form factors from the combined fit to lattice and light-cone sum rules (LCSR) calculations <cit.>. These calculations are applicable to the full q^2 kinematic region. In the LCSR calculations the full error correlation matrix is used, which is useful to avoid an overestimate of the uncertainties.
* In B → K μ^+ μ^-, we use the form factors from lattice QCD calculations <cit.>, in which the main sources of uncertainty are from the chiral-continuum extrapolation and the extrapolation to low q^2. In order to cover the entire kinematically-allowed range of q^2, we use the model-independent z expansion given in Ref. <cit.>. * The decay B_s →ϕμ^+ μ^- has special characteristics, namely (i) there can be (time-dependent) indirect CP-violating effects, and (ii) the B_s - B̅_s width difference, ΔΓ_s, is non-negligible. These must be taken into account in deriving the angular distribution, see Ref. <cit.>. In flavio <cit.>, the width difference is taken into account, but all observables correspond to time-integrated ones (so no indirect CP violation). * In the calculation of the branching ratio of the inclusive decay B → X_s μ^+ μ^-, the dominant perturbative contributions are calculated up to NNLO precision following Refs. <cit.>. The above observables are used in all fits. However, a particular model may receive further constraints from its contributions to other observables, such as b → s νν̄, B_s - B̅_s mixing, etc. These additional constraints will be discussed when we describe the model-dependent fits. §.§ Predictions Eq. (<ref>) applies to B^0 → K^*0μ^+ μ^- decays. Here the seven angular observables S_3, S_4, S_5, A_FB, S_7, S_8 and S_9 are obtained by averaging the angular distributions of B and B̅ decays. However, one can also consider the difference between B and B̅ decays. This leads to seven angular asymmetries: A_3, A_4, A_5, A_6^s, A_7, A_8 and A_9 <cit.>. For B → K μ^+ μ^-, there is only the partial rate asymmetry A_ CP. In general, there are two categories of CP asymmetries. Suppose the two interfering amplitudes are A_ SM = a_1 e^i ϕ_1 e^i δ_1 and A_ NP = a_2 e^i ϕ_2 e^i δ_2, where the a_i are the magnitudes, the ϕ_i the weak phases and the δ_i the strong phases. Direct CP asymmetries involving rates are proportional to sin (ϕ_1 - ϕ_2) sin (δ_1 - δ_2).
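This proportionality can be checked numerically with a toy two-amplitude model (all magnitudes and phases below are illustrative, not fitted values): under CP conjugation the weak phases ϕ_i flip sign while the strong phases δ_i do not, and the B-vs-B̅ rate difference comes out proportional to sin(ϕ_1 - ϕ_2) sin(δ_1 - δ_2).

```python
import cmath
import math

def rate(a1, phi1, d1, a2, phi2, d2, conjugate=False):
    """|A|^2 for A = a1 e^{i(phi1 + d1)} + a2 e^{i(phi2 + d2)}.
    Under CP conjugation the weak phases phi_i flip sign; the strong
    phases d_i do not."""
    s = -1.0 if conjugate else 1.0
    A = (a1 * cmath.exp(1j * (s * phi1 + d1))
         + a2 * cmath.exp(1j * (s * phi2 + d2)))
    return abs(A) ** 2

# Illustrative inputs: SM-like amplitude plus a smaller NP amplitude.
a1, phi1, d1 = 1.0, 0.0, 0.1
a2, phi2, d2 = 0.3, 0.7, 0.9

diff = rate(a1, phi1, d1, a2, phi2, d2) - rate(a1, phi1, d1, a2, phi2, d2, True)
expected = -4.0 * a1 * a2 * math.sin(phi1 - phi2) * math.sin(d1 - d2)
print(diff, expected)  # the two agree to machine precision
```

Setting δ_1 = δ_2 makes `diff` vanish even for a large weak-phase difference, which is exactly why rate asymmetries need a strong-phase difference while triple-product asymmetries do not.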
On the other hand, CP asymmetries involving T-odd triple products of the form p⃗_i · (p⃗_j ×p⃗_k) are proportional to sin (ϕ_1 - ϕ_2) cos (δ_1 - δ_2). Both types of CP asymmetry are nonzero only if the interfering amplitudes have different weak phases, but the direct CP asymmetry requires in addition a nonzero strong-phase difference. In the SM, the weak phase (= arg(V_tb V_ts^*)) and strong phases are all rather small, and the NP strong phase is negligible <cit.>. From this, we deduce that (i) large CP asymmetries are possible only if the NP weak phase is sizeable, and (ii) triple product CP asymmetries are the most promising for seeing NP since they do not require large strong phases. In order to compute the predictions for the CP asymmetries, we proceed as follows. As noted above, we start by assuming that the NP contributes to a particular set of WCs. We then perform fits to determine whether this set of WCs is consistent with all experimental data. In the case of a model-independent fit, the data involve only b → s μ^+ μ^- observables; a model-dependent fit may involve additional observables. We determine the values of the real and imaginary parts of the WCs that minimize the χ^2. In the case of a good fit, we then use these WCs to predict the values of the CP-violating asymmetries A_3-A_9 in B^0 → K^*0μ^+ μ^- and A_ CP in B → K μ^+ μ^-. In Ref. <cit.>, it was noted that A_3, A_4, A_5 and A_6^s are direct CP asymmetries, while A_7, A_8 and A_9 are triple product CP asymmetries. Furthermore, A_7 is very sensitive to the phase of C_10. We therefore expect that, if NP reveals itself through CP-violating effects in B → K^(*)μ^+ μ^-, it will most likely be in A_7-A_9, with A_7 being particularly promising. § MODEL-INDEPENDENT RESULTS In Refs. <cit.>, global analyses of the b → s μ^+ μ^- anomalies were performed. It was found that there is a significant disagreement with the SM, possibly as large as 4σ, and that it can be explained if there is NP in b → s μ^+ μ^-. Ref.
<cit.> offered four possible explanations, each having roughly equal goodness-of-fit: (I) C_9^μμ( NP) < 0 , (II) C_9^μμ( NP) = - C_10^μμ( NP) < 0 , (III) C_9^μμ( NP) = - C_9^'μμ( NP) < 0 , (IV) C_9^μμ( NP) = - C_10^μμ( NP) = - C_9^'μμ( NP) = - C_10^'μμ( NP) < 0 . In this section we apply our method to these four scenarios. There are several reasons for doing this. First, we want to confirm independently that, if the NP contributes to these sets of WCs, a good fit to the data is obtained. Note also that the above solutions were found assuming the WCs to be real. Since we allow for complex WCs, there may potentially be differences. Second, the main idea of the paper is that CP-violating observables can be used to distinguish the various NP models. We can test this hypothesis with scenarios I-IV. Finally, it will be useful to compare the model-independent and model-dependent fits. §.§ Fits The four scenarios are model-independent, so the fit includes only the b → s μ^+ μ^- observables. The results are shown in Table <ref>. In scenarios II and III, there are two best-fit solutions, labeled (A) and (B). In both cases, the two solutions have similar best-fit values for Re(WC), but opposite signs for the best-fit values of Im(WC). In all cases, we obtain good fits to the data. The pulls are all ≥ 4, indicating a significant improvement over the SM. Indeed, our results agree entirely with those of Ref. <cit.>. §.§ CP asymmetries: predictions For each of the four scenarios, the allowed values of Re(WC) and Im(WC) are shown in Fig. <ref>. In all cases, Im(WC) is consistent with 0, but large non-zero values are still allowed. Should this be the case, significant CP-violating asymmetries in B → K^(*)μ^+ μ^- can be generated. To illustrate this, for each of the four scenarios, we compute the predicted values of the CP asymmetries A_7, A_9 and A_8 in B^0 → K^*0μ^+ μ^-. The results are shown in Fig. <ref>. From these plots, one sees that, in principle, one can distinguish all scenarios.
If a large A_7 asymmetry is observed, this indicates scenario II, and one can differentiate solutions (A) and (B). A large A_9 asymmetry at low q^2 indicates scenario IV, while a large A_9 asymmetry at high q^2 indicates scenario III (here solutions (A) and (B) can be differentiated). Finally, if no A_7 or A_9 asymmetries are observed, but a sizeable A_8 asymmetry is seen at low q^2, this would be due to scenario I. This then confirms the hypothesis that CP-violating observables can potentially be used to distinguish the various NP models proposed to explain the b → s μ^+ μ^- anomalies. This said, one must be careful not to read too much into the model-independent results. If NP is present in b → s μ^+ μ^- decays, it is due to a specific model. And this model may have other constraints, either theoretical or experimental, that may significantly change the predictions. That is, since the model-independent fits have the fewest constraints, the CP-violating effects shown in Fig. <ref> are the largest possible. In a particular model, there may be additional constraints, which will reduce the predicted sizes of the CP asymmetries. For this reason, while a model-independent analysis is useful to get a general idea of what is possible, real predictions require a model-dependent analysis. We turn to this in the following sections. § MODEL-DEPENDENT FITS Many models have been proposed to explain the b → s μ^+ μ^- anomalies, of both the LQ <cit.> and Z' <cit.> variety. Rather than considering each model individually, in this section we perform general analyses of the two types of models. The aim is to answer two questions. First, what properties must a model have in order to provide a good fit to the b → s μ^+ μ^- data? Second, which of these good-fit models can also generate sizeable CP-violating asymmetries in B → K^(*)μ^+ μ^-? We separately examine LQ and Z' models. §.§ Leptoquarks The list of all possible LQ models that couple to SM particles through dimension ≤ 4 operators can be found in Ref.
There are five spin-0 and five spin-1 LQs, denoted Δ and V respectively, with couplings L_Δ = ( y_ℓ u ℓ̅_L u_R + y_eq e̅_R i τ_2 q_L ) Δ_-7/6 + y_ℓ d ℓ̅_L d_R Δ_-1/6 + ( y_ℓ q ℓ̅^c_L i τ_2 q_L + y_eu e̅^c_R u_R ) Δ_1/3 + y_ed e̅^c_R d_R Δ_4/3 + y'_ℓ q ℓ̅^c_L i τ_2 τ⃗ q_L ·Δ⃗'_1/3 + h.c. L_V = (g_ℓ q ℓ̅_L γ_μ q_L + g_ed e̅_R γ_μ d_R) V^μ_-2/3 + g_eu e̅_R γ_μ u_R V^μ_-5/3 + g'_ℓ q ℓ̅_L γ_μτ⃗ q_L ·V⃗^'μ_-2/3 + (g_ℓ d ℓ̅_L γ_μ d_R^c + g_eq e̅_R γ_μ q^c_L) V^μ_-5/6 + g_ℓ u ℓ̅_L γ_μ u_R^c V^μ_1/6 + h.c. In the fermion currents and in the subscripts of the couplings, q and ℓ represent left-handed quark and lepton SU(2)_L doublets, respectively, while u, d and e represent right-handed up-type quark, down-type quark and charged lepton SU(2)_L singlets, respectively. The LQs transform as follows under SU(3)_c × SU(2)_L × U(1)_Y: Δ_-7/6 : (3̅, 2, -7/6) , Δ_-1/6 : (3̅, 2, -1/6) , Δ_1/3 : (3̅, 1, 1/3) , Δ_4/3 : (3̅, 1, 4/3) , Δ⃗'_1/3 : (3̅, 3, 1/3) , V^μ_-2/3 : (3̅, 1, -2/3) , V^μ_-5/3 : (3̅, 1, -5/3) , V⃗^'μ_-2/3 : (3̅, 3, -2/3) , V^μ_-5/6 : (3̅, 2, -5/6) , V^μ_1/6 : (3̅, 2, 1/6) . Note that here the hypercharge is defined as Y = Q_em - I_3. In Eq. (<ref>), the LQs can couple to fermions of any generation. To specify which particular fermions are involved, we add superscripts to the couplings. For example, g^'μ s_ℓ q is the coupling of the V⃗^'μ_-2/3 LQ to a left-handed μ (or ν_μ) and a left-handed s. Similarly, y_e q^μ b is the coupling of the Δ_-7/6 LQ to a right-handed μ and a left-handed b. These couplings are relevant for b → s μ^+ μ^- (and possibly b → s νν̄). Note that the V^μ_-5/3 and V^μ_1/6 LQs do not contribute to b → s μ^+ μ^-. A number of these LQs, and their effects on b → s μ^+ μ^- and other decays, have been analyzed separately. For example, in Ref. <cit.>, it was pointed out that four LQs can contribute to B̅→ D^(*)+τ^- ν̅_τ. They are: a scalar isosinglet with Y = 1/3, a scalar isotriplet with Y = 1/3, a vector isosinglet with Y = -2/3, and a vector isotriplet with Y = -2/3.
These are respectively Δ_1/3, Δ⃗'_1/3, V^μ_-2/3 and V⃗^'μ_-2/3. In Ref. <cit.>, they are called S_1, S_3, U_1 and U_3, respectively, and we adopt this nomenclature below. The S_3 LQ has been studied in the context of b → s μ^+ μ^- in Refs. <cit.>. U_1 has been examined in Refs. <cit.>. In Ref. <cit.>, the U_3 LQ was proposed as an explanation of the b → s μ^+ μ^- anomalies. Finally, in Refs. <cit.> it was claimed that the tree-level exchange of a Δ_-1/6 LQ can account for the b → s μ^+ μ^- results. There are therefore quite a few LQ models that contribute to b → s μ^+ μ^-, several of which have been proposed as explanations of the B-decay anomalies. We would like to have a definitive answer to the following question: which of the LQs in Eq. (<ref>) can actually explain the b → s μ^+ μ^- anomalies? Rather than rely on previous work, we perform an independent analysis ourselves. §.§.§ LQ fits The difference between model-independent and model-dependent fits is that, within a particular model, there may be contributions to new observables and/or new operators, and this must be taken into account in the fit. In the case of LQ models, the LQs contribute to a variety of operators. In addition to O^(')_9,10 [Eq. (<ref>)], there may be contributions to O^(')_ν = [ s̅γ_μ P_L(R) b ] [ ν̅_μγ^μ (1 - γ_5) ν_μ ] , O^(')_S = [ s̅ P_R(L) b ] [ μ̅μ ] , O^(')_P = [ s̅ P_R(L) b ] [ μ̅γ_5 μ ] . O^(')_ν contributes to b → s ν_μν̅_μ, while O^(')_S and O^(')_P are additional contributions to B_s →μ^+ μ^-. Based on the couplings in Eq. (<ref>), it is straightforward to work out which Wilson coefficients are affected by each LQ. These are shown in Table <ref> <cit.>. Although the scalar LQs do not contribute to O^(')_S,P, some vector LQs do. For these we have C_P^μμ( NP) = -C_S^μμ( NP) and C_P^'μμ( NP) = C_S^'μμ( NP). There are several observations one can make from this Table. First, not all of the LQs contribute to b → s μ^+ μ^-: Δ_1/3 contributes only to b → s ν_μν̅_μ. Second, U_1 has two couplings, g_ℓ q and g_e d.
If both are allowed simultaneously, scalar operators are generated, and these can also contribute to B_s →μ^+ μ^-. This must be taken into account in the model-dependent fits. The situation is similar for V^μ_-5/6. Finally, the S_3 and U_3 LQs both have C_9^μμ( NP) = -C_10^μμ( NP); they are differentiated only by their contributions to C_ν^μμ( NP). At this stage, we can perform model-dependent fits to determine which of the LQ models can explain the data. First of all, the SM alone does not provide a good fit. We find, for 106 degrees of freedom, that χ^2_SM/d.o.f. = 1.34 , p-value = 0.01 . We therefore confirm that the b → s μ^+ μ^- anomalies suggest the presence of NP. For the scalar LQs, the results of the fits using only the b → s μ^+ μ^- data are shown in Table <ref> (we address the b → s νν̄ data below). For the S_3 LQ, there are two best-fit solutions, labeled (A) and (B). (The two solutions have the same best-fit values for Re(coupling), but opposite signs for the best-fit values of Im(coupling).) From this Table, we see that only the S_3 LQ provides an acceptable fit to the data. Despite the claims of Refs. <cit.>, the Δ_-1/6 LQ does not explain the b → s μ^+ μ^- anomalies. The vector LQs are more complicated because the U_1 and V^μ_-5/6 LQs each have two couplings. The U_1 case, where the two couplings are g_ℓ q and g_e d, is particularly interesting. If g_e d^ij = 0, we have C_9^μμ( NP) = - C_10^μμ( NP), like the S_3 and U_3 LQs. (Recall that we found that S_3 can explain the b → s μ^+ μ^- anomalies.) And if g_e d^μ b (g_e d^μ s)^* = - g_ℓ q^μ b (g_ℓ q^μ s)^*, we have C_9^μμ( NP) = - C_10^μμ( NP) = -C_9^'μμ( NP) = - C_10^'μμ( NP), which is scenario IV of Eq. (<ref>), and is also found to explain the anomalies. To explore the U_1 model fully, we perform three fits. Fit (1) has g_e d^ij = 0, fit (2) has g_e d^μ b = g_ℓ q^μ b and g_e d^μ s = - g_ℓ q^μ s (which gives g_e d^μ b (g_e d^μ s)^* = - g_ℓ q^μ b (g_ℓ q^μ s)^*), and fit (3) allows the g_e d^ij to be free.
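The SM goodness-of-fit quoted above (χ^2_SM/d.o.f. = 1.34 for 106 d.o.f., p-value ≈ 0.01) can be cross-checked numerically. A rough sketch using the Wilson-Hilferty normal approximation to the χ^2 tail probability (an approximation, not the exact tail integral a statistics package would use):

```python
import math

def chi2_pvalue_wh(chi2, dof):
    """Right-tail p-value of a chi^2 distribution via the Wilson-Hilferty
    normal approximation (accurate to roughly the percent level for large dof)."""
    z = (((chi2 / dof) ** (1.0 / 3.0)) - (1.0 - 2.0 / (9.0 * dof))) \
        / math.sqrt(2.0 / (9.0 * dof))
    return 0.5 * math.erfc(z / math.sqrt(2.0))

print(chi2_pvalue_wh(1.34 * 106, 106))  # ~0.01, consistent with the text
```

A χ^2 equal to its number of degrees of freedom gives p ≈ 0.5, as it should, which is a quick sanity check on the approximation.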
For the V^μ_-5/6 LQ, here too we can allow all couplings to vary, but for simplicity we set g_ℓ d^ij = 0. However, we have checked that, even if we vary all the couplings, this model does not provide a good fit. Regarding fit (3), a few comments are useful. Although we allow all couplings to vary, the constraints apply only to products of couplings. This allows some freedom: the magnitude of g_ℓ q^μ s does not affect the best-fit values of the WCs, so we simply set it to 1. Also, in order to avoid problems with correlations in the fits, we set g_ℓ q^μ s and g_ed^μ s to fixed real values. Finally, in Ref. <cit.> it was found that the global fit requires C_S^μμ( NP) ≪ C_9^μμ( NP), i.e., g_ed^μ s/g_ℓ q^μ s≪ 1. We have found that g_ed^μ s/g_ℓ q^μ s≃ 0.02 leads to a fit with a pull of around 4. The results of the fits are shown in Table <ref>. There are several notable features: * We see that the b → s μ^+ μ^- anomalies can be explained with the U_1 LQ [fit (1)] and the U_3 LQ. Like the S_3 LQ, they have C_9^μμ( NP) = - C_10^μμ( NP). Indeed, because only b → s μ^+ μ^- data were used in the fits, the fit results are identical for all three LQ models. * A good fit is also found with the U_1 LQ [fit (3)]. However, the best-fit solution has g_e d^μ b≃ 0, so that this is essentially the same as the U_1 LQ [fit (1)]. * The U_1 LQ model [fit (2)] has been constructed to satisfy C_9^μμ( NP) = - C_10^μμ( NP) = -C_9^'μμ( NP) = - C_10^'μμ( NP). Despite this, the model does not provide a good fit to the b → s μ^+ μ^- data. The reason is that, in this model, there are also important contributions to the scalar operators of Eq. (<ref>). However, the measurement of B_s →μ^+μ^- puts strong constraints on such contributions. The result is that one cannot explain the anomalies in B → K^* μ^+μ^-, B_s →ϕμ^+ μ^- and R_K while simultaneously agreeing with the measurement of B_s →μ^+μ^-. This provides an explicit example of how the “model-independent” results of Eq. (<ref>) do not necessarily apply to particular models.
* The V^μ_-5/6 LQ model does not provide a good fit to the b → s μ^+ μ^- data. We therefore see that, of all the scalar and vector LQ models, only S_3, U_1 and U_3 can explain the b → s μ^+ μ^- anomalies. Furthermore, within the context of b → s μ^+ μ^- processes, the models are equivalent, since they all have C_9^μμ( NP) = - C_10^μμ( NP). Finally, recall that the aim of this analysis is to differentiate different NP models through measurements of CP-violating asymmetries in B → K^(*)μ^+ μ^-. As noted in the introduction, such CP asymmetries can be sizeable only if there is a significant NP weak phase. For the LQ model, we see from Table <ref> that the real and imaginary parts of the coupling are of similar sizes. The NP weak phase is therefore not small, so that large CP asymmetries can be expected. §.§.§ b → s νν̄ Above, we have argued that the S_3, U_1 and U_3 LQ models are equivalent. However, from Table <ref>, note that the three LQs contribute differently to C_ν^μμ( NP), the WC associated with O_ν, the operator responsible for b → s ν_μν̅_μ. To be specific, the S_3 and U_3 LQs have C_ν^μμ( NP) = 1/2 C_9^μμ( NP) and C_ν^μμ( NP) = 2 C_9^μμ( NP), respectively, while the U_1 LQ has C_ν^μμ( NP) = 0. This means that, for S_3 and U_3, constraints on C_ν^μμ( NP) translate into additional constraints on C_9^μμ( NP). This then raises the question: could these three LQ solutions be distinguished by the b → s νν̄ data? The effective Hamiltonian relevant for b → s νν̄ is <cit.> H_ eff = - α G_F/π V_tb V_ts^* ∑_ℓ C_L^ℓ (s̅γ_μ P_L b) (ν̅_ℓγ^μ (1-γ_5)ν_ℓ) . The WC contains both the SM and NP contributions: C_L^ℓ = C_L^ SM + C_ν^ℓℓ( NP); it allows for NP that is lepton flavor non-universal. This is appropriate to the present case, as the LQs have only a nonzero C_ν^μμ( NP). The SM WC is C_L^ SM = - X_t/s_W^2 , where s_W ≡sinθ_W and X_t = 1.469 ± 0.017. The latest B → K^(*)νν̅ measurements yield <cit.> B(B → K νν̅) < 1.6 × 10^-5 , B(B → K^* νν̅) < 2.7 × 10^-5 . In Ref.
<cit.>, the SM predictions for these decays were computed: B(B → K νν̅)|_SM = (3.98 ± 0.43 ± 0.19) × 10^-6 , B(B → K^* νν̅)|_SM = (9.19 ± 0.86 ± 0.50) × 10^-6 . We define R^ν_K ≡ B(B → K νν̅)/B_SM(B → K νν̅) , R^ν_K^*≡ B(B → K^* νν̅)/B_SM(B → K^* νν̅) (the superscript ν distinguishes these ratios from the lepton-universality ratio R_K). Using Eqs. (<ref>) and (<ref>), we obtain R^ν_K < 4.0 , R^ν_K^* < 2.9 . From Ref. <cit.>, R^ν_K and R^ν_K^* can be written as R^ν_K = R^ν_K^* = 2/3 + 1/3 |C_L^SM + C_ν^μμ( NP)|^2/|C_L^SM|^2 = 1 + 2/3 Re(C_ν^μμ( NP)/C_L^SM) + 1/3 |C_ν^μμ( NP)/C_L^SM|^2 . Since C_ν^μμ( NP) is proportional to C_9^μμ( NP), and since |C_9^μμ( NP)| = O(1) (see Table <ref>, scenario II), the b → s μ^+ μ^- data imply that |C_ν^μμ( NP)| is also O(1). Can the b → s νν̄ data provide competitive constraints on |C_ν^μμ( NP)|? Using the R^ν_K^* bound of Eq. (<ref>) (since it is stronger), and neglecting Im(C_ν^μμ( NP)) in Eq. (<ref>), we obtain -10.1 < Re(C_ν^μμ( NP)) < 22.8 . The above limit is significantly weaker than the result |C_ν^μμ( NP)| = O(1) coming from the fit to the b → s μ^+ μ^- data. We therefore conclude that the b → s νν̄ data cannot be used to distinguish the S_3, U_1 and U_3 LQs. Note that this conclusion may not hold if the LQs also couple to other leptons. For example, in Ref. <cit.> it was assumed that the LQs couple to (ν_τ, τ^-)_L in the gauge basis, and that couplings to (ν_μ, μ^-)_L are generated only when one transforms to the mass basis. In this case, the LQs contribute not only to b → s ν_μν̅_μ, but also to b → s ν_τν̅_τ, which can alter the above analysis. Indeed, in Ref. <cit.> it is found that constraints from b → s νν̄ are important in the comparison of the S_3, U_1 and U_3 LQs. §.§ Z' bosons Perhaps the most obvious candidate for a NP contribution to b → s μ^+ μ^- is the tree-level exchange of a Z' boson with a flavor-changing coupling s̅γ^μ P_L b Z'_μ. Given that it couples to two left-handed doublets, the Z' must transform as a singlet or triplet of SU(2)_L. The triplet option has been examined in Refs. <cit.>.
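Returning briefly to the b → s νν̄ bound derived above: the quoted range for Re(C_ν^μμ(NP)) follows from solving the quadratic R = 2.9 in x = C_ν/C_L^SM with the imaginary part neglected. A sketch of the arithmetic, assuming s_W^2 ≈ 0.2313 (the text does not state the value used, so the endpoints may differ slightly):

```python
import math

# Bound on Re(C_nu) from R^nu_{K*} < 2.9, neglecting Im(C_nu).
# Assumed input (not quoted explicitly in the text): sin^2(theta_W) = 0.2313.
X_t, sW2 = 1.469, 0.2313
C_L_SM = -X_t / sW2                       # SM Wilson coefficient, ~ -6.35

# R = 1 + (2/3) x + (1/3) x^2 with x = C_nu / C_L_SM; R = 2.9 gives
# x^2 + 2 x - 5.7 = 0.
roots = (-1.0 - math.sqrt(1.0 + 5.7), -1.0 + math.sqrt(1.0 + 5.7))
bounds = sorted(r * C_L_SM for r in roots)  # C_L_SM < 0 flips the interval
print(bounds)  # close to (-10.1, 22.8), as quoted in the text
```

The asymmetry of the interval reflects the sign of C_L^SM: a NP contribution that interferes destructively with the SM can be much larger before violating the rate bound.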
(In this case, there is also a W' that can contribute to B̅→ D^(*)+τ^- ν̅_τ <cit.>, another decay whose measurement exhibits a discrepancy with the SM <cit.>.) If the Z' is a singlet of SU(2)_L, it must be the gauge boson associated with an extra U(1)'. Numerous models of this type have been proposed, see Refs. <cit.>. The vast majority of these Z' models use scenario II of Eq. (<ref>): C_9^μμ( NP) = - C_10^μμ( NP). Thus, although the underlying details of these models are different, in all cases we can write Δ L_Z' = J^μ Z'_μ , where J^μ = g_L^μμ L̅γ^μ P_L L + g_L^bs ψ̅_q2γ^μ P_L ψ_q3 + h.c. Here ψ_qi is the quark doublet of the i^th generation, and L = (ν_μ , μ)^T. When the heavy Z' is integrated out, we obtain the following effective Lagrangian containing 4-fermion operators: L_Z'^eff = - J_μ J^μ/(2 M_Z'^2) ⊃ - g_L^bs g_L^μμ/M_Z'^2 (s̅γ^μ P_L b) (μ̅γ^μ P_L μ) - (g_L^bs)^2/2 M_Z'^2 (s̅γ^μ P_L b) (s̅γ^μ P_L b) - (g_L^μμ)^2/M_Z'^2 (μ̅γ^μ P_L μ) (ν̅_μγ^μ P_L ν_μ) . The first 4-fermion operator is relevant for b → s μ^+ μ^- transitions, the second operator contributes to B_s - B̅_s mixing, and the third operator contributes to neutrino trident production. Note that g_L^μμ must be real, since the leptonic current of Eq. (<ref>) is self-conjugate. However, g_L^bs can be complex, i.e., it can contain a weak phase. This phase can potentially lead to CP-violating effects in B → K^(*)μ^+ μ^- via the first 4-fermion operator of Eq. (<ref>). The question is: how large can this NP weak phase be? This question is addressed in this subsection by considering constraints from b → s μ^+ μ^-, B_s - B̅_s mixing, and neutrino trident production. For b → s μ^+ μ^-, we have C_9^μμ( NP) = -C_10^μμ( NP) = -[ π/√(2) G_F α V_tb V_ts^* ] g_L^bs g_L^μμ/M_Z'^2 .
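To get a feel for the scales in the last equation, one can invert it for the flavor-changing coupling needed to generate an O(1) Wilson coefficient. A sketch with assumed illustrative inputs (α ≈ 1/128 and |V_tb V_ts^*| ≈ 0.0405 are my choices for this estimate, not values quoted in the text):

```python
import math

# Invert C_9(NP) = -[pi / (sqrt(2) G_F alpha |V_tb V_ts*|)] g_bs g_mumu / M_Zp^2
# for g_bs, taking all quantities real for the estimate.
G_F, alpha, Vtbts = 1.1663787e-5, 1.0 / 128.0, 0.0405   # G_F in GeV^-2 (assumed inputs)
M_Zp, g_mumu, C9_target = 1000.0, 1.0, -1.1             # M_Z' in GeV, illustrative

prefactor = math.pi / (math.sqrt(2.0) * G_F * alpha * Vtbts)   # ~6e8 GeV^2
g_bs = -C9_target * M_Zp ** 2 / (prefactor * g_mumu)
print(g_bs)  # ~2e-3: a tiny flavor-changing coupling suffices for an O(1) WC
```

The smallness of g_L^bs for large g_L^μμ is exactly why the B_s - B̅_s mixing constraints discussed next become weak in that limit.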
Turning to B_s - B̅_s mixing, the SM contribution arises from a box diagram, and is given by N C_VLL^ SM (s̅_L γ^μ b_L) (s̅_L γ_μ b_L) , where N = G_F^2 m_W^2/16π^2 (V_tb V_ts^*)^2 , C_VLL^ SM = η_B_s x_t [ 1 + 9/1-x_t - 6/(1-x_t)^2 - 6 x_t^2 ln x_t/(1-x_t)^3 ] . Here x_t ≡ m_t^2/m_W^2 and η_B_s = 0.551 is the QCD correction <cit.>. Combining the SM and NP contributions, we define N C_VLL≡ |N C_VLL^ SM| e^-2 i β_s + (g_L^bs)^2/2 M_Z'^2 , where -β_s = arg(V_tb V_ts^*). This leads to Δ M_s = 2/3 m_B_s f_B_s^2 B̂_B_s |N C_VLL| . In addition, the weak phase of B_s - B̅_s mixing is given by φ_s = arg(N C_VLL). From the above expressions, we see that the larger g_L^bs is, the more the Z' contributes to, and is constrained by, B_s - B̅_s mixing. The experimental measurements of the mixing parameters yield <cit.> Δ M_s^ exp = 17.757 ± 0.021 ps^-1 , φ_s^cc̅s = -0.030 ± 0.033 . These are to be compared with the SM predictions: Δ M_s^ SM = 2/3 m_B_s f_B_s^2 B̂_B_s |N C_VLL^ SM| = (17.9 ± 2.4) ps^-1 , φ_s^cc̅s, SM = -2 β_s = -0.03704 ± 0.00064 . In the above, for Δ M_s^ SM, we have followed the computation of Ref. <cit.>, using f_B_s√(B̂_B_s) = 270 ± 16 MeV <cit.>, |V_tb V_ts^*| = 0.0405 ± 0.0012 <cit.>, and m_t = 160 GeV; φ_s^cc̅s, SM is taken from Refs. <cit.>. The Z' will also contribute to the production of μ^+μ^- pairs in neutrino-nucleus scattering, ν_μ N →ν_μ N μ^+ μ^- (neutrino trident production). At leading order, this process is effectively ν_μγ→ν_μμ^+ μ^-, and is produced by single-W/Z exchange in the SM. It arises from the four-fermion effective operator ℒ_eff:trident = [ μ̅γ^μ ( C_V - C_A γ^5 ) μ ] [ ν̅γ_μ (1-γ^5) ν ] , with an external photon coupling to μ^+ or μ^-. In the SM, combining both W- and Z-exchange diagrams, we have <cit.> C_V^SM = - (g^2/8 m_W^2) ( 1/2 + 2 s_W^2 ) , C_A^SM = - (g^2/8 m_W^2) (1/2) . On the other hand, the Z' boson contributes to Eq. (<ref>) with the pure V-A form: C_V^ NP = C_A^ NP = - (g_L^μμ)^2/(4 M_Z'^2) . The theoretical prediction is then
σ_SM+NP/σ_SM |_ν N →ν N μ^+ μ^- = [(C_V^SM + C_V^NP)^2 + (C_A^SM + C_A^NP)^2]/[(C_V^SM)^2 + (C_A^SM)^2] = 1/[1+(1+4s_W^2)^2] [ ( 1 + v^2(g_L^μμ)^2/M_Z'^2 )^2 + ( 1 + 4 s_W^2 + v^2 (g_L^μμ)^2/M_Z'^2 )^2 ] , to be compared with the experimental measurement <cit.>: σ_exp/σ_SM |_ν N →ν N μ^+ μ^- = 0.82 ± 0.28 . The net effect is that this provides an upper limit on (g_L^μμ)^2/M_Z'^2. For M_Z' = 1 TeV and v = 246 GeV, we obtain the following 1σ bound on the coupling: |g_L^μμ| ≤ 1.25 . We now perform a fit within the context of this Z' model. The fit includes the measurements of the b → s μ^+ μ^- observables, B_s - B̅_s mixing (magnitude and phase), and the cross section for neutrino trident production. There are 107 degrees of freedom. Our results are summarized in Table <ref>. We see that a good fit is obtained for g_L^μμ≥ 0.1. (Smaller values of g_L^μμ imply larger values for g_L^bs, which are disfavored by measurements of B_s - B̅_s mixing.) Once again, recall that the ultimate aim of this study is to compare the predictions of different models for the CP-violating asymmetries in B → K^(*)μ^+ μ^-. Such asymmetries can be sizeable only if the NP weak phase is large. However, from Table <ref>, we see that Im(g_L^bs)/Re(g_L^bs) is O(1) only for g_L^μμ = 0.8, 1.0. It is intermediate for g_L^μμ = 0.4, 0.5, and small for g_L^μμ = 0.1, 0.2. We therefore expect that models with different values of g_L^μμ will predict different values of the CP asymmetries, potentially allowing them to be differentiated. From the above, we see that a large NP weak phase can be produced in Z' models only if g_L^μμ is large. However, note that, while this is a necessary condition, it is not sufficient. A particular Z' model must also possess a mechanism whereby g_L^bs can acquire a weak phase. This is not the case for all models. As an example, in some models the Z' couples only to b̅b in the gauge basis. Its coupling constant is therefore real.
The flavor-changing coupling to s̅b is generated only when transforming to the mass basis. However, in Refs. <cit.>, this transformation involves only the second and third generations. In other words, it is essentially a 2 × 2 rotation, which is real. In these models a weak phase in g_L^bs cannot be generated. § CP ASYMMETRIES: MODEL-DEPENDENT PREDICTIONS In the previous section, we have identified the characteristics of NP models that can explain the b → s μ^+ μ^- anomalies. We have found that there are three LQ models (S_3, U_1, U_3) that can do this. All have C_9^μμ( NP) = - C_10^μμ( NP) and so are equivalent, as far as b → s μ^+ μ^- processes are concerned. There is a whole spectrum of Z' models that can explain the b → s μ^+ μ^- data. What is required is that the Z' have couplings g_L^bs s̅γ^μ P_L b Z'_μ and g_L^μμ μ̅γ^μ P_L μ Z'_μ, and that g_L^μμ be ≥ 0.1. The purpose of this paper is to investigate whether these models can be distinguished by measurements of CP-violating asymmetries in B → K^* μ^+ μ^- and B → K μ^+ μ^-. To this end, the next step is to compute the predictions of all models for the allowed ranges of the various asymmetries. For the LQ and Z' models, the best-fit values and errors of the real and imaginary parts of the NP couplings are given in Tables <ref> and <ref>, respectively. (For the LQ model, the allowed region in the Re(WC)-Im(WC) plane is shown in the upper right plot of Fig. <ref> (scenario II).) With these we can calculate the predictions for the asymmetries for all models. In Fig. <ref>, we present the predictions for the CP asymmetries A_3-A_9 in B^0 → K^*0μ^+ μ^- and A_ CP in B → K μ^+ μ^-. We consider the LQ model (solutions (A) and (B)) and the Z' model with g_L^μμ = 0.1, 0.5, 1.0. The ranges of the asymmetries are obtained by allowing the real and imaginary parts of the couplings to vary by ± 2σ (taking correlations into account). From these figures we see that: * The predictions of the Z' model with g_L^μμ = 1.0 are very similar to those of the LQ model in which solutions (A) and (B) are added.
* Even in the presence of NP, the asymmetries A_3, A_4, A_5, and A_9 are very small and probably unmeasurable. * In the LQ and Z' (g_L^μμ = 1.0) models, the asymmetries A_6^s and A_ CP can approach the 10% level in the high-q^2 region. * The asymmetry A_8 can reach 15% in the low-q^2 region in the LQ and Z' (g_L^μμ = 1.0) models; it is small in the Z' (g_L^μμ = 0.1, 0.5) models. * The most useful asymmetry is A_7 in the low-q^2 region. In the LQ and Z' (g_L^μμ = 1.0) models, it can reach ∼ 25%; in the Z' (g_L^μμ = 0.5) model, it can reach ∼ 5%; and it is very small in the Z' (g_L^μμ = 0.1) model. * If a large nonzero CP asymmetry is measured, its sign distinguishes solutions (A) and (B) of the LQ model. From this we see that, using CP-violating asymmetries in B → K^(*)μ^+ μ^-, it may indeed be possible to distinguish the LQ and Z' (g_L^μμ = 1.0) models from Z' models with different values of g_L^μμ. Finally, it was pointed out above that the predictions of the LQ model in which solutions (A) and (B) are added are very similar to those of the Z' model (g_L^μμ = 1.0). Furthermore, we note that these predictions are also very similar to those of the model-independent analysis (scenario II: C_9^μμ( NP) = - C_10^μμ( NP)), shown in Fig. <ref>. This is to be expected. Both the model-independent and LQ fits include only b → s μ^+ μ^- data, and for g_L^μμ = 1.0, the Z' fit is dominated by the b → s μ^+ μ^- data (the additional constraints from B_s - B̅_s mixing are negligible). On the other hand, in a Z' model with g_L^μμ < 1.0, the constraints from B_s - B̅_s mixing are important, so that the predicted asymmetries are smaller than with g_L^μμ = 1.0. This is another example of how model-independent and model-dependent fits can yield different results. § SUMMARY & CONCLUSIONS There are currently a number of B-decay measurements involving b → s μ^+ μ^- that exhibit discrepancies with the predictions of the SM.
These include the angular analysis of B → K^* μ^+μ^-, the branching fraction and angular analysis of B_s^0 → ϕ μ^+ μ^-, and R_K ≡ B(B^+ → K^+ μ^+ μ^-)/ B(B^+ → K^+ e^+ e^-). The model-independent global analysis of Ref. <cit.> showed that these anomalies can be explained if there is new physics in b → s μ^+ μ^-. Assuming that the NP Wilson coefficients are real, the four possible scenarios are (I) C_9^μμ( NP) < 0, (II) C_9^μμ( NP) = - C_10^μμ( NP) < 0, (III) C_9^μμ( NP) = - C_9^'μμ( NP) < 0, and (IV) C_9^μμ( NP) = - C_10^μμ( NP) = -C_9^'μμ( NP) = - C_10^'μμ( NP) < 0. Many models have been proposed as explanations of the B-decay anomalies. The purpose of this paper is to investigate whether one can distinguish among these models using measurements of CP-violating asymmetries in B^0 → K^*0μ^+ μ^- and B → K μ^+ μ^-. (In the SM, all CP-violating effects are expected to be tiny.) We begin by repeating the model-independent global analysis, this time allowing for complex WCs. We confirm that the four scenarios I-IV do indeed provide good fits to the data. Then, using the best-fit values and errors of the real and imaginary parts of the WCs, we compute the allowed ranges of the CP asymmetries in B → K^(*)μ^+ μ^-. We find that several asymmetries can be large, greater than 10%. More importantly, by combining the results of different CP asymmetries, it is potentially possible to differentiate scenarios I-IV. We then turn to a model-dependent analysis. There are two classes of NP that can contribute to b → s μ^+ μ^-: leptoquarks and Z' bosons. We examine these two types of NP in order to determine the characteristics of models that can explain the B-decay anomalies. Note that a specific model may have additional theoretical or experimental constraints, which must be taken into account in the model-dependent fits. This can lead to results that are quite different from the model-independent fits. Given a model that accounts for the b → s μ^+ μ^- data, we compute its predictions for CP-violating effects.
In order to generate sizeable CP asymmetries, the NP weak phase must be large. We consider all possible LQ models and find that three can explain the B anomalies. All have C_9^μμ( NP) = - C_10^μμ( NP) (scenario II), and so are equivalent as far as the b → s μ^+ μ^- data are concerned. The three LQs contribute differently to b → s ν_μν̅_μ, and so could, in principle, be distinguished by measurements of B → K^(*)νν̅. However, we find that the constraints on the models from the present B → K^(*)νν̅ data are far weaker than those from b → s μ^+ μ^-, so that the three models remain indistinguishable. That is, there is effectively only one LQ model that can explain the b → s μ^+ μ^- data. There are two best-fit solutions (A) and (B); both have |Im(coupling)/Re(coupling)| = O(1), corresponding to a large NP weak phase. Many Z' models have been proposed to explain the B anomalies, but most of these also have C_9^μμ( NP) = - C_10^μμ( NP) (scenario II). Thus, although the models are constructed differently, all have couplings g_L^bs s̅γ^μ P_L b Z'_μ and g_L^μμ μ̅γ^μ P_L μ Z'_μ. g_L^μμ is necessarily real, but g_L^bs may be complex. The potential size of CP asymmetries is related to the size of the weak phase of g_L^bs. The product g_L^bs g_L^μμ is constrained by b → s μ^+ μ^-, while there are constraints on (g_L^bs)^2 due to the Z' contribution to B_s^0-B̄_s^0 mixing. If g_L^μμ is small, the b → s μ^+ μ^- data require g_L^bs to be large, so that the B_s^0-B̄_s^0 mixing constraints are stringent. In particular, the measurement of φ_s^cc̅s, the weak phase of B_s^0-B̄_s^0 mixing, constrains the weak phase of g_L^bs to be small. On the other hand, if g_L^μμ is large, g_L^bs is small, so the B_s^0-B̄_s^0 mixing constraints are very weak. In this case, the weak phase of g_L^bs can be large. We therefore see that there is a whole spectrum of Z' models, parametrized by the size of the g_L^μμ coupling. We compute the predictions for the CP asymmetries in B → K^(*)μ^+ μ^- in the LQ model (solutions (A) and (B)) and the Z' model with g_L^μμ = 0.1, 0.5, 1.0.
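The inverse relation between the two Z' couplings described above can be made concrete with a toy numerical sketch. All numbers here are illustrative placeholders, not fit results from this analysis: for a fixed value of the product g_L^bs g_L^μμ required by the b → s μ^+ μ^- data, the Z' contribution to B_s^0-B̄_s^0 mixing grows as 1/(g_L^μμ)^2.

```python
# Toy illustration of the scaling argument above (all numbers are
# illustrative placeholders, not fit results).  The b -> s mu+ mu- data
# fix the product g_bs * g_mumu to some value C, while Bs-Bsbar mixing
# constrains |g_bs|^2.  Hence g_bs = C / g_mumu: a small muon coupling
# forces a large quark coupling and a strong mixing constraint.

C = 1e-3  # hypothetical value of the product g_bs * g_mumu


def g_bs(g_mumu, product=C):
    """Quark coupling required to reproduce the b -> s mu mu data."""
    return product / g_mumu


def mixing_contribution(g_mumu, product=C):
    """Z' contribution to Bs mixing scales as (g_bs)^2 = (C / g_mumu)^2."""
    return g_bs(g_mumu, product) ** 2


for g in (0.1, 0.5, 1.0):
    print(g, g_bs(g), mixing_contribution(g))
```

For these toy numbers, lowering g_L^μμ from 1.0 to 0.1 enhances the mixing contribution by a factor of 100, which is the sense in which the mixing constraint (and hence the bound on the weak phase of g_L^bs) tightens as g_L^μμ decreases.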
We find that it may indeed be possible to distinguish the LQ and Z' models with various values of g_L^μμ from one another. The most useful CP asymmetry is A_7 in B^0 → K^*0μ^+ μ^-. In the low-q^2 region, this asymmetry (i) can reach ∼ 25% in the LQ and Z' (g_L^μμ = 1.0) models, (ii) can reach ∼ 5% in the Z' (g_L^μμ = 0.5) model, and (iii) is very small in the Z' (g_L^μμ = 0.1) model. In addition, the sign of the asymmetry distinguishes solutions (A) and (B) of the LQ model. We therefore conclude that measurements of CP violation in B → K^(*)μ^+ μ^- are potentially very useful in identifying the NP responsible for the B-decay anomalies.

Acknowledgements: This work was financially supported by NSERC of Canada (DL) and by the U.S. Department of Energy under contract DE-SC0007983 (BB). AKA and BB acknowledge the hospitality of the GPP at the Université de Montréal during the initial stages of the work. BB thanks Alexey Petrov and Andreas Kronfeld for useful discussions. JK would like to thank Christoph Niehoff and David Straub for discussions and several correspondences regarding flavio. DL thanks Gudrun Hiller for helpful information about the CP asymmetries A_3-A_9.

Appendix
This Appendix contains tables of all the experimental data used in the fits.

BK*mumuLHCb1 R. Aaij et al. [LHCb Collaboration], “Measurement of Form-Factor-Independent Observables in the Decay B^0→ K^*0μ^+ μ^-,” Phys. Rev. Lett. 111, 191801 (2013) doi:10.1103/PhysRevLett.111.191801 [arXiv:1308.1707 [hep-ex]].
BK*mumuLHCb2 R. Aaij et al. [LHCb Collaboration], “Angular analysis of the B^0→ K^*0μ^+μ^- decay using 3 fb^-1 of integrated luminosity,” JHEP 1602, 104 (2016) doi:10.1007/JHEP02(2016)104 [arXiv:1512.04442 [hep-ex]].
BK*mumuBelle A. Abdesselam et al. [Belle Collaboration], “Angular analysis of B^0 → K^∗(892)^0 ℓ^+ ℓ^-,” arXiv:1604.04042 [hep-ex].
P'5 S. Descotes-Genon, T. Hurth, J. Matias and J.
Virto, “Optimizing the basis of B → K^* l l observables in the full kinematic range,” JHEP 1305, 137 (2013) doi:10.1007/JHEP05(2013)137 [arXiv:1303.5794 [hep-ph]].
BK*mumuhadunc1 S. Descotes-Genon, L. Hofer, J. Matias and J. Virto, “On the impact of power corrections in the prediction of B → K^*μ^+μ^- observables,” JHEP 1412, 125 (2014) doi:10.1007/JHEP12(2014)125 [arXiv:1407.8526 [hep-ph]].
BK*mumuhadunc2 J. Lyon and R. Zwicky, “Resonances gone topsy turvy - the charm of QCD or new physics in b → s ℓ^+ ℓ^-?,” arXiv:1406.0566 [hep-ph].
BK*mumuhadunc3 S. Jäger and J. Martin Camalich, “Reassessing the discovery potential of the B → K^*ℓ^+ℓ^- decays in the large-recoil region: SM challenges and BSM opportunities,” Phys. Rev. D93, 014028 (2016) doi:10.1103/PhysRevD.93.014028 [arXiv:1412.3183 [hep-ph]].
Altmannshofer:2014rta W. Altmannshofer and D. M. Straub, “New physics in b→ s transitions after LHC run 1,” Eur. Phys. J. C75, no. 8, 382 (2015) doi:10.1140/epjc/s10052-015-3602-7 [arXiv:1411.3161 [hep-ph]].
BK*mumulatestfit1 S. Descotes-Genon, L. Hofer, J. Matias and J. Virto, “Global analysis of b→ sℓℓ anomalies,” JHEP 1606, 092 (2016) doi:10.1007/JHEP06(2016)092 [arXiv:1510.04239 [hep-ph]].
BK*mumulatestfit2 T. Hurth, F. Mahmoudi and S. Neshatpour, “On the anomalies in the latest LHCb data,” Nucl. Phys. B909, 737 (2016) doi:10.1016/j.nuclphysb.2016.05.022 [arXiv:1603.00865 [hep-ph]].
BsphimumuLHCb1 R. Aaij et al. [LHCb Collaboration], “Differential branching fraction and angular analysis of the decay B_s^0→ϕμ^+μ^-,” JHEP 1307, 084 (2013) doi:10.1007/JHEP07(2013)084 [arXiv:1305.2168 [hep-ex]].
BsphimumuLHCb2 R. Aaij et al. [LHCb Collaboration], “Angular analysis and differential branching fraction of the decay B^0_s→ϕμ^+μ^-,” JHEP 1509, 179 (2015) doi:10.1007/JHEP09(2015)179 [arXiv:1506.08777 [hep-ex]].
latticeQCD1 R. R. Horgan, Z. Liu, S. Meinel and M. Wingate, “Calculation of B^0 → K^*0μ^+ μ^- and B_s^0 →ϕμ^+ μ^- observables using form factors from lattice QCD,” Phys. Rev.
Lett. 112, 212003 (2014) doi:10.1103/PhysRevLett.112.212003 [arXiv:1310.3887 [hep-ph]];
latticeQCD2 “Rare B decays using lattice QCD form factors,” PoS LATTICE2014, 372 (2015) [arXiv:1501.00367 [hep-lat]].
QCDsumrules A. Bharucha, D. M. Straub and R. Zwicky, “B→ Vℓ^+ℓ^- in the Standard Model from light-cone sum rules,” JHEP 1608, 098 (2016) doi:10.1007/JHEP08(2016)098 [arXiv:1503.05534 [hep-ph]].
RKexpt R. Aaij et al. [LHCb Collaboration], “Test of lepton universality using B^+→ K^+ℓ^+ℓ^- decays,” Phys. Rev. Lett. 113, 151601 (2014) [arXiv:1406.6482 [hep-ex]].
IsidoriRK M. Bordone, G. Isidori and A. Pattori, “On the Standard Model predictions for R_K and R_K^*,” Eur. Phys. J. C76, no. 8, 440 (2016) doi:10.1140/epjc/s10052-016-4274-7 [arXiv:1605.07633 [hep-ph]].
bsmumuCPC A. K. Alok, A. Datta, A. Dighe, M. Duraisamy, D. Ghosh and D. London, “New Physics in b → s μ^+ μ^-: CP-Conserving Observables,” JHEP 1111, 121 (2011) doi:10.1007/JHEP11(2011)121 [arXiv:1008.2367 [hep-ph]].
bsmumuCPV A. K. Alok, A. Datta, A. Dighe, M. Duraisamy, D. Ghosh and D. London, “New Physics in b → s μ^+ μ^-: CP-Violating Observables,” JHEP 1111, 122 (2011) doi:10.1007/JHEP11(2011)122 [arXiv:1103.5344 [hep-ph]].
Descotes-Genon:2013wba S. Descotes-Genon, J. Matias and J. Virto, “Understanding the B→ K^*μ^+μ^- Anomaly,” Phys. Rev. D88, 074002 (2013) doi:10.1103/PhysRevD.88.074002 [arXiv:1307.5683 [hep-ph]].
AlexLenz S. Jäger, K. Leslie, M. Kirk and A. Lenz, “Charming new physics in rare B-decays and mixing?,” arXiv:1701.09183 [hep-ph].
CCO L. Calibbi, A. Crivellin and T. Ota, “Effective Field Theory Approach to b→ s ℓℓ^('), B→ K^(*)νν̅ and B → D^(*)τν with Third Generation Couplings,” Phys. Rev. Lett. 115, 181801 (2015) doi:10.1103/PhysRevLett.115.181801 [arXiv:1506.02661 [hep-ph]].
AGC R. Alonso, B. Grinstein and J. Martin Camalich, “Lepton universality violation and lepton flavor conservation in B-meson decays,” JHEP 1510, 184 (2015) doi:10.1007/JHEP10(2015)184 [arXiv:1505.05164 [hep-ph]].
HS1 G.
Hiller and M. Schmaltz, “R_K and future b → s ℓℓ BSM opportunities,” Phys. Rev. D90 (2014) 054014 [arXiv:1408.1627 [hep-ph]].
GNR B. Gripaios, M. Nardecchia and S. A. Renner, “Composite leptoquarks and anomalies in B-meson decays,” JHEP 1505, 006 (2015) doi:10.1007/JHEP05(2015)006 [arXiv:1412.1791 [hep-ph]].
VH I. de Medeiros Varzielas and G. Hiller, “Clues for flavor from rare lepton and quark decays,” JHEP 1506, 072 (2015) doi:10.1007/JHEP06(2015)072 [arXiv:1503.01084 [hep-ph]].
SM S. Sahoo and R. Mohanta, “Scalar leptoquarks and the rare B meson decays,” Phys. Rev. D91, no. 9, 094019 (2015) doi:10.1103/PhysRevD.91.094019 [arXiv:1501.05193 [hep-ph]].
FK S. Fajfer and N. Košnik, “Vector leptoquark resolution of R_K and R_D^(*) puzzles,” Phys. Lett. B755, 270 (2016) doi:10.1016/j.physletb.2016.02.018 [arXiv:1511.06024 [hep-ph]].
BFK D. Bečirević, S. Fajfer and N. Košnik, “Lepton flavor nonuniversality in b → s ℓ^+ℓ^- processes,” Phys. Rev. D92, no. 1, 014016 (2015) doi:10.1103/PhysRevD.92.014016 [arXiv:1503.09024 [hep-ph]].
BKSZ D. Bečirević, N. Košnik, O. Sumensari and R. Zukanovich Funchal, “Palatable Leptoquark Scenarios for Lepton Flavor Violation in Exclusive b→ sℓ_1ℓ_2 modes,” JHEP 1611, 035 (2016) doi:10.1007/JHEP11(2016)035 [arXiv:1608.07583 [hep-ph]].
Crivellin:2015lwa A. Crivellin, G. D'Ambrosio and J. Heeck, “Addressing the LHC flavor anomalies with horizontal gauge symmetries,” Phys. Rev. D91, 075006 (2015) doi:10.1103/PhysRevD.91.075006 [arXiv:1503.03477 [hep-ph]].
Isidori A. Greljo, G. Isidori and D. Marzocca, “On the breaking of Lepton Flavor Universality in B decays,” JHEP 1507, 142 (2015) doi:10.1007/JHEP07(2015)142 [arXiv:1506.01705 [hep-ph]].
dark D. Aristizabal Sierra, F. Staub and A. Vicente, “Shedding light on the b→ s anomalies with a dark sector,” Phys. Rev. D92, 015001 (2015) doi:10.1103/PhysRevD.92.015001 [arXiv:1503.06077 [hep-ph]].
Chiang C. W. Chiang, X. G. He and G. Valencia, “Z' model for b → s ℓℓ̅ flavor anomalies,” Phys. Rev.
D93, 074003 (2016) doi:10.1103/PhysRevD.93.074003 [arXiv:1601.07328 [hep-ph]].
Virto S. M. Boucenna, A. Celis, J. Fuentes-Martin, A. Vicente and J. Virto, “Non-abelian gauge extensions for B-decay anomalies,” Phys. Lett. B760, 214 (2016) doi:10.1016/j.physletb.2016.06.067 [arXiv:1604.03088 [hep-ph]]; “Phenomenology of an SU(2) × SU(2) × U(1) model with lepton-flavour non-universality,” JHEP 1612, 059 (2016) doi:10.1007/JHEP12(2016)059 [arXiv:1608.01349 [hep-ph]].
GGH R. Gauld, F. Goertz and U. Haisch, “On minimal Z' explanations of the B→ K^*μ^+μ^- anomaly,” Phys. Rev. D89, 015005 (2014) doi:10.1103/PhysRevD.89.015005 [arXiv:1308.1959 [hep-ph]]; “An explicit Z'-boson explanation of the B → K^* μ^+ μ^- anomaly,” JHEP 1401, 069 (2014) doi:10.1007/JHEP01(2014)069 [arXiv:1310.1082 [hep-ph]].
BG A. J. Buras and J. Girrbach, “Left-handed Z' and Z FCNC quark couplings facing new b → s μ^+ μ^- data,” JHEP 1312, 009 (2013) doi:10.1007/JHEP12(2013)009 [arXiv:1309.2466 [hep-ph]].
BFG A. J. Buras, F. De Fazio and J. Girrbach, “331 models facing new b → sμ^+ μ^- data,” JHEP 1402, 112 (2014) doi:10.1007/JHEP02(2014)112 [arXiv:1311.6729 [hep-ph]].
Perimeter W. Altmannshofer, S. Gori, M. Pospelov and I. Yavin, “Quark flavor transitions in L_μ-L_τ models,” Phys. Rev. D89, 095033 (2014) doi:10.1103/PhysRevD.89.095033 [arXiv:1403.1269 [hep-ph]].
CDH A. Crivellin, G. D'Ambrosio and J. Heeck, “Explaining h→μ^±τ^∓, B→ K^* μ^+μ^- and B→ K μ^+μ^-/B→ K e^+e^- in a two-Higgs-doublet model with gauged L_μ-L_τ,” Phys. Rev. Lett. 114, 151801 (2015) doi:10.1103/PhysRevLett.114.151801 [arXiv:1501.00993 [hep-ph]]; “Addressing the LHC flavor anomalies with horizontal gauge symmetries,” Phys. Rev. D91, no. 7, 075006 (2015) doi:10.1103/PhysRevD.91.075006 [arXiv:1503.03477 [hep-ph]].
SSV D. Aristizabal Sierra, F. Staub and A. Vicente, “Shedding light on the b→ s anomalies with a dark sector,” Phys. Rev. D92, no. 1, 015001 (2015) doi:10.1103/PhysRevD.92.015001 [arXiv:1503.06077 [hep-ph]].
CHMNPR A. Crivellin, L. Hofer, J.
Matias, U. Nierste, S. Pokorski and J. Rosiek, “Lepton-flavour violating B decays in generic Z' models,” Phys. Rev. D92, no. 5, 054013 (2015) doi:10.1103/PhysRevD.92.054013 [arXiv:1504.07928 [hep-ph]].
CMJS A. Celis, J. Fuentes-Martin, M. Jung and H. Serodio, “Family nonuniversal Z' models with protected flavor-changing interactions,” Phys. Rev. D92, no. 1, 015007 (2015) doi:10.1103/PhysRevD.92.015007 [arXiv:1505.03079 [hep-ph]].
BDW G. Bélanger, C. Delaunay and S. Westhoff, “A Dark Matter Relic From Muon Anomalies,” Phys. Rev. D92, 055021 (2015) doi:10.1103/PhysRevD.92.055021 [arXiv:1507.06660 [hep-ph]].
FNZ A. Falkowski, M. Nardecchia and R. Ziegler, “Lepton Flavor Non-Universality in B-meson Decays from a U(2) Flavor Model,” JHEP 1511, 173 (2015) doi:10.1007/JHEP11(2015)173 [arXiv:1509.01249 [hep-ph]].
AQSS B. Allanach, F. S. Queiroz, A. Strumia and S. Sun, “Z' models for the LHCb and g-2 muon anomalies,” Phys. Rev. D93, no. 5, 055045 (2016) doi:10.1103/PhysRevD.93.055045 [arXiv:1511.07447 [hep-ph]].
CFL A. Celis, W. Z. Feng and D. Lüst, “Stringy explanation of b→ sℓ^+ ℓ^- anomalies,” JHEP 1602, 007 (2016) doi:10.1007/JHEP02(2016)007 [arXiv:1512.02218 [hep-ph]].
Hou K. Fuyuto, W. S. Hou and M. Kohda, “Z'-induced FCNC decays of top, beauty, and strange quarks,” Phys. Rev. D93, no. 5, 054021 (2016) doi:10.1103/PhysRevD.93.054021 [arXiv:1512.09026 [hep-ph]].
CHV C. W. Chiang, X. G. He and G. Valencia, “Z' model for b→ s ℓℓ̅ flavor anomalies,” Phys. Rev. D93, no. 7, 074003 (2016) doi:10.1103/PhysRevD.93.074003 [arXiv:1601.07328 [hep-ph]].
CFV A. Celis, W. Z. Feng and M. Vollmann, Phys. Rev. D95, no. 3, 035018 (2017) doi:10.1103/PhysRevD.95.035018 [arXiv:1608.03894 [hep-ph]].
CFGI A. Crivellin, J. Fuentes-Martin, A. Greljo and G. Isidori, Phys. Lett. B766, 77 (2017) doi:10.1016/j.physletb.2016.12.057 [arXiv:1611.02703 [hep-ph]].
IGG I. Garcia Garcia, JHEP 1703, 040 (2017) doi:10.1007/JHEP03(2017)040 [arXiv:1611.03507 [hep-ph]].
BdecaysDM J. M. Cline, J. M. Cornell, D.
London and R. Watanabe, Phys. Rev. D95, no. 9, 095015 (2017) doi:10.1103/PhysRevD.95.095015 [arXiv:1702.00395 [hep-ph]].
Bhatia:2017tgo D. Bhatia, S. Chakraborty and A. Dighe, JHEP 1703, 117 (2017) doi:10.1007/JHEP03(2017)117 [arXiv:1701.05825 [hep-ph]].
RKRDmodels B. Bhattacharya, A. Datta, J.-P. Guévin, D. London and R. Watanabe, JHEP 1701, 015 (2017) doi:10.1007/JHEP01(2017)015 [arXiv:1609.09078 [hep-ph]].
BHP C. Bobeth, G. Hiller and G. Piranishvili, “CP Asymmetries in B̅→K̅^* (→K̅π) ℓ̅ℓ and Untagged B̅_s, B_s →ϕ (→ K^+ K^-) ℓ̅ℓ Decays at NLO,” JHEP 0807, 106 (2008) doi:10.1088/1126-6708/2008/07/106 [arXiv:0805.2525 [hep-ph]].
BK*mumuCPV W. Altmannshofer, P. Ball, A. Bharucha, A. J. Buras, D. M. Straub and M. Wick, “Symmetries and Asymmetries of B → K^*μ^+μ^- Decays in the Standard Model and Beyond,” JHEP 0901, 019 (2009) doi:10.1088/1126-6708/2009/01/019 [arXiv:0811.1214 [hep-ph]].
flavio David Straub, flavio v0.11, 2016. http://dx.doi.org/10.5281/zenodo.59840
James:1975dr F. James and M. Roos, “Minuit: A System for Function Minimization and Analysis of the Parameter Errors and Correlations,” Comput. Phys. Commun. 10, 343 (1975). doi:10.1016/0010-4655(75)90039-9
James:2004xla F. James and M. Winkler, “MINUIT User's Guide,” http://inspirehep.net/record/1258345?ln=en
James:1994vla F. James, “MINUIT Function Minimization and Error Analysis: Reference Manual Version 94.1,” CERN-D-506.
pdg C. Patrignani et al. [Particle Data Group], “Review of Particle Physics,” Chin. Phys. C40, no. 10, 100001 (2016). doi:10.1088/1674-1137/40/10/100001
Aaij:2014pli R. Aaij et al. [LHCb Collaboration], “Differential branching fractions and isospin asymmetries of B → K^(*)μ^+ μ^- decays,” JHEP 1406, 133 (2014) doi:10.1007/JHEP06(2014)133 [arXiv:1403.8044 [hep-ex]].
Lees:2013nxa J. P. Lees et al. [BaBar Collaboration], “Measurement of the B → X_s l^+l^- branching fraction and search for direct CP violation from a sum of exclusive final states,” Phys. Rev.
Lett. 112, 211802 (2014) doi:10.1103/PhysRevLett.112.211802 [arXiv:1312.5364 [hep-ex]].
Aaij:2013aka R. Aaij et al. [LHCb Collaboration], “Measurement of the B^0_s →μ^+ μ^- branching fraction and search for B^0 →μ^+ μ^- decays at the LHCb experiment,” Phys. Rev. Lett. 111, 101805 (2013) doi:10.1103/PhysRevLett.111.101805 [arXiv:1307.5024 [hep-ex]].
CMS:2014xfa V. Khachatryan et al. [CMS and LHCb Collaborations], “Observation of the rare B^0_s→μ^+μ^- decay from the combined analysis of CMS and LHCb data,” Nature 522, 68 (2015) doi:10.1038/nature14474 [arXiv:1411.4413 [hep-ex]].
Bailey:2015dka J. A. Bailey et al., “B→ Kl^+l^- decay form factors from three-flavor lattice QCD,” Phys. Rev. D93, no. 2, 025026 (2016) doi:10.1103/PhysRevD.93.025026 [arXiv:1509.06235 [hep-lat]].
Descotes-Genon:2015hea S. Descotes-Genon and J. Virto, “Time dependence in B → Vℓℓ decays,” JHEP 1504, 045 (2015) Erratum: [JHEP 1507, 049 (2015)] doi:10.1007/JHEP04(2015)045, 10.1007/JHEP07(2015)049 [arXiv:1502.05509 [hep-ph]].
Asatryan:2002iy H. H. Asatryan, H. M. Asatrian, C. Greub and M. Walker, “Complete gluon bremsstrahlung corrections to the process b → s l^+ l^-,” Phys. Rev. D66, 034009 (2002) doi:10.1103/PhysRevD.66.034009 [hep-ph/0204341].
Ghinculov:2003qd A. Ghinculov, T. Hurth, G. Isidori and Y. P. Yao, “The Rare decay B → X_s l^+ l^- to NNLL precision for arbitrary dilepton invariant mass,” Nucl. Phys. B685, 351 (2004) doi:10.1016/j.nuclphysb.2004.02.028 [hep-ph/0312128].
Huber:2005ig T. Huber, E. Lunghi, M. Misiak and D. Wyler, “Electromagnetic logarithms in B̅→ X_s l^+ l^-,” Nucl. Phys. B740, 105 (2006) doi:10.1016/j.nuclphysb.2006.01.037 [hep-ph/0512066].
Huber:2007vv T. Huber, T. Hurth and E. Lunghi, “Logarithmically Enhanced Corrections to the Decay Rate and Forward Backward Asymmetry in B̅→ X_s ℓ^+ ℓ^-,” Nucl. Phys. B802, 40 (2008) doi:10.1016/j.nuclphysb.2008.04.028 [arXiv:0712.3009 [hep-ph]].
DatLon A. Datta and D. London, “Measuring new physics parameters in B penguin decays,” Phys. Lett.
B595, 453 (2004) doi:10.1016/j.physletb.2004.06.069 [hep-ph/0404130].
Sakakietal Y. Sakaki, M. Tanaka, A. Tayduganov and R. Watanabe, “Testing leptoquark models in B̅→ D^(*)τν̅,” Phys. Rev. D88, no. 9, 094012 (2013) doi:10.1103/PhysRevD.88.094012 [arXiv:1309.0301 [hep-ph]].
Buras:2014fpa A. J. Buras, J. Girrbach-Noe, C. Niehoff and D. M. Straub, “B→K^(∗)νν decays in the Standard Model and beyond,” JHEP 1502, 184 (2015) doi:10.1007/JHEP02(2015)184 [arXiv:1409.4557 [hep-ph]].
Grygier:2017tzo J. Grygier et al. [Belle Collaboration], “Search for B→ hνν̅ decays with semileptonic tagging at Belle,” arXiv:1702.03224 [hep-ex].
RKRD B. Bhattacharya, A. Datta, D. London and S. Shivashankara, “Simultaneous Explanation of the R_K and R(D^(*)) Puzzles,” Phys. Lett. B742, 370 (2015) [arXiv:1412.7164 [hep-ph]].
RD_BaBar J. P. Lees et al. [BaBar Collaboration], “Measurement of an Excess of B̅→ D^(*)τ^- ν̅_τ Decays and Implications for Charged Higgs Bosons,” Phys. Rev. D88, 072012 (2013) doi:10.1103/PhysRevD.88.072012 [arXiv:1303.0571 [hep-ex]].
RD_Belle M. Huschle et al. [Belle Collaboration], “Measurement of the branching ratio of B̅→ D^(∗)τ^- ν̅_τ relative to B̅→ D^(∗)ℓ^- ν̅_ℓ decays with hadronic tagging at Belle,” Phys. Rev. D92, 072014 (2015) doi:10.1103/PhysRevD.92.072014 [arXiv:1507.03233 [hep-ex]].
RD_LHCb R. Aaij et al. [LHCb Collaboration], “Measurement of the ratio of branching fractions ℬ(B̅^0 → D^*+τ^-ν̅_τ)/ℬ(B̅^0 → D^*+μ^-ν̅_μ),” Phys. Rev. Lett. 115, 111803 (2015) Addendum: [Phys. Rev. Lett. 115, 159901 (2015)] doi:10.1103/PhysRevLett.115.159901, 10.1103/PhysRevLett.115.111803 [arXiv:1506.08614 [hep-ex]].
Buchalla:1995vs G. Buchalla, A. J. Buras and M. E. Lautenbacher, “Weak decays beyond leading logarithms,” Rev. Mod. Phys. 68, 1125 (1996) doi:10.1103/RevModPhys.68.1125 [hep-ph/9512380].
HFAG Y. Amhis et al. [Heavy Flavor Averaging Group (HFAG) Collaboration], “Averages of b-hadron, c-hadron, and τ-lepton properties as of summer 2014,” arXiv:1412.7515 [hep-ex].
Gamiz:2009ku E. Gamiz et al. [HPQCD Collaboration], “Neutral B Meson Mixing in Unquenched Lattice QCD,” Phys. Rev. D80, 014503 (2009) doi:10.1103/PhysRevD.80.014503 [arXiv:0902.1815 [hep-lat]].
Aoki:2014nga Y. Aoki, T. Ishikawa, T. Izubuchi, C. Lehner and A. Soni, “Neutral B meson mixings and B meson decay constants with static heavy and domain-wall light quarks,” Phys. Rev. D91, no. 11, 114505 (2015) doi:10.1103/PhysRevD.91.114505 [arXiv:1406.6192 [hep-lat]].
Aoki:2016frl S. Aoki et al., “Review of lattice results concerning low-energy particle physics,” Eur. Phys. J. C77, no. 2, 112 (2017) doi:10.1140/epjc/s10052-016-4509-7 [arXiv:1607.00299 [hep-lat]].
Charles:2004jd J. Charles et al. [CKMfitter Group], “CP violation and the CKM matrix: Assessing the impact of the asymmetric B factories,” Eur. Phys. J. C41, no. 1, 1 (2005) doi:10.1140/epjc/s2005-02169-1 [hep-ph/0406184].
Hocker:2001xe A. Hocker, H. Lacker, S. Laplace and F. Le Diberder, “A New approach to a global fit of the CKM matrix,” Eur. Phys. J. C21, 225 (2001) doi:10.1007/s100520100729 [hep-ph/0104062].
Koike:1971tu K. Koike, M. Konuma, K. Kurata and K. Sugano, “Neutrino production of lepton pairs. 1.,” Prog. Theor. Phys. 46, 1150 (1971). doi:10.1143/PTP.46.1150
Koike:1971vg K. Koike, M. Konuma, K. Kurata and K. Sugano, “Neutrino production of lepton pairs. 2.,” Prog. Theor. Phys. 46, 1799 (1971). doi:10.1143/PTP.46.1799
Belusevic:1987cw R. Belusevic and J. Smith, “W - Z Interference in Neutrino - Nucleus Scattering,” Phys. Rev. D37, 2419 (1988). doi:10.1103/PhysRevD.37.2419
Brown:1973ih R. W. Brown, R. H. Hobbs, J. Smith and N. Stanko, “Intermediate boson. III. Virtual-boson effects in neutrino trident production,” Phys. Rev. D6, 3273 (1972). doi:10.1103/PhysRevD.6.3273
CCFR S. R. Mishra et al. [CCFR Collaboration], “Neutrino tridents and W Z interference,” Phys. Rev. Lett. 66, 3117 (1991). doi:10.1103/PhysRevLett.66.3117
Aaij:2016flj R. Aaij et al.
[LHCb Collaboration], “Measurements of the S-wave fraction in B^0→ K^+π^-μ^+μ^- decays and the B^0→ K^∗(892)^0μ^+μ^- differential branching fraction,” JHEP 1611, 047 (2016) doi:10.1007/JHEP11(2016)047 [arXiv:1606.04731 [hep-ex]].
http://arxiv.org/abs/1703.09247v3
{ "authors": [ "Ashutosh Kumar Alok", "Bhubanjyoti Bhattacharya", "Dinesh Kumar", "Jacky Kumar", "David London", "S. Uma Sankar" ], "categories": [ "hep-ph", "hep-ex" ], "primary_category": "hep-ph", "published": "20170327181348", "title": "New Physics in $b \\rightarrow s μ^+ μ^-$: Distinguishing Models through CP-Violating Effects" }
TUM-HEP 1079/17, KIAS-P17060

Optimized velocity distributions for direct dark matter detection

Alejandro Ibarra^1,2, Andreas Rappelt^1
^1 Physik-Department T30d, Technische Universität München, James-Franck-Straße, 85748 Garching, Germany
^2 School of Physics, Korea Institute for Advanced Study, Seoul 02455, South Korea

In astrophysics, we often aim to estimate one or more parameters for each member object in a population and study the distribution of the fitted parameters across the population. In this paper, we develop novel methods that allow us to take advantage of existing software designed for such case-by-case analyses to simultaneously fit parameters of both the individual objects and the parameters that quantify their distribution across the population. Our methods are based on Bayesian hierarchical modelling, which is known to produce parameter estimators for the individual objects that are on average closer to their true values than estimators based on case-by-case analyses. We verify this in the context of estimating ages of Galactic halo white dwarfs (WDs) via a series of simulation studies. Finally, we deploy our new techniques on optical and near-infrared photometry of ten candidate halo WDs to obtain estimates of their ages along with an estimate of the mean age of Galactic halo WDs of 12.11_-0.86^+0.85 Gyr. Although this sample is small, our technique lays the groundwork for large-scale studies using data from the Gaia mission.
methods: statistical – white dwarfs – Galaxy: halo

§ INTRODUCTION

§.§ White Dwarfs and The Galactic Halo Age

In the astrophysical hierarchical structure formation model, the present Galactic stellar halo is the remnant of mergers of multiple smaller galaxies <cit.>, most of which presumably formed some stars prior to merging, and some of which may have experienced, triggered, or enhanced star formation during the merging process. The age distribution of Galactic halo stars encodes this process. Any perceptible age spread for the halo thus provides information on this complex star formation history. At present, we understand the Galactic stellar halo largely through the properties of its globular clusters. These star clusters are typically grouped into a few categories: i) those with thick disk kinematics and abundances, ii) those with classical halo kinematics and abundances, iii) the most distant population that is a few Gyr younger than the classical halo population, and iv) a few globular clusters such as M54 that are ascribed to known, merging systems, in this case the Sagittarius dwarf galaxy <cit.>.
Globular clusters in category two appear consistent with the simple collapse picture of <cit.>, yet those of categories three and four argue for a more complex precursor plus merging picture. The newly appreciated complexity of multiple populations in many or perhaps all globular clusters <cit.> adds richness to this story, and may eventually help us better understand the earliest star formation environments. Despite the tremendous amount we have learned from globular clusters, they are unlikely to elucidate the full star formation history of the Galactic halo because today's globular clusters represent a ∼1% minority of halo stars. Without studying the age distribution of halo field stars, we do not know whether globular cluster ages are representative of the entire halo population. We do know that globular clusters span a narrower range in abundances than field halo stars <cit.>, so there is every reason to be suspicious that there is more to the story than globular clusters can themselves provide. In order to determine the age distribution of the Galactic halo, we need to supplement the globular cluster-based story with ages for individual halo stars. This is not practical for the majority of main sequence or red giant stars because of well-known degeneracies in their observable properties as a function of age. Gyrochronology <cit.> does hold some hope for determining the ages of individual stars, but this is unlikely to provide precise ages for very old stars even after the technique sees considerably more development. Our best current hope for deriving the Galactic halo age distribution is to determine the ages of halo field WDs. WDs have the advantages that they are the evolutionary end-state for the vast majority of stars and their physics is relatively well understood <cit.>. A WD's surface temperature, along with its mass and atmospheric type, is intimately coupled to its cooling age, i.e., how long it has been a WD. The mass of a WD, along with an assumed initial-final
mass relation (IFMR), provides the initial main sequence mass of the star, which, along with theoretical models, provides the lifetime of the precursor star. Pulling all of this information together provides the total age for the WD. The weakest link in this chain is typically the IFMR. Yet fortunately the uncertainty in the IFMR often has little effect on the relative ages of WDs, and thus the precision of any derived age distribution. Additionally, among the higher mass WDs, the uncertainty in the precursor ages can be reduced to a level where the IFMR uncertainties do not dominate uncertainties in the absolute WD ages. While WDs provide all of these advantages for understanding stellar ages, the oldest are very faint, and thus few are known, with fewer still known with the kinematics of the Galactic halo. The paucity of data for these important objects will shortly become a bounty when Gaia both finds currently unknown WDs with halo kinematics and provides highly accurate and precise trigonometric parallaxes, which constrain WD surface areas and thus masses. The number of cool, halo WDs is uncertain by a factor of perhaps five, and depending on the Galaxy model employed, <cit.> calculate that Gaia will derive parallaxes for ∼60 or ∼350 single halo WDs with T_eff ≤ 5000. Gaia will measure parallaxes for more than 200,000 WDs with thick disk and disk kinematics. We have developed a Bayesian statistical technique to derive the ages of individual WDs <cit.> and intend to apply this to each WD for which Gaia obtains excellent parallaxes. Yet the number of halo WDs for which we can derive high-quality ages may still be modest, particularly because we also require accurate optical and near-IR photometry. Because of the importance of the age distribution among halo stars, we have developed a hierarchical modelling technique to pool halo WDs and derive the posterior distributions of their ages.
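The age chain described above can be sketched schematically in a few lines. Note that the linear IFMR and the power-law precursor lifetime used here are hypothetical placeholders chosen only to make the arithmetic concrete; they are not the calibrated relations used in our analysis.

```python
# Schematic of the WD total-age chain (illustrative only: the linear
# IFMR and power-law main-sequence lifetime below are placeholder
# parametrisations, not the calibrations used in the analysis).


def initial_mass_from_final(m_wd):
    """Invert a toy linear IFMR, M_wd = a + b * M_initial (solar masses)."""
    a, b = 0.4, 0.1  # hypothetical IFMR coefficients
    return (m_wd - a) / b


def precursor_lifetime_gyr(m_initial):
    """Toy main-sequence lifetime, t ~ 10 Gyr * (M / Msun)^-2.5."""
    return 10.0 * m_initial ** -2.5


def total_age_gyr(cooling_age_gyr, m_wd):
    """Total age = WD cooling age + lifetime of the precursor star."""
    return cooling_age_gyr + precursor_lifetime_gyr(initial_mass_from_final(m_wd))


# A 0.6 Msun WD that has been cooling for 11 Gyr:
print(total_age_gyr(11.0, 0.6))
```

The sketch also makes the point in the text visible: for higher-mass WDs (larger precursor masses) the precursor-lifetime term is small, so uncertainties in the assumed IFMR contribute correspondingly little to the absolute age.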
§.§ Statistical Analysis of a Population of Objects

In statistics, hierarchical models are viewed as the most efficient and principled technique to estimate the parameters of each object in a population <cit.>. In astrophysics, we often aim to estimate a set of parameters for each of a number of objects in a population, which motivates the application of hierarchical models. Notably, these models have gained popularity in astronomy mainly for two reasons. Firstly, they provide an approach to combining multiple sources of information. For instance, <cit.> employed Bayesian hierarchical models to analyse the properties of Type Ia supernova light curves by using data from the Peters Automated InfraRed Imaging TELescope and the literature. Secondly, they generally produce estimates with less uncertainty. By combining information from supernovae Ia light curves, <cit.> and <cit.> illustrate how a hierarchical model can improve the estimates of cosmological parameters. Similarly, <cit.> obtained improved estimates of several global properties of the Milky Way by using a hierarchical model to combine previous measurements from the literature. However, fully modelling a population of objects within a hierarchical model requires substantial computational investment and often specialised computer code, especially for complicated problems. In this study, we develop novel methods to conveniently obtain the improved estimates available under a hierarchical model. While taking advantage of the existing code for case-by-case analyses, our methods simultaneously estimate parameters of the individual objects and parameters that describe their population distribution. Our methods are based on Bayesian hierarchical modelling, which is known to produce estimators of parameters of the individual objects that are on average closer to their true values than estimators based on case-by-case analyses. There are many possible applications of hierarchical models in astrophysics.
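The two-level structure underlying such models can be made concrete with a toy generative simulation. The numbers here are hypothetical and the Gaussian observation model is a stand-in for the full WD likelihood used later in the paper:

```python
import random

# Toy generative version of a two-level hierarchical model (hypothetical
# numbers; the Gaussian observation step stands in for a full likelihood).
# Population level: object ages are drawn from N(mu_pop, tau^2).
# Object level: each observed age estimate adds measurement noise.

random.seed(1)
mu_pop, tau = 12.0, 0.5      # population mean age (Gyr) and spread
sigma_obs = 1.0              # per-object measurement error (Gyr)

true_ages = [random.gauss(mu_pop, tau) for _ in range(10)]
observed = [random.gauss(a, sigma_obs) for a in true_ages]

# A case-by-case analysis treats each entry of `observed` in isolation;
# a hierarchical analysis fits mu_pop and tau jointly with the ten ages.
print(sum(observed) / len(observed))
```

Simulations of exactly this two-level form (population parameters generating object parameters generating data) are what allow the hierarchical and case-by-case estimates to be compared against known truth.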
In this article we focus on the analysis of a sample of candidate halo WDs. We perform a simulation study to illustrate the advantage of our approach over the commonly used case-by-case analysis in this setting. We find that approximately two thirds of the estimated WD ages are closer to their true ages under the hierarchical model. Using optical and near-infrared photometry of ten candidate halo WDs, we simultaneously estimate their individual ages and the mean age of these halo WDs; we estimate the latter as 12.11_-0.86^+0.85 Gyr. Another application, to the distance modulus of the Large Magellanic Cloud (LMC), is included in Appendix <ref> as a pedagogical illustration of our methods in a simpler setting. One of the primary benefits of our approach is that it takes advantage of existing code that fits one object at a time. We only need to write wrapper code that calls the existing programs; see <cit.> for more details. This saves substantial human capital that might otherwise be devoted to developing and coding a complex new algorithm. The power of this approach can be conceived of as coming from i) an informative assumption, namely that all the objects belong to a population with a particular distribution of the object-level parameters across the population, and ii) the fact that it is otherwise difficult to devise a technique that can combine the individual results when they may have asymmetric posterior density functions. The remainder of this article is organised into five sections. We introduce hierarchical modelling and its statistical inference methods in Section 2. We present methods for case-by-case and hierarchical analyses of the ages of a group of WDs in Section 3. In Section 4, we use a simulation study to verify the advantages of the hierarchical approach. In Section 5 we apply both the case-by-case and our hierarchical model to ten Galactic halo WDs, and then interpret the Galactic halo age in the context of known Milky Way ages.
Section 6 summarises the proposed methodology and our results. In Appendix <ref>, we describe the statistical background of hierarchical models and explain why they tend to provide better estimates. We illustrate the application of hierarchical models and their advantageous statistical properties via the LMC example in Appendix <ref>. Appendices <ref> and <ref> outline the computational algorithms we use to efficiently fit the hierarchical models.

§ HIERARCHICAL MODELLING

Suppose we observe a sample of objects from a population of astronomical sources, for example, the photometry of 10 WDs from the Galactic halo, and we wish to estimate a particular parameter or a set of parameters for each object. We refer to these as the object-level parameters. By virtue of the population, there is a distribution of these parameters across the population of objects. This distribution can be described by another set of parameters that we refer to as the population-level parameters. Often we aim to estimate both the object-level and the population-level parameters. As we shall see, however, even if we are only interested in the object-level parameters, they can be better estimated if we also consider their population distribution. Hierarchical models <cit.>, also called random effect models, can be used to combine data from multiple objects in a single coherent statistical analysis. Potentially this can lead to a more comprehensive understanding of the overall population of objects. Hierarchical models are widely used in many fields, spanning the medical, biological, social, and physical sciences. Because they leverage a more comprehensive data set when fitting the object-level parameters, they tend to result in estimators that on average exhibit smaller errors <cit.>. Because a property of these estimators is that they are “shrunk” toward a common central value relative to those derived from the corresponding case-by-case analyses, they are often called shrinkage estimators.
More details about shrinkage estimators appear in Appendix <ref>. A concise hierarchical model is

Y_i|θ_i ∼ N(θ_i, σ), i=1,2,⋯,n,
θ_i ∼ N(γ, τ),

where Y=(Y_1, ⋯, Y_n) are the observations, σ is the standard error of the observations, θ=(θ_1,⋯, θ_n) are the object-level parameters of interest, and γ and τ are the unknown population-level mean and standard deviation parameters of a Gaussian distribution[In this paper we parameterise univariate Gaussian distributions in terms of their means and standard deviations. Generally, we write Y|θ∼N(μ, σ) to indicate that given θ, Y follows a Gaussian (or Normal) distribution with mean μ and standard deviation σ.]. Bayesian statistical methods use the conditional probability distribution of the unknown parameters given the observed data to represent uncertainty and to generate parameter estimates and error bars. This conditional probability distribution is called the posterior distribution and in our notation is written p(γ, τ, θ|Y). Deriving the posterior distribution via Bayes' theorem requires us to specify a prior distribution, which summarises our knowledge about likely values of the unknown parameters before seeing the data; see <cit.> and <cit.> for applications of Bayesian methods in the context of astrophysical analyses. Our prior distribution on θ is given in Eq. <ref> and we choose the non-informative prior distribution p(γ, τ)∝1 for γ and τ, which is a standard choice in this setting <cit.>. Two commonly used Bayesian methods to fit the hierarchical model in Eqs. <ref>–<ref> are the fully Bayesian (FB) and the empirical Bayes (EB) methods. FB <cit.> fits all of the unknown parameters via their joint posterior distribution

p(γ, τ, θ|Y) ∝ p(γ, τ)∏_i=1^n p(θ_i|γ, τ)∏_i=1^n p(Y_i|θ_i).

Generally, we employ Markov chain Monte Carlo (MCMC) algorithms to obtain a sample from the posterior distribution, p(γ, τ, θ|Y).
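To make the FB recipe concrete for the simple Gaussian model above, the following sketch (our illustration, not the authors' code) draws an MCMC sample of (θ, γ, τ) by Gibbs sampling under the flat prior p(γ, τ) ∝ 1; the conditional for τ² under this prior is an inverse-gamma distribution.

```python
import numpy as np

def gibbs_hierarchical(y, sigma, n_iter=5000, seed=0):
    """Gibbs sampler for Y_i ~ N(theta_i, sigma), theta_i ~ N(gamma, tau),
    with the flat prior p(gamma, tau) propto 1 (a sketch of the FB approach)."""
    rng = np.random.default_rng(seed)
    n = len(y)
    theta, gamma, tau = y.copy(), y.mean(), y.std() + 1e-6
    draws = {"theta": [], "gamma": [], "tau": []}
    for _ in range(n_iter):
        # theta_i | rest: precision-weighted combination of datum and population
        prec = 1 / sigma**2 + 1 / tau**2
        mean = (y / sigma**2 + gamma / tau**2) / prec
        theta = rng.normal(mean, np.sqrt(1 / prec))
        # gamma | rest: normal around the mean of the theta_i
        gamma = rng.normal(theta.mean(), tau / np.sqrt(n))
        # tau^2 | rest: inverse-gamma((n-1)/2, S/2) under the flat prior on tau
        S = np.sum((theta - gamma) ** 2)
        tau = np.sqrt((S / 2) / rng.gamma((n - 1) / 2))
        draws["theta"].append(theta.copy())
        draws["gamma"].append(gamma)
        draws["tau"].append(tau)
    return {k: np.array(v) for k, v in draws.items()}
```

Averaging the sampled θ_i after burn-in gives shrinkage estimates: each posterior mean lies between the observation Y_i and the fitted population mean.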
The MCMC sample can be used to i) generate parameter estimates, e.g., by averaging the sampled parameter values, ii) generate error bars, e.g., by computing percentiles of the sampled parameter values, and iii) represent uncertainty, e.g., by plotting histograms or scatter plots of the sampled values. For intricate hierarchical models, however, it may be computationally challenging to obtain a reasonable MCMC sample. EB <cit.> uses the data to first fit the parameters of the prior distribution in Eq. <ref> and then, given these fitted parameters, infers the parameters in Eq. <ref> in the standard Bayesian way. Specifically, γ and τ are first estimated as γ̂ and τ̂, and then the prior distribution θ_i∼N(γ̂, τ̂) is used in a Bayesian analysis to estimate the θ_i. Thus, EB proceeds in two steps.

Step 1 Find the maximum a posteriori (MAP) estimates of γ and τ by maximising their joint posterior distribution, i.e., (γ̂, τ̂)=max_γ, τ p(γ,τ|Y)=max_γ, τ ∫ p(γ,τ, θ|Y)dθ.

Step 2 Use N(γ̂, τ̂) as the prior distribution for θ_i, i=1, ⋯, n, and estimate each θ_i in the standard Bayesian way, i.e., p(θ_i|Y_i, γ̂, τ̂)∝p(Y_i|θ_i)p(θ_i|γ̂, τ̂).

When applying EB to fit a hierarchical model, it is possible that the estimate of the standard deviation τ equals 0, which leads to θ̂_1=⋯=θ̂_n=γ̂. This is generally not a desirable result. We can avoid τ̂=0 by using the transformations ξ=logτ or δ=1/τ <cit.>. We refer to EB implemented with these transformations as EB-log and EB-inv, respectively. Step 2 of EB-log and EB-inv remains exactly the same as that of EB, but Step 1 changes. Specifically, Step 1 of EB-log is

Step 1 Find the MAP estimates of γ and ξ by maximising their joint posterior distribution, i.e., (γ̂, ξ̂)=max_γ, ξ p(γ,exp(ξ)|Y)exp(ξ), and then set τ̂=exp(ξ̂), where p(·|Y) is the posterior distribution of γ and τ. Thus, (γ̂, τ̂)=max_γ, τ p(γ,τ|Y)τ. Comparing Eq. <ref> with Eq. <ref>, the added factor of τ in Eq. <ref> prevents τ̂ from being zero.
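For the simple Gaussian model of Eqs. <ref>–<ref>, Step 1 of EB-log can be carried out directly, since marginally Y_i|γ,τ ∼ N(γ, √(σ²+τ²)) and the MAP of γ is the sample mean. The sketch below (our illustration, not part of any existing package) maximises the profile log-posterior of ξ = log τ, including the exp(ξ) Jacobian factor, by a one-dimensional grid search, then performs the Step 2 shrinkage in closed form.

```python
import numpy as np

def eb_log(y, sigma, xi_grid=np.linspace(-10.0, 5.0, 20001)):
    """EB-log for Y_i ~ N(theta_i, sigma), theta_i ~ N(gamma, tau), with
    flat prior p(gamma, tau) propto 1.  Marginally Y_i ~ N(gamma,
    sqrt(sigma^2 + tau^2)), so gamma-hat = ybar and xi = log(tau) is found
    by searching the profile log-posterior (a sketch)."""
    y = np.asarray(y, float)
    n, ybar = len(y), y.mean()
    S0 = np.sum((y - ybar) ** 2)
    var = sigma**2 + np.exp(2 * xi_grid)          # marginal variance of Y_i
    # profile log-posterior in xi; the trailing + xi_grid is the log-Jacobian
    log_post = -0.5 * n * np.log(var) - S0 / (2 * var) + xi_grid
    tau_hat = np.exp(xi_grid[np.argmax(log_post)])
    # Step 2: conjugate-normal shrinkage of each theta_i toward gamma_hat
    w = tau_hat**2 / (tau_hat**2 + sigma**2)
    return ybar, tau_hat, w * y + (1 - w) * ybar
```

The Jacobian term is what keeps the fitted τ strictly positive, mirroring the added factor of τ in the displayed equation.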
Step 1 of EB-inv proceeds similarly.

§ ANALYSES FOR FIELD HALO WHITE DWARFS

Our model is based on obtaining photometric magnitudes for n WDs from the Galactic halo. We denote the l-dimensional observed photometric magnitudes for the i-th WD by X_i and the known variance-covariance matrix of its measurement errors by Σ_i. Our goal is to use X_i to estimate the age, distance modulus, metallicity, and zero-age main sequence (ZAMS) mass of the WD. Our WD model is specified in terms of the log_10(age), distance modulus, metallicity, and ZAMS mass of the WDs, and we denote these parameters by A_i, D_i, Z_i and M_i for i=1,⋯, n, respectively. Because we are primarily interested in WD ages, we group the other stellar parameters into Θ_i=(D_i, Z_i, M_i). Finally, to simplify notation, we write X= (X_1, ⋯, X_n), A= (A_1, ⋯, A_n), and Θ= (Θ_1, ⋯, Θ_n). Here we review a case-by-case analysis method for WDs and develop convenient approaches for obtaining hierarchical modelling fits with improved statistical properties.

§.§ Existing Case-by-case Analysis

The public-domain Bayesian software suite, Bayesian Analysis of Stellar Evolution with 9 parameters (BASE-9), allows one to precisely estimate cluster parameters based on photometry <cit.>. We have applied BASE-9 to key open clusters <cit.>, extended BASE-9 to study mass loss from the main sequence through the white dwarf stage, i.e., the IFMR <cit.>, and have demonstrated that BASE-9 can derive the complex posterior age distributions of individual field white dwarf stars <cit.>. In this article we focus on the development of BASE-9 for fitting the parameters of individual WD stars.
BASE-9 employs a Bayesian approach to fit the parameters. The statistical model underlying BASE-9 relates a WD's photometry to its parameters,

X_i|A_i, Θ_i ∼ N_l(G(A_i, Θ_i), Σ_i),

where N_l represents an l-variate Gaussian distribution and G(·) represents the underlying astrophysical models that predict a star's photometric magnitudes as a function of its parameters. Specifically, G combines models for the main sequence through the red giant branch <cit.> and the subsequent white dwarf evolution <cit.>. The Bayesian approach employed by BASE-9 requires a joint prior density on (A_i, Θ_i) for each WD. We assume this prior can be factored into

p(A_i, Θ_i)=p(A_i|μ_A_i,σ_A_i)p(D_i|μ_D_i,σ_D_i) × p(Z_i|μ_Z_i,σ_Z_i)p(M_i),

where the individual prior distributions on age, distance modulus, and metallicity, p(A_i|μ_A_i,σ_A_i), p(D_i|μ_D_i,σ_D_i), and p(Z_i|μ_Z_i,σ_Z_i), are normal densities, each with its own prior mean (i.e., μ_A_i, μ_D_i, and μ_Z_i) and standard deviation (i.e., σ_A_i, σ_D_i, and σ_Z_i). When possible, these prior distributions are specified using external studies. The prior on the mass M_i is specified as the initial mass function (IMF) taken from <cit.>, i.e., log_10(M_i)∼N(μ=-1.02, σ=0.67729). BASE-9 deploys an MCMC sampler to separately obtain an MCMC sample from each WD's joint posterior distribution,

p(A_i, Θ_i|X_i) ∝ p(X_i|A_i, Θ_i)p(A_i|μ_A_i,σ_A_i) × p(D_i|μ_D_i,σ_D_i)p(Z_i|μ_Z_i,σ_Z_i)p(M_i).

In this manner, we can obtain case-by-case fits of A_i and Θ_i for each WD using BASE-9. In this paper, for both the case-by-case and the hierarchical analyses, we obtain MCMC samples for most of the parameters. After we obtain a reasonable MCMC sample for A_i, Θ_i, i=1, ⋯, n, we estimate these quantities and their 1σ error bars using the means and standard deviations of their MCMC samples, respectively.
For example, letting A_i^(s), s=1, ⋯, S be an MCMC sample for A_i of size S, after suitable burn-in <cit.>, the posterior mean and standard deviation of A_i are approximated by

Â_i=∑_s=1^S A_i^(s)/S, σ̂_A_i=√(∑_s=1^S(A_i^(s)-Â_i)^2/(S-1)).

When the posterior distribution of the parameter A_i is highly asymmetric, its posterior mean and 1σ error bar may not be a good representation of the posterior distribution. In this case, we might instead compute the 68.3% posterior interval of A_i as the range between the 15.87% and 84.13% quantiles of the MCMC sample.

§.§ Hierarchical Modelling of a Group of WDs

In this section, we embed the model in Eq. <ref> into a hierarchical model for a sample of halo WDs,

X_i|A_i, Θ_i ∼ N_l(G(A_i, Θ_i), Σ_i),
A_i ∼ N(γ,τ).

In this hierarchical model A_i, D_i, Z_i, and M_i are the object-level parameters, while γ and τ are the population-level parameters, namely the mean and standard deviation of the log_10 ages of WDs in the Galactic halo. The assumption of a common population incorporating an age constraint is the source of the statistical shrinkage that we illustrate below. For the prior distributions of each Θ_i, we take the same strategy as in the case-by-case analysis in Eq. <ref>. For the population-level parameters γ and τ, we again choose the non-informative prior distribution, i.e., p(γ, τ)∝ 1. The joint posterior distribution of the parameters in the hierarchical model is

p(γ, τ, A, Θ|X) ∝ p(γ,τ)∏_i=1^n p(X_i|A_i, Θ_i)p(A_i|γ,τ) × p(D_i|μ_D_i,σ_D_i)p(Z_i|μ_Z_i,σ_Z_i)p(M_i).

§.§.§ Fully Bayesian Method

The FB approach obtains an MCMC sample from the joint posterior distribution in Eq. <ref>. Here we employ a two-stage algorithm <cit.> to obtain the FB results. This algorithm takes advantage of the case-by-case samples of Section <ref> and is easy to implement. A summary of the computational details of FB appears in Appendix <ref>.

§.§.§ Empirical Bayes Method

We also illustrate how to fit the hierarchical model in Eq. <ref> with EB.
First, the joint posterior distribution of γ and τ is calculated as

p(γ, τ|X)=∫⋯∫ p(γ,τ, A, Θ|X) dA dΘ.

The integration in Eq. <ref> is 4×n dimensional, which is computationally challenging. To tackle this, we use the Monte Carlo Expectation-Maximization (MCEM) algorithm <cit.> to find the MAP estimates of γ and τ. To avoid estimating τ as zero when its (profile) posterior distribution is highly skewed <cit.>, we again implement EB-log (ξ=logτ) or EB-inv (δ=1/τ). For EB-log, the joint posterior distribution of γ and ξ equals p(γ,exp(ξ)|X)exp(ξ), where p(·|X) is the joint posterior distribution of γ and τ. The EB-log method proceeds in two steps.

Step 1: Deploy MCEM to obtain the MAP estimates of γ and ξ, and transform back to γ and τ, i.e., (γ̂,τ̂)=max_γ, τ p(γ, τ|X)τ. For details of MCEM in this setting, see Appendix <ref>.

Step 2: For WD i=1, ⋯, n, obtain an MCMC sample from p(A_i, Θ_i|X_i, γ̂, τ̂)∝p(X_i|A_i, Θ_i)p(Θ_i)p(A_i|γ̂, τ̂) using BASE-9.

EB-inv proceeds in a similar manner, but with Eq. <ref> replaced by (γ̂,τ̂)=max_γ, τ p(γ, τ|X)τ^2, where p(·|X) is again the posterior distribution of γ and τ.

§ SIMULATION STUDY

To illustrate the performance of the various estimators of the object-level WD ages and the population-level parameters γ and τ, we perform a set of simulation studies.
Because the relative advantage of the shrinkage estimates over the case-by-case estimates depends both on the precision of the case-by-case estimates and on the degree of heterogeneity of the object-level parameters, we repeat the simulation study under five scenarios, each with different values of the observation error matrix Σ and of the population standard deviation of log_10(age), i.e., of τ. We simulate the parameters {A_i, D_i, Z_i, M_i, i=1,2, ⋯, N_1} for each group of WDs from the distributions in Table <ref>, where γ=10.09 (12.30 Gyr) is the population mean and τ varies among the simulation settings given in Table <ref>. For consistency with the data analyses in Section <ref>, we simulate u, g, r, i, z, J, H, K magnitudes for all WDs. Using BASE-9 for each setting, we simulate N_2=25 replicate datasets, each composed of N_1=10 halo WDs. For each WD in every group, we generate its log_10(age), distance modulus, metallicity and mass from the distributions in Table <ref>, where τ is given in Table <ref>. The particular values and truncations in Tables <ref> and <ref> are chosen because they reflect plausible values for actual halo WDs. We compute the empirical standard error for each simulated magnitude by averaging the errors of the observed halo WDs in Section <ref>, and we denote by Σ_0 the variance matrix of the observed magnitudes, i.e., the square of the empirical standard errors for all eight magnitudes. Specifically, Σ_0 is a diagonal matrix with diagonal elements equal to (0.304^2, 0.092^2, 0.027^2, 0.026^2, 0.068^2, 0.062^2, 0.086^2, 0.083^2). For simplicity, in each setting all stars share the same diagonal observation variance, that is, Σ_i=Σ for i =1, 2, ⋯, N_1.
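The ingredients just described translate directly into code. The sketch below (a hypothetical helper, not the actual study, which draws the full parameter set from truncated priors and pushes it through BASE-9) records the Σ_0 matrix from the text and draws the true log_10(age) values A_ij for the replicate groups.

```python
import numpy as np

# Empirical photometric error matrix Sigma_0 from the text: diagonal, with
# the squared standard errors of the u, g, r, i, z, J, H, K magnitudes.
SIGMA_0 = np.diag(np.array([0.304, 0.092, 0.027, 0.026,
                            0.068, 0.062, 0.086, 0.083]) ** 2)

def simulate_log_ages(gamma=10.09, tau=0.05, n_stars=10, n_groups=25, seed=0):
    """Draw true log10(age) values A_ij for n_groups replicate groups of
    n_stars halo WDs from the population distribution N(gamma, tau)."""
    rng = np.random.default_rng(seed)
    return rng.normal(gamma, tau, size=(n_groups, n_stars))
```

Scaling SIGMA_0 by 1.2² or 0.8², and varying tau, reproduces the error structure of the five settings described below.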
The observation error variances for the five simulation settings are described in terms of Σ_0 in Table <ref>. In the entire simulation study, we employ the <cit.> WD precursor models, <cit.> WD interior models, <cit.> WD atmospheres, and the <cit.> IFMR. Subsequently, we recover the parameters with multiple approaches: EB, EB-log, EB-inv, FB, and the case-by-case analysis. We specify non-informative broad prior distributions on each Θ_i, namely D_i∼N(4.0, 2.4^2), Z_i∼N(-1.5, 0.5^2) and log_10(M_i)∼N(-1.02, 0.67729). The case-by-case analyses require a prior distribution on each A_i, and we use p(A_i)∝ 1. The hierarchical model in Eq. <ref> requires priors on γ and τ, and we again use p(γ, τ)∝ 1. We compare the case-by-case estimates with the shrinkage estimates based on the hierarchical model. Results from the case-by-case analyses (obtained by fitting the model in Eq. <ref>) are indicated with a superscript I (for “individual”) and those from the hierarchical analyses (obtained by fitting the model in Eq. <ref>) are indicated with a superscript H. We denote the log_10(age) of the i-th simulated WD in the j-th replicate group by A_ij. Using both the case-by-case and hierarchical analyses, we obtain MCMC samples of the parameters A_ij, i=1, 2, ⋯, N_1, j=1, 2, ⋯, N_2. We estimate A_ij by taking the MCMC sample mean as in Eq. <ref> and denote the estimates based on the case-by-case and hierarchical analyses by Â_ij^I and Â_ij^H, respectively. We compute the error[We use this term to refer to the absolute value of the error.] of each estimator Â_ij as

error(Â_ij)=|Â_ij-A_ij|.

In our simulation study, we are mainly concerned with the difference between the absolute errors of the shrinkage and case-by-case estimates,

Diff(A_ij)= error(Â_ij^I)- error(Â_ij^H)=|Â^I_ij-A_ij|-|Â^H_ij-A_ij|,

which compares the prediction accuracy of the two methods. If Diff(A_ij)≥0, then the absolute deviation of the case-by-case estimate for star A_ij is greater than that of the shrinkage estimate. Fig.
<ref> compares the performance of the shrinkage estimates under Simulation Setting I (τ=0.05, Σ=Σ_0). The corresponding summary plots for the other simulation settings are similar and appear in Figs. <ref>–<ref> of the online supplement. The histograms in Fig. <ref> demonstrate that the estimates of γ and τ are generally close to their true values (thick, dashed red lines). Under all 5 settings, however, for some replicate groups of halo WDs, EB produces estimates of τ equal to 0; see the first row, middle panel of Fig. <ref>. (This phenomenon is fully discussed in Appendix <ref> and Fig. <ref>.) In these cases, the shrinkage estimates of the ages of all WDs in the group are equal, which potentially leads to large errors. As we mentioned in Section <ref>, this highlights a difficulty with EB, and demonstrates the need for the transformed EB-log or EB-inv. Both of these approaches produce results similar to EB, but avoid the possibility of τ̂=0. The third column in Fig. <ref> shows the scatter plot of Diff(A_ij) against A_ij, i=1, 2, ⋯, N_1, j=1, 2, ⋯, N_2. Because most of the scatter in these plots lies above the solid red zero line, the estimates of log_10(age) from the case-by-case analyses tend to be further from the true values than the shrinkage estimates. Approximately two thirds of the N_1×N_2 simulated stars in each setting are better estimated with the shrinkage method than with the case-by-case fit. For stars below the red solid lines, nominally the case-by-case fit is better, but the advantage is small. In fact, for almost all simulated stars, Diff(A_ij)= error(Â_ij^I)- error(Â_ij^H) > -0.1, so the shrinkage estimates never perform much worse than the case-by-case estimates and often perform much better. For some stars, we have Diff(A_ij) >0.5.
From the point of view of the reliability of the technique, it is comforting that the four hierarchical fits (EB, EB-log, EB-inv, FB) perform similarly, at least when τ̂>0 for EB. Table <ref> presents a numerical comparison of the shrinkage and case-by-case estimates of log_10(age). Specifically, it presents the average mean absolute error (MAE) and the average root mean squared error (RMSE) for each method, i.e.,

MAE(A)=1/(N_1·N_2)∑_j=1^N_2∑_i=1^N_1|Â_ij-A_ij|,
RMSE(A)=√(1/(N_1·N_2)∑_j=1^N_2∑_i=1^N_1(Â_ij-A_ij)^2).

Both MAE and RMSE measure the distance between the true values and their estimates; smaller MAE and RMSE indicate that an estimator is more accurate. Table <ref> summarises the performance of the different estimators under the five simulated settings. In terms of MAE and RMSE, all four shrinkage estimates (EB, EB-log, EB-inv, FB) are significantly better than the case-by-case estimates, though there are slight differences among the four shrinkage estimates. Table <ref> reports the percentage of simulated WDs that are better estimated by the shrinkage methods than by the case-by-case fits, for each of the four statistical approaches and each of the five simulation settings. We conclude that 60%–75% of simulated stars have a more reliable age estimate from the hierarchical analyses than from the case-by-case analyses. From Tables <ref> and <ref>, we conclude that the shrinkage estimates from both the EB-type and FB approaches outperform the case-by-case analyses in terms of smaller RMSE and MAE. Under the five simulated settings, all four computational methods, EB, EB-log, EB-inv and FB, behave similarly: their MAEs and RMSEs are comparable, and the percentages of better estimated WDs from the four approaches are consistent. Simulation Setting III (τ=0.03, Σ=1.2^2Σ_0) benefits most from the shrinkage estimates in terms of reduced RMSE.
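The comparison metrics above are straightforward to compute once the true and estimated ages are in hand; a minimal sketch (our illustration):

```python
import numpy as np

def compare_estimates(true_A, est_ind, est_hier):
    """MAE, RMSE, and the fraction of stars whose hierarchical (shrinkage)
    estimate beats the case-by-case one, i.e. Diff(A_ij) >= 0."""
    true_A, est_ind, est_hier = map(np.asarray, (true_A, est_ind, est_hier))
    def mae(est):
        return np.mean(np.abs(est - true_A))
    def rmse(est):
        return np.sqrt(np.mean((est - true_A) ** 2))
    # Diff(A_ij) = error(case-by-case) - error(shrinkage)
    diff = np.abs(est_ind - true_A) - np.abs(est_hier - true_A)
    return {"MAE": (mae(est_ind), mae(est_hier)),
            "RMSE": (rmse(est_ind), rmse(est_hier)),
            "frac_shrinkage_better": np.mean(diff >= 0)}
```

Flattening the N_1 × N_2 arrays of true and estimated log_10(age) values and passing them to this helper reproduces the three quantities reported in the tables.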
The RMSE from the case-by-case fits under Simulation Setting III is approximately 0.20, while the RMSE from EB is around 0.031, less than one sixth of the former. Simulation Setting IV (τ=0.06, Σ=0.8^2Σ_0) gains the least from the shrinkage estimates; the RMSE of EB (0.063) is about a quarter of the RMSE of the case-by-case fits (0.26). Generally, when Σ is large and τ is small, the advantage of the shrinkage estimates is greatest. With small Σ and large τ, the advantage of the shrinkage estimates over the case-by-case estimates decreases. This is consistent with statistical theory <cit.>. Generally speaking, we recommend using EB-log rather than EB in order to avoid a fitted variance of zero. In terms of computational investment, the FB algorithm is less time-consuming than all of our EB algorithms.

§ ANALYSIS OF A GROUP OF CANDIDATE HALO WHITE DWARFS

We now turn to the hierarchical and case-by-case analyses of the 10 field WDs from the Galactic halo listed in Table <ref>. In the entire analysis, we employ the <cit.> WD precursor models, <cit.> WD interior models, <cit.> WD atmospheres, and the <cit.> IFMR. We acquire prior densities on M_i, D_i, and Z_i, i=1, ⋯, 10 from the literature <cit.>. The atmospheric compositions and priors on the distance moduli for these 10 stars are listed in Table <ref>. We use the ZAMS mass prior IMF from <cit.> on M_i and a diffuse prior on metallicity, Z_i∼ N(-1.50, 0.5). In this article, we do not leave the WD core composition as a free parameter; instead, we use the WD cooling model derived from the work of <cit.>. For the case-by-case fitting of each WD, we employ a minimally informative flat prior on A_i, specifically A_i∼ Unif(8.4, 10.17609).
§.§ Case-by-case Analysis

We derive the joint posterior density of the parameters using Bayes' theorem:

p(A_i, D_i, Z_i, M_i|X_i) ∝ p(X_i|A_i, D_i, Z_i, M_i)p(A_i)p(D_i)p(Z_i)p(M_i).

Before specifying a hierarchical model for the 10 WDs, we obtain their case-by-case fits using BASE-9 <cit.>. Using the priors in Table <ref> and those described above, we fit each of the 10 halo WDs individually with BASE-9. We present results for 5 typical stars in Fig. <ref>. Each column in Fig. <ref> corresponds to one WD, and the rows provide different two-dimensional projections of the posterior distributions. The asymmetric errors in the fitted parameters, including age, are evident. The first row illustrates that the correlation between metallicity and age for these five WDs is weak. In the second row, the distance and age of the WDs have a strong positive correlation for ages less than 10 Gyr; however, this pattern generally disappears for ages greater than 10 Gyr. From the third and fourth rows, the ZAMS mass displays a clear negative correlation with both age and distance. The plots show that the range of possible ages for these five stars is large, from 8 Gyr to 15 Gyr. Assuming each of these ten WDs is a bona fide Galactic halo member, we expect their ages to be similar; however, their masses, distance moduli and metallicities may vary substantially. In this situation, it is sensible to deploy hierarchical modelling on the ages of these 10 WDs, which provides substantial additional information.

§.§ Hierarchical Analysis

Here we deploy both EB-log and FB to obtain fits of the hierarchical model in Eq. <ref> based on the ten candidate Galactic halo WDs. In Fig. <ref>, we compare the posterior density distribution of the age of each WD obtained with the case-by-case method to those obtained with both EB-log and FB. Fig.
<ref> demonstrates that the posterior distributions of the ages under the hierarchical model – both EB-log and FB – peak near a sensible halo age, whereas the case-by-case estimates (solid lines) disperse over a much wider range. The EB-log and FB estimates are consistent, as we discuss further below. The photometric errors of these ten WDs are close to Σ_0 in the simulation study, so the data are similar to Simulation Setting I (τ=0.07, Σ=Σ_0). Hence, the advantage of the shrinkage estimates over the case-by-case estimates shown in Simulation Setting I should be predictive, and we expect that the hierarchical fits (dotted lines) in Fig. <ref> are better estimates of the true ages of these halo WDs. Table <ref> and Fig. <ref> summarise the estimated ages. The 68.3% posterior intervals for the ages of the WDs from EB-log and FB are generally narrower than those from the case-by-case analyses, which means that the shrinkage estimates (FB and EB-log) are more precise. The fits and errors from EB-log and FB are quite consistent. In both BASE-9 and the hierarchical model (Eq. <ref>), the ages A_i of the stars are specified on the log_10(Year) scale. Given an MCMC sample from the posterior distribution of age on the log_10(Year) scale, we can obtain an MCMC sample on the age scale by back-transforming each value in the sample via

age=10^(A_i-9),

where the units of age and log_10(age) are Gyr and log_10(Year), respectively. For the population-level parameters γ and τ, however, the transformation from the log_10(Year) scale to the Gyr scale is more complicated. Again starting with the MCMC sample of γ and τ, for each sampled pair we (i) generate a Monte Carlo sample of A_i, (ii) transform this sample to the Gyr scale as in Eq. <ref>, and (iii) compute the mean and standard deviation of the transformed Monte Carlo sample. Histograms of the resulting sample from the posterior distribution of the mean and standard deviation of the age on the Gyr scale appear in Fig.
<ref>. We present estimates of the population distribution of the ages of Galactic halo WDs in Table <ref>, on both the Gyr and log_10(Year) scales. In the first two rows, we report the 68.3% posterior intervals for the mean (γ) and standard deviation (τ) of the distribution of the ages of halo WDs, 12.11_-0.86^+0.85 Gyr and 1.18_-0.62^+0.57 Gyr, respectively. The point estimates of the population mean (11.43 Gyr) and standard deviation (1.86 Gyr) from EB-log are quite consistent with the results of FB. EB-log does not directly provide error estimates for the population mean and standard deviation, but bootstrap techniques <cit.> could be used. We do not pursue this here, because it is computationally expensive and uncertainties are provided by FB. In Table <ref>, we also report the 68.3% predictive intervals of the age distribution from FB and EB-log, which summarise the underlying distribution of halo WD ages. These are our estimates of an interval that contains the ages of 68.3% of halo WDs. From FB, the 68.3% predictive interval for the distribution of halo WD ages is 12.11_-1.53^+1.40 Gyr. In other words, we predict that 68.3% of WDs in the Galactic halo have ages between 10.58 and 13.51 Gyr. The 68.3% predictive interval from EB-log is 11.43±1.86 Gyr, slightly broader than that from FB. In summary, our hierarchical method finds that the Galactic halo has a mean age of 12.11_-0.86^+0.85 Gyr. Furthermore, the halo appears to have a measurable age spread, with standard deviation 1.18_-0.62^+0.57 Gyr. This result is preliminary as we await Gaia parallaxes to tightly constrain distances, which constrains both ages and stellar space motions.
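The three-step transformation of the (γ, τ) draws from the log_10(Year) scale to the Gyr scale can be sketched as follows (our illustration of the recipe above):

```python
import numpy as np

def population_age_gyr(gamma_draws, tau_draws, n_mc=1000, seed=0):
    """Turn MCMC draws of (gamma, tau) on the log10(Year) scale into
    posterior draws of the population mean and SD of age in Gyr."""
    rng = np.random.default_rng(seed)
    means, sds = [], []
    for g, t in zip(gamma_draws, tau_draws):
        A = rng.normal(g, t, n_mc)   # step (i): Monte Carlo draws of A_i
        age = 10 ** (A - 9)          # step (ii): log10(Year) -> Gyr
        means.append(age.mean())     # step (iii): summarise on the Gyr scale
        sds.append(age.std(ddof=1))
    return np.array(means), np.array(sds)
```

Histogramming the two returned arrays reproduces the Gyr-scale posterior summaries of the population mean and standard deviation. Note that because the transformation is nonlinear, the Gyr-scale mean is not simply 10^(γ-9).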
If one or a few of these WDs have anomalous atmospheres, are unresolved binaries, or are not true halo members, including them in this hierarchical analysis could artificially increase the estimated halo age spread. Our mean halo age estimate is consistent with other WD-based age measurements for the Galactic halo. For halo field WDs, these estimates are 11.4 ± 0.7 Gyr <cit.>, 11–11.5 Gyr <cit.>, and 10.5^+2.0_-1.5 Gyr <cit.>. Although broadly consistent, these studies all use somewhat different techniques. The study of <cit.> relies on spectroscopic determinations of field and globular cluster WDs. The <cit.> analysis depends on photometry and trigonometric parallaxes, as does our work, yet at that point only two halo WDs were available for their study. The <cit.> study is based on the halo WD luminosity function isolated by <cit.>. Although this sample contains 135 likely halo WDs, there are as yet no trigonometric parallaxes or spectroscopy to independently constrain their masses. Thus, all of these samples suffer some defects, and it is comforting to see that different approaches to these different halo WD datasets yield consistent halo ages. Another comparison to the field WD halo age is the WD age of those globular clusters that have halo properties. Three globular clusters have been observed to sufficient depth to obtain their WD ages, and two of these (M4 and NGC 6397) exhibit halo kinematics and abundances. The WD age of M4 is 11.6±2 Gyr <cit.> and that of NGC 6397 is 11.7±0.3 Gyr <cit.>. These halo ages for globular cluster stars are almost identical to those for the halo field. If there is any problem with these ages, it may only be that they are too young, at ∼2 Gyr younger than the age of the Universe. At this point, we lack sufficient data to determine whether this is a simple statistical error, with most techniques having uncertainties in the range of 1 Gyr, or whether there is a systematic error in the WD models or IFMR for these stars, or whether these WD studies have
simply failed to find the oldest Galactic stars. Alternatively, as mentioned above, the field halo age dispersion may really be of order ±2 Gyr, in which case the halo field is sufficiently old, yet these globular clusters may not be. We look forward to future results from Gaia and LSST that should reduce the observational errors in WD studies substantially while dramatically increasing sample sizes. This will allow us to precisely measure the age distribution of the Galactic halo and to place the globular cluster ages into this context.

§ CONCLUSION

We propose the use of hierarchical modelling, fit via EB and FB, to obtain shrinkage estimates of the object-level parameters of a population of objects. We have developed novel computational algorithms to fit hierarchical models even when the likelihood function is complicated. Our new algorithms are able to take advantage of available case-by-case code, with substantial savings in software development effort. By applying hierarchical modelling to a group of 10 Galactic halo WDs, we estimate that 68.3% of Galactic halo WDs have ages between 10.58 and 13.51 Gyr. This tight age constraint from the photometry of only 10 halo WDs demonstrates the power of our Bayesian hierarchical analysis. In the near future, we expect not only better calibrated photometry for many more WDs, but also to incorporate highly informative priors on distance and population membership from the Gaia satellite's exquisite astrometry. We look forward to using these WDs to fit our hierarchical model in order to derive an accurate and precise Galactic halo age distribution.

§ ACKNOWLEDGEMENTS

We would like to thank the Imperial College High Performance Computing support team for their kind help, which greatly accelerated the simulation study in this project.
Shijing Si thanks the Department of Mathematics at Imperial College London for a Roth studentship, which supports his research. David van Dyk acknowledges support from a Wolfson Research Merit Award (WM110023) provided by the British Royal Society and from Marie-Curie Career Integration (FP7-PEOPLE-2012-CIG-321865) and Marie-Skłodowska-Curie RISE (H2020-MSCA-RISE-2015-691164) Grants, both provided by the European Commission.

§ SHRINKAGE ESTIMATES

In this appendix, we discuss shrinkage estimates and their advantages. Consider, for example, a Gaussian model,

Y_i|θ_i ∼ N(θ_i, σ), i = 1, 2, ⋯, n,

where Y=(Y_1,…,Y_n) is a vector of independent observations of each of n objects, θ=(θ_1,…,θ_n) is the vector of object-level parameters of interest, and σ is the known measurement error. A simple technique to fit Eq. <ref> is to estimate each θ_i individually, θ̂^ind_i = Y_i, i.e. θ̂^ind = Y, using only its corresponding data. If a population is believed to be homogeneous, however, we might suppose, in the extreme case, that all of the objects have the same parameter, θ_1 = θ_2 = ⋯ = θ_n. For example, one might suppose stars in a cluster all have the same age. Under this assumption, the pooled estimate, θ̂_i^pool = Y̅ = (1/n)∑_i Y_i, i.e. θ̂^pool = (Y̅,…,Y̅), is appropriate.

The mean squared error (MSE) is a statistical quantity that can be used to evaluate the quality of an estimator. As its name implies, it measures the average of the squared deviation between the estimator and the true parameter value. Thus the MSE of θ̂^ind is

MSE(θ̂^ind) = E[∑_i=1^n (θ̂^ind_i - θ_i)^2 | θ] = nσ^2,

where E(·|θ) is the conditional expectation function that assumes θ is fixed and here that θ̂^ind = Y varies according to the model in Eq. <ref>. It is well known in the statistics literature that the individual estimators θ̂^ind are inadmissible if n>3. This means that there is another estimator that has smaller MSE regardless of the true values of θ or σ^2.
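The MSE comparison above can be checked with a short simulation. The sketch below uses synthetic data (the population values and noise level are illustrative, not fit to any WD sample) to verify that the individual estimator attains MSE = nσ² regardless of θ, and that pooling wins when the population is homogeneous.

```python
import random


def mse_individual_vs_pooled(theta, sigma, reps=20000, seed=0):
    """Monte Carlo MSE of the individual (Y_i) and the pooled (Ybar)
    estimators under Y_i | theta_i ~ N(theta_i, sigma)."""
    rng = random.Random(seed)
    n = len(theta)
    mse_ind = mse_pool = 0.0
    for _ in range(reps):
        y = [t + sigma * rng.gauss(0.0, 1.0) for t in theta]
        ybar = sum(y) / n
        mse_ind += sum((yi - ti) ** 2 for yi, ti in zip(y, theta))
        mse_pool += sum((ybar - ti) ** 2 for ti in theta)
    return mse_ind / reps, mse_pool / reps


# Homogeneous population (theta_1 = ... = theta_n): pooling wins,
# while MSE(individual) = n * sigma^2 whatever the true theta is.
mi, mp = mse_individual_vs_pooled([0.0] * 10, sigma=1.0)
# mi ≈ 10 (= n sigma^2), mp ≈ 1 (= sigma^2)
```

When the θ_i are instead widely dispersed, the pooled estimator's bias dominates and the ranking reverses, which is exactly the trade-off the James-Stein weighting below resolves adaptively.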
In particular, the James-Stein estimator of θ,

θ̂^JS = (1-B̂)θ̂^ind + B̂ θ̂^pool, where S^2 = ∑_i (Y_i-Y̅)^2/(n-1) and B̂ = (n-3)σ^2/[(n-1)S^2],

is known to have smaller MSE than θ̂^ind if n > 3 <cit.>[It can be shown that the MSE of θ̂^JS is E[∑_i=1^n (θ̂^JS_i - θ_i)^2|θ] = nσ^2 - σ^2(n-3)E(B̂) < nσ^2 = E[∑_i=1^n (θ̂^ind_i - θ_i)^2|θ], which shows the advantage of the James-Stein estimator in terms of MSE over the individual estimator when n > 3.]. When n > 3, B̂ > 0 and the James-Stein estimator of θ_i is a weighted average of the individual estimate, θ̂_i^ind = Y_i, and the pooled estimate, θ̂^pool_i = Y̅. The James-Stein estimator is an example of a shrinkage estimator: an estimate of a set of object-level parameters that is "shrunk" toward a common central value relative to those derived from the corresponding case-by-case analyses.

The population-level parameters that describe the distribution of (θ_1, …, θ_n) are often also of interest. Suppose we model the population by assuming that θ_i follows a common normal distribution, i.e., we extend the model in Eq. <ref> to

Y_i|θ_i ∼ N(θ_i, σ), i = 1, 2, ⋯, n; θ_i ∼ N(γ, τ^2),

where γ and τ are unknown population-level parameters. The model in Eqs. <ref>-<ref> is a hierarchical model and can be fit using Empirical Bayes (EB) <cit.>. We choose the non-informative prior, p(γ, τ) ∝ 1, which is a standard choice in this setting <cit.>. The EB approach is Bayesian in that it views Eq. <ref> as a prior distribution and is empirical in that the parameters of this prior are fit to the data. Specifically, EB proceeds by first deriving the marginal posterior distribution of γ and τ^2,

p(γ, τ^2|Y) = p(γ, τ^2) ∏_i=1^n ∫ p(Y_i|θ_i) p(θ_i|γ, τ^2) dθ_i,

and then estimating γ and τ^2 with the values that maximise Eq. <ref>. These estimates are γ̂ = Y̅ and τ̂^2 = max{∑_i=1^n (Y_i-Y̅)^2/(n+1) - σ^2, 0}. (Even with the normal assumptions in Eq.
<ref>-<ref>, closed-form estimates of γ and τ^2 are available only under the simplifying assumption that the measurement errors for each Y_i are the same, i.e., σ^2_1 = σ_2^2 = ⋯ = σ_n^2.) Finally, the posterior distribution of θ_i can be expressed as

p(θ_i|Y_i, γ̂, τ̂^2) ∝ p(Y_i|θ_i) p(θ_i|γ̂, τ̂^2).

Each θ_i can be estimated with its posterior mean under Eq. <ref>. Under certain conditions, EB is consistent with James-Stein estimators <cit.>. EB can produce estimators having the same advantages as James-Stein, and it is readily able to handle more complicated problems, whereas James-Stein would require model-specific derivation of MSE-reducing estimators.

§ LARGE MAGELLANIC CLOUD

We illustrate the construction and fitting of hierarchical models and the advantages of shrinkage estimates through an illustrative application to data used to estimate the distance to the LMC. The LMC is a satellite galaxy of the Milky Way. Numerous estimates based on various data sources have been made of the distance modulus to the LMC. The population of stars used affects the estimated distance modulus: estimates based on Population I stars tend to be larger than those based on Population II stars. We use a set of estimates based on Population I stars, and formulate a hierarchical model for these estimates in order to develop a comprehensive estimate. We use the data in Table <ref>, which was compiled by <cit.>.

Besides statistical errors, the various distance estimates may be subject to systematic errors. We aim to estimate the magnitude of these systematic errors. If we further assume that the systematic errors tend to average out among the various estimators, we can obtain a better comprehensive estimator of the distance modulus. Let μ_i be the best estimate of the distance modulus that could be obtained with method i, i.e., with an arbitrarily large dataset.
Because of systematic errors, μ_i does not equal the true distance modulus, but it is free of statistical error. Consider the statistical model,

D_i ∼ N(μ_i, σ_i), i = 1, ⋯, 13, μ_i ∼ N(γ, τ),

where D_i is the actual estimated distance modulus based on method/dataset i, including statistical error, σ_i is the known standard deviation of the statistical error, γ is the true distance modulus of the LMC, and τ is the standard deviation of the systematic errors of the various estimates. Eq. <ref> specifies our assumption that the systematic errors tend to average out. We denote D=(D_1,⋯,D_13) and μ=(μ_1,⋯,μ_13).

We take an EB approach to fitting the hierarchical model in Eqs. <ref>–<ref>. This involves first estimating the population-level parameters γ and τ and then plugging these estimates into Eq. <ref> and using it as the prior distribution for each μ_i. Finally, the individual μ_i are estimated with their posterior expectations, E(μ_i|D, γ̂, τ̂), and their posterior standard deviations, SD(μ_i|D, γ̂, τ̂), are used as 1σ uncertainties.

Our EB approach requires a prior distribution for γ and τ. We choose the standard non-informative prior in this setting, p(γ, τ) ∝ 1. We estimate γ and τ by maximising their joint posterior density,

p(γ, τ|D) ∝ ∫ p(τ, γ, μ|D) dμ = p(γ, τ) ∏_i=1^13 ∫ p(D_i|μ_i) p(μ_i|γ, τ) dμ_i.

The values of γ and τ that maximise Eq. <ref> are known as maximum a posteriori (MAP) estimates. For any τ, Eq. <ref> is maximised with respect to γ by

γ̂(τ) = [∑_i=1^13 D_i/(τ^2+σ_i^2)] / [∑_i=1^13 1/(τ^2+σ_i^2)],

where γ̂(τ) is a function of τ. The profile posterior density of τ is obtained by evaluating Eq. <ref> at γ̂(τ) and τ, i.e., p(γ̂(τ), τ|D). The global maximiser of the profile posterior distribution is the MAP estimate of τ and must be obtained numerically. As shown in the left panel of Fig.
<ref>, the profile posterior density of τ monotonically decreases from its peak at 0, which means that the MAP estimator of τ is 0, a poor summary of the profile posterior. This is because 0 is the lower boundary of the possible values of τ. A better estimate can be obtained using a transformation of the population standard deviation, specifically ξ = ln τ. The joint posterior of γ and ξ can be expressed as

p(γ, ξ|D) = p(γ, exp(ξ)|D) exp(ξ),

where p(·|D) is the posterior distribution of γ and τ. The profile posterior of ξ, plotted in the right panel of Fig. <ref>, is more symmetric and is better summarised by its mode. After having estimated ξ with its MAP estimate, we compute τ̂ = exp(ξ̂) and γ̂ = γ̂(τ̂). See <cit.> for a discussion of transforming parameters to achieve approximate symmetry in the case of mode-based estimates.

Plugging γ̂ and τ̂ into the prior for μ_i given in Eq. <ref>, we can compute the posterior distribution of each μ_i as

p(μ|D, γ̂, τ̂^2) ∝ ∏_i=1^13 p(D_i|μ_i) p(μ_i|γ̂, τ̂).

Fig. <ref> shows the hierarchical and case-by-case posterior distributions of the individual estimates, μ_i. The hierarchical results (dashed red lines) are shrunk toward the centre relative to the case-by-case results (blue solid lines). The case-by-case density functions of μ_i range from 18.0 to 19.2, whereas the hierarchical posterior density functions are more precise, ranging from 18.3 to 18.7. This is an example of the shrinkage of the case-by-case fits towards their average that occurs when fitting a hierarchical model. We can also see this effect in the posterior means,

E(μ_i|τ, D) = [γ̂(τ)/τ^2 + D_i/σ_i^2] / [1/τ^2 + 1/σ_i^2],

which are weighted averages of the case-by-case estimates, D_i, and the combined (MAP) estimate of the distance modulus given in Eq. <ref>. The MAP estimate of the standard deviation of the systematic errors is τ̂ = 0.045, and the corresponding distance modulus is γ̂ = γ̂(τ̂) = 18.525. We compute τ̂ via the MAP estimate of ξ as described above.
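The profile-posterior fit described above can be sketched in a few lines. In the snippet below the data values D_i and σ_i are illustrative placeholders (not the entries of Table 1), and a grid search over ξ stands in for a numerical optimiser.

```python
import math

# Hypothetical distance-modulus estimates D_i with statistical errors sigma_i
# (illustrative placeholders, NOT the values of Table 1)
D = [18.45, 18.52, 18.60, 18.48, 18.55, 18.50]
sig = [0.05, 0.08, 0.10, 0.06, 0.07, 0.09]


def gamma_hat(tau):
    """Precision-weighted mean: the maximiser in gamma for fixed tau."""
    w = [1.0 / (tau**2 + s**2) for s in sig]
    return sum(wi * di for wi, di in zip(w, D)) / sum(w)


def log_profile(xi):
    """Log profile posterior of xi = ln(tau) under the flat prior p(gamma, tau) ∝ 1.
    Marginally D_i ~ N(gamma, sqrt(tau^2 + sigma_i^2)); the change of
    variables tau -> xi contributes a Jacobian term +xi."""
    tau = math.exp(xi)
    g = gamma_hat(tau)
    ll = -0.5 * sum(math.log(tau**2 + s**2) + (d - g) ** 2 / (tau**2 + s**2)
                    for d, s in zip(D, sig))
    return ll + xi


# Maximise over a fine grid in xi (a simple stand-in for an optimiser)
xi_hat = max((k / 1000.0 for k in range(-8000, 2000)), key=log_profile)
tau_hat = math.exp(xi_hat)
g_hat = gamma_hat(tau_hat)

# Shrinkage posterior means E(mu_i | tau, D): each D_i pulled toward g_hat
post = [(g_hat / tau_hat**2 + d / s**2) / (1 / tau_hat**2 + 1 / s**2)
        for d, s in zip(D, sig)]
```

Because the working variable is ξ = ln τ, the estimate of τ cannot collapse onto the boundary at 0, mirroring the transformation argument made in the text.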
The estimate τ̂ measures the extent of heterogeneity among the 13 different published results. To compute the uncertainty of γ̂, we generate 200 bootstrap samples <cit.> of D and for each we compute the MAP estimate of γ, resulting in 200 bootstrap estimates of γ with standard deviation 0.024. Thus our estimate of γ, the distance modulus of the LMC, is 18.53 ± 0.024, that is, 50.72 ± 0.56 kpc.

For illustration, in Fig. <ref> we plot the posterior expectation E(μ_i|τ, D) of each of the best estimates of the distance moduli from each method as a function of τ as 13 coloured lines. The black solid line is the MAP estimate γ̂(τ) plotted as a function of τ. When τ is close to zero, the conditional posterior means of each μ_i shrink toward the overall weighted mean γ̂(0) = [∑_i D_i/σ_i^2] / [∑_i 1/σ^2_i]. The τ = 0 case corresponds to no systematic error and relatively large statistical error. As τ becomes larger, the conditional posterior means approach the case-by-case estimators of the distance moduli, marked by plus signs at the far right of Fig. <ref>. The red dashed vertical line indicates our estimate of τ and intersects the coloured curves at the hierarchical estimates of each μ_i. Fig. <ref> shows how the hierarchical fit reduces to the case-by-case analyses as the variance of the systematic errors goes to infinity. We include Fig. <ref> to illustrate the "shrinkage" of the estimates produced with hierarchical models, but such a plot is not needed to obtain the final fit.

§ THE TWO-STAGE ALGORITHM FOR FB

In this section, we illustrate how to fit the hierarchical model (in Eq. <ref>) via our two-stage algorithm. For more details about this algorithm, see <cit.>.

Step 0a: For each WD, run BASE-9 to obtain an MCMC sample of p(A_i, Θ_i|X_i) under the case-by-case analysis.
Thin each chain to obtain an essentially independent MCMC sample and label it {A_1^(t), Θ_1^(t), ⋯, A_n^(t), Θ_n^(t), t = 1, 2, ⋯, t_MC}.

Step 0b: Initialise each WD age at Ã_i^(1) = A_i^(1) and the other parameters at Θ̃_i^(1) = Θ_i^(1).

For s = 1, 2, ⋯, we run Step 1 and Step 2.

Step 1: Sample γ̃^(s) and τ̃^(s) from p(γ, τ | Ã_1^(s), ⋯, Ã_n^(s)).

Step 2: Randomly generate n integers between 1 and t_MC, and denote them r_1, ⋯, r_n. For each i, set A_i^∗ = A_i^(r_i), Θ_i^∗ = Θ_i^(r_i) as the new proposal and set Ã_i^(s+1) = A_i^∗, Θ̃_i^(s+1) = Θ_i^∗ with probability

α = min{1, [p(A_i^∗|γ̃^(s), τ̃^(s)) / p(A_i^∗|μ_A_i, σ_A_i)] / [p(Ã_i^(s)|γ̃^(s), τ̃^(s)) / p(Ã_i^(s)|μ_A_i, σ_A_i)]}.

Otherwise, set Ã_i^(s+1) = Ã_i^(s), Θ̃_i^(s+1) = Θ̃_i^(s).

Steps 1 and 2 are iterated until a sufficiently large MCMC sample is obtained. If a good sample from the case-by-case analysis is available, this two-stage sampler takes only a few minutes to obtain an MCMC sample from the FB posterior distribution for the hierarchical model in Eq. <ref>.

§ MCEM-TYPE ALGORITHM

In this section, we present our algorithm to optimise the population-level parameters in Step 1 of the EB-type methods (EB, EB-log and EB-inv). We employ a Monte Carlo Expectation Maximisation (MCEM) algorithm with importance sampling for our EB-type methods. MCEM is a Monte Carlo implementation of the Expectation Maximisation (EM) algorithm. See <cit.> for more details on EM and MCEM, and for an illustration of their application in astrophysics. To apply EM, we treat the object-level parameters, namely A_1, M_1, D_1, T_1, ⋯, A_n, M_n, D_n, T_n, as latent variables. Due to the complex structure of this astrophysical model, it is impossible to obtain the expectation step (E-step) of the ordinary EM algorithm in closed form. MCEM avoids this via a Monte Carlo approximation to the E-step.
We employ two algorithms to compute the MAP estimate of (γ, τ): Approach 1 is MCEM, and Approach 2 uses importance sampling to evaluate the integral in the expectation step instead of drawing new samples from the conditional density of the latent variables. Using Approach 1 to update γ and ξ = ln τ requires invoking BASE-9 once for each WD at each iteration of MCEM. This is computationally expensive and motivates Approach 2. We suggest interleaving Approaches 1 and 2 to construct a more efficient algorithm for computing the MAP estimates of γ and τ.

Approach 1: MCEM

Step 0: Initialise γ = γ^(1), ξ = ξ^(1), d_1 = 1 and τ = exp(ξ^(1)). Repeat for t = 1, 2, ⋯, until an appropriate convergence criterion is satisfied.

Step 1: For star i = 1, ⋯, n, sample A_i^[s,t], Θ_i^[s,t], s = 1, ⋯, S_t, from their joint posterior distribution

p(A_i, Θ_i|X_i, γ^(t), τ^(t)) ∝ p(X_i|A_i, Θ_i) p(A_i|γ^(t), τ^(t)) p(Θ_i),

where S_t is the MCMC sample size at the t-th iteration and should be an increasing function of t (we take S_t = 1000 + 500t).

Step 2: Set

γ^(t+1) = [1/(S_t · n)] ∑_i=1^n ∑_s=1^S_t A_i^[s,t],
ξ^(t+1) = log( [1/(S_t · (n-1))] ∑_i=1^n ∑_s=1^S_t (A_i^[s,t] - γ^(t+1))^2 ) / 2,
τ^(t+1) = exp(ξ^(t+1)).

Approach 2: EM with importance sampling

Suppose we have, from the ∗-th iteration, a sample (A_i^[∗,s], Θ_i^[∗,s]), i = 1, ⋯, n, s = 1, ⋯, S_∗, from the joint posterior distribution p(A_i, Θ_i|X_i, γ^∗, τ^∗) given γ = γ^∗, τ = τ^∗. Set

w_i^[t,s] = [ϕ(A_i^[∗,s]|γ^(t), τ^(t)) / ϕ(A_i^[∗,s]|γ^∗, τ^∗)] / ∑_s'=1^S_∗ [ϕ(A_i^[∗,s']|γ^(t), τ^(t)) / ϕ(A_i^[∗,s']|γ^∗, τ^∗)],
γ^(t+1) = (1/n) ∑_i=1^n ∑_s=1^S_∗ A_i^[∗,s] w_i^[t,s],
ξ^(t+1) = log( [1/(n-1)] ∑_i=1^n ∑_s=1^S_∗ w_i^[t,s] [A_i^[∗,s] - γ^(t+1)]^2 ) / 2,
τ^(t+1) = exp(ξ^(t+1)),

where ϕ(x|μ, σ) = [1/√(2πσ^2)] exp(-(x-μ)^2/(2σ^2)).
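A minimal sketch of the Approach 2 update follows. The age draws are synthetic placeholders, and the helper names (`phi`, `is_em_update`) are ours, not BASE-9 routines. Setting (γ^(t), τ^(t)) = (γ^∗, τ^∗) provides a sanity check, since the self-normalised weights then become uniform and the update reduces to plain Monte Carlo averages.

```python
import math
import random


def phi(x, mu, sd):
    """Normal density phi(x | mu, sd)."""
    return math.exp(-((x - mu) ** 2) / (2 * sd**2)) / math.sqrt(2 * math.pi * sd**2)


def is_em_update(A, gamma_t, tau_t, gamma_star, tau_star):
    """One Approach-2 update of (gamma, tau) from stored age draws A[i][s]
    generated under (gamma_star, tau_star), using self-normalised
    importance weights w_i^[t,s]."""
    n = len(A)
    weights = []
    gamma_new = 0.0
    for row in A:
        r = [phi(a, gamma_t, tau_t) / phi(a, gamma_star, tau_star) for a in row]
        tot = sum(r)
        w = [ri / tot for ri in r]  # weights sum to one for each star
        weights.append(w)
        gamma_new += sum(a * wi for a, wi in zip(row, w))
    gamma_new /= n
    var_new = sum(wi * (a - gamma_new) ** 2
                  for row, w in zip(A, weights)
                  for a, wi in zip(row, w)) / (n - 1)
    return gamma_new, math.sqrt(var_new)


# Sanity check with hypothetical age draws (Gyr) for n = 10 stars
rng = random.Random(1)
A = [[rng.gauss(11.0, 1.0) for _ in range(500)] for _ in range(10)]
g1, t1 = is_em_update(A, 11.0, 1.0, 11.0, 1.0)
# g1 ≈ 11 (the population mean); t1 ≈ 1 up to the n/(n-1) factor
```

The attraction of this update is visible in the code: it reuses the stored draws rather than re-running the per-star sampler, so a full (γ, τ) iteration costs only the reweighting above.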
arXiv:1703.09164v2 [astro-ph.SR] (27 March 2017) — Shijing Si, David A. van Dyk, Ted von Hippel, Elliot Robinson, Aaron Webster, David Stenning, "A Hierarchical Model for the Ages of Galactic Halo White Dwarfs".
Luca Visinelli [Electronic address: luca.visinelli@fysik.su.se] — NORDITA-2017-026

Axion-like particles (ALPs) might constitute the totality of the cold dark matter (CDM) observed. The parameter space of ALPs depends on the mass of the particle m and on the energy scale of inflation H_I, the latter being bound by the non-detection of primordial gravitational waves. We show that the bound on H_I implies the existence of a mass scale m̅_χ = 10 neV ÷ 0.5 peV, depending on the ALP susceptibility χ, such that the energy density of ALPs of mass smaller than m̅_χ is too low to explain the present CDM budget, if the ALP field has originated after the end of inflation. This bound affects Ultra-Light Axions (ULAs), which have recently regained popularity as CDM candidates. Light (m < m̅_χ) ALPs can then be CDM candidates only if the ALP field has already originated during the inflationary period, in which case the parameter space is constrained by the non-detection of axion isocurvature fluctuations. We comment on the effects of additional physics beyond the Standard Model, besides ALPs, on these bounds.

Light axion-like dark matter must be present during inflation
Luca Visinelli
December 30, 2023
=============================================================

§ INTRODUCTION

In the era of precision cosmology, the cold dark matter (CDM) budget in our Universe has been established at about 84% of the total matter in the Universe, yet its composition remains unknown. Among the proposed hypothetical particles which could address this fundamental question is the QCD axion <cit.>, the quantum of the axion field arising from the spontaneous breaking of a U(1) symmetry conjectured by Peccei and Quinn (PQ <cit.>) to solve the strong-CP problem in quantum chromodynamics (QCD). The symmetry breaking occurs at a yet unknown energy scale f_a, which is constrained by measurements to be much larger than the electroweak energy scale <cit.>.
The mass of the QCD axion at zero temperature, m_0, is related to the axion energy scale f_a by m_0 f_a = Λ_a^2, where the energy scale Λ_a is related to the QCD parameter Λ_QCD. Realistic "invisible" axion models introduce new particles that further extend the Standard Model: examples include the coupling of the axion to heavy quarks <cit.> or to a Higgs doublet <cit.>. The history and the properties of axions produced in the early Universe depend on the relative magnitude of the energy scale f_a compared to the inflation energy scale H_I <cit.>. In fact, if f_a > H_I/2π, the breaking of the U(1)_PQ symmetry occurs before reheating begins and axions must be present during inflation, while, if f_a < H_I/2π, the axion field originates after the end of inflation. Measurements of the CMB properties constrain the parameter space of the axion, including the scale of inflation H_I and axion isocurvature fluctuations. Dense structures like axion miniclusters <cit.> or axion stars <cit.> could be used as laboratories for axion searches in the near future. Laboratory searches have developed strategies that involve axion electrodynamics <cit.> for promising detection methods <cit.>. See Refs. <cit.> for reviews of the QCD axion.

Besides the QCD axion, other Axion-Like Particles (ALPs) arise from various ultra-violet completion models, in which additional spontaneously broken U(1) symmetries are introduced, as well as some other underlying physics. In fact, although the ALP mass might share a common origin with the QCD axion, it is possible for these particles to be unrelated to the dynamics of the gauge fields whatsoever. Examples include "accidental" axions <cit.> and axions from string theory <cit.> that generally arise in models with extra dimensions <cit.>. See also Ref. <cit.> for the effects of wormholes on the QCD axion potential.
The potential of the axion thus generated might be in tension with the recent swampland conjectures, unless some sophisticated possibilities are considered <cit.>. In all these scenarios, two energy scales emerge: the symmetry-breaking scale Λ and the ALP decay constant f. Similarly to the QCD axion, the ALP field acquires a mass m ∼ Λ^2/f; contrary to the QCD axion case, however, the mass m and the energy scale f can be treated as independent parameters. An interesting proposed ALP is the Ultra-Light Axion (ULA), of mass m_ULA ≈ 10^-22 eV <cit.>. Such a light axion, recently revisited in Refs. <cit.>, would have a wavelength of astrophysical scale, λ ∼ 1 kpc, and could possibly address some controversies arising when treating small scales in the standard ΛCDM cosmology, namely the missing-satellites and the cusp-core problems (see Ref. <cit.> for a review).

ALPs from global and accidental U(1) symmetries share a common cosmological history with the QCD axion and spectate inflation whenever f > H_I/2π. One of the main results of the present paper is to show that, in the opposite regime f < H_I/2π, the observational constraint on H_I coming from the Planck mission leads to a lower bound on the ALP mass, m ≳ m̅_χ, for some limiting mass m̅_χ whose value depends on the ALP susceptibility χ. We find a numerical value m̅_χ = 10 neV ÷ 0.5 peV, depending on the value of χ. This means that, if the CDM is discovered to be entirely composed of an ALP of mass m < m̅_χ, e.g. ULAs, such particles must already be present during inflation. Instead, if an ALP is discovered with m > m̅_χ, both cosmological origins are possible. We also show that, when f > H_I/2π and the U(1) symmetry is never restored afterwards, the non-detection of axion isocurvature fluctuations by the Planck mission leads to an upper bound on the scale of inflation H_I, regardless of the ALP mass.
Although this second result is quite straightforward to derive, it has never been stressed in the past literature.

The paper is organized as follows. In Sec. <ref> we review the temperature dependence of the QCD axion mass and the ALP parameter space, and we derive the lower bound on the ALP mass. In Sec. <ref> we show results for the ALP parameter space, assuming either a cosine or a harmonic potential. In Sec. <ref> we discuss some exceptions to the computation used, arising from physics beyond the Standard Model, including the modification of the effective number of degrees of freedom, non-standard cosmologies, or entropy dilution. Conclusions are drawn in Sec. <ref>.

§ ALPS AND INFLATION

§.§ Reviewing the temperature dependence of the QCD axion mass

The QCD axion mass originates from non-perturbative effects during the QCD phase transition. At zero temperature, the axion gets a mass m_0 from mixing with the neutral pion <cit.>,

m_0 = Λ_a^2/f_a = [√(z)/(1+z)] (m_π f_π/f_a),

where z = m_u/m_d is the ratio of the masses of the up and down quarks, m_π and f_π are, respectively, the mass and the energy scale of the pion, and f_a is the QCD axion energy scale. The energy scale Λ_a is proportional to the QCD scale Λ_QCD, so that the axion mass is tied to the underlying QCD theory. Using z = 0.48(5), m_π = 132 MeV, and f_π = 92.3 MeV, the authors in Ref. <cit.> obtain Λ_a = 75.5 MeV, a value slightly smaller than that obtained in other work. For example, Ref. <cit.> obtains Λ_a = 78 MeV within the framework of the "interacting instanton liquid model", fixing the QCD scale to Λ_QCD = 400 MeV. Recently, more refined computations on the QCD lattice have become accessible <cit.>. When temperature-dependent effects become important, the QCD axion mass acquires a complicated dependence on the plasma temperature <cit.>.
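The zero-temperature relation can be checked numerically: plugging the inputs quoted above into m_0 f_a = Λ_a^2 indeed reproduces Λ_a ≈ 75.5 MeV. The value of f_a in the sketch below is purely illustrative.

```python
import math

# Inputs quoted in the text: z = m_u/m_d, pion mass and decay constant (MeV)
z, m_pi, f_pi = 0.48, 132.0, 92.3

# Lambda_a^2 = [sqrt(z)/(1+z)] * m_pi * f_pi  (zero-temperature relation)
Lambda_a = math.sqrt(math.sqrt(z) / (1 + z) * m_pi * f_pi)  # MeV
# -> about 75.5 MeV, as quoted

# Resulting zero-temperature axion mass for an illustrative f_a = 1e12 GeV
f_a = 1e12 * 1e3  # MeV
m0_eV = Lambda_a**2 / f_a * 1e6  # convert MeV -> eV
# -> a few micro-eV
```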
Here, we model such dependence as <cit.>

m_a(T) = (α^2 Λ_QCD^2/f_a) (Λ_QCD/T)^χ/2, for T ≥ T̃, and m_a(T) = m_0, for T < T̃,

where χ is the QCD axion susceptibility, α is a numerical factor, and T̃ is a matching temperature defined below. At present, there is no general consensus on the numerical value of the susceptibility, which depends on the particle content of the embedding theory <cit.>, as well as on the computational technique used <cit.>. Ref. <cit.> obtains χ = 6.68 and α = (1.68 × 10^-7)^1/4 ≈ 0.02, while the methods in Refs. <cit.> predict χ = 8 and

α = (Λ_a/Λ_QCD) C^1/2 (Λ_QCD/200 MeV)^1/4 ≈ 0.03 ÷ 0.05,

where C = 0.018; see Eq. (4) in Ref. <cit.>. In addition, we have introduced the temperature scale T̃ = Λ_QCD (αΛ_QCD/Λ_a)^4/χ at which the two expressions in Eq. (<ref>) match. This allows us to rewrite Eq. (<ref>) as m_a(T) = m_0 G(T), with the function

G(T) = (T̃/T)^χ/2, for T ≥ T̃, and G(T) = 1, for T < T̃.
We write this requirement as

Ω_A h^2 = Ω_CDM h^2 = 0.1197 ± 0.0022,

where Ω_A = ρ_A/ρ_crit and Ω_CDM = ρ_CDM/ρ_crit are, respectively, the energy densities in ALPs and in the observed CDM <cit.> at 68% Confidence Level (CL), both given in units of the critical density ρ_crit = 3H^2_0 M_Pl^2/8π, with the Planck mass M_Pl = 1.221 × 10^19 GeV, and where h is the Hubble constant H_0 in units of 100 km s^-1 Mpc^-1.

Besides its mass, its energy scale, and the initial value of the misalignment angle, the ALP energy budget depends on the Hubble expansion rate H_I at the end of inflation, which is constrained from measurements of the scalar power spectrum Δ^2_ℛ(k_0) and the tensor-to-scalar ratio r_k_0 at the pivotal scale k_0 as <cit.>

H_I < (M_Pl/4) √(π Δ^2_ℛ(k_0) r_k_0) ∼ 7 × 10^13 GeV.

The numerical value of the bound has been computed by using the measurements at the wave number k_0 = 0.05 Mpc^-1 <cit.>,

Δ^2_ℛ(k_0) = (2.215^+0.032_-0.079) × 10^-9, at 68% CL, and r_k_0 < 0.07, at 95% CL.

We finally comment on isocurvature perturbations. Quantum fluctuations imprint into any massless scalar field a present during inflation, with variance <cit.>

⟨|δa|^2⟩ = (H_I/2π)^2.

Primordial quantum fluctuations later develop into isocurvature perturbations <cit.>, which modify the number density of axions, since the gauge-invariant entropy perturbation is non-zero <cit.>,

𝒮_a = δ(n_a/s)/(n_a/s) ≠ 0,

where s is the comoving entropy and n_a the axion number density. If all of the CDM is in axions, then we define <cit.>

Δ_S,A^2 ≡ ⟨|𝒮_a|^2⟩ = Δ^2_ℛ(k_0) β/(1-β),

where the parameter β is constrained by Planck <cit.> at the scale k_0 = 0.05 Mpc^-1 as

β ≲ 0.037, at 95% CL,

independently of the ALP mass.

§.§ Constraining the ALP mass

We now consider the parameter space of ALPs produced through the vacuum realignment mechanism (VRM) <cit.>, as reviewed in Appendix <ref>.
Although, in principle, other mechanisms in addition to the VRM, like the decay of topological defects produced at the PQ phase transition through the Kibble mechanism <cit.> and the decay of parent particles into ALPs, might sensibly contribute to the present abundance of cold ALPs, we do not consider them here. Similarly to what is obtained for axions, we represent the ALP mass as m(T) = m G(T), where m is a new parameter and G(T) is given in Eq. (<ref>). The ALP susceptibility χ might take any real non-negative value and is left here as a free parameter. An infinite susceptibility corresponds to the ALP mass abruptly jumping from zero to the value m at temperature T̃; any finite value of χ results in a smoother transition. ALPs from string theory or arising from accidental symmetries have χ = 0. The ALP energy scale f is related to the ALP mass by f = Λ^2/m, where Λ is a new energy scale specified by an underlying theory. Finally, we write T̃ = cΛ, for some constant value c.

We review the non-thermal production of a cosmological population of ALPs from the misalignment mechanism in Appendix <ref>, assuming that ALPs move in the potential

V(θ) = f^2 m^2(T) [1 - cos θ],

where θ = a/f and a is the ALP field. We assume that, when the ALP field originates, the initial value of the misalignment angle is θ_i. The present value of the ALP energy density obtained from the misalignment mechanism is given in Eq.
(<ref>),

ρ_A = [Λ^4 G(T_1)/2] [g_S(T_0)/g_S(T_1)] (T_0/T_1)^3 ⟨θ_i^2⟩,

where ⟨θ_i^2⟩ is the initial value of the misalignment angle squared, averaged over our Hubble volume, while the effective numbers of relativistic ("R") and entropy ("S") degrees of freedom are defined as <cit.>

g_R(T) = ∑_i (T_i/T)^4 ∫_0^+∞ x^2 Q_i(x) dx,
g_S(T) = (3/4) ∑_i (T_i/T)^3 ∫_0^+∞ x^2 Q_i(x) [1 + x^2/(3(x^2 + y_i^2))] dx,
Q_i(x) = (15 g_i/π^4) √(x^2 + y_i^2) / [exp(√(x^2 + y_i^2)) + (-1)^Q_i^f].

In the expressions above, T is the temperature of the plasma, and the sum runs over the i species considered, each with temperature T_i, mass m_i, y_i ≡ m_i/T_i, and Q_i^f = 1 (Q_i^f = 0) if i is a fermion (boson). Instead of computing the integrals in Eqs. (<ref>)-(<ref>), we have considered the parametrization in Refs. <cit.>, where the effective numbers of degrees of freedom are approximated with a series of step functions, for temperatures up to O(100 GeV).

In Eq. (<ref>), we have introduced the initial value of the misalignment angle θ_i, which is the ALP field in units of f, and angle brackets denote the average over all possible values of θ_i. In this scenario, θ_i takes different values within our Hubble horizon, so

⟨θ_i^2⟩ = (1/2π) ∫_-π^π θ_i^2 F(θ_i) dθ_i,

where the weighting function F(θ_i) has been thoroughly discussed in the literature <cit.>. Here, we take <cit.>

F(θ_i) = ln[e/(1 - (θ_i/π)^4)],

which gives √(⟨θ_i^2⟩) = 2.45.

Coherent oscillations in the ALP field begin at the temperature T_1 given by 3H(T_1) = m(T_1), see Eq. (<ref>) below, and the Hubble rate during radiation domination is

H(T) = κ(T) T^2/(3 M_Pl), with κ(T) = √(4π^3 g_*(T)/5).

The temperature T_1 at which the coherent oscillations in the ALP field begin is

T_1 = T̃ (f̂/f)^2/(4+χ), for f ≤ f̂, and T_1 = T̃ (f̂/f)^1/2, for f > f̂,

where we have defined the axion energy scale

f̂ ≡ M_Pl/[c^2 κ(T_1)].

Inserting Eq. (<ref>) into Eq.
(<ref>), we obtain the present ALP energy density as

ρ_A = ρ̂_A ⟨θ_i^2⟩ (m/f̂)^1/2 (f/f̂)^(16+3χ)/[2(4+χ)], for f ≤ f̂, and ρ_A = ρ̂_A ⟨θ_i^2⟩ (m/f̂)^1/2 (f/f̂)^2, for f > f̂,

where we have defined

ρ̂_A = [g_*S(T_0)/g_*S(T_1)] (f̂/2) (T_0/c)^3.

If the ALP field originates after inflation, the energy density is a function of the mass m and the ALP energy scale f only, but it does not depend on θ_i, which is averaged out. Equating ρ_A in Eq. (<ref>) with the CDM energy density ρ_CDM = Ω_CDM ρ_crit gives

f = f̂ [ρ_CDM/(ρ̂_A ⟨θ_i^2⟩)]^(8+2χ)/(16+3χ) (f̂/m)^(4+χ)/(16+3χ), for f ≤ f̂, and f = f̂ [ρ_CDM/(ρ̂_A ⟨θ_i^2⟩)]^1/2 (f̂/m)^1/4, for f > f̂.

For any value of m, Eq. (<ref>) expresses the value of f for which the ALP explains the observed CDM budget.

We now show that lighter ALPs cannot make up the totality of the CDM when produced after the end of inflation. In fact, the region where f < H_I/2π (which implies f < f̂) is constrained by the bound on H_I expressed in Eq. (<ref>), which leads to the lower bound on the ALP mass,

m ≥ m̅_χ ≡ f̂ [64π/(Δ^2_ℛ(k_0) r_k_0) (f̂/M_Pl)^2]^(16+3χ)/(8+2χ) [ρ_CDM/(ρ̂_A ⟨θ_i^2⟩)]^2.

The numerical value of m̅_χ depends on the susceptibility χ and on the value of the constant c in the model. Setting c = 1, we obtain the limiting cases m̅_0 = 10 neV and m̅_∞ = 0.5 peV. Axion theories with m < m̅_χ must embed the axion production in the inflationary mechanism, as we discuss below. We remark that the bound in Eq. (<ref>) only applies if the ALP field originated after the end of inflation, f < H_I/2π, and if the ALP field has originated from the breaking of a U(1) symmetry. In these scenarios, a Hubble volume contains a multitude of patches in which the axion field takes a different, random value. These patches are bounded by topological defects which could decay and lead to an additional component of the cold ALP energy density. The inclusion of non-relativistic ALPs from the decay of topological defects would increase their number density, potentially reducing the value of m̅_χ by a couple of orders of magnitude. Here, we do not consider such a contribution.
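The quoted numerical value of the inflationary bound that drives this limit can be reproduced directly from the Planck inputs listed earlier in this section:

```python
import math

# Planck-based bound H_I < (M_Pl/4) * sqrt(pi * Delta_R^2 * r_k0)
M_Pl = 1.221e19  # Planck mass, GeV
Delta_R2 = 2.215e-9  # scalar amplitude at k0 = 0.05 Mpc^-1
r = 0.07  # tensor-to-scalar ratio upper limit (95% CL)

H_I_max = M_Pl / 4 * math.sqrt(math.pi * Delta_R2 * r)
# -> about 7e13 GeV, as quoted in the text
```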
Notice that the result in Eq. (<ref>) does not depend on the value of Λ.

§.§ ALPs and inflation

ALPs of mass smaller than m̅_χ can still be regarded as CDM candidates, although the related U(1) symmetry must have been broken during the inflationary period, with the ALP energy scale satisfying f > H_I/2π. The cosmological properties of such ALPs would differ greatly from those described in the region f < H_I/2π; in particular, no defects are present and a unique value of θ_i is singled out by the inflationary period within our Hubble volume. For example, consider the case of an ULA of mass m_ULA = 10^-22 eV, which is the mass scale proposed to solve some small-scale galactic problems <cit.> and which has recently been vigorously reconsidered as a possible CDM candidate <cit.>. Since the mass scale m_ULA falls well within the range excluded by Eq. (<ref>), ULAs must have been produced during inflation to be the CDM, with a precise relation between the initial misalignment angle and the energy scale given by Eq. (<ref>) with ⟨θ_i^2⟩ replaced by θ_i^2 F(θ_i). The replacement accounts for the fact that the angle average is no longer appropriate once inflation singles out a uniform value of θ_i over the entire Hubble volume. In this scenario, we expect the initial misalignment angle to be of order one, with smaller values of θ_i still possible albeit disfavoured.

In Fig. <ref>, we show the value of f given in Eq. (<ref>), as a function of the ALP mass m, for the value θ_i = 1 and for different values of the ALP susceptibility: χ = 0 (blue solid line), χ = 8 (green dotted line), χ = +∞ (red dashed line). Values of f of the order of the GUT scale, f ∼ 10^15 GeV, are expected for m ∼ 10^-17 ÷ 10^-13 eV, while the ULA mass m_ULA ∼ 10^-22 eV gives f ∼ 10^17 GeV <cit.>. For higher values of the ALP mass, the spread among the values of f for different χ widens.

§ FRAMING THE ALP PARAMETER SPACE

§.§ Cosine potential

We apply the expression for axion isocurvature fluctuations in Eq.
(<ref>) to the ALP scenario, to obtain <cit.> Δ_S,A^2 = (∂ln ρ_A/∂θ_i)^2 ⟨δθ_i^2⟩ = (H_I (θ_i)/π θ_i f)^2, where in the last step we have used Eq. (<ref>), and where we defined the function (x) = 1 + xF'(x)/2F(x). Results on the various bounds on the ALP parameter space are summarized in Fig. <ref>. Since we do not consider the contribution from the decay of topological defects, the parameter space of CDM ALPs depends on six quantities: f, θ_i, H_I, m, c, and χ. We show how the parameter space changes when considering different values of the ALP mass: m = 10^-20 eV (top left), m = 10^-10 eV (top right), m = 10^-5 eV (bottom left), and m = 10^-3 eV (bottom right). For each panel, the line f = H_I/2π separates the region where the axion is present during inflation (top left) from the region where the axion field originates after inflation (bottom right), for a fixed value c = 1. This line has to be thought of as a qualitative boundary between the two scenarios we describe, since the exact details depend on the inflationary model, the preheating-reheating scenarios, and the axion particle physics. The horizontal line labeled “ALP CDM” corresponds to the requirement that the primordial ALP condensate has started behaving like CDM at matter-radiation equality (see Ref. <cit.> for details), f ≳ (53 TeV/π) √(eV/m). We first discuss the scenario where f > H_I/2π. The region is bounded by the non-detection of axion isocurvature fluctuations, obtained from Eq. (<ref>) with the requirement that ρ_A = ρ_CDM. We plot the bound for three different values of the susceptibility: χ = 0 (blue solid line), χ = 8 (green dotted line), χ = +∞ (red dashed line). For clarity, we shade in yellow the region below the minimum of the three curves, although we have to bear in mind that the whole parameter space below a curve of fixed χ is ruled out. The change in the slope corresponds to the argument of the anharmonicity function F(θ_i) approaching π.
For each value of χ, the horizontal lines in the allowed parameter space show the “natural” value of f for which ρ_A = ρ_CDM and θ_i = 1, as shown in Fig. <ref>. For m = 10^-20 eV, the natural value of the axion energy scale is of the order of f ∼ 10^16 GeV, corresponding to the “ALP miracle” discussed in Ref. <cit.>. For smaller values of the ALP mass, the natural value of f lowers, and the spread among different values of χ widens, as shown in Fig. <ref>. The bound from isocurvature fluctuations steepens when θ_i decreases, and it is vertical for θ_i ≪ 1 and for χ = 0, or for f > f̅. We reformulate this constraint as an upper bound on H_I for a given ALP theory, which is obtained by combining Eqs. (<ref>), (<ref>), and (<ref>) as H_I ≤ π f̂ (f̂/m)^1/4 √((ρ_CDM/ρ̂_A) [β/(1-β)] Δ^2_ℛ(k_0)) = 10^7 GeV √(eV/m). Isocurvature bounds have been used in the string axiverse realization discussed in Ref. <cit.>, neglecting the dependence on the susceptibility and the anharmonic corrections in the potential. The presence of axion isocurvatures in the CMB, whose constraint on the power spectrum leads to Eq. (<ref>), relies on the fact that the PQ symmetry has never been restored after the end of inflation. Caveats that allow one to evade the bound from isocurvature fluctuations in Eq. (<ref>) include the presence of more than one ALP <cit.> or the identification of the inflaton with the radial component of the PQ field <cit.>. This latter technique has been embedded into the SMASH model <cit.> where, for a decay scale f ≲ 4×10^16 GeV, the PQ symmetry is restored immediately after the end of inflation and isocurvature modes are absent, so that the bound in Eq. (<ref>) does not apply. In the second scenario, f < H_I/2π, the axion is not present during inflation. In this scenario, a horizontal line gives the value of f for which the ALP is the CDM for a given value of the susceptibility.
ALPs with an energy scale smaller than this value are a subdominant CDM component (green region, ρ_A < ρ_CDM), while values above it are excluded (yellow region, ρ_A > ρ_CDM). The constraint in Eq. (<ref>) applies in this region of the parameter space for some values of the ALP mass. For m = 10^-20 eV, which lies below the critical value m̅_χ in Eq. (<ref>), we always have ρ_A < ρ_CDM, so the region f < H_I/2π is shaded green. Larger values of the ALP mass allow for ρ_A = ρ_CDM for some values of f and χ, avoiding the constraint in Eq. (<ref>). §.§ Harmonic potential In Fig. <ref>, we have shown the parameter space of ALPs moving in the cosine ALP potential in Eq. (<ref>), including the non-harmonic corrections through the function F(θ_i) in Eq. (<ref>). However, the ALP potential can greatly differ from what is expressed in Eq. (<ref>). For example, in the presence of a monodromy <cit.>, the degeneracy among the minima of the cosine potential is lifted by a quadratic potential, which might dominate the axion CDM potential <cit.>. We repeat the computation of the previous section for a harmonic potential, switching off the non-harmonic corrections by setting F(θ_i) = 1 and considering the ALP moving in the quadratic potential V_H(θ) = (1/2) f^2 m^2(T) θ^2. Inserting Eq. (<ref>) into Eq. (<ref>) for a harmonic potential to eliminate θ_i leads to a relation between H_I and f, f = f̂ [(π f̂/H_I)^2 (ρ_CDM/ρ̂_A) β Δ^2_ℛ(k_0)/(1-β)]^(8+2χ)/(8+χ). We show the parameter space thus obtained in Fig. <ref>. Notice that the upper left panel (m = 10^-20 eV) qualitatively reproduces the results recently obtained in Ref. <cit.> when the anharmonic corrections are neglected in the isocurvature modes. Eq. (<ref>) describes the vertical blue line at the boundary of the region excluded by the non-observation of isocurvature fluctuations. § EFFECTS OF ADDITIONAL PHYSICS BEYOND THE STANDARD MODEL Additional new physics might significantly alter the axion parameter space presented in Fig. <ref>.
Besides the QCD axion and other ALPs, examples of new physics not currently described within the framework of the Standard Model include additional particles whose presence modifies the effective number of degrees of freedom, or heavy scalar fields that might have dominated the Universe before the onset of radiation domination. We discuss some of the issues in the following. We focus on the case in which the axion mass is independent of temperature, since results can be easily generalized. §.§ Effective number of degrees of freedom The existence of particles that are still to be discovered would alter the effective number of relativistic and entropy degrees of freedom for temperatures larger than T ≳ O(100) GeV. For example, the maximum number of effective relativistic degrees of freedom is 106.75 in the Standard Model, and 228.75 in the Minimal Supersymmetric Standard Model <cit.>. Setting 3H(T) = m, with H given in Eq. (<ref>) and T = 1TeV, we obtain that corrections to g_R(T) from physics beyond the Standard Model become important when m ≳ 10^-4 eV. We thus neglect these contributions when deriving the results in Sec. <ref>. §.§ Non-standard cosmological history The content of the Universe for temperatures larger than ≳ 4 MeV is currently unknown, with lower bound being obtained from the requirement that the Big-Bang nucleosynthesis is achieved in a radiation-dominated cosmology <cit.>. However, for higher temperatures, the expansion rate of the Universe could have been dominated by some unknown form of energy, with an equation of state that differs from the one describing a relativistic fluid. A popular example is the early domination of a massive scalar field ϕ, emerging as a by-product of the decay of the inflaton field. In the following, we refer to this modified cosmology as being ϕ-dominated. 
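The estimate above, that beyond-Standard-Model corrections to g_R(T) matter only for sufficiently heavy ALPs, can be reproduced at the order-of-magnitude level. The sketch below assumes the standard radiation-era Hubble rate H ≈ 1.66 √(g_*) T^2/M_Pl and solves 3H(T) = m at T = 1 TeV; the constants and the prefactor conventions (e.g. the constant c of the model) are only approximate, so the result should be read as a rough threshold rather than the exact value quoted in the text:

```python
import math

M_PL = 1.22e19      # Planck mass in GeV (approximate)
G_STAR_SM = 106.75  # maximum number of relativistic d.o.f. in the Standard Model

def hubble_rad(T_gev, g_star=G_STAR_SM):
    """Radiation-era Hubble rate, H ~ 1.66 sqrt(g_*) T^2 / M_Pl, in GeV."""
    return 1.66 * math.sqrt(g_star) * T_gev ** 2 / M_PL

# ALP mass whose coherent oscillations start right at T = 1 TeV: m = 3 H(1 TeV).
# Heavier ALPs begin oscillating above the TeV scale, where corrections to
# g_R(T) from new particles could matter.
m_crossover_ev = 3.0 * hubble_rad(1.0e3) * 1.0e9  # converted from GeV to eV
```

The crossover mass lands in the 10^-4–10^-2 eV range, consistent at the order-of-magnitude level with the m ≳ 10^-4 eV quoted in the text.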
The effect of a non-standard cosmological history might vary the present value of the axion energy density by orders of magnitude <cit.>, depending on the equation of state of the fluid that dominates the expansion and on the presence of an entropy dilution factor. In a nutshell, in a ϕ-dominated Universe the ALP begins to oscillate at a temperature T_1 that is different from what is obtained in the standard picture, because of a different relation between temperature and time in the modified cosmology. Assuming that the equation of state of the ϕ field in the modified cosmology is p = wρ (w = 1/3 for radiation), for times t larger than the moment t_RH at which the Universe transitions from ϕ domination to radiation domination, the Hubble rate is H = 2/[3(w+1)t] = H_RH (T/T_RH)^3(w+1)/2, where the last expression is valid only if the entropy density s = g_S T^3 in a comoving volume is conserved and we have neglected the contribution from the entropy degrees of freedom, with H_RH = H(t_RH) = ()^2/3. We consider the temperature dependence of the ALP mass as m(T) = m(/T)^χ/2, while the constant ALP mass case is obtained by setting χ = 0. An early ϕ domination modifies the temperature at which coherent oscillations begin, Eq. (<ref>), as T_1 = (f̂_RH/f)^2/[3(w+1)+χ] (/)^(3w-1)/[3(w+1)+χ], where f̂_RH ≡ /c^2() ≈ f̂. The new value of T_1 modifies the present energy density, given by Eq. (<ref>) when entropy conservation from the onset of axion oscillations is assumed. The ALP energy density is ρ_A = ρ̂_A (m/f̂)^1/2 ⟨θ_i^2⟩ (cΛ̂/)^(6+χ)(3w-1)/[6(w+1)+2χ] (f/f̂)^{2(16+3χ)+(3w-1)(8+χ)}/{4[3(w+1)+χ]}, where ρ̂_A has been defined in Eq. (<ref>). Notice that, setting w = 1/3, we obtain the energy density given in the first line of Eq. (<ref>). The axion energy scale for which the ALP is the CDM particle reads f = f̂ [ρ_CDM/(ρ̂_A ⟨θ_i^2⟩) √(f̂/m)]^4[3(w+1)+χ]/{2(16+3χ)+(3w-1)(8+χ)} (/cΛ̂)^2(6+χ)(3w-1)/{2(16+3χ)+(3w-1)(8+χ)}. For a generic cosmological model, the constraint in Eq.
(<ref>) for the region f < H_I/2π modifies as m ≥ f̂ [64π/(Δ^2_ℛ(k_0) r_k_0) (f̂/)^2]^{2(16+3χ)+(3w-1)(8+χ)}/{2[3w(8+χ)+χ]} × [ρ_CDM/(ρ̂_A ⟨θ_i^2⟩)]^4[3(w+1)+χ]/[3w(8+χ)+χ] (/cf̂)^2(6+χ)(3w-1)/[3w(8+χ)+χ]. The latter expression depends on the two additional parameters w and T_RH, and it reduces to the result already obtained in Eq. (<ref>) for w = 1/3. For w < 1/3, Eq. (<ref>) can be restated as a lower bound on the reheating temperature, valid when assuming that the ALPs considered make up the totality of the observed CDM and that coherent oscillations of the field began after inflation, in a ϕ-dominated cosmology. In the case of an early matter-dominated cosmology, w = 0, the bound in Eq. (<ref>) can be restated as a bound on the reheating temperature, T_RH ≥ c f̂ (f̂/m)^χ/[2(6+χ)] [64π/(Δ^2_ℛ(k_0) r_k_0) (f̂/)^2]^(24+5χ)/[4(6+χ)] [ρ_CDM/(ρ̂_A ⟨θ_i^2⟩)]^(6+2χ)/(6+χ). If the mass is not affected by non-perturbative effects and χ = 0, as for accidental ALPs, the expression above becomes independent of m and yields the bound T_RH ≳ 3 GeV, which is about three orders of magnitude more stringent than what is obtained in Refs. <cit.> using BBN considerations. We nevertheless stress that the bound in Eq. (<ref>) can be easily evaded, given the strong assumptions under which it has been derived. §.§ Dilution factor Some scenarios predict a violation of the conservation of the total entropy in a comoving volume, sa^3, due for example to the decay into lighter degrees of freedom of the ϕ field that dominates the Universe at that time. This is the case, for example, of a low-temperature reheating (LTR) stage <cit.>, in which the Universe is dominated by a massive, decaying moduli field. In this situation, the relation between the scale factor and the temperature changes from the simple relation g_S^1/3 T ∼ 1/a to a generic relation aT^δ ∼ const, where δ is a new constant of the model. For example, δ = 8/3 in the LTR scenario <cit.>.
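The scaling aT^δ ∼ const implies that the comoving entropy sa^3 ∝ T^3(1-δ) grows as the temperature drops from T_1 to T_RH, i.e. a dilution γ = (T_1/T_RH)^3(δ-1) up to g_S factors. A minimal sketch, with a hypothetical ratio T_1/T_RH = 10^4 chosen purely for illustration (for the LTR value δ = 8/3 this choice gives γ = 10^20, the order of magnitude discussed below):

```python
def dilution_factor(T1_over_TRH, delta):
    """Entropy dilution gamma = (T_1/T_RH)**(3*(delta - 1)) implied by the
    scaling a*T**delta = const, neglecting changes in g_S."""
    return T1_over_TRH ** (3.0 * (delta - 1.0))

# Hypothetical illustrative ratio T_1/T_RH = 1e4 with the LTR value delta = 8/3
gamma_ltr = dilution_factor(1.0e4, 8.0 / 3.0)
```

For δ = 1 the relation aT = const is restored and no dilution occurs; any δ > 1 with T_1 > T_RH gives γ > 1, as stated in the text.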
A different parametrisation consists of assuming that a certain amount of entropy γ is produced during the decaying stage <cit.>; see Ref. <cit.> for the cosmology with a decaying kination field <cit.>. Either way, the effect of entropy dilution reduces the present energy density of axions in Eq. (<ref>) by a factor γ, and the bound on the ALP mass in Eq. (<ref>) is lowered. In general, the ALP energy density is diluted as ρ_A → ρ_A/γ. If γ were independent of the ALP mass, we would get a reduction m̅_χ → m̅_χ/γ^2. We compute the dilution factor in the LTR scenario as γ = g_S(T_RH) a_RH^3 T_RH^3/[g_S(T_1) a_1^3 T_1^3] = [g_S(T_RH)/g_S(T_1)] (T_1/T_RH)^3(δ-1) = [g_S(T_RH)/g_S(T_1)] (/)^3(δ-1) (f̂/f)^6(δ-1)/(4+χ), where in the last step we have used the expression for T_1 in Eq. (<ref>) for the case f ≤ f̂. Since we expect oscillations to begin in the ϕ-dominated scenario, for which T_1 > T_RH, demanding δ > 1 indeed leads to a dilution that is larger than one. For example, using T_RH = 4 MeV and m = 10^-5 eV with δ = 8/3 and χ = 0, we obtain γ ≈ 10^20. This large discrepancy with respect to the standard cosmology scenario has been used in Ref. <cit.> to dilute the energy density of the cosmological QCD axion, obtaining results that significantly differ from the standard picture. Taking the expression for ρ_A in Eq. (<ref>), we rephrase the bound in Eq. (<ref>) when the dilution in Eq. (<ref>) is added as m ≥ [g_S(T_1)/g_S(T_RH)] [g_R(T_1)/g_R(T_0)] (H_0^2/T_0^3) [48 Ω_CDM/(⟨θ_i^2⟩ Δ^2_ℛ(k_0) r_k_0)]^2/(3δ-2) × [(T_1)/]^3(δ-2)/(3δ-2) []^6(δ-1)/(3δ-2) ∼ 10^-13 eV. We have treated the effects due to the modified expansion rate and to the dilution separately, to obtain the bounds in Eqs. (<ref>) and (<ref>); a consistent derivation within a modified cosmology (say, LTR) has to take both effects into account simultaneously. § CONCLUSION The present energy density of ALPs depends on both its mass m and the energy scale f. In general, these parameters can be tuned so that ρ_A = ρ_CDM.
However, in models where the ALP field originates after inflation, we have shown in Sec. <ref> that the bound on the scale of inflation H_I from the non-detection of primordial gravitational waves leads to a minimum value of the ALP mass m̅_χ below which the tuning of m and f is no longer possible. An ALP with mass m < m̅_χ can still be a CDM candidate if it spectates inflation. In this latter scenario, the scale of inflation H_I is bound by the ALP mass through Eq. (<ref>) which, although used in other work <cit.>, has never been explicitly derived before. We have shown how these results affect the parameter space of the ALP for different values of the mass and of the susceptibility in Fig. <ref> (cosine potential) and Fig. <ref> (harmonic potential). Finally, we have commented on how results are affected by the presence of additional physics beyond the standard model, focusing on the modification of the effective number of degrees of freedom, non-standard inflation and post-inflation cosmologies, and entropy dilution. The author would like to thank the anonymous referee for the careful read and the helpful suggestions, which led to a substantial improvement of the manuscript with respect to its original version, and Javier Redondo (U. Zaragoza) for the useful discussion. The author acknowledges support by the Vetenskapsrådet (Swedish Research Council) through contract No. 638-2013-8993 and the Oskar Klein Centre for Cosmoparticle Physics. § REVIEW OF THE VACUUM REALIGNMENT MECHANISM §.§ Equation of motion for the axion fieldThe ALP field originates from the breaking of the PQ symmetry at a temperature of the order of f. The equation of motion for the angular variable of the ALP field at any time isθ̈ + 3H θ̇ - ∇̅^2/R^2 θ + m^2(T) sinθ = 0,where θ is the ALP field in units of f, ∇̅ is the Laplacian operator with respect to the physical coordinates x̅, and R is the scale factor. To derive Eq. 
(<ref>), we have considered the simplest possible ALP potential V(θ) = f^2 m^2(T)(1-cosθ). The mass term in the equation of motion becomes important when the Hubble rate is comparable to the axion mass, 3H(T_1) = m(T_1), whose solution gives the temperature T_1 at which coherent oscillations begin. Setting the scale factor and the Hubble rate at temperature T_1 respectively as R_1 and H_1, we rescale the time t and the scale factor R as t → H_1 t and R → R/R_1, so that Eq. (<ref>) reads θ̈ + 3H θ̇ - (∇^2/R^2)θ + 9g^2 sinθ = 0, where the Laplacian operator is written with respect to the co-moving spatial coordinates x = H_1 R_1 x̅ and g = G(T)/G(T_1). We work in a radiation-dominated cosmology, where time and scale factor are related by R ∝ t^1/2. Setting θ = ψ/R, Eq. (<ref>) reads ψ” - ∇^2 ψ + 9g^2 R^3 sin(ψ/R) = 0, where a prime indicates differentiation with respect to R. Eq. (<ref>) coincides with the results in Ref. <cit.>, where the conformal time η is used as the independent variable in place of the scale factor R. Taking the Fourier transform of the axion field as ψ(x) = ∫ e^-iq·x ψ(q) d^3q, we rewrite Eq. (<ref>) as ψ” + q^2 ψ + 9g^2 R^3 sin(ψ/R) = 0. Eq. (<ref>) expresses the full equation of motion for the axion field in the variable R, conveniently written to be solved numerically. §.§ Approximate solutions of the equation of motion Analytic solutions to Eq. (<ref>) can be obtained in the limiting regime ψ/R ≪ 1, where Eq. (<ref>) reads ψ” + κ^2(R) ψ = 0, with the wave number κ^2(R) = q^2 + 9g^2 R^2. An approximate solution of Eq. (<ref>), valid in the adiabatic regime in which higher derivatives are neglected, is <cit.> ψ = ψ_0(R) exp(i ∫^R κ(R') dR'), where the amplitude ψ_0 is given by |ψ_0(R)|^2 κ(R) = const. Each term appearing in κ(R) is the leading term in a particular regime of the evolution of the axion field.
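The adiabatic scaling can also be checked by integrating the zero-mode equation of motion directly. The sketch below assumes a constant mass (χ = 0), radiation domination with H = 1/(2t), and arbitrary illustrative values of m and θ_0 in units where H_1 = 1; it uses a plain fourth-order Runge–Kutta stepper so no external libraries are required:

```python
import math

def evolve_axion_zero_mode(m=10.0, theta0=0.5, t_start=1.0, t_end=40.0, dt=1e-3):
    """RK4 integration of theta'' + 3H theta' + m^2 sin(theta) = 0 for the
    q = 0 mode in radiation domination, H = 1/(2t).  Time is in units of
    1/H_1 and m in units of H_1; the values are purely illustrative.
    Returns (theta, energy density rho/f^2, final time)."""
    def acc(t, th, v):
        return -1.5 / t * v - m * m * math.sin(th)

    t, th, v = t_start, theta0, 0.0
    while t < t_end:
        k1t, k1v = v, acc(t, th, v)
        k2t, k2v = v + 0.5*dt*k1v, acc(t + 0.5*dt, th + 0.5*dt*k1t, v + 0.5*dt*k1v)
        k3t, k3v = v + 0.5*dt*k2v, acc(t + 0.5*dt, th + 0.5*dt*k2t, v + 0.5*dt*k2v)
        k4t, k4v = v + dt*k3v, acc(t + dt, th + dt*k3t, v + dt*k3v)
        th += dt / 6.0 * (k1t + 2.0*k2t + 2.0*k3t + k4t)
        v += dt / 6.0 * (k1v + 2.0*k2v + 2.0*k3v + k4v)
        t += dt
    rho = 0.5 * v * v + m * m * (1.0 - math.cos(th))
    return th, rho, t
```

After a few oscillations, the returned energy density ρ = θ̇^2/2 + m^2(1-cosθ) (in units of f^2) is expected to redshift as a^-3 ∝ t^-3/2, which can be verified by comparing ρ t^3/2 at two late times.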
We analyze these approximate behaviors in depth in the following. * Solution at early times, outside the horizon. At early times t ∼ R^2 ≲ t_1, prior to the onset of axion oscillations, the mass term in Eq. (<ref>) can be neglected since m(T) ≪ m(T_1). Defining the physical wavelength λ = R/q, we distinguish two different regimes in this approximation, corresponding to the evolution of the modes outside the horizon (λ ≳ t) or inside the horizon (λ ≲ t). In the first case, λ ≳ t, Eq. (<ref>) at early times reduces to ψ” = 0, with solution (ψ = Rϕ) ϕ(q,t) = ϕ_1(q) + ϕ_2(q)/R = ϕ_1(q) + ϕ_2(q)/t^1/2; the first solution is a constant in time, ϕ_1(q), while the second drops to zero. The axion field for modes larger than the horizon is “frozen by causality” <cit.>. * Solution at early times, inside the horizon. Eq. (<ref>) for modes that evolve inside the horizon, λ ≲ t, reduces to ψ” + q^2 ψ = 0, whose solution in closed form, obtained through Eq. (<ref>) and ϕ = ψ/R, reads ϕ ∝ R^-1 exp(iqR). The dependence of the amplitude |ϕ| ∼ 1/R in Eq. (<ref>) is crucial, since it shows that the axion number density scales as cold matter, n_A(q,t) ∼ |ϕ|^2/λ ∼ 1/R^3. * Solution for the zero mode at the onset of oscillations. An approximate solution of Eq. (<ref>) for the zero-momentum mode q = 0, valid after the onset of axion oscillations when t ∼ t_1, is obtained by setting κ(R) ≈ 3gR, so that the adiabatic solution for ψ in Eq. (<ref>) in this slowly oscillating regime gives the axion number density n_A^mis(R) = (1/2) m(R) f^2 |ψ|^2/R^2 = n_A^mis(R_1) (R/R_1)^-3, where n_A^mis(R_1) is the number density of axions from the misalignment mechanism at temperature T_1, n_A^mis(R_1) = (1/2) m(T_1) f^2 F(θ_i) θ_i^2, and F(θ_i) is the function that accounts for the non-harmonic higher-order terms in the Taylor expansion of the sine function that are otherwise neglected, see Eq. (<ref>). Eq. (<ref>) shows that the axion number density of the zero mode after the onset of axion oscillations scales as cold matter, with R^-3.
The present ALP energy density is found by conservation of the comoving axion number density, ρ_A = m n_A^mis(R_1) s(T_0)/s(T_1) = m n_A^mis(R_1) [g_*S(T_0)/g_*S(T_1)] (T_0/T_1)^3, where s(T) is the entropy density and g_*S(T) is the effective number of entropy degrees of freedom at temperature T.
Department of Physics, Guru Nanak Dev University, Amritsar, Punjab-143005, India Atomic, Molecular and Optical Physics Division, Physical Research Laboratory, Navrangpura, Ahmedabad-380009, India We present additional magic wavelengths (λ_magic) for the clock transitions in the alkaline-earth-metal ions, considering circularly polarized light, aside from our previously reported values in [J. Kaur et al., Phys. Rev. A 92, 031402(R) (2015)] for linearly polarized light. Contributions from the vector component of the dynamic dipole polarizabilities (α_d(ω)) of the atomic states associated with the clock transitions play major roles in the evaluation of these λ_magic, hence facilitating the choice of circularly polarized lasers in the experiments. Moreover, the actual clock transitions in these ions are carried out among the hyperfine levels. The λ_magic values for these hyperfine transitions are estimated and found to be different from the λ_magic values for the atomic transitions, owing to the different contributions coming from the vector and tensor parts of α_d(ω). Importantly, we also present λ_magic values that depend only on the scalar component of α_d(ω), for use in a specially designed trap geometry for these ions, so that they can be applied unambiguously among any hyperfine levels of the atomic states of the clock transitions. We also present α_d(ω) values explicitly at 1064 nm for the atomic states associated with the clock transitions, which may be useful for creating “high-field seeking” traps for the above ions using the Nd:YAG laser. The tune-out wavelengths at which the states would be free from the Stark shifts are also presented. Accurate values of the electric dipole matrix elements required for these studies are given, and trends of the electron correlation effects in determining them are also highlighted. 32.60.+i, 37.10.Jk, 32.10.Dk Annexing magic and tune-out wavelengths to the clock transitions of the alkaline-earth metal ions B. K.
Sahoo[Email: bijaya@prl.res.in] Received date; Accepted date =================================================================================================§ INTRODUCTION Atomic clocks based on optical lattices are capable of proffering outstandingly stable and accurate time-keeping devices. A fundamental feature of an optical lattice clock is that it interrogates an optical transition with controlled atomic motion <cit.>. At present, the most stable clock is based on the optical lattices of ^87Sr atoms, with accuracy below 10^-18 <cit.>. The unique feature of this clock is that the atoms are trapped at wavelengths of the external electric field at which the differential light shift of an atomic transition nullifies. These wavelengths are commonly known as the magic wavelengths (λ_magic) <cit.>. However, ions provide more accurate atomic clocks, since various systematics in the ions can be controlled better <cit.>. As a result, a number of ions, such as ^27Al^+, ^199Hg^+, ^171Yb^+, ^115In^+, ^88Sr^+, ^40Ca^+, ^113Cd^+, etc., are under consideration for building accurate clocks. Among the various ions proposed for frequency standards <cit.>, the alkaline-earth-metal ions possess the advantage that the transitions required for cooling and re-pumping of the ions and for the clock frequency measurement can be easily accessed using non-bulky solid-state or diode lasers <cit.>. Moreover, the presence of metastable D states in these ions, whose lifetimes range from milliseconds to several seconds, assists in carrying out measurements meticulously. The recent measurement of λ_magic in the singly charged ^40Ca^+ ion has now opened up a platform for the possibility of building all-optical trapped-ion clocks <cit.>. Optical lattices blended with the unique features of optical transitions in an ionic system can revolutionize secondary as well as primary frequency standards.
However, the potential of the optical dipole trap perturbs the energy levels of the ion unevenly, and the consistency of an ion optical clock degrades. Therefore, knowledge of the λ_magic values in these ions would be instrumental for constructing all-optical trapped-ion clocks. These wavelengths can be found using accurate values of the dynamic dipole polarizabilities α_d(ω) of the states associated with the clock transitions. Also, information on the dynamic α_d(ω) values, especially at the wavelengths at which the ions are being trapped, will be of great significance. Improved atomic clocks will obviously benefit widely used technologies, including the precise determination of fundamental constants <cit.>, accurate control of quantum states <cit.>, and advancements in communication, the Global Positioning System <cit.>, etc. Following the measurement of λ_magic in the 4S_1/2 - 3D_5/2 transition of ^43Ca^+, we had investigated these values in the nS_1/2 - (n-1)D_3/2 and nS_1/2 - (n-1)D_5/2 transitions of the alkaline-earth ions <cit.> for the ground-state principal quantum number n. However, those studies were focused mainly on linearly polarized light, limiting the choices for experimental measurements. Application of circularly polarized light to atomic systems introduces a contribution from the vector polarizability to the Stark shifts of the energy levels, which is linearly proportional to the angular frequency of the applied field. This can help in manipulating the Stark shifts of the energy levels and lends more degrees of freedom to attain further λ_magic values as per the experimental requirements.
Moreover, it is advantageous to consider hyperfine transitions in certain isotopes of the singly charged alkaline-earth ions, giving rise to zero hyperfine angular momentum, to get rid of the systematics due to electric quadrupole shifts <cit.>. Since the α_d values of the atomic and hyperfine states in an atomic system are different when the vector and tensor components of α_d contribute, the λ_magic values also differ between the atomic and hyperfine transitions. Thus, it would be pertinent to investigate λ_magic in both the atomic and hyperfine transitions before the experimental consideration of the proposed ions. In fact, it could be more convenient to have λ_magic values that are independent of the choice of both magnetic and hyperfine sublevels in a given clock transition. Moreover, the Nd:YAG laser at 1064 nm is often used for trapping atoms and ions because of its relatively high power and low intensity noise <cit.>. Traps built with long-wavelength lasers are generally “high-field-seeking”, where the atoms are attracted to the intensity maxima. Dynamic polarizabilities of the considered ions at 1064 nm will be of immense interest to experimentalists, since these polarizabilities will be immediately useful for operating optical traps at 1064 nm light fields. In the present work, we aim to search for the λ_magic values of the nS_1/2 - (n-1)D_3/2,5/2 optical clock transitions, both in the atomic and hyperfine levels, in ^43Ca^+ (with nuclear spin I=7/2), ^87Sr^+ (I=9/2) and ^137Ba^+ (I=3/2) ions using circularly polarized light. These values can be compared with the values for linearly polarized light reported in Ref. <cit.> for the experimental consideration to trap the above ions. Also, we had demonstrated in a recent work how a trap geometry can be chosen in such a way that the Stark shifts observed by the energy levels can be free from the contributions of the vector and tensor components of the α_d of the atomic states <cit.>.
Assuming such trapping geometries for the considered alkaline-earth ions, we also give λ_magic values using only the scalar polarizability contributions. Moreover, we identify the tune-out wavelengths (λ_T) of the respective states, for which the dynamic dipole polarizability of these ions vanishes. Knowledge of these λ_T values is needed for the sympathetic cooling of other possible singly and multiply charged ions in two-species mixtures with the considered alkaline-earth ions <cit.>. We also present the dynamic polarizabilities of these states at the 1064 nm wavelength of the applied external electric field. Contributions from the various electric dipole (E1) matrix elements in determining the α_d values, and the role of the electron correlation effects in the evaluation of accurate values of the E1 matrix elements, are also discussed. Unless stated otherwise, all the results are given in atomic units (a.u.) throughout this paper. § THEORY The Stark shift in the energy of the K^th level of an atom placed in an electric field is given by <cit.> Δ E^K = -(1/4) α_d^K(ω) E^2, where E is the amplitude of the external electric field due to the applied laser and α_d^K(ω) is the dynamic dipole polarizability of the state K with magnetic projection M. In tensor decomposition, α_d^K can be expressed as α_d^K(ω) = α_d,0^K(ω) + β(ϵ) (M/2K) α_d,1^K(ω) + γ(ϵ) {[3M^2-K(K+1)]/[K(2K-1)]} α_d,2^K(ω), where α_d,i^K(ω) with i = 0,1,2 are the scalar, vector and tensor components of α_d^K(ω), respectively. In the specific cases, K can be either the atomic angular momentum J or the hyperfine angular momentum F. The terms β(ϵ) and γ(ϵ) are defined as <cit.> β(ϵ) = i(ϵ̂×ϵ̂^*)·ê_B and γ(ϵ) = (1/2)[3(ϵ̂^*·ê_B)(ϵ̂·ê_B)-1], with the quantization axis unit vector ê_B and the polarization unit vector ê.
The differential Stark shift of a transition betweenstates K to K' can be formulated asδ E_KK' = Δ E_K - Δ E_K' = -1/2 [ {α_d,0^K ( ω)-α_d,0^K'(ω) } + β(ϵ) {M_K/2Kα_d,1^K(ω)- M_K'/2K'α_d,1^K'(ω) }+ γ(ϵ) {3M_K^2-K(K+1)/K(2K-1)α_d,2^K (ω) -3M_K'^2-K'(K'+1)/K'(2K'-1)α_d,2^K' (ω) } ]E^2, To obtain null differential Stark shift, it is obvious from Eq. (<ref>) that either the independent components of the polarizabilities cancel out each other or the net resultant nullifies which depend upon the choice of β(ϵ), γ(ϵ), M_K and M'_K'magnetic sublevels. Moreover, as we have demonstrated recently, the differential Stark shift can be independent of the vector and tensorcomponents of the states involved in a transition for a certain trap geometry <cit.>. Such trapping scheme is usefulfor M_J , F and M_F insensitive trapping and can be suitably applied for considered clock transitions in alkaline earth metal ions.Conveniently the expressions for polarizabilities of the hyperfine and atomic levels can be expressed as <cit.> α_d^F(ω)= α_d,0^F (ω) + α_d,1^F (ω ) A cosθ_k M_F/2F + α_d,2^F(ω ) × ( 3cos^2θ_p-1/2) [ 3M_F^2-F(F+1)/F(2F-1) ] ,and α_d^J(ω)= α_d,0^J (ω) + α_d,1^J (ω ) A cosθ_k M_J/2J + α_d,2^J(ω ) × ( 3cos^2θ_p-1/2) [ 3M_J^2-J(J+1)/J(2J-1) ] ,where α_d,i^K=J,F(ω) with i=0,1,2 are the scalar, vector and tensor components of the respective polarizabilities, A represents degree of polarization, θ_k is the angle between the quantization axis and wave vector, and θ_p is the anglebetween the quantization axis and direction of polarization of the field. In the presence of magnetic field, cosθ_k andcos^2θ_p can have any values depending on the direction of applied magnetic field. In the absence of magnetic field,cosθ_k = 0 and cos^2θ_p = 1 for the linearly polarized light, where polarization vector is assumed to be along thequantization axis. 
However it yields cosθ_k = 1 and cos^2θ_p = 0 for the circularly polarized light, where wave vector isassumed to be along the quantization axis. Polarizabilities of the hyperfine states can be evaluated from the atomic state resultsusing the relations <cit.>α_d,0^F(ω) = α_d,0^J(ω),α_d,1^F(ω) = (-1)^J+F+I+1{[ F J I; J F 1 ]}× √(F(2F+1)(2J+1)(J+1)/J(F+1))α_d,1^J(ω)andα_d,2^F(ω) = (-1)^J+F+I{[ F J I; J F 2 ]}× √(F (2F-1)(2F+1)/(2F+3)(F+1))× √((2J+3)(2J+1)(J+1)/J(2J-1))α_d,2^J(ω) . Further, we can evaluate the atomic dipole polarizabilities using the expressionsα_d,0^J(ω) = 2/3(2J+1)∑_J'(E -E')|⟨ J || D||J' ⟩|^2/ω^2 - (E -E')^2, α_d,1^J(ω) = √(24J/(J+1)(2J+1))∑_J'(-1)^J+J' ×{[J1J;1 J'1 ]}ω |⟨ J || D|| J' ⟩|^2/ω^2 - (E -E')^2andα_d,2^J(ω)= √(40 J (2J-1)/3(J +1)(2J+3)(2J+1))∑_J'(-1)^J+J'× {[J2J;1 J'1 ]} (E -E')|⟨ J|| D||J' ⟩|^2/ω^2 - (E -E')^2,where J's are the angular momentum of the intermediate states, E and E' are energies of the corresponding states, ⟨ J|| D||J'⟩ are the reduced E1 matrix elements. We define the tune-out wavelength λ_T as the wavelength where the dynamic polarizability of the state is zero.We have determined the tune out wavelengths for the ground, (n-1)D_3/2 and (n-1)D_5/2 statesof ^43Ca^+, ^87Sr^+ and ^137Ba^+ ions. The detailed description about these calculationshave been given in the Refs. <cit.>.§ METHOD OF EVALUATION As discussed in our earlier works <cit.>, each component of α_d^J(ω) can be conveniently evaluated in the considered alkaline earth ions, in which many of the low-lying states have electronic configurations as a common closed core of inertgas atoms and a well defined valence orbital, by classifying into three different contributions such asα_d,i^J(ω)=α_d,i^J,c(ω)+α_d,i^J,cv(ω)+α_d,i^J,v(ω),where α_d,i^J,c, α_d,i^J,cv and α_d,i^J,v are referred to as the core, core-valence and valenceelectron correlation contributions, respectively. 
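The sum-over-states expressions above translate directly into a numerical search for λ_magic as a crossing of two scalar α_d,0^J(ω) curves. The sketch below uses entirely hypothetical excitation energies and reduced matrix elements (in atomic units) rather than the actual data for Ca^+, Sr^+ or Ba^+; it only illustrates the mechanics of bracketing a magic frequency away from the resonance poles:

```python
# HYPOTHETICAL excitation energies Delta E and squared reduced E1 matrix
# elements |<J||D||J'>|^2 (atomic units); NOT the actual ionic data.
S_CHANNELS = [(0.20, 0.5)]                 # J = 1/2 state
D_CHANNELS = [(0.10, 0.5), (0.30, 5.0)]    # J = 3/2 state

def alpha_scalar(channels, omega, J):
    """Scalar dynamic polarizability from the sum-over-states expression,
    alpha_0 = 2/[3(2J+1)] * sum dE*|d|^2/(dE^2 - omega^2)."""
    pref = 2.0 / (3.0 * (2.0 * J + 1.0))
    return pref * sum(dE * d2 / (dE * dE - omega * omega) for dE, d2 in channels)

def magic_frequency(lo, hi, steps=200):
    """Bisect for alpha_S(omega) = alpha_D(omega) on a pole-free bracket."""
    diff = lambda w: (alpha_scalar(S_CHANNELS, w, 0.5)
                      - alpha_scalar(D_CHANNELS, w, 1.5))
    assert diff(lo) * diff(hi) < 0.0, "no sign change on the bracket"
    for _ in range(steps):
        mid = 0.5 * (lo + hi)
        if diff(lo) * diff(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

In an actual calculation, many more intermediate states contribute (the "Main" and "Tail" valence terms plus the core and core-valence pieces discussed below), and the magic wavelength follows from the crossing frequency via λ_magic = 2πc/ω_magic in consistent units.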
Since the valence electron correlation effects are mainly contained in α_d,i^J,v, this term gives the dominant contribution, followed by α_d,i^J,c. The accuracy of the ab initio results for these quantities suffers mainly from the uncertainties associated with the calculated energies of the atomic states. Therefore, we calculate only the E1 matrix elements of as many low-lying states of the considered ions as possible, employing a relativistic coupled-cluster (RCC) method, which is an all-order perturbative method, and combine them with the experimental energy values from the National Institute of Standards and Technology (NIST) database <cit.> to determine the dominant contributions to α_d,i^J,v. We refer to this as the “Main” contribution to α_d,i^J,v, while the smaller contributions coming from the high-lying excited states are estimated in an ab initio formalism using the Dirac-Hartree-Fock (DHF) method and are referred to as the “Tail” contribution to α_d,i^J,v. For the other two contributions a sum-over-states approach is not possible, so we determine α_d,i^J,c in the random-phase approximation (RPA), which includes core-polarization effects to all orders. It has been found that RPA can give these values reliably for atomic systems with inert-gas configurations <cit.>. Moreover, as demonstrated later, the α_d,i^J,cv contributions are very small in these ions; thus, we evaluate them using the DHF method.

A small number of E1 matrix elements of the nD-nF transitions of ^137Ba^+ are borrowed from the work of Sahoo et al. <cit.>, while the remaining E1 matrix elements for the evaluation of the “Main” contribution to α_d^J,v are obtained within the singles and doubles excitation approximation of the RCC (SD) method as described in Refs. <cit.>. In the SD method, the wave function of a state with a closed core and a valence electron v is represented by the expansion

|Ψ_v ⟩_ SD = [1+ ∑_maρ_ma a^†_m a_a+ 1/2∑_mnabρ_mnaba_m^† a_n^† a_b a_a..
+ ∑_mvρ_mv a^†_m a_v + ∑_mnaρ_mnva a_m^† a_n^† a_a a_v] |Φ_v⟩,

where |Φ_v⟩ is the DHF wave function of the state. In the above expression, a^†_i and a_i are the creation and annihilation operators, with the indices {m,n} and {a,b} designating the virtual and core orbitals of |Φ_v⟩; ρ_ma and ρ_mv are the corresponding single core and valence excitation coefficients, and ρ_mnab and ρ_mnva are the double core and valence excitation coefficients. To obtain the DHF wave function, we use a finite-size basis set consisting of 70 B-splines constrained to a large cavity of radius R = 220 a.u. and solve the Roothaan equation self-consistently on a nonlinear grid.

In order to assess contributions from higher-level excitations, we also estimate the leading-order contributions from triple excitations perturbatively within the SD method framework (SDpT method) by expressing the atomic wave functions as <cit.>

|Ψ_v ⟩_ SDpT=|Ψ_v ⟩_ SD+1/6∑_mnrabρ_mnrvab^pert a_m^† a_n^† a_r^† a_b a_a a_v |Φ_v⟩,

where ρ_mnrvab^pert are the perturbed triple valence excitation amplitudes. After obtaining the wave functions employing the SD and SDpT methods, we determine the E1 matrix element for a given transition between the states |Ψ_v⟩ and |Ψ_w⟩ by evaluating the expression

Z_vw = ⟨Ψ_v|Z|Ψ_w⟩/√(⟨Ψ_v|Ψ_v⟩⟨Ψ_w|Ψ_w⟩).

To estimate the uncertainties of the calculated E1 matrix elements, we have carried out a semi-empirical scaling of the wave functions that accounts for correlation contributions missing from the approximated SD and SDpT methods. This procedure involves scaling the excitation coefficients and reevaluating the E1 matrix elements. The scaling factors are determined from the correlation energy trends in the SD and SDpT methods. Details regarding this scaling procedure are given in Ref.
<cit.>.

§ RESULTS AND DISCUSSION

Since valence correlation contributions are vital for an accurate estimate of the polarizabilities, we include the E1 matrix elements among the low-lying states up to the 4S-7P, 3D-7P and 3D-6F transitions in ^43Ca^+, the 5S-8P, 4D-8P and 4D-6F transitions in ^87Sr^+, and the 6S-8P, 5D-8P and 5D-6F transitions in ^137Ba^+ for the evaluation of the “Main” contributions. All these matrix elements are calculated using the RCC method described in the previous section. A few E1 matrix elements of the nD-nF transitions of ^137Ba^+ are taken from Ref. <cit.>. We present the considered E1 matrix elements at different levels of approximation of the many-body methods in the Supplemental Material. Values scaled from the SD and SDpT results, which mainly improve the results by accounting for corrections beyond the Brueckner-orbital contributions <cit.>, are also given in the same table. We then give the “Final” results, taking the most reliable values along with their estimated uncertainties in parentheses in the last column of this table <cit.>.

As can be seen from the Supplemental Material, the DHF method gives large E1 matrix element values, the SD method brings these values down, and the SDpT method increases them slightly from the SD values. The scaled values from both the SD and SDpT methods modify these values only marginally. Thus, these values appear to be very reliable for determining the polarizabilities of the considered ions. For the final use, we recommend the SD values; the uncertainties of these values are estimated by taking the differences from the results obtained using the SDpT method. Below, we discuss the polarizability results obtained with these E1 matrix elements and the magic wavelengths of the S-D clock transitions of the above alkaline-earth ions.
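The prescription just described — adopt the SD value as the recommended one, with the spread between the SD and SDpT results serving as its uncertainty — can be written compactly (the numbers are placeholders, not our tabulated matrix elements):

```python
def final_e1(sd, sdpt):
    """Recommended E1 matrix element: take the SD value as final and
    estimate its uncertainty as the difference from the SDpT result."""
    return sd, abs(sd - sdpt)

# Placeholder SD/SDpT values for one hypothetical transition (a.u.)
value, sigma = final_e1(sd=2.850, sdpt=2.861)
```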
§.§ Static polarizability results

Using the E1 matrix elements given in the Supplemental Material, we first evaluate the static polarizabilities of the ground and (n-1)D_J states of the ^43Ca^+, ^87Sr^+ and ^137Ba^+ ions and compare them with the previously available experimental and theoretical results in Table <ref>. We give both the scalar and tensor polarizabilities of the considered ground and (n-1)D_J states, along with the “Main” and “Tail” contributions to the valence part α_d,i^J,v, the core-valence part α_d,i^J,cv, and the core part α_d,i^J,c, in this table. These results are discussed ion-wise below.

^43Ca^+: As can be seen from Table <ref>, the α_d,i^J(0) value of 76.1(2) a.u. obtained for the ground state polarizability of the ^43Ca^+ ion is in close agreement with the other theoretical calculations, 75.28 a.u. and 75.49 a.u., by Tang et al. <cit.> and Mitroy et al. <cit.>, respectively. Mitroy et al. have also given these values for the 3D_3/2 and 3D_5/2 states of Ca^+. They evaluated these polarizabilities by diagonalizing a semi-empirical Hamiltonian constructed in a large-dimension single-electron basis. Our estimated values agree quite well with those values within the quoted error bars. In Ref. <cit.>, ab initio calculations of these quantities are reported using a RCC method, and our values also compare reasonably well with them. The most stringent experimental value for the ^43Ca^+ ground state polarizability was obtained by spectral analysis in Ref. <cit.>; our result is in agreement with this value as well.

^87Sr^+: Next, we compare our polarizability results for the ^87Sr^+ ion given in Table <ref>. The RCC results for the S and D states reported by Sahoo et al. <cit.> are in agreement with our values. Mitroy et al. have also given these values by employing a non-relativistic method in the sum-over-states approach <cit.>. It can be seen from Table <ref> that our ground state dipole polarizability is in very good agreement with their result.
However, it seems inappropriate to compare their non-relativistic values for the dipole polarizabilities of the 4D_J states with our relativistic calculations. The estimate for the ground state static polarizability of ^87Sr^+ by Barklem et al. <cit.> was derived by combining their theoretical calculations with the experimental data from Ref. <cit.>. There is a considerable discrepancy between their result and our present value, mainly because of their omission of the core contribution, which we include using the RPA method. There are no direct experimental results available for the dipole polarizabilities of the Sr^+ ion against which a comparative analysis with the theoretical values could be made.

^137Ba^+: Our precise ground state polarizability calculation gives a value of 123.2(5) a.u. for the ^137Ba^+ ion, which is in good agreement with the high-precision measurement achieved by a novel technique based on resonant-excitation Stark ionization spectroscopy <cit.>. We expect that the results for the 5D_J states of this ion will be of similar accuracy.

Having analyzed the accuracies of the static polarizabilities satisfactorily, we now move on to the dynamic polarizabilities of the above ions. We adopt similar procedures for the calculation of these quantities, thus anticipating accuracies in the dynamic polarizability values similar to those of the corresponding static values. This allows us to determine, with confidence, the λ_magic values of the nS-(n-1)D_3/2 and nS-(n-1)D_5/2 transitions and the λ_T values of the associated states in the alkaline-earth metal ions from these dynamic polarizabilities.

§.§ Dynamic dipole polarizabilities at 1064 nm

Here we discuss the dynamic polarizabilities at 1064 nm, which are in high demand for creating high-field-seeking (far-detuned) traps of the considered ions, in which the atoms are attracted to the intensity maxima. Recently, Chen et al.
<cit.> carried out measurements of the scalar and tensor contributions to the atomic polarizabilities of the Rb atom at this wavelength. In our calculation of the dynamic polarizabilities of the considered ^43Ca^+, ^87Sr^+ and ^137Ba^+ ions, we have used the same E1 matrix elements to determine the “Main” contributions and estimated the other, non-dominant contributions following the same procedure as for the evaluation of the static polarizabilities.

We list the contributions to the nS_1/2 and (n-1)D_J dynamic polarizabilities of the above ions at this wavelength in Table <ref>. The dominant contributions are listed explicitly. This table illustrates the very fast convergence of the nS_1/2 state polarizabilities, for which we find that the largest contributions come from the nP excited states. We also notice that the contribution from the 4D_3/2-5P_1/2 transition to the 4D_J state polarizabilities in ^87Sr^+ is 20 times larger than its contribution to the 4D_J static polarizabilities. The reason for the overwhelming contribution from this particular transition is the proximity of the 4D_3/2-5P_1/2 resonance, which lies at 1091 nm, to the laser wavelength of 1064 nm.

§.§ Magic and tune-out wavelengths for circularly polarized light

In order to find the λ_magic among the (J,M_J) levels of the nS_1/2→ (n-1)D_3/2 and nS_1/2→ (n-1)D_5/2 transitions, we plot the total dynamic dipole polarizabilities of the nS_1/2 and (n-1)D_3/2,5/2 states in Figs. (<ref>), (<ref>) and (<ref>) for the ^43Ca^+, ^87Sr^+ and ^137Ba^+ ions, respectively. The λ_magic for the clock transitions are obtained by locating the crossing points of the two polarizability curves. In Tables <ref>, <ref>, <ref> and <ref>, we list the λ_magic for the considered transitions along with their respective uncertainties in parentheses. The corresponding polarizability values at λ_magic are listed as well.
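Locating λ_magic by scanning the two dynamic-polarizability curves and refining their crossing points, as described above, can be sketched numerically. The single-resonance model curves below are purely illustrative stand-ins, not our calculated polarizabilities:

```python
def crossings(alpha_s, alpha_d, w_grid):
    """Return the frequencies where alpha_s - alpha_d changes sign on the
    grid, each refined by bisection; every sign change brackets one magic
    frequency (i.e., one magic wavelength)."""
    diff = lambda w: alpha_s(w) - alpha_d(w)
    roots = []
    for a, b in zip(w_grid[:-1], w_grid[1:]):
        fa, fb = diff(a), diff(b)
        if fa == 0.0:
            roots.append(a)
        elif fa * fb < 0.0:
            lo, hi = a, b
            for _ in range(100):
                mid = 0.5 * (lo + hi)
                if diff(mid) * fa > 0.0:
                    lo = mid
                else:
                    hi = mid
            roots.append(0.5 * (lo + hi))
    return roots

# Illustrative single-resonance models for the S- and D-state polarizabilities
alpha_S = lambda w: 0.10 * 4.0 / (0.10**2 - w**2)        # resonance at w = 0.10
alpha_D = lambda w: 0.16 * 9.0 / (0.16**2 - w**2) + 5.0  # resonance at w = 0.16

grid = [0.02 + 0.0005 * i for i in range(150)]  # scan below the first resonance
magic = crossings(alpha_S, alpha_D, grid)       # one crossing in this window
```

For the real ions the same scan is repeated for every M_J (or F, M_F) sublevel, since the vector and tensor terms shift each curve differently.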
The resonant wavelengths λ_res are listed in the same table to demonstrate the placement of each λ_magic between two resonant transitions. In this work, we use left-handed circularly polarized light (A=-1) throughout, considering all possible positive and negative M_J sublevels of the ground S_1/2 and D_3/2,5/2 states. Note that the λ_magic for right circularly polarized light of a transition for a given M_J are equal to the λ_magic for left circularly polarized light with the opposite sign of M_J <cit.>.

We also investigate the λ_magic for the transitions involving the |nS_1/2F,M_F=0⟩ and |(n-1)D_3/2,5/2F,M_F=0⟩ states. The ac Stark shifts of the hyperfine levels of an atomic state are calculated using the method described in Sec. <ref>. We choose the M_F=0 sublevels of the hyperfine transitions because for this particular magnetic sublevel the first-order Zeeman shift vanishes, which is advantageous for optical clock experiments <cit.>. We show the λ_magic values of the | nS_1/2F,M_F=0⟩→|(n-1)D_3/2,5/2F,M_F=0⟩ transitions in Figs. (<ref>), (<ref>) and (<ref>) for the ^43Ca^+, ^87Sr^+ and ^137Ba^+ ions, respectively. These values are listed in Table <ref>, and we discuss the results for the individual ions below.

^43Ca^+: As evident from Fig. (<ref>), the dynamic polarizabilities of the 4S_1/2 state are small except in the vicinity of the resonant 4S_1/2-4P_1/2 and 4S_1/2-4P_3/2 transitions around 396.847 nm and 393.366 nm, respectively. Since the 3D_3/2,5/2 states have significant contributions from the resonances in the wavelength range of interest, the λ_magic are expected to lie between these resonances. We found a total of nine λ_magic for all possible magnetic sublevels of the 4S_1/2-3D_3/2 transition in between four resonances. The first four λ_magic are located around 395 nm, between the resonant 4S_1/2-4P_1/2 and 4S_1/2-4P_3/2 transitions, for all the M_J magnetic sublevels of the 3D_3/2 state.
Out of these, two λ_magic support a blue-detuned trap, whereas the other two support a red-detuned trap. The next five λ_magic are identified at 850.9(2) nm, 853.1(2) nm, 1467.8(4) nm, 1013.4(5) nm and 870.7(3) nm, which lie in the infrared region. For some of the M_J sublevels, the λ_magic is missing. In such a case, it would be imperative to consider a geometry where λ_magic is independent of the magnetic sublevels, as mentioned in Sec. <ref> and discussed elaborately in our previous work <cit.>. In Table <ref>, we present the λ_magic for the 4S_1/2-3D_5/2 transition considering all the magnetic sublevels of the 3D_5/2 state. As seen there, most of these λ_magic in the ^43Ca^+ ion support a red-detuned trap, as indicated by the small positive polarizability values at the corresponding λ_magic. Similarly, we tabulate the λ_magic for the 4S (M_J =-1/2) - 3D_3/2,5/2 transitions in Tables <ref> and <ref>. It can be clearly seen from these tables that the λ_magic are red-shifted with respect to the λ_magic for the 4S (M_J =1/2) - 3D_3/2,5/2 transitions.

Similarly, in Table <ref> we list the λ_magic values between 300-1300 nm for the |4S_1/2F,M_F=0⟩→|3D_3/2,5/2F,M_F=0⟩ transitions, together with the F-dependent polarizability values at the respective λ_magic. For this wavelength range, a total of twenty-four λ_magic are located, of which fourteen lie in the infrared region. From this table it can be seen that for the | 4S_1/2F,M_F = 0⟩→|3D_3/2,5/2F,M_F = 0⟩ transitions all the λ_magic support red-detuned traps.

^87Sr^+: The dynamic polarizabilities of the 5S_1/2, 4D_3/2 and 4D_5/2 states of ^87Sr^+ calculated by us are plotted in Fig. (<ref>). A number of λ_magic are identified from the intersections of the polarizability curves of the 5S_1/2 and 4D_3/2,5/2 states for all magnetic sublevels of the 4D_3/2,5/2 states in the 5S_1/2(M_J=1/2)-4D_3/2,5/2 transitions and are presented in Table <ref> along with their resonant lines.
Four λ_magic are found around 413 nm, between the 5S_1/2-5P_1/2 and 5S_1/2-5P_3/2 resonant transitions. These values belong to the visible region, while the remaining λ_magic are located at 1009.3(3) nm, 1019.7(3) nm, 1062.5(3) nm and 1577.2(3) nm in the infrared region. All the λ_magic values mentioned for the 5S_1/2-4D_3/2 transition, except the one at 412.5(9) nm, support the red-detuned trapping scheme. Similarly, for the 5S_1/2-4D_5/2 transition a total of eight λ_magic appear for all possible M_J sublevels. Among them, two λ_magic, located at 1379.4(3) nm and 1130.8(4) nm, appear after the 4D_5/2-5P_3/2 resonance for the M_J=1/2 and M_J=-1/2 magnetic sublevels, respectively. Use of these λ_magic is recommended for experiments addressing the corresponding magnetic sublevels selectively. Similarly, we present the λ_magic for the 5S (M_J =-1/2) - 4D_3/2,5/2 transitions in Tables <ref> and <ref>. It can be noticed from these tables that the λ_magic are red-shifted with respect to the λ_magic for the 5S (M_J =1/2) - 4D_3/2,5/2 transitions. We have also determined a total of sixteen extra λ_magic between the 5S_1/2-5P_1/2 and 4D_3/2-5P_3/2 resonant transitions.

In Table <ref>, the λ_magic values above 300 nm are listed for the |5S_1/2F,M_F= 0⟩→|4D_3/2,5/2F,M_F=0⟩ transitions of ^87Sr^+. We find λ_magic around 417 nm with very small polarizabilities for the |5S_1/2 F M_F=0⟩→|4D_3/2F,M_F=0⟩ and |5S_1/2FM_F=0⟩→|4D_5/2 F,M_F=0⟩ transitions. It will therefore be challenging to trap the ^87Sr^+ ion at these wavelengths. However, the λ_magic values in the infrared region for these transitions may be useful for trapping this ion in experiments.

^137Ba^+: A total of nine λ_magic are found for the 6S_1/2-5D_3/2 transition of ^137Ba^+, among which four λ_magic lie around 468 nm, in the vicinity of the 6S_1/2-6P_3/2 resonant transition.
The next λ_magic, at 587.6(9) nm, 589.5(3) nm and 589.6(5) nm, are located at the sharp intersections of the polarizability curves close to the 5D_3/2-6P_3/2 and 5D_3/2-6P_1/2 resonances, as seen in Fig. (<ref>). The last two λ_magic, located at 841.7(5) nm and 690.7(7) nm for the M_J=3/2 and 1/2 sublevels, respectively, have positive polarizabilities; hence these λ_magic could provide sufficient trap depth at reasonable laser power. In fact, some of the expected λ_magic are missing for the M_J= -3/2 and -1/2 sublevels. Similarly, several λ_magic are also located for the 6S_1/2-5D_5/2 transitions, as seen from Fig. (<ref>), in the wavelength range 300-800 nm; they are listed in Table <ref>. The trend of locating λ_magic between the resonances of this transition is similar to that for the previous two ions. For the 6S (M_J = -1/2) - 5D_3/2,5/2 transitions, we list the λ_magic in Tables <ref> and <ref>. These magic wavelengths are slightly red-shifted with respect to those obtained for the 6S (M_J = 1/2) - 5D_3/2,5/2 transitions. We also found a total of fifteen λ_magic between the 6S_1/2-6P_1/2 and 5D_3/2-6P_1/2 resonant transitions in both tables, as well as an extra λ_magic at 590.5 nm in Table <ref>, which supports a red-detuned trap.

In Table <ref>, we also list the λ_magic for the |6S_1/2F,M_F=0⟩→|5D_3/2,5/2F,M_F=0⟩ transitions, which lie within the wavelength range 300-800 nm. We correspondingly locate twenty λ_magic in the visible region. We notice that all the λ_magic values, except those around 480 nm, are expected to be promising for experiments; an ion trap at these wavelengths can have sufficient trap depth at reasonable laser power.

Tune-out wavelengths: Tables <ref> and <ref> list the identified tune-out wavelengths of the nS_1/2, (n-1)D_3/2 and (n-1)D_5/2 states in the (J,M_J) and (F,M_F) levels of the ^43Ca^+, ^87Sr^+ and ^137Ba^+ alkaline-earth metal ions.
To locate these tune-out wavelengths, we calculated the dynamic polarizabilities of the above states over a range of wavelengths in the vicinity of the relevant resonances of the corresponding ion and found the values of λ at which the polarizability goes to zero.

§.§ Magnetic-sublevel-independent λ_magic and λ_T

We have also used our frequency-dependent scalar polarizability results for the ^43Ca^+, ^87Sr^+ and ^137Ba^+ ions to find λ_magic that are independent of the magnetic sublevels M_J of the atomic states, and hence also independent of the hyperfine sublevels. Table <ref> lists the λ_magic values for the nS_1/2-(n-1)D_3/2,5/2 transitions that lie within the wavelength range 300-1500 nm. We are also able to locate tune-out wavelengths of the ground, (n-1)D_3/2 and (n-1)D_5/2 states of the considered ions that are independent of the F, M_F and M_J values of the respective ion, and we give them in Table <ref>. The existence of these λ_magic and λ_T for the considered ions can offer pathways to many high-precision measurements with minimal systematics.

§ CONCLUSION

We have determined the scalar, vector and tensor polarizabilities of the nS_1/2, (n-1)D_3/2 and (n-1)D_5/2 states of the ^43Ca^+, ^87Sr^+ and ^137Ba^+ alkaline-earth metal ions, with n the ground state principal quantum number. We used very precise values of the electric dipole matrix elements, obtained by employing a relativistic all-order method. Non-dominant contributions in the adopted sum-over-states approach for the evaluation of the polarizabilities are estimated using lower-order perturbation methods. The obtained static polarizability values are compared with other available theoretical results and experimental values to gauge their accuracies.
Dynamic polarizabilities at 1064 nm are given explicitly for the nS_1/2 and (n-1)D_3/2,5/2 states of the considered alkaline-earth metal ions, which could help in creating “high-field seeking” traps using the Nd:YAG laser. Furthermore, using the dynamic polarizabilities over a wide range of wavelengths, we have located a number of tune-out wavelengths λ_T of the above states and magic wavelengths λ_magic of the | nS_1/2F,M_F=0⟩→| (n-1)D_3/2,5/2F,M_F=0⟩ clock transitions for circularly polarized light in the ^43Ca^+, ^87Sr^+ and ^137Ba^+ alkaline-earth ions. We have located a significant number of λ_magic for these clock transitions, which can help experimentalists trap the above ions and reduce uncertainties in the clock transitions due to Stark shifts. This knowledge would also be of immense interest for carrying out other high-precision studies using the considered ions. In addition, we have determined the λ_magic and λ_T values that are independent of the choice of magnetic and hyperfine sublevels of the above clock transitions.

§ ACKNOWLEDGEMENT

The work of B.A. is supported by the Department of Science and Technology, India, and the work of J.K. is supported by UGC-BSR Grant No. F.7-273/2009/BSR, India. S.S. acknowledges financial support from UGC-BSR. Part of the computations was carried out using the Vikram-100 HPC cluster of Physical Research Laboratory, and the employed SD method was developed in the group of Professor M. S. Safronova of the University of Delaware, USA.

[Poli] N. Poli, M. Schioppo, S. Vogt, S. Falke, U. Sterr, C. Lisdat, and G. M. Tino, Applied Physics B 117, 1107 (2014).
[Nicholoson] T.
Nicholson, S. Campbell, R. Hutson, G. Marti, B. Bloom, R. McNally, W. Zhang, M. Barrett, M. Safronova, G. Strouse, et al., Nature Communications 6, 6896 (2015).
[katori] H. Katori, T. Ido, and M. K. Gonokami, J. Phys. Soc. Jpn. 68, 2479 (1999).
[Champenois] C. Champenois, M. Houssin, C. Lisowski, M. Knoop, G. Hagel, M. Vedel, and F. Vedel, Phys. Lett. A 331, 298 (2004).
[Chou] C. W. Chou, D. B. Hume, J. C. J. Koelemeij, D. J. Wineland, and T. Rosenband, Phys. Rev. Lett. 104, 070802 (2010).
[margolis] H. S. Margolis, G. P. Barwood, G. Huang, H. A. Klein, S. N. Lea, K. Szymaniec, and P. Gill, Science 306, 1355 (2004).
[Peik] E. Peik, B. Lipphardt, H. Schnatz, T. Schneider, C. Tamm, and S. G. Karshenboim, Laser Phys. 15, 1028 (2005).
[Rosenband] T. Rosenband et al., Science 319, 1808 (2008).
[Stalnaker] J. Stalnaker, S. Diddams, and T. F. et al., Appl. Phys. B 89, 167 (2007).
[Liu] P. L. Liu, Y. Huang, W. Bian, H. Shao, H. Guan, Y. B. Tang, C. B. Li, J. Mitroy, and K. L. Gao, Phys. Rev. Lett. 114, 223001 (2015).
[book1] S. G. Karshenboim and E. Peik, Astrophysics, Clocks and Fundamental Constants (Springer-Verlag, Berlin Heidelberg, 2004).
[Sackett] C. A. Sackett, D. Kielpinski, B. E. King, C. Langer, V. Meyer, C. J. Myatt, M. Rowe, Q. A. Turchette, W. M. Itano, D. J. Wineland, et al., Nature 404, 256 (2000).
[Hong] T. Hong, C. Cramer, E. Cook, W. Nagourney, and E. N. Fortson, Opt. Lett. 30, 2644 (2005).
[jasmeet2] J. Kaur, S. Singh, B. Arora, and B. K. Sahoo, Phys. Rev. A 92, 031402(R) (2015).
[Sahoo09] B. K. Sahoo, R. G. E. Timmermans, B. P. Das, and D. Mukherjee, Phys. Rev. A 80, 062506 (2009).
[Sherman] J. Sherman, W. Trimble, S. Metz, W. Nagourney, and N. Fortson, in 2005 Digest of the LEOS Summer Topical Meetings (IEEE, New York, 2005), p. 99.
[Sahoo07] B. K. Sahoo, B. P. Das, R. K. Chaudhuri, D. Mukherjee, R. G. E. Timmermans, and K. Jungmann, Phys. Rev. A 76, 040504(R) (2007).
[Burda] S. Burd, D. Leibfried, A. C. Wilson, and D. J. Wineland, Proc. of SPIE 9349, 93490 (2015).
[Tang] Y. B. Tang, H. X. Qiao, T. Y. Shi, and J. Mitroy, Phys. Rev. A 87, 042517 (2013).
[sahooca09] B. K. Sahoo, B. P. Das, and D. Mukherjee, Phys. Rev. A 79, 052511 (2009).
[mitroyca] J. Mitroy and J. Y. Zhang, Eur. Phys. J. D 46, 415 (2008).
[edward] E. S. Chang, J. Phys. B 16, 539 (1983).
[SR] J. Mitroy, J. Y. Zhang, and M. W. J. Bromley, Phys. Rev. A 77, 032512 (2008).
[sahoo] B. K. Sahoo, R. G. E. Timmermans, B. P. Das, and D. Mukherjee, Phys. Rev. A 80, 062506 (2009).
[Barklem] P. S. Barklem and B. J. O'Mara, Mon. Not. R. Astron. Soc. 311, 535 (2000).
[snow] E. L. Snow and S. R. Lundeen, Phys. Rev. A 76, 052505 (2007).
[sukhjitnew] S. Singh, B. K. Sahoo, and B. Arora, Phys. Rev. A 93, 063422 (2016).
[Hobein] M. Hobein, A. Solders, M. Suhonen, Y. Liu, and R. Schuch, Phys. Rev. Lett. 106, 013002 (2011).
[bonin] K. D. Bonin and V. V. Kresin, Electric-Dipole Polarizabilities of Atoms, Molecules and Clusters (World Scientific, Singapore, 1997).
[manakov] N. Manakov, V. Ovsiannikov, and L. Rapoport, Phys. Rep. 141, 319 (1986).
[beloyt] K. Beloy, Ph.D. thesis, University of Nevada (2009).
[dzubaflam] V. V. Flambaum, V. A. Dzuba, and A. Derevianko, Phys. Rev. Lett. 101, 220801 (2008).
[arora2011] B. Arora, M. S. Safronova, and C. W. Clark, Phys. Rev. A 84, 043401 (2011).
[Leblanc2007] L. J. LeBlanc and J. H. Thywissen, Phys. Rev. A 75, 053612 (2007).
[nandy] B. Arora, D. K. Nandy, and B. K. Sahoo, Phys. Rev. A 85, 012506 (2012).
[recent] J. Kaur, D. K. Nandy, B. Arora, and B. K. Sahoo, Phys. Rev. A 91, 012705 (2015).
[NIST] A. Kramida, Y. Ralchenko, J. Reader, and the NIST ASD Team, NIST Atomic Spectra Database, ver. 5.2 (2014), http://physics.nist.gov/asd [12 Dec. 2014], National Institute of Standards and Technology, Gaithersburg, MD.
[yashpal] Y. Singh, B. K. Sahoo, and B. P. Das, Phys. Rev. A 88, 062504 (2013).
[SahooBa] B. K. Sahoo, L. W. Wansbeek, K. Jungmann, and R. G. E. Timmermans, Phys. Rev. A 79, 052512 (2009).
[Blundell] S. A. Blundell, W. R. Johnson, and J. Sapirstein, Phys. Rev. A 43, 3407 (1991).
[Johnson87] W. R. Johnson, M. Idrees, and J. Sapirstein, Phys. Rev. A 35, 8 (1987).
[theory] M. S. Safronova, A. Derevianko, and W. R. Johnson, Phys. Rev. A 58, 1016 (1998).
[safro07] M. S. Safronova and W. R. Johnson, Adv. At. Mol. Opt. Phys. 55, 191 (2007).
[safroca] M. S. Safronova and U. I. Safronova, Phys. Rev. A 83, 012503 (2011).
[Chen] Y. J. Chen, L. F. Goncalves, and G. Raithel, Phys. Rev. A 92, 060501(R) (2015).
[sukhjitJPB] S. Singh, K. Kaur, B. K. Sahoo, and B. Arora, J. Phys. B 49, 145005 (2016).
[ab1] B. Arora and B. K. Sahoo, Phys. Rev. A 86, 033416 (2012).
[ab2] B. K. Sahoo and B. Arora, Phys. Rev. A 87, 023402 (2013).
http://arxiv.org/abs/1703.08969v1
arXiv:1703.08969v1 [physics.atom-ph], submitted 27 March 2017. Title: "Annexing magic and tune-out wavelengths to the clock transitions of the alkaline-earth metal ions". Authors: Jasmeet Kaur, Sukhjit Singh, Bindiya Arora, and B. K. Sahoo.
zhangyiqi@mail.xjtu.edu.cn ypzhang@mail.xjtu.edu.cn
^1Key Laboratory for Physical Electronics and Devices of the Ministry of Education & Shaanxi Key Lab of Information Photonic Technique, Xi'an Jiaotong University, Xi'an 710049, China
^2Department of Applied Physics, School of Science, Xi'an Jiaotong University, Xi'an 710049, China
^3ICFO-Institut de Ciencies Fotoniques, The Barcelona Institute of Science and Technology, 08860 Castelldefels (Barcelona), Spain
^4Institute of Spectroscopy, Russian Academy of Sciences, Troitsk, Moscow Region 142190, Russia
^5Department of Physics, University of Bath, Bath BA2 7AY, United Kingdom
^6Science Program, Texas A&M University at Qatar, P.O. Box 23874, Doha, Qatar
^7Department of Physics, University of Arkansas, Fayetteville, Arkansas 72701, USA
^8National Laboratory of Solid State Microstructures and School of Physics, Nanjing University, Nanjing 210093, China

We address edge states and the rich localization regimes available in one-dimensional (1D) dynamically modulated superlattices, both theoretically and numerically. In contrast to conventional lattices with straight waveguides, the quasi-energy band of an infinite modulated superlattice is periodic not only in the transverse Bloch momentum, but also changes periodically with increasing coupling strength between waveguides. Due to the collapse of quasi-energy bands, dynamical superlattices admit the known dynamical localization effect. If, however, such a lattice is truncated, the periodic longitudinal modulation leads to the appearance of specific edge states that exist within certain periodically spaced intervals of coupling constants. We discuss the unusual transport properties of such truncated superlattices and illustrate different excitation regimes and the enhanced robustness of edge states in them, which is associated with the topology of the quasi-energy band.
03.65.Vf, 42.25.Gy, 78.67.–n Edge states in dynamical superlattices Min Xiao^7,8 December 30, 2023 ======================================§ INTRODUCTION Periodically modulated lattice systems attract considerable attention in diverse areas of physics, including condensed matter physics <cit.> and photonics <cit.>. One of the main reasons behind the interest in such systems is that the variation of system parameters along the evolution coordinate (time in condensed matter physics, or propagation distance in photonics) gives rise not only to a rich variety of resonant dynamical effects associated with specific deformations of quasi-energy bands (for an overview of such dynamical effects see <cit.>), but also to effects of purely topological origin. One manifestation of such effects is the appearance of topologically protected edge states that are typically unidirectional (in 2D systems) and that demonstrate immunity to backscattering on disorder and other structural lattice defects due to topological protection. In modulated periodic photonic systems, frequently called Floquet insulators <cit.>, longitudinal variations of the underlying potential were shown to produce effective time-dependent “magnetic fields” that qualitatively change the behaviour of the system and allow the design of a new class of devices employing topologically protected transport, including photonic interconnects, delay lines, isolators, couplers, and other structures <cit.>. Periodically modulated photonic lattices were employed for the realization of discrete quantum walks <cit.>, and enabled the observation of Floquet topological transitions with matter waves <cit.>. Previous investigations of modulated lattices mainly focused on 2D and 3D geometries, while less attention was paid to 1D settings.
Moreover, in studies of bulk and surface effects in modulated 1D photonic systems, only the simplest lattices were utilized, with identical coupling strengths between all channels and identical (usually sinusoidal) laws of their longitudinal variation <cit.>. Only recently were dynamical superlattices with specially designed, periodically varying separation between channels belonging to two different sublattices introduced, allowing the observation of intriguing new resonant phenomena, such as light rectification <cit.>. So far only bulk modulated superlattices have been considered, and no surface effects in such structures have been addressed. Therefore, the main aim of this work is the exploration of new phenomena stemming from the interplay between superlattice truncation and its longitudinal modulation. We show that dynamically modulated truncated superlattices exhibit a topological transition manifested in a qualitative modification of the quasi-energy spectrum upon variation of the coupling strength between the waveguides forming the lattice. Namely, within certain intervals of the coupling strength, isolated eigenvalues appear that are associated with nonresonant (i.e., existing within continuous intervals of coupling strengths) edge states. Interestingly, such edge states persist even when the conditions for the collapse of the bulk quasi-energy band are met. We discuss the specific propagation dynamics in the regime where edge states exist. We believe that these findings substantially enrich the approaches for controlling the propagation paths of light beams in periodic media.As an example of a dynamical superlattice we consider the discrete structure depicted in Fig. <ref>, which is somewhat similar to the Su-Schrieffer-Heeger lattice <cit.>. The superlattice is composed of two sublattices, denoted A and B (red and green channels in Fig. <ref>).
The single-mode waveguides in the individual sublattices are curved such that the coupling strength between nearest neighbours belonging to different sublattices changes with propagation distance in a step-like fashion, as schematically shown in Fig. <ref>(a) [since there are two sublattices, one can introduce two coupling strengths J_1(z) and J_2(z) describing the coupling between waveguides with equal (n,n) or different (n,n+1) indices from the two sublattices]. We assume that the coupling strength increases to a maximal value J when two waveguides are close and drops nearly to zero when they are well separated, due to the exponential decrease of the overlap integrals between the modal fields with increasing distance between the waveguides. The longitudinal period of the structure is T, while the transverse period is 2a. In Fig. <ref>(c) we display one longitudinal period of the structure, indicated by a dashed box in Fig. <ref>(b). The coupling constants on the two different segments of the lattice are indicated in Fig. <ref>(a). Such a lattice can be readily fabricated with the femtosecond-laser writing technique <cit.>.§ THEORETICAL MODEL AND BAND STRUCTURE We describe the propagation of light in the infinite superlattice depicted in Fig. <ref> using the discrete model <cit.>i dA_n/dz = J_1(z)B_n + J_2(z)B_{n-1},i dB_n/dz = J_1(z)A_n + J_2(z)A_{n+1},where the coupling constants J_{1,2}(z) are step-like periodic functions of the propagation distance z, while A_n, B_n stand for the field amplitudes on the sites of sublattices A and B. According to Floquet theory, the evolution of excitations in a longitudinally modulated lattice governed by the Hamiltonian H(𝐤,t)=H(𝐤,t+T) (here T is the period of the longitudinal modulation and 𝐤 is the transverse Bloch momentum) can be described by the Floquet evolution operator U(t)=𝒯exp[-i∫_0^tH(𝐤,t')dt'], where 𝒯 is the time-ordering operator. Defining the evolution operator U(T) for one longitudinal period of the structure [i.e.
|ϕ(𝐤,T)⟩=U(T)|ϕ(𝐤,0)⟩, where |ϕ(𝐤,t)⟩ is the Floquet eigenstate of the system] and using the adiabatic approximation, one can introduce the effective Hamiltonian H_ eff of the modulated lattice according to the definition U(T)=exp(-iH_ effT). In contrast to the instantaneous Hamiltonian H(𝐤,t), the effective Hamiltonian H_ eff is z-independent, and it offers a “stroboscopic” description of the propagation dynamics over one complete longitudinal period. The spectrum of the system can be described by the quasi-energies ϵ — the eigenvalues of the effective Hamiltonian <cit.> — that can be obtained from the expression U(T)|ϕ⟩=exp(-iϵ T)|ϕ⟩. Using this approach, in the case of an infinite discrete lattice we search for solutions of Eq. (<ref>) in the form of periodic Bloch waves A_n=Aexp(ikx_n) and B_n=Bexp(ikx_n+ika), where x_n=2na is the discrete transverse coordinate and k∈[-π/2a,π/2a] is the Bloch momentum in the first Brillouin zone. Substituting these expressions into Eq. (<ref>), one obtainsi dA/dz = [J_1(z)exp(iak)+J_2(z)exp(-iak)]B,i dB/dz = [J_1(z)exp(-iak)+J_2(z)exp(iak)]A. Thus, the Floquet evolution operator over one period can be represented as <cit.>U(T)=exp(-iH_2T/2)exp(-iH_1T/2) = cos(ak)exp(-iak)×[ cos(JT)+i tan(ak), i exp(iak)sin(JT); i exp(iak)sin(JT), exp(2iak)[cos(JT)-i tan(ak)] ],where the Hamiltonians on the first and second half-periods are given byH_1=[ 0, Jexp(iak); Jexp(-iak), 0 ],H_2=[ 0, Jexp(-iak); Jexp(iak), 0 ]. One can see from Eq. (<ref>) that the Floquet evolution operator is a periodic function of the transverse momentum k with period π/a and of the coupling strength J with period 2π/T. Similarly, by introducing the effective Hamiltonian through U=exp(-iH_ effT) and calculating its eigenvalues (the quasi-energies ϵ), one finds that the latter are also periodic functions of k and J. In Fig. <ref>, we depict the dependence ϵ(k,J).
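The one-period operator above can be evaluated numerically without any special machinery: each half-period Hamiltonian is a 2×2 matrix with eigenvalues ±J, so its exponential is cos(JT/2)I - i sin(JT/2)H/J. A minimal sketch (our own implementation for illustration; the step-like drive with J_1=J on the first half-period and J_2=J on the second is assumed, and the overall sign convention of the off-diagonal entries may differ from the printed formula by a gauge choice):

```python
import cmath
import math

def half_step(J, T, phase):
    """exp(-i H T/2) for H = [[0, J e^{i phase}], [J e^{-i phase}, 0]].

    Since H has eigenvalues +/- J, the exponential equals
    cos(J T/2) I - i sin(J T/2) H / J."""
    c = math.cos(J * T / 2)
    s = math.sin(J * T / 2)
    p = cmath.exp(1j * phase)
    return [[c, -1j * s * p],
            [-1j * s * p.conjugate(), c]]

def matmul2(A, B):
    """Product of two 2x2 matrices stored as nested lists."""
    return [[sum(A[i][m] * B[m][j] for m in range(2)) for j in range(2)]
            for i in range(2)]

def monodromy(k, J, T=1.0, a=1.0):
    """One-period Floquet operator U(T) = exp(-i H2 T/2) exp(-i H1 T/2)."""
    U1 = half_step(J, T, a * k)    # first half-period: J1 = J, J2 = 0
    U2 = half_step(J, T, -a * k)   # second half-period: J1 = 0, J2 = J
    return matmul2(U2, U1)

def quasienergies(k, J, T=1.0, a=1.0):
    """Quasi-energies eps from the eigenvalues exp(-i eps T) of U(T)."""
    U = monodromy(k, J, T, a)
    tr = U[0][0] + U[1][1]
    det = U[0][0] * U[1][1] - U[0][1] * U[1][0]
    disc = cmath.sqrt(tr * tr - 4 * det)
    return sorted(-cmath.phase(lam) / T for lam in ((tr + disc) / 2,
                                                    (tr - disc) / 2))
```

With T=a=1 this reproduces the band features discussed below: at J=2π/T both half-step operators equal -I, so U(T) is the identity and all quasi-energies collapse to zero (dynamical localization), while at J=π/T one finds ϵ(k) = ±(π - 2ak), linear in k (the rectification regime).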
The quasi-energy band is symmetric with respect to the plane ϵ=0 (it is also periodic in the vertical direction with period 2π/T, because the eigenvalues of a periodic system are defined modulo 2π/T). The maxima of the quasi-energies within the vertical interval shown in Fig. 2 are located at k=nπ/a and J=(2l+1)π/T, where n is an integer and l is a non-negative integer. To highlight the details of this dependence, we show the quasi-energies in Figs. <ref>(a) and <ref>(b) for fixed values of the coupling strength J and the Bloch momentum k, respectively. Importantly, it follows from Fig. <ref>(a) that the quasi-energy band is dispersive at J<π/T (see the red curves), so at this coupling strength any localized wavepacket launched into the system will diffract. When J increases up to π/T, the dependence ϵ(k) becomes linear <cit.> (see the black lines). This means that the effective dispersion coefficient vanishes and excitations in such a lattice propagate without diffraction, but with a nonzero transverse velocity — this is the rectification regime. A further increase of the coupling strength makes the quasi-energy band dispersive again. Finally, the quasi-energy band collapses to a line at J=2π/T (see the blue line). In this regime of dynamical localization, the shape of any wavepacket launched into the system is exactly reproduced after one longitudinal period. Very similar transformations can be observed for different Bloch momenta when the quasi-energy is plotted as a function of the coupling constant J, as shown in Fig. <ref>(b).The situation changes qualitatively when the superlattice is truncated in the transverse direction. In this case one can no longer introduce a Bloch momentum, so the evolution dynamics is described by the system (<ref>), where the equations for the amplitudes at the edge sites A_1 and A_N are replaced by i dA_1/dz = J_1(z)B_1 and i dA_N/dz = J_2(z)B_{N-1}.
One should stress that the properties of the system do not change qualitatively if the superlattice is truncated on a site belonging to sublattice A on the left side and on a site belonging to sublattice B on the right side. By introducing the effective Hamiltonian for the finite longitudinally modulated superlattice, one can determine its quasi-energies and plot them as a function of the coupling strength J. In Fig. <ref>(a) we display the corresponding dependence. One can see that it inherits some features of the ϵ(J) dependence of the infinite lattice [compare Figs. <ref>(a) and <ref>(b)]. Among them is the (partial) collapse of the quasi-energy band for specific values of the coupling constant, J=2π m/T. At the same time, there are two qualitative differences. First, within the interval J∈[π/T, 3π/T] of coupling constants, isolated quasi-energies emerge (see the red lines) that are associated with edge states. In fact, such edge states appear periodically in the intervals [(4m+1)π/T, (4m+3)π/T], where m is an integer. The second difference is that the period of the ϵ(J) dependence is doubled in comparison with that of the infinite lattice. This qualitative modification of the quasi-energy spectrum indicates a topological transition that occurs in the finite modulated superlattice upon variation of the coupling strength between waveguides. Interestingly, the collapse of the quasi-energy band at J=2π/T, indicating the presence of dynamic localization in the system, coexists with the formation of edge states, so for this particular value of J two qualitatively different localization mechanisms are simultaneously available.The width of the emerging edge states strongly depends on the coupling constant. To illustrate this, we introduce the participation ratio R=∑_n|q_n|^4/(∑_n|q_n|^2)^2, where q_n=A_n,B_n stands for the light amplitudes on the sites of sublattices A and B. The width of the mode is inversely proportional to the participation ratio. In Fig.
<ref>(b), we show the width of the edge state versus the coupling constant J. Localization increases with increasing coupling constant, so that already at J>1.1π/T the edge state occupies fewer than ten sites of the lattice. Maximal localization, in nearly a single surface channel, occurs at J=2π/T, and a further increase of the coupling constant leads to gradual delocalization of the edge state. Examples of edge-state profiles (absolute value) with notably different localization degrees, at J=1.16π/T and J=1.56π/T, are shown in Fig. <ref>(c).§ TRANSPORT PROPERTIES The topological transition that occurs in the finite longitudinally modulated superlattice suggests the existence of novel propagation scenarios in this system. To study the transport properties of such structures, we simultaneously consider excitations of internal and edge sites and use three representative values of the coupling constant. In the particular realization of the lattice that we use to study the propagation dynamics (see Fig. <ref>), the two edge sites belong to different sublattices, i.e., the “bottom” site belongs to sublattice A, while the “top” site belongs to sublattice B. First, we consider the case J=0.5π/T, where the quasi-energy band has a finite width and edge states do not appear. In Fig. <ref>(a), we excite an internal waveguide and find that the beam diffracts during propagation. Similarly, the excitation of the edge waveguide shown in Fig. <ref>(b) is also accompanied by rapid diffraction without any signature of localization. Second, we turn to the system with coupling strength J=1.5π/T. For this coupling constant, according to Fig. <ref>(a), the width of the quasi-energy band is still finite, but edge states already exist. Therefore, if an internal site is excited, discrete diffraction is observed, as shown in Fig. <ref>(c). In contrast, excitation of the edge site leads to the formation of a well-localized edge state and only weak radiation can be detected, as shown in Fig. <ref>(d).
The reason for the small radiation is that we use an excitation that does not exactly match the shape of the edge state; hence delocalized bulk modes are excited too, but with small weights. Finally, we consider the case J=2π/T, where the quasi-energy band collapses [Fig. <ref>(a)]. In this case dynamic localization occurs irrespective of the location of the excited site. In Fig. <ref>(e), we show such localization for excitations of sites 10, 20, and 30. In addition, we also excite the edge waveguides in Fig. <ref>(f), where one can see that the light beam does not expand and remains confined to two near-surface sites. This is the regime where two distinct localization mechanisms coexist.The propagation dynamics in this system is special at J=π/T and deserves a separate discussion. In the infinite lattice this coupling constant corresponds to a linear dependence of the quasi-energy on the Bloch momentum k, i.e., the absence of diffraction (the rectification regime). The finite superlattice inherits this property to some extent: localized excitations in the finite lattice also do not diffract, but move with a constant transverse velocity. Interestingly, despite the absence of diffraction, edge states are not excited in this regime, since moving excitations simply change their propagation direction when they hit the edge sites. This is illustrated in Figs. <ref>(a) and <ref>(b), where we simultaneously excite two opposite edge waveguides. In this particular case we excited sites belonging to different sublattices, as before, but the dynamics does not change qualitatively if sites from one sublattice are excited. Notice that in this interesting regime transverse confinement occurs without any nonlinearity, while the propagation trajectory of the beam and its output position can be flexibly controlled, which is advantageous for practical applications.
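The coexistence of bulk dynamic localization and edge localization at J=2π/T can be made explicit with a small computation. At that coupling, each half-period propagator acts as -1 on every coupled pair of sites, while the two edge sites (A_1 and B_N), decoupled during the second half-period, are left untouched; the one-period operator is therefore diagonal, with eigenvalue +1 on bulk sites (collapsed band at quasi-energy 0) and -1 on the edge sites (quasi-energy π/T). A pure-Python sketch (the site ordering A_1,B_1,A_2,B_2,… and the square-wave drive with J_1=J on the first half-period and J_2=J on the second are our reading of the figures, not code from the paper):

```python
import math

def pair_blocks(n_sites, offset):
    """Index pairs coupled during one half-period (site order A1,B1,A2,B2,...).

    offset=0: intra-cell bonds (A_n, B_n); offset=1: inter-cell bonds
    (B_n, A_{n+1}); with offset=1 the two edge sites evolve freely."""
    return [(i, i + 1) for i in range(offset, n_sites - 1, 2)]

def apply_half_step(vec, pairs, J, T):
    """Apply exp(-i H T/2), where H couples each listed pair with strength J."""
    c = math.cos(J * T / 2)
    s = math.sin(J * T / 2)
    out = list(vec)
    for i, j in pairs:
        out[i] = c * vec[i] - 1j * s * vec[j]
        out[j] = -1j * s * vec[i] + c * vec[j]
    return out

def one_period(vec, J, T=1.0):
    n = len(vec)
    vec = apply_half_step(vec, pair_blocks(n, 0), J, T)  # first half: J1 = J
    vec = apply_half_step(vec, pair_blocks(n, 1), J, T)  # second half: J2 = J
    return vec

def participation_ratio(vec):
    """R = sum|q|^4 / (sum|q|^2)^2; R = 1 for a single-site state."""
    norm2 = sum(abs(q) ** 2 for q in vec)
    return sum(abs(q) ** 4 for q in vec) / norm2 ** 2

n = 20                                # 10 unit cells, open boundaries
J = 2 * math.pi                       # band-collapse point (T = 1)
edge = [0.0] * n; edge[0] = 1.0       # excitation of the edge site A_1
bulk = [0.0] * n; bulk[8] = 1.0       # excitation of an interior site
edge_out = one_period(edge, J)        # -> -edge: Floquet eigenstate, eps = pi/T
bulk_out = one_period(bulk, J)        # -> +bulk: collapsed band, eps = 0
```

The edge excitation returns to itself with a sign flip (an exactly single-site Floquet edge state, R=1), while the bulk excitation is exactly reproduced, matching the "two distinct localization mechanisms coexist" statement.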
To illustrate the enhanced robustness of the edge states introduced here, we deliberately introduce a considerable deformation at the surface of the lattice by replacing the whole section of the edge waveguide between z=T and z=2.5T with a straight section, as shown schematically in Fig. <ref>(c). The coupling constant for the internal waveguides is selected as J=1.8π/T, i.e., it corresponds to the situation when edge states form at the surface. The corresponding propagation dynamics in this deformed structure is shown in Fig. <ref>(d). Despite the considerable deformation of the structure, the edge excitation passes the defect without noticeable scattering into the bulk of the lattice. It should be mentioned, however, that if the surface defect is too long and extends over three or more periods of the structure, the edge state may be destroyed and light will penetrate into the depth of the lattice. Finally, we design a structure composed of two parts with different coupling strengths between waveguides: in the first part of the lattice J=π/T for closely spaced waveguides, while in the second part J=1.5π/T. Such a variation of the coupling strength can be achieved by reducing the transverse period at a certain distance z, as shown in Fig. <ref>(e). Since in the first part of the lattice the coupling constant is selected such that no edge states can form, but diffractionless propagation is possible, the input beam propagates from one edge of the lattice towards the opposite edge. If it arrives at the opposite edge at the point where the coupling constant changes and edge states become possible, the beam may excite the edge state and stay near the surface of the structure, as shown in Fig. <ref>(f). If, however, the beam hits the opposite edge before the point where the coupling constant increases, it is bounced back and enters the right half of the lattice in one of the internal waveguides. This leads to fast diffraction of the beam without excitation of edge states.
This setting can be considered a kind of optical switch, in which the presence of a signal in the output edge channel depends on the position of the input excitation.§ CONCLUSIONS Summarizing, we investigated the transport properties of one-dimensional dynamical superlattices. We have shown that in finite modulated superlattices a topological transition may occur that leads to the appearance of edge states, whose degree of localization depends on the coupling constant between lattice sites. This localization mechanism may coexist with dynamic localization due to the collapse of quasi-energy bands.§ ACKNOWLEDGEMENT This work was supported by the China Postdoctoral Science Foundation (2016M600777, 2016M600776, 2016M590935), the National Natural Science Foundation of China (11474228, 61605154), and the Qatar National Research Fund (NPRP 6-021-1-005, 8-028-1-001).
http://arxiv.org/abs/1703.08938v2
{ "authors": [ "Yiqi Zhang", "Yaroslav V. Kartashov", "Feng Li", "Zhaoyang Zhang", "Yanpeng Zhang", "Milivoj R. Belić", "Min Xiao" ], "categories": [ "physics.optics" ], "primary_category": "physics.optics", "published": "20170327054429", "title": "Edge states in dynamical superlattices" }
Huan Li^1,2, Zhongzhi Zhang^1,2 zhangzz@fudan.edu.cn ^1School of Computer Science, Fudan University, Shanghai 200433, China ^2Shanghai Key Laboratory of Intelligent Information Processing, Fudan University, Shanghai 200433, China The size and number of maximum matchings in a network have found a large variety of applications in many fields. As a ubiquitous property of diverse real systems, power-law degree distribution was shown to have a profound influence on the size of maximum matchings in scale-free networks, where the size of maximum matchings is small and a perfect matching often does not exist. In this paper, we study analytically the maximum matchings in two scale-free networks with identical degree sequence, and show that the first network has no perfect matchings, while the second one has many. For the first network, we determine explicitly the size of maximum matchings, and provide an exact recursive solution for the number of maximum matchings. For the second one, we design an orientation and prove that it is Pfaffian, based on which we derive a closed-form expression for the number of perfect matchings. Moreover, we demonstrate that the entropy for perfect matchings is equal to that of the extended Sierpiński graph with the same average degree as both studied scale-free networks. Our results indicate that power-law degree distribution alone is not sufficient to characterize the size and number of maximum matchings in scale-free networks. Keywords: Maximum matching, Perfect matching, Pfaffian orientation, Scale-free network, Complex network § INTRODUCTION A matching in a graph with N vertices is a set of edges no two of which are incident to a common vertex. A maximum matching is a matching of maximum cardinality, with a perfect matching being the particular case containing N/2 edges.
The size and number of maximum matchings have numerous applications in physics <cit.>, chemistry <cit.>, and computer science <cit.>, among others. For example, in the context of structural controllability <cit.>, the minimum number of driver vertices needed to control a whole network, and the possible configurations of driver vertices, are closely related to the size and number of maximum matchings in a bipartite graph.Because of this relevance to diverse areas, it is of theoretical and practical importance to study the size and number of maximum matchings in networks; this is, however, computationally intractable in general. Valiant proved that counting perfect matchings in general graphs is computationally formidable <cit.>; it is #P-complete even for bipartite graphs <cit.>. Thus, it is of great interest to find specific graphs for which the maximum matching problem can be solved exactly <cit.>. In the past decades, problems related to maximum matchings have attracted considerable attention from the mathematics and theoretical computer science communities <cit.>.A vast majority of previous works on maximum matchings focused on regular graphs or random graphs <cit.>, which do not describe realistic networks well. Extensive empirical works <cit.> indicate that most real networks exhibit the prominent scale-free property <cit.>, with their degree distribution following a power-law form. It has been shown that power-law behavior has a strong effect on the properties of maximum matchings in a scale-free network <cit.>. For example, in the Barabási-Albert (BA) scale-free network <cit.>, a perfect matching almost surely does not exist, and the size of a maximum matching is much less than half the number of vertices. The same phenomenon has also been observed in many real scale-free networks, which, like the BA network, are far from perfectly matched.
A natural question then arises: is the power-law degree distribution the only ingredient characterizing maximum matchings in scale-free networks? To answer this question, in this paper we present an analytical study of maximum matchings in two scale-free networks with identical degree distribution <cit.>, and show that the first network has no perfect matchings, while the second network has many. For the first network, we derive an exact expression for the size of maximum matchings and provide a recursive solution for their number. For the second network, by employing the Pfaffian method proposed independently by Kasteleyn <cit.> and by Fisher and Temperley <cit.>, we construct a Pfaffian orientation of the network. On the basis of this Pfaffian orientation, we determine the number of perfect matchings as well as its entropy, which is proved to equal that of the extended Sierpiński graph <cit.>. Our findings suggest that the power-law degree distribution by itself cannot determine the properties of maximum matchings in scale-free networks. § PRELIMINARIES In this section, we introduce some useful notation and results that will be applied in the sequel.§.§ Graph and operation Let 𝒢=(𝒱(𝒢), ℰ(𝒢)) be a graph with N vertices and E edges, where 𝒱(𝒢) is the vertex set {v_1,v_2,⋯,v_N} and ℰ(𝒢) is the edge set. All graphs considered in this paper are simple graphs, without loops or parallel edges, having an even number of vertices. Throughout the paper, the terms graph and network are used interchangeably. Let e={u,v}∈ℰ(𝒢) be an edge in 𝒢.
We say that the edge e is subdivided if we insert a new vertex w between u and v, that is, the edge e is replaced by a path u-w-v of length 2. The subdivision graph B(𝒢) of a graph 𝒢 is the graph obtained from 𝒢 by performing the subdivision operation on each edge in ℰ(𝒢). The line graph L(𝒢) of a graph 𝒢 is the graph whose vertex set is the edge set ℰ(𝒢) of 𝒢, in which two vertices are adjacent if and only if their corresponding edges in 𝒢 share a common vertex in 𝒱(𝒢). The subdivided-line graph Γ(𝒢) of a graph 𝒢 is the line graph of the subdivision graph of 𝒢, i.e., Γ(𝒢) = L(B(𝒢)). We call Γ the subdivided-line graph operation. The g-iterated (g≥ 1) subdivided-line graph Γ^g(𝒢) of 𝒢 is the graph obtained from 𝒢 by iteratively applying the subdivided-line graph operation g times. §.§ Structural properties of a graph For a network, the distance between two vertices is defined as the number of edges in a shortest path between them, and the average distance of the network is the arithmetic average of the distances over all pairs of vertices. The diameter of a network is the length of the shortest path between a pair of farthest vertices in the network. A network is said to be small-world <cit.> if its average distance grows logarithmically with the number of vertices, or more slowly.A random variable x is said to follow a power-law distribution if its probability density function P(x) obeys the form P(x) ∼ x^-γ. A network is scale-free <cit.> if its degree distribution satisfies, at least asymptotically, a power law P(d) ∼ d^-γ. In realistic scale-free networks <cit.>, the power exponent γ of the degree distribution typically lies between 2 and 3. The cumulative degree distribution P_ cum(d) of a network is defined as P_ cum(d)=∑_d'≥ d P(d'). For a scale-free network with power-law degree distribution P(d)∼ d^-γ, the cumulative degree distribution also follows a power law, P_ cum(d) ∼ d^-(γ-1) <cit.>.
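The subdivision, line-graph, and subdivided-line-graph operations defined above can be sketched directly on edge lists; the following is a minimal illustration of our own (the tuple labels for the inserted vertices are an arbitrary implementation choice):

```python
from itertools import combinations

def subdivision(edges):
    """B(G): replace each edge (u, v) by a path u-w-v through a fresh vertex w."""
    new_edges = []
    for idx, (u, v) in enumerate(edges):
        w = ("sub", idx)               # one new vertex per original edge
        new_edges += [(u, w), (w, v)]
    return new_edges

def line_graph(edges):
    """L(G): vertices are the edges of G; two of them are adjacent
    iff the corresponding edges share an endvertex."""
    return [(e, f) for e, f in combinations(edges, 2) if set(e) & set(f)]

def subdivided_line_graph(edges):
    """Gamma(G) = L(B(G))."""
    return line_graph(subdivision(edges))
```

As a quick check: for the triangle K_3, the subdivision B(K_3) is the hexagon C_6 (6 vertices, 6 edges), and L(C_6) is again a 6-cycle, so Γ(K_3) has 6 vertices and 6 edges.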
In real networks there are nontrivial degree correlations among vertices <cit.>. Two measures characterize degree correlations in a network. The first is the average degree of the nearest neighbors of vertices with degree d, as a function of this degree, denoted k_ nn(d) <cit.>. When k_ nn(d) is an increasing function of d, vertices have a tendency to connect to vertices of similar or larger degree. In this case, the network is called assortative. For example, the small-world Farey graph <cit.> is assortative. In contrast, if k_ nn(d) is a decreasing function of d, which implies that vertices of large degree are likely to be connected to vertices of small degree, the network is said to be disassortative. And if k_ nn(d)= const, the network is uncorrelated.The other quantity describing degree correlations is the Pearson correlation coefficient of the degrees of the endvertices over all edges <cit.>. For a general graph 𝒢=(𝒱(𝒢), ℰ(𝒢)), this coefficient is defined asr(𝒢)=E ∑_{i=1}^{E} j_i k_i -[ ∑_{i=1}^{E} 1/2 (j_i+ k_i)]^2 /E ∑_{i=1}^{E} 1/2(j_i^2+ k_i^2) -[ ∑_{i=1}^{E} 1/2(j_i+ k_i)]^2,where j_i, k_i are the degrees of the two endvertices of the ith edge, i=1,2,⋯, E. The Pearson correlation coefficient lies in the range -1≤ r(𝒢) ≤ 1. The network 𝒢 is uncorrelated if r(𝒢)=0, disassortative if r(𝒢)<0, and assortative if r(𝒢)>0. A network 𝒢 is fractal if it has a finite fractal dimension; otherwise it is non-fractal <cit.>. In general, the fractal dimension of 𝒢 can be computed by a box-covering approach <cit.>. A box of size l_B is a vertex set such that all distances between pairs of vertices in the box are less than l_B. We use boxes of size l_B to cover all vertices in 𝒢, and let N_B be the minimum number of boxes required to cover all vertices in 𝒢. If N_B satisfies N_B ∼ l_B^{-d_B} with 0<d_B<∞, then 𝒢 is fractal with fractal dimension d_B.
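The Pearson coefficient r(𝒢) defined above can be evaluated directly from an edge list; a minimal sketch of our own, which computes the three edge sums appearing in the formula:

```python
def degrees(edges):
    """Degree of every vertex from an undirected edge list."""
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    return deg

def pearson_degree_correlation(edges):
    """Pearson correlation r(G) of endvertex degrees over all edges."""
    deg = degrees(edges)
    E = len(edges)
    jk = [(deg[u], deg[v]) for u, v in edges]
    s_prod = sum(j * k for j, k in jk)            # sum of j_i k_i
    s_sum = sum((j + k) / 2 for j, k in jk)       # sum of (j_i + k_i)/2
    s_sq = sum((j * j + k * k) / 2 for j, k in jk)  # sum of (j_i^2 + k_i^2)/2
    return (E * s_prod - s_sum ** 2) / (E * s_sq - s_sum ** 2)
```

For the star K_{1,3} every edge joins the degree-3 hub to a degree-1 leaf, giving the extreme disassortative value r=-1; for the 4-vertex path one obtains r=-1/2. (For a regular graph the denominator vanishes and r is undefined.)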
Self-similarity of a network <cit.> refers to the invariance of the degree distribution under coarse-graining with various box sizes l_B, as well as under iterative coarse-graining with fixed l_B. Intuitively, a self-similar network resembles a part of itself. Notice that fractality and self-similarity do not always imply each other: a fractal network is self-similar, while a self-similar network may not be fractal. §.§ Matchings in a graph A matching in 𝒢 is a subset M ⊆ℰ(𝒢) such that no two edges in M have a vertex in common. The size of a matching M is the number of edges in M. A matching of the largest possible size is called a maximum matching. The matching number of 𝒢 is defined as the cardinality of any maximum matching in 𝒢. A vertex incident with an edge in M is said to be covered by M. A matching M is said to be perfect if it covers every vertex of 𝒢. Obviously, every perfect matching is a maximum matching. We use ψ(𝒢) to denote the number of perfect matchings of 𝒢.<cit.> For a connected graph 𝒢 with N vertices and E edges, where E is even, the number ψ(L(𝒢)) of perfect matchings in its line graph satisfies ψ(L(𝒢)) ≥ 2^{E-N+1}, where equality holds if the degree of every vertex in 𝒢 is at most 3.A path is called elementary if it visits each vertex at most once. A cycle is elementary if it visits each vertex at most once, except that the starting and ending vertices coincide. In this paper, all paths and cycles mentioned are elementary paths and elementary cycles, respectively. A cycle C of 𝒢 is nice if 𝒢∖ C contains a perfect matching, where 𝒢∖ C represents the induced subgraph of 𝒢 obtained from 𝒢 by removing all vertices of C and the edges incident to them. Similarly, a path P of 𝒢 is nice if 𝒢∖ P contains a perfect matching.
Since the number of vertices of any graph considered in this paper is even, the length (number of edges) of every nice cycle is even, while the length of every nice path is odd.Let 𝒢^e be an orientation of 𝒢. The skew adjacency matrix of 𝒢^e, denoted A(𝒢^e), is defined as A(𝒢^e)=(a_ij)_{N × N}, where a_ij = 1 if (v_i,v_j) ∈ℰ(𝒢^e); a_ij = -1 if (v_j,v_i) ∈ℰ(𝒢^e); and a_ij = 0 otherwise. For a cycle C of even length, we say that C is oddly (or evenly) oriented in 𝒢^e if C contains an odd (or even) number of co-oriented edges when the cycle is traversed in either direction. Similarly, a path P is said to be oddly (or evenly) oriented in 𝒢^e if it has an odd (or even) number of co-oriented edges when P is traversed from its starting vertex to its ending vertex. 𝒢^e is a Pfaffian orientation of 𝒢 if every nice cycle of 𝒢 is oddly oriented in 𝒢^e <cit.>. If 𝒢^e is a Pfaffian orientation of the network 𝒢, then the number of perfect matchings of 𝒢, ψ(𝒢), is equal to the square root of the determinant of A(𝒢^e): <cit.> Let 𝒢^e be a Pfaffian orientation of 𝒢. Then ψ(𝒢) =√(det(A(𝒢^e))). An interesting quantity related to perfect matchings is the entropy. For a network 𝒢 with a sufficiently large number of vertices, the entropy for perfect matchings is defined as <cit.> z(𝒢)=lim_{N→∞}lnψ(𝒢)/(N/2). Having introduced the necessary notation, in what follows we study maximum matchings of two scale-free networks with the same degree sequence <cit.>: one is fractal and large-world, while the other is non-fractal and small-world. We will show that the properties of the maximum matchings of these two networks differ greatly. § MAXIMUM MATCHINGS IN A FRACTAL AND SCALE-FREE NETWORK In this section, we study the size and number of maximum matchings for a fractal scale-free network. §.§ Construction and structural properties We first introduce the construction of the fractal network and study some of its structural properties.
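Before turning to the construction, the Pfaffian relation ψ(𝒢)=√(det A(𝒢^e)) can be sanity-checked on the smallest relevant case, the quadrangle C_4 (which will reappear below as the initial network ℱ_1). The orientation chosen here is our own example, not the one designed later in the paper:

```python
from itertools import permutations

def skew_adjacency(n, arcs):
    """Skew adjacency matrix: a_ij = +1 if (v_i,v_j) is an arc, -1 if (v_j,v_i) is."""
    A = [[0] * n for _ in range(n)]
    for i, j in arcs:
        A[i][j], A[j][i] = 1, -1
    return A

def det(A):
    """Determinant via the Leibniz formula (adequate for tiny matrices)."""
    n = len(A)
    total = 0
    for perm in permutations(range(n)):
        # sign of the permutation from its inversion count
        inv = sum(1 for a in range(n) for b in range(a + 1, n)
                  if perm[a] > perm[b])
        prod = 1
        for i in range(n):
            prod *= A[i][perm[i]]
        total += (-1 if inv % 2 else 1) * prod
    return total

# C_4 on vertices 0-1-2-3-0 is its own unique nice cycle.  Orienting
# 0->1, 1->2, 2->3 and 0->3 gives three co-oriented edges along the
# traversal 0-1-2-3-0, an odd number, so this orientation is Pfaffian.
arcs = [(0, 1), (1, 2), (2, 3), (0, 3)]
A = skew_adjacency(4, arcs)
num_perfect_matchings = round(det(A) ** 0.5)   # C_4 has 2: {01,23} and {12,30}
```

Here det A = 4, so √(det A) = 2, matching the two perfect matchings of the quadrangle by direct inspection.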
The fractal scale-free network is generated in an iterative way <cit.>. Let ℱ_g=(𝒱(ℱ_g), ℰ(ℱ_g)), g ≥ 1, denote the fractal scale-free network after g iterations, with 𝒱(ℱ_g) and ℰ(ℱ_g) being the vertex set and the edge set, respectively. Then ℱ_g is constructed as follows. For g=1, ℱ_1 is a quadrangle containing four vertices and four edges. For g>1, ℱ_g is obtained from ℱ_{g-1} by replacing each edge of ℱ_{g-1} with the quadrangle on the right-hand side (rhs) of the arrow in Fig. <ref>. Figure <ref> illustrates the first several iterations of the construction.The fractal scale-free network is self-similar, which can easily be seen from an alternative construction <cit.>. As will be shown below, ℱ_g, g ≥ 1, contains four vertices of largest degree, which we call hub vertices. Among the four hub vertices of ℱ_1, we label one pair of diagonal vertices v_1 and v_2, and the other pair v_3 and v_4. The fractal scale-free network can then be created in another way, as illustrated in Fig. <ref>. Given the network ℱ_{g-1}=(𝒱(ℱ_{g-1}), ℰ(ℱ_{g-1})), g > 1, the network ℱ_g=(𝒱(ℱ_g), ℰ(ℱ_g)) is obtained by performing the following operations: (i) merging four replicas of ℱ_{g-1}, denoted ℱ_{g-1}^{(i)}, i=1,2,3,4, whose four hub vertices are denoted v_k^{(i)}, k=1,2,3,4, with v_k^{(i)} in ℱ_{g-1}^{(i)} corresponding to v_k in ℱ_{g-1}; (ii) identifying v_1^{(1)} and v_1^{(4)} (respectively, v_2^{(2)} and v_1^{(3)}; v_2^{(1)} and v_1^{(2)}; v_2^{(3)} and v_2^{(4)}) as the hub vertex v_1 (respectively, v_2; v_3; v_4) of ℱ_g.Let N_g and E_g denote, respectively, the number of vertices and edges in ℱ_g. By construction, N_g and E_g obey the relations N_g = 4N_{g-1}-4 and E_g=4 E_{g-1}. With the initial condition N_1=E_1=4, we have N_g=2/3(4^g+2 ) and E_g=4^g.According to the first construction, we can determine the degree of every vertex in ℱ_g and the degree distribution. Let L_v(g_i) denote the number of vertices created at iteration g_i. Then L_v(1)=4 and L_v(g_i)=2× 4^{g_i -1} for g_i >1.
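The iterative construction can be implemented directly. A sketch of our own follows; the concrete replacement rule — each edge (u,v) replaced by a quadrangle consisting of two parallel length-2 paths u-w_1-v and u-w_2-v through fresh vertices — is inferred from the stated recursions N_g = 4N_{g-1}-4, E_g = 4E_{g-1} and the degree-doubling property, since Fig. 1 itself is not reproduced here:

```python
def next_generation(edges, n_vertices):
    """One iteration: replace every edge (u, v) by two parallel length-2
    paths u-w1-v and u-w2-v through fresh vertices w1, w2.  This gives
    E_g = 4 E_{g-1} and N_g = N_{g-1} + 2 E_{g-1} (= 4 N_{g-1} - 4),
    and doubles every existing vertex degree."""
    new_edges = []
    for u, v in edges:
        w1, w2 = n_vertices, n_vertices + 1
        n_vertices += 2
        new_edges += [(u, w1), (w1, v), (u, w2), (w2, v)]
    return new_edges, n_vertices

def build_F(g):
    """Edge list and vertex count of F_g, starting from the quadrangle F_1."""
    edges, n = [(0, 1), (1, 2), (2, 3), (3, 0)], 4
    for _ in range(g - 1):
        edges, n = next_generation(edges, n)
    return edges, n
```

Running this for small g reproduces the closed forms N_g = (2/3)(4^g+2) and E_g = 4^g, and the resulting degrees are exactly the powers of two 2, 2^2, …, 2^g stated in the next paragraph.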
In network ℱ_g, any two vertices generated at the same iteration have the same degree. Let d_i(g) be the degree in ℱ_g of a vertex i that was created at iteration g_i. After each iteration, the degree of each vertex doubles, implying d_i(g)=2 d_i(g-1), which together with d_i(g_i)=2 leads to d_i(g)= 2^{g-g_i+1}. Thus, the possible degrees of vertices in ℱ_g are 2, 2^2, 2^3, …, 2^{g-1}, 2^g, and the number of vertices with degree δ =2^{g-g_i+1} is L_v(g_i). From the degree sequence we can determine the cumulative degree distribution of ℱ_g. <cit.> The cumulative degree distribution of network ℱ_g obeys a power-law form P_cum(d)∼ d^-2. Thus, network ℱ_g is scale-free, with the power exponent γ of its degree distribution being 3. <cit.> The average distance of ℱ_g is μ(ℱ_g) = [22× 2^g× 16^g + 8^g(21g+42) + 27× 4^g + 98× 2^g]/[42× 16^g + 105× 4^g + 42]. For large g, μ(ℱ_g) ∼ (11/21)× 2^g. On the other hand, when g is very large, N_g ∼ (2/3)× 4^g. Thus, μ(ℱ_g) grows as the square root of the number of vertices, implying that the network is “large-world” instead of small-world. Note that although most real networks are small-world, there exist some “large-world” networks, e.g., the global network of avian influenza outbreaks <cit.>. In addition, the network ℱ_g is fractal and disassortative. <cit.> The network ℱ_g is fractal, with its fractal dimension being 2.
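The closed form for μ(ℱ_g) above can be checked numerically. The snippet below, a verification of ours using exact rational arithmetic, confirms the g=1 value 4/3 for a quadrangle and the large-g scaling μ(ℱ_g) ∼ (11/21)· 2^g, i.e. square-root-of-N "large-world" growth.

```python
from fractions import Fraction

def mu(g):
    """Exact average distance of F_g (the proposition's formula)."""
    num = 22 * 2**g * 16**g + 8**g * (21 * g + 42) + 27 * 4**g + 98 * 2**g
    den = 42 * 16**g + 105 * 4**g + 42
    return Fraction(num, den)

# F_1 is a 4-cycle: four vertex pairs at distance 1, two at distance 2,
# so the average distance is (4*1 + 2*2)/6 = 4/3.
assert mu(1) == Fraction(4, 3)

# mu(F_g) / 2^g -> 11/21; since N_g ~ (2/3) 4^g, this means
# mu grows like the square root of the number of vertices.
assert abs(mu(30) / 2**30 - Fraction(11, 21)) < Fraction(1, 10**6)
```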
<cit.> In network ℱ_g, g≥ 1, the average degree of the neighboring vertices of vertices with degree d is k_nn(d) = 2g for d = 2, and k_nn(d) = 2 for d > 2. Thus, network ℱ_g is disassortative, which can also be seen from its Pearson correlation coefficient. The Pearson correlation coefficient of the network ℱ_g, g≥ 1, is r(ℱ_g)= (g-1)^2/(-3× 2^g+g^2+2g+3). We first calculate the following three summations over all E_g edges in ℱ_g. ∑_m=1^E_g j_m k_m = 1/2∑_m=1^E_g (j_m k_m + k_m j_m) = 1/2∑_i=1^N_g[ d_i(g)∑_(i,j)∈ℰ(ℱ_g) d_j(g)] = 1/2∑_g_i=1^g L_v(g_i) d^2_i(g) k_nn(d_i(g)) = g· 4^{g+1}, ∑_m=1^E_g (j_m+ k_m) = ∑_i=1^N_g d_i^2(g) = ∑_g_i=1^g L_v(g_i) d^2_i(g) = (g+1)2^{2g+1}, and ∑_m=1^E_g (j_m^2+ k_m^2) = ∑_i=1^N_g d_i^3(g) = ∑_g_i=1^g L_v(g_i) d^3_i(g) = 2^{2g+1}(3× 2^g-2). Inserting these results into Eq. (<ref>) and considering E_g=4^g yields the result. §.§ Size of maximum matchings Although for a general graph the size of maximum matchings is not easy to determine, for the fractal scale-free network ℱ_g we can obtain it by using its self-similar structure. The matching number of network ℱ_g is (4^g+8)/6. In order to determine the size of maximum matchings of network ℱ_g, denoted by c_g, we define some useful quantities. Let a_g be the size of maximum matchings of ℱ_g∖{v_1,v_2}, and let b_g denote the size of maximum matchings of ℱ_g∖{v_1}, which equals the size of maximum matchings of ℱ_g∖{v_2}. We now determine the three quantities a_g, b_g, and c_g by using the self-similar architecture of the network. Figs. <ref>, <ref>, <ref> show, respectively, all the possible configurations of maximum matchings of network ℱ_g+1∖{v_1,v_2}, ℱ_g+1∖{v_1}, and ℱ_g+1. Note that in Figs. <ref>, <ref>, <ref>, only the hub vertices are shown explicitly, with solid-line hubs being covered and dotted-line hubs being vacant. From Figs.
<ref>, <ref>, <ref>, we can establish recursive relations for a_g, b_g, and c_g, given by a_g+1 = 2a_g+2b_g , b_g+1 = max{2a_g+b_g+c_g , a_g+3b_g } , c_g+1 = max{2a_g+2c_g, 4b_g, a_g+2b_g+c_g } . With the initial conditions a_1=0, b_1=1, and c_1=2, the above equations are solved to obtain a_g = (4^g-4)/6, b_g = (4^g+2)/6 and c_g = (4^g+8)/6. §.§ Number of maximum matchings Let θ_g be the number of maximum matchings of ℱ_g. To determine θ_g, we introduce two additional quantities. Let ϕ_g be the number of maximum matchings of ℱ_g∖{v_1,v_2}, and let φ_g be the number of maximum matchings of ℱ_g∖{v_1}, which is equal to the number of maximum matchings of ℱ_g∖{v_2}. For network ℱ_g, g ≥ 1, the three quantities ϕ_g, φ_g and θ_g can be determined recursively according to the following relations: ϕ_g+1 = 4ϕ_g^2φ_g^2, φ_g+1 = 4ϕ_g^2φ_gθ_g + 4ϕ_gφ_g^3, θ_g+1 = 2ϕ_g^2θ_g^2 + 2φ_g^4 + 12ϕ_gφ_g^2θ_g, with initial conditions ϕ_1 = 1, φ_1 = 2 and θ_1 = 2. Theorem <ref> shows that a_g = (4^g-4)/6, b_g = (4^g+2)/6 and c_g = (4^g+8)/6, which means a_g+1 = 2a_g+2b_g, b_g+1 = 2a_g+b_g+c_g = a_g+3b_g, and c_g+1 = 2a_g+2c_g = 4b_g = a_g+2b_g+c_g. Thus, Figs. <ref>, <ref> and <ref> actually provide arrangements of all non-overlapping maximum matchings for ℱ_g∖{v_1,v_2}, ℱ_g∖{v_1}, and ℱ_g, respectively. From these three figures we can obtain directly the recursive relations for ϕ_g, φ_g and θ_g. According to Fig.
<ref>, we can see that the contributions of the four configurations to ϕ_g+1 are identical. By using the addition and multiplication principles, we obtain ϕ_g+1 = 4ϕ_g^2φ_g^2. In a similar way, we can derive the recursive relations for φ_g+1 and θ_g+1: φ_g+1 = 4ϕ_g^2φ_gθ_g + 4ϕ_gφ_g^3, θ_g+1 = 2ϕ_g^2θ_g^2 + 2φ_g^4 + 12ϕ_gφ_g^2θ_g, as stated by the theorem. Theorem <ref> shows that the number of maximum matchings of network ℱ_g can be calculated in 𝒪(ln N_g) time. § PFAFFIAN ORIENTATION AND PERFECT MATCHINGS OF A NON-FRACTAL SCALE-FREE NETWORK In the preceding section, we studied the size and number of maximum matchings of a fractal scale-free network ℱ_g, which has no perfect matching for any g>1. In this section, we study maximum matchings in the non-fractal counterpart ℋ_g of ℱ_g, which has an infinite fractal dimension <cit.>. Both ℱ_g and ℋ_g have the same degree sequence for all g≥ 1, but ℋ_g has perfect matchings. Moreover, we determine the number of perfect matchings in ℋ_g by applying the Pfaffian method, and show that ℋ_g has the same entropy for perfect matchings as that associated with an extended Sierpiński graph <cit.>. §.§ Network construction and structural characteristics The non-fractal scale-free network is also built iteratively. Let ℋ_g be the non-fractal scale-free network ℋ_g=(𝒱(ℋ_g), ℰ(ℋ_g)), g ≥ 1, after g iterations, with vertex set 𝒱(ℋ_g) and edge set ℰ(ℋ_g). Then, ℋ_g is constructed in the following iterative way. For g=1, ℋ_1 consists of a quadrangle. For g>1, ℋ_g is derived from ℋ_g-1 by replacing each edge in ℋ_g-1 with the quadrangle on the rhs of the arrow in Fig. <ref>. Figure <ref> illustrates the three networks for g=1,2,3. The non-fractal scale-free network is also self-similar, and can also be generated by an alternative approach <cit.>. Similar to its fractal counterpart ℱ_g, in ℋ_g, g ≥ 1, the initial four vertices created at g=1 have the largest degree, and we call them hub vertices.
We label the four hub vertices in ℋ_1 by v_1, v_2, v_3, and v_4: one pair of diagonal vertices are labeled as v_1 and v_4, and the other pair of vertices are labeled as v_2 and v_3, see Fig. <ref>. Then, the non-fractal scale-free network can be created alternatively as shown in Fig. <ref>. Given the network ℋ_g-1=(𝒱(ℋ_g-1), ℰ(ℋ_g-1)), g > 1, ℋ_g=(𝒱(ℋ_g), ℰ(ℋ_g)) is obtained by executing the following two operations: (i) Amalgamating four copies of ℋ_g-1, denoted by ℋ_g-1^(i), i=1,2,3,4, the four hub vertices of which are denoted by v_k^(i), k=1,2,3,4, with v_k^(i) in ℋ_g-1^(i) corresponding to v_k in ℋ_g-1. (ii) Identifying v_1^(1) and v_1^(4) (or v_2^(2) and v_1^(3), v_2^(1) and v_1^(2), v_2^(3) and v_2^(4)) as the hub vertex v_1 (or v_4, v_3, v_2) in ℋ_g. In the sequel, when there is no confusion, we will use the above notations for ℱ_g to represent the corresponding quantities of ℋ_g. In ℋ_g, the number of vertices is N_g=2/3(4^g+2) and the number of edges is E_g=4^g. According to the first construction method, the number of vertices created at iteration g_i, g_i >1, is L_v(g_i)=2× 4^{g_i-1}, the degree of a vertex i created at iteration g_i, g_i ≥ 1, is d_i(g)= 2^{g-g_i+1}, the possible degrees of vertices are 2^{g-g_i+1}, 1 ≤ g_i ≤ g, and the number of vertices with degree 2^{g-g_i+1} is L_v(g_i). As shown in <cit.>, for all g≥ 1, ℋ_g has the same degree sequence as that of ℱ_g. Thus, ℋ_g is scale-free with the power exponent γ=3, identical to that of ℱ_g. In spite of the resemblance of degree sequence between ℋ_g and ℱ_g, there are obvious differences between them. For example, ℋ_g is non-fractal, since its fractal dimension is infinite <cit.>. Another example is that network ℋ_g is typically small-world. <cit.> The average distance of ℋ_g is μ(ℋ_g) = (2/3)×[8+16× 4^g+3× 16^g + 6g× 16^g]/[4× 16^g + 10× 4^g + 4].
For large g, μ(ℋ_g) approximates g, and thus increases logarithmically with the number of vertices, implying that network ℋ_g exhibits a small-world behavior. Interestingly, network ℋ_g is completely uncorrelated. In network ℋ_g, g≥ 1, the average degree of all the neighboring vertices of vertices with degree d is k_nn(d) = g+1, independent of d. Note that in network ℋ_g, the possible degrees are d_i(g)=2^{g-g_i+1} with 1 ≤ g_i ≤ g. For any d_i(g)=2^{g-g_i+1}, let k_nn(d_i(g)) be the average degree of the neighboring vertices for all the L_v(g_i) vertices with degree d_i(g). Then, k_nn(d_i(g)) is equal to the ratio of the total degree of all neighbors of the L_v(g_i) vertices having degree d_i(g) to the total degree of these L_v(g_i) vertices, given by k_nn(d_i(g)) = 1/(L_v(g_i)d_i(g)) × (∑_g'_i=1^g_i-1 d(g'_i,g)L_v(g'_i)d(g'_i,g_i-1) + ∑_g'_i=g_i+1^g d(g'_i,g)L_v(g_i)d(g_i,g'_i-1) ) + 1, where d(x,y), x ≤ y, represents the degree of a vertex in network ℋ_y that was generated at iteration x. In Eq. (<ref>), the first sum on the rhs accounts for the links made to vertices with larger degree (i.e. 1≤ g'_i <g_i) when the vertex was generated at iteration g_i. The second sum accounts for the links made to the current smallest-degree vertices at each iteration g'_i> g_i. The last term 1 describes the link connected to the simultaneously emerging vertex. After simple algebraic manipulations, we obtain exactly k_nn(d_i(g)) = g+1, which does not depend on d_i(g). The absence of degree correlations in network ℋ_g can also be seen from its Pearson correlation coefficient. The Pearson correlation coefficient of network ℋ_g, g≥ 1, is r(ℋ_g)=0. Using a process similar to the proof of Proposition <ref>, we can determine the related summations over all E_g edges in ℋ_g: ∑_m=1^E_g j_m k_m = 1/2∑_g_i=1^g L_v(g_i) d^2_i(g) k_nn(d_i(g)) = 4^g(g+1)^2, and ∑_m=1^E_g (j_m+ k_m) = ∑_i=1^N_g d_i^2(g) = ∑_g_i=1^g L_v(g_i) d^2_i(g) = (g+1)2^{2g+1}. Then, according to Eq.
(<ref>), the numerator of r(ℋ_g) is E_g∑_m=1^E_g j_m k_m - [∑_m=1^E_g 1/2 (j_m+ k_m)]^2 = 4^g· 4^g (g+1)^2 - [2^{2g}(g+1)]^2 = 0, which leads to r(ℋ_g)=0. §.§ Pfaffian orientation We here define an orientation of network ℋ_g, and then prove that this orientation is a Pfaffian orientation of ℋ_g, by using its self-similar structure. The orientation ℋ_g^e of network ℋ_g, g ≥ 1, is defined as follows: For g=1, the orientations of the four edges (v_1,v_2), (v_1,v_3), (v_2,v_4), and (v_3,v_4) in ℋ_1 are, respectively, from v_1 to v_2, from v_1 to v_3, from v_4 to v_2, and from v_3 to v_4. For g>1, ℋ_g^e is obtained from ℋ_g-1^e. Note that ℋ_g includes four copies of ℋ_g-1, denoted by ℋ_g-1^(i), 1≤ i ≤ 4. The orientations of ℋ_g-1^(i) are represented by ℋ_g-1^e(i), each of which is a replica of ℋ_g-1^e. Fig. <ref> shows the orientations of network ℋ_g for g=1,2,3. Although no polynomial algorithm is known for checking whether a given orientation is Pfaffian or not <cit.>, for the network ℋ_g we can prove that ℋ_g^e is Pfaffian. For all g≥ 1, the orientation ℋ_g^e is a Pfaffian orientation of network ℋ_g. In order to prove Theorem <ref>, we first prove that for any g≥ 1 there exist perfect matchings for ℋ_g. When g=1, ℋ_1 is a quadrangle, which has two perfect matchings. Suppose that ℋ_g-1 (g≥ 2) has perfect matchings. If we keep the matching configurations for the vertices in ℋ_g-1, we can cover the new vertices generated at iteration g as follows. According to the first network construction approach, for any two vertices generated by an old edge in ℋ_g-1, we cover this vertex pair by the new edge connecting them in ℋ_g. We further introduce some notations and give some auxiliary results. For arbitrary sequential hub vertices u_1, u_2, ⋯, u_m, let S_ℋ_g(u_1, u_2, ⋯, u_m) denote the set of paths (if u_1≠ u_m) or cycles (if u_1=u_m) of ℋ_g, where each path or cycle takes the form u_1-⋯-u_2-⋯-u_m, exclusive of other hub vertices. Obviously, in ℋ_g there exist nice paths starting from vertex v_1 to vertex v_2.
For example, the directed edge from v_1 to v_2 is a nice path. For g≥ 1, if P is a nice path of ℋ_g from vertex v_1 to v_2, then P is oddly oriented relative to ℋ_g^e. By induction. For g=1, it is obvious that the base case holds. For g>1, suppose that the statement is true for ℋ_g-1. By construction, any nice path P of ℋ_g from vertex v_1 to v_2 belongs to one of the two sets S_ℋ_g(v_1,v_2) and S_ℋ_g(v_1,v_3,v_4,v_2). In the first case, P ∈ S_ℋ_g(v_1,v_2), P is evidently a nice path of ℋ_g-1^(4) from vertex v_1^(4) to vertex v_2^(4). By the induction hypothesis, P is oddly oriented. In the second case, P ∈ S_ℋ_g(v_1,v_3,v_4,v_2), we split P into three sub-paths, P_1, P_2 and P_3, such that P_1 ∈ S_ℋ_g(v_1,v_3), P_2 ∈ S_ℋ_g(v_3,v_4) and P_3 ∈ S_ℋ_g(v_4,v_2). Notice that P_1 corresponds to a nice path of ℋ_g-1^(1) from v_1^(1) to v_2^(1). By the induction hypothesis, P_1 is oddly oriented. Analogously, we can prove that P_2 and P_3 are both oddly oriented. Therefore, P is oddly oriented. [Proof of Theorem <ref>.] In order to prove that ℋ_g^e is a Pfaffian orientation of ℋ_g, we only need to prove that every nice cycle of ℋ_g is oddly oriented relative to the orientation ℋ_g^e of ℋ_g. By induction. For g=1, ℋ_1 has a unique nice cycle. It is easy to see that this nice cycle is oddly oriented relative to ℋ_1^e. For g>1, assume that the statement is true for all ℋ_j (1 ≤ j < g). Let C be an arbitrary nice cycle of ℋ_g. By construction, C belongs either to a subgraph ℋ_g-1^(i), i=1,2,3,4, of ℋ_g or to the set S_ℋ_g(v_1,v_3,v_4,v_2,v_1). When C belongs to ℋ_g-1^(i), i=1,2,3,4, we can prove that there exists a subgraph ℒ of ℋ_g-1^(i) such that ℒ is isomorphic to ℋ_k (with k being the smallest integer between 1 and g-1) and C is a nice cycle of ℒ. Such a subgraph ℒ can be obtained in the following manner. First, let ℒ=ℋ_g-1^(i). If C belongs to one of the four mutually isomorphic subgraphs ℋ_g-2^(i'), i'=1,2,3,4, forming ℒ, then let ℒ=ℋ_g-2^(i').
In an analogous way, by iteratively applying the operations on ℋ_g-2^(i') and the resulting subgraphs, we can find the smallest integer k (1 ≤ k < g) such that ℒ is isomorphic to ℋ_k. We next show that C is a nice cycle of ℒ. Let v_1^*, v_2^*, v_3^* and v_4^* be the four hub vertices of ℒ, corresponding to the hubs v_1, v_2, v_3 and v_4 in ℋ_k. Then C must belong to S_ℒ(v_1^*, v_3^*, v_4^*, v_2^*, v_1^*). Therefore, in ℋ_g ∖ C, the vertices in ℒ∖ C are separated from the other vertices. Because C is a nice cycle of ℋ_g, ℋ_g ∖ C has a perfect matching, implying that ℒ∖ C also has a perfect matching. Hence, C is a nice cycle of ℒ. By the induction hypothesis, C is oddly oriented relative to ℒ, indicating that C is oddly oriented with respect to ℋ_g^e. When the nice cycle C ∈ S_ℋ_g(v_1,v_3,v_4,v_2,v_1), it can be split into four nice sub-paths P_1, P_2, P_3 and P_4, with P_1 ∈ S_ℋ_g(v_1,v_3), P_2 ∈ S_ℋ_g(v_3,v_4), P_3 ∈ S_ℋ_g(v_4,v_2), and P_4 ∈ S_ℋ_g(v_2,v_1). Then, P_1, P_2, P_3 are nice paths of ℋ^(1)_g-1, ℋ^(2)_g-1, and ℋ^(3)_g-1, respectively. By Lemma <ref>, P_1, P_2, P_3 are all oddly oriented. As for P_4, it corresponds to a nice path of ℋ^(4)_g-1 traversed in the reverse direction, from v_2 to v_1; since every nice path has odd length, reversing the direction of traversal changes the parity of the number of co-oriented edges, so P_4 is evenly oriented. Thus, C contains an odd number of co-oriented edges in total, and is oddly oriented with respect to ℋ_g^e. §.§ Number of perfect matchings We are now ready to determine the number and entropy of perfect matchings in network ℋ_g. The main results can be stated as follows. The number of perfect matchings of ℋ_g, g≥ 1, is ψ(ℋ_g)=2^{(1/9)· 4^g+(2/3)· g-1/9}, and the entropy for perfect matchings in ℋ_g, g→∞, is z(ℋ_g)=(ln 2)/3. Below we will prove Theorem <ref> by evaluating the determinant of the skew adjacency matrix for the Pfaffian orientation ℋ_g^e of network ℋ_g. To this end, we first introduce some additional quantities and provide some lemmas.
The six matrices A_g, B_g, B_g^', D_g, D_g^' and K_g associated with the Pfaffian orientation ℋ_g^e are defined as follows:A_g is the skew adjacency matrix A(H_g^e), for simplicity.B_g (or B_g^') is a sub-matrix of A_g, obtained by deleting from A_g the row and column corresponding to vertex v_1 (or v_2).D_g (or D_g^') is a sub-matrix of A_g, obtained by deleting from A_g the row corresponding to vertex v_1 (or v_2) and the column corresponding to vertex v_2 (or v_1).K_g is a sub-matrix of A_g, obtained by deleting from A_g two rows and two columns corresponding to vertex v_1 and v_2. The following Lemma is immediate from the second construction of network ℋ_g+1, see Definition <ref>.For g≥ 1, matrices A_g+1, B_g+1, B_g+1^', D_g+1,D_g+1^' and K_g+1 satisfy the following relations: A_g+1=( [0110x_goox_g; -100 -1ooy_gy_g; -1001y_gx_goo;01 -10oy_gx_go; -x_g^⊤o^⊤ -y_g^⊤o^⊤K_gOOO;o^⊤o^⊤ -x_g^⊤ -y_g^⊤OK_gOO;o^⊤ -y_g^⊤o^⊤ -x_g^⊤OOK_gO; -x_g^⊤ -y_g^⊤o^⊤o^⊤OOOK_g ]) , B_g+1=( [00 -1ooy_gy_g;001y_gx_goo;1 -10oy_gx_go;o^⊤ -y_g^⊤o^⊤K_gOOO;o^⊤ -x_g^⊤ -y_g^⊤OK_gOO; -y_g^⊤o^⊤ -x_g^⊤OOK_gO; -y_g^⊤o^⊤o^⊤OOOK_g ]) , B_g+1^'=( [010x_goox_g; -101y_gx_goo;0 -10oy_gx_go; -x_g^⊤ -y_g^⊤o^⊤K_gOOO;o^⊤ -x_g^⊤ -y_g^⊤OK_gOO;o^⊤o^⊤ -x_g^⊤OOK_gO; -x_g^⊤o^⊤o^⊤OOOK_g ]) , D_g+1=( [ -10 -1ooy_gy_g; -101y_gx_goo;0 -10oy_gx_go; -x_g^⊤ -y_g^⊤o^⊤K_gOOO;o^⊤ -x_g^⊤ -y_g^⊤OK_gOO;o^⊤o^⊤ -x_g^⊤OOK_gO; -x_g^⊤o^⊤o^⊤OOOK_g ]) , D_g+1^'=( [110x_goox_g;001y_gx_goo;1 -10oy_gx_go;o^⊤ -y_g^⊤o^⊤K_gOOO;o^⊤ -x_g^⊤ -y_g^⊤OK_gOO; -y_g^⊤o^⊤ -x_g^⊤OOK_gO; -y_g^⊤o^⊤o^⊤OOOK_g ]) , and K_g+1=( [01y_gx_goo; -10oy_gx_go; -y_g^⊤o^⊤K_gOOO; -x_g^⊤ -y_g^⊤OK_gOO;o^⊤ -x_g^⊤OOK_gO;o^⊤o^⊤OOOK_g ]) , where x_g and y_g are two (N_g-2)-dimensional row vectors describing, respectively, the adjacency relation between the two hub vertices v_1 and v_2, and other vertices in network ℋ_g; O (or o) is zero matrix (or zero vector) of appropriate order; and the superscript ⊤ of a vector represents transpose. 
Equation (<ref>) can be accounted for as follows. Let us represent A_g+1 in block form: let A_g+1^(i,j) (i,j=1,2,…,8) denote the block of A_g+1 at row i and column j. Let 𝒱_g^(i) be the vertex set of ℋ_g^(i), with the two hub vertices corresponding to v_1 and v_2 in ℋ_g being removed, see Fig. <ref>. Then A_g+1^(i,j) (i,j=1,2,3,4) represents the adjacency relation between vertices v_i and v_j. Similarly, A_g+1^(i,j) (i,j=5,6,7,8) represents the adjacency relation between vertices in set 𝒱_g^(i-4) and vertices in set 𝒱_g^(j-4). Fig. <ref> shows that when i≠ j there exists no edge between vertices in 𝒱_g^(i-4) and vertices in 𝒱_g^(j-4), so the corresponding block A_g+1^(i,j)=O. Equations (<ref>), (<ref>), (<ref>), (<ref>), and (<ref>) can be accounted for in an analogous way. The following lemmas are useful for determining the number of perfect matchings in network ℋ_g. For g≥ 1, (B_g)=(B_g^')=0. By definition, B_g is an antisymmetric matrix of odd order. Then, (B_g)= (B_g^⊤)=(-1)^{N_g-1}(B_g)=-(B_g), which yields (B_g)=0. Similarly, we can prove (B_g^')=0. For g≥ 1, (D_g^')=-(D_g). From Eqs. (<ref>) and (<ref>), one obtains D_g+1^'=-D_g+1^⊤. Then, (D_g+1^')=(-1)^{N_{g+1}-1}(D_g+1)=-(D_g+1), which, together with (D_1^')=-(D_1), results in (D_g^')=-(D_g). For g≥ 1, (K_g+1)= [(K_g)]^3(A_g). Applying Laplace's theorem <cit.> to Eq. (<ref>), we obtain ( K_g+1 ) = ( [01oooo; -10oy_gx_go; -y_g^⊤o^⊤K_gOOO; -x_g^⊤ -y_g^⊤OK_gOO;o^⊤ -x_g^⊤OOK_gO;o^⊤o^⊤OOOK_g ])+( [00y_gooo; -10oy_gx_go; -y_g^⊤o^⊤K_gOOO; -x_g^⊤ -y_g^⊤OK_gOO;o^⊤ -x_g^⊤OOK_gO;o^⊤o^⊤OOOK_g ])+( [00ox_goo; -10oy_gx_go; -y_g^⊤o^⊤K_gOOO; -x_g^⊤ -y_g^⊤OK_gOO;o^⊤ -x_g^⊤OOK_gO;o^⊤o^⊤OOOK_g ]) Let Θ_g+1^(i), 1 ≤ i ≤ 3, denote sequentially the three determinants on the rhs of Eq. (<ref>).
Applying some elementary matrix operations and the properties of determinants, weobtainΘ_g+1^(1) =(K_g)( [01ooo; -10oy_gx_g; -y_g^⊤o^⊤K_gOO; -x_g^⊤ -y_g^⊤OK_gO;o^⊤ -x_g^⊤OOK_g ]) = -(K_g)( [1o; -x_g^⊤K_g ])( [ -1oy_g; -y_g^⊤K_gO; -x_g^⊤OK_g ])= -[ (K_g)]^2( [ -1oy_g; -y_g^⊤K_gO; -x_g^⊤OK_g ]) = -[ (K_g)]^3( [ -1y_g; -x_g^⊤K_g ]) ,Θ_g+1^(2) =(K_g)( [00y_goo; -10oy_gx_g; -y_g^⊤o^⊤K_gOO; -x_g^⊤ -y_g^⊤OK_gO;o^⊤ -x_g^⊤OOK_g ]) =(K_g)( [0y_g; -y_g^⊤K_g ])( [0y_gx_g; -y_g^⊤K_gO; -x_g^⊤OK_g ]), and Θ_g+1^(3) = [ (K_g)]^2 ( [00x_go; -10y_gx_g; -x_g^⊤ -y_g^⊤K_gO;o^⊤ -x_g^⊤OK_g ]) = [ (K_g)]^2 ( [00x_go; -10y_go; -x_g^⊤ -y_g^⊤K_gO;o^⊤0OK_g ]) +[ (K_g)]^2 ( [00x_go;00ox_g; -x_g^⊤0K_gO;o^⊤ -x_g^⊤OK_g ])= [ (K_g)]^3 ( [00x_g; -10y_g; -x_g^⊤ -y_g^⊤K_g ]) + [ (K_g)]^2 ( [0x_g; -x_g^⊤K_g ]) ( [0x_g; -x_g^⊤K_g ])= [ (K_g)]^3 ( [01x_g; -10y_g; -x_g^⊤ -y_g^⊤K_g ])+ [ (K_g)]^3 ( [ -1y_g; -x_g^⊤K_g ])+ [ (K_g)]^2 ( [0x_g; -x_g^⊤K_g ]) ( [0x_g; -x_g^⊤K_g ]) . Note that both matrices ( [0y_g; -y_g^⊤K_g ]) and ( [0x_g; -x_g^⊤K_g ]) are antisymmetric and have odd order, which implies( [0y_g; -y_g^⊤K_g ]) = ( [0x_g; -x_g^⊤K_g ]) = 0 .By Definition <ref>, we have( [01x_g; -10y_g; -x_g^⊤ -y_g^⊤K_g ]) = (A_g)and( [ -1y_g; -x_g^⊤K_g ]) =(D_g).Thus,Θ_g+1^(1) = -[ (K_g)]^3(D_g) , Θ_g+1^(2) = 0 , Θ_g+1^(3) = [ (K_g)]^3(A_g)+[ (K_g)]^3 (D_g),which leads to(K_g+1) = Θ_g+1^(1) + Θ_g+1^(2) + Θ_g+1^(3) = [ (K_g)]^3(A_g).This completes the proof of the lemma. For g≥ 1, (A_g+1)=4[(A_g)]^2[(K_g)]^2. By using Laplace's theorem <cit.> to Eq. 
(<ref>), we have (A_g+1)=( [0100oooo; -100 -1ooy_gy_g; -1001y_gx_goo;01 -10oy_gx_go; -x_g^⊤o^⊤ -y_g^⊤o^⊤K_gOOO;o^⊤o^⊤ -x_g^⊤ -y_g^⊤OK_gOO;o^⊤ -y_g^⊤o^⊤ -x_g^⊤OOK_gO; -x_g^⊤ -y_g^⊤o^⊤o^⊤OOOK_g ]) + ( [0010oooo; -100 -1ooy_gy_g; -1001y_gx_goo;01 -10oy_gx_go; -x_g^⊤o^⊤ -y_g^⊤o^⊤K_gOOO;o^⊤o^⊤ -x_g^⊤ -y_g^⊤OK_gOO;o^⊤ -y_g^⊤o^⊤ -x_g^⊤OOK_gO; -x_g^⊤ -y_g^⊤o^⊤o^⊤OOOK_g ]) + ( [0000x_gooo; -100 -1ooy_gy_g; -1001y_gx_goo;01 -10oy_gx_go; -x_g^⊤o^⊤ -y_g^⊤o^⊤K_gOOO;o^⊤o^⊤ -x_g^⊤ -y_g^⊤OK_gOO;o^⊤ -y_g^⊤o^⊤ -x_g^⊤OOK_gO; -x_g^⊤ -y_g^⊤o^⊤o^⊤OOOK_g ]) + ( [0000ooox_g; -100 -1ooy_gy_g; -1001y_gx_goo;01 -10oy_gx_go; -x_g^⊤o^⊤ -y_g^⊤o^⊤K_gOOO;o^⊤o^⊤ -x_g^⊤ -y_g^⊤OK_gOO;o^⊤ -y_g^⊤o^⊤ -x_g^⊤OOK_gO; -x_g^⊤ -y_g^⊤o^⊤o^⊤OOOK_g ]) . We use Λ_g+1^(i) (i=1,2,3,4) to denote sequentially the four determinants on the rhs of the above equation. As in the proof of Lemma <ref>, by applying some elementary matrix operations, we have Λ_g+1^(1) = Λ_g+1^(2) = -2 (A_g) (D_g) [(K_g)]^2 , Λ_g+1^(3) = Λ_g+1^(4) = 2[(A_g)]^2 [(K_g)]^2 + 2(A_g) (D_g) [(K_g)]^2, which leads to (A_g+1) = Λ_g+1^(1) + Λ_g+1^(2) + Λ_g+1^(3) + Λ_g+1^(4) = 4[(A_g)]^2 [(K_g)]^2 , as desired. For g≥ 1, (A_g)=4^{(1/9)· 4^g+(2/3)· g-1/9}. By Lemmas <ref> and <ref>, the result follows by considering the initial conditions (A_1)=4 and (K_1)=1, where (K_1)=1 since K_1 is the 2× 2 skew adjacency matrix of the single edge (v_3,v_4). [Proof of Theorem <ref>.] From Lemmas <ref> and <ref>, the number of perfect matchings in network ℋ_g is ψ(ℋ_g)=√((A_g))=2^{(1/9)· 4^g+(2/3)· g-1/9} , and the entropy of perfect matchings in network ℋ_g (g →∞) is z(ℋ_g)=lim_g→∞ lnψ(ℋ_g)/(N_g/2)=lim_g→∞ ln 2^{(1/9)· 4^g+(2/3)· g-1/9}/[(4^g + 2)/3]=(ln 2)/3 . This completes the proof of Theorem <ref>. §.§ Comparison with the extended Sierpiński graph We have shown that for the two networks ℱ_g and ℋ_g with the same degree sequence, the fractal scale-free network ℱ_g has no perfect matchings; in sharp contrast, the non-fractal scale-free network ℋ_g has perfect matchings.
Moreover, the number of perfect matchings in ℋ_g is very large, and its entropy equals that of the extended Sierpiński graph <cit.>, as will be shown below. The extended Sierpiński graph is a particular case of the Sierpiński-like graphs proposed by Klavžar and Mohar in <cit.>, which is in fact a variant of the Tower of Hanoi graph <cit.>. The extended Sierpiński graph can be defined by iteratively applying the subdivided-line graph operation <cit.>. Denote Γ^1(𝒢) = L(B(𝒢)); the g-th subdivided-line graph of 𝒢 is obtained through the iteration Γ^g(𝒢) = Γ(Γ^{g-1}(𝒢)). Let 𝒦_4 be the complete graph on 4 vertices, and let 𝒮^++_g denote the extended Sierpiński graph. Then 𝒮^++_g, g≥ 1, is defined by 𝒮^++_g=Γ^{g-1}(𝒦_4), with 𝒮^++_1=𝒦_4. Fig. <ref> illustrates an extended Sierpiński graph 𝒮^++_3. For all g≥ 1, the extended Sierpiński graph 𝒮^++_g is a 3-regular graph. Moreover, 𝒮^++_g is fractal but not small-world. By definition, it is easy to verify that the numbers of vertices and edges in the extended Sierpiński graph 𝒮^++_g are N_g = 4· 3^{g-1} and E_g = 2· 3^g, respectively. The number of perfect matchings in the extended Sierpiński graph 𝒮^++_g, g≥ 2, is ψ(𝒮^++_g)=2^{2· 3^{g-2}+1} (for g=1, 𝒮^++_1=𝒦_4 has 3 perfect matchings), and the entropy for perfect matchings in 𝒮^++_g, g→∞, is z(𝒮^++_g)=(ln 2)/3. By definition, 𝒮^++_g=L(B(𝒮^++_g-1)). It is obvious that B(𝒮^++_g-1) has N_g-1+E_g-1=4· 3^{g-2}+2· 3^{g-1} vertices and 2E_g-1=4· 3^{g-1} edges. Moreover, the degree of a vertex in B(𝒮^++_g-1) is either 2 or 3. From Lemma <ref>, for all g≥ 2, the number of perfect matchings in 𝒮^++_g is ψ(𝒮^++_g) = ψ(L(B(𝒮^++_g-1))) = 2^{2E_g-1 - (N_g-1+ E_g-1) + 1} = 2^{2· 3^{g-2} + 1}. Then the entropy of perfect matchings in the extended Sierpiński graph 𝒮^++_g, g →∞, is z(𝒮^++_g)=lim_g→∞ lnψ(𝒮^++_g)/(N_g/2)=lim_g→∞ ln 2^{2· 3^{g-2} + 1}/(2· 3^{g-1})=(ln 2)/3 , as the theorem claims.
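To make the comparison concrete, both entropies can be evaluated numerically from the closed forms above. The snippet below is a verification sketch of ours; the exponent of ψ(ℋ_g) is rewritten over the common denominator 9, and the function names are our own.

```python
import math

def log2_psi_H(g):
    """log2 of psi(H_g) = 2^{(1/9)4^g + (2/3)g - 1/9} = 2^{(4^g+6g-1)/9}."""
    e, r = divmod(4**g + 6 * g - 1, 9)
    assert r == 0                        # the exponent is always an integer
    return e

def log2_psi_S(g):
    """log2 of psi(S++_g) = 2^{2*3^{g-2}+1}, valid for g >= 2."""
    return 2 * 3 ** (g - 2) + 1

assert 2 ** log2_psi_H(1) == 2           # H_1, a quadrangle: 2 matchings

# Entropies z = ln(psi) / (N/2): both sequences approach ln(2)/3.
ln2 = math.log(2)
g = 14
z_H = log2_psi_H(g) * ln2 / ((4**g + 2) / 3)    # N_g/2 = (4^g + 2)/3
z_S = log2_psi_S(g) * ln2 / (2 * 3 ** (g - 1))  # N_g/2 = 2 * 3^{g-1}
assert abs(z_H - ln2 / 3) < 1e-4
assert abs(z_S - ln2 / 3) < 1e-4
```

Although ψ(ℋ_g) and ψ(𝒮^++_g) count matchings on very different networks, the two normalized growth rates converge to the same limit (ln 2)/3, which is the content of the comparison above.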
§ CONCLUSION In this paper, we have studied both the size and the number of maximum matchings in two self-similar scale-free networks with identical degree distribution, and shown that the first network has no perfect matchings, while the second network has many perfect matchings. For the first network, we determined explicitly the size and number of maximum matchings by using its self-similarity. For the second network, we constructed a Pfaffian orientation, using the skew adjacency matrix of which we determined the exact number of perfect matchings and the associated entropy. Furthermore, we determined the number of perfect matchings in an extended regular Sierpiński graph, and demonstrated that the entropy of its perfect matchings equals that of the second scale-free network. Thus, a power-law degree distribution by itself is not enough to characterize maximum matchings in scale-free networks, and care should be taken when making general statements on maximum matchings in scale-free networks. Due to the relevance of maximum matchings to structural controllability, our work is helpful for better understanding the controllability of scale-free networks. § ACKNOWLEDGEMENTS This work is supported by the National Natural Science Foundation of China under Grant No. 11275049.
[Montroll(1964)] E. W. Montroll, Lattice statistics, in: E. Beckenbach (Ed.), Applied Combinatorial Mathematics, Wiley, New York, 1964, pp. 96–143.
[Vukičević(2011)] D. Vukičević, Applications of perfect matchings in chemistry, in: M. Dehmer (Ed.), Structural Analysis of Complex Networks, Birkhäuser Boston, 2011, pp. 463–482.
[Lovász and Plummer(1986)] L. Lovász, M. D. Plummer, Matching Theory, volume 29 of Annals of Discrete Mathematics, North Holland, New York, 1986.
[Liu et al.(2011)] Y.-Y.
Liu, J.-J. Slotine, A.-L. Barabási, Controllability of complex networks, Nature 473 (2011) 167–173.
[Balister and Gerke(2015)] P. Balister, S. Gerke, Controllability and matchings in random bipartite graphs, in: A. Czumaj, A. Georgakopoulos, D. Král, V. Lozin, O. Pikhurko (Eds.), Surveys in Combinatorics, volume 424, Cambridge University Press, Cambridge, 2015, pp. 119–146.
[Propp(1999)] J. Propp, Enumeration of matchings: Problems and progress, in: L. Billera, A. Björner, C. Greene, R. Simeon, R. P. Stanley (Eds.), New Perspectives in Geometric Combinatorics, Cambridge University Press, Cambridge, 1999, pp. 255–291.
[Valiant(1979a)] L. Valiant, The complexity of computing the permanent, Theor. Comput. Sci. 8 (1979) 189–201.
[Valiant(1979b)] L. Valiant, The complexity of enumeration and reliability problems, SIAM J. Comput. 8 (1979) 410–421.
[Karp and Sipser(1981)] R. M. Karp, M. Sipser, Maximum matching in sparse random graphs, in: IEEE Annual Symposium on Foundations of Computer Science, IEEE, pp. 364–375.
[Galluccio and Loebl(1999)] A. Galluccio, M. Loebl, On the theory of Pfaffian orientations. I. Perfect matchings and permanents, Electron. J. Comb. 6 (1999) R6.
[Uno(1997)] T. Uno, Algorithms for enumerating all perfect, maximum and maximal matchings in bipartite graphs, Algorithms Comput. (1997) 92–101.
[Mahajan and Varadarajan(2000)] M. Mahajan, K. R. Varadarajan, A new NC-algorithm for finding a perfect matching in bipartite planar and small genus graphs, in: Proceedings of the thirty-second annual ACM symposium on Theory of computing, Portland, OR, USA, pp. 351–357.
[Gabow et al.(2001)] H. N. Gabow, H. Kaplan, R. E. Tarjan, Unique maximum matching algorithms, J. Algorithms 40 (2001) 159–183.
[Propp(2003)] J. Propp, Generalized domino-shuffling, Theor. Comput. Sci. 303 (2003) 267–301.
[Liu and Liu(2004)] Y. Liu, G. Liu, Number of maximum matchings of bipartite graphs with positive surplus, Discrete Math. 274 (2004) 311–318.
[Kuo(2004)] E. H. Kuo, Applications of graphical condensation for enumerating matchings and tilings, Theor. Comput. Sci. 319 (2004) 29–57.
[Yan et al.(2005)] W. Yan, Y.-N. Yeh, F. Zhang, Graphical condensation of plane graphs: A combinatorial approach, Theor. Comput. Sci. 349 (2005) 452–461.
[Yan and Zhang(2005)] W. Yan, F. Zhang, Graphical condensation for enumerating perfect matchings, J. Comb. Theory Ser. A 110 (2005) 113–125.
[Yan and Zhang(2006)] W. Yan, F. Zhang, Enumeration of perfect matchings of a type of Cartesian products of graphs, Discrete Appl. Math. 154 (2006) 145–157.
[Kenyon et al.(2006)] R. Kenyon, A. Okounkov, S. Sheffield, Dimers and amoebae, Ann. Math. 163 (2006) 1019–1056.
[Zdeborová and Mézard(2006)] L. Zdeborová, M. Mézard, The number of matchings in random graphs, J. Stat. Mech. Theory Exp. 2006 (2006) P05003.
[Yan and Zhang(2008)] W. Yan, F. Zhang, A quadratic identity for the number of perfect matchings of plane graphs, Theor. Comput. Sci. 409 (2008) 405–410.
[Teufl and Wagner(2009)] E. Teufl, S. Wagner, Exact and asymptotic enumeration of perfect matchings in self-similar graphs, Discrete Math. 309 (2009) 6612–6625.
[Chebolu et al.(2010)] P. Chebolu, A. Frieze, P. Melsted, Finding a maximum matching in a sparse random graph in O(n) expected time, J. ACM 57 (2010) 24.
[D'Angeli et al.(2012)] D. D'Angeli, A. Donno, T. Nagnibeda, Counting dimer coverings on self-similar Schreier graphs, Eur. J. Combin. 33 (2012) 1484–1513.
[Kosowski et al.(2013)] A. Kosowski, A. Navarra, D. Pajak, C. M. Pinotti, Maximum matching in multi-interface networks, Theoret. Comput. Sci. 507 (2013) 52–60.
[Yuster(2013)] R. Yuster, Maximum matching in regular and almost regular graphs, Algorithmica 66 (2013) 87–92.
[Meghanathan(2016)] N. Meghanathan, Maximal assortative matching and maximal dissortative matching for complex network graphs, Comput. J. 59 (2016) 667–684.
[Newman(2003)] M. E. J. Newman, The structure and function of complex networks, SIAM Rev. 45 (2003) 167–256.
[Barabási and Albert(1999)] A. Barabási, R. Albert, Emergence of scaling in random networks, Science 286 (1999) 509–512.
[Zhang et al.(2009)] Z. Zhang, S. Zhou, T. Zou, L. Chen, J. Guan, Different thresholds of bond percolation in scale-free networks with identical degree sequence, Phys. Rev. E 79 (2009) 031110.
[Kasteleyn(1961)] P. W. Kasteleyn, The statistics of dimers on a lattice: I. The number of dimer arrangements on a quadratic lattice, Physica 27 (1961) 1209–1225.
[Temperley and Fisher(1961)] H. Temperley, M. Fisher, Dimer problem in statistical mechanics – an exact result, Philos. Mag. 6 (1961) 1061–1063.
[Klavžar and Mohar(2005)] S. Klavžar, B. Mohar, Crossing numbers of Sierpiński-like graphs, J. Graph Theory 50 (2005) 186–198.
[Watts and Strogatz(1998)] D. Watts, S. Strogatz, Collective dynamics of `small-world' networks, Nature 393 (1998) 440–442.
[Newman(2002)] M. E. Newman, Assortative mixing in networks, Phys. Rev. Lett. 89 (2002) 208701.
[Pastor-Satorras et al.(2001)] R. Pastor-Satorras, A. Vázquez, A. Vespignani, Dynamical and correlation properties of the internet, Phys. Rev. Lett. 87 (2001) 258701.
[Zhang and Comellas(2011)] Z. Zhang, F. Comellas, Farey graphs as models for complex networks, Theoret. Comput. Sci. 412 (2011) 865–875.
[Yi et al.(2015)] Y. Yi, Z. Zhang, Y. Lin, G. Chen, Small-world topology can significantly improve the performance of noisy consensus in a complex network, Comput. J. 58 (2015) 3242–3254.
[Song et al.(2005)] C. Song, S. Havlin, H. Makse, Self-similarity of complex networks, Nature 433 (2005) 392–395.
[Song et al.(2007)] C. Song, L. K. Gallos, S. Havlin, H. A. Makse, How to calculate the fractal dimension of a complex network: the box covering algorithm, J. Stat. Mech. Theory Exp. 2007 (2007) P03006.
[Kim et al.(2007)] J. S. Kim, K.-I. Goh, B. Kahng, D. Kim, Fractality and self-similarity in scale-free networks, New J. Phys. 9 (2007).
[Dong et al.(2013)] F. Dong, W. Yan, F. Zhang, On the number of perfect matchings of line graphs, Discrete Appl. Math. 161 (2013) 794–801.
[Kasteleyn(1963)] P. W. Kasteleyn, Dimer statistics and phase transitions, J. Math. Phys. 4 (1963) 287–293.
[Burton and Pemantle(1993)] R. Burton, R. Pemantle, Local characteristics, entropy and limit theorems for spanning trees and domino tilings via transfer-impedances, Ann. Probab. (1993) 1329–1371.
[Wu(2006)] F. Y. Wu, Dimers on two-dimensional lattices, Int. J. Mod. Phys. B 20 (2006) 5357–5371.
[Berker and Ostlund(1979)] A. N. Berker, S. Ostlund, Renormalisation-group calculations of finite systems: order parameter and specific heat for epitaxial ordering, J. Phys. C: Solid State Phys. 12 (1979) 4961.
[Zhang et al.(2007)] Z.-Z. Zhang, S.-G. Zhou, T. Zou, Self-similarity, small-world, scale-free scaling, disassortativity, and robustness in hierarchical lattices, Eur. Phys. J. B 56 (2007) 259–271.
[Hinczewski and Berker(2006)] M. Hinczewski, A. N. Berker, Inverted Berezinskii-Kosterlitz-Thouless singularity and high-temperature algebraic order in an Ising model on a scale-free hierarchical-lattice small-world network, Phys. Rev.
E volume73 (year2006) pages066126.[Small et al.(2008)Small, Xu, Zhou, Zhang, Sun, and Lu]SmXuZhZhSuLu08 authorM. Small, authorX. Xu, authorJ. Zhou, authorJ. Zhang, authorJ. Sun, authorJ.-A. Lu, titleScale-free networks which are highly assortative but not small world, journalPhys. Rev. E volume77 (year2008) pages066112.[Lin and Zhang(2009)]Lin200916 authorF. Lin, authorL. Zhang, titlePfaffian orientation and enumeration of perfect matchings for some Cartesian products of graphs, journalElectron. J. Comb. volume16 (year2009) pagesR52.[Strang(2009)]St09 authorG. Strang, titleIntroduction to Linear Algebra, publisherWellesley-Cambridge Press, Wellesley, MA, year2009.[Hinz et al.(2013)Hinz, Klavžar, Milutinović, and Petr]HiKlMiPeSt13 authorA. M. Hinz, authorS. Klavžar, authorU. Milutinović, authorC. Petr, titleThe Tower of Hanoi– Myths and Maths, publisherSpringer, year2013.[Zhang et al.(2016)Zhang, Wu, Li, and Comellas]ZhWuLiCo16 authorZ. Zhang, authorS. Wu, authorM. Li, authorF. Comellas, titleThe number and degree distribution of spanning trees in the Tower of Hanoi graph, journalTheoret. Comput. Sci. volume609 (year2016) pages443–455.[Hasunuma(2015)]Ha15 authorT. Hasunuma, titleStructural properties of subdivided-line graphs, journalJ. Discrete Algorithms volume31 (year2015) pages69–86.
http://arxiv.org/abs/1703.09041v1
{ "authors": [ "Huan Li", "Zhongzhi Zhang" ], "categories": [ "cs.SI", "cs.DM", "math.CO" ], "primary_category": "cs.SI", "published": "20170327124924", "title": "Maximum matchings in scale-free networks with identical degree distribution" }
Temporal Non-Volume Preserving Approach to Facial Age-Progression and Age-Invariant Face Recognition

Chi Nhan Duong ^1,2, Kha Gia Quach ^1,2, Khoa Luu ^2, T. Hoang Ngan Le ^2 and Marios Savvides ^2
^1 Computer Science and Software Engineering, Concordia University, Montréal, Québec, Canada
^2 CyLab Biometrics Center and the Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA, USA
{chinhand, kquach, kluu, thihoanl}@andrew.cmu.edu, msavvid@ri.cmu.edu

Updated: March 2017
=========================================================================================================================================================================================================================================================================================================================================================================================================================

Modeling the long-term facial aging process is extremely challenging due to the presence of large and non-linear variations during the face development stages. In order to efficiently address the problem, this work first decomposes the aging process into multiple short-term stages. Then, a novel generative probabilistic model, named Temporal Non-Volume Preserving (TNVP) transformation, is presented to model the facial aging process at each stage. Unlike Generative Adversarial Networks (GANs), which require an empirical balance threshold, and Restricted Boltzmann Machines (RBM), an intractable model, our proposed TNVP approach guarantees a tractable density function, exact inference and evaluation for embedding the feature transformations between faces in consecutive stages. Our model shows its advantages not only in capturing the non-linear age-related variance in each stage but also in producing a smooth synthesis in age progression across faces. Our approach can model any face in the wild provided with only four basic landmark points.
Moreover, the structure can be transformed into a deep convolutional network while keeping the advantages of probabilistic models with tractable log-likelihood density estimation. Our method is evaluated in both terms of synthesizing age-progressed faces and cross-age face verification and consistently shows the state-of-the-art results in various face aging databases, i.e. FG-NET, MORPH, AginG Faces in the Wild (AGFW), and Cross-Age Celebrity Dataset (CACD). A large-scale face verification on Megaface Challenge 1 is also performed to further show the advantages of our proposed approach.

§ INTRODUCTION

Face age progression is known as the problem of aesthetically predicting individual faces at different ages. Aesthetically synthesizing faces of a subject at different development stages is a very challenging task. Human aging is very complicated and differs from one individual to the next. Both intrinsic factors such as heredity, gender, and ethnicity, and extrinsic factors, i.e. environment and living styles, jointly contribute to this process and create large aging variations between individuals. As illustrated in Figure <ref>, given a face of a subject at the age of 34 <cit.>, a set of closely related family faces has to be provided to a forensic artist as references to generate multiple outputs of his faces at 40's, 50's, 60's, and 70's. In recent years, automatic age progression has become a prominent topic and attracted considerable interest from the computer vision community. The conventional methods <cit.> simulated face aging by adopting parametric linear models such as Active Appearance Models (AAMs) and 3D Morphable Models (3DMM) to interpret the face geometry and appearance before combining with physical rules or anthropology prior knowledge.
Some other approaches <cit.> predefined some prototypes and transferred the difference between them to produce age-progressed face images. However, since face aging is a non-linear process, these linear models have lots of difficulties and the quality of their synthesized results is still limited. Recently, deep learning-based models <cit.> have also emerged and produced more plausible results. In <cit.>, Recurrent Neural Networks (RNN) are used to model the intermediate states between two consecutive age groups for better aging transition. However, it still has the limitation of producing blurry results due to the use of a fixed reconstruction loss function, i.e. ℓ_2-norm. Meanwhile, with the advantages of graphical models, the Temporal Restricted Boltzmann Machine (TRBM) has shown its potential in the age progression task <cit.>. However, its partition function is intractable and needs some approximations during the training process.

§.§ Contributions of this Work

This paper presents a novel generative probabilistic model, named Temporal Non-Volume Preserving (TNVP) transformation, for age progression. This modeling approach enjoys the strengths of both probabilistic graphical models, to produce better image synthesis quality by avoiding the regular reconstruction loss function, and deep residual networks (ResNet) <cit.>, to improve the highly non-linear feature generation. The proposed TNVP guarantees a tractable log-likelihood density estimation, exact inference and evaluation for embedding the feature transformations between faces in consecutive age groups. In our framework, the long-term face aging is first considered as a composition of short-term stages.
Then our TNVP models are constructed to capture the facial aging features transforming between two successive age groups. By incorporating ResNet-based <cit.> Convolutional Neural Network (CNN) layers in the structure, our TNVP is able to efficiently capture the non-linear, aging-related facial feature variance. In addition, it can be robustly employed on face images in the wild without strict alignments or any complicated preprocessing steps. Finally, the connections between latent variables of our TNVP can act as "memory" and contribute to producing a smooth age progression between faces while preserving the identity throughout the transitions. In summary, the novelties of our approach are three-fold. (1) We propose a novel generative probabilistic model with a tractable density function to capture the non-linear age variances. (2) The aging transformation can be effectively modeled using our TNVP. Similar to other probabilistic models, our TNVP is more advanced in terms of embedding the complex aging process. (3) Unlike previous aging approaches that suffer from burdensome preprocessing to produce dense correspondence between faces, our model is able to synthesize realistic faces given any input face in the wild. Table <ref> compares the properties of our TNVP approach and other age progression methods.

§ RELATED WORK

This section reviews various age progression approaches, which can be divided into four groups: prototyping, modeling, reconstructing, and deep learning-based approaches. Prototyping approaches use the age prototypes to synthesize new face images. The average faces of people in the same age group are used as the prototypes <cit.>. The input image can be transformed into the age-progressed face by adding the differences between the prototypes of two age groups <cit.>. Recently, Kemelmacher-Shlizerman et al.
<cit.> proposed to construct sharper average prototype faces from a large-scale set of images in combination with subspace alignment and illumination normalization. Modeling-based approaches represent facial shape and appearance via a set of parameters and model the facial aging process via aging functions. Lanitis et al. <cit.> and Patterson et al. <cit.> proposed to use AAM parameters together with four aging functions for modeling both general and specific aging processes. Luu et al. <cit.> incorporated common facial features of siblings and parents into age progression. Geng et al. <cit.> proposed an AGing pattErn Subspace (AGES) approach to construct a subspace for aging patterns as a chronological sequence of face images. Later, Tsai et al. <cit.> improved the stability of AGES by adding the subject's characteristic clues. Suo et al. <cit.> modeled a face using a three-layer And-Or Graph (AOG) of smaller parts, i.e. eyes, nose, mouth, etc. and learned the aging process for each part by applying a Markov chain. Reconstructing-based methods reconstruct the aging face from the combination of an aging basis in each group. Shu et al. <cit.> proposed to build aging coupled dictionaries (CDL) to represent personalized aging patterns by preserving personalized facial features. Yang et al. <cit.> proposed to model person-specific and age-specific factors separately via sparse representation hidden factor analysis (HFA). Recently, deep learning-based approaches are being developed to exploit the power of deep learning methods. Duong et al. <cit.> employed Temporal Restricted Boltzmann Machines (TRBM) to model the non-linear aging process with geometry constraints, and spatial DBMs to model a sequence of reference faces and wrinkles of adult faces. Similarly, Wang et al. <cit.> modeled aging sequences using a recurrent neural network with a two-layer gated recurrent unit (GRU). A conditional Generative Adversarial Network (cGAN) is also applied to synthesize aged images in <cit.>.
§ OUR PROPOSED METHOD

The proposed TNVP age-progression architecture consists of three main steps: (1) preprocessing; (2) face variation modeling via mapping functions; and (3) aging transformation embedding. With the structure of the mapping function, our TNVP model is tractable and highly non-linear. It is optimized using a log-likelihood objective function that produces sharper age-progressed faces compared to the regular ℓ_2-norm based reconstruction models. Figure <ref> illustrates our TNVP-based age progression architecture.

§.§ Preprocessing

Figure <ref> compares our preprocessing step with other recent age progression approaches, including Illumination Aware Age Progression (IAAP) <cit.>, RNN-based <cit.>, and TRBM-based age progression <cit.> models. In those approaches, burdensome face normalization steps are applied to obtain the dense correspondence between faces. The use of a large number of landmark points makes them highly dependent on the stability of landmarking methods, which are challenged in in-the-wild conditions. Moreover, masking the faces with a predefined template requires a separate shape adjustment for each age group in later steps. In our method, given an image, the facial region is simply detected and aligned according to fixed positions of four landmark points, i.e. two eyes and two mouth corners. By avoiding complicated preprocessing steps, our proposed architecture has two advantages. Firstly, a small number of landmark points, i.e. only four points, reduces the dependency on the quality of the landmarking method. Therefore, it helps to increase the robustness of the system. Secondly, parts of the image background are still included, and thus it implicitly embeds the shape information during the modeling process. From the experimental results, one can easily notice the change of the face shape when moving from one age group to the next.
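The four-point alignment above can be implemented as a least-squares similarity transform (scale, rotation, translation) mapping the detected landmarks onto a fixed template, e.g. via the Umeyama procedure. The sketch below is illustrative, not the paper's exact implementation; the template coordinates are assumed values, not taken from the original.

```python
import numpy as np

def similarity_transform(src, dst):
    """Least-squares similarity transform (Umeyama) mapping src -> dst.
    src, dst: (N, 2) arrays of corresponding landmark coordinates."""
    src_mean, dst_mean = src.mean(0), dst.mean(0)
    src_c, dst_c = src - src_mean, dst - dst_mean
    # SVD of the 2x2 cross-covariance gives the optimal rotation.
    U, S, Vt = np.linalg.svd(dst_c.T @ src_c / len(src))
    d = np.sign(np.linalg.det(U @ Vt))
    D = np.diag([1.0, d])          # reflection correction
    R = U @ D @ Vt
    scale = np.trace(np.diag(S) @ D) / src_c.var(0).sum()
    t = dst_mean - scale * R @ src_mean
    return scale, R, t

# Hypothetical template positions of the four reference points
# (two eyes, two mouth corners) in a 64x64 crop -- illustrative only.
template = np.array([[20.0, 24.0], [44.0, 24.0], [24.0, 46.0], [40.0, 46.0]])
detected = np.array([[120.0, 80.0], [160.0, 78.0], [128.0, 118.0], [156.0, 116.0]])
s, R, t = similarity_transform(detected, template)
aligned = s * detected @ R.T + t   # detected landmarks mapped onto the template
```

The same (s, R, t) would then be applied to warp the whole image, so some background is kept and the face shape is preserved up to a similarity, as described above.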
§.§ Face Aging Modeling

Let ℐ⊂ℝ^D be the image domain and {𝐱^t, 𝐱^t-1}∈ℐ be observed variables encoding the texture of face images at age group t and t-1, respectively. In order to embed the aging transformation between these faces, we first define a bijection mapping function from the image space ℐ to a latent space 𝒵 and then model the relationship between these latent variables. Formally, let ℱ: ℐ→𝒵 define a bijection from an observed variable 𝐱 to its corresponding latent variable 𝐳, and 𝒢:𝒵→𝒵 be an aging transformation function modeling the relationships between variables in latent space. As illustrated in Figure <ref>, the relationships between variables are defined as in Eqn. (<ref>).

𝐳^t-1 = ℱ_1 (𝐱^t-1; θ_1)
𝐳^t = ℋ(𝐳^t-1,𝐱^t; θ_2, θ_3) = 𝒢(𝐳^t-1;θ_3) + ℱ_2(𝐱^t;θ_2)

where ℱ_1, ℱ_2 define the bijections of 𝐱^t-1 and 𝐱^t to their latent variables, respectively. ℋ denotes the summation of 𝒢(𝐳^t-1;θ_3) and ℱ_2(𝐱^t;θ_2). θ = {θ_1, θ_2, θ_3} denote the parameters of the functions ℱ_1, ℱ_2 and 𝒢, respectively. Indeed, given a face image in age group t-1, the probability density function can be formulated as in Eqn. (<ref>).

p_X^t(𝐱^t|𝐱^t-1;θ) = p_X^t(𝐱^t|𝐳^t-1;θ) = p_Z^t(𝐳^t|𝐳^t-1;θ)|∂ℋ(𝐳^t-1, 𝐱^t;θ_2, θ_3)/∂𝐱^t| = p_Z^t(𝐳^t|𝐳^t-1;θ)|∂ℱ_2(𝐱^t;θ_2)/∂𝐱^t|

where p_X^t(𝐱^t|𝐱^t-1;θ) and p_Z^t(𝐳^t|𝐳^t-1;θ) are the distribution of 𝐱^t conditional on 𝐱^t-1 and the distribution of 𝐳^t conditional on 𝐳^t-1, respectively. In Eqn. (<ref>), the second equality is obtained using the change-of-variables formula. ∂ℱ_2(𝐱^t;θ_2)/∂𝐱^t is the Jacobian.
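The change-of-variables identity above says that pushing a simple prior density through a bijection, weighted by the Jacobian, yields a valid density in image space. A minimal one-dimensional sketch (the bijection F below is an arbitrary illustrative choice, not the paper's mapping function) checks that the resulting density integrates to one:

```python
import numpy as np

# Change of variables in 1-D: if z = F(x) is a bijection and p_Z is a
# simple prior, then p_X(x) = p_Z(F(x)) * |dF/dx|.
F = lambda x: x + x**3 / 3.0                              # illustrative bijection
dF = lambda x: 1.0 + x**2                                 # its Jacobian
p_Z = lambda z: np.exp(-z**2 / 2) / np.sqrt(2 * np.pi)    # standard normal prior

x = np.linspace(-6, 6, 200001)
p_X = p_Z(F(x)) * dF(x)
mass = np.sum(p_X) * (x[1] - x[0])   # a valid density must integrate to 1
```

In the TNVP setting the same identity is applied with ℱ_2 in place of F and the conditional Gaussian p_Z^t(𝐳^t|𝐳^t-1) in place of the prior.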
Using this formulation, instead of estimating the density of a sample 𝐱^t conditional on 𝐱^t-1 directly in the complicated high-dimensional space ℐ, the task can be accomplished by computing the density of its corresponding latent point 𝐳^t given 𝐳^t-1 together with the Jacobian determinant |∂ℱ_2(𝐱^t;θ_2)/∂𝐱^t|. There are some recent efforts to achieve a tractable inference process via approximations <cit.> or specific functional forms <cit.>. Section <ref> introduces a non-linear bijection function that enables the exact and tractable mapping from the image space ℐ to a latent space 𝒵 where the density of its latent variables can be computed exactly and efficiently. As a result, the density evaluation of the whole model becomes exact and tractable.

§.§ Mapping function as CNN layers

In general, a bijection function between two high-dimensional domains, i.e. image and latent spaces, usually produces a large Jacobian matrix whose determinant is expensive to compute. Therefore, in order to keep ℱ tractable with lower computational cost, ℱ is presented as a composition of tractable mapping units f, where each unit can be represented as a combination of several convolutional layers. Then the bijection function ℱ can be formulated as a deep convolutional neural network.

§.§.§ Mapping unit

Given an input 𝐱, a unit f:𝐱→𝐲 defines a mapping from 𝐱 to an intermediate latent state 𝐲 as in Eqn. (<ref>).

𝐲 = 𝐱' + (1-𝐛) ⊙[ 𝐱⊙exp(𝒮(𝐱')) + 𝒯(𝐱') ]

where 𝐱'=𝐛⊙𝐱; ⊙ denotes the Hadamard product; 𝐛 = [1,⋯, 1, 0, ⋯, 0] is a binary mask where the first d elements of 𝐛 are set to one and the rest are zero; 𝒮 and 𝒯 represent the scale and the translation functions, respectively.
The Jacobian of this transformation unit is given by

∂ f/∂𝐱 = [ ∂𝐲_1:d/∂𝐱_1:d, ∂𝐲_1:d/∂𝐱_d+1:D; ∂𝐲_d+1:D/∂𝐱_1:d, ∂𝐲_d+1:D/∂𝐱_d+1:D ] = [ 𝕀_d, 0; ∂𝐲_d+1:D/∂𝐱_1:d, diag(exp(𝒮(𝐱_1:d))) ]

where diag(exp(𝒮(𝐱_1:d))) is the diagonal matrix whose diagonal elements are exp(𝒮(𝐱_1:d)). This form of ∂ f/∂𝐱 provides two nice properties for the mapping unit f. Firstly, since the Jacobian matrix ∂ f/∂𝐱 is triangular, its determinant can be efficiently computed as

|∂ f/∂𝐱| = ∏_j exp(s_j) = exp( ∑_j s_j)

where 𝐬 = 𝒮(𝐱_1:d). This property also makes f tractable. Secondly, the Jacobians of the two functions 𝒮 and 𝒯 are not required in the computation of |∂ f/∂𝐱|. Therefore, any non-linear function can be chosen for 𝒮 and 𝒯. From this property, the functions 𝒮 and 𝒯 are set up as a composition of CNN layers in ResNet (i.e. residual network) <cit.> style with skip connections. This way, high-level features can be extracted during the mapping process and improve the generative capability of the proposed model. Figure <ref> illustrates the structure of a mapping unit f. The inverse function f^-1:𝐲→𝐱 is also derived as

𝐱 = 𝐲' + (1-𝐛) ⊙[ (𝐲 - 𝒯(𝐲')) ⊙exp(-𝒮(𝐲'))]

where 𝐲'=𝐛⊙𝐲.

§.§.§ Mapping function

The bijection mapping function ℱ is formulated by composing a sequence of mapping units {f_1, f_2, ⋯, f_n}.

ℱ = f_1 ∘ f_2 ∘⋯∘ f_n

The Jacobian of ℱ is no more difficult to compute than those of its units and still remains tractable.

∂ℱ/∂𝐱 = ∂ f_1/∂𝐱·∂ f_2/∂ f_1…∂ f_n/∂ f_n-1

Similarly, the derivations of its determinant and inverse are

|∂ℱ/∂𝐱| = |∂ f_1/∂𝐱| ·|∂ f_2/∂ f_1| …| ∂ f_n/∂ f_n-1|
ℱ^-1 = (f_1 ∘ f_2 ∘⋯∘ f_n)^-1 = f_1^-1∘ f_2^-1∘…∘ f_n^-1

Since each mapping unit leaves part of its input unchanged (i.e. due to the zero part of the mask 𝐛), we alternate the binary mask 𝐛 with 1-𝐛 along the sequence so that every component of 𝐱 can be transformed through the mapping process.
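The mapping unit and its composition can be sketched numerically. The code below is a minimal NumPy illustration of the affine coupling equations above; the linear/tanh stand-ins for 𝒮 and 𝒯 are assumptions for brevity (the paper uses small ResNets, and the formulas never need their Jacobians), and the masks alternate between 𝐛 and 1-𝐛 as described.

```python
import numpy as np

rng = np.random.default_rng(0)
D, d = 6, 3
b = np.zeros(D); b[:d] = 1.0      # binary mask: first d components pass through

# Stand-ins for the scale/translation nets S and T (any functions work,
# since their Jacobians never enter the determinant computation).
W_s = rng.normal(size=(D, D)) * 0.1
W_t = rng.normal(size=(D, D)) * 0.1
S = lambda xp: np.tanh(xp @ W_s)
T = lambda xp: xp @ W_t

def unit_forward(x, b):
    xp = b * x
    y = xp + (1 - b) * (x * np.exp(S(xp)) + T(xp))
    log_det = np.sum((1 - b) * S(xp))   # log|df/dx| = sum_j s_j
    return y, log_det

def unit_inverse(y, b):
    yp = b * y
    return yp + (1 - b) * (y - T(yp)) * np.exp(-S(yp))

def mapping_forward(x, masks):          # F = f_1 ∘ f_2 ∘ ... ∘ f_n
    log_det = 0.0
    for m in masks:
        x, ld = unit_forward(x, m)
        log_det += ld                   # log-dets of the units add up
    return x, log_det

masks = [b, 1 - b, b, 1 - b]            # alternate so every component is updated
x = rng.normal(size=D)
z, log_det = mapping_forward(x, masks)
```

Running the units in reverse with `unit_inverse` recovers the input exactly, which is the invertibility property used for generation.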
As mentioned in the previous section, since each mapping unit is set up as a composition of CNN layers, the bijection ℱ with the form of Eqn. (<ref>) becomes a deep convolutional network that maps its observed variable 𝐱 in ℐ to a latent variable 𝐳 in 𝒵.

§.§ The aging transform embedding

In the previous section, we presented the invertible mapping function ℱ between a data distribution p_X and a latent distribution p_Z. In general, p_Z can be chosen as a prior probability distribution such that it is simple to compute and its latent variable z is easily sampled. In our system, a Gaussian distribution is chosen for p_Z, but our proposed model can still work well with any other prior distributions. Since the connections between 𝐳^t-1 and 𝐳^t embed the relationship between variables of different Gaussian distributions, we further assume that their joint distribution is a Gaussian. The transformation 𝒢 between 𝐳^t-1 and 𝐳^t is formulated as

𝒢(𝐳^t-1;θ_3) = 𝐖𝐳^t-1 + 𝐛_𝒢

where θ_3={𝐖, 𝐛_𝒢} are the transformation parameters representing the connecting weights of latent-to-latent interactions and the bias. From Eqn. (<ref>) and Figure <ref>, the latent variable 𝐳^t is computed from two sources: (1) the mapping from the observed variable 𝐱^t defined by ℱ_2(𝐱^t; θ_2) and (2) the aging transformation from 𝐳^t-1 defined by 𝒢(𝐳^t-1;θ_3). The joint distribution p_Z^t, Z^t-1(𝐳^t,𝐳^t-1) is given by

𝐳^t-1 ∼𝒩(0,𝕀)
ℱ_2(𝐱^t,θ_2) = 𝐳̅^t ∼𝒩(0,𝕀)
p_Z^t, Z^t-1(𝐳^t,𝐳^t-1; θ) ∼𝒩([ 𝐛_𝒢; 0 ], [ 𝐖^T𝐖 + 𝕀, 𝐖; 𝐖, 𝕀 ])

§.§ Model Learning

The parameters θ={θ_1, θ_2, θ_3} of the model are optimized to maximize the log-likelihood:

θ_1^*, θ_2^*, θ_3^* = max_θ_1, θ_2, θ_3 log p_X^t(𝐱^t|𝐱^t-1; θ_1, θ_2, θ_3)

From Eqn. (<ref>), the log-likelihood can be computed as

log p_X^t(𝐱^t|𝐱^t-1; θ) = log p_Z^t(𝐳^t|𝐳^t-1,θ) + log|∂ℱ_2(𝐱^t; θ_2)/∂𝐱^t| = log p_Z^t, Z^t-1(𝐳^t,𝐳^t-1; θ) - log p_Z^t-1(𝐳^t-1;θ_1) + log|∂ℱ_2(𝐱^t;θ_2)/∂𝐱^t|

where the first two terms are the two density functions and can be computed using Eqn.
(<ref>), while the third term (i.e. the determinant) is obtained using Eqns. (<ref>) and (<ref>). Then the Stochastic Gradient Descent (SGD) algorithm is applied to find the optimal parameter values.

§.§ Model Properties

Tractability and Invertibility: With the specific structure of the bijection ℱ, our proposed graphical model has the capability of modeling arbitrarily complex data distributions while keeping the inference process tractable. Furthermore, from Eqns. (<ref>) and (<ref>), the mapping function is invertible. Therefore, both inference (i.e. mapping from image to latent space) and generation (i.e. from latent to image space) are exact and efficient. Flexibility: as presented in Section <ref>, our proposed model allows freedom in choosing the structures of the functions 𝒮 and 𝒯. Therefore, different types of deep learning models can be easily exploited to further improve the generative capability of the proposed TNVP. In addition, from Eqn. (<ref>), the binary mask 𝐛 also provides flexibility for our model if we consider it as a template during the mapping process. Several masks can be used in different levels of mapping units to fully exploit the structure of the data distribution of the image domain ℐ. Although our TNVP shares some similar features with RBM and its family, such as TRBM, the log-likelihood estimation of TNVP is tractable while that of RBM is intractable and requires some approximations during the training process. Compared to other methods, our TNVP also shows its advantages in high-quality synthesized faces (by avoiding the ℓ_2 reconstruction error as in Variational Autoencoders) and an efficient training process (i.e. avoiding the need to maintain a good balance between generator and discriminator as in the case of GANs).

§ EXPERIMENTAL RESULTS

§.§ Databases

We train our TNVP system using AginG Faces in the Wild (AGFW) <cit.> and a subset of the Cross-Age Celebrity Dataset (CACD) <cit.>. Two other public aging databases, i.e.
FG-NET <cit.> and MORPH <cit.>, are used for testing. AginG Faces in the Wild (AGFW) consists of 18,685 images that cover faces from 10 to 64 years old. On average, after dividing into 11 age groups with a span of 5 years, each group contains 1700 images. Cross-Age Celebrity Dataset (CACD) is a large-scale dataset with 163,446 images of 2000 celebrities. The age range is from 14 to 62 years old. FG-NET is a common aging database that consists of 1002 images of 82 subjects and has an age range from 0 to 69. Each face is manually annotated with 68 landmarks. MORPH includes two albums, i.e. MORPH-I and MORPH-II. The former consists of 1690 images of 515 subjects and the latter provides a set of 55,134 photos from 13,000 subjects. We use MORPH-I for our experiments.

§.§ Implementation details

In order to train our TNVP age progression model, we first select a subset of 572 celebrities from CACD as in the training protocol of <cit.>. All images of these subjects are then classified into 11 age groups ranging from 10 to 65 with an age span of 5 years. Next, the aging sequences for each subject are constructed by collecting and combining all image pairs that cover two successive age groups of that subject. This process results in 6437 training sequences. All training images from these sequences and the AGFW dataset are then preprocessed as presented in Section <ref>. After that, a two-step training process is applied to train our TNVP age progression model. In the first step, using faces from AGFW, all mapping functions (i.e. ℱ_1, ℱ_2) are pretrained to obtain the capability of face interpretation and high-level feature extraction.
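The sequence construction step above — binning ages into 5-year groups and pairing images of the same subject across successive groups — can be sketched as follows. The exact bin edges and the (subject, age, path) record format are assumptions for illustration, not taken from the original implementation.

```python
from collections import defaultdict

# Assumed 5-year bins covering ages 10-65 (11 groups, as in the paper).
AGE_GROUPS = [(10, 14), (15, 19), (20, 24), (25, 29), (30, 34), (35, 39),
              (40, 44), (45, 49), (50, 54), (55, 59), (60, 65)]

def group_index(age):
    """Return the index of the age group containing `age`, or None."""
    for i, (lo, hi) in enumerate(AGE_GROUPS):
        if lo <= age <= hi:
            return i
    return None

def build_sequences(photos):
    """photos: iterable of (subject_id, age, image_path) records.
    Returns all (younger, older) image pairs of the same subject
    whose ages fall in two successive age groups."""
    by_subject = defaultdict(lambda: defaultdict(list))
    for sid, age, path in photos:
        g = group_index(age)
        if g is not None:
            by_subject[sid][g].append(path)
    pairs = []
    for sid, groups in by_subject.items():
        for g in sorted(groups):
            if g + 1 in groups:                 # successive groups present
                for younger in groups[g]:
                    for older in groups[g + 1]:
                        pairs.append((younger, older))
    return pairs
```

Each resulting pair is one short-term training sequence for the TNVP aging transformation.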
Then, in the second step, our TNVP model is employed to learn the aging transformation between faces presented in the face sequences. For the model configuration, the number of units for each mapping function is set to 10. In each mapping unit f_i, two residual networks with rectifier non-linearity and skip connections are set up for the two transformations 𝒮 and 𝒯. Each of them contains 2 residual blocks with 32 feature maps. The convolutional filter size is set to 3 × 3. The training time for the TNVP model is 18.75 hours on a machine with a Core i7-6700 @ 3.4GHz CPU, 64 GB RAM, and a single NVIDIA GTX Titan X GPU in a TensorFlow environment. The training batch size is 64.

§.§ Age Progression

After training, our TNVP age progression system is applied to all faces over 10 years old from FG-NET and MORPH. As illustrated in Figure <ref>, given input faces at different ages, our TNVP is able to synthesize realistic age-progressed faces in different age ranges. Notice that none of the images in FG-NET or MORPH is present in the training data. From these results, one can easily see that our TNVP not only efficiently embeds the specific aging information of each age group into the input faces but also robustly handles in-the-wild variations such as expressions, illumination, and poses. Particularly, beards and wrinkles naturally appear in the age-progressed faces around the ages of 30-49 and over 50, respectively. The face shape is also implicitly handled in our model and changes according to different individuals and age groups. Moreover, by avoiding the ℓ_2 reconstruction loss and taking advantage of maximizing the log-likelihood, sharper synthesized results with aging details are produced by our proposed model. We compare our synthesized results with other recent age progression works whose results are publicly available, such as IAAP <cit.> and the TRBM-based model <cit.>, in Figure <ref>. The real faces of the subjects at target ages are provided for reference. Other approaches, i.e.
Exemplar-based Age Progression (EAP) <cit.> and the Craniofacial Growth (CGAP) model <cit.>, are also included for further comparisons. Notice that since our TNVP model is trained using faces ranging from 10 to 64 years old, we choose the ones with ages close to 10 years old during the comparison. These results again show the advantages of our TNVP model in terms of efficiently handling the non-linear variations and aging embedding.

§.§ Age-Invariant face verification

In this experiment, we validate the effectiveness of our TNVP model by showing the performance gain for cross-age face verification using our age-progressed faces. In both testing protocols, i.e. the small-scale protocol with image pairs from FG-NET and the large-scale benchmark on Megaface Challenge 1, our aged faces have shown that they can provide a 10-40% improvement on top of the face recognition model without re-training it on cross-age databases. We employ the deep face recognition model <cit.>, named Center Loss (CL), which is among the state-of-the-art, for this experiment. Under the small-scale protocol, in the FG-NET database, we randomly pick 1052 image pairs with an age gap larger than 10 years of either the same or different persons. This set is denoted as A, consisting of a positive list of 526 image pairs of the same person and a negative list of 526 image pairs of two different subjects. From each image pair of set A, using the face with the younger age, we synthesize an age-progressed face image at the age of the older one using our proposed TNVP model. This forms a new matching pair, i.e. the aged face vs. the original face at the older age.
Applying this process for all pairs of set A, we obtain a new set denoted as set 𝐁_1. To compare with the IAAP <cit.> and TRBM <cit.> methods, we also construct two other sets of image pairs similarly and denote them as sets 𝐁_2 and 𝐁_3, respectively. Then, the False Rejection Rate-False Acceptance Rate (FRR-FAR) is computed and plotted under the Receiver Operating Characteristic (ROC) curves for all methods (Fig. <ref>). Our method achieves an improvement of 30% on matching performance over the original pairs (set A) while IAAP and TRBM only slightly increase the rates. In addition, the advantages of our model are also demonstrated on the large-scale Megaface <cit.> Challenge 1 with the FG-NET test set. Practical face recognition models should achieve high performance against a gallery set of millions of distractors and a probe set of people at various ages. In this testing, 4 billion pairs are generated between the probe and gallery sets, where the gallery includes one million distractors. Thus, only improvements on the Rank-1 identification rate with a one-million-distractor gallery and the verification rate at low FAR are meaningful <cit.>. Table <ref> shows the rank-1 identification results with one million distractors. As shown in Fig. <ref>, Rank-1 identification accuracy drops when the number of distractors increases. The TAR-FAR and ROC curves[The results of other methods are provided on the MegaFace website.] are presented in Fig. <ref>. From these results, using our aged face images not only improves face verification results by 10% compared to the original model in <cit.> but also outperforms most of the models trained with a small training set. The model from DeepSense achieves the best performance under the cross-age training set, while the original model is trained solely on the CASIA WebFace dataset <cit.>, which has < 0.49M images with no cross-age information.
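The FRR-FAR computation used for the ROC curves above reduces to thresholding pairwise similarity scores. A minimal sketch (the Gaussian score distributions below are synthetic stand-ins for the 526 positive and 526 negative pairs of set A, not real data):

```python
import numpy as np

def far_frr(genuine, impostor, thresholds):
    """Verification error rates: a pair is accepted when score >= threshold.
    FAR = fraction of impostor pairs accepted; FRR = fraction of genuine
    pairs rejected."""
    genuine, impostor = np.asarray(genuine), np.asarray(impostor)
    far = np.array([(impostor >= t).mean() for t in thresholds])
    frr = np.array([(genuine < t).mean() for t in thresholds])
    return far, frr

# Synthetic similarity scores standing in for the matcher's outputs.
rng = np.random.default_rng(1)
genuine = rng.normal(0.6, 0.15, 526)    # same-person pair similarities
impostor = rng.normal(0.3, 0.15, 526)   # different-person pair similarities
thresholds = np.linspace(0.0, 1.0, 101)
far, frr = far_frr(genuine, impostor, thresholds)
```

Plotting FRR against FAR while sweeping the threshold yields the ROC curve compared across sets A, 𝐁_1, 𝐁_2, and 𝐁_3.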
We achieve better performance on top of this original model by simply synthesizing aged images without re-training. § CONCLUSION This paper has presented a novel generative probabilistic model with a tractable density function for age progression. The model inherits the strengths of both probabilistic graphical models and recent advances of ResNet. The non-linear age-related variance and the aging transformation between age groups are efficiently captured. Given the log-likelihood objective function, high-quality age-progressed faces can be produced. In addition to a simple preprocessing step, geometric constraints are implicitly embedded during the learning process. The evaluations of both the quality of synthesized faces and cross-age verification showed the robustness of our TNVP.
http://arxiv.org/abs/1703.08617v1
{ "authors": [ "Chi Nhan Duong", "Kha Gia Quach", "Khoa Luu", "T. Hoang Ngan le", "Marios Savvides" ], "categories": [ "cs.CV" ], "primary_category": "cs.CV", "published": "20170324224305", "title": "Temporal Non-Volume Preserving Approach to Facial Age-Progression and Age-Invariant Face Recognition" }
Departamento de Física, Universidad Autónoma Metropolitana-Iztapalapa, A. P. 55-534, 09340 Mexico City, Mexico Instituto de Energías Renovables, Universidad Nacional Autónoma de México (UNAM), 62580 Temixco, Morelos, México Employing the tight-binding approximation, we derive a transfer matrix formalism for one-dimensional single photon transport through a composite scattering center, which consists of parallel connected resonator optical waveguides. By solving for the single-mode eigenvectors of the Hamiltonian, we investigate the quantum interference effects of parallel couplings on the photon transport through this parallel waveguide structure. We find a perfect reflection regime determined by the number of coupled resonator waveguides. Numerical analysis reveals that by changing the atom transition frequency, the window of perfect reflection may shift to cover almost all incoming photon energies, indicating the effective control of single photon scattering by the photon-atom interaction. 42.50.Ex, 03.67.Lx, 42.68.Mj One-Dimensional Photon Transport Through a Two-Terminal Scattering Cluster: Tight-Binding Formalism M. Lozada-Cassou Received: date / Accepted: date =================================================================================================== § I. INTRODUCTION Photon transport in one-dimensional coupled waveguides is an important model system for exploring quantum information processing and manipulation mechanisms. The coupling of atoms or quantum emitters to the optical waveguide offers feasible control schemes to achieve quantum switching, routing, photon storage, and other quantum information operations <cit.>. Currently, several important theoretical approaches are used to study photon transport in one dimension, including the real-space Bethe ansatz approach <cit.>, the input-output formalism <cit.>, and the Lippmann-Schwinger scattering-theory approach <cit.>.
Recently, a tight-binding formalism has been employed to show that the scattering of a single photon inside a one-dimensional resonator waveguide can be controlled by a coupled two-level quantum system <cit.>. It is demonstrated that a finite-bandwidth spectrum of perfect reflection appears as the detuning between the photon and atomic frequency varies. Single photon transport properties can also be influenced by the number of atoms that interact with the waveguide, as a result of multiple scattering between the propagating photon and the quantum emitters <cit.>. In addition to the effects of the collective interaction with multiple emitters, the coupling modes between the atoms and the one-dimensional waveguide also affect the photon transport and result in quite involved, nontrivial dispersion relations that can lead to a strong reduction of the group velocity of photons <cit.> as a consequence of a finite bandwidth. Since the coupling configuration between the atoms and the waveguide plays an important role in determining the transport properties of photons, in this paper we focus on photon transfer through a black-box-like scattering cluster, employing the tight-binding theory <cit.>. This paper is organized as follows. In Sec. II we derive a general theory for photon transport through a two-terminal interaction box, where multiple atoms are coupled in series as well as in parallel. We then discuss the simple case of a coupled cavity array based on the Jaynes-Cummings model for single cavities, in Sec. III. Finally, we give a summary in Sec. VI.§ II.
GENERAL THEORY The model system discussed in this work consists of two leads sandwiched by a tight-binding network in the parallel configuration, as illustrated in Fig.1. In this section we derive the transmission and reflection amplitudes in the Tight-Binding (TB) model for parallel connected transmission lines. The typical TB equations, resulting from the Schrödinger equation HΨ=EΨ, can be written as (ϵ_j-E)ψ(j)=∑_nJ_j,nψ(j+n) For purposes of illustrating our method, we consider here only the simplest case, where the transfer integral between the nearest neighbors J_j,n=1 and the lattice constant a=1. We also assume that the scattering is elastic and that the traveling wave functions take the following form: ψ(j)=Ae^ikj+Be^-ikj, with the dispersion relation ϵ_j-E_k=2cos(k). Suppose that there are n_j sites in the j-th channel, so that the total number of TB sites in the central scattering network is N=∑^m_j=1n_j. Here m represents the number of parallel channels that converge into the left and right terminals. The TB equations of the parallel configuration can be written as (ϵ_j-E_k)ψ(-1) =ψ(-2)+∑^m_i=1ψ_i(0), (ϵ_j-E_k)ψ(N) =ψ(N+1)+∑^m_i=1ψ_i(n_i-1). By taking the standard expressions for the left incoming and the right outgoing waves, ψ_L(j) = e^ikj+re^-ikj,j<0, ψ_R(j) = te^ikj,j>N, we find 1+r =∑_i=1^mψ_i(0), t_N =∑_i=1^mψ_i(n_i-1), where t_N=te^ik(N-1), and ψ_i(j) is the wave function at site j of the i-th channel. Now let us derive a relation between the wave functions ψ_i(0) and ψ_i(n_i-1). To this end, we resort to the transfer matrix expression of the TB equations, [ ψ_i(j+1); ψ_i(j);] =[ α_i,j -1; 1 0; ][ ψ_i(j); ψ_i(j-1);], with α_i,j=ϵ_i,j-E_k. Here ϵ_i,j stands for the energy of the j-th site of the i-th waveguide channel.
It is straightforward that [ ψ_i(n_i); ψ_i(n_i-1);] =M_i [ψ_i(0); ψ_i(-1); ]. Taking into account the continuity condition of the wave functions at any TB site, we have ψ_i(-1)=1+r and ψ_i(n_i)=ψ(N)=t_N. Introducing M_i= [ α_i,n_i-1 -1; 1 0; ] ... [ α_i,0 -1; 1 0; ] = [ a_i,n_i-1 b_i,n_i-1; c_i,n_i-1 d_i,n_i-1; ], where the matrix elements can be obtained from the recursion relations a_i,n=α_i,na_i,n-1-c_i,n-1, b_i,n=α_i,nb_i,n-1-d_i,n-1, c_i,n= a_i,n-1, d_i,n= b_i,n-1, with a_i,0=α_i,0, b_i,0=-1, c_i,0=1 and d_i,0=0, and inserting (7)-(9) into (5), we get the transmission and reflection amplitudes for the scattering cluster formed by parallel coupled cavity waveguides, r =(P'e^ik-1)(1-Qe^-ik)+PQ'/(Qe^ik-1)(P'e^ik-1)-PQ'e^2ik, t_N =-2iQ'sin(k)/(Qe^ik-1)(P'e^ik-1)-PQ'e^2ik, with P=∑_i=1^m1/a_i,n_i-1, Q=-∑_i=1^mb_i,n_i-1/a_i,n_i-1, P'=∑_i=1^mc_i,n_i-1/a_i,n_i-1, Q'=∑_i=1^m(d_i,n_i-1-b_i,n_i-1c_i,n_i-1/a_i,n_i-1). This is the main result of this work. It is worth pointing out that the physical relevance of the site energy ϵ_i(j) may be quite general, including PT-symmetric potentials. It is remarkable that we can evaluate the photon transport properties without assuming the detailed wave functions in the scattering zone, provided that the tight-binding approximation is valid, in sharp contrast with the current approaches used in the literature.§ III. NUMERICAL ANALYSIS To illustrate our method, in the following we consider the problem of single photon transport in one-dimensional coupled resonator waveguides. To be more specific, we assume that the photon propagates along the left waveguide and enters the multiple parallel coupled waveguides through the splitter point O. It is then transferred to the right waveguide via the converter O'. As part of the scattering cluster, each of those one-dimensional waveguides is coupled with a two-level atom. Let us denote by a^†_j the single mode in the j-th cavity, with frequency ω.
The Hamiltonian of the CRW is given byH_cv = ω∑_ja^†_ja_j-∑_j,j'(J_j,j'a^†_ja_j'+H.c.)where J_j,j' is the inter-cavity photon hopping constant among the connecting cavities. For uniform hopping constant J_j,j'=J, theHamiltonian (1) can be readily diagonalized to yield the dispersion relation E_k=ω-2Jcos(k). Here we assume that ħ=1 and thelattice constant (inter-cavity distance) a=1.We further assume that the Hamiltonian of the atom is given by H_A=∑ω_j|e_j><e_j| and the interaction of the single photon with atwo-level atom inside the j-th cavity, is then described by the Jaynes-Cummings HamiltonianH_j = g_j(a^†_j|g_j><e_j|+H.c.) 0≤ j ≤ N-1Here |g_j> and |e_j> are the ground and excited state of the j-th two-level atom, respectively. ω_j is the transition energybetween the two energy levels. g_j is the photon-atom coupling strength. The total Hamiltonian of the whole system is H=H_cv+H_A+H_int.Thus the stationary eigenstate may be expressed as|ψ_k> = ∑_jψ(j)(a^†_j|g,0>+ψ_j|e,0>,where |0> stands for the vacuum state the photon in the cavities coupled to the waveguide, and ψ_j, (0≤ j ≤ N-1), gives theprobability amplitude of the atom in the excited state.§.§ A. N atoms coupled in series The photon tranport in one-dimensional coupled resonator waveguide has been extensively in recent years studied with the use of scatteringtheory based on the quantum Lippman-Schwinger formalism<cit.>, by transfer matrix method<cit.>, the input-output theory <cit.>and the Bethe ansatz approach in the contex of the real continuous and discrete space<cit.>. The dynamics of the photon transporthas be investigated by the coupled mode theory as well as the input-output theory<cit.>. Here we derive a formalism based onthe tight-binding approximation<cit.>. 
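Before specializing, the transfer-matrix machinery of Sec. II is easy to sanity-check numerically: the scalar recursion for the elements a, b, c, d must agree with the direct 2×2 matrix product. A minimal sketch (numpy; the random site energies are arbitrary choices of ours):

```python
import numpy as np

rng = np.random.default_rng(1)
alpha = rng.uniform(-2.0, 2.0, 8)        # site energies alpha_j = eps_j - E_k

# Direct product M = T(alpha_{n-1}) ... T(alpha_0) of elementary step matrices
M = np.eye(2)
for a_j in alpha:
    M = np.array([[a_j, -1.0], [1.0, 0.0]]) @ M

# Scalar recursion for the elements of M, seeded by the single-site values
a, b, c, d = alpha[0], -1.0, 1.0, 0.0
for a_j in alpha[1:]:
    a, b, c, d = a_j * a - c, a_j * b - d, a, b
```

Each elementary factor has unit determinant, so ad - bc = 1 is preserved along the whole chain, which is a useful numerical check in its own right.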
We show that this formalism is a practical and efficient method for calculating the transmission and reflection amplitudes for arbitrary atoms and coupling modes. Suppose that the scattering cluster is formed by N cavities with identical two-level atoms inside. The TB equations for this central coupled lattice are ωψ(j)-Jψ(j-1)-Jψ(j+1)+gϕ_j=E_kψ(j), ω_jϕ_j+gψ(j)=E_kϕ_j, where ω_j stands for the transition energy of the coupled atoms. Taking into account the fact that the atoms are identical, i.e., ω_j=ω_0 for 0 ≤ j ≤ N-1, we find α=2cos(k)+g^2/Δ_k, where Δ_k=ω-2cos(k)-ω_0 is the detuning. Then it follows immediately that r =(-a_N-1-b_N-1e^-ik+c_N-1e^ik+d_N-1)/(a_N-1+(b_N-1-c_N-1)e^ik-d_N-1e^2ik), t_N =-2i(a_N-1d_N-1-b_N-1c_N-1)sin(k)/(a_N-1+(b_N-1-c_N-1)e^ik-d_N-1e^2ik). For an identical coupled resonator waveguide, it has been shown that a finite bandwidth of perfect reflection R=1 may be observed for certain values of the atom transition frequency ω_0, even for a single coupled atom. Increasing the number of coupled atoms can only lead to a clear cut-off of the band borders <cit.>. In order to demonstrate how the atom transition frequency controls the photon transport in our model system, we assume that the cavity-waveguide and the cavity-atom coupling strengths are constant and taken to be 1 throughout this paper. Obviously, much more diversified transport properties may emerge for varying coupling constants. In general, two parameter regimes are considered. In one, the detuning of the incoming photon energy from the atom transition energy is inside the interference range, where the waveguide photon mode is comparable to that of the atom, so that stimulated absorption or emission may occur. In the other regime, the photon frequency is far away from the interference range, so resonant transmission is usually expected. In the following numerical analysis, we focus on the perfect reflection regime.
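For the serial case, the closed-form amplitudes above are equivalent to matching the lead plane waves ψ(-1) = e^{-ik} + re^{ik} and ψ(N) = te^{ikN} through M = T(α)^N with α = 2cos(k) + g²/Δ_k. The sketch below takes that equivalent matching route, in units J = ħ = 1 with ω = ω_0 = 0 as our choice of energy origin (not the parameters of the figures); flux conservation gives R + T = 1, and R approaches 1 where Δ_k → 0.

```python
import numpy as np

def reflectance(k, N=3, g=1.0, omega=0.0, omega0=0.0):
    """R and T for N identical atom-coupled cavities (units J = hbar = 1)."""
    delta = omega - 2.0 * np.cos(k) - omega0          # detuning Delta_k
    alpha = 2.0 * np.cos(k) + g**2 / delta            # effective site energy
    step = np.array([[alpha, -1.0], [1.0, 0.0]])      # elementary TB step
    M = np.linalg.matrix_power(step, N)
    (A, B), (C, D) = M
    # Match psi(-1) = e^{-ik} + r e^{ik} and psi(N) = t e^{ikN} through M.
    lhs = np.array([[A + B * np.exp(1j * k), -np.exp(1j * k * N)],
                    [C + D * np.exp(1j * k), -np.exp(1j * k * (N - 1))]])
    rhs = -np.array([A + B * np.exp(-1j * k), C + D * np.exp(-1j * k)])
    r, t = np.linalg.solve(lhs, rhs)
    return abs(r)**2, abs(t)**2
```

Scanning k and ω_0 with this function reproduces, qualitatively, the structure discussed next: a finite window of R = 1 whose position moves with the atom transition frequency.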
This phenomenon is related to the manipulation of the propagating photon group velocity, as a finite bandwidth makes it possible to accommodate a real wave packet. Fig.2 shows typical photon transport properties for identical coupled atoms. We plot the reflectance R as a function of the photon wavevector and the detuning Δ. As shown in Fig.2 (a) and (b), there are two typical parameter sets related to R=1. One is defined by dR/dk=0, and the other corresponds to dR/dk>>1. Our numerical results reveal that as the atom transition energy is changed, the position and the bandwidth of the R=1 window vary accordingly. As shown in Fig.2(c), when the atom frequency is changed, the R=1 band sweeps over the whole spectrum, and the bandwidth increases as the atom frequency increases from ω_0=0 to ω_0=ω=2π. It then decreases to its minimum at ω_0=4π. This variation pattern repeats itself periodically as one varies the atom frequency. It looks quite surprising that even when the incoming photon energy is completely out of the possible interaction range with the atom, there still exists a small finite R=1 window, and this leads to an atomic mirror with all-frequency perfect reflection. It is noticed that for identical coupled atoms, perfect reflection intervals cannot be created by increasing the number of atoms, which goes against the intuition, or the conclusions drawn in other model systems, that the accumulated effect of multiple scattering will eventually result in complete reflection <cit.>. Generally speaking, there are many different coupling patterns by which the atoms can be connected to form an array of coupled resonator waveguides. In addition to uniformly coupled atoms, the next important pattern is the so-called super-array of coupled resonator waveguides, where two different kinds of atoms are connected alternately, symbolized as ABABABA....
Here we conduct a numerical analysis of the simplest combination case, that is, two two-level atoms with different atom transition energies. Fig.3 shows the transport features for N=3 atoms with connection patterns symbolized by ABA and BAB. This type of coupling resembles the model systems studied in <cit.>, where it has been shown that such systems support photon quasibound states and offer a mechanism for photon storage and control of the photon group velocity. For this kind of super-array system, by simply changing the position order, one may obtain an ideal atomic mirror with perfect reflection.§.§ B. N atoms coupled in parallel One of the well-studied two-terminal parallel connected systems is the quantum ring, where two propagation channels are merged at the left and right junctions, through which they are connected to the left and right leads, respectively. Here we first derive a formalism for an arbitrary number of transmission lines that are joined at their extremes and coupled to two terminals (see Fig.1(b)). To illustrate the effects of the branched scattering processes in each single coupled waveguide, let us first study the special case where there is only one atom coupled to each single waveguide. Consider now N resonators with embedded two-level atoms, connected to the left and the right leads in a parallel way, and labelled by j=0 through j=N-1. Those resonators are all coupled to the same resonator of the left lead and to the same N-th oscillator of the right lead, respectively.
Hence the tight-binding equations for those parallel connected resonators read ωψ(j)-Jψ(-1)-Jψ(N)+g_jϕ_j=E_kψ(j), ωψ(-1)-Jψ(-2)-J∑_j=0^N-1ψ(j)=E_kψ(-1), ωψ(N)-Jψ(N+1)-J∑_j=0^N-1ψ(j)=E_kψ(N), ω_jϕ_j+g_jψ(j)=E_kϕ_j. Using the standard traveling wave functions ψ(-1) = e^-ik+re^ik, ψ(N) = te^ikN, and defining γ_N=∑_j=0^N-1J/(ω-E_k+g_jG_j), G_j=g_j/(E_k-ω_j), 0≤ j ≤ N-1, we obtain the corresponding transmission and reflection amplitudes, r=(2γ_Ncos(k)-1)/(1-2γ_Ne^ik), t_N=te^ik(N-1)=-2iγ_Nsin(k)/(1-2γ_Ne^ik). It is an easy matter to verify that if Δ_j=ω - ω_j-2cos(k)=0 or ω - E_k+g_jG_j=0, then R=1 and T=0 follow immediately. Moreover, when N goes to infinity, we have R=cos^2(k) and T=sin^2(k). Fig.4 demonstrates the reflectance of a single photon through a ring with one atom (a) and two atoms (b) on each branch. An interesting feature of the scattering spectrum is the coexistence of perfect transmission and reflection bands, as illustrated in Fig.4(a). This result suggests that a coupled resonator ring waveguide behaves just like an energy filter that allows photons of certain energies to pass freely and reflects others completely. We now turn to calculating the transport properties of general parallel connected coupled resonator waveguides. In this case the scattering box is composed of N identical parallel coupled resonator waveguides, where the left single waveguide is split into N identical sub-waveguides, so that ψ_j(0)=ϕ(0) and ψ_j(n_j-1)=ϕ(n-1). Assume that there are N_0 parallel connected waveguides, each of them coupled with n identical two-level atoms. Thus, (5) becomes 1+r =∑_i=1^N_0ψ_i(0)=N_0ϕ(0), t_N =∑_i=1^N_0ψ_i(n_i-1)=N_0ϕ(n-1). By employing the tight-binding equations, we obtain the following results, r =(-a_n-1-N_0b_n-1e^-ik+N_0c_n-1e^ik+N^2_0d_n-1)/(a_n-1+N_0e^ik(b_n-1-c_n-1-N_0d_n-1e^ik)), t_N =-2iN_0(a_n-1d_n-1-b_n-1c_n-1)sin(k)/(a_n-1+N_0e^ik(b_n-1-c_n-1-N_0d_n-1e^ik)). For N_0=1 we recover the case of a single waveguide with serially coupled atoms.
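The γ_N expressions for one atom per parallel branch are easy to exercise numerically. The sketch below, in units J = ħ = 1 (the branch parameters are arbitrary choices of ours), confirms flux conservation |r|² + |t_N|² = 1, the single-branch resonance R → 1 as Δ_j → 0, and the N → ∞ limit R → cos²(k).

```python
import numpy as np

def parallel_rt(k, g, omega_a, omega=0.0):
    """r and t_N for N parallel branches, one atom (coupling g_j, transition
    frequency omega_a_j) per branch, in units J = hbar = 1 so that
    E_k = omega - 2 cos(k)."""
    Ek = omega - 2.0 * np.cos(k)
    G = g / (Ek - omega_a)                       # atom factor G_j
    gamma = np.sum(1.0 / (omega - Ek + g * G))   # gamma_N
    den = 1.0 - 2.0 * gamma * np.exp(1j * k)
    r = (2.0 * gamma * np.cos(k) - 1.0) / den
    tN = -2j * gamma * np.sin(k) / den
    return r, tN
```

Because γ_N is real for real site energies, |r|² + |t_N|² = 1 holds identically, and γ_N → ∞ with increasing branch number drives R toward cos²(k), matching the limit quoted above.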
There are several remarkable features of our model system. One is the apparently surprising conclusion that when N_0 is sufficiently large, the parallel coupled system behaves like a totally perfect reflection mirror with R=|r|^2=1 and T=|t|^2=0, independent of the detuning. This holds in particular when the detuning is such that there is no apparent radiative interaction between the photon and the atom, in which case resonant transmission would normally be expected. The likely explanation is destructive interference among the photon states propagating along the different channels. It is obvious that the mechanism behind the perfect reflection revealed here differs from that of the serially coupled waveguide illustrated in the literature <cit.>, where the total reflection is an accumulation process due to multiple consecutive scatterings. Nevertheless, as will be shown later, this phenomenon is indeed closely related to the characteristic features of the single coupled waveguide. In Fig.5 we show the reflectance for a single coupled resonator waveguide, (a) and (b), and for parallel connected multiple coupled resonator waveguides. We find that by increasing the number of connected waveguides, the perfect reflection windows expand over the whole spectrum, while the bandwidth and spectral position are almost independent of the number of serially coupled atoms along the waveguide. This seems quite counter-intuitive, since the resistance in a serial circuit is expected to grow, and to be reduced in parallel connections. It should be emphasized that the mechanism of the atomic mirror observed here is different from those discussed in Ref.<cit.>, where the perfect reflection is attributed to multiple collisions with the serially coupled atom-containing cavities.
Numerical calculations reveal that, in contrast to the serially coupled resonator waveguide, incorporating more cavities into the system destroys the above-mentioned perfect transmission and reflection windows, leaving the usual transmission patterns. This can be derived directly from Eq.(28); that is, for identical cavities, when N →∞, R=cos^2(k).§ VI. CONCLUSIONS In conclusion, we have developed a simple tight-binding formalism for both serially and parallel connected coupled resonator waveguides, and conducted a series of numerical calculations of single photon transport properties. Our numerical results reveal the following novel features of the coupled resonator waveguide: (i) For a single waveguide with serially coupled cavities containing embedded two-level quantum systems, the waveguide photon mode with arbitrary wavevector can be completely reflected by a properly chosen atom transition energy. A similar atomic control of photon transfer can be achieved by the so-called super-array waveguide, where two types of atoms are connected alternately. (ii) In the case of parallel coupled resonator waveguides, an all-frequency perfect reflection regime is reached when the number of coupled waveguides is sufficiently large. A photon filter is obtained when the single photon propagates through a ring of coupled resonator waveguides, in which case resonant transmission occurs for certain values of the photon energy, while perfect reflection is imposed on single photons with different wavevectors. It is obvious that our method provides a straightforward means for investigating one-dimensional photon transport through scattering clusters with certain regular internal structures, both in theoretical discussion and in numerical analysis. It is expected that more realistic, complicated parameter regimes can be studied with the use of the tight-binding formalism presented in this work, and that many more novel transport properties can be revealed. References: [Srini] K. Srinivasan and O.
Painter, Nature (London) 450, 862 (2007).
[Raimond] J. M. Raimond, M. Brune, and S. Haroche, Rev. Mod. Phys. 73, 565 (2001).
[Pelton] M. Pelton, C. Santori, J. Vuckovic, B. Zhang, G. S. Solomon, J. Plant, and Y. Yamamoto, Phys. Rev. Lett. 89, 233602 (2002).
[Bermel] P. Bermel, A. Rodriguez, S. G. Johnson, J. D. Joannopoulos, and Marin Soljacic, Phys. Rev. A 74, 043818 (2006).
[Menon] V. M. Menon, W. Tong, F. Xia, C. Li, and S. R. Forrest, Opt. Lett. 29, 513 (2004).
[Astaf] O. Astafiev, A. M. Zagoskin, A. A. Abdumalikov Jr., Yu. A. Pashkin, T. Yamamoto, K. Inomata, Y. Nakamura, and J. S. Tsai, Science 327, 840 (2010).
[Shen05] J.-T. Shen and S. Fan, Opt. Lett. 30, 2001 (2005).
[Shen07] J.-T. Shen and S. Fan, Phys. Rev. Lett. 98, 153003 (2007).
[Gardiner] C. W. Gardiner and M. J. Collett, Phys. Rev. A 31, 3761 (1985).
[Fan10] S. Fan, S. E. Kocabas, and J. T. Shen, Phys. Rev. A 82, 063821 (2010).
[Fan99] S. Fan, P. R. Villeneuve, J. D. Joannopoulos, M. J. Khan, C. Manolatou, and M. A. Haus, Phys. Rev. B 59, 15882 (1999).
[Xu] Y. Xu, Y. Li, R. K. Lee, and A. Yariv, Phys. Rev. E 62, 7389 (2000).
[Tsoi] T. S. Tsoi and C. K. Law, Phys. Rev. A 80, 033823 (2009).
[Roy] D. Roy, Phys. Rev. Lett. 106, 053601 (2011).
[Zhou08] L. Zhou, Z. R. Gong, Yu-xi Liu, C. P. Sun, and F. Nori, Phys. Rev. Lett. 101, 100501 (2008).
[Yanik04] M. F. Yanik, W. Suh, Z. Wang, and S. Fan, Phys. Rev. Lett. 93, 233903 (2004).
[Yanik05] M. F. Yanik and S. Fan, Phys. Rev. A 71, 013803 (2005).
[Zhou08a] L. Zhou, H. Dong, Yu-xi Liu, C. P. Sun, and F. Nori, Phys. Rev. A 78, 063827 (2008).
[Chang11] Y. Chang, Z. R. Gong, and C. P. Sun, Phys. Rev. A 83, 013823 (2011).
[Zhou13] L. Zhou, L. Yang, Y. Li, and C. P. Sun, Phys. Rev. Lett. 111, 103604 (2013).
[Cheng12] M. Cheng, X. Ma, M. Ding, Y. Luo, and G. Zhao, Phys. Rev. A 85, 053840 (2012).
[Liao16] Z. Liao, H. Nha, and M. S. Zubairy, Phys. Rev. A 93, 033851 (2016).
[Qin16] W. Qin and F. Nori, Phys. Rev. A 93, 032337 (2016).
[MZI] Yi Xu and Andrey E. Miroshnichenko, Phys. Rev. A 84, 033828 (2011).
[Calajo] G. Calajo and P. Rabl, arXiv:1612.06728 (2016).
[Kowal] D. Kowal, U. Sivan, O. Entin-Wohlman, and Y. Imry, Phys. Rev. B 42, 9009 (1990).
[Manolatou] C. Manolatou, M. J. Khan, S. Fan, P. R. Villeneuve, and M. A. Haus, IEEE J. Quantum Electron. 35, 1322 (1999).
[XuFan] S. Xu and S. Fan, Phys. Rev. A 94, 043826 (2016).
http://arxiv.org/abs/1703.09259v1
{ "authors": [ "Yu Jiang", "M. Lozada-Cassou" ], "categories": [ "quant-ph" ], "primary_category": "quant-ph", "published": "20170327183636", "title": "One-Dimensional Photon Transport Through a Two-Terminal Scattering Cluster: Tight-Binding Formalism" }
petri.toivanen@fmi.fi[]Telephone number: +358-50-5471521Finnish Meteorological Institute, FIN-00101, Helsinki, FinlandThe shape of a rotating electric solar wind sail under the centrifugal force and solar wind dynamic pressure is modeled to address the sail attitude maintenance and thrust vectoring. The sail rig assumes centrifugally stretched main tethers that extend radially outward from the spacecraft in the sail spin plane. Furthermore, the tips of the main tethers host remote units that are connected by auxiliary tethers at the sail rim. Here, we derive the equation of main tether shape and present both a numerical solution and an analytical approximation for the shape as parametrized both by the ratio of the electric sail force to the centrifugal force and the sail orientation with respect to the solar wind direction. The resulting shape is such that near the spacecraft, the roots of the main tethers form a cone, whereas towards the rim, this coning is flattened by the centrifugal force, and the sail is coplanar with the sail spin plane. Our approximation for the sail shape is parametrized only by the tether root coning angle and the main tether length. Using the approximate shape, we obtain the torque and thrust of the electric sail force applied to the sail. As a result, the amplitude of the tether voltage modulation required for the maintenance of the sail attitude is given as a torque-free solution. The amplitude is smaller than that previously obtained for a rigid single tether resembling a spherical pendulum. This implies that less thrusting margin is required for the maintenance of the sail attitude. For a given voltage modulation, the thrust vectoring is then considered in terms of the radial and transverse thrust components. 
Electric solar wind sail Attitude control Transverse thrust
§ NOMENCLATURE
a = voltage modulation, torque-free
c = cosine function
e = unit vector
F = electric sail force
ℱ = total sail thrust
G = centrifugal force
g = voltage modulation, general
ℐ = integral
k = force ratio
L = main tether length
l = coordinate along the main tether
M = total mass
N = number of main tethers
m = single main tether mass
s = sine function
T = main tether tension
𝔗 = electric sail torque
𝒯 = total sail torque
u = local tether tangent
v = solar wind velocity
v = solar wind speed
(x,y,z) = Cartesian coordinates
α = sail angle
γ = local tether coning angle
Δ t = rotation period
μ = linear mass density
ψ = thrust angle
(ρ,ϕ,z) = circular cylindrical coordinates
τ = angular torque density
ξ = electric sail force factor
ω = sail spin rate
Subscripts
0 = tether root
i = index
L = tether length
mt = main tether
q = vector component index
ru = remote unit
s = sail
(x,y,z) = Cartesian coordinates
α = sail angle
γ = local tether coning angle
(ρ,ϕ,z) = circular cylindrical coordinates
Superscripts
j = summation index
* = orbital frame of reference
§ INTRODUCTION
The electric solar wind sail is a propulsion system that uses the solar wind proton flow as a source of momentum for spacecraft thrust <cit.>. The momentum of the solar wind is transferred to the spacecraft by electrically charged light-weight tethers that deflect the proton flow. The sail electrostatic effective area is then much larger than the mechanical area of the tethers, and the system promises high specific acceleration up to about 10 mm/s^2 <cit.>. As the tethers are polarized at a high positive voltage they attract electrons that in turn tend to neutralize the tether charge state. However, only a modest amount of electric power of a few hundred watts is required to operate electron guns to maintain the sail charge state, and the sail can easily be powered by solar panels <cit.>. The main tethers are centrifugally deployed radially outward from the spacecraft in the sail spin plane (Fig. <ref>).
To be tolerant to the micro-meteoroid flux, each tether has a redundant structure that comprises a number (typically 4) of 20-50 μm metal wires bonded to each other, for example by ultrasonic welding <cit.>. As a baseline design, the tips of the main tethers host remote units that are connected by auxiliary tethers at the sail perimeter to provide mechanical stability to the sail <cit.>. As the electric sail offers a large effective sail area with modest power consumption and low mass, it promises a propellantless continuous low thrust system for spacecraft propulsion for various kinds of missions <cit.>. These include fast transit to the heliopause <cit.>, missions in non-Keplerian orbits such as helioseismology in a solar halo orbit <cit.>, space weather monitoring with an extended warning time (closer to the sun than L1), and multi-asteroid touring missions. Using the electric sail, such missions can typically be accomplished without planetary gravity assist maneuvers and the associated launch windows. If planetary swing-bys are planned during the mission, each solar eclipse has to be carefully considered to avoid drastic thermal contraction and expansion of the sail tethers <cit.>. In addition to scientific missions, the electric sail can be used for planetary defense as a gravity tractor <cit.> or an impactor <cit.>, and to rendezvous with those Potentially Hazardous Objects that cannot be reached by conventional propulsion systems <cit.>. The electric sail has also been suggested as a key method of transportation for products of asteroid mining <cit.>. Specifically, water from asteroids can be used for in-orbit production of LH2/LOX by electrolysis to provide a cost-efficient way of transporting the infrastructure associated with manned Mars missions <cit.>.
These can be realized by applying differential voltage modulation to the sail tethers synchronously with the sail spin <cit.>. The flight control is thus similar to helicopter rotor flight control based on the blades' angle of attack. Furthermore, the sail can be fully turned off for orbital coasting phases or proximity maneuvers near light-weight targets such as small asteroids. The coasting phases are also central to optimal transfer orbits between circular, for example planetary, orbits <cit.> (when reaching a target in an elliptical orbit, such as the comet 67P/Churyumov-Gerasimenko, coasting phases are not needed <cit.>). Note that these coasting phases are not associated with the planetary gravity assist maneuvers. Navigation to the target is also feasible, in spite of the variable nature of the solar wind <cit.>. In this paper, we derive an integral equation for the sail main tether shape under the solar wind dynamical pressure and the centrifugal forces in Sec. <ref>. The resulting equation of the tether shape is then solved numerically (Sec. <ref>), and an analytical approximation for the shape is obtained (Sec. <ref>). Using this approximation, we obtain general expressions for the thrust (Sec. <ref>) and the torque (Sec. <ref>) arising from the solar wind transfer of momentum to the sail. In Sec. <ref>, we introduce a tether voltage modulation that leads to a torque-free sail motion. Finally, in Sec. <ref>, we consider the sail thrust vectoring in terms of both the radial and transverse thrust. The reference frames used in this paper are illustrated in Fig. <ref>. One of the frames (x^∗,y^∗,z^∗) is the orbital reference frame, with the z^∗ axis pointing to the sun, the y^∗ axis in the direction of the negative normal of the orbital plane, and the x^∗ axis completing the triad in the direction of the orbital velocity vector.
In the other system (x,y,z), z is aligned with the sail spin axis, and x is chosen so that the solar wind nominal direction is in the xz plane. These two systems are related by a rotation around the y^∗ axis by the sail angle α. In the xyz system, the circular cylindrical coordinates (ρ,ϕ,z) are used. The reference frames introduced above are local in the following sense: they rotate with respect to the distant stars while the sail is orbiting around the sun; the sail itself, however, keeps its orientation with respect to the distant stars, and thus the sail spin axis slowly rotates (360^∘/yr) in these non-inertial local frames, which manifests itself as a Coriolis effect. In order to maintain the sail orientation with respect to the sun, an additional tether voltage modulation has to be introduced. The amplitude of this modulation is, however, much smaller than that of the modulation associated with an inclined sail <cit.>, and the Coriolis effect can be neglected in this work. It is noted, however, that the Coriolis effect can only be partially canceled by the main tether voltage modulation, and it leads to a secular variation in the sail spin rate <cit.>. This is a topic for a future study that addresses the electric sail spin rate variations and control using the model developed in this paper.§ TETHER SHAPE §.§ Equation of tether shape The electric sail tether shape under the solar wind forcing can be obtained by writing an integral equation similar to that of a catenary <cit.>. Fig. <ref> shows the electric sail force and the centrifugal force influencing the tether shape. Local unit vectors parallel and perpendicular to the tether can be written in terms of the sine and cosine of the local coning angle γ as e_∥ =c_γe_ρ +s_γ e_z, e_⊥ =s_γ e_ρ -c_γ e_z. According to Fig. <ref>, the total force T =F +G that equals the tether tension can be split into ρ and z components as T_z/T_ρ = tanγ = dz/dρ≡ u(ρ), where we have introduced the local tether tangent u(ρ).
An equation for the tether shape can then simply be written as u = F_z/(G + F_ρ). Note that the forces present here are the total forces integrated over the tether from the reference point ρ to the tether tip at ρ_L. For a tether segment dl with a mass of dm_ mt, the centrifugal force (dG = ω^2ρ dm_ mt) can be written in terms of the tether linear mass density μ (dG = μω^2ρ dl). As the length of the tether segment reads as dl = √(1+(dz/dρ)^2) dρ = √(1+u^2) dρ, the total centrifugal force is G = μω^2∫_ρ^ρ_Lρ√(1+u^2) dρ + m_ ruω^2ρ_L, where the last term is the centrifugal force exerted by the remote unit including the auxiliary tether mass. The electric sail force per unit tether length is directed along the solar wind velocity component perpendicular to the tether direction as dF/dl = ξ v_⊥, where v_⊥ is the solar wind component perpendicular to the main tether direction and ξ is a force factor arising from the electric sail thrust law <cit.>. Similarly to the centrifugal force above, the electric sail force can be integrated to give F = ∫_ρ^ρ_Lξ v_⊥√(1+u^2) dρ. As the solar wind velocity is assumed to be radial, it can be written as v = v(s_α e_ρ + c_α e_z) in terms of the sail angle and the solar wind speed, with typical values of about 400 km/s. The component perpendicular to the tether direction can be expressed in terms of the unit vector of Eq. (<ref>) as v_⊥ = ( v· e_⊥) e_⊥ = v( s_α s_γ^2 - c_α s_γ c_γ) e_ρ + v( c_α c_γ^2 - s_α s_γ c_γ) e_z. Using trigonometric identities to express s_γ and c_γ in terms of tanγ (with tanγ = u), the ρ and z components of the electric sail force (<ref>) can be written as F_ρ = -ξ v∫_ρ^ρ_L ( c_α - s_α u)u/√(1+u^2) dρ and F_z = ξ v∫_ρ^ρ_L ( c_α - s_α u)/√(1+u^2) dρ. Finally, inserting the integral force terms in Eq.
(<ref>), the equation of shape of the tether can be written as u = [ξ v∫_ρ^ρ_L ( c_α - s_α u)/√(1+u^2) dρ] / [μω^2∫_ρ^ρ_Lρ√(1+u^2) dρ + m_ ruω^2ρ_L - ξ v∫_ρ^ρ_L ( c_α - s_α u)u/√(1+u^2) dρ]. In addition, the tether extent in ρ, ρ_L, is determined by the tether length and shape as L = ∫_ρ_0^ρ_L√(1+u^2) dρ. The shape of the tether can then be solved using Eqs. (<ref>) and (<ref>).

§.§ Numerical solution

A numerical solution to Eq. (<ref>) can be found by considering z(ρ) to be locally linear, z_i = u_i ρ + c_i, at ρ = ρ_i. All integrals in Eq. (<ref>) depend only on u and ρ, and we are left to find a recurrence relation only for u_i. To do so, an integral ℐ of any general function h(ρ,u) can be written as ℐ_i = ∫_ρ_i^ρ_L h(ρ,u) dρ = h(ρ_i,u_i)Δρ_i + ℐ_i-1. An equation for u_i can be obtained by substituting all integrals in Eq. (<ref>) with Eq. (<ref>), accordingly. After some algebra, u_i can be written as u_i = [ξ v c_αΔ L + F_i-1^z] / [(ξ v s_α + μω^2ρ_i-1)Δ L + G_i-1 + m_ ruω^2 ρ_L - F_i-1^ρ], where Δ L is the length of the tether segment. Given an initial starting point ρ_L, a numerical solution can be found recursively using Eq. (<ref>) over the tether length. As ρ_L is unknown, the process is iterated, starting from an initial guess of ρ_L, until the solved tether root distance equals the actual tether attachment point at the spacecraft. Fig. <ref> shows the tether shape z(ρ) and the local tether tangent u. The parameter values used are L = 20 km, ξ v = 0.5 mN/km, α = 45^∘, μ = 10 g/km, m_ ru = 1 kg, and Δ t = 125 min. These values are motivated as follows: a baseline sail assumes a hundred tethers with a length of 20 km each; the thrust per tether length of 0.5 mN/km translates to a baseline thrust of 1 N; the tether linear mass density is about 10 g/km <cit.>; a remote unit with a dry mass of about 0.5 kg was developed and qualification tested in an EU/FP7/ESAIL project <cit.>; and the rotation period of 125 min is used here for a prominent tether coning to visualize the tether shape.
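The same shape can also be reproduced by a direct fixed-point iteration of the integral equation u = F_z/(G + F_ρ) itself, rather than the recursion above. The following Python sketch is an illustrative check, not code from the paper: it assumes the baseline parameters quoted above (ξv = 0.5 mN/km, α = 45°, μ = 10 g/km, m_ru = 1 kg), but with the faster 70-min spin period used below for the weakly coning sail, and the grid size and iteration scheme are arbitrary choices. The converged root tangent u(0) can be compared with the first-order expression u_0 = 2k cosα/(2 + k sinα) derived in the following subsection.

```python
import math

def tether_shape(L=20e3, xiv=0.5e-6, alpha=math.radians(45.0),
                 mu=10e-6, m_ru=1.0, period=70*60.0, n=400, iters=60):
    """Fixed-point iteration of u(rho) = F_z/(G + F_rho) on a uniform rho grid.

    SI units assumed: xiv = 0.5 mN/km = 0.5e-6 N/m, mu = 10 g/km = 1e-5 kg/m.
    """
    w2 = (2.0*math.pi/period)**2
    ca, sa = math.cos(alpha), math.sin(alpha)
    rho_L = L                  # initial guess for the tether radial extent
    u = [0.0]*(n + 1)          # start from a fully planar sail
    for _ in range(iters):
        d = rho_L/n
        rho = [i*d for i in range(n + 1)]
        s = [math.sqrt(1.0 + ui*ui) for ui in u]
        # integrands of F_z, F_rho and G per unit rho
        fz = [xiv*(ca - sa*u[i])/s[i] for i in range(n + 1)]
        fr = [-xiv*(ca - sa*u[i])*u[i]/s[i] for i in range(n + 1)]
        g = [mu*w2*rho[i]*s[i] for i in range(n + 1)]
        # tail integrals from rho_i to rho_L by the trapezoidal rule
        Fz = [0.0]*(n + 1); Fr = [0.0]*(n + 1); G = [0.0]*(n + 1)
        for i in range(n - 1, -1, -1):
            Fz[i] = Fz[i + 1] + 0.5*(fz[i] + fz[i + 1])*d
            Fr[i] = Fr[i + 1] + 0.5*(fr[i] + fr[i + 1])*d
            G[i] = G[i + 1] + 0.5*(g[i] + g[i + 1])*d
        # shape equation; the remote-unit term m_ru*w2*rho_L closes the tip
        u = [Fz[i]/(G[i] + m_ru*w2*rho_L + Fr[i]) for i in range(n + 1)]
        # update rho_L from the arc-length constraint L = int sqrt(1+u^2) drho
        rho_L = L/(sum(s)/len(s))
    return u, rho_L
```

With these assumed values, the iteration yields a nearly linear tangent profile, u(0) ≈ 0.13 decaying to u(ρ_L) = 0, and ρ_L slightly below L, in line with the analytical approximation of the next subsection.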
Note that the solution can be easily verified by calculating the force integrals in Eq. (<ref>), as shown in the bottom panel of Fig. <ref>, and equating these against u as in Eq. (<ref>).

§.§ Analytical approximation

An analytical approximation for the tether shape can be obtained for a weakly coning sail (u ≈ 0). Fig. <ref> shows the numerically obtained tether shape with a maximum tether tension of 5 grams. As the tether can tolerate a tension of about 13 grams at maximum <cit.>, the tension of 5 grams leaves a clear safety margin. The parameter values are the same as in Fig. <ref> except that the sail spin is faster, with a rotation period Δ t = 70 min. In general, an approximation for the equation of shape (<ref>) can be found as an expansion ρ = b_0 + b_1 u + b_2 u^2. After solving the coefficients (b_0,b_1,b_2) using Eqs. (<ref>) and (<ref>), u can be solved from the expansion above. However, for the purposes of this paper we simplify the analysis and consider only the linear terms, so that u can be written as u = u_0(1-ρ/ρ_L). As can be seen in Fig. <ref>, this is well justified: u = u_0 at ρ = 0 and u = 0 at ρ = ρ_L, as required. The tether shape can then be integrated (dz/dρ = u) to give z = u_0ρ[1-ρ/(2ρ_L)]. To finalize our model for the tether shape, it remains to solve for ρ_L and u_0 as functions of the sail and solar wind parameters. Using Eq. (<ref>), expanding √(1+u^2) as a power series in u, and integrating, ρ_L can be expressed in terms of the total tether length as ρ_L = L(1 - u_0^2/6). The equation of shape (<ref>) at ρ = 0 can be written as u_0 = [ξ v∫_0^ρ_L ( c_α - s_α u) dρ] / [μω^2∫_0^ρ_Lρ dρ + m_ ruω^2ρ_L - ξ v c_α∫_0^ρ_L u dρ] by excluding terms higher than first order in u_0 (√(1+u^2) ≈ 1). Noting that ∫_0^ρ_L u dρ = ρ_L u_0/2, one can solve u_0 to obtain u_0 = 2kcosα/(2 + ksinα), where k = 2ξ v/[(m_ mt + 2 m_ ru)ω^2] is the ratio of the electric sail force to the centrifugal force. Fig.
<ref> shows the approximations for the shape for the sail angles -α and α, corresponding to the tether azimuth locations ϕ = 0 and ϕ = π, respectively.

§.§ Sail shape

The shape of the model sail is parametrized by the radial extent of the sail (ρ_ s) and the tangent of the sail coning angle (u_ s) at the spacecraft. The sail radial extent is trivially obtained: it equals the single tether length up to second order in u_ s as in Eq. <ref>, and we are left only to determine u_ s. Here, we present two estimates for u_ s based on the results shown above. One solution is to use Eq. (<ref>) to give the sail coning tangent as an average of the tether tangents at ±α, u_ s = 4kcosα/(4 - k^2sin^2α). The other solution is to consider the solar wind vector to be rotated around the z axis in sail coordinates to the locations of the individual tethers. Then, as the solar wind components in the sail plane cancel when averaging over the tethers, we are left with an effective solar wind z component v_ eff = vcosα. Then, using Eq. (<ref>) with zero effective sail angle, the sail coning tangent is given as u_ s = k_ eff = kcosα. As the centrifugal force is typically much larger than the electric sail force (k ≪ 1), Eqs. (<ref>) and (<ref>) are essentially equal.

§ SAIL THRUST AND TORQUE

§.§ Thrust

The total sail thrust is calculated by summing over the number of tethers (N) and integrating over the single tethers as ℱ_q = ∑_j=1^N∫_0^L (dF_q^j/dl) dl. By changing variables (l →ρ→ u), the integral in Eq. (<ref>) can be written as ℱ_q = ∑_j=1^N∫_0^u_ s (ρ_L/u_ s)(dF_q^j/dl) √(1+u^2) du. Next, we assume that the sail comprises such a large number of tethers (i.e., N ≳ 12) that the summation over the tethers in Eq. (<ref>) can be replaced by integration over the tether azimuthal locations ϕ as ∑_j=1^N F(ϕ_j) → N∫_0^2π f(ϕ) dϕ, where f(ϕ) = F(ϕ)/2π can be considered as the angular thrust density.
The total thrust is then an integral of the thrust density, and it can be written as ℱ_q = N∫_0^2π∫_0^u_ s (ρ_L/u_ s)(df_q/dl) √(1+u^2) du dϕ. According to the electric sail force law of Eq. (<ref>), the thrust on a line segment dl is given as dF/dl = g_ϕξ v_⊥, where we have added the tether voltage modulation g_ϕ. The modulation is scaled to the maximum voltage with g_ϕ∈ [0,1]. We also assume for simplicity that the solar wind velocity is given as v = v_x e_x + v_z e_z. Its component perpendicular to the tether then reads as v_⊥ = v - ( v· e_∥) e_∥ = v - (v_x c_γ c_ϕ + v_z s_γ) e_∥, where the unit vector parallel to the tether is given by e_∥ = c_γ e_ρ + s_γ e_z as in Eq. (<ref>). Since e_ρ = c_ϕ e_x + s_ϕ e_y in the circular cylindrical coordinate system, the thrust components per line segment can be expressed as dF_x/dl = g_ϕξ[v_x - (v_x c_γ c_ϕ + v_z s_γ) c_γ c_ϕ], dF_y/dl = -g_ϕξ(v_x c_γ c_ϕ + v_z s_γ) c_γ s_ϕ, and dF_z/dl = g_ϕξ[v_z - (v_x c_γ c_ϕ + v_z s_γ) s_γ]. The next step is to integrate over the tether length, i.e., from zero to u_ s in terms of u. Using the shape of the sail tethers as given by Eq. (<ref>) with u_0 = u_ s, we determine the thrust to second order in u_ s. This can be accomplished by using any computer algebra system such as Maxima <cit.>, and the angular thrust density can be given as f_x = (g_ϕξ L/2π)[v_x - (1/2)v_z u_ s c_ϕ - v_x(1 - u_ s^2/3) c_ϕ^2], f_y = -(g_ϕξ L/2π)[(1/2)v_z u_ s s_ϕ + v_x(1 - u_ s^2/3) s_ϕ c_ϕ], and f_z = (g_ϕξ L/2π)[v_z(1 - u_ s^2/3) - (1/2)v_x u_ s c_ϕ]. Note that to obtain the total force on the entire sail, Eq. (<ref>) has to be integrated over the sail in ϕ for a given voltage modulation. In Sec. <ref>, this will be done for the modulation that results in torque-free sail dynamics.

§.§ Torque

By definition, the torque on a tether segment dl generated by the electric sail force Eq. (<ref>) is given as d𝔗_q/dl = g_ϕξ[ r× v_⊥]_q. Writing v_⊥ as in Eq.
(<ref>) and r = ρ c_ϕ e_x + ρ s_ϕ e_y + z e_z, the cross product r× v_⊥ can be calculated and the torque per line segment can be written as d𝔗_x/dl = g_ϕξ[ρ v_z s_ϕ - (v_x c_γ c_ϕ + v_z s_γ)(ρ s_γ - z c_γ) s_ϕ], d𝔗_y/dl = g_ϕξ[z v_x - ρ v_z c_ϕ + (v_x c_γ c_ϕ + v_z s_γ)(ρ s_γ - z c_γ) c_ϕ], and d𝔗_z/dl = -g_ϕξρ v_x s_ϕ. The angular torque density can then be obtained by integration over the tether length as in Eq. (<ref>), and it reads as τ_x = (g_ϕξ L^2/4π)[v_z(1 - u_ s^2/6) s_ϕ + (1/3) v_x u_ s c_ϕ s_ϕ], τ_y = (g_ϕξ L^2/4π)[(2/3)v_x u_ s - v_z(1 - u_ s^2/6) c_ϕ - (1/3)v_x u_ s c_ϕ^2], and τ_z = -(g_ϕξ L^2/4π) v_x(1 - u_ s^2/4) s_ϕ. Note that Eq. (<ref>) has to be integrated over the sail in ϕ for a given voltage modulation to obtain the total sail torque.

§ RESULTS

§.§ Torque-free sail dynamics

In order to find torque-free dynamics for the sail, we apply a modulation given as g_ϕ = 1 - a(1 ± c_ϕ), where ± corresponds to ±α. After integrating Eq. (<ref>), only the y component of the total torque is different from zero, and it can be expressed as 𝒯_y = (1/4)Nξ L^2[v_x u_ s - a(v_x u_ s∓ (v_z - (1/6) v_z u_ s^2))]. Setting 𝒯_y equal to zero, the amplitude a can be solved, and it is seen that with the modulation given in Eq. (<ref>) the sail dynamics is free of torque when a = -u_ stanα(1 + u_ stanα + 𝒪(u_ s^2)), where v_x/v_z is replaced with ±tanα. For a non-inclined (α = 0^∘) or fully planar (u_ s = 0) sail, the modulation amplitude vanishes and the efficiency equals 1, as no voltage modulation is needed for the sail attitude control. Otherwise, a portion of the available voltage is required for the sail control, which decreases the sail efficiency as shown in Fig. <ref>. Here the efficiency of the tether voltage modulation, and below the rest of the results, are shown as contour plots as functions of the sail angle and the ratio of the electric sail force to the centrifugal force as given in Eq. (<ref>). Note that the second order terms in Eq. (<ref>) and in the expressions below are given in Tab.
<ref> merely as estimates of the validity of the power series expansions; no geometric interpretation should be attached to these terms. As a comparison, for a rigid tether model without auxiliary tethers, the modulation amplitude equals 3tanΛtanα <cit.>, where Λ is the rigid tether coning angle. The percentage difference between these two models is shown in Fig. <ref>. In that model, the tethers are not mechanically coupled, and the tether angular velocity varies over the rotation phase, enhancing the amplitude of the voltage modulation. A model with rigid tethers and auxiliary tethers can also be considered (the sail then resembles the Asian conical hat). The analysis of such a model is similar to the one carried out in this paper, and the modulation amplitude for such a model equals 2tanΛtanα. It can be seen that both the mechanical coupling and the realistic tether shape increase the sail efficiency as shown by Eq. (<ref>).

§.§ Thrust vectoring

Using the voltage modulation (<ref>) in Eq. (<ref>), the total thrust can be integrated over the tethers in the case of the torque-free sail flight orientation determined by the sail angle α, giving ℱ_x = ∓(1/2) N ξ L vsinα(1 + u_ stanα + 𝒪(u_ s^2)) and ℱ_z = -N ξ L vcosα(1 + u_ stanα + 𝒪(u_ s^2)). The thrust components can then be rotated by the sail angle α to give the transverse and radial thrust components as ℱ_∥ = ±(1/4) N ξ L vsin 2α(1 + u_ stanα + 𝒪(u_ s^2)) and ℱ_⊥ = -N ξ L v(1 - (1/2)sin^2α)(1 + u_ stanα + 𝒪(u_ s^2)). Fig. <ref> shows the dimensionless transverse component of the sail thrust. Naturally, the transverse thrust is enhanced as the sail angle increases, reaching a maximum of about one fourth of the total electric sail force at α = 45^∘. As a comparison, the decay of the transverse thrust in k is somewhat slower than that of the single tether model. This is clarified in Fig.
<ref>, which shows the percentage difference in transverse thrust magnitudes between these two models. Finally, the tangent of the thrusting angle can be written as tanψ = ∓sin 2α/[2(2 - sin^2α)] (1 + 𝒪(u_ s^2)). It can be seen that the thrusting angle (Fig. <ref>) has only a weak dependence on the sail root coning tangent u_ s. Thus the thrusting angle can be computed by assuming that the sail is fully planar (tanψ = ∓sin 2α/(4 - 2sin^2α)).

§ DISCUSSION AND CONCLUSIONS

In this paper, we assumed that the solar wind is nominally flowing radially from the sun. This served the purposes of this paper, which were to estimate the effects of the actual sail shape on the efficiency of the sail control and on the thrust vectoring. When solar wind temporal variations are considered, the y component of the solar wind must be added to the sail torque components in order to write a complete rigid body simulation for the electric solar wind sail. Furthermore, the Euler equations also require the moments of inertia in addition to the torques given in this paper. However, both the general thrust components in the sail body frame and the moments of inertia can be obtained with reasonable effort by following the analysis of this paper, especially when using computer algebra. Such a complete Euler description of the electric solar wind sail can then be used, for example, to address the effects of solar wind variations on the sail navigation, on the spin rate control, and on the evolution of the sail orientation during maneuvers.

In this paper, we derived the equation of tether shape, solved it by a simple numerical iteration, and presented an analytical approximation for the single tether shape. Our approximation is parametrized by the tether root coning angle and the tether length. The latter is a free parameter, whereas the former depends both on the ratio of the electric sail force to the centrifugal force and on the sail angle with respect to the sun direction.
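The closed-form thrust-vectoring expressions above are easy to evaluate numerically. The following sketch (an illustrative check, not code from the paper) scans the sail angle and evaluates the fully planar thrusting angle tanψ = sin2α/(2(2 − sin²α)) and the leading-order dimensionless transverse thrust sin2α/4; it confirms the peak transverse thrust of one fourth at α = 45° and a maximum thrusting angle just below 20°, reached near α ≈ 55°.

```python
import math

def thrust_angle(alpha):
    """Fully planar thrusting angle: tan(psi) = sin(2a) / (2*(2 - sin(a)^2))."""
    return math.atan(math.sin(2.0*alpha)/(2.0*(2.0 - math.sin(alpha)**2)))

def transverse_thrust(alpha):
    """Dimensionless transverse thrust |F_par|/(N xi L v) = sin(2a)/4 to leading order."""
    return math.sin(2.0*alpha)/4.0

# scan sail angles from 0 to 90 degrees in 0.1-degree steps
angles = [math.radians(0.1*i) for i in range(901)]
psi_deg = [math.degrees(thrust_angle(a)) for a in angles]
psi_max = max(psi_deg)                       # just below 20 degrees
alpha_at_max = 0.1*psi_deg.index(psi_max)    # around 55 degrees
```

At α = 45° the formula reduces to tanψ = 1/3, i.e. ψ ≈ 18.4°, consistent with the statement that the thrusting angle is about 20° at sail angles above 45°.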
This ratio in turn depends on the tether voltage, the solar wind density and speed, the sail spin rate, and the total mass of the tether and remote unit combined. The sail coning angle at the spacecraft is essentially the tether root coning angle averaged over the tether locations in the sail rig. The resulting sail shape is such that the coning decreases and the sail surface tangential to the tethers approaches the sail spin plane towards the perimeter of the sail.

Having obtained the model for the sail, we derived expressions for the angular thrust and torque densities. Introducing a tether voltage modulation that results in torque-free sail dynamics, we solved for the amplitude of the modulation. This amplitude has to be reserved for the sail control, and correspondingly the voltage available for thrusting is less than the maximum designed voltage, decreasing the sail efficiency. We showed that this amplitude is 3 times smaller for the sail model introduced here than for that derived using a single tether model <cit.>. Finally, the total thrust on the sail was obtained for the torque-free sail motion. The transverse thrust is somewhat larger (up to about 10%) than that of the single rigid tether model. The reason is that a portion of the sail near its perimeter is nearly coplanar with the sail spin plane. The thrusting angle was shown to be essentially equal to that of a fully planar sail, being about 20^∘ at sail angles higher than 45^∘.

§ ACKNOWLEDGMENTS

This work was supported by the Academy of Finland grant 250591 and by the European Space Agency.

eka P. Janhunen, Electric sail for spacecraft propulsion, J. Propul. Power 20 (4) (2004) 763–764, http://dx.doi.org/10.2514/1.8580.rsi P. Janhunen, P. K. Toivanen, J. Polkko, S. Merikallio, P. Salminen, E. Haegström, H. Seppänen, R. Kurppa, J. Ukkonen, S. Kiprich, G. Thornell, H. Kratz, L. Richter, O. Krömer, R. Rosta, M. Noorma, J. Envall, S. Lätt, G. Mengali, A. A. Quarta, H. Koivisto, O. Tarvainen, T. Kalvas, J.
Kauppinen, A. Nuottajärvi, A. Obraztsov, Electric solar wind sail: Towards test missions, Rev. Sci. Instrum. 81 (11) (2010) 111301–111311, http://dx.doi.org/ 10.1063/1.3514548.janhunensandroos P. Janhunen, A. Sandroos, Simulation study of solar wind push on a charged wire: basis of solar wind electric sail propulsion, Ann. Geophys. 25 (3) (2007) 755–767, http://dx.doi.org/10.5194/angeo-25-755-2007.toka P. Janhunen, The electric sail - a new propulsion method which may enable fast missions to the outer solar system, J. British Interpl. Soc. 61 (8) (2008) 322–325.onekmtether H. Seppänen, T. Rauhala, S. Kiprich, J. Ukkonen, M. Simonsson, R. Kurppa, P. Janhunen and E. Hæggström, One kilometer (1 km) electric solar wind sail tether produced automatically, Rev. Sci. Instrum. 84 (2013) 095102, http://dx.doi.org/10.1063/1.4819795.fp7 ESAIL FP7 project deliverables [Retrieved on September 21, 2016] <http://www.electric-sailing.fi/fp7/fp7docs.html>.applications P. Janhunen, P. Toivanen, J. Envall, S. Merikallio, G. Montesanti, J. Gonzalez del Amo, U. Kvell, M. Noorma, S. Lätt, Overview of Electric Solar Wind Sail Applications, Proc. Estonian Acad. Sci. 63 (2S) (2014) 267–278, http://dx.doi.org/10.3176/proc.2014.2S.08.outersolarsystem A. A. Quarta and G. Mengali, Electric sail mission analysis for outer solar system exploration, J. Guid. Contr. Dyn. 33 (3) (2010) 740–755, http://dx.doi.org/10.2514/1.47006.nonkeplerian G. Mengali and A. A. Quarta, Non-Keplerian orbits for electric sails, Cel. Mech. Dyn. Astron. 105 (1) (2009) 179-195, http://dx.doi.org/10.1007/s10569-009-9200-y.eclipse P. Janhunen and P. Toivanen, Safety criteria for flying E-sail through solar eclipse, Acta Astronaut. 114 (2015) 1–5, http://dx.doi.org/10.1016/j.actaastro.2015.04.006.gravitytractor S. Merikallio and P. Janhunen, Moving an asteroid with electric solar wind sail, Astrophys. Space Sci. Trans. 6 (2010) 41–48, http://dx.doi.org/10.5194/astra-6-41-2010.impactor K. Yamaguchi and H. 
Yamakawa, Electric solar wind sail kinetic energy impactor for asteroid deflection missions, J of Astronaut. Sci. 63 (1) (2016) 1–22, http://dx.doi.org/10.1007/s40295-015-0081-x.pho A. A. Quarta and G. Mengali, Electric sail missions to potentially hazardous asteroids, Acta Astronaut. 66 (9) (2010),1506–1519, http://dx.doi.org/10.1016/j.actaastro.2009.11.021.ky26 A. Quarta, G. Mengali, and P. Janhunen, Electric Sail for a Near-Earth Asteroid Sample Return Mission: Case 1998 KY26, J. Aerosp. Eng. 27 (6) (2014) http://dx.doi.org/10.1061/(ASCE)AS.1943-5525.0000285.emmi P. Janhunen, S. Merikallio, and M. Paton, EMMI - Electric solar wind sail facilitated Manned Mars Initiative, Acta Astronaut. 113 (2015) 111–119, http://dx.doi.org/10.1016/j.actaastro.2015.03.029.controla P. Toivanen, P. Janhunen, Spin plane control and thrust vectoring of electric solar wind sail by tether potential modulation, J. Prop. Power 29 (1) (2013) 178–185, http://dx.doi.org/10.2514/1.B34330.catenary G. S. Carr, A synopsis of elementary results in pure mathematics, 2, Francis Hodgson, London, 1886, pp. 722.marsorbit G. Mengali, A. A. Quarta, P. Janhunen, Electric sail performance analysis, J. Spacecr. Rockets 45 (1) (2008) 122–129, http://dx.doi.org/10.2514/1.31769.rosetta A. A. Quarta, G. Mengali and P. Janhunen, Electric sail option for cometary rendezvous, Acta Astronaut. 127 (2016) 684–692, http://dx.doi.org/10.1016/j.actaastro.2016.06.020.navigation P. Toivanen and P. Janhunen, Electric sailing under observed solar wind conditions, Astrophys. Space Sci. Trans. 5 (2009) 61–69.maxima Maxima, a Computer Algebra System [Retrieved on September 21, 2016], <http://maxima.sourceforge.net>.
http://arxiv.org/abs/1703.08975v1
{ "authors": [ "Petri Toivanen", "Pekka Janhunen" ], "categories": [ "astro-ph.IM" ], "primary_category": "astro-ph.IM", "published": "20170327090539", "title": "Thrust vectoring of an electric solar wind sail with a realistic sail shape" }
On positive-definite ternary quadratic forms with the same representations over

Ryoko Oishi-Tomiyasu

Graduate School of Science and Engineering, Yamagata University / JST PRESTO, 990-8560, 1-4-12 Kojirakawa-cho, Yamagata-shi, Yamagata, Japan
E-mail: tomiyasu@imi.kyushu-u.ac.jp [Current affiliation: Institute of Mathematics for Industry (IMI), Kyushu University]

===================================================================================================

In any network, the interconnection of nodes by means of geodesics and the number of geodesics existing between nodes are important. There exists a class of centrality measures based on the number of geodesics passing through a vertex. Betweenness centrality measures how often a vertex appears on geodesics between other vertices. It has wide applications in the analysis of networks. For each n and k (n > 2k), the generalized Petersen graph GP(n,k) is a trivalent graph with vertex set { u_i, v_i | 0 ≤ i ≤ n-1 } and edge set { u_i u_i+1, u_i v_i, v_i v_i+k | 0 ≤ i ≤ n-1, subscripts reduced modulo n }. There are three kinds of edges, namely outer edges, spokes, and inner edges. The outer vertices generate an n-cycle called the outer cycle, and the inner vertices generate one or more inner cycles. In this paper, we consider GP(n,2) and find expressions for the number of geodesics and the betweenness centrality.

Keywords: Petersen graph, geodesics, wicket, Möbius strip, betweenness centrality, induced betweenness centrality.

§ INTRODUCTION

Generalized Petersen graphs were first defined by Watkins<cit.>, who was interested in trivalent graphs without proper 3-edge-colorings.
For integers n and k with 1 ≤ k < n/2, the generalized Petersen graph GP(n,k) has been defined as an undirected graph with vertex set V={u_0,u_1,…,u_n-1,v_0,v_1,…,v_n-1} and edge set E consisting of all pairs of the three forms (u_i,u_i+1), (u_i,v_i) and (v_i,v_i+k), where i is an integer and all subscripts are read modulo n. The above three forms of edges are called outer edges, spokes, and inner edges, respectively. In this original definition, GP(n,k) is a trivalent graph of order 2n and size 3n. It can be seen that when n is even and k = n/2 the resulting graph would not be cubic; moreover, because of the obvious isomorphism GP(n,k) ≅ GP(n,n-k), one may always assume k < n/2. In GP(n,k), there exist one outer cycle and one or more inner cycles. In this paper we may refer to even (odd) subscripted vertices as even (odd) vertices.

<cit.> If d denotes the greatest common divisor of n and k, then the set of inner edges generates a subgraph which is the union of d pairwise-disjoint (n/d)-cycles.

A graph is said to be vertex-transitive if its automorphism group acts transitively on the vertex set.

<cit.> GP(n, k) is vertex-transitive if and only if k^2 ≡ ± 1 (mod n), or n = 10 and k = 2.

The well known Petersen graph GP(5,2) is the smallest vertex-transitive graph which is not a Cayley graph<cit.>. It has many interesting properties and can serve as a counterexample for many conjectures<cit.>. The generalized Petersen graphs GP(n,1) are prisms, isomorphic to the Cartesian product C_n □ K_2. It can easily be seen that for even values of n, i.e., n=2k, GP(2k,2) is planar for each k. Again, Robertson<cit.> has shown that GP(n,2) is Hamiltonian unless n ≡ 5 (mod 6). A set of vertices of the form {v_i, u_i, u_i+1, …, u_i+m, v_i+m} in a generalized Petersen graph is called an m-wicket (or simply a wicket) <cit.>. The number of shortest paths, or geodesics, between two vertices u and v in a graph will be denoted by σ(u,v).

§ GENERALIZED PETERSEN GRAPH GP(N,2)

The graph GP(n,2) is defined for n ≥ 5.
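The definition above, together with the inner-cycle proposition, is easy to check computationally. The following Python sketch is illustrative only; the vertex labels (0..n−1 for the outer vertices u_i and n..2n−1 for the inner vertices v_i) are our own convention. It builds the adjacency structure of GP(n,k) and lists the lengths of the cycles generated by the inner edges alone.

```python
def petersen(n, k):
    """Adjacency sets of GP(n,k); label u_i as i and v_i as n+i (0 <= i < n)."""
    adj = {x: set() for x in range(2*n)}
    for i in range(n):
        for a, b in ((i, (i + 1) % n),            # outer edge u_i u_{i+1}
                     (i, n + i),                  # spoke u_i v_i
                     (n + i, n + (i + k) % n)):   # inner edge v_i v_{i+k}
            adj[a].add(b)
            adj[b].add(a)
    return adj

def inner_cycle_lengths(n, k):
    """Lengths of the pairwise-disjoint cycles generated by the inner edges alone."""
    seen, lengths = set(), []
    for start in range(n):
        if start in seen:
            continue
        j, c = start, 0
        while j not in seen:
            seen.add(j)
            j = (j + k) % n
            c += 1
        lengths.append(c)
    return lengths
```

For example, inner_cycle_lengths(10, 2) gives two 5-cycles while inner_cycle_lengths(11, 2) gives a single 11-cycle, in line with the proposition: d = gcd(n,k) disjoint (n/d)-cycles.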
GP(n,2) contains either one or two inner cycles according as n is odd or even (see Fig. <ref>). If n is odd, the single inner cycle contains the even vertices followed by the odd vertices, and when n is even there are two inner cycles, the cycle of even vertices and the cycle of odd vertices, each having n/2 vertices (see Fig. <ref>). Two inner vertices v_i and v_j are consecutive if i-j ≡ ±1 (mod n) and adjacent if i-j ≡ ±2 (mod n). If n is odd, say n=2k+1, then for each vertex u_i there are two eccentric vertices u_i±k in the outer cycle as diametric vertices. The vertices at equal distances on either side of u_i are equivalent with respect to the metric. When k is increased by one, the eccentric pair moves one step farther away from u_i. If n is even, say n=2k, then these diametric vertices coincide in the single vertex u_i+k. GP(n,2) when n is odd can be viewed as a Möbius strip with the outer vertices lying in the middle and the inner vertices lying on the border. If n=2k+1, it is easy to see that there are k odd vertices lying on the upper edge and k+1 even vertices lying on the lower edge of the strip (see Fig. <ref>). Moving along the middle of the Möbius strip, the outer vertices come in a regular manner as u_0,u_1,u_2,…,u_2k, and along the border of the strip, the odd vertices follow the even vertices. One move along an edge on the border of the strip covers as much ground as two moves along edges in the middle.
Hence shortest paths prefer edges along the border as the number of vertices increases.

§ GEODESICS IN GP(N,2)

§.§ Geodesics in GP(n,2) when n is odd

§.§.§ Number of geodesics between a pair of vertices in the outer cycle of GP(n,2) when n is odd

In GP(n,2) where n=2k+1, by symmetry we consider the vertices u_0 and u_r where 1 ≤ r ≤ k, k ≥ 2, and any shortest path joining them is denoted by P(u_0,u_r).

In GP(2k+1,2), k ≥ 2, for the vertices u_0 and u_r in the outer cycle, 1 ≤ r ≤ k, there is always a geodesic joining them contained in the outer cycle for r ≤ 5.

By symmetry, we consider r with 1 ≤ r ≤ k. The claim is obvious for r=1, since (u_0,u_1) is an outer edge. For r=2, the only geodesic is the one joining {u_0,u_1,u_2}, since any path intersecting the inner cycle contains two spokes and at least one inner edge. For r=3, the only geodesic is the one joining {u_0,u_1,u_2,u_3}: since the inner vertices v_0 and v_3 are non-adjacent, any path through the inner cycle has length at least 4, joining either {u_0,u_1,v_1,v_3,u_3}, {u_0,v_0,v_2,u_2,u_3}, or {u_0,v_0,v_5,v_3,u_3} in the case k=3. For r=4, there are two geodesics of length 4, one joining {u_0,u_1,u_2,u_3,u_4} and the other joining {u_0,v_0,v_2,v_4,u_4}. For r=5 (<k), there are three geodesics, joining {u_0,u_1,u_2,u_3,u_4,u_5}, {u_0,u_1,v_1,v_3,v_5,u_5}, and {u_0,v_0,v_2,v_4,u_4,u_5}, each of length 5. When r=5 (=k), u_0 and u_5 become a diametric pair, and hence there are four geodesics between u_0 and u_5, including the one passing through the inner cycle in the reverse direction.
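The geodesic counts in the lemma above can be verified by exhaustive breadth-first search. The sketch below is an illustrative check (the vertex labels 0..n−1 for u_i and n..2n−1 for v_i are our own convention); it counts shortest paths from a source by the standard BFS path-counting recurrence.

```python
from collections import deque

def petersen(n, k):
    """Adjacency sets of GP(n,k); label u_i as i and v_i as n+i."""
    adj = {x: set() for x in range(2*n)}
    for i in range(n):
        for a, b in ((i, (i + 1) % n), (i, n + i), (n + i, n + (i + k) % n)):
            adj[a].add(b)
            adj[b].add(a)
    return adj

def sigma_all(adj, s):
    """Distances and numbers of geodesics from s to every vertex (BFS counting)."""
    dist, cnt, q = {s: 0}, {s: 1}, deque([s])
    while q:
        x = q.popleft()
        for y in adj[x]:
            if y not in dist:             # first time y is reached
                dist[y] = dist[x] + 1
                cnt[y] = cnt[x]
                q.append(y)
            elif dist[y] == dist[x] + 1:  # another geodesic into y
                cnt[y] += cnt[x]
    return dist, cnt
```

In GP(13,2) (k=6) this gives σ(u_0,u_r) = 1, 1, 1, 2, 3 for r = 1,…,5, and in GP(11,2) (k=5) it gives σ(u_0,u_5) = 4, matching the r=5 cases in the proof above.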
Since r is even, v_0 and v_r are even vertices and they lie ona unique shortest path of length r/2 contained in the inner cycle. Considering the spokes at v_0 and v_r, the length of the path P(u_0,u_r) becomes r/2+2<r for r>5. Thus P(u_0,u_r) passes through the inner vertices and the spokes at u_0 and u_r for r>5.In GP(2k+1,2), for odd r,there are two geodesics between u_0 and u_r for 5<r<k and three for r=k allpassing through the inner vertices. When r is odd and 5<r<k, it can be easily seen that P(u_0,u_r) contains exactly two spokes either at u_0 and u_r-1 or at u_1 and u_r having length r-1/2+3. Otherwise r<r-1/2+3, a contradiction. When r is odd, the distance between v_0 and v_r in the inner cycle is k-r-1/2.When r=k, including the spokes at u_0 and u_r, the distance becomes r-1/2+3. Thus there are three geodesics between u_0 and u_r when r=k. In GP(2k+1,2), k≥ 2, for the vertices u_0 and u_rin the outer cycle,1≤ r ≤ k,there is no geodesic joining them contained in the outer cycle for r> 5.If u_i and u_j are any two distinct vertices in the outer cycle of GP(2k+1,2) where |i-j|=r≤ k, then the number of geodesics σ(u_i,u_j) between u_i and u_jis given by σ(u_i,u_j)=1for r=1,2,32for r=4 3for r=5; r<k4for r=5; r=k1for r=6,8,10,…2for r=7,9,11,…; r<k3for r=7,9,11,…; r=k By symmetry, we consider r≤ k. When n is odd, there exists only one inner cycle in GP(n,2). There is a unique geodesic between u_i and u_i+r for r=1,2,3 lying on the outer cycle (Lemma <ref>) and two geodesics joining u_i and u_i+4 namely, {u_i,u_i+1,u_i+2,u_i+3,u_i+4} and {u_i,v_i,v_i+2,v_i+4,u_i+4}When r=5(=k), the vertices u_i and u_i+5 becomes diametric pair on the outer cycle and hence there are four geodesicsjoining them namely, {u_i,u_i+1,u_i+2,u_i+3,u_i+4,u_i+5}lying on the outer cycle and {u_i,u_i+1,v_i+1,v_i+3,v_i+5,u_i+5}, {u_i,v_i,v_i+2, v_i+4,u_i+4,u_i+5}, through inner cycle in the forward direction and {u_i,v_i,v_i-2,v_i-4,v_i-6,u_i-6(=u_i+5)} in the reverse direction. 
But when r<k, the path in the reverse direction is not a geodesic. Therefore there are only three geodesics between u_i and u_i+5 when r<k. When r>5, all geodesics pass through the inner cycle and no geodesic lies entirely on the outer cycle. When r (>5) is even, u_i and u_i+r have the same parity and hence there exists only one geodesic joining u_i and u_i+r, namely {u_i,v_i,v_i+2,v_i+4,…,v_i+r,u_i+r}. When r is odd and 5<r<k, there are two geodesics, namely {u_i,u_i+1,v_i+1,v_i+3,…,v_i+r,u_i+r} and {u_i,v_i,v_i+2,v_i+4,…,v_i+r-1,u_i+r-1,u_i+r}. When r=k, there is one more geodesic in the reverse direction, i.e., {u_i,v_i,v_i-2,v_i-4,…,v_i-r-1,u_i-r-1(=u_i+r)}.

§.§.§ Number of geodesics between a pair of vertices in the inner cycle of GP(n,2) when n is odd

If v_i and v_j are any two distinct vertices in the inner cycle of GP(2k+1,2) where |i-j|=r ≤ k, then the number of geodesics σ(v_i,v_j) between v_i and v_j is given by

σ(v_i,v_j) =
 1 for even r
 1 for odd r, r>k-2
 (r+1)/2 for odd r, r<k-2
 (r+3)/2 for odd r, r=k-2

When n is odd, there exists only one inner cycle in GP(n,2), containing all even vertices followed by all odd vertices. When n=2k+1, there are k+1 even vertices and k odd vertices in the inner cycle. Consider two consecutive inner vertices v_i and v_i+1. Since v_i and v_i+1 are non-adjacent inner vertices, there is a unique geodesic between them, passing through the 1-wicket {v_i,u_i,u_i+1,v_i+1} and containing two spokes and an outer edge. When r is even, both v_i and v_i+r have the same parity, and therefore there is a unique geodesic P(v_i,v_i+r) of length r/2, namely {v_i,v_i+2,v_i+4,…,v_i+r}. When r is odd, r<k-2, the vertices v_i and v_i+r have opposite parity, and a geodesic from v_i to v_i+r passes through the 1-wicket at any one of the consecutive pairs (v_i,v_i+1), (v_i+2,v_i+3), …, (v_i+r-1,v_i+r); thus there exist (r+1)/2 geodesics joining v_i and v_i+r, each of length (r-1)/2+3.
When r=k-2, the pair (v_i,v_i+r) lies sufficiently far apart that there is one more geodesic in the reverse direction. When r>k-2, only the geodesic in the reverse direction exists.

§.§.§ Number of geodesics between a pair of vertices in the outer and inner cycle of GP(n,2) when n is odd

If u_i and v_j are any two vertices in the outer and inner cycles respectively of GP(2k+1,2) where |i-j|=r≤ k, then the number of geodesics σ(u_i,v_j) between u_i and v_j is given by

σ(u_i,v_j) =
 1 for r<k
 1 for even r; r=k
 2 for odd r; r=k

When n is odd, there exists only one inner cycle in GP(n,2), containing all even vertices followed by all odd vertices. Consider u_i and v_i+r. If both vertices have the same parity, there is a unique geodesic of length r/2+1 passing through the spoke (u_i,v_i) and joining v_i to v_i+r along the inner cycle. Otherwise the geodesic passes through the outer edge (u_i,u_i+1) and the spoke (u_i+1,v_i+1), and joins v_i+1 to v_i+r along the inner cycle; it is of length (r-1)/2+2. In the extreme case r=k, there is one more geodesic in the reverse direction passing through the spoke (u_i,v_i).

§.§ Geodesics in GP(n,2) when n is even

§.§.§ Number of geodesics between a pair of vertices in the outer cycle of GP(n,2) when n is even

If u_i and u_j are any two distinct vertices in the outer cycle of GP(2k,2), k≥ 3, where |i-j|=r≤ k, then the number of geodesics σ(u_i,u_j) between u_i and u_j is given by

σ(u_i,u_i+r) =
 1 for r=1,2,3; r<k
 2 for r=3; r=k
 2 for r=4; r<k
 4 for r=4; r=k
 3 for r=5; r<k
 6 for r=5; r=k
 1 for r=6,8,10,…; r<k
 2 for r=6,8,10,…; r=k
 2 for r=7,9,11,…; r<k
 4 for r=7,9,11,…; r=k

Since n is even, there are two inner cycles: the cycle of even vertices and the cycle of odd vertices.
When r=1,2,3 and r<k, there is a unique geodesic between u_i and u_i+r lying on the outer cycle, and in the extreme case, i.e., when r=3 and r=k, the outer cycle itself yields two geodesics, one on either side. When r=4 and r<k, there are two geodesics of length 4, one over the outer cycle and the other over the inner cycle of even or odd vertices according as i is even or odd, by means of the two spokes at the given vertices. In the extreme case there are two more geodesics passing over the outer cycle and the inner cycle in the reverse direction. When r=5 and r<k, there are 3 geodesics of length 5: one lying on the outer cycle and the others passing through the spokes either at u_i+1 and u_i+5 or at u_i and u_i+4. In the extreme case r=k, there are 3 more geodesics passing over the outer and inner cycles in the reverse direction. When r=6,8,… and r<k, there is no geodesic passing over the outer cycle. Since v_i and v_i+r lie on the same inner cycle, there is a geodesic joining them of length r/2. Therefore there is a unique geodesic of length r/2+2 joining u_i and u_i+r when r=6,8,…; r<k. In the extreme case there is one more geodesic passing over the inner cycle in the reverse direction. When r=7,9,… and r<k, v_i and v_i+r lie on different inner cycles and therefore there are two geodesics, one joining the vertices {u_i,u_i+1,v_i+1,v_i+3,…,v_i+r,u_i+r} and the other joining the vertices {u_i,v_i,v_i+2,v_i+4,…,v_i+r-1,u_i+r-1,u_i+r}. In the extreme case, reversing the direction over the outer and inner cycles, there are two more geodesics.

§.§.§ Number of geodesics between a pair of vertices in the inner cycle of GP(n,2) when n is even

If v_i and v_j are any two distinct vertices in the inner cycle of GP(2k,2), k≥ 3, where |i-j|=r≤ k, then the number of geodesics σ(v_i,v_j) between v_i and v_j is given by

σ(v_i,v_j) =
 1 for even r; r<k
 2 for even r; r=k
 (r+1)/2 for odd r; r<k
 r+1 for odd r; r=k

When r is even and r<k, both the vertices v_i and v_i+r lie on the same inner cycle.
Therefore, there exists a unique geodesic lying on the same inner cycle. In the extreme case, there is one more geodesic on the reverse side of the inner cycle. When r is odd and r<k, the vertices lie on different inner cycles and hence are joined by a wicket, containing two spokes and an outer edge, at any one of the vertices v_i,v_i+2,…,v_i+r-1. Hence there are (r+1)/2 geodesics. In the extreme case the above argument can be repeated along the reverse side of the inner cycle.

§.§.§ Number of geodesics between a pair of vertices in the outer and inner cycle of GP(n,2) when n is even

If u_i and v_j are any two vertices in the outer and inner cycles respectively of GP(2k,2), k≥ 3, where |i-j|=r≤ k, then the number of geodesics σ(u_i,v_j) between u_i and v_j is given by

σ(u_i,v_j) =
 1 for r<k
 2 for r=k

When r is even and r<k, both v_i and v_i+r are either even or odd and hence there is a unique geodesic joining the spoke (u_i,v_i) to v_i+r. In the extreme case there is one more geodesic along the opposite side of the same inner cycle. When r is odd and r<k, u_i and v_i+r lie on different inner cycles, so there is a unique geodesic passing through {u_i,u_i+1,v_i+1}. In the extreme case there is one more geodesic lying in the opposite direction. It can be seen that in GP(n,2) when n is even, the outer and inner cycles are even and hence the number of geodesics in each of the extreme cases doubles.
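The counting lemmas above can be cross-checked by brute force. The following sketch (the vertex labels and helper names are ours) assumes the standard GP(n,k) construction: an outer n-cycle u_0…u_n-1, spokes u_i v_i, and inner edges v_i v_(i+2 mod n). It counts shortest paths by layered BFS:

```python
from collections import deque

def gp_n2(n):
    """Adjacency list of the generalized Petersen graph GP(n, 2):
    outer n-cycle u_0..u_{n-1}, spokes u_i v_i, inner edges v_i v_{i+2 mod n}."""
    adj = {('u', i): set() for i in range(n)}
    adj.update({('v', i): set() for i in range(n)})
    for i in range(n):
        for a, b in ((('u', i), ('u', (i + 1) % n)),   # outer cycle
                     (('u', i), ('v', i)),             # spoke
                     (('v', i), ('v', (i + 2) % n))):  # inner step-2 edge
            adj[a].add(b)
            adj[b].add(a)
    return adj

def count_geodesics(adj, s, t):
    """Return (distance, number of shortest s-t paths) via BFS path counting."""
    dist, paths = {s: 0}, {s: 1}
    q = deque([s])
    while q:
        x = q.popleft()
        for y in adj[x]:
            if y not in dist:              # first time reached: next BFS layer
                dist[y], paths[y] = dist[x] + 1, paths[x]
                q.append(y)
            elif dist[y] == dist[x] + 1:   # another shortest route into y
                paths[y] += paths[x]
    return dist[t], paths[t]

# n = 13 (odd, k = 6): sigma(u_0,u_4) = 2, sigma(u_0,u_5) = 3 (r = 5 < k),
# and sigma(u_0,u_6) = 1 (even r), matching the lemma for odd n.
adj = gp_n2(13)
print([count_geodesics(adj, ('u', 0), ('u', r)) for r in (4, 5, 6)])
```

The same helper also checks the inner-cycle counts, e.g. σ(v_0,v_3)=(3+1)/2=2 for n=13.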
§ DISTANCE BETWEEN TWO VERTICES IN GP(N,2)

The distance between a pair of vertices (u_i,u_j), (v_i,v_j) and (u_i,v_j) in GP(n,2) is given by

d(u_i,u_j) =
 r for r≤ 5
 (r+4)/2 for even r; r>5
 (r+5)/2 for odd r; r>5

d(v_i,v_j) =
 r/2 for even r
 (r+5)/2 for odd r

d(u_i,v_j) =
 (r+2)/2 for even r
 (r+3)/2 for odd r

where |i-j|=r≤⌊(n-1)/2⌋.

For r≤⌊(n-1)/2⌋, from any vertex u_i to u_i+r (or u_i-r) with r≤ 5 there is a geodesic of length r lying on the outer cycle. When r is even and r>5, there is a geodesic of length r/2+2 joining u_i and u_i+r passing through {v_i,v_i+2,…,v_i+r} and the two spokes at u_i and u_i+r. When r is odd, there is a geodesic of length (r-1)/2+3 joining u_i and u_i+r passing through {u_i+1,v_i+1,v_i+3,…,v_i+r-1,u_i+r-1}. If both v_i and v_i+r are either odd or even, there is a unique geodesic of length r/2 joining them along the inner cycle; if not, there is a geodesic {v_i,u_i,u_i+1,v_i+1,v_i+3,…,v_i+r} of length (r-1)/2+3. If r is even, u_i and v_i+r are both either odd or even and there is a geodesic of length r/2+1; otherwise there is a geodesic of length (r-1)/2+2 including a spoke and an outer edge.

The diameter of GP(n,2), when n≥ 8, is given by

diam GP(n,2) = ⌊(n-1)/4⌋+2

§ BETWEENNESS CENTRALITY

Betweenness centrality<cit.> measures the relative importance of a vertex in a graph. A vertex is said to be central if it can effectively monitor the communication between vertices; betweenness describes how a vertex acts as a bridge among all pairs of vertices. The betweenness centrality of a vertex x is the sum of the fractions of all-pairs shortest paths that pass through x. It has wide applications in the analysis of networks<cit.>.

Betweenness centrality of a vertex in a graph <cit.>. Let G be a graph and x∈ V(G); then the betweenness centrality of x in G, denoted by B_G(x) or simply B(x), may be defined as

B_G(x) = ∑_s,t ∈ V(G)∖{x}σ_st(x)/σ_st

where σ_st(x) denotes the number of shortest s-t paths in G passing through x and σ_st the number of shortest s-t paths in G.
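A direct implementation of this definition is a useful companion to the formulas that follow. The sketch below (helper names are ours) uses exact rationals and the standard identity σ_st(x) = σ_sx · σ_xt whenever d(s,x)+d(x,t)=d(s,t):

```python
from collections import deque
from fractions import Fraction

def bfs_counts(adj, s):
    """Distances and shortest-path counts from source s (BFS)."""
    dist, paths = {s: 0}, {s: 1}
    q = deque([s])
    while q:
        x = q.popleft()
        for y in adj[x]:
            if y not in dist:
                dist[y], paths[y] = dist[x] + 1, paths[x]
                q.append(y)
            elif dist[y] == dist[x] + 1:
                paths[y] += paths[x]
    return dist, paths

def betweenness(adj, x):
    """B(x) summed over unordered pairs {s,t}, s,t != x, using
    sigma_st(x) = sigma_sx * sigma_xt when d(s,x) + d(x,t) = d(s,t)."""
    dist_x, paths_x = bfs_counts(adj, x)
    nodes = [v for v in adj if v != x]
    total = Fraction(0)
    for i, s in enumerate(nodes):
        dist_s, paths_s = bfs_counts(adj, s)
        for t in nodes[i + 1:]:
            if dist_s[x] + dist_x[t] == dist_s[t]:
                total += Fraction(paths_s[x] * paths_x[t], paths_s[t])
    return total

def path_graph(n):
    """Path on vertices 1..n."""
    adj = {i: set() for i in range(1, n + 1)}
    for i in range(1, n):
        adj[i].add(i + 1)
        adj[i + 1].add(i)
    return adj

def cycle_graph(n):
    """Cycle on vertices 0..n-1."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        adj[i].add((i + 1) % n)
        adj[(i + 1) % n].add(i)
    return adj
```

For example, in the path P_5 the middle vertex lies on the unique geodesic of each of the 2×2 pairs {1,2}×{4,5}, so betweenness(path_graph(5), 3) equals 4.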
The ratio σ_st(x)/σ_st is called the pair dependency of {s,t} on x, denoted by δ_G(s,t,x). We may now define the following terms related to betweenness centrality.

Let G be a graph and H a subgraph of G. Let x∈ V(H); then the betweenness centrality of x in H, denoted by B_H(x), may be defined as

B_H(x) = ∑_s,t∈ V(H)∖{x}σ^H_st(x)/σ^H_st

where σ^H_st(x) and σ^H_st denote the number of shortest s-t paths passing through x and the number of shortest s-t paths, respectively, with all their vertices in H.

Let G be a graph and H a subgraph of G. Let x∈ V(G); then the betweenness centrality of x induced by H, denoted by B(x,H), may be defined as

B(x,H) = ∑_s,t(≠ x)∈ V(H)σ_st(x)/σ_st

Let G be a graph and S a subset of V(G). Let x∈ V(G); then the betweenness centrality of x induced by S, denoted by B(x,S), may be defined as

B(x,S) = ∑_s,t(≠ x)∈ Sσ_st(x)/σ_st

Let G be a graph and x,x_0 ∈ V(G); then the betweenness centrality of x induced by x_0 in G, denoted by B_G(x,x_0) or simply B(x,x_0), is defined by

B_G(x,x_0) = ∑_t∈ V(G)∖ xσ_x_0t(x)/σ_x_0t

It can easily be seen that in any graph G, the betweenness centrality induced by a vertex on its extreme vertex or an end vertex is zero.

B(x_i,x_j) = 0 for the complete graph K_n.

Let P_n be a path on n vertices {x_1,…,x_n}; then

B(x_i,x_j) =
 i-1 if i<j
 n-i if j<i

If C_n is a cycle on n vertices {x_0,…,x_n-1}, then for even n,

B(x_i,x_0) =
 (n-1-2i)/2 if 1≤ i<n/2
 0 if i=n/2

and for odd n,

B(x_i,x_0) = (n-1-2i)/2 if 1≤ i≤ (n-1)/2

By symmetry, B(x_i,x_0)=B(x_n-i,x_0).

For a star S_n with central vertex x_0: B(x_i,x_0) = 0, B(x_0,x_i) = n-2, B(x_i,x_j) = 0 for i,j≠ 0.

For a wheel W_n, n>5, with central vertex x_0: B(x_i,x_0) = 0, B(x_0,x_i) = n-5, B(x_i,x_i±1) = 1/2, B(x_i,x_i±j) = 0 for j≥ 2.

Let G be a graph and x_i∈ V(G); then

B_G(x_i) = 1/2∑_j≠ i B_G(x_i,x_j)

Let G be a graph and x∈ V(G). Let S, T be two disjoint subsets of V(G); then the betweenness centrality of x induced by S and T, denoted by B(x,S,T), may be defined as

B(x,S,T) = B(x,S)+B(x,T)

Let G be a graph and x∈ V(G).
Let S,T be two disjoint subsets of V(G) where s(≠ x)∈ S and t(≠ x)∈ T; then the betweenness centrality of x induced by S and T, one against the other, denoted by B(x,S|T), may be defined as

B(x,S|T) = ∑_s∈ S, t∈ Tσ_st(x)/σ_st

§ BETWEENNESS CENTRALITY OF A VERTEX IN GP(N,2)

Let us consider the betweenness centrality of GP(n,2), whose vertices lie on two vertex-transitive subgraphs, namely the outer cycle generated by U={u_0,u_1,…,u_n-1} and the inner cycle generated by V={v_0,v_1,…,v_n-1}.

§.§ Betweenness centrality of an outer vertex in GP(n,2)

The betweenness centrality of an outer vertex u in GP(n,2) is given by

B(u) =
 (5n+1)/4 for n=13,17,21,…
 (15n^2+32n-79)/(12(n+1)) for n=15,19,23,…
 (5n+14)/4 for even n, n≥ 12

The betweenness centrality of a vertex in G is the sum of the betweenness centralities induced by U, V and U vs V, determined in Lemmas <ref>-<ref>.

For any vertex u_0 in the outer cycle of GP(n,2), n≥ 6, there are 10 pairs of outer vertices such that for each pair there is a geodesic lying on the outer cycle with u_0 as an internal vertex. Moreover, these pairs contribute the value 6.5 to its betweenness centrality.

Consider GP(n,2), n≥ 6. For any vertex u_0∈ U, by Lemma <ref> it can easily be seen that there exists a geodesic joining u_-1 to u_r and u_-2 to u_s, where r=1,2,3,4 and s=2,3, lying entirely on the outer cycle, contributing 1, 1, 1/2, 2/3 and 1/2, 1/3 respectively to the betweenness centrality of u_0. Hence, by the symmetry of the metric, there are 10 pairs of vertices with total contribution 13/2.

In GP(n,2), the betweenness centrality of an outer vertex u_0 induced by the outer cycle is given by

B(u_0,U) =
 (n+13)/4 for n=13,17,21,…
 (3n+41)/12 for n=15,19,23,…
 (n+14)/4 for even n, n≥ 12

Consider an outer vertex u_0∈ U in GP(n,2) for n≥ 12. First we take all possible U-U pairs of outer vertices and find their contributions to B(u_0). By Lemma <ref> the outer cycle contains 10 geodesics passing through u_0, contributing 13/2.
When n=2k+1, for even k, k≥ 8, the pair (u_-1,u_r), r=6,8,…,k-2, contributes 1/2 for each r, and when k is incremented there is one more pair (u_-1,u_k-1) with contribution 1/3. Therefore, by symmetry, the outer pairs contribute the sum (k+7)/2 when k is even and (3k+22)/6 when k is odd. When n=2k, for even k, k≥ 8, the pair (u_-1,u_r), r=6,8,…,k-2, contributes 1/2 for each r, and when k is incremented there is one more pair (u_-1,u_k-1) with contribution 1/4. Hence, by symmetry, it can be seen that the outer pairs contribute (k+7)/2 for k even or odd.

In GP(n,2), the betweenness centrality of an outer vertex u_0 induced by the inner cycle is given by

B(u_0,V) =
 (n-5)/2 for n=13,17,21,…
 (n^2-2n-19)/(2(n+1)) for n=15,19,23,…
 n/2 for even n, n≥ 12

Consider the possible V-V pairs of inner vertices. When n=2k+1, for even k, k≥ 6, the pair (v_0,v_r) for r=1,3,…,k-3 contributes the betweenness centrality 1/t, where t=(r+1)/2. By the symmetry of the metric, there are 2t similar pairs with a total contribution 2 for each r. Since there are (k-2)/2 such (v_0,v_r) pairs, the total contribution of the V-V pairs is k-2. When k is incremented, there is one new pair (v_0,v_r) with r=k-2, with contribution 1/t where t=(k+1)/2, and there are k-1 similar pairs. Therefore, by symmetry, the inner pairs contribute the sum (k^2-5)/(k+1). Consider the case n=2k: for even k, k≥ 6, the inner pairs (v_0,v_r) for r=1,3,…,k-1 contribute 1/t, where t=(r+1)/2. Because of symmetry, each pair (v_0,v_r) belongs to a set of 2t similar pairs giving 2, and these sets contribute a total of k to the centrality of u_0. When k is incremented, the leading pair (v_0,v_r) for r=k gives 1/t with t=(k+1)/2, so the total contribution is again k.

In GP(n,2), the betweenness centrality of an outer vertex u_0 induced by the vertices of the outer vs inner cycle is given by

B(u_0,U|V) =
 (n-1)/2 for odd n, n≥ 13
 n/2 for even n, n≥ 12

Consider the possible U-V pairs.
When n=2k+1, for even k, k≥ 6, the pair (u_1,v_-r) for r=0,2,4,…,k-2 contributes 1. When k is incremented, the leading pair (u_1,v_-r) makes the contribution 1/2 for r=k-1. Therefore, by symmetry, in either case the total contribution is found to be k. A similar argument holds for n=2k.

§.§ Betweenness centrality of an inner vertex in GP(n,2)

The betweenness centrality of an inner vertex v in GP(n,2) is given by

B(v) =
 (n^2-n-26)/4 for n=13,17,21,…
 (3n^3-83n+16)/(12(n+1)) for n=15,19,23,…
 (n+5)(n-6)/4 for even n, n≥ 12

The betweenness centrality of v∈ V is the sum of its betweenness centralities induced by the subsets U, V and U vs V in G, determined in Lemmas <ref>-<ref>.

In GP(n,2), the betweenness centrality of an inner vertex v_0 induced by the outer cycle is given by

B(v_0,U) =
 (n^2+6n-127)/16 for n=13,17,21,…
 (3n^2+18n-377)/48 for n=15,19,23,…
 (n^2+6n-128)/16 for even n, n≥ 14

Consider all outer pairs (u_i,u_j) such that v_0 lies on at least one geodesic joining them. Let d=d(u_i,u_j); then clearly 4≤ d≤ D, where D=diam(G). Let B_d(v_0) denote the total contribution of those pairs at distance d towards the betweenness centrality B(v_0) of v_0. Consider n=2k+1 where k=2l, l≥ 3. For d=4 there exist 3 pairs of 1/2, and for d=5 there exist 6 pairs of 1/3 and 4 pairs of 1. For 6≤ d≤ D, there exist 2d-4 pairs of 1/2 and d-1 pairs of 1, giving the sum 2d-3. Therefore,

B(v_0,U) = ∑_d=4^D B_d = 15/2+∑_d=6^D(2d-3) = (n^2+6n-127)/16

In the case n=2k+1, when k is incremented from odd to even, the leading diametric pairs (u_i,u_j), i.e., vertices at distance D=(n+9)/4, contain 2 pairs of 2/3 and 3(n-3)/4 pairs of 1/3, and hence there is an extra contribution (3n+7)/12. Therefore,

B(v_0,U) = (n^2+2n-135)/16+(3n+7)/12 = (3n^2+18n-377)/48

Consider the case n=2k where k=2l, l≥ 3; then D=(n+8)/4, and when d=D there are (3n-4)/4 pairs of 1/2 and a single pair of 1, giving the sum (3n+4)/8.
Therefore,

B(v_0,U) = 15/2+∑_d=6^D-1(2d-3)+(3n+4)/8 = (n^2+6n-128)/16

Consider the case n=2k where k=2l+1, l≥ 3; then D=(n+10)/4, and when d=D there are (n-2)/2 pairs of 1/4 and one diametric pair of 2/4, giving the sum (n+2)/8. Therefore,

B(v_0,U) = 15/2+∑_d=6^D-1(2d-3)+(n+2)/8 = (n^2+6n-128)/16

In GP(n,2), the betweenness centrality of an inner vertex v_0 induced by the inner cycle is given by

B(v_0,V) =
 (n^2-6n+21)/16 for n=13,17,21,…
 (n^3-5n^2+3n+137)/(16(n+1)) for n=15,19,23,…
 (n-2)(n-4)/16 for even n, n≥ 12

We consider all inner pairs (v_i,v_j) such that v_0 lies on their shortest paths. Let d=d(v_i,v_j) and D=diam(G); then clearly 2≤ d≤ D. Let B_d(v_0) denote the total contribution of pairs at distance d towards the betweenness centrality B(v_0) of v_0. Let n=2k+1 where k=2l, l≥ 3; then D=(n+7)/4. For d=2 there exists a pair contributing 1, and for 2≤ d≤ D-1 we have B_d(v_0)=2d-4. Therefore,

B(v_0,V) = ∑_d=2^D-1 B_d = 1+∑_d=3^D-1(2d-4) = (n^2-6n+21)/16

In the case n=2k+1 where k=2l+1, l≥ 3, we have D=(n+9)/4 and

B(v_0,V) = ∑_d=2^D-1 B_d = 1+∑_d=3^D-2(2d-4)+(D-4)+4(D-2)^-1 = (n^3-5n^2+3n+137)/(16(n+1))

Consider n=2k where k=2l, l≥ 3. There are two inner k-cycles. Since v_0 lies on the k-cycle subscripted with even numbers, we need not consider the pairs (v_i,v_j) with odd subscripts i and j. The even-subscripted pairs (v_i,v_j) give the betweenness centrality (k-2)^2/8. Now for j=2,4,6, etc., the pair (v_-1,v_j) gives 1/2,1/3,…,1/l; (v_-3,v_j) gives 2/3,2/4,…,2/l; and finally (v_-(k-3),v_2) gives (l-1)/l. Considering the vertices of these two inner k-cycles, B_d(v_0)=d-3 for 4≤ d≤ D, where D=k/2+2. Therefore,

B(v_0,V) = ∑_d=4^D B_d+(k-2)^2/8 = (n-2)(n-4)/16

In the case n=2k where k=2l+1, l≥ 3, the vertices of the same cycle contribute (k-1)(k-3)/8 to v_0, and for the vertices of the different cycles B_d(v_0)=d-3 for 4≤ d≤ D-1 and B_D(v_0)=(k-1)/4, where D=(k+5)/2.
Now

B(v_0,V) = ∑_d=4^D B_d+(k-1)(k-3)/8 = (n-2)(n-4)/16

In GP(n,2), the betweenness centrality of an inner vertex v_0 induced by the vertices of the outer vs inner cycle is given by

B(v_0,U|V) =
 (n-1)^2/8 for n=13,17,21,…
 (n^2-2n+5)/8 for n=15,19,23,…
 n(n-2)/8 for even n, n≥ 12

In GP(n,2) where n=2k+1, k=2l, l≥ 3, consider the possible u-v pairs determining the value of B(v_0). From u_0 there are k/2 geodesics passing through v_0 to either side. For an even index i≤ k-2, from u_i and u_i-1 there is an equal number of geodesics, i.e., (k-i)/2, passing through v_0; therefore, by symmetry, there exist k^2/2 u-v geodesics through v_0. Hence

B(v_0,U|V) = (n-1)^2/8 for n=13,17,21,…

In the case of odd k, i.e., k=2l+1, l≥ 3, there is one more geodesic of 1/2 from each outer vertex, and the expression k^2/2 turns out to be (k-1)^2/2+k. Hence

B(v_0,U|V) = (n^2-2n+5)/8 for n=15,19,23,…

Consider the case n=2k, k≥ 7. From u_0 to any inner even vertex v_j (j≠ 0) there is a unique geodesic passing through v_0, and from u_i to v_j there are two if v_i and v_j are diametric pairs of the inner even cycle. Thus k(k-1)/2 stands for B(v_0). Hence

B(v_0,U|V) = n(n-2)/8 for even n, n≥ 12

§ CONCLUSION

We have found the number of geodesics between two vertices and the betweenness centrality of GP(n,k) for k=2. The study of geodesics is extremely important in the context of interconnection networks, with wide applications in routing, fault tolerance, time delays and the calculation of many centrality measures. This work may be extended to any k<n/2.
http://arxiv.org/abs/1703.08849v1
{ "authors": [ "Sunil Kumar R", "Kannan Balakrishnan" ], "categories": [ "math.CO" ], "primary_category": "math.CO", "published": "20170326170056", "title": "On the number of geodesics of Petersen graph $GP(n,2)$" }
guoxingdao@mail.nankai.edu.cn
haoxiqing@htu.edu.cn
khw020056@tju.edu.cn
zhaomg@nankai.edu.cn
lixq@nankai.edu.cn
1. School of Physics and Math, Xuzhou University of Technology, Xuzhou, 221111, P.R. China
2. Physics Department, Henan Normal University, Xinxiang 453007, P.R. China
3. School of Science, Tianjin University, Tianjin, 300072, P.R. China
4. Department of Physics, Nankai University, Tianjin, 300071, P.R. China

It is well recognized that looking for new physics at lower-energy colliders is a trend complementary to high-energy machines such as the LHC. Based on the large database of BESIII, we may have a unique opportunity to do a good job. In this paper we calculate the branching ratios of the semi-leptonic processes D^+_s → K^+ e^-e^+ and D^+_s → K^+ e^-μ^+ and the leptonic processes D^0 → e^-e^+ and D^0 → e^-μ^+ in the frameworks of the U(1)' model, the 2HDM and the unparticle scenario separately. It is found that both the U(1)' and the 2HDM may influence the semi-leptonic decay rates, but only the U(1)' offers substantial contributions to the pure leptonic decays, and the resultant branching ratio of D^0 → e^-μ^+ can be as large as 10^-7∼10^-8, which might be observed at the future super τ-charm factory.

Looking for New Physics via Semi-leptonic and Leptonic rare decays of D and D_s
===============================================================================

§ INTRODUCTION

One of the tasks of colliders with high intensity but lower energy is to find traces of new physics beyond the Standard Model (SM) by measuring rare decays with high accuracy, namely to look for deviations of the measured values from the SM predictions. Generally, it is believed that the new physics scale may lie at several hundred GeV to a few TeV, whereas at lower energies the contributions from new physics might be drowned out in the SM background. However, in some rare decays the contributions from the SM are highly suppressed or even forbidden; then the new physics beyond the SM (BSM) might emerge and play the leading role.
If such processes are observed in high-precision experiments, a trace of BSM physics could be pinned down. Concretely, the processes in which flavor-changing neutral currents (FCNC) are involved are the goal of our studies. Even though such results may not determine what kind of new physics is at work, they may offer valuable information about new physics to the high-energy colliders such as the LHC. In the SM, FCNC and lepton-flavor-violating (LFV) processes can only occur via loop diagrams and thus suffer a suppression. Hence the study of FCNC/LFV transitions composes a key for the BSM search. The rare decays of D and B mesons provide a favorable arena because these mesons are produced at e^+e^- colliders, where the background is much cleaner than at hadron colliders. The newest measurements set upper bounds on the branching ratios of D^+_s → K^+ e^-e^+ and D^+_s → K^+ e^-μ^+ of 3.7×10^-6 and 9.7×10^-6 respectively <cit.>, and the upper bounds for D^0 → e^-e^+ and D^0 → e^-μ^+ are 7.9×10^-8 and 2.6×10^-7 <cit.>. Theoretically, those decay processes receive contributions from both short- and long-distance effects of the SM <cit.>. In particular, for D^+_s → K^+ e^-e^+ the rate is mainly determined by the long-distance effect, and the SM predicted value is 1.8×10^-6, which is higher than the short-distance contribution (2×10^-8 <cit.>) by two orders of magnitude. For the other concerned processes, the contributions from the SM are so small that they can be neglected. As indicated, at lower-energy experiments one can notice a new physics trace but cannot determine what it is; thus, in collaboration, theorists would offer possible scheme(s) to experimentalists and help them extract information from the data. That is the main idea of this work. Many new physics models (BSM) have been constructed by theorists, for example the fourth generation<cit.>, the non-universal Z' boson<cit.>, the two-Higgs-doublet model (2HDM)<cit.> and the unparticle<cit.>, etc.; in their frameworks, FCNC/LFV processes occur at tree level.
Thus, once such rare decays involving FCNC/LFV processes are experimentally observed, one may claim the existence of BSM physics; then, comparing the values predicted by different models with the data, one would gain a hint about which BSM model may play a role, which is valuable for high-energy colliders. In Refs.<cit.>, based on several BSM models, the authors derived the formulas and evaluated the decay rates of semi-leptonic and leptonic decays of D mesons, with the model parameters constrained mainly by the data on D^0-D̅^0 mixing. The result obtained by them was pessimistic: these decay rates cannot provide any trace of the concerned models. In this work we choose three new physics models, the U(1)' model, the 2HDM of type III and the unparticle, but we relax the constraint from D^0-D̅^0 mixing by supposing that some unknown mechanism suppresses the mixing rate, provided the present measurements are sufficiently accurate; instead we consider the constraints obtained by fitting the experimental data for τ→ 3l<cit.>. We then calculate the branching ratios of D^+_s → K^+ e^-e^+, D^+_s → K^+ e^-μ^+, D^0 → e^-e^+ and D^0 → e^-μ^+ in the frameworks of those models respectively. Our numerical results show that only the Z' from a broken extra U(1)' gauge symmetry and the 2HDM of type III can result in substantial enhancements of the branching ratios of D^+_s → K^+ e^-e^+ and D^+_s → K^+ e^-μ^+, up to 10^-6∼10^-7. Those results will be tested in future BESIII experiments. Indeed, we lay our hope on the huge database of BESIII, without which we cannot go any further in the search for new physics. In this work we also try to set up schemes for analyzing the data on those decays based on the BESIII data and for extracting information about new physics BSM. This paper is organized as follows.
In Sections 2 and 3, we first briefly review the SM results for the semi-leptonic and pure leptonic rare decays and then derive the corresponding contributions induced by the new physics models (extra U(1)', 2HDM of type III and unparticle) one by one. In fact, some of them had been deduced by other authors; here we only probe their formulation and add those which were not derived before. We obtain the corresponding Feynman amplitudes and decay widths for D^+_s → K^+ e^-e^+, D^+_s → K^+ e^-μ^+, D^0 → e^-e^+ and D^0 → e^-μ^+. In Section 4, we present our numerical results along with the constraints on the model parameters obtained by fitting previous experimental data, excluding the D^0-D̅^0 mixing. In Section 5, we set up an experimental scheme for analyzing the data which will be collected by the BESIII collaboration in the near future. In the last section, we present a brief discussion and draw our conclusion.

§ D^+_S SEMI-LEPTONIC DECAY

For the decay processes D^+_s → K^+ e^-e^+ and D^+_s → K^+ e^-μ^+, the contributions of the SM to these FCNC processes are realized via electromagnetic penguin diagrams and are suppressed. However, besides the short-distance effects, there exists a long-distance contribution which is larger. Moreover, because the direct SM process is small, any new physics model whose Hamiltonian includes FCNC interactions may induce the semi-leptonic and leptonic decays of D^+_s and D^0 at tree level. In this section we explore three possible models: the U(1)' model, the 2HDM of type III and the unparticle. Since those models have been studied by many authors from various aspects, here we only give a brief review.

§.§ The SM contribution

The authors of Refs.<cit.> gave the amplitudes for D^+_s → K^+ e^-e^+; here we only list the formulas for the reader's convenience.
The Feynman amplitude of the decay D^+_s → K^+ e^-e^+ in the framework of the SM is

ℳ_SM = 4G_F/√(2) [C_7 ⟨ e^+e^- |e A^δl̅γ_δ l|γ⟩ (1/q^2) ⟨γ K^+|O_7|D^+_s⟩ + C_9 ⟨ e^+e^- K^+|O_9|D^+_s⟩]

where

O_7 = e/16π^2 m_c (u̅_L σ^αβ c_R) F_αβ
O_9 = e^2/16π^2 (u̅_L γ^α c_L) l̅γ_α l

After some simple reductions, ℳ_SM becomes

ℳ_SM = 4G_F/√(2) · e^2 m_c/16π^2 · C_7 u̅(p_2)(γ_β q_α-γ_α q_β)v(p_1)/2q^2 · f_T(q^2)/m_D_s [(p+p')^α q^β-(p+p')^β q^α+iϵ^αβρσ(p+p')_ρ q_σ]
 + e^2/32π^2 C_9 u̅(p_2)γ^δ v(p_1){f_+(q^2)[(p+p')_δ-(m^2_D_s-m^2_K)/q^2 · q_δ]+f_0(q^2)(m^2_D_s-m^2_K)/q^2 · q_δ}

where q=p_1+p_2 and C_7=4.7×10^-3<cit.>. Following Refs.<cit.>, we also consider the resonance processes D^+_s → K^+ V_i→ K^+ e^-e^+ with i=ρ,ω,ϕ, which are accounted for as long-distance contributions; the corresponding Feynman diagrams are shown in Fig.<ref>. Thus C_9 can be written as

C_9 = (0.012 + 3π/α_e^2 ∑_i=ρ,ω,ϕ κ_i m_V_i Γ_V_i→ e^+e^-/(m_V_i^2-q^2-im_V_iΓ_V_i)) (V_udV_cd+V_usV_cs)

with κ_ρ=0.7, κ_ω=3.1 and κ_ϕ=3.6. The second part in the parenthesis corresponds to the long-distance contributions. Following Ref.<cit.>, the hadronic form factors are written as

f_T(q^2) = f^T_D_s K(0)/[(1-q^2/m^2_D_s)(1-a_T q^2/m^2_D_s)]
f_+(q^2) = f^+_D_s K(0)/[(1-q^2/m^2_D_s)(1-α_D_s K q^2/m^2_D_s)]
f_0(q^2) = f^+_D_s K(0)/(1-q^2/(β_D_s K m^2_D_s))

where f^T_D_s K(0)=0.46, a_T=0.18, f^+_D_s K(0)=0.75±0.08, α_D_s K=0.30±0.03 and β_D_s K=1.3±0.07. The long-distance contribution is of order 10^-6 <cit.>. Thus the contribution from the SM may be close to, or even larger than, that of BSM physics, so the two would interfere with each other; we will discuss this in Section 4.

§.§ Contributions of Z' in the U(1)' model

The U(1)' model was proposed and applied by many authors <cit.>, and the corresponding Lagrangian is

ℒ_Z' = ∑_i,j[l̅_i γ^μ(ω_ij^L P_L+ω_ij^R P_R) l_j Z'_μ + q̅_iγ^μ(ε_ij^L P_L+ε_ij^R P_R) q_j Z'_μ]+h.c.

where P_L(R)=(1∓γ_5)/2, and ω_ij (ε_ij) denote the chiral couplings between the new gauge boson Z' and the various leptons (quarks).
Whether it can be applied to solve some phenomenological anomalies depends crucially on the strength of the couplings and the mass of the Z' gauge boson, which should be fixed by fitting available data. For the decay processes D^+_s → K^+ e^-e^+ and D^+_s → K^+ e^-μ^+, the corresponding Feynman diagrams are shown in Fig.<ref>. The Feynman amplitude with Z' as the intermediate particle was derived by the authors of <cit.> as

ℳ_Z'(D^+_s → K^+ l_i l̅_j) = {f_+(q^2)[(p+p')_σ-(m^2_D_s-m^2_K)/q^2 · q_σ]+f_0(q^2)(m^2_D_s-m^2_K)/q^2 · q_σ}
 × (ε_cu^L+ε_cu^R)/(g√(2)) · 1/(q^2-m_Z'^2) · u̅(p_2)(ω_ij^L P_L+ω_ij^R P_R)γ^σ v(p_1)

where ω_ij=ω_ee for D^+_s → K^+ e^-e^+ and ω_ij=ω_eμ for D^+_s → K^+ e^-μ^+. The contributions of the SM (indeed from the long-distance part) and of the Z' might be of the same order, depending on the model parameters; thus we should consider their interference:

|ℳ|^2 = |ℳ_SM+ℳ_Z' e^iϕ|^2 = |ℳ_SM|^2+|ℳ_Z'|^2+2|ℳ_SMℳ_Z'| cosϕ.

Averaging over initial spins and summing over final spin polarizations, the differential decay width for D^+_s → K^+ e^-e^+ is

dΓ/dq^2 = [G^2_F α^2_e/(1536π^5 m^3_D_s) |C_9 f_+(q^2)+2C_7 f_T(q^2) m_c/m_D_s|^2
 + (ε_cu^L+ε_cu^R)^2((ω_ee^L)^2+(ω_ee^R)^2)/(192π^3 g^2 m^4_Z' m^3_D_s) f_+(q^2)^2
 + (ε_cu^L+ε_cu^R)(ω_ee^L+ω_ee^R) G_F α_e/(384π^4 g m^2_Z' m^3_D_s) f_+(q^2)(C_9 f_+(q^2)+2C_7 f_T(q^2) m_c/m_D_s) cosϕ] λ^3/2(q^2,m^2_D_s,m^2_K)

where λ(a,b,c)=a^2+b^2+c^2-2ab-2bc-2ca is the Kallen function.
We can obtain the total decay width by integrating over q^2:

Γ = ∫_4m_e^2^(m_D_s-m_K)^2 (dΓ/dq^2) dq^2

§.§ Contributions of the heavy neutral Higgs in the two-Higgs-doublet model of type III

In the 2HDM of type III <cit.>, there are two neutral CP-even Higgs bosons: one is the SM Higgs boson and the other is a heavy Higgs boson. The corresponding Lagrangian for the heavy Higgs boson is

ℒ_Yukawa = ∑_i,j[l̅_i (m^i_l/v cosα δ_ij - ρ^E_ij/√(2) sinα) l_j H + q̅_i(m^i_q/v cosα δ_ij - ρ^U_ij/√(2) sinα) q_j H]+h.c.

where ρ^E_ij and ρ^U_ij stand for the effective coupling constants for leptons and quarks respectively, and α is the mixing angle between the light and heavy Higgs bosons. Following Refs. <cit.>, we take cosα→ 0 and do not adopt the so-called Cheng-Sher ansatz for ρ^f_ij, which was discussed in Ref.<cit.>; instead, we take the range of ρ^f_ij to be 0.1∼0.3 as suggested in Ref. <cit.>. The Feynman amplitude corresponding to the contribution from exchanging a heavy Higgs boson is

ℳ_hh(D^+_s → K^+ l_i l̅_j) = {2f^+_D_s K(q^2) p'· p/m_D_s+[f^+_D_s K(q^2)+f^-_D_s K(q^2)] q· p/m_D_s}
 × ρ^U_cu · 1/(q^2-m^2_hh) · u̅(p_1)v(p_2) ρ^E_ij

where ρ_ij=ρ_ee and ρ_ij=ρ_eμ stand for D^+_s → K^+ e^-e^+ and D^+_s → K^+ e^-μ^+ respectively. The differential decay width for D^+_s → K^+ e^-e^+ is

dΓ/dq^2 = [G^2_F α^2_e/(1536π^5 m^3_D_s) |C_9 f_+(q^2)+2C_7 f_T(q^2) m_c/m_D_s|^2 λ(q^2,m^2_D_s,m^2_K)
 + (ρ^U_cu ρ^E_ee)^2 (f^0_D_s K(q^2)(m^2_D_s-m^2_K)(m^2_D_s-m^2_K+s_12) - f^+_D_s K(q^2)(m_D_s^4+(m_K^2-s_12)^2-2m_D_s^2(m_K^2+s_12)))^2/(64 g^2 m_D_s^5 m_hh^4 π^3 s_12)]
 × λ^1/2(q^2,m^2_D_s,m^2_K).

Then we obtain the total decay width by integrating over q^2 as done in Eq.<ref>.

§.§ Contribution from the unparticle

The idea of the unparticle was proposed by Georgi<cit.> a while ago, and many authors have since explored the relevant phenomenology and the underlying theory. In the unparticle scenario, flavor-changing terms exist in the basic Lagrangian, so FCNC processes can occur at tree level.
One is naturally tempted to conjecture that the unparticle mechanism may contribute to D^+_s → K^+ e^-e^+ and D^+_s → K^+ e^-μ^+. Following Ref.<cit.>, we only consider the interactions between fermions and the scalar unparticle. The corresponding effective interaction is

ℒ = ∑_f',f c^f'f_s/Λ_𝒰^d_𝒰 f̅'γ_μ (1-γ_5)f ∂^μ𝒪_𝒰+h.c.

where c^f'f_s stands for the coupling constants between the unparticle and fermions, 𝒪_𝒰 is the scalar unparticle field, d_𝒰 is a nontrivial scale dimension and Λ_𝒰 is an energy scale of order TeV. The propagator of the scalar unparticle is<cit.>

∫ d^4x e^iP· x ⟨ 0|T 𝒪_𝒰(x) 𝒪_𝒰(0)|0⟩ = iA_d_𝒰/(2sin(d_𝒰π)) · 1/(P^2)^(2-d_𝒰) e^-i(d_𝒰-2)π,

with

A_d_𝒰 = 16π^(5/2)/(2π)^(2d_𝒰) · Γ(d_𝒰+1/2)/(Γ(d_𝒰-1) Γ(2d_𝒰)).

Supposing D^+_s → K^+ e^-e^+ and D^+_s → K^+ e^-μ^+ occur via exchanging a scalar unparticle, the corresponding Feynman amplitude is

ℳ(D^+_s → K^+ l_i l̅_j) = {2f^+_D_s K(q^2) p'· q+[f^+_D_s K(q^2)+f^-_D_s K(q^2)]q^2}
 × c^cu_s/Λ_𝒰^d_𝒰 · 1/(q^2)^(2-d_𝒰) e^-i(d_𝒰-2)π u̅(p_1) /q (1-γ_5)v(p_2) c^ij_s/Λ_𝒰^d_𝒰,

where c^ij_s=c^ee_s and c^ij_s=c^eμ_s correspond to D^+_s → K^+ e^-e^+ and D^+_s → K^+ e^-μ^+ respectively. Since numerically the unparticle contribution to D^+_s → K^+ e^-e^+ and D^+_s → K^+ e^-μ^+ is much smaller than that from the SM and the other BSM models, we only list the formula here and, for completeness, include the numerical results of the unparticle contribution in the corresponding tables. The differential decay width for D^+_s → K^+ e^-e^+ is

dΓ/dq^2 = 1/(256π^3 m^3_D_s) (c^cu_s c^ee_s)^2 · 2^(12-4d_𝒰) m_e^2 π^(5-4d_𝒰)(2m_e^2+s_12)/s_12^(6-2d_𝒰) · Γ^2[1/2+d_𝒰]/(Λ_𝒰^(4d_𝒰) sin^2 d_𝒰π)
 × (f^0_D_s K(q^2)(m^2_D_s-m^2_K)(2m_e^2+s_12)+2f^+_D_s K(q^2)m_e^2(-m^2_D_s+m^2_K+s_12))^2/(g^2 Γ^2[d_𝒰-1] Γ^2[2d_𝒰]) λ^1/2(q^2,m^2_D_s,m^2_K).

§.§ Semi-leptonic decay of D^+

The decays D^+ →π^+ e^-e^+ and D^+ →π^+ e^-μ^+ are similar to D^+_s → K^+ e^-e^+ and D^+_s → K^+ e^-μ^+; the only difference is the species of the spectator quark.
Therefore all the formulas for D^+_s → K^+ l_i l̅_j can be transferred to D^+ → π^+ l_i l̅_j by SU(3) symmetry.§ RARE LEPTONIC DECAYS OF D^0 The rare leptonic decays of D^0 refer to D^0→ ll̅ and D^0→ l_il̅_j with i≠ j; the latter is not only an FCNC but also a lepton-flavor violating (LFV) process. In the SM, D^0→ ll̅ proceeds when the charm quark and u̅ annihilate into a virtual photon via an electromagnetic penguin, which suppresses the reaction rate. For the LFV process, c and u̅ must annihilate into a virtual Z boson, which turns into a pair of neutrinos; via weak scattering the neutrinos then eventually convert into two leptons of different flavors. Because neutrinos are very light, this process is much more suppressed than D^0→ ll̅. In fact, if no new physics BSM exists, such LFV processes can never be experimentally measured. Therefore, a search for such LFV processes constitutes a trustworthy probe of BSM physics. Actually, the SM contribution to the leptonic decays (both lepton-flavor conserving and lepton-flavor violating processes) is too small to be observed <cit.>, so we only consider contributions from new physics. Since D^0 is a pseudoscalar meson and the heavy Higgs is a scalar boson, the processes D^0 → e^-e^+ and D^0 → e^-μ^+ cannot occur through exchange of the heavy Higgs boson. In the Z' and unparticle scenarios, D^0 → e^-e^+ and D^0 → e^-μ^+ might be induced with sizable rates. §.§ The Z' gauge boson from the U(1)' model For the decay processes D^0 → e^-e^+ and D^0 → e^-μ^+, the corresponding Feynman diagrams are shown in Fig.<ref>. The corresponding Feynman amplitude with Z' as the mediating particle is written as[ ℳ(D^0 → l_i l̅_j)=Tr[v̅(q_2) (ε_cu^L P_L+ε_cu^R P_R)γ^σ u(q_1)] 1/(m_D^2-m_Z'^2)u̅(p_1)(ω_ij^L P_L+ω_ij^R P_R)γ_σ v(p_2) ]where ω_ij=ω_ee for D^0 → e^-e^+ and ω_ij=ω_eμ for D^0 → e^-μ^+. Following Ref.<cit.> we have[ u(q_1) v̅(q_2) → f_D ( /p+m_D)γ_5 .
]The decay width Γ(D^0 → e^-e^+) is[Γ= (ε_cu^L-ε_cu^R)^2(ω_ee^L-ω_ee^R)^2 f_D^2 m_e^2 √(m_D^2-4m_e^2)/2π (m_Z'^2-m_D^2)^2. ]§.§ Contribution from unparticle D^0 → e^-e^+ and D^0 → e^-μ^+ could also be realized via exchange of a scalar unparticle; the corresponding Feynman amplitude is[ ℳ=Tr[v̅(q_2)/p(1-γ_5) u(q_1)] c^cu_s/Λ_𝒰^d_𝒰 1/(m_D^2)^{2-d_𝒰}e^-i(d_𝒰-2)π u̅(p_2) /p(1-γ_5) v(p_1) c^ee_s/Λ_𝒰^d_𝒰, ]where c^ij_s=c^ee_s for D^0 → e^-e^+ and c^ij_s=c^eμ_s for D^0 → e^-μ^+. The decay width Γ(D^0 → e^-e^+) is[ Γ= (c^cu_s c^ee_s)^2 f_D^2 √(m_D^2-4m_e^2) m_e^2 2^{9-4d_𝒰}π^{4-4d_𝒰}/m_D^{4-4d_𝒰}Λ_𝒰^{4d_𝒰}Γ^2[1/2+d_𝒰]/sin^2 (d_𝒰π)Γ^2[d_𝒰-1]Γ^2[2d_𝒰]. ]§ NUMERICAL RESULTS For D^+_s → K^+ e^-e^+ and D^+_s → K^+ e^-μ^+, where a Z' boson is exchanged in the s-channel, we follow the authors of Ref.<cit.> and set the ranges of ε_cu^L, ε_cu^R, ω_ee(μ)^L and ω_ee(μ)^R to -0.5∼0.5 accordingly. We plot the branching ratios of D^+_s → K^+ e^+e^- and D^+_s → K^+ e^-μ^+ versus the mixing angle θ between the SM Z and the Z' of U(1)' in Fig.<ref>. When we calculate the branching ratios of D^+_s → K^+ e^-e^+ and D^+_s → K^+ e^-μ^+ via exchange of a heavy Higgs boson, we follow Ref.<cit.> and take ρ^f_ij within the range 0.01∼0.3, rather than adopting the so-called Cheng-Sher ansatz for the couplings ρ^f_ij, as was done in Ref.<cit.>. We plot the branching ratios of D^+_s → K^+ e^+e^- and D^+_s → K^+ e^-μ^+ versus the mass of the heavy Higgs boson in Fig.<ref>. Then, we calculate the branching ratios of D^+_s → K^+ e^-e^+ and D^+_s → K^+ e^-μ^+ via exchange of a scalar unparticle. Following Refs.<cit.>, we take Λ_𝒰=1TeV, 1<d_𝒰<2, and the range of c_S to be 0.01∼0.04, with the relation[ c_S^f'f={ c_S, f≠ f'; κ c_S, f=f' } ]where κ=3 <cit.>. Then we plot the branching ratios of the decays D^+_s → K^+ e^-e^+ and D^+_s → K^+ e^-μ^+ versus Λ_𝒰 for different d_𝒰 in Fig.<ref>. We list the branching ratios of D^+_s → K^+ e^-e^+ and D^+_s → K^+ e^-μ^+ predicted by various new physics models (BSM) in Tabs.<ref> and <ref>, respectively.
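The unparticle entries in these tables involve the normalization A_{d_𝒰} defined by the Γ-function formula above. As a numerical cross-check (not taken from the paper), A_{d_𝒰} can be evaluated over the adopted range 1<d_𝒰<2 with a short sketch; the sampled values of d_𝒰 are illustrative choices:

```python
import math

def A(d_u):
    """Unparticle normalization from Georgi's formula:
    A = 16 pi^{5/2} / (2 pi)^{2 d_U} * Gamma(d_U + 1/2) / (Gamma(d_U - 1) * Gamma(2 d_U))."""
    return (16 * math.pi**2.5 / (2 * math.pi)**(2 * d_u)
            * math.gamma(d_u + 0.5) / (math.gamma(d_u - 1) * math.gamma(2 * d_u)))

for d_u in (1.1, 1.5, 1.9):
    print(f"d_U = {d_u}: A = {A(d_u):.4f}")   # A(1.5) = 1/pi exactly, ≈ 0.3183
```

Note that A_{d_𝒰} vanishes as d_𝒰→1 (since Γ(d_𝒰-1) diverges), which is one reason the unparticle contribution is so sensitive to the choice of scale dimension.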
From those tables we notice that for the U(1)' model <cit.> and the 2HDM of type III <cit.>, the branching ratios can be as large as order 10^-6∼10^-7. We also list the branching ratios of the leptonic decays D^0 → e^-e^+ and D^0 → e^-μ^+ predicted by various models of new physics beyond the SM in Tab.<ref>. Since D^0 cannot decay to l_i l̅_j through a scalar particle, only the Z' and the unparticle can contribute to these leptonic decays. Our numerical results indicate that, when the experimental bounds are taken into account and the corresponding coupling constants in the U(1)' model and the 2HDM take their maximum values, the branching ratios of D^+_s → K^+ e^-e^+ and D^+_s → K^+ e^-μ^+ can reach order 10^-6. By contrast, the contribution of the scalar unparticle to the branching ratios only reaches order 10^-18 (10^-15).§ SEARCHING FOR SEMI-LEPTONIC AND LEPTONIC DECAYS BASED ON THE LARGE DATABASE OF BESIII In this section, let us discuss possible constraints and the potential to observe the aforementioned rare semi-leptonic and leptonic decays of D mesons based on the large database of BESIII. Unlike hadron colliders, electron-positron colliders have a much lower background, which is well understood at present and reduces contamination of the measurements. Thus controllable and small systematic uncertainties are expected. The BESIII experiment has accumulated large data samples at 3.773 and 4.18 GeV, which are just above the production thresholds of DD̅ and D_s^⋆+ D_s^-+c.c.. This provides an excellent opportunity to investigate the decays of these charmed mesons. At these energies, the charmed mesons are produced in pairs. That is to say, if only one charmed meson is reconstructed in an event, defined as a single-tag event, there must exist another charmed meson on the recoiling side.
With the selected singly tagged events, the rare charm decays of interest can be well studied on the recoiling side of the reconstructed charmed meson. This is known as the double-tag technique, first employed by the MARK-III Collaboration and now widely used in the BESIII experiments. With this method, the two charmed mesons are both tagged in one event: one of the charmed mesons is reconstructed through a well-measured hadronic channel, while the other is used to search for the signal process of interest. Benefiting from the extremely clean background, the systematic uncertainties in double-tag measurements can be reduced to a fully controlled level. In principle, there are two ways to perform the search for rare/forbidden decays. One is based on the single-tag method, where one charmed meson is reconstructed for the signal process while no constraint is placed on the other. This method provides larger statistics, but at the price of a more complex and higher background. The other way is the double-tag method, which presents a simple and clean background but relatively poorer statistics (see Table <ref>). Whether to employ the double-tag technique for studying the relevant processes depends on a balance between reducing background contamination and achieving higher statistics. In the following, we discuss the statistics of the measurements of the rare decays, which in most cases is the limiting factor in searches for new physics. For the single-tag method, the background analysis is severely mode dependent.
Thus, to simplify the estimation, we will focus our discussion on the result of the double-tag method. The BESIII experiment has accumulated huge threshold data samples of about 2.95 fb^-1 and 3.15 fb^-1 at the c.m. energies √(s)= 3.773 and 4.180 GeV, which are about 3.5 times and 5 times larger than the previously accumulated databases, respectively. According to the published papers of the BESIII experiment, there are more than 1.6×10^6 and 2.8×10^6 singly tagged charged and neutral DD̅ mesons, respectively. These modes can be used as the tagging side for the double-tag method. Namely, because of the advantage of the double-tag method, which may remarkably reduce the background and enhance the confidence level, we suggest adopting the double-tag method for the analysis of the rare-decay data while employing the well-established modes as the tagging side. Then, on the recoiling side, one can look for the expected signal. Omitting some technical details, we note that with this double-tag method the experimental sensitivity can reach about 10^-6 at 90% Confidence Level (CL), assuming zero signal and zero background events. In the next 10 years, 4 to 6 times more charm data can be expected, so we may have a better chance to detect such rare decays. Unfortunately, according to our predictions this sensitivity is still insufficient for observing the pure leptonic rare decays of D^0 (whether lepton-flavor-conserving or lepton-flavor-violating). If the size of the BESIII data sample reaches 20 fb^-1 in the next 10 years, the sensitivity would be at the 10^-7 level, which almost touches the lower edge of our prediction for the rates of the pure leptonic modes. The analysis is a little more complex at 4.180 GeV even though the method is similar.
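The zero-event sensitivity quoted above follows from the classical Poisson upper limit: for zero observed events the 90% CL upper limit on the mean is -ln(0.1) ≈ 2.30 events, and dividing by the tag yield times efficiency gives the branching-ratio reach. A minimal sketch follows; the tag yield and signal efficiency used in the example are hypothetical, not values from the text:

```python
import math

def poisson_upper_limit(n_obs=0, cl=0.90):
    """Classical Poisson upper limit on the mean mu for n_obs observed events:
    the mu at which P(N <= n_obs | mu) = 1 - CL. For n_obs = 0 this is -ln(1-CL)."""
    if n_obs == 0:
        return -math.log(1.0 - cl)
    def cdf(mu):  # Poisson CDF, decreasing in mu
        return sum(math.exp(-mu) * mu**k / math.factorial(k) for k in range(n_obs + 1))
    lo, hi = 0.0, 10.0 * (n_obs + 2)
    for _ in range(200):  # bisection on the CDF
        mid = 0.5 * (lo + hi)
        if cdf(mid) > 1.0 - cl:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

mu_ul = poisson_upper_limit(0, 0.90)   # = ln(10) ≈ 2.30 events
n_tag, eff = 2.8e6, 0.8                # hypothetical tag yield and signal efficiency
print(f"BR sensitivity ~ {mu_ul / (n_tag * eff):.1e}")
```

With a tag yield of a few million, this reproduces the quoted order-of-magnitude sensitivity of 10^-6.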
The sensitivities for the rare semi-leptonic decays of D_s^+ or D_s^⋆+ mesons can be expected to reach 10^-5 at 90% CL; however, this is not enough to test our predictions for the rare D_s^+ semi-leptonic decays. If the proposed super τ-charm factory (STCF) is launched in the near future, we would be able to collect at least 100 to 1000 times more data, since the designed luminosity of the STCF will be as high as 1×10^35 cm^-2 s^-1, 100 times that of BEPCII. Then the sensitivities of searching for the concerned signals in D or D_s^+ decays can be greatly improved, reaching 10^-9∼10^-10 or 10^-7∼10^-8 at 90% CL, respectively. With these improved sensitivities, the rates of D_s^+→ K^+e^+e^- and D_s^+→ K^+μ^+e^- predicted by the U'(1) or 2HDM models become measurable. Then the more challenging lepton-flavor-violating mode D^0→ e^-μ^+ predicted by the U'(1) and unparticle models can possibly be tested.§ DISCUSSION AND CONCLUSION The rare decays of heavy-flavored hadrons which are suppressed or even forbidden in the SM can serve as portals for probing new physics BSM. Experimentally measured “anomalies” which obviously deviate from the SM predictions are considered candidate signals of BSM physics, or at least provide hints of BSM physics for high-energy collider experiments such as the LHC. This is common wisdom among experimentalists and theorists in high-energy physics. However, how to design a new experiment which might lead to the discovery of new physics is an art. Following historical experience, besides blind searches in experiments, researchers tend to perform measurements guided by predictions made by theorists based on available and reasonable models. FCNC/LFV processes provide a sensitive test for new physics BSM and constitute an area complementary to high-energy collider physics.
Definitely, processes to which the SM substantially contributes do not stand as candidates for seeking new physics BSM, because the new physics contributions would be drowned in the SM background. Researchers are therefore carefully looking for rare processes where the SM contributions are strongly suppressed or even forbidden by some rules. The rare semi-leptonic and leptonic decays of B and D mesons are ideal places because they are caused by FCNC. In particular, the lepton-flavor-violating decays, which cannot be induced by the SM because neutrino masses are too tiny to make any non-negligible contribution, are the goal we are interested in. Recently, most research has focused on B decays. The reason is obvious: B mesons are at least three times heavier than D mesons, so processes involving B mesons are closer to the new-physics scale; moreover, the coupling between the b-quark and top-quark has a large CKM entry. Indeed, many research works concerning B→ K^(*)ll̅ <cit.> and B^0(B_s)→ ll̅ <cit.> have emerged. On another front, several authors have studied the case of D mesons and drawn constraints on the free parameters in the proposed models by fitting available data. The model parameters can be compared with those obtained by fitting the data of B decays. In this work, based on the large database of BESIII, we follow this trend to investigate the possibilities of detecting the rare semi-leptonic and pure leptonic decays of D mesons, and we especially pay attention to the analysis of the lepton-flavor-violating processes. Concretely, we calculate the decay rates of D^+_s → K^+ e^-e^+, D^+_s → K^+ e^-μ^+, D^0 → e^-e^+ and D^0 → e^-μ^+ through exchange of a neutral particle in terms of three BSM new physics models: the extra U'(1), the 2HDM of type III, and the unparticle. The decay rate of D^+_s → K^+ e^-e^+ receives a sizable contribution from the SM, with a branching ratio up to order 10^-6.
It is noted that the branching ratio of the direct decay process via the penguin diagram is small, of order 10^-8, while the long-distance reaction makes a larger contribution. Our numerical results show that the U(1)' model and the 2HDM of type III can make significant contributions to the process D^+_s → K^+ e^-e^+ as long as the model parameters obtained by fitting relevant data are adopted, but the unparticle model cannot make any substantial contribution. Recent researchers seem more inclined to use the extra U'(1) model, and we follow this trend. But here, in fixing the model parameters, we deliberately relax the constraint set by D^0-D̅^0 mixing, as discussed above. If that constraint were taken into account, the predicted branching ratio of D^+_s → K^+ e^-e^+ would be reduced by two orders of magnitude, to 10^-8, which is much lower than the contribution of the SM long-distance effect. Thus the new physics contribution would be buried in the SM background. However, if we only consider the constraints on the U'(1) parameters obtained by fitting the data of τ→ 3l, rather than D^0-D̅^0 mixing, the predicted branching ratio can be as large as order 10^-6, so the resultant amplitude might interfere with the SM long-distance contribution. In the future BESIII experiment, the experimental sensitivity can reach order 10^-6∼10^-7, so the data on D^+_s → K^+ e^-e^+ might tell us some information about new physics. Our numerical results show that the U(1)' model and the 2HDM of type III could yield an observable branching ratio of D^+_s → K^+ e^-μ^+ with the BESIII data, whose precision can reach order 10^-6∼10^-7. For the processes D^0 → e^-e^+ and D^0 → e^-μ^+, the theoretically predicted branching ratio of the decay D^0 → e^-e^+ is of order 10^-10 since its width is proportional to m_e^2; such a small value is hard to observe. For the decay D^0 → e^-μ^+, the branching ratio can be up to order 10^-7, which may be observed at a future super τ-charm factory.
Moreover, one can expect to observe D^0 →μ^-μ^+, while unfortunately D^0 →μ^-τ^+ is forbidden by the phase space of the final states because m_μ+m_τ>m_D^0. According to the presently available new physics models (U'(1), 2HDM and the unparticle model), the D-meson data to be collected in the next 10 years can marginally detect the new physics contributions to D^+_s → K^+ e^-e^+, D^+_s → K^+ e^-μ^++h.c., D^0 → e^-e^+ and D^0 → e^-μ^++h.c., as long as only the constraints set by some experiments are taken into account while the data on D^0-D̅^0 mixing are relaxed. If the data on D^0-D̅^0 mixing are taken into account, BESIII and even the planned high-luminosity τ-charm factory will not be able to “see” those rare decays as predicted by these models. However, this by no means precludes experimental searches for these rare decays in the charm energy region based on the huge data samples collected by BES and the future τ-charm factory. A blind experimental search is not constrained by the available theoretical predictions, because the present BSM models are only possibilities conjectured by theorists, while nature might suggest an alternative scenario. Once such a new observation is made, we would be stunned and would explore new BSM models to explain the phenomena; thus our theories would make new progress, and that is what we hope for. § ACKNOWLEDGMENTS This work is supported by the National Natural Science Foundation of China under contract Nos. 11675082, 11375128, 11405046 and by the Special Grant of the Xuzhou University of Technology No. XKY2016211. 99 Olive:2016xmw C. Patrignani et al. [Particle Data Group], Chin. Phys. C 40, no. 10, 100001 (2016). Burdman:2001tf G. Burdman, E. Golowich, J. L. Hewett and S. Pakvasa, Phys. Rev. D 66, 014009 (2002) [hep-ph/0112235]. Hou:2006mx W. S. Hou, M. Nagashima and A. Soddu, Phys. Rev. D 76, 016004 (2007) [hep-ph/0610385]. Langacker:2008yv P. Langacker, Rev. Mod. Phys. 81, 1199 (2009) [arXiv:0801.1345 [hep-ph]]. Leike:1998wr A. Leike, Phys. Rept. 317, 143 (1999) [hep-ph/9805494]. rnm R. N.
Mohapatra, Unification And Supersymmetry, (Springer-Verlag, Berlin, 1986). Yue:2016mqm C. X. Yue and J. R. Zhou, Phys. Rev. D 93, no. 3, 035021 (2016) [arXiv:1602.00211 [hep-ph]]. Cheng:1987rs T. P. Cheng and M. Sher, Phys. Rev. D 35, 3484 (1987). Davidson:2010xv S. Davidson and G. J. Grenier, Phys. Rev. D 81, 095016 (2010) [arXiv:1001.0434 [hep-ph]]. Omura:2015nja Y. Omura, E. Senaha and K. Tobe, JHEP 1505, 028 (2015) [arXiv:1502.07824 [hep-ph]]. Georgi:2007ek H. Georgi, Phys. Rev. Lett. 98, 221601 (2007) [hep-ph/0703260]. Georgi:2007si H. Georgi, Phys. Lett. B 650, 275 (2007) [arXiv:0704.2457 [hep-ph]]. Luo:2007bq M. Luo and G. Zhu, Phys. Lett. B 659, 341 (2008) [arXiv:0704.3532 [hep-ph]]. Fajfer:2015mia S. Fajfer and N. Košnik, Eur. Phys. J. C 75, no. 12, 567 (2015) [arXiv:1510.00965 [hep-ph]]. deBoer:2015boa S. de Boer and G. Hiller, Phys. Rev. D 93, no. 7, 074001 (2016) [arXiv:1510.00311 [hep-ph]]. Buras:1998raa A. J. Buras, hep-ph/9806471. Khodjamirian:2009ys A. Khodjamirian, C. Klein, T. Mannel and N. Offen, Phys. Rev. D 80, 114005 (2009) [arXiv:0907.2842 [hep-ph]]. Gao:2010zzg T. J. Gao, T. F. Feng, X. Q. Li, Z. G. Si and S. M. Zhao, Sci. China Phys. Mech. Astron. 53, 1988 (2010). Langacker:2009im P. Langacker, AIP Conf. Proc. 1200, 55 (2010) [arXiv:0909.3260 [hep-ph]]. Liu:2015oaa X. Liu, L. Bian, X. Q. Li and J. Shu, Nucl. Phys. B 909, 507 (2016) [arXiv:1508.05716 [hep-ph]]. Li:2007by X. Q. Li and Z. T. Wei, Phys. Lett. B 651, 380 (2007) [arXiv:0705.1821 [hep-ph]]. Jia:2013haa L. B. Jia, M. G. Zhao, H. W. Ke and X. Q. Li, Chin. Phys. C 38, no. 10, 103101 (2014) [arXiv:1312.7649 [hep-ph]]. Cheung:2007zza K. Cheung, W. Y. Keung and T. C. Yuan, Phys. Rev. Lett. 99, 051803 (2007) [arXiv:0704.2588 [hep-ph]]. Hao:2006nf G. Hao, Y. Jia, C. F. Qiao and P. Sun, JHEP 0702, 057 (2007) [hep-ph/0612173]. Ali:1999mm A. Ali, P. Ball, L. T. Handoko and G. Hiller, Phys. Rev. D 61, 074024 (2000) [hep-ph/9910221]. Altmannshofer:2008dz W. Altmannshofer, P. Ball, A. Bharucha, A. J. Buras, D. M. Straub and M.
Wick, JHEP 0901, 019 (2009) [arXiv:0811.1214 [hep-ph]]. Buchalla:1998ba G. Buchalla and A. J. Buras, Nucl. Phys. B 548, 309 (1999) [hep-ph/9901288]. Bobeth:2001sq C. Bobeth, T. Ewerth, F. Kruger and J. Urban, Phys. Rev. D 64, 074014 (2001) [hep-ph/0104284].
http://arxiv.org/abs/1703.08799v1
{ "authors": [ "Xing-Dao Guo", "Xi-Qing Hao", "Hong-Wei Ke", "Ming-Gang Zhao", "Xue-Qian Li" ], "categories": [ "hep-ph" ], "primary_category": "hep-ph", "published": "20170326095919", "title": "Looking for New Physics via Semi-leptonic and Leptonic rare decays of $D$ and $D_s$" }
EPSRC Centre for Predictive Modelling in Healthcare, University of Exeter, EX4 4QJ, UK School of Physics, Georgia Institute of Technology, Atlanta, Georgia, 30332-0430, USA While spiral wave breakup has been implicated in the emergence of atrial fibrillation, its role in maintaining this complex type of cardiac arrhythmia is less clear. We used the Karma model of cardiac excitation to investigate the dynamical mechanisms that sustain atrial fibrillation once it has been established. The results of our numerical study show that spatiotemporally chaotic dynamics in this regime can be described as a dynamical equilibrium between topologically distinct types of transitions that increase or decrease the number of wavelets, in general agreement with the multiple wavelets hypothesis. Surprisingly, we found that the process of continuous excitation waves breaking up into discontinuous pieces plays no role whatsoever in maintaining spatiotemporal complexity. Instead this complexity is maintained as a dynamical balance between wave coalescence – a unique, previously unidentified, topological process that increases the number of wavelets – and wave collapse – a different topological process that decreases their number. Dynamical mechanism of atrial fibrillation: a topological approach Roman O. Grigoriev December 30, 2023 ================================================================== Atrial fibrillation is a type of cardiac arrhythmia featuring multiple wavelets that continually interact with each other, appear, and disappear. The genesis of this spatiotemporally chaotic state has been linked to the alternans instability that leads to conduction block and wave breakup, generating an increasing number of wavelets. Less clear are the dynamical mechanisms that sustain this state and, in particular, maintain the balance between the creation and destruction of spiral wavelets.
Even the relation between wave breakup and conduction block, which is well-understood qualitatively, at present lacks proper quantitative description. This paper introduces a topological description of spiral wave chaos in terms of the dynamics of wavefronts, wavebacks, and point defects – phase singularities – that anchor the wavelets. This description both allows a dramatic simplification of the spatiotemporally chaotic dynamics and enables quantitative prediction of the key properties of excitation patterns. § INTRODUCTION Atrial fibrillation (AF) is the most common sustained cardiac arrhythmia <cit.>. While not itself lethal, it has a number of serious side effects, such as increased risk of stroke and systemic thromboembolism <cit.>. The origin of AF has been debated through much of the previous century <cit.>.In 1913, Mines proposed that fibrillation is caused by a reentrant process <cit.>, which leads to a high-frequency wave propagating away from the reentry site and breaking up into smaller fragments. This mechanism is presently referred to as anatomical reentry and requires a structural heterogeneity of the cardiac tissue, such as a blood vessel (e.g., vena cava). Reentry could also be functional <cit.>, where the heterogeneity (i.e., tissue refractoriness) is dynamical in nature. Neither picture, however, explains the complexity and irregularity of the resulting dynamics.The first qualitative explanation of this complexity came in the form of the multiple wavelet hypothesis proposed by Moe <cit.>. In this hypothesis multiple independent wavelets circulate around functionally refractory tissue, with some wavelets running into regions of reduced excitability and disappearing and others breaking up into several daughter wavelets, leading to a dynamical equilibrium. This picture was subsequently evaluated and refined based on numerical simulations <cit.> and experiments <cit.>. 
Krinsky <cit.> and then Winfree <cit.> suggested that the dynamical mechanism of fibrillation relies on the formation and interaction of spiral waves. Spiral waves rotate around phase singularities that may or may not move, producing reentry that requires neither structural nor dynamical heterogeneity in refractoriness. The presence and crucial role of spiral waves in AF were later confirmed in optical phase mapping experiments <cit.>. Experimental evidence shows that spiral waves tend to be very unstable: only a small fraction of these complete a full rotation <cit.>, with the dynamics dominated by what appear to be wavebreaks or wave breakups (WBs). Although there is plentiful experimental and computational <cit.> evidence that WBs play a crucial role in the transition to fibrillation, it is far less obvious that this mechanism is essential for maintaining AF. As Liu et al. <cit.> write, “for a wave to break, its wavelength must become zero at a discrete point somewhere along the wave. This can happen if the wave encounters refractoriness that creates local block (wavelength = 0), while propagating elsewhere. Therefore, WBs can be detected at locations where activating wavefronts meet the repolarization wavebacks.” The data produced by experimental studies are highly unreliable in this regard, since detecting the position of wavefronts and wavebacks based on optical recordings is far from straightforward. Numerical simulations, on the other hand, have focused mostly on the transition, rather than sustained AF.
Theoretical studies of model systems such as the complex Ginzburg-Landau equation <cit.> and FitzHugh-Nagumo equation <cit.> lack dynamical features, such as the alternans instability <cit.>, that are believed to play an essential role in conduction block that leads to WBs and AF <cit.>. Even if WBs do play a role in maintaining AF, they tell only a part of the story. Indeed, in sustained AF, despite some variation, the quantitative metrics such as the number of wavelets or phase singularities have to remain in dynamical equilibrium. While WBs may explain the increase in the number of wavelets and phase singularities, they cannot explain how these numbers might ever decrease. The multiple wavelet hypothesis <cit.> comes the closest to providing all the necessary ingredients for such a dynamical equilibrium, but it lacks sufficient detail to be either validated or refuted. The main objective of this paper is to construct a mathematically rigorous topological description of the dynamics and to use this description to characterize and classify different dynamical events that change the topological structure of the pattern of excitation waves in a state of sustained atrial fibrillation. We will focus on the smoothed version <cit.> of the Karma model <cit.>,∂_tu⃗ = D∇^2u⃗ + f⃗(u⃗),where u⃗(t,x⃗) = [u_1,u_2](t,x⃗), f_1 =(u^* - u_2^M){1 - tanh(u_1-3)}u_1^2/2 - u_1, f_2 =ϵ{βΘ_s(u_1-1) + Θ_s(u_2-1)(u_2-1) - u_2},where Θ_s(u)=[1+tanh(su)]/2, u_1 is the (fast) voltage variable, and u_2 is the (slow) gating variable. The parameter ϵ describes the ratio of the corresponding time scales, s is the smoothing parameter, and the diagonal matrix D of diffusion coefficients describes spatial coupling between neighboring cardiac cells (cardiomyocytes). The parameters of the model <cit.> are M=4, ϵ = 0.01, s = 1.2571, β = 1.389, u^* = 1.5415, D_11 = 4.0062, and D_22 = 0.20031.
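The smoothed Karma model above can be integrated with a simple explicit finite-difference scheme. The sketch below uses the kinetics and parameter values quoted in the text; the grid size, spacing, time step, and stimulus are illustrative choices, not values taken from the paper:

```python
import numpy as np

# Parameters of the smoothed Karma model quoted in the text
M, eps, s, beta, ustar = 4, 0.01, 1.2571, 1.389, 1.5415
D11, D22 = 4.0062, 0.20031

def karma_rhs(u1, u2):
    """Reaction terms f1, f2 of the smoothed Karma model."""
    theta = lambda x: 0.5 * (1 + np.tanh(s * x))
    f1 = (ustar - u2**M) * (1 - np.tanh(u1 - 3)) * u1**2 / 2 - u1
    f2 = eps * (beta * theta(u1 - 1) + theta(u2 - 1) * (u2 - 1) - u2)
    return f1, f2

def laplacian(u, h):
    """5-point Laplacian with no-flux (Neumann) boundaries via edge padding."""
    up = np.pad(u, 1, mode="edge")
    return (up[:-2, 1:-1] + up[2:, 1:-1] + up[1:-1, :-2] + up[1:-1, 2:] - 4 * u) / h**2

def step(u1, u2, dt=0.05, h=1.0):
    """One explicit Euler step; dt and h are illustrative (dt < h^2 / (4 D11))."""
    f1, f2 = karma_rhs(u1, u2)
    return (u1 + dt * (D11 * laplacian(u1, h) + f1),
            u2 + dt * (D22 * laplacian(u2, h) + f2))

# Seed a localized suprathreshold excitation on a small grid and integrate briefly
n = 64
u1, u2 = np.zeros((n, n)), np.zeros((n, n))
u1[28:36, 28:36] = 3.0
for _ in range(200):
    u1, u2 = step(u1, u2)
print(u1.max(), u2.max())
```

In practice, studies of spiral wave chaos in this model use much larger domains, operator splitting, and cross-field stimulation protocols to initiate spirals; this sketch only demonstrates that the quoted kinetics support propagating excitation.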
This is the simplest model of cardiac tissue that develops sustained spiral wave chaos from an isolated spiral wave through the amplification of the alternans instability resulting in conduction block and wave breaks, mirroring the transition from tachycardia to fibrillation. The outline of the paper is as follows: Section <ref> introduces the topological description of the complicated multi-spiral states. Section <ref> discusses the relationship between tissue refractoriness and conduction block. The dynamical mechanisms that maintain spiral wave chaos are presented in Section <ref>. Section <ref> discusses the statistical measures quantifying sustained dynamics, and Section <ref> contains the discussion of our results and conclusions.§ WAVE ANATOMY To quantitatively describe the topological changes such as wave breakup and creation/destruction of spiral cores we must first discuss the anatomy of excitation waves and define the appropriate terminology. §.§ Wavefront and waveback The region of excitation can be thought of as being bounded by the wavefront, which describes fast depolarization of cardiac cells, and the waveback, which describes a typically slower repolarization. The most conventional definition of the wavefront and waveback that has been used in both experiment <cit.> and numerical simulations <cit.> is based on a level set of the voltage variable, u_1(t,x⃗) = u̅_1. If we define the region of repolarizationR={x⃗ | ∂_tu_1(t,x⃗) < 0 },then the wavefront/waveback is the part of the level set outside/inside R. The choice of the voltage threshold u̅_1 is arbitrary and is typically taken as a percentage of the difference between the voltage maximum and its value in the rest state <cit.>.
This very simple definition allows easy identification of the action potential duration (APD) and diastolic interval (DI), where the percentage is often used as a subscript which refers to the choice of the threshold (e.g., DI_80 corresponds to 80% of the difference). Analytical studies tend to use a different definition <cit.> for the wavefront and waveback, which is based on scale separation between the dynamics of fast variables, such as voltage, and slow variables, such as potassium concentration (the voltage u_1 and the gating variable u_2, respectively, in the Karma model). For the simplest two-variable models (Karma, Barkley <cit.>, FitzHugh-Nagumo <cit.>, Rinzel-Keller <cit.>, etc.), in the limit ϵ→0, the excitation waves are related to a limit cycle oscillation in the (u_1,u_2) plane governed by the system of coupled ordinary differential equations <cit.>u̇_1= f_1(u⃗), u̇_2= f_2(u⃗). The wavefront and waveback correspond to the segments of the limit cycle solution of (<ref>) connecting the two stable branches of the u_1-nullcline f_1(u⃗)=0 for which du_2/du_1<0 (they are denoted with superscripts - and +, as illustrated in Fig. <ref>(a)). In the limit ϵ→ 0 these segments become horizontal lines and describe very fast (in time) variation of the voltage variable. In space, both the wavefront and the waveback have widths that scale as √(ϵ). In the limit ϵ→0 they become very sharp (green curves in Fig. <ref>(b)) and can be thought of as the boundaries of the region of excited tissue E={x⃗ | f_1^+(u⃗(t,x⃗)) = O(ϵ) }, shown as the green shaded area in Fig. <ref>(b). These boundaries can be defined with equal precision using any curve in the (u_1,u_2) plane that bisects both the wavefront and the waveback segments of the limit cycle.
If we define this curve as the zero level set of an indicator functiong(u⃗)=0,the wavefront and the waveback in the physical space at a particular time t are given by∂ E={x⃗ | g(u⃗(t,x⃗)) = 0 }.In particular, a level set of the voltage variable discussed previously corresponds to a vertical line in Fig. <ref>(a), g(u⃗)=u_1-u̅_1. A less arbitrary and dynamically better justified choice g(u⃗)=f_1^0(u⃗) corresponds to the unstable branch of the u_1-nullcline (for which du_2/du_1 > 0). In order to generalize this choice to finite values of ϵ, the unstable branch has to be extended beyond its end points u^≶ where du_2/du_1 = 0, e.g.,g(u⃗)={ u_1-u_1^<, u_2≤ u_2^<, f_1^0(u⃗), u_2^<<u_2<u_2^>, u_1-u_1^>, u_2≥ u_2^>. .The excited region where g(u⃗)>0 according to (<ref>) is shaded green in Fig. <ref>(a). Yet another alternative suggested by Fig. <ref>(a) is to use the u_2-nullcline g(u⃗)=f_2(u⃗), which also bisects both the wavefront and the waveback. As time evolves, the level set (<ref>) moves with normal velocityc=-b⃗·∂_tu⃗/b⃗·∂_nu⃗,where b⃗=∂ g/∂u⃗, ∂_n=n⃗·∇ is the directional derivative, and n⃗ is the outside normal to ∂ E. This normal velocity is taken to be positive for the wavefront and negative for the waveback. In particular, for g(u⃗)=u_1-u̅_1, (<ref>) simplifies, yielding c=-∂_tu_1/∂_nu_1, so the sign of c corresponds to the sign of ∂_tu_1 for proper choices of u̅_1 such that ∂_nu_1<0 over the entire level set. Whatever the choice of the bisecting curve g(u⃗)=0 in the (u_1,u_2) plane is, it may not define a continuous curve in the physical space at all times. Indeed, the PDE model (<ref>) does not take the cellular structure of the tissue into account.
Instead, a spatial discretization of (<ref>) should be used, so that the field u⃗ becomes a discontinuous function of space. In this case the wavefront and waveback should instead be defined as the boundary ∂ E of the regionE={x⃗ | g(u⃗(t,x⃗)) > 0 },rather than the level set (<ref>). There are several serious problems with the “local” definitions discussed above, which are based on the kinetics of isolated cardiac cells. For one, they ignore the coupling between neighboring cells in tissue (electrotonic effects) and hence cannot correctly describe the essential properties of excitability and refractoriness, making a quantitative description of conduction block impossible. Furthermore, since the value of ϵ is not vanishingly small for physiologically relevant models, the widths of both the wavefront and the waveback become finite as well, so different choices of g(u⃗) can produce rather distinct results in the physical space. Additional complications will be discussed below. The definitions of the wavefront and waveback can be generalized for a tissue model by noticing that the level sets such as f_1^0(u⃗)=0 or f_2(u⃗)=0, in the limit ϵ→ 0, coincide with the level sets ∂_tu_1=0 and ∂_tu_2=0. These are special cases of a more general relationg(u⃗)=a⃗·∂_tu⃗=0,where a⃗=(cosα,sinα) and 0<α<π is a parameter that can be chosen to properly describe the refractoriness and excitability of the model for finite values of ϵ. The wavefront and waveback can again be distinguished as the parts of the level set that lie outside or inside R, respectively. We will set α=π/2 below, which yields the following definition∂ E = {x⃗ | ∂_t u_2(t,x⃗) = 0 }. The level set (<ref>) is shown as the dashed blue line in Fig. <ref>(b). As Fig. <ref>(a-b) illustrates, for the Karma model, it gives an extremely good agreement with the more conventional definition based on (<ref>) for both the wavefront and the waveback.
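On a discrete grid, the definition ∂E = {x⃗ | ∂_t u_2 = 0} amounts to finding sign changes of ∂_t u_2 between neighboring grid points, after which wavefront and waveback pixels are distinguished by the sign of ∂_t u_1 (i.e., whether the point lies outside or inside R). A minimal sketch, tested here on synthetic plane-wave fields rather than model output:

```python
import math

def classify_boundary(u1_t, u2_t):
    """Mark grid plaquettes crossed by the level set {∂_t u_2 = 0} and split them
    into wavefront (∂_t u_1 > 0) and waveback (∂_t u_1 < 0) pixels."""
    front, back = [], []
    ny, nx = len(u2_t), len(u2_t[0])
    for i in range(ny - 1):
        for j in range(nx - 1):
            corners = (u2_t[i][j], u2_t[i][j + 1], u2_t[i + 1][j], u2_t[i + 1][j + 1])
            if min(corners) < 0.0 <= max(corners):       # zero crossing of ∂_t u_2
                (front if u1_t[i][j] > 0 else back).append((i, j))
    return front, back

# Synthetic plane wave: ∂_t u_1 = cos(x - 0.1), ∂_t u_2 = sin(x - 0.1)
n = 40
x = [2 * math.pi * j / n for j in range(n)]
u1_t = [[math.cos(xj - 0.1) for xj in x] for _ in range(n)]
u2_t = [[math.sin(xj - 0.1) for xj in x] for _ in range(n)]
front, back = classify_boundary(u1_t, u2_t)
print(len(front), len(back))   # one wavefront column and one waveback column
```

For a plane wave the detected ∂E consists of two straight lines, one wavefront and one waveback, as expected; in the chaotic regime the same classification yields the fragmented wavelets discussed in the text.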
Variation in α by O(ϵ) has a very weak effect on the position of the wavefront (which is very sharp), but has a more pronounced effect on the position of the waveback (which is much broader). §.§ Phase singularities Typically (although certainly not always <cit.>), the temporal frequency and wavelength of spiral waves are controlled by their central region, usually referred to as a spiral core or rotor <cit.>.This region is spatially extended and its size can be characterized using the adjoint eigenfunctions of the linearization <cit.>.In practice it is more convenient to deal with a single point that describes the location of the core region. In particular, the center of this region is associated with a phase singularity, where the amplitude of oscillation vanishes. The location of the phase singularity depends on the definition of the phase, however, and the proper definition is far from obvious for strongly nonlinear oscillations characteristic of excitable systems. The methods based on phase <cit.> or amplitude <cit.> reconstruction rely on the dynamics being nearly recurrent and break down for spatiotemporally chaotic states featuring frequent topologically nontrivial events such as the creation or annihilation of spiral cores.A more conventional (and convenient) approach is to use instead the spiral tip, which is a point on the boundary ∂ E of the excited region that separates the depolarization wavefront from the repolarization waveback (green circle in Fig. <ref>(b)).A number of different definitions of the spiral tip have been introduced in the literature.The most popular are the ones based on the level-set intersection (LSI) <cit.>u_1(t,x⃗) = u̅_1, f_1^0(u̅_1,u_2)=0,or zero normal velocity (ZNV) <cit.>u_1(t,x⃗) = u̅_1,∂_tu_1(t,x⃗) = 0,or the curvature κ of the level set u_1(t,x⃗) = u̅_1 <cit.>. In particular, ZNV and LSI define the spiral tip(s) x⃗_p(t) as intersection of two level sets which are much easier to compute than the curvature. 
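Both LSI and ZNV reduce to locating the intersections of two zero level sets on a grid. A minimal sketch of such a tip finder follows; the two tilted-plane fields used in the demonstration are hypothetical stand-ins for u_1-u̅_1 and ∂_t u_1, chosen only because their intersection point is known exactly:

```python
import numpy as np

def level_set_intersections(F, G, x, y):
    """Locate intersections of the zero level sets of two fields sampled on a
    rectangular grid, by solving the locally linearized 2x2 system in each
    grid cell where both fields change sign."""
    pts = []
    for i in range(len(x) - 1):
        for j in range(len(y) - 1):
            f = F[i:i+2, j:j+2]
            g = G[i:i+2, j:j+2]
            if f.min() < 0 < f.max() and g.min() < 0 < g.max():
                dx, dy = x[i+1] - x[i], y[j+1] - y[j]
                # linearize both fields about the cell corner (x[i], y[j])
                A = np.array([[(f[1,0]-f[0,0])/dx, (f[0,1]-f[0,0])/dy],
                              [(g[1,0]-g[0,0])/dx, (g[0,1]-g[0,0])/dy]])
                b = -np.array([f[0,0], g[0,0]])
                try:
                    s = np.linalg.solve(A, b)
                except np.linalg.LinAlgError:
                    continue
                if 0 <= s[0] <= dx and 0 <= s[1] <= dy:
                    pts.append((x[i] + s[0], y[j] + s[1]))
    return np.array(pts)

# Demonstration with two hypothetical planar fields crossing at (0.307, -0.213).
x = np.linspace(-1.0, 1.0, 101)
y = np.linspace(-1.0, 1.0, 101)
X, Y = np.meshgrid(x, y, indexing='ij')
tips = level_set_intersections(X - 0.307, Y + 0.213, x, y)
```

For ZNV the second field would be a finite-difference estimate of ∂_t u_1; for LSI it would be f_1^0(u̅_1, u_2) evaluated on the grid.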
LSI can be thought of as the limiting case of ZNV where D_11→ 0, so the difference between the positions of spiral tips defined using these two methods provides a measure of the importance of electrotonic effects. Although they are simple to define, spiral tips typically exhibit spurious dynamical effects. For instance, they move (in circular trajectories) for spiral wave solutions of (<ref>) rigidly rotating around an origin x⃗' (née relative equilibria), which satisfy ∂_tu⃗(t,x⃗) = ω∂_θu⃗(t,x⃗), where ∂_θ = ẑ⃗̂·(x⃗-x⃗')×∇ and ω=2π/T is the angular frequency. Indeed, even if the normal velocity (<ref>) of the spiral tip vanishes, its tangential velocity will not vanish, unless u̅_1=u_1(x⃗'). Hence, spiral tips are not ideally suited to be used as indicators of the wave dynamics. The phase singularity, unlike the spiral wave tip, should remain stationary for a rigidly rotating spiral wave. This requires that the location x⃗_o(t) of every phase singularity satisfies ∂_tu⃗(t,x⃗_o) = 0⃗. Equivalently, the x⃗_o(t) correspond to the intersections of the level sets ∂ R={x⃗ | ∂_tu_1(t,x⃗)=0} and ∂ E defined according to (<ref>), i.e., phase singularities are points on the boundary of the excited region that separate the refractory region from the excitable region. Note that, for ∂ E defined by (<ref>), its intersections with ∂ R are independent of α, and so is the definition of the phase singularities. This is explicit in the definition (<ref>). It is easy to see that the boundaries ∂ E and ∂ R merely correspond to different level sets φ=α±π/2 and φ=±π/2 of the phase field φ=arg(∂_tu_1+i∂_tu_2), so x⃗_o(t) indeed corresponds to a phase singularity of the phase field (<ref>). Since they are defined locally (just like the spiral wave tips defined via LSI and ZNV), the phase singularities can be easily determined for arbitrarily complicated solutions.
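A robust way to locate phase singularities of the phase field numerically is to compute the winding number of φ around each grid plaquette: nonzero winding flags a singularity, and its sign gives the topological charge. A sketch follows; the two rate fields in the demonstration are synthetic, with a single +1 defect placed at a hypothetical point:

```python
import numpy as np

def phase_singularities(dtu1, dtu2, x, y):
    """Detect singularities of phi = arg(dtu1 + i*dtu2) by computing the
    winding number of phi around each grid plaquette."""
    phi = np.arctan2(dtu2, dtu1)

    def wrap(a):  # map angle differences into [-pi, pi)
        return (a + np.pi) % (2 * np.pi) - np.pi

    pts = []
    for i in range(phi.shape[0] - 1):
        for j in range(phi.shape[1] - 1):
            # accumulate phase increments counter-clockwise around the cell
            loop = (wrap(phi[i+1, j] - phi[i, j])
                    + wrap(phi[i+1, j+1] - phi[i+1, j])
                    + wrap(phi[i, j+1] - phi[i+1, j+1])
                    + wrap(phi[i, j] - phi[i, j+1]))
            q = int(round(loop / (2 * np.pi)))
            if q != 0:
                pts.append((0.5*(x[i]+x[i+1]), 0.5*(y[j]+y[j+1]), q))
    return pts

# Demonstration: dt(u1) = x - x0, dt(u2) = y - y0 has one +1 singularity
# at the hypothetical point (x0, y0) = (0.11, -0.07).
x = np.linspace(-1.0, 1.0, 41)
y = np.linspace(-1.0, 1.0, 41)
X, Y = np.meshgrid(x, y, indexing='ij')
sing = phase_singularities(X - 0.11, Y + 0.07, x, y)
```

Because the detection is purely local, it applies unchanged to arbitrarily complicated multi-spiral solutions.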
More generally, x⃗_o(t) can be interpreted as the instantaneous center of rotation for slowly drifting spiral waves, i.e., for which the rotation-averaged translation of the spiral wave core is much smaller than the typical propagation velocity c of excitation waves. The positions of spiral wave tips and phase singularities are compared in Fig. <ref>(c). For the LSI and ZNV definitions, the positions of the spiral tips are shown for 1.68 ≤u̅_1 ≤ 2.11, corresponding to the voltage threshold between _70 and _90, respectively <cit.>. Clearly, electrotonic effects are non-negligible for the present model, as the tip positions predicted by LSI are dramatically different from those defined by ZNV over a range of choices of u̅_1. The tip positions defined by ZNV much more closely match the phase singularities defined by (<ref>), and mostly differ in position in the direction normal to the wavefront (along the local gradient of u_1). To conclude this section, we should mention that, for a typical multi-spiral solution, u_1 varies rather significantly across the spatial domain, while u_2 is restricted to a narrow range of O(ϵ) width around the value u̅_2 that corresponds to the “stall solution” for a planar front connecting the two stable branches of the u_1-nullcline, ∫_u_1^-^u_1^+ f_1(u_1,u̅_2) du_1=0, where f_1^±(u_1^±,u̅_2)=0 (cf. Fig. <ref>(a)). This is a characteristic value that corresponds to the phase singularities, as shown by Fife <cit.>. For the parameters used here <cit.>, u̅_2=0.9724. §.§ Topological description We can associate chirality (topological charge) q_j=± 1 with each of the phase singularities (enumerated by j=1,2,…), which determines whether the spiral wave rotation is counter- or clockwise. Chirality can be defined locally <cit.> as q_j= sign(ẑ·∇ u_1×∇ u_2). For spatially discretized models it is more reliable to use a nonlocal definition of chirality.
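The stall condition above can be solved for u̅_2 by bisection, using the fact that the area integral decreases monotonically with u_2. The sketch below uses a generic symmetric cubic as a stand-in for the Karma-model f_1 (for which the stall value is zero by symmetry); it does not reproduce the actual kinetics that yield u̅_2=0.9724:

```python
import numpy as np

def stall_value(f1, u2_lo, u2_hi, u_range=(-2.0, 2.0), n=2001, tol=1e-10):
    """Bisect for the 'stall' value of u2 at which the integral of f1(u1, u2)
    between the outermost roots u1^-(u2) and u1^+(u2) vanishes."""
    u = np.linspace(u_range[0], u_range[1], n)

    def area(u2):
        f = f1(u, u2)
        idx = np.where(np.diff(np.sign(f)) != 0)[0]  # cells bracketing roots
        lo, hi = u[idx.min()], u[idx.max()]          # outermost roots (approx.)
        m = (u >= lo) & (u <= hi)
        # trapezoidal quadrature of f between the outer roots
        return np.sum(0.5 * (f[m][1:] + f[m][:-1]) * np.diff(u[m]))

    a, b = u2_lo, u2_hi
    sa = np.sign(area(a))
    while b - a > tol:
        mid = 0.5 * (a + b)
        if np.sign(area(mid)) == sa:
            a = mid
        else:
            b = mid
    return 0.5 * (a + b)

# Stand-in kinetics: for f1 = u1 - u1^3 - u2 the stall value is 0 by symmetry.
u2_stall = stall_value(lambda u1, u2: u1 - u1**3 - u2, -0.2, 0.2)
```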
Let us define a neighborhood of each phase singularity x⃗_o,j using the window function w_j(x⃗) = exp(-r_j/d_j), where r_j=|x⃗-x⃗_o,j| and d_j=min_k≠ j |x⃗_o,j-x⃗_o,k| is the distance to the nearest distinct phase singularity. Further, let us define the pseudo-chirality q̃_j of each spiral wave as the value for which the function J(q̃_j)=∫_Ω d^2x⃗ w_j(x⃗) |∂_tu⃗(t,𝐱)-q̃_jω∂_θu⃗(t,x⃗) |^2 is minimized. The functional J(q̃_j) defines a local reference frame rotating with angular velocity q̃_jω around the phase singularity x⃗_o,j; this functional is minimized for spiral waves that are stationary in that reference frame. For a single rigidly rotating spiral wave, the chirality is precisely ± 1. Minimization of (<ref>) for complex multi-spiral states produces pseudo-chirality values equal to ± 1 within a few percent, such that we can safely define q_j= sign(q̃_j). In practice, this definition proves very robust when spiral cores are sufficiently well separated, i.e., when d_j exceeds the width of an isolated spiral core <cit.>. Since phase singularities by definition (<ref>) lie on the level set ∂ E, for periodic boundary conditions, wavefronts and wavebacks can only terminate at a spiral core (or, more precisely, a phase singularity). Conversely, in multi-spiral states, each wavefront and waveback is bounded by a pair of spiral cores of opposite chirality. In the modern electrophysiology literature wavelets are identified with wavefronts <cit.>. Consequently, the events when a wavelet is created (destroyed) are associated with an increase (decrease) in the number of spiral cores by two. Although the total number of spiral cores is not conserved, the total topological charge q=∑_jq_j=0 is conserved <cit.>. A number of topologically distinct processes which respect (<ref>) are possible.
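Since J(q̃_j) is quadratic in q̃_j, its minimizer is available in closed form as the ratio of the windowed projections, q̃_j = Σ w ∂_tu⃗·ω∂_θu⃗ / Σ w |ω∂_θu⃗|^2. A sketch of this estimator follows; the test field is a synthetic rigidly rotating pattern (not a spiral wave solution), constructed so that the exact pseudo-chirality is -1:

```python
import numpy as np

def pseudo_chirality(dtu, u, x0, y0, omega, x, y, d):
    """Closed-form minimizer of J(q) = sum_k sum w |dtu_k - q*omega*dth(u_k)|^2,
    where dth = (x-x0)*d/dy - (y-y0)*d/dx is the rotation generator about
    (x0, y0) and w = exp(-r/d) is the window function."""
    X, Y = np.meshgrid(x, y, indexing='ij')
    w = np.exp(-np.hypot(X - x0, Y - y0) / d)
    num = den = 0.0
    for dtu_k, u_k in zip(dtu, u):
        dudx = np.gradient(u_k, x, axis=0)
        dudy = np.gradient(u_k, y, axis=1)
        dthu = (X - x0) * dudy - (Y - y0) * dudx
        num += np.sum(w * dtu_k * omega * dthu)
        den += np.sum(w * (omega * dthu) ** 2)
    return num / den

# Synthetic single-variable test field: u = x*exp(-r^2), for which the
# rotation generator gives dth(u) = -y*exp(-r^2) analytically. We impose
# rigid clockwise rotation, dt(u) = q*omega*dth(u) with q = -1, omega = 0.7.
x = y = np.linspace(-3.0, 3.0, 121)
X, Y = np.meshgrid(x, y, indexing='ij')
env = np.exp(-(X**2 + Y**2))
dt_u = -1.0 * 0.7 * (-Y * env)
q_tilde = pseudo_chirality([dt_u], [X * env], 0.0, 0.0, 0.7, x, y, d=2.0)
```

The chirality is then q = sign(q̃); for multi-variable tissue models the lists `dtu` and `u` would contain one array per field.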
Although some of these correspond to the time-reversed version of the others, the dynamics of the dissipative systems are not time-reversible and do not have to respect the symmetry between these processes.In fact, as we will see below, in excitable systems such as the Karma model the dominant topological processes increasing/decreasing the number of spiral cores are not related by time-reversal symmetry.§ CONDUCTION BLOCK AND TISSUE REFRACTORINESS As we mentioned previously, conduction block plays a major role in wave breakup, which is essential for transition to fibrillation and spiral wave chaos in general.The origin of conduction block can be structural, i.e., related to tissue heterogeneity <cit.>, but it can also be dynamical, i.e., occur in homogeneous tissue as a result of an instability. For instance, conduction block can occur when a receding waveback is moving slower than the subsequent advancing wavefront <cit.>.In fact, there is a variety of other dynamical mechanisms leading to conduction block <cit.>.Conduction block refers to the failure of an excitation front to propagate because the tissue ahead of it is refractory and cannot be excited.Refractoriness is traditionally <cit.> defined on the level of individual cardiac cells by quantifying whether a voltage perturbation applied to the quiescent state of the cell will trigger an action potential. These definitions are not particularly useful for understanding conduction block in tissue for two reasons <cit.>: First of all, in tissue the excitation wave is triggered by coupling between neighboring cells, rather than a voltage perturbation to an isolated cell.Second, in tissue, especially during AF or tachycardia, which are both characterized by very short s, the cells never have sufficient time to return to the rest state. 
§.§ Low-curvature wavefront In the Karma model, conduction block can arise as a result of the discordant alternans instability <cit.>, which leads to variation in the width and duration of action potentials. For excitation waves with low curvature, we can determine the boundary of the refractory region by considering a one-dimensional periodic pulse train. In the reference frame moving with the velocity c of the wavefront, the voltage variable satisfies the evolution equation D_11 u''_1 + c u'_1 + f_1(u⃗) = 0, provided the pulse train does not change shape, where u'_1 = ∂_ξu_1(ξ), u''_1 = ∂_ξ^2u_1(ξ), and ξ = x-ct. For sufficiently small , the conduction velocity c decreases monotonically with and vanishes identically at finite <cit.>. This means that there are no propagating solutions below this value of . At the critical value of , we have c=0, so the wavefront fails to propagate when D_11u''_1 + f_1(u⃗) = 0. For plane waves in two dimensions, u''_1=∇^2 u_1, so combining this with the evolution equation (<ref>) we find that the boundary of the refractory region is given by ∂_tu_1=D_11∇^2 u_1+f_1(u⃗) = 0 and coincides with the boundary (<ref>) of the repolarization region. Similarly, the refractory region can be identified with the region of repolarization (<ref>). This makes intuitive sense: whatever the conditions are, the voltage increases outside the refractory region. Although derived for the very special case of one-dimensional periodic pulse trains, this definition of the refractory region works well even for states that are not time-periodic and feature excitation waves with significant curvature. This is illustrated in Fig. <ref>, which shows the time trace of the variables u_1(t,x⃗_0) and u_2(t,x⃗_0) for a spatiotemporally chaotic solution similar to that shown in Fig. <ref>. The point x⃗_0 was chosen near the spatial location where conduction block occurs (such as the center of the marked region in Fig. <ref> below).
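The traveling-wave balance above can be illustrated by time-stepping a one-dimensional front and measuring its speed. The sketch below substitutes the classical bistable (Nagumo) kinetics, for which the front speed is known exactly, in place of the Karma-model f_1 and its parameter values:

```python
import numpy as np

# Stand-in check of D*u'' + c*u' + f(u) = 0: time-step u_t = u_xx + f(u)
# with f(u) = u(1-u)(u-a), whose exact front speed is c = (1-2a)/sqrt(2).
a = 0.25
dx = 0.1
x = np.arange(0.0, 60.0, dx)
u = 0.5 * (1.0 - np.tanh(x - 10.0))   # front initially near x = 10
dt = 0.2 * dx**2                      # well under the explicit stability limit

def step(u):
    lap = np.empty_like(u)
    lap[1:-1] = (u[2:] - 2.0*u[1:-1] + u[:-2]) / dx**2
    lap[0], lap[-1] = lap[1], lap[-2]  # crude no-flux ends
    return u + dt * (lap + u*(1.0 - u)*(u - a))

def front_position(u):
    i = int(np.argmax(u < 0.5))        # first point below the threshold
    # linear interpolation of the u = 0.5 crossing
    return x[i-1] + dx * (u[i-1] - 0.5) / (u[i-1] - u[i])

n = 5000                               # n*dt = 10 time units per interval
for _ in range(n):
    u = step(u)
p1 = front_position(u)
for _ in range(n):
    u = step(u)
p2 = front_position(u)
c_measured = (p2 - p1) / (n * dt)
c_exact = (1.0 - 2.0*a) / np.sqrt(2.0)
```

Repeating the measurement at decreasing pacing periods (pulse trains instead of a single front) would trace out the monotone decrease of c described in the text.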
The excited and refractory intervals (temporal analogues of the excitable and refractory regions) are shown as red- and blue-shaded rectangles in Fig. <ref>; they are bounded by the level sets ∂ R and ∂ E in Fig. <ref>.As expected, we find that, when the wavefronts are well separated from the trailing edges of the refractory intervals (e.g., at t≈ 140), long, large-amplitude action potentials are found. This is in sharp contrast with the short and low-amplitude action potential that is initiated at t≈ 29.5, soon after the previous refractory interval ends at t≈ 26.5. As the location of conduction block is approached (not shown), the separation between the trailing edge of the refractory interval and the subsequent excitation wavefront vanishes along with the action potential itself. This suggests that conduction block occurs when and where the level sets ∂ R and ∂ E first touch, in agreement with Winfree's critical point hypothesis <cit.>.§.§ High-curvature wavefront There are, however, other dynamical mechanisms that can lead to conduction block. Consider, for instance, the opposite situation when the curvature of the wavefront is high. For curved wavefronts the propagation speed c decreases as the curvature κ of the wavefront increases, so there is a critical value of the curvature at which the wave fails to propagate <cit.>. It can be estimated in the limit D_22→ 0 using the eikonal approximation <cit.> which givesc = c_0 - D_11κ,where c_0 is the velocity of a planar wavefront. Using the value of c_0 = λ/T ≈ 1.44 which corresponds to a large rigidly rotating spiral wave <cit.>, we find κ^-1=r_c ≈ 2.8, where r_c is the critical radius of curvature of the wavefront.A more accurate estimate for r_c can be obtained using the condition (<ref>) for conduction blockD_11[∂_r^2u_1 + r^-1∂_ru_1 + r^-2∂_θ^2u_1] + f_1(u⃗)=0and the definition of the wavefront (<ref>) rewritten via (<ref>)D_22[∂_r^2u_2 + r^-1∂_ru_2 + r^-2∂_θ^2u_2] + f_2(u⃗)=0in polar coordinates (r,θ). 
For small ϵ, u_2 varies slowly in both time and space and can be considered a constant, u_2=u̅_2, given by (<ref>). The first three terms of (<ref>) can therefore be ignored, so (<ref>) reduces to f_2(u⃗)=0. The third term D_11r^-2∂_θ^2u_1 in (<ref>) can also be neglected, since u_1 varies much faster in the direction normal to the wavefront r=r_c than in the tangential direction. Since u_2 is constant, subject to boundary conditions ∂_ru_1=0 at r=0 and r=∞, (<ref>) has rotationally symmetric solutions u_1=u_1(r) with a stationary wavefront at the critical radius r_c given by f_2(u_1(r_c),u̅_2)=0. The stationary solution of (<ref>) and (<ref>) is shown in Fig. <ref>. It corresponds to the critical radius of curvature r_c≈ 6, which is a factor of two larger than the value obtained using the eikonal approximation (<ref>). This solution is a two-dimensional analogue of the one-dimensional “critical nucleus” for excitation <cit.>. Wavefronts with radius of curvature larger than r_c propagate forward, while wavefronts with radius of curvature smaller than r_c retract (i.e., become wavebacks). Before we discuss the numerical results, let us emphasize that, with the proper choice of variables, the definitions of the wavefronts and wavebacks (<ref>), leading and trailing edges of the refractory region (<ref>), and phase singularities (<ref>) are model-independent and can be used to analyze both numerical and experimental data, provided that measurements of two independent variables (e.g., voltage and calcium) are available. Generalization of the topological description presented above to higher-dimensional models is discussed in the Appendix. § NUMERICAL RESULTS As mentioned in the introduction, during AF most spiral waves do not complete a full rotation. Spiral wave chaos in the Karma model produces qualitatively similar dynamics: topological changes involving a change in the number of spiral wave cores occur on the same time scale as the rotation.
(For the parameters considered in this study the rotation period is T≈ 51, which corresponds to 127 ms in dimensional units <cit.>.) The larger the spatial domain, the more frequent are the topological changes in the structure of the solution. However, as we discussed in Sect. <ref>, each topological event is essentially local and involves either birth or annihilation of a pair of spiral cores of opposite chirality. Of the different types of topological events, spiral wave breakup – associated with the creation of a new pair of spiral cores – has received the lion's share of the attention due to its role in the initiation of fibrillation. However, the number of spiral cores cannot increase forever; eventually a dynamic equilibrium is reached when the number of cores fluctuates about some average, with core creation balanced by core annihilation. To the best of our knowledge, the process(es) responsible for core annihilation have never been studied systematically. To investigate which of the topological events dominate and what the dynamical mechanisms underlying these events are, we performed a numerical study of the Karma model (<ref>)-(<ref>) on a square domain of side-length L=192 (5.03 cm), which is close to the minimal size required to support spiral wave chaos. Spatial derivatives were evaluated using a second-order finite-difference stencil, and a fourth-order Runge-Kutta method was used for time integration <cit.>. To avoid spurious topological transitions involving a boundary, periodic boundary conditions were used, unless noted otherwise. Before identifying topological transitions in the numerical simulations, it is worth enumerating the topologically distinct local configurations. In what follows it will be convenient to use the shorthand notations: ∂ E^+ (wavefront, ∂_t u_1>0), ∂ E^- (waveback, ∂_t u_1 < 0), ∂ R^+ (leading edge of the refractory region, ∂_t u_2>0), and ∂ R^- (trailing edge of the refractory region, ∂_t u_2<0).
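The spatial and temporal discretizations mentioned above can be sketched as follows; the snippet validates a periodic 5-point Laplacian and an RK4 stepper on the pure-diffusion limit, where the exact decay rate is known (the Karma-model reaction terms are not included):

```python
import numpy as np

def laplacian(u, dx):
    """Second-order 5-point Laplacian with periodic boundary conditions."""
    return (np.roll(u, 1, axis=0) + np.roll(u, -1, axis=0)
            + np.roll(u, 1, axis=1) + np.roll(u, -1, axis=1) - 4.0*u) / dx**2

def rk4_step(u, rhs, dt):
    """Classical fourth-order Runge-Kutta step for du/dt = rhs(u)."""
    k1 = rhs(u)
    k2 = rhs(u + 0.5*dt*k1)
    k3 = rhs(u + 0.5*dt*k2)
    k4 = rhs(u + dt*k3)
    return u + dt*(k1 + 2.0*k2 + 2.0*k3 + k4)/6.0

# Sanity check in the pure-diffusion limit: on a 2*pi-periodic domain a
# sin(x) mode decays as exp(-t), up to the O(dx^2) error of the stencil.
N = 64
dx = 2.0*np.pi/N
xg = np.arange(N)*dx
X, Y = np.meshgrid(xg, xg, indexing='ij')
u = np.sin(X)
dt = 1e-3
for _ in range(1000):                  # integrate to t = 1
    u = rk4_step(u, lambda v: laplacian(v, dx), dt)
```

A tissue simulation would pass a `rhs` combining the diffusive terms with the reaction kinetics of both fields.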
For a wave train in the region with no spiral cores, the boundaries of the refractory and excitable regions will follow the periodic sequence (…, ∂ R^-, ∂ E^-, ∂ R^+, ∂ E^+, …) in the co-moving frame (cf. Fig. <ref>(e)). §.§ Virtual pairs Deformation of (nearly planar) waves due to instability or heterogeneous refractoriness can lead to intersection of any pair of adjacent level sets (e.g., ∂ E^+ and ∂ R^-) and, correspondingly, creation of a new pair of spiral cores. From the topological perspective, there are four distinct possibilities shown in Figs. <ref>(b), <ref>(d), <ref>(f), or <ref>(h). The corresponding configurations are all transient and only persist for a fraction of the revolution time T of a typical spiral wave during topological transitions. Each of these transient configurations can undergo a total of five distinct topological transitions, including a reverse transition back to the initial configuration with nonintersecting level sets, associated with the destruction of the two new phase singularities. The other four persistent possibilities will be considered in subsequent sections. The transitions that correspond to crossing of two level sets followed by the reverse transition (indicated by horizontal or vertical double gray arrows in Fig. <ref>) produce a “virtual” core pair that appears and quickly disappears, restoring the original topological structure. The number of spiral cores and wavelets before and after these transitions remains exactly the same, so while such events do occur rather frequently, they do not play a dynamically important role and can be safely ignored. As discussed below, the topological transitions identified with the arrows in Fig. <ref> are all very fast; they occur on a time scale much shorter than the typical rotation period of a spiral.
The much slower transitions between panels <ref>(a) → <ref>(b) → <ref>(c) → <ref>(f) → <ref>(i) → <ref>(h) → <ref>(g) → <ref>(d) → ⋯ associated with figure-8 re-entry for two well-separated counter-rotating spirals are not shown for clarity. While the topology of some level sets (either ∂ E or ∂ R) changes during these transitions, neither the topological charge nor the number of phase singularities does, so these are not proper topological transitions, as defined in Section <ref>. In contrast, the topology of ∂ E and ∂ R need not change during the proper topological transitions. The trajectories of the two phase singularities and the distance between them in a representative example of a virtual pair are shown in Fig. <ref>. The phase singularities do not move far from their initial positions and remain close at all times. In fact, the distance between them never exceeds a fraction of the typical separation d_0 between persistent spiral cores (to be discussed in more detail in Sect. <ref>). Since the level sets are smooth curves, their intersections that define the positions of the cores move with infinite velocities at the time instants when the cores are created and destroyed. As a result, their motion near those times cannot be resolved using time stepping, explaining the gaps at the beginning and the end of the trajectories in Fig. <ref>(a). The transient configuration shown in Fig. <ref>(f) deserves a special mention. It corresponds to the phenomenon of “back-ignition” observed in some reaction-diffusion models, whereby the waveback can become a source of a new backward-propagating wave under appropriate conditions. While topologically permissible, this configuration is only observed as a very short transient in the present model, reflecting the asymmetry of initial conditions imposed by the dynamics. The relative likelihood of the four transient configurations shown in Figs.
<ref>(b), <ref>(d), <ref>(f), and <ref>(h) can in principle be computed using stability analysis of a planar wave train solution for any tissue model, but this is outside the scope of the present study. §.§ Wavelet/pair creation Next consider the transitions from the four intermediate configurations shown in Figs. <ref>(b), <ref>(d), <ref>(f), and <ref>(h) to configurations other than the initial one shown in Fig. <ref>(e).For each of these four transient configurations there are four distinct possibilities.Two possibilities are shown in Fig. <ref> (the other two will be discussed in the next Section): either of the two level set fragments connecting the cores can reconnect with the neighboring level set of the same type. For instance, the configuration shown in Fig. <ref>(h) can transform to the configuration shown in Figs. <ref>(g) or <ref>(i).If the crossing and reconnection occur simultaneously, the transition occurs directly between the persistent configuration shown in Fig. <ref>(e) and one of the persistent configurations shown in Figs. <ref>(a), <ref>(c), <ref>(g) or <ref>(i) without passing through any of the intermediate transient configurations. The dynamically allowed direct transitions, as determined based on the results of numerical simulations, are shown as diagonal gray arrows.Note that the transition between the configurations shown in Fig. <ref>(e) and <ref>(a) corresponds to wave breakup.It occurs when and where the wavefront reconnects with the waveback of the same excitation wave <cit.> as a result of conduction block. Since this topological process increases the number of disconnected excited regions, it is quite natural to find that it plays an important role in the transition from, say, normal rhythm or tachycardia (featuring a single excitation wave) to AF (featuring many separate wavelets). 
While wave breakup may be prevalent during the initial stage when AF is being established, we have not found a single instance of this topological event in our numerical simulations of sustained spiral wave chaos, casting serious doubt on the premise that wave breakup plays a dynamically important role in maintaining AF in tissue or in other models.Our numerical simulations reveal only one topological process that leads to a lasting increase in the complexity of the pattern. This process which we call “wave coalescence” corresponds to the transition from the initial configuration shown in Fig. <ref>(e) to the final configuration shown in Fig. <ref>(i), either directly or through the intermediate transient configurations shown in Figs. <ref>(f) and <ref>(h).A representative example from the simulations is shown in Fig. <ref>, where a purple rectangle marks the region of interest. Outside of this region the wavefronts are well-separated from the refractory tails, but inside the separation is markedly smaller (cf. Fig. <ref>(a)). The separation quickly decreases (cf. Fig. <ref>(b)) until the level sets ∂ E^- and ∂ R^- cross and two new spiral cores with opposite chirality are created (cf. Fig. <ref>(c)). Immediately after this the two parts of the level set ∂ E^+ reconnect, bringing the configuration to the topological state shown in Fig. <ref>(i). The cores separate (cf. Fig.  <ref>(c)) and the excited regions of two subsequent waves coalesce in the gap flanked by these two cores. Due to the high curvature of ∂ E the two new cores are quickly pulled apart, and two new counter-rotating spiral waves emerge, “locking in” the resulting topological configuration.This is illustrated by Fig. 
<ref> which shows the trajectories of the cores and the distance between them.It is worth noting that, before the spiral waves complete even half a revolution, the separation between the cores approaches the typical equilibrium distance <cit.> d_0.Our numerical simulations did not produce any examples of topological transitions to the configurations shown in Figs. <ref>(c) or <ref>(g).In the horizontal band bounded by the cores (indicated by lighter-shade gray), the corresponding states are characterized by the voltage variable that is changing slowly in space, since the distance d between the minimum of u_1 (dashed white line ∂ R^-) and the maximum of u_1 (solid white line ∂ R^+) is extremely large. Hence, the term D_11∇^2u_1∝ d^-2 in (<ref>) is negligible. Since D_22 is small, we can ignore the diffusive terms D_22∇^2u_2 as well and consider all cells in this region to be spatially decoupled, such that their dynamics is described well by (<ref>) and the phase diagram shown in Fig. <ref>(a).Consider the part of the band where u_1 is slowly and monotonically increasing in time (to the left of ∂ R^+ in Fig. <ref>(c)) or decreasing in time (to the left of ∂ R^- in Fig. <ref>(g)). The cells in this region should be in the state that lies close to either one of the stable u_1-nullclines (f_1^-=0 in the former case and f_1^+=0 in the latter case). According to Fig. <ref>(a), this entire region should lie either to the left or to the right of the u_2-nullcline, so ∂_t u_2 should be sign-definite, while in both Fig. <ref>(c) and <ref>(g) the sign of ∂_t u_2 changes (when the level set ∂ E connecting the two phase singularities is crossed). Hence, while these configurations are not forbidden on topological grounds, they are forbidden dynamically.Furthermore, we have not observed transitions from the persistent configurations shown in Figs. <ref>(a), <ref>(c), <ref>(g), and <ref>(i) to either the transient configurations shown in Figs. 
<ref>(b), <ref>(d), <ref>(f), and <ref>(h) or the persistent configuration shown in Fig. <ref>(e).While these transitions are allowed topologically, they appear to be forbidden dynamically.The only dynamically allowed (direct or indirect) transition irreversibly transforms the configuration with no spiral cores (Fig. <ref>(e)) to the configuration (Fig. <ref>(i)) with two spiral cores, increasing the total number of cores by two and the number of wavelets by one. §.§ Wavelet/pair destruction Finally, let us consider the topological transitions from the four intermediate configurations that have not been considered in the previous sections. We have redrawn these four configurations in Fig. <ref> in the same locations as in Fig. <ref>, dropping all non-essential level sets. For each of these intermediate configurations there are two possibilities which involve reconnection between the two extended branches of a level set that terminate at the cores. For instance, the configuration shown in Fig. <ref>(h) can transform to the configurations shown in Figs. <ref>(g) or <ref>(i). None of these transitions (shown as horizontal or vertical gray lines), in either the forward or the reverse direction, have been observed in numerical simulations, however.As we discussed previously, if the crossing and reconnection of the level sets occur simultaneously, the configuration transitions directly between the persistent configuration with no level set intersections (Fig. <ref>(e)) and one of the persistent configurations with a pair of intersections shown in Figs. <ref>(a), <ref>(c), <ref>(g), and <ref>(i) without passing through any of the intermediate configurations. The dynamically allowed direct transitions observed in the simulations are shown as diagonal gray arrows. Note that again there is no time-reversal symmetry: only the transitions that destroy the existing core pairs are dynamically allowed. Therefore the observed direct transitions shown in Fig. 
<ref> reduce the net number of spiral cores and wavelets balancing the increase due to wave coalescence.The configurations shown in Figs. <ref>(a), <ref>(c), <ref>(g), and <ref>(i) all describe a pair of counter-rotating spiral waves. In particular, the configurations in Figs. <ref>(c) and <ref>(g) correspond to multi-spiral states (wavefronts and/or wavebacks connect spiral cores inside and outside the region shown) and hence are quite typical.On the other hand, the configurations in Figs. <ref>(a) and <ref>(i) correspond to configurations with a single pair of spirals (wavefronts and wavebacks connect the phase singularities inside the region shown) and are never observed during sustained spiral wave chaos. Consequently, only transitions from the configurations in Figs. <ref>(c) and <ref>(g) are found in the simulations, with the vast majority of transitions involving the former configuration.To understand why and when this transition happens, consider the interaction between a pair of isolated counter-rotating spiral waves separated by distance d. (The interaction is short-range, so the presence of other, remote, spiral wave cores does not change the outcome.) Using the approximate mirror symmetry of the configuration, the dynamics can be understood by considering a single spiral interacting with a planar no-flux boundary at a distance ζ=d/2.As we showed previously <cit.>, at large separations the spiral cores can be considered essentially non-interacting, while at smaller separations the equilibrium distance d becomes quantized, with the smallest stable separation <cit.> equal to d_0=2ζ_0≈ 40 (10.4 mm in dimensional units) for the values of the parameters considered in this study. For separations below some critical distance d_c<d_0, the cores attract each other, eventually colliding and destroying both spiral waves. 
As the cores approach each other, the wavefront confined between them collapses, so we will refer to this process as “wavelet collapse” or “wave collapse”. The details of wave collapse depend on the relation between the initial phase of the spiral waves and the separation between their cores. A very typical example of wave collapse is shown in Fig. <ref>. In this particular example we find that the curvature of the wavefront becomes quite large before collapse. The curvature at which this happens can be related to the mechanism of conduction block discussed in Sect. <ref>. Since the cores are moving relatively slowly prior to wave collapse (cf. Fig. <ref>(a)), as the wavefront propagates its curvature gradually increases (cf. Fig. <ref>(b)). The largest value of the curvature is related to the distance between the cores, κ^-1≈ d/2. Once the curvature becomes comparable to the inverse of the critical radius r_c≈ 6, the wave stops propagating, the cores slide towards each other along the wavefront and annihilate (cf. Fig. <ref>(c)), the wavebacks merge, and the wave starts to retract (cf. Fig. <ref>(d)). This picture predicts that the minimal distance at which spiral cores with opposite chirality can persist without annihilating each other is given by d_c=2r_c=12 (3.1 mm in dimensional units). This value is in good agreement with the critical isthmus width (2.5 mm) found for conduction block in isolated sheets of ventricular epicardial muscle with an expanding geometry <cit.>. Our numerical simulations show that the minimal distance is d_c=16 (4.2 mm), also close to the predicted value. The core trajectories and the distance between them in the example from Fig. <ref> are shown in Fig. <ref>. The initial distance in this case was d=18 (4.7 mm), illustrating that, under appropriate conditions, wave collapse can also occur for core separations somewhat larger than d_c (but still less than d_0). The transition between the configurations shown in Figs.
<ref>(g) and <ref>(e) corresponds to merger between two wavefronts that were originally separated by a waveback. Hence, we shall refer to this topological transition as a “wave merger” event. Wave mergers, however, are extremely rare, so a reduction in the total number of cores and wavelets is due almost entirely to wave collapse events. This is similar to the dynamical asymmetry between the wave breakup and wave coalescence events. Therefore, dynamical equilibrium in sustained spiral wave chaos can be understood, at least in the Karma model, as a balance between wave coalescence and wave collapse.§ DYNAMICAL EQUILIBRIUM Although the topological description itself is not quantitative, it helps identify the key dynamical mechanisms, such as wave coalescence and wave collapse, responsible for maintaining AF. This should, in turn, enable a quantitative description of the dynamics in general and dynamical equilibrium in particular and give the answers to open questions that have been debated for a long time. For instance, it is presently not well understood either what the minimal size of tissue is that can sustain AF or what the minimal number of wavelets is in sustained AF.The leading-circle concept <cit.> suggests that the number of wavelets that the atria can accommodate should be related to the wavelength. Moe's computer model <cit.> predicted that between 23 and 40 wavelets are necessary for the maintenance of AF, while Allessie <cit.> places the minimal number of wavelets between four and six.These hypotheses can be easily tested in the context of the Karma model. Let us start by determining whether the wavelength (λ=78 for the values of parameters considered here) is a relevant length scale. 
The size (diameter) of a reentry circle with perimeter equal to the wavelength is d=λ/π≈25, which is larger than the minimal separation d_c between persistent spiral wave cores, but considerably smaller than the minimal stable separation d_0 between the cores. To show that d_0 is the relevant length scale, we computed the probability density function P(d) for core-core separation on a square domain of side L=192 (this is the smallest domain with no-flux boundary conditions that supports sustained spiral wave chaos). For each time t and each core j we computed the distance d_j to the nearest core (cf. Sect. <ref>), then averaged over j and t. The resulting distribution, for both no-flux and periodic boundary conditions, is shown in Fig. <ref>. In both cases we find that the distribution P(d) is rather narrow, with the maximum achieved at d=d_0. The effect of the boundary conditions on the shape of the distribution is somewhat subtle: on a bi-periodic domain, the probability of large core separations (d=O(L)) is decreased compared with a same-size domain with no-flux boundary conditions. Effectively, since there must always be a chirally matched pair on the periodic domain, the furthest these cores may be is d_max = L/√(2), as opposed to an isolated spiral matched with its mirror image across the no-flux boundary, which corresponds to a maximal distance d_max = √(2)L. Thus, on a periodic domain, the maximal accessible distance is precisely 1/2 the maximal distance available on a no-flux domain of the same size. The upper bound for the number of spiral cores can be estimated as the ratio of the total area of the domain (i.e., L^2) to the area of the smallest tiles <cit.> supporting one persistent spiral wave (i.e., d_0^2), that is n̅_c<L^2/d_0^2=23 (in fact, we should have n̅_c≤ 22 since the net topological charge is zero).
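The arithmetic behind these estimates is easy to verify; a small sketch in Python, with all numbers taken directly from the text:

```python
import math

L = 192    # side of the smallest domain sustaining spiral wave chaos
d0 = 40    # smallest stable core separation
lam = 78   # wavelength for the parameter values considered

# Diameter of a reentry circle whose perimeter equals one wavelength
d_circle = lam / math.pi
print(round(d_circle))        # about 25: between d_c and d_0

# Upper bound on the number of cores from tiling the domain with
# d0 x d0 tiles, each supporting one persistent spiral wave
print(L**2 / d0**2)           # 23.04 -> at most 23 cores

# Maximal nearest-core distance: periodic vs. no-flux boundaries
ratio = (L / math.sqrt(2)) / (math.sqrt(2) * L)
print(ratio)                  # exactly 1/2, as stated in the text
```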
In reality, the tiles tend not to be squarish and have a larger area on average, giving a lower average number of spiral cores, n̅_c=10, as the probability distribution function P(n_c) illustrates (cf. Fig. <ref>). The number n_w of separate wavelets is exactly half the number n_c of cores (on a domain with periodic boundary conditions), so on average n̅_w=n̅_c/2≈ 5, in perfect agreement with the results of Allessie <cit.>. The number of cores exhibits considerable fluctuation (between 4 and 16); correspondingly, the number of wavelets varies between 2 and 8. The likelihood of these extreme values is, however, rather small (an order of magnitude smaller than that corresponding to the average value). The observation that the minimal number of wavelets is just two is a sign that the dynamics are on the border of spontaneous collapse of spiral wave chaos (recall that our domain is just large enough to sustain this regime). We should have P(0)=0, because once all the spiral cores disappear, so does the mechanism of reentry (at least in our homogeneous model), resulting in a transition to the rest state or, in the presence of pacing, normal rhythm. The smallest number of spiral cores required for reentry (in a domain with periodic boundary conditions) is two, so one could, in principle, expect P(2) to be nonzero. However, as our results show, the mechanism that sustains spiral wave chaos is wave coalescence, which requires at least two wavelets, and therefore at least four spiral cores, to be present.§ DISCUSSION This paper presents a general topological approach for studying spiral wave chaos in two-dimensional excitable media. It is illustrated using the Karma model which, in a certain parameter regime, produces dynamics that are remarkably similar to those observed during atrial fibrillation. Therefore, our results could shed new light on this important and complicated phenomenon.
The confusing and often contradictory results regarding the dynamical origins of AF reported in experimental and numerical studies are to some extent due to the complexity of the patterns of excitation.The descriptive language and intuition developed primarily in the context of simple structures – plane or spiral waves – often fail us when applied to states that are topologically complicated and nonstationary.To give a few examples, the mental picture of a spatially localized excitation wave, or wavelet, that is bounded by a wavefront and a waveback falls apart when applied to complex multi-spiral patterns since the boundary of one excited region is often composed of multiple wavefronts and wavebacks, as Figs. <ref> and <ref> illustrate.As a result, the number of excited regions almost never corresponds to the number of wavelets.Neither is the notion of a spiral wave immediately useful for describing such complicated patterns, which only resemble spiral waves in small neighborhoods of spiral cores.Similarly, a reduction of complicated field configurations to the number and positions of phase singularities is also problematic, both because they appear, move, and disappear for sustained spiral chaos and because identifying them using existing approaches, such as phase mapping <cit.>, is notoriously unreliable when the data is noisy.This paper aims to rectify some of these difficulties by introducing a topological description that can rigorously and easily identify the dynamically important elements of the excitation patterns – wavefronts, wavebacks, phase singularities, etc. 
– without modeling assumptions and in a manner that can be implemented in both simulations and experiments.By defining the phase singularities as intersections of level sets of an appropriately defined phase field, this topological description directly connects the dynamics of excitation waves and phase singularities; it can be used not only to quantify and classify the excitation patterns, but also to identify the dynamical mechanisms that lead to qualitative changes in the pattern.In particular, we show that the qualitative changes can be conveniently described and classified based on the dynamics of spiral cores which are created or destroyed in pairs, leading to an increase or decrease in the number of wavelets, with a one-to-one correspondence between the number of cores and wavelets. The topological description also allowed us to identify the dominant dynamical mechanisms responsible for maintaining AF in a model of atrial tissue. In particular, it allowed us to make a major discovery with implications that, in all likelihood, go far beyond the simple model considered here. We found that wave breakup due to conduction block that is widely believed to be the key mechanism responsible for maintaining AF plays no role whatsoever in sustaining this regime. While wave breakup does play a key role in the transition to AF, it is a dynamically and topologically distinct event – wave coalescence – that is responsible for maintaining AF.Wave coalescence which leads to the increase in the number of spiral cores and wavelets is balanced by wave collapse which decreases the number of spiral cores and wavelets. It is this delicate balance that is responsible for maintaining the complexity of the pattern and of the dynamics and it is this balance that controls whether AF persists or terminates. Past studies of the dynamical origins and control of AF tended to focus solely on the mechanism(s) that lead to an increase in the complexity of the pattern. 
Indeed, suppressing the processes that generate new spiral cores and new wavelets is one way to terminate or prevent AF. However, enhancing the processes that destroy the spiral cores and wavelets could be just as effective. Therefore, both wave coalescence and wave collapse are attractive targets for electrical, surgical, and pharmacological approaches to the treatment of AF. While this study has not focused on the interaction of excitation waves with no-flux boundaries, the methods and approaches presented here are applicable to this situation as well. Hence, topological analysis could be quite helpful for improving the treatment of chronic AF using surgical procedures such as ablation that effectively introduce additional boundaries. In conclusion, we should point out that our results raise new questions regarding the role of conduction block in maintaining AF. While conduction block undoubtedly plays a crucial role in wave collapse, it is not at all clear that it is relevant in wave coalescence. Therefore, quite paradoxically, we find that conduction block plays a more important role in decreasing the complexity of the excitation pattern than in increasing its complexity. Further studies are needed in order to fully understand the dynamical mechanisms behind wave coalescence, wave collapse, and possibly other topologically allowed events important in maintaining AF using more detailed and physiologically accurate models of atrial tissue. This material is based upon work supported by the National Science Foundation under Grant No. CMMI-1028133. The Tesla K20 GPUs used for this research were donated by the “NVIDIA Corporation” through the academic hardware donation program. CDM gratefully acknowledges the financial support of the EPSRC via grant EP/N014391/1 (UK).§ APPLICATION TO MORE REALISTIC ACTION POTENTIALS The topological approach introduced here is sufficiently general to be extended to much more complicated electrophysiological models.
For example, the definition of the leading and trailing edges of the refractory region (<ref>) relies solely on the voltage variable, while the definition (<ref>) of the wavefronts and wavebacks can be trivially generalized to an m-variable reaction-diffusion model by choosing the “weighting” vector a⃗ = [a_1,…,a_m] to properly represent the physiological role of different variables in triggering the depolarization front. The phase singularities can then be defined again as the intersection of the boundaries ∂ E and ∂ R, although this does not guarantee that (<ref>) will be satisfied for m>2. For illustration, we used such a generalization to identify the wavefronts, wavebacks, and phase singularities in the four-variable minimal model of Bueno-Orovio et al. <cit.>, considering only the contribution from a single slow variable, a⃗ = [0,0,1,0], as a simple approximation. A virtual pair event shown in Fig. <ref> provides an illustration that these definitions are equally successful in describing topological changes in a substantially more complex and detailed model, which is capable of producing a quantitatively accurate description of excitation patterns in various types of cardiac tissue, given appropriate choices of parameters (in the present example we used the epicardial parameter set). The only complication arises when the diffusion coefficients for the slow variable(s) vanish identically, since this can lead to subtle artifacts when discontinuities of the kinetics, e.g., in the switching between on and off states, combine with the high spatial gradients near the wavefront. However, this issue is not a fault of the method, but rather a consequence of the unphysical nature of the simplified ionic kinetics, and can be easily rectified by using an appropriately smoothed version of the model kinetics (as we did in the Karma model).
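A minimal sketch of this kind of boundary classification in one spatial dimension, using only the voltage variable (weighting a⃗ = [1]). The travelling-pulse profile, the threshold V*, and all parameter values are illustrative assumptions of this sketch, not the paper's implementation:

```python
import numpy as np

# Classify the boundary of the excited region E = {V > V_star} into
# wavefront and waveback using the sign of g(u) = a . du/dt, here with
# a single voltage variable (a = [1]) and a synthetic travelling pulse.
x = np.linspace(0.0, 100.0, 1001)
c, width, V_star = 1.0, 10.0, 0.5

def V(x, t):
    """Synthetic rightward-travelling pulse of the given width."""
    s = x - c * t
    return 1.0 / (1.0 + np.exp(-(s + width))) - 1.0 / (1.0 + np.exp(-s))

t, dt = 40.0, 1e-3
v_now = V(x, t)
dv_dt = (V(x, t + dt) - v_now) / dt            # finite-difference d/dt

excited = v_now > V_star                        # indicator of E
edges = np.flatnonzero(np.diff(excited.astype(int)))  # boundary of E

for i in edges:
    kind = "wavefront" if dv_dt[i] > 0 else "waveback"
    print(f"x ~ {x[i]:.1f}: {kind}")
```

For a rightward-moving pulse, the leading edge (larger x) is classified as the wavefront (voltage rising there) and the trailing edge as the waveback, matching the sign convention in the text.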
Furthermore, even without smoothing, one can simply define the wavefront and waveback as the boundary of the excited region E defined using the indicator function g(u⃗) = a⃗·∂_t u⃗.§ REFERENCES
http://arxiv.org/abs/1703.10680v2
Christopher D Marcotte and Roman O Grigoriev, “Dynamical mechanism of atrial fibrillation: a topological approach,” arXiv:1703.10680v2 [q-bio.TO] (submitted 24 March 2017).
In recent years there has been an increased interest in studying regularity properties of the derivatives of stochastic evolution equations (SEEs) with respect to their initial values. In particular, in the scientific literature it has been shown for every natural number n ∈ ℕ that if the nonlinear drift coefficient and the nonlinear diffusion coefficient of the considered SEE are n-times continuously Fréchet differentiable, then the solution of the considered SEE is also n-times continuously Fréchet differentiable with respect to its initial value and the corresponding derivative processes satisfy a suitable regularity property in the sense that the n-th derivative process can be extended continuously to n-linear operators on negative Sobolev-type spaces with regularity parameters δ_1,δ_2,…,δ_n∈[0,1/2) provided that the condition ∑^n_i=1δ_i < 1/2 is satisfied. The main contribution of this paper is to reveal that this condition can essentially not be relaxed.
<cit.> it has been shown that if the nonlinear drift coefficient and the nonlinear diffusion coefficient of an SEE are n-times continuously Fréchet differentiable,then the solution of the considered SEE is also n-times continuously Fréchet differentiable with respect to its initial value and the corresponding derivative processes satisfy a suitable regularity property (see item (iv) of Theorem 1.1 in Andersson et al. <cit.> and item (<ref>) of Corollary <ref> below, respectively). In this work we reveal that this regularity property can essentially not be improved. To illustrate our result in more detail we consider the following notation throughout the rest of this introductory section. For every measure space ( Ω , ℱ, μ ), every measurable space ( S , 𝒮 ), and every ℱ/𝒮-measurable function X Ω→ S we denote by [ X ]_μ, 𝒮 the set given by[ X ]_μ, 𝒮 = { YΩ→ S(Yis ℱ/𝒮-measurable)∧ ( ∃A ∈ℱμ(A) = 0 and {ω∈Ω X(ω) ≠ Y(ω) }⊆ A ) } .We first briefly review the above mentioned regularity result on derivative processes of SEEs from Andersson et al. <cit.>. More formally, Theorem 1.1 in Andersson et al. <cit.> includes the following result, Corollary <ref> below, as a special case. 
For every real number T∈(0,∞), all nontrivial separable -Hilbert spaces( H, ·_H, ⟨· , ·⟩_H ) and( U, ·_U, ⟨· , ·⟩_U ), every probability space (Ω,ℱ,),every normal filtration (ℱ_t)_t∈[0,T] on (Ω,ℱ,),every Id_U-cylindrical( Ω , ℱ, , ( ℱ_t )_ t ∈ [0,T])-Wiener process (W_t)_t∈[0,T],every generatorA D(A)⊆ H → Hof a strongly continuous analytic semigroup withspectrum(A) ⊆{z∈ℂRe(z)<0},and all infinitely often Fréchet differentiable functionsFH → HandBH → HS(U,H) with globally bounded derivatives it holds * that there exist up-to-modifications unique ( ℱ_t )_ t ∈ [0,T]/ℬ(H)-predictable stochastic processes X^x [0,T] ×Ω→ H, x ∈ H, which fulfill for all x ∈ H, p ∈ [2,∞), t ∈ [0,T] that ∫^t_0 e^(t-s)A F(X^x_s)_H + e^(t-s)A B(X^x_s)^2_HS(U,H)ds < ∞, sup_ s ∈ [0,T] [ X_s^x ^p_H ] < ∞, and [ X_t^x - e^tA x ]_, ℬ(H) = [ ∫_0^t e^ ( t - s ) A F(X_s^x) s ]_, ℬ(H) + ∫_0^t e^ ( t - s ) A B(X_s^x) W_s , * that it holds for all p ∈ [2,∞), t ∈ [0,T] that H ∋ x ↦ [X^x_t]_,ℬ(H)∈pH is infinitely often Fréchet differentiable, and*that it holds for allp ∈ [2,∞),n ∈ = {1,2,…},q∈[0,∞),δ_1, δ_2, …, δ_n ∈ [0,1/2),t ∈ (0,T] with∑^n_i=1δ_i < 1/2 that sup_ x ∈ H sup_ u_1,u_2,…,u_n ∈H[ ( [ (-A)^-q(d^n/dx^n[X^x_t]_,ℬ(H))(u_1,u_2,…,u_n) ^p_H] )^1/p/∏_ i = 1 ^n(-A)^-δ_i u_i _H ] < ∞ . Item (iv) of Theorem 1.1 in Andersson et al. <cit.> and item (<ref>) of Corollary <ref> in this paper, respectively, prove that the condition ∑^n_i=1δ_i < 1/2for the regularity parametersδ_1,δ_2,…,δ_n ∈ [0,1/2) of the considered negative Sobolev-type spaces is sufficient to ensure that the left-hand side of (<ref>) is finite. The main result of this work (see Corollary <ref> below and Theorem <ref> in Subsection <ref> below, respectively) reveals that this condition can essentially not be relaxed. More specifically, Theorem <ref>in Subsection <ref> below directly implies the following result. 
For every real number T∈(0,∞), every infinite dimensional separable -Hilbert space ( H, ·_H, ⟨· , ·⟩_H ), every nontrivial separable -Hilbert space ( U, ·_U, ⟨· , ·⟩_U ), every probability space (Ω,ℱ,), every normal filtration (ℱ_t)_t∈[0,T] on (Ω,ℱ,), and every Id_U-cylindrical ( Ω , ℱ, , ( ℱ_t )_ t ∈ [0,T])-Wiener process (W_t)_t∈[0,T] there exist a generator A D(A)⊆ H → H of a strongly continuous analytic semigroup with spectrum(A) ⊆{z∈ℂRe(z)<0} and infinitely often Fréchet differentiable functions FH → H and BH → HS(U,H) with globally bounded derivatives such * that there exist up-to-modifications unique ( ℱ_t )_ t ∈ [0,T]/ℬ(H)-predictable stochastic processes X^x [0,T] ×Ω→ H, x ∈ H, which fulfill for all x ∈ H, p ∈ [2,∞), t ∈ [0,T] that ∫^t_0 e^(t-s)A F(X^x_s)_H + e^(t-s)A B(X^x_s)^2_HS(U,H)ds < ∞, sup_ s ∈ [0,T] [ X_s^x ^p_H ] < ∞, and [ X_t^x - e^tA x ]_, ℬ(H) = [ ∫_0^t e^ ( t - s ) A F(X_s^x) s ]_, ℬ(H) + ∫_0^t e^ ( t - s ) A B(X_s^x) W_s , * that it holds for all p ∈ [2,∞), t ∈ [0,T] that H ∋ x ↦ [X^x_t]_,ℬ(H)∈pH is infinitely often Fréchet differentiable, * that it holds for all p ∈ [2,∞), n ∈, q∈[0,∞), δ_1, δ_2, …, δ_n ∈ [0,1/2), t ∈ (0,T] with ∑^n_i=1δ_i < 1/2 that sup_ x ∈ H sup_ u_1,u_2,…,u_n ∈H[ ( [ (-A)^-q (d^n/dx^n[X^x_t]_,ℬ(H))(u_1,u_2,…,u_n) ^p_H] )^1/p/∏_ i = 1 ^n (-A)^-δ_i u_i _H ] < ∞ , and *that it holds for allp ∈ [2,∞),n ∈,q∈[0,∞),δ_1, δ_2, …, δ_n ∈,t ∈ (0,T]with∑^n_i=1δ_i > 1/2 that sup_ x,u_1,u_2,…,u_n ∈(∩_r∈H_r)[ ( [ (-A)^-q (d^n/dx^n[X^x_t]_,ℬ(H))(u_1,u_2,…,u_n) ^p_H] )^1/p/∏_ i = 1 ^n(-A)^-δ_i u_i _H ] = ∞ .Regularity results for Kolmogorov equations associated to SEEs of the form (<ref>) and (<ref>), which are in some sense related to Corollaries <ref> and <ref>, can, e.g., be found in Debussche <cit.>, Wang & Gan <cit.>,Andersson & Larsson <cit.>, Bréhier <cit.>, Wang <cit.>, Andersson et al. <cit.>,and Brehier & Debussche <cit.>.The remainder of this article is organized as follows. 
In Section <ref> we state and prove the main result of this paper; see Theorem <ref> in Subsection <ref> below. In Subsection <ref> we present the drift and the diffusion coefficient functions that we use throughout Section <ref>. In Subsection <ref> we derive an explicit representation of the considered diffusion coefficient function (see Lemma <ref> in Subsection <ref>). In Subsection <ref> we present explicit formulas for the solution of the SEE associated with the drift and diffusion coefficient functions considered in Subsection <ref> and for its derivatives (see Lemma <ref> in Subsection <ref>). In Subsection <ref> we employ Lemma <ref> in Subsection <ref> and Lemma <ref> in Subsection <ref> to prove the main result of this paper, Theorem <ref> in Subsection <ref>. Corollary <ref> above is an immediate consequence of Theorem <ref> in Subsection <ref>.§ COUNTEREXAMPLES TO REGULARITIES FOR THE DERIVATIVE PROCESSES ASSOCIATED TO STOCHASTIC EVOLUTION EQUATIONS §.§ Setting Throughout this section we consider the following setting. For every set A let 𝒫(A) be the power set of A and let #_A ∈ℕ_0 ∪{∞} be the number of elements of A, let Π_k ∈𝒫(𝒫(𝒫(ℕ))), k ∈ℕ_0, be the sets which satisfy for all k ∈ℕ that Π_0=∅ and Π_k = { A ⊆𝒫(ℕ) [ ∅∉ A ] ∧ [ ∪_a ∈ A a = { 1, 2, …, k } ] ∧ [ ∀ a, b ∈ A ( a ≠ b ⇒ a ∩ b = ∅ ) ] } (see, e.g., (10) in Andersson et al.
<cit.>), let( H, ·_H, ⟨· , ·⟩_H ) be an -Hilbert space, let e = ( e_n )_ n ∈→ H be an orthonormal basis of H,let λ = ( λ_n )_ n ∈→,PH → H,and BH → Hbe functions which satisfy for allv ∈ H that sup_ n ∈λ_n < 0,P v = ∑_ n = 2 ^∞⟨ e_n, v ⟩_H e_n,andB( v ) = √( 1 +P v ^2_H )e_1,let T ∈ (0,∞),let( Ω, ℱ,) be a probability space with a normal filtration(ℱ_t)_t∈[0,T], let W[0,T] ×Ω→ be a standard( Ω , ℱ, , ( ℱ_t )_ t ∈ [0,T])-Brownian motion, let AD(A) ⊆ H → Hbe the linear operator which satisfies D(A) = {v ∈ H ∑^∞_n=1| λ_n⟨ e_n,v ⟩_H |^2 < ∞}and∀v ∈ D(A)Av = ∑^∞_n=1λ_n⟨ e_n,v⟩_H e_n,let ( H_r , ·_ H_r, ⟨· , ·⟩_ H_r), r ∈, be a family of interpolation spaces associated to - A (cf., e.g., <cit.>), andfor every ℱ/ℬ(H)-measurable function X Ω→ H letX be the set given by X = { Y Ω→ H( Yis ℱ/ℬ(H)- -measurable and (X=Y) = 1) }.§.§ An explicit representation for the diffusion coefficientAssume the setting in Section <ref>. Then*it holds thatBH → His infinitely often differentiable,*it holds for alln ∈,v_0, v_1, …, v_n ∈ Hthat B^(n)( v_0 )( v_1,v_2,…,v_n ) = ( ∑_ϖ∈Π_n [ ∏^#_ϖ - 1 _ i=0(1-2i) ] [ ∏_ I ∈ϖ⟨1_{1,2}( #_I ) Pv_max(I) _{2}(#_I), v_min(I) ⟩_H ] /[ 1 +Pv_0 ^2_H ]^ ( #_ϖ - 1/2 ) ) e_1 ,and *it holds for all n ∈ thatsup_v∈ HB^(n)(v)_L^(n)(H,H) < ∞.Throughout this proof letf ∈ C^∞( (0,∞),)andg ∈ C^∞( H, (0,∞) )be the functions which satisfy for allx ∈ (0,∞), v ∈ Hthat f(x) = √(x)and g(v) = 1+ Pv ^2_Hand let I^ϖ_i ∈ϖ,i ∈{1,2,…,#_ϖ},ϖ∈Π_n,n ∈,be the sets which satisfy for alln ∈,ϖ∈Π_n thatmin( I^ϖ_1 ) <min( I_2^ϖ ) < … <min(I_#_ϖ^ϖ) .Note that the fact that∀v ∈ HB(v) = (f∘ g)(v) e_1 = f(g(v))e_1proves item (<ref>).In the next step we prove (<ref>) by induction on n∈. 
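As an aside, the index sets Π_n appearing in the sums above are simply the set partitions of {1,…,n}, and for small n they can be enumerated directly. A minimal sketch in Python, representing a partition as a list of blocks; the counts are the Bell numbers 1, 2, 5, 15, 52, …:

```python
# Enumerate the set partitions Pi_n of {1,...,n} that index the sums
# in the derivative formula for B above.
def set_partitions(elements):
    """Yield all partitions of `elements` as lists of blocks."""
    if not elements:
        yield []
        return
    first, rest = elements[0], elements[1:]
    for partition in set_partitions(rest):
        # put `first` into each existing block ...
        for i in range(len(partition)):
            yield partition[:i] + [[first] + partition[i]] + partition[i+1:]
        # ... or into a new block of its own
        yield [[first]] + partition

for n in range(1, 6):
    count = sum(1 for _ in set_partitions(list(range(1, n + 1))))
    print(n, count)
```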
For the base case n=1 we note that for allv_0, v_1 ∈ Hit holds thatB'(v_0) v_1= [ (f ∘ g)'(v_0) v_1] e_1 = [ (f' ∘ g)(v_0) g'(v_0) v_1] e_1 = 1/ [1+P v_0^2_H]^1/2⟨ P v_0, P v_1 ⟩_H e_1 = ⟨ P v_0, v_1 ⟩_H / [1+P v_0^2_H]^1/2e_1 .This and the fact that Π_1 = {{{1}}} prove (<ref>) in the base case n=1.For the induction step ∋ n → n+1 ∈{2,3,…} assume that (<ref>) holds for some natural number n ∈. Observe that item (<ref>), the induction hypothesis, and the product rule of differentiation ensure that for allv_0, v_1, …, v_n+1∈ Hit holds that B^(n+1)( v_0 )( v_1,v_2,…,v_n+1 ) = ( dd v_0[ B^(n)( v_0 )( v_1,v_2,…,v_n )]) v_n+1= ( ∑_ϖ∈Π_n [ ^#_ϖ - 1 _ j=0(1-2j) ] ( d/d v_0[ ∏_ I ∈ϖ⟨1_{1,2}( #_I ) Pv_max(I) _{2}(#_I), v_min(I) ⟩_H / [ 1 +Pv_0 ^2_H ]^ ( #_ϖ - 1/2 ) ] ) v_n+1) e_1 = ( ∑_ϖ∈Π_n,∀I ∈ϖ#_I≤ 2[ ^#_ϖ - 1 _ j=0(1-2j) ] ( d/d v_0[ ∏^#_ϖ_i=1⟨ Pv_max(I^ϖ_i) _{2}(#_I^ϖ_i) , v_min(I^ϖ_i) ⟩_H / [ 1 +Pv_0 ^2_H ]^ ( #_ϖ - 1/2 ) ] ) v_n+1) e_1 = ( ∑_ϖ∈Π_n,∀I ∈ϖ#_I≤ 2{[ ∏^#_ϖ_ j=0 (1-2j) ] ⟨ P v_0, P v_n+1⟩_H [ ∏^#_ϖ_ i=1 ⟨ Pv_max(I^ϖ_i) _{2}(#_I^ϖ_i) , v_min(I^ϖ_i) ⟩_H ] / [ 1 +Pv_0 ^2_H ]^ ( #_ϖ + 1/2 ) + [∏^#_ϖ-1_j=0(1-2j)] / [1+P v_0^2_H]^(#_ϖ-1/2)( d/dv_0[ ∏^#_ϖ_i=1⟨ Pv_max(I^ϖ_i) _{2}(#_I^ϖ_i), v_min(I^ϖ_i) ⟩_H ] ) v_n+1}) e_1 .Hence, we obtain that for allv_0, v_1, …, v_n+1∈ H it holds that B^(n+1)( v_0 )( v_1,v_2,…,v_n+1 ) = ( ∑_ϖ∈Π_n,∀I ∈ϖ#_I≤ 2{[ ∏^#_ϖ_ j=0 (1-2j) ] ⟨ P v_0, P v_n+1⟩_H [ ∏_ I ∈ϖ⟨ Pv_max(I) _{2}(#_I) , v_min(I) ⟩_H ] / [ 1 +Pv_0 ^2_H ]^ ( #_ϖ + 1/2 ) + ∑_i∈{1,2,…,#_ϖ}[ ∏^#_ϖ - 1 _ j=0 (1-2j) ] / [ 1 +Pv_0 ^2_H ]^ ( #_ϖ - 1/2 )⟨_{2}(#_I^ϖ_i∪{n+1}) Pv_n+1, v_min(I^ϖ_i) ⟩_H ·∏_ j ∈{1,2,…,#_ϖ}∖{i}⟨ Pv_max(I^ϖ_j) _{2}(#_I^ϖ_j), v_min(I^ϖ_j) ⟩_H }) e_1 = ( ∑_ϖ∈Π_n{[ ∏^#_ϖ∪{{n+1}}-1 _ j=0 (1-2j) ] [ ∏_ I ∈ϖ∪{{n+1}}⟨_{1,2}(#_I) Pv_max(I) _{2}(#_I) , v_min(I) ⟩_H ] / [ 1 +Pv_0 ^2_H ]^ ( #_ϖ∪{{n+1}} - 1/2 ) + ∑_i∈{1,2,…,#_ϖ}[ ∏^#_ϖ - 1 _ j=0 (1-2j) ] / [ 1 +Pv_0 ^2_H ]^ ( #_ϖ - 1/2 )⟨_{1,2}(#_I^ϖ_i∪{n+1}) Pv_n+1, v_min(I^ϖ_i) ⟩_H ·∏_ I ∈ϖ∖{I^ϖ_i}⟨_{1,2}(#_I) 
Pv_max(I) _{2}(#_I), v_min(I) ⟩_H }) e_1 .This implies that for allv_0, v_1, …, v_n+1∈ Hit holds that B^(n+1)( v_0 )( v_1,v_2,…,v_n+1 ) = ( ∑_ϖ∈Π_n {∑_Ξ∈Π_n+1,Ξ = ϖ∪{{n+1}}[ [ ∏^#_Ξ - 1 _ i=0(1-2i) ] [ ∏_ I ∈Ξ⟨1_{1,2}( #_I ) Pv_max(I) _{2}(#_I), v_min(I) ⟩_H ] / [ 1 +Pv_0 ^2_H ]^ ( #_Ξ - 1/2 ) ] + ∑_Ξ∈Π_n+1, i ∈{1,2,…,#_ϖ},Ξ = (ϖ∖{I^ϖ_i})∪{I^ϖ_i∪{n+1}}[ [ ∏^#_Ξ - 1 _ i=0(1-2i) ] [ ∏_ I ∈Ξ⟨1_{1,2}( #_I ) Pv_max(I) _{2}(#_I), v_min(I) ⟩_H ] / [ 1 +Pv_0 ^2_H ]^ ( #_Ξ - 1/2 ) ] }) e_1.Combining this with the fact that Π_n+1 = {ϖ∪{{n+1}}ϖ∈Π_n}{{ I^ϖ_1, I^ϖ_2, …, I^ϖ_i-1,I^ϖ_i ∪{n+1}, I^ϖ_i+1, I^ϖ_i+2,…, I^ϖ_#_ϖ} i ∈{1,2,…,#_ϖ},ϖ∈Π_n }proves (<ref>) in the case n+1.Induction therefore establishes item (<ref>).It thus remains to prove item (<ref>). For this we note that for alln ∈,ϖ∈Π_nwith∀I ∈ϖ#_I ≤ 2it holds that #_{ I ∈ϖ#_I = 1 } = 2#_ϖ-n .Next observe that the Cauchy-Schwarz inequality and (<ref>) ensure that for alln ∈,v_0, v_1, …, v_n ∈ Hit holds that B^(n)( v_0 )( v_1,v_2,…,v_n ) _ H = ( ∑_ϖ∈Π_n [ ∏^#_ϖ - 1 _ i=0 (1-2i) ] [ ∏_ I ∈ϖ⟨1_{1,2}( #_I ) Pv_max(I) _{2}(#_I) , v_min(I) ⟩_H ] /[ 1 +Pv_0 ^2_H ]^ ( #_ϖ - 1/2 ) ) e_1 _H = | ∑_ϖ∈Π_n,∀I ∈ϖ#_I≤ 2 [ ∏^#_ϖ - 1 _ i=0(1-2i) ] [ ∏_ I ∈ϖ⟨ Pv_max(I) _{2}(#_I), Pv_min(I) ⟩_H ] /[ 1 +Pv_0 ^2_H ]^ ( #_ϖ - 1/2 ) | ≤∑_ϖ∈Π_n,∀I ∈ϖ#_I≤ 2 | ∏^#_ϖ - 1 _ i=0(1-2i) | ∏_ I ∈ϖ[Pv_max(I) _{2}(#_I)_H Pv_min(I) _H ] /[ 1 +Pv_0 ^2_H ]^ ( #_ϖ - 1/2 ).Moreover, the fact that∀n ∈,ϖ∈Π_n ∪_I∈ϖ I = {1,2,…,n} implies that for alln ∈,ϖ∈Π_n,v_0, v_1, …, v_n ∈ Hwith∀I ∈ϖ#_I≤ 2 it holds that ∏_ I ∈ϖ[Pv_max(I) _{2}(#_I)_H Pv_min(I) _H ] = ( ∏_ I ∈ϖ, #_I = 1 [Pv_0_H Pv_min(I) _H ] )( ∏_ I ∈ϖ, #_I = 2 [Pv_max(I)_H Pv_min(I) _H ] ) = ( ∏_ I ∈ϖ, #_I = 1Pv_0_H ){( ∏_ I ∈ϖ, #_I = 1Pv_min(I)_H )( ∏_ I ∈ϖ, #_I = 2 [Pv_max(I)_H Pv_min(I) _H ] ) }= Pv_0^#_{I∈ϖ#_I=1}_H∏^ n _ i=1Pv_i _H .This, (<ref>), and (<ref>) show that for alln ∈,v_0, v_1, …, v_n ∈ Hit holds that B^(n)( v_0 )( v_1,v_2,…,v_n ) _ H ≤∑_ϖ∈Π_n,∀I ∈ϖ#_I≤ 2 | ∏^#_ϖ - 1 _ i=0(1-2i) | 
Pv_0^#_{I∈ϖ#_I=1}_H / [ 1 +Pv_0 ^2_H ]^ ( #_ϖ - 1/2 ) [ ∏^ n _ i=1Pv_i _H ] = ∑_ϖ∈Π_n,∀I ∈ϖ#_I≤ 2 | ∏^#_ϖ - 1 _ i=0(1-2i) | Pv_0^ (2#_ϖ - n) _H / [ 1 +Pv_0 ^2_H ]^ ( #_ϖ - 1/2 ) [ ∏^ n _ i=1Pv_i _H ] .The fact that∀v ∈ HPv_H ≤v_H therefore implies that for alln ∈ it holds that sup_ v ∈ HB^(n)( v ) _ L^(n)( H, H ) ≤∑_ϖ∈Π_n,∀I ∈ϖ#_I≤ 2 | ∏^#_ϖ - 1 _ i=0(1-2i) | sup_ v ∈ H [ Pv^ (2#_ϖ - n) _H / [ 1 +Pv ^2_H ]^ ( #_ϖ - 1/2 ) ] ≤∑_ϖ∈Π_n,∀I ∈ϖ#_I≤ 2 | ∏^#_ϖ - 1 _ i=0(1-2i) | sup_ v ∈ H [ [1+Pv^2_H]^ (#_ϖ - n/2) / [ 1 +Pv ^2_H ]^ ( #_ϖ - 1/2 ) ] = ∑_ϖ∈Π_n,∀I ∈ϖ#_I≤ 2 | ∏^#_ϖ - 1 _ i=0(1-2i) | sup_ v ∈ H [1 / [ 1 +Pv ^2_H ]^ ( n-1 )/2 ] ≤∑_ϖ∈Π_n,∀I ∈ϖ#_I≤ 2 ∏^#_ϖ - 1 _ i=0|2i-1| ≤∑_ϖ∈Π_n [2#_ϖ]^#_ϖ .This and the fact that∀n ∈,ϖ∈Π_n #_Π_n+#_ϖ < ∞ establish item (<ref>). The proof of Lemma <ref> is thus completed.§.§ Explicit representations for the derivative processesAssume the setting in Section <ref>.Then*there exist up-to-modifications unique ( ℱ_t )_ t ∈ [0,T]/ℬ(H)-predictable stochastic processes X^0,x [0,T] ×Ω→ H,x ∈ H,which fulfill for all p ∈ [2,∞),x ∈ H,t ∈ [0,T] that sup_ s ∈ [0,T] [X_s^0,x^p_H ] < ∞ andX_t^0,x - e^tA x = ∫_0^t e^ ( t - s ) AB(X_s^0,x) W_s , *it holds for allp ∈ [2,∞),t ∈ [0,T]thatH ∋ x ↦X^0,x_t∈pHis infinitely often Fréchet differentiable,*there exist up-to-modifications unique ( ℱ_t )_ t ∈ [0,T]/ℬ(H)-predictable stochastic processes X^n,𝐮 [0,T] ×Ω→ H,𝐮∈ H^n+1,n ∈, which fulfill for allp ∈ [2,∞),n ∈,𝐮∈ H^n,x ∈ H,t ∈ [0,T]that ( d^ndx^nX^0,x_t) 𝐮 = ( H ∋ y ↦X^0,y_t∈pH)^(n)(x)𝐮 = X^n,(x,𝐮)_t , *it holds for allp ∈ [2,∞),n ∈,δ_1,δ_2,…,δ_n ∈ [0,∞),t ∈ (0,T]with∑^n_i=1δ_i < 1/2 thatsup_𝐮 = (u_0,u_1,…,u_n) ∈ H × (H)^n ([ X^ n,𝐮_t ^p_H ])^1/p/∏^ n _ i=1 u_i_H_-δ_i < ∞,and*it holds for alln ∈_0,𝐮 = ( u_0, u_1, …, u_n ) ∈ H^ n+1,t ∈ [0,T]that X_t^n,𝐮 - 1_{ 0, 1 }(n) e^tA u_n = ∫_0^t e^ ( t - s ) AB^ ( n ) ( e^sA u_0 ) ( e^sA u_1 , e^sA u_2 , … , e^sA u_n ) W_s . 
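To make item (v) concrete in the simplest case n = 0: the representation gives X_t^{0,x} = e^{tA}x + ∫_0^t e^{(t-s)A} B(e^{sA}x) dW_s, and since B takes values in span{e_1}, the e_1-component of X_t^{0,x} is a real Gaussian random variable with variance ∫_0^t e^{2λ_1(t-s)} (1 + ‖P e^{sA}x‖²_H) ds. A numerical sketch in Python for the illustrative choices λ_1 = -c, λ_2 = -4c (the spectrum λ_n = -cn² used in the next subsection), x = e_2, c = 1, and t = 1/2 — all assumptions of this sketch — comparing midpoint quadrature with the closed form:

```python
import math

# With x = e_2 and lambda_n = -c n^2, |P e^{sA} x|^2 = e^{-8cs}, so the
# variance of the e_1-component of X_t^{0,x} is
#   Var = int_0^t e^{-2c(t-s)} (1 + e^{-8cs}) ds.
c, t = 1.0, 0.5

def integrand(s):
    return math.exp(-2*c*(t - s)) * (1.0 + math.exp(-8*c*s))

n = 100_000
h = t / n
quad = h * sum(integrand((k + 0.5) * h) for k in range(n))  # midpoint rule

# Closed form of the same integral
closed = ((1 - math.exp(-2*c*t)) / (2*c)
          + math.exp(-2*c*t) * (1 - math.exp(-6*c*t)) / (6*c))
print(quad, closed)   # the two values agree closely
```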
Throughout this proof for everyn ∈,ϖ∈Π_nlet I^ϖ_i ∈ϖ,i ∈{1,2,…,#_ϖ},be the sets which satisfy thatmin( I^ϖ_1 ) <min( I_2^ϖ ) < … <min(I_#_ϖ^ϖ),let I_ i, j ^ϖ∈ I_i^ϖ⊆,j ∈{ 1,2,…,#_I^ϖ_i},i ∈{1,2,…,#_ϖ},be the natural numbers which satisfy for alli ∈{1,2,…,#_ϖ} thatI_ i, 1 ^ϖ < I_ i, 2 ^ϖ < … < I_ i, #_ I_i^ϖ^ϖ,and let[ · ]_i^ϖ H^ n + 1 →H^#_I_i^ϖ + 1,i ∈{ 1, 2, …, #_ϖ},be the mappings which satisfy for all i ∈{ 1, 2, …, #_ϖ},𝐮 = (u_0, u_1, …, u_n) ∈H^ n + 1 that[ 𝐮 ]_i^ϖ = ( u_0, u_ I_ i, 1 ^ϖ , u_ I_ i, 2 ^ϖ , … , u_ I_ i, #_I_i^ϖ^ϖ ). We note that items (i), (ii), (ix), and (x) of Theorem 2.1 in Andersson et al. <cit.>(withT=T,η=0,H=H,U=,W=W,A=A,F=0,B=(H∋ v ↦ (∋ u ↦ B(v)u ∈ H) ∈ HS(,H)),α=0,β=0,k = n,p = p,δ_1=δ_1, δ_2=δ_2, …, δ_n=δ_n for(δ_1,δ_2,…,δ_n) ∈{(κ_1,κ_2,…,κ_n)∈[0,1/2)^n ∑^n_i=1κ_i < 1/2},p ∈ [2,∞),n ∈ in the notation of Theorem 2.1 in <cit.>) ensure that * there exist up-to-modifications unique (ℱ_t)_t∈[0,T]/ℬ(H)-predictable stochastic processes X^ n,𝐮 [ 0 , T ] ×Ω→ H,𝐮∈ H^n+1,n ∈_0, which fulfill for all n ∈_0, p ∈ [2,∞), 𝐮 = (u_0,u_1,…,u_n) ∈ H^n+1,t ∈ [0,T] that sup_s∈[0,T][X^n,𝐮_s^p_H] < ∞and X_t^n,𝐮 - 1_{ 0, 1 }(n) e^tA u_n= ∫_0^t e^ ( t - s ) A [ 1_{ 0 }(n) B(X_s^0,u_0) + ∑_ϖ∈Π_nB^ ( #_ϖ ) ( X_s^ 0,u_0) ( X_s^#_I^ϖ_1, [ 𝐮 ]_1^ϖ , X_s^#_I^ϖ_2, [ 𝐮 ]_2^ϖ , … , X_s^#_I^ϖ_#_ϖ, [𝐮 ]_#_ϖ^ϖ) ] W_s , * it holds for allp ∈ [2,∞),t ∈ [0,T]thatH ∋ x ↦X^0,x_t∈pHis infinitely often Fréchet differentiable,* it holds for alln ∈,p ∈ [2,∞),𝐮∈ H^n,x ∈ H,t ∈ [0,T]that ( d^ndx^nX^0,x_t) 𝐮 = ( H ∋ y ↦X^0,y_t∈pH)^(n)(x)𝐮 = X^n,(x,𝐮)_t ,and* it holds for alln ∈,p ∈ [2,∞),δ_1,δ_2,…,δ_n ∈ [0,∞),t ∈ (0,T]with∑^n_i=1δ_i < 1/2 thatsup_𝐮 = (u_0,u_1,…,u_n) ∈ H × (H)^n ([ X^ n,𝐮_t ^p_H ])^1/p/∏^ n _ i=1 u_i_H_-δ_i < ∞. This and item (i) of Corollary 2.10 in Andersson et al. <cit.> establish items (<ref>)–(<ref>). It thus remains to prove (<ref>). 
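It is worth recording where the threshold ∑_i δ_i < 1/2 in item (iv) comes from: combining the smoothing estimate ‖(-A)^γ e^{sA}‖_{L(H)} ≤ C s^{-γ} with the Itô isometry leads to integrals of the type ∫_0^t s^{-2γ} ds with γ = ∑_i δ_i, which are finite precisely when 2γ < 1. A quick numerical probe of this scalar fact in Python (the exponents 0.4 and 0.6 are arbitrary illustrative choices on either side of the threshold):

```python
# Truncated integrals I_eps(g) = int_eps^1 s^{-2g} ds, in closed form.
# For 2g < 1 they converge as eps -> 0 (to 1/(1-2g)); for 2g > 1 they blow up.
def I(eps, g):
    return (1.0 - eps ** (1.0 - 2.0 * g)) / (1.0 - 2.0 * g)

for g in (0.4, 0.6):  # just below / just above the threshold g = 1/2
    vals = [I(10.0 ** (-k), g) for k in (2, 4, 6, 8)]
    print(f"g = {g}:", [round(v, 2) for v in vals])
```

For g = 0.4 the truncated values settle towards 1/(1-2g) = 5, while for g = 0.6 they grow without bound, mirroring the dichotomy between items (iv) and (v) of the theorem in Subsection 2.4.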
For this letX^ n,𝐮 [ 0 , T ] ×Ω→ H,𝐮∈ H^n+1,n ∈_0, be (ℱ_t)_t∈[0,T]/ℬ(H)-predictable stochastic processes which fulfill for all n ∈_0, p ∈ [2,∞), 𝐮 = (u_0,u_1,…,u_n) ∈ H^n+1,t ∈ [0,T] that sup_s∈[0,T][X^n,𝐮_s^p_H] < ∞and X_t^n,𝐮 - 1_{ 0, 1 }(n) e^tA u_n = ∫_0^t e^ ( t - s ) A [ 1_{ 0 }(n) B(X_s^0,u_0)+ ∑_ϖ∈Π_nB^ ( #_ϖ ) ( X_s^ 0,u_0) ( X_s^#_I^ϖ_1, [ 𝐮 ]_1^ϖ , X_s^#_I^ϖ_2, [ 𝐮 ]_2^ϖ , … , X_s^#_I^ϖ_#_ϖ, [𝐮 ]_#_ϖ^ϖ) ] W_s .Note that (<ref>) and the fact that∀v ∈ HP(B(v)) = 0imply thatfor allx ∈ H,t ∈ [0,T] it holds that P X^0,x_t - P e^tA x = P ∫^t_0 e^(t-s)A B( X^0,x_s ) dW_s = ∫^t_0 e^(t-s)A[ P ( B( X^0,x_s ) ) ] dW_s = 0 .This shows that for allx ∈ H,t ∈ [0,T] it holds that ( B( X^0,x_t ) = √( 1 +P X^0,x_t ^2_H )e_1 = √( 1 +P e^tA x ^2_H )e_1 = B( e^tA x ) ) =1 .This and (<ref>) yield thatfor allx ∈ H,t ∈ [0,T] it holds that X^0,x_t-e^tA x = ∫^t_0 e^ (t-s)AB( e^sA x ) W_s .Next note that item (<ref>) of Lemma <ref> ensures that for alln ∈,𝐯∈ H^n,x ∈ H it holds thatP (B^(n)(x)𝐯) = 0.This and (<ref>) imply that for alln ∈,𝐮 = (u_0,u_1,…,u_n) ∈ H^n+1,t ∈ [0,T]it holds that P X^n,𝐮_t - _{1}(n) P e^tA u_1= P ∫^t_0 e^(t-s)A∑_ϖ∈Π_nB^ ( #_ϖ ) ( X_s^ 0,u_0) ( X_s^#_I^ϖ_1, [ 𝐮 ]_1^ϖ , X_s^#_I^ϖ_2, [ 𝐮 ]_2^ϖ , … , X_s^#_I^ϖ_#_ϖ, [𝐮 ]_#_ϖ^ϖ) dW_s = ∫^t_0 e^(t-s)A∑_ϖ∈Π_n [ P( B^ ( #_ϖ ) ( X_s^ 0,u_0) ( X_s^#_I^ϖ_1, [ 𝐮 ]_1^ϖ , X_s^#_I^ϖ_2, [ 𝐮 ]_2^ϖ , … , X_s^#_I^ϖ_#_ϖ, [𝐮 ]_#_ϖ^ϖ) ) ] dW_s =0 .Hence, we obtain that for alln∈{2,3,…},𝐮∈ H^n+1,t∈[0,T] it holds that ( P(X^n,𝐮_t) = 0 ) = 1 .In addition, note that item (<ref>) of Lemma <ref> implies that for alln∈, v_0,v_1,…,v_n ∈ Hit holds that B^(n)(v_0)(v_1,v_2,…,v_n)= B^(n)(Pv_0)(Pv_1,Pv_2,…,Pv_n) .Combining this with (<ref>) ensures that for alln ∈,ϖ∈Π_n,𝐮 = (u_0,u_1,…,u_n) ∈ H^n+1,t ∈ [0,T] withϖ≠{{1}, {2}, …, {n}} it holds that ( B^ ( #_ϖ ) ( X_t^ 0, u_0) ( X_t^#_I^ϖ_1, [ 𝐮 ]_1^ϖ , X_t^#_I^ϖ_2, [ 𝐮 ]_2^ϖ , … , X_t^#_I^ϖ_#_ϖ, [𝐮 ]_#_ϖ^ϖ) = B^ ( #_ϖ ) ( P X_t^ 0, u_0) ( P X_t^#_I^ϖ_1, [ 𝐮 ]_1^ϖ , P X_t^#_I^ϖ_2, [ 𝐮 ]_2^ϖ , … 
, P X_t^#_I^ϖ_#_ϖ, [𝐮 ]_#_ϖ^ϖ) = 0 ) = 1.

Equation (<ref>) hence implies that for all n ∈ ℕ, 𝐮 = (u_0,u_1,…,u_n) ∈ H^n+1, t ∈ [0,T] it holds that ℙ( ∑_ϖ∈Π_n B^ ( #_ϖ ) ( X_t^ 0, u_0) ( X_t^#_I^ϖ_1, [ 𝐮 ]_1^ϖ , X_t^#_I^ϖ_2, [ 𝐮 ]_2^ϖ , … , X_t^#_I^ϖ_#_ϖ, [𝐮 ]_#_ϖ^ϖ) = B^ (n) ( X_t^ 0, u_0) ( X_t^ 1, (u_0,u_1), X_t^ 1, (u_0,u_2), … , X_t^ 1, (u_0,u_n) ) = B^ (n) ( P X_t^ 0, u_0) ( P X_t^ 1, (u_0,u_1), P X_t^ 1, (u_0,u_2), … , P X_t^ 1, (u_0,u_n) ) ) = 1.

Combining this with (<ref>), (<ref>), and (<ref>) shows that for all n ∈ ℕ, 𝐮 = (u_0,u_1,…,u_n) ∈ H^n+1, t ∈ [0,T] it holds that ℙ( ∑_ϖ∈Π_n B^ ( #_ϖ ) ( X_t^ 0, u_0) ( X_t^#_I^ϖ_1, [ 𝐮 ]_1^ϖ , X_t^#_I^ϖ_2, [ 𝐮 ]_2^ϖ , … , X_t^#_I^ϖ_#_ϖ, [𝐮 ]_#_ϖ^ϖ) = B^ ( n ) ( P e^tA u_0) ( P e^tA u_1 , P e^tA u_2 , …, P e^tA u_n ) = B^ ( n ) ( e^tA u_0) ( e^tA u_1 , e^tA u_2 , …, e^tA u_n ) ) = 1.

This and (<ref>) assure that for all n ∈ ℕ, 𝐮 = (u_0,u_1,…,u_n) ∈ H^n+1, t ∈ [0,T] it holds that X_t^n,𝐮 - 1_{ 1 }(n) e^tA u_n = ∫_0^t e^ ( t - s ) A B^ ( n ) ( e^sA u_0 ) ( e^sA u_1 , e^sA u_2 , … , e^sA u_n ) dW_s.

Combining this and (<ref>) establishes item (<ref>). The proof of Lemma <ref> is thus completed.

§.§ Disproof of regularities for the derivative processes

Assume the setting in Section <ref>, let c ∈ (0,∞), and assume for all n ∈ ℕ that λ_n = -c n^2.
Then

*there exist up-to-modifications unique (ℱ_t)_t∈[0,T]/ℬ(H)-predictable stochastic processes X^0,x: [0,T] × Ω → H, x ∈ H, which fulfill for all p ∈ [2,∞), x ∈ H, t ∈ [0,T] that sup_s∈[0,T] 𝔼[ ‖X_s^0,x‖^p_H ] < ∞ and X_t^0,x - e^tA x = ∫_0^t e^ ( t - s ) A B(X_s^0,x) dW_s,

*it holds for all p ∈ [2,∞), t ∈ [0,T] that H ∋ x ↦ X^0,x_t ∈ L^p(ℙ; H) is infinitely often Fréchet differentiable,

*there exist up-to-modifications unique (ℱ_t)_t∈[0,T]/ℬ(H)-predictable stochastic processes X^n,𝐮: [0,T] × Ω → H, 𝐮 ∈ H^n+1, n ∈ ℕ, which fulfill for all p ∈ [2,∞), n ∈ ℕ, 𝐮 ∈ H^n, x ∈ H, t ∈ [0,T] that ( d^n/dx^n X^0,x_t ) 𝐮 = ( H ∋ y ↦ X^0,y_t ∈ L^p(ℙ; H) )^(n)(x) 𝐮 = X^n,(x,𝐮)_t,

*it holds for all p ∈ [2,∞), n ∈ ℕ, q, δ_1, δ_2, …, δ_n ∈ [0,∞), t ∈ (0,T] with ∑^n_i=1 δ_i < 1/2 that sup_𝐮 = (u_0,u_1,…,u_n) ∈ H × (H∖{0})^n ( 𝔼[ ‖X^ n,𝐮_t‖^p_H_-q ] )^1/p / ∏^ n _ i=1 ‖u_i‖_H_-δ_i < ∞, and

*it holds for all p ∈ [2,∞), n ∈ ℕ, q ∈ [0,∞), δ_1, δ_2, …, δ_n ∈ ℝ, t ∈ (0,T] with ∑^n_i=1 δ_i > 1/2 that sup_𝐮=(u_0,u_1,…,u_n) ∈ ((∩_r∈ℝ H_r)∖{0})^n+1 ( 𝔼[ ‖X^ n,𝐮_t‖^p_H_-q ] )^1/p / ∏^ n _ i=1 ‖u_i‖_H_-δ_i = ∞.
which satisfy for all t ∈ that ⌊ t ⌋ = max( (-∞,t] ∩{ 0, 1 , - 1 , 2 , - 2 , …}) = max( (-∞,t] ∩ℤ )and⌈ t ⌉ = min( [ t, ∞ ) ∩{ 0, 1 , - 1 , 2 , - 2 , …}) = min( [t,∞) ∩ℤ ) .Note that for all N,n∈, m∈_0, ε,δ_1,δ_2,…,δ_2n-1∈ it holds that 𝐮^ε,m,(δ_1,δ_2,…,δ_2n-1)_2n-1,N = ( v^n,-ε_nN^m,N, v^1,δ_1-ε_nN^m,N, v^1,δ_2-ε_nN^m,N,v^2,δ_3-ε_nN^m,N,v^2,δ_4-ε_nN^m,N,…,v^n-1,δ_2n-3-ε_nN^m,N, v^n-1,δ_2n-2-ε_nN^m,N,N^m v^n,δ_2n-1-(1/2)-ε_nN^m,N ) .Moreover, observe that items (<ref>)–(<ref>) of Lemma <ref> establishitems (<ref>)–(<ref>).It thus remains to prove item (<ref>). For this let X^n,𝐮[0,T]×Ω→ H,𝐮∈ H^n+1,n∈_0,be (ℱ_t)_t∈[0,T]/ℬ(H)-predictable stochastic processes which fulfill that for allp∈[2,∞),n∈, 𝐮∈ H^n,x∈ H,t∈[0,T]it holds * that sup_s∈[0,T][X^0,x_s^p_H] < ∞, * that X_t^0,x - e^tA x = ∫_0^t e^ ( t - s ) AB(X_s^0,x) W_s ,and* that( d^ndx^nX^0,x_t) 𝐮 = ( H ∋ y ↦X^0,y_t∈pH)^(n)(x)𝐮 = X^n,(x,𝐮)_t . Next observe that the fact that∀n, j, k, l_1, l_2, …, l_n ∈, i ∈{1,2,…,n} l_i + jk ≥ 2 implies that for allN, n, k, l_1, l_2, …, l_n ∈,r_1, r_2, …, r_n ∈,t ∈ [0,T]it holds that ∏^n_i=1 P e^tA v^l_i,r_i_k,N^2_H = ∏^n_i=1 P e^tA(-A)^r_i^N_j=1 e_l_i+jk^2_H = ∏^n_i=1 e^tA(-A)^r_i^N_j=1 e_l_i+jk^2_H.This shows that for allN, n, k, l_1, l_2, …, l_n ∈,r_1, r_2, …, r_n ∈,t ∈ [0,T]it holds that ∏^n_i=1 P e^tA v^l_i,r_i_k,N^2_H = ∏^n_i=1^N_j=1(e^tA(-A)^r_i e_l_i+jk) ^2_H = ∏^n_i=1( ∑^N_j=1 e^tA(-A)^r_i e_l_i+jk^2_H ) = ∏^n_i=1( ∑^N_j=1[ e^-c(l_i+jk)^2 t (-A)^r_i e_l_i+jk_H ]^2 ) = ∑^N_j_1=1∑^N_j_2=1…∑^N_j_n=1( ∏^n_i=1[ e^-c(l_i+j_i k)^2 t (-A)^r_i e_l_i+j_i k_H ]^2 ) ≥∑^N_j_1=1∑^j_1_j_2=j_1∑^j_1_j_3=j_1…∑^j_1_j_n=j_1( ∏^n_i=1[ e^-c(l_i+j_i k)^2 t (-A)^r_i e_l_i+j_i k_H ]^2 ) = ∑^N_j=1( ∏^n_i=1[ e^-c(l_i+jk)^2 t (-A)^r_i e_l_i+jk_H ]^2 ) = ∑^N_j=1( ∏^n_i=1[ e^-c(l_i+jk)^2 t[c (l_i+jk)^2]^r_i]^2 ) .This and the fact that ∀x ∈ x=max{x,0}+min{x,0}=max{x,0}-max{-x,0}ensure that for allN, n, k, l_1, l_2, …, l_n ∈,r_1, r_2, …, r_n ∈,t ∈ [0,T]it holds that ∏^n_i=1 P e^tA 
v^l_i,r_i_k,N^2_H≥∑^N_j=1( ∏^n_i=1[ | e^-c(l_i+jk)^2 t[c (jk/n)^2]^r_i|^2 [ l_i+jk/(jk/n)]^4r_i] ) = ∑^N_j=1( ∏^n_i=1[ | e^-c(l_i+jk)^2 t[c (jk/n)^2]^r_i|^2 [ n+nl_i/jk]^4r_i] ) = ∑^N_j=1( ∏^n_i=1[ | e^-c(l_i+jk)^2 t[c (jk/n)^2]^r_i|^2( n+nl_i/(jk) )^4max{r_i,0}/ ( n+nl_i/(jk) )^4max{-r_i,0}] ) ≥∑^N_j=1( ∏^n_i=1[ | e^-c(l_i+jk)^2 t[c (jk/n)^2]^r_i|^2 / ( n+nl_i/(jk) )^4max{-r_i,0}] ) ≥∑^N_j=1( ∏^n_i=1[ | e^-c(l_i+jk)^2 t[c (jk/n)^2]^r_i|^2 / ( n+nmax_m∈{1,2,…,n} l_m )^4max{-r_i,0}] ) .This assures that for allN, n, l_1, l_2, …, l_n ∈,k ∈{n,2n,3n,…},r_1, r_2, …, r_n ∈,t ∈ [0,T]it holds that ∏^n_i=1 P e^tA v^l_i,r_i_k,N^2_H ≥∑^N_j=1( ∏^n_i=1[ | e^-c(l_i+jk)^2 t[c (jk/n)^2]^r_i|^2 / ( n+nmax_m∈{1,2,…,n} l_m )^4|r_i|] ) = 1/ ( n+nmax_i∈{1,2,…,n} l_i )^4∑^n_i=1|r_i|[ ∑^N_j=1( ∏^n_i=1| e^-c(l_i+jk)^2 t[c (jk/n)^2]^r_i|^2 ) ] ≥1/ ( n+nmax_i∈{1,2,…,n} l_i )^4∑^n_i=1|r_i|[ ∑^N_j=1( ∏^n_i=1| e^-2c(l_i)^2 te^-2c(jk)^2 t[c (jk/n)^2]^r_i|^2 ) ] =e^-4ct∑^n_i=1 |l_i|^2/ ( n+nmax_i∈{1,2,…,n} l_i )^4∑^n_i=1|r_i|[ ∑^N_j=1| e^-2c(jk)^2 n t[c (jk/n)^2]^∑^n_i=1 r_i|^2 ] .Therefore, we obtain that for allN, n, l_1, l_2, …, l_n ∈,k ∈{n,2n,3n,…},r_1, r_2, …, r_n ∈,t ∈ [0,T]it holds that ∏^n_i=1 P e^tA v^l_i,r_i_k,N^2_H≥ e^-4ctnmax_i∈{1,2,…,n} |l_i|^2/ ( n+nmax_i∈{1,2,…,n} l_i )^4∑^n_i=1|r_i|[ ∑^N_j=1| e^-2n^3tc(jk/n)^2[c (jk/n)^2]^∑^n_i=1 r_i|^2 ] = e^-4ctnmax_i∈{1,2,…,n} |l_i|^2/ ( n+nmax_i∈{1,2,…,n} l_i )^4∑^n_i=1|r_i|[ ∑^N_j=1 e^2n^3tA (-A)^(∑^n_i=1r_i) e_(jk/n)^2_H ] =e^-4ctnmax_i∈{1,2,…,n} |l_i|^2/ ( n+nmax_i∈{1,2,…,n} l_i )^4∑^n_i=1|r_i|e^2n^3tA (-A)^(∑^n_i=1r_i)^N_j=1 e_(jk/n)^2_H =e^-4ctnmax_i∈{1,2,…,n} |l_i|^2/ ( n+nmax_i∈{1,2,…,n} l_i )^4∑^n_i=1|r_i| e^2n^3tAv^0,∑^n_i=1r_i_k/n,N^2_H .Furthermore, note that for allN,n ∈,k_1, k_2 ∈{1,2,…,n},r_1,r_2 ∈,t ∈ [0,T]it holds that ⟨ P e^tA v^k_1,r_1_n,N, e^tA v^k_2,r_2_n,N⟩_H = ⟨ P e^tA (-A)^r_1 v^k_1,0_n,N, P e^tA (-A)^r_2 v^k_2,0_n,N⟩_H = _{k_1}(k_2) P e^tA (-A)^(r_1+r_2)/2 v^k_1,0_n,N^2_H = _{k_1}(k_2) P e^tA v^k_1,(r_1+r_2)/2_n,N^2_H 
.In particular, this implies that for allN,n ∈,k_1, k_2 ∈{1,2,…,n},r_1,r_2 ∈,t ∈ [0,T]with k_1≠ k_2 it holds that ⟨ P e^tA v^k_1,r_1_n,N, e^tA v^k_2,r_2_n,N⟩_H = 0 .Next observe that items (<ref>) and (<ref>) of Lemma <ref> ensure that for alln ∈,r∈[0,∞),u_0, u_1, …, u_n ∈ H,t∈[0,T]it holds that ∫^t_0e^(t-s)AB^(n)( e^sA u_0 )( e^sA u_1, e^sA u_2, …, e^sA u_n )^2_H_-rds < ∞ .Item (<ref>) of Lemma <ref> andItô's isometry hence show that for alln ∈,r∈[0,∞),𝐮= (u_0, u_1, …, u_n) ∈ H^n+1,t∈ [0,T]it holds that [X^ n,𝐮_t ^2_H_-r] = 1_{1}(n) (e^tA u_1 ^2_H_-r + 2[ < e^tA u_1 , ∫^t_0 e^(t-s)AB'( e^sA u_0 ) e^sA u_1 W_s>_H_-r] ) + [∫^t_0 (-A)^-r e^(t-s)AB^(n)( e^sA u_0 )( e^sA u_1, e^sA u_2, …, e^sA u_n ) dW_s ^2_H] = 1_{1}(n) (e^tA u_1 ^2_H_-r + 2< e^tA u_1 , [ ∫^t_0 e^(t-s)AB'( e^sA u_0 ) e^sA u_1 W_s] >_H_-r) + ∫^t_0(-A)^-r e^(t-s)AB^(n)( e^sA u_0 )( e^sA u_1, e^sA u_2, …, e^sA u_n )^2_Hs= 1_{1}(n) e^tA u_1 ^2_H_-r+ ∫^t_0(-A)^-r e^(t-s)AB^(n)( e^sAθ^n+1_1(𝐮) )( e^sAθ^n+1_2(𝐮), e^sAθ^n+1_3(𝐮), …, e^sAθ^n+1_n+1(𝐮) )^2_Hs≥∫^t_0e^(t-s)A B^(n)( e^sAθ^n+1_1(𝐮) )( e^sAθ^n+1_2(𝐮), e^sAθ^n+1_3(𝐮), …, e^sAθ^n+1_n+1(𝐮) )^2_H_-r s .In particular, this shows that for allN,n ∈, m ∈_0,r∈[0,∞),ε,δ_1,δ_2,…,δ_2n∈,t ∈ [0,T] it holds that [ X^ 2n-1,𝐮^ε,m,(δ_1,δ_2,…,δ_2n-1)_2n-1,N_t ^2_H_-r]≥∫^t_0e^(t-s)AB^(2n-1)( e^sAθ^2n_1(𝐮^ε,m,(δ_1,δ_2,…,δ_2n-1)_2n-1,N) ) ( e^sAθ^2n_2(𝐮^ε,m,(δ_1,δ_2,…,δ_2n-1)_2n-1,N), e^sAθ^2n_3(𝐮^ε,m,(δ_1,δ_2,…,δ_2n-1)_2n-1,N), …, e^sAθ^2n_2n(𝐮^ε,m,(δ_1,δ_2,…,δ_2n-1)_2n-1,N) ) ^2_H_-r sand[ X^ 2n,𝐮^ε,m,(δ_1,δ_2,…,δ_2n)_2n,N_t ^2_H_-r]≥∫^t_0e^(t-s)AB^(2n)( e^sAθ^2n+1_1(𝐮^ε,m,(δ_1,δ_2,…,δ_2n)_2n,N) ) ( e^sAθ^2n+1_2(𝐮^ε,m,(δ_1,δ_2,…,δ_2n)_2n,N), e^sAθ^2n+1_3(𝐮^ε,m,(δ_1,δ_2,…,δ_2n)_2n,N), …, e^sAθ^2n+1_2n+1(𝐮^ε,m,(δ_1,δ_2,…,δ_2n)_2n,N) ) ^2_H_-r s .In the next step we estimate the right hand sides of (<ref>) and (<ref>) from below to establish suitable lower bounds for the left hand sides of (<ref>) and (<ref>), respectively. 
We start with estimating the right hand side of (<ref>) from below. Observe that (<ref>) implies that for allN, n ∈, m ∈_0,ε, δ_1,δ_2, …,δ_2n-1∈,t ∈ [0,T] it holds that B^(2n-1)( e^tAθ^2n_1(𝐮^ε,m,(δ_1,δ_2,…,δ_2n-1)_2n-1,N) ) ( e^tAθ^2n_2(𝐮^ε,m,(δ_1,δ_2,…,δ_2n-1)_2n-1,N), e^tAθ^2n_3(𝐮^ε,m,(δ_1,δ_2,…,δ_2n-1)_2n-1,N), …,e^tAθ^2n_2n(𝐮^ε,m,(δ_1,δ_2,…,δ_2n-1)_2n-1,N) ) = B^(2n-1)( e^tA v^n,-ε_nN^m,N ) ( e^tA v^1,δ_1-ε_nN^m,N, e^tA v^1,δ_2-ε_nN^m,N, e^tA v^2,δ_3-ε_nN^m,N, e^tA v^2,δ_4-ε_nN^m,N, …, e^tA v^n-1,δ_2n-3-ε_nN^m,N, e^tA v^n-1,δ_2n-2-ε_nN^m,N, N^m e^tA v^n,δ_2n-1-(1/2)-ε_nN^m,N) .Item (<ref>) of Lemma <ref> therefore yields that for allN, n ∈, m ∈_0,ε, δ_1,δ_2, …,δ_2n-1∈,t ∈ [0,T] it holds that B^(2n-1)( e^tAθ^2n_1(𝐮^ε,m,(δ_1,δ_2,…,δ_2n-1)_2n-1,N) ) ( e^tAθ^2n_2(𝐮^ε,m,(δ_1,δ_2,…,δ_2n-1)_2n-1,N), e^tAθ^2n_3(𝐮^ε,m,(δ_1,δ_2,…,δ_2n-1)_2n-1,N), …,e^tAθ^2n_2n(𝐮^ε,m,(δ_1,δ_2,…,δ_2n-1)_2n-1,N) ) = ( ∑_ϖ∈Π_2n-1( ( ∏^#_ϖ - 1 _ i=0(1-2i) ) /[ 1 +P e^tA v^n,-ε_nN^m,N^2_H ]^ ( #_ϖ - 1/2 ) ·[ ∏_ I ∈ϖ( < 1_{1}( #_I ) P e^tA v^n,-ε_nN^m,N, N^m1_{2n-1}(min(I)) e^tA v^⌈min(I)/2⌉,δ_min(I)-(1/2)1_{2n-1}(min(I))-ε_nN^m,N>_H + < 1_{2}( #_I ) P ( N^m1_{2n-1}(max(I)) e^tA v^⌈max(I)/2⌉,δ_max(I)-(1/2)1_{2n-1}(max(I))-ε_nN^m,N) , e^tA v^⌈min(I)/2⌉,δ_min(I)-ε_nN^m,N>_H ) ] ) ) e_1 .This and (<ref>) imply that for allN, n ∈, m ∈_0,ε, δ_1,δ_2, …,δ_2n-1∈,t ∈ [0,T] it holds that B^(2n-1)( e^tAθ^2n_1(𝐮^ε,m,(δ_1,δ_2,…,δ_2n-1)_2n-1,N) ) ( e^tAθ^2n_2(𝐮^ε,m,(δ_1,δ_2,…,δ_2n-1)_2n-1,N), e^tAθ^2n_3(𝐮^ε,m,(δ_1,δ_2,…,δ_2n-1)_2n-1,N), …,e^tAθ^2n_2n(𝐮^ε,m,(δ_1,δ_2,…,δ_2n-1)_2n-1,N) ) = ( ∑_ϖ∈Π_2n-1,ϖ = {{1,2}, {3,4}, …, {2n-3,2n-2}, {2n-1}}( (∏^#_ϖ - 1 _ i=0(1-2i) ) / [ 1 +P e^tA v^n,-ε_nN^m,N^2_H ]^ ( #_ϖ - 1/2 ) ·< P e^tA v^n,-ε_nN^m,N , N^m e^tA v^n,δ_2n-1-(1/2)-ε_nN^m,N>_H [ ∏^n-1_ i=1 < P e^tA v^i,δ_2i-ε_nN^m,N, e^tA v^i,δ_2i-1-ε_nN^m,N>_H ] ) ) e_1 .Identity (<ref>) therefore shows that for allN, n ∈, m ∈_0,ε, δ_1,δ_2, …,δ_2n-1∈,t ∈ [0,T] it holds that B^(2n-1)( 
e^tAθ^2n_1(𝐮^ε,m,(δ_1,δ_2,…,δ_2n-1)_2n-1,N) ) ( e^tAθ^2n_2(𝐮^ε,m,(δ_1,δ_2,…,δ_2n-1)_2n-1,N), e^tAθ^2n_3(𝐮^ε,m,(δ_1,δ_2,…,δ_2n-1)_2n-1,N), …,e^tAθ^2n_2n(𝐮^ε,m,(δ_1,δ_2,…,δ_2n-1)_2n-1,N) ) = N^m( (∏^n-1_ i=0 (1-2i) ) / [ 1 +P e^tA v^n,-ε_nN^m,N^2_H ]^ ( n - 1/2 ) P e^tA v^n,(δ_(2n-1)/2)-(1/4)-ε_nN^m,N^2_H ·[ ∏^n-1_i=1 P e^tA v^i,((δ_2i-1+δ_2i)/2)-ε_nN^m,N^2_H ] ) e_1 .Hence, we obtain that for allN,n ∈, m ∈_0,r∈[0,∞),δ_1,δ_2,…,δ_2n-1∈,ε∈ (0,∞),t ∈ [0,T],s ∈ [0,t] it holds that e^(t-s)AB^(2n-1)( e^sAθ^2n_1(𝐮^ε,m,(δ_1,δ_2,…,δ_2n-1)_2n-1,N) ) ( e^sAθ^2n_2(𝐮^ε,m,(δ_1,δ_2,…,δ_2n-1)_2n-1,N),e^sAθ^2n_3(𝐮^ε,m,(δ_1,δ_2,…,δ_2n-1)_2n-1,N), …, e^sAθ^2n_2n(𝐮^ε,m,(δ_1,δ_2,…,δ_2n-1)_2n-1,N) ) ^2_H_-r= ( N^m/c^r)^2 e^ -2c(t-s) [ ( ∏^n-1_ i=0(1-2i) ) / [ 1 +P e^sA v^n,-ε_nN^m,N^2_H ]^ ( n - 1/2 ) P e^sA v^n,(δ_(2n-1)/2)-(1/4)-ε_nN^m,N^2_H ·( ∏^n-1_i=1 P e^sA v^i,((δ_2i-1+δ_2i)/2)-ε_nN^m,N^2_H ) ]^2 .Plugging this into the right hand side of (<ref>) yields that for allN,n ∈, m ∈_0,r∈[0,∞),δ_1,δ_2,…,δ_2n-1∈,ε∈ (0,∞),t ∈ (0,T] it holds that [ X^ 2n-1,𝐮^ε,m,(δ_1,δ_2,…,δ_2n-1)_2n-1,N_t ^2_H_-r] ≥( N^m/c^r)^2 ∫^t_0 e^ -2c(t-s) [ ( ∏^n-1_ i=0(1-2i) ) / [ 1 +P e^sA v^n,-ε_nN^m,N^2_H ]^ ( n - 1/2 ) P e^sA v^n,(δ_(2n-1)/2)-(1/4)-ε_nN^m,N^2_H ·( ∏^n-1_i=1 P e^sA v^i,((δ_2i-1+δ_2i)/2)-ε_nN^m,N^2_H ) ]^2s≥[N^m ( ∏^n-1_ i=0(1-2i) ) / c^r e^ct [ 1 + sup_s∈[0,T] P e^sA v^n,-ε_nN^m,N^2_H ]^ ( n - 1/2 ) ]^2 ·∫^t_0 [P e^sA v^n,(δ_(2n-1)/2)-(1/4)-ε_nN^m,N^2_H ( ∏^n-1_i=1 P e^sA v^i,((δ_2i-1+δ_2i)/2)-ε_nN^m,N^2_H ) ]^2s .Combining this with (<ref>) ensures that for allN,n ∈, m ∈_0,r∈[0,∞),δ_1,δ_2,…,δ_2n-1∈,ε∈ (0,∞),t ∈ (0,T] it holds that [ X^ 2n-1,𝐮^ε,m,(δ_1,δ_2,…,δ_2n-1)_2n-1,N_t ^2_H_-r] ≥[N^m ( ∏^n-1_ i=0(1-2i) ) / c^r e^ct [ 1 + sup_s∈[0,T] P e^sA v^n,-ε_nN^m,N^2_H ]^ ( n - 1/2 ) ]^2 ·∫^t_0e^ -8cn^3se^2n^3sA v^0,-(1/4)-nε+∑^2n-1_i=1(δ_i/2)_N^m,N^4_H / (n+n^2)^8(|(δ_(2n-1)/2)-(1/4)-ε|+∑^n-1_i=1|((δ_2i-1+δ_2i)/2)-ε|) s≥[N^m ( ∏^n-1_ i=0(1-2i) ) / c^r e^ c(1+4n^3) t (2n^2)^(1+4nε+2∑^2n-1_i=1|δ_i|)[ 1 + 
sup_s∈[0,T] P e^sA v^n,-ε_nN^m,N^2_H ]^ ( n - 1/2 ) ]^2 ·∫^t_0e^2n^3sA v^0,-(1/4)-nε+∑^2n-1_i=1(δ_i/2)_N^m,N^4_Hs .Next we estimate the right hand side of (<ref>) from below. Note that (<ref>)shows that for allN, n ∈, m ∈_0,ε, δ_1,δ_2, …,δ_2n∈,t ∈ [0,T] it holds that B^(2n)( e^tAθ^2n+1_1(𝐮^ε,m,(δ_1,δ_2,…,δ_2n)_2n,N) ) ( e^tAθ^2n+1_2(𝐮^ε,m,(δ_1,δ_2,…,δ_2n)_2n,N), e^tAθ^2n+1_3(𝐮^ε,m,(δ_1,δ_2,…,δ_2n)_2n,N), …,e^tAθ^2n+1_2n+1(𝐮^ε,m,(δ_1,δ_2,…,δ_2n)_2n,N) ) = B^(2n)( e^tA e_1 ) ( e^tA v^1,δ_1-ε_nN^m,N, e^tA v^1,δ_2-ε_nN^m,N, e^tA v^2,δ_3-ε_nN^m,N, e^tA v^2,δ_4-ε_nN^m,N, …, e^tA v^n-1,δ_2n-3-ε_nN^m,N, e^tA v^n-1,δ_2n-2-ε_nN^m,N, e^tA v^n,δ_2n-1-ε_nN^m,N, N^m e^tA v^n,δ_2n-(1/2)-ε_nN^m,N) .Item (<ref>) of Lemma <ref> therefore ensures that for allN, n ∈, m ∈_0,ε, δ_1,δ_2, …,δ_2n∈,t ∈ [0,T] it holds that B^(2n)( e^tAθ^2n+1_1(𝐮^ε,m,(δ_1,δ_2,…,δ_2n)_2n,N) ) ( e^tAθ^2n+1_2(𝐮^ε,m,(δ_1,δ_2,…,δ_2n)_2n,N), e^tAθ^2n+1_3(𝐮^ε,m,(δ_1,δ_2,…,δ_2n)_2n,N), …,e^tAθ^2n+1_2n+1(𝐮^ε,m,(δ_1,δ_2,…,δ_2n)_2n,N) ) = ( ∑_ϖ∈Π_2n( ( ∏^#_ϖ - 1 _ i=0(1-2i) ) /[ 1 +P e^tA e_1 ^2_H ]^ ( #_ϖ - 1/2 ) ·[ ∏_ I ∈ϖ( < 1_{1}( #_I ) P e^tA e_1, N^m1_{2n}(min(I)) e^tA v^⌈min(I)/2⌉,δ_min(I)-(1/2)1_{2n}(min(I))-ε_nN^m,N>_H + < 1_{2}( #_I ) P ( N^m1_{2n}(max(I)) e^tA v^⌈max(I)/2⌉,δ_max(I)-(1/2)1_{2n}(max(I))-ε_nN^m,N) , e^tA v^⌈min(I)/2⌉,δ_min(I)-ε_nN^m,N>_H ) ] ) ) e_1 = ( ∑_ϖ∈Π_2n,∀I ∈ϖ#_I = 2( [ ∏^#_ϖ - 1 _ i=0(1-2i) ] ·[ ∏_ I ∈ϖ< P ( N^m1_{2n}(max(I)) e^tA v^⌈max(I)/2⌉,δ_max(I)-(1/2)1_{2n}(max(I))-ε_nN^m,N) , e^tA v^⌈min(I)/2⌉,δ_min(I)-ε_nN^m,N>_H ] ) ) e_1 .This and (<ref>) assure that for allN, n ∈, m ∈_0,ε, δ_1,δ_2, …,δ_2n∈,t ∈ [0,T] it holds that B^(2n)( e^tAθ^2n+1_1(𝐮^ε,m,(δ_1,δ_2,…,δ_2n)_2n,N) ) ( e^tAθ^2n+1_2(𝐮^ε,m,(δ_1,δ_2,…,δ_2n)_2n,N), e^tAθ^2n+1_3(𝐮^ε,m,(δ_1,δ_2,…,δ_2n)_2n,N), …,e^tAθ^2n+1_2n+1(𝐮^ε,m,(δ_1,δ_2,…,δ_2n)_2n,N) ) = ( ∑_ϖ∈Π_2n,ϖ = {{1,2}, {3,4}, …, {2n-3,2n-2}, {2n-1,2n}}( [ ∏^#_ϖ - 1 _ i=0 (1-2i)] ·< P( N^m e^tA v^n,δ_2n-(1/2)-ε_nN^m,N) , e^tA v^n,δ_2n-1-ε_nN^m,N>_H [ ∏^n-1_ i=1 < P e^tA 
v^i,δ_2i-ε_nN^m,N , e^tA v^i,δ_2i-1-ε_nN^m,N>_H ] ) ) e_1 .Furthermore, identity (<ref>) implies that for allN, n ∈, m ∈_0,ε, δ_1,δ_2, …,δ_2n∈,t ∈ [0,T] it holds that B^(2n)( e^tAθ^2n+1_1(𝐮^ε,m,(δ_1,δ_2,…,δ_2n)_2n,N) ) ( e^tAθ^2n+1_2(𝐮^ε,m,(δ_1,δ_2,…,δ_2n)_2n,N), e^tAθ^2n+1_3(𝐮^ε,m,(δ_1,δ_2,…,δ_2n)_2n,N), …,e^tAθ^2n+1_2n+1(𝐮^ε,m,(δ_1,δ_2,…,δ_2n)_2n,N) ) = N^m( [ ∏^n-1_ i=0(1-2i) ]P e^tA v^n,((δ_2n-1+δ_2n)/2)-(1/4)-ε_nN^m,N^2_H [ ∏^n-1_i=1 P e^tA v^i,((δ_2i-1+δ_2i)/2)-ε_nN^m,N^2_H ] ) e_1.We therefore obtain that for all N,n ∈, m ∈_0,r∈[0,∞),δ_1,δ_2,…,δ_2n∈,ε∈ (0,∞),t ∈ [0,T],s ∈ [0,t] it holds that e^(t-s)AB^(2n)( e^sAθ^2n+1_1(𝐮^ε,m,(δ_1,δ_2,…,δ_2n)_2n,N) ) ( e^sAθ^2n+1_2(𝐮^ε,m,(δ_1,δ_2,…,δ_2n)_2n,N),e^sAθ^2n+1_3(𝐮^ε,m,(δ_1,δ_2,…,δ_2n)_2n,N), …, e^sAθ^2n+1_2n+1(𝐮^ε,m,(δ_1,δ_2,…,δ_2n)_2n,N) ) ^2_H_-r= [ N^m/c^r]^2 e^ -2c(t-s) [ ( ∏^n-1_ i=0(1-2i) )P e^sA v^n,((δ_2n-1+δ_2n)/2)-(1/4)-ε_nN^m,N^2_H ·( ∏^n-1_i=1 P e^sA v^i,((δ_2i-1+δ_2i)/2)-ε_nN^m,N^2_H )]^2 .Plugging this into the right hand side of (<ref>) yields that for allN,n ∈, m ∈_0,r∈[0,∞),δ_1,δ_2,…,δ_2n∈,ε∈ (0,∞),t ∈ [0,T] it holds that [ X^ 2n,𝐮^ε,m,(δ_1,δ_2,…,δ_2n)_2n,N_t ^2_H_-r] ≥[ N^m/c^r]^2 ∫^t_0 e^ -2c(t-s) [ ( ∏^n-1_ i=0(1-2i) )P e^sA v^n,((δ_2n-1+δ_2n)/2)-(1/4)-ε_nN^m,N^2_H ·( ∏^n-1_i=1 P e^sA v^i,((δ_2i-1+δ_2i)/2)-ε_nN^m,N^2_H )]^2s≥[N^m ( ∏^n-1_ i=0(1-2i) ) / c^r e^ ct ]^2 ·∫^t_0 [P e^sA v^n,((δ_2n-1+δ_2n)/2)-(1/4)-ε_nN^m,N^2_H ( ∏^n-1_i=1 P e^sA v^i,((δ_2i-1+δ_2i)/2)-ε_nN^m,N^2_H ) ]^2s .This and (<ref>) assure that for allN,n ∈, m ∈_0,r∈[0,∞),δ_1,δ_2,…,δ_2n∈,ε∈ (0,∞),t ∈ [0,T] it holds that [ X^ 2n,𝐮^ε,m,(δ_1,δ_2,…,δ_2n)_2n,N_t ^2_H_-r] ≥[N^m ( ∏^n-1_ i=0(1-2i) ) / c^r e^ ct ]^2 ∫^t_0e^-8cn^3s e^2n^3sA v^0,-(1/4)-nε+∑^2n_i=1(δ_i/2)_N^m,N^4_H / (n+n^2)^8(|((δ_2n-1+δ_2n)/2)-(1/4)-ε|+∑^n-1_i=1|((δ_2i-1+δ_2i)/2)-ε|) s≥[N^m ( ∏^n-1_ i=0(1-2i) ) / c^r e^ c(1+4n^3)t(2n^2)^(1+4nε+2∑^2n_i=1|δ_i|)]^2 ∫^t_0e^2n^3sA v^0,-(1/4)-nε+∑^2n_i=1(δ_i/2)_N^m,N^4_Hs .This and (<ref>) yield that for allN, n 
∈,r∈[0,∞),δ_1,δ_2,…,δ_n ∈,ε∈ (0,1/4),m∈_0∩[1/4ε-1,∞), t ∈ (0,T]it holds that [ X^ n,𝐮^ε,m,(δ_1,δ_2,…,δ_n)_n,N_t ^2_H_-r] ≥[N^m ( ∏^⌈n/2⌉ - 1 _ i=0(1-2i) ) / c^r e^ c(1+4|⌈ n/2 ⌉|^3) t (2|⌈ n/2 ⌉|^2)^(1+4⌈ n/2 ⌉ε+2∑^n_i=1|δ_i|)[ 1 + sup_s∈[0,T] P e^sA v^⌈ n/2 ⌉,-ε_⌈ n/2 ⌉ N^m,N^2_H ]^ ( ⌈n/2⌉ - 1/2 ) ]^2 ·∫^t_0e^2|⌈n/2⌉|^3sA v^0,-(1/4)-⌈n/2⌉ε+∑^n_i=1(δ_i/2)_N^m,N^4_Hs .Moreover, note that for allN,n∈,i∈{1,2,…,n}, ε∈(0,1/4), m∈_0∩[1/4ε-1,∞), t∈[0,T]it holds that P e^tA v^i,-ε_nN^m,N^2_H =P e^tA (-A)^-ε[ ^N_j=1e_i+jnN^m] ^2_H = ^N_j=1[ e^tA (-A)^-ε e_i+jnN^m] ^2_H = ∑^N_j=1 e^tA (-A)^-ε e_i+jnN^m^2_H = 1/c^2ε[ ∑^N_j=1e^-2tc(i+jnN^m)^2/ (i+jnN^m)^4ε] ≤1/c^2ε[ ∑^N_j=11/ (jN^m)^4ε] = 1/c^2ε N^4mε[ 1+ ∑^N_j=21/ j^4ε] = 1/c^2ε N^4mε( 1+ ∑^N_j=2∫^j_j-11/ j^4εdx ) ≤1/c^2ε N^4mε( 1+ ∑^N_j=2∫^j_j-11/ x^4εdx ) = 1/c^2ε N^4mε( 1+ ∫^N_1 1/ x^4εdx ) = 1/c^2ε N^4mε( 1+ 1/(1-4ε)( N^(1-4ε)-1 ) ) = 1/c^2ε( 1/N^4mε+ 1/(1-4ε)[ N^(1-4ε-4mε)-1/N^4mε] ) .This and the fact that ∀ ε∈(0,1/4), m∈_0∩[1/4ε-1,∞)1-4ε-4mε≤ 0 ensure that for allN,n∈,i∈{1,2,…,n}, ε∈(0,1/4), m∈_0∩[1/4ε-1,∞) it holds that sup_t∈[0,T]P e^tA v^i,-ε_nN^m,N^2_H ≤1/c^2ε( 1/N^4mε+ 1/(1-4ε)- 1/(1-4ε)N^4mε) ≤1/c^2ε(1-4ε) .Plugging (<ref>) into the right hand side of (<ref>) yields that for allN, n ∈,r∈[0,∞),δ_1,δ_2,…,δ_n ∈,ε∈ (0,1/4),m∈_0∩[1/4ε-1,∞), t ∈ (0,T]it holds that [ X^ n,𝐮^ε,m,(δ_1,δ_2,…,δ_n)_n,N_t ^2_H_-r] ≥[N^m ( ∏^⌈n/2⌉ - 1 _ i=0(1-2i) ) / c^r e^ c(1+4|⌈ n/2 ⌉|^3) t (2|⌈ n/2 ⌉|^2)^(1+4⌈ n/2 ⌉ε+2∑^n_i=1|δ_i|)[ 1 + 1/c^2ε(1-4ε) ]^ ( ⌈n/2⌉ - 1/2 ) ]^2 ·∫^t_0e^2|⌈n/2⌉|^3sA v^0,-(1/4)-⌈n/2⌉ε+∑^n_i=1(δ_i/2)_N^m,N^4_Hs .Next note that for allN,l ∈,m ∈_0,ε∈ (0,∞),δ∈ [1/2+2lε,∞),t ∈ (0,T] it holds that ∫^t_0e^2l^3sA v^0,-(1/4)-lε+(δ/2)_N^m,N^4_Hs = ∫_0^te^ 2l^3sA(-A)^-(1/4)-lε+(δ/2)( _ j = 1 ^N e_jN^m) ^4_H ds = ∫_0^t [ _ j = 1 ^N[ e^ 2l^3sA(-A)^-(1/4)-lε+(δ/2) e_jN^m] ^2_H ]^2 ds = ∫_0^t [ ∑_ j = 1 ^Ne^ 2l^3sA(-A)^-(1/4)-lε+(δ/2) e_jN^m^2_H ]^2 ds = ∫_0^t [ ∑_ j = 1 ^N( e^ -2l^3cj^2N^2m s(-A)^-(1/4)-lε+(δ/2) e_jN^m_H )^2 
]^2 ds= ∑_ j, k = 1 ^N(-A)^-(1/4)-lε+(δ/2) e_jN^m^2_H (-A)^-(1/4)-lε+(δ/2) e_kN^m^2_H [ ∫_0^t e^ -4l^3c ( j^2 + k^2 )N^2m s ds ]= ∑_ j, k = 1 ^ N ( [( 1 - e^ -4l^3c (j^2+k^2) N^2m t) / 4l^3c^(2+4lε-2δ) (j^2 + k^2) N^2m] (jN^m)^ ( 2δ -1 -4lε ) (kN^m)^ ( 2δ -1 -4lε ) ) .The fact that∀l ∈,ε∈ (0,∞),δ∈ [1/2+2lε,∞)2δ-1-4lε≥ 0 therefore assures that for allN,l ∈,m ∈_0,ε∈ (0,∞),δ∈ [1/2+2lε,∞),t ∈ (0,T] it holds that ∫^t_0e^2l^3sA v^0,-(1/4)-lε+(δ/2)_N^m,N^4_Hs ≥∑_ j, k = 1 ^ N( 1 - e^ -4l^3c (j^2+k^2) N^2m t) / 4l^3c^(2+4lε-2δ) (j^2 + k^2) N^2m≥ ( 1 - e^ -ct) / 4l^3c^(2+4lε-2δ) N^2m[ ∑_ j, k = 1 ^N 1/ (j^2+k^2) ] .This and (<ref>) imply that for alln ∈,r∈[0,∞),δ_1,δ_2,…,δ_n ∈,ε∈ (0,1/4),m∈_0∩[1/4ε-1,∞), t ∈ (0,T]with∑^n_i=1δ_i ≥1/2 + 2⌈n/2⌉ε it holds that sup_N∈([ X^ n,𝐮^ε,m,(δ_1,δ_2,…,δ_n)_n,N_t ^2_H_-r])^1/2≥sup_N∈(N^m∏^⌈n/2⌉ - 1 _ i=0|1-2i| / c^r e^ c(1+4|⌈ n/2 ⌉|^3) t (2|⌈ n/2 ⌉|^2)^(1+4⌈ n/2 ⌉ε+2∑^n_i=1|δ_i|) [ 1 + 1/c^2ε(1-4ε) ]^ ( ⌈n/2⌉ - 1/2 ) ·[( 1 - e^ -ct) / 4|⌈n/2⌉|^3c^(2+4⌈n/2⌉ε-2∑^n_i=1δ_i) N^2m∑_ j, k = 1 ^N 1/ (j^2+k^2) ]^1/2) = ∏^⌈n/2⌉ - 1 _ i=0|1-2i| / c^r e^ c(1+4|⌈ n/2 ⌉|^3) t (2|⌈ n/2 ⌉|^2)^(1+4⌈ n/2 ⌉ε+2∑^n_i=1|δ_i|) [ 1 + 1/c^2ε(1-4ε) ]^ ( ⌈n/2⌉ - 1/2 ) ·[( 1 - e^ -ct) / 4|⌈n/2⌉|^3c^(2+4⌈n/2⌉ε-2∑^n_i=1δ_i)∑_ j, k = 1 ^∞1/ (j^2+k^2) ]^1/2 = ∞ .Next note that (<ref>) and (<ref>) ensure that for alln ∈,δ_1,δ_2,…,δ_n ∈,ε∈ (0,1/4),m∈_0∩[1/4ε-1,∞) it holds that sup_N∈[ ∏^n_i=1θ^n+1_i+1(𝐮^ε,m,(δ_1,δ_2,…,δ_n)_n,N) _H_-δ_i] = sup_N∈( N^m v^⌈n/2⌉,δ_n-(1/2)-ε_⌈n/2⌉ N^m,N_H_-δ_n∏^n-1_i=1 v^⌈i/2⌉,δ_i-ε_⌈n/2⌉ N^m,N_H_-δ_i) ≤[ sup_N∈( N^m v^⌈n/2⌉,δ_n-(1/2)-ε_⌈n/2⌉ N^m,N_H_-δ_n)]∏^n-1_i=1[ sup_N∈ v^⌈i/2⌉,δ_i-ε_⌈n/2⌉ N^m,N_H_-δ_i] = [ sup_N∈( N^2mv^⌈n/2⌉,δ_n-(1/2)-ε_⌈n/2⌉ N^m,N^2_H_-δ_n) ]^1/2 ∏^n-1_i=1[ sup_N∈ v^⌈i/2⌉,δ_i-ε_⌈n/2⌉ N^m,N^2_H_-δ_i]^1/2 .Furthermore, observe that (<ref>) shows that for alln∈,m∈_0,δ∈,ε∈(0,∞) it holds that sup_N∈( N^2m v^n,δ-(1/2)-ε_nN^m,N^2_H_-δ) = sup_N∈( N^2m v^n,-(1/2)-ε_nN^m,N^2_H ) = sup_N∈( N^2m ^N_j=1[ (-A)^-(1/2)-ε e_n+jnN^m] 
^2_H ) = sup_N∈( N^2m/ c^(1+2ε)[ ∑^N_j=11/ (n+jnN^m)^(2+4ε)] ) ≤sup_N∈( N^2m/ c^(1+2ε)[ ∑^∞_j=11/ (jN^m)^(2+4ε)] ) = sup_N∈( 1/ N^4mεc^(1+2ε)[ ∑^∞_j=11/ j^(2+4ε)] ) = 1/ c^(1+2ε)[ ∑^∞_j=11/ j^(2+4ε)] < ∞.In addition, note that (<ref>) and (<ref>) imply that for alln∈, i∈{1,2,…,n},δ∈,ε∈(0,1/4),m∈_0∩[1/4ε-1,∞) it holds that sup_N∈v^i,δ-ε_nN^m,N^2_H_-δ = sup_N∈v^i,-ε_nN^m,N^2_H = sup_N∈Pv^i,-ε_nN^m,N^2_H ≤1/c^2ε(1-4ε) < ∞ .Combining (<ref>) with (<ref>) and (<ref>) yields that for alln ∈,δ_1,δ_2,…,δ_n ∈,ε∈ (0,1/4),m∈_0∩[1/4ε-1,∞) it holds that sup_N∈[ ∏^n_i=1θ^n+1_i+1(𝐮^ε,m,(δ_1,δ_2,…,δ_n)_n,N) _H_-δ_i] < ∞.Moreover, note that for allN,n ∈,k ∈_0,r ∈it holds thatv^k,r_n,N∈span({e_m m∈}) ∖{0}.This and the fact thatspan({e_n n∈})⊆∩_r∈ H_r ensure that for allN,n∈,m ∈_0,ε∈,δ∈^nit holds that 𝐮^ε,m,δ_n,N∈( (∩_r∈ H_r) ∖{0})^n+1.Combining this with (<ref>) assures that for alln ∈,δ∈^n,ε∈ (0,1/4),m∈_0∩[1/4ε-1,∞) it holds that inf_N∈[ 1/∏^n_i=1θ^n+1_i+1(𝐮^ε,m,δ_n,N) _H_-δ_i] = 1/[ sup_N∈( ∏^n_i=1θ^n+1_i+1(𝐮^ε,m,δ_n,N) _H_-δ_i) ] ∈(0,∞).This, (<ref>), and (<ref>) show that for alln ∈,q ∈ [0,∞),δ=(δ_1, δ_2, …, δ_n) ∈^n, ε∈ (0,⌈n/2⌉/2),m∈_0∩[⌈n/2⌉/2ε-1,∞),t ∈ (0,T]with∑^n_i=1δ_i ≥1/2 + ε it holds that sup_𝐮=(u_0,u_1,…,u_n) ∈ ((∩_r∈H_r))^n+1[ ([ X^ n,𝐮_t ^2_H_-q])^1/2/∏^n_ i=1 u_i_H_-δ_i] = sup_N∈sup_𝐮=(u_0,u_1,…,u_n) ∈ ((∩_r∈H_r))^n+1[ ([ X^ n,𝐮_t ^2_H_-q])^1/2/∏^n_ i=1 θ^n+1_i+1(𝐮)_H_-δ_i] ≥sup_ N ∈[ ([ X^ n,𝐮^ε/(2⌈n/2⌉),m,δ_n,N_t ^2_H_-q])^1/2/∏^n_i=1θ^n+1_i+1(𝐮^ε/(2⌈n/2⌉),m,δ_n,N) _H_-δ_i] ≥[ inf_N∈1/∏^n_i=1θ^n+1_i+1(𝐮^ε/(2⌈n/2⌉),m,δ_n,N) _H_-δ_i] [ sup_ N ∈([ X^ n,𝐮^ε/(2⌈n/2⌉),m,δ_n,N_t ^2_H_-q])^1/2] =∞ .This and Hölder's inequality establish item (<ref>).The proof of Theorem <ref> is thus completed. § ACKNOWLEDGEMENTS We gratefully acknowledge Adam Andersson for a number of useful comments. This project has been supported through the SNSF-Research project 200021_156603"Numerical approximations of nonlinear stochastic ordinary and partial differential equations".acm
http://arxiv.org/abs/1703.09198v1
{ "authors": [ "Mario Hefter", "Arnulf Jentzen", "Ryan Kurniawan" ], "categories": [ "math.PR", "math.AP" ], "primary_category": "math.PR", "published": "20170327173247", "title": "Counterexamples to regularities for the derivative processes associated to stochastic evolution equations" }
Johann Radon Institute for Computational and Applied Mathematics, Austrian Academy of Sciences, Altenberger Straße 69, 4040 Linz, Austria

A Dynamic Programming Solution to Bounded Dejittering Problems

Lukas F. Lang
==============================================================

We propose a dynamic programming solution to image dejittering problems with bounded displacements and obtain efficient algorithms for the removal of line jitter, line pixel jitter, and pixel jitter.

§ INTRODUCTION

In this article we devise a dynamic programming (DP) solution to dejittering problems with bounded displacements. In particular, we consider instances of the following image acquisition model. A D-dimensional image u^δ: Ω → ℝ^D defined on a two-dimensional domain Ω ⊂ ℝ^2 is created by the equation

u^δ = u ∘ Φ,

where u: Ω → ℝ^D is the original, undisturbed image, Φ: Ω → Ω is a displacement perturbation, and ∘ denotes function composition. Typically, Φ = (Φ_1, Φ_2)^⊤ is a degradation generated by the acquisition process and is considered random. In addition, u^δ may exhibit additive noise η. The central theme of this article is displacement error correction, that is, to recover the original image solely from the corrupted image.

One particularly interesting class is dejittering problems. Jitter is a common artefact in digital images or image sequences and is typically attributed to inaccurate timing during signal sampling, synchronisation issues, or corrupted data transmission <cit.>. Most commonly, it is observed as line jitter, where entire lines are mistakenly shifted left or right by a random displacement. As a result, shapes appear jagged and unappealing to the viewer. Other, equally disturbing, defects are line pixel jitter and pixel jitter. The former type is due to a random shift of each position in the horizontal direction only, while the latter is caused by a random displacement in ℝ^2. See Fig.
<ref> for examples.

The problem of dejittering is to reverse the observed effect and has been studied in numerous works, see <cit.>. Recent efforts either deal with finite-dimensional minimisation problems <cit.> or rely on an infinite-dimensional setting <cit.>. Typically, variational approaches such as <cit.> are based on a linearisation of (<ref>) and try to directly infer the true image. Alternatively, as done in <cit.>, one can alternate between finding the displacement and inferring the original image. Both approaches typically enforce a certain regularity of the reconstructed image.

In this article, we investigate efficient solutions to dejittering models introduced in <cit.>. However, we assume the magnitude of each component of the displacement x - Φ(x), where x = (x_1, x_2)^⊤ ∈ Ω, to be bounded by a constant ρ > 0. That is,

‖x_i - Φ_i(x)‖_L^∞(Ω) ≤ ρ.

The main idea is to assume that (<ref>) can be inverted (locally) by reconstructing the original value from a small neighbourhood. Even though not guaranteed theoretically, this approach is found to work surprisingly well for Gaussian distributed displacements of zero mean. A possible explanation is that the original value at a certain position x ∈ Ω is likely to occur in the close vicinity of x. Moreover, it does not require derivatives of the disturbed data, which typically occur during linearisation of (<ref>), see <cit.>. The obvious drawback is that u(x) can only take values which appear in u^δ within a small neighbourhood of x. As a result, its capabilities are limited in the presence of noise.

We build on previous work by Laborelli <cit.> and Nikolova <cit.>, and utilise DP for the numerical solution. For the removal of line jitter, we extend the algorithm in <cit.> to include regularisation of the displacement, yielding a more stable reconstruction.
In comparison to the greedy approach in <cit.> we are able to recover a global minimiser of the one-dimensional non-linear and possibly non-convex minimisation problem formulated in Sec. <ref>. For the case of line pixel jitter, we rewrite the problem into a series of independent minimisation problems, each of which can be solved optimally via DP. For pixel jitter removal we follow a different strategy, as the regularisation term in the considered functional prohibits a straightforward decomposition into simpler subproblems. We employ block coordinate descent, which is an iterative method and is guaranteed to converge energy-wise <cit.>. All of our algorithms generalise to D-dimensional images defined on ℝ^2. Moreover, generalisation to regularisation functionals involving higher-order derivatives of the sought image and to higher-order discretisation accuracy is straightforward. Table <ref> summarises the results of this work.

Notation. Let Ω = [0, W] × [0, H] ⊂ ℝ^2 be a two-dimensional domain. For x = (x_1, x_2)^⊤ ∈ ℝ^2, the p-th power of the usual p-norm of ℝ^2 is denoted by ‖x‖_p^p = ∑_i |x_i|^p. For D ∈ ℕ, we denote by u: Ω → ℝ^D, respectively, by u^δ: Ω → ℝ^D the unknown original and the observed, possibly corrupted, D-dimensional image. A vector-valued function u = (u_1, …, u_D)^⊤ is given in terms of its components. We write ∂_i^k u for the k-th partial derivative of u with respect to x_i and, for simplicity, we write ∂_i u for k = 1. For D = 1, the spatial gradient of u in ℝ^2 is ∇u = (∂_1 u, ∂_2 u)^⊤ and for D = 3 it is given by the matrix ∇u = (∂_i u_j)_ij. In the former case, its p-norm is simply ‖∇u‖_p and in the latter case it is given by ‖∇u‖_p^p = ∑_j=1^D ∑_i=1^2 |∂_i u_j|^p. For a function f: Ω → ℝ^D and 1 ≤ p < ∞ we denote the p-th power of the norm of L^p(Ω, ℝ^D) by ‖f‖_L^p(Ω)^p = ∫_Ω ‖f(x)‖_p^p dx. Moreover, ‖f‖_L^∞(Ω) denotes the essential supremum norm. A continuous image gives rise to a discrete representation u_i,j ∈ ℝ^D of each pixel.
A digital image is stored in matrix form u ∈ ℝ^m × n × D, where m denotes the number of columns arranged left to right and n the number of rows stored from top to bottom.

§ PROBLEM FORMULATION

Let u^δ: Ω → ℝ^D be an observed and possibly corrupted image generated by (<ref>). We aim to reconstruct an approximation of the original image u: Ω → ℝ^D. The main difficulty is that Φ^-1 might not exist and that u^δ might exhibit noise. Lenzen and Scherzer <cit.> propose to find a minimising pair (u, Φ) of the energy

∫_Ω ‖Φ(x) - x‖_2^2 dx + α ℛ(u)

such that (u, Φ) satisfies (<ref>). Here, ℛ(u) is a regularisation functional and α > 0 is a parameter. In what follows, we consider one exemplary class of displacements which arise in dejittering problems. They are of the form

Φ = Id + d,

with d: Ω → Ω depending on the particular jitter model. Typically, ℛ(u) is chosen in accordance with d. We assume that d is Gaussian distributed around zero with variance σ^2 and that, whenever x + d lies outside Ω, we typically have u^δ(x) = 0. In order to approximately reconstruct u we will assume that, for every x ∈ Ω, there exists

d(x) = inf{ ‖v‖_2 | v ∈ ℝ^2, u(x) = u^δ(x - v) }.

In other words, we can invert (<ref>) and locally reconstruct u by finding d. While this requirement trivially holds true for line jitter under appropriate treatment of the boundaries, it is not guaranteed in the cases of line pixel jitter and pixel jitter. Moreover, as a consequence of (<ref>) and (<ref>) we have, for i ∈ {1, 2},

‖Φ_i(x) - x_i‖_L^∞(Ω) = ‖d_i‖_L^∞(Ω) ≤ ρ.

Line Jitter. In this model, the corrupted image is assumed to be created as

u^δ(x_1, x_2) = u(x_1 + d(x_2), x_2) + η(x_1, x_2),

where d: [0, H] → ℝ is a random displacement and η: Ω → ℝ^D is typically Gaussian white noise. The corruption arises from a horizontal shift of each line by a random amount, resulting in visually unappealing, jagged shapes. Assuming zero noise, (<ref>) can be inverted within [ρ, W - ρ] × [0, H] given d.
The original image is thus given by

u(x_1, x_2) = u^δ(x_1 - d(x_2), x_2).

For η ≢ 0 additional image denoising is required, see e.g. <cit.> for standard methods of variational image denoising. We minimise the energy

ℰ_α,p^k(d) ≔ α ‖d‖^2_L^2([0, H]) + ∑_ℓ=1^k ∬_Ω ‖∂_2^ℓ u^δ(x_1 - d(x_2), x_2)‖_p^p dx_1 dx_2,

subject to ‖d‖_L^∞([0, H]) ≤ ρ. The first term in (<ref>) is suitable for displacements which are Gaussian distributed around zero. It prevents the reconstruction from being fooled by dominant vertical edges and effectively removes a constant additive displacement, resulting in a centred image. The second term utilises identity (<ref>) and penalises the sum of the magnitudes of the vertical derivatives of the reconstructed image up to k-th order. Here, α ≥ 0 is a regularisation parameter and p > 0 is an exponent. The proposed framework for the solution of (<ref>) is more general and allows a different exponent p_ℓ and an individual weight for each term in the sum. Moreover, any other norm of d might be considered.

We restrict ourselves to discretisations of ℰ_α,p^1 and ℰ_α,p^2 and assume that images are piecewise constant, are defined on a regular grid, and that all displacements are integer. Then, for every j ∈ {1, …, n}, we seek d_j ∈ ℒ with ℒ ≔ {-ρ, …, ρ}. By discretising with backward finite differences we obtain

ℰ_α,p^1(d) ≈ ∑_j=1^n α d_j^2 + ∑_j=2^n ∑_i=1^m ‖u^δ_{i-d_j, j} - u^δ_{i-d_{j-1}, j-1}‖_p^p,
ℰ_α,p^2(d) ≈ ℰ_α,p^1(d) + ∑_j=3^n ∑_i=1^m ‖u^δ_{i-d_j, j} - 2u^δ_{i-d_{j-1}, j-1} + u^δ_{i-d_{j-2}, j-2}‖_p^p.

Line Pixel Jitter. Images degraded by line pixel jitter are generated by

u^δ(x_1, x_2) = u(x_1 + d(x_1, x_2), x_2) + η(x_1, x_2),

where d: Ω → ℝ now depends on both x_1 and x_2. As before, the displacement is in the horizontal direction only. Images appear pixelated and exhibit horizontally fringed edges.
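For experimentation, the two acquisition models above can be simulated directly. The following sketch is not part of the paper: the function names, the integer-valued displacements, and setting samples that fall outside the domain to zero are our own assumptions, mirroring the noise-free cases of the two generation equations.

```python
import numpy as np

def line_jitter(u, rho, rng):
    # u^δ(x1, x2) = u(x1 + d(x2), x2): one integer shift per row, |d| <= rho.
    n, m = u.shape[:2]
    d = rng.integers(-rho, rho + 1, size=n)
    v = np.zeros_like(u)
    for j in range(n):
        for i in range(m):
            if 0 <= i + d[j] < m:        # values sampled outside Ω are set to 0
                v[j, i] = u[j, i + d[j]]
    return v, d

def line_pixel_jitter(u, rho, rng):
    # u^δ(x1, x2) = u(x1 + d(x1, x2), x2): an independent shift per pixel.
    n, m = u.shape[:2]
    d = rng.integers(-rho, rho + 1, size=(n, m))
    v = np.zeros_like(u)
    for j in range(n):
        for i in range(m):
            if 0 <= i + d[j, i] < m:
                v[j, i] = u[j, i + d[j, i]]
    return v, d
```

Rows are indexed top to bottom and columns left to right, matching the matrix convention of the notation section; with ρ = 0 both routines return the input unchanged.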
In contrast to line jitter, in the noise-free setting one is in general not able to reconstruct the original image solely from u^δ, unless d(·, x_2): [0, W] → ℝ is bijective on [0, W] for every x_2 ∈ [0, H]. As a remedy, we utilise the fact that d is assumed to be independent (in x_1) and identically Gaussian distributed around zero. The idea is that the original value u(x_1, x_2), or a value sufficiently close to it, is likely to be found in a close neighbourhood of (x_1, x_2) with respect to the x_1-direction. We assume that

d(x_1, x_2) = inf{ |v| | v ∈ ℝ, u(x_1, x_2) = u^δ(x_1 - v, x_2) }

exists and that ‖d‖_L^∞(Ω) ≤ ρ. Clearly, it is not unique without further assumptions; however, finding one d is sufficient. We utilise (<ref>) and minimise

ℱ_α, p^k(d) ≔ α‖d‖_L^2(Ω)^2 + ∑_ℓ=1^k ∬_Ω ‖∂_2^ℓ u^δ(x_1 - d(x_1, x_2), x_2)‖_p^p dx_1 dx_2

subject to ‖d‖_L^∞(Ω) ≤ ρ. Again, p > 0 and α ≥ 0. In contrast to before, we decompose the objective into a series of minimisation problems, which can then be solved independently and in parallel by DP. To this end, let us rewrite

ℱ_α, p^k(d) = ∫_0^W ∫_0^H ( α d(x_1, x_2)^2 + ∑_ℓ=1^k ‖∂_2^ℓ u^δ(x_1 - d(x_1, x_2), x_2)‖_p^p ) dx_2 dx_1.

As before, we consider derivatives up to second order. Assuming piecewise constant images defined on a regular grid, we seek, for each (i, j) ∈ {1, …, m} × {1, …, n}, a displacement d_i, j ∈ ℒ with ℒ ≔ {-ρ, …, ρ}. The finite-dimensional approximations of ℱ_α, p^1 and ℱ_α, p^2 hence read

ℱ_α, p^1(d) ≈ ∑_i=1^m ( ∑_j=1^n α d_i, j^2 + ∑_j=2^n ‖u^δ_i - d_i, j, j - u^δ_i - d_i, j-1, j-1‖_p^p ),

ℱ_α, p^2(d) ≈ ℱ_α, p^1(d) + ∑_i=1^m ∑_j=3^n ‖u^δ_i - d_i, j, j - 2u^δ_i - d_i, j-1, j-1 + u^δ_i - d_i, j-2, j-2‖_p^p.

Pixel Jitter. An image corrupted by pixel jitter is generated by

u^δ(x_1, x_2) = u(x_1 + d_1(x_1, x_2), x_2 + d_2(x_1, x_2)) + η(x_1, x_2),

where d = (d_1, d_2)^⊤, d_i: Ω → ℝ, is now a vector-valued displacement. Edges in degraded images appear pixelated and fringed in both directions.
Unless the displacement d is bijective from Ω to itself and η ≡ 0, there is no hope that u can be perfectly reconstructed from u^δ. However, we assume the existence of

d(x) = inf{ ‖v‖_2 | v ∈ ℝ^2, u(x) = u^δ(x - v) }

such that ‖d_i‖_L^∞(Ω) ≤ ρ, for i ∈ {1, 2}. For p > 0 and α ≥ 0, we minimise

𝒢_α, p(d) ≔ α‖d‖_L^2(Ω)^2 + ∫_Ω ‖∇(u^δ(x - d(x)))‖_p^p dx

subject to ‖d_i‖_L^∞(Ω) ≤ ρ, i ∈ {1, 2}. In contrast to before, we only consider first-order derivatives of the sought image.

Assuming piecewise constant images on a regular grid and integer displacements, we seek for each (i, j) ∈ {1, …, m} × {1, …, n} an offset d_i, j ∈ ℒ with ℒ ≔ {-ρ, …, ρ}^2. In further consequence, we obtain

𝒢_α, p(d) ≈ ∑_i=1^m ∑_j=1^n α‖d_i, j‖_2^2 + ∑_i=2^m ∑_j=1^n ‖u^δ_(i, j) - d_i, j - u^δ_(i - 1, j) - d_i - 1, j‖_p^p + ∑_i=1^m ∑_j=2^n ‖u^δ_(i, j) - d_i, j - u^δ_(i, j - 1) - d_i, j - 1‖_p^p.

§ NUMERICAL SOLUTION

Dynamic Programming on a Sequence. Suppose we are given n ∈ ℕ elements and we aim to assign to each element i a label from its associated space of labels ℒ_i. Without loss of generality, we assume that all ℒ_i are identical and contain finitely many labels. A labelling is denoted by x = (x_1, …, x_n)^⊤ ∈ ℒ^n, where x_i ∈ ℒ is the label assigned to the i-th element.

Let us consider the finite-dimensional minimisation problem

min_x ∈ ℒ^n ∑_i=1^n φ_i(x_i) + ∑_i=2^n ψ_i-1, i(x_i-1, x_i).

We denote a minimiser by x^* and its value by E(x^*). Here, φ_i(x_i) is the penalty of assigning the label x_i ∈ ℒ to element i, whereas ψ_i-1, i(x_i-1, x_i) is the cost of assigning x_i-1 to the element i-1 and x_i to i, respectively. Several minimisers of (<ref>) might exist, but finding one is sufficient for our purpose. Energies of the form (<ref>) typically arise from the discretisation of computer vision problems such as one-dimensional signal denoising, stereo matching, or curve detection. We refer to <cit.> for a comprehensive survey and to <cit.> for a general introduction to DP.

The basic idea for solving (<ref>) is to restate the problem in terms of smaller subproblems.
Let L ≔ |ℒ| denote the cardinality of ℒ. Then, for j ≤ n and ℓ ≤ L, we define OPT(j, ℓ) as the minimum value of the above minimisation problem (<ref>) over the first j elements with the value of the last variable x_j being set to the ℓ-th label. That is,

OPT(j, ℓ) ≔ min_x ∈ ℒ^j-1 ∑_i=1^j φ_i(x_i) + ∑_i=2^j ψ_i-1, i(x_i-1, x_i),

where x_j is fixed to the ℓ-th label. Moreover, we define OPT(0, ℓ) ≔ 0 for all ℓ ≤ L and ψ_0, 1 ≔ 0, and show the following recurrence: Let x_ℓ ∈ ℒ denote the label of the j-th element. Then,

OPT(j, ℓ) = φ_j(x_ℓ) + min_x_j-1 ∈ ℒ { OPT(j-1, x_j-1) + ψ_j-1, j(x_j-1, x_ℓ) }.

By induction on the elements j ∈ ℕ. Induction basis j = 1: OPT(1, ℓ) = φ_1(x_ℓ) holds. Inductive step: assume the claim holds for j-1. Then,

OPT(j, ℓ) = min_x ∈ ℒ^j-1 ∑_i=1^j φ_i(x_i) + ∑_i=2^j ψ_i-1, i(x_i-1, x_i)
= min_x_j-1 ∈ ℒ { OPT(j-1, x_j-1) + φ_j(x_ℓ) + ψ_j-1, j(x_j-1, x_ℓ) }
= φ_j(x_ℓ) + min_x_j-1 ∈ ℒ { OPT(j-1, x_j-1) + ψ_j-1, j(x_j-1, x_ℓ) }.

It is straightforward to see that Alg. <ref> correctly computes all values of OPT(j, ℓ) and hence the minimum value of (<ref>). Its running time is in 𝒪(nL^2) and its memory requirement in 𝒪(nL). Recovering a minimiser of (<ref>) can be done either by a subsequent backward pass or, even faster at the cost of additional memory, by storing a minimising label x_i in each iteration.

One can generalise the algorithm to energies involving higher-order terms of q ≥ 2 consecutive unknowns, yielding a running time of 𝒪(nL^q) and a memory requirement of 𝒪(nL^q-1). We refer to <cit.> for the details. The minimisation problems encountered in Sec. <ref> involve terms of order at most three. It is straightforward to apply the above framework to (<ref>), (<ref>), (<ref>), and (<ref>).

Energy Minimisation on Graphs. A more general point of view is to consider (<ref>) on (undirected) graphs.
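To make the chain recurrence concrete, the following NumPy sketch applies it to the discretised first-order line jitter energy ℰ_α, p^1: each row j is an element, its label is the integer displacement d_j ∈ {-ρ, …, ρ}, the unary cost is α d_j^2, and the pairwise cost is the p-th power difference between consecutively shifted rows. Function names are illustrative, and for simplicity rows are treated as horizontally periodic, whereas the text restricts the domain to [ρ, W - ρ]; the DP itself runs in 𝒪(nL^2) with a backward pass, as stated above.

```python
import numpy as np

def solve_chain(phi, psi):
    """DP for min_x sum_j phi[j, x_j] + sum_j psi[j-1][x_{j-1}, x_j].

    phi : (n, L) unary costs; psi : (n-1, L, L) pairwise costs.
    Returns the minimum value and one minimising labelling (indices into L).
    """
    n, L = phi.shape
    opt = np.empty((n, L))
    arg = np.zeros((n, L), dtype=int)
    opt[0] = phi[0]
    for j in range(1, n):
        cand = opt[j - 1][:, None] + psi[j - 1]  # OPT(j-1, a) + psi(a, b)
        arg[j] = np.argmin(cand, axis=0)
        opt[j] = phi[j] + np.min(cand, axis=0)
    x = np.empty(n, dtype=int)                   # backward pass
    x[-1] = int(np.argmin(opt[-1]))
    for j in range(n - 1, 0, -1):
        x[j - 1] = arg[j, x[j]]
    return float(opt[-1].min()), x

def line_jitter_costs(u_d, rho, alpha=0.0, p=1.0):
    """Unary/pairwise costs of the discretised energy E^1_{alpha,p}.

    u_d is an (n, m) corrupted grayscale image, one scan line per row.
    Periodic horizontal boundary handling is a simplifying assumption.
    """
    n, m = u_d.shape
    labels = np.arange(-rho, rho + 1)            # candidate displacements
    L = len(labels)
    # shifted[j, l, i] = u^delta_{i - labels[l], j}
    shifted = np.stack([np.stack([np.roll(u_d[j], int(d)) for d in labels])
                        for j in range(n)])
    phi = alpha * np.tile(labels.astype(float) ** 2, (n, 1))
    psi = np.empty((n - 1, L, L))
    for j in range(1, n):
        diff = np.abs(shifted[j][None, :, :] - shifted[j - 1][:, None, :])
        psi[j - 1] = np.sum(diff ** p, axis=2)   # psi[a, b], summed over i
    return phi, psi, labels
```

On a synthetic image whose rows are identical up to a known jitter, the recovered labels reproduce the jitter exactly (the α-term resolves the constant-shift ambiguity of the data term).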
Thereby, each element is associated with a vertex of the graph and one seeks a minimiser to

min_x ∈ ℒ^n ∑_i φ_i(x_i) + ∑_i ∼ j ψ_i, j(x_i, x_j).

The first term sums over all n vertices, whereas the second term sums over all pairs of vertices which are connected by an edge in the graph. Such energies typically arise from the discretisation of variational problems on regular grids, such as image denoising. In general, (<ref>) is NP-hard. However, under certain restrictions on φ_i and ψ_i, j, there exist polynomial-time algorithms. Whenever the underlying structure is a tree or a sequence as above, the problem can be solved by means of DP without further restrictions. See <cit.> for a general introduction to the topic.

Nevertheless, many interesting problems fall into neither category. One remedy is block coordinate descent <cit.> in order to approximately minimise (<ref>), which we denote by F. The essential idea is to choose in each iteration t ∈ ℕ an index set ℐ^(t) ⊂ {1, …, n} which contains far fewer elements than n, and to consider (<ref>) only with respect to the unknowns x_i, i ∈ ℐ^(t). The values of all other unknowns are taken from the previous iteration. That is, one finds in each iteration

x^(t) ∈ arg min_x_i: i ∈ ℐ^(t) F^(t)(x),

where F^(t) denotes the energy in (<ref>) with all unknowns not in ℐ^(t) taking the values from x^(t-1). Typically, ℐ^(t) is chosen such that (<ref>) can be solved efficiently. It is straightforward to see that block coordinate descent generates a sequence { x^(t) }_t of solutions such that F never increases. The energy is thus guaranteed to converge to a local minimum with regard to the chosen ℐ^(t).

We perform block coordinate descent for the solution of (<ref>) and iteratively consider instances of (<ref>). Following the ideas in <cit.>, we consecutively minimise in each iteration over all odd columns, all even columns, all odd rows, and finally over all even rows.
During minimisation, the displacements of all other rows, respectively columns, are fixed. See Table <ref> for the resulting algorithms.

§ NUMERICAL RESULTS

We present qualitative results on the basis of a test image[Taken from the BSDS500 dataset at <https://www.eecs.berkeley.edu/Research/Projects/CS/vision/grouping/resources.html>] and create images degraded by line jitter, line pixel jitter, and pixel jitter by sampling displacements from a Gaussian distribution with variance σ^2 = 1.5 and rounding them to the nearest integer, see Fig. <ref>. Vector-valued displacements are sampled component-wise. In addition, we create instances with additive Gaussian white noise η of variance σ_η^2 = 0.01. We then apply to each instance the appropriate algorithm with varying parameters α and p, and show the results which maximise jitter removal, see Figs. <ref>, <ref>, and <ref>. The value of ρ is set to the maximum displacement that occurred during creation. A Matlab/Java implementation is available online.[<https://www.csc.univie.ac.at>]

While regularisation does not seem to be crucial for line jitter removal (apart from yielding a centred image), for line pixel jitter and pixel jitter removal it is. Moreover, including second-order derivatives in our models does not necessarily improve the result. In all cases our results indicate effective removal of jitter, even in the presence of moderate noise. However, subsequent denoising seems obligatory.

§ RELATED WORK

Kokaram et al. <cit.> were among the first to consider line dejittering. They employed a block-based autoregressive model for grey-valued images and developed an iterative multi-resolution scheme to estimate line displacements. Subsequent drift compensation removes low-frequency oscillations.
Their methods seek displacements which reduce the vertical gradients of the sought image.

For line jitter removal, a naive approach consists of fixing the first line of the image and successively minimising a mismatch function between consecutive lines. This greedy algorithm tends to introduce vertical lines in the reconstructed image and fails in the presence of dominant non-vertical edges <cit.>. As a remedy, Laborelli <cit.> proposed to apply DP and to recover a horizontal displacement for each line by minimising the sum of the pixel-wise differences between two or three consecutive lines.

Shen <cit.> proposed a variational model for line dejittering in a Bayesian framework and investigated its properties for images in the space of bounded variation. In order to minimise the non-linear and non-convex objective, an iterative algorithm that alternately estimates the original image and the displacements is devised.

Kang and Shen <cit.> proposed a two-step iterative method termed “bake and shake”. In a first step, a Perona-Malik-type diffusion process is applied in order to suppress high-frequency irregularities in the image and to smooth distorted object boundaries. In a second step, the displacement of each line is independently estimated by solving a non-linear least-squares problem. In another article <cit.>, they investigated properties of slicing moments of images with bounded variation and proposed a variational model based on moment regularisation.

In <cit.>, Nikolova considered greedy algorithms for finite-dimensional line dejittering with bounded displacements. These algorithms consider vertical differences between consecutive lines up to third order and are applicable to grey-value as well as colour images. In each step, a non-smooth and possibly non-convex function is minimised by enumeration, leading to an 𝒪(mnρ) algorithm.
For experimental evaluation, various error measures are discussed.

Lenzen and Scherzer <cit.> considered partial differential equations for displacement error correction in multi-channel data. Their framework is applicable to image interpolation, dejittering, and deinterlacing. For line pixel dejittering they derived gradient flow equations for a non-convex variational formulation involving the total variation of the reconstructed image.

Dong et al. <cit.> treated the finite-dimensional models in <cit.> in an infinite-dimensional setting. They derived the corresponding energy flows and systematically investigated their applicability to various jitter problems.

§ CONCLUSION

In this article we presented efficient algorithms for line jitter, line pixel jitter, and pixel jitter removal in a finite-dimensional setting. By assuming (approximate) invertibility of the image acquisition equation we were able to cast the minimisation problems into a well-known DP framework. Our experimental results indicate effective removal of jitter, even in the presence of moderate noise.

§ REFERENCES

[BlaKohRot11] A. Blake, P. Kohli, and C. Rother, editors. Markov Random Fields for Vision and Image Processing. MIT Press, 2011.
[CheKol14] Q. Chen and V. Koltun. Fast MRF optimization with application to depth reconstruction. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3914–3921. IEEE, June 2014.
[CorLeiRivSte09] T. H. Cormen, C. E. Leiserson, R. L. Rivest, and C. Stein. Introduction to Algorithms. MIT Press, Cambridge, MA, third edition, 2009.
[DonPatSchOek15] G. Dong, A. R. Patrone, O. Scherzer, and O. Öktem. Infinite dimensional optimization models and PDEs for dejittering. In Scale Space and Variational Methods in Computer Vision. SSVM 2015. Lecture Notes in Computer Science, volume 9087, pages 678–689. Springer, April 2015.
[FelZab11] P. F. Felzenszwalb and R. Zabih. Dynamic programming and graph algorithms in computer vision.
IEEE Trans. Pattern Anal. Mach. Intell., 33(4):721–740, April 2011.
[KanShe06] S. H. Kang and J. Shen. Video dejittering by bake and shake. Image Vision Comput., 24(2):143–152, 2006.
[KanShe07] S. H. Kang and J. Shen. Image dejittering based on slicing moments. In X. C. Tai, K. A. Lie, T. F. Chan, and S. Osher, editors, Image Processing Based on Partial Differential Equations. Springer, 2007. Mathematics and Visualization.
[KokRay92] A. Kokaram and P. Rayner. An algorithm for line registration of TV images based on a 2-D AR model. In Proceedings of Signal Processing VI, Theories and Applications, pages 1283–1286. European Association for Signal Processing, 1992.
[KokRayVanBie97] A. Kokaram, P. Rayner, P. Van Roosmalen, and J. Biemond. Line registration of jittered video. In Proceedings of the 1997 IEEE International Conference on Acoustics, Speech, and Signal Processing, volume 4, pages 2553–2556. IEEE Computer Society, 1997.
[Kok98] A. C. Kokaram. Motion Picture Restoration: Digital Algorithms for Artefact Suppression in Degraded Motion Picture Film and Video. Springer, London, UK, 1998.
[Lab03] L. Laborelli. Removal of video line jitter using a dynamic programming approach. In Proceedings of the International Conference on Image Processing, 2003 (ICIP 2003), volume 2, pages II–331–334, Sept 2003.
[LenSch09] F. Lenzen and O. Scherzer. A geometric PDE for interpolation of m-channel data. In Scale Space and Variational Methods in Computer Vision. SSVM 2009. Lecture Notes in Computer Science, volume 5567, pages 413–425. Springer, 2009.
[LenSch11] F. Lenzen and O. Scherzer. Partial differential equations for zooming, deinterlacing and dejittering. Int. J. Comput. Vision, 92(2):162–176, April 2011.
[MenHeiGei15] M. Menze, C. Heipke, and A. Geiger. Discrete optimization for optical flow. In Proceedings of the 37th German Conference on Pattern Recognition, volume 9358 of Lecture Notes in Computer Science, pages 16–28. Springer, 2015.
[Nik09a] M. Nikolova. Fast dejittering for digital video frames.
In Scale Space and Variational Methods in Computer Vision. SSVM 2009, volume 5567 of Lecture Notes in Computer Science, pages 439–451. Springer, 2009.
[Nik09] M. Nikolova. One-iteration dejittering of digital video images. J. Vis. Commun. Image Represent., 20:254–274, 2009.
[SchGraGroHalLen09] O. Scherzer, M. Grasmair, H. Grossauer, M. Haltmeier, and F. Lenzen. Variational Methods in Imaging. Number 167 in Applied Mathematical Sciences. Springer, 2009.
[She04] J. Shen. Bayesian video dejittering by BV image model. SIAM J. Appl. Math., 64(5):1691–1708, 2004.
arXiv:1703.09161v1 [math.OC]. Lukas F. Lang, "A Dynamic Programming Solution to Bounded Dejittering Problems", 27 March 2017.
Token-based Function Computation with Memory

Macheng Shen, University of Michigan, Department of Naval Architecture and Marine Engineering, Ann Arbor, USA. Email: macshen@umich.edu
Jing Sun, University of Michigan, Department of Naval Architecture and Marine Engineering, Ann Arbor, USA. Email: jingsun@umich.edu
Ding Zhao, University of Michigan, Department of Mechanical Engineering, Ann Arbor, USA. Corresponding author: zhaoding@umich.edu

December 30, 2023

Cooperative map matching (CMM) uses the Global Navigation Satellite System (GNSS) positioning of a group of vehicles to improve the standalone localization accuracy. It has been shown to reduce GNSS error from several meters to sub-meter level by matching the biased GNSS positioning of four vehicles to a digital map with road constraints in our previous work. While further error reduction is expected by increasing the number of participating vehicles, fundamental questions on how the vehicle membership of the CMM affects the performance of the GNSS-based localization results need to be addressed to provide guidelines for design and optimization of the vehicle network. The quantitative relationship between the estimation error and the road constraints has to be systematically investigated to provide insights. In this work, a theoretical study is presented that aims at developing a framework for quantitatively evaluating effects of the road constraints on the CMM accuracy and for eventual optimization of the CMM network.
More specifically, a closed-form expression of the CMM error in terms of the road angles and GNSS error is first derived based on a simple CMM rule. Then a Branch and Bound algorithm and a Cross Entropy method are developed to minimize this error by selecting the optimal group of vehicles under two different assumptions about the GNSS error variance. § INTRODUCTIONLow-cost Global Navigation Satellite Systems (GNSS) are used for most mobile applications, with localization error of several meters. The limited accuracy of the low-cost GNSS is mainly due to the atmospheric error, referred to as the common error, as well as receiver noise and multipath error, referred to as the non-common error. Various techniques have been used to reduce GNSS localization error such as precise point positioning (PPP) <cit.>, real time kinematics (RTK) <cit.> and sensor fusion <cit.>. Nonetheless, improving the localization accuracy of the widespread GNSS without incurring additional hardware and infrastructure costs has motivated recent research activities on cooperative GNSS localization <cit.> and cooperative map matching (CMM) <cit.>, <cit.>, <cit.>. CMM improves the GNSS localization accuracy by matching the GNSS positioning of a group of connected vehicles to a digital map.With the reasonable assumption that vehicles should travel on the roads, the road constraints can be applied to correct the GNSS positioning. CMM is much more powerful in exploiting the road constraints than ego-localization with map matching. Fig. <ref> illustrates how error-corrupted GNSS information is corrected through a naive CMM approach. The GNSS solutions, denoted as red dots, of all the three vehicles are biased away from their true positions, while the biases are highly correlated. Virtual constraints from the red and the blue vehicle are applied to restrict the positioning of the green vehicle. 
The underlying assumption for the application of the virtual constraints is that the GNSS positioning biases of the three vehicles are the same. This example manifests the effect of road constraints on the CMM localization. The limited bandwidth of the Dedicated Short Range Communication (DSRC)-based connected vehicle network imposes an upper limit on the maximal number of connected vehicles to avoid frequent packet loss <cit.>, but typically the number of vehicles within the communication range exceeds this limit. This limitation motivates the development of an optimization scheme that selects M vehicles with optimal road constraints that minimize the CMM localization error, where M is the maximal number of connected vehicles. To achieve this goal, it is necessary to quantify the effect of road constraints on the CMM localization error. This is, nevertheless, not easily achievable, as the algorithms that implement CMM are usually sophisticated, and there is no analytic expression relating the localization error to the road constraints for these algorithms. Recently, two different CMM algorithms have been developed for the GNSS common error estimation problem, i.e., a non-Bayesian particle-based approach in Rohani et al. <cit.> and a Bayesian approach based on a Rao-Blackwellized Particle Filter in our previous work <cit.>, <cit.>. One common feature of these two CMM algorithms is that the probabilistic property of the GNSS error is utilized. This increases the localization accuracy and robustness, but also leads to a complicated mapping between the localization error and the road constraints, which makes the analysis of the effects of road constraints rather difficult.

In our previous work <cit.>, the correlation between the localization error and the road constraints is quantified analytically.
More specifically, a closed-form expression of the estimated localization error in terms of the road angles as well as the GNSS error is derived based on a simple CMM rule that neglects the probabilistic property of the GNSS error as well as historical GNSS measurements. As a result of these simplifications, this closed-form expression is not sufficient to accurately predict the localization error of the particle filter based CMM algorithm. Nonetheless, it can be used to select the optimal set of connected vehicles, on which the CMM error is expected to be minimized with the particle filter based CMM algorithm. In this work, two algorithms are developed to optimize the connected vehicle selection such that the closed-form expression of the estimated localization error is minimized. These results provide a guideline for the implementation of CMM.In the following sections, details about the derivation of the CMM localization error and the optimization algorithms are presented. In Section 2, the notions in CMM are introduced. Then a simple CMM rule is applied to derive an analytic expression of the localization error. Based on this error formula, in Section 3, two algorithms are presented to minimize the localization error through the optimal selection of connected vehicles. A Branch and Bound (B&B) <cit.> searching algorithm is developed in the case that all the vehicles have the same non-common GNSS error variance. A Cross Entropy (CE) method <cit.> combined with a heuristic pre-selection step is developed in the case that vehicles have different non-common GNSS error variances. In Section 4, the contributions and conclusions are summarized. 
§ CMM LOCALIZATION ERROR

In this section, we propose a framework for vehicle positioning within a reference road frame to facilitate the analytic investigation. The following assumptions are essential in the derivation:

* The GNSS common error is the same for all the connected vehicles within the vehicular network.
* The road side can be locally approximated as a straight line.
* The GNSS non-common error is small enough that the exact expression for the estimation error can be approximated by its first-order linearization with respect to the non-common error.
* The GNSS non-common error is a random variable that obeys the Gaussian distribution.

The first assumption is reasonably accurate as long as the connected vehicles are geographically close to each other, for example, within several miles. The second and the third assumptions are made for mathematical convenience. If they are violated, however, the exact expression of the estimation error will still be valid, but the asymptotic approximation will be inaccurate. The last assumption has been experimentally verified in <cit.> under open-sky conditions.

Consider a network of connected vehicles. The coordinate of the GNSS positioning of the i-th vehicle can be decomposed into a superposition of the coordinate of a point on the corresponding lane center, the deviation of the vehicle from the lane center, the common error, and the non-common error, as illustrated in Fig. <ref>. The grayscale image of the vehicle represents the GNSS positioning.
This decomposition can be expressed mathematically asx^G_i=x^L_i+x^D_i+x^C+x^N_i, i=1,2,...,N,where x^G_i is the GNSS positioning of the i-th vehicle, x^L_i is the closest point on the center of the lane from the vehicle, x^D_i is the deviation of the vehicle coordinates from x^L_i, x^C is the GNSS common error, x^N_i is the GNSS non-common error.The fact that all the vehicles travel on the roads can be expressed as a set of inequalitiesg_i(x^G_i-x^C-x^N_i)<0.Applying the second assumption, the constraint functions g_i have simple analytic formsg_i(x)=(x-x^L_i)· n_i-w,where {·} is the dot product operator, n_i is the unit vector normal to the lane center point towards outside of the road and w is the half width of the lane. Eq. (<ref>) can be interpreted as the feasible set of the common error given the GNSS positioning and the non-common error. The non-common error is unknown, however, to the implementation of CMM. Thus, the following approximation of the feasible set by neglecting the non-common error is used instead of the exact feasible set,Ω ={τ|⋂_i=1^Ng_i(x^G_i-τ)<0}={τ|⋂_i=1^Ng_i(x^L_i+x^C+x̃^N_i-τ)<0}={τ|⋂_i=1^Ng̃_i(x^C+x̃^N_i-τ)<0},where x̃^N_i ≜ x^D_i+x^N_iand g̃_i(x)≜ g_i(x+x^L_i)=x· n_i-w.A point estimator of the common error is taken as the average over the approximate feasible set Ω,x̂^C=1/S∫_Ωτ dA, S=∫_Ω dA,where τ is the dummy integration variable and dA is the area element.The estimation error, that is the difference between the true common error and the estimated common error, is of practical interest, which can be evaluated,e =x^C-x̂^C=x^C-1/S∫_Ωτ dA=1/S∫_Ω (x^C-τ) dA=1/S∫_Ω'τ' dA,where τ'=x^C-τ,andΩ'={τ'|⋂_i=1^Ng̃_i(x̃^N_i+τ')<0}.Eq. (<ref>) and (<ref>) states that the estimation error equals to the geometric center of the intersection of the road constraints perturbed by the composite non-common error x̃^N_i. Eq. (<ref>) is a random variable as a nonlinear function of the non-common error. 
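The centroid estimator above can be sketched numerically by sampling candidate common-error offsets τ on a grid and averaging those that satisfy every half-plane constraint g̃_i. The following NumPy sketch is illustrative only (function name, grid resolution, and the box-shaped test geometry are assumptions); an exact implementation would integrate over the polygonal feasible set instead of sampling it.

```python
import numpy as np

def common_error_estimate(x_gnss, x_lane, normals, w,
                          half_extent=10.0, step=0.05):
    """Centroid-of-feasible-set estimate of the GNSS common error.

    x_gnss  : (N, 2) GNSS positions x^G_i.
    x_lane  : (N, 2) closest lane-center points x^L_i.
    normals : (N, 2) unit outward lane normals n_i.
    w       : half lane width.
    The feasible set {tau : (x^G_i - tau - x^L_i) . n_i < w for all i}
    is approximated on a regular grid of candidate offsets tau.
    """
    s = np.arange(-half_extent, half_extent + step, step)
    tx, ty = np.meshgrid(s, s)
    tau = np.stack([tx.ravel(), ty.ravel()], axis=1)   # candidate offsets
    # g[k, i]: constraint value of vehicle i at candidate tau_k
    diff = x_gnss[None, :, :] - tau[:, None, :] - x_lane[None, :, :]
    g = np.einsum('kij,ij->ki', diff, normals) - w
    feasible = np.all(g < 0.0, axis=1)
    if not feasible.any():
        return None
    return tau[feasible].mean(axis=0)  # geometric center of the feasible set
```

With four constraints whose normals point in the four cardinal directions (a hypothetical box-shaped intersection of roads) and zero non-common error, the feasible set is a box centered at the true common error, so the centroid recovers it.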
The third assumption implies that (<ref>) can be linearized with respect to the non-common error:e=e_0+Δ e=e_0+CX̃/S_0,where e_0=1/S_0∫_Ω_0τ' dA, Ω_0={τ'|⋂_i=1^Ng̃_i(τ')<0}, X̃=[x̃^N_1· n_1,x̃^N_2· n_2,...,x̃^N_N· n_N]^T,and C=S_0∂ e/∂X̃.C is a 2× N matrix whose components are related to the geometric quantities of the road constraints.The condition under which the linearization (<ref>) is valid is||X̃||_∞≪2π w/N,where w is the half width of the lane.With the fourth assumption that each non-common error obeys independent Gaussian distribution with zero mean, i.e., X̃∼𝒩(0_N× 1, diag(σ^2_1,σ^2_2,...,σ^2_N)), the expectation of the square error isE_X[e^2]=e^2_0+1/S^2_0tr(L^TC^TCL),where L=diag(σ_1,σ_2,...,σ_N) is the Cholesky decomposition of the joint covariance matrix.§ OPTIMIZING THE SELECTION OF VEHICLES FOR MINIMAL LOCALIZATION ERRORIn practice, the maximal number of connected vehicles implementing CMM is limited by the finite communication bandwidth. One interesting problem of practical importance is to optimally select a group of M(M ≤ N) vehicles out of the total N available connected vehicles to implement CMM. This optimization problem can be stated as determining the indices j_1,j_2,...,j_M such that the corresponding road angles θ_j_1,θ_j_2,...,θ_j_M and the non-common error variances σ^2_j_1,σ^2_j_2,...,σ^2_j_M minimize the mean square error (<ref>) as an objective function.This optimization problem is combinatorial. A B&B searching algorithm is developed to efficiently find the optimal solution when all the vehicles have the same non-common error variance. When the vehicles have different non-common error variances, it will be proved that the part of the localization error that does not depend on the non-common error would be minimized if the road angles obey a uniform distribution. Motivated by this optimality of the uniform distribution, a CE method with a heuristic pre-selection step is developed to find a near optimal solution efficiently. 
The performances of these optimization algorithms are illustrated on synthetic data.

§.§ The B&B algorithm

In the case that all the non-common error variances have the same value, i.e., σ^2_1=σ^2_2=...=σ^2_N=σ^2, the objective function (<ref>) is a function of the road angles only. It is straightforward to verify that the corresponding continuous optimization problem has one global minimizer

θ^T_opt=[0, 2π/M, 4π/M, ..., 2π(M-1)/M].

It should be noted that this is not the solution to the connected vehicle selection problem, as the actual road angles will in general not coincide with the components of θ^T_opt. Nevertheless, this global minimizer can be used to derive the bound function, which is an indispensable part of the B&B method, described as follows. The objective function (<ref>) can be approximated by its truncated Taylor expansion as

J(θ)=E_X[e^2]≈ J_0+1/2 Δθ^T H Δθ,

where J_0=J(θ_opt) is the minimum of the continuous problem, H is the Hessian at the minimum, and Δθ is the deviation from θ_opt.

The B&B algorithm solves the combinatorial optimization problem by searching the solution space represented as an enumeration tree. A bound function is used to estimate the lower bound of the objective function values of the subtree rooted at the active node and to prune the branches that are guaranteed not to lead to the minimum. The bound function for the vehicle selection problem is obtained by minimizing the continuous relaxation of (<ref>) subject to equality constraints, where the equality constraints come from all the ancestors of the currently active node. This constrained minimization is equivalent to an unconstrained minimization of a quadratic function defined on ℝ^M^*, where M^*<M, which can be solved efficiently. Since the B&B method is guaranteed to find the optimal solution, its performance is evaluated by its computational complexity compared with that of brute-force searching. Fig.
<ref> shows the ratio between the average CPU time of the B&B method over 100 simulations and that of the brute-force searching. Instead of actually running the simulation for the brute-force searching, which can be computationally prohibitive, its computation time is estimated by the product of the average computation time for each objective function evaluation and the required number of evaluations. As the number of feasible solutions increases dramatically with N and M, the B&B method saves more computation time. For N=100 and M=10, the number of feasible solutions is C^10_100=1.73× 10^13, while the average computation time using the B&B method is only 0.5 s on MATLAB 2016a with an Intel i7-6500U processor. Instead of evaluating the objective function for every possible combination, the B&B method performed only O(10^4) evaluations on average, owing to the use of the bound function.

§.§ Optimality of the uniform distribution

In this section, a related problem will be considered, which serves as the foundation of a CE algorithm that solves the vehicle selection problem when the non-common error variances are different. If the road angles are considered as random variables drawn from some distribution, the square error (<ref>) becomes a random variable as a function of the road angles as well as the non-common error variances. It will be proved that, in the limit that the number of vehicles goes to infinity, the expectation of the geometric error e^2_0 is minimized if the angles of the roads on which the vehicles travel obey a uniform distribution. Considering an arbitrary continuous distribution of the road angle p(θ), θ∈[0,2π), the periodic condition should be satisfied,

p(0)=p(2π^-),

as θ=0 and θ=2π represent the same angle. This periodicity motivates the following Fourier series expansion,

p(θ)=1/2π+∑_m∈ Z^* C_m exp(imθ), with C_m=C^*_-m,

where the integer m is the summation index, the asterisk denotes the complex conjugate, and i=√(-1) is the imaginary unit.
The constant 1/2π ensures that the normalization condition is satisfied.In the limit N →∞, the leading order term of the localization error e^2_0 due to the deviation of the geometric center can be approximated ase^2_0=4w^2/9∑_i=1^N tan^2(θ̃_i/2)/π^2≈w^2/9∑_i=1^N θ̃_i^2/π^2,where θ̃_i is the difference between two adjacent angles θ_i+1 and θ_iθ̃_i=θ_i+1-θ_ifor i=1,2,...,N-1 θ_1-θ_N+2π for i=NIn order to derive the expectation of e^2_0, the distribution of θ̃_i, denoted as f(θ̃_i;N,p(θ_i)), will be considered first. f(θ̃_i;N,p(θ_i)) is a nearest neighbor distribution, which satisfies the integral equation,f(θ̃_i)=2Np(θ_i)(1-∫^θ̃_i_0f(τ)dτ),where the dependence on the parameters N and p(θ_i) will be omitted hereafter.Together with the normalization condition, the solution to (<ref>) isf(θ̃_i)=2Np(θ_i)exp(-2Np(θ_i)θ̃_i).The number of vehicles N and the local density of the road angles p(θ_i) appear as parameters in the distribution. As the product Np(θ_i) increases, the angles distributed around θ_i become dense, thus increasing the probability of a small differential angle θ̃_i.
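The second-moment step that follows from this exponential density can be checked by direct sampling (a minimal sketch; the values of N and p below are illustrative):

```python
import math
import random

# Sample directly from the nearest neighbor density f(t) = 2Np exp(-2Np t)
# derived above and compare the empirical second moment with 1/(2 N^2 p^2).
random.seed(0)
N = 100                      # illustrative number of vehicles
p = 1.0 / (2.0 * math.pi)    # uniform road-angle density
rate = 2.0 * N * p
samples = [random.expovariate(rate) for _ in range(200_000)]
second_moment = sum(t * t for t in samples) / len(samples)
predicted = 1.0 / (2.0 * N * N * p * p)
print(second_moment, predicted)  # the two values agree to within about 1%
```

This only verifies the moment integral for the stated density; the constants in the subsequent expectation formula follow the text.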
The expectation of θ̃_i^2 isE[θ̃_i^2]=∫^∞_0 θ̃_i^2 f(θ̃_i) dθ̃_i=1/2N^2p^2(θ_i).Combining (<ref>) and (<ref>), the expectation of e^2_0 can be derived as follows,E_θ[e^2_0]=w^2/9π^2∑ ^N_i=1 E[θ̃_i^2]=w^2/36Nπ^2∑^N_i=11/p^2(θ_i)2π/N.The summation in (<ref>) can be interpreted as an integration as the number of vehicles N goes to infinity,lim_N →∞∑^N_i=11/p^2(θ_i)2π/N =lim_N →∞∑^N_i=11/p^2(θ_i)Δθ=∫^2π_01/p^2(θ)dθ.The Fourier expansion of 1/p^2(θ) can be obtained and expressed in terms of the Fourier coefficients of p(θ), assuming the deviation from the uniform distribution is infinitesimal,1/p^2(θ) =4π^2-8π^3∑_m∈ Z^*C_mexp(imθ)+16π^4(∑_m∈ Z^*C_mexp(imθ))^2+O(C^3_m).Substituting (<ref>) into (<ref>),lim_N →∞∑^N_i=11/p^2(θ_i)2π/N=8π^3+32π^5 ∑^∞_m=1|C_m|^2.Substituting (<ref>) into (<ref>),E_θ[e^2_0]∼w^2/36Nπ^2(8π^3+32π^5 ∑^∞_m=1|C_m|^2).By taking C_m=0, m∈ Z^+, which corresponds to the uniform distribution, the expectation of the square error E_θ[e^2_0] is minimized. §.§ The CE method based two-step algorithmThe two-step algorithm comprises a heuristic pre-selection step, which downsizes the eligible vehicle population, and a sampling-based searching step, which applies the CE method inspired by the existence of the optimal distribution. The two steps are described as follows: §.§.§ Step one: pre-selectionAn important observation that motivates this pre-selection is that the objective function (<ref>) is monotonically increasing with respect to the non-common error variances σ^2_i,i=1,2,...,M. If two vehicles have close road angles, i.e., θ_i ≈θ_j, and a large difference in the non-common error variance, e.g., σ^2_i ≪σ^2_j, then it is very unlikely that selecting the j-th vehicle will lead to the optimal solution, as substituting the j-th vehicle with the i-th vehicle would probably make the objective function smaller.
The following procedure is used to eliminate those unpromising vehicles: * For all pairs that satisfy σ^2_i < σ^2_j:Sample randomly K pairs of groups of vehicles with indices: [n_l,1,n_l,2...,n_l,M-1,i],[n_l,1,n_l,2...,n_l,M-1,j], where l=1,2,...,K, and evaluate the corresponding pairs of the objective function values J_l,i,J_l,j.* If J_l,i < J_l,j for all l=1,2,...,K:Then eliminate the j-th vehicle from the selection candidates.§.§.§ Step two: CE methodAfter the pre-selection step, a CE method is applied, motivated by the existence of the optimal distribution (the uniform distribution) proved in the previous section. The CE method searches for the minimizer of the objective function by iteratively sampling from a parameterized distribution and updating the parameters according to the samples with good performance. In this particular optimization problem, an M-dimensional Gaussian distribution of the road angle is used, which is described by the following pseudo-code: * Iterate until the convergence criterion (for the distribution parameters) is satisfied:* Generate k samples from the Gaussian distribution denoted as [s_1,s_2,...,s_k], where each s_i=[θ_i,1,θ_i,2,...,θ_i,M],i=1,2,...,k is an M-dimensional vector of angles that are not necessarily equal to any of the actual road angles.* Round the angles to the nearest road angles, which leads to the rounded samples s̃_1,s̃_2,...,s̃_k, and evaluate the values of the objective function denoted as J_1,J_2,...,J_k.* Update the mean and covariance parameters as μ=∑^k_i=1 I[J_i ≥γ̂]s̃_i/k_ρ and cov=∑^k_i=1 I[J_i ≥γ̂](s̃_i-μ)(s̃_i-μ)^T/k_ρ-1 <cit.>, where γ̂ is the sample (1-ρ)-quantile of performance, k_ρ is the corresponding number of samples and I is the indicator function.The performance of this two-step algorithm is demonstrated on synthetic data with N=50 and M=5. In addition, the road angles are drawn from a uniform distribution and the non-common error variances are generated by σ^2_i=(0.5+|v_i|)m^2, where v_i∼𝒩(0,1).
The other parameters are: K=10, k=1000, ρ=0.05. The initial parameters of the Gaussian distribution are μ_0=[0,2π/M,4π/M,...,2(M-1)π/M] and cov_0=diag([100(π/M)^2,100(π/M)^2...,100(π/M)^2]^T_M× 1).Fig. <ref> shows the distribution of the optimization results, sorted into increasing order of the objective function values based on 1000 simulations, together with the sorted values of the objective function corresponding to the best 100 combinations of M vehicles (blue line). The total number of feasible solutions is C^5_50=2,118,760, while the solutions found by the algorithm are among the best 20 in 95% of the cases and all the results are within the best 100. The average computation time is 1.6 s for each simulation.The pre-selection step plays an important role in achieving this performance. Without the pre-selection, the CE algorithm would waste much of its search on samples that are unlikely to be optimal. As a result, the performance would degrade.The performance of random searching based on 1000 simulations is shown in Fig. <ref> for a comparison with the CE method. In each simulation, the objective function is evaluated on N_r randomly selected combinations of M vehicles, and the optimal value is the minimum among these N_r values. The computation time and performance of this random searching depend on the fixed parameter N_r. A large N_r yields better performance at the cost of a longer computation time. For a fair comparison with the CE method, this parameter is chosen such that the computation time is approximately equal to that required by the CE method, which gives N_r of about 5000. Fig. <ref> shows that only a very small percentage of the optimal values obtained through the random searching are close to the true optimal value, which reflects the good performance of the CE method. §.§ Comparison between the two optimization algorithmsIn the previous sections, two optimization algorithms have been presented.
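Before comparing them, the sampling-and-update loop of the CE method above can be illustrated on a toy instance (the candidate set, objective and parameters below are invented for this sketch and are not the authors' implementation):

```python
import math
import random

# Toy CE loop in the spirit of step two: choose M of N candidate road angles
# so that the circular gaps between them are as uniform as possible (J = 0 at
# the optimum, by the uniform-distribution result above).
random.seed(1)
N, M = 12, 3
candidates = [2.0 * math.pi * i / N for i in range(N)]

def J(angles):
    # squared deviation of the circular gaps from the uniform gap 2*pi/M
    s = sorted(angles)
    gaps = [s[i + 1] - s[i] for i in range(M - 1)]
    gaps.append(2.0 * math.pi - (s[-1] - s[0]))
    return sum((g - 2.0 * math.pi / M) ** 2 for g in gaps)

def nearest(theta):
    # round a continuous sample to the nearest candidate angle on the circle
    def circ(cand):
        d = abs(theta - cand) % (2.0 * math.pi)
        return min(d, 2.0 * math.pi - d)
    return min(candidates, key=circ)

mu = [2.0 * math.pi * j / M for j in range(M)]  # uniform initial mean, as in the text
sigma = [1.0] * M
k, rho = 200, 0.1
best = float("inf")
for _ in range(30):
    samples = [[random.gauss(mu[j], sigma[j]) for j in range(M)] for _ in range(k)]
    rounded = sorted(([nearest(t) for t in s] for s in samples), key=J)
    elite = rounded[: int(rho * k)]
    best = min(best, J(rounded[0]))
    mu = [sum(s[j] for s in elite) / len(elite) for j in range(M)]
    sigma = [max(1e-3, (sum((s[j] - mu[j]) ** 2 for s in elite) / len(elite)) ** 0.5)
             for j in range(M)]
print(best)
```

With evenly spaced candidates a perfectly uniform triple exists, and the loop drives `best` to (numerically) zero within a few iterations.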
Their scopes of application and performances are summarized here.The B&B method is applied to the case where all the vehicles have the same non-common error variance. Although it is rare in practice to have exactly the same non-common error variance for all the connected vehicles, the B&B method is expected to find a near optimal solution as long as the variances are approximately the same. The B&B method is a deterministic searching algorithm, which is guaranteed to find the optimal solution as long as the assumptions are satisfied. In the worst case, it would be necessary to search the solution space exhaustively, which would result in exponential computational complexity. In practice, however, the B&B method takes much less time than the brute-force searching does. In contrast, the CE method with the pre-selection can also be applied when the vehicles have different non-common error variances. It is a stochastic algorithm, which does not guarantee an optimal solution. In practice, however, it finds solutions of high quality. The computational complexity is fixed once the parameters of the problem are given.For vehicle selection problems of practical scales, the average computation time of both algorithms is of O(1) s. Thus they are promising for real-time applications. § CONCLUSIONSIn this paper, a theoretical framework for evaluating and optimizing the effect of road constraints on the CMM localization accuracy is established. The major contributions and findings of this work are summarized:* A closed-form expression for the mean square localization error in terms of the road angles and non-common error is derived based on a simple CMM rule, which serves as the foundation of the optimal vehicle selection problem. Based on this expression, it is proved that the optimal distribution of road angles that minimizes the localization error is the uniform distribution.
* A B&B algorithm and a CE algorithm are developed to select the optimal group of vehicles that minimizes the localization error. The B&B algorithm can efficiently find the optimal group when all the vehicles have the same non-common error variance, while the CE algorithm can efficiently find a near optimal group when the vehicles have different non-common error variances.§ ACKNOWLEDGMENTThis work is funded by the Mobility Transformation Center at the University of Michigan under grant number N021548.
{ "authors": [ "Macheng Shen", "Jing Sun", "Ding Zhao" ], "categories": [ "cs.SY" ], "primary_category": "cs.SY", "published": "20170326141548", "title": "Optimization of Vehicle Connections in V2V-based Cooperative Localization" }
Approaching Confinement Structure for Light Quarks in a Holographic Soft Wall QCD Model Meng-Wei Li^a, Yi Yang^b and Pei-Hung Yuan^a Received ; accepted======================================================================================= § INTRODUCTIONThere are various phenomena of phase structure in Quantum Chromodynamics (QCD) theory. The confinement-deconfinement phase transition is one of the most important phenomena in the QCD phase diagram. It is widely believed that the system is in the confinement phase in the low temperature T and small chemical potential μ (low quark density) region, in which quarks are confined to hadronic bound states, e.g. mesons and baryons, while it is in the deconfinement phase in the high temperature and large chemical potential (high quark density) region, in which free quarks exist, e.g. in the quark gluon plasma (QGP). It is natural to conjecture that there exists a phase transition between the two phases as shown in Fig.<ref>(a) (cartoon phase diagram at m=0).Understanding the confinement-deconfinement phase transition in QCD is a very important but extremely difficult task. The interaction becomes strong around the phase transition region, so that the conventional perturbation method in quantum field theory does not work. For a long time, lattice QCD was the only method to attack the problem of phase transitions in QCD <cit.>. According to lattice QCD calculations, the confinement-deconfinement phase transition is first order in the zero quark mass limit m_q=0. However, at finite quark mass, part of the phase transition line becomes a crossover. For small and large quark masses, the behaviors of the phase transformation at zero chemical potential μ=0 have been calculated by lattice QCD. The phase diagrams are conjectured and shown in the figure (cartoon phase diagrams at finite m) for light quarks (b) and heavy quarks (c), respectively.However, lattice QCD faces the so-called sign problem at finite chemical potential.
To study the full phase structure of QCD, we need new methods. During the last fifteen years, the AdS/CFT duality has been developed in string theory <cit.>. Using this duality, the quantum properties of a conformal field theory can be investigated by studying its dual string theory in an AdS background. Later, the AdS/CFT duality was generalized and applied to non-conformal field theories including QCD, namely AdS/QCD or holographic QCD <cit.>. Holographic QCD offers a suitable framework to study phase transitions in QCD at all temperatures and chemical potentials. There are two types of holographic QCD models, i.e. top-down and bottom-up models. In this work, we will study a bottom-up model based on the soft-wall model <cit.>, which is the first holographic QCD model realizing the linear Regge spectrum of mesons<cit.>. A holographic QCD model to describe heavy quarks was constructed in <cit.>. By analytically solving the full backreacted Einstein equations, the meson Regge spectrum and the phase structure of QCD were studied. In <cit.>, a probe open string in an AdS background was studied. In <cit.>, probe open strings were added to the holographic QCD model for heavy quarks and the behavior of open strings was studied in the black hole background. By combining the various open string configurations and the phase structure of the black hole background, a new physical picture of the phase diagram for the confinement-deconfinement phase transition in holographic QCD was suggested. A holographic QCD model to describe light quarks was also attempted in <cit.>. However, the scalar field becoming complex causes some problems in <cit.>.In this work, we construct a bottom-up holographic QCD model by studying its dual 5-dimensional gravity theory coupled to an Abelian gauge field and a neutral scalar field, i.e. an Einstein-Maxwell-scalar (EMS) system. We analytically solve the equations of motion to obtain a family of black hole backgrounds which depend on two arbitrary functions f(z) and A(z).
Since one of the crucial properties of the soft-wall holographic QCD models is that the vector meson spectrum satisfies the linear Regge trajectories at zero temperature and zero density, we are able to fix the function f(z) by requiring the linear meson spectrum. Then, by choosing a suitable function A(z), we obtain a black hole background which appropriately describes many important properties of QCD. We explore the phase structure of the black hole background by studying its thermodynamic quantities at different temperatures and chemical potentials. In addition, we study the Wilson loop <cit.> and the light quark potential in our holographic QCD model by putting a probe open string in the black hole background and studying the dynamics of the open string. We find three configurations for the open string in a black hole background as in Fig.<ref>. Combining the background phase structure and the open string breaking effect, we obtain the phase diagram for the confinement-deconfinement transition.The paper is organized as follows. In section II, we review the EMS system and obtain a family of analytic black hole background solutions. We study the thermodynamics of the black hole backgrounds to get their phase structure in section III. We add probe open strings in the black hole background to study their various configurations. We calculate the expectation value of the Wilson loop and study the quark potential in section IV. By combining the background phase structure and the open string breaking effect, we obtain the phase diagram for the confinement-deconfinement transition in section V. We conclude our results in section VI. § EINSTEIN-MAXWELL-SCALAR SYSTEM The EMS system has been widely studied in constructing bottom-up holographic QCD models. In this section, we briefly review the EMS system and, by solving the equations of motion analytically, obtain a family of black hole solutions which the authors have previously studied in <cit.>.
§.§ String Frame and Einstein FrameWe consider a 5-dimensional EMS system with probe matter fields. The system can be described by an action with two parts, the background sector S_B and the matter sector S_m,S=S_B+S_m.In the string frame, labeled by a sup-index s, the background part S_B includes a gravity field g^s_μν, a Maxwell field A_μ and a neutral scalar field ϕ^s,S_B = 1/16π G_5∫ d^5x√(-g^s)e^-2ϕ^s[R^s-f^s_B(ϕ^s)/4F^2 +4∂_μϕ^s∂^μϕ^s -V^s(ϕ^s)],where G_5 is the 5-dimensional Newtonian constant, F_μν = ∂_μA_ν-∂_νA_μ is the gauge field strength corresponding to the Maxwell field, f^s_B(ϕ^s) is the gauge kinetic function associated to the Maxwell field and V^s(ϕ^s) is the potential of the scalar field. The matter part S_m of the EMS system includes massless gauge fields A_μ^V and A_μ^Ṽ, which are treated as probes in the background, describing the degrees of freedom of vector mesons and pseudovector mesons on the 4-dimensional boundary,S_m = -1/16π G_5∫ d^5x√(-g^s)e^-2ϕ^s[f^s_m(ϕ^s)/4(F_V^2+F_Ṽ^2)],where f^s_m(ϕ^s) is the gauge kinetic function of the gauge fields A_μ^V and A_μ^Ṽ. It is worth mentioning that the gauge kinetic functions f^s_B and f^s_m are positive-definite and are not necessarily the same. For simplicity, in this paper, we set f^s_B=f^s_m=f^s. We have constructed the EMS system in the string frame, in which it is natural to impose the physical boundary conditions when solving for the background according to the AdS/CFT dictionary. However, it is more convenient to solve the equations of motion and study the thermodynamic properties of QCD in the Einstein frame.The string frame action is characterized by the exponential dilaton factor in front of the Einstein term, i.e. e^-2ϕ^sR^s.
To transform the action from the string frame to the Einstein frame, in which the Einstein term is expressed in the conventional form, we make the following Weyl transformations,ϕ^s=√(3/8)ϕ ,  g^s_μν=g_μνe^√(2/3)ϕ ,  f^s(ϕ^s)=f(ϕ) e^√(2/3)ϕ ,  V^s(ϕ^s)=e^-√(2/3)ϕV(ϕ).Thus, in the Einstein frame, the actions in Eqs.(<ref>-<ref>) becomeS_B = 1/16π G_5∫ d^5x√(-g)[R-f(ϕ)/4F^2 -1/2∂_μϕ∂^μϕ -V(ϕ) ], S_m =-1/16π G_5∫ d^5x√(-g)[f(ϕ)/4(F_V^2+F_Ṽ^2)].§.§ Black Holes SolutionWe are now ready to derive the equations of motion of our EMS system from Eqs.(<ref>-<ref>). We first study the background by turning off the probe matter fields, i.e. the vector field A_μ^V and the pseudovector field A_μ^Ṽ; the equations of motion are∇^2ϕ = ∂ V/∂ϕ+F^2/4∂ f/∂ϕ,∇_μ[ f(ϕ)F^μν]= 0, R_μν-1/2 g_μνR= f(ϕ)/2( F_μρF_ν^ ρ-1/4g_μνF^2) +1/2[∂_μϕ∂_νϕ-1/2g_μν(∂ϕ)^2-g_μνV(ϕ)] .Since we are going to study the thermodynamic properties of QCD at finite temperature through the gauge/gravity correspondence, we consider the following blackening ansatz for the background metric in the Einstein frame,ds^2 = e^2A(z)/z^2[-g(z)dt^2 +dx⃗^2 +dz^2/g(z)], ϕ = ϕ(z),  A_μ=(A_t(z),0⃗,0),where z = 0 corresponds to the conformal boundary of the 5-dimensional space-time and g(z) stands for the blackening factor. Here we have set the AdS_5 radius to unity using scale invariance.Plugging the ansatz in Eqs.(<ref>-<ref>) into the equations of motion (<ref>-<ref>) leads to the following equations of motion for the background fields,ϕ^''+(g^'/g+3A^'-3/z) ϕ^'+( z^2e^-2AA_t^'2f_ϕ/2g-e^2AV_ϕ/z^2g)= 0,A_t^''+(f^'/f+A^'-1/z)A_t^' = 0,A^''-A^'2+(2/z)A^'+ϕ^'2/6 =0,g^''+(3A^'-3/z)g^'-e^-2Az^2fA_t^'2 =0,A^''+3A^'2+(3g^'/2g-6/z)A^'-(1/z)(3g^'/2g-4/z)+g^''/6g+e^2AV/3z^2g =0.Next, we specify the physical boundary conditions to solve Eqs.(<ref>-<ref>). We impose the conditions that the metric in the string frame is asymptotically AdS_5 at the boundary z=0 and that the black hole solutions are regular at the horizon z=z_H. * z → 0: A(0)+√(1/6)ϕ(0)=0,  g(0)=1.
* z=z_H:A_t(z_H)=g(z_H)=0. It is natural to introduce the concepts of chemical potential μ and baryon density ρ in QCD from the temporal component of the gauge field A_t by the holographic dictionary of the gauge/gravity correspondence,A_t(z)=μ+ρ z^2 + O(z^4).As mentioned in the introduction, one of the crucial properties of the soft-wall holographic QCD models is that the vector meson spectrum satisfies the linear Regge trajectories at zero temperature and zero density, i.e. μ=ρ=0. This issue was first addressed in the soft-wall model <cit.> using the method of AdS/QCD duality.We consider the 5-dimensional probe vector field V in the action (<ref>). The equation of motion for the vector field reads∇_μ[f(ϕ)F_V^μν]=0.Following <cit.>, we first use the gauge invariance to fix the gauge V_z=0; then the equation of motion of the transverse vector field V_μ (∂^μV_μ=0) in the background (<ref>) reduces to-ψ_i^''+U(z)ψ_i=(ω^2/g^2-p^2/g)ψ_i,where we have performed the Fourier transformation for the vector field V_i asV_i(x,z)=∫d^4k(2π)^4 e^ik· xv_i(z),and further redefined the functions v_i(z) withv_i=(ze^Afg)^1/2ψ_i≡ Xψ_i,with the potential functionU(z)=2X^'2/X^2-X^''/X.In the case of zero temperature and zero chemical potential, we expect that the discrete spectrum of the vector mesons obeys the linear Regge trajectories. The above Eq.(<ref>) then reduces to a Schrödinger equation-ψ_i^''+U(z)ψ_i=m^2ψ_i,where -m^2=k^2=-ω^2+p^2. To produce a discrete mass spectrum with the linear Regge trajectories, the potential U(z) must take a particular form.
A simple choice is to fix the gauge kinetic function asf(z)=e^± cz^2-A(z),which leads to the potentialU(z)=3/(4z^2)+c^2z^2.The Schrödinger Eq.(<ref>) with the potential in Eq.(<ref>) has the discrete eigenvaluesm_n^2=4cn,which are linear in the energy level n, as expected for the vector spectrum at zero temperature and zero density; this is the well-known linear Regge trajectory behavior <cit.>.Once we fix the gauge kinetic function f(z), the equations of motion (<ref>-<ref>) can be analytically solved asϕ^'(z)= √(-6( A^'' -A^'2+(2/z)A^')),A_t(z)= μe^cz^2-e^cz_H^2 1-e^cz_H^2,g(z)=1-1∫_0^z_Hy^3e^-3Ady[ ∫_0^zy^3e^-3Ady -2cμ^2(1-e^cz_H^2)^2|[ ∫_0^z_Hy^3e^-3Ady ∫_0^z_Hy^3e^-3Ae^cy^2dy; ∫_z_H^zy^3e^-3Ady ∫_z_H^zy^3e^-3Ae^cy^2dy ]|],V(z)= -3z^2ge^-2A[A^''+3A^' 2+(3g^'/2g-6/z)A^'-(1/z)(3g^'/2g-4/z)+g^''/6g] .Eqs.(<ref>-<ref>) represent a family of solutions for the black hole background depending on the warped factor A(z), which can be an arbitrary function satisfying the boundary condition in Eq.(<ref>). Furthermore, we also need to ensure that the expression under the square root in Eq.(<ref>) is positive for z∈(0,z_H) to guarantee a real scalar field ϕ(z). A simple choice of A(z)=az^2+bz^4 with a,b<0 has been used to study the phase structure of heavy quarks in holographic QCD in <cit.>. In <cit.>, a similar form was tried to study the phase structure of light quarks in holographic QCD, but with the problem that the scalar field ϕ(z) becomes complex for some z. In this work, we choose the warped factor A(z) asA ( z )= -a ln (bz^2+1).Plugging Eq.(<ref>) into Eq.(<ref>), it is easy to show that the scalar field ϕ(z) is always real for positive a and b, as shown in Fig.<ref>(a). By fitting the mass spectrum of the ρ meson and its excitations and comparing the phase transition temperature with lattice calculations, we determine the parameters in Eq.(<ref>) and Eq.(<ref>) as a=4.046,  b=0.01613 and c=0.227.
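As a quick numerical check of this spectrum, one can verify that the standard soft-wall ground state ψ_1(z) ∝ z^{3/2}e^{-cz^2/2} (a trial form assumed here, not taken from the text) satisfies -ψ'' + Uψ = 4cψ with U(z)=3/(4z^2)+c^2z^2, i.e. m_1^2 = 4c for the fitted c:

```python
import math

c = 0.227  # fitted Regge slope from the text

def U(z):
    # Schrodinger potential as read from the expression above
    return 3.0 / (4.0 * z * z) + c * c * z * z

def psi(z):
    # candidate ground state (standard soft-wall form, assumed)
    return z ** 1.5 * math.exp(-c * z * z / 2.0)

def m2_estimate(z, h=1e-3):
    # evaluate (-psi'' + U psi)/psi with a central finite difference
    d2 = (psi(z + h) - 2.0 * psi(z) + psi(z - h)) / (h * h)
    return (-d2 + U(z) * psi(z)) / psi(z)

for z in (0.5, 1.0, 2.0, 3.0):
    print(z, m2_estimate(z))   # each value is close to 4c = 0.908
```

The ratio is independent of z, as it must be for an exact eigenfunction.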
§ PHASE STRUCTURE OF THE BLACK HOLE BACKGROUNDIn this section, we explore the phase structure of the black hole background in Eqs.(<ref>-<ref>) which we obtained in the last section. The entropy density and the Hawking temperature can be calculated as,s=√(g(x⃗))/4|_z_H,  T=g'(z_H)/4π,where g(x⃗) represents the metric of the internal space along x⃗. The free energy in the grand canonical ensemble can be obtained from the first law of thermodynamics,F=ϵ-Ts-μρ,where ϵ labels the internal energy density. Comparing the free energies of black holes of different sizes at the same temperature and fixed chemical potential, we are able to obtain the phase structure of the black holes, which corresponds to the phase structure of the holographic QCD model by the AdS/CFT correspondence. §.§ Black hole thermodynamicsThe entropy density, defined in Eq.(<ref>), can be easily obtained for our black hole background in Eqs.(<ref>-<ref>) as,s=e^3A(z_H)/4z_H^3,which is plotted in Fig.<ref>(b). We see that the entropy density is a monotonically decreasing function of the horizon. Based on the second law of thermodynamics, this implies that our black hole background prefers the smaller size with larger entropy. It is worth mentioning that a smaller value of the horizon coordinate z_H corresponds to a larger black hole.Another crucial thermal quantity in studying the phase transition is the black hole temperature, also defined in Eq.(<ref>), which can be calculated for our black hole background asT=z_H^3e^-3A( z_H)/(4π∫_0^z_H y^3e^-3Ady)[ 1- 2cμ^2( e^cz_H^2∫_0^z_Hy^3e^-3Ady-∫_0^z_Hy^3e^-3Ae^cy^2dy )/( 1-e^cz_H^2)^2].The temperature as a function of the horizon, at various chemical potentials, is plotted in Fig.<ref>. At small chemical potential, 0 ≤μ < μ_c, the temperature is a monotonic function of the horizon and decreases to zero at infinite horizon. It is clear that there is no phase transition because of the monotonic behavior of the temperature.
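The temperature formula can be evaluated numerically with the fitted parameters (a sketch; Simpson quadrature replaces the integrals, and the grid is illustrative), confirming the monotonic decrease at μ=0 and the small-z_H limit T ≈ 1/(πz_H):

```python
import math

# Fitted model parameters quoted in the text (AdS radius set to 1).
a, b, c = 4.046, 0.01613, 0.227

def exp_m3A(y):
    # e^{-3A(y)} with A(z) = -a ln(b z^2 + 1), i.e. (b y^2 + 1)^{3a}
    return (b * y * y + 1.0) ** (3.0 * a)

def simpson(f, z, n=400):
    # composite Simpson rule on [0, z]
    h = z / n
    s = f(0.0) + f(z)
    for i in range(1, n):
        s += (4.0 if i % 2 else 2.0) * f(i * h)
    return s * h / 3.0

def temperature(zH, mu=0.0):
    I1 = simpson(lambda y: y ** 3 * exp_m3A(y), zH)
    I2 = simpson(lambda y: y ** 3 * exp_m3A(y) * math.exp(c * y * y), zH)
    e = math.exp(c * zH * zH)
    bracket = 1.0 - 2.0 * c * mu * mu * (e * I1 - I2) / (1.0 - e) ** 2
    return zH ** 3 * exp_m3A(zH) / (4.0 * math.pi * I1) * bracket

zs = [0.3 + 0.1 * i for i in range(120)]
T0 = [temperature(z) for z in zs]
print(all(t2 < t1 for t1, t2 in zip(T0, T0[1:])))  # monotonic decrease at mu=0
```

At small z_H the prefactor dominates and T approaches the AdS-like value 1/(πz_H), while at large z_H the temperature falls off toward zero, as described above.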
At large chemical potential, μ≥μ_c, the temperature becomes multivalued, which implies that there will be a phase transition between black holes of different sizes. In order to determine the transition temperature T_BB(μ) between black holes of different sizes at each chemical potential, we have to compute the free energy in the grand canonical ensemble from the first law of thermodynamics. At fixed volume, we havedF=-sdT-ρ dμ.Thus, the free energy for a given chemical potential μ can be evaluated by an integral,F= -∫ sdT= ∫_z_H^∞ s(z_H)T'(z_H)dz_H,where we have normalized the free energy of the black holes to vanish at z_H →∞, i.e. T=0, which is equal to the free energy of the thermal gas. The free energy v.s. temperature at various chemical potentials is plotted in Fig.<ref>(a). The intersection of the free energy branches implies that there exists a phase transition between two black holes of different sizes at the temperature T=T_BB. For μ>μ_c, the free energy exhibits the swallow-tailed shape, which shrinks into a singular point at μ=μ_c and then disappears for μ<μ_c. This behavior of the free energy implies that the system undergoes a first order phase transition at each fixed chemical potential μ>μ_c, ending at the critical endpoint (μ_c,T_c) where the phase transition becomes second order. For μ<μ_c, the phase transition reduces to a crossover. The phase diagram of the black hole to black hole phase transition is plotted in Fig.<ref>(b), which is consistent with the phase diagram for light quarks obtained in lattice QCD simulations <cit.>. §.§ Susceptibility and equations of stateTo corroborate the phase transition, we consider the susceptibility and equations of state in the following. The susceptibility is defined asχ=(∂ρ/∂μ)_T.We plot the baryon density ρ v.s. chemical potential μ in Fig.<ref>. When T < T_c, the multivalued behavior indicates that there is a phase transition at certain values of the chemical potential.
In contrast, there is no transition for T > T_c, since ρ is a single-valued function of μ. At the critical temperature T_c, the position where the slope becomes infinite locates the critical point (μ_c,T_c) of the second order phase transition, which is consistent with our previous result obtained by comparing the free energies of black holes of different sizes. The normalized entropy density s/T^3 v.s. temperature T is plotted in Fig.<ref>(a). The normalized entropy density becomes large in the high temperature limit. The enlarged plot shows that the normalized entropy density is monotonic for μ<μ_c and multivalued for μ>μ_c.The square of the speed of sound is defined asc_s^2 = ∂ln T/∂ln s,which is plotted in Fig.<ref>(b). The positive/negative part of c_s^2 corresponds to a dynamically stable/unstable black hole. More precisely, an imaginary c_s indicates the Gregory-Laflamme instability <cit.>, which is closely related to the Gubser-Mitra conjecture that the dynamical stability of a horizon is equivalent to its thermodynamic stability <cit.>. c_s^2 is always positive for 0 ≤μ <μ_c, and reaches zero at the critical point (μ_c, T_c). It is worth noticing that c_s^2 approaches the conformal limit, 1/3, in the high temperature limit for every chemical potential μ.One of the important quantities characterizing thermodynamic stability is the specific heat capacity, which is defined asC_v = T( ∂ s/∂ T) = s/c_s^2.The normalized specific heat capacity C_v/T^3 v.s. temperature T is plotted in Fig.<ref>(c). For 0 ≤μ < μ_c, C_v is always positive, indicating that the black holes are thermodynamically stable. On the other hand, for μ > μ_c, C_v develops a negative part, which corresponds to thermodynamic instability. Furthermore, C_v and c_s^2 have exactly the same sign behavior.
Therefore the imaginary part of the speed of sound corresponds to the negative part of the specific heat capacity, which implies that our system satisfies the Gubser-Mitra conjecture.The trace anomaly ϵ-3p is another important thermodynamic quantity, which can be derived from the internal energy,ϵ=F+Ts+μρ.We plot the normalized trace anomaly (ϵ-3p)/T^4 v.s. temperature T in Fig.<ref>(d). As the chemical potential decreases, the peak of the trace anomaly decreases and the multivalued behaviour becomes single-valued.Finally, we summarize the behaviors of some important thermodynamic quantities in Table 1.§ OPEN STRINGS IN THE BACKGROUNDIn the following, we will study the phase structure of our holographic QCD model by adding probe open strings in the black hole backgrounds in Eqs.(<ref>-<ref>). We consider open strings in the black hole background with their ends attaching to the boundary of the bulk at z=0 or to the black hole horizon at z=z_H. We find two basic configurations for an open string in the black hole background. One is the U-shape configuration, with the open string reaching its maximum depth at z=z_0 and both of its ends attaching to the boundary; the other is the I-shape configuration, with a straight open string having its two ends attached to the boundary and the horizon, respectively. The two configurations are shown in Fig.<ref>.Since the holographic QCD fields live on the boundary of the black hole background, it is natural to interpret the two ends of an open string as a quark-antiquark pair. The U-shape configuration corresponds to the quark-antiquark pair being connected by a string and can be identified as a meson state.
The I-shape configuration, in contrast, corresponds to a free quark or antiquark.The Nambu-Goto action of an open string isS_NG=∫ d^2ξ√(-G),where the induced metricG_ab=g_μν∂_aX^μ∂_bX^ν,on the 2-dimensional world-sheet that the string sweeps out as it moves, with coordinates (ξ^0,ξ^1), is the pullback of the 5-dimensional target space-time metric g^s_μν,ds^2=e^2A_s(z)/z^2( g(z)dt^2+dx⃗^2+1/g(z)dz^2),where we consider the Euclidean metric to study the thermal properties of the system, identifying the black hole temperature of the gravitational theory in the bulk with that of the thermal field theory on the boundary. §.§ Wilson LoopIt is known that one can read off the energy of such a pair of quark-antiquark from the expectation value of the Wilson loop <cit.>,⟨ W ( 𝒞) ⟩∼ e^-V_qq̅(r,T)/T,where the rectangular Wilson loop 𝒞 is along the directions (t,x) on the boundary of the AdS space, attached to a pair of quark and antiquark separated by r, and V(r,T) is the quark-antiquark potential.Based on the string/gauge correspondence, if a pair of quark-antiquark at (z=0, x=± r/2) is connected by an open string as in Fig.<ref>, the expectation value of the Wilson loop is given by⟨ W ( 𝒞) ⟩≃ e^-S_on-shell,where S_on-shell is the on-shell string action on a world-sheet bounded by the loop 𝒞 at the boundary of the AdS space, which is proportional to the minimal area of the string world-sheet. Comparing Eq.(<ref>) and Eq.(<ref>), the free energy of the meson can be calculated asV_qq̅(r,T) = T S_on-shell(r,T).§.§ Configurations of Open StringsThe string world-sheet action is defined in Eq.(<ref>) with the induced metric on the string world-sheet in Eq.(<ref>). For the U-shape configuration, choosing the static gauge ξ^0=t, ξ^1=x, the induced metric in the string frame becomesds^2=G_abdξ^adξ^b =e^2A_s(z)/z^2g(z) dt^2+ e^2A_s(z)/z^2(1+z'^2/g(z)) dx^2,where the prime denotes a derivative with respect to x.
The Lagrangian and Hamiltonian can be calculated asℒ =e^2A_s(z)/z^2√(g(z)+z'^2), ℋ =-e^2A_s(z)/z^2g(z)/√(g(z)+z'^2).Given the boundary conditionsz( x=±r/2)=0, z(x=0)=z_0, z'(x=0)=0,we obtain the conserved energy from the Hamiltonian in Eq.(<ref>),ℋ(x=0)=-e^2A_s(z_0)/z_0^2√(g(z_0)).Therefore, the U-shape configuration of an open string can be solved fromz'=√( g ( σ^2(z)/σ^2(z_0)-1 ) ),where σ is the effective string tension <cit.>,σ(z)=e^2A_s(z)√(g(z))/z^2and the warped factor in the string frame becomesA_s(z)=A(z)+√(1/6)ϕ(z).The distance r between the pair of quark-antiquark can be calculated as,r=∫_-r/2^r/2 dx=2∫_0^z_0 dz 1/z'=2∫_0^z_0 dz [ g(z) ( σ^2(z)/σ^2(z_0)-1) ]^-1/2,where z_0 is the maximum depth that the string can reach. The dependence of the distance r on z_0 in two different cases is plotted in Fig.<ref>. The red (upper) line corresponds to the case of a small black hole horizon, where the open string reaches a maximum depth z_m as r→∞ and cannot reach the horizon, while the blue (lower) line corresponds to the case of a large black hole horizon, where the open string can reach the horizon but only with a limited separation r≤ r_M.§.§ Cornell PotentialThe potential between a pair of quark-antiquark V_qq̅ in Eq.(<ref>) for the open strings in the U-shape configuration can be calculated asV_qq̅=T S_on-shell =∫_-r/2^r/2 dx ℒ =2∫_0^z_0 dz σ(z)/√(g(z))[ 1-σ^2(z_0)/σ^2(z)]^-1/2.It is well known that the potential V_qq̅ can be expressed in the form of the Cornell potential, which behaves as a Coulomb potential at short quark-antiquark separation but linearly at large separation, with coefficient σ_s, the string tension,V_qq̅=-κ/r+σ_s r +C.As r → 0, i.e.
z_0 → 0, we expand the distance r and the potential V_qq̅ at z_0=0, r=2∫_0^z_0 dz [g(z) ( σ^2(z)/σ^2(z_0)-1 ) ] ^-1/2=r_1z_0+O(z_0^2), V_qq̅ =2∫_0^z_0 dz σ(z)/√(g(z))[ 1-σ^2(z_0)/σ^2(z)]^-1/2=V_-1/z_0+O(1), where[We use the property of the Beta function B(x/k,y)/k=∫_0^1 dt t^x-1 (1-t^k)^y-1.] r_1 =2∫_0^1dv ( 1/v^4 -1 )^-1/2 =1/2B( 3/4,1/2), V_-1 =2∫_0^1dv/v^2( 1-v^4)^-1/2 =1/2B( -1/4,1/2), which gives the Coulomb potential, V_qq̅=r_1 V_-1/r+..., with the coefficient κ = - r_1 V_-1≃ 1.4355. As r →∞, i.e. z_0→ z_m, we make the coordinate transformation z=z_0-z_0w^2. The distance r and the potential V_qq̅ become r = 2∫_0^1 f_r(w) dw,   V = 2∫_0^1 f_V(w) dw, where f_r(w)=2z_0w[g(z_0-z_0w^2) (σ^2(z_0-z_0w^2)/σ^2( z_0)-1)]^-1/2, f_V(w)=2z_0wσ(z_0-z_0 w^2)/√(g(z_0-z_0w^2))[ 1-σ^2(z_0)/σ^2(z_0-z_0 w^2)]^-1/2. From Fig.<ref>, the distance r diverges at z_0=z_m. By careful analysis, we find that this divergence also occurs for the quark potential, because both integrands f_r(w) and f_V(w) diverge near the lower limit w=0, i.e. z=z_0→ z_m. To study the behaviour of the distance r and the potential V_qq̅ near z_0=z_m, we expand f_r(w) and f_V(w) at w=0, f_r(w)=2 z_0 [ -2 z_0 g(z_0) σ'(z_0)/σ(z_0)]^-1/2+O(w), f_V(w)=2 z_0 σ(z_0) [-2 z_0 g(z_0)σ'(z_0)/σ(z_0)]^-1/2+O(w). The integrals in Eq.(<ref>) can be approximated by keeping only the leading terms of f_r(w) and f_V(w) near z_0=z_m in Eqs.(<ref>-<ref>). This leads to r(z_0)≃4z_0 [ -2z_0 g(z_0) σ'(z_0)/σ(z_0)] ^-1/2, V(z_0)≃4z_0 σ(z_0) [ -2z_0 g(z_0) σ'(z_0)/σ(z_0)] ^-1/2 =σ(z_0) r(z_0). From the above expressions, we obtain the expected linear potential V=σ_s r at long distance, with the string tension σ_s = dV/dr|_z_0=z_m = (dV/dz_0)/(dr/dz_0)|_z_0=z_m = [σ'( z_0) r( z_0)+σ(z_0) r'( z_0)]/r'(z_0) |_z_0=z_m = σ( z_m). The temperature dependence of the string tension for various chemical potentials is plotted in Fig.<ref>. We see that the string tension decreases as the temperature increases.
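As a quick numerical cross-check (our illustration, not part of the original derivation), the Coulomb coefficient κ = -r_1 V_-1 can be evaluated directly from the Beta-function expressions above:

```python
import math

def beta(x, y):
    # Euler Beta function via Gamma; math.gamma handles negative
    # non-integer arguments such as x = -1/4
    return math.gamma(x) * math.gamma(y) / math.gamma(x + y)

r1    = 0.5 * beta( 3.0/4.0, 0.5)   # r   ~ r1 * z0   as z0 -> 0
Vm1   = 0.5 * beta(-1.0/4.0, 0.5)   # V   ~ Vm1 / z0  (negative)
kappa = -r1 * Vm1
print(kappa)   # ~ 1.4355
```

The result reproduces the quoted value κ ≃ 1.4355.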
At the confinement-deconfinement transformation temperature T_μ, the system transforms to the deconfinement phase and the string tension suddenly drops to zero, as expected <cit.>. This behaviour is consistent with the results of lattice QCD simulations <cit.>. In Fig.<ref>, μ_c=0.04779 is the critical chemical potential for the black hole phase transition in the background, and μ_c'=0.1043 is the critical chemical potential for the confinement-deconfinement phase transition in our holographic QCD model. We will discuss these two phase transitions in detail in the next section. We have therefore shown that the behaviour of the quark potential at short and long distances agrees with the form of the Cornell potential <cit.>, V( r) = -κ/r+σ_s r+C, which has been measured in great detail in lattice simulations <cit.>. In order to obtain the r dependence of V_qq̅, we evaluate the integral in Eq.(<ref>), which is divergent because the integrand is not well defined at z=0. We regularize V_qq̅ by subtracting the divergent terms as V_qq̅^[R]=C(z_0) + 2∫_0^z_0dz[ σ(z)/√(g(z))[ 1-σ^2(z_0)/σ^2(z)]^-1/2-1/z^2[ 1+2A_s'(0) z] ], where C(z_0) = -2/z_0+4A_s'(0)ln z_0. The regularized potentials are plotted in Fig.<ref>. For T<T_μ, V_qq̅^[R] has the form of the Cornell potential, with a linear behavior at large r. For T>T_μ, the open string breaks at a certain distance and V_qq̅^[R] becomes constant at larger distances. § PHASE DIAGRAM In the previous sections, we have constructed a holographic QCD model by studying the Einstein-Maxwell-scalar system. We obtained a family of black hole backgrounds and studied the phase transition between the black holes by computing their free energies. We also added probe open strings to the black hole backgrounds and studied the different string configurations at various temperatures, which correspond to the confinement and deconfinement phases in the dual holographic QCD model.
In this section, we discuss the phase diagram of QCD by combining the phase structure of the black hole background with the configurations of the probe open strings in that background. We have obtained the phase diagram for the phase transitions of the black hole background, as shown in Fig.<ref>, by comparing the free energies of the black holes. We summarize our results in the schematic diagram of Fig.<ref>. As the black hole temperature increases, the black hole horizon grows as well, i.e. z_H decreases. At the phase transition temperature, the small black hole with horizon z_H_s jumps to the large one with horizon z_H_l. §.§ Probe Strings and Dynamical Wall To see the confinement-deconfinement phase transition, we added probe open strings to the black hole background. As shown in Fig.<ref>, for a small black hole z_H_s, the separation r of the bound quark-antiquark pair can be arbitrarily large, but the depth of the string is limited by a maximum value z_m, which we call the dynamical wall. In this case, the open strings always connect in the U-shape to form bound states, and the system is in the confinement phase. The dynamical wall is the crucial concept for understanding the confinement-deconfinement phase transition in holographic QCD models. On the other side, for a large black hole z_H_l, as shown in Fig.<ref>, the separation of the quark-antiquark pair is bounded by a maximum distance r_M at the depth z_M. If the distance between the quark and antiquark is longer than r_M, a U-shaped string will break into two I-shaped open strings stretching between the boundary and the horizon. In this case, free quarks or antiquarks can exist, indicating that the system is in the deconfinement phase. The open string breaking process corresponds to the melting of the bound state <cit.>. We summarize our discussion in the schematic diagram of Fig.<ref>.
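The existence of the maximum separation r_M for a large black hole can be illustrated numerically. Purely for illustration we take the simplest AdS-Schwarzschild case, A_s(z)=0 and g(z)=1-(z/z_H)^4 (not the full model of this paper), and evaluate the integral for r(z_0):

```python
import numpy as np
from scipy.integrate import quad

zH = 1.0
g      = lambda z: 1.0 - (z/zH)**4   # blackening factor
sigma2 = lambda z: g(z)/z**4         # effective tension squared, with A_s = 0

def r_of_z0(z0):
    # quark-antiquark separation for a U-shaped string with turning point z0
    f = lambda z: 1.0/np.sqrt(g(z)*(sigma2(z)/sigma2(z0) - 1.0))
    val, _ = quad(f, 1e-9, z0*(1.0 - 1e-9), limit=200)
    return 2.0*val

z0s = np.linspace(0.2, 0.98, 40)
rs  = np.array([r_of_z0(z0) for z0 in z0s])
i   = int(np.argmax(rs))
rM  = rs[i]   # maximum separation: U-shaped solutions exist only for r <= rM
```

The maximum of r(z_0) appears at an interior value of z_0, reproducing the behaviour of the blue (lower) curve of Fig.<ref>: beyond r_M no U-shaped embedding exists and the string breaks into two I-shaped strings.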
For a small black hole, as shown in Fig.<ref>(a), there exists a dynamical wall, so that the open strings are always U-shaped, corresponding to the confinement phase. For a large black hole, as shown in Fig.<ref>(b), the open strings can be either U-shaped or I-shaped depending on the separation distance between the quark and antiquark, corresponding to the deconfinement phase. The configurations of open strings are collected in Table 2. Since the black hole temperature is closely related to the black hole horizon, we expect that the system will undergo a phase transition from confinement to deconfinement as the temperature increases. Since the role of the dynamical wall is crucial to establish the confinement phase in holographic QCD models, let us examine it carefully. To determine the position of the dynamical wall z_m, we use the fact that r(z_m)→∞, which leads to the equation σ'(z_m)=0. With the definition of the string tension in Eq.(<ref>), we obtain g'(z_m)/4g(z_m)+A'_s(z_m)-1/z_m=0. In the confinement phase, the value of the horizon z_H is large, so that g(z) is almost constant and g'(z)∼ 0; we thus have A_s'(z_m)-1/z_m=0. We remark that the position of the dynamical wall z_m is almost a universal value, depending neither on the chemical potential nor on the temperature[Here we mean that the system is in the confinement phase. In the deconfinement phase, there is no dynamical wall.]. In our particular model, with the choice of A(z) in Eq.(<ref>) and the string-frame warp factor A_s in Eq.(<ref>), the position of the dynamical wall can be obtained as z_m=√(a-1-√(a(a-4))/b(2a+1))≃ 4.22. For each chemical potential μ, we define the transformation temperature T_μ corresponding to the critical black hole horizon z_H_μ, at which the dynamical wall appears/disappears, as shown in Fig.<ref>(a). The transformation temperature is associated with the transformation between the confinement and deconfinement phases in the holographic QCD.
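Since z_m is simply the stationary point of the effective string tension, σ'(z_m)=0, it can also be located by minimizing σ(z) numerically. As a minimal sketch we use an Andreev-type quadratic warp A_s(z)=cz^2 (an assumption chosen for illustration, not the A(z) of Eq.(<ref>)), for which the minimum of σ(z)=e^2cz^2/z^2 (taking g≈1 in the confined regime) sits at z_m=1/√(2c):

```python
import math
import numpy as np
from scipy.optimize import minimize_scalar

c = 0.5                                      # toy warp coefficient (assumption)
sigma = lambda z: np.exp(2.0*c*z**2)/z**2    # effective tension with g ~ 1

res = minimize_scalar(sigma, bounds=(0.1, 5.0), method='bounded')
zm = res.x
print(zm)   # -> ~ 1/sqrt(2c) = 1.0 for c = 0.5
```

The same one-line minimization applies to the full model once A(z), ϕ(z), and g(z) are specified.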
The transformation temperature T_μ at each chemical potential is plotted in Fig.<ref>(b). We emphasize that the transformation between the confinement and deconfinement phases here is not a phase transition: the confinement phase transforms smoothly into the deconfinement phase as the temperature gradually increases. §.§ Confinement-deconfinement Phase Diagram To obtain the complete phase diagram of our holographic QCD model, we combine the phase transition between large and small black holes with the different configurations of the probe open strings in the background. In Fig.<ref>(a), the black (dotted) line represents the confinement-deconfinement transformation, while the red (solid) line is the phase transition between the black holes. Once the phase transition from a small black hole at lower temperature to a large black hole at higher temperature takes place, the black hole horizon jumps to a large value beyond the critical black hole horizon z_H_μ, and the system undergoes the confinement-deconfinement phase transition. The intersection of the two lines is identified as the critical point at (μ_c'=0.1043, T_c'=0.1538), which is consistent with the recent lattice QCD result in <cit.>. The final phase diagram of the confinement-deconfinement phase transition is plotted in Fig.<ref>(b). For a large chemical potential μ>μ_c', there is a first order confinement-deconfinement phase transition at the temperature T_μ, shown as the solid red line. At the critical point μ=μ_c' and T=T_c', shown as a black dot, the first order phase transition weakens to a second order phase transition. For a small chemical potential μ<μ_c', the confinement-deconfinement phase transition reduces to a smooth crossover, shown as the dotted black line. § CONCLUSION In this paper, we constructed a bottom-up holographic QCD model by studying gravity coupled to a U(1) gauge field and a neutral scalar in five-dimensional space-time, i.e. a 5-dimensional Einstein-Maxwell-scalar system.
By solving the equations of motion analytically, we obtained a family of black hole solutions that depend on two arbitrary functions f(z) and A(z). Different choices of the functions f(z) and A(z) correspond to different black hole backgrounds. To include meson fields in QCD, probe gauge fields were added on the 5-dimensional backgrounds. The function f(z) can be fixed by requiring the linear Regge spectrum for mesons. By a suitable choice of the function A(z), as in Eq.(<ref>), we fixed our holographic QCD model. We obtained the phase structure of the black hole background by studying its thermodynamic quantities. To realize the confinement-deconfinement phase transition of QCD, we added probe open strings to the black hole background and studied their stable configurations. The U-shape and I-shape configurations were identified with the confinement and deconfinement phases, respectively. By combining the phase structure of the black hole background with the U-shape to I-shape transformation of the open strings, we obtained the phase diagram of the confinement-deconfinement phase transition in our holographic QCD model. In our model, the critical point, where the first order phase transition becomes a crossover, is predicted at (0.1043 GeV, 0.1538 GeV), which is consistent with the recent lattice QCD result in <cit.>. We studied the Wilson loop in QCD by calculating the world-sheet area of an open string based on the holographic correspondence. The heavy quark potential can be obtained from the Wilson loop. We obtained the Cornell potential, which has been well studied in lattice QCD. A sketch of the heavy quark potentials at different temperatures is plotted in Fig.<ref>. At low temperature T<T_μ, the potential is linear at large r, corresponding to the confinement phase, while at high temperature T>T_μ, the potential becomes constant at large r, corresponding to the deconfinement phase.
There is a phase transition at T=T_μ, as shown in the sketch. The main properties of our holographic QCD model are summarized in the following: * The coupled Einstein-Maxwell-scalar system was solved analytically to obtain a family of black hole backgrounds in Eqs.(<ref>-<ref>). * The meson spectrum in our model satisfies the linear Regge behavior as in Eq.(<ref>). * For finite chemical potentials μ>μ_c, the background exhibits a phase transition between a small black hole and a large black hole, as shown in Fig.<ref>(b). * The dynamical wall appears/disappears for small/large black holes, which implies the confinement-deconfinement phase transition. In addition, in the confinement phase, the position of the dynamical wall is nearly constant, as in Eq.(<ref>), independent of the chemical potential and the temperature. * We obtained the Cornell form of the quark potential in Eq.(<ref>) by calculating the Wilson loop. §.§ Acknowledgements We would like to thank Rong-Gen Cai, Song He, Mei Huang, Danning Li, Xiaofeng Luo, Xiaoning Wu for useful discussions. This work is supported by the Ministry of Science and Technology (MOST 105-2112-M-009-010) and National Center for Theoretical Science, Taiwan. [Cabibbo1975] N. Cabibbo and G. Parisi, Phys. Lett. B 59 (1975) 67. [1009.4089] O. Philipsen, "Lattice QCD at non-zero temperature and baryon density", arXiv:1009.4089 [hep-lat]. [1203.5320] P. Petreczky, "Lattice QCD at non-zero temperature", J.Phys. G39 (2012) 093002, arXiv:1203.5320 [hep-lat]. [9711200] J. M. Maldacena, "The Large N limit of superconformal field theories and supergravity", Int.J.Theor.Phys. 38 (1999) 1113-1133, hep-th/9711200. [0304032] M. Kruczenski, D. Mateos, R. C. Myers and D. J. Winters, "Meson spectroscopy in AdS / CFT with flavor", JHEP 0307, 049 (2003), hep-th/0304032. [0306018] J. Babington, J. Erdmenger, N. J. Evans, Z. Guralnik and I. Kirsch, "Chiral symmetry breaking and pions in nonsupersymmetric gauge/gravity duals", Phys. Rev. D 69, 066007 (2004), hep-th/0306018. [0311270] M.
Kruczenski, D. Mateos, R. C. Myers and D. J. Winters, "Towards a holographic dual of large N(c) QCD", JHEP 0405, 041 (2004), hep-th/0311270. [0412141] T. Sakai and S. Sugimoto, "Low energy hadron physics in holographic QCD", Prog. Theor. Phys. 113, 843 (2005), hep-th/0412141. [0507073] T. Sakai and S. Sugimoto, "More on a holographic dual of QCD", Prog. Theor. Phys. 114, 1083 (2005), hep-th/0507073. [0501128] J. Erlich, E. Katz, D. T. Son and M. A. Stephanov, "QCD and a holographic model of hadrons", Phys. Rev. Lett. 95, 261602 (2005), hep-ph/0501128. [0602229] A. Karch, E. Katz, D. T. Son and M. A. Stephanov, "Linear confinement and AdS/QCD", Phys. Rev. D 74, 015005 (2006), hep-ph/0602229. [0611099] S. Kobayashi, D. Mateos, S. Matsuura, R. C. Myers and R. M. Thomson, "Holographic phase transitions at finite baryon density", JHEP 0702, 016 (2007), hep-th/0611099. [0801.4383] B. Batell and T. Gherghetta, "Dynamical Soft-Wall AdS/QCD", Phys. Rev. D 78, 026002 (2008), arXiv:0801.4383 [hep-ph]. [0804.0434] S. S. Gubser and A. Nellore, "Mimicking the QCD equation of state with a dual black hole", Phys. Rev. D 78, 086007 (2008), arXiv:0804.0434 [hep-th]. [0806.3830] W. de Paula, T. Frederico, H. Forkel and M. Beyer, "Dynamical AdS/QCD with area-law confinement and linear Regge trajectories", Phys. Rev. D 79, 075019 (2009), arXiv:0806.3830 [hep-ph]. [1005.4690] C. Charmousis, B. Goutéraux, B. Kim, E. Kiritsis and R. Meyer, "Effective Holographic Theories for low-temperature condensed matter systems", JHEP 1011:151 (2010), arXiv:1005.4690 [hep-th]. [1006.5461] U. Gursoy, E. Kiritsis, L. Mazzanti, G. Michalogiorgakis and F. Nitti, "Improved Holographic QCD", Lect. Notes Phys. 828, 79 (2011), arXiv:1006.5461 [hep-th]. [1012.1864] O. DeWolfe, S. S. Gubser and C. Rosen, "A holographic critical point", Phys. Rev.
D 83, 086005 (2011), arXiv:1012.1864 [hep-th]. [1103.5389] Danning Li, Song He, Mei Huang and Qi-Shu Yan, "Thermodynamics of deformed AdS_5 model with a positive/negative quadratic correction in graviton-dilaton system", JHEP 1109, 041 (2011), arXiv:1103.5389 [hep-th]. [1108.2029] O. DeWolfe, S. S. Gubser and C. Rosen, "Dynamic critical phenomena at a holographic critical point", Phys. Rev. D 84, 126014 (2011), arXiv:1108.2029 [hep-th]. [1108.0684] Mohammed Mia, Keshav Dasgupta, Charles Gale and Sangyong Jeon, "A holographic model for large N thermal QCD", J.Phys. G39 (2012) 054004, arXiv:1108.0684 [hep-th]. [1111.4953] Michael Fromm, Jens Langelage, Stefano Lottini and Owe Philipsen, "The QCD deconfinement transition for heavy quarks and all baryon chemical potentials", JHEP 01 (2012) 042, arXiv:1111.4953 [hep-lat]. [1209.4512] Rong-Gen Cai, Shankhadeep Chakrabortty, Song He and Li Li, "Some aspects of QGP phase in a hQCD model", JHEP 02 (2013) 068, arXiv:1209.4512 [hep-th]. [1301.0385] Song He, Shang-Yu Wu, Yi Yang and Pei-Hung Yuan, "Phase Structure in a Dynamical Soft-Wall Holographic QCD Model", JHEP 04 (2013) 093, arXiv:1301.0385 [hep-th]. [1406.1865] Yi Yang and Pei-Hung Yuan, "A Refined Holographic QCD Model and QCD Phase Structure", JHEP 11 (2014) 149, arXiv:1406.1865 [hep-th]. [1506.05930] Yi Yang and Pei-Hung Yuan, "Confinement-Deconfinement Phase Transition for Heavy Quarks", JHEP 12 (2015) 161, arXiv:1506.05930 [hep-th]. [0507246] M. Shifman, "Highly Excited Hadrons in QCD and Beyond", hep-ph/0507246. [9803135] Soo-Jong Rey, Stefan Theisen and Jung-Tay Yee, "Wilson-Polyakov Loop at Finite Temperature in Large N Gauge Theory and Anti-de Sitter Supergravity", Nucl.Phys. B527:171-186 (1998), hep-th/9803135. [9803137] A. Brandhuber, N. Itzhaki, J. Sonnenschein and S. Yankielowicz, "Wilson Loops in the Large N Limit at Finite Temperature", Phys.Lett. B434 (1998), hep-th/9803137. [0604204] Oleg Andreev and Valentin I.
Zakharov, "Heavy-Quark Potentials and AdS/QCD", Phys.Rev.D74:025023 (2006), hep-ph/0604204.0610135Henrique Boschi-Filho, Nelson R. F. Braga, "AdS/CFT Correspondence and Strong Interactions", PoSIC2006:035 (2006), hep-th/0610135.0611304Oleg Andreev and Valentin I. Zakharov, "On Heavy-Quark Free Energies, Entropies, Polyakov Loop, and AdS/QCD", JHEP 0704 (2007) 100, hep-ph/0611304.0701157C D White, "The Cornell Potential from General Geometries in AdS/QCD", Phys.Lett.B652:79-85,2007, hep-ph/0701157.0807.4747Javier L. Albacete, Yuri V. Kovchegov, Anastasios Taliotis, "Heavy Quark Potential at Finite Temperature Using the Holographic Correspondence", Phys.Rev.D78:115007 (2008), arXiv:0807.4747 [hep-th].1004.1880Song He, Mei Huang, Qi-Shu Yan, "Logarithmic correction in the deformed AdS_5 model to produce the heavy quark potential and QCD beta function", Phys.Rev.D83:045034 (2011), arXiv:1004.1880 [hep-ph].1008.3116Pietro Colangelo, Floriana Giannuzzi and Stefano Nicotri, "Holography, Heavy-Quark Free Energy, and the QCD Phase Diagram", Phys.Rev.D83:035015 (2011), arXiv:1008.3116 [hep-ph].1201.0820Rong-Gen Cai, Song He, Danning Li, "A hQCD model and its phase diagram in Einstein-Maxwell-Dilaton system", JHEP 1203, 033 (2012), arXiv:1201.0820 [hep-th].1206.2824Danning Li, Mei Huang, Qi-Shu Yan, "A dynamical holographic QCD model for chiral symmetry breaking and linear confinement", Eur. Phys. J. C (2013)73:2615, arXiv:1206.2824 [hep-th].1401.3635Yan Wu, Defu Hou, Hai-cang Ren, "Some Comments on the Holographic Heavy Quark Potential in a Thermal Bath ", arXiv:1401.3635 [hep-ph].9803002Juan M. Maldacena, "Wilson loops in large N field theories", Phys.Rev.Lett.80 (1998), hep-th/9803002.9301052R.Gregory, R.Laflamme,"Black Strings and p-Branes are Unstable", Phys.Rev.Lett.70(1993), hep-th/9301052.9404071Ruth Gregory, Raymond Laflamme,"The Instability of Charged Black Strings and p-Branes", Nucl.Phys. B428 (1994), hep-th/9404071.0009126Steven S. 
Gubser and Indrajit Mitra, "Instability of charged black holes in anti-de Sitter space", hep-th/0009126. [0011127] Steven S. Gubser and Indrajit Mitra, "The evolution of unstable black holes in anti-de Sitter space", JHEP 0108 (2001) 018, hep-th/0011127. [0104071] Elena Cuoco, Giovanni Losurdo, Giovanni Calamai, Leonardo Fabbroni, Massimo Mazzoni, Ruggero Stanga, Gianluca Guidi and Flavio Vetrano, "Noise parametric identification and whitening for LIGO 40-meter interferometer data", Phys.Rev. D 64, 122002 (2001), gr-qc/0104071. [1006.0055] Mohammed Mia, Keshav Dasgupta, Charles Gale and Sangyong Jeon, "Heavy Quarkonium Melting in Large N Thermal QCD", Phys.Lett. B694 460 (2011), arXiv:1006.0055 [hep-th]. [Cornell] E. Eichten, K. Gottfried, T. Kinoshita, K. D. Lane and T.-M. Yan, Phys.Rev. D17 (1978) 3090; D21 (1980) 203. [2001] G. S. Bali, Phys.Rept. 343, 1 (2001). [1605.07181] Carlo Ewerz, Olaf Kaczmarek and Andreas Samberg, "Free Energy of a Heavy Quark-Antiquark Pair in a Thermal Medium from AdS/CFT", arXiv:1605.07181 [hep-th]. [1701.04325] A. Bazavov, H.-T. Ding, P. Hegde, O. Kaczmarek, F. Karsch, E. Laermann, Y. Maezawa, S. Mukherjee, H. Ohno, P. Petreczky, H. Sandmeyer, P. Steinbrecher, C. Schmidt, S. Sharma, W. Soeldner and M. Wagner, "The QCD Equation of State to 𝒪(μ_B^6) from Lattice QCD", Phys. Rev. D95, 054504 (2017), arXiv:1701.04325 [hep-lat].
http://arxiv.org/abs/1703.09184v1
{ "authors": [ "Meng-Wei Li", "Yi Yang", "Pei-Hung Yuan" ], "categories": [ "hep-th", "hep-ph" ], "primary_category": "hep-th", "published": "20170327165830", "title": "Approaching Confinement Structure for Light Quarks in a Holographic Soft Wall QCD Model" }
Department of Physics and Astronomy - Wayne State University, 42 W Warren Ave, Detroit, MI 48202, USA [myfootnote1]salvatore.di.carlo@wayne.edu Dipartimento di Fisica e Chimica - Universita degli Studi di Palermo, Via Archirafi 36, 90133, Palermo (Italy) [myfootnote2]fabrizio.messina@unipa.it The analysis of beamstrahlung radiation, emitted from a beam of charged particles due to the electromagnetic interaction with a second beam of charged particles, provides a diagnostic tool that can be used to monitor beam-beam collisions in an e^+e^- storage ring. In this paper we show that the beamstrahlung time profile is related to the timing of the collisions and the length of the beams, and how its measurement can be used to monitor and optimize collisions at the interaction point of the SuperKEKB collider. To measure the time dependence of beamstrahlung, we describe a method based on nonlinear frequency mixing, in a nonlinear crystal, of beamstrahlung radiation with photons from a pulsed laser. We demonstrate that the method allows one to measure and optimize the relative timing and length of the colliding bunches with 1% accuracy. § INTRODUCTION Nowadays, particle physics organizations follow two different but complementary approaches. The energy frontier approach consists in designing a particle accelerator able to provide the highest possible energy, in order to produce new particles or discover unknown processes at very high energies. This is the ATLAS and CMS approach at the LHC <cit.>. The other approach consists in working at lower energies, designing the accelerator to optimize the production of certain well known resonances and, by studying their rare decays, to uncover processes that are not contemplated within the Standard Model. The latter is the path followed by Belle II in the framework of the SuperKEKB accelerator <cit.>.
In the latter case, one must deal with rare events (i.e., events with a small cross section) which show departures from the Standard Model. The rate of event production is given by the luminosity ℒ times the cross section σ <cit.>: dN/dt = ℒσ. It is clear that the success and physics reach of the Belle II experiment depend critically on luminosity, one of the two figures of merit of the accelerator, together with the energy. The new SuperKEKB storage ring aims, through the use of nano-beams, to reach the very high luminosity of 8×10^35 cm^-2s^-1 <cit.>. The nano-beam scheme, invented by Pantaleo Raimondi <cit.>, reduces the longitudinal overlap of the beams, minimizing the "hourglass effect" <cit.> and therefore increasing the luminosity <cit.>. The possibility of reaching such a high luminosity depends upon the ability to closely monitor the size and position of the beams. At SuperKEKB, direct monitoring of the beams at the interaction point (IP) is even more precious than in previous accelerators. The beam sizes are 50-60 times smaller than at previous colliders <cit.>, and the large crossing angle (83 mrad) introduces a novel possible way to lose luminosity, as the two beams have to arrive at the IP simultaneously. To monitor the beams, SuperKEKB is equipped with several pieces of instrumentation <cit.>. Both storage rings are equipped with beam position monitors (BPM), which are mainly derived from the original KEKB system <cit.><cit.>. The BPMs are used to monitor the position of the beam inside the beam pipe. When a beam goes past a bending magnet, synchrotron radiation is emitted and can be used to monitor the size of the beam. At SuperKEKB there are two categories of such monitor systems: visible light monitors and X-ray monitors <cit.>. There are two kinds of visible light monitors: interferometers, used to measure the horizontal size of the beams (σ_x) <cit.>, and streak cameras, used to measure the length of the beams (σ_z) <cit.>.
X-ray monitors will be used to measure the vertical size of the beams (σ_y) <cit.>. The technique used is called "coded aperture" and was initially developed by astronomers with the purpose of measuring the size of stars <cit.>. The beam monitor systems described above measure the properties of the beams far from the interaction point (IP), and therefore the properties at the IP must be extrapolated through calculations. The large angle beamstrahlung monitor (LABM) can measure the size of the beams at the IP <cit.>, and was successfully tested during SuperKEKB Phase I. Beamstrahlung is the radiation emitted by one beam of charged particles interacting with another beam of charged particles <cit.>. A first prototype of the LABM was designed to monitor the collisions at CESR, an e^+e^- storage ring located at Cornell University <cit.>. The LABM measures the polarization and spectrum of the radiation emitted at the IP during a collision. These properties are related to the sizes of the beams, which can therefore be measured <cit.>. The LABM collects the radiation using four vacuum mirrors located inside the beam pipes. The light is then extracted through vacuum windows and travels inside a series of pipes which constitute the four LABM optical channels. Once extracted, the properties of the light are measured inside two optical boxes located outside the interaction region. In this paper, we want to show that, besides polarization and spectrum, there are other important properties of the beamstrahlung light that are related to beam parameters. Specifically, we want to study the time profile of the beamstrahlung pulse that is emitted during a collision. At SuperKEKB, due to the large crossing angle, the collision timing becomes of crucial importance: if the beams do not simultaneously arrive at the IP, luminosity is lost.
We will demonstrate that the time profile of the beamstrahlung pulse can be exploited to extract fundamental information about the collision timing. If the beams do not simultaneously arrive at the IP, this measurement allows adjusting their relative timing. Indeed, with respect to KEKB, the timing precision needs to improve by two orders of magnitude to keep the luminosity loss due to timing at the 1% level <cit.>. Besides collision timing, the method can be used to measure the longitudinal distribution (i.e., the length) of the beams directly at the IP. A method to measure the longitudinal distribution of charged beams was proposed and tested at the Advanced Light Source (ALS) at Lawrence Berkeley National Laboratory <cit.>. When a beam goes past a bending magnet, the time profile of the synchrotron radiation emitted is measured, and this is directly related to the longitudinal distribution of the radiating beam <cit.>. The method is based on frequency mixing the radiation with photons from a femtosecond laser. Provided that certain conditions are satisfied, when photons from the two sources simultaneously enter a nonlinear crystal, there is a finite probability that radiation photons are upconverted to higher-energy photons within the pulse duration of the femtosecond laser <cit.>. Therefore, using an ultrafast pulsed laser with a pulsewidth much smaller than that of the radiation, the latter can be sampled by the former, thereby allowing the time dependence of the radiation pulse to be reconstructed. We propose an analogous experimental method to measure the time profile of the beamstrahlung light emitted at the IP, by adding a new optical box exploiting the existing LABM optical channels. The novelty is that our method allows estimating the collision timing to 1% of the length of the beams, by exploiting its relation to the asymmetry of the beamstrahlung pulse, which will be proved in the paper.
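The sampling principle can be sketched numerically (with illustrative numbers, not the actual SuperKEKB parameters): scanning the delay of a short laser gate across a much longer radiation pulse yields their cross-correlation, whose RMS width is the quadrature sum of the two widths, so a ~100 fs gate reproduces a ~10 ps pulse essentially undistorted:

```python
import numpy as np

dt = 0.02e-12                                 # 20 fs time step
t  = np.arange(-60e-12, 60e-12, dt)           # delay axis, s
sig_s, sig_g = 10e-12, 0.1e-12                # 10 ps pulse, 100 fs laser gate

pulse = np.exp(-t**2/(2*sig_s**2))            # beamstrahlung-like pulse
gate  = np.exp(-t**2/(2*sig_g**2))            # laser gate

trace = np.convolve(pulse, gate, mode='same') # measured cross-correlation

mean  = np.sum(trace*t)/np.sum(trace)
width = np.sqrt(np.sum(trace*(t - mean)**2)/np.sum(trace))
print(width)   # ~ sqrt(sig_s**2 + sig_g**2), i.e. ~10.0005 ps
```

The distortion introduced by the gate is here only ~5×10^-5 in relative width, which is why the femtosecond-scale laser can be treated as an ideal sampler of the picosecond-scale pulse.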
Because of the importance of precisely estimating the collision timing at the IP, the strategy we propose improves upon the beam monitoring methods currently available at SuperKEKB. As a bonus, our method can measure the length of the beams directly at the IP, since, as we will show, the length of the radiating beam maps into the beamstrahlung pulsewidth. As we will see in the next sections, the method allows measuring the length of the radiating beam with 1% precision, much better than the 5% precision allowed by streak cameras, whose typical temporal resolution is no better than 1 ps <cit.>. We remark here the importance of measurements taken at the IP: if a property of the beams is measured far from the interaction point, it has to be transported to the IP through calculation, and this introduces errors which may be significant. An important feature of our measurement is that it is completely shape-like. This is potentially crucial, since a shape-like measurement does not depend on absolute efficiencies, but only on the shape of the signal. In an environment such as an accelerator, where data are prone to extreme noise, shape-like measurements of high precision are an advantage. We are also aware that, for a new accelerator like SuperKEKB, multiple measurements are necessary as a feedback and also to understand the dynamics of the beams. In the first part of the paper, we present the parameters of the beams at SuperKEKB and an original Monte Carlo simulation of the collision. The original calculation is needed because in the LABM the whole "magnet" is observed, while the large angle of observation (compared to 1/γ) is not suitable for the standard approximations used in synchrotron radiation calculations. From the simulation, we obtain the time dependence of the radiation electric field at the LABM vacuum mirrors and relate it to geometric properties of the beams.
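A minimal sketch of the shape-like idea, using a simple Gaussian pulse and illustrative numbers rather than the full Monte Carlo of the paper: an early-late intensity asymmetry about the nominal collision time is a normalized (efficiency-independent) observable, and for small offsets it responds linearly to a timing shift, so the shift can be read back from it.

```python
import numpy as np

sigma = 10e-12                      # pulse width set by the bunch length, 10 ps
t = np.linspace(-50e-12, 50e-12, 200001)
delta = 0.1e-12                     # collision-timing offset: 1% of sigma

y = np.exp(-(t - delta)**2/(2*sigma**2))   # shifted beamstrahlung-like pulse

# early-late asymmetry about the nominal collision time t = 0
A = (np.sum(y[t < 0]) - np.sum(y[t >= 0])) / np.sum(y)

# for small offsets A ~ -sqrt(2/pi)*delta/sigma, so the offset is recovered as
delta_est = -A * sigma * np.sqrt(np.pi/2.0)
print(delta_est)   # ~ 0.1e-12 s
```

Being a ratio of integrals of the same trace, the asymmetry cancels any overall detection efficiency, which is the point of a shape-like measurement.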
In the final part of the paper, we thoroughly introduce the experimental method, defining the important quantities, showing the properties of the nonlinear crystals, and calculating the related efficiencies. Finally, we give a description of the components that would be part of a hypothetical Ultrafast LABM optical box. § CALCULATION OF BEAMSTRAHLUNG FIELDS In an effort to achieve a very high luminosity, SuperKEKB aims to work with nano-beams. Indeed, the luminosity of a collider is inversely proportional to the transverse size of the beams <cit.>. The parameters for the SuperKEKB HER (High Energy Ring), or electron beam, and LER (Low Energy Ring), or positron beam, are shown in Table <ref> <cit.>. As shown in Figure <ref>, at the IP the collision takes place at a crossing angle θ_c = 83 mrad. Every bunch is separated from the next one by 4 ns, thereby allowing a collision frequency of 250 MHz. In the collision, the electromagnetic interaction between the charged beams produces the emission of radiation, called beamstrahlung. Due to the relativistic velocities, the beamstrahlung is mostly emitted in the forward direction of motion of the beams. The two directions of motion at the IP are called the Oho direction for the electron beam and the Nikko direction for the positron beam. We proceed now to calculate the time dependence of the beamstrahlung emitted by one bunch of electrons colliding with a bunch of positrons. The beams travel in the x-z plane (see Figure <ref>). The beams are Gaussian, with sizes σ_x, σ_y, σ_z, and travel at a crossing angle θ_c with respect to each other. The collision takes place at the origin of the (x,y,z,t) reference frame, which we will call the LAB frame. We will consider the beamstrahlung emitted by one electron interacting with one beam of positrons (Figure <ref>). The positron beam moves with velocity 𝐯_𝐋, while the electron moves with velocity 𝐯_𝐇. The starting point is the electric field produced by the positron beam in its rest frame.
The electrostatic potential U in the rest frame (x',y',z',t') of a charged Gaussian beam has been calculated in <cit.>, and can be used to easily obtain the electric field components generated by the beam: U(x',y',z')=1/4πϵ_0Ne/√(π)∫_0^+∞ dq exp(-x'^2/a'^2+q-y'^2/b'^2+q-z'^2/d'^2+q)/√((a'^2+q)(b'^2+q)(d'^2+q)). E'_x(x',y',z') = -∂ U/∂ x' = 1/4πϵ_02Ne/√(π)∫_0^+∞ dq x'exp(-x'^2/a'^2+q-y'^2/b'^2+q-z'^2/d'^2+q)/(a'^2+q)√((a'^2+q)(b'^2+q)(d'^2+q)). E'_y(x',y',z') = -∂ U/∂ y' = 1/4πϵ_02Ne/√(π)∫_0^+∞ dq y'exp(-x'^2/a'^2+q-y'^2/b'^2+q-z'^2/d'^2+q)/(b'^2+q)√((a'^2+q)(b'^2+q)(d'^2+q)). E'_z(x',y',z') = -∂ U/∂ z' = 1/4πϵ_02Ne/√(π)∫_0^+∞ dq z'exp(-x'^2/a'^2+q-y'^2/b'^2+q-z'^2/d'^2+q)/(d'^2+q)√((a'^2+q)(b'^2+q)(d'^2+q)) where a', b', and d' are √(2) times the standard deviations of the beam (σ'_x, σ'_y, σ'_z) and all the primed quantities are calculated in the rest frame of the beam. Now we can move to the LAB frame using the appropriate Lorentz transformations of coordinates and fields <cit.>: x' = x y' = y z' = γ_L(z-v_Lt) t' = γ_L(t-v_L/c^2z) 𝐄_∥ = 𝐄'_∥ 𝐁_∥ = 𝐁'_∥ 𝐄_⊥ = γ_L(𝐄'_⊥-𝐯_𝐋×𝐁') 𝐁_⊥ = γ_L(𝐁'_⊥+1/c^2𝐯_𝐋×𝐄') Considering that 𝐁' = 0, the transformations greatly simplify, and the components of the electric and magnetic fields obtained are listed below. E_x(x,y,z,t) = γ_L1/4πϵ_02Ne/√(π)∫_0^+∞ dq x exp(-x^2/a^2+q-y^2/b^2+q-(γ_L(z-v_Lt))^2/(γ_L d)^2+q)/(a^2+q)√((a^2+q)(b^2+q)((γ_L d)^2+q)) E_y(x,y,z,t) = γ_L1/4πϵ_02Ne/√(π)∫_0^+∞ dq y exp(-x^2/a^2+q-y^2/b^2+q-(γ_L(z-v_Lt))^2/(γ_L d)^2+q)/(b^2+q)√((a^2+q)(b^2+q)((γ_L d)^2+q)) E_z(x,y,z,t) = 1/4πϵ_02Ne/√(π)∫_0^+∞ dq (γ_L(z-v_Lt)) exp(-x^2/a^2+q-y^2/b^2+q-(γ_L(z-v_Lt))^2/(γ_L d)^2+q)/((γ_L d)^2+q)√((a^2+q)(b^2+q)((γ_L d)^2+q)) 𝐁(x,y,z,t) = (-v_L/c^2 E_y , v_L/c^2 E_x , 0) In the above formulas the gamma factor is given by γ_L=1/√(1-v^2_L/c^2). We are now ready to calculate the Lorentz force acting on the electron.
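To make the field expressions above concrete, the single remaining integral over q can be evaluated with an elementary quadrature. The sketch below (our own naming and discretization, not part of the paper's code) computes the x component of the rest-frame field up to the overall constant 2Ne/(4πϵ_0√π), mapping the half-line onto (0,1) via q = t/(1−t):

```python
import math

def efield_x(x, y, z, a, b, d, n_steps=4000):
    """x-component of the rest-frame field of a Gaussian bunch, up to the
    constant 2Ne/(4*pi*eps0*sqrt(pi)):
      E'_x ~ int_0^inf dq  x * exp(-x^2/(a^2+q) - y^2/(b^2+q) - z^2/(d^2+q))
                           / [(a^2+q) * sqrt((a^2+q)(b^2+q)(d^2+q))]
    The half-line is mapped to (0,1) via q = t/(1-t), midpoint rule."""
    total = 0.0
    for i in range(n_steps):
        t = (i + 0.5) / n_steps          # midpoint of each sub-interval
        q = t / (1.0 - t)
        jac = 1.0 / (1.0 - t) ** 2       # dq/dt for the change of variable
        A, B, D = a * a + q, b * b + q, d * d + q
        integrand = x * math.exp(-x * x / A - y * y / B - z * z / D) \
                    / (A * math.sqrt(A * B * D))
        total += integrand * jac / n_steps
    return total
```

As expected for the x component, the result is odd in x, vanishes at the origin, and decays far from the bunch.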
The electron travels towards the origin with velocity 𝐯_𝐇 = (-v_Hsinθ_c , 0 , -v_Hcosθ_c) The Lorentz force acting on the electron is 𝐅 = -e(𝐄+𝐯_𝐇×𝐁) and therefore, in our case, we obtain: F_x = -e (1 + v_Hv_L/c^2cosθ_c)E_x F_y = -e (1 + v_Hv_L/c^2cosθ_c)E_y F_z = -e (E_z - v_Hv_L/c^2sinθ_c E_x) At SuperKEKB, the beamstrahlung is collected by four LABM vacuum mirrors, located a few meters from the IP, in the forward propagating direction of the beam. There are two mirrors per side, one on the top (called Up) and the other on the bottom (called Down) of the beam pipe. Therefore, it is convenient to rotate the reference system from (x,y,z) to the (x̅,y̅,z̅) coordinates (see Figure <ref>). The rotation is done in such a way as to set the z direction as the direction of flight of the electron beam; the y direction is unchanged, and the x direction is consequently given by the right-hand rule. The mirror coordinates can be expressed as: 𝐫 = D 𝐳̂̅̂± D tanθ𝐲̂̅̂ where D is the distance of the mirrors from the IP, θ is the elevation angle from the beam pipe axis, and the ± sign refers to the Up and Down mirrors respectively. These quantities are given in Table <ref>, and the elevation angle in Eq. <ref> can be calculated as θ=(θ_min+θ_max)/2. In order to move from the (x,y,z) to the (x̅,y̅,z̅) reference system, we use the following transformations: 𝐱̂ = -𝐱̂̅̂ cosθ_c -𝐳̂̅̂ sinθ_c 𝐲̂ = 𝐲̂̅̂ 𝐳̂ = 𝐱̂̅̂ sinθ_c -𝐳̂̅̂ cosθ_c from which we obtain: F_x̅ = - F_xcosθ_c + F_zsinθ_c F_y̅ = F_y F_z̅ = - F_xsinθ_c - F_zcosθ_c In relativistic mechanics, the force is related to the acceleration through the following relation <cit.>: 𝐅 = γ_H^3 m 𝐚_∥ + γ_H m 𝐚_⊥ where 𝐚_∥ and 𝐚_⊥ are the components of the acceleration that are parallel and perpendicular to the velocity of the electron, respectively. Incidentally, we notice that the component parallel to the velocity of the electron will be strongly suppressed.
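As a numerical cross-check (ours, with illustrative placeholder field values rather than simulation output), the closed-form force components quoted above can be compared with a direct evaluation of 𝐅 = −e(𝐄 + 𝐯_𝐇×𝐁):

```python
import math

c = 299_792_458.0            # speed of light [m/s]
e = 1.602176634e-19          # elementary charge [C]
theta_c = 0.083              # crossing angle [rad]
vL = 0.999999 * c            # positron-beam speed (illustrative)
vH = 0.999999 * c            # electron speed (illustrative)

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

# Illustrative field values at some observation point [V/m]
E = (3.0e6, -1.5e6, 4.0e2)
B = (-vL / c**2 * E[1], vL / c**2 * E[0], 0.0)     # B of the boosted bunch
v_H = (-vH * math.sin(theta_c), 0.0, -vH * math.cos(theta_c))

# Direct Lorentz force F = -e (E + v_H x B)
vxB = cross(v_H, B)
F_direct = tuple(-e * (E[i] + vxB[i]) for i in range(3))

# Closed forms quoted in the text
F_closed = (-e * (1 + vH*vL/c**2 * math.cos(theta_c)) * E[0],
            -e * (1 + vH*vL/c**2 * math.cos(theta_c)) * E[1],
            -e * (E[2] - vH*vL/c**2 * math.sin(theta_c) * E[0]))
```

The two evaluations agree to machine precision, confirming the algebra of the force components.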
Equation <ref> can be inverted to obtain the acceleration as a function of the force: 𝐚_𝐇 = 1/mγ_H(𝐅 - 𝐯_H·𝐅/c^2𝐯_𝐇) where m is the mass of the electron. For the case at hand, the acceleration can be expressed in components as a_x̅ = F_x̅/mγ_H a_y̅ = F_y̅/mγ_H a_z̅ = F_z̅/mγ_H^3 and therefore, we notice that the z̅ component of the acceleration will be strongly suppressed because of the extra γ_H^2 factor in the denominator. A direct consequence is that the z̅ component of the radiation electric field at the mirror will be strongly suppressed as well. Knowing the acceleration of the electron, it is possible to calculate the radiation field at the position of the observer <cit.>: 𝐄_𝐫𝐚𝐝(𝐫,t) = -e/4πϵ_0 c[𝐧× [(𝐧-β)×β̇]/R(1-β·𝐧)^3]_ret where, having defined 𝐰 as the position of the electron and 𝐫 as the position of the observer, 𝐑=𝐫-𝐰, 𝐧=𝐑/R, β=𝐯_𝐇/c and β̇=𝐚_𝐇/c. All the quantities in the square brackets must be calculated at the retarded time t-R/c. Having obtained the radiation electric field as a function of time, the electric field as a function of frequency is obtained by means of a Fourier transform <cit.>: Ẽ_𝐫𝐚𝐝(𝐫,ω) = 1/√(2π)∫_-∞^∞𝐄_𝐫𝐚𝐝(𝐫,t) e^-i ω t dt Finally, the power received by the observer is given by: dU/dt = c ϵ_0𝐄_𝐫𝐚𝐝^2 (𝐧·𝐀) where 𝐀 is the small area of the mirror that receives the beamstrahlung. In our case, the mirrors have a surface of 2.0 × 2.8 mm^2, and are inclined by 45^∘ with respect to the axis of the beam pipe. That makes the effective area seen from the IP equivalent to that of a 2.0 × 2.0 mm^2 mirror. The equations obtained in this section are used in the beamstrahlung simulation discussed in the next section.§ SIMULATION AND RESULTS We simulate by a Monte Carlo method the collision of a Gaussian beam of positrons with a Gaussian beam of electrons. We assume that the beams are rigid, meaning that the velocities of the particles are unchanged by the interaction during the collision.
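The kinematic relations of the previous section enter the simulation directly. As a sanity check (our own sketch, stdlib Python), the inversion of the force–acceleration relation can be evaluated numerically, verifying that a force along the velocity yields an acceleration suppressed by an extra factor γ² relative to a perpendicular force:

```python
import math

C = 299_792_458.0            # speed of light [m/s]
M_E = 9.1093837015e-31       # electron mass [kg]

def accel(F, v):
    """a = (F - (v.F)/c^2 v) / (m gamma): inverse of F = g^3 m a_par + g m a_perp."""
    v2 = sum(vi * vi for vi in v)
    gamma = 1.0 / math.sqrt(1.0 - v2 / C**2)
    vdotF = sum(vi * Fi for vi, Fi in zip(v, F))
    return tuple((Fi - vdotF / C**2 * vi) / (M_E * gamma)
                 for Fi, vi in zip(F, v))

beta = math.sqrt(1.0 - 1.0 / 100.0)        # chosen so that gamma = 10
v = (0.0, 0.0, beta * C)

a_parallel = accel((0.0, 0.0, 1e-20), v)[2]   # force along the velocity
a_perp     = accel((1e-20, 0.0, 0.0), v)[0]   # same force, perpendicular
ratio = a_perp / a_parallel                   # expect gamma^2 = 100
```

The ratio comes out to γ² = 100, illustrating why the longitudinal component of the radiated field is negligible for the highly relativistic SuperKEKB electrons.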
For the purpose of explaining the method, let us consider the electron beam as the radiating beam, and the positron beam as the one that provides the bending force, or target beam. The Monte Carlo simulates an electron colliding with the positron beam. Every electron will be accelerated in the collision, and therefore radiate according to the laws of classical electrodynamics <cit.> that are given in Section <ref>. The result of interest is the radiation electric field calculated at the position of the four LABM mirrors as a function of time. The simulation is then repeated for 10,000 electrons randomly distributed, according to the Gaussian distributions, within the radiating beam. Finally, the results are rescaled to take into account the nominal number of electrons present in the SuperKEKB beams. The total radiation will simply be the incoherent sum of the single-electron contributions. In this section, the (x̅,y̅,z̅) coordinate system defined in Section <ref> will be referred to as (x,y,z) to simplify the notation. In this reference system, the z component of the electric field is strongly suppressed (see Section <ref>) and therefore will be neglected in the following. Hence, here and throughout this work, we will only show the results for the x and y polarizations of the radiation electric field calculated at the position of the LABM mirrors. Of course, the same simulation can be used to simulate the case in which the positron beam is the radiating beam and the electron beam is the target beam. In the following, we will present the results for just one of the four LABM mirrors, namely Nikko Down, which receives beamstrahlung emitted by the positron beam. §.§ Energy spectrum By Fourier transforming the beamstrahlung electric field from the time to the frequency domain, our Monte Carlo simulation allows us to calculate the energy spectrum of the x and y polarizations at the mirrors.
The energy spectra for the x polarization and the y polarization at the mirror are given in Figure <ref>. The two polarizations show a different behavior at small frequencies. The spectrum for low frequencies, between 1 and 1000 THz, is shown in Figure <ref> (c,d). The spectrum for the x polarization increases at low frequencies, reaches a peak, and then decreases. For the y polarization, instead, we notice that the energy spectrum is flat at low frequencies and then decreases. Of course, in practice the beamstrahlung can be measured only for a limited subset of frequencies. In this paper, we will focus on the visible spectrum (430-770 THz), since it is the part of the spectrum of concern for our purposes. A quadratic fit of the data obtained from the simulation was used to obtain higher detail in the spectrum at small frequencies. The latter is shown in Figure <ref>. Finally, we have calculated the number of visible photons per pulse that arrive at the mirror. In Table <ref>, we show the total visible energy per collision at the mirror, the corresponding number of photons, and the size of the spot where the light is collected. The temporal distribution of the visible photons within the pulse does not exactly follow the overall distribution. Indeed, the beamstrahlung pulsewidth for visible photons is somewhat larger than that of the total pulse. The reason is that hard photons are emitted mainly in the central part of the collision, while visible photons are also emitted in the tails and/or when the beams are farther apart. In the following, we will show the results obtained for photons corresponding to 600 THz.§.§ Pulse skewness and beam timing The fundamental result of our simulation is that the symmetry of the beamstrahlung pulse depends on the timing of the colliding beams. In order to show this, we define Δz as the distance between the centers of the two beams at the instant when the center of the target beam corresponds to the IP.
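Throughout, the pulse skewness is the standard third standardized moment of the photon arrival-time distribution. A minimal estimator (names are ours; this is not the paper's code) is:

```python
def skewness(samples):
    """Third standardized moment m3 / m2^(3/2) of a list of arrival times."""
    n = len(samples)
    mu = sum(samples) / n
    m2 = sum((t - mu) ** 2 for t in samples) / n
    m3 = sum((t - mu) ** 3 for t in samples) / n
    return m3 / m2 ** 1.5

# A time-symmetric pulse has zero skewness; a pulse with a trailing
# (late-time) tail has positive skewness.
sym  = skewness([-2.0, -1.0, 0.0, 1.0, 2.0])
tail = skewness([-1.0, -0.5, 0.0, 0.5, 1.0, 4.0])
```

This sign convention (trailing tail → positive skewness) is the one used in the discussion that follows.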
The calculated skewness of the beamstrahlung pulse is shown in Figure <ref> as a function of Δz. We see that if the radiating beam is delayed, the beamstrahlung pulse will have a positive skewness (Figure <ref>-a). If the radiating beam is in time, the beamstrahlung pulse will have zero skewness (Figure <ref>-b). Finally, if the radiating beam is early, the beamstrahlung pulse will have a negative skewness (Figure <ref>-c). Therefore, by measuring the skewness of the beamstrahlung pulse, it is possible to establish whether the emitting beam is advanced or delayed with respect to the target beam, providing a measure of the relative timing of the two beams. We notice that the skewness dependence for the x polarization is approximately linear (Figure <ref>-d). We have positive skewness for negative Δz, corresponding to the radiating beam arriving at the interaction point after the target beam. Conversely, we have negative skewness for positive Δz, corresponding to the radiating beam arriving at the interaction point before the target beam. For the y polarization we have essentially zero skewness, with fluctuations due to the statistical nature of the simulation. Therefore, only the x polarization component of the pulse can be used to monitor the timing of the beams. We notice that the skewness is small for beams close to perfect timing, possibly making a measurement difficult in the case of very small Δz. However, we propose a strategy that makes a precise adjustment of the relative timing possible through the observation of the beamstrahlung skewness. Purposely changing the timing of the beams, we can move to a large positive Δz_1 and then to a large negative Δz_2 corresponding to the same skewness in absolute value. Finally, we can average Δz_1 and Δz_2, thereby obtaining the point of zero skewness, corresponding to beams perfectly in time.§.§ Pulse duration and beam length The other important measurement is the beamstrahlung pulse duration.
We show, in Figure <ref>, the power of the beamstrahlung pulse as a function of time. The beamstrahlung pulse is received by the mirror about 15 ns after the beam collision, lasting for a time interval about 10 ps long (RMS). We notice that the duration of the pulse depends on the polarization, with the x polarization being slightly wider. The result shown in Figure <ref> was obtained with beams in nominal conditions. Interestingly, the beamstrahlung pulse duration is closely related to the length of the beam, σ_z. Of course, the length of the pulse depends also on the length of the target beam. We consider three cases here: nominal conditions of the target beam, target beam 10% shorter, and 10% longer. For each case we varied the length of the radiating beam, thereby obtaining the corresponding temporal duration (RMS) of the beamstrahlung pulse, reported in Figure <ref>. We notice that the dependence is approximately linear for a beam of length shorter than or equal to the nominal value, while it tends to a plateau for a longer beam. This behavior for long beams arises because, the target beam being much shorter, the interaction only takes place within the central part of the radiating beam. Therefore, the simulation demonstrates that by measuring the time dependence of the beamstrahlung pulse it is possible to establish the length of the radiating beam at the IP. From what we have seen, the x polarization is the richer one, since its study allows measurement of both the timing of the collisions and the length of the beams. As an average situation, based on the results in Table <ref>, we will consider 10 visible photons per collision arriving at the mirror with x polarization.
In the next section, we will describe the method to measure such photons and reconstruct the beamstrahlung time dependence.§ METHOD OF MEASUREMENT Dealing with very short light pulses is a challenging task, because electronics is not able to measure pulses shorter than about 100 ps <cit.>, while streak cameras have a typical temporal resolution that is ∼1 ps at best <cit.>. To overcome these limits, physicists have developed techniques which, using femtosecond lasers in combination with nonlinear optics, make it possible to manipulate and measure light pulses with a temporal resolution down to a few femtoseconds <cit.>. There are potentially many ultrafast techniques that would be suitable to measure the time profile of beamstrahlung pulses. For instance, one can exploit a material transparent in the visible range, e.g. a wide band gap semiconductor or a UV-absorbing fluorescent dye, and excite it via non-degenerate two-photon absorption (TPA) of visible photons from the beamstrahlung beam, arriving simultaneously with the near-IR photons from a femtosecond laser <cit.>. Since TPA is only possible if the two pulses overlap in time, this phenomenon can be used to reconstruct the time profile of beamstrahlung pulses by scanning the delay between the two beams, and detecting either the fluorescence emitted by the excited sample (if any), or the change in transmission of the beamstrahlung beam caused by TPA. Any other process due to the nonlinear interaction of the two pulses, such as so-called cross-phase modulation <cit.>, may similarly be used to the same aim. Here we will focus on, and discuss in detail, a method that exploits the idea of photon upconversion, a powerful technique allowing a temporal resolution that is approximately given by the pulsewidth of the laser <cit.>. This method is based on sum-frequency generation of the beamstrahlung with an intense, pulsed laser beam within a nonlinear crystal.
A similar approach is currently used to measure fluorescence emission with sub-picosecond time resolution<cit.><cit.>, and was also used to measure the length of beams emitting synchrotron radiation while progressing through a bending magnet <cit.>.In the following, the beamstrahlung pulse will be referred to as the B pulse, while the laser pulse will be referred to as the P pulse. In our discussion we will refer to a P beam of wavelength 800 nm, typical of femtosecond Ti:Sapphire lasers, while for the beamstrahlung we will consider photons of wavelength 500 nm, or equivalently a frequency of 600 THz. Nowadays P pulses are as short as few femtoseconds, therefore much shorter than the B pulsewidth, about 20 ps based on the results of the previous section. More specifically, we will consider a P laser of pulsewidth 50 fs, average energy per pulse 10 nJ, corresponding to a 0.2 MW peak power. The measurement method we propose can be shortly described as follows. The P and B pulses are sent to overlap within a crystal endowed with marked nonlinear optical properties, such as β-Barium Borate (BBO) or Lithium Iodate. Within the crystal, there is a finite probability that a sum-frequency generation process takes place, generating new photons with energies and wavevectors given by:[left=]align ν_B + ν_P =ν_S 𝐤_𝐁 + 𝐤_𝐏 =𝐤_𝐒 When P pulses are in the near-infrared and B pulses are in the visible range, the generated S photons will be in the ultraviolet (e.g. 308 nm in our example). This is tantamount to upconverting B photons to higher frequencies within the interaction time window with the P beam. In the following, the properties referred to upconverted photons will be labelled with an S. Upconverted photons are then measured and, if the delay between the two beams is changed during the measurement, one can thus use the shorter P pulse to scan the B pulse in order to reconstruct its original time profile. 
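Energy conservation in the sum-frequency relation above fixes the upconverted wavelength. A one-line check for the quoted example (500 nm beamstrahlung mixed with an 800 nm laser):

```python
def sum_frequency_wavelength(lam_B, lam_P):
    """Energy conservation nu_B + nu_P = nu_S, i.e. 1/lam_S = 1/lam_B + 1/lam_P."""
    return 1.0 / (1.0 / lam_B + 1.0 / lam_P)

lam_S_nm = sum_frequency_wavelength(500.0, 800.0)   # wavelengths in nm
```

The result, about 307.7 nm, is consistent with the ≈308 nm ultraviolet wavelength quoted in the text.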
Of course, the synchronization of the laser with the bunch is of fundamental importance. This has already been achieved elsewhere, keeping the laser stable and synchronized with the bunch to better than 150 fs <cit.> <cit.>. §.§ Phase-matching conditions Assuming collinear beams, Eq. <ref> can be rewritten as: n_S/λ_S=n_B/λ_B+n_P/λ_P Because of the wavelength dependence of the refractive indices, this so-called phase-matching condition cannot be trivially satisfied, strongly limiting the efficiency of the nonlinear process. However, such a problem can be overcome by exploiting the birefringence of the nonlinear crystal, which allows waves to propagate with orientation-dependent refractive indices. Here we will assume the use of uniaxial crystals such as BBO, namely crystals with only one symmetry axis. A wave propagating in such a crystal experiences a refractive index n_o(λ) (ordinary refractive index) if its polarization is perpendicular to the optical axis. In contrast, if the polarization lies in the plane defined by the wavevector and the optical axis, the beam is called extraordinary, and the refractive index is given by: 1/n^2(θ,λ)=sin^2(θ)/n_e^2(λ)+cos^2(θ)/n_o^2(λ) where θ is the angle between the wavevector and the optical axis and n_e(λ) is called the extraordinary refractive index. While the phase-matching conditions cannot be satisfied if all waves, B, P, and S, are ordinary waves, Eq. <ref> can be fulfilled by a suitable choice of the beam polarizations and of the polar angle θ, because the latter allows one to continuously tune the refractive index of the extraordinary wave through Eq. <ref>. In fact, considering an interaction where the B and P beams are ordinary waves, while S is extraordinary (O + O → E interaction), the refractive indices of the B, P, and S waves are n_o(λ_B), n_o(λ_P), and n(θ,λ_S), respectively. Hence, from Eqs.
<ref> and <ref>, we obtain the following expression for the phase-matching angle: sin^2θ_m = (1/n^2(θ_m,λ_S))-(1/n_o^2(λ_S))/(1/n_e^2(λ_S))-(1/n_o^2(λ_S)) where n(θ_m,λ_S) is obtained from Eq. <ref> as follows: n_S(θ_m,λ_S)=n_o(λ_B)λ_S/λ_B+n_o(λ_P)λ_S/λ_P Provided that the B and P beams are polarized as ordinary waves, upconverted photons will be efficiently generated by adjusting the polar angle θ to the value θ_m. The choice of θ hence establishes the wavelength λ_B of the B photon that will be efficiently upconverted. In practice, upconversion will affect photons within a narrow bandwidth of frequencies centered about ν_B. The bandwidth can be expressed as <cit.>: Δν_B (Hz) = 0.88/L(cm)[γ_S(s/cm) - γ_B(s/cm)] where: γ_B=1/c[n_o(λ_B) - λ_B∂ n_o/∂λ|_λ = λ_B] and γ_S=1/c[n_S(θ_m,λ_S) - λ_S∂ n_S(θ_m,λ)/∂λ|_λ = λ_S] The bandwidth is shown in Figure <ref> as a function of the crystal length and λ_B for two different nonlinear crystals. Its order of magnitude is about 1 THz for 1 mm-thick nonlinear crystals. Because the beamstrahlung radiation is very polychromatic, the limited spectral acceptance bandwidth of the nonlinear process will significantly reduce the rate of photons upconverted, and therefore it is an important parameter to take into account when estimating the efficiency of this measurement method. In order to enhance the efficiency of the process, it is useful to focus the beams in order to increase the local intensity traversing the nonlinear crystal. However, to have upconversion, the P and B pulses incoming on the crystal must arrive within a certain solid angle of acceptance. Because it is easier to regulate the convergence of the laser than that of the B beam, the most critical condition concerns the latter. Under certain conditions, the acceptance angle for the B beam is approximately given by <cit.>: Δϕ= 2.78 n_o(λ_B) λ_B/L[1-(n_o(λ_B)λ_S)/(n_S(θ_m,λ_S)λ_B)] The angle of acceptance is plotted in Figure <ref>.
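As a numerical illustration of the phase-matching condition, the sketch below evaluates θ_m for the 500 nm + 800 nm → ≈308 nm example. The room-temperature BBO Sellmeier coefficients are taken from standard optics references and are not given in the text, so treat them (and all names) as our assumptions:

```python
import math

# BBO Sellmeier equations, wavelength in micrometres (coefficients assumed
# from standard references, not from the paper)
def n_o(lam):
    return math.sqrt(2.7405 + 0.0184 / (lam**2 - 0.0179) - 0.0155 * lam**2)

def n_e(lam):
    return math.sqrt(2.3730 + 0.0128 / (lam**2 - 0.0156) - 0.0044 * lam**2)

def n_extraordinary(theta, lam):
    """Index of an extraordinary wave at angle theta (radians) from the
    optical axis: 1/n^2 = sin^2(theta)/n_e^2 + cos^2(theta)/n_o^2."""
    inv_n2 = math.sin(theta)**2 / n_e(lam)**2 + math.cos(theta)**2 / n_o(lam)**2
    return 1.0 / math.sqrt(inv_n2)

def phase_matching_angle(lam_B, lam_P):
    """Collinear type-I (O + O -> E) sum-frequency generation."""
    lam_S = 1.0 / (1.0 / lam_B + 1.0 / lam_P)
    n_S = n_o(lam_B) * lam_S / lam_B + n_o(lam_P) * lam_S / lam_P  # required index
    sin2 = ((1.0 / n_S**2 - 1.0 / n_o(lam_S)**2)
            / (1.0 / n_e(lam_S)**2 - 1.0 / n_o(lam_S)**2))
    return math.asin(math.sqrt(sin2)), lam_S, n_S

theta_m, lam_S, n_S = phase_matching_angle(0.500, 0.800)   # wavelengths in um
theta_m_deg = math.degrees(theta_m)
```

By construction, the extraordinary index evaluated at θ_m reproduces exactly the index required by the collinear phase-matching equation.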
To avoid a reduction of the upconversion rate, it is then important to focus the B beam within this solid angle. Given the expected 6 mm diameter of the B beam at the optical box, and assuming 1 mm-thick nonlinear crystals, we estimated that this condition can be fulfilled by focusing it with a converging lens (or mirror) with a focal length of about 2000 mm. In these conditions, the acceptance angle should not limit the overall conversion efficiency. §.§ Group velocity mismatch and time resolution Since the refractive indices depend on the wavelength, the three pulses B, P, and S have different group velocities within the crystal. This fact can cause a temporal broadening of the pulses and therefore a deterioration of the time resolution <cit.>. The group velocity mismatch is given by <cit.>: Δ t(s) = L(cm) [γ_P(s/cm) - γ_B(s/cm) ] where γ_P=1/c[n_o(λ_P) - λ_P∂ n_o(λ)/∂λ|_λ = λ_P] The group velocity mismatch is shown in Figure <ref> as a function of the crystal length and λ_B. We clearly see that, if we want a resolution of at least 200 fs, we need to use a crystal no longer than 1 mm. Such a resolution is one hundredth of the total length of the B pulse, and therefore we can measure the B pulsewidth to 1% accuracy. It directly follows that, with such a resolution, it is possible to measure the length of the radiating beam with 1% precision.§.§ Efficiency of photon upconversion We are now ready to calculate the rate of beamstrahlung photons upconverted. For the nonlinear crystal, we will consider a BBO of length 1 mm. The pulsed laser will have a wavelength of 800 nm, pulsewidth 2σ_P=50 fs, energy per pulse 10 nJ, and peak power 0.2 MW. For the beamstrahlung we will consider photons of wavelength 500 nm. We remind the reader that the beamstrahlung pulsewidth (2 times the RMS) is about 2σ_B=20 ps, there are about n_VIS=10 visible photons per pulse with x polarization, and the collision frequency is f=250 MHz.
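As an aside, the group-velocity-mismatch figure underlying the 1 mm crystal choice can be checked numerically. The sketch below again uses room-temperature BBO ordinary-axis Sellmeier coefficients from standard references (an assumption; the text does not list them), with γ the inverse group velocity (n − λ dn/dλ)/c:

```python
import math

C = 2.99792458e10       # speed of light in cm/s, so gamma is in s/cm as in the text

def n_o(lam):           # BBO ordinary index, wavelength in um (assumed Sellmeier)
    return math.sqrt(2.7405 + 0.0184 / (lam**2 - 0.0179) - 0.0155 * lam**2)

def gamma_inv_vg(lam, dl=1e-4):
    """gamma(lam) = [n_o(lam) - lam * dn_o/dlam] / c, in s/cm
    (derivative taken by a symmetric finite difference)."""
    dndl = (n_o(lam + dl) - n_o(lam - dl)) / (2.0 * dl)
    return (n_o(lam) - lam * dndl) / C

L_cm = 0.1                                             # 1 mm crystal
# Magnitude of the walk-off between B (500 nm) and P (800 nm) pulses
dt = L_cm * (gamma_inv_vg(0.500) - gamma_inv_vg(0.800))
dt_fs = dt * 1e15
```

The result is on the order of 150 fs for 1 mm of BBO, consistent with the sub-200 fs resolution claimed above for a crystal no longer than 1 mm.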
We focus the two pulses on an area A of diameter 400 μm on the nonlinear crystal. The rate of upconverted photons will be given by: dN_up/dt≈ n_VIS×f/3×Δν_B/Δ_VIS×σ_P/σ_B×η_0 where Δ_VIS=(770-430) THz=340 THz and Δν_B is the spectral bandwidth of upconversion. The collision frequency is divided by 3 because the repetition rate of a typical commercial Ti:Sapphire oscillator usually ranges around 80 MHz, which can be precisely synchronized to the third sub-harmonic of the collider (83.33 MHz) <cit.>, and therefore we can only measure one third of the beamstrahlung pulses. The quantum efficiency of upconversion η_0, appearing in Eq. <ref>, is given by <cit.><cit.>: η_0 = 2 π^2 d_eff^2(P_P/A)L^2/λ_Bλ_S n_o(λ_B) n_o(λ_P) n_S(θ_m,λ_S)c ϵ_0^3 where P_P is the peak power of the pulsed laser, A is the area where the P beam is focused on the crystal (assuming the B beam is focused on an area no larger than A) and d_eff is the effective nonlinear coefficient of the crystal. The latter depends on the structure of the crystal and also on its orientation with respect to the incoming beams. For a BBO crystal, phase-matched to upconvert 500 nm, the effective nonlinear coefficient equals 1.9 pm/V. From this value, and using the above equations, we obtain the rate of photons upconverted, which is shown in Figure <ref>. We notice that using a 1 mm BBO crystal we should be able to upconvert, and therefore measure, about 660 photons per second. This is well above the typical noise floor of a photomultiplier capable of single photon counting. Thus it should be possible to acquire a single point (for a given B-P delay) in ∼15 seconds with a signal-to-noise ratio of the order of √(N)=10^2. If 100 delays (20 ps/200 fs) are used to scan the entire time profile of the B pulse, its duration and skewness can be reliably reconstructed in 10 to 20 minutes.§.§ Timing resolution There are essentially three sources of uncertainty that limit our timing resolution.
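Before itemizing these uncertainties, the bookkeeping in the rate estimate of the previous subsection can be reproduced numerically (names are ours; η_0 is left as a free parameter, and Δν_B ≈ 1 THz is the order of magnitude quoted for a 1 mm crystal):

```python
f_coll  = 250e6      # collision frequency [Hz]
n_vis   = 10         # visible x-polarized photons per pulse at the mirror
dnu_B   = 1e12       # spectral acceptance of upconversion [Hz], ~1 THz
dnu_vis = 340e12     # visible band, (770 - 430) THz
sigma_P = 25e-15     # laser pulse, 2*sigma_P = 50 fs
sigma_B = 10e-12     # beamstrahlung pulse, 2*sigma_B = 20 ps

def upconversion_rate(eta_0):
    """dN_up/dt ~ n_VIS * (f/3) * (dnu_B/dnu_VIS) * (sigma_P/sigma_B) * eta_0;
    the 1/3 reflects a laser locked to the third sub-harmonic of the
    collision frequency."""
    return n_vis * (f_coll / 3.0) * (dnu_B / dnu_vis) * (sigma_P / sigma_B) * eta_0

prefactor = upconversion_rate(1.0)   # photons/s per unit quantum efficiency
```

The prefactor comes out to roughly 6×10³ photons per second per unit efficiency; under these assumptions, the ≈660 photons/s quoted in the text then corresponds to an overall quantum efficiency η_0 of order 0.1.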
The first two are systematic uncertainties due to the laser jitter and the group velocity mismatch, which, as discussed above, are lower than 150 fs and 200 fs, respectively. The third one arises from the statistical error on the skewness, which was calculated through a toy Monte Carlo simulation. In this simulation, a measurement is reproduced by a Gaussian histogram with σ=10 ps, 200 bins of width 200 fs, and a peak value of 10000 counts. Such a measurement would last, according to the estimate given in the preceding section, about 15 seconds per bin, and therefore 50 minutes in total. For the case of small skewness and a large number of entries, it is possible to calculate the error on the skewness as (δ s)^2≈∑_it_i^2 N_i/σ^2 (∑_iN_i)^2 where t_i and N_i are the centers and the contents of the bins, respectively. From Figure <ref>, we see that the skewness has an approximately linear dependence on the relative delay, with slope 0.36 obtained through linear fitting. Therefore, we have that the error on the relative delay is δ(Δ z/σ_z,nom) ≈δ s / 0.36. From the Monte Carlo simulation, we obtained δ(Δ z/σ_z,nom) ≈ 0.003, which corresponds to an uncertainty in the timing of approximately 50 fs. Therefore, we have that all the uncertainties, both systematic and statistical, lie below 200 fs. Considering all the uncertainties, we expect to be able to deliver a measurement of the timing within an uncertainty of 1% of the length of the radiating beam.§.§ Ultrafast LABM optical box The optical channel used to extract the beamstrahlung is already part of the instrumentation at SuperKEKB; therefore we only need to realize an optical box containing all the elements necessary for the upconversion technique. The Ultrafast LABM optical box will consist of a pulsed laser, a delay stage, some optical elements, and a detecting device, for example a photomultiplier. The setup is shown in Figure <ref>.
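Returning briefly to the toy Monte Carlo of the previous subsection: the statistical error estimate can be reproduced in a few lines using the expected bin contents directly (our own sketch; for the error formula no random sampling is needed):

```python
import math

sigma  = 10e-12    # pulse RMS [s]
n_bins = 200
width  = 200e-15   # bin width [s]
peak   = 10000.0   # counts in the central bins

centers = [(i - (n_bins - 1) / 2.0) * width for i in range(n_bins)]
counts  = [peak * math.exp(-t * t / (2.0 * sigma**2)) for t in centers]

n_tot = sum(counts)
# (delta s)^2 ~ sum_i t_i^2 N_i / (sigma^2 (sum_i N_i)^2),
# valid for small skewness and a large number of entries
ds = math.sqrt(sum(t * t * N for t, N in zip(centers, counts))
               / (sigma**2 * n_tot**2))
dz_err = ds / 0.36    # propagated through the fitted slope of the skewness curve
```

The resulting δ(Δz/σ_z,nom) comes out at the few-per-mille level, in line with the ≈0.003 quoted in the text.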
The P pulse is much shorter than the B pulse, and it can be given a delay with a device called a delay stage, which will be remotely controlled. The P pulse, with the given delay, is mixed with the B pulse within the nonlinear crystal. Both pulses have to be focused onto a small area of the crystal in order to increase the efficiency. Suitably chosen mirrors, able to reflect only the visible portion of the B radiation, will inject it into the optical box. Similarly, UV dielectric mirrors (or a filter) will be used after the nonlinear crystal, in order to eliminate photons which do not originate from upconversion, i.e. those with a frequency lower than that expected for the upconverted photons. Finally, the photons are counted with a photomultiplier. By varying the relative delay of the pulses, it is possible to reconstruct the B pulse, and therefore obtain a measurement of the timing of the beams and their length at the interaction point. § CONCLUSION We have described a beam monitoring method that can be used to measure the timing and the length of the SuperKEKB beams at the interaction point. We expect to be able to measure the timing and the length of the beams with 1% precision. The length of the beams can be measured with a resolution at the very least 5 times better than with streak cameras. Besides this, the novelty of the method is that it allows establishing, with high accuracy, whether the beams arrive at the IP simultaneously, and to fix them if they do not. Indeed, SuperKEKB beams will collide at a large crossing angle (83 mrad), introducing a new possible way to lose luminosity when the bunches do not reach the IP simultaneously. It is noted that the method can also be used at synchrotron radiation sources, whenever a precise determination of the beam length is needed, by using a magnet short enough that the pulse time length is dominated by the beam length <cit.>.
We have developed a completely original simulation of the collision of the beams in order to obtain the radiation field as a function of time, which is ultimately what we aim to measure with our method. We have also presented a numerical calculation of the rate of photon upconversion, to show that we have sufficient statistics to perform the measurement. The experimental method of measurement involves an ultrafast pulsed laser, the use of a nonlinear crystal, and the phenomenon of photon upconversion. The technique involved has been thoroughly described throughout the paper, together with a description of the setup needed for a new Ultrafast LABM optical box. Basically, beamstrahlung is mixed with photons from the laser within a nonlinear crystal. A small fraction of the beamstrahlung photons in the visible range get upconverted to the ultraviolet and measured, allowing reconstruction of the beamstrahlung time profile. We are aware that luminosity is the first concern for SuperKEKB, and that every innovative beam monitoring system could be of vital importance for the success of the project. We believe that the monitoring system described in this paper is a valid candidate to be part of the SuperKEKB beam instrumentation. § ACKNOWLEDGMENT We would like to acknowledge Prof. Giovanni Bonvicini for many pieces of advice and fruitful discussions.Evans:2008zzb Lyndon Evans and Philip Bryant. LHC Machine. JINST, 3:S08001, 2008.Abe:2010gxa T. Abe et al. Belle II Technical Design Report. 2010.Syphers:2013mhc M. J. Syphers and Frank Zimmermann. Accelerator Physics of Colliders. 2013.Raimondi:2006 P. Raimondi. Talk given at the 2nd SuperB workshop, Frascati. 2006.Lee:1999pt S. Y. Lee. Accelerator physics. 1999.Olive:2016xmw C. Patrignani et al. Review of Particle Physics. Chin. Phys., C40(10):100001, 2016.Arinaga:2012laa M. Arinaga et al. Beam instrumentation for the SuperKEKB rings.
In Proceedings, 1st International Beam Instrumentation Conference (IBIC2012): Tsukuba, Japan, October 1-4, 2012, pages 6–10, 2012.Tejima:2000ns M. Tejima, M. Arinaga, H. Ishii, K. Mori, and S. Hiramatsu. Beam position monitor system for KEKB. 2000.Mulyani:2016lhr Emy Mulyani and John Flanagan. Design of Coded Aperture Optical Elements for SuperKEKB X-ray Beam Size Monitors. In Proceedings, 4th International Beam Instrumentation Conference, IBIC2015, page TUPB025, 2016.Dicke:1968 R.H. Dicke. Scatter-hole cameras for X-rays and gamma rays. The Astrophysical Journal, 153, August 1968.Bonvicini:1997cy G. Bonvicini and J. Welch. Large angle Beamstrahlung as a beam-beam monitoring tool. Nucl. Instrum. Meth., A418:223–232, 1998.Augustin:1978ah J. E. Augustin, N. Dikansky, Ya. Derbenev, J. Rees, Burton Richter, A. Skrinsky, M. Tigner, and H. Wiedemann. Limitations on Performance of e+ e- Storage Rings and Linear Colliding Beam Systems at High Energy. eConf, C781015:009, 1978.Detgen:1999cm N. Detgen, G. Bonvicini, D. Cinabro, D. Hartill, S. Henderson, G. Sun, and J. Welch. Preliminary Design of a Large Angle Beamstrahlung Detector at CESR. 1999.dicarlofarhatgillard:2017 G. Bonvicini, S. Di Carlo, H. Farhat, and R. Gillard. Calculation of beamstrahlung rates for crossing beams. To be submitted to Nuclear Instruments and Methods.Beche:2004 J.-F. Beche, J. Byrd, S. De Santis, P. Denes, M. Placidi, W. Turner, and M. Zolotorev. Measurement of the Beam Longitudinal Profile in a Storage Ring by Non-Linear Laser Mixing. 2004.Shah:1988 J. Shah. Ultrafast Luminescence Spectroscopy Using Sum Frequency Generation. IEEE Journal of quantum electronics, 24(2):276–288, February 1988.Scheidt:2000yg K. Scheidt. Review of streak cameras for accelerators: Features, applications and results. In Particle accelerator. Proceedings, 7th European Conference, EPAC 2000, Vienna, Austria, June 26-30, 2000. Vol. 1-3, pages 182–186, 2000.Kheifets:1976 S. Kheifets. PETRA-Kurzmitteilung 119, DESY, 1976.
Jackson:1998nia John David Jackson. Classical Electrodynamics. Wiley, 1998.Landau:1971vol2 L.D. Landau and E.M. Lifshitz. The Classical Theory of Fields. Pergamon Press, 1971.Hofmann:2004zk Albert Hofmann. "The physics of synchrotron radiation", volume 20. 2004.Ronzhin:2015zmc Anatoly Ronzhin. High time-resolution photodetectors for PET applications. Nucl. Instrum. Meth., 809:53–57, 2016.Xue:2015 B. Xue, C. Katan, J.A. Bjorgaard, and T. Kobayashi. Non-degenerate two photon absorption enhancement for laser dyes by precise lock-in detection. AIP Advances, 5, 2015.Lorenc2002 M. Lorenc, M. Ziolek, R. Naskrecki, J. Karolczak, J. Kubicki, and A. Maciejewski. Artifacts in femtosecond transient absorption spectroscopy. Applied Physics B, 74(1):19–27, 2002.B500108K Lijuan Zhao, J. Luis Perez Lustres, Vadim Farztdinov, and Nikolaus P. Ernsting. Femtosecond fluorescence spectroscopy by upconversion with tilted gate pulses. Phys. Chem. Chem. Phys., 7:1716–1725, 2005.Messina:2013 F. Messina, O. Bräm, A. Cannizzo, and M. Chergui. Real-time observation of the charge transfer to solvent dynamics. Nature Communications, 4, 2013.Schulz:2010 S. Schulz et al. Precision synchronization of the flash photoinjector laser. In Proceedings of IPAC’10, Kyoto, Japan. WEPEB076, 2010.Schulz:2015 S. Schulz et al. Femtosecond all-optical synchronization of an X-ray free-electron laser. Nature Communications, 6(5938), 2016. Zernike:1973 F. Zernike and J.E. Midwinter. Applied Nonlinear Optics. Wiley, 1973.Shen:1984 Y.R. Shen. The Principles of Nonlinear Optics. Wiley, 1984.Halcyon KMLabs. Halcyon, 2014.
Computer Science Department, KU Leuven, Belgium
gcollell@kuleuven.be, tedz.cs@gmail.com, sien.moens@cs.kuleuven.be

Learning to Predict: A Fast Re-constructive Method to Generate Multimodal Embeddings

Guillem Collell, Ted Zhang, Marie-Francine Moens
====================================================================================

Integrating visual and linguistic information into a single multimodal representation is an unsolved problem with wide-reaching applications to both natural language processing and computer vision. In this paper, we present a simple method to build multimodal representations by learning a language-to-vision mapping and using its output to build multimodal embeddings. In this sense, our method provides a cognitively plausible way of building representations, consistent with the inherently re-constructive and associative nature of human memory. Using seven benchmark concept similarity tests we show that the mapped vectors not only implicitly encode multimodal information, but also outperform strong unimodal baselines and state-of-the-art multimodal methods, thus exhibiting more “human-like" judgments—particularly in zero-shot settings.

§ INTRODUCTION

Convolutional neural networks (CNN) and distributional-semantic models have provided breakthrough advances in representation learning in computer vision (CV) and natural language processing (NLP), respectively <cit.>. Lately, a large body of research has shown that using rich, multimodal representations created from combining textual and visual features instead of unimodal representations (a.k.a. embeddings) can improve the performance of semantic tasks. In other words, a single multimodal representation that captures information from two modalities (vision and language) is semantically richer than those from a single modality (either vision or language).
Building multimodal representations has become a popular problem in NLP that has yielded a wide variety of methods <cit.>. Additionally, the use of a mapping to bridge vision and language has also been explored, typically with the goal of zero-shot image classification <cit.>.

Here, we propose a cognitively plausible approach to concept representation that consists of: (1) learning a language-to-vision mapping; and (2) using the outputs of the mapping as multimodal representations—with the second step being the main novelty of our approach. By re-constructing visual knowledge from textual input, our method behaves similarly to human memory, namely in an associative <cit.> and re-constructive manner <cit.>. Concretely, our method does not seek the perfect recall of visual representations but rather their re-construction and association with language. We leverage the intuitive fact that, by learning to predict, the mapping necessarily encodes information from both modalities—and in turn discards noise and irrelevant information from the visual vectors during the learning phase. Thus, given a word embedding as input, the mapped output is not purely a visual representation but rather a multimodal one.

Using seven concept similarity benchmarks, we show that our representations not only are multimodal but also improve performance over strong unimodal baselines and state-of-the-art multimodal approaches—even in a zero-shot setting. In turn, the fact that our evaluation tests are composed of human ratings of similarity supports our claim that our method provides more “human-like" judgments. Further details and insight can be found in the extended version of the present paper <cit.>.

The rest of the paper is organized as follows. In the next section, we introduce related work. Next, we describe and provide insight into our method. Afterwards, we describe our experimental setup.
Finally, we discuss our results, followed by conclusions.

§ RELATED WORK AND BACKGROUND

§.§ Cognitive grounding

A large body of research evidences that human memory is inherently re-constructive <cit.>. That is, memories are not “static" exact copies of reality, but are rather re-constructed from their essential elements each time they are retrieved, triggered by either internal or external stimuli. Arguably, this mechanism is, in turn, what endows humans with the capacity to imagine themselves in yet-to-be experiences and to re-combine existing knowledge into new plans or structures of knowledge <cit.>. Moreover, the associative nature of human memory is also a widely accepted theory in experimental psychology <cit.>, with identifiable neural correlates involved in both learning and retrieval processes <cit.>.

In this respect, our method employs a retrieval process analogous to that of humans, in which the retrieval of a visual output is triggered and mediated by a linguistic input (Fig. <ref>). Effectively, visual information is not only retrieved (i.e., mapped), but also associated with the textual information thanks to the learned cross-modal mapping—analogous to a mental model that associates semantic and visual components of concepts, acquired through lifelong experience. Since the retrieved (mapped) visual information is often insufficient to completely describe a concept, it is of interest to preserve the linguistic component. Thus, we consider the concatenation of the “imagined" visual representations with the text representations as a comprehensive way of representing concepts.

§.§ Multimodal representations

It has been shown that visual and textual features capture complementary attributes <cit.>, and the advantages of combining both modalities have been largely demonstrated in a number of linguistic tasks <cit.>. Based on the current literature, we suggest a classification of the existing strategies to build multimodal embeddings.
Broadly, multimodal representations can be built by learning from raw input enriched with both modalities (simultaneous learning), or by learning each modality separately and integrating them afterwards (a posteriori combination).

* A posteriori combination.
  * Concatenation. That is, the fusion of pre-learned visual and text features by concatenating them <cit.>. Concatenation has been proven effective in concept similarity tasks <cit.>, yet suffers from an obvious limitation: multimodal features can only be generated for those words that have images available.
  * Autoencoders form a more elaborate approach that does not suffer from the above problem. Encoders are fed with pre-learned visual and text features, and the hidden representations are then used as multimodal embeddings. This approach has been shown to perform well in concept similarity tasks and categorization (i.e., grouping objects into categories such as “fruit", “furniture", etc.) <cit.>.
  * A mapping between visual and text modalities (i.e., our method). The outputs of the mapping themselves are used to build multimodal representations.
* Simultaneous learning. Distributional semantic models are extended into the multimodal domain <cit.> by learning in a skip-gram manner from a corpus enriched with information from both modalities and using the learned parameters of the hidden layer as multimodal representations. Multimodal skip-gram methods have been proven effective in similarity tasks <cit.> and in zero-shot image labeling <cit.>.

With this taxonomy, the gap that our method fills becomes clearer, with it being aligned with a re-constructive and associative view of knowledge representation. Furthermore, in contrast to other multimodal approaches such as skip-gram methods <cit.>, our method directly learns from pre-trained embeddings instead of training on a large multimodal corpus, rendering it simpler and faster.
§.§ Cross-modal mappings

Several studies have considered the use of mappings to bridge modalities. For instance, <cit.> and <cit.> use a linear vision-to-language projection in zero-shot image classification. Analogously, language-to-vision mappings have been considered, generally to generate missing perceptual information about abstract words <cit.> and in zero-shot image retrieval <cit.>. In contrast to our approach, the methods above do not aim to build multimodal representations to be used in natural language processing tasks.

§ PROPOSED METHOD

In this section we describe the three main steps of our method (Fig. <ref>): (1) obtain visual representations of concepts; (2) build a mapping from the linguistic to the visual space; and (3) generate multimodal representations.

§.§ Obtaining visual representations

We employ raw, labeled images from ImageNet <cit.> as the source of visual information, although alternatives such as the ESP game data set <cit.> can be considered. To extract visual features from each image, we use the forward pass of a pre-trained CNN model. The hidden representation of the last layer (before the softmax) is taken as a feature vector, as it contains higher-level features. For each concept w, we average the extracted visual features of individual images to build a single visual representation v_w.

§.§ Learning to map language to vision

Let ℒ⊂ℝ^d_l be the linguistic space and 𝒱⊂ℝ^d_v the visual space of representations, where d_l and d_v are their respective dimensionalities. Let l_w∈ℒ and v_w∈𝒱 denote the text and visual representations of the concept w, respectively. Our goal is thus to learn a mapping (regression) f:ℒ→𝒱. The set of N visual representations along with their corresponding text representations composes the training data {(l_i,v_i)}^N_i=1 used to learn f. In this work, we consider two different mappings f.

(1) Linear: A simple perceptron composed of a d_l-dimensional input layer and a linear output layer with d_v units.
(2) Neural network: A network composed of a d_l-unit input layer, a single hidden layer of d_h Tanh units, and a linear output layer of d_v units.

For both mappings, a mean squared error (MSE) loss function is employed: Loss(y,ŷ) = 1/2 ||ŷ - y||^2_2, where y is the actual output and ŷ the model prediction.

§.§ Generating multimodal representations

Finally, the mapped representation m_w of each concept w is calculated as the image f(l_w) of its linguistic embedding l_w. For instance, m_dog = f(l_dog). We henceforth refer to the mapped representations as MAP_f, where f indicates the mapping function employed (lin = linear, NN = neural network). As argued below, the mapped representations are effectively multimodal. However, since f(l_w) formally belongs to the visual domain, we also consider the concatenation of the ℓ_2-normalized mapped representations f(l_w) with the textual representations l_w, namely l_w⊕ f(l_w), where ⊕ denotes the concatenation operator. We denote these concatenated representations as MAP-C_f.

Since the outputs of a text-to-vision mapping are, strictly speaking, “visual predictions", it might not seem readily obvious that they are also grounded with textual knowledge. To gain insight, it is instructive to refer to the training phase, where the parameters θ of f are learned as a function of the training data {(l_i,v_i)}^N_i=1. E.g., in gradient descent, θ is updated according to: θ←θ - η∂/∂θ Loss(θ; {(l_i,v_i)}^N_i=1). Hence, the parameters θ of f are effectively a function of the training data points {(l_i,v_i)}^N_i=1, and it is therefore expected that the outputs f(l_w) are grounded with properties of the input data {l_i}^N_i=1. It can additionally be noted that the output of the mapping f(l_w) is a (continuous) transformation of the input vector l_w. Thus, unless the mapping is completely uninformative (e.g., constant or random), the input vector l_w is still “present"—yet transformed.
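The mapping-and-concatenation pipeline above is small enough to sketch end to end. The snippet below fits the linear mapping f by minimizing the MSE loss in closed form and builds MAP-C_lin = l_w ⊕ f(l_w) with the ℓ_2-normalization described above; note it uses NumPy's least-squares solver rather than the scikit-learn/SGD setup reported later, and the random matrices are stand-ins for real GloVe and CNN features.

```python
import numpy as np

rng = np.random.RandomState(0)
d_l, d_v, N = 300, 128, 1000       # text dim, visual dim, number of training concepts

L = rng.randn(N, d_l)              # stand-in for GloVe embeddings l_i
V = rng.randn(N, d_v)              # stand-in for averaged CNN features v_i

# Linear mapping f: minimize the MSE loss ||L W - V||^2 in closed form.
W, _, _, _ = np.linalg.lstsq(L, V, rcond=None)   # W has shape (d_l, d_v)

def map_c(l_w):
    """MAP-C_lin: word embedding concatenated with its l2-normalized
    mapped ('imagined') visual vector m_w = f(l_w)."""
    m_w = l_w @ W
    m_w = m_w / np.linalg.norm(m_w)
    return np.concatenate([l_w, m_w])

u = map_c(L[0])
print(u.shape)                     # (428,): 300 text dims + 128 mapped dims
```

Swapping `np.linalg.lstsq` for an SGD-trained regressor (or a one-hidden-layer network) changes only the fitting step; the concatenation logic is identical for MAP-C_NN.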
Thus, the output of the mapping necessarily contains information from both modalities, vision and language, which is essentially the core idea of our method. Further insight is provided in the extended version of the article <cit.>.

§ EXPERIMENTAL SETUP

§.§ Word embeddings

We use 300-dimensional GloVe[http://nlp.stanford.edu/projects/glove] vectors <cit.> pre-trained on the Common Crawl corpus, consisting of 840B tokens and a 2.2M-word vocabulary.

§.§ Visual data and features

We use ImageNet <cit.> as our source of labeled images. ImageNet covers 21,841 WordNet synsets (or meanings) <cit.> and has 14,197,122 images. We only keep synsets with more than 50 images, and an upper bound of 500 images per synset is used to reduce computation time. With this selection, we cover 9,251 unique words.

To extract visual features from each image, we use a pre-trained VGG-m-128 CNN <cit.> implemented with the Matlab MatConvNet toolkit <cit.>. We take the 128-dimensional activation of the last layer (before the softmax) as our visual features.

§.§ Evaluation sets

We test the methods on seven benchmark tests, covering three tasks: (i) general relatedness: MEN <cit.> and Wordsim353-rel <cit.>; (ii) semantic or taxonomic similarity: SemSim <cit.>, Simlex999 <cit.>, Wordsim353-sim <cit.> and SimVerb-3500 <cit.>; (iii) visual similarity: VisSim <cit.>, which contains the same word pairs as SemSim, rated for visual instead of semantic similarity. All tests contain word pairs along with their human similarity ratings. The tests Wordsim353-sim and Wordsim353-rel are the similarity and relatedness subsets of Wordsim353 <cit.> proposed by <cit.>, who noted that the distinction between similarity (e.g., “tiger" is similar to “cat") and relatedness (e.g., “stock" is related to “market") yields different results.
Hence, as it is redundant with its subsets, we do not count the whole Wordsim353 as an extra test set.

A large part of the words in our tests do not have a visual representation v_w available, i.e., they are not present in our training data. We refer to these words as zero-shot (ZS).

§.§ Evaluation metric and prediction

We use the Spearman correlation ρ between model predictions and human similarity ratings as the evaluation metric. The predicted similarity between two concept representations, u_1 and u_2, is computed by their cosine similarity: cos(u_1,u_2) = (u_1·u_2)/(||u_1|| ||u_2||).

§.§ Model settings

Both the neural network and linear models are learned by stochastic gradient descent, and nine parameter combinations are tested (learning_rate = [0.1, 0.01, 0.005] and dropout_rate = [0.5, 0.25, 0.1]). We find that the models are not very sensitive to parameter variations and all of them perform reasonably well. We report a linear model with a learning rate of 0.1 and a dropout rate of 0.1. For the neural network we use 300 hidden units, a dropout rate of 0.25 and a learning rate of 0.1. All mappings are implemented with the scikit-learn toolkit <cit.> in Python 2.7.

§ RESULTS AND DISCUSSION

In the following we summarize our main findings. For clarity, we refer to the concatenation of CNN_avg and GloVe as CONC. Overall, a post-hoc Nemenyi test including all disjoint regions (ZS and VIS) shows that both MAP-C methods (lin and NN) perform significantly better than GloVe (p ≈ 0.03) and than CNN_avg (p ≈ 0.06). Hence, our multimodal representations MAP-C clearly accomplish one of their foremost goals, namely to improve on the unimodal representations of GloVe and CNN_avg. Clearly, the consistent improvement of MAP_lin and MAP_NN over CNN_avg in all seven test sets supports our claim that the imagined visual representations are more than purely visual representations and contain multimodal information—as argued in subsection <ref>.
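The evaluation protocol (cosine similarity per word pair, then Spearman ρ against the human ratings) is only a few lines. A minimal NumPy version is sketched below, using the rank-correlation shortcut that assumes no tied values; the word pairs and ratings are invented placeholders, not entries from an actual benchmark.

```python
import numpy as np

def cos(u1, u2):
    # cosine similarity between two concept representations
    return float(np.dot(u1, u2) / (np.linalg.norm(u1) * np.linalg.norm(u2)))

def spearman(x, y):
    # Spearman rho as the Pearson correlation of the ranks (assumes no ties)
    rx = np.argsort(np.argsort(x))
    ry = np.argsort(np.argsort(y))
    return float(np.corrcoef(rx, ry)[0, 1])

rng = np.random.RandomState(1)
emb = {w: rng.randn(300) for w in ["tiger", "cat", "stock", "market", "car", "garage"]}

# placeholder test set: (word1, word2, human similarity rating)
pairs = [("tiger", "cat", 7.35), ("stock", "market", 8.08), ("car", "garage", 8.20)]

preds = [cos(emb[a], emb[b]) for a, b, _ in pairs]
human = [r for _, _, r in pairs]
rho = spearman(preds, human)
assert -1.0 <= rho <= 1.0
```

Because Spearman ρ depends only on rank order, any monotone rescaling of the cosine scores leaves the reported correlation unchanged.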
Moreover, the MAP-C method generally performs better than the MAP vectors alone, implying that even though the MAP vectors are indeed multimodal, they are still predominantly visual, and their concatenation with textual representations helps.

Using the concreteness ratings of <cit.> on a 1-5 scale (with 5 being the most concrete and 1 the most abstract), we find that the average concreteness is larger than 4.4 in all VIS regions, while it is lower than 3.3 in all ZS regions except in the MEN and VisSim/SemSim test sets, which average 4.2 and 4.8, respectively. Therefore, with the exceptions of MEN, VisSim and SemSim, the inclusion of multimodal information in the ZS regions is arguably less beneficial than in the VIS regions, given that visual information can only sensibly enrich representations of words that are to some extent visual.

Both MAP_NN and MAP_lin exhibit an overall gain in MEN and in the VIS region of Wordsim353-rel. It might seem counter-intuitive that vision can help to improve relatedness understanding. However, a closer look reveals that visual features generally account for object co-occurrences, which is often a good indicator of their relatedness (e.g., between “car" and “garage" in Fig. <ref>). For instance, in MEN, the human relatedness rating between “car" and “garage" is 8.2 while GloVe's score is only 5.4. However, CNN_avg's rating is 8.7 and that of MAP_lin is 8.4—closer to the human score.

Crucially, MAP-C_NN and MAP-C_lin significantly improve the performance of GloVe in all seven VIS regions (p ≈ 0.008), with an average improvement of 4.6% for MAP-C_NN. Conversely, the concatenation of GloVe with the original visual vectors (CONC) does not improve GloVe (p ≈ 0.7)—worsening it in 4 out of 7 test sets—suggesting that simple concatenation without seeking the association between modalities might be suboptimal.
Moreover, the concatenation of the mapped visual vectors with GloVe (MAP-C_NN) outperforms the concatenation of the original visual vectors with GloVe (CONC) in 6 out of 7 test sets (p ≈ 0.06), which supports our claim that the mapped visual vectors are semantically richer than the original visual vectors.

§ CONCLUSIONS

We have presented a cognitively-inspired method capable of generating multimodal representations in a fast and simple way. In a variety of similarity tasks and seven benchmark tests, our method generally outperforms unimodal baselines and state-of-the-art multimodal methods. Moreover, the performance gain in zero-shot settings indicates that the method generalizes well and learns relevant cross-modal associations. Finally, the overall performance supports the claim that our approach builds more “human-like" concept representations. Ultimately, the present work sheds light on fundamental questions of natural language understanding, such as whether the nature of the knowledge representation obtained by the fusion of vision and language should be static and additive (e.g., concatenation without associating modalities) or rather re-constructive and associative.
Group Cooperation with Optimal Resource Allocation in Wireless Powered Communication Networks

Ke Xiong, Member, IEEE, Chen Chen, Gang Qu, Senior Member, IEEE, Pingyi Fan, Senior Member, IEEE, Khaled Ben Letaief, Fellow, IEEE

Ke Xiong is with the School of Computer and Information Technology, Beijing Jiaotong University, Beijing 100044, P.R. China. E-mail: kxiong@bjtu.edu.cn. Chen Chen and Gang Qu are with the Department of Electrical & Computer Engineering, University of Maryland, College Park, USA. E-mail: ccmmbupt@gmail.com, Gangqu@umd.edu. Pingyi Fan is with the Department of Electronic Engineering, Tsinghua University, Beijing 100084, P.R. China. E-mail: fpy@tsinghua.edu.cn. K. B. Letaief is with the School of Engineering, Hong Kong University of Science & Technology (HKUST), China. E-mail: eekhaled@ece.ust.hk.
==========================================================================================

This paper considers a wireless powered communication network (WPCN) with group cooperation, where two communication groups cooperate with each other via wireless power transfer and time sharing to fulfill their expected information delivery and achieve “win-win” collaboration.
To explore the system performance limits, we formulate optimization problems to respectively maximize the weighted sum-rate (WSR) and minimize the total consumed power. The time assignment, beamforming vector and power allocation are jointly optimized under the available power and quality of service (QoS) requirement constraints of both groups. For the WSR maximization, both fixed and flexible power scenarios are investigated. As all problems are non-convex and have no known solution methods, we solve them by using proper variable substitutions and semi-definite relaxation (SDR). We theoretically prove that our proposed solution method guarantees the global optimum for each problem. Numerical results are presented to show the system performance behaviors, which provide some useful insights for future WPCN design. They show that in such a group-cooperation-aware WPCN, optimal time assignment has a greater effect on the system performance than other factors.

RF-energy harvesting, wireless powered communication networks, simultaneous wireless information and power transfer, energy beamforming, time allocation.

§ INTRODUCTION

Recently, the fast development of radio frequency (RF)-based wireless power transfer (WPT) technology <cit.> makes it possible to build wireless powered communication networks (WPCNs) <cit.>, in which communication devices can be remotely powered over the air by dedicated wireless energy transmitters. Compared with traditional battery-powered networks, WPCN avoids manual battery replacement/recharging, which greatly reduces network maintenance and operation costs.
As the transmit power, waveforms, and occupied time/frequency dimensions, etc., of WPT are all controllable and tunable, it is capable of providing a stable energy supply under various physical conditions and communication requirements in WPCNs <cit.>. It was reported that tens of microwatts of RF power can be transferred to a distance of more than 10 meters by using RF-based WPT <cit.>. This energy is sufficient to power low-power communication devices (e.g., sensors and RF identification (RFID) tags). Thanks to the rapid evolution of multi-antenna energy beamforming <cit.>, high-efficiency energy harvesting (EH) circuit design <cit.> and energy-efficient communication system design <cit.>, RF-based WPT has been regarded as a promising and attractive solution to prolong the lifetime of low-power energy-constrained networks, such as wireless sensor networks (WSNs), wireless body area networks (WBANs) and the Internet of Things (IoT) in future 5G systems <cit.>.

Since RF signals also carry energy when they transfer information, simultaneous wireless information and power transfer (SWIPT) technology was proposed <cit.>, which has attracted great attention. It was proved that SWIPT is more efficient in spectrum usage than transmitting information and energy in orthogonal time/frequency/spatial channels <cit.>. So far, SWIPT-enabled WPCNs have been attracting increasing interest, see e.g. <cit.>. In <cit.>, single-antenna hybrid access point (H-AP)-assisted WPCNs were investigated, where the system throughput or weighted sum-rate (WSR) was maximized via optimal time assignment. Since only a single antenna was assumed at the H-AP, no beamforming design was involved in those works. As is known, with multiple antennas equipped at the transmitter, beamforming can be employed to improve the energy/information transmission efficiency due to its focusing effect of the signals on specific receivers. Thus, some works began to consider beamforming design in WPCNs, see e.g., <cit.>.
In <cit.>, beamforming vectors were optimized to maximize the system's achievable information rate. In <cit.> and <cit.>, beamforming vectors were jointly optimized with time assignment to maximize the sum-rate of the WPCN with a multi-antenna H-AP. Seeing that WPCN provides a promising solution for WSNs and IoT, in which information is often relayed over multiple hops from a source to its destination due to the limited coverage of each node, some works also investigated WPCN with relay technologies, see e.g. <cit.> and <cit.>, where amplify-and-forward (AF) and decode-and-forward (DF) relay operations were studied in <cit.> and <cit.>, respectively. Besides, some existing works also investigated the resource allocation of WPCN in various wireless networks, see e.g. <cit.>.

However, existing works only studied the energy transfer and information delivery within the same communication group, which means that either the energy was transferred from the H-AP to its users and the users used the harvested energy to transmit information to the H-AP, or the energy was transferred from the source to the energy-constrained relay node and then the relay helped to forward the information from the source to its destinations. Therefore, no group cooperation was involved in existing works, and the systems were designed only by considering the utility maximization of a single communication group.

In this paper, we investigate group cooperation with optimal resource allocation in WPCNs. We consider a network composed of two communication groups, where group 1 has sufficient energy supply but no licensed bandwidth, and group 2 has licensed bandwidth but insufficient energy. Therefore, neither group can fulfill the information delivery needed to meet its desired information transmission rate.
Considering that SWIPT provides an effective approach for information transmission and energy cooperation between nodes, we introduce energy cooperation and time sharing between the two groups, so that group 1 may transfer some energy to group 2 and then get some transmission time from group 2 in return. With this inter-group cooperation, both groups can achieve their expected information rates. For such a WPCN with group cooperation, our goal is to explore its performance limits in terms of WSR and the minimum consumed power.

Compared with existing works, several other differences of our work are emphasized as follows. Firstly, different from some existing works on one-hop WPCNs, see e.g., <cit.>, where only point-to-point communication was investigated, in our work, cooperative relaying [In our work, DF relaying cooperation is employed since DF relaying often outperforms AF relaying, especially in relatively high signal-to-noise ratio (SNR) scenarios.] is involved. Although some works studied relay-aided WPCN systems, see e.g. <cit.>, all nodes were assumed to have a single antenna, so that no beamforming was considered. Secondly, although some works introduced cooperation into WPCNs, they did not investigate “win-win” collaboration via energy and time cooperation between different groups. For example, in <cit.>, user cooperation was studied in a relay-aided WPCN, where the closer user was powered to help the farther user forward information. However, no energy transfer cooperation between the two users was involved and no beamforming was considered. In <cit.>, the cooperation between primary users and secondary users in cognitive networks was studied, where, however, only the sum-rate of the secondary users was maximized and beamforming design was also not involved.
Comparatively, in our work, group cooperation in terms of wireless power transfer and time sharing is involved to achieve a “win-win” collaboration, and SWIPT beamforming is also considered. Thirdly, different from most existing works, see e.g., <cit.>, where only one or two kinds of resources were optimized, in our work, cooperative relaying, time assignment, SWIPT beamforming and power allocation with group cooperation are jointly designed and optimized in a single system, and we mathematically prove that our proposed optimization method achieves the global optimum.

The contributions of our work are summarized as follows.

Firstly, we propose a group-cooperation-based cooperative transmission protocol for the considered WPCN, which is able to achieve “win-win" cooperative transmission between two communication groups via energy transfer and time sharing.

Secondly, to explore the information transmission performance limit of the system, we formulate two optimization problems to maximize the system WSR by jointly optimizing the time assignment and beamforming vector under two different power constraints, i.e., the fixed power and the flexible power constraints. In order to achieve the “win-win" cooperation between the two groups and guarantee their QoS requirements, the minimal required information rate constraints of the two groups are also considered in the optimal system design. As both problems are non-convex and have no known solution methods, we transform them into equivalent ones with some variable substitutions and then solve them by using the semi-definite relaxation (SDR) method. We theoretically prove that our proposed solution method can guarantee to find the global optimal solution.

Thirdly, WPCNs have promising application potential in future energy-constrained networks, in which power consumption reduction is very critical and green communication design <cit.> is essential.
We formulate an optimization problem to minimize the total consumed power of the WPCN by jointly optimizing the time assignment and beamforming vector under the required data rate constraints of the two groups. As the problem is non-convex, we also solve it efficiently by using some variable substitutions and the SDR method. The global optimum of our proposed minimal-power-consumption system design is also theoretically proved.

Fourthly, numerical results are presented to discuss the system performance behaviors, which provide some useful insights for future WPCN design. They show that the average-power-constrained system achieves a higher WSR than the fixed-power-constrained system and that, in such a group-cooperation-aware WPCN, optimal time assignment has a greater effect on the system performance than other factors. Besides, the effects of relay position on system performance are also discussed via simulations.

The rest of the paper is organized as follows. Section II describes the system model. Sections III and IV investigate the WSR maximization and power minimization design of our considered WPCN, respectively. Section V provides some simulation results and, finally, Section VI concludes the paper.

§ SYSTEM MODEL

§.§ Network Model

Consider a wireless system consisting of two communication groups as shown in Figure <ref>, wherein group 1 source node S_1 desires to transmit information to D_1 and group 2 source node S_2 desires to transmit information to D_2. For group 1, S_1 has a stable and sufficient energy supply but no licensed bandwidth, so it cannot transmit information to D_1. For group 2, S_2 has licensed bandwidth but is located relatively far away from D_2, so it cannot achieve a high enough data rate over the S_2 → D_2 direct link to meet its required information rate. Thus, S_2 needs R to help it forward information to D_2.
It is assumed that R is an energy-exhausted/selfish node, so R cannot, or is not willing to, consume its own energy to help forward information from S_2 to D_2. In this case, neither group 1 (i.e., the bandwidth-limited group) nor group 2 (i.e., the power-limited group) can fulfill its expected information delivery. Fortunately, by using WPT, the two groups are able to cooperate with each other in terms of energy and transmission time to achieve a “win-win” outcome and fulfill their respective desired information transmissions. Specifically, S_1 transmits some energy to R to enable R to participate in the information transmission from S_2 to D_2. In return, S_2 grants a portion of its transmission time to S_1 to help group 1 accomplish its information delivery. With such cooperation, both groups may successfully deliver their information. It is worth noting that the presented cooperation model can also be applied in cognitive radio networks, where group 2 can be regarded as the primary user with a licensed frequency band and group 1 as a secondary user with no licensed frequency band. In traditional underlay cognitive networks, group 1 transmits information only when group 2 is silent. If group 2 always transmits signals, group 1 has no opportunity to transmit its information. Besides, due to the weak direct link in group 1, its achievable information rate may be rather low. However, with the described energy and time sharing cooperation, group 1 is motivated to share its transmission time with group 2 and is able to obtain some energy to increase its information rate. Meanwhile, group 2 does not have to passively wait for a chance to transmit its information; it can actively seek transmission opportunities at the expense of some energy. Therefore, the primary and secondary users, as two cooperative groups, can both profit from the underlay cognitive transmission. In order to enhance the energy transfer efficiency, S_1 (e.g.
a sink node in a WSN) is assumed to be equipped with N antennas, while all other nodes (e.g., sensor nodes) support only a single antenna due to their size limitations. A block-fading channel is considered, so that all channel coefficients can be regarded as constant during each fading block and vary independently from block to block, following a Rayleigh distribution. h_u v(k) is used to denote the channel coefficient of the k-th block between node u and node v. n(k) ∼𝒞𝒩 (0, N_0) is the additive white Gaussian noise (AWGN) of the k-th block. Hence, h_u v(k) ∼𝒞𝒩 (0, d_u v^-β), where d_u v is the distance between node u and node v, and β is the path loss exponent. The time period of each fading block is denoted by T.

§.§ Transmission Protocol

To complete the cooperative transmission, each time period T is divided into four phases, with time intervals τ_1, τ_2, τ_3 and τ_4, respectively, where τ_m ≥ 0 for m=1,…,4. Without loss of generality, T is normalized to 1 in the sequel, so that ∑_m=1^4 τ_m = 1. Defining τ≜ [τ_1τ_2τ_3τ_4]^T as the time assignment vector of the four transmission phases, it satisfies 1^T τ = 1,τ≽0, where 1 is a column vector with all elements equal to 1. In the first phase, with time interval τ_1, S_1 transfers energy to R and transmits information to D_1 simultaneously. Let x_S_1(k) with |x_S_1(k)|^2 = 1 be the symbol transmitted by S_1. The received signals at D_1 and R are, respectively, given by y_D_1(k) = √(P_S_1^(1))𝐡_S_1 D_1^H (k)ω x_S_1(k) + n(k) and y_ R(k) = √(P_S_1^(1))𝐡_S_1 R^H (k)ω x_S_1(k) + n(k), where 𝐡_S_1 D_1∈ℂ^N × 1 and 𝐡_S_1 R∈ℂ^N × 1 are the complex channel vectors from S_1 to D_1 and from S_1 to R, respectively. P_S_1^(1) is the available transmit power at S_1 in the first phase.
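The channel model above can be instantiated numerically. Below is a minimal Python sketch of drawing one block's Rayleigh-fading coefficients h_u v ∼𝒞𝒩(0, d_u v^-β); the function and variable names are our own, for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

def rayleigh_channel(n_antennas, distance, beta=4.0):
    """Draw one block's channel vector h ~ CN(0, d^-beta I).

    Each entry is circularly symmetric complex Gaussian with
    variance d^-beta, per the path-loss model in the text.
    """
    var = distance ** (-beta)
    real = rng.normal(0.0, np.sqrt(var / 2.0), n_antennas)
    imag = rng.normal(0.0, np.sqrt(var / 2.0), n_antennas)
    return real + 1j * imag

# Example: channel vector from S_1 (N = 4 antennas) to relay R at 2 m.
h_s1_r = rayleigh_channel(4, 2.0)
```

Averaged over many blocks, the per-entry power of such draws approaches d^-β, consistent with the stated distribution.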
ω∈ℂ^N × 1 represents the beamforming vector at S_1, satisfying ‖ω‖^2 ≤ 1. The achievable information rate at D_1 in the first phase is R_S_1^(1) = τ_1𝒞(P_S_1^(1) |𝐡_S_1 D_1^H ω|^2/N_0), where 𝒞(x) ≜log_2(1 + x), and the harvested energy at R is E_ R^(1) = ητ_1 P_S_1^(1)| 𝐡_S_1 R^H ω |^2, where η∈(0, 1] is a constant accounting for the energy conversion efficiency. The larger the value of η, the higher the energy conversion efficiency; in particular, η = 1 means all received signal power can be perfectly converted to energy at the receiver. In the second phase, with time interval τ_2 rewarded by group 2, S_1 transmits its own information to D_1 via its multiple antennas. As this is a typical multiple-input single-output (MISO) channel, by using the maximum ratio transmission (MRT) strategy <cit.>, the achievable information rate from S_1 to D_1 in this phase can be given by R_S_1^(2) = τ_2𝒞(P_S_1^(2)‖𝐡_S_1 D_1‖^2/N_0), where P_S_1^(2) is the available transmit power at S_1 in the second phase. Because of the broadcast nature of the wireless channel, the signals transmitted by S_1 in this phase can also be collected by R for energy harvesting. The harvested energy in the second phase is E_ R^(2)= ητ_2 P_S_1^(2)| 𝐡_S_1 R^H 𝐡_S_1 D_1/‖𝐡_S_1 D_1‖|^2, where 𝐡_S_1 D_1/‖𝐡_S_1 D_1‖ is the MRT precoding vector adopted at S_1. In the third phase, with time interval τ_3, S_2 broadcasts information to R and D_2. Let the symbol transmitted by S_2 be x_S_2(k) with |x_S_2(k)|^2 = 1. The signals received at R and D_2 are, respectively, y_ R(k) = √(P_S_2^(3)) h_S_2 R(k) x_S_2(k) + n(k) and y_D_2(k) = √(P_S_2^(3)) h_S_2 D_2(k) x_S_2(k) + n(k), where P_S_2^(3) is the available transmit power at S_2. In the fourth phase, with time interval τ_4, R decodes the information transmitted by S_2 and then forwards the decoded information to D_2 using the energy harvested from S_1 in the first two phases.
The received signal at D_2 from R in the fourth phase is y_D_2(k) = √(P_R) h_RD_2(k) x_R(k) + n(k), where P_R is the available transmit power at R, which is constrained by the sum of the energy harvested in the first two phases, i.e., E_ R^(1) in (<ref>) and E_ R^(2) in (<ref>). That is, τ_4 P_R≤E_R^(1)+E_R^(2) = ητ_1 P_S_1^(1)| 𝐡_S_1 R^H ω |^2 +ητ_2 P_S_1^(2)| 𝐡_S_1 R^H 𝐡_S_1 D_1/‖𝐡_S_1 D_1‖|^2. Decode-and-forward (DF) relaying is employed at R, so the end-to-end information rate of group 2 satisfies <cit.> R_S_2 ≤ min{ τ_3𝒞(P_S_2^(3) |h_S_2 R|^2/N_0), τ_3𝒞(P_S_2^(3) |h_S_2 D_2|^2/N_0)+τ_4𝒞(P_R |h_R D_2|^2/N_0) }. Among the four phases described above, group 1 transmits information in both the first and the second phases. Combining R_S_1^(1) with R_S_1^(2), one obtains the total achievable information rate from S_1 to D_1 in the k-th fading block as R_S_1 ≤R_S_1^(1) + R_S_1^(2) = τ_1𝒞(P_S_1^(1)|𝐡_S_1 D_1^H ω|^2/N_0) + τ_2𝒞(P_S_1^(2) ‖𝐡_S_1 D_1‖^2/N_0). Group 2 transmits information in the third and fourth phases via DF cooperative relaying, and its achievable information rate in the k-th fading block is given by (<ref>). Suppose the minimal required information rate of group i is r_S_i, where i ∈{1, 2}. The end-to-end achievable information rate R_S_i then satisfies R_S_i≥ r_S_i,∀ i = 1, 2. Note that the minimal required data rate constraints in (<ref>) are reasonable and practical in the considered WPCN, because the cooperation benefits both groups only when the obtained data rates exceed the minimal required ones. Also, with the constraints in (<ref>), the problems in Section III may not have a feasible solution; in that case, there is no opportunity for the two groups to achieve win-win cooperation.

§ WSR-MAXIMIZATION DESIGN

Let α_i ≥ 0 be the weight of the achievable information rate of group i, where i=1,2.
The WSR of the system is R_wsum = α_1 R_S_1 + α_2 R_S_2. We consider two different scenarios, i.e., the fixed and the flexible power scenarios, for the WSR-maximization design of the cooperative WPCN in the following two subsections.

§.§ Fixed Power Scenario

§.§.§ Problem Formulation

In the fixed power scenario, S_1 and S_2 have fixed instantaneous powers in their respective transmission phases: S_1 uses the same transmit power in phases 1 and 2, i.e., P_ S_1^(1) = P_ S_1^(2), and S_2 transmits in phase 3 with power P_ S_2^(3). For clarity, we denote the fixed power at S_i by P_ S_i, so that P_ S_1^(1) = P_ S_1^(2) = P_ S_1 and P_ S_2^(3) = P_ S_2. As a result, (<ref>) and (<ref>) can be respectively rewritten as R_S_2 ≤ min{ τ_3𝒞(P_S_2 |h_S_2 R|^2/N_0), τ_3𝒞(P_S_2 |h_S_2 D_2|^2/N_0)+τ_4𝒞(P_R |h_R D_2|^2/N_0) }, and R_S_1 ≤ τ_1𝒞(P_S_1|𝐡_S_1 D_1^H ω|^2/N_0) + τ_2𝒞(P_S_1 ‖𝐡_S_1 D_1‖^2/N_0). Therefore, the WSR maximization problem for the fixed power scenario can be mathematically expressed as 𝐏_1: τ, ω, R_ S_1, R_ S_2maximizeα_1 R_ S_1 + α_2 R_ S_2subject to (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), (<ref>). It is worth noting that Problem 𝐏_1 can be regarded as a general form of the data-rate-maximization-oriented design for the considered cooperative WPCN. In particular, when α_1=α_2≠ 0, the problem becomes a rate-constrained sum-rate maximization. When α_i=0 and α_j≠ 0, where i,j∈{1,2} and i≠ j, the problem maximizes the data rate of group j while guaranteeing the minimal required data rate of group i. Nevertheless, the right-hand sides of (<ref>) and (<ref>) are non-linear w.r.t. τ and ω, so constraints (<ref>) and (<ref>) are non-convex sets. Moreover, (<ref>) and (<ref>) are also non-convex sets w.r.t. τ and ω. Therefore, 𝐏_1 is not a convex problem and cannot be solved with known solution methods.
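Before turning to the solution, the fixed-power rate expressions above are cheap to evaluate numerically. The sketch below computes the DF bound on R_S_2 as the minimum of the relay-decoding term and the combined direct-plus-relayed term; the channel gains |h|^2 are made-up illustrative values, not the paper's simulation settings:

```python
import math

def cap(x):
    # C(x) = log2(1 + x), the capacity function used throughout
    return math.log2(1.0 + x)

def group2_rate(tau3, tau4, P_s2, P_r, g_s2r, g_s2d2, g_rd2, N0):
    """DF end-to-end rate bound for group 2 (fixed-power form):
    min of the relay decoding rate and the combined
    direct-link-plus-relayed rate."""
    decode = tau3 * cap(P_s2 * g_s2r / N0)
    combine = tau3 * cap(P_s2 * g_s2d2 / N0) + tau4 * cap(P_r * g_rd2 / N0)
    return min(decode, combine)

# Illustrative (hypothetical) channel gains |h|^2 and powers.
r2 = group2_rate(tau3=0.3, tau4=0.2, P_s2=0.2, P_r=0.05,
                 g_s2r=1e-4, g_s2d2=1e-8, g_rd2=1e-5, N0=1e-6)
```

With these example numbers the combined term is the bottleneck, reflecting the weak S_2 → D_2 direct link assumed in the system model.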
Thus, we solve it as follows.

§.§.§ Problem Transformation and Solution

We observe that ω always appears in quadratic form in constraints (<ref>), (<ref>) and (<ref>). By defining Ω≜ωω^H, these three constraints can be re-expressed as Tr(Ω) ≤ 1, τ_4 P_R≤ητ_1 P_S_1𝐡_S_1 R^H Ω𝐡_S_1 R+ ητ_2 P_S_1| 𝐡_S_1 R^H 𝐡_S_1 D_1/‖𝐡_S_1 D_1‖|^2, and R_S_1≤τ_1𝒞( P_S_1𝐡_S_1 D_1^H Ω𝐡_S_1 D_1/N_0) + τ_2𝒞( P_S_1‖𝐡_S_1 D_1‖^2/N_0). Note that in order to ensure that ω can be recovered from Ω uniquely, it must hold that Ω≽ 0 and rank(Ω) = 1. Therefore, by replacing ω with Ω, Problem 𝐏_1 is equivalently transformed into the following Problem 𝐏_1^': 𝐏_1^': τ, Ω, R_ S_1, R_ S_2maximizeα_1 R_S_1 + α_2 R_S_2subject to (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), (<ref>). Problem 𝐏_1^' is still not jointly convex w.r.t. τ and Ω even if the rank-one constraint (<ref>) is removed. However, when the rank-one constraint is dropped, the problem is convex w.r.t. Ω for a given τ, and convex w.r.t. τ for a given Ω. Therefore, the relaxed version of 𝐏_1^' could be solved by the traditional alternating-iteration method. With that method, however, only the convergence of the iteration can be proved; it cannot be theoretically guaranteed that the global optimal solution is always found. Instead, we design a new solution method as follows, which is capable of finding the global optimal solution of Problem 𝐏_1^'. Define a new matrix variable ϝ∈ℂ^N× N such that ϝ = τ_1 Ω. According to (<ref>) and (<ref>), it holds that ϝ≽ 0 and rank(ϝ) = 1. Substituting Ω=ϝ/τ_1 into (<ref>) and (<ref>), these two constraints can be respectively re-expressed as Tr(ϝ) ≤τ_1 and R_S_1≤τ_1𝒞(P_S_1Tr (ϝ𝐡_S_1 D_1𝐡_S_1 D_1^H )/N_0 τ_1) + τ_2𝒞( P_S_1‖𝐡_S_1 D_1‖^2/N_0). Moreover, let ϕ_4 = τ_4 P_R.
(<ref>) and (<ref>) can then be respectively rewritten as ϕ_4 ≤ηP_S_1Tr (ϝ𝐡_S_1 R𝐡_S_1 R^H )+ ητ_2 P_S_1| 𝐡_S_1 R^H 𝐡_S_1 D_1/‖𝐡_S_1 D_1‖|^2 and R_S_2 ≤ min{ τ_3𝒞( P_S_2|h_S_2 R|^2/N_0), τ_3𝒞( P_S_2|h_S_2 D_2|^2/N_0)+τ_4𝒞( ϕ_4|h_R D_2|^2/N_0 τ_4)}. With the above variable substitutions, i.e., ϝ = τ_1 Ω and ϕ_4 = τ_4 P_R, Problem 𝐏_1^' is equivalently transformed into the following Problem 𝐏_1^'': 𝐏_1^'': τ, ϝ, ϕ_4, R_S_1, R_S_2maximizeα_1 R_S_1 + α_2 R_S_2 subject to (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), (<ref>). By dropping the rank-one constraint in (<ref>), we obtain 𝐏_1^''': τ, ϝ, ϕ_4, R_ S_1, R_ S_2minimize - α_1 R_S_1 - α_2 R_S_2subject to (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), (<ref>). 𝐏_1^''' is a convex problem. The objective function of Problem 𝐏_1^''' is linear. The constraints (<ref>), (<ref>), (<ref>), (<ref>) and (<ref>) are all convex sets. Moreover, as y log(1+x/y) is the perspective function of the concave function log(1+x), and is therefore jointly concave w.r.t. x and y <cit.>, it can be shown that (<ref>) and (<ref>) are also convex sets. Thus, we arrive at Proposition <ref>. Via the relaxation described above, the non-convex Problem 𝐏_1^'' is transformed into the convex Problem 𝐏_1^''' by using SDR <cit.>. Therefore, by employing known solution methods for convex problems, e.g., the interior-point method <cit.>, the optimal [τ^*,ϝ^*, ϕ_4^*] of Problem 𝐏_1^''' can be obtained.

§.§.§ Global Optimum Analysis for the Proposed Solution Method

Note that our goal is to find the optimal [τ^*, ω^*] of Problem 𝐏_1 rather than the optimal [τ^*,ϝ^*, ϕ_4^*]. Only when rank(ϝ^*) = 1 is [τ^*,ϝ^*, ϕ_4^*] also the optimal solution of Problem 𝐏_1^'', in which case the optimal [τ^*,ω^*] can be derived accordingly. Therefore, the key question lies in the rank of ϝ^*.
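Numerically, once the relaxed SDP has been solved, the rank of ϝ^* can be checked through its eigenvalues, and ω^* recovered from the principal eigenvector via Ω^* = ϝ^*/τ_1^* = ω^* ω^*H. A minimal numpy sketch of this check and recovery (a hypothetical illustration, not the authors' code):

```python
import numpy as np

def recover_beamformer(F, tau1, tol=1e-8):
    """If F = tau1 * w w^H is (numerically) rank one, recover w.

    Raises if the second-largest eigenvalue is non-negligible,
    i.e., the relaxed solution has rank greater than one.
    """
    vals, vecs = np.linalg.eigh(F)  # eigenvalues in ascending order
    if vals[-2] > tol * max(vals[-1], 1.0):
        raise ValueError("relaxed solution is not rank one")
    # Omega = F / tau1 = w w^H, so w is the principal eigenvector
    # scaled by sqrt(lambda_max / tau1) (up to a phase rotation).
    return np.sqrt(vals[-1] / tau1) * vecs[:, -1]

# Sketch: build a rank-one F from a known w and recover it.
w_true = np.array([0.6, 0.8j, 0.0, 0.0])
tau1 = 0.25
F = tau1 * np.outer(w_true, w_true.conj())
w_hat = recover_beamformer(F, tau1)
```

The recovered vector may differ from the true one by a global phase, which is immaterial here since only ωω^H enters the rate and energy expressions.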
Fortunately, there exists an optimal ϝ^* of Problem 𝐏_1^''' such that rank(ϝ^*) = 1, which means the global optimum of the primary Problem 𝐏_1 can be guaranteed. We now analyse the rank of ϝ^* with Theorem <ref>. Before that, we restate Lemma <ref>, which was proved in <cit.>, as follows. <cit.> Consider a problem 𝐏_0, 𝐏_0: 𝐗_1, …, 𝐗_Lminimize∑_l = 1^L Tr(𝐂_l 𝐗_l)subject to∑_l = 1^LTr(𝐀_ml𝐗_l) _m b_m, m = 1,…,M,𝐗_l ≽ 0, l = 1,…,L, where 𝐂_l, l = 1,…,L and 𝐀_ml, m = 1,…,M, l = 1,…,L are Hermitian matrices, b_m ∈ℝ, _m ∈{≥, =, ≤}, m = 1,…,M, and the variables 𝐗_l, l = 1,…,L are Hermitian matrices. If Problem 𝐏_0 and its dual are solvable, then Problem 𝐏_0 always has an optimal solution (𝐗_1^*, …, 𝐗_L^*) such that ∑_l = 1^Lrank^2 (𝐗_l^*) ≤ M. There exists an optimal ϝ^* of Problem 𝐏_1^''' such that rank(ϝ^*) = 1. The proof can be found in Appendix <ref>. The global optimal solution to Problem 𝐏_1 is guaranteed by the proposed solution method. 𝐏_1, 𝐏_1^' and 𝐏_1^'' are equivalent to each other, and once the optimal solution of 𝐏_1^''' satisfies the rank-one constraint, it is also optimal for 𝐏_1, 𝐏_1^' and 𝐏_1^''. Theorem <ref> states that 𝐏_1^''' has a rank-one optimal solution. Therefore, the optimal solution of Problem 𝐏_1 can always be found by the proposed solution method.

§.§ Flexible Power Scenario

§.§.§ Problem Formulation

In the flexible power scenario, S_1 and S_2 are allowed to transmit information/energy in different phases with different powers, but the average power over each fading block is constrained by P_S_1 and P_S_2, respectively. That is, the consumed powers at S_1 and S_2 respectively satisfy τ_1 P_S_1^(1) + τ_2 P_S_1^(2)≤ P_S_1 and τ_3 P_S_2^(3)≤ P_S_2. For clarity, we define 𝐏≜ [P_S_1^(1)P_S_1^(2)P_S_2^(3)]^T, which can be regarded as the power allocation vector for the transmission phases.
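The time-sharing and average-power constraints of the flexible scenario are straightforward to check for a candidate (τ, 𝐏); a small sketch, with all names hypothetical:

```python
def feasible(tau, P, P_s1_max, P_s2_max, eps=1e-9):
    """Check the flexible-power feasibility conditions:
    sum(tau) = 1, tau >= 0, P >= 0,
    tau1*P1 + tau2*P2 <= P_S1 (S_1 average-power budget),
    tau3*P3 <= P_S2          (S_2 average-power budget)."""
    t1, t2, t3, t4 = tau
    p1, p2, p3 = P
    return (abs(sum(tau) - 1.0) <= eps
            and all(t >= -eps for t in tau)
            and all(p >= -eps for p in P)
            and t1 * p1 + t2 * p2 <= P_s1_max + eps
            and t3 * p3 <= P_s2_max + eps)
```

Note that a per-phase power may exceed the block budget (e.g., P_S_1^(1) > P_S_1) as long as the time-weighted average stays within it, which is exactly the extra freedom the flexible scenario exploits.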
Thus, the WSR maximization problem can be mathematically expressed as 𝐏_2: τ, ω, 𝐏,R_ S_1, R_ S_2maximizeα_1 R_S_1 + α_2 R_S_2subject to (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), (<ref>). Compared with Problem 𝐏_1 for the fixed power scenario, in Problem 𝐏_2 the powers 𝐏 consumed in each phase at the two sources are jointly optimized with τ and ω. Similar to Problem 𝐏_1, Problem 𝐏_2 is also non-convex, so we solve it as follows.

§.§.§ Problem Transformation and Solution

As with the solution method designed for Problem 𝐏_1, we deal with Problem 𝐏_2 by first transforming it into a convex problem through variable substitutions and SDR, and then solving it efficiently. We again use the definition Ω≜ωω^H by introducing a semi-definite matrix Ω≽ 0. Then, (<ref>) can be equivalently replaced by (<ref>), and (<ref>) can be re-expressed as R_S_1≤τ_1𝒞(P_S_1^(1)𝐡_S_1 D_1^H Ω𝐡_S_1 D_1/N_0) + τ_2𝒞(P_S_1^(2)‖𝐡_S_1 D_1‖^2/N_0). Consequently, with the rank-one constraint rank(Ω) = 1, Problem 𝐏_2 is equivalently transformed into the following Problem 𝐏_2^': 𝐏_2^':τ, Ω, 𝐏,R_S_1, R_S_2maximizeα_1 R_S_1 + α_2 R_S_2 subject to (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), (<ref>). Since Problem 𝐏_2^' is still non-convex, we further adopt the following variable substitutions by introducing five new variables, i.e., ϕ_1 =τ_1 P_S_1^(1),ϕ_2 =τ_2P_S_1^(2), ϕ_3 =τ_3 P_S_2^(3),ϕ_4 = τ_4 P_R, G =τ_1 P_S_1^(1)Ω=ϕ_1Ω, with G≽ 0 and rank(G)=1. With these linear definitions, (<ref>), (<ref>), (<ref>) and (<ref>) can be respectively replaced by (<ref>), (<ref>), (<ref>) and (<ref>).
Moreover, (<ref>), (<ref>), (<ref>), (<ref>) and (<ref>) are respectively transformed into R_S_2 ≤ min{ τ_3𝒞(ϕ_3 |h_S_2 R|^2/N_0 τ_3), τ_3𝒞(ϕ_3 |h_S_2 D_2|^2/N_0 τ_3)+τ_4𝒞(ϕ_4|h_R D_2|^2/N_0 τ_4)},ϕ_1 + ϕ_2 ≤ P_S_1 , ϕ_3 ≤ P_S_2 , Tr(G) ≤ϕ_1 and R_S_1≤τ_1𝒞(Tr (G𝐡_S_1 D_1𝐡_S_1 D_1^H )/N_0 τ_1) + τ_2𝒞( ϕ_2 ‖𝐡_S_1 D_1‖^2/N_0 τ_2). Let ϕ=[ϕ_1ϕ_2ϕ_3ϕ_4]^T. With the definitions in (<ref>), Problem 𝐏_2^' can be equivalently transformed into the following Problem 𝐏_2^'': 𝐏_2^'': τ, G, ϕ,R_S_1, R_S_2maximize α_1 R_S_1 + α_2 R_S_2 subject to (<ref>),(<ref>), (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), (<ref>). The objective function of Problem 𝐏_2^'' is linear and all constraints except the rank-one constraint (<ref>) are convex sets. Therefore, by using the SDR method, i.e., dropping (<ref>), Problem 𝐏_2^'' can be relaxed to the following convex problem: 𝐏_2^''': τ, G, ϕ, R_S_1, R_S_2minimize - α_1 R_S_1 - α_2 R_S_2 subject to (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), (<ref>). 𝐏_2^''' is a convex problem. The proof is similar to that of Proposition <ref> and is omitted here. Therefore, the optimal solution [τ^*,G^*, ϕ^*] of Problem 𝐏_2^''' can be obtained by known solution methods.

§.§.§ Global Optimum Analysis for the Proposed Solution Method

As with Problem 𝐏_1^''', only when rank(G^*) = 1 is [τ^*,G^*, ϕ^*] also the optimal solution of Problem 𝐏_2^'', in which case the optimal [τ^*,ω^*,𝐏^*] can be derived accordingly. Therefore, the key question lies in the rank of G^*. Fortunately, a rank-one optimal G^* always exists for Problem 𝐏_2^''', which means the global optimum of the primary Problem 𝐏_2 can also be guaranteed by the adopted variable substitutions and SDR. We now analyse the rank of G^* for the average-power-constrained scenario with Theorem <ref>. There exists an optimal G^* of Problem 𝐏_2^''' such that rank(G^*) = 1.
The proof can be found in Appendix <ref>. The global optimal solution of Problem 𝐏_2 for the flexible power scenario is guaranteed by the proposed method. The proof of Corollary 2 is similar to that of Corollary 1. 𝐏_2, 𝐏_2^' and 𝐏_2^'' are equivalent to each other, and Theorem <ref> states that 𝐏_2^''' has a rank-one optimal solution. Therefore, the optimal solution of Problem 𝐏_2 can always be found by the proposed solution method.

§ POWER-MINIMIZATION DESIGN

Besides the throughput maximization design, energy-saving design is another essential objective for practical energy-constrained wireless networks, e.g., WSNs, WPANs and WBANs, to extend their lifetime. Therefore, in this section, we investigate the minimum energy consumption design for the cooperative WPCN described in Section II. Our goal is to jointly optimize the beamforming, time allocation and power allocation to minimize the total consumed power of the system while guaranteeing the required information rates of the two groups.

§.§ Problem Formulation

As described in Section III, S_1 transmits signals in the first and second phases, while S_2 transmits signals only in the third phase. Specifically, in the first phase the consumed energy at S_1 is τ_1 P_S_1^(1)‖ω‖^2, where ‖ω‖^2≤1; in the second phase the consumed energy at S_1 is τ_2 P_S_1^(2); and in the third phase the consumed energy at S_2 is τ_3 P_S_2^(3). As a result, the total consumed energy for information transmission is τ_1 P_S_1^(1)‖ω‖^2 + τ_2 P_S_1^(2) + τ_3 P_S_2^(3). Since the time period T of the fading block is normalized to 1, the total consumed power for the transmissions in the fading block is likewise P_ avg = τ_1 P_S_1^(1)‖ω‖^2 + τ_2 P_S_1^(2) + τ_3 P_S_2^(3).
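For concreteness, P_avg is cheap to evaluate for any candidate (τ, 𝐏, ω); a minimal sketch with illustrative values only:

```python
import numpy as np

def total_power(tau, P, w):
    """Total consumed power over one normalized block:
    tau1*P1*||w||^2 + tau2*P2 + tau3*P3 (no transmission in phase 4)."""
    t1, t2, t3, _ = tau
    p1, p2, p3 = P
    return t1 * p1 * np.linalg.norm(w) ** 2 + t2 * p2 + t3 * p3

# Example: a unit-norm beamformer and hypothetical time/power values.
p_avg = total_power([0.4, 0.2, 0.3, 0.1], [2.0, 2.0, 0.2],
                    np.array([0.6, 0.8]))
```

The fourth phase contributes nothing here because R spends only harvested energy, which is already accounted for by the S_1 terms.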
Therefore, the total power minimization problem under the minimal required data rates can be formulated as 𝐏_3: τ, ω, 𝐏minimizeτ_1 P_S_1^(1)‖ω‖^2 + τ_2 P_S_1^(2) + τ_3 P_S_2^(3)subject to (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), which is also not jointly convex w.r.t. τ, ω and 𝐏 due to constraints (<ref>) and (<ref>), so it cannot be solved directly by known solution methods. Therefore, we solve it as follows.

§.§ Problem Transformation and Solution

Using the same variable substitution Ω = ωω^H as in Section III, constraints (<ref>) and (<ref>) of Problem 𝐏_3 can be equivalently replaced by (<ref>) and (<ref>), respectively, and its objective function in (<ref>) can be rewritten as P_ avg = τ_1P_S_1^(1)Tr (Ω) + τ_2 P_S_1^(2) + τ_3 P_S_2^(3). In order to equivalently transform Problem 𝐏_3 into the following Problem 𝐏_3^', Ω must be semi-definite and rank one, as expressed by constraints (<ref>) and (<ref>). Problem 𝐏_3^' is then given by 𝐏_3^': τ, Ω, 𝐏minimizeτ_1P_S_1^(1)Tr (Ω) + τ_2 P_S_1^(2) + τ_3 P_S_2^(3)subject to (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), which is an equivalent transformation of Problem 𝐏_3. Since Problem 𝐏_3^' is still non-convex, we further transform it into the following Problem 𝐏_3^'' by using the variable substitutions defined in (<ref>). To make Problem 𝐏_3^'' an equivalent version of Problem 𝐏_3^', G must also satisfy the semi-definite and rank-one constraints, expressed by (<ref>) and (<ref>). Moreover, with (<ref>), constraints (<ref>), (<ref>) and (<ref>) are replaced by (<ref>), (<ref>) and (<ref>), respectively. The objective function (<ref>) of Problem 𝐏_3^' is transformed into P_ avg = Tr (G) + ϕ_2 + ϕ_3. Also, let ϕ=[ϕ_1ϕ_2ϕ_3ϕ_4]^T. Problem 𝐏_3^'' is then given by 𝐏_3^'': τ, G, ϕminimizeTr (G) + ϕ_2 + ϕ_3 subject to (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), (<ref>).
The objective function of Problem 𝐏_3^'' is convex and all constraints except the rank-one constraint (<ref>) are convex sets. Therefore, by using the SDR method, i.e., dropping (<ref>), Problem 𝐏_3^'' can be relaxed to the following convex problem: 𝐏_3^''': τ, G, ϕminimizeTr (G) + ϕ_2 + ϕ_3 subject to (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), (<ref>). 𝐏_3^''' is a convex problem. The proof is similar to that of Proposition <ref> and is omitted here. As a result, the optimal solution [τ^*,G^*, ϕ^*] of Problem 𝐏_3^''' can be obtained by known solution methods, such as the interior-point method.

§.§ Global Optimum Analysis for the Proposed Solution Method

With the SDR method, only when rank(G^*) = 1 is [τ^*,G^*, ϕ^*] also the optimal solution of Problem 𝐏_3^'', in which case the optimal [τ^*,ω^*,𝐏^*] can be derived accordingly. Therefore, the key question lies in the rank of G^*. Fortunately, a rank-one optimal G^* always exists for Problem 𝐏_3^''', which means the global optimum of the primary Problem 𝐏_3 can also be guaranteed by the adopted variable substitutions and SDR. We now analyse the rank of G^* for the minimum-average-power design with Theorem <ref>. There exists an optimal G^* of Problem 𝐏_3^''' such that rank(G^*) = 1. The proof can be found in Appendix <ref>. The global optimal solution of Problem 𝐏_3 is guaranteed by the proposed method. The proof of Corollary 3 is similar to that of Corollary 1. 𝐏_3, 𝐏_3^' and 𝐏_3^'' are equivalent to each other, and Theorem <ref> states that 𝐏_3^''' has a rank-one optimal solution. Therefore, the optimal solution of Problem 𝐏_3 can always be found by the proposed solution method.

§ NUMERICAL RESULTS & DISCUSSION

In this section, we provide numerical results to discuss the system performance of the optimized cooperative WPCN. For comparison, three benchmark systems are also simulated.
In the first benchmark system, random beamforming with optimized time assignment (RBOT), only the time assignment is optimized and the power of S_1 is randomly allocated to its antennas. In the second benchmark system, optimized beamforming with random time assignment (OBRT), only the beamforming is optimized and a random time assignment is adopted. In the third benchmark system, random beamforming with random time assignment (RBRT), both the beamforming and the time assignment are randomly generated. In the simulations, we set P_S_1 = 2 W, P_S_2 = 0.2 W and N_0 = 10^-6 W. Moreover, the minimal required rates of the two groups are set to r_S_1 = 0.5 bit/s and r_S_2 = 0.2 bit/s, respectively. The distances between the nodes are d_S_1D_1 = 9 m, d_S_1R = 2 m, d_S_2R = 10 m and d_RD_2 = 20 m. A very weak direct link between S_2 and D_2 is assumed, with an equivalent distance of d_S_2D_2 = 100 m. The path loss exponent is 4, the number of antennas is N = 4 and the energy conversion efficiency is η = 0.9. These configurations remain unchanged unless otherwise specified.

§.§ Maximum WSR Performance

In Figure <ref> and Figure <ref>, the system WSR is plotted versus P_S_1 and P_S_2, respectively, where α_1=α_2=1. It can be seen that the WSRs of all five systems increase with P_S_1 and P_S_2. The reason is straightforward: more power brings a higher information rate. It can also be observed that RBOT outperforms OBRT and RBRT, and that RBRT achieves the lowest WSR among all systems. This indicates that in the considered WPCN, the time assignment has a greater impact on the system performance than the beamforming at S_1, which may be explained as follows. The beamforming design affects the system performance through energy transfer, which acts directly on R and D_1.
Since power transfer over wireless channels suffers serious fading, its effect is relatively limited, while the time assignment acts on all source and relay nodes and adjusts the system resources more systematically. Therefore, the time assignment has a much greater impact on the system performance and is more important in enhancing it. Besides, it is shown that, compared with the fixed power constraints, the flexible power configuration can greatly increase the system WSR. The performance gain of the flexible-power-constrained system over the fixed-power-constrained one is yielded by power allocation, which indicates that with power allocation at the two sources the system WSR can be greatly improved. In Figure <ref>, the WSR is plotted versus the number of antennas at S_1. One can see that as the antenna number increases, the system WSR also increases. Moreover, the growth rate of the WSR gradually decreases, which means that increasing the number of antennas enhances the system WSR but cannot increase it without bound. To discuss the effect of the relay position on system performance, we also simulate the WSR for different relay locations. In the simulations, we consider the network topology shown in Figure <ref>, where S_2 is located at the origin of the x-y plane, D_2 is located at (x = 10, y = 0), S_1 is positioned at (x = 10, y = 10) and D_1 is placed at (x = 20, y = 10). The position of R is varied within the region 1 ≤ x ≤ 19 and 0 ≤ y ≤ 9. From the results in Figure <ref> and Figure <ref>, it can be seen that the relay should be positioned closer to S_1 for a higher system WSR; when it is closer to S_2, the system achieves a relatively low WSR.
To show this more clearly, the contour lines associated with Figure <ref> and Figure <ref> are plotted in Figure <ref> and Figure <ref>, respectively, which also show that when the relay is placed closer to S_1 or D_2, a relatively high WSR can be achieved. This result can be applied to relay deployment or relay selection in practical cooperative WPCNs.

§.§ Minimal Power Performance

In Figure <ref> and Figure <ref>, the minimum consumed power of our proposed method and of the benchmark systems, i.e., RBOT, OBRT and RBRT, is plotted versus r_S_1 and r_S_2, respectively. It can be seen that the total consumed power of the four systems increases with r_S_1 and r_S_2, since more power is required to meet the higher data rate requirements of the two groups. It also shows that the minimum consumed power of the four systems increases more quickly with r_S_2 than with r_S_1, which indicates that meeting the data rate requirement of group 2 consumes more power. The reason is that the power available at R is transferred from S_1, and some energy is lost during the energy transfer due to path loss fading. In Figure <ref>, the minimum consumed power of the system is plotted versus the number of antennas at S_1. It can be seen that as the antenna number increases, the total consumed power is reduced. However, the rate of this reduction decreases with the number of antennas, which means that adding antennas can decrease the total consumed power of the system but cannot decrease it without bound. To discuss the effect of the relay position on the total consumed power, in Figure <ref> we simulate the minimum consumed power for different relay locations. In the simulations, we again consider the topology shown in Figure <ref>, with the position of R varied within the region 1 ≤ x ≤ 19 and 0 ≤ y ≤ 9.
From Figure <ref>, it can be seen that the relay should be positioned closer to S_1 or D_2 to achieve a lower total consumed power; when it is closer to S_2, the total consumed power of the system is relatively high. To show this more clearly, the contour lines are plotted in Figure <ref>. These results can also serve as a reference for relay deployment or relay selection in practical cooperative WPCNs.

§ CONCLUSION

This paper studied the optimal resource allocation for a WPCN with group cooperation. We introduced energy cooperation and time sharing between the two groups, so that both groups can fulfill their expected information delivery. To explore the system performance limits, we formulated optimization problems to maximize the system WSR and to minimize its total consumed power by jointly optimizing the time assignment, power allocation and SWIPT beamforming vectors under the available power constraints and the QoS requirements of both groups. We solved the problems by using proper variable substitutions and the SDR method, and theoretically proved that the proposed solution methods guarantee the global optimal solutions. Numerical results were provided to discuss the system performance behaviors. They showed that in such a group-cooperation-aware WPCN, the time assignment has a greater effect on the system performance than the other factors. Besides, the effects of the relay position on system performance were also discussed via simulations. In future systems, some advanced technologies, such as network coding <cit.>, OFDM <cit.> and cognitive sensing, may be integrated into WPCNs to enhance the system performance. Besides, such WPCNs may also be extended to high-speed railway scenarios <cit.> for wider application.
§ THE PROOF OF THEOREM 1

First, we consider the following Problem 𝐐_1, 𝐐_1: 𝐔minimizeTr(𝐔)subject to ϕ_4^* ≤ηP_S_1 Tr (𝐔 𝐡_S_1 R 𝐡_S_1 R^H ), 𝐔 ≽0, R_S_1^* =τ_1^* 𝒞(P_S_1Tr (𝐔 𝐡_S_1 D_1 𝐡_S_1 D_1^H )/N_0 τ_1^*) +τ_2^*𝒞( P_S_1 ‖𝐡_S_1 D_1‖^2/N_0), where τ_1^*, τ_2^*, ϕ_4^* and R_S_1^* are optimal solutions of Problem 𝐏_1^'''. Further, it can be equivalently transformed into 𝐐_1^':𝐔minimize Tr(𝐔)subject to P_S_1Tr (𝐔 𝐡_S_1 R 𝐡_S_1 R^H ) ≥ϕ_4^*/η,P_S_1Tr (𝐔 𝐡_S_1 D_1 𝐡_S_1 D_1^H ) = N_0 τ_1^* β,𝐔 ≽0, where β=( 2^R_S_1^* - τ_2^*𝒞( P_S_1‖𝐡_S_1 D_1‖^2/N_0)/τ_1^* - 1 ). According to Lemma <ref>, Problem 𝐐_1 has an optimal solution 𝐔^* which satisfies rank^2(𝐔^*) ≤ 2. Moreover, since rank(𝐔^*) ≠ 0, rank(𝐔^*) = 1. Let [τ^*,ϝ^*, ϕ_4^*] be the optimal solution of Problem 𝐏_1^'''. It can be inferred that ϝ^* is a feasible solution of Problem 𝐐_1, because [τ^*,ϝ^*, ϕ_4^*] also satisfies the constraints (<ref>) and (<ref>). The optimal value of Problem 𝐐_1, attained by 𝐔^*, is no larger than the value attained by any other feasible solution. Therefore, Tr(𝐔^*) ≤Tr(ϝ^*) ≤τ_1^*. If we construct a new tuple [τ^*,𝐔^*, ϕ_4^*], then it satisfies all constraints of Problem 𝐏_1^''', which means it is a feasible solution of Problem 𝐏_1^'''. Since the objective function of Problem 𝐏_1^''' only depends on τ and ϕ_4, [τ^*,𝐔^*, ϕ_4^*] and [τ^*,ϝ^*, ϕ_4^*] yield the same objective value for Problem 𝐏_1^''', which means that [τ^*,𝐔^*, ϕ_4^*] is also an optimal solution of Problem 𝐏_1^'''. Since we have proved that rank(𝐔^*) = 1, it can be concluded that 𝐏_1^''' has an optimal rank-one solution.

§ THE PROOF OF THEOREM 2

First, we consider the following Problem 𝐐_2, 𝐐_2:𝐔minimizeTr(𝐔)subject to ϕ_4^* ≤ηP_S_1^(1)Tr (𝐔 𝐡_S_1 R 𝐡_S_1 R^H ),R_S_1^* = τ_1^* 𝒞 ( P_S_1^(1)Tr (𝐔 𝐡_S_1 D_1 𝐡_S_1 D_1^H )/N_0 τ_1^*) + τ_2^*𝒞( ϕ_2^* ‖𝐡_S_1 D_1‖^2/N_0 τ_2^*), 𝐔 ≽0, where τ_1^*, τ_2^*, ϕ_4^*, ϕ_2^* and R_S_1^* are optimal solutions of Problem 𝐏_2^'''.
Problem 𝐐_2 is equivalently transformed into Problem 𝐐_2^',

𝐐_2^': minimize_𝐔 Tr(𝐔) subject to P_S_1^(1) Tr(𝐔 𝐡_S_1 R 𝐡_S_1 R^H) ≥ϕ_4^*/η, P_S_1^(1) Tr(𝐔 𝐡_S_1 D_1 𝐡_S_1 D_1^H) = ξ, 𝐔 ≽ 0,

where ξ = N_0 τ_1^* (2^((R_S_1^* - τ_2^* 𝒞(ϕ_2^* 𝐡_S_1 D_1^2/N_0 τ_2^*))/τ_1^*) - 1). According to Lemma <ref>, Problem 𝐐_2 has an optimal solution 𝐔^* which satisfies rank^2(𝐔^*) ≤ 2. Since rank(𝐔^*) ≠ 0, we conclude that rank(𝐔^*) = 1. Let [τ^*, G^*, ϕ^*] be the optimal solution of Problem 𝐏_2^'''. It can be inferred that G^* is a feasible solution of Problem 𝐐_2, because [τ^*, G^*, ϕ^*] also satisfies the constraints (<ref>) and (<ref>). The optimal value of Problem 𝐐_2, attained at 𝐔^*, cannot be larger than the value attained at any other feasible solution. Therefore, Tr(𝐔^*) ≤ Tr(G^*) ≤ϕ_1^*/P_S_1^(1). If we construct a new tuple [τ^*, 𝐔^*, ϕ^*], then it satisfies all constraints of Problem 𝐏_2^''', which means it is a feasible solution of Problem 𝐏_2^'''. Since the objective function of Problem 𝐏_2^''' is only related to τ and ϕ, [τ^*, 𝐔^*, ϕ^*] and [τ^*, G^*, ϕ^*] yield the same value of Problem 𝐏_2^''', which means that [τ^*, 𝐔^*, ϕ^*] is also an optimal solution of Problem 𝐏_2^'''. Since we have proved that rank(𝐔^*) = 1, we conclude that 𝐏_2^''' has an optimal rank-one solution.

§ THE PROOF OF THEOREM 3
First, we apply the substitution Tr(G) = t to Problem 𝐏_3^''' and obtain an equivalent Problem Δ,

Δ: minimize_τ, G, ϕ, t ϕ_3 + t P_S_1^(1) + ϕ_2 subject to (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), (<ref>).

Next, we consider the following Problem 𝐐_3,

𝐐_3: minimize_𝐔 Tr(𝐔) subject to ϕ_4^* ≤η P_S_1^(1) Tr(𝐔 𝐡_S_1 R 𝐡_S_1 R^H), R_S_1^* = τ_1^* 𝒞(P_S_1^(1) Tr(𝐔 𝐡_S_1 D_1 𝐡_S_1 D_1^H)/N_0 τ_1^*) + τ_2^* 𝒞(ϕ_2^* 𝐡_S_1 D_1^2/N_0 τ_2^*), Tr(𝐔) = t^*, 𝐔 ≽ 0,

where τ_1^*, τ_2^*, ϕ_4^*, ϕ_2^*, t^* and R_S_1^* are optimal solutions of Problem Δ.
Problem 𝐐_3 is equivalently transformed into Problem 𝐐_3^',

𝐐_3^': minimize_𝐔 Tr(𝐔) subject to Tr(𝐔 𝐡_S_1 R 𝐡_S_1 R^H) ≥ϕ_4^*/η P_S_1^(1), P_S_1^(1) Tr(𝐔 𝐡_S_1 D_1 𝐡_S_1 D_1^H) = ξ, Tr(𝐔) = t^*, 𝐔 ≽ 0.

According to Lemma <ref>, Problem 𝐐_3 has an optimal solution 𝐔^* which satisfies rank^2(𝐔^*) ≤ 3. Since rank(𝐔^*) ≠ 0, we conclude that rank(𝐔^*) = 1. Let [τ^*, G^*, ϕ^*, t^*] be the optimal solution of Problem Δ. It can be inferred that G^* is a feasible solution of Problem 𝐐_3, because [τ^*, G^*, ϕ^*, t^*] also satisfies the constraints (<ref>), (<ref>) and (<ref>). The optimal value of Problem 𝐐_3, attained at 𝐔^*, cannot be larger than the value attained at any other feasible solution. Therefore, Tr(𝐔^*) ≤ Tr(G^*) ≤ϕ_1^*. If we construct a new tuple [τ^*, 𝐔^*, ϕ^*, t^*], then it satisfies all constraints of Problem Δ, which means it is a feasible solution of Problem Δ. Since the objective function of Problem Δ is only related to τ, ϕ and t, [τ^*, 𝐔^*, ϕ^*, t^*] and [τ^*, G^*, ϕ^*, t^*] yield the same value of Problem Δ, which means that [τ^*, 𝐔^*, ϕ^*, t^*] is also an optimal solution of Problem Δ. Since we have proved that rank(𝐔^*) = 1, we conclude that Δ has an optimal rank-one solution. We also know that Problem 𝐏_3^''' is equivalent to Problem Δ, so 𝐏_3^''' also has an optimal rank-one solution.

RF2 X. Lu, P. Wang, D. Niyato, D. I. Kim, and Z. Han, “Wireless networks with RF energy harvesting: A contemporary survey,” IEEE Commun. Surveys & Tutorials, vol. 17, no. 2, pp. 757-789, 2015. avullers H. J. Visser and R. J. M. Vullers, “RF energy harvesting and transport for wireless sensor network applications: Principles and requirements,” Proc. IEEE, vol. 101, no. 6, pp. 1410-1423, Jun. 2013. Sbiho S. Bi, C. K. Ho, and R. Zhang, “Wireless powered communication: opportunities and challenges,” IEEE Commun. Mag., vol. 53, no. 4, pp. 117-125, Apr. 2015. Suzhibi S. Z. Bi, Y. Zeng, and R. Zhang, “Wireless powered communication networks: an overview,” IEEE Wirel. Commun., vol.
23, no. 2, pp. 10-18, 2016. akjrliu M. L. Ku, W. Li, Y. Chen, and K. J. R. Liu, “Advances in energy harvesting communications: past, present, and future challenges,” IEEE Commun. Surveys & Tutorials, vol. 18, no. 2, pp. 1384-1412, May 2016. aulukus S. Ulukus, A. Yener, E. Erkip, O. Simeone, M. Zorzi, P. Grover, and K. Huang, “Energy harvesting wireless communications: A review of recent advances,” IEEE J. Sel. Areas Commun., vol. 33, no. 3, pp. 360-381, March 2015. akrikidiss I. Krikidis, S. Timotheou, S. Nikolaou, G. Zheng, D. W. K. Ng, and R. Schober, “Simultaneous wireless information and power transfer in modern communication systems,” IEEE Commun. Mag., vol. 52, no. 11, pp. 104-110, Nov. 2014. WPCNch H. Chen, Y. Li, J. L. Rebelatto, B. F. Uchoa-Filho, and B. Vucetic, “Harvest-then-cooperate: wireless-powered cooperative communications,” IEEE Trans. Signal Process., vol. 63, pp. 1700-1711, Apr. 2015. Chen X. Chen, C. Yuen, and Z. Zhang, “Wireless energy and information transfer tradeoff for limited feedback multi-antenna systems with energy beamforming,” IEEE Trans. Veh. Technol., vol. 63, no. 1, pp. 407-412, Jan. 2014. zZHOU Z. Zhou, M. G. Peng, Z. Zhao, W. B. Wang, and R. S. Blum, “Wireless-powered cooperative communications: power-splitting relaying with energy accumulation,” IEEE J. Sel. Areas Commun., vol. 34, no. 4, pp. 969-982, April 2016. Qzyao Q. Z. Yao, A. P. Huang, H. G. Shan, T. Q. S. Quek, and W. Wang, “Delay-aware wireless powered communication networks - energy balancing and optimization,” early access in IEEE Trans. Wirel. Commun., vol. 99, no. 99, 2016. SZbi S. Z. Bi and R. Zhang, “Placement optimization of energy and information access points in wireless powered communication networks,” IEEE Trans. Wirel. Commun., vol. 15, no. 3, pp. 2351-2364, March 2016. Slee S. Lee, L. Liu, and R. Zhang, “Collaborative wireless energy and information transfer in interference channel,” IEEE Trans. Wirel. Commun., vol. 14, no. 1, pp. 545-557, Jan. 2015. Georgiadis A.
Georgiadis, et al., “Rectenna design and optimization using reciprocity theory and harmonic balance analysis for electromagnetic (EM) energy harvesting,” IEEE Antennas Wireless Propag. Lett., vol. 9, pp. 444-446, May 2010. Keee K. Xiong, P. Y. Fan, Y. Lu, and K. B. Letaief, “Energy efficiency with proportional rate fairness in multi-relay OFDM networks,” IEEE J. Sel. Areas Commun., vol. 34, no. 5, pp. 1431-1447, May 2016. Varshney L. R. Varshney, “Transporting information and energy simultaneously,” in Proc. IEEE ISIT, pp. 1612-1616, 2008. Grover P. Grover and A. Sahai, “Shannon meets Tesla: Wireless information and power transfer,” in Proc. IEEE ISIT, pp. 2363-2367, Jun. 2010. Rzhang R. Zhang and C. K. Ho, “MIMO broadcasting for simultaneous wireless information and power transfer,” IEEE Trans. Wireless Commun., vol. 12, no. 5, pp. 1989-2001, May 2013. ke K. Xiong, P. Y. Fan, C. Zhang, and K. B. Letaief, “Wireless information and energy transfer for two-hop non-regenerative MIMO-OFDM relay networks,” IEEE J. Sel. Areas Commun., vol. 33, no. 8, pp. 1595-1611, Aug. 2015. XZhou X. Zhou, R. Zhang, and C. K. Ho, “Wireless information and power transfer: Architecture design and rate-energy tradeoff,” IEEE Trans. Commun., vol. 61, no. 11, pp. 4754-4767, Nov. 2013. SWIPT4 R. Morsi, D. S. Michalopoulos, and R. Schober, “Multiuser scheduling schemes for simultaneous wireless information and power transfer over fading channels,” IEEE Trans. Wirel. Commun., vol. 14, no. 4, pp. 1967-1982, April 2015. SWIPT5 Z. Y. Zong, H. Feng, F. R. Yu, N. Zhao, T. Yang, and B. Hu, “Optimal transceiver design for SWIPT in K-user MIMO interference channels,” IEEE Trans. Wirel. Commun., vol. 15, no. 1, pp. 430-445, Jan. 2016. SWIPT6 G. Y. Amarasuriya, E. G. Larsson, and H. V. Poor, “Wireless information and power transfer in multiway massive MIMO relay networks,” IEEE Trans. Wirel. Commun., vol. 15, no. 6, pp. 3837-3855, June 2016. SWIPT7 X. F. Di, K. Xiong, P. Y. Fan, and H.-C.
Yang, “Simultaneous information and power transfer in cooperative relay networks with rateless codes,” early access in IEEE Trans. Veh. Tech., vol. PP, no. 99, 2016. JuThroughput H. Ju and R. Zhang, “Throughput maximization in wireless powered communication networks,” IEEE Trans. Wirel. Commun., vol. 13, no. 1, pp. 418-428, Jan. 2014. JuUser H. Ju and R. Zhang, “User cooperation in wireless powered communication networks,” in Proc. IEEE GLOBECOM, 2014. YLche Y. L. Che, L. J. Duan, and R. Zhang, “Spatial throughput maximization of wireless powered communication networks,” IEEE J. Sel. Areas Commun., vol. 33, no. 8, pp. 1534-1548, Aug. 2015. QQwu Q. Q. Wu, M. X. Tao, D. W. K. Ng, W. Chen, and R. Schober, “Energy-efficient transmission for wireless powered multiuser communication networks,” in Proc. IEEE ICC, pp. 154-159, 2015. Fzhao F. Zhao, L. Wei, and H. B. Chen, “Optimal time allocation for wireless information and power transfer in wireless powered communication systems,” IEEE Trans. Veh. Tech., vol. 65, no. 3, pp. 1830-1835, March 2016. YLchexu Y. L. Che, J. Xu, L. J. Duan, and R. Zhang, “Multiantenna wireless powered communication with cochannel energy and information transfer,” IEEE Commun. Lett., vol. 19, no. 12, pp. 2266-2269, Dec. 2015. Liu L. Liu, R. Zhang, and K.-C. Chua, “Multi-antenna wireless powered communication with energy beamforming,” IEEE Trans. Commun., vol. 62, no. 12, pp. 4349-4361, Dec. 2014. Sun D. Hwang, D. I. Kim, and T.-J. Lee, “Throughput maximization for multiuser MIMO wireless powered communication networks,” IEEE Trans. Veh. Tech., vol. 65, no. 7, pp. 5743-5748, July 2016. Yyma I. Krikidis, “Relay selection in wireless powered cooperative networks with energy storage,” IEEE J. Sel. Areas Commun., vol. 33, no. 12, 2015. Hju H. Ju and R. Zhang, “Optimal resource allocation in full-duplex wireless-powered communication network,” IEEE Trans. Commun., vol. 62, no. 10, pp. 3528-3540, Oct. 2014. Hjkim H. J. Kim, H. Lee, M. Ahn, H. B. Kong, and I.
Lee, “Joint subcarrier and power allocation methods in full-duplex wireless powered communication networks for OFDM systems,” early access in IEEE Trans. Wirel. Commun., vol. PP, no. 99, 2016. HoonLee H. Lee, K.-J. Lee, H. J. Kim, B. Clerckx, and I. Lee, “Resource allocation techniques for wireless powered communication networks with energy storage constraint,” IEEE Trans. Wirel. Commun., vol. 15, no. 4, pp. 2619-2628, April 2016. Sanket S. S. Kalamkar, J. P. Jeyaraj, A. Banerjee, and K. Rajawat, “Resource allocation and fairness in wireless powered cooperative cognitive radio networks,” IEEE Trans. Commun., vol. 64, no. 8, pp. 3246-3261, Aug. 2016. EE D. W. K. Ng, E. S. Lo, and R. Schober, “Wireless information and power transfer: Energy efficiency optimization in OFDMA systems,” IEEE Trans. Wireless Commun., vol. 12, no. 12, pp. 6352-6370, 2013. DF Y. Liang and V. V. Veeravalli, “Gaussian orthogonal relay channels: optimal resource allocation and capacity,” IEEE Trans. Inform. Theory, vol. 51, no. 9, pp. 3284-3289, Sep. 2005. Huang Y. Huang and D. P. Palomar, “Rank-constrained separable semidefinite programming with applications to optimal beamforming,” IEEE Trans. Signal Process., vol. 58, no. 2, pp. 664-678, 2010. Boyd S. Boyd and L. Vandenberghe, Convex Optimization, Cambridge University Press, 2004. DTse D. Tse and P. Viswanath, Fundamentals of Wireless Communication, Cambridge University Press, 2005. Luo Z. Q. Luo, W.-K. Ma, A. M.-C. So, and Y. Ye, “Semidefinite relaxation of quadratic optimization problems,” IEEE Signal Process. Mag., vol. 27, no. 3, pp. 20-34, 2010. Fan1 H. Wang, P. Fan, and K. B. Letaief, “Maximum flow and network capacity of network coding for ad-hoc networks,” IEEE Trans. Wireless Commun., vol. 6, no. 12, pp. 4193-4198, Dec. 2007. Fan2 D. Zhang, P. Y. Fan, and Z. G. Cao, “A novel narrowband interference canceller for OFDM systems,” in Proc. IEEE WCNC, vol. 3, pp. 1426-1430, 2004. Fan3 Y. Q. Yang, P. Y. Fan, and Y. M.
Huang, “Doppler frequency offsets estimation and diversity reception scheme of high speed railway with multiple antennas on separated carriages,” in Proc. IEEE WCSP, pp. 1-6, 2012.
http://arxiv.org/abs/1703.09008v1
{ "authors": [ "Ke Xiong", "Chen Chen", "Gang Qu", "Pingyi Fan", "Khaled Ben Letaief" ], "categories": [ "cs.IT", "math.IT" ], "primary_category": "cs.IT", "published": "20170327110745", "title": "Group Cooperation with Optimal Resource Allocation in Wireless Powered Communication Networks" }
Center for Astrochemical Studies, Max-Planck-Institut für extraterrestrische Physik, Gießenbachstraße 1, 85748 Garching (Germany) [bizzocchi,lattanzi,jclaas,spezzano,giuliano,prudenzano,cendres,osipila,caselli]@mpe.mpg.de

HOCO^+ is a polar molecule that represents a useful proxy for its parent molecule CO_2, which is not directly observable in the cold interstellar medium. This cation has been detected towards several lines of sight, including massive star-forming regions, protostars, and cold cores. Despite the obvious astrochemical relevance, protonated CO_2 and its deuterated variant, DOCO^+, still lack an accurate spectroscopic characterisation. The aim of this work is to extend the study of the ground-state pure rotational spectra of HOCO^+ and DOCO^+ well into the sub-millimetre region. Ground-state transitions have been recorded in the laboratory using a frequency-modulation absorption spectrometer equipped with a free-space glow-discharge cell. The ions were produced in a low-density, magnetically-confined plasma generated in a suitable gas mixture. The ground-state spectra of HOCO^+ and DOCO^+ have been investigated in the 213–967 GHz frequency range, with the detection of 94 new rotational transitions. Additionally, 46 line positions taken from the literature have been accurately remeasured. The newly-measured lines have significantly enlarged the available data sets for HOCO^+ and DOCO^+, thus enabling the determination of highly accurate rotational and centrifugal distortion parameters. Our analysis showed that all lines with K_a ≥ 3 are perturbed by a ro-vibrational interaction that couples the ground state with the v_5=1 vibrationally-excited state. This resonance has been explicitly treated in the analysis in order to obtain molecular constants with clear physical meaning. The improved sets of spectroscopic parameters provide enhanced lists of very accurate, sub-millimetre rest-frequencies of HOCO^+ and DOCO^+ for astrophysical applications.
These new data challenge a recent tentative identification of DOCO^+ toward a pre-stellar core.

Accurate sub-millimetre rest-frequencies for HOCO^+ and DOCO^+ ions
L. Bizzocchi^1, V. Lattanzi^1, J. Laas^1, S. Spezzano^1, B. M. Giuliano^1, D. Prudenzano^1, C. Endres^1, O. Sipilä^1, and P. Caselli^1
December 30, 2023
=====================================================================================================================

§ INTRODUCTION
Protonated ions were first suggested as proxies for important interstellar molecules by <cit.>, shortly after the first detection of charged polyatomic species in space (HCO^+, <cit.>; N_2H^+, <cit.>). These first pioneering studies demonstrated that ion–molecule reactions must occur in the interstellar medium (ISM), and are capable of generating ionic forms of non-polar molecules, such as N_2, C_2, CO_2, and HCCH. These species are likely to be present to a large extent in the dense gas, but they escape radio-telescope detection owing to the lack of rotational spectra. Carbon dioxide (CO_2) is widespread in space. It is abundant in planetary atmospheres, comets, and especially interstellar ices, where it has been extensively detected by the ISO and Spitzer telescopes towards several lines of sight <cit.>. In the solid phase, the CO_2:H_2O ratio has been observed to vary in the range of 0.15–0.5 in molecular clouds and protostars <cit.>. Since the abundance of CO_2 has been observed to be lower in the gas phase by a factor of 100 <cit.>, its formation is thought to proceed on dust grains, via UV- or cosmic-ray-induced processing of a variety of icy mixtures <cit.>. Nonetheless, speculation on the possible contribution of a gas-phase formation route remains <cit.>, and the difficulties involved in the direct observation of CO_2 in the infrared hinder the clarification of this matter. Protonated carbon dioxide (HOCO^+) provides a useful, indirect way to trace gaseous CO_2 in the ISM.
<cit.> constrained the CO_2 abundance in the L1544 pre-stellar core using an extensive chemical model that considered the following main formation channels: CO_2 + H_3^+ → HOCO^+ + H_2, CO + H_3^+ → HCO^+ + H_2, followed by HCO^+ + OH → HOCO^+ + H. At the steady state, they derived an indirect estimate of the [CO_2]/[CO] ratio from [HOCO^+]/[HCO^+]. The assumptions involved in this approach hold for the external layers of dense cloud cores, where CO freeze-out rates are moderate. The same method, in a simplified form (i.e., neglecting reaction (<ref>)), was also adopted by <cit.> towards Sgr B2(N), and by <cit.> in the Class 0 protostar IRAS 04368+2557 embedded in L1527. The first laboratory identification of protonated carbon dioxide was accomplished by <cit.>, who observed six rotational lines of HOCO^+ in the 350–380 GHz frequency range. This work substantiated the tentative interstellar detection proposed by <cit.>, and an additional, independent confirmation was provided by the laboratory observation of the ν_1 ro-vibrational band of HOCO^+ <cit.>. Later, Bogey and co-workers published two more papers about laboratory studies in which they enlarged the frequency coverage and extended the study to the isotopic species DOCO^+ and HO^13CO^+ <cit.>. More recently, the low-lying J_K_a,K_c = 1_0,0-0_0,0 line of HOCO^+ was measured by <cit.> using a pulsed-jet Fourier-transform microwave (FTMW) spectrometer. Despite these considerable experimental efforts, the spectroscopic characterisation of this astrophysically-relevant ion remains not fully satisfactory. Recordings of the pure rotational spectra are indeed restricted to a rather limited frequency range; only a few lines have been measured in the 3 mm band, and the whole spectral region above 420 GHz is completely unexplored. No b-type transitions were measured for DOCO^+, thus resulting in a poorly determined A rotational constant for this isotopic species. Moreover, anomalously large centrifugal distortion effects are present in both HOCO^+ and DOCO^+.
As a result, the line positions of many astronomically important features cannot be computed to a desirable accuracy. For example, the rest frequencies used by <cit.> to assign the HOCO^+, K_a = 1-0 ladder (J=1-7) observed towards the Galactic centre are affected by 1σ uncertainties of 300–400 kHz, as indicated by the JPL line catalogue <cit.>. Also, the tentative detection of DOCO^+ in L1544 claimed by <cit.> is based on a reference datum that is not fully reliable: the J_K_a,K_c = 5_0,5-4_0,4 line position provided by the JPL catalogue is 100 359.55± 0.035 MHz, but a calculation performed using the “best” literature spectroscopic data <cit.> gives 100 359.14 MHz. The resulting 410 kHz discrepancy (1.2 km s^-1) therefore hints at possible issues affecting the spectral analysis of DOCO^+. With the aim of providing highly-accurate rest frequencies for astrophysical applications, we have performed a comprehensive laboratory investigation of the pure rotational spectra of HOCO^+ and DOCO^+. About fifty new lines were recorded for each isotopologue and, in addition, many literature transitions were accurately remeasured, to either refine their frequency positions or to rule out possible misassignments. The measurements presented in this work also extend towards the THz region, thus considerably enlarging the frequency range with respect to previous studies. The data analysis shows that the HOCO^+ spectrum is affected by a ro-vibrational interaction, coupling the ground state with the low-lying v_5=1 vibrationally-excited state. This resonance is characteristic of many quasi-linear molecules, such as HNCO <cit.>, HNCS <cit.>, and HN_3 <cit.>. In HOCO^+, this perturbation produces non-negligible effects for rotational levels having the quantum number K_a greater than 2. A special treatment was adopted to analyse this spectrum in order to retrieve a set of spectroscopic constants with clear physical meaning.
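The frequency-to-velocity conversions quoted above (e.g. a 410 kHz offset corresponding to 1.2 km s^-1 near 100 GHz) follow from the non-relativistic Doppler relation Δv = c Δν/ν. A minimal check of that arithmetic:

```python
# Radial-velocity equivalent of a small frequency offset: dv = c * dnu / nu.
C_KMS = 299_792.458  # speed of light in km/s

def dv_kms(delta_nu_hz, nu_hz):
    return C_KMS * delta_nu_hz / nu_hz

# the 410 kHz discrepancy on a line near 100 359 MHz discussed in the text
dv = dv_kms(410e3, 100_359.0e6)
print(round(dv, 2))  # -> 1.22
```

The same one-liner converts a catalogue's frequency uncertainty into the velocity-resolution budget of an astronomical observation.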
§ EXPERIMENTS
The spectra described in this work have been recorded with the frequency-modulation (FM) sub-millimetre absorption spectrometer recently developed at the Center for Astrochemical Studies (Max-Planck-Institut für extraterrestrische Physik) in Garching. The instrument is equipped with a negative glow-discharge cell made of a Pyrex tube (3 m long and 5 cm in diameter) containing two stainless-steel, cylindrical hollow electrodes separated by 2 m. The plasma region is cooled by liquid nitrogen circulation and is contained inside a 2 m-long solenoid, which can produce a coaxial magnetic field up to ∼300 G to enhance the discharge negative column <cit.>. The radiation source is an active multiplier chain (Virginia Diodes) that is driven by a synthesizer (Keysight E8257D) operating at centimetre wavelengths. Using a series of frequency multiplication stages, this setup provides continuous coverage across the 82–1100 GHz frequency range. Accurate frequency and phase stabilisation is achieved by providing the synthesizer with a 10 MHz rubidium frequency standard (Stanford Research Systems). A liquid-He-cooled InSb hot-electron bolometer (QMC Instr. Ltd.) is used as a detector. FM is achieved by modulating the carrier signal with a sine wave at a rate of 15 kHz, and then demodulating the detector output at 2f using a digital lock-in amplifier (SRS SR830). In this way, the second derivative of the actual absorption profile is recorded by the computer-controlled acquisition system. HOCO^+ and DOCO^+ were produced by a DC discharge (5–15 mA, ∼ 2 kV) in a 3:1 mixture of CO_2 and H_2/D_2 diluted in a buffer gas of Ar (total pressure ∼15 μbar). As for other protonated ions, cell cooling is critical to enhance the absorption signals. In the present case, the use of a condensable precursor (CO_2) imposes a practical lower limit of ∼ 150 K to the cell wall temperature.
Also, magnetic plasma confinement by a ∼ 200 G field was found to provide the best conditions for the production of the protonated CO_2 ion.

§ RESULTS AND DATA ANALYSIS
Protonated carbon dioxide is a slightly asymmetric prolate rotor (κ = -0.9996), with the a inertial axis being closely aligned to the slightly bent heavy-atom backbone (∡(O–C–O) ≈ 174^∘), and the hydrogen atom lying on the ab plane (∡(H–O–C) ≈ 118^∘) <cit.>. Hence, both a- and b-type transitions are observable. The electric dipole moment was theoretically computed by <cit.>, yielding μ_a = 2.0 D and μ_b = 2.8 D. <cit.> pointed out that these values might be inaccurate and, indeed, our observations of the intensity ratio between a- and b-type lines are not in agreement with the above figures. The latest theoretical studies on HOCO^+ <cit.> do not report estimates of the dipole moments, thus we have performed an ab initio calculation using the CFOUR software package. At the CCSD(T) level of theory <cit.>, and using the cc-pCVQZ basis sets <cit.>, it yielded μ_a = 2.7 D and μ_b = 1.8 D. These values give fair agreement (∼ 30%) with the line intensity ratios observed experimentally. The absorption profiles of the observed transitions were modelled with the line profile analysis code <cit.>, in order to extract their central frequencies with high accuracy. We adopted a modulated Voigt profile, and the complex component of the Fourier-transformed dipole correlation function (i.e. the dispersion term) was also taken into account to model the line asymmetry produced by the parasitic etalon effect of the absorption cell (i.e., background standing waves between non-perfectly transmitting windows). The frequency accuracy is estimated to be in the range of 20–50 kHz, depending on the line width, the achieved signal-to-noise ratio (S/N), and the baseline.
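As a rough illustration of this centre-frequency extraction, the sketch below generates a noiseless 2f-demodulated line, with a Gaussian second derivative standing in for the full modulated Voigt model used in this work, and refines the peak position by parabolic interpolation (a cheap stand-in for the complete least-squares line-shape fit). All numerical values are illustrative only.

```python
import numpy as np

# With 2f demodulation the recorded signal approximates the second
# derivative of the absorption profile.  For a Gaussian core:
def profile_2f(nu, nu0, sigma, amp):
    x = (nu - nu0) / sigma
    return amp * (x**2 - 1.0) * np.exp(-0.5 * x**2)

nu = np.linspace(640152.0, 640162.0, 2001)       # MHz grid, 5 kHz steps
signal = profile_2f(nu, 640157.3012, 0.9, -1.0)  # synthetic noiseless line

# three-point parabolic refinement of the extremum position
i = np.argmax(signal)                            # peak of the 2f profile
y0, y1, y2 = signal[i - 1], signal[i], signal[i + 1]
step = nu[1] - nu[0]
center = nu[i] + 0.5 * step * (y0 - y2) / (y0 - 2.0 * y1 + y2)
```

On noiseless data the refined `center` reproduces the input line position to well below the 20–50 kHz experimental accuracy quoted above; with real baselines and standing waves, a full profile fit is of course required.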
With a magnetic field applied during the plasma discharge, the ions are produced primarily in the negative column, which is a nearly field-free region. Therefore, we assume that the Doppler shift caused by the drift velocity of the absorbing species is negligible <cit.>.

§.§ HOCO^+
The search for new rotational transitions of HOCO^+ was guided by the spectroscopic parameters previously reported by <cit.>, thus their assignment was accomplished in a straightforward way. However, at frequencies above 500 GHz, increasingly larger discrepancies (∼ 500 kHz) were found between observed and predicted line positions. Forty-three new rotational lines were recorded, reaching a maximum J quantum number of 29 and a frequency as high as 967 GHz. These data included 12 b-type transitions belonging to the ^bP_+1,-1, ^bQ_+1,-1, and ^bR_+1,+1 branches. In addition, 18 lines previously reported by <cit.> were remeasured to check/improve their frequency positions. Figure <ref> (left panel) shows the recording of the J_K_a,K_c = 30_0,30-29_0,29 line of HOCO^+ located at ca. 640 GHz, which is the highest frequency reached for a-type transitions. The combined data set of literature and newly-measured lines was fitted employing an S-reduced, asymmetric rotor Hamiltonian in its I^r representation <cit.> using Pickett's CALPGM programme suite <cit.>. Statistical weights (w_i = 1/σ_i^2) were adopted for each i-th datum to account for the different measurement precisions. In our measurements, an estimated uncertainty (σ_i) of 20 kHz is assigned to the a-type lines, whereas 50 kHz is assigned to the weaker b-type transitions, the latter of which derive from comparatively noisier spectra. For the data taken from the literature, we adopted the assumed uncertainties given in the corresponding papers. The complete list of the analysed rotational transitions is provided as electronic supplementary material at the CDS.
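The weighting scheme w_i = 1/σ_i^2 can be illustrated with a deliberately simplified model: for the a-type, K_a = 0 R branch of a near-prolate rotor, ν(J) ≈ 2B(J+1) - 4D_J(J+1)^3, so B and D_J follow from a weighted linear least-squares fit. The constants below are illustrative numbers of roughly the right magnitude, not the values determined in this work:

```python
import numpy as np

# Weighted linear least squares: weights w_i = 1/sigma_i**2 enter the
# normal equations, equivalently rows are scaled by 1/sigma_i.
B_true, DJ_true = 10670.0, 4.6e-3                  # MHz (illustrative)
J = np.arange(5, 31, dtype=float)                  # lower-state J values
nu = 2*B_true*(J + 1) - 4*DJ_true*(J + 1)**3       # synthetic line list
sigma = np.full_like(nu, 0.020)                    # 20 kHz uncertainties

A = np.column_stack([2*(J + 1), -4*(J + 1)**3])    # design matrix
w = 1.0 / sigma                                    # square root of w_i
coef, *_ = np.linalg.lstsq(A * w[:, None], nu * w, rcond=None)
B_fit, DJ_fit = coef
```

With heterogeneous σ_i (e.g. 20 kHz a-type versus 50 kHz b-type lines), the same row scaling automatically down-weights the noisier measurements, which is exactly what the statistical weights in the published fits accomplish.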
An excerpt is reported in Table <ref> for guidance. The analysis clearly showed that the ground-state spectrum of HOCO^+ is perturbed. An anomalously slow convergence of the rotational Hamiltonian had already been noted by <cit.>, such that high-order terms were used to achieve a satisfactory fit of the measured frequencies. Similar perturbations have been observed in many quasi-linear molecules, for which HNCO (isoelectronic with HOCO^+) serves as a case study <cit.>. These anomalies reflect the breakdown of the Watson-type asymmetric rotor Hamiltonian because of the accidental Δ K_a = ± 1 degeneracy occurring between ground-state rotational levels and those of a low-lying, totally-symmetric excited state. We carried out the analysis of the HOCO^+ spectrum following two different approaches. The first, simpler analysis was performed by applying a cut-off at K_a = 2. This excluded from the least-squares fit all the lines affected by the resonance, and allowed us to consider the ground state as isolated. The K_a = 0, 1, 2 lines were fitted using a single-state Hamiltonian, and these results are reported in the first column of Table <ref> (referred to as fit I hereafter).
This analysis provides a compact set of rotational parameters, including four quartic and two sextic centrifugal distortion constants. Having observed only one subset of b-type transitions (K_a=0-1), the D_K constant could not be determined reliably, and was thus constrained to the value derived from previous infrared ν_1 measurements <cit.>. The H_J and H_K sextic constants were also held fixed at their corresponding theoretically computed values <cit.>. In the second stage of the analysis, the interaction coupling the ground and the v_5 = 1 states was explicitly treated, and all the available transitions (K_a up to 5) were included in the least-squares fit. Assumptions for the rotational parameters of the v_5 = 1 state (actually unobserved) were derived from the ground-state constants (A_0, B_0, C_0) and the theoretically computed vibration-rotation α constants from <cit.>. An optimal fit was achieved by adjusting the same ground-state constants of the previous, simplified analysis, plus the resonance parameters η_12^ab and η_12J^ab. The quartic centrifugal distortion constant D_K of the perturbing state was held fixed in the fit, but its value was updated iteratively until a minimum of the root mean square (RMS) deviation was reached. The resulting spectroscopic constants of this analysis are gathered in the second column of Table <ref> (referred to hereafter as fit II). Full details of this analysis are given in Appendix <ref>.

§.§ DOCO^+
A limited portion of the millimetre spectrum of DOCO^+ had been recorded by <cit.>, but the quality of the resulting spectral analysis was not completely adequate. Indeed, in a second study, <cit.> had encountered serious difficulties in fitting new low-K_a transitions together with the bulk of the previously measured data. They had thus excluded all the K_a=0 lines from the final, published analysis.
This represents a severe shortcoming for the astronomical utility of these data, as the lines most likely to be detected in the ISM arise from these low-energy levels. We have thus carefully re-investigated the rotational spectrum of DOCO^+, focusing on low-K_a lines that show prediction errors up to ca. 1 MHz. Extensive spectral searches had to be performed to identify b-type transitions, due to the large error associated with the previous determination of the A rotational constant. Lines belonging to the ^bQ_+1,-1 and ^bR_+1,+1 branches were finally assigned at a distance of about 500 MHz from the position computed from the literature data. Ten b-type transitions were recorded in total. The final data set comprises 85 lines, covering the frequency range from 120 to 672 GHz, with quantum number J spanning values 5–30. The frequency list is provided as electronic supplementary material at the CDS. Figure <ref> (right panel) shows the recording of the J_K_a,K_c = 16_1,15-15_1,14 line of DOCO^+, obtained in about 3.5 min of integration time. Contrary to the parent isotopologue, the ground-state spectrum of DOCO^+ does not show evidence of perturbation. For the deuterated variant, the A rotational constant is smaller (14.4 cm^-1 compared to 26.2 cm^-1 for HOCO^+), thus the Δ K_a = ± 1 quasi-degeneracy between the ground and v_5 = 1 excited states occurs at a higher value of K_a. As a result, the transitions involving low-energy levels are essentially unperturbed. The analysis was therefore carried out by considering the ground vibrational state of DOCO^+ as isolated, and including in the fit transitions with K_a up to 4. A few, possibly perturbed lines involving K_a = 5, 6 showed large deviations and were then excluded from the final data set. We adopted the same weighting scheme described in the previous subsection, where the assumed uncertainties (σ_i) were set to 20 kHz and 50 kHz for a- and b-type lines, respectively. For the 3 mm lines previously recorded by <cit.>, we retained the error reported by these authors.
These fit results are reported in Table <ref>.

§ DISCUSSION
The spectral analyses presented here have been performed on an enlarged and improved data set, and yielded a more precise set of rotational and centrifugal distortion constants for HOCO^+ and DOCO^+. In comparison to the latest literature data <cit.>, the new spectroscopic constants presented in Tables <ref> and <ref> exhibit a significant reduction of their standard uncertainties. The improvement is particularly relevant for DOCO^+ thanks to the identification of the weaker b-type spectrum, such that the precision of its A rotational constant has been enhanced by a factor of 10^4. As a consequence, predictive capabilities at millimetre and sub-millimetre wavelengths have been improved. Regarding HOCO^+, we have presented two different analyses. Fit I (Table <ref>, Column 1) includes only lines originating from levels with K_a ≤ 2. Within the precision of our measurements, these transitions are essentially unaffected by the centrifugal distortion resonance that perturbs the ground-state spectrum (see Appendix <ref>), and they can be treated with a standard asymmetric rotor Hamiltonian. This simple solution provides reliable spectral predictions for all the transitions with upper-state energies E_u/k < 220 K, hence it is perfectly suited to serve as a guide for astronomical searches of HOCO^+ in the cold ISM. In fit II, all the experimental data are considered, and the interaction that couples the ground state with the v_5=1 vibrationally-excited state has been treated explicitly (Table <ref>, Column 2). Here, extensive use of the latest high-level theoretical calculations <cit.> has been made to derive reliable assumptions for those spectroscopic parameters which could not be directly determined from the measurements. Though more complex, fit II implements a more realistic representation of the rotational dynamics of this molecule, and yields a set of spectroscopic constants with clearer physical meaning.
Indeed, the anomalously large centrifugal distortion effects noted in the previous investigations <cit.> have been effectively accounted for. No octic (L_JK) or decic (P_JK) constants were required for the analysis, and the agreement between experimental and ab initio quartic centrifugal distortion constants is reasonably good. On the other hand, the values determined for the H_JK and H_KJ sextic constants should be considered only as effective approximations, since they include spurious resonance contributions not explicitly treated by this analysis. Finally, it is to be noted that fit I and fit II, when limited to K_a ≤ 2 lines, yield spectral predictions that coincide within the 1σ computed uncertainties.
New HOCO^+ and DOCO^+ line catalogues, based on the spectroscopic constants of Table <ref> (fit I only) and Table <ref>, have been computed and are provided as supplementary data available at the CDS. These data listings include: the 1σ uncertainties (calculated taking into account the correlations between the spectroscopic constants), the upper-state energies, the line strength factors S_ijμ_g^2 (g = a, b), and the Einstein A coefficients for spontaneous emission, A_ij = [16π^3 ν^3/(3ϵ_0 hc^3)] × [1/(2J + 1)] × S_ijμ_g^2, where all the quantities are expressed in SI units and the line strengths S_ij are obtained by projecting the squared rotation matrix onto the basis set that diagonalises the rotational Hamiltonian <cit.>. The computation was performed using μ_a = 2.7 D and μ_b = 1.8 D. The compilation contains 57 lines for HOCO^+ and 72 lines for DOCO^+. They are selected in the frequency range 60 GHz ≤ ν ≤ 1 THz, applying the cut-off E_u/k < 100 K. The relative precision of these rest frequencies is at least 4×10^-8, which corresponds to 0.011 km s^-1 (or better) in units of radial velocity.
Our analysis suggests that the line observed in L1544 by <cit.> and tentatively assigned to DOCO^+, J_K_a,K_c = 5_0,5-4_0,4, cannot actually be attributed to this ion.
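For orientation, the Einstein A formula above is straightforward to evaluate numerically. A minimal Python sketch, in SI units throughout; the frequency, upper-state J, and line strength used below are illustrative stand-ins (the actual per-line values are in the CDS catalogues), with S taken to be of order J as for a simple R-branch line:

```python
import math

def einstein_a(nu_hz, s_mu2_debye2, j_upper):
    """Einstein A coefficient (s^-1) for spontaneous emission,
    A = 16 pi^3 nu^3 / (3 eps0 h c^3) * S mu^2 / (2J+1), all in SI units."""
    eps0 = 8.8541878128e-12   # vacuum permittivity, F m^-1
    h = 6.62607015e-34        # Planck constant, J s
    c = 299792458.0           # speed of light, m s^-1
    debye = 3.33564e-30       # 1 debye in C m
    s_mu2_si = s_mu2_debye2 * debye**2
    return (16 * math.pi**3 * nu_hz**3 / (3 * eps0 * h * c**3)
            * s_mu2_si / (2 * j_upper + 1))

# hypothetical example: a J = 5 <- 4 a-type line near 100.36 GHz,
# S mu^2 ~ 5 * (2.7 D)^2  (illustrative, not a catalogue value)
a = einstein_a(100.3595e9, 5 * 2.7**2, 5)
```

The ν^3 scaling in the formula means that moving the same line strength to twice the frequency raises A by a factor of 8, which is why sub-millimetre transitions are intrinsically stronger emitters.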
The observed feature, once red-shifted by the 7.2 km s^-1 V_LSR of L1544, has a rest frequency of 100 359.81 MHz, whereas our predicted line position is 100 359.515 ± 0.002 MHz. This 295 kHz discrepancy corresponds to 0.88 km s^-1, which is over twice the average FWHM of the lines detected in the same source. It is thus likely that the line tentatively detected by <cit.> does not belong to DOCO^+ and, indeed, these authors stated that this assignment needed confirmation from laboratory measurements.
§ CHEMICAL MODEL
Predictions for the HOCO^+ and DOCO^+ abundances in L1544 can be derived from the chemical model developed by <cit.>, which considers a static physical structure and an evolving gas-grain chemistry (as described in <cit.>). Table <ref> shows the model-computed column densities of HOCO^+ and DOCO^+ for different cloud evolutionary ages, averaged over a 30'' beam. <cit.> found a HOCO^+ column density of about 2× 10^11 cm^-2. Comparison with the predicted column densities shown in Table <ref> suggests that the best agreement is found at late evolutionary times. However, since above 3× 10^5 yr the model predicts too much CO freeze-out compared to the observations of <cit.>, we consider 3× 10^5 yr as the best-agreement time. At this stage, the DOCO^+/HOCO^+ ratio is about 10% and the predicted DOCO^+ J = 5_0,5 - 4_0,4 main-beam brightness temperature is 6 mK (assuming an excitation temperature of 8.5 K, as deduced by <cit.> for their observed line).
§ CONCLUSIONS
This laboratory study substantially improves the spectroscopic characterisation of the protonated CO_2 ion. The spectral region sampled by the measurements has been considerably extended into the sub-millimetre regime, reaching maximum frequencies of 967 GHz for HOCO^+ and 672 GHz for DOCO^+. In addition, new recordings were obtained for most of the previously reported lines in order to enhance their measurement precision using a sophisticated line-profile analysis. Our analysis shows that the line tentatively detected towards the pre-stellar core L1544 cannot be attributed to DOCO^+.
The authors wish to thank Mr. Christian Deysenroth for the thorough assistance in the engineering of the molecular spectroscopy laboratory at the MPE/Garching. We are also grateful to Luca Dore for providing the line-profile analysis code.
§ ANALYSIS OF THE GROUND STATE ∼ V_5=1 INTERACTION
The perturbation that affects the ground-state spectrum of HOCO^+ is a common feature of quasi-linear molecules, and reflects a breakdown of the Watson-type asymmetric rotor Hamiltonian caused by accidental rotational level degeneracies. It was first described by <cit.>, and the theory has been treated in particular detail by <cit.>. Due to the small a component of the moment of inertia, the K_a + 1 levels of the ground state can become close in energy to the K_a levels of a low-lying, totally-symmetric vibrationally-excited state. These levels can be coupled by the Ĥ_12 term (i.e. the centrifugal distortion term) of the molecular Hamiltonian, defined as: Ĥ_12 = -∑_s ω_s q_s C_s^ab [Ĵ_b,Ĵ_a]_+, where the [,]_+ notation represents the anti-commutator, the sum runs over the totally-symmetric vibrational states s, and ab refers to the principal-axis plane in which the molecule lies. In HOCO^+, the closest totally-symmetric (A') state is v_5=1, which is located ca. 536 cm^-1 above the ground state <cit.>. Therefore, given the magnitude of the A rotational constant, ≈ 26 cm^-1, its K_a rotational levels are crossed by the K_a+1 ground-state levels at K_a ≈ 10. While the transitions are expected to show the largest deviations around this value of K_a, significant contributions are present already at K_a ≥ 3 as sizeable centrifugal distortion effects. Substitution of s=5 to represent the v_5 normal mode thus reduces Eq. (<ref>) to Ĥ_12 = -ω_5 q_5 C_5^ab [Ĵ_b,Ĵ_a]_+, where ω_5 is the harmonic frequency, and C_5^ab is the dimensionless rotational derivative relative to the principal axes a,b <cit.>.
In practice, the treatment of this resonance is accomplished by fitting the empirical parameter that multiplies the rotational operator Ĵ_aĴ_b + Ĵ_bĴ_a. This parameter has the form: η^ab_5 = (1/√2) ω_5 C^ab_5. A full analysis of this kind of ro-vibrational interaction requires, in principle, measurements of perturbed lines belonging to both interacting states. For our study of HOCO^+, this approach is not feasible due to the lack of experimental data for the perturbing v_5 = 1 state. Nonetheless, one can use the results of the theoretical study of <cit.> to derive reasonable assumptions for the missing data. In this case, the rotational constants were computed from the ground-state values of A_0, B_0, C_0 and the relevant ab initio vibration-rotation interaction constants (α_5^A, α_5^B, α_5^C). For the pure vibrational energy difference between the ground and v_5 = 1 states, we used the estimate of ν_5 that includes quartic anharmonicity. Finally, all five quartic and four sextic centrifugal distortion constants were assumed equal to the theoretically computed equilibrium values. All these parameters were held fixed in the least-squares analysis of the ground-state lines, in which we adjusted the same set of ground-state parameters as in fit I, with the addition of the resonance parameter η_12^ab and its centrifugal correction η_12J^ab. By adopting this scheme, it was possible to reproduce the measured transitions for K_a ≤ 5 without the need for additional high-order centrifugal distortion terms. The fit was finally optimised by adjusting the D_K constant of the v_5=1 state through a step-by-step procedure until the root-mean-square deviation was minimised.
The determination of the value of η_12^ab (see Table <ref>, fit II) also provides an estimate of the magnitude of the dimensionless resonance parameter C_5^ab. Using the theoretical ω_5 value (535.6 cm^-1, <cit.>), we find C_5^ab ∼ 4× 10^-4.
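The relation η_5^ab = (1/√2) ω_5 C_5^ab can be inverted to recover the dimensionless parameter from a fitted interaction constant. A small sketch of that unit-keeping step; the η value below is a hypothetical round number of the right size (the fitted constant itself lives in the table of fit II, not reproduced here), while ω_5 = 535.6 cm^-1 is the theoretical value quoted above:

```python
import math

CM1_TO_MHZ = 29979.2458  # 1 cm^-1 expressed in MHz

def c5ab_from_eta(eta_mhz, omega5_cm1=535.6):
    """Invert eta_5^ab = (1/sqrt(2)) * omega_5 * C_5^ab for the
    dimensionless parameter C_5^ab, converting omega_5 to MHz first."""
    return math.sqrt(2.0) * eta_mhz / (omega5_cm1 * CM1_TO_MHZ)

# hypothetical fitted value eta_5^ab ~ 4.54 GHz (illustrative only)
c5ab = c5ab_from_eta(4540.0)
```

With an η of a few GHz this indeed yields C_5^ab of order 10^-4, consistent with the magnitude quoted in the text.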
This value is in fair agreement with those of the isoelectronic molecules H_2CCO and H_2CNN (∼ 2× 10^-4, <cit.>), and HNCO (∼ 8× 10^-5, <cit.>).
A Sentence Simplification System for Improving Relation Extraction Christina Niklaus, Bernhard Bermeitinger, Siegfried Handschuh, André Freitas Faculty of Computer Science and Mathematics University of Passau Innstr. 41, 94032 Passau, Germany {christina.niklaus, bernhard.bermeitinger, siegfried.handschuh, andre.freitas} @uni-passau.de
=====================================================================================================================================================================================================================================================================================================
In this paper we consider the application of direct methods for solving a sequence of saddle-point systems. Our goal is to design a method that reuses information from one factorization and applies it to the next one. In particular, when computing the pivoted LDL^T factorization we speed up the computation by reusing previously computed pivots and permutations. We develop our method in the framework of dynamical systems optimization. Experiments show that the method improves efficiency over Bunch-Parlett while delivering the same results. Keywords: saddle-point matrix; symmetric indefinite factorization; dynamical systems; sequential quadratic programming
§ INTRODUCTION
Consider a sequence of saddle-point systems arising, for example, in Sequential Quadratic Programming (SQP) <cit.>, that is K_i x_i = y_i, i = 1, 2, …, where K_i ∈ ℝ^n × n, x_i ∈ ℝ^n and y_i ∈ ℝ^n. Here, we assume the matrices K_i to have the same nonzero structure. Based on this, we study how to solve the system K_i x_i = y_i while benefiting from information from previous iterations. Our goal is to develop a strategy that reduces the amount of work spent on searching the matrix for pivots in direct methods.
We describe our strategy in detail in Section <ref> and compare its stability with Bunch-Parlett <cit.>. We develop this strategy in the framework of direct methods for solving saddle-point systems that arise in a certain class of optimization problems from the verification of dynamical systems <cit.>. Here, one seeks a solution of a dynamical system that originates in a given set of initial states and reaches another set of states that are to be avoided, the unsafe states. The saddle-point matrices K_i that arise have a specific structure which we try to exploit. Such optimization problems occur, for example, in the control and verification of hybrid systems <cit.> and in motion planning <cit.>. In addition, the techniques described in this paper apply to general underdetermined boundary value problems for ordinary differential equations <cit.>. Moreover, saddle-point matrices similar to ours arise, for example, in mixed and hybrid finite element discretizations <cit.>, in a class of interior point methods <cit.>, and in time-harmonic Maxwell equations <cit.>. The saddle-point matrices that arise in such optimization problems and applications are sparse, whether one uses the SQP method <cit.> or the interior-point method <cit.>. Hence, direct methods for solving the saddle-point system look promising. However, the naive application of a straightforward LDL^T factorization often fails due to ill-conditioning <cit.> and the singularity of the (1,1) block <cit.>.
Denoting the saddle-point matrix and its factorization by PKP^T = P [ H B; B^T -C ] P^T = LDL^T, in more detail, the main contributions of this paper are: a description of a strategy for selecting and reusing pivots in the symmetric indefinite PKP^T = LDL^T factorization; an analysis of the growth factor in the reduced matrices; a numerical comparison with Bunch-Parlett and Bunch-Kaufman on a series of benchmarks from dynamical systems optimization; a description and exploitation of a specific structure of the saddle-point matrix K and the factor L in the dynamical systems optimization problem <cit.>; and Alg. <ref>, which switches from an unpivoted to a pivoted factorization of the matrix K, balancing the speed with the stability of the computation.
The outline of the paper is as follows. In Section <ref> we briefly review the optimization problem we try to solve <cit.>. In Section <ref> we describe the structure of the saddle-point matrices. In Section <ref> we compute the LDL^T factorization and prove that the factor L has a banded structure of nonzero elements. A discussion of the implementation of the LDL^T factorization follows, and a hybrid method for solving the saddle-point system is described in Section <ref>. Section <ref> contains a detailed description of reusing pivots and its effect on the stability of our method. Furthermore, we include numerical results in Section <ref>. The paper concludes with a summary and a brief discussion of the results in Section <ref>.
§ MOTIVATION
Our motivation originates from the field of computer-aided verification <cit.>. Consider a system of ordinary differential equations ẋ(t) = f(x(t)), x(0) = x_0, where x: ℝ → ℝ^k is a function of the variable t ≥ 0, x_0 ∈ ℝ^k and f: ℝ^k → ℝ^k is continuously differentiable. We denote the flow of the vector field f in (<ref>) by Φ: ℝ × ℝ^k → ℝ^k; for a fixed x_0 one has the solution x(t) of (<ref>), where x(t) = Φ(t, x_0) for t ≥ 0. Denote by Init the set of initial states and by Unsafe the set of states we try to avoid.
Our goal is to find any solution x(t) of (<ref>) such that x_0 ∈ Init and Φ(t_f, x_0) ∈ Unsafe for some t_f > 0, if one exists. In previous work <cit.> we solved this boundary value problem by the multiple-shooting method <cit.>. That is, one computes a solution of (<ref>) from shorter solution segments. Suppose we have N solution segments of (<ref>) whose initial states are denoted by x_0^i and whose lengths by t_i > 0 for 1 ≤ i ≤ N. Then the desired solution to our problem satisfies: x_0^1 ∈ Init, x_0^i+1 = Φ(t_i, x_0^i) for 1 ≤ i ≤ N-1 (the matching conditions), and Φ(t_N, x_0^N) ∈ Unsafe. The boundary conditions x_0^1 ∈ Init and Φ(t_N, x_0^N) ∈ Unsafe can be formulated either as equalities (the points lie on the boundaries of the sets) or as inequalities (the points lie in the interiors of the sets). Either way there are infinitely many solutions <cit.>; therefore, one needs to introduce a regularization. In the paper <cit.> we formulate an objective function of the form ∑ t_i^2, where t_i is the length of the i-th solution segment, which drives the solution segments toward equal lengths. In the end one solves a general nonlinear programming problem with N(k+1) parameters, namely the lengths of the solution segments t_i > 0 and the initial states x_0^i ∈ ℝ^k, 1 ≤ i ≤ N. From now on we denote the number of parameters by n = N(k+1) and the number of constraints by m = (N-1)k + 2.
§ BLOCKS OF SADDLE-POINT MATRIX
The line-search SQP method described in <cit.> requires in each iteration the solution of the saddle-point system (<ref>) [ H B; B^T -C ][ x; y ] = [ f; g ], or Ku = b, where H ∈ ℝ^n × n, B ∈ ℝ^n × m with n ≥ m, and C ∈ ℝ^m × m. The matrices B and C are recomputed and the matrix H is updated block by block by the BFGS scheme. The structure of nonzero elements of the matrix B in (<ref>) remains the same throughout the iterations.
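The matching conditions and the parameter/constraint counts above can be exercised on a toy system with a known flow. The sketch below uses ẋ = −x (an assumption purely for illustration, since its exact flow x_0 e^{−t} is available in closed form in place of a numerically integrated Φ), splits one trajectory into N segments, and checks the matching conditions and the counts n = N(k+1), m = (N−1)k + 2:

```python
import math

def flow(t, x0):
    # exact flow of the toy system x' = -x (componentwise); stands in
    # for the numerically integrated flow Phi(t, x0) of the paper
    return [xi * math.exp(-t) for xi in x0]

k, N = 2, 3                      # statespace dimension, number of segments
t = [0.4, 0.4, 0.4]              # segment lengths t_i > 0
# choose segment initial states along one true trajectory, so the
# matching conditions x0^{i+1} = Phi(t_i, x0^i) hold by construction
x0 = [[1.0, -0.5]]
for i in range(N - 1):
    x0.append(flow(t[i], x0[i]))

# matching residuals should vanish for this consistent choice
res = max(abs(a - b)
          for i in range(N - 1)
          for a, b in zip(x0[i + 1], flow(t[i], x0[i])))

n_params = N * (k + 1)           # n = N(k+1) = segment states + lengths
m_constraints = (N - 1) * k + 2  # m = (N-1)k + 2 = matching + 2 boundary
```

In the actual solver the segments are free variables and the matching residuals only vanish at a solution; here they are zero by construction, which is exactly what the equality constraints of the NLP enforce.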
Since we are interested in the LDL^T factorization of the saddle-point matrix K that arises in the solution of reachability problems for dynamical systems <cit.>, the blocks H and B have the form H = [ H_1; ⋱; H_N ], B = [ v -M_1^T; -v_1^T; I -M_2^T; -v_2^T; I; ⋱; -M_N-1^T; -v_N-1^T; I w; 0 β ]. The matrix H consists of blocks H_i ∈ ℝ^(k+1) × (k+1), 1 ≤ i ≤ N. The matrix C = diag(γ_1, 0, …, 0, γ_2), where γ_i ≥ 0 for i = 1,2. In the matrix B, there are blocks M_i ∈ ℝ^k × k and vectors v_i ∈ ℝ^k, 1 ≤ i ≤ N-1. The matrix I ∈ ℝ^k × k is the identity matrix of order k, the vectors v and w are nonzero and belong to ℝ^k, and β is a nonzero scalar. The first and last columns of the matrix B correspond to the boundary constraints x_0^1 ∈ Init and Φ(t_N, x_0^N) ∈ Unsafe. A banded structure of nonzero elements similar to that of the matrix B arises, for example, when one solves boundary value problems for ordinary differential equations <cit.>. However, we have the additional entries v_i in (<ref>) because we consider the lengths of the time intervals t_i (the lengths of the solution segments) to be parameters and not fixed values. The saddle-point matrix K satisfies the following conditions <cit.>: the matrix H is symmetric positive definite (BFGS approximations of the Hessian), and the matrix B has full column rank. Under these conditions the saddle-point matrix K is nonsingular <cit.>, indefinite <cit.>, and strongly factorizable <cit.>.
§ LDL^T FACTORIZATION
In this section we give formulas for the LDL^T factorization of the saddle-point matrix K with blocks (<ref>). In addition, we describe the structure of nonzero elements of the unit lower triangular factor L for which K = LDL^T. A standard approach <cit.> to solving the linear system Ku = b, where K is symmetric and nonsingular, is Alg. <ref>. Denote by L_H D_H L_H^T the LDL^T factorization of the matrix H and by L_S D_S L_S^T the LDL^T factorization of the Schur complement S = K/H, where S = -C - B^T H^-1 B.
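Before turning to the factorization, the banded structure of B in (<ref>) can be assembled explicitly. A sketch with hypothetical block data M_i, v_i, v, w, β; it only checks the shape n × m and a few representative placements of the −M_i^T, identity, and boundary entries:

```python
def build_B(N, k, M, v, vfirst, w, beta):
    """Assemble the n x m constraint Jacobian B of eq. (5) from
    hypothetical data: M[i] are k x k, v[i] length k, vfirst/w length k."""
    n, m = N * (k + 1), (N - 1) * k + 2
    B = [[0.0] * m for _ in range(n)]
    for r in range(k):                          # first column: Init boundary
        B[r][0] = vfirst[r]
    for i in range(N - 1):                      # matching-condition columns
        col0 = 1 + i * k
        row0 = i * (k + 1)
        for r in range(k):
            for c in range(k):
                B[row0 + r][col0 + c] = -M[i][c][r]   # -M_i^T block
            B[row0 + k][col0 + r] = -v[i][r]          # -v_i^T row
            B[row0 + (k + 1) + r][col0 + r] = 1.0     # identity, next block
    for r in range(k):                          # last column: Unsafe boundary
        B[(N - 1) * (k + 1) + r][m - 1] = w[r]
    B[N * (k + 1) - 1][m - 1] = beta
    return B

N, k = 3, 2
M = [[[1.0, 2.0], [3.0, 4.0]], [[5.0, 6.0], [7.0, 8.0]]]
v = [[0.5, 0.5], [0.25, 0.75]]
B = build_B(N, k, M, v, vfirst=[1.0, 0.0], w=[0.0, 1.0], beta=2.0)
```

Each column group touches at most two consecutive row blocks, which is the banded pattern the factor L inherits in the next section.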
When forming the factor L one first factorizes H = L_H D_H L_H^T and then S = L_S D_S L_S^T. Then [ H B; B^T -C ] = [ L_H 0; B^T L_H^-T D_H^-1 I ][ D_H 0; 0 S ][ L_H^T D_H^-1 L_H^-1 B; 0 I ] = [ L_H 0; B^T L_H^-T D_H^-1 L_S ][ D_H 0; 0 D_S ][ L_H^T D_H^-1 L_H^-1 B; 0 L_S^T ] = LDL^T. The computation of the factor L is summarized in Alg. <ref> and follows the framework of the solution of equilibrium systems <cit.>. The matrix H from (<ref>) is block diagonal, so L_H D_H L_H^T = [ L_H,1; ⋱; L_H,N ][ D_H,1; ⋱; D_H,N ][ L_H,1^T; ⋱; L_H,N^T ], where L_H,i ∈ ℝ^(k+1) × (k+1) is unit lower triangular and D_H,i ∈ ℝ^(k+1) × (k+1) is diagonal, 1 ≤ i ≤ N.
Let K = LDL^T be the LDL^T factorization of the saddle-point matrix (<ref>) with blocks given in (<ref>). Then the (1,1) block L_H of the factor L is block diagonal and its blocks L_H,i ∈ ℝ^(k+1) × (k+1), 1 ≤ i ≤ N, are unit lower triangular. The matrix H (<ref>) is symmetric positive definite; therefore, each H_i, 1 ≤ i ≤ N, is symmetric positive definite. When one carries out the LDL^T factorization of H_i, one obtains factors L_H,i and D_H,i such that H_i = L_H,i D_H,i L_H,i^T for 1 ≤ i ≤ N. The LDL^T factorization of H can be written in matrix form as in (<ref>), where L_H,i is unit lower triangular and D_H,i is diagonal with 1 × 1 pivots on the diagonal, 1 ≤ i ≤ N.
We proceed with the computation of B^T L_H^-T D_H^-1 in the factor L in (<ref>). Let K = LDL^T be the LDL^T factorization of the saddle-point matrix (<ref>) with blocks given in (<ref>). Then B^T L_H^-T D_H^-1 = [ s_1^T; X_1 Y_2; X_2 Y_3; ⋱; X_N-2 Y_N-1; X_N-1 Y_N; s_2^T ], where s_1 = D_H,1^-1 L_H,1^-1 [ v; 0 ] ∈ ℝ^k+1, X_i = [M_i v_i] L_H,i^-T D_H,i^-1 ∈ ℝ^k × (k+1), Y_i = [I 0] L_H,i^-T D_H,i^-1 ∈ ℝ^k × (k+1), s_2 = D_H,N^-1 L_H,N^-1 [ w; β ] ∈ ℝ^k+1, where [v^T 0]^T ∈ ℝ^k+1, [w^T β]^T ∈ ℝ^k+1, [I 0] ∈ ℝ^k × (k+1) and [M_i v_i] ∈ ℝ^k × (k+1) for 1 ≤ i ≤ N-1. The result follows from the direct computation of the matrix product B^T L_H^-T D_H^-1. The matrix B is given in (<ref>) and the factors L_H and D_H in (<ref>).
Since L_H is block diagonal, its inverse L_H^-1 is also block diagonal with blocks of the same size.
To finish the description of the factor L in (<ref>) one needs to compute the Schur complement S = K/H and factorize it. Lemma <ref> shows the block tridiagonal structure of the Schur complement S. The Schur complement S = -C - B^T H^-1 B has the form S = - [ α_1 w_1^T; w_1 V_1 W_1^T; W_1 V_2; ⋱; V_N-2 W_N-2^T; W_N-2 V_N-1 w_2; w_2^T α_2 ], where α_1 = [v^T 0] H_1^-1 [ v; 0 ] + γ_1 ∈ ℝ, w_1 = [M_1^T v_1] H_1^-1 [ v; 0 ] ∈ ℝ^k, V_i = [M_i^T v_i] H_i^-1 [ M_i; v_i^T ] + [I 0] H_i+1^-1 [ I; 0 ] ∈ ℝ^k × k, W_i = [M_i+1^T v_i+1] H_i+1^-1 [ I; 0 ] ∈ ℝ^k × k, w_2 = [I 0] H_N^-1 [ w; β ] ∈ ℝ^k, α_2 = [w^T β] H_N^-1 [ w; β ] + γ_2 ∈ ℝ, with [v^T 0]^T ∈ ℝ^k+1, [w^T β]^T ∈ ℝ^k+1, [I 0] ∈ ℝ^k × (k+1), γ_1 ≥ 0 and γ_2 ≥ 0 from the matrix C, and [M_i^T v_i] ∈ ℝ^k × (k+1) for 1 ≤ i ≤ N-1. The result follows from the matrix product B^T H^-1 B, where H^-1 is a block diagonal matrix and B is given in (<ref>). Since C = diag(γ_1, 0, …, 0, γ_2), it only affects the values α_1 and α_2 in (<ref>).
The Schur complement S is block tridiagonal. We illustrate the process of its LDL^T factorization for N = 3. Then -S = [ α_1 w_1^T; w_1 V_1 W_1^T; W_1 V_2 w_2; w_2^T α_2 ] ⟶ [ 1; w_1/α_1 V̂_1 W_1^T; W_1 V_2 w_2; w_2^T α_2 ], where V̂_1 = V_1 - w_1 w_1^T/α_1. Once we form the LDL^T factorization V̂_1 = L̂_1 D̂_1 L̂_1^T, then [ 1; w_1/α_1 V̂_1 W_1^T; W_1 V_2 w_2; w_2^T α_2 ] ⟶ [ 1; w_1/α_1 L̂_1; W_1 L̂_1^-T D̂_1^-1 V̂_2 w_2; w_2^T α_2 ], where V̂_2 = V_2 - W_1 L̂_1^-T D̂_1^-1 L̂_1^-1 W_1^T = V_2 - W_1 V̂_1^-1 W_1^T. Once more we form the LDL^T factorization V̂_2 = L̂_2 D̂_2 L̂_2^T, then [ 1; w_1/α_1 L̂_1; W_1 L̂_1^-T D̂_1^-1 V̂_2 w_2; w_2^T α_2 ] ⟶ [ 1; w_1/α_1 L̂_1; W_1 L̂_1^-T D̂_1^-1 L̂_2; w_2^T L̂_2^-T D̂_2^-1 α̂_2 ], where α̂_2 = α_2 - w_2^T L̂_2^-T D̂_2^-1 L̂_2^-1 w_2 = α_2 - w_2^T V̂_2^-1 w_2.
Finally, we put α̂_2 into the diagonal matrix D_S and set the last element on the diagonal of L_S to one.
Let S = L_S D_S L_S^T be the LDL^T factorization of the Schur complement S = -C - B^T H^-1 B; then L_S = [ 1; l_1 L̂_1; W_1 L̂_1^-T D_S,1^-1 L̂_2; ⋱; W_N-2 L̂_N-2^-T D_S,N-2^-1 L̂_N-1; l_2^T 1 ], where L̂_i ∈ ℝ^k × k, 1 ≤ i ≤ N-1, are unit lower triangular, l_1 = w_1/α_1 ∈ ℝ^k, l_2 = D̂_N-1^-T L̂_N-1^-1 w_2 ∈ ℝ^k, and the diagonal matrix D_S is such that -D_S = [ d_1; D_S,1; ⋱; D_S,N-1; d_N ], where d_1 ∈ ℝ, d_N ∈ ℝ and D_S,i ∈ ℝ^k × k for 1 ≤ i ≤ N-1. Here d_1 = α_1 from (<ref>) and d_N = α_2 - w_2^T L̂_N-1^-T D_S,N-1^-1 L̂_N-1^-1 w_2. The scalar α_2 and the vector w_2 come from (<ref>).
Lemmas <ref>, <ref> and <ref> describe the block structure of the factor L. Note that both matrices B^T L_H^-T D_H^-1 and L_S are banded and their bandwidth is independent of N. Fig. <ref> illustrates the structures of nonzero entries of the saddle-point matrix K and its factor L. We conclude this section with the observation that, unlike for the block L_H in (<ref>), the blocks V_i and W_i of L_S in (<ref>) cannot be computed in parallel. However, we do not need to keep the whole Schur complement S in memory to obtain L_S, as shown in the discussion preceding Lemma <ref> and summarized in Alg. <ref>.
§ MEETING THE ILL-CONDITIONED H
The accuracy of the solution u of the saddle-point system (<ref>) with blocks from (<ref>) computed by Alg. <ref> depends on the condition number of H <cit.>. Since the matrix K is indefinite, the condition number of K may be much smaller than the condition number of H <cit.>. Therefore, one may try to find a permutation matrix P such that the factorization PKP^T = LDL^T gives better numerical results. One can also find an ill-conditioned (1,1) block in a class of interior point methods, in the modelling of electrical networks, and in finite element heat applications <cit.>.
We also encountered an ill-conditioned (1,1) block of K in dynamical systems optimization <cit.>. Note that there are applications where the (1,1) block is singular, such as the time-harmonic Maxwell equations <cit.> and the linear dynamical systems in the paper <cit.>. One such approach to the computation of PKP^T = LDL^T is Bunch-Parlett <cit.>. However, it leads to a dense factor L, as illustrated in Fig. <ref>. In addition, finding elements for the pivoting strategy is very expensive: one needs to search the matrix for its maximal off-diagonal element. For a nonsingular symmetric matrix A ∈ ℝ^n × n, the pivoting strategy requires between n^3/12 and n^3/6 comparisons <cit.>. On the other hand, the Bunch-Parlett factorization is nearly as stable as the LU factorization with complete pivoting <cit.>. The BFGS approximations of the matrix H (<ref>) approach a singular matrix as the iteration process progresses <cit.>. Therefore, the accuracy of solutions computed by Alg. <ref> deteriorates <cit.>. This happened in several cases when we used the benchmark problems from <cit.>. However, we observed that applying Bunch-Parlett instead of Alg. <ref> did not fail and delivered the desired results. This leads us to the formulation of a hybrid method, Alg. <ref>, which switches at some point from the straightforward LDL^T factorization without pivoting to Bunch-Parlett. Our idea is to use Alg. <ref> as long as possible, until the condition number of D_H gets large. When this behaviour is detected, the method switches to Bunch-Parlett to finish.
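The switch in the hybrid method can be sketched in a few lines. Below, a minimal unpivoted LDL^T in pure Python and the condition-number test on the diagonal factor D_H, using 1/√ε as the threshold; the two H blocks are hypothetical, one benign and one nearly singular:

```python
import sys

def ldl(A):
    """Unpivoted LDL^T of a symmetric matrix (list of lists);
    returns (L, d) with L unit lower triangular and d = diag(D)."""
    n = len(A)
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    d = [0.0] * n
    for j in range(n):
        d[j] = A[j][j] - sum(L[j][s] ** 2 * d[s] for s in range(j))
        for i in range(j + 1, n):
            L[i][j] = (A[i][j] - sum(L[i][s] * L[j][s] * d[s]
                                     for s in range(j))) / d[j]
    return L, d

def needs_pivoting(d_H):
    """Switch rule: cond(D_H) = max|d| / min|d| > 1 / sqrt(machine eps),
    i.e. the unpivoted factorization is no longer trusted."""
    mags = [abs(x) for x in d_H]
    return max(mags) / min(mags) > 1.0 / sys.float_info.epsilon ** 0.5

# hypothetical H blocks: one well conditioned, one nearly singular
H_good = [[4.0, 1.0], [1.0, 3.0]]
H_bad = [[1.0, 1.0], [1.0, 1.0 + 1e-12]]
_, d_good = ldl(H_good)
_, d_bad = ldl(H_bad)
```

Since D_H is diagonal, its condition number is just the ratio of the extreme pivot magnitudes, so the monitoring costs essentially nothing on top of the factorization itself.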
From numerical testing we found a suitable switching rule to be that the condition number of the diagonal matrix D_H is greater than 1/√(ε), where the machine precision is ε ≈ 10^-16. Our goal in the next section is to reduce the amount of work spent on searching for pivots in Bunch-Parlett.
§ UPDATING THE MATRIX P
We do not need to compute a new permutation for the Bunch-Parlett method in every iteration and may try to use and update the one from the previous iteration in Alg. <ref>. Let P_i K_i P_i^T = L_i D_i L_i^T be the Bunch-Parlett factorization of the saddle-point matrix K_i from the i-th iteration in Alg. <ref>. In the very next iteration we try to compute P_i K_i+1 P_i^T = L_i+1 D_i+1 L_i+1^T, where D_i+1 has the same pattern of 1 × 1 and 2 × 2 pivots as the matrix D_i. Such a factorization of K_i+1 may not exist, so we need to monitor the pivots and update the permutation matrix P_i if necessary. In the paper <cit.> there is an algorithm for updating a matrix factorization. However, that method works for matrices of the form K_i+1 = K_i + σ vv^T, where σ ∈ ℝ, v ∈ ℝ^n+m and σ vv^T is a rank-one matrix. This is not the case in the problem we try to solve. We employ the following monitoring strategy as we follow the pattern of pivots of D_i and factorize the permuted saddle-point matrix P_i K_i+1 P_i^T. If the 1 × 1 pivot β satisfies |β| > ε_1, where ε_1 > 0, we use it and leave P_i unchanged. If not, we apply the Bunch-Parlett method to the reduced matrix and update the permutation. In the case of a 2 × 2 pivot β = [ a b; b c ] we accept it if |ac - b^2| > ε_1 and ‖β‖ < ε_2. If these conditions do not hold, we apply the Bunch-Parlett method to the reduced matrix and update the permutation P_i. The 1 × 1 pivot is useful if and only if |β| is large relative to the largest off-diagonal element in absolute value <cit.>. Therefore, we only bound its modulus from below, and the larger it is the better.
However, it may happen that |β| is small compared to the off-diagonal elements, which causes the magnitudes of the elements in the factor L and in the reduced matrix to increase. That is the reason for introducing the second condition on the 2 × 2 pivots, for which we require ‖β‖ < ε_2. Since we divide by the determinant of the 2 × 2 pivot β, we need it to be bounded away from zero <cit.>. With this monitoring strategy one is not restricted to Bunch-Parlett, and we also tried it with the Bunch-Kaufman method.
In the rest of this section, we compare the growth factor of the elements in the reduced matrices for the pivot monitoring strategy with that of Bunch-Parlett <cit.>. We use the same notation as in that paper. Let A ∈ ℝ^n × n be symmetric and nonsingular such that A = [ A_1,1 A_1,2; A_1,2^T A_2,2 ], where A_1,1 ∈ ℝ^j × j, A_1,2 ∈ ℝ^j × (n-j) and A_2,2 ∈ ℝ^(n-j) × (n-j). If A_1,1^-1 exists, then A = [ I_j 0; A_1,2^T A_1,1^-1 I_n-j ][ A_1,1 0; 0 A_2,2 - A_1,2^T A_1,1^-1 A_1,2 ][ I_j A_1,1^-T A_1,2; 0 I_n-j ], where I_j is the identity matrix of order j and I_n-j of order n-j, respectively. The elements of the matrix M = A_1,2^T A_1,1^-1 are called multipliers, and we consider j = 1 or 2. We denote A^(n) = A and let A^(k) be the reduced matrix of order k. Finally, let μ_0 = max_i,j { |a_i,j| ; a_i,j ∈ A } and μ_1 = max_i { |a_i,i| ; a_i,i ∈ A }.
Suppose that the pivot β = A_1,1 is of order 1, that is j = 1. Under our monitoring strategy we accept β as the pivot if |β| > ε_1. Then the reduced matrix is A^(n-1) = A_2,2 - A_1,2^T β^-1 A_1,2. Let ε_1 ∈ (0,1). If |β| > ε_1, then m := max_i { |m_i| ; m_i ∈ M } ≤ μ_0/ε_1, μ_0^(n-1) := max_i,j { |a_i,j| ; a_i,j ∈ A^(n-1) } ≤ ( 1 + μ_0/ε_1 ) μ_0. We follow <cit.> and replace μ_1 with our lower bound ε_1 on the magnitude of the pivot β. We observe in Lemma <ref> that the bound is more pessimistic than the bound in <cit.> for Bunch-Parlett. In particular, the bound m < 1.562 on the multipliers, see <cit.>, may not hold under our monitoring strategy.
The reason behind this is that we take the pivot β even if |β| < μ_0(1+√(17))/8, as long as |β| > ε_1. Therefore, during the factorization the elements in the reduced matrix may grow in magnitude rapidly. Suppose the pivot β is of order 2, that is j = 2. We accept β if |det β| > ε_1 and ‖β‖ < ε_2. Then the matrix M ∈ ℝ^(n-2) × 2. Let ε_1 ∈ (0,1). If |det β| > ε_1, then m := max_i,j { |m_i,j| ; m_i,j ∈ M } ≤ μ_0(μ_0 + μ_1)/ε_1, μ_0^(n-2) := max_i,j { |a_i,j| ; a_i,j ∈ A^(n-2) } ≤ ( 1 + 2μ_0(μ_0 + μ_1)/ε_1 ) μ_0, ε_1 < |det β| ≤ μ_0^2 + μ_1^2. The first and second inequalities follow from <cit.>, where we use the lower bound ε_1 instead of |det β| in the denominator. The last chain of inequalities follows partially from our assumption that |det β| > ε_1 and from <cit.>. Note that we accept the pivot β even if μ_1 > μ_0, as long as |det β| > ε_1 and ‖β‖ < ε_2 hold. Due to this fact we cannot derive the bound μ_0^(k) < (2.57)^n-k μ_0 as in the paper <cit.>. In our case the bound m on the multipliers depends on 1/ε_1; therefore, the overall bound on the growth of elements in the reduced matrix A^(k) will contain powers of 1/ε_1. For this very reason we place the condition ‖β‖ < ε_2 on the 2 × 2 pivot, trying to counter the undesired growth of elements in the reduced matrices.
A less expensive method than Bunch-Parlett is Bunch-Kaufman <cit.>, since it requires only O(n^2) comparisons when searching the matrix for pivots. It is accepted as the algorithm of choice for solving symmetric indefinite linear systems. Similarly to Bunch-Parlett, one can show that the growth of elements in the reduced matrices is bounded <cit.>; however, this time there is no bound on the entries of the factor L <cit.>. Therefore, it gives lower accuracy and can even be unstable <cit.>.
There are ways around this, as described in <cit.>; however, the modified Bunch-Kaufman then requires a higher number of comparisons, lying somewhere between the number of comparisons of Bunch-Kaufman and that of Bunch-Parlett. Our proposed heuristic also does not bound the entries in the factor L; however, it tries to skip the search for pivots. In Section <ref> we compare the pivot monitoring strategy with Bunch-Parlett and Bunch-Kaufman on a series of benchmarks.
§ COMPUTATIONAL EXPERIMENTS
In this section we apply our method (Alg. <ref>) to two benchmark problems from the paper <cit.>. We test Alg. <ref> with and without the monitoring strategy for updating the permutation matrix and compare the results. Both benchmark problems are described in detail in the paper <cit.>. In this paper, we consider only equality constraints; hence, the block C in (<ref>) is the zero matrix. For the reader's convenience we describe the governing differential equations of those dynamical systems. The first benchmark problem <cit.> is a linear dynamical system given by ẋ = [ [ 0 1; -1 0 ]; ⋱; [ 0 1; -1 0 ] ] x, where the statespace dimension is k ∈ ℕ. It was shown in <cit.> that the Hessian matrix is singular, and it was observed that the BFGS approximation of the Hessian approaches a singular matrix. This leads to a saddle-point matrix K that has an ill-conditioned (1,1) block. The second benchmark problem <cit.> is a nonlinear dynamical system such that ẋ = [ [ 0 1; -1 0 ]; ⋱; [ 0 1; -1 0 ] ] x + [ sin(x_k); ⋮; sin(x_1) ], where the statespace dimension is k ∈ ℕ. Similarly to the benchmark above, the BFGS approximations of the Hessian approach a singular matrix <cit.>. In both benchmark problems the sets Init and Unsafe are balls of radius 1/4. The stopping criteria on the norm of the gradient, the norm of the vector of constraints, the maximum number of iterations and the minimal step-size are the same as in the paper <cit.>. For our monitoring strategy in Alg. <ref> we set ε_1 = 10^-3 and ε_2 = 10^6.
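The acceptance tests of the monitoring strategy, with the ε_1 and ε_2 values used in these experiments, can be written down directly. A small sketch; the Frobenius norm below is an assumption for ‖β‖, which the text leaves unspecified:

```python
def accept_1x1(beta, eps1=1e-3):
    """Reuse a previous 1x1 pivot beta only if it is safely nonzero."""
    return abs(beta) > eps1

def accept_2x2(a, b, c, eps1=1e-3, eps2=1e6):
    """Reuse a previous 2x2 pivot [[a, b], [b, c]] only if its determinant
    is bounded away from zero and its entries are not too large
    (Frobenius norm assumed for ||beta||)."""
    det = a * c - b * b
    norm = (a * a + 2 * b * b + c * c) ** 0.5
    return abs(det) > eps1 and norm < eps2

ok1 = accept_1x1(0.5)            # pivot reusable, P stays unchanged
ok2 = accept_1x1(1e-7)           # too small: re-run Bunch-Parlett on the rest
ok3 = accept_2x2(0.0, 1.0, 0.0)  # det = -1, moderate entries: reusable
ok4 = accept_2x2(1.0, 1.0, 1.0)  # singular 2x2 block: rejected
```

Only when a test fails does the method fall back to the pivot search on the remaining reduced matrix, which is what keeps the number of permutation updates in the tables small.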
The results for the first benchmark problem are shown in Tab. <ref> and for the second benchmark problem in Tab. <ref>, respectively. In all instances we were able to find a desired solution from N solution segments for which x_0^1 ∈ and Φ(∑ t_i, x_0^1) ∈.

Both tables <ref> and <ref> have four parts. The first part consists of two columns denoted by k – the state-space dimension of the dynamical system – and by N – the number of solution segments. The second part corresponds to Alg. <ref> with no monitoring of pivots and has three columns: #IT – the number of iterations in Alg. <ref>, #LDL^T – the number of straightforward LDL^T factorizations, and #B-P/B-K – the number of Bunch-Parlett/Bunch-Kaufman factorizations. The third part shows the results of Alg. <ref> with the monitoring of pivots. The meaning of the columns #IT and #LDL^T remains the same; however, the column denoted by "#upd of P" shows how many times the matrix P was computed and updated. Finally, the last column, denoted by R, gives the ratio of the number of Bunch-Parlett/Bunch-Kaufman factorizations to the number of updates of P using the pivot monitoring, that is

R := ⌊ #B-P/B-K / #upd of P ⌋.

One can interpret R in the following way. As R approaches #B-P/B-K, Alg. <ref> reuses pivots almost all the time. In particular, when R = #B-P/B-K, the search for pivots is carried out only once. When R approaches 1, Alg. <ref> searches K for pivots more frequently, and ultimately, when R = 1, it uses the standard Bunch-Parlett/Bunch-Kaufman throughout. It may happen that the sum of the numbers from the #LDL^T and #B-P/B-K columns is greater than the number in the column #IT because of the restarts in the LS-SQP method <cit.>. Whenever there is a single value in a column, both Bunch-Parlett and Bunch-Kaufman yield the same results. If the values in the columns #IT in one row differ, we do not compute R. The same applies when only the LDL^T factorization with no pivoting was used.

We read the results in Tab.
<ref> and <ref> in the following way. For example, the last row of Tab. <ref> reads: the state-space dimension k = 40 and the number of solution segments N = 30 result in an optimization problem with 30(40+1) = 1230 parameters and (30-1)40+2 = 1162 equality constraints. The saddle-point matrix K is then of order 2392; Alg. <ref> with no monitoring of pivots took 59 iterations, during which the matrix K was factorized 9 times by LDL^T with no pivoting and 50 times by Bunch-Parlett/Bunch-Kaufman. When we used Alg. <ref> with the monitoring of pivots, the matrix P was computed once, in the 10th iteration. From that point onwards it was updated twice; therefore, the matrix P was reused 47 times. The ratio R is then ⌊50:3⌋ = 16.

We demonstrated in Tab. <ref> and <ref> that we can switch between a cheap factorization (LDL^T without pivoting) and Bunch-Parlett. In addition, we can minimize the cost of finding the pivots in the Bunch-Parlett method. The monitoring strategy that allows us to reuse the permutation matrices in the Bunch-Parlett method is independent of our application and may be used in other problems as well.

We can compare the results in Tab. <ref> and <ref> with the results in the paper <cit.>, where the preconditioned projected conjugate gradient (PPCG) method <cit.> was used. For the linear benchmark problem, Alg. <ref> required fewer iterations of the LS-SQP. However, in the nonlinear case, the results are inconclusive.

All the computations were carried out in Scilab 5.5.2 <cit.> on a computer with an Intel(R) Xeon(R) CPU X5680 @ 3.33GHz running the operating system CentOS 6.8. We used the built-in ODE solver ode with default settings and the backslash operator for solving systems of linear equations in Alg. <ref>–<ref> and <ref>.

§ CONCLUSION We proposed and tested a pivot monitoring strategy that allows us to reuse and update permutation matrices. Therefore, we reduced the cost of finding the pivots when solving a sequence of saddle-point systems.
Numerical experiments show that this successfully speeds up computation in the frame of dynamical systems optimization. The result is a method that is less stable than the unmodified Bunch-Parlett, as shown in Section <ref>. However, practice has shown this is very often not a big concern. For example, Ashcraft et al. <cit.> observe that for sparse matrices and symmetric factorizations "very often less stable algorithms appear to perform numerically just as well as more reliable algorithms." Another observation is that the unpivoted factorization requires less time and storage than pivoted factorizations with stability guarantees <cit.>. Our experiments confirm those observations.

Another observation that we find useful is the following. When the matrices K_i have a fixed structure of nonzero entries and the matrix P remains unchanged, the structure of nonzero entries in the factor L remains the same. This becomes interesting for memory allocation of sparse matrices. Keeping the same matrix P is a sort of data preprocessing: we can arrange the matrices so that the pivots are on the diagonal during the factorization.
[arXiv:1703.09012v2 [math.OC], 27 March 2017 — Jan Kuřátko, "Factorization of Saddle-point Matrices in Dynamical Systems Optimization—Reusing Pivots"]
Generating physically realizable stellar structures via embedding

S.K. Maurya (e-mail: sunil@unizwa.edu.om), Department of Mathematical and Physical Sciences, College of Arts and Science, University of Nizwa, Nizwa, Sultanate of Oman
M. Govender (e-mail: megandhreng@dut.ac.za), Department of Mathematics, Faculty of Applied Sciences, Durban University of Technology, Durban, South Africa

Received: date / Accepted: date

In this work we present an exact solution of the Einstein-Maxwell field equations describing compact, charged objects within the framework of classical general relativity. Our model is constructed by embedding a four-dimensional spherically symmetric static metric into a five-dimensional flat metric. The source term for the matter field is composed of a perfect fluid distribution with charge. We show that our model obeys all the physical requirements and stability conditions necessary for a realistic stellar model. Our theoretical model approximates observations of neutron stars and pulsars to a very good degree of accuracy.

§ INTRODUCTION Einstein's general theory of relativity has successfully accounted for various observations on cosmological scales as well as in astrophysical contexts<cit.>. The golden age of cosmology has seen the theory fine-tuned to a high degree of accuracy in explaining the Hubble rate, matter content, baryogenesis, nucleosynthesis, as well as the possible origin and subsequent evolution of the Universe. General relativity, as an extension of Newtonian gravity, is especially useful in describing compact objects in which the gravitational fields are very strong.
Some of these objects include neutron stars, pulsars and black holes, where densities are of the order of 10^14 g.cm^-3 or greater. The first exact solution of the Einstein field equations representing a bounded matter distribution was provided by Schwarzschild in 1916<cit.>. This solution described a constant density sphere with the exterior being empty. The constant density Schwarzschild solution was a toy model which cast light on the continuity of the gravitational potentials and the behaviour of the pressure at the surface of the star. However, the interior Schwarzschild solution was noncausal in the sense that it allowed for faster than light propagation velocities within the stellar interior. This prompted the search for physically viable solutions of the Einstein field equations describing realistic stars. A century later, we have thousands of exact solutions of the field equations describing a multitude of stellar objects ranging from perfect fluids, charged bodies, anisotropic matter distributions and higher dimensional stars to exotic matter configurations. Spherical symmetry is the most natural assumption to describe stellar objects. However, there is a wide range of stellar solutions exhibiting departure from sphericity. These solutions include the Kerr metric, which describes the exterior gravitational field of a rotating stellar object<cit.>. In the limit of vanishing angular momentum, the Kerr solution reduces to the exterior Schwarzschild solution. There have also been numerous attempts at extending the Kerr metric to allow for dissipation and rotation<cit.>.

In order to generate exact solutions of the Einstein field equations, researchers have employed a wide range of techniques to close this system of highly nonlinear, coupled partial differential equations.
In the quest to obtain exact solutions describing static compact objects one imposes (i) symmetry requirements such as spherical symmetry, (ii) an equation of state relating the pressure and energy density of the stellar fluid, (iii) the behaviour of the pressure anisotropy or isotropy, (iv) vanishing of the Weyl stresses, (v) spacetime dimensionality, to name just a few<cit.>. These assumptions render the problem of finding exact solutions of the field equations mathematically more tractable. There is no guarantee that the resulting stellar model actually describes a physically realizable stellar structure. In the case of nonstatic, radiating stars, various exact solutions are known in the literature ranging from acceleration-free collapse, Weyl-free collapse, vanishing of shear, collapse from/to an initial/final static configuration as well as anisotropic collapse models.The Randall-Sundrum braneworld scenario has generated an intense interest in higher dimensional gravity and modified theories of gravity<cit.>. Braneworld stars were shown to have nonunique exteriors due to radiative-type stresses arising from 5-dimensional graviton effects emitting from the bulk<cit.>. Govender and Dadhich showed that the gravitational collapse of a star on the brane is accompanied by Weyl radiation<cit.>. They concluded that a collapsing sphere on the brane is enveloped by the brane generalised Vaidya solution which is in turn matched to the Reissner-Nordstrom metric. The mediation of the Vaidya envelope is a unique feature of the braneworld collapse which is not present in standard 4-d Einstein gravity. A recent model by Banerjee et al. showed that Weyl stresses lead naturally to anisotropic pressures within the core of a braneworld gravastar<cit.>. 
In their model the Mazur and Mottola gravastar picture <cit.> is considered within the Randall-Sundrum II type braneworld scenario.

Recently, Dadhich and coworkers demonstrated the universality of the constant density Schwarzschild solution in general Einstein-Lovelock gravity and the universality of the isothermal sphere for pure Lovelock gravity when d ≥ 2N + 2. In a recent paper, Chakraborty and Dadhich ask a pertinent question: "Do we really live in four dimensions or higher?" This question arises from the fact that, although gravity is free to propagate in higher dimensions while all other matter fields are confined to 4-dimensions, gravity cannot distinguish between 4-d Einstein and, in particular, 7-d pure Gauss-Bonnet dynamics<cit.>.

The idea of embedding a purely gravitational field represented by a 4-dimensional Riemannian metric into a flat space of higher dimensions has resurrected interest in so-called class one spacetimes. Karmarkar derived the necessary condition for a general spherically symmetric metric to be of class one<cit.>. In general, if the lowest number of dimensions of a flat space in which a Riemannian space of dimension n can be embedded is n + p, then the Riemannian space is referred to as class p. Class one spacetimes have been successfully utilised to model compact objects such as strange star candidates, neutron stars and pulsars<cit.>. These theoretical models accurately predict and agree with observations regarding the masses, radii, compactness and densities of these objects within experimental error. On the other hand, Momeni et al. <cit.> have obtained realistic compact objects from the Tolman-Oppenheimer-Volkoff equations in f(R) gravity in a different context.

In this work we use the condition arising from embedding a 4-d spherically symmetric static metric in Schwarzschild coordinates into a 5-d flat spacetime to model a charged compact object.
By choosing one of the metric potentials on physical grounds, the embedding condition gives us the second metric potential, which then completely describes the gravitational behaviour of the model. This paper is structured as follows: In Section two we introduce the 4-d Einstein spacetime and provide the necessary and sufficient condition for embedding this spacetime into a 5-d flat spacetime. The Einstein-Maxwell field equations describing the gravitational behaviour of our stellar model are presented in Section three. In Section four we derive an exact solution of the Einstein-Maxwell equations describing a charged, static sphere by making use of the embedding condition derived in the previous section. The boundary conditions required for the smooth matching of the interior of the star to the vacuum Schwarzschild exterior solution are given in Section five. The physical viability of our model is considered in Section six. We conclude with a discussion in Section seven.

§ CLASS ONE CONDITION FOR A SPHERICALLY SYMMETRIC METRIC: The spherically symmetric line element in Schwarzschild co-ordinates (x^i)=(t,r,θ,ϕ) is given as:

ds^2=e^ν(r)dt^2-e^λ(r)dr^2-r^2(dθ^2+sin^2θ dϕ^2)

where λ and ν are functions of the radial coordinate r. To determine the class one condition of the metric above (<ref>), we suppose that the 5-dimensional metric is flat, ds^2=-(dz^1)^2-(dz^2)^2-(dz^3)^2-(dz^4)^2+(dz^5)^2, where

z^1=r sinθ cosϕ, z^2=r sinθ sinϕ, z^3=r cosθ, z^4=√(K) e^ν/2 cosh(t/√(K)), z^5=√(K) e^ν/2 sinh(t/√(K)),

and K is a positive constant. On inserting the components z^1, z^2, z^3, z^4 and z^5 into the metric (<ref>), we obtain

ds^2=-( 1+K e^ν/4 ν'^2 ) dr^2-r^2(dθ^2+sin^2θ dϕ^2)+e^ν(r)dt^2,

Comparing the line element (<ref>) with the line element (<ref>) we get

e^λ=( 1+K e^ν/4 ν'^2 ),

The condition (<ref>) implies that the metric is of class one because we have embedded the 4-dimensional spacetime into a 5-dimensional flat spacetime.
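The embedding can be verified symbolically (an independent check using SymPy, not part of the original derivation): pulling the flat 5-d metric back through the map (z^1,…,z^5) reproduces g_tt = e^ν, g_rr = -(1 + K e^ν ν'^2/4) and the standard angular parts:

```python
import sympy as sp

t, r, th, ph, K = sp.symbols('t r theta phi K', positive=True)
nu = sp.Function('nu')(r)

# The five embedding functions z^1, ..., z^5 from the text.
z = [r*sp.sin(th)*sp.cos(ph),
     r*sp.sin(th)*sp.sin(ph),
     r*sp.cos(th),
     sp.sqrt(K)*sp.exp(nu/2)*sp.cosh(t/sp.sqrt(K)),
     sp.sqrt(K)*sp.exp(nu/2)*sp.sinh(t/sp.sqrt(K))]

coords = [t, r, th, ph]
signs = [-1, -1, -1, -1, +1]   # signature of the flat 5-d metric

# Pull back the flat metric: g_ab = sum_i s_i (dz^i/dx^a)(dz^i/dx^b)
g = sp.zeros(4, 4)
for zi, s in zip(z, signs):
    grad = [sp.diff(zi, x) for x in coords]
    for a in range(4):
        for b in range(4):
            g[a, b] += s*grad[a]*grad[b]

assert sp.simplify(g[0, 0] - sp.exp(nu)) == 0                       # g_tt = e^nu
assert sp.simplify(g[1, 1] + 1 + K*sp.exp(nu)*sp.diff(nu, r)**2/4) == 0  # g_rr
assert sp.simplify(g[2, 2] + r**2) == 0                             # g_theta,theta
assert sp.simplify(g[3, 3] + r**2*sp.sin(th)**2) == 0               # g_phi,phi
```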
We should point out that (<ref>) is equivalent to the condition derived by Karmarkar in terms of the Riemann tensor components, R_1414R_2323 = R_1212R_3434 + R_1224R_1334 where R_2323≠ 0.

§ EINSTEIN-MAXWELL FIELD EQUATIONS The Einstein-Maxwell field equations can be written as

8π ( T_ν^μ+E_ν^μ )= R_ν^μ-1/2 R g_ν^μ,

Here we assume that the matter within the star is a perfect fluid; T_ν^μ and E_ν^μ are then the corresponding energy-momentum tensor and electromagnetic field tensor, respectively, defined by

T^ν_μ =(ρ + p)v^ν v_μ - pδ^ν_μ

E^ν_μ = 1/4π(-F^νγF_μγ + 1/4δ^ν_μ F^γμF_γμ),

where ρ is the energy density, p is the isotropic pressure and v^ν is the fluid four-velocity, given by v^ν = e^-ν(r)/2 δ^ν_4. We are using geometrized units and thus take κ=8π and G=c=1. The components of T^ν_μ and E^ν_μ are as follows: T^1_1=-p, T^2_2=T^3_3=-p, T^4_4=ρ and E^1_1=-E^2_2=-E^3_3=E^4_4=1/8π q^2/r^4.

For the spherically symmetric metric Eq.(<ref>), the Einstein-Maxwell field equations (<ref>) are (<cit.>):

e^-λ-1/r^2+e^-λν'/r=8πp-q^2/r^4

e^-λ(ν”/2+ν'^2/4-ν'λ'/4+ν'-λ'/2r)=8πp + q^2/r^4.

1-e^-λ/r^2+e^-λλ'/r=8π ρ + q^2/r^4,

If we now demand that the radial and transverse stresses are equal at each interior point of the stellar configuration, we obtain from equating Eqs. (<ref>) and (<ref>)

2 q^2/r^4= e^-λ [2 ν”-ν' λ'+ν'^2/4-λ'+ν'/2r]-e^-λ-1/r^2

known as the condition of pressure isotropy. We note that the Einstein-Maxwell equations (<ref>) – (<ref>) can be viewed as describing a fluid with anisotropic pressure. Eqn. (<ref>) can be used as a definition for the electric field intensity. Alternatively, if we specify the nature of the electric field intensity, then Eqn. (<ref>) gives a relation between ν and λ. This is a common approach in solving the Einstein-Maxwell system. In our approach we will utilise the embedding condition given in Eqn.
(<ref>) to obtain an exact solution of the Einstein-Maxwell field equations.

However, if m(r) is the mass function of an electrically charged compact star model, it can be defined in terms of the metric function e^λ and the electric charge q as

m(r)=r/2 [ 1-e^-λ(r)+q^2/r^2 ]

§ NEW CLASS OF GENERAL SOLUTIONS FOR A CHARGED COMPACT STAR: We note that Eqn. (<ref>) relates the metric functions ν and λ, thus reducing the task of finding exact solutions to a single generating function. Now, to determine the mass function m(r) and electric charge q, we assume the following form for the metric function e^ν:

e^ν=B (1-Ar^2)^n,

where A and B are positive constants and n ≤ -1. This form of the metric function is well-motivated and has been utilised by Maurya et al.<cit.> to model charged compact stars arising from the Karmarkar condition. In these models they took n > 2. The parameter n acts as a 'switch' and characterises various well-known models available in the literature. It is clear from Eqn. (<ref>) that n = 0 renders the spacetime flat, which is meaningless in the present context of this paper. It was first pointed out by Tikekar, and more recently by Maurya et al.<cit.>, that the Karmarkar condition together with isotropic pressure (in the case of neutral fluids) admits two solutions: the Schwarzschild interior solution and the Kohler-Chao-Tikekar solution<cit.>. The Schwarzschild solution is conformally flat, i.e., the Weyl tensor vanishes at each interior point of the sphere. The Kohler-Chao-Tikekar solution is not conformally flat and furthermore represents a cosmological solution. This is to say that there is no finite radius at which the pressure vanishes in the Kohler-Chao-Tikekar solution. We regain the Kohler-Chao-Tikekar solution when we set n = 1 in Eqn. (<ref>). Furthermore, we observe from Table 3 that the product nA is approximately constant for large n. We point out that as n → -∞ the metric function tends to ν = Cr^2 + ln B, where we have defined C = - nA.
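Given the ansatz e^ν = B(1-Ar^2)^n, the metric function e^λ that follows from the class one condition can be verified symbolically (an independent SymPy check, with D ≡ n^2ABK as defined in the next section):

```python
import sympy as sp

r, A, B, K, n = sp.symbols('r A B K n', positive=True)

e_nu = B*(1 - A*r**2)**n          # the ansatz e^nu = B(1 - A r^2)^n
nu = sp.log(e_nu)
nu_p = sp.diff(nu, r)             # nu' = -2nAr/(1 - A r^2)

# Class one (embedding) condition: e^lambda = 1 + K e^nu nu'^2 / 4
e_lam = 1 + K*e_nu*nu_p**2/4

# Expected closed form with D = n^2 A B K
D = n**2*A*B*K
expected = 1 + D*A*r**2*(1 - A*r**2)**(n - 2)
assert sp.simplify(e_lam - expected) == 0
```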
This form of the metric function ν has already been used to construct electromagnetic mass (EMMM) models by Maurya et al.<cit.>. These models have the peculiar feature of vanishing electromagnetic field, mass, pressure and density when the parameter n = 0. In addition, the fluid obeys an equation of state of the form p + ρ = 0, implying that the pressure within the bounded configuration is negative. In this study we will consider solutions for n < 0. We should point out that the solution describes a physically viable compact star when n ≤ -2.7; for n > -2.7, causality is violated within the stellar fluid as the sound speed exceeds unity. We have started our physical analysis with n = -6.5 since there are no physically realizable stars between n = -2.7 and n = -6.5, as observed by Gangopadhyay et al.<cit.> In the limiting case n = -2.7 one expects low mass stars.

Now, by plugging our choice of e^ν from Eq.(<ref>) into Eq.(<ref>), we obtain

e^λ=[1+D Ar^2 (1-Ar^2)^n-2], where D=n^2 A B K.

Then, on inserting e^ν and e^λ from Eqs.(<ref>) and (<ref>) respectively into Eqs.(<ref>) and (<ref>), we get:

2q^2/r^4=Ar^2 [n^2 ψ^2-2 n ψ (ψ-D ψ^n)+D ψ^n (-2ψ+D ψ^n)/(ψ^2 + D Ar^2 ψ^n)^2]

m(r)=A^2r^5 [3D^2ψ^2n+n(n-2) ψ^2]+2 D Ar^3 ψ^n+1[1+(n-2) Ar^2]/4 [ψ^2 + D Ar^2 ψ^n]^2

where ψ=(1-Ar^2). The expressions for the pressure and energy density are determined from Eqs. (<ref>) and (<ref>) respectively and can be written as

8π p/A=n^2 Ar^2 ψ^2-D ψ^n [2ψ+D Ar^2 ψ^n]-2 n ψ [(2-Ar^2) ψ+D Ar^2 ψ^n]/2[ψ^2 + D Ar^2 ψ^n]^2

8π ρ/A=D^2 Ar^2ψ^2n-n(n-2) Ar^2 ψ^2-2D ψ^n+1 [-3+(3n-2) Ar^2]/2[ψ^2 + D Ar^2 ψ^n]^2

§ BOUNDARY CONDITIONS In order to fix the constants appearing in our solution, the following conditions must be satisfied: (i) The interior metric must join smoothly with the exterior Reissner-Nordström metric at the boundary of the charged compact star (r=R).
The Reissner-Nordström metric takes the form

ds^2 =(1-2M/r+q^2/r^2)dt^2 -(1-2M/r+q^2/r^2)^-1 dr^2-r^2 (dθ^2 +sin^2θ dϕ^2 ),

where M is a constant representing the total mass of the charged compact star.

(ii) The radial pressure p_r must vanish at the boundary (r = R) of the star (i.e. the continuity of ∂ g_tt/∂ r across the boundary of the star) <cit.>, which is known as the second fundamental form. Vanishing of the radial pressure at the boundary, p_r(R)=0, yields:

D=-2 Ψ^n+1 (1+n AR^2)+√(4 (1+n AR^2)^2 Ψ^2n+2+4 AR^2 Ψ^2n Φ(R))/2 AR^2 (1-AR^2)^2n

where we have defined Ψ = (1-AR^2), Φ(R) = [-4n+10n AR^2+n^2 AR^2-2n A^2R^4 (4+n)+n A^3R^6 (2+n)].

The constant B can be determined by using the condition e^ν(R)=e^-λ(R), which yields:

B=1/(1-AR^2)^n [1+D AR^2 (1-AR^2)^n-2]

The condition e^-λ(R)=1-2M/R+Q^2/R^2 gives the total mass of the charged compact star as:

M/R=A^2R^4 [3D^2Ψ^2n+n(n-2) Ψ^2]+2 D AR^2 Ψ^n+1[1+(n-2) AR^2]/4 [Ψ^2 + D AR^2 Ψ^n]^2

By using the density of the star at the surface, the value of the constant A can be determined from the expression:

A=16π ρ_s[Ψ^2+D AR^2 Ψ^n]^2/D^2 AR^2Ψ^2n-n(n-2) AR^2 Ψ^2-2D Ψ^n+1 [-3+(3n-2) AR^2]

The expressions for the pressure gradient and density gradient, respectively, are:

8π dp/dr=2 A^2 r [2n^3 D ψ^n+1 A^2r^4-n^2 ϕ_1(r) +ϕ_2(r)+ϕ_3(r)]/2 [ψ^2 + D Ar^2 ψ^n]^2

8π dρ/dr=2 A^2 r [-2D n^3 ψ^n+1 A^2r^4+n^2 ϕ_4(r)+ϕ_5(r)+2n ϕ_6(r)]/2 [ψ^2 + D Ar^2 ψ^n]^3

where,

ϕ_1(r)=[-1+Ar^2(2+7Dψ^n)-2Dψ^n A^2r^4 (4-D ψ^n)-(2-D ψ^n)A^3r^6+A^4r^8],
ϕ_2(r)=2 n ψ [(Ar^2-3) ψ^2+D^2 Ar^2 ψ^2n+D ψ^n (4-3Ar^2+A^2r^4)],
ϕ_3(r)=D ψ^n [-6 ψ^2+D^2 ψ^2n Ar^2+D ψ^n (3-4Ar^2+3A^2r^4)],
ϕ_4(r)=[-1+(2+7D ψ^n)Ar^2-2 D ψ^n A^2r^4 (4+3D ψ^n)-(2-D ψ^n)A^3r^6+A^4r^8],
ϕ_5(r)=-D ψ^n [D^2 ψ^2n Ar^2-2 ψ^2 (11+4 Ar^2)+Dψ^n (11-4Ar^2+3 A^2r^4)],
ϕ_6(r)=[ψ^3(1+Ar^2)+D^2 ψ^2n Ar^2 (5+3Ar^2)-D ψ^n (6-3Ar^2-10A^2r^2+7A^3r^6)].

§ PHYSICAL PROPERTIES OF THE SOLUTION: §.§ Regularity (i) Metric functions at the centre, r=0: we observe from Eqs.
(<ref>) and (<ref>) that the metric functions at the centre r=0 assume the values e^ν(0)=B and e^λ(0)=1. This shows that the metric functions are free from singularities and positive at the centre (since B is positive). Also, both metric functions e^ν and e^λ are monotonically increasing functions of r (Fig. 1 & 2). (ii) Pressure at the centre r=0: From Eq.(<ref>), we obtain the pressure p at the centre r=0 as p_0=-A (D+2n)/8π. Since A and D are positive, it follows that the central pressure is positive provided that D < -2n. (iii) Matter density at the centre r=0: We require that the matter density be positive at the central point of the star. Observation of Eq.(<ref>) gives us ρ_0=(3 A D/8π). Since A, B, n^2 and K are positive, D (=A B n^2 K) is positive. This implies that the central density ρ_0 is positive.

§.§ Causality Causality requires that the speed of sound be less than the speed of light within the stellar interior. The speed of sound for the charged fluid sphere should be monotonically decreasing from the centre to the boundary of the star (v=√(dp/dρ) < 1). It is clear from Fig. (6) that the speed of sound is monotonically decreasing away from the centre and less than 1. This implies that our fluid model fulfills the causality requirements.

§.§ Energy conditions The charged fluid sphere should satisfy the following three energy conditions, viz., (i) the null energy condition (NEC), (ii) the weak energy condition (WEC) and (iii) the strong energy condition (SEC). For these energy conditions to be satisfied, the following inequalities must hold simultaneously inside the charged fluid sphere: NEC: ρ+E^2/8π≥ 0, WEC: ρ+p ≥0, SEC: ρ+3p-E^2/4π≥0. It is clear from Fig.
(7) that all three energy conditions are satisfied at each interior point of the configuration.

§.§.§ Equilibrium condition The Tolman-Oppenheimer-Volkoff (TOV) equation <cit.> in the presence of charge is given by

-M_G(ρ+p_r)/r^2 e^λ-ν/2-dp/dr+ σq/r^2 e^λ/2 =0,

where M_G is the effective gravitational mass given by: M_G(r)=1/2r^2 ν^' e^(ν - λ)/2. Plugging the value of M_G(r) into equation (<ref>), we get

-ν'/2(ρ+p_r)-dp/dr+σq/r^2 e^λ/2 =0,

The above equation can be expressed in terms of three different components, gravitational (F_g), hydrostatic (F_h) and electric (F_e), which are defined as:

F_g=-ν'/2(ρ+p_r)=-n A^2 r/4 π [-D ψ^n (1+Ar^2)+ n ψ^2 + 2n D Ar^2 ψ^n]/[ψ^2+D Ar^2 ψ^n]^2

F_h=-dp_r/dr

F_e=A^2 r/4 π [ 2 D n^3 ψ^n+1 A^2r^4 + n^2 F_e1 + D ψ^n F_e2 - 2 n F_e3 ]/2 [ψ^2 + D Ar^2 ψ^n]^3

where,

F_e1=[3-(10+ D ψ^n)Ar^2 +2(6-2 Dψ^n + D^2 ψ^2 n)A^2r^4 - (6-5 D ψ^n) A^3r^6 + A^4r^8],
F_e2=[-6 ψ^2 + D^2 ψ^2 n Ar^2 + D ψ^n(3 - 4 Ar^2 + 3 A^2r^4)],
F_e3=[-(Ar^2-3) ψ^3 + 2 D^2 ψ^2 n A^2r^4 + D ψ^n (-3 + 6 Ar^2 - 5 A^2r^4 + 2 A^3r^6)].

The balancing of these three forces within the stellar interior leads to hydrostatic equilibrium of the fluid sphere.

§.§.§ Stability through adiabatic index The stability of the charged fluid models depends on the adiabatic index γ. Heintzmann and Hillebrandt <cit.> proposed that a neutron star model with an equation of state is stable if γ > 1. This condition is necessary but not sufficient for a stable model (<cit.>). In Newtonian gravitation it is also well known that there is no upper mass limit if the equation of state has an adiabatic index γ > 4/3.

Γ=p+ρ/p dp/dρ

Relation (<ref>) arises from an assumption within the Harrison-Wheeler formalism<cit.>. Chan et al. <cit.>, in their study of the dissipative gravitational collapse of an initially static matter distribution which is perturbed, showed that Eq.
(<ref>) follows from the equation of state of the unperturbed, static matter distribution.

In the case of anisotropic fluids the ratio of the specific heats assumes the following form

Γ > 4/3 - [4/3 p_r - p_t/rp_r']_max

As pointed out earlier, a charged mass distribution can be viewed as an anisotropic system in which the radial and tangential stresses are unequal. In the case of isotropic pressure (p_r = p_t) we regain the classical Newtonian result from Eq. (<ref>). It is clear from Eq. (<ref>) that the instability increases when p_r < p_t and decreases when p_r > p_t.

§.§.§ Harrison-Zeldovich-Novikov static stability criterion In order for the configuration to be stable, the Harrison-Zeldovich-Novikov static stability criterion requires that the mass of the star increase with central density, i.e. dM/dρ_0 > 0; the configuration is unstable if dM/dρ_0 ≤ 0.

M=R ρ_1^2[3D^2Φ^2 n ρ_1 + n (n-2) Φ^2 ρ_1] + 2 D R ρ_1 Φ^n+1 [ 1+(n-2) ρ_1 ]/4 [Φ^2 + D ρ_1 Φ^n]^2

dM/dρ_0=R^3 [ M_1+M_2+M_3+M_4 ]/2 [Φ^2 + D ρ_1 Φ^n]^3

where ρ_1=8π ρ_0/3 D R^2, Φ=1-ρ_1, ρ_0 = central density,

M_1=D Φ^n - (n-2) ρ_1^4[n-D Φ^n + Dn^2 Φ^n],
M_2=ρ_1 [n^2 + n (-2 + D Φ^n) + 2D Φ^n(-2 + D Φ^n)],
M_3=ρ_1^3 [D n^3 Φ^n -D Φ^n(-2 + D Φ^n) -3n(2 + D Φ^n) + n^2(3 - D Φ^n + D^2 Φ^2 n)],
M_4=-ρ_1^2 [-3D Φ^n + n^2(3 + D Φ^n) - n(6 + D Φ^n - 2D^2 Φ^2 n)].

Fig. 10 shows that dM/dρ_0 > 0, thus indicating that our model is stable. We further note that dM/dρ_0 is independent of n for low density stars. It is clear that dM/dρ_0 decreases as |n| increases for high density stars.

§.§ Electric charge Table 1 displays the magnitude of the charge at the centre and boundary for different stars. Also, from Fig. 3, it is clear that the charge profile is zero at the centre (corresponding to a vanishing electric field) and monotonically increasing away from the centre, acquiring a maximum value at the boundary of the star.
We further note that the charge increases with an increase in |n|, with the difference becoming indistinguishable at the stellar surface for very large |n|. We may then interpret n as a 'stabilizing' factor. The variation of charge with n suggests that lower values of |n| imply lower charge, which in turn means smaller electromagnetic repulsion. Fig. 3 shows that larger |n| leads to greater surface charge, thus indicating greater electromagnetic repulsion there. This would mean that the surface layers of the charged body are more stable than the inner core. The onset of collapse of such a body could proceed in an anisotropic manner, or the collapse could lead to the cracking of the object, thus avoiding the formation of a black hole.

As pointed out by Ray et al. <cit.>, the charge can be as high as 10^20 coulombs and hydrostatic equilibrium may still be achieved; however, these equilibrium states are unstable. Bekenstein <cit.> argued that high charge densities will generate very intense electric fields. This will in turn induce pair production within the star, thus destabilizing the core. As an illustration, we calculate the amount of charge at the boundary in coulombs for the compact star 4U1608-52 as follows: (i) 8.90468×10^19 Coulomb for n = -6.5, (ii) 9.52895×10^19 Coulomb for n = -10, (iii) 1.0370×10^20 Coulomb for n = -50, (iv) 1.05471×10^20 Coulomb for n = -500, (v) 1.05645×10^20 Coulomb for n = -5000, (vi) 1.05662×10^20 Coulomb for n = -50000. The amount of charge in coulombs throughout the star can be determined by multiplying every recorded value in Table 1 by a factor of 1.1659×10^20.

§.§ Effective mass and compactness parameter for the charged compact star The maximal absolute limit of the mass-to-radius (M/R) ratio, as proposed by Buchdahl<cit.> for static spherically symmetric isotropic fluid models, is given by 2M/R≤ 8/9.
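As a quick numerical check (an independent sketch, not part of the paper's tables), the surface redshift corresponding to a given compactness u = m_eff/R follows from the standard relation Z_s = (1-2u)^(-1/2) - 1; at the Buchdahl limit 2M/R = 8/9 this gives Z_s = 2:

```python
import math

def surface_redshift(u):
    """Gravitational surface redshift Z_s = (1 - 2u)^(-1/2) - 1,
    valid for compactness u = m_eff/R < 1/2."""
    return 1.0/math.sqrt(1.0 - 2.0*u) - 1.0

# At the Buchdahl bound 2M/R = 8/9, i.e. u = 4/9:
assert abs(surface_redshift(4.0/9.0) - 2.0) < 1e-12
```

This illustrates why the surface redshift cannot be arbitrarily large for an isotropic sphere: the compactness itself is capped by the Buchdahl bound.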
On the other hand, <cit.> proved that for a compact charged fluid sphere there is a lower bound for the mass-radius ratio,

Q^2(18 R^2+ Q^2)/2R^2(12R^2+Q^2) ≤ M/R,

for the constraint Q < M. However, this upper bound of the mass-radius ratio for a charged compact star was generalized by <cit.>, who proved that

M/R≤[4R^2+3Q^2/9R^2 +2/9R √(R^2+3Q^2)].

Eqs. <ref> and <ref> imply that

Q^2(18 R^2+ Q^2)/2R^2(12R^2+Q^2) ≤ M/R ≤ [4R^2+3Q^2/9R^2 +2/9R √(R^2+3Q^2)]

The effective mass of the charged fluid sphere can be determined as:

m_eff=4π∫^R_0(ρ+E^2/8π) r^2 dr=R/2[1-e^-λ(R)]

where e^-λ is given by equation (<ref>), and the compactness u(r) is defined as:

u(R)=m_eff(R)/R=1/2[1-e^-λ(R)]

§.§ Redshift The maximum possible surface redshift for a bounded configuration with isotropic pressure is Z_s = 4.77. Bowers and Liang showed that this upper bound can be exceeded in the presence of pressure anisotropy<cit.>. When the anisotropy parameter is positive (p_t > p_r), the surface redshift is greater than its isotropic counterpart. Haensel et al. <cit.> showed that for strange quark stars the surface redshift is higher in low mass stars, with the difference being as high as 30% for a 0.5 solar mass star and 15% for a 1.4 solar mass star. The gravitational surface redshift (Z_s) is given as:

Z_s= (1-2u)^-1/2 -1=√(1+D AR^2 (1-AR^2)^n-2)-1,

From Eq.(<ref>), we note that the surface redshift depends upon the compactness u, which implies that the surface redshift for any star cannot be arbitrarily large, because the compactness u satisfies the Buchdahl maximal allowable mass-radius ratio. However, the surface redshift will increase with an increase of the compactness u. Also, from Table 5 we observe that the surface redshift decreases with an increase in |n|.

§.§ Equation of state An equation of state (EoS), p = p(ρ), relates the pressure and the density of the stellar fluid and is an important indicator of the nature of the matter making up the configuration.
The MIT bag model, arising from observations in fundamental particle physics, relates the pressure to the density of the star via a linear relation of the form p = αρ - B, where B is the Bag constant. This equation of state has been successfully used to model compact objects in general relativity, ranging from neutron stars through to strange star candidates. A recent model of a radiating star, in which the collapse proceeds from an initial static configuration obeying a linear equation of state of the form p_r = α (ρ - ρ_s), where p_r is the radial pressure, ρ_s is the surface energy density and α is the EoS parameter, showed that the variation of α affects the temperature profile of the collapsing body. Fig. 9 shows the variation of the ratio p/ρ with r/R. We note that the pressure is less than the density at each interior point of the configuration. This ratio is also positive everywhere inside the star. As |n| increases, the ratio p/ρ decreases, with the differences tending to zero towards the surface layers of the star.

§ DISCUSSION In this paper we attempted to obtain electromagnetic mass models (EMMM), which were first addressed by Lorentz. The Lorentz electromagnetic mass models had the distinguishing feature that vanishing charge density is accompanied by the simultaneous vanishing of all other thermodynamical quantities. In addition, the equation of state of these models is of the form ρ + p = 0, giving rise to negative pressure. The solution obtained in this work relaxes this particular equation of state, allowing for positive pressure. The gravitational and thermodynamical behaviour of our model is controlled by a parameter n. Switching off n results in the vanishing of the charge density and all other thermodynamical quantities such as the density and pressure. We use a novel approach of embedding a spherically symmetric, static metric in Schwarzschild coordinates into a five-dimensional flat metric.
This embedding is equivalent to the Karmarkar condition: the requirement for a spherically symmetric metric to be of embedding class 1. The condition obtained from this embedding relates the gravitational potentials, thus reducing the problem of finding an exact solution of the Einstein-Maxwell field equations to a single generating function. By specifying one of the gravitational potentials on physical grounds, we obtain the second potential, which completely describes the gravitational behaviour of the compact object. The junction conditions required for the smooth matching of the interior spacetime to the exterior Reissner-Nordström spacetime fix the constants in our solution and determine the mass contained within the charged sphere. Our model displays many salient features which bode well for describing a compact, self-gravitating object. Graphical analysis of the solution shows that the density and pressure are monotonically decreasing functions of the radial coordinate. The pressure vanishes at some finite radius. This indicates that our solution can be utilised to describe a bounded object, unlike the Kohler-Chao solution which arises from the imposition of the Karmarkar condition together with pressure isotropy. Causality is obeyed at each interior point of the configuration. Stability analysis via the adiabatic index and the Harrison-Zeldovich-Novikov static stability criterion indicates that our model is stable. Analysis of the variation of charge with the radial coordinate reveals an interesting characteristic of our model. The charge increases with the parameter |n|. This increase is largest towards the surface layers of the charged object, with the profiles becoming indistinguishable at the surface for very large |n|. This implies that the surface layers are more stable (larger repulsive forces here) than the inner core layers.
This 'differentiated' stability may lead to anisotropic collapse or the subsequent cracking of the sphere should this object start to collapse. This phenomenon has not been discussed elsewhere in the literature. The influence of the parameter n is clearly drawn out in Tables 1-6. Table 2 shows that our theoretical model describes compact objects to a very good degree of accuracy with regards to observed masses and radii of stars. Tables 3 to 5 clearly show that variations in the model parameters stabilise for very large n. Table 6 illustrates the influence of the parameter n on the central density, surface density, central pressure and surface redshift. It is clear that for very large n variations in these physical quantities tend to zero. This feature indicates that the parameter n can be viewed as a 'building' constant, that is to say, an increase in n is accompanied by an increase in mass, radius and charge which builds up the star from r = 0 through to the surface. In this work we have utilised n < 0; the case n ≥ 0 was studied by <cit.>. Future work has been initiated to consider the case of general n.
tipler1980 F. J. Tipler, C. J. S. Clarke and G. F. R. Ellis, General Relativity and Gravitation, Vol. 2, ed. A. Held (Plenum, New York, 1980).
shap1983 S. L. Shapiro and S. A. Teukolsky, Black Holes, White Dwarfs and Neutron Stars (Wiley-Interscience, New York, 1983).
Sch1916 K. Schwarzschild, Sitzungsber. Dtsch. Akad. Wiss. Berl. Math. Phys. Tech., 424 (1916).
kerr1963 R. P. Kerr, Phys. Rev. Lett. 11, 237 (1963).
vaidya1 P. C. Vaidya and L. K. Patel, Phys. Rev. D 7, 3590 (1973).
Car M. Carmeli and M. Kaye, Ann. Phys. (N.Y.) 103, 97 (1977).
kramer1 D. Kramer and U. Hähner, Class. Quantum Gravit. 12, 2287 (1995).
bowers R. L. Bowers and E. P. T. Liang, Astrophys. J. 188, 657 (1974).
sharmaeos R. Sharma and S. D. Maharaj, MNRAS 375, 1265 (2007).
hereos L. Herrera and W. Barreto, Phys. Rev. D 88, 084022 (2013).
bharaniso P. Bhar, Astrophys. Space Sci. 359, 41 (2015).
sharmadark F. Rahaman, R. Maulick, A. K. Yadav, S. Ray and R. Sharma, Gen. Relativ. Gravit. 44, 107 (2012).
farookn P. Bhar, F. Rahaman, S. Ray and V. Chatterjee, Eur. Phys. J. C 75, 190 (2015).
govegb P. Bhar, M. Govender and R. Sharma, Eur. Phys. J. C 77, 109 (2017).
Randal L. Randall and R. Sundrum, Phys. Rev. Lett. 83, 4690 (1999).
germani C. Germani and R. Maartens, Phys. Rev. D 64, 124010 (2001).
govbrane M. Govender and N. Dadhich, Phys. Lett. B 538, 233 (2002).
bangrav A. Banerjee, F. Rahaman, S. Islam and M. Govender, Eur. Phys. J. C 76, 34 (2016).
mot P. O. Mazur and E. Mottola, Proc. Nat. Acad. Sci. 101, 9545 (2004).
dad1 N. Dadhich, A. Molina and A. Khugaev, Phys. Rev. D 92, 041302 (2015).
dad2 S. Chakraborty and N. Dadhich, Do we really live in four or in higher dimensions, arXiv:1605.01961 (2016).
kar48 K. R. Karmarkar, Proc. Indian Acad. Sci. A 27, 56 (1948).
k1 S. K. Maurya et al., Eur. Phys. J. A 52, 191 (2016).
k2 S. K. Maurya et al., Eur. Phys. J. C 75, 225 (2015).
k3 S. K. Maurya, Y. K. Gupta, B. Dayanandan and S. Ray, Eur. Phys. J. C 76, 266 (2016).
k4 K. N. Singh et al., Eur. Phys. J. C 77, 100 (2017).
k5 K. N. Singh et al., Chin. Phys. C 41, 015103 (2017).
k6 S. K. Maurya et al., Astrophys. Space Sci. 361, 351 (2016).
k7 K. N. Singh and N. Pant, Astrophys. Space Sci. 361, 177 (2016).
Momeni1 D. Momeni, G. Abbas, S. Qaisar, Zaid Zaz and R. Myrzakulov, arXiv:1611.03727 (2016).
Momeni2 D. Momeni, M. Faizal, K. Myrzakulov and R. Myrzakulov, Eur. Phys. J. C 77, 37 (2017).
Momeni3 D. Momeni et al., Int. J. Mod. Phys. A 30, 1550093 (2015).
Dionysiou D. D. Dionysiou, Astrophys. Space Sci. 85, 331 (1982).
maurya11 S. K. Maurya et al., Eur. Phys. J. C 77, 45 (2017).
maurya S. K. Maurya et al., Eur. Phys. J. C 75, 389 (2015).
GA T. Gangopadhyay, S. Ray, X.-D. Li, J. Dey and M. Dey, Mon. Not. R. Astron. Soc. 431, 3216 (2013).
Tolman1939 R. C. Tolman, Phys. Rev. 55, 364 (1939).
Oppenheimer1939 J. R. Oppenheimer and G. M. Volkoff, Phys. Rev. 55, 374 (1939).
pandey S. N. Pandey and S. P. Sharma, Gen. Relativ. Gravit. 14, 113 (1981).
schw K. Schwarzschild, Sitz. Deut. Akad. Wiss. Math. Phys. Berlin 24, 424 (1916).
kc M. Kohler and K. L. Chao, Z. Naturforsch. 20, 1537 (1965).
tik R. R. Tikekar, Current Sci. 39, 460 (1970).
Tupper1983 B. O. J. Tupper, Gen. Relativ. Gravit. 15, 47 (1983).
Heintzmann1975 H. Heintzmann and W. Hillebrandt, Astron. Astrophys. 38, 51 (1975).
wheel1 B. K. Harrison and J. A. Wheeler, Onzieme Conseil de Physique Solvay: la Structure et l'Evolution de l'Univers (Editions Stoops, Brussels, 1959).
channy R. Chan, N. O. Santos, S. Kichenassamy and G. Le Denmat, MNRAS 239, 91 (1989).
rayb S. Ray, A. L. Espindola, M. Malheiro, J. P. S. Lemos and V. T. Zanchin, Phys. Rev. D 68, 084004 (2003).
bek J. D. Bekenstein, Phys. Rev. D 4, 2185 (1971).
And H. Andréasson, Commun. Math. Phys. 288, 715 (2009).
strange P. Haensel, J. L. Zdunik and R. Schaefer, Astron. Astrophys. 160, 121 (1986).
Buchdahl1959 H. A. Buchdahl, Phys. Rev. 116, 1027 (1959).
Misner1964 C. W. Misner and D. H. Sharp, Phys. Rev. B 136, 571 (1964).
Boehmer2006 C. G. Böhmer and T. Harko, Class. Quantum Gravit. 23, 6479 (2006).
Germani Maartens2001 C. Germani and R. Maartens, Phys. Rev. D 64, 124010 (2001).
Young, active radio stars in the AB Doradus moving group R. Azulay^1,2 (Guest student of the International Max Planck Research School for Astronomy and Astrophysics at the Universities of Bonn and Cologne), J. C. Guirado^3,1, J. M. Marcaide^1, I. Martí-Vidal^4, E. Ros^2,1,3, E. Tognelli^5,6, F. Hormuth^7, J. L. Ortiz^8 Accepted 2017 xxxx. Received 2017 xxxx; in original form 2017 xxxx
==========================================================================
We address the problem of magnetic field dissipation in neutron star cores, focusing on the role of neutron superfluidity. Contrary to the results in the literature, we show that in finite-temperature superfluid matter composed of neutrons, protons, and electrons, the magnetic field dissipates exclusively due to Ohmic losses and non-equilibrium beta-processes, and only an admixture of muons restores (to some extent) the role of particle relative motion for the field dissipation. The reason for this discrepancy is discussed. stars: neutron – stars: interiors – stars: magnetic field
§ INTRODUCTION Since the pioneering works on magnetic field dissipation in neutron star (NS) cores by <cit.>, significant progress has been made (e.g., ), but many questions still remain unanswered. One of these questions regards the effects of nucleon superfluidity and superconductivity. How do they affect the magnetic field evolution? In this short note we partly address this question by analyzing the effects of neutron superfluidity. As we argue, this analysis is more accurate (and simpler) than the previous treatments (), which are only valid at stellar temperatures T much smaller than the neutron critical temperature T_cn, and it leads us to interesting conclusions. In essence, we show that neutron superfluidity drastically modifies the fluid dynamics, imposing an additional (in comparison to nonsuperfluid matter) constraint on the velocities of the different particle species. As a result, ambipolar diffusion becomes completely irrelevant for the magnetic field evolution in superfluid neutron-proton-electron (npe) matter once the stellar temperature T falls (even slightly) below T_cn. Then only Ohmic decay and dissipation due to non-equilibrium beta-processes remain active. An admixture of muons (μ) introduces an additional degree of freedom, so that dissipation due to particle relative motion can again play a role. § MAGNETIC FIELD ENERGY DISSIPATION §.§ Our assumptions and superfluid equation We follow the general strategy outlined in <cit.>. For simplicity, we consider a Newtonian non-rotating NS with the superfluid core composed of relativistic finite-temperature npe (or npeμ) matter, where neutrons are superfluid and protons are normal (nonsuperconducting).
In the absence of magnetic field B the star is in full thermodynamic and hydrostatic equilibrium, and the velocities of all particle species vanish. We assume that the magnetic field is the only mechanism that drives the star out of diffusive and beta-equilibrium. Since the evolution occurs on a very long timescale (), it proceeds through a set of quasistationary states, which means that the time derivatives in the “Euler equations” (Eqs. <ref>, <ref>–<ref> and <ref>–<ref>, see below), as well as in the continuity equations, can be neglected. Next, the magnetic field is considered as a small perturbation, hence the induced particle velocities are also small, and the terms quadratic in these velocities in the dynamic equations can be omitted. For simplicity, we also ignore all surface integrals which could appear in the formulas; they can easily be written out if necessary. The magnetic energy dissipates when the system evolves towards equilibrium. The rate of the magnetic energy change, Ė_B=∫_V (B/4π)·Ḃ dV, can be presented as (e.g., ) Ė_B=-∫_V E·j dV, where E is the electric field, j is the charge current density and V is the system volume. Because neutrons are assumed to be in the superfluid state, there is an additional degree of freedom in the system – the neutron superfluid velocity V_sn, which is related to the wave function phase Φ_n of the neutron Cooper-pair condensate by the condition V_sn = ∇Φ_n/(2 m_n). Generally, this velocity differs from the velocity of the normal neutron component, u_n (i.e., the velocity of neutron thermal Bogoliubov excitations). At T<T_cn both normal and superfluid components contribute to the neutron current density, j_n=n_sn V_sn + (n_n-n_sn) u_n, where n_n is the neutron number density and n_sn is the number density corresponding to the superfluid neutron component.
In the absence of rotation (no Feynman-Onsager vortices) V_sn obeys the standard “superfluid” equation, which is valid at arbitrary T < T_cn (e.g., ; ; note that the quadratically small velocity-dependent terms in this equation, as well as the bulk viscosity terms, are already neglected), m_n ∂V_sn/∂t + ∇μ_n^∞=0, where m_n is the neutron bare mass and μ_n^∞ is the redshifted relativistic neutron chemical potential. The latter is given by μ_n^∞=μ_n e^ϕ/c^2, where μ_n is the neutron chemical potential, c is the speed of light, and ϕ is the gravitational potential. In a Newtonian star (ϕ≪ c^2) ∇μ_n^∞ can be represented as ∇μ_n^∞ ≈ ∇μ_n + μ_n ∇ϕ/c^2. Equation (<ref>) can be further simplified by neglecting the inertia term which, as we have already discussed above, is small for a quasistationarily evolving NS. Then it reduces to ∇μ_n^∞=0 ⇔ ∇μ_n + μ_n ∇ϕ/c^2=0 (for a Newtonian star). Equation (<ref>), which is valid at arbitrary T < T_cn, deserves a comment. In the NS literature it is customary to use a different form of this equation (see, e.g., equation 3 in and equation 1 in ) with a friction force density F_fr in its right-hand side, ∇μ_n^∞=F_fr/n_n. The force density F_fr describes friction of neutrons with electrons and protons, and is usually chosen to be equal to (see, e.g., equations 3, 18, and 40 in and equations 1, 20, and 21 in ) [The expression (<ref>) is written for our simplified problem, i.e., assuming that protons and electrons are normal (nonsuperconducting).] F_fr=J_en(u_e-v_n)+J_np(u_p-v_n), where u_e and u_p are the electron and proton velocities, respectively; v_n ≡ j_n/n_n is the velocity of the neutron liquid as a whole (note that, generally, v_n≠V_sn≠u_n); and the `friction' coefficients J_en and J_np are defined in Sec. <ref>. Generally, the `neutron' equation in the form (<ref>) contradicts Eq. (<ref>). Both equations coincide only in the limit T ≪ T_cn, when J_en and J_np are suppressed by the neutron superfluidity (e.g., ), so that F_fr in Eq. (<ref>) can be neglected.
So, which equation is correct? On the one hand, Eq. (<ref>) is obtained from the neutron superfluid equation (<ref>), which has a standard form (e.g., ). [Note that any substantial modification of this equation is forbidden by the basic principles of the theory of superfluidity. For example, the introduction of a friction force in its right-hand side, m_n ∂V_sn/∂t + ∇μ_n^∞=F_fr/n_n, violates the potentiality condition for the superfluid velocity, ∇×V_sn=0, which must be satisfied in a nonrotating star (e.g., ).] Note that this equation remains unchanged even for mixtures of nonrotating superfluid and normal liquids, as is clearly demonstrated, e.g., in the monograph by <cit.>, who analyzed dissipative hydrodynamic equations for solutions of superfluid helium-II and normal ^3He taking into account the diffusion effects (see Chapter 24 and, in particular, equations 24.37 and 24.30 in that reference). On the other hand, Eq. (<ref>) is (as far as we are aware) presented without detailed derivation and, as we believe, is the result of an unjustified application of zero-temperature superfluid hydrodynamics to the case of finite temperatures. Thus, we conclude that equation (<ref>) is inaccurate at T ≲ T_cn and should be disregarded. §.§ npe-matter The equations of motion for electrons and nonsuperconducting protons take the form () -e(E + (1/c) u_e×B) - ∇μ_e - (μ_e/c^2)∇ϕ - (J_ep/n_e)(u_e-u_p) - (J_en/n_e)(u_e-u_n)=0, e(E + (1/c) u_p×B) - ∇μ_p - (μ_p/c^2)∇ϕ - (J_ep/n_p)(u_p-u_e) - (J_np/n_p)(u_p-u_n)=0, where e is the proton electric charge; μ_e and μ_p are the electron and proton relativistic chemical potentials, respectively. Further, n_i is the number density of particle species i=p, e, and J_ik=J_ki is the symmetric coefficient related to the effective relaxation time τ_ik for scattering of particles i on particles k by the formula τ_ik=n_i μ_i/(c^2 J_ik) (see ). In Eqs.
(<ref>) and (<ref>) thermo-diffusion terms are neglected. Equations (<ref>)–(<ref>) should be supplemented by the total force balance equation, which, for the problem in question, takes the standard form (the same for superfluid and nonsuperfluid liquids), j×B/c=∇P+(P+ϵ)∇ϕ/c^2= ∑_i=n,p,e (n_i ∇μ_i + n_i μ_i ∇ϕ/c^2), where P and ϵ are the pressure and energy density, and j=e n_p u_p - e n_e u_e. Let us now compose the following combination: [n_e×(<ref>) + n_p×(<ref>) - n_n×(<ref>)] - (<ref>). Taking into account the quasineutrality condition, n_e=n_p, we get J_en(u_e-u_n)+J_np(u_p-u_n)=0. This equation imposes an additional (in comparison to non-superfluid matter) constraint on the velocities u_i. For example, if we neglect collisions between neutrons and electrons (J_en=0), we obtain u_p=u_n. Now, summing up n_e×(<ref>) + n_p×(<ref>) + n_e×(<ref>), we arrive at j×B/c=-n_p ∇Δμ_e - n_p (Δμ_e/c^2)∇ϕ = -e^-ϕ/c^2 n_p ∇(Δμ_e e^ϕ/c^2), where Δμ_e ≡ μ_n - μ_p - μ_e. In the Newtonian limit n_p (Δμ_e/c^2)∇ϕ ≪ n_p ∇Δμ_e, and we have j×B/c=-n_p ∇Δμ_e. This is a Grad-Shafranov type equation for the magnetic field, as in the case of magnetic equilibria in barotropic fluids. Note that the Lorentz force density j×B/c depends on the gradient of only one scalar function (in contrast to non-superfluid matter, where it depends on the gradients of two scalar functions, see ). This means that only very specific magnetic field configurations can restore hydrostatic equilibrium when neutrons are superfluid (). Thus, once the NS temperature drops below the neutron critical temperature T_cn at a given point, the magnetic field has to rearrange itself to meet the new hydrostatic equilibrium condition (<ref>). This rearrangement may result in magnetar activity and effective dissipation of the magnetic field energy on a typical timescale of NS cooling. [As the results of <cit.> indicate, the same conclusion also applies if protons are superconducting.] To find the dissipation rate Ė_B (<ref>), we express E from Eq.
(<ref>) for protons, E=-u_p×B/c + ∇μ_p/e + μ_p ∇ϕ/(c^2 e) + [J_ep(u_p-u_e) + J_np(u_p-u_n)]/(e n_p). The second and third terms in Eq. (<ref>) are potential [In the approximation of a Newtonian star (ϕ/c^2 ≪ 1), employed in this paper, these terms can be presented as ∇μ_p/e + μ_p ∇ϕ/(c^2 e)=e^-ϕ/c^2 ∇(μ_p e^ϕ/c^2/e) ≈ ∇(μ_p e^ϕ/c^2/e) and hence are indeed potential. It is interesting to note that a fully relativistic calculation would not change this result.] and thus do not contribute to the magnetic field dissipation (to see this, integrate Eq. <ref> by parts and use the continuity equation, ∇·j=0). Thus, Ė_B=-∫_V [-u_p×B/c + [J_ep(u_p-u_e) + J_np(u_p-u_n)]/(e n_p)]·j dV. The first term here can be rewritten as ∫_V (u_p×B/c)·j dV=-∫_V (j×B/c)·u_p dV. Substituting now (<ref>) into (<ref>), -∫_V (j×B/c)·u_p dV =∫_V (n_p ∇Δμ_e)·u_p dV, and integrating by parts, one finds (the integral over the distant surface is omitted) ∫_V (u_p×B/c)·j dV= -∫_V ∇·(n_p u_p) Δμ_e dV. The divergence term here can be expressed with the help of the continuity equation for protons, ∇·(n_p u_p)=ΔΓ, where the source ΔΓ accounts for the non-equilibrium beta-processes. When Δμ_e ≪ k_B T, ΔΓ can be approximated as ΔΓ ≈ λ_e Δμ_e (λ_e is a density- and temperature-dependent coefficient and k_B is the Boltzmann constant), so that (<ref>) reduces to ∫_V (u_p×B/c)·j dV= -∫_V λ_e Δμ_e^2 dV. Returning now to Eq. (<ref>), it can be represented as Ė_B=-∫_V E·j dV= ∫_V [-λ_e Δμ_e^2 - J_en(u_e-u_n)^2 - J_ep(u_e-u_p)^2 - J_np(u_n-u_p)^2] dV + ∫_V (u_e-u_n)[J_en(u_e-u_n) + J_np(u_p-u_n)] dV, or, in view of Eq. (<ref>), Ė_B=-∫_V E·j dV= ∫_V [-λ_e Δμ_e^2 - J_en(u_e-u_n)^2 - J_ep(u_e-u_p)^2 - J_np(u_n-u_p)^2] dV.
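The constraint J_en(u_e - u_n) + J_np(u_p - u_n) = 0 used above simply fixes the normal-neutron velocity as a friction-weighted average of the electron and proton velocities. A one-dimensional toy sketch (with arbitrary illustrative coefficients, not microphysical J_ik values) makes this explicit:

```python
# Toy 1-D illustration of the superfluid npe constraint
# J_en (u_e - u_n) + J_np (u_p - u_n) = 0.
# Solving for u_n gives a weighted average of u_e and u_p.
# All numbers below are arbitrary illustrative inputs.

def u_neutron(u_e, u_p, J_en, J_np):
    """Normal-neutron velocity implied by the constraint."""
    return (J_en * u_e + J_np * u_p) / (J_en + J_np)

# Limit discussed in the text: for J_en -> 0 neutrons co-move with protons.
print(u_neutron(u_e=1.0, u_p=0.2, J_en=1e-9, J_np=1.0))  # close to u_p = 0.2
```

In particular, the J_en → 0 limit reproduces the u_p = u_n relation quoted in the text.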
Clearly, the magnetic field dissipates due to particle mutual transformations and relative motion (diffusion). In principle, the same expression can also be derived for non-superfluid npe matter (Gusakov et al., in preparation), but now the velocities are related by the constraint (<ref>). Using this constraint, Eq. (<ref>) can be rewritten as Ė_B= ∫_V [-λ_e Δμ_e^2 - j^2/σ_0] dV, where σ_0=e^2 n_e^2/[J_ep + J_en J_pn/(J_en+J_pn)] is the electrical conductivity in the absence of a magnetic field. We come to the important conclusion that in superfluid npe matter there is no magnetic field dissipation due to ambipolar diffusion: the magnetic field dissipates exclusively due to Ohmic decay and non-equilibrium particle transformations. The former is extremely inefficient in neutron stars [A typical timescale is t_Ohmic=4π L^2 σ_0/c^2 ∼ 10^14 T_8^-5/3 yrs, where L is the lengthscale of the magnetic field variation (we take L ∼ 10^6 cm) and T_8 is the temperature of the NS core, normalized to 10^8 K ().], while the latter strongly depends on the rate of non-equilibrium beta-processes in NS matter. A typical dissipation timescale associated with these processes can be estimated as t_reactions ∼ B^2/(4π λ_e δμ_e^2) ∼ 4π n_p^2/(λ_e B^2). In the case of modified Urca (mUrca) processes this estimate gives [The coefficient λ_e is estimated using the formulas given in the review by <cit.>.] t_reactions ≳ 3×10^12 T_8^-6 B_14^-2 yrs, too long to affect the magnetic field evolution. In turn, for the direct Urca (dUrca) process we get t_reactions ≳ 2×10^4 T_8^-4 B_14^-2 yrs, i.e., dUrca can be an effective dissipation agent for sufficiently strong magnetic fields. It should be emphasized that the fact that the magnetic field does not dissipate through ambipolar diffusion in the NS region where neutrons are superfluid was clearly realised long ago by <cit.>. However, these authors only considered the case of vanishing temperature (T=0), when all neutrons condense into Cooper pairs and simply cannot scatter off the protons.
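The timescale estimates above can be collected into a small sketch. Only the scalings and prefactors quoted in the text are used; no attempt is made here to compute σ_0 or λ_e from microphysics:

```python
# Order-of-magnitude dissipation timescales quoted above (in years).
# T8: core temperature in units of 1e8 K; B14: field in units of 1e14 G.
# Prefactors are the estimates quoted in the text, not computed here.

def t_ohmic_yr(T8):
    return 1e14 * T8 ** (-5.0 / 3.0)          # Ohmic decay

def t_murca_yr(T8, B14):
    return 3e12 * T8 ** -6 * B14 ** -2        # modified Urca

def t_durca_yr(T8, B14):
    return 2e4 * T8 ** -4 * B14 ** -2         # direct Urca

T8, B14 = 1.0, 1.0
print(t_ohmic_yr(T8))        # ~1e14 yr: far too slow
print(t_murca_yr(T8, B14))   # ~3e12 yr: still too slow
print(t_durca_yr(T8, B14))   # ~2e4 yr: can matter for strong fields
```

The steep temperature and field dependences are the point: dUrca dissipation shortens rapidly for hotter cores and stronger fields, while mUrca and Ohmic decay remain far longer than cooling timescales.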
In contrast, here we argue that ambipolar diffusion is not important at any temperature below the critical one, even slightly smaller than T_cn (when almost all neutrons are unpaired). Note that this conclusion is in contrast to the generally held view (e.g., ) about the possible important role of ambipolar diffusion at temperatures T comparable to T_cn, which is based on the analysis of equations strictly valid only at T ≪ T_cn (see Sec. <ref> for details). §.§ npeμ-matter An admixture of muons introduces an additional degree of freedom into the system. The dynamic equations for the superfluid npeμ mixture consist of the superfluid (neutron) Eq. (<ref>) together with the three equations for the charged components, -e(E + (1/c) u_e×B) - ∇μ_e - (μ_e/c^2)∇ϕ - (J_ep/n_e)(u_e-u_p) - (J_en/n_e)(u_e-u_n) - (J_eμ/n_e)(u_e-u_μ)=0, -e(E + (1/c) u_μ×B) - ∇μ_μ - (μ_μ/c^2)∇ϕ - (J_μp/n_μ)(u_μ-u_p) - (J_μn/n_μ)(u_μ-u_n) - (J_eμ/n_μ)(u_μ-u_e)=0, e(E + (1/c) u_p×B) - ∇μ_p - (μ_p/c^2)∇ϕ - (J_ep/n_p)(u_p-u_e) - (J_np/n_p)(u_p-u_n) - (J_μp/n_p)(u_p-u_μ)=0. In Eqs. (<ref>)–(<ref>) u_μ and μ_μ are the muon velocity and relativistic chemical potential, respectively. In analogy with Eq. (<ref>) one can derive the force balance equation for superfluid npeμ matter, j×B/c=-n_e ∇Δμ_e - n_μ ∇Δμ_μ, and find that the Lorentz force density is determined by the gradients of two scalars, ∇Δμ_e and ∇Δμ_μ, where Δμ_μ ≡ μ_n - μ_p - μ_μ. Therefore, in comparison to superfluid npe-matter, there is more freedom to choose possible magnetic field configurations. Proceeding in a similar way as in the case of npe matter, one can show that the magnetic field dissipation rate is given by [The same expression is also valid for non-superfluid npeμ matter, but then it is not constrained by Eq. (<ref>).] Ė_B= ∫_V [-λ_e Δμ_e^2 - λ_μ Δμ_μ^2 - (1/2)∑_i,k,i≠k J_ik(u_i-u_k)^2] dV, while the velocities u_i are related by (compare this result with the constraint <ref>) J_μn(u_μ-u_n)+J_en(u_e-u_n)+J_np(u_p-u_n)=0. In Eq.
(<ref>) the indices i,k run over n, p, e, μ; λ_μ has the same meaning as λ_e, but is defined for the Urca reactions involving muons. Generally, as follows from Eqs. (<ref>) and (<ref>), the magnetic field dissipation in npeμ matter is not associated exclusively with the Ohmic decay and non-equilibrium beta-processes. Even if we neglect the lepton interactions with neutrons (J_en=J_μn=0) and find that u_p = u_n from Eq. (<ref>) (i.e., normal neutrons move with protons), the relative motion between electrons and muons will still enter the dissipation rate. The effect of such motion on the magnetic field dissipation can be relatively small, especially at large densities (when muons more and more resemble electrons). But this should be checked by a direct calculation, which is beyond the scope of the present short note. Concluding, the presence of muons (or other, more exotic, particle species) complicates things and may, in principle, affect the magnetic field evolution in superfluid NSs. § CONCLUSIONS In this note we considered a simplified illustrative problem of the magnetic field evolution in a NS whose core is composed of superfluid neutrons, nonsuperconducting protons and electrons, with a possible admixture of muons. The star is assumed to be non-rotating, i.e., it does not have neutron vortices in its interior. The perturbation of the system by the magnetic field is assumed to be small and the NS evolution is supposed to be quasistationary – standard assumptions (e.g., ) that allow us to neglect the time derivatives and velocity-dependent nonlinear terms in the Euler-type and particle continuity equations. Bearing in mind the simplifications described above, we arrived at the following conclusions: 1.
Ambipolar diffusion is irrelevant for the magnetic field dissipation in superfluid npe matter at temperatures T even slightly smaller than T_cn. The magnetic field in this case dissipates only because of Ohmic losses (an inefficient mechanism in neutron star cores) and non-equilibrium Urca processes (which can be efficient if the direct Urca process is open). This result is in contrast to the results of <cit.> and <cit.> who, as we argue in Sec. <ref>, used a superfluid dynamic equation for neutrons which is correct only in the limit T ≪ T_cn. 2. Since only very specific magnetic field configurations can support hydrostatic equilibrium in superfluid npe matter (see section <ref> and the work by ), the magnetic field should “feel” the expansion of the superfluid region upon NS cooling and reorganize itself accordingly on a cooling timescale. This may result in increased magnetic activity, e.g., in magnetars. 3. An admixture of muons will restore, to some extent, the role of ambipolar diffusion for the magnetic field evolution, although the magnetic energy dissipation rate (<ref>) will differ from that in non-superfluid matter because of (i) the suppression of np, ne, and nμ collisions by neutron superfluidity (e.g., ) and (ii) an additional constraint (<ref>) relating the normal particle velocities u_i (i=n, p, e, μ). In particular, if we neglect the lepton-neutron collisions, there will be no relative motion between normal neutrons and protons, u_n=u_p. § ACKNOWLEDGEMENTS This study was supported by the Russian Science Foundation (grant number 14-12-00316).
[Baym et al. (1969) Baym, Pethick, & Pines] bpp69 Baym G., Pethick C., Pines D., 1969, , 224, 674
[Beloborodov & Li (2016)] bl16 Beloborodov A. M., Li X., 2016, , 833, 261
[Castillo et al. (2017) Castillo, Reisenegger, & Valdivia] crv17 Castillo F., Reisenegger A., Valdivia J. A., 2017, , 471, 507
[Glampedakis et al. (2011) Glampedakis, Jones, & Samuelsson] gjs11 Glampedakis K., Jones D. I., Samuelsson L., 2011, , 413, 2021
[Glampedakis & Lasky (2016)] gl16 Glampedakis K., Lasky P. D., 2016, , 463, 2542
[Goldreich & Reisenegger (1992)] gr92 Goldreich P., Reisenegger A., 1992, , 395, 250
[Gusakov & Andersson (2006)] ga06 Gusakov M. E., Andersson N., 2006, Mon. Not. R. Astron. Soc., 372, 1776
[Haensel et al. (1990) Haensel, Urpin, & Iakovlev] hui90 Haensel P., Urpin V. A., Iakovlev D. G., 1990, , 229, 133
[Hoyos et al. (2008) Hoyos, Reisenegger, & Valdivia] hrv08 Hoyos J., Reisenegger A., Valdivia J. A., 2008, , 487, 789
[Hoyos et al. (2010) Hoyos, Reisenegger, & Valdivia] hrv10 Hoyos J. H., Reisenegger A., Valdivia J. A., 2010, , 408, 1730
[Iakovlev & Shalybkov (1991)] ys91a Iakovlev D. G., Shalybkov D. A., 1991, , 176, 171
[Khalatnikov (1989)] khalatnikov89 Khalatnikov I. M., 1989, An Introduction to the Theory of Superfluidity. Addison-Wesley, New York
[Passamonti et al. (2017) Passamonti, Akgün, Pons, & Miralles] papm17 Passamonti A., Akgün T., Pons J. A., Miralles J. A., 2017, , 465, 3416
[Putterman (1974)] putterman74 Putterman S., 1974, Superfluid Hydrodynamics, North-Holland Series in Low Temperature Physics. North-Holland Pub. Co.
[Shalybkov & Urpin (1995)] su95 Shalybkov D. A., Urpin V. A., 1995, , 273, 643
[Shternin (2008)] shternin08 Shternin P. S., 2008, Soviet Journal of Experimental and Theoretical Physics, 107, 212
[Thompson & Duncan (1996)] td96 Thompson C., Duncan R. C., 1996, , 473, 322
[Urpin & Shalybkov (1999)] us99 Urpin V., Shalybkov D., 1999, , 304, 451
[Urpin & Shalybkov (1995)] us95 Urpin V. A., Shalybkov D. A., 1995, Astronomy Reports, 39, 332
[Yakovlev et al. (2001) Yakovlev, Kaminker, Gnedin, & Haensel] ykgh01 Yakovlev D. G., Kaminker A. D., Gnedin O. Y., Haensel P., 2001, , 354, 1
[Yakovlev & Shalybkov (1991)] ys91b Yakovlev D. G., Shalybkov D. A., 1991, , 176, 191
^aForschungszentrum Jülich, Institute for Advanced Simulation, Institut für Kernphysik and Jülich Center for Hadron Physics, D-52425 Jülich, Germany ^bHelmholtz-Institut für Strahlen- und Kernphysik and Bethe Center for Theoretical Physics, Universität Bonn, D-53115 Bonn, Germany We investigate the process B_c^+→ B_s^0π^+π^0 via BK̅^* rescattering. The kinematic conditions for triangle singularities are perfectly satisfied in the rescattering diagrams. A resonance-like structure around the BK̅ threshold, which we denote as X(5777), is predicted to be present in the invariant mass distribution of B_s^0π^+. Because the relatively weak BK̅ (I=1) interaction does not support the existence of a dynamically generated hadronic molecule, the X(5777) can be identified as a pure kinematical effect due to the triangle singularity. Its observation may help to establish a non-resonance interpretation for some XYZ particles. Keywords: Molecular state; Rescattering effect; Triangle singularity. 14.40.Rt, 12.39.Mk, 14.40.Nd
Generating a resonance-like structure in the reaction B_c→ B_sππ Ulf-G. Meißner^a,b [meissner@hiskp.uni-bonn.de] December 30, 2023
==================================================================
§ INTRODUCTION Hadron spectroscopy, in particular due to the appearance of the so-called exotic states, is experiencing a renaissance in recent years. Since 2003, dozens of resonance-like structures have been observed by many experimental collaborations in various reactions. These structures are usually denoted as XYZ particles, because most of them do not fit into the conventional quark model (QM), which has been very successful in describing the low-lying hadrons. For instance, the observed masses of the X(3872) and the D_s0^*(2317) are much smaller than the expected values for the conventional QM states χ_c1(2 ^3P_1) and D_s0^*(1 ^3P_0), respectively.
Some of these states definitely cannot be conventional qq̅-mesons or qqq-baryons, such as the charged Z_c^±/Z_b^± states observed in the J/ψπ^±/Υ(nS)π^± invariant mass distributions, the P_c(4380) and P_c(4450) observed in J/ψ p distributions, and so on. These experimental observations have also inspired a flurry of theoretical investigations trying to understand their intrinsic structures. We refer to Refs. <cit.> for some recent reviews on the study of exotic hadrons. Among the popular theoretical interpretations of exotic hadrons, the multi-quark (tetraquark, pentaquark, etc.) interpretation usually tends to imply the existence of a large number of degenerate states. In contrast, the observed spectrum in experiments appears to be very sparse, which is a challenge for this interpretation. An intriguing characteristic of the XYZ states is that many of them are located around two-meson (or one-meson one-baryon) thresholds. For example, the masses of the D_s0^*(2317), X(3872), Y(4260), Z_b(10610) and Z_b(10650) are very close to the thresholds of DK, DD̅^*, D_1D̅, BB̅^* and B^*B̅^*, respectively. This phenomenon can be considered evidence for regarding some XYZ states as hadronic molecules – bound systems of two hadrons analogous to conventional nuclei. The deuteron, which is composed of a proton and a neutron, is one of the few well-established hadronic molecules up to now. With proper interactions, the existence of molecular states composed of other hadrons is also expected. A prime example is the Λ(1405), which was predicted as a K̅N molecule long before the QM. In many cases, however, the detailed multi-hadron dynamics is not so well understood. For a recent review of hadronic molecules, see Ref.
<cit.>. Concerning the underlying structures of those XYZ states, besides the genuine resonance interpretations mentioned above, some non-resonance interpretations which connect kinematic singularities of the rescattering amplitudes with the resonance-like peaks have also been proposed in the literature, such as the cusp model <cit.>, or the triangle singularity (TS) mechanism. The TS mechanism was first noticed in the 1960s <cit.>. Unfortunately, most of the proposed reactions at that time were lacking experimental data. It was rediscovered in recent years and used to interpret some exotic phenomena, such as the large isospin violation in η(1405)→ 3π, the production of the axial-vector state a_1(1420), the production of the Z_c^±(3900) and so on <cit.>. It has been shown that sometimes it is not necessary to introduce a genuine resonance to describe a resonance-like peak, because the TSs of the rescattering amplitudes can generate bumps in the corresponding invariant mass distributions. Before claiming that a resonance-like peak corresponds to a genuine particle, it is therefore necessary to exclude or confirm the possibility of this non-resonance interpretation. As for the cusp model, it should be mentioned that in Ref. <cit.> it was shown that the kinematic threshold cusp cannot produce a narrow peak in the invariant mass distribution of the elastic channel, in contrast with a genuine S-matrix pole. The position of the TS peak usually lies in the vicinity of the threshold of the scattering particles. From this point of view, the TS mechanism is similar to the hadronic molecule interpretation, and it also implies that a genuine dynamical pole may mix with the TS peak. This introduces some ambiguities in our understanding of the nature of some resonance-like peaks observed in experiments.
One way to distinguish TS peaks from genuine resonances is to find some “clean” processes. Since the pole position of a genuine state should not depend on a specific process, while the TS peak is rather sensitive to the kinematic conditions, one would expect that a genuine state should still appear in processes where the kinematic conditions for the TS are not fulfilled, but the TS peak should disappear. Vice versa, if one observes a resonance-like peak in a process where the genuine state does not contribute but the TS kinematic conditions can be fulfilled, this will also help to establish the TS mechanism. In this paper, we focus on a process through which the TS mechanism could be confirmed in experiments. TS Mechanism. — For the triangle Feynman diagrams describing rescattering processes, such as those illustrated in Fig. <ref>, there are two kinds of intriguing singularities which may appear in the rescattering amplitudes. When only two of the three intermediate states are on-shell, the singularity at threshold is a finite square-root branch point, which corresponds to a cusp effect. In some special kinematical configurations, all three of the intermediate states can be on-shell simultaneously, which corresponds to the leading Landau singularity of the triangle diagram. This leading Landau singularity is usually called the TS, and it may result in a narrow peak in the corresponding spectrum. For the decay process B_c^+→ B_s^0π^+π^0 via the K̅^*B K̅-loop in Fig. <ref>(a), we define the invariants s_1≡ p_B_c^+^2=m_B_c^+^2, and s_2≡ (p_B_s^0+p_π^+)^2=M_B_s^0π^+^2. The position of the TS in the s_1 or s_2 complex plane of the scattering amplitude 𝒜(s_1,s_2) can be determined by solving the so-called Landau equation <cit.>. Assuming we do not know the physical mass m_B_c^+, when √(s_1) increases from the BK̅^* threshold 6.175 GeV to 6.297 GeV, the TS in √(s_2) moves from 5.849 GeV to the BK̅ threshold at 5.777 GeV.
Vice versa, when √(s_2) increases from 5.777 to 5.849 GeV, the TS in √(s_1) moves from 6.297 to 6.175 GeV. These are the kinematical regions where the TS can be present in the physical rescattering amplitude. It is interesting to note that the mass of the B_c^+, ∼ 6.276 GeV, falls right into the TS kinematical region. Taking Fig. <ref>(a) as an example, the physical picture of the TS mechanism can be understood as follows: the initial particle B_c^+ first decays into B^+ and K̅^*0, then the K̅^0 emitted from the K̅^*0 catches up with the B^+, and finally B^+K̅^0 scatters into B_s^0 π^+. This implies that the rescattering diagram can be interpreted as a classical process in space-time in the presence of the TS, and the TS will be located on the physical boundary of the rescattering amplitude <cit.>. Rescattering Amplitude. — The B_c^+ meson, lying below the BD threshold, can only decay via the weak interactions, and about 70% of its width is due to c quark decay with the b quark as a spectator <cit.>. The decay B_c^+→ B^+ K̅^*0, as a Cabibbo-favored process, is expected to be one of the dominant nonleptonic decay modes of the B_c^+ <cit.>. There is no direct measurement of this channel at present. Its branching ratio is usually predicted to be larger than 10^-3, which implies that the rescattering processes in Fig. <ref> may play a role in B_c^+→ B_s^0π^+π^0. Following Refs. <cit.>, by means of the factorization approach, the decay amplitude can be expressed as 𝒜(B_c^+→ B^+ K̅^*0)= √(2) G_F F_1^B_c → B_u f_K^* m_K̅^*×(p_B_c^+·ϵ^*_K̅^*) V_ud V_cs^* a_2, where G_F is the Fermi coupling constant, F_1^B_c → B_u is the B_c^+→ B^+ transition form factor, f_K^* is the decay constant of the K^*, V_ud, V_cs^* are the CKM matrix elements, and a_2 is a combination of Wilson coefficients. For B_c^+→ B^+ K̅^*0, the velocity of the recoiling B^+ is very low in the rest frame of the B_c^+, and the wave functions of the B_c^+ and B^+ overlap strongly.
The form factor F_1^B_c → B_u is then expected to be close to unity <cit.>. In our numerical calculation, we take F_1^B_c → B_u=1 as an approximation. The decay constant f_K^* and the coefficient a_2 are fixed to be 220 MeV and -0.4, respectively <cit.>. For the other parameters in Eq. (<ref>), we input the standard Particle Data Group values <cit.>. For K̅^*→K̅π, the amplitudes take the form 𝒜(K̅^*0→K̅^0π^0) =2G_V p_π^0·ϵ_K̅^*, 𝒜(K̅^*0→ K^-π^+) =-2√(2) G_V p_π^+·ϵ_K̅^*, where the coupling constant G_V can be determined from the decay width of the K̅^*. There have been many theoretical studies of pseudo-Nambu-Goldstone bosons (π, K, etc.) scattering off the heavy-light mesons (D^(*), B^(*), etc.). By means of lattice QCD (LQCD) simulations and chiral extrapolation, in Ref. <cit.> the S-wave scattering length of the isoscalar DK channel a_DK^I=0 is predicted to be -0.86± 0.03 fm at the physical pion mass. Employing both s̅c and DK interpolating fields, in Ref. <cit.> the authors performed a direct lattice simulation and obtained the DK scattering length a_DK^I=0= -1.33(20) fm, which qualitatively agrees with the result of Ref. <cit.>. The large negative scattering length a_DK^I=0 means that the DK (I=0) interaction is strong, and indicates the presence of an isoscalar state below threshold. It is generally supposed that the D_s0^*(2317)/D_s1(2460) is the hadronic molecule dynamically generated by the strong DK/D^*K (I=0) interaction in coupled-channels dynamics <cit.>. On the other hand, the scattering length of the isospin-1 DK channel a_DK^I=1 is predicted to be 0.07± 0.03 + i(0.17^+0.02_-0.01) fm in Ref. <cit.>, which is much smaller than a_DK^I=0 and implies that the DK (I=1) interaction is weak. According to heavy quark spin and flavor symmetry, the above results can easily be extended to the B^(*)K̅ cases. The bottom-quark counterparts of the D_s0^*(2317) and D_s1(2460) are the B_s0^* and B_s1, which are supposed to be BK̅ and B^*K̅ molecular states, respectively.
But these two states have not yet been observed in experiments. The predicted masses of the B_s0^*/B_s1 are usually tens of MeV below the BK̅/B^*K̅ threshold. Similar to the DK (I=1) interaction, the BK̅ (I=1) interaction is also generally supposed to be weak. Within the framework of a unitary chiral effective field theory, the S-wave scattering length of the isovector BK̅ channel a_BK̅^I=1 is predicted to be (0.02-0.23i) fm <cit.>. The relatively weak interactions in the BK̅-B_sπ coupled channels do not support the presence of an isovector hadronic molecule around the BK̅ threshold. In 2016, the D0 collaboration reported the observation of a narrow structure X(5568) in the B_s^0π^± invariant mass spectrum <cit.>. The mass and width are measured to be M_X=5567.8± 2.9^+2.9_-1.9 MeV and Γ_X=21.9± 6.4^+5.0_-2.5 MeV, respectively. The quark components of the decaying final state B_s^0 π^± are sub̅d̅ (or sdb̅u̅), which requires that the X(5568) be a structure with four different valence quarks. Considering its mass and quark content, some theorists suppose it could be an isovector hadronic molecule composed of BK̅ <cit.>. Using a chiral unitary approach, the authors reproduce the reported spectrum of the D0 collaboration. However, the authors of Ref. <cit.> also point out that, to reproduce the spectrum, an “unnatural” cutoff Λ≃ 2.8 GeV has to be adopted in the T-matrix regularization, which is much larger than the “natural” value Λ≃ 1 GeV. Furthermore, in Ref. <cit.>, only the leading order (LO) potential was adopted, but in Ref. <cit.> it was shown that the LO potential cannot describe the LQCD scattering lengths of Ref. <cit.>. Employing the covariant formulation of unitary chiral perturbation theory (UChPT), the authors found no bound state or resonant state via a direct search on different Riemann sheets in Refs.
<cit.>, where the driving potentials up to next-to-leading order (NLO) are constructed. In a recent experimental result reported by the LHCb collaboration <cit.>, the existence of the X(5568) is not confirmed based on their pp collision data, which makes the production mechanism and underlying structure of the X(5568) even more puzzling. In fact, right after the observation by D0, the possible existence of this state was challenged on theoretical grounds, see Refs. <cit.>. The reason for its appearance in the D0 data and its absence in the LHCb and CMS experiments is discussed in Ref. <cit.>. Our interest in this paper is not the X(5568) but a predicted resonance-like peak, denoted as X^±(5777), located around the BK̅ threshold in the B_sπ^± distributions. Because the existence of an isovector BK̅ hadronic molecule is rather questionable, if one finds a peak in the B_s^0π^+ invariant mass spectrum of the decay B_c^+→ B_s^0 π^+π^0 around the BK̅ threshold, it is quite reasonable to suppose that the peak is induced by the TS mechanism as illustrated in Fig. <ref>. For the vertex BK̅→ B_sπ in Fig. <ref>, we employ the amplitude which is unitarized according to the method of UChPT <cit.>. We consider S-wave BK̅ and B_sπ coupled-channel scattering. The unitary T-matrix is given by T=(1-VG)^-1V, where V stands for the S-wave projection of the driving potential, and G is a diagonal matrix composed of two-meson scalar-loop functions <cit.>. We only focus on S-wave scattering in this paper, because higher partial wave contributions are highly suppressed for near-threshold scattering. In our numerical calculations, the NLO potential is used. For the pertinent low-energy constants and the subtraction constant, we adopt the values of Ref. <cit.>, which are determined by fitting the recent LQCD results of Ref. <cit.>. See Refs. <cit.> for more details on the formulation of the NLO potentials.
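Before turning to the full rescattering amplitude, the TS position quoted above can be checked numerically. The sketch below, a rough illustration rather than part of the analysis of this paper, solves the leading Landau condition for the triangle diagram in its standard compact form 1 + 2y_12y_13y_23 - y_12^2 - y_13^2 - y_23^2 = 0, with y_ij = q_i·q_j/(m_i m_j) fixed by the external invariants. The rounded PDG masses used as inputs are our own choice, and the third decimal of the result depends slightly on the exact K^* mass adopted:

```python
import numpy as np

# Masses in GeV (rounded PDG values): internal K̅*0, B+, K̅0; external pi0, Bc+.
m1, m2, m3 = 0.8955, 5.2793, 0.4976
m_pi0, m_Bc = 0.1350, 6.2749

def ts_position(sqrt_s1):
    """Triangle singularity position in sqrt(s2) for fixed s1.

    Solves 1 + 2*y12*y13*y23 - y12**2 - y13**2 - y23**2 = 0 for y23,
    where y_ij = q_i.q_j/(m_i*m_j) follow from the external invariants:
    (q1+q2)^2 = s1, (q1-q3)^2 = m_pi0^2, (q2+q3)^2 = s2.
    """
    s1 = sqrt_s1**2
    y12 = (s1 - m1**2 - m2**2) / (2.0 * m1 * m2)
    y13 = (m1**2 + m3**2 - m_pi0**2) / (2.0 * m1 * m3)
    # Of the two roots of the quadratic in y23, the one closer to
    # threshold (y23 -> 1) is the relevant TS here.
    y23 = y12 * y13 - np.sqrt((y12**2 - 1.0) * (y13**2 - 1.0))
    s2 = m2**2 + m3**2 + 2.0 * m2 * m3 * y23
    return np.sqrt(s2)

print(ts_position(m_Bc))  # ≈ 5.777 GeV, just above the B K̅ threshold
```

Evaluating at √s_1 = m_B_c^+ indeed gives a TS just above the BK̅ threshold of 5.777 GeV, and letting √s_1 run over the window 6.175-6.297 GeV reproduces the motion of the TS in √s_2 described in the text.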
The rescattering amplitude of B_c^+→ B_s^0π^+π^0 via the K̅^*0(q_1) B^+(q_2) K̅^0 (q_3)-loop in Fig. <ref>(a) is given by 𝒜_B_c^+→ B_s^0π^+π^0^[ K̅^*0 B^+ K̅^0 ] = 1/i∫d^4q_3/(2π)^4𝒜(B_c^+→ B^+ K̅^*0)/ (q_1^2-m_K̅^*^2 +i m_K̅^*Γ_K̅^*) ×𝒜(K̅^*0→K̅^0π^0) 𝒜(B^+K̅^0→ B_s^0π^+) / (q_2^2-m_B^+^2) (q_3^2-m_K̅^0^2) 𝔽(q_3^2), where the sum over the polarizations of the intermediate state is implicit. The amplitude of Fig. <ref>(b) is similar to that of Fig. <ref>(a). Whenever the TS kinematical conditions are satisfied, one of the intermediate states (the K̅^* here) must be unstable, so it is necessary to take the width of this intermediate state into account. We therefore employ a Breit-Wigner (BW) type propagator in Eq. (<ref>). The complex mass in the propagator removes the TS from the physical boundary by a small distance and makes the physical scattering amplitude finite. Since the location of the TS is not far from the physical boundary, the physical amplitude can still feel its influence. In Eq. (<ref>), we also introduce a monopole form factor 𝔽(q_3^2)=(m_K̅^2-Λ^2 )/(q_3^2-Λ^2) to account for the off-shell effect and to remove the ultraviolet divergence that appears in the loop integral. In the future, this should be replaced by a better regularization procedure. The numerical results for the B_s^0π^+ distributions via the rescattering processes are displayed in Fig. <ref>, where the cutoff energy Λ is taken to be 1 GeV or 3 GeV. It can be seen that the lineshape is not sensitive to the value of Λ: the two curves nearly coincide with each other, even though the cutoff energies are rather different. This is because the dominant contribution to the loop integral in Eq. (<ref>) comes from the region where the intermediate particles are (nearly) on-shell, i.e., where q_3^2=m_K̅^2 and 𝔽(q_3^2)=1. A narrow peak around 5.777 GeV can be seen in Fig. <ref>. This resonance-like peak is what we call the X(5777).
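The on-shell argument for the Λ-insensitivity can be made explicit in a few lines: on the K̅ mass shell the monopole form factor equals one identically, so varying Λ only reweights the suppressed off-shell tail of the integrand. A minimal sketch (masses in GeV; the off-shell point q_3^2 = -1 GeV^2 is an arbitrary illustration, not a value used in the paper):

```python
m_K = 0.4976  # GeV, K̅0 mass (rounded PDG value)

def monopole_ff(q3_sq, lam):
    """Monopole form factor F(q3^2) = (m_K^2 - Lambda^2) / (q3^2 - Lambda^2)."""
    return (m_K**2 - lam**2) / (q3_sq - lam**2)

for lam in (1.0, 3.0):
    # Exactly 1 on the K̅ mass shell, independently of the cutoff:
    print(lam, monopole_ff(m_K**2, lam))  # -> 1.0
    # Off shell the form factor suppresses the integrand, more strongly
    # for the smaller cutoff:
    print(lam, monopole_ff(-1.0, lam))
```

Since the loop integral is dominated by the (near) on-shell region where this factor is unity, the two cutoff choices lead to nearly identical lineshapes.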
As analyzed above, the X(5777) discussed here is not a dynamically generated pole of the coupled-channel dynamics. Its presence is due to the TS kinematical conditions being fulfilled in the rescattering diagram. The bump around 5.9 GeV in Fig. <ref> is due to reflection effects in the Dalitz plot and interference between Figs. <ref>(a) and (b). Background Analysis. — The rescattering processes in Fig. <ref> are just one of the contributions to the three-body decay B_c^+→ B_s^0π^+π^0. Since the TS peak can appear in these diagrams, we define them as the “signal” processes. But the dominant contribution to B_c^+→ B_s^0π^+π^0 is expected to proceed via B_c^+→ B_s^0 ρ^+ → B_s^0π^+π^0. This is because, compared to B_c^+→ B_s^0 ρ^+, B_c^+→ B^+ K̅^*0 is a color-suppressed process in the naive factorization approach. The branching ratio of B_c^+→ B_s^0 ρ^+ is generally predicted to be larger than 1% <cit.>, which is about one order of magnitude larger than that of B_c^+→ B^+ K̅^*0. To study the “signal” in the B_s^0π^+ distribution, it is also necessary to know the influence of possible backgrounds, especially B_c^+→ B_s^0 ρ^+. Using the factorization approach, the amplitude of B_c^+→ B_s^0 ρ^+ can be written as 𝒜(B_c^+→ B_s^0 ρ^+)= √(2) G_F F_1^B_c → B_s f_ρ m_ρ×(p_B_c^+·ϵ^*_ρ) V_ud V_cs^* a_1, where we use F_1^B_c → B_s=1, f_ρ=216 MeV and a_1=1.22 in the numerical calculations <cit.>. The amplitude of ρ^+→π^+π^0 reads 𝒜(ρ^+→π^+π^0) = 4G_V p_π^0·ϵ_ρ. The complete amplitude of B_c^+→ B_s^0π^+π^0 is then given by 𝒜(B_c^+→ B_s^0π^+π^0) =e^iθ𝒜_ρ +𝒜 ℱ(s_ππ), where 𝒜_ρ is the amplitude of the tree diagram via intermediate ρ meson decay, and the normal BW type propagator is adopted in 𝒜_ρ. The factor e^iθ stands for the relative phase between 𝒜(B_c^+→ B_s^0 ρ^+) and 𝒜(B_c^+→ B^+ K̅^*0), which is not fixed in the factorization approach.
In the above equation, we also introduce a function ℱ(s_ππ) to account for the strong ππ final-state interaction <cit.>, where s_ππ is the π^+π^0 invariant mass squared. Due to the generalized Bose statistics, π^+π^0 can only be in odd relative partial waves. For the lowest, P-wave, ππ scattering, the phase shift in the isospin-1 channel can be well reproduced by intermediate ρ-meson exchange. The function ℱ(s_ππ) can be further parametrized as ℱ(s_ππ)=α(s_ππ)/(s_ππ-m_ρ^2+i m_ρΓ_ρ). α(s_ππ) is a polynomial function of s_ππ, which should be fixed from experimental data. But since we are making a prediction here, we approximately take α(s_ππ)=s_ππ-∘m_ρ^2, where ∘m_ρ is the bare mass of the ρ meson without the effect of the ππ loop. By reproducing the P-wave ππ scattering phase shift data, ∘m_ρ is fixed to be 0.81 GeV according to a vector-meson-dominance model employed in Ref. <cit.>. This rather model-dependent scheme should eventually be replaced by a more refined spectral function, see e.g. Refs. <cit.> (and references therein). In terms of Eq. (<ref>), the simulated B_s^0π^+ distribution is displayed in Fig. <ref>, where the relative phase θ is taken to be 0, π/2, π and 3π/2, respectively, corresponding to the different curves. The cutoff energy Λ is fixed to be 1 GeV in the simulation. The B_s^0π^+ distribution is dominated by the reflection of the ρ signal in the Dalitz plot, but all four curves in Fig. <ref> deviate significantly from this reflection around 5.777 GeV. When θ=0 (π), there is a sudden fall (rise) in the distributions. When θ=π/2 (3π/2), there is a narrow peak (dip) in the distributions. The TS of the rescattering process generates different structures due to the different interferences. Another background may come from the isospin-violating process B_c^+→ B_s0^*π^+→ B_s^0π^0π^+.
But since the B_s0^* peak in the B_s^0π^0 distribution may not have a very large influence on the B_s^0π^+ distribution, this contribution is neglected in the current work. Summary. — We have investigated the possibility of generating a resonance-like structure X(5777) in the B_s^0π^+ distribution in the reaction B_c^+→ B_s^0π^+π^0. The proposed rescattering processes have several advantages that may help us to establish a non-resonance interpretation of some XYZ particles, i.e., the TS mechanism. First, the TS kinematical conditions are perfectly fulfilled in those triangle rescattering diagrams. Second, the weak BK̅ (I=1) interaction does not support the existence of a narrow dynamically generated resonant or bound state. Third, all of the relevant couplings in the rescattering diagrams are under good theoretical control, which reduces the model dependence of the final results. Furthermore, the relevant backgrounds in this channel are also expected to be simple. Therefore, if one observes the X(5777) structure in the invariant mass spectrum of B_s^0π^+, it is very likely that this structure originates from the TS and is not a genuine particle. The analysis of this paper can be extended straightforwardly to the charge-conjugate channel B_c^-→B̅_s^0π^-π^0. The corresponding experiments could be performed at LHCb. Note, however, a disadvantage of the proposed rescattering processes: there is a neutral pion in the final state. For the LHCb experiments, it is not easy to identify a neutral pion, and thus this poses a severe challenge. Acknowledgments. — X. H. Liu is grateful to C. Hanhart for stimulating discussions concerning some of the material presented here. Helpful discussions with L. Y. Dai, C. W. Xiao and W. Wang are also gratefully acknowledged. This work is supported by the DFG and the NSFC through funds provided to the Sino-German CRC 110 “Symmetries and the Emergence of Structure in QCD” (NSFC Grant No. 11261130311). Chen:2016spr H. X. Chen, W.
Chen, X. Liu, Y. R. Liu and S. L. Zhu, arXiv:1609.08928 [hep-ph].Brambilla:2010csN. Brambilla et al.,Eur. Phys. J. C 71, 1534 (2011). Olsen:2014qnaS. L. Olsen,Front. Phys. (Beijing) 10, no. 2, 121 (2015)[arXiv:1411.7738 [hep-ex]].Brambilla:2014jmpN. Brambilla et al.,Eur. Phys. J. C 74, no. 10, 2981 (2014). Chen:2016qjuH. X. Chen, W. Chen, X. Liu and S. L. Zhu,Phys. Rept.639, 1 (2016). Esposito:2016nozA. Esposito, A. Pilloni and A. D. Polosa,Phys. Rept.668, 1 (2016).RMP F.-K. Guo, C. Hanhart, U.-G. Meißner, Q. Wang, Q. Zhao and B.-S. Zhou, commissioned article for Rev. Mod. Phys. (2017). Chen:2011pvD. Y. Chen and X. Liu,Phys. Rev. D 84, 094003 (2011). Bugg:2011jrD. V. Bugg,Europhys. Lett.96, 11002 (2011). Swanson:2014traE. S. Swanson,Phys. Rev. D 91, no. 3, 034009 (2015). Aitchison:1969tqI. J. R. Aitchison and C. Kacser,Phys. Rev.173, 1700 (1968). Coleman:1965xmS. Coleman and R. E. Norton,Nuovo Cim.38, 438 (1965). Bronzan:1964zzJ. B. Bronzan,Phys. Rev.134, B687 (1964). Schmid:1967ojmC. Schmid,Phys. Rev.154, no. 5, 1363 (1967). Wu:2011yxJ. J. Wu, X. H. Liu, Q. Zhao and B. S. Zou,Phys. Rev. Lett.108, 081803 (2012). Wang:2013cyaQ. Wang, C. Hanhart and Q. Zhao,Phys. Rev. Lett.111, no. 13, 132003 (2013). Guo:2014iyaF. K. Guo, C. Hanhart, Q. Wang and Q. Zhao,Phys. Rev. D 91, no. 5, 051504 (2015). Ketzer:2015tqaM. Mikhasenko, B. Ketzer and A. Sarantsev,Phys. Rev. D 91, no. 9, 094015 (2015). Szczepaniak:2015ezaA. P. Szczepaniak,Phys. Lett. B 747, 410 (2015). Guo:2015umnF. K. Guo, U.-G. Meißner, W. Wang and Z. Yang,Phys. Rev. D 92, no. 7, 071502 (2015). Liu:2015taaX. H. Liu, M. Oka and Q. Zhao,Phys. Lett. B 753, 297 (2016). Liu:2015feaX. H. Liu, Q. Wang and Q. Zhao,Phys. Lett. B 757, 231 (2016). Achasov:2015uuaN. N. Achasov, A. A. Kozhevnikov and G. N. Shestakov,Phys. Rev. D 92, no. 3, 036003 (2015). Guo:2016bklF. K. Guo, U.-G. Meißner, J. Nieves and Z. Yang,Eur. Phys. J. A 52, no. 10, 318 (2016). Roca:2017bvyL. Roca and E. Oset,arXiv:1702.07220 [hep-ph]. Landau:1959fiL. D. 
Landau,Nucl. Phys.13, 181 (1959). Brambilla:2004wfN. Brambilla et al. [Quarkonium Working Group],hep-ph/0412158.Du:1988wsD. s. Du and Z. Wang,Phys. Rev. D 39, 1342 (1989). Sun:2015exaJ. Sun, N. Wang, Q. Chang and Y. Yang,Adv. High Energy Phys.2015, 104378 (2015)[arXiv:1504.01286 [hep-ph]].Wirbel:1985jiM. Wirbel, B. Stech and M. Bauer,Z. Phys. C 29, 637 (1985). Olive:2016xmwC. Patrignani et al. [Particle Data Group],Chin. Phys. C 40, no. 10, 100001 (2016). Liu:2012zyaL. Liu, K. Orginos, F. K. Guo, C. Hanhart and U.-G. Meißner,Phys. Rev. D 87, no. 1, 014508 (2013). Mohler:2013rwaD. Mohler, C. B. Lang, L. Leskovec, S. Prelovsek and R. M. Woloshyn,Phys. Rev. Lett.111, no. 22, 222001 (2013). Guo:2006fuF. K. Guo, P. N. Shen, H. C. Chiang, R. G. Ping and B. S. Zou,Phys. Lett. B 641, 278 (2006). Guo:2009ctF. K. Guo, C. Hanhart and U.-G. Meißner,Eur. Phys. J. A 40, 171 (2009). Liu:2009uzY. R. Liu, X. Liu and S. L. Zhu,Phys. Rev. D 79, 094026 (2009). Kolomeitsev:2003acE. E. Kolomeitsev and M. F. M. Lutz,Phys. Lett. B 582, 39 (2004). Altenbuchinger:2013vwaM. Altenbuchinger, L.-S. Geng and W. Weise,Phys. Rev. D 89, no. 1, 014026 (2014). Guo:2015dhaZ. H. Guo, U.-G. Meißner and D. L. Yao,Phys. Rev. D 92, no. 9, 094008 (2015). Lu:2016kxmJ. X. Lu, X. L. Ren and L. S. Geng,Eur. Phys. J. C 77, no. 2, 94 (2017). D0:2016mwdV. M. Abazov et al. [D0 Collaboration],Phys. Rev. Lett.117, no. 2, 022003 (2016). Albaladejo:2016epsM. Albaladejo, J. Nieves, E. Oset, Z. F. Sun and X. Liu,Phys. Lett. B 757, 515 (2016). Aaij:2016ievR. Aaij et al. [LHCb Collaboration],Phys. Rev. Lett.117, no. 15, 152003 (2016) Addendum: [Phys. Rev. Lett.118, no. 10, 109904 (2017)]. Burns:2016gvyT. J. Burns and E. S. Swanson,Phys. Lett. B 760, 627 (2016). Guo:2016nhbF. K. Guo, U.-G. Meißner and B. S. Zou,Commun. Theor. Phys.65, no. 5, 593 (2016). Yang:2016swsZ. Yang, Q. Wang and U.-G. Meißner,Phys. Lett. B 767, 470 (2017). Oller:2000fjJ. A. Oller and U.-G. Meißner,Phys. Lett. B 500, 263 (2001).Oller:2000maJ. A. 
Oller, E. Oset and A. Ramos,Prog. Part. Nucl. Phys.45, 157 (2000).Oller:1997ngJ. A. Oller, E. Oset and J. R. Pelaez,Phys. Rev. Lett.80, 3452 (1998). Au:1986vsK. L. Au, D. Morgan and M. R. Pennington,Phys. Rev. D 35, 1633 (1987).Dai:2012pbL. Y. Dai, M. Shi, G. Y. Tang and H. Q. Zheng,Phys. Rev. D 92, no. 1, 014020 (2015).Klingl:1996byF. Klingl, N. Kaiser and W. Weise,Z. Phys. A 356, 193 (1996). Daub:2012muJ. T. Daub, H. K. Dreiner, C. Hanhart, B. Kubis and U.-G. Meißner,JHEP 1301, 179 (2013).Daub:2015xjaJ. T. Daub, C. Hanhart and B. Kubis,JHEP 1602, 009 (2016). Chen:2015jglY. H. Chen, J. T. Daub, F. K. Guo, B. Kubis, U. G. Meißner and B. S. Zou,Phys. Rev. D 93, no. 3, 034030 (2016). Chen:2016mjnY. H. Chen, M. Cleven, J. T. Daub, F. K. Guo, C. Hanhart, B. Kubis, U. G. Meißner and B. S. Zou,Phys. Rev. D 95, no. 3, 034022 (2017).
How would GW150914 look with future gravitational wave detector networks? School of Physics and Astronomy & Birmingham Institute for Gravitational Wave Astronomy, University of Birmingham, Birmingham, B15 2TT, United Kingdom sgaebel@star.sr.bham.ac.uk The first detected gravitational wave signal, GW150914 <cit.>, was produced by the coalescence of a stellar-mass binary black hole. Along with the subsequent detection of GW151226, GW170104 and the candidate event LVT151012, this gives us evidence for a population of black hole binaries with component masses in the tens of solar masses <cit.>. As detector sensitivity improves, this type of source is expected to make a large contribution to the overall number of detections, but has received little attention compared to binary neutron star systems in studies of projected network performance. We simulate the observation of a system like GW150914 with different proposed network configurations, and study the precision of parameter estimates, particularly source location, orientation and masses. We find that the improvements to low frequency sensitivity that are expected with continued commissioning <cit.> will improve the precision of chirp mass estimates by an order of magnitude, whereas the improvements in sky location and orientation are driven by the expanded network configuration. This demonstrates that both sensitivity and number of detectors will be important factors in the scientific potential of second generation detector networks. 04.30.Tv, 95.85.Sz Sebastian M. Gaebel and John Veitch December 30, 2023 ======================================= § INTRODUCTION The first gravitational wave (GW) signal GW150914, from a black hole binary merger, was observed by the two Advanced LIGO (aLIGO, <cit.>) detectors in Hanford and Livingston <cit.>.
The two black holes were inferred to have masses of 36.2^+5.2_-3.8 M_⊙ and 29.1^+3.7_-4.4 M_⊙ in their rest frame, forming a merger product of mass 62.3^+3.7_-3.1 M_⊙ <cit.>. The inferred source location was the target of follow-up observations by a range of instruments spanning the electromagnetic spectrum from radio to gamma rays <cit.>. The sky localisation of this event was poorly constrained, as it is largely determined by the difference in arrival time at the active detectors, and with only two operating aLIGO detectors the position was resolved only to an annulus of constant time delay between the two sites <cit.>. However, the Advanced Virgo (AdVirgo, <cit.>) detector is currently being commissioned and will join the network in July 2017 during the second observing run of Advanced LIGO, with KAGRA <cit.> and LIGO-India <cit.> to follow <cit.>. This raises the question of how well those future networks can be expected to localize an event like GW150914, and how well its parameters could be measured with the upcoming second generation detector networks. The subsequent detection of GW151226 <cit.>, GW170104 <cit.> and the gravitational wave candidate LVT151012 <cit.> provides evidence for a population of massive black hole binaries, which are likely to produce multiple further detections in the future <cit.>. Projections for future sensitivity improvements and network configurations are given in <cit.>, which also studies the sky location performance. However, this study, in common with the majority of previous works <cit.>, considers only the binary neutron star case. Expectations for localisation of generic systems were given in <cit.> using geometric arguments, which are a useful guide for qualitative interpretation of actual simulations in the 3+ detector case.
However, <cit.> indicate quantitative differences between such arguments and full Bayesian parameter estimation results, and qualitative differences in the two-detector network arising from the availability of amplitude measurements. Vitale et al. <cit.> studied the parameter estimation expectations for generic systems from a heavy BBH population which extends upward in mass above GW150914, while focusing mainly on mass and spin measurements. Most of their results were obtained using a network of one AdVirgo and two aLIGO detectors; the five-detector network including LIGO-India and KAGRA was considered in an appendix, but without comparing identical events. Essick et al. <cit.> studied sky localisation for short transient signals using generic burst algorithms; however, these can be systematically different from sky localisation which uses a compact binary signal model <cit.>. In this article we address the question of localisation and parameter estimation for massive BH binaries from a different angle. Using GW150914 as a template, we perform a set of simulations based on an evolving network configuration, keeping the injected signals the same. This allows us to study the improvements in parameter estimation and localisation systematically, using the initial Hanford-Livingston network as a reference, and studying the separate improvements produced by the expansion of the detector network and the general increase in sensitivity of these detectors. For similar sources such as GW151226, LVT151012 or GW170104 we expect to see qualitatively similar behaviour, although for the lower mass systems the increased visibility of the inspiral portion of the signal will give a better overall constraint on the chirp mass. This is because the phase of the inspiral portion of the signal depends primarily on the chirp mass, whereas the merger and ring-down portions depend more on the total mass (through the mass of the final BH), which can be seen in the comparisons in <cit.>.
The lower mass signals also have a greater bandwidth in frequency, which should lead to a more precise localisation in general, although we expect the relative improvements from different network configurations to be similar. We considered a variety of network configurations of GW detectors, based on the projections in <cit.>. We start with the sensitivity of the aLIGO Hanford and Livingston detectors in the first observing run (O1) and the eighth engineering run (ER8b) which immediately preceded it as a comparison point <cit.>, then add the Virgo detector with an initial noise curve as projected in <cit.>. We compare this configuration to the network of Hanford, Livingston and Virgo at design sensitivity <cit.>, and to networks expanded to include LIGO India <cit.>, KAGRA <cit.>, and both. Over the lifetime of the second generation instruments we expect the performance of the global network to improve parameter estimation in three important ways. The expansion of the global detector network will give better sky resolution and a better ability to measure the signal polarisation (the Hanford and Livingston detectors are nearly co-aligned); the improvements at low frequency <cit.> will increase the observable duration of the signals and lead to more cycles of the inspiral part of the waveform being observable <cit.>; and finally, the overall decrease in noise levels will greatly increase the signal-to-noise ratio (SNR) of the source. We investigate the effect of these improvements on sky localisation, mass measurement, and distance and inclination accuracy for a GW150914-like system. § METHOD To compare the different network configurations we use a set of sixteen simulated signals and perform the full parameter estimation for each network set-up and each signal. The number of signals was chosen to balance computational cost against capturing the GW150914-like parameter space.
For both simulation and parameter estimation we used the reduced order model of the SEOBNRv2 waveform <cit.>, which models the inspiral, merger and ringdown parts of the GW signal and includes the effect of aligned spins on both component bodies. The signal parameters were chosen to lie within the posterior distribution for the GW150914 event, so our simulations will appear to have the same relative amplitude in each detector as GW150914 did. This allowed us to easily verify that the results appeared similar to GW150914 when using the Early Hanford-Livingston detector network. We use a set of different network configurations which are designated by an identifier with three parts: the detector network, the lower cut-off frequency, and the noise spectrum. The detector network is a combination of aLIGO Hanford (H), aLIGO Livingston (L), AdVirgo (V), LIGO-India (I), and KAGRA (J). The noise spectrum is labelled as either “Early”, which indicates the empirical ER8b/O1 spectra for H and L and the projected early low curve for Virgo <cit.>, or “Design” for the expected sensitivities at final design specification <cit.>. The lower cut-off frequency is either 10 Hz or 30 Hz. We selected twelve combinations of these to form the range of simulated network configurations. These configurations are shown, along with results averaged over the 16 simulations, in table <ref>. We neglected 10 Hz runs for the early networks as they would yield little benefit due to the high noise levels at low frequency. KAGRA and LIGO India are still in the construction phase and cannot be expected to start observing for some years; therefore the HLV detectors form the basis of the runs at design sensitivity. As the actual orientation of the future LIGO India detector was not available to us, we assumed the arms to be aligned to North and East.
Two-detector HL runs are included to represent the minimal possible configuration and to give results comparable to GW150914 <cit.>. The power spectral densities for all sensitivity curves are given in figure <ref>. The prior range of the component masses of the binary is 10-80 M_⊙, wide enough to contain all possible simulations drawn from the GW150914 posterior and the posteriors of those systems. Similarly, the duration of the data segment which was analysed was set to 8 s for 30 Hz runs and 160 s for the 10 Hz runs, based on the time spent in the analysed frequency band combined with a safety margin. We chose to set the noise realization to zero, meaning that the data contains only the simulated signal, so as to avoid noise perturbations affecting the comparisons. We assume no uncertainty in the phase and amplitude calibration of the detectors for our main results, although we additionally consider the effect of 10% uncertainty in amplitude and 10^∘ in phase in the appendix. All parameter estimation was performed using LALInference <cit.> in its nested sampling mode. § RESULTS Individual parameters are characterized by different features of the waveform, and are therefore affected differently by the improvements in noise levels or network extensions. This reflects the distinction between intrinsic parameters, which are properties of the source itself, and extrinsic parameters, which are related to the relative positions and orientations of the source and the detectors. For this reason we present the results for different parameters in their individual sections, which also include the discussion of the results and their comparison to the expected scalings with SNR that have been derived from analytic approximations in the literature. Table <ref> contains an overview of the results for all discussed parameters with each run averaged over all simulations.
The quantities used to measure the precision of the parameter estimation are the sizes of the 90% credible interval (C.I.) or area (C.A.) for the chirp mass, distance, and sky area, and the value from the maximum likelihood sample (ℒ_max) for the SNR. All figures in the section show the results for only one simulated signal, to increase readability and to show the qualitative behaviour. The combined results for all simulations are given in table <ref>. §.§ Signal to Noise Ratio We use the optimal signal-to-noise ratio, which we define as SNR = √(⟨h|h⟩), as a metric for comparing the strength of a signal against the background. We define ⟨·|·⟩ as the noise-weighted inner product ⟨a|b⟩ = 4 ∫_f_min^f_max df a^*(f) b(f)/S(f), with h(f) and S(f) being the waveform template and noise power spectral density respectively. f_min is the low frequency cut-off, which is chosen according to the noise properties so that the signal does not accumulate significant SNR below that value. f_max is chosen to be above the highest frequency contribution in the signal. This relation shows that both lowering the cut-off frequency f_min and decreasing the noise S(f) improve the SNR, by either increasing the interval over which the SNR can be accumulated or increasing the integrand itself <cit.>. The amount by which these increase the value depends on the noise spectrum in the region of interest. Observing a signal in N detectors is expected to increase the SNR by a factor of √(N) relative to using a single detector only, not taking sensitivity patterns into account, since SNR adds in quadrature for independent measurements. Figure <ref> shows that the noise levels start to rise quickly for frequencies lower than ≈50 Hz for all sensitivities. This explains why we see only minor differences in the SNR between 10 Hz and 30 Hz runs, where it increases by a factor of only 1.01-1.05. When increasing the detector sensitivity to the full design sensitivity, however, the SNR increases by a factor of ≈2.7-3.0.
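As an illustration of these definitions (a sketch, not code from this work), the discretized noise-weighted inner product and the quadrature rule for combining detectors can be written as follows; all numerical values are invented for the example.

```python
import numpy as np

def inner_product(a, b, psd, df):
    """Discretized noise-weighted inner product <a|b> = 4 Re sum a*(f) b(f) / S(f) df."""
    return 4.0 * np.real(np.sum(np.conj(a) * b / psd)) * df

def snr(h, psd, df):
    """Optimal SNR = sqrt(<h|h>)."""
    return np.sqrt(inner_product(h, h, psd, df))

# Toy example: flat-spectrum template over a flat PSD (illustrative numbers only).
h = np.full(100, 1e-23, dtype=complex)   # frequency-domain template
psd = np.full(100, 1e-46)                # one-sided noise PSD
df = 1.0                                 # frequency resolution in Hz
rho = snr(h, psd, df)                    # -> 20.0 for these numbers

# Independent detectors add in quadrature: sqrt(20^2 + 20^2 + 10^2) = 30.
rho_network = np.sqrt(20.0**2 + 20.0**2 + 10.0**2)
```

Halving the PSD everywhere raises the SNR by √2, which is the sense in which "decreasing the noise S(f) improves the SNR" above.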
The gains from adding Virgo to the network in the low sensitivity case are minor, with a factor of 1.03. This is, again, expected as the early low AdVirgo sensitivity is significantly less than that of the aLIGO detectors, so it does not contribute much to the SNR. In the high sensitivity case the difference is noticeable, with Virgo increasing the SNR by a factor of ≈1.2, bringing the total to ≈84 for the whole network. The fourth detector increases the combined SNR by a factor of ≈1.2 for LIGO India and ≈1.3 for KAGRA, which suggests that KAGRA was in a more advantageous position for this event. Adding both LIGO India and KAGRA to the 3 detector setups brings the total SNR to ≈116, which is ≈1.4 times higher than the three detector value. The gains are roughly compatible with the expected values derived above, though we would not expect an exact match as the argument neglects differences in noise spectra and the impact of the antenna patterns for the different detectors. The measured SNRs for one simulation are shown in the right-hand panel of figure <ref>, while the combined results for all simulations are available in table <ref>. §.§ Chirp Mass The chirp mass, defined as ℳ = (m_1 m_2)^3/5 (m_1 + m_2)^-1/5, is the most important quantity in determining the frequency evolution of a GW from compact binaries. Accordingly ℳ can be measured precisely from the phase evolution of the waveform <cit.>, in contrast to extrinsic parameters such as the distance, which are measured from the signal amplitude as observed in multiple detectors. Generally speaking, the measurement of the chirp mass improves with the SNR according to the following relation for post-Newtonian inspiral signals <cit.>: Δ(ln ℳ) ∝ SNR^-1 ℳ^5/3, which applies when the second order expansion of the posterior around the maximum is a good approximation, in the limit of high SNR <cit.>.
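As a quick check of the definition above, the chirp mass can be computed directly; the component masses used here are illustrative GW150914-like values in solar masses, not the simulated parameters of this study.

```python
def chirp_mass(m1, m2):
    """Chirp mass M_c = (m1 * m2)**(3/5) * (m1 + m2)**(-1/5), same units as the inputs."""
    return (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

# Illustrative component masses (not the values used in the paper's simulations):
mc = chirp_mass(36.0, 29.0)   # ≈ 28.1 in solar masses
```

For equal masses the formula reduces to m / 2^(1/5), a convenient sanity check on the exponents.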
We therefore expect the improved sensitivity to be helpful, since it reduces the relative obfuscation of the waveform due to noise, increasing the SNR. Additionally, the sensitivity improvement at low frequencies, allowing for a reduced lower frequency limit for the observed signal, is expected to be beneficial as it enables us to detect additional cycles of the inspiral which contain information about the chirp mass. If each detector were equally sensitive the overall SNR would scale as √(N), and therefore the measurement error would be expected to scale as 1/√(N), with N being the number of detectors <cit.>. The results for the full set of networks considered are shown in table <ref>. We report the detector-frame chirp mass measurements, which are affected by the red-shift of the source, but are the most easily comparable when looking at multiple systems which appear similar to the detectors. We find that with the ER8/O1 HL sensitivity the 90% credible interval Δ_ℳ had a mean of 5.0, which is slightly higher than the range of 3.9 reported in <cit.> for GW150914 using the SEOBNRv2 model, although this can be largely attributed to our use of f_min = 30 Hz as opposed to 20 Hz. When using a lower cut-off of 20 Hz we find a mean width of the 90% credible interval of 3.1 ± 0.3, which is slightly smaller than the GW150914 results, as is to be expected since we assume perfect calibration and a zero-noise realisation. When adding detectors we see minor gains, improving the chirp mass estimate by factors of ≈1.02 and ≈1.04-1.15 per detector added, for Early and Design sensitivity runs respectively. This is due to the relative sensitivity of the Advanced LIGO and Advanced Virgo instruments, such that the SNR increases less than the √(N) formula implies. Improving the sensitivity proves much more rewarding, yielding an improvement factor of ≈2.4-2.7 when using the HL or HLV set-up at 30 Hz. The gains with a lowered frequency cut-off are even higher, improving the measurements by a factor of ≈7.3-8.7.
The left panel of figure <ref> shows a representative of each of the three distinct groups with nearly identical distributions. These groups are composed of the high cut-off, low sensitivity runs in the very wide case, the design sensitivity 30 Hz runs for the intermediate peak, and the sharply peaked results from the two 10 Hz runs. §.§ Sky localisation The sky localisation is mainly determined by the timing measurements between the individual detectors <cit.>. This means that there are two components to the measurement: the layout and synchronization of the detectors, and the measurement of the time delay using this external information. The main factor in how the layout of the detector network affects the sky localisation is the distances between detectors. Larger baselines for the measurement of differences in arrival time translate into smaller relative errors, which then result in smaller uncertainties on the sky angles <cit.>. The timing accuracy is inversely proportional to both the SNR and the effective bandwidth <cit.>. For small areas we can approximate the relevant section of the sphere as being flat; the localisation is therefore proportional to the square of the timing error, so we get σ_area ∝ σ_RA σ_Dec ∝ SNR^-2. Even assuming perfect measurements, the nature of triangulation limits our ability to localize the source. Using only triangulation, with two detectors the source can be constrained to a circle, with three detectors to two points, and only a fourth detector allows us to narrow the location down to a single point. As the measurements are not perfect we do, however, still expect improvements from additional detectors beyond the fourth. Because adding detectors not only provides additional baselines for triangulation but also increases the SNR (see section <ref>), we expect substantial improvements in the sky localisation when detectors are added to the network.
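The scalings above can be made concrete in a small sketch; proportionality constants are omitted, so only ratios between configurations are meaningful, and the numbers are illustrative rather than taken from this work.

```python
def timing_sigma(snr, bandwidth):
    """Timing accuracy scaling: sigma_t ∝ 1 / (SNR * effective bandwidth).
    The proportionality constant is omitted; only ratios are meaningful."""
    return 1.0 / (snr * bandwidth)

def area_ratio(snr_new, snr_old):
    """Expected shrinkage of the sky area from sigma_area ∝ SNR^-2."""
    return (snr_old / snr_new) ** 2

# Tripling the SNR (roughly the Early -> Design improvement quoted above)
# is expected to shrink the sky area by about a factor of 9.
shrinkage = area_ratio(3.0, 1.0)
```

This is only the SNR part of the story; the text's larger observed gains come from the extra baselines and broken degeneracies that the scaling argument does not capture.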
These gains should be the largest for the third detector, as it reduces the annulus to two single points, and smaller for the fourth detector, which breaks the last degeneracy stemming from the symmetry under reflection in the plane of the three detectors. Another advantage of an expanded detector network is rooted in the non-uniform antenna pattern of GW detectors, which is shown in the upper panel of figure <ref>. This causes detectors to have “blind spots” with low sensitivity, which can be compensated for by carefully choosing the position and orientation of other detectors. This helps to provide uniform sensitivity across the sky, and could increase the chances of making prompt electromagnetic follow-up observations of sources <cit.>. In addition to the timing triangulation, the relative amplitude of the source in each detector, as determined by the angle-dependent antenna response functions, provides additional information about the position of the source which is naturally incorporated in our coherent analysis. This can break the ring-like or bimodal degeneracy in the two or three detector cases. We observe that adding Advanced Virgo to the two Advanced LIGO detectors improves the localisation by factors of ≈22 and ≈64 for ER8b/O1 and design sensitivities respectively. Adding a fourth detector improves it by a factor of ≈2.8 when adding LIGO India, or by ≈4.0 in the case of KAGRA, over the HLV setup. The difference between these two possible 4 detector configurations is due to differences in sensitivity as well as the antenna pattern. The full 5-detector configuration yields an area of ≈0.1 deg^2 on average, which is smaller than the 3-detector result by a factor of ≈5.3. The areas range from tenths to hundreds of square degrees and are given in table <ref>. Unexpectedly, for the 3+ detector networks lowering the cut-off frequency did not improve the localisation despite the slight increase in SNR.
We found that this is indirectly caused by a shift in the distance posterior, which causes the marginal distribution on the sky angles to widen via the correlation shown in figure <ref>, where the distribution of the angular parameters on the sky is larger at higher distances. While only the right ascension is shown, this behaviour is identical for the declination. The cause of the shift in distance seems to be that in the 10 Hz runs the improvement in SNR, and therefore in amplitude uncertainty, has translated slightly asymmetrically into a change in the distance posterior, influenced by the uniform-in-volume prior (p(D) ∝ D^2) which tends to favour higher distances. The effect is small, as can be seen from the relatively small change in Δ_D, but it does seem to appear for all scenarios with 3+ detectors where the position is well constrained, when comparing 10 Hz and 30 Hz cut-off frequencies. In the two-detector case the correlation between higher distances and larger areas is reversed, so that although the posterior is shifted a little, the effect is to reduce the area by a factor of 1.08, not increase it. This effect seems to vary between the different simulations in our set, depending on the precise geometry of the source and detectors, so we do not believe this to be an important systematic trend. The small difference can be seen in figure <ref> by comparing the red dot-dashed and cyan solid lines in the lower panels. §.§ Distance and Inclination The inclination angle is the angle between the line of sight from the source to the observer, N⃗, and the orbital angular momentum vector, L⃗, which is aligned with the total angular momentum J⃗ in the aligned-spins case considered here. It is a parameter which is typically weakly constrained by GW observations, since it affects the relative amplitudes of the + and × polarisations, which are not individually resolvable by a single interferometer.
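The pull of the uniform-in-volume prior toward larger distances can be reproduced in a toy one-dimensional posterior; the Gaussian "likelihood" and its parameters below are invented for illustration and are not fitted to any of the runs in this work.

```python
import numpy as np

# Toy Gaussian likelihood in distance, multiplied by the p(D) ∝ D^2 prior.
D = np.linspace(100.0, 1000.0, 9001)     # distance grid in Mpc, 0.1 Mpc spacing
D0, sigma_D = 500.0, 50.0                # invented likelihood peak and width
likelihood = np.exp(-0.5 * ((D - D0) / sigma_D) ** 2)
posterior = likelihood * D ** 2          # un-normalized posterior
mode = D[np.argmax(posterior)]           # sits above D0 because of the prior

# Analytic mode of the product: D = (D0 + sqrt(D0^2 + 8 sigma^2)) / 2
analytic_mode = 0.5 * (D0 + np.sqrt(D0 ** 2 + 8.0 * sigma_D ** 2))
```

The narrower the likelihood becomes (higher SNR), the smaller this prior-driven shift, which is consistent with the effect above being small but persistent.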
Restricting to the dominant l = m = 2 mode, the signal observed by the detector can be written as <cit.> h(t) = 1/2 (1 + cos^2(θ_JN)) F_+ A(t) cos Φ(t) + cos(θ_JN) F_× A(t) sin Φ(t), with θ_JN being the inclination angle, A(t), Φ(t) the amplitude and phase of the GW, and F_+, F_× the detector response functions for the + and × polarisations, which depend on the relative position and polarisation of the source (see fig. <ref>). As the two aLIGO detectors are nearly co-aligned they cannot on their own resolve both polarisations very well, leading to a degeneracy between left and right elliptically polarised waves, i.e. under the transformation θ_JN ↦ π - θ_JN. As the amplitude A(t) is inversely proportional to the luminosity distance between source and observer <cit.>, there is a further relationship between the inclination angle and the distance which allows edge-on nearby sources to appear similar to distant face-on (or face-off) sources. Together, these degeneracies produce the characteristic V-shaped posterior distributions shown in e.g. fig. 2 of <cit.>, and the bimodal θ_JN marginal distributions shown in the right panel of fig. <ref> for the HL networks and for the Early HLV network. While the inclination angle itself has little physical importance, the distance is important not only for the 3D source localisation, but also for the measurement of the masses in the source frame, which needs to take the cosmological red-shift into account. This effect is already significant for GW150914, with a red-shift of only ≈0.1, and will only become more important for future detector networks, and especially third generation networks <cit.>, as higher sensitivities greatly increase the number of observable sources at high distances. Figure <ref> shows the posterior distribution for these two related parameters, with numeric values for the 90% credible intervals Δ_D and Δ_θ_JN available in table <ref>. The main feature is that both parameters are only weakly constrained for all network configurations.
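A minimal sketch of the inclination-dependent prefactors in the expression above makes the θ_JN ↦ π - θ_JN degeneracy explicit; it uses only those prefactors, not the full detector response F_+, F_×.

```python
import numpy as np

def pol_amplitudes(theta_jn):
    """Inclination-dependent amplitude factors of the + and x polarisations
    for the dominant l = m = 2 mode."""
    a_plus = 0.5 * (1.0 + np.cos(theta_jn) ** 2)
    a_cross = np.cos(theta_jn)
    return a_plus, a_cross

# theta_jn and pi - theta_jn give identical a_plus and identical |a_cross|;
# only the sign of the cross term flips, which nearly co-aligned detectors
# (measuring essentially one polarisation combination) cannot resolve.
ap1, ac1 = pol_amplitudes(0.4)
ap2, ac2 = pol_amplitudes(np.pi - 0.4)
```

A detector pair with different orientations samples different F_+/F_× combinations and can therefore separate the two branches, which is the qualitative gain from Virgo and KAGRA discussed below.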
We found that not only is the two-detector early HL network unable to break the degeneracy between θ_JN and π - θ_JN (left and right hand elliptically polarised waves), but the Early AdVirgo detector was not sensitive enough in comparison to Early aLIGO to do this either. With the HL network at design sensitivity most, but not all, of the signals were isolated to one of the θ_JN modes. As soon as three or more design sensitivity detectors are available the degeneracy was broken and the signal was isolated to only one of the lobes in θ_JN; the widths of the 90% credible intervals for these single-moded posterior distributions are shown in the rightmost column of table <ref>. This shows the benefit of having a global network of detectors that are able to measure the amplitude of both GW polarisations and therefore distinguish θ_JN. The addition of KAGRA makes a further qualitative difference to the results, improving the width of the 90% credible interval by a factor of ≈1.5-1.8 and shifting the peak in figure <ref> to the true value, overcoming the prior, which tends to prefer more distant, face-off orientations. In table <ref> this can be seen in the large differences between the 4-detector configurations, depending on whether they include KAGRA or LIGO India. This is the only qualitative change and illustrates the importance of having detectors in various locations and orientations available to break degeneracies and extract parameters. This behaviour is only observed without calibration uncertainty; the impact of introducing calibration uncertainty is discussed further in the appendix. As there is an inverse relationship between distance and the measured signal amplitude, which is the quantity that is actually measured by the detectors, one might expect the fractional uncertainty on distance to scale as ΔD/D ∝ SNR^-1, since the absolute uncertainty on amplitude is set by the noise level and the absolute value is proportional to the SNR <cit.>.
However, due to the correlations, the improved and extended detector network has a far smaller effect on this set of parameters compared to the mass parameters. The sizes of the 90% credible intervals for distance and inclination respectively decrease from ≈306 Mpc and ≈π/4 for the ER8/O1 2-detector network to ≈90 Mpc and ≈π/7 for the complete design sensitivity network. This under-performs in comparison to the improvement of ∼4.5 that one might expect from the SNR^-1 scaling. In a fashion similar to the slight worsening of the sky localisation, the size of the 90% credible distance interval does not always decrease when switching to a 10 Hz lower frequency cut-off. This is also caused by the small shift to higher distances, although the relative errors do decrease slightly, as expected. § CONCLUSION Although based around the GW150914 system, the results presented here give a good indication of the qualitative behaviour of parameter estimation for binary black hole systems as the global network of GW detectors continues to expand and improve in sensitivity. A less extensive subsequent analysis using the same procedure found that GW151226 shows similar behaviour. There are minor differences caused by the lower mass, which gives more importance to the inspiral over the merger and ring-down, as noted in <cit.>. A large difference was observed in the ratios of sky areas, caused by the initially poor localisation of the early HL network. This is highly dependent on the time delay between the two LIGO detectors: for systems which appear with a near-maximal time delay (∼10 ms) between the two sites, the position is constrained to a small ring oriented near the projection of the vector between the sites onto the sky. On the other hand, if the time delay is near zero, as in the case of GW151226 (∼1.1 ms <cit.>), the ring has a large opening angle and therefore the projected area is much larger.
Adding a third or more detectors to the network will mitigate this to a large degree <cit.>. While the observed improvements in chirp mass were comparable to the approximate scaling relationship with SNR that may be derived from Fisher matrix calculations, they tend to under-perform slightly. This is expected, since the detectors are not identical and the noise curves differ, especially at the low frequency end which is relevant for the chirp mass measurement. For distance and inclination the behaviour is poorer, due to the correlation and degeneracy shown in the posterior distribution, and so the scaling relationship based on the Fisher matrix expansion around a single maximum cannot hold, even for SNRs of 26 and above. Instead, the greatest effect comes from the expansion of the network and the elimination of a large region of the sky, and from the relative geometry of source and detectors. In general we expect the breaking of degeneracies to play an important role, but one that can vary significantly between different sky positions, as the relative detector responses change the amplitude of the signal in each detector. With the combined improvements in sky localisation and distance measurement, the volume to which future coalescence events will be constrained can be expected to decrease substantially as detectors are added and improved. As soon as a third detector joins the network, the area which needs to be covered by electromagnetic observers decreases by factors of 20-60, which can be seen by comparing the HL and HLV networks in table <ref>. This will allow for more complete coverage and greater depth to increase the chance of observing potential counterparts, or the (statistical) identification of a host galaxy <cit.>. Breaking the distance-inclination degeneracy will also aid the ability to perform cosmology with GW sources <cit.>. For more detailed knowledge about the intrinsic properties of the sources themselves, the main driver is the improvement of the sensitivity.
In the case of the chirp mass the most important region is at low frequency, where decreasing the cut-off from 30 Hz to 10 Hz can tighten the constraints by an order of magnitude. In summary, although the field of gravitational wave astronomy as a true observational science has only just begun, the currently planned upgrades and expansions of the global network of detectors offer good observational prospects for heavy stellar mass binary black holes such as GW150914. Our work highlights the differing roles of (low-frequency) sensitivity and network geometry in aspects of constraining the source, indicating that a global network of comparable detectors will be necessary to achieve the best results for both mass estimates and source localisation. We thank Carl-Johan Haster, Simon Stevenson, Christopher Berry, and Alejandro Vigna-Gómez for useful discussions, and Salvatore Vitale for commenting on a draft of this manuscript. JV and SG were supported by STFC grants ST/K005014/1 and ST/M004090/1 respectively. We gratefully acknowledge UK Advanced LIGO computing resources supported by STFC grant ST/I006285/1. § CALIBRATION UNCERTAINTY In addition to the main network properties investigated above, we replicated the analysis with a calibration uncertainty of 10% in amplitude and 10^∘ in phase, using the same interpolating spline model <cit.> as used in <cit.>, which is a conservative estimate of the uncertainty that may be expected for on-line calibration (and therefore relevant for initial parameter estimates) <cit.>. At 10 Hz we used only the HL and HLVI configurations, due to the large parameter spaces and consequent resource consumption. The most significant differences appear in the extrinsic parameters. The sky localization worsens by a factor of 2.6-3.3 for the early networks, and 7.5-10 at design sensitivity.
For distance and inclination angle the calibration uncertainty worsens the constraints by factors of 1.2-1.4 for the HLVI configuration and 1.6 to 1.8 for networks including KAGRA. The observation of a sharp peak around the true value of both inclination and distance is a feature that starts to appear for 4+ design sensitivity networks; it is absent when using the 10%/10^∘ calibration uncertainty. The chirp mass is affected to a much smaller degree: it worsens by factors of 1.1 for early networks and 1.2-1.4 at design sensitivity. The changes to the SNR are at a level below 1%. Numeric values for all runs including calibration uncertainty are given in table <ref>. We observe that, while the improvement of individual detectors and the expansion of the network are important, improving the calibration is essential to obtaining the best possible results from the available detectors.
arXiv:1703.08988v2 [astro-ph.IM]. Sebastian M. Gaebel and John Veitch, "How would GW150914 look with future GW detector networks?" (27 March 2017).
Department of Chemistry and Biochemistry, University of Texas at Austin, Austin, TX 78712 USA McKetta Department of Chemical Engineering, University of Texas at Austin, Austin, TX 78712 USA Using a recently introduced formulation of the ground-state inverse design problem for a targeted lattice [Piñeros et al.,J. Chem. Phys. 144, 084502 (2016)], we discover purely repulsive and isotropic pair interactions that stabilize low-density truncated square and truncated hexagonal crystals, as well as promote their assembly in Monte Carlo simulations upon isochoric cooling from a high-temperature fluid phase. The results illustrate that the primary challenge to stabilizing very open two-dimensional lattices is to design interactions that can favor the target structure over competing stripe microphases. Designing Pairwise Interactions that Stabilize Open Crystals: Truncated Square and Truncated Hexagonal Lattices Thomas M. Truskett December 30, 2023 =============================================================================================================== § INTRODUCTION Manufacture of materials with precisely defined nanometer scale structural features remains a formidable challenge. While some top-down fabrication methods (e.g. lithography) have improved significantly in recent years to help address this challenge,<cit.> such approaches remain prohibitively slow and expensive for many commercial applications. Self-assembly–the spontaneous ordering of a material's constituent building blocks to arrive at a targeted equilibrium state–provides a promising (if still nascent) bottom-up alternative to create such nanostructured materials. 
In self-assembly, specific structural control is achieved through systematic modification of the relevant interactions of the building blocks to drive their organization into desired morphologies,<cit.> a strategy enabled by statistical mechanical modeling and recent advances in colloid science and materials chemistry.<cit.> In designing interactions for targeted self-assembly, one can consider forward or inverse approaches.<cit.> Forward methods often discover new systems via trial-and-error searches through parameter space, where the key properties of the candidate materials are measured and ranked in terms of their `fitness' relative to those of the target. Such Edisonian approaches, although simple to implement, can unfortunately be inefficient and expensive design strategies. Inverse approaches, on the other hand, offer a more direct means of design, typically via the use of statistical mechanical models solved via constrained optimization algorithms. Though they present significant theoretical and computational challenges, inverse methods can be highly effective at helping to navigate the rugged, high-dimensional fitness hypersurfaces encountered in materials design problems. A classic example of an inverse design problem is the determination of the parameters {α} of a given isotropic pair potential ϕ(r;{α}) that maximize the stability of a specified periodic lattice structure in the ground state. Studies focusing on this type of optimization problem have employed a number of different constraints on the pair potential as well as different objective functions quantifying various aspects of target-structure stability, and have consequently discovered a diverse array of interaction types capable of stabilizing even relatively open two-dimensional (2D) and three-dimensional (3D) morphologies (e.g., honeycomb <cit.>, kagome<cit.>, simple cubic<cit.>, and diamond <cit.> lattices, to mention a few).
For many such cases, systems of particles interacting via the designed pair potentials have been found to spontaneously self-assemble into the target structures upon cooling from a high-temperature fluid phase in Monte Carlo simulations. Ground-state inverse design problems such as these can be formulated as analytical non-linear programs<cit.> amenable to high-performance numerical solvers, such as those integrated into GAMS (General Algebraic Modeling System)<cit.>. This allows for extensive study of theoretical material design questions that were previously inconvenient (or, in some cases, impractical) to address with slower-converging stochastic optimization methods such as simulated annealing. One recent example relates to understanding qualitative differences between interactions designed to maximize the density range over which a target lattice is the stable ground state versus those designed to maximize the target structure's thermal stability (encoded in the magnitude of the free energy difference between the target and its competitors).<cit.> Another pertains to the ability of isotropic pair potentials to stabilize lattices with highly asymmetric angular distributions of particles at a given distance;<cit.> an archetypal example is the 2D snub-square lattice, which has only a single particle in its third coordination shell. Here, we adopt this type of formulation to test the extent to which isotropic, repulsive pair potentials can be designed to stabilize ground states of particles organized in low-density periodic lattice structures. We further use Monte Carlo simulations to study whether particles interacting via the designed pair potentials can readily assemble into the target structures from the fluid following a rapid temperature quench.
Porous materials such as these, more commonly stabilized by directional attractive interactions (e.g., physical `bonds' between patchy colloids<cit.>), can find application in optical<cit.>, chemical storage<cit.>, and separation<cit.> technologies. Thus, the discovery of new ways to assemble them from a wide variety of material building blocks and interaction types remains an active area of research. The specific periodic structures that we focus on in this investigation are the 2D truncated square (TS) and truncated hexagonal (TH) lattices, which are characterized by central octagonal or dodecagonal motifs, respectively, that resemble `pores' of empty space within the matrix of surrounding lattice particles. The TH lattice exhibits one of the lowest packing fractions for a 2D close-packed system (η ≈ 0.39), which is approximately half that of the close-packed square lattice and two thirds that of the close-packed honeycomb lattice; the packing fraction of the TS lattice is approximately 12% lower than that of the honeycomb lattice if the two are compared in their respective close-packed states. The balance of this article is organized as follows. In section <ref>, brief descriptions of the ground-state inverse design problem, the strategy we adopt for determining which structures closely compete with the target lattice, and the Monte Carlo simulations that we use to observe assembly from the fluid phase are presented. The results of our study, including the optimized pair potentials and an analysis of the target structures assembled in Monte Carlo simulations, are discussed in section <ref>, where the differences in designed potentials and assembly behaviors of the TS and TH target lattices are also explored. Concluding remarks and implications of the work are presented in section <ref>.
§ METHODS §.§ Design Model Our design model is framed around an analytical formulation of the inverse ground-state problem for a target lattice in terms of constraints on the interparticle interactions [provided by the form of the pair potential, ϕ(r;{α})] and a choice of objective function. For this work, we define ϕ(r;{α}) as ϕ(r/σ) = ϵ{ A(r/σ)^-n + ∑_i=1^N_h λ_i (1 - tanh[k_i(r/σ - d_i)]) + f_shift(r/σ) } H[(r_c - r)/σ], where A, n, λ_i, k_i, d_i are design parameters (i.e., {α}), N_h is the number of hyperbolic tangent terms used in the pair potential, H is the Heaviside function, r_c is the cut-off radius, and f_shift(r/σ) = P(r/σ)^2 + Q(r/σ) + R is a quadratic shift function added to enforce ϕ(r_c/σ) = ϕ'(r_c/σ) = ϕ''(r_c/σ) = 0. In what follows, N_h = 2, 3 for the TS and TH lattices, respectively. We require ϕ(r/σ) > 0 and ϕ'(r/σ) < 0 to ensure a monotonically decreasing (i.e., purely repulsive) pair potential which is flexible and can mimic the various soft-repulsive effective (i.e., center-of-mass) interactions that can be observed between, e.g., solvated star polymers, dendrimers, micelles, and microgel particles. Of course, additional (or simply different) constraints could be explored in future studies for designing assemblies of specific material systems. For notational convenience, we implicitly nondimensionalize quantities by appropriate combinations of ϵ and σ. As described in detail previously,<cit.> with interactions of this type one can analytically formulate a nonlinear program whose numerical solution provides pair potential parameters that minimize the objective function F = ∑_j (μ_t - μ_l,j). Here, μ_t is the zero-temperature (T = 0) chemical potential of the target lattice at a specified density ρ_0, and μ_l,j is that of an equi-pressure lattice j from a specified set of competitive `flag-point' structures (discussed below); the sum is over all such flag-point competitors.
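A minimal numerical sketch of this pair potential (in units of ϵ and σ) is given below; it computes the shift coefficients P, Q, R so that ϕ, ϕ', and ϕ'' all vanish at r_c. The parameter values used in the example are invented placeholders, not the optimized parameters reported in this work.

```python
import numpy as np

def make_potential(A, n, lam, k, d, rc):
    """Build phi(r) = A r^-n + sum_i lam_i (1 - tanh[k_i (r - d_i)]) + P r^2 + Q r + R
    for scalar r < rc (zero beyond rc), with P, Q, R chosen so that the value and
    first two derivatives vanish at the cut-off rc."""
    lam, k, d = (np.asarray(x, dtype=float) for x in (lam, k, d))

    def f(r):
        return A * r ** (-n) + np.sum(lam * (1.0 - np.tanh(k * (r - d))))

    def fp(r):   # first derivative of the unshifted potential
        return -n * A * r ** (-n - 1) - np.sum(lam * k / np.cosh(k * (r - d)) ** 2)

    def fpp(r):  # second derivative of the unshifted potential
        sech2 = 1.0 / np.cosh(k * (r - d)) ** 2
        return (n * (n + 1) * A * r ** (-n - 2)
                + np.sum(2.0 * lam * k ** 2 * sech2 * np.tanh(k * (r - d))))

    # Solve phi(rc) = phi'(rc) = phi''(rc) = 0 for the quadratic shift coefficients.
    P = -0.5 * fpp(rc)
    Q = -fp(rc) - 2.0 * P * rc
    R = -f(rc) - P * rc ** 2 - Q * rc

    def phi(r):
        return f(r) + P * r ** 2 + Q * r + R if r < rc else 0.0
    return phi

# Illustrative (non-optimized) parameters: A, n, two tanh shoulders, cut-off 2.5.
phi = make_potential(1.0, 12, [0.5, 0.3], [5.0, 5.0], [1.2, 1.8], 2.5)
```

Note that for arbitrary parameter choices the constraints ϕ > 0 and ϕ' < 0 are not automatically satisfied; in the paper's formulation they are imposed as constraints of the nonlinear program.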
In this work, we search for parameters that stabilize the target structure ground state over the widest range of density Δρ, while ensuring a chemical potential advantage of the target relative to each flag-point competitor that is greater than a minimum specified threshold (here, we use μ_t - μ_l,j ≤ -0.01). Specific information on the program formulation, including the equations used and their numerical solution using solvers in GAMS, is provided elsewhere.<cit.>

§.§ Competing Pool Selection

To use the strategy discussed above for designing a pair potential ϕ(r;{α}) that stabilizes a given target structure in the ground state, one first needs to establish a finite (preferably small) pool of the most competitive alternative structures at zero temperature and the same pressure. To do this, we adopt an iterative procedure. First, we carry out a preliminary optimization comparing the chemical potential of the target to others in an initial pool comprising a few select lattice and mesophase structures (e.g., stripes) known to be competitive for systems with isotropic, repulsive interactions.<cit.> We then carry out a `forward' calculation that considers equi-pressure competitors more comprehensively. For classes of competing structures that contain free parameters, the values of those parameters are determined by minimizing the chemical potential (using GAMS) under the optimized pair potential (for details see appendix of ref. 25). Any structures that are revealed by this calculation to be more stable than the target lattice are added to the competing pool to be used in the next iteration of the pair potential optimization. This process is repeated until no new structures that closely compete with the target are found in the ground-state phase diagram calculation of the optimized potential.
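The iterative pool-selection procedure amounts to a short control loop. In the sketch below, `optimize` and `find_unstable` are placeholder callables standing in for the GAMS-based nonlinear program and the forward ground-state calculation described above; only the repeat-until-no-new-competitors logic is represented.

```python
def refine_competitor_pool(optimize, find_unstable, initial_pool, max_iter=20):
    """Iterative refinement of the competing-structure pool (schematic).

    optimize(pool)        -> interaction parameters stabilizing the target
                             against every structure currently in `pool`
    find_unstable(params) -> structures from a 'forward' ground-state scan
                             found at least as stable as the target

    Both callables are placeholders for the GAMS optimization and the
    ground-state phase diagram calculation described in the text.
    """
    pool = list(initial_pool)
    for _ in range(max_iter):
        params = optimize(pool)
        new = [s for s in find_unstable(params) if s not in pool]
        if not new:  # converged: no new close competitors revealed
            return params, pool
        pool.extend(new)  # add newly revealed competitors and re-optimize
    raise RuntimeError("competing pool did not converge")
```

The loop terminates exactly when the forward calculation reveals no structure more stable than the target, mirroring the stopping criterion stated above.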
Unlike for previous ground-state optimizations targeting denser structures,<cit.> the structures within the competing pools for the low-density TS and TH lattices are too numerous to list in detail (totaling 60+). Instead, it is more insightful to consider competitors as general classes of stripe motifs with a variety of internal degrees of freedom. This is shown more clearly in the schematic of figure <ref>, where competitor classes are illustrated in each panel (a-f) and red particles represent fundamental lattice cells. For example, panel a) shows two stripes of particles separated by a given distance. Possible degrees of freedom include this separation distance as well as shears along the stripe axis, which in this case produce rectangular or oblique lattices. Panels b-f denote similar stripe-like classes, but now with an increasing number of particles per cell (black particles) and more specific motifs. Relevant degrees of freedom here include the distance between stripes and shears along the stripe axis, as well as more specific possibilities (e.g., motif distortions or rotations). Altogether, these six classes represent stripe microphases that constitute most of the strong competitors found for both design targets of this study. In what follows, we list the final competitor pools for each target as a tally of competitors belonging to each class, as well as any general or specialized competitor not included in this set. For the TS lattice target, the final pool of competitors included the following standard periodic lattices that are not part of the aforementioned stripe classes: square, hexagonal<cit.>, honeycomb, snub square, snub trihexagonal, and distorted kagome (2 competitors).
The `stripe-class' competitors for the TS lattice included four structures from class a), three from class b), five from class c), four from class d), and one from class e). For the TH target, the non-stripe-class competitors included the following standard lattices: square, hexagonal, honeycomb, snub trihexagonal, TS, and snub square with aspect ratio b/a=1.8. The stripe-class competitors for the TH lattice included seven structures from class a), seven from class b), seven from class c), five from class d), and seven from class f). Additionally, two specialized competitors arose for the TH lattice; one was a cluster of five particles repeating across an open oblique lattice (figure S1) and another was an open decagonal motif with a particle in the center (figure S2). Finally, note that while all competitors were ensured to have chemical potentials greater than those of the target with the optimized interactions, only representative members of each stripe class and other lattices (so-called `flag-point' lattices<cit.>) can be effectively used in objective function evaluations for this formulation and in ensuring the minimum required chemical potential advantage of the target described above. For these targets, the particular identity of a stripe-class flag-point competitor is not too important as long as the overall flag-point set spans one member of each class. On the other hand, standard lattices (e.g., hexagonal) or uniquely specialized competitors like the decagonal motif structure for TH or snub trihexagonal for TS enter directly as flag-point competitors by default.

§.§ Monte Carlo Simulations

To explore the feasibility of self-assembly from fluids of particles interacting via the optimized pair potentials, Monte Carlo simulations were carried out in the canonical ensemble as follows.
For the potential optimized for the TS lattice, a system of N=100 particles (in a periodically replicated simulation cell with dimensions chosen to fix the number density, ρ_0=1.03) was isochorically heated to a high temperature, melting the perfect crystal to form a fluid. The fluid was then isochorically quenched from high temperature back to a crystal at T=0.0091. The crystal was then further cooled to T=0.005 for structure refinement and computation of the radial distribution function. For the potential optimized for the TH lattice, a system of N=96 particles (in a periodically replicated cell with dimensions set to fix ρ_0=1.075) was melted from the perfect crystal to form a fluid. Two dozen identical fluid configurations were seeded with a small frozen crystal of 21 particles pinned into perfect lattice positions. These configurations were then quenched from high temperature to T=0.06 over 4 million Monte Carlo steps. For systems displaying assembly of the target structure, the seed particles were subsequently unpinned, and the whole system was allowed to relax for 90,000 Monte Carlo steps for computation of the radial distribution function.

§ RESULTS AND DISCUSSION

Using the problem formulation described in section <ref>, we were able to solve for parameters of the monotonically decreasing pair potential ϕ(r) (given by eq. <ref>) that maximize the density range over which the TS lattice is the stable ground-state structure (here, 0.98 ≤ρ≤ 1.08), while also ensuring that the ground state exhibits, at ρ_0=1.03, a chemical potential advantage of at least Δμ=0.01 over equi-pressure flag-point competitors. Importantly, the latter ensures a significant free energy separation of the target from various closely competing stripe microphases. The resulting pair potential ϕ(r) is shown in figure <ref>a, and the list of optimized potential parameters is provided in table S1. As can be seen, ϕ(r) has a simple, ramp-like form with a steeply repulsive core at r∼ 0.7.
This is interesting because particles interacting via a similar hard-core plus linear-ramp repulsion are known to exhibit rich ground-state behavior as a function of density and the parameters of the pair potential,<cit.> displaying a variety of periodic crystalline structures (including some with nonequivalent lattice sites or multiple particles per unit cell) as well as a random quasicrystal. As discussed in detail previously,<cit.> to understand the stability of ground-state structures, it is helpful to consider the function ψ(r),

ψ(r) ≡ϕ(r)/2 - r ϕ'(r)/4,

which determines the zero-temperature chemical potential μ_l of lattice l via the relation μ_l=∑_i^r_i,l < r_c n_i,lψ(r_i,l(ρ_l)), where r_i,l denotes the i^th coordination shell distance for that lattice at density ρ_l. In short, ψ(r) quantifies the radially-varying `weights' (due to the form of the pair potential) that multiply the occupation numbers n_i,l in a given lattice l to determine the coordination shell contributions to its chemical potential. A plot of ψ(r) is shown in figure <ref>b with vertical black lines corresponding to the first nine coordination shell positions of the TS crystal at the midpoint of its stable density range, ρ=1.03. As seen, ψ(r) displays two characteristic plateau features: the first for separations in the range 0.7 ≲ r ≲ 1.3 and the second for 2.4 ≲ r ≲ 2.7. The function of these plateaus can be qualitatively understood as follows. The first plateau helps to destabilize standard Bravais and non-Bravais lattices (e.g., hexagonal and snub square patterns), which have relatively high coordination numbers (six and five in the first shell, respectively), and hence higher contributions to the chemical potential, at these distances. The second plateau helps destabilize more closely related competitors that otherwise share or closely track the coordination shells of the TS lattice.
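The mapping from pair potential to lattice chemical potential through ψ(r) is compact enough to state directly in code. The sketch below takes any pair potential ϕ(r) as a Python callable together with a list of coordination shell (distance, occupation) pairs, which are assumed inputs here rather than values from this work; ϕ'(r) is evaluated by central finite difference.

```python
def lattice_chemical_potential(phi, shells, r_c, h=1e-6):
    """Zero-temperature chemical potential of a lattice,
    mu_l = sum_i n_i * psi(r_i), over coordination shells with r_i < r_c,
    where psi(r) = phi(r)/2 - r*phi'(r)/4.
    `shells` is a list of (shell distance, occupation number) pairs;
    phi'(r) is approximated by a central finite difference."""
    def psi(r):
        dphi = (phi(r + h) - phi(r - h)) / (2.0 * h)
        return 0.5 * phi(r) - 0.25 * r * dphi
    return sum(n * psi(r) for r, n in shells if r < r_c)
```

As a check on the shell-weight picture: for the toy potential ϕ(r) = r^-4, one has ψ(r) = 1.5 r^-4, so a single shell of six neighbors at unit distance contributes μ = 9. Which shells a given feature of ψ(r) rewards or penalizes is then read off directly from the weighted sum.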
For instance, the seventh shell of the target TS lattice is positioned right at the point where the second plateau starts to decrease (r∼2.6), so that related shells for many of the stripe competitors at slightly smaller separations are destabilized more harshly. Lastly, the strongly repulsive `core' serves to destabilize competitors whose first shell is at a shorter distance than that of the TS lattice. Despite these features, note that the resulting ψ(r) is still relatively smoothly varying, which, as discussed previously,<cit.> is consistent with a target designed to display stability over a broad density range. Carrying out Monte Carlo simulations of particles interacting via the optimized pair potential as described in section <ref>, we verify that the TS crystal can indeed readily assemble from the fluid phase upon isochoric cooling. A representative configuration of the assembled structure is displayed in figure <ref>, showing that, aside from the usual minor defects due to the misalignment of the crystal and the boundaries of the periodically replicated simulation cell, a near defect-free TS lattice is obtained. The quality of the assembly is characterized more systematically (see figure <ref>) by comparing the radial distribution function g(r) at the final temperature of the quench to that of an equilibrated crystal initiated from the perfect configuration at that temperature. As can be observed, g(r) of the assembled system matches well with that of the equilibrium crystal. Note in particular the well-resolved second peak, a shell where just a single neighbor is expected to reside. The fact that the assembled structure accurately captures it highlights the robustness of the optimized interactions. The second target structure considered in this study, the TH crystal, provided a significantly more difficult design challenge.
Despite the fact that the underlying structural motif of the TH lattice is similar to that of the TS lattice (see discussion below), we found that solution of the design problem for the more open TH lattice required consideration of nearly 50% more competing structures as well as a more flexible pair potential (i.e., inclusion of a third hyperbolic tangent term in eq. <ref>). While the pair potential ϕ(r) obtained from the optimization indeed stabilizes the TH crystal ground state, it does so only over a very narrow density range (1.07 ≤ρ≤ 1.08) and by assuming a more complex repulsive form (see figure <ref>a and the associated parameters in table S2). Note, for instance, the presence of two step-like features in ϕ(r) that are superimposed on a ramp-like repulsion similar to that of the optimized pair potential for the TS lattice. As shown in figure <ref>b, this form gives rise to two sharp peaks in ψ(r) at r∼ 1.2 and r∼ 2.4, which border a plateau region from 0.7 ≲ r ≲ 1.15 and a broad hump from 1.35 ≲ r ≲ 2.35. Each of these features is important for stabilizing the TH lattice relative to its competitors and can be understood as follows. An analysis of coordination shell distances and occupation numbers shows that the plateau and broad hump features in ψ(r) destabilize standard Bravais and non-Bravais competitors (hexagonal, snub square, honeycomb, etc.) relative to the TH lattice because the former have more highly coordinated shells at those distances, and thus larger associated contributions to the chemical potential. It also shows that the sharp peak in ψ(r) at r∼1.2 destabilizes stripe classes a-c) (refer to figure <ref>), which have first and second shell separations that closely track, but are slightly less than, those of the TH lattice. This is especially true for class c) stripes that have triangular motifs similar to those in the target structure.
The main role of the sharp peak in ψ(r) at r∼2.4 is to penalize class d) and f) competitors whose first few shells share the same `Y'-shaped motif with the TH lattice (effectively shadowing TH shell distances) and thus can only be explicitly destabilized at these larger distances (more distant shells), where they display their stripe character. Lastly, note that the strongly repulsive `core' acts as an extra destabilizing factor for stripe competitors with first shells that are slightly closer in than those of the TH lattice. To further understand why the TH lattice presents such significant design challenges not encountered for the TS lattice, consider figure <ref>. Whereas the TS lattice (left) can be considered a class d) stripe structure (red rectangles) spanned by its internal motif (gray rectangle) with a specific inter-stripe distance, the TH lattice (right) cannot. Instead, the TH lattice displays a `staggered' arrangement of the internal motif. Translated into our design process, this means that while the TS lattice must only be stabilized against deformations of its motif and inter-stripe configuration, the TH lattice must instead compete with whole classes of highly variable stripe configurations that mimic its underlying motif structure and make ring closure, i.e., the staggered configuration, difficult to realize. This means narrow distinctions amongst many very closely related competitors that can only be meaningfully destabilized by sharply varying interactions (and the corresponding peaks seen in ψ(r)), which greatly complicate the optimization process. Consistent with this, the only other pair potential designed to stabilize the TH lattice<cit.> also exhibits such step-like features. Design challenges aside, we were able to verify self-assembly of the TH lattice from fluid configurations of particles interacting with the optimized pair potential via isochoric Monte Carlo temperature quenches.
In this case, as described in section <ref>, assembly of the target structure (on computational time scales readily accessible via simulation) required the addition of a small seed crystal during the quenching process. As expected, success of crystallization depended largely on simulation time, with larger crystals or longer runs resulting in higher crystallization yield. For the results shown here, we used a seed size (21 particles) such that approximately 50% of parallel runs quenched into the crystal structure during the course of the simulation (see figure S3 for illustrative results). Shown in figure <ref> are the initial and final configurations of one such seeded run. The radial distribution function of the assembled structure is provided in figure <ref> and compared to that of a similar run started from the perfect crystal configuration at the final temperature and density. The excellent agreement shown demonstrates the success of the designed interaction for stabilizing the TH lattice.

§ CONCLUSION

Using an efficient, recently introduced formulation of the inverse design problem for discovering interactions that favor a targeted ground-state crystal, we were able to determine new repulsive, isotropic interactions that stabilize the open 2D TS and TH crystal lattices. For the TS crystal, the optimized interactions stabilized the target structure in the ground state over a wide range of density, and particles interacting via the designed potential were shown to readily self-assemble into the TS crystal in isochoric Monte Carlo temperature quenches from a high-temperature fluid. The open TH crystal proved to be a far more challenging design target, and its solution required consideration of significantly more competing structures as well as a more flexible repulsive pair potential.
We demonstrated that while the TS crystal can be interpreted as a specific example of a stripe microphase, the TH crystal requires comparison against a highly varied field of stripe microphase competitors, and that ring closure for the TH lattice requires explicit staggering of the underlying motifs, demanding very specific, sharply targeted interactions that greatly elevate the complexity of the problem. Despite this added difficulty, we found that particles with the designed interactions self-assemble into the TH crystal in isochoric Monte Carlo temperature quenches from a high-temperature fluid seeded with a small target crystal.

§ SUPPLEMENTARY MATERIAL

See supplementary material for figures of specialized TH competitors, optimized pair potential parameters, and seeded TH Monte Carlo runs.

T.M.T. acknowledges support of the Welch Foundation (F-1696) and the National Science Foundation (CBET-1403768). We also acknowledge the Texas Advanced Computing Center (TACC) at the University of Texas at Austin for providing computing resources used to obtain results presented in this paper.

[1] C. M. Soukoulis and M. Wegener, Nature Photonics 5, 523 (2011).
[2] S. J. Corbitt, M. Francour, and B. Raeymaekers, Journal of Quantitative Spectroscopy & Radiative Transfer 158, 3 (2015).
[3] J. Zhang, E. Luijten, and S. Granick, Annual Review of Physical Chemistry 66, 581 (2015).
[4] C. Wang, C. Siu, J. Zhang, and J. Fang, Nano Research 8, 2445 (2015).
[5] E. Auyeung, W. Morris, J. E. Mondloch, J. T. Hupp, O. K. Farha, and C. A. Mirkin, Journal of the American Chemical Society 137, 1658 (2015).
[6] A. Yethiraj and A. van Blaaderen, Nature 421, 513 (2003).
[7] C. N. Likos, Physics Reports 348, 267 (2001).
[8] Z. Zhang and S. C. Glotzer, Nano Letters 4, 1407 (2004).
[9] P. F. Damasceno, M. Engel, and S. C. Glotzer, Science 337, 453 (2012).
[10] S. Torquato, Soft Matter 5, 1157 (2009).
[11] V. Mlinar, Annalen der Physik 527, 187 (2015).
[12] A. Jain, J. A. Bollinger, and T. M. Truskett, AIChE Journal 60, 2732 (2014).
[13] A. Jain, J. R. Errington, and T. M. Truskett, Physical Review X 4, 031049 (2014).
[14] E. Marcotte, F. Stillinger, and S. Torquato, Soft Matter 7, 2332 (2011).
[15] E. Edlund, O. Lindgren, and M. N. Jacobi, Journal of Chemical Physics 139, 024107 (2013).
[16] E. Edlund, O. Lindgren, and M. N. Jacobi, Physical Review Letters 107, 085503 (2011).
[17] M. Torikai, Journal of Chemical Physics 142, 144102 (2015).
[18] G. Zhang, F. H. Stillinger, and S. Torquato, Physical Review E 88, 042309 (2013).
[19] A. Jain, J. R. Errington, and T. M. Truskett, Soft Matter 9, 3866 (2013).
[20] W. Piñeros, M. Baldea, and T. M. Truskett, Journal of Chemical Physics 144, 084502 (2016).
[21] J. Bisschop and A. Meeraus, in Applications, Mathematical Programming Studies, Vol. 20 (Springer Berlin Heidelberg, 1982), pp. 1-29.
[22] GAMS - A User's Guide, GAMS Release 24.2.3 (GAMS Development Corporation, Washington, DC, USA, 2015), http://www.gams.com/dd/docs/bigdocs/GAMSUsersGuide.pdf.
[23] GAMS Development Corporation, General Algebraic Modeling System (GAMS) Release 24.2.3 (Washington, DC, USA, 2015), http://www.gams.com/.
[24] W. Piñeros, M. Baldea, and T. M. Truskett, Journal of Chemical Physics 145, 054901 (2016).
[25] D. Z. Rocklin and X. Mao, Soft Matter 10, 7569 (2014).
[26] M. Antlanger, G. Doppelbauer, and G. Kahl, Journal of Physics: Condensed Matter 23, 404206 (2011).
[27] R. E. Morris and P. S. Wheatley, Angewandte Chemie 47, 4966 (2008).
[28] G. Q. M. Lu and X. S. Zhao, Nanoporous Materials: Science and Engineering, Series on Chemical Engineering, Vol. 4 (Imperial College Press, 2004).
[29] Note: while square and hexagonal could be said to belong to class a) stripes as per our chart, these standard lattices are sufficiently common and important to be listed separately by name for clarity.
[30] E. A. Jagla, Phys. Rev. E 58, 1478 (1998).
[31] B. A. Lindquist, R. B. Jadrich, and T. M. Truskett, The Journal of Chemical Physics 145, 111101 (2016).
http://arxiv.org/abs/1703.08615v1
arXiv:1703.08615v1 [cond-mat.soft], 24 March 2017. W. D. Piñeros and T. M. Truskett, "Designing Pairwise Interactions that Stabilize Open Crystals: Truncated Square and Truncated Hexagonal Lattices".
Institute of Physics, Vietnam Academy of Science and Technology, 10 Dao Tan, Ba Dinh, Ha Noi, Viet Nam Graduate University of Science and Technology, Vietnam Academy of Science and Technology, 18 Hoang Quoc Viet, Cau Giay, Ha Noi, Viet Nam Vietnam Military Medical University, 160 Phung Hung, Ha Dong, Ha Noi, Viet Nam Institute of Research and Development, Duy Tan University, K7/25 Quang Trung, Da Nang, Viet Nam hoang@iop.vast.vn Institute of Physics, Vietnam Academy of Science and Technology, 10 Dao Tan, Ba Dinh, Ha Noi, Viet Nam Graduate University of Science and Technology, Vietnam Academy of Science and Technology, 18 Hoang Quoc Viet, Cau Giay, Ha Noi, Viet Nam

Deciphering the links between amino acid sequence and amyloid fibril formation is key for understanding protein misfolding diseases. Here we use Monte Carlo simulations to study the aggregation of short peptides in a coarse-grained model with hydrophobic-polar (HP) amino acid sequences and correlated side chain orientations for hydrophobic contacts. A significant heterogeneity is observed in the aggregate structures and in the thermodynamics of aggregation for systems of different HP sequences and different numbers of peptides. Fibril-like ordered aggregates are found for several sequences that contain the common HPH pattern, while other sequences may form helix bundles or disordered aggregates. A wide variation of the aggregation transition temperatures among sequences, even among those of the same hydrophobic fraction, indicates that not all sequences undergo aggregation at a presumed physiological temperature. The transition is found to be the most cooperative for sequences forming fibril-like structures. For a fibril-prone sequence, it is shown that fibril formation follows the nucleation and growth mechanism. Interestingly, a binary mixture of peptides of an aggregation-prone and a non-aggregation-prone sequence shows association and conversion of the latter to the fibrillar structure.
Our study highlights the role of sequence in selecting fibril-like aggregates and also the impact of a structural template on fibril formation by peptides of unrelated sequences.

Sequence dependent aggregation of peptides and fibril formation
Trinh X. Hoang
December 30, 2023
===============================================================

§ INTRODUCTION

The phenomenon in which soluble proteins or protein fragments self-assemble into insoluble aggregates is considered a fundamental issue of protein folding with serious impact on human health <cit.>. A predominant class of these aggregates, which have a long straight shape and are rich in β-sheets, known as amyloid fibrils, is associated with a range of debilitating human pathologies, such as Alzheimer's, Parkinson's, type II diabetes and transmissible spongiform encephalopathies <cit.>. These fibrils, formed by numerous proteins and peptides including those unrelated to disease <cit.>, have strikingly similar structural features regardless of the amino acid sequence. A widely adopted view is that the tendency to form amyloid fibrils is a common property of all proteins, supposedly due to their common polypeptide backbone <cit.>. It has been shown that poly-amino acids can also form amyloid under appropriate conditions <cit.>. However, the propensity of a given polypeptide to form amyloid fibrils, as well as the condition under which they form, depends very significantly on its amino acid sequence, showing that the problem is much more complex than initially thought, but also giving hope for curing amyloid diseases <cit.>. X-ray fiber diffraction data indicate that amyloid fibrils are commonly characterized by cross-β-sheets with strands running perpendicularly to the fibril's longitudinal axis <cit.>.
The cross-β structures at atomic resolution have been obtained for the fibrils of a few proteins and protein fragments, including those of insulin <cit.>, β-amyloid peptide <cit.>, yeast prion protein sup35p <cit.>, HET-s prion <cit.>, and α-synuclein <cit.>, by using cryo-electron microscopy, X-ray and solid-state NMR. It is found that they are highly ordered and composed of β-strands of the same segments of repetitive protein molecules. Between the mated β-sheets is a completely dry and complementary packing of amino acid side chains with a well-formed hydrophobic core <cit.>. Even though there is evidence of polymorphism <cit.> in amyloid fibrils, the observed packing of side chains in the resolved structures suggests that the amino acid sequence largely dictates the amyloid fold <cit.>, in the same manner as in protein folding. The sequence determinant of amyloid formation has been studied with various experimental <cit.> and theoretical <cit.> approaches. It has been shown that the overall hydrophobicity <cit.> and net charge <cit.> of a peptide, to some extent, may impact the aggregation rate. There is increasing evidence that the capability of a protein to form amyloids strongly depends on certain short amino acid stretches in the sequence <cit.>. To support a proteome-wide search for aggregation-prone peptide segments, a number of predictors have been made available <cit.>. However, the problem still substantially needs better understanding. In this study, we investigate the selectivity of aggregate structures by the amino acid sequence and the mechanism of fibril formation by using the tube model of protein developed by Hoang et al. <cit.>. The latter is a C_α-based model exploiting the tube-like symmetry <cit.> of a polypeptide chain and geometrical constraints imposed by hydrogen bonds <cit.>.
Such symmetry and geometry considerations lead to a presculpted free energy landscape <cit.> with marginally compact protein-like ground states and low energy minima <cit.>. Interestingly, the model also shows a strong tendency of multiple chains to form amyloid-like aggregates <cit.>, similar to that found in higher resolution models <cit.>. Extensive simulations have been carried out by Auer and coworkers <cit.> to study the fibril formation of 12-mer homopeptides using the tube model with a slightly different constraint on self-avoidance, yielding useful insights on the nucleation mechanism <cit.> of fibril formation and on the equilibrium conditions between the fibrillar aggregates and the peptide solution <cit.>. In the present study, we focus on the impact of amino acid sequence on the aggregation properties in the tube model with a renewed consideration of the hydrophobic interaction. In the original tube model, the latter was based on an isotropic contact potential between centroids represented by the C_α atoms. We introduce here a new model for hydrophobic contact between amino acids that takes into account the side chain orientations. We find that this orientation dependence can direct the interaction between β-sheets and promote the formation of ordered and elongated fibril-like aggregates. We restrict ourselves to hydrophobic-polar (HP) sequences and short peptides of length equal to 8 residues. The consideration of HP sequences is a minimalist approach in terms of sequence specificity, but one that is well supported in protein folding <cit.>.
Furthermore, the relative simplicity of amyloid fibril structures also indicates a possible simplification of the amino acid sequence in determining aggregation properties. It will be shown that even with a short length and a few sequences, the systems considered already exhibit rich behavior in the morphologies of the aggregates and in their thermodynamic properties. For an aggregation-prone sequence, we have also studied the kinetics of fibril formation. We will try to elucidate the nucleation and growth mechanism of this process in molecular detail and show evidence of a lag phase. Finally, we have studied a binary mixture of peptides of two different sequences and find that amyloid formation can be sequence non-specific; that is, a fibril-like template formed by an aggregation-prone sequence may induce the aggregation of a non-aggregation-prone sequence for a fraction of all peptides. This strong impact of the template somewhat weakens the sequence determination of aggregation propensity and suggests that amyloid fibrils could be heterogeneous in their peptide composition.

§ MODELS AND METHODS

Details of the tube model can be found in Ref. <cit.>. Briefly, it is a C_α-based coarse-grained model, in which the C_α atoms representing amino acid residues are placed along the axis of a self-avoiding tube of cross-sectional radius Δ=2.5 Å. The finite thickness of the tube is imposed by requiring that the radius of the circle drawn through any three C_α atoms be larger than Δ <cit.>. The energy of a given conformation is the sum of the bending energy, the hydrogen bonding energy, and the hydrophobic interaction energy. A local bending energy penalty of e_R = 0.3 ϵ > 0, with ϵ an energy unit, is applied if the chain's local radius of curvature at a given bead is less than 3.2 Å. Hydrogen bonds between amino acids are required to satisfy a set of distance and angular constraints on the local properties of the chain, as found by a statistical analysis of protein PDB structures <cit.>.
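The tube constraint described above (the circumradius of any three C_α positions must exceed Δ) is straightforward to check numerically. The following is a minimal Python sketch, not the authors' code; function names are illustrative:

```python
import numpy as np

DELTA = 2.5  # tube cross-sectional radius in angstroms (model value)

def circumradius(p1, p2, p3):
    """Radius of the circle through three points in 3D,
    R = abc / (4 * area), with a, b, c the triangle side lengths."""
    a = np.linalg.norm(p2 - p3)
    b = np.linalg.norm(p1 - p3)
    c = np.linalg.norm(p1 - p2)
    area = 0.5 * np.linalg.norm(np.cross(p2 - p1, p3 - p1))
    if area == 0.0:          # collinear points: infinite radius
        return np.inf
    return a * b * c / (4.0 * area)

def satisfies_tube_constraint(coords):
    """Self-avoiding-tube condition: every triplet of C-alpha
    positions must define a circle of radius larger than DELTA."""
    n = len(coords)
    for i in range(n):
        for j in range(i + 1, n):
            for k in range(j + 1, n):
                if circumradius(coords[i], coords[j], coords[k]) <= DELTA:
                    return False
    return True
```

The brute-force triple loop is O(n^3) and only meant to make the geometric condition explicit; a production simulation would restrict the check to nearby beads.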
A local hydrogen bond, which is formed by residues separated by three peptide bonds along the chain, is given an energy of -ϵ, whereas a non-local hydrogen bond is given an energy of -0.7ϵ. Additionally, a cooperative energy of -0.3ϵ is given for each pair of hydrogen bonds formed by pairs of consecutive amino acids in the sequence. To avoid spurious effects of the chain termini, hydrogen bonds involving a terminal residue are given a reduced energy of -0.5ϵ. The hydrophobic interaction is based on pairwise contacts between amino acids, considered to be either hydrophobic (H) or polar (P). It is also assumed that only contacts between H residues are favorable; thus the contact energies of the different residue pairings are e_HH = -0.5ϵ and e_HP = e_PP = 0. In the original tube model, a contact is defined if the distance between two residues is less than 7.5 Å. In the present study, we apply an additional constraint on hydrophobic contact by taking into account the side chain orientation <cit.> (Fig. <ref>a,b). The latter is approximately given by the direction opposite to the normal vector <cit.> at the chain's local position. The new constraint requires that two residues i and j make a hydrophobic contact only if n_i · c_ij < 0.5 and n_j · c_ji < 0.5, where n_i and n_j are the normal vectors of the Frenet frames associated with beads i and j, respectively; c_ij is a unit vector pointing from bead i to bead j; and c_ji = -c_ij. These vectors are given by

n_i = (r_i-1 + r_i+1 - 2r_i) / |r_i-1 + r_i+1 - 2r_i| ,

and

c_ij = (r_j - r_i) / |r_j - r_i| ,

where r_i is the position of bead i. The new constraint is in accordance with the statistics drawn from an analysis of PDB structures (Fig. <ref>b). We consider 12 HP sequences of length N=8, as given in Table I. The sequences, denoted S1 through S12, are selected such that they contain only 2 or 3 H residues, corresponding to hydrophobic fractions of 25% and 37.5%, respectively.
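The orientation-dependent contact criterion above can be expressed compactly in code. The sketch below assumes a chain given as a list of C_α position vectors and interior beads i, j (so both neighbors exist); it is an illustration of the stated formulas, not the authors' implementation:

```python
import numpy as np

R_CONTACT = 7.5  # contact distance cutoff in angstroms (model value)

def normal_vector(r_prev, r, r_next):
    """Frenet normal at a bead: n = (r_{i-1} + r_{i+1} - 2 r_i), normalized."""
    v = r_prev + r_next - 2.0 * r
    return v / np.linalg.norm(v)

def is_hydrophobic_contact(chain, i, j):
    """Contact test with the side-chain orientation constraint:
    distance below cutoff AND n_i . c_ij < 0.5 AND n_j . c_ji < 0.5."""
    c_ij = chain[j] - chain[i]
    d = np.linalg.norm(c_ij)
    if d >= R_CONTACT:
        return False
    c_ij /= d  # unit vector from bead i to bead j
    n_i = normal_vector(chain[i - 1], chain[i], chain[i + 1])
    n_j = normal_vector(chain[j - 1], chain[j], chain[j + 1])
    return (np.dot(n_i, c_ij) < 0.5) and (np.dot(n_j, -c_ij) < 0.5)
```

Since the side-chain direction is taken opposite to the normal vector, the two dot-product conditions mean that a contact is only counted when neither bead's normal points strongly toward the other bead, i.e. when both side chains can face the contact.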
We have chosen sequences that are as symmetric as possible with respect to the two ends, keeping in mind that the relative positions of the H residues are more important than their absolute positions in the sequence. One characterization of these relative positions is the minimum separation between two consecutive H residues, given by the parameter s in Table I. We study systems of M peptides in a cubic box of size L with periodic boundary conditions. For a given peptide concentration c, the box size L is calculated from M as L = (M/c)^1/3. For example, for c = 1 mM (millimolar) and M = 10 one gets L = 255.15 Å. Parallel tempering <cit.> Monte Carlo schemes with 16-24 replicas at different temperatures are employed for obtaining the ground state and equilibrium characteristics. For each replica, the simulation is carried out with pivot, crankshaft, and translation moves, with the Metropolis algorithm for move acceptance at the replica's own temperature T_i. A replica exchange attempt is made every 10 MC sweeps (one sweep corresponds to a number of attempted moves equal to the number of residues). The exchange of replicas i and j is accepted with probability p = min{1, exp[(β_i - β_j)(E_i - E_j)]}, where β = (k_B T)^-1 is the inverse temperature, k_B is the Boltzmann constant, and E_i and E_j are the energies of the replicas at the time of the exchange. The temperature range in the parallel tempering simulations is chosen such that it covers the transition from a gas phase of separated peptides at high temperature to the condensed phase of the aggregates at low temperature. The replica temperatures are chosen such that the acceptance rates of replica exchanges between neighboring temperatures are significant, at least about 20%.
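The box-size formula and the replica-exchange acceptance rule above can be sketched in a few lines of Python. This is an illustrative sketch under the stated formulas (function names are ours, not from the paper):

```python
import math
import random

def box_size(M, c_mM):
    """Edge length L = (M / c)^(1/3) of the cubic box, in angstroms,
    for M peptides at concentration c given in millimolar."""
    N_A = 6.02214076e23            # Avogadro's number, 1/mol
    c = c_mM * 1e-3 * N_A / 1e27   # number density in 1/angstrom^3 (1 L = 1e27 A^3)
    return (M / c) ** (1.0 / 3.0)

def swap_accepted(beta_i, beta_j, E_i, E_j, rng=random.random):
    """Metropolis criterion for exchanging replicas i and j:
    p = min{1, exp[(beta_i - beta_j)(E_i - E_j)]}."""
    p = min(1.0, math.exp((beta_i - beta_j) * (E_i - E_j)))
    return rng() < p
```

With M = 10 and c = 1 mM this reproduces L ≈ 255 Å as quoted in the text; the swap is always accepted when the colder replica (larger β) currently holds the higher energy.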
In practice, one needs to adjust the set of temperatures several times so that more temperatures lie near the specific heat's peak, where the energy fluctuations are large. For example, for sequence S2 with M=10, the final set of temperatures for 20 replicas is {0.15, 0.16, 0.17, 0.18, 0.19, 0.20, 0.21, 0.212, 0.214, 0.216, 0.218, 0.22, 0.222, 0.224, 0.226, 0.228, 0.23, 0.24, 0.25, 0.26} in units of ϵ/k_B. The number of attempted Monte Carlo moves is of the order of 10^9 per replica. The weighted multiple-histogram technique <cit.> is employed for the calculation of equilibrium properties such as the specific heat and the effective free energy. For studying the kinetics of fibril growth, we carry out multiple independent Monte Carlo simulations that start from random configurations of dispersed monomers. These initial configurations are equilibrated at a high temperature before being used. We are interested in three quantities: the number of aggregates, the maximum size of the aggregates, and the number of peptides in β-sheet conformation during the time evolution. A peptide is said to be in a β-sheet conformation if it forms at least 4 consecutive hydrogen bonds with another peptide.

§ RESULTS

§.§ Sequence dependence of aggregate structures

We first study the dependence of the aggregate structure on the amino acid sequence for systems of M=10 identical peptides at a fixed concentration of 1 mM. Fig. <ref> shows that the lowest-energy conformation obtained in the simulations, presumed to be the ground state of a given system, strongly depends on the sequence. Two sequences, S2 and S11, form a double-layer β-sheet structure with characteristics similar to those of a cross-β structure. In these structures, an axis of the aggregate approximately perpendicular to the β-strands can be drawn. A similar but less fibril-like structure is also found for sequence S12, with some parts that are non-β-sheet. Both sequences S3 and S4 form α-helix bundles.
The helix bundle of sequence S4, however, is more ordered and has an approximately cylindrical shape, in which the α-helices are almost parallel to each other. This type of aggregate is akin to non-amyloid filaments formed by globular proteins, such as the actin filament <cit.>. The other sequences form various kinds of disordered aggregates. In these disordered structures one may also find a significant amount of β-sheet. In our model, residues participating in consecutive local and non-local hydrogen bonds are identified as forming α-helix and β-sheet, respectively <cit.>. The role of hydrophobic residues in aggregation can be inferred from the structures of the aggregates. In all cases, one finds a well-formed hydrophobic core with the putative hydrophobic side chains oriented inward, toward the body of the aggregate. The packing of hydrophobic side chains is best observed for sequences S2 and S11, for which the hydrophobic residues are aligned within each β-sheet and the hydrophobic side chains from the two β-sheets face each other. This packing is possible due to the HPH pattern in these sequences, which positions the hydrophobic side chains on one side of each β-sheet. An alignment of hydrophobic residues is also seen for sequence S12 due to the HPH segment of this sequence. In the aggregate of sequence S4, which is a helix bundle, the hydrophobic side chains are gathered along the bundle axis, thanks to the alignment of hydrophobic side chains along one side of each α-helix. This alignment is due to the HPPPH pattern in the S4 sequence. On the other hand, the S3 sequence with the HPPH pattern also forms a helix, but the hydrophobic side chains are not well aligned in the helix, leading to a less ordered aggregate. The structure of the aggregate also depends on the number of chains M. In Fig. <ref> and Fig. <ref>, the ground states for M varying between 1 and 10 are shown for sequences S2 and S4, respectively.
Interestingly, for sequence S2 (Fig. <ref>), as M increases one sees transitions from a single helix to a two-helix bundle, then to a single β-sheet (M=3) and to a double β-sheet (M ≥ 4). One can also notice that as M increases the β-sheet aggregates become more ordered and more fibril-like, as their β-strands become more parallel. For sequence S4 (Fig. <ref>), only helix bundles are formed for all M > 1, but the bundle also becomes more ordered as M increases. Thus, increasing order with system size is observed for both β-sheet and α-helical aggregates.

§.§ Thermodynamics of aggregation

It can be expected that the thermodynamics of aggregation depend on the aggregate structure due to the distinct contributions of intermolecular and intramolecular interactions in different structures. Furthermore, the formation of ordered and of non-ordered aggregates can differ from the perspective of a phase transition. We consider the system's specific heat, C, for the analysis of the thermodynamics. We are particularly interested in the temperature of the main peak of the specific heat, T_peak, and the peak height, C_peak. T_peak corresponds to the aggregation transition temperature. A higher T_peak means a more stable aggregate, whereas a higher C_peak indicates that the aggregation transition is more cooperative <cit.>. For all multi-peptide systems considered, it is found that the energy distribution at T_peak has a bimodal shape, suggesting that the aggregation transition is first-order-like. Note that the discontinuity of the aggregation transition has also been shown for the simple off-lattice AB model without directional hydrogen bonds <cit.>. We find that the specific heat strongly depends on both the sequence and the system size. Fig. <ref> and Fig. <ref> show the temperature dependence of the specific heat per molecule for various system sizes for sequences S2 and S4, respectively.
For sequence S2, the case in which fibril-like aggregates form, as M increases the specific heat's peak shifts toward higher temperature and its height increases (Fig. <ref>). This result indicates that the aggregate becomes increasingly stable and the transition becomes more cooperative as the system size increases. The increasing cooperativity of the aggregation transition correlates with the increasing order in the structure of the aggregate. For sequence S4, for which the aggregates are helix bundles, the height of the main peak increases with M but the position of the peak varies non-monotonically (Fig. <ref>). Note that the aggregation transition for sequence S4 is always found at a slightly lower temperature than the folding transition of an individual chain. This is in contrast with sequence S2, whose aggregation transition temperature is always higher than the folding temperature of a single chain. In Fig. <ref>, the results for the maximum specific heat per molecule, C_peak/M, and the temperature of the peak, T_peak, are combined for all sequences considered and for several values of M. It is shown that the variation of both C_peak/M and T_peak increases with M. Note that for M=10, the highest specific heat maxima correspond to sequences S2 and S11, whose aggregates are fibril-like (see Fig. <ref>). Apart from the absolute value of C_peak, the increase of C_peak/M with M is also a signature of cooperativity. For sequences S2 and S11, C_peak/M is not only the highest among all sequences but also increases with M much faster than for the other sequences, suggesting that these sequences have the most cooperative aggregation transitions. Our results indicate that the propensity to form fibril-like aggregates is associated with the cooperativity of the aggregation transition. The wide variation in the transition temperatures T_peak among sequences, as shown in Fig. <ref>b, suggests another interesting aspect of aggregation.
Suppose that we consider the systems at a physiological temperature, T^*. In our model, a rough estimate of T^* could be 0.2 ϵ/k_B, which corresponds to a local hydrogen bond energy of 5 k_B T^*. For M=1, one finds that all sequences but S10 have T_peak < T^*, suggesting that, as single chains, the peptides are substantially unstructured at T^*. For M=6 and M=10, only three sequences, S3, S4 and S5, have T_peak < T^*, while the others have T_peak > T^*. Thus, sequences S3, S4 and S5 do not aggregate at T^* while the other sequences do. This result indicates that the variation of the aggregation transition temperature among sequences is also a reason why protein sequences behave differently towards aggregation at the physiological temperature. Some sequences do not aggregate because aggregation is thermodynamically unfavorable at this temperature. Note that the ability to form fibril-like aggregates is not necessarily associated with a high aggregation transition temperature. In fact, Fig. <ref>b shows that sequences S2 and S11 have only a medium value of T_peak among all sequences, for both M=6 and M=10. Some sequences with a higher T_peak, such as S8, S9 and S10, form disordered aggregates. The dependence of the specific heat on the system size also reveals a condition for aggregation. Fig. <ref> shows that for sequence S2, systems of M ≤ 4 have the specific heat peaked at a temperature lower than T^* = 0.2 ϵ/k_B, which means that these systems do not aggregate at T^*. Only for M > 4 is the specific heat peak temperature higher than T^*, indicating that the fibril-like aggregates formed by this sequence are stable at T^*. Thus, a sufficient number of peptides is needed for aggregation to occur at a given temperature. We also find that the lower peak in the specific heat of the system of M=4 (Fig. <ref>) corresponds to a transition from metastable aggregates at intermediate temperature to the ground state at low temperature. Fig.
<ref> shows the trajectory of an equilibrium simulation at T = 0.2 ϵ/k_B for sequence S2 with M=4. The time dependence of the system's energy in this trajectory indicates that the peptides do not aggregate most of the time, so that the energy is relatively high, but for some short periods they can spontaneously form a metastable aggregate of a much lower energy. This metastable aggregate has a three-stranded β-sheet (Fig. <ref>, inset) and could act as a template for fibril growth in systems of more peptides.

§.§ Kinetics of fibril formation

It is well established that amyloid fibril formation follows a nucleation-growth mechanism, similar to that found in studies of crystallization and polymer growth <cit.>. The time dependence of the fibril mass is characterized by an initial lag phase, during which the growth rate is small, before a period of rapid growth, resulting in sigmoidal kinetics <cit.>. Nucleation gives rise to the lag phase and is a rate-limiting step. A primary nucleation event corresponds to the initial formation of an amyloid-like aggregate from soluble species, which is followed by an elongation of the fibrils through the templated addition of species. Analyses of experimental kinetic data using master equations indicate that amyloid fibril growth can be dominated by secondary nucleation events such as fragmentation <cit.> and surface-catalyzed nucleation <cit.>. The nucleated and templated polymerization properties of fibril formation have been shown in coarse-grained <cit.> and all-atom <cit.> simulations of short peptides. Studies of crystal-based lattice models using classical nucleation theory <cit.> and simulations <cit.> provide characterizations of the nucleation barriers in terms of β-sheet growth within a layer and inter-sheet couplings, together with an extensive temperature and concentration dependence. In the following, we investigate the behavior of fibril growth within our tube model for sequence S2.
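Growth curves of the kind described above can be characterized by simple fits: an exponential relaxation n(t) = M(1 - e^(-t/t_0)) when there is no lag phase, and an early-time power law n(t) ∝ t^α, where α > 1 signals a convex (lag-phase) onset. A minimal numpy sketch of both fits, with illustrative function names (not the analysis code used in the paper):

```python
import numpy as np

def fit_relaxation_time(t, n, M):
    """Estimate t0 in n(t) = M * (1 - exp(-t/t0)) by a linear fit of
    log(1 - n/M) against t (assumes n < M for all samples)."""
    y = np.log(1.0 - np.asarray(n, dtype=float) / M)
    slope, _ = np.polyfit(np.asarray(t, dtype=float), y, 1)
    return -1.0 / slope

def fit_power_law_exponent(t, n):
    """Estimate alpha in n(t) ~ t^alpha from a log-log linear fit of the
    early-time growth; alpha > 1 indicates a convex curve (lag phase)."""
    slope, _ = np.polyfit(np.log(t), np.log(n), 1)
    return slope
```

Both estimators reduce the nonlinear fit to a linear regression in transformed coordinates, which is adequate for clean simulation averages; noisy data would call for a proper nonlinear least-squares fit.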
Since the ground state for this sequence is a two-layered β-sheet structure, we do not expect it to display very rich behavior, such as the increase of fibril thickness by multi-step β-sheet layer addition. Nevertheless, the system may be useful for understanding the formation of a single protofilament. First, we consider a system of M=10 peptides at concentration c=1 mM under equilibrium conditions. Fig. <ref> shows the dependence of the total free energy of the system on the size of the largest aggregate, m, at three temperatures slightly below T_peak, including T = T^* = 0.2 ϵ/k_B. This free energy is defined as F(m,T) = -k_B T log P(m,T), where P(m,T) is the probability of observing a conformation with the largest aggregate size equal to m at temperature T. P(m,T) was determined from parallel tempering simulations with the weighted histogram method <cit.>. It is shown that for all these temperatures the free energy has a maximum at m=3, suggesting that m=3 could be the size of the critical nucleus for fibril formation. Interestingly, M=3 is also the system size at which the ground state changes from a helix bundle to a β-sheet on increasing M, and this β-sheet is unstable at temperatures larger than or equal to T^* (see Fig. <ref>). Thus, there is consistency between the equilibrium data obtained with small and larger M in terms of aggregation properties. The free energy barrier for aggregation in Fig. <ref> is found to increase with T and is about 1 k_B T to 4 k_B T. This barrier is not large and is consistent with the fact that the sequence considered is highly aggregation-prone. For m > 3, Fig. <ref> shows that the free energy decreases almost linearly with m, which is consistent with the fact that the growth of the aggregate in size is essentially one-dimensional.
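The free-energy profile F(m,T) = -k_B T log P(m,T) can be built from any sampled time series of largest-aggregate sizes. The sketch below uses a plain single-temperature histogram rather than the weighted multi-histogram reweighting used in the paper, so it is an illustration of the definition, not the authors' analysis code:

```python
import math
from collections import Counter

def free_energy_profile(largest_sizes, kT):
    """Effective free energy F(m) = -kT * log P(m) from sampled sizes m
    of the largest aggregate along an equilibrium run.
    Values are shifted so that min F = 0."""
    counts = Counter(largest_sizes)
    total = sum(counts.values())
    F = {m: -kT * math.log(c / total) for m, c in counts.items()}
    fmin = min(F.values())
    return {m: f - fmin for m, f in F.items()}
```

A local maximum of this profile at some m* is read as the critical nucleus size, and the barrier height is F(m*) relative to the monomer basin, in units of k_B T when kT = 1.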
After a certain size, new peptides join an existing aggregate from either of its two ends, sustaining the elongation of the β-sheets. We then considered a larger system of M=20 peptides and studied the time evolution from random configurations of dispersed monomers. Up to 100 independent trajectories were carried out to determine the statistics. We first consider the system at concentration c=1 mM and T = 0.2 ϵ/k_B. Fig. <ref> (a and b) shows three typical trajectories with the total energy E and the size of the largest aggregate m as functions of time. Interestingly, these trajectories show clear evidence of an initial lag time, during which m fluctuates but remains small (m ≤ 3), before a rapid and almost monotonic growth (Fig. <ref>b). They also show that nucleation is complete at m=3, consistent with the equilibrium analysis obtained above for M=10. A peptide configuration at a nucleation event is shown in Fig. <ref>d, indicating that a possible nucleus is a three-stranded β-sheet formed by three peptides (Fig. <ref>e). Fig. <ref>c shows that the system can form multiple aggregates of various sizes. The distribution of the aggregate size obtained after a sufficiently long time is bimodal, reflecting the fact that the system size is finite and clusters of fewer than 4 peptides are unstable. Thus, one observes either one large cluster with size close to the system size or several smaller clusters. The largest aggregates, of m=20 peptides, have the form of an elongated double β-sheet strongly resembling a cross-β-structure (Fig. <ref>f). Consider now the number of peptides in β-sheet conformation, n_β, which counts all the peptides that have at least 4 consecutive hydrogen bonds with another peptide. Fig. <ref> shows the dependence of n_β on time t, with t measured in number of MC steps, averaged over the trajectories, for two different temperatures and for various concentrations. It is shown in Fig.
<ref> (a and b) that for T = 0.2 ϵ/k_B, the time dependence of ⟨ n_β⟩ can be fitted well by the exponential relaxation function M(1-e^-t/t_0), where t_0 is the characteristic time of aggregation. This time dependence also depends strongly on the concentration c, with t_0 increasing more than threefold as c changes from 1 mM to 0.5 mM. There seems to be no evidence of a lag phase at T = 0.2 ϵ/k_B, as ⟨ n_β⟩ increases linearly with t for small t (Fig. <ref>b). This lack of evidence, however, may be due to the fact that the deviation from the exponential growth is too small to be observed. Indeed, we find that if the temperature is increased slightly to T = 0.21 ϵ/k_B, the lag phase can be observed. Fig. <ref>c shows that the growth of ⟨ n_β⟩ in time deviates significantly from the exponential relaxation function at small times. When plotted on a log-log scale (Fig. <ref>c), this growth shows that at small times ⟨ n_β⟩∝ t^α with α≈ 1.25. The exponent α > 1 indicates that the time dependence of ⟨ n_β⟩ behaves like a convex function, which demonstrates the existence of the lag phase at small times. The stronger evidence of the lag phase at T = 0.21 ϵ/k_B compared to that at T = 0.2 ϵ/k_B is consistent with the higher free energy barrier for nucleation at the former temperature, shown previously in Fig. <ref>. Note that the lag phase has also been observed in the aggregation of homopolymers with a similar model but for a larger system <cit.>. With the limited system size and time scale considered, we have not observed fragmentation of the fibril-like aggregates. On the other hand, surface-catalyzed nucleation may occur from the perspective of a two-layer β-sheet structure. The exposed hydrophobic side chains of the nucleated three-stranded β-sheet promote the association of other peptides by hydrophobic attraction. We find that clusters of 4 to 6 peptides often transform into a double β-sheet structure before continuing to grow.
Thus, this secondary nucleation is surface-catalyzed and follows immediately after the primary nucleation event. The secondary nucleation also helps to stabilize the primary nucleus.

§.§ Aggregation of mixed sequences

Finally, we study the aggregation of a binary mixture of two sequences, S2 and S4. It was shown that in homogeneous systems, the first sequence is strongly fibril-prone, whereas the second one forms only α-helices. Furthermore, sequence S4 has an aggregation transition temperature lower than T^*, so its aggregate is not stable at T^*. Strikingly, our simulations at T^* show that in a binary system of 10 chains of each sequence, after a sufficiently long time, a fraction of the S4 chains aggregate and convert into β-sheet conformation on an existing aggregate formed by the S2 chains (see Fig. <ref>). Though this fraction is only about 10% on average, this observation shows that the template-based mechanism for fibril formation can be effective for polypeptides of very different natures. Here, the fibril-like aggregate formed by the aggregation-prone peptides acts as the template for the aggregation of the non-aggregation-prone peptides. Note that due to the mismatch of the different hydrophobic patterns in the two sequences, the mixed aggregates are more disordered than the homogeneous ones (Fig. <ref>b). It is also shown in Fig. <ref>c that the growth of this mixed aggregate at the given temperature remains exponential, but the characteristic time for aggregation is larger than in the corresponding homogeneous system of sequence S2.

§ DISCUSSION

A previous study of the tube model <cit.> has shown that a hydrophobic-polar sequence can select a protein's secondary and tertiary structures. In particular, the HPPH and HPPPH patterns have been identified as strong α-formers, whereas the HPH pattern is a β-former.
Strikingly, exactly the same binary patterns have been used in experiments that allowed the successful design of de novo proteins <cit.>. In the present study, we find that these simple selection rules still hold for the peptides in aggregates, even though the model has been changed by considering the orientations of side chains. The present study shows that the binary pattern also determines the degree of order of the aggregate. In particular, there should be some compatibility between the alignment of hydrophobic side chains and the overall symmetry of the aggregate. Interestingly, the HPH pattern appears to be both a strong β-former and a highly aggregation-prone motif. Our finding is in full agreement with the experimental design of amyloids <cit.>, which shows that segments with an alternating hydrophobic and polar pattern (such as PHPHPHP) can direct protein sequences to form amyloid-like fibrils. The effect of this pattern has also been reported in simulations of an off-lattice model <cit.> and in a recent study of a lattice model <cit.>. Interestingly, it has been found that Nature disfavors this pattern in natural proteins <cit.>. The role of side chains in amyloid fibril formation has been stressed in early all-atom simulations of short peptides. The study by Gsponer et al. <cit.> showed that backbone hydrogen bonds favor the antiparallel β-sheet packing, but side-chain interactions stabilize the in-register parallel β-sheet aggregate. The simulations performed by de la Paz et al. <cit.> indicated the importance of specific contacts among side chains at specific sequence positions for the formation and stabilization of β-sheet oligomers and ordered fibrils. The excluded volume of side chains alone has been shown to enhance the formation of helices <cit.> and planar sheets <cit.>. A recent lattice model showing the formation of ordered fibrils includes the side chain directionality <cit.>.
Here, we show that the correlated orientations of hydrophobic side chains are important for both the ordered packing of β-strands within a β-sheet and the stacking of β-sheets in the fibril. In particular, the alternating hydrophobic-polar pattern leads to β-sheets with the hydrophobic side chains oriented on one side of the sheet. This one-sided orientation stabilizes the two-layered β-sheet aggregate, which is the system's ground state and can grow into a long fibril, as shown for the case of sequence S2. Note that the asymmetry of hydrophobic β-sheet surfaces has been considered in a lattice model <cit.>, showing increased stability of multi-layered β-sheets that have weakly hydrophobic surfaces exposed. Our study shows how this asymmetry is induced by the sequence at the molecular level. Previous studies <cit.> have indicated that few-layered β-sheet aggregates can be stable with respect to the peptide solution and to liquid-like oligomers in certain ranges of temperature and concentration, but are metastable with respect to aggregates with a large or infinite number of β-sheet layers. The example given by our sequence S2 shows that it is possible to design a thermodynamically stable fibril with a fixed, small number of β-sheet layers by using appropriate amino acid sequences. This result is supported by the common observation of the finite and rather uniform thickness of amyloid fibrils, even though some short peptides are reported to form nanocrystals <cit.> at low peptide concentrations. Our thermodynamics calculations show that the formation of fibril-like aggregates is much more cooperative than that of non-fibril-like aggregates. This cooperativity is indicated by both the height of the specific heat peak and the increase of the maximum specific heat per molecule with the system size.
The high cooperativity of fibril formation can be understood as due to the highly ordered nature of fibril structures and the dominant contribution of intermolecular interactions in these structures. We also find that thermodynamic stability is not a distinguishing feature of fibril-like aggregates. In particular, sequences associated with a very high aggregation transition temperature do not necessarily form fibril-like aggregates. An increased overall hydrophobicity of the sequence is shown to enhance the stability of the aggregates without impacting their fibril characteristics. It has been suggested <cit.> that on increasing peptide concentration or peptide hydrophobicity, amyloid fibril nucleation changes from one-step, i.e., the ordered nucleus is formed directly by monomeric peptides from the solution, to the two-step condensation-ordering mechanism, in which nucleation is preceded by the formation of large disordered oligomers. It has also been shown that the nucleation pathway depends on the sequence and its hydrophobicity <cit.>. The sequence S2 in our study shows one-step nucleation, consistent with the scenario suggested in Ref. <cit.>, given that this sequence has a relatively low hydrophobicity and that the 1 mM concentration considered in the simulations is not high compared to those considered in Ref. <cit.>. The impact of the HP sequence on nucleation is also associated with the small nucleation barrier and the rapid nucleation with an almost invisible lag phase observed for this sequence. For this fibril-prone sequence, it is found that the non-equilibrium behavior of a larger system is consistent with the equilibrium properties of smaller systems at the same peptide concentration. In particular, the frequent formation and dissolution of the aggregates before nucleation and the growth of the aggregates after nucleation are in accord with their thermodynamic stabilities as isolated systems.
Note that, in general, fibril formation can be kinetically <cit.> rather than thermodynamically controlled, especially at very low or very high concentrations. Interestingly, the small size of the critical nucleus found in our study agrees with those obtained in homopolymer studies <cit.> as well as in lattice heteropolymer <cit.> and all-atom <cit.> simulations of short peptides. In a recent experiment, Ridgley et al. <cit.> showed that mixtures of aggregation-prone peptides and proteins, including the α-helix-rich myoglobin, self-assemble into amyloid fibers with increased amounts of cross-β content. It was suggested that the β-sheet template formed by the peptides promotes the α-to-β conversion in the proteins and their involvement in the cross-β structure. Our simulation result on the peptide binary mixture is fully consistent with this experiment and shows that a cross-β-sheet can be heterogeneous in its peptide composition. It is possible that naturally occurring amyloid fibrils possess this heterogeneity due to the templated self-assembly process. A certain degree of heterogeneity can be seen in the fibril structure of the HET-s prion protein <cit.>, which shows that the cross-β-sheets are formed by repeating `in-register' protein segments, but neighboring β-strands do not have the same amino acid sequence.

§ CONCLUSION

The present study has highlighted several aspects of amyloid fibril formation, including the sequence determination of fibrillar structures, the role of side chain directionality, the thermodynamics of aggregation, and the nucleation and template-based growth mechanism. In agreement with various experimental findings, our results indicate that fibril-like aggregates form very much under the same principles as in protein folding, such as the alignment of hydrophobic residues in a β-sheet, the packing of hydrophobic side chains, and the cooperativity of the aggregation transition.
These principles are mainly associated with the specificity of a sequence. Our simulations also reveal another feature of amyloid formation that is largely non-specific to a sequence, namely the fibril-induced aggregation of a non-aggregation-prone sequence. This templating property certainly complicates the problem of amyloid formation, as it suggests that cross-β structures can be heterogeneous in their sequence or peptide composition. Our study provides a basis for finding routes to deal with this problem. This research is funded by the Vietnam National Foundation for Science and Technology Development (NAFOSTED) under Grant No. 103.01-2016.61. The use of the computer cluster at CIC-VAST is gratefully acknowledged.
http://arxiv.org/abs/1703.08851v2
{ "authors": [ "Nguyen Ba Hung", "Duy-Manh Le", "Trinh X. Hoang" ], "categories": [ "q-bio.BM" ], "primary_category": "q-bio.BM", "published": "20170326171012", "title": "Sequence dependent aggregation of peptides and fibril formation" }
IPMU17-0029, NORDITA-2017-23 (June 2017, revised version)

No inflation in type IIA strings on rigid CY spaces

Yuki Wakimoto ^a and Sergei V. Ketov ^a,b,c,d

^a Department of Physics, Tokyo Metropolitan University, Minami-ohsawa 1-1, Hachioji-shi, Tokyo 192-0397, Japan
^b Institute of Physics and Technology, Tomsk Polytechnic University, 30 Lenin Avenue, Tomsk 634050, Russian Federation
^c Nordita, KTH Royal Institute of Technology and Stockholm University, Roslagstullsbacken 23, SE-106 91 Stockholm, Sweden
^d Kavli Institute for the Physics and Mathematics of the Universe (IPMU), The University of Tokyo, Chiba 277-8568, Japan

wakimoto-yuki@ed.tmu.ac.jp, ketov@tmu.ac.jp

Abstract

We investigate whether cosmological inflation is possible in a class of flux compactifications of type IIA strings on rigid Calabi-Yau manifolds, when all perturbative string corrections are taken into account. We confine ourselves to the universal hypermultiplet and an abelian vector multiplet, representing matter in four dimensions. Since all axions can be stabilized by D-instantons, we choose the dilaton and a Kähler modulus as the only running scalars. Though positivity of their scalar potential can be achieved, we find that there is no slow roll (ε > 13/6) and no Graceful Exit, because the scalar potential has run-away behaviour resulting in decompactification. We conclude that it is impossible to generate phenomenologically viable inflation in the given class of flux compactifications of type IIA strings without explicit breaking of the N=2 local supersymmetry of the low-energy effective action.

§ INTRODUCTION The main problem of phenomenological applications of string theory to cosmology and particle physics is moduli stabilization <cit.>. On the one hand, though it is possible to stabilize all moduli at the classical level <cit.>, the existing no-go theorems forbid de Sitter (dS) vacua in classical supergravity compactifications <cit.> and, hence, require taking into account either quantum corrections or non-geometric fluxes <cit.>.
On the other hand, all phenomenologically viable inflationary models are sensitive to Planck-scale physics and thus require a UV completion, which raises the question of deriving any of them from a fundamental theory of quantum gravity such as superstrings. To the best of our knowledge, there is no compelling inflationary model of string cosmology, despite huge theoretical efforts and many proposals in the literature <cit.>. As regards type IIA strings compactified on Calabi-Yau (CY) threefolds, there is a "no-go" statement (i.e. no possibility of inflation) in the literature <cit.>, derived in a certain (semi-classical) limit at large volume and small string coupling by using scaling arguments. However, it is worthwhile to raise the same question beyond the semi-classical limit (i.e. for any values of the volume and the string coupling), by including (quantum) string corrections in both α' and g_s, thus also going beyond the ten-dimensional IIA supergravity approximation, where the assumptions used in <cit.> do not apply. Quantum corrections are under control in the case of type II string compactifications on CY. In this case, the low-energy effective action (LEEA) in four dimensions preserves N=2 local supersymmetry (8 supercharges) and is completely determined by the geometry of its moduli space, spanned by the scalar fields of N=2 vector- and hyper-multiplets. The only way to generate a scalar potential in the effective N=2 supergravity in four dimensions is by adding NS- and RR-fluxes, leading to the gauging of some of the isometries of the moduli space of the original fluxless compactification. Actually, the integrated Bianchi identities give rise to certain tadpole cancellation conditions, which in the presence of fluxes can generically be satisfied only by adding orientifolds, reducing supersymmetry to N=1 <cit.>.
However, in type IIA string theory it is possible to choose fluxes such that the tadpole cancellation condition holds automatically, as is the case, e.g., with the NS H-fluxes and the RR F_4- and F_6-fluxes, provided that one ignores their backreaction <cit.>. We follow the same strategy in this paper. Explicit calculations are possible in the case of a rigid CY threefold. Such a manifold has the vanishing Hodge number h^2,1()=0, so that the LEEA is described by N=2 supergravity interacting with the so-called universal hypermultiplet (UH) and some number h^1,1()>0 of vector multiplets. As was found in <cit.>, this class of N=2 compactifications does not allow meta-stable vacua, within the validity of the approximation used. However, it still has to be checked whether the same scalar potential is suitable for slow roll inflation. In this paper, we restrict ourselves to the perturbative approximation where all instanton contributions are neglected, but the perturbative α'- and g_s-corrections are retained. For even more simplicity, we consider the case with only one Kähler modulus, i.e. a rigid CY with h^1,1=1. Since no such rigid CY space has been found yet, our investigation should merely be viewed as a negative test (i.e. not yet a formal proof of our title). The use of fictitious CY spaces is commonplace in modern string theory. Our paper is organized as follows. In the next section 2 we review basic information about the relevant CY string compactifications with fluxes and provide the scalar potential in the corresponding (gauged) N=2 supergravity, following <cit.>; that is our starting point. In section <ref> we study the perturbative scalar potential and the slow roll conditions. Sec. 4 is our conclusion. Our notation for special and quaternionic geometries is collected in Appendix A.
Details of the UH moduli space metric are collected in Appendix B.§ SCALAR POTENTIAL IN TYPE-II FLUX-COMPACTIFICATIONS In this section we recall the known facts about the scalar potential in generic type II string compactifications with fluxes, in the context of N=2 supergravity in four dimensions, following <cit.>, and describe our setup. The four-dimensional LEEA of type II strings compactified on a Calabi-Yau threefold is given by N=2 supergravity coupled to N=2 vector and hypermultiplets. In the two-derivative approximation, where one ignores the higher-curvature terms appearing as α'-corrections, the bosonic part of the action comprises only kinetic terms for the metric, vector and scalar fields arising after compactification. The couplings of these kinetic terms are non-trivial, being restricted by N=2 supersymmetry in terms of the metrics on the vector and hypermultiplet moduli spaces, _V and _H, parametrized by the scalars of the corresponding multiplets. Furthermore, N=2 supersymmetry restricts _V to be a special Kähler manifold, with a Kähler potential (z^i,^) (with i=1,…,h^1,1 in type IIA) determined by a holomorphic prepotential F(X^I) (with I=(0,i)=0,…,h^1,1 and z^i=X^i/X^0), a homogeneous function of degree 2. Similarly, _H must be a quaternion-Kähler (QK) manifold of dimension 4(h^2,1+1) <cit.>. We denote the metrics on the two moduli spaces by _i and g_uv, respectively. The resulting theory is not viable from the phenomenological point of view, since it does not have a scalar potential, so that all moduli remain unfixed. This gives rise to the problem of moduli stabilization, i.e. generating a potential for the moduli. This requires N=2 gauged supergravity. The latter can be constructed from the ungauged supergravity when the moduli space _V×_H has some isometries, which are to be gauged with respect to the vector fields A^I comprising, besides those of the vector multiplets, the graviphoton A^0 of the gravitational multiplet.
Physically, this means that the scalar fields affected by the isometries acquire charges under the vector fields used in the gauging. The charges are proportional to the components of the Killing vectors k_α corresponding to the gauged isometries. We consider only abelian gaugings of the isometries of the hypermultiplet moduli space _H, because quantum corrections are known to break any non-abelian isometries. Then the charges are characterized by the vectors _I=Θ_I^α k_α∈ T_H, where Θ_I^α is known as the embedding tensor. The geometry of the moduli space together with the charge vectors completely fix the scalar potential as <cit.> [See Appendix B and Ref. <cit.> for more details about our notation.] V=4 e^^u_I ^v_J g_uv X^I ^J + e^^iD_i X^I D_^J-3 X^I ^Jμ⃗_I·μ⃗_J, where D_i X^I=(_i+_i )X^I and μ⃗_I is the triplet of moment maps which the quaternionic geometry of _H assigns to each isometry _I <cit.>. In string theory, N=2 gauged supergravity can be obtained by adding closed string fluxes to a CY compactification (see <cit.> for a review). We adopt the common strategy (see, for instance, <cit.>) of ignoring the backreaction and assuming the compactification manifold to be CY. The LEEA of flux compactifications on CY is known to fit perfectly the framework of N=2 gauged supergravity <cit.>. In type IIA, the vector multiplet moduli space _V describes the complexified Kähler moduli of , parametrizing deformations of the Kähler structure and the periods of the B-field along two-dimensional cycles, z^i=b^i+ i t^i. The hypermultiplet moduli space _H consists of
* u^a — complex structure moduli of  (a=1,…,h^2,1),
* ζ^Λ,_Λ — RR-scalars given by periods of the RR 3-form potential along three-dimensional cycles of  (Λ=(0,a)=0,…,h^2,1),
* σ — NS-axion, dual to the 2-form B-field in four dimensions,
* ϕ — dilaton, determining the value of the four-dimensional string coupling, g_s^-2=e^ϕ≡ r.
The Kaluza-Klein reduction from ten dimensions <cit.> leads to the classical metrics on _V and _H.
The former is the special Kähler metric _i given by the derivatives of the Kähler potential =-log ^I _I-X^I_I , where _I=_X^I are the derivatives of the classical holomorphic prepotential (X)=-κ_ijk X^iX^j X^k/6X^0, which is determined by the triple intersection numbers κ_ijk of . The hypermultiplet metric is given by the so-called c-map <cit.>, which produces a QK metric out of another holomorphic prepotential, characterizing the complex structure moduli, and which carries a Heisenberg group of continuous isometries acting by shifts on the RR-scalars and the NS-axion. The corresponding Killing vectors are k^Λ=__Λ-ζ^Λ_σ, _Λ=_ζ^Λ+_Λ_σ, k_σ=2_σ. It is these isometries that are gauged by adding fluxes. Type IIA strings on CY admit the NS-fluxes described by the field strength of the B-field: H^ flux_3=h^Λα̃_Λ-_Λα^Λ, where (α^Λ,α̃_Λ) is a symplectic basis of harmonic 3-forms, and the RR-fluxes given by the 2- and 4-form field strengths F^ flux_2=-m^iω̃_i, F^ flux_4=e_iω^i, where ω̃_i and ω^i are bases of H^2() and H^4(), respectively. As regards the metric on _V, it receives the α'-corrections, which are all captured by a modification of the holomorphic prepotential (<ref>) as <cit.> F(X)=(X)+χ_ ζ(3)(X^0)^2/16π^3 -(X^0)^2/8π^3∑_k_iγ^i∈ H_2^+()_3e^2π k_iX^i/X^0, where χ_=2(h^1,1-h^2,1) is the Euler characteristic of the CY,  are the genus-zero Gopakumar-Vafa invariants, and the sum goes over the effective homology classes, i.e. k_i≥ 0 for all i, with not all of them vanishing simultaneously. The quantum terms are given by the sum of the perturbative correction and the contribution of worldsheet instantons, respectively. The non-perturbative quantum corrections are known to be important for stabilizing all axions <cit.>. We ignore the axions in what follows, by assigning heavy masses to them via tuning the parameters of our model. [The alternative possibility arises when some of the axions are lighter than the other moduli.
This case is technically more involved and is not studied here.] As regards _H, though its complete non-perturbative description is still beyond reach, significant progress in this direction was achieved by using twistorial methods (see <cit.> for reviews). In contrast to _V, the hypermultiplet metric is exact in α', but receives g_s-corrections. At the perturbative level, it is known explicitly <cit.>, and is given by a one-parameter deformation of the classical c-map metric, whose deformation parameter is controlled by χ_ <cit.>. At the non-perturbative level, the metric gets instanton contributions coming from D2-branes wrapping 3-cycles (and, hence, parametrized by a charge γ=(p^Λ, q_Λ)) and from NS5-branes wrapping the whole CY. The D-instantons were incorporated to all orders using the twistor description of QK manifolds <cit.>, so that only the NS5-instanton contributions still remain unknown. In the case of a rigid CY, capital Greek indices can be omitted, so that _H has the lowest possible dimension and thus represents the simplest theoretical laboratory for explicit calculations <cit.>. The metric on four-dimensional QK spaces allows an explicit parametrization <cit.>, which reduces it to a solution of an integrable system. In the presence of a continuous isometry, it is encoded in a solution of the integrable Toda equation <cit.>. The UH metric reads <cit.> s^2=2/r^2[1-2r/^2( r)^2+^2/4 ||^2..+1/641-2r/^2^-1σ +ζ-ζ+^2], where the functions , ,  and  are defined in Appendix B. The charge vectors corresponding to our choice of fluxes and generating the isometries of the metric (<ref>) are given by <cit.> _0=_+h_ζ +2e_0+h-ζ_σ, _i= 2e_i_σ. The associated moment maps μ⃗_I are <cit.> μ_i^+= 0, μ_i^3=e_i/2r, μ_0^+=/2r-λ h, μ_0^3=1/2re_0+h-ζ.
Both the metric and the potential are invariant under the symplectic transformations induced by a change of basis of 3-cycles on . This invariance can be used to set the h-flux to zero, which we assume from now on. In this symplectic frame, only electrically charged instantons contribute to the potential. Using this simplification, one can show that =4 r/ (||^2+||^2)  , whereas the other quantities introduced in Appendix B can be computed explicitly as <cit.> =384 c∑_q>0s(q) q^2 sin(2π qζ)K_1(4π q), =2λ_2+384 c∑_q>0s(q) q^2 cos(2π qζ)K_0(4π q), r= λ_2^2/2-c-24c/π∑_q>0s(q) q cos(2π qζ)K_1(4π q) . The function , appearing in the potential (<ref>), is given by (<ref>). The divisor function is defined by s(q)≡σ_-2(q)=∑_d|q d^-2, and, via Eqs. (<ref>) and (<ref>), encodes the DT invariants counting the D-instantons, with the parameter c. As a result, all g_s-corrections affecting the scalar potential are controlled by only one topological (Euler) number. At h=0 the potential explicitly depends on the dilaton r, the Kähler moduli t^i, the periods b^i of the B-field, and the RR scalar ζ, being independent of the other RR scalar  and the NS-axion σ. Since the last two scalars are used for the gauging, one can redefine some of the gauge fields to absorb them in the effective action, where these scalars disappear from the spectrum, whereas the corresponding gauge fields become massive. In the perturbative approximation, the potential depends on the fields b^i and ζ, known as axions, only through the combination e_i b^i-ζ appearing in (<ref>). The other h^1,1 independent combinations of these fields enter the potential only via instanton corrections. This shows that the instanton corrections are indispensable for axion stabilization, and allow a simple solution <cit.>, ζ=n/2, b^i=ℓ^i/2. Having restricted ourselves to this solution, we can simplify the scalar potential as (r,t^i)≡.V|_ζ=n/2 b^i=ℓ^i/2 =e^/4r^2 4r(et)^2/^2-2r -e^-^ije_i e_j +4^2^2/e^N_ijt^i t^j-1-16^2 r/ .
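The Bessel sums above are numerically benign, since each term carries K_1(4πqρ) ~ e^(-4πqρ). A stdlib-only sketch (the function names, the quadrature-based K_1 routine, the name `rho` for the variable multiplying λ_2/2 quadratically, and the sample parameter values are ours, for illustration only) of the divisor function s(q) and the dilaton relation for r:

```python
from math import cos, cosh, exp, pi

def bessel_k1(x: float, upper: float = 10.0, n: int = 20000) -> float:
    """K_1(x) via the integral representation K_1(x) = int_0^inf e^{-x cosh u} cosh u du
    (trapezoidal rule; the integrand decays double-exponentially, so [0, upper] suffices)."""
    h = upper / n
    total = 0.5 * (exp(-x) + exp(-x * cosh(upper)) * cosh(upper))
    for k in range(1, n):
        u = k * h
        total += exp(-x * cosh(u)) * cosh(u)
    return total * h

def s(q: int) -> float:
    """Divisor function s(q) = sigma_{-2}(q) = sum over divisors d of q of d^{-2}."""
    return sum(d ** -2 for d in range(1, q + 1) if q % d == 0)

def dilaton_r(rho: float, c: float, lam2: float, zeta: float, qmax: int = 10) -> float:
    """r = lam2*rho^2/2 - c - (24c/pi) * sum_{q>0} s(q) q cos(2 pi q zeta) K_1(4 pi q rho),
    the third relation quoted in the text; the series converges extremely fast."""
    series = sum(s(q) * q * cos(2 * pi * q * zeta) * bessel_k1(4 * pi * q * rho)
                 for q in range(1, qmax + 1))
    return lam2 * rho ** 2 / 2 - c - (24 * c / pi) * series
```

Already the q=2 term is suppressed by K_1(8πρ), so in practice a handful of terms reproduces the full sum to machine precision for ρ of order one.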
In what follows, we add two more simplifications by (i) neglecting all non-perturbative (instanton) corrections, and (ii) limiting ourselves to a single Kähler modulus, omitting all the lower-case latin indices too. As was shown in <cit.>, the metric (<ref>) has a curvature singularity at the hypersurface determined by the equation r= ^2. This singularity is an artefact of our approximation. It implies that near the singularity the metric (<ref>) and, hence, the corresponding scalar potential (<ref>) cannot be trusted. In other words, we should require that r> r_ cr. In the perturbative approximation one has r_ cr=-2c.§ PERTURBATIVE APPROXIMATION Given a single vector multiplet of matter and, hence, a single Kähler modulus, we can omit all the lower-case latin indices in (<ref>) and rewrite it in the form V((r),t)= e^/4r^2( 4r(et)^2/^2M^2-2r -e^-Ne^2 +4h̃^2/e^Nt^2-1 -16 h̃^2r/M) as a function of only two real variables, r and t, representing the dilaton and the imaginary part of the Kähler modulus, respectively. In accordance with Appendices A and B, the functions entering (<ref>) and (<ref>) are greatly simplified in the perturbative approximation, where all instantons are ignored, as follows: e^-=8-C,=κ t^3/6,=√(2(r+c)/λ_2), N=2κ t. The function (<ref>) subject to the definitions (<ref>) is a fully explicit elementary (though complicated) function that can be studied both analytically and numerically (we used Wolfram Mathematica). Its profile is given in Fig. 1. Our idea is to exploit the competition of the four different terms in (<ref>) by varying the flux parameters, in order to shape them into a slow roll inflationary potential for some values of r and t. The variables  and t have to be restricted from below, >_c and t>t_c, because the potential diverges at _c = √(2c/λ_2-4λ_2^2) and t_c = √(3C/4κ)  .
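The slow roll analysis below boils down to a two-field gradient computation. A minimal symbolic sketch (our check, not from the paper; it assumes the kinetic terms 3(∂t)²/t² and (1/4)(∂r)²/r² and the universal large-field behaviour |V_t/V| = 1/t, |V_r/V| = 1/r derived in the following, together with the convention ε = G^{ab} ∂_a V ∂_b V / V², which reproduces the quoted bound):

```python
import sympy as sp

r, t, C0 = sp.symbols('r t C0', positive=True)

# Large-field behaviour consistent with |V_r/V| = 1/r and |V_t/V| = 1/t:
V = C0 / (r * t)

# Field-space metric read off from the kinetic terms:
# 3 (dt)^2/t^2 = (1/2) G_tt (dt)^2  and  (1/4)(dr)^2/r^2 = (1/2) G_rr (dr)^2
G_tt = 6 / t**2
G_rr = 1 / (2 * r**2)

# Multi-field slow-roll epsilon (convention without the usual 1/2, assumed here):
eps = sp.simplify((sp.diff(V, t)**2 / G_tt + sp.diff(V, r)**2 / G_rr) / V**2)
print(eps)  # -> 13/6

# Canonical fields t = exp(chi/sqrt(6)), r = exp(sqrt(2) phi); the second
# derivative along phi gives the largest eta:
chi, phi = sp.symbols('chi phi', real=True)
Vc = V.subs({t: sp.exp(chi / sp.sqrt(6)), r: sp.exp(sp.sqrt(2) * phi)})
eta_phi = sp.simplify(sp.diff(Vc, phi, 2) / Vc)
print(eta_phi)  # -> 2
```

Either slow roll parameter alone is already of order one, so the failure of slow roll does not depend on the precise normalization convention.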
When expanded near t=t_c, the potential reads V= A()/t-t_c+O(t-t_c) , where the residue is given by A()=[ (6C)^2/3e^2λ_2-8κ^2/3h̃^2(2c-^2λ_2+4^2λ_2^2)/2(6C)^2/3κλ_2(-2c+^2λ_2)(2c-^2λ_2+4^2λ_2^2)] . There exists a critical value _c^(2) where the residue vanishes, A(_c^(2))=0, and the potential V drastically changes its shape, as is shown in Fig. 2. We find _c^(2)=√(16cκ^2/3h̃^2-(6C)^2/3e^2λ_2)/2√(2)√(κ^2/3h̃^2(1-4λ_2)λ_2)  . The potential V at large values of t or  is not affected by that. As regards the behaviour of the potential V at large values of r and t, we find the following asymptotic expansion near the origin in the plane (1/r,1/t): V(r,t)= -e^2(λ_2-1)/2κ(4λ_2-1)1/r^21/t +3ce^2λ_2/κ(4λ_2-1)1/r^31/t -3h̃(h̃-2)/2κλ_21/r1/t^3 +O^5(1/r,1/t). The coefficients of the three leading terms in this expansion are all positive when 1/4<λ_2<1 and h̃<2. In this case, the potential V takes positive values, as is shown in Fig. <ref>. We find that the (relative) first and second derivatives of the scalar potential above are all independent of the (flux) parameters of V, namely, |V_r/V| = 1/r + O^2(1/r) , V_rr/V = 2/r^2 + O^2(1/r) , and |V_t/V| = 1/t + O^2(1/t) , V_tt/V = 2/t^2 + O^2(1/t) , i.e. they exhibit universal behaviour at large values of r and t. It follows from Eqs. (<ref>), (<ref>), (<ref>) and (<ref>) in the large field approximation that the kinetic terms of the fields t and r read 3(∂ t)^2/t^2 and 1/4(∂ r)^2/r^2, respectively. In terms of the canonical fields defined by t=e^√(1/6)χ and r=e^√(2)φ, the standard slow roll parameters are thus given by ε >13/6 and η >2 . They violate the necessary conditions (ε≪ 1 and η≪ 1) for slow roll inflation. The large field limit also implies the runaway decompactification driving the theory towards ten dimensions, which is unacceptable for our Universe.§ CONCLUSION We considered a simple class of flux compactifications of type IIA strings, preserving N=2 local supersymmetry in the four-dimensional low energy effective action.
When the backreaction of fluxes is ignored, we obtained the non-perturbative scalar potential that leads to the axion stabilization. The latter greatly simplifies the scalar potential to merely a function of the dilaton and the Kähler moduli. We simplified it even further by going to the perturbative approximation, where all instanton contributions are ignored. We used a restricted set of electric fluxes, without magnetic fluxes and with vanishing Romans mass, which automatically obeys the tadpole cancellation condition. It also preserves N=2 local supersymmetry, which allowed us to control (in principle) all quantum corrections and do explicit calculations. Even though we assumed only one Kähler modulus, we expect that our results do not change qualitatively with a larger number of Kähler moduli and, perhaps, even with a larger number of hypermultiplets. We found the universal behaviour of the scalar potential at large r and t, i.e. in the perturbative region, where all slow roll parameters become independent of the flux parameters. Our results extend the applicability of the "no-go" statement <cit.>, ruling out slow roll inflation in type IIA/CY strings, beyond the semi-classical approximation in the case of rigid CY. The absence of viable inflation in type IIA strings on rigid CY with N=2 supersymmetry in four dimensions is also related to the absence of meta-stable vacua found in <cit.> for a limited range of the string coupling values, but including all D-instanton corrections. Though unbroken N=2 supersymmetry certainly does not describe our universe, it may facilitate the construction of viable inflationary models when using N=2 gauged supergravity as the starting point (or as the 0th-order approximation) and then breaking extended supersymmetry by additional structures, such as orientifolds with negative tension.
However, this would require a much better understanding of quantum effects in N=1 supersymmetric flux compactifications, which are not under theoretical control at present.§ ACKNOWLEDGEMENTS SVK is supported by a Grant-in-Aid of the Japanese Society for Promotion of Science (JSPS) under No. 26400252, the World Premier International Research Centre Initiative (WPI Initiative), MEXT, Japan, and the Competitiveness Enhancement Program of Tomsk Polytechnic University in Russia. One of the authors (SVK) thanks NORDITA in Stockholm, Sweden, for the kind hospitality extended to him during the completion of this paper. The authors are also grateful to the referee for careful reading of the manuscript and critical comments.§ APPENDIX A: SPECIAL GEOMETRY A special Kähler manifold is determined by a holomorphic prepotential F(X^I), a homogeneous function of degree 2. The homogeneous coordinates X^I are related to the coordinates z^i on the manifold by z^i=X^i/X^0 and, for simplicity, we choose the gauge where X^0=1. Given the prepotential, it is convenient to define the matrix N_IJ=-2 F_IJ. It is invertible, but has a split signature (b_2,1). A related invertible matrix with a definite signature can be constructed as follows. Let us define _IJ=_IJ-N_IKX^K N_JLX^L/N_MNX^M X^N . _IJ appears as the coupling matrix of the gauge fields in the low-energy effective action, and its imaginary part is negative definite. In terms of the matrix (<ref>), the Kähler potential on  is given by =-logX^I N_IJ^J. Its derivatives with respect to z^i and ^ are _i = -e^ N_i I^I, _i = -e^ N_ij+_i_, where we have used the homogeneity of the holomorphic prepotential and z^i=b^i+ i t^i. In particular, (<ref>) provides the metric on .
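For the one-modulus cubic prepotential, the Kähler potential above can be evaluated in closed form. A symbolic sketch (our consistency check, not from the paper; one modulus, κ ≡ κ_111, gauge X^0 = 1) confirming e^(-K) = (4/3)κt³ = 8𝒱, which is the classical (C = 0) limit of the expression used in the perturbative approximation of Section 3:

```python
import sympy as sp

kappa, b, t = sp.symbols('kappa b t', positive=True)
z = b + sp.I * t                      # z = b + i t, with X^0 = 1, X^1 = z

# For F(X) = -kappa (X^1)^3 / (6 X^0), the degree-2 homogeneity gives, at X^0 = 1:
F0 = kappa * z**3 / 6                 # F_0 = dF/dX^0
F1 = -kappa * z**2 / 2                # F_1 = dF/dX^1

# e^{-K} = i( Xbar^I F_I - X^I Fbar_I ) for I = 0, 1:
eK = sp.I * ((F0 - sp.conjugate(F0))
             + (sp.conjugate(z) * F1 - z * sp.conjugate(F1)))

V6 = kappa * t**3 / 6                 # classical CY volume  V = kappa t^3 / 6
residual = sp.simplify(sp.expand(eK) - 8 * V6)
print(residual)  # -> 0, i.e. e^{-K} = 8V
```

Note that e^(-K) depends only on t and not on the axion b, consistent with the statement that the axions enter the potential only through the flux combination and the instanton terms.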
§ APPENDIX B: THE UH METRIC To write down the metric computed in <cit.>, let us summarize the data characterizing a rigid CY manifold:
*The intersection numbers κ_ijk, which specify the classical holomorphic prepotential (<ref>) on the Kähler moduli space.
*The Euler characteristic χ_=2h^1,1>0, which appears in the α'-corrected prepotential (<ref>), and is always positive for a rigid . We also use the following parameter: c=-χ_/192π=-π^2/48ζ(3)C.
*The complex number λ≡λ_1 - iλ_2=∫_Ω/∫_Ω given by the ratio of the periods of the holomorphic 3-form Ω∈ H^3,0() over an integral symplectic basis (,) of H_3(,). The geometry requires that λ_2>0, which explains the minus sign in (<ref>).
*The generalized Donaldson-Thomas (DT) invariants Ω_γ, which are integers counting, roughly, the number of BPS instantons of charge γ=(p,q). In the case of vanishing magnetic charge p and arbitrary electric charge q, they coincide with the Euler characteristic, Ω_(0,q)=χ_.
The central charge Z_γ=q-λ p characterizes a D-instanton of charge γ. It is used to define the function _γ(t)=(-1)^qpexp -2πqζ-p+ϖ^-1Z_γ-ϖ_γ , where  is a function on the moduli space, which is fixed below. Geometrically, t parametrizes the fiber of the twistor space , a bundle over _H, whereas _γ are the Fourier modes of holomorphic Darboux coordinates on  <cit.>. Using (<ref>), one defines [ = ∫_γϖ̣/ϖ log1-_γ, = ∫_γϖ̣/ϖ _γ/1-_γ,; = ±∫_γϖ̣/ϖ^1± 1 log1-_γ, = ±∫_γϖ̣/ϖ^1± 1 _γ/1-_γ, ] where γ is a contour on  joining t=0 and t=∞ along the direction fixed by the phase of the central charge, γ= Z_γ^+. Expanding the integrands in powers of _γ, these functions can be expressed as series in the modified Bessel functions of the second kind K_n. The set of functions (<ref>) encodes the D-instanton corrections to the moduli space. The related quantities appearing in the metric are
*the functions =1/2π∑_γγ |Z_γ|^2 , = 2λ_2-1/2π∑_γγ |Z_γ|^2 , =+^-1||^2,
*the one-forms = -λζ-/4π∑_γγ Z_γ-^-1qζ-p -2/r, = 2r/πλ_2∑_γγ Z_γ+^-1 q-λ_1 p-λ_1ζ+λ_2^2 pζ .
Finally, the function entering (<ref>) is implicitly determined as a solution to the following equation: r= λ_2^2/2-c -/32π^2∑_γZ_γ+_γ, where r=e^ϕ is the four-dimensional dilaton. With all the notation above, the D-instanton corrected metric on the four-dimensional hypermultiplet moduli space is given by Eq. (<ref>) in the main text. As was proven in <cit.>, the metric (<ref>) agrees with the Tod ansatz <cit.>, where the role of the Tod potential satisfying the Toda equation is played by the function T=2log(/2). 10Baumann:2014nda D. Baumann and L. McAllister, Inflation and String Theory. Cambridge University Press, 2015. DeWolfe:2005uu O. DeWolfe, A. Giryavets, S. Kachru, and W. Taylor, “Type IIA moduli stabilization,” JHEP 0507 (2005) 066, http://www.arXiv.org/abs/hep-th/0505160 hep-th/0505160. Maldacena:2000mw J. M. Maldacena and C. Nunez, “Supergravity description of field theories on curved manifolds and a no go theorem,” Int. J. Mod. Phys. A16 (2001) 822–855, http://www.arXiv.org/abs/hep-th/0007018 hep-th/0007018. Ivanov:2000fg S. Ivanov and G. Papadopoulos, “A No go theorem for string warped compactifications,” Phys. Lett. B497 (2001) 309–316, http://www.arXiv.org/abs/hep-th/0008232 hep-th/0008232. Kachru:2003aw S. Kachru, R. Kallosh, A. D. Linde, and S. P. Trivedi, “De Sitter vacua in string theory,” Phys.Rev. D68 (2003) 046005, http://www.arXiv.org/abs/hep-th/0301240 hep-th/0301240. Balasubramanian:2005zx V. Balasubramanian, P. Berglund, J. P. Conlon, and F. Quevedo, “Systematics of moduli stabilisation in Calabi-Yau flux compactifications,” JHEP 03 (2005) 007, http://www.arXiv.org/abs/hep-th/0502058 hep-th/0502058. Conlon:2005ki J. P. Conlon, F. Quevedo, and K. Suruliz, “Large-volume flux compactifications: Moduli spectrum and D3/D7 soft supersymmetry breaking,” JHEP 08 (2005) 007, http://www.arXiv.org/abs/hep-th/0505076 hep-th/0505076. Westphal:2006tn A.
Westphal, “de Sitter string vacua from Kahler uplifting,” JHEP 03 (2007) 102, http://www.arXiv.org/abs/hep-th/0611332 hep-th/0611332. deCarlos:2009qm B. de Carlos, A. Guarino, and J. M. Moreno, “Complete classification of Minkowski vacua in generalised flux models,” JHEP 02 (2010) 076, http://www.arXiv.org/abs/0911.2876 0911.2876. Louis:2012nb J. Louis, M. Rummel, R. Valandro, and A. Westphal, “Building an explicit de Sitter,” JHEP 10 (2012) 163, http://www.arXiv.org/abs/1208.3208 1208.3208. Danielsson:2012by U. Danielsson and G. Dibitetto, “On the distribution of stable de Sitter vacua,” JHEP 03 (2013) 018, http://www.arXiv.org/abs/1212.4984 1212.4984. Blaback:2013qza J. Blaback, D. Roest, and I. Zavala, “De Sitter Vacua from Nonperturbative Flux Compactifications,” Phys. Rev. D90 (2014), no. 2, 024065, http://www.arXiv.org/abs/1312.5328 1312.5328. Hassler:2014mla F. Hassler, D. Lüst, and S. Massai, “On Inflation and de Sitter in Non-Geometric String Backgrounds,”http://www.arXiv.org/abs/1405.2325 1405.2325. Hertzberg:2007wc M. P. Hertzberg, S. Kachru, W. Taylor and M. Tegmark,“Inflationary Constraints on Type IIA String Theory",JHEP 12 (2007) 095, http://www.arXiv.org/abs/0711.2512 0711.2512. Giddings:2001yu S. B. Giddings, S. Kachru, and J. Polchinski, “Hierarchies from fluxes in string compactifications,” Phys.Rev. D66 (2002) 106006, http://www.arXiv.org/abs/hep-th/0105097 hep-th/0105097. Kachru:2004jr S. Kachru and A.-K. Kashani-Poor, “Moduli potentials in type IIA compactifications with RR and NS flux,” JHEP 03 (2005) 066, http://www.arXiv.org/abs/hep-th/0411279 hep-th/0411279. Alexandrov:2016plh S. Alexandrov, S. V. Ketov, and Y. Wakimoto,“ Non-perturbative scalar potential inspired by type IIA strings on rigid CY,” JHEP 11 (2016) 066, http://www.arXiv.org/abs/1607.05293 1607.05293. Davidse:2005ef M. Davidse, F. Saueressig, U. Theis, and S. 
Vandoren, “Membrane instantons and de Sitter vacua,” JHEP 09 (2005) 065, http://www.arXiv.org/abs/hep-th/0506097 hep-th/0506097. Bagger:1983tt J. Bagger and E. Witten, “Matter couplings in 𝒩=2 supergravity,” Nucl. Phys. B222 (1983) 1. D'Auria:1990fj R. D'Auria, S. Ferrara, and P. Fre, “Special and quaternionic isometries: General couplings in N=2 supergravity and the scalar potential,” Nucl. Phys. B359 (1991) 705–740. Andrianopoli:1996cm L. Andrianopoli, M. Bertolini, A. Ceresole, R. D'Auria, S. Ferrara, P. Fre, and T. Magri, “N=2 supergravity and N=2 superYang-Mills theory on general scalar manifolds: Symplectic covariance, gaugings and the momentum map,” J. Geom. Phys. 23 (1997) 111–189, http://www.arXiv.org/abs/hep-th/9605032 hep-th/9605032. deWit:2001bk B. de Wit, M. Roček, and S. Vandoren, “Gauging isometries on hyperkaehler cones and quaternion- kaehler manifolds,” Phys. Lett. B511 (2001) 302–310, http://www.arXiv.org/abs/hep-th/0104215 hep-th/0104215. MR872143 K. Galicki, “A generalization of the momentum mapping construction for quaternionic Kähler manifolds,” Comm. Math. Phys. 108 (1987), no. 1, 117–138.Grana:2005jc M. Grana, “Flux compactifications in string theory: A Comprehensive review,” Phys. Rept. 423 (2006) 91–158, http://www.arXiv.org/abs/hep-th/0509003 hep-th/0509003. Louis:2002ny J. Louis and A. Micu, “Type II theories compactified on Calabi-Yau threefolds in the presence of background fluxes,” Nucl. Phys. B635 (2002) 395–431, http://www.arXiv.org/abs/hep-th/0202168 hep-th/0202168. Giryavets:2003vd A. Giryavets, S. Kachru, P. K. Tripathy, and S. P. Trivedi, “Flux compactifications on Calabi-Yau threefolds,” JHEP 0404 (2004) 003, http://www.arXiv.org/abs/hep-th/0312104 hep-th/0312104. Cecotti:1989qn S. Cecotti, S. Ferrara, and L. Girardello, “Geometry of type II superstrings and the moduli of superconformal field theories,” Int. J. Mod. Phys. A4 (1989) 2475. Candelas:1990rm P. Candelas, X. C. de la Ossa, P. S. Green, and L. 
Parkes, “A pair of Calabi-Yau manifolds as an exactly soluble superconformal theory,” Nucl. Phys. B359 (1991) 21–74. Hosono:1993qy S. Hosono, A. Klemm, S. Theisen, and S.-T. Yau, “Mirror symmetry, mirror map and applications to Calabi-Yau hypersurfaces,” Commun. Math. Phys. 167 (1995) 301–350, http://www.arXiv.org/abs/hep-th/9308122 hep-th/9308122. Alexandrov:2011va S. Alexandrov, “Twistor Approach to String Compactifications: a Review,” Phys. Rept. 522 (2013) 1–57, http://www.arXiv.org/abs/1111.2892 1111.2892. Alexandrov:2013yva S. Alexandrov, J. Manschot, D. Persson, and B. Pioline, “Quantum hypermultiplet moduli spaces in N=2 string vacua: a review,” in Proceedings, String-Math 2012, Bonn, Germany, July 16-21, 2012, pp. 181–212.2013. http://www.arXiv.org/abs/1304.0766 1304.0766. Alexandrov:2007ec S. Alexandrov, “Quantum covariant c-map,” JHEP 05 (2007) 094, http://www.arXiv.org/abs/hep-th/0702203 hep-th/0702203. Antoniadis:1997eg I. Antoniadis, S. Ferrara, R. Minasian, and K. S. Narain, “R^4 couplings in M- and type II theories on Calabi-Yau spaces,” Nucl. Phys. B507 (1997) 571–588, http://www.arXiv.org/abs/hep-th/9707013 hep-th/9707013. Antoniadis:2003sw I. Antoniadis, R. Minasian, S. Theisen, and P. Vanhove, “String loop corrections to the universal hypermultiplet,” Class. Quant. Grav. 20 (2003) 5079–5102, http://www.arXiv.org/abs/hep-th/0307268 hep-th/0307268. RoblesLlana:2006ez D. Robles-Llana, F. Saueressig, and S. Vandoren, “String loop corrected hypermultiplet moduli spaces,” JHEP 03 (2006) 081, http://www.arXiv.org/abs/hep-th/0602164 hep-th/0602164. RoblesLlana:2006is D. Robles-Llana, M. Roček, F. Saueressig, U. Theis, and S. Vandoren, “Nonperturbative corrections to 4D string theory effective actions from SL(2,Z) duality and supersymmetry,” Phys. Rev. Lett. 98 (2007) 211602, http://www.arXiv.org/abs/hep-th/0612027 hep-th/0612027. Alexandrov:2008nk S. Alexandrov, B. Pioline, F. Saueressig, and S. 
Vandoren, “Linear perturbations of quaternionic metrics,” Commun. Math. Phys. 296 (2010) 353–403, http://www.arXiv.org/abs/0810.1675 0810.1675. Alexandrov:2008gh S. Alexandrov, B. Pioline, F. Saueressig, and S. Vandoren, “D-instantons and twistors,” JHEP 03 (2009) 044, http://www.arXiv.org/abs/0812.4219 0812.4219. Alexandrov:2009zh S. Alexandrov, “D-instantons and twistors: some exact results,” J. Phys. A42 (2009) 335402, http://www.arXiv.org/abs/0902.2761 0902.2761. Strominger:1997eb A. Strominger, “Loop corrections to the universal hypermultiplet,” Phys. Lett. B421 (1998) 139–148, http://www.arXiv.org/abs/hep-th/9706195 hep-th/9706195. Gutperle:2000sb M. Gutperle and M. Spalinski, “Supergravity instantons and the universal hypermultiplet,” JHEP 06 (2000) 037, http://www.arXiv.org/abs/hep-th/0005068 hep-th/0005068. Ceresole:2001wi A. Ceresole, G. Dall'Agata, R. Kallosh, and A. Van Proeyen, “Hypermultiplets, domain walls and supersymmetric attractors,” Phys. Rev. D64 (2001) 104006, http://www.arXiv.org/abs/hep-th/0104056 hep-th/0104056. Davidse:2004gg M. Davidse, U. Theis, and S. Vandoren, “Fivebrane Instanton Corrections to the Universal Hypermultiplet,” Nucl. Phys. B697 (2004) 48–88, http://www.arXiv.org/abs/hep-th/0404147 hep-th/0404147. Bao:2009fg L. Bao, A. Kleinschmidt, B. E. W. Nilsson, D. Persson, and B. Pioline, “Instanton Corrections to the Universal Hypermultiplet and Automorphic Forms on SU(2,1),” Commun. Num. Theor. Phys. 4 (2010) 187–266, http://www.arXiv.org/abs/0909.4299 0909.4299. Catino:2013syn F. Catino, C. A. Scrucca, and P. Smyth, “Simple metastable de Sitter vacua in N=2 gauged supergravity,” JHEP 04 (2013) 056, http://www.arXiv.org/abs/1302.1754 1302.1754. Przanowski:1991ru M. Przanowski, “Killing vector fields in selfdual, Euclidean Einstein spaces with Lambda not equal 0,” J. Math. Phys. 32 (1991) 1004–1010. MR1423177 K. P. Tod, “The SU(∞)-Toda field equation and special four-dimensional metrics,” inGeometry and physics (Aarhus, 1995), vol. 
184 ofLecture Notes in Pure and Appl. Math., pp. 307–312.Dekker, New York, 1997.Ketov:2001ky S. V. Ketov, “D instantons and universal hypermultiplet,”http://www.arXiv.org/abs/hep-th/0112012 hep-th/0112012. Ketov:2001gq S. V. Ketov, “Universal hypermultiplet metrics,” Nucl.Phys. B604 (2001) 256–280, http://www.arXiv.org/abs/hep-th/0102099 hep-th/0102099. Ketov:2002vr S. V. Ketov, “Summing up D-instantons in N = 2 supergravity,” Nucl. Phys. B649 (2003) 365–388, http://www.arXiv.org/abs/hep-th/0209003 hep-th/0209003. Alexandrov:2006hx S. Alexandrov, F. Saueressig, and S. Vandoren, “Membrane and fivebrane instantons from quaternionic geometry,” JHEP 09 (2006) 040, http://www.arXiv.org/abs/hep-th/0606259 hep-th/0606259. Alexandrov:2012np S. Alexandrov, “c-map as c=1 string,” Nucl. Phys. B863 (2012) 329–346, http://www.arXiv.org/abs/1201.4392 1201.4392. Alexandrov:2014sya S. Alexandrov and S. Banerjee, “Hypermultiplet metric and D-instantons,” JHEP 1502 (2015) 176, http://www.arXiv.org/abs/1412.8182 1412.8182. Alexandrov:2009qq S. Alexandrov and F. Saueressig, “Quantum mirror symmetry and twistors,” JHEP 09 (2009) 108, http://www.arXiv.org/abs/0906.3743 0906.3743. Freitag:2011 E. Freitag and R. S. Manni, “On Siegel three folds with a projective Calabi–Yau model,”http://www.arXiv.org/abs/1103.2040 1103.2040. Freitag:2015 E. Freitag, “A rigid Calabi-Yau manifold with Picard number two,”http://www.arXiv.org/abs/1506.00892 1506.00892. Cremmer:1984hj E. Cremmer, B. de Wit, J. P. Derendinger, S. Ferrara, L. Girardello, C. Kounnas, and A. Van Proeyen, “Vector multiplets coupled to =2 supergravity: Superhiggs effect, flat potentials and geometric structure,” Nucl. Phys. B250 (1985) 385. GomezReino:2008bi M. Gomez-Reino, J. Louis, and C. A. Scrucca, “No metastable de Sitter vacua in N=2 supergravity with only hypermultiplets,” JHEP 02 (2009) 003, http://www.arXiv.org/abs/0812.0884 0812.0884. Fre:2002pd P. Fré, M. Trigiante, and A. 
Van Proeyen, “Stable de Sitter vacua from N=2 supergravity,” Class. Quant. Grav. 19 (2002) 4167–4194, http://www.arXiv.org/abs/hep-th/0205119 hep-th/0205119. Ceresole:2014vpa A. Ceresole, G. Dall'Agata, S. Ferrara, M. Trigiante, and A. Van Proeyen, “A search for an 𝒩=2 inflaton potential,” Fortsch. Phys. 62 (2014) 584–606, http://www.arXiv.org/abs/1404.1745 1404.1745. Fre:2014pca P. Fré, A. S. Sorin, and M. Trigiante, “The c-map, Tits Satake subalgebras and the search for 𝒩=2 inflaton potentials,” Fortsch. Phys. 63 (2015) 198–258, http://www.arXiv.org/abs/1407.6956 1407.6956. MR664330 S. M. Salamon, “Quaternionic Kähler manifolds,” Invent. Math. 67 (1982), no. 1, 143–171. VanProeyen:1995sw A. Van Proeyen, “Vector multiplets in N=2 supersymmetry and its associated moduli spaces,” in High-energy physics and cosmology. Proceedings, Summer School, Trieste, Italy, June 12-July 28, 1995. 1995. http://www.arXiv.org/abs/hep-th/9512139 hep-th/9512139. Strominger:1986uh A. Strominger, “Superstrings with Torsion,” Nucl. Phys. B274 (1986) 253. Polchinski:1995sm J. Polchinski and A. Strominger, “New vacua for type II string theory,” Phys. Lett. B388 (1996) 736–742, http://www.arXiv.org/abs/hep-th/9510227 hep-th/9510227. Michelson:1996pn J. Michelson, “Compactifications of type IIB strings to four-dimensions with nontrivial classical potential,” Nucl. Phys. B495 (1997) 127–148, http://www.arXiv.org/abs/hep-th/9610151 hep-th/9610151. Hitchin:2004ut N. Hitchin, “Generalized Calabi-Yau manifolds,” Quart. J. Math. 54 (2003) 281–308, http://www.arXiv.org/abs/math/0209099 math/0209099. Romans:1985tz L. J. Romans, “Massive N=2a Supergravity in Ten-Dimensions,” Phys. Lett. B169 (1986) 374. Alexandrov:2010ca S. Alexandrov, D. Persson, and B. Pioline, “Fivebrane instantons, topological wave functions and hypermultiplet moduli spaces,” JHEP 1103 (2011) 111, http://www.arXiv.org/abs/1010.5792 1010.5792. Alexandrov:2014mfa S. Alexandrov and S.
Banerjee, “Fivebrane instantons in Calabi-Yau compactifications,” Phys.Rev. D90 (2014) 041902, http://www.arXiv.org/abs/1403.1265 1403.1265. Alexandrov:2014rca S. Alexandrov and S. Banerjee, “Dualities and fivebrane instantons,” JHEP 1411 (2014) 040, http://www.arXiv.org/abs/1405.0291 1405.0291. Grana:2014vva M. Grana, J. Louis, U. Theis, and D. Waldram, “Quantum Corrections in String Compactifications on SU(3) Structure Geometries,” JHEP 01 (2015) 057, http://www.arXiv.org/abs/1406.0958 1406.0958. KashaniPoor:2005si A.-K. Kashani-Poor and A. Tomasiello, “A stringy test of flux-induced isometry gauging,” Nucl.Phys. B728 (2005) 135–147, http://www.arXiv.org/abs/hep-th/0505208 hep-th/0505208.
http://arxiv.org/abs/1703.08993v3
{ "authors": [ "Yuki Wakimoto", "Sergei V. Ketov" ], "categories": [ "hep-th" ], "primary_category": "hep-th", "published": "20170327100744", "title": "No inflation in type IIA strings on rigid CY spaces" }
Okpeafoh S. Agimelen^a1,cor1, Vaclav Svoboda^a1, Bilal Ahmed^a2, Javier Cardona^a3, Jerzy Dziewierz^a4, Cameron J. Brown^a2, Thomas McGlone^a2, Alison Cleary^a3, Christos Tachtatzis^a3, Craig Michie^a3, Alastair J. Florence^a2, Ivan Andonovic^a3, Anthony J. Mulholland^a5, Jan Sefcik^a1,cor1

[cor1] Corresponding authors: okpeafoh.agimelen@strath.ac.uk, jan.sefcik@strath.ac.uk

[a1] EPSRC Centre for Innovative Manufacturing in Continuous Manufacturing and Crystallisation, Department of Chemical and Process Engineering, University of Strathclyde, James Weir Building, 75 Montrose Street, Glasgow, G1 1XJ, United Kingdom.

[a2] EPSRC Centre for Innovative Manufacturing in Continuous Manufacturing and Crystallisation, Strathclyde Institute of Pharmacy and Biomedical Sciences, University of Strathclyde, 161 Cathedral Street, Glasgow, G4 0RE, United Kingdom.

[a3] Centre for Intelligent Dynamic Communications, Department of Electronic and Electrical Engineering, University of Strathclyde, Royal College Building, 204 George Street, Glasgow, G1 1XW, United Kingdom.

[a4] The Centre for Ultrasonic Engineering, Department of Electronic and Electrical Engineering, University of Strathclyde, Royal College Building, 204 George Street, Glasgow, G1 1XW, United Kingdom.

[a5] Department of Mathematics and Statistics, University of Strathclyde, Livingstone Tower, 26 Richmond Street, Glasgow, G1 1XH, United Kingdom.

The success of the various secondary operations involved in the production of particulate products depends on particles of the desired size and shape being delivered by a preceding primary operation such as crystallisation, because size and shape determine how the particles behave in those secondary processes. Both the size and the shape of the particles are very sensitive to the crystallisation conditions, so control of the crystallisation process is essential.
This control requires the development of software tools that can process the sensor data captured in situ effectively and efficiently. However, these tools have various strengths and limitations depending on the process conditions and the nature of the particles.

In this work, we employ wet milling of crystalline particles as a case study of a process which produces effects typical of crystallisation processes. We study some of the strengths and limitations of our previously introduced tools for estimating the particle size distribution (PSD) and the aspect ratio from chord length distribution (CLD) and imaging data. We find situations where the CLD tool works better than the imaging tool and vice versa. In general, however, the two tools complement each other, and can therefore be employed together in a suitable multi-objective optimisation approach to estimate the PSD and aspect ratio.

Keywords: particle size distribution; chord length distribution; imaging; particle shape; crystallisation; inverse problems; wet milling

§ INTRODUCTION

The success of any manufacturing process for particulate products depends on key attributes of the particles which influence the outcome of the various downstream operations in the process. In particular, particle size and shape influence behaviours such as flowability, filterability and ease of dissolution, and these behaviours in turn determine whether the various downstream operations will be successful. Hence it is necessary that the particles possess the desired particle size distribution (PSD) and shape for a particular process <cit.>.

A very important upstream operation, crucial to the manufacture of particulate products, is crystallisation. The PSD and shape of the particles produced during crystallisation vary due to a number of factors, including the nature of the material of which the particles are composed and the crystallisation process conditions.
The combination of factors that will lead to the production of particles with the desired PSD and shape is not easily determined. Therefore, to produce particles with the desired PSD and shape during crystallisation, the process must be controlled so as to produce tailor-made particles. Controlling a crystallisation process requires that the PSD and shape of the particles produced be monitored in situ, and this can be achieved by the use of inline sensors. However, these inline sensors have limitations on their applicability, which may stem from the working principles of the sensors, from various effects of the process, or from both. These limitations affect the accuracy of the estimated PSD and aspect ratio (the metric used here to quantify particle shape) obtained using these sensors, and hence determine whether the estimates are representative of the particles being measured.

In this work, we examine various process conditions and their effects on two inline sensors: the Mettler Toledo focused beam reflectance measurement (FBRM) and particle vision and measurement (PVM) sensors. These two sensors have been chosen because of their wide applicability in various particulate processes. We employ them in wet milling processes of three different organic crystalline materials. Wet milling is one of the key processes carried out in industry during the production of particulate products, and it produces effects typical of crystallisation processes. We explore the performance of the sensors under these effects.

§ METHODS

The methodology employed in this work consisted of wet milling experiments and subsequent analysis of data acquired inline and offline.
The materials, equipment and experimental procedure employed are described in subsections <ref> to <ref>, while the methods of data analysis used are described in subsection <ref>.

§.§ Materials

The following materials were used in this work: paracetamol (98.0-102.0% USP), benzoic acid (>99.5%), and metformin hydrochloride (reagent grade). Both paracetamol and benzoic acid were purchased from Sigma-Aldrich, while metformin hydrochloride was purchased from Molekula. The benzoic acid particles were suspended in distilled water obtained from an in-house purification system, and the surfactant Tween 20 from Sigma-Aldrich was added to the benzoic acid slurry to ease dispersion of the particles and avoid foaming. The paracetamol and metformin hydrochloride were both suspended in 2-propanol (reagent grade, CAS: 67-63-0, Assay (GLC) >99.5%) obtained from Fisher Scientific, UK.

§.§ Equipment

The experiments were conducted in a closed loop consisting of a Mettler Toledo OptiMax Workstation made up of a 1L stirred tank crystallizer equipped with an inline Hastelloy Pt100 temperature sensor. The Workstation was connected to a Watson Marlow Du520 peristaltic pump, which was in turn connected to an IKA MagicLab (module UTL) rotor-stator wet mill. The wet mill was finally connected back to the Workstation to close the loop as sketched in Fig. <ref>. The temperature of the wet mill was controlled with a Lauda heater/chiller unit as shown in Fig. <ref>. The process conditions of temperature and stirring speed of the slurry in the Workstation were controlled using the iControl v5.2 software from Mettler Toledo. Data related to the size and shape of the particles in the wet milling processes were captured with the Mettler Toledo FBRM G400 series and PVM V819 sensors within the stirred tank as sketched in Fig. <ref>. The FBRM sensor produces a narrow laser beam which moves in a circular trajectory.
The beam, when incident on a particle, traces out a chord on the particle. The lengths of the chords measured over a pre-set period of time for particles in the slurry are then reported as a chord length distribution (CLD) <cit.>. The PVM sensor takes images of the particles using eight laser beams (some of which can be switched off). The images are recorded on a CCD array and subsequently transferred to a computer. The size of each image of the PVM V819 is 1360 × 1024 pixels with a pixel size of 0.8μm <cit.>. Offline particle size and shape analyses were carried out using the Malvern Morphologi G3 instrument. The Morphologi instrument consists of a dispersion unit which utilizes compressed air to disperse the particles over a glass plate. Images of particles on the plate are captured using a camera with a microscope lens. The images are then analysed by the instrument software for size and shape information.

§.§ Experimental procedure

At the start of each experiment, a saturated solution of approximately 900ml was generated inside the OptiMax vessel at 25^∘C by adding the required quantity of solid. The temperature was ramped to 40 - 50^∘C to speed up dissolution and subsequently cooled to 25^∘C over 20min. Once the temperature had reached the setpoint value, solid particles (whose mass varied between the different materials) were added and allowed to equilibrate for 60min. Before the addition of these solid particles, a sample of the original material (starting material) was analysed with the offline Morphologi instrument for PSD and aspect ratio information. After the equilibration period (covering a period T_1), the peristaltic pump and wet mill were started simultaneously. The speed of the pump was maintained at 50rpm throughout the experiments, while that of the wet mill was initially set to 6000rpm (for a duration T_2), after which it was increased in stages.
At the next stage (with duration T_3) of the process, the speed of the wet mill was increased to 10,000rpm, subsequently to 14,000rpm (for a duration T_4) and finally to 18,000rpm (for a duration T_5). The temperature of the mill outlet was regulated manually by adjusting the heater/chiller setpoint in order to maintain it at 25^∘C and prevent dissolution. The time intervals T_1 to T_5 varied from 30 - 90min for each material. At the end of the time interval T_5, the suspension was filtered and washed in a Buchner funnel. Benzoic acid and metformin hydrochloride were each washed with the same solvent used in their respective experiments, while paracetamol was washed with chilled water. Each of the cakes obtained at the end of each wet milling process was dried overnight in a vacuum oven. Like the starting material, samples of the dry cakes (milled product) obtained at the end of each wet milling process were analysed for PSD and aspect ratio information using the offline Morphologi instrument.

§.§.§ Benzoic acid

Benzoic acid particles were prepared by antisolvent crystallisation (after dissolution of the original sample from Sigma-Aldrich) in order to obtain long needle-shaped crystals. The particles were filtered and dried before being suspended in water for the milling experiment. The particles were suspended in water (saturated with benzoic acid) due to the low solubility of benzoic acid in water. However, due to the poor wettability of benzoic acid in water, the surfactant Tween 20 was used at a concentration of 2ml/L. The solid loading in this experiment was 1.6% w/w.

§.§.§ Paracetamol

The original paracetamol sample from Sigma-Aldrich was dissolved in isoamyl alcohol, after which prism-like particles were obtained by cooling crystallisation. The particles obtained from the cooling crystallisation were then suspended in a saturated solution of paracetamol in 2-propanol for the wet milling experiment.
The solid loading in this case was 4.2% w/w. Although the solubility of paracetamol in 2-propanol is relatively high, this solvent was chosen to avoid agglomeration.

§.§.§ Metformin hydrochloride

The metformin sample from Molekula was used directly, as the particles were already needle shaped. The particles were suspended in a saturated solution of metformin in 2-propanol (in which metformin has a low solubility and good dispersion) for the wet milling process. The solid loading in this case was 3.5% w/w. The wet milling process for metformin was stopped at stage T_4 (with the mill speed of 14,000rpm) as the particles were quickly broken in this case.

§.§ Data analysis

As mentioned in section <ref>, the starting material and the milled product for each material were analysed for PSD and aspect ratio information using the offline Morphologi instrument. The CLD data acquired using the inline FBRM sensor were analysed using a previously developed algorithm <cit.> for PSD and aspect ratio information. Similarly, the images captured using the inline PVM sensor were analysed using a previously developed image processing algorithm <cit.>, also for size and shape information.

§.§.§ Estimating relative number of particles

The number of particles produced during each wet milling process relative to the number of initially suspended particles (for each of the materials) can be estimated from analysis of images and CLD data. The number of initially suspended particles can be estimated from the mass of initially suspended particles and the volume based PSD of each starting material estimated with the offline Morphologi instrument.
Even though the sample of each starting material initially analysed with the offline Morphologi instrument is not the same as the sample that was suspended for each wet milling process, the estimated number of initially suspended particles will still be reasonable, as long as the particles in the original powder of each material were well mixed.

To estimate the number of initially suspended particles, the particle length L is discretised and classified into N bins, with the characteristic length L̅_i = √(L_iL_i+1) of bin i representing the length of particles in bin i, where L_i and L_i+1 are the boundaries of bin i. The number of particles N_i in bin i is given as N_i = m̃_iM_0/(ρ v_i), where m̃_i is the mass fraction of the particles in bin i, M_0 is the mass of the initially suspended particles, ρ is the density of the particles and v_i is the volume of a particle in bin i. Approximating the shape of all particles in each bin with an ellipsoid of semi-major axis length a_i = L̅_i/2 and two equal semi-minor axis lengths b_i = r_ia_i, where r_i is the mean aspect ratio of all the particles in bin i, gives the volume of a particle in bin i as v_i = πr_i^2L̅_i^3/6. Since all particles have the same density, the mass fraction of the particles in bin i can be replaced by their volume fraction ṽ_i. Then the number of particles in bin i becomes N_i = 6ṽ_iM_0/(ρπr_i^2L̅_i^3), and the number of initially suspended particles 𝒩 is given as 𝒩 = ∑_i=1^N N_i.

The number of particles in the slurry can be estimated from CLD data using Eqs. (<ref>) and (<ref>). However, since Eq. (<ref>) requires the volume fraction of particles, the volume based PSD is first estimated from the CLD data. This is accomplished using the inversion algorithm developed in <cit.>. Then the volume fraction of particles can be estimated, and hence the number of particles 𝒩_CLD in the slurry using Eq. (<ref>).
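The bin-wise count calculation above can be sketched in a few lines of code. The bin edges, volume fractions, aspect ratios, suspended mass and crystal density used below are illustrative values only, not data from these experiments:

```python
import math

def particle_counts(edges, vol_frac, aspect, mass, density):
    """Number of particles per bin from a volume based PSD.

    edges:    bin boundaries L_1..L_{N+1} (m); bin i spans [edges[i], edges[i+1]]
    vol_frac: volume fraction of particles in each bin
    aspect:   mean aspect ratio r_i of the particles in each bin
    mass:     total mass M_0 of suspended particles (kg)
    density:  crystal density rho (kg/m^3)
    """
    counts = []
    for i, (vf_i, r_i) in enumerate(zip(vol_frac, aspect)):
        # geometric mean characteristic length of bin i
        L_i = math.sqrt(edges[i] * edges[i + 1])
        # ellipsoid volume v_i = pi r_i^2 L_i^3 / 6,
        # hence N_i = 6 v~_i M_0 / (rho pi r_i^2 L_i^3)
        counts.append(6.0 * vf_i * mass / (density * math.pi * r_i**2 * L_i**3))
    return counts

# illustrative values: two bins, elongated particles with r = 0.5
edges = [10e-6, 50e-6, 250e-6]
N_i = particle_counts(edges, [0.3, 0.7], [0.5, 0.5], mass=1.6e-3, density=1270.0)
N_total = sum(N_i)  # total number of initially suspended particles
```

A convenient consistency check on any implementation is that summing the ellipsoid masses, N_i ρ v_i, over all bins recovers M_0.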
Subsequently, the number of particles in the slurry relative to the number of initially suspended particles, N̂_CLD = 𝒩_CLD/𝒩, can be estimated.

In the case of image analysis, the mean number of objects per frame 𝒩_IMG is estimated by first counting all objects which were detected in focus and contained wholly within the image frames. This number of objects is then divided by the number of frames containing at least one object in focus to obtain 𝒩_IMG. Subsequently, the number of particles in images relative to the number of initially suspended particles, N̂_IMG = 𝒩_IMG/𝒩, is estimated.

§ RESULTS AND DISCUSSIONS

The results obtained from the analyses of data captured with the offline Morphologi instrument, as well as those acquired inline (using the FBRM and PVM sensors) during the wet milling processes, are discussed in subsections <ref> to <ref>.

§.§ Analysis with the offline Morphologi instrument

The volume based PSDs estimated for both the starting material and milled product for benzoic acid using the offline Morphologi instrument are shown in Figs. <ref>(a) and <ref>(b), while the distributions of aspect ratio for the same samples obtained with the same instrument are shown in Figs. <ref>(c) and <ref>(d). The data clearly show particle breakage, as the peak and right tail of the volume based PSD move to the left from the starting material to the milled product in Figs. <ref>(a) and <ref>(b). The reduction of particle length of the milled product compared to that of the starting material in Fig. <ref>(c) also suggests particle breakage. The estimated probability density function (PDF) of aspect ratio (obtained with the offline Morphologi instrument) of the milled product of benzoic acid is very similar to that of the starting material, as seen in Fig. <ref>(d). This could be because this aspect ratio PDF is number based, and hence more sensitive to fines, which may be more rounded.

Similar results are obtained for paracetamol.
The reduction of particle sizes as a result of the wet milling process can be seen in the left shift of the volume based PSD in Figs. <ref>(e) and <ref>(f), and in the reduction in particle lengths seen in Fig. <ref>(a). This is similar to the case of benzoic acid in Figs. <ref>(a) - <ref>(c). However, the left shift of the volume based PSD for paracetamol is more pronounced than in the case of benzoic acid. Similarly, there is a more pronounced shift of the aspect ratio PDF in the case of paracetamol (in Fig. <ref>(b)) when compared with that of benzoic acid in Fig. <ref>(d). This could be due to differences in material properties between benzoic acid and paracetamol, which make them respond differently to the wet mill. Differences in agglomeration of the starting materials, de-agglomeration during agitation and milling, and re-agglomeration during filtration and drying of the milled product could also have contributed. See section .... of the supplementary information for sample images of the starting materials and milled products for benzoic acid, paracetamol and metformin.

The situation with metformin is similar to those of benzoic acid and paracetamol. However, the left shift (due to breakage) in the volume based PSD is more significant (as seen in Figs. <ref>(c) and <ref>(d)) than in the previous two cases. This is also reflected in the shift in particle length on moving from the starting material to the milled product seen in Fig. <ref>(e). This is because of the brittle nature of metformin, which resulted in most of the large particles being broken down to fines at stage T_1 of the wet milling process. However, as the aspect ratio PDF is number based, it does not show much of a shift on moving from the starting material to the milled product in Fig. <ref>(f).

§.§ Analysis of CLD data

The total CLD counts at the different time intervals T_1 to T_5 during the wet milling of benzoic acid are shown in Fig.
<ref>(a), while the mean CLDs captured in the last 10 mins of each time interval T_1 to T_5 are shown by the symbols in Fig. <ref>(b). The increase in total chord counts over the wet milling stages T_1 to T_5 seen in Fig. <ref>(a) clearly suggests breakage of particles during the process, as the process conditions were such that there was no nucleation or growth of particles. This increase in chord counts is also seen in the increase in the peaks of the mean CLDs shown in Fig. <ref>(b).

The solid lines (with colours corresponding to the symbols) in Fig. <ref>(b) are the estimated CLDs obtained by solving the associated inverse problem; they show near perfect agreement with the corresponding experimental data. The inversion involves searching for a PSD at different aspect ratios r (the ratio of width to length of the particles; all particles are assumed to have the same mean aspect ratio) whose corresponding CLD gives the best fit to the measured CLD <cit.>. In the case of Fig. <ref>(b), these best fits were obtained at r=0.5 (T_1), r=0.6 (T_2), r=0.5 (T_3), r=0.5 (T_4) and r=0.6 (T_5), as indicated in the figure. These values of aspect ratio for obtaining the best fit to the experimentally measured CLD are close to the peak of the estimated aspect ratio PDF obtained with the offline Morphologi instrument shown in Fig. <ref>(d).

The PSDs estimated from the CLDs in Fig. <ref>(b) (at the best fit values of r) are shown in Fig. <ref>(c) (as functions of particle length) and <ref>(d) (as functions of circular equivalent (CE) diameter) for benzoic acid. The estimated PSDs (Figs. <ref>(c) and <ref>(d)) both show breakage of particles on moving from T_1 to T_5. That is, the peaks of the distributions shift to the left, and both the right and left tails of the distributions shift to the left, on moving from T_1 to T_5.
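The search over aspect ratios described above can be sketched schematically as follows. This is not the algorithm of <cit.> — there the forward model is the ideal CLD model and the inversion is regularised with a non-negativity constraint — but a minimal stand-in using plain least squares, with a caller-supplied function mapping an aspect ratio r to the forward matrix that turns a PSD into a CLD:

```python
import numpy as np

def fit_psd(cld, forward_matrix, aspect_ratios):
    """Scan candidate mean aspect ratios r; for each, fit a PSD whose
    forward-modelled CLD best matches the measurement, and keep the best.

    cld:            measured chord length distribution (1-D array)
    forward_matrix: callable r -> matrix A such that A @ psd approximates the CLD
                    (a stand-in for the ideal CLD model)
    aspect_ratios:  candidate values of r to scan
    """
    best = None
    for r in aspect_ratios:
        A = forward_matrix(r)
        # unconstrained least squares; the real algorithm also enforces
        # non-negativity of the PSD and regularises the inversion
        psd, *_ = np.linalg.lstsq(A, cld, rcond=None)
        resid = np.linalg.norm(A @ psd - cld)
        if best is None or resid < best[2]:
            best = (r, psd, resid)
    return best  # (best-fit r, fitted PSD, residual norm)
```

The aspect ratio reported for each milling stage is then simply the r at which the fit residual is smallest.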
The main peak of the volume based PSD estimated from the CLD at T_1 is shifted to the left of the peak of the volume based PSD of the starting material estimated with the offline Morphologi instrument in terms of particle length in Fig. <ref>(c), but it agrees very well with the corresponding estimate for the starting material in terms of CE diameter in Fig. <ref>(d). Also, the peak of the volume based PSD estimated from the CLD at T_5 is shifted to the left of the estimated volume based PSD of the milled product in terms of particle length in Fig. <ref>(c), but shows a better agreement with the estimated PSD of the milled product in terms of CE diameter in Fig. <ref>(d). However, the volume based PSD estimated at T_1 has a minor peak at a particle length of about 500μm (Fig. <ref>(c)), which is to the right of the peak (at a particle length of about 300μm in Fig. <ref>(c)) of the volume based PSD estimated for the starting material using the offline Morphologi instrument.

The occurrence of the minor peak at a particle length of about 500μm in the volume based PSD estimated from the CLD at T_1, and the fat right tail (extending to a particle length of 2000μm) of the estimated volume based PSD of the starting material obtained with the offline Morphologi instrument, suggest the presence of particles of length up to about 2000μm in the starting material. These length dimensions appear not to have been captured very well in the CLD data. Part of the reason could be that the circular laser beam (with a diameter of 5.3mm for the FBRM G400 used in this work) of the FBRM sensor has a much lower probability of capturing this length dimension than that predicted by the ideal CLD model used in this work. This ideal CLD model <cit.> assumes, among other things, that all particles lie on the focal plane of the laser spot of the FBRM instrument, and that the laser spot makes a straight chord on the particles.
However, as the length dimensions of the particles become comparable to the diameter of the laser beam, the curvature of the chord becomes more pronounced. Then the estimated probability of obtaining a chord of a given length becomes less accurate. Particles detected out of focus also contribute to this inaccuracy. Another possible reason why the length dimensions of around 2000μm were not captured well in the CLD data could be that some of the particles in the starting material had been broken down during the time interval T_1, reducing their contribution to the CLD count. The estimated volume based PSD from the CLD at T_5 shows a higher proportion of particles of length between about 20μm and about 70μm in Fig. <ref>(c) (or CE diameter of about 20μm to about 50μm in Fig. <ref>(d)) when compared with the estimated volume based PSD of the milled product using the offline Morphologi instrument. A possible reason for this discrepancy could be the large amount of bubbles produced during the wet milling of the benzoic acid sample. These bubbles lead to chord splitting <cit.> at their boundaries as they are transparent to the FBRM laser. This effect leads to an unphysically high count of short chords, and thereby to an overestimation of fines in the estimated volume based PSD. It could also be that there was significant agglomeration of the milled product during filtration and drying, thereby leading to a higher count of large particles which dominate the estimated volume based PSD obtained with the offline Morphologi instrument. The number of particles in the slurry relative to the number of initially suspended particles, N̂_CLD, estimated from the CLD data is shown in Fig. <ref>(e). The estimate is made using the volume based PSD shown in Fig. <ref>(c) (at T_1 to T_5) in Eq. (<ref>). The data show an increase in the number of particles in the slurry, which is to be expected in a wet milling process.
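Eq. (<ref>) itself is not reproduced in this excerpt. The sketch below shows one plausible way to turn a volume-based PSD into a relative particle count of the kind plotted as N̂_CLD, under the hypothetical assumption that a particle of length l and aspect ratio r has volume proportional to r^2 l^3; the actual expression used in the paper may differ.

```python
import numpy as np

def relative_particle_number(lengths, vol_psd, aspect_ratio, ref_number):
    """Relative particle count from a volume-based PSD (illustrative only).

    Assumes (hypothetically) that the volume of a particle in bin i scales as
    aspect_ratio**2 * lengths[i]**3, so the particle count in that bin scales
    as vol_psd[i] / (aspect_ratio**2 * lengths[i]**3). The total is then
    normalised by `ref_number`, the count for the initially suspended particles.
    """
    lengths = np.asarray(lengths, dtype=float)
    counts = np.asarray(vol_psd, dtype=float) / (aspect_ratio**2 * lengths**3)
    return counts.sum() / ref_number
```

Note that, as the text observes for paracetamol at T_5, a larger estimated aspect ratio reduces this count even when the PSD shifts left, which is consistent with the slight dip in N̂_CLD after 17:30.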
The increase in total CLD counts over the stages T_1 to T_5 of the wet milling process for paracetamol in Fig. <ref>(a) suggests particle breakage. This is similar to the case of benzoic acid in Fig. <ref>(a). The mean of the measured CLD in the last 10 mins of each of the stages T_1 to T_5 of the wet milling process for the paracetamol sample is shown by the symbols in Fig. <ref>(b). The solid lines (with colours corresponding to the symbols) in the same Fig. are the estimated CLDs at aspect ratios r=0.5 (T_1), r=0.4 (T_2), r=0.4 (T_3), r=0.4 (T_4) and r=0.5 (T_5). The aspect ratio of r=0.5 estimated at T_1 is slightly less than the position of the peak (at about r=0.7 in Fig. <ref>(b)) of the PDF of the aspect ratio obtained using the offline Morphologi instrument for the starting material of paracetamol. However, this PDF covers a broad range from about r=0.2 to r=1. This suggests that the mean aspect ratio of r=0.5 at T_1 obtained from the CLD is a reasonable value. A similar situation holds for the milled product of paracetamol seen in Fig. <ref>(b) obtained with the offline Morphologi instrument. The estimated aspect ratios cover a broad range from about r=0.2 to r=1 (with a peak close to r=0.5). This suggests that the mean aspect ratio of r=0.5 estimated from the CLD data at T_5 is reasonable. Similar to the case of benzoic acid, breakage of particles is reflected in an increase in total CLD counts for the paracetamol sample, as seen in Fig. <ref>(a). This breakage also leads to an increase in the peaks of the mean CLDs collected in the last 10 mins of each time interval T_1 to T_5, as shown by the symbols in Fig. <ref>(b), where the solid lines with colours corresponding to the symbols are the estimated CLDs at the aspect ratios indicated in the Fig. Furthermore, the estimated PSDs (from the CLDs in Fig. <ref>(b)) in Fig. <ref>(c) (as functions of particle length) and Fig.
<ref>(d) (as functions of CE diameter) both show particle breakage moving from T_1 to T_5. The peaks of the estimated PSDs from the CLDs from T_1 to T_5 show a slight drift to the left; however, a stronger signature of particle breakage is seen in the tails of the estimated PSDs from T_1 to T_5, as seen in Figs. <ref>(c) and <ref>(d). The volume based PSD estimated from the starting material using the offline Morphologi instrument shows peaks at a particle length of 800μm (Fig. <ref>(c)) and at a CE diameter of 500μm (Fig. <ref>(d)). These peaks are far to the right of the peaks of the estimated volume based PSD from the CLD at T_1, which occur at a particle length of about 200μm (Fig. <ref>(c)) and a CE diameter of about 100μm (Fig. <ref>(d)). The reason for this large discrepancy between the PSDs estimated from the CLD and from the offline Morphologi instrument could be that a significant number of particles may have been agglomerated during the offline measurement of the starting material. These agglomerates may have de-agglomerated upon agitation in the stirred tank during the time interval T_1, hence these large dimensions were not captured in the CLD data. However, the peaks of the volume based PSD estimated from the CLD at T_5 are closer (although shifted to the right) to those estimated for the milled product using the offline Morphologi instrument, as seen in Figs. <ref>(c) and <ref>(d). The estimated volume based PSD for the milled product obtained with the offline Morphologi instrument suggests a higher proportion of particles of length less than about 100μm in Fig. <ref>(c) and CE diameter less than about 70μm (Fig. <ref>(d)) when compared with similar estimates from the CLD at T_5. Part of the reason for the discrepancy could be the production of small particles during filtration and drying due to breakage. Similar to the case of benzoic acid in Fig.
<ref>(e), the number of particles relative to the number of initially suspended particles N̂_CLD for the paracetamol sample shows an increase in particle number over the course of the wet milling process, as seen in Fig. <ref>(e). However, there is a slight decrease in N̂_CLD after 17:30 in Fig. <ref>(e). This is because the left shift in the estimated volume based PSD (in Fig. <ref>(c)) is not large enough to counter the effect of the larger aspect ratio (r=0.5) estimated at T_5 for paracetamol. This causes a decrease in the number of particles estimated with Eq. (<ref>). Similar to the cases of benzoic acid and paracetamol, breakage of particles during the wet milling of the metformin sample is reflected in the increase in total CLD counts in Fig. <ref>(a) and the mean CLDs measured in the last 10 mins of each stage T_1 to T_4 shown by the symbols in Fig. <ref>(b). This is also seen in the increase in the number of particles relative to that of initially suspended particles N̂_CLD in Fig. <ref>(e). As mentioned in subsection <ref>, the stage T_5 of the wet milling process was not conducted for the metformin sample, as the particles had broken down so much in the later part of T_4 that there was very little contrast between the background and the objects in the PVM images collected. See the supplementary information for sample images of the materials used in this work. The estimated CLDs corresponding to the measured CLDs (symbols in Fig. <ref>(b)) at the time intervals T_1 to T_4 are shown by the solid lines in Fig. <ref>(b) at the aspect ratios indicated in the Fig. The PVM images (section 2 of the supplementary information) collected at T_1 for the metformin sample show that some of the metformin particles were long rod-like objects at the start of the process. This is consistent with the mean aspect ratio of r=0.3 estimated from the CLD data (in Fig. <ref>(b)) at T_1 for metformin, but at variance with the estimated aspect ratio PDF (shown in Fig.
<ref>(f)) for metformin obtained with the offline Morphologi instrument, in which the starting material and milled product both have peaks close to r=0.8. This suggests that the aspect ratio PDF estimated using the offline Morphologi instrument (which is number based) is dominated by the smaller particles (which are more likely to be rounded) in the metformin sample. This is in contrast to the CLD method, which is biased towards larger particles <cit.>, and in this case rod-like particles. Some sample images for both the starting material and milled product for metformin obtained with the offline Morphologi instrument in section 1 of the supplementary information show some of these small particles. The estimated volume based PSD (from the CLD data in Fig. <ref>(b)) at the time interval T_1 shows a minor peak at a particle length close to 1000μm in Fig. <ref>(c) and close to a CE diameter of 700μm in Fig. <ref>(d). These values are close to the peaks of the volume based PSD of the starting material estimated with the offline Morphologi instrument, as seen in Figs. <ref>(c) and <ref>(d). This clearly reflects the presence of the long rod-like particles (considering the aspect ratio of r=0.3 estimated at T_1) in the slurry, as seen in Fig. 9 in section 2 of the supplementary information. However, the main peaks of the volume based PSD estimated from the CLD at T_1 occur at a particle length of about 200μm (Fig. <ref>(c)) and CE diameter of about 100μm (Fig. <ref>(d)). The reason why the main peak is markedly shifted to the left could be that the length dimension of up to about 3000μm is not well captured by the ideal CLD model used in this work. This is similar to the case of benzoic acid discussed previously. However, the estimated PSDs from the CLDs from T_1 to T_4 show (Figs. <ref>(c) and <ref>(d)) a significant breakage of particles, in agreement with the estimated PSDs (Figs.
<ref>(c) and <ref>(d)) of the milled product obtained using the offline Morphologi instrument. §.§ Analysis of inline PVM images The scatter plot of aspect ratio versus particle length (Fig. <ref>(a)) obtained from the analysis of images captured inline with the PVM sensor for the benzoic acid particles suggests breakage of the particles moving from T_1 to T_5. This is also seen in the volume based PSDs at the time intervals T_1 to T_5 estimated from the detected objects in the PVM images in the specified time interval, as shown in Figs. <ref>(d) (as functions of particle length) and <ref>(e) (as functions of CE diameter). This breakage of particles is also reflected in the mean number of objects per frame N̂_IMG relative to the number of initially suspended particles seen in Fig. <ref>(b). The average number of objects increases up to about time 13:30 before decreasing. As the wet milling progresses and the particle sizes reduce, the number of images containing at least one particle wholly within the frame increases. However, most of these frames only contain a few particles in focus. Hence, even though the number of objects detected in focus increases, this increase is not enough to match the increase in the number of frames. Therefore, the value of N̂_IMG decreases after some time. In the case of benzoic acid, this decrease occurs after around the time 13:30. However, the estimated volume based PSDs from the PVM images are quite noisy, as a significant number of objects are either out of focus, have sizes below the resolution limit of the PVM, or touch the boundaries of the image frame. The objects which fall into the aforementioned categories are rejected by the image processing algorithm <cit.>. However, the trend of particle breakage from T_1 to T_5 agrees with that of the data for the starting material and milled product obtained with the offline Morphologi instrument, as seen in Figs.
<ref>(d) and <ref>(e). The particle lengths estimated from the PVM images at T_1 in Fig. <ref>(d) show a truncation at a length of about 800μm, whereas the corresponding estimate (shown in the same Fig.) of the starting material with the offline Morphologi instrument extends to lengths larger than 1000μm. Part of the reason for this discrepancy is that particles longer than about 800μm (the PVM image frame has dimensions of 1088 × 819μm) have a very low chance of fitting within the image frame and hence get rejected by the image processing algorithm. Furthermore, some of the particles in the inline images may be detected partially in focus, resulting in an underestimation of their lengths. There may also have been breakage of some of the long rod-like particles upon agitation in the stirred tank. The (noisy) peak of the estimated PSD for benzoic acid from the inline PVM images at T_1 in Figs. <ref>(d) (as a function of particle length) and <ref>(e) (as a function of CE diameter) is close to the corresponding estimated PSDs of the starting material obtained with the offline Morphologi instrument, as seen in the same Figs. In contrast, the estimated volume based PSD from the PVM images at T_5 has peaks that are shifted far to the left of the corresponding estimate from the milled product using the offline Morphologi instrument, as seen in Figs. <ref>(d) and <ref>(e). Furthermore, the left tail of the estimated volume based PSD from the PVM images at T_5 suggests a significantly higher proportion of small particles when compared to the corresponding estimate from the milled product using the offline Morphologi instrument. This is despite the fact that the image processing algorithm rejects particles of sizes below about 30μm due to the resolution limit of the PVM sensor images <cit.>. This is similar to the case of the estimated volume based PSD from the CLD data for benzoic acid at T_5 (Figs.
<ref>(c) and <ref>(d)) when compared with the volume based PSD of the milled product (Figs. <ref>(c) and <ref>(d)) estimated with the offline Morphologi instrument. Hence, it is very likely that significant agglomeration of the milled product of benzoic acid occurred during filtration and drying. The estimated aspect ratio PDF from the PVM images for benzoic acid at T_1 has a peak at r=0.45 (Fig. <ref>(c)), which is close to the estimated mean aspect ratio of r=0.5 from the CLD data for benzoic acid at T_1 but less than the peak value of r=0.65 estimated for the starting material with the offline Morphologi instrument, as seen in Fig. <ref>(c). However, the estimated aspect ratio PDF from the inline PVM images at T_5 in Fig. <ref>(c) has two peaks, at r=0.55 and r=0.75. The mean value of these aspect ratios is close to the estimated mean aspect ratio of r=0.6 from the CLD data for benzoic acid at T_5, as well as to the peak position of the estimated aspect ratio PDF from the offline Morphologi instrument in Fig. <ref>(c). In this case of benzoic acid, there is more consistency in the shape information obtained by the three methods despite the inconsistencies in the PSD estimation. The data in Fig. <ref> are estimates from inline images for paracetamol similar to those of benzoic acid shown in Fig. <ref>. Breakage of particles can be seen in the scatter plot of the aspect ratio versus particle length in Fig. <ref>(a) and the estimated PSDs in Figs. <ref>(d) (as functions of particle length) and <ref>(e) (as functions of CE diameter). This breakage can also be seen in the estimated mean number of objects per frame N̂_IMG relative to the number of initially suspended particles in Fig. <ref>(b). The value of N̂_IMG decreases just after time 15:00 (Fig. <ref>(b)) for reasons similar to those discussed earlier for the case of benzoic acid in Fig.
<ref>(b). The volume based PSD of the starting material estimated with the offline Morphologi instrument has a peak at a particle length of 800μm (Fig. <ref>(d)) and CE diameter of 500μm (Fig. <ref>(e)). This is far to the right of the (noisy) peaks of the volume based PSD (at T_1) estimated from the inline PVM images, which occur at a particle length of 400μm (Fig. <ref>(d)) and CE diameter of 200μm (Fig. <ref>(e)). Even though larger particles have a higher chance of being rejected by the image processing algorithm due to their higher chance of making contact with the image frame, the PVM images do not show the presence of particles of lengths up to 800μm at T_1, as seen in Fig. <ref>(a). This situation is similar to that faced when the estimated volume based PSD of the starting material (using the offline Morphologi instrument) of paracetamol was compared with that from the CLD data for paracetamol at T_1. This further strengthens the suggestion that a significant number of the paracetamol particles (of the starting material) may have been agglomerated, and then de-agglomerated upon loading and agitation in the stirred tank. However, the position of the peaks of the estimated volume based PSD from the PVM images at T_5 is closer to those of the milled product, as seen in Figs. <ref>(d) and <ref>(e), although the estimates (using the offline Morphologi instrument) suggest a higher proportion of fines, as seen in the left tails of the volume based PSDs in Figs. <ref>(d) and <ref>(e). This is partly due to the resolution limit of the PVM images discussed earlier and the possible production of more fines during filtration and drying due to breakage. Similarly, the peak of the aspect ratio PDF estimated from the PVM images at T_1 is very close to the corresponding estimate of the starting material using the offline Morphologi instrument, as seen in Fig.
<ref>(c), but the estimated aspect ratio PDF (from the inline PVM images) at T_1 suggests a lower proportion of particles with aspect ratio r≲ 0.5. Furthermore, the estimated aspect ratio PDF from the PVM images at T_5 shows a very good agreement with the corresponding estimate of the milled product obtained with the offline Morphologi instrument. The level of agreement between the estimated aspect ratio PDFs from the PVM images (at T_1 and T_5) and the corresponding estimates from the starting material and milled product using the offline Morphologi instrument demonstrates that more robust estimates of PSD could be made from a combination of inline PVM images and CLD in a multi-objective approach. Similar to the cases of benzoic acid and paracetamol, breakage of particles during the wet milling process for metformin can be inferred from the data in Figs. <ref>(a), <ref>(b), <ref>(d) and <ref>(e). The scatter plot of aspect ratio versus particle length for the metformin particles obtained from the inline PVM images at T_1 is truncated at a particle length of about 800μm, as seen in Fig. <ref>(a), whereas a similar estimate from the starting material using the offline Morphologi instrument extends to a particle length of about 3000μm, as seen in Fig. <ref>(e). This is because these long rod-like particles mostly touch the PVM image frame (see section 2 of the supplementary information for sample images) and hence are rejected by the image processing algorithm. This situation also shows up in the estimated volume based PSD from the PVM images at T_1, whose (noisy) peak occurs at a particle length significantly less than that of the corresponding estimate from the starting material using the offline Morphologi instrument, as seen in Fig. <ref>(d). Similarly, the peak of the volume based PSD occurs at a CE diameter much less than that of the corresponding estimate of the starting material in Fig.
<ref>(e). However, the volume based PSDs estimated from the PVM images at T_3 (Fig. <ref>(a)) show better agreement with similar estimates of the milled product using the offline Morphologi instrument, as seen in Figs. <ref>(d) and <ref>(e), although the volume based PSD of the milled product suggests a higher proportion of fines, as seen in Figs. <ref>(d) and <ref>(e). This is similar to the case of paracetamol in Figs. <ref>(d) and <ref>(e). As in that case, the resolution limit of the PVM images, and the fact that no reliable estimate of the PSD from the PVM images could be made at T_4 (where further breakage may have occurred), could have been responsible for the discrepancy. There could also have been further breakage of the particles during filtration and drying before offline analysis of the milled product. The estimated aspect ratio PDFs of the metformin particles for both the starting material and milled product obtained with the offline Morphologi instrument have their peaks close to 0.8. Similarly, the aspect ratio PDF estimated from the inline PVM images has a peak close to 0.8, as seen in Fig. <ref>(c). This clearly shows that the long rod-like particles of metformin do not dominate the aspect ratio estimated from images. This is in contrast to the mean aspect ratio r=0.3 estimated from the CLD at T_1, which is dominated by the long rod-like particles. § CONCLUSIONS We have employed wet milling processes of slurries of different crystalline materials to assess the strengths and limitations of two different inline particle monitoring modalities, namely CLD and PVM imaging. The materials were carefully chosen as they produce crystals of different morphologies and mechanical strengths. The work has shown the relative sensitivities of the two modalities to changes in particle size and shape due to the wet milling.
The effect of bubbles produced during the process on the two modalities has also been studied. When the particles are sufficiently large to be within the resolution of the PVM instrument camera, but not so large that they fail to fit into the image frame, the inline imaging method gives PSD estimates which are closer to offline estimates of similar materials. This is especially true for systems composed of long needle-like particles mixed with shorter or more rounded particles. This is because the CLD method needs to find a compromise PSD which fits the measured CLD at an appropriate aspect ratio. In systems like these, the aspect ratio distribution estimated by the imaging method tends to be dominated by the smaller particles, while the mean aspect ratio estimated by the CLD method tends to be dominated by the long needle-like or rod-like particles. However, the inline imaging method is limited to particles of sizes above about 30μm, whereas the CLD method can go to smaller sizes. In addition, the PSD estimated by the inline imaging method becomes less representative as the sizes of the particles approach the size of the image frame. The accuracy of the PSD estimate is also affected by the proportion of objects that are captured outside the focal plane of the camera. Similarly, the PSD estimated by the CLD method becomes less representative as the lengths of needle-like or rod-like particles approach the diameter of the circular trajectory of the FBRM laser spot, making the chord length probability estimate less accurate. Hence, in systems composed of a mixture of needle-like or rod-like particles and more rounded particles of various sizes, a combination of both the CLD and inline imaging methods (probably in a multi-objective approach) should give more robust estimates of the PSD and the aspect ratio.
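One way such a multi-objective combination could be scored is by a weighted sum of the CLD misfit and the disagreement with an image-based PSD estimate. The sketch below is purely illustrative: the weights, the forward matrix A and both data vectors are hypothetical, and this is not the formulation used in the present work.

```python
import numpy as np

def multi_objective_residual(psd, A, measured_cld, psd_from_images,
                             w_cld=1.0, w_img=1.0):
    """Weighted residual combining two objectives for a candidate PSD:
    (i) how well the modelled CLD (A @ psd, with A a hypothetical forward
        matrix) fits the measured CLD, and
    (ii) how close the candidate PSD is to an image-based PSD estimate.
    A candidate minimising this residual balances both sensors."""
    r_cld = np.linalg.norm(A @ psd - measured_cld)
    r_img = np.linalg.norm(psd - psd_from_images)
    return w_cld * r_cld + w_img * r_img
```

A search over candidate PSDs (and aspect ratios) minimising this residual would then play the role of the single-sensor CLD inversion, with the weights setting the relative trust placed in each modality.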
This will be particularly important for real-time monitoring and control of crystallisation processes, where various process conditions could lead to the production of particles of various sizes and shapes, and to effects such as bubbles, which could be very challenging to capture by a single sensor method. § DATA MANAGEMENT All images captured with the PVM sensor, as well as the data from the offline Morphologi instrument and CLD data from the FBRM sensor, have been deposited in the publicly accessible repository University of Strathclyde-Pure (http://dx.doi.org/10.15129/33e74309-d91c-4ebc-9b2a-18fd33b24876). § ACKNOWLEDGEMENT This work was performed within the UK EPSRC funded project (EP/K014250/1) `Intelligent Decision Support and Control Technologies for Continuous Manufacturing and Crystallisation of Pharmaceuticals and Fine Chemicals' (ICT-CMAC). The authors would like to acknowledge financial support from EPSRC, AstraZeneca and GSK. The authors are also grateful for useful discussions with industrial partners from AstraZeneca, GSK, Mettler-Toledo, Perceptive Engineering and Process Systems Enterprise. Supplementary Information § SAMPLE IMAGES FROM OFFLINE MORPHOLOGI INSTRUMENT Sample images collected for the starting materials and milled products for benzoic acid, paracetamol and metformin are shown in Figs. <ref> to <ref>. The images were captured with the offline Morphologi instrument. The images for the starting material for benzoic acid in Fig. <ref> show that some of the particles in the starting material in the benzoic acid sample had some degree of agglomeration. The particle sizes also covered a wide range, as seen on the left and right of Fig. <ref>. The degree of agglomeration of the benzoic acid particles had increased by the time the milled product was produced, as seen in Fig. <ref>.
Similar to benzoic acid, the starting material for paracetamol had particles in different states of agglomeration and covering a wide range of sizes, as seen on the left and right of Fig. <ref>. However, the state of agglomeration of the milled product of paracetamol (Fig. <ref>) is significantly less than that of benzoic acid in Fig. <ref>. An example of the long rod-like particles of the metformin starting material can be seen on the left of Fig. <ref>. The longest rods had lengths close to 3000μm, as seen on the left of Fig. <ref>. However, the metformin starting material contained a significant amount of fines, with particles of lengths as small as around 8μm, as seen on the right of Fig. <ref>. The rods of the metformin starting material were mostly separate with no significant agglomeration. The milled product of the metformin sample contained small particles of lengths ≲ 300μm, as seen on the left of Fig. <ref>. However, this milled product also contained a significant amount of fines of lengths ≲ 1μm, as seen on the right of Fig. <ref>. The amount of these fines must have been so large during the stage T_4 of the wet milling process for metformin that the larger particles hardly showed up on the inline PVM images. Hence the inline PVM images were mostly blank during the stage T_4 of the wet milling process for metformin, so the process was terminated at this stage. § SAMPLE IMAGES FROM INLINE PVM Some sample images collected with the inline PVM sensor during the wet milling of benzoic acid are shown in Fig. <ref>. The images in Fig. <ref>(T_1) to <ref>(T_5) were collected during the stages T_1 to T_5 respectively of the wet milling process for benzoic acid. The image in Fig. <ref>(T_1) shows that some of the particles were agglomerated during stage T_1 of the process. The bubbles produced during the wet milling of benzoic acid can be seen in Figs. <ref>(T_2) to <ref>(T_5).
However, the images show breakage of particles. Similar to those of benzoic acid are the sample images collected during the wet milling of paracetamol in Fig. <ref>. The images clearly show breakage of particles moving from stage T_1 to T_5 during the wet milling process. In the case of metformin, the particles had broken down considerably moving from stage T_1 (Fig. <ref>(T_1)) to stage T_2 (Fig. <ref>(T_2)). The long rod-like particles, which mostly crossed the image boundaries (Fig. <ref>(T_1)), were rejected from the image analysis but contributed to the CLD collected by the FBRM sensor. This led to the estimation of an aspect ratio of 0.3 at stage T_1 for the CLD inversion of metformin in the main text. The particles had broken down so much at stage T_4 (Fig. <ref>(T_4)) that there was no significant contrast between the objects and the image background. Hence the wet milling process for metformin was terminated at stage T_4. Although the FBRM sensor collected CLD data at stage T_4, the images collected at this stage could not be analysed due to poor contrast. Hence only estimates in the stages T_1 to T_3 from the images of metformin were reported in the main text. However, estimates from CLD data up to T_4 were reported. Barrett2005 P. Barrett, B. Smith, J. Worlitschek, V. Bracken, B. O' Sullivan, D. O' Grady, A review of the use of process analytical technology for the understanding and optimization of production batch crystallization processes, Organic Process Research & Development 9 (2005) 348–355. Paul2005 E. L. Paul, H.-H. Tung, M. Midler, Organic crystallization processes, Powder Technology 150 (2005) 133–143. Chen2011 J. Chen, B. Sarma, J. M. B. Evans, A. S. Myerson, Pharmaceutical crystallization, Cryst. Growth Des. 11 (2011) 887–895. Kail2007 N. Kail, H. Briesen, W. Marquardt, Advanced geometrical modeling of focused beam reflectance measurements (FBRM), Part. Part. Syst. Charact. 24 (2007) 184–192. Kail2009 N. Kail, W. Marquardt, H.
Briesen, Estimation of particle size distributions from focused beam reflectance measurements based on an optical model, Chemical Engineering Science 64 (2009) 984–1000. Heinrich2012 J. Heinrich, J. Ulrich, Application of laser-backscattering instruments for in situ monitoring of crystallization processes - a review, Chem. Eng. Technol. 35 (6) (2012) 967–979. Agimelen2015 O. S. Agimelen, P. Hamilton, I. Haley, A. Nordon, M. Vasile, J. Sefcik, A. J. Mulholland, Estimation of particle size distribution and aspect ratio of non-spherical particles from chord length distribution, Chemical Engineering Science 123 (2015) 629–640. Agimelen2016 O. S. Agimelen, A. Jawor-Baczynska, J. McGinty, J. Dziewierz, C. Tachtatzis, A. Cleary, I. Haley, C. Michie, I. Andonovic, J. Sefcik, A. J. Mulholland, Integration of in situ imaging and chord length distribution measurements for estimation of particle size and shape, Chemical Engineering Science 144 (2016) 87–100. Li2005n1 M. Li, D. Wilkinson, Determination of non-spherical particle size distribution from chord length measurements. Part 1: theoretical analysis, Chemical Engineering Science 60 (2005) 3251–3265.
Quantitative results using variants of Schmidt's game: Dimension bounds, arithmetic progressions, and more

Schmidt's game is generally used to deduce qualitative information about the Hausdorff dimensions of fractal sets and their intersections. However, one can also ask about quantitative versions of the properties of winning sets. In this paper we show that such quantitative information has applications to various questions including: * What is the maximal length of an arithmetic progression on the “middle ϵ” Cantor set? * What is the smallest n such that there is some element of the ternary Cantor set whose continued fraction partial quotients are all ≤ n? * What is the Hausdorff dimension of the set of ϵ-badly approximable numbers on the Cantor set? We show that a variant of Schmidt's game known as the potential game is capable of providing better bounds on the answers to these questions than the classical Schmidt's game. We also use the potential game to provide a new proof of an important lemma in the classical proof of the existence of Hall's Ray.

December 30, 2023
===========================================================================

§ INTRODUCTION As motivation, we begin by considering three questions which initially appear to be unrelated, but whose answers turn out to have a deep connection. For each 0 < ϵ < 1, let M_ϵ be the middle-ϵ Cantor set obtained by starting with the interval [0,1] and repeatedly deleting from each interval appearing in the construction the middle open interval of relative length ϵ. As ϵ→ 0, the sets M_ϵ are getting “larger” in the sense that their Hausdorff dimensions tend to 1. Do they also get “larger” in the sense of containing longer and longer arithmetic progressions as ϵ→ 0? How does the length of the longest arithmetic progression in M_ϵ behave as ϵ→ 0? What is the Hausdorff dimension of the set of ϵ-badly approximable vectors BA_d(ϵ) = {x ∈ ℝ^d : |x - p/q| > ϵ q^-(d+1)/d for all p/q ∈ ℚ^d}? Here |·| denotes a fixed norm on ℝ^d. For each n ∈ ℕ, let F_n denote the set of irrational numbers in (0,1) whose continued fraction partial quotients are all ≤ n. The continued fraction expansion of an irrational number is the unique expression a_0 + 1/(a_1 + 1/(a_2 + ⋱)) with a_0 ∈ ℤ and a_1, a_2, … ∈ ℕ, whose value is equal to that number. The numbers a_1, a_2, … are called the partial quotients. The union of F_n over all n is the set of badly approximable numbers in (0,1), i.e.
(0,1)∩⋃_ϵ>0_1(ϵ), which is known to have full dimension in the ternary Cantor set C = M_1/3, so in particular we have F_n ∩ C ≠∅ for all sufficiently large n. What is the smallest n for which F_n∩ C ≠∅? What these questions have in common is that they can all be (partially) answered using Schmidt's game, a technique for proving lower bounds on the Hausdorff dimensions of certain sets known as “winning sets”, as well as on the dimensions of their intersections with other winning sets and with various nice fractals. In particular, the class of winning sets (see e.g. <cit.> for the definition) has the following properties: (a) The class of winning sets is invariant under bi-Lipschitz maps and in particular under translations.(b) The intersection of finitely many winning sets is winning.If α > 0 is fixed, then the intersection of countably many α-winning sets is α-winning (see <cit.> for the definition of α-winning). But if α is not fixed, then it may only be possible to intersect finitely many winning sets.(c) Winning sets in ^d have full Hausdorff dimension and in particular are nonempty.(d) The set of badly approximable numbers is winning, both on the real line and on the Cantor set.These properties already hint at why Schmidt's game might be relevant to Questions <ref>–<ref>. Namely, properties (a), (b), and (c) imply that any winning subset of the real line contains arbitrarily long arithmetic progressions (since if S is winning, then for any t,k the set ⋂_i=0^k-1 (S-it) is winning and therefore nonempty), and properties (c) and (d) imply that the set of badly approximable numbers has full Hausdorff dimension both on the real line and on the Cantor set. The middle-ϵ Cantor set M_ϵ is not winning, but by showing that it is “approximately winning” in some quantitative sense, we will end up getting a lower bound on the maximal length of arithmetic progressions it contains, thus addressing Question <ref>. 
Similarly, the set _d(ϵ) of ϵ-badly approximable points in ^d is not winning, but since the union ⋃_ϵ > 0_d(ϵ) is the set of all badly approximable points, it is winning and thus the dimension of _d(ϵ) tends to d as ϵ tends to 0. Again, showing that _d(ϵ) is “approximately winning” will yield a lower bound on its Hausdorff dimension, thus addressing Question <ref>. Using a known relationship between _1(ϵ) and F_n, this yields lower bounds on the Hausdorff dimensions of both F_n and F_n∩ C. For n for which the second of these lower bounds is positive, we have F_n∩ C ≠∅, which addresses Question <ref>.To make the above paragraph rigorous, we will need to have a clear notion of what it means for a set to be “approximately winning” in a quantitative sense. One idea is to use Schmidt's original definition of “(α,β)-winning” sets (see <cit.>) as a quantitative approximation of winning sets. This is particularly natural because the notion of being (α,β)-winning is the basis of Schmidt's definition of the class of winning sets. However, it turns out that the class of (α,β)-winning sets is not very nice from a quantitative point of view (see Remark <ref>). Thus, we will instead consider two variants of Schmidt's game, the absolute game introduced by McMullen <cit.> and the potential game introduced in <cit.>. The natural notions of “approximately winning” for these games turn out to be more suited to proving quantitative results.§ MAIN RESULTS Before listing our main results, we will state what we expect the answers to Questions <ref>–<ref> to be based on the following heuristic: if S_1,S_2 are two fractal subsets of ^d, then if S_1 and S_2 are “independent” we expect that(S_1∩ S_2) = max(0,(S_1) + (S_2) - d),or equivalently_H(S_1∩ S_2) = min(d,_H(S_1) + _H(S_2)),where _H(S) = d - (S). 
This is roughly because if we divide [0,1]^d into N^d blocks of size 1/N, then we should expect that S_1 will intersect N^(S_1) of these blocks and S_2 will intersect N^(S_2) of them, so if S_1 and S_2 are independent then we should expectN^(S_1)N^(S_2)/N^d = N^(S_1) + (S_2) - dblocks to be intersected by both S_1 and S_2. We can expect that most such blocks will also intersect S_1∩ S_2. Of course, if the exponent is negative then we should expect that S_1∩ S_2 = ∅ and in particular (S_1∩ S_2) = 0.To use this heuristic to estimate the maximal length of an arithmetic progression on M_ϵ, note that if {a,a+t,…,a+(k-1)t} is such an arithmetic progression, then we have⋂_i = 0^k-1 (M_ϵ - it) ≠∅.If the sets M_ϵ,M_ϵ-t,…,M_ϵ-(k-1)t are independent, then we expect the Hausdorff dimension of their intersection to bemax(0,1-k·_H(M_ϵ)),which is positive if and only if k < 1/_H(M_ϵ). Since _H(M_ϵ) ∼ϵ, this means that we expect the maximal length of an arithmetic progression on M_ϵ to be approximately 1/ϵ.There is an additional degree of freedom with respect to t that this heuristic argument does not take into account, but its contribution to the expected maximal length of an arithmetic progression is not very significant. Similarly, since _H(F_n) ∼ 1/n, we expect the maximal length of an arithmetic progression on F_n to be approximately n. We are able to prove the following bounds rigorously:Let (S) denote the maximal length of an arithmetic progression in the set S. For all ϵ > 0 sufficiently small and n∈ sufficiently large, we have(1/ϵ)/log(1/ϵ) ≲(M_ϵ) ≤ 1/ϵ + 1 and n/log(n) ≲(F_n) ≲ n^2. Here and hereafter, A ≲ B means that there exists a constant K (called the implied constant) such that A ≤ KB, and A ≍ B means A ≲ B ≲ A. When 1/ϵ is an integer, the lower bound of (<ref>) was first proven by Jon Chaika, see <cit.>. 
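These asymptotics are easy to sanity-check numerically: M_ϵ is self-similar (each construction stage replaces an interval by two copies scaled by (1-ϵ)/2), so _H(M_ϵ) = 1 - log 2/log(2/(1-ϵ)), which is ≍ ϵ with constant 1/log 2. The following short illustration is our own and is not part of any proof:

```python
import math

def dim_middle_cantor(eps: float) -> float:
    """Hausdorff dimension of M_eps: the similarity dimension s solving
    2 * ((1 - eps) / 2)**s = 1, i.e. s = log 2 / log(2 / (1 - eps))."""
    return math.log(2) / math.log(2 / (1 - eps))

# M_{1/3} is the ternary Cantor set, of dimension log 2 / log 3.
assert abs(dim_middle_cantor(1 / 3) - math.log(2) / math.log(3)) < 1e-12

# The codimension behaves like eps / log 2 as eps -> 0, so the heuristic
# threshold 1 / codim on progression length grows like (log 2) / eps.
for eps in (1e-3, 1e-6):
    codim = 1 - dim_middle_cantor(eps)
    assert abs(codim / (eps / math.log(2)) - 1) < 1e-2
```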
Theorem <ref> does not give any information about the implied constants of (<ref>) and (<ref>), so for example it cannot tell us how small ϵ has to be before we can be sure that M_ϵ contains an arithmetic progression of length 3 (i.e. a nontrivial arithmetic progression). However, using similar techniques we can show:For all 0 < ϵ≤ 1/49, we have (M_ϵ) ≥ 3, i.e. M_ϵ contains an arithmetic progression of length 3. Also, F_49 contains an arithmetic progression of length 3 (and thus so does F_n for all n≥ 49). It was pointed out to us by Pablo Shmerkin that one can get a better result using Newhouse's gap lemma <cit.>, namely that (M_ϵ) ≥ 4 for all 0 < ϵ≤ 1/3.Namely, Newhouse's gap lemma implies that there exists t∈ (M_ϵ-1/2)∩(1/3)(M_ϵ-1/2), and then {1/2-3t,1/2-t,1/2+t,1/2+3t} is an arithmetic progression in M_ϵ of length 4. Note that for all ϵ > 1/3, we have (M_ϵ) = 2 (the proof is similar to the proof of the upper bound of (<ref>)). Question <ref> can also be addressed via the independence assumption (<ref>). Namely, we have (F_2) ∼ 0.531 <cit.> and (C) = log(2)/log(3)∼ 0.631, so we should expect(F_2∩ C) ∼ 0.531+0.631-1=0.162 > 0,and in particular F_2∩ C ≠∅. This guess appears to be confirmed by computer estimates, which give (F_2∩ C) ∼ 0.14.We estimated the dimension of F_2∩ C by searching for a disjoint cover of F_2 by intervals of the form I_ω = [[0;ω,1],[0;ω,3]] or I_ω = [[0;ω,3],[0;ω,1]] (see (<ref>) for the notation), where ω is a finite word in the alphabet {1,2}, such that either (A) I_ω∩ C ≠∅ and |I_ω| < ϵ≤ |I_ω'|, where ω' is the word resulting from deleting the last letter of ω; or(B) I_ω∩ C = ∅.Here ϵ > 0 is a free parameter determining the accuracy of the computation. We then used the heuristic estimate(F_2∩ C) ∼log(N_ϵ)/-log(ϵ),where N_ϵ is the calculated number of intervals of type (A). The right-hand side varies with respect to ϵ but remains within the range [0.13,0.15] for ϵ∈ [10^-18,10^-8]. 
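The search just described needs two exact primitives: rational values of finite continued fractions (for the endpoints of the intervals I_ω) and a test for whether a rational interval meets C. Both can be done in exact arithmetic; the sketch below is our own reconstruction, not the authors' program. The intersection test expands the interval by x ↦ 3x on the left third and x ↦ 3x - 2 on the right third; revisiting a state along a branch corresponds to a periodic ternary expansion with digits in {0,2}, i.e. to a point of C inside the interval.

```python
from fractions import Fraction

def cf_value(a):
    """Evaluate a finite continued fraction [a_0; a_1, ..., a_h] exactly."""
    v = Fraction(a[-1])
    for ai in reversed(a[:-1]):
        v = ai + 1 / v
    return v

def meets_cantor(a, b, _path=None):
    """Exact test: does the closed interval [a, b] meet the ternary Cantor
    set C?  The two branches expand the left/right thirds onto [0, 1];
    a repeated state along a branch yields a periodic point of C in [a, b]."""
    if _path is None:
        _path = set()
    a, b = max(a, Fraction(0)), min(b, Fraction(1))
    if a > b:
        return False
    if a == 0 or b == 1:            # 0 and 1 both lie in C
        return True
    if (a, b) in _path:             # periodic branch
        return True
    _path.add((a, b))
    try:
        return (a <= Fraction(1, 3) and meets_cantor(3 * a, 3 * b, _path)) or \
               (b >= Fraction(2, 3) and meets_cantor(3 * a - 2, 3 * b - 2, _path))
    finally:
        _path.discard((a, b))

assert cf_value([0, 1, 1, 1, 2, 3]) == Fraction(17, 27)
assert meets_cantor(Fraction(1, 4), Fraction(1, 4))      # 1/4 = 0.0202..._3 in C
assert not meets_cantor(Fraction(2, 5), Fraction(1, 2))  # inside a removed third
# one interval I_omega, with omega = (1): endpoints [0;1,1] = 1/2, [0;1,3] = 3/4
I = sorted([cf_value([0, 1, 1]), cf_value([0, 1, 3])])
assert meets_cantor(I[0], I[1])
```

With these primitives, the type (A)/(B) classification can be carried out by a breadth-first search over words ω in the alphabet {1,2}.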
It appears quite ambitious to prove such a statement, but we can prove the following weaker one:For each n∈, let F_n denote the set of irrational numbers in (0,1) whose continued fraction partial quotients are all ≤ n, and let C denote the ternary Cantor set. Then F_19∩ C ≠∅. Yann Bugeaud pointed out to us that his Folding Lemma <cit.> can be used to prove the stronger result that F_9∩ C ≠∅.The proof is as follows. For the notation see (<ref>). Call a rational p/q good if: * q is a power of 3, and* p/q = [0;1,1,a_3,…,a_h] with h ≥ 4, a_h ≥ 2, h odd,and a_i ≤ 3 for all i = 3,…,h.By direct calculation, the rational 17/27 = [0;1,1,1,2,3] is good. Moreover, by the Folding Lemma <cit.>, if p/q is good then so is f(p/q) := p/q - 1/(3q^2). Thus f^n(17/27) is good for all n and thus x := lim_n→∞ f^n(17/27) is in F_3 ⊂_1(1/5) (cf. <cit.> for the subset relation). Let y=2-2x = ∑_k≥ 2 2/3^2^k-1∈ C. Since x∈_1(1/5), we have y ∈_1(1/10) ⊂ F_9. So y∈ F_9∩ C.Question <ref> is different from our other two questions in that a fairly precise answer is already known: we have_H(_d(ϵ)) ∼ k_d ϵ^d,where k_d is an explicit constant of proportionality and A∼ B means that A/B→ 1 as ϵ→ 0 <cit.>Note that κ in the notation of <cit.>, and c in the notation of <cit.>, are both equal to ϵ^d in our notation (and c^n in the notation of <cit.>). (see also <cit.>). However, from a historical perspective the first proof that (_d(ϵ)) → d as ϵ→ 0 is Schmidt's proof using his eponymous game <cit.>, so it is interesting to ask what the best bound is that can be proven using Schmidt's game or its variants. Kleinbock and the first-named author used a variant of Schmidt's game (specifically the hyperplane absolute game) to prove that_H(_d(ϵ)) ≲ϵ^1/2/log(1/ϵ),see <cit.>. We are able to improve their result by proving the following using the hyperplane potential game instead of the hyperplane absolute game:For all ϵ > 0 we have_H(_d(ϵ)) ≲ϵ. 
Note that this upper bound is the correct order of magnitude when d = 1 but not for larger d.We can also ask about the intersection of _d(ϵ) with a fractal set. Recall that a compact set J ⊂^d is called Ahlfors regular of dimension δ if there exists a measure μ with topological support equal to J such that for all x∈ J and 0 < ρ≤ 1, we haveC^-1ρ^δ≤μ(B(x,ρ)) ≤ Cρ^δ where C is an absolute constant. It was proven in <cit.> that if J ⊂ is any Ahlfors regular set, then the union _1 := ⋃_ϵ > 0_1(ϵ) has full dimension in J. We can prove the following quantitative version of this result:Let J ⊂ be an Ahlfors regular set of dimension δ > 0. Then for all ϵ > 0, we have(M_ϵ∩ J) ≥δ - K ϵ^δ and (_1(ϵ)∩ J) ≥δ - K ϵ^δ,where K is a constant depending on J. Note that if _1(ϵ) and J are independent in the sense of (<ref>), then we have(_1(ϵ)∩ J) = δ - _H(_1(ϵ)) = δ - k_1 ϵ + o(ϵ),where k_1 is as in (<ref>). This means that Theorem <ref> can be considered close to optimal when δ = 1 but not for smaller δ.We also prove a higher-dimensional analogue of Theorem <ref>, where the set J needs to satisfy an additional condition known as absolute decay; see Sections <ref>-<ref> for details. Quantitative versions of variants of Schmidt's game also have applications to the proof of the existence of Hall's Ray. We recall that Hall's Ray is a set of the form [t_0,∞) such that for all t∈[t_0,∞), there exists x∈ such thatlim sup_q→∞ 1/(q‖qx‖) = t,where ‖·‖ denotes distance to the nearest integer. The proof of the existence of Hall's Ray proceeds via a lemma stating that F_4 + F_4 = [√(2) - 1,4√(2) - 4], see e.g. <cit.>. The proof of this lemma is ordinarily via a “hands-on” argument, but we are able to prove the following weaker version of the lemma in a more conceptual way using the absolute game:F_49+F_49⊃ [1/6,11/6]. This weaker version is still sufficient to prove the existence of Hall's Ray, although it would yield a worse bound for t_0 than a proof using the original lemma. Acknowledgements. 
We thank Vitaly Bergelson for suggesting Question <ref> to us, and Yann Bugeaud for suggesting Question <ref>, as well as for pointing out the relevance of his Folding Lemma to this question. We thank Jon Chaika for helpful discussions including providing us with an early draft of <cit.>. We thank Pablo Shmerkin for his comments regarding the Newhouse gap lemma. The second-named author was supported in part by the Simons Foundation grant #245708. The third-named author was supported by the EPSRC Programme Grant EP/J018260/1. The authors thank an anonymous referee for helpful comments. § THE ABSOLUTE GAME AND ITS APPLICATIONS We will prove most of our results using two different variations of Schmidt's game: the absolute game and the potential game. In this section we define the absolute game and use it to prove Theorems <ref>, <ref>, and <ref>. The version of the absolute game that we give below is slightly different from the standard definition as found in e.g. <cit.>. Specifically, the following things are different: * We introduce a parameter ρ limiting Bob's initial move by preventing him from playing too small a ball at first.* We use two separate parameters α,β to limit Alice and Bob's moves during the game, rather than a single parameter β as in the classical definition of the absolute game.* We introduce a parameter k allowing Alice to delete a fixed finite number of balls rather than a single ball.* We use the convention that if a player has no legal moves, he loses. Note that this convention means that the implication “winning implies nonempty” will only be true for certain sets of parameters.The first three changes are for the purpose of recording more precise quantitative information about winning sets, while the last change allows for more elegant general statements (cf. Remark <ref>), as well as making the absolute game more similar to the potential game that we define in Section <ref>.Let X be a complete metric space. 
Given α,β,ρ > 0 and k∈, Alice and Bob play the (α,β,ρ,k)-absolute game as follows: * The turn order is alternating, with Bob playing first. Thus, Alice's mth turn occurs after Bob's mth turn and before Bob's (m+1)st turn. It is thought of as a response to Bob's mth move.* On the mth turn, Bob plays a closed ball B_m = B(x_m,ρ_m), and Alice responds by choosing at most k different closed balls A_m^(i), each of radius ≤αρ_m. She is thought of as “deleting” these balls.* On the first (0th) turn, Bob's ball B_0 = B(x_0,ρ_0) is required to satisfyρ_0 ≥ρ,while on subsequent turns his ball B_m+1 = B(x_m+1,ρ_m+1) is required to satisfyρ_m+1≥βρ_m andB_m+1⊂ B_m∖⋃_i A_m^(i).If Bob cannot choose a ball consistent with these rules, then he loses automatically.After infinitely many turns have passed, Bob has chosen an infinite descending sequence of ballsB_0 ⊃ B_1 ⊃⋯If the radii of these balls do not tend to zero, then Alice is said to win by default. Otherwise, the balls intersect at a unique point x_∞∈ X, which is called the outcome of the game.Now fix a set S⊂ X, called the target set. Suppose that Alice has a strategy guaranteeing that if she does not win by default, then x_∞∈ S. Then the set S is called (α,β,ρ,k)-absolute winning. A set is called (α,β,ρ)-absolute winning if it is (α,β,ρ,1)-absolute winning. If a set is (α,β,ρ)-absolute winning for all α,β,ρ > 0, then it is called absolute winning. Note that the last part of this definition, defining the term “absolute winning” with no parameters, defines a concept identical to the standard concept of absolute winning, even though the intermediate concepts are different.We warn the reader that despite the terminology, an (α,β,ρ,k)-absolute winning set is not necessarily absolute winning. 
To prevent confusion, we will sometimes call an absolute winning set strongly absolute winning, and a set which is (α,β,ρ,k)-absolute winning for a suitable choice of α,β,ρ,k weakly or approximately absolute winning.Finally, note that if α≥ 1, then Alice can win in one turn by simply choosing A_m^(1) = B_m, regardless of the target set.§.§ Properties of the absolute gameThe most basic properties of the absolute game are the finite intersection property, monotonicity with respect to the parameters, and invariance under similarities.Let J be a finite index set, and for each j∈ J let S_j be (α,β,ρ,k_j)-absolute winning. Let k = ∑_j∈ J k_j. Then S = ⋂_j∈ J S_j is (α,β,ρ,k)-absolute winning. Alice can play all of the strategies corresponding to the sets S_j simultaneously by deleting all of the balls she is supposed to delete in each of the individual games. The formula k = ∑_j∈ J k_j guarantees that this strategy will be legal to play. If S is (α,β,ρ,k)-absolute winning and α≤α̃, β≤β̃, ρ≤ρ̃, and k≤k̃, then S is also (α̃,β̃,ρ̃,k̃)-absolute winning. Switching from the (α,β,ρ,k)-game to the (α̃,β̃,ρ̃,k̃)-game increases Alice's set of legal moves while decreasing Bob's, making it easier for Alice to win. Let f:X→ Y be a bijection that satisfies(f(x),f(y)) = λ(x,y) for all x,y∈ X.Then a set S ⊂ X is (α,β,ρ,k)-absolute winning if and only if the set f(S) ⊂ Y is (α,β,λρ,k)-absolute winning. The (α,β,λρ,k)-game on Y with target set f(S) is a disguised version of the (α,β,ρ,k)-game on X with target set S, with the moves B_m,A_m in the latter game corresponding to the moves f(B_m),f(A_m) in the former game.In the proof of Proposition <ref> we crucially used the fact that Bob is considered to lose if he cannot play. This is because although the formula k = ∑_j∈ J k_j proves that the strategy described will be legal for Alice to play, it does not show that Bob necessarily has any legal responses to it. A similar comment applies to the proof of Proposition <ref>. 
The question of what circumstances guarantee that Bob has legal responses will be dealt with in Lemma <ref> below. The proof of Proposition <ref>, which proceeds by combining different strategies used on the same turn of two different games, is very different from the proof of the countable intersection property of the classical Schmidt's game (see <cit.>), which proceeds by splicing different turns of different games together to get a sequence of turns in a new game with different parameters. This difference is in fact the key advantage of the absolute game over the classical Schmidt's game for quantitative purposes. By modifying the argument given in <cit.> one can use the classical Schmidt's game to prove results such as (M_ϵ) ≳log(1/ϵ) and (F_n) ≳log(n), but these bounds are so much worse than the ones appearing in Theorem <ref> that we do not include their proof. A key fact about (strongly) absolute winning sets is that in a sufficiently nice (e.g. Ahlfors regular) space, they have full Hausdorff dimension and in particular are nonempty. (This follows from the corresponding property for classical winning sets, see e.g. <cit.>.) The following lemma is a quantitative version of this property:Let S ⊂ X be (α,β,ρ,k)-absolute winning, and suppose that every ball B⊂ X contains N > k disjoint subballs of radius β(B) separated by distances of at least 2α(B). Let B_0 be a ball of radius ≥ρ. Then S∩ B_0 ≠∅, and(S∩ B_0) ≥log(N-k)/-log(β)·For our purposes, the important part of this lemma is the assertion that S∩ B_0 ≠∅, but we include the bound (<ref>) because it can be proven easily. Note that the formula S∩ B_0 ≠∅ follows from (<ref>) when N > k+1 but not when N=k+1.For each ball B⊂ X, let f_1(B),…,f_N(B) denote the disjoint subballs of radius β(B) guaranteed by the assumption. 
We will consider strategies for Bob that begin by playing B_0 and continue playing using the functions f_1,…,f_N; that is, on the turn after playing a ball B Bob will play one of the balls f_1(B),…,f_N(B). Some of these strategies are ruled out by the rules of the absolute game, but the separation hypothesis guarantees that at most k of them are ruled out, and thus at least N-k are left. In particular, since N > k Bob always has at least one legal play, so it is possible for him to play the entire game legally starting with the move B_0. The outcome of the corresponding game is a member of S∩ B_0, and in particular this set is nonempty.To demonstrate (<ref>), we observe that since Bob actually had at least N-k legal moves at each stage, the resulting Cantor set consisting of all possible outcomes of games where Bob plays according to the strategies described above is produced by a branching construction in which each ball of radius r has at least N-k children of radius β r. It is well-known (see e.g. <cit.>) that the Hausdorff dimension of a Cantor set constructed in this way is at least log(N-k)/-log(β). This completes the proof. Let S ⊂ be (α,β,ρ,k)-absolute winning, and suppose that kα + (k+1)β≤ 1. Let I_0 be an interval of length ≥ 2ρ. Then S∩ I_0 ≠∅. Every interval I incan be subdivided into (k+1) equally spaced intervals of length β|I|, such that the left endpoint of the leftmost interval is equal to the left endpoint of I, and the right endpoint of the rightmost interval is equal to the right endpoint of I. There are k gaps between these intervals, so the common gap size ϵ satisfies kϵ + (k+1)β|I| = |I|. By assumption kα + (k+1)β≤ 1, so we have α|I| ≤ϵ, i.e. the gap size is at least α|I| = 2α(I). Thus the hypotheses of Lemma <ref> are satisfied. 
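The subdivision arithmetic in the proof of the corollary, and a run of the game in which Bob survives, can be checked with exact rationals. The sketch below is our own illustration, using the parameters (α,β,k) = (1/4,1/6,2) that appear later, together with one particular adversarial schedule for Alice (deleting k of the k+1 children outright, which is legal since β ≤ α):

```python
from fractions import Fraction

alpha, beta, k = Fraction(1, 4), Fraction(1, 6), 2
assert k * alpha + (k + 1) * beta == 1      # the corollary's hypothesis (with equality)

def children(a, b):
    """Subdivide [a, b] into k+1 equally spaced closed subintervals of
    length beta*(b-a), flush with both endpoints of [a, b]."""
    L = b - a
    gap = (L - (k + 1) * beta * L) / k      # common gap size
    assert gap >= alpha * L                 # the corollary's gap estimate
    step = beta * L + gap
    return [(a + i * step, a + i * step + beta * L) for i in range(k + 1)]

# On every turn Alice deletes the first k children (each has radius
# beta*r <= alpha*r, so this is legal) and Bob plays the surviving child.
# The nesting and the radius law r -> beta*r are exactly the bookkeeping
# of the branching construction in the lemma's proof.
interval = (Fraction(0), Fraction(1))
for _ in range(12):
    kids = children(*interval)
    survivor = kids[-1]                     # a child Alice did not delete
    assert interval[0] <= survivor[0] <= survivor[1] <= interval[1]
    interval = survivor
assert interval[1] - interval[0] == beta ** 12
```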
§.§ ApplicationsWe begin by showing that the sets M_ϵ and _1(ϵ) are weakly absolute winning.For all 0 < ϵ < 1 and 0 < β < 1, the set M_ϵ∪ (-∞, 0) ∪ (1, ∞) is (α,β,ρ)-absolute winning, where(α,β,ρ) = ((2ϵ/(1-ϵ))β^-1, β, ((1-ϵ)/2)(β/2)).To describe Alice's strategy, let B be a move for Bob and we will describe Alice's response. Let λ = (1-ϵ)/2, and let n≥ 0 be the largest integer such that λ^n+1≥ |B|, if such an integer exists. At the (n+1)st stage of the construction of M_ϵ, all remaining intervals are of length λ^n+1, which means that the distances between the removed intervals of the first n stages are all at least λ^n+1. So B intersects at most one of these intervals, and Alice's strategy is to remove this interval if it is legal to do so, and otherwise to not delete anything. If the integer n does not exist, then Alice does not delete anything.To show that this strategy is winning, we must show that if Alice did not win by default, then the outcome of the game x_∞ is in M_ϵ∪ (-∞, 0) ∪ (1, ∞). By contradiction, suppose that x_∞ exists but is not in M_ϵ∪ (-∞, 0) ∪ (1, ∞). Then x_∞ is in some interval I that was removed during the construction of M_ϵ. We will show that Alice deleted I at some stage of the game, which contradicts the fact that x_∞ was obtained by a sequence of legal plays for Bob.Indeed, let n≥ 0 be the stage of the construction of M_ϵ at which I was removed, so that |I| = λ^n ϵ, and let m≥ 0 be the smallest integer such that λ^n+1≥ |B_m|. If m > 0, then we have |B_m| ≥β |B_m-1| > λ^n+1β, while if m = 0 then we have |B_m| ≥ 2ρ = λβ≥λ^n+1β. (The length of the ball/interval B_m is equal to twice its radius.) So either way we have |B_m| ≥λ^n+1β. Thus, we have |I| = λ^n ϵ = αλ^n+1β≤α|B_m|, so on turn m Alice is allowed to delete the interval I. Her strategy specifies that she does so, which completes the proof. 
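The radius bookkeeping in this proof boils down to the exact identity λ^n ϵ = α λ^n+1 β with λ = (1-ϵ)/2. This identity, together with the instantiation (ϵ,β) = (1/49,1/6) used later in the arithmetic-progression proof, can be checked mechanically (our own sketch):

```python
from fractions import Fraction

def m_eps_parameters(eps, beta):
    """Alice's parameters from the lemma for M_eps, restated exactly."""
    lam = (1 - eps) / 2                     # interval scaling factor of M_eps
    alpha = (2 * eps / (1 - eps)) / beta
    rho = lam * beta / 2
    return lam, alpha, rho

for eps in (Fraction(1, 49), Fraction(1, 3), Fraction(9, 10)):
    for beta in (Fraction(1, 6), Fraction(1, 2)):
        lam, alpha, rho = m_eps_parameters(eps, beta)
        for n in range(6):
            # the interval removed at stage n has length lam^n * eps, and
            # Alice's budget on a ball of length lam^{n+1} * beta is
            # exactly alpha * lam^{n+1} * beta
            assert lam ** n * eps == alpha * lam ** (n + 1) * beta

# the instantiation used later: eps <= 1/49 and beta = 1/6 give alpha = 1/4
lam, alpha, rho = m_eps_parameters(Fraction(1, 49), Fraction(1, 6))
assert alpha == Fraction(1, 4)
```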
For all 0 < ϵ < 1/2 and (ϵ/(1 - ϵ))^2 ≤β < 1, the set _1(ϵ) is (α,β,ρ)-absolute winning, where(α,β,ρ) = ((2ϵ/(1 - 2ϵ))β^-1, β, β/2).For each p/q∈ letΔ_ϵ(p/q) = B(p/q,ϵ q^-2)and note that_1(ϵ) = ⋃_p/q∈Δ_ϵ(p/q).Now we have|p_1/q_1 - p_2/q_2| ≥1/q_1 q_2for all p_1/q_1 ≠ p_2/q_2. Now fix ℓ > 0 and let_ℓ := {Δ_ϵ(p/q) : ℓ < (1-2ϵ) q^-2≤β^-1ℓ}.If Δ_ϵ(p_1/q_1),Δ_ϵ(p_2/q_2) are two distinct members of _ℓ with q_1 ≤ q_2, then the distance between them is|p_1/q_1 - p_2/q_2| - ϵ/q_1^2 - ϵ/q_2^2≥1/q_1 q_2 - ϵ/q_1^2 - ϵ/q_2^2 = 1/q_2^2[q_2/q_1 - ϵ(q_2/q_1)^2 - ϵ] ≥1/q_2^2(1 - 2ϵ) > ℓ,where the second-to-last inequality is derived from the fact that x - ϵ x^2 - ϵ≥ 1 - 2ϵ for all x∈ [1,1/ϵ - 1] and in particular for all x∈ [1,β^-1/2]. Thus each interval of length ℓ intersects at most one member of _ℓ.Now Alice's strategy can be given as follows: If Bob chooses an interval B_m of length ℓ = |B_m|, then Alice deletes the unique member Δ_ϵ(p/q) of the collection _ℓ that intersects B_m (if one exists). By construction, the length of this member is|Δ_ϵ(p/q)| = 2ϵ/q^2≤(2ϵ/(1 - 2ϵ))β^-1ℓ = α |B_m|,meaning that it is legal to play. To show that this strategy is winning, we observe that the length of Bob's first interval B_0 is at least 2ρ = β, and thus β^-1 |B_0| ≥ 1 ≥ (1-2ϵ)q^-2 for all p/q∈. So if m is the smallest integer such that |B_m| < (1-2ϵ)q^-2, then |B_m| < (1-2ϵ)q^-2≤β^-1 |B_m| and thus Alice must delete Δ_ϵ(p/q) on turn m. For all n≥ 2 and 1/n^2≤β < 1, the set (-∞,0)∪ F_n∪ (1,∞) is (α,β,ρ)-absolute winning, where(α,β,ρ) = ((2/(n-1))β^-1, β, β/2).This follows immediately from Lemma <ref> together with the inclusion[0,1]∩_1(1/(n+1)) ⊂ F_n(see e.g. <cit.>). We now use the above lemmas to prove Theorems <ref>, <ref>, and <ref>. We will in fact show that every a∈ M_ϵ is contained in some arithmetic progression of length 3. Indeed, apply Lemma <ref> with 0 < ϵ≤ 1/49 and β = 1/6 and combine with Proposition <ref> to get that S_1 := (-∞,0)∪ M_ϵ∪ (1,∞) is (1/4,1/6,1/24)-absolute winning. 
By Proposition <ref> it follows that S_2 := 2S_1 - a is (1/4,1/6,1/12)-absolute winning and thus by Proposition <ref>, S_1∩ S_2 is (1/4,1/6,1/12,2)-absolute winning. Since 2(1/4)+3(1/6)=1, Corollary <ref> shows that S_1∩ S_2 ∩ [0,1/6]≠∅ and S_1∩ S_2∩ [5/6,1] ≠∅; in particular S_1∩ S_2∩ [0,1]∖{a}≠∅. If t∈ S_1∩ S_2 ∩ [0,1]∖{a}, then {a,(a+t)/2,t} is an arithmetic progression of length 3 on M_ϵ. The proof for F_49 is similar, using Lemma <ref> instead of Lemma <ref>.Apply Lemma <ref> with n=19 and β = 1/3 to get that (-∞,0)∪ F_19∪ (1,∞) is (1/3,1/3,1/6)-absolute winning. Now in the (1/3,1/3,1/6)-absolute game, Bob can start by playing the interval [0,1] and on each turn can legally play an interval appearing in the construction of the Cantor set (cf. the proof of Corollary <ref>). The outcome of the resulting game will be a member of F_19∩ C, so we have F_19∩ C ≠∅. Apply Lemma <ref> with n=49 and β = 1/6 to get that S_1 := (-∞,0)∪ F_49∪ (1,∞) is (1/4,1/6,1/12)-absolute winning. Now fix t∈ [1/6,11/6] and observe that by Proposition <ref> the set S_2 := t - S_1 is also (1/4,1/6,1/12)-absolute winning, and thus S = S_1∩ S_2 is (1/4,1/6,1/12,2)-absolute winning. Since t∈ [1/6,11/6], the length of the interval I := [max(0,t-1),min(1,t)] is at least 2ρ = 1/6. On the other hand, we have 2(1/4) + 3(1/6) = 1, and thus by Corollary <ref>, we have S ∩ I ≠∅. But S∩ I ⊂ F_49∩ (t-F_49), so we get t∈ F_49+F_49, which completes the proof. Although the absolute game is good at getting simple quantitative results when small numbers are involved, the bounds coming from Lemma <ref> get worse asymptotically as the Hausdorff dimension of the sets in question tends to 1, and in particular are not good enough to prove Theorem <ref>. To get a better asymptotic estimate we need to introduce another game, the potential game.§ THE POTENTIAL GAME AND ITS PROPERTIES We now define the potential game. In the next two sections we will use it to prove Theorems <ref>, <ref>, and <ref>. 
Like in the previous section, the version of the potential game we give below is slightly different from the original one found in <cit.>. The first two changes are the same as for the absolute game (introducing the parameter ρ, and splitting β into two different parameters), and we also introduce the possibility that the parameter c is equal to zero, to provide a clearer relation between the absolute game and the potential game.Let X be a complete metric space and letbe a collection of closed subsets of X. Given α,β,ρ > 0 and c≥ 0, Alice and Bob play the (α,β,c,ρ,)-potential game as follows: * As before the turn order is alternating, with Bob playing first.* On the mth turn, Bob plays a closed ball B_m = B(x_m,ρ_m), and Alice responds by choosing a finite or countably infinite collection Å_m of sets of the form (_i,m,ρ_i,m), with _i,m∈ and ρ_i,m > 0, satisfying∑_i ρ_i,m^c ≤ (αρ_m)^c,where (_i,m,ρ_i,m) denotes the ρ_i,m-thickening of _i,m, i.e.(_i,m,ρ_i,m) = {x∈ X : (x,_i,m) ≤ρ_i,m}.As before we say that Alice “deletes” the collection Å_m, though the meaning of this will be slightly different from what it was in the absolute game.If c = 0, then instead of requiring (<ref>), we require the collection Å_m to consist of a single element, which must have thickness ≤αρ_m.* On the first (0th) turn, Bob's ball B_0 = B(x_0,ρ_0) is required to satisfyρ_0 ≥ρ,while on subsequent turns his ball B_m+1 = B(x_m+1,ρ_m+1) is required to satisfyρ_m+1≥βρ_m andB_m+1⊂ B_m,with no reference made to the collection Å_m chosen by Alice on the previous turn.As before the result after infinitely many turns is an infinite descending sequence of balls B_0 ⊃ B_1 ⊃⋯, and if the radii of these balls do not tend to zero we say that Alice wins by default, while otherwise we call the intersection point x_∞ the outcome of the game. 
However, we now make the additional rule that if the outcome of the game is a member of any “deleted” element (_i,m,ρ_i,m) of one of the collections Å_m chosen throughout the game, then Alice wins by default.Now let S ⊂ X, and suppose that Alice has a strategy guaranteeing that if she does not win by default, then x_∞∈ S. Then the set S is called (α,β,c,ρ,)-potential winning.Letbe the collection of singletons in X. When c = 0 and =, the potential game is similar to the absolute game considered in the previous section. The only difference is that while in the absolute game Bob must immediately move to avoid Alice's choice, in the potential game he must only do so eventually. This is a significant difference because it means that Bob gets a much larger advantage from having α small, since it means he can wait several turns before avoiding a region.Thus, every (α,β,0,ρ,)-potential winning set is (α,β,ρ)-absolute winning, but the converse is not true. However, the proofs of Lemmas <ref>-<ref> in fact show that the sets in question are (α,β,0,ρ,)-potential winning, since the proofs only use the fact that the outcome is not in any deleted set, not that the deleted sets are disjoint from Bob's subsequent moves. This fact will be used in the applications below. In what follows we will use the notation((,ρ)) = ρ,i.e. ((,ρ)) is the “thickness” of (,ρ). The basic properties of the potential game are similar to those for the absolute game. We omit the proofs as they are essentially the same as the proofs of Propositions <ref>-<ref>.Let J be a countable (finite or infinite) index set, and for each j∈ J, let S_j be an (α_j,β,c,ρ,)-potential winning set, where c > 0.Then the set S = ⋂_j∈ J S_j is (α,β,c,ρ,)-potential winning, whereα^c = ∑_j∈ Jα_j^c,assuming that the series converges. If S is (α,β,c,ρ,)-potential winning and α≤α̃, β≤β̃, c≤c̃, ρ≤ρ̃, and ⊂̃, then S is (α̃,β̃,c̃,ρ̃,̃)-potential winning. 
Note that the proof of Proposition <ref> uses the Hölder inequality(∑_i α_i^c̃)^1/c̃≤(∑_i α_i^c)^1/c when c ≤c̃.Let f:X→ Y be a bijection that satisfies(f(x),f(y)) = λ(x,y) for all x,y∈ X.Then a set S ⊂ X is (α,β,c,ρ,)-potential winning if and only if the set f(S) ⊂ Y is (α,β,c,λρ,f())-potential winning. If S is (α,β,c,ρ,)-potential winning and f:X→ Y is a K_3-Lipschitz homeomorphism whose inverse is K_4-Lipschitz, then f(S) ⊂ Y is (K_3 K_4α,β,c,K_3ρ,f())-potential winning. Let us call the (α,β,c,ρ,)-potential game Game I and the (K_3 K_4α,β,c,K_3ρ,f())-potential game Game II. We need to show that Alice can transfer any winning strategy from Game I to Game II. For each move B̃_m = B(x,r) Bob makes in Game II, Alice pretends that he has made the corresponding move B_m = B(f^-1(x),K_4 r) in Game I. The inclusions B̃_m+1⊂B̃_m in Game II together with the fact that f^-1 is K_4-Lipschitz imply that the corresponding inclusions B_m+1⊂ B_m hold in Game I, i.e. Bob is playing legally in Game I. In Game I Alice plays her winning strategy, which means that in response to each of Bob's balls B_m, she deletes some collection Å_m. She then transfers her strategy to Game II by deleting the corresponding collectionÅ̃_m = {(f(),K_3 r) : (,r) ∈Å_m}.This is legal to play since∑_A∈Å̃_m^c(A) = ∑_A∈Å_m (K_3 (A))^c ≤ (K_3 α(B_m))^c = (K_3 K_4 α(B̃_m))^c.Since f is K_3-Lipschitz, we have f(⋃[Å_m]) ⊂⋃Å̃_m, and thus if Alice wins by default in Game I, then she wins by default in Game II as well. This completes the proof.
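The thickness budget ∑_i ρ_i,m^c ≤ (αρ_m)^c behaves like a c-dimensional "mass", which is what makes the transfer arguments above mechanical. A few sanity checks of our own (with c = 1 so that the arithmetic is exact), illustrating a legal countable deletion schedule and the additivity α^c = ∑_j α_j^c behind the countable intersection property:

```python
from fractions import Fraction

def legal_deletion(radii, alpha, rho, c):
    """Check Alice's constraint sum_i rho_i^c <= (alpha * rho)^c in the
    (alpha, beta, c, rho, F)-potential game (for c > 0)."""
    return sum(r ** c for r in radii) <= (alpha * rho) ** c

alpha, rho, c = Fraction(1, 4), Fraction(1), 1

# A single deletion of thickness alpha * rho is legal, as in the c = 0 game...
assert legal_deletion([alpha * rho], alpha, rho, c)

# ...but with c > 0 Alice may spread her budget over many sets at once:
# the geometric schedule rho_i = alpha * rho / 2^i (i >= 1) has
# sum of c-th powers at most (alpha * rho)^c.
radii = [alpha * rho / 2 ** i for i in range(1, 30)]
assert legal_deletion(radii, alpha, rho, c)

# Combining strategies for two target sets requires alpha^c = a1^c + a2^c:
# the two budgets then add up to a single legal budget.
a1, a2 = Fraction(1, 8), Fraction(1, 8)
assert legal_deletion([a1 * rho, a2 * rho], a1 + a2, rho, c)
```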
We need to show that Alice can transfer any winning strategy from Game I to Game II. For each move B_ = B(x,r) Bob makes in Game II, Alice pretends that he has made the corresponding move B_ = B(f^-1(x),r') in Game I, where r' is chosen so that f(B_) ⊃ B_, and is minimal subject to this condition. [How to verify B_+1⊂ B_?] The inclusions B_ + 1⊂ B_ in Game II together with the fact that f^-1 is K_4-Lipschitz imply that the corresponding inclusions B_ + 1⊂ B_ hold in Game I, i.e. Bob is playing legally in Game I. In Game I Alice plays her winning strategy, which means that in response to each of Bob's balls B_, she deletes some collection Å_. She then transfers her strategy to Game II by deleting the corresponding collectionÅ_ = {(f(),K_3 r) : (,r) ∈Å_}.This is legal to play since∑_A∈Å_^c(A) = ∑_A∈Å_ (K_3 (A))^c ≤ (K_3 α(B_))^c = (K_3 K_4 α( B_))^c.Since f is K_3-Lipschitz, we have f(⋃[Å_]) ⊂⋃Å_, and thus if Alice wins by default in Game I, then she wins by default in Game II as well. This completes the proof. If S is (α,β,c,ρ,)-potential winning and λ≥ 1 then S is (λα,β,c,λ^-1ρ,)-potential winning. Let us call the (α,β,c,ρ,)-potential game Game I and the (λα,β,c,λ^-1ρ,)-potential game Game II. We need to show that Alice can transfer any winning strategy from Game I to Game II. She transfers her strategy as follows: every time Bob makes a move B_ = B(x,ρ_) in Game II, she pretends that Bob has made the move B_ = B(x,λρ_) in Game I. Since λ≥ 1, the inclusions B_+1⊂ B_ in Game II imply that the inclusions B_ + 1⊂ B_ hold for Game I, i.e. Bob is playing legally in Game I. In Game I Alice plays her winning strategy, which means that in response to each of Bob's balls B_, she deletes some collection Å_. She then transfers her strategy by deleting the exact same collection in response to the corresponding move B_ in Game II. This is legal because (λα)ρ_ = α (λρ_). Suppose that X is geodesic. 
If S is (α,β,c,ρ,)-potential winning and m∈ then S is ((1+β^-c+⋯+β^-(m-1)c)^1/cα,β^m,c,β^m-1ρ,)-potential winning. [m≠] The proof is similar to the proof of the previous proposition. Let us call the (α,β,c,ρ,)-potential game Game I and the ((1+β^-c+⋯+β^-(m-1)c)^1/cα,β^m,c,ρ,)-potential game Game II. Alice can transfer a winning strategy from Game I to Game II as follows:every time Bob makes a move B_ = B(x,ρ_) in Game II, she pretends that Bob has made a sequence of moves B_mn,…, B_mn+m-1 satisfyingB_-1 =B_mn-1⊃ B_mn⊃⋯⊃ B_mn+m-1 = B_as well as the inequalities ( B_k+1) ≥β ( B_k). Such a choice is possible due to the fact that X is geodesic, as well as to the inequality (B_) ≥β^m(B_-1) and the inclusion B_⊂ B_-1. If n = 0, we omit the first inclusion and radial inequality but instead require that ( B_0) ≥ρ. Finally, Alice uses her winning strategy for Game I against the moves B_mn-m+1,…, B_mn, and combines the resulting collections to form a single collection which she plays in response to B_. It can be checked that the choice of α = (1+β^-c+⋯+β^-(m-1)c)^1/cα means that this strategy is legal. § HAUSDORFF DIMENSION OF POTENTIAL WINNING SETSIn this section we will address the question: what is the appropriate analogue of Lemma <ref> for the potential game? In other words, what quantitative information about Hausdorff dimension can be deduced from the assumption that a certain set is potential winning? To answer this question, we first need some definitions. Given δ > 0, a measure μ on a complete metric space X is said to be Ahlfors δ-regular if for every sufficiently small ball B(x,ρ) centered in the topological support of μ, we have μ(B(x,ρ)) ≍ρ^δ. 
The topological support of an Ahlfors δ-regular measure is also said to be Ahlfors δ-regular.Given η > 0 and a collection of closed setsin X, the measure μ is called absolutely (η,)-decaying if for every sufficiently small ball B(x,ρ) centered in the topological support of μ, for every ∈, and for every ϵ > 0, we haveμ(B(x,ρ)∩(,ϵρ)) ≲ϵ^ημ(B(x,ρ)).When X = ^d andis the collection of hyperplanes, then μ is called absolutely η-decaying.Finally, the Ahlfors dimension of a (not necessarily closed) set S ⊂ is the supremum of δ such that S contains a closed Ahlfors δ-regular subset. We will denote it by (S). The Ahlfors dimension of a set is a lower bound for its Hausdorff dimension. Every Ahlfors δ-regular measure is absolutely (δ,)-decaying, whereis the collection of singletons in X. Lebesgue measure on ^d is absolutely 1-decaying. The following theorem is a combination of known results: Let X = ^d, letbe the collection of hyperplanes, and let J be the support of an Ahlfors δ-regular and absolutely (η,)-decaying measure μ. Suppose that S ⊂ X is (α,β,c,ρ,)-potential winning for all α,β,c,ρ > 0. Then we have (S∩ J) = δ. The set S is -potential winning in the terminology of <cit.>, and thus by <cit.> it is also -absolute winning, or in other words hyperplane absolute winning. So by <cit.> S is winning on J, and thus by <cit.> we have (S∩ J) = δ. In this paper we will be interested in the following quantitative version of Theorem <ref>:Let X be a complete metric space,a collection of closed subsets of X, and J⊂ X be the topological support of an Ahlfors δ-regular and absolutely (η,)-decaying measure μ. Let S⊂ X be (α,β,c,ρ,)-potential winning, with c < η and β≤ 1/4. 
Then for every ball B_0⊂ X centered in J with (B_0) ≥ρ, we have (S∩ J∩ B_0) ≥δ - K_1 α^η/|log(β)| > 0 if α^c ≤ (1/K_2)(1-β^η-c), where K_1,K_2 are large constants independent of α,β,c,ρ (but possibly depending on X,J,). Theorem <ref> implies that Theorem <ref> is true for every complete metric space X and every collection of closed subsets of X. In particular, the condition <cit.>, which is crucial for establishing <cit.>, turns out not to be necessary for proving its consequences in terms of Hausdorff dimension. For each ≥ 0 let ρ_ = β^ρ, let _⊂ J be a maximal ρ_/2-separated subset, and let _ = {B(x,ρ_) : x∈_}. Let π_:_ + 1→_ be a map such that for all B∈_ + 1, we have B ⊂π_(B). Such a map exists since β≤ 1/2. (We will later impose a further restriction on the map π_.) When m < n and B∈_n, we will abuse notation slightly by writing π_m(B) = π_m∘π_m+1∘⋯∘π_n-1(B). For each B∈_, consider the sequence of moves in the (α,β,c,ρ,)-potential game where for each = 0,…,, on the th turn Bob plays the move π_(B), and Alice responds according to her winning strategy. By (<ref>), Bob's moves are all legal. Let Å(B) denote Alice's response on turn according to her winning strategy. Also, let Å_^*(B) = {A∈Å(π_(B)) : B∩ A ≠∅}. Finally, let Å_ = ⋃_B∈ B_{B∩ A : A∈Å(B)}. For each A∈Å_, we let (A) denote the thickness of A, i.e. (B(x,)∩A') = '. For all x∈ X ∖⋃_∈⋃[Å_], we have x∈ S. By König's lemma, there exists a sequence _∋ B_→ x such that for all , we have π(B_ + 1) = B_. Consider the strategy for Bob consisting of the plays B_0,B_1,…. Alice's responses are the sets Å(B_) (n∈). Since Alice's strategy is winning, we have x∈ S ∪⋃_∈⋃[Å(B_)]. On the other hand, x∈⋂_∈ B_. So either x∈ S, or there exists n∈ such that x∈ B_∩⋃[Å(B_)]. In the former case we are done, and in the latter case we have x∈⋃_∈⋃[Å_], a contradiction. Fix ϵ > 0 small to be determined, independent of α,β,c,ρ, and let N = ⌊ϵα^-η⌋. For each j≥ 0 let D_j ⊂_jN be a maximal 3ρ_jN-separated set, and let _j = {B(x,ρ_jN) : x∈ D_j}⊂_jN.
Note that _j is a disjoint collection. For each B∈_j let ϕ_j(B) = ∑_ < jN∑_A∈Å_^*(B)^c(A) (cf. Notation <ref>). Fix γ > 0 small to be determined, independent of α,β,c,ρ, and let _j' = {B ∈_j : ϕ_j(B) ≤ (γρ_jN)^c}. For every ball B let _j+1(B) = {B' ∈_j + 1 : B' ⊂ (1/2)B}, where λ B denotes the ball resulting from multiplying the radius of B by λ while leaving the center fixed. For all B∈_j', we have #(_j+1(B)∩_j+1') ≳β^-Nδ if α^c ≤ (1/K_2)(1-β^η-c), where K_2 is a large constant. The Ahlfors regularity of J implies that the cardinality of _j+1(B) is at least (1/K_3)β^-Nδ, where K_3 is a large constant. Thus we just need to show that #(_j+1(B) ∖_j+1') ≤ (1/(2K_3))β^-Nδ. Now #(_j+1(B) ∖_j+1') ≤∑_B'∈_j+1(B)min(1,ϕ_j+1(B')/(γρ_(j+1)N)^c) ≤∑_B'∈_j+1(B)∑_ < (j+1)N∑_A∈Å_^*(B')min(1,^c(A)/(γρ_(j+1)N)^c) ≤∑_ < jN∑_A∈Å_^*(B)min(1,^c(A)/(γρ_(j+1)N)^c) #{B'∈_j+1(B) : B'∩ A≠∅} + ∑_jN ≤ < (j+1)N∑_B'∈_B' ⊂ B∑_A∈Å(B')min(1,^c(A)/(γρ_(j+1)N)^c) #{B”∈_j+1(B') : B”∩ A ≠∅}. The idea is to bound the first term (representing “old” obstacles) using the assumption that B∈_j', which implies that ϕ_j(B) ≤ (γρ_jN)^c, and to bound the second term (representing “new” obstacles) using the fact that Alice is playing legally, which implies (<ref>). To do this, we observe that for all B' ∈⋃__ and A = (,(A)), since μ is Ahlfors δ-regular and absolutely (η,)-decaying we have #{B”∈_j+1(B') : B”∩ A ≠∅}≲1/ρ_(j+1)N^δμ(B'∩ A2ρ_(j+1)N) ≲1/ρ_(j+1)N^δ(( A2ρ_(j+1)N)/(B'))^η ^δ(B') = ((B')/ρ_(j+1)N)^δ((A) + 2ρ_(j+1)N/(B'))^η and thus #(_j+1(B) ∖_j+1') ≲((B)/ρ_(j+1)N)^δ∑_ < jN∑_A∈Å_^*(B)min(1,^c(A)/(γρ_(j+1)N)^c) ((A) + 2ρ_(j+1)N/(B))^η + ∑_jN ≤ < (j+1)N∑_B'∈_B' ⊂ B((B')/ρ_(j+1)N)^δ∑_A∈Å(B')min(1,^c(A)/(γρ_(j+1)N)^c)((A) + 2ρ_(j+1)N/(B'))^η. To bound this expression, we first prove the following.
We have∑_ < jN∑_A∈Å_^*(B)min(1,^c(A)/(γρ_(j+1)N)^c) ((A) + 2ρ_(j+1)N/(B))^η≤ 3^ηγ^c max(γ^η-c,1/γ^c(ρ_(j+1)N/(B))^η-c),and for all B'∈⋃__, we have∑_A∈Å(B')min(1,^c(A)/(γρ_(j+1)N)^c) ((A) + 2ρ_(j+1)N/(B'))^η≤ 3^ηα^c max(α^η-c,1/γ^c(ρ_(j+1)N/(B'))^η-c).The proof of this subclaim will show that the left-hand sides of the maxima correspond to the contributions from “big” obstacles while the right-hand sides of the maxima correspond to contributions from “small” obstacles. Let us prove (<ref>) first. Since Alice is playing legally, we have∑_A∈Å(B')((A)/(B'))^c ≤α^cso the trick is relating the left-hand side of (<ref>) to the left-hand side of (<ref>).Now, it can be verified that the inequalitymin(1,x^c/(γ y)^c) (x + 2y)^η≤ 3^η x^c max(x^η - c,y^η - c/γ^c)holds for all x,y > 0, e.g. by splitting into the cases x≥ y (use the left option of both “min” and “max”) and x≤ y (use the right option of both “min” and “max”). Letting x = (A)/(B') and y = ρ_(j+1)N/(B') and summing over all A∈Å(B') shows that∑_A∈Å(B')min(1,^c(A)/(γρ_(j+1)N)^c) ((A) + 2ρ_(j+1)N/(B'))^η≤∑_A∈Å(B') 3^η((A)/(B'))^cmax(((A)/(B'))^η-c,1/γ^c(ρ_(j+1)N/(B'))^η-c)≤ 3^η(∑_A∈Å(B')((A)/(B'))^c) max((max_A∈Å(B')(A)/(B'))^η-c,1/γ^c(ρ_(j+1)N/(B'))^η-c).Applying (<ref>) twice yields (<ref>).The proof of (<ref>) is similar, except that instead of summing over A∈Å(B'), we sum overA ∈⋃_ < jNÅ_^*(B), and instead of (<ref>), we use the fact that the assumption B∈_j' implies that∑_ < jN∑_A∈Å_^*(B)((A)/(B))^c ≤γ^c.This completes the proof of Subclaim <ref>.Combining Subclaim <ref> with the inequality preceding it yields#(_j+1(B)_j+1')≲((B)/ρ_(j+1)N)^δγ^c max(γ^η-c,1/γ^c(ρ_(j+1)N/(B))^η-c)+ ∑_jN ≤ < (j+1)N∑_B'∈_B' ⊂ B((B')/ρ_(j+1)N)^δα^c max(α^η-c,1/γ^c(ρ_(j+1)N/(B'))^η-c).Now by definition we have (B) = β^jNρ, ρ_(j+1)N = β^(j+1)Nρ, and (B') = β^ρ for all B'∈_. 
Thus after applying the change of variables = (j+1)N - k, we get(B)/ρ_(j+1)N = β^-N, (B')/ρ_(j+1)N = β^-k.On the other hand, the Ahlfors regularity of J implies that#{B'∈_ : B' ⊂ B}≍((B)/β^ρ)^δ = β^-(N-k)δ,so we have#(_j+1(B)_j+1') ≲β^-Nδγ^c max(γ^η-c,1/γ^cβ^N(η-c)) + β^-Nδα^c ∑_k = 1^N max(α^η-c,1/γ^cβ^k(η-c)).Denote the implied constant of this inequality by K_4, and letϵ = 1/6 K_3 K_4·Then to deduce (<ref>) from (<ref>), it suffices to show that all four contributions to the right-hand side of (<ref>) are less than β^-Nδϵ, i.e. thatγ^η ≤ϵ.(old big obstacles) β^N(η - c) ≤ϵ.(old small obstacles)N α^η ≤ϵ.(new big obstacles) α^c/γ^c∑_k = 0^∞β^k(η - c) ≤ϵ. (new small obstacles)Now (<ref>) can be achieved by choosing γ = ϵ^1/η, while (<ref>) is true by the definition of N (see (<ref>)). This leaves (<ref>) and (<ref>), which can be rearranged asN(η - c)|log(β)|≥ |log(ϵ)| α^c 1/1 - β^η - c ≤ϵγ^c = ϵ^1 + c/η.Now fix K_2 large to be determined, and suppose that α^c ≤1/K_2 (1 - β^η - c). Since 1+c/η < 2, if K_2 ≥ϵ^-2 then (<ref>) holds. Moreover, since α^c ≤ϵ^2 ≤ϵ^c/η, we have ϵα^-η≥ 1 and thusN = ⌊ϵα^-η⌋≥12 ϵα^-η.On the other hand, we haveα^η≤α^c ≤1/K_2|logβ^η - c|and thus if K_2 ≥ 2ϵ^-1log(ϵ^-1), then12ϵα^-η (η - c)|log(β)| ≥ |log(ϵ)|,demonstrating (<ref>). So we letK_2 = max(ϵ^-2,2ϵ^-1log(ϵ^-1)).This completes the proof of Claim <ref>. Let K_3 be as in the proof of the claim, so that#(_j+1(B)∩_j+1') ≥ M ⌈1/2K_3β^-Nδ⌉ for all B'∈_j'.Let B_0 be the ball given in the statement of the theorem, and assume that B_0 ∈_0. (It is always possible to select _0 and _0 such that this is the case.) Since ϕ_0(B_0) = 0 < (γρ)^c, we have B_0 ∈_0'.We can now construct a Cantor set F as follows: let _0 = {B_0}⊂_0', and whenever we are given a collection _j ⊂_j', construct a new collection _j+1 by replacing each element B∈_j by M elements of _j+1(B)∩_j+1'. Such elements exist by (<ref>). Finally, letF = ⋂_j = 0^∞⋃_B∈_j B.Then standard arguments (see e.g. 
<cit.>) show that F is Ahlfors regular of dimension log(M)/|log(β^N)| ≥log((1/(2K_3))β^-Nδ)/|log(β^N)| = δ - log(2K_3)/|log(β^N)| = δ - log(2K_3)/(N|log(β)|) ≥δ - 2ϵ^-1log(2K_3) α^η/|log(β)|. So to demonstrate the first half of (<ref>), we just need to show that F ⊂ S∩ J∩ B_0. It is clear that F ⊂ J∩ B_0, so we show that F ⊂ S. Indeed, fix x∈ F. For each j∈, let B_jN be the unique element of _j containing x. At this point, we introduce the requirement that for each j, the map π_jN must satisfy π_jN(B') = B whenever _jN+1∋ B' ⊂ B ∈_j. Due to the disjointness of the collection _j, it is possible to choose a map π_jN satisfying this requirement. Since β≤ 1/4, if B∈_j and B'∈_jN+1 satisfy B'∩ (1/2)B ≠∅, then B' ⊂ B. It follows that π_jN(B') = B whenever _n ∋ B' ⊂ (1/2)B, B ∈_j, > jN. By the definition of _j+1 we have B_(j+1)N⊂ (1/2)B_jN and thus π_jN(B_(j+1)N) = B_jN. Thus the partial sequence (B_n)_n∈ jN can be uniquely extended to a full sequence (B_n)_n∈ by requiring that B_n = π_n(B_n+1) for all n. Now interpret the sequence (B_n)_n∈ as a sequence of moves for Bob in the potential game, and suppose Alice responds by playing her winning strategy. Then the outcome of the game is x, so either x∈ S or Alice wins by default. Suppose that Alice wins by default. Then we have x∈ A∈Å(B_m) for some m. It follows that A ∈Å_m^*(B_n) for all n > m, and thus ϕ_j(B_jN) ≥^c(A) for all j such that jN > m. On the other hand, since B_jN∈_j' we have ϕ_j(B_jN) ≤ (γρ_jN)^c, and thus (A) ≤γρ_jN for all j such that jN > m. Letting j→∞ we get (A) = 0, a contradiction. Thus x∈ S, and hence F ⊂ S. This demonstrates the first half of (<ref>). To demonstrate the second half of (<ref>), we observe that if α^c ≤ (1/K_2)(1-β^η-c), then α^η/|log(β)| ≤α^c/|log(β)| ≤ (1/K_2)·|log(β^η-c)|/|log(β)| = (η - c)/K_2 ≤η/K_2, so requiring K_2 > η K_1/δ completes the proof.
§ APPLICATIONS OF THE POTENTIAL GAME
We now use the potential game, and in particular Theorem <ref>, to prove Theorems <ref> and <ref>.
Note that Theorem <ref> follows immediately from combining Theorem <ref> with Lemmas <ref> and <ref> (cf. Remark <ref> and Example <ref>). §.§ Proof of Theorem <ref> Note: In this section we fix a norm on ^d and treat ^d as a metric space with respect to that norm, as well as letting _d(ϵ) be defined in terms of this norm; it does not matter which norm it is.Letbe the collection of hyperplanes in ^d. Then for all ϵ > 0 and (d!V_d)^1/dϵ < β < 1, the set _d(ϵ) is (α,β,c,ρ,)-potential winning, where(α,β,c,ρ,) = (ϵβ^-1/(d!V_d)^-1/d-ϵβ^-1,β,0,β(d!V_d)^-1/d - ϵ,).Here V_d denotes the volume of the d-dimensional unit ball (with respect to the chosen norm).When d = 1, these numbers are only slightly worse than the ones appearing in Lemma <ref>.As in the proof of Lemma <ref>, we letΔ_ϵ(/q) = B(/q,ϵ q^-d+1/d)so that_1(ϵ) = ⋃_/q∈^dΔ_ϵ(/q).We will use the simplex lemma in the following form:Fix Q > 1 and s > 0 such thatV_d s^d = 1/d! Q^d+1·Fix ∈^d. Then the set{/q∈^d ∩ B(,s) : q < Q}is contained in an affine hyperplane.We now describe Alice's strategy in the potential game. Suppose that Bob has just made the move B_ = B(_,ρ_), and let Q = Q_ > 1 and s = s_ > 0 be chosen so as to satisfy (<ref>) as well as the equations = ρ_ + ϵβ^-1 Q^-d+1/d.Note that solving for ρ_ in terms of Q givesρ_ = (1/√(d! V_d) - ϵβ^-1) Q^-d+1/d.Then Alice deletes the αρ_-neighborhood of the affine hyperplane containing the set (<ref>).To show that this strategy is winning (it is clearly legal), letdenote the outcome of the game and suppose that ∉_d(ϵ), so that ∈Δ_ϵ(/q) for some /q∈^d. We will show that ∈ A∈Å_ for some ≥ 0. Indeed, letbe the first integer such that q < Q_. If > 0, thenβ≤ρ_/ρ_-1 = (Q_/Q_-1)^-d+1/dand thusq ≥ Q_ - 1≥β^d/d+1Q_while if = 0, then1 ≤ρ_0/ρ = Q_0^-d+1/d/βand thusq ≥ 1 ≥β^d/d+1Q_.Either way we have q ≥β^d/d+1Q_, so(Δ_ϵ(/q)) = ϵ q^-d+1/d≤ϵβ^-1 Q_^-d+1/d.Thus since ∈ B(_,ρ_)∩Δ_ϵ(/q), we have|/q - _| ≤ρ_ + ϵβ^-1 Q_^-d+1/d = si.e. /q∈ B(_,s). 
Thus /q is a member of the set (<ref>) and thus of the hyperplane that Alice deleted the αρ_-neighborhood of on turn . So to complete the proof it suffices to show that ϵ q^-d+1/d≤αρ_, which follows from (<ref>), (<ref>), and the definition of α. Let J ⊂^d be the topological support of an Ahlfors δ-regular and absolutely η-decaying measure. Then for all ϵ > 0, we have (_d(ϵ)∩ J) ≥δ - K ϵ^η, where K is a constant depending on J. Let β = 1/4 and c = η/2. Combining Lemma <ref> with Proposition <ref> shows that _d(ϵ) is (α,β,c,ρ,)-potential winning, where ρ is a constant, α≍ϵ, and is the collection of hyperplanes in ^d. If ϵ is sufficiently small, then α^c ≤ (1/K_2)(1-β^η-c) and thus Theorem <ref> shows that (<ref>) holds. Theorem <ref> is a special case of this corollary (cf. Example <ref>). §.§ Proof of Theorem <ref> In this section we let denote the set of points in X = ℝ. For all 0 < β≤ 1/4, there exists δ = δ(β) such that for all α,c,ρ,ϵ > 0 and S ⊂ such that S = S ∪ (-∞, a) ∪ (a + 2ρ + ϵ, ∞) ⊂ is an (α,β, c, ρ,)-potential winning set with c ≤ 1 - 1/log(α^-1), the set S contains an arithmetic progression of length δα^-1/log(α^-1). In fact, for every sufficiently small t > 0, S contains uncountably many arithmetic progressions of length δα^-1/log(α^-1) and common gap size t. By Proposition <ref>, we may without loss of generality assume that c = 1 - 1/log(α^-1). Fix k∈ to be determined, and fix 0 < t ≤ϵ/k. By Propositions <ref> and <ref>, the set S' = ⋂_i = 0^k-1 ( S - it) is (k^1/cα,β,c,ρ,)-potential winning. Thus by Theorem <ref>, if kα^c ≤ (1/K_2)(1-β^1-c) then (S'∩ [a,a+2ρ]) > 0. In particular, in this case S'∩ [a,a+2ρ] ≠∅, and if x∈ S'∩ [a,a+2ρ] then the arithmetic progression {x,x+t,…,x+(k-1)t} is contained in S. Now let k be the largest integer such that (<ref>) is satisfied. To complete the proof, we need to show that k ≍α^-1/log(α^-1) as long as α is sufficiently small.
Indeed, since β is fixed and c = 1-1/log(α^-1), we have 1-β^1-c = 1-β^1/log(α^-1)≍ 1/log(α^-1) and α^c = eα≍α, and thus k = ⌊(1/K_2)·(1-β^1-c)/α^c⌋≍ (1-β^1-c)/α^c ≍ (1/log(α^-1))/α = α^-1/log(α^-1), as long as the right-hand side is large enough to guarantee that k≥ 1. Combining with Lemmas <ref> and <ref> (cf. Remark <ref>) immediately yields the lower bounds of (<ref>) and (<ref>), respectively. So in the remainder of the proof we will demonstrate the upper bounds. Let S be an arithmetic progression in M_ϵ of length k≥ 2, and let I be the smallest interval appearing in the construction of M_ϵ such that S ⊂ I. Let J be the middle ϵ gap of I. The minimality of I implies that S contains points both to the left and to the right of J, so the common gap size t of S is at least |J| = ϵ |I|. On the other hand, we have (k-1)t = (S) ≤ |I|, so k-1 ≤ |I|/|J| = 1/ϵ. This demonstrates the upper bound of (<ref>). The proof for F_n is similar but more technical. In what follows we use the standard notation [a_0;a_1,a_2,…] = a_0 + 1/(a_1 + 1/(a_2 + ⋱)). Let S be an arithmetic progression in F_n of length k≥ 2, and let ω = ω_1⋯ω_r be the longest word in the alphabet {1,…,n} such that the continued fraction expansions of all elements of S begin with ω. (Note that ω may be the empty word.) Then the set A of numbers i = 1,…,n such that some element of S has a continued fraction expansion of the form [0;ω,i,…] has at least two elements. Here [0;ω,i,…] is short for [0;ω_1,…,ω_r,i,…]. Let i and j be the smallest and second-smallest elements of A, respectively, and consider first the case where j = i+1. As before, write t for the common gap size of S, so that (k-1)t = (S). Then t ≥ |[0;ω,j,n+1] - [0;ω,i,1]| while (k-1)t ≤ |[0;ω,i] - [0;ω,n+1]|, so k-1 ≤ |[0;ω,i] - [0;ω,n+1]|/|[0;ω,j,n+1] - [0;ω,i,1]| ≍ |[0;i] - [0;n+1]|/|[0;j,n+1] - [0;i,1]| (bounded distortion property) ≤ (1/i)/|[0;j,n+1] - [0;j]| ≍ (1/i)/((1/j^2)|[0;n+1] - 0|) (bounded distortion property again) = j^2(n+1)/i ≲ n^2.
(Here j = i + 1.) The bounded distortion property for the Gauss iterated function system (u_k(x) := 1/(k+x))_k∈ can be proven by applying <cit.>. It states that if u_ω(x) = u_ω_1∘⋯∘ u_ω_r(x), or equivalently u_ω([0;x]) = [0;ω,x], then |u_ω(y) - u_ω(x)| ≍max_[0,1] |u_ω'| · |y-x| for all x,y∈ [0,1]. If j > i+1, then the bound |[0;j,n+1] - [0;i,1]| ≥ 1/(i+1) - 1/j can be used instead, yielding the better bound k-1 ≲ (1/i)/(1/(i+1) - 1/j) ≤ (1/i)/(1/((i+1)(i+2))) = (i+1)(i+2)/i ≍ i ≤ n. This demonstrates the upper bound of (<ref>), completing the proof.
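As an aside, the bounded distortion property lends itself to a quick numerical sanity check. The sketch below (our own Python illustration, not part of the original argument) composes maps u_k(x) = 1/(k+x) from the Gauss iterated function system, computes the derivative of the composition exactly via the chain rule, and confirms that the distortion max|u_ω'|/min|u_ω'| on [0,1] stays below the uniform bound 4, so |u_ω(1) - u_ω(0)| is comparable to max_[0,1]|u_ω'|:

```python
import numpy as np
from itertools import product

def gauss_comp(word, x):
    """Return (u_w(x), |u_w'(x)|) for u_w = u_{w_1} ∘ … ∘ u_{w_r},
    where u_k(x) = 1/(k + x); the derivative comes from the chain rule."""
    d = 1.0
    for k in reversed(word):        # apply the innermost map first
        d /= (k + x) ** 2           # |u_k'(x)| = 1/(k + x)^2
        x = 1.0 / (k + x)
    return x, d

xs = np.linspace(0.0, 1.0, 101)
for word in product(range(1, 5), repeat=5):   # all length-5 words over {1,...,4}
    derivs = np.array([gauss_comp(word, t)[1] for t in xs])
    # Distortion on [0,1] equals ((q_r + q_{r-1})/q_r)^2 <= 4 for continuants q.
    assert derivs.max() / derivs.min() < 4.0
    u0 = gauss_comp(word, 0.0)[0]
    u1 = gauss_comp(word, 1.0)[0]
    # |u_w(1) - u_w(0)| is comparable to max |u_w'|, as the property asserts.
    assert derivs.max() / 4.0 <= abs(u1 - u0) <= derivs.max()
```

The derivative of the composition is accumulated along the orbit rather than differentiated numerically, so the distortion bound is checked exactly up to floating-point error.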
http://arxiv.org/abs/1703.09015v3
{ "authors": [ "Ryan Broderick", "Lior Fishman", "David Simmons" ], "categories": [ "math.MG", "math.NT" ], "primary_category": "math.MG", "published": "20170327112057", "title": "Quantitative results using variants of Schmidt's game: Dimension bounds, arithmetic progressions, and more" }
http://arxiv.org/abs/1703.08585v2
{ "authors": [ "Julian Adamek", "Jacob Brandbyge", "Christian Fidler", "Steen Hannestad", "Cornelius Rampf", "Thomas Tram" ], "categories": [ "astro-ph.CO", "gr-qc" ], "primary_category": "astro-ph.CO", "published": "20170324195907", "title": "The effect of early radiation in N-body simulations of cosmic structure formation" }
Regularized Gradient Descent: A Nonconvex Recipe for Fast Joint Blind Deconvolution and Demixing The authors acknowledge support from the NSF via grants DTRA-DMS 1322393 and DMS 1620455. Shuyang Ling (Courant Institute of Mathematical Sciences, New York University; Email: sling@cims.nyu.edu) and Thomas Strohmer (Department of Mathematics, University of California at Davis; Email: strohmer@math.ucdavis.edu). December 30, 2023
================================================================================
We study the question of extracting a sequence of functions {f_i, g_i}_i=1^s from observing only the sum of their convolutions, i.e., from y = ∑_i=1^s f_i∗g_i. While convex optimization techniques are able to solve this joint blind deconvolution-demixing problem provably and robustly under certain conditions, for medium-size or large-size problems we need computationally faster methods without sacrificing the benefits of mathematical rigor that come with convex methods. In this paper we present a non-convex algorithm which guarantees exact recovery under conditions that are competitive with convex optimization methods, with the additional advantage of being computationally much more efficient. Our two-step algorithm converges to the global minimum linearly and is also robust in the presence of additive noise. While the derived performance bounds are suboptimal in terms of the information-theoretic limit, numerical simulations show remarkable performance even if the number of measurements is close to the number of degrees of freedom. We discuss an application of the proposed framework in wireless communications in connection with the Internet-of-Things.
§ INTRODUCTION
The goal of blind deconvolution is the task of estimating
While it is a highly ill-posed bilinear inverse problem, blind deconvolution is also an extremely important problem in signal processing <cit.>, communications engineering <cit.>, image processing <cit.>, audio processing <cit.>, etc. In this paper, we deal with an even more difficult and more general variation of the blind deconvolution problem, in which we have to extract multiple convolved signals mixed together in one observation signal. This joint blind deconvolution-demixing problem arises in a range of applications such as acoustics <cit.>, dictionary learning <cit.>, and wireless communications <cit.>. We briefly discuss one such application in more detail. Blind deconvolution/demixing problems are expected to play a vital role in the future Internet-of-Things. The Internet-of-Things will connect billions of wireless devices, which is far more than the current wireless systems can technically and economically accommodate. One of the many challenges in the design of the Internet-of-Things will be its ability to manage the massive number of sporadic traffic generating devices which are most of the time inactive, but regularly access the network for minor updates with no human interaction <cit.>. This means, among other things, that the overhead caused by the exchange of certain types of information between transmitter and receiver, such as channel estimation, assignment of data slots, etc., has to be avoided as much as possible <cit.>. Focusing on the underlying mathematical challenges, we consider a multi-user communication scenario where many different users/devices communicate with a common base station, as illustrated in Figure <ref>. Suppose we have s users and each of them sends a signal _i through an unknown channel (which differs from user to user) to the common base station. We assume that the i-th channel, represented by its impulse response _i, does not change during the transmission of the signal _i.
Therefore _i acts as a convolution operator, i.e., the signal transmitted by the i-th user arriving at the base station becomes _i ∗_i, where “∗" denotes convolution. The antenna at the base station, instead of receiving each individual component _i∗_i, is only able to record the superposition of all those signals, namely, = ∑_i=1^s _i∗_i + , where represents noise. We aim to develop a fast algorithm to simultaneously extract all pairs {(_i,_i)}_i=1^s from (i.e., estimating the channel/impulse responses _i and the signals _i jointly) in a numerically efficient and robust way, while keeping the number of required measurements as small as possible. §.§ State of the art and contributions of this paper A thorough theoretical analysis concerning the solvability of demixing problems via convex optimization can be found in <cit.>. There, the authors derive explicit sharp bounds and phase transitions regarding the number of measurements required to successfully demix structured signals (such as sparse signals or low-rank matrices) from a single measurement vector. In principle we could recast the blind deconvolution/demixing problem as the demixing of a sum of rank-one matrices, see (<ref>). As such, it seems to fit into the framework analyzed by McCoy and Tropp. However, the setup in <cit.> differs from ours in a crucial manner. McCoy and Tropp consider as measurement matrices (see the matrices _i in (<ref>)) full-rank random matrices, while in our setting the measurement matrices are rank-one. This difference fundamentally changes the theoretical analysis. The findings in <cit.> are therefore not applicable to the problem of joint blind deconvolution/demixing. The compressive principal component analysis in <cit.> is also a form of demixing problem, but its setting is only vaguely related to ours.
There is a large amount of literature on demixing problems, but the vast majority does not have a “blind deconvolution component”, therefore this body of work is only marginally related to the topic of our paper. Blind deconvolution/demixing problems also appear in convolutional dictionary learning, see e.g. <cit.>. There, the aim is to factorize an ensemble of input vectors into a linear combination of overcomplete basis elements which are modeled as shift-invariant—the latter property is why the factorization turns into a convolution. The setup is similar to (<ref>), but with an additional penalty term to enforce sparsity of the convolving filters. The existing literature on convolutional dictionary learning is mainly focused on empirical results, therefore there is little overlap with our work. But it is an interesting challenge for future research to see whether the approach in this paper can be modified to provide a fast and theoretically sound solver for the sparse convolutional coding problem. There are numerous papers concerned with blind deconvolution/demixing problems in the area of wireless communications <cit.>. But the majority of these papers assumes the availability of multiple measurement vectors, which makes the problem significantly easier. Those methods however cannot be applied to the case of a single measurement vector, which is the focus of this paper. Thus there is essentially no overlap of those papers with our work. Our previous paper <cit.> solves (<ref>) under subspace conditions, i.e., assuming that both _i and _i belong to known linear subspaces. This contributes to generalizing the pioneering work by Ahmed, Recht, and Romberg <cit.> from the “single-user" scenario to the “multi-user" scenario. Both <cit.> and <cit.> employ a two-step convex approach: first “lifting" <cit.> is used and then the lifted version of the original bilinear inverse problems is relaxed into a semi-definite program.
An improvement of the theoretical bounds in <cit.> was announced in <cit.>. While the convex approach is certainly effective and elegant, it can hardly handle large-scale problems. This motivates us to apply a nonconvex optimization approach <cit.> to this blind-deconvolution-blind-demixing problem. The mathematical challenge, when using non-convex methods, is to derive a rigorous convergence framework with conditions that are competitive with those in a convex framework. In the last few years several excellent articles have appeared on provably convergent nonconvex optimization applied to various problems in signal processing and machine learning, e.g., matrix completion <cit.>, phase retrieval <cit.>, blind deconvolution <cit.>, dictionary learning <cit.>, super-resolution <cit.> and low-rank matrix recovery <cit.>. In this paper we derive the first nonconvex optimization algorithm to solve (<ref>) fast and with rigorous theoretical guarantees concerning exact recovery, convergence rates, as well as robustness for noisy data. Our work can be viewed as a generalization of blind deconvolution <cit.> (s=1) to the multi-user scenario (s > 1). The idea behind our approach is strongly motivated by the nonconvex optimization algorithm for phase retrieval proposed in <cit.>. In this foundational paper, the authors use a two-step approach: (i) Construct a good initial guess with a numerically efficient algorithm; (ii) Starting with this initial guess, prove that simple gradient descent will converge to the true solution. Our paper follows a similar two-step scheme. However, the techniques used here are quite different from <cit.>. Like the matrix completion problem <cit.>, the performance of the algorithm relies heavily and inherently on how much the ground truth signals are aligned with the design matrix. Due to this so-called “incoherence" issue, we need to impose extra constraints, which results in a different construction of the so-called basin of attraction.
Therefore, influenced by <cit.>, we add penalty terms to control the incoherence and this leads to the regularized gradient descent method, which forms the core of our proposed algorithm. To the best of our knowledge, our algorithm is the first algorithm for the blind deconvolution/blind demixing problem that is numerically efficient, robust against noise, and comes with rigorous recovery guarantees. §.§ Notation For a matrix , denotes its operator norm and _F is its Frobenius norm. For a vector , is its Euclidean norm and _∞ is the ℓ_∞-norm. For both matrices and vectors, ^* and ^* denote their complex conjugate transpose. is the complex conjugate of . We equip the matrix space ^K× N with the inner product defined by , := (^*). For a given vector , () represents the diagonal matrix whose diagonal entries are . For any z∈, let z_+ = (z + |z|)/2. § PRELIMINARIES Obviously, without any further assumption, it is impossible to solve (<ref>). Therefore, we impose the following subspace assumptions throughout our discussion <cit.>. * Channel subspace assumption: Each finite impulse response _i∈^L is assumed to have maximum delay spread K, i.e., _i = [ _i;]. Here _i∈^K is the nonzero part of _i and _i(n) = 0 for n > K. * Signal subspace assumption: Let _i := _i_i be the outcome of the signal _i∈^N encoded by a matrix _i∈^L× N with L > N, where the encoding matrix _i is known and assumed to have full rank[Here we use the conjugate _i instead of _i because it will simplify our notation in later derivations.]. Both subspace assumptions are common in various applications.
For instance in wireless communications, the channel impulse response can always be modeled to have finite support (or maximum delay spread, as it is called in engineering jargon) due to the physical properties of wave propagation <cit.>; and the signal subspace assumption is a standard feature found in many current communication systems <cit.>, including CDMA where _i is known as the spreading matrix and OFDM where _i is known as the precoding matrix. The specific choice of the encoding matrices _i depends on a variety of conditions. In this paper, we derive our theory by assuming that _i is a complex Gaussian random matrix, i.e., each entry in _i is i.i.d. 𝒞𝒩(0,1). This assumption, while sometimes imposed in the wireless communications literature, is somewhat unrealistic in practice, due to the lack of a fast algorithm to apply _i and due to storage requirements. In practice one would rather choose _i to be something like the product of a Hadamard matrix and a diagonal matrix with random binary entries. We hope to address such more structured encoding matrices in our future research. Our numerical simulations (see Section <ref>) show no difference in the performance of our algorithm for either choice. Under the two assumptions above, the model actually has a simpler form in the frequency domain. We assume throughout the paper that the convolution of finite sequences is circular convolution[This circular convolution assumption can often be reinforced directly (for example in wireless communications the use of a cyclic prefix in OFDM renders the convolution circular) or indirectly (e.g. via zero-padding). In the first case replacing regular convolution by circular convolution does not introduce any errors at all. In the latter case one introduces an additional approximation error in the inversion which is negligible, since it decays exponentially for impulse responses of finite length <cit.>.].
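To see concretely why the circular convolution assumption pays off, the following sketch (our own illustration with hypothetical dimensions, not code from the paper) simulates the sum-of-convolutions measurement with channels of delay spread K and subspace-encoded signals, and confirms that after a DFT each convolution becomes an entrywise product — the per-frequency diagonal structure exploited below:

```python
import numpy as np

rng = np.random.default_rng(0)
L, K, N, s = 128, 8, 16, 3          # illustrative sizes, not from the paper

# Channels supported on their first K taps (zero-padded to length L) and
# encoded signals x_i = C_i m_i living in known N-dimensional subspaces.
h = [np.pad(rng.standard_normal(K), (0, L - K)) for _ in range(s)]
C = [rng.standard_normal((L, N)) for _ in range(s)]
m = [rng.standard_normal(N) for _ in range(s)]
x = [C[i] @ m[i] for i in range(s)]

def circ_conv(a, b):
    """Direct circular convolution: (a * b)_l = sum_k a_k b_{(l-k) mod L}."""
    L = len(a)
    return np.array([sum(a[k] * b[(l - k) % L] for k in range(L))
                     for l in range(L)])

y = sum(circ_conv(h[i], x[i]) for i in range(s))     # the single observation

# The DFT diagonalizes circular convolution: F y = sum_i (F h_i) ⊙ (F x_i).
assert np.allclose(np.fft.fft(y),
                   sum(np.fft.fft(h[i]) * np.fft.fft(x[i]) for i in range(s)))
```

Thanks to this identity, each of the L frequency-domain equations involves only one Fourier coefficient of every channel and signal, which is exactly the structure of the frequency-domain model derived next.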
By applying the Discrete Fourier Transform (DFT) to (<ref>) along with the two assumptions, we have 1/√(L) = ∑_i=1^s(_i)(_i _i) + 1/√(L), where is the L× L normalized unitary DFT matrix with ^* = ^* = _L. The noise is assumed to be additive white complex Gaussian noise with ∼𝒞𝒩(, σ^2d_0^2_L) where d_0 = √(∑_i=1^s _i0^2 _i0^2), and {(_i0, _i0)}_i=1^s is the ground truth. We define d_i0 = _i0_i0^*_F and assume without loss of generality that _i0 and _i0 are of the same norm, i.e., _i0 = _i0 = √(d_i0), which is due to the scaling ambiguity[Namely, if the pair (_i, _i) is a solution, then so is (α_i, α^-1_i) for any α≠ 0.]. In that way, 1/σ^2 is effectively a measure of the SNR (signal-to-noise ratio). Let _i∈^K be the first K nonzero entries of _i and ∈^L× K be a low-frequency DFT matrix (the first K columns of an L× L unitary DFT matrix). Then a simple relation holds, _i = _i, ^* = _K. We also denote _i := _i and := 1/√(L). Due to the Gaussianity, _i also possesses a complex Gaussian distribution and so does . From now on, instead of focusing on the original model, we consider (with a slight abuse of notation) the following equivalent formulation throughout our discussion: = ∑_i=1^s (_i)_i_i + , where ∼𝒞𝒩(, σ^2 d_0^2/L_L). Our goal here is to estimate all {_i, _i}_i=1^s from , and {_i}_i=1^s. Obviously, this is a bilinear inverse problem, i.e., if all {_i}_i=1^s are given, it is a linear inverse problem (the ordinary demixing problem) to recover all {_i}_i=1^s, and vice versa. We note that there is a scaling ambiguity in all blind deconvolution problems that cannot be resolved by any reconstruction method without further information.
Therefore, when we talk about exact recovery in the following, then this is understood modulo such a trivial scaling ambiguity. Before proceeding to our proposed algorithm we introduce some notation to facilitate a more convenient presentation of our approach. Let _l be the l-th column of ^* and _il be the l-th column of _i^*. Based on our assumptions the following properties hold: ∑_l=1^L_l_l^* = _K, _l^2 = K/L, _il∼𝒞𝒩(, _N). Moreover, inspired by the well-known lifting idea <cit.>, we define the useful matrix-valued linear operator _i : ^K× N→^L and its adjoint _i^*:^L→^K× N by _i() := {_l^*_il}_l=1^L, ^*_i() := ∑_l=1^L z_l _l_il^* = ^*()_i for each 1≤ i≤ s under the canonical inner product over ^K× N. Therefore, (<ref>) can be written in the following equivalent form = ∑_i=1^s _i(_i_i^*) + . Hence, we can think of as the observation vector obtained from taking linear measurements with respect to a set of rank-1 matrices {_i_i^*}_i=1^s. In fact, with a bit of linear algebra (and ignoring the noise term for the moment), the l-th entry of in (<ref>) equals the inner product of two block-diagonal matrices: y_l = [ _1,0_1,0^* ⋯ ;_2,0_2,0^*⋯ ;⋮⋮⋱⋮;⋯ _s0_s0^*;]_defined as _0 , [ _l_1l^* ⋯; _l_2l^* ⋯; ⋮ ⋮ ⋱ ⋮; ⋯ _l_sl^*; ] + e_l, where y_l = ∑_i=1^s _l^*_i0_i0^*_il + e_l, 1≤ l≤ L and _0 is defined as the ground truth matrix. In other words, we aim to recover such a block-diagonal matrix _0 from L linear measurements with block structure if = . By stacking all {_i}_i=1^s (and {_i}_i=1^s, {_i0}_i=1^s, {_i0}_i=1^s) into a long column, we let :=[ _1;⋮; _s ], _0 :=[ _1,0;⋮;_s0 ]∈^Ks , :=[ _1;⋮; _s ],_0 :=[ _1,0;⋮;_s0 ]∈^Ns. We define as a bilinear operator which maps a pair (, )∈^Ks×^Ns into a block diagonal matrix in ^Ks× Ns, i.e., (, ) :=[ _1_1^* ⋯ ;_2_2^*⋯ ;⋮⋮⋱⋮;⋯ _s_s^*;]∈^Ks× Ns. Let := (, ) and _0 := (_0, _0) where _0 is the ground truth as illustrated in (<ref>).
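The operators defined above satisfy the adjoint relation ⟨A_i(Z), z⟩ = ⟨Z, A_i^*(z)⟩ under the canonical inner product, which is easy to sanity-check numerically. In the sketch below (NumPy), the vectors b_l and a_il are drawn as generic complex Gaussian vectors and all dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
L, K, N = 32, 4, 6
# b_l and a_il as generic complex vectors for this check
b = rng.standard_normal((L, K)) + 1j * rng.standard_normal((L, K))
a = rng.standard_normal((L, N)) + 1j * rng.standard_normal((L, N))

def A_op(Z):
    """A(Z)_l = b_l^* Z a_l."""
    return np.array([b[l].conj() @ Z @ a[l] for l in range(L)])

def A_adj(z):
    """A^*(z) = sum_l z_l b_l a_l^*."""
    return sum(z[l] * np.outer(b[l], a[l].conj()) for l in range(L))

Z = rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))
z = rng.standard_normal(L) + 1j * rng.standard_normal(L)

# <A(Z), z> = <Z, A^*(z)>  (np.vdot conjugates its first argument)
assert np.allclose(np.vdot(A_op(Z), z), np.vdot(Z, A_adj(z)))
```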
Define ():^Ks× Ns→^L as () := ∑_i=1^s _i(_i), where = blkdiag(_1,⋯,_s) and blkdiag is the standard MATLAB function to construct a block diagonal matrix. Therefore, ((, )) = ∑_i=1^s _i(_i_i^*) and = ((_0, _0)) + . The adjoint operator ^* is defined naturally as ^*() : =[ _1^*() ⋯ ;_2^*()⋯ ;⋮⋮⋱⋮;⋯ _s^*();]∈^Ks× Ns, which is a linear map from ^L to ^Ks× Ns. To measure the approximation error of _0 given by , we define δ(,) as the global relative error: δ(,) := - _0_F/_0_F = √(∑_i=1^s _i_i^* - _i0_i0^*_F^2)/d_0 = √(∑_i=1^s δ_i^2 d_i0^2/∑_i=1^s d_i0^2), where δ_i : = δ_i(_i,_i) is the relative error within each component: δ_i(_i,_i) := _i_i^* - _i0_i0^*_F/d_i0. Note that δ and δ_i are functions of (,) and (_i,_i) respectively and in most cases, we simply use δ and δ_i if no possibility of confusion exists. §.§ Convex versus nonconvex approaches As indicated in (<ref>), joint blind deconvolution-demixing can be recast as the task of recovering a rank-s block-diagonal matrix from linear measurements. In general, such a low-rank matrix recovery problem is NP-hard. In order to take advantage of the low-rank property of the ground truth, it is natural to adopt convex relaxation by solving a convenient nuclear norm minimization program, i.e., min∑_i=1^s _i_*,s.t. ∑_i=1^s _i(_i) = . The question of when the solution of (<ref>) yields exact recovery was first answered in our previous work <cit.>. Later, <cit.> improved this result to the near-optimal bound L≥ C_0s(K + N) up to some log-factors; the main theoretical result is informally summarized in the following theorem. Suppose that _i are L× N i.i.d. complex Gaussian matrices and is an L× K partial DFT matrix with ^* = _K. Then solving (<ref>) gives exact recovery if the number of measurements L satisfies L ≥ C_γ s(K+N)log^3 L with probability at least 1 - L^-γ, where C_γ is an absolute scalar only depending linearly on γ.
While the SDP relaxation is definitely effective and has theoretical performance guarantees, the computational costs for solving an SDP already become too expensive for moderate-size problems, let alone large-scale problems. Therefore, we look for a more efficient nonconvex approach such as gradient descent, which hopefully is also reinforced by theory. It seems quite natural to achieve the goal by minimizing the following nonlinear least squares objective function with respect to (, ): F(, ) : = ((,)) - ^2 = ∑_i=1^s_i(_i_i^*) - ^2. In particular, if = , we write F_0(, ) := ∑_i=1^s_i(_i_i^* - _i0_i0^*)^2. As also pointed out in <cit.>, this is a highly nonconvex optimization problem. Many of the commonly used algorithms, such as gradient descent or alternating minimization, may not necessarily yield convergence to the global minimum, so that we cannot always hope to obtain the desired solution. Often, those simple algorithms might get stuck in local minima. §.§ The basin of attraction Motivated by several excellent recent papers on nonconvex optimization for various signal processing and machine learning problems, we propose our two-step algorithm: (i) Compute an initial guess carefully; (ii) Apply gradient descent to the objective function, starting with the carefully chosen initial guess. One difficulty in understanding nonconvex optimization is how to construct the so-called basin of attraction, i.e., if the starting point is inside this basin of attraction, the iterates will always stay inside the region and converge to the global minimum. The construction of the basin of attraction varies for different problems <cit.>. For this problem, similar to <cit.>, the construction follows from the following three observations.
Each of these observations suggests the definition of a certain neighborhood and the basin of attraction is then defined as the intersection of these three neighborhood sets. * Ambiguity of solution: in fact, we can only recover (_i,_i) up to a scalar since (α_i,α^-1_i) and (_i,_i) are both solutions for α≠ 0. From a numerical perspective, we want to avoid the scenario when _i→ 0 and _i→∞ while _i_i is fixed, which potentially leads to numerical instability. To balance both the norm of _i and _i for all 1≤ i≤ s, we define := {{(_i, _i)}_i=1^s: _i≤ 2√(d_i0),_i≤ 2√(d_i0), 1≤ i≤ s }, which is a convex set. * Incoherence: the performance depends on how large/small the incoherence μ^2_h is, where μ_h^2 is defined by μ^2_h : = max_1≤ i≤ sL_i0^2_∞/_i0^2. The idea is that the smaller μ^2_h is, the better the performance. Let us consider an extreme case: if _i0 is highly sparse or spiky, we lose much information on those zero/small entries and cannot hope to get satisfactory recovered signals. In other words, we need the ground truth _i0 to have "spectral flatness", and _i0 should not be highly localized in the Fourier domain. A similar quantity is also introduced in the matrix completion problem <cit.>. The larger μ^2_h is, the more _i0 is aligned with one particular row of . To control the incoherence between _l and _i, we define the second neighborhood, :={{_i}_i=1^s : √(L)_i_∞≤ 4√(d_i0)μ,1≤ i≤ s}, where μ is a parameter and μ≥μ_h. Note that is also a convex set. * Close to the ground truth: we also want to construct an initial guess such that it is close to the ground truth, i.e., :={{(_i, _i)}_i=1^s: δ_i = _i_i^* - _i0_i0^*_F/d_i0≤, 1≤ i≤ s }, where is a predetermined parameter in (0, 1/15]. To ensure δ_i≤, it suffices to ensure δ≤/√(s)κ where κ := max d_i0/min d_i0≥ 1. This is because 1/sκ^2∑_i=1^s δ_i^2 ≤δ^2 ≤^2/sκ^2, which implies max_1≤ i≤ sδ_i ≤. When we say (, )∈𝒩_d, or , it means for all i=1,…,s we have (_i,_i) ∈, or respectively.
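The incoherence μ_h^2 defined above is straightforward to compute, and the two extremes are easy to exhibit numerically. The sketch below (NumPy, illustrative sizes) confirms that a channel whose frequency response is flat attains the minimal value μ_h^2 = 1, while a channel aligned with a single row of the partial DFT matrix attains the maximal value μ_h^2 = K.

```python
import numpy as np

L, K = 128, 8
F = np.fft.fft(np.eye(L)) / np.sqrt(L)     # unitary L x L DFT
B = F[:, :K]                               # low-frequency partial DFT, B^* B = I_K

def mu_h_sq(h):
    # mu_h^2 = L * ||B h||_inf^2 / ||h||^2
    return L * np.max(np.abs(B @ h)) ** 2 / np.linalg.norm(h) ** 2

flat_spec = np.zeros(K); flat_spec[0] = 1.0   # frequency response |B h| is flat
peaky_spec = B[1].conj()                      # aligned with one row of B

assert np.isclose(mu_h_sq(flat_spec), 1.0)    # minimal incoherence ("spectral flatness")
assert np.isclose(mu_h_sq(peaky_spec), K)     # maximal incoherence (localized in frequency)
```

Since ∑_l |b_l^* h|^2 = ‖h‖^2, the value μ_h^2 always lies between 1 and K, matching the two cases checked here.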
In particular, (_0, _0) ∈ where _0 and _0 are defined in (<ref>). §.§ Objective function and Wirtinger derivative To implement the first two observations, we introduce the regularizer G(, ), defined as the sum of s components G(, ):= ∑_i=1^s G_i(_i,_i). For each component G_i(_i,_i), we let ρ≥ d^2 + 2^2, 0.9 d_0 ≤ d ≤ 1.1d_0, 0.9d_i0≤ d_i ≤ 1.1d_i0 for all 1≤ i≤ s and G_i:=ρ[ G_0(_i^2/2d_i) + G_0(_i^2/2d_i)_ + ∑_l=1^LG_0(L |_l^*_i|^2/8d_iμ^2 )_], where G_0(z) = max{z-1, 0}^2. Here both d and {d_i}_i=1^s are data-driven and well approximated by our spectral initialization procedure; and μ^2 is a tuning parameter which could be estimated if we assume a specific statistical model for the channel (for example, in the widely used Rayleigh fading model, the channel coefficients are assumed to be complex Gaussian). The idea behind G_i is quite straightforward though the formulation is complicated. For each G_i in (<ref>), the first two terms try to force the iterates to lie in and the third term tries to encourage the iterates to lie in . What about the neighborhood ? A proper choice of the initialization followed by gradient descent which keeps the objective function decreasing will ensure that the iterates stay in . Finally, we consider the objective function as the sum of the nonlinear least squares objective function F(,) in (<ref>) and the regularizer G(,), (, ) := F(,) + G(, ). Note that the input of the function (,) consists of complex variables but the output is real-valued. As a result, the following simple relations hold /_i = /_i, /_i = /_i. Similar properties also apply to both F(,) and G(,). Therefore, to minimize this function, it suffices to consider only the gradient of with respect to _i and _i, which is also called the Wirtinger derivative <cit.>. The Wirtinger derivatives of F(,) and G(,) w.r.t.
_i and _i can be easily computed as follows ∇ F__i= _i^*(()- )_i = _i^*((-_0)- )_i,∇ F__i= (_i^*(() - ))^*_i = (_i^*((-_0) - ))^*_i,∇ G__i= ρ/2d_i[G'_0(_i^2/2d_i) _i+ L/4μ^2∑_l=1^L G'_0(L|_l^*_i|^2/8d_iμ^2) _l_l^*_i ],∇ G__i= ρ/2d_i G'_0( _i^2/2d_i) _i,where()= ∑_i=1^s _i(_i_i^*) and ^* is defined in (<ref>). In short, we denote∇_ : = ∇ F_ + ∇ G_, ∇ F_ : =[ ∇ F__1;⋮; ∇ F__s ], ∇ G_ : =[ ∇ G__1;⋮; ∇ G__s ].Similar definitions hold for ∇_,∇ F_ and G_. It is easy to see that ∇ F_ = ^*(() - ) and ∇ F_ = (^*(() - ))^*. § ALGORITHM AND THEORY§.§ Two-step algorithmAs mentioned before, the first step is to find a good initial guess (^(0), ^(0))∈^Ks×^Ns such that it is inside the basin of attraction.The initialization follows from this key fact: (_i^*()) = (_i^*(∑_j=1^s_j(_j0_j0^* )+)) = _i0_i0^*,where we use ^* = ∑_l=1^L_l_l^* = _K, (_il_il^*) = _N and(_i^*_i(_i0_i0^*)) = ∑_l=1^L _l_l^*_i0_i0^* (_il_il^*) = _i0_i0^*,(_j^*_i(_i0_i0^*))= ∑_l=1^L _l_l^*_i0_i0^* (_il_jl^*) = ,∀ j≠ i.Therefore, it is natural to extract the leading singular value and associated left and right singular vectors from each _i^*() and use them as (a hopefully good) approximation to (d_i0, _i0, _i0). This idea leads to Algorithm <ref>, the theoretic guarantees of which are given in Section <ref>. The second step of the algorithm is just to apply gradient descent towith the initial guess {(^(0)_i, ^(0)_i, d_i)}_i=1^s or (^(0), ^(0),{ d_i}_i=1^s), where ^(0) stems from stacking all ^(0)_i into one long vector[It is clear that instead of gradient descent one could also use a second-order method to achieve faster convergence at the tradeoff of increased computational cost per iteration. 
The theoretical convergence analysis for a second-order method will require a very different approach from the one developed in this paper.].For Algorithm <ref>, we can rewrite each iteration into^(t) = ^(t-1) - η∇_(^(t-1), ^(t-1)), ^(t) = ^(t-1) - η∇_(^(t-1), ^(t-1)),where ∇_ and ∇_ are in (<ref>), and ^(t) : =[ _1^(t);⋮; _s^(t) ], ^(t) : =[ _1^(t);⋮; _s^(t) ].§.§ Main resultsOur main findings are summarized as follows: Theorem <ref> shows that the initial guess given by Algorithm <ref> indeed belongs to the basin of attraction. Moreover, d_i also serves as a good approximation of d_i0 for each i. Theorem <ref> demonstrates that the regularized Wirtinger gradient descent will guarantee the linear convergence of the iterates and the recovery is exact in the noisefree case and stable in the presence of noise. The initialization obtained via Algorithm <ref> satisfies(^(0), ^(0)) ∈1/√(3)⋂1/√(3)_μ⋂_2/5√(s)κand0.9d_i0≤ d_i≤ 1.1d_i0 , 0.9d_0 ≤ d≤ 1.1d_0,holds with probability at least 1 - L^-γ+1 if the number of measurements satisfiesL ≥ C_γ+log(s)(μ_h^2 + σ^2)s^2 κ^4max{K,N}log^2 L/^2.Hereis any predetermined constant in (0, 1/15], and C_γ is a constant only linearly depending on γ with γ≥ 1.Starting with the initial value ^(0):= (^(0), ^(0)) satisfying (<ref>), the Algorithm <ref> creates a sequence of iterates (^(t), ^(t)) which converges to the global minimum linearly, (^(t), ^(t)) - (_0, _0) _F ≤ d_0/√(2sκ^2)(1 - ηω)^t/2 + 60√(s)^*()with probability at least 1 - L^-γ+1 where ηω = 𝒪((sκ d_0(K+N)log^2L)^-1) and ^*()≤ C_0 σd_0√(γ s(K + N)(log^2L)/L) if the number of measurements L satisfiesL ≥ C_γ+log (s)(μ^2 + σ^2)s^2 κ^4max{K,N}log^2 L/^2. Our previous work <cit.> shows thatthe convex approach via semidefinite programming (see (<ref>)) requires L ≥ C_0s^2(K + μ^2_h N)log^3(L) to ensure exact recovery. Later, <cit.> improves this result to the near-optimal bound L≥ C_0s(K + μ^2_h N) up to some log-factors. 
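The spectral initialization rests on the key fact derived above: A_i^*(y) equals h_i0 x_i0^* in expectation, so for large L its leading singular vectors approximate the ground truth directions. The sketch below (NumPy; s = 1, noiseless, illustrative sizes and a generous alignment threshold) illustrates only this core step, not the full initialization procedure.

```python
import numpy as np

rng = np.random.default_rng(3)
L, K, N = 5000, 4, 4
h0 = rng.standard_normal(K) + 1j * rng.standard_normal(K)
x0 = rng.standard_normal(N) + 1j * rng.standard_normal(N)

B = np.fft.fft(np.eye(L))[:, :K] / np.sqrt(L)   # b_l^* = l-th row of B
A = (rng.standard_normal((L, N)) + 1j * rng.standard_normal((L, N))) / np.sqrt(2)

# y_l = b_l^* h0 x0^* a_l  (noiseless measurements)
y = (B @ h0) * (A @ x0.conj())

# M = A^*(y) = sum_l y_l b_l a_l^*; in expectation M = h0 x0^*
M = B.conj().T @ (y[:, None] * A.conj())

u, sv, vh = np.linalg.svd(M)
h_init, x_init = u[:, 0], vh[0].conj()

# Top singular vectors align with the ground truth directions
assert abs(np.vdot(h_init, h0)) / np.linalg.norm(h0) > 0.9
assert abs(np.vdot(x_init, x0)) / np.linalg.norm(x0) > 0.9
```

With the additional scaling by the leading singular value (as in the actual algorithm), this yields the initial guess (h^(0), x^(0)).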
The difference between nonconvex and convex methods lies in the appearance of the condition number κ in (<ref>). This is not just an artifact of the proof: empirically we also observe that the value of κ affects the convergence rate of our nonconvex algorithm, see Figure <ref>. Our theory suggests s^2-dependence for the number of measurements L, although numerically L in fact depends on s linearly, as shown in Section <ref>. The reason for the s^2-dependence will be addressed in detail in Section <ref>. In the theoretical analysis, we assume that _i (or equivalently _i) is a Gaussian random matrix. Numerical simulations suggest that this assumption is clearly not necessary. For example, _i may be chosen to be a Hadamard-type matrix which is more appropriate and favorable for communications. If = , (<ref>) shows that (^(t), ^(t)) converges to the ground truth at a linear rate. On the other hand, if noise exists, (^(t), ^(t)) is guaranteed to converge to a point within a small neighborhood of (_0,_0). More importantly, if the number of measurements L gets larger, ^*() decays at the rate of 𝒪(L^-1/2). § NUMERICAL SIMULATIONS In this section we present a range of numerical simulations to illustrate and complement different aspects of our theoretical framework. We will empirically analyze the number of measurements needed for perfect joint deconvolution/demixing to see how this compares to our theoretical bounds. We will also study the robustness to noisy data. In our simulations we use Gaussian encoding matrices, as in our theorems. But we also try more realistic structured encoding matrices that are more reminiscent of what one might come across in wireless communications. While Theorem <ref> says that the number of measurements L depends quadratically on the number of sources s, numerical simulations suggest near-optimal performance.
Figure <ref> demonstrates that L actually depends linearly on s, i.e., the boundary between success (white) and failure (black) is approximately a linear function of s. In the experiment, K = N = 50 are fixed, all _i are complex Gaussians and all (_i,_i) are standard complex Gaussian vectors. For each pair of (L,s), 25 experiments are performed and we treat the recovery as a success if - _0_F/_0_F≤ 10^-3. For our algorithm, we use backtracking to determine the stepsize and the iteration stops either if ((^(t+1), ^(t+1)) - (^(t), ^(t))) < 10^-6 or if the number of iterations reaches 500. The backtracking is based on the Armijo-Goldstein condition <cit.>. The initial stepsize is chosen to be η = 1/(K+N). If (^(t) - η∇(^(t))) > (^(t)), we just divide η by two and use a smaller stepsize. We see from Figure <ref> that the number of measurements for the proposed algorithm to succeed not only seems to depend linearly on the number of sensors, but it is actually rather close to the information-theoretic limit s(K+N). Indeed, the green dashed line in Figure <ref>, which represents the empirical boundary for the phase transition between success and failure, corresponds to a line with slope about 3/2 s(K+N). It is interesting to compare this empirical performance to the sharp theoretical phase transition bounds one would obtain via convex optimization <cit.>. Considering the convex approach based on lifting in <cit.>, we can adapt the theoretical framework in <cit.> to the blind deconvolution/demixing setting, but with one modification. The bounds in <cit.> rely on Gaussian widths of tangent cones related to the measurement matrices _i.
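The stepsize rule used in these experiments (halve η whenever the objective would increase) can be sketched as follows. The toy ill-conditioned quadratic below is purely illustrative and stands in for the actual objective; it is chosen so that several halvings are actually triggered.

```python
import numpy as np

def backtracking_step(f, grad, z, eta0):
    """Halve the stepsize until the objective does not increase, then step."""
    eta, g = eta0, grad(z)
    while f(z - eta * g) > f(z):
        eta /= 2.0
    return z - eta * g

# Toy ill-conditioned quadratic f(z) = z1^2 + 10 z2^2 (illustrative only)
D = np.array([1.0, 10.0])
f = lambda z: np.sum(D * z ** 2)
grad = lambda z: 2.0 * D * z

z0 = np.array([1.0, 1.0])
z1 = backtracking_step(f, grad, z0, eta0=1.0)
assert f(z1) < f(z0)
```

A production implementation would additionally enforce the Armijo-Goldstein sufficient-decrease condition rather than mere monotonicity.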
Since simple analytic formulas for these expressions seem to be out of reach for the structured rank-one measurement matrices used in our paper, we instead compute the bounds for full-rank Gaussian random matrices, which yields a sharp bound of about 3s(K+N) (the corresponding bounds for rank-one sensing matrices will likely have a constant larger than 3). Note that these sharp theoretical bounds predict quite accurately the empirical behavior of convex methods. Thus our empirical bound for using a non-convex method compares rather favorably with that of the convex approach. Similar conclusions can be drawn from Figure <ref>; there all _i are in the form of _i = _i where is the unitary L× L DFT matrix, all _i are independent diagonal binary ± 1 matrices and is an L× N fixed partial deterministic Hadamard matrix. The purpose of using _i is to enhance the incoherence between the channels so that our algorithm is able to tell apart each individual signal and channel. As before we assume Gaussian channels, i.e., _i∼(, _K). Therefore, our approach works not only for Gaussian encoding matrices _i but also for matrices that are relevant to real-world applications, although no satisfactory theory has been derived yet for that case. Moreover, due to the structure of _i and , fast transform algorithms are available, potentially allowing for real-time deployment. Figure <ref> shows the robustness of our algorithm under different levels of noise. We also run 25 samples for each level of SNR and different L and then compute the average relative error. It is easily seen that the relative error scales linearly with the SNR and one unit of increase in SNR (in dB) results in one unit of decrease in the relative error. Theorem <ref> suggests that the performance and convergence rate actually depend on the condition number of _0 = (_0,_0), i.e., on κ = max d_i0/min d_i0 where d_i0 = _i0_i0.
Next we demonstrate that this dependence on the condition number is not an artifact of the proof, but is indeed also observed empirically. In this experiment, we let s=2 and set for the first component d_1,0 = 1 and for the second one d_2,0 = κ for κ∈{1,2,5}. Here, κ = 1 means that the received signals of both sensors have equal power, whereas κ=5 means that the signal received from the second sensor is considerably stronger. The initial stepsize is chosen as η=1, followed by the backtracking scheme. Figure <ref> shows how the relative error decays with respect to the number of iterations t under different condition numbers κ and values of L. The larger κ is, the slower the convergence rate is, as we see from Figure <ref>. This may be due to two reasons: our spectral initialization may not be able to give a good initial guess for those weak components; moreover, during the gradient descent procedure, the gradient directions for the weak components could be totally dominated/polluted by the strong components. Currently, we still have no effective way to deal with this issue of slow convergence when κ is not small, and we leave this topic for future investigation. § CONVERGENCE ANALYSIS Our convergence analysis relies on the following four conditions, the first three of which are local properties. We will also briefly discuss how they contribute to the proof of our main theorem. Note that our previous work <cit.> on blind deconvolution is actually a special case (s=1) of (<ref>). The proof of Theorem <ref> follows in part the main ideas in <cit.>. Readers may find that the technical parts of <cit.> and this manuscript share many similarities. However, there are also important differences. After all, we are now dealing with a more complicated problem where the ground truth matrix _0 and measurement matrices are both rank-s block-diagonal matrices, as shown in (<ref>), instead of rank-1 matrices in <cit.>.
The key is to understand the properties of the linear operator applied to different types of block-diagonal matrices. Therefore, many technical details are much more involved, while on the other hand some of the results in <cit.> can be used directly. During the presentation, we will clearly point out both the similarities to and differences from <cit.>. §.§ Four key conditions Local regularity condition: Let : = (, )∈^s(K+N) and ∇() := [ ∇_(); ∇_() ]∈^s(K+N), then ∇()^2 ≥ω [() - c]_+ for ∈ where ω = d_0/7000 and c = ^2 + 2000s^*()^2. We will prove Condition <ref> in Section <ref>. Condition <ref> states that () = 0 if ∇() = 0 and = 0, i.e., all the stationary points inside the basin of attraction are global minima. Local smoothness condition: Let = (, ) and = (, ) and there holds ( + ) - ()≤ C_L for + and inside , where C_L ≈𝒪(d_0sκ(1 + σ^2)(K + N)log^2 L ) is the Lipschitz constant of over . The convergence rate is governed by C_L. The proof of Condition <ref> can be found in Section <ref>. Local restricted isometry property: Denote = (, ) and _0 = (_0, _0). There holds 2/3 - _0_F^2 ≤( - _0) ^2 ≤3/2 - _0_F^2 uniformly for all (, )∈. Condition <ref> will be proven in Section <ref>. It says that the convergence of the objective function implies the convergence of the iterates. Although Condition <ref> is seemingly the same as the one in our previous work <cit.>, it is indeed very different. Recall that is a linear operator acting on block-diagonal matrices and its output is the sum of s different components involving _i. Therefore, the proof of Condition <ref> heavily depends on the inter-user incoherence whereas this notion of incoherence is not needed at all for the single-user scenario. At the beginning of Section <ref>, we discuss the choice of _i (or _i). In order to distinguish one user from another, it is essential to use sufficiently different[Suppose all _i are the same, there is no hope to recover all pairs of {(_i,_i)}_i=1^s simultaneously.] encoding matrices _i (or _i).
Here the independence and Gaussianity of all _i (or _i) guarantee that _T_i_i^*_j_T_j is sufficiently small for all i≠ j where T_i is defined in (<ref>). It is a key element to ensure the validity of Condition <ref>, which is also an important component to prove Condition <ref>. On the other hand, due to the recent progress on this joint deconvolution and demixing problem, one is also able to prove a local restricted isometry property with tools such as bounding the suprema of chaos processes <cit.> by assuming that the {_i}_i=1^s are Gaussian matrices. Robustness condition: Let ≤1/15 be a predetermined constant. We have ^*() = max_1≤ i≤ s_i^*()≤ d_0/10√(2)s κ, where ∼𝒞𝒩(0, σ^2d_0^2/L) if L ≥ C_γκ^2s^2(K + N)/^2. We will prove Condition <ref> in Section <ref>. We now extract one useful result based on Conditions <ref> and <ref>. From these two conditions, we are able to produce a good approximation of F(, ) for all (, )∈ in terms of δ in (<ref>). For (, )∈, the following inequality holds 2/3δ^2d_0^2 -δ d_0^2/5√(s)κ + ^2 ≤ F(, ) ≤3/2δ^2d_0^2 + δ d_0^2/5√(s)κ + ^2. Note that (<ref>) simply follows from F(, ) = ( - _0) _F^2 - 2(- _0, ^*()) + ^2. Note that (<ref>) implies 2/3δ^2d_0^2≤(-_0)_F^2≤3/2δ^2d_0^2. Thus it suffices to estimate the cross-term, |(- _0, ^*())| ≤^*() - _0_* = ^*()∑_i=1^s_i_i^* - _i0_i0^*_* ≤√(2)^*()∑_i=1^s_i_i^* - _i0_i0^*_F ≤√(2s)^*() - _0_F ≤δ d_0^2/10√(s)κ, where ·_* and · are a pair of dual norms and ^*() comes from (<ref>). §.§ Outline of the convergence analysis For the ease of proof, we introduce another neighborhood := { (,) : (, ) ≤^2 d_0^2/3sκ^2 + ^2}. Moreover, another reason to consider is based on the fact that gradient descent only allows one to make the objective function decrease if the step size is chosen appropriately. In other words, all the iterates ^(t) generated by gradient descent are inside as long as ^(0)∈.
On the other hand, it is crucial to note that the decrease of the objective function does not necessarily imply the decrease of the relative error of the iterates. Therefore, we want to construct an initial guess in ∩ so that ^(0) is sufficiently close to the ground truth and then analyze the behavior of ^(t). In the rest of this section, we basically try to prove the following relation: 1/√(3)∩1/√(3)_μ∩_2/5√(s)κ_Initial guess⊂∩_{^(t)}_t≥ 0 in ∩⊂_Key conditions hold over . Now we give a more detailed explanation of the relation above, which constitutes the main structure of the proof: * We will show 1/√(3)∩1/√(3)_μ∩_2/5√(s)κ⊂∩ in the proof of Theorem <ref> in Section <ref>, which is quite straightforward. * Lemma <ref> explains why it holds that ∩⊂ and where the s^2-bottleneck comes from. * Lemma <ref> implicitly shows that the iterates ^(t) will remain in ∩ if the initial guess ^(0) is inside ∩ and (^(t)) is monotonically decreasing (simply by induction). Lemma <ref> makes this observation explicit by showing that ^(t)∈∩ implies ^(t+1) : = ^(t) - η∇(^(t))∈∩ if the stepsize η obeys η≤1/C_L. Moreover, Lemma <ref> guarantees sufficient decrease of (^(t)) in each iteration, which paves the way toward the proof of linear convergence of (^(t)) and thus ^(t). Remember that and are both convex sets, and the purpose of introducing the regularizers G_i(_i, _i) is to approximately project the iterates onto ∩. Moreover, we hope that once the iterates are inside and inside a sublevel set , they will never escape from ∩. Those ideas are fully reflected in the following lemma. Assume 0.9d_i0≤ d_i≤ 1.1d_i0 and 0.9d_0≤ d≤ 1.1d_0. There holds ⊂∩; moreover, under Conditions <ref> and <ref>, we have ∩⊂∩∩_9/10ϵ. If (h, x) ∉∩, by the definition of G in (<ref>), at least one component in G exceeds ρ G_0(2d_i0/d_i).
We have (, )≥ ρ G_0(2d_i0/d_i) ≥(d^2 + 2^2) ( 2d_i0/d_i - 1)^2 ≥(2/1.1 - 1)^2 (d^2 + 2^2) ≥ 1/2 d_0^2 + ^2 > ^2 d_0^2/3s κ^2 + ^2, where ρ≥ d^2 + 2^2, 0.9d_0 ≤ d ≤ 1.1d_0 and 0.9d_i0≤ d_i≤ 1.1d_i0. This implies (, ) ∉ and hence ⊂∩. Note that (, )∈ if (, ) ∈∩. Applying (<ref>) gives 2/3δ^2d_0^2 -δ d_0^2/5√(s)κ + ^2 ≤ F(, )≤(, )≤^2 d_0^2/3sκ^2 +^2, which implies that δ≤9/10/√(s)κ. By definition of δ in (<ref>), there holds 81^2/100sκ^2≥δ^2 = ∑_i=1^s δ_i^2d_i0^2/∑_i=1^s d_i0^2≥∑_i=1^s δ_i^2/sκ^2≥1/sκ^2max_1≤ i≤ sδ_i^2, which gives δ_i ≤9/10 and (, )∈_9/10. The s^2-bottleneck comes from (<ref>). If δ≤ is small, we cannot guarantee that each δ_i is also smaller than . Just consider the simplest case when all d_i0 are the same: then d_0^2 = ∑_i=1^s d_i0^2 = s d_i0^2 and there holds ^2≥δ^2 = 1/s∑_i=1^s δ_i^2. Obviously, we cannot conclude that maxδ_i ≤ but only say that δ_i ≤√(s). This is why we require δ = O(/√(s)) to ensure δ_i ≤, which gives the s^2-dependence in L. Denote z_1 = (_1, _1) and z_2 = (_2, _2). Let z(λ):=(1-λ)z_1 + λz_2. If z_1 ∈ and z(λ) ∈ for all λ∈ [0, 1], we have z_2 ∈. Note that for _1∈∩, we have _1∈∩∩_9/10 which follows from the second part of Lemma <ref>. Now we prove _2∈ by contradiction. Let us suppose that _2 ∉ and _1 ∈. There exists z(λ_0):=(h(λ_0), x(λ_0)) ∈ for some λ_0 ∈ [0, 1] such that max_1≤ i≤ s_i_i^* - _i0_i0^*_F/d_i0 = ϵ. Therefore, z(λ_0) ∈∩ and Lemma <ref> implies max_1≤ i≤ s_i_i^* - _i0_i0^*_F/d_i0≤9/10ϵ, which contradicts max_1≤ i≤ s_i_i^* - _i0_i0^*_F/d_i0 = ϵ. Let the stepsize η≤1/C_L, ^(t) : = (^(t), ^(t))∈^s(K + N) and C_L be the Lipschitz constant of ∇() over in (<ref>). If ^(t)∈∩, we have ^(t+1)∈∩ and (^(t+1)) ≤(^(t)) - η∇(^(t))^2, where ^(t+1) = ^(t) - η∇(^(t)). This lemma tells us that once ^(t)∈∩, the next iterate ^(t+1) = ^(t) - η∇(^(t)) is also inside ∩ as long as the stepsize η≤1/C_L. In other words, ∩ is in fact a stronger version of the basin of attraction.
Moreover, the objective function will decay sufficiently in each step as long as we can control the lower bound of ∇, which is guaranteed by the Local Regularity Condition <ref>. Let ϕ(τ) := (^(t) - τ∇(^(t))), ϕ(0) = (^(t)) and consider the following quantity: τ_max := max{μ: ϕ(τ) ≤(^(t)), 0≤τ≤μ}, where τ_max is the largest stepsize such that the objective function () evaluated at any point over the whole line segment {^(t) -τ∇(^(t)), 0≤τ≤τ_max} is not greater than (^(t)). Now we will show τ_max≥1/C_L. Obviously, if ∇(^(t)) = 0, it holds automatically. Consider ∇(^(t))≠ 0 and assume τ_max < 1/C_L. First note that dϕ(τ)/dτ < 0 at τ = 0, which implies τ_max > 0. By the definition of τ_max, there holds ϕ(τ_max) = ϕ(0) since ϕ(τ) is a continuous function w.r.t. τ. Lemma <ref> implies {^(t) - τ∇(^(t)), 0≤τ≤τ_max}⊆∩. Now we apply Lemma <ref>, the modified descent lemma, and obtain (^(t) - τ_max∇(^(t))) ≤(^(t)) - (2τ_max - C_Lτ_max^2)(^(t))^2 ≤(^(t)) - τ_max(^(t))^2, where C_Lτ_max≤ 1. In other words, ϕ(τ_max) ≤(^(t) - τ_max∇(^(t))) < (^(t)) = ϕ(0), which contradicts ϕ(τ_max) = ϕ(0). Therefore, we conclude that τ_max≥1/C_L. For any η≤1/C_L, Lemma <ref> implies {^(t) - τ∇(^(t)), 0≤τ≤η}⊆∩ and applying Lemma <ref> gives (^(t) - η∇(^(t))) ≤(^(t)) - (2η - C_Lη^2)(^(t))^2 ≤(^(t)) - η(^(t))^2. §.§ Proof of Theorem <ref> Combining all the considerations above, we now prove Theorem <ref> to conclude this section. The proof consists of three parts: Part I: Proof of ^(0) : = (^(0), ^(0)) ∈∩. From the assumption of Theorem <ref>, ^(0)∈1/√(3)⋂1/√(3)∩_2/5√(s)κ. First we show G(^(0), ^(0)) = 0: for 1≤ i≤ s and the definition of and , ^(0)_i^2/2d_i≤2d_i0/3d_i < 1, L|_l^* ^(0)_i|^2/8d_iμ^2≤L/8d_iμ^2·16d_i0μ^2/3L≤2d_i0/3d_i < 1, where ^(0)_i≤2√(d_i0)/√(3), √(L)^(0)_i_∞≤4 √(d_i0)μ/√(3) and 9/10d_i0≤ d_i≤11/10d_i0. Therefore G_0( ^(0)_i^2/2d_i) = G_0( ^(0)_i^2/2d_i) = G_0(L|_l^*_i^(0)|^2/8d_iμ^2) = 0 for all 1≤ l≤ L and G(^(0), ^(0)) = 0. For ^(0) = (^(0), ^(0))∈_2/5√(s)κ, we have δ(^(0)) := √(∑_i=1^s δ_i^2d_i0^2 )/d_0≤2/5√(s)κ.
By (<ref>), there holds δ(^(0)) ≤2/5√(s)κ and G(^(0), ^(0)) = 0,(^(0), ^(0)) = F(^(0), ^(0)) ≤^2 + 3/2δ^2(^(0))d_0^2 + δ(^(0)) d_0^2/5√(s)κ≤^2 + ^2 d_0^2/3sκ^2and hence ^(0) = (^(0), ^(0))∈⋂. Part II: The linear convergence of the objective function (^(t)).Denote ^(t) : = (^(t), ^(t)). Note that ^(0)∈∩,Lemma <ref> implies ^(t)∈∩ for all t≥ 0 by induction ifη≤1/C_L. Moreover, combining Condition <ref> with Lemma <ref> leads to (^(t )) ≤(^(t-1)) - ηω[(^(t-1))- c ]_+,t≥ 1with c = ^2 + a^*()^2 and a = 2000s.Therefore, by induction, we have[ (^(t)) - c]_+ ≤ (1 - ηω)[(^(t-1))- c ]_+ ≤(1 - ηω)^t [ (^(0)) - c]_+ ≤^2 d_0^2/3sκ^2 (1 - ηω)^twhere (^(0)) ≤^2d_0^2/3sκ^2 + ^2 and [ (^(0)) - c ]_+ ≤[ 1/3sκ^2^2 d_0^2 - a^*()^2 ]_+ ≤^2 d_0^2/3sκ^2. Now we conclude that[ (^(t)) - c]_+ converges to 0 linearly.Part III: The linear convergence of the iterates (^(t), ^(t)). Denote δ(^(t)) : = (^(t), ^(t)) - (_0,_0)_F/d_0.Note that ^(t)∈∩⊆ and over , there holds F_0(^(t)) ≥2/3δ^2(^(t))d_0^2 which follows from Local RIP Condition in (<ref>) and F_0(^(t)) defined in (<ref>). Moreover(^(t)) - ^2≥F_0(^(t)) - 2(^*(), (^(0), ^(0)) - (_0, _0) )≥ 2/3δ^2(^(t))d_0^2 - 2√(2s)^*()δ(^(t))d_0where G(^(t)) ≥ 0 and the second inequality follows from (<ref>). There holds2/3δ^2(^(t))d_0^2 - 2√(2s)^*()δ(^(t))d_0- a^*()^2 ≤[ (^(t)) - c ]_+ ≤^2 d_0^2/3sκ^2(1 - ηω)^tand equivalently, |δ(^(t))d_0 - 3√(2)/2^*()|^2 ≤^2 d_0^2/2sκ^2 (1 - ηω)^t+ (3/2a + 9/2)^*()^2.Solving the inequality above for δ(^(t)), we have δ(^(t)) d_0 ≤d_0/√(2sκ^2)(1 - ηω)^t/2 +(3√(2)/2 + √(3/2a + 9/2))^*()≤d_0/√(2sκ^2)(1 - ηω)^t/2 + 60√(s)^*()where a = 2000s. Let d^(t) : = √(∑_i=1^s _i^(t)^2_i^(t)^2 ) for t∈_≥ 0. By (<ref>) and triangle inequality, we immediately obtain |d^(t) - d_0| ≤ d_0/√(2sκ^2)(1 - ηω)^t/2 + 60√(s)^*(). § PROOF OF THE FOUR CONDITIONSThis section is devoted to proving the four key conditions introduced in Section <ref>. 
The local smoothness condition and the robustness condition are relatively straightforward to handle. The more difficult part is to establish the local regularity condition and the local isometry property. The key to resolving them is to understand how the vector-valued linear operatorin (<ref>) behaves on block-diagonal matrices, such as (,), (_0,_0) and (,) - (_0,_0). In particular, when s=1, all those matrices become rank-1 matrices, which have been discussed in detail in our previous work <cit.>. First of all, we define the linear subspace T_i⊂^K× N along with its orthogonal complement for 1≤ i≤ s asT_i :={_i∈^K× N : _i = _i0_i^* + _i_i0^*, _i∈^K,_i∈^N },_i :={(_K - _i0_i0^*/d_i0) _i (_N - _i0_i0^*/d_i0) :_i∈^K× N}where _i0 = _i0 = √(d_i0). In particular, _i0_i0^* ∈ T_i for all 1≤ i≤ s. The proof also requires us to consider block-diagonal matrices whose i-th block belongs to T_i (or _i). Let = (_1,⋯,_s)∈^Ks× Ns be a block-diagonal matrix and say ∈ T ifT := {blkdiag({_i}_i=1^s) | _i∈ T_i }and ∈ if:= {blkdiag({_i}_i=1^s) | _i∈_i }where both T andare subsets of ^Ks× Ns and (_0,_0)∈ T. Now we take a closer look at a special case of block-diagonal matrices, namely (, ), and compute its projections onto T and ; it suffices to consider _T_i(_i_i^*) and __i(_i_i^*). For each block _i_i^* and 1≤ i≤ s, there are unique orthogonal decompositions_i := α_i1_i0 + _i,:= α_i2_i0 + _i,where _i0⊥_i and _i0⊥_i. It is important to note that α_i1 = α_i1(_i) = _i0, _i/d_i0 and α_i2 = α_i2(_i) =_i0, _i/d_i0and thus α_i1 and α_i2 are functions of _i and _i respectively. Immediately, we have the following matrix orthogonal decomposition of _i_i^* onto T_i and _i,_i_i^* - _i0_i0^* = (α_i1α_i2 - 1)_i0_i0^* + α_i2_i _i0^* + α_i1_i0_i^*_belong toT_i+ _i _i^*_belongs to _iwhere the first three components are in T_i while _i_i^*∈ T^_i.
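The four-term orthogonal decomposition above can be verified numerically. The sketch below (real-valued for simplicity; dimensions and variable names are illustrative) checks that h x^T − h_0 x_0^T splits into three components lying in T_i plus a residual that is Frobenius-orthogonal to each of them:

```python
import numpy as np

rng = np.random.default_rng(1)
K, N, d0 = 6, 8, 1.0

# Ground truth with ||h0|| = ||x0|| = sqrt(d0), plus a small perturbation.
h0 = rng.standard_normal(K); h0 *= np.sqrt(d0) / np.linalg.norm(h0)
x0 = rng.standard_normal(N); x0 *= np.sqrt(d0) / np.linalg.norm(x0)
h = h0 + 0.1 * rng.standard_normal(K)
x = x0 + 0.1 * rng.standard_normal(N)

# Orthogonal decompositions h = a1*h0 + dh and x = a2*x0 + dx.
a1 = (h0 @ h) / d0
a2 = (x0 @ x) / d0
dh = h - a1 * h0        # dh is orthogonal to h0
dx = x - a2 * x0        # dx is orthogonal to x0

# Four-term decomposition of h x^T - h0 x0^T; the first three terms lie in T_i.
D = np.outer(h, x) - np.outer(h0, x0)
T_part = ((a1 * a2 - 1) * np.outer(h0, x0)
          + a2 * np.outer(dh, x0)
          + a1 * np.outer(h0, dx))
assert np.allclose(D, T_part + np.outer(dh, dx))

# The residual dh dx^T is Frobenius-orthogonal to each T_i component.
assert abs(np.sum(np.outer(dh, dx) * np.outer(h0, x0))) < 1e-10
assert abs(np.sum(np.outer(dh, dx) * np.outer(dh, x0))) < 1e-10
assert abs(np.sum(np.outer(dh, dx) * np.outer(h0, dx))) < 1e-10
```

The orthogonality follows because each Frobenius inner product factors into (h_0 · dh) or (x_0 · dx), both of which vanish by construction.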
§.§ Key lemmata

From the decomposition in (<ref>) and (<ref>), we want to analyze how _i, _i, α_i1 and α_i2 depend on δ_i = _i_i^* - _i0_i0^*_F/d_i0 if δ_i < 1. The following lemma answers this question; it can be viewed as singular value/vector perturbation theory <cit.> specialized to rank-1 matrices. From the lemma below, we can see that if _i_i^* is close to _i0_i0^*, then __i(_i_i^*) is in fact very small (of order O(δ_i^2 d_i0)). (Lemma 5.9 in <cit.>) Recall that _i0 = _i0 = √(d_i0). If δ_i := _i_i^* - _i0_i0^*_F/d_i0<1, we have the following useful bounds|α_i1|≤_i/_i0,|α_i1α_i2 - 1|≤δ_i,and_i≤δ_i/1 - δ_i_i,_i≤δ_i/1 - δ_i_i,_i_i≤δ_i^2/2(1 - δ_i) d_i0.Moreover, if _i≤ 2√(d_i0) and √(L)B_i_∞≤ 4μ√(d_i0), i.e., _i∈⋂, we have √(L)B_i _i_∞≤ 6 μ√(d_i0). We now turn to several results related to the linear operator . (Operator norm of ). For the operator defined in (<ref>), there holds≤√(s(Nlog(NL/2) + (γ+log s)log L))with probability at least 1 - L^-γ. Note that _i(_i) : = {_l^*_i_il}_l=1^L in (<ref>). Lemma 1 in <cit.> implies_i≤√(Nlog(NL/2) + γ'log L)with probability at least 1 - L^-γ'. Taking the union bound over 1≤ i≤ s gives max_i≤√(Nlog(NL/2) + (γ+ log s)log L)with probability at least 1 - sL^-γ-log s≥ 1 - L^-γ. For the operator defined in (<ref>), applying the triangle inequality gives()= ∑_i=1^s _i(_i)≤∑_i=1^s _i_i_F≤max_1≤ i≤ s_i√(s ∑_i=1^s _i_F^2) = √(s)max_1≤ i≤ s_i_Fwhere = (_1,⋯, _s)∈^Ks× Ns. Therefore,≤√(s)max_1≤ i≤ s_i≤√( s(Nlog(NL/2) + (γ+log s)log L))with probability at least 1 - L^-γ. (Restricted isometry property foron T). The linear operatorrestricted on T is well-conditioned, i.e., _T^*_T - _T≤1/10where _T is the projection operator from ^Ks× Ns onto T, given L ≥ C_γs^2 max{K, μ_h^2 N}log^2L, with probability at least 1 - L^-γ.
Here _T and _T^* are defined as_T() = ∑_i=1^s _i(_T_i(_i)), _T^*() = ( _T_1(_1^*()), ⋯, _T_s(_s^*()) )respectively whereis a block-diagonal matrix and ∈^L.As shown in the remark above, the proof of Lemma <ref> depends on the properties of both _T_i_i^*_i_T_i and _T_i_i^*_j_T_j for i≠ j. Fortunately, we have already proven related results in <cit.> which are written as follows:There hold_T_i_i^*_j_T_j≤1/10s,∀ i≠ j; _T_i_i^*_i_T_i - _T_i≤1/10s, ∀ 1≤ i≤ swith probability at least 1 - L^-γ+1 if L≥ C_γs^2max{K, μ^2_hN}log^2Llog(s+1). Note that _T_i_i^*_j_T_j≤1/10s holds because of independence between each individual random Gaussian matrix _i. In particular, if s=1, the inter-user incoherence _T_i_i^*_j_T_j≤1/10s is not needed at all. With (<ref>), it is easy to prove Lemma <ref>. For any block diagonal matrix = (_1, ⋯,_s)∈^Ks× Ns and _i∈^K× N, , _T^*_T() - _T()= ∑_1≤ i,j≤ s_i_T_i(_i), _j_T_j(_j)- _T()_F^2= ∑_i=1^s _i, _T_i_i^*_i_T_i(_i) - _T_i(_i) +∑_i≠ j_i_T_i(_i), _j_T_j(_j).Using (<ref>), the following two inequalities hold,|_i, _T_i_i^*_i_T_i(_i) - _T_i(_i)|≤_T_i_i^*_i_T_i- _T_i_i_F^2 ≤_i^2_F/10s, |_i_T_i(_i), _j_T_j(_j)|≤_T_i_i^*_j_T_j_i_F_j_F ≤_i_F_j_F/10s.After substituting both estimates into (<ref>), we have |, _T^*_T() - _T()| ≤∑_1≤ i, j≤ s_i_F_j_F /10s≤1/10s(∑_i=1^s _i_F)^2 ≤_F^2/10. Finally, we show howbehaves when applied to block-diagonal matrices = (,). In particular, the calculations will be much simplified for the case s=1.( restricted on block-diagonal matrices with rank-1 blocks). Consider = (, ) andσ^2_max(, ) := max_1≤ l≤ L∑_i=1^s |^*_l_i|^2 _i^2.Conditioned on (<ref>), we have()^2≤4/3_F^2+ 2√(2s_F^2 σ^2_max(, )(K+N)log L)+ 8sσ^2_max(, )(K+N) log L,uniformly for any ∈^Ks and ∈^Ns with probabilityat least 1- 1/γexp(-s(K+N)) if L≥ C_γs(K+N)log L. Here _F^2= (, )_F^2 = ∑_i=1^s _i^2_i^2.Here are a few more explanations and facts about σ^2_max(,). 
Note that ()^2 is the sum of L sub-exponential [For the definition and properties of sub-exponential random variables, the readers can find all relevant information in <cit.>.] random variables, i.e., ()^2 = ∑_l=1^L |∑_i=1^s _l^*_i _i^*_il|^2.Here σ^2_max(, ) corresponds to the largest expectation of all those components in ()^2. For σ^2_max(, ), without loss of generality, we assume _i = 1 for 1≤ i≤ s and let ∈^Ks be a unit vector, i.e., ^2 = ∑_i=1^s _i^2= 1.The bound1/L≤σ^2_max(, ) ≤K/Lfollows from L σ^2_max(, ) ≥∑_l=1^L ∑_i=1^s |_l^*_i|^2 = ^2=1.Moreover, σ_max^2(,) and σ_max(,) are both Lipschitz functions w.r.t. . Now we want to determine their Lipschitz constants. First note that for _i = 1, σ_max(,) equalsσ_max(, ) =max_1≤ l≤ L(_s⊗_l^*)where ⊗ denotes Kronecker product. Let ∈^Ks be another unit vector and we have|σ_max(, ) - σ_max(, )|= | max_1≤ l≤ L(_s⊗_l^*)- max_1≤ l≤ L(_s⊗_l^*)|= max_1≤ l≤ L| (_s⊗_l^*)- (_s⊗_l^*) | ≤max_1≤ l≤ L(_s⊗_l^*) ( - )≤-where _s⊗_l^* = _l√(K/L) < 1. For σ^2_max(,),|σ^2_max(, ) - σ^2_max(, )| ≤(σ_max(, ) + σ_max(, )) · |σ_max(, ) - σ_max(, )| ≤2K/L-≤2-. Without loss of generality, let_i = 1 and ∑_i=1^s _i^2 = 1. It suffices to prove f(, ) ≤4/3 for all (, )∈^Ks×^Ns in (<ref>)where f(, ) is defined as f(, ) := ()^2 - 2√(2s σ^2_max(, )(K+N)log L) - 8sσ^2_max(, )(K+N) log L. Part I: Bounds of ()^2 for any fixed (,). From (<ref>), we already know that Y = ()_F^2 = ∑_i=1^2L c_iξ_i^2 where {ξ_i} are i.i.d. χ^2_1 random variables and = (c_1, ⋯, c_2L)^T∈^2L. More precisely, we can determine {c_i}_i=1^2L as | ∑_i=1^s _l^*_i^*_il|^2 = c_2l-1ξ_2l-1^2 + c_2lξ_2l^2, c_2l-1 = c_2l = 1/2∑_i=1^s |_l^*_i|^2because ∑_i=1^s ^*_l_i_i^*_il∼𝒞𝒩(0, ∑_i=1^s |^*_l _i|^2).By the Bernstein inequality, there holdsℙ(Y - 𝔼(Y) ≥ t) ≤exp(- t^2/8c^2) ∨exp(- t/8c_∞)where (Y) = _F^2 = 1. 
In order to apply the Bernstein inequality, we need to estimate ^2 and _∞ as follows,_∞=1/2max_1≤ l≤ L∑_i=1^s|^*_l_i|^2 = 1/2σ^2_max(, ),_2^2 =1/2∑_l=1^L |∑_i=1^s|^*_l_i|^2 |^2 ≤1/2(∑_i=1^s∑_l=1^L|^*_l_i|^2 )max_1≤ l≤ L∑_i=1^s|^*_l_i|^2≤1/2σ^2_max(, ).Applying (<ref>) givesℙ( ()^2 ≥ 1 + t)≤exp(- t^2/4 σ^2_max(, )) ∨exp(- t/4σ^2_max(, )).In particular, by setting t = g(,):= 2 √(2 sσ^2_max(, )(K+N)log L) + 8sσ^2_max(, )(K + N)log L,we haveℙ(()^2 ≥ 1 + g(,)) ≤ e^ - 2 s(K+N)(log L).So far, we have shown that f(, ) ≤ 1 with probability at least 1 - e^- 2 s(K+N)(log L) for a fixed pair of (, ).Part II: Covering argument. Now we will use a covering argument to extend this result for all (, ) and thus prove that f(, )≤4/3 uniformly for all (, ).We start with defining 𝒦 and 𝒩_i as ϵ_0-nets of ^Ks-1 and ^N-1 forand _i,1≤ i≤ s, respectively. The bounds |𝒦|≤ (1+2/ϵ_0)^2sK and |𝒩_i|≤ (1+2/ϵ_0)^2N follow from the covering numbers of the sphere (Lemma 5.2 in <cit.>). Here we let 𝒩 := 𝒩_1×⋯×𝒩_s. By taking the union bound over 𝒦×𝒩, we have that f(, )≤ 1 holds uniformly for all (, ) ∈𝒦×𝒩 with probability at least 1- (1+ 2/ϵ_0)^2s(K + N) e^ - 2s(K+N)log L= 1- e^-2s(K + N)(log L - log(1 + 2/_0)).For any (, ) ∈^Ks-1×^N-1×⋯×^N-1_stimes, we can find a point (, ) ∈𝒦×𝒩 satisfying - ≤_0 and _i - _i≤_0 for all 1≤ i≤ s. Conditioned on (<ref>), we know that ^2≤ s(Nlog(NL/2) + (γ + log s)log L) ≤ s(N + γ + log s)log L. Now we aim to evaluate |f(,) - f(,)|.First we consider |f(, ) - f(, )|. 
Since σ^2_max(, ) = σ^2_max(,) if _i = _i = =1 for 1≤ i≤ s, we have |f(, ) - f(, )| =|((, ))_F^2 - ((,)) _F^2 | ≤ ((,- ))·((,+ ))≤ ^2 √(∑_i=1^s _i^2_i - _i^2)√(∑_i=1^s _i^2_i + _i^2)≤2^2 _0 ≤ 2s(N + γ + log s)(log L)_0where the first inequality is due to ||z_1|^2 - |z_2|^2| ≤ |z_1 - z_2||z_1 + z_2| for any z_1, z_2 ∈ℂ.We proceed to estimate|f(, ) - f(, )| by using (<ref>) and (<ref>), | f(, ) - f(, )|≤J_1 + J_2 + J_3 ≤ (2^2 + 2√(2s(K+N)log L)+ 16s(K+N) log L) _0≤25s(K +N + γ + log s)(log L) _0where (<ref>) and (<ref>) giveJ_1 =| ((,))_F^2 - ((,))_F^2| ≤( ( - ,) )( ( + ,) )≤ 2^2 _0, J_2 = 2√(2s(K+N)log L)· |σ_max(, ) - σ_max(, )| ≤ 2√(2s(K+N)log L)_0, J_3 = 8s(K+N) (log L) · |σ^2_max(, ) - σ^2_max(, )|≤ 16s(K+N)(log L) _0. 0.25cm Therefore, if ϵ_0 = 1/81s(N + K + γ + log s)log L, there holdsf(,) ≤ f(,) + |f(, ) - f(, )| + |f(,) -f(,) |_≤ 27s(K+N+γ + log s)(log L)_0≤1/3≤4/3for all (,) uniformly with probabilityat least 1- e^-2s(K + N)(log L - log(1 + 2/_0)). By letting L ≥ C_γs(K+N)log L with C_γ reasonably large and γ≥ 1, we have log L - log(1 + 2/_0) ≥1/2(1 + log(γ)) and with probability at least 1 - 1/γexp(-s(K+N)).§.§ Proof of the local restricted isometry propertyConditioned on (<ref>) and (<ref>), the following RIP type of property holds:2/3 - _0_F^2 ≤( - _0)^2 ≤3/2-_0_F^2uniformly for all (,)∈ with μ≥μ_h and ϵ≤1/15 if L ≥ C_γμ^2 s(K+N)log^2 L for some numerical constant C_γ. The main idea of the proof follows two steps: decompose -_0 onto T and , then apply (<ref>) and (<ref>) to _T(-_0) and _(-_0) respectively.0.25cm For any =(,)∈ with δ_i ≤≤1/15, we can decompose - _0 as the sum of two block diagonal matrices = (_i, 1≤ i≤ s) and = (_i, 1≤ i≤ s) where each pair of (_i, _i) corresponds to the orthogonal decomposition of _i_i^* - _i0_i0^*, _i^*_i - _i0_i0^* := (α_i1α_i2 - 1)_i0_i0^* + α_i2_i _i0^* + α_i1_i0_i^*__i∈ T_i+ _i _i^*__i ∈T_i^⊥which has been briefly discussed in (<ref>) and (<ref>). 
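A toy Monte Carlo illustrates the concentration behind the RIP-type bound 2/3‖·‖_F² ≤ ‖(·)‖² ≤ 3/2‖·‖_F² stated above. The sketch below replaces the structured rows b_l of the text by i.i.d. complex Gaussians (an assumption made purely for illustration; the paper's b_l come from a partial DFT matrix), and checks that the normalized measurement energy of a fixed test matrix Z lands well inside the interval [2/3‖Z‖_F², 3/2‖Z‖_F²]:

```python
import numpy as np

rng = np.random.default_rng(4)
K, N, L = 5, 6, 5000

# Illustrative toy model: both b_l and a_l are i.i.d. CN(0, I) vectors.
B = (rng.standard_normal((L, K)) + 1j * rng.standard_normal((L, K))) / np.sqrt(2)
A = (rng.standard_normal((L, N)) + 1j * rng.standard_normal((L, N))) / np.sqrt(2)

Z = rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))  # fixed test matrix

# Measurements y_l = b_l^* Z a_l; with this normalization E|y_l|^2 = ||Z||_F^2,
# so the empirical average over L samples concentrates around ||Z||_F^2.
y = np.einsum('lk,kn,ln->l', B.conj(), Z, A)
emp = np.mean(np.abs(y) ** 2)
frob2 = np.linalg.norm(Z) ** 2

assert (2 / 3) * frob2 <= emp <= (3 / 2) * frob2
```

This only checks concentration for one fixed Z; the uniform statement over the whole neighborhood requires the covering argument carried out in the text.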
Note that ( - _0) = ( + ) and () - ()≤( + )≤() + ().Therefore, it suffices to have a two-sided bound for () and an upper bound for () where ∈ T and ∈ in order to establish the local isometry property. Estimation of (): For (), we know from Lemma <ref> that √(9/10)_F≤()≤√(11/10)_Fand hence we only need to compute _F. By Lemma <ref>, there also hold _i_F ≤δ_i^2/2(1 - δ_i) d_i0 and δ_i - _i_F≤_i_F≤δ_i + _i_F, i.e., (δ_i - δ_i^2/2(1 - δ_i))d_i0≤_i_F ≤(δ_i + δ_i^2/2(1 - δ_i))d_i0,1≤ i≤ s.With _F^2 = ∑_i=1^s _i_F^2,it is easy to get δ d_0(1 - /2(1-)) ≤_F ≤δ d_0 (1 + /2(1-)). Combined with (<ref>), we get√(9/10)(1 - /2(1-))δ d_0 ≤() ≤√(11/10)(1 + /2(1-))δ d_0. Estimation of (): Note thatis a block-diagonal matrix with rank-1 blocks, so applying Lemma <ref> gives()^2 ≤4/3_F^2+ 2√(2s_F^2 σ^2_max(, )(K+N)log L)+ 8sσ^2_max(, )(K+N) log Lwhere = (, ) and = [ _1;⋮; _s ]. It suffices to estimate _F and σ^2_max(,) to bound () in (<ref>). Lemma <ref> says that _i_i≤δ_i^2/2(1 - δ_i) d_i0≤/2(1-)δ_i d_i0 if < 1. Moreover,_i≤δ_i/1 - δ_i_i≤2δ_i/1 - δ_i√(d_i0), √(L)B_i _∞≤ 6 μ√(d_i0),1≤ i≤ sif (,) belongs to .
For _F,_F = √(∑_i=1^s _i_F^2) = √(∑_i=1^s _i^2 _i^2)≤δ d_0/2(1-).Now we aim to get an upper bound for σ^2_max(, ) by using (<ref>),σ_max^2(, ) = max_1≤ l≤ L∑_i=1^s |^*_l_i|^2 _i^2 ≤ C_0μ^2 ∑_i=1^s δ_i^2 d_i0^2/L = C_0μ^2δ^2 d_0^2/L.Substituting the estimates of _F and σ^2_max(, ) into (<ref>) gives()^2 ≤^2 δ^2d_0^2/3(1-)^2 + √(2)δ^2 d_0^2/1-√(C_0μ^2 s (K+N)log L/L) + 8C_0 μ^2 δ^2d_0^2s(K+N)log L/L.By letting L ≥ C_γμ^2 s(K + N)log^2 L with C_γ sufficiently large and combining (<ref>) and (<ref>), we have√(2/3)δ d_0 ≤() - ()≤(+)≤() + ()≤√(3/2)δ d_0,which gives 2/3 - _0_F^2 ≤( - _0)^2 ≤3/2 - _0_F^2.

§.§ Proof of the local regularity condition

We first introduce some notation: for all (, ) ∈∩, consider α_i1, α_i2, _i and _i defined in (<ref>) and define_i = _i - α_i _i0, _i = _i - α_i^-1_i0where α_i (_i, _i)=(1 - δ_0)α_i1, if _i_2 ≥_i_21/(1 - δ_0)α_i2, if _i_2 < _i_2with δ_0 := δ/10.The function α_i(_i,_i) is defined for each block of = (, ). The particular form of α_i(, ) serves primarily to prove Lemma <ref>, i.e., the local regularity condition of G(, ). We also define: =[ _1 - α_1 _1,0; ⋮;_s - α_s _s0 ]∈^Ks,: =[ _1 - α_1 _1,0; ⋮;_s - α_s _s0 ]∈^Ns.The following lemma gives bounds on _i and _i. For all (,) ∈∩ with ϵ≤1/15, there hold max{_i_2^2, _i_2^2} ≤ (7.5δ_i^2 + 2.88δ_0^2) d_i0,_i_2^2 _i_2^2≤1/26(δ_i^2 + δ_0^2) d_i0^2. Moreover, if we additionally assume (_i, _i) ∈, we have √(L)(_i)_∞≤ 6μ√(d_i0). We only consider the case _i_2 ≥_i_2 and α_i = (1 - δ_0) α_i1; the other case is exactly the same by symmetry. For both _i and _i, by definition, _i = _i - α_i_i0 =δ_0 α_i1_i0 + _i, _i = _i - 1/(1 - δ_0)α_i1_i0 = (α_i2 - 1/(1 - δ_0)α_i1)_i0 + _i ,where _i = α_i1_i0 + _i and _i = α_i2_i0 + _i come from the orthogonal decomposition in (<ref>). We start with estimating _i^2. Note that _i_2^2 ≤ 4d_i0 and α_i1_i0_2^2≤_i_2^2 since (, )∈∩. By Lemma <ref>, we have _i_2^2 = _i_2^2 + δ_0^2α_i1_i0_2^2 ≤((δ_i/1-δ_i)^2 + δ_0^2)_i_2^2 ≤ ( 4.6δ_i^2 + 4δ_0^2) d_i0.
Then we calculate _i: from (<ref>), we have_i^2 = |α_i2 - 1/(1 - δ_0)α_i1|^2d_i0 + _i^2 ≤|α_i2 - 1/(1 - δ_0)α_i1|^2d_i0 + 4δ_i^2 d_i0/(1 - δ_i)^2,where Lemma <ref> gives _i_2 ≤δ_i/1-δ_i_i_2 ≤2δ_i/1-δ_i√(d_i0) for (,)∈∩. So it suffices to estimate | α_i2 - 1/(1 - δ_0)α_i1|, which satisfies|α_i2 - 1/(1 - δ_0)α_i1|= 1/|α_i1|| α_i1α_i2- 1 -δ_0/1 - δ_0| ≤1/|α_i1|( |(α_i1α_i2- 1)|+δ_0/1 - δ_0).Lemma <ref> implies that | α_i1α_i2- 1| ≤δ_i, and (<ref>) gives|α_i1|^2 = 1/d_i0(_i^2 - _i ^2) ≥1/d_i0(1 - δ_i^2/(1-δ_i)^2)_i^2 ≥(1 - δ_i^2/(1-δ_i)^2)(1-)where _i≤δ_i/1-δ_i_i and _i^2 ≥_i_i≥ (1-)d_i0 if _i≥_i. Substituting (<ref>) into (<ref>) gives|α_i2 - 1/(1 - δ_0)α_i1| ≤ 1/√(1-)(1 - δ_i^2/(1-δ_i)^2)^-1/2(δ_i + δ_0/1-δ_0) ≤ 1.2(δ_i + δ_0).Then we have_i_2^2 ≤ (1.44(δ_i+δ_0)^2+ 4δ^2_i/(1 - δ_i)^2)d_i0≤ (7.5δ_i^2 + 2.88δ_0^2)d_i0. Finally, we bound _i^2_i^2. Lemma <ref> gives _i_2 _i_2 ≤δ_i^2d_i0/2(1 - δ_i) and |α_i1| ≤ 2. Combining them with (<ref>), (<ref>), (<ref>) and (<ref>), we have_i_2^2 _i_2^2≤_i_2^2_i_2^2 + δ_0^2 |α_i1|^2 _i0_2^2 _i_2^2 + |α_i2 - 1/(1 - δ_0)α_i1|^2 _i0_2^2 _i_2^2≤(δ_i^4/4(1 - δ_i)^2+ 4δ_0^2 (7.5δ_i^2 + 2.88δ_0^2) + 1.44(δ_i + δ_0)^2 (4.6δ_i^2 + 4δ_0^2 )) d_i0^2 ≤(δ_i^2 + δ_0^2)d_i0^2/26.By symmetry, similar results hold for the case _i_2 < _i_2 and max{_i, _i}≤ (7.5δ_i^2 + 2.88δ_0^2)d_i0. Next, under the additional assumption (, ) ∈, we prove √(L)B(_i)_∞≤ 6μ√(d_i0): Case 1: _i_2 ≥_i_2 and α_i = (1 - δ_0) α_i1. Lemma <ref> gives |α_i1| ≤ 2, which implies√(L)B(_i)_∞ ≤√(L)B_i _∞ + (1 - δ_0) |α_i1|√(L)B_i0_∞≤ 4μ√(d_i0) + 2(1 - δ_0)μ_h √(d_i0)≤ 6μ√(d_i0).Case 2: _i_2 < _i_2 and α_i = 1/(1-δ_0)α_i2. Using the same argument as (<ref>) gives|α_i2|^2≥(1 - δ_i^2/(1-δ_i)^2)(1-).Therefore,√(L)B(_i)_∞ ≤√(L)B_i_∞ + 1/(1 - δ_0) |α_i2|√(L)B_0_∞≤ 4μ√(d_0) + (1 - δ_i^2/(1-δ_i)^2)^-1/2μ_h √(d)_0/(1-δ_0)√(1-)≤ 6 μ√(d_0).
(Local Regularity for F(,)) Conditioned on (<ref>) and (<ref>), the following inequality holds∇ F_,+ ∇ F_, ≥δ^2 d_0^2/8 - 2√(s)δ d_0 ^*(),uniformly for any (, ) ∈ with ϵ≤1/15 if L ≥ Cμ^2 s(K+N)log^2 L for some numerical constant C. First note thatI_0 =∇ F_,+ ∇ F_, = ∑_i=1^s ∇ F__i, _i+ ∇ F__i, _i .For each component, recalling (<ref>) and (<ref>), we have∇ F__i, _i+ ∇ F__i, _i = _i^*(( - _0) - )_i, _i +(_i^*(( - _0) - ))^*_i, _i= ( - _0)- , _i((_i)_i^* + _i (_i)^*) .Define _i and _i as_i := α_i_i0(_i)^* + α_i^-1(_i)_i0^* ∈ T_i, _i := _i(_i)^*.Here _i does not necessarily belong to _i. From the way _i, _i, _i and _i are constructed, two simple relations hold:_i_i^* - _i0_i0^* =_i + _i, (_i)_i^* + _i (_i)^* =_i + 2_i.Define : = (_1, ⋯, _s) and : = (_1, ⋯, _s). Then I_0 can be simplified toI_0 = ∑_i=1^s ∇ F__i, _i+ ∇ F__i, _i= ∑_i=1^s (+)-, _i(_i + 2_i) = (+), ( + 2)_I_01 - , ( + 2)_I_02. Now we will give a lower bound for (I_01) and an upper bound for (I_02) so as to obtain a lower bound for (I_0). By the Cauchy-Schwarz inequality, (I_01) has the lower bound(I_01) ≥(() - ()) (() - 2()).In the following, we will give an upper bound for () and a lower bound for (). Upper bound for (): Note thatis a block-diagonal matrix with rank-1 blocks, and applying Lemma <ref> results in()^2 ≤4/3∑_i=1^s _F^2 + 2σ_max(,)_F√(2s(K+N)log L) + 8sσ_max^2(, )(K+N) log L.By using Lemma <ref>, we have _i^2 ≤ (7.5δ_i^2 + 2.88δ_0^2)d_i0 and √(L)(_i)_∞≤ 6μ√(d_i0). Substituting them into σ^2_max(,) givesσ_max^2(, ) = max_1≤ l≤ L(∑_i=1^s |_l^*_i|^2 _i^2) ≤36μ^2/L∑_i=1^s(7.5δ_i^2 + 2.88δ_0^2)d_i0^2≤272μ^2 δ^2 d_0^2/L.For _F, note that _i^2_i^2 ≤1/26(δ_i^2 + δ_0^2)d_i0^2 and thus^2_F = ∑_i=1^s _i^2_i^2 ≤1/26∑_i=1^s(δ_i^2 + δ_0^2)d_i0^2 ≤1/26· 1.01δ^2d_0^2 = δ^2 d_0^2/25.
Then by δ≤≤1/15 and letting L ≥ Cμ^2 s(K + N)log^2 L for a sufficiently large numerical constant C, there holds()^2 ≤δ^2 d_0^2/16()≤δ d_0/4.Lower bound for ():By the triangle inequality,_F ≥δ d_0 - 1/5δ d_0 ≥4/5δ d_0 if ϵ≤1/15 since _F ≤ 0.2δ d_0. Since ∈ T, by lem:ripu, there holds()≥√(9/10)_F ≥3/4δ d_0.With the upper bound of () in (<ref>), the lower bound of () in (<ref>), and (<ref>), we get (I_01) ≥δ^2 d_0^2/8.Now let us give an upper bound for (I_02), I_02 ≤ ^*() + 2_* = ^*()∑_i=1^s_i + 2_i_rank-2_* ≤ √(2)^*()∑_i=1^s _i + 2_i_F ≤ √(2s)^*() + 2_F≤ 2√(s)δ d_0 ^*()where · and ·_* are a pair of dual norms and + 2_F ≤ + _F + _F ≤δ d_0+ 0.2δ d_0 ≤ 1.2δ d_0.Combining the estimation of (I_01) and (I_02) above leads to ( ∇ F_,+ ∇ F_, ) ≥δ^2 d_0^2/8 - 2√(s)δ d_0 ^*().(Local Regularity for G(,) For any (, ) ∈⋂ with ϵ≤1/15 and 9/10d_0≤ d ≤11/10d_0, 9/10d_i0≤ d_i ≤11/10d_i0, the following inequality holds uniformly∇ G__i, _i+ ∇ G__i, _i ≥ 2δ_0√(ρ G_i(_i, _i)) = δ/5√(ρ G_i(_i, _i)),where ρ≥ d^2 + 2^2. Immediately, we have∇ G_,+ ∇ G_,=∑_i=1^s∇ G__i, _i+ ∇ G__i, _i ≥δ/5√(ρ G(, )).For the local regularity condition for G(, ), we use the results from <cit.> when s=1. This is because each component G_i(,) only depends on (_i,_i) by definition and thus the lower bound of ∇ G__i, _i+ ∇ G__i, _i is completely determined by (_i,_i) and δ_0, and is independent of s.For each i:1≤ i≤ s, ∇ G__i (or ∇ G__i) only depends on _i (or _i) and there holds∇ G__i, _i+ ∇ G__i, _i ≥ 2δ_0√(ρ G_i(_i, _i)) = δ/5√(ρ G_i(_i, _i)),which follows exactly from Lemma 5.17 in <cit.>. For (<ref>), by definition of ∇ G_ and ∇ G_ in (<ref>), ∇ G_,+ ∇ G_, = ∑_i=1^s∇ G__i, _i+ ∇ G__i, _i ≥δ/5∑_i=1^s√(ρ G_i(_i, _i))≥δ/5√(ρ G(,))where G(,) = ∑_i=1^s G_i(_i,_i). 
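The step above bounding the nuclear norm of each rank-2 block _i + 2_i by √2 times its Frobenius norm is just Cauchy–Schwarz applied to the two nonzero singular values, s₁ + s₂ ≤ √2·√(s₁² + s₂²). A quick numerical check (illustrative dimensions):

```python
import numpy as np

rng = np.random.default_rng(2)

# Random rank-2 matrix M = u1 v1^T + u2 v2^T.
M = (np.outer(rng.standard_normal(7), rng.standard_normal(9))
     + np.outer(rng.standard_normal(7), rng.standard_normal(9)))
sv = np.linalg.svd(M, compute_uv=False)
assert sv[2] < 1e-10 * sv[0]      # rank 2: only two nonzero singular values

nuclear = sv.sum()                # ||M||_* = s1 + s2
frob = np.linalg.norm(M)          # ||M||_F = sqrt(s1^2 + s2^2)
assert nuclear <= np.sqrt(2) * frob + 1e-10
```

More generally, a rank-r matrix satisfies ‖M‖_* ≤ √r·‖M‖_F, which is where the factor √(2s) in the bound of I_02 comes from after summing over the s blocks.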
(Proof of the Local Regularity Condition)Conditioned on (<ref>), for the objective function (,) in (<ref>), there exists a positive constant ω such that∇(, )^2 ≥ω[ (, ) - c ]_+with c =^2 + 2000s ^*()^2 and ω = d_0/7000 for all (, ) ∈.Here we setρ≥ d^2 + 2^2.Following from Lemma <ref> and Lemma <ref>, we have( ∇ F_,+ ∇ F_, )≥ δ^2 d_0^2/8 - 2√(s)δ d_0 ^*() ( ∇ G_,+ ∇ G_, )≥ δ d/5√( G(, ))≥9δ d_0/50√(G(, ))for all (, ) ∈ where ρ≥ d^2 + 2^2≥ d^2 and 9/10d_0 ≤ d ≤11/10d_0. Adding them together gives (∇_,+ ∇_, ) on the left side. Moreover, Cauchy-Schwarz inequality implies(∇_,+ ∇_, ) ≤ 4δ√(d_0)∇(, )where both ^2 and ^2 are bounded by 8δ^2d_0 inLemma <ref> since^2 = ∑_i=1^s _i^2 ≤∑_i=1^s (7.5δ_i^2 + 2.88δ_0^2) d_i0≤ 8δ^2 d_0.Therefore, δ^2 d_0^2/8 + 9δ d_0√( G(, ))/50 - 2√(s)δ d_0 ^*() ≤ 4 δ√(d_0)∇(, ). Dividing both sides of (<ref>) by δ d_0, we obtain4/√(d_0)∇(, ) ≥ δ d_0/12+ 9/50√(G(, )) + δ d_0 /24 - 2√(s)^*()≥ 1/6√(6)[√(F_0(,)) + √(G(, ))] + δ d_0/24 - 2√(s)^*()where the Local RIP condition (<ref>) implies F_0(, ) ≤3/2δ^2 d_0^2 and hence δ d_0/12≥1/6√(6)√(F_0(, )), where F_0(,) is defined in (<ref>).Note that (<ref>) gives√(2[ (^*(),- _0) ]_+)≤√( 2√(2s)^*()δ d_0)≤√(6)δ d_0/4 + 4√(s)/√(6)^*().By (<ref>) and (, ) - ^2 ≤ F_0(, ) + 2 [(^*(),- _0)]_+ + G(, ), there holds4/√(d_0)∇(, ) ≥ 1/6√(6)[ (√(F_0(, )) +√(2[ (^*(), -_0) ]_+) + √(G(, ))) + δ d_0/24 - 1/6√(6)( √(6)δ d_0/4 + 4√(s)/√(6)^*()) - 2√(s)^*()≥ 1/6√(6)[ √([(, ) - ^2]_+) -√(1000s)^*()]. 
For any nonnegative real numbers a and b, we have [√((x - a)_+) - b ]_+ + b ≥√((x - a)_+) and it implies( x - a)_+ ≤ 2 ( [√((x - a)_+) - b ]_+^2 + b^2) ⟹ [√((x - a)_+) - b ]_+^2≥(x - a)_+/2 - b^2.Therefore, by setting a = ^2 and b = √(1000s)^*(), there holds∇(, )^2 ≥ d_0/3500[ (, ) - ^2 /2 -1000s ^*()^2 ]_+ ≥ d_0/7000[ (, ) - (^2 + 2000s ^*()^2) ]_+.§.§ Local smoothness Conditioned on (<ref>), (<ref>) and (<ref>), for any : = (, )∈^(K+N)s and : = (, )∈^(K+N)s such thatand +∈∩, there holds∇( + ) - ∇() ≤ C_L ,withC_L ≤(10^2d_0 + 2ρ/min d_i0( 5 + 2L/μ^2))where ρ≥ d^2 + 2^2and ≤√(s(Nlog(NL/2) + (γ+log s)log L)) holds with probability at least 1 - L^-γ from Lemma <ref>. In particular, L = 𝒪((μ^2 + σ^2)s(K + N)log^2 L) and ^2 = 𝒪(σ^2d_0^2) follows from ^2 ∼σ^2d_0^2/2Lχ^2_2L and (<ref>). Therefore, C_L can be simplified toC_L = 𝒪(d_0sκ(1 + σ^2)(K + N)log^2 L )by choosing ρ≈ d^2 + 2^2.By lem:betamu, we know that both z=(h, x) and z+w=(h+u, x+v) ∈.Note that ∇ = (∇_, ∇_) = (∇ F_ + ∇ G_, ∇ F_ + ∇ G_),where (<ref>), (<ref>), (<ref>) and (<ref>) give ∇ F_,∇ F_,∇ G_ and ∇ G_. It suffices to find out the Lipschitz constants for all of those four functions.Step 1: We first estimate the Lipschitz constant for ∇ F_ and the result can be applied to ∇ F_ due to symmetry.∇ F_( + ) - ∇ F_() = ^*( (+, +)) ( + ) - [ ^*((,)) + ^*() ]= ^*( ((+, + ) - (,)) )( + )+ ^*( (,) - (_0,_0) ) - ^*()= ^*( ( (+,) + (, ) ) )( + )+ ^*( (,) - (_0,_0) ) - ^*() .Note that (, )_F≤√(∑_i=1^s _i^2_i^2)≤ and , +∈ directly implies (,) + (+,)_F ≤ux + h+uv≤ 2√(d_0) ( + )where + ≤ 2√(d_0). Moreover, (<ref>) implies(,) - (_0,_0)_F ≤ϵ d_0since ∈. Combined with ^*()≤ d_0 in (<ref>) and + ≤ 2√(d_0), we have ∇ F_( + ) - ∇ F_()≤4d_0 ^2( + ) +d_0 ^2+d_0 ≤5d_0 ^2 (+ ).Due to the symmetry between ∇ F_ and ∇ F_, we have,∇ F_( + ) - ∇ F_() ≤ 5d_0^2 (+ ).In other words, ∇ F( + ) - ∇ F() ≤ 5√(2)d_0 ^2( + ) ≤ 10d_0^2where + ≤√(2).Step 2: We estimate the upper bound of ∇ G__i(_i + _i) - ∇ G__i(_i). 
Implied by Lemma 5.19 in <cit.>,we have∇ G__i(_i + _i) - ∇ G__i(_i) ≤5d_i0ρ/d_i^2_i.Step 3: We estimate the upper bound of ∇ G__i( + ) - ∇ G__i(). Denote ∇ G__i( + ) - ∇ G__i()= ρ/2d_i[G'_0(_i + _i^2/2d_i) (_i + _i) - G'_0(_i^2/2d_i) _i] _j_1+ ρ L/8d_iμ^2 ∑_l=1^L [G'_0(L|_l^*(_i + _i)|^2/8d_iμ^2) _l^*(_i + _i) - G'_0(L|_l^*_i|^2/8d_iμ^2) _l^*_i ]_l_j_2.Following the same estimation of j_1 and j_2 in Lemma 5.19 of <cit.>, we havej_1≤5d_i0ρ/d_i^2_i, j_2≤3ρ Ld_i0/2d_i^2μ^2_i.Therefore, combining eq:LipGx and eq:LipG gives∇ G( + ) - ∇ G()= √(∑_i=1^s (∇ G__i( + ) - ∇ G__i() ^2 + ∇ G__i( + ) - ∇ G__i() ^2))≤max{5d_i0ρ/d_i^2 +3ρ Ld_i0/2d_i^2μ^2}√(∑_i=1^s_i^2)+ max{5d_i0ρ/d_i^2}√(∑_i=1^s_i^2)≤max{5d_i0ρ/d_i^2 +3ρ Ld_i0/2d_i^2μ^2} + max{5d_i0ρ/d_i^2}≤2ρ/min d_i0( 5 + 2L/μ^2).In summary, the Lipschitz constant C_L of () has an upper bound as follows:∇( + ) - ∇() ≤∇ F( + ) - ∇ F() + ∇ G( + ) - ∇ G()≤(10^2d_0 + 2ρ/min d_i0( 5 + 2L/μ^2)) . §.§ Robustness condition and spectral initialization In this section, we will prove the robustness condition (<ref>) and also Theorem <ref>. To prove (<ref>), it suffices to show the following lemma, which is a more general version of (<ref>).Consider a sequence of Gaussian independent random variable = (c_1, ⋯, c_L)∈^L where c_l∼(0, λ_i^2/L) with λ_i ≤λ. Moreover, we assume _i in (<ref>) is independent of . Then there holds^*() = max_1≤ i≤ s_i^*( )≤ξwith probability at least 1 - L^-γ if L ≥ C_γ + log(s)( λ/ξ +λ^2/ξ^2 )max{ K,N }log L/ξ^2. It suffices to show that max_1≤ i≤ s_i^*()≤ξ. For each fixed i:1≤ i≤ s, _i^*() = ∑_l=1^L c_l_l_il^*The key is to apply the matrix Bernstein inequality (<ref>) and we need to estimate _l_ψ_1, and the variance of ∑_l=1^L _l. For each l, c_l _l_il^*_ψ_1≤λ√(KN)/L follows from (<ref>). 
Moreover, the variance of _i^*() is bounded by λ^2 max{K,N}/L since[ _i^*() (_i^*() )^* ] = ∑_l=1^L (|c_l|^2 _il^2)_l_l^* = N/L∑_l=1^L λ_l^2_l_l^* ≼λ^2 N/L, [ (_i^*() )^* (_i^*())] = ∑_l=1^L _l^2 (|c_l|^2 _il_il^*) = K/L^2∑_l=1^L λ_i^2 _N ≼λ^2 K/L.Letting t = γlog L and applying (<ref>) leads to_i^*()≤ C_0max{λ√(KN)log^2 L/L,√(C_γλ^2max{K,N}log L/L)}≤ξ.Therefore, by taking the union bound over 1≤ i≤ s, _i^*()≤ξwith probability at least 1 - L^-γ if L≥ C_γ+log(s) (λ/ξ +λ^2/ξ^2 )max{K,N}log^2L. The robustness condition is an immediate result of Lemma <ref> by setting ξ =d_0/10√(2)sκ and λ = σ d_0. [Robustness Condition] For ∼(, σ^2d_0^2/L_L)_i^*()≤ d_0/10√(2) sκ, ∀ 1≤ i≤ swithprobability at least 1 - L^-γ if L ≥ C_γ( s^2κ^2 σ^2/^2 + sκσ/)max{K, N}log L.For ∼(, σ^2d_0^2/L_L), there holds_i^*() - _i0_i0^* ≤ξ d_i0, ∀ 1≤ i≤ swith probability at least 1 - L^-γ if L ≥ C_γ+ log (s)sκ^2 (μ^2_h + σ^2) max{K,N}log L /ξ^2.The success of the initialization algorithm completely relies on the lemma above. As mentioned in Section <ref>, (_i^*()) = _i0_i0^* and Lemma <ref> confirms that _i^*() is close to _i0_i0^* in operator norm and hence the spectral method is able to give us a reliable initialization.Note that_i^*() =_i^*_i(_i0_i0^*) + _i^*(_i)where_i =- _i(_i0_i0^*) = ∑_j≠ i_j(_j0_j0^*) +is independent of _i. The proof consists of two parts: 1. show that _i^*_i(_i0_i0^*) - _i0_i0^*≤ξ d_i0/2; 2. prove that _i^*(_i)≤ξ d_i0/2. Part I: Following from the definition of _i and _i^* in (<ref>), _i^*_i(_i0_i0^*) - _i0_i0^* = ∑_l=1^L _l_l^*_i0_i0^*(_il_il^* - _N)_defined as _l .where ^* = _K. 
The sub-exponential norm of _l is bounded by_l_ψ_1≤max_1≤ l≤ L_l |_l^*_i0|(_il_il^* - _N) _i0_ψ_1≤μ√(KN)d_i0/Lwhere _l = √(K/L), max_l|^*_l_i0|^2 ≤μ^2 d_i0/L and (_il_il^* - _N) _i0_ψ_1≤√(Nd_i0) follows from (<ref>).We proceed to estimate the variance of ∑_l=1^L _l by using (<ref>) and (<ref>):∑_l=1^L(_l_l^*)=∑ |_l^*_i0|^2 _i0^*(_il_il^* - _N)^2_i0_l_l^*≤μ^2N d_i0^2/L,∑_l=1^L(_l^*_l)=K/L∑_l=1^L |_l^*_i0|^2 [ (_il_il^* - _N)_i0_i0^*(_il_il^* - _N)] ≤Kd_i0^2/L.Therefore, the variance of ∑_l=1^L_l is bounded by max{K, μ^2_hN}d_i0^2/L. By applying matrix Bernstein inequality (<ref>) and taking the union bound over all i, we prove that_i^*_i(_i0_i0^*) - _i0_i0^*≤ξ d_i0/2, ∀ 1≤ i≤ sholds with probability at least 1 -L^-γ+1 if L ≥ C_γ+log(s)max{K,μ_h^2N}log L/ξ^2. Part II: For each 1≤ l≤ L, the l-th entry of _i in (<ref>), i.e., (_i )_l = ∑_j≠ i_l^*_j0_j0^*_jl + e_l, is independent of ^*_l_i0_i0^*_il and obeys 𝒞𝒩(0, σ_il^2/L). Hereσ_il^2 = L|(_i)_l|^2 =L∑_j≠ i |_l^*_j0|^2 _j0^2+ σ^2_0_F^2 ≤ μ_h^2 ∑_j≠ i_j0^2 _j0^2 + σ^2_0_F^2≤ (μ_h^2 + σ^2) _0_F^2.This gives max_i,lσ_il^2≤ (μ^2_h + σ^2) _0_F^2. Thanks to theindependence between _i and _i, applying Lemma <ref> results in_i^*(_i)≤ξ d_i0/2with probability 1 - L^-γ + 1 if L ≥ Cmax( (μ_h^2 + σ^2) _0_F^2 /ξ^2d_i0^2,√(μ^2_h + σ^2)_0_F /ξ d_i0)max{K,N}log L.0.5cmTherefore, combining (<ref>) with (<ref>), we get_i^*() - _i0_i0^*≤_i^*_i(_i0_i0^*) - _i0_i0^*+ _i^*(_i)≤ξ d_i0for all 1≤ i≤ s with probability at least 1 - L^-γ+1 ifL ≥ C_γ+log (s)(μ_h^2 + σ^2)s κ^2max{K,N}log L/ξ^2where _0_F/d_i0≤√(s)κ. Before moving to the proof of Theorem <ref>, weintroduce a property about the projection onto a closed convex set. Let Q := {∈^K | √(L)_∞≤ 2√(d)μ}be a closed nonempty convex set. There holds(- _Q() ,- _Q()) ≤ 0, ∀ ∈ Q, ∈^Kwhere _Q() is the projection ofonto Q.With this lemma, we can easily see - ^2 =- _Q()^2 + _Q() - ^2 + 2( - _Q(), _Q() - ) ≥_Q() - ^2for all ∈^K and ∈ Q. 
This means that the projection onto a nonempty closed convex set is non-expansive. Now we present the proof of Theorem <ref>. By choosing L ≥ C_γ+log (s)(μ_h^2 + σ^2)s^2 κ^4max{K,N}log L/^2, we have _i^*() - _i0_i0^*≤ξ d_i0, ∀ 1≤ i≤ swhere ξ = /10√(2s)κ.By applying the triangle inequality to (<ref>), it is easy to see that(1 - ξ)d_i0≤ d_i ≤ (1 + ξ)d_i0,|d_i - d_i0| ≤ξ d_i0≤ d_i0/10√(2s)κ < d_i0/10,which gives 9/10d_i0≤ d_i ≤11/10d_i0 where d_i = _i^*() according to Algorithm <ref>. Part I: Proof of (^(0),^(0))∈1/√(3)∩1/√(3). Note that _i^(0) = √(d_i)_i0 = √(d_i) where _i0 is the leading right singular vector of _i^*(). Therefore, _i^(0) = √(d_i)_i0 =√(d_i)≤√((1 + ξ)d_i0)≤2/√(3)√(d_i0), ∀ 1≤ i≤ swhich implies {_i^(0)}_i=1^s ∈1/√(3). Now we will prove that _i^(0)∈1/√(3)∩1/√(3) by Lemma <ref>. By Algorithm <ref>, _i^(0) is the minimizer of the function f() = 1/2 - √(d_i)_i0^2 over Q_i := { | √(L)_∞≤ 2√(d_i)μ}. By definition, _i^(0) is the projection of √(d_i)_i0 onto Q_i. Note that _i^(0)∈ Q_i implies √(L)_i^(0)_∞≤ 2√(d_i)μ≤ 2√((1+ξ)d_i0)μ≤4√(d_i0)μ/√(3) and hence _i^(0)∈1/√(3).Moreover, due to (<ref>), there holds √(d_i)_i0 - ^2≥_i^(0) - ^2, ∀∈ Q_i. In particular, letting = ∈ Q_i, we immediately have _i^(0)^2 ≤ d_i ≤4/3⟹_i^(0)∈1/√(3).In other words, {(_i^(0), _i^(0))}_i=1^s ∈1/√(3)∩1/√(3). Part II: Proof of (^(0),^(0))∈_2/5√(s)κ. We will show _i^(0)(_i^(0))^* - _i0_i0^*_F ≤ 5ξ d_i0 for 1≤ i≤ s so that _i^(0)(_i^(0))^* - _i0_i0^*_F/d_i0≤2/5√(s)κ. First note that σ_j(_i^*()) ≤ξ d_i0 for all j≥ 2, which follows from Weyl's inequality <cit.> for singular values, where σ_j(_i^*()) denotes the j-th largest singular value of _i^*().
Hence there holdsd_i _i0_i0^* - _i0_i0^* ≤_i^*() - d_i _i0_i0^*+ _i^*() - _i0_i0^* ≤ 2ξ d_i0.On the other hand, for any i,(_K - _i0_i0^*/d_i0)_i0 = (_K - _i0_i0^*/d_i0) _i0_i0^*_i0_i0^* = (_K - _i0_i0^*/d_i0)[1/d_i0(( _i^*() - d_i _i0_i0^*) + _i0_i0^* - _i0_i0^*/d_i0] _i0_i0^* =1/d_i0_i^*()- _i0_i0^*+ |d_i/d_i0-1| ≤ 2ξ where (_K - _i0_i0^*/d_i0) _i0_i0^* = and (_i^*() - d_i _i0_i0^*)_i0_i0^* =. Therefore, we have_i0 -_i0^*_i0/d_i0_i0≤ 2ξ,√(d_i)_i0 - t_i0_i0≤ 2√(d_i)ξ,where t_i0 = √(d_i)_i0^*_i0/d_i0 and |t_i0| ≤√(d_i/d_i0) <√(2).If wesubstituteby t_i0_i0∈ Q_i into (<ref>), √(d_i)_i0 - t_i0_i0≥_i^(0) - t_i0_i0.where t_i0_i0∈ Q_i follows from √(L) |t_i0|_i0_∞≤ |t_i0| √(d_i0)μ_h ≤√(2d_i0)μ. 0.5cm Now we are ready to estimate ^(0)_i(_i^(0))^* - _i0_i0^* _F as follows, ^(0)_i(_i^(0))^* - _i0_i0^* _F ≤^(0)_i(_i^(0))^* - t_i0_i0(_i^(0))^* _F + t_i0_i0(_i^(0))^* - _i0_i0^* _F ≤_i^(0) - t_i0_i0_i^(0)_I_1 +d_i/d_i0_i0^*_i0_i0_i0^* - _i0_i0^* _F_I_2.Here I_1≤ 2ξ d_i because _i^(0) = √(d_i) and _i^(0) - t_i0_i0≤ 2√(d_i)ξ follows from (<ref>) and (<ref>). For I_2, there holdsI_2= _i0_i0^*/d_i0(d_i _i0_i0^* - _i0_i0^*) _F ≤d_i _i0_i0^* - _i0_i0^*_F ≤ 2√(2)ξ d_i0,which follows from (<ref>). Therefore, ^(0)_i(_i^(0))^* - _i0_i0^* _F≤ 2ξ d_i+ 2 √(2)ξ d_i0≤ 5ξ d_i0≤2 d_i0/5√(s)κ. § APPENDIX §.§ Descent Lemma If f(, ) is a continuously differentiable real-valued function with two complex variablesand , (for simplicity, we just denote f(, ) by f() and keep in the mind that f() only assumes real values) for := (, ) ∈∩.Suppose that there exists a constant C_L such that∇ f( + t Δ) - ∇ f()≤ C_L tΔ, ∀ 0≤ t≤ 1,for all ∈∩ and Δ such that+ tΔ∈∩ and 0≤ t≤ 1. Thenf( + Δ) ≤ f() + 2( (Δ)^T ∇ f()) + C_LΔ^2where ∇ f() :=f(, )/ is the complex conjugate of ∇ f() =f(, )/∂.§.§ Concentration inequalityWe define the matrix ψ_1-norm via_ψ_1 := inf_u ≥ 0{[ exp(/u)] ≤ 2 }.  <cit.> Consider a finite sequence of _l of independent centered random matrices with dimension M_1× M_2. 
Assume that R : = max_1≤ l≤ L_l_ψ_1 and introduce the random matrix 𝒮 = ∑_l=1^L _l.Compute the variance parameterσ_0^2 = max{∑_l=1^L (_l_l^*), ∑_l=1^L (_l^* _l)},then for all t ≥ 0𝒮≤ C_0 max{σ_0 √(t + log(M_1 + M_2)), Rlog( √(L)R/σ_0)(t + log(M_1 + M_2)) }with probability at least 1 - e^-t where C_0 is an absolute constant.Let ∈^n ∼(, _n), then ^2 ∼1/2χ^2_2n and^2 _ψ_1 = , _ψ_1≤ C nand(^* - _n)^2 = n_n.Let ∈^n be any deterministic vector, then the following properties hold (^* - )_ψ_1≤ C√(n), [ (^* - )^* (^* - )] = ^2 _n.Let ∼(, _m)be a complex Gaussian random vector in ^m, independent of , then ·_ψ_1≤ C√(mn). § ACKNOWLEDGEMENTS.Ling would like to thank Felix Krahmer and Dominik Stöger for the discussion about <cit.>, and also thank Ju Sun for pointing out the connection between convolutional dictionary learning and this work. 10RR12 A. Ahmed, B. Recht, and J. Romberg. Blind deconvolution using convex programming. IEEE Transactions on Information Theory, 60(3):1711–1732, 2014.bristow2013fast H. Bristow, A. Eriksson, and S. Lucey. Fast convolutional sparse coding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 391–398, 2013.CLM16 T. T. Cai, X. Li, Z. Ma, et al. Optimal rates of convergence for noisy sparse phase retrieval via thresholded wirtinger flow. The Annals of Statistics, 44(5):2221–2251, 2016.CJ16b V. Cambareri and L. Jacques. Through the haze: A non-convex approach to blind calibration for linear random sensing models. arXiv preprint arXiv:1610.09028, 2016.CE07 P. Campisi and K. Egiazarian. Blind Image Deconvolution: Theory and Applications. CRC press, 2007.CESV11 E. Candès, Y. Eldar, T. Strohmer, and V. Voroninski. Phase retrieval via matrix completion. SIAM Journal on Imaging Sciences, 6(1):199–225, 2013.CR08 E. Candès and B. Recht. Exact matrix completion via convex optimization. Foundations of Computational Mathematics, 9(6):717–772, 2009.CLS14 E. J. Candes, X. Li, and M. Soltanolkotabi. 
Phase retrieval via Wirtinger flow: Theory and algorithms. IEEE Transactions on Information Theory, 61(4):1985–2007, 2015.CSV11 E. J. Candès, T. Strohmer, and V. Voroninski. Phaselift: Exact and stable signal recovery from magnitude measurements via convex programming. Communications on Pure and Applied Mathematics, 66(8):1241–1274, 2013.CRP12 V. Chandrasekaran, B. Recht, P. Parrilo, and A. Willsky. The convex geometry of linear inverse problems. Foundations of Computational Mathematics, 12(6):805–849, 2012.CC15 Y. Chen and E. Candes. Solving random quadratic systems of equations is nearly as easy as solving linear systems. In Advances in Neural Information Processing Systems, pages 739–747, 2015.EftW15 A. Eftekhari and M. B. Wakin. Greed is super: A fast algorithm for super-resolution. arXiv preprint arXiv:1511.03385, 2015.RM11AP R. Escalante and M. Raydan. Alternating Projection Methods, volume 8. SIAM, 2011.goldsmith2005wireless A. Goldsmith. Wireless Communications. Cambridge University Press, 2005.JungKS17 P. Jung, F. Krahmer, and D. Stöger. Blind demixing and deconvolution at near-optimal rate. arXiv preprint arXiv:1704.04178, 2017.KMO09b R. Keshavan, A. Montanari, and S. Oh. Matrix completion from noisy entries. In Advances in Neural Information Processing Systems, pages 952–960, 2009.KMO09 R. H. Keshavan, A. Montanari, and S. Oh. Matrix completion from a few entries. IEEE Transactions on Information Theory, 56(6):2980–2998, 2010.KolVal11 V. Koltchinskii et al. Von Neumann entropy penalization and low-rank matrix estimation. The Annals of Statistics, 39(6):2936–2973, 2011.LLB16 K. Lee, Y. Li, M. Junge, and Y. Bresler. Blind recovery of sparse signals from subsampled convolution. IEEE Transactions on Information Theory, 63(2):802–821, 2017.li2001direct X. Li and H. H. Fan. Direct blind multiuser detection for CDMA in multipath without channel estimation. IEEE Transactions on Signal Processing, 49(1):63–73, 2001.LLSW16 X. Li, S. Ling, T. Strohmer, and K. Wei. 
Rapid, robust, and reliable blind deconvolution via nonconvex optimization. arXiv preprint arXiv:1606.04933, 2016.LS15 S. Ling and T. Strohmer. Self-calibration and biconvex compressive sensing. Inverse Problems, 31(11):115002, 2015.LS17b S. Ling and T. Strohmer. Blind deconvolution meets blind demixing: Algorithms and performance bounds. IEEE Transactions on Information Theory, 63(7):4497–4520, July 2017.LXQZ09 J. Liu, J. Xin, Y. Qi, F.-G. Zheng, et al. A time domain algorithm for blind separation of convolutive sound mixtures and L_1 constrained minimization of cross correlations. Communications in Mathematical Sciences, 7(1):109–128, 2009.luenberger2015linear D. G. Luenberger and Y. Ye. Linear and Nonlinear Programming, volume 228. Springer, 2015.mccoy2013demixing M. B. McCoy and J. A. Tropp. Achievable performance of convex demixing. Technical report, Caltech, 2017, Paper dated Feb. 2013. ACM Technical Report 2017-02.shafi20175g M. Shafi, A. F. Molisch, P. J. Smith, T. Haustein, P. Zhu, P. De Silva, F. Tufvesson, A. Benjebbour, and G. Wunder. 5G: A tutorial overview of standards, trials, challenges, deployment, and practice. IEEE Journal on Selected Areas in Communications, 35(6):1201–1221, 2017.Stewart90 G. W. Stewart. Perturbation theory for the singular value decomposition. Technical Report CS-TR 2539, University of Maryland, September 1990.SJK16 D. Stöger, P. Jung, and F. Krahmer. Blind deconvolution and compressed sensing. In Compressed Sensing Theory and its Applications to Radar, Sonar and Remote Sensing (CoSeRa), 2016 4th International Workshop on, pages 24–27. IEEE, 2016.Str00 T. Strohmer. Four short stories about Toeplitz matrix calculations. Linear Algebra Appl., 343/344:321–344, 2002. Special issue on structured and infinite systems of linear equations.sudhakar2010double P. Sudhakar, S. Arberet, and R. Gribonval. Double sparsity: Towards blind estimation of multiple channels. In Latent Variable Analysis and Signal Separation, pages 571–578. 
Springer, 2010.SQW16it J. Sun, Q. Qu, and J. Wright. Complete dictionary recovery over the sphere I: Overview and the geometric picture. IEEE Transactions on Information Theory, 63(2):853–884, 2017.SQW16 J. Sun, Q. Qu, and J. Wright. A geometric analysis of phase retrieval. Foundations of Computational Mathematics, Aug 2017.SL16 R. Sun and Z.-Q. Luo. Guaranteed matrix completion via non-convex factorization. IEEE Transactions on Information Theory, 62(11):6535–6579, 2016.TBSR15 S. Tu, R. Boczar, M. Simchowitz, M. Soltanolkotabi, and B. Recht. Low-rank solutions of linear matrix equations via procrustes flow. In Proceedings of The 33rd International Conference on Machine Learning, pages 964–973, 2016.5Gbook R. Vannithamby and S. Talwar. Towards 5G: Applications, Requirements and Candidate Technologies. John Wiley & Sons, 2017.Ver98 S. Verdu. Multiuser Detection. Cambridge University Press, 1998.Ver10 R. Vershynin. Introduction to the non-asymptotic analysis of random matrices. In Y. C. Eldar and G. Kutyniok, editors, Compressed Sensing: Theory and Applications, chapter 5. Cambridge University Press, 2012.WP98 X. Wang and H. V. Poor. Blind equalization and multiuser detection in dispersive CDMA channels. IEEE Transactions on Communications, 46(1):91–103, 1998.Wedin72 P.-Å. Wedin. Perturbation bounds in connection with singular value decomposition. BIT Numerical Mathematics, 12(1):99–111, 1972.WCCL16 K. Wei, J.-F. Cai, T. F. Chan, and S. Leung. Guarantees of riemannian optimization for low rank matrix recovery. SIAM Journal on Matrix Analysis and Applications, 37(3):1198–1222, 2016.WGMM13 J. Wright, A. Ganesh, K. Min, and Y. Ma. Compressive principal component pursuit. Information and Inference, 2(1):32–68, 2013.WBSJ14 G. Wunder, H. Boche, T. Strohmer, and P. Jung. Sparse signal processing concepts for efficient 5G system design. IEEE Access, 3:195—208, 2015.
^1Chemical Physics Department, Weizmann Institute of Science, Rehovot 7610001, Israel^2Institute for Theoretical Physics, University of Amsterdam, Science Park 904, 1098 XH Amsterdam, The Netherlands^3School of Engineering and Applied Sciences, Harvard University, Cambridge 02138, USA Identifying heterogeneous structures in glasses — such as localized soft spots — and understanding structure-dynamics relations in these systems remain major scientific challenges. Here we derive an exact expression for the local thermal energy of interacting particles (the mean local potential energy change due to thermal fluctuations) in glassy systems by a systematic low-temperature expansion. We show that the local thermal energy can attain anomalously large values, inversely related to the degree of softness of localized structures in a glass, determined by a coupling between internal stresses — an intrinsic signature of glassy frustration —, anharmonicity and low-frequency vibrational modes. These anomalously large values follow a fat-tailed distribution, with a universal exponent related to the recently observed universal ω^4 density of states of quasi-localized low-frequency vibrational modes. When the spatial thermal energy field — a `softness field' — is considered, this power-law tail manifests itself by highly localized spots which are significantly softer than their surroundings. These soft spots are shown to be susceptible to plastic rearrangements under external driving forces, having predictive powers that surpass those of the normal-modes-based approach. These results offer a general, system/model-independent, physical-observable-based approach to identify structural properties of quiescent glasses and to relate them to glassy dynamics. 
Local thermal energy as a structural indicator in glasses Jacques Zylberg^1, Edan Lerner^2, Yohai Bar-Sinai^1,3 and Eran Bouchbinder^1 December 30, 2023 ================================================================================

Understanding the glassy state of matter remains one of the greatest challenges in condensed-matter physics and materials science <cit.>. In large part, this is due to the absence of well-established tools and concepts to quantify the disordered structures characterizing glassy materials — in sharp contrast to their ordered crystalline counterparts — and due to the lack of understanding of the relations between glassy structures and dynamics. Over the years, many attempts have been made to identify physical quantities that can indicate underlying local structures within glassy materials <cit.>. These indicators include, among others, free-volume <cit.>, internal stresses <cit.>, local elastic moduli <cit.>, local Debye-Waller factor <cit.>, coarse-grained energy and density <cit.>, locally favored structures <cit.>, short- and medium-range order <cit.> and various weighted sums over a system-dependent number of low-frequency normal modes <cit.>.

These quantities measure some properties of quiescent glasses, evaluated at or in the near vicinity of a mechanically (meta)stable state of a glass (an inherent structure). Some of these indicators are purely structural in nature, i.e. they are obtained from the knowledge of particle positions alone, while others require in addition the knowledge of inter-particle interactions. Recently, the local yield stress — the minimal local stress needed to trigger an irreversible plastic rearrangement — has been proposed as a structural indicator <cit.>. It requires, however, externally driving each local region in a glass to its nonlinear rearrangement threshold, and hence belongs to a different class of structural indicators compared to those previously mentioned.
The utility of each of the proposed indicators is usually assessed by looking for correlations between the revealed structures — typically localized soft spots — and glassy dynamics, either thermally-activated relaxation in the absence of external driving forces or localized irreversible plastic rearrangements under the application of global driving forces. In fact, a recent study established such structure-dynamics correlations by machine-learning techniques, leaving the precise physical nature of the underlying structural indicator unspecified <cit.>. These machine-learning-based structural indicators also belong to a different class of structural indicators since the training stage of the machine-learning algorithm requires knowledge of the plastic rearrangements themselves.Some of the previously proposed structural indicators have revealed a certain degree of correlation between identified soft spots and dynamics, providing important evidence that pre-existing localized structures in a glass significantly affect its dynamics. Yet, oftentimes the physical foundations of the structural indicators remain unclear, and they are sometimes defined algorithmically, but not derived from well-established physical observables. Moreover, their statistical properties are not commonly addressed, the relations between them and other basic physical quantities are not established and the fundamental reasons for them being particularly sensitive to underlying heterogeneous structures in glasses remain elusive.Here we propose a structural indicator of glassy `softness' — the local thermal energy (LTE) — which is a transparent physical observable derived by a systematic low-temperature expansion. We use the exact expression for the LTE of interacting particles to elucidate the underlying physical factors — most notably internal stresses, anharmonicity and nonlinear coupling to low-frequency vibrational modes — that give rise to significant spatial heterogeneities of softness. 
We show that the LTE can attain anomalously large values, directly related to particularly soft regions in a glass, which follow a fat-tailed distribution. The power-law exponent characterizing this distribution is shown to be universal and directly related to the recently observed universal ω^4 density of states of quasi-localized low-frequency vibrational modes <cit.>, constituting a link to a fundamental universal property of glassy systems. The LTE field, a `softness field', thus exhibits highly localized spots which are significantly softer than their surroundings. These soft spots are shown to be particularly susceptible to plastic rearrangements when the glass is being driven by external forces, having predictive powers that surpass those of the normal-modes-based approach <cit.>. As such, they can be identified with the long-sought-after glassy `flow defects', the so-called Shear-Transformation-Zones (STZ) <cit.>.

§ PHYSICAL OBSERVABLES IN THE LOW-TEMPERATURE LIMIT

Our starting point is the idea that the thermal average of local physical observables in a system equilibrated at a low temperature T is expected to be sensitive to the system's underlying structure <cit.>. Therefore, we first aim at deriving an expression for the thermal average of a general physical observable A, ⟨ A⟩__T, in the low-temperature limit. The latter is given by ⟨ A⟩__T = Z(T)^-1∫ A( x) exp(- U( x)/k_B T) d x, where the components of the vector x represent the deviations of the system's degrees of freedom from a (possibly local) minimum of its energy U( x), Z(T) = ∫exp(- U( x)/k_B T) d x is the partition function and k_B is the Boltzmann constant. ⟨ A⟩__T can be systematically expanded to leading order in T, yielding (see Supporting Information) (⟨ A⟩__T - A^(0))/((1/2) k_B T) ≃ ∂^2 A/∂ x∂ x : M^-1 - ∂ A/∂ x · M^-1 · U”' : M^-1, where M ≡ ∂^2 U/∂ x∂ x is the dynamical matrix, U”' ≡ ∂^3 U/∂ x∂ x∂ x is a third-order anharmonicity tensor and A^(0) ≡ lim_T→ 0⟨ A⟩__T. All derivatives are evaluated at the minimum of U, i.e.
at x=0. In obtaining (<ref>), higher-order terms in T were neglected. In the T→0 limit, these terms vanish and the right-hand-side (RHS) of (<ref>) represents an intrinsic property of an inherent structure, independent of temperature.

To gain some understanding of the physics encapsulated in (<ref>), let us briefly consider a few physical observables. Consider first the total energy A = U( x) in the quadratic (harmonic) approximation. In this case, the first (harmonic) term on the RHS of (<ref>) equals the number of degrees of freedom N and the second (anharmonic) term vanishes due to mechanical equilibrium, ∂ U/∂ x = 0. Consequently, we obtain ⟨ U⟩__T - U^(0) = (1/2)N k_B T, which is nothing but the equipartition theorem in the harmonic approximation <cit.>. Consider then a system whose energy U(X) depends on a single (scalar) macroscopic degree of freedom X, representing changes in its linear dimension relative to a reference stable state X=0. In this case, the first (harmonic) term on the RHS of (<ref>) vanishes and we obtain ⟨X⟩__T ≃ -(1/2) U”'( U”)^-2 k_B T + O(T^2), where a prime denotes a derivative with respect to X. This describes linear thermal expansion, which is well-known to be an intrinsically anharmonic physical effect proportional to U”' <cit.>. These examples both show that (<ref>) is fully consistent with well-established results (equipartition and thermal expansion) and highlight the anharmonic nature of the second term on the RHS of (<ref>).

The examples presented above focused on macroscopic (global) scalar observables. As our main interest is in spatial heterogeneity, we now consider microscopic (local) observables defined at the particles' level. We thus focus on the microscopic generalization of ⟨X⟩__T: the thermal displacement vector ⟨ x⟩__T, which represents the variation of the mean positions of particles about the equilibrium state once thermal fluctuations are introduced.
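The single-degree-of-freedom thermal-expansion result above can be checked numerically. The following minimal sketch (all parameter values are hypothetical, chosen only for illustration, and are not tied to any of the glass models discussed in this paper) compares the leading-order prediction ⟨X⟩_T ≃ -(1/2) U”'( U”)^-2 k_B T with a direct Boltzmann average for a one-dimensional anharmonic potential:

```python
import numpy as np

# Toy 1D potential U(x) = 0.5*k*x**2 + (g/6)*x**3 with a minimum at x = 0,
# so U''(0) = k and U'''(0) = g. The low-temperature expansion above then
# predicts <x>_T ~= -(1/2)*g*k**(-2)*kB*T. Parameter values are hypothetical.
k, g, kT = 1.0, 0.3, 1e-3   # stiffness, anharmonicity, k_B*T

def U(x):
    return 0.5 * k * x**2 + (g / 6.0) * x**3

# Direct Boltzmann average by quadrature, restricted to the basin around x = 0.
x = np.linspace(-1.0, 1.0, 200001)
w = np.exp(-(U(x) - U(x).min()) / kT)
x_avg = (x * w).sum() / w.sum()

# Leading-order anharmonic prediction.
x_pred = -0.5 * g * kT / k**2
```

The agreement at small k_B T illustrates that the shift of the mean position is an intrinsically anharmonic effect: it vanishes identically for g = 0.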
Using (<ref>), the normalized thermal average of x in the T→0 limit takes the form X ≡ lim_T→ 0⟨ x⟩__T/((1/2) k_B T) = -M^-1 · U”' : M^-1. Note the analogy between (<ref>) — which features a quadratic (nonlinear) coupling between the anharmonicity tensor U”' and the inverse of the dynamical matrix M^-1 — and the expression given above for ⟨X⟩__T. The components of the normalized thermal displacement vector X_i in (<ref>) should be distinguished from the local Debye-Waller factor x^2_i <cit.>, whose thermal average according to (<ref>) is given by ⟨ x^2_i ⟩__T = ( M^-1)_ii k_B T (no summation is implied). While ⟨ x^2_i ⟩__T is completely given by the first term on the RHS of (<ref>), which involves a single contraction of the inverse of the dynamical matrix M^-1, X_i is completely given by the second term, which involves two contractions with M^-1. As will be shown below, this distinction makes a qualitative difference. Moreover, X_i is directly sensitive to anharmonicity, while ⟨ x^2_i ⟩__T is independent of it. X, plotted in Fig. <ref> for a 2D model glass, is shown to exhibit significant spatial heterogeneity, suggesting that it is particularly sensitive to localized soft structures in glasses.

§ LOCAL THERMAL ENERGY

The normalized thermal displacement vector X, defined in (<ref>) and shown to exhibit strong spatial heterogeneity in Fig. <ref>, contributes to the thermal average of any physical observable ⟨ A⟩__T that features ∂ A/∂ x ≠ 0 at x = 0. It is important to emphasize the counter-intuitive result that for observables with ∂ A/∂ x ≠ 0, anharmonicity appears to be important at vanishingly small temperatures, independent of how well the harmonic approximation for the energy holds. Thus, on the face of it, the normalized thermal displacements X could have been a good candidate for an indicator of `softness' of the underlying structure.
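The tensorial expression for X above can likewise be verified on a toy system. The sketch below builds M and U”' by central finite differences for a hypothetical two-degree-of-freedom anharmonic potential (a stand-in, not one of the glass models used in this work), evaluates X, and checks it against a direct low-temperature Boltzmann average on a grid:

```python
import numpy as np

# Toy 2-DOF potential U = 0.5*x1^2 + 0.5*x2^2 + a*x1^2*x2 (hypothetical
# parameters; only the structure of the calculation mirrors the text).
a, kT = 0.2, 1e-3

def grad(x):
    x1, x2 = x
    return np.array([x1 + 2*a*x1*x2, x2 + a*x1**2])

# Dynamical matrix M and third-derivative tensor U''' by central differences
# of the analytic gradient, evaluated at the minimum x = 0.
eps, n = 1e-4, 2
M = np.zeros((n, n)); T3 = np.zeros((n, n, n))
for k in range(n):
    e = np.zeros(n); e[k] = eps
    M[:, k] = (grad(e) - grad(-e)) / (2 * eps)
    Hp = np.zeros((n, n)); Hm = np.zeros((n, n))
    for j in range(n):
        f = np.zeros(n); f[j] = eps
        Hp[:, j] = (grad(e + f) - grad(e - f)) / (2 * eps)
        Hm[:, j] = (grad(-e + f) - grad(-e - f)) / (2 * eps)
    T3[:, :, k] = (Hp - Hm) / (2 * eps)

Minv = np.linalg.inv(M)
X = -Minv @ np.einsum('ijk,jk->i', T3, Minv)   # normalized thermal displacement

# Direct check: low-T Boltzmann average of x2 on a grid; the text predicts
# <x2>_T ~= (1/2)*kB*T*X[1].
g1 = np.linspace(-0.4, 0.4, 801)
G1, G2 = np.meshgrid(g1, g1, indexing='ij')
Ug = 0.5*G1**2 + 0.5*G2**2 + a*G1**2*G2
w = np.exp(-(Ug - Ug.min()) / kT)
x2_avg = (G2 * w).sum() / w.sum()
```

For this potential the symmetry x1 → -x1 forces X[0] = 0, while X[1] = -2a, so the direct average should give ⟨x2⟩_T ≈ -a k_B T at leading order.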
However, we aim at proposing an observable that naturally `filters out' the regions of homogeneous, collective-translation-like motion exhibited by the thermal displacements, further exposing localized soft structures that exhibit large gradients.

Our goal now is to identify a physical observable A that can potentially serve as a `softness field', i.e. a local scalar that features a nonvanishing first spatial derivative and is particularly sensitive to gradients of X. Inspired by <cit.>, an observable that naturally suggests itself is the local potential energy ε_α, where α represents any pair of interacting particles and U = ∑_αε_α. Using Eqs. (<ref>)-(<ref>), we then define E_α ≡ lim_T → 0 (⟨ε_α⟩__T - ε_α^(0))/((1/2) k_B T) = ∂ f_α/∂ x : M^-1 + f_α·X, where f_α ≡ ∂ε_α/∂ x is the internal force vector acting between particles defining the interaction α.

Mechanical equilibrium at particle i implies that the sum of all the forces acting on it vanishes. In systems with no internal frustration, internal forces/stresses do not exist and this sum is trivially satisfied by having f_α = 0 for all α's. In systems with internal frustration, however, internal forces/stresses generically emerge, f_α ≠ 0. In the former case, the second term on the RHS of (<ref>) vanishes. Such internal-stress-free disordered systems were studied in <cit.>, where it was shown that under these conditions E_α is universally bounded between 0 and 1. This suggests that significant spatial heterogeneity in E_α cannot emerge in internal-stress-free systems.

An intrinsic signature of glassy systems is the existence of internal frustration <cit.> that leads to the emergence of internal forces/stresses, f_α ≠ 0 <cit.>. Consequently, we expect the f_α·X term on the RHS of (<ref>) to be generically non-zero for glasses. As X is already known to exhibit strongly localized structures, cf. Fig. <ref> (left), we expect f_α·X to expose localized regions with a very large concentration of the normalized LTE E_α.
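The decomposition of E_α into a harmonic term and an internal-force term can be made concrete on a minimal pre-stressed toy: a single degree of freedom held by two anharmonic "bonds" carrying equal and opposite internal forces ±f0 (a stand-in for glassy frustration; all parameters are hypothetical). The sketch below evaluates E_α from the formula above and checks it against a direct thermal average:

```python
import numpy as np

# One particle held by two pre-stressed anharmonic bonds, expanded about the
# equilibrium x = 0:
#   eps_1(x) = +f0*x + 0.5*k1*x**2 + (g1/6)*x**3
#   eps_2(x) = -f0*x + 0.5*k2*x**2 + (g2/6)*x**3
# The bonds carry internal forces +-f0 != 0 even though the total force vanishes.
f0, k1, k2, g1, g2, kT = 0.5, 1.0, 2.0, 1.5, -0.5, 1e-3

M = k1 + k2                 # dynamical "matrix" (a scalar here)
U3 = g1 + g2                # anharmonicity U'''
X = -U3 / M**2              # normalized thermal displacement
E1_pred = k1 / M + f0 * X   # harmonic term + internal-force term

# Direct check: normalized thermal average of eps_1 by quadrature.
x = np.linspace(-0.3, 0.3, 600001)
Utot = 0.5 * M * x**2 + (U3 / 6.0) * x**3
w = np.exp(-(Utot - Utot.min()) / kT)
eps1 = f0 * x + 0.5 * k1 * x**2 + (g1 / 6.0) * x**3
E1_num = (eps1 * w).sum() / w.sum() / (0.5 * kT)
```

Note that the internal-force contributions of the two bonds cancel in the sum E_1 + E_2, consistent with equipartition for the total energy, while each bond individually deviates from its harmonic share k_α/M.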
In fact, we expect the scalar product of f_α with X to amplify the spatial heterogeneity in X. To understand this, note that f_α is actually a force-dipole composed of two forces acting along the line connecting the particles that define the interaction α, in opposite directions. Therefore, f_α·X is exactly the difference between the values of X at the positions of the particles defining the interaction α, projected along the line connecting them, multiplied by | f_α|. Consequently, regions of homogeneous thermal displacements are expected to feature small values of f_α·X, while heterogeneous regions — cf. Fig. <ref> (left) — are expected to feature much larger values.

To test these ideas, we plot in Fig. <ref> (right) the normalized LTE E_α for the same glass realization shown in the left panel. The result is striking: E_α attains anomalously large values (both positive and negative) in localized regions where X exhibits marked heterogeneity. This observation provides strong visual evidence, to be quantified below, that E_α can be used to define a `softness field' that clearly identifies localized soft spots in glasses. Finally, note that E_α can also be measured directly by tracking thermal fluctuations in low-T dynamics. Two examples obtained by finite-T Molecular Dynamics (MD) simulations are shown in Fig. <ref> (inset), demonstrating perfect agreement with the exact expression in (<ref>).

§ UNIVERSAL ANOMALOUS STATISTICS

To quantify the degree of `softness' of soft spots revealed by E_α — cf. Fig. <ref> (right) — and its probability of occurrence, we focus next on the statistical properties of E_α. To this end, we argue that the statistics of normalized thermal energies E_α can be related to the density of vibrational frequencies D(ω). In particular, the form of Eqs. (<ref>)-(<ref>) suggests that soft vibrational modes, i.e. modes with small frequencies ω, give rise to large values of E_α due to the appearance of the inverse of the dynamical matrix M^-1.
Recently, it has been observed that low-frequency vibrations in glassy materials appear in two qualitatively different species: one is ordinary long-wavelength plane-waves and the other is disorder-induced soft glassy modes. The former are spatially extended objects, while the latter are quasi-localized objects characterized by a disordered core and a power-law tail <cit.>. Moreover, long-wavelength plane-waves follow a Debye density of states (DOS) D_D(ω)∼ω^(d-1) in d dimensions, while soft glassy modes follow a universal DOS D_G(ω)∼ω^4 <cit.>. We stress that our focus here is on generic glasses, which do not dwell near a jamming transition, where the physics is expected to change.

To proceed, note that E_α in (<ref>) has one contribution that involves a single contraction with M^-1 and another one that involves two contractions with M^-1, therefore the latter is expected to dominate the former. Consequently, we write E_α∼ f_α·X, whose eigen-decomposition takes the form E_α∼∑_i,j ( f_α·Ψ_i) c_ijj/(ω_i^2 ω_j^2) with c_ijj ≡ U”' : Ψ_iΨ_jΨ_j, where i,j run over all of the vibrational modes Ψ_i, defined by the eigenvalue equation M·Ψ_i = ω^2_i Ψ_i.

We argue that low-frequency plane-waves and quasi-localized soft glassy modes make qualitatively different contributions to the double sum in (<ref>). To see this, note that similarly to the discussion about the dipolar nature of f_α above, each contraction of U”' with a vibrational mode is proportional to the mode's spatial derivative (cf. Fig. 3 in <cit.>). For low-frequency plane-waves, each such derivative is proportional to the frequency ω, while for quasi-localized soft glassy modes the derivative is expected to attain a characteristic value that is nearly independent of frequency.
Consequently, since c_ijj∼ω^3 and f_α·Ψ_i∼ω for plane-waves (which we have numerically verified), we expect their contribution to be negligible compared to that of quasi-localized soft glassy modes, and hence the above double sum is now understood to be dominated by the latter. Next, since different quasi-localized soft glassy modes are spatially well separated, we expect c_ijj for i ≠ j to be much smaller than c_iii, such that E_α∼∑_i ( f_α·Ψ_i) c_iii ω_i^-4. Finally, as the internal force f_α is localized at the α-th interaction, only the glassy mode that is localized there will contribute to the sum, leading to E_α∼ω^-4.

Equation (<ref>), which is verified below, establishes an important relation between the LTE E_α and the frequency of vibrational modes ω. In fact, it constitutes a relation between E_α and the local stiffness κ≡ω^2, E_α∼κ^-2, showing that particularly soft excitations, κ→0, correspond to anomalously large values of the LTE E_α. This justifies the assertion that E_α quantifies the degree of softness of glassy structures.

Using (<ref>) and the universal relation D_G(ω)∼ω^4, the probability distribution function p( E_α) is obtained as p( E_α) = D_G[ω( E_α)] |dω( E_α)/d E_α| ∼ E_α^-1 E_α^-5/4 ∼ E_α^-9/4. Note that in the above discussion we implicitly used the fact that the magnitude of the internal forces | f_α| has a characteristic value, as shown in Supporting Information. The prediction in (<ref>) has far-reaching implications. First, it suggests that the physical observable E_α, i.e. the LTE, effectively filters out the effect of low-frequency plane-waves, which are known to obscure the origin of many glassy effects <cit.>. In fact, when low-frequency plane-waves coexist with quasi-localized soft glassy modes in the same frequency range, they hybridize such that glassy modes acquire spatially extended background displacements and appear to lose their quasi-localized nature.
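The change of variables from D_G(ω)∼ω^4 to p( E_α)∼ E_α^-9/4 can be checked by a short Monte Carlo sketch: sample frequencies from the glassy DOS, map them through E∼ω^-4, and estimate the tail index with a Hill estimator (the survival function P(E>t)∼t^-5/4 corresponds to a density exponent of 9/4):

```python
import numpy as np

rng = np.random.default_rng(0)
n, w_max = 200000, 1.0

# Sample frequencies from D_G(w) ~ w^4 on (0, w_max] via the inverse CDF:
# the CDF is (w/w_max)^5, so w = w_max * u^(1/5) with u uniform on (0, 1].
u = 1.0 - rng.random(n)
w = w_max * u**(1.0 / 5.0)

# E ~ w^(-4)  =>  P(E > t) = P(w < t^(-1/4)) = t^(-5/4) for t >= 1,
# i.e. a density p(E) ~ E^(-9/4) in the large-E tail.
E = w**(-4.0)

# Hill estimator of the survival-tail exponent from the top k order statistics.
k = 20000
Es = np.sort(E)[::-1]
alpha_hat = k / np.log(Es[:k] / Es[k]).sum()
density_exponent = 1.0 + alpha_hat   # should be close to 9/4
```

Here E is, by construction, exactly Pareto-distributed with survival exponent 5/4, so the estimator directly probes the -9/4 density exponent quoted in the text.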
The derivation leading to (<ref>) assumed that E_α is insensitive to hybridization and that the D_G(ω)∼ω^4 distribution remains physically meaningful — i.e. it still characterizes the probability to find a soft localized structure in a glass — even in the presence of hybridization, when it cannot be directly probed by a harmonic normal modes analysis. Second, the prediction in (<ref>) rationalizes the existence of anomalously soft localized spots in glassy materials and predicts their probability.

To test the prediction in (<ref>) and its degree of universality, we performed extensive numerical simulations of different computer glass-forming models: (i) a binary system of point-like particles interacting via inverse power-law purely repulsive pairwise potentials in 2D (2DIPL) and 3D (3DIPL) <cit.>; (ii) the canonical Kob-Andersen binary Lennard-Jones (3DKABLJ) system <cit.> in 3D (see Supporting Information for details about models and methods), in order to extract the statistics of E_α according to (<ref>). The results are summarized in Fig. <ref>. All of the glasses considered exhibit a power-law tail with a universal exponent fully consistent with the theoretically predicted -9/4 exponent. These results lend strong support to the prediction in (<ref>) and therefore also implicitly to its underlying assumptions. The results presented in this section explain the physical origin of the sensitivity of E_α to soft glassy structures, elucidate its anomalous statistical properties and establish a relation between its statistical properties and the recently observed universal ω^4 density of states of quasi-localized low-frequency vibrational modes <cit.>, a fundamental property of glasses. Next, we would like to explore the possibility of defining mesoscopic soft spots based on E_α and their predictive powers.

§ SOFTNESS FIELD AND PREDICTING PLASTIC REARRANGEMENTS

The normalized LTE E_α is microscopically defined for any interaction α. In Fig.
<ref> (left) we present yet another example of the spatial map of E_α, here for a larger system compared to Fig. <ref> (right). A continuous field can be naturally constructed by coarse-graining | E_α| on a scale larger than the particle scale. We use | E_α| because anomalously large negative and positive values of E_α are strongly correlated in space. Coarse-graining is achieved by discretizing space into bins containing at least two bonds each, assigning to each bin a softness obtained by averaging the values of | E_α| of bonds belonging to it and finally by averaging the bin's value with the values of all bins in the first layer of neighboring bins (see Supporting Information). Applying this procedure to Fig. <ref> (left) yields Fig. <ref> (right), which we treat as a `softness field'. Our goal now is to test the predictive powers of this softness field in relation to glassy dynamics. The latter, either thermally-activated relaxation in non-driven conditions or plastic rearrangements under external driving forces, entails crossing some activation barriers. Activation barriers revealed by soft localized vibrational modes Ψ_i of frequency ω_i are small, of order ω_i^6/c_iii^2 in the leading anharmonic expansion of the energy <cit.>. Hence, we expect that regions that feature large values of | E_α| will be particularly susceptible to plastic rearrangements. To test this, we applied global quasi-static shear deformation in a certain direction, under athermal conditions, to each glass realization — such as the one shown in Fig. <ref> (right) — and measured the locations of the first few discrete irreversible plastic rearrangements, as described in Supporting Information. The advantage of this T = 0 protocol is that it allows us to uniquely and unquestionably identify the discrete irreversible plastic rearrangements. The locations of the first 5 discrete irreversible plastic rearrangements (events) were superimposed on the softness field in Fig. <ref> (right).
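The coarse-graining step described above can be sketched on synthetic bond data. The version below simplifies the paper's procedure in a few labeled ways: it uses a fixed grid rather than adapting bins to contain at least two bonds, zeroes empty bins, and wraps the neighbor average periodically; the bond positions and |E_α| values are random stand-ins:

```python
import numpy as np

rng = np.random.default_rng(1)
L, nb = 10.0, 40   # box size and number of bins per side (hypothetical)

# Stand-in data: bond midpoints and |E_alpha| values with a fat tail.
pos = rng.random((5000, 2)) * L
vals = np.abs(rng.standard_cauchy(5000))

# Bin-average |E_alpha|: sum per bin divided by bond count per bin.
s, xe, ye = np.histogram2d(pos[:, 0], pos[:, 1], bins=nb,
                           range=[[0, L], [0, L]], weights=vals)
c, _, _ = np.histogram2d(pos[:, 0], pos[:, 1], bins=nb,
                         range=[[0, L], [0, L]])
binned = np.where(c > 0, s / np.maximum(c, 1), 0.0)

# Average each bin with its first layer of neighbors (3x3 box, periodic wrap).
shifts = [(i, j) for i in (-1, 0, 1) for j in (-1, 0, 1)]
softness = sum(np.roll(np.roll(binned, i, 0), j, 1) for i, j in shifts) / 9.0
```

The resulting `softness` array plays the role of the coarse-grained softness field; local maxima of this field are the candidate soft spots.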
The first 4 plastic events overlap soft spots identified by the softness field, indicating a high degree of predictiveness of E_α.

To quantify the degree of predictiveness of the LTE E_α, we extracted the location of soft spots from the spatial distribution of E_α, for example the one shown in Fig. <ref>, as described in Supporting Information. In addition to its location, each soft spot is characterized by its degree of softness, representing the average value of | E_α| in its near vicinity (see Supporting Information). As the fat-tailed distribution in (<ref>) predicts very large variability in the degree of softness of different soft spots within a single glass realization and among different realizations, we define Δ_ E of each soft spot as the maximal degree of softness in a given realization divided by the spot's degree of softness. That way we standardize the degree of softness such that the softest spot in each realization has Δ_ E = 1 and not-as-soft spots have Δ_ E>1. Then each plastic event of ordinal number n (n = 1 for the first event, n = 2 for the second, etc.) is associated with the soft spot that is closest to it in space (see Supporting Information). We stress that the soft spots are extracted for the non-sheared system, and are not updated between plastic events.

The cumulative distribution function F_n(Δ_ E), quantifying the fraction of plastic events of ordinal number n being closest to soft spots characterized by a value equal to or smaller than Δ_ E, is constructed by collecting data from 5000 independent simulations of 2DIPL computer glasses. F_n(Δ_ E) for n = 1, 2, 3 is shown in Fig. <ref> (left, full symbols). As expected, the smaller n is, the larger the predictive power. Moreover, it is observed that about 20% of the first plastic events (i.e. n = 1) are predicted by the softest spot in each realization and nearly 70% are predicted by soft spots with Δ_ E≤2.
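The standardization and event-to-spot association described above amount to simple bookkeeping, sketched here on randomly generated stand-in data (spot positions, softness values and the event location are all hypothetical):

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in soft spots for one realization: positions and degrees of softness.
spots = rng.random((20, 2)) * 10.0
soft = np.abs(rng.standard_cauchy(20))

# Standardize: the softest spot in the realization has Delta_E = 1,
# stiffer spots have Delta_E > 1.
Delta_E = soft.max() / soft

# Associate a plastic event with the soft spot closest to it in space.
event = np.array([5.0, 5.0])
nearest = np.argmin(np.linalg.norm(spots - event, axis=1))
Delta_of_event = Delta_E[nearest]
```

Collecting `Delta_of_event` over many realizations, separately for each event ordinal n, and accumulating the empirical CDF yields F_n(Δ_E).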
To assess how good these predictive powers are, we need some reference case to compare to, which we consider next.

§.§ Comparison to the normal-modes-based approach

Among the many structural indicators studied over the years, cf. the introduction above, the normal-modes-based approach <cit.> stands out owing to the relatively high correlations between structure and dynamics it exhibits. The basic idea behind this approach is that while a single low-lying normal mode Ψ_i does not clearly exhibit localized structures, possibly due to hybridization, some weighted sum over a system-dependent number of normal modes does reveal such structures. We use this approach here in order to compare its predictions to the predictions obtained above based on the LTE. In particular, we follow <cit.> and construct maps analogous to Fig. <ref> (left) and Fig. <ref> (right) by summing the norm squared of the components of low-lying normal modes Ψ_i at each particle over the first 30 non-zero modes, i.e. ∑_i=1^30|Ψ^(j)_i|^2 for every particle j. Here Ψ^(j)_i≡(Ψ^(j)_i,x, Ψ^(j)_i,y) are the components of the normal mode Ψ_i at particle j and x, y are the axes directions in a global 2D Cartesian coordinate system.

Once the normal-mode maps are constructed (see Supporting Information for more details), we applied the same procedure described above and calculated the cumulative distribution function F_n(Δ_ E) based on them. The results are superimposed on the LTE results in Fig. <ref> (left, empty symbols). The comparison reveals that the thermal-energy-based approach significantly outperforms the normal-modes-based approach. This is quantified in Fig.
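The per-particle weight entering the normal-modes-based maps can be sketched as follows. A real calculation would diagonalize the glass Hessian obtained from the pair potential; here a random symmetric positive-definite matrix is used as a shape-compatible stand-in, with the DOF ordering convention (particle index, then Cartesian component) assumed:

```python
import numpy as np

rng = np.random.default_rng(2)
N, dim, n_modes = 100, 2, 30   # particles, dimensions, modes summed (as in the text)

# Stand-in "dynamical matrix": symmetric positive-definite. In an actual glass
# this would be the Hessian of U at an inherent structure.
A = rng.standard_normal((dim * N, dim * N))
Mdyn = A @ A.T / (dim * N) + 0.1 * np.eye(dim * N)

evals, evecs = np.linalg.eigh(Mdyn)   # eigenfrequencies^2 in ascending order
low = evecs[:, :n_modes]              # the 30 lowest modes

# Per-particle weight: sum over the low modes of |Psi_i^(j)|^2, where both
# Cartesian components of particle j are included.
weights = (low.reshape(N, dim, n_modes) ** 2).sum(axis=(1, 2))
```

Because the eigenvectors are orthonormal, the weights sum to `n_modes` over all particles; particles with anomalously large weights mark candidate soft regions in this approach.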
<ref> (right), where we plot the ratio of F_n(Δ_E) for the two approaches for n=1, 2, 3, δF_n(Δ_E), demonstrating that the thermal-energy-based approach outperforms the normal-modes-based approach by up to a factor of 1.85 for n=1 and up to a factor of 3.3 for n=3.

We thus conclude that the LTE has predictive powers that surpass those of the normal-modes-based approach. Can we also assess its predictive powers in absolute terms? To address this question, one should note that soft spots are expected to be anisotropic objects <cit.> characterized by orientation and polarity, and hence feature variable coupling to shearing in various directions. That is, they are expected to be spin-like objects. Consequently, a spot which is very soft in a given direction may not undergo a rearrangement if the projection of the driving force on its soft direction is small. Hence, the optimal predictive power based on the degree of softness alone — a scalar measure — may be significantly smaller than unity. In particular, assuming a uniform/isotropic orientational distribution of equally-soft spots, a naive estimation indicates that only 25% of them will rearrange under shearing in a given direction. As a result, the ∼20% predictive power of the softest spot in each realization, cf. Fig. <ref> (left, full symbols, n=1), may in fact be not so far from the optimal scalar predictiveness level. The optimal scalar predictiveness issue certainly deserves further investigation.

§ CONCLUSION

We have shown that the low-temperature LTE E_α is a physical observable that is particularly sensitive to localized soft structures in glasses. E_α effectively filters out the contribution of long-wavelength plane-waves, hence it is dominated by soft glassy vibrational modes alone.
This property allows us to establish a quantitative relation between the recently observed universal distribution of soft glassy vibrational modes, D_G(ω)∼ω^4 in the limit of small frequencies ω, and the distribution of the LTE, p(E_α)∼E_α^-9/4 in the limit of large E_α. This universal anomalous, fat-tailed distribution of E_α has been supported by extensive simulations on various computer glass-formers in 2D and 3D.

While the problem of coexistence and hybridization of long-wavelength plane-waves and soft vibrational modes, which has hampered a direct observation of soft quasi-localized glassy modes and their statistical distribution for a long time, will be addressed elsewhere, we stress that our results have potentially important implications in this context. The universal fat-tailed distribution p(E_α)∼E_α^-9/4 has been theoretically derived based on the DOS of soft quasi-localized vibrational modes D_G(ω)∼ω^4. Yet, the LTE E_α is a physical quantity that is defined without any explicit reference to soft quasi-localized vibrational modes or to any harmonic normal modes analysis. Consequently, it should be valid in the thermodynamic limit, where the harmonic normal modes analysis may neither cleanly reveal soft quasi-localized vibrational modes nor their ω^4 DOS. As such, it suggests that the ω^4 distribution has a physical meaning that goes beyond the eigenvalues of harmonic normal modes, where κ∼ω̄^2 is a generalized measure of the stiffness of localized soft glassy structures <cit.>.

The universal anomalous distribution of E_α and its relation to the universal localized glassy modes DOS imply the existence of highly localized and soft structures in glassy materials. Consequently, E_α forms a softness field that naturally reveals soft spots. These soft spots are expected to be characterized by particularly small activation barriers and hence to predict the loci of plastic rearrangements under shearing. As such, these soft spots are natural candidates for STZ <cit.>.
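The exponent bookkeeping behind the relation between the two universal laws quoted above can be written out as a change of variables. Note that the scaling E_α ∼ ω^{-4} used in the first step is our shorthand for connecting the two exponents (it is the unique power law consistent with them), not a restatement of the paper's full derivation:

```latex
% Conservation of probability under the change of variables
% \omega \to E_\alpha, assuming E_\alpha \sim \omega^{-4}:
p(E_\alpha)\,\mathrm{d}E_\alpha = D_G(\omega)\,\mathrm{d}\omega ,
\qquad \omega \sim E_\alpha^{-1/4}
\;\Longrightarrow\;
p(E_\alpha) \sim \omega^4 \left|\frac{\mathrm{d}\omega}{\mathrm{d}E_\alpha}\right|
\sim E_\alpha^{-1}\cdot E_\alpha^{-5/4} = E_\alpha^{-9/4}.
```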
The predictive powers of the LTE have been substantiated by extensive numerical simulations and have been shown to be superior to those of the normal-modes-based structural indicator.

Our approach offers a general, system/model-independent, physical-observable-based framework to identify structural properties of quiescent glasses and to relate them to glassy dynamics. In particular, the identified field of soft spots and its time-evolution under external driving forces should play a major role in theories of plasticity of amorphous materials, serving to define a population of STZ <cit.>. The predictive powers of our approach have been demonstrated here for plastic rearrangements in athermal quasi-statically driven systems. An important future challenge would be to test whether and to what extent these predictive powers persist at finite temperatures — possibly up to the glass transition region — and finite strain rates. It should also be tested against thermally-activated relaxation in the absence of external driving forces. Finally, as mentioned above, an interesting direction would be to go beyond the scalar degree of softness measure by incorporating orientational information into a generalized structural indicator.

Acknowledgement E.B. acknowledges support from the Harold Perlman Family Foundation, and the William Z. and Eda Bess Novick Young Scientist Fund.

Alexander1998 Alexander S (1998) Amorphous solids: their structure, lattice dynamics and elasticity. Phys Rep 296(2):65–236.
Dyre2006 Dyre JC (2006) Colloquium: The glass transition and elastic models of glass-forming liquids. Rev Mod Phys 78(3):953–972.
Cavagna2009 Cavagna A (2009) Supercooled liquids for pedestrians. Phys Rep 476(4–6):51–124.
Berthier2011 Berthier L, Biroli G (2011) Theoretical perspective on the glass transition and amorphous materials.
Rev Mod Phys 83(2):587–645.
binder_kob_book Binder K, Kob W (2011) Glassy Materials and Disordered Solids: An Introduction to Their Statistical Mechanics (Revised Edition). (World Scientific).
Glen Hocky GM, Coslovich D, Ikeda A, Reichman DR (2014) Correlation of local order with particle mobility in supercooled liquids is highly system dependent. Phys Rev Lett 113(15):157801.
aharonov2007 Aharonov E, Bouchbinder E, Hentschel HGE, Ilyin V, Makedonska N, Procaccia I, Schupper N (2007) Direct identification of the glass transition: Growing length scale and the onset of plasticity. Europhys Lett 77(5):56002.
Jack2014 Jack RL, Dunleavy AJ, Royall CP (2014) Information-theoretic measurements of coupling between structure and dynamics in glass formers. Phys Rev Lett 113(9):095703.
Royall20151 Royall CP, Williams SR (2015) The role of local structure in dynamical arrest. Phys Rep 560:1–75.
spaepen1977 Spaepen F (1977) A microscopic mechanism for steady state inhomogeneous flow in metallic glasses. Acta Metall 25(4):407–415.
spaepen2006 Spaepen F (2006) Homogeneous flow of metallic glasses: A free volume perspective. Scripta Mater 54(3):363–367.
WidmerCooper2006 Widmer-Cooper A, Harrowell P (2006) Free volume cannot explain the spatial heterogeneity of Debye–Waller factors in a glass-forming binary alloy. J Non-Cryst Solids 352(42-49):5098–5102.
tau-defects Srolovitz D, Maeda K, Vitek V, Egami T (1981) Structural defects in amorphous solids statistical analysis of a computer model. Philos Mag A 44(4):847–866.
Barrat_2009 Tsamados M, Tanguy A, Goldenberg C, Barrat JL (2009) Local elasticity map and plasticity in a model Lennard-Jones glass. Phys Rev E 80(2):026112.
Asaph Widmer-Cooper A, Harrowell P (2006) Predicting the long-time dynamic heterogeneity in a supercooled liquid on the basis of short-time heterogeneities. Phys Rev Lett 96(18):185701.
Matharoo2006 Matharoo GS, Razul MSG, Poole PH (2006) Structural and dynamical heterogeneity in a glass-forming liquid.
Phys Rev E 74(5):050502.
Berthier2007 Berthier L, Jack RL (2007) Structure and dynamics of glass formers: Predictability at large length scales. Phys Rev E 76(4):041509.
Coslovich2007 Coslovich D, Pastore G (2007) Understanding fragility in supercooled Lennard-Jones mixtures. I. Locally preferred structures. J Chem Phys 127(12):124504.
RoyallTanaka2008 Royall C, Williams S, Ohtsuka T, Tanaka H (2008) Direct observation of a local structural mechanism for dynamic arrest. Nat Mater 7:556–561.
Malins2013 Malins A, Eggers J, Royall CP, Williams SR, Tanaka H (2013) Identification of long-lived clusters and their link to slow dynamics in a model glass former. J Chem Phys 138(12):12A535.
Shi2005 Shi Y, Falk ML (2005) Strain localization and percolation of stable structure in amorphous solids. Phys Rev Lett 95(9):095502.
Tanaka2005 Tanaka H (2005) Relationship among glass-forming ability, fragility, and short-range bond ordering of liquids. J Non-Cryst Solids 351(8–9):678–690.
Kawasaki2007 Kawasaki T, Araki T, Tanaka H (2007) Correlation between dynamic heterogeneity and medium-range order in two-dimensional glass-forming liquids. Phys Rev Lett 99(21):215701.
widmer2008irreversible Widmer-Cooper A, Perry H, Harrowell P, Reichman DR (2008) Irreversible reorganization in a supercooled liquid originates from localized soft modes. Nat Phys 4(9):711–715.
tanguy2010 Tanguy A, Mantisi B, Tsamados M (2010) Vibrational modes as a predictor for plasticity in a model glass. Europhys Lett 90(1):16004.
manning2011 Manning ML, Liu AJ (2011) Vibrational modes identify soft spots in a sheared disordered packing. Phys Rev Lett 107(10):108302.
rottler_normal_modes Rottler J, Schoenholz SS, Liu AJ (2014) Predicting plasticity with soft vibrational modes: From dislocations to glasses. Phys Rev E 89(4):042304.
Mosayebi2014 Mosayebi M, Ilg P, Widmer-Cooper A, Del Gado E (2014) Soft modes and nonaffine rearrangements in the inherent structures of supercooled liquids.
Phys Rev Lett 112(10):105503.
Schoenholz2014 Schoenholz SS, Liu AJ, Riggleman RA, Rottler J (2014) Understanding plastic deformation in thermal glasses from single-soft-spot dynamics. Phys Rev X 4(3):031014.
Ding2014 Ding J, Patinet S, Falk ML, Cheng Y, Ma E (2014) Soft spots and their structural signature in a metallic glass. Proc Natl Acad Sci USA 111(39):14052–14056.
falk_local_yield Patinet S, Vandembroucq D, Falk ML (2016) Connecting local yield stresses with plastic activity in amorphous solids. Phys Rev Lett 117(4):045501.
Cubuk2015 Cubuk ED, Schoenholz SS, Rieser JM, Malone BD, Rottler J, Durian DJ, Kaxiras E, Liu AJ (2015) Identifying structural flow defects in disordered solids using machine-learning methods. Phys Rev Lett 114(10):108001.
machine_learning Schoenholz SS, Cubuk ED, Sussman DM, Kaxiras E, Liu AJ (2016) A structural approach to relaxation in glassy liquids. Nat Phys 12(5):469–471.
modes_prl Lerner E, Düring G, Bouchbinder E (2016) Statistics and properties of low-frequency vibrational modes in structural glasses. Phys Rev Lett 117(3):035501.
Mizuno_arXiv Mizuno H, Shiba H, Ikeda A (2017) Continuum limit of the vibrational properties of amorphous solids. arXiv preprint arXiv:1703.10004.
argon_st Argon A (1979) Plastic deformation in metallic glasses. Acta Metall 27(1):47–58.
falk_langer_stz Falk ML, Langer JS (1998) Dynamics of viscoplastic deformation in amorphous solids. Phys Rev E 57(6):7192–7205.
Yohai Bar-Sinai Y, Bouchbinder E (2015) Spatial distribution of thermal energy in equilibrium. Phys Rev E 91(6):060103.
chaikin2000principles Chaikin PM, Lubensky TC (2000) Principles of condensed matter physics. (Cambridge University Press).
Tarjus2005 Tarjus G, Kivelson SA, Nussinov Z, Viot P (2005) The frustration-based approach of supercooled liquids and the glass transition: a review and critical assessment. J Phys Condens Matter 17(50):R1143.
micromechanics Lerner E (2016) Micromechanics of nonlinear plastic modes.
Phys Rev E 93(5):053004.
manning2015 Wijtmans S, Manning ML (2015) Disentangling defects and sound modes in disordered solids. arXiv preprint arXiv:1502.00685.
nonlinear_modes_scipost Gartner L, Lerner E (2016) Nonlinear modes disentangle glassy and Goldstone modes in structural glasses. SciPost Phys 1(2):016.
kablj Kob W, Andersen HC (1995) Testing mode-coupling theory for a supercooled binary Lennard-Jones mixture I: The van Hove correlation function. Phys Rev E 51(5):4626–4641.
luka Gartner L, Lerner E (2016) Nonlinear plastic modes in disordered solids. Phys Rev E 93(1):011001.
lemaitre2006_avalanches Maloney CE, Lemaître A (2006) Amorphous systems in athermal, quasistatic shear. Phys Rev E 74(1):016118.
bouchbinder2007athermal Bouchbinder E, Langer JS, Procaccia I (2007) Athermal shear-transformation-zone theory of amorphous plastic deformation. I. Basic principles. Phys Rev E 75(3):036107.
bouchbinder2009nonequilibrium Bouchbinder E, Langer JS (2009) Nonequilibrium thermodynamics of driven amorphous materials. III. Shear-transformation-zone plasticity. Phys Rev E 80(3):031133.
bouchbinder2011linear Bouchbinder E, Langer JS (2011) Linear response theory for hard and soft glassy materials. Phys Rev Lett 106(14):148301.
falk2011deformation Falk ML, Langer JS (2011) Deformation and failure of amorphous, solidlike materials. Annu Rev Condens Matter Phys 2(1):353–373.

Supporting Information
http://arxiv.org/abs/1703.09014v2
{ "authors": [ "Jacques Zylberg", "Edan Lerner", "Yohai Bar-Sinai", "Eran Bouchbinder" ], "categories": [ "cond-mat.soft", "cond-mat.mtrl-sci", "cond-mat.stat-mech" ], "primary_category": "cond-mat.soft", "published": "20170327111648", "title": "Local thermal energy as a structural indicator in glasses" }
Xiaoshuai Zhu xszhu@bao.ac.cn Key Laboratory of Solar Activity, National Astronomical Observatories, Chinese Academy of Sciences; xszhu@bao.ac.cn Key Laboratory of Modern Astronomy and Astrophysics (Nanjing University), Ministry of Education, Nanjing 210093, China Key Laboratory of Solar Activity, National Astronomical Observatories, Chinese Academy of Sciences; xszhu@bao.ac.cn University of Chinese Academy of Sciences, China; hnwang@nao.cas.cn Nanjing University, China; xincheng@nju.edu.cn Key Laboratory of Modern Astronomy and Astrophysics (Nanjing University), Ministry of Education, Nanjing 210093, China Institute of Space Sciences and School of Space Science and Physics, Shandong University, Weihai 264209, China Key Laboratory of Solar Activity, National Astronomical Observatories, Chinese Academy of Sciences; xszhu@bao.ac.cn

We investigate the three-dimensional (3D) magnetic structure of a blowout jet that originated at the west edge of NOAA Active Region (AR) 11513 on 02 July 2012 by means of the recently developed forced field extrapolation (FFE) model. The results show that the blowout jet was caused by the eruption of a magnetic flux rope (MFR) consisting of twisted field lines. We further calculate the twist number 𝒯_w and squashing factor Q of the reconstructed magnetic field and find that (1) the MFR corresponds well to the high 𝒯_w region and (2) the MFR outer boundary corresponds well to the high Q region, probably explaining the bright structure at the base of the jet. The twist number of the MFR is estimated to be 𝒯_w=-1.54± 0.67. Thus, the kink instability is regarded as the initiation mechanism of the blowout jet, as 𝒯_w reaches or even exceeds the threshold value of the kink instability. Our results also indicate that the bright point in the decaying phase is actually composed of some small loops that are heated by the reconnection occurring above.
In summary, the blowout jet is mostly consistent with the scenario proposed by <cit.> except that the kink instability is found to be a possible trigger.

§ INTRODUCTION

The concept of a “blowout jet” was first introduced by <cit.> based on the morphological description of X-ray jets in the Hinode/X-ray Telescope movie. The broad spire and bright base arch of a “blowout jet” distinguish it from the thin spire and relatively dim base arch of a standard jet. In the widely accepted scenario of a blowout jet <cit.>, the sheared or twisted arch field is supposed to emerge from below the photosphere, forming a current sheet at the interface between the arch field and the ambient open field. The onset of magnetic reconnection at the current sheet initially shows features similar to those of a standard jet; then the sheared or twisted arch field erupts outward as the key structure of a CME <cit.>. Among the 109 jets examined in <cit.> and <cit.>, 50 are blowout, 53 are standard, and 6 are ambiguous.

A minifilament, whose magnetic structure is argued to be a helical MFR, is often observed at the base of the blowout jet <cit.>. The existence of the MFR is supported by the helical structure in the spire during the untwisting motion of the erupting mass <cit.>. In addition, 3D magnetohydrodynamic (MHD) jet models <cit.> based on the eruption of a twisted magnetic field show the same helical motion as the observations.

Magnetic structures of the source regions of blowout jets have been modeled using potential <cit.>, linear <cit.> and non-linear force-free models <cit.>. These previous works only unveiled the weakly sheared core fields and open ambient fields. The twisted MFRs, in which more magnetic free energy is stored to power the blowout jets, however, have never been disclosed. In this letter, we report an MFR at the base of a blowout jet. The MFR is successfully reconstructed by the recently developed FFE model <cit.>.
To study the change of the magnetic field, we make a time series of extrapolations using Helioseismic and Magnetic Imager <cit.> vector magnetograms. We further calculate the twist number <cit.> and squashing factor <cit.> to study the properties of the MFR. The observational data sets are described in Section <ref>, the evolution of the blowout jet is presented in Section <ref>, and the extrapolation results are analyzed in Section <ref>, followed by the discussion and conclusions in Section <ref>.

§ OBSERVATIONAL DATA

Recurrent jets were observed at the boundary between AR 11513 and the neighboring coronal hole on July 2, 2012 (see the white box in Figure <ref>). Our attention is paid to the blowout jet that occurred at 21:12 UT.

HMI onboard the Solar Dynamics Observatory <cit.> provides 45-second line-of-sight (LOS) magnetograms and 12-minute vector magnetograms, both with a pixel size of 0.5”. The Atmospheric Imaging Assembly <cit.>, also on board SDO, provides full-disk images of the solar corona at multiple EUV passbands with a cadence of 12 seconds and a pixel size of 0.6”. We also used Hα data observed at the Big Bear Solar Observatory (BBSO) to study the evolution of the jet in the chromosphere.

§ EVOLUTION OF THE BLOWOUT JET

Figure <ref> and the online movie show the blowout jet in different passbands. Here, the evolution is divided into three stages.

First stage: before 21:11, the jet's base appeared as a circular shape (Figure <ref> (a1, b1, c1, d1)), which could be the combination of several dipoles, the loops connecting which may be heated. The plasma is observed to intermittently move out along the open field lines even though the whole structure is stable.

Second stage: at 21:11, a bright point at the south of the circular area appeared and then quickly extended to the north to form a bent tube (pointed to by the white arrows in Figure <ref> (b2, c2)).
The tube brightened increasingly, followed by a slow upward motion (lower dotted line in Figure <ref> (e)) and a fast ejection (upper dotted line in Figure <ref> (e)). The strong brightening of the jet's base suggests that internal reconnection occurs between the opposite-polarity stretched legs of the erupting structure. Meanwhile, the jet's spire shows a multi-stranded curtain structure with rotating motion in the broadened spire (pointed to by the yellow arrows in Figure <ref> (a3, b3, c3)). All of these are typical morphological characteristics of a blowout jet.

Third stage: at 21:18, the jet's base and spire started to decay. All bright structures faded away except a dimming bright point (Figure <ref> (a4, b4, c4, d4)).

§ 3D MAGNETIC STRUCTURE OF THE BLOWOUT JET

To understand the magnetic structure and evolution of the jet, we use the FFE model, which utilizes the MHD relaxation method (the full MHD equations are solved) to build an equilibrium state of the system that approximates the solar atmosphere. The HMI vector magnetograms are taken as the bottom boundary condition. The FFE model is particularly suited to computing the magnetic field in the chromosphere, transition region and low corona because of the relatively high plasma β there. It has been successfully used to reproduce the magnetic structure of Hα fibrils <cit.>, a small filament <cit.>, and a bright arcade in the chromosphere or low corona <cit.>. In this work, the extrapolation is performed in a cubic box resolved by 480*416*128 grid points with Δx=Δy=Δz=0.5”. The photospheric boundary field of view for the extrapolation is shown in Figure <ref>.

§.§ The evolution of magnetic structure

Figure <ref> (d) shows that an MFR (yellow lines) appears at the source region of the jet. The MFR corresponds well to the observed bright tube (Figure <ref> (a)) as seen in AIA images. With the jet eruption, most of the twisted lines are released, leaving only some untwisted and open field lines in place (Figure <ref> (e)).
The small loops (white field lines in panel (e)) at the bright point (Figure <ref> (b)) are possibly the reconnected field lines. Although we cannot see the dynamic process of the jet by extrapolation, the change of the magnetic field clearly displays that the MFR is ejected during the jet.

The arrows in Figure <ref> (g) show the transverse field, which is aligned with the MFR. After the jet, the transverse field decreases and becomes disordered (Figure <ref> (h,i)). This is consistent with the fact that the eruption of the jet takes away most of the twisted field and leaves only some small closed field lines and the large-scale open field.

§.§ The structure of the MFR

<cit.> defined the twist of neighboring magnetic field lines, which is related to the parallel electric current (J_∥), as follows:

𝒯_w = ∫_s μ_0 J_∥/(4π|𝐁|) ds = ∫_s (∇×𝐁)·𝐁/(4π B^2) ds,

where the integration is carried out along the specific field line.

<cit.> introduced the quasi-separatrix layers (QSLs) as the generalized topological structure. The QSLs are defined by high squashing factor Q regions, where the connectivity of the magnetic field varies strongly. Q is defined by mapping the field line <cit.>:

D_12 = ( [ ∂x_2/∂x_1  ∂x_2/∂y_1 ; ∂y_2/∂x_1  ∂y_2/∂y_1 ] ) = ( [ a  b ; c  d ] ),

Q = (a^2+b^2+c^2+d^2)/|B_n(x_1,y_1)/B_n(x_2,y_2)|,

where (x_1, y_1) and (x_2, y_2) are the two footpoints of a field line.

The code we used to calculate the twist number 𝒯_w and squashing factor Q was developed by <cit.>. To save computational resources, we select the sub-domain x∈[130.0, 160.3], y∈[231.0, 240.3] and z∈[0.0, 10.1], where x (+x towards west) and y (+y towards north) are the heliocentric coordinates and z is the height. The sub-domain is resolved by 960*880*320 grid points when computing 𝒯_w and Q. Therefore, the grid is refined 16 times after the extrapolation.

Figure <ref> (a) shows the extrapolated 3D field lines of the MFR. The contour of 𝒯_w=-1.5 (see (b)) marks the MFR accurately.
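Discretized forms of the 𝒯_w and Q definitions above can be sketched as follows. This is a schematic, not the <cit.> code actually used; field-line tracing and the evaluation of ∇×𝐁 on the grid are assumed to be done elsewhere, and the sanity-check numbers at the bottom are ours.

```python
import numpy as np

def twist_number(b_line, curl_b_line, dl):
    """T_w along one traced field line:
    T_w = (1/4pi) * integral of (curl B . B)/B^2 dl,
    with B and curl B sampled as (n, 3) arrays at points a
    constant arc length dl apart."""
    b = np.asarray(b_line, dtype=float)
    c = np.asarray(curl_b_line, dtype=float)
    integrand = (c * b).sum(axis=1) / (b * b).sum(axis=1)
    return integrand.sum() * dl / (4.0 * np.pi)

def squashing_factor(jac, bn1, bn2):
    """Q from the 2x2 Jacobian [[a, b], [c, d]] of the footpoint
    mapping and the normal field at the two footpoints."""
    a, b, c, d = np.asarray(jac, dtype=float).ravel()
    return (a * a + b * b + c * c + d * d) / abs(bn1 / bn2)

# sanity checks on a unit-length line with curl B = 4*pi*B
n, dl = 100, 0.01
b = np.tile([0.0, 0.0, 1.0], (n, 1))         # B = z_hat everywhere
curl = 4.0 * np.pi * b                       # gives exactly one turn
print(twist_number(b, curl, dl))             # ~ 1.0
print(squashing_factor(np.eye(2), 1.0, 1.0)) # -> 2.0 (minimum Q)
```

In practice `twist_number` is evaluated for every traced line seeded in the sub-domain, and `squashing_factor` for the footpoint mapping of each seed point, yielding the 𝒯_w and Q maps discussed next.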
Figure <ref> (e) and (f) show a 2D plane of 𝒯_w and Q perpendicular to the axis of the MFR. We can see that 𝒯_w has a sharp edge, which is consistent with the regions of high Q value. In an MFR, field lines winding around an axis have similar connectivity. QSLs separate the twisted field lines from the ambient field lines, which is a typical feature of an active-region-scale MFR <cit.>. Taking 𝒯_w=-0.5 as the boundary, the field lines inside have a twist number of 𝒯_w=-1.54± 0.67. The twist at the center of the MFR exceeds 2.0 turns, while it decreases to 0.5 toward the edge. <cit.> shows that the <cit.> MFR is kink unstable for |𝒯_w|>1.75 with aspect ratio R/r=5 (R and r are the major and minor radii of the MFR). The instability threshold decreases with decreasing aspect ratio <cit.>. Assuming that the length (21 arcsec) and width (5.5 arcsec, see Figure <ref> (g)) of the extrapolated MFR approximate the major and minor diameters, respectively, the aspect ratio is estimated to be 3.8, implying a kink-instability threshold smaller than 1.75 turns. Therefore, the small-scale MFR may be marginally kink unstable. The decay index of the magnetic field above the MFR is about 0.3, which means that the MFR is far below the height where the torus instability would occur (the critical decay index is required to be 1.5, <cit.>).

§.§ Noise and change of the magnetic field on the photosphere

The noise of the transverse magnetic field is large in weak-field regions because of the nonlinear dependence between the linear polarization and the field strength. This leads to unreliable vector magnetic field inversions in solar quiet regions. The jet we analyzed occurred at the boundary of an AR and a coronal hole, which is the interface between strong and weak magnetic field. Therefore, it is necessary to assess the noise of the transverse magnetic field at the jet's source region. SDO/HMI provides the standard deviation of the inverted magnetic field with the data segments _ERR.
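The aspect-ratio and twist estimates quoted above are easy to verify numerically (the values are taken from the text; identifying the measured length and width with the major and minor diameters is the stated assumption, and the ratio of diameters equals the radius ratio R/r):

```python
# Aspect ratio of the extrapolated MFR from its apparent size
length_arcsec, width_arcsec = 21.0, 5.5
aspect_ratio = length_arcsec / width_arcsec
print(round(aspect_ratio, 1))    # -> 3.8

# The measured twist straddles the kink threshold quoted for R/r = 5
tw_mean, tw_std = -1.54, 0.67
print(abs(tw_mean) - tw_std)     # ~ 0.87 turns (lower bound)
print(abs(tw_mean) + tw_std)     # ~ 2.21 turns, above the 1.75 threshold
```

Since the threshold itself decreases for R/r < 5, the upper part of this twist range supports the marginal-kink-instability interpretation.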
For example, FIELD_ERR and INCLINATION_ERR are the standard deviations of the field strength and the inclination angle relative to the LOS. Hence, it is convenient to compute the uncertainty of the transverse field. The temporal profile of the magnetic field is shown in Figure <ref>. Typically, at 21:12 UT, the average LOS field, average transverse field, average noise of the transverse field, and average signal-to-noise ratio of region “R” (surrounded by black curves in Figure <ref> left) are 28 G, 160 G, 32 G, and 5.4, respectively. The transverse field on the photosphere is about 5.7 times larger than the LOS field under the MFR, which indicates that the field lines in this area are nearly horizontal. This results in the relatively small noise of the transverse field. The uncertainty of the transverse field is about 18%, 20%, and 36% at 21:00:00, 21:12:00, and 21:24:00, respectively (see the error bars in Figure <ref> right). The high signal-to-noise ratio of the data indicates that it can be used in the extrapolation.

The largely different field configuration mainly results from the change of the transverse magnetic field on the photosphere after the jet took place. Region “R” shows a pronounced, 30% decrease of the transverse field (solid line in Figure <ref> right), from 160 G at 21:12:00 before the jet to 112 G at 21:24:00 after the jet, within 12 minutes. Figure <ref> (f-i) also shows a decrease and reduced shear of the transverse field after the jet. The decrease of the positive, negative, and unsigned LOS field (right panel of Figure <ref>) suggests that flux cancelation took place at the jet's source region.

§ DISCUSSION AND CONCLUSION

A blowout jet was observed on 2 July 2012 at the west edge of AR 11513. In a previous paper, <cit.> suggested that the rotation and shear motion of the magnetic field build up the free energy to make the jet blow out. In the current work, we further study the 3D magnetic structure of the jet's source region with the recently developed FFE model.
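The field statistics quoted above can be cross-checked directly (all numbers are taken from the text; the small mismatch between 160/32 and the quoted signal-to-noise ratio of 5.4 is expected, since the latter is a per-pixel average rather than the ratio of the two averages):

```python
# Region "R" at 21:12 UT
b_los, b_trans, noise = 28.0, 160.0, 32.0   # gauss
print(round(b_trans / b_los, 1))   # -> 5.7 (transverse-to-LOS ratio)
print(b_trans / noise)             # -> 5.0, vs. the quoted 5.4
print(round(noise / b_trans, 2))   # -> 0.2, i.e. the ~20% uncertainty
                                   #    quoted at 21:12:00

# Transverse-field drop across the jet (21:12 -> 21:24 UT)
before, after = 160.0, 112.0
print((before - after) / before)   # -> 0.3, the quoted 30% decrease
```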
The twist number and squashing factor are calculated to analyze the magnetic properties of this jet. We have the following findings:

First, the transverse magnetic field decreased during the jet. The originally twisted and closed field lines are released, manifesting as the bright base and broad helical spire, finally leaving only some untwisted and open field lines in place.

Second, an MFR, reconstructed by the FFE method and cospatial with the bright tube, is found to exist before the jet and then disappear after the jet blows out. A sharp boundary of the MFR can be seen in the 2D cutting plane of the 𝒯_w distribution. This boundary also corresponds well to the layer of very high Q value that distinguishes the twisted field lines of the MFR from those outside.

Third, the twist number of the MFR is 𝒯_w=-1.54± 0.67 with the small aspect ratio R/r=3.8, which indicates that the blowout jet is likely triggered by the kink instability. The low decay index prevents the eruption through the torus instability.

Combining the observed features with the reconstructed 3D magnetic structures, we can argue that before the onset of the blowout jet, a highly twisted MFR exists at the source region of the jet. The twist of the MFR may continuously increase because of the plasma motion or magnetic cancelation at the photosphere. Homologous jets erupting before the blowout one remove the restraining overlying field lines. When the twist exceeds a critical value, the kink instability takes place and leads to the MFR being ejected. As the MFR moves upward, internal reconnection occurs between the stretched field lines below. The reconnection outflows may appear as the bright core of the jet. Meanwhile, the eruption of the heated MFR in the partly opened ambient field shows a multi-strand curtain structure. The helical motion observed in the spire indicates the untwisting process of the erupting MFR. Finally, the jet's base gradually fades away, leaving a weak bright point.
This bright point may denote small loops that are heated by the reconnection above. In short, the process of the blowout jet is mostly consistent with the scenario proposed by <cit.>, except that the kink instability is considered to be its initiation mechanism. It has to be pointed out that the direct observation of the twist, for example the twist between fine structures of a filament <cit.>, would be stronger and more direct evidence for the existence of the MFR. In the future, more case studies, or even a statistical study, of the 3D magnetic structures of blowout jets will be presented.

The authors thank the referee for constructive suggestions. This work is jointly supported by the National Natural Science Foundation of China (NSFC) through grants 11403044, 11673035 and 11273031; the collaborating Research Program of the CAS Key Laboratory of Solar Activity, National Astronomical Observatories (KLSA201609); and the Natural Science Foundation of Shandong Province (ZR2014AP010). The data used are courtesy of NASA/SDO and the AIA and HMI science teams. The BBSO operation is supported by NJIT, US NSF AGS-1250818, and NASA NNX13AG14G grants.

[Antiochos et al.(1999)]adk99 Antiochos, S. K., DeVore, C. R., Klimchuk, J. A. 1999, , 510, 485
[Adams et al.(2014)]asm14 Adams, M., Sterling, Alphonse C., Moore, Ronald L., et al. 2014, , 783, 11
[Berger & Prior(2006)]bp06 Berger, M. A., & Prior, C. 2006, Journal of Physics A: Mathematical and Theoretical, 39, 26
[Chen (2011)]c11 Chen, P. F. 2011, Living Reviews in Solar Physics, 8, 1
[Chen et al.(2015)]csy15 Chen, J., Su, J., Yin, Z., et al. 2015, , 815, 71
[Chen et al.(2012)]czm12 Chen, H. D., Zhang, J., Ma, S. L. 2012, Research in Astronomy and Astrophysics, 12, 573
[Cheng et al.(2014)]cdz14 Cheng, X., Ding, M. D., Zhang, J., et al. 2014, , 789, 93
[Cheung et al.(2015)]cpt15 Cheung, Mark C. M., De Pontieu, B., Tarbell, T. D., et al. 2015, , 801, 83
[Curdt et al.(2012)]ctk12 Curdt, W., Tian, H., Kamio, S.
2012, , 280, 417
[Démoulin et al.(1996)]dpm96 Démoulin, P., Priest, E. R., Mandrini, C. H. 1996, , 308, 643
[Guo et al.(2013)]gdc13 Guo, Y., Ding, M. D., Cheng, X., et al. 2013, , 779, 157
[Guo et al.(2013)]gds13 Guo, Y., Démoulin, P., Schmieder, B., et al. 2013, , 555, 19
[Hong et al.(2013)]hjy13 Hong, J., Jiang, Y., Yang, J., et al. 2013, Research in Astronomy and Astrophysics, 13, 253
[Hong et al.(2011)]hjz11 Hong, J., Jiang, Y., Zheng, R., et al. 2011, , 738, 20
[Hoeksema et al.(2014)]hlh14 Hoeksema, J. T., Liu, Y., Hayashi, K., et al. 2014, , 289, 3483
[Karpen et al.(2017)]kda17 Karpen, J. T., DeVore, C. R., Antiochos, S. K., et al. 2017, , 834, 62
[Kliem & Török(2006)]kt06 Kliem, B., & Török, T. 2006, , 96, 25502
[Lee et al.(2013)]lim13 Lee, K.-S., Innes, D. E., Moon, Y.-J., et al. 2013, , 766, 1
[Lemen et al.(2012)]lta12 Lemen, J. R., Title, A. M., Akin, D. J., et al. 2012, , 275, 17
[Liu et al.(2011)]ldl11 Liu, C., Deng, N., Liu, R., et al. 2011, , 735, 18
[Liu et al.(2014)]lwl14 Liu, J., Wang, Y., Liu, R., et al. 2014, , 782, 94
[Liu et al.(2016)]lkt16 Liu, R., Kliem, B., Titov, Viacheslav S., et al. 2016, , 818, 148
[Moore et al.(2010)]mcs10 Moore, Ronald L., Cirtain, Jonathan W., Sterling, Alphonse C., et al. 2010, , 720, 757
[Moore et al.(2013)]msf13 Moore, Ronald L., Sterling, Alphonse C., Falconer, David A., et al. 2013, , 769, 134
[Moreno-Insertis et al.(2008)]mgu08 Moreno-Insertis, F., Galsgaard, K., Ugarte-Urra, I. 2008, , 673, 211
[Morton et al.(2012)]mse12 Morton, R. J., Srivastava, A. K., Erdélyi, R., et al. 2012, , 542, 70
[Nisticò et al.(2009)]nbp09 Nisticò, G., Bothmer, V., Patsourakos, S., et al. 2009, , 259, 87
[Pariat et al.(2009)]pad09 Pariat, E., Antiochos, S. K., DeVore, C. R. 2009, , 691, 61
[Pariat et al.(2010)]pad10 Pariat, E., Antiochos, S. K., DeVore, C. R. 2010, , 714, 1762
[Pariat et al.(2012)]pd12 Pariat, E., & Démoulin, P. 2012, , 541, A78
[Pariat et al.(2015)]pdd15 Pariat, E., Dalmasse, K., DeVore, C. R.
2015, , 573, 130 [Patsourakos et al.(2008)]ppv08 Patsourakos, S., Pariat, E., Vourlidas, A., et al. 2008, , 680, 73 [Pesnell et al.(2012)]ptc12 Pesnell, W. D., Thompson, B. J., Chamberlin, P. C. 2012, , 275, 3 [Rachmeler et al.(2010)]rpd10 Rachmeler, L. A., Pariat, E., DeForest, C. E., et al. 2010, , 715, 1556 [Raouafi et al.(2016)]rpp16 Raouafi, N. E., Patsourakos, S., Pariat, E., et al. 2016, , 201, 1 [Schmieder et al.(2013)]sgm13 Schmieder, B., Guo, Y., Moreno-Insertis, et al. 2013, , 559, 1 [Schou et al.(2012)]ssb12 Schou, J., Scherrer, P. H., Bush, R. I., et al. 2012, , 275, 229 [Shen et al.(2011)]sls11 Shen, Y., Liu, Y., Su, J., et al. 2011, , 735, 43 [Shen et al.(2012)]sls12 Shen, Y., Liu, Y., Su, J., et al. 2012, , 745, 164 [Sterling et al.(2015)]smf15 Sterling, Alphonse C., Moore, Ronald L., Falconer, David A., et al. 2015, , 523, 437 [Titov & Démoulin (1999)]td99 Titov, V. S., & Démoulin, P. 1999, , 406, 1043 [Titov et al.(2002)]thd02 Titov, V. S., Hornig, G., & Démoulin, P. 2002, , 107, 1164 [Török & Kliem(2003)]tk03 Török, T., & Kliem, B. 2003, , 406, 1043 [Wang et al.(2015)]wcl15 Wang, H., Cao, W., Liu, C., et al. 2015, Nature Communications, 6, 7008 [Wang et al.(2016)]wlz16 Wang, R., Liu, Y., Zimovet, I., et al. 2016, , 827, 12 [Zhang et al.(2012)]zcg12 Zhang, Q. M., Chen, P. F., Guo, Y. 2012, , 746, 19 [Zhang & Ji(2014)]zj14 Zhang, Q. M., & Ji, H. S. 2014, , 561, 134 [Zhao et al.(2017)]zsl17 Zhao, J., Schmieder, B., Li, H., et al. 2017, , 836, 52 [Zhu et al.(2013)]zwd13 Zhu, X., Wang H., Du Z., et al. 2013, , 768, 119 [Zhu et al.(2016)]zwd16 Zhu, X., Wang H., Du Z., et al. 2016, , 826, 51
http://arxiv.org/abs/1703.08992v2
{ "authors": [ "Xiaoshuai Zhu", "Huaning Wang", "Xin Cheng", "Chong Huang" ], "categories": [ "astro-ph.SR" ], "primary_category": "astro-ph.SR", "published": "20170327100522", "title": "A solar blowout jet caused by the eruption of a magnetic flux rope" }
[ifp,ustem] T. Schachinger [cor1] thomas.schachinger@tuwien.ac.at [ifp,ustem] S. Löffler [ustem] M. Stöger-Pollach [ifp,ecp] P. Schattschneider [cor1] Corresponding author [ifp] Institute of Solid State Physics, Vienna University of Technology, Wiedner Hauptstraße 8-10, 1040 Vienna, Austria [ustem] University Service Centre for Transmission Electron Microscopy, Vienna University of Technology, Wiedner Hauptstraße 8-10, 1040 Vienna, Austria [ecp] LMSSMat (CNRS UMR 8579), Ecole Centrale Paris, F-92295 Châtenay-Malabry, France

Keywords: Electron Vortex Beams; Landau States; Larmor Rotation; Gouy Rotation; TEM

Standard electron optics predicts Larmor image rotation in the magnetic lens field of a TEM. The ability to produce electron vortex beams with quantized orbital angular momentum has raised the question of their rotational dynamics in the presence of a magnetic field. Recently, it has been shown that electron vortex beams can be prepared as free-electron Landau states showing peculiar rotational dynamics, including no rotation and cyclotron (double-Larmor) rotation. Additionally, very fast Gouy rotation of electron vortex beams has been observed. In this work a model is developed which reveals that the rotational dynamics of electron vortices are a combination of slow Larmor and fast Gouy rotations, and that the Landau states naturally occur in the transition region between the two regimes. This more general picture is confirmed by experimental data showing an extended set of peculiar rotations, including no, cyclotron, Larmor and rapid Gouy rotations, all present in one single convergent electron vortex beam.

Peculiar Rotation of Electron Vortex Beams

§ INTRODUCTION

Vortex beams are characterized by a spiraling wavefront with a phase singularity at the center.
First signs of this phenomenon were already observed in the 1950s <cit.>, but, possibly due to the lack of an accompanying theory, which was delivered nearly a quarter of a century later by Nye and Berry <cit.>, it took until the 1990s for the first intentional experimental realization with light waves <cit.>. Today, there are many applications of optical vortices, ranging from tweezers exerting a torque <cit.>, optical micromotors <cit.>, cooling mechanisms <cit.>, and toroidal Bose-Einstein condensates <cit.> to communication through turbulent air <cit.> and exoplanet detection <cit.>. A similar story holds true for vortex matter waves. In the early 1970s, Beck, Mills and Munro produced helically spiraling electron beams <cit.> using a special magnetic field configuration. However, the significance of that discovery was most likely underestimated until recently, when Bliokh et al. described phase vortices in electron wave packets <cit.>. This paved the way for the first experimental realization of electron vortex beams (EVBs) in a transmission electron microscope (TEM) <cit.>. Shortly after, the holographic mask technique was introduced for routinely producing electrons with quantized orbital angular momentum (OAM) in the TEM <cit.>. Up to now, several methods for producing vortex beams of higher order or higher brilliance have been published <cit.>, reflecting the vital interest in shaping the electron wavefront. Owing to their short wavelength, fast EVBs (∼100 to 300 keV) can be focused to atomic size <cit.>. Another interesting aspect for future applications is their quantized orbital magnetic moment μ_B m, which is – at least for fast electrons, where spin-orbit coupling can be neglected – independent of spin polarization. This allows the creation of electron beams carrying magnetic moment even without spin polarization. Both features make them attractive as a novel probe in solid state physics for mapping, e.g.
magnetic properties <cit.> on the atomic level. In addition, EVBs could be used to probe Landau states (LS) <cit.> and have already been shown to be promising candidates for manipulating nanoparticles <cit.>. The theory of propagating EVBs has been developed in a series of publications <cit.>. The most intriguing prediction is the peculiar rotation mechanism of vortex beams in a magnetic field. It was shown that exact solutions of the paraxial Schrödinger equation in a homogeneous magnetic field — non-diffracting Laguerre-Gaussian modes also known as LS — acquire a phase upon propagation along the z axis <cit.>. This phase shift causes a quantized rotation of coherent superpositions of LG modes, falling in one of three possible groups showing either cyclotron, Larmor, or zero frequency, depending on the topological charges involved. This surprising prediction was confirmed experimentally with beams closely resembling non-diffracting solutions of the Schrödinger equation in a homogeneous magnetic field <cit.>. Contrary to this quantized rotation, rapid Gouy rotation of EVBs close to the focus has been observed <cit.>. Both experiments seem to contradict each other, as well as the standard theory of electron movement in a TEM, which predicts Larmor rotation of paraxial ray pencils between object and image in a round magnetic lens <cit.>. These facts call for a re-evaluation of the rotation dynamics of convergent and divergent electron beams in the TEM, including beams with non-vanishing topological charge. Here, we give a quantum description of diffracting electron vortices based upon radius r, angular momentum mħ, and their time evolution. It is found that the rotation dynamics are a function of the OAM and the expectation value ⟨r̂^-2⟩ (the second moment of the inverse beam radius).
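This dependence on the OAM and on ⟨r̂^-2⟩ can be previewed numerically for a single vortex mode, for which ⟨r^-2⟩ = 1/(|m| w(z)^2), so that ⟨ω⟩/Ω = sgn(m)(w_B/w)^2 + σ (this single-mode relation is derived in the Theory section below). The following is a schematic sketch, not the authors' code:

```python
import numpy as np

# Schematic illustration: single-mode rotation frequency
# <omega>/Omega = sign(m) * (w_B / w)^2 + sigma as a function of the beam
# radius w in units of the magnetic waist w_B.  sigma = +1 fixes the field
# orientation; m is the topological charge.

def omega_over_Omega(w_over_wB, m, sigma=+1):
    """Dimensionless rotation frequency of a single vortex mode."""
    return np.sign(m) / w_over_wB**2 + sigma

# Larmor regime: wide beam, rotation ~ Omega regardless of OAM
print(omega_over_Omega(100.0, m=+3))   # ~1.0001
print(omega_over_Omega(100.0, m=-3))   # ~0.9999

# Landau regime: w = w_B, quantized 0 or 2*Omega
print(omega_over_Omega(1.0, m=-1))     # 0.0  (no rotation)
print(omega_over_Omega(1.0, m=+1))     # 2.0  (cyclotron, double-Larmor)

# Gouy regime: narrow beam, |omega| >> Omega
print(omega_over_Omega(0.03, m=-1))    # ~ -1110
```

The three printed regimes correspond to the Larmor, Landau, and Gouy behavior discussed throughout the paper.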
This finding is supported by experiments tracking the rotation of convergent electron waves with topological charge in the lens field. It is shown that the whole range of rotation dynamics, namely 'classical' Larmor rotation (LR), rapid Gouy rotation, cyclotron (double-Larmor) and zero rotation, occurs in one beam and can be described within a uniform picture.

§ THEORY

We discuss the dynamics of electron vortex beams (and their superpositions) in a constant homogeneous magnetic field pointing in z-direction, B⃗. Such a field gives rise to the vector potential A = -r×B⃗/2 in the Coulomb gauge. The Hamiltonian of the system takes the form H = π^2/2m_e = (p - eA)^2/2m_e, where π = p - eA is the observable kinetic or covariant momentum operator, whereas the canonical momentum operator p is gauge dependent and not an observable.

§.§ Rotation dynamics

The rotation dynamics of EVBs in a magnetic field is intimately connected to the concept of Bohmian trajectories. They can be interpreted as streamlines of the quantum mechanical particle current density <cit.> j⃗(r⃗) = (1/m_e) ℜ[ψ^*(r⃗) π⃗ ψ(r⃗)] = -(1/m_e) ℜ[ψ^*(r⃗) (i ħ∇ + e A⃗(r⃗)) ψ(r⃗)]. In the present context, we want to calculate the angular velocity of a quantum fluid following such streamlines. In cylindrical coordinates r⃗=(r,φ,z) centered at the optical axis, the azimuthal velocity on a streamline is v_φ(r⃗)=j_φ(r⃗)/|ψ(r⃗)|^2. For a constant magnetic field in z-direction B_z we have A_r(r⃗)=A_z(r⃗)=0 and A_φ(r⃗)=r B_z/2. Since ∇_φ = r^-1∂_φ, the angular velocity is ω(r⃗)=v_φ(r⃗)/r=(ħ/m_e) ℑ[ψ^*(r⃗) r^-1∂_φ ψ(r⃗)]/(r ψ^*(r⃗) ψ(r⃗)) - e B_z/2 m_e. The rotation dynamics is experimentally accessible through the expectation value ⟨ω⟩(z)=∫ψ^*(r⃗) ω(r⃗) ψ(r⃗) r dr dφ. Since in cylindrical coordinates the OAM operator is L_z = -i ħ∂_φ and [L_z,r]=0, we find from Eqs. <ref>–<ref> ⟨ω⟩(z)=(1/m_e)⟨r^-2 L_z⟩ + σΩ, where we have introduced the Larmor frequency Ω = |e B_z / 2 m_e|. σ = sgn(B_z) = ±1 designates the direction of the axial magnetic field. Eq.
<ref> is one of our main results. It serves as basis for the following study of the rotation dynamics. The expectation value of the angular velocity Eq. <ref> is conveniently obtained by decomposing the wave function ψ(r⃗) into normalized orthogonal eigenfunctions of L_z: ψ(r⃗)=∑_m ψ_m(r,z) e^i m φ. Since L_z ψ_m(r,z) e^i m φ = ħ m ψ_m(r,z) e^i m φ, it follows that ⟨r^-2 L_z⟩ = ħ∑_m m ⟨r̂^-2⟩_m with ⟨r̂^-2⟩_m = ∫_0^∞ ψ_m^∗(r⃗) r^-2 ψ_m(r⃗) r dr. Transforming to the dimensionless radial distance ξ̂ = r̂/w_B, where w_B = √(2 ħ/|e B|) = √(ħ/m_e Ω) is the magnetic beam waist, representing the radius that encloses one magnetic flux quantum [The magnetic flux through a circle of radius w_B is w_B^2 π B = h/e.], Eq. <ref> yields ⟨ω⟩(z) = (ħ/m_e w_B^2) ∑_m m ⟨ξ̂^-2⟩_m + σΩ = Ω(∑_m m ⟨ξ̂^-2⟩_m + σ). Eq. <ref> and Eq. <ref> show that vortices will rotate according to their radial extension. The wider the vortices are in a given observation plane, the smaller are their inverse square moments ⟨ξ̂^-2⟩. With that, the first term in Eq. <ref> becomes negligible compared to the second, so that the rotation frequency asymptotically approaches the Larmor frequency. Depending on the magnetic field orientation σ, this rotation will be clockwise or anticlockwise, as long as the vortex radius is significantly larger than w_B. In this regime — which will be referred to as the LR-region throughout this manuscript — the rotation is completely independent of the vortices' OAM. When ∑_m m ⟨ξ^-2⟩_m is close to ±1, the EVBs approximate LS, see Sec. <ref>, showing no rotation for anti-parallel orientation of the OAM with respect to the magnetic field B_z and cyclotron rotation (double-LR) for parallel orientation. This regime will be called the LS-region. Obviously, for radial extensions smaller than w_B, which is of the order of w_B ∼ 25 nm for typical objective lens fields of the order of B_z ∼ 2 T, the rotation frequency increases drastically.
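The quoted magnitudes can be checked directly from the definition of w_B; the field and beam-energy values (B_z = 1.9 T, 200 keV electrons) are taken from the experimental section below. A quick numerical sanity check:

```python
import math

# Sanity check of the magnetic waist and Larmor frequency for the
# parameters quoted in the experimental section: B_z = 1.9 T, 200 keV.
hbar = 1.054571817e-34   # J s
e    = 1.602176634e-19   # C
m0   = 9.1093837015e-31  # kg
c    = 2.99792458e8      # m/s

B = 1.9                                   # T
gamma = 1.0 + 200e3 * e / (m0 * c**2)     # relativistic factor at 200 keV
m_e = gamma * m0                          # relativistic electron mass

w_B   = math.sqrt(2 * hbar / (e * B))     # magnetic beam waist
Omega = e * B / (2 * m_e)                 # Larmor frequency (rad/s)

print(f"w_B   = {w_B * 1e9:.1f} nm")                       # ~26 nm
print(f"Omega = 2*pi x {Omega / (2 * math.pi) / 1e9:.1f} GHz")  # ~2*pi x 19 GHz
```

Both numbers reproduce the scales used throughout the paper (w_B ∼ 25 nm, Ω ∼ 2π×19 GHz).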
In this so-called Gouy-regime or Gouy-region, simulations show that vortices with a radial extension of ∼1 nm and m=1 can rotate with ∼10^3 Ω, a rotation frequency that corresponds to the cyclotron frequency in fields of ∼1000 T.

§.§ Diffracting LG modes

To illustrate the properties of EVBs in the TEM, it is reasonable to consider a set of solutions of the paraxial Schrödinger equation, namely diffracting LG (DLG) modes <cit.>, given by: ψ_m,n(r⃗) = √(n!/π(n+|m|)!) (1/w(z)) (r/w(z))^|m| L_n^|m|(r^2/w(z)^2) e^-r^2/2w(z)^2 e^i k r^2/2R(z) e^-i(2n+|m|+1)ζ(z) e^i(m φ + k z). The parameter w(z)=w_0 √(1+(z/z_R)^2) describes the transverse beam size evolution over z. w_0 represents the "beam waist radius" (of the m=0 beam) [w(z) is not the maximum intensity radius of a vortex, which is given by r_max(z,m)=w(z)√(|m|) for n=0.] at the focal plane z=0, k stands for the forward wave vector, and z_R = 2π w_0^2/λ = k w_0^2 is the Rayleigh range, denoting the position where the illuminated area doubles and the acquired phase shift stemming from the Gouy phase, given by ζ(z)=arctan(z/z_R), reaches π/4. The curvature R(z) of the LG mode is R(z)=z(1+(z_R/z)^2). To check the applicability of this approach for converging vortices in the TEM, Fig. <ref> compares numerically simulated transverse beam profiles with the analytical approach Eq. <ref> of a ψ_3,0 mode for different z-values, see also Sec. <ref>. It illustrates that diffracting LG modes approximate the intensity distributions very well between the focal plane and the Rayleigh range z_R. For higher z-values, the radial profiles deviate more and more from the LG modes, but as to the rotation dynamics, ⟨ξ^-2⟩_m = w_B^2/(|m| w(z)^2) is still close to that of the numerically simulated radial profiles, see Fig. <ref>. Now, the expectation value ⟨ω⟩(z) in Eq. <ref> for a single mode results in ⟨ω⟩(z)=Ω((m/|m|)(w_B/w(z))^2+σ), or with Eq. <ref> ⟨ω⟩(z)=Ω((m/|m|)(w_B^2/w_0^2)(1/(1+(z/z_R)^2))+σ). Eq.
<ref> and <ref> show the salient features of convergent electron beams, which have already been described in Sec. <ref>, including LR (w(z) ≫ w_B), no rotation (w(z) = w_B, σm < 0), double-LR (w(z) = w_B, σm > 0), and fast Gouy phase dynamics (w(z) ≪ w_B). They ultimately depend on the relative strength of the two terms in Eqs. <ref> and <ref> and thus on the radial extension of the EVB. The intriguing thing is that these rich rotational dynamics are all contained in one single beam, due to its converging character, see Figs. <ref> and <ref>. For a certain defocus value z=z_B where w(z_B)=w_B we find ⟨ω⟩(z_B)=Ω(sgn(m)+σ). The angular frequency becomes quantized, with only three possible absolute values: 0, Ω, or 2Ω for σm<0, m=0, σm>0. This surprising result was derived in <cit.> and experimentally verified in <cit.>. At z=z_B, Eq. <ref> reads ψ_m,n(r,φ,z_B) = LS_m,n(r,φ,z_B) e^i k r^2/2R(z_B) e^-i(2n+|m|+1)ζ(z_B), where LS_m,n = √(n!/π w_B^2 (n+|m|)!) (r/w_B)^|m| L_n^|m|(r^2/w_B^2) e^-r^2/2w_B^2 e^i(m φ + k_z z_B). It is not by chance that this is the wave function of a Landau state in cylindrical coordinates, up to a phase factor that does not influence the rotation dynamics <cit.>. The rotation dynamics depends on the Larmor frequency, the OAM, the magnetic waist w_B, and the z-dependent vortex radius. It is therefore difficult to compare different experiments with each other and with theory and simulation. This problem can be tackled by defining a Rayleigh frequency ω_R = Ω w_B^2/w_0^2 = ħ k/m_e z_R = v_z/z_R (the reciprocal of the time the electron takes to traverse the Rayleigh range of a diffracting LG mode) and a dimensionless variable along the optical axis ζ=z/z_R. For diffracting LG modes, the rotation dynamics Eqs. <ref> and <ref>, expressed as a dimensionless rotation frequency, follow a universal function (a Lorentz function): (⟨ω⟩(z)-σΩ)/(sgn(m) ω_R) = 1/(1+ζ^2). True vortices will deviate from this behavior. In any case, using Eq.
<ref> we can compare experiments performed with different parameters, or numerical simulations with the analytical result for LG modes.

§ SIMULATION

§.§ Knife edge cutting

In order to gain access to the aforementioned peculiar rotational dynamics contained in the expectation value ⟨ω⟩(z), we borrow a technique which was successfully applied in optics <cit.> as well as in electron physics <cit.>: breaking the circular symmetry of the annularly shaped EVB using an electron-blocking knife-edge (KE). This enables us to measure the azimuthal rotation angle φ(z) of the truncated intensity pattern. Since ω = dφ/dt = (dφ/dz) v_z, where dz/dt=v_z is the velocity of the electron along the z axis, it is possible to map rotational frequencies onto the z-axis. Eq. <ref> gives φ(z)=(Ω/v_z)(∑_m m ∫^z_z_df ⟨ξ^-2⟩_m dz + ∫^z_z_df σ dz), where z_df is the defocus of the observation plane, see Sec. <ref>. Thus, spatial angular variations can be translated into rotational dynamics. For diffracting LG modes, Eq. <ref>, this can be expressed analytically by integrating Eq. <ref> over z: φ(z)=(m/|m|)(arctan(z/z_R)-arctan(z_df/z_R)) + (Ω/v_z) σ (z-z_df). To clarify that this procedure does not significantly alter the measurement outcome, the influence of limiting the azimuthal range is studied in this section by looking at the Fourier series representation in the azimuthal angle. An uncut EVB with quantized OAM ħm can be written as ψ_m = f(r) e^i m φ, with a radial amplitude f(r). When a sector of the vortex is blocked by a knife edge, the resulting wave function ψ_c can be expressed as a Fourier series in the azimuthal angle, ψ_c(r,φ)=f(r)∑_μ c_μ e^i(m+μ)φ, with the same r-dependence as the initial state Eq. <ref>. We can evaluate the expectation value Eq.
<ref> as we did above for the uncut vortex: ⟨r^-2 L_z⟩ = ⟨ψ_c| L̂_z r̂^-2 |ψ_c⟩ = ħ ⟨r^-2⟩_m ∑_μ c_μ^∗ c_μ (m+μ), where ⟨r^-2⟩_m = ∫_0^∞ |f(r)|^2 r^-2 r dr. The Fourier coefficients are normalized, ∑_μ c_μ^∗ c_μ = 1, and obey c_-μ = c_μ^∗, so that ∑_μ c_μ^∗ c_μ (m+μ) = m ∑_μ c_μ^∗ c_μ = m. Thus, Eq. <ref> reduces to ⟨r^-2 L_z⟩ = ħ m ⟨r^-2⟩_m, which is the same as Eq. <ref> for the single uncut vortex in Eq. <ref>. The conclusion is that cutting a sector of a single vortex does not change the rotation dynamics.

§.§ Numerical simulation and error sources

Eq. <ref> provides a simple way to obtain the angular velocity: given the OAM of a vortex, we need the moment ⟨ξ^-2⟩ as a function of z. In the plane of the cutting edge, this moment depends only on the radial density of the (cut) vortex, so we can use the standard FFT procedure based on the Fresnel propagator [The FFT results are known to be exact, except for the LR (the σΩ-term in Eq. <ref>), which can be added by using a so-called co-rotating coordinate system <cit.>.]. One can even include the lens aberrations. So our approach was to calculate ⟨ξ^-2⟩ for beam profiles as a function of the position z of the obstructing edge, from which we quantify the rotation dynamics of the vortical structure via Eq. <ref>. A standard method to produce EVBs is to use holographic fork masks (a diffraction grating with a dislocation) <cit.>. It has been argued that EVBs created by this method carry OAM impurities <cit.>. The main reason is irregularities in the geometry of the mask. This means that a certain vortex order does not carry a quantized OAM any more. Although we have shown that the preparation of a sector of a given vortex does not change its rotation dynamics, see Eq. <ref>, irregularities in the obstructing edge, as well as diffuse scattering at its rim, may well cause OAM impurities in a vortex. Based on Eq. <ref>, for a vortex of the order m_0 we can make the ansatz ⟨ω⟩(z) = Ω(m_0 ⟨ξ^-2⟩_m_0 + σ) + Ω ∑_m≠m_0 m ⟨ξ̂^-2⟩_m. The last term in Eq.
<ref> causes a deviation from the rotation dynamics of the quantized vortex m_0. It is small because the coefficients measure the small admixtures of other OAMs, so ⟨ξ^-2⟩_m ≪ ⟨ξ^-2⟩_m_0. Another potential source of errors is the diffraction at the obstructing edge. Instead of a rotated shadow image of the cut vortex, one observes a blurred half-ring structure with Fresnel fringes. These fringes add to the challenge of measuring a rotation angle of the edge. Nevertheless, since these angles are measured in a series of z-positions of the edge, the difference of two consecutive measurements is less sensitive to the smoothly changing fringe contrast.

§ EXPERIMENTAL

To test the peculiar rotational behavior of EVBs in magnetic fields predicted by Eq. <ref>, it is necessary to probe their internal azimuthal dynamics over a large range of different beam radii. To achieve that, we placed a holographic fork mask in the C2 aperture holder of an FEI TECNAI F20 TEM working at 200 kV. By adjusting the C2 condenser strength, convergent EVBs can be produced in the KE plane, see Fig. <ref>. These beams constantly change their radii when propagating along the z-direction. Their semi-convergence angle was 1.16 mrad. B_z = 1.9 T for a standard TEM objective lens, and thus w_B ∼ 25 nm. With that, the radial extension of the EVBs can be expressed in units of w_B. It ranges from the radius of the used holographic aperture, in our case 10.5 µm, equal to ∼400 w_B, down to the focused spot, with its characteristic dip in the center, showing a maximum intensity radius of 0.9 nm, ∼0.03 w_B, for m=±1 EVBs. Due to the TEM geometry and the limited z-shift range of the specimen stage of ±375 µm, the experimentally accessible range of different radii is reduced, ranging from ∼20 w_B down to ∼w_B/3. Note that the axial magnetic field within the accessible range of EVB radii can be considered quasi-homogeneous, showing deviations smaller than ±1.5% of B_z.
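The sector-cutting invariance derived in the knife-edge section, Σ_μ |c_μ|^2 (m+μ) = m, can also be verified numerically by a discrete Fourier decomposition of the cut azimuthal profile. The following sketch (a hard half-plane cut of a pure m-vortex; not the authors' code) confirms that the |c_μ|^2-weighted mean OAM stays at m:

```python
import numpy as np

# Check that a knife edge blocking half of the azimuthal range does not
# change the mean OAM: sum_mu |c_mu|^2 (m + mu) = m.  The azimuthal part
# of the cut vortex, Theta(phi) * exp(i m phi), is decomposed by DFT and
# the |c|^2-weighted mean angular index is compared with m.

def mean_oam_after_cut(m, n=4096):
    phi = 2 * np.pi * np.arange(n) / n
    psi = np.exp(1j * m * phi) * (phi < np.pi)   # hard half-plane cut
    c = np.fft.fft(psi) / n                      # azimuthal Fourier coefficients
    k = np.fft.fftfreq(n, d=1.0 / n)             # integer OAM indices
    w = np.abs(c)**2
    return np.sum(w * k) / np.sum(w)

for m in (-3, -1, 1, 3):
    print(m, mean_oam_after_cut(m))              # stays ~m in every case
```

The symmetric pairs c_-μ = c_μ^∗ cancel in the weighted sum, exactly as in the analytic argument.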
This has been tested in advance by investigating image rotations of copper grids and by numerical simulations. The radial component of the magnetic lens field was calculated to be less than 10^-6 B_z. In order to map the azimuthal dynamics of EVBs onto the z-direction, we obstruct half of the beam using a KE placed in the sample holder (see Sec. <ref>). As the Larmor frequency in the TEM objective lens field is of the order of Ω ∼ 2π×19 GHz when using the relativistic electron mass m_e = γm_0, the transverse rotational dynamics (Ω w_B ∼ 10^-5 c) of EVBs are rather slow compared to the forward velocity of the relativistic electrons (v_z ∼ 0.7c). This leads to a small but detectable pattern rotation of 3.3° per 100 µm when the KE is shifted up or down in the z-direction. To enhance the angular resolution, the C2 condenser is under-focused (by a few Rayleigh ranges, z_R ∼ 2 µm) as seen from the observation plane (i.e. the rotated vortex is observed about z_df ∼ ±10 µm from the focus, where the vortex orders do not overlap, see Fig. <ref>). To further increase the angular resolution, the image contrast was enhanced using color coding and gamma correction. Fig. <ref> shows typical experimental images of the cut EVBs for the whole accessible z-shift range, including the |m|=0,1,3 vortex orders. The angles between, e.g., a horizontal line and the faint solid lines represent the measured azimuthal angles φ(z) for different z-shift values of the KE, indicated next to the row of cut EVBs. By visual judgment alone one can already see rotational dynamics for all vortex orders. To quantify these rotational dynamics, Fig. <ref> shows the measured azimuthal angles for the LR- and LS-regions and compares them to the numerically simulated values as well as to the analytical model Eq. <ref>. Finer z-scanning was used where the vortex size approaches the LS size. This is also illustrated by the colored bar in the upper diagram in Fig.
<ref>, indicating the LR-region in green, which ranges from z=-380 µm to z∼-100 µm, and the LS-region in cyan, ranging from z∼-100 µm to z∼-40 µm. The region from z∼-30 µm to the focus z=0 represents the Gouy regime, in magenta. The measured azimuthal angles are in good agreement with the theoretical predictions. They show LR independent of the EVB's OAM for beam radii significantly larger than w_B. When the beam size approaches the LS size, LR smoothly turns into quantized LS rotational behavior, with no rotation for negative OAM and cyclotron rotation for positive OAM. When further decreasing the EVB's size well below w_B, the transition to fast Gouy rotation is observed, especially for negative OAM. Care must be taken that, in particular for lower z-shift values, diffraction effects and faint misalignments of the KE (meaning more or less cutting between the different z-shift values) induce strong angular deviations. According to <cit.>, the OAM admixtures for m ≠ m_0 in Eq. <ref> were estimated to be 2.5% for m = m_0 ± 1, 5% for m = m_0 ± 2, 7.5% for m = m_0 ± 3, and 2.5% for m = m_0 ± 4. To account for the effect of these impurities on the rotation, these percentages were used in Eq. <ref> to obtain upper/lower bounds for the deviation from the pure vortex behavior, indicated by colored bands in Fig. <ref>. When applied symmetrically to the pure vortices, meaning 65% m_0, 17.5% for all m > m_0 and 17.5% for all m < m_0, this results in a shift towards lower spreadings between the positive and negative vortex orders (see Fig. <ref>, dot-dashed lines). This tendency can also be observed in the experimental data in Fig. <ref>, hinting at OAM impurity contributions. The error estimates given in Fig. <ref> and Tab. <ref> are calculated as a combination of three effects: the first is the stage positioning error, which increases linearly for higher z-shift values.
The relative stage error was given by the manufacturer to be of the order of 3%. The second is the surface roughness of the KE used to block half of the incoming EVB. It turned out that the KE surface roughness is a crucial experimental parameter, because when the beam radii approach w_B, surface corrugations of the order of a few nanometers already introduce angular deviations of a few degrees. On the other hand, for larger beam radii the surface roughness does not contribute significantly to the overall error. As a consequence, it is absolutely necessary to choose a very smooth KE with a surface roughness better than R_z < 1 nm. We chose a (111) Si wafer that was broken along a low-indexed zone axis, where R_z was measured to lie below 1 nm, thus keeping the contributions of the KE surface roughness well below 0.5°. The third contribution is the reading error stemming from the azimuthal cutting-angle determination, which lies below 1.4° in our case. Altogether, the estimated error of the azimuthal angle determination techniques used is of the order of ±2°. Tab. <ref> compares rotational frequencies in units of Ω, averaged over the LR-region and the LS-region and stemming from multiple experiments for the vortex orders |m|=1,3, with the numerical simulation and the diffracting LG model, Eq. <ref>. It shows the remarkable agreement between the experimentally obtained rotational frequencies and the theoretically expected ones, thus bringing further evidence that EVBs in magnetic fields exhibit peculiar rotations. Note that, due to the averaging over extended z-shift regions and the smooth transition between different rotational regimes, the rotational frequency of EVBs slightly increases or decreases compared to the LR in the LR-region and to the zero- and cyclotron-rotation in the LS-region, respectively. The rotation dynamics included in Eq. <ref> represent our main findings. Using the universal form Eq.
<ref>, it is possible to summarize the experimental data, including Gouy-region data from the close vicinity of the focal plane, the numerical simulation, and the diffracting LG model Eq. <ref>, see Fig. <ref>. The logarithmic scale of Fig. <ref> covers four orders of magnitude of the dimensionless rotation dynamics. It can be seen that the numerical simulations based on the moments of the defocused vortices follow the universal curve (valid for LG modes) quite well up to ζ ∼ 10. For higher z-values, the m=1 simulation is slightly above the Lorentz curve, while the m=3 simulation is well above it. We have included the results of <cit.>, taken with other parameters (voltage, convergence, method). The experimental results in the entire range covering Larmor, Landau and Gouy behavior are very close to the numerical simulations. In view of the experimental difficulties, and the rotation frequencies covering four orders of magnitude, this is an extraordinary result.

§ CONCLUSIONS

The intuitive idea that the rotation dynamics of electron vortices can be described by a combination of slow Larmor and fast Gouy rotations is illustrated by the present theoretical approach and confirmed by experiments. Usual Larmor rotation is found for large vortex radii, far from the focus. Rapid Gouy rotation appears close to the focus for narrow vortices. Landau behavior with quantized rotation emerges as a special case when the vortices have radii close to the magnetic waist, bridging the Larmor and Gouy regimes. The present quantum approach reconciles the three different regimes of rotational behavior (classical Larmor <cit.>, Landau <cit.>, and rapid Gouy <cit.> rotation) of electron vortices in a magnetic field in a unifying description.

§ ACKNOWLEDGEMENTS

This work was supported by the Austrian Science Fund (FWF; grant no. I543-N20).

§ REFERENCES unsrt
http://arxiv.org/abs/1703.10235v1
{ "authors": [ "Thomas Schachinger", "Stefan Löffler", "Michael Stöger-Pollach", "Peter Schattschneider" ], "categories": [ "physics.class-ph", "physics.optics" ], "primary_category": "physics.class-ph", "published": "20170327153522", "title": "Peculiar Rotation of Electron Vortex Beams" }
Faculty of Science and Technology, Seikei University, Kichijyoji-Kitamachi 3-3-1, Musashino-shi, Tokyo, 180-8633, Japan
Department of Physics, Harvard University, Cambridge, Massachusetts 02138, USA

This study analyzes the scar-like localization in the time average of a time-evolving wavepacket on the desymmetrized stadium billiard. When a wavepacket is launched along one of the classical unstable periodic orbits, an enhancement emerges on that orbit, like a scar in the stationary states. This localization along the periodic orbit is clarified through the semiclassical approximation. It essentially originates from the same mechanism as a scar in stationary states: the piling up of the contributions from the classical actions of multiply repeated passes on a primitive periodic orbit. To create this enhancement, several states are required in the energy range that is determined by the initial wavepacket.

Keywords: Scars; Wavepackets; Semiclassical approximation

§ INTRODUCTION

This study investigates the localization in the time average of the absolute square of the time-evolving wave function on the desymmetrized stadium billiard that occurs after a Gaussian wavepacket is launched as the initial state. In chaotic billiards like the stadium, nodal patterns of stationary states with unique characteristics were discovered approximately three decades ago <cit.>.
The patterns often show a unique enhancement along classical unstable periodic orbits. Such a phenomenon is called a scar in quantum stationary states of a finite chaotic region, and the eigenstates with scars are called scar states. In contrast, in integrable billiards, the nodal patterns are essentially repetitive and regular. The eigenstates are a genuinely quantum mechanical concept, whereas the periodic orbits are purely classical mechanical objects. The scar state is therefore an important discovery expressing a providential quantum-classical correspondence. The semiclassical approximation emerged as a powerful tool to clarify scar states in quantum systems along the classical unstable periodic orbits. This method has been used to construct theories of scars in coordinate space <cit.> and in phase space <cit.>; they successfully clarify the contribution of the periodic orbits to the scar states. Both theories discuss scars through their energy dependence, because scars were first discovered in the eigenstates. Bogomolny <cit.> proposed a Green's function in terms of the actions of classical periodic orbits to expose the periodic orbits as the origins of the scar in coordinate space. Berry's theory <cit.> utilizes the Wigner function under approximation in phase space to clarify the cause of the scars. In particular, Heller's lecture <cit.> revealed the dynamical properties of scars, stating that time-evolving wavepackets propagate near the periodic orbits. Especially, the Heller group focused on homoclinic orbits and the return of the Gaussian wavepacket to the neighborhood of its launching point in finite regions. In addition, they realized the importance of the autocorrelation function and its Fourier counterpart: the weighted spectrum <cit.>. Finally, the enhancement or localization in the time average of the time-evolving wavepacket was discovered <cit.>.
In this study, it is called the "dynamical scar". It has a distinctly close relation to scar states because it also emerges along a periodic orbit <cit.>. In this study, the scar states are shown to contribute heavily to the dynamical scars. The window function <cit.> for the semiclassical approximation describing the enhancement is derived from the weighted power spectrum. However, it is known that reflection symmetries of a billiard's shape sometimes prevent the detection of its genuine chaotic characteristics. To remove the discrete symmetries, we studied the localization in a desymmetrized 2×4 stadium billiard <cit.>. The desymmetrization eliminates the two discrete mirror symmetries of the full stadium shape and makes the chaotic properties more evident. We use Table I in Ref. <cit.> to distinguish the periodic orbits; however, the table is for a full stadium, not for a desymmetrized stadium. Therefore, it should be used with caution. If the periodic orbits pass over the horizontal and vertical axes of the symmetries, they may have to be folded at the crossing points for the desymmetrized stadium (cf.
Figs. 2 and 3).§ GAUSSIAN WAVEPACKET AS A PROBE FOR DYNAMICAL PROPERTIES The time-dependent Schrödinger equation i ħ∂Ψ/∂ t=- ħ^2/2m∇ ^2Ψ + VΨ governs the dynamical properties of quantum systems. By adopting the quarter of the 2 × 4 stadium (Figs. 1-3) as the 2D chaotic finite structure, the potential is simply set to V=0 inside the billiard and V=∞ outside. The Gaussian wavepacket is a conventional tool for elucidating the time evolution of quantum states <cit.>. It has been one of the fundamental quantum objects since the early days of quantum mechanics. Its initial form in a 2D region is Ψ_0 (𝐫) = 1/σ_0 √(π) exp [ i/ħ𝐩_0 ( 𝐫 - 𝐫_0 ) - (𝐫 - 𝐫_0)^2/2 σ_0^2] , where 𝐫=(x, y) is a point inside the nanostructure, 𝐫_0=(x_0, y_0) is the initial location of the center of the wavepacket, and 𝐩_0=(p_0x,p_0y) is the packet's initial momentum. The standard deviation of the Gaussian packet σ_0 determines its size. If the Gaussian wavepacket is placed in a flat infinite space, it travels as a bunch with the initial velocity of its center, 𝐯_0=𝐩_0 / m. The absolute value of the wavepacket shows that its shape is always Gaussian; however, its size increases as |σ(t)|=σ_0 √(1+ ( ħ t/mσ_0^2)^2 ). For sufficiently long times, σ(t) ≈ħ t/(m σ_0). In this study, the wavepacket travels in the finite region, and repeated reflections on the boundary eventually diffuse it all over the billiard (Fig. 1; cf. <cit.>). Initially, it behaves like a blob of viscous liquid. The travelling wavepacket then gradually and progressively loses its specific texture. Finally, in chaotic billiards, the snapshots of the wave function ripple all over the billiard with an irregular granular pattern. Nevertheless, the autocorrelation function has, surprisingly, already revealed long-time recurrences <cit.>. Moreover, this indirectly implies the localization on the periodic orbit.
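In the natural units ħ=m=1 used throughout this study, the spreading law |σ(t)| = σ_0√(1+(ħt/mσ_0²)²) and its long-time limit σ(t) ≈ ħt/(mσ_0) can be checked directly. A minimal sketch (the value σ_0 = 1.5 is an arbitrary illustrative choice, not a parameter from the paper):

```python
import math

def sigma(t, sigma0, hbar=1.0, m=1.0):
    """Width of a free Gaussian wavepacket: sigma0 * sqrt(1 + (hbar t / (m sigma0^2))^2)."""
    return sigma0 * math.sqrt(1.0 + (hbar * t / (m * sigma0**2))**2)

sigma0 = 1.5
# At t = 0 the packet has its initial width.
assert abs(sigma(0.0, sigma0) - sigma0) < 1e-12
# For long times the width grows linearly: sigma(t) ~ (hbar / (m sigma0)) * t.
t = 1e6
assert abs(sigma(t, sigma0) - t / sigma0) / (t / sigma0) < 1e-6
```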
§ DYNAMICAL SCAR One of the most fundamental concepts in quantum physics is the use of the absolute square of the wave function to derive physical properties; usually its time average is important when investigating a quantum effect. Therefore, the time-average of the absolute square of the wave function is taken as follows: A_T(𝐫)= 1/T∫_0^T |Ψ(𝐫,t)|^2 dt . This is an appropriate tool to detect the localization in question. Here, T expresses the time required to measure the time-average. For numerical calculation, it is discretized as A_T(𝐫_i)=1/N_t∑_j=0^N_t | Ψ( 𝐫_i, t_j )|^2 , on the mesh points 𝐫_i=(x_i,y_i), and the integration over time becomes the summation over the discretized times t_j = j Δ t, where Δ t is the time step. The summation is then divided by the integer N_t representing the total number of time steps, and evidently T=N_t Δ t. In this study, natural units ħ=m=1 are always applied for the actual numerical evaluation. The time step is set to Δ t= 2.5 × 10^-2 and T=9 × 10^4, i.e., N_t = 3.6 × 10^6, and the lattice constant is 0.2. A typical example of the calculated A_T is presented in Fig. 2. The time-average exhibits clear localization along unstable periodic orbits despite the absence of specific patterns in the snapshots of the wavepackets (e.g., Fig. 1(f)). It is apparently similar to the scars of a stationary wave function <cit.>. Furthermore, different launching conditions exhibit the same phenomenon on various periodic orbits, as shown in Fig. 3 (see also <cit.>). The enhancement appears clearly around a periodic orbit if the initial location of the center of the wavepacket and its velocity are on and along the orbit. These are referred to as “dynamical scars" to distinguish them from the scars in stationary eigen states. They are an enhancement in the time-average of the time-dependent wave function.
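The discretized time-average of Eq. (4) is a running accumulation of |Ψ|² over stored snapshots. A minimal sketch with a toy two-mode wave function on a 1D mesh standing in for the billiard state (the dynamics here is purely illustrative, not the actual stadium simulation):

```python
import numpy as np

def time_average(psi_of_t, n_steps, dt):
    """A_T(r_i) = (1/N_t) * sum_j |Psi(r_i, t_j)|^2 on a spatial mesh."""
    acc = None
    for j in range(n_steps):
        psi = psi_of_t(j * dt)          # wave function sampled on the mesh at t_j
        p = np.abs(psi)**2
        acc = p if acc is None else acc + p
    return acc / n_steps

# Toy example: equal superposition of two "eigenfunctions" with energies E1, E2.
x = np.linspace(0.0, 1.0, 64)
phi1, phi2 = np.sin(np.pi * x), np.sin(2 * np.pi * x)
E1, E2 = 1.0, 2.3
psi_of_t = lambda t: (phi1 * np.exp(-1j * E1 * t) + phi2 * np.exp(-1j * E2 * t)) / np.sqrt(2)

A = time_average(psi_of_t, n_steps=20000, dt=0.05)
# Cross terms oscillate at (E2 - E1) and average away, leaving the weighted |phi_n|^2 sum.
expected = 0.5 * phi1**2 + 0.5 * phi2**2
assert np.max(np.abs(A - expected)) < 1e-2
```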
Any state in the quantum system can be expanded using the eigenfunctions as Ψ(𝐫,t)=∑_n c_n ψ_n (𝐫,t)=∑_n c_n ϕ_n (𝐫) exp(- i/ħ E_n t), where ψ_n (𝐫,t) = ϕ_n (𝐫) exp(- i/ħ E_n t) is the n-th eigen state of the system with energy E_n. The expansion coefficient c_n must satisfy the condition ∑_n |c_n|^2=1. In this study, the initial state is set to Ψ(𝐫,t=0)=Ψ_0(𝐫). The expansion coefficient c_n can be determined from the initial wavepacket Ψ_0 as c_n = ∫ϕ_n^* Ψ_0 (𝐫) d 𝐫 . Moreover, the expansion can be used to elucidate the time-average of |Ψ(𝐫,t)|^2 as A(𝐫)= lim_T →∞A_T(𝐫)= lim_T →∞1/T∫_0^T |Ψ(𝐫,t)|^2 dt = lim_T →∞1/T∫_0 ^T [ ∑_n |c_n|^2 |ϕ_n (𝐫)|^2 + ∑_n ≠ m c_m^* c_n ϕ_m^* ϕ_n exp{i/ħ (E_m - E_n) t}] dt = ∑_n |c_n|^2 |ϕ_n (𝐫)|^2 , assuming E_n ≠ E_m for n ≠ m. In other words, by Eq. (7), if the coefficients c_n of the scar eigen states on the same periodic orbit have dominantly larger values, “dynamical scars" of the periodic orbits are observed in the time-average A(𝐫) <cit.>. Therefore, at least theoretically, the time-average (7) can be written as an energy integration as follows: A(𝐫) = ∫∑_n|c_n|^2|ϕ_n (𝐫)|^2 δ (E-E_n) dE. However, the Dirac delta function must be treated carefully to allow comparison of numerical results and experimental data. The behavior of the delta functions is often smoothed by the limited precision of numerical calculation and experimental measurement. Eq. (8) can be considered as a summation of the related wave functions with their specific contribution weights, which closely corresponds to the weighted spectrum because it includes the factor |c_n|^2. In numerical calculation, the weighted spectrum is smoothed by the numerical discretization and the precision of the calculation. The Dirac delta function can therefore be replaced with a smoothed function.
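The chain from Eq. (5) to Eq. (7) — computing c_n as overlap integrals, checking ∑|c_n|² = 1, and the long-time average collapsing to ∑|c_n|²|ϕ_n|² — can be verified numerically on a simple 1D box standing in for the billiard. The packet parameters (x_0 = 0.5, p_0 = 30, σ_0 = 0.05) are illustrative values, not the paper's:

```python
import numpy as np

# Box eigenfunctions phi_n(x) = sqrt(2) sin(n pi x) with E_n = (n pi)^2 / 2 (hbar = m = 1).
x = np.linspace(0.0, 1.0, 801)
dx = x[1] - x[0]
n = np.arange(1, 41)
phi = np.sqrt(2.0) * np.sin(np.outer(n, np.pi * x))     # (40, X), real eigenfunctions
E = 0.5 * (n * np.pi)**2

# Narrow Gaussian packet well inside the box, normalized on the grid.
psi0 = np.exp(1j * 30.0 * x - (x - 0.5)**2 / (2 * 0.05**2))
psi0 /= np.sqrt(np.sum(np.abs(psi0)**2) * dx)

c = np.sum(phi * psi0[None, :], axis=1) * dx            # c_n = ∫ phi_n^* Psi_0 dx, Eq. (6)
assert abs(np.sum(np.abs(c)**2) - 1.0) < 5e-3           # ∑ |c_n|^2 = 1

# Long-time average of |Psi|^2 equals ∑ |c_n|^2 |phi_n|^2, Eq. (7).
A_exact = np.sum((np.abs(c)**2)[:, None] * phi**2, axis=0)
Nt, dt = 4000, 0.05
A_num = np.zeros_like(x)
for j in range(Nt):
    psi_t = np.sum(c[:, None] * phi * np.exp(-1j * E[:, None] * (j * dt)), axis=0)
    A_num += np.abs(psi_t)**2
A_num /= Nt
assert np.max(np.abs(A_num - A_exact)) < 5e-2
```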
§ WINDOW FUNCTION The correlation function between the travelling wavepacket (5) and the initial state (2), C_0(t) = ∫Ψ_0^* (𝐫) Ψ(𝐫,t) d^2𝐫, is closely related to the weighted spectrum. The autocorrelation function is expressed by the eigenfunction expansion (5) as C_0(t)= ∫Ψ_0^*(𝐫) Ψ(𝐫,t)d^2 𝐫= ∫ (∑_m c_m^* ϕ_m^*)( ∑_n c_n ϕ_n e^-i/ħE_n t) d^2 𝐫= ∑_n |c_n|^2 e^-i/ħE_n t. The weighted spectrum can be defined through its Fourier transform as C̃_0 (E)=1/2π∫_-∞^∞ C_0(t) e^i/ħEtdt=1/2π∫_-∞^∞∑_n |c_n|^2 e^i/ħ(E-E_n)tdt= ħ∑_n |c_n|^2 δ(E-E_n)= ħ P(E). This is just the bare weighted power spectrum P(E)= ∑_n |c_n|^2 δ(E-E_n) multiplied by the Planck constant. The smoothed version of the weighted spectrum and the Green's function introduce a neat form of the time-average. The smoothed weighted spectrum function (SWSF) can be written as P_ϵ(E)=∑_n |c_n|^2 δ_ϵ (E-E_n) . In addition, we have lim _ ϵ→ 0 P_ϵ(E) =P(E), because when ϵ becomes infinitesimal, lim_ϵ→ 0δ_ϵ (x) = δ(x). Here, the Lorentzian form of the smoothed delta function is introduced as δ_ϵ (E-E_n) = ϵ/π1/(E-E_n)^2 + ϵ^2 . Realistic systems have finite precision and always show errors owing to numerical implementation, limits of measurement, etc. Because of these inevitable limitations, the Dirac delta functions are replaced by finite regular functions. The infinity and singular behavior of a delta function cannot be recreated exactly in a computation; the peaks appear very large and singular-like, but remain numerically finite. The width of the Lorentzian ϵ should be of the order of the mean level spacing Δ E under such limitations, because much finer energy differences would not be distinguishable. The replacement is thus justified provided the width of the Lorentzian ϵ is equal to or larger than the order of the mean level spacing Δ E.
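The two properties that justify using the Lorentzian of Eq. (12) as a smoothed delta function — unit normalization for any ε, and concentration of the weight at the origin as ε shrinks — are easy to confirm numerically (grid range and ε values below are illustrative):

```python
import numpy as np

def delta_eps(x, eps):
    """Lorentzian-smoothed delta function, Eq. (12): (eps/pi) / (x^2 + eps^2)."""
    return (eps / np.pi) / (x**2 + eps**2)

x = np.linspace(-500.0, 500.0, 1_000_001)
dx = x[1] - x[0]
for eps in (1.0, 0.1, 0.01):
    # Unit normalization, up to the truncated tails (analytically exactly 1 on the real line).
    assert abs(np.sum(delta_eps(x, eps)) * dx - 1.0) < 1e-2
# As eps shrinks, the peak height 1/(pi*eps) grows, concentrating the weight at x = 0.
assert delta_eps(0.0, 0.01) > delta_eps(0.0, 0.1) > delta_eps(0.0, 1.0)
```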
By using this expression, the smoothed Green's function ImG_ϵ (𝐫,𝐫;E)=-π∑_n |ϕ_n (𝐫)|^2δ_ϵ (E-E_n) is also introduced. Under such circumstances, a square of the delta functions can be treated using Berry's method <cit.>. The smoothed delta function (12) has a remarkable property: δ̅_̅ϵ̅ (E-E_n) =2 πϵ [δ_ϵ (E-E_n)]^2 = 2ϵ^3/π1/{(E-E_n)^2+ϵ^2 }^2, where δ̅_̅ϵ̅ (E-E_n) is another version of the smoothed delta function, with lim_ϵ→ 0δ̅_̅ϵ̅ (x) = δ(x). Next, we use an alternative practical version of the time-average A_ϵ(𝐫) = ∫∑_n|c_n|^2|ϕ_n (𝐫)|^2 δ̅_̅ϵ̅ (E-E_n) dE. The original time-average A is recovered in the limit A(𝐫) =lim_ϵ→ 0 A_ϵ (𝐫). By multiplying the two terms (11) and (13), we obtain P_ϵ(E) ImG_ϵ(𝐫,𝐫;E ) =∑_n |c_n|^2δ_ϵ(E-E_n) { -π∑_n'|ϕ_n'(𝐫)|^2 δ_ϵ(E-E_n') }=-π∑_n,n' |c_n|^2 |ϕ_n'(𝐫)|^2 δ_ϵ(E-E_n) δ_ϵ(E-E_n') ≈ -π∑_n |c_n|^2 |ϕ_n(𝐫)|^2 [ δ_ϵ(E-E_n) ]^2=-1/2ϵ∑_n |c_n|^2 |ϕ_n(𝐫)|^2δ̅_̅ϵ̅(E-E_n) , where only the diagonal terms n=n' are retained, since the products of Lorentzians centered at different energies are comparatively small. Here Eq. (14) is also applied in this deformation. Finally, Eq. (16) provides the following expression for the time-average in terms of the Green's function: A_ϵ(𝐫)=-2ϵ∫_-∞^∞ P_ϵ(E) ImG_ϵ (𝐫,𝐫;E) dE= ∫_-∞^∞ w(E) ImG_ϵ (𝐫,𝐫;E) dE, where the window function w(E) is introduced <cit.> through the SWSF (11) as w(E)=-2 ϵ P_ϵ(E) = - 2ϵ/ħC̃_̃0̃(E). In other words, w(E) is the weight for the integration over the energy region when evaluating the time-average A_ϵ (𝐫) from the imaginary part of the smoothed Green's function (13). This is the specific quantum phenomenon that is focused upon in this study. The window function determines where the window should be transparent in the energy spectrum. In a two-dimensional flat and infinite space, the travelling wavepacket can be calculated exactly.
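The key identity of Eq. (14), δ̄_ε = 2πε[δ_ε]², and the fact that δ̄_ε is again a normalized smoothed delta function, can both be checked numerically (ε = 0.5 and the grid are illustrative choices):

```python
import numpy as np

eps = 0.5
x = np.linspace(-200.0, 200.0, 400_001)
dx = x[1] - x[0]
delta_eps = (eps / np.pi) / (x**2 + eps**2)                  # Eq. (12)
delta_bar = (2 * eps**3 / np.pi) / (x**2 + eps**2)**2        # Eq. (14), closed form

# Identity delta_bar = 2*pi*eps*(delta_eps)^2 holds pointwise ...
assert np.max(np.abs(delta_bar - 2 * np.pi * eps * delta_eps**2)) < 1e-12
# ... and delta_bar integrates to 1 (exactly, since ∫ dx/(x^2+a^2)^2 = pi/(2 a^3)).
assert abs(np.sum(delta_bar) * dx - 1.0) < 1e-3
```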
The autocorrelation function is then well approximated as C_f(t) = ∫Ψ_0^* (𝐫 ) Ψ(𝐫,t) d^2 𝐫≈exp ( - v^2 t^2/4 σ_0^2-i/ħ E_0 t ), and its damping envelope C_R(t) ≈exp ( - v^2 t^2/4 σ_0^2 ) satisfactorily represents the decay of the correlation function C_f (t). In a chaotic finite region, the autocorrelation function should instead take the form C(t) ≈∑_n exp { - v^2 (t-nτ)^2/4 σ_0^2-i/ħ E_0 (t-nτ) }exp( - λ/2 |t|) , where τ is the period of the particular periodic orbit along which the initial wavepacket is launched <cit.>. The summation implies that the finite region allows the wavepacket to repeatedly return to its original location. Moreover, the chaoticity makes it spread all over the billiard exponentially, governed by the Lyapunov exponent λ of the periodic orbit. This can be reformed using the Poisson sum rule as C(t)=∑_n1/ħΔ/√(π)σ_0/v exp{ -σ_0^2/v^2 ħ^2 (E_n-E_0)^2 }e^- i/ħE_n t e^- λ/2|t| , where Δ=2 πħ / τ (=ħω), E_n=Δ n, and E_0=𝐩_0^2/2m. The weighted power spectrum can then be derived through the Fourier transform of the autocorrelation function (22) as follows: C̃(E) =1/2 π∫_-∞^∞ C(t) e^i/ħEt dt =∑_n=-∞^∞1/ħΔ/√(π)σ_0/v exp{ -σ_0^2/v^2 ħ^2 (E_n-E_0)^2 }×1/πλ/2/((E-E_n)/ħ)^2+(λ/2)^2. This also includes a Lorentzian of the form (12); however, the origin of its peaked behavior is completely different from ϵ. The Lyapunov exponent λ is purely due to the chaotic property of our system and does not exist in C_f (t). Therefore, replacing C̃_0 (E) with C̃(E), the relation between the window function and the power spectrum should be modified to w(E) ≅ - 2ϵ/ħC̃(E). Then, by Eq. (23), the window function is expected to be w (E)≈-2ϵ 1/√(π)σ_0/ vexp{ -σ_0^2/v^2 ħ^2 (E-E_0)^2 }×Δ/π∑_n=-∞^+∞λ/2/(E-E_p-nΔ)^2+(ħλ/2)^2. Here, E_p represents the energy at the highest of the series of local peaks, whose width is set by λ, the Lyapunov exponent of the billiard, and Δ is the energy gap between the local peaks.
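The step from Eq. (21) to Eq. (22) rests on the Poisson summation formula, ∑_n f(t−nτ) = (1/τ)∑_k f̂(2πk/τ)e^{i2πkt/τ}, applied to the Gaussian factor. A quick numerical check of that formula for an illustrative Gaussian (the decay rate, period, and evaluation point are arbitrary choices):

```python
import numpy as np

a, tau = 2.0, 1.3          # Gaussian decay rate and period (illustrative values)
t = 0.37                   # arbitrary evaluation point

# Left side: comb of shifted Gaussians, f(t) = exp(-a t^2).
lhs = sum(np.exp(-a * (t - n * tau)**2) for n in range(-50, 51))

# Right side: Poisson resummation, using the analytic Fourier transform
# f_hat(w) = sqrt(pi/a) * exp(-w^2 / (4a)).
rhs = sum(np.sqrt(np.pi / a) * np.exp(-(2 * np.pi * k / tau)**2 / (4 * a))
          * np.exp(1j * 2 * np.pi * k * t / tau)
          for k in range(-50, 51)) / tau

assert abs(lhs - rhs) < 1e-10
```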
The window function is thus an interplay between the Gaussian envelope, whose width v ħ / σ_0 is due to the size of the initial Gaussian (1), and the narrow peaks of width λ represented by the Lorentzians. Finally, w(E) is well estimated through Eq. (25) by replacing the eigen energies E_n in the exponential function of Eq. (23) with the ordinary energy variable E. In reality, the resulting numerical difference of w(E) under this replacement is slight. Then, through the summation, Eq. (25) simply adds up the Lorentzian “delta" functions, which are smoothed by the Lyapunov exponent λ. In chaotic billiard systems, the actual weighted power spectrum C̃(E), evaluated from numerically obtained eigen states, is known to have an extremely spiky and oscillatory behavior <cit.>. The existence of the scar states in chaotic billiard systems leads to a relatively small number of selected eigen states contributing dominantly to A(𝐫). The |c_n|^2 histograms clearly show this tendency. Figs. 4 and 6 show the histograms for No. 7 and No. 14, respectively, where the numbering stands for a specific periodic orbit in the stadium, as shown in Table 1 of <cit.>. In Fig. 4, the red curve represents w (E) for No. 7, with λ=0.418|𝐩_0|. The constant 0.418 is the geometric Lyapunov exponent and was evaluated from the monodromy matrix of the corresponding periodic orbit <cit.>. In addition, ϵ is set to the averaged energy level spacing Δ E =0.0003412 × 10^4. The other parameters related to the initial Gaussian are the same as those in Fig. 1. These curves are simply the linear-dynamical predictions of the window function <cit.>. The local peaks of the actual weighted spectrum are located at almost equal energy intervals, namely Δ=0.03193 × 10^4; this is very close to the theoretical estimation Δ_th=ħ/m(2π/L)|𝐩_0|=0.03253 × 10^4, where L=4.8284 is the length of the specific periodic orbit.
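In natural units ħ=m=1, the peak-spacing estimate Δ_th = (2π/L)|𝐩_0| can be checked against the values reported for orbits No. 7 and No. 14. The initial momentum |𝐩_0| = 250 used below is inferred by us from those reported numbers (it is not quoted explicitly in this passage), so treat it as an assumption:

```python
import math

def delta_th(L, p0, hbar=1.0, m=1.0):
    """Theoretical peak spacing Delta_th = (hbar/m) * (2*pi/L) * |p_0|."""
    return (hbar / m) * (2.0 * math.pi / L) * p0

p0 = 250.0                       # assumed initial momentum (inferred, see lead-in)
# Orbit No. 7: L = 4.8284, reported Delta_th = 0.03253e4.
assert abs(delta_th(4.8284, p0) - 0.03253e4) / 0.03253e4 < 1e-3
# Orbit No. 14: L = 6.47, reported Delta_th = 0.02428e4.
assert abs(delta_th(6.47, p0) - 0.02428e4) / 0.02428e4 < 1e-3
```

The fact that a single |𝐩_0| reproduces both reported spacings is consistent with the same initial packet being used for both launches.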
Through the semiclassical approximation, the classical action along the classical periodic orbit is determined as S_r(ξ, ξ; E_0)=∮_r 𝐩 d𝐫 = L √(2mE_0). It must increase by 2πħ when Δ_th is added to the energy E_0. As aforementioned, w(E) is less spiky than the actual |c_n|^2 histogram. In addition, ours is the “totalitarian" case in Ref. <cit.>. In the weighted spectrum of a “totalitarian" system, some particular states have dominant contributions; the scars can often be found in such states. Still, its smoothed behavior follows the estimated envelope function: the window w(E). (The opposite case is called “egalitarian" in <cit.>; there the weighted spectrum essentially follows the window function.) This simultaneously allows the emergence of “dynamical scars". Similar to the scar states, if only one primitive periodic orbit has a dominant contribution, the “dynamical scars" become visible. In actuality, the eigen states at the peaks often turn out to be scar states of the corresponding periodic orbit (cf. Figs. 4, 6). Of course, the eigen states with larger c_n also contribute to the “dynamical scars". However, in some cases, the “dynamical scars" are blurred by the superposition of other orbits on the eigen state. The histogram of the |c_n|^2s is extremely spiky, although it is possible to obtain its smoothed version (Fig. 5), formed by averaging over an energy range that is sufficiently larger than the level spacing but much smaller than the width of the envelope. It agrees strikingly well with the window function w (E). The same situation occurs for periodic orbit No. 14 (Fig. 3(b)) in Fig. 6, and for orbit No. 5, which is already published in <cit.>. In Fig. 6, the red curve represents w (E), with λ=0.3684|𝐩_0|. The local peaks' energy intervals Δ=0.02340 × 10^4 are extremely close to the prediction Δ_th=ħ/m(2π/L)|𝐩_0|=0.02428 × 10^4 (L=6.47). Moreover, the other parameters related to the initial Gaussian are the same as those in Fig. 3(b).
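The statement that the action S(E) = L√(2mE) grows by one quantum 2πħ when the energy increases by Δ_th follows from dS/dE = L/v, and can be verified to first order numerically (ħ=m=1; E_0 = p_0²/2 with the same inferred |𝐩_0| = 250 as above, which is an assumption, not a quoted value):

```python
import math

hbar, m = 1.0, 1.0
L, p0 = 4.8284, 250.0             # orbit No. 7; |p_0| = 250 is an inferred value
E0 = p0**2 / (2 * m)
v = p0 / m
delta_th = 2 * math.pi * hbar * v / L    # same as (hbar/m)(2*pi/L)|p_0|

def action(E):
    """S(xi, xi; E) = L * sqrt(2 m E) along the closed orbit."""
    return L * math.sqrt(2 * m * E)

# To first order in delta_th/E0, the action advances by one quantum 2*pi*hbar.
increment = action(E0 + delta_th) - action(E0)
assert abs(increment - 2 * math.pi * hbar) / (2 * math.pi * hbar) < 0.01
```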
In addition, the processing for the smoothed histogram is the same. The smoothed histogram matches very closely with its window function w (E) (Fig. 7). Moreover, the bouncing ball mode produces a considerably unique result (Fig. 8). This exceptional mode is the only nonchaotic periodic orbit in the stadium billiard. It has a zero Lyapunov exponent and no chaotic origin because, in terms of classical mechanics, it bounces between the parallel walls of the billiard. However, the parameter λ still cannot be set to zero or be infinitesimally small in our numerical calculation, because the Lorentzian approaches the Dirac delta function in such a limit; this cannot be represented exactly in numerical calculation. The numerical results clarify that only the wave functions with scars on the bouncing ball mode significantly contribute to the “dynamical scar”. Fig. 9 compares the numerical histogram and the estimated weighted spectrum, which show strikingly good agreement. The numerically calculated interval between the peaks is Δ=0.07524, whereas its theoretical estimation is Δ_th=ħ/m(2π/L)|p_0|=0.07854 (L=2). Note that the width of the sharp peaks λ in the weighted spectrum is replaced by the averaged level spacing Δ E, instead of the theoretically exact value of the vanishing Lyapunov exponent, λ=0. This also implies that the system does not have an energy resolution much finer than Δ E. As mentioned earlier, given the good agreement between w(E) and the averaged behavior of the |c_n|^2, the semiclassical approximation can be expected to function satisfactorily in this field. Moreover, it reminds us of the “totalitarian" aspect of the system. If we choose a sufficiently small window size, such that only one eigen state lies in the window at a time, the result essentially resembles that of Ref. <cit.> for the scar states. However, in this study the window size is much larger because the initial wavepacket must involve the contributions of eigen states in a broader energy range.
Thus, a scar is not directly observed in the snapshots of the time-dependent wave functions (Fig. 1(f)). The “dynamical scar" is the superposition of many corresponding states in the energy window.§ SEMICLASSICAL APPROXIMATION Through the semiclassical approximation <cit.>, the localization becomes the summation of two parts: A(𝐫) ≅⟨ρ_0 (𝐫, E) ⟩+ ∫ w(E)ImG_osc(𝐫,𝐫;E)dE =⟨ρ_0 (𝐫, E) ⟩+A_osc(𝐫) , where G_osc (𝐫,𝐫;E) ≅2/(2 π)^1/2ħ^3/2×∑_γ, nD_γ,n (ξ)^1/2/vexp [ i/ħ (S_γ, n(ξ, ξ; E) + W_γ,n (ξ) η^2/2) - i πν_γ,n/2 -i 3/4π] . The first term on the right-hand side of Eq. (26) is the smooth part ⟨ρ_0 ⟩, and the second is the oscillatory term A_osc. Further, the angle brackets ⟨⋯⟩ denote an average over the energy range that the window function w(E) covers, and ρ_0 (𝐫, E) is the classical probability density of finding a particle with energy E at point 𝐫. Needless to say, w(E) depends on the shape of the (initial) wavepacket. The ξ axis is set along the periodic orbit concerned, and the η axis perpendicular to it at the point ξ. The classical action of the n-fold repeated orbit can be derived as S_γ,n=nS_γ from the action of the primitive orbit γ: S_γ. Similarly, T_γ,n (𝐫,E)=nT_γ, where T_γ is the period of the primitive orbit γ. Its maximal number of conjugate points ν_γ,n=nν_γ can be derived from the primitive ν_γ. In addition, W_γ,n (ξ) and D_γ,n (ξ) are the versions for the n-fold periodic orbit; for the primitive orbit they are expressed as D_γ=-(∂^2 S_γ/∂η' ∂η”)_η'=η”=0 and W_γ(ξ)=(∂^2 S_γ/∂η'^2 + ∂^2 S_γ/∂η' ∂η” + ∂^2 S_γ/∂η”^2)_η'=η”=0 . They can be derived from D_γ as D_γ,n(ξ)=D_γμ_1 - μ_2/μ_1^n - μ_2^n, W_γ,n(ξ)=D_γ,n(μ_1^n + μ_2^n - 2).
Note that μ_1 and μ_2=μ_1^-1 are the eigenvalues of the monodromy matrix of the primitive orbit. It is assumed that only one specific periodic orbit γ=C gives the prime contribution. Moreover, the primitive orbit n=1 is expected to be dominant along the periodic orbit because the factor D_C,n vanishes rapidly with increasing n. Therefore, the oscillatory part of A can be approximated on the classical orbit C (η=0) as A_osc (ξ) ≅2√(2)/πħ^7/2σ_0/vϵΔ∑_j exp[- σ_0^2/ħ^2 v^2(E_j - E_0)^2 ] |D_C|^1/2/v×∫1/πλ /2/{(E-E_j)/ħ}^2 + (λ/2)^2Im{ i exp [i/ħ S_C -i π/2ν_C + i π N_C - i 1/4π ] }dE . Note that N_C is the number of hits on the boundary when a particle travels around the closed orbit C, and D_C = D_C,1. Under the semiclassical approximation, at E=E_j, it can be safely assumed that exp {i/ħ S_C(ξ,ξ;E_j)-i π/2ν_C + i π N_C -i 1/4π} =1. Finally, the integration in Eq. (28) can be performed using a complex contour integral, and the localization is evaluated as A(ξ) =⟨ρ⟩ + A_osc(ξ, E) = 1/Area + 2√(2)/πħ^5/2σ_0/vϵΔ|D_C(ξ)|^1/2/v∑_j exp [ -σ_0^2 /ħ^2 v^2 (E_j - E_0)^2 ] e^-T_j λ/2, where S_C(ξ, ξ;E_j + i ħλ/2 ) ≅ S_C(ξ, ξ;E_j)+i T_j λħ/2 is used, T_j is the period of the periodic orbit at E=E_j, and Area is simply the area of the billiard. Finally, the averaged level spacing Δ E, which is the criterion of the energy resolution limit of the billiard system, is adopted for ϵ. The evaluated localization A on the periodic orbit No. 7 (Fig. 2) is presented in Fig. 10. Assuming the wave function is completely flat in the finite region, ⟨ρ⟩ must be the inverse of the area of the billiard, { (4+π)/4 } ^-1 =0.5601..., throughout the stadium. Owing to the scar, or the contribution of the classical periodic orbit, the concentration enhances the absolute square of the wave function on the periodic orbit by at least 10% above the average behavior ⟨ρ⟩, except in the neighborhood of the singularity around the conjugate point.
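Two numerical facts used in this paragraph are easy to confirm: the rapid decay of D_{C,n} with the repetition number n (which is why the primitive orbit n = 1 dominates), and the flat background ⟨ρ⟩ = {(4+π)/4}^{-1} = 0.5601… The monodromy eigenvalue μ_1 = 2.5 below is an illustrative value, and the decomposition of the quarter stadium into a 1×1 square plus a quarter disc of unit radius is our reading of the desymmetrized geometry:

```python
import math

# (i) Decay of D_{C,n} = D_C (mu1 - mu2)/(mu1^n - mu2^n), with mu2 = 1/mu1.
mu1 = 2.5                                  # illustrative monodromy eigenvalue, |mu1| > 1
mu2 = 1.0 / mu1
D1 = 1.0
D_n = lambda n: D1 * (mu1 - mu2) / (mu1**n - mu2**n)
vals = [abs(D_n(n)) for n in range(1, 8)]
assert vals[0] == 1.0
assert all(a > b for a, b in zip(vals, vals[1:]))   # primitive orbit n = 1 dominates
assert vals[-1] < 0.01                              # ~mu1^-(n-1) falloff

# (ii) Flat background <rho> = 1/Area, with Area = (4 + pi)/4 = 1x1 square + quarter disc.
area = (4.0 + math.pi) / 4.0
assert abs(area - (1.0 + math.pi / 4.0)) < 1e-12
assert abs(1.0 / area - 0.5601) < 1e-4
```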
Of course, it cannot recreate the wavy behavior, which is especially sharp close to the boundary, because Eq. (29) does not account for the exact effect of the boundary condition. The approximation is determined essentially through the lengths of the orbits and the energy. Actually, the wave must be zero at the boundary according to the Dirichlet condition, and the phases of all dominant eigenfunctions become almost coherent near the boundary. Fig. 11 shows the semiclassical approximation for No. 14. It presents essentially the same results as No. 7 (Fig. 10). The singularity at the conjugate point is inevitable for the semiclassical approximation; the neighborhood of that point is, in any case, beyond the scope of the approximation. The semiclassical approximation of the wave function diverges at the point owing to the factor D_C = 1/m_12, where m_12 is the off-diagonal element of the monodromy matrix <cit.> of the unstable classical periodic orbit C. In our study, m_12=-2{(2+√(2))-ξ^2} for No. 7 (Fig. 10), and m_12=-2{(5+√(5))-ξ^2} for No. 14 (Fig. 11). In both cases ξ is measured from the left wall and along the orbits. The monodromy matrix element m_12 becomes zero and D_C diverges at the conjugate point ξ_C, where the classical orbits near the classical periodic orbit converge. The conjugate points are located at ξ_C=√(2+√(2)) for No. 7, measured from the point (0,1), and at √(5+√(5)) for No. 14, measured from (0,0). In reality, a relatively strong enhancement exists around these points.
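That the quoted conjugate points are exactly the zeros of the quoted m_12 expressions, where D_C = 1/m_12 diverges, follows directly from the formulas and can be checked in a couple of lines:

```python
import math

# Off-diagonal monodromy elements along the orbits (xi measured as in the text):
m12_no7  = lambda xi: -2.0 * ((2.0 + math.sqrt(2.0)) - xi**2)
m12_no14 = lambda xi: -2.0 * ((5.0 + math.sqrt(5.0)) - xi**2)

# D_C = 1/m12 diverges where m12 = 0, i.e. at the conjugate points
# xi_C = sqrt(2 + sqrt(2)) for No. 7 and xi_C = sqrt(5 + sqrt(5)) for No. 14.
assert abs(m12_no7(math.sqrt(2.0 + math.sqrt(2.0)))) < 1e-12
assert abs(m12_no14(math.sqrt(5.0 + math.sqrt(5.0)))) < 1e-12
# Away from the conjugate point m12 is nonzero, so D_C stays finite.
assert abs(m12_no7(0.0)) > 1.0
```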
Apart from these properties, the semiclassical approximation works well, and Eq. (29) matches remarkably well with the numerically evaluated time-averages on the orbits.§ CONCLUSION The quantum phenomenon of the “dynamical scar" is analyzed from the viewpoints of the eigen state expansion of the incident wavepacket and the semiclassical approximation. By launching a Gaussian wavepacket along a classical unstable periodic orbit, its weighted power spectrum C̃(E) achieves a good match with the averaged histogram of the expansion coefficients |c_n|^2. By utilizing C̃(E) as the energy window function for the semiclassical approximation, the “dynamical scars" can be evaluated. The periodic orbit contributes critically to the approximation. However, it has unphysical singularities close to the conjugate points on the orbit. The window function w(E), which is derived from C̃(E), plays a crucial role in the approximation. By setting the window size so small that only one eigen state can exist inside the window energy range, our discussion becomes the same as the scar state theory of Bogomolny <cit.>. In this study, the window size was sufficiently large to include several scarred eigen states, making the “dynamical scar" clearly visible. Simultaneously, this may be why we cannot observe scars in the snapshots of the travelling wave functions after they diffuse throughout the billiard (Fig. 1(f)). The “dynamical scar" is the interplay of many related scarred states inside the range of the energy window.00 heller E. J. Heller, Bound-State Eigenfunctions of Classically Chaotic Hamiltonian Systems: Scars of Periodic Orbits, Phys. Rev. Lett. 53, (1984) 1515-1518. Bogomolny E. B. Bogomolny, Smoothed Wave Functions of Chaotic Quantum Systems, Physica D 31, (1988) 169-189. LesHouches E. J. Heller, in Chaos and Quantum Physics, edited by M. J. Giannoni, A. Voros, and J. Zinn-Justin, Les Houches Session LII, 1989 (Elsevier, Amsterdam, 1991), pp. 547-663. Berry-Wignerdist M. V.
Berry, Quantum Scars of Classical Closed Orbits in Phase Space, Proc. R. Soc. Lond. A 423, (1989) 219-231. heller2 E. J. Heller, Quantum Localization and the Rate of Exploration of Phase Space, Phys. Rev. A 35, (1987) 1360-1370. TH S. Tomsovic and E. J. Heller, Semiclassical Construction of Chaotic Eigenstates, Phys. Rev. Lett. 70, (1993) 1405-1408. gaussian S. Tomsovic and E. J. Heller, Long-time Semiclassical Dynamics of Chaos: the Stadium, Phys. Rev. E 47, (1993) 282-299. KH-LinearNonlinear L. Kaplan and E. J. Heller, Linear and Nonlinear Theory of Eigenfunction Scars, Ann. Phys. (N.Y.) 264, (1998) 171-206. KH-shorttime L. Kaplan and E. J. Heller, Short-time Effects on Eigenstate Structure in Sinai Billiards and Related Systems, Phys. Rev. E 62, (2000) 409-426. ourpaper H. Tsuyuki, M. Tomiya, S. Sakamoto and M. Nishikawa, Scar-Like States in Dynamical Electron-Wavepackets in Chaotic Billiard, e-J. Surf. Sci. Nanotech. 7, (2009) 721-727. ourpaper2 M. Tomiya, H. Tsuyuki and S. Sakamoto, Quantum Fidelity and Dynamical Scar States on Chaotic Billiard System, Comm. Comp. Phys. 182, (2011) 245-248. prep M. Tomiya, H. Tsuyuki, K. Kawamura, S. Sakamoto and E. Heller, Scar State on Time-evolving Wavepacket, J. Phys. Conf. Ser. 640, (2015) 012068. St H-J. Stöckmann, Quantum Chaos: An Introduction (Cambridge University Press, Cambridge, 1999), pp. 305-310. Bunimovich L. A. Bunimovich, On Ergodic Properties of Certain Billiards, Funct. Anal. Appl. 8, (1974) 254-255. Berry85 M. V. Berry, Semiclassical Theory of Spectral Rigidity, Proc. R. Soc. Lond. A 400, (1985) 229-251.
http://arxiv.org/abs/1703.08613v1 — Mitsuyoshi Tomiya, Shoichi Sakamoto, Eric J. Heller, “Periodic Orbit Scar in Propagation of Wavepacket," arXiv:1703.08613 [quant-ph], submitted 24 March 2017.
Dongdong Chen^1 (this work was done while Dongdong Chen was an intern at MSR Asia), Lu Yuan^2, Jing Liao^2, Nenghai Yu^1, Gang Hua^2. ^1 University of Science and Technology of China, cd722522@mail.ustc.edu.cn, ynh@ustc.edu.cn; ^2 Microsoft Research Asia, {luyuan,jliao,ganghua}@microsoft.com. StyleBank: An Explicit Representation for Neural Image Style Transfer (Received ; accepted) We propose StyleBank, which is composed of multiple convolution filter banks, each filter bank explicitly representing one style, for neural image style transfer. To transfer an image to a specific style, the corresponding filter bank is operated on top of the intermediate feature embedding produced by a single auto-encoder. The StyleBank and the auto-encoder are jointly learnt, where the learning is conducted in such a way that the auto-encoder does not encode any style information, thanks to the flexibility introduced by the explicit filter bank representation. It also enables us to conduct incremental learning to add a new image style by learning a new filter bank while holding the auto-encoder fixed. The explicit style representation, along with the flexible network design, enables us to fuse styles at not only the image level, but also the region level. Our method is the first style transfer network that links back to traditional texton mapping methods, and hence provides a new understanding of neural style transfer. Our method is easy to train, runs in real-time, and produces results that are qualitatively better than, or at least comparable to, those of existing methods.§ INTRODUCTION Style transfer migrates a style from one image to another, and is closely related to texture synthesis. The core problem behind these two tasks is to model the statistics of a reference image (texture or style image), which enables further sampling from it under certain constraints.
For texture synthesis, the constraints are that the boundaries between two neighboring samples must have a smooth transition, while for style transfer, the constraints are that the samples should match the local structure of the content image. In this sense, style transfer can be regarded as a generalization of texture synthesis. Recent work on style transfer adopting Convolutional Neural Networks (CNN) ignited a renewed interest in this problem. On the machine learning side, it has been shown that a pre-trained image classifier can be used as a feature extractor to drive texture synthesis <cit.> and style transfer <cit.>. These CNN algorithms either apply an iterative optimization mechanism <cit.>, or directly learn a feed-forward generator network <cit.> to seek an image close to both the content image and the style image – all measured in the CNN (i.e., pre-trained VGG-16 <cit.>) feature domain. These algorithms often produce more impressive results than the texture-synthesis ones, since the rich feature representation that a deep network produces from an image allows more flexible manipulation of the image. Notwithstanding their demonstrated success, the principles of CNN style transfer remain vaguely understood. After a careful examination of existing style transfer networks, we argue that the content and style are still coupled in their learnt network structures and hyper-parameters. To the best of our knowledge, an explicit representation for either style or content has not yet been proposed in these previous neural style transfer methods. As a result, such a network is only able to capture a specific style one at a time. For a new style, the whole network has to be retrained end-to-end. In practice, this makes these methods unable to scale to a large number of styles, especially when the style set needs to be incrementally augmented.
In addition, how to further reduce run time and network model size, and how to enable more flexible control of the transfer (e.g., region-specific transfer), remain challenges yet to be addressed. To explore an explicit representation for style, we reconsider neural style transfer by linking back to traditional texton (the basic element of texture) mapping methods, where mapping a texton to the target location is equivalent to a convolution between a texton and a Delta function (indicating the sampling positions) in the image space. Inspired by this, we propose StyleBank, which is composed of multiple convolution filter banks, with each filter bank representing one style. To transfer an image to a specific style, the corresponding filter bank is convolved with the intermediate feature embedding produced by a single auto-encoder, which decomposes the original image into multiple feature response maps. This way, for the first time, we provide a clear understanding of the mechanism underneath neural style transfer. The StyleBank and the auto-encoder are jointly learnt in our proposed feed-forward network. This not only allows us to simultaneously learn a bundle of various styles, but also enables very efficient incremental learning for a new image style. The latter is achieved by learning a new filter bank while holding the auto-encoder fixed. We believe this is a very useful capability for recently emerged mobile style transfer applications (e.g., Prisma), since we do not need to train and prepare a complete network for every style. More importantly, it can even allow users to efficiently create their own style models and conveniently share them with others. Since the image-encoding part of our network is shared across various styles, it also provides a faster and more convenient switch between different style models. Because of the explicit representation, we can more conveniently control style transfer and create new interesting style fusion effects.
More specifically, we can either linearly fuse different styles altogether, or produce region-specific style fusion effects. In other words, we may produce an artistic work with hybrid elements from van Gogh's and Picasso's paintings. Compared with existing neural style transfer networks <cit.>, our proposed neural style transfer network is unique in the following aspects: * In our method, we provide an explicit representation for styles. This enables our network to completely decouple styles from the content after learning. * Due to the explicit style representation, our method enables region-based style transfer. This is infeasible in existing neural style transfer networks, although classical texture transfer methods were able to achieve it. * Our method not only allows us to simultaneously train multiple styles sharing a single auto-encoder, but also to incrementally learn a new style without changing the auto-encoder. The remainder of the paper is organized as follows. We summarize related work in Section 2. We devote Section 3 to the main technical design of the proposed StyleBank network. Section 4 discusses the new characteristics of the proposed StyleBank network compared with previous work. We present experimental results and comparisons in Section 5. Finally, we conclude in Section 6. § RELATED WORK Style transfer is closely related to texture synthesis, which attempts to grow textures using non-parametric sampling of pixels <cit.> or patches <cit.> in a given source texture. The task of style transfer can be regarded as a problem of texture transfer <cit.>, which synthesizes a texture from a source image constrained by the content of a target image. Hertzmann et al. <cit.> further introduce the concept of image analogies, which transfer the texture from an already stylised image onto a target image.
However, these methods only use low-level image features of the target image to inform the texture transfer.

Ideally, a style transfer algorithm should be able to extract and represent the semantic content of the target image and then render that content in the style of the source image. Separating content from style in general natural images had long been an extremely difficult problem, but it has been greatly mitigated by the recent development of deep Convolutional Neural Networks (CNNs) <cit.>.

DeepDream <cit.> may be the first attempt to generate artistic work using a CNN. Inspired by this work, Gatys et al. <cit.> successfully applied CNNs (pre-trained VGG-16 networks) to neural style transfer and produced more impressive stylization results than classic texture transfer methods. This idea has been further extended to portrait painting style transfer <cit.> and to patch-based style transfer combining Markov Random Fields (MRFs) and CNNs <cit.>. Unfortunately, these methods, based on an iterative optimization mechanism, are computationally expensive at run time, which imposes a big limitation for real applications.

To make the run time more efficient, a growing number of works directly learn a feed-forward generator network for a specific style. This way, stylized results can be obtained with just a forward pass, which is hundreds of times faster than iterative optimization <cit.>. For example, Ulyanov et al. <cit.> propose a texture network for both texture synthesis and style transfer. Johnson et al. <cit.> define a perceptual loss function to help learn a transfer network whose results approach those of <cit.>. Li et al. <cit.> introduce Markovian Generative Adversarial Networks, aiming to speed up their previous work <cit.>.

However, in all of these methods, the learnt feed-forward network can only represent one specific style.
For a new style, the whole network has to be retrained, which may limit the scalability of adding more styles on demand. In contrast, our approach allows a single network to simultaneously learn numerous styles. Moreover, it enables incremental training for new styles.

At the core of our network, the proposed StyleBank represents each style by a convolution filter bank. It is analogous to the concept of "texton" <cit.> and the filter banks in <cit.>, but StyleBank is defined in the feature embedding space produced by an auto-encoder <cit.> rather than in image space. As is known, an embedding space can provide a compact and descriptive representation of the original data <cit.>. Therefore, our StyleBank provides a better representation of style data than predefined dictionaries (such as wavelets <cit.> or pyramids <cit.>).

§ STYLEBANK NETWORKS

§.§ StyleBank

At its core, the task of neural style transfer requires a more explicit representation, like the texton <cit.> (known as the basic element of texture) used in classical texture synthesis. Such a representation may provide a new understanding of the style transfer task, and help design a more elegant architecture to resolve the coupling issue in existing transfer networks <cit.>, which have to retrain the hyper-parameters of the whole network end-to-end for each newly added style.

We build a feed-forward network based on a simple image auto-encoder (shown in fg:architecture), which first transforms the input image (i.e., the content image) into the feature space through the encoder subnetwork. Inspired by the texton concept, we introduce StyleBank as the style representation by analogy, which is learnt from input styles.

Indeed, our StyleBank contains multiple convolution filter banks. Every filter bank represents one style, and all channels in a filter bank can be regarded as bases of style elements (e.g., texture patterns, coarsening or softening strokes).
By convolving the StyleBank with the intermediate feature maps of the content image, produced by the auto-encoder, the style is mapped onto the content image to produce different stylization results. This manner is analogous to texton mapping in image space, which can also be interpreted as the convolution between a texton and a delta function (indicating sampling positions).

§.§ Network Architecture

fg:architecture shows our network architecture, which consists of three modules: image encoder ℰ, StyleBank layer 𝒦 and image decoder 𝒟. These constitute two learning branches: the auto-encoder branch (i.e., ℰ→𝒟) and the stylizing branch (i.e., ℰ→𝒦→𝒟). Both branches share the same encoder ℰ and decoder 𝒟 modules.

Our network takes the content image 𝐼 as input. The image is first transformed into multi-layer feature maps 𝐹 through the encoder ℰ: 𝐹 = ℰ(𝐼). For the auto-encoder branch, we train the auto-encoder to produce an image that is as close as possible to the input image, i.e., 𝑂=𝒟(𝐹) →𝐼. In parallel, for the stylizing branch, we add an intermediate StyleBank layer 𝒦 between ℰ and 𝒟. In this layer, the StyleBank {K_i}, (i= 1, 2,...,n), for n styles is respectively convolved with the features 𝐹 to obtain the transferred features 𝐹_i. Finally, the stylization result 𝑂_i for style i is produced by the decoder 𝒟: 𝑂_i = 𝒟(𝐹_i). In this manner, content is encoded into the auto-encoder (ℰ and 𝒟) as much as possible, while styles are encoded into the StyleBank. As a result, content and style are decoupled in our network as much as possible.

Encoder and Decoder. Following the architecture used in <cit.>, the image encoder ℰ consists of one stride-1 convolution layer and two stride-2 convolution layers; symmetrically, the image decoder 𝒟 consists of two stride-1/2 fractionally strided convolution layers and one stride-1 convolution layer. All convolutional layers are followed by instance normalization <cit.> and a ReLU nonlinearity, except the last output layer.
Instance normalization has been demonstrated to perform better than spatial batch normalization <cit.> in handling the boundary artifacts brought by padding. Other than the first and last layers, which use 9 × 9 kernels, all convolutional layers use 3 × 3 kernels. Benefiting from the explicit representation, our network can remove all the residual blocks <cit.> used in the network of Johnson et al. <cit.>, further reducing the model size and computation cost without performance degradation.

StyleBank Layer. Our architecture allows multiple styles (50 by default, though there is no hard limit) to be simultaneously trained in a single network from the beginning. In the StyleBank layer 𝒦, we learn n convolution filter banks {K_i}, (i = 1, 2, ...n) (referred to as the StyleBank). During training, we specify the i-th style and use the corresponding filter bank K_i for forward propagation and backward propagation of gradients. The transferred features 𝐹_i are then obtained by

𝐹_i = 𝐾_i⊗𝐹,

where 𝐹∈ℛ^c_in× h × w, 𝐾_i∈ℛ^c_out× c_in× k_h × k_w, 𝐹_i∈ℛ^c_out× h × w; c_in and c_out are the numbers of feature channels for 𝐹 and 𝐹_i respectively, (h, w) is the feature map size, and (k_h, k_w) is the kernel size.

To allow efficient training of new styles in our network, we may reuse the encoder ℰ and the decoder 𝒟. We fix the trained ℰ and 𝒟, and only train the layer 𝒦 with new filter banks starting from random initialization.

Loss Functions. Our network consists of two branches: the auto-encoder branch (i.e., ℰ→𝒟) and the stylizing branch (i.e., ℰ→𝒦→𝒟), which are alternately trained. Thus, we define two loss functions, one for each branch. In the auto-encoder branch, we use the MSE (Mean Square Error) between the input image I and the output image O as an identity loss ℒ_ℐ:

ℒ_ℐ(𝐼,𝑂) = ‖𝑂-𝐼‖^2.
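To make the StyleBank layer concrete, here is a minimal NumPy sketch (not the authors' implementation) of the per-style convolution 𝐹_i = 𝐾_i ⊗ 𝐹: one filter bank is selected by style index and convolved, with "same" padding, over the shared encoder features. All dimensions, the class name, and the random initialization are illustrative assumptions.

```python
import numpy as np

def conv2d_same(feat, bank):
    """Naive 'same'-padding convolution.

    feat: (c_in, h, w) feature maps; bank: (c_out, c_in, k, k) filter bank.
    Returns (c_out, h, w) transferred features.
    """
    c_out, c_in, k, _ = bank.shape
    _, h, w = feat.shape
    p = k // 2
    padded = np.pad(feat, ((0, 0), (p, p), (p, p)))
    out = np.zeros((c_out, h, w))
    for o in range(c_out):
        for y in range(h):
            for x in range(w):
                out[o, y, x] = np.sum(bank[o] * padded[:, y:y + k, x:x + k])
    return out

class StyleBankLayer:
    """Toy StyleBank layer: one independent filter bank per style."""

    def __init__(self, n_styles, c_in, c_out, k=3, seed=0):
        rng = np.random.default_rng(seed)
        # Illustrative random initialization; in training these are learnt.
        self.banks = [0.1 * rng.standard_normal((c_out, c_in, k, k))
                      for _ in range(n_styles)]

    def forward(self, feat, style_id):
        # Only the selected style's filter bank touches the shared features.
        return conv2d_same(feat, self.banks[style_id])

layer = StyleBankLayer(n_styles=3, c_in=4, c_out=4)
feat = np.random.default_rng(1).standard_normal((4, 8, 8))
f0 = layer.forward(feat, 0)
f1 = layer.forward(feat, 1)
```

Because the banks are independent of one another and of the shared features, adding a style only appends one more bank, which is what makes the incremental training described later cheap.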
In the stylizing branch, we use the perceptual loss ℒ_𝒦 proposed in <cit.>, which consists of a content loss ℒ_c, a style loss ℒ_s and a variation regularization loss ℒ_tv(O_i):

ℒ_𝒦(𝐼,𝑆_𝑖,𝑂_𝑖)=αℒ_c(𝑂_𝑖,𝐼)+βℒ_s(𝑂_𝑖,𝑆_𝑖)+γℒ_tv(O_i)

where 𝐼, 𝑆_𝑖, 𝑂_𝑖 are the input content image, the style image and the stylization result (for the i-th style), respectively. ℒ_tv(O_i) is the total variation regularizer used in <cit.>. ℒ_c and ℒ_s use the same definitions as in <cit.>:

ℒ_c(𝑂_𝑖, 𝐼)= ∑_l ∈{l_c}‖ F^l(O_i) - F^l(I)‖^2

ℒ_s(𝑂_𝑖, 𝑆_𝑖)= ∑_l ∈{l_s}‖ G(F^l(O_i)) - G(F^l(S_i))‖^2

where 𝐹^l and 𝐺 are, respectively, the feature map and the Gram matrix computed from layer l of the VGG-16 network <cit.> (pre-trained on the ImageNet dataset <cit.>). {l_c} and {l_s} are the VGG-16 layers used to compute the content loss and the style loss, respectively.

Training Strategy. We employ a (T+1)-step alternating training strategy motivated by <cit.> in order to balance the two branches (auto-encoder and stylizing). During training, for every T+1 iterations, we first train T iterations on the branch with 𝒦, and then train one iteration on the auto-encoder branch. We show the training process in Algorithm <ref>.

§.§ Understanding StyleBank and Auto-encoder

For our new representation of styles, there are several questions one might ask:

1) How does StyleBank represent styles?

After training the network, each style is encoded in one convolution filter bank. The channels of a filter bank can be considered as dictionary atoms or bases in the sense of representation learning <cit.>. Different weighted combinations of these filter channels constitute various style elements, which are the basic elements extracted from the style image for style synthesis. We may link them to "textons" in texture synthesis by analogy.

For better understanding, we try to reconstruct style elements from a learnt filter bank in an exemplar stylization image shown in fg:visualization.
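As a rough NumPy sketch of the standard formulation of the perceptual loss above (our assumption, not the authors' exact code), the three terms can be written as follows; real VGG-16 feature maps are replaced by arbitrary arrays, and the Gram-matrix normalization by c·h·w, the layer weights, and the default α, β, γ are illustrative choices.

```python
import numpy as np

def gram(feat):
    """Gram matrix of a (c, h, w) feature map, normalized by c*h*w."""
    c, h, w = feat.shape
    f = feat.reshape(c, h * w)
    return f @ f.T / (c * h * w)

def content_loss(feats_out, feats_in):
    # Squared feature distance, summed over the chosen content layers {l_c}.
    return sum(np.sum((a - b) ** 2) for a, b in zip(feats_out, feats_in))

def style_loss(feats_out, feats_style):
    # Squared Gram-matrix distance, summed over the style layers {l_s}.
    return sum(np.sum((gram(a) - gram(b)) ** 2)
               for a, b in zip(feats_out, feats_style))

def tv_loss(img):
    # Total-variation regularizer on a (c, h, w) image.
    return (np.sum((img[:, 1:, :] - img[:, :-1, :]) ** 2)
            + np.sum((img[:, :, 1:] - img[:, :, :-1]) ** 2))

def perceptual_loss(img_out, feats_out, feats_in, feats_style,
                    alpha=1.0, beta=50.0, gamma=1e-4):
    return (alpha * content_loss(feats_out, feats_in)
            + beta * style_loss(feats_out, feats_style)
            + gamma * tv_loss(img_out))

rng = np.random.default_rng(0)
feats = [rng.standard_normal((8, 16, 16)) for _ in range(2)]
img = rng.standard_normal((3, 32, 32))
# A constant output whose features match both content and style gives zero loss.
zero = perceptual_loss(img * 0 + 1, feats, feats, feats, gamma=1.0)
```

Note that only the ratio α/β matters for the trade-off between content fidelity and stylization strength, which is why later comparisons are reported in terms of α/β.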
We extract two kinds of representative patches from the stylization result (in fg:visualization(b)), a stroke patch (indicated by the red box) and a texture patch (indicated by the green box), as objects of study. We then apply the two operations below to visualize what style elements are learnt in these two kinds of patches.

First, we mask out all other regions and retain only the positions corresponding to the two patches in the feature maps (as shown in fg:visualization(c)(d)), which are then convolved with the filter bank (corresponding to a specific style). We further plot the feature responses for the two patches along the dimension of feature channels in fg:visualization(e). As we can observe, the responses are sparsely distributed, and some peak responses occur at individual channels. Then, we consider only the non-zero feature channels for convolution; the corresponding channels of the filter bank (marked by green and red in fg:visualization(f)) indeed contribute to a certain style element. The transferred features are then passed to the decoder. The recovered style elements are shown in fg:visualization(g); they are very close in appearance to the original style patches (fg:visualization(i)) and the stylization patches (fg:visualization(j)).

To further explore the effect of the kernel size (k_h, k_w) in the StyleBank, we conduct a comparison experiment, training our network with two different kernel sizes, (3,3) and (7,7). We then use a similar method to visualize the learnt filter banks, as shown in fg:ablation_kernelsize. Here the green and red boxes indicate representative patches from the (3,3) and (7,7) kernels, respectively. Comparing the two, it is easy to observe that bigger style elements can be learnt with the larger kernel size. For example, in the bottom row, bigger sea spray appears in the stylization result with (7,7) kernels.
This suggests that our network supports controlling the style element size by tuning parameters, so as to better characterize the example style.

2) How is the content image encoded?

In our method, the auto-encoder is learnt to decompose the content image into multi-layer feature maps, which are independent of any style. Analyzing these feature maps further, we make two observations.

First, the features can be spatially grouped into meaningful clusters in some sense (e.g., colors, edges, textures). To verify this point, we extract the feature vector at every position of the feature maps. Then, unsupervised clustering (e.g., the K-means algorithm) is applied to all feature vectors (based on L2-normalized distance). The resulting clusters, shown on the left of fg:vis_layerwise, suggest a certain segmentation of the content image. Comparing the stylization result on the right with the clustering results on the left, we can easily find that different segmented regions are indeed rendered with different kinds of colors or textures. For regions with the same cluster label, the filled colors or textures are almost the same. As a result, our auto-encoder may enable region-specific style transfer.

Second, the features are sparsely distributed across channels. To examine this point, we randomly sample 200 content images, and for each image we compute the average of all non-zero responses at each of the 128 feature channels (in the final layer of the encoder). We then plot the means and standard deviations of those per-channel averages over the 200 images in the top-left of fg:sparse_analysis. As we can see, valuable responses consistently exist at certain channels. One possible reason is that these channels correspond to specific style elements for region-specific transfer, which is consistent with our observation in fg:visualization(e).

This sparsity property motivates us to consider a smaller model size for the network.
We attempt to reduce all channel numbers in our auto-encoder and StyleBank layer by a factor of 2 or 4. The maximum channel number C_max then becomes 64 or 32, respectively, from the original 128. We again compute and sort the means of the per-channel averages, as plotted in the top-right of fg:sparse_analysis. We observe that the final layer of our encoder still maintains sparsity even for smaller models, although the sparsity decreases for the smallest model (C_max=32). On the bottom of fg:sparse_analysis, we show the corresponding stylization results for C_max=32, 64 and 128. By comparison, we notice that C_max=32 produces obviously worse results than C_max=128, since the latter may encourage a better region decomposition for transfer. Nevertheless, there may still be potential to design a more compact model for content and style representation. We leave that to future exploration.

3) How are content and style decoupled from each other?

To know how well content is decoupled from style, we need to examine whether the image is completely encoded in the auto-encoder. We compare two experiments, with and without the auto-encoder branch in training. When we only train the stylizing branch, the decoded image (shown in the middle of fg:shortcut) produced by the auto-encoder alone, without 𝒦, fails to reconstruct the original input image (shown on the left of fg:shortcut), and instead seems to carry some style information. When we enable the auto-encoder branch in training, the image reconstructed from the auto-encoder (shown on the right of fg:shortcut) has an appearance very close to the input image. Consequently, the content is explicitly encoded into the auto-encoder, independent of any style.
This makes it very convenient to learn multiple styles in a single network and reduces the interference among different styles.

4) How does the content image control style transfer?

To understand how the content controls style transfer, we consider a toy case shown in fg:vis_toy. On the top, we show the input toy image, consisting of five regions with different colors or textures. On the bottom, we show the output stylization result. Below are some interesting observations:

* For input regions with different colors but without textures, only a pure color transfer is applied (see fg:vis_toy (b)(f)).
* For input regions with the same color but different textures, the transfer consists of two parts: the same color transfer and different texture transfers influenced by the appearance of the input textures (see fg:vis_toy (c)(d)).
* For input regions with different colors but the same textures, the results have the same transferred textures but different target colors (see fg:vis_toy (d)(e)).

§ CAPABILITIES OF OUR NETWORK

Because of the explicit representation, our proposed feed-forward network provides additional capabilities compared with previous feed-forward networks for style transfer. These may bring new user experiences or generate new stylization effects compared to existing methods.

§.§ Incremental Training

Previous style transfer networks (e.g., <cit.>) have to be retrained for every new style, which is very inconvenient. In contrast, the iterative optimization mechanism <cit.> provides online learning for any new style, taking several minutes per style on a GPU (e.g., a Titan X). Our method has the virtues of both feed-forward networks <cit.> and the iterative optimization method <cit.>.
We enable incremental training for new styles, with a learning time comparable to the online-learning method <cit.>, while preserving the efficiency of feed-forward networks <cit.>.

In our configuration, we first jointly train the auto-encoder and multiple filter banks (50 styles at the beginning) with the strategy described in Algorithm <ref>. After that, the StyleBank layer can be incrementally augmented and trained for new styles while keeping the auto-encoder fixed. The process converges very fast, since only the augmented part of the StyleBank is updated in each iteration instead of the whole network. In our experiments, training on a Titan X with a training image size of 512, it takes only around 8 minutes and about 1,000 iterations to train a new style, which speeds up training by 20∼40 times compared with previous feed-forward methods.

fg:incremental_style_result shows several stylization results of new styles obtained by incremental training. They are very comparable to the results of fresh training, which retrains the whole network with the new styles.

§.§ Style Fusion

We provide two different types of style fusion: linear fusion of multiple styles, and region-specific style fusion.

Linear Fusion of Styles. Since different styles are encoded into different filter banks {K_i}, we can linearly fuse multiple styles by simply linearly combining the filter banks in the StyleBank layer. The fused filter bank is then convolved with the content features 𝐹:

𝐹̃ = (∑_i=1^m w_i 𝐾_i) ⊗𝐹, with ∑_i=1^m w_i = 1,

where m is the number of styles and 𝐾_i is the filter bank of style i. The fused features 𝐹̃ are then fed to the decoder. fg:lc_stylefuse shows such linear fusion results for two styles with varying fusion weights w_i.

Region-specific Style Fusion. Our method naturally allows region-specific style transfer, in which different image regions are rendered with different styles.
Suppose that the image is decomposed into n disjoint regions by automatic clustering (e.g., the K-means clustering mentioned in sc:working_principle, or advanced segmentation algorithms <cit.>) in our feature space, and let 𝑀_i denote the mask of region i. The feature maps can then be described as 𝐹 = ∑_i=1^n(𝑀_i× F), and region-specific style fusion can be formulated as eq:rs_sf:

𝐹̃ = ∑_i=1^n𝐾_i⊗ (𝑀_i× F),

where 𝐾_i is the i-th filter bank.

fg:rs_stylefuse shows such a region-specific style fusion result, which borrows styles from two famous paintings by Picasso and Van Gogh. Superior to existing feed-forward networks, our method naturally obtains the image decomposition for transferring specific styles, and passes through the network only once. On the contrary, previous approaches have to pass through the network several times and finally montage the different styles via additional segmentation masks.

§ EXPERIMENTS

Training Details. Our network is trained on 1000 content images randomly sampled from the Microsoft COCO dataset <cit.> and 50 style images (from existing papers and the Internet). Each content image is randomly cropped to 512 × 512, and each style image is scaled to 600 on the long side. We train the network with a batch size of 4 (m=4 in Algorithm <ref>) for 300k iterations. The Adam optimization method <cit.> is adopted with an initial learning rate of 0.01, decayed by a factor of 0.8 every 30k iterations. In all of our experiments, we compute the content loss at layer relu4_2 and the style loss at layers relu1_2, relu2_2, relu3_2, and relu4_2 of the pre-trained VGG-16 network. We use T=2, λ = 1 (in Algorithm <ref>) in our two-branch training.

§.§ Comparisons

In this section, we compare our method with other CNN-based style transfer approaches <cit.>. For fair comparison, we directly borrow results from their papers.
It is difficult to compare results with different degrees of stylization abstraction, which is controlled by the ratio α/β in eq:loss_perceptual; different works use their own ratios to present results. For comparable perceptual quality, we choose different α, β in each comparison. More results are available in our supplementary material[<http://home.ustc.edu.cn/~cd722522/>].

Compared with the Iterative Optimization Method. We use α/β = 1/100 (in eq:loss_perceptual) to produce comparable perceptual stylization in fg:comparison_neuralstyle. Our method, like all other feed-forward methods, creates less abstract stylization results than the optimization method <cit.>. It is still difficult to judge which one is more appealing in practice. However, our method, like other feed-forward methods, can be hundreds of times faster than optimization-based methods.

Compared with Feed-forward Networks. In fg:comparison_texturnets and fg:comparison_johnson, we respectively compare our results with two feed-forward network methods <cit.>. We use α/β = 1/50 (in eq:loss_perceptual) in both comparisons. Ulyanov et al. <cit.> design a shallow network tailored to the texture synthesis task. When it is applied to the style transfer task, the stylization results are more like texture transfer, sometimes randomly pasting textures onto the content image. Johnson et al. <cit.> use a much deeper network and often obtain better results. Compared with both methods, our results clearly present more region-based style transfer, for instance, the portrait in fg:comparison_texturnets, and the river/grass/forest in fg:comparison_johnson. Moreover, different from their one-network-per-style training, all of our styles are jointly trained in a single model.
Compared with Other Simultaneous Multi-style Learning. Dumoulin et al., in their very recent work <cit.>, introduce the "conditional instance normalization" mechanism derived from <cit.> to jointly train multiple styles in one model, where the parameters of different styles are defined by different instance normalization factors (scaling and shifting) after each convolution layer. However, their network does not explicitly decouple the content and styles as ours does. Compared with theirs, our method allows more region-specific transfer. As shown in fg:comparison_google, our stylization results correspond better to the natural regions of the content images. In this comparison, we use α/β = 1/25 (in eq:loss_perceptual).

§ DISCUSSION AND CONCLUSION

In this paper, we have proposed a novel explicit representation for style and content, which can be well decoupled by our network. The decoupling allows faster training (for multiple styles and for new styles), and enables new interesting style fusion effects, such as linear and region-specific style fusion. More importantly, we present a new interpretation of neural style transfer, which may inspire new understandings of image reconstruction and restoration.

There are still some interesting issues for further investigation. For example, the auto-encoder may integrate semantic segmentation <cit.> as additional supervision in the region decomposition, which would help create more impressive region-specific transfer. Besides, our learnt representation does not fully utilize all channels, which may imply a more compact representation.

§ ACKNOWLEDGEMENT

This work is partially supported by the National Natural Science Foundation of China (NSFC, No. 61371192).
arXiv:1703.09210v2 [cs.CV], 27 Mar 2017. "StyleBank: An Explicit Representation for Neural Image Style Transfer", Dongdong Chen, Lu Yuan, Jing Liao, Nenghai Yu, Gang Hua.
§ INTRODUCTION

In nature, there are fewer than three hundred stable or long-lived nuclides, which lie along the valley of stability in the nuclear chart. When unstable nuclei (of which close to three thousand are now known <cit.>) are explored, many exotic nuclear phenomena have been observed. The most famous exotic nucleus is ^11Li, in which the halo structure was identified <cit.>. Theoretically, many more nuclei are predicted to be bound. In Fig. <ref> is shown the prediction from the Weizsäcker-Skyrme (WS4) mass model <cit.>, which is one of the best nuclear mass models on the market. Many other models, e.g., non-relativistic <cit.> and relativistic density functional theories <cit.>, have also been used to explore the border of the nuclear chart. Nowadays, the study of the properties of these exotic nuclei is at the forefront of nuclear physics research, because it can not only reveal new physics but also lead to new insights on nucleosynthesis.

In this contribution, I will first discuss the physics connected with exotic nuclear phenomena in Section <ref>. From my personal point of view, six features will be illustrated concerning exotic nuclear structure. Then I will highlight some recent progress corresponding to each of these features in Section <ref>. Finally, I will discuss perspectives in Section <ref>.

§ PHYSICS IN EXOTIC NUCLEAR STRUCTURE

The first important characteristic of exotic nuclei is certainly the weakly-bound feature. In unstable nuclei, particularly in those close to the drip lines, e.g., the neutron drip line, the neutron Fermi surface is very close to the threshold, as seen in Fig. <ref>(a) <cit.>. Therefore the contribution from continua becomes more and more important. In Fig.
<ref>(b), the neutron separation energy S_n, equivalent to the neutron Fermi energy, is shown schematically as a function of neutron number <cit.>. A larger neutron excess results in a smaller S_n, i.e., the valence neutron(s) is (are) more easily knocked out and the nucleus is more easily coupled to the scattering environment, thus making exotic nuclei open quantum systems, which are very much involved in the studies of nuclear reactions and nucleosynthesis.

Halo nuclei are characterized by a large spatial extension; see Fig. <ref>(c) for the example of ^11Li <cit.>. In neutron halo nuclei, there appears pure neutron matter with a very low density, surrounding a dense core <cit.>. Similar to what happens in low-density infinite nuclear matter or neutron matter <cit.>, a pair condensate or strong di-neutron correlations may occur in finite nuclei with halo structure [Fig. <ref>(d)] <cit.>. In addition, the oscillation between the core and the low-density neutron matter leads to soft dipole modes, also known as pygmy dipole resonances, which were discussed a lot at INPC2016. Experimentally these features have been explored in, e.g., Refs. <cit.>.

Most known nuclei are deformed <cit.>. What kind of new features can deformation effects bring to exotic nuclei, and in particular to halo nuclei? Note that in recent years, more candidates for deformed halo nuclei have been identified; examples are ^31Ne <cit.> and ^37Mg <cit.>. Concerning deformation effects in halo nuclei, I will focus on theoretical predictions of shape decoupling [Fig. <ref>(e) & (f)] <cit.>.

Shell structure is very important in the study of atomic nuclei; it is characterized by large spin-orbit couplings. The spin-orbit couplings are closely connected with the nuclear surface diffuseness. It is therefore very natural that the spin-orbit splitting would change when going from the β-stability line to the drip lines, because the nuclear surface could be more diffuse [Fig. <ref>(g)] <cit.>.
This results in shell evolution in exotic nuclei and changes of nuclear magicity, which can be hinted at by, e.g., separation energies <cit.>. Certainly, the shell evolution is also the result of many other important factors, like the tensor force <cit.>. It should be emphasized that shape evolution and shape coexistence are also related physical topics of exotic nuclei, which were discussed in a dedicated session at INPC2016.

Beyond the drip lines, nuclei are unbound with respect to nucleon emission. This feature implies some new radioactivities, of which the most discussed are one- and two-proton radioactivities, thanks to the Coulomb interaction, which leads to a Coulomb barrier hindering the escape of the proton(s) from the parent nucleus [Fig. <ref>(h)] <cit.>. Beyond the neutron drip line, there may be two-neutron radioactivity; one example of recent interest is ^26O <cit.>.

It is well known that clustering effects are important in atomic nuclei, and cluster structure appears in some stable ones if they are excited close to certain thresholds <cit.>. In exotic nuclei, clustering effects can also emerge in low-lying excited states and even in ground states. For exotic nuclei with many more neutrons, a cluster configuration is energetically more favored because it permits a more even distribution of valence neutrons, as shown in Fig. <ref>(i) <cit.>. In a recent experimental study of ^12Be, a 0^+ resonant state with a large cluster decay branching ratio was observed <cit.>. This observation supports strong clustering effects in ^12Be. In addition, in some halo nuclei there appears the so-called "Borromean" structure, shown in Fig. <ref>(j) <cit.>: a bound three-body system in which any two-body subsystem is unbound.

§ HIGHLIGHTS OF RECENT PROGRESSES

Next I will highlight some recent progress in the theoretical study of exotic nuclear structure.
There are indeed many interesting and important works, but I can only choose some of them due to the limitation of pages. More extensive discussions can be found in Ref. <cit.>.

§.§ The weakly bound feature of exotic nuclei

Many models have been developed to take into account the contribution of continua and resonances; see, e.g., Refs. <cit.> for recent reviews. There are many ways to locate single-particle resonances. Besides the conventional scattering phase shift method <cit.>, several bound-state-like approaches <cit.>, such as the analytical continuation in the coupling constant <cit.>, the real stabilization method <cit.> and the complex scaling method (CSM) <cit.>, are often used to study single-particle resonances in atomic nuclei. Several other methods, e.g., the Jost function method <cit.>, the Green's function method <cit.>, the Green's function + CSM <cit.>, and solving the Schrödinger or Dirac equations in the complex momentum representation <cit.>, have also been implemented in nuclear models.

For describing the contribution from the continua at the mean-field level, the conventional BCS method suffers from some problems. One of them is the non-localization of nucleon density distributions <cit.>. One way to partly solve this problem is to use the resonance BCS (rBCS) approach <cit.>: after single-particle resonances are located, their contribution can be taken into account through the BCS approximation. It was also justified that if one, instead of using the conventional BCS approximation, makes a Bogoliubov transformation and solves the Hartree-Fock-Bogoliubov (HFB) equations in r space, the contribution from continua can be included properly and the nucleus in question is localized <cit.>. Since then, HFB models have been developed for spherical nuclei with the continuum either discretized <cit.> or treated with scattering boundary conditions <cit.>. The HFB model has also been extended to the study of deformed nuclei <cit.>.
In parallel, the relativistic Hartree-Bogoliubov (RHB) and relativistic HFB models were established for spherical <cit.> and deformed exotic nuclei including halos <cit.>.

§.§ Deformation effects in nuclear halos

Based on the deformed RHB model in a Woods-Saxon basis <cit.>, shape decoupling effects have been predicted in ^42,44Mg <cit.>: the core of these nuclei is prolate, but the halo has an oblate shape. The generic conditions for the occurrence of halos in deformed nuclei and of shape decoupling effects were given in Ref. <cit.>. Later, with a non-relativistic HFB model, Pei et al. predicted that ^38Ne has a nearly spherical core but a prolate halo <cit.>. Similar effects have been investigated in Ref. <cit.>, in which a square-well potential was used and the spin-orbit coupling was neglected. These predictions were made for the ground state. It would be interesting to study the dynamics and excitations of these deformed halo nuclei <cit.>.

§.§ Di-neutron correlations

Concerning di-neutron correlations, progress in recent years includes the study of Cooper pairs <cit.>, di-neutron correlations <cit.>, di-proton correlations <cit.>, neutron-proton correlations <cit.>, and so on. For example, the asymptotic form of a neutron Cooper pair penetrating to the exterior of the nuclear surface was investigated with the Bogoliubov theory in Ref. <cit.>. It was found that Cooper pairs are spatially correlated in the asymptotic large-distance limit, and that the penetration length of the pair condensate is universally governed by the two-neutron separation energy.

There are also many theoretical investigations of the soft dipole modes <cit.>. In Ref. <cit.>, a systematic study with the canonical-basis time-dependent HFB theory reveals a number of characteristic features of the low-energy E1 modes, e.g., a universal behavior of the low-energy E1 modes for heavy neutron-rich isotopes, which suggests the emergence of decoupled E1 peaks beyond N = 82.
§.§ The shell evolution
It is interesting and instructive to choose the Ca isotope chain as an example to discuss the evolution of shell structure and changes of magicity, because there might be five magic numbers in the Ca isotopes. In Ref. <cit.>, from the measured masses of ^53,54Ca, a prominent shell closure at N=32 was established. This shell closure was later confirmed together with a new one at N=34, indicated by the fact that the energy of the first 2^+ state for ^52,54Ca rises dramatically <cit.>. Recently the magicity of N=32 was shown to persist in Sc isotopes <cit.>. The appearance of the shell closures at N=32 and 34 has been attributed to the evolution of the neutron f_5/2 orbital, which rises due to a weakened proton-neutron interaction as Z decreases to 20 <cit.>. Theoretically, there have been many investigations of the magicity of N=32 and/or 34 <cit.>. For example, it has been shown in Ref. <cit.> that the like-particle tensor contribution is responsible for these new shell closures, and in Ref. <cit.> the importance of exchange terms in the relativistic framework for these shell closures was emphasized. However, a recent precise measurement of charge radii in Ca isotopes <cit.> casts some doubt on the magicity at N=32. If ^52Ca is doubly magic, its charge radius should be smaller than those of its neighbors. But this is not the case <cit.>. Therefore nuclear magicity in exotic nuclei may be “local” in the sense that it manifests itself in some nuclear properties but not in others, in contrast to the traditional magic numbers, which are “global” or robust and manifest themselves in “all” nuclear properties, e.g., separation energies, charge radii, Q values of α decays, etc.
§.§ New radioactivities
Concerning new radioactivities, without going into details, I would like to mention that there have been some systematic studies with the HFB model <cit.>, and predictions were also made with the relativistic continuum Hartree-Bogoliubov model <cit.>.
§.§ Clustering effects
There have been many interesting results concerning the theoretical study of clustering effects in atomic nuclei. For example, a non-localized or container picture was proposed for cluster structure <cit.>, giant dipole resonances were argued to be a fingerprint of cluster structure <cit.>, one-dimensional α condensation of α-linear-chain states in ^12C and ^16O was studied <cit.>, and rod-shaped nuclei were explored at extremely high spin and isospin <cit.>. One more point about clustering effects is that, from radius-constrained mean field calculations, whether non-relativistic <cit.> or relativistic <cit.>, one can also obtain cluster structure. However, in this kind of study, one has to take serious care of the truncation of the basis and ensure convergence <cit.>.
§ CONCLUDING REMARKS AND PERSPECTIVES
To summarize, after introducing the following characteristic features and new physics connected with exotic nuclear phenomena: the weakly-bound feature, the large spatial extension in halo nuclei, deformation effects in halo nuclei, the shell evolution, new radioactivities and clustering effects, I have highlighted some recent progress corresponding to these features. It should be emphasized that, to describe the structure of exotic nuclei, one often needs to modify conventional nuclear models or develop new theoretical approaches.
Nowadays, there are many attempts to unify nuclear models. For example, with the fast development of supercomputers, ab initio theories can deal with heavier and heavier nuclei, as discussed in Ekström's and Bacca's talks. Besides that, there are also projects to develop density functional theories from first principles <cit.> (also mentioned in Liang's plenary talk) or models based on subnucleon degrees of freedom <cit.>. In his talk, Nazarewicz put atomic nuclei on a table with three pillars. This table would not be stable if it had only two pillars, that is, only theory and simulations. We need experiments and experimental facilities. With the development of radioactive ion beam facilities around the world, including the High Intensity heavy ion Accelerator Facility (HIAF) in Huizhou, China <cit.> and the Beijing Isotope-Separation-On-Line Neutron-Rich Beam Facility (BISOL) <cit.>, more unstable nuclei will become experimentally accessible, which will certainly challenge, as well as provide opportunities for, theoretical studies of exotic nuclear structure.
§ ACKNOWLEDGEMENTS
Collaborations and/or helpful discussions with A. Afanasjev, K. Blaum, Y. Chen, G. Colo, L.S. Geng, N.V. Giai, L.L. Li, H.Z. Liang, W.H. Long, B.N. Lu, H.F. Lü, J. Meng, J. Pei, P. Ring, H. Sagawa, J.R. Stone, X.X. Sun, I. Tanihata, J. Terasaki, A.W. Thomas, H. Toki, D. Vretenar, N. Wang, F. R. Xu, S. Yamaji, J.Y. Zeng, Y.H. Zhang, S.Q. Zhang, E.G. Zhao, J. Zhao and P.W. Zhao are gratefully acknowledged.
Thoennessen2013_RPP76-056301 M. Thoennessen, Rep. Prog. Phys. 76, 056301 (2013)
Tanihata1985_PRL55-2676 I. Tanihata, H. Hamagaki, O. Hashimoto, Y. Shida, N. Yoshikawa, K. Sugimoto, O. Yamakawa, T. Kobayashi, N. Takahashi, Phys. Rev. Lett. 55, 2676 (1985)
Wang2014_PLB734-215 N. Wang, M. Liu, X. Wu, J. Meng, Phys. Lett. B 734, 215 (2014)
Erler2012_Nature486-509 J. Erler, N. Birge, M. Kortelainen, W. Nazarewicz, E. Olsen, A.M. Perhac, M. Stoitsov, Nature 486, 509 (2012)
Afanasjev2013_PLB726-680 A.V. Afanasjev, S.E.
Agbemava, D. Ray, P. Ring, Phys. Lett. B 726, 680 (2013)Qu2013_SciChinaPMA56-2031 X. Qu, Y. Chen, S. Zhang, P. Zhao, I. Shin, Y. Lim, Y. Kim, J. Meng, Sci. China-Phys. Mech. Astron. 56, 2031 (2013)Lu2015_PRC91-027304 K.Q. Lu, Z.X. Li, Z.P. Li, J.M. Yao, J. Meng, Phys. Rev. C 91, 027304 (2015)Audi2012_ChinPhysC36-1157 G. Audi, F.G. Kondev, M. Wang, B. Pfeiffer, X. Sun, J. Blachot, M. MacCormick, Chin. Phys. C 36, 1157 (2012)Audi2012_ChinPhysC36-1287 G. Audi, M. Wang, A.H. Wapstra, F.G. Kondev, M. MacCormick, X. Xu, B. Pfeiffer, Chin. Phys. C 36, 1287 (2012)Wang2012_ChinPhysC36-1603 M. Wang, G. Audi, A.H. Wapstra, F.G. Kondev, M. MacCormick, X. Xu, B. Pfeiffer, Chin. Phys. C 36, 1603 (2012)Meng2006_PPNP57-470 J. Meng, H. Toki, S.G. Zhou, S.Q. Zhang, W.H. Long, L.S. Geng, Prog. Part. Nucl. Phys. 57, 470 (2006)Meng2015_JPG42-093101 J. Meng, S.G. Zhou, J. Phys. G: Nucl. Part. Phys. 42, 093101 (2015)Dobaczewski2007_PPNP59-432 J. Dobaczewski, N. Michel, W. Nazarewicz, M. Ploszajczak, J. Rotureau, Prog. Part. Nucl. Phys. 59, 432 (2007)Michel2009_JPG36-013101 N. Michel, W. Nazarewicz, M. Ploszajczak, T. Vertse, J. Phys. G: Nucl. Phys. 36, 013101 (2009)Meng1996_PRL77-3963 J. Meng, P. Ring, Phys. Rev. Lett. 77, 3963 (1996)Hagino2007_PRL99-022506 K. Hagino, H. Sagawa, J. Carbonell, P. Schuck, Phys. Rev. Lett. 99, 022506 (2007)Blank2008_RPP71-046301 B. Blank, M. Ploszajczak, Rep. Prog. Phys. 71, 046301 (2008)Freer2007_RPP70-2149 M. Freer, Rep. Prog. Phys. 70, 2149 (2007)Johnson2004_PR389-1 B. Johnson, Phys. Rep. 389, 1 (2004)Vretenar2005_PR409-101 D. Vretenar, A.V. Afanasjev, G.A. Lalazissis, P. Ring, Phys. Rep. 409, 101 (2005)Sun2010_PLB683-134 B.Y. Sun, H. Toki, J. Meng, Phys. Lett. B 683, 134 (2010)Matsuo2006_PRC73-044309 M. Matsuo, Phys. Rev. C 73, 044309 (2006)Sun2012_PRC86-014305 T.T. Sun, B.Y. Sun, J. Meng, Phys. Rev. C 86, 014305 (2012)Sagawa2015_EPJA51-102 H. Sagawa, K. Hagino, Eur. Phys. J. A 51, 102 (2015)Nakamura2006_PRL96-252502 T. Nakamura, A.M. Vinodkumar, T. 
Sugimoto et al., Phys. Rev. Lett. 96, 252502 (2006)Kanungo2015_PRL114-192502 R. Kanungo, A. Sanetullaev, J. Tanaka et al., Phys. Rev. Lett. 114, 192502 (2015)Zhou2016_PS91-063008 S.G. Zhou, Phys. Scr. 91, 063008 (2016)Nakamura2014_PRL112-142501 T. Nakamura, N. Kobayashi, Y. Kondo et al., Phys. Rev. Lett. 112, 142501 (2014)Kobayashi2014_PRL112-242501 N. Kobayashi, T. Nakamura, Y. Kondo et al., Phys. Rev. Lett. 112, 242501 (2014)Zhou2010_PRC82-011301R S.G. Zhou, J. Meng, P. Ring, E.G. Zhao, Phys. Rev. C 82, 011301(R) (2010)Li2012_PRC85-024312 L. Li, J. Meng, P. Ring, E.G. Zhao, S.G. Zhou, Phys. Rev. C 85, 024312 (2012)Pei2013_PRC87-051302R J.C. Pei, Y.N. Zhang, F.R. Xu, Phys. Rev. C 87, 051302(R) (2013)Pei2014_PRC90-024317 J.C. Pei, G.I. Fann, R.J. Harrison, W. Nazarewicz, Y. Shi, S. Thornton, Phys. Rev. C 90, 024317 (2014)Ozawa2000_PRL84-5493 A. Ozawa, T. Kobayashi, T. Suzuki, K. Yoshida, I. Tanihata, Phys. Rev. Lett. 84, 5493 (2000)Peru2000_EPJA9-35 S. Peru, M. Girod, J. Berger, Eur. Phys. J. A 9, 35 (2000)Otsuka2005_PRL95-232502 T. Otsuka, T. Suzuki, R. Fujimoto, H. Grawe, Y. Akaishi, Phys. Rev. Lett. 95, 232502 (2005)Colo2007_PLB646-227 G. Colo, H. Sagawa, S. Fracasso, P. Bortignon, Phys. Lett. B 646, 227 (2007)Sorlin2008_PPNP61-602 O. Sorlin, M.G. Porquet, Prog. Part. Nucl. Phys. 61, 602 (2008)Sagawa2014_PPNP76-76 H. Sagawa, G. Colo, Prog. Part. Nucl. Phys. 76, 76 (2014)Woods1997_ARNPS47-541 P.J. Woods, C.N. Davids, Annu. Rev. Nucl. Part. Sci. 47, 541 (1997)Thoennessen2004_RPP67-1187 M. Thoennessen, Rep. Prog. Phys. 67, 1187 (2004)Lin2011_SciChinaPMA54S1-73 C. Lin, X. Xu, H. Jia et al., Sci. China-Phys. Mech. Astron. 54 (Suppl. 1), 73 (2011)Pfutzner2012_RMP84-567 M. Pfutzner, M. Karny, L.V. Grigorenko, K. Riisager, Rev. Mod. Phys. 84, 567 (2012)Ma2015_PLB743-306 Y.G. Ma, D.Q. Fang, X.Y. Sun et al., Phys. Lett. B 743, 306 (2015)Lunderberg2012_PRL108-142503 E. Lunderberg, P.A. DeYoung, Z. Kohley et al., Phys. Rev. Lett. 
108, 142503 (2012)Kohley2013_PRL110-152501 Z. Kohley, T. Baumann, D. Bazin et al., Phys. Rev. Lett. 110, 152501 (2013)Kondo2016_PRL116-102503 Y. Kondo, T. Nakamura, R. Tanaka et al., Phys. Rev. Lett. 116, 102503 (2016)Oertzen2006_PR432-43 W. von Oertzen, M. Freer, Y. Kanada-En'yo, Phys. Rep. 432, 43 (2006)Yang2014_PRL112-162501 Z.H. Yang, Y.L. Ye, Z.H. Li et al., Phys. Rev. Lett. 112, 162501 (2014)Zhou2017_NSC2016 S.G. Zhou, Theoretical Study of Exotic Nuclear Structure, in Nuclear Structure in China 2016 - Proceedings of the 16th National Conference on Nuclear Structure in China, to be publishedFrederico2012_PPNP67-939 T. Frederico, A. Delfino, L. Tomio, M.T. Yamashita, Prog. Part. Nucl. Phys. 67, 939 (2012)Ji2016_IJMPE25-1641003 C. Ji, Int. J. Mod. Phys. E 25, 1641003 (2016)Meng2016_RDFNS-83 J. Meng, P. Ring, P. Zhao, S.G. Zhou, Relativistic mean field description of exotic nuclei, Chap. 3 in Vol. 10 of International Review of Nuclear Physics(World Scientific Publishing Co. Pte. Ltd., 2016, edited by J. Meng),pp. 83–141Hamamoto2016_PRC93-054328 I. Hamamoto, Phys. Rev. C 93, 054328 (2016)Efros2007_JPG34-R459 V.D. Efros, W. Leidemann, G. Orlandini, N. Barnea, J. Phys. G: Nucl. Part. Phys. 34, R459 (2007)Carbonell2014_PPNP74-55 J. Carbonell, A. Deltuva, A.C. Fonseca, R. Lazauskas, Prog. Part. Nucl. Phys. 74, 55 (2014)Tanaka1997_PRC56-562 N. Tanaka, Y. Suzuki, K. Varga, Phys. Rev. C 56, 562 (1997)Yang2001_CPL18-196 S.C. Yang, J. Meng, S.G. Zhou, Chin. Phys. Lett. 18, 196 (2001)Zhang2004_PRC70-034308 S.S. Zhang, J. Meng, S.G. Zhou, G.C. Hillhouse, Phys. Rev. C 70, 034308 (2004)Guo2006_PRC74-024320 J.Y. Guo, X.Z. Fang, Phys. Rev. C 74, 024320 (2006)Zhang2012_PRC86-032802 S.S. Zhang, M.S. Smith, G. Arbanas, R.L. Kozub, Phys. Rev. C 86, 032802 (2012)Xu2015_PRC92-024324 X.D. Xu, S.S. Zhang, A.J. Signoracci, M.S. Smith, Z.P. Li, Phys. Rev. C 92, 024324 (2015)Zhang2008_PRC77-014312 L. Zhang, S.G. Zhou, J. Meng, E.G. Zhao, Phys. Rev. 
C 77, 014312 (2008)Zhou2009_JPB42-245001 S.G. Zhou, J. Meng, E.G. Zhao, J. Phys. B: At. Mol. Phys. 42, 245001 (2009)Pei2011_PRC84-024311 J.C. Pei, A.T. Kruppa, W. Nazarewicz, Phys. Rev. C 84, 024311 (2011)Myo2014_PPNP79-1 T. Myo, Y. Kikuchi, H. Masui, K. Katō, Prog. Part. Nucl. Phys. 79, 1 (2014)Papadimitriou2015_PRC91-021001R G. Papadimitriou, J.P. Vary, Phys. Rev. C 91, 021001(R) (2015)Shi2014_PRC90-034319 M. Shi, Q. Liu, Z.M. Niu, J.Y. Guo, Phys. Rev. C 90, 034319 (2014)Shi2015_PRC92-054313 M. Shi, J.Y. Guo, Q. Liu, Z.M. Niu, T.H. Heng, Phys. Rev. C 92, 054313 (2015)Lu2012_PRL109-072501 B.N. Lu, E.G. Zhao, S.G. Zhou, Phys. Rev. Lett. 109, 072501 (2012)Lu2013_PRC88-024323 B.N. Lu, E.G. Zhao, S.G. Zhou, Phys. Rev. C 88, 024323 (2013)Matsuo2001_NPA696-371 M. Matsuo, Nucl. Phys. A 696, 371 (2001)Sun2014_PRC90-054321 T.T. Sun, S.Q. Zhang, Y. Zhang, J.N. Hu, J. Meng, Phys. Rev. C 90, 054321 (2014)Shi2016_PRC94-024302 X.X. Shi, M. Shi, Z.M. Niu, T.H. Heng, J.Y. Guo, Phys. Rev. C 94, 024302 (2016)Li2016_PRL117-062502 N. Li, M. Shi, J.Y. Guo, Z.M. Niu, H. Liang, Phys. Rev. Lett. 117, 062502 (2016)Fang2017_PRC95-024311 Z. Fang, M. Shi, J.Y. Guo, Z.M. Niu, H. Liang, S.S. Zhang, Phys. Rev. C 95, 024311 (2017)Bulgac1980_nucl-th9907088 A. Bulgac,IPNE FT-194-1980, Bucharest (arXiv: nucl-th/9907088)Dobaczewski1984_NPA422-103 J. Dobaczewski, H. Flocard, J. Treiner, Nucl. Phys. A 422, 103 (1984)Sandulescu2000_PRC61-061301R N. Sandulescu, N. Van Giai, R.J. Liotta, Phys. Rev. C 61, 061301(R) (2000)Geng2003_PTP110-921 L. Geng, H. Toki, S. Sugimoto, J. Meng, Prog. Theo. Phys. 110, 921 (2003)Zhang2013_EPJA49-77 S.S. Zhang, E.G. Zhao, S.G. Zhou, Euro. Phys. J. A 49, 77 (2013)Dobaczewski1996_PRC53-2809 J. Dobaczewski, W. Nazarewicz, T.R. Werner, J.F. Berger, C.R. Chinn, J. Dechargé, Phys. Rev. C 53, 2809 (1996)Yu2003_PRL90-222501 Y. Yu, A. Bulgac, Phys. Rev. Lett. 90, 222501 (2003)Schunck2008_PRC78-064305 N. Schunck, J.L. Egido, Phys. Rev. C 78, 064305 (2008)Zhang2011_PRC83-054301 Y. 
Zhang, M. Matsuo, J. Meng, Phys. Rev. C 83, 054301 (2011)Zhang2012_PRC86-054318 Y. Zhang, M. Matsuo, J. Meng, Phys. Rev. C 86, 054318 (2012)Zhang2017_PRC95-014316 Y. Zhang, Y. Chen, J. Meng, P. Ring, Phys. Rev. C 95, 014316 (2017)Nakada2008_NPA808-47 H. Nakada, Nucl. Phys. A 808, 47 (2008)Meng1998_NPA635-3 J. Meng, Nucl. Phys. A 635, 3 (1998)Poschl1997_PRL79-3841 W. Pöschl, D. Vretenar, G.A. Lalazissis, P. Ring, Phys. Rev. Lett. 79, 3841 (1997)Long2010_PRC81-024308 W.H. Long, P. Ring, N.V. Giai, J. Meng, Phys. Rev. C 81, 024308 (2010)Li2012_CPL29-042101 L. Li, J. Meng, P. Ring, E.G. Zhao, S.G. Zhou, Chin. Phys. Lett. 29, 042101 (2012)Chen2012_PRC85-067301 Y. Chen, L. Li, H. Liang, J. Meng, Phys. Rev. C 85, 067301 (2012)Zhou2003_PRC68-034323 S.G. Zhou, J. Meng, P. Ring, Phys. Rev. C 68, 034323 (2003)Misu1997_NPA614-44 T. Misu, W. Nazarewicz, S. Åberg, Nucl. Phys. A 614, 44 (1997)Fossez2016_PRC93-011305R K. Fossez, W. Nazarewicz, Y. Jaganathen, N. Michel, M. Płoszajczak, Phys. Rev. C 93, 011305(R) (2016)Zhang2014_PRC90-034313R Y. Zhang, M. Matsuo, J. Meng, Phys. Rev. C 90, 034313(R) (2014)Kobayashi2016_PRC93-024310 F. Kobayashi, Y. Kanada-En'yo, Phys. Rev. C 93, 024310 (2016)Oishi2014_PRC90-034303 T. Oishi, K. Hagino, H. Sagawa, Phys. Rev. C 90, 034303 (2014)Masui2016_PTEP2016-053D01 H. Masui, M. Kimura, Prog. Theor. Exp. Phys. 2016, 053D01 (2016)Paar2007_RPP70-691 N. Paar, D. Vretenar, G. Colo, Rep. Prog. Phys. 70, 691 (2007)Roca-Maza2012_PRC85-024601 X. Roca-Maza, G. Pozzi, M. Brenna, K. Mizuyama, G. Colò, Phys. Rev. C 85, 024601 (2012)Vretenar2012_PRC85-044317 D. Vretenar, Y.F. Niu, N. Paar, J. Meng, Phys. Rev. C 85, 044317 (2012)Savran2013_PPNP70-210 D. Savran, T. Aumann, A. Zilges, Prog. Part. Nucl. Phys. 70, 210 (2013)Ebata2014_PRC90-024303 S. Ebata, T. Nakatsukasa, T. Inakura, Phys. Rev. C 90, 024303 (2014)Inakura2014_PRC89-064316 T. Inakura, W. Horiuchi, Y. Suzuki, T. Nakatsukasa, Phys. Rev. C 89, 064316 (2014)Papakonstantinou2015_PRC92-034311 P. 
Papakonstantinou, H. Hergert, R. Roth, Phys. Rev. C 92, 034311 (2015)Ma2016_PRC93-014317 H.L. Ma, B.G. Dong, Y.L. Yan, H.Q. Zhang, D.Q. Yuan, S.Y. Zhu, X.Z. Zhang, Phys. Rev. C 93, 014317 (2016)DeGregorio2016_PRC93-044314 G. De Gregorio, F. Knapp, N. Lo Iudice, P. Vesely, Phys. Rev. C 93, 044314 (2016)Zheng2016_PRC94-014313 H. Zheng, S. Burrello, M. Colonna, V. Baran, Phys. Rev. C 94, 014313 (2016)Nakatsukasa2016_RMP88-045004 T. Nakatsukasa, K. Matsuyanagi, M. Matsuo, K. Yabana, Rev. Mod. Phys. 88, 045004 (2016)Wienholtz2013_Nature498-346 F. Wienholtz, D. Beck, K. Blaum et al., Nature 498, 346 (2013)Steppenbeck2013_Nature502-207 D. Steppenbeck, S. Takeuchi, N. Aoi et al., Nature 502, 207 (2013)Xu2015_ChinPhysC39-104001 X. Xu, M. Wang, Y.H. Zhang et al.,Chin. Phys. C 39, 104001 (2015)Grasso2014_PRC89-034316 M. Grasso, Phys. Rev. C 89, 034316 (2014)Yueksel2014_PRC89-064322 E. Yüksel, N. Van Giai, E. Khan, K. Bozkurt, Phys. Rev. C 89 (2014)Wang2015_JPG42-125101 X.B. Wang, G.X. Dong, J. Phys. G: Nucl. Part. Phys. 42, 125101 (2015)Li2016_PLB753-97 J.J. Li, J. Margueron, W.H. Long, N. Van Giai, Phys. Lett. B 753, 97 (2016)GarciaRuiz2016_NatPhys12-594 R.F. Garcia Ruiz, M.L. Bissell, K. Blaum et al., Nat. Phys. 12, 594 (2016)Olsen2013_PRL110-222501 E. Olsen, M. Pfuetzner, N. Birge, M. Brown, W. Nazarewicz, A. Perhac, Phys. Rev. Lett. 110, 222501 (2013); ibid., 111, 139903 (2013) [Erratum]Lim2016_PRC93-014314 Y. Lim, X. Xia, Y. Kim, Phys. Rev. C 93, 014314 (2016)Zhou2013_PRL110-262501 B. Zhou, Y. Funaki, H. Horiuchi, Z. Ren, G. Roepke, P. Schuck, A. Tohsaki, C. Xu, T. Yamada, Phys. Rev. Lett. 110, 262501 (2013)He2014_PRL113-032506 W.B. He, Y.G. Ma, X.G. Cao, X.Z. Cai, G.Q. Zhang, Phys. Rev. Lett. 113, 032506 (2014)Suhara2014_PRL112-062501 T. Suhara, Y. Funaki, B. Zhou, H. Horiuchi, A. Tohsaki, Phys. Rev. Lett. 112, 062501 (2014)Zhao2015_PRL115-022501 P.W. Zhao, N. Itagaki, J. Meng, Phys. Rev. Lett. 115, 022501 (2015)Girod2013_PRL111-132503 M. Girod, P. Schuck, Phys. Rev. 
Lett. 111, 132503 (2013)Ebran2012_Nature487-341 J.P. Ebran, E. Khan, T. Nikšić, D. Vretenar, Nature 487, 341 (2012)Ebran2014_PRC89-031303R J.P. Ebran, E. Khan, T. Nikšić, D. Vretenar, Phys. Rev. C 89, 031303(R) (2014)Zhou2015_SINAP-CUSTIPEN S.G. Zhou, Constraint cluster structure from covariant density functional theory (2015), talk given at the SINAP-CUSTIPEN Workshop on Clusters and Correlations in Nuclei, Nuclear Reactions and Neutron Stars, Dec. 14-18, 2015, Shanghai, China, Dobaczewski2016_JPG43-04LT01 J. Dobaczewski, J. Phys. G: Nucl. Part. Phys. 43, 04LT01 (2016)Shen2016_CPL33-102103 S.H. Shen, J.N. Hu, H.Z. Liang, J. Meng, P. Ring, S.Q. Zhang, Chin. Phys. Lett. 33, 102103 (2016)Stone2016_PRL116-092501 J.R. Stone, P.A.M. Guichon, P.G. Reinhard, A.W. Thomas, Phys. Rev. Lett. 116, 092501 (2016)Zhou2016_Huizhou-HIAF X.H. Zhou, High intensity heavy ion accelerator facility (HIAF) and its physics goals (2016), talk given at the CUSTIPEN-IMP-PKU Workshop on Physics of Exotic Nuclei, Dec. 12-15, 2016, Huizhou, China, Tanihata2016_Huizhou-HIAF I. Tanihata, What physics can we study with HIAF's high-energy beams (2016), talk given at the CUSTIPEN-IMP-PKU Workshop on Physics of Exotic Nuclei, Dec. 12-15, 2016, Huizhou, China, Zeng2015_ChinSciBulletin60-1329 S. Zeng, W. Liu, Y. Ye, Z. Guo, Chin. Sci. Bull. 60, 1329 (2015) (in Chinese)
The work is necessitated by the search for new materials to detect ionizing radiation. Ternary alkaline-earth halide systems doped with rare-earth ions are promising scintillators showing high efficiency and energy resolution. Some aspects of crystal growth and data on the structural and luminescence properties of BaBrI and BaClI doped with low concentrations of Eu^2+ ions are reported. The crystals are grown by the vertical Bridgman method in sealed quartz ampoules. New crystallography data for a BaClI single crystal obtained by the single crystal X-ray diffraction method are presented in this paper. Emission, excitation and optical absorption spectra as well as luminescence decay kinetics are studied under excitation by X-ray, vacuum ultraviolet and ultraviolet radiation. The energies of the first 4f-5d transition in Eu^2+ and the band gaps of the crystals have been obtained. We have calculated the electronic band structure of the crystals using ab initio density functional theory. The calculated band gap energies are in accord with the experimental estimates. The energies of the gaps between the occupied Eu^2+ 4f level and the valence band top are predicted. In addition, the positions of the lanthanide energy levels relative to the valence band have been constructed using the chemical shift model.
§ INTRODUCTION
Eu-doped orthorhombic alkaline-earth halides have recently been utilized as prospective scintillators for gamma-ray detection, having high light yield and energy resolution. Strontium iodide crystals doped with Eu^2+ ions demonstrate excellent properties close to the theoretical limit <cit.>. This material has been extensively studied with optical spectroscopy methods within the past decade.
Spectroscopic data measured in the ultraviolet and vacuum ultraviolet (VUV) regions, together with ab initio calculations, provided information about exciton and band gap energies <cit.> and are necessary for understanding the mechanism of defect formation and the role of defects in energy transfer. Recently the research focus has shifted to the study of mixed halide compounds due to their superior light yield <cit.>. Among the barium dihalides BaFI-BaClI-BaBrI-BaBrCl, the scintillation properties have been studied for Eu-doped BaFI, BaBrCl and BaBrI <cit.>. Despite their excellent properties, experimental data on optical absorption and excitation spectra in the spectral region of 4f-5d and band-to-band transitions are scarce, because single crystals doped with high concentrations of Eu^2+ ions (more than 5 mol.%) were used. When measuring highly doped samples, inner-filter effects can be observed. These include reabsorption and non-uniform excitation throughout the sample. These effects dramatically change the shape of the excitation spectrum. Therefore, the estimation of the lowest energy of 4f-5d transitions in Eu-doped BaBrI given in <cit.> is not correct. Furthermore, the experimental determination of the band gap in these crystals is not possible due to the high absorption related to the allowed 4f^7 → 4f^65d^1 transitions in Eu^2+ ions. At present, the band gap energies of the mixed halide compounds rest on theoretical estimates. We investigate the luminescence, electrical and structural properties of undoped BaBrI, BaBrI-0.05 mol.% Eu^2+ and BaClI-0.1 mol.% Eu^2+ crystals. Absorption, excitation and emission spectra, photoluminescence decay time constants, dielectric properties and pulse height spectra are presented. The vacuum referred binding energy (VRBE) diagram is constructed in accordance with the density functional study. It displays the electron binding energy in the ground and excited state levels of all divalent and trivalent lanthanide ions in BaBrI and BaClI crystals.
§ METHODOLOGY
§.§ Growth and Structural Characterization
The crystals were grown by the vertical Bridgman method in sealed quartz ampoules under vacuum. The temperature gradient was about 10-15 ^oC/cm, and the growth rate was 1 mm/hour. The reagents used for the growth were BaBr_2, BaI_2 and BaCl_2 (purity 99.9%, Lanhit, LTD). Stoichiometric mixtures of BaBr_2+BaI_2 and BaCl_2+BaI_2 were employed. The samples were doped with EuBr_3 and EuCl_3, respectively. Since the material is hygroscopic, the batch was thoroughly dried prior to sealing the ampoules (10-30 mm in diameter). Thermogravimetric and differential scanning calorimetry methods were used to determine the melting point, the level of hydration and the possible dehydration temperatures of the charge materials prior to the crystal growth. The melting points of BaBrI and BaClI are about 783 ^oC and 815 ^oC, respectively. Plates about 1-2 mm thick and 1 cm in diameter were cut and polished in a glove box for optical absorption and luminescence spectra measurements. The remaining grains were analyzed to determine the structure. For pulse height spectra measurements, a 1x1x1 cm^3 sample of BaClI-0.1 mol.% Eu^2+ was cut, polished and wrapped with polytetrafluoroethylene (PTFE) tape to maximize the light collection efficiency. The crystallography data of the BaBrI crystal were published in paper <cit.>. The diffraction pattern of the grown BaBrI crystals was in agreement with the earlier published data. We report new data for a BaClI single crystal measured by the single crystal X-ray diffraction method.
The structure analysis of BaClI crystals was carried out using a Bruker AXS D8 VENTURE dual-source diffractometer with a Photon 100 detector under monochromatized Mo-K_α radiation. Low temperature data were acquired with the crystal cooled by a Bruker Cobra nitrogen cryostat. Three sets of 20 frames were utilized for the initial cell determination, whereas complete data were collected by several φ and ω scans with 0.3^o rotation, 2 s exposure time per frame and a crystal-to-detector distance of 40 mm. The data collection strategies were optimized by the APEX2 program suite <cit.>, and the reflection intensities were extracted and corrected for the Lorentz polarization by the SAINT package <cit.>. A semi-empirical absorption correction was applied by means of the SADABS software <cit.>. It was revealed that the studied samples crystallize in orthorhombic symmetry. The XPREP software assisted in the determination of the space group (Pnma) and in the calculation of intensity statistics. Finally, the least-squares refinements were performed with the program CRYSTALS <cit.>. The structures were solved with the use of the charge flipping algorithm <cit.>, and the space group was confirmed by the analysis of the reconstructed electronic density. Scale factors, atomic positions, occupancies and atomic displacement factors were the refined parameters. In the preliminary anisotropic refinement the R values converged to R ≈ 4. The obtained unit cell parameters are: a = 8.4829(5), b = 4.9517(3), c = 9.6139(5) Å, V = 403.83 Å^3. The density of BaClI crystals calculated from the structure is 4.94 g/cm^3. In the orthorhombic structures (space group Pnma) of the studied crystals, the Ba, Cl and I atoms occupy the fourfold special positions (4c) and lie on the mirror planes perpendicular to the b axis. The barium position is coordinated by 9 anions with mean interatomic distances Ba-Cl ∼ 3.15 and Ba-I ∼ 3.59 Å (Fig. <ref>).
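For an orthorhombic cell the volume is simply V = a·b·c, and the crystallographic density follows from Z = 4 formula units per cell. The quoted values can be cross-checked with a short sketch; the atomic masses and the Avogadro constant are standard reference values, not taken from this paper:

```python
# Crystallographic density of BaClI from the refined orthorhombic cell:
# V = a*b*c for an orthorhombic lattice; rho = Z*M / (N_A * V).

N_A = 6.02214076e23                       # Avogadro constant, 1/mol
M_BACLI = 137.327 + 35.45 + 126.904       # molar mass of BaClI, g/mol

def orthorhombic_density(a, b, c, z, molar_mass):
    """Density (g/cm^3) from cell edges in Angstrom and Z formula units."""
    volume_cm3 = a * b * c * 1e-24        # 1 A^3 = 1e-24 cm^3
    return z * molar_mass / (N_A * volume_cm3)

rho = orthorhombic_density(8.4829, 4.9517, 9.6139, 4, M_BACLI)
print(round(rho, 2))  # ~4.93 g/cm^3, close to the quoted 4.94 g/cm^3
```

The small difference from the quoted 4.94 g/cm^3 is within the rounding of the atomic masses used here.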
X-ray powder diffraction data were obtained with a Bruker D8 ADVANCE diffractometer in the range of diffraction angles 2θ from 3 to 80 degrees, using CuK_α radiation. The experimental conditions were the following: 40 kV, 40 mA, a time per step of 1 s, a step size of 0.02^o, and a Goebel mirror. The XRD pattern of the sample is shown in Fig. <ref> and, in general, it is similar to that obtained in Ref. <cit.>. In contrast to our data, the space group Pbam was reported for BaClI in Ref. <cit.>.
§.§ Optical and dielectric characteristics measurements
The optical absorption spectra were obtained with a Perkin-Elmer Lambda 950 UV/VIS/NIR spectrophotometer at 300 K. Photoluminescence (PL) was measured in a vacuum cold-finger cryostat. The spectra were detected with MDR2 and SDL1 (LOMO) grating monochromators, a Hamamatsu H6780-04 photomodule (185-850 nm), and a photon-counter unit. The luminescence spectra were corrected for the spectral response of the detection channel. The photoluminescence excitation (PLE) spectra were measured with an MDR2 grating monochromator and a 200 W xenon arc lamp for direct 4f-5d excitation, and with a VM-2 (LOMO) vacuum monochromator and a Hamamatsu L7292 deuterium lamp for measurements in the VUV spectral region. The excitation spectra were corrected for the varying intensity of the exciting light. Photoluminescence decay curves were registered with a Rigol 1202 oscilloscope under pulsed nitrogen laser excitation with a pulse duration of about 10 ns. The X-ray excited luminescence measurements were performed using an X-ray tube operating at 50 kV and 1 mA. The measurements of pulse height spectra were carried out with an Enterprises 9814QSB photomultiplier tube. The PMT was operated with a CSN638C1 negative polarity voltage chain.
The focusing system, with a 46 mm active diameter, was assumed to provide 100% photoelectron collection efficiency at the center of the photocathode. The sample was irradiated with gamma rays from a monoenergetic γ-ray source of ^137Cs (662 keV). A homemade preamplifier and an Ortec 570 amplifier were used to obtain pulse height spectra. The samples were optically coupled to the window of the PMT using mineral oil. The dielectric constant of the BaBrI crystal was measured using an E7-20 immittance (RLC) meter manufactured by MNIPI. The measurements of capacitance and dielectric losses were performed in the frequency range from 25 Hz to 1 MHz. Silver paint (kontaktol "Kettler") was employed as the electrode contact material. The pad area was about 60 mm^2 and the sample thickness was about 1 mm. To prevent surface degradation, the dielectric measurements of the polished samples were made in the glove box.
§.§ Calculation details
Ab initio calculations of the BaClI crystal doped with Eu^2+ were carried out within density functional theory (DFT) using the VASP (Vienna Ab initio Simulation Package) computer code <cit.>. The calculations were performed on the HPC clusters "Academician V.M. Matrosov" <cit.> and "Academician A.M. Fock" <cit.>. Using the unit cell parameters from the X-ray diffraction, we constructed a 2×2×1 (48 atoms) supercell, in which one of the Ba^2+ ions was replaced by Eu^2+. The spin-polarized calculations were carried out within the framework of the generalized gradient approximation (GGA) with the PBE exchange-correlation potential <cit.>. Integration within the Brillouin zone was performed on a Γ-centered grid of 8 irreducible k points. Geometry optimization was performed with fixed cell dimensions. Convergence was achieved when the difference in total energy between two successive iterations was less than 10^-6 eV.
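The supercell bookkeeping described above can be illustrated with a minimal sketch. This is not the actual VASP input; only the species counts of the 2×2×1 replication of the 12-atom Pnma cell and the single Ba → Eu substitution are shown:

```python
# Sketch of building a 2x2x1 supercell of BaClI (Pnma, 12 atoms per cell:
# 4 Ba, 4 Cl, 4 I on 4c sites) and substituting one Ba by Eu.
# Fractional coordinates are omitted; only species bookkeeping is shown.

UNIT_CELL = ["Ba"] * 4 + ["Cl"] * 4 + ["I"] * 4   # 12 atoms

def make_supercell(species, nx, ny, nz):
    """Replicate the species list over an nx x ny x nz block of cells."""
    return [s for _ in range(nx * ny * nz) for s in species]

atoms = make_supercell(UNIT_CELL, 2, 2, 1)        # 48 atoms in total
atoms[atoms.index("Ba")] = "Eu"                   # one Ba2+ -> Eu2+

print(len(atoms), atoms.count("Eu"), atoms.count("Ba"))  # 48 1 15
```

With one Eu on the 16 available Ba sites, the nominal dopant fraction in this supercell is 1/16 ≈ 6.3% of the cation sites, far above the 0.1 mol.% of the experiment but typical for what a supercell of this size can represent.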
§ RESULTS AND DISCUSSION
§.§ Eu^2+ luminescence
Figures <ref> and <ref> exhibit the absorption spectra of Eu^2+ ions in BaBrI and BaClI crystals measured at room temperature, and the corresponding emission and excitation spectra measured at 80 K. Strong absorption bands corresponding to transitions from the 4f^7 ground state to the 4f^65d^1 states are observed in both figures (curves 1). The maxima of the peaks are at 280 nm (4.43 eV) and 292 nm (4.25 eV) for the BaBrI crystal and at 278 nm (4.46 eV) and 290 nm (4.28 eV) for the BaClI crystal. The excitation wavelength for the emission is 290 nm. At this wavelength, the optical penetration depth is less than 1 mm for all the samples. The emission spectra, which result from 5d-4f transitions with peaks at 415 nm (2.99 eV) in BaBrI-Eu and 410 nm (3.02 eV) in BaClI-Eu (curves 3 in Figs. <ref> and <ref>), agree well with previously published data <cit.>. The excitation spectra of the 5d-4f emission of BaBrI-Eu and BaClI-Eu crystals monitored at 415 and 410 nm (curves 2 in Figs. <ref> and <ref>) agree well with the optical absorption spectra. The ground state of the Eu^2+ ion includes seven 4f electrons, which, according to Hund's rules, give rise to the ground state ^8S_7/2. The bands consist of two clearly distinguishable peaks and a weakly expressed characteristic "staircase" structure. In the BaBrI and BaClI crystals the site symmetry of the cations is D_2h, as found in the structure analysis. In D_2h symmetry the degeneracy of the t_2g level is lifted and it splits into three levels. The excitation and absorption spectra indicate two bands corresponding to a t_2g level split into three levels, two of which remain nearly degenerate. The characteristic "staircase" structure was originally explained by transitions from the ^8S_7/2 to the seven ^7F_J multiplets (J = 0-6) of the excited 4f^6(^7F_J)5d^1 configuration <cit.>. It is feasible to estimate the lowest energy of the 4f-5d transition (λ_abs) from the absorption spectra of BaBrI-Eu and BaClI-Eu. According to P.
Dorenbos <cit.>, this value pertains to the first step of the characteristic "staircase" structure in the 4f-5d absorption and excitation spectra of Eu^2+, corresponding to the zero-phonon line in the emission spectrum, as in CaF_2 doped with Eu^2+ ions <cit.>. Usually the "staircase" structure and the vibronic structure of the Eu^2+ emission band are not resolved. Therefore, the λ_abs value is estimated from the energy on the low-energy side at which the band has risen to 15–20% of the maximum of the "staircase". The arbitrariness inherent in this method may introduce an error. To keep errors small, data on samples with low Eu concentration are preferably used. In previous works <cit.>, the estimate λ_abs = 397 nm was based on measurements of crystals doped with high concentrations of Eu^2+ ions (more than 5 mol.%). Thus, the excitation spectra could not be measured correctly due to self-absorption, and the λ_abs value contains a large error. From the absorption spectra of BaBrI-0.05 mol.% Eu^2+ and BaClI-0.1 mol.% Eu^2+ we found λ_abs(BaBrI) = 376 nm and λ_abs(BaClI) = 373 nm. The lowest energies of the 4f-5d transition in the Eu^2+ ion are thus 3.29 eV for BaBrI and 3.32 eV for BaClI. At all temperatures the investigated crystals show only strong 5d-4f luminescence and no 4f-4f emission. At room temperature Eu^2+ compounds can exhibit broad and strong fluorescence resulting from the 5d-4f transitions, as well as sharp line emission, which has been assigned to 4f-4f transitions from the ^6P_7/2 to the ^8S_7/2 term. The presence of both 5d-4f and 4f-4f transitions indicates the proximity of the lowest excited 5d state and the 4f^7 (^6P_7/2) state. The relative positions of the 5d and ^6P_7/2 levels change in different hosts. Strong 5d-4f band emission is observed when the lowest 5d state is significantly lower than the ^6P_7/2 level; sharp f-f line emission appears when the reverse is true.
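The threshold procedure for λ_abs can be sketched as follows. The Gaussian band below is a synthetic stand-in for a measured absorption spectrum (its center and width are made-up values), and the 15% threshold matches the low end of the 15–20% criterion described above; the nm-to-eV conversion uses E[eV] = 1239.84/λ[nm]:

```python
# Sketch of the lambda_abs estimate: on the long-wavelength (low-energy)
# side of the first 4f-5d absorption band, find where the band has risen
# to ~15% of its maximum, then convert the wavelength to energy in eV.

import math

def absorption(lam_nm, center=350.0, width=15.0):
    """Toy Gaussian absorption band (arbitrary units), not measured data."""
    return math.exp(-((lam_nm - center) / width) ** 2)

def lambda_abs(band, lam_grid, threshold=0.15):
    """Longest wavelength at which the band reaches `threshold` of its max."""
    peak = max(band(l) for l in lam_grid)
    for lam in sorted(lam_grid, reverse=True):  # scan from long wavelengths
        if band(lam) >= threshold * peak:
            return lam
    return None

grid = [300 + 0.1 * i for i in range(1000)]     # 300-400 nm, 0.1 nm step
lam = lambda_abs(absorption, grid)
print(round(lam, 1), round(1239.84 / lam, 2))   # wavelength (nm), energy (eV)
```

Applied to the measured values, the same conversion gives 1239.84/376 ≈ 3.30 eV and 1239.84/373 ≈ 3.32 eV, consistent with the transition energies quoted above.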
Both 5d-4f and f-f bands appear in the luminescence spectrum when the energies of the two levels are close. Decreasing the temperature leads to a redistribution of the intensities of the 4f-4f and 5d-4f luminescence lines: the 4f-4f luminescence increases and the 5d-4f luminescence correspondingly decreases at low temperatures when the lowest 5d state lies slightly above the ^6P_7/2 term of the 4f state. This type of luminescence was observed in Eu^2+ doped BaFCl and SrFCl crystals <cit.>. The energy of the ^6P_7/2 level relative to the ground ^8S_7/2 level is about 3.49 eV. It can therefore be concluded that the energy of the 5d state relative to the ^8S_7/2 level is significantly lower than 3.49 eV, and our estimates of the lowest 4f-5d energies should be reliable. Luminescence decay curves measured at the maximum of the emission peak under nitrogen laser excitation at 337 nm are monoexponential in shape (Fig. <ref>). The measured time constants of the photoluminescence decay kinetics were τ=390 ns for BaClI-Eu (Fig. <ref>, curve 1) and τ=400 ns for BaBrI-Eu (Fig. <ref>) crystals. These values are quite similar to the time constants obtained with other types of excitation <cit.>. BaBrI and BaClI crystals doped with Eu^2+ ions demonstrate bright luminescence under x-ray and gamma-ray excitation. The x-ray excited luminescence spectra are given as curves 4 in Figs. <ref> and <ref>. They are in agreement with the photoluminescence spectra. The x-ray excited luminescence output, measured as the integral intensity, is compared with that of a CaF_2-Eu crystal. The light output of CaF_2-Eu crystals is approximately 21000 photons/MeV <cit.>. From this comparison the light output of the measured samples can be estimated.
BaBrI doped with 0.05 mol.% Eu has a light output of about 25000 photons/MeV. The light output of BaClI doped with 0.1 mol.% Eu^2+ is estimated at about 30000 photons/MeV. In the articles <cit.> the authors obtained the best light output in BaBrI doped with 5–7 % Eu^2+ ions. We can therefore expect the light output to increase at Eu^2+ concentrations higher than 0.1 mol.%. The light output of BaClI-Eu calculated from the pulse height spectrum (Fig. <ref>) is about 9000 photons/MeV, and the energy resolution is about 11%. This value is much lower than the one obtained from the x-ray excited luminescence spectra. The discrepancy is attributed to several factors. The first is the lower quality of the large crystal compared with the small sample used in the x-ray luminescence measurements. Another is the possible presence of intense slow components in the Eu^2+ x-ray and gamma excited luminescence, similar to <cit.>.

§.§ Exciton emission and band gap

In the x-ray excited luminescence spectra of the Eu-doped samples we observe a low-intensity peak at higher energy (about 3.8–4 eV) together with the intense Eu-related luminescence. In nominally undoped BaBrI crystals this luminescence dominates, and its intensity increases at low temperatures (Fig. <ref>, curve 1). Under VUV excitation we observe the same emission. Its intensity decreases with the concentration of Eu^2+ ions, and it is almost absent at Eu^2+ concentrations of 0.1 mol.% and higher. In BaClI we also observe a luminescence band in the 300–320 nm spectral region. The inset of Fig. <ref> contains the luminescence spectra under excitation into the 4f-5d Eu^2+ band (curve 2) and under VUV excitation at about 165 nm. The spectra differ from each other: under VUV excitation additional, relatively weak bands appear at about 300–320 nm and 460–520 nm. The excitation spectra of the luminescence centred at 320 nm in undoped and 0.05 mol.% Eu^2+ doped BaBrI crystals are shown in Fig.
<ref> in comparison with the optical absorption spectra. In all samples the most efficient photoluminescence excitation ranges from about 5.1 to 5.7 eV and lies within the interband absorption spectrum. The excitation spectrum of the 300–320 nm emission band in a BaClI-0.1 mol.% Eu crystal is provided in Fig. <ref>. Similarly to the BaBrI crystals, a narrow peak in the fundamental absorption region is found. The observed luminescence is attributed to self-trapped excitons (STE). As first shown by Hayes and confirmed by Song and Williams <cit.>, the STE in alkaline-earth fluorides consists of a molecular ion similar to an H-centre (a hole on an interstitial fluorine) and an F-centre-like part (an electron trapped at a fluorine vacancy). Radzhabov pointed out that the STE in barium dihalides has a configuration similar to that of the excitons in alkaline-earth fluorides <cit.>. In the SrI_2 crystal, which has a similar band gap energy, an analogous emission was also ascribed to self-trapped exciton emission <cit.>. The quenching of the STE luminescence with Eu^2+ concentration is due to the overlap of the exciton emission and the 4f-5d Eu^2+ absorption spectra, which makes a resonant transfer from the exciton to the Eu^2+ ions possible. The emission peaked at 480 nm and excited at about 245–250 nm can be attributed to oxygen-vacancy centres. This oxygen-related emitting centre has also been identified and discussed in other alkaline-earth metal halides, such as BaFCl and BaFBr <cit.>, and in SrI_2 <cit.>, <cit.>. This luminescence had previously been suggested to be STE luminescence <cit.>. Pustovarov <cit.> concluded that a significant part of the oxygen contamination in strontium iodide comes from surface hydrate reactions, and a similar mechanism can take place in the investigated crystals. The oxygen luminescence, in contrast to the exciton emission, is quenched at low temperatures, similarly to the other barium dihalides <cit.>. To estimate the exciton binding energy we need the dielectric constant of the crystals. The dielectric constant (ε^') is calculated
using the capacitance measured at different frequencies. The value of ε^' at 1 MHz is about 9.93±0.2. Assuming hydrogen-like energy levels in a simple exciton model, the exciton binding energy can be related to the 1s hydrogen-like wave function through its effective Bohr radius a_0^*. Let us assume that the variation of the effective mass ratio m^*/m is relatively small among the considered ionic crystals (BaFBr, BaFI, KBr, RbI, NaI, CaF_2, SrF_2, BaF_2). Then, following Ref. <cit.>, we can write the exciton binding energy as E_b=e^2/(2ε^'a_0^*)=e^2/(2(m^*/m)a_0(ε^')^2)=E_0/((m^*/m)(ε^')^2), where m (m^*) is the electron (effective reduced) mass, ε^' is the dielectric constant of the material, a_0^*=(m^*/m)ε^'a_0 is the effective exciton Bohr radius, a_0 is the Bohr radius of the hydrogen atom, E_0=e^2/(2a_0)=13.6 eV is the ionization energy of the hydrogen atom, and e is the electron charge. To estimate the exciton binding energy in BaBrI and BaClI we plot the exciton binding energies versus the dielectric constants for a set of known materials. The exciton binding energies of KBr, RbI and NaI are known from Refs. <cit.>. The binding energies of CaF_2, SrF_2, and BaF_2 were obtained in <cit.>. The energies for BaFI and BaFBr crystals are estimated from the measurements in Ref. <cit.>; they are 0.66 and 0.32 eV for BaFBr and BaFI, respectively. The dielectric constants of BaFBr and BaFI are equal to 6.14 and 8.8, respectively, according to <cit.>. It is evident in Fig. <ref> that the model describes quite correctly the delocalized excitons created under interband excitation at the initial time in ionic crystals. In this model, the n-th level (n=1,2,3,...) of the exciton energy E_x is expressed as <cit.>: E_x=E_g-E_b/n^2, where E_g is the band gap of the crystal and E_b is the exciton binding energy. Since the model gives a good agreement between dielectric constants and binding energies for diverse crystals, we can conclude that the primary process is the excitation of conduction electrons and valence holes, or directly of a free exciton. The next step is rapid localization into a self-trapped exciton (STE).
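As a quick numerical illustration of the hydrogen-like estimate above, the BaBrI numbers can be reproduced in a few lines. Note that the effective reduced mass ratio m^*/m ≈ 0.6 is an assumed illustrative value, chosen so that the binding energy matches the value read off the E_b-versus-ε^' plot; it is not a quantity reported in the text:

```python
# Hydrogen-like exciton estimate for BaBrI: E_b = E_0 / ((m*/m) * eps'^2).
# The effective mass ratio m_eff is an ASSUMED illustrative value, not a
# measured quantity from the paper.
E0 = 13.6      # hydrogen ionization energy, eV
eps = 9.93     # measured dielectric constant of BaBrI at 1 MHz
m_eff = 0.6    # assumed effective reduced mass ratio m*/m

E_b = E0 / (m_eff * eps**2)   # exciton binding energy, eV
E_x = 5.35                    # 1s exciton peak energy of BaBrI, eV
E_g = E_x + E_b / 1**2        # band gap from E_x = E_g - E_b/n^2 with n = 1

print(f"E_b = {E_b:.2f} eV, E_g = {E_g:.2f} eV")  # -> E_b = 0.23 eV, E_g = 5.58 eV
print(f"Dorenbos rule: 1.08 * E_x = {1.08 * E_x:.2f} eV")
```

The same formula with the interpolated dielectric constant 9.35 reproduces the BaClI binding energy of about 0.26 eV.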
It is clear that this model cannot be applied to the STE. Considering the spread of values for the reference crystals, the exciton binding energy in the BaBrI crystal can be estimated as 0.23±0.02 eV. The exciton energy (E_x) corresponds to the exciton peak in Figs. <ref> and <ref> and can be determined following the procedure proposed in Ref. <cit.>. The exciton energies are about 5.35±0.15 eV for BaBrI and 6.0±0.4 eV for BaClI. The band gap of the crystal is then obtained by adding the exciton binding energy to the 1s exciton energy (eq. (<ref>)), which gives E_g=5.58±0.17 eV for BaBrI. This value is smaller than the one estimated from the Dorenbos empirical rule E_g=1.08·E_x <cit.>. The 1.08 proportionality factor was determined as the average of the limited available data, and the error in the location of the conduction band depends on the type of anions in the material; for most iodides this rule gives less than a 68% prediction interval <cit.>. We did not measure the dielectric constant of BaClI; nevertheless, we suppose that it lies between 8.8 (ε^' of BaFI) and 9.93 (ε^' of BaBrI). We use the value 9.35, close to the middle of this interval, as the dielectric constant of BaClI for the exciton binding energy estimate. The exciton binding energy in BaClI is then about 0.26±0.05 eV and the band gap of BaClI is about 6.26±0.3 eV.

§.§ Calculation data

The ground state was calculated in order to estimate the locations of the Eu^2+ 5d and 4f levels relative to the conduction band minimum (CBM) and the valence band maximum (VBM), respectively. The calculations were carried out assuming that the 4f and 5d levels of the europium ion must be located in the band gap of the crystal. The ground state of the Eu^2+ ion configuration [Xe]4f^7 is characterized by a half-filled 4f shell. In Refs. <cit.> it was shown that a correct description of the 4f electrons requires a correction of the effective on-site Coulomb interaction between the f-electrons (characterized by the Hubbard U value).
Therefore, to correct the position of the 4f levels, the Dudarev approximation PBE+U <cit.> was applied, in which only the difference U_eff=(U - J) is meaningful, not the individual parameters U and J. According to the literature, U_eff for Eu^2+ doped wide gap materials should be ≥ 4 <cit.>. However, the authors of Ref. <cit.> showed that the Eu^2+ 4f levels are described most correctly with U_eff between 2.2 and 2.5. Some methodological calculations of BaBrI:Eu^2+ crystals were performed to identify the U_eff value for the present method. The results of these calculations are almost identical to the ones obtained in Ref. <cit.>. The value U_eff=2.5 was chosen, which gives a good agreement with the data reported in Ref. <cit.> (4f-VBM = 1.4 eV for BaBrI:Eu^2+). The calculated 4f-VBM gap for BaClI:Eu^2+ was 1.5 eV at U_eff=2.5. The band gap was estimated both in the PBE and in the G_0W_0 approximation <cit.>. It is known that density functional calculations with the PBE potential in semiconductors and dielectrics lead to delocalized electron states and, consequently, to an underestimated band gap energy <cit.>. The G_0W_0 method, in contrast, gives band gap values in ionic crystals comparable with the experimental data <cit.>, which is also confirmed by our calculations. The results of the calculations are presented in Table <ref>. Finally, we can estimate the energy of the 5d-CBM transition using the calculated band gap, the 4f-VBM energy, and the experimental energy of the first 4f→5d transition. The estimated 5d-CBM values are 0.65 and 0.75 eV for BaBrI:Eu^2+ and BaClI:Eu^2+, respectively, and agree well with the data obtained in Ref. <cit.>. The excited state of the [Xe]4f^65d^1 configuration of the Eu^2+ ion was modelled by setting the occupancy of the highest 4f state to zero. The isosurface of the electron density for the excited state of Eu^2+-doped BaClI is shown in Fig. <ref>.
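The arithmetic behind the 5d-CBM estimate can be made explicit: the relation used in the text is E(5d-CBM) = E_g - E(4f-VBM) - E(4f→5d). Inverting it with the quoted values implies the calculated band gaps that enter the estimate; the specific gap values below are our inference from this relation, not numbers read directly from the table:

```python
def five_d_to_cbm(E_g, E_4f_vbm, E_4f5d):
    # distance of the lowest 5d level from the CBM, all energies in eV
    return E_g - E_4f_vbm - E_4f5d

# BaBrI: quoted 5d-CBM = 0.65 eV, 4f-VBM = 1.4 eV, experimental 4f->5d = 3.29 eV
E_g_implied_BaBrI = 0.65 + 1.4 + 3.29
print(f"implied calculated gap (BaBrI): {E_g_implied_BaBrI:.2f} eV")  # -> 5.34 eV

# BaClI: quoted 5d-CBM = 0.75 eV, 4f-VBM = 1.5 eV, experimental 4f->5d = 3.32 eV
E_g_implied_BaClI = 0.75 + 1.5 + 3.32
print(f"implied calculated gap (BaClI): {E_g_implied_BaClI:.2f} eV")  # -> 5.57 eV
```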
The 5d^1 excited state is almost completely localized on the Eu ion in both investigated crystals.

§.§ Vacuum referred binding energy diagram of the lanthanide ions in BaBrI and BaClI

The spectroscopic properties of the lanthanide ions can be predicted from their ionic charge and the number of electrons in the 4f shell. Energy diagrams for the BaBrI and BaClI crystals are plotted based on the experimental values of the band gap, exciton creation, and 4f-5d transition energies, and on the calculated position of the Eu^2+ ground state with respect to the valence band obtained in this work. The diagram shows the binding energies of an electron in the ground and excited states of the divalent and trivalent lanthanide ions (Fig. <ref>). Using the Dorenbos chemical shift model <cit.>, these binding energies are referred to the binding energy of an electron at rest in vacuum, defined as the zero of energy (Vacuum Referred Binding Energy (VRBE) diagram). We have no spectroscopic data on trivalent lanthanides in the investigated crystals; therefore the Coulomb repulsion energy U(6,A) is used. It defines the difference between the binding energy of an electron in the ground state of Eu^2+ and that in the ground state of Eu^3+ <cit.>. Similarly to other alkaline-earth iodide and bromide compounds such as LaBr_3 and SrI_2, it is estimated as 6.4 eV for BaBrI. Chlorides have a higher U(6,A) energy, and it is therefore chosen equal to 6.9 eV for BaClI. This choice immediately defines the VRBE of electrons in the 4f^n states of the divalent and trivalent lanthanides. The calculated energies between the top of the valence band and the Eu^2+ ground state are about 1.4 eV in BaBrI and 1.5 eV in BaClI crystals (see Table <ref>). The energies of the valence band top E_v are equal to -5.18 and -5.7 eV for BaBrI and BaClI, respectively.
Using the exciton creation energies E_x (5.35±0.15 eV for BaBrI and 6.0±0.4 eV for BaClI) and the exciton binding energies E_b (0.23 eV for BaBrI and 0.26 eV for BaClI) estimated above, the energy of the conduction band bottom (E_c) is calculated. It amounts to about 0.4 eV in BaBrI and 0.53 eV in BaClI. These values are above the vacuum level, implying that BaBrI and BaClI should be negative electron affinity materials (E_c>0, χ=-E_c). The fact that E_c>0 can be explained by the uncertainties in the measurements of the exciton and exciton binding energies and by the error in the calculated distance between the Eu^2+ ground state and the top of the valence band. The ab initio approach used in this work usually underestimates transition energies <cit.>. On the other hand, the chemical shift model gives a negative electron affinity χ=-0.6 eV for some materials, such as LiCaAlF_6, SrAl_12O_19 and SrI_2, due to the limitations of the model <cit.>. Following the diagrams, the ground states of the divalent La, Ce, Gd, and Tb ions are expected to be 4f5d. Stable divalent La, Ce, Gd, and Tb ions, and the photochromic centres accompanying them, are possibly present, as pointed out in Refs. <cit.>. The divalent state of these ions can be obtained by x-ray irradiation or by additive coloration in alkaline-earth metal vapours. It is clear from the diagrams in Fig. <ref> that the 4f ground levels of all trivalent rare earth ions except Ce^3+ lie deep in the valence band. Therefore, we expect only 4f-4f luminescence from the trivalent rare earth dopants in these crystals. In contrast, the ground 4f and 5d levels of the Ce^3+ ion lie in the band gap, and 5d-4f luminescence in Ce^3+ doped crystals is possible. The alkaline-earth fluorides are not efficient scintillators, although the 5d-4f luminescence of Ce^3+ and Pr^3+ ions is observed in them. The reason is the inefficient energy transfer from hot holes in the valence band to the rare-earth ions: the energy barrier for hole capture by the rare earth ion is higher than the energy of hole self-trapping.
Therefore, in the alkaline-earth fluorides the energy is transferred to the rare earth ion through hole traps, which leads to a hyperbolic luminescence decay law and a low light yield <cit.>. If the ground state of the trivalent rare earth ion is located closer to the top of the valence band, the probability of hot hole capture increases. The ground state of the Ce^3+ ion in BaClI and BaBrI crystals, as predicted by the chemical shift model, is located close to the top of the valence band. Therefore, Ce-doped BaClI and BaBrI crystals are promising for a high light output. Moreover, 5d-4f luminescence is possible even for trivalent lanthanide ions whose ground state lies below the top of the valence band. Theoretical calculations indicate that the valence band of the investigated crystals consists of two subbands formed by the p-orbitals of the iodine and the bromine or chlorine ions (Fig. <ref>, left and right insets). The iodine valence subband lies higher than the bromine and chlorine ones, just as in BaFBr and BaFCl crystals, where the valence band is formed by bromine-fluorine and chlorine-fluorine subbands <cit.>. The ground state of the oxygen-vacancy centres in BaFBr and BaClI crystals is located within the valence band, in the gap between the subbands, whereas the excited states are located in the band gap, and bright luminescence related to the oxygen-vacancy centres has been observed <cit.>. Therefore, if the ground state of a lanthanide ion falls in the gap between the subbands, 5d-4f luminescence is possible. According to the calculations, the ground state of the Pr^3+ ion is located in the gap between the valence subbands. In this case 5d-4f luminescence is observed in Pr-doped BaBrI and BaClI crystals, inasmuch as the binding energy of the lowest 5d state is less than that of the 4f(^1S_0) level. Efficient hole capture by the Pr ions is expected. In Pr-doped alkaline-earth fluorides a high-temperature stability of the light yield was demonstrated in Ref. <cit.>. We expect the same effect for Pr-doped BaClI and BaBrI crystals.
Thus, based on the predictions of the chemical shift model, Ce^3+ and Pr^3+ ions appear to be promising activators for scintillating BaClI and BaBrI crystals.

§ CONCLUSION

The scintillation, luminescence and structural properties of BaBrI and BaClI crystals doped with low concentrations of Eu^2+ ions have been studied. Undoped BaClI and BaBrI single crystals and crystals doped with 0.05-0.1 mol.% Eu^2+ ions were grown. The structure of the BaClI crystals was determined by the single-crystal X-ray diffraction technique. The results obtained by luminescence spectroscopy revealed the presence of divalent europium ions in the BaBrI and BaClI crystals, while impurity ions in the trivalent state were not found. The intense bands in the absorption and excitation spectra, with a weakly manifested characteristic "staircase" structure and peaks at about 4.25-4.4 eV for BaBrI and 4.25-4.45 eV for BaClI, are caused by 4f^7(^8S_7/2)→ 4f^65d^1(t_2g) transitions. The lowest energies of the 4f-5d transition in the Eu^2+ ion obtained from the spectra are 3.29 eV for BaBrI and 3.32 eV for BaClI. The narrow intense peaks at 5.35 and 6 eV in the photoluminescence excitation spectra of the intrinsic emission of BaBrI and BaClI crystals are due to the creation of excitons. Using the hydrogen-like exciton model, the band gaps of BaBrI and BaClI crystals were estimated at about 5.58 and 6.26 eV, respectively. The band gap was also calculated in both the PBE and the G_0W_0 approximation, and the calculated band gap energies agree with the experimental data. The distance between the lowest 5d level of the Eu^2+ ions and the top of the valence band has been calculated. The VRBE diagrams of the levels of all divalent and trivalent lanthanide ions in BaBrI and BaClI crystals were constructed based on the acquired experimental and theoretical data. This work was partially supported by RFBR grant 15-02-06514a.
The reported study was performed with the equipment of the Centres for Collective Use ("Isotope-geochemistry investigations" at the A.P. Vinogradov Institute of Geochemistry SB RAS and "Analysis of organic state" at the A.E. Favorsky Institute of Chemistry SB RAS).
arXiv:1703.08926v1 [cond-mat.mtrl-sci], 27 March 2017. R. Shendrik, A. Myasnikova, A. Shalaev, A. Bogdanov, E. Kaneva, A. Rusakov, A. Vasilkovskyi, "Optical and structural properties of Eu^2+ doped BaBrI and BaClI crystals".
Localization of fermions in coupled chains with identical disorder

J. Sirker

December 30, 2023

Asia Pacific Center for Theoretical Physics (APCTP), Pohang, Gyeongsangbuk-do, 790-330, Korea
Department of Applied Physics, School of Science, Northwestern Polytechnical University, Xi'an 710072, China
Department of Physics and Astronomy, University of Manitoba, Winnipeg R3T 2N2, Canada
Department of Physics and Astronomy, University of Manitoba, Winnipeg R3T 2N2, Canada

We study fermionic ladders with identical disorder along the leg direction. Following recent experiments we focus, in particular, on how an initial occupation imbalance evolves in time. By considering different initial states and different ladder geometries we conclude that in generic cases interchain coupling leads to a destruction of the imbalance over time, both for Anderson and for many-body localized systems.

PACS: 71.10.Fd, 05.70.Ln, 72.15.Rn, 67.85.-d

§ INTRODUCTION

It is known for more than fifty years that disorder in one- and two-dimensional tight-binding models of non-interacting fermions with sufficiently fast decaying hopping amplitudes always leads to localization <cit.>. In recent years, localization phenomena in interacting low-dimensional tight-binding models have attracted renewed attention <cit.>. For the random field Heisenberg chain it has been suggested, in particular, that there is a transition at a finite disorder strength between an ergodic phase and a non-ergodic many-body localized (MBL) phase <cit.>. Experimentally, the localization of interacting particles in quasi one-dimensional geometries has been studied in ultracold fermionic gases and in systems of trapped ions <cit.>. Quite recently, experimental studies have been extended to two-dimensional systems. In particular, the decay of an imbalance in the occupation of even and odd sites (see Fig.
<ref>) in fermionic chains as a function of the interchain coupling and the onsite Hubbard interaction has been investigated. For the case of identical disorder in the coupled chains it has been suggested that the system remains localized in the non-interacting Anderson case when interchain couplings are turned on, while the coupling leads to delocalization in the interacting case <cit.>. Theoretically, the decay rate in coupled interacting Hubbard chains has been addressed by perturbative means <cit.>. For Hubbard chains, evidence for non-ergodic behavior has been found at strong disorder in numerical simulations <cit.>. A non-ergodic phase was also found in the two-dimensional Anderson-Hubbard model with independent disorder for each spin species using a self-consistent perturbative approach <cit.>. For coupled chains of non-interacting spinless fermions with independent potential disorder in each chain it has been found that interchain coupling can either strengthen or weaken Anderson localization, depending on the number of legs and the ratio of inter- to intrachain coupling <cit.>. The purpose of this paper is to investigate quench dynamics in tight-binding models of fermionic chains with identical potential disorder for different initial states and interchain couplings, both in the non-interacting and in the interacting case. Our study relies on analytical arguments as well as on exact diagonalizations of finite systems. Our main results are as follows: For the initial state used in the experiment of Ref. SchneiderBloch2 (see Fig. <ref>(a,b)) we confirm that the dynamics in the non-interacting case is separable and completely independent of the coupling between the chains. The Anderson localized state is fully stable because perpendicular interchain couplings are ineffective for this particular setup.
For generic interchain couplings and generic initial states, on the other hand, we find that the occupation imbalance does decay, both in the Anderson and in the MBL phase. Our paper is organized as follows: In Sec. <ref> we define the fermionic Hubbard models, initial states, and order parameters investigated. In Sec. <ref> we obtain analytical results for the time dependence of the order parameters after a quench in the non-interacting, clean limit. Based on the initial state and the geometry of the interchain couplings we make several general observations in Sec. <ref> on whether or not the coupling between the chains will affect the dynamics. Specific cases of disordered free fermionic ladder models are considered in Sec. <ref>, while numerical results for interacting systems are provided in Sec. <ref>. In addition to the order parameters, we also consider the time evolution of the entanglement entropy of the ladder system, see Sec. <ref>. Finally, we summarize and conclude.

§ MODEL

We consider a model of coupled fermionic Hubbard chains H=-J∑_i,j=1;σ^L_x-1,L_y (c^†_i,j,σc_i+1,j,σ+c^†_i+1,j,σc_i,j,σ) - J_⊥∑_i,j=1;σ^L_x,L_y-1 (c^†_i,j,σc_i,j+1,σ+c^†_i,j+1,σc_i,j,σ) - J_d ∑_i,j=1;σ^L_x-1,L_y-1 (c^†_i,j,σc_i+1,j+1,σ+c^†_i+1,j,σc_i,j+1,σ+h.c.) + U ∑_i,j=1^L_x,L_y(n_i,j,↑n_i,j,↓-1/2) +∑_i,j=1;σ^L_x,L_y D_i n_i,j,σ with L_x sites along the x-direction and L_y sites along the y-direction. c^(†)_i,j,σ annihilates (creates) an electron with spin σ=↑,↓ at site (i,j), and the local density operator is given by n_i,j,σ=c^†_i,j,σc_i,j,σ. J is the hopping amplitude along the x-direction, J_⊥ the hopping amplitude along y, and J_d a diagonal hopping amplitude. U is the onsite Hubbard interaction. The random disorder potential D_i depends only on the position along the x-direction; it is the same for all sites with the same index j. We assume open boundary conditions in both directions.
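For the non-interacting case the model reduces to a single-particle hopping matrix, which is easy to write down explicitly. The sketch below is our own construction of the U=0 part of Hamiltonian (<ref>) (not code from the paper); the site-indexing convention and the box-disorder normalization are illustrative choices:

```python
import numpy as np

def ladder_hamiltonian(Lx, Ly, J=1.0, Jperp=1.0, Jd=0.0, D=None, seed=0):
    """Single-particle (U=0) Hamiltonian of the ladder model on an open
    Lx x Ly lattice. The disorder D_i depends only on the x-coordinate,
    i.e. it is identical on every leg. Site (i, j) maps to index i*Ly + j."""
    if D is None:
        rng = np.random.default_rng(seed)
        D = rng.uniform(-1.0, 1.0, size=Lx)   # box disorder of unit strength
    idx = lambda i, j: i * Ly + j
    H = np.zeros((Lx * Ly, Lx * Ly))
    for i in range(Lx):
        for j in range(Ly):
            H[idx(i, j), idx(i, j)] = D[i]    # identical potential on a rung
            if i + 1 < Lx:                    # leg hopping -J
                H[idx(i, j), idx(i + 1, j)] = H[idx(i + 1, j), idx(i, j)] = -J
            if j + 1 < Ly:                    # rung hopping -J_perp
                H[idx(i, j), idx(i, j + 1)] = H[idx(i, j + 1), idx(i, j)] = -Jperp
            if i + 1 < Lx and j + 1 < Ly:     # both diagonal bonds -J_d
                H[idx(i, j), idx(i + 1, j + 1)] = H[idx(i + 1, j + 1), idx(i, j)] = -Jd
                H[idx(i + 1, j), idx(i, j + 1)] = H[idx(i, j + 1), idx(i + 1, j)] = -Jd
    return H
```

Diagonalizing this matrix and evolving the initial one-particle states is the route used for the free-fermion numerics discussed below.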
In the numerical calculations we will set J=1. We are interested in the non-equilibrium dynamics of the disordered fermionic Hubbard model (<ref>) starting from a prepared initial product state. Following recent experiments on cold fermionic gases we consider, in particular, initial product states at quarter filling for chains with L_x even. We concentrate on two initial states. The first one is given by |Ψ_1⟩ =∏_i=1^L_x/2∏_j=1^L_y c^†_2i-1,j |0⟩. In the following, we call this state the rung occupied state, see Fig. <ref>(a,b). The second initial state we will consider is the diagonally occupied state |Ψ_2⟩ =∏_i=1^L_x/2∏_j=1^L_y/2 c^†_2i,2jc^†_2i-1,2j-1 |0⟩, depicted in Fig. <ref>(c,d). For free fermions the time evolution of the order parameter will not depend on the spin. For interacting fermions we consider the spin of the particles in the initial states above as being completely random. For the initial state |Ψ_1⟩ the order parameter is given by I_1 = 2/L_xL_y∑_i,j (-1)^i+1 n_ij, while I_2 = 2/L_xL_y∑_i,j (-1)^i+j n_ij is the order parameter for the initial state |Ψ_2⟩. Here n_ij=∑_σ n_ijσ. Both order parameters are normalized such that ⟨ I_1(0)⟩=⟨Ψ_1|I_1|Ψ_1⟩ = 1 and ⟨ I_2(0)⟩= ⟨Ψ_2|I_2|Ψ_2⟩ =1. In the following, we study the unitary time evolution of the order parameters ⟨ I_1,2(t)⟩ under the Hamiltonian (<ref>) for different sets of parameters.

§ FREE FERMIONS IN THE CLEAN LIMIT

We start with the clean free fermion case U=0 and D_i=0. The Hamiltonian can then be diagonalized by Fourier transform and the time evolution of ⟨ I_1,2(t)⟩ can be calculated analytically. The Fourier representation of the annihilation operator for open boundary conditions is given by c_ijσ= 2/√((L_x+1)(L_y+1))∑_k_x,k_y sin(k_x i) sin(k_y j) c_k_x,k_y,σ. The wave vectors are quantized according to k_x=nπ/(L_x+1) and k_y=mπ/(L_y+1) with n=1,⋯,L_x and m=1,⋯,L_y.
Unitary time evolution results in c_k_x,k_y,σ(t)=exp(-iε_k_x,k_yt)c_k_x,k_y,σ, where the dispersion of model (<ref>) reads ε_k_x,k_y= 2Jcos k_x +2J_⊥cos k_y + 4J_d cos k_x cos k_y and is independent of the spin index σ. Using the Fourier expansion (<ref>) for the order parameter (<ref>) we find ⟨ I_1(t)⟩ = 1/L_xL_y∑_n,m=1^L_x,L_y exp[it(ε_n,m-ε_L_x+1-n,m)] = 1/L_xL_y∑_k_x,k_y exp[4it cos k_x (J+2J_d cos k_y)] → 1/π∫_0^π dk_x exp[4itJ cos k_x] J_0(8J_dt cos k_x), where we have taken the thermodynamic limit, L_x,L_y→∞, in the last line, with J_0 being the Bessel function of the first kind. Without the diagonal couplings (J_d=0), as in the experiment of Ref. SchneiderBloch2, we find, in particular, ⟨ I_1(t)⟩=J_0(4Jt)∼ (2π Jt)^-1/2 in the thermodynamic limit, while ⟨ I_1(t)⟩∼ (JJ_d)^-1/2/t for J_d≠ 0. Importantly, the result for the initial state |Ψ_1⟩ is always independent of the coupling in the transverse direction J_⊥. Without diagonal couplings we have a fine-tuned setup where ⟨ I_1(t)⟩ is identical to the result for a single chain. While a generic coupling between the chains will typically lead to faster dephasing and therefore to a faster decay of the order parameter, this is not the case in such a fine-tuned setup. For a finite number of legs one can also prevent the order parameter ⟨ I_1(t)⟩ from decaying completely by fine-tuning the diagonal coupling J_d. This happens if for any of the allowed wave vectors k^(m)_y = mπ/(L_y+1) the diagonal coupling is chosen such that J_d=-J/(2cos k^(m)_y). For an infinite two-leg ladder, for example, we find lim_t→∞⟨ I_1(t)⟩=1/2 if J_d=± J because cos k^(m)_y=∓ 1/2 in this case. The behavior of the order parameter (<ref>) for the diagonal initial state (<ref>), on the other hand, is very different. In this case we find ⟨ I_2(t)⟩ = 1/L_xL_y∑_k_x,k_y exp[4it(Jcos k_x +J_⊥cos k_y)] → J_0(4Jt)J_0(4J_⊥ t)∼ (JJ_⊥)^-1/2(2π t)^-1 in the thermodynamic limit L_x,L_y→∞, even without diagonal couplings.
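The clean-limit result ⟨I_1(t)⟩ = J_0(4Jt) can be checked directly against the finite-lattice momentum sum; the sketch below is our own verification, not code from the paper:

```python
import numpy as np
from scipy.special import j0

def I1_clean(t, Lx=2000, J=1.0):
    # <I_1(t)> for the clean ladder with J_d = 0: the J_perp dependence drops
    # out exactly, leaving a pure sum over the open-boundary momenta k_x.
    kx = np.pi * np.arange(1, Lx + 1) / (Lx + 1)
    return np.mean(np.exp(4j * t * J * np.cos(kx))).real

# the finite-size sum converges to the Bessel function J_0(4Jt)
deviation = abs(I1_clean(1.0) - j0(4.0))
print(deviation)  # finite-size correction, vanishes as 1/L_x
```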
For the infinite two-dimensional lattice (J_⊥≠ 0, J_d=0) the order parameter thus decays ∼ 1/t for the diagonal initial state |Ψ_2⟩, as compared to the 1/√(t) decay for the initial state |Ψ_1⟩. The diagonally occupied state is thus a more generic initial state, for which a crossover from one- to two-dimensional behavior of the free fermions does occur if the chains are coupled by a perpendicular hopping term.

§ GENERAL RESULTS FOR FERMIONIC CHAINS WITH IDENTICAL DISORDER

In this section we want to provide some general arguments to show why the time evolution of the order parameter ⟨ I_1(t)⟩ of the ladder system can be one-dimensional even in the presence of interchain couplings and disorder. We concentrate here on the system without the diagonal hopping terms (J_d=0), which would always make the system two-dimensional and which are not part of the experimental setup in Ref. SchneiderBloch2. First, we perform a Fourier transform along the direction of the interchain couplings J_⊥. Note that all sites with a given index i along the x-direction have the same potential. The Hamiltonian (<ref>) can then be written as H=H_J + H_J_⊥ + H_D = ∑_i h_i with h_i=-J∑_k_y (c^†_i,k_yc_i+1,k_y+h.c.) -2J_⊥∑_k_y cos k_y n_i,k_y +D_i∑_k_y n_i,k_y. Similarly, we can write the order parameter as I_1 = 2/L_xL_y∑_i,k_y (-1)^i+1 n_i,k_y. In this representation it is immediately clear that [H_J_⊥,H_J]=[H_J_⊥,H_D]=[H_J_⊥,n_i,k_y]=0, thus c^†_i,k_yc_i,k_y(t)=e^-i(H_J+H_D)t c^†_i,k_yc_i,k_y e^i(H_J+H_D)t. For free fermions ⟨ I_1(t)⟩ is therefore independent of J_⊥ even in the presence of disorder. This order parameter will therefore always appear to indicate that the Anderson localized phase is stable against perpendicular interchain couplings. If, on the other hand, diagonal hoppings are included, then H_J_d does not commute with n_i,k_y. In this generic situation the Anderson localized chain will be affected by the diagonal interchain couplings J_d. We analyze several examples in more detail in the next section.
Similarly, introducing a Hubbard interaction U implies that H_J_⊥ no longer commutes with the rest of the Hamiltonian. On this level, the roles played by H_J_d and H_U are similar: both break the fine-tuned symmetry which makes the disordered system behave completely one-dimensionally even in the presence of couplings J_⊥ between the chains. Without the diagonal hopping terms, the initial state |Ψ_1⟩ together with the order parameter ⟨ I_1(t)⟩ are thus not suitable to study the generic differences between Anderson and many-body localization in coupled chains with identical disorder.

§ FREE FERMIONS WITH BINARY AND BOX DISORDER

In this section we want to consider specific examples of the Hamiltonian (<ref>) with U=0 and different types of disorder.

§.§ Free fermions with infinite binary disorder

Apart from the clean non-interacting case we can also study the case of binary disorder, D_i=± D, in the limit D→∞ analytically. We consider, in particular, a ladder with L_y legs and J_d=0 in the limit L_x→∞. The infinite binary potential along the x-direction then splits the ladder system into decoupled finite clusters with equal potential. The disorder-averaged time evolution of the system is then given by a sum over the time evolution of open clusters I_1,2^ℓ(t) of length ℓ along the x-direction and width L_y, weighted by their probability of occurrence p_ℓ=ℓ/2^{ℓ+1} with ∑_ℓ p_ℓ =1 <cit.>. For infinite binary disorder the disorder average of the order parameters is therefore given by ⟨ I_1,2^D=∞(t)⟩ =∑_ℓ=1^∞ p_ℓ⟨ I_1,2^ℓ (t)⟩. For the rung occupied state, ⟨ I_1^ℓ(t)⟩ does not depend on J_⊥. The result for I_1^D=∞(t) is therefore exactly the same as for a single chain. In particular, only clusters with ℓ odd give a contribution I_1^ℓ odd =1/ℓ to the time average, so that I_1^D=∞ =∑_ℓ odd p_ℓ/ℓ=1/3. There is no dephasing in the case of infinite binary disorder.
⟨ I_1^D=∞(t)⟩ does show persistent oscillations around the time average I_1^D=∞=1/3 <cit.>. For the diagonally occupied state the situation is very different. Let us first consider the case of an even number of legs, i.e., L_y even. In this case every decoupled cluster with equal potential of size ℓ× L_y will have ℓ L_y/2 fermions. For the generic case J≠ J_⊥ the order parameter I_2^ℓ(t) will then show persistent oscillations around zero for all cluster lengths ℓ, resulting in I_2^D=∞=0. For J=J_⊥, on the other hand, clusters with length ℓ=n(L_y+1)-1; n=1,2,⋯ will give a contribution I_2^ℓ =1/ℓ to the time average so that

I_2^D=∞ = ∑_ℓ=n(L_y+1)-1 p_ℓ/ℓ = ∑_n=1^∞ 2^-n(L_y+1) = 1/(2^(L_y+1)-1).

For L_y odd and J≠ J_⊥ all clusters with odd length ℓ will give a contribution 1/(L_yℓ) so that I_2^D=∞ =1/(3L_y). For J=J_⊥ and L_y odd, clusters of length ℓ=n(L_y+1)-1, n=1,2,⋯ will give a 1/ℓ contribution to the time average while all other odd clusters will contribute 1/(L_yℓ), giving rise to a time average

I_2^D=∞ = 1/(3L_y) + (1-1/L_y)/(2^(L_y+1)-1).

In Fig. <ref> these analytically obtained long-time averages are compared to numerical data. For the two-leg ladder with J_⊥ =0.5J, see Fig. <ref>(a), the long-time average is zero while for J_⊥ = J, see Fig. <ref>(b), we have I_2=1/7. For the three-leg ladder we find, on the other hand, I_2=1/9 and I_2=7/45, respectively. To summarize, there is an interesting even/odd effect for the diagonally occupied state with I_2^D=∞=0 for L_y even and I_2^D=∞=1/(3L_y) for a generic interchain coupling J_⊥≠ J. In the following subsection we will see that these even/odd effects do persist for finite box disorder.

§.§ Free fermions with box disorder

Here we present numerical results for non-interacting ladders with disorder drawn from a box distribution D_i∈ [-D,D]. Because the system is non-interacting, calculating the order parameters ⟨ I_1,2(t)⟩ reduces to an effective one-particle problem which can be solved numerically for large system sizes.
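The closed forms for the infinite-binary-disorder averages are easy to check numerically. The following sketch (a plain consistency check, not part of the original analysis) verifies the normalization of p_ℓ, the value I_1^D=∞ = 1/3, and the quoted long-time averages I_2 = 1/7 and 7/45 for the two- and three-leg ladders at J = J_⊥:

```python
from fractions import Fraction

# Cluster statistics for infinite binary disorder: lengths l occur with
# probability p_l = l/2**(l+1); only odd clusters contribute 1/l to the
# long-time average of I_1 for the rung occupied state.
p = lambda l: Fraction(l, 2 ** (l + 1))
# truncated sums; the tail beyond l = 60 is far below 1e-15
norm = sum(p(l) for l in range(1, 61))
I1_avg = sum(p(l) / l for l in range(1, 61, 2))
assert abs(float(norm) - 1.0) < 1e-12
assert abs(float(I1_avg) - 1 / 3) < 1e-12

def I2_resonant(Ly):
    """Long-time average of I_2 at J = Jperp (closed forms from the text)."""
    base = Fraction(1, 2 ** (Ly + 1) - 1)   # sum_n 2**(-n(Ly+1))
    if Ly % 2 == 0:
        return base
    # Ly odd: the remaining odd clusters contribute 1/(Ly*l) on top
    return Fraction(1, 3 * Ly) + (1 - Fraction(1, Ly)) * base

assert I2_resonant(2) == Fraction(1, 7)     # two-leg ladder, J = Jperp
assert I2_resonant(3) == Fraction(7, 45)    # three-leg ladder, J = Jperp
```

For L_y = 3 the two contributions combine as 1/9 + (2/3)·(1/15) = 7/45, reproducing the value quoted for the three-leg ladder.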
We start from the initial L_xL_y/2 one-particle states in position representation and time evolve each of these states using the Hamiltonian (<ref>) with U=0. The order parameters are then simply given by the sum of the order parameters for each one-particle wave function. We have checked that the numerical data agree with the analytical solutions in Sec. <ref> for the clean case and that ⟨ I_1(t)⟩ is indeed independent of J_⊥ for all disorder strengths. We start by presenting data in Fig. <ref>(a) for the time evolution in two-leg ladders prepared in the rung occupied initial state. As discussed in Sec. <ref> the results are independent of J_⊥. While the order parameter is increased for J_d=J, stronger diagonal couplings lead to a decrease and the data are consistent with I_1→ 0 for J_d→∞, see Fig. <ref>. In Fig. <ref>(b) data for the same parameters but for three-leg ladders are shown. The results are quite different from the two-leg case. While the results are again independent of J_⊥, we now find that the long-time average I_1 remains non-zero even for strong interchain couplings J_d/J≫ 1, see Fig. <ref>. The long-time behavior is thus quite different for ladders with an even or an odd number of legs. Similar to the case of infinite binary disorder we expect that for ladders with an odd number of legs the long-time average I_1 decreases with the number of legs. Coupling an infinite number of Anderson localized chains with identical disorder in a generic way will thus lead to a complete destruction of the order parameter. Next, we present data for the diagonally occupied initial state in Fig. <ref>. The results are qualitatively similar to the case of infinite binary disorder solved analytically in the previous section.
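The one-particle procedure described above can be sketched in a few lines. The snippet below (system size, disorder realization, and the overall sign convention of I_1 are illustrative assumptions) time evolves all initially occupied single-particle states on a disordered two-leg ladder and checks that the imbalance is indeed independent of J_⊥ when the disorder is identical on both legs:

```python
import numpy as np

def imbalance_I1(Lx, Ly, J, Jperp, disorder, t):
    """I_1(t) for free fermions starting from the rung occupied state."""
    N = Lx * Ly                       # site index s = i*Ly + y
    H = np.zeros((N, N))
    for i in range(Lx):
        for y in range(Ly):
            s = i * Ly + y
            H[s, s] = disorder[i]                  # identical on every leg
            if i + 1 < Lx:
                H[s, s + Ly] = H[s + Ly, s] = J    # hopping along the chain
            if y + 1 < Ly:
                H[s, s + 1] = H[s + 1, s] = Jperp  # hopping along the rung
    w, V = np.linalg.eigh(H)
    U = V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T
    occ = [i * Ly + y for i in range(0, Lx, 2) for y in range(Ly)]
    n = (np.abs(U[:, occ]) ** 2).sum(axis=1)       # site densities at time t
    sign = (-1) ** (np.arange(N) // Ly)            # convention: I_1(0) = +1
    return 2.0 / N * sign @ n

rng = np.random.default_rng(1)
d = rng.uniform(-3.0, 3.0, 8)                      # one box-disorder realization
assert abs(imbalance_I1(8, 2, 1.0, 0.0, d, 0.0) - 1.0) < 1e-12
a = imbalance_I1(8, 2, 1.0, 0.0, d, 5.0)
b = imbalance_I1(8, 2, 1.0, 1.0, d, 5.0)
assert abs(a - b) < 1e-10    # I_1(t) does not depend on Jperp
```

For a free-fermion Slater determinant the site densities are ⟨n_s(t)⟩ = Σ_s0 occ |U_ss0|², so a single diagonalization of the one-particle Hamiltonian suffices; this is why much larger systems are accessible here than in the interacting case below.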
In particular, we find that for generic interchain couplings J_⊥ the long-time average I_2 is zero for an even number of legs while it is non-zero for an odd number of legs.

§ INTERACTING LADDER MODELS

We now turn to a numerical study of the interacting case. Here we are limited to the exact diagonalization of rather small two-leg ladders. While the system sizes could, in principle, be increased the substantial number of samples required to obtain disorder averages with small statistical errors is a further limiting factor in practice. Nevertheless, even these small systems show behavior which is qualitatively consistent with the experimental results in Ref. SchneiderBloch2.

§.§ Spinful fermions

We concentrate first on spinful fermions on a two-leg ladder with onsite Hubbard interaction U. For a 4× 2 ladder with n_↑=n_↓=2 the Hilbert space has dimension (8 choose 2)^2 = 28^2 = 784. We find that in the interacting case a much larger number of samples than in the non-interacting case is required (by at least a factor of 10) to obtain the same accuracy for the disorder average. For a 4× 2 ladder this is still easily achievable while already for a 6× 2 ladder with n_↑=n_↓=3 the Hilbert space dimension is (12 choose 3)^2 = 220^2 = 48400, and an enormous amount of computing resources would be required. Instead, we will also present results for a 6× 2 ladder with n_↑=4 and n_↓=2 with Hilbert space dimension (12 choose 4)(12 choose 2) = 495·66 = 32670. In Fig. <ref> the order parameter ⟨ I_1(t)⟩ for the 4× 2 ladder is shown for different interaction strengths U/J. Both for weak and for strong interchain coupling, increasing the Hubbard interaction initially leads to a decrease of the long-time average I_1 with a minimum at |U|/J∼ 4-5, see Fig. <ref>(a). For even larger interaction strengths the long-time average increases again leading to a characteristic shape of the imbalance versus |U|/J curve qualitatively consistent with the experimental data obtained in Ref. SchneiderBloch2. The same is true for the 6× 2 ladder, see Fig.
<ref>(b), although the small number of samples we have simulated leads to relatively large error bars. Note that the arguments presented in Ref. EnssSirker for the U→ -U symmetry in such quenches for clean Hubbard models remain valid even if potential disorder is included. The sign of U does not affect the quench dynamics. Results for the diagonal initial state |Ψ_2⟩ are shown in Fig. <ref>. For U=0, see Fig. <ref>(a), we obtain results for the 4× 2 ladder which show qualitatively the same behavior as the ones already presented in Fig. <ref> for much larger ladders. ⟨ I_2(t)⟩ for J_⊥≠ 0 oscillates around zero with J_⊥ determining the oscillation frequency. While the oscillation amplitude around zero is modified for U=4, see Fig. <ref>(b), there is otherwise no qualitative difference between the non-interacting and the interacting case. For a given coupling strength J_⊥ the time scale for the initial decay of ⟨ I_2(t)⟩ is of the same order. For interchain coupling J_⊥ =1 we observe, in particular, an almost complete decay of the order parameter on a time scale of order 1/J in both cases.

§.§ Spinless fermions

While our numerical results for spinful 4× 2 and 6× 2 ladders demonstrate behavior which is qualitatively consistent with the experimental data, the system sizes are quite small. To corroborate these results we thus also consider the case of spinless fermionic two-leg ladders where larger system sizes can be simulated. Instead of an onsite Hubbard interaction U we now introduce a nearest-neighbor interaction

H_V = V∑_i (n_i,1n_i,2 + n_i,1n_i+1,1 + n_i,2n_i+1,2).

Results for an 8× 2 ladder with 8 fermions are shown in Fig. <ref>. As in the spinful case, the dynamics for V=0 is one-dimensional and completely independent of the strength of the interchain coupling J_⊥: the results for V=0 in Fig. <ref>(a) and Fig.
<ref>(b) are identical. Adding nearest-neighbor interactions leads to a strong reduction of the order parameter both for weak and strong hopping between the chains. The decay of the order parameter at long times in the interacting case seems to be well described by an exponential. The long-time average and the decay rate extracted from exponential fits are shown in Fig. <ref>. The results show that both the long-time average I_1 and the decay rate γ do depend on the strength of J_⊥, albeit rather weakly. For weak interchain coupling J_⊥=0.1 we observe a non-monotonic dependence of I_1 on the interaction strength similar to the spinful case.

§ ENTANGLEMENT ENTROPY

In this final section we want to briefly discuss the entanglement properties of fermionic ladders. We consider ladders where the chains contain an even number of sites and cut the ladder into two equal halves, A and B, perpendicular to the chain direction. The von Neumann entanglement entropy is then defined as

S_ent(t) = -Tr[ρ_A(t) ln ρ_A(t)],

where ρ_A(t) = Tr_B |Ψ_i(t)⟩⟨Ψ_i(t)| is the reduced density matrix of segment A. If we start from one of the product states |Ψ_1,2⟩ then the entanglement entropy for a clean ladder grows linearly in time before saturating at a constant for times t>L_x/(2v) where L_x/2 is the length of the segment and v∼ 2J the velocity of excitations.<cit.> The entanglement entropy per chain, S_ent(t)/L_y, in the clean case is independent of the number of legs L_y and independent of the coupling J_⊥ between the chains for J_d=0. For spinless fermions we find, in particular, that S_ent(t)/L_y∼ 0.88 t for t<L_x/(2v) consistent with the results for a single chain.<cit.> Similar to the order parameter ⟨ I_1(t)⟩ the entanglement entropy S_ent(t) for the rung occupied initial state remains independent of the interchain coupling J_⊥ in the non-interacting case even if we include disorder.
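For the non-interacting data discussed here, S_ent(t) can be computed from the two-point correlation matrix rather than from the full many-body state (the standard free-fermion correlation-matrix technique; the snippet below is a generic sketch with illustrative sanity checks, not code from the paper):

```python
import numpy as np

def free_fermion_entropy(C_A):
    # S = -sum_k [nu ln nu + (1-nu) ln(1-nu)], with nu the eigenvalues of the
    # correlation matrix C_ss' = <c^dag_s c_s'> restricted to region A
    nu = np.clip(np.linalg.eigvalsh(C_A), 1e-12, 1 - 1e-12)
    return float(-(nu * np.log(nu) + (1 - nu) * np.log(1 - nu)).sum())

# a product state (occupations 0 or 1) carries no entanglement ...
assert free_fermion_entropy(np.diag([1.0, 0.0, 1.0])) < 1e-6
# ... while one site of a maximally entangled bond (nu = 1/2) gives ln 2
assert abs(free_fermion_entropy(np.array([[0.5]])) - np.log(2)) < 1e-9
```

Because only the L×L correlation matrix of segment A enters, this route scales polynomially with system size, in contrast to the exponentially large Hilbert space needed for the interacting ladders below.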
Without interactions or diagonal couplings, S_ent(t) of a ladder prepared in the rung occupied initial state is simply L_y times the entanglement entropy of a single chain. This demonstrates further that the stability of the Anderson localized state cannot be investigated in this setup. One way to allow for dynamics which involves the full ladder is to include diagonal couplings. As demonstrated in Fig. <ref>, S_ent(t) is then no longer simply given by L_y times the entanglement entropy of a single chain but rather grows more rapidly with the number of legs as expected when moving towards a two-dimensional system.The entanglement entropy at long times increases monotonically with J_d up to a maximum value. The maximal value is determined by the smaller of the two relevant length scales: the localization length and the block size.Another way of breaking the one-dimensionality of the dynamics is to include interactions. As demonstrated in Fig. <ref> the entanglement entropy then depends on the strength of the interchain coupling J_⊥ even without the diagonal couplings.For spin chains it has been shown that the entanglement entropy increases logarithmically in the many-body localized phase.<cit.> While some of the data in Fig. <ref> might be hinting at such a scaling, the system sizes are too small to observe scaling over a large time interval. We also note that it has recently been argued—based on numerical data—that the entanglement growth in a Hubbard chain with potential disorder does not grow logarithmically but rather follows a power law with an exponent much smaller than 1.<cit.>In addition to the spinful case we therefore also consider the spinless case, see Fig. <ref>.In this case we do see clear signatures of a logarithmic scaling for small interactions V which seem to indicate that the ladder for D=2.5 is already in the many-body localized phase. 
Determining the phase diagram of the ladder as a function of disorder strength D and interaction V is difficult using exact diagonalization because of the limited system sizes accessible and is beyond the scope of this paper.

§ CONCLUSIONS

We have studied non-equilibrium dynamics and localization phenomena in fermionic Hubbard ladders with identical disorder along the chain direction using analytical calculations in limiting cases as well as exact diagonalizations. In the free fermion case we confirm that a perpendicular coupling between the chains does not affect the dynamics for an initial state where all even sites on the chains are occupied by one fermion while all odd sites are empty (rung occupied state). Anderson localization in the chains appears to be stable in such a setup simply because turning on the perpendicular interchain couplings does not affect the dynamics at all. In order to study the differences in the response to interchain couplings between an Anderson and a many-body localized system in a non-trivial setup, we considered either modifying the initial state or allowing for additional diagonal hoppings between the chains. For the modified initial state—where even sites are occupied by one fermion on even legs and odd sites on odd legs (diagonally occupied state)—we did not find any qualitative difference between the Anderson and the many-body localized state. In both cases interchain coupling leads to a complete decay of the order parameter for a two-leg ladder. At least for small systems there is also no discernible difference in the time scales for the decay of the order parameter between the interacting and the non-interacting model. Similarly, we found that the order parameter for the rung occupied state does decay also in the non-interacting case if we allow for diagonal hoppings which truly couple the chains.
Qualitatively, there is again no difference between the Anderson and the many-body localized case: in both cases the initial order is unstable to generic couplings between the chains. While a more detailed analysis of the long-time average of the order parameter, the decay time, and of the entanglement entropy does reveal quantitative differences between the non-interacting and the interacting case, coupling chains with identical disorder in a generic way does not appear to be a 'smoking gun' experiment to distinguish Anderson and many-body localized systems.

We acknowledge support by the Natural Sciences and Engineering Research Council (NSERC, Canada) and by the Deutsche Forschungsgemeinschaft (DFG) via Research Unit FOR 2316. We are grateful for the computing resources provided by Compute Canada and Westgrid. Y.Z. thanks Prof. J. Cho for useful discussions. Y.Z. is supported (in part) by the R&D Convergence Program of NST (National Research Council of Science and Technology) of the Republic of Korea (Grant No. CAP-15-08-KRISS).

References:
[1] P. W. Anderson, Phys. Rev. 86, 694 (1952).
[2] E. Abrahams, P. W. Anderson, D. C. Licciardello, and T. V. Ramakrishnan, Phys. Rev. Lett. 42, 673 (1979).
[3] J. T. Edwards and D. J. Thouless, J. Phys. C 5, 807 (1972).
[4] E. Abrahams (ed.), 50 Years of Anderson Localization (World Scientific, 2010).
[5] B. Kramer and A. MacKinnon, Rep. Prog. Phys. 56, 1469 (1993).
[6] D. Basko, I. Aleiner, and B. Altshuler, Ann. Phys. 321, 1126 (2006).
[7] M. Žnidarič, T. Prosen, and P. Prelovšek, Phys. Rev. B 77, 064426 (2008).
[8] A. Pal and D. A. Huse, Phys. Rev. B 82, 174411 (2010).
[9] J. Z. Imbrie, Phys. Rev. Lett. 117, 027201 (2016).
[10] R. Nandkishore and D. A. Huse, Annual Review of Condensed Matter Physics 6, 15 (2015).
[11] E. Altman and R. Vosk, Annual Review of Condensed Matter Physics 6, 383 (2015).
[12] M. Serbyn and J. E. Moore, Phys. Rev. B 93, 041424 (2016).
[13] K. Agarwal, S. Gopalakrishnan, M. Knap, M. Müller, and E. Demler, Phys. Rev. Lett. 114, 160401 (2015).
[14] S. Gopalakrishnan, M. Müller, V. Khemani, M. Knap, E. Demler, and D. A. Huse, Phys. Rev. B 92, 104202 (2015).
[15] D. A. Huse, R. Nandkishore, and V. Oganesyan, Phys. Rev. B 90, 174202 (2014).
[16] M. Serbyn, Z. Papić, and D. A. Abanin, Phys. Rev. X 5, 041047 (2015).
[17] V. Oganesyan and D. A. Huse, Phys. Rev. B 75, 155111 (2007).
[18] D. J. Luitz, N. Laflorencie, and F. Alet, Phys. Rev. B 91, 081103 (2015).
[19] D. J. Luitz, N. Laflorencie, and F. Alet, Phys. Rev. B 93, 060201 (2016).
[20] M. Serbyn, Z. Papić, and D. A. Abanin, Phys. Rev. Lett. 111, 127201 (2013).
[21] F. Andraschko, T. Enss, and J. Sirker, Phys. Rev. Lett. 113, 217201 (2014).
[22] T. Enss, F. Andraschko, and J. Sirker, Phys. Rev. B 95, 045121 (2017).
[23] Y. Bar Lev, G. Cohen, and D. R. Reichman, Phys. Rev. Lett. 114, 100601 (2015).
[24] M. Schreiber, S. S. Hodgman, P. Bordia, H. P. Lüschen, M. H. Fischer, R. Vosk, E. Altman, U. Schneider, and I. Bloch, Science 349, 842 (2015).
[25] J. Smith, A. Lee, P. Richerme, B. Neyenhuis, P. W. Hess, P. Hauke, M. Heyl, D. A. Huse, and C. Monroe, Nature Phys. 12, 907 (2016).
[26] P. Bordia, H. P. Lüschen, S. S. Hodgman, M. Schreiber, I. Bloch, and U. Schneider, Phys. Rev. Lett. 116, 140401 (2016).
[27] P. Prelovšek, Phys. Rev. B 94, 144204 (2016).
[28] R. Mondaini and M. Rigol, Phys. Rev. A 92, 041601 (2015).
[29] Y. Bar Lev and D. R. Reichman, EPL 113, 46001 (2016).
[30] D. Weinmann and S. N. Evangelou, Phys. Rev. B 90, 155411 (2014).
[31] T. Enss and J. Sirker, New J. Phys. 14, 023008 (2012).
[32] P. Calabrese and J. Cardy, Journal of Physics A: Mathematical and Theoretical 42, 504005 (2009).
[33] Y. Zhao, F. Andraschko, and J. Sirker, Phys. Rev. B 93, 205146 (2016).
[34] J. H. Bardarson, F. Pollmann, and J. E. Moore, Phys. Rev. Lett. 109, 017202 (2012).
[35] P. Prelovšek, O. S. Barišić, and M. Žnidarič, Phys. Rev. B 94, 241104 (2016).
Preprint: arXiv:1703.09336v1 [cond-mat.dis-nn], "Localization of fermions in coupled chains with identical disorder", Y. Zhao, S. Ahmed, and J. Sirker (2017).
We consider the problem of attitude tracking for small-scale aerobatic helicopters. A small scale helicopter has two subsystems: the fuselage, modeled as a rigid body; and the rotor, modeled as a first order system. Due to the coupling between rotor and fuselage, the complete system does not inherit the structure of a simple mechanical system. The coupled rotor-fuselage dynamics is first transformed to a rigid body attitude tracking problem with first order actuator dynamics. The proposed controller is developed using geometric and backstepping control techniques. The controller is globally defined on SO(3) and is shown to be locally exponentially stable. The controller is validated in simulation and experiment for a 10 kg class small scale flybarless helicopter by demonstrating aggressive roll attitude tracking.

§ INTRODUCTION

Small-scale conventional helicopters with a single main rotor and a tail rotor are capable of performing extreme 3D aerobatic maneuvers <cit.>, <cit.>, <cit.>. Such maneuvers involve large angle rotations with high angular velocity, inverted flight, pirouettes, etc. This necessitates a tracking controller which is globally defined and is capable of achieving fast rotational maneuvers. The attitude tracking problem of a helicopter is quite different from that of a rigid body. The control moments generated by the rotor excite the rigid body dynamics of the fuselage, which in turn affects the rotor loads and its dynamics, causing nonlinear coupling. The key differences between the rigid body tracking problem and attitude tracking of a helicopter are the following: 1) the presence of large aerodynamic damping in the rotational dynamics; and 2) the required control moment for tracking cannot be applied instantaneously due to the rotor blade dynamics.
The control moments are produced by the rotor subsystem which has first order dynamics <cit.>. The importance of including rotor dynamics in controller design for large scale helicopters has been extensively studied in the literature <cit.>, <cit.>, <cit.>, <cit.>. Hall and Bryson <cit.> have shown the importance of rotor state feedback in achieving tight attitude control for large scale helicopters, while Takahashi <cit.> compares H_∞ attitude controller designs for cases with and without rotor state feedback. In this article, we propose an attitude tracking controller for small scale helicopters using notions based on geometric control and backstepping control design approaches. We show that the controller is defined globally on the attitude manifold, SO(3), achieves local exponential stability and is capable of performing rapid rotational maneuvers. Previous approaches to small-scale helicopter attitude control are based on attitude parametrizations such as Euler angles, which suffer from singularity issues, or quaternions, which have ambiguity in representation. The proposed controller, being defined on SO(3), is free of these issues. In <cit.>, an adaptive backstepping stabilizing controller using Euler angles for a small scale helicopter with servo and rotor dynamics is considered. Tang et al. <cit.> explicitly consider the rotor dynamics and design a stabilizing controller based on a sliding mode technique using Euler angles, which is hence confined to small angle maneuvers.

The paper is organized as follows: Section 2 describes the rotor-fuselage dynamics of a small-scale helicopter. Section 3 presents an attitude tracking controller for a rigid body and later presents the proposed controller for the helicopter rotor-fuselage dynamics. The efficacy of the proposed design is demonstrated through numerical simulation in Section 4 and its experimental validation is given in Section 5.
§ HELICOPTER MODEL

Unlike a quadrotor, a helicopter modeled purely as a rigid body does not capture all the dynamics required for high-bandwidth attitude control. A coupled rotor-fuselage model of a small-scale helicopter is therefore considered <cit.>. The fuselage is modeled as a rigid body and the rotor as a first order system which generates the required control moments. The inclusion of the rotor model is crucial as it introduces aerodynamic damping in the system. The rotational equations of motion of the fuselage are given by,

Ṙ = Rω̂, Jω̇ + ω× Jω = M,

where R ∈ SO(3) is the rotation matrix which transforms vectors from the body fixed frame of reference, (O_b,X_b,Y_b,Z_b), to a spatial frame of reference, (O_e,X_e,Y_e,Z_e), M = [M_x,M_y,M_z] is the external moment acting on the fuselage, J is the body moment of inertia of the fuselage, and ω = [ω_x,ω_y,ω_z] is the angular velocity of the body frame with respect to the spatial frame expressed in the body frame. The hat operator, ·̂, is a Lie algebra isomorphism from ℝ^3 to 𝔰𝔬(3) given by

ω̂ = [ 0 -ω_z ω_y; ω_z 0 -ω_x; -ω_y ω_x 0 ].

We consider here first order tip path plane (TPP) equations for the rotor as they capture the required dynamics for gross movement of the fuselage <cit.>. The rotor dynamics equations are given by

ȧ = -ω_y - a/τ_m + θ_a/τ_m, ḃ = -ω_x - b/τ_m + θ_b/τ_m,

where a and b are respectively the longitudinal and lateral tilt of the rotor disc with respect to the hub plane as shown in Fig. <ref>, τ_m is the main rotor time constant and θ_a and θ_b are the control inputs to the rotor subsystem. They are respectively the lateral and longitudinal cyclic blade pitch angles actuated by servos through a swashplate mechanism. The coupling of the rotor and fuselage occurs through the rotor hub.
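The hat operator introduced above (and its inverse, the vee map used later) is simple to implement; the following sketch verifies the defining property ω̂v = ω × v (the vector values are arbitrary test data):

```python
import numpy as np

def hat(w):
    # hat map: R^3 -> so(3), so that hat(w) @ v == np.cross(w, v)
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def vee(W):
    # inverse of the hat map: so(3) -> R^3
    return np.array([W[2, 1], W[0, 2], W[1, 0]])

w = np.array([1.0, -2.0, 0.5])
v = np.array([0.3, 0.7, -1.1])
assert np.allclose(hat(w) @ v, np.cross(w, v))
assert np.allclose(vee(hat(w)), w)
```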
The rolling moment, M_x and pitching moment M_y, acting on the fuselage due to the rotor flapping consists of two components – due to tilting of the thrust vector, T, and due to the rotor hub stiffness, k_β,M_x= (hT + k_β)b,M_y= (hT + k_β)a.Here h is the distance of rotor hub from the center of mass. For near-hover condition the thrust can be considered constant which gives the equivalent hub stiffness, K_β = (hT + k_β). The control moment about yaw axis, M_z, is applied through tail rotor which has a much faster aerodynamic response than the main rotor flap dynamics. The tail rotor along with the actuating servo is approximated as a first order system with τ_t as tail rotor time constantṀ_z = -M_z/τ_t + K_tθ_t/τ_t. Since angular velocity of the fuselage is available for feedback, the main rotor dynamics eq:flap_eq and tail rotor dynamics can be written in terms of control moments and a new control input u asṀ = -AM + u,where A is a positive definite matrix defined asA ≜[ 1/τ_m 0 0; 0 1/τ_m 0; 0 0 1/τ_t ]and the new control input u is defined asu ≜[ K_β(θ_b/τ_m - ω_x); K_β(θ_a/τ_m - ω_y); K_tθ_t/τ_t ]. The combined rotor-fuselage dynamics given by eq:rigid_body and eq:actuator can be seen as a simple mechanical system driven by a force which has first order dynamics. The overall dynamical system does not have the form of a simple mechanical system <cit.> as the actuator dynamics is first order. § ATTITUDE TRACKING CONTROLLER Given a twice differentiable attitude reference command (R_d(t),ω_d(t),ω̇_d(t)), the objective is to design an attitude tracking controller for the helicopter. The combined rotor-fuselage dynamics is reproduced here for convenience Fuselage Ṙ = Rω̂,Jω̇ + ω× Jω = M, Rotor Ṁ = -AM + u. First we design an attitude tracking controller for the fuselage, modeled as rigid body eq:fuselage, using geometric control theory as is described in <cit.> and <cit.>. 
Next, we use the results from this part to prove local exponential stability of the proposed helicopter tracking controller. The rigid body tracking controller has proportional derivative plus feed-forward components. The proportional action is derived from a tracking error function ψ : SO(3) × SO(3) →ℝ which is defined in terms of the configuration error function ψ_c : SO(3) →ℝ asψ(R,R_d) = ψ_c(R_d^TR) 1/2 tr[I - R_d^TR].This is possible on a Lie group since the tracking problem can be reduced to a configuration stabilization problem about the identity because of the possibility of defining error between any two configurations using the group operation <cit.>. ψ_c has a single critical point within the sub level set about the identity I, ψ_c^-1(≤ 2,I) = {R ∈ SO(3) |ψ_c(R)<2 }. This sub level set represents the set of all rotations which are less than π radians from the identity I. From the above function the attitude error vector, e_R, is defined as the differential of ψ with respect to first argument,d_1ψ(R,R_d)· Rω̂= 1/2[R_d^TR-R^TR_d]^∨·ω, e_R = 1/2[R_d^TR-R^TR_d]^∨,where (·)^∨ : 𝔰𝔬(3)→ℝ^3 is the inverse of hat map (̂·̂)̂. Since the velocities at reference and current configurations are in different tangent spaces they cannot be directly compared. Therefore Ṙ_d is transported to the tangent space at R by the tangent map of the left action of R^TR_d. Thus, the tracking error for angular velocity is given bye_ω = ω - R^TR_dω_d.The total derivative of e_R isė_R= 1/2[-ω̂_d R_d^TR + R_d^TRω̂ + ω̂R^TR_d - R^TR_dω̂_d]^∨= 1/2[R_d^TR(ω̂ - R^TR_dω̂_dR_d^TR) + (ω̂ - R^TR_dω̂_dR_d^TR)R^TR_d]^∨= 1/2[R_d^TRê_ω + ê_ω R^TR_d]^∨= B(R_d^TR)e_ω,where B(R_d^TR) = 1/2[tr(R^TR_d)I - R^TR_d] and B(R_d^TR) <1 for all R_d^TR∈SO(3). Here we have used the fact that [Rx̂R^T]^∨ = Rx for all R ∈ SO(3) and x ∈ℝ^3. 
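The error vectors defined above can be computed directly from their expressions. The sketch below (rotation angles and angular velocities are arbitrary test values) checks that both errors vanish at a tracking equilibrium and that a pure roll error of angle θ gives e_R = [sin θ, 0, 0]:

```python
import numpy as np

def vee(W):
    # inverse of the hat map
    return np.array([W[2, 1], W[0, 2], W[1, 0]])

def attitude_errors(R, Rd, w, wd):
    eR = 0.5 * vee(Rd.T @ R - R.T @ Rd)   # e_R = (1/2)[Rd^T R - R^T Rd]^vee
    ew = w - R.T @ Rd @ wd                # e_w = w - R^T Rd wd
    return eR, ew

def Rx(a):
    # rotation about the body x-axis, used here to build test attitudes
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

w = np.array([0.2, 0.0, 0.0])
eR0, ew0 = attitude_errors(Rx(0.1), Rx(0.1), w, w)
assert np.allclose(eR0, 0.0) and np.allclose(ew0, 0.0)   # equilibrium
eR, _ = attitude_errors(Rx(0.3), Rx(0.1), w, w)
assert abs(eR[0] - np.sin(0.2)) < 1e-12   # roll error of 0.2 rad
```

Note that e_ω compares velocities only after transporting ω_d into the tangent space at R via R^T R_d, as described in the text.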
The time derivative of e_ω isė_ω = ω̇ - R^TR_dω̇_d + ω̂R^TR_dω_d.The total derivative of ψ is dψ/dt = -1/2 tr(-ω̂_d R_d^T R + R_d^TRω̂) = -1/2 tr(R_d^TR(ω̂ - R^TR_dω̂_dR_d^TR)) = -1/2 tr( 1/2(R_d^TR - R^TR_d) ê_ω) = e_R· e_ω. ψ_c is positive definite and quadratic within the sub level set ψ_c^-1(≤ 2,I) which makes ψ uniformly quadratic about the identity <cit.>. This implies there exist scalers b_1, b_2 such that 0 < b_1 ≤ b_2 andb_1e_R^2 ≤ψ(R,R_d) ≤ b_2e_R^2 ∀ R,R_d ∈ψ_c^-1(≤ 2,I).§.§ Attitude Tracking for Rigid bodyThe tracking controller for the fuselage <cit.> based on the above error function is given by M_d = -k_R e_R - k_ω e_ω + ω× Jω - J(ω̂R^TR_dω_d-R^TR_d ω̇_d)where k_R and k_ω are positive constants, the third term cancels the rotational dynamics, and the subsequent terms are the feedforward terms. The error dynamics for the rigid body can now be obtained by substituting the above desired moment, M_d, in eq:fuselage, which results in Jė_ω = -k_Re_R - k_ω e_ω. The following theorem, taken from <cit.>, shows exponential stability of the attitude tracking controller.(Exponential stability of attitude error dynamics) The control moment given in eq:moment_des makes the equilibrium (e_R,e_ω) = (0,0) of tracking error dynamics defined in eq:att_err_dyn exponentially stable for all initial conditions satisfying k_R ψ(R(0),R_d(0)) + 1/2λ_max(J)e_ω (0)^2 < 2k_R.Define a Lyapunov candidate function for the error dynamics eq:att_err_dyn V_1 = 1/2e_ω· J e_ω + k_Rψ(R,R_d) + ϵ e_R· e_ω,where 0 < ϵ∈ℝ. V_1 can be lower and upper bounded by 1/2λ_min(J)e_ω^2 + k_Rb_1e_R^2 - ϵe_Re_ω≤ V_1≤1/2λ_max(J)e_ω^2 + k_Rb_2e_R^2 + ϵe_Re_ωresulting in the relation,z_1^TM_1z_1 ≤ V_1 ≤ z_1^TM_2z_1,where z_1 = [e_ωe_R] andM_1 = [ λ_min(J)/2 -ϵ/2; -ϵ/2 k_Rb_1 ], M_2 = [ λ_max(J)/2ϵ/2;ϵ/2 k_Rb_2 ]. 
The time derivative of V_1 is given by

V̇_1 = e_ω· Jė_ω + k_Re_R· e_ω + ϵė_R· e_ω + ϵ e_R·ė_ω = -k_ωe_ω^2 + ϵ B(R_d^TR)e_ω· e_ω - ϵ k_R e_R· J^-1e_R - ϵ k_ω e_R · J^-1 e_ω.

V̇_1 can be upper bounded by

V̇_1 ≤ -k_ωe_ω^2 + ϵe_ω^2 - ϵ k_R/λ_max(J)e_R^2 + ϵ k_ω/λ_min(J)e_ωe_R,

which can be written as V̇_1 ≤ -z_1^TW_1z_1, where

W_1 = [k_ω - ϵ -ϵ k_ω/2λ_min(J); -ϵ k_ω/2λ_min(J) ϵ k_R/λ_max(J) ].

Choosing ϵ such that

ϵ < min{k_ω, √(2k_Rb_1λ_min(J)), 4k_Rk_ωλ_min(J)^2/(k_ω^2λ_max(J) + 4k_R λ_min(J)^2) }

makes the matrices M_1, M_2 and W_1 positive definite. This makes V_1 quadratic from eq:V and V̇_1 negative definite as long as the configuration error R_e(t)=R_d^T(t)R(t) remains in the sub level set ψ_c^-1(≤ 2,I). This is shown to be true in the following sequence of arguments. Consider a Lyapunov candidate V_2 = 1/2e_ω· J e_ω + k_Rψ(R,R_d) for the attitude error dynamics. Then V̇_2 = -k_ωe_ω^2 ≤ 0. This guarantees

k_Rψ(R(t),R_d(t)) ≤ k_Rψ(R(t),R_d(t)) + 1/2e_ω(t) · J e_ω(t) ≤ k_Rψ(R(0),R_d(0)) + 1/2e_ω(0) · J e_ω(0) < 2k_R,

which implies ψ(R(t),R_d(t)) < 2 ∀ t>0. Therefore there exist positive constants α_1, β_1 such that ψ(t) ≤min{2,α_1e^-β_1t}.

§.§ Attitude Tracking for Helicopter

In this subsection we bring in the rotor dynamics which induces the desired torque computed in the previous part through a first order subsystem. The error between this desired torque and the applied torque on the fuselage is denoted as e_M ≜ M - M_d. We derive the error dynamics for the combined rotor-fuselage dynamics. Equation eq:fuselage can be rewritten as

Jω̇ + ω× Jω = M_d + e_M.

Using M_d for rigid body tracking from eq:moment_des gives the error dynamics for the fuselage as

Jė_ω = -k_Re_R - k_ω e_ω + e_M.

Taking the derivative of e_M and using eq:rotor leads to the following error dynamics for the rotor

ė_M = -Ae_M - AM_d - Ṁ_d + u.

Equations eq:er_fuselage and eq:er_rotor constitute the error dynamics for the rotor-fuselage system.
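A minimal numerical sketch of the desired moment eq:moment_des and the backstepping error e_M is given below (the inertia matrix and gains are illustrative values, not the vehicle parameters of Table <ref>); at a tracking equilibrium with zero reference acceleration the commanded moment reduces to the gyroscopic term ω × Jω:

```python
import numpy as np

def hat(w):
    return np.array([[0.0, -w[2], w[1]], [w[2], 0.0, -w[0]], [-w[1], w[0], 0.0]])

def vee(W):
    return np.array([W[2, 1], W[0, 2], W[1, 0]])

def desired_moment(R, Rd, w, wd, wd_dot, J, kR, kw):
    # M_d: PD feedback on (e_R, e_w) plus gyroscopic and feedforward terms
    eR = 0.5 * vee(Rd.T @ R - R.T @ Rd)
    ew = w - R.T @ Rd @ wd
    return (-kR * eR - kw * ew + np.cross(w, J @ w)
            - J @ (hat(w) @ R.T @ Rd @ wd - R.T @ Rd @ wd_dot))

J = np.diag([0.17, 0.34, 0.28])   # illustrative inertia, kg m^2
R = np.eye(3)
w = np.array([0.1, -0.2, 0.3])
Md = desired_moment(R, R, w, w, np.zeros(3), J, kR=8.0, kw=2.0)
assert np.allclose(Md, np.cross(w, J @ w))   # only the gyroscopic term remains

e_M = np.array([0.05, 0.0, -0.02]) - Md      # backstepping error for some applied M
```

With R = R_d and ω = ω_d, both e_R and e_ω vanish and hat(ω)ω_d = ω × ω = 0, so the feedback and feedforward terms drop out, which is what the assertion checks.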
It is clear that the resulting error dynamics has a strict-feedback form wherein e_M acts as a virtual control input in eq:er_fuselage. Therefore a backstepping approach can be used for controller synthesis <cit.>. We claim that the rotor-fuselage error dynamics is locally exponentially stable if the control input u is chosen to be u = Ṁ_d + AM_d - e_ω - ϵ J^-1e_R. In the above expression, the derivative of the desired control moment is obtained by differentiating eq:moment_des, Ṁ_d = -k_Rė_R - k_ωė_ω + ω̇× Jω + ω× Jω̇ - J(ω̇̂R^TR_dω_d - ω̂^2R^TR_dω_d + 2ω̂R^TR_dω̇_d - R^TR_dω̂_dω̇_d - R^TR_dω̈_d).

(Exponential stability of rotor-fuselage error dynamics) The control input given in eq:control_ip renders the equilibrium (e_R,e_ω,e_M) = (0,0,0) of the rotor-fuselage error dynamics exponentially stable for all initial conditions satisfying k_Rψ(R(0),R_d(0)) + 1/2λ_max(J)e_ω(0)^2 + 1/2e_M(0)^2 + ϵe_R(0)e_ω(0) < 2k_R.

Consider the following Lyapunov candidate for the combined rotor-fuselage error dynamics: V = V_1 + 1/2e_M^2 = 1/2e_ω· J e_ω + k_Rψ(R,R_d) + ϵ e_R· e_ω + 1/2e_M^2. V is quadratic within the sub level set ψ_c^-1(≤ 2,I) since V_1 is quadratic in the same set. The time derivative of V is given by V̇ = e_ω·(-k_ω e_ω + e_M) + ϵė_R· e_ω + ϵ e_R· J^-1(-k_Re_R - k_ω e_ω + e_M) + e_M·ė_M = -k_ωe_ω^2 + ϵB(R_d^TR)e_ω· e_ω - ϵ k_R e_R· J^-1e_R - ϵ k_ω e_R· J^-1e_ω + e_M· (ϵ J^-1e_R + e_ω - Ae_M - Ṁ_d - AM_d + u). From the previous subsection on rigid body tracking, the first four terms in the above expression have been rendered negative definite, cf. eq:V_dot. By substituting u from eq:control_ip we get V̇ = V̇_1 - e_M· Ae_M ≤ -z_1^TW_1z_1 - e_M· Ae_M ≤ -z^TWz, where z = [e_ω e_R e_M]^T and W = [ k_ω - ϵ -ϵ k_ω/2λ_min(J) 0; -ϵ k_ω/2λ_min(J) ϵ k_R/λ_max(J) 0; 0 0 λ_min(A) ]. V(t) remains quadratic when R_e(t) = R_d(t)^TR(t) lies in the sub level set ψ_c^-1(≤ 2,I).
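After the cancellations shown in the proof, the closed-loop error dynamics under the proposed u are Jė_ω = -k_R e_R - k_ω e_ω + e_M and ė_M = -A e_M - e_ω - ϵ J^-1 e_R. A small-angle simulation sketch (ė_R ≈ e_ω; all numerical values are illustrative choices of ours, not the parameters of Table <ref>) shows the expected decay of all three error states:

```python
import numpy as np

# Illustrative values: inertia J, first-order rotor matrix A, gains kR, kw, eps.
J = np.diag([0.2, 0.4, 0.3])
Jinv = np.linalg.inv(J)
A = 8.0 * np.eye(3)
kR, kw, eps = 4.0, 2.0, 0.3

def rhs(z):
    eR, ew, eM = z[:3], z[3:6], z[6:]
    deR = ew                                    # small-angle kinematics eR' ~ e_omega
    dew = Jinv @ (-kR * eR - kw * ew + eM)      # fuselage error dynamics
    deM = -A @ eM - ew - eps * (Jinv @ eR)      # rotor error dynamics under u
    return np.concatenate([deR, dew, deM])

z = np.array([0.3, -0.2, 0.1, 1.0, 0.0, -0.5, 0.2, 0.1, 0.0])
dt = 1e-3
for _ in range(5000):                           # 5 s of explicit Euler integration
    z = z + dt * rhs(z)
# all error states (eR, e_omega, eM) have decayed essentially to zero
```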
This is true since k_Rψ(R(t),R_d(t)) ≤ k_Rψ(R(t),R_d(t)) + 1/2e_ω(t) · J e_ω(t) + 1/2e_M(t)^2 + ϵe_R(t)e_ω(t) ≤ k_Rψ(R(0),R_d(0)) + 1/2e_ω(0) · J e_ω(0) + 1/2e_M(0)^2 + ϵe_R(0)e_ω(0) < 2k_R, so that ψ(R(t),R_d(t)) < 2 ∀ t>0. V(t) is positive definite and quadratic and V̇(t) is negative definite; therefore there exist positive scalars α and β such that ψ(t) ≤ min{2, α e^-β t}.

Equation eq:moment_dot implies that a feasible attitude reference trajectory must have a continuous second derivative of the angular velocity ω̈_d for a continuous control input. It is assumed that the fuselage body frame angular acceleration ω̇ is available for feedback. The flap angles (a,b), which are difficult to measure, are not required for implementing the controller.

§ SIMULATION RESULTS

The tracking controller given by eq:control_ip was simulated for a 10 kg class model helicopter whose parameters are given in Table <ref>. The helicopter was given an initial attitude of 150 deg in roll angle and 57 deg/s of roll rate and was subjected to a sinusoidal roll angle input with an amplitude of twenty degrees and a frequency of one hertz. Fig. <ref> shows the response, and it is evident that the controller converges to the reference command within one second. The controller is able to track the desired roll attitude with a maximum flap deflection of ± 0.87 degrees as shown in Fig. <ref>. As expected, the longitudinal tilt of the rotor remains unchanged at zero, as the simulated maneuver has purely lateral motion.

§ EXPERIMENTAL RESULTS

The proposed controller was validated on an instrumented 10 kg class small-scale conventional helicopter, which consists of a single main rotor and a tail rotor. The instrumented helicopter is shown in Fig. <ref>. The main rotor, of diameter 1.4 m, operates at 1500 rpm. The lateral and longitudinal control moments are produced by tilting the swashplate using three servos. Yawing moment is generated by changing the collective pitch of the tail rotor.
The helicopter has a stiff rotor hub (large k_β), which makes it extremely agile. The controller was implemented on PX4 autopilot hardware. It consists of a suite of sensors, namely a 3-axis accelerometer, a 3-axis gyroscope, a 3-axis magnetometer, a GPS receiver and a barometer, which together constitute the attitude and heading reference system (AHRS). The autopilot software is based on the PX4 flight stack, which has a modular design and runs on top of a real-time operating system (NuttX). The autopilot comes with an EKF-based attitude estimator. The proposed attitude controller was added as a module and runs at 250 Hz.

For validation purposes the helicopter was excited about the roll/lateral axis. This axis of excitation was chosen because it makes it easier for the pilot to keep track of the translational motion, which is not the case with pitch/longitudinal motion. The input reference signal was a superposition of manual pilot input and an autopilot-generated sinusoidal roll reference input of ±20 degrees at 1 Hz. The manual input was superimposed as a correction so as to keep the translational motion of the helicopter within a safe region.

The performance of the attitude controller was found to be satisfactory, as seen in the linked video <cit.>. The error in tracking can be attributed to uncertainty in the model structure and parameters.

§ CONCLUSION

To the best of the authors' knowledge, this work is the first attempt to integrate geometric control theory for the purpose of synthesizing an attitude tracking controller for a small-scale aerobatic helicopter, preserving the significant dynamics of the system while doing so. The control law was validated in simulation and experiment on a 10 kg class small-scale helicopter. The results, as seen through the experimental validation, are very encouraging.

§ ACKNOWLEDGEMENTS

Nidhish Raj and Ravi N Banavar acknowledge with pleasure the support and the conducive and serene surroundings of IIT-Gandhinagar, where most of the theoretical work for this effort was done.
We consider a sequence of elliptic partial differential equations (PDEs) with different, but in some sense similar, rapidly varying coefficients. Such sequences appear, for example, in splitting schemes for time-dependent problems (with one coefficient per time step) and in sample based stochastic integration of outputs from an elliptic PDE (with one coefficient per sample member). In some applications, the difference between consecutive coefficients in the sequence is localized, for example for certain Darcy flow applications and in the simulation of random defects in composite materials. This paper studies an opportunity to exploit that the differences are localized to save computational work in the context of the localized orthogonal decomposition method (LOD, <cit.>). The accuracy of Galerkin projection onto standard finite element spaces generally suffers from variations in the coefficient that are not resolved by the finite element mesh. The work <cit.> studies an elliptic equation in 1D with a rapidly varying coefficient and notes that coefficient variations within the element lead to inaccurate solutions for the standard finite element method. Replacing the coefficient with its elementwise harmonic average leads to an accurate method.
This result, however, does not easily generalize to higher dimensions. For periodic and semi-periodic coefficients varying on an asymptotically fine scale, a homogenized coefficient can be computed and used for coarse scale computations also in higher dimensions <cit.>. The early multiscale method <cit.> is based on homogenization theory and works under assumptions on scale separation and periodicity. Many recent contributions within the field of numerical homogenization can be used without assumptions on periodicity and in higher dimensions, see e.g. <cit.>. In this work, we consider the LOD technique <cit.> in the Petrov–Galerkin formulation (PG-LOD) studied in detail in <cit.>.The fundamental idea of the LOD method is that a low-dimensional function space (multiscale space) with good approximation properties is constructed by computing localized fine-scale correctors to the basis functions of a standard low-dimensional coarse finite element space based on a coarse mesh. Each localized corrector problem is posed only within a patch of a certain radius around its coarse basis function and thus depends only on the diffusion coefficient in that patch. The PG-LOD method has several good properties from a computational perspective. The main advantage is that the PG-LOD corrector problems can be computed completely in parallel with the only communication being a final reduction to form a low-dimensional global stiffness matrix. Further, the fine-scale coefficient only needs to be accessible and stored in memory for one localized corrector problem at a time. Additionally, the method is robust in the sense that both the localized corrector problems and the global low-dimensional problems are typically small enough to be solved with a direct solver.Once computed, the correctors can be reused for problems with the same or similar diffusion coefficient.We study the case when the diffusion coefficient varies in a sequence of problems. 
In such situations, there is an opportunity to reuse previously computed localized correctors if the coefficients do not vary too much between consecutive problems. Since the computational cost is proportional to the number of localized corrector problems that have to be recomputed, it is most advantageous if the perturbations of the coefficient are localized. Two practical examples are two-phase flow where the coefficient depends on the saturation of the two fluids, or when the coefficient is a deviation from a base coefficient as in the case with defects in composite materials.In this work we derive computable error indicators for the error introduced by refraining from recomputing a corrector after a perturbation in the coefficient. The method we propose computes all localized correctors and global stiffness matrix contributions for the first coefficient in the sequence of elliptic PDEs. For the subsequent coefficients, we use the error indicators to adaptively recompute only the correctors that need to be recomputed in order to get a sufficiently accurate solution.The coefficients that have not been recomputed we call lagging coefficients. The method is completely parallelizable over the elements of the coarse mesh. A particularly interesting setting is when only quantities on the coarse mesh are required from the solution, for example upscaled Darcy fluxes in a Darcy flow problem, or the coarse interpolation of the full solution. Any computed fine scale quantities can then be forgotten between the iterations in the sequence and the memory requirement becomes very low.The paper is divided into five sections: Problem formulation in Section <ref>, method description in Section <ref>, error analysis in Section <ref>, implementation in Section <ref>, and numerical experiments in Section <ref>. 
Both the method description and the error analysis are divided into four steps, with an increasing level of approximation in each step: (i) reformulation by the variational multiscale method (VMS), (ii) localization by LOD, (iii) approximation of localized correctors by lagging coefficients, and (iv) approximation of the global stiffness matrix contributions by lagging coefficients. The main results are the method (<ref>) in Section <ref>, the error bound in Theorem <ref> and Algorithm <ref>.

§ PROBLEM FORMULATION

Let Ω be a polygonal domain in ℝ^d (with d=1, 2 or 3) with the boundary partitioned into disjoint subsets Γ_D (for Dirichlet boundary conditions) and Γ_N (for Neumann boundary conditions). Suppose we have a sequence of elliptic equations: for n = 1, 2, …, solve for u̅^n, such that -∇· A^n ∇u̅^n = f in Ω, u̅^n = g on Γ_D, n· A^n ∇u̅^n = 0 on Γ_N, where f ∈ L^2(Ω), g ∈ H^1/2(Γ_D), n is the outward normal of the boundary, and A^n ∈ L^∞(Ω) is a coefficient varying significantly over small distances. To keep the presentation short, we limit ourselves to the case where f and g are independent of n; however, the analysis in this paper can be generalized to n-dependent f and g. We will refer to the sequence index or rank as the time step throughout the paper, although it does not need to correspond to a step from a time discretization. For instance, in Section <ref> we briefly discuss an application for the simulation of weakly random defects in composite materials, where the sequence index corresponds to a Monte Carlo sample member index.

In the remainder of this section and Sections <ref>–<ref>, we consider a fixed step n and drop this index for all quantities. We call the coefficient A = A^n at the current time step the true coefficient. Ideally, only the true coefficient A would be used in the solution at the current time step. However, in order to lower the computational cost, computations from previous time steps will be reused.
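As a toy illustration of such a sequence (a 1D sketch of our own, not the three-dimensional problems considered later), the coefficients below differ between consecutive steps only in a localized set of elements:

```python
import numpy as np

def solve_1d(A, f, h):
    """P1 FEM for -(A u')' = f on (0,1) with u(0) = u(1) = 0;
    A holds one value per element, f is a nodal load vector."""
    n = len(A)
    K = np.zeros((n + 1, n + 1))
    for e in range(n):
        K[e:e + 2, e:e + 2] += A[e] / h * np.array([[1.0, -1.0], [-1.0, 1.0]])
    u = np.zeros(n + 1)
    u[1:-1] = np.linalg.solve(K[1:-1, 1:-1], f[1:-1])
    return u

n, h = 200, 1.0 / 200
rng = np.random.default_rng(1)
A0 = np.exp(rng.uniform(-1.0, 1.0, n))   # rapidly varying coefficient A^1
f = h * np.ones(n + 1)                   # lumped load vector for f = 1

coeffs = [A0]
for step in range(3):                    # A^2, A^3, A^4: localized perturbations
    A = coeffs[-1].copy()
    A[50 + 10 * step: 60 + 10 * step] *= 2.0
    coeffs.append(A)

sols = [solve_1d(A, f, h) for A in coeffs]
```

Re-solving from scratch for every member of the sequence, as above, is what the method of this paper is designed to avoid.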
This means coefficients from previous time steps (lagging coefficients) will enter the analysis through the definition of the localized correctors and in the assembly of the global stiffness matrix. These lagging coefficients will be denoted by Ã. We also want to emphasize that the error indicators derived here are applicable also to situations where the coefficient deviates from a base coefficient, for example within the application of simulations of weakly random defects in composite materials.We will work with a weak formulation of the above problem. Let V = { v ∈ H^1(Ω):v|_Γ_D = 0}. In case Γ_D is empty, we instead consider only solutions and test functions in the quotient space V = H^1(Ω) /. Let (·, ·) denote the L^2-scalar product over Ω, and (u, v)_ω = ∫_ω uv.Further, we define v ^2_L^2(ω) = ∫_ω v^2, v _L^2 =v _L^2(Ω), and the bilinear form a(u, v) = (A ∇ u, ∇ v). We let u̅ = u + g, where u ∈ V and g ∈ H^1(Ω) is an extension of the boundary condition g to the full domain and seek to find u ∈ V, such that for all v ∈ V,(A ∇ u, ∇ v) = (f, v) - (A∇ g, ∇ v).Assuming there exist constants 0 < α and β < ∞, so that α≤ A ≤β a.e., (A ∇·, ∇·) is bounded and coercive on V and existence of a unique solution is guaranteed by the Lax–Milgram theorem. We further define the energy norm |·|_A = (A ∇·, ∇·)^1/2 on V, and the semi-norm |·|_A,ω = (A ∇·, ∇·)_ω^1/2.§ METHOD DESCRIPTION In this section, we describe the proposed numerical method in a series of steps, each of which introduces another level of approximation for the problem (<ref>) above. §.§ Variational multiscale methodThe first step is to reformulate the problem using the variational multiscale method <cit.>. 
This formulation forms the basis for the LOD approximation and makes it possible to reduce the dimensionality of the problem once the corrector problems have been solved.

Let 𝒯_H be a regular and quasi-uniform family of conforming subdivisions of Ω into elements of maximum diameter H, and V_H ⊂ V be a family of conforming first order finite element spaces on this mesh, e.g. 𝒫_1 or 𝒬_1 depending on the shape of the elements. The choice of a linear projective quasi-interpolation operator I_H : V → V_H defines the fine space W as its kernel, W = ker I_H = { v ∈ V : I_H v = 0 }. We assume there exists a constant C_I independent of H so that for all v ∈ V and T ∈ 𝒯_H, it holds H^-1 ‖v - I_H v‖_L^2(T) + ‖∇(v - I_H v)‖_L^2(T) ≤ C_I ‖∇ v‖_L^2(U(T)). Here U(T) is the union of all neighboring elements to T, i.e. U(T) = ⋃{T' ∈ 𝒯_H : T ∩ T' ≠ ∅}. Since we assume I_H is projective (this is not strictly necessary, see e.g. <cit.>), we have the decomposition V = V_H ⊕ W and can decompose the solution u = u_H + u^f and test function v = v_H + v^f and test (<ref>) with the two spaces separately: (A ∇ (u_H + u^f), ∇ v_H) = (f, v_H) - (A ∇ g, ∇ v_H), (A ∇ u^f, ∇ v^f) = (f, v^f) - (A ∇ g, ∇ v^f) - (A ∇ u_H, ∇ v^f). We note that u^f is linear in f, g, and u_H, and we define the linear correction operators Q : H^1(Ω) → W and R : L^2(Ω) → W, so that u^f = -Q u_H + R f - Q g, i.e., find Q v ∈ W and R f ∈ W, such that for all w ∈ W, (A ∇ Q v, ∇ w) = (A ∇ v, ∇ w), (A ∇ R f, ∇ w) = (f, w). These equations have unique solutions, since (A∇·, ∇·) is still bounded and coercive on a subspace W ⊂ V. We introduce a new space, the multiscale space, V_ms = V_H - Q V_H = {v_H - Q v_H : v_H ∈ V_H}, and note that we have the orthogonality relation V_ms ⊥_a W. The solutions u_H, R f, and Q g can be plugged into (<ref>) and we get the following low-dimensional Petrov–Galerkin problem: find u_ms ∈ V_ms, such that for all v_H ∈ V_H, (A ∇ u_ms, ∇ v_H) = (f, v_H) - (A ∇ g, ∇ v_H) - (A ∇ R f, ∇ v_H) + (A ∇ Q g, ∇ v_H). The full solution is then u = u_ms + R f - Q g. It is possible to obtain an approximate solution even if neglecting the right hand side correction term, i.e. letting R = 0 above.
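The ideal corrector problem can be illustrated in 1D with standard P1 elements, taking nodal interpolation at the coarse nodes as the quasi-interpolation operator, so that the fine space consists of fine-mesh functions vanishing at the coarse nodes (a minimal sketch with discretization choices of our own):

```python
import numpy as np

def stiffness_1d(x, A):
    # P1 stiffness matrix for coefficient A (one value per element)
    n = len(x) - 1
    K = np.zeros((n + 1, n + 1))
    for e in range(n):
        h = x[e + 1] - x[e]
        K[e:e + 2, e:e + 2] += A[e] / h * np.array([[1.0, -1.0], [-1.0, 1.0]])
    return K

Nf, Nc = 64, 8
xf = np.linspace(0.0, 1.0, Nf + 1)
coarse = np.arange(0, Nf + 1, Nf // Nc)       # coarse nodes inside the fine grid

rng = np.random.default_rng(0)
A = np.exp(rng.uniform(-1.0, 1.0, Nf))        # rough coefficient
K = stiffness_1d(xf, A)

# a coarse hat basis function at coarse node 3, represented on the fine grid
phi = np.interp(xf, xf[coarse], np.eye(Nc + 1)[3])

# kernel of nodal interpolation: fine functions vanishing at all coarse nodes
free = np.setdiff1d(np.arange(Nf + 1), coarse)

# corrector problem: (A (Q phi)', w') = (A phi', w') for all kernel functions w
q = np.zeros(Nf + 1)
q[free] = np.linalg.solve(K[np.ix_(free, free)], (K @ phi)[free])

phi_ms = phi - q   # multiscale basis function, a-orthogonal to the fine space
```

The corrected function phi_ms agrees with the coarse hat function at the coarse nodes (the interpolation is projective) while its residual is a-orthogonal to the fine space.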
See for example <cit.>.

§.§ Localized orthogonal decomposition

The second step is to localize the corrector computations by means of localized orthogonal decomposition (LOD). The basic idea is to solve the corrector problems (<ref>) only on localized patches instead of on the full domain to reduce the computational cost. For the localization, we define element patches U_k(T) ⊂ Ω for T ∈ 𝒯_H, where 0 ≤ k ∈ ℕ. With the trivial case U_0(T) = T, U_k(T) (a k-layer element patch around T) is defined by the recursive relation U_k+1(T) = ⋃{T' ∈ 𝒯_H : U_k(T) ∩ T' ≠ ∅}. See Figure <ref> for an illustration of element patches. We further define localized fine spaces W(U_k(T)) = { v ∈ W : v|_Ω∖U_k(T) = 0}, consisting of fine functions which are zero outside element patches. Throughout the paper, localized quantities are subscripted with the patch size k. Instead of solving (<ref>), we compute the operators Q_k = ∑_T ∈ 𝒯_H Q_k,T and R_k = ∑_T ∈ 𝒯_H R_k,T, with Q_k,T and R_k,T defined by (A ∇ Q_k,T v, ∇ w) = (A ∇ v, ∇ w)_T, (A ∇ R_k,T f, ∇ w) = (f, w)_T, for all w ∈ W(U_k(T)) and all T ∈ 𝒯_H. We define the localized multiscale space V_ms,k = V_H - Q_k V_H. Our localized multiscale problem reads: find u_ms,k ∈ V_ms,k, such that for all v_H ∈ V_H, (A ∇ u_ms,k, ∇ v_H) = (f, v_H) - (A ∇ g, ∇ v_H) - (A ∇ R_k f, ∇ v_H) + (A ∇ Q_k g, ∇ v_H), and the full solution for the second approximation is u_k = u_ms,k + R_k f - Q_k g.
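The recursive patch definition translates directly into code; a sketch on a toy 1D element numbering (the adjacency map below is our own construction):

```python
def patch(k, T, neighbors):
    """U_k(T): grow the patch k times by adding all elements that share
    a vertex with the current patch (U_0(T) = {T})."""
    U = {T}
    for _ in range(k):
        U = {Tp for S in U for Tp in neighbors[S]}
    return U

# 1D mesh with 10 elements: element i touches elements i-1 and i+1
n = 10
neighbors = {i: {j for j in (i - 1, i, i + 1) if 0 <= j < n} for i in range(n)}
print(sorted(patch(2, 5, neighbors)))   # [3, 4, 5, 6, 7]
```

In d dimensions a patch U_k(T) contains O(k^d) elements, which is what enters the work and error estimates below.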
[Relation between lagging coefficient and time steps] As an example, for the current time step A = A^n, for element T' the coefficient can be one time step old, i.e.Ã_T' = A^n-1 and for T” three time steps old, i.e. Ã_T” = A^n-3. That is, different lagging localized element correctors may be defined in terms of coefficients from different time steps in history. In analogy with previous multiscale spaces, we define a lagging multiscale space = V_H - _k V_H and the problem is then to find ∈, such that for all v_H ∈ V_H,(A ∇, ∇ v_H) = (f, v_H) - (A ∇ g, ∇ v_H) - (A ∇_k f, ∇ v_H) + (A ∇_k g, ∇ v_H)and the full solution for the third approximation is ũ_k =+ _k f - _k g. §.§ Lagging global stiffness matrix contribution The fourth approximation involves not only using a lagging multiscale space, but also a lagging coefficient in the assembly of the global stiffness matrix and right hand side. The rationale behind this is that computing the integrals in the stiffness matrix and right hand side for (<ref>) requires that all precomputed element correctors are stored. To circumvent this, we propose the following approximation. First we define a lagging bilinear form ã (and its elementwise contributor ã_T), based on the same lagging coefficients Ã_T as was used for the multiscale space in the previous section,ã(u, v) := ∑_T ∈ã_T(u, v) := ∑_T ∈ (Ã_T (χ_T∇ - ∇_k,T) u, ∇ v)where χ_T is the indicator function for subset T ⊂Ω. We also define a lagging linear functional L̃ (and its elementwise contributor L̃_T),L̃(v) := ∑_T∈L̃_T(v):= ∑_T ∈ (f, v)_T - (A∇ g, ∇ v)_T - (Ã_T ∇_k,T f, ∇ v) + (Ã_T ∇_k,T g, ∇ v).Then the problem is posed as to find ∈, such that for all v_H ∈ V_H,ã(, v_H) = L̃(v_H).The full solution for the final approximation step is then =+ _k f - _k g.We note that (<ref>) coincides with (<ref>) when Ã_T = A for all T. Also, we note that the coefficients for the linear system can be computed immediately after _k,T and _k,T have been computed. 
This means no correctors need to exist simultaneously.This method is independent of the true coefficient A, if not Ã_T = A for any T. In order to construct a numerical method with control of the error from this approximation, we use error indicators on the element correctors to determine whether they need to be recomputed or not. Next, we define three computable error indicators, e_u, e_f and e_g, for the error introduced by using a lagging coefficient. §.§ Error indicators As can be seen in later Sections <ref> and <ref>, the differences |_k,Tv-_k,Tv|_A for v ∈ V_H, |_k,Tf-_k,Tf|_A, and |_k,Tg-_k,Tg|_A constitute the sources to the error in the approximation from using lagging coefficients. In this section, we define three elementwise error indicators (e_u,T, e_f,T, and e_g,T) and relate them to the above differences in Lemma <ref>. The following bounds hold, |_k,Tv-_k,Tv|_A≤ e_u,T |v|_A,T,for allv ∈ V_H,|_k,Tf-_k,Tf|_A≤ e_f,Tf_L^2(T), |_k,Tg-_k,Tg|_A≤ e_g,T |g|_A,T,wheree_u,T= max_w|_T : w ∈ V_H, |w|_A,T = 1(Ã_T - A) A^-1/2 (χ_T∇ w - ∇_k,Tw)_L^2(U_k(T)), e_f,T= (Ã_T - A) A^-1/2∇_k,Tf_L^2(U_k(T))/f_L^2(T)or 0 if f_L^2(T) = 0,e_g,T= (Ã_T - A) A^-1/2 (χ_T∇ g - ∇_k,Tg)_L^2(U_k(T))/|g|_A,Tor 0 if|g|_A,T = 0.We additionally definee_u= max_T ∈ e_u,T, e_f= max_T ∈ e_f,T, ande_g= max_T ∈ e_g,T.For any v ∈ V_H, let z = _k,Tv - _k,Tv, then using (<ref>) and (<ref>), we get|z|^2_A,U_k(T)= (A ∇ (_k,Tv - _k,Tv), ∇ z)_U_k(T) = ((Ã_T - A) ∇_k,Tv, ∇ z)_U_k(T) - ((Ã_T - A)∇ v, ∇ z)_T ≤(Ã_T - A) A^-1/2(χ_T ∇ v - ∇_k,Tv)_L^2(U_k(T))· |z|_A,U_k(T).Then, clearly e_u,T (if it exists) constitute the asserted bound.The following inequality gives a bound for the norm being maximized in the definition of e_u,T (assuming that |w|_A,T = 1),(Ã_T - A) A^-1/2 (χ_T∇ w - ∇_k,T w)_L^2(U_k(T))≤(Ã_T - A) A^-1_L^∞(T) + ≤(Ã_T - A) A^-1/2Ã_T^-1/2_L^∞(U_k(T))A^-1/2Ã_T^1/2_L^∞(T).The maximum is thus attained and exists by the extreme value theorem.Similarly, for z = _k,T f- _k,T f, we have|z|^2_A,U_k(T)= 
(A ∇ (_k,Tf - _k,T f), ∇ z)_U_k(T) = ((Ã_T - A) ∇_k,T f, ∇ z)_U_k(T)≤(Ã_T - A) A^-1/2∇_k,T f_L^2(U_k(T))· |z|_A,U_k(T),which motivates the definition of e_f,T and the asserted bound. The result for e_g,T holds analogously. Regarding the computation of these error indicators, both e_f,T and e_g,T are straight-forward to compute, being a ratio of two computable norms. The error indicator e_u,T is also easy to compute. It is the square root of a Rayleigh quotient for a generalized eigenvalue problem (where the restriction |w|_A,T = 1 removes the singularity of the denominator matrix):B x_ℓ = μ_ℓ C x_ℓwith the matricesB_ij = ((Ã_T - A)^2 A^-1 (χ_T∇ϕ_j - ∇_k,Tϕ_j), χ_T∇ϕ_i - ∇_k,Tϕ_i)_U_k(T),C_ij = (A ∇ϕ_j, ∇ϕ_i)_T,for all i,j=1,…,m-1 where m is the number of basis functions in T (i.e. one of them removed). The squared maximum e_u,T^2 corresponds to the maximum eigenvalue max_ℓμ_ℓ.We emphasize that the matrices B and C are very small: the same size as the number of degrees of freedom in the coarse element T (minus one for removing the constant), e.g., 2× 2 for 2D simplicial meshes or 7 × 7 for 3D hexahedral meshes.§.§.§ Coarse error indicators In order to compute the error indicators e_u,T, e_f,T, and e_g,T we need access to the true coefficient A and lagging correctors _k,Tϕ_i, _k,Tf, and _k,Tg at the same time. This implies all lagging correctors need to be saved in order to compute the error indicators. 
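The elementwise indicator thus reduces to a small generalized eigenvalue problem B x = μ C x. A sketch of the eigenvalue computation via the equivalent standard problem for C⁻¹B (the matrices below are random stand-ins of the right size and symmetry, only to show the linear-algebra step):

```python
import numpy as np

rng = np.random.default_rng(0)
m = 8                                   # e.g. 8 basis functions on a 3D hexahedron
X = rng.standard_normal((m - 1, m - 1))
B = X @ X.T                             # stand-in for B (symmetric positive semidefinite)
Y = rng.standard_normal((m - 1, m - 1))
C = Y @ Y.T + (m - 1) * np.eye(m - 1)   # stand-in for C (symmetric positive definite)

# generalized eigenvalues of B x = mu C x; real and nonnegative since
# B is positive semidefinite and C is positive definite
mu = np.linalg.eigvals(np.linalg.solve(C, B)).real
e_uT = np.sqrt(mu.max())                # e_{u,T} is the root of the largest eigenvalue
```

Since the matrices are only (m-1) x (m-1), this computation is negligible compared to solving a corrector problem.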
Since the correctors in practice are defined on patches of a fine mesh, and the patch overlap can be substantial, the memory requirements for saving them might be large.In this section, we construct an additional bound that makes it possible to discard the lagging correctors after they have been computed.We construct the following bound starting from the definition of e_u,T in Lemma <ref>,e_u,T^2≤∑_T' ∈T'∩ U_k(T)∅max_w|_T : w ∈ V_H, |w|_A,T = 1(Ã_T - A) A^-1/2 (χ_T∇ w - ∇_k,Tw)_L^2(T')^2 ≤∑_T' ∈T'∩ U_k(T)∅δ_T^2_L^∞(T')A^-1/2Ã_T^1/2^2_L^∞(T)·≤∑_T' ∈T'∩ U_k(T)∅·max_w|_T : w ∈ V_H, |w|_Ã_T,T = 1Ã_T^1/2 (χ_T∇ w - ∇_k,T w)^2_L^2(T') =: E_u,T.where δ_T = (Ã_T - A) A^-1/2Ã_T^-1/2, and we used that|w|_Ã_T,T≤A^-1/2Ã_T^1/2_L^∞(T) |w|_A,Tin the last inequality. We further define E_u = max_T ∈ E_u,T.The maximum in (<ref>) corresponds to a maximum eigenvalue of a low-dimensional generalized eigenvalue problem, as was the case for e_u,T in Section <ref>. More specifically, it is the square root of the maximum eigenvalueμ̃_T,T' := max_ℓμ_ℓof B x_ℓ = μ_ℓ C x_ℓ with the matricesB_ij = (Ã_T (χ_T∇ϕ_j - ∇_k,Tϕ_j), χ_T∇ϕ_i - ∇_k,Tϕ_i)_T',C_ij = (Ã_T ∇ϕ_j, ∇ϕ_i)_T,for i,j=1,…,m-1, where m is the number of basis functions in T.We note that the quantity μ̃_T,T' can be computed directly after corrector _k,T has been computed for the basis functions in element T. Now, _k,T does not need to be saved for computing E_u,T later, and it can be discarded. In particular, the memory required for storing μ̃_T,T' (which, however, is needed to compute E_u,T) scales like Ø(k^dH^-d).Still, Ã_T needs to be available to compute A^-1/2Ã_T^1/2^2_L^∞(T) and δ_T. 
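Given the stored scalars μ̃_T,T', assembling E_u,T requires no corrector functions. A minimal bookkeeping sketch (all names and numbers below are illustrative, not from the paper):

```python
def coarse_indicator_sq(T, patch_elems, mu, delta_sq, ratio_sq):
    """E_{u,T}^2 as the sum over T' in U_k(T) of
    ||delta_T||^2_{L^inf(T')} * ||A^{-1/2} Atilde_T^{1/2}||^2_{L^inf(T)} * mu~_{T,T'},
    using only stored scalars mu[(T, T')]."""
    return sum(delta_sq[Tp] * ratio_sq[T] * mu[(T, Tp)] for Tp in patch_elems)

# toy data: patch of element 5 for k = 1; coefficient changed only in element 6,
# so only that element contributes to the indicator
mu = {(5, 4): 0.8, (5, 5): 1.0, (5, 6): 0.5}   # stored when the corrector for T=5 was built
delta_sq = {4: 0.0, 5: 0.0, 6: 0.04}           # ||delta_T||^2 per element, current step
ratio_sq = {5: 1.1}

E_sq = coarse_indicator_sq(5, [4, 5, 6], mu, delta_sq, ratio_sq)
```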
This might not be a problem in applications where there is a low-dimensional description of the coefficient, for example if the coefficient is defined by a set of geometric shapes which can be described by location, size, shape and so on. In the section for numerical experiments, we will study an example of upscaled two-phase Darcy flow, where we illustrate a way to avoid saving Ã_T. The error indicator E_u can replace e_u in all results and algorithms in this work. Similar coarse error indicators can be derived for e_f and e_g.

§ ERROR ANALYSIS

In this section we study the approximation error of the three approximations ,and , and the inf-sup stability for the systems yielding the solutions , ,and . Finally, in Theorem <ref> in Section <ref>, we present a bound on the error u - of the full approximation. We use C to denote a constant that is independent of the regularity of u, the patch size k and the coarse mesh size H. It can, however, depend on the contrast β/α. The value of the constant is not tracked between steps in inequalities. By the notation a ≲ b, we mean a ≤ Cb.

§.§ Variational multiscale method

Since the variational multiscale formulation (<ref>) is only a reformulation of the original problem, without any approximations, there is no error. However, the well-posedness of the formulation is still of interest.

§.§.§ Stability

Uniqueness of a solution to (<ref>) is guaranteed by an inf-sup condition for a onand V_H, inf_w ∈sup_v ∈ V_H|(A ∇ w, ∇ v)|/|w|_A | v |_A= inf_w ∈ V_Hsup_v ∈ V_H|(A ∇ (w -w), ∇ v)|/|w -w|_A |v|_A = inf_w ∈ V_Hsup_v ∈ V_H|(A ∇ (w -w), ∇ (v -v))|/|w -w|_A |v|_A≥inf_w ∈ V_H|w -w|^2_A/|w -w|_A |w|_A = inf_w ∈ V_H|w -w|^2_A/|w -w|_A |(w -w)|_A≥ C_^-1α^1/2β^-1/2 =: γ. The existence inf-sup condition holds analogously. We let γ denote the inf-sup stability constant and note that it depends on the contrast in general. See <cit.> for corrector localization results independent of the contrast.
§.§ Localized orthogonal decomposition

For the error analysis of LOD we recall previous exponential decay results (first presented in <cit.>) for the localized corrector operators by means of the following lemmas. For example, the proof in <cit.> is almost directly applicable here.

Let k > 0 be a fixed integer and let p_T ∈ be the solution of (A ∇ p_T, ∇) = F_T() for all ∈, where F_T ∈ V^* such that F_T() = 0 for all ∈(Ω∖ T). Furthermore, we let p_k,T∈(U_k(T)) be the solution of (A ∇ p_k,T, ∇) = F_T() for all ∈(U_k(T)). Then there exists a constant 0 < θ < 1 that depends on the contrast but not on H or the variations of A, such that |∑_T p_T - p_k,T|^2_A ≲ k^dθ^2k∑_T|p_T|^2_A.

This lemma can be applied for the localization error of both - _k and - _k. In analogy with the definition of _k as a sum of _k,T, we can define = ∑_T∈_T with _T = _∞,T. Then for any v ∈ V, we can identify _T v with p_T and _k,T v with p_k,T in the lemma above (and similarly for ).

§.§.§ Stability

Using Lemma <ref>, we get the following result for v - _k v, with v ∈ H^1(Ω), | v - _k v|^2_A ≲ k^dθ^2k∑_T|_Tv|^2_A ≲ k^dθ^2k |v|^2_A. If in addition v ∈ V_H, we can use the stability of the interpolation operator and continue to get |v|^2_A ≲ | (v -v)|^2_A ≲ |v -v|^2_A. Using the result above, we can derive an inf-sup constant for a and the pair of spacesand V_H, inf_∈sup_v ∈ V_H|a(, v)|/||_A |v|_A ≥inf_w ∈ V_Hsup_v ∈ V_H|a(w -w, v)| - |a( w - _k w, v)|/(|w -w|_A + | w - _k w|_A) |v|_A≥inf_w ∈ V_Hsup_v ∈ V_H|a(w -w, v)| - Ck^d/2θ^k|w -w|_A|v|_A/(1 + Ck^d/2θ^k)|w -w|_A |v|_A≥γ - Ck^d/2θ^k/1 + Ck^d/2θ^k =: γ_k. For sufficiently large k, there is a uniform bound γ_0 ≤γ_k.
See <cit.> for more details on stability of this approximation.§.§.§ ErrorFor arbitrary u_I ∈, using the equations (<ref>) and (<ref>), we have for all v ∈ V_H,(A∇ ( - u_I), ∇ v)= (A∇ ( - u_I), ∇ v) + =(A∇ ( f - _k f), ∇ v) - (A∇ ( g - _k g), ∇ v).The inf-sup condition for uniqueness above yields the following approximation result, for arbitrary u_I ∈,γ_0 | - u_I|_A ≤ | - u_I|_A + | f - _k f|_A + | g - _k g|_A.In analogy with (<ref>) we get the following result for f - _k f,| f - _k f|^2_A≲ k^dθ^2k∑_T|_T f|^2_A ≲ k^dθ^2kf^2_L^2. Recall that u =-+f -g and u_k =-_k+ _k f - _k g. Now, if we choose u_I =- _k∈, then -u_I = -(-_k) and using the approximation result we get| - |_A ≤ | - u_I|_A + | - u_I|_A ≤ (1+γ_0^-1)| - u_I|_A + γ_0^-1(| f-_k f|_A + | g-_k g|_A)≤ (1+γ_0^-1)|( - _k)|_A + γ_0^-1(| f-_k f|_A + | g-_k g|_A).Then, using =u, interpolation stability (<ref>) and stability of the continuous problem, we have for the full error|u - u_k|_A≤ | - |_A + |( - _k) f|_A + |( - _k) g|_A≤ (1+γ_0^-1)(|( - _k)|_A + | f-_k f|_A - | g-_k g|_A)≲ (1+γ_0^-1)k^d/2θ^k(||_A + f_L^2 + |g|_A)≲ (1+γ_0^-1)k^d/2θ^k(|u|_A + f_L^2 + |g|_A)≲ (1+γ_0^-1)k^d/2θ^k(f_L^2 + |g|_A).This result was first shown in <cit.> and is noteworthy, since the error of the approximation decays exponentially with increasing k, independently of the regularity of the solution u.§.§ Lagging multiscale space For this step, we use a lagging multiscale spaceand need to establish an inf-sup stability constant for a with respect toand V_H. We will use the results from Lemma <ref> both for deriving stability and the approximation error. 
The following full corrector error can be derived using Lemma <ref>,|_kw - _k w|^2_A = |∑_T (_k,Tw - _k,T w)|^2_A ≲ k^d ∑_T |_k,Tw - _k,T w|^2_A,U_k(T)≤ k^d e^2_u |w|^2_A.The bounds |_k f - _k f|^2_A ≲ k^d e^2_f f^2_L^2 and |_k g - _k g|^2_A ≲ k^d e^2_g |g|^2_A hold similarly.We note that if Ã_T = A, then e_u,T = e_f,T = e_g,T = 0.Obviously, updating a lagging coefficient for an element corrector leads to no error for this element corrector.§.§.§ Stability We can now derive an inf-sup constant for a onand V_H, using similar techniques as in (<ref>),inf_∈sup_v ∈ V_H|a(, v)|/||_A |v|_A≥inf_w ∈ V_Hsup_v ∈ V_H|a(w - _k w, v)| - |a(_k w - _k w, v)|/(|w - _k w|_A + |_k w - _k w|_A) |v|_A≥inf_w ∈ V_Hsup_v ∈ V_H|a(w - _k w, v)| - Ck^d/2e_u|w -w|_A|v|_A/(1 + Ck^d/2e_u)|w - _k w|_A |v|_A≥γ_k - Ck^d/2e_u/1 + Ck^d/2e_u =: γ̃_k.We note that k enters the constant, but that it can be compensated by a small e_u. Since e_u,T is computable, a rule to recompute all element correctors T with e_u,T≥(k) for some small enough (k) = 𝒪(k^-d/2), will (after recomputation) make Ã_T = A and e_u,T = 0. This makes e_u < (k). Following this adaptive rule makes it possible to find a lower bound γ̃_0 ≤γ̃_k for sufficiently large k and sufficiently small .§.§.§ ErrorAgain, we get an approximation result from the inf-sup stability. In complete analogy with (<ref>) and (<ref>), we get| - |_A ≤ (1+γ̃_0^-1) (|(_k - _k)|_A + |_k f-_k f|_A + |_k g-_k g|_A)≲ (1+γ̃_0^-1)k^d/2(e_u|u_k|_A + e_ff_L^2 + e_g|g|_A)≲ (1+γ̃_0^-1)γ_0^-1k^d/2max(e_u,e_f,e_g)(f_L^2 + |g|_A). §.§ Lagging global stiffness matrix contribution In the fourth approximation (<ref>), the coefficients for the integration of the global stiffness matrix and (parts of) the right hand side are also lagging. 
§.§.§ StabilityWe derive an inf-sup constant for ã (see (<ref>)) with respect to and V_H,inf_∈sup_v ∈ V_H|ã(, v)|/||_A |v|_A≥inf_∈sup_v ∈ V_H (||_A |v|_A)^-1(|a(, v)| - |∑_T ∈∫_U_k(T) (Ã_T-A) ( χ_T∇ - ∇_k,T) ·∇ v| ) ≥γ̃_k - inf_w ∈ V_Hsup_v ∈ V_H∑_T e_u,TA^1/2∇ w_L^2(T)A^1/2∇ v_L^2(U_k(T))/|w - _k w|_A |v|_A≥γ̃_k - inf_w ∈ V_HCk^d/2 e_u A^1/2∇ w_L^2/|w - _k w|_A = γ̃_k - inf_w ∈ V_HCk^d/2 e_u A^1/2∇ (w-_kw)_L^2/|w - _k w|_A≥γ̃_k - Ck^d/2 e_u =: γ̂_k. Again, k enters, but can be compensated by a small e_u according to the discussion in Section <ref>. Thus, there is a bound γ̂_0 ≤γ̂_k for all sufficiently large k.§.§.§ ErrorTo study the error | - |_A, we first note that | - |_A = | - |_A, since the right hand side and boundary condition corrections are the same in both cases. We form the following difference from (<ref>) and (<ref>),a(, v_H) - ã(, v_H)= ∑_T ((Ã_T -A) ∇_k,T f, ∇ v_H) - ∑_T ((Ã_T - A) ∇_k, T g, ∇ v_H).Add and subtract a(, v_H) and use Lemma <ref> to get|a( - , v_H)|= |∑_T((Ã_T - A)(χ_T ∇ - ∇_k,T), ∇ v_H) + ∑_T ((Ã_T -A) ∇_k,T f, ∇ v_H) - ∑_T ((Ã_T - A) ∇_k, T g, ∇ v_H)|≤∑_T ((Ã_T - A)A^-1/2(χ_T ∇ - ∇_k,T)_L^2(U_k(T)) + (Ã_T -A)A^-1/2∇_k,T f_L^2(U_k(T)) + (Ã_T -A)A^-1/2∇_k,T g_L^2(U_k(T))) |v_H|_A,U_k(T)≲ k^d/2( ∑_T e_u,T^2 ||_A,T^2 + e_f,T^2 f_L^2(T)^2 + e_g,T^2 |g|_A,T^2)^1/2 |v_H|_A ≤ k^d/2max(e_u, e_f, e_g) (||_A + f_L^2 + |g|_A) |v_H|_A .Inf-sup stability for a and ã finally gives, for any v_H ∈ V_H,| - |_A≤γ̃_0^-1|a( - , v_H)|/|v_H|_A≲γ̃_0^-1 k^d/2max(e_u, e_f, e_g) (||_A + f_L^2 + |g|_A) ≲γ̃_0^-1γ̂_0^-1 k^d/2max(e_u, e_f, e_g) (f_L^2 + |g|_A). We conclude this section by presenting the main theoretical result of this paper. It gives a bound of the full error of(in energy norm) in terms of the patch size k and the error indicators e_u, e_f, and e_g defined in Lemma <ref>. This theorem forms the basis for the implementation of a method that updates the multiscale space adaptively while iterating through the sequence of coefficients.
Assume k is sufficiently large, so that γ_k ≥γ_0 holds. Let = c k^-d/2 and (by recomputation of element correctors) max(e_u, e_g, e_f) ≤. Choose c sufficiently small so that γ̃_k ≥γ̃_0, and γ̂_k ≥γ̂_0. Further, let u solve (<ref>) andsolve (<ref>). Let =+ _k f - _k g. Then|u - |_A≲ k^d/2(θ^k + )(f_L^2 + |g|_A),where the hidden constant depends on the contrast but is independent of mesh size H, patch size k and regularity of the solution u. The estimate of the full error |u - |_A is obtained by combining (<ref>), (<ref>), and (<ref>), and using the triangle inequality,|u - |_A≤ |u - |_A + | - |_A + | - |_A≲k^d/2((1+γ_0^-1) θ^k + ((1+γ̃_0^-1)γ_0^-1 + γ̃_0^-1γ̂_0^-1) max(e_u, e_f, e_g)) · (f_L^2 + |g|_A)≲ k^d/2(θ^k + max(e_u, e_f, e_g))(f_L^2 + |g|_A)and finally using the assumed bounds of e_u, e_f and e_g.The coarse mesh size parameter H is typically chosen based on the desired accuracy of the computation. The localization parameter k is chosen to be proportional to |log(H)| guaranteeing a perturbation of the approximation of the order H|log(H)|^d/2. Finally, is chosen proportional to H. The resulting error bound in energy norm then reads ≲ |log(H)|^d/2H(f_L^2 + |g|_A).§ IMPLEMENTATION In this section, we present an algorithm for computing approximate solutions to a sequence of problems as described by (<ref>). In a practical implementation we cannot let V be an infinite dimensional space. We will assume that there is a finite element space V_h based on a mesh that resolves the coefficient, which, if used to solve (<ref>), yields an approximate solution u_h with satisfactorily small error |u-u_h|_A. The analysis in the previous sections also holds if V is replaced with V_h; however, the error estimates will then of course bound |u_h - |_A instead of |u - |_A. At the end of the section we discuss the memory requirements of the algorithm.
The key idea is that, as time n progresses, we do not update the full multiscale space, but only the parts where it is necessary for a sufficiently small error. If A^n only changes slightly between two consecutive n, it is possible that many of the element correctors (_k,Tv_H, _k,Tf and _k,Tg) based on lagging coefficients do not need to be recomputed. We use the error indicators e_u, e_f and e_g to determine for which elements to recompute correctors. This results in an algorithm that is completely parallelizable over T, except for the solution of the low-dimensional (posed in V_H) global system. Even the assembly of the global stiffness matrix K = (K_ij)_ij and right hand side b = (b_i)_i can be done in parallel, as it becomes a reduction over T.The algorithm is presented in Algorithm <ref>. We denote by ϕ_i ∈ V_H, i = 1,2,… the finite element basis functions spanning V_H. Note that the if-statement in this algorithm, together with properly chosen k and , ensures that the conditions for Theorem <ref> are fulfilled. The numerical experiment in Section <ref> investigates the relations between the error, the tolerance, and the fraction of recomputed element correctors. The memory required to perform the main algorithm grows with k in the following manner. Suppose Ø(h^-d) is the number of elements in the fine discretizations, as is the case for quasi-uniform meshes. To compute e_u, e_f and e_g, we need to keep Ã_T, _k,Tϕ_i, _k,Tf, and _k,Tg between the iterations in Algorithm <ref> (see Lemma <ref>). Since the patches U_k(T) overlap by Ø(k^d) coarse elements, the amount of memory required between the iterations scales like Ø(k^dh^-d). In high dimensions for very fine meshes, the amount of memory needed for storage can become a limitation. Depending on the application, it is possible to reduce the memory requirements.
Below we give two examples of such applications.[Defects in composite materials] For simulations on weakly random materials <cit.>, we consider the coefficient of a reference material and a material with random defects shown in Figure <ref>. If each ball in this material has a certain low probability to be missing (a localized point defect), the proposed method can be used to solve the model problem with the defect material on the right (true coefficient) using correctors precomputed on the reference material on the left (lagging coefficient). In sample based methods for stochastic integration (e.g. Monte Carlo), the proposed method for determining what correctors to recompute can reduce the computational cost for the full simulation.The lagging coefficient Ã_T in this example is the single reference coefficient and thus the same for all n and all T. Because of this, no additional memory is required to store lagging coefficients in this case. If we additionally use the (less efficient) coarse error indicators E_u,T, E_f,T, and E_g,T presented in Section <ref>, our memory requirement scales with Ø(k^dH^-d + h^-d) between the iterations in the algorithm.[Two-phase Darcy flow] In a discretization of a two-phase Darcy flow system of equations (pressure and saturation equation) for an injection scenario, the permeability coefficient A = A(s^n) varies over time n indirectly through the dependence on the saturation s^n. Typically, the change in saturation between time steps is localized to the front of the plume of the injected fluid. Thus, most corrector problems can be expected to be reused between iterations. An approach to reducing the memory requirements for the solution of this problem is revisited in detail in Section <ref>. § NUMERICAL EXPERIMENTS In all numerical experiments, we use _1 Lagrange finite elements in 2D or 3D on rectangular or rectangular cuboid elements. The degrees of freedom are the values of the polynomial in the corners of the element. 
We define the interpolation operator to be used throughout the experiments. Let P_1 denote the polynomials of no partial degree greater than 1, i.e. ∂^2 p/∂ x^2 = 0 for all independent variables x if p ∈ P_1. We define the broken finite element space,S_H,b = { v ∈ L^2(Ω):v|_T ∈ P_1for allT ∈}.We denote by Π_H the L^2-projection onto S_H,b and by E_H : S_H,b→ V_H the boundary condition conforming node averaging operator (Oswald interpolation operator), for all nodes in ,(E_H v)(x) =0ifx ∈Γ_D,card(T_x)^-1∑_T ∈ T_x v|_T(x) otherwise,where T_x = { T ∈ :x ∈T} and card is the cardinality. Then we define = E_H ∘Π_H. This operator satisfies (<ref>), see e.g. <cit.>. §.§ Experiments studying the effects of k and We let Ω = [0, 1]^2, Γ_D = {x ∈∂Ω : x_1 = 0orx_1 = 1}, Γ_N = ∂Ω∖Γ_D, f = 0, g = 1-x_1, and A_ b as shown in Figure <ref>. A_ b was constructed by taking a uniform grid with 512 × 512 cells, and assigning each grid cell a value 10^c, where c was drawn from a uniform distribution between [-2, 0], for each cell independently. Then, the values in cells whose midpoint x_ m = (x_ m,1, x_ m,2) satisfied 15/32 ≤ x_ m,1≤ 1/2 were set to 10^-2. Finally, the values in cells whose midpoint x_ m satisfied 1/4 ≤ x_ m,2≤ 5/16 were set to 1.The space V is discretized on a _1 finite element space on a uniform grid of size 512 × 512, see Figure <ref>. The space V_H is chosen as a _1 finite element space on the coarse mesh shown in the same figure.§.§.§ Error decay with k First, we let A = A_ b and solve for u_k for k = 1,2,3, and 4. We solve for u on the fine mesh and use that as reference solution.The exponential convergence in terms of k can be observed in Figure <ref>. §.§.§ Error decay with Now we fix k=3 and define a sequence of coefficients A^n for n=0,…,127,A^n(x) = A_ b(x)·(2+sin(8π(x_1-n/128)))This describes a perturbation of a factor up to 3 over the full domain, sweeping from the left to the right.
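For concreteness, the coefficient A_ b and the sweep A^n can be reproduced in a few lines of numpy. This is our own sketch of the construction described above; the row/column-to-coordinate convention and the random number generator are assumptions, so a given realization will differ from the one in the figures.

```python
import numpy as np

def build_A_b(n=512, seed=0):
    """Random log-uniform coefficient 10**c, c ~ U[-2, 0], with a
    low-conductivity vertical strip (15/32 <= x_1 <= 1/2 set to 1e-2)
    and a horizontal channel (1/4 <= x_2 <= 5/16 set to 1)."""
    rng = np.random.default_rng(seed)
    A = 10.0 ** rng.uniform(-2.0, 0.0, size=(n, n))
    mid = (np.arange(n) + 0.5) / n       # cell midpoints
    x1 = mid[None, :]                    # columns: x_1 coordinate
    x2 = mid[:, None]                    # rows: x_2 coordinate
    A = np.where((15 / 32 <= x1) & (x1 <= 1 / 2), 1e-2, A)
    A = np.where((1 / 4 <= x2) & (x2 <= 5 / 16), 1.0, A)
    return A

def A_n(A_b, n, N=128):
    """n-th coefficient A^n(x) = A_b(x) * (2 + sin(8*pi*(x_1 - n/N))).
    The modulating factor lies in [1, 3] and sweeps left to right."""
    m = A_b.shape[1]
    x1 = (np.arange(m) + 0.5) / m
    return A_b * (2.0 + np.sin(8 * np.pi * (x1 - n / N)))[None, :]
```

Note that the two strips are applied in the stated order, so the horizontal channel overwrites the vertical strip where they intersect.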
We emphasize that the difference A^n+1-A^n is nonzero everywhere, which means a strategy to determine which correctors to recompute is necessary.We use Algorithm <ref> to compute the approximate solution û_k for every time step n. A reference solution u is also computed. We do this for four values of = 0.5, 0.1, 0.05, and 0.01.The (relative) error in energy norm versus the time step n is plotted to the left in Figure <ref>. The right plot in the same figure shows the fraction of all element correctors _k,T that were recomputed in each time step. We note that the error decreases with decreasing as expected and that the fraction of recomputed element correctors increases with decreasing . Without an adaptive strategy, all element correctors would have to be recomputed in every time step. See Figure <ref> for two maps over the recomputed element correctors in time step n=31 for two different values of . §.§ Patch size and tolerance parametersIn this experiment, we compare how the patch size k and tolerance affect the error. We use a fine-scale coefficient for which we know the true solution.Again, we let Ω = [0, 1]^2, Γ_D = {x ∈∂Ω : x_1 = 0orx_1 = 1}, Γ_N = ∂Ω∖Γ_D, f = 0, g = 1-x_1. We use two coefficients, A^1 and A^2:A^1(x) = ϵ^-1 if 1/2≤ x_2 ≤1/2+ϵ, 1otherwise, A^2(x) = ϵ if 1/2≤ x_1 ≤1/2+ϵ, 1otherwise.We let Γ = {x ∈∂Ω : x_1 = 0 } denote the left boundary. We are interested in the boundary normal flux on Γ, i.e. q^n = ∫_Γn⃗· A^n ∇u̅^n = ∫_Ω A^n ∇u̅^n ·∇ g for the two cases. The true values of these fluxes are q^1 = 2-ϵ for coefficient A^1, and q^2 = 1/2-ϵ for coefficient A^2.We pick ϵ = 1/1024 and use a 64 × 64 rectangular grid for the coarse discretization V_H and a 4096 × 4096 rectangular grid for the discretization of V. For n=1, all element correctors are computed using the true coefficient A^1 and the true solution is u̅^1 = 1-x_1, which is what we would obtain using the standard finite element spaces. For n=2, however, we reuse the element correctors from n=1.
Instead of e_u,T and e_g,T, we use the error indicators E_u,T and E_g,T, for which the fine scale correctors _k,Tϕ_x need not be saved between iteration n=1 and n=2.The main algorithm (except for replacing e_u and e_g by E_u and E_g) was used to compute the approximation q̂^2 = ∫_Ω A^2 ∇ ( + g)·∇ g.We performed 12 runs with all combinations of k=2, 3, 4 and =1, 0.5, 0.1, 0.05.The absolute differences |q^2 - q̂^2| were computed for all combinations and are reported in Table <ref>. The difference A^1-A^2 forms a cross of non-zero values. Many patches do not overlap this cross at all. We report (in parentheses) the number of non-recomputed element correctors with a patch overlapping a non-zero difference in A^1-A^2. §.§ Low-memory Darcy flow upscaling algorithm In order to continue with two additional numerical experiments (in Section <ref>), we describe an algorithm for pressure solution upscaling for Darcy flows that reduces the space complexity to Ø(k^dH^-d + h^-d) (from Ø(k^dh^-d)). This is done by solving the saturation equation on the coarse mesh and the pressure equation with a saturation-dependent diffusion coefficient on a fine mesh using the proposed adaptive multiscale method. This is possible in a situation where the diffusion coefficient cannot be averaged on the coarse mesh, but the saturation solution can. The low space complexity and the possibility to parallelize the corrector computations enable the solution of large-scale problems of this kind.§.§.§ A Two-phase Darcy flow model problemWe consider the immiscible non-capillary two-phase Darcy flow problem using the fractional flow formulation <cit.>.
This leads to a system of a coupled pressure and saturation equation-∇·(λ(s) K ∇ u) = f,∂ s/∂ t - ∇·(λ_w(s) K∇ u) = f,where the space-time functions u, s, and f are the pressure, the wetting-phase saturation, and the sources/sinks, respectively; the spatial function K is the intrinsic permeability; and the nonlinear scalar functions λ and λ_w are the total mobility and the wetting-phase mobility, respectively. A common technique used for solving this system is sequential splitting, where the pressure equation and saturation equation are solved separately within a time step n. This means that, as we iterate in time, we need to solve a sequence of pressure equations with coefficient A^n(x) = λ(s(x, t_n-1)) K(x).Since the wetting saturation s changes significantly only along the plume front between time steps, we are in the setting where consecutive differences in the coefficient are localized.The permeability K varies on a fine scale, requiring these variations to be resolved by a fine mesh with mesh size h in order to obtain an accurate pressure solution. We consider the case when the saturation equation need only be solved on a coarser mesh with mesh size H > h to obtain a sufficiently accurate saturation solution.We let the fine meshbe a refinement of the coarse mesh . The pressure and saturation equations are solved sequentially: Given initial data for the saturation, the pressure equation is solved. An approximation of the coarse element face flux is computed and used to solve for the next saturation using a zeroth order upwind discontinuous Galerkin method with explicit Euler forward time-stepping.We use the same discretization scheme as in <cit.>.Let P_0() be the space of piecewise constants on the elements of . Let _H denote the set of faces of . Each face F ∈_H has a normal direction n_F (outward pointing for boundary faces).
We define the jump operator over face F as v = (v|_T_1)|_Fn_1· n_F + (v|_T_2)|_Fn_2· n_F, where n_1 and n_2 are the outward pointing face normals of the two elements T_1 and T_2 adjacent to F. Let ⟨·, ·⟩_ω denote the L^2-scalar product when ω is (d-1)-dimensional. We let the flow be completely driven by boundary conditions, i.e. f = 0.We use the following discretization for the saturation equation. Given s_H^n-1∈ P_0(), σ^n∈ L^1(_H), find s_H^n∈ P_0() such that for all r_H ∈ P_0(),Δ t^-1 (s_H^n - s_H^n-1, r_H) = - ⟨ψ(s_H,upw^n-1) σ^n, r_H⟩__H,I - ⟨ψ(s_H^n-1) σ^n, r_H⟩__H,out - ⟨ψ(s_B) σ^n, r_H⟩__H,in.Here σ^n is an upscaled total flux quantity approximating (over face F) σ^n|_F ≈ -n_F ·λ(s) K ∇ (u + g); the sets _H,I, _H,out, and _H,in contain interior faces, Dirichlet boundary faces with outgoing and ingoing flux, respectively; s_B is the saturation boundary condition; and s_H,upw^n is the upwind saturations_H,upw^n|_F =(s_H^n|_T_1)|_F, if σ^n ≥ 0, (s_H^n|_T_2)|_F, if σ^n < 0,where T_1 and T_2 are adjacent to F and n_F points from T_1 to T_2; and the function ψ(s) = λ_w(s)/λ(s) is the so-called fractional flow function. The discretization of the pressure equation is: find u_h^n∈ V_h, so that for all v_h ∈ V_h,(A^n ∇ u^n_h, ∇ v_h)= -(A^n ∇ g, ∇ v_h),where A^n = λ(s_H^n-1) K. As suggested in <cit.>, we define the non-conservative pre-flux σ̅^n to be a harmonic average of the (discontinuous) element face flux - n_F · A^n ∇ (u_h + g) at the faces. Then we use the post-processing technique presented in the same paper (with non-weighted minimization) to post-process σ̅^n and obtain the conservative flux σ^n used in the saturation equation.We make two observations on the information exchange between the two equations when using this discretization: * In the pressure equation, we are only interested in the coarse scale saturation s_H^n-1 from the saturation equation.* In the saturation equation, we are only interested in the upscaled flux ∫_F σ^n from the pressure equation.
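As an illustration of the saturation update, here is a minimal one-dimensional sketch of the explicit Euler / zeroth-order upwind step, assuming the face fluxes σ^n are given. The cubic mobilities are the ones chosen for the numerical experiments later in the paper, and the boundary handling below is a simplified stand-in for the _H,in/_H,out split.

```python
import numpy as np

def psi(s):
    """Fractional flow psi = lambda_w / lambda with the cubic mobilities
    lambda_w(s) = s^3, lambda_n(s) = (1 - s)^3."""
    lam_w = s ** 3
    lam_n = (1.0 - s) ** 3
    return lam_w / (lam_w + lam_n)

def saturation_step(s, sigma, dt, h, s_in=1.0):
    """One explicit Euler / upwind DG(0) step on a 1D mesh of cell size h.

    s     : cell-average saturations, shape (m,)
    sigma : total flux at the m+1 faces (positive = rightward), shape (m+1,)
    s_in  : boundary saturation, used on faces with ingoing flux
    """
    m = len(s)
    F = np.empty(m + 1)
    for j in range(1, m):                       # interior faces: upwind value
        s_upw = s[j - 1] if sigma[j] >= 0 else s[j]
        F[j] = psi(s_upw) * sigma[j]
    # boundary faces: inflow uses boundary data, outflow the interior cell
    F[0] = psi(s_in if sigma[0] >= 0 else s[0]) * sigma[0]
    F[m] = psi(s[m - 1] if sigma[m] >= 0 else s_in) * sigma[m]
    return s - (dt / h) * (F[1:] - F[:-1])
```

With a rightward unit flux and zero initial saturation, only the inflow cell is filled in the first step, which matches the upwind stencil above.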
§.§.§ Coarse error indicators The first observation allows us to compute E_u, E_f and E_g from Section <ref>, without saving Ã_T. Suppose that for element T, the lagging coefficient Ã_T = A^m is from time step m < n, and A = A^n. We note that δ_T is a coarse quantity, since fine-scale K cancels,δ_T = (Ã_T - A) A^-1/2Ã_T^-1/2 = λ(s_H^m-1)-λ(s_H^n-1)/√(λ(s_H^m-1)λ(s_H^n-1)).Thus, to compute δ_T in (<ref>), only λ̃_T,T' := λ(s^m-1_H)|_T' for T' ⊂ U_k(T) need to be saved from previous time steps. The memory required to store λ̃_T,T' behaves like Ø(k^dH^-d). Also, λ̃_T,T' can be used to compute A^-1/2Ã_T^1/2 in (<ref>).To summarize, no lagging fine scale information needs to be stored. Only the coarse quantities μ̃_T,T' and λ̃_T,T' need to be saved between iterations.§.§.§ Coarse face flux The second observation is that we only need the coarse element face flux ∫_F σ^n for the saturation equation. Since this quantity is defined on the coarse mesh, we precomputeσ̃^n_u,T,T',F,i= -∫_F n_F·Ã_T (χ_T ∇ϕ_i - ∇_k,Tϕ_i)|_T',σ̃^n_fg,T,T',F= -∫_F n_F·Ã_T∇(-_k,Tg + _k,Tf)|_T',for all T,T' ∈, for all faces F ⊂T' and all basis functions ϕ_i with support in T. 
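Since fine-scale K cancels, δ_T can be evaluated from coarse saturation values alone. A small sketch, assuming the cubic mobilities λ_w(s)=s^3, λ_n(s)=(1-s)^3 used in the experiments:

```python
import numpy as np

def total_mobility(s):
    """lambda(s) = lambda_w(s) + lambda_n(s) with cubic mobilities."""
    s = np.asarray(s, dtype=float)
    return s ** 3 + (1.0 - s) ** 3

def delta_T(s_old, s_new):
    """Entrywise coarse indicator factor
    (lambda(s_old) - lambda(s_new)) / sqrt(lambda(s_old) * lambda(s_new)),
    evaluated on coarse saturation values over a patch."""
    lam_old = total_mobility(s_old)
    lam_new = total_mobility(s_new)
    return (lam_old - lam_new) / np.sqrt(lam_old * lam_new)
```

In particular, δ_T vanishes wherever the coarse saturation is unchanged between the two time steps, so no fine-scale data is needed to decide that a corrector can be kept.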
Here, we used the harmonic average v|_F = 2(v|_T_1)|_F (v|_T_2)|_F/(v|_T_1)|_F + (v|_T_2)|_F, where T_1 and T_2 are the two elements adjacent to F, if F is an interior face, and v|_F = 2(v|_T)|_F, where T is adjacent to F, if F is a boundary face.The memory required for storing σ̃^n_u,T,T',F,i and σ̃^n_fg,T,T',F scales with Ø(k^d H^-d), since the two quantities are zero for all pairs (T', T) except when T' ⊂ U_k(T).If we now let the coarse component of the multiscale solution be expressed as = ∑_iα_i ϕ_i, then we can compute the upscaled non-conservative face flux byσ̅^n|_F = 1/2∑_T,T',iα_i σ̃^n_u,T,T',F,i + 1/2∑_T,T'σ̃^n_fg,T,T',F.The final conservative face flux σ^n|_F is then computed using the post-processing technique developed in <cit.>.We conclude this section by listing the upscaling algorithm (Algorithm <ref>) for this two-phase Darcy flow problem. In this algorithm, the memory requirements are Ø(k^dH^-d+h^-d) (where h^-d is for the coefficient A, which can be distributed on different computational nodes). This allows for very refined fine meshes. Also, the coarse element loop is still completely parallel, and this algorithm serves as a good candidate for a scalable, memory-efficient upscaling algorithm. §.§ Darcy flow upscaling numerical experiments In the following two experiments, we investigate the properties of the upscaling algorithm presented in the previous section. We pick the following mobility functions λ_w(s) = s^3, λ_n(s) = (1-s)^3, and λ(s) = λ_w(s) +λ_n(s).§.§.§ 2D random field dataWe let Ω = [0, 1]^2, Γ_D = {x ∈∂Ω : x_1 = 0orx_1 = 1}, Γ_N = ∂Ω∖Γ_D, f = 0, g = 1-x_1. We use a 512 × 512 rectangular grid as fine mesh for V_h and a 64 × 64 grid as coarse mesh for V_H. The permeability K is realized as a piecewise constant function on the fine mesh from a lognormal distribution with exponential spatial correlation and standard deviation 3, i.e.
for a fine element midpoint x_ m,K(x_ m) = exp(3 κ(x_ m))where κ(x_ m) ∼𝒩(0, 1) and the covariance between points iscov(κ(x),κ(y)) = exp( -x-y_2/d),with correlation length d=0.05 and where ·_2 denotes the Euclidean norm. The initial saturation is set to s^0 = 0, and boundary conditions are set to s_B = 1 on the left boundary, which is the only boundary with ingoing flux. The number of time steps and their size are set to N = 2000 and Δ t = N^-1, respectively.The upscaling algorithm was run with this setup with k=1,2,3 and =0.4, 0.2, 0.1, 0.05, 0.025, and 0.0125. A reference solution s^n_H, ref was computed, for which the pressure equation was solved on the fine mesh using the standard Q_1 finite element method in every iteration. To illustrate the need for upscaling, we also solved the pressure equation on the coarse mesh using the standard _1 finite elements. See Figure <ref> for plots of the error in the saturation solution at the final time step and the average fraction of recomputed correctors. In the error plot we can see that both parameters k and affect the error in the chosen regimes. We note from the recomputation plot that there is no dependency between the fraction of recomputed element correctors and the patch size k. Figure <ref> shows an example of the saturation solution and the number of times correctors have been computed.§.§.§ 3D random field dataWe let Ω = [0, 1]^3, Γ_D = {x ∈∂Ω: x_1 = 0orx_1 = 1}, Γ_N = ∂Ω∖Γ_D, f = 0, g = 1-x_1. We use a 128 × 128 × 128 rectangular grid as fine mesh for V_h and a 16 × 16 × 16 grid as coarse mesh for V_H. We use a sample ω_i,j,k,ℓ of independent uniformly distributed random numbers between 0 and 1. The permeability K is a piecewise constant function on the fine uniform mesh and is defined byK(x_ m) = 1/2^21∏_i=1^7 (1+ω_i,⌈ 2^i x_ m,1⌉, ⌈ 2^i x_ m,2⌉, ⌈ 2^i x_ m,3⌉)^3,where x_ m = (x_ m,1, x_ m,2, x_ m,3) are the fine element midpoints, and ⌈·⌉ denotes the ceiling function. See Figure <ref> for the particular realization used.
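For the 2D experiment, a Gaussian field κ with the exponential covariance above can be sampled by dense Cholesky factorization on a small grid. This is a sketch only; the paper's 512 × 512 realization would require a scalable sampler (e.g. circulant embedding), and the tiny diagonal jitter is our own numerical safeguard.

```python
import numpy as np

def lognormal_permeability(n=32, corr_len=0.05, std=3.0, seed=0):
    """Sample K = exp(std * kappa) on an n x n grid of cell midpoints,
    where kappa is a zero-mean Gaussian field with covariance
    exp(-|x - y|_2 / corr_len). Dense O(n^6) Cholesky: small n only."""
    rng = np.random.default_rng(seed)
    mid = (np.arange(n) + 0.5) / n
    X, Y = np.meshgrid(mid, mid)
    pts = np.column_stack([X.ravel(), Y.ravel()])          # (n^2, 2) midpoints
    dists = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    C = np.exp(-dists / corr_len)                           # covariance matrix
    L = np.linalg.cholesky(C + 1e-10 * np.eye(n * n))       # jitter for SPD
    kappa = L @ rng.standard_normal(n * n)
    return np.exp(std * kappa).reshape(n, n)
```

The resulting field is strictly positive and, with std = 3, spans several orders of magnitude, which is the high-contrast regime targeted by the method.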
The boundary conditions are set to s_B = 0 on the boundary with ingoing flux, and the initial saturation is set to a piecewise constant function s^0_H on the coarse mesh with the following values in the coarse element midpoints x_ m,s^0_H(x_ m) =1,if x_-1/2(1,1,1)_2 ≤1/4, 0,otherwise.The number of time steps is set to N=200 and the time step to Δ t = 1.The upscaling algorithm was run for the three parameter combinations I: k=1, =0.1, II: k=2, =0.1, and III: k=1, =0.01.Figure <ref> gives an illustration of the solution at n=0 and n=200. One of the images shows the recomputed elements as blue boxes and we can see that many elements are not recomputed. In this case, we were not able to compute a reference solution using the available computational resources, but we can estimate the sensitivity of the solutions with respect to the parameters k and . Let s^200_H,I, s^200_H,II, and s^200_H,III denote the saturation solutions at time step n=200 for the parameter combinations I, II, and III, respectively. We getvary k: s^200_H,I - s^200_H,II_L^2(Ω)= 0.0065,s^200_H,I - s^200_H,II_L^∞(Ω)= 0.0664,vary : s^200_H,I - s^200_H,III_L^2(Ω)= 0.0019,s^200_H,I - s^200_H,III_L^∞(Ω)= 0.0242.These numbers suggest that the error due to localization (controlled by the parameter k) dominates in this case.§ CONCLUSIONElliptic equations with similar rapidly varying coefficients occur for instance in time-dependent problems for two-phase Darcy flow and in stochastic simulations on defect composite materials. We consider a sequence of elliptic equations, each with different coefficients A^n, n = 1,2,…. We define a method that computes and updates an LOD multiscale space as we iterate through the coefficients. This is done by the computation of localized element correctors that depend on the coefficient in the vicinity of the element. These computations can be performed completely in parallel.
We derive error indicators e_u,T, e_f,T, and e_g,T that indicate whether or not to update the corrector at element T while iterating through the sequence of coefficients. By selecting a small enough tolerance for the error indicators, the multiscale space will keep its approximation properties through the sequence of coefficients. It is shown analytically and numerically that the error indicators bound the error in energy norm of the solution. We present a memory-efficient upscaling algorithm for a particular application to two-phase Darcy flows.
http://arxiv.org/abs/1703.08857v2
arXiv:1703.08857v2 [math.NA] (MSC 35J15, 65N12, 65N15, 65N30) — Fredrik Hellman and Axel Målqvist, "Numerical homogenization of elliptic PDEs with similar coefficients", March 26, 2017.
Solving SDPs for synchronization and MaxCut problems via the Grothendieck inequality Song Mei[Institute for Computational and Mathematical Engineering, Stanford University] Theodor Misiakiewicz[Departement de Physique, Ecole Normale Supérieure] Andrea Montanari[Department of Electrical Engineering and Department of Statistics, Stanford University] Roberto I. Oliveira[ Instituto Nacional de Matemática Pura e Aplicada (IMPA)]December 30, 2023 =========================================================================================================================================================================================================================================================================================================================================================== A number of statistical estimation problems can be addressed by semidefinite programs (SDPs). While SDPs are solvable in polynomial time using interior point methods, in practice generic SDP solvers do not scale well to high-dimensional problems. In order to cope with this problem, Burer and Monteiro proposed a non-convex rank-constrained formulation, which has good performance in practice but is still poorly understood theoretically.In this paper we study the rank-constrained version of SDPs arising in MaxCut and in synchronization problems. We establish a Grothendieck-type inequality that proves that all the local maxima and dangerous saddle points are within a small multiplicative gap from the global maximum. We use this structural information to prove that SDPs can be solved within a known accuracy, by applying the Riemannian trust-region method to this non-convex problem, while constraining the rank to be of order one.For the MaxCut problem, our inequality implies that any local maximizer of the rank-constrained SDP provides a (1 - 1/(k-1)) × 0.878 approximation of the MaxCut, when the rank is fixed to k.
We then apply our results to data matrices generated according to the Gaussian _2 synchronization problem, and the two-groups stochastic block model with large bounded degree. We prove that the error achieved by local maximizers undergoes a phase transition at the same threshold as for information-theoretically optimal methods.§ INTRODUCTION A successful approach to statistical estimation and statistical learning suggests estimating the object of interest by solving an optimization problem, for instance motivated by maximum likelihood, or empirical risk minimization.In modern applications, the unknown object is often combinatorial, e.g. a sparse vector in high-dimensional regression or a partition in clustering. In these cases, the resulting optimization problem is computationally intractable and convex relaxations have been a method of choice for obtaining tractable and yet statistically efficient estimators.In this paper we consider the following specific semidefinite program MC-SDP maximize⟨ A, X ⟩subject to X_ii = 1,i ∈ [n],X ≽ 0, as well as some of its generalizations. This SDP famously arises as a convex relaxation of the MaxCut problem[In the MaxCut problem, we are given a graph G=(V,E) and want to partition the vertices into two sets so as to maximize the number of edges across the partition.], whereby the matrix A is the opposite of the adjacency matrix of the graph to be cut. In a seminal paper, Goemans and Williamson <cit.> proved that this SDP provides a 0.878 approximation of the combinatorial problem.Under the unique games conjecture, this approximation factor is optimal for polynomial time algorithms <cit.>. More recently, SDPs of this form (see below for generalizations) have been studied in the context of group synchronization and community detection problems.
An incomplete list of references includes <cit.>.In community detection, we try to partition the vertices of a graph into tightly connected communities under a statistical model for the edges.Synchronization aims at estimating n elements g_1,…, g_n in a group , from the pairwise noisy measurement of the group differences g_i^-1g_j.Examples include synchronization in which = =({+1,-1}, · ) (the group with elements {+1,-1} and usual multiplication), angular synchronization in which = U(1) (the multiplicative group of complex numbers of modulo one), and (d) synchronization in which we need to estimate n rotations R_1, …, R_n ∈(d) from the special orthogonal group. In this paper, we will focus on synchronization and (d) synchronization. Although SDPs can be solved to arbitrary precision in polynomial time <cit.>, generic solvers do not scale well to large instances. In order to address the scalability problem, <cit.> proposed to reduce the problem dimensions by imposing the rank constraint (X)≤ k. This constraint can be enforced by setting X = σσ^ where σ∈^n × k. In the case of (<ref>), we obtain the following non-convex problem, with decision variable σ: k-Ncvx-MC-SDP maximize⟨σ, A σ⟩subject toσ = [σ_1, …, σ_n]^∈ℝ^n × k,‖σ_i‖_2 = 1,i ∈ [n]. Provided that k≥√(2n), the solution of (<ref>) corresponds to the global maximum of (<ref>) <cit.>.Recently, <cit.> proved that, as long as k ≥√(2n), for almost all matrices A, the problem (<ref>) has a unique local maximum which is also the global maximum.
This paper proposed to use the Riemannian trust-region method to solve the non-convex SDP problem, and provided computational complexity guarantees on the resulting algorithm.While the theory of <cit.> suggests the choice k = (√(n)), it has been observed empirically that setting k=(1) yields excellent solutions and scales well to large-scale applications <cit.>.In order to explain this phenomenon, <cit.> considered the synchronization problem with k=2, and established theoretical guarantees for the local maxima, provided the noise level is small enough. A different point of view was taken in a recent unpublished technical note <cit.>, which proposed a Grothendieck-type inequality for the local maxima of (<ref>). In this paper we continue and develop the preliminary work in <cit.>, to obtain explicit computational guarantees for the non-convex approach with rank constraint k=(1).As mentioned above, we extend our analysis beyond the MaxCut-type problem (<ref>) to treat an optimization problem motivated by (d) synchronization. (d) synchronization (with d=3) has applications to computer vision <cit.> and cryo-electron microscopy (cryo-EM) <cit.>.A natural SDP relaxation of the maximum likelihood estimator is given by the problem OC-SDP maximize⟨ A, X ⟩subject to X_ii = 𝕀_d,i ∈ [m],X ≽ 0, with decision variable X. Here A,X∈^n× n, n=md are matrices with d× d blocks denoted by (A_ij)_1≤ i,j≤ m, (X_ij)_1≤ i,j≤ m.This semidefinite program is also known as Orthogonal-Cut SDP. In the context of (d) synchronization, A_ij∈^d × d is a noisy measurement of the pairwise group differences R_i^-1 R_j where R_i ∈ (d). By imposing the rank constraint (X)≤ k, we obtain a non-convex analogue of (<ref>), namely: k-Ncvx-OC-SDP maximize⟨σ, Aσ⟩subject toσ = [σ_1, …, σ_m]^∈^n× k,σ_i^σ_i = 𝕀_d,i ∈ [m].
Here the decision variables are matrices σ_i∈^k× d.According to the result in <cit.>, as long as k ≥ (d+1) √(m), the global maximum of the problem (<ref>) coincides with the maximum of the problem (<ref>). As proved in <cit.>, with the same value of k for almost all matrices A, the non-convex problem has no local maximum other than the global maximum. <cit.> proposed to choose the rank k adaptively: if k is not large enough, increase k to find a better solution.However, none of these works considers k=(1), which is the focus of the present paper (under the assumption that d is of order one as well). §.§ Our contributions A main result of our paper is a Grothendieck-type inequality that generalizes and strengthens the preliminary technical result of <cit.>. Namely, we prove that for any -approximate concave point σ of the rank-k non-convex SDP (<ref>), we have(A) ≥( σ) ≥(A) - 1/k-1 ((A) + (-A)) - n/2,where (A) denotes the maximum value of the problem (<ref>) and (σ) is the objective function in (<ref>).An -approximate concave point is a point at which the eigenvalues of the Hessian of ( · ) are upper bounded by (see below for formal definitions).Surprisingly, this result connects a second-order local property, namely the highest local curvature of the cost function, to its global position.In particular, all the local maxima (corresponding to =0) are within a 1/k-gap of the SDP value.Namely, for any local maximizer σ^*, we have( σ^*) ≥(A) - 1/k-1 ((A) + (-A)) . All the points outside this gap, with an n/2-margin, have a direction of positive curvature of at least size .Figure <ref> illustrates the landscape of the rank-k non-convex MaxCut SDP problem (<ref>). We show that this structure implies global convergence rates for approximately solving (<ref>).
We study the Riemannian trust-region method in Theorem <ref>. In particular, we show that this algorithm, with any initialization, returns a 0.878 × (1 - O(1/k)) approximation of the MaxCut of a random d-regular graph in 𝒪(n k^2) iterations, cf. Theorem <ref>. In the case of ℤ_2 synchronization, we show that for any signal-to-noise ratio λ > 1, all the local maxima of the rank-k non-convex SDP correlate non-trivially with the ground truth when k ≥ k^*(λ) = O(1) (Theorem <ref>). Furthermore, Theorem <ref> provides a lower bound on the correlation between local maxima and the ground truth that converges to one when λ goes to infinity. These results improve over the earlier ones of <cit.>, by establishing the tight phase transition location and the correct qualitative behavior. We extend these results to the two-groups symmetric Stochastic Block Model. For SO(d) synchronization, we consider the problem (<ref>) and generalize our main Grothendieck-type inequality to this case, cf. Theorem <ref>. Namely, for any ε-approximate concave point σ of the rank-k non-convex Orthogonal-Cut SDP (<ref>), we have

f(σ) ≥ SDP_o(A) - 1/(k_d-1) (SDP_o(A) + SDP_o(-A)) - εn/2,

where k_d = 2k/(d+1), SDP_o(A) denotes the maximum value of the problem (<ref>), and f(σ) is the objective function in (<ref>). We expect that the statistical analysis of local maxima, as well as the analysis of optimization algorithms, should extend to this case as well, but we leave this to future work. §.§ Notations Given a matrix A = (A_ij) ∈ ℝ^m× n, we write ‖A‖_1 = max_1≤ j≤ n ∑_i=1^m |A_ij| for its operator ℓ_1-norm, ‖A‖_2 for its operator ℓ_2-norm (largest singular value), and ‖A‖_F = (∑_i=1^m ∑_j=1^n A_ij^2)^1/2 for its Frobenius norm. For two matrices A, B ∈ ℝ^m× n, we write ⟨A, B⟩ = tr(A^⊤ B) for the inner product associated to the Frobenius norm, ⟨A, A⟩ = ‖A‖_F^2.
In particular, for two vectors u, v ∈ ℝ^n, ⟨u, v⟩ corresponds to the inner product of the vectors u and v associated to the Euclidean norm on ℝ^n. We denote by ddiag(B) the matrix obtained from B by setting to zero all the entries outside the diagonal. Given a real symmetric matrix A ∈ ℝ^n × n, we write SDP(A) for the value of the SDP problem (<ref>). That is, SDP(A) = max{⟨A, X⟩ : X ≽ 0, X_ii = 1, i ∈ [n]}. Optimization is performed over the convex set of positive-semidefinite matrices with diagonal entries equal to one, also known as the elliptope. We write Rg(A) = SDP(A) + SDP(-A) for the length of the range of the SDP with data A (noticing that for every matrix X in the elliptope, we have SDP(A) ≥ ⟨A, X⟩ ≥ -SDP(-A)). For the rank-k non-convex SDP problem (<ref>), we define the manifold ℳ_k as

ℳ_k = {σ ∈ ℝ^n× k : σ = (σ_1, σ_2, …, σ_n)^⊤, ‖σ_i‖_2 = 1} ≅ 𝕊^k-1 × 𝕊^k-1 × … × 𝕊^k-1 (n times),

where 𝕊^k-1 ≡ {x ∈ ℝ^k : ‖x‖_2 = 1} is the unit sphere in ℝ^k. Given a real symmetric matrix A ∈ ℝ^n × n, for σ ∈ ℳ_k we write f(σ) = ⟨σ, Aσ⟩ for the objective function of the rank-k non-convex SDP (<ref>). Our optimization algorithm makes use of the Riemannian gradient and the Hessian of the function f. We anticipate their formulas here, deferring to Section <ref> for further details. Defining Λ = ddiag(Aσσ^⊤), the gradient is given by:

grad f(σ) = 2(A - Λ)σ.

The Hessian is uniquely defined by the following identity, holding for all u, v in the tangent space T_σℳ_k:

⟨v, Hess f(σ)[u]⟩ = 2⟨v, (A - Λ)u⟩.

§ MAIN RESULTS First we define the notion of an approximate concave point of a function f on a manifold ℳ. Let f be a twice differentiable function on a Riemannian manifold ℳ. We say σ ∈ ℳ is an ε-approximate concave point of f on ℳ if σ satisfies

⟨u, Hess f(σ)[u]⟩ ≤ ε ⟨u, u⟩, ∀ u ∈ T_σℳ,

where Hess f(σ) denotes the Riemannian (intrinsic) Hessian of f at the point σ, T_σℳ is the tangent space, and ⟨·, ·⟩ is the scalar product on T_σℳ. Note that an approximate concave point may not be a stationary point, or may not even be an approximate stationary point.
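The gradient and Hessian formulas above are easy to check numerically. The following NumPy sketch (function names, seeds, and the test matrix are ours, not from the paper) verifies that grad f(σ) lies in the tangent space, that tr(Λ) = f(σ), and that the Hessian quadratic form matches a second difference of f along the row-normalization retraction, which agrees with geodesics to second order on each sphere:

```python
import numpy as np

def f(A, sigma):
    # Objective f(sigma) = <sigma, A sigma> = tr(sigma^T A sigma).
    return float(np.trace(sigma.T @ A @ sigma))

def riemannian_grad(A, sigma):
    # grad f(sigma) = 2 (A - Lambda) sigma, with Lambda = ddiag(A sigma sigma^T).
    Lam = np.diag(np.diag(A @ sigma @ sigma.T))
    return 2.0 * (A - Lam) @ sigma

def hess_quad_form(A, sigma, u):
    # <u, Hess f(sigma)[u]> = 2 <u, (A - Lambda) u> for u in the tangent space.
    Lam = np.diag(np.diag(A @ sigma @ sigma.T))
    return 2.0 * float(np.sum(u * ((A - Lam) @ u)))

def retract(s):
    # Projection retraction: re-normalize each row back onto its sphere.
    return s / np.linalg.norm(s, axis=1, keepdims=True)

rng = np.random.default_rng(0)
n, k = 40, 5
A = rng.standard_normal((n, n))
A = (A + A.T) / 2.0                                     # symmetric data matrix
sigma = retract(rng.standard_normal((n, k)))            # a point on M_k
g = riemannian_grad(A, sigma)

# Tangency: each row of grad f is orthogonal to the matching row of sigma.
assert np.allclose(np.sum(g * sigma, axis=1), 0.0, atol=1e-10)
# tr(Lambda) equals the objective value.
assert abs(np.trace(np.diag(np.diag(A @ sigma @ sigma.T))) - f(A, sigma)) < 1e-8

# Second-order check: finite differences along the retraction.
u = rng.standard_normal((n, k))
u -= np.sum(u * sigma, axis=1, keepdims=True) * sigma   # project onto T_sigma
h = 1e-4
fd = (f(A, retract(sigma + h * u)) - 2 * f(A, sigma) + f(A, retract(sigma - h * u))) / h**2
```

The finite-difference value `fd` agrees with `hess_quad_form(A, sigma, u)` up to discretization error, since the projection retraction is a second-order retraction on the product of spheres.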
Both local maximizers and saddle points whose largest Hessian eigenvalue is close to zero are approximate concave points. The classical Grothendieck inequality relates the global maximum of a non-convex optimization problem to the maximum of its SDP relaxation <cit.>. Our main tool is instead an inequality that applies to all approximate concave points of the non-convex problem. For any ε-approximate concave point σ ∈ ℳ_k of the rank-k non-convex problem (<ref>), we have

f(σ) ≥ SDP(A) - 1/(k-1) (SDP(A) + SDP(-A)) - εn/2.

§.§ Fast Riemannian trust-region algorithm We can use the structural information in Theorem <ref> to develop an algorithm that approximately solves the problem (<ref>), and hence the MaxCut SDP (<ref>). The algorithm we propose is a variant of the Riemannian trust-region algorithm. The Riemannian trust-region algorithm (RTR) <cit.> is a generalization of the trust-region algorithm to manifolds. To maximize the objective function f on the manifold ℳ, RTR proceeds as follows: at each step, we find a direction ξ ∈ T_σℳ that maximizes the quadratic approximation of f over a ball of small radius η_σ:

RTR-update:  ξ_* ≡ arg max {f(σ) + ⟨grad f(σ), ξ⟩ + ⟨ξ, Hess f(σ)[ξ]⟩ : ξ ∈ T_σℳ, ‖ξ‖ ≤ η_σ},

where grad f(σ) is the manifold gradient of f, and the radius η_σ is chosen to ensure that the higher order terms remain small. The next iterate σ^new = P_ℳ(σ + ξ_*) is obtained by projecting σ + ξ_* back onto the manifold. Solving the trust-region problem (<ref>) exactly is computationally expensive. In order to obtain a faster algorithm, we adopt two variants of the RTR algorithm. First, if the gradient of f at the current estimate σ^t is sufficiently large, we only use gradient information to determine the new direction: we call this a gradient-step. If the gradient is small (i.e. we are at an approximately stationary point), we instead maximize only the Hessian contribution: we call this an eigen-step. Second, in an eigen-step, we only approximately maximize the Hessian contribution.
Let us emphasize that these two variants are commonly used and we do not claim they are novel. For the non-convex MaxCut SDP problem (<ref>), we describe the algorithm concretely as follows. In each step, we first find a direction u^t using the direction-finding routine outlined below[Throughout the paper, points σ ∈ ℳ_k and vectors u ∈ T_σℳ_k are represented by matrices σ, u ∈ ℝ^n× k, and hence the norm on T_σℳ_k is identified with the Frobenius norm ‖u‖_F.].

Direction-Finding Algorithm
Input: current position σ^t; parameter μ_G.
Output: search direction u^t with ‖u^t‖_F = 1.
1: Compute ‖grad f(σ^t)‖_F.
2: If ‖grad f(σ^t)‖_F > μ_G:
3:   Return u^t = grad f(σ^t)/‖grad f(σ^t)‖_F.
4: Else:
5:   Use the power method to construct a direction u^t ∈ T_σ^tℳ_k such that ‖u^t‖_F = 1, ⟨u^t, Hess f(σ^t)[u^t]⟩ ≥ λ_max(Hess f(σ^t))/2, and ⟨u^t, grad f(σ^t)⟩ ≥ 0; Return u^t.
6: End

Given this direction u^t, we update our current estimate by σ^t+1 = P_ℳ_k(σ^t + η^t u^t), with η^t an appropriately chosen step size. We consider two specific implementations for the parameter μ_G and the choice of step size: (a) Take μ_G = ∞, which means that only eigen-steps are used. In this implementation, we take the step size η_H^t = ⟨u^t, Hess f(σ^t)[u^t]⟩/(100 ‖A‖_1). (b) Take μ_G = ‖A‖_2. When ‖grad f(σ^t)‖_F > μ_G, we choose the step size η_G^t = μ_G/(20 ‖A‖_1). When ‖grad f(σ^t)‖_F ≤ μ_G, we choose the step size η_H^t = min{√(λ_H^t/(216 ‖A‖_1)), λ_H^t/(12 ‖A‖_2)}, where λ_H^t = ⟨u^t, Hess f(σ^t)[u^t]⟩. In each eigen-step, we need to compute a direction u ∈ T_σℳ_k such that ‖u‖_F = 1 and ⟨u, Hess f(σ)[u]⟩ ≥ λ_max(Hess f(σ))/2. This can be done using the following power method. (Note that the condition ⟨u^t, grad f(σ^t)⟩ ≥ 0 can always be ensured by replacing u^t with -u^t if needed.)
Power Method
Input: σ, Hess f(σ); parameters N_H, μ_H.
Output: u ∈ T_σℳ_k such that ‖u‖_F = 1 and ⟨u, Hess f(σ)[u]⟩ ≥ λ_max(Hess f(σ))/2.
1: Sample u^0 uniformly at random on T_σℳ_k with ‖u^0‖_F = 1.
2: For i = 1, …, N_H:
3:   u^i ← Hess f(σ)[u^i-1] + μ_H · u^i-1;
4:   u^i ← u^i/‖u^i‖_F.
5: End
6: Return u^N_H.

The shifting parameter μ_H can be chosen as 4‖A‖_1, which is an upper bound on ‖Hess f(σ)‖_2. We take the parameter N_H = C · ‖A‖_1 log n/λ_max(Hess f(σ)) with a large absolute constant C. In practice, when choosing the parameter N_H, we do not know λ_max(Hess f(σ)) for each σ, but we can replace it by a lower bound, or estimate it using some heuristic. It is a classical result that, with high probability, the power method with this number of iterations finds a solution u^t with the required curvature <cit.>. There exists a universal constant c such that, for any matrix A and any ε > 0, the Fast Riemannian Trust-Region method, with step sizes as described above and initialized with any σ_0 ∈ ℳ_k, returns a point σ^* ∈ ℳ_k with

f(σ^*) ≥ SDP(A) - 1/(k-1) (SDP(A) + SDP(-A)) - εn/2,

within the following number of steps for each implementation: (a) Taking μ_G = ∞ (i.e. only eigen-steps are used), it is sufficient to run T_H ≤ c · n‖A‖_1^2/ε^2 steps. (b) Taking μ_G = ‖A‖_2, it is sufficient to run T = T_H + T_G steps, in which there are T_H ≤ c · n · max(‖A‖_2^2/ε^2, ‖A‖_1/ε) eigen-steps and T_G ≤ c · Rg(A)‖A‖_1/‖A‖_2^2 gradient-steps. The gap Rg(A)/(k-1) = (SDP(A) + SDP(-A))/(k-1) in Eq. (<ref>) is due to the fact that Theorem <ref> does not rule out the presence of local maxima within an interval Rg(A)/(k-1) from the global maximum. It is therefore natural to set ε = 2Rg(A)/(n(k-1)), to obtain the following corollary.
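A compact NumPy sketch of the shifted power iteration above (variable names are ours): note that the ddiag(·)σ term in the general Hessian formula vanishes under the tangent projection, so on T_σℳ_k one can apply Hess f(σ)[u] = P^⊥(2(A − Λ)u) directly.

```python
import numpy as np

def tangent_project(sigma, u):
    # P^perp: remove from each row u_i its component along sigma_i.
    return u - np.sum(u * sigma, axis=1, keepdims=True) * sigma

def hess_apply(A, sigma, u):
    # For u in T_sigma M_k: Hess f(sigma)[u] = P^perp(2 (A - Lambda) u).
    Lam = np.diag(np.diag(A @ sigma @ sigma.T))
    return tangent_project(sigma, 2.0 * (A - Lam) @ u)

def power_method_direction(A, sigma, n_iter, rng):
    # Shifted power iteration with mu_H = 4 ||A||_1 >= ||Hess f(sigma)||_2,
    # so that the shifted operator is positive semidefinite on T_sigma M_k.
    mu_H = 4.0 * np.max(np.sum(np.abs(A), axis=0))
    u = tangent_project(sigma, rng.standard_normal(sigma.shape))
    u /= np.linalg.norm(u)
    for _ in range(n_iter):
        u = hess_apply(A, sigma, u) + mu_H * u
        u /= np.linalg.norm(u)
    return u

rng = np.random.default_rng(3)
n, k = 30, 4
A = rng.standard_normal((n, n))
A = (A + A.T) / 2.0
sigma = rng.standard_normal((n, k))
sigma /= np.linalg.norm(sigma, axis=1, keepdims=True)
u = power_method_direction(A, sigma, 200, rng)
```

The iterate stays on the unit sphere of the tangent space throughout, since both the Hessian image and the shift term are tangent.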
There exists a universal constant c such that, for any matrix A, the Fast Riemannian Trust-Region method, with step sizes as described above and initialized with any σ_0 ∈ ℳ_k, returns a point σ^* ∈ ℳ_k with

f(σ^*) ≥ SDP(A) - 2/(k-1) (SDP(A) + SDP(-A))

within the following number of steps for each implementation: (a) Taking μ_G = ∞, it is sufficient to run T_H ≤ c · n k^2 (n‖A‖_1/Rg(A))^2 eigen-steps. (b) Taking μ_G = ‖A‖_2, it is sufficient to run T = T_H + T_G steps, in which there are T_H ≤ c · n · max(n^2 k^2 ‖A‖_2^2/Rg(A)^2, nk‖A‖_1/Rg(A)) eigen-steps and T_G ≤ c · Rg(A)‖A‖_1/‖A‖_2^2 gradient-steps. In order to develop some intuition on these complexity bounds, let us consider two specific examples. Consider the problem of finding the minimum bisection of a random d-regular graph G, with adjacency matrix A_G. A natural SDP relaxation is given by the SDP (<ref>) with A = A_G - 𝔼A_G = A_G - (d/n)𝟙𝟙^⊤ the centered adjacency matrix. For this choice of A, we have ‖A‖_1 ≤ 2d, ‖A‖_2 = 2√(d-1)(1 + o_n(1)) <cit.>, SDP(A) = 2n√(d-1) + o(n) and SDP(-A) = 2n√(d-1) + o(n) <cit.> (with high probability). Using implementation (a) (only eigen-steps), the bound on the number of iterations in Corollary <ref> scales as T_H = 𝒪(ndk^2). In implementation (b), we choose μ_G = Θ(√(d)), and the numbers of gradient-steps and eigen-steps scale respectively as T_G = 𝒪(n√(d)) and T_H = 𝒪(nk · max(k, √(d))). In terms of floating point operations, in each gradient-step the computation of the gradient costs 𝒪(ndk) operations; in each eigen-step, each iteration of the power method costs 𝒪(ndk) operations, and the number of iterations in each power method scales as 𝒪(k√(d) log n). Implementation (b) presents a better scaling.
The total number of floating point operations to find a (1 - O(1/k))-approximate solution of the minimum bisection SDP of a random d-regular graph is therefore (with high probability) upper bounded by 𝒪(n^2 k^3 d^3/2 max(k, √(d)) log n). As a second example, consider the MaxCut problem for a d-regular graph G, with adjacency matrix A_G. This can be addressed by considering the SDP (<ref>) with A = -A_G, and the corresponding non-convex version (<ref>). As shown in the next section, finding a 2Rg(A)/(n(k-1))-approximate concave point of (<ref>) yields a (1 - O(1/k)) × 0.878-approximation of the MaxCut of G. For this choice of A, we have ‖A‖_1 = d, ‖A‖_2 = d, and Rg(A) = Θ(nd). Therefore, in implementation (a), where all the steps are eigen-steps, the number of iterations given by Corollary <ref> scales as T_H = 𝒪(nk^2). In implementation (b), we choose μ_G = Θ(d), and the numbers of gradient-steps and eigen-steps scale respectively as T_G = 𝒪(n) and T_H = 𝒪(nk^2). In terms of floating point operations, the computational costs of one gradient-step and of one eigen-step power iteration are the same (namely 𝒪(ndk)) as in the minimum bisection example. The number of iterations in the power method scales as 𝒪(k log n). Therefore, the two implementations are equivalent here. The total number of floating point operations to find a (1 - O(1/k)) × 0.878-approximate solution of the MaxCut of a d-regular graph is upper bounded by 𝒪(n^2 d k^4 log n). Let us emphasize that the complexity bound in Theorem <ref> is not superior to the ones available for some alternative approaches. There is a vast literature that studies fast SDP solvers <cit.>. In particular, <cit.> give nearly linear-time algorithms to approximate (<ref>). These algorithms are different from the one studied here, and rely on the multiplicative weight update method <cit.>. Using sketching techniques, their complexity can be further reduced <cit.>.
However, in practice, the Burer-Monteiro approach studied here is extremely simple and scales well to large instances <cit.>. Empirically, it appears to have better complexity than what is guaranteed by our theorem. It would be interesting to compare the multiplicative weight update method and the non-convex approach both theoretically and experimentally. §.§ Application to MaxCut Let A_G ∈ ℝ^n × n denote the weighted adjacency matrix of a non-negatively weighted graph G. The MaxCut of G is given by the following integer program:

MaxCut(G) = max_x_i ∈{-1,+1} 1/4 ∑_i,j=1^n A_G,ij (1 - x_i x_j).

We consider the following semidefinite programming relaxation:

SDPCut(G) = max_X ≽ 0, X_ii=1 1/4 ∑_i,j=1^n A_G,ij (1 - X_ij).

Denote by X^* the solution of this SDP. Goemans and Williamson <cit.> proposed a celebrated rounding scheme using this X^*, which is guaranteed to find an α_*-approximate solution to the MaxCut problem (<ref>), where α_* ≡ min_θ∈[0,π] 2θ/(π(1 - cos θ)), α_* > 0.87856. The corresponding rank-k non-convex formulation is given by

max_σ {1/4 ∑_i,j=1^n A_G,ij (1 - ⟨σ_i, σ_j⟩) : σ_i ∈ 𝕊^k-1, ∀ i ∈ [n]}.

Applying Theorem <ref>, we obtain the following result. For any k ≥ 3, if σ^* is a local maximizer of the rank-k non-convex SDP problem (<ref>), then using σ^* we can find an α_* × (1 - 1/(k-1)) ≥ 0.878 × (1 - 1/(k-1))-approximate solution of the MaxCut problem (<ref>). If σ^* is a 2Rg(A_G)/(n(k-1))-approximate concave point, then using σ^* we can find an α_* × (1 - 2/(k-1)) ≥ 0.878 × (1 - 2/(k-1))-approximate solution of the MaxCut problem. The proof is deferred to Section <ref>. §.§ ℤ_2 synchronization Recall the definition of the Gaussian Orthogonal Ensemble. We write W ∼ GOE(n) if W ∈ ℝ^n × n is symmetric with (W_ij)_i≤ j independent, with distribution W_ii ∼ N(0, 2/n) and W_ij ∼ N(0, 1/n) for i < j.
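As a quick numerical aside (our code, not the paper's), the constant α_* can be evaluated by direct minimization over a grid, and the hyperplane rounding applies verbatim to a rank-k point σ ∈ ℳ_k:

```python
import numpy as np

# Goemans-Williamson constant: alpha_* = min_theta 2*theta / (pi*(1 - cos(theta))).
theta = np.linspace(1e-6, np.pi, 1_000_000)
alpha_star = float(np.min(2.0 * theta / (np.pi * (1.0 - np.cos(theta)))))

def round_to_cut(sigma, rng):
    # Hyperplane rounding of sigma in M_k: v_i = sign(<sigma_i, g>), g ~ N(0, I_k).
    g = rng.standard_normal(sigma.shape[1])
    v = np.sign(sigma @ g)
    v[v == 0] = 1.0   # break exact ties (a probability-zero event) arbitrarily
    return v

rng = np.random.default_rng(2)
sigma = rng.standard_normal((50, 6))
sigma /= np.linalg.norm(sigma, axis=1, keepdims=True)
v = round_to_cut(sigma, rng)
```

The grid minimization returns alpha_star ≈ 0.8786, matching the constant quoted above.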
In the ℤ_2 synchronization problem, we are required to estimate the vector u ∈ {±1}^n from noisy pairwise measurements A(λ) = (λ/n) u u^⊤ + W_n, where W_n ∼ GOE(n) and λ is a signal-to-noise ratio. The random matrix model (<ref>) is also known as the `spiked model' <cit.> or `deformed Wigner matrix', and has attracted significant attention across statistics and probability theory <cit.>. The Maximum Likelihood Estimator for recovering the labels u ∈ {±1}^n is given by

x̂^MLE(A) = arg max_x ∈{±1}^n ⟨x, Ax⟩.

A natural SDP relaxation of this optimization problem is given, once more, by (<ref>). It is known that ℤ_2 synchronization undergoes a phase transition at λ_c = 1. For λ ≤ 1, no statistical estimator x̂(A) achieves a scalar product |⟨x̂(A), u⟩|/n bounded away from 0 as n → ∞. For λ > 1, there exists an estimator with |⟨x̂(A), u⟩|/n bounded away from 0 (`better than random guessing') <cit.>. Further, for λ < 1 it is not possible[To the best of our knowledge, a formal proof of this statement has not been published. However, a proof can be obtained by the techniques of <cit.>.] to distinguish whether A is drawn from the spiked model or A ∼ GOE(n) with probability of error converging to 0 as n → ∞. This is instead possible for λ ≥ 1. It was proved in <cit.> that the SDP relaxation (<ref>), with a suitable rounding scheme, achieves the information-theoretic threshold λ_c = 1 for this problem. In this paper, we prove a similar result for the non-convex problem (<ref>). Namely, we show that for any signal-to-noise ratio λ > 1 there exists a sufficiently large k such that every local maximizer has a non-trivial correlation with the ground truth. Below we denote by ℒ_n,k(A) the set of local maximizers of problem (<ref>). For any λ > 1, there exists a function k_*(λ) > 0 such that, for any k > k_*(λ), with high probability, any local maximizer σ of the rank-k non-convex SDP problem (<ref>) has non-vanishing correlation with the ground truth parameter.
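For concreteness, the spiked model is straightforward to simulate. The sketch below (our code) follows the GOE normalization defined above; for λ > 1 the top eigenvector of A(λ) already correlates with u, consistent with the phase transition just described:

```python
import numpy as np

def sample_goe(n, rng):
    # W symmetric with W_ii ~ N(0, 2/n) and W_ij ~ N(0, 1/n) for i < j.
    G = rng.standard_normal((n, n)) / np.sqrt(n)
    return (G + G.T) / np.sqrt(2.0)

def spiked_matrix(u, lam, rng):
    # A(lambda) = (lambda/n) u u^T + W_n.
    n = u.shape[0]
    return (lam / n) * np.outer(u, u) + sample_goe(n, rng)

rng = np.random.default_rng(1)
n, lam = 400, 3.0
u = rng.choice([-1.0, 1.0], size=n)
A = spiked_matrix(u, lam, rng)
v1 = np.linalg.eigh(A)[1][:, -1]    # top eigenvector (eigh sorts ascending)
overlap = (v1 @ u) ** 2 / n         # squared correlation with the truth
```

At λ = 3 the squared overlap is close to 1 − 1/λ² for large n; the assertion below only checks the much weaker statement that it is bounded away from zero.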
Explicitly, there exists ε = ε(λ) > 0 such that

lim_n→∞ ℙ(inf_σ∈ℒ_n,k(A) (1/n)‖σ^⊤ u‖_2 ≥ ε) = 1.

The proof of this theorem is deferred to Section <ref>. Note that this guarantee is weaker than the one of <cit.>, which also presents an explicit rounding scheme to obtain an estimator x̂ ∈ {+1,-1}^n. However, we expect that the techniques of <cit.> should be generalizable to the present setting. A simple rounding scheme takes the sign of the principal left singular vector of σ. We will use this estimator in our numerical experiments in Section <ref>. This theorem can be compared with the one of <cit.>, which uses k = 2 but requires λ > 8. As a side result, which improves over <cit.> for k = 2, we obtain the following lower bound on the correlation for any k ≥ 2. For any k ≥ 2, the following holds almost surely:

lim inf_n→∞ inf_σ∈ℒ_n,k (1/n^2)‖σ^⊤ u‖_2^2 ≥ 1 - min(16/λ, 1/k + 4/λ).

The proof is deferred to Section <ref>. Our lower bound converges to 1 at large λ, which is the qualitatively correct behavior. §.§ Stochastic block model The planted partition problem (two-groups symmetric stochastic block model) is another well-studied statistical estimation problem that can be reduced to (<ref>) <cit.>. We write G ∼ 𝒢(n, p, q) if G is a graph over n vertices generated as follows (for simplicity of notation, we assume n even). Let u ∈ {±1}^n be a vector of labels, uniformly random subject to ⟨u, 𝟙⟩ = 0. Conditional on this partition, edges are drawn independently with

ℙ((i,j) ∈ E | u) = p if u_i = u_j, and q if u_i ≠ u_j.

We consider the case p = a/n and q = b/n with a, b = O(1) and a > b, and denote by d = (a+b)/2 the average degree. A phase transition occurs as the following signal-to-noise parameter increases:

λ(a,b) ≡ (a-b)/√(2(a+b)).

For λ > 1 there exists an efficient estimator that correlates with the true labels with high probability <cit.>, whereas no such estimator exists below this threshold, regardless of its computational complexity <cit.>.
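The model 𝒢(n, a/n, b/n) is equally easy to simulate; the following sketch (our code) draws a balanced instance and evaluates the signal-to-noise parameter λ(a,b):

```python
import numpy as np

def sample_sbm(n, a, b, rng):
    # Balanced two-group SBM: u in {+-1}^n with sum(u) = 0; edges drawn
    # independently with prob a/n within groups and b/n across groups.
    u = np.repeat([1.0, -1.0], n // 2)
    rng.shuffle(u)
    P = np.where(np.outer(u, u) > 0, a / n, b / n)
    upper = np.triu(rng.random((n, n)) < P, k=1)   # independent draws for i < j
    A = (upper | upper.T).astype(float)
    return A, u

n, a, b = 2000, 15.0, 5.0
rng = np.random.default_rng(5)
A, u = sample_sbm(n, a, b, rng)
d = (a + b) / 2.0
lam = (a - b) / np.sqrt(2.0 * (a + b))                # about 1.58 > 1 here
A_hat = (A - (d / n) * np.ones((n, n))) / np.sqrt(d)  # centered, scaled adjacency
```

The matrix `A_hat` is the rescaled input used for the SDP relaxation in the next paragraph.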
The Maximum Likelihood Estimator of the vertex labels is given by

SBM-MLE:  x̂^MLE(G) = arg max {⟨x, A_G x⟩ : x ∈ {+1,-1}^n, ⟨x, 𝟙⟩ = 0},

where A_G is the adjacency matrix of the graph G. This optimization problem can again be attacked using the relaxation (<ref>), where A = Â_G ≡ (A_G - (d/n)·𝟙𝟙^⊤)/√(d) is the scaled and centered adjacency matrix. In order to emphasize the relationship between this problem and ℤ_2 synchronization, we rewrite Â_G = (λ/n) u u^⊤ + E, where E^⊤ = E has zero mean and the entries (E_ij)_i<j are independent with distribution

E_ij = (1 - p_ij)/√(d) with probability p_ij, and E_ij = -p_ij/√(d) with probability 1 - p_ij,

where p_ij = a/n for u_i = u_j and p_ij = b/n for u_i ≠ u_j. In analogy with Theorem <ref>, we have the following result on the rank-constrained approach to the two-groups stochastic block model. Consider the rank-k non-convex SDP (<ref>) with A = Â_G the centered, scaled adjacency matrix of a graph G ∼ 𝒢(n, a/n, b/n). For any λ = λ(a,b) > 1, there exist an average degree d_*(λ) and a rank k_*(λ) such that, for any d ≥ d_*(λ) and k ≥ k_*(λ), with high probability, any local maximizer σ has non-vanishing correlation with the true labels. Explicitly, there exists an ε = ε(λ) > 0 such that

lim_n→∞ ℙ(inf_σ∈ℒ_n,k (1/n)‖σ^⊤ u‖_2^2 ≥ ε) = 1.

The proof of this theorem can be found in Section <ref>. As mentioned above, efficient algorithms that estimate the hidden partition better than random guessing for λ > 1 and any d > 1 have been developed, among others, in <cit.>. However, we expect the optimization approach (<ref>) to share some of the robustness properties of semidefinite programming <cit.>, while scaling well to large instances. §.§ SO(d) synchronization In SO(d) synchronization we would like to estimate m matrices R_1, …, R_m in the special orthogonal group

SO(d) = {R ∈ ℝ^d × d : R^⊤ R = 𝕀_d, det(R) = 1},

from noisy measurements of the pairwise group differences A_ij = R_i^-1 R_j + W_ij for each pair (i,j) ∈ [m] × [m].
Here A_ij ∈ ℝ^d× d is a measurement, and W_ij ∈ ℝ^d× d is noise. The Maximum Likelihood Estimator for recovering the group elements R_i ∈ SO(d) solves a problem of the form

max_σ_1,…,σ_m ∈ SO(d) ∑_i,j=1^m ⟨σ_i, A_ij σ_j⟩,

which can be relaxed to the Orthogonal-Cut SDP (<ref>). The non-convex rank-constrained approach fixes k > d and solves the problem (<ref>). This is a smooth optimization problem, with objective function f(σ) = ⟨σ, Aσ⟩ over the manifold ℳ_o,d,k = O(d,k)^m, where O(d,k) = {σ ∈ ℝ^k× d : σ^⊤ σ = 𝕀_d} is the set of k × d orthogonal matrices. We also denote the maximum value of the SDP (<ref>) by

SDP_o(A) = max{⟨A, X⟩ : X ≽ 0, X_ii = 𝕀_d, i ∈ [m]}.

In analogy with the MaxCut SDP, we obtain the following Grothendieck-type inequality. For an ε-approximate concave point σ ∈ ℳ_o,d,k of the rank-k non-convex Orthogonal-Cut SDP problem (<ref>), we have

f(σ) ≥ SDP_o(A) - 1/(k_d-1) (SDP_o(A) + SDP_o(-A)) - εn/2,

where k_d = 2k/(d+1). The proof of this theorem is a generalization of the proof of Theorem <ref>, and is deferred to Section <ref>. § PROOF OF THEOREM <REF> In this section we present the proof of Theorem <ref>, while deferring the other proofs to Section <ref>. Notice that the present proof is simpler and provides a tighter bound than the one of <cit.>. Before passing to the actual proof, we make a few remarks about the geometry of optimization on ℳ_k. §.§ Geometry of the manifold ℳ_k The set ℳ_k as defined in (<ref>) is a smooth submanifold of ℝ^n× k. We endow ℳ_k with the Riemannian geometry induced by the Euclidean space ℝ^n× k. At any point σ ∈ ℳ_k, the tangent space is obtained by taking the differential of the equality constraints:

T_σℳ_k = {u ∈ ℝ^n× k : u = (u_1, u_2, …, u_n)^⊤, ⟨u_i, σ_i⟩ = 0, i ∈ [n]}.

In words, T_σℳ_k is the set of matrices u ∈ ℝ^n× k such that each row u_i of u is orthogonal to the corresponding row σ_i of σ. Equivalently, T_σℳ_k is the direct product of the tangent spaces of the n unit spheres 𝕊^k-1 ⊆ ℝ^k at σ_1, …, σ_n. Let P^⊥ be the orthogonal projection operator from ℝ^n× k onto T_σℳ_k.
We have

P^⊥(u) = (P_1^⊥(u_1), …, P_n^⊥(u_n))^⊤ = (u_1 - ⟨σ_1, u_1⟩σ_1, …, u_n - ⟨σ_n, u_n⟩σ_n)^⊤ = u - ddiag(uσ^⊤)σ,

where we denoted by ddiag : ℝ^n× n → ℝ^n× n the operator on the matrix space that sets all off-diagonal entries to zero. In problem (<ref>), we consider the cost function f(σ) = ⟨σ, Aσ⟩ on the submanifold ℳ_k. At σ ∈ ℳ_k, we denote by ∇f(σ) and grad f(σ) respectively the Euclidean gradient in ℝ^n× k and the Riemannian gradient of f. The former is ∇f(σ) = 2Aσ, and the latter is the projection of the former onto the tangent space:

grad f(σ) = P^⊥(∇f(σ)) = 2(A - ddiag(Aσσ^⊤))σ.

We will write Λ = Λ(σ) = ddiag(Aσσ^⊤), and often drop the dependence on σ for simplicity. At σ ∈ ℳ_k, let ∇^2 f(σ) and Hess f(σ) be respectively the Euclidean and the Riemannian Hessian of f. The Riemannian Hessian is a symmetric operator on the tangent space, given by projecting the directional derivative of the gradient vector field (we use D to denote the directional derivative):

∀ u ∈ T_σℳ_k,  Hess f(σ)[u] = P^⊥(D grad f(σ)[u]) = P^⊥[2(A - Λ)u - 2 ddiag(Aσu^⊤ + Auσ^⊤)σ].

In particular, we will use the following identity:

∀ u, v ∈ T_σℳ_k,  ⟨v, Hess f(σ)[u]⟩ = 2⟨v, (A - Λ)u⟩,

where we used that the projection operator P^⊥ is self-adjoint and that ⟨v_i, σ_i⟩ = 0 by definition of the tangent space. We observe that the Riemannian Hessian has a similar interpretation as in Euclidean geometry, namely it provides a second order approximation of the function f in a neighborhood of σ. §.§ Proof of Theorem <ref> Let σ be an ε-approximate concave point of f on ℳ_k. Using the definition and Equation (<ref>), we have (for Λ = ddiag(Aσσ^⊤))

∀ u ∈ T_σℳ_k,  ⟨u, (Λ - A)u⟩ ≥ -(ε/2)⟨u, u⟩.

Let V = [v_1, …, v_n]^⊤ ∈ ℝ^n× n be such that X = VV^⊤ is an optimal solution of problem (<ref>). Let G ∈ ℝ^k× n be a random matrix with independent entries G_ij ∼ N(0, 1/k), and denote by P_i^⊥ = 𝕀_k - σ_iσ_i^⊤ ∈ ℝ^k× k the projection onto the subspace orthogonal to σ_i in ℝ^k. We use G to obtain a random projection W = [P_1^⊥ G v_1, …, P_n^⊥ G v_n]^⊤ ∈ T_σℳ_k.
From (<ref>), we have

𝔼⟨W, (Λ - A)W⟩ ≥ -(ε/2) 𝔼⟨W, W⟩,

where the expectation is taken over the random matrix G. The left hand side of the last equation gives

𝔼⟨W, (Λ - A)W⟩ = ∑_i,j=1^n (Λ - A)_ij 𝔼⟨P_i^⊥ G v_i, P_j^⊥ G v_j⟩
= ∑_i,j=1^n (Λ - A)_ij 𝔼⟨P_i^⊥ G ∑_s=1^n v_is e_s, P_j^⊥ G ∑_t=1^n v_jt e_t⟩
= ∑_i,j=1^n (Λ - A)_ij ∑_s,t=1^n v_is v_jt 𝔼[⟨P_i^⊥ G e_s, P_j^⊥ G e_t⟩]
= ∑_i,j=1^n (Λ - A)_ij ∑_s,t=1^n v_is v_jt δ_st (1/k) tr(P_i^⊥ P_j^⊥)
= ∑_i,j=1^n (Λ - A)_ij ⟨v_i, v_j⟩ (1/k) tr(𝕀_k - σ_iσ_i^⊤ - σ_jσ_j^⊤ + σ_iσ_i^⊤σ_jσ_j^⊤)
= ∑_i,j=1^n (Λ - A)_ij ⟨v_i, v_j⟩ (1 - 2/k + (1/k)⟨σ_i, σ_j⟩^2)
= (1 - 1/k) tr(Λ) - (1 - 2/k) SDP(A) - (1/k) ∑_i,j=1^n A_ij ⟨v_i, v_j⟩ ⟨σ_i, σ_j⟩^2,

whereas the right hand side verifies

𝔼⟨W, W⟩ = ∑_i=1^n 𝔼⟨P_i^⊥ G v_i, P_i^⊥ G v_i⟩ = ∑_i=1^n (1 - 2/k + (1/k)‖σ_i‖_2^2) = (1 - 1/k) n.

Note that tr(Λ) = f(σ). Crucially, if we let Q_ij = ⟨v_i, v_j⟩⟨σ_i, σ_j⟩^2, we have Q_ii = 1 and Q ≽ 0. Thus we have SDP(-A) ≥ ⟨-A, Q⟩. Therefore, we have

(1 - 1/k) f(σ) - (1 - 2/k) SDP(A) + (1/k) SDP(-A) ≥ -(ε/2) n (1 - 1/k).

Rearranging the terms gives the conclusion. § NUMERICAL ILLUSTRATION In this section we carry out some numerical experiments to illustrate our results. We also find interesting phenomena that are not captured by our analysis. Although Theorem <ref> provides a complexity bound for the Riemannian trust-region method (RTR), we observe that (projected) gradient ascent also converges very fast. That is, gradient ascent rapidly increases the objective function, is not trapped at saddle points, and eventually converges to a local maximizer. In Figure <ref>, we take A ∼ GOE(1000), and use projected gradient ascent to solve the optimization problem (<ref>) with a random initialization and a fixed step size. Figure <ref>a shows that the objective function increases rapidly and converges to within a small interval of the local maximum (which is upper bounded by the value SDP(A)).
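A minimal version of the projected gradient ascent used in these experiments might look as follows (the step-size rule and seeds are our choices, not the paper's):

```python
import numpy as np

def projected_gradient_ascent(A, k, n_steps, step, rng):
    # Ascend f(sigma) = <sigma, A sigma> on M_k: move along the Riemannian
    # gradient 2 (A - Lambda) sigma, then re-normalize each row of sigma.
    n = A.shape[0]
    sigma = rng.standard_normal((n, k))
    sigma /= np.linalg.norm(sigma, axis=1, keepdims=True)
    values = []
    for _ in range(n_steps):
        Lam = np.diag(np.diag(A @ sigma @ sigma.T))
        sigma = sigma + 2.0 * step * (A - Lam) @ sigma
        sigma /= np.linalg.norm(sigma, axis=1, keepdims=True)
        values.append(float(np.trace(sigma.T @ A @ sigma)))
    return sigma, values

rng = np.random.default_rng(4)
n, k = 200, 8
G = rng.standard_normal((n, n)) / np.sqrt(n)
A = (G + G.T) / np.sqrt(2.0)                      # a GOE(n) instance
step = 0.1 / np.max(np.sum(np.abs(A), axis=0))    # conservative: about 0.1/||A||_1
sigma, values = projected_gradient_ascent(A, k, 300, step, rng)
```

On such an instance the objective rises quickly from its random-initialization value toward its limit, mirroring the behavior described above.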
Also, the gap between the value obtained by this procedure and the value SDP(A) decreases rapidly with k. Figure <ref>b shows that the Riemannian gradient decreases very rapidly, but presents some non-monotonicity. We believe these bumps occur when the iterates are close to saddle points. In Figure <ref>, we examine some geometric properties of the rank-k non-convex SDP. As above, we explore the landscape of this problem by projected gradient ascent. In Figure <ref>a, we plot the curvature λ_max(Hess f(σ)) versus the gap from the SDP value, 2(SDP(A) - f(σ))/n, along the iterations. When f(σ) is far from SDP(A), there is a linear relationship between these two quantities, which is consistent with Theorem <ref>. In Figure <ref>b, we plot the gap between SDP(A) and f(σ^*), for a local maximizer σ^* ∈ ℳ_k produced by projected gradient ascent, for different values of k. These data are averaged over 10 realizations of the random matrix A. This gap converges to zero as k gets large, and is upper bounded by the curve Rg(A)/(15(k-1)). This is consistent with Theorem <ref>, which predicts that the gap must be smaller than Rg(A)/(k-1). Note however that, in this case, Theorem <ref> is overly pessimistic, and the gap appears to decrease very rapidly with k. Now we turn to the MaxCut problem. Note that Theorem <ref> gives a guarantee on the approximation ratio of the cut induced by any local maximizer of the rank-k non-convex SDP (<ref>). In Figure <ref>, we take the graph to be an Erdős-Rényi graph with n = 1000 and average degree d = 50. We plot the cut value found by rounding the maximizer of the rank-k non-convex SDP, for k from 2 to 10, and also for k = n, which corresponds to the SDP (<ref>). Surprisingly, the cut value found by solving the rank-k non-convex problem is typically larger than the cut value found by solving the original SDP. This provides a further reason to adopt the non-convex approach (<ref>): it appears to provide a remarkably tight relaxation for random instances.
In order to study ℤ_2 synchronization, we consider the matrix A = (λ/n) u u^⊤ + W_n, where W_n ∼ GOE(n), for n = 1000. Figure <ref>a shows the correlation ‖σ^⊤ u‖_2^2/n^2 of a local maximizer σ ∈ ℳ_k produced by projected gradient ascent with the ground truth u. In Figure <ref>b we construct label estimates û(A) = sign(v_1(σ)), where v_1(σ) is the principal left singular vector of σ ∈ ℝ^n× k. We plot the correlation (⟨û, u⟩/n)^2 as a function of λ. In both cases, results are averaged over 10 realizations of the matrix A. Surprisingly, the resulting correlation is strongly concentrated, despite the fact that gradient ascent converges to a random local maximum σ ∈ ℳ_k. Finally, we turn to the SO(3) synchronization problem, and study the local maximizers of the Orthogonal-Cut SDP (<ref>). We sample a matrix A ∼ GOE(300), and find a local maximum of the rank-k non-convex Orthogonal-Cut SDP (<ref>). In Figure <ref> we plot the gap between SDP_o(A) and f(σ^*), for a local maximizer σ^* ∈ ℝ^n× k produced by projected gradient ascent, for different k. This gap converges to zero as k grows, and is upper bounded by Rg(A)/(20(k_d-1)). This is in agreement with Theorem <ref>, which predicts that the gap is smaller than Rg(A)/(k_d-1). § OTHER PROOFS §.§ Proof of Theorem <ref> Note that problem (<ref>) is equivalent to problem (<ref>) with matrix A = -A_G. Applying Theorem <ref>, and noting that the elements of A_G are non-negative, we obtain, for any local maximizer σ^* of the problem (<ref>) and any optimal solution X^* of the SDP (<ref>),

⟨σ^*, -A_G σ^*⟩ ≥ ⟨-A_G, X^*⟩ - 1/(k-1)(⟨-A_G, X^*⟩ + SDP(A_G)) = ⟨-A_G, X^*⟩ - 1/(k-1)(⟨-A_G, X^*⟩ + ∑_i,j=1^n A_G,ij).

Thus, we have

1/4 ∑_i,j=1^n A_G,ij (1 - ⟨σ_i^*, σ_j^*⟩) = 1/4 ∑_i,j=1^n A_G,ij + 1/4 ⟨σ^*, -A_G σ^*⟩
≥ 1/4 ∑_i,j=1^n A_G,ij + 1/4 [⟨-A_G, X^*⟩ - 1/(k-1)(⟨-A_G, X^*⟩ + ∑_i,j=1^n A_G,ij)]
= (1 - 1/(k-1)) × 1/4 ∑_i,j=1^n A_G,ij (1 - X_ij^*) = (1 - 1/(k-1)) × SDPCut(G) ≥ (1 - 1/(k-1)) × MaxCut(G).
Applying the randomized rounding scheme of <cit.>, we sample a vector g ∼ N(0, 𝕀_k) and define v ∈ {±1}^n by v_i = sign(⟨σ_i^*, g⟩); then we obtain

𝔼[1/4 ∑_i,j=1^n A_G,ij (1 - v_i v_j)] ≥ α_* × 1/4 ∑_i,j=1^n A_G,ij (1 - ⟨σ_i^*, σ_j^*⟩) ≥ α_* × (1 - 1/(k-1)) × MaxCut(G).

Therefore, any local maximizer σ^* gives an α_* × (1 - 1/(k-1))-approximate solution of the MaxCut problem. If σ^* is an ε = 2Rg(A_G)/(n(k-1))-approximate concave point, using Theorem <ref> and the same argument, we can prove that it gives an α_* × (1 - 2/(k-1))-approximate solution of the MaxCut problem. §.§ Proof of Theorem <ref> Let A(λ) = (λ/n)·uu^⊤ + W_n. For any local maximizer σ ∈ ℒ_n,k of the rank-k non-convex MaxCut SDP problem, according to Theorem <ref>, we have

f(σ) ≥ SDP(A(λ)) - 1/(k-1)(SDP(A(λ)) + SDP(-A(λ))).

Therefore,

(λ/n)‖σ^⊤ u‖_2^2 ≥ (1 - 1/(k-1)) SDP((λ/n)uu^⊤ + W_n) - 1/(k-1) SDP(-(λ/n)uu^⊤ - W_n) - ⟨W_n, σσ^⊤⟩
≥ (1 - 1/(k-1)) SDP((λ/n)uu^⊤ + W_n) - 1/(k-1) SDP(-W_n) - SDP(W_n).

Using the convergence of the SDP value proved in <cit.>, for any λ > 1 there exists Δ(λ) > 0 such that, for any δ > 0, the following holds with high probability:

(1/n) SDP(±W_n) ≤ 2 + δ, and (1/n) SDP((λ/n)uu^⊤ + W_n) ≥ 2 + Δ(λ).

Therefore, we have with high probability

(1/n^2)‖σ^⊤ u‖_2^2 ≥ (1/λ)[(2 + Δ(λ)) × (1 - 1/(k-1)) - (1 + 1/(k-1)) × (2 + δ)] = (1 - 1/(k-1)) Δ(λ)/λ - (4 + δ)/(k-1) · (1/λ) - δ/λ.

Since Δ(λ) > 0 for λ > 1, there exists a k_*(λ) such that the above expression is greater than ε for sufficiently small ε and δ, which concludes the proof. §.§ Proof of Theorem <ref> We decompose the proof into two parts. In part (a), we prove that, almost surely,

lim inf_n→∞ inf_σ∈ℒ_n,k (1/n^2)‖σ^⊤ u‖_2^2 ≥ 1 - 1/k - 4/λ,

using only the second order optimality condition. In part (b), we incorporate the first order optimality condition and prove that, when λ ≥ 12k, we have almost surely

lim inf_n→∞ inf_σ∈ℒ_n,k (1/n^2)‖σ^⊤ u‖_2^2 ≥ 1 - 16/λ.

§.§.§ Part (a) The proof of this part is similar to the proof of Theorem <ref>. We replace the matrix A by the expression A = uu^⊤ + Δ, where u ∈ {±1}^n and Δ = (n/λ)·W_n.
Let g ∈ ℝ^k, g ∼ N(0, (1/k)·𝕀_k), and W = [P_1^⊥ g u_1, …, P_n^⊥ g u_n]^⊤ ∈ T_σℳ_k, where P_i^⊥ = 𝕀_k - σ_iσ_i^⊤ ∈ ℝ^k× k. Due to the second order optimality condition, and similarly to the calculation in Theorem <ref>, we have for any local maximizer σ of the rank-k non-convex SDP problem:

0 ≤ 𝔼_g⟨W, (Λ(σ) - A)W⟩ = (1 - 1/k) f(σ) - (1 - 2/k) ∑_i,j=1^n A_ij u_i u_j - (1/k) ∑_i,j=1^n A_ij u_i u_j ⟨σ_i, σ_j⟩^2.

Plugging in the expression for A, we obtain

(1 - 1/k)⟨uu^⊤ + Δ, σσ^⊤⟩ - (1 - 2/k)(n^2 + ⟨u, Δu⟩) - (1/k) ∑_i,j=1^n [⟨σ_i, σ_j⟩^2 + Δ_ij u_i u_j ⟨σ_i, σ_j⟩^2] ≥ 0.

Letting Q_ij = u_i u_j ⟨σ_i, σ_j⟩^2, we have

(1 - 1/k)‖σ^⊤ u‖_2^2 ≥ (1 - 2/k) n^2 - (1 - 1/k)⟨σ, Δσ⟩ + (1 - 2/k)⟨u, Δu⟩ + (1/k) ∑_i,j=1^n [⟨σ_i, σ_j⟩^2 + Δ_ij Q_ij].

Recall that rank(σσ^⊤) ≤ k and tr(σσ^⊤) = n. Thus, we get the lower bound

∑_i,j=1^n ⟨σ_i, σ_j⟩^2 = ‖σσ^⊤‖_F^2 = ∑_i=1^k λ_i^2(σσ^⊤) ≥ (1/k)(∑_i=1^k λ_i(σσ^⊤))^2 = (1/k)(tr(σσ^⊤))^2 = n^2/k.

Also note that Q is a feasible point of (<ref>). Therefore,

(1 - 1/k)‖σ^⊤ u‖_2^2 ≥ (1 - 2/k + 1/k^2) n^2 - (1 - 1/k)⟨σ, Δσ⟩ + (1 - 2/k)⟨u, Δu⟩ + (1/k)⟨Δ, Q⟩
≥ (1 - 1/k)^2 n^2 - (1 - 1/k) SDP(Δ) - (1 - 1/k) SDP(-Δ)
≥ (1 - 1/k)^2 n^2 - 2(1 - 1/k) n ‖Δ‖_2,

which implies that, almost surely,

lim inf_n→∞ inf_σ∈ℒ_n,k (1/n^2)‖σ^⊤ u‖_2^2 ≥ lim inf_n→∞ (1 - 1/k - (2/λ)‖W_n‖_2) = 1 - 1/k - 4/λ,

where we used the fact that for a GOE matrix W_n we have lim_n→∞ ‖W_n‖_2 = 2 almost surely <cit.>. §.§.§ Part (b) In part (a) we only used the second order optimality condition. In this part of the proof, we will incorporate the first order optimality condition. Note that when λ < 12k the bound of part (a) is better, so in this part we only consider the case λ ≥ 12k. Without loss of generality, let u = 𝟙, the vector with all entries equal to one. Let σ ∈ ℝ^n× k be a local maximizer of the rank-k non-convex SDP problem. We remark that the cost function is invariant under right rotations of σ. We can therefore assume that σ = (v_1, …, v_k), where v_i ∈ ℝ^n and ⟨v_i, v_j⟩ = 0 for i ≠ j (take the SVD σ = UΣV^⊤ and consider σ̃ = UΣ). Let X = σσ^⊤ and A(λ) = (λ/n)·𝟙𝟙^⊤ + W_n.
For simplicity, we will sometimes omit the dependence on λ and write A = A(λ).We decompose the proof into the following steps. Step 1 Upper bound on ⟨, v_j ⟩^2/n^2, for j=2,…, k, using the first order optimality condition.The first order optimality condition gives A σ= (A σσ^)σ, which implies that(A v_i) ∘ v_j = (A v_j) ∘ v_i,for any i ≠ j, where we denoted u ∘ v the entry-wise product of u and v. Replacing A by its expression gives((λ/n^ + W_n ) v_i )∘ v_j = ( (λ/n^ + W_n ) v_j )∘ v_i,which implies⟨, v_i⟩ v_j - ⟨, v_j ⟩ v_i = n/λ[-(W_n v_i )∘ v_j + (W_n v_j )∘ v_i].We take the norm of this expression and, recalling that ⟨ v_i, v_j ⟩ = 0, we obtain⟨, v_i ⟩^2 ‖ v_j ‖_2^2 + ⟨, v_j ⟩^2 ‖ v_i ‖_2^2 ≤ n^2/λ^2[‖( W_n v_i)∘ v_j ‖_2+‖( W_n v_j)∘ v_i ‖_2 ]^2.Notice that ‖ v_j‖_∞≤ 1, ∀ j∈ [k], hence ⟨, v_i ⟩^2 ‖ v_j ‖_2^2 + ⟨, v_j ⟩^2 ‖ v_i ‖_2^2 ≤ n^2/λ^2[‖ W_n v_i ‖_2+‖ W_n v_j ‖_2 ]^2≤ n^2/λ^2‖ W_n ‖_^2 [ ‖ v_i ‖_2+‖ v_j ‖_2 ]^2≤ 2n^2/λ^2‖ W_n ‖_^2 (‖ v_i ‖_2^2 + ‖ v_j ‖_2^2). Without loss of generality, let us assume that ‖ v_1 ‖_2 ≥‖ v_j ‖_2 for j ≥ 2 which implies ⟨, v_j ⟩^2 ‖ v_1 ‖_2^2 ≤4n^2/λ^2‖ W_n ‖_^2 ‖ v_1 ‖_2^2, forj ≥ 2.We deduce the following upper boundlim sup_n →∞sup_σ∈_n,k1/n^2⟨, v_j ⟩^2 ≤16/λ^2,a.s.for j = 2,…, k, where we use the fact that for a GOE matrix W_n, we have lim_n→∞‖ W_n ‖_ = 2 almost surely.Step 2 Lower bound on ⟨, v_1 ⟩^2/n^2. We combine equation (<ref>) and (<ref>) to get almost surelylim inf_n →∞inf_σ∈_n,k1/n^2⟨, v_1 ⟩^2 = lim inf_n →∞inf_σ∈_n,k[ 1/n^2‖σ^‖_2^2 - 1/n^2∑_j=2^k⟨, v_j ⟩^2 ] ≥1 - 1/k - 4/λ - 16k/λ^2.Since we assumed that λ≥ 12 k and k≥ 2, we obtain, almost surely,lim inf_n →∞inf_σ∈_n,k1/n^2⟨, v_1 ⟩^2 ≥ 1 - 1/k - 4/12 k - 16k/144 k^2≥1/4.The second inequality above is loose but it is sufficient for our purposes. Step 3 Upper bound on ‖ v_a ‖_2^2 for a ∈{ 2,…, k }. 
In Equation (<ref>), let us take i = 1 and j = a ∈{ 2,…, k }, we have⟨, v_1 ⟩^2 ‖ v_a ‖_2^2 +⟨, v_a ⟩^2 ‖ v_1 ‖_2^2 ≤2n^2/λ^2‖ W_n ‖_^2 (‖ v_1 ‖_2^2 + ‖ v_a ‖_2^2) ≤2 n^3/λ^2‖ W_n ‖_^2. Combining equation (<ref>) and (<ref>) results in the following upper bound for λ≥ 12 k, lim sup_n →∞sup_σ∈_n,k1/n‖ v_a ‖_2^2 ≤32/λ^2,holding almost surely for any a ∈{ 2,…, k }.Step 4 Lower bound on f(σ).By second order optimality of σ, for any vectors {ξ_i }_i=1^n satisfying σ_i, ξ_i=0, we have ⟨ξ, (Λ-A) ξ⟩≥ 0 where ξ = [ξ_1,…, ξ_n]^ and Λ = (A σσ^). Take ξ_i = e_a - σ_i,e_aσ_i, where e_a is the a-th canonical basis vector in ^k, a∈{2,…,k}. Noting that σ = (σ_1,…, σ_n)^ = (v_1,…, v_k), we have σ_i, e_a= v_a,i. Therefore, we have ξ_i,ξ_j = 1- v_a,i^2-v_a,j^2 + X_ijv_a,iv_a,j . Using the second order stationarity condition with this choice of ξ_i, we have0 ≤ ∑_i,j=1^n (Λ - A)_ij⟨ξ_i, ξ_j⟩= ∑_i,j=1^n (Λ - A)_ij(1- v_a,i^2-v_a,j^2 + X_ijv_a,iv_a,j)= ∑_i=1^n Λ_ii (1 - v_a,i^2) - ∑_i,j=1^n A_ij(1- 2 v_a,i^2+ X_ijv_a,iv_a,j),which impliesf(σ) = (Λ) ≥∑_i=1^n Λ_ii v_a,i^2 + ∑_i,j=1^n A_ij(1- 2 v_a,i^2+ X_ijv_a,iv_a,j) = ⟨, A ⟩ + ∑_i=1^n Λ_ii v_a,i^2 - 2 ∑_i,j=1^n A_ij v_a,i^2 + ∑_i,j=1^n A_ij X_ij v_a,i v_a,j ≡ ⟨, A ⟩ + B_1+B_2+B_3. Consider the first term B_1. It is easy to see that the second order stationary condition implies (Λ - A)_ii≥ 0. Thus, we haveB_1 = ∑_i=1^n Λ_ii v_a,i^2 ≥∑_i=1^n A_ii v_a,i^2 = ∑_i=1^n (λ/n + W_n, ii) v_a,i^2≥ ∑_i=1^n W_n, ii v_a,i^2 ≥ - max_i ∈ [n]| W_n, ii|·∑_i=1^n v_a,i^2≥-‖ W_n ‖_‖ v_a‖_2^2. Next consider the second term B_2. We have | B_2 | =2 |⟨, A ( v_a∘ v_a ) ⟩| =2 |⟨, (λ/n ·^ + W_n) ( v_a∘ v_a ) ⟩| ≤2 λ‖ v_a‖_2^2 + 2 |⟨, W_n (v_a ∘ v_a) ⟩|≤ 2 λ‖ v_a‖_2^2 + 2 √(n)‖ W_n ‖_‖ v_a ∘ v_a ‖_2≤2 λ‖ v_a‖_2^2 + 2 √(n)‖ W_n ‖_‖ v_a ‖_2.where the last inequality is because | v_a,i|≤ 1 so that ‖ v_a ∘ v_a ‖_2≤‖ v_a ‖_2. Finally, consider the last term B_3. 
B_3 = ⟨ v_a, ((λ/n ·^ + W_n) ∘ X) v_a⟩= λ/n ·⟨ v_a, X v_a ⟩ + ⟨ v_a, (W_n ∘ X) v_a ⟩ ≥ ⟨ v_a, (W_n ∘ X) v_a ⟩≥ -‖ W_n ∘ X ‖_‖ v_a ‖_2^2 ≥- ‖ W_n ‖_‖ v_a ‖_2^2,where the last inequality used a fact that if X ∈^n × n is in the elliptope, we have ‖ W ∘ X ‖_≤‖ W ‖_ for any W ∈^n× n.Here is the justification of the above fact. For X in the elliptope, we have X_ii = 1 and X ≽ 0. For any Z satisfying Z ≽ 0 and (Z) ≤ 1, X ∘ Z also satisfies X ∘ Z ≽ 0 and (X ∘ Z) ≤ 1. Therefore, using the variational representation of the operator norm, we have‖ W ∘ X ‖_ = max{sup_Z ≽ 0, (Z) ≤ 1⟨ W ∘ X, Z ⟩, sup_Z ≽ 0, (Z) ≤ 1⟨ - W ∘ X, Z ⟩}= max{sup_Z ≽ 0, (Z) ≤ 1⟨ W,X ∘ Z ⟩, sup_Z ≽ 0, (Z) ≤ 1⟨ - W, X ∘ Z ⟩} ≤ max{sup_Y ≽ 0, (Y) ≤ 1⟨ W,Y ⟩, sup_Y ≽ 0, (Y) ≤ 1⟨ - W, Y ⟩} = ‖ W ‖_. Step 5 Finish the proof.Noting that f(σ) = λ/n ·‖σ^‖_2^2 + ⟨σ, W_n σ⟩ and ⟨, A ⟩ = n λ + ⟨, W_n ⟩, we rewrite Equation (<ref>) as following1/n^2‖σ^‖_2^2 ≥ 1 - 1/λ n(⟨σ, W_n σ⟩ - ⟨, W_n ⟩)+ 1/λ n(B_1 + B_2 + B_3).Plug in the lower bound of B_1, B_2, B_3, we have almost surelylim inf_n→∞inf_σ∈_n,k1/n^2‖σ^‖_2^2≥ lim inf_n→∞inf_σ∈_n,k{ 1 - 2/λ‖ W_n ‖_- 1/λ n(2 ‖ W_n ‖_‖ v_a ‖_2^2 + 2λ‖ v_a ‖_2^2 + 2 √(n)‖ W_n ‖_‖ v_a ‖_2)} ≥1 - 4/λ- 1/λ(2 × 2 ×32/λ^2 + 2 λ×32/λ^2 + 2 × 2 ×√(32)/λ) ≥1 - 16/λ.Here we used Equation (<ref>), λ≥ 12k ≥ 24, and the fact that for a GOE matrix W_n, we have lim_n→∞‖ W_n ‖_ = 2 almost surely.§.§ Proof of Theorem <ref> The proof is similar to the proof of Theorem <ref>, where the GOE matrix W_n is replaced by the noise matrix E. Applying Theorem <ref> with the matrix _G(λ), similar to Equation (<ref>), we haveλ/n‖σ^ u ‖_2^2 ≥(1-1/k-1) (_G(λ) ) - 1/k-1( -E) - (E). 
According to <cit.>, the gap between the SDPs with the two different noise matrices is bounded with high probability by a function of the average degree d|1/n ( _G (λ) ) - 1/n ( A(λ)) | < Clog d/d^1/10 and |1/n (± E) - 1/n(± W_n) | < Clog d/d^1/10,where A(λ) = λ/n · u u^ + W_n corresponds to thesynchronization model and C = C(λ) is a function of λ bounded for any fixed λ. According to <cit.>, for any δ > 0 and λ > 1, there exists a function Δ (λ) >0 such that with high probability, we have1/n( ± W_n )≤ 2 + δ, and1/n( λ/n u u^ + W_n )≥ 2 + Δ(λ). Combining the above results, we have for any δ > 0, with high probabilityinf_σ∈_n,k1/n^2‖σ^ u ‖_2^2 ≥(1-1/k-1) Δ(λ)/λ- 4 + δ/k-1·1/λ - δ/λ - 2 C(λ)/λ·log d/d^1/10.For a sufficiently small > 0, taking δ sufficiently small, and taking successively d and k sufficiently large, the above expression will be greater than , which concludes the proof. §.§ Proof of Theorem <ref> We decompose the proof into three parts. In the first part, we do the calculation for a general non-convex problem. In the second part, we focus on the non-convex problem (<ref>). In the third part, we prove a claim we made in the second part.§.§.§ Part 1 First, let's consider a general SDP problem. Given a symmetric matrix A ∈^n× n, symmetric matrices B_1, B_2, …, B_s ∈^n× n and real numbers c_1, …, c_s ∈, we consider the following SDP:max_X ∈^n × n ⟨ A, X ⟩ subject to ⟨ B_i, X ⟩ = c_i,i ∈ [s],X ≽ 0.Let = [B_1,…, B_s] and = (c_1,…, c_s). We denote (A, , ) the maximum of the above SDP problem:(A, , ) = max{⟨ A, X ⟩: X ≽ 0, ⟨ B_i, X ⟩ = c_i, i ∈ [s] }.We assume (A, , ) < ∞. For a fixed integer k, the Burer-Monteiro approach considers the following non-convex problem:maximize ( σ)=⟨σ, A σ⟩ subject to ⟨σ, B_i σ⟩ = c_i,i ∈ [s],with decision variable σ∈^n× kDefine the manifold _k^, = {σ∈^n× k: ⟨σ, B_i σ⟩ = c_i, i ∈ [s] }. At each point σ∈_k^,, the tangent space is given by T_σ_k^, = { U ∈^n× k: ⟨ U , B_i σ⟩ = 0, i ∈ [s] }. 
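To illustrate the tangency condition just stated in the MaxCut special case (B_i = E_ii, c_i = 1, so the constraint Gram matrix is the identity and the manifold is a product of unit spheres), both the tangent-space projection and the Riemannian gradient act row by row. A sketch under these assumptions (numpy; function names are ours):

```python
import numpy as np

def project_tangent(sigma, U):
    """Project U onto T_sigma = {U : <U_i, sigma_i> = 0 for every row i}."""
    coeff = np.sum(sigma * U, axis=1, keepdims=True)  # <sigma_i, U_i>
    return U - coeff * sigma

def riemannian_grad(A, sigma):
    """grad f(sigma) = 2 (A - Lambda) sigma, Lambda = diag((A sigma sigma^T)_ii)."""
    G = A @ sigma                                   # Euclidean gradient is 2 A sigma
    lam = np.sum(G * sigma, axis=1, keepdims=True)  # lambda_i = (A sigma sigma^T)_ii
    return 2.0 * (G - lam * sigma)
```

By construction every row of the output is orthogonal to the corresponding row of σ, which is exactly the tangent-space condition ⟨U_i, σ_i⟩ = 0.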
We denote _T (U) the projection of U ∈^n × k onto T_σℳ_k^,:_T (U) = U - ∑_i,j = 1^s M_ij⟨ B_j σ, U ⟩ B_i σ.where M = ((⟨ B_iσ, B_j σ⟩)_ij=1^s)^-1∈^s × s. The Riemannian gradient is therefore given by(σ) = 2 (A - ∑_i=1^s λ_i B_i ) σwith λ_i = ∑_j=1^s M_ij(B_j A σσ^). We will write Λ = Λ (σ) = ∑_i=1^s λ_i B_i. The Riemannian Hessian (σ) applied on the direction U ∈ T_σ_k^, gives⟨ U, (σ) [U]⟩ = 2 ⟨ U, (A - Λ) U⟩.Therefore, according to the definition of the -approximate concave point σ∈ℳ_k^,, we have∀ Y ∈ T_σℳ_k^,, ⟨ Y, (Λ - A) Y ⟩≥ -1/2⟨ Y, Y ⟩. Let V ∈ℝ^n× n such that X^* = V V^ is a solution of the general SDP problem (<ref>), and G ∈ℝ^k× n, G_ij∼(0, 1/k) i.i.d., be a random mapping from ^n onto ^k. Let σ be a local maximizer of the rank-k non-convex SDP (<ref>), and take Y = _T (V G^), a random projection of V ∈ R^n × n onto T_σℳ_k^,. Due to the definition of the approximate concave point, we have⟨_T (VG^), (Λ - A) _T(VG^)⟩≥ -/2⟨_T (VG^), _T (VG^)⟩where the expectation is taken over the random mapping G. Expanding the left hand side gives0 ≤ ⟨ V, (Λ - A) V ⟩ - 2 ⟨ VG^ - _T(VG^), (Λ - A) VG^⟩+ ⟨ VG^ - _T(VG^), (Λ - A) (VG^ - _T(VG^)) ⟩ + /2⟨_T (VG^), _T (VG^)⟩ .The second term in the last equation gives⟨ VG^ - _T (VG^), (Λ - A) VG^⟩= ⟨∑_i,j=1^s M_ij⟨ B_j σ, VG^⟩ B_i σ, (Λ - A) VG^⟩= ∑_i,j=1^s M_ij[⟨ B_i σ , (Λ - A) VG^⟩⟨ B_j σ, VG^⟩]= ∑_i,j=1^s M_ij[ ⟨ V^(Λ - A) B_i σ, G^⟩⟨ V^ B_j σ, G^⟩]= 1/k∑_i,j = 1^s M_ij⟨ V^(Λ - A) B_i σ, V^ B_j σ⟩= 1/k⟨(Λ - A), V V^∑_i,j = 1^s M_ij B_j σσ^ B_i ⟩. The third term gives⟨ VG^ - _T(VG^), (Λ - A) (VG^ - _T(VG^)) ⟩= ⟨∑_i,j=1^s M_ij⟨ B_j σ, VG^⟩B_i σ, (Λ - A)∑_k,l=1^s M_k l⟨ B_l σ, VG^⟩ B_k σ⟩= ∑_ijkl = 1^s M_ij M_kl[⟨ B_i σ , (Λ - A) B_k σ⟩⟨ B_j σ, VG^⟩⟨ B_l σ, V G^⟩]= 1/k∑_ijkl = 1^s M_ij M_kl⟨ B_i , (Λ - A) B_k σσ^⟩⟨ X^* B_j , B_l σσ^⟩. 
For the fourth term, we have⟨_T (VG^), _T (VG^)⟩= ‖VG -∑_i,j=1^s M_ij⟨ B_j σ, VG^⟩B_i σ‖_F^2= ⟨ VG, VG⟩ - ‖∑_i,j=1^s M_ij⟨ B_j σ, VG^⟩B_i σ‖_F^2 =n - ∑_ij=1^n ∑_kl=1^n M_ij M_kl⟨ B_i σ, B_k σ⟩[ ⟨ V^ B_j σ, G ⟩⟨ V^ B_l σ, G ⟩]=n - 1/k∑_ij=1^n ∑_kl=1^n M_ij M_kl⟨ B_i σ, B_k σ⟩·⟨ V^ B_j σ, V^ B_l σ⟩. §.§.§ Part 2 Now let's consider the case of the rank-k non-convex Orthogonal-Cut SDP problem (<ref>). There are s = d(d+1)/2× m constraints corresponding to the set { (B_i,c_i): i ∈ [s]} = { (E_ii, 1): i ∈ [n]}⋃∪_t=1^m{ ((E_ij + E_ji)/√(2),0): (t-1) d+1 ≤ i < j ≤ t d}, where E_ij = e_i e_j^. We will denote _o,d,k the optimization manifold:_o,d,k = {σ∈^md × k : σ = ( σ_1 , … , σ_m )^ , σ_i^σ_i = 𝕀_d, i∈ [m] }.It is straightforward to verify that for any σ∈_o,d,k, we have (⟨ B_i σ, B_j σ⟩)_ij=1^s = 𝕀_s. Thus, we have M = 𝕀_s. In the following calculation, we write X = σσ^. Recall that X^* is a global maximizer of problem (<ref>), and X^* = V V^. Now, let us calculate each term in Equation (<ref>), for the specific problem (<ref>). For the second term in Equation (<ref>), we derived Equation (<ref>). One can check with some calculations that for any σ∈_o,d,k, we have∑_i,j=1^s M_ij B_j σσ^ B_i = d+1/2𝕀_n. For the fourth term in Equation (<ref>), we derived Equation (<ref>). Following the calculation in Equation (<ref>), we have∑_ij=1^n ∑_kl=1^n M_ij M_kl⟨ B_i σ, B_k σ⟩·⟨ V^ B_j σ, V^ B_l σ⟩= ∑_i j=1^n⟨ B_i σ, B_j σ⟩·⟨ V^ B_i σ, V^ B_j σ⟩= ∑_i = 1^n ⟨ V^ B_i σ, V^ B_i σ⟩ =d+1/2n.For the third term in Equation (<ref>), we derived Equation (<ref>). Following the calculation in Equation (<ref>), we have∑_ijkl=1^s M_ijM_kl⟨ B_i, (Λ-A) B_k X ⟩⟨ X^* B_j , B_l X ⟩= ∑_kl = 1^s ⟨ B_k, (Λ-A) B_l X ⟩⟨ X^* B_k , B_l X ⟩= ∑_ij=1^n (Λ-A)_ij∑_kl=1^s ⟨ B_k, E_ij B_l X ⟩⟨ X^* B_k, B_l X ⟩= d+1/2((Λ-A))where we define = 2/(d+1) · (∑_kl=1^s ⟨ B_k, E_ij B_l X ⟩⟨ X^* B_k, B_l X ⟩)_i,j=1^n. Here, we claim thatis a feasible point of the Orthogonal-Cut SDP problem (<ref>). 
We will prove this claim in part 3. For any feasible point X of the Orthogonal-Cut SDP problem (<ref>), we have ( σ) = ⟨Λ, X ⟩. Therefore, from Equation (<ref>), we obtainf(σ) - _o(A)= ⟨ V Λ V ⟩ - ⟨ V, A V ⟩ ≥ 1/k( 2 ·d+1/2⟨ (Λ - A), V V^⟩ - d+1/2⟨ (Λ-A), ⟩) - n/2(1 - d+1/2k)=1/k((d+1)(f(σ) - _o(A)) - d+1/2 (f(σ) -⟨ A,) )- n/2(1 - d+1/2k) ≥ ( 2 d+1/2k(f(σ) - _o(A)) - d+1/2k (f(σ) +_o(-A) ) ) - n/2(1 - d+1/2k)Letting k_d = 2k/(d+1), rearranging the above inequality, we have(1-1/k_d) ( σ) - (1-2/k_d)_o(A) + 1/k_d_o(-A) ≥ - n/2(1 - 1/k_d),which finally gives the desired inequality( σ) ≥_o(A) - 1/k_d-1 (_o(A) + _o(-A)) - n/2. §.§.§ Part 3 Now, let us check thatis a feasible point of the Orthogonal-Cut SDP problem (<ref>). The reason is given by the following Fact (a) and (b). Fact (a).is P.S.D.. Indeed, for any v ∈^n, recall that X = σσ^ and X^* = V V^, we have ⟨ v,v ⟩ = 2/d+1·∑_kl=1^s ⟨ v^ B_k σ, v^ B_l σ⟩⟨ V^ B_k σ, V^ B_l σ⟩= 2/d+1·((⟨ v^ B_k σ, v^ B_l σ⟩)_k,l=1^s· (⟨ V^ B_k σ, V^ B_l σ⟩)_k,l=1^s).The matrix (⟨ v^ B_k σ, v^ B_l σ⟩)_k,l=1^s = Z^ Z ≽ 0, where Z = [vec(v^ B_1 σ), …, vec(v^ B_s σ)]. Similarly, we have (⟨ V^ B_k σ, V^ B_l σ⟩)_k,l=1^s ≽ 0. Thus, ⟨ v,v⟩≥ 0 for any v ∈^n. Thenis P.S.D..Fact (b) The (i,i)'th block ofequals 𝕀_d. To show this, we assume d≥ 2, and due to the symmetry, we just need to check _11 = 1 and _12 = 0. We denote J_ij = E_iiδ_ij + (E_ij + E_ji)/√(2)· (1-δ_ij), and we rewrite _ij as_ij = 2/d+1·∑_a = 1^m ∑_(k,s,l,t) ∈Γ_a⟨ E_ij,J_ks X J_lt⟩⟨ X^*,J_ks XJ_lt⟩where Γ_a = {(k,s,l,t): 1 + (a-1)d≤ k≤ s ≤ ad, 1 + (a-1)d ≤ l ≤ t ≤ ad }. 
We have the following series of simplification_11 =2/d+1·∑_a = 1^m ∑_(k,s,l,t) ∈Γ_a⟨ E_11,J_ks X J_lt⟩⟨ X^*,J_ks XJ_lt⟩= 2/d+1·∑_(k,s,l,t) ∈Γ_1⟨ E_11,J_ks X J_lt⟩⟨ X^*,J_ks XJ_lt⟩= 2/d+1·∑_(k,s,l,t) ∈Γ_1⟨ E_11,J_ksJ_lt⟩⟨ J_ks, J_lt⟩ =2/d+1·∑_1 ≤ k ≤ s ≤ d⟨ E_11,J_ks J_ks⟩= 2/d+1·(∑_k=1^d ⟨ E_11,E_kk E_kk⟩ + ∑_1≤ k < l ≤ d1/2⟨ E_11, E_kk + E_ll⟩)= 2/d+1·(1 + d-1/2) = 1.The third equality used the fact that X and X^* are feasible point so that their (i,i)'th block are 𝕀_d. Similarly, we have_12 =2/d+1·∑_(k,s,l,t) ∈Γ_1⟨ E_12,J_ks X J_lt⟩⟨ X^*,J_ks XJ_lt⟩= 2/d+1·∑_(k,s,l,t) ∈Γ_1⟨ E_12,J_ksJ_lt⟩⟨ J_ks, J_lt⟩= 2/d+1·∑_1 ≤ k ≤ s ≤ d⟨ E_12,J_ks J_ks⟩ = 0.The last equality is because J_ks J_ks is always a diagonal matrix. Therefore, we proved thatis a feasible point of the Orthogonal-Cut SDP problem (<ref>).§.§ Proof of Theorem <ref> Given a point σ∈ℳ_k and a tangent vector u ∈ T_σℳ_k with ‖ u ‖_F = 1, we denote σ (t) = _ℳ_k (σ + t u) the updatewith searching direction u and step size t. The next three lemmas ensure a sufficient increment of the objective function at each step of the RTR algorithm. (Gradient-step) Fix μ_G ≤ 2 ‖ A ‖_1. For any point σ∈ℳ_k such that ‖( σ) ‖_F ≥μ_G, taking searching direction u = ( σ)/ ‖( σ) ‖_F and step size η =μ_G/(20 ‖ A ‖_1), we have( σ(η)) - ( σ) ≥μ_G^2/40 ‖ A ‖_1. The second order expansion of ( σ(t)) around 0 with t ≤ 1 gives ( σ(t)) - ( σ)≥ (∘σ)'(0) t - sup_ξ∈ [0,t]1/2(∘σ)”(ξ) t^2 ≥⟨( σ), u ⟩ t - 1/2‖ A ‖_1 · (4+ 8 t + 8t^2 )· t^2 ≥‖( σ) ‖_F t - 10 ‖ A ‖_1 t^2.The second inequality used the bound on the second order derivative in Lemma <ref> in Appendix <ref>. Now we take t= μ_G /(20 ‖ A ‖_1). Since μ_G ≤ 2 ‖ A ‖_1, we have t ≤ 1. Plugging this t into the above equation completes the proof. (Eigen-step) For any point σ∈ℳ_k, and u∈ T_σℳ_k satisfying ‖ u ‖_F = 1, ⟨ u , ( σ)⟩≥ 0, and λ_H = λ_H(σ, u) = ( σ)[u,u] >0, choosing η =λ_H/(100 ‖ A ‖_1), we have( σ (η)) - ( σ) ≥λ_H^3/4 · 10^4 ‖ A ‖_1^2. 
The third order expansion of ( σ(t)) around 0 for t ≤ 1 gives ( σ(t)) - ( σ)≥⟨( σ(0)),u ⟩ t + 1/2⟨ u, ( σ(0)) [u]⟩ t^2 - 1/6sup_ξ∈ [0,t] (∘σ)”'(ξ) t^3 ≥1/2λ_H t^2 -1/6‖ A ‖_1· (12 + 36t + 48 t^2 + 48 t^3)· t^3≥1/2λ_H t^2 - 24 ‖ A ‖_1 t^3.The second inequality used the bound on the third order derivative in Lemma <ref> in Appendix <ref>. Now we take t= λ_H/(100 ‖ A ‖_1). Note that we always have λ_H(σ, u) ≤‖( σ) ‖_2 ≤‖ A - Λ‖_2 ≤ 2 ‖ A ‖_1, and therefore we have t ≤ 2 ‖ A ‖_1 /(100 ‖ A ‖_1) ≤ 1. Plugging this t into the above equation completes the proof.The last lower bound on the increment of objective function for eigen-step used the loose bound in Lemma <ref>. Using Lemma <ref>, we can give an improved bound for the eigen-step when the norm of the gradient is small. In particular we take μ_G = ‖ A ‖_2. (Improved bound for eigen-step) For any point σ∈ℳ_k with ‖( σ) ‖_F ≤μ_G = ‖ A ‖_2, and u∈ T_σℳ_k satisfying ‖ u ‖_F = 1, ⟨ u , ( σ)⟩≥ 0 and λ_H = λ_H(σ, u) =f(σ)[u,u]>0, choosing η = min(√(λ_H/(216 ‖ A ‖_1)),λ_H/(12 ‖ A ‖_2)), we have( σ (η)) - ( σ) ≥1/4λ_H η^2 = min(λ_H^2/864 ‖ A ‖_1, λ_H^3/576 ‖ A ‖_2^2).The third order expansion of ( σ(t)) around 0 for t ≤ 1 gives ( σ(t)) - ( σ)≥⟨( σ(0)),u ⟩ t + 1/2⟨ u, ( σ(0)) [u]⟩ t^2 - 1/6sup_ξ∈ [0,t] (∘σ)”'(ξ) t^3 ≥1/2λ_H t^2 -1/6 (6‖ A ‖_2 + 3 ‖( σ(0))‖_F ) · t^3 - 1/6‖ A ‖_1 · (42+ 72 t + 48 t^2) · t^4 ≥1/2λ_H t^2 - 3/2‖ A ‖_2 t^3 - 27 ‖ A ‖_1 t^4.The first inequality used the improved bound on the third order derivative of Lemma <ref> in Appendix <ref>, which imply in particular f(σ)_F≤A_2.Taking t= min(√(λ_H/(216 ‖ A ‖_1)),λ_H/(12 ‖ A ‖_2))<1 completes the proof. We are now at a good position to prove Theorem <ref>.Denote ^* =(A) -1/(k-1) · ((A) + (-A)) and g(σ ) = ^* - ( σ). Let T be the number of iterations and {σ^0 , σ^1 ,… , σ^T}⊂ℳ_k the iterates returned by our RTR algorithm from an arbitrary initialization σ^0 ∈ℳ_k. 
We are only interested in the convergence rate as g(σ) > 0, namely the convergence rate below the gap. Since our algorithm is an ascent algorithm, without loss of generality, we assume g(σ^0), …, g(σ^T) > 0 (otherwise the theorem will hold automatically). At each point σ∈ℳ_k, Theorem <ref> gives the following lower bound on the highest curvatureλ_H,max(σ) = sup_u∈ T_σℳ_k⟨ u , ( σ)[u]⟩/⟨ u,u ⟩≥ 2g(σ)/n > 0.We will use this information to bound the algorithm's convergence rate.Case 1.First, we consider the case when all the RTR steps are eigen-steps. In each iteration, the algorithm constructs an update direction u^t with curvature λ_H(σ^t, u^t) ≥λ_H,max(σ^t)/2. According to Lemma <ref>, we haveg(σ^t) - g(σ^t+1) ≥λ_H^3(σ^t)/32· 10^4 ‖ A ‖^2_1≥g(σ^t)^3/32 · 10^4 ‖ A ‖^2_1 n^3,which implies g(σ^t+1) ≤ g(σ^t). Thus, we have 1/g(σ^t+1)^2 - 1/g(σ^t)^2≥(g(σ^t)^2/g(σ^t+1)^2 + g(σ^t)/g(σ^t+1))·1/32 · 10^4 ‖ A ‖_1^2 n≥1/16 · 10^4 ‖ A ‖_1^2 n^3.Summing over t = 0,…, T-1, we have1/g(σ^T)^2 - 1/g(σ^0)^2 = ∑_0 ≤ t ≤ T-11/g(σ^t+1)^2 - 1/g(σ^t)^2≥1/16 · 10^4 ‖ A ‖_1^2 n^3 TTherefore, we obtain the convergence rate g(σ^T) ≤ 400 ‖ A ‖_1 n √(n/T). This implies that ( σ^T) ≥(A) - 1/k-1 ((A) + (-A)) - n/2as soon as T ≥ 64 · 10^4 · n ‖ A ‖_1^2/^2. Case 2.Then, we consider the case where we set μ_G = ‖ A ‖_2, and we use the gradient step as ‖( σ) ‖_F > μ_G, and use the eigen-step as ‖( σ) ‖_F ≤μ_G. First let us bound the number of gradient steps. According to Lemma <ref>, we have T_G μ_G^2/40 ‖ A ‖_1≤ g(σ^0) - g(σ^T) ≤ (A).Hence, we deduce the upper bound T_G ≤ 40 ·‖ A‖_1 (A)/‖ A ‖_2^2. Then let us bound the number of eigen-steps. Let us denote ℐ and 𝒥⊂{ 0, 1, …, T-1 } the subsets of indices corresponding to eigensteps with respectively λ_H ≥ 3‖ A ‖_2^2/(2‖ A ‖_1) and λ_H < 3‖ A ‖_2^2/(2‖ A ‖_1). 
According to Lemma <ref>, we have for all t∈𝒥1/g(σ^t+1) - 1/g(σ^t)≥1/864 ‖ A ‖_1 n^2g(σ^t)/g(σ^t+1)≥1/864 ‖ A ‖_1 n^2,whereas for t ∈ℐ1/g(σ^t+1)^2 - 1/g(σ^t)^2≥1/576 ‖ A ‖_2^2 n^3(g(σ^t)/g(σ^t+1) + g(σ^t)^2/g(σ^t+1)^2) ≥1/288 ‖ A ‖_2^2 n^3.Summing the contributions of the above two equations gives the convergence rateg(σ^T) ≤ c ·max(‖ A ‖_1 n^2/T, ‖ A ‖_2 n √(n/T))for a universal constant c. This guarantees that g(σ^T) ≤ n/2 as soon as T ≥c̃· n max( ‖ A ‖_2^2/^2, ‖ A ‖_1/) for some universal constant c̃. § ACKNOWLEDGEMENTS A.M. was partially supported by the NSF grant CCF-1319979. S.M. was supported by Office of Technology Licensing Stanford Graduate Fellowship. amsalpha ANKKS+12[ABG07]absil2007trust P-A Absil, Christopher G Baker, and Kyle A Gallivan, Trust-region methods on Riemannian manifolds, Foundations of Computational Mathematics 7 (2007), no. 3, 303–330.[ABH16]abbe2016exact Emmanuel Abbe, Afonso S Bandeira, and Georgina Hall, Exact recovery in the stochastic block model, IEEE Transactions on Information Theory 62 (2016), no. 1, 471–487.[AGZ10]anderson2010introduction Greg W Anderson, Alice Guionnet, and Ofer Zeitouni, An introduction to random matrices, vol. 118, Cambridge university press, 2010.[AHK05]arora2005fast Sanjeev Arora, Elad Hazan, and Satyen Kale, Fast algorithms for approximate semidefinite programming using the multiplicative weights update method, Foundations of Computer Science, 2005. FOCS 2005. 46th Annual IEEE Symposium on, IEEE, 2005, pp. 339–348.[AHK12]arora2012multiplicative , The multiplicative weights update method: a meta-algorithm and applications., Theory of Computing 8 (2012), no. 1, 121–164.[AK07]arora2007combinatorial Sanjeev Arora and Satyen Kale, A combinatorial, primal-dual approach to semidefinite programs, Proceedings of the thirty-ninth annual ACM symposium on Theory of computing, ACM, 2007, pp. 
227–236.[ANKKS+12]arie2012global Mica Arie-Nachimson, Shahar Z Kovalsky, Ira Kemelmacher-Shlizerman, Amit Singer, and Ronen Basri, Global motion estimation from point matches, 3D Imaging, Modeling, Processing, Visualization and Transmission (3DIMPVT), 2012 Second International Conference on, IEEE, 2012, pp. 81–88.[BAP+05]baik2005phase Jinho Baik, Gérard Ben Arous, Sandrine Péché, et al., Phase transition of the largest eigenvalue for nonnull complex sample covariance matrices, The Annals of Probability 33 (2005), no. 5, 1643–1697.[Bar95]barvinok1995problems Alexander I. Barvinok, Problems of distance geometry and convex properties of quadratic maps, Discrete & Computational Geometry 13 (1995), no. 2, 189–202.[BBV16]bandeira2016low Afonso S Bandeira, Nicolas Boumal, and Vladislav Voroninski, On the low-rank approach for semidefinite programs arising in synchronization and community detection, arXiv preprint arXiv:1602.04426 (2016).[BCSZ14]bandeira2014multireference Afonso S Bandeira, Moses Charikar, Amit Singer, and Andy Zhu, Multireference alignment using semidefinite programming, Proceedings of the 5th conference on Innovations in theoretical computer science, ACM, 2014, pp. 459–470.[BM03]burer2003nonlinear Samuel Burer and Renato DC Monteiro, A nonlinear programming algorithm for solving semidefinite programs via low-rank factorization, Mathematical Programming 95 (2003), no. 2, 329–357.[Bou15]boumal2015Riemannian Nicolas Boumal, A Riemannian low-rank method for optimization over semidefinite matrices with block-diagonal constraints, arXiv:1506.00575 (2015).[BVB16]boumal2016non Nicolas Boumal, Vlad Voroninski, and Afonso Bandeira, The non-convex burer-monteiro approach works on smooth semidefinite programs, Advances in Neural Information Processing Systems, 2016, pp. 
2757–2765.[DAM15]deshpande2015asymptotic Yash Deshpande, Emmanuel Abbe, and Andrea Montanari, Asymptotic mutual information for the two-groups stochastic block model, arXiv preprint arXiv:1507.08685 (2015).[Fri03]friedman2003proof Joel Friedman, A proof of alon's second eigenvalue conjecture, Proceedings of the thirty-fifth annual ACM symposium on Theory of computing, ACM, 2003, pp. 720–724.[GH11]garber2011approximating Dan Garber and Elad Hazan, Approximating semidefinite programs in sublinear time, Advances in Neural Information Processing Systems, 2011, pp. 1080–1088.[Gro96]grothendieck1996resume Alexander Grothendieck, Résumé de la théorie métrique des produits tensoriels topologiques, Resenhas do Instituto de Matemática e Estatística da Universidade de São Paulo 2 (1996), no. 4, 401–481.[GV16]guedon2016community Olivier Guédon and Roman Vershynin, Community detection in sparse networks via grothendieck’s inequality, Probability Theory and Related Fields 165 (2016), no. 3-4, 1025–1049.[GW95]goemans1995improved Michel X Goemans and David P Williamson, Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming, Journal of the ACM (JACM) 42 (1995), no. 6, 1115–1145.[HWX16]hajek2016achieving Bruce Hajek, Yihong Wu, and Jiaming Xu, Achieving exact cluster recovery threshold via semidefinite programming, IEEE Transactions on Information Theory 62 (2016), no. 5, 2788–2797.[JMRT16]javanmard2016phase Adel Javanmard, Andrea Montanari, and Federico Ricci-Tersenghi, Phase transitions in semidefinite relaxations, Proceedings of the National Academy of Sciences 113 (2016), no. 
16, E2218–E2223.[Joh01]johnstone2001distribution Iain M Johnstone, On the distribution of the largest eigenvalue in principal components analysis, Annals of statistics (2001), 295–327.[KKMO07]khot2007optimal Subhash Khot, Guy Kindler, Elchanan Mossel, and Ryan O'Donnell, Optimal inapproximability results for max-cut and other 2-variable csps?, SIAM Journal on Computing 37 (2007), no. 1, 319–357.[KM09]korada2009exact Satish Babu Korada and Nicolas Macris, Exact solution of the gauge symmetric p-spin glass model on a complete graph, Journal of Statistical Physics 136 (2009), no. 2, 205–230.[KN12]khot2012grothendieck Subhash Khot and Assaf Naor, Grothendieck-type inequalities in combinatorial optimization, Communications on Pure and Applied Mathematics 65 (2012), no. 7, 992–1035.[KW92]kuczynski1992estimating J Kuczyński and H Woźniakowski, Estimating the largest eigenvalue by the power and lanczos algorithms with a random start, SIAM journal on matrix analysis and applications 13 (1992), no. 4, 1094–1122.[Mas14]massoulie2014community Laurent Massoulié, Community detection thresholds and the weak ramanujan property, Proceedings of the 46th Annual ACM Symposium on Theory of Computing, ACM, 2014, pp. 694–703.[MNS13]mossel2013proof Elchanan Mossel, Joe Neeman, and Allan Sly, A proof of the block model threshold conjecture, arXiv:1311.4115 (2013).[MNS15]mossel2015reconstruction , Reconstruction and estimation in the planted partition model, Probability Theory and Related Fields 162 (2015), no. 3-4, 431–461.[Mon16]montanari2016grothendieck Andrea Montanari, A Grothendieck-type inequality for local maxima, arXiv:1603.04064 (2016).[MPW16]moitra2016robust Ankur Moitra, William Perry, and Alexander S Wein, How robust are reconstruction thresholds for community detection?, Proceedings of the 48th Annual ACM SIGACT Symposium on Theory of Computing, ACM, 2016, pp. 
828–841.[MS16]montanari2016semidefinite Andrea Montanari and Subhabrata Sen, Semidefinite programs on sparse random graphs and their application to community detection, Proceedings of the 48th Annual ACM SIGACT Symposium on Theory of Computing, ACM, 2016, pp. 814–827.[Nes13]nesterov2013introductory Yurii Nesterov, Introductory lectures on convex optimization: A basic course, vol. 87, Springer Science &amp; Business Media, 2013.[Pat98]pataki1998rank Gábor Pataki, On the rank of extreme matrices in semidefinite programs and the multiplicity of optimal eigenvalues, Mathematics of operations research 23 (1998), no. 2, 339–358.[Sin11]singer2011angular Amit Singer, Angular synchronization by eigenvectors and semidefinite programming, Applied and computational harmonic analysis 30 (2011), no. 1, 20–36.[SS11]singer2011three Amit Singer and Yoel Shkolnisky, Three-dimensional structure determination from common lines in cryo-em by eigenvectors and semidefinite programming, SIAM journal on imaging sciences 4 (2011), no. 2, 543–572.[Ste10]steurer2010fast David Steurer, Fast sdp algorithms for constraint satisfaction problems, Proceedings of the twenty-first annual ACM-SIAM symposium on Discrete Algorithms, SIAM, 2010, pp. 684–697. § SOME TECHNICAL STEPS §.§ Technical lemmas on (∘σ)(t)In this section, we give an upper bound to the second and third derivatives of (∘σ) (t) = ⟨σ(t), A σ(t) ⟩ (these notations are defined below). These bounds are important in bounding the complexity of the Riemannian trust-region method in solving the non-convex SDP problem. Fix a point σ∈ℳ_k ⊂^n× k on the manifold, and a tangent vector u∈ T_σℳ_k = {u = [u_1, …, u_i]^∈^n × k: u_i ∈^k, ⟨σ_i, u_i ⟩ = 0, ∀ i ∈ [n] } with ‖ u ‖_F = 1. Let σ(t) = _ℳ_k(σ + t u) be the orthogonal projection of σ + t u onto the manifold _k. For a given symmetric matrix A ∈^n × n, let ( σ) = ⟨σ, A σ⟩. We would like to study the derivatives of (∘σ) (t) = ( σ(t)) with respect to t. 
Furthermore, we define u_i(t) = u_i/√(1 + t^2 ‖ u_i ‖_2^2), u (t) = [ u_1 (t) ,… , u_n (t)]^, D(t) = ([‖ u_1(t) ‖_2^2,…, ‖ u_n(t) ‖_2^2 ]), and Λ(t) = (Aσ(t) σ(t)^). For convenience, we will denote = σ(t), = u(t), = D(t), and = Λ(t).For any σ∈ℳ_k and u ∈ T_σℳ_k, let σ(t) = _ℳ_k(σ + t u). We have ∀ t ∈σ'(t) =- tD(t) σ(t) + u(t),σ”(t) = [-D(t) + 3 t^2 D(t)^2 ] σ(t) - 2 t D(t) u(t),σ”'(t) =[9 t D(t)^2 - 15 t^3 D(t)^3] σ(t) + [-3D(t) + 9 t^2 D(t)^2] u(t).To calculate the first three derivatives of σ (t), we expand each row of σ(t+r) up to third order in r:σ_i(t+r) =σ_i + t u_i + r u_i/√(1 + (t+r)^2 ‖ u_i ‖_2^2 )= σ_i + tu_i + ru_i/√(1 + t^2 ‖ u_i ‖_2^2)·( 1- 1/2·2rt ‖ u_i ‖_2^2 + r^2 ‖ u_i ‖_2^2/1 + t^2 ‖ u_i ‖_2^2 +3/8(2rt ‖ u_i ‖_2^2 + r^2 ‖ u_i ‖_2^2/1 + t^2 ‖ u_i ‖_2^2)^2 . . - 5/16(2rt ‖ u_i ‖_2^2 + r^2 ‖ u_i ‖_2^2/1 + t^2 ‖ u_i ‖_2^2)^3 + o(r^3))=σ_i(t) + {[- t ‖ u_i(t) ‖_2^2] σ_i(t) + u_i(t) } r+ {[-1/2‖ u_i(t) ‖_2^2 + 3/2 t^2 ‖ u_i(t)‖_2^4] σ_i(t) + [- t ‖ u_i(t)‖_2^2] u_i(t) } r^2+ {[3/2 t ‖ u_i(t)‖_2^4 - 5/2 t^3 ‖ u_i(t) ‖_2^6] σ_i(t) + [-1/2‖ u_i(t) ‖_2^2 + 3/2 t^2 ‖ u_i(t)‖_2^4] u_i(t) } r^3 + o(r^3).By matching each expansion coefficient to the corresponding derivative, we obtain the desired result.For (∘σ)(t) as defined abovesup_ξ∈ [0,t]| (∘σ )” (ξ) |≤‖ A ‖_1 ·(4 + 8 t+ 8t^2), ∀ t ≥ 0.We explicitly calculate the second derivative(∘σ)”(t) = ⟨σ'(t), ∇^2 ( σ(t)) [ σ'(t)] ⟩ + ⟨∇( σ(t)), σ”(t) ⟩ = ⟨σ'(t), 2 Aσ'(t) ⟩ + ⟨ 2 A σ(t), σ”(t) ⟩= ⟨ - t+ , 2 A[- t + ] ⟩ + ⟨ 2 A, [- + 3 t^2 ^2 ]- 2 t ⟩=2 ⟨, (A - ) ⟩ - 4t [⟨, A ⟩ + ⟨ A , ⟩] + t^2 [2 ⟨, A⟩ + 6 ⟨ A , ^2 ⟩ ].Noticing that ‖ u (t) ‖_F ≤‖ u (0) ‖_F = 1, we can use the bounds derived in Appendix <ref> to obtain the following inequality| (∘σ )”(t) | ≤ 2 |⟨, (A - ) ⟩ | + 4t [ |⟨, A ⟩| + |⟨ A , ⟩ | ] + t^2 [2 |⟨, A⟩ | + 6 |⟨ A , ^2 ⟩ | ]≤ 4 ‖ A ‖_1 + 8 t ‖ A ‖_1 + 8t^2 ‖ A ‖_1. 
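The closed-form first derivative σ'(t) = -t D(t) σ(t) + u(t) from the first lemma of this appendix can be checked against a finite-difference approximation of the retracted curve. A small numerical sketch (ours, assuming numpy and a tangent direction u):

```python
import numpy as np

def sigma_t(sigma, u, t):
    """sigma(t) = P_M(sigma + t u): row-wise renormalization, which equals
    (sigma_i + t u_i)/sqrt(1 + t^2 ||u_i||^2) when u is tangent at sigma."""
    Y = sigma + t * u
    return Y / np.linalg.norm(Y, axis=1, keepdims=True)

def sigma_t_prime(sigma, u, t):
    """Closed form sigma'(t) = -t D(t) sigma(t) + u(t) from the lemma."""
    s = 1.0 / np.sqrt(1.0 + t**2 * np.sum(u * u, axis=1, keepdims=True))
    ut = u * s                                  # rows u_i(t)
    d = np.sum(ut * ut, axis=1, keepdims=True)  # diagonal of D(t)
    return -t * d * sigma_t(sigma, u, t) + ut
```

A central difference of sigma_t at any t > 0 agrees with sigma_t_prime to the expected O(h^2) accuracy, which is a quick consistency check on the expansion used in the proof.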
For (∘σ)(t) as defined abovesup_ξ∈ [0,t]| (∘σ )”' (ξ) |≤‖ A ‖_1 · (12 + 36 t + 48 t^2 + 48 t^3),∀ t ≥ 0.We explicitly calculate the third derivative(∘σ)”'(t)= ⟨∇( σ(t)), σ”'(t) ⟩ + 3 ⟨σ'(t), ∇^2 ( σ(t))[σ”(t)] ⟩= ⟨∇( ), [9 t ^2 - 15 t^3 ^3]+ [-3 + 9 t^2 ^2 ] ⟩+ 3 ⟨- t + , ∇^2 ( ) [(- + 3t^2 ^2) - 2t]⟩=-6 [⟨, A ⟩ + ⟨, A⟩]+ [18 ⟨, A ^2 ⟩ + 6 ⟨, A ⟩ - 12 ⟨, A⟩] t+ [18 ⟨, A^2 ⟩ + 12 ⟨, A⟩ + 18 ⟨, A ^2 ⟩]t^2 + [-30 ⟨, A ^3 ⟩ - 18 ⟨ ,A ^2 ⟩]t^3.The inequality is obtained by upper bounding each term using the bounds derived in Appendix <ref>. The above bound on the third derivative of order ‖ A ‖_1 as t→ 0. The next lemma proves a bound of order ‖ A ‖_2 + ‖( σ(0))‖_F as t→ 0. If ‖( σ(0))‖_F is small, this improves the above bound.For (∘σ)(t) as defined above, an improved bound on its third derivative givessup_ξ∈ [0,t]| (∘σ )”' (ξ) |≤ 6 ‖ A ‖_2 + 3 ‖( σ(0))‖_F + ‖ A ‖_1 · (42 t+ 72 t^2 + 48 t^3), ∀ t ≥ 0. From the proof in the previous lemma, we have| (∘σ)”' (t) |≤ 6 ( |⟨, A ⟩| + |⟨, A⟩ | )+‖ A ‖_1 · (36 t + 48 t^2 + 48 t^3).. We next bound more carefully g(t) = ⟨σ(t), A D(t) u(t) ⟩. Simple calculation gives usu'(t) = - t · D(t) u(t) and D'(t) = -2t · D(t)^2.Hence,g'(t) = ⟨σ'(t), AD(t) u(t) ⟩ + ⟨σ(t), A D'(t) u(t) ⟩ + ⟨σ(t), A D(t) u'(t)⟩= ⟨ -t+ , A⟩ + ⟨σ, A(-2t^2) ⟩ + ⟨, A(-t) ⟩= ⟨, A⟩ + [-⟨, A⟩ -3⟨,A^2 ⟩] t.According to the bounds in Appendix <ref>, we have| g'(t) |≤‖ A ‖_2 + 4 t ‖ A ‖_1.In the meanwhile, we have| g(0)| = |⟨σ, A D u ⟩| =|⟨ (A - Λ)σ, D u ⟩| = |⟨1/2( σ(0)), Du ⟩|≤1/2‖( σ(0))‖_F.According to the Taylor expansion of g(t) around 0 and t≥ 0 at first order, we have| g(t) |≤| g(0) | + t sup_ξ∈ [0,t]| g'(ξ) |≤1/2‖( σ(0))‖_F + t‖ A ‖_2 + 4 t^2 ‖ A ‖_1.Hence the improved bound follows.§.§ All the boundsIn this section we give all the bounds used in the proof of Lemma <ref>, <ref> and <ref>. Let σ∈_k and u ∈^n × k with ‖ u ‖_F ≤ 1. Note that here we do not require u ∈ T_σ_k. Denote D = ([‖ u_1‖_2^2, …, ‖ u_n ‖_2^2]) and Λ =(A σσ^ ). 
We have the following bound for each term. 2 *|⟨σ, AD u ⟩| = |⟨ Du, A σ⟩| ≤ max_i ‖ (A σ)_i ‖_2 ≤ ‖ A ‖_1. *|⟨ u, AD σ⟩| = |⟨ u, (A u σ^) u ⟩| ≤ max (|(Auσ^)|)= max_i |⟨σ_i, (Au)_i⟩| ≤ max_i ‖ (Au)_i ‖_2 ≤ ‖ Au ‖_F≤ ‖ A‖_2 ‖ u ‖_F≤ ‖ A ‖_2. *|⟨ u, AD u ⟩|≤ ‖ ADu ‖_F≤ ‖ A ‖_2 ‖ Du‖_F≤ ‖ A ‖_2. *|⟨ Dσ, AD σ⟩|=|⟨ u, (A D σσ^) u ⟩| ≤ max_i(|(A D σσ^)_ii|)≤ max_i ‖ (ADσ)_i‖_2 ≤ ‖ AD ‖_1 ≤ | A |_∞ ≤ ‖ A ‖_1.*|⟨ A σ, D^2 σ⟩| =|⟨ u, (A σσ^) D u⟩| ≤ max_i (|(A σσ^)_ii D_ii|)≤ max_i((A σσ^)_ii) ≤ max_i (⟨σ_i, (A σ)_i ⟩ ) ≤ max_i ‖ (Aσ)_i ‖_2 ≤ ‖ A ‖_1.*|⟨ u, (A - Λ) u ⟩|≤ ‖ A - Λ‖_2≤ ‖ A ‖_2 + max_i|(A σσ^)_ii| ≤ ‖ A ‖_2 + ‖ A ‖_1 ≤2 ‖ A ‖_1.*|⟨ D σ, AD u ⟩| =|⟨ AD σ, Du⟩| ≤ max_i ‖ (ADσ)_i ‖_2≤ ‖ A ‖_1.*|⟨σ, AD^2 u ⟩⟩|≤ max_i ‖ (A σ)_i ‖_2≤ ‖ A ‖_1.*|⟨ u, AD^2 σ⟩|≤ max_i ‖ (AD^2 σ)_i ‖_2≤ ‖ A D^2 ‖_1≤ ‖ A ‖_1.*|⟨σ, AD^3 σ⟩| =|⟨ u, (A σσ^) D^2 u ⟩| ≤ max((A σσ^) D^2)≤ max((A σσ^))≤ ‖ A ‖_1.*|⟨ D σ, AD^2 σ⟩| =|⟨ u, (A D σσ^) D u⟩| ≤ max_i (|(A D σσ^)_ii D_ii|)≤ max_i |⟨σ_i, (AD σ)_i ⟩| ≤ ‖ A D ‖_1≤ ‖ A ‖_1.
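As a closing numerical sanity check (ours, not part of the proofs), the Schur-product fact used earlier — ‖W ∘ X‖_op ≤ ‖W‖_op for symmetric W and any X in the elliptope — can be probed on random instances, assuming numpy:

```python
import numpy as np

rng = np.random.default_rng(0)
violations = 0
for _ in range(50):
    n, k = 20, 4
    W = rng.standard_normal((n, n))
    W = (W + W.T) / 2.0
    sigma = rng.standard_normal((n, k))
    sigma /= np.linalg.norm(sigma, axis=1, keepdims=True)
    X = sigma @ sigma.T  # X >= 0 with unit diagonal: a point of the elliptope
    if np.linalg.norm(W * X, 2) > np.linalg.norm(W, 2) + 1e-8:
        violations += 1
```

Since X = σσ^T with unit-norm rows is positive semidefinite with unit diagonal, the variational argument given in the proof guarantees that no violations occur.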
The Atacama Large Millimeter/submillimeter Array (ALMA) recently revealed a set of nearly concentric gaps in the protoplanetary disk surrounding the young star HL Tau. If these are carved by forming gas giants, this provides the first set of orbital initial conditions for planets as they emerge from their birth disks. Using N-body integrations, we have followed the evolution of the system for 5 Gyr to explore the possible outcomes. We find that HL Tau initial conditions scaled down to the size of typically observed exoplanet orbits naturally produce several populations in the observed exoplanet sample. First, for a plausible range of planetary masses, we can match the observed eccentricity distribution of dynamically excited radial velocity giant planets with eccentricities > 0.2. Second, we roughly obtain the observed rate of hot Jupiters around FGK stars. Finally, we obtain a large efficiency of planetary ejections of ≈ 2 per HL Tau-like system, but the small fraction of stars observed to host giant planets makes it hard to match the rate of free-floating planets inferred from microlensing observations. In view of upcoming GAIA results, we also provide predictions for the expected mutual inclination distribution, which is significantly broader than the absolute inclination distributions typically considered by previous studies.
planets and satellites: dynamical evolution and stability – chaos – celestial mechanics – planets and satellites: protoplanetary discs – planets and satellites: planet-disc interactions

§ INTRODUCTION

A wealth of discoveries over the last two decades has generated an extensive and diverse exoplanetary database <cit.>. Perhaps most immediately striking is the fact that many systems are much more dynamically excited than the Solar System <cit.>. Additionally, observations reveal that ∼ 1% of F, G, and K dwarf stars host hot Jupiters at short orbital periods <cit.>, where it would seem difficult for them to form in situ <cit.>. Microlensing surveys have further inferred a frequency of ∼ 1.3 free-floating Jupiter-mass planets per main-sequence star <cit.>, which they interpret as planets ejected from their birth system. The above constraints all point to tumultuous dynamical histories for these planetary systems. Several mechanisms have been proposed to dynamically excite systems, leading to planetary scatterings, ejections, and, at high enough eccentricity, circularization through tides to form hot Jupiters. First, under the right conditions, direct interactions with the protoplanetary disk can grow planetary eccentricities <cit.>. Another possibility is for mean-motion resonances to raise eccentricities under the action of disk or planetesimal-driven migration <cit.>, or for secular interactions with stellar <cit.> or planetary <cit.> companions to drive interior planets onto highly elliptical paths. Alternatively, planets could form in nearly circular but extremely compact configurations that over time will lead to dynamical excitation through gravitational scattering <cit.>. Previous studies involving two planets of equal mass <cit.> performed simulations showing that planet-planet scattering can dynamically excite systems, but not enough to match the observed distribution.
However, systems containing two planets with unequal masses, or systems containing three or more planets, have been able to produce distributions closer to the observed sample <cit.>. One important challenge for these studies is that the various dynamical phenomena above depend on particular disk conditions and on the initial orbital configurations following planet formation. By varying parameters to match today's observed exoplanet distribution, <cit.> and <cit.> infer an appropriate range of initial conditions, but this inverse problem inevitably leads to degeneracies among model parameters. The ideal scenario would be to instead determine initial conditions of planet formation directly through observation and evolve them for Gyr timescales to compare the results to the comparably aged exoplanet sample. Unfortunately, observations of forming planetary systems are difficult due to their small angular scale, the stage's short (Myr) duration, and the high levels of dust extinction during this phase. However, the Atacama Large Millimeter/submillimeter Array (ALMA) is revolutionizing the field through its ability to probe optically thin millimeter-wavelength continuum emission at unprecedented AU-scale resolution. In October 2014, <cit.> obtained an image of the young star HL Tauri (hereafter HL Tau), revealing a set of nearly concentric gaps (Fig. <ref>) in the dust distribution. More recently, <cit.> have also observed gaps in the gas at coincident locations. Several mechanisms have been proposed to explain the origin of the gaps, such as clumping <cit.>, magnetic instabilities <cit.>, and rapid pebble growth due to condensation fronts <cit.>. However, arguably the most intriguing explanation is that the gaps are carved by forming giant planets <cit.>. Theoretical studies have long shown that giant planets are expected to carve out gaps in the surrounding gas disk during their formation <cit.>.
One complication is that the continuum emission in the ALMA image probes not the gas, but rather ∼millimeter-sized dust grains in the disk, which are additionally subject to aerodynamic forces. Two-phase (gas+dust) hydrodynamical codes have been developed to trace dust behavior, and find that planets as small as ∼ 1 Neptune mass are capable of opening a gap in the surrounding dust <cit.>. But if Neptune-mass or larger planets orbit in the observed gaps, this has important implications for the stability of the system. <cit.> showed through N-body integrations that if planets orbit in each of the observed gaps, the current masses must be ≲ 1 Saturn mass in order to ensure orbital stability over the lifetime of the system. However, even these stable cases are only transient. As the planets continue to accrete gas from the surrounding disk, and on galactically significant timescales of Gyrs, the systems will destabilize, undergoing close encounters that eject planets to interstellar space while leaving the remaining planets on inclined, eccentric orbits <cit.>. Given that the HL Tau system potentially provides us with the first set of initial conditions as giant planets emerge from their birth disk, we undertake an analysis of the potential scattering outcomes. We then compare the results following several Gyr of dynamical evolution to the observed exoplanet sample. We begin by describing our numerical methods and sets of initial conditions in Sec. <ref>. We then compare our resulting eccentricity distribution across sets of initial conditions (Sec. <ref>) and to the observed radial velocity sample (Sec. <ref>). In Sec. <ref> we explore the effect of adding a mass distribution to the planets. In Sec. <ref> we make predictions for the expected distribution of mutual inclinations between planets, and in Sec. <ref> investigate the production of free-floating planets. Finally, in Sec.
<ref>, we explore the production rate of hot Jupiters, and their expected obliquity distribution.

§ METHODS

§.§ The HL Tau System

<cit.> identified five major gaps in the HL Tau disk, which we label gaps 1-5 (see Fig. <ref>). Planets orbiting in the inner two gaps are well enough separated from other gaps that they are dynamically stable on Gyr timescales. The stability of the system therefore hinges on any planets orbiting in the outer gaps <cit.>. <cit.> argue that each of the gaps' widths should be comparable to its progenitor planet's Hill sphere, inferring masses consistent with giant planets[<cit.> do not consider Gap 5, but their same reasoning would apply to this gap given its comparable width to the other ones]. Their inferred masses are likely overestimates, since the gap widths observed in the dust distribution from hydrodynamic simulations including both gas and dust are generally several Hill radii <cit.>; however, many disk parameters remain uncertain and several ingredients are important to the disk physics and radiative transfer <cit.>. We first considered a case with five planets, one in each of gaps 1-5. The initial conditions for this experimental setup were taken from the gap locations reported by <cit.> and can be found in the top row of Table <ref>. A more detailed description of the numerical setup for all suites of simulations can be found in Sec.
<ref>. Alternatively, one might interpret the two closest gaps (3 and 4) as due to a single planet, with the intervening material representing dust that is co-orbital with the planet. This scenario was modeled with hydrodynamic simulations including gas and dust by <cit.> and <cit.>, inferring a mass for this planet of ∼ 0.5 M_J. We therefore ran an additional suite of simulations with planets in each of gaps 1, 2 and 5, and a single planet at the location of the brightness peak between gaps 3 and 4, reported by <cit.> (see the last two rows in Table <ref>). Several authors <cit.> have noted that the orbital periods at the locations of the various gaps in the HL Tau disk form near-integer (i.e., resonant) period ratios. The ALMA resolution, together with our ignorance of where within the gap the planet orbits, makes it impossible to directly determine whether this is the case. However, <cit.> suggested that resonantly interacting planets experiencing damping from the surrounding disk should develop eccentricities of a few percent, which was consistent with the gap offsets later reported by <cit.>. Nevertheless, for generality, we chose to simulate (for each of the four- and five-planet scenarios) both a case where planets are initialized with random angles, and one where the planets were initially damped to the center of the nearest resonance (see Sec. <ref> for details). Previous works <cit.> have found that suites of unstable planetary systems with different initial conditions tend to converge to the same final orbital distribution, as the chaotic interactions tend to erase a system's memory of its initial conditions. One might therefore hope that our four cases summarized in Table <ref> would converge to the same distribution, obviating the need for precise initial conditions. We examine this hypothesis in Sec.
<ref>. One caveat to the above-mentioned convergence of outcomes is that, depending on the number of planets and their masses, the HL Tau system may be stable over Gyr timescales. In particular, if we ignore the largely dynamically decoupled inner two planets, our four-planet case reduces to an effective two-planet system. In this case we can apply the analytic Hill stability criterion <cit.> to find that if the sum of the two planets' masses is ≲ 2 M_J, the planets cannot undergo close encounters. Adding a nearby third planet (i.e., the five-planet case) breaks this restriction, and leads to eventual instability. In all suites of simulations, we adopted planetary masses that would render the systems unstable, translating to higher masses in the four-planet case (see Sec. <ref>).

§.§ Numerical Integrations

Our simulations made use of the open-source N-body package REBOUND <cit.>. In particular, all integrations were performed with the adaptive, high-order integrator IAS15 <cit.>. Any additional forces were incorporated using the REBOUNDx library[<https://github.com/dtamayo/reboundx>]. One hundred integrations were performed for each case listed in Table <ref>, and each system was integrated for 5 Gyr. To limit the parameter space, we assigned all planets the same mass and radius. In all runs, the radius was set to a constant value of ≈ 1 Jupiter radius (R_J ≡ 71492 km). We note that our results in Sec. <ref> are insensitive to the exact radius adopted, since collisions are rare[At the large orbital distances in the HL Tau system, where the orbital velocities are much smaller than the escape velocities from the planets' surfaces, scattering events dominate collisions <cit.>.]. All planets were initialized as proto-giant planets with M_0 = 10 M_⊕. At approximately this value, one expects the mass accretion to transition to exponential growth <cit.>. At some point this accretion must halt, though the exact mechanism by which this occurs is unclear <cit.>.
We therefore made the simple choice to grow the mass with the modify_mass implementation in REBOUNDx, using a time-dependent e-folding timescale τ_M that keeps the masses in the planetary regime,

τ_M = τ_M0 e^(t/τ_Disk),

where τ_M0 is a constant, and the exponential factor is meant to qualitatively capture the dispersal of the protoplanetary disk on a timescale τ_Disk. We set τ_Disk = 3 Myr, which corresponds to the median timescale observed for the disappearance of protoplanetary disks <cit.>. Taking expected accretion timescales τ_M0 <cit.> typically yields planetary masses that are too high <cit.>. In our five-planet cases, we therefore simply tuned τ_M0 to 0.85 Myr, which translated to final planetary masses of ≈ 1.1 Jupiter masses (M_J). This provided a typical observed mass for giant planets, and was large enough for all systems to go unstable. We note that once planets reach a mass at which pairs of adjacent planets are Hill unstable, the instability happens swiftly on a few conjunction timescales <cit.>. This means that the particular mass-growth prescription adopted does not have a large effect on the results. We verified this by running an additional suite of simulations mimicking the 5-planet nominal initial conditions, but with no mass-growth prescription: planets were simply initialized with their final mass of 1.1 Jupiter masses. The resulting cumulative eccentricity distribution was statistically consistent with having been drawn from the same distribution as the simulations with the mass-growth prescription above. In the four-planet case, however, 1.1 M_J was not large enough to render the majority of systems unstable. Since we later aim to compare to both the five-planet case and the observed exoplanet sample, we therefore decreased τ_M0 to 0.60 Myr, corresponding to final masses of 4.7 M_J. This rendered most systems unstable.
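The mass-growth prescription above can be integrated in closed form: with dm/dt = m/τ_M(t) and τ_M(t) = τ_M0 e^(t/τ_Disk), the integral of 1/τ_M from 0 to ∞ is τ_Disk/τ_M0, so the asymptotic mass is m_0 exp(τ_Disk/τ_M0). A short sketch checking that the quoted τ_M0 values reproduce the quoted final masses (the Jupiter/Earth mass conversion is an assumption of this sketch, not a value from the text):

```python
import math

M_JUP_IN_EARTH = 317.8   # assumed conversion: 1 M_J in Earth masses

def final_mass_mjup(m0_earth=10.0, tau_m0=0.85, tau_disk=3.0):
    """Asymptotic mass (in M_J) for dm/dt = m / (tau_m0 * exp(t / tau_disk)).

    Integrating 1/tau_M(t) over t in [0, infinity) gives tau_disk / tau_m0,
    so m(infinity) = m0 * exp(tau_disk / tau_m0)."""
    return m0_earth * math.exp(tau_disk / tau_m0) / M_JUP_IN_EARTH

five_planet = final_mass_mjup(tau_m0=0.85)   # ≈ 1.07 M_J
four_planet = final_mass_mjup(tau_m0=0.60)   # ≈ 4.67 M_J
```

Starting from M_0 = 10 M_⊕ with τ_Disk = 3 Myr, τ_M0 = 0.85 Myr gives ≈ 1.07 M_J and τ_M0 = 0.60 Myr gives ≈ 4.67 M_J, consistent with the ≈ 1.1 M_J and 4.7 M_J quoted above.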
This promotes a loss of memory of initial conditions through chaotic interactions <cit.>; in this way, a single set of initial conditions can hope to reproduce the wider exoplanet sample. To simulate dissipation from the disk, we followed <cit.> and used the modify_orbits_forces implementation in REBOUNDx to apply eccentricity damping at constant angular momentum <cit.>. As above, we modify the e-folding timescale τ_e to qualitatively capture the disk dispersal,

τ_e = τ_e0 e^(t/τ_Disk),

where we set τ_e0 = 1 Myr (∼ 10^3 orbits) for all simulations. The above effects were turned off after 40 Myr, at which point their effect is negligible. In a more complete physical picture, the details of the eccentricity damping would change as the planets grow, clear a gap in the disk, etc. <cit.>. While eccentricity damping can stabilize planetary systems against slow chaotic diffusion <cit.>, once adjacent planets reach a mass at which they are Hill unstable, the instability is so violent that no reasonable amount of eccentricity damping will stabilize the system. We ran a copy of the 5-planet nominal suite of simulations but with τ_e0 a factor of ten larger, and obtained statistically consistent results. As the simulation progressed, planets were removed if they reached a distance of 1000 AU from the star. All collisions between bodies were treated as perfectly inelastic mergers. There is considerable uncertainty in the mass of the central star. Estimates range from ∼ 0.55 M_⊙ (solar masses, ) to ∼ 1.3 M_⊙ <cit.>. For simplicity, we adopted a stellar mass of 1 M_⊙. In the absence of additional forces and collisions, the point-particle N-body problem can be non-dimensionalized by expressing all the masses in units of the central mass, and time in units of the innermost orbital period. We argued above that the additional effects have little impact on the outcomes, and as we will see below, collisions are largely negligible.
Thus, adopting a different stellar mass closely approximates proportionately adjusting the masses of the planets and the timescales from our reported values. For computational reasons, if planets reached separations < 0.2 AU from the central star, we saved a checkpoint of the simulation, merged the planet with the star, and continued the integration. We present the results of these simulations in Sec. <ref>. In a separate investigation, we subsequently loaded the above-mentioned checkpoints where planets reached 0.2 AU, and continued these integrations with additional short-range forces (general relativity corrections, tidal precession) taken into account. We discuss those results in Sec. <ref>. Each integration required a different amount of time depending on the closeness of planetary encounters, but each 100-simulation case we executed required ∼ 3 × 10^4 hours on older Intel Xeon CPUs (E5310, 1.6 GHz).

§.§ Initial Conditions

As discussed above, the semimajor axes for the nominal cases were taken from the values reported by <cit.>. The eccentricities were initialized to zero, while the inclinations were drawn from a uniform distribution in the interval [0,1^∘]. All remaining orbital angles were randomly drawn from the interval [0,360^∘]. For the resonant cases, we note that capture into resonance depends on the relative rate at which pairs of planets migrate toward one another in the disk <cit.>, which in turn remains uncertain <cit.>. Determining whether capture into resonance is likely in the case of HL Tau is beyond the scope of this paper; we simply place planets in resonance to see whether such configurations lead to qualitatively different outcomes. We will find in Sec. <ref> that the different initial configurations lead to consistent outcomes, obviating some of these concerns.
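The nominal draw of initial orbital elements described above can be sketched as follows (the list of gap radii is illustrative: only the 64.2 and 69.0 AU values appear in this text, the others are assumed for the example):

```python
import random

random.seed(42)
GAP_RADII_AU = [13.2, 32.3, 64.2, 73.7, 91.0]   # illustrative gap locations

def draw_nominal_elements(radii):
    """Nominal initial conditions: e = 0, inclinations uniform in [0, 1] deg,
    all remaining angles uniform in [0, 360) deg."""
    return [{
        "a": a,                                # semimajor axis (AU)
        "e": 0.0,                              # circular orbits
        "inc": random.uniform(0.0, 1.0),       # inclination (deg)
        "Omega": random.uniform(0.0, 360.0),   # longitude of ascending node
        "omega": random.uniform(0.0, 360.0),   # argument of pericenter
        "f": random.uniform(0.0, 360.0),       # true anomaly
    } for a in radii]

planets = draw_nominal_elements(GAP_RADII_AU)
```

In practice, elements like these would be handed to the N-body setup (with angles converted to radians as required); the sketch only illustrates the sampling described in the text.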
In any case, the steps below should be seen as a simple numerical procedure to obtain resonant chains, rather than reflecting physically plausible parameters from disk migration. We began with planets initialized as in the nominal case, but we moved outward and slightly separated the outer three planets. We then added semimajor axis damping to the outermost planet following the prescription of <cit.> that is implemented in the modify_orbits_forces routine in REBOUNDx. The outermost planet then migrated inward until it captured into the desired resonance near the observed separations, at which point the pair of planets continued migrating inward. In the 5-planet case, this resulted in the outer two planets locking into a 4:3 resonance, and the third and fourth planets trapping into a 5:4 resonance. In the 4-planet case the outer two planets locked into a 3:2 resonance. In order for the three planets to finish at their appropriate initial semimajor axes, and to avoid numerical artifacts arising from abruptly turning off the migration, we chose to smoothly remove it by varying the exponential semimajor axis damping timescale τ_a as

τ_a(t) = τ_a0 / (a_3(t) - a_3 ALMA),

where a_3 ALMA = 64.2 AU and τ_a0 = 90 Myr (slow enough for the planets to capture into resonance) for the 5-planet case, and a_3 ALMA = 69.0 AU and τ_a0 = 105 Myr for the 4-planet case. Thus, as the inner planet approaches its appropriate semimajor axis, the migration timescale smoothly diverges, effectively shutting itself off. The simulation time was then reset to zero, and from that point treated like the non-resonant cases. For these setup simulations, we set the planetary masses and eccentricity damping timescales to constant values of 10 M_⊕ and 1 Myr, respectively (i.e., the initial values in the actual integrations, see Sec.
<ref>). The above procedure resulted in initial semimajor axes that matched the ALMA observations to within ≲ 1 AU, which is smaller than the size of the synthesized beam in the ALMA data <cit.>. The resulting initial eccentricities for the resonantly interacting planets were ∼ 10^-3, while those for the dynamically detached inner two planets were ∼ 10^-5. A summary of the initial conditions for each case can be found in Table <ref>.

§ RESULTS

We begin by comparing our results following 5 Gyr of evolution, both to one another and to the observed exoplanet sample. Table <ref> summarizes the average number of ejected planets, planet-planet collisions (C), number of planets reaching inward of 0.2 AU (S) and number of remaining planets (R) for each of the four cases considered. Figure <ref> shows a scatter plot of the final eccentricities and inclinations (relative to the initial plane) vs. the semimajor axes, color-coded by the number of remaining planets. The inner limit at ≈ 6 AU is set by conservation of energy, where the innermost planet scatters outward or ejects the remaining planets, absorbing their (negative) orbital energy. We find that in a subset of cases (black circles in Fig. <ref>), a planet is ejected while leaving the remaining planets in a stable, long-lived configuration. However, in the majority of cases, the eccentricities are raised to a level where the planets continue to vigorously interact and the system relaxes to a steady distribution <cit.> with a wide range of eccentricities and inclinations. In the four-planet cases, most ejections and collisions occurred within 40 Myr (∼ 10^4 outer planet orbits), with a tail extending to Gyr timescales. The distribution was broader in the five-planet cases, where ejections more often left the remaining planets on orbits that would continue to interact strongly.
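The ≈ 6 AU inner limit quoted above follows from a one-line energy budget: for equal-mass planets the orbital energy scales as -1/a, so if a single survivor absorbs the energy of all companions (ejected to E ≈ 0), its final semimajor axis satisfies 1/a_f = Σ_i 1/a_i. A sketch with illustrative gap radii (assumed values for the five-planet case; not all are quoted in this text):

```python
# For equal-mass planets, E_i ∝ -1/a_i. If one survivor absorbs the
# (negative) energy of all the others as they are ejected to E ≈ 0, then
# 1/a_final = sum_i 1/a_i.
gap_radii_au = [13.2, 32.3, 64.2, 73.7, 91.0]   # illustrative gap locations

a_final = 1.0 / sum(1.0 / a for a in gap_radii_au)   # ≈ 6.8 AU
```

The result, ≈ 6.8 AU, is consistent with the ≈ 6 AU inner envelope of the scatter plot described above.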
§.§ Free-floating Planets

A statistical analysis of observed free-floating planets by <cit.> found 1.8^+1.7_-0.8 Jupiter-mass objects per star, of which they estimate 75% are unbound. This corresponds to 1.3^+1.3_-0.6 free-floating planets per main-sequence star. Similar estimates were obtained from a recent synthesis of radial velocity, microlensing and direct imaging survey results <cit.>. While our numerical results show planets are ejected with high efficiency (1.5-2.4 planets per star system, Table <ref>)[We note that one would expect to obtain similar results for unstable planetary systems at smaller semimajor axes, up to the point where relative velocities between planets become comparable to the escape velocity from the largest body (scales of ∼ 1 AU for Jupiter-mass planets around a solar mass star). Closer in, collisions become the dominant outcome (rather than ejections), which would drive the expected rate of free-floating planets down further.], one must additionally fold in the fraction of stars that host giant planets in the first place. <cit.> estimate that 13.9 ± 1.7 % of FGK stars host giant planets (larger than 50 Earth masses) with orbital periods under 10 years <cit.>. Therefore, while planet-planet scattering of giant planets efficiently ejects them, it would seem that there are too few gas giants formed in nature to match the rate of free-floating planets inferred from microlensing surveys <cit.>. The upcoming WFIRST-AFTA mission should detect many more unbound planets and help clarify this situation <cit.>.

§.§ Sensitivity to Initial Conditions

To compare the outcomes from our four suites of simulations, we plot the cumulative distributions of the remaining planets' final eccentricities after 5 Gyr of evolution in Figure <ref>.
As mentioned above, not all systems in the four-planet cases became unstable (16 in the resonant case and 27 in the non-resonant case). In order to compare to both the five-planet cases (always unstable) and to the observed sample (see Sec. <ref>), we only consider planets with eccentricities > 0.2. Figure <ref> shows that the overall distribution of outcomes is not particularly sensitive to initial conditions. We performed Kolmogorov-Smirnov (KS) tests between all the possible pairs of cases in Table <ref>. These revealed p-values as low as 0.11 (between the 4-planet nominal and resonant cases) and as high as 0.69 (between the 5-planet nominal and resonant cases). We therefore cannot rule out the null hypothesis that they were drawn from the same underlying distribution. This relative insensitivity to initial conditions is consistent with the results of previous planet-planet scattering experiments <cit.>. We also note that while planets' final distances from the star will depend on their detailed migration histories, this insensitivity to initial conditions suggests that planet-planet scattering will erase migration signatures in the eccentricities (and inclinations) of unstable planetary systems, relaxing to a dynamically imposed equilibrium distribution.

§.§ Comparison to the Observed Exoplanet Sample

Our results can also be compared to the observed exoplanet sample. While the HL Tau system has a scale (∼ 100 AU) probed by direct imaging, it is difficult to make a direct comparison due to the comparatively small number of observed planets, and the limited constraints on their orbital eccentricities and inclinations (due to the typically short observed orbital arcs).
However, because point-source Newtonian gravity is scale-free, we can consider the HL Tau system as a scaled-up prototype of initial conditions at ∼ 1 AU and compare our results to the observed sample of radial velocity (RV) giant planets (i.e., we would have obtained similar results for dimensionless quantities like the orbital eccentricities and inclinations if we had scaled all our semimajor axes down by a constant factor). In this picture, our 5 Gyr integrations represent ≈ 10^8 inner planet orbits, which is short compared to RV systems' ages, but long enough for the resulting orbital distributions to approximately converge <cit.>. We note, however, that our results are not strictly scale-free, since we consider collisions and our bodies are therefore not point particles. The importance of finite planetary and stellar radii for collisions varies as we scale down the system from tens of AU to ∼ 1 AU. This is negligible for the planetary radii, since interplanetary collisions are rare (we observed 63 across 1800 planets in 400 simulations). In the case of the star, we inflated its radius to 0.2 AU. When taken as a fraction of the system size ∼ 50 AU, this corresponds to the size of a Sun-like star for a system with characteristic semimajor axes of ∼ 1 AU, which makes our simulations comparable to previous studies <cit.>. Technically, our addition of mass growth and eccentricity damping also introduces new scales into the problem, but as argued in Sec. <ref> these have minimal effect on the dynamics of this particular problem. In order to compare to the observed sample, we drew data from <exoplanets.org> on planets discovered through RV with 1 AU < a < 5 AU, and with e > 0.2. This should predominantly select systems that underwent some form of dynamical excitation, and are thus candidate outcomes for (scaled down) initial conditions like those in HL Tau. From Fig.
<ref>, we see that the observed sample has significantly lower eccentricities than our simulations provide. KS tests between our various simulated cases and the observed distribution yield p-values ≲ 10^-3. This suggests that equal-mass planets with HL Tau-like initial conditions cannot reproduce the observed eccentricity distribution. We therefore next add a distribution of planetary masses.

§.§ Adding a Mass Distribution

Typically, scatterings between equal-mass planets will affect both orbits comparably. By contrast, if one planet is much more massive than the other, the diminutive body can achieve large eccentricities while only mildly perturbing the massive one. Adding a distribution of planet masses can therefore shift the cumulative eccentricity distribution closer to what is observed <cit.>. To explore this, we duplicated the initial conditions from the nominal five-planet case (Table <ref>), and ran two additional suites of simulations, one generated by drawing τ_M0 (Eq. <ref>) from a uniform distribution on the interval [0.70, 1.30] Myr, the other on the interval [0.65, 1.30] Myr. These mapped (non-uniformly) onto a mass distribution between [0.3, 2.3] M_J and [0.3, 3.2] M_J, respectively. The resulting cumulative eccentricity distributions can be seen in Fig. <ref>, along with the equal-mass nominal five-planet case. We observe that by increasing the range in masses, one generates a higher proportion of low-eccentricity planets (i.e., the massive ones that are less perturbed during encounters). By tuning the mass distribution it is thus possible to obtain a closer match to the observed population. The best fit to the observed sample is the case with masses between [0.3, 3.2] M_J.
A KS test reveals a 63% probability of equal or worse disagreement between the two distributions under the hypothesis that they are in fact drawn from the same distribution. Thus, scaled-down HL Tau initial conditions with a plausible range of masses can reproduce the observed eccentricity distribution.

§.§ Inclinations

While mutual planetary inclinations have only been measured for a handful of systems <cit.>, GAIA is expected to constrain the mutual inclinations between large numbers of giant planets at ∼ 1 AU distances <cit.>. We therefore consider the inclination distribution generated by planet-planet scattering. Previous authors have typically presented predictions for the distribution of absolute inclinations to their respective system's invariable plane <cit.>; however, this does not accurately reflect the distribution of mutual inclinations between pairs of planets. Consider two orbits with the same absolute inclination (i.e., the angle between the orbit normal and the z axis, defined by the invariable plane). As shown in Fig. <ref>, these can have orbit normals pointing in many different directions that lie along a cone. Thus, two orbits with the same inclination might have zero mutual inclination if their orbit normals coincide, but their longitudes of node (i.e., the azimuthal angle along the dashed blue circle in Fig. <ref>) could also be 180^∘ apart, corresponding to a mutual inclination that is twice as large as their absolute inclinations (one could also draw any configuration in between these two extremes). But if the two planets make up a closed system, then the total angular momentum must always lie along the invariable plane's axis z, and only the anti-aligned case is possible.
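The geometry just described can be made concrete with standard spherical trigonometry: for two orbits with absolute inclinations i_1, i_2 and longitudes of ascending node Ω_1, Ω_2, the mutual inclination satisfies cos i_mut = cos i_1 cos i_2 + sin i_1 sin i_2 cos(Ω_1 − Ω_2). A minimal sketch reproducing the two extremes above:

```python
import math

def mutual_inclination(i1, i2, d_node):
    """Mutual inclination (deg) between two orbits with absolute
    inclinations i1, i2 (deg) and a difference d_node (deg) between
    their longitudes of ascending node."""
    i1, i2, d_node = map(math.radians, (i1, i2, d_node))
    cos_imut = (math.cos(i1) * math.cos(i2)
                + math.sin(i1) * math.sin(i2) * math.cos(d_node))
    # Clamp against floating-point drift before taking the arccosine
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_imut))))

# Coincident orbit normals: zero mutual inclination.
# Nodes 180 deg apart: mutual inclination twice the absolute inclination.
aligned = mutual_inclination(10.0, 10.0, 0.0)       # ≈ 0 deg
anti_aligned = mutual_inclination(10.0, 10.0, 180.0)  # ≈ 20 deg
```

Any intermediate nodal difference gives a mutual inclination between these two extremes, matching the cone picture in the figure.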
This simple picture is complicated by planets having different absolute inclinations, the presence of additional planets, and the fact that ejected planets can permanently remove angular momentum from the system; nevertheless, one expects the distribution of mutual inclinations between planet pairs to be wider than the distribution of absolute inclinations. Figure <ref> compares the distributions of absolute and mutual inclinations in our numerical experiments, where we have manually removed the smallest-inclination bin in order to remove stable systems from consideration. Given the insensitivity to initial conditions (Sec. <ref>), we combined the four suites of simulations listed in Table <ref> to generate the resulting distributions. We see that, indeed, the mutual inclination distribution is significantly wider. Because our results are largely scale-free (Sec. <ref>), they provide predictions for HL Tau-like initial conditions at the ∼ 1 AU scales that will be probed by GAIA (e.g., ). We find through maximum likelihood estimation that the mutual inclination distribution is well fit by a Rayleigh distribution with a scale parameter of 26.2^∘ (26.1^∘-26.4^∘ at 95% confidence). Rayleigh distributions could not match the absolute inclinations, but they are reasonably fit by a Gamma distribution with shape parameter of 2.3 (2.26-2.34 at 95% confidence) and scale parameter of 7.9^∘ (7.8^∘-8.1^∘ at 95% confidence).
Thus, our distribution can be regarded as maximal.

§.§ Production of Hot Jupiters

§.§.§ Numerical Setup

In order to explore the production of hot Jupiters in our simulations, we reloaded simulations in the two resonant cases in Table <ref>, at the first moment that a planet reached 0.2 AU from the central star. We then added precession effects from general relativity and the tides raised by the star on the planets. We added general relativity using the gr implementation in REBOUNDx, which adds the first post-Newtonian approximation (1PN), i.e. terms quadratic in the ratio of the velocities to the speed of light, but ignores terms that are smaller by a factor of the planet-star mass ratio <cit.>. We used the tides_precession implementation in REBOUNDx for the precession induced by the interaction between the central body and the tidal quadrupoles raised on the planets by the star, which is based on <cit.>. We adopted an apsidal motion constant k_1 (half the tidal Love number) of 0.15 for all planets. Mostly because the IAS15 integrator must resolve the very short periastron passages in this case, each of the two 100-simulation cases we explored required ∼ 10^5 CPU hours of integration time using the same hardware. Because of the computational cost, and the insensitivity to whether or not planets are started in resonance (Sec. <ref>), we only ran the four- and five-planet resonant cases (Table <ref>) for this analysis.

§.§.§ Rates

The planets that approach very close to their host stars can be strongly affected by tides and circularize their orbits, giving rise to a population of close-in planets (a≲0.1 AU), the so-called hot Jupiters (e.g., ). We have ignored the effect of this tidal dissipation in our calculations mainly because the formation of short-period planets demands extremely short integration time-steps.
Instead, we have included only the conservative potential describing the interaction of the star with the tidal bulge it raises on the planets <cit.>, but have kept track of the minimum distance r_ min between each planet and the host star. This quantity is a good proxy for determining whether or not a planet will undergo tidal capture, because the orbital circularization timescale depends very steeply on r_ min, so migration should occur below a certain threshold. Because planet migration should proceed at nearly constant orbital angular momentum (a[1-e^2]≃ constant), a planet reaching an r_ min=a(1-e_ max) that is small enough for tides to damp its orbital eccentricity within 5 Gyr will reach a final semimajor axis a_ f≃ a(1-e_ max^2) = r_ min(1+e_ max)≃ 2r_ min. In Fig. <ref> we plot the cumulative distribution of r_ min for the four- and five-planet resonant cases (Table <ref>). Depending on the efficiency of tides, we can determine the fraction of planets that can potentially become hot Jupiters. For instance, the dynamical tides model by <cit.> predicts that the orbits of Jupiter-like planets can be efficiently circularized when r_ min≲0.03≃ 6R_⊙ and, therefore, form hot Jupiters with semi-major axes a_ f≲0.06 AU. We see from Table <ref> that while nominal vs resonant initial conditions make little difference in the number of planets coming close to their host star (column S), there is a significant difference between the four- and five-planet cases. This is also evident in Fig. <ref>. Significant dependencies on the number of planets in the system have also been found in previous studies <cit.>. We note that while the final eccentricity distribution is largely scale free (Sec.
<ref>), asking what fraction of planets reach pericenter distances at which tides can operate introduces a length scale into the problem, rendering initial conditions important. We also note that if the eccentricity evolution were purely secular, and thus slowly varying, the precession induced by the interaction between the tidal bulge induced on the planets and the star would cause the pericenter to stall at a minimum distance of ∼ 3 stellar radii, or 0.015 AU <cit.>. However, due to the strong interactions between planets, we find that a large fraction of planets impact the star, despite this strong tidal precession (35% and 6% in the five- and four-planet cases, respectively). Nevertheless, one would expect that if we had included tidal dissipation, most of these planets would have been captured to form hot Jupiters before colliding with the central star. In total, we find that in 40% and 7% of the five- and four-planet systems, respectively, a planet passes within our adopted threshold of 0.03 AU of the central star <cit.>, and is thus a candidate for capture as a hot Jupiter. As discussed above, their final semimajor axes will be ≲ 0.06 AU, but because we do not consider tidal dissipation, we cannot predict the distribution of semimajor axes within this range. The above percentages are also likely overestimates, since <cit.> predict that a Jupiter-like planet orbiting a Sun-like star will be disrupted if it passes within ≈ 0.01 AU of the central body, and dissipation in the host star can pull close-in planets into the star <cit.>.
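The constant-angular-momentum argument above (a[1−e²] ≃ constant) fixes where a tidally captured planet ends up. A minimal numerical check, with illustrative values rather than numbers from the simulations:

```python
def final_semimajor_axis(a, e_max):
    """Circularization at fixed orbital angular momentum conserves
    a * (1 - e^2), so e -> 0 ends at a_f = a * (1 - e_max^2)."""
    return a * (1.0 - e_max ** 2)

a, e_max = 3.0, 0.99           # AU; a highly eccentric scattered orbit
r_min = a * (1.0 - e_max)      # pericenter distance at maximum eccentricity
a_f = final_semimajor_axis(a, e_max)
print(r_min, a_f)  # a_f = r_min * (1 + e_max), i.e. just under 2 * r_min
```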
Our results highlight the importance of initial conditions, but are broadly consistent with <cit.>, who predict that scattering of three equal-mass planets (with the innermost body initially at 5 AU) forms hot Jupiters in ∼30% of the systems, and <cit.>, who generated hot Jupiters in ∼10% and ∼20% of their three- and four-planet systems (innermost body initially between 1-5 AU), respectively. Putting together our hot Jupiter formation rates with the fraction of FGK stars hosting giant planets <cit.> yields an expected rate of ∼ 0.7-4% hot Jupiters per FGK star. This brackets the rate found by radial velocity surveys of 1.2 ± 0.38 % <cit.>, with the caveat that our estimates would be lowered by tidal disruptions <cit.> and hot Jupiters migrating into the host star through tidal dissipation in the central body <cit.>. Since the above studies have considered these effects in detail, we do not pursue them further.

§.§.§ Stellar Obliquities

We finally briefly discuss the inclination distribution of planets as they approach the host star at high eccentricities. In particular, this inclination distribution at closest approach can be significantly different from the inclination distribution over all times. To track this, we recorded the inclinations (with respect to the initial invariable plane) of the planets at the time of closest approach. We expect that, given the steep dependence of the tidal dissipation rate on r_ min, the inclination at closest approach should be a good proxy for the final inclination of a hot Jupiter. If we additionally assume that the initial stellar obliquities are small (the equator is nearly aligned with the system's invariable plane), then these planetary inclinations closely correspond to the star's final obliquity to the hot Jupiter's orbital plane, or the spin-orbit misalignment. In Figure <ref> we show in green the distribution of stellar obliquities for planets that come close to their host star.
In order to increase our statistics, we combine the results from our four- and five-planet resonant cases, and consider any planet that reached within r_ min<0.1 AU from the central body. This distribution is significantly flatter than the blue distribution of planetary inclinations sampled at many times over each simulation. This indicates that the highest inclinations are reached during the episodes of extremely large eccentricities, possibly linked to strong scattering events and/or secular chaotic effects. We find that ≈ 26% of planets reach inside 0.1 AU on retrograde orbits (obliquities >90^∘). This is consistent with the ∼30% of retrograde hot Jupiters found by <cit.> in their simulations. If hot Jupiters are largely formed through planet-planet scattering, this predicts that hot Jupiters should have outer planetary companions. While earlier studies suggested hot Jupiters were found in single-planet systems <cit.>, recent work suggests they have similar companion fractions to giant planets farther out <cit.>. However, the expected flat distribution of hot Jupiter obliquities is inconsistent with observations (black dashed distribution in Fig. <ref>, taken from <exoplanets.org>), which show a strong preference for lower values, and a rate of retrograde systems of ∼15% (e.g., ). This suggests that if planet-planet scattering is responsible for a large fraction of the observed hot Jupiters, then tidal dissipation has played an important role in aligning the axes of the stellar spin and planet's orbit (see, e.g. ).

§ CONCLUSION

The HL Tau system potentially provides the first set of initial conditions for planets as they emerge from their birth disks. We have followed the evolution of such initial conditions for 5 Gyr using N-body integrations to explore the possible outcomes.
Because the results are largely scale free down to ∼ 1 AU scales, where interplanetary collisions start to become important <cit.>, they not only provide predictions for wide-separation systems like HL Tau, but can also be directly compared to the observed sample of giant planets from radial velocity surveys. We find that HL Tau initial conditions naturally produce several populations in the observed exoplanet sample:

* Eccentric cold Jupiters: We can match the observed eccentricity distribution of dynamically excited radial velocity planets with e>0.2, in agreement with previous planet-planet scattering studies <cit.>; we note that this result is sensitive to the unconstrained distribution of planetary masses.

* Hot Jupiters: We obtain upper limits of ∼ 7-40% for the production efficiency of hot Jupiters. When combined with the observed fraction of systems that host giant planets (≈ 14%, ), this brackets the ≈1% rate of hot Jupiters observed around FGK stars <cit.>. Furthermore, we find that secular interactions lead to a significantly broader distribution of hot Jupiter obliquities than one would naively expect from the orbital inclination distribution from scattering experiments that ignore tidal dissipation.

* Free Floating Planets: We find a high efficiency for ejections of ≈ 2 planets per HL Tau-like system. However, given the small fraction of systems that host giant planets (at least inside ∼ 5 AU, ), it does not seem possible to match the high rate of free-floating planets inferred from microlensing surveys (∼ 1 per main sequence star, ).

We also present an expected distribution of mutual inclinations between planets, which will be probed by the GAIA mission.
We find that due to angular momentum conservation, it is significantly broader than the distribution of absolute inclinations often reported by planet-scattering studies. This study is consistent with previous work on planet-planet scattering <cit.> showing that dynamically unstable systems lose memory of their initial conditions and relax to equilibrium orbital distributions. This makes it possible for initial conditions from a single system (HL Tau) to lead to features observed in the exoplanet sample as a whole. The striking observation is that the HL Tau gaps plausibly correspond to planets that are long-lived relative to the age of the system <cit.>, but are destined for dynamical instabilities on longer timescales that can produce many of the evolved systems we see today.

§ ACKNOWLEDGEMENTS

We would like to thank Yanqin Wu and Phil Armitage for insightful discussions. We are also grateful to the anonymous referee who greatly helped improve and sharpen this manuscript. D.T. is grateful for support from the Jeffrey L. Bishop Fellowship. H.R. was supported by NSERC Discovery Grant RGPIN-2014-04553. C.P. is grateful for support from the Gruber Foundation Fellowship. This research was made possible by the kind and tireless support of Claire Yu and John Dubinski, and by the Sunnyvale cluster at the Canadian Institute for Theoretical Astrophysics. This work was greatly aided by the open-source projects <cit.>, <cit.>, <cit.> and <cit.>.
[ALMA Partnership, Brogan& et al.ALMA Partnership et al.2015]Brogan15 ALMA Partnership Brogan C. L., et al. 2015, arXiv preprint arXiv:1503.02649[Akiyama, Hasegawa, Hayashi& IguchiAkiyama et al.2016]Akiyama16 Akiyama E.,Hasegawa Y.,Hayashi M., Iguchi S.,2016, The Astrophysical Journal, 818, 158[Anderson, Esposito, Martin, Thornton & MuhlemanAnderson et al.1975]Anderson75 Anderson J. D.,Esposito P. B.,Martin W.,Thornton C. L., Muhleman D. O.,1975, @doi [] 10.1086/153779, http://adsabs.harvard.edu/abs/1975ApJ...200..221A 200, 221[ArtymowiczArtymowicz1992]artymowicz1992 Artymowicz P.,1992, Publications of the Astronomical Society of the Pacific, pp 769–774[Baruteau et al.,Baruteau et al.2014]Baruteau14 Baruteau C.,et al., 2014, @doi [Protostars and Planets VI] 10.2458/azu_uapress_9780816531240-ch029, http://adsabs.harvard.edu/abs/2014prpl.conf..667B pp 667–689[Beaugé & NesvornýBeaugé & Nesvorný2012]BN12 Beaugé C.,Nesvorný D.,2012, @doi [] 10.1088/0004-637X/751/2/119, http://adsabs.harvard.edu/abs/2012ApJ...751..119B 751, 119[Beckwith, Sargent, Chini& GuestenBeckwith et al.1990]Beckwith90 Beckwith S. V.,Sargent A. I.,Chini R. S., Guesten R.,1990, The Astronomical Journal, 99, 924[Bodenheimer, Hubickyj& LissauerBodenheimer et al.2000]Bodenheimer00 Bodenheimer P.,Hubickyj O., Lissauer J.
J.,2000, Icarus, 143, 2[Casertano et al.,Casertano et al.2008]casertano08 Casertano S.,et al., 2008, @doi [] 10.1051/0004-6361:20078997, http://adsabs.harvard.edu/abs/2008A[Chatterjee, Ford, Matsumura& RasioChatterjee et al.2008]Chatterjee08 Chatterjee S.,Ford E. B.,Matsumura S., Rasio F. A.,2008, The Astrophysical Journal, 686, 580[Clanton & GaudiClanton & Gaudi2016]Clanton16 Clanton C.,Gaudi B. S.,2016, arXiv preprint arXiv:1609.04010[Cumming, Butler, Marcy, Vogt, Wright& FischerCumming et al.2008]Cumming08 Cumming A.,Butler R. P.,Marcy G. W.,Vogt S. S.,Wright J. T., Fischer D. A.,2008, Publications of the Astronomical Society of the Pacific, 120, 531[DawsonDawson2014]Dawson14b Dawson R. I.,2014, @doi [] 10.1088/2041-8205/790/2/L31, http://adsabs.harvard.edu/abs/2014ApJ...790L..31D 790, L31[Dawson et al.,Dawson et al.2014]dawson14 Dawson R. I.,et al., 2014, @doi [] 10.1088/0004-637X/791/2/89, http://adsabs.harvard.edu/abs/2014ApJ...791...89D 791, 89[Dipierro, Price, Laibe, Hirsh, Cerioli& LodatoDipierro et al.2015]Dipierro15 Dipierro G.,Price D.,Laibe G.,Hirsh K.,Cerioli A., Lodato G.,2015, @doi [MNRAS] 10.1093/mnrasl/slv105, http://adsabs.harvard.edu/abs/2015MNRAS.453L..73D 453, L73[Dong, Zhu& WhitneyDong et al.2015]Dong15 Dong R.,Zhu Z., Whitney B.,2015, The Astrophysical Journal, 809, 93[Droettboom et al.,Droettboom et al.2016]matplotlib2 Droettboom M.,et al., 2016, matplotlib: matplotlib v1.5.1, @doi10.5281/zenodo.44579, <http://dx.doi.org/10.5281/zenodo.44579>[Fabrycky & TremaineFabrycky & Tremaine2007]fabrycky2007 Fabrycky D.,Tremaine S.,2007, The Astrophysical Journal, 669, 1298[Fabrycky et al.,Fabrycky et al.2014]Fabrycky14 Fabrycky D. C.,et al., 2014, The Astrophysical Journal, 790, 146[Ford & RasioFord & Rasio2008]ford2008 Ford E. B.,Rasio F. A.,2008, The Astrophysical Journal, 686, 621[Ford, Havlickova& RasioFord et al.2001]ford2001dynamical Ford E. B.,Havlickova M., Rasio F. 
A.,2001, Icarus, 150, 303[Ford, Rasio& YuFord et al.2003]ford2003 Ford E.,Rasio F., Yu K.,2003, PhD thesis, ed. D. Deming & S. Seager (San Francisco, CA: ASP), 181 First citation in article[Fouchet, Gonzalez& MaddisonFouchet et al.2010]Fouchet10 Fouchet L.,Gonzalez J.-F., Maddison S. T.,2010, Astronomy & Astrophysics, 518, A16[GladmanGladman1993]Gladman93 Gladman B.,1993, Icarus, 106, 247[Goldreich & SariGoldreich & Sari2003]goldreich2003 Goldreich P.,Sari R.,2003, The Astrophysical Journal, 585, 1024[Guillochon, Ramirez-Ruiz& LinGuillochon et al.2011]guillochon11 Guillochon J.,Ramirez-Ruiz E., Lin D.,2011, @doi [] 10.1088/0004-637X/732/2/74, http://adsabs.harvard.edu/abs/2011ApJ...732...74G 732, 74[Haisch Jr, Lada& LadaHaisch Jr et al.2001]Haisch01 Haisch Jr K. E.,Lada E. A., Lada C. J.,2001, The Astrophysical Journal Letters, 553, L153[HowardHoward2013]Howard13 Howard A. W.,2013, Science, 340, 572[Howard et al.,Howard et al.2012]Howard12 Howard A. W.,et al., 2012, The Astrophysical Journal Supplement Series, 201, 15[HunterHunter2007]matplotlib Hunter J. D.,2007, Computing In Science & Engineering, 9, 90[HutHut1981]Hut81 Hut P.,1981, , http://adsabs.harvard.edu/abs/1981A[Ida & LinIda & Lin2004]Ida04 Ida S.,Lin D. N.,2004, The Astrophysical Journal, 604, 388[Ivanov & PapaloizouIvanov & Papaloizou2007]IP07 Ivanov P. B.,Papaloizou J. C. B.,2007, @doi [] 10.1111/j.1365-2966.2007.11463.x, http://adsabs.harvard.edu/abs/2007MNRAS.376..682I 376, 682[Ivanov & PapaloizouIvanov & Papaloizou2011]IP11 Ivanov P. B.,Papaloizou J. C. 
B.,2011, @doi [Celestial Mechanics and Dynamical Astronomy] 10.1007/s10569-011-9367-x, http://adsabs.harvard.edu/abs/2011CeMDA.111...51I 111, 51[Jin, Li, Isella, Li& JiJin et al.2016]Jin16 Jin S.,Li S.,Isella A.,Li H., Ji J.,2016, @doi [The Astrophysical Journal] 10.3847/0004-637X/818/1/76, http://adsabs.harvard.edu/abs/2016ApJ...818...76J 818, 76[Jones, Oliphant, Petersonet al.Jones et al.2001]scipy Jones E.,Oliphant T.,Peterson P., et al., 2001, SciPy: Open source scientific tools for Python, <http://www.scipy.org/>[Jurić & TremaineJurić & Tremaine2008]Juric08 Jurić M.,Tremaine S.,2008, The Astrophysical Journal, 686, 603[Kluyver et al.,Kluyver et al.2016]jupyter Kluyver T.,et al., 2016, Positioning and Power in Academic Publishing: Players, Agents and Agendas, p. 87[LaiLai2012]lai12 Lai D.,2012, @doi [] 10.1111/j.1365-2966.2012.20893.x, http://adsabs.harvard.edu/abs/2012MNRAS.423..486L 423, 486[Lee & PealeLee & Peale2002]lee2002 Lee M. H.,Peale S.,2002, The Astrophysical Journal, 567, 596[Li & LiLi & Li2016]Li16 Li S.,Li H.,2016, @doi [Journal of Physics Conference Series] 10.1088/1742-6596/719/1/012007, http://adsabs.harvard.edu/abs/2016JPhCS.719a2007L 719, 012007[Lin & IdaLin & Ida1997]lin1997 Lin D.,Ida S.,1997, The Astrophysical Journal, 477, 781[Lin & PapaloizouLin & Papaloizou1986]lin1986 Lin D.,Papaloizou J.,1986, The Astrophysical Journal, 307, 395[Lyra & KuchnerLyra & Kuchner2013]lyra2013 Lyra W.,Kuchner M.,2013, Nature, 499, 184[Marchal & BozisMarchal & Bozis1982]Marchal82 Marchal C.,Bozis G.,1982, Celestial Mechanics, 26, 311[Marzari & WeidenschillingMarzari & Weidenschilling2002]marzari2002 Marzari F.,Weidenschilling S.,2002, Icarus, 156, 570[Mayor et al.,Mayor et al.2011]mayor2011harps Mayor M.,et al., 2011, arXiv preprint arXiv:1109.2497[McArthur, Benedict, Barnes, Martioli, Korzennik, Nelan& ButlerMcArthur et al.2010]mcarthur10 McArthur B. E.,Benedict G. F.,Barnes R.,Martioli E.,Korzennik S.,Nelan E., Butler R. 
P.,2010, @doi [] 10.1088/0004-637X/715/2/1203, http://adsabs.harvard.edu/abs/2010ApJ...715.1203M 715, 1203[Mills & FabryckyMills & Fabrycky2016]MF16 Mills S. M.,Fabrycky D. C.,2016, preprint, http://adsabs.harvard.edu/abs/2016arXiv160604485M(@eprint arXiv 1606.04485)[Morbidelli, Szulágyi, Crida, Lega, Bitsch, Tanigawa& KanagawaMorbidelli et al.2014]Morbidelli14 Morbidelli A.,Szulágyi J.,Crida A.,Lega E.,Bitsch B.,Tanigawa T., Kanagawa K.,2014, Icarus, 232, 266[Nagasawa & IdaNagasawa & Ida2011]nagasawa2011 Nagasawa M.,Ida S.,2011, @doi [] 10.1088/0004-637X/742/2/72, http://adsabs.harvard.edu/abs/2011ApJ...742...72N 742, 72[Nagasawa, Ida& BesshoNagasawa et al.2008]nagasawa2008 Nagasawa M.,Ida S., Bessho T.,2008, The Astrophysical Journal, 678, 498[Naoz, Farr, Lithwick, Rasio& TeyssandierNaoz et al.2011]Naoz11 Naoz S.,Farr W. M.,Lithwick Y.,Rasio F. A., Teyssandier J.,2011, Nature, 473, 187[Ogilvie & LubowOgilvie & Lubow2003]ogilvie2003 Ogilvie G.,Lubow S.,2003, The Astrophysical Journal, 587, 398[Papaloizou & LarwoodPapaloizou & Larwood2000]Papa00 Papaloizou J.,Larwood J.,2000, Monthly Notices of the Royal Astronomical Society, 315, 823[Papaloizou & TerquemPapaloizou & Terquem2001]papaloizou2001 Papaloizou J. C.,Terquem C.,2001, Monthly Notices of the Royal Astronomical Society, 325, 221[Pérez & GrangerPérez & Granger2007]ipython Pérez F.,Granger B. E.,2007, @doi [Computing in Science and Engineering] 10.1109/MCSE.2007.53, 9, 21[Perryman et al.,Perryman et al.2001]perryman Perryman M. A. 
C.,et al., 2001, @doi [] 10.1051/0004-6361:20010085, http://adsabs.harvard.edu/abs/2001A[PetrovichPetrovich2015]Petrovich15 Petrovich C.,2015, The Astrophysical Journal, 805, 75[Petrovich, Tremaine& RafikovPetrovich et al.2014]Petrovich14 Petrovich C.,Tremaine S., Rafikov R.,2014, The Astrophysical Journal, 786, 101[Pinilla, Birnstiel, Ricci, Dullemond, Uribe, Testi& NattaPinilla et al.2012]pinilla2012 Pinilla P.,Birnstiel T.,Ricci L.,Dullemond C.,Uribe A.,Testi L., Natta A.,2012, Astronomy & Astrophysics, 538, A114[Pinte, Dent, Ménard, Hales, Hill, Cortes& de Gregorio-MonsalvoPinte et al.2016]Pinte16 Pinte C.,Dent W. R. F.,Ménard F.,Hales A.,Hill T., Cortes P., de Gregorio-Monsalvo I.,2016, @doi [The Astrophysical Journal] 10.3847/0004-637X/816/1/25, http://adsabs.harvard.edu/abs/2016ApJ...816...25P 816, 25[Pollack, Hubickyj, Bodenheimer, Lissauer, Podolak& GreenzweigPollack et al.1996]Pollack96 Pollack J. B.,Hubickyj O.,Bodenheimer P.,Lissauer J. J.,Podolak M., Greenzweig Y.,1996, icarus, 124, 62[QuillenQuillen2006]Quillen06 Quillen A. C.,2006, Monthly Notices of the Royal Astronomical Society, 365, 1367[Rasio & FordRasio & Ford1996]rasio1996 Rasio F. A.,Ford E. B.,1996, Science, 274, 954[Rein & LiuRein & Liu2012]rein2012 Rein H.,Liu S.-F.,2012, Astronomy & Astrophysics, 537, A128[Rein & SpiegelRein & Spiegel2015]rein2015ias15 Rein H.,Spiegel D. S.,2015, Monthly Notices of the Royal Astronomical Society, 446, 1424[Rein, Papaloizou& KleyRein et al.2010]ReinPapaloizouKley2009 Rein H.,Papaloizou J. C. B., Kley W.,2010, @doi [] 10.1051/0004-6361/200913208, http://adsabs.harvard.edu/abs/2010A[Schlaufman & WinnSchlaufman & Winn2016]Schlaufman16 Schlaufman K. C.,Winn J. N.,2016, @doi [] 10.3847/0004-637X/825/1/62, http://adsabs.harvard.edu/abs/2016ApJ...825...62S 825, 62[Spergel et al.,Spergel et al.2013]Spergel13 Spergel D.,et al., 2013, arXiv preprint arXiv:1305.5425[Steffen et al.,Steffen et al.2012]Steffen12 Steffen J. 
H.,et al., 2012, @doi [Proceedings of the National Academy of Science] 10.1073/pnas.1120970109, http://adsabs.harvard.edu/abs/2012PNAS..109.7982S 109, 7982[Sumi et al.,Sumi et al.2011]sumi2011 Sumi T.,et al., 2011, @doi [] 10.1038/nature10092, http://adsabs.harvard.edu/abs/2011Natur.473..349S 473, 349[Szulágyi, Morbidelli, Crida& MassetSzulágyi et al.2014]Szulagyi14 Szulágyi J.,Morbidelli A.,Crida A., Masset F.,2014, @doi [] 10.1088/0004-637X/782/2/65, http://adsabs.harvard.edu/abs/2014ApJ...782...65S 782, 65[Tamayo, Triaud, Menou& ReinTamayo et al.2015]Tamayo15 Tamayo D.,Triaud A. H.,Menou K., Rein H.,2015, The Astrophysical Journal, 805, 100[Timpe, Barnes, Kopparapu, Raymond, Greenberg& GorelickTimpe et al.2013]timpe13 Timpe M.,Barnes R.,Kopparapu R.,Raymond S. N.,Greenberg R., Gorelick N.,2013, @doi [] 10.1088/0004-6256/146/3/63, http://adsabs.harvard.edu/abs/2013AJ....146...63T 146, 63[Udry & SantosUdry & Santos2007]Udry07 Udry S.,Santos N. C.,2007, Annu. Rev. Astron. Astrophys., 45, 397[Valsecchi & RasioValsecchi & Rasio2014]VR14 Valsecchi F.,Rasio F. A.,2014, @doi [] 10.1088/0004-637X/786/2/102, http://adsabs.harvard.edu/abs/2014ApJ...786..102V 786, 102[Veras & RaymondVeras & Raymond2012]Veras12 Veras D.,Raymond S. N.,2012, Monthly Notices of the Royal Astronomical Society: Letters, 421, L117[Weidenschilling & MarzariWeidenschilling & Marzari1996]weidenschilling1996 Weidenschilling S. J.,Marzari F.,1996, Nature, 384, 619[Winn & FabryckyWinn & Fabrycky2015]Winn15 Winn J. N.,Fabrycky D. C.,2015, @doi [] 10.1146/annurev-astro-082214-122246, http://adsabs.harvard.edu/abs/2015ARA[Wright, Upadhyay, Marcy, Fischer, Ford& JohnsonWright et al.2009]Wright09 Wright J. T.,Upadhyay S.,Marcy G.,Fischer D.,Ford E. B., Johnson J. A.,2009, The Astrophysical Journal, 693, 1084[Wright, Marcy, Howard, Johnson, Morton& FischerWright et al.2012]wright2012 Wright J.,Marcy G.,Howard A.,Johnson J. 
A.,Morton T., Fischer D., 2012, The Astrophysical Journal, 753, 160[Wu & LithwickWu & Lithwick2011]Wu11 Wu Y.,Lithwick Y.,2011, The Astrophysical Journal, 735, 109[Wu & MurrayWu & Murray2003]wu2003 Wu Y.,Murray N.,2003, The Astrophysical Journal, 589, 605[Yen, Liu, Gu, Hirano, Lee, Puspitaningrum& TakakuwaYen et al.2016]Yen16 Yen H.-W.,Liu H. B.,Gu P.-G.,Hirano N.,Lee C.-F.,Puspitaningrum E., Takakuwa S.,2016, The Astrophysical Journal Letters, 820, L25[Zhang, Blake& BerginZhang et al.2015]zhang2015 Zhang K.,Blake G. A., Bergin E. A.,2015, The Astrophysical Journal Letters, 806, L7
http://arxiv.org/abs/1703.09132v1
{ "authors": [ "Christopher Simbulan", "Daniel Tamayo", "Cristobal Petrovich", "Hanno Rein", "Norman Murray" ], "categories": [ "astro-ph.EP" ], "primary_category": "astro-ph.EP", "published": "20170327150440", "title": "Connecting HL Tau to the Observed Exoplanet Sample" }
Binarsity: a penalization for one-hot encoded features in linear supervised learning

Mokhtar Z. Alaya[LPSM, CNRS UMR 8001, Sorbonne University, Paris France] Simon Bussy[1] Stéphane Gaïffas[LPSM, CNRS UMR 8001, Université Paris Diderot, Paris, France] Agathe Guilloux[LaMME, UEVE and UMR 8071, Université Paris Saclay, Evry, France]

December 30, 2023

This paper deals with the problem of large-scale linear supervised learning in settings where a large number of continuous features are available. We propose to combine the well-known trick of one-hot encoding of continuous features with a new penalization called binarsity. In each group of binary features coming from the one-hot encoding of a single raw continuous feature, this penalization uses total-variation regularization together with an extra linear constraint. This induces two interesting properties on the model weights of the one-hot encoded features: they are piecewise constant, and are eventually block sparse. Non-asymptotic oracle inequalities for generalized linear models are proposed. Moreover, under a sparse additive model assumption, we prove that our procedure matches the state-of-the-art in this setting. Numerical experiments illustrate the good performances of our approach on several datasets. It is also noteworthy that our method has a numerical complexity comparable to standard ℓ_1 penalization.

Keywords. Supervised learning; Features binarization; Sparse additive modeling; Total-variation; Oracle inequalities; Proximal methods

§ INTRODUCTION

In many applications, datasets used for linear supervised learning contain a large number of continuous features, with a large number of samples.
An example is web-marketing, where features are obtained from bag-of-words scaled using tf-idf <cit.>, recorded during the visits of users on websites. A well-known trick <cit.> in this setting is to replace each raw continuous feature by a set of binary features that one-hot encodes the interval containing it, among a list of intervals partitioning the raw feature range. This improves on the linear decision function with respect to the raw continuous feature space, and can therefore improve prediction. However, this trick is prone to over-fitting, since it significantly increases the number of features.

A new penalization. To overcome this problem, we introduce a new penalization called binarsity, which penalizes the model weights learned from such grouped one-hot encodings (one group for each raw continuous feature). Since the binary features within these groups are naturally ordered, the binarsity penalization combines a group total-variation penalization with an extra linear constraint in each group to avoid collinearity between the one-hot encodings. This penalization forces the weights of the model to be as constant (with respect to the order induced by the original feature) as possible within a group, by selecting a minimal number of relevant cut-points. Moreover, if the model weights are all equal within a group, then the full block of weights is zero, because of the extra linear constraint. This makes it possible to perform raw feature selection.

High-dimensional linear supervised learning. To address the high-dimensionality of features, sparse linear inference is now a ubiquitous technique for dimension reduction and variable selection, see for instance <cit.> and <cit.> among many others. The principle is to induce sparsity (a large number of zeros) in the model weights, assuming that only a few features are actually helpful for the label prediction.
The most popular way to induce sparsity in model weights is to add an ℓ_1-penalization (Lasso) term to the goodness-of-fit <cit.>. This typically leads to sparse parametrization of models, with a level of sparsity that depends on the strength of the penalization. Statistical properties of ℓ_1-penalization have been extensively investigated, see for instance <cit.> for linear and generalized linear models and <cit.> for compressed sensing, among others. However, the Lasso ignores the ordering of features. In <cit.>, a structured sparse penalization is proposed, known as fused Lasso, which provides superior performance in recovering the true model in applications where features are ordered in some meaningful way. It introduces a mixed penalization using a linear combination of the ℓ_1-norm and the total-variation penalization, thus enforcing sparsity in both the weights and their successive differences. Fused Lasso has achieved great success in applications such as comparative genomic hybridization <cit.>, image denoising <cit.>, and prostate cancer analysis <cit.>.

Features discretization and cuts. For supervised learning, it is often useful to encode the input features in a new space to let the model focus on the relevant areas <cit.>. One of the basic encoding techniques is feature discretization or feature quantization <cit.>, which partitions the range of a continuous feature into intervals and relates these intervals with meaningful labels. Recent overviews of discretization techniques can be found in <cit.> or <cit.>. Obtaining the optimal discretization is an NP-hard problem <cit.>, and an approximation can be easily obtained using a greedy approach, as proposed in decision trees: CART <cit.> and C4.5 <cit.>, among others, which sequentially select pairs of features and cuts that minimize some purity measure (intra-variance, Gini index, and information gain are the main examples).
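Both the ℓ_1 and total-variation terms mentioned above are typically handled with proximal algorithms. As a small, generic illustration (not code from the paper), the proximal operator of λ‖·‖_1 is coordinate-wise soft-thresholding, which is what drives weights exactly to zero:

```python
import math

def soft_threshold(v, lam):
    """Proximal operator of lam * ||.||_1, applied coordinate-wise:
    prox(x) = sign(x) * max(|x| - lam, 0)."""
    return [math.copysign(max(abs(x) - lam, 0.0), x) for x in v]

weights = [3.0, -0.5, 1.2, 0.0]
print(soft_threshold(weights, 1.0))  # coordinates below the threshold become exactly 0
```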
These greedy tree-based approaches therefore build very simple decision functions, looking only at a single feature at a time, and a single cut at a time. Ensemble methods (boosting <cit.>, random forests <cit.>) improve on this by combining such decision trees, at the expense of models that are harder to interpret.

Main contribution. This paper considers the setting of linear supervised learning. The main contribution of this paper is the idea to use a total-variation penalization, with an extra linear constraint, on the weights of a generalized linear model trained on a binarization of the raw continuous features, leading to a procedure that selects multiple cut-points per feature, looking at all features simultaneously. Our approach therefore increases the capacity of the considered generalized linear model: several weights are used for the binarized features instead of a single one for the raw feature. This leads to a more flexible decision function compared to the linear one: when looking at the decision function as a function of a single raw feature, it is now piecewise constant instead of linear, as illustrated in Figure <ref> below.

Organization of the paper. The proposed methodology is described in Section <ref>. Section <ref> establishes an oracle inequality for generalized linear models and provides a convergence rate for our procedure in the particular case of a sparse additive model. Section <ref> highlights the results of the method on various datasets and compares its performances to well-known classification algorithms. Finally, we discuss the obtained results in Section <ref>.

*Notations. Throughout the paper, for every q > 0, we denote by v_q the usual ℓ_q-quasi norm of a vector v ∈^m, namely v_q =(∑_k=1^m|v_k|^q)^1/q, and v_∞ = max_k=1, …, m|v_k|.
We also denote ‖v‖_0 = |{k : v_k ≠ 0}|, where |A| stands for the cardinality of a finite set A. For u, v ∈ℝ^m, we denote by u ⊙ v the Hadamard product u ⊙ v = (u_1v_1, …, u_mv_m)^⊤. For any u ∈ℝ^m and any L ⊂{1, …, m}, we denote by u_L the vector in ℝ^m satisfying (u_L)_k = u_k for k ∈ L and (u_L)_k = 0 for k ∈ L^∁ = {1, …, m}\ L. We write, for short, 1 (resp. 0) for the vector of ℝ^m having all coordinates equal to one (resp. zero). Finally, we denote by sign(x) the set of sub-differentials of the function x ↦ |x|, namely sign(x) = {1} if x > 0, sign(x) = {-1} if x < 0 and sign(0) = [-1, 1]. § THE PROPOSED METHOD Consider a supervised training dataset (x_i, y_i)_i=1, …, n containing features x_i = [x_i,1⋯ x_i,p]^⊤∈ℝ^p and labels y_i ∈𝒴⊂ℝ, that are independent and identically distributed samples of (X, Y) with unknown distribution ℙ. Let us denote by X = [x_i,j]_1 ≤ i ≤ n; 1 ≤ j ≤ p the n × p features matrix vertically stacking the n samples of the p raw features, and let X_∙,j be the j-th feature column of X. Binarization. The binarized matrix X^B is a matrix with an extended number d > p of columns, where the j-th column X_∙,j is replaced by d_j ≥ 2 columns X^B_∙,j,1, …, X^B_∙,j,d_j containing only zeros and ones. Its i-th row is written x_i^B = [x^B_i,1,1⋯ x^B_i,1,d_1 x^B_i,2,1⋯ x^B_i,2,d_2⋯ x^B_i,p,1⋯ x^B_i,p,d_p]^⊤∈ℝ^d, where d = ∑_j=1^p d_j. In order to simplify the presentation of our results, we assume in the paper that all raw features X_∙,j are continuous, so that they are transformed using the following one-hot encoding. For each raw feature j, we consider a partition into intervals I_j,1, …, I_j,d_j of range(X_∙,j), namely satisfying ∪_k=1^d_j I_j,k = range(X_∙,j) and I_j,k∩ I_j,k' = ∅ for k ≠ k', and define x^B_i,j,k = 1 if x_i,j∈ I_j,k, 0 otherwise, for i=1, …, n, j=1, …, p and k=1, …, d_j. An example is given by interquantile intervals, namely I_j,1 = [q_j(0), q_j(1/d_j)] and I_j,k = (q_j((k-1)/d_j), q_j(k/d_j)] for k=2, …, d_j, where q_j(α) denotes a quantile of order α∈ [0, 1] for X_∙,j.
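The one-hot encoding over interquantile intervals can be sketched as follows (plain Python, with a simple empirical-quantile rule; the sample values are illustrative):

```python
def quantile_cuts(col, n_bins):
    """Cut-points q_j(k/n_bins), k = 1..n_bins-1, estimated from the
    sorted values of one raw continuous feature."""
    s = sorted(col)
    n = len(s)
    return [s[min(int(n * k / n_bins), n - 1)] for k in range(1, n_bins)]

def binarize(col, n_bins):
    """One-hot encode a feature over the intervals delimited by the cut-points:
    each row gets exactly one 1, in the column of its interval."""
    cuts = quantile_cuts(col, n_bins)
    rows = []
    for x in col:
        k = sum(1 for c in cuts if x > c)  # index of the interval containing x
        rows.append([1 if j == k else 0 for j in range(n_bins)])
    return rows

B = binarize([0.1, 0.4, 0.35, 0.8, 0.95, 0.2], n_bins=3)
# each row of the binarized block is a one-hot vector
```

Each binarized block thus has rows summing to one, which is precisely the rank-deficiency issue (P1) addressed later by the linear constraint.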
In practice, if there are ties in the estimated quantiles for a given feature, we simply choose the set of ordered unique values to construct the intervals. This principle of binarization is a well-known trick <cit.>, which makes it possible to improve over the linear decision function with respect to the raw feature space: it uses a larger number of model weights, one for each interval of values of the feature considered in the binarization. If the training data also contains unordered qualitative features, one-hot encoding with ℓ_1-penalization can be used, for instance. Goodness-of-fit. Given a loss function ℓ : 𝒴×ℝ→ℝ, we consider the goodness-of-fit term R_n(θ) = 1/n∑_i=1^n ℓ(y_i, m_θ(x_i)), where m_θ(x_i) = θ^⊤ x_i^B and θ∈ℝ^d, where we recall that d = ∑_j=1^p d_j. We then have θ = [θ_1,∙^⊤⋯θ_p,∙^⊤]^⊤, with θ_j,∙ corresponding to the group of coefficients weighting the binarized raw j-th feature. We focus on generalized linear models <cit.>, where the conditional distribution Y | X = x is assumed to be from a one-parameter exponential family, with a density of the form y | x ↦ f^0(y | x) = exp((y m^0(x) - b(m^0(x)))/ϕ + c(y,ϕ)) with respect to a reference measure which is either the Lebesgue measure (e.g. in the Gaussian case) or the counting measure (e.g. in the logistic or Poisson cases), leading to a loss function of the form ℓ(y_1, y_2) = -y_1 y_2 + b(y_2). The density described in (<ref>) encompasses several distributions, see Table <ref>. The functions b(·) and c(·) are known, while the natural parameter function m^0(·) is unknown. The dispersion parameter ϕ is assumed to be known in what follows. It is also assumed that b(·) is three times continuously differentiable. It is standard to notice that 𝔼[Y | X=x] = ∫ y f^0(y | x) dy = b'(m^0(x)), where b' stands for the derivative of b. This formula explains how b' links the conditional expectation to the unknown m^0.
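For instance, in the logistic case one has b(z) = log(1 + e^z), so that the loss ℓ(y, z) = -yz + b(z) coincides with the negative Bernoulli log-likelihood, while b' is the sigmoid function, in line with 𝔼[Y | X=x] = b'(m^0(x)). A minimal sketch (illustrative values):

```python
import math

def b_logistic(z):
    """Cumulant function of the Bernoulli distribution: b(z) = log(1 + e^z)."""
    return math.log(1.0 + math.exp(z))

def glm_loss(y, z):
    """GLM loss ell(y, z) = -y*z + b(z), here in the logistic case."""
    return -y * z + b_logistic(z)

def b_prime(z):
    """b'(z) = sigmoid(z), linking the natural parameter to E[Y | X = x]."""
    return 1.0 / (1.0 + math.exp(-z))

# The loss equals the negative log-likelihood of y in {0, 1} under P(Y=1) = sigmoid(z):
y, z = 1, 0.3
nll = -math.log(b_prime(z) if y == 1 else 1.0 - b_prime(z))
assert abs(glm_loss(y, z) - nll) < 1e-12
```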
The results given in Section <ref> rely on the following assumption. Assume that b is three times continuously differentiable, that there is C_b > 0 such that |b”'(z)| ≤ C_b|b”(z)| for any z ∈ℝ, and that there exist constants C_n > 0 and 0 < L_n ≤ U_n such that C_n = max_i=1, …, n|m^0(x_i)| < ∞ and L_n ≤ min_i=1, …, n b”(m^0(x_i)) ≤ max_i=1, …, n b”(m^0(x_i)) ≤ U_n. This assumption is satisfied for most standard generalized linear models. In Table <ref>, we list some standard examples that fit in this framework, see also <cit.> and <cit.>. Binarsity. Several problems occur when using the binarization trick described above: (P1) The one-hot encodings satisfy ∑_k=1^d_j x^B_i,j,k = 1 for j=1, …, p, meaning that the columns of each block sum to 1, making X^B not of full rank by construction. (P2) Choosing the number of intervals d_j for the binarization of each raw feature j is not an easy task, as too many intervals might lead to overfitting: the number of model weights increases with each d_j, leading to an over-parametrized model. (P3) Some of the raw features X_∙,j might not be relevant for the prediction task, so we want to select raw features from their one-hot encodings, namely to induce block-sparsity in θ. A usual way to deal with (P1) is to impose a linear constraint <cit.> in each block. In order to do so, let us first introduce n_j,k = |{i : x_i,j∈ I_j,k}| and the vector n_j = [n_j,1⋯ n_j,d_j] ∈ℝ^d_j. In our penalization term, we impose the linear constraint n_j^⊤θ_j,∙ = ∑_k=1^d_j n_j,kθ_j,k = 0 for all j=1, …, p. Note that if the I_j,k are taken as interquantile intervals, then for each j the counts n_j,k, k=1, …, d_j, are all equal, and the constraint (<ref>) becomes the standard constraint ∑_k=1^d_jθ_j,k = 0. The trick to tackle (P2) is to remark that within each block, the binary features are ordered.
We use a within-block total-variation penalization ∑_j=1^p ‖θ_j,∙‖_TV,ŵ_j,∙, where ‖θ_j,∙‖_TV,ŵ_j,∙ = ∑_k=2^d_jŵ_j,k|θ_j,k - θ_j,k-1|, with weights ŵ_j,k > 0 to be defined later, to keep the number of different values taken by θ_j,∙ to a minimal level. Finally, dealing with (P3) is actually a by-product of dealing with (P1) and (P2). Indeed, if the raw feature j is not relevant, then θ_j,∙ should have all entries constant because of the penalization (<ref>), and in this case all entries are zero, because of (<ref>). We therefore introduce the following penalization, called binarsity, given by pen(θ) = ∑_j=1^p (∑_k=2^d_jŵ_j,k|θ_j,k - θ_j,k-1| + δ_j(θ_j,∙)), where the weights ŵ_j,k > 0 are defined in Section <ref> below, and where δ_j(u) = 0 if n_j^⊤ u = 0 and δ_j(u) = ∞ otherwise. We consider the goodness-of-fit (<ref>) penalized by (<ref>), namely θ̂∈ argmin_θ∈ℝ^d{R_n(θ) + pen(θ)}. An important fact is that this optimization problem is numerically cheap, as explained in the next paragraph. Figure <ref> illustrates the effect of the binarsity penalization with a varying strength on an example. In Figure <ref>, we illustrate on a toy example, with p=2, the decision boundaries obtained for logistic regression (LR) on raw features, LR on binarized features, and LR on binarized features with the binarsity penalization. Proximal operator of binarsity. The proximal operator and proximal algorithms are important tools for non-smooth convex optimization, with important applications in the field of supervised learning with structured sparsity <cit.>. The proximal operator of a proper lower semi-continuous <cit.> convex function g : ℝ^d →ℝ is defined by prox_g(v) ∈ argmin_u ∈ℝ^d{1/2‖v - u‖_2^2 + g(u)}. Proximal operators can be interpreted as generalized projections. Namely, if g is the indicator function of a convex set C ⊂ℝ^d, given by g(u) = δ_C(u) = 0 if u ∈ C and ∞ otherwise, then prox_g is the projection operator onto C.
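As an illustration of this generalized-projection viewpoint, the proximal operator of the indicator δ_j of the hyperplane {u : n_j^⊤ u = 0} is the orthogonal projection u − (n_j^⊤ u/‖n_j‖_2^2) n_j. A minimal numerical sketch (illustrative values), cross-checked against the definition of the proximal operator on random feasible points:

```python
import random

def project_hyperplane(u, n):
    """prox of the indicator of {u : <n, u> = 0}: orthogonal projection
    onto the hyperplane, u - (<n, u> / ||n||_2^2) * n."""
    dot = sum(a * b for a, b in zip(n, u))
    nrm2 = sum(a * a for a in n)
    return [a - dot / nrm2 * b for a, b in zip(u, n)]

u = [1.0, 2.0, -0.5]
n = [3.0, 1.0, 2.0]  # plays the role of n_j (the vector of bin counts)
p = project_hyperplane(u, n)

# Feasibility: the projection satisfies the linear constraint exactly
assert abs(sum(a * b for a, b in zip(n, p))) < 1e-12

# Optimality: no other feasible point (sampled at random) is closer to u
random.seed(0)
d2 = lambda v: sum((a - b) ** 2 for a, b in zip(u, v))
for _ in range(1000):
    q = project_hyperplane([random.uniform(-3, 3) for _ in range(3)], n)
    assert d2(q) >= d2(p) - 1e-9
```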
It turns out that the proximal operator of binarsity can be computed very efficiently, using an algorithm <cit.> that we modify in order to include the weights ŵ_j,k. Since the binarsity penalization is block-separable, it applies within each block the proximal operator of the weighted total-variation penalization, followed by a simple projection onto span(n_j)^⊥, the orthogonal complement of span(n_j), see Algorithm <ref> below. We refer to Algorithm <ref> in Section <ref> for the weighted total-variation proximal operator. Algorithm <ref> computes the proximal operator of pen(θ) given by (<ref>). A proof of Proposition <ref> is given in Section <ref>. Algorithm <ref> leads to a very fast numerical routine, see Section <ref>. The next section provides a theoretical analysis of our algorithm, with an oracle inequality for the prediction error, together with a convergence rate in the particular case of a sparse additive model. § THEORETICAL GUARANTEES We now investigate the statistical properties of (<ref>), where the weights in the binarsity penalization are of order ŵ_j,k = √(log d/(n π̂_j,k)), with π̂_j,k = |{i=1, …, n : x_i,j∈∪_k'=k^d_j I_j,k'}|/n for all k ∈{2, …, d_j}, see Theorem <ref> for a precise definition of ŵ_j,k. Note that π̂_j,k corresponds to the proportion of ones in the sub-matrix obtained by deleting the first k-1 columns of the j-th binarized block matrix X^B_∙,j. In particular, we have π̂_j,k > 0 for all j, k. We consider the risk measure defined by R(m_θ) = 1/n∑_i=1^n {-b'(m^0(x_i)) m_θ(x_i) + b(m_θ(x_i))}, which is standard with generalized linear models <cit.>. §.§ A general oracle inequality We aim at evaluating how “close” to the minimal possible expected risk our estimated function m_θ̂, with θ̂ given by (<ref>), is.
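The quantities π̂_j,k and the corresponding weights can be computed directly from one binarized block. In the sketch below (plain Python, illustrative values), the proportionality constant of the weights is set to 1, whereas the theorem uses a specific constant involving U_n, ϕ and A:

```python
import math

def pi_hat(block, k):
    """pi_hat_{j,k}: proportion of rows whose one-hot index falls in columns
    k..d_j (1-based k), i.e. the proportion of ones in the sub-matrix obtained
    by deleting the first k-1 columns of the block."""
    n = len(block)
    return sum(1 for row in block if any(row[k - 1:])) / n

def weights(block, n_total, d):
    """Weights w_hat_{j,k} of order sqrt(log(d) / (n * pi_hat_{j,k})), k = 2..d_j."""
    return [math.sqrt(math.log(d) / (n_total * pi_hat(block, k)))
            for k in range(2, len(block[0]) + 1)]

block = [[1, 0, 0], [0, 1, 0], [0, 1, 0], [0, 0, 1]]  # one binarized block, d_j = 3
w = weights(block, n_total=4, d=6)
# pi_hat_{j,k} decreases in k, so the weights increase with k:
# rarely reached upper bins are penalized more heavily
```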
To measure this closeness, we establish a non-asymptotic oracle inequality with a fast rate of convergence for the excess risk of m_θ̂, namely R(m_θ̂) - R(m^0). To derive this inequality, we consider for technical reasons the following problem instead of (<ref>): θ̂∈ argmin_θ∈ B_d(ρ){R_n(θ) + pen(θ)}, where B_d(ρ) = {θ∈ℝ^d : ∑_j=1^p ‖θ_j,∙‖_∞≤ρ}. This constraint is standard in the literature on oracle inequalities for sparse generalized linear models, see for instance <cit.>, and is discussed in detail below. We also impose a restricted eigenvalue assumption on X^B. For all θ∈ℝ^d, let J(θ) = [J_1(θ), …, J_p(θ)] be the concatenation of the support sets relative to the total-variation penalization, that is J_j(θ) = {k = 2, …, d_j : θ_j,k≠θ_j,k-1}. Similarly, we denote by J^∁(θ) = [J_1^∁(θ), …, J_p^∁(θ)] the complement of J(θ). The restricted eigenvalue condition is defined as follows. Let K = [K_1, …, K_p] be a concatenation of index sets such that ∑_j=1^p |K_j| ≤ J^⋆, where J^⋆ is a positive integer. Define κ(K) = inf_u ∈𝒞_X^B,ŵ(K)\{0}{‖X^B u‖_2/(√(n)‖u_K‖_2)} with 𝒞_X^B,ŵ(K) = {u ∈ℝ^d : ∑_j=1^p ‖(u_j,∙)_K_j^∁‖_TV,ŵ_j,∙≤ 2∑_j=1^p ‖(u_j,∙)_K_j‖_TV,ŵ_j,∙}. We assume that the following condition holds: κ(K) > 0 for any K satisfying (<ref>). The set 𝒞_X^B,ŵ(K) is a cone composed of all vectors with a support “close” to K. Theorem <ref> gives a risk bound for the estimator m_θ̂. Let Assumptions <ref> and <ref> be satisfied. Fix A > 0 and choose ŵ_j,k = √(2U_nϕ(A + log d)/(n π̂_j,k)). Then, with probability at least 1 - 2e^-A, any θ̂ given by (<ref>) satisfies R(m_θ̂) - R(m^0) ≤ inf_θ{3(R(m_θ) - R(m^0)) + 2560(C_b(C_n + ρ) + 2)/(L_n κ^2(J(θ))) |J(θ)| max_j=1, …, p ‖(ŵ_j,∙)_J_j(θ)‖_∞^2}, where the infimum is over the set of vectors θ∈ B_d(ρ) such that n_j^⊤θ_j,∙ = 0 for all j=1, …, p and such that |J(θ)| ≤ J^⋆. The proof of Theorem <ref> is given in Section <ref> below. Note that the “variance” or “complexity” term in the oracle inequality satisfies |J(θ)| max_j=1, …, p ‖(ŵ_j,∙)_J_j(θ)‖_∞^2 ≤ 2U_nϕ|J(θ)|(A + log d)/n.
The value |J(θ)| characterizes the sparsity of the vector θ, given by |J(θ)| = ∑_j=1^p |J_j(θ)| = ∑_j=1^p |{k = 2, …, d_j : θ_j,k≠θ_j,k-1}|. It counts the number of non-equal consecutive values of θ. If θ is block-sparse, namely whenever |𝒮(θ)| ≪ p, where 𝒮(θ) = {j = 1, …, p : θ_j,∙≠ 0_d_j} (meaning that few raw features are useful for prediction), then |J(θ)| ≤ |𝒮(θ)| max_j ∈𝒮(θ) |J_j(θ)|, which means that |J(θ)| is controlled by the block sparsity |𝒮(θ)|. The oracle inequality of Theorem <ref> is stated uniformly over vectors θ∈ B_d(ρ) satisfying n_j^⊤θ_j,∙ = 0 for all j=1, …, p and |J(θ)| ≤ J^⋆. Writing this oracle inequality under the assumption |J(θ)| ≤ J^⋆ meets the standard way of stating sparse oracle inequalities, see e.g. <cit.>. Note that J^⋆ is introduced in Assumption <ref> and corresponds to a maximal sparsity for which the matrix X^B satisfies the restricted eigenvalue assumption. Also, the oracle inequality stated in Theorem <ref> holds for vectors such that n_j^⊤θ_j,∙ = 0, which is natural since the binarsity penalization imposes these extra linear constraints. The assumption that θ∈ B_d(ρ) is a technical one, which allows us to establish a connection, via the notion of self-concordance, see <cit.>, between the empirical squared ℓ_2-norm and the empirical Kullback divergence (see Lemma <ref> in Section <ref>). It is a technical constraint commonly used in the literature on oracle inequalities for sparse generalized linear models, see for instance <cit.>, a recent contribution for the particular case of Poisson regression being <cit.>. Also, note that max_i=1,…,n |⟨x_i^B, θ⟩| ≤∑_j=1^p ‖θ_j,∙‖_∞≤ |𝒮(θ)| ×‖θ‖_∞, where ‖θ‖_∞ = max_j=1, …, p ‖θ_j,∙‖_∞. The first inequality in (<ref>) comes from the fact that the entries of X^B are in {0, 1}, and it entails that max_i=1,…,n |⟨x_i^B, θ⟩| ≤ρ whenever θ∈ B_d(ρ).
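The sparsity measure |J(θ)| and the set of non-zero blocks can be computed directly from the blocks of θ (a minimal sketch with illustrative values):

```python
def tv_support_size(theta_blocks):
    """|J(theta)|: number of indices k >= 2 (per block) where consecutive
    coefficients differ, summed over all blocks."""
    return sum(sum(1 for k in range(1, len(b)) if b[k] != b[k - 1])
               for b in theta_blocks)

def block_support(theta_blocks):
    """S(theta): indices of blocks that are not identically zero."""
    return [j for j, b in enumerate(theta_blocks) if any(x != 0 for x in b)]

theta = [[0.0, 0.0, 0.0],         # irrelevant raw feature: constant zero block
         [0.5, 0.5, -1.0, -1.0],  # one jump -> contributes 1 to |J(theta)|
         [1.0, 2.0, 0.0]]         # two jumps -> contributes 2
print(tv_support_size(theta), block_support(theta))
```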
The second inequality in (<ref>) entails that ρ can be upper bounded by |𝒮(θ)| ×‖θ‖_∞, and therefore the constraint θ∈ B_d(ρ) is only a box constraint on θ, which depends on the dimensionality of the features through |𝒮(θ)| only. The fact that the procedure depends on ρ, and that the oracle inequality stated in Theorem <ref> depends linearly on ρ, is commonly found in the literature on sparse generalized linear models, see <cit.>. However, the constraint B_d(ρ) is a technicality which is not used in the numerical experiments provided in Section <ref> below. In the next section, we exhibit a consequence of Theorem <ref> when one considers the Gaussian case (least-squares loss) and m^0 has the sparse additive structure defined below. This structure allows us to control the bias term from Theorem <ref> and to exhibit a convergence rate. §.§ Sparse linear additive regression Theorem <ref> allows us to study a particular case, namely an additive model, see e.g. <cit.>, and in particular a sparse additive linear model, which is of particular interest in high-dimensional statistics, see <cit.>. We prove in Theorem <ref> below that our procedure matches the convergence rates previously known from the literature. In this setting, we work under the following assumptions. To simplify, we assume that x_i ∈ [0, 1]^p for all i=1, …, n. We consider the Gaussian setting with the least-squares loss, namely ℓ(y, y') = 1/2(y - y')^2, b(y) = 1/2 y^2 and ϕ = σ^2 (noise variance) in Equation (<ref>), with L_n = U_n = 1 and C_b = 0 in Assumption <ref>. Moreover, we assume that m^0 has the following sparse additive structure: m^0(x) = ∑_j ∈𝒮_* m_j^0(x_j) for x = [x_1 ⋯ x_p] ∈ℝ^p, where the m_j^0 : ℝ→ℝ are L-Lipschitz functions, namely satisfying |m_j^0(z) - m_j^0(z')| ≤ L|z - z'| for any z, z' ∈ℝ, and where 𝒮_* ⊂{1, …, p} is the set of active features (sparsity means that |𝒮_*| ≪ p). Also, we assume the following identifiability condition: ∑_i=1^n m_j^0(x_i,j) = 0 for all j=1, …, p.
Assumption <ref> contains identifiability and smoothness requirements that are standard when studying additive models, see e.g. <cit.>. We restrict the functions m_j^0 to be Lipschitz and not smoother, since our procedure produces a piecewise constant decision function with respect to each j, which can optimally approximate only Lipschitz functions. For more regular functions, our procedure would lead to suboptimal rates, see also the discussion below the statement of Theorem <ref>. Consider procedure (<ref>) with d_j = D, where D is the integer part of n^1/3, and I_j,1 = [0, 1/D], I_j,k = ((k-1)/D, k/D] for all k=2, …, D and j=1, …, p, and keep the weights ŵ_j,k the same as in Theorem <ref>. Introduce also θ_j,k^* = ∑_i=1^n m_j^0(x_i,j) 1_I_j,k(x_i,j) / ∑_i=1^n 1_I_j,k(x_i,j) for j ∈𝒮_* and θ_j,∙^* = 0_D for j ∉𝒮_*. Then, under Assumption <ref> with J^⋆ = |J(θ^*)| and Assumption <ref>, we have ‖m_θ̂ - m^0‖_n^2 ≤(3L^2|𝒮_*| + 5120 M_n σ^2(A + log(p n^1/3 M_n))/κ^2(J(θ^*))) |𝒮_*|/n^2/3, where M_n = max_j=1, …, p max_i=1, …, n |m_j^0(x_i,j)|. The proof of Theorem <ref> is given in Section <ref> below. It is an easy consequence of Theorem <ref> under the sparse additive model assumption. It uses Assumption <ref> with J^⋆ = |J(θ^*)|, since θ_j,∙^* is the minimizer of the bias for each j ∈𝒮_*, see the proof of Theorem <ref> for details. The rate of convergence is, up to constants and logarithmic terms, of order |𝒮_*|^2 n^-2/3. Recalling that we work under a Lipschitz assumption, namely Hölder smoothness of order 1, the scaling of this rate with respect to n is n^-2r/(2r+1) with r=1, which matches the one-dimensional minimax rate. This rate matches the one obtained in <cit.>, see Chapter 8 p. 272, where the rate |𝒮_*|^2 n^-2r/(2r+1) = |𝒮_*|^2 n^-4/5 is derived under a C^2 smoothness assumption, namely r=2. Hence, Theorem <ref> shows that, in the particular case of a sparse additive model, our procedure matches the state of the art in terms of convergence rate.
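The vector θ^* above simply collects the within-bin averages of each m_j^0. The sketch below (plain Python; the 1-Lipschitz function is an arbitrary illustrative choice) checks that such a piecewise-constant approximation has error at most L/D per bin, which is what drives the bias term of the rate:

```python
def binwise_averages(f, xs, D):
    """theta*_k: average of f over the sample points falling in bin k of a
    regular partition of [0, 1] into D bins; None for empty bins."""
    sums, counts = [0.0] * D, [0] * D
    for x in xs:
        k = min(D - 1, int(x * D))  # bin index, with x = 1 put in the last bin
        sums[k] += f(x)
        counts[k] += 1
    return [s / c if c else None for s, c in zip(sums, counts)]

f = lambda x: abs(x - 0.4)           # a 1-Lipschitz function on [0, 1]
n = 1000
xs = [i / (n - 1) for i in range(n)]
D = 10                               # D ~ n^(1/3), as in the theorem
theta = binwise_averages(f, xs, D)

# within a bin of width 1/D, |f(x) - average| <= L * (1/D)
err = max(abs(f(x) - theta[min(D - 1, int(x * D))]) for x in xs)
```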
Further improvements could consider more general smoothness (beyond Lipschitz) and adaptation with respect to the regularity, at the cost of a more complicated procedure, which is beyond the scope of this paper. § NUMERICAL EXPERIMENTS In this section, we first illustrate the fact that the binarsity penalization is roughly only two times slower than basic ℓ_1-penalization, see the timings in Figure <ref>. We then compare binarsity to a large number of baselines, see Table <ref>, using 9 classical binary classification datasets obtained from the UCI Machine Learning Repository <cit.>, see Table <ref>. For each method, we randomly split all datasets into a training and a test set (30% for testing), and all hyper-parameters are tuned on the training set using V-fold cross-validation with V = 10. For the support vector machine with radial basis kernel (SVM), random forests (RF) and gradient boosting (GB), we use the reference implementations from the scikit-learn library <cit.>, and we use the GAM procedure from the pyGAM library[<https://github.com/dswah/pyGAM>] for the GAM baseline. The binarsity penalization is implemented in the tick library <cit.>; we provide sample code for its use in Figure <ref>. Logistic regression with no penalization or with ridge penalization gave similar or lower scores on all considered datasets, and these baselines are therefore not reported in our experiments. The binarsity penalization does not require a careful tuning of d_j (the number of bins for the one-hot encoding of raw feature j). Indeed, past a large enough value, increasing d_j even further barely changes the results, since the cut-points selected by the penalization do not change anymore. This is illustrated in Figure <ref>, where we observe that past 50 bins, increasing d_j even further does not affect the performance, and only leads to an increase of the training time. In all our experiments, we therefore fix d_j = 50 for j = 1, …, p. The results of all our experiments are reported in Figures <ref> and <ref>.
In Figure <ref> we compare the performance of binarsity with the baselines on all 9 datasets, using ROC curves and the area under the curve (AUC), while we report computing (training) timings in Figure <ref>. We observe that binarsity consistently outperforms Lasso, as well as Group L1: this highlights the importance of the TV norm within each group. The AUC of Group TV is always slightly below that of binarsity, and more importantly it involves a much larger training time: convergence is slower for Group TV, since it does not use the linear constraint of binarsity, leading to an ill-conditioned problem (the binary features sum to 1 within each block). Finally, binarsity also outperforms GAM, and its performance is comparable to that of RF and GB in all considered examples, with computational timings that are orders of magnitude faster, see Figure <ref>. All these experiments illustrate that binarsity achieves an extremely competitive compromise between computational time and performance, compared to all considered baselines. § CONCLUSION In this paper, we introduced the binarsity penalization for one-hot encodings of continuous features. We illustrated the good statistical properties of binarsity for generalized linear models by proving non-asymptotic oracle inequalities. We conducted extensive comparisons of binarsity with state-of-the-art algorithms for binary classification on several standard datasets. Experimental results illustrate that binarsity significantly outperforms the Lasso, Group L1 and Group TV penalizations, as well as generalized additive models, while being competitive with random forests and boosting. Moreover, it can be trained orders of magnitude faster than boosting and other ensemble methods. Even more importantly, it provides interpretability. Indeed, in addition to the raw feature selection ability of binarsity, the method pinpoints significant cut-points for all continuous features.
This leads to a much more precise and deeper understanding of the model than the one provided by the Lasso on raw features. These results illustrate the fact that binarsity achieves an extremely competitive compromise between computational time and performance, compared to all considered baselines. § PROOFS In this section we gather the proofs of all the theoretical results proposed in the paper. Throughout this section, we denote by ∂(ϕ) the subdifferential mapping of a convex function ϕ. §.§ Proof of Proposition <ref> Recall that the indicator function δ_j is given by (<ref>). For any fixed j=1, …, p, we prove that prox_‖·‖_TV,ŵ_j,∙ + δ_j is the composition of prox_‖·‖_TV,ŵ_j,∙ and prox_δ_j, namely prox_‖·‖_TV,ŵ_j,∙ + δ_j(θ_j,∙) = prox_δ_j(prox_‖·‖_TV,ŵ_j,∙(θ_j,∙)) for all θ_j,∙∈ℝ^d_j. Using Theorem 1 in <cit.>, it is sufficient to show that for all θ_j,∙∈ℝ^d_j, we have ∂(‖θ_j,∙‖_TV,ŵ_j,∙) ⊆∂(‖prox_δ_j(θ_j,∙)‖_TV,ŵ_j,∙). We have prox_δ_j(θ_j,∙) = Π_span{n_j}^⊥(θ_j,∙), where Π_span{n_j}^⊥(·) stands for the projection onto the orthogonal complement of span{n_j}. This projection simply writes Π_span{n_j}^⊥(θ_j,∙) = θ_j,∙ - (n_j^⊤θ_j,∙/‖n_j‖_2^2) n_j. Now, let us define the d_j × d_j matrix D_j by D_j = [1 0 ⋯ 0; -1 1 ⋱ ⋮; ⋮ ⋱ ⋱ 0; 0 ⋯ -1 1] ∈ℝ^d_j × d_j. We then remark that for all θ_j,∙∈ℝ^d_j, ‖θ_j,∙‖_TV,ŵ_j,∙ = ∑_k=2^d_jŵ_j,k|θ_j,k - θ_j,k-1| = ‖ŵ_j,∙⊙ D_jθ_j,∙‖_1. Using subdifferential calculus (see details in the proof of Proposition <ref> below), one has ∂(‖θ_j,∙‖_TV,ŵ_j,∙) = ∂(‖ŵ_j,∙⊙ D_jθ_j,∙‖_1) = D_j^⊤(ŵ_j,∙⊙ sign(D_jθ_j,∙)). Then, the linear constraint n_j^⊤θ_j,∙ = 0 entails D_j^⊤(ŵ_j,∙⊙ sign(D_jθ_j,∙)) = D_j^⊤(ŵ_j,∙⊙ sign(D_j(θ_j,∙ - (n_j^⊤θ_j,∙/‖n_j‖_2^2) n_j))), which leads to (<ref>) and concludes the proof of the Proposition.
□ §.§ Proximal operator of the weighted TV penalization We recall in Algorithm <ref> an algorithm provided in <cit.> for the computation of the proximal operator of the weighted total-variation penalization, β = prox_‖·‖_TV,ŵ(θ) ∈ argmin_u ∈ℝ^m{1/2‖θ - u‖_2^2 + ‖u‖_TV,ŵ}. A quick explanation of this algorithm is as follows. The algorithm runs forward through the input vector (θ_1, …, θ_m). Using the Karush-Kuhn-Tucker (KKT) optimality conditions <cit.>, we have that at a location k, the weight β_k stays constant whenever |u_k| < ŵ_k+1, where u_k is a solution to a dual problem associated with the primal problem (<ref>). When this is not possible, the algorithm goes back to the last location where a jump can be introduced in β, validates the current segment up to this location, starts a new segment, and continues. §.§ Proof of Theorem <ref> The proof relies on several technical properties that are described below. From now on, we consider y = [y_1 ⋯ y_n]^⊤, X = [x_1 ⋯ x_n]^⊤ and m^0(X) = [m^0(x_1) ⋯ m^0(x_n)]^⊤, and recalling that m_θ(x_i) = θ^⊤ x^B_i, we introduce m_θ(X) = [m_θ(x_1) ⋯ m_θ(x_n)]^⊤ and b'(m_θ(X)) = [b'(m_θ(x_1)) ⋯ b'(m_θ(x_n))]^⊤. Let us now define the Kullback-Leibler divergence between the true probability density function f^0 defined in (<ref>) and a candidate f_θ within the generalized linear model, f_θ(y|x) = exp((y m_θ(x) - b(m_θ(x)))/ϕ + c(y,ϕ)), as follows: KL_n(f^0, f_θ) = 1/n∑_i=1^n 𝔼_ℙ[log(f^0(y_i|x_i)/f_θ(y_i|x_i))] := KL_n(m^0(X), m_θ(X)), where ℙ is the joint distribution of y given X. We then have the following Lemma. The excess risk satisfies R(m_θ) - R(m^0) = ϕ KL_n(m^0(X), m_θ(X)), where we recall that ϕ is the dispersion parameter of the generalized linear model, see (<ref>). Proof. It follows from the simple computation KL_n(m^0(X), m_θ(X)) = ϕ^-1 1/n∑_i=1^n 𝔼_ℙ[(-y_i m_θ(x_i) + b(m_θ(x_i))) - (-y_i m^0(x_i) + b(m^0(x_i)))] = ϕ^-1(R(m_θ) - R(m^0)), which proves the Lemma.
□ §.§ Optimality conditions As explained in the following proposition, a solution to problem (<ref>) can be characterized using the Karush-Kuhn-Tucker (KKT) optimality conditions <cit.>. A vector θ̂ = [θ̂_1,∙^⊤⋯θ̂_p,∙^⊤]^⊤∈ℝ^d is an optimum of the objective function (<ref>) if and only if there are subgradients ĥ = [ĥ_j,∙]_j=1, …, p∈∂‖θ̂‖_TV,ŵ and ĝ = [ĝ_j,∙]_j=1, …, p∈∂[δ_j(θ̂_j,∙)]_j=1, …, p such that ∂ R_n(θ̂)/∂θ_j,∙ + ĥ_j,∙ + ĝ_j,∙ = 0, where ĥ_j,∙ = D_j^⊤(ŵ_j,∙⊙ sign(D_jθ̂_j,∙)) for j ∈ J(θ̂) and ĥ_j,∙∈ D_j^⊤(ŵ_j,∙⊙ [-1,+1]^d_j) for j ∈ J^∁(θ̂), and where we recall that J(θ̂) is the support set of θ̂. The subgradient ĝ_j,∙ belongs to ∂(δ_j(θ̂_j,∙)) = {μ_j,∙∈ℝ^d_j : μ_j,∙^⊤θ_j,∙≤μ_j,∙^⊤θ̂_j,∙ for all θ_j,∙ such that n_j^⊤θ_j,∙ = 0}. For the generalized linear model, we have 1/n(X^B_∙,j)^⊤(b'(m_θ̂(X)) - y) + ĥ_j,∙ + ĝ_j,∙ + f̂_j,∙ = 0, where f̂ = [f̂_j,∙]_j=1, …, p belongs to the normal cone of the ball B_d(ρ). Proof. The function θ↦ R_n(θ) is differentiable, so the subdifferential of R_n(·) + pen(·) at a point θ = (θ_j,∙)_j=1, …, p∈ℝ^d is given by ∂(R_n(θ) + pen(θ)) = ∇ R_n(θ) + ∂(pen(θ)), where ∇ R_n(θ) = [∂ R_n(θ)/∂θ_1,∙⋯∂ R_n(θ)/∂θ_p,∙]^⊤ and ∂ pen(θ) = [∂‖θ_1,∙‖_TV,ŵ_1,∙ + ∂δ_1(θ_1,∙) ⋯∂‖θ_p,∙‖_TV,ŵ_p,∙ + ∂δ_p(θ_p,∙)]^⊤. We have ‖θ_j,∙‖_TV,ŵ_j,∙ = ‖ŵ_j,∙⊙ D_jθ_j,∙‖_1 for all j = 1, …, p. Then, by applying some properties of subdifferential calculus, we get ∂‖θ_j,∙‖_TV,ŵ_j,∙ = D_j^⊤(ŵ_j,∙⊙ sign(D_jθ_j,∙)) if D_jθ_j,∙≠ 0, and D_j^⊤(ŵ_j,∙⊙ v_j) otherwise, where v_j ∈ [-1,+1]^d_j for all j=1, …, p. For generalized linear models, we rewrite θ̂∈ argmin_θ∈ℝ^d{R_n(θ) + pen(θ) + δ_B_d(ρ)(θ)}, where δ_B_d(ρ) is the indicator function of B_d(ρ). Now, θ̂ = [θ̂_1,∙^⊤⋯θ̂_p,∙^⊤]^⊤ is an optimum of (<ref>) if and only if 0 ∈∇ R_n(θ̂) + ∂‖θ̂‖_TV,ŵ + ∂δ_B_d(ρ)(θ̂). Recall that the subdifferential of δ_B_d(ρ)(·) is the normal cone of B_d(ρ), namely ∂δ_B_d(ρ)(θ̂) = {η∈ℝ^d : η^⊤θ≤η^⊤θ̂ for all θ∈ B_d(ρ)}. One has ∂ R_n(θ)/∂θ_j,∙ = 1/n(X^B_∙,j)^⊤(b'(m_θ(X)) - y), so that together with (<ref>) and (<ref>) we obtain (<ref>), which concludes the proof of Proposition <ref>.
□ §.§ Compatibility conditions Let us define the block diagonal matrix D = diag(D_1, …, D_p), with D_j defined in (<ref>). We denote by T_j the inverse of D_j, which is the d_j × d_j lower triangular matrix with entries (T_j)_r,s = 0 if r < s and (T_j)_r,s = 1 otherwise. We set T = diag(T_1, …, T_p), so that one has D^-1 = T. In order to prove Theorem <ref>, we need the following results, which give a compatibility property <cit.> for the matrix T, see Lemma <ref> below, and for the matrix X^B T, see Lemma <ref> below. For any concatenation of subsets K = [K_1, …, K_p], we set K_j = {τ_j^1, …, τ_j^b_j}⊂{1, …, d_j} for all j=1, …, p, with the convention that τ_j^0 = 0 and τ_j^b_j+1 = d_j+1. Let γ∈ℝ^d_+ be given and K = [K_1, …, K_p] with K_j given by (<ref>) for all j=1, …, p. Then, for every u ∈ℝ^d \{0}, we have ‖T u‖_2/|‖u_K⊙γ_K‖_1 - ‖u_K^∁⊙γ_K^∁‖_1| ≥κ_T,γ(K), where κ_T,γ(K) = {32∑_j=1^p(∑_k=1^d_j|γ_j,k+1 - γ_j,k|^2 + 2|K_j|‖γ_j,∙‖_∞^2Δ_min,K_j^-1)}^-1/2 and Δ_min,K_j = min_r=1, …, b_j+1|τ_j^r - τ_j^r-1|. Proof. Using Proposition 3 in <cit.>, we have ‖u_K⊙γ_K‖_1 - ‖u_K^∁⊙γ_K^∁‖_1 = ∑_j=1^p ‖u_K_j⊙γ_K_j‖_1 - ‖u_K_j^∁⊙γ_K_j^∁‖_1 ≤∑_j=1^p 4‖T_j u_j,∙‖_2 {2∑_k=1^d_j|γ_j,k+1 - γ_j,k|^2 + 2(b_j+1)‖γ_j,∙‖_∞^2Δ_min,K_j^-1}^1/2. Using Hölder's inequality on the right hand side of the last inequality gives ‖u_K⊙γ_K‖_1 - ‖u_K^∁⊙γ_K^∁‖_1 ≤‖T u‖_2 {32∑_j=1^p(∑_k=1^d_j|γ_j,k+1 - γ_j,k|^2 + 2|K_j|‖γ_j,∙‖_∞^2Δ_min,K_j^-1)}^1/2, which completes the proof of the Lemma. □ Combining Assumption <ref> and Lemma <ref> allows us to establish a compatibility condition satisfied by X^B T. Let γ∈ℝ^d_+ be given and K = [K_1, …, K_p] with K_j given by (<ref>) for j=1, …, p. Then, if Assumption <ref> holds, one has inf_u ∈𝒞_1,ŵ(K)\{0}{‖X^B T u‖_2/(√(n)|‖u_K⊙γ_K‖_1 - ‖u_K^∁⊙γ_K^∁‖_1|)}≥κ_T,γ(K)κ(K), where 𝒞_1,ŵ(K) = {u ∈ℝ^d : ∑_j=1^p ‖(u_j,∙)_K_j^∁‖_1,ŵ_j,∙≤ 2∑_j=1^p ‖(u_j,∙)_K_j‖_1,ŵ_j,∙}. Proof. Lemma <ref> gives ‖X^B T u‖_2/(√(n)|‖u_K⊙γ_K‖_1 - ‖u_K^∁⊙γ_K^∁‖_1|) ≥κ_T,γ(K)‖X^B T u‖_2/(√(n)‖T u‖_2). Now, we note that if u ∈𝒞_1,ŵ(K), then T u ∈𝒞_X^B,ŵ(K).
Hence, Assumption <ref> entails ‖X^B T u‖_2/(√(n)|‖u_K⊙γ_K‖_1 - ‖u_K^∁⊙γ_K^∁‖_1|) ≥κ_T,γ(K)κ(K), which concludes the proof of the Lemma. □ §.§ Connection between the empirical Kullback-Leibler divergence and the empirical squared norm The next Lemma is from <cit.> (see Lemma 1 herein). Let φ : ℝ→ℝ be a three times differentiable convex function such that for all t ∈ℝ, |φ”'(t)| ≤ M|φ”(t)| for some M ≥ 0. Then, for all t ≥ 0, one has φ”(0)/M^2 ψ(-Mt) ≤φ(t) - φ(0) - φ'(0)t ≤φ”(0)/M^2 ψ(Mt), with ψ(u) = e^u - u - 1. This Lemma entails the following in our setting. Under Assumption <ref>, one has L_n ψ(-2(C_n + ρ))/(4ϕ(C_n + ρ)^2) (1/n)‖m^0(X) - m_θ(X)‖_2^2 ≤ KL_n(m^0(X), m_θ(X)) and U_n ψ(2(C_n + ρ))/(4ϕ(C_n + ρ)^2) (1/n)‖m^0(X) - m_θ(X)‖_2^2 ≥ KL_n(m^0(X), m_θ(X)) for all θ∈ B_d(ρ). Proof. Let us consider the function G_n : ℝ→ℝ defined by G_n(t) = R_n(m^0 + t m_η), with m_η to be defined later, which writes G_n(t) = 1/n∑_i=1^n b(m^0(x_i) + t m_η(x_i)) - 1/n∑_i=1^n y_i(m^0(x_i) + t m_η(x_i)). We have G'_n(t) = 1/n∑_i=1^n m_η(x_i) b'(m^0(x_i) + t m_η(x_i)) - 1/n∑_i=1^n y_i m_η(x_i), G”_n(t) = 1/n∑_i=1^n m^2_η(x_i) b”(m^0(x_i) + t m_η(x_i)) and G”'_n(t) = 1/n∑_i=1^n m^3_η(x_i) b”'(m^0(x_i) + t m_η(x_i)). Using Assumption <ref>, we have |G”'_n(t)| ≤ C_b‖m_η‖_∞|G”_n(t)|, where ‖m_η‖_∞ := max_i=1, …, n|m_η(x_i)|. Lemma <ref> with M = C_b‖m_η‖_∞ gives G”_n(0)ψ(-C_b‖m_η‖_∞ t)/(C_b^2‖m_η‖_∞^2) ≤ G_n(t) - G_n(0) - tG'_n(0) ≤ G”_n(0)ψ(C_b‖m_η‖_∞ t)/(C_b^2‖m_η‖_∞^2) for all t ≥ 0, and t=1 leads to G”_n(0)ψ(-C_b‖m_η‖_∞)/(C_b^2‖m_η‖_∞^2) ≤ R_n(m^0 + m_η) - R_n(m^0) - G'_n(0) ≤ G”_n(0)ψ(C_b‖m_η‖_∞)/(C_b^2‖m_η‖_∞^2). An easy computation gives -G'_n(0) = 1/n∑_i=1^n m_η(x_i)(y_i - b'(m^0(x_i))) and G”_n(0) = 1/n∑_i=1^n m^2_η(x_i) b”(m^0(x_i)), and since obviously 𝔼_ℙ[G_n'(0)] = 0, we obtain G”_n(0)ψ(-C_b‖m_η‖_∞)/(C_b^2‖m_η‖_∞^2) ≤ R(m^0 + m_η) - R(m^0) ≤ G”_n(0)ψ(C_b‖m_η‖_∞)/(C_b^2‖m_η‖_∞^2).
Now, choosing m_η = m_θ - m^0 and combining Assumption <ref> with Equation (<ref>) gives C_b‖m_η‖_∞≤ C_b max_i=1, …, n(|⟨x_i^B, θ⟩| + |m^0(x_i)|) ≤ C_b(ρ + C_n). Hence, since x ↦ψ(x)/x^2 is an increasing function on ℝ^+, we end up with G”_n(0)ψ(-C_b(C_n + ρ))/(C_b^2(C_n + ρ)^2) ≤ R(m_θ) - R(m^0) = ϕ KL_n(m^0(X), m_θ(X)) and G”_n(0)ψ(C_b(C_n + ρ))/(C_b^2(C_n + ρ)^2) ≥ R(m_θ) - R(m^0) = ϕ KL_n(m^0(X), m_θ(X)), and since G”_n(0) = n^-1∑_i=1^n(m_θ(x_i) - m^0(x_i))^2 b”(m^0(x_i)), we obtain L_n ψ(-C_b(C_n + ρ))/(C_b^2 ϕ(C_n + ρ)^2) (1/n)‖m^0(X) - m_θ(X)‖_2^2 ≤ KL_n(m^0(X), m_θ(X)) and U_n ψ(C_b(C_n + ρ))/(C_b^2 ϕ(C_n + ρ)^2) (1/n)‖m^0(X) - m_θ(X)‖_2^2 ≥ KL_n(m^0(X), m_θ(X)), which concludes the proof of the Lemma. □ §.§ Proof of Theorem <ref> Let us recall that R_n(m_θ) = 1/n∑_i=1^n b(m_θ(x_i)) - 1/n∑_i=1^n y_i m_θ(x_i) for all θ∈ℝ^d, and that θ̂∈ argmin_θ∈ B_d(ρ){R_n(θ) + pen(θ)}. Proposition <ref> above entails that there are ĥ = [ĥ_j,∙]_j=1, …, p∈∂‖θ̂‖_TV,ŵ, ĝ = [ĝ_j,∙]_j=1, …, p∈ [∂δ_j(θ̂_j,∙)]_j=1, …, p and f̂ = [f̂_j,∙]_j=1, …, p∈∂δ_B_d(ρ)(θ̂) such that ⟨1/n(X^B)^⊤(b'(m_θ̂(X)) - y) + ĥ + ĝ + f̂, θ̂ - θ⟩ = 0 for all θ∈ℝ^d. This can be rewritten as 1/n⟨b'(m_θ̂(X)) - b'(m^0(X)), m_θ̂(X) - m_θ(X)⟩ - 1/n⟨y - b'(m^0(X)), m_θ̂(X) - m_θ(X)⟩ + ⟨ĥ + ĝ + f̂, θ̂ - θ⟩ = 0. For any θ∈ B_d(ρ) such that n_j^⊤θ_j,∙ = 0 for all j and any h ∈∂‖θ‖_TV,ŵ, the monotonicity of the subdifferential mapping implies ⟨ĥ, θ - θ̂⟩≤⟨h, θ - θ̂⟩, ⟨ĝ, θ - θ̂⟩≤ 0 and ⟨f̂, θ - θ̂⟩≤ 0, so that 1/n⟨b'(m_θ̂(X)) - b'(m^0(X)), m_θ̂(X) - m_θ(X)⟩≤1/n⟨y - b'(m^0(X)), m_θ̂(X) - m_θ(X)⟩ - ⟨h, θ̂ - θ⟩. Now, consider the function H_n : ℝ→ℝ defined by H_n(t) = 1/n∑_i=1^n b(m_θ̂+tη(x_i)) - 1/n∑_i=1^n b'(m^0(x_i)) m_θ̂+tη(x_i), where η will be defined later. We use again the same arguments as in the proof of Lemma <ref>.
We differentiate H_n three times with respect to t, so that H'_n(t) = 1/n∑_i=1^n m_η(x_i) b'(m_θ̂+tη(x_i)) - 1/n∑_i=1^n b'(m^0(x_i)) m_η(x_i), H”_n(t) = 1/n∑_i=1^n m^2_η(x_i) b”(m_θ̂+tη(x_i)) and H”'_n(t) = 1/n∑_i=1^n m^3_η(x_i) b”'(m_θ̂+tη(x_i)), and in the same way as in the proof of Lemma <ref>, we have |H”'_n(t)| ≤ C_b(C_n + ρ)|H”_n(t)|, and Lemma <ref> entails H”_n(0)ψ(-C_b t(C_n + ρ))/(C_b^2(C_n + ρ)^2) ≤ H_n(t) - H_n(0) - tH'_n(0) ≤ H”_n(0)ψ(C_b t(C_n + ρ))/(C_b^2(C_n + ρ)^2) for all t ≥ 0. Taking t=1 and η = θ - θ̂ implies H_n(1) = 1/n∑_i=1^n b(m_θ(x_i)) - 1/n∑_i=1^n b'(m^0(x_i)) m_θ(x_i) = R(m_θ) and H_n(0) = 1/n∑_i=1^n b(m_θ̂(x_i)) - 1/n∑_i=1^n b'(m^0(x_i)) m_θ̂(x_i) = R(m_θ̂). Moreover, we have H'_n(0) = 1/n∑_i=1^n ⟨x_i^B, θ - θ̂⟩ b'(m_θ̂(x_i)) - 1/n∑_i=1^n b'(m^0(x_i))⟨x_i^B, θ - θ̂⟩ = 1/n⟨b'(m_θ̂(X)) - b'(m^0(X)), X^B(θ - θ̂)⟩ and H”_n(0) = 1/n∑_i=1^n ⟨x_i^B, θ̂ - θ⟩^2 b”(m_θ̂(x_i)). Then, we deduce that H”_n(0)ψ(-C_b(C_n + ρ))/(C_b^2(C_n + ρ)^2) ≤ R(m_θ) - R(m_θ̂) - 1/n⟨b'(m_θ̂(X)) - b'(m^0(X)), X^B(θ - θ̂)⟩ = ϕ KL_n(m^0(X), m_θ(X)) - ϕ KL_n(m^0(X), m_θ̂(X)) + 1/n⟨b'(m_θ̂(X)) - b'(m^0(X)), m_θ̂(X) - m_θ(X)⟩. Then, with Equation (<ref>), one has ϕ KL_n(m^0(X), m_θ̂(X)) + H”_n(0)ψ(-C_b(C_n + ρ))/(C_b^2(C_n + ρ)^2) ≤ϕ KL_n(m^0(X), m_θ(X)) + 1/n⟨y - b'(m^0(X)), m_θ̂(X) - m_θ(X)⟩ - ⟨h, θ̂ - θ⟩. Since H”_n(0) ≥ 0, this implies that ϕ KL_n(m^0(X), m_θ̂(X)) ≤ϕ KL_n(m^0(X), m_θ(X)) + 1/n⟨y - b'(m^0(X)), m_θ̂(X) - m_θ(X)⟩ - ⟨h, θ̂ - θ⟩. If 1/n⟨y - b'(m^0(X)), X^B(θ̂ - θ)⟩ - ⟨h, θ̂ - θ⟩ < 0, it follows that KL_n(m^0(X), m_θ̂(X)) ≤ KL_n(m^0(X), m_θ(X)), and Theorem <ref> holds. From now on, let us assume that 1/n⟨y - b'(m^0(X)), m_θ̂(X) - m_θ(X)⟩ - ⟨h, θ̂ - θ⟩≥ 0. We first derive a bound on 1/n⟨y - b'(m^0(X)), m_θ̂(X) - m_θ(X)⟩. Recall that D^-1 = T (see the beginning of Section <ref>). We focus on finding a bound for ⟨1/n(X^B T)^⊤(y - b'(m^0(X))), D(θ̂ - θ)⟩.
On the one hand, one has1/n()^⊤ ( - b'(m^0())), θ̂- θ= 1/n(T)^⊤ ( - b'(m^0())), D(θ̂- θ)≤1/n∑_j=1^p ∑_k=1^d_j|((_∙,jT_j)_∙, k)^⊤ ( - b'(m^0())) | |(D_j(θ̂_j,∙ - θ_j,∙))_k |where (_∙,jT_j)_∙,k = [ (_∙,jT_j)_1,k⋯ (_∙,jT_j)_n,k]^⊤∈^n is the k-th column of the matrix _∙,jT_j.Let us consider the eventℰ_n = ⋂_j=1^p ⋂_k =2^d_jℰ_n,j,k,where ℰ_n,j,k = {1/n | (_∙,jT_j)_∙,k^⊤ ( - b'(m^0())) |≤ŵ_j,k},so that, on ℰ_n, we have 1/n()^⊤ ( - b'(m^0()), θ̂- θ ≤∑_j=1^p ∑_k=1^d_jŵ_j,k |(D_j(θ̂_j,∙ - θ_j,∙))_k| ≤∑_j=1^pŵ_j,∙⊙ D_j(θ̂_j,∙ - θ_j,∙)_1.On the other hand, from the definition of the subgradient [h_j, ∙]_j=1, …, p∈∂θ_, ŵ (see Equation (<ref>)), one can choose h such that h_j, k = (D_j^⊤ (ŵ_j,∙⊙(D_jθ_j,∙)))_kfor all k ∈ J_j(θ) andh_j, k = (D_j^⊤ (ŵ_j,∙⊙ ( D_jθ̂_j, ∙ ) )_k = (D_j^⊤ (ŵ_j,∙⊙ ( D_j(θ̂_j, ∙ - θ_j,∙) ) )_kfor all k ∈ J_j^∁(θ). Using a triangle inequality and the fact that (x)^⊤ x= x_1, we obtain-h, θ̂- θ ≤∑_j=1^p(ŵ_j,∙)_J_j(θ)⊙ D_j(θ̂_j, ∙ -θ_j, ∙)_J_j(θ)_1- ∑_j=1^p(ŵ_j,∙)_J^∁_j(θ)⊙ D_j(θ̂_j, ∙ -θ_j, ∙)_J^∁_j(θ)_1 ≤∑_j=1^p (θ̂_j, ∙ -θ_j, ∙)_J_j(θ)_, ŵ_j,∙ - ∑_j=1^p(θ̂_j, ∙ -θ_j, ∙)_J^∁_j(θ)_, ŵ_j,∙.Combining inequalities (<ref>) and (<ref>), we get∑_j=1^p(θ̂_j, ∙ -θ_j, ∙)_J^∁_j(θ)_, ŵ_j,∙≤ 2∑_j=1^p(θ̂_j, ∙ -θ_j, ∙)_ J_j(θ)_, ŵ_j,∙on ℰ_n. Hence∑_j=1^p (ŵ_j,∙)_J^∁_j(θ)⊙ D_j(θ̂_j, ∙ -θ_j, ∙)_J^∁_j(θ)_1 ≤ 2∑_j=1^p (ŵ_j,∙)_ J_j(θ)⊙ D_j(θ̂_j, ∙ -θ_j, ∙)_J_j(θ)_1.This means that θ̂- θ∈𝒞_, ŵ(J(θ)) and D(θ̂- θ) ∈𝒞_1,ŵ(J(θ)),see (<ref>) and (<ref>). 
Now, going back to (<ref>) and taking into account (<ref>), the compatibility of T given in Equation (<ref>) provides the following on the eventℰ_n:ϕKL_n(m^0(), m_θ̂())≤ϕKL_n(m^0(), m_θ())+ 2∑_j=1^p (ŵ_j,∙)_J_j(θ)⊙ D_j(θ̂_j, ∙ -θ_j, ∙)_J_j(θ)_1.ThenKL_n(m^0(), m_θ̂()) ≤KL_n(m^0(), m_θ()) + m_θ̂() - m_θ() _2/√(n) ϕ κ_T,γ̂(J(θ)) κ(J(θ)),where γ̂= (γ̂_1,∙^⊤, …, γ̂_p,∙^⊤)^⊤ is such that γ̂_j,k = {[2ŵ_j,k k ∈ J_j(θ),; 0 k ∈ J_j^∁(θ), ].for all j=1, …, p and κ_T, γ̂(J(θ)) ={ 32∑_j=1^p∑_k=1^d_j |γ̂_j,k+1 - γ̂_j,k|^2 + 2|J_j(θ)| γ̂_j,∙_∞^2Δ_min, J_j(θ)^-1}^-1/2.Now, we find an upper bound for1/κ^2_T,γ̂(J(θ)) = 32∑_j=1^p∑_k=1^d_j |γ̂_j,k+1 - γ̂_j,k|^2 + 2|J_j(θ)| γ̂_j,∙_∞^2Δ_min, J_j(θ)^-1.Note that γ̂_j,∙_∞≤ 2ŵ_j,∙_∞.Let us write J_j(θ) ={k_j^1, …, k_j^|J_j(θ)|} and set B_r =[[k_j^r-1, k_j^r[[ = {k_j^r-1, k_j^r-1 + 1, …, k_j^r -1} for r = 1, …, |J_j(θ)|+1 with the convention that k_j^0=0 and k_j^|J_j(θ)|+1 = d_j+1. Then∑_k=1^d_j |γ̂_j,k+1 -γ̂_j,k|^2= ∑_r=1^|J_j(θ)|+1∑_k ∈ B_r |γ̂_j,k+1 -γ̂_j,k|^2 =∑_r=1^|J_j(θ)|+1 |γ̂_j,k_j^r -1+1 - γ̂_j,k_j^r -1|^2 + |γ̂_j,k_j^r - γ̂_j,k_j^r -1|^2=∑_r=1^|J_j(θ)|+1γ̂_j,k_j^r -1^2 + γ̂_j,k_j^r^2= ∑_r=1^|J_j(θ)| 2 γ̂_j,k_j^r^2≤ 8 |J_j(θ)| (ŵ_j,∙)_J_j(θ)_∞^2.Therefore1/κ^2_T,γ̂(J(θ)) ≤ 512 ∑_j=1^p ( |J_j(θ)| (ŵ_j,∙)_J_j(θ)_∞^2 +|J_j(θ)| (ŵ_j,∙)_J_j(θ)_∞^2Δ_min, J_j(θ)^-1) ≤ 512 ∑_j=1^p ( 1 + 1/Δ_min, J_j(θ)) |J_j(θ)| (ŵ_j,∙)_J_j(θ)_∞^2≤ 512 |J(θ)| max_j=1, …, p(ŵ_j,∙)_J_j(θ)_∞^2.Now, we use the connection between the empirical norm and Kullback-Leibler divergence. 
Indeed, using Lemma <ref>, we getm_θ̂() - m_θ() _2/√(n)ϕκ_T,γ̂(J(θ)) κ(J(θ))≤1/√(ϕ)κ_T,γ̂(J(θ)) κ(J(θ))(1/√(n) m_θ̂()- m^0()_2 + 1/√(n) m^0() - m_θ()_2) ≤2/√(ϕ)κ_T,γ̂(J(θ))κ(J(θ)) √(C_n(ρ, L_n))(KL_n(m^0(), m_θ̂())^1/2+ KL_n(m^0(), m_θ())^1/2),where we defined C_n(ρ, L_n) = L_n ψ(-C_b (C_n + ρ))/C_b^2 ϕ (C_n + ρ)^2, so that combined with Equation (<ref>), we obtainKL_n(m^0(), m_θ̂())≤KL_n(m^0(), m_θ())+ 2/√(ϕ)κ_T,γ̂(J(θ)) κ(J(θ)) √(C_n(ρ, L_n))(KL_n(m^0(), m_θ̂())^1/2+ KL_n(m^0(), m_θ())^1/2).This inequality entails the following upper boundKL_n(m^0(), m_θ̂()) ≤ 3 KL_n(m^0(), m_θ())+ 5/ϕκ^2_T,γ̂(J(θ)) κ^2(J(θ)) C_n(ρ, L_n),since whenever we have x ≤ c + b √(x) for some x, b, c > 0, then x ≤ 2c + b^2. Introducing g(x) = x^2 / ψ(-x) = x^2 / (e^-x + 1 - x), we note that 1/C_n(ρ, L_n) = ϕ/L_n g(C_b (C_n + ρ)) ≤ϕ/L_n(C_b(C_n + ρ) + 2),since g(x) ≤ x + 2 for any x > 0. Finally, by using also (<ref>), we end up withKL_n(m^0(), m_θ̂()) ≤ 3 KL_n(m^0(), m_θ())+ 2560 (C_b(C_n + ρ) + 2)/L_n κ^2(J(θ))|J(θ)|(ŵ_j,∙)_J_j(θ)_∞^2,which is the statement provided in Theorem <ref>. The only thing remaining is to control the probability of the event ℰ_n^∁.This is given by the following: [ℰ_n^∁]≤∑_j=1^p ∑_k=2^d_j[1/n | (_∙,j T_j)_∙,k^⊤ ( - b'(m^0()))| ≥ŵ_j,k] ≤∑_j=1^p ∑_k=2^d_j[∑_i=1^n|(_∙,j T_j)_i,k(y_i - b'(m^0(x_i))) | ≥ nŵ_j,k].Let ξ_i,j,k = (_∙,jT_j)_i,k and Z_i= y_i - b'(m^0(x_i)). Note that conditionally on x_i, the random variables (Z_i) are independent. It can be easily shown (see Theorem 5.10 in <cit.>) that the moment generating function of Z (copy of Z_i) is given by[exp(tZ)] = exp(ϕ^-1{b(m^0(x) + t) - tb'(m^0(x) - b(m^0(x)))}).Applying Lemma 6.1 in <cit.>, using (<ref>) and Assumption <ref>, we can derive the following Chernoff-type bounds [ ∑_i=1^n |ξ_i,j,kZ_i| ≥ nŵ_j,k]≤ 2exp(- n^2ŵ^2_j,k/2U_nϕξ_∙,j,k_2^2),where ξ_∙,j,k = [ξ_1,j,k⋯ξ_n,j,k ]^⊤∈^n. 
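The elementary inequality invoked above — if x ≤ c + b√x for some x, b, c > 0, then x ≤ 2c + b² — admits a quick numerical sanity check. The sketch below is ours, not part of the paper; it computes the largest x satisfying x ≤ c + b√x in closed form and compares it with the bound:

```python
import math
import random

def max_x_satisfying(b, c):
    # Largest x with x <= c + b*sqrt(x): substitute y = sqrt(x) and take
    # the positive root of y^2 - b*y - c = 0.
    y = (b + math.sqrt(b * b + 4 * c)) / 2.0
    return y * y

random.seed(0)
for _ in range(1000):
    b = random.uniform(0.01, 10.0)
    c = random.uniform(0.01, 10.0)
    # the lemma's conclusion: every feasible x is at most 2c + b^2
    assert max_x_satisfying(b, c) <= 2 * c + b * b + 1e-9
```

The closed-form root x_max = ((b + √(b² + 4c))/2)² makes the analytic proof transparent: expanding and using √(b² + 4c) ≤ b + 2√c gives x_max ≤ b² + 2c.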
We have ^B_∙,jT_j = [ 1 ∑_k=2^d_j x_1,j,k^B ∑_k=3^d_j x_1,j,k^B ⋯ ∑_k=d_j-1^d_j x_1,j,k^B x_1,j,d_j^B; ⋮ ⋮ ⋮ ⋮ ⋮; 1 ∑_k=2^d_j x_n,j,k^B ∑_k=3^d_j x_n,j,k^B ⋯ ∑_k=d_j-1^d_j x_n,j,k^B x_n,j,d_j^B ],thereforeξ_∙,j,k_2^2 = ∑_i=1^n(^B_∙,jT_j)^2_∙, k =#({ i : x_i,j∈⋃_r=k^d_j I_j,r}) = nπ̂_j,k.So, using the weights ŵ_j,k given by (<ref>) together with (<ref>) and (<ref>), we obtain that the probability of ℰ_n^∁ is smaller than 2e^-A.This concludes the proof of the first part of Theorem <ref>.□§.§ Proof of Theorem <ref> First, let us note that in the least squares setting, we have R(m_θ) - R(m^0) = m_θ - m^0_n^2 for any θ∈^d where g_n^2 =1/n∑_i=1^n g(x_i)^2, and that b(y) = 1/2 y^2, ϕ = σ^2 (noise variance) in Equation (<ref>), and L_n = U_n = 1, C_b = 0. Theorem <ref> provides m_θ̂ - m^0_n^2 ≤ 3 m_θ - m_θ^0_n^2+ 5120 σ^2/κ^2(J(θ))|J(θ)|(A + log d)/nfor any θ∈^d such that n_j^⊤θ_j= 0 and J(θ) ≤ J_*. Since d_j = D for all j=1, …, p, we have d = D p and| J(θ) | = ∑_j=1^p | { k = 2, …, D : θ_j, k≠θ_j, k-1 } | ≤ (D - 1) |(θ)| θ_∞≤ D p θ_∞for any θ∈^d, where we recall that (θ) = { j=1, …, p : θ_j, ≠ 0_D}. Also, recall that I_j, 1 = I_1 = [0, 1/D] and I_j, k = I_k = (k-1/D, k/D] for k=2, …, D and j = 1, …, p. Also, we consider θ = θ^*, where θ_j, ^* is defined, for any j ∈_*, as the minimizer of∑_i=1^n ( ∑_k=1^D (θ_j, k - m_j^0(x_i, j) )I_k(x_i, j) )^2over the set of vectors θ_j, ∈^D satisfying n_j^⊤θ_j,=0, and we put θ_j, ^* =0_D for j ∉_*.It is easy to see that the solution is given byθ_j, k^* = ∑_i=1^n m_j^0(x_i, j) I_k(x_i, j)/n_j, k,where we recall that n_j, k = ∑_i=1^n I_k(x_i, j). Note in particular that the identifiability assumption ∑_i=1^n m_j^0(x_i, j) = 0 entails that n_j^⊤θ_j, ^* = 0. 
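The bin-wise average θ*_{j,k} above can be illustrated numerically: since m_j^0 is L-Lipschitz and each interval I_k has width 1/D, the average over a bin differs from the function by at most L/D at every sample in that bin. A small sketch (ours; the choice m^0(x) = sin(2πx) with L = 2π and the sample sizes are illustrative, not from the paper):

```python
import math
import random

random.seed(0)
L, D, n = 2 * math.pi, 50, 5000           # Lipschitz constant of sin(2*pi*x) on [0,1] is 2*pi
m0 = lambda x: math.sin(2 * math.pi * x)  # illustrative L-Lipschitz component

xs = [random.random() for _ in range(n)]
bins = {}
for x in xs:                              # assign each sample to its interval I_k
    k = min(int(x * D), D - 1)
    bins.setdefault(k, []).append(x)

# theta*_k = average of m0 over the samples falling in bin I_k
theta = {k: sum(m0(x) for x in v) / len(v) for k, v in bins.items()}

# the piecewise-constant fit is within L/D of m0 at every sample point
err = max(abs(theta[min(int(x * D), D - 1)] - m0(x)) for x in xs)
assert err <= L / D
```

The bound holds because within a bin the values of m0 span a range of at most L·|I_k| = L/D, and a mean never leaves that range.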
In order to control the bias term, an easy computation gives that, whenever x_i, j∈ I_k|θ_j, k^*- m_j^0(x_i, j)| ≤∑_i'=1^n | m_j^0(x_i', j) - m_j^0(x_i, j) |I_k(x_i', j)/n_j, k≤ L |I_k| = L/D,where we used the fact that m_j^0 is L-Lipschitz, so thatm_θ^* - m^0_n^2= 1/n∑_i=1^n(m_θ^*(x_i, j) - m^0(x_i, j))^2 = 1/n∑_i=1^n ( ∑_j ∈_*∑_k=1^D (θ_j, k^* - m^0(x_i, j)) I_k)^2 ≤|_*|/n∑_i=1^n ( ∑_j ∈_*∑_k=1^D (θ_j, k^* - m^0(x_i, j)) I_k)^2 ≤|_*|/n∑_i=1^n ∑_j ∈_*∑_k=1^D (θ_j, k^* - m^0(x_i, j))^2 I_k(x_i, j) ≤ |_*| ∑_j ∈_*∑_k=1^D L^2 |I_k|^2 I_k(x_i, j) ≤L^2 |_*|^2/D^2.Note that |θ_j, k^*| ≤m_j^0_n, ∞ where m_j^0_n, ∞ = max_i=1, …, n |m_j^0(x_i, j)|. This entails thatθ^*_∞≤max_j=1, …, pm_j^0_n, ∞ = M_n. So, using also (<ref>), we end up withm_θ̂ - m^0_n^2≤3 L^2 |_*|^2/D^2+ 5120 σ^2/κ^2(J(θ^*))D _* M_n (A+ log (D p M_n))/n,which concludes the proof Theorem <ref> using D = n^1/3.□ 10agresti2015foundations A. Agresti. Foundations of Linear and Generalized Linear Models. John Wiley & Sons, 2015.alaya2014 M. Z. Alaya, S. Gaïffas, and A. Guilloux. Learning the intensity of time events with change-points. Information Theory, IEEE Transactions on, 61(9):5148–5171, 2015.bach2010selfconcordance F. Bach. Self-concordant analysis for logistic regression. Electron. J. Statist., 4:384–414, 2010.bach2012optimization F. Bach, R. Jenatton, J. Mairal, and G. Obozinski. Optimization with sparsity-inducing penalties. Foundations and Trends® in Machine Learning, 4(1):1–106, 2012.2017arXiv170703003B E. Bacry, M. Bompaire, S. Gaïffas, and S. Poulsen. tick: a Python library for statistical learning, with a particular emphasis on time-dependent modeling. ArXiv e-prints, July 2017.baldi2016parameterized P. Baldi, K. Cranmer, T. Faucett, P. Sadowski, and D. Whiteson. Parameterized neural networks for high-energy physics. The European Physical Journal C, 76(5):1–7, Apr 2016.baldi2014searching P. Baldi, P. Sadowski, and D. Whiteson. 
Searching for exotic particles in high-energy physics with deep learning. Nature communications, 5, 2014.BauCom-11 H. H. Bauschke and P. L. Combettes. Convex analysis and monotone operator theory in Hilbert spaces. CMS Books in Mathematics/Ouvrages de Mathématiques de la SMC. Springer, New York, 2011.BicRitTsy-09 P. J. Bickel, Y. Ritov, and A. B. Tsybakov. Simultaneous analysis of Lasso and Dantzig selector. Ann. Statist., 37(4):1705–1732, 2009.blackard1999comparative J. A. Blackard and D. J. Dean. Comparative accuracies of artificial neural networks and discriminant analysis in predicting forest cover types from cartographic variables. Computers and electronics in agriculture, 24(3):131–151, 1999.BoyVan-04 S. Boyd and L. Vandenberghe. Convex optimization. Cambridge University Press, Cambridge, 2004.breiman2001random L. Breiman. Random forests. Mach. Learn., 45(1):5–32, 2001.BreFriOlsSto-84 L. Breiman, J. Friedman, R. Olshen, and C. Stone. Classification and Regression Trees. Wadsworth and Brooks, Monterey, CA, 1984.BuhVan-11 P. Bühlmann and S. van De Geer. Statistics for high-dimensional data. Springer Series in Statistics. Springer, Heidelberg, 2011.bunea2007oracle F. Bunea, A. Tsybakov, and M. Wegkamp. Sparsity oracle inequalities for the Lasso. Electron. J. Statist., 1:169–194, 2007.candes2008b E. J. Candès and M. B. Wakin. An Introduction To Compressive Sampling. Signal Processing Magazine, IEEE, 25(2):21–30, 2008.candes2008a E. J. Candès, M. B. Wakin, and S. P. Boyd. Enhancing sparsity by reweighted ℓ_1 minimization. Journal of Fourier Analysis and Applications, 14(5):877–905, 2008.ChlNgu1998 B. Chlebus and S. H. Nguyen. On finding optimal discretizations for two attributes. In Lech Polkowski and Andrzej Skowron, editors, Rough Sets and Current Trends in Computing, volume 1424 of Lecture Notes in Computer Science, pages 537–544. Springer Berlin Heidelberg, 1998.Cond-13 L. Condat. A Direct Algorithm for 1D Total Variation Denoising. 
IEEE Signal Processing Letters, 20(11):1054–1057, 2013.DalHeiLeb14 A. S. Dalalyan, M. Hebiri, and J. Lederer. On the prediction performance of the Lasso. Bernoulli, 23(1):552–581, 2017.Donoho02optimallysparse D. L. Donoho and M. Elad. Optimally sparse representation in general (non-orthogonal) dictionaries via ℓ_1 minimization. In PROC. NATL ACAD. SCI. USA 100 2197–202, 2002.donoho12001 D. L. Donoho and X. Huo. Uncertainty principles and ideal atomic decomposition. Information Theory, IEEE Transactions on, 47(7):2845–2862, 2001.FriHasHofTib-07 J. Friedman, T. Hastie, H. Höfling, and R. Tibshirani. Pathwise coordinate optimization. Ann. Appl. Stat., 1(2):302–332, 2007.friedman2002stochastic J. H. Friedman. Stochastic gradient boosting. Computational Statistics & Data Analysis, 38(4):367–378, 2002.GarLueSaeLopHer2013 S. Garcia, J. Luengo, J. A. Saez, V. Lopez, and F. Herrera. A survey of discretization techniques: Taxonomy and empirical analysis in supervised learning. IEEE Transactions on Knowledge and Data Engineering, 25(4):734–750, 2013.green1994 P. J. Green and B. W. Silverman. Nonparametric regression and generalized linear models: a roughness penalty approach. Chapman and Hall, London, 1994.hastie1990generalized T. Hastie and R. Tibshirani. Generalized additive models. Wiley Online Library, 1990.ESL T. Hastie, R. Tibshirani, and J. Friedman. The elements of statistical learning. Springer Series in Statistics. Springer-Verlag, New York, 2001.horowitz2006optimal Joel Horowitz, Jussi Klemelä, Enno Mammen, et al. Optimal estimation in additive regression models. Bernoulli, 12(2):271–298, 2006.ivanoff2016adaptive Stéphane Ivanoff, Franck Picard, and Vincent Rivoirard. Adaptive lasso and group-lasso for functional poisson regression. The Journal of Machine Learning Research, 17(1):1903–1948, 2016.KniFu-00 K. Knight and W. Fu. Asymptotics for Lasso-type estimators. Ann. Statist., 28(5):1356–1378, 2000.kohavi1996scaling R. Kohavi. 
Scaling up the accuracy of naive-Bayes classifiers: A decision-tree hybrid. In KDD, volume 96, pages 202–207, 1996.lehmann1998 E. L. Lehmann and G. Casella. Theory of point estimation. Springer texts in statistics. Springer, New York, 1998.Lichman:2013 M. Lichman. UCI Machine Learning Repository, 2013.LiuHussTanDas2002 H. Liu, F. Hussain, C. L. Tan, and M. Dash. Discretization: an enabling technique. Data Min. Knowl. Discov., 6(4):393–423, 2002.lugosi2004bayes G. Lugosi and N. Vayatis. On the Bayes-risk consistency of regularized boosting methods. Annals of Statistics, pages 30–55, 2004.meier2008group L. Meier, S. van De Geer, and P. Bühlmann. The group lasso for logistic regression. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 70(1):53–71, 2008.meier2009high Lukas Meier, Sara Van de Geer, Peter Bühlmann, et al. High-dimensional additive modeling. The Annals of Statistics, 37(6B):3779–3821, 2009.moro2014data S. Moro, P. Cortez, and P. Rita. A data-driven approach to predict the success of bank telemarketing. Decision Support Systems, 62:22–31, 2014.scikit-learn F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825–2830, 2011.Qui-93 J. R. Quinlan. C4.5: Programs for Machine Learning (Morgan Kaufmann Series in Machine Learning). Morgan Kaufmann, 1 edition, 1993.rapaport2008 F. Rapaport, E. Barillot, and J. P. Vert. Classification of arraycgh data using fused SVM. Bioinformatics, 24(13):i375–i382, 2008.ravikumar2009sparse Pradeep Ravikumar, John Lafferty, Han Liu, and Larry Wasserman. Sparse additive models. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 71(5):1009–1030, 2009.rigollet2012 P. Rigollet. 
Kullback Leibler aggregation and misspecified generalized linear models. Ann. Statist., 40(2):639–665, 2012.russell2013mining M. A. Russell. Mining the Social Web: Data Mining Facebook, Twitter, LinkedIn, Google+, GitHub, and More. O'Reilly Media, 2013.scholkopf2002learning B. Schölkopf and A. J. Smola. Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT press, 2002.sigillito1989classification V. G. Sigillito, S. P. Wing, L. V. Hutton, and K. B. Baker. Classification of radar returns from the ionosphere using neural networks. Johns Hopkins APL Technical Digest, 10(3):262–266, 1989.Tib-96 R. Tibshirani. Regression shrinkage and selection via the Lasso. J. Roy. Statist. Soc. Ser. B, 58(1):267–288, 1996.TibRosZhuKni-05 R. Tibshirani, M. Saunders, S. Rosset, J. Zhu, and K. Knight. Sparsity and smoothness via the fused Lasso. J. R. Stat. Soc. Ser. B Stat. Methodol., 67(1):91–108, 2005.tibshirani1996regression Robert Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society. Series B (Methodological), pages 267–288, 1996.vandegeer2008 S. van de Geer. High-dimensional generalized linear models and the Lasso. Ann. Statist., 36(2):614–645, 2008.vandegeer2013 S. van de Geer and J. Lederer. The Lasso, correlated design, and improved oracle inequalities, volume Volume 9 of Collections, pages 303–316. Institute of Mathematical Statistics, Beachwood, Ohio, USA, 2013.WuCog2012 J. Wu and S. Coggeshall. Foundations of Predictive Analytics (Chapman & Hall/CRC Data Mining and Knowledge Discovery Series). Chapman & Hall/CRC, 1st edition, 2012.yeh2009comparisons I. C. Yeh and C. H. Lien. The comparisons of data mining techniques for the predictive accuracy of probability of default of credit card clients. Expert Systems with Applications, 36(2):2473–2480, 2009.Yu-13 Y. L. Yu. On decomposing the proximal map. In C.J.C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K.Q. 
Weinberger, editors, Advances in Neural Information Processing Systems 26, pages 91–99. 2013.zhaoconsistency2006 P. Zhao and B. Yu. On model selection consistency of Lasso. J. Mach. Learn. Res., 7:2541–2563, 2006.

1. To whom correspondence should be addressed. E-mail: EZELMANO@MATH.UCSD.EDU
Author Contributions: A.A., H.A., S.K.J., E.Z. designed research, performed research and wrote the paper. The authors declare no conflict of interest.

We prove that a countable dimensional associative algebra (resp. a countable semigroup) of locally subexponential growth is M_∞-embeddable as a left ideal in a finitely generated algebra (resp. semigroup) of subexponential growth. Moreover, we provide bounds for the growth of the finitely generated algebra (resp. semigroup). The proof is based on a new construction of matrix wreath product of algebras.

Algebras and semigroups of locally subexponential growth
Adel Alahmadi[1], Hamed Alsulami[1], S. K. Jain[1],[2], Efim Zelmanov[1],[3],1
===========================================================================

§ INTRODUCTION

G. Higman, H. Neumann and B. H. Neumann <cit.> proved that every countable group embeds in a finitely generated group. The papers <cit.>, <cit.>, <cit.>, <cit.> show that some important properties can be inherited by these embeddings. In the recent remarkable paper <cit.>, L. Bartholdi and A. Erschler proved that a countable group of locally subexponential growth embeds in a finitely generated group of subexponential growth.

Following the paper <cit.>, A.
I. Malcev <cit.> showed that every countable dimensional algebra over a field embeds into a finitely generated algebra.

Let A be an associative algebra over a ground field F. Let X be a countable set. Consider the algebra M_∞(A) of X × X matrices over A having finitely many nonzero entries. Clearly, there are many ways the algebra A embeds into M_∞(A). We say that an algebra A is M_∞-embeddable in an algebra B if there exists an embedding φ: M_∞(A) → B. The algebra A is M_∞-embeddable in B as a (left, right) ideal if the image of φ is a (left, right) ideal of B. The construction of a wreath product in <cit.> implied the following refinement of the theorem of Malcev: every countable dimensional algebra is M_∞-embeddable in a finitely generated algebra as an ideal.

In this paper, we
* prove the analog of the Bartholdi-Erschler theorem for algebras: every countable dimensional associative algebra of locally subexponential growth is M_∞-embeddable in a 2-generated algebra of subexponential growth as a left ideal;
* provide estimates for the growth of the finitely generated algebra above;
* consider the case of a countable dimensional algebra of Gelfand-Kirillov dimension ≤ d and M_∞-embed it in a 2-generated algebra of Gelfand-Kirillov dimension ≤ d+2 as a left ideal;
* establish similar results for semigroups.

J. Bell, L. Small and A. Smoktunowicz <cit.> embedded an arbitrary countable dimensional algebra of Gelfand-Kirillov dimension d in a 2-generated algebra of Gelfand-Kirillov dimension ≤ d+2.

§ DEFINITIONS AND MAIN RESULTS

Let A be an associative algebra over a ground field F that is generated by a finite dimensional subspace V. Let V^n denote the span of all products v_1 ⋯ v_k, where v_i ∈ V, k ≤ n. Then V^1 ⊆ V^2 ⊆ ⋯ and ⋃_{n ≥ 1} V^n = A. The function g(V,n) = dim_F V^n is called the growth function of A.

Let ℤ and ℕ denote the set of integers and the set of positive integers, respectively.
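As a concrete illustration of the growth function (our toy example, not from the paper): take V = span(x, y). For the free algebra F⟨x,y⟩, g(V,n) = dim V^n = 2 + 4 + ⋯ + 2^n is exponential, whereas for the polynomial algebra F[x,y] it counts the commutative monomials of degree 1 through n and is quadratic in n:

```python
def free_growth(n):
    # dim V^n for F<x,y>: number of nonempty words of length <= n in two letters
    return sum(2 ** k for k in range(1, n + 1))

def poly_growth(n):
    # dim V^n for F[x,y]: commutative monomials x^a y^b with 1 <= a + b <= n
    return len({(a, k - a) for k in range(1, n + 1) for a in range(k + 1)})

print([free_growth(k) for k in range(1, 6)])  # [2, 6, 14, 30, 62] -- exponential
print([poly_growth(k) for k in range(1, 6)])  # [2, 5, 9, 14, 20] -- ~ n^2/2
```

So F⟨x,y⟩ has exponential growth, while F[x,y] has g(V,n) = C(n+2, 2) − 1 and Gelfand-Kirillov dimension 2.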
Given two functions f, g: ℕ → [1, ∞), we say that f ≼ g (f is asymptotically less than or equal to g) if there exists a constant c ∈ ℕ such that f(n) ≤ cg(cn) for all n ∈ ℕ. If f ≼ g and g ≼ f, then f and g are said to be asymptotically equivalent, i.e., f ∼ g. We say that a function f is weakly asymptotically less than or equal to g if for arbitrary α > 0 we have f ≼ gn^α (denoted f ≼_w g).

If V, W are finite dimensional generating subspaces of A, then g(V,n) ∼ g(W,n). We will denote the equivalence class of g(V,n) by g_A.

A function f: ℕ → [1, ∞) is said to be subexponential if for an arbitrary α > 0
lim_{n→∞} f(n)/e^{αn} = 0.
For a growth function f(n) of an algebra, this is equivalent to f(n) ⪹ e^n and to lim_{n→∞} f(n)^{1/n} = 1.

If a function f(n) is subexponential but n^α ⪹ f(n) for any α > 0, then f(n) is said to be intermediate. In the seminal paper <cit.>, R. I. Grigorchuk constructed the first example of a group with an intermediate growth function. Finitely generated associative algebras with intermediate growth functions are more abundant (see <cit.>).

A not necessarily finitely generated algebra A is of locally subexponential growth if every finitely generated subalgebra B of A has a subexponential growth function. We say that the growth of A is locally (weakly) bounded by a function f(n) if for an arbitrary finitely generated subalgebra of A, its growth function is ≼ f(n) (resp. ≼_w f(n)).

A function h(n) is said to be superlinear if h(n)/n → ∞ as n → ∞.

The main result of this paper is:

Let f(n) be an increasing function. Let A be a countable dimensional associative algebra whose growth is locally weakly bounded by f(n). Let h(n) be a superlinear function. Then the algebra A is M_∞-embeddable as a left ideal in a 2-generated algebra whose growth is weakly bounded by f(h(n))n^2.

We then use Theorem <ref> to derive an analog of the Bartholdi-Erschler theorem (see <cit.>).
A countable dimensional associative algebra of locally subexponential growth is M_∞-embeddable in a 2-generated algebra of subexponential growth as a left ideal. A finitely generated algebra A has polynomially bounded growth if there exists α > 0 such that g_A ≼ n^α. ThenGK(A)=inf{α>0 | g_A ≼ n^α}is called the Gelfand-Kirillov dimension of A. If the growth of A is not polynomially bounded, then we let GK(A)=∞. If the algebra A is not finitely generated then the Gelfand-Kirillov dimension of A is defined as the supremum of Gelfand-Kirillov dimensions of all finitely generated subalgebras of A.J. Bell, L. Small, A. Smoktunowicz <cit.> proved that every countable dimensional algebra of Gelfand-Kirillov dimension ≤ n is embeddable in a 2-generated algebra of Gelfand-Kirillov dimension ≤ n+2.We use Theorem <ref> to prove Every countable dimensional algebra of Gelfand-Kirillov dimension ≤ n is M_∞-embeddable in a 2-generated algebra of Gelfand-Kirillov dimension ≤ n+2 as a left ideal. The proof of Theorem <ref> is based on a new construction of the matrix wreath product A ≀ F[t^-1,t]. We view it as an analog of the wreath product of a group G with an infinite cyclic groupthat played an essential role in the Bartholdi-Erschler proof <cit.>.The construction is similar to that of <cit.>, though not quite the same.Analogs of Theorems <ref>, <ref>, <ref> are true also for semigroups. Recall that T. Evans <cit.> proved that every countable semigroup is embeddable in a finite 2-generated semigroup.We will formulate analogs of Theorems <ref>, <ref>, <ref> for semigroups for the sake of completeness, omitting some definitions that are similar to those for algebras.Let P be a semigroup. Consider the Rees type semigroupM_∞(P)=⋃_i,j∈e_ij(P),e_ij(a)e_kq(b)=δ_jke_iq(ab); a,b∈ P. We say that a semigroup P is M_∞-embeddable in a semigroup S if there is an embedding φ:M_∞(P)→ S. 
We say that P is M_∞-embeddable in S as a (left) ideal if φ(M_∞(P)) is a (left) ideal of S.

Theorem1 Let f(n) be an increasing function. Let P be a countable semigroup whose growth is locally weakly bounded by f(n). Let h(n) be a superlinear function. Then the semigroup P is M_∞-embeddable as a left ideal in a finitely generated semigroup whose growth is weakly bounded by f(h(n))n^2.

Theorem2 A countable semigroup of locally subexponential growth is M_∞-embeddable in a finitely generated semigroup of subexponential growth as a left ideal.

Theorem3 Every countable semigroup of Gelfand-Kirillov dimension ≤ d is M_∞-embeddable in a finitely generated semigroup of Gelfand-Kirillov dimension ≤ d+2 as a left ideal.

§ MATRIX WREATH PRODUCTS

As above, let ℤ be the ring of integers. For an associative F-algebra A, consider the algebra M̄_∞(A) of infinite ℤ × ℤ matrices over A having finitely many nonzero entries in each column. The subalgebra of M̄_∞(A) that consists of matrices having finitely many nonzero entries is denoted as M_∞(A). Clearly, M_∞(A) is a left ideal of M̄_∞(A).

For an element a ∈ A and integers i, j ∈ ℤ, let e_ij(a) denote the matrix having a in the position (i,j) and zeros everywhere else. For a matrix X ∈ M̄_∞(A), the entry at the position (i,j) is denoted as X_i,j.

The vector space M̄_∞(A) is a bimodule over the algebra F[t^-1,t] via the operations: if X ∈ M̄_∞(A) then (t^k X)_i,j = X_i-k,j for all i, j, k ∈ ℤ. In other words, left multiplication by t^k moves all rows of X up by k steps. Similarly, (X t^k)_i,j = X_i,j+k, so multiplication by t^k on the right moves all columns of X left by k steps.

Consider the semidirect sum
A ≀ F[t^-1,t] = F[t^-1,t] + M̄_∞(A)
and its subalgebra
A ≀̄ F[t^-1,t] = F[t^-1,t] + M_∞(A).
These algebras are analogs of the unrestricted and restricted wreath products of groups with ℤ.

Let A be a countable dimensional algebra with 1. We say that a matrix X ∈ M̄_∞(A) is a generating matrix if the entries of X generate A as an algebra.

Let X ∈ M̄_∞(A) be a generating matrix.
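The matrix-unit rule e_ij(a)e_kq(b) = δ_jk e_iq(ab) and the row-shift action of t can be sketched with sparse dictionaries over a toy base ring (our illustration, with integer entries standing in for elements of A):

```python
# Sparse Z x Z matrices with finitely many nonzero entries, stored as {(i, j): value}.

def e(i, j, a):
    # the matrix unit e_ij(a)
    return {(i, j): a} if a else {}

def mat_mul(X, Y):
    # (XY)_{i,q} = sum_j X_{i,j} Y_{j,q}; finite supports make the sum finite
    Z = {}
    for (i, j), a in X.items():
        for (k, q), b in Y.items():
            if j == k:
                Z[(i, q)] = Z.get((i, q), 0) + a * b
    return {p: v for p, v in Z.items() if v}

def t_shift(X, k):
    # left multiplication by t^k: (t^k X)_{i,j} = X_{i-k,j}
    return {(i + k, j): a for (i, j), a in X.items()}

# Rees-type rule: e_ij(a) e_kq(b) = delta_{jk} e_iq(ab)
assert mat_mul(e(0, 1, 2), e(1, 2, 3)) == {(0, 2): 6}
assert mat_mul(e(0, 1, 2), e(0, 2, 3)) == {}
assert t_shift(e(0, 1, 5), 3) == {(3, 1): 5}
```

The identity e_ij(1) = t^i e_00(1) t^{-j} used in the lemma below amounts to composing two such shifts with the single matrix unit e_00(1).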
Consider the subalgebra of A ≀ F[t^-1,t] generated by t^-1,t,e_00(1),X,S=⟨ t^-1, t, e_00(1), X ⟩.The algebra M_∞(A) is a left ideal of S. Suppose that entries X_i_1,j_1, X_i_2,j_2,⋯ generate A. We have e_ij(1)=t^ie_00(1)t^-j ande_00(X_i_k,j_k)=e_0i_k(1)Xe_j_k0(1)=e_00(1)t^-i_kXt^j_ke_00(1)∈ S.This implies that e_00(A) ⊆ S and therefore e_ij(A)=t^ie_00(A)t^-j⊆ S. We proved that M_∞(A)=∑_i,j∈e_ij(A)⊆ S.Since M_∞(A) is a left ideal in the algebra A ≀ F[t^-1,t] the assertion of the lemma follows. For a fixed n∈ by n^th diagonal we mean all integers pairs (i,j) such that i-j=n. If a generating matrix X has finitely many nonzero diagonals, then M_∞(A) is a two-sided ideal in S. If a matrix X has finitely many nonzero diagonals, then M_∞(A)X ⊆ M_∞(A), which implies the claim. We say that a sequence c=(a_1, a_2, a_3, ⋯) of elements of the algebra A is a generating sequence if the elements a_1, a_2, ⋯ generate A.For the sequence c, consider the matrix c_0,N=∑_j=1^∞e_0j(a_j)∈M_∞(A). This matrix has elements a_j at the positions (0,j), j≥ 1, and zeros everywhere else.Consider the subalgebraA^(c)=⟨ t,t^-1,e_00(1), c_0,N⟩of the matrix wreath product A ≀ F[t^-1,t]. As shown in Lemma <ref>, the countable dimensional algebra A is M_∞-embeddable in the finitely generated algebra A^(c) as a left ideal.When speaking about algebras A^(c) we always consider the generating subspace V=span(t,t^-1,e_00(1),c_0,N) and denote g(V,n)=g(n).For a generating sequence c=(a_1,a_2,⋯), let W_n be the subspace of A spanned by all products a_i_1⋯ a_i_r such that i_1+⋯+i_r≤ n.DenoteM_[-n,n]×[-n,n](W_n)=∑_-n≤ i,j≤ n e_ij(W_n), M_[-n,n]× 0(W_n)=∑_i=-n^n e_i0(W_n).* e_00(W_n)⊆ V^2n+1;*[V^n⊆ M_[-n,n]×[-n,n](W_n);+∑_i≥ 1, -n≤ j≤ n,i+|j|≤ nM_[-n,n]×0(W_i)c_0Nt^j+∑_j=-n^nFt^j. ]If i_1+⋯+i_r≤ n, then e_00(a_i_1⋯ a_i_r)=c_0Nt^i_1c_0Nt^i_re_00(1) ∈ V^2n+1, which proves part (1).Let us start the proof of part (2) with the inclusione_00(1)V^ne_00(1)⊆ e_00(W_n). 
Let w be the product of length ≤ n in t^-1,t,e_00(1),c_0N. If w does not involve c_0N, then we_00(1)∈∑_i=-n^nFe_i0(1) and therefore e_00(1)we_00(1)∈ Fe_00(1).Suppose now that w involves c_0N, w=w'c_0Nw”, the subproduct w” does not involve c_0N. We have c_0N=e_00(1)c_0N. Hence e_00(1)we_00(1)=e_00(1)w'e_00(1)c_0Nw”e_00(1). Let d_1 be the length of the product w', and let d_2 be the length of the product w” with d_1+d_2 ≤ n-1. By the induction assumption on the length of the product, we have e_00(1)w'e_00(1) ∈ e_00(w_d_1). As we have mentioned above w”e_00(1) ∈∑_i=-d_2^d_2Fe_i0(1). It is straightforward thatc_0N(∑_i=-d_2^d_2Fe_i0(1))∈ e_00(∑_i=1^d_2Fa_i)⊆ e_00(W_d_2).Now, e_00(1)we_00(1)∈ e_00(W_d_1)e_00(W_d_2)⊆ e_00(W_n), which proves the claimed inclusion.Let us denote the right hand side of the inclusion of Lemma <ref> (2) as RHS (n). We claim that (Ft+Ft^-1+Fe_00(1))RHS(n-1)⊆ RHS(n) and RHS(n-1)(Ft+Ft^-1+Fe_00(1)) ⊆ RHS(n).Let us check, for example, that M_[-n+1,n-1]× 0(W_i)c_0,Nt^je_00(1)⊆ RHS(n) provided that i+|j|≤ n-1. Indeed, t^je_00(1)=e_j0(1),c_0Ne_j0(1)= 0if j≤0,e_00(a_j)if j≥ 1. Now,M_[-n+1, n-1]× 0(W_i)e_00(a_j)⊆ M_[-n+1,n-1]× 0(W_ia_j)⊆ M_[-n,n]× 0(W_n-1).Hence, to check that a product of length ≤ n in t^-1,t,e_00(1),c_0N lies in RHS(n), we may assume that the product starts and ends with c_0N. Now,c_0NV^n-2c_0N=e_00(1)c_0NV^n-2c_00(1)c_0N⊆ e_00(1)V^n-1e_00(1)c_0N ⊆ e_00(W_n-1)c_0N⊆ RHS(n),which completes the proof of the lemma. Denote w(n)=_F W_n. w(n)≤ g(2n+1), g(n)≤ 2(2n+1)^2w(n)+2n+1. § GROWTH OF THE ALGEBRAS A^(C) Now we are ready to prove Theorem <ref>. Let f(n) be an increasing function, i.e., f(n)≤ f(n+1) for all n and f(n) →∞ as n→∞. Let A be a countable dimensional algebra whose growth is locally weakly bounded by f(n). Let h(n) be a superlinear function.Let elements b_1, b_2, ⋯ generate the algebra A. Choose a sequence ϵ_k>0 such that lim_k→∞ϵ_k=0. Denote V_k=span_F(b_1, ⋯, b_k). 
By the assumption, there exist constants c_k≥ 1, k≥ 1, such thatdim_FV_k^n≤ c_kf(c_kn)(c_kn)^ϵ_kfor all n≥ 1.Increasing ϵ_k and c_k we can assume thatdim_FV_k^n ≤ f(c_kn)n^ϵ_k Indeed, choose a sequence ϵ_k', k≥ 1, such that 0<ϵ_k<ϵ_k', lim_k→∞ϵ_k'=0.There exists μ_k≥1 such thatn^ϵ_k'-ϵ_k>c_k^ϵ_k+1for all n>μ_k.The function f(n) is an increasing function. Hence, there exists c_k' such thatf(c_k')≥ c_kf(c_ki)(c_ki)^ϵ_k,i=1,⋯, μ_k.Now we havedim_FV_k^n≤ f(c_k'n)n^ϵ_k'for all n≥ 1.From now on, we will assume (<ref>) for arbitrary k≥1, n≥ 1.Choose an increasing sequence n_1<n_2<⋯ such that c_kn≤ h(n) for all n≥ n_k.Define a generating sequence c=(a_1, a_2, ⋯) as follows: a_i=b_k if i=n_k; a_i=0 if i does not belong to the sequence n_1, n_2, ⋯.We will show that the growth function of A^(c) is weakly bounded by f(h(n))n^2. Choose α>0.For an integer n≥ n_1, fix k such that n_k≤ n < n_k+1. ThenW_n=span(a_i_1⋯ a_i_r|i_1+⋯+i_r≤ n; a_i_1,⋯, a_i_r∈{b_1, ⋯, b_k})⊆ V_k^nHence, w(n)≤ f(c_kn)n^ϵ_k. From n_k≤ n it follows that c_kn≤ h(n). If n is sufficiently large, then we also have ϵ_k<α. Thenw(n)≤ f(c_kn)n^ϵ_k≤ f(h(n))n^α.By Lemma <ref> (<ref>) we have g(n)≤ w(n)n^2. Therefore g(n) ≤ f(h(n))n^α+2.We have M_∞-embedded the algebra A as a left ideal in a finitely generated algebra B=A^(c) of growth ≤ f(h(n))n^α+2.V. Markov <cit.> showed that for a sufficiently large n, the matrix algebra M_n(B) is 2-generated. Clearly, M_n(B) has the same growth as B. Since M_n(M_∞(A)) ≅ M_∞(A), it follows that the algebra A is M_∞-embedded in M_n(B) as a left ideal. This completes the proof of Theorem <ref>.In order to prove Theorem <ref>, we will need two elementary lemmas. Let g_k(n), k≥ 1, be an increasing sequence of subexponential functions g_k:N→ N, g_k(n)≤ g_k+1(n) for all k,n. Then there exists a subexponential function f:N→ N and a sequence 1≤ n_1 < n_2 < ⋯, such that g_k(n)≤ f(n) for all n≥ n_k. Choose k≥ 1. 
From lim_n→∞g_k(n)/e^n/k=0, it follows that there exists n_k such that g_k(n)/e^n/k≤ 1/k for all n≥ n_k. Without loss of generality, we will assume that n_1<n_2<⋯. For an integer n≥ n_1, let n_k≤ n<n_k+1. Define f(n)=g_k(n). We claim that f(n) is a subexponential function. Indeed, let s≥ 1. Our aim is to show that lim_n→∞f(n)/e^n/s=0. Let n≥ n_s. Let k be the maximal integer such that n_k≤ n, so n_k≤ n<n_k+1 and s≤ k. We have f(n)/e^n/s=g_k(n)/e^n/s≤ g_k(n)/e^n/k≤ 1/k. This implies lim_n→∞f(n)/e^n/s=0 as claimed. Choose ℓ≥ 1. For all n≥ n_ℓ, we have n_k≤ n <n_k+1, where ℓ≤ k. Hence, g_ℓ(n)≤ g_k(n)=f(n). This completes the proof of the lemma. Let f(n) be a subexponential function. Then there exists a superlinear function h(n) such that f(h(n)) is still subexponential. For an arbitrary k≥ 1, we have lim_n→∞f(kn)/e^n/k=0. Hence there exists an increasing sequence n_1<n_2<⋯ such that f(kn)<(1/k)e^n/k for all n≥ n_k. For an arbitrary n≥ n_1, choose k≥ 1 such that n_k≤ n < n_k+1. Let μ(n)=k. Then h(n)=nμ(n) is a superlinear function since μ(n)≤μ(n+1) and μ(n)→∞ as n→∞. Choose α>0. For a sufficiently large n, we have k=μ(n)>1/α. Then f(nμ(n))=f(kn)<(1/k)e^n/k<(1/k)e^α n. Hence lim_n→∞f(h(n))/e^α n=0, which completes the proof of the lemma.
Then the growth of A is weakly asymptotically bounded by n^d. The function h(n)=nln n is superlinear. By Theorem <ref>, the algebra A is M_∞-embeddable as a left ideal in a 2-generated algebra B whose growth is weakly asymptotically bounded by (nln n)^dn^2; in other words, the growth of B is asymptotically bounded by n^d+2+α(ln n)^d for any α>0. This implies GK B≤ d+2 and completes the proof of Theorem <ref>. Now let us discuss the analogous theorems for semigroups: Theorems <ref>, <ref>, <ref>. Let P be a semigroup with 1. Let F be an arbitrary field. Consider the semigroup algebra F[P] ≀ F[t^-1,t]. Let c=(a_1, a_2, ⋯) be a sequence of elements a_i∈ P ∪{0} that generate the semigroup P∪{0}. Consider the algebra F[P]^(c) and the semigroup P^(c) generated by t,t^-1,e_11(1), c_0,N. Arguing as in the proof of Lemma <ref>, we see that M_∞(P) is a left ideal of the semigroup P^(c). Starting with an arbitrary generating sequence b_1, b_2, ⋯ of the semigroup P and diluting it with zeros as in the proof of Theorem <ref>, we get a generating sequence c=(a_1, a_2, ⋯) of the semigroup P∪{0} such that the semigroup P^(c) has the needed growth properties. The proofs follow from the proofs of Theorems <ref>, <ref>, <ref>. § ACKNOWLEDGEMENT The project of the first two authors was funded by the Deanship of Scientific Research (DSR), King Abdulaziz University. The authors, therefore, acknowledge the technical and financial support of KAU. The fourth author gratefully acknowledges the support of the NSF. [1]Department of Mathematics, King Abdulaziz University, Jeddah, SA. E-mail addresses: ANALAHMADI@KAU.EDU.SA; HHAALSALMI@KAU.EDU.SA. [2]Department of Mathematics, Ohio University, Athens, USA. E-mail address: JAIN@OHIO.EDU. [3]Department of Mathematics, University of California, San Diego, USA. E-mail address: EZELMANO@MATH.UCSD.EDU
arXiv:1703.08733 [math.RA], "Algebras and semigroups of locally subexponential growth", Adel Alahmadi, Hamed Alsulami, S. K. Jain, Efim Zelmanov (submitted 25 March 2017).
Emergence of world-stock-market network
=======================================

§ INTRODUCTION There are different scales at which one can look at the economic world: the global scale, the country scale, etc. What is observed at the country scale is that the correlation among the economic institutions of a country causes the emergence of that country's economy. This statement indicates the common feature of all financial structures: some correlated financial units, at one scale, construct a financial structure at a larger scale. There is ample evidence, such as the influence of one country's recession on other countries, demonstrating correlation among the economies of different countries. Thus, considering each country as a financial unit, we expect to have a financial structure at the global scale whose constituents are the stock markets of different countries. We call such an abstract structure the "world stock market". Here, two questions arise: (i) how can the existence of this global market be ascertained, and (ii) what are its communities? The first question can be addressed through the main feature of every stock market: the emergence of collective behavior. Thus, if there exists a world stock market, one should be able to show the collective behavior of that market's constituents. In the econophysics literature, the common approach to studying collective behavior is to analyze the cross-correlation matrix C of stock returns using random matrix theory <cit.>. Since RMT describes a fully random system, any deviation from it contains information about the collective behavior among the market's constituents, see e.g., <cit.>. Here, we propose a method based on RMT for measuring the collective behavior. In order to make random matrices, we shuffle the off-diagonal elements of C.
This procedure erases the existing pattern of correlation among the market's constituents, and hence removes the collective behavior. Therefore, we expect to obtain valuable information about the collective behavior in a market by comparing statistical characteristics of C with those of the shuffled C. Among all characteristics, we use the participation ratio - a tool for estimating the number of significant participants in an eigenvector of a matrix <cit.> - and develop two new quantities, called the relative participation ratio (RPR) and the node participation ratio (NPR), which will be described in the method section. The first quantity measures the degree of collective behavior in a market and the second one determines the share of each market's constituent in the measured collective behavior. The RPR can be used for ranking different markets based on the degree of their collective behavior. The NPR determines how much a market's component behaves independently of the collective behavior of the whole market. It can be used for ranking the elements of a market according to their independence level. We apply the proposed method to the indices of forty influential markets in the world economy, from January 2000 to October 2015, in search of a global financial structure. The results demonstrate the existence of such a structure. In order to support our finding, we show the similarity between the world stock market and four of its markets, including two developed and two emerging markets. One of the common characteristics of both the world stock market and these four markets is the presence of some constituents evolving almost independently of the others. In order to address the second question and to get a better perspective on the correlation effect in the world stock market, we use dendrogram analysis.
The results show three main communities along with some isolated stock markets, which are less affected by a crisis and more affected by a boom. § RESULTS AND DISCUSSION Here, the proposed method of this study is first applied to the forty stock market indices to trace a global economic structure. These markets, together with their corresponding countries, are listed in Tab. <ref>. The markets are chosen based on Gross Domestic Product (GDP) and geographical considerations. We then apply our method to four stock markets, indicated by an asterisk in Tab. <ref>, to observe a similar structure but at a lower scale. §.§ Forty markets as a world stock market The common approach to studying the collective behavior of a market is based on RMT results and the deviation of market results from them. Recently, two of the authors introduced another criterion based on fractional Gaussian noises <cit.>. The eigenvalue distribution of markets differs from RMT's distribution; there are some eigenvalues outside the RMT bulk region. These deviating eigenvalues contain useful information about the collective behavior. To be more precise, it was shown that large eigenvalues reflect the markets' trend and the largest eigenvalue indicates the largest collective mode in markets, see e.g., <cit.>. In the following, we study collective behavior in an abstract market named the "world stock market" by taking a different path using the proposed method of this paper. Assuming that there exists a world stock market whose constituents are the forty markets, one can construct the corresponding cross-correlation matrix C and its shuffled counterpart C_sh. For this purpose, we use the index data of these forty markets in the period January 2000 to October 2015 <cit.>. After diagonalizing C and C_sh, the participation ratios PR_k are obtained using Eq. (<ref>). Figure <ref> shows the PRs of the world stock market and its shuffled version.
As seen from this figure, the PRs of the shuffled matrix are on average greater than those of the market, yielding the relative participation ratio δ≈ 0.5. This number represents the degree of collective behavior among the forty markets. In the next subsection, we will do the same calculation for four stock markets. Having shown that there is collective behavior among these 40 markets, we find the share of each market in that behavior using the NPR parameter, Eq. (<ref>). Figure <ref> depicts the NPR for the world stock market, in which the 40 markets are sorted in ascending order of their NPR values. The markets on the left side are more independent of the trend of the world stock market, while the ones on the right side are more dependent on the market's collective behavior. This figure also shows the effect of shuffling on the NPR. The notable result here is that the markets with low NPRs, located on the left side of Fig. <ref>, reduce the risk of a world portfolio: they have a higher independence level than the other markets, and hence at the time of crashes they will be less affected by the world trend. Figure <ref> is the dendrogram of the cross-correlation matrix of the world stock market after the 2008 financial crash. This dendrogram shows three communities in the world market of 40 indices, colored red, green and blue according to their correlation distances. The clustering requires at least thirty percent correlation between the markets. Looking at the components of these communities illustrates the effect of geographical relations between them. The red, blue and green communities mostly consist of markets located in East Asia, Europe and the Americas, respectively. The black-colored markets are those with less than thirty percent correlation. These markets, which are the ones with the lowest NPRs, belong to Asian countries, namely China, Iran, Pakistan, Qatar, Saudi Arabia, and Sri Lanka.
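For illustration only (this sketch is ours, not the authors' code), the community extraction just described can be reproduced with standard hierarchical clustering. We assume the simple dissimilarity d_ij = 1 - C_ij and average linkage, and cut the tree so that markets inside a community are mutually correlated by at least the chosen threshold; the function name and these choices are our assumptions:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def correlation_communities(C, min_corr=0.3):
    """Cluster markets from a cross-correlation matrix C.

    Uses d_ij = 1 - C_ij as the dissimilarity (other choices, e.g.
    sqrt(2*(1 - C_ij)), behave similarly) and average linkage.
    """
    D = 1.0 - C
    np.fill_diagonal(D, 0.0)
    Z = linkage(squareform(D, checks=False), method='average')
    # cut the tree at distance 1 - min_corr: pairs inside a community
    # are then correlated by at least roughly min_corr
    return fcluster(Z, t=1.0 - min_corr, criterion='distance')
```

Markets whose correlation with every community stays below the threshold, like the isolated Asian markets above, come out as singleton clusters.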
Figure <ref> is the cross-correlation matrix of the world stock market with its rows and columns rearranged according to the dendrogram pattern of Fig. <ref>. The color of each square cell represents the value of the cross-correlation between the two markets. Three communities, around the secondary diagonal of the matrix, can be clearly observed. §.§ Four markets Here, we apply the proposed method to the indices of four stock markets, including the Standard & Poor's 500 (USA) and the Financial Times Stock Exchange 100 (United Kingdom) as developed markets and the Shanghai Stock Exchange 180 (China) and the Tehran Stock Exchange (Iran) as emerging markets. The data for these markets cover the same period as for the 40 markets. Figure <ref> shows the relative participation ratio, δ, for these markets. Note that, in order to have a correct comparison between different markets, before using Eq. <ref>, the PRs of these markets are normalized by the markets' sizes. Since δ represents the degree of collective behavior in a market, Fig. <ref> shows that the companies of the S&P 500 have the highest degree of collective behavior among the four markets. This can be interpreted as indicating that a strong collective atmosphere exists in the S&P 500. Fig. <ref> shows the normalized NPR of each market in a sorted fashion, as in Fig. <ref>. This figure gives the share of each company in the collective behavior of its market. The other notable point is that the degree of collective behavior, δ, does not depend on the type of market: for instance, although SSE 180 and TSE are emerging markets, they have a greater δ than the developed market FTSE 100. The green solid line in Fig. <ref> represents the effect of shuffling on the S&P 500 NPR, which is exactly similar to what is observed in Fig. <ref>. In order to identify the contribution of companies to the collective behavior more clearly, we also compute the probability density function (PDF) of node independency, which is the inverse of the NPR.
Figures <ref>-<ref> are the PDFs of node independency and, interestingly, illustrate fat-tail behavior. This means that there are very few companies in each of these markets that operate almost independently and have a small impact on the collective behavior, while most companies contribute to the collective behavior remarkably. § SUMMARY AND CONCLUSION Historically, stock markets emerged from central sovereign states and territories. However, in the age of globalization, stock markets have been strongly affected by communications, so that the futures of all countries are tied together. In this work, we have studied the network of forty influential markets from different countries to address the question of whether globalization results in the emergence of a world stock market. Since every financial system consists of many units with collective behavior, we expect to observe such behavior for the world stock market, whose units are these forty markets. In order to test this expectation, a method has been introduced for measuring collective behavior in a market. This method is based on the concept of the participation ratio. We have shown that the forty markets possess collective behavior and that their shares in this collective behavior are not the same. The communities of the forty markets have also been extracted using the dendrogram technique; the result shows three main communities plus some isolated markets belonging to Asian countries. These markets have the lowest shares in the global collective behavior or, in other words, the highest level of independency from the global trend. The three communities, on the other hand, participate more in the global collective behavior. Moreover, each of these communities includes markets belonging to countries with geographical proximity.
Overall, the results of this study illustrate the collective behavior among the forty markets and therefore prove the existence of a world stock market. § METHOD Here, we present a method based on random matrix theory. Historically, this theory traces back to the work of Wigner in nuclear physics, where the precise nature of the interactions between the components of atomic nuclei is not known <cit.>. From the viewpoint of having unknown underlying interactions, financial systems are very similar to atomic nuclei. Laloux et al. <cit.> demonstrated that RMT could be a suitable candidate for studying financial correlation matrices; then, Plerou et al. <cit.> extracted statistical properties of cross-correlations in financial data using RMT. In order to construct the cross-correlation matrix C, the price return of the ith stock is first calculated as R_i(t) = ln P_i(t+Δ t) - ln P_i(t), where i=1,…,N, Δ t is the time scale, and P_i(t) indicates the price of the ith stock. Since the returns of stocks have different variances, it is suitable to work with the normalized price return r_i(t), instead of R_i(t), which is defined as r_i = (R_i(t) - ⟨R_i⟩_t)/σ_i, where σ_i = √(⟨ R_i^2 ⟩_t - ⟨ R_i ⟩_t^2) is the standard deviation of the return R_i(t), and ⟨⋯⟩_t indicates the time average over the period of study. The equal-time cross-correlation matrix C is then constructed with the elements C_ij given by C_ij = ⟨r_i(t) r_j(t)⟩_t. From Eqs. (<ref>) and (<ref>) it is readily seen that C is a symmetric matrix with unit diagonal elements and off-diagonal elements in [-1,1].
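As a purely illustrative aside (our own sketch, with hypothetical function names), the construction of C from raw prices, together with the randomly shuffled counterpart mentioned in the Introduction, takes only a few lines of numpy:

```python
import numpy as np

def cross_correlation(prices):
    """Equal-time cross-correlation matrix C from a (T+1) x N price array,
    following the text: R_i(t) = ln P_i(t+dt) - ln P_i(t),
    r_i = (R_i - <R_i>)/sigma_i, and C_ij = <r_i(t) r_j(t)>_t."""
    R = np.diff(np.log(prices), axis=0)       # log-returns, shape (T, N)
    r = (R - R.mean(axis=0)) / R.std(axis=0)  # normalized returns
    return (r.T @ r) / R.shape[0]

def shuffle_off_diagonal(C, seed=0):
    """Shuffled counterpart: randomly permute the off-diagonal entries of C,
    keeping the matrix symmetric with unit diagonal."""
    C_sh = C.copy()
    iu = np.triu_indices_from(C, k=1)
    vals = C[iu].copy()
    np.random.default_rng(seed).shuffle(vals)
    C_sh[iu] = vals
    C_sh.T[iu] = vals  # mirror the shuffled values into the lower triangle
    return C_sh
```

Shuffling only the upper triangle and mirroring it preserves symmetry and the multiset of correlation values, which is exactly what the method requires.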
In the following subsections, we present two quantities for the measurement of collective behavior among markets based on the cross-correlation matrix; but before that, we introduce another matrix, named the "shuffled cross-correlation matrix", which is the counterpart of C, and state its potential application in this context. The cross-correlation matrix C can be diagonal, which means that there is no interaction or correlation between the markets, or non-diagonal, which, on the contrary, means that there is correlation between markets. The existence of correlation is a necessary, but not sufficient, condition for the emergence of collective behavior among markets. This statement can be justified by noting that we do not expect to observe collective behavior in a stock market whose constituents are correlated with each other in a completely random fashion. Thus, besides having correlation between markets, a sort of pattern or structure for that correlation is needed. In other words, collective behavior emerges when there exists a structure for the market in addition to the correlation among the market's constituents. Now a question arises: how can such a structure, or its effect, be made visible? To answer this question, we randomly shuffle the off-diagonal elements of C. The new matrix obtained in this way is called the shuffled cross-correlation matrix and denoted by C_sh. Note that randomly shuffling the off-diagonal elements destroys any specific pattern of correlation without annihilating the correlations themselves. Briefly, two matrices can be assigned to each market: the cross-correlation matrix C, containing both the correlation values and the structure, and the shuffled cross-correlation matrix C_sh, containing only the correlation values. §.§ Relative participation ratio In order to quantify the degree of collective behavior in a market, we introduce a quantity based on the concept of the participation ratio (PR), which was first defined by Bell and Dean <cit.> in the context of atomic physics.
Diagonalizing C_N×N gives us a set of eigenvectors {u_k} and eigenvalues {λ_k}. Note that an eigenvalue represents a collective mode of the market and its corresponding eigenvector contains the shares of the market's components in that collective mode. For the kth eigenvector, the participation ratio is defined as follows: PR_k ≡ (∑_l=1^N [u_k(l)]^4)^-1, where u_k(l), l=1,…,N, are the components of u_k. The participation ratio PR_k is bounded from below by unity, for the case of u_k with only one non-zero component, and from above by N, for the case of u_k with identical components u_k(l) = N^-1/2. This gives the natural meaning of the PR as a measure of the number of significant components in an eigenvector. Since the PRs of a market depend on its size N, a correct comparison between the PRs of various markets of different sizes can be obtained only when the PRs are made size independent. For this purpose, we normalize the PRs, Eq. (<ref>), by the size of the market so that the maximal bound of PR_k becomes unity. In line with the motivation for constructing C_sh, we now define a new parameter, named the relative participation ratio (RPR), as follows: δ = (⟨PR_sh⟩ - ⟨PR⟩)/⟨PR_sh⟩, where ⟨PR_sh⟩ and ⟨PR⟩ represent the averages of the PRs over all eigenvectors of C_sh and C, respectively. Since the parameter δ quantifies the deviation of the participation ratios of the cross-correlation matrix from those of its shuffled counterpart in an average sense, it gives us the degree of the collective behavior pattern in a market. When there is weak collective behavior in the market, random shuffling has a small effect on C, i.e., ⟨PR⟩≈⟨PR_sh⟩, and hence δ is near zero. On the other hand, when a strong pattern of collective behavior is present, random shuffling has a considerable effect on C, and consequently we have a large δ. §.§ Node participation ratio Having quantified the collective behavior in a market, a question may arise: how can one specify the contribution of each market's constituent to the measured collective behavior?
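The participation ratios and δ defined in the previous subsection reduce to a few lines of numpy; the sketch below is our own illustration (function names are our assumptions), with eigenvectors taken as the columns returned by the symmetric eigensolver:

```python
import numpy as np

def participation_ratios(C):
    """Size-normalized participation ratio PR_k / N for each eigenvector of C,
    so that the maximal possible value is 1."""
    _, U = np.linalg.eigh(C)               # columns U[:, k] are eigenvectors u_k
    N = C.shape[0]
    return 1.0 / (N * (U ** 4).sum(axis=0))

def relative_participation_ratio(C, C_sh):
    """RPR: delta = (<PR_sh> - <PR>) / <PR_sh>."""
    pr_sh = participation_ratios(C_sh).mean()
    return (pr_sh - participation_ratios(C).mean()) / pr_sh
```

For fully localized eigenvectors (e.g. the identity matrix) every normalized PR equals 1/N, while the uniform mode of a strongly correlated market reaches the maximal value 1.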
To address this question, we introduce a new quantity, named the node participation ratio (NPR), as follows: NPR_l ≡ (∑_k=1^N [u_k(l)]^4)^-1. Notice that the summation is taken over the index "k", i.e., over the lth row of the eigenvectors. Since the eigenvector u_k includes the shares of the market's components in the collective mode related to the eigenvalue λ_k, the NPR_l determines the share of the lth component in the total collective behavior. In the language of stock markets, this quantity can also be interpreted as follows: a company with a lower NPR evolves more independently than a company with a higher NPR. As a result, NPR_l^-1 gives a measure of the independence of the lth company from the other companies. § ACKNOWLEDGMENTS GRJ and MS gratefully acknowledge support from the Cognitive Science and Technologies Council, grant No. 2694.
arXiv:1703.08781 [q-fin.ST], "Emergence of world-stock-market network", M. Saeedian, T. Jamali, M. Z. Kamali, H. Bayani, T. Yasseri, G. R. Jafari (submitted 26 March 2017).
In this paper, we develop a new first-order method for composite non-convex minimization problems with simple constraints and inexact oracle. The objective function is given as a sum of a "hard", possibly non-convex, part and a "simple" convex part. Informally speaking, oracle inexactness means that, for the "hard" part, at any point we can approximately calculate the value of the function and construct a quadratic function which approximately bounds this function from above. We give several examples of such inexactness: smooth non-convex functions with inexact Hölder-continuous gradient, and functions given by an auxiliary uniformly concave maximization problem, which can be solved only approximately. For the introduced class of problems, we propose a gradient-type method which allows the use of different proximal setups to adapt to the geometry of the feasible set, adaptively chooses the controlled oracle error, and allows for inexact proximal mapping. We provide a convergence rate for our method in terms of the norm of the generalized gradient mapping and show that, in the case of an inexact Hölder-continuous gradient, our method is universal with respect to the Hölder parameters of the problem.
Finally, in a particular case, we show that a small value of the norm of the generalized gradient mapping at a point means that a necessary condition of local minimum approximately holds at that point. Keywords: nonconvex optimization, composite optimization, inexact oracle, Hölder-continuous gradient, complexity, gradient descent methods, first-order methods, parameter free methods, universal gradient methods. AMS Classification: 90C30, 90C06, 90C26. § INTRODUCTION In this paper, we introduce a new first-order method for non-convex composite optimization problems with inexact oracle. Namely, our problem of interest is as follows: min_x ∈ X ⊆ E {ψ(x) := f(x) + h(x)}, where X is a closed convex set and h(x) is a simple convex function, e.g. ‖x‖_1. We assume that f(x) is a general function endowed with an inexact first-order oracle, which is defined below (see Definition <ref>). Informally speaking, at any point we can approximately calculate the value of the function and construct a quadratic function which approximately bounds our f(x) from above. An example of a problem with this kind of inexactness is given in <cit.>, where the authors study a learning problem for a parametric PageRank model. First-order methods have been widely developed since the earliest years of optimization theory, see, e.g., <cit.>. The recent renaissance in their development started more than ten years ago and was mostly motivated by fast-growing problem sizes in applications such as Machine Learning, Data Analysis, and Telecommunications. For many years, researchers mostly considered convex optimization problems, since they have good structure and allow one to estimate the rate of convergence for proposed algorithms. Recently, non-convex problems have started to attract fast-growing attention, as they appear often in Machine Learning, especially in Deep Learning. Thus, the high standards of research on algorithms for convex optimization started to influence non-convex optimization.
Namely, it has become very important for newly developed methods to have a rate of convergence with respect to some criterion. Usually, this criterion is the norm of the gradient mapping, which is a generalization of the gradient for constrained problems, see, e.g., <cit.>. Already in <cit.>, the author analyzed how different types of inexactness in gradient values influence the gradient method for unconstrained smooth convex problems. At the moment, the theory of convex optimization algorithms with inexact oracle is well developed in a series of papers <cit.>. In <cit.>, it was proposed to calculate the gradient of the objective function inexactly and to extend the Fast Gradient Method of <cit.> to be able to use inexact oracle information. In <cit.>, a general concept of inexact oracle is introduced for convex problems, and Primal, Dual and Fast gradient methods are analyzed. In <cit.>, the authors develop the Stochastic Intermediate Gradient Method for problems with stochastic inexact oracle, which provides good flexibility for solving convex and strongly convex problems with both deterministic and stochastic inexactness. The theory for non-convex smooth, non-smooth and stochastic problems is well developed in <cit.>. In <cit.>, problems of the form (<ref>), where X ≡ ℝ^n and f(x) is a smooth non-convex function, are considered in the case when the gradient of f(x) is exactly available, as well as when it is available through stochastic approximation. Later, in <cit.>, the authors generalized these methods to constrained problems of the form (<ref>) in both deterministic and stochastic settings. Nevertheless, it seems to us that gradient methods for non-convex optimization problems with deterministic inexact oracle lack sufficient development.
The goal of this paper is to fill this gap. It turns out that smooth minimization with inexact oracle is closely connected with minimization of functions with Hölder-continuous gradient. We say that a function f(x) has Hölder-continuous gradient on X iff there exist ν ∈ [0,1] and L_ν ≥ 0 s.t. ‖∇ f(x) - ∇ f(y)‖_E,* ≤ L_ν‖x-y‖_E^ν, x,y ∈ X. In <cit.>, it was shown that a convex problem with Hölder-continuous subgradient can be considered as a smooth problem with deterministic inexact oracle. Later, universal gradient methods for convex problems with Hölder-continuous subgradient were proposed in <cit.>. These algorithms do not require knowledge of the Hölder parameter ν and the Hölder constant L_ν. Thus, they are universal with respect to these parameters. <cit.> proposed methods for non-convex problems of the form (<ref>), where f(x) has Hölder-continuous gradient. These methods rely on the Euclidean norm and are good when the Euclidean projection onto the set X is simple. Our contribution in this paper is as follows. * We generalize to the non-convex case the definition of inexact oracle in <cit.> and provide several examples where such inexactness can arise. We consider two types of errors – controlled errors, which can be made as small as desired, and uncontrolled errors, which can only be estimated. * We introduce a new gradient method for problem (<ref>) and prove a theorem (see Theorem <ref>) on its rate of convergence in terms of the norm of the generalized gradient mapping. Our method is adaptive to the controlled oracle error, is capable of working with inexact proximal mapping, and has flexibility in the choice of proximal setup, based on the geometry of the set X.
* We show that, in the case of problems with inexact Hölder-continuous gradient, our method is universal, that is, it does not require knowing in advance the Hölder parameter ν and the Hölder constant L_ν of the function f(x), but provides the best known convergence rate uniformly in the Hölder parameter ν. Thus, we provide a universal algorithm for non-convex Hölder-smooth composite optimization problems with deterministic inexact oracle. The rest of the paper is organized as follows. In Section <ref>, we define the deterministic inexact oracle for non-convex problems and provide several examples. In Section <ref>, we describe our algorithm and prove the convergence theorem. We also provide two corollaries for the particular cases of smooth functions and Hölder-smooth functions. Note that the latter case includes the former one. Finally, we provide some explanations of how convergence of the norm of the generalized gradient mapping to zero leads to a good approximation of a point where a necessary optimality condition for Problem (<ref>) holds. Note that we use different reasoning from what can be found in the literature. Notation. Let E be a finite-dimensional real vector space and E^* be its dual. We denote the value of a linear function g ∈ E^* at x ∈ E by ⟨g, x⟩. Let ‖·‖_E be some norm on E and ‖·‖_E,* be its dual norm. § INEXACT ORACLE In this section, we define the inexact oracle and describe several examples where it naturally arises. We say that a function f(x) is equipped with an inexact first-order oracle on a set X if there exists δ_u > 0 such that, at any point x ∈ X and for any number δ_c > 0, there exists a constant L(δ_c) ∈ (0, +∞) and one can calculate f̃(x,δ_c,δ_u) ∈ ℝ and g̃(x,δ_c,δ_u) ∈ E^* satisfying |f(x) - f̃(x,δ_c,δ_u)| ≤ δ_c+δ_u, f(y) - (f̃(x,δ_c,δ_u) + ⟨g̃(x,δ_c,δ_u), y-x⟩) ≤ (L(δ_c)/2)‖x-y‖_E^2 + δ_c+δ_u, ∀ y ∈ X. In this definition, δ_c represents the error of the oracle which we can control and make as small as we would like.
In contrast, δ_u represents the error which we cannot control. The idea behind the definition is that at any point we can approximately calculate the value of the function and construct an upper quadratic bound. Let us now consider several examples. §.§ Smooth Function with Inexact Oracle Values Let us assume that * the function f(x) is L-smooth on X, i.e. it is differentiable and, for all x,y ∈ X, ‖∇ f(x) - ∇ f(y)‖_E,* ≤ L‖x-y‖_E; * the set X is bounded with max_x,y ∈ X ‖x-y‖_E ≤ D; * there exist δ̅_u^1, δ̅_u^2 > 0 and, at any point x ∈ X, for any δ̅_c^1, δ̅_c^2 > 0, we can calculate approximations f̅(x) and g̅(x) s.t. |f̅(x) - f(x)| ≤ δ̅_c^1+δ̅_u^1, ‖g̅(x) - ∇ f(x)‖_E,* ≤ δ̅_c^2+δ̅_u^2. Then, using the L-smoothness of f(x), we obtain, for any y ∈ X, f(y) ≤ f(x) + ⟨∇ f(x), y-x⟩ + (L/2)‖x-y‖_E^2 ≤ f̅(x) + δ̅_c^1+δ̅_u^1 + ⟨g̅(x), y-x⟩ + ⟨∇ f(x) - g̅(x), y-x⟩ + (L/2)‖x-y‖_E^2 ≤ f̅(x) + ⟨g̅(x), y-x⟩ + (L/2)‖x-y‖_E^2 + δ̅_c^1+δ̅_u^1 + (δ̅_c^2+δ̅_u^2)D. Thus, (f̅(x),g̅(x)) is an inexact first-order oracle with δ_u = δ̅_u^1+δ̅_u^2 D, δ_c = δ̅_c^1+δ̅_c^2 D, and L(δ_c) ≡ L. §.§ Smooth Function with Hölder-Continuous Gradient Assume that f(x) is differentiable and its gradient is Hölder-continuous, i.e. for some ν ∈ [0,1] and L_ν ≥ 0, ‖∇ f(x) - ∇ f(y)‖_E,* ≤ L_ν‖x-y‖_E^ν, ∀ x,y ∈ X. Then f(y) ≤ f(x) + ⟨∇ f(x), y-x⟩ + (L_ν/(1+ν))‖x-y‖_E^1+ν, ∀ x,y ∈ X. It can be shown, see <cit.>, Lemma 2, that, for all x ∈ X and any δ > 0, f(y) - (f(x) + ⟨∇ f(x), y-x⟩) ≤ (L(δ)/2)‖x-y‖_E^2 + δ, ∀ y ∈ X, where L(δ) = ((1-ν)/(1+ν) · 2/δ)^(1-ν)/(1+ν) L_ν^2/(1+ν). Thus, (f(x), ∇ f(x)) is an inexact first-order oracle with δ_u = 0, δ_c = δ, and L(δ) given by (<ref>). Note that, if (f(x), ∇ f(x)) can only be calculated inexactly, as in Subsection <ref>, their approximations will again be an inexact first-order oracle.
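The trade-off encoded in L(δ) — a larger admissible error δ buys a smaller effective Lipschitz constant — is easy to verify numerically. The sketch below is our own illustration: the test function f(x) = (2/3)|x|^{3/2}, for which ν = 1/2 and L_ν = √2 is a valid Hölder constant, and the function names are our choices, not part of the paper:

```python
import numpy as np

def L_delta(delta, nu, L_nu):
    """L(delta) = ((1-nu)/(1+nu) * 2/delta)^((1-nu)/(1+nu)) * L_nu^(2/(1+nu))."""
    return ((1.0 - nu) / (1.0 + nu) * 2.0 / delta) ** ((1.0 - nu) / (1.0 + nu)) \
           * L_nu ** (2.0 / (1.0 + nu))

# f(x) = (2/3)|x|^{3/2} has a Hoelder-continuous derivative with nu = 1/2
f = lambda x: (2.0 / 3.0) * abs(x) ** 1.5
df = lambda x: np.sign(x) * abs(x) ** 0.5

def quadratic_bound_holds(delta, grid, nu=0.5, L_nu=np.sqrt(2.0)):
    """Check f(y) <= f(x) + f'(x)(y-x) + L(delta)/2 (y-x)^2 + delta on a grid."""
    M = L_delta(delta, nu, L_nu)
    return all(f(y) <= f(x) + df(x) * (y - x) + 0.5 * M * (y - x) ** 2 + delta + 1e-12
               for x in grid for y in grid)
```

Shrinking δ inflates L(δ), which is exactly the mechanism the adaptive method later exploits when balancing oracle error against step size.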
§.§ Function Given by Maximization Subproblem Assume that function f(x): → is defined by an auxiliary optimization problemf(x) = max_u ∈ U ⊆{Ψ(x,u):= -G(u) +A u , x },where A: →^* is a linear operator, G: → R is a continuously differentiable uniformly convex function of degree ρ≥ 2 with parameter σ_ρ≥ 0. The last means that ∇ G(u_1) - ∇ G(u_2) , u_1 - u_2 ≥σ_ρu_1-u_2_^ρ, ∀ u_1, u_2 ∈ U,where ·_ is some norm on . Note that f(x) is differentiable and ∇ f(x) = A u^*(x), where u^*(x) is the optimal solution in (<ref>) for fixed x. Extending the proof in <cit.>, we can prove the following.If G is uniformly convex on X, then the gradient of f is Hölder-continuous withν = 1/ρ-1,L_ν = A_→^*^ρ/ρ-1/σ_ρ^1/ρ-1,where A_→^* = max{Au_,*: u_=1}. From the optimality conditions in (<ref>), we obtainA^T x_1 - ∇ G(u(x_1)), u(x_2) - u(x_1) ≤ 0,A^T x_2 - ∇ G(u(x_2)), u(x_1) - u(x_2) ≤ 0.Adding these inequalities, we obtain, by definition of uniformly convex function,A^T (x_1-x_2), u(x_1) - u(x_2)≥∇ G(u(x_1)) - ∇ G(u(x_2)), u(x_1) - u(x_2)(<ref>)≥σ_ρu(x_1) - u(x_2)_^ρ.on the other hand,A(u(x_1) - u(x_2))_,*^2 ≤A_→^*^2 u(x_1) - u(x_2)_^2 ≤A_→^*^2 (1/σ_ρ A^T (x_1-x_2), u(x_1) - u(x_2))^2/ρ≤A_→^*^2/σ_ρ^2/ρA(u(x_1) - u(x_2))_,*^2/ρx_1-x_2_^2/ρ.Thus,A(u(x_1) - u(x_2))_,*^2-2/ρ≤A_→^*^2/σ_ρ^2/ρx_1-x_2_^2/ρ,which proves the Lemma. Let us now consider a situation, when the maximization problem in (<ref>) can be solved only inexactly by some auxiliary numerical method. It is natural to assume that, for any x ∈ X and any δ > 0, we can calculate a point u_x ∈ U s.t. 
0 ≤ f(x)-Ψ(x,u_x) = Ψ(x,u^*(x)) -Ψ(x,u_x) ≤δ. Since ln(t) is a concave function, for any ρ≥ 2 and t,τ≥ 0, we have ln( 1/ρt^ρ+ ρ-1/ρτ^ρ/ρ-1) ≥1/ρln( t^ρ) + ρ-1/ρln(τ^ρ/ρ-1) = ln(t τ). Using this inequality with t=σ_ρ^1/ρu^*(x)-u_x_, τ = A_→^*/σ_ρ^1/ρy-x_, we obtain, for any y ∈ X, A(u^*(x)-u_x), y-x≤A_→^*u^*(x)-u_x_y-x_≤σ_ρ/ρu^*(x)-u_x_^ρ + A_→^*^ρ/ρ-1/ρ/ρ-1σ_ρ^1/ρ-1y-x_^ρ/ρ-1 = σ_ρ/ρu^*(x)-u_x_^ρ + L_ν/1+νy-x_^1+ν, where ν and L_ν are defined in (<ref>). At the same time, since Ψ(x,u) (<ref>) is uniformly concave in its second argument, we have σ_ρ/ρu^*(x)-u_x_^ρ≤Ψ(x,u^*(x)) - Ψ(x, u_x) (<ref>)≤δ. Combining this inequality with the previous one, we obtain A(u^*(x)-u_x), y-x ≤L_ν/1+νx-y_^1+ν + δ. Since f has a Hölder-continuous gradient with parameters (<ref>), using (<ref>), we obtain f(y)≤ f(x) + ∇ f(x) ,y-x+ L_ν/1+νx-y_^1+ν(<ref>)≤Ψ(x,u_x) + δ + Au_x ,y-x+A(u^*(x)-u_x), y-x+ 2L_ν/1+νx-y_^1+ν(<ref>)≤Ψ(x,u_x) +Au_x ,y-x+ 2L_ν/1+νx-y_^1+ν+ 2 δ(<ref>),(<ref>),(<ref>)≤Ψ(x,u_x) +Au_x ,y-x+ 2L(δ)/2x-y_^2+ 4 δ. Thus, we have obtained that (Ψ(x,u_x), Au_x) is an inexact first-order oracle with δ_u = 0, δ_c = 4δ, and L(δ_c) given by (<ref>) with δ = δ_c/4. § ADAPTIVE GRADIENT METHOD FOR PROBLEMS WITH INEXACT ORACLE To construct our algorithm for problem (<ref>), we introduce, as is usually done, a proximal setup <cit.>. We choose a prox-function d(x) which is continuous, convex on X and * admits a continuous in x ∈ X^0 selection of subgradients d'(x), where X^0 ⊆ X is the set of all x where d'(x) exists; * d(x) is 1-strongly convex on X with respect to ·_, i.e., for any x ∈ X^0, y ∈ X, d(y)-d(x) - d'(x) ,y-x ≥1/2y-x_^2. We also define the corresponding Bregman divergence V[z] (x) = d(x) - d(z) -d'(z), x - z, x ∈ X, z ∈ X^0. Standard proximal setups, e.g. Euclidean, entropy, ℓ_1/ℓ_2, simplex, nuclear norm, spectahedron, can be found in <cit.>. We will use the Bregman divergence in the so-called composite prox-mapping min_x ∈ X{ g,x+ 1/γ V[x̅](x) +h(x) }, where γ >0, x̅∈ X^0, g ∈^* are given.
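For the standard Euclidean setup d(x) = ‖x‖²/2 with h(x) = λ‖x‖_1 and X = ℝ^n, this composite prox-mapping has a closed-form solution by coordinate-wise soft-thresholding. The sketch below is our own illustration; it computes the mapping exactly (i.e. with zero prox error, in the sense made precise next) and verifies optimality numerically:

```python
import random

def soft_threshold(t, tau):
    """Scalar soft-thresholding: argmin_x (x - t)^2 / 2 + tau * |x|."""
    if t > tau:
        return t - tau
    if t < -tau:
        return t + tau
    return 0.0

def prox_mapping(x_bar, g, gamma, lam):
    """Exact Euclidean composite prox-mapping for h(x) = lam * ||x||_1 on X = R^n:
       argmin_x <g, x> + (1/(2*gamma)) * ||x - x_bar||^2 + lam * ||x||_1,
       solved coordinate-wise via soft-thresholding."""
    return [soft_threshold(xb - gamma * gi, gamma * lam) for xb, gi in zip(x_bar, g)]

def objective(x, x_bar, g, gamma, lam):
    lin = sum(gi * xi for gi, xi in zip(g, x))
    breg = sum((xi - xb) ** 2 for xi, xb in zip(x, x_bar)) / (2 * gamma)
    return lin + breg + lam * sum(abs(xi) for xi in x)

random.seed(1)
x_bar = [random.uniform(-1, 1) for _ in range(5)]
g = [random.uniform(-1, 1) for _ in range(5)]
gamma, lam = 0.5, 0.1
x_tilde = prox_mapping(x_bar, g, gamma, lam)
best = objective(x_tilde, x_bar, g, gamma, lam)
# The closed-form point should beat randomly perturbed candidates.
for _ in range(2000):
    u = [xi + random.uniform(-0.5, 0.5) for xi in x_tilde]
    assert objective(u, x_bar, g, gamma, lam) >= best - 1e-12
```

With an iterative inner solver instead of the closed form, the residual optimality gap plays the role of the controllable prox error introduced below.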
We allow this problem to be solved inexactly in the following sense. Assume that we are given δ_pu >0, γ >0, x̅∈ X^0, g ∈^*. We call a point x̃ = x̃(x̅,g,γ,δ_pc,δ_pu) ∈ X^0 an inexact composite prox-mapping iff, for any δ_pc >0, we can calculate x̃ and there exists p ∈∂ h(x̃) s.t. it holds that g + 1/γ[d'(x̃) - d'(x̅) ] + p, u - x̃≥ - δ_pc-δ_pu, ∀ u ∈ X. We write x̃ =min_x∈ X^δ_pc+δ_pu{ g,x+ 1/γ V[x̅](x) +h(x) } and define g_X (x̅,g,γ,δ_pc,δ_pu) := 1/γ(x̅-x̃). This is a generalization of the inexact composite prox-mapping in <cit.>. Note that if x̃ is an exact solution of (<ref>), inequality (<ref>) holds with δ_pc=δ_pu=0 due to the first-order optimality condition. Similarly to Definition <ref>, δ_pc represents an error which can be controlled and made as small as desired, while δ_pu represents an error which cannot be controlled. Our main scheme is Algorithm <ref>. We will need the following simple extension of Lemma 1 in <cit.> to perform the theoretical analysis of our algorithm. Let x̃ = x̃(x̅,g,γ,δ_pc,δ_pu) be an inexact composite prox-mapping and g_X (x̅,g,γ,δ_pc,δ_pu) be defined in (<ref>). Then, for any x̅∈ X^0, g ∈^* and γ, δ_pc,δ_pu > 0, it holds that γ g, g_X (x̅,g,γ,δ_pc,δ_pu) ≥γg_X (x̅,g,γ,δ_pc,δ_pu)_^2 + (h(x̃(x̅,g,γ,δ_pc,δ_pu))-h(x̅)) - δ_pc-δ_pu. Taking u=x̅ in (<ref>) and rearranging terms, we obtain, by convexity of h(x) and strong convexity of d(x), g, x̅- x̃ ≥1/γ d'(x̃) - d'(x̅) , x̃- x̅ +p, x̃- x̅ - δ_pc-δ_pu≥1/γx̃- x̅_^2 + (h(x̃) - h(x̅))- δ_pc-δ_pu. Applying the definition (<ref>), we finish the proof. Now we state the main convergence theorem. Assume that f(x) is equipped with an inexact first-order oracle in the sense of Definition <ref> and that, for any constants c_1, c_2 >0, there exists an integer i ≥ 0 s.t. 2^ic_1 ≥ L(c_2/(c_12^i)). Assume also that there exists a number ψ^* > -∞ such that ψ(x) ≥ψ^* for all x ∈ X.
Then, after N iterations of Algorithm <ref>, it holds that M_K (x_K - x_K+1)_^2≤(∑_k=0^N-11/2M_k)^-1 (ψ(x_0) - ψ^* + N(4δ_u + δ_pu)) + ε/2. Moreover, the total number of checks of Inequality (<ref>) is not more than 2N-1+log_2M_N-1/L_0. First of all, let us show that the procedure of searching for a point w_k satisfying (<ref>), (<ref>) is finite. Let i_k ≥ 0 be the current number of performed checks of inequality (<ref>) on the step k. Then M_k = 2^i_kL_k. At the same time, by Definition <ref>, L(δ_c,k) = L( ε/20M_k) = L( ε/(20 · 2^i_kL_k)). Hence, by the Theorem assumptions, there exists i_k ≥ 0 s.t. M_k = 2^i_kL_k ≥ L(δ_c,k). At the same time, we have (w_k,δ_c,k,δ_u) - ε/20M_k - δ_u(<ref>)≤ f(w_k) (<ref>)≤(x_k,δ_c,k,δ_u) + (x_k,δ_c,k,δ_u),w_k - x_k+ L(δ_c,k)/2w_k - x_k_^2 + ε/20M_k + δ_u, which leads to (<ref>) when M_k ≥ L(δ_c,k). Let us now obtain the rate of convergence. We denote, for simplicity, _k = (x_k,δ_c,k,δ_u), _k = (x_k,δ_c,k,δ_u), _X,k =g_X (x_k, _k ,1/M_k,δ_pc,k,δ_pu). Note that _X,k(<ref>),(<ref>),(<ref>)= M_k (x_k - x_k+1). Using the definition of x_k+1, we obtain, for any k=0,…,N-1, f(x_k+1) - ε/20M_k -δ_u = f(w_k) - ε/20M_k -δ_u (<ref>)≤(w_k,δ_c,k,δ_u) (<ref>)≤_k +_k , x_k+1-x_k+M_k/2x_k+1-x_k_^2 + ε/10M_k + 2δ_u (<ref>)=_k - 1/M_k_k, _X,k +1/2M_k_X,k_^2 + ε/10M_k + 2δ_u (<ref>),(<ref>)≤ f(x_k) + ε/20M_k + δ_u - [1/M_k_X,k_^2+ h(x_k+1)-h(x_k) - ε/20M_k - δ_pu] + 1/2M_k_X,k_^2 + ε/10M_k + 2δ_u. This leads to ψ(x_k+1) ≤ψ(x_k) - 1/2M_k_X,k_^2 + ε/4M_k + 4δ_u + δ_pu, k=0,…,N-1. Summing up these inequalities, we get _X,K_^2 ∑_k=0^N-11/2M_k≤∑_k=0^N-11/2M_k_X,k_^2≤ψ(x_0) - ψ(x_N) + ε/4∑_k=0^N-11/M_k + N(4δ_u + δ_pu). Finally, since ψ(x) ≥ψ^* > -∞ for all x∈ X and _X,K(<ref>)= M_K (x_K - x_K+1), we obtain M_K (x_K - x_K+1)_^2≤(∑_k=0^N-11/2M_k)^-1 (ψ(x_0) - ψ^* + N(4δ_u + δ_pu)) + ε/2, which is (<ref>). The estimate for the number of checks of Inequality (<ref>) is proved in the same way as in <cit.>, but we provide the proof for the reader's convenience.
Let i_k ≥ 1 be the total number of checks of Inequality (<ref>) on the step k ≥ 0. Then i_0 = 1+log_2M_0/L_0 and, for k ≥ 1, M_k = 2^i_k-1L_k = 2^i_k-1M_k-1/2. Thus, i_k = 2+ log_2M_k/M_k-1, k ≥ 1. Then, the total number of checks of Inequality (<ref>) is ∑_k=0^N-1i_k=1+log_2M_0/L_0 + ∑_k=1^N-1(2+ log_2M_k/M_k-1) = 2N-1+log_2M_N-1/L_0. Let us consider two corollaries of the theorem above. The first is the simple case when, in Definition <ref>, L(δ_c) ≡ L. The second is the case when L(δ_c) is given by (<ref>). Note that the latter case includes the former one. Assume that there exists a constant L>0 s.t., for the dependence L(δ_c) in Definition <ref>, it holds that L(δ_c) ≤ L for all δ_c > 0. Assume also that there exists a number ψ^* > -∞ such that ψ(x) ≥ψ^* for all x ∈ X. Then, after N iterations of Algorithm <ref>, it holds that M_K (x_K - x_K+1)_^2≤4L(ψ(x_0) - ψ^*)/N + 4L(4δ_u + δ_pu) + ε/2. Moreover, the total number of checks of Inequality (<ref>) is not more than 2N+log_2L/L_0. By our assumptions, for all iterations k≥ 0, there exists i_k ≥ 0 s.t. M_k = 2^i_kL_k ≥ L(δ_c,k). Hence, we can apply Theorem <ref>. Let i_k ≥ 1 be the total number of checks of Inequality (<ref>) on a step k≥ 0. Then, for all k≥ 0, the inequality M_k = 2^i_kL_k ≤ 2L should hold. Otherwise the termination of the inner cycle would happen earlier. Using these inequalities, we obtain (∑_k=0^N-11/2M_k)^-1≤(∑_k=0^N-11/4L)^-1 = 4L/N. Thus (<ref>) follows from Theorem <ref>. The same argument proves the second statement of the corollary. Assume that the dependence L(δ_c) in Definition <ref> is given by (<ref>) for some ν∈ (0,1], i.e.
L(δ_c) = ( 1-ν/1+ν·2/δ_c)^1-ν/1+ν L_ν^2/1+ν, δ_c > 0. Assume also that there exists a number ψ^* > -∞ such that ψ(x) ≥ψ^* for all x ∈ X. Then, after N iterations of Algorithm <ref>, it holds that M_K (x_K - x_K+1)_^2≤ 2^1+3ν/2ν( 1-ν/1+ν·40/ε)^1-ν/2ν L_ν^1/ν(ψ(x_0) - ψ^*/N +(4δ_u + δ_pu) ) + ε/2. Moreover, the total number of checks of Inequality (<ref>) is not more than 2N-1+1+ν/2ν+1-ν/2νlog_2(40 ·1-ν/1+ν) + 1-ν/2νlog_2(1/ε) +log_2 L_ν^1/ν/L_0. First, let us check that, for any constants c_1, c_2 >0, there exists an integer i ≥ 0 s.t. 2^ic_1 ≥ L(c_2/(c_12^i)). Substituting δ_c = c_2/(c_12^i) into (<ref>) gives L(c_2/(c_12^i)) = 2^1-ν/1+νi c_3, where c_3>0 is some constant. Since 1-1-ν/1+ν = 2ν/1+ν >0, we conclude that the required i ≥ 0 exists. Thus, we can apply Theorem <ref>. Let i_k ≥ 1 be the total number of checks of Inequality (<ref>) on a step k≥ 0. Then, for all k≥ 0, the inequality M_k = 2^i_kL_k ≤ 2L(δ_c,k) should hold. Otherwise the termination of the inner cycle would happen earlier. From this inequality and (<ref>) it follows that M_k ≤ 2 ( 1-ν/1+ν·40M_k/ε)^1-ν/1+ν L_ν^2/1+ν. Solving this inequality for M_k, we obtain M_k ≤2^1+ν/2ν( 1-ν/1+ν·40/ε)^1-ν/2ν L_ν^1/ν. Whence, (∑_k=0^N-11/2M_k)^-1≤ 2^1+3ν/2ν( 1-ν/1+ν·40/ε)^1-ν/2νL_ν^1/ν/N. Now (<ref>) follows from Theorem <ref>. Using (<ref>) and the bound (<ref>), we obtain the estimate for the total number of checks of Inequality (<ref>). Let us make some remarks about the obtained results. First, if we set ν=1 in Corollary <ref>, we recover the result of Corollary <ref>. Second, in the situation of Corollary <ref>, to make the controlled part of the right-hand side smaller than ε, we need to choose N ≥ const·L_ν^1/ν(ψ(x_0)-ψ^*)/ε^1+ν/2ν. One can see that the smaller ν is, the worse the bound becomes. This is expected, as for non-smooth non-convex problems the norm of the gradient mapping g_X(·) at a stationary point need not be equal to zero.
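The doubling ("backtracking") scheme analysed above can be sketched in a few lines under strong simplifying assumptions that are not made in the paper: exact oracle (all δ terms and the ε slacks set to zero), Euclidean prox-function, h ≡ 0 and X = ℝ^n, so the prox step reduces to a plain gradient step. The test function and all constants are our own choices:

```python
def adaptive_gradient(f, grad, x0, L0=1.0, iters=50):
    """Doubling line search: start each step from M_{k-1}/2 and double M until
       f(w) <= f(x) + <grad f(x), w - x> + (M/2) ||w - x||^2, with w = x - grad f(x)/M."""
    x, M = list(x0), L0
    history = []  # records M_k * ||x_k - x_{k+1}||^2, the squared gradient-mapping norm scale
    for _ in range(iters):
        g = grad(x)
        M = max(M / 2.0, 1e-12)
        while True:
            w = [xi - gi / M for xi, gi in zip(x, g)]
            lin = sum(gi * (wi - xi) for gi, wi, xi in zip(g, w, x))
            quad = 0.5 * M * sum((wi - xi) ** 2 for wi, xi in zip(w, x))
            if f(w) <= f(x) + lin + quad + 1e-12:
                break
            M *= 2.0
        history.append(M * sum((wi - xi) ** 2 for wi, xi in zip(w, x)))
        x = w
    return x, history

# Smooth non-convex test function, bounded below by 0: f(x) = sum x_i^2 / (1 + x_i^2).
f = lambda x: sum(xi * xi / (1 + xi * xi) for xi in x)
grad = lambda x: [2 * xi / (1 + xi * xi) ** 2 for xi in x]

x_final, hist = adaptive_gradient(f, grad, [3.0, -2.0], L0=0.1, iters=100)
# The quantity min_k M_k ||x_k - x_{k+1}||^2 is driven toward zero, as the theorem predicts.
assert min(hist) < 1e-6
assert all(abs(gi) < 1e-3 for gi in grad(x_final))
```

Each accepted step decreases f by at least ‖∇f(x_k)‖²/(2M_k), which is the exact-oracle analogue of the descent inequality used in the proof.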
Third, we can see that the uncontrolled error 4δ_u + δ_pu can dramatically influence the error estimate, especially when ν tends to zero. Finally, let us explain why a small M_K (x_K - x_K+1)_ means that x_K+1 is a good approximation for a stationary point of the initial problem (<ref>). Let us prove the following result, which was communicated to us by Prof. Yu. Nesterov without proof. Let, in Problem (<ref>), f(x) be continuously differentiable, h(x) be convex, and X be a closed convex set. Assume that x^* is a local minimum in this problem. Then, for all x ∈ X, ∇ f(x^*), x- x^* + h(x) - h(x^*) ≥ 0. Let us fix an arbitrary point x ∈ X. Denote x_t = tx+(1-t)x^* ∈ X, t ∈ [0,1]. Since x^* is a local minimum in (<ref>), X is a convex set, and h(x) is a convex function, we obtain, for all sufficiently small t > 0, 0 ≤f(x_t)+h(x_t)-f(x^*)-h(x^*)/t≤f(x_t)-f(x^*)/t + h(x)-h(x^*). Taking the limit as t → 0+, we prove the stated inequality. Assume, for simplicity, that we are in the situation of Subsection <ref>. This means that f(x) is L(f)-smooth, we can uniformly approximate its gradient, g̅(x)-∇ f(x)_,*≤δ̅_c^2+δ̅_u^2, and the set X is bounded with diameter D.
Also assume that the chosen prox-function d(·) is L(d)-smooth. From (<ref>), (<ref>), (<ref>), we obtain that there exists ∇ h(x_K+1) ∈∂ h(x_K+1) s.t., for all x ∈ X, (x_K,δ_c,K,δ_u) + M_K[d'(x_K+1) - d'(x_K) ]+ ∇ h(x_K+1), x - x_K+1≥ -δ_pc,K - δ_pu. Whence, by convexity of h(x), ∇ f(x_K+1) , x- x_K+1 + h(x) - h(x_K+1) ≥ ∇ f(x_K+1) -∇ f(x_K) , x- x_K+1+ ∇ f(x_K) - (x_K,δ_c,K,δ_u) , x- x_K+1+M_K[d'(x_K) - d'(x_K+1) ], x - x_K+1 -δ_pc,K - δ_pu, x ∈ X. By L(f)-smoothness of f and boundedness of X, we obtain ∇ f(x_K+1) -∇ f(x_K) , x- x_K+1≥ - L(f)/M_KM_K(x_K - x_K+1)_D. From (<ref>), by boundedness of X, we get ∇ f(x_K) - (x_K,δ_c,K,δ_u) , x- x_K+1≥ - (δ̅_c,K^2+δ̅_u^2) D. Using L(d)-smoothness of d(x) and boundedness of X, we obtain M_K[d'(x_K) - d'(x_K+1) ], x - x_K+1≥ -L(d) M_K(x_K - x_K+1)_D. Substituting the last three inequalities into (<ref>), we obtain that, if M_K(x_K - x_K+1)_≤ε, then ∇ f(x_K+1) , x- x_K+1 + h(x) - h(x_K+1) ≥ -Θ(ε) - δ̅_u^2 D - δ_pu. Thus, at the point x_K+1, the necessary condition in Lemma <ref> approximately holds. § CONCLUSION In this article, we propose a new adaptive gradient method for non-convex composite optimization problems with inexact oracle and inexact proximal mapping. We show that, for problems with an inexact Hölder-continuous gradient, our method is universal in terms of the Hölder parameter and constant. For the proposed method, we prove a convergence theorem in terms of the generalized gradient mapping and show that the point returned by our algorithm is a point where a necessary optimality condition approximately holds. Acknowledgments. The author is very grateful to Prof. A. Nemirovski, Prof. Yu. Nesterov, and Prof. B. Polyak for fruitful discussions.
arXiv:1703.09180v1 [math.OC], 27 March 2017. Pavel Dvurechensky, "Gradient Method With Inexact Oracle for Composite Non-Convex Optimization".
Structured Learning of Tree Potentials in CRF for Image Segmentation Fayao Liu, Guosheng Lin, Ruizhi Qiao, Chunhua Shen. F. Liu, R. Qiao, C. Shen are with The University of Adelaide, Australia. G. Lin is with Nanyang Technological University, Singapore. This work was done when G. Lin was with The University of Adelaide. Email: {fayao.liu, ruizhi.qiao, chunhua.shen}@adelaide.edu.au, guosheng.lin@gmail.com. Appearing in IEEE Transactions on Neural Networks and Learning Systems, 26 March 2017. =================================================================================================================================================================================================================================================We propose a new approach to image segmentation, which exploits the advantages of both conditional random fields (CRFs) and decision trees. In the literature, the potential functions of CRFs are mostly defined as a linear combination of some pre-defined parametric models, and then methods like structured support vector machines (SSVMs) are applied to learn those linear coefficients. We instead formulate the unary and pairwise potentials as nonparametric forests—ensembles of decision trees, and learn the ensemble parameters and the trees in a unified optimization problem within the large-margin framework. In this fashion, we easily achieve nonlinear learning of potential functions on both unary and pairwise terms in CRFs. Moreover, we learn class-wise decision trees for each object that appears in the image. Due to the rich structure and flexibility of decision trees, our approach is powerful in modelling complex data likelihoods and label relationships.
The resulting optimization problem is very challenging because it can have exponentially many variables and constraints. We show that this challenging optimization can be efficiently solved by combining modified column generation and cutting-plane techniques. Experimental results on both binary (Graz-02, Weizmann horse, Oxford flower) and multi-class (MSRC-21, PASCAL VOC 2012) segmentation datasets demonstrate the power of the learned nonlinear nonparametric potentials. Keywords: Conditional random fields, decision trees, structured support vector machines, image segmentation. § INTRODUCTION The goal of object segmentation is to produce a pixel-level segmentation of different object categories. It is challenging, as the objects may appear in various backgrounds and under different visual conditions. CRFs <cit.> model the conditional distribution of labels given observations, and represent the state-of-the-art in image/object segmentation <cit.>. The max-margin principle has also been applied to predict structured outputs, including structured support vector machines (SSVMs) <cit.> and max-margin Markov networks <cit.>. These three methods share similarities when viewed as optimization problems using different loss functions. Szummer et al. <cit.> proposed to learn the linear coefficients of CRF potentials using SSVMs and graph cuts. To date, most of these methods assume a pre-defined parametric model for the potential functions, and typically only the linear coefficients of the parametric model are learned. This can greatly limit the flexibility and capability of the CRF model, and thus calls for effective methods to incorporate nonlinear nonparametric models for learning the potential functions in CRFs. As in standard support vector machines (SVMs), nonlinearity can be achieved by introducing nonlinear kernels into SSVMs. However, the time complexity of nonlinear SVMs is roughly O(n^3.5), with n being the number of training examples.
This time complexity is problematic for , where the number of constraints grows exponentially in the description length of the label . Moreover, nonlinear functions can significantly slow down the test time in most cases. Because of these reasons, currently most applications use linear kernels (or linear parametric potential functions in ), despite the fact that nonlinear functions usually deliver more promising prediction accuracy. In this work, we address this issue by combining with nonparametric decision trees. Both and decision trees have gained tremendous success in computer vision. Decision trees are capable of modelling complex relations and generalize well on test data. Unlike kernel methods, decision trees are fast to evaluate and can be used to select informative features.In this work, we propose to use ensembles of decision trees to map the image content to both the unary terms and the pairwise interaction values in. The proposed method is termed as . Specifically, we formulate both the unary and pairwise potentials as nonparametric forests—ensembles of decision trees, and learn the ensemble parameters and the trees in a single optimization framework. In this way, the nonlinearity is easily introduced into learning without confronting the kernel dilemma. Furthermore, we learn class-wise decision trees for each object. Due to the rich structure and flexibility of decision trees, our approach is powerful in modelling complex data likelihoods and label relationships. The resulting optimization problem is very challenging in the sense that it can involve exponentially or even infinitely many variables and constraints. We summarize our main contributions as follows.-1pt1. We formulate the unary and pairwise potentials as ensembles of decision trees, and show how to jointly learn the ensemble parameters and the trees as a unified optimization problem within the large-margin framework. 
In this fashion, we achieve nonlinear potential learning on both the unary and pairwise terms. 2. We learn class-wise decision trees (potentials) for each object that appears in the image.3. We show how to train the proposed model efficiently. In particular, we combine the column generation and cutting-planes techniques to approximately solve the resulting optimization problem, which can involve exponentially many variables and constraints.4. We empirically demonstrate that outperformsexisting methods for image segmentation. On both binaryand multi-class segmentation datasets we show the advantages of the learned nonlinear nonparametric potentials of decision trees. Related work We briefly review the recent works that are relevant to ours. A few attempts have been made to apply nonlinear kernels in . Yu <cit.> and Severyn <cit.> developed sampled cuts based methods for training with kernels. Sampled cuts methods were originally proposed for standard kernel . When applied to , the performance is compromised <cit.>. In <cit.>, the image-mask pair kernels are designed to exploit image-level structural information for object segmentation. However, these kernels are restricted to the unary term. Although not in the large margin framework, the kernel proposed in <cit.> incorporates kernels into the learning. The authors only demonstrated the efficacy of their method on a synthetic and a small scale protein dataset. To sum up, these approaches are hampered by the heavy computation complexity. Furthermore, it is not a trivial task to design appropriate kernels for structured problems. Recently, Lucchi et al. <cit.> proposed a two-step solution to tackle this problem. Specifically, they train linear by using kernelized feature vectors that are obtained from training a standard non-linear kernel model. They experimentally demonstrate that the kernel transferred linear model achieves similar performance as the Gaussian . 
However, this approach is heuristic and it cannot be shown theoretically that their formulation approximates a nonlinear model. Besides, their method consumes extra usage of memory and training time since the dimension of the transformed features equals to the number of support vectors, while the latter is linearly proportional to the size of the training data <cit.>. Moreover, compared to the above mentioned works of <cit.> and <cit.>, we achieve nonlinear learning on both the unary and the pairwise terms while theirs are limited to nonlinear unary potential learning. The recent work of Shen <cit.> generalizes standard boosting methods to structured learning, which shares similarities to our work here. However, our method bears critical differences from theirs: 1) We design a column generation method for non-linear tree potentials learning in directly from the formulation. Different from the case in <cit.>, which can directly derive column generation method analogous to LPBoost <cit.>, our derivation here is more challenging. This is because we can not obtain the most violated constraint from the constraints of the dual problem, on which the column generation technique relies. We instead inspect the KKT condition to seek for the most violated constraint. This is an important difference compared to existing column generation techniques. 2) We develop a learning method for multi-class semantic segmentation, while <cit.> only shows learning for binary foreground/background segmentation. Our experiments on the MSRC-21 dataset shows that our method achieves state-of-the-art results. 3) We learn class-wise decision trees (potentials) for each object that appears in the image. This is different from <cit.>. The work of decision tree fields <cit.> is close to ours in that they also use decision trees to model the pairwise potentials. 
The major difference is that in <cit.> potential functions are constructed by directly summing the energy tables associated with the set of nodes visited while evaluating the decision trees. Their trees are generally deep, with depth 15 for the unary potential and 6 for the pairwise potential in their experiments. By contrast, we model the potential functions as an ensemble of decision trees and learn them in the large-margin framework. In our method, the decision trees are shallow and simple, with binary outputs. § LEARNING TREE POTENTIALS IN CRF We present the details of our method in this section by first introducing the CRF models for segmentation, then formulating the energy functions and showing how to learn decision tree potentials in the large-margin framework. §.§ Segmentation using CRF models Before presenting our method, we first revisit how to use CRF models to perform image segmentation. Given an image instance and its corresponding labelling , CRF <cit.> models the conditional distribution of the form P(|; ) = 1/Zexp (- E(, ;)), where are parameters and Z is the normalization term. The energy E of an image with segmentation labels over the nodes (superpixels) N and edges S takes the following form: E(, ; )=∑_p ∈ N^(1)(y^p, ;) + ∑_(p,q) ∈ S^(2)(y^p, y^q, ; ). Here ∈ X, ∈ Y; ^(1) and ^(2) are the unary and pairwise potentials, both of which depend on the observations as well as the parameter . CRF seeks an optimal labelling that achieves the maximum a posteriori (MAP), which mainly involves a two-step process <cit.>: 1) learning the model parameters from the training data; 2) inferring the most likely labelling for the test data given the learned parameters. The segmentation problem thus reduces to minimizing the energy (or cost) with the learned parameters, which is ^*=_∈ Y E(, ; ). When the energy function is submodular, this inference problem can be efficiently solved via graph cuts <cit.>.
(<ref>), we show how to construct the unary and pairwise potentials using decision trees. We denote ^p as the features of superpixel p (p=1, …, n), with its label y^p∈{ 1, …, K}, where K is the number of classes. Let H be a set of decision trees, which can be infinite. Each _j^(1)(·) ∈ H takes ^p as the input, and _j^(2)(·, ·) ∈ H takes a pair (^p, ^q) as the input to output {0, 1}. We introduce (K+1) groups of decision trees, in which K groups are for the unary potential and one group for the pairwise potential. For the unary potential, the K groups of decision trees are denoted by _c^(1) (c=1,…, K), which correspond to K categories. Each _c^(1) is associated with the c-th class. In other words, for each class, we maintain its own unary feature mappings. Each group of decision trees for the unary potential can be written as: _c^(1)=[_c1^(1), _c2^(1), …]^, which are the output of decision trees: _cj^(1). All decision trees of the unary potential are denoted by ^(1)=[_1^(1), _2^(1), …, _K^(1)]. Accordingly, for the pairwise potential, the group of decision trees is denoted by ^(2), and ^(2)=[_1^(2), _2^(2), …]^ being the output of all _j^(2). The whole set of decision trees is denoted by = [^(1), ^(2)]. We then construct the unary and pairwise potentials as^(1)(y^(p),)= _y^p^(1)_y^p^(1)(^p). ^(2)( y^(p), y^(q), )= ^(2)^(2)(^p, ^q) I (y^p≠ y^q ).where I(·) is an indicator function which equals 1 if the input is true and 0 otherwise. Then the energy function in Eqn. (<ref>) can be written as:E(, ; , )= ∑_p ∈ N_y^p^(1)_y^p^(1)(^p)+∑_(p,q) ∈ S^(2)^(2)(^p, ^q) I (y^p≠ y^q ).Next we show how to learn these decision tree potentials in the large-margin framework.§.§ Learning in the large-margin frameworkInstead of directly minimizing the negative log-likelihood loss, we here learn the parameters in the large margin framework, similar to <cit.>. 
Given a set of training examples {_i,_i}_i=1^m,the large-margin based learning solves the following optimization:min_, ≥ 0 ^2 +Cm ∑_i ξ_iE(, _i; , )- E (_i, _i; ,)≥ ( _i, ) - ξ_i, ∀ i=1, …, m, and ∀∈ Y;.where : ×↦ is a loss function associated with the prediction and the true label mask.In general, we have ( ,) = 0 and (,) > 0 for any ≠. Intuitively, the optimization in Eqn. (<ref>) is to encourage the energy of the ground truth label E(_i,_i; ) to be lower than that of any other incorrect labels E(, _i; ) by at least a margin ( _i, ).To learn the potential functions we proposed in <ref> in the large-margin framework, we introduce the following definitions. For the unary part, we define ^(1)= _1 ^(1)⊙_2 ^(1)⊙…⊙_K^(1), where ⊙ stacks two vectors, andΨ^(1)(, ; ^(1))= ∑_p ∈ N_y^p^(1)(^p) ⊗ y^p.where ⊗ denotes the tensor operation (, ^p⊗ y^p = [I(y^p=1)^p, …, I(y^p=K)^p] ^). Recall that ^p denotes the p-th superpixel of the image . Here, Ψ^(1) acts as the unary feature mapping. Clearly we have:^(1)Ψ^(1)(, ; ^(1)) = ∑_p ∈ N^(1)(y^p, ).For the pairwise part, we define the pairwise feature mapping as:Ψ^(2)(, ; ^(2))= ∑_(p,q) ∈ S^(2)(^p, ^q) I (y^p≠ y^q ).Then we have the following relation:^(2)Ψ^(2)(, ; ^(2)) = ∑_(p,q) ∈ S^(2)(y^p, y^q, ).We further define =^(1)⊙^(2), and the joint feature mapping asΨ (, ; ) = Ψ^(1)(, ; ^(1)) ⊙Ψ^(2)(, ; ^(2)).With the definitions ofand Ψ, the energy function can then be written as:E(, ; , )=∑_p ∈ N^(1)(y^p, ;, ^(1)) + ∑_(p,q) ∈ S^(2)(y^p, y^q, ; , ^(2)) = ^Ψ(, ; ).Now we can apply the large-margin framework to learn using the proposed energy functions by rewriting the optimization problem in Eqn. (<ref>) as:min_,^2 +Cm ∑_i ξ_i ^[ Ψ(, _i ; ) - Ψ(_i, _i ; ) ] ≥ ( _i, ) - ξ_i, ∀ i=1, …, m, and ∀∈ Y; ≥ 0, ≥ 0.Note that we add the ≥ 0 constraint to ensure submodular property of our energy functions, which we will discuss the details later in <ref>. 
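To make the objects in the energy function concrete, the following toy evaluation computes E(y) as the sum of class-wise unary terms and Potts-type pairwise terms on a 4-superpixel chain. All weights, tree outputs, and graph structure below are illustrative stubs of our own, not learned values:

```python
# Toy evaluation of the tree-potential energy: decision-tree outputs are binary,
# and each class keeps its own unary weight vector (class-wise trees).
def energy(y, unary_out, w1, pairwise_out, w2, edges):
    """E(y) = sum_p <w1[y_p], unary_out[p]>
            + sum_{(p,q)} <w2, pairwise_out[(p,q)]> * 1[y_p != y_q]."""
    e = sum(sum(wc * t for wc, t in zip(w1[y[p]], unary_out[p]))
            for p in range(len(y)))
    for (p, q) in edges:
        if y[p] != y[q]:
            e += sum(w * t for w, t in zip(w2, pairwise_out[(p, q)]))
    return e

# Two classes, two unary trees per class, two pairwise trees; 4 superpixels in a chain.
w1 = {0: [0.5, 1.0], 1: [1.0, 0.2]}                 # class-wise unary weights (>= 0)
w2 = [0.7, 0.3]                                      # pairwise weights (>= 0)
unary_out = [[1, 0], [1, 1], [0, 1], [0, 0]]         # binary tree outputs per superpixel
edges = [(0, 1), (1, 2), (2, 3)]
pairwise_out = {e: [1, 0] for e in edges}

e_val = energy([0, 0, 1, 1], unary_out, w1, pairwise_out, w2, edges)
assert abs(e_val - 2.9) < 1e-9  # 0.5 + 1.5 + 0.2 + 0.0 unary, + 0.7 for the one label change
```

The pairwise term is charged only across label changes, which is what makes graph-cut inference applicable once the weights are nonnegative.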
Up until now, we are ready to learn and Ψ (or ) in a single optimization problem formulated in Eqn. (<ref>), but it is not clear how. Next we demonstrate how to solve the optimization problem in Eqn. (<ref>) by using column generation and cutting-planes. §.§ Learning tree potentials using column generation We aim to learn a set of decision trees and the potential parameter by solving the optimization problem in Eqn. (<ref>). However, jointly learning and is generally difficult. Here we propose to apply column generation techniques <cit.> to alternately construct the set of decision trees and solve for . From the point of view of column generation techniques, the dimension of the primal variable is infinitely large; column generation iteratively selects (generates) variables for solving the optimization. In our case, the infinitely many dimensions of correspond to infinitely many decision trees; thus we iteratively generate decision trees to solve the optimization. Basically, we construct a working set of decision trees (denoted as 𝒲_). During each column generation iteration we perform two steps. In the first step, we generate new decision trees and add them to 𝒲_. In the second step, we solve a restricted optimization problem in Eqn. (<ref>) on the current working set 𝒲_ to obtain the solution of . We repeat these two steps until convergence. Next we describe how to generate decision trees in a principled way by using the dual solution of the optimization in Eqn. (<ref>), which is similar to the conventional column generation technique. First we derive the Lagrange dual problem of Eqn. (<ref>), which can be written as max_,∑_i, λ_ (i, ) ( _i,)- {∑_i, λ_ (i, ) [ Ψ(, _i; ) - Ψ(_i, _i; ) ] + θ}^20 ≤∑_λ_ (i, ) ≤Cm, ∀ i=1,…, m; ≥ 0, ≥ 0. Here θ, λ are the dual variables. When using the column generation technique, one needs to find the most violated constraint in the dual. However, the constraints of the dual problem do not involve decision trees .
Instead of examining the dual constraints, we inspect the KKT condition, which is an important difference compared to existing column generation techniques. According to the KKT condition, at optimality, the following condition holds for the primal solution and the current working set 𝒲_: ≥∑_i, λ_ (i, )[ Ψ(,; ) - Ψ(_i,; )]. All of the generated ∈𝒲_ satisfy the above condition. Obviously, generating new decision trees which most violate the above condition would contribute the most to the optimization of Eqn. (<ref>). Hence the strategy for generating new decision trees is to solve the following problem: ^⋆ = _∑_i, λ_ (i, )[ Ψ (, _i; ) - Ψ (_i, _i; )]. Then ^⋆ is added to the current working set 𝒲_. If ^⋆ still satisfies the condition in Eqn. (<ref>), the current solution of and is already the globally optimal one. The optimization in Eqn. (<ref>) for generating new decision trees can be independently decomposed into solving the unary part and the pairwise part. Hence ^⋆ can be written as: ^⋆ = [^(1)⋆, ^(2)⋆]. For the unary part, we learn class-wise decision trees, namely, we generate K decision trees corresponding to the K categories at each column generation iteration. Hence ^(1)⋆ is composed of K decision trees: ^(1)⋆=[_1^(1)⋆, …, _K^(1)⋆]. More specifically, according to the definition of Ψ(, ) in Eqn. (<ref>), we solve the following K problems: ∀ c= 1,…, K:_c^(1)⋆(·) = _∈ ∑_i, λ_ ( i,) [ ∑_p ∈ N,y^p=c _y^p^(1)(_i^p) - ∑_p ∈ N,y_i^p=c _y_i^p^(1)(_i^p) ] = _∈ ∑_i, [ ∑_p ∈ N,y^p=c λ_ ( i,) _y^p^(1)(_i^p)_positive - ∑_p ∈ N,y_i^p=c λ_ ( i,) _y_i^p^(1)(_i^p)_negative]. To solve the above optimization problems, we train K weighted decision tree classifiers. Specifically, when training decision trees for the c-th class, the training data is composed of those superpixels whose ground truth label or predicted label is equal to the category label c. Since the output of the decision tree is in {0, 1} and λ_(i, ) ≥ 0, the maximization in Eqn.
(<ref>) is achieved if _c^(1) outputs 1 for each of the superpixel p with y^p=c, and outputs 0 for each of the superpixel pwith y_i^p=c. Therefore, as indicated by the horizontal curly braces in Eqn. (<ref>),superpixels with the predicted labels of category c are used as positive training examples, while superpixels with ground truth labels of category c are used as negative training examples. The dual solutionserves as weightings of the training data. For the pairwise part, we generate one decision tree in each column generation iteration, hence ^(2)⋆ can be written as ^(2)⋆= [^(2)⋆], the new decision tree for the pairwise part is generated as:^(2)⋆(·, ·)= _∈ ∑_i, λ_ ( i,) [∑_(p,q) ∈ S^(2)(^p, ^q) I (y^p≠ y^q ) _positive-∑_(p,q) ∈ S^(2)(^p, ^q) I (y_i^p≠ y_i^q )_negative].Similar to the unary case, we train a weighted decision tree classifier withas training example weightings. The positive and negative training data are indicated by the horizontal curly braces in Eqn. (<ref>). ^(2) is the response of a decision tree applied on the pairwise features constructed by two neighbouring superpixels (^p, ^q), , color differences or shared boundary lengths.With the above analysis, we can now apply column generation to jointly learn the decision trees ^(1), ^(2) and . The column generation (CG) procedure iterates the following two steps:1) Solve Eqn. (<ref>), Eqn. (<ref>) to generate decision trees ^(1)⋆, ^(2)⋆;2) Add ^(1)⋆ and ^(2)⋆ to working set 𝒲_ and resolve for the primal solutionand dual solution . We show two segmentation examples on the Oxford flower dataset produced by our method with different CG iterations in Fig. <ref>. As can be seen, our method refines the segmentation with the increase of CG iterations. Since this dataset is relatively simple, a few CG iterations are enough to get satisfactory results. For solving the primal problem in the second step, it involves a large number of constraints due to the large output space {∈}. 
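As a toy illustration of this alternating scheme (and not the paper's structured formulation), the sketch below runs an LPBoost/AdaBoost-flavoured column generation on a plain binary classification task: the decision trees are reduced to one-split stumps, and the restricted QP of step 2 is replaced by a crude exponential dual reweighting. All function names and these simplifications are ours.

```python
import math

def train_stump(X, y, sample_w):
    """Weak-learner 'oracle': return the stump (feature, threshold, sign)
    maximizing the weighted edge  sum_i w_i * y_i * h(x_i)."""
    best, best_edge = None, -float("inf")
    for j in range(len(X[0])):
        for t in sorted({x[j] for x in X}):
            for s in (+1, -1):
                edge = sum(w * yi * (s if x[j] <= t else -s)
                           for x, yi, w in zip(X, y, sample_w))
                if edge > best_edge:
                    best, best_edge = (j, t, s), edge
    return best

def stump_predict(h, x):
    j, t, s = h
    return s if x[j] <= t else -s

def column_generation_boost(X, y, iters=5):
    """Toy column generation: alternately (1) generate the weak learner most
    violating the current duals, (2) update the ensemble weights.  The
    restricted QP of the real method is replaced by exponential reweighting."""
    working_set, alphas = [], []
    duals = [1.0 / len(X)] * len(X)          # uniform dual weights to start
    for _ in range(iters):
        h = train_stump(X, y, duals)         # step 1: generate a new column
        working_set.append(h)
        alphas.append(1.0)                   # placeholder for re-solved weights
        margins = [yi * sum(a * stump_predict(hk, x)
                            for a, hk in zip(alphas, working_set))
                   for x, yi in zip(X, y)]
        duals = [math.exp(-m) for m in margins]   # step 2: refocus the duals
        z = sum(duals)
        duals = [d / z for d in duals]
    return working_set, alphas

def predict(ensemble, x):
    hs, alphas = ensemble
    score = sum(a * stump_predict(h, x) for a, h in zip(alphas, hs))
    return 1 if score >= 0 else -1
```

The structured version replaces the stump oracle with the weighted decision-tree training of Eqn. (<ref>) and the reweighting with the restricted QP over the working set.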
We next show how to apply the cutting-plane technique <cit.> to efficiently solve this problem. §.§ Speeding up optimization using cutting-plane To apply cutting-plane for solving the optimization in Eqn. (<ref>), we first derive its formulation. The formulation was first introduced by <cit.>. The formulation of our method can be written as:min_≥ 0, ξ≥ 0 12^2 + C ξ1/m^ [∑_i=1^m r_i ·[ Ψ(, _i; ) - Ψ(_i, _i; ) ] ]≥1/ m ∑_i=1^m r_i(_i, ) - ξ, ∀ r∈{0,1}^m; ∀∈.Cutting-plane methods work by finding the most violated constraint for each example i_i^⋆ = ^Ψ(, ; ) -( _i, )at every iteration and add it to the constraint working set. The sketch of our method is summarized in Algorithm <ref>,which calls Algorithm <ref> to solve the optimization. Implementation details To deal with the unbalanced appearance of different categories in the dataset, we define ( _i, ) as weighted Hamming loss, which weighs errors for a given class inversely proportional to the frequency it appears in the training data, as similar in <cit.>. In the inference problem of Eqn. (<ref>), when using the hamming loss as the label cost , the label cost term can be absorbed into the unary part. We therefore can apply Graph-cut to efficiently solve Eqn. (<ref>). As for more complicated label cost functions, an efficient inference algorithm is proposed in <cit.>. During each CG iteration, our method first solves Eqn. (<ref>), (<ref>) given the currentand ξ, and then solves a quadratic programming (QP) problem given . When solving Eqn. (<ref>), (<ref>), we train weighted decision tree classifiers using the highly optimized decision tree training method of <cit.>. Discussions on the submodularity It is known that if graph cuts are to be applied to achieve globally optimum labelling in segmentation, the energy function must be submodular. For foreground/background segmentation in which a (super-)pixel label takes value in {0, 1}, we show that our method keeps this submodular property. 
It is commonly known that an energy function is submodular if its pairwise term satisfies: η_pq(0, 0) + η_pq(1, 1) ≤η_pq(0, 1) + η_pq(1, 0).Recall that our pairwise energy is written as η_pq(y^p, y^q) = ^(2)^(2)(^p, ^q) I (y^p≠ y^q ). Clearly we have (η_pq(0, 0)=η_pq(1, 1)=0) because of the indicator functionI(y^p ≠ y^q). The second thing is to ensure η_pq(1, 0)+ η_pq(0, 1) ≥ 0. Given the non-negativeness constraint we impose onin our model, and the output of decision trees in our method taking values from {0, 1}, we have η_pq(1, 0)≥ 0 andη_pq(0, 1) ≥ 0. We thus accomplish the proof of the submodularity of our model. In the case of multi-object segmentation, the inference is done by theα-expansion of graph cuts. Discussions on the non-negative constraint onOur learning framework aligns with boosting methods, where we learn a non-negative weighted ensemble of weak structured learners (constructed by decision trees), which is analogous to weak learners in boosting methods. This is similar to boosting methods, such as AdaBoost, LPBoost <cit.>, where the non-negative weighting is commonly used. Further, a weak structured learner generated by our column generation method is expected to make positive contribution to the learning objective. If it is of no use to the objective, the weight will approach zero. Therefore it is reasonable to enforce the non-negative constraint on . § EXPERIMENTSTo demonstrate the effectiveness of the proposed method, we first compare our model with some most related baseline methods, which are , and . In section <ref>, we show that our method achieves state-of-the-art results by exploiting recent advances in feature learning<cit.>. 
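Before moving on, the submodularity argument above is easy to verify numerically. In the toy check below (our own sketch), `w2` plays the role of the non-negative pairwise weights and `phi2_outputs` the {0, 1} decision-tree outputs; the condition holds for non-negative weights and fails once a weight goes negative.

```python
def pairwise_energy(w2, phi2_outputs, yp, yq):
    """eta_pq(yp, yq) = sum_t w2[t] * phi2_outputs[t] * 1[yp != yq],
    with w2 >= 0 and each tree output in {0, 1}."""
    if yp == yq:
        return 0.0
    return sum(w * o for w, o in zip(w2, phi2_outputs))

def is_submodular(w2, phi2_outputs):
    """Check eta(0,0) + eta(1,1) <= eta(0,1) + eta(1,0)."""
    e = pairwise_energy
    return (e(w2, phi2_outputs, 0, 0) + e(w2, phi2_outputs, 1, 1)
            <= e(w2, phi2_outputs, 0, 1) + e(w2, phi2_outputs, 1, 0))
```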
§.§ Experimental setupThe datasets evaluated here include three binary datasets (Weizmann horse, Oxford flower and Graz-02) and two multi-class datasets (MSRC-21 and PASCAL VOC 2012).The Weizmann horse dataset[< http://www.msri.org/people/members/eranb/> ]consists of 328 horse images from various backgrounds, with groundtruth masks available for each image. We use the same data split as in <cit.> and <cit.>. The Oxford 17 category flower dataset <cit.> is composed of 849 flower images. Those with too small foreground are removed, which leaves 753 for segmentation purpose <cit.>. The data split stated in <cit.> is used to perform the evaluation. During our experiment, images of the Weizmann horse and the Oxford flower datasets are resized to 256×256. The Graz-02 dataset[<http://www.emt.tugraz.at/ pinz/> ] contains 3 categories (bike, car and people). This dataset is considered challenging as the objects appear at various background and with different poses. We follow the evaluation protocol in <cit.> to use 150 for training and 150 for testing for each category. The MSRC-21 dataset <cit.> is a popular multi-class segmentation benchmark with 591 images containing objects from 21 categories. We follow the standard split to divide the dataset into training/validation/test subsets. The PASCAL VOC 2012 dataset [<http://host.robots.ox.ac.uk/pascal/VOC/voc2012/> ] is a widely used benchmark for semantic segmentation, which contains 2913 images from the trainval set and 1456 images from the test set, making up 21 categories. Unlike many state-of-the-arts methods such as <cit.>, we do not use any additional training data for this dataset.We start with over-segmenting the images into superpixels using SLIC <cit.>, with ∼ 700 superpixels generated per image. We extract dense SIFT descriptors and color histograms around each superpixel centroid with different block sizes (12×12, 24×24, 36×36). 
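As an aside, the nearest-neighbour bag-of-words quantization applied to these descriptors (described next) boils down to a few lines. A pure-Python sketch with a toy codebook follows (function names are ours; the real pipeline uses a 400-word codebook):

```python
def bow_histogram(descriptors, codebook):
    """Assign each local descriptor to its nearest codeword (Euclidean
    distance) and return an L1-normalized bag-of-words histogram."""
    hist = [0] * len(codebook)
    for d in descriptors:
        j = min(range(len(codebook)),
                key=lambda k: sum((a - b) ** 2
                                  for a, b in zip(d, codebook[k])))
        hist[j] += 1
    n = sum(hist) or 1
    return [h / n for h in hist]
```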
The dense SIFT descriptors are then quantized into bag-of-words features using nearest neighbour search with a codebook size of 400. We construct four types of pairwise features also using different block sizes to enforce spatial smoothness, which are color difference in LUV space, color histogram difference, texture difference in terms of LBP operators as well as shared boundary length <cit.>. The column generation iteration number of our is set to 50 based on a validation set. We learn tree potentials with the tree depth being 2. Training on the MSRC-21 dataset on a standard PC machine takes around 16 hours. §.§ Comparing with baseline methods We first compare with some conventional methods, which are linear , and to demonstrate the superiority of our method. For and , each superpixel is classified independently without . We mainly evaluate on the more challenging Graz-02 and MSRC-21 dataset in this part. The regularization parameter C of , and our are selected from {1, 10, 100, 1000 } based on a validation set. We use depth-2 decision trees for training AdaBoost and our . The maximum iteration number of is chosen from {50, 100, 200}. For our method, we treat the foreground and background as two categories in the binary case to learn class-wise potentials. Graz-02 For a comprehensive evaluation, we use twomeasurements to quantify the performance on the Graz-02 dataset, which are intersection over union score and the pixel accuracy (including foreground and background). We report the results in Table <ref>. As can be observed, based on a depth-2 decision tree performs better than the linear . On the other hand, structured methods which jointly consider local information and spatial consistency are able to significantly outperformthe simple binary models. By introducing nonlinear and class-wise potential learning,our method is able to gain further improvement over .MSRC-21 We learn class-wise potentials using our for each of the 21 classes on the MSRC dataset. 
The compared results are summarized in Table <ref> (upper part). Similar conclusions can be drawn as on the Graz-02 dataset and our again outperforms all its baseline competitors.§.§ Comparing with state-of-the-art methods Since features play a pivotal role in the performance of vision algorithms,we exploit recent advances in feature learning to pursue state-of-the-art results, , unsupervised feature learning <cit.> and convolutional neural networks (CNN) <cit.>. Specifically, for the unsupervised feature learning, we first learn a dictionary B of size 400 and patch size 6×6 based on the evaluated image dataset using Kmeans, and then use the soft threshold coding <cit.> to encode patches extracted from each superpixel block. The final feature vectors (we call it encoding feature here) are obtained by performing a three-level max pooling over the superpixel block. For the CNN features, we use the Alex model <cit.> trained on the ImageNet[<http://image-net.org>] to generate CNN features. These two versions of our method are denoted as (FL) and (CNN) respectively. We only report the results of (CNN) on the MSRC-21 and PASCAL VOC 2012 datasets since our method already performs very well by using the encoding features on the three binary datasets.Weizmann horse We quantify the performance by the global pixel-wise accuracy S_a and the foreground intersection over union score S_o, as did in <cit.>. S_a measures the percentage of pixels correctly classified while S_o directly reflects the segmentation quality of the foreground. The results are reported in Table <ref>. Our method performs better than the kernel structural learning method of <cit.>, which may result from the fact that they only introduced nonlinearity into the unary part while our method achieves nonlinearity on both unary and pairwise terms. The best S_a score is obtained by <cit.>. 
However their method relies on an assumption that a perfect bounding box of the horse is available for each test image, which is not practically applicable. On the contrary, we provide a principal and general way of nonlinearly learning parameters. We show some segmentation examples of our method in Fig. <ref>.Oxford flower As in <cit.>, we also use S_a and S_o to measure the performance on the Oxford flower dataset, and report the results inTable <ref>. Our method performs comparable to the original work of <cit.> on this dataset in terms of S_o while again obtains better resultsthan the closely related state-of-the-art work of <cit.>. It is also worth noting that the method in <cit.> is very domain specific, which relies on modelling the flower's shape (center and petal), while ours is generally applicable.Graz-02 As in the work of <cit.>, <cit.>, <cit.>, <cit.>, we also evaluate the F-score on the Graz-02 dataset besides the above mentioned intersection over union score and pixel accuracy. The F-score is defined as F=2pr/(p+r), where p is the precision and r is the recall. We summarize the results in Table <ref> and Table <ref>. From Table <ref>, it can be seen that our method significantly outperforms all the compared methods, which fully demonstrate the power of nonlinear and class-wise potential learning. Furthermore, we can observe from Table <ref> that compared with the previous results, adding more features help toimprove the performance.MSRC-21 The compared results with state-of-the-art works are reported in the lower part of Table <ref>. As we can see, by incorporating more advanced features, our gains significant improvements over the previous results which only use bag-of-words and color histogram features. It is worth noting that our method performs better than the closely related work of Lucchi <cit.> which claims exploiting non-linear kernels. 
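For completeness, the F-score used for Graz-02 above is simply the harmonic mean of precision and recall:

```python
def f_score(p, r):
    """F = 2pr / (p + r), with the degenerate case p = r = 0 mapped to 0."""
    return 2 * p * r / (p + r) if (p + r) else 0.0
```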
It has to be pointed out that we did not employ any global potentials (while in<cit.>, they improve the global and average per-category accuracy from 70, 73 to 82 and 76 by adding global information). If global or higher potentials are incorporated into our model, further performance promotion can be expected. We show some qualitative evaluation examples in Fig. <ref>.PASCAL VOC 2012 We generate deep features of each superpixel by averaging the pixel-wise feature map scores within the superpixel obtained from a pretrained FCN model <cit.>. We then train our model on the standard PASCAL VOC 2012 training dataset with the generated deep features. Following the standard evaluation procedure for the Pascal VOC challenge, we upload our segmentation results to the test server and use the average intersection over union as the evaluation metric. We compare against several state-of-the-art methods (<cit.>, <cit.>, <cit.>, <cit.>) on the test set of the PASCAL VOC 2012 dataset. The results are reported in Table <ref>. As seen from the table, our beats the Hypercolum <cit.> and the CFM <cit.> and outperforms the FCN <cit.> by a notable margin. Although our method is triumphed by <cit.>, it should be noted that their result is obtained by using extratraining data (11,685 images vs 1456 images used for training our ). Some qualitative evaluation examples of our method are illustrated in Fig. <ref>.§ CONCLUSIONNonlinear structured learning has been a promising yet challenging topic in the community. In this work, we have proposed a nonlinear structured learning method of tree potentialsfor image segmentation. The unary and pairwise potentials are ensembles of class-wise trees, with the ensemble parameters and the trees jointly learned in a unified large-margin framework. In this way, nonlinearity is easily introduced into the learning. The resulted model involves exponential number of variables and constraints. 
We therefore derived a novel algorithm that combines a modified column generation method with the cutting-plane technique for efficient model training. We have demonstrated the superiority of the proposed nonlinear potential learning method by comparing against state-of-the-art methods on both binary and multi-class object segmentation datasets. A potential disadvantage of our method is that its strong non-linear learning capacity makes it prone to overfitting, which can be alleviated by using more training data. On the other hand, as shown in Table <ref>, our method using pre-trained CNN features achieves the best performance. It is therefore worth exploring a tighter combination of our method with deep learning techniques in future work.
Adversarial Source Identification Game with Corrupted Training Mauro Barni, Fellow, IEEE, Benedetta Tondi, Student Member, IEEE M. Barni is with the Department of Information Engineering and Mathematics, University of Siena, Via Roma 56, 53100 Siena, Italy, phone: +39 0577 234850 (int. 1005), e-mail: barni@dii.unisi.it; B. Tondi is with the Department of Information Engineering and Mathematics, University of Siena, Via Roma 56, 53100 Siena, Italy, e-mail: benedettatondi@gmail.com.

§ INTRODUCTION

Magnetic resonance tomography (MRT) is one of the most important diagnostic tools of modern medicine. Nuclear magnetic resonance (NMR) in general benefits from high spectral resolution but suffers from low sensitivity, which for a spin-1/2 system is proportional to the spin polarisation P=[ N_u - N_d] / [ N_u + N_d] and to the number of spins N=N_u + N_d. Due to the Boltzmann distribution this fraction is only about 10^{-5} at room temperature under conditions typical of human medicine. An enhancement of this factor directly improves the attainable lateral resolution and could increase the quality of MRT by orders of magnitude. MRT is based mainly on the visualization and dynamics of proton spins; however, over the last century a large number of new tracers have been tested in order to visualize dynamic processes inside the body.
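The ~10^-5 polarisation figure quoted above can be checked directly from the Boltzmann expression for a spin-1/2 ensemble. The sketch below (our own) uses the convention P = tanh(ħγB/(2k_BT)), where γB is the Larmor angular frequency; the placement of the factor of two depends on whether ω denotes the Larmor or the transition frequency. Constants are standard CODATA values.

```python
import math

HBAR = 1.054571817e-34    # J*s
KB = 1.380649e-23         # J/K
GAMMA_H = 2.675221874e8   # proton gyromagnetic ratio, rad s^-1 T^-1

def thermal_polarisation(gamma, B, T):
    """Equilibrium spin-1/2 polarisation P = tanh(hbar*gamma*B / (2*kB*T))."""
    return math.tanh(HBAR * gamma * B / (2 * KB * T))

# Protons at 7 T and room temperature: P is of order 1e-5,
# consistent with the figure quoted in the text.
p_proton_7T = thermal_polarisation(GAMMA_H, 7.0, 300.0)
```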
Functionality tests using hyperpolarised 3He or 129Xe nuclei in the gas phase for lung cancer is an example and routinely applied nowadays but still under research and development <cit.>.In contrast the electron spin of a negativly charged nitrogen vacancy (NV) centre, consisting of a substitutional nitrogen atom next to a vacancy in the diamond lattice, can be nearly polarized to 100 by irradiationwith green light <cit.>. Common goal of a number of research groups is the transfer of the electron polarisation to nuclear spins.The NV centre - carbon-13 nucleus structure is an outstanding model system which combines the possibility of a nearly fully polarised electron spin reservoir and a easy to measure nuclear spin species. In thermal equilibrium the polarisation is given by P=tanh(ħω/k_B T) and a brute force method to enhance the signal is cooling down the spin system. This can be combined with the optical induced polarisation of the the NV centre in high magnetic fields which also leads to a non-equilibrium spin polarisation of C-13 nuclei in diamond <cit.>. Also more sophisticated microwave driven nuclear polarisation (DNP) techniques in the manner of Overhauser <cit.> like the solid effect (SE) <cit.> the cross effect (CE) <cit.> or the thermal mixing <cit.> can be used and are already combined with the properties of NV centres <cit.>. An approach without additional micro wave appliancation to overcome the low signal to noise ratio (SNR) is to deploy the hyperfine or dipole interaction between a polarised and an unpolarised spin system. This was done in Ref. <cit.> in which the authors attribute their polarisation effect to the exited state level anti-crossing (ESLAC), which takes place at around 51mT. Previous optical hole-bleaching experimenal studies reported already cross polarisation of negatively charged NV and substitutional nitrogen (P1) centres by field dependent absorption messurements with a narrow-band laser <cit.>. 
However, this technique could not distinguish between up and down polarisation and disregarded nuclear spins. More recently this effect was reported both for single NV centres and for neutral NV centres detected by optically detected magnetic resonance (ODMR) <cit.>. Both studies consider only the coupling between the electron spins. To exploit the nuclear hyperpolarisation effect we use a modified NMR system. With this approach we are sensitive to carbon-13 spins, including those far away from all ODMR-active centres and other defects.

§ METHODS

A home-made Helmholtz coil in combination with a programmable power supply provides a constant low magnetic field with a stability of about 0.1. The diamond sample is fixed on a metal shuttle and aligned with one of the four NV axes along the magnetic field vector. For the NV polarisation we use a 532 nm laser with a power of up to 5. The laser light is guided through a 400 core multimode optical fiber with a numerical aperture of 0.22. The SMA905 connector at the end of the fiber is mounted several millimeters in front of the sample (Fig. <ref>). The shuttle is guided inside a rectangular hollow profile from the low-field coils to a probehead in the middle of a commercial superconducting 300 NMR magnet. After a shuttling duration of about one second the sample is placed between a pair of Helmholtz-like RF coils with a diameter of 6 fixed on a ceramic frame. Subsequently a π/2 pulse of 15s is applied and the NMR signal of the C-13 spins is recorded. Afterwards the shuttle is lifted via a pneumatic system back into the irradiation unit and fixed in place. After brief cooling by several air jets, the next run with different settings, e.g. magnetic field or laser duration, starts fully automatically. This enables a systematic and reproducible investigation of the hyperpolarisation process. In comparison to ODMR techniques, the presented approach allows the observation of the polarisation effect for nuclei far away from the ODMR-active centre.
This is particularly important for investigating spin diffusion processes in solids and, consequently, for developing an overall bulk polarisation and its applications. All measurements were performed with a commercially available HPHT diamond sample from Element Six with dimensions of about 2 × 2 × 2 mm^3, a natural abundance of 1.1 C-13 isotopes, and a nitrogen content of 100-200 ppm. After 10 MeV electron irradiation with a fluence of 2e16 the sample was annealed at 800 for 2 (Tab. <ref>). During irradiation the sample reached a maximal temperature of about 190. Before and after each preparation step a photoluminescence spectrum was taken during an 80 × 80 m x-y scan 50 m below the sample surface in a self-made confocal microscope with an air objective (MPlanoApo 100x/0.95, Olympus) to monitor the NV creation and its density. For excitation we used a 532 Nd:YAG laser, and for detection a spectrometer from Jobin Yvon containing a 150 grid as monochromator with a CCD sensor of 2048 pixels.

§ RESULTS AND DISCUSSION

§.§ Optical Spectrum

Figure <ref> shows three different photoluminescence (PL) spectra, each taken after a specific preparation step. For the original sample ex factory (black), only the Raman peak at 573 and a second feature at about 612 are noticeable, with no NV signal. The 612 peak is known as PL from some natural type I diamonds <cit.>. After electron irradiation a clear NV^- zero-phonon line (ZPL) at 637 with its characteristic sidebands arises and the 612 peak disappears (blue). Besides the Raman peak, a small NV^0 line at 575 was vaguely perceptible. Subsequent annealing leads to a massive boost in the NV signal (red). Note that while the spectra of the untreated (black) and electron-irradiated (blue) sample were recorded under similar conditions, as apparent from the similar heights of the Raman peaks, the spectrum after annealing (red) was recorded with a four times shorter exposure and with only 1.5 of the laser power.
This corresponds to a factor of 1400 in signal intensity and therefore NV density compared to the sample that was untreated and not annealed. §.§ NMR measurementsAll presented measurements were conducted at ambient conditions and the noise was checked to be thermal. First we estimate the π/2 pulse length with a nutation experiment at about 7T to 15 with a carrier frequency of 75.4689.Figure <ref> depicts the integral NMR signal intensity depending on the magnetic field during laser irradiation and a shuttling into the NMR probe within about one second.One representative spectrum of the acquired spectra and the corresponding thermal spectrum are plotted in Fig. <ref>. Both spectra were fitted to a Gaussian function. The thermal spectrum (a) has a linewidth (FWHM) of 733±22 and the hyper-polarised one has a smaller width of 298±12, both based on the fits. However at least the hyperpolarised spectrum has an ever narrower linewidth slightly below this value regarding the raw data. This discrepancy can be explained with the rather short last delay of 640 due to the requirement of an adequate accumulation time in the thermal case compared to the measured spin-lattice relaxation time of some hours (see below). This means that in the thermal experiment we measure the C-13 nuclei with shorter T1 times in proximity to impurities like P1 centres which are indeed present all over the investigated sample (see nitrogen content in Tab.<ref>). The narrow line width in the hyper-polarised case can be compared with theoretical calculation of the line width of C-13 nuclei solely dipolar coupled to each other. An estimation for homonuclear dipolar coupling by numerical simulations lead to a linewidth of maximal 0.5 based on the second moment for a Gaussian line shape <cit.>. We also performed hyper-polarised Hahn spin echo measurements via phase cycling to determine the spin-spin relaxation time, but we could not record any signal. The inset in Fig. 
<ref> shows an average solid/dipolar echo signal of 32 scans in a phase cycling experiment in the time domain withbetween the two π/2 pulses. As an evidence for dipolar coupling of C-13 spins a refocusing of the signal can be observed at 0.5 <cit.>.The field-dependence of the signal displays a non-trivial behaviour of pronounced peaks and sign changes (Fig. <ref>). We subdivide these peaks in three classes A, B and C. The peaks of A and B can be understood if one takes into account coupling and therefore cross relaxation between the NV-carbon-13 systems and the P1-N14 systems. The energy matching between these two systems at a certain magnetic field due to each Zeeman splitting leads to a resonant coupling and consequently a cross relaxation (Hartmann-Hahn condition)<cit.>. A P1 centre experiences a hyperfine coupling to its intrinsic 14N nuclei spin (I=1). Due to a Jahn-Teller distortion towards one neigboring carbon atom this coupling depends on the direction of the applied magnetic field B <cit.>. Under our experimental conditions P1 centres can be divided in two groups. One is aligned in B direction and the other is one oriented under an angle of about 71 relative to the magnetic field. The latter one displays a weaker hyperfine coupling to their 14N nuclei. The smaller peaks (B) can be associated with coupling between NV centres and P1 centres both aligned parallel to the applied field where the electron spin polarisation of the NV centres is most efficient. The case of coupling between an aligned NV centre with an unaligned P1 centre is three times more likely due to the three equivalent orientations in the diamond lattice. The peaks of the group A correspond to such a coupling and are therefore roughly three times higher than that ones of group B. A closer look at the data represented in Fig. <ref> suggests a similar but much weaker signal (C) between the peaks of highest intensity. 
To support this, we measured this field region twice under the same experimental conditions to enhance the signal-to-noise ratio (Fig. <ref>). The origin of these peaks is unclear. They could result from other spin species in the sample or from 15N (I=1/2) associated P1 centres. The latter case would in principle explain the position and shape of these peaks but not their intensity, which should be about 270 times weaker than that of the peaks belonging to the 14N-P1 centres owing to the natural abundance. It is also unlikely that this hyperpolarisation signal is induced by the ESLAC; for a detailed account and explanation see the theory and simulation section below. Furthermore, the sharp resonances in the polarisation suggest that only those carbon-13 nuclei which are weakly coupled to the NV centre can transport their spin polarisation to neighboring spins; otherwise the NMR signal should be detected over a broader field range. In this picture, a frozen core of strongly coupled and polarised spins surrounds the NV centre. These spins are commonly probed in ODMR experiments and show a spin polarisation over much broader field regions <cit.>. Apparently, the Larmor frequencies of neighbouring spins in this region differ too much for a resonant spin exchange. Thus the carbon nuclei in a more or less well defined shell-like region around the NV centre can pass their spin polarisation on into the bulk via the nuclear dipole network <cit.>. Additionally, it is statistically less likely to find a carbon-13 spin next to the NV centre than at some distance. For a time-resolved measurement of the hyperpolarisation effect, we chose the negative peak at 49.35 in field region A1 in Fig. <ref>. The experimental procedure for each experiment is illustrated schematically next to the corresponding plot in Fig. <ref>. To find the characteristic pumping time, the sample was irradiated in the corresponding field with constant power but varying duration.
Afterwards the sample was shuttled in the NMR probe and the π / 2 pulse was applied immediately (Fig. <ref>a). The characteristic pumping time in this experiment was determined to about T_pump=104.The depolarisation in the same field was measured with a pumping time of 250 which is sufficient to be in the saturated region of the pumping process. The characteristic decay time back to the thermal equilibrium was determined to be T^LF_1=28 (Fig. <ref>b). This is about 3.7 times faster than the characteristic pumping time. Reasons for this can be the fact, that the NV system is frequently in the excited state during the polarisation. This is accompanied by an other electronic configuration as in the ground state, where the NV centre is in resonance with the P1 defects. Also it should be mentioned that an unstable laser output in the first seconds can cause longer pumping times.Figure <ref>c) shows the exponential-like decay of the integral signal intensity over 10. The depolarisation behaviour back to the equilibrium state in the 7 field shows a much longer relaxation time of about T^HF_1=165. Due to this long time no change in the signal can be recognized within in the first 300 (See Fig. <ref> in appendix) and even an observation time over 200 identify just a slow decrease of the signal (see Fig. <ref> in appendix).The extreme long T1 time might be a manifestation of the wide off-resonant Zeeman splittings of the two defect systems and may be an advantage for future developments and novel applications.§.§ Theory and SimulationWe assume two quantum objects one consisting of a NV centre in its optical ground state with the electron spin S^NV and a nearby spin I^C of a carbon-13 nucleus. The other one is composed of the electron spin S^P1 of a P1 centre and the hyperfine coupled intrinsic nitrogen spin I^N. Here we consider the N-14 isotope (I^N=1) due to a natural abundance of 99.63. 
In this model we neglect the interaction of the intrinsic nitrogen spin of the NV centre which is weak in the ground state. The situation is illustrated in Fig. <ref>. This leads to the following two Hamiltonians,H^NV = D(S_z^NV)^2 + γB S_z^NV +A^13C S⃗^NV·I⃗^CH^P1_A,B = γB S_z^P1 +A^A,B S⃗^P1·I⃗^N,where A^13C denotes the isotropic interaction parameter for the NV and 13C spin, A^A,B the hyperfine coupling parameter for the aligned (A) and the non-aligned P1 centres (B) along the z-axis and γ the gyromagnetic ratio of an electron (28.03) The applied external magnetic field B is oriented parallel to the z-axis. The zero field splitting D in the NV optical ground state is 2870. Additionally, we assume in this model a dipole-dipole coupling of the NV coupled carbon-13 spin to an initially unpolarised C-13 spin reservoir, which is not directly considered in the Hamiltonian but ensures the distribution of spin polarisation over the hole diamond lattice. For our simulations we choose a relatively weak coupling of the NV centre to the carbon spin of A^13C=2 consistent with the narrow polarisation signal in the field dependent measurements. But it should be mentioned, that simulations, where the coupling does not exceed values of 20 display a very similar polarisation pattern. For higher values the energy level is further shifted which produces unobserved peaks and peak shapes.The values for the hyperfine coupling are taken from the literature as A^A=85 and A^B=114 <cit.>. By solving the Hamilton eigenvalue problem the energy levels of both systems are calculated. In the case of the P1 complex we distinguish between the two possible orientations in the crystal lattice and its relative abundance (1:3). The nitrogen content in the sample results in an average distance of nitrogen associated defects of about 10. This corresponds to a dipolar coupling strength of 200 – or for a distance of 5 even 2.5 are found. 
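The two Hamiltonians above can be diagonalized numerically to locate the field values at which an NV transition matches a P1 transition. The sketch below is our own (NumPy assumed): parameter values are taken from the text, frequencies are in MHz and fields in mT, and restricting the comparison to the 800-2000 MHz window is our simplification to isolate the electron flip-flop branch from the small nuclear splittings.

```python
import numpy as np

def spin_ops(s):
    """Spin operators (Sx, Sy, Sz) for spin quantum number s."""
    dim = int(round(2 * s + 1))
    m = np.array([s - k for k in range(dim)])
    Sz = np.diag(m).astype(complex)
    Sp = np.zeros((dim, dim), dtype=complex)
    for k in range(dim - 1):
        mm = m[k + 1]
        Sp[k, k + 1] = np.sqrt(s * (s + 1) - mm * (mm + 1))
    Sx = (Sp + Sp.conj().T) / 2
    Sy = (Sp - Sp.conj().T) / 2j
    return Sx, Sy, Sz

GAMMA_E = 28.03   # electron gyromagnetic ratio, MHz/mT
D_NV = 2870.0     # NV ground-state zero-field splitting, MHz

def h_nv(B, a13c=2.0):
    """NV (S=1) electron coupled to a 13C (I=1/2) nucleus."""
    Sx, Sy, Sz = spin_ops(1.0)
    ix, iy, iz = spin_ops(0.5)
    e2 = np.eye(2)
    return (D_NV * np.kron(Sz @ Sz, e2) + GAMMA_E * B * np.kron(Sz, e2)
            + a13c * (np.kron(Sx, ix) + np.kron(Sy, iy) + np.kron(Sz, iz)))

def h_p1(B, a=114.0):
    """P1 (S=1/2) electron hyperfine-coupled to its 14N (I=1) nucleus."""
    sx, sy, sz = spin_ops(0.5)
    Ix, Iy, Iz = spin_ops(1.0)
    e3 = np.eye(3)
    return (GAMMA_E * B * np.kron(sz, e3)
            + a * (np.kron(sx, Ix) + np.kron(sy, Iy) + np.kron(sz, Iz)))

def transitions(H, lo=800.0, hi=2000.0):
    """All level spacings of H falling in the (lo, hi) MHz window."""
    ev = np.linalg.eigvalsh(H)
    d = np.abs(ev[:, None] - ev[None, :]).ravel()
    return d[(d > lo) & (d < hi)]

def matching_fields(b_grid, tol=0.5):
    """Fields where an NV transition matches a P1 transition within tol MHz."""
    hits = []
    for B in b_grid:
        t_nv, t_p1 = transitions(h_nv(B)), transitions(h_p1(B))
        if t_nv.size and t_p1.size:
            if np.min(np.abs(t_nv[:, None] - t_p1[None, :])) < tol:
                hits.append(B)
    return hits
```

Scanning 45-55 mT with this sketch yields matchings clustered around 51 mT, split by the 14N hyperfine interaction, mirroring the peak structure discussed above.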
A resonant energy transfer between the two quantum systems is achieved within an energy difference of ±0.5 in the presented simulation (Fig. <ref>), which is characteristic of the coupling strength among the defects. For each initial and final state of the possible transitions, the expectation value I^C is stored to determine the preferred spin polarisation. In this simple algorithm the difference in I_z reflects the spin-flip probability of the carbon-13 nucleus. Each polarised carbon-13 nucleus then acts as a source of either positive or negative spin polarisation for dipole-coupled neighbouring spins with similar Larmor frequencies. The resolution of the field sweep is modelled as a convolution with a Gaussian with a full width at half maximum of 0.1. With this approach the shape of the marked regions in the experimental data in Fig. <ref> can be reproduced well. In this model it is not obvious which physical parameter determines the recorded line width of the polarisation peaks in Fig. <ref>: either both the NV-P1 electron interaction and the NV-C-13 coupling are weak, or the interaction with the P1 system is weak (narrow resonant energy transfer) and the coupling of the NV centre to the carbon-13 spin is strong, or vice versa. Nevertheless, the concept of a coupling scenario in the NV optical ground state is reasonable because of the rather short lifetime of several nanoseconds in the excited state <cit.>. This short lifetime corresponds to a short interaction time with very slow time evolution of the state vectors, which are responsible for the polarisation effect in terms of hyperfine coupling and cross polarisation; it only allows efficient interaction with strongly coupled spins. In contrast, this is not the case in the ground state, where even weak coupling can contribute to a polarisation effect.

§.§ Conclusion and Outlook

In the present contribution we highlight a magnetic-field-dependent nuclear polarisation of around 50, measured by NMR.
Our data suggest that the observed effect cannot be attributed to the hyperfine coupling in the ESLAC but is much more likely related to a cross polarisation between an NV-carbon-13 system and a nearby P1 centre. The dominant polarisation pattern can be attributed to cross relaxation between dipolar-coupled NV and 14N-P1 centres, in good agreement with the theoretical simulation. Two minor features in the data are observed whose origin is still unclear at present but may be associated with 15N-P1 centres. If this conclusion is correct, it suggests that an interaction with a P1 centre generates an efficient bulk polarisation even at relatively low concentrations. In this picture the P1 defect acts as a mediator for the nuclear polarisation effect, much more efficiently than the ESLAC, and should be exploited further in future hyperpolarisation applications. In order to investigate this effect in more detail, the defect distance has to be modified, which can be accomplished with samples of different nitrogen content. A fluence variation of the electron irradiation would change the NV-P1 ratio and therefore also the average distance between those defect types. One cannot rule out the possibility that with shrinking distance the interaction becomes so strong that the process of spin flips has to be described by quantum state overlaps and the time-dependent Schrödinger equation; for the current experiments this was not necessary. Furthermore, it appears that selective coupling to C-13 or other spins at a minimum distance from the NV centre may be necessary for an effective spin diffusion into the surrounding bulk.

§ ACKNOWLEDGMENTS

This work was supported by the VolkswagenStiftung. We thank Dr. W. Knolle from the Leibniz Institute of Surface Modification (IOM) for helpful discussions and valuable assistance during the high-energy electron irradiation.
http://arxiv.org/abs/1703.09243v1
{ "authors": [ "Ralf Wunderlich", "Jonas Kohlrautz", "Bernd Abel", "Jürgen Haase", "Jan Meijer" ], "categories": [ "cond-mat.mes-hall", "quant-ph" ], "primary_category": "cond-mat.mes-hall", "published": "20170327180539", "title": "Room temperature bulk diamond 13-C hyperpolarisation -- Strong evidence for a complex four spin coupling" }
http://arxiv.org/abs/1703.08973v1
{ "authors": [ "S. Leurini", "F. Herpin", "F. van der Tak", "F. Wyrowski", "G. J. Herczeg", "E. F. van Dishoeck" ], "categories": [ "astro-ph.GA" ], "primary_category": "astro-ph.GA", "published": "20170327084934", "title": "Distribution of water in the G327.3-0.6 massive star-forming region" }
On certain orbits of geodesic flow and (a,b)-continued fractions
Manoj Choudhuri, Institute of Infrastructure Technology Research and Management, Near Khokhara Circle, Maninagar (East), Ahmedabad-380026, Gujarat, India
email: manojchoudhuri@iitram.ac.in
========================================================================================================================================================================================================

The O8f?p star HD 108 is inferred to have experienced the most extreme rotational braking of any magnetic, massive star, with a rotational period P_ rot of at least 55 years, but the upper limit on its spindown timescale is over twice the age estimated from the Hertzsprung-Russell diagram. HD 108's observed X-ray luminosity is also much higher than predicted by the XADM model, a unique discrepancy amongst magnetic O-type stars. Previously reported magnetic data cover only a small fraction (∼3.5%) of P_ rot, and were furthermore acquired when the star was in a photometric and spectroscopic `low state' at which the longitudinal magnetic field ⟨B_ Z⟩ was likely at a minimum. We have obtained a new ESPaDOnS magnetic measurement of HD 108, 6 years after the last reported measurement. The star is returning to a spectroscopic high state, although its emission lines are still below their maximum observed strength, consistent with the proposed 55-year period. We measured ⟨B_ Z⟩=-325 ± 45 G, twice the strength of the 2007-2009 observations, raising the lower limit of the dipole surface magnetic field strength to B_ d≥ 1 kG. The simultaneous increase in ⟨B_ Z⟩ and emission strength is consistent with the oblique rotator model. Extrapolation of the ⟨B_ Z⟩ maximum via comparison of HD 108's spectroscopic and magnetic data with the similar Of?p star HD 191612 suggests that B_ d > 2 kG, yielding t_ S, max<3 Myr, compatible with the stellar age.
These results also yield a better agreement between the observed X-ray luminosity and that predicted by the XADM model. Stars: rotation – Stars: massive – Stars: individual: HD 108 – Stars: magnetic fields – Stars: winds, outflows.

§ INTRODUCTION

The rapid spindown of magnetic stars due to angular momentum loss through their magnetized winds has been well-explored theoretically (e.g. , ). In the case of some rapidly-rotating Bp stars with magnetic, photometric, and spectroscopic observations spanning a long temporal baseline, magnetic braking has been measured directly (e.g. CU Vir, ; σ Ori E, ; HD 37776, ), and the inferred timescales are in reasonably good agreement with the predictions of MHD simulations <cit.>. Magnetic braking is expected to increase with both the size of the magnetosphere and the mass-loss rate. Consistent with this picture, magnetic OB stars are in general more slowly rotating than their non-magnetic kin, with median rotational periods of about 9 days <cit.>. Longer rotational periods of weeks and even months are frequently measured. However, there are also cases of extreme rotational braking, with apparent rotational periods on the order of decades. One star in particular is an exemplar of this class: the O8f?p star HD 108, for which evidence from spectroscopy <cit.> and photometry <cit.> indicates a rotational period of between 50 and 60 years <cit.>. Magnetic measurements of HD 108 were reported by <cit.> (hereafter M2010) for the period 2007 to 2009. They found an essentially constant longitudinal magnetic field ⟨B_ Z⟩ ∼ -150 G, from which a lower limit to the dipole surface magnetic field strength of B_ d > 0.5 kG was inferred. The upper limit on the spindown timescale, as inferred from a 55 yr rotation period and the lower limit on B_ d, is t_ S,max = 8.5 Myr <cit.>: this is substantially longer than the age of the star estimated from its position on the Hertzsprung-Russell diagram (HRD), 4±1 Myr (M2010).
An additional discrepancy is that the star's X-ray luminosity, thought to originate in magnetically confined wind shocks <cit.>, is almost 1 dex higher than predicted, a unique occurrence amongst magnetic O-type stars, for which the opposite is typically the case <cit.>. However, the magnetic data cover only a very small fraction (∼3.5%) of the star's presumed rotational cycle. Furthermore, the magnetic data were obtained when the star was at photometric and spectroscopic minimum. In the context of the oblique rotator model in its simplest, dipolar form, this is interpreted as a consequence of the magnetosphere being seen closest to edge-on <cit.>, corresponding to the rotational phase at which the magnetic equator bisects the stellar disk, and thus ⟨B_ Z⟩ is closest to zero. It is therefore likely that B_ d is substantially higher than the lower limit determined by M2010. In this paper we report a new ESPaDOnS observation of HD 108 that enables new constraints on the stellar rotational period and spindown timescale. In Section 2, we describe the observation. In Section 3 we examine HD 108's long-term spectroscopic variability. The magnetic analysis is presented in Section 4, and updated magnetic and magnetospheric parameters, including the spindown timescale and predicted X-ray luminosity, are determined in Section 5. In Section 6 we predict the longitudinal magnetic field variation over the full stellar rotation period, and discuss the implications of this for the star's magnetic and magnetospheric properties. Conclusions are summarized in Section 7.

§ OBSERVATIONS

We obtained two circularly polarized (Stokes V) spectropolarimetric sequences of HD 108 on 2015 September 3 with ESPaDOnS, the high-dispersion (R∼65,000) spectropolarimeter mounted at the 3.6 m Canada-France-Hawaii Telescope (CFHT). A detailed description of this instrument is provided by <cit.>.
We followed the same strategy as that adopted by M2010: the measurement consisted of two consecutive spectropolarimetric sequences, each consisting of 4 polarized 1290 s sub-exposures, with a total exposure time of 2.9 h. Each observation yielded four unpolarized intensity (Stokes I) spectra, one Stokes V spectrum, and two diagnostic null N spectra. The data were reduced with the Upena pipeline, which incorporates the automated reduction package Libre-ESpRIT <cit.>. The peak SNR per spectral pixel was 802 in the first observation, 933 in the second, and 1325 in the co-added spectrum. We have also downloaded the ESPaDOnS and Narval observations reported by M2010. Narval is a clone of ESPaDOnS mounted at the Bernard Lyot Telescope, and obtains essentially identical results to those of ESPaDOnS <cit.>. The SNR of the observations reported in this work are comparable to the mean SNR of 1295 in the nightly ESPaDOnS observations presented by M2010. The spectra were normalized by fitting polynomial splines to the continuum flux in individual orders. § VARIABILITYThe left column of Fig. <ref> shows a comparison between the line profiles of Hγ, Hβ, and Hα in 2015 vs. 2007-2009. The Balmer lines are much stronger in the 2015 observation, confirming that HD 108 is moving towards a high state. Comparison of the 2015 observation to the Hβ and Hγ variability reported by <cit.> and the Hα variability reported by M2010 shows that the 2015 data are most similar in appearance to observations collected between 1996 and 1998. As expected, the star's emission lines are not yet at their most intense: the peak strength of Hβ, last reported in 1987, was approximately 1.65× the continuum, as compared to 1.2× the continuum in the 2015 data. The right column of Fig. <ref> shows the comparison described in the previous paragraph for the He i 447.1 nm line, the C iii and N iii lines near 464 nm, and the He ii 468.6 nm line. 
We confirm the same pattern of variability as observed by <cit.>: in comparison to the 2008 data, He ii and N iii are essentially unchanged, C iii is noticeably stronger, and He i is much weaker, having been significantly filled by emission. We measured the following equivalent widths (EWs) from the 2015 data: for Hγ, we found 0.097±0.003 nm; for Hβ, 0.048±0.003 nm; for Hα, -0.533±0.002 nm; and for He i 447.1 nm, 0.065±0.002 nm. While an extended time series of Hα EW measurements has not been published, EW time series exist for Hγ, Hβ, and He i 447.1 nm. Fig. <ref> shows EW measurements for Hβ, Hγ, and He i 447.1 nm, where we have combined our data with the measurements presented by <cit.>, <cit.>, and M2010. The long-term modulation is apparent in all lines, but is especially clear in He i, as the H lines show a degree of scatter. The EW time series confirms the inference from visual comparison of emission lines that the star is in a state similar to that observed in 1998 (HJD ∼ 2451000). Assuming the spectroscopic variability to be approximately symmetric about the low state, HD 108 should return to the previously observed maximum emission state in approximately 16 years. This is consistent with the rotation period of ∼55 yrs suggested by <cit.>. If this period is correct, maximum emission should next be observed in 2036.

§ MAGNETIC FIELD DIAGNOSIS

As a first step in evaluating the star's magnetic field we performed Least-Squares Deconvolution (LSD; ) using the iLSD package developed by <cit.>. We used the two line lists published by M2010 (see their Table 2). The first line list contains 17 spectral lines which were manually selected so as to minimize contamination by the wind. The second line list contains the 5 spectral lines identified as having the smallest blue-shifted absorption with respect to the stellar wind, and thus the absolute minimum of contamination by wind emission.
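The EWs quoted above are, in essence, integrals of the normalized line profile over wavelength. A minimal sketch of such a measurement, using a synthetic Gaussian line standing in for the actual data:

```python
import numpy as np

def trapz(y, x):
    """Simple trapezoidal rule (avoids NumPy version differences)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def equivalent_width(wl, flux, continuum=1.0):
    """EW = integral of (1 - F/Fc) d(lambda): positive for absorption,
    negative for net emission (the sign convention used in the text)."""
    return trapz(1.0 - flux / continuum, wl)

# Synthetic Gaussian absorption line (assumed, illustrative parameters)
wl = np.linspace(485.5, 486.7, 2001)      # wavelength grid [nm]
depth, sigma, centre = 0.2, 0.05, 486.1   # line depth, width, centre
flux = 1.0 - depth * np.exp(-0.5 * ((wl - centre) / sigma) ** 2)

ew = equivalent_width(wl, flux)   # analytic value: depth*sigma*sqrt(2*pi)
print(f"EW = {ew:.4f} nm")
```

For a Gaussian line the integral has the closed form depth × σ × √(2π), which makes this easy to validate before applying it to real spectra.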
In the following we shall refer to the first list as that containing `all' spectral lines (i.e., all those lines included by M2010), and the second as the `minimum wind' line mask. The LSD profiles extracted from the 2015 ESPaDOnS observation with these masks are shown in Fig. <ref>, where they are compared to the `low state' grand mean LSD profile obtained by combining all LSD profiles extracted from the observations reported by M2010 using the full line list. The amplitude of Stokes V is noticeably stronger in the most recent observation than in the 2007-2009 grand mean; however, the Stokes I LSD profile extracted using all lines is much weaker. This suggests that many of the spectral lines included in the larger line mask are in fact significantly affected by the stellar magnetosphere, notwithstanding the attempt by M2010 to select lines with only small contamination by wind emission. Conversely, the Stokes I LSD profile extracted using the minimum wind mask is similar in depth to that of the 2007-2009 `low state' grand mean, indicating that this smaller line mask is largely successful in eliminating wind contamination. The 2015 Stokes V profiles extracted with the two masks are similar, although the minimum-wind mask yields a lower SNR due to the smaller number of included lines. This supports the assumption that Stokes V is unaffected by circumstellar emission, as expected given that the magnetic field should be much stronger at the photosphere than in the circumstellar environment, and confirms the results of M2010, who performed the same comparison. To evaluate the longitudinal magnetic field ⟨B_ Z⟩ (e.g. ), we would ideally like to measure ⟨B_ Z⟩ corresponding as closely as possible to the true photospheric value; thus contamination from magnetospheric emission should be avoided as far as possible. This can be done simply by using the minimum-wind profile, but this sacrifices precision in Stokes V.
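For context, ⟨B_ Z⟩ is obtained from LSD profiles via the standard first-moment expression (as given by e.g. Wade et al. 2000). The sketch below is not the pipeline actually used: it assumes a mean wavelength and Landé factor, and validates the implementation on a synthetic weak-field profile where the input field is known:

```python
import numpy as np

def trapz(y, x):
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

C_KMS = 2.998e5   # speed of light [km/s]

def bz_first_moment(v, stokes_i, stokes_v, lam_nm, geff):
    """<Bz> [G] from the first moment of Stokes V:
    Bz = -2.14e11 * int(v V dv) / (lambda g c int(1 - I dv)),
    with v in km/s and lambda in nm."""
    num = trapz(v * stokes_v, v)
    den = lam_nm * geff * C_KMS * trapz(1.0 - stokes_i, v)
    return -2.14e11 * num / den

# Weak-field consistency check: V = -dv_Z * dI/dv for a known field,
# where dv_Z = 1.4e-6 * g * lambda(nm) * B(G) is the Zeeman shift [km/s]
lam, g, b_in = 500.0, 1.2, 300.0                # assumed lambda, g, field
v = np.linspace(-60.0, 60.0, 4001)              # velocity grid [km/s]
I = 1.0 - 0.3 * np.exp(-0.5 * (v / 10.0) ** 2)  # Gaussian line profile
V = -(1.4e-6 * g * lam * b_in) * np.gradient(I, v)

bz_rec = bz_first_moment(v, I, V, lam, g)
print(f"recovered <Bz> = {bz_rec:.1f} G")       # close to the 300 G input
```

Because the weak-field Stokes V signal is proportional to the derivative of Stokes I, the first moment recovers the input field almost exactly; this is a convenient sanity check before applying the estimator to noisy data.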
Instead, we used the LSD profiles extracted using the full line mask, but fixed the Stokes I EW to the maximum EW measured in this dataset. This makes the assumption that all variability in Stokes I is a consequence of the magnetosphere, and that the maximum EW gives the best approximation of the true photospheric line strength. This assumption seems warranted given the much lower level of variability in the minimum wind LSD profiles on either short or long timescales. These ⟨B_ Z⟩ measurements are shown as a function of time in the bottom panel of Fig. <ref>, where they are compared to the EWs, and alone in Fig. <ref>, where the time axis is zoomed in to show only the epoch spanned by the magnetic data. The weighted mean ⟨B_ Z⟩ measurement in the 2007-2009 epoch is -128±8 G, with a standard deviation for individual measurements of 46 G (solid and dashed lines in Figs. <ref> and <ref>), close to the mean error bar of 54 G. ⟨B_ Z⟩ in a given year is thus consistent with no variation. The annual weighted mean ⟨B_ Z⟩ values (black squares in Figs. <ref> and <ref>) are suggestive of a slight increase in the strength of ⟨B_ Z⟩ over time. These results are consistent with those of M2010, confirming that in the 2007-2009 epoch the wind was minimally affecting the lines used for LSD. In contrast, in 2015 ⟨B_ Z⟩=-325±46 G, approximately 3 times the strength measured in 2007-2009. If instead we use the minimum wind LSD Stokes V profile to evaluate ⟨B_ Z⟩, we find ⟨B_ Z⟩=-375 ± 93 G, which is consistent within error bars. From the LSD Stokes I and V profiles extracted with the full line mask used by M2010, we obtain ⟨B_ Z⟩ =-643 ± 107 G, where the much higher value is a result of the smaller Stokes I EW.

§ MAGNETIC, MAGNETOSPHERIC, AND ROTATIONAL PARAMETERS

HD 108's stellar, magnetic, magnetospheric, and rotational parameters are summarized in Table <ref>.
The theoretical framework concerning the rotational and magnetic characteristics of stellar magnetospheres has been summarized by <cit.>, whose development we follow to determine magnetospheric confinement radii, rotation parameters, and spindown timescales. The lower limit to the dipole magnetic field strength B_ d can be inferred from the maximum ⟨B_ Z⟩ measurement ⟨B_ Z⟩_ max and the limb darkening coefficient <cit.>. According to the tables calculated by <cit.>, a star with HD 108's T_ eff and logg should have a limb darkening coefficient of ∼0.3. Using the formula provided by <cit.>, B_ d≥ 3.5 |⟨ B_ Z⟩ |_ max = 1150 ± 160 G. The extent of the star's magnetically confined wind is given by the Alfvén radius R_ A <cit.>. This is determined via a scaling relation with the wind magnetic confinement parameter η_*, which is the ratio of magnetic to kinetic energy density in the stellar wind <cit.>. In order to evaluate η_* and R_ A, we must know the star's wind parameters, for which we must first determine the stellar parameters. We adopt T_ eff=35±2 kK and log(L/L_⊙) = 5.7 ± 0.1, as given by M2010 based on a spectroscopic analysis of archival IUE observations. The stellar radius is then R_* = 19.4±1.5 R_⊙. Placing the star on the HRD (Fig. <ref>) and comparing to the non-rotating evolutionary tracks and isochrones calculated by <cit.>, we find the star to have a zero-age main sequence (ZAMS) mass of M_ ZAMS = 50 ± 3 M_⊙, a present mass of M_* = 42 ± 5 M_⊙, and an age of 3.3±0.3 Myr. These are similar to the parameters determined by M2010, M_*=43 M_⊙ and t=4 ± 1 Myr, where their slightly greater age is due to their adoption of the rotating evolutionary tracks published by <cit.>. We use the mass-loss recipe of <cit.>, which yields a mass-loss rate of log[Ṁ/(M_⊙  yr^-1)] = -5.55 ± 0.17, calculated using a wind terminal velocity v_∞=2000 ± 300 km s^-1 as measured from UV lines by <cit.> (this v_∞ is compatible with the value determined from scaling the star's escape velocity by 2.6, as suggested by ).
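The factor of ~3.5 in the dipole lower limit quoted above follows from the standard geometry of a centred dipole seen pole-on under linear limb darkening. A quick numerical check, assuming u = 0.3 as above:

```python
def bd_lower_limit(bz_max, u=0.3):
    """B_d >= |<Bz>|_max / f(u), with f(u) = (15 + u)/(20 (3 - u)) the
    <Bz>/B_d ratio of a centred dipole seen pole-on under linear limb
    darkening; f(0.3) ~ 0.283, i.e. the factor of ~3.5 used above."""
    return bz_max * 20.0 * (3.0 - u) / (15.0 + u)

# The 2015 measurement |<Bz>|_max = 325 G
print(f"B_d >= {bd_lower_limit(325.0):.0f} G")  # compare 1150 +/- 160 G above
```

With u = 0 the geometric factor reduces to 1/0.25 = 4, so the adopted limb darkening lowers the inferred limit only slightly.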
This mass-loss rate is much higher than that determined via spectral modelling of ultraviolet resonance lines by <cit.>, logṀ = -7 ± 1. However, they used spherically symmetric mass-loss models which they noted were unable to simultaneously reproduce emission and absorption features. Comparisons of spherically symmetric to magnetohydrodynamic fits to the UV lines of HD 57682 <cit.> and HD 191612 <cit.> have also found that MHD models require mass-loss rates about 1 dex higher than those measured using spherically symmetric models. MHD mass-loss rates are furthermore similar to those calculated by the <cit.> recipe. We therefore adopt the higher mass-loss rate. From Eqns. 7 and 8 of <cit.>, we then find η_* ≥ 11.5 and R_ A≥ 2.1 R_*. The corotation or Kepler radius R_ K is determined via the rotation parameter W ≡ v_ eq / v_ orb, i.e. the ratio of the equatorial velocity to the orbital velocity <cit.>. Assuming P_ rot=55 yr, we find W = (8 ± 1)× 10^-5 and R_ K = 560 ± 70 R_* (, Eqns. 11 and 14). <cit.> provided a scaling relation for the spindown timescale τ_ J due to angular momentum loss via the magnetosphere. This scales with the star's moment of inertia, the (non-magnetic) mass-loss timescale, and R_ A. Assuming initially critical rotation, i.e. W_0 = 1, the maximum spindown time t_ S,max (i.e. the time required for the star to have decelerated from critical rotation to its current rotation rate) can be estimated from τ_ J and W. Using Ṁ, R_ A, and W as determined above, we find τ_ J≤ 0.5 Myr (, Eqn. 12), and t_ S, max≤ 5 Myr. This is somewhat higher than the age inferred from the HRD, t=3.3 ± 0.3 Myr, but is closer than the 8 Myr spindown age found by <cit.>. If instead the mass-loss rate measured from UV lines by <cit.> is used, t_ S, max < 25 Myr, much longer than the main-sequence lifetime of the star. HD 108 has an X-ray luminosity of log[L_ X / ( erg s^-1)] = 33 <cit.>, which is overluminous in comparison to similar non-magnetic stars.
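The rotation parameter and Kepler radius quoted above can be reproduced directly from the stellar parameters. A sketch using W = v_eq/v_orb and the standard scaling R_K = W^(-2/3) R_* (ud-Doula et al. 2008), with R_* = 19.4 R_⊙, M_* = 42 M_⊙, and P_rot = 55 yr:

```python
import numpy as np

G = 6.674e-11                     # gravitational constant [m^3 kg^-1 s^-2]
MSUN, RSUN = 1.989e30, 6.957e8    # solar mass [kg], solar radius [m]
YEAR = 3.156e7                    # one year [s]

R = 19.4 * RSUN                   # stellar radius
M = 42.0 * MSUN                   # present stellar mass
P = 55.0 * YEAR                   # assumed rotation period

v_eq = 2.0 * np.pi * R / P        # equatorial rotation speed [m/s]
v_orb = np.sqrt(G * M / R)        # near-surface orbital speed [m/s]
W = v_eq / v_orb                  # critical rotation parameter
R_K = W ** (-2.0 / 3.0)           # Kepler (corotation) radius, in R_*

print(f"W = {W:.1e}, R_K = {R_K:.0f} R*")  # compare W ~ 8e-5, R_K ~ 560 R* above
```

The enormous Kepler radius simply reflects how far below critical rotation a 55-year period puts a star of this size.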
<cit.> developed a semi-analytic scaling relationship that has proven successful in predicting the X-ray luminosities of most magnetic OB stars, although HD 108 was predicted to be about 0.5 dex less luminous than observed <cit.>. Using the lower limit on B_ d determined from the new magnetic data, we find that the XADM model predicts log[L_ X / ( erg s^-1)]≥ 33.3 ± 0.4, consistent with the observed X-ray luminosity. The uncertainty accounts for the uncertainty in , which strongly affects logL_ X, and the uncertain efficiency of X-ray production, which may be between 5% and 20% depending on the degree of self-absorption of shock-produced X-rays within the magnetosphere's cool plasma. § DISCUSSION The wind-sensitive lines of HD 108 show variability on two timescales: a long-term modulation, and short-term variability manifesting as scatter in a given epoch. Similar scatter has been observed in the Hα EWs of the magnetic O stars θ^1 Ori C <cit.>, HD 148937 <cit.>, and CPD -28^∘ 2561 <cit.>. This scatter can be qualitatively reproduced in 3D magnetohydrodynamic simulations as a consequence of time-variable structure within the magnetosphere <cit.>. A period search by M2010 on the EWs of variable spectral lines found no stable periods, suggesting stochastic behaviour within the magnetosphere. We performed our own analysis of Hα EW data and have confirmed their conclusion. The long-term modulation of magnetic O-type star EWs seen in wind sensitive lines is produced by the changing projection of the stellar magnetosphere on the sky. If the rotational and magnetic axes are misaligned, then as the star rotates the angle between the line of sight and the magnetic axis changes. Magnetically confined plasma collects in a disk or torus-like structure in closed loops surrounding the magnetic equator, and corotates with the star. Thus, as the star rotates, the magnetosphere is seen from varying perspectives. 
When the magnetosphere is closest to face-on, emission strength is at a maximum, whereas when it is seen edge-on, emission is at a minimum. These phases correspond to the magnetic axis being, respectively, closest to parallel or perpendicular to the line of sight, thus also corresponding to maximum and minimum ⟨B_ Z⟩. The magnetic data presented in Section 4 indicate that HD 108's surface magnetic dipole is at least twice as strong as the previously reported lower limit. However, since the star has not yet returned to emission maximum, and since ⟨B_ Z⟩ and emission line EWs tend to correlate, it is likely that ⟨B_ Z⟩ will continue to increase. To estimate the likely maximum strength of ⟨B_ Z⟩, we used the ⟨B_ Z⟩ and Hβ EWs of HD 191612 (O6-8f?p), which has the most similar stellar parameters to HD 108's of any other magnetic O-type star (T_ eff=36±1 kK, logL=5.4 ± 0.2), ⟨B_ Z⟩ of similar magnitude, and complete phase coverage of both ⟨B_ Z⟩ and Hβ <cit.>. We used Hβ rather than the more sensitive Hα line as HD 108's Hα time series does not extend to phases of emission maximum. Fig. <ref> shows a linear regression of Hβ EW vs. ⟨B_ Z⟩. The correlation coefficient for HD 191612 is r^2 = 0.96, indicating a good correlation. HD 108's measurements fall along this regression line, suggesting that using the regression to extrapolate ⟨B_ Z⟩_ max is not unreasonable. The dotted line shows HD 108's Hβ EW at emission maximum; it intersects the regression at ⟨B_ Z⟩ ∼ -550 G, implying that B_ d > 1.9 kG. While the distribution of surface magnetic dipole strengths amongst magnetic O-type stars ranges from a few hundred G to 20 kG, the majority of such stars have 1 kG < B_ d < 4 kG <cit.>. The 2 kG lower limit obtained from the HD 191612 extrapolation is very close to the centre of the distribution, while the lower limit determined from the 2015 ⟨B_ Z⟩ measurement is within 1 standard deviation.
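The extrapolation just described is an ordinary linear regression followed by evaluation at a target EW. The sketch below illustrates the procedure with hypothetical (EW, ⟨B_Z⟩) pairs; the actual HD 191612 measurements are in the cited sources:

```python
import numpy as np

# Hypothetical (Hbeta EW [nm], <Bz> [G]) pairs standing in for the
# HD 191612 measurements; only the procedure is meant to be faithful.
ew = np.array([0.10, 0.05, -0.05, -0.15, -0.25, -0.35])
bz = np.array([-30.0, -120.0, -210.0, -330.0, -420.0, -540.0])

slope, intercept = np.polyfit(ew, bz, 1)   # least-squares line
r = np.corrcoef(ew, bz)[0, 1]              # Pearson correlation
print(f"r^2 = {r**2:.2f}")

# Extrapolate to a hypothetical maximum-emission EW for HD 108
ew_max = -0.40
bz_extrap = slope * ew_max + intercept
print(f"extrapolated <Bz> at EW = {ew_max} nm: {bz_extrap:.0f} G")
```

The key caveat, as in the text, is that the extrapolated value depends on the two stars tracing the same EW-⟨B_Z⟩ relation; the high r^2 supports but does not guarantee that.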
This suggests that HD 108 is unlikely to have a magnetic field stronger than about 4 kG.Recalculating the magnetospheric parameters and spindown timescales with the lower limit inferred from HD 191612's Hβ EWs as the magnetic field strength of HD 108 yields η_* > 33, R_ A > 2.7 R_*, τ_ J≤ 0.3 Myr and t_ S,max≤ 3 Myr. The upper limit on the maximum spindown age is approximately the same as the age inferred from the non-rotating evolutionary models of <cit.>, t = 3.3 ± 0.3 Myr (Fig. <ref>). The X-ray luminosity predicted by the XADM model, assuming 10% efficiency, increases to logL_ X≥ 33.6, about 0.6 dex higher than the observed X-ray luminosity, and outside the 0.4 dex uncertainty. This difference is small enough that it can be reconciled by reducing the efficiency to 2%, and/or if the X-ray luminosity is rotationally modulated, as has been observed for some Of?p stars, e.g. HD 191612 for which logL_ X varies by about 0.13 dex throughout a rotational cycle <cit.>. Efficiency is expected to decrease with stronger magnetic confinement, due to the higher density and greater volume of the circumstellar plasma, which absorbs a greater fraction of X-rays <cit.>. HD 108 has stronger emission than any magnetic O-type star but NGC 1624-2, which has by far both the strongest magnetic field and the strongest magnetospheric emission of any magnetic O-type star <cit.>, and is also the most X-ray underluminous with respect to the XADM model <cit.>. Given this, greater absorption of X-rays by HD 108's magnetosphere than in those of most other magnetic O-type stars would make sense.If only one magnetic pole is visible throughout a rotational cycle, the Hα EW will show a single-wave variation, with a single emission maximum at _ max and a single minimum at  ∼ 0 (e.g. HD 191612, and NGC 1624-2; ). 
If both magnetic poles are visible, the emission strength will show a double-wave variation, with two local maxima corresponding to the positive and negative extrema of the  curve, and two local minima corresponding to  =0 (e.g. θ^1 Ori C, HD 57682, HD 148937, and CPD -28^∘ 2561; ). While HD 108's variability is consistent with a 55 year period, with no more than about 50% of a rotation cycle covered, a double wave variation cannot be ruled out on the basis of spectroscopic data alone, in which case the rotation period would be 110 years. It may be significant that all  measurements to date have been negative. For a double-wave variation, emission minimum corresponds to a polarity change in : thus, if the most recent data had been of positive polarity, the period would have to be 110 years. To explore this further, we calculated least-squares sinusoidal fits to  using both periods. These are shown in Fig. <ref>. To help constrain the fits, we estimated  under the assumption that a sinusoidal  curve must be symmetrical about phase 0, which we define here using JD0 = 2454000, corresponding to the time of minimum emission (see Fig. <ref>). For a 55 year period,  should be negative at all phases. For a 110 yr period,  should be positive between phases 0.5 and 1.0, thus, the polarity of the reflected  estimates was reversed. In the case of a 110 yr period, there should have been an approximately linear decrease in  between 2007 and 2015, whereas for a 55 yr period the 2007-2009 epoch should have corresponded to an extremum of the  curve, with the rate of change increasing from that epoch to the present. The latter case seems to be a better match to the observations, especially considering the higher precision of the 2007-2009 annual mean measurements as compared to the 2015 data. This is reflected in the smaller uncertainty in the fit obtained using a 55 yr period.§ CONCLUSIONS HD 108 is moving back into a high emission state. 
The line profiles are similar to those seen in 1998. This is consistent with the 55 year rotation period suggested by <cit.>.The new magnetic measurement shows that HD 108's magnetic field is at least twice as strong (B_ d > 1.2 kG) as the previous lower limit. This increase in  accompanying the increase in emission strength confirms the oblique rotator model as a unified explanation for HD 108's magnetic and spectroscopic variability. Comparison of HD 108's Hβ EW curve to those of the similar magnetic O-type star HD 191612 suggests that B_ d is >2 kG. This places HD 108 within one standard deviation of the centre of the observed B_ d distribution of magnetic O-type stars. A 2 kG dipole yields better agreement between the spindown timescale and the stellar age inferred from HD 108's position on the HRD. The higher lower limit to B_ d also resolves the discrepancy between HD 108's observed X-ray luminosity and that predicted by the XADM model: HD 108 is now slightly less luminous than predicted by XADM, as is the case for all other magnetic O-type stars. Indeed, bringing the observed and predicted X-ray luminosities into agreement now requires an efficiency of ∼2%, somewhat less than the 5-10% required for most other magnetic O-type stars. This reduced efficiency may be consistent with the fact that HD 108's H emission is stronger than any star but NGC 1624-2, implying a larger magnetosphere that absorbs a greater fraction of the X-rays produced by the magnetically confined wind shocks.The increased rate of change in  between the 2007-2009 epoch, when  was essentially flat, and the 2015 measurement, is more consistent with a single-wave EW variation in which only one magnetic pole is visible. This indicates that the period is likely 55 rather than 110 years. Further magnetic data will be essential to constraining the star's surface magnetic field strength. From the EW variation, and assuming a 55 year rotation period, magnetic maximum should next occur in 2036. 
Until then, a new magnetic measurement should be collected at least once every 5 years, in order to sample the rotational phase curve in increments of at least 0.1. Additional X-ray observations should also be obtained in similar intervals, in order to determine to what degree rotational modulation can explain the discrepancy between observed and predicted X-ray luminosities. Future spectroscopic data may also be instrumental in distinguishing between 55 yr and 110 yr periods. Unless i+β is exactly 180^∘, the EW curve will not be perfectly symmetric between times of positive and negative magnetic polarity. Thus, if the next emission maximum (which will correspond to the next magnetic maximum) is substantially stronger or weaker than the previous, this will be good evidence that the period is actually 110 yr, while if the maxima are of the same strength, it will be more likely that P_ rot=55 yr. Comparison of observed EW curves to those predicted by MHD models of HD 108's stellar wind and magnetosphere, as performed for HD 57682 <cit.>, HD 191612 <cit.>, and CPD -28^∘ 2561 <cit.>, will be helpful in determining the star's magnetic geometry, as such models can help to constrain the inclination angle. Due to its extremely slow rotation HD 108 also presents an excellent target for spectral monitoring, which could be used to explore the characteristic timescales of turbulent plasma flows within stellar magnetospheres.The conclusion that HD 108's spindown age and stellar age are compatible is tentative, as it relies upon evolutionary models that do not account for the effects of magnetic fields on stellar structure. <cit.> explored the impact of rotational braking and the inhibition of mixing on stellar evolution, however a truly self-consistent treatment has yet to be performed. 
Future models should investigate the interplay of mass-loss quenching due to magnetic wind confinement with the decline in the surface magnetic field strength due either to flux conservation in an expanding stellar atmosphere or to magnetic flux decay <cit.>. These effects have the potential to modify stellar evolution directly, while at the same time, the evolution of the star may have an influence on the magnetic field, and hence on the magnetosphere and magnetic braking. When such models are available, the gyrochronological ages of massive, magnetic stars should be revisited.

Acknowledgements. This work has made use of the VALD database, operated at Uppsala University, the Institute of Astronomy RAS in Moscow, and the University of Vienna. MS and GAW acknowledge support from the Natural Sciences and Engineering Research Council of Canada (NSERC). We acknowledge the Canadian Astronomy Data Centre (CADC).
For a null-homologous transverse link in a general contact manifold with an open book, we explore strongly quasipositive braids and Bennequin surfaces. We define the defect δ() of the Bennequin-Eliashberg inequality. We study relations between δ() and minimal genus Bennequin surfaces of . In particular, in the disk open book case, under a large fractional Dehn twist coefficient assumption, we show that δ()=N if and only if the link is the boundary of a Bennequin surface with exactly N negatively twisted bands. That is, the Bennequin inequality is sharp if and only if the link is the closure of a strongly quasipositive braid. § INTRODUCTION Let B_n be the n-strand braid group with the standard generators σ_1,…, σ_n-1. For 1 ≤ i < j ≤ n, let σ_i,j be the n-braid given by σ_i,j= (σ_j-1σ_j-2⋯σ_i+1) σ_i (σ_j-1σ_j-2⋯σ_i+1)^-1 In particular, σ_i, i+1=σ_i. The braid σ_i,j (resp. σ_i,j^-1) can be understood as the boundary of a positively (resp. negatively) twisted band attached to the i-th and the j-th strands (see Figure <ref>). The elements of the set {σ_i,j}_1≤ i<j≤ n are called the band generators. Band generators appear in many papers in the literature. The work of Bennequin in <cit.> identifies braid words in band generators with transverse knots and links in the standard tight contact 3-sphere (S^3, ξ_std). Rudolph uses band generators in a series of works, including <cit.>, where he develops and popularizes the concepts of quasipositive and strongly quasipositive knots and links. See also Rudolph's survey article <cit.>.
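The conjugation formula for the band generators can be expanded mechanically into Artin generators. The following sketch (helper name and encoding are ours, not from the paper) represents a braid word as a list of signed integers, where k stands for σ_k and -k for σ_k^{-1}:

```python
def band_generator(i, j):
    """Expand the band generator sigma_{i,j} (1 <= i < j) into Artin
    generators, following
    sigma_{i,j} = (s_{j-1} s_{j-2} ... s_{i+1}) s_i (s_{j-1} ... s_{i+1})^{-1}."""
    conj = list(range(j - 1, i, -1))            # s_{j-1}, s_{j-2}, ..., s_{i+1}
    return conj + [i] + [-k for k in reversed(conj)]

# sigma_{i,i+1} reduces to the Artin generator sigma_i:
print(band_generator(1, 2))   # [1]
# sigma_{1,3} is s_2 s_1 s_2^{-1}:
print(band_generator(1, 3))   # [2, 1, -2]
```

Note that every band generator has exponent sum 1, matching the picture of a single positively twisted band.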
Using band generators, Xu in <cit.> gives a new presentation of B_3 and a new solution to the conjugacy problem in B_3. Birman, Ko and Lee in <cit.> generalize the results of Xu to B_n. From the modern viewpoint, their work can be understood as saying that the band generators give rise to a Garside structure, a certain combinatorial structure that allows one to solve various decision problems such as the word and conjugacy problems (see <cit.>). Today the Garside structure defined by the band generators is called the dual Garside structure on B_n. Conventions: In this paper, unless otherwise stated, we assume the following:* Every contact structure is co-oriented.* Every braid word w ∈ B_n is written in the band generators σ_i,j, rather than in the standard Artin generators σ_1,…,σ_n-1. * By “a link” we mean an oriented, null-homologous knot or link. * By “a transverse link” we mean a transverse knot or link which is null-homologous and oriented so that it is positively transverse to the contact planes. Let  be a transverse link in (S^3, ξ_std). We say that a word w in the band generators σ_i,j is a braid word representative of  if the closure of the n-braid w is . For a braid word representative w of , starting with n disjoint disks and attaching a twisted band for each σ_i,j^± 1 in the word w, we get a Seifert surface F=F_w of , which we call the Bennequin surface associated to w (see Figure <ref>). A Bennequin surface was defined by Birman and Menasco in <cit.> for a topological link, generalizing Bennequin's Markov surface <cit.> (every Bennequin surface is a Markov surface, but there are Markov surfaces which are not Bennequin surfaces <cit.>), where they require one additional condition, namely that F has the maximal Euler characteristic among all Seifert surfaces.
However, in this paper, F_w need not realize the maximal Euler characteristic. A braid K ∈ B_n is called strongly quasipositive <cit.> if K admits a word representative w such that its associated Bennequin surface F_w has no negatively twisted bands; that is, w is a product of positive band generators. Using the dual Garside structure on B_n with the band generators, one can check whether a given braid K is conjugate to a strongly quasipositive braid or not <cit.>. Bennequin in <cit.> shows that for a braid word representative w of , the self-linking number sl() is given by the formula sl()=-n(w)+exp(w), where n(w) and exp(w) denote the number of braid strands and the exponent sum of w. He also proves a fundamental inequality called the Bennequin inequality sl() ≤ -χ() := 2g()-2+||, where g() denotes the 3-genus (of the topological oriented link type) of  and || denotes the number of link components of . The topological invariant χ() is called the Euler characteristic of . We note that in general χ() ≥χ(F_w). To measure how far the Bennequin inequality (<ref>) is from equality, we define the defect of the Bennequin inequality for a transverse link  by δ() := 1/2(-χ()-sl()). Note that δ() is a non-negative integer. For a braid word representative w of , Definition <ref>, (<ref>) and (<ref>) imply that 0 ≤δ() = 1/2(-χ()-sl()) ≤ 1/2(-χ(F_w)- sl()), where the right-hand side equals the number of negatively twisted bands of F_w. Therefore, we observe the following: The genus of the Bennequin surface F_w is equal to g() if and only if the number of negatively twisted bands of F_w is equal to δ(). In particular, for a strongly quasipositive braid word w, its Bennequin surface F_w gives a minimum genus Seifert surface of  and the Bennequin inequality is sharp, i.e. δ()=0. Related to Observation <ref> we conjecture the following: Every transverse link  in (S^3, ξ_std) is represented by a braid word w whose Bennequin surface F_w contains δ() negative bands. Equivalently, due to Observation <ref>, every transverse link bounds a Bennequin surface of genus g().
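The self-linking number and the defect bound are easy to evaluate from a band-generator word. The sketch below (our own illustrative helpers; a word is encoded as (i, j, sign) triples on n strands) checks them on the closure of σ_1^3 in B_2, a standard positive-trefoil representative with genus 1:

```python
def self_linking(n, word):
    # Bennequin's formula: sl = -n(w) + exp(w).
    return -n + sum(sign for (_i, _j, sign) in word)

def bennequin_surface_data(n, word):
    # F_w is n disks joined by one twisted band per letter of w.
    chi = n - len(word)
    neg_bands = sum(1 for (_i, _j, s) in word if s < 0)
    return chi, neg_bands

trefoil = [(1, 2, +1)] * 3                        # closure of sigma_1^3 in B_2
sl = self_linking(2, trefoil)                     # -2 + 3 = 1
chi_Fw, neg = bennequin_surface_data(2, trefoil)  # chi(F_w) = -1, no negative bands

# delta(L) <= (-chi(F_w) - sl)/2 = number of negative bands of F_w:
assert (-chi_Fw - sl) // 2 == neg == 0
```

Since the bound is 0 and δ() ≥ 0, the Bennequin inequality is sharp here and F_w realizes g() = 1, as expected for a strongly quasipositive word.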
In Conjecture <ref>, we do not require that the braid word w realize the braid index of the transverse link, defined by b():= min{n ∈_> 0 | the link is represented by an n-braid}. In fact, in <cit.> Hirasawa and Stoimenow give an example with b()=4 represented by = σ_1,2 (σ_2,4)^2 (σ_1,2)^-1σ_1,3σ_1,2 (σ_2,4)^-1 (σ_1,2)^-2 (σ_1,3)^-2 (note the sign convention is altered here), none of whose Bennequin surfaces consisting of four disks and twisted bands has genus g()=3. However, studying the open book foliation of the genus 3 surface depicted in <cit.>, we can verify that one positive stabilization (cf. Figure <ref>) of this 4-braid produces a 5-braid representative of  that bounds a Bennequin surface of genus g()=3 as sketched in <cit.>. Concerning the braid index, we give the following stronger version of Conjecture 1. Stronger Form of Conjecture 1. Every transverse link  in (S^3, ξ_std) is represented by a braid word w of braid index at most b()+δ() such that its Bennequin surface F_w contains δ() negative bands. Under a condition of large fractional Dehn twist coefficient (FDTC for short; see the definition in Section <ref>), Conjecture <ref> holds, as stated in Theorem <ref>. A special case of Conjecture <ref> where δ()=0 is of particular interest. For a transverse link  in (S^3, ξ_std), the Bennequin inequality is sharp if and only if  is represented by a strongly quasipositive braid. Stronger Form of Conjecture 2. For a transverse link  in (S^3, ξ_std), the Bennequin inequality is sharp if and only if  is represented by a strongly quasipositive braid of braid index b(). The statement of Conjecture <ref> has circulated for more than a decade as a question or a conjecture among a number of mathematicians, including Etnyre, Hedden <cit.>, Rudolph and Van Horn-Morris. Under a condition of large FDTC, both Conjecture <ref> and its stronger form hold, as stated in Corollary <ref>.
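For the Hirasawa-Stoimenow 4-braid quoted above, the relevant numbers can be checked directly. The following sketch (our own encoding, as (i, j, exponent sign) triples) recovers the exponent sum and self-linking number, and derives the defect from the stated genus g() = 3; the helper arithmetic is ours, not taken from the paper:

```python
# The 4-braid sigma_{1,2} (sigma_{2,4})^2 (sigma_{1,2})^{-1} sigma_{1,3}
# sigma_{1,2} (sigma_{2,4})^{-1} (sigma_{1,2})^{-2} (sigma_{1,3})^{-2}:
word = [(1, 2, +1), (2, 4, +1), (2, 4, +1), (1, 2, -1), (1, 3, +1),
        (1, 2, +1), (2, 4, -1), (1, 2, -1), (1, 2, -1), (1, 3, -1), (1, 3, -1)]
n = 4

exp = sum(s for (_i, _j, s) in word)   # exponent sum = -1
sl = -n + exp                          # self-linking number = -5
minus_chi = 2 * 3 - 2 + 1              # -chi(L) = 5, using g(L) = 3, one component
delta = (minus_chi - sl) // 2          # defect delta(L) = 5

chi_Fw = n - len(word)                 # chi of this word's Bennequin surface = -7
genus_Fw = (2 - chi_Fw - 1) // 2       # its genus = 4, strictly above g(L) = 3
neg_bands = sum(1 for (_i, _j, s) in word if s < 0)   # 6 negative bands
assert neg_bands == (-chi_Fw - sl) // 2
```

This particular Bennequin surface has genus 4 > g() = 3 and 6 > δ() = 5 negative bands, consistent with needing a stabilized 5-braid representative to reach genus 3.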
Using Hedden's result on fibered topological knots <cit.>, we can immediately show that Conjecture <ref> holds for fibered transverse knots in (S^3, ξ_std). More generally, Etnyre and Van Horn-Morris give a characterization of fibered transverse links in general contact 3-manifolds on which the Bennequin-Eliashberg inequality (cf. Theorem <ref>) is sharp <cit.>. The aim of this paper is to study these conjectures in the setting of general contact 3-manifolds. First we recall a fundamental fact repeatedly used in this paper: In a general closed oriented contact 3-manifold supported by an open book (S, ϕ), every closed braid with respect to (S, ϕ) can be seen as a transverse link. Conversely, every transverse link can be represented by a closed braid with respect to (S, ϕ), which is uniquely determined up to positive braid stabilizations, positive braid destabilizations and braid isotopy (see <cit.> for the case of the disk open book (D^2, id) and <cit.> for the general case). Next, we set up some terminology. Let  be a null-homologous transverse link in a contact 3-manifold (M,ξ). We say that α∈ H_2(M,;) is a Seifert surface class if α =[F] for some Seifert surface F of . This is equivalent to α∈∂^-1([]), where [] ∈ H_1(;) is the fundamental class of ≅ S^1∪⋯∪ S^1 and ∂: H_2(M,;) → H_1(;) is the boundary homomorphism of the long exact sequence of the pair (M,). Let sl(,α) denote the self-linking number of  with respect to α. We say that a Seifert surface F of  is an α-Seifert surface if [F]=α∈ H_2(M,;). Let g(F) be the genus of F and χ(F) be the Euler characteristic of F. We define the genus and the Euler characteristic of  with respect to α by g(,α) := min{g(F) | F is an α-Seifert surface}, χ(, α) := max{χ(F) | F is an α-Seifert surface}. We have χ(,α)=2-2g(,α)-||, where || denotes the number of link components of . We recall a theorem of Eliashberg. The contact manifold (M,ξ) is tight if and only if for any null-homologous transverse link  and its Seifert class α we have sl(, α)≤ - χ(, α).
For an overtwisted contact manifold (M,ξ), the same inequality holds for any null-homologous, non-loose transverse link  and its Seifert class α. The second statement is attributed to Światkowski and a proof can be found in Etnyre's paper <cit.>. Theorem <ref> guides us to introduce the following invariant. We define the defect of the Bennequin-Eliashberg inequality with respect to α by δ(,α) := 1/2(-χ(,α)-sl(,α)). Note that δ(,α) is an integer and it can be any negative integer when ξ is overtwisted: To see this, we observe that a transverse push-off of an overtwisted disk gives a transverse unknot U bounding a disk, D, with sl(U,[D])=1 and δ(U,[D]) = -1. Taking a boundary connect sum of n copies of D (with bands each of which contains one positive hyperbolic point, as illustrated in Figure <ref> (i)) we get a disk, D_n, with sl(∂ D_n, [D_n])= 2n-1 and δ(∂ D_n,[D_n])=-n. In Definition <ref>, we define an α-Bennequin surface with respect to a general open book (S,ϕ) that supports a general contact 3-manifold as an α-Seifert surface admitting a disk-band decomposition adapted to the open book (S,ϕ). We say that a closed braid with respect to (S, ϕ) is α-strongly quasipositive if it is the boundary of an α-Bennequin surface without negatively twisted bands. We say that an α-Bennequin surface F is a minimum genus α-Bennequin surface if g(F)=g(,α). The definition of α-strong quasipositivity has been discussed by Etnyre, Hedden, and Van Horn-Morris since around 2009. It was formally introduced by Baykur, Etnyre, Hedden, Kawamuro and Van Horn-Morris in the SQuaRE meeting at the American Institute of Mathematics in July 2015, and is printed in the official SQuaRE report <cit.>. Later, Ito independently came up with the same definition. Hayden also has the same definition <cit.>. As we will see in Lemma <ref>, if  bounds a minimum genus α-Bennequin surface, then δ(,α) ≥ 0.
We expect that the converse is true: Let (S,ϕ) be an open book decomposition supporting a contact 3-manifold (M,ξ). Let  be a null-homologous transverse link in (M,ξ) with a Seifert surface class α∈ H_2(M,;). If δ(,α) ≥ 0 then  bounds a minimum genus α-Bennequin surface with respect to (S,ϕ). We list evidence for Conjecture <ref>. First, we show that a minimum genus Bennequin surface always exists if we forget the contact structure and only consider the topological link type of  in M. The statement is proved by Birman and Finkelstein in <cit.> for the special case where M=S^3 and (S, ϕ)=(D^2, id) is the disk open book. Let M be an oriented, closed 3-manifold with an open book decomposition (S,ϕ). For every null-homologous topological link type  in M and its Seifert surface class α∈ H_2(M,;),  bounds a minimum genus α-Bennequin surface with respect to (S,ϕ). Second, in Proposition <ref> we show that under some condition on the fractional Dehn twist coefficient, a transverse link bounds a minimum genus Seifert surface which is almost an α-Bennequin surface. Third, recall Bennequin's result <cit.>. The following stronger statement in <cit.> is proved by Birman and Menasco, in which a subtle gap in <cit.> concerning pouches is fixed: Any minimal genus Seifert surface of a closed 3-braid is isotopic to a Bennequin surface with the same boundary. This statement implies that Conjecture <ref> holds for closed 3-braids with respect to the disk open book (D^2,id). The following Proposition <ref> and Theorem <ref> motivate us to study Conjecture <ref>. Due to Theorem <ref>, (M,ξ) is tight if and only if δ(,α)≥ 0 for every null-homologous  and its Seifert class α. Thus, if Conjecture <ref> is true then we obtain a new formulation of tightness in terms of α-Bennequin surfaces: Let (S,ϕ) be an open book decomposition supporting a contact 3-manifold (M,ξ).
For every null-homologous transverse link  in (M,ξ) and its Seifert surface class α, suppose that the link bounds a minimum genus α-Bennequin surface with respect to (S,ϕ). Then (M,ξ) is tight. The converse of the above statement holds if Conjecture <ref> is true. Namely, if Conjecture <ref> is true and (M,ξ) is tight, then for every null-homologous transverse link  in (M,ξ) and every Seifert surface class α∈ H_2(M,;), the link bounds a minimum genus α-Bennequin surface with respect to (S,ϕ). In the setting of general open books, Conjectures <ref> and <ref> can be extended to Conjectures <ref>' and <ref>', respectively, as follows: Let (S,ϕ) be an open book decomposition supporting a contact 3-manifold (M,ξ). Let  be a null-homologous transverse link in (M, ξ) and α∈ H_2(M, ;) be a Seifert surface class. Conjecture <ref>'. If δ(, α)≥ 0 then  bounds an α-Bennequin surface with δ(, α) negative bands with respect to (S, ϕ). Conjecture <ref>'. If δ(, α)=0 (in which case we say that the Bennequin-Eliashberg inequality is sharp on (, α)) then  is represented by an α-strongly quasipositive braid with respect to (S, ϕ). Conjecture <ref>' is raised as a question in the SQuaRE report <cit.>. It is also stated in <cit.> that a strongly quasipositive link bounds a minimal genus Bennequin surface. We remark that for a general open book, the counterpart of the stronger form of Conjecture <ref> does not hold. In Example <ref>, with a fixed open book (S, ϕ) of a contact manifold (M, ξ), we give an example of a transverse knot  in (M,ξ) with δ()=0 which does bound a minimum genus Bennequin surface, but no braid representative of  realizing the minimum braid index with respect to (S, ϕ) can bound a minimum genus Bennequin surface.
If Conjecture <ref> is true then Conjectures <ref>' and <ref>' are true. Theorem <ref> and the above-mentioned 3-braid result yield the following. Let  be a transverse link in (S^3, ξ_std) of braid index b()=3 with respect to the disk open book (D^2,id). Then  bounds a minimal genus Bennequin surface that consists of δ() negatively twisted bands, a number of positively twisted bands, and three disks. In particular, the Bennequin inequality is sharp on  if and only if the 3-braid is (braid isotopic to) a strongly quasipositive braid. §.§ Main results Our first main result, Theorem <ref>, confirms Conjecture <ref>' under some assumptions. Let (S, ϕ) be an open book and C be a connected component of the binding of (S, ϕ), which we will call a binding component. Let K be a closed braid with respect to (S, ϕ) and c(ϕ,K,C) be the fractional Dehn twist coefficient (FDTC) of the closed braid K with respect to the binding component C (see Section <ref> for the definition). Let (S,ϕ) be an open book decomposition supporting a contact 3-manifold (M,ξ) and  be a null-homologous transverse link in (M,ξ) with a Seifert surface class α∈ H_2(M,;). Assume the following: (i) S is planar. (ii) M does not contain a non-separating 2-sphere (i.e., M does not contain an S^1× S^2 in its connected summands). (iii)  has a closed braid representative K with respect to (S, ϕ) which bounds an α-Seifert surface F such that: (iii-a) g(F)=g(,α). (iii-b) Among all the binding components of (S, ϕ), only C intersects F. (iii-c) c(ϕ,K,C)>1. Then δ(,α)=0 if and only if K is α-strongly quasipositive with respect to (S, ϕ). In particular, δ(,α)=0 if and only if  is represented by an α-strongly quasipositive braid. If we drop assumption (i) or (iii-c), as shown in Examples <ref> and <ref>, K may fail to be α-strongly quasipositive. However, we note that this does not mean failure of Conjecture <ref>', since some positive stabilization of K has a good chance of being α-strongly quasipositive.
Our second main result, Theorem <ref> (and Corollary <ref>), shows that Conjecture <ref> holds for the disk open book (D^2,id) under an assumption of large FDTC. Let  be a transverse link in (S^3, ξ_std). Consider the disk open book (D^2,id) that supports (S^3, ξ_std). If  admits a closed braid representative K such that c(id,K,∂ D^2) > δ()/2+1, then  (in fact, K itself or K with one positive stabilization) bounds a minimum genus Bennequin surface with respect to (D^2,id). Moreover, if δ() =0 and c(id,K,∂ D^2) >1 then K is a strongly quasipositive braid. Let  be a transverse link in (S^3, ξ_std). Assume that  is represented by a closed braid K with c(id,K, ∂ D^2) >1 realizing the braid index b(). Then the Bennequin inequality for  is sharp if and only if  is represented by a strongly quasipositive braid of braid index b() with respect to (D^2, id). (Namely, the stronger form of Conjecture <ref> holds.) In Example <ref> we present examples of braids satisfying the conditions of Theorem <ref> and Corollary <ref>. In particular, our examples contain many non-fibered knots, which shows the independence of our results from Hedden's <cit.>. Although it looks restrictive, the large FDTC assumption is satisfied by almost all braids: Indeed, given a random n-braid β and a number C, the probability that |c(id, β,∂ D^2)|≤ C is zero (see <cit.> for the precise meaning of “random”). § THE FDTC FOR CLOSED BRAIDS IN OPEN BOOKS In this section we review closed braids in open books and the FDTC for closed braids. Let S be an oriented compact surface with non-empty boundary, and P={p_1,…,p_n} be a (possibly empty) finite set of points in the interior of S.
Let MCG(S,P) (denoted by MCG(S) if P is empty) be the mapping class group of the punctured surface S ∖ P; that is, the group of isotopy classes of orientation-preserving homeomorphisms of S fixing ∂ S point-wise and fixing P set-wise. With respect to a connected boundary component C of S, the fractional Dehn twist coefficient (FDTC) of ϕ∈ MCG(S, P), defined in <cit.>, is a rational number c(ϕ,C) which measures how much the mapping class ϕ twists the surface near the boundary C. Let (M,ξ) be a closed oriented contact 3-manifold supported by an open book decomposition (S,ϕ). (See <cit.> for the meaning of “supported”.) Let B⊂ M be the binding of the open book and π: M∖ B → S^1= [0,1] (0∼ 1) be the associated fibration. For t ∈ S^1 we denote the closure of the fiber π^-1({t}) by S_t and call it a page. The topological type of S_t is S, and the orientation of S_t induces the orientation of the binding B. Since M∖ B is diffeomorphic to S × [0,1] / (x, 1)∼(ϕ(x), 0) we may denote S ×{t} by the same notation S_t. Let p: M∖ B → S be the projection map such that p|_S_t: S_t → S gives a diffeomorphism. A closed braid K with respect to (S,ϕ) is an oriented link in M ∖ B which is positively transverse to each page. Two closed braids are called braid isotopic if they are isotopic through closed braids. The number of intersection points of K and the page S_t is denoted by n(K) and called the braid index of K with respect to (S, ϕ). Let B_n(S) be the n-stranded surface braid group for S (see <cit.> for the definition). Cutting M ∖ B along the page S_0 we get a cylinder int(S) × (0,1), and the closed braid K gives rise to a surface braid β_K∈ B_n(S) with n=n(K) strands. The converse direction, namely obtaining a closed braid from a surface braid β∈ B_n(S), requires more care.
Recall the generalized Birman exact sequence <cit.> 1 ⟶ B_n(S) i⟶ MCG(S,P) f⟶ MCG(S) ⟶ 1, where i is the push map and f is the forgetful map. Since f is not injective, we have various ways to construct a closed braid K from a given braid β∈ B_n(S). We recall the definition in <cit.> of the FDTC c(ϕ,K,C) of K as follows. Suppose that the mapping class ϕ∈ MCG(S) is represented by a homeomorphism f ∈ Homeo^+(S, ∂ S). For a connected boundary component C of S, let us choose a collar neighborhood ν(C) ⊂ S of C. We may assume that f fixes ν(C) point-wise. We say that a closed braid K is based on C if P= p(K ∩ S_0) ⊂ν(C). We may isotop K through closed braids so that K is based on C. Since f|_ν(C) = id, the puncture set P is pointwise fixed by f; thus, we may view f as an element of Homeo^+(S, P, ∂ S). In order to distinguish f∈ Homeo^+(S, ∂ S) and f∈ Homeo^+(S, P, ∂ S) we denote the latter by j(f). The map j induces a homomorphism j_*:MCG(S) → MCG(S,P) which satisfies j_*(ϕ) = [j(f)]. Let K be a closed braid with respect to (S,ϕ) and based on C. The distinguished monodromy of the closed braid K with respect to C is the mapping class ϕ_K = i(β_K) ∘ j_*(ϕ) ∈ MCG(S,P). Here i denotes the push map in the generalized Birman exact sequence. The FDTC of a closed braid K with respect to C is defined by c(ϕ,K,C) := c(ϕ_K,C). The FDTC c(ϕ, K, C) is well-defined; namely, if braids K_1 and K_2 are braid isotopic and p(K_i ∩ S_0) ⊂ν(C) for both i=1, 2, then c(ϕ, K_1, C)=c(ϕ, K_2, C).
In fact, a stronger statement can be found in <cit.>. Due to the dual Garside structure of the braid group B_n coming from the band generators σ_i,j, <cit.> states that each β∈ B_n can be represented by a unique left canonical normal form N(β) = δ^N x_1 ⋯ x_k, where δ = σ_n-1,nσ_n-2,n-1⋯σ_1,2 and x_1,…,x_k are certain strongly quasipositive braids called the dual simple elements. As a homeomorphism of a disk with n punctures evenly distributed along the boundary, δ rotates the disk by 2 π/n. The integer N in the normal form N(β)= δ^N x_1 ⋯ x_k is called the infimum of β <cit.> and denoted by inf(β). The infimum of an n-stranded closed braid K with respect to the disk open book (D^2, id) is defined by inf(K) = max{inf(β) | β∈ B_n whose closure is K}. We observe that K is strongly quasipositive if and only if inf(K)≥ 0. Although both inf(K) and c(id,K,∂ D^2) count the number of twists near the boundary ∂ D^2 in certain ways, in general there is no direct connection between them. For example, for n≥ 3 let β=σ_1,2σ_2,3⋯σ_n-1,nσ_1,n∈ B_n (read from left to right) and let K_m be the braid closure of β^m where m∈ℕ. The following discussion shows that inf(K_m)=0 and c(id,K_m,∂ D^2)=m. Let x_i:=σ_[i],[i+1] for every i∈ℕ, where 1≤ [i]≤ n denotes the unique integer that satisfies [i] ≡ i mod n, and σ_n,1:=σ_1,n. The normal form of β^m is N(β^m) = x_1 x_2 ⋯ x_nm = ((σ_1,2)(σ_2,3)⋯ (σ_n-1,n)(σ_1,n) )^m. Therefore, inf(β^m)=0. Furthermore, β^m is rigid; that is, conjugation by the first factor of the normal form, x_1=σ_1,2: x_1^-1 N(β^m) x_1=x_2x_3⋯ x_nm x_1 produces a normal form of the braid σ_1,2^-1β^mσ_1,2. This implies that β^m attains the maximum infimum among its conjugacy class <cit.>; hence, inf(K_m)=0. On the other hand, looking at the image of a properly embedded arc γ under the braid β^m we obtain T_∂ D^m-1 (γ) ≥β^m(γ) ≥ T_∂ D^m (γ) for all m∈ℕ.
Thus by <cit.> and <cit.> we have 1-1/m ≤ c(id,K_1,∂ D^2) ≤ 1 for all m∈ℕ. This gives c(id,K_1,∂ D^2) =1, and it follows that c(id,K_m,∂ D^2)=m. § SUMMARY OF RESULTS IN OPEN BOOK FOLIATIONS In this section, we review properties of open book foliations that are needed to prove our main theorems. For details, see <cit.>. Let (S,ϕ) be an open book decomposition supporting a contact 3-manifold (M,ξ). Throughout this section, the open book decomposition (S, ϕ) is fixed. Let K be a closed braid with respect to (S, ϕ) and F be a Seifert surface of K. With an isotopy fixing K=∂ F, <cit.> shows that F can admit a singular foliation (F):={ F ∩ S_t|t ∈ [0, 1] } induced by the intersection with the pages of the open book and satisfying the following conditions. (ℱ i) The binding B pierces F transversely in finitely many points. At each p ∈ B ∩ F there exists a disc neighborhood N_p⊂ F of p on which the foliation (N_p) is radial with the node p; see Figure <ref>-(i). We call p an elliptic point. (ℱ ii) The leaves of (F) are transverse to K=∂ F. (ℱ iii) All but finitely many pages S_t intersect F transversely. Each exceptional page is tangent to F at a single point that lies in the interiors of both F and S_t. In particular, (F) has no saddle-saddle connections. (ℱ iv) All the tangencies of F and fibers are of saddle type; see Figure <ref>-(ii). We call them hyperbolic points. Such a foliation (F) is called an open book foliation on F. An elliptic point p is positive (resp. negative) if the binding B is positively (resp. negatively) transverse to F at p. A hyperbolic point q ∈ F ∩ S_t is positive (resp. negative) if the orientation of the tangent plane T_q(F) agrees (resp. disagrees) with the orientation of T_q(S_t). A leaf of (F), a connected component of F ∩ S_t, is called regular if it does not contain a hyperbolic point, and singular otherwise. The regular leaves are classified into the following three types.
a-arc: An arc one of whose endpoints lies on B while the other lies on K. b-arc: An arc both of whose endpoints lie on B. c-circle: A simple closed curve. The leaves of (F) are equipped with orientations as follows (cf. <cit.> and <cit.>). Take a non-singular point p on a leaf l in a page S_t. Let n⃗_F ∈ T_p S_t ⊂ T_pM be a positive normal vector to the tangent space T_p F and let v ∈ T_pl be a vector such that (n⃗_F, v) gives an oriented basis for the tangent space T_pS_t. The vector v defines the orientation of the leaf l. With this orientation of leaves, every positive (resp. negative) elliptic point becomes a source (resp. sink), and the leaves point out of the surface F along the boundary ∂ F. According to the types of nearby regular leaves, hyperbolic points are classified into six types: Type aa, ab, ac, bb, bc and cc. Each hyperbolic point has a canonical neighborhood as depicted in Figure <ref>, which we call a region. We denote by (R) the sign of the hyperbolic point contained in the region R. If (F) contains at least one hyperbolic point, then we can decompose F as a union of regions with disjoint interiors. We call such a decomposition a region decomposition. One can read the Euler characteristic and the self-linking number from the open book foliation. <cit.> Let F be a Seifert surface of a transverse link  admitting an open book foliation (F). Let e_± (resp. h_±) be the numbers of positive and negative elliptic (resp. hyperbolic) points of (F). Then the self-linking number satisfies sl(,[F])= -(e_+ - e_-) + (h_+-h_-). For the Euler characteristic we have χ(, [F]) ≥χ(F) = (e_+ + e_-) - (h_++h_-). Therefore, δ(,[F]) ≤ h_--e_-. In particular, if g(F)=g(,[F]) then δ(,[F]) = h_--e_-. We say that a b-arc b in a page S_t is essential if b is not boundary-parallel as an arc of the punctured page S_t∖ (S_t∩ K). We say that an open book foliation (F) is essential if all the b-arcs are essential (cf. <cit.>).
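The point-count formulas of the Lemma are straightforward to evaluate. A minimal sketch (the helper name and the sample counts are ours):

```python
def foliation_invariants(e_plus, e_minus, h_plus, h_minus):
    """Self-linking number, Euler characteristic of F, and the upper
    bound h_- minus e_- on the defect, read off an open book foliation."""
    sl = -(e_plus - e_minus) + (h_plus - h_minus)
    chi = (e_plus + e_minus) - (h_plus + h_minus)
    return sl, chi, h_minus - e_minus

# Bennequin surface of the closure of sigma_1^3 in B_2 (positive trefoil):
# two disks give e_+ = 2, three positive bands give h_+ = 3.
sl, chi, bound = foliation_invariants(2, 0, 3, 0)
assert (sl, chi, bound) == (1, -1, 0)
```

Here g(F) = 1 equals the trefoil's genus, so the last part of the Lemma gives δ = h_- - e_- = 0 exactly, i.e. a sharp Bennequin inequality.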
The next theorem shows that an incompressible surface admits an essential open book foliation after desumming essential spheres, that is, 2-spheres that do not bound 3-balls: <cit.> Suppose that F is an incompressible Seifert surface of a closed braid K. Then there exist a Seifert surface F' of K admitting an essential open book foliation and essential spheres 𝒮_1,…,𝒮_k such that F is isotopic to F'#𝒮_1 #⋯#𝒮_k by an isotopy that fixes K=∂ F. Moreover, if F does not intersect a binding component C then neither does F'. Here is a corollary of Theorem <ref> which we use later in the proofs of our main results. Assume that M contains no non-separating 2-spheres. Let K be a closed braid representative of a null-homologous transverse link  and F be an incompressible Seifert surface of K. Then there is an incompressible Seifert surface F' of K with the following properties: * F' admits an essential open book foliation.* [F]=[F'] ∈ H_2(M,;).* g(F')=g(F).* If F does not intersect a binding component C then neither does F'. The following theorem gives a connection between essential open book foliations and the FDTC of braids. <cit.> Let F be an incompressible Seifert surface of a closed braid K equipped with an essential open book foliation. Let v_1,…, v_n be negative elliptic points which lie on the same binding component C. Let N be the number of negative hyperbolic points that are connected to at least one of v_1,…,v_n by a singular leaf. Then we have c(ϕ,K,C) ≤ N/n. § GENERALIZED BENNEQUIN SURFACES §.§ Definition of α-Bennequin surfaces In this subsection, we generalize the notion of Bennequin surfaces in S^3 with respect to the disk open book (D^2, id) to Bennequin surfaces in a general manifold M with respect to a general open book (S, ϕ). Let (S,ϕ) be an open book. We take an annular neighborhood ν=ν(∂ S) ⊂ S of ∂ S and fix a homeomorphism ν≈|∂ S|⊔ S^1× [0,1) so that ∂ S ⊂ν is identified with |∂ S|⊔ S^1 ×{0}. Take a set of points P={p_1,…,p_n} so that P ⊂|∂ S|⊔ S^1×{1/2}.
Let 1/2ν:=|∂ S|⊔ S^1× [0,1/2) ⊂ν. See Figure <ref>. We view B_n(S) as a subgroup of MCG(S,P) through the push map i in the generalized Birman exact sequence (<ref>). We say that a braid w ∈ B_n(S) is a positive (resp. negative) band-twist if w∈ MCG(S,P) is a positive (resp. negative) half twist about a properly embedded arc in S ∖1/2ν connecting two distinct points in P. See Figure <ref> and Figure <ref>. A band-twist factorization of a braid β∈ B_n(S) is a factorization of β into a word w_1⋯ w_m, where each w_i is a band-twist. In the case S=D^2, a band-twist factorization is nothing but a factorization using the band generators σ_i,j. When g(S)>0, some braids in B_n(S) may not admit band-twist factorizations: For example, a non-trivial 1-braid in B_1(S)≅π_1(S) does not admit a band-twist factorization. For a closed braid representative K of a transverse link  we may isotop K through closed braids so that p(K ∩ S_0)=P. Let β_K∈ B_n(S) be the n-braid obtained from K by cutting M along S_0. Suppose that β_K admits a band-twist factorization w=w_1⋯ w_m. Take n disjoint meridional disks of the binding B bounded by P× [0,1]/(x,1)∼(x,0). Take a sequence 0< t_1 < ⋯ < t_m < 1. For each positive (resp. negative) band-twist w_i ∈ MCG(S, P), let γ_i be a properly embedded arc in S ∖1/2ν joining distinct points x_i and y_i∈ P such that a positive (resp. negative) half-twist about γ_i represents w_i. For each i=1,⋯,m, we attach a positively (resp. negatively) twisted band whose core is γ_i ×{t_i}⊂ S_t_i to the two meridional disks corresponding to the puncture points x_i and y_i; see Figure <ref>. The resulting surface is a Seifert surface of the closed braid K and is denoted by F_w. Let (S,ϕ) be an open book decomposition supporting a contact 3-manifold (M,ξ). Let  be a null-homologous transverse link in (M,ξ) and α∈ H_2(M,;) be a Seifert surface class. Let K be a closed braid representative of  with respect to (S, ϕ).
(1): An α-Seifert surface F of K is called an α-Bennequin surface of K with respect to (S,ϕ) if F admits an open book foliation whose region decomposition consists of only aa-tiles. (2): We say that the closed braid K is α-strongly quasipositive with respect to (S,ϕ) if it is the boundary of an α-Bennequin surface without negative hyperbolic points. (In this case, we also say that is α-strongly quasipositive.) * As noted in Section <ref>, the definition of α-strongly quasipositive with respect to an open book (Definition <ref> (2)) has been introduced in <cit.>. Hayden independently has the same definition <cit.>. * It is straightforward from Definition <ref> (2) that the Bennequin-Eliashberg inequality is sharp on every α-strongly quasipositive transverse link, which is also stated in <cit.>. * When (S, ϕ)=(D^2, id), the above α-Bennequin surface is the same as the Bennequin surface defined by Birman and Menasco in <cit.>, and the above α-strongly quasipositive is the same as strongly quasipositive defined by Rudolph <cit.>. The Seifert surface F_w constructed from a band-twist factorization w of a braid β_K is an α-Bennequin surface, where α = [F_w]∈ H_2(M, K;). The open book foliation of each meridional disk of F_w contains one positive elliptic point at the intersection with the binding B and a-arcs emanating from the elliptic point. The open book foliation of each ±-twisted band contains one singular leaf with a ± hyperbolic point such that its stable separatrix is the arc γ_i ×{t_i}, using the notation in Definition <ref>.
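For readers unfamiliar with the band generators σ_i,j mentioned above, the following records the standard Birman–Ko–Lee convention in the Artin generators of B_n; the exact normalization varies in the literature, so this should be read as one common choice rather than necessarily the convention of the cited papers.

```latex
% Band generators of B_n = B_n(D^2) in terms of Artin generators:
\sigma_{i,j} \;=\; (\sigma_{j-1}\sigma_{j-2}\cdots\sigma_{i+1})\,
                   \sigma_i\,
                   (\sigma_{i+1}^{-1}\cdots\sigma_{j-1}^{-1}),
\qquad 1 \le i < j \le n,
% so that \sigma_{i,i+1} = \sigma_i.  Geometrically, \sigma_{i,j} is a
% positive half-twist about an arc joining the punctures i and j, i.e.
% a positive band-twist in the sense above when (S,\phi)=(D^2,\mathrm{id}).
```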
The boundary of every α-Bennequin surface with respect to (S,ϕ) is a closed braid with respect to (S, ϕ) which admits a band-twist factorization. Since an α-Bennequin surface F admits an aa-tile decomposition, F is a union of disks, each of which is a regular neighborhood of a positive elliptic point q_k of (F), and twisted bands, each of which is a rectangular neighborhood of a singular leaf containing one hyperbolic point of (F). The sign of each twisted band is equal to the sign of the corresponding hyperbolic point. Up to isotopy that preserves the topological type of the open book foliation (F) we may assume that: * Each disk is centered at q_k and its boundary is described as p_k × [0,1]/(p_k,1) ∼ (p_k,0) for some point p_k ∈1/2ν, and * The stable separatrices η_1,⋯,η_m of the singular leaves of (F) lie on distinct pages S_t_i for some 0<t_1 <⋯ < t_m<1. Then the projection γ_i=p(η_i)⊂ S is a properly embedded arc in S joining points, say q_k_i and q_k'_i∈∂ S. We may further assume that p_k_i, p_k'_i∈γ_i and denote the subarc of γ_i joining p_k_i and p_k'_i by γ'_i. Let w_i be a band-twist represented by an ϵ_i half-twist about the arc γ'_i, where ϵ_i ∈{±1} is the sign of the hyperbolic point that γ_i contains. Let w = w_1 ⋯ w_m ∈ MCG(S, P). Then we see that F is homeomorphic to the surface F_w. A closed braid K is α-strongly quasipositive with respect to (S, ϕ) if and only if β_K is a product of positive band-twists. §.§ Minimum genus Bennequin surfaces In this section, we prove Theorems <ref> and <ref>. We begin with a simple observation: if a transverse link is the boundary of a Bennequin surface then it satisfies the Bennequin-Eliashberg inequality. Let be a null-homologous transverse link and α∈ H_2(M,;) be a Seifert surface class. If is the boundary of a minimal genus α-Bennequin surface then δ(,α) ≥ 0. Since any α-Bennequin surface has e_-=0, Lemma <ref> gives δ(,α)= h_-≥ 0.
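The genus computations here and in the examples below reduce to the elementary disk-and-band count; we record it for convenience (this is the standard Euler characteristic computation, not a quotation of the cited lemma). An aa-tile decomposition presents F as e_+ disks, one per positive elliptic point, joined by h_+ + h_- twisted bands, one per hyperbolic point, so

```latex
\chi(F) \;=\; \underbrace{e_+}_{\#\text{disks}} \;-\; \underbrace{(h_+ + h_-)}_{\#\text{bands}},
\qquad
g(F) \;=\; \frac{2 - \chi(F) - |\partial F|}{2}
       \;=\; \frac{2 - e_+ + h_+ + h_- - |\partial F|}{2}.
```

In particular, for an α-Bennequin surface bounded by a knot, g(F) = (1 - e_+ + h_+ + h_-)/2.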
Proposition <ref> and Theorem <ref> easily follow from Lemma <ref>. If (M,ξ) is overtwisted, then by the Bennequin-Eliashberg inequality theorem (Theorem <ref>), there is a transverse link and its Seifert surface class α such that δ(,α)<0 (e.g. take a transverse push-off of the boundary of an overtwisted disk). By Lemma <ref> such a transverse link cannot bound a minimum genus α-Bennequin surface. This proves the contrapositive of the first statement of the proposition. To see the second statement of the proposition, we assume that (M, ξ) is tight. Theorem <ref> and the truth of Conjecture <ref> imply that for any null-homologous transverse link and its Seifert surface class α, bounds a minimal genus α-Bennequin surface with respect to (S, ϕ). Assume that δ(,α)≥ 0 for some and α. The truth of Conjecture <ref> implies that there exists an α-Bennequin surface F with g(F)=g(,α). Let p (resp. n) be the number of positively (resp. negatively) twisted bands in F. By a property of the geometric definition of an α-Bennequin surface, p (resp. n) is equal to the number of positive (resp. negative) hyperbolic points of the open book foliation (F). Any α-Bennequin surface has e_-=0. By Lemma <ref>, δ(,α) = h_-=n. Thus Conjectures 1' and 2' hold. Next we prove Theorem <ref>, which guarantees the existence of minimum genus Bennequin surfaces for every topological link type. Let C be a connected component of the binding of the open book (S, ϕ). Let μ_C be a meridian of C whose orientation is induced from the orientation of C by the right-hand rule. We say that a closed braid K' is a positive (resp. negative) stabilization of a closed braid K about C, if K' is the band sum of μ_C and K with a positively (resp. negatively) twisted band. See Figure <ref> (i). Here, a positively (resp. negatively) twisted band is an oriented rectangle whose foliation induced by the pages of the open book has a unique positive (resp.
negative) hyperbolic point. The boundary edges are oriented by the boundary orientation of the rectangle. The edges bc and da are in braid position; namely, positively transverse to the pages of the open book. On the other hand, the edges ab and cd are negatively transverse to the pages. (Therefore, the condition (ℱ ii) of open book foliations is not satisfied at the corner points a, b, c and d.) The edge ab is attached to μ_C and the edge cd is attached to K. Both positive and negative stabilizations preserve the topological link type of the closed braid K. A positive stabilization preserves the transverse link type of K, whereas a negative stabilization does not. Recall the fact that one can remove an ab-tile by a stabilization, see Figure <ref> (ii) and <cit.>. Let K be a null-homologous closed braid with respect to (S, ϕ) and F be a Seifert surface of K admitting an open book foliation. Assume that the region decomposition of F has an ab-tile R. Let C denote the binding component on which the negative elliptic point of R lies. If (R)=+1 (resp. -1) then a negative (resp. positive) stabilization of K about C can remove the ab-tile R. As a result, ab-tiles, bb-tiles and bc-annuli that share the negative elliptic point become aa-tiles, ab-tiles and ac-annuli, respectively. We push K=∂ F across the unstable separatrix of the hyperbolic point in R. See Figure <ref> (ii). Then the resulting closed braid is a stabilization of K about C and the sign of the stabilization is positive (resp. negative) if (R)=-1 (resp. (R)=+1). Take a closed braid representative K of a null-homologous topological link type . Let F be an α-Seifert surface of K with g(F)=g(,α). By an isotopy fixing K we may put F in a position so that F admits an open book foliation (F). By <cit.> we may assume that (F) contains no c-circles. By Lemma <ref>, after sufficiently many positive and negative stabilizations, we can remove all the ab-tiles without producing new c-circles.
As a consequence, existing bb-tiles may become ab-tiles. Then we remove the new ab-tile as well by further stabilizations. After removing all the ab-tiles and bb-tiles, the region decomposition consists of only aa-tiles; thus, we obtain an α-Bennequin surface. § PROOFS OF THE MAIN THEOREMSThe goal of this section is to prove the main results (Theorems <ref> and <ref> and Corollary <ref>).§.§ Lemmas for the main results Let F be a Seifert surface of a closed braid K with respect to (S,ϕ). Assume that F admits an open book foliation (F).Fix a region decomposition of (F).To relate the open book foliation and the FDTC, we use the following graph G_– which is a slight modification of the graph G_– introduced in <cit.>.Let R be an ab-tile, a bb-tile or a bc-annulus in the region decomposition of (F).If (R)=-1 then the graph G_R on R is as illustrated in Figure <ref>.If (R)=+1 then G_R is defined to be empty. Also, if R is an aa-tile, an ac-annulus or a cc-pants then G_R is defined to be empty.The union of graphs G_R over all the regions of the region decomposition andall the negative elliptic points gives a(possibly not connected) graph, G_–, contained in F.We call the graph G_– the extended graph of G_–.There are two types of vertices in G_–.We say that a vertex of G_– is fake if it is not a negative elliptic point as depicted with a hollow circle in Figure <ref>. A negative elliptic point is called a non-fake vertex. 
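The valence counting in the next lemma rests on two elementary graph identities, the handshake lemma and the Euler formula for a finite graph. As a quick sanity check (our own illustration, not an example from the sources), consider a path graph with three vertices and two edges:

```latex
% Handshake lemma and Euler formula for a finite graph G:
\sum_{i \ge 0} i\,v_i = 2w, \qquad \chi(G) = \sum_{i \ge 0} v_i - w,
% where v_i = \#\{\text{vertices of valence } i\} and w = \#\{\text{edges}\}.
%
% Path graph with 3 vertices and 2 edges: v_1 = 2, v_2 = 1, w = 2, so
%   \sum_i i\,v_i = 1\cdot 2 + 2\cdot 1 = 4 = 2w,
%   \chi = (v_1 + v_2) - w = 3 - 2 = 1.
```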
In Lemma <ref> and Propositions <ref>, <ref> and <ref>, we assume that K is a closed braid with respect to an open book (S,ϕ) representing a null-homologous transverse link , and F is a Seifert surface of K with (I) δ(, [F])≥ 0, (II) g(F)=g(K, [F]); that is, F is incompressible and δ(,[F]) ≤ h_--e_- by Lemma <ref>, (III) F admits an open book foliation (F) (which may not be essential). If the open book foliation (F) contains negative elliptic points then the extended graph G_– contains a non-fake vertex of valence less than or equal to δ(,[F])+2. First suppose that e_-=1. Let d denote the valence of the unique negative elliptic point of (F). Note that h_- ≥ d because every edge of G_– contains one negative hyperbolic point. By the condition (II), we have δ(,[F]) = h_- - e_-. Thus, δ(,[F]) = h_- - e_- ≥ d-1 and d ≤δ(,[F])+1. Next we assume that e_- ≥ 2. For i≥ 0, let v_i be the number of vertices of G_– whose valence is i and let w be the number of edges of G_–. Then we have ∑_i≥ 0 iv_i = 2w and χ=∑_i≥ 0 v_i-w, where χ=χ(G_–) is the Euler characteristic of the extended graph G_–. Therefore, ∑_i≥ 2(i-2)v_i = -2χ + v_1 +2v_0. If there is a non-fake vertex of valence less than or equal to two, then we are done, since 2 ≤δ(,[F])+2 by the condition (I). Suppose that every non-fake vertex has valence greater than two (i.e., v_0=v_2=0). Then, since every fake vertex has valence one, v_1 is equal to the number of fake vertices. By Definition <ref> we have h_- ≥ w and e_- = ∑_i≥ 0 v_i - #{fake vertices} = ∑_i≥ 0 v_i - v_1. By (<ref>), δ(, [F]) = h_- - e_- ≥ w-e_- = w - (∑_i≥ 0 v_i) + v_1 = -χ +v_1. Therefore by (<ref>) we get an inequality ∑_i≥ 2(i-2)v_i ≤ 2δ(, [F]) - v_1. Recall that v_0=v_2=0. Let j=min{i>2 | v_i ≠ 0}. Since G_– contains at least two non-fake vertices, by (<ref>), (j-2) · 2 ≤ (j-2) · e_- ≤ (j-2) ∑_i≥ 3 v_i ≤∑_i≥ 2(i-2)v_i ≤ 2δ(, [F]) - v_1.
Therefore, j ≤δ(,[F]) - v_1/2 + 2 ≤δ(,[F])+2. Propositions <ref> and <ref> below show that, under an assumption of large FDTC and essentiality of the open book foliation, an α-Seifert surface is `close' to an α-Bennequin surface in the sense that its open book foliation has no negative elliptic points (but may have c-circles). Assume that the open book foliation (F) is essential. If c(ϕ,K,C)>δ(,[F])+2 for every binding component C that intersects F, then (F) has no negative elliptic points (but possibly it has c-circles). Assume to the contrary that the open book foliation (F) has negative elliptic points. By Lemma <ref>, there exists a non-fake vertex v of G_– whose valence is less than or equal to δ(,[F])+2. By Theorem <ref>, this implies that c(ϕ,K,C) ≤δ(,[F])+2, which contradicts our assumption. If F intersects exactly one binding component then we can say more with a smaller lower bound on the FDTC. Assume that the open book foliation (F) is essential. Let C be a binding component. Let e_C denote the number of negative elliptic points in (F) that are on the binding component C. * If c(ϕ,K,C)> 1/kδ(,[F]) + 1 for some k≥ 1 and δ(,[F]) >0 then e_C < k. * If c(ϕ,K,C)> 1 and δ(,[F])=0 then e_C=0. * If c(ϕ,K,C)> 1 for all the binding components and δ(,[F])=0 then e_-=h_-=0. If e_- = 0 then we are done. We may assume that e_- ≥ 1. Let e_i denote the number of negative elliptic points that are on the binding component C_i. We have e_-=∑_i e_i. Suppose that e_i≥ 1. Let N (≥ 0) be the number of negative hyperbolic points of type either ab, bb or bc. Note that h_- ≥ N. Every negative hyperbolic point of type ab, bb or bc is connected to at least one negative elliptic point by a singular leaf. By Theorem <ref> and the condition (II) we have c(ϕ,K,C_i) ≤ N/e_i ≤ h_-/e_i = δ(,[F])/e_i + e_-/e_i ≤δ(,[F])/e_i + 1. * Assume that 1/kδ(,[F]) + 1<c(ϕ,K,C) for some k≥ 1 and δ(,[F]) >0.
If k=1 and e_i≥ 1 then inequalities (<ref>) and (<ref>) yield e_i < 1, which is a contradiction. Therefore, when k=1 we must have e_i=0. If k≥ 2 and e_i≥ 1 then inequalities (<ref>) and (<ref>) yield e_i < k. * If δ(,[F])=0 and 1< c(ϕ,K,C_i) then inequality (<ref>) gives 1 < 1, which is a contradiction. Therefore, in this case e_i=0. * The last statement follows from (2) and e_-=∑_i e_i. The next proposition gives a criterion for strongly quasipositive braids. Assume the following. (i) All the elliptic and hyperbolic points of (F) are positive. (ii) The page S is planar. (iii) Only one binding component intersects F. Then F is an [F]-Bennequin surface and K is a strongly quasipositive braid. Let C be the unique binding component that intersects F. By the assumption (ii), if there exists a c-circle, c, in a page S=S_t then c separates S into two components. Let X be the connected component of S ∖ c that contains C. Recall our orientation convention for leaves as defined in Section <ref>. We say that c is coherent with respect to C if the leaf orientation of c agrees with the boundary orientation of c ⊂∂ X. Otherwise, we say that c is incoherent. For simplicity, we omit writing `with respect to C' in the following. By the assumption (i), there are no negative elliptic points. Therefore, the region decomposition of F consists of only aa-tiles, ac-annuli and cc-pants, each of which has a positive hyperbolic point. First, let us consider how an ac-hyperbolic point changes the types of local regular leaves. (1) An a-arc forms a positive hyperbolic point h with itself and then splits into an a-arc and a c-circle, c, see Figure <ref> (1). By the assumption (iii), every a-arc starts at C. This shows that the c-circle c must be incoherent. (2) An a-arc and a c-circle merge and form a positive hyperbolic point h. Then they become one a-arc, see Figure <ref> (2).
By the assumption (iii), this c-circle must be coherent. Next, let us consider how a cc-hyperbolic point changes the types of local regular leaves. (3) Suppose that a c-circle forms a positive cc-hyperbolic point h with itself and then splits into two c-circles. There are two possibilities. (3-a) An incoherent c-circle splits into two incoherent c-circles, see Figure <ref> (3-a). (3-b) A coherent c-circle splits into one coherent c-circle and one incoherent c-circle, see Figure <ref> (3-b). (4) Suppose that two c-circles merge and form a positive cc-hyperbolic point and then become one c-circle. There are two possibilities. (4-a) Two coherent c-circles merge into one coherent c-circle, see Figure <ref> (4-a). (4-b) One coherent c-circle and one incoherent c-circle merge into one incoherent c-circle, see Figure <ref> (4-b). The above discussion shows that passing a type ac or cc positive hyperbolic point never decreases (resp. increases) the number of incoherent (resp. coherent) c-circles. For a regular page S_t (t ∈ [0,1]), let N(t) be the number of incoherent c-circles in S_t. Since a type aa hyperbolic point does not affect c-circles, we get N(t) ≤ N(t') for t <t'. Our strategy is to show that all the regions in the region decomposition are of type aa; hence, F is an [F]-Bennequin surface. Assume to the contrary that there exist c-circles. If no a-arcs interact with those c-circles (i.e., no ac-annuli exist), then the surface F contains a component consisting of only aa-tiles. In other words, F is disconnected, which is a contradiction. Therefore, (F) contains ac-annuli. If at least one ac-annulus of type (1) exists then we have a strict inequality N(0) < N(1). However, the page S_1 is identified with the page S_0 by the monodromy ϕ of the open book. Since F is orientable, ϕ identifies an incoherent c-circle in S_1 with an incoherent c-circle in S_0, which means N(0)=N(1). This is a contradiction.
If (F) contains an ac-annulus of type (2) then a parallel argument about the number of coherent c-circles holds and we get a contradiction. Thus, c-circles do not exist. Careful readers may notice that the conditions (I) and (II) are not used in the proof. §.§ Proofs of the main results (⇐) The statement is trivial. (⇒) We assume δ(,α)=0 and show that K is an α-strongly quasipositive braid. By the assumptions (ii) and (iii-a) and Corollary <ref>, after desumming essential spheres, we may assume that the new F (we abuse the same notation) admits an essential open book foliation (F). Here desumming an essential sphere can be understood as a disk exchange: we remove an embedded disk D ⊂ F from F and then put back a disk E ⊂ M ∖ F such that E ∪ D is an essential sphere 𝒮. Corollary <ref> states that the binding component C in the condition (iii-b) is still the only binding component that intersects the new F. Applying the assumptions δ(,α)=0 and (iii-c) to Proposition <ref>, we can conclude that (F) has e_- =h_-= 0. By Proposition <ref> and the assumptions (i) and (iii-b), F is an α-Bennequin surface with the strongly quasipositive boundary K. Let F be a minimum genus Seifert surface of K. By Corollary <ref>, we may assume that F admits an essential open book foliation. In the case of the disk open book (S, ϕ)=(D^2, id), <cit.> and <cit.> show that any incompressible surface can be put in a position so that its open book foliation is essential without c-circles. Therefore, our Seifert surface F admits an essential open book foliation without c-circles. This means that F contains no bc-annuli and all the fake vertices (if they exist) of G_– lie on K. Since c(id,K, ∂ D^2) > δ()/2+1, Proposition <ref> implies that (F) has at most one negative elliptic point; e_-=0 or 1.
If e_-=0 then the region decomposition of (F) consists of only aa-tiles; that is, F is a Bennequin surface. If e_-=1 then (F) does not contain bb-tiles. Let v denote the unique negative elliptic point. All the ab-tiles of (F) meet at v. Suppose that the valence of v in the graph G_– is N. By Theorem <ref> we have δ()/2+1 < c(id,K, ∂ D^2) ≤ N/1=N, which shows that there are N ≥ 2 negative ab-tiles meeting at v. By applying a positive stabilization along one of the negative ab-tiles (cf. Figure <ref> (ii)) we may remove the negative elliptic point v. Note that the genus of the surface is preserved. As a consequence we get a Seifert surface whose region decomposition consists of only aa-tiles. Hence by Definition <ref>-(1) it is a Bennequin surface. Moreover, if δ()=0 and c(id, K, ∂ D^2) >1 then by Proposition <ref>-(3) we have e_- =h_-= 0. The Seifert surface F is already a Bennequin surface without negatively twisted bands; hence, by Definition <ref>-(2) K is strongly quasipositive. §.§ Examples We close the paper with examples related to the main results. Some of the examples are described via movie presentations. A movie presentation is a sequence of slices of a Seifert surface by some pages S_t. See <cit.> for the definition of a movie presentation. First we see that the planar condition (i) of Theorem <ref> is necessary. Suppose that S is an oriented genus 1 surface with connected boundary. Choose ϕ so that the manifold M_(S, ϕ) is a rational homology sphere. The condition (ii) of Theorem <ref> is automatically satisfied. Since the Seifert surface class is uniquely determined, we may drop α- from our notation. Take a base point near the boundary so that ϕ fixes it. Let ρ be an oriented loop at this base point as depicted in Figure <ref> (1). Under the identification B_1(S) = π_1(S) we may identify ρ with a 1-braid in the surface braid group B_1(S). For N≥1, let K_N be the closure of the 1-braid ρ^N with respect to the open book (S,ϕ).
Then c(ϕ,K_N,∂ S)=N. The condition (iii-c) is satisfied if N>1. Note that K_1 is smoothly isotopic to the binding of the open book. This shows that g(K_1)= g(S)=1. Since K_N is an (N,1)-cable of the binding, K_N is a connected sum of N copies of K_1, which yields g(K_N)=N. The Seifert surface F of K_N defined by the movie presentation in Figure <ref> (2) gives a genus N surface. Therefore, the condition (iii-a) is satisfied. The movie presentation also determines the open book foliation (F) of F as depicted in Figure <ref> (3). We observe that (F) is essential (there are no b-arcs) and all the hyperbolic and elliptic points are positive. By Lemma <ref> it follows that δ(K_N)=0. If K_N were a strongly quasipositive braid bounding a Bennequin surface F', then, due to the one-strand constraint, the open book foliation (F') would have to be built of only a-arcs emanating from a single positive elliptic point. This means that K_N is a meridional circle of the binding; that is, an unknot. This contradicts the above conclusion g(K_N)=N≠ 0. We conclude that if N>1, all the conditions of Theorem <ref> are satisfied except for the planar assumption (i) on S, and K_N is not a strongly quasipositive braid. With the above example we can further see that the stronger forms of Conjectures <ref> and <ref> do not hold for general open books (S, ϕ) ≠ (D^2, id). More concretely, we show that the transverse knot type represented by the closed braid K_1 in Example <ref> does bound a minimum genus Bennequin surface (indeed, is strongly quasipositive) at the cost of raising the braid index.
To see this, we consider a different Seifert surface F' of K_1 given by a movie presentation as depicted in Figure <ref>. Using the Euler characteristic formula in Lemma <ref> we see that both F and F' have genus 1, which is the genus of the transverse knot type . However, the open book foliations of F and F' are different. For instance, the region decomposition of the open book foliation (F) consists of two ac-annuli whereas (F') consists of four ab-tiles. More precisely, (F') contains one negative elliptic point and one negative hyperbolic point, and they belong to the same unique negative ab-tile. By a positive stabilization along the negative ab-tile we can remove both the negative elliptic and negative hyperbolic points of F'. Since any stabilization preserves the Euler characteristic of the surface, the resulting surface, F”, also has genus 1. The surface F” consists of only positive aa-tiles and its boundary is a closed braid of braid index 2. In summary, we obtain a minimum genus Bennequin surface F” of whose boundary is a strongly quasipositive 2-braid. Knowing that the braid index b()=1 and is not an unknot, any closed 1-braid representatives of are not strongly quasipositive. For the closed braid K_N of N≥ 2, we add extra N-1 pairs of positive and negative elliptic points as shown in Figure <ref>. The movie presentation in Figure <ref> gives a Seifert surface F'_N for K_N and F'_N has genus N. A parallel argument for F'=F'_1 works for general F'_N and we obtain the same conclusion. Next we see that the condition (iii-c) on the FDTC in Theorem <ref> is also necessary. Let S be a genus 0 surface with four boundary components C_0,C_1,C_2,C_3. Let X be a simple closed curve that separates C_1 and C_2 from C_0 and C_3. See Figure <ref> (1). Let ϕ∈ Diffeo^+(S) be a diffeomorphism defined by ϕ = T_XT_C_1^n_1T_C_2^n_2T_C_3^n_3, where T_X (resp. T_C_i) denotes a positive Dehn twist about X (resp. a circle parallel to C_i) and n_1,n_2,n_3 ∈ℤ∖{0}.
Since n_1,n_2,n_3 ≠ 0, the ambient manifold M=M_(S, ϕ) has H_1(M;ℤ)=ℤ/n_1⊕ℤ/n_2⊕ℤ/n_3 (cf. <cit.>) and it yields H_2(M; ℚ)=0 by the universal coefficient theorem; hence, M is a rational homology sphere and the condition (ii) of Theorem <ref> is automatically satisfied. The movie presentation shown in Figure <ref> (3) gives a surface, which we call D. The trace of the point ⊙ gives the boundary, K, of D. In particular, K is a 1-braid with respect to (S, ϕ). The open book foliation of D as depicted in Figure <ref> (2) shows that: * D is a disk and K is an unknot ((iii-a) is satisfied). * Among all the binding components of (S,ϕ), only C_0 intersects D ((iii-b) is satisfied). * c(ϕ,K,C_0)=0, which is obtained by noticing that the arc γ in Figure <ref> (1) is fixed by ϕ_K and then applying <cit.>. Therefore, all the conditions of Theorem <ref> are satisfied except for the condition (iii-c) on the FDTC. Indeed, the region decomposition of D consists of two ab-tiles and D is not even a Bennequin surface; thus K is not a strongly quasipositive braid. (We remark that after one positive stabilization, we get a strongly quasipositive braid representative of the transverse knot type [K].) [Example for Theorem <ref>] We demonstrate that the condition c(id,K,∂ D^2) > δ()/2+1 in Theorem <ref> can be satisfied by links that are neither 3-braid links nor fibered knots. That is, Theorem <ref> is independent of Corollary <ref> and Hedden's <cit.>. For a non-negative integer δ≥ 0, let us consider an n-braid word of the form w=xy, where x ∈ B_n is a strongly quasipositive braid word and y∈ B_n is a braid word containing δ negative band generators. Let K be the closure of w and be the transverse knot type represented by K. Let e_± (resp. h_±) denote the number of ± elliptic (resp.
hyperbolic) points in the open book foliation (F_w) of the Bennequin surface F_w associated to the band-twist factorization w. By Lemma <ref> we get δ() ≤ h_- - e_-. The proof of Proposition <ref> shows that h_-=δ and e_-=0; therefore, δ() ≤δ. Let c:B_n → be the FDTC map defined by c(β):=c(id,β,∂ D^2). The map has the following properties for α, β∈ B_n. (i) |c(αβ)-c(α)-c(β)| ≤ 1 and c(α)=c(β^-1αβ). (ii) If p∈ B_n is a strongly quasipositive word then c(α p β) ≥ c(αβ) ≥ c(α p^-1β) (indeed this holds for right-veering braids p <cit.>). (iii) c(σ_i,j^± 1) = 0. (iv) If β is the product of m negative band generators then c(β) > -(m+1)/n. Property (i) can be found in <cit.> and <cit.>. Property (iii) follows from <cit.>. Property (iv) follows from the proof of <cit.>, which is an estimate of another invariant of braids called the Dehornoy floor [β]_D, together with <cit.>. By (ii) and (iv) we have c(y) > -(δ+1)/n. Now let us take a strongly quasipositive braid word x such that c(x) ≥δ/2 + (δ+1)/n + 2: for example, x=(σ_1,2σ_2,3⋯σ_n-1,nσ_1,n)^N for N ≥δ/2 + (δ+1)/n + 2 satisfies this condition (see Remark <ref>). By (i) we have c(w) ≥ c(x)+c(y)-1 > ( δ/2 + (δ+1)/n + 2 ) - (δ+1)/n - 1 = δ/2+1 ≥δ()/2+1. The closed braid K satisfies the conditions in Theorem <ref> and Corollary <ref>. Thus, admits a minimum genus Bennequin surface with exactly δ() negative bands, even though the Bennequin surface F_w may not have the minimum genus g(). For suitable choices of x and y, we can easily make non-fibered: let x= (σ_1,3σ_2,4σ_1,3σ_2,4)^N+1σ_1,3∈ B_4 for N ≥ (3δ+9)/4 and y ∈ B_4 be a braid word in {σ_1,3^± 1, σ_2,4^± 1} containing δ negative band generators. The closure K of the 4-braid w=xy realizes the braid index b() of . Since the Bennequin surface F_w is not connected, the Alexander polynomial of is zero (see <cit.>). In particular, is not fibered. Using <cit.> we obtain c(x) ≥ N. The above argument shows that K satisfies the assumptions of Theorem <ref> and Corollary <ref>.
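As a numerical sanity check of the FDTC estimate for the first family above (our own instantiation of the displayed inequalities with sample values n = 4 and δ = 2, not taken from the paper):

```latex
% n = 4, \delta = 2: the threshold for N is
\frac{\delta}{2} + \frac{\delta+1}{n} + 2
  \;=\; 1 + \tfrac{3}{4} + 2 \;=\; \tfrac{15}{4},
% so N = 4 suffices, giving c(x) \ge 4.  Since c(y) > -\tfrac{3}{4},
c(w) \;\ge\; c(x) + c(y) - 1 \;>\; 4 - \tfrac{3}{4} - 1
  \;=\; \tfrac{9}{4} \;>\; 2 \;=\; \frac{\delta}{2} + 1,
% which exceeds the bound required in the theorem.
```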
§ ACKNOWLEDGEMENTS The authors would like to thank Inanc Baykur, John Etnyre, Matthew Hedden, Jeremy Van Horn-Morris, and the referee. TI was partially supported by JSPS Grant-in-Aid for Young Scientists (B) 15K17540. KK was partially supported by NSF grant DMS-1206770 and Simons Foundation Collaboration Grants for Mathematicians.

B I. Baykur, J. Etnyre, M. Hedden, K. Kawamuro, and J. Van Horn-Morris, Contact and symplectic geometry and the mapping class groups. Official report of the 2nd SQuaRE meeting, July 2015, American Institute of Mathematics.
be D. Bennequin, Entrelacements et équations de Pfaff, Astérisque, 107-108 (1983), 87-161.
bf J. Birman and E. Finkelstein, Studying surfaces via closed braids, J. Knot Theory Ramifications 7 (1998), 267-334.
bgg J. Birman, V. Gebhardt, and J. González-Meneses, Conjugacy in Garside groups. I. Cyclings, powers and rigidity, Groups Geom. Dyn. 1 (2007), no. 3, 221-279.
bkl J. Birman, K. H. Ko and S. J. Lee, A new approach to the word and conjugacy problems in the braid groups, Adv. Math. 139 (1998), no. 2, 322-353.
bm1 J. Birman and W. Menasco, Studying links via closed braids. I. A finiteness theorem, Pacific J. Math. 154 (1992), no. 1, 17-36.
bm2 J. Birman and W. Menasco, Studying links via closed braids. II. On a theorem of Bennequin, Topology Appl. 40 (1991), no. 1, 71-82.
BM J. Birman and W. Menasco, Stabilization in the braid groups I: MTWS, Geom. Topol. 10 (2006), 413-540.
ddgkm P. Dehornoy, Foundations of Garside theory. With François Digne, Eddy Godelle, Daan Krammer and Jean Michel. EMS Tracts in Mathematics, 22. European Mathematical Society (EMS), Zürich, 2015. xviii+691 pp.
E Y. Eliashberg, Contact 3-manifolds twenty years since J. Martinet's work, Ann. Inst. Fourier (Grenoble) 42 (1992), 165-192.
Et J. Etnyre, Lectures on open book decompositions and contact structures. Floer homology, gauge theory, and low-dimensional topology, 103-141, Clay Math. Proc., 5, Amer. Math. Soc., Providence, RI, 2006.
EO J. Etnyre and B. Ozbagci, Invariants of contact structures from open books, Trans. Amer. Math. Soc. 360 (2008), no. 6, 3133-3151.
ev J. Etnyre and J. Van Horn-Morris, Fibered transverse knots and the Bennequin bound, Int. Math. Res. Not. IMRN 2011, 1483-1509.
E2 J. Etnyre, On knots in overtwisted contact structures, Quantum Topol. 4 (2013), no. 3, 229-264.
FM B. Farb and D. Margalit, A primer on mapping class groups, Princeton Mathematical Series, 49. Princeton University Press, Princeton, NJ, 2012. xiv+472 pp.
Hayden K. Hayden, Quasipositive links and Stein surfaces, arXiv:1703.10150v1.
he M. Hedden, Notions of positivity and the Ozsváth-Szabó concordance invariant, J. Knot Theory Ramifications 19 (2010), 617-629.
H2 M. Hedden, http://users.math.msu.edu/users/mhedden/CV_files/research.pdf
HS M. Hirasawa and A. Stoimenow, Examples of knots without minimal string Bennequin surfaces, Asian J. Math. 7 (2003), no. 3, 435-445.
hkm K. Honda, W. Kazez and G. Matić, Right-veering diffeomorphisms of compact surfaces with boundary, Invent. Math. 169 (2007), 427-449.
it0 T. Ito, Braid ordering and the geometry of closed braid, Geom. Topol. 15 (2011), 473-498.
it T. Ito, On a structure of random open books and closed braids, Proc. Japan Acad. Ser. A Math. Sci. 91 (2015), 160-162.
ik1-1 T. Ito and K. Kawamuro, Open book foliations, Geom. Topol. 18 (2014), 1581-1634.
ik2 T. Ito and K. Kawamuro, Essential open book foliation and fractional Dehn twist coefficient, Geom. Dedicata 187 (2017), 17-67.
ik3 T. Ito and K. Kawamuro, Operations on open book foliations, Algebr. Geom. Topol. 14 (2014), 2983-3020.
ik4 T. Ito and K. Kawamuro, On a question of Etnyre and Van Horn-Morris, Algebr. Geom. Topol. 17 (2017), 561-566.
ik-QRV T. Ito and K. Kawamuro, Quasi right-veering braids and non-loose links, (2017) preprint.
L R. Lickorish, An introduction to knot theory. Graduate Texts in Mathematics, 175. Springer-Verlag, New York, 1997.
M A. Malyutin, Twist number of (closed) braids, St. Petersburg Math. J. 16 (2005), no. 5, 791-813.
MM Y. Mitsumatsu and A. Mori, On Bennequin's isotopy lemma, an appendix to Convergence of contact structures to foliations. Foliations 2005, 365-371, World Sci. Publ., Hackensack, NJ, 2006.
OS S. Y. Orevkov and V. V. Shevchishin, Markov theorem for transversal links, J. Knot Theory Ramifications 12 (2003), 905-913.
OzSt B. Ozbagci and A. Stipsicz, Surgery on contact 3-manifolds and Stein surfaces. Bolyai Society Mathematical Studies, 13. Springer-Verlag, Berlin; János Bolyai Mathematical Society, Budapest, 2004. 281 pp.
Pav E. Pavelescu, Braids and open book decompositions, Ph.D. thesis, University of Pennsylvania (2008), http://www.math.upenn.edu/grad/dissertations/ElenaPavelescuThesis.pdf
P2 E. Pavelescu, Braiding knots in contact 3-manifolds, Pacific J. Math. 253 (2011), no. 2, 475-487.
Ru83 L. Rudolph, Braided surfaces and Seifert ribbons for closed braids, Comment. Math. Helv. 58 (1983), no. 1, 1-37.
R L. Rudolph, Knot theory of complex plane curves, Handbook of knot theory, 349-427, Elsevier B. V., Amsterdam, 2005.
X P. Xu, The genus of closed 3-braids, J. Knot Theory Ramifications 1 (1992), no. 3, 303-326.
http://arxiv.org/abs/1703.09322v4
{ "authors": [ "Tetsuya Ito", "Keiko Kawamuro" ], "categories": [ "math.GT" ], "primary_category": "math.GT", "published": "20170327220335", "title": "The defect of Bennequin-Eliashberg inequality and Bennequin surfaces" }
On the dynamics of the singularities of the solutions of some non-linear integrable differential equations

Igor Tydniouk

March 24, 2017
==========================================================================================================

Stevens Institute of Technology, 1 Castle Point Terrace, Hoboken, NJ 07030, USA
E-mail: itydniou@stevens.edu

Abstract. This paper concerns some results related to the singular solutions of certain types of non-linear integrable differential equations (NIDE) and the behavior of the singularities of those solutions. The approach relies heavily on the Method of Operator Identities <cit.>, which has proved to be a powerful tool in areas as diverse as interpolation problems, spectral analysis, inverse spectral problems, dynamic systems, and non-linear equations. We formulate and solve a number of problems (direct and inverse) related to the singular solutions of the sinh-Gordon, non-linear Schrödinger and modified Korteweg - de Vries equations. The dynamics of the singularities of these solutions suggests that they can be interpreted in terms of particles interacting through the fields surrounding them. We derive differential equations describing the dynamics of the singularities and solve some of the related problems. The developed methodologies are illustrated by numerous examples.

§ INTRODUCTION

The Method of Operator Identities <cit.> plays an important role in different areas of both pure and applied mathematics. It has proved to be a universal tool for solving interpolation and spectral analysis problems and for investigating dynamic systems and non-linear integrable equations. The solutions of many problems that have already become classical are much simpler and more transparent through the prism of the Method of Operator Identities, and it reveals striking similarities between fields of research that at first glance are very different.
In this paper we apply the Method of Operator Identities to the investigation of the properties of the singular solutions of some non-linear integrable equations, obtained by solving the inverse spectral problem for the associated self-adjoint canonical system of differential equations. In particular, we consider the following non-linear equations:

∂ ^2 ϕ (x, t)/∂ x ∂ t = 4 sinh ϕ (x, t)    (sinh-Gordon equation, SHG);

∂ψ (x, t)/∂ t = - 1/4 ∂ ^3 ψ (x, t)/∂ x ^3 + 3/2 | ψ (x, t) |^2 ∂ψ (x, t)/∂ x    (modified Korteweg - de Vries equation, MKdV);

∂ρ (x, t)/∂ t = i/2 [∂ ^2 ρ (x, t)/∂ x ^2 - 2 | ρ (x, t) |^2 ρ (x, t)]    (non-linear Schrödinger equation, NSE).

Some of the results concerning these equations were obtained previously by different methods, but for the completeness of the picture, and to show the universality and power of the Method of Operator Identities, we present the solutions and proofs here. The main subject of investigation is the study of the properties of the singular solutions and the behavior of their singularities. Initially the idea of investigating singular solutions was suggested in <cit.>, where the "gluing" procedure was applied to the inverse scattering problem as a method of analysis. The method of the inverse spectral problem, powered by the Method of Operator Identities, proved to be more efficient in these investigations and allowed a more general and more detailed analysis of the considered solutions. The properties of the singular solutions (already discussed in <cit.>) indicate that on a global scale they behave very similarly to the classical soliton solutions: asymptotically, an N-wave solution is represented as N independent elementary waves; after an interaction the elementary waves preserve their shapes, and the only change they experience is a phase shift; during an interaction the elementary waves exchange their energies. Singular solutions admit an interpretation in terms of particles interacting through the fields surrounding them.
As opposed to the soliton solutions, the presence of the singularities makes it possible to derive dynamical equations and to investigate in much more detail the region of close interaction between singular waves/particles.

The plan of the paper is as follows. Section 2 is auxiliary. There we introduce a class of structured matrices (paired Cauchy matrices) related to the equations (<ref>)-(<ref>) and, using the Method of Operator Identities (matrix version), we investigate the invertibility of these matrices and calculate the transfer matrix function of the corresponding dynamic system. Studying the properties of the dynamic system led us to the investigation of some related rational direct and inverse interpolation problems. The obtained results are applied in further sections to study the properties of the singularities of non-linear equations. At the same time, the results of Section 2 are of independent interest in the field of structured matrices and related interpolation problems. In particular, we investigate the following interpolation problem.

IP Problem. Given the sets of numbers μ = {μ_1, μ_2, …, μ_n}, ν = {ν_1, ν_2, …, ν_n}, ξ = {ξ_1, ξ_2, …, ξ_n}, find a 2 × 2 matrix polynomial X(λ) = { X_i j(λ) }_i, j = 1^2 satisfying the relations

X(ξ_j) [ ν_j; μ_j ] = 0, 1 ≤ j ≤ n.

This and similar interpolation problems were studied by a number of authors (see for example <cit.> and <cit.> - <cit.>, <cit.>). The use of the Method of Operator Identities reveals some interesting links among very different areas of analysis, such as dynamic systems, structured matrices and non-linear differential equations. In Section 3 (Subsection 3.1) we consider explicit singular solutions of non-linear integrable equations. The procedure relies on the operator version of the Method of Operator Identities.
It is shown that those solutions can be represented in terms of determinants of paired Cauchy and paired Vandermonde matrices (Theorems 3.3 and 3.4).

In Subsection 3.2 we study the properties of the singular solutions. Using the results of Section 2, we obtain an efficient parametrization of the zeros of those determinants and investigate the connection between the transfer matrix function of the corresponding dynamic system and singular solutions of non-linear equations (Theorem 3.5). In this way we formulate and solve an inverse problem for singular solutions: given some information about the solution, restore the full system (Theorems 3.6, 3.7 and 3.8). The developed methodologies are illustrated by simple examples. Subsection 3.3 is dedicated to the investigation of the dynamics of the singularities given by the parametrizations obtained in Subsection 3.2. It is shown that the dynamics of the singularities is described by a completely integrable Hamiltonian system, and action-angle variables for this system are found (Theorem 3.11). We also derive a system of non-linear differential equations describing the dynamics of the parameters and study the properties of the system for some special simple cases (2-wave interaction). Numerous examples showing different aspects of the solutions are presented. For the case of two-wave interaction we formulate and solve an inverse problem (Problem 3.35, Assertion 3.36). In general (N-wave interaction), the dynamics of the singularities is quite complicated and cannot be integrated in closed form. In the Appendix we present some examples of the behavior of the singularities obtained by numerical analysis and give an interpretation in terms of particles.

§ ACKNOWLEDGEMENTS

I am deeply grateful to Dr. A. L. Sakhnovich for carefully reading this paper and correcting numerous typos and mistakes. His ideas and insights had a crucial influence on my way of thinking.
§ DYNAMIC SYSTEMS, OPERATOR IDENTITY AND ASSOCIATED INTERPOLATION PROBLEMS

Consider a matrix S of the form

S = { (a_i b_j + c_i d_j)/(g_i - h_j) }^N_i,j=1,

where a = { a_i }^N_1, b = { b_i }^N_1, c = { c_i }^N_1, d = { d_i }^N_1, g = { g_i }^N_1, h = { h_i }^N_1 are sets of complex numbers such that

g_i ≠ h_j (1 ≤ i, j ≤ N); g_i ≠ g_j, h_i ≠ h_j, i ≠ j.

In the special case when a_i b_j + c_i d_j = 1 (1 ≤ i, j ≤ N), the matrix S is a pure Cauchy matrix. Matrices of the type (<ref>) represent a special case of generalized Cauchy matrices in the sense of <cit.>. They were studied by a number of authors (see for example <cit.> and <cit.> - <cit.>). Numerous interpolation problems connected to the matrices of this class were investigated in <cit.>. The results of this section slightly generalize the ones obtained in <cit.>. Our approach is based on the matrix identity

A S - S B = Π_1 Π^T_2,

where A = diag{ g_1, g_2, … g_N }, B = diag{ h_1, h_2, … h_N },

Π_1 = [ a_1 c_1; a_2 c_2; ⋯ ⋯; a_N c_N ], Π_2 = [ b_1 d_1; b_2 d_2; ⋯ ⋯; b_N d_N ],

and the symbol M^T denotes transposition of the matrix M. This is a matrix version of the operator identity thoroughly investigated and used in <cit.>, <cit.> and a number of papers (see for example <cit.> - <cit.>). In this section we review the results related to the rational interpolation problems and the invertibility of matrices of the type (<ref>), which play an important role in the further considerations concerning singular solutions of NIDE.

Let us introduce the 2 × 2 matrix-function W_A(λ) by the equality

W_A(λ) = I_2 - Π^T_2 S^-1 (A - λ I_N)^-1 Π_1,

where I_k is the k × k identity matrix. Note that W_A(λ) is the transfer matrix-function of the dynamic system

dx/dt = A x + Π_1 u, y = Π^T_2 S^-1 x + u,

where u = { u_i(t) }^2_1 is the input, y = { y_i(t) }^2_1 is the output, and x = { x_i(t) }^N_1 is the inner state of the system.
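The identity above can be verified directly: since A and B are diagonal, the (i, j) entry of A S - S B is (g_i - h_j) S_ij = a_i b_j + c_i d_j, which is exactly the (i, j) entry of Π_1 Π^T_2. A minimal exact-arithmetic check of this (all parameter values below are arbitrary illustrative choices, not data from the paper):

```python
from fractions import Fraction as F

N = 3
a = [F(1), F(2), F(-1)]
b = [F(3), F(1), F(2)]
c = [F(2), F(-1), F(1)]
d = [F(1), F(4), F(-2)]
g = [F(5), F(7), F(11)]
h = [F(1), F(2), F(3)]   # g_i != h_j, and the g's (resp. h's) are pairwise distinct

# paired Cauchy matrix S = {(a_i b_j + c_i d_j)/(g_i - h_j)}
S = [[(a[i]*b[j] + c[i]*d[j]) / (g[i] - h[j]) for j in range(N)] for i in range(N)]

# entry (i, j) of A S - S B with A = diag(g), B = diag(h) is (g_i - h_j) S_ij
lhs = [[(g[i] - h[j]) * S[i][j] for j in range(N)] for i in range(N)]
# entry (i, j) of Pi_1 Pi_2^T, where Pi_1 has rows (a_i, c_i) and Pi_2 rows (b_j, d_j)
rhs = [[a[i]*b[j] + c[i]*d[j] for j in range(N)] for i in range(N)]

assert lhs == rhs
```

Exact rational arithmetic (`fractions.Fraction`) keeps the check free of floating-point round-off.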
The matrix-function W_B(λ) = W^-1_A(λ), which can be represented in the form <cit.>

W_B(λ) = I_2 + Π^T_2 (B - λ I_N)^-1 S^-1 Π_1,

also plays an important role in the following studies. As one can see from (<ref>), (<ref>), the existence of W_A(λ) and W_B(λ) depends on the invertibility of the matrix S. Let us define the ordered sets

μ = {μ_i} = [a_1, a_2, … a_N, d_1, d_2, … d_N], ν = {ν_i} = [-c_1, -c_2, … -c_N, b_1, b_2, … b_N], ξ = {ξ_i} = [g_1, g_2, … g_N, h_1, h_2, … h_N].

The criterion of regularity of the matrix S is given by the following theorem.

Let a matrix S have the form (<ref>) and assume that μ_i_k = 0 and ν_j_m = 0 for some sets i_1, i_2, ⋯, i_p and j_1, j_2, ⋯, j_r of natural numbers (less than or equal to 2N) such that 0 ≤ p ≤ 2N, 0 ≤ r ≤ 2N and i_k ≠ j_m for all 1 ≤ k ≤ p and 1 ≤ m ≤ r. Then the relations

μ_n Q_1(ξ_n) + ν_n Q_2(ξ_n) = 0, n = 1, 2, …, 2N,

with some polynomials Q_1 and Q_2 of the form

Q_1(λ) = Q̃_1(λ) ∏^r_m=1 (λ - ξ_j_m), Q_2(λ) = Q̃_2(λ) ∏^p_k=1 (λ - ξ_i_k),

where Q̃_l(λ) (l = 1, 2) are arbitrary polynomials such that deg Q̃_1(λ) ≤ N - 1 - r, deg Q̃_2(λ) ≤ N - 1 - p, are necessary and sufficient for the matrix S to be singular, i.e. det S = 0.

The proof of Theorem 2.1 can easily be obtained from the results of <cit.>. We give it here for the completeness of the considerations.

Proof. Assume for now that μ_i ≠ 0, ν_i ≠ 0; i = 1, 2, …, 2N. The condition det S = 0 is equivalent to the existence of a non-trivial solution x = {x_i}_1^N of the system of equations S x = 0, or

∑_j = 1^N (a_i b_j - c_i d_j)/(g_i - h_j) x_j = 0, i = 1, 2, …, N.

The system (<ref>) can be rewritten as

a_i ∑_j = 1^N b_j x_j/(g_i - h_j) = y_i, 1 ≤ i ≤ N, c_i ∑_j = 1^N d_j x_j/(g_i - h_j) = y_i, 1 ≤ i ≤ N,

where y = {y_i}_1^N is a non-trivial vector.
Consider the functions

G_1(λ) = ∑_j = 1^N b_j x_j/(λ - h_j) = ∑_j = 1^N b_j x_j ∏_k ≠ j (λ - h_k) / ∏_i = 1^N (λ - h_i) = f_1(λ)/H(λ),

G_2(λ) = ∑_j = 1^N d_j x_j/(λ - h_j) = ∑_j = 1^N d_j x_j ∏_k ≠ j (λ - h_k) / ∏_i = 1^N (λ - h_i) = f_2(λ)/H(λ).

From (<ref>), (<ref>) it follows that a_i G_1(g_i) = c_i G_2(g_i) = y_i; 1 ≤ i ≤ N. Substituting λ = h_j in (<ref>) we obtain

f_1(h_j) = b_j x_j H'(h_j), f_2(h_j) = d_j x_j H'(h_j),

or

f_1(h_j) d_j = f_2(h_j) b_j, 1 ≤ j ≤ N.

It follows from (<ref>) that deg f_1(λ) ≤ N - 1; deg f_2(λ) ≤ N - 1. On the other hand, using (<ref>), the expressions (<ref>), (<ref>) can be represented as

y_i = a_i f_1(g_i)/H(g_i), 1 ≤ i ≤ N, y_i = c_i f_2(g_i)/H(g_i), 1 ≤ i ≤ N,

or

a_i f_1(g_i) = c_i f_2(g_i), 1 ≤ i ≤ N.

Formulas (<ref>), (<ref>) prove the necessity of the conditions of the theorem in the case μ_i ≠ 0, ν_i ≠ 0; i = 1, 2, …, 2N. The reverse considerations give the sufficiency. Let now μ_i_k = 0, k = 1, 2, …, p for some multi-index i_1, i_2, … i_p, 0 ≤ p ≤ 2N. Then from (<ref>), (<ref>) it follows that f_2(ξ_i_k) = 0, k = 1, 2, …, p. In other words, f_2(λ) = f̃_2(λ) ∏_k = 1^p (λ - ξ_i_k), where f̃_2(λ) is a polynomial such that deg f̃_2(λ) ≤ N - 1 - p. If ν_j_m = 0, m = 1, 2, …, r for some multi-index j_1, j_2, … j_r, 0 ≤ r ≤ 2N, then f_1(ξ_j_m) = 0, m = 1, 2, …, r, and f_1(λ) = f̃_1(λ) ∏_m = 1^r (λ - ξ_j_m), where f̃_1(λ) is a polynomial such that deg f̃_1(λ) ≤ N - 1 - r. It is easy to see that the equalities i_k = j_m for any of the pairs (k, m); k = 1, 2, …, p; m = 1, 2, …, r are impossible, because in these cases the determinant of the matrix S equals zero. □

The equations (<ref>) parametrize the equality det S = 0 by means of the coefficients of the polynomials Q̃_1(λ) and Q̃_2(λ). The parametrization is understood in the following sense.
Let {q_i^(1)}_i = 1^N - r and {q_i^(2)}_i = 1^N - p represent the coefficients of the polynomials Q̃_1(λ) and Q̃_2(λ) respectively; then, considering det S as a function

F(q_1^(1), q_2^(1), …, q_N - r^(1), q_1^(2), q_2^(2), …, q_N - p^(2))

of the parameters {q_i^(1)} and {q_i^(2)}, we have

F(q_1^(1), q_2^(1), …, q_N - r^(1), q_1^(2), q_2^(2), …, q_N - p^(2)) = 0,

which can also be considered as the equation of a surface in (2N - 2 - r - p)-dimensional space.

Let the sets μ, ν, ξ be such that det S ≠ 0. Then from (<ref>) it follows that

W_A(λ) = (∏^N_i=1 (g_i - λ)^-1) { D_jk(λ) }^2_j,k=1,

where D_jk(λ) (j, k = 1, 2) are polynomials such that

deg D_11(λ) ≤ N; deg D_22(λ) ≤ N, deg D_21(λ) ≤ N - 1; deg D_12(λ) ≤ N - 1.

We now formulate and solve the related interpolation problems. Note that similar problems were considered in <cit.>. The proofs become much simpler and more transparent if one uses the identity (<ref>) and the general expression for the transfer matrix-function W_A(λ). Relations (<ref>) and (<ref>) allow a unified approach to problems from different areas, i.e. dynamic systems, interpolation, spectral problems, non-linear differential equations, as we will see in the following sections.

Let us introduce the projectors P_k (1 ≤ k ≤ N) as N × N matrices defined by

P_k = {p_ij}_i, j = 1^N: p_ij = 0 when i ≠ j and p_ij = 1 when i = j = k.
Then

Π_2^T P_k = [ 0 ⋯ 0 b_k 0 ⋯ 0; 0 ⋯ 0 d_k 0 ⋯ 0 ], 1 ≤ k ≤ N.

Multiplying both sides of (<ref>) from the right by Π_2^T P_k we get

W_A(λ) Π_2^T P_k = [Π_2^T - Π_2^T S^-1 (A - λ I_N)^-1 Π_1 Π_2^T] P_k, 1 ≤ k ≤ N.

From (<ref>) it follows that

Π_1 Π_2^T = (A - λ I_N) S - S (B - λ I_N).

Substituting (<ref>) in (<ref>) and passing to the limit λ → h_k results in

W_A(h_k) Π_2^T P_k = 0, 1 ≤ k ≤ N.

Taking into account (<ref>), the equalities (<ref>) can be written as

b_k D_i1(h_k) + d_k D_i2(h_k) = 0; i = 1, 2; 1 ≤ k ≤ N.

From (<ref>) and the relation W_B^-1(λ) = W_A(λ) follows the representation

W_B(λ) = ∏_i = 1^N (g_i - λ)/(D_11(λ) D_22(λ) - D_12(λ) D_21(λ)) [ D_22(λ) -D_12(λ); -D_21(λ) D_11(λ) ].

Now, multiplying both sides of (<ref>) from the left by P_k^T Π_1 and passing to the limit λ → g_k we obtain

P_k^T Π_1 W_B(g_k) = 0, 1 ≤ k ≤ N.

Taking into account (<ref>), the equalities (<ref>) become

a_k D_i2(g_k) - c_k D_i1(g_k) = 0; i = 1, 2; 1 ≤ k ≤ N.

The expressions (<ref>), (<ref>) can be rewritten in the form

[ D_11(ξ_k) D_12(ξ_k); D_21(ξ_k) D_22(ξ_k) ] [ ν_k; μ_k ] = 0, 1 ≤ k ≤ 2N.

The equalities (<ref>) can be reformulated in terms of an interpolation problem:

IP Problem. Given the sets of numbers μ, ν, ξ, find a 2 × 2 matrix polynomial X(λ) = { X_i j(λ) }_i, j = 1^2 satisfying the relations

X(ξ_j) [ ν_j; μ_j ] = 0, 1 ≤ j ≤ 2N.

Note that this IP Problem has infinitely many solutions. Indeed, for any given vector polynomials X_k1(λ) (or X_k2(λ)), k = 1, 2, using (<ref>) or (<ref>), the Lagrange-Sylvester formulas give a way to recover the corresponding polynomials X_k2(λ) (or X_k1(λ)), k = 1, 2.

From the set of solutions of the IP Problem we choose the one for which

deg X_11(λ) = deg X_22(λ) = N; deg X_12(λ) ≤ N - 1; deg X_21(λ) ≤ N - 1.

In this case the solution { X_i j(λ) }_i, j = 1^2 is called the basis solution of the IP Problem, and N is called the degree (deg) of the solution.
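The interpolation property W_A(h_k) Π_2^T P_k = 0 derived above can be checked in exact arithmetic. The sketch below builds S, inverts it by Gauss-Jordan elimination over rationals, evaluates W_A(λ) = I_2 - Π_2^T S^-1 (A - λ I_N)^-1 Π_1 at λ = h_k, and verifies that it annihilates the k-th (only non-zero) column of Π_2^T P_k. All parameter values are arbitrary illustrative choices, not data from the paper.

```python
from fractions import Fraction as F

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def mat_inv(M):
    # Gauss-Jordan elimination in exact rational arithmetic
    n = len(M)
    A = [list(M[i]) + [F(1) if i == j else F(0) for j in range(n)] for i in range(n)]
    for col in range(n):
        piv = next(r for r in range(col, n) if A[r][col] != 0)
        A[col], A[piv] = A[piv], A[col]
        p = A[col][col]
        A[col] = [x / p for x in A[col]]
        for r in range(n):
            if r != col and A[r][col] != 0:
                f = A[r][col]
                A[r] = [x - f * y for x, y in zip(A[r], A[col])]
    return [row[n:] for row in A]

N = 2
a, c = [F(1), F(2)], [F(3), F(-1)]
b, d = [F(2), F(1)], [F(1), F(3)]
g, h = [F(5), F(7)], [F(1), F(2)]

S = [[(a[i]*b[j] + c[i]*d[j]) / (g[i] - h[j]) for j in range(N)] for i in range(N)]
Sinv = mat_inv(S)
Pi1 = [[a[i], c[i]] for i in range(N)]   # N x 2
Pi2T = [b[:], d[:]]                      # 2 x N, rows b and d

def W_A(lam):
    # resolvent (A - lam I)^(-1) with A = diag(g)
    R = [[(F(1) / (g[i] - lam)) if i == j else F(0) for j in range(N)] for i in range(N)]
    M = mat_mul(mat_mul(mat_mul(Pi2T, Sinv), R), Pi1)
    return [[(F(1) if i == j else F(0)) - M[i][j] for j in range(2)] for i in range(2)]

for k in range(N):
    column_k = [[b[k]], [d[k]]]          # the only non-zero column of Pi_2^T P_k
    assert mat_mul(W_A(h[k]), column_k) == [[F(0)], [F(0)]]
```

The check works for any parameter choice with det S ≠ 0, since (B - λ I_N) P_k vanishes at λ = h_k while the resolvent of A stays regular there.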
The basis solution { X_i j(λ) }_i, j = 1^2 is called the normalized basis solution if the coefficients of the highest degree of the polynomials X_11 and X_22 are equal to 1.

The following considerations are devoted to the construction of the basis solution of the IP Problem. With the notations

V_k(η, ζ) = [ η_1 η_2 ⋯ η_2N; ζ_1 η_1 ζ_2 η_2 ⋯ ζ_2N η_2N; ⋯ ⋯ ⋯ ⋯; ζ^k_1 η_1 ζ^k_2 η_2 ⋯ ζ^k_2N η_2N ]; Λ_k = [1, λ, ⋯, λ^k]; V = [ V_N - 1(μ, ξ); V_N - 1(ν, ξ) ]; Δ = det V

we prove the following statements.

The matrix V is non-singular if and only if the matrix S is non-singular.

Proof. We rewrite the matrix S in terms of the sets μ, ν and ξ:

S = { (μ_i ν_N + j - ν_i μ_N + j)/(ξ_i - ξ_N + j) }^N_i,j=1

and consider two related matrices

S_1 = { μ_i ν_N + j/(ξ_i - ξ_N + j) }^N_i,j=1, S_2 = { - ν_i μ_N + j/(ξ_i - ξ_N + j) }^N_i,j=1.

These matrices can be represented as

S_1 = M_1 S_0 N_2; S_2 = - N_1 S_0 M_2,

where M_1, M_2, N_1, N_2 are the diagonal matrices

M_1 = diag[μ_1, μ_2, …, μ_N], M_2 = diag[μ_N + 1, μ_N + 2, …, μ_2N]; N_1 = diag[ν_1, ν_2, …, ν_N], N_2 = diag[ν_N + 1, ν_N + 2, …, ν_2N],

and S_0 is the Cauchy matrix

S_0 = { 1/(ξ_i - ξ_N + j) }_i, j = 1^N.

The determinants of the matrices S_1 and S_2 are easily calculated as

det S_1 = ∏_i = 1^N (μ_i ν_N + i) det S_0; det S_2 = (-1)^N ∏_i = 1^N (ν_i μ_N + i) det S_0,

where

det S_0 = ∏_1 ≤ i < j ≤ N (ξ_i - ξ_j) ∏_N + 1 ≤ i < j ≤ 2N (ξ_i - ξ_j) / ∏_1 ≤ i ≤ N, N + 1 ≤ j ≤ 2N (ξ_i - ξ_j).

Let τ = {τ_i}_i = 1^N be an N-tuple of integers such that

* 1 ≤ τ_i ≤ 2N;
* τ_i > τ_j, i > j;
* τ_i - N ≠ τ_j, i ≠ j;

and let T be the set of all such tuples τ. For convenience we represent each tuple τ as τ = τ_1 ∪ τ_2, where τ_1 = {τ_1,i}_i = 1^c_1, τ_2 = {τ_2,j}_j = 1^c_2 and τ_1,i ≤ N, 1 ≤ i ≤ c_1; τ_2,j > N, 1 ≤ j ≤ c_2. It is easy to observe that the set {τ_1, τ_2 - N}, rearranged in increasing order of values, coincides with the set { 1, 2, …, N }.
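The classical Cauchy determinant formula invoked here for det S_0 can be sanity-checked numerically. The sketch below states it with the ordering of the difference products that makes the identity exact including sign (conventions for the sign of the numerator products vary between sources); the node values are arbitrary.

```python
from fractions import Fraction as F
from itertools import permutations

def det(M):
    # Leibniz-formula determinant in exact arithmetic (fine for small N)
    n = len(M)
    total = F(0)
    for perm in permutations(range(n)):
        sign = 1
        for i in range(n):
            for j in range(i + 1, n):
                if perm[i] > perm[j]:
                    sign = -sign
        prod = F(1)
        for i in range(n):
            prod *= M[i][perm[i]]
        total += sign * prod
    return total

N = 3
x = [F(1), F(2), F(4)]    # plays the role of xi_1 .. xi_N
y = [F(7), F(9), F(13)]   # plays the role of xi_{N+1} .. xi_{2N}

S0 = [[F(1) / (x[i] - y[j]) for j in range(N)] for i in range(N)]

# det {1/(x_i - y_j)} = prod_{i<j} (x_j - x_i)(y_i - y_j) / prod_{i,j} (x_i - y_j)
num = F(1)
for i in range(N):
    for j in range(i + 1, N):
        num *= (x[j] - x[i]) * (y[i] - y[j])
den = F(1)
for i in range(N):
    for j in range(N):
        den *= (x[i] - y[j])

assert det(S0) == num / den
```

The distinctness assumptions on the ξ's are exactly what keep the denominator product non-zero.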
Using elementary properties of determinants, det S can be represented in the following form:

det S = ∑_τ ∈ T det S_τ ∏_i = 1^c_1 μ_N + τ_1,i ∏_j = 1^c_2 ν_τ_2,j,

where the S_τ are matrices whose (τ_1,i)-th columns are constructed from the (τ_1,i)-th columns of the matrix S_1 (1 ≤ i ≤ c_1) and whose (τ_2,j - N)-th columns are constructed from the (τ_2,j - N)-th columns of the matrix S_2 (1 ≤ j ≤ c_2).

It follows that the S_τ are paired Cauchy matrices, whose properties were investigated in <cit.>. In order to calculate the determinants det S_τ, consider two multi-sets of integers ϰ = {ϰ_i}_i = 1^c_1 and ϰ̅ = {ϰ̅_j}_j = 1^c_2 defined in the following way:

* 1 ≤ ϰ_i ≤ N, 1 ≤ i ≤ c_1; 1 ≤ ϰ̅_j ≤ N, 1 ≤ j ≤ c_2;
* ϰ_i > ϰ_j if i > j; ϰ̅_k > ϰ̅_l if k > l;
* ϰ ∩ ϰ̅ = ∅, ϰ ∪ ϰ̅ = {1, 2, …, N}.

Let K be the set of all such multi-sets ϰ. Using the Laplace theorem and formulas (<ref>), (<ref>), the determinant of the paired Cauchy matrix corresponding to the tuple τ ∈ T is calculated as

det S_τ = ∑_ϰ ∈ K (-1)^(c_2 + ∑_k = 1^c_1 ϰ_k + ∑_m = 1^c_1 τ_1,m) ∏_i = 1^c_1 μ_ϰ_i ∏_j = 1^c_2 ν_ϰ̅_j S_ϰ S_ϰ̅,

where

S_ϰ = ∏_1 ≤ i < j ≤ c_1 (ξ_ϰ_i - ξ_ϰ_j) ∏_1 ≤ i < j ≤ c_1 (ξ_N + ϰ_i - ξ_N + ϰ_j) / ∏_1 ≤ i, j ≤ c_1 (ξ_ϰ_i - ξ_N + ϰ_j);

S_ϰ̅ = ∏_1 ≤ i < j ≤ c_2 (ξ_ϰ̅_i - ξ_ϰ̅_j) ∏_1 ≤ i < j ≤ c_2 (ξ_N + ϰ̅_i - ξ_N + ϰ̅_j) / ∏_1 ≤ i, j ≤ c_2 (ξ_ϰ̅_i - ξ_N + ϰ̅_j).

Observe that the summations over T in (<ref>) and over K in (<ref>) produce unique combinations of the products

P_τ = ∏_i = 1^c_1 (μ_N + τ_1,i μ_τ_2,i - N) ∏_j = 1^c_2 (ν_τ_2,j ν_τ_1,j)

expressed in terms of the tuple τ. In order to unify and simplify the indexation generated by τ and ϰ, we introduce two N-tuples ρ = {ρ_i}_i = 1^N and ρ̅ = {ρ̅_i}_i = 1^N defined as

* 1 ≤ ρ_i ≤ 2N, 1 ≤ i ≤ N; 1 ≤ ρ̅_j ≤ 2N, 1 ≤ j ≤ N;
* ρ_i > ρ_j if i > j; ρ̅_k > ρ̅_l if k > l;
* ρ ∩ ρ̅ = ∅, ρ ∪ ρ̅ = {1, 2, …, 2N}.

Let R represent the set of all such tuples ρ.
Then it is easy to see that for each tuple τ there exists a tuple ρ such that the expression for P_τ in terms of ρ and ρ̅ takes the form

P_ρ = ∏_i = 1^N μ_ρ_i ∏_j = 1^N ν_ρ̅_j.

After multiplying the numerator and denominator of each term in (<ref>) corresponding to the tuple τ by

∏_1 ≤ i ≤ c_1, 1 ≤ j ≤ c_2 (ξ_ϰ_i - ξ_N + ϰ̅_j) ∏_1 ≤ i ≤ c_2, 1 ≤ j ≤ c_1 (ξ_ϰ̅_i - ξ_N + ϰ_j)

and substituting (<ref>) into (<ref>), the expression for the coefficient C_τ of the term P_τ yields

C_τ = ∏_1 ≤ i < j ≤ c_1 (ξ_ϰ_i - ξ_ϰ_j)(ξ_N + ϰ_i - ξ_N + ϰ_j) ∏_1 ≤ i < j ≤ c_2 (ξ_ϰ̅_i - ξ_ϰ̅_j)(ξ_N + ϰ̅_i - ξ_N + ϰ̅_j) × ∏_1 ≤ i ≤ c_1, 1 ≤ j ≤ c_2 (ξ_ϰ_i - ξ_N + ϰ̅_j) ∏_1 ≤ i ≤ c_2, 1 ≤ j ≤ c_1 (ξ_ϰ̅_i - ξ_N + ϰ_j) (∏_1 ≤ i ≤ N, N + 1 ≤ j ≤ 2N (ξ_i - ξ_j))^-1.

In terms of ρ the expression for C_τ translates into

C_ρ = ∏_1 ≤ i < j ≤ N (ξ_ρ_i - ξ_ρ_j) ∏_1 ≤ i < j ≤ N (ξ_ρ̅_i - ξ_ρ̅_j) / ∏_1 ≤ i ≤ N, N + 1 ≤ j ≤ 2N (ξ_i - ξ_j).

On the other hand, using the Laplace theorem, the following representation can be obtained for the determinant Δ:

Δ = ∑_ρ ∈ R (-1)^(∑_k = 1^N k + ∑_m = 1^N ρ_m) ∏_i = 1^N μ_ρ_i ∏_j = 1^N ν_ρ̅_j ∏_1 ≤ i < j ≤ N (ξ_ρ_i - ξ_ρ_j) ∏_1 ≤ i < j ≤ N (ξ_ρ̅_i - ξ_ρ̅_j).

Comparing (<ref>) with the previously obtained relations, we conclude that det S and Δ are related by the formula

det S = (-1)^(N(N - 1)/2) Δ ∏_1 ≤ i ≤ N, N + 1 ≤ j ≤ 2N (ξ_i - ξ_j)^-1.

Since ξ_i ≠ ξ_j, 1 ≤ i ≤ N, N + 1 ≤ j ≤ 2N, the assertion of the lemma follows. □

Let the sets of numbers μ, ν, ξ be such that det S ≠ 0. Then the transfer matrix-function W_A(λ) has the form (<ref>), where

D_11(λ) = (-1)^N det[ V_N - 1(μ, ξ) 0; V_N(ν, ξ) Λ_N ] Δ^-1; D_12(λ) = (-1)^N det[ V_N - 1(μ, ξ) Λ_N - 1; V_N(ν, ξ) 0 ] Δ^-1; D_21(λ) = det[ V_N(μ, ξ) 0; V_N - 1(ν, ξ) Λ_N - 1 ] Δ^-1; D_22(λ) = det[ V_N(μ, ξ) Λ_N; V_N - 1(ν, ξ) 0 ] Δ^-1;

and the matrix-function D(λ) = {D_ij(λ)}_i,j = 1^2 is the basis solution of the IP Problem with degree N - p - r, where p is the number of indices i_k, 1 ≤ k ≤ p, for which μ_i_k = 0, and r is the number of indices j_k, 1 ≤ k ≤ r, for which ν_j_k = 0.
The corresponding normalized basis solution of the IP Problem is unique.

Proof. By virtue of the conditions of the theorem and Lemma 2.2, Δ ≠ 0. Hence the polynomials (<ref>) make sense. From (<ref>) we find that

ν_i D_k1(ξ_i) + μ_i D_k2(ξ_i) = 0, 1 ≤ i ≤ 2N, k = 1, 2.

Indeed, consider for example the combination

D_1(λ) = ν_i D_11(λ) + μ_i D_12(λ)

for some index i: 1 ≤ i ≤ 2N. It can be represented in the form

D_1(λ) = (-1)^N det[ V_N - 1(μ, ξ) μ_i Λ_N - 1; V_N(ν, ξ) ν_i Λ_N ] Δ^-1.

Setting λ = ξ_i in the last expression, we see that D_1(ξ_i) is the determinant of a matrix whose i-th and last columns coincide. Thus D_1(ξ_i) = 0 (1 ≤ i ≤ 2N). The combination corresponding to

D_2(λ) = (-1)^N det[ V_N(μ, ξ) μ_i Λ_N; V_N - 1(ν, ξ) ν_i Λ_N - 1 ] Δ^-1

is treated analogously. So the equalities (<ref>) are satisfied and { D_i j(λ) }_i, j = 1^2 is the basis solution of the IP Problem. We now show that the corresponding normalized basis solution is unique. First, consider the case μ_i ≠ 0 and ν_i ≠ 0, 1 ≤ i ≤ 2N. Assume that there exists another solution {D̃_i j(λ)}_i, j = 1^2 of degree N such that the equalities (<ref>) are satisfied and the coefficients of the highest degree of the polynomial pairs {D_11(λ), D̃_11(λ)} and {D_22(λ), D̃_22(λ)}, respectively, are equal to 1. The expressions (<ref>) can be considered as two systems of 2N equations each with respect to the coefficients of the polynomials D_1j(λ) and D_2j(λ), j = 1, 2.
We represent the polynomials D_11(λ), D_12(λ) and D̃_11(λ), D̃_12(λ) in the form

D_11(λ) = λ^N + ∑_i = 1^N λ^N - i d_11^(i), D_12(λ) = ∑_i = 1^N λ^N - i d_12^(i), D̃_11(λ) = λ^N + ∑_i = 1^N λ^N - i d̃_11^(i), D̃_12(λ) = ∑_i = 1^N λ^N - i d̃_12^(i),

and consider the systems

ν_i D_11(ξ_i) + μ_i D_12(ξ_i) = 0, 1 ≤ i ≤ 2N

and

ν_i D̃_11(ξ_i) + μ_i D̃_12(ξ_i) = 0, 1 ≤ i ≤ 2N.

Subtracting the corresponding equations in (<ref>) from (<ref>) we arrive at the system

ν_i D̂_11(ξ_i) + μ_i D̂_12(ξ_i) = 0, 1 ≤ i ≤ 2N,

where

D̂_11(λ) = ∑_i = 1^N λ^N - i (d̃_11^(i) - d_11^(i)), D̂_12(λ) = ∑_i = 1^N λ^N - i (d̃_12^(i) - d_12^(i)).

According to Theorem 2.1, for arbitrary polynomials D̂_1i(λ) (i = 1, 2) of degree less than or equal to N - 1 there exists a matrix Ŝ with elements constructed from the sets {ν_i}, {μ_i} and {ξ_i} (1 ≤ i ≤ 2N) and having the form (<ref>) (hence Ŝ = S) such that det S = 0. Again using Lemma 2.2 we conclude that Δ = 0, but this contradicts the condition of the theorem. Hence, d̃_11^(i) - d_11^(i) = 0 and d̃_12^(i) - d_12^(i) = 0, i = 1, 2, …, N. The systems

ν_i D_21(ξ_i) + μ_i D_22(ξ_i) = 0, 1 ≤ i ≤ 2N

and

ν_i D̃_21(ξ_i) + μ_i D̃_22(ξ_i) = 0, 1 ≤ i ≤ 2N

are considered analogously.

Let now μ_i_k = 0, k = 1, 2, …, p for some multi-index i_1, i_2, … i_p, 0 ≤ p ≤ 2N, and ν_j_m = 0, m = 1, 2, …, r for some multi-index j_1, j_2, … j_r, 0 ≤ r ≤ 2N. Consider first the system (<ref>). In this case the polynomials D_1k(λ), k = 1, 2 can be represented as

D_11(λ) = B_11(λ) ∏_k = 1^p (λ - ξ_i_k), D_12(λ) = B_12(λ) ∏_m = 1^r (λ - ξ_j_m),

where

B_11(λ) = λ^N - p + ∑_i = 1^N - p λ^N - p - i b_11^(i), B_12(λ) = ∑_i = 1^N - r λ^N - r - i b_12^(i).
Assume that there exists another solution D̃_1k(λ), k = 1, 2 of the system (<ref>), which can be represented as

D̃_11(λ) = B̃_11(λ) ∏_k = 1^p (λ - ξ_i_k), D̃_12(λ) = B̃_12(λ) ∏_m = 1^r (λ - ξ_j_m),

where

B̃_11(λ) = λ^N - p + ∑_i = 1^N - p λ^N - p - i b̃_11^(i), B̃_12(λ) = ∑_i = 1^N - r λ^N - r - i b̃_12^(i).

In the case of the normalized basis solution the coefficients of the highest degree of the polynomials B_11(λ) and B̃_11(λ) equal one. Substituting (<ref>) into (<ref>) we arrive at two systems of equations with respect to the coefficients {b_11^(i)}_i = 1^N - p, {b_12^(i)}_i = 1^N - r and {b̃_11^(i)}_i = 1^N - p, {b̃_12^(i)}_i = 1^N - r respectively. Subtracting the corresponding equations of these systems we get

ν_j ∑_i = 1^N - p ξ_j^N - p - i (b̃_11^(i) - b_11^(i)) + μ_j ∑_i = 1^N - r ξ_j^N - r - i (b̃_12^(i) - b_12^(i)) = 0, 1 ≤ j ≤ 2N.

The system (<ref>) can have only trivial solutions; otherwise the matrix of the coefficients would have to be singular, which is equivalent to the condition det S = 0, implying (according to Lemma 2.2) that Δ = 0, but this contradicts the theorem's assumptions. Hence, b̃_11^(i) = b_11^(i), 1 ≤ i ≤ N - p and b̃_12^(i) = b_12^(i), 1 ≤ i ≤ N - r. The case of the polynomials D_2j(λ), j = 1, 2 is considered analogously. □

Now we summarize the properties of the polynomials D_ij(λ) (i, j = 1, 2) under the condition Δ ≠ 0.

deg D_11(λ) = deg D_22(λ) = N. This assertion follows directly from (<ref>). It is easy to see that the coefficients of the highest degree of the polynomials D_11(λ) and D_22(λ) equal Δ.

The coefficients of the polynomials D_ij(λ) (i, j = 1, 2) do not depend on the absolute values of the parameters μ_i, ν_i (1 ≤ i ≤ 2N) but are determined only up to the values of the ratios ε_i = μ_i/ν_i when μ_i ≠ 0 and ν_i ≠ 0. This follows from the equalities (<ref>): if μ_i ≠ 0 and ν_i ≠ 0, one can divide both sides by μ_i or ν_i without violating the equalities.

The following equality is true:

D_11(λ) D_22(λ) - D_12(λ) D_21(λ) = ∏^2N_k=1 (λ - ξ_k).
Indeed, formula (<ref>) is a consequence of the expressions

D_11(ξ_k) D_22(ξ_k) - D_12(ξ_k) D_21(ξ_k) = 0, k = 1, 2, …, 2N,

following from (<ref>).

If i_1, i_2, … i_p (0 ≤ i_j ≤ 2N, j = 1, 2, …, p) are such that μ_i_j = 0 (j = 1, 2, …, p), then D_11(ξ_i_j) = D_21(ξ_i_j) = 0, and vice versa: if D_11(ξ_i_j) = 0 (D_21(ξ_i_j) = 0), then D_21(ξ_i_j) = 0 (D_11(ξ_i_j) = 0) and μ_i_j = 0.

Indeed, if one takes into account that the simultaneous equalities μ_i_j = 0, ν_i_j = 0 are impossible, then the direct assertion follows from (<ref>). Now let us prove the inverse one. Assume that D_11(ξ_i_j) = 0. If ν_i_j ≠ 0, then D_k2(ξ_i_j) ≠ 0 (k = 1, 2), and from (<ref>) it follows that μ_i_j = 0 and D_21(ξ_i_j) = 0. Assuming that ν_i_j = 0 implies the equalities D_k2(ξ_i_j) = 0 (k = 1, 2) and the fact that the multiplicity of the root ξ_i_j of the polynomial D_12(λ) is greater than one. This contradicts the equality (<ref>).

The following property is proved similarly.

If i_1, i_2, ⋯ i_p (0 ≤ i_j ≤ 2N, j = 1, 2, ⋯, p) are such that ν_i_j = 0 (j = 1, 2, ⋯, p), then D_22(ξ_i_j) = D_12(ξ_i_j) = 0, and vice versa: if D_22(ξ_i_j) = 0 (D_12(ξ_i_j) = 0), then D_12(ξ_i_j) = 0 (D_22(ξ_i_j) = 0) and ν_i_j = 0 (j = 1, 2, ⋯, p).

The pairs of polynomials {D_k1(λ), D_k2(λ)} (k = 1, 2) do not have common roots.

This follows from the fact that, according to (<ref>), the polynomials D_1j(λ) (j = 1, 2) cannot have common roots other than the ξ_k. But if ξ_k is a common root, then Property 2.4 implies that μ_k = 0 and ν_k = 0, which is impossible. The pair D_2j(λ) (j = 1, 2) is considered analogously.

Formulas (<ref>) give a method of construction of the transfer matrix-function of the dynamic system (<ref>) and of the basis solution of the corresponding interpolation problem (<ref>) (IP Problem). Properties 1 - 4 are necessary for the existence of the functions W_A(λ) and W_B(λ).
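For the smallest nontrivial case N = 1 (two nodes, 2N = 2), the determinant construction of the basis solution and the two key facts above, the interpolation conditions and the identity D_11 D_22 - D_12 D_21 = ∏_k (λ - ξ_k), can be checked in exact arithmetic. A sketch with arbitrary parameter values (not data from the paper):

```python
from fractions import Fraction as F

def det3(M):
    # cofactor expansion of a 3x3 determinant
    return (M[0][0] * (M[1][1]*M[2][2] - M[1][2]*M[2][1])
          - M[0][1] * (M[1][0]*M[2][2] - M[1][2]*M[2][0])
          + M[0][2] * (M[1][0]*M[2][1] - M[1][1]*M[2][0]))

mu = [F(1), F(2)]
nu = [F(3), F(1)]
xi = [F(0), F(1)]
Delta = mu[0]*nu[1] - mu[1]*nu[0]          # det V for N = 1
assert Delta != 0

def D(lam):
    # For N = 1: V_0(mu, xi) is the row (mu_1, mu_2); V_1(nu, xi) stacks
    # (nu_1, nu_2) over (xi_1 nu_1, xi_2 nu_2); Lambda_0 = (1), Lambda_1 = (1, lam)^T.
    m1, m2 = mu
    n1, n2 = nu
    x1, x2 = xi
    D11 = -det3([[m1, m2, F(0)], [n1, n2, F(1)], [x1*n1, x2*n2, lam]]) / Delta
    D12 = -det3([[m1, m2, F(1)], [n1, n2, F(0)], [x1*n1, x2*n2, F(0)]]) / Delta
    D21 =  det3([[m1, m2, F(0)], [x1*m1, x2*m2, F(0)], [n1, n2, F(1)]]) / Delta
    D22 =  det3([[m1, m2, F(1)], [x1*m1, x2*m2, lam], [n1, n2, F(0)]]) / Delta
    return D11, D12, D21, D22

# interpolation conditions: D(xi_k) applied to (nu_k, mu_k)^T gives zero
for k in range(2):
    D11, D12, D21, D22 = D(xi[k])
    assert nu[k]*D11 + mu[k]*D12 == 0
    assert nu[k]*D21 + mu[k]*D22 == 0

# determinant identity, checked at a few sample points of a degree-2 polynomial
for lam in (F(2), F(3), F(7)):
    D11, D12, D21, D22 = D(lam)
    assert D11*D22 - D12*D21 == (lam - xi[0]) * (lam - xi[1])
```

Checking the degree-2N polynomial identity at 2N + 1 points suffices to confirm it as a polynomial equality.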
For the applications considered in this paper the inverse interpolation problem (IIP Problem) also plays an important role. The problem is formulated as follows:

IIP Problem. Given the basis solution of the IP Problem {X_ij(λ)}_i, j = 1^2, find the sets {μ_i}, {ν_i}, {ξ_i}, 1 ≤ i ≤ 2N satisfying the equalities (<ref>).

As was noted in Property 2.2, the mapping {D_ij(λ)}_i, j = 1^2 → {μ, ν, ξ} is not unique. Given the polynomials D_ij(λ) (i, j = 1, 2), the sets {μ, ν, ξ} can be restored only up to the ratios μ_i/ν_i (i = 1, 2, ⋯, 2N). So the mapping {μ, ν, ξ} → {D_ij(λ)} under some conditions is surjective. The following theorem formulates these conditions and gives the solution of the IIP Problem.

Let the polynomials D_ij(λ) (i, j = 1, 2) be such that

a) deg D_11(λ) = deg D_22(λ) = N, deg D_12(λ) ≤ N - 1, deg D_21(λ) ≤ N - 1;

b) the pairs of polynomials {D_k1(λ), D_k2(λ)} (k = 1, 2) do not have common roots;

c) the polynomial D(λ) = D_11(λ) D_22(λ) - D_12(λ) D_21(λ) has simple roots.

Then the sets {μ, ν, ξ} can be restored up to the ratios μ_i/ν_i (i = 1, 2, ⋯, 2N), and the corresponding matrix S is non-singular.

Proof. Indeed, given the polynomials D_ij(λ) (i, j = 1, 2) satisfying the requirements a) - c) of the theorem, let ξ̂_k (k = 1, 2, ⋯, 2N) be the simple roots of the polynomial D(λ). Consider the following cases: 1) for all k = 1, 2, …, 2N we have D_ij(ξ̂_k) ≠ 0 (i, j = 1, 2); 2) for some i_1, i_2, ⋯, i_p: 1 ≤ i_j ≤ 2N (j = 1, 2, ⋯, p) one has D_k1(ξ̂_i_j) = 0 (k = 1, 2); 3) for some l_1, l_2, ⋯, l_m: 1 ≤ l_s ≤ 2N (s = 1, 2, ⋯, m) one has D_k2(ξ̂_l_s) = 0 (k = 1, 2).

In case 1) let μ̂_i (1 ≤ i ≤ 2N) be a set of arbitrary non-zero numbers. Define

ν̂_i = μ̂_i D_12(ξ̂_i)/D_11(ξ̂_i) = μ̂_i D_22(ξ̂_i)/D_21(ξ̂_i), 1 ≤ i ≤ 2N.

In case 2) we put μ̂_i_j = 0 (j = 1, 2, ⋯, p) and let μ̂_k (k ≠ i_j) be arbitrary non-zero numbers.
Choose ν̂_i_j (j = 1, 2, ⋯, p) as arbitrary non-zero numbers and let

ν̂_k = μ̂_k D_12(ξ̂_k)/D_11(ξ̂_k) = μ̂_k D_22(ξ̂_k)/D_21(ξ̂_k), k ≠ i_j.

In case 3) we put ν̂_l_s = 0 (s = 1, 2, ⋯, m) and let ν̂_k (k ≠ l_s) be arbitrary non-zero numbers. Choose μ̂_l_s (s = 1, 2, ⋯, m) as arbitrary non-zero numbers and let

μ̂_k = ν̂_k D_11(ξ̂_k)/D_12(ξ̂_k) = ν̂_k D_21(ξ̂_k)/D_22(ξ̂_k), k ≠ l_s.

Then the sets {ξ̂}, {μ̂}, {ν̂} solve our problem, and the corresponding matrix S is non-singular. □

§ SINGULAR SOLUTIONS OF NON-LINEAR INTEGRABLE DIFFERENTIAL EQUATIONS

The material of this section is based on the results obtained in a number of papers (see for example <cit.> - <cit.>), where the method of operator identities was successfully applied to obtaining explicit solutions of some NIDE using the inverse spectral problem approach. Later this approach was extended to obtain more general classes of solutions (see <cit.>). The majority of the results of this section are not new and can be interpreted as scalar analogues of the formulas from <cit.>. At the same time, the simplicity of our special case allows us to perform a more thorough investigation of the properties of the solutions and to obtain more detailed information about their behavior. We consider the following NIDEs:

∂ ^2 ϕ (x, t)/∂ x ∂ t = 4 sinh ϕ (x, t)    (sinh-Gordon equation, SHG);

∂ψ (x, t)/∂ t = - 1/4 ∂ ^3 ψ (x, t)/∂ x ^3 + 3/2 | ψ (x, t) |^2 ∂ψ (x, t)/∂ x    (modified Korteweg - de Vries equation, MKdV);

∂ρ (x, t)/∂ t = i/2 [∂ ^2 ρ (x, t)/∂ x ^2 - 2 | ρ (x, t) |^2 ρ (x, t)]    (non-linear Schrödinger equation, NSE).

The method of the inverse spectral problem, as opposed to the method of inverse scattering, allows us to weaken the requirement of solution regularity and to investigate solutions with singularities (the inverse scattering approach requires regularity of the solutions on the axis x ∈ (-∞, ∞), while the inverse spectral problem method requires regularity of the solutions on the semi-axis x ∈ (0, ∞)).
Below we sketch the results obtained in this way, following <cit.> and <cit.>.

§.§ Explicit solutions of NIDE

To the equations (<ref>)- (<ref>) we associate the following linear system of differential equations: ∂W/∂x = i z H(x, t) W, W(0, t, z) = I_2, where H(x, t) = [0, exp[ξ(x, t) - ξ(0, t)]; exp[ξ(0, t) - ξ(x, t)], 0]. Here ξ(x, t) is the solution of either of the equations (<ref>)- (<ref>) and I_2 is the 2 × 2 identity matrix. In further considerations we will always, unless specifically stated otherwise, assume that the function ξ(x, t), 0 ≤ x < ∞, 0 ≤ t < ∞, is real-valued. In this case the equalities (<ref>), (<ref>) represent a self-adjoint canonical system of differential equations, for which the Weyl-Titchmarsh function v(t, z) is defined by the following inequality: ∫_0^∞ [1 v^*(t, z)] W^*(x, t, z) [J H(x, t)] W(x, t, z) [1; -v(t, z)] dx < ∞, where Im z > 0 and J = [0 1; 1 0].

In the case of a rational function v(t, z), explicit solutions of the non-linear equations can be constructed. We consider a special class 𝒫 of functions v(z) satisfying the following conditions: * the function v(z) is rational with poles in the lower half-plane Im z < 0, and v(z) > 0 if z ≥ 0; * lim_{z → ∞} v(z) = 1 and v(0) > 0. The function v(z) belongs to the Nevanlinna class, i.e. satisfies the condition Im v(z) > 0 if Im z > 0. Below, closely following <cit.>, we describe the procedure of construction of the explicit solutions, which consists of several steps.

Step 1. Let v_0(z) ≡ v(0, z) ∈ 𝒫, and v_0(z) = ∏_{k = 1}^N (z + γ_{k, 0})/(z + α_{k, 0}); γ_{k, 0} ≠ γ_{j, 0}, α_{k, 0} ≠ α_{j, 0}; k ≠ j, i.e. the function v_0(z) is a rational Nevanlinna-type function with distinct sets of zeros {-γ_{k, 0}} and poles {-α_{k, 0}}; 1 ≤ k ≤ N. On a sufficiently small interval 0 ≤ t ≤ T the function v(z, t) has the form v(z, t) = ∏_{k = 1}^N (z + γ_k(t))/(z + α_k(t)); γ_k(0) = γ_{k, 0}, α_k(0) = α_{k, 0}.

Step 2.
We construct the polynomialQ (z) = (-1)^N/2 [P_1 (z, t) P_2 (-z, t) + P_1 (-z, t) P_2 (z, t)],where P_1 (z, t) = ∏ _k = 1 ^N [z - α _k (t)];P_2 (z, t) = ∏ _k = 1 ^N [z - γ _k (t)].Assume that the roots ω _k (1 ≤ k ≤ 2N) of the polynomial Q (z) are such that ω _k ≠ω _i if k ≠ i. It has been proven in <cit.> that the numbers ω _k (1 ≤ k ≤ 2N) are integrals of motion (do not depend on t). The following theorem is true The following evolution (t-dependance) formulas holdP_i (ω_k , t)/P_i (-ω_k , t) = P_i (ω_k , 0)/P_i (-ω_k , 0)exp(-2t Θ (ω_k));i = 1, 2;1 ≤ k ≤ 2N, where P_i(t) are expressed via α_k(t) and γ_k(t) in (<ref>), ω_i (1 ≤ i ≤ 2N) are zeros of Q(z) andΘ (x) =1/x in case of SHG equation; -x^3in case of MKdV equation; x^2in case of NSE equation. From  (<ref>) it follows that P_1 (ω_k , 0)/P_1 (-ω_k , 0) = -P_2 (ω_k , 0)/P_2 (-ω_k , 0),1 ≤ k ≤ 2N. Given a set of quantities Y = {y_i }_i = 1^N and ordered sets ℐ_j = {i_k }_k = 1^j of indexes such that i_m ≠ i_n, m ≠ n; i_m > i_n, m > n; 1 ≤ k ≤ j, the elementary symmetric form σ _j (Y) of order j is defined as σ _j (Y) = ∑ _i_k ∈ℐ_j∏_k = 1^jy_i_k, 1 ≤ j ≤ N; σ _0 (Y) = 1. Equalities (<ref>) can be written as two systems of linear equations with respect to the symmetric forms σ (A(t)) and σ (G (t)) where A(t) = {α _k (t) }_k = 1^N and G(t) = {γ _k (t) }_k = 1^N:∑_j = 0^N(-1)^N + jω_k^j σ_j (A(t)) = P_1 (ω_k , 0)/P_1 (-ω_k , 0)exp(-2t Θ (ω_k))∑_j = 0^Nω_k^j σ_j (A(t)),and∑_j = 0^N(-1)^N + jω_k^j σ_j (G(t)) = P_2 (ω_k , 0)/P_2 (-ω_k , 0)exp(-2t Θ (ω_k))∑_j = 0^Nω_k^j σ_j (G(t)),where 1 ≤ k ≤ 2N. It's easy to see that systems (<ref>), (<ref>) have unique solutions. Step 3. Solving (<ref>) and (<ref>) with respect to σ (A(t)) and σ (G (t)) and substituting results in v(t, z) = ∑ _k = 0^N^N - k z^k σ_N - k (G (t))/∑ _k = 0^N^N - k z^k σ_N - k (A (t)) we obtain an explicit representation for the evolution (t-dependence) of the Weyl-Titchmarsh function. 
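For N = 1 the evolution formulas of the Theorem can be solved by hand, which makes Step 3 easy to check numerically. The sketch below (illustrative values α_{1, 0}, γ_{1, 0}; sinh-Gordon case, Θ(ω) = 1/ω) uses the relation P_1(ω, t)/P_1(-ω, t) = (P_1(ω, 0)/P_1(-ω, 0)) exp(-2tΘ(ω)) with P_1(z, t) = z - α_1(t), for which Q(z) = z^2 - α_{1, 0}γ_{1, 0}, and verifies that both roots ±ω of Q give the same α_1(t):

```python
import math

# Sketch (not from the paper): N = 1 consistency check of the evolution
# formulas, sinh-Gordon case Theta(w) = 1/w.  alpha0, gamma0 are illustrative
# positive values, so Q(z) = z^2 - alpha0*gamma0 has real roots +/- w.
alpha0, gamma0 = 2.0, 3.0
w = math.sqrt(alpha0 * gamma0)

def r1(t):
    # P_1(w, 0)/P_1(-w, 0) * exp(-2 t Theta(w)),  P_1(z, 0) = z - alpha0
    return (w - alpha0) / (-w - alpha0) * math.exp(-2.0 * t / w)

def alpha(t):
    # (w - alpha) = r1(t) (-w - alpha)  =>  alpha = w (1 + r1)/(1 - r1)
    r = r1(t)
    return w * (1.0 + r) / (1.0 - r)

def alpha_from_second_root(t):
    # same relation at the root -w, with Theta(-w) = -1/w
    r = (-w - alpha0) / (w - alpha0) * math.exp(2.0 * t / w)
    return -w * (1.0 + r) / (1.0 - r)
```

At t = 0 the formula returns α_{1, 0} exactly, and both roots of Q(z) reproduce the same trajectory α_1(t), illustrating that the 2N equations are consistent although only N quantities are unknown.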
In <cit.> it has been proven: If v(z) ∈ 𝒫, then v(t, z) ∈ 𝒫 and the number of zeros and poles, including their multiplicities, is preserved.

Step 4. Introduce in L_2(0, ζ) an operator S_ζ g = 2 g(x) + ∫_0^ζ g(u) K(x - u, t) du, where r(t, z) = 1/π [v(t, z) - 1], K(x, t) = ∫_{-∞}^∞ e^{izx} r(t, z) dz. The inverse S_ζ^{-1} admits the representation S_ζ^{-1} g = 1/2 g(x) + ∫_0^ζ g(u) Γ_ζ(x, u, t) du.

Step 5. The solution of the equations (3.1)-(3.3) is represented as ξ(x, t) = 8 ∫_x^∞ Γ_{2s}(2s, 0, t) ds in the case of the SHG equation; -4 Γ_{2x}(2x, 0, t) in the case of the MKdV and NSE equations. In the case of the SHG equation the solution ξ(x, t) can also be written as ξ(x, t) = -sinh^{-1}(2 ∂/∂t Γ_{2x}(2x, 0, t)). Let us note that sinh^{-1}(x) = ln(x + √(1 + x^2)).

Although the above procedure has been developed for the case when the system (<ref>), (<ref>) is self-adjoint canonical (the function ξ(x, t) is real-valued), the explicit nature of the solution allows its analytic continuation into the complex domain. Equalities (<ref>), (<ref>), (<ref>) suggest that the dynamics (dependence on t) of the functions α_k(t) and γ_k(t), (1 ≤ k ≤ N) is quite similar. This allows one to express the Weyl-Titchmarsh function and the solution of NIDE in terms of only {ω_k}_{k = 1}^{2N} and {α_{k, 0}}_{k = 1}^N. Consider the case when the set A(α_0) = {α_{k, 0}}_{k = 1}^N is symmetric with respect to the real axis; then the roots of the polynomial (<ref>), Ω = {ω_k}_{k = 1}^{2N}, are symmetric with respect to both the real and imaginary axes. In further considerations by Ω we denote the set {ω_k}_{k = 1}^N where Re ω_k > 0, 1 ≤ k ≤ N, and ω_i ≠ ω_k, i ≠ k, and assume ω_{N + k} = -ω_k, 1 ≤ k ≤ N.
Application of (<ref>)- (<ref>) gives the following representation for the function Γ _2x (2x, 0, t): Γ _2x (2x, 0, t) =1/2[ψ_1 (x, t)⋯ ψ_N (x, t)ψ_1^-1 (x, t)⋯ ψ_N^-1 (x, t)]S^-1(x, t) [1 ⋯ 1_N0 ⋯ 0_N ]^T,whereS(x, t) =[ 1/ω_1 + α_1,0 ⋯ 1/ω_N + α_1,0 1/α_1,0 - ω_1 ⋯ 1/α_1,0 - ω_N; ⋯ ⋯ ⋯ ⋯ ⋯ ⋯; 1/ω_1 + α_N,0 ⋯ 1/ω_N + α_N,0 1/α_N,0 - ω_1 ⋯ 1/α_N,0 - ω_N;;ψ_1 (x, t)/ω_1 - α_1,0 ⋯ψ_N (x, t)/ω_N - α_1,0 -1/ψ_1 (x, t) ( α_1,0 + ω_1 ) ⋯ -1/ψ_N (x, t) ( α_1,0 + ω_N ); ⋯ ⋯ ⋯ ⋯ ⋯ ⋯;ψ_1 (x, t)/ω_1 - α_N,0 ⋯ψ_N (x, t)/ω_N - α_N,0 -1/ψ_1 (x, t) ( α_N,0 + ω_1 ) ⋯ -1/ψ_N (x, t) ( α_N,0 + ω_N ); ]and ψ_k (x, t) = exp[2(ω_k x + Θ (ω_k) t)]. Performing calculations on the right hand side of  (<ref>) (for the details see <cit.>) one obtains the following expression for the function Γ _2x (2x, 0, t)Γ _2x (2x, 0, t) = (-1)^N/2Δ_1 (x, t)/Δ_2 (x, t), where Δ_1 (x, t) = det [1⋯11⋯1;ω_1⋯ω_N -ω_1⋯ -ω_N;⋯⋯⋯⋯⋯⋯;ω_1^N - 2⋯ω_N^N - 2(-1)^N - 2ω_1^N - 2⋯(-1)^N - 2ω_N^N - 2;ψ_1(x, t)⋯ψ_N(x, t) 1/ψ_1 (x, t)⋯ 1/ψ_N (x, t)(x, t); ω_1 ψ_1 (x, t)⋯ ω_N ψ_N (x, t)-ω_1/ψ_1 (x, t)⋯-ω_N/ψ_N (x, t);⋯⋯⋯⋯⋯⋯;ω_1 ^N ψ_1 (x, t)⋯ω_N ^N ψ_N (x, t) (-1)^N ω_1 ^N/ψ_1 (x, t)⋯(-1)^Nω_N ^N/ψ_N (x, t);], Δ_2 (x, t) = det [ 1 ⋯ 1 1 ⋯ 1; ω_1 ⋯ ω_N-ω_1 ⋯-ω_N; ⋯ ⋯ ⋯ ⋯ ⋯ ⋯; ω_1^N ⋯ ω_N^N (-1)^Nω_1^N ⋯ (-1)^Nω_N^N;ψ_1 (x, t) ⋯ ψ_N(x, t)1/ψ_1 (x, t) ⋯1/ψ_N (x, t);ω_1 ψ_1 (x, t) ⋯ω_N ψ_N (x, t) -ω_1/ψ_1 (x, t) ⋯ -ω_N/ψ_N (x, t); ⋯ ⋯ ⋯ ⋯ ⋯ ⋯;ω_1 ^N - 2ψ_1 (x, t) ⋯ω_N ^N - 2ψ_N (x, t) (-1)^N - 2ω_1 ^N - 2/ψ_1 (x, t) ⋯ (-1)^N - 2ω_N ^N - 2/ψ_N (x, t); ]. The following theorems summarize the above. Let Ω = {ω_k }_k = 1^N;_0 = {α_k, 0}_k = 1^N be two sets of numbers such that ω_i ≠ω_k, i ≠ k;ω_k > 0. 
Then the solution of MKdV and NSE equations can be represented as ξ (x, t) = 2 (-1)^N -1Δ_1(x, t)/Δ_2(x, t)where Δ_k(x, t), k = 1, 2 are defined by (<ref>) and  (<ref>) and ψ_k (x, t) = exp[2 (ω_k x -ω_k^3 t - C_k)]in case of MKdV equation; exp[2 (ω_k x +ω_k^2 t - C_k)]in case of NSE equation.with C_k = 1/2ln (∏ _i = 1^N |ω_j - α_i, 0/ω_j + α_i, 0|).Let's introduce the notationsδ_1 (x, t) = {ω_j ^N∂ ^k/∂ t^kcoshχ_j(x, t)}_0 ≤ j, k ≤ N - 1, δ_2 (x, t) = {ω_j ^N∂ ^k/∂ t^ksinhχ_j(x, t)}_0 ≤ j, k ≤ N - 1,where χ_j = ω_j x + t / ω_j -1/2ln (∏ _i = 1^N |ω_j - α_i, 0/ω_j + α_i, 0|). Let Ω = {ω_k }_k = 1^N;_0 = {α_k, 0}_k = 1^N be two sets of numbers such that ω_i ≠ω_k, i ≠ k;ω_k > 0 and each of the sets Ω,_0 is symmetric with respect to the real axis. Then the solution ξ (x, t) of SHG equation is represented asξ (x, t) = 2 ln|δ_1 (x, t)/δ_2 (x, t) |.We illustrate formulas (<ref>) and (<ref>) on some simple examples. Let N = 1 and ω_1 = ω̅_1 ≡ω,α_1, 0 = α̅_1, 0≡α_0, then according to (<ref>) and (<ref>)δ_1 (x, t) = sinh (ω x + t/ω - 1/2ln | ω - α_0/ω + α_0 |), if ω > α_0; cosh (ω x + t/ω - 1/2ln | ω - α_0/ω + α_0 |), if ω < α_0. and δ_2 (x, t) = cosh (ω x + t/ω - 1/2ln | ω - α_0/ω + α_0 |) if ω > α_0 sinh (ω x + t/ω - 1/2ln | ω - α_0/ω + α_0 |) if ω < α_0It's easy to verify that the function ξ (x, t) = 2 ln |δ_1 (x, t)/δ_2 (x, t)| satisfies SHG equation (<ref>). Let N = 1, then from (<ref>) we deduce that if ω > α_0 then the functionξ (x, t) = 2 ωexp (χ (x, t)) / sinh (χ (x, t))satisfies mKdV equation withχ (x, t) = 2 (ω x - ω ^3 t - 1/2ln | ω - α_0/ω + α_0 |),and NSE withχ (x, t) = 2 (ω x + ω ^2 t - 1/2ln | ω - α_0/ω + α_0 |).If ω < α_0 then sinh (χ (x, t)) is replaced by cosh (χ (x, t)).§.§ Dynamic systems and associated inverse problems for NIDEIn this paragraph we refer to the dynamic system corresponding to the S-node defined by (<ref>) and associated matrix S of type (<ref>). Matrix-function S(x, t) (<ref>) is a special case of the matrix S. 
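The N = 1 sinh-Gordon example above can be checked numerically. A sketch (the parameter values are illustrative): with ω > α_0 the formula ξ = 2 ln|δ_1/δ_2| reduces to 2 ln|tanh(ωx + t/ω - c)|, and a finite-difference approximation of the mixed derivative should reproduce 4 sinh ξ:

```python
import math

# Sketch (illustrative parameters): finite-difference check that
# phi(x, t) = 2 ln|tanh(w x + t/w - c)| satisfies the sinh-Gordon equation
# d^2 phi / dx dt = 4 sinh(phi).
w, alpha0 = 1.5, 0.5                      # w > alpha0, so delta1 = sinh, delta2 = cosh
c = 0.5 * math.log(abs((w - alpha0) / (w + alpha0)))

def phi(x, t):
    return 2.0 * math.log(abs(math.tanh(w * x + t / w - c)))

def mixed_xt(f, x, t, h=1e-4):
    # central finite difference for the mixed derivative d^2 f / dx dt
    return (f(x + h, t + h) - f(x + h, t - h)
            - f(x - h, t + h) + f(x - h, t - h)) / (4.0 * h * h)

x0, t0 = 0.8, 0.3                         # a point away from the singularity
lhs = mixed_xt(phi, x0, t0)
rhs = 4.0 * math.sinh(phi(x0, t0))
```

The agreement is limited only by the O(h^2) truncation error of the finite-difference stencil; near the singularity line ωx + t/ω - c = 0 the check breaks down, as expected for a singular solution.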
Indeed, the equalitiesa_k = 1, a_N + k = 0; 1 ≤ k ≤ N;c_k = 0, c_N + k = 1;≤ k ≤ N;d_k = ψ_k (x, t), d_N + k = ψ_k^-1 (x, t); 1 ≤ k ≤ N;g_k = α_k, 0, g_N + k = -α_k, 0; 1 ≤ k ≤ N;h_k = ω_k, h_N + k = -ω_k, 1 ≤ k ≤ N; b_k = 1, 1 ≤ k ≤ 2N; map matrix S onto S(x, t). Results obtained for the matrix S in the previous section and the fact that matrix-function S(x, t) is a special case of the matrix S allow us to make a connection between dynamic systems and NIDE. In this section we formulate and solve some of the related problems.For the convenience we present here the definitions for the matrices A, B, Π_1 , Π_2 from matrix identity (<ref>) in this special case. A=diag{α_1, 0 , … , α_N, 0, -α_1, 0 , … , -α_N, 0} ,B=diag{-ω_1 , … , -ω_N, ω_1 , … , ω_N}, Π ^T _1 = [ 1 … 1 0 … 0; 0 … 0 1 … 1; ],Π ^T _2 = [ 1 … 1 1 … 1;ψ_1 (x, t) ⋯ψ_N (x, t) ψ_1^-1 (x, t) … ψ_N^-1 (x, t); ].According to Theorem 2.1., the transfer matrix-function W_A ( x, λ) of the dynamic system corresponding to matrix-function S(x, t) has the formW_A ( x, t; λ) = ∏^N_i = 1( λ ^2 - α^2_i, 0){ D_kj( x, t; λ) } ^2 _k, j = 1,whereD_11( x, t; λ) = det[V_N - 1( 1 , Ω) V_N - 1( 1 , -Ω)0;V_N( X , Ω)V_N( X^-1 , -Ω)Λ_N;]Δ^-1 (x, t), D_12( x, t; λ) = det[V_N - 1( 1 , Ω) V_N - 1( 1 , -Ω)Λ_N - 1;V_N( 1 , Ω) V_N( 1 , -Ω)0;]Δ^-1 (x, t), D_21( x, t; λ) =det[V_N( 1 , Ω) V_N( 1 , -Ω)0;V_N - 1( 1 , Ω) V_N - 1( 1 , -Ω)Λ_N - 1;]Δ^-1 (x, t), D_22( x, t; λ) =det[ V_N - 1( 1 , Ω)V_N - 1( 1 , -Ω) 0; V_N - 1( X , Ω) V_N - 1( X^-1 , -Ω) Λ_N; ]Δ^-1 (x, t),withΔ (x, t) = det[ V_N - 1( 1 , Ω)V_N - 1( 1 , -Ω); V_N - 1( X , Ω) V_N - 1( X^-1 , -Ω); ]. X = {χ _k (x, t) }_k = 1^N and operations on the sets Ω , X are assumed to be performed member-wise. The following relation is true∂/∂ xW_A ( x, t; λ) = 2 ( λ [j, W_A ( x, t; λ)] + Γ _2x (2x, 0, t) j_1 W_A ( x, t; λ) ),where j = [ 0 0; 0 1; ], j_1 = [ 0 1; 1 0; ]; and [·, ·] - is a commutator symbol defined by [M_1, M_2] = M_1 M_2 - M_2 M_1. 
Formula  (<ref>) is an analogue of the formulas obtained in the series of papers <cit.> where more general setup was considered.Proof.Differentiating (<ref>) with respect to x we obtain∂/∂ xW_A ( x, t; λ) = -∂Π_2^T/∂ x S^-1 (A - λ I_2N)^-1Π_1 + Π_2^T S^-1∂ S/∂ x S^-1 (A - λ I_2N)^-1Π_1.It's easy to verify that∂ S/∂ x = 2 JSB,∂Π_2^T/∂ x = 2jΠ_2^T B,where J = [ 0 0; 0 I_N; ].In our case differential equation for the matrix S(x, t) is essentially different from the equation obtained in <cit.>-<cit.>, where ∂ S / ∂ x is expressed in terms of Π_1 and Π_2. Also from (<ref>) it follows thatA - SBS^-1 = Π_1 Π_2^T S^-1. First consider the second term on the right hand side of  (<ref>). In view of  (<ref>) we getΠ_2^T S^-1 J S B S^-1 (A - λ I_2N)^-1Π_1 = Π_2^T S^-1 J (A - λ I_2N + λ I_2N - Π_1 Π_2^T S^-1) (A - λ I_2N)^-1Π_1= Π_2^T S^-1 J Π_1 + λΠ_2^T S^-1 J (A - λ E_2N)^-1Π_1 - Π_2^T S^-1 J Π_1 Π_2^T S^-1 (A - λ I_2N)^-1Π_1_I_2 - W_A ( x, t; λ ) = λΠ_2^T S^-1 J (A - λ I_2N)^-1Π_1 + Π_2^T S^-1 J Π_1 W_A ( x, t; λ )= λΠ_2^T S^-1 (A - λ I_2N)^-1Π_1 j + Π_2^T S^-1Π_1 j W_A ( x, t; λ )= λ ( j - W_A ( x, t; λ )) + Π_2^T S^-1Π_1 j W_A ( x, t; λ ). Substituting the relation B S^-1 = S^-1 (A - Π_1 Π_2^T S^-1), following from  (<ref>), into the first term on the right hand side of  (<ref>) we obtainj Π_2^T B S^-1 (A - λ I_2N)^-1Π_1 = j Π_2^T S^-1 ( A - Π_1 Π_2^T S^-1 ) (A - λ I_2N)^-1Π_1= j Π_2^T S^-1 (A - λ I_2N + λ I_2N - Π_1 Π_2^T S^-1) (A - λ I_2N)^-1Π_1= j Π_2^T S^-1Π_1 + λ j Π_2^T S^-1 (A - λ I_2N)^-1Π_1_I_2 - W_A ( x, t; λ )- j Π_2^T S^-1Π_1 Π_2^T S^-1Π_2^T S^-1 (A - λ I_2N)^-1Π_1_I_2 - W_A ( x, t; λ ) = j Π_2^T S^-1Π_1 + λ j ( I_2 - W_A ( x, t; λ ) ) - j Π_2^T S^-1Π_1 ( I_2 - W_A ( x, t; λ ) )= λ j ( I_2 - W_A ( x, t; λ ) ) + j Π_2^T S^-1Π_1 W_A ( x, t; λ ). 
Combining  (<ref>) and  (<ref>) yields∂/∂ xW_A ( x, t; λ ) = 2( λ [j, W_A ( x, t; λ )] + [Π_2^T S^-1Π_1, j] W_A ( x, t; λ )).Taking into account (<ref>), expression [Π_2^T S^-1Π_1, j] can be written as Γ _2x (2x, 0, t) j_1. This completes the proof. □Formula (<ref>) establishes the connection between dynamic systems of the type (<ref>) and solutions of NIDE.Let's rewrite the quantities { D_k j( x, t; λ) } ^2 _k, j = 1 (the elements of the matrix-polynomial in the representation of the transfer matrix-function  (<ref>)) as polynomials with respect to λ D_1 1( x, t; λ) = (-1)^N ∑_i = 0^N (-1)^i a_i (x, t) λ ^N - i, a_0 = 1; D_1 2( x, t; λ) = ∑_i = 0^N - 1b_i (x, t) λ ^N - 1 - i; D_2 1( x, t; λ) = (-1)^N - 1∑_i = 0^N - 1(-1)^i b_i (x, t) λ ^N - 1 - i;D_2 2( x, t; λ) = ∑_i = 0^N a_i (x, t) λ ^N - i, a_0 = 1.Substituting (<ref>) into (<ref>), one can establish the relationship between the coefficients { a_i (x, t) }_i = 1^N and { b_i (x, t) }_i = 0^N - 1 of the polynomials. In this way we prove the following statement Let X(x, t) = col[ b_0 (x, t)⋯b_N - 1 (x, t) a_1 (x, t)⋯a_N (x, t)],where { a_i (x, t) }_i = 1^N and { b_i (x, t) }_i = 0^N - 1 are the coefficients of the polynomials (<ref>), then X(x, t) satisfies the following Riccati-type system of differential equations∂/∂ x X(x, t) = X(x, t) F X(x, t) - G X(x, t),where F and G are constant matrices F =[0 0⋯0_N2 0⋯0_N],00 2 0 ⋯ 0 0 0 2 ⋯ 0 ⋯ ⋯ ⋯ ⋯ ⋯ 0 0 0 ⋯ 2 0 0 0 ⋯ 0 G=[ [ [0]0 [0]0; [0]00 ]],and b_0 (x, t) = -1/2Γ_2x (2x, 0, t).Proof. Let's rewrite the equations (<ref>) as[ b_0^' = 2 ( b_0 a_1 - b_1 ) a_1^' = 2 b_0^2; b_1^' = 2 ( b_0 a_2 - b_2 ) a_1^' = 2 b_0 b_1; ⋯⋯⋯⋯⋯ ⋯⋯⋯⋯⋯; b_N - 2^' = 2 ( b_0 a_N - 1 - b_N - 1 ) a_N - 1^' = 2 b_0 b_N - 2; b_N - 1^' = 2 b_0 a_N - 1 a_N^' = 2 b_0 b_N - 1; ]In (<ref>) we omitted dependence on x and t and used 'prime' to designate the derivative with respect to x. 
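For N = 1 the system above collapses to b_0' = 2 b_0 a_1, a_1' = 2 b_0^2. As a sketch (the closed forms below are my own illustrative ansatz, with w and c free parameters, not a formula quoted from the text), the pair b_0 = w csch(2wx + c), a_1 = -w coth(2wx + c) can be verified symbolically:

```python
import sympy as sp

# Sketch: verify that b0 = w csch(2wx + c), a1 = -w coth(2wx + c) solve the
# N = 1 Riccati-type system  b0' = 2 b0 a1,  a1' = 2 b0^2  (w, c free).
x, w, c = sp.symbols('x w c', positive=True)
u = 2 * w * x + c
b0 = w * sp.csch(u)
a1 = -w * sp.coth(u)

# residuals of the two equations, rewritten via exponentials before simplifying
res1 = sp.simplify((sp.diff(b0, x) - 2 * b0 * a1).rewrite(sp.exp))
res2 = sp.simplify((sp.diff(a1, x) - 2 * b0 ** 2).rewrite(sp.exp))
```

Both residuals simplify to zero; the second one uses the identity coth^2 - csch^2 = 1, which is why the rewrite through exponentials is applied before simplification.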
Differentiating { D_k j( x, t; λ) } ^2 _k, j = 1 and using (<ref>) we getD_11^'= (-1)^N [-λ ^N - 1 a_1^' + λ ^N - 2 a_2^' + ⋯ + (-1)^N a_N^']= 2 (-1)^N [-λ ^N - 1 b_0^2 + λ ^N - 2 b_0 b_1 + ⋯ + (-1)^N - 1 b_0 b_N - 1 ] = 2 b_0 D_21,D_22^'= λ ^N - 1 a_1^' + λ ^N - 2 a_2^' + ⋯ + a_N^' = 2 [λ ^N - 1 b_0^2 + λ ^N - 2 b_0 b_1 + ⋯ + b_0 b_N - 1 ] = 2 b_0 D_12,D_12^'=λ ^N - 1 b_0^' - λ ^N - 2 b_1^' + ⋯ + (-1)^N - 1b_N - 1^' = 2[ λ ^N - 1 (b_0 a_1 - b_1) + λ ^N - 2 (b_0 a_2 - b_2) + ⋯ + b_0 a_N] = 2 ( b_0[a_1 λ ^N - 1 + a_2 λ ^N - 2 + ⋯ + a_N]- [b_1 λ ^N - 1 + b_2 λ ^N - 2 + ⋯ + λ b_N - 1 ] )= 2 ( b_0[ λ ^N + a_1 λ ^N - 1 + a_2 λ ^N - 2 + ⋯ + a_N]- λ [ b_0 λ ^N - 1 + b_1 λ ^N - 2 + ⋯ + b_N - 1 ] ) = 2 ( b_0 D_22 - λ D_12 ), D_21^'= (-1)^N - 1 [ λ ^N - 1 b_0^' - λ ^N - 2 b_1^' + ⋯ + (-1)^N - 1b_N - 1^' ]= 2 (-1)^N - 1 [ λ ^N - 1 (b_0 a_1 - b_1) - λ ^N - 2 (b_0 a_2 - b_2) + ⋯ + (-1)^N - 1 b_0 a_N] = 2 (-1)^N - 1 ( b_0[a_1 λ ^N - 1 - a_2 λ ^N - 2 + ⋯ + (-1)^N - 1 a_N]-[ b_1 λ ^N - 1 - b_2 λ ^N - 2 + ⋯ + (-1)^N - 2λ b_N - 1 ] )= 2 (-1)^N - 1 ( b_0[- λ ^N + a_1 λ ^N - 1 - a_2 λ ^N - 2 + ⋯ + (-1)^N - 1 a_N]- λ [ -b_0 λ ^N - 1 + b_1 λ ^N - 2 - ⋯ + (-1)^N - 2 b_N - 1 ] ) = 2 ( b_0 D_11 + λ D_21 ).This is equivalent to (<ref>).By comparing (<ref>) - (<ref>),  (<ref>),  (<ref>) and  (<ref>), it's easy to see that (<ref>) is valid. □Formula (<ref>) establishes the connection between the coefficients of the polynomials { D_k j( x, t; λ) } ^2 _k, j = 1 and solutions of NIDE.Potentials corresponding to the solutions of NIDE of the type (<ref>) in <cit.> are called Pseudo Exponential (PE) so our solutions of NIDE can be considered as an analogue of PE potentials. In further considerations we'll be using the notation PE(N) to reflect the fact that the potential is parametrized by 2N parameters according to the Theorems 3.3 and 3.4. In <cit.> it was given a characterization of PE potentials in terms of their Taylor coefficients and reflection coefficient. 
We re-formulate and prove this result in the context of our case.Let b_0(x, t) (Γ_2x(2x, 0, t))at some point t = t_0 be a PE(N) potential, meromorphic on ℝ× [0, ∞] and analytic at (x_0, t_0). Then it is uniquely defined by b_0(x_0, t_0), b_0^'(x_0, t_0),⋯, b_0^(2N - 1)(x_0, t_0) (the derivatives are taken with respect to x).Proof. First, let's fix t: t = t_0. The problem then reduces to the reconstruction of the PE(N) potential, in other words, to build two sets of parameters Ω = {ω_k }_k = 1^N;_0 = {α_k, 0}_k = 1^N given its first 2N - 1 derivatives at some point of analyticity x_0. The procedure is based on the relations (<ref>). For the convenience we perform the following transformation of the variable x: x → 2 x. By differentiating equations (<ref>) 2 N - 1 times at x_0 we arrive at the system of 2 N - 1 linear equations C X = Y with respect to the quantities X = { x_k }_k = 1^2 N - 1≡{ a_1⋯a_N b_1⋯b_N - 1}. Elements of the matrix C = { c_i, j}_i, j = 1^2N - 1 and vector Y = col [y_1 y_2⋯y_2N - 1] are calculated as followsc_i, j = 0; j > i; j < N; c_i - 1, j^' + b_0 c_i - 1, j + N -1; i ≥ j, j < N; (-1)^i; j - i = N, N < j ≤ 2 N - 1; 0; j - i = N - 1, N < j ≤ 2 N - 1; c_i - 1, j^' + b_0 c_i - 1, j - N - c_i - 1, j - 1; j - i ≤ N - 2, N < j ≤ 2 N - 1; y_i = b_0^(i) - b_0^(i - 2) b_0^2 - y_i - 1^'; 1 ≤ i ≤ 2N - 1.If det|C| ≠ 0 then this system has a unique solution. Then according to (<ref>) we construct the polynomials { D_k j( x_0, λ) } ^2 _k, j = 1. 
Applying Theorem 2.3 to this special case, we reduce the problem to the IIP Problem, which has a unique solution given by the following procedure: * Find the roots {ω̃_k}_{k = 1}^N of the polynomial D(x_0, λ) = D_11(x_0, λ) D_22(x_0, λ) - D_12(x_0, λ) D_21(x_0, λ) and set ω_k = ω̃_k; 1 ≤ k ≤ N; * Using the relations (<ref>)- (<ref>), compute the ratios R = D_12(ω_k)/D_11(ω_k) = D_22(ω_k)/D_21(ω_k), which can be considered as a system of linear equations with respect to the elementary symmetric forms σ(A_0), where A_0 = {α_k(0)}_{k = 1}^N; * By solving this system and then finding the roots of the polynomial P(σ(A_0), λ) = ∑_{k = 0}^N (-1)^k σ_k(A_0) λ^{N - k} one recovers the set A_0. □

The above result can be re-formulated in terms of an inverse problem for the solution of NIDE. Inverse NIDE problem. Let ξ(x, t) be a solution of NIDE such that at some point t = t_0, ξ(x, t_0) ∈ PE(N). Given the derivatives ξ(x_0, t_0), ξ'(x_0, t_0), ⋯, ξ^{(2N - 1)}(x_0, t_0) at some point x_0 of analyticity of ξ(x, t), restore the solution ξ(x, t) on ℝ × [0, ∞].

Theorem 3.7 solves the Inverse NIDE problem. The form of the solution ϕ(x, t) of the SHG equation in the spatial variable x differs from the function Γ_{2x}(2x, 0, t) by one extra derivative and a constant multiplier, which suggests a slight modification of the procedure described above: it requires the derivatives of orders 1, 2, …, 2N for the solution of the Inverse NIDE problem. Without loss of generality, in further considerations we will be referring to the Inverse NIDE problem as applied to the function ξ(x, t) = Γ_{2x}(2x, 0, t).

To illustrate the methodology, consider the function ξ(x, t) when N = 1. Let t_0 = 0; then ξ(x, 0) ≡ ξ(x) = ω csch(2ωx - ln|(ω - α_0)/(ω + α_0)|). It is easy to verify that the equations ξ'(x) = 2 ξ(x) a_1, a_1'(x) = 2 ξ(x)^2, derived from (<ref>), are satisfied with a_1(x) = -ω coth(2ωx - ln|(ω - α_0)/(ω + α_0)|).
Then we construct the polynomials {D_kj(x, λ)}_{k, j = 1}^2: D_11(x, λ) = -λ + a_1(x); D_12(x, λ) = ξ(x); D_22(x, λ) = λ + a_1(x); D_21(x, λ) = ξ(x); and calculate the roots λ_{1, 2} of the polynomial D(x, λ) = (-λ + a_1(x))(λ + a_1(x)) - ξ(x)^2 = -λ^2 + a_1(x)^2 - ξ(x)^2. After elementary calculations we find that λ_{1, 2} = ±ω. Computing the ratio R = ξ(x)/(-ω + a_1(x)) = (ω + a_1(x))/ξ(x), we obtain R = -exp(-(2ωx - ln|(ω - α_0)/(ω + α_0)|)), from which it is easy to calculate α_0 as α_0 = ω (1 - R_1)/(1 + R_1); R_1 = -exp(2ωx) R. It is also easy to see that R_1 does not depend on x.

Let N = 2. Given the point x_0 and the numbers ξ(x_0), ξ'(x_0), ξ''(x_0), ξ'''(x_0), we show how to construct the coefficients a_1, a_2, b_1 of the polynomials {D_kj(x, λ)}_{k, j = 1}^2. The matrix C and vector Y have the following representation: C = [ξ_0, 0, -1; ξ_0', -ξ_0, 0; ξ_0'', -ξ_0', -ξ_0^2]; Y = col[ξ_0', ξ_0'' - ξ_0^3, ξ_0''' - 4 ξ_0' ξ_0^2]. The solution X = col[a_1, a_2, b_1] of the system C X = Y is a_1 = (ξ_0' ξ_0'' - ξ_0 ξ_0''' + 4 ξ_0' ξ_0^3)/(ξ_0^4 + (ξ_0')^2 - ξ_0 ξ_0''); a_2 = (4 (ξ_0')^2 ξ_0^2 + (ξ_0'')^2 - ξ_0' ξ_0''' - 2 ξ_0^3 ξ_0'' + ξ_0^6)/(ξ_0^4 + (ξ_0')^2 - ξ_0 ξ_0''); b_1 = (4 ξ_0^4 ξ_0' - ξ_0^2 ξ_0''' - (ξ_0')^3 + ξ_0 ξ_0' ξ_0'')/(ξ_0^4 + (ξ_0')^2 - ξ_0 ξ_0'').

It is interesting to note that there is a connection between the solutions of the considered NIDE and other non-linear differential equations. For example, the Miura transformation M[f(x)] = f(x)^2 ± df(x)/dx converts solutions of the MKdV equation into solutions of the Korteweg - de Vries (KdV) equation ∂u(x, t)/∂t = -1/4 ∂^3 u(x, t)/∂x^3 + 3/2 u(x, t) ∂u(x, t)/∂x, and solutions of the NSE again into solutions of the NSE with the opposite sign of the non-linear term. The corresponding image M[ξ(x, t)] can be represented in the standard form P(x, t) = -2 ∂^2 ln(δ(x, t))/∂x^2, where δ(x, t) = δ_2(x, t) when choosing "+" in (<ref>), δ_1(x, t) when choosing "-" in (<ref>); and δ_{1, 2}(x, t) are defined by (<ref>), (<ref>).
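The Miura correspondence just described can be checked symbolically. The sketch below verifies the classical Miura factorization written in the normalization used here (for real-valued ψ, so that |ψ|^2 ψ_x = ψ^2 ψ_x, and with the KdV nonlinearity read as (3/2) u ∂u/∂x): the KdV residual of u = ψ^2 ± ψ_x factors through the MKdV residual, so every MKdV solution is mapped to a KdV solution.

```python
import sympy as sp

# Sketch: classical Miura factorization in this normalization, for a generic
# smooth real function psi(x, t).  If mkdv = 0 then kdv_residual(u) = 0.
x, t = sp.symbols('x t')
psi = sp.Function('psi')(x, t)

mkdv = sp.diff(psi, t) + sp.Rational(1, 4) * sp.diff(psi, x, 3) \
       - sp.Rational(3, 2) * psi ** 2 * sp.diff(psi, x)

def kdv_residual(u):
    return sp.diff(u, t) + sp.Rational(1, 4) * sp.diff(u, x, 3) \
           - sp.Rational(3, 2) * u * sp.diff(u, x)

u_plus = psi ** 2 + sp.diff(psi, x)       # the "+" Miura image
u_minus = psi ** 2 - sp.diff(psi, x)      # the "-" Miura image

# both identities expand to zero for arbitrary psi
id_plus = sp.expand(kdv_residual(u_plus) - (2 * psi * mkdv + sp.diff(mkdv, x)))
id_minus = sp.expand(kdv_residual(u_minus) - (2 * psi * mkdv - sp.diff(mkdv, x)))
```

The factorization KdV[ψ^2 ± ψ_x] = (2ψ ± ∂_x) MKdV[ψ] holds as a polynomial identity in the derivatives of ψ, so `expand` alone reduces both residual differences to zero.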
If δ(x, t) ≠ 0 ∀(x, t) ∈ (-∞, ∞), then P(x, t) is the N-soliton solution of the corresponding non-linear equation. We illustrate the above assertions by simple examples. Consider the solution of MKdV for the case N = 1: ψ(x, t) = 2ω csch(2χ(x, t)), where χ(x, t) = ωx - ω^3 t - 1/2 ln|(ω - α_0)/(ω + α_0)|. It is easy to check by direct computation that the function P_1(x, t) = ψ(x, t)^2 + ∂ψ(x, t)/∂x = -2ω^2 sech^2(χ(x, t)) satisfies the KdV equation (<ref>) with δ(x, t) = cosh(χ(x, t)). Analogously, the function P_2(x, t) = ψ(x, t)^2 - ∂ψ(x, t)/∂x = 2ω^2 csch^2(χ(x, t)) satisfies the KdV equation (<ref>) with δ(x, t) = sinh(χ(x, t)).

In Example 3.13 P_1(x, t) represents a classical 1-soliton solution of the KdV equation. As opposed to P_1(x, t), the function P_2(x, t) is singular on ℝ × [0, ∞] and does not belong to the N-soliton family, but because of their similar nature we will refer to the Miura-transformed PE(N)-functions as soliton-like (SL(N)) solutions of NIDE. Combining the results obtained in Theorem 3.7 and the properties of Miura-transformed PE(N)-functions, we can solve an inverse problem for the SL(N) solutions of NIDE.

Let q(x, t) be an SL(N) solution of NIDE, meromorphic on ℝ × [0, ∞] and analytic at (x_0, t_0). Then it is uniquely defined by q(x_0, t_0), q'(x_0, t_0), ⋯, q^{(2N - 1)}(x_0, t_0) (the derivatives are taken with respect to x).

Proof. First, as in Theorem 3.7, let us fix t: t = t_0. Using the relations (<ref>) and the Miura transformation (<ref>), by consecutive differentiation of the system (<ref>) we arrive at the system of equations C̃ X̃ = Ỹ with respect to the quantities X = {x_k}_{k = 1}^{2N} ≡ {a_1 ⋯ a_N b_0 ⋯ b_{N - 1}}. The matrix C̃ of size 2N × N is represented in block form as C̃ = [C̃_1; C̃_2], where the elements of the matrices C̃_i; i = 1, 2, of size N × N each, are computed as follows.
For the matrix C̃_1 we havec_i, j = 0 j > i + 1; j ≤ N; (-1)^j - 1j = i + 1; j ≤ N; (-1)^i - 1j = i, j ≤ N; (-1)^i - 1c_i + 1, j + 11 ≤ i, j ≤ N;(-1)^j - 1ϑ_N - j (b_N - j - 1 + a_N - j) i = N, 1 ≤ j ≤ N;where quantities ϑ_N - j are constructed as ϑ_-1 = 1;ϑ_0 = b_0;ϑ_1 = q_0 = b_0^2 ± b^'_0;ϑ_2 = -q^'_0;ϑ_3 = q^''_0 - q_0^2;ϑ_i + 1 = -ϑ^'_i - ∑_j = 1^i - 1ϑ_i - jϑ_j(i = 2, 3, …).Corresponding vector of unknowns X̃_1 is organized in the following way x_i = b_i - 1 + a_i; 1 ≤ i ≤ N,and elements of the vector Ỹ_1 are computed asy_i = (-1)^i - 1ϑ_i; i = 1, 2, ⋯, N.For the matrix C̃_2 we havec_i, j = (-1)^i - 1c_i + 1, j + 1, 1 ≤ i, j ≤ N; ϑ_N - j + 1i = 1, 1 ≤ j ≤ N;Corresponding vector of unknowns X̃_2 isx_i = b_i - 1 + a_i; 1 ≤ i ≤ Nand elements of the vector Ỹ_2 are computed asy_i = ϑ_N + i; i = 1, 2, …, NThe system is solved in four simple steps:* If det[C̃_2] ≠ 0 then the system C̃_2 X̃_2 = Ỹ_2 has a unique solution. Solving this system we find the quantities d_i = b_i - 1 + a_i; 1 ≤ i ≤ N;* Substituting d_i; 1 ≤ i ≤ N into the last equation of the system C̃_1 X̃_1 = Ỹ_1, we compute b_0;* Propagating backwards from (N - 1)-th to the first equation in the system C̃_1 X̃_1 = Ỹ_1, we calculate b_i; 1 ≤ i ≤ N - 1;* Compute a_i = d_i - b_i - 1; 1 ≤ i ≤ N.The rest of the procedure is the same as in Theorem 3.7. □We illustrate the calculation steps by an example. Let N = 2 and given the quantities q_0, q^'_0, q^''_0, q^'''_0.Thenq_0 = b_0(a_1 + b_0) - b_1;q^'_0 = q_0(a_1 + b_0) - b_0(a_2 + b_1);q^''_0 = q^'_0(a_1 + b_0) - q_0(a_2 + b_1) + q^2_0;q^'''_0 = (q^''_0 - q^2_0)(a_1 + b_0) - q^'_0(a_2 + b_1) + 4 q_0 q^'_0.From the last two equations we obtain d_1 = a_1 + b_0 = -q^'_0(q^''_0 - q^2_0) + q_0(q^'''_0 - 4 q_0 q^'_0)/-q^' 2_0 + q^''_0 q_0 - q^3_0; d_2 = a_2 + b_1 = q^'_0(q^'''_0 - 4 q_0 q^'_0) - (q^''_0 - q^2_0)^2/-q^' 2_0 + q^''_0q_0 - q^3_0. 
And from the first two equations we have b_0 = (q_0 d_1 - q_0')/d_2; a_1 = d_1 - b_0; b_1 = d_1 b_0 - q_0; a_2 = d_2 - d_1 b_0 + q_0.

A thorough analysis of reflectionless (RL) potentials in the Sturm-Liouville problem is given in <cit.>. In particular, the closure of the sets of RL potentials was considered in the topology of uniform convergence of functions on every compact subset of the real axis. These results are important in problems of approximation of functions by RL potentials. The criteria are given in terms of the functions ϑ_j(x), j = -1, 0, 1, … defined by the relations (<ref>). Let ℬ(-μ^2) (μ ≥ 0) represent the set of RL potentials for which the spectrum of the corresponding operators lies to the right of the point -μ^2, and let ℬ represent the set of all RL potentials, i.e. ℬ = ⋃_{μ ≥ 0} ℬ(-μ^2). The following assertion is true (the proof is beyond the scope of this paper and can be found in <cit.>). For the real function q_0(x) to belong to the set ℬ it is necessary and sufficient that it be infinitely smooth at the point x and that there exist a number R < ∞ such that the functions ϑ_j(x), j = -1, 0, 1, …, defined by the relations (<ref>), satisfy the inequalities |ϑ_j(x)| ≤ (2R)^j R. If for some function q̃_0(x) the conditions (<ref>) are satisfied, then it can be approximated by RL potentials with any given accuracy. Theorem 3.8 extends the results of <cit.> to the case of SL(N) solutions of NIDE.

§.§ Dynamics of the singularities of the PE(N) and SL(N) solutions of NIDE

As mentioned above, PE(N) and SL(N) solutions of NIDE can have singularities. A point (x_0, t_0) on the plane (x, t), -∞ < x, t < ∞, is called a singularity point if |ξ(x, t)| → ∞ when x → x_0 and t → t_0, where ξ(x, t) is the solution of NIDE. A set of singularity points on the (x, t)-plane is called a singularity line. The dependence of the singularity point on the parameter t forms a singularity line.
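Before studying the singularity dynamics, the N = 2 recovery example of the previous subsection can be exercised end to end numerically. A sketch with illustrative numbers: the four displayed relations generate q_0, q_0', q_0'', q_0''' from chosen a_1, a_2, b_0, b_1, and the stated formulas recover them (note that the second relation, q_0' = q_0 d_1 - b_0 d_2, gives b_0 = (q_0 d_1 - q_0')/d_2):

```python
# Sketch (illustrative numbers): round trip for the N = 2 example.
# Forward: pick a1, a2, b0, b1 and generate q0, q0', q0'', q0''' from the
# four displayed relations; backward: recover d1, d2, then b0, a1, b1, a2.
a1, a2, b0, b1 = 0.7, -0.4, 1.3, 0.2
d1_true, d2_true = a1 + b0, a2 + b1

q0 = b0 * d1_true - b1
q0p = q0 * d1_true - b0 * d2_true
q0pp = q0p * d1_true - q0 * d2_true + q0 ** 2
q0ppp = (q0pp - q0 ** 2) * d1_true - q0p * d2_true + 4 * q0 * q0p

# Cramer's rule for the last two relations, viewed as linear in d1, d2
det = -q0p ** 2 + q0pp * q0 - q0 ** 3
d1 = (-q0p * (q0pp - q0 ** 2) + q0 * (q0ppp - 4 * q0 * q0p)) / det
d2 = (q0p * (q0ppp - 4 * q0 * q0p) - (q0pp - q0 ** 2) ** 2) / det

b0_rec = (q0 * d1 - q0p) / d2      # from q0' = q0 d1 - b0 d2
a1_rec = d1 - b0_rec
b1_rec = d1 * b0_rec - q0          # from q0 = b0 d1 - b1
a2_rec = d2 - d1 * b0_rec + q0
```

The round trip reproduces a_1, a_2, b_0, b_1 up to floating-point error, provided det ≠ 0 (the generic case).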
In the next section we investigate the dynamics of the singularity lines of the PE(N) and SL(N) solutions of NIDE.In <cit.> the following assertion is proved If v_0(z) ∈𝒫, where v_0(z) = v(0, z) and v(t, z) is Weyl-Titchmarsh function of the system (2.4), (2.5), then the solution ξ(x, t) of NIDE is regular in the region (x, t) ≥ 0.It follows from the relations (<ref>) and  (<ref>) that singularity lines of the solution ξ(x, t) of NIDE satisfy the equationsδ_j(x, t) = 0, j = 1, 2; so the investigation of the dynamics of the singularities is equivalent to the study of the properties of the solutions of the system (<ref>). Material of this section is based on the results obtained in <cit.>. Some of the proofs will be omitted here due to the simplicity.First, we look at the asymptotics of the singularity lines when t →±∞. It's easy to verify that the following assertion is trueLet x and t be such that 0 < δ < |χ_j(x, t) ±1/2lnA_j| < ϵ, A_j = ∏_i = 1^j - 1ω_j + ω_i/ω_j - ω_i∏_i = j + 1^Nω_i - ω_j/ω_i + ω_j,then for the solution ϕ(x, t) of sinh-Gordon equation when t →±∞ the following representation is validϕ(x, t) = 2 (-1)^N + jln|tanh(χ_j(x, t) ±1/2lnA_j + O(1)|, j = 1, 2, …, N.Here χ_j(x, t) = ω_j x + 1/ω_j t - 1/2lnC_j and C_j = ∏ _i = 1^N |ω_j - α_i, 0/ω_j + α_i, 0|. From Assertion 3.17 immediately follow the corollariesFor the sufficiently large values of |t| and t < 0 function ϕ(x, t) has singularities in the region |χ_j(x, t) - 1/2lnA_j| < ϵ. 
For the sufficiently large values of t function ϕ(x, t) has singularities in the region |χ_j(x, t) + 1/2lnA_j| < ϵ.Analogous result takes place for the solutions ψ(x, t) and ρ(x, t) of the equations (3.2) and (3.3).Let x and t be such that 0 < δ < |χ_j(x, t)±1/2ln|A_j|| < ϵ, A_j = (-1)^N - 1∏_i = 1^j - 1ω̅_j + ω_i/ω_j - ω_i∏_i = j + 1^Nω_i - ω_j/ω̅_i + ω_j,then the solution ξ(x, t) when t →±∞ can be represented asξ(x, t) = 2 (-1)^N + jω_jexp (χ_j(x, t) - A_j)/sinh(χ_j(x, t)±ln|A_j|) + O(1), j = 1, 2, …, N.Here ξ(x, t) ≡ψ(x, t),χ_j(x, t) = 2 (ω_j x - ω_j^3 t - 1/2lnC_j) in case of mKdV equation and ξ(x, t) ≡ρ(x, t),χ_j(x, t) = 2 (ω_j x + ω_j^2 t - 1/2lnC_j) in case of NSE equation.From Assertion 3.20 immediately follow the corollaries For the sufficiently large values of |t| and t < 0 function ξ(x, t) has singularities in the region |χ_j(x, t) - lnA_j| < ϵ.For the sufficiently large values of t function ξ(x, t) has singularities in the region |χ_j(x, t) + lnA_j| < ϵ. From asymptotic formulas (<ref>) and (<ref>) it follows that if x is considered as spacial and t - as temporal variables then the solutions ϕ(x, t), ψ(x, t), ρ(x, t) when t →±∞ are represented as a complex of N elementary singular waves. These waves interact, and after the interaction they preserve their shapes. The only change they suffer is the phase shift Δ_j = ln|A_j|. This behavior is quite similar to the behavior of the classical soliton solutions. Presence of singularities and soliton-like nature of their interaction suggests that the solutions ϕ(x, t), ψ(x, t), ρ(x, t) can be treated in terms of particles interacting by their surrounding field and corresponding singularity lines can be identified as world lines of the particles. Consider some simple examples. In case N = 1 and ω = ω̅,α_0 = α̅_0 we have one singularity line that satisfies the equationω x + Θ (ω) t -1/2ln (|ω - α_0/ω + α_0|) = 0. 
Equation (<ref>) represents a straight line that corresponds to the world line of a "free" particle propagating with velocity v = Θ(ω)/ω. In <cit.> the following assertion has been proved: res(∂ϕ(x, t)/∂x) = ±1, res(ψ(x, t)) = ±1 and res(ρ(x, t)) = ±1. In "particle language" this means that there are two types of particles (corresponding to the sign of the residue). The following example demonstrates the interaction between particles with different combinations of the types. We consider solutions ϕ(x, t) of the SHG equation (conceptually, the dynamics of the singularity lines in the case of the mKdV and NSE equations is the same). When N = 2 we consider three cases: * ω_i = ω̄_i, α_{i, 0} = ᾱ_{i, 0}, C_i > 0, i = 1, 2; * ω_i = ω̄_i, α_{i, 0} = ᾱ_{i, 0}, i = 1, 2; C_1 < 0, C_2 > 0; * ω_2 = ω̄_1, α_{i, 0} = ᾱ_{i, 0}, i = 1, 2; where C_j = (ω_j - α_{1, 0})(ω_j - α_{2, 0})/((ω_j + α_{1, 0})(ω_j + α_{2, 0})), j = 1, 2. In all the cases there are two singularity lines.

In case 1. the solution ϕ(x, t) has the form ϕ(x, t) = 2 ln|((ω_1 - ω_2) sinh(η_1(x, t)) - (ω_1 + ω_2) sinh(η_2(x, t)))/((ω_1 - ω_2) sinh(η_1(x, t)) + (ω_1 + ω_2) sinh(η_2(x, t)))|, where η_1(x, t) = χ_1(x, t) + χ_2(x, t), η_2(x, t) = χ_2(x, t) - χ_1(x, t). In this case the singularity lines satisfy the equations X_{1, 2} = ±sinh^{-1}(((ω_1 + ω_2)/(ω_2 - ω_1)) sinh(Y)), where X = (ω_1 + ω_2)x + (Θ(ω_1) + Θ(ω_2))t - 1/2 ln|C_1 C_2|; Y = (ω_2 - ω_1)x + (Θ(ω_2) - Θ(ω_1))t - 1/2 ln|C_2/C_1|.

In case 2. the solution ϕ(x, t) has the form ϕ(x, t) = 2 ln|((ω_1 - ω_2) cosh(η_1(x, t)) - (ω_1 + ω_2) cosh(η_2(x, t)))/((ω_1 - ω_2) cosh(η_1(x, t)) + (ω_1 + ω_2) cosh(η_2(x, t)))|, and the singularity lines satisfy the equations X_{1, 2} = ±cosh^{-1}(((ω_1 + ω_2)/(ω_2 - ω_1)) cosh(Y)).

In case 3. the solution ϕ(x, t) is represented as ϕ(x, t) = 2 ln|(Re ω_1 sinh(ζ_1(x, t)) + Im ω_1 sin(ζ_2(x, t)))/(Re ω_1 sinh(ζ_1(x, t)) - Im ω_1 sin(ζ_2(x, t)))|. Here ζ_1(x, t) = 2 Re χ(x, t), ζ_2(x, t) = 2 Im χ(x, t) and χ(x, t) = ω_1 x + t/ω_1 - 1/2 ln|C_1|.
The corresponding singularity lines satisfy the equations X_1, 2 = ± sinh^-1( (ℜω_1/ℑω_1) sin(Y) ), where X = ζ_1(x, t), Y = ζ_2(x, t). The singularity lines corresponding to these three cases are depicted in Figures 1, 2 and 3, respectively, in the Appendix. Case 1. presents the interaction of particles of different types. When |t| is large and t < 0, the lines are close to the straight lines corresponding to the asymptotic solutions (<ref>) and (<ref>) as t → -∞. Then the lines draw closer and intersect. This suggests that the corresponding particles attract each other and collide. After the collision the particles diverge. As t increases, the world lines approach the straight lines corresponding to the asymptotic solutions (<ref>) and (<ref>) as t →∞. So the interaction between the particles results in an exchange of energy and a phase shift, which can be calculated as the distance between the corresponding asymptotes. Case 2. presents the interaction of particles of the same type. When |t| is large and t < 0, the lines are close to the straight lines corresponding to the asymptotic solutions (<ref>) and (<ref>) as t → -∞. Then, after some convergence, the lines diverge and do not intersect. This suggests that the corresponding particles repulse each other. As t increases, the world lines approach the straight lines corresponding to the asymptotic solutions (<ref>) and (<ref>) as t →∞. So, as in case 1., the interaction between the particles results in an exchange of energy and a phase shift, but without a collision. Case 3. corresponds to periodic solutions that can be interpreted as a bound state of two particles of different types. This is similar to the "breathers" in the case of the classical soliton solutions of NIDE. The dynamics of the bound state is similar to the dynamics of a "free" particle: the particles oscillate around a common center that propagates with speed v = 1/|ω_1|^2.
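The case 1. singularity lines can be checked directly against the zeros of the denominator of ϕ(x, t). Below is a minimal sketch for the SHG case (Θ(ω) = 1/ω); the values ω_1 = 1, ω_2 = 2, C_1 = C_2 = 1 are arbitrary sample choices (with C_k = 1 the logarithmic shifts in χ_j drop out), not taken from the text:

```python
import math

w1, w2 = 1.0, 2.0      # sample frequencies; SHG dispersion Theta(w) = 1/w
# C_1 = C_2 = 1, so chi_j(x,t) = w_j x + t/w_j with no logarithmic shift

def chi(w, x, t):
    return w * x + t / w

def den(x, t):
    # denominator of phi(x,t) in case 1.
    e1 = chi(w1, x, t) + chi(w2, x, t)   # eta_1(x,t)
    e2 = chi(w2, x, t) - chi(w1, x, t)   # eta_2(x,t)
    return (w1 - w2) * math.sinh(e1) + (w1 + w2) * math.sinh(e2)

# locate the singular x at a fixed time by bisection ...
t = 0.4
lo, hi = -1.0, -0.8                      # den changes sign on this bracket
for _ in range(100):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if den(mid, t) > 0 else (lo, mid)
xs = 0.5 * (lo + hi)

# ... and compare with the closed form X = sinh^{-1}( (w1+w2)/(w2-w1) * sinh Y )
X = (w1 + w2) * xs + (1 / w1 + 1 / w2) * t
Y = (w2 - w1) * xs + (1 / w2 - 1 / w1) * t
```

At the located point the argument of the logarithm in ϕ degenerates, which is exactly a singularity of the solution, and the pair (X, Y) lies on the claimed curve.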
When N > 2 it is not possible to calculate the singularity lines explicitly, so numerical methods (i.e. finding the zeros of the transcendental functions δ_k(x, t), k = 1, 2) should be applied. Nevertheless, there are some very interesting global properties of the singularity lines that can be derived and investigated in detail. It is worth noting that in quantum mechanics the problem of studying a gas of one-dimensional Bose particles interacting via a delta-function potential reduces to the investigation of the Schrödinger equation ( -∑_i = 1^N ( ∂^2/∂ x_i^2 ) + 2 c ∑_i, j = 1^N δ(x_i - x_j)) ψ = E ψ with the boundary conditions ( ∂/∂ x_j - ∂/∂ x_k) ψ|_x_j = x_k+ - ( ∂/∂ x_j - ∂/∂ x_k) ψ|_x_j = x_k- = 2 c ψ|_x_j = x_k, i.e. ψ is continuous whenever two particles touch, but the jump in the derivative of ψ is 2c (see for example <cit.>). In this context our case can be considered as a generalization of the problem (<ref>), (<ref>), and it reduces to it when ω_i >> ω_j, i > j, 1 ≤ i, j ≤ N. In this case, in the limit (ω_i - ω_j) →∞, i > j, 1 ≤ i, j ≤ N, the region of the particles' interaction collapses to a point. From (<ref>) it follows that the singularity lines satisfy the system of equations d x_i(t)/d t = - ( ∂δ_1(x, t)/∂ t / ∂δ_1(x, t)/∂ x )_x = x_i(t), i = 1, 2, …, l; d x_i(t)/d t = - ( ∂δ_2(x, t)/∂ t / ∂δ_2(x, t)/∂ x )_x = x_i(t), i = l + 1, l + 2, …, N. Let us introduce the quantities p_i = Θ(ω_i)/ω_i; q_i = -p_i t + ln|C_i|/ω_i, 1 ≤ i ≤ N, where Θ(x) is defined in (<ref>). In <cit.> the following theorem is proved: System (<ref>) is a completely integrable Hamiltonian system with the Hamiltonian H = 1/2 ∑_i = 1^N p_i^2, and the quantities (<ref>) are action-angle variables for this system. Proof.
We just need to verify the validity of the identity d x/d t = { x, H }, where {x, H} is the Poisson bracket defined by {f, g} = ∑_i = 1^N ( ∂ f/∂ q_i ∂ g/∂ p_i - ∂ f/∂ p_i ∂ g/∂ q_i ). Indeed, from (<ref>) and (<ref>) it follows that ∂ H/∂ q_i = 0, ∂ H/∂ p_i = p_i; 1 ≤ i ≤ N. On the other hand, we have ∂ x/∂ q_i = ∂δ(x, t)/∂ q_i / ∂δ(x, t)/∂ x, where δ(x, t) = δ_1(x, t), if 1 ≤ i ≤ l; δ_2(x, t), if l < i ≤ N. Substituting ∂δ(x, t)/∂ q_i = ∂δ(x, t)/∂ t / (∂ q_i/∂ t) = -∂δ(x, t)/∂ t / p_i into (<ref>) and combining with (<ref>) we obtain (<ref>). □ The total energy of the system of particles with the dynamics described by (<ref>) is an integral of motion. System (<ref>) carries complete information about the solution ξ(x, t) of the NIDE (it is contained in the sets A = {α_k, 0}_k = 1^N and Ω = {ω_k}_k = 1^N). Having this information and using (<ref>)-(<ref>) one can reconstruct the solutions. Equations (<ref>) solve an N-body problem with a special potential. The NIDE considered can themselves be formulated in terms of Hamiltonian systems in an infinite dimensional space, so we face a hierarchy of Hamiltonian systems: the infinite dimensional system generates the finite dimensional one. Even though Theorem 3.11. states an important and powerful result, it is not constructive in the sense that it describes the dynamics of the system implicitly: on the right-hand side of the equations (<ref>) one cannot distinguish one singularity line from another. It would be interesting to get some more detailed information about the behavior of the singularity lines. Using the results of Section 2 we obtain a parametrization of the singularity lines and derive differential equations for the parameters.
In order to do this we need a simple result obtained in <cit.>: a connection between the determinants of paired Cauchy and paired Vandermonde matrices. A matrix S is called a paired Cauchy (PC) matrix if it (or its transpose) has the following block representation S = [ S_1; S_2; ] where S_k, k = 1, 2 are pure Cauchy matrices. For example, the matrix S(x, t) represented by formula (<ref>) is a PC matrix. A matrix V is called a paired Vandermonde (PV) matrix if it (or its transpose) has the following block representation V = [ V_1; V_2; ] where V_k, k = 1, 2 are pure Vandermonde matrices. For example, the matrices V_k(x, t), k = 1, 2 whose determinants Δ_k(x, t) are represented by formulas (<ref>), (<ref>) are PV matrices. The following assertion is true. Let the sets of numbers {ω_i }_i = 1^m + n and {α_i }_i = 1^m + n be such that ω_i ≠ ω_k, α_i ≠ α_k for i ≠ k, and ω_i ≠ -α_k, 1 ≤ i, k ≤ m + n. Define the PC matrix S by S = [ 1/(ω_1 + α_1) 1/(ω_2 + α_1) ⋯ 1/(ω_m + n + α_1); ⋯ ⋯ ⋯ ⋯; 1/(ω_1 + α_m) 1/(ω_2 + α_m) ⋯ 1/(ω_m + n + α_m); γ_1/(ω_1 + α_m + 1) γ_2/(ω_2 + α_m + 1) ⋯ γ_m + n/(ω_m + n + α_m + 1); ⋯ ⋯ ⋯ ⋯; γ_1/(ω_1 + α_m + n) γ_2/(ω_2 + α_m + n) ⋯ γ_m + n/(ω_m + n + α_m + n); ] and the PV matrix V by V = [ 1 1 ⋯ 1; ω_1 ω_2 ⋯ ω_m + n; ⋯ ⋯ ⋯ ⋯; ω_1^m - 1 ω_2^m - 1 ⋯ ω_m + n^m - 1; ϵ_1 ϵ_2 ⋯ ϵ_m + n; ω_1 ϵ_1 ω_2 ϵ_2 ⋯ ω_m + n ϵ_m + n; ⋯ ⋯ ⋯ ⋯; ω_1^n - 1 ϵ_1 ω_2^n - 1 ϵ_2 ⋯ ω_m + n^n - 1 ϵ_m + n; ], then det S = ( ∏_1 ≤ l < k ≤ m (α_k - α_l) ∏_m + 1 ≤ j < i ≤ m + n (α_i - α_j) / ∏_1 ≤ k ≤ m + n; 1 ≤ i ≤ m (ω_k + α_i) ) det V, where ϵ_k = γ_k ∏_1 ≤ i ≤ m; m + 1 ≤ j ≤ m + n (ω_k + α_i)/(ω_k + α_j); 1 ≤ k ≤ m + n. The proof is based on the application of the Laplace rule to the calculation of the determinants and on the properties of the determinants of pure Cauchy and Vandermonde matrices.
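Before moving on, the identity relating the two determinants is easy to sanity-check numerically for small block sizes. A sketch for m = 2, n = 1 follows; the sample values of ω, α, γ are arbitrary choices satisfying the hypotheses, not taken from the text:

```python
import math

# sample data for m = 2, n = 1 (so S and V are 3x3); the values are arbitrary,
# subject only to the hypotheses: distinct omegas, distinct alphas, w_k != -al_i
m, n = 2, 1
w  = [1.0, 2.0, 3.0]          # omega_1, ..., omega_{m+n}
al = [1.0, 2.0, 4.0]          # alpha_1, ..., alpha_{m+n}
ga = [1.1, -0.7, 0.4]         # gamma_1, ..., gamma_{m+n}

def det3(M):                  # cofactor expansion of a 3x3 determinant
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

# paired Cauchy matrix S: m pure Cauchy rows, then n gamma-weighted rows
S = [[1.0 / (w[k] + al[0]) for k in range(3)],
     [1.0 / (w[k] + al[1]) for k in range(3)],
     [ga[k] / (w[k] + al[2]) for k in range(3)]]

# paired Vandermonde matrix V with eps_k = ga_k * prod(w_k+al_i) / prod(w_k+al_j)
eps = [ga[k] * (w[k] + al[0]) * (w[k] + al[1]) / (w[k] + al[2]) for k in range(3)]
V = [[1.0, 1.0, 1.0], w[:], eps]

# prefactor: (al_2 - al_1) over prod over k = 1..m+n, i = 1..m of (w_k + al_i)
prefactor = (al[1] - al[0]) / math.prod(w[k] + al[i] for k in range(3) for i in range(2))
```

For these sample values both sides of the identity evaluate to the same number, as the assertion claims.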
The full calculation is straightforward but bulky and will be skipped (we refer the interested reader to <cit.> for the complete proof; also in <cit.> one can find more links between different types of structured matrices). Combining the results obtained in Section 2 (Theorem 2.1.), formulas (<ref>)-(<ref>) and Lemma 3.32, it is easy to see that the following statement is valid: det S(x, t) = 0 ⟺ δ_1(x, t) = 0 or δ_2(x, t) = 0. Taking into account Remark 2.2. we see that the singularity lines of the solutions ξ(x, t) of NIDE are parametrized by the coefficients ω_k, 1 ≤ k ≤ N of certain polynomials. As an example consider real solutions ξ(x, t) for the case N = 2 (this example was also considered in <cit.>). The parametrizing polynomials f_k(z), k = 1, 2 in this case are of order one: f_1(z) = z - p, f_2(z) = z + p; p = p̅. Let ω_k = ω̅_k, α_k = α̅_k, k = 1, 2 and ω_2 > ω_1. Consider two cases: * C_k > 0, k = 1, 2, * C_1 > 0, C_2 < 0, where C_k = (ω_k - α_1, 0)(ω_k - α_2, 0)/((ω_k + α_1, 0)(ω_k + α_2, 0)), k = 1, 2. In the first case the two singularity lines x_k(t), k = 1, 2 solve the systems exp 2(ω_k x + Θ(ω_k) t) = C_k (ω_k + p)/(ω_k - p), -exp 2(ω_k x + Θ(ω_k) t) = C_k (ω_k + p)/(ω_k - p); k = 1, 2. One line (L_1) corresponds to values of p in the interval ]-ω_1, ω_1[, and for the other one (L_2), p ∈ ]-∞, -ω_2[ ∪ ]ω_2, ∞[. Here Θ(x) is defined in (<ref>). Solving (<ref>) with respect to x and t one gets x = (ϑ_1(p) Θ(ω_2) - ϑ_2(p) Θ(ω_1))/(ω_1 Θ(ω_2) - ω_2 Θ(ω_1)), t = (ϑ_2(p) ω_1 - ϑ_1(p) ω_2)/(ω_1 Θ(ω_2) - ω_2 Θ(ω_1)), where ϑ_k(p) = 1/2 ln( C_k (ω_k + p)/(ω_k - p) ), k = 1, 2. This case corresponds to the "attracting" particles (interaction between different types of particles) considered in Example 3.25. Case 1. The lines L_1 and L_2 intersect each other.
From (<ref>)-(<ref>) it follows that the coordinates (x_0, t_0) of the intersection point satisfy the relations x_0 = (1/2)(ln C_1 Θ(ω_2) - ln C_2 Θ(ω_1))/(ω_1 Θ(ω_2) - ω_2 Θ(ω_1)), t_0 = (1/2)(ln C_2 ω_1 - ln C_1 ω_2)/(ω_1 Θ(ω_2) - ω_2 Θ(ω_1)). This is achieved by setting p = 0 in the case of line L_1 and p →±∞ in the case of line L_2. Calculating the derivative d x(t)/d t = (ω_1 Θ(ω_2)(ω_2^2 - p^2) - ω_2 Θ(ω_1)(ω_1^2 - p^2))/(ω_1 ω_2 (ω_1^2 - ω_2^2)) for both lines, and setting p = 0 in the case of L_1 and p →±∞ in the case of L_2, we see that d x(t)/d t|_p = 0 equals 0 in the case of the SHG equation (Θ(x) = 1/x) and -(ω_1^2 + ω_2^2) in the case of the mKdV equation (Θ(x) = -x^3); and d x(t)/d t|_p →±∞ → -∞. So in the case of the SHG equation the singularity lines intersect at the angle π/2. In the second case the singularity lines solve the systems -exp 2(ω_1 x + Θ(ω_1) t) = C_1 (ω_1 + p)/(ω_1 - p); exp 2(ω_2 x + Θ(ω_2) t) = |C_2| (ω_2 + p)/(ω_2 - p). The corresponding intervals for the parameter p are ]-ω_2, -ω_1[ and ]ω_1, ω_2[. The solutions of the systems (<ref>) have the same representation (<ref>) as in case 1., but this time they correspond to the "repulsing" particles (interaction between particles of the same type) considered in Example 3.25. Case 2. Consider now the general case of the parametrization of the singularity lines of the real solutions ϕ(x, t) of the SHG equation (the mKdV and NSE equations can be investigated in a similar manner). The parametrizing polynomials Q_1, 2(x) from (<ref>) in this case are of order N - 1. Taking into account the symmetries imposed on the sets {ω_i }_i = 1^N, {α_i, 0}_i = 1^N, the polynomials Q_1, 2(x) can be represented as Q_1, 2(x) = ∏_i = 1^N - 1 (x ± p_i), where p_i = p̅_i, 1 ≤ i ≤ N - 1, so the parametrization is performed by the roots {± p_i }_i = 1^N - 1 of the polynomials Q_1, 2(x), where Q_2(x) = (-1)^N - 1 Q_1(-x).
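Returning to the explicit N = 2 parametrization x(p), t(p) above, it is easy to exercise numerically. A sketch for the SHG case (Θ(ω) = 1/ω); the values ω_1 = 1, ω_2 = 2 and C_1 = C_2 = e are arbitrary positive sample choices:

```python
import math

w1, w2 = 1.0, 2.0
C1, C2 = math.e, math.e            # sample positive constants, so ln C_k = 1
Theta = lambda w: 1.0 / w          # SHG dispersion

def vartheta(w, C, p):             # vartheta_k(p) = (1/2) ln( C_k (w_k+p)/(w_k-p) )
    return 0.5 * math.log(C * (w + p) / (w - p))

D = w1 * Theta(w2) - w2 * Theta(w1)

def x_of(p):                       # line L_1, parametrized by p in ]-w1, w1[
    return (vartheta(w1, C1, p) * Theta(w2) - vartheta(w2, C2, p) * Theta(w1)) / D

def t_of(p):
    return (vartheta(w2, C2, p) * w1 - vartheta(w1, C1, p) * w2) / D

def slope_formula(p):              # the closed-form derivative dx/dt from the text
    return (w1 * Theta(w2) * (w2**2 - p**2) - w2 * Theta(w1) * (w1**2 - p**2)) \
           / (w1 * w2 * (w1**2 - w2**2))

def slope_fd(p, h=1e-6):           # finite-difference slope along the curve
    return (x_of(p + h) - x_of(p - h)) / (t_of(p + h) - t_of(p - h))
```

A short calculation shows that for the SHG choice the closed form collapses to dx/dt = -p^2/(ω_1^2 ω_2^2), so the slope of L_1 vanishes at the intersection parameter p = 0, consistent with the lines crossing at a right angle.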
The parametrization takes the form sgn(C_k) exp[2 (ω_k x + (1/ω_k) t)] = C_k ∏_i = 1^N - 1 (ω_k + p_i)/(ω_k - p_i), 1 ≤ k ≤ N. Now we prove the following theorem. Let the sets of numbers {ω_i }_i = 1^N, {α_i, 0}_i = 1^N be such that ω_i = ω̅_i, α_i = α̅_i, 1 ≤ i ≤ N; ω_i ≠ ω_k, α_i ≠ α_k for i ≠ k; and ω_i ≠ ±α_k, 1 ≤ i, k ≤ N. Then the parameters {p_i }_i = 1^N - 1, considered as functions of t, satisfy the nonlinear system of differential equations d p_k(t)/d t = (-1)^N ( ∏_1 ≤ i ≤ N - 1, i ≠ k p_i^2(t) ∏_1 ≤ i ≤ N (ω_i^2 - p_k^2(t)) ) / ( ∏_1 ≤ i ≤ N ω_i^2 ∏_1 ≤ i ≤ N - 1, i ≠ k (p_k^2(t) - p_i^2(t)) ), 1 ≤ k ≤ N - 1. Proof. Suppose for definiteness that in (<ref>) C_k > 0, 1 ≤ k ≤ N. Fix some index k (without loss of generality we can take k = 1) and solve for x: x = (1/ω_1)( (1/2) ∑_i = 1^N - 1 ln (ω_1 + p_i)/(ω_1 - p_i) - (1/ω_1) t + (1/2) ln C_1 ). To simplify the notation, the dependence of p_k(t) on t is omitted. Substituting (<ref>) into (<ref>) for all k > 1 we come up with the system F_k(x, t, p) ≡ (ω_k/ω_1)( (1/2) ∑_i = 1^N - 1 ln (ω_1 + p_i)/(ω_1 - p_i) - (1/ω_1) t + (1/2) ln C_1 ) + t/ω_k - (1/2) ln C_k - (1/2) ∑_i = 1^N - 1 ln (ω_k + p_i)/(ω_k - p_i) = 0, where 2 ≤ k ≤ N. Differentiating (<ref>) with respect to t we obtain ∂ F_k(x, t, p)/∂ t + ∑_i = 1^N - 1 ( ∂ F_k(x, t, p)/∂ p_i · d p_i/d t ) = 0, and substituting the derivatives ∂ F_k(x, t, p)/∂ p_i = ω_k (ω_k^2 - ω_1^2)/((ω_1^2 - p_i^2)(ω_k^2 - p_i^2)), ∂ F_k(x, t, p)/∂ t = -(ω_k^2 - ω_1^2)/(ω_1^2 ω_k), we come up with a system of linear equations with respect to the derivatives d p_i/d t: ∑_i = 1^N - 1 (1/((ω_1^2 - p_i^2)(ω_k^2 - p_i^2))) d p_i/d t = 1/(ω_k^2 ω_1^2), 2 ≤ k ≤ N. The determinant Δ = det{ 1/((ω_1^2 - p_i^2)(ω_k^2 - p_i^2)) }_2 ≤ k ≤ N, 1 ≤ i ≤ N - 1 of the matrix of coefficients of the system (<ref>) can be expressed as Δ = ∏_i = 1^N - 1 (ω_1^2 - p_i^2)^-1 det{ 1/(ω_k^2 - p_i^2) }_2 ≤ k ≤ N, 1 ≤ i ≤ N - 1, where the second factor on the right-hand side is the determinant of a Cauchy matrix. So the matrix of coefficients is non-singular and the system (<ref>) has a unique solution.
Using Cramer's rule, after simple manipulations with the explicit formulas for Cauchy matrix determinants, we arrive at (<ref>). □ The analysis of the system (<ref>) is quite non-trivial and will be carried out in subsequent publications. It is easy to see, though, that the system (<ref>) has some important properties that will be useful in our further considerations. For example, the equations do not depend on the set {α_i, 0}_i = 1^N, which makes it easier to address inverse problems. Also, the system (<ref>), as opposed to (<ref>), allows one to distinguish between different singularity lines. This is based on the following observation. Let us assume that the set {ω_i }_i = 1^N is such that ω_i = ω̅_i, 1 ≤ i ≤ N. Because of the symmetry, without loss of generality, we can assume that ω_i > 0, 1 ≤ i ≤ N and that the numbers ω_i are enumerated so that ω_i > ω_k for i > k. In this setup the real axis ]-∞, ∞[ is divided into the non-overlapping intervals Ω_2N = ]-∞, -ω_N[ ∪ ]ω_N, ∞[, Ω_2N - 1 = ]-ω_N, -ω_N - 1[, …, Ω_N = ]-ω_1, ω_1[, …, Ω_1 = ]ω_N - 1, ω_N[. There is a one-to-one correspondence between the initial values of the parameters p_i(t_0), 1 ≤ i ≤ N - 1 and the intervals Ω_k, 1 ≤ k ≤ 2N, such that the values p_i(t_0) can only belong to different intervals, and over time the initial mapping does not change (the proof of this statement in the general setup requires a non-trivial analysis of the system (<ref>) and will be addressed in a subsequent publication). So a particular singularity line L_k is characterized by a particular function p_k(t) taking values from a particular interval Ω_k; 1 ≤ k ≤ N. We illustrate the above statements by a simple example. Consider the SHG equation and let N = 2 and ω_i = ω̅_i, ω_i > 0, α_i, 0 = α̅_i, 0, i = 1, 2; ω_2 > ω_1. In this case we have one parameter p(t), and the system (<ref>) takes the form d p(t)/d t = (ω_1^2 - p^2(t))(ω_2^2 - p^2(t))/(ω_1^2 ω_2^2). We also supply a special initial condition p^*(t_0^*) = p_0^*.
Equation (<ref>) can be easily integrated, giving the general solution t - t_0^* = (ω_1^2 ω_2^2/(2 (ω_2^2 - ω_1^2)))( (1/ω_1) ln|(ω_1 + p(t))/(ω_1 - p(t))| - (1/ω_2) ln|(ω_2 + p(t))/(ω_2 - p(t))| ). It follows from (<ref>) that p_0^* should satisfy the consistency condition |(ω_1 + p_0^*)/(ω_1 - p_0^*)|^ω_2 = |(ω_2 + p_0^*)/(ω_2 - p_0^*)|^ω_1. Equation (<ref>) has four distinct solutions: * p_0, 1^* = 0; * p_0, 2^* = ±∞; * p_0, 3^* = p^*; * p_0, 4^* = -p^*; where p^* > 0, p^* ∈ ]ω_1, ω_2[. So each value of p_0^* belongs to one of the intervals ]-∞, -ω_2[ ∪ ]ω_2, ∞[, ]-ω_2, -ω_1[, ]-ω_1, ω_1[, ]ω_1, ω_2[. In the case p_0^* = 0 we have p(t) ∈ ]-ω_1, ω_1[, and the corresponding singularity line x(t) solves each of the equations exp 2(ω_k x(t) + t/ω_k) = C_k (ω_k + p(t))/(ω_k - p(t)), k = 1, 2. It is required in this case that C_k > 0, k = 1, 2. In the case p_0^* = ±∞ we have p(t) ∈ ]-∞, -ω_2[ ∪ ]ω_2, ∞[, and the corresponding singularity line x(t) solves each of the equations -exp 2(ω_k x(t) + t/ω_k) = C_k (ω_k + p(t))/(ω_k - p(t)), k = 1, 2. It is also required in this case that C_k > 0, k = 1, 2. The cases 1. and 2. just considered correspond to the case of "attracting" particles discussed in Example 3.25. Case 1. Analogously, consider the cases 3. and 4. In the case p_0^* = p^*, p^* > 0, p^* ∈ ]ω_1, ω_2[ we have p(t) ∈ ]ω_1, ω_2[, and the corresponding singularity line x(t) solves each of the equations -exp 2(ω_1 x(t) + t/ω_1) = C_1 (ω_1 + p(t))/(ω_1 - p(t)), exp 2(ω_2 x(t) + t/ω_2) = |C_2| (ω_2 + p(t))/(ω_2 - p(t)). It is required in this case that C_1 > 0, C_2 < 0. In the case p_0^* = -p^* we have p(t) ∈ ]-ω_2, -ω_1[, and the corresponding singularity line x(t) solves each of the previous equations. It is also required in this case that C_1 > 0, C_2 < 0. This corresponds to the case of "repulsing" particles discussed in Example 3.25. Case 2. From the considered example it follows that the triplets (x_0, i^*, t_0^*, p_0, i^*), 1 ≤ i ≤ 4 are completely determined by the sets {ω_1, ω_2}, {α_1, 0, α_2, 0}.
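The structure of the consistency condition is easy to probe numerically. A sketch with the sample values ω_1 = 1, ω_2 = 2 (for this particular choice the positive root in ]ω_1, ω_2[ happens to be exactly p^* = √3, since |(1 + √3)/(1 - √3)|^2 = (2 + √3)^2 = 7 + 4√3 = |(2 + √3)/(2 - √3)|):

```python
import math

w1, w2 = 1.0, 2.0      # sample values; any 0 < w1 < w2 would do

def g(p):
    # logarithm of the consistency condition: g(p) = 0 iff p solves it
    return w2 * math.log(abs((w1 + p) / (w1 - p))) \
         - w1 * math.log(abs((w2 + p) / (w2 - p)))

# the root p* in ]w1, w2[, located by bisection (g > 0 near w1, g < 0 near w2,
# and g is strictly decreasing on this interval)
lo, hi = w1 + 1e-9, w2 - 1e-9
for _ in range(200):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if g(mid) > 0 else (lo, mid)
p_star = 0.5 * (lo + hi)
```

The remaining solutions p_0^* = 0 and p_0^* = ±∞ are visible as well: g vanishes at p = 0, and g(p) → 0 as |p| → ∞.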
Taking into account (<ref>) it is easy to calculate t_0^* and x_0, i^*, 1 ≤ i ≤ 4: t_0^* = (ω_1 ω_2 (ω_2 ln|C_1| - ω_1 ln|C_2|))/(2 (ω_2^2 - ω_1^2)), x_0, i^* = (1/(2 ω_1))( ln|(ω_1 + p_0, i)/(ω_1 - p_0, i)| + ω_1 (ω_2 ln|C_1| - ω_1 ln|C_2|)/(ω_2^2 - ω_1^2) ), 1 ≤ i ≤ 4. Thus, given the sets {ω_1, ω_2}, {α_1, 0, α_2, 0}, the alternative method of construction of the singularity lines reduces to the following steps: * Step 1: From (<ref>) calculate p_0, i^*, 1 ≤ i ≤ 4, and t_0^* from (<ref>); * Step 2: Solve the differential equation (<ref>) with initial data (t_0^*, p_0, i^*) to obtain p_i(t), 1 ≤ i ≤ 4; * Step 3: Substitute p_i(t) into the corresponding equation (<ref>)-(<ref>) to obtain x_i(t), 1 ≤ i ≤ 4. The described methodology is also valid in the general case, but then the solution of the system (<ref>) cannot be constructed in closed form and numerical methods should be involved. As pointed out before, the singularity lines contain full information about the PE(N) solutions of NIDE. In this respect it would be interesting to consider the following problem: Given some information about the singularity lines, restore the corresponding PE(N) solutions of NIDE. We restrict ourselves to considering a special case of Problem 3.35 for the SHG equation when N = 2 (the general case will be considered in further publications). In this case Problem 3.35 is solved by the following assertion: The system (the PE(N) solutions of NIDE) is characterized by the following data {t_0, d^i x_j(t)/d t^i|_t = t_0}, j = 1, 2; i = 0, 1, 2 at some point t_0 ∈ ]-∞, ∞[, where the index j enumerates the singularity lines for the particular case ("attracting" (A-case) or "repulsing" (R-case)). Proof. To simplify the notation we adopt the following designations: ẍ_j ≡ d^2 x_j(t)/d t^2|_t = t_0, ẋ_j ≡ d x_j(t)/d t|_t = t_0, x_j ≡ x_j(t_0), p_j ≡ p_j(t_0); j = 1, 2. It suffices to show that, given the data (<ref>), one can uniquely recover the sets {ω_1, ω_2}, {α_1, 0, α_2, 0}.
Indeed, differentiating the equations (<ref>)-(<ref>) corresponding to the particular case in the neighborhood of t_0 with respect to t and using (<ref>), we obtain the relations ω_1^2 ω_2^2 (d x_j(t)/d t) = -p_j^2(t), j = 1, 2. Differentiating (<ref>) once more with respect to t and again using (<ref>) results in ω_1^4 ω_2^4 (d^2 x_j(t)/d t^2) = -2 p_j(t) (p_j^2(t) - ω_1^2)(p_j^2(t) - ω_2^2), j = 1, 2. Setting t = t_0 in (<ref>) and (<ref>) we arrive at the system of four non-linear equations σ_2(ω^2) ẋ_j = -p_j^2, σ_2^2(ω^2) ẍ_j = -2 p_j (p_j^4 - σ_1(ω^2) p_j^2 + σ_2(ω^2)); j = 1, 2 with respect to the unknowns p_j and σ_j(ω^2), j = 1, 2, where σ_j(ω^2) are the symmetric functions of the set {ω_1^2, ω_2^2}: σ_1(ω^2) = ω_1^2 + ω_2^2, σ_2(ω^2) = ω_1^2 ω_2^2. Simple algebra gives the following quadratic equations for p_j, j = 1, 2: ẋ_1 (ẋ_1 - ẋ_2) p_2^2 - (p_2/2)( ẋ_1 ẍ_2/ẋ_2 + √(ẋ_2/ẋ_1) ẍ_1 ) + ẋ_1 - ẋ_2 = 0, ẋ_2 (ẋ_1 - ẋ_2) p_1^2 + (p_1/2)( ẋ_2 ẍ_1/ẋ_1 + √(ẋ_1/ẋ_2) ẍ_2 ) + ẋ_1 - ẋ_2 = 0. Solving (<ref>) we obtain p_j, j = 1, 2. Then, using the first of the equations (<ref>), we calculate σ_2(ω^2). Substituting σ_2(ω^2) into the second equation we find σ_1(ω^2). Calculating the roots of the polynomial f(y) = y^2 - y σ_1(ω^2) + σ_2(ω^2) we find the values of ω_j, j = 1, 2. Let us note that the equations (<ref>) have two extra solutions that should be dropped by matching the values of p_j, j = 1, 2 with the intervals they must fall into according to the considered case (A or R). Next we calculate {α_1, 0, α_2, 0}.
It follows from (<ref>)-(<ref>) that the symmetric functions σ_j(α_0), j = 1, 2 satisfy the system of equations σ_1(α_0) ω_j (1 + κ_j) - σ_2(α_0) (1 - κ_j) = ω_j^2 (1 - κ_j), j = 1, 2, where κ_j = sgn(C_j) exp 2( ω_j x_j + t_0/ω_j - (1/2) ln (ω_j + p_j)/(ω_j - p_j) ), j = 1, 2. System (<ref>) has a unique solution, from which we recover {α_1, 0, α_2, 0} by solving the quadratic equation y^2 - y σ_1(α_0) + σ_2(α_0) = 0. In the R-case, when t_0 = t_0^*, we have the symmetry relations p_1 = -p_2, ẋ_1 = ẋ_2, ẍ_1 = -ẍ_2, and from (<ref>)-(<ref>) it follows that p_j = ẋ_j/ẍ_j, σ_2(ω^2) = -p_j^2/ẋ_j, σ_1(ω^2) = p_j^2 + p_j ẍ_j/(2 ẋ_j^2) - 1/ẋ_j. In (<ref>) the index j can be either 1 or 2. In the A-case, when t_0 = t_0^*, the values of p_j and the derivatives ẋ_j, ẍ_j, j = 1, 2 are trivial and do not carry any information, so in this case the problem cannot be solved uniquely. □ We illustrate the methodology developed in Assertion 3.36 by numerical examples. Given the following data for the R-case: t_0 = -0.479042987; x_1 = 0.610504874; x_2 = -0.709437736; ẋ_1 = -0.713296278; ẋ_2 = -0.78498714; ẍ_1 = 0.448732074; ẍ_2 = -0.407660883. Calculation steps: * Step 1: Calculate p_j, j = 1, 2 using (<ref>): p_1, 1 = -0.75418; p_1, 2 = 1.68914; p_2, 1 = 0.79117; p_2, 2 = -1.77199; * Step 2: Calculate σ_j(ω^2), j = 1, 2: σ_1, 1(ω^2) = 1.6381478; σ_1, 2(ω^2) = 5.0; σ_2, 1(ω^2) = 0.7973978; σ_2, 2(ω^2) = 4.0; * Step 3: Calculate ω_j, j = 1, 2: ω_1, 1 - complex; ω_1, 2 = ± 1.0; ω_2, 1 - complex; ω_2, 2 = ± 2.0; * Step 4: Verify the results: the values σ_1, 1(ω^2) = 1.6381478, σ_2, 1(ω^2) = 0.7973978 and the corresponding complex ω_1, 1 and ω_2, 1 should be dropped; p_2, 2 ∈ ]-2.0, -1.0[, p_1, 2 ∈ ]1.0, 2.0[; * Step 5: Calculate α_0, j, j = 1, 2 using (<ref>): α_0, 1 = -0.71651, α_0, 2 = 1.116515. Similarly, consider the calculation steps for the A-case.
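Before turning to the A-case, the R-case arithmetic above is easy to reproduce. A sketch of Steps 1-3 for the j = 2 branch, using the printed derivative data (the tolerances are loose because the printed values are rounded):

```python
import math

# printed R-case data: first and second derivatives of the two lines at t_0
xd1, xd2 = -0.713296278, -0.78498714
xdd1, xdd2 = 0.448732074, -0.407660883

# Step 1: the quadratic for p_2,  a p^2 + b p + c = 0
a = xd1 * (xd1 - xd2)
b = -0.5 * (xd1 * xdd2 / xd2 + math.sqrt(xd2 / xd1) * xdd1)
c = xd1 - xd2
disc = math.sqrt(b * b - 4.0 * a * c)
roots = [(-b - disc) / (2.0 * a), (-b + disc) / (2.0 * a)]
p2 = next(r for r in roots if -2.0 < r < -1.0)   # admissible root lies in ]-w2, -w1[

# Step 2: symmetric functions of {w1^2, w2^2}
s2 = -p2 ** 2 / xd2                              # from sigma_2 * xdot_j = -p_j^2
s1 = (p2 ** 4 + s2 + s2 ** 2 * xdd2 / (2.0 * p2)) / p2 ** 2

# Step 3: w_j^2 are the roots of y^2 - s1 y + s2 = 0
d = math.sqrt(s1 * s1 - 4.0 * s2)
w_sq = sorted([(s1 - d) / 2.0, (s1 + d) / 2.0])
```

The quadratic reproduces the printed pair 0.79117 and -1.77199, and the recovered symmetric functions come out close to σ_1(ω^2) = 5, σ_2(ω^2) = 4, i.e. ω_1 = 1, ω_2 = 2.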
Given the following data for the A-case: t_0 = -0.550122329; x_1 = 0.012826762; x_2 = -0.201327063; ẋ_1 = -0.00003606; ẋ_2 = -6.285525817; ẍ_1 = -0.006; ẍ_2 = 319.9146357. Calculation steps: * Step 1: Calculate p_j, j = 1, 2 using (<ref>): p_1, 1 = -13.24693417; p_1, 2 = 0.01201; p_2, 1 = -5.01419019; p_2, 2 = 5530.611771; * Step 2: Calculate σ_j(ω^2), j = 1, 2: σ_1, 1(ω^2) = 30610058.78; σ_1, 2(ω^2) = 5.0; σ_2, 1(ω^2) = 4866365.591; σ_2, 2(ω^2) = 4.0; * Step 3: Calculate ω_j, j = 1, 2: ω_1, 1 - complex; ω_1, 2 = ± 1.0; ω_2, 1 = ± 5532.635804; ω_2, 2 = ± 2.0; * Step 4: Verify the results: the values σ_1, 1(ω^2) = 30610058.78, σ_2, 1(ω^2) = 4866365.591 and the corresponding ω_1, 1 and ω_2, 1 should be dropped; p_2, 1 ∈ ]-∞, -2.0[, p_1, 2 ∈ ]-1.0, 1.0[; * Step 5: Calculate α_0, j, j = 1, 2 using (<ref>): α_0, 1 = 0.0, α_0, 2 = 0.5. In the Appendix we present some examples of the behavior of the singularity lines for the cases N > 2 obtained by numerical methods. These examples, on the one hand, reflect some of the general laws discussed previously, e.g. the asymptotic behavior as |t| →∞ and the nature of the intersections of the singularity lines; on the other hand, they introduce new effects admitting a non-trivial interpretation. Figure 4 represents the interaction between three particles of the same type. As in the case of two particles of the same type, the singularity lines do not intersect (the particles "repulse" each other). Also, one can select regions where the particles interact in pairs, so the complex interaction can locally be described in terms of a simpler model (N = 2). This happens when the corresponding values of the parameters ω are well separated from each other. Figure 5 exhibits an interaction between three particles where two of them are of the same type and one is of a different type. As in the previous example, one can again select regions where the particles interact in pairs.
The singularity lines corresponding to the particles of different types intersect (the particles "attract" each other and "annihilate"), while particles of the same type "repulse" each other. Figure 6 demonstrates the interaction between a "free" particle and a bound state. The "free" particle "penetrates" into the bound state and "knocks out" the particle of the same type. The "knocked out" particle becomes "free", and the "knocking" particle gets "captured" by the particle of the other type, creating a new bound state. Figures 7-10 focus on the case N = 4. When the parameters ω and α_0 are real numbers, the behavior of the singularity lines is similar to the considered cases N = 2, 3 (Figures 7, 8). An interesting phenomenon occurs in the case of the interaction of "bound states" (Figures 9, 10). A "weak" interaction is presented in Figure 9. In this case the "bound states" interact as "free" particles of the same type - they "repulse" each other. There is an exchange of energy between the "bound states", but there is no exchange of individual particles. Figure 10 shows a "strong" interaction between "bound states" with a complex exchange of particles between them. A closer look at the interaction region (inserts on the right- and left-hand sides) reveals a new type of interaction that could not be observed in the simpler systems (N = 2, 3): "generation" and "annihilation" of virtual particles (the encircled points of "generation" are marked by "G" and the points of "annihilation" by "A"). Some of the "virtual" particles exist for a short period of time and then "annihilate" with another "virtual" or "permanent" particle. But some of them "convert" to a "permanent" state, replacing "annihilated" ones, and form new "bound states" with the "surviving" particles of different types. One can still observe the exchange of energy between "bound states" on a large scale, but tracking the behavior pattern of the individual particles in the presence of the "virtual" ones is quite problematic.

§ REFERENCES

[lev1] L. A. Sakhnovich, Spectral Theory of Canonical Differential Systems. Method of Operator Identities, Operator Theory: Advances and Applications, vol. 107, Birkhauser Verlag, 1991.
[pogreb1] A. K. Pogrebkov, M. C. Polivanov, Some topics in the theory of singular solutions of nonlinear equations, Twistor geometry and nonlinear systems, Lecture Notes in Math., 970, pp. 129-145, Springer, Berlin, 1982.
[pogreb2] A. K. Pogrebkov, Singular solitons: an example of a sinh-Gordon equation, Letters in Mathematical Physics, 5:4, 277-285, 1981.
[heinig1] G. Heinig, Inversion of generalized Cauchy matrices and other classes of structured matrices, Linear Algebra for Signal Processing, IMA Volumes in Mathematics and its Applications, 69: 95-114, Springer-Verlag, 1994.
[heinig2] G. Heinig, K. Rost, Algebraic methods for Toeplitz-like matrices and operators, Birkhauser Verlag, 1994.
[gohberg1] I. Gohberg, I. Koltracht, P. Lancaster, Efficient solution of linear systems of equations with recursive structure, Linear Algebra and its Applications, 30: 80-113, 1986.
[gohberg2] I. Gohberg, T. Kailath, I. Koltracht, P. Lancaster, Linear complexity parallel algorithms for linear systems of equations, Linear Algebra and its Applications, 30: 80-117, 1988.
[saed] A. Saed, T. Kailath, H. Lev-Ari, T. Constantinescu, Recursive solutions of rational interpolation problems via fast matrix factorization, Integral Equations and Operator Theory, 20: 84-118, 1994.
[ball] J. Ball, I. Gohberg, L. Rodman, Interpolation of rational matrix functions, OT-series, vol. 45, Birkhauser Verlag, 1990.
[heinig3] G. Heinig, L. A. Sakhnovich, I. F. Tydniouk, Paired Cauchy matrices, Linear Algebra and its Applications, 251: 189-214, 1997.
[sasa1] A. L. Sakhnovich, L. A. Sakhnovich, I. Y. Roitberg, Inverse Problems and Nonlinear Evolution Equations, De Gruyter Studies in Mathematics, vol. 47, De Gruyter, 2013.
[lev2] L. A. Sakhnovich, The explicit formulas for the spectral characteristics and solution of the sinh-Gordon equation, Ukr. Math. J., 42(11): 1359-1365, 1990.
[lev3] L. A. Sakhnovich, The method of operator identities and problems of analysis, St. Petersburg Math. J., 5(1): 1-69, 1994.
[lev4] L. A. Sakhnovich, Factorization problems and operator identities, Uspekhi Mat. Nauk, 41(1): 3-55, 1986. Translated in: Russian Math. Surveys, 41(1): 1-64, 1986.
[lev5] L. A. Sakhnovich, The non-linear equations and the inverse problems on the half-axis, Preprint, Inst. Mat. AN Ukr.SSR, Kiev: Izd-vo Inst. Matem. AN Ukr.SSR, 1987.
[lev6] L. A. Sakhnovich, Evolution of spectral data and nonlinear equations, Ukr. Mat. Zh., 40(4): 533-535, 1988. Translated in: Ukr. Math. J., 40(4): 459-461, 1988.
[lev7] L. A. Sakhnovich, I. F. Tydniouk, An explicit solution of the Sh-Gordon equation, Dokl. Akad. Nauk Ukrain. SSR, ser. A, no. 9, 20-24, 1990.
[sasa2] I. Gohberg, M. A. Kaashoek, A. L. Sakhnovich, Canonical systems with rational spectral densities: explicit formulas and applications, Math. Nachr., 194: 93-125, 1998.
[sasa3] I. Gohberg, M. A. Kaashoek, A. L. Sakhnovich, Pseudo-canonical systems with rational spectral densities: explicit formulas and applications, J. Differential Equations, 146(2): 375-398, 1998.
[sasa4] I. Gohberg, M. A. Kaashoek, A. L. Sakhnovich, Sturm-Liouville systems with rational Weyl functions: explicit formulas and applications, Integral Equations Operator Theory, 30(3): 338-377, 1998.
[sasa5] A. L. Sakhnovich, Nonlinear Schrödinger equation on a semi-axis and an inverse problem associated with it, Ukr. Mat. Zh., 42(3): 356-363, 1990. Translated in: Ukr. Math. J., 42(3): 316-323, 1990.
[sasa6] A. L. Sakhnovich, Exact solutions of nonlinear equations and the method of operator identities, Linear Algebra and its Applications, 182: 109-126, 1993.
[lev8] L. A. Sakhnovich, Integrable nonlinear equations on the half-line, Ukrain. Math. Zh., vol. 43, no. 11, 1578-1584, 1991. Translated in: Ukr. Math. J., 43, 1991.
[dis] I. F. Tydniouk, On the soliton-like explicit solutions of non-linear integrable equations, Doctoral Dissertation, Institute of Applied Mathematics and Mechanics, Academy of Sciences of the USSR, 1992.
[sasa7] I. Gohberg, M. A. Kaashoek, A. L. Sakhnovich, Taylor coefficients of a pseudo-exponential potential and the reflection coefficient of the corresponding canonical system, Mathematische Nachrichten, vol. 278, no. 12-13, 1579-1590, 2005.
[mar] V. A. Marchenko, Cauchy problem for the Korteweg-de Vries equation with non-decreasing initial data, Integrability and kinetic equations for solitons, Naukova Dumka, pp. 168-212, Kiev, 1990.
[qm1] E. Lieb, W. Liniger, Exact analysis of an interacting Bose gas. I. The general solution and the ground state, Physical Review, vol. 130, no. 4, 1963.
[qm2] A. R. Its, A. G. Izergin, V. E. Korepin, N. A. Slavnov, The quantum correlation functions as the τ function of classical differential equations, in: Important Developments in Soliton Theory, Springer Series in Nonlinear Dynamics, pp. 407-417, 1993.

§ APPENDIX

Here we present the results of numerical calculations of the singularity lines for the cases N = 2, 3, 4 and different combinations of the parameters ω and α_0. Singularity lines corresponding to particles of the same type have the same color.
http://arxiv.org/abs/1703.09047v1
{ "authors": [ "Igor Tydniouk" ], "categories": [ "math-ph", "math.MP", "nlin.SI" ], "primary_category": "math-ph", "published": "20170327131311", "title": "On the dynamics of the singularities of the solutions of some non-linear integrable differential equations" }
In a d-dimensional strip [0,L)^d-1×[0,1], d ≥ 2, we consider the non-stationary Stokes equation for the vector field u(x', z, t) and the scalar field p(x', z, t)

∂_t u - Δu + ∇p = f for 0 < z < 1,
∇· u = 0 for 0 < z < 1,
u = 0 for z ∈ {0, 1},
u = 0 for t = 0,

where x' ∈ [0,L)^d-1 and z ∈ [0,1] indicate the spatial variables and t ∈ ℝ^+ denotes the time variable.
In what follows it is important to distinguish the horizontal component u'∈ℝ^d-1 and the vertical component u^z∈ℝ of the vector field u. Motivated by an application to the Rayleigh-Bénard convection problem (see <cit.>), in this paper we establish the following maximal regularity estimate: There exists R_0∈(0,∞) depending only on d and L such that the following holds. Let u,p,f satisfy the equation (<ref>). Assume f is horizontally band-limited, i.e. f̂(k',z,t)=0 unless 1≤ R|k'|≤ 4, where R<R_0. Then, ||(∂_t -∂_z^2)u'||_(0,1)+||∇'∇ u'||_(0,1)+||∂_t u^z||_(0,1)+||∇^2 u^z||_(0,1)+ ||∇ p||_(0,1)≲||f||_(0,1), where ||·||_(0,1) denotes the norm ||f||_(0,1):=||f||_(R,(0,1))=inf_f=f_0+f_1{⟨sup_0<z<1 |f_0|⟩+⟨∫_0^1 |f_1| dz/(1-z)z⟩} , where f_0 and f_1 satisfy the bandedness assumption (<ref>). In the Theorem above, f̂ denotes the horizontal Fourier transform of f, k' the conjugate variable of x', and the brackets ⟨·⟩ stand for the long-time and horizontal-space average. See Section <ref> for notations. The Theorem as stated above is used in this form in <cit.>. Alternatively, the theorem can be stated with the brackets ⟨·⟩ denoting the integration in t>0, see Remark <ref> at the beginning of Section <ref>. The maximal regularity in the strip is expressed in terms of the interpolation between the norms of L^1(dtdx'1/z(1-z)dz) and L^∞_z(L^1_t,x'), which are both borderline for the Calderón-Zygmund estimates. We notice that the norm of L^1(dtdx'1/z(1-z)dz) is critical both because the exponent and the weight 1/z(1-z) are borderline; therefore, estimate (<ref>) is only true under bandedness assumptions (i.e. a restriction to a packet of wave numbers in Fourier space).
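To make precise in which sense the weight 1/z just fails to be Muckenhoupt (a routine computation, recorded here only for orientation and not part of the proofs): testing the A_1 condition on intervals I=(0,r) one finds

```latex
\frac{1}{r}\int_0^r \frac{dz}{z} = \infty ,
\qquad\text{whereas}\qquad
\frac{1}{r}\int_0^r \frac{dz}{z^{\alpha}}
= \frac{r^{-\alpha}}{1-\alpha}
= \frac{1}{1-\alpha}\,\operatorname*{ess\,inf}_{(0,r)} z^{-\alpha}
\quad\text{for } 0\le\alpha<1 ,
```

so every power z^{-α} with α<1 passes the A_1 test on such intervals with constant 1/(1-α), while the borderline exponent α=1 does not even produce a finite average: the weight 1/z is the endpoint of the Muckenhoupt family z^{-α}, 0≤α<1.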
We observe that only bandedness in the horizontal variable x' is assumed, and this is extremely convenient since the horizontal Fourier transform (or rather, series), with the help of which bandedness is expressed, is compatible with the lateral periodic boundary conditions. We notice that in the maximal regularity theory the no-slip boundary condition is a nuisance: as opposed to the no-stress boundary condition in the half space, the no-slip boundary condition does not allow for an extension by reflection to the whole space, and thereby the use of simple kernels or Fourier methods also in the normal variable. The difficulty coming from the no-slip boundary condition in the non-stationary Stokes equations when deriving maximal regularity estimates is of course well-known; many techniques have been developed to derive Calderón-Zygmund estimates despite this difficulty. In the half space Solonnikov in <cit.> has constructed a solution formula for (<ref>) with zero initial data via the Oseen and Green tensors. An easier and more compact representation of the solution to the problem (<ref>) with zero forcing term and non-zero initial value was later given by Ukai in <cit.> by using a different method. Indeed he could write an explicit formula for the solution operator as a composition of Riesz operators and solution operators for the heat and Laplace equations.
This formula is an effective tool to get L^p-L^q (1<q,p<∞) estimates for the solution and its derivatives. In the case of exterior domains, Maremonti and Solonnikov <cit.> derive L^p-L^q (1<q,p<∞) estimates for (<ref>), going through estimates for the extended solution in the half space and in the whole space. In particular, in the half space they propose a decomposition of (<ref>) with a non-zero divergence equation. The book of Galdi <cit.> provides a complete treatment of the classical theory and results on the non-stationary Stokes equations and Navier-Stokes equations. In <cit.> the authors make substantial use of the estimate (<ref>) in Theorem <ref> to get bounds on the Nusselt number, which is the natural measure of the enhancement of upward heat flux for the Rayleigh-Bénard convection. There, the quantity of interest is the second vertical derivative ∂_z^2 of the vertical velocity component u^z=u· e_z. The motivation for expressing the maximal regularity in the borderline spaces L^1(dtdx'1/z(1-z)dz) and L^∞_z(L^1_t,x') comes from the nature of the right-hand side f= Te_z-1/Pr(u·∇) u in the problem studied in <cit.>. Indeed, thanks to the no-slip boundary conditions, the convective nonlinearity is well controlled in the L^1(dtdx'1/z(1-z)dz)-norm; hence, a maximal regularity theory for the non-stationary Stokes equations with respect to this norm is required. The L^∞_z(L^1_t,x')-norm arises for two unrelated reasons: it is needed to estimate the buoyancy term Te_z driving the Navier-Stokes equations and it is the natural partner of L^1(dtdx'1/z(1-z)dz) in the maximal regularity estimate. Aside from their application to the Rayleigh-Bénard convection, all the estimates in Theorem <ref> may be of independent interest, since they show the full extent of what one can obtain under the horizontal bandedness assumption only.
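The mechanism by which the no-slip condition tames the convective nonlinearity in the weighted norm can be indicated by a heuristic (a sketch under the assumption that ∂_z u is bounded; this is an illustration, not a statement taken from the proofs below): since u vanishes at z=0 and z=1, the fundamental theorem of calculus gives

```latex
|u(x',z,t)| \le \min(z,1-z)\sup_{0<z<1}|\partial_z u(x',z,t)|
\lesssim z(1-z)\,\|\partial_z u(x',\cdot,t)\|_{L^\infty_z} ,
\qquad\text{so}\qquad
\frac{|(u\cdot\nabla)u|}{z(1-z)} \lesssim \|\partial_z u\|_{L^\infty_z}\,|\nabla u| ,
```

and the degenerate weight is absorbed by the vanishing of u at the plates.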
§ MAXIMAL REGULARITY IN THE STRIP

§.§ From the strip to the half space

Let us consider the non-stationary Stokes equations {[ ∂_t u-Δ u+∇ p=f for 0<z<1 ,; ∇· u=0 for 0<z<1 ,; u=0 for z∈{0,1} ,; u=0 for t=0 .; ]. In order to prove the maximal regularity estimate in the strip we extend the problem (<ref>) to the half space. By symmetry, it is enough to consider for the moment the extension to the upper half space. Consider the localization (ũ, p̃):=(η u,η p), where the cut-off η=η(z) satisfies η=1 on [0,1/2) and supp η⊂[0,1). Extending (ũ, p̃) by zero, they can be viewed as functions in the upper half space. The couple (ũ, p̃) satisfies {[ ∂_t ũ-Δũ+∇p̃ =f̃ for z>0 ,; ∇·ũ =ρ̃ for z>0 ,; ũ = 0 for z=0 ,; ũ = 0 for t=0 ,; ]. where f̃:=η f-2(∂_z η)∂_z u-(∂_z^2η )u+(∂_zη )pe_z, ρ̃:=(∂_zη )u^z .

§.§ Maximal regularity in the upper half space

In the half space, taking advantage of the explicit representation of the solution via Green functions, we prove the regularity estimates which will be crucial in the proof of Theorem <ref>. Consider the non-stationary Stokes equations in the upper half-space {[ ∂_t u-Δ u+∇ p = f for z>0 ,; ∇· u = ρ for z>0 ,; u = 0 for z=0 ,; u = 0 for t=0 .; ]. Suppose that f and ρ are horizontally band-limited, i.e. f̂(k',z,t)=0 unless 1≤ R|k'|≤ 4 and ρ̂(k',z,t)=0 unless 1≤ R|k'|≤ 4, for some R∈(0,∞). Then ||∂_t u^z||_(0,∞)+||∇^2 u^z||_(0,∞)+||∇ p||_(0,∞)+||(∂_t -∂_z^2)u'||_(0,∞)+||∇'∇ u'||_(0,∞)≲ ||f||_(0,∞)+||(-Δ')^-1/2∂_t ρ||_(0,∞)+||(-Δ')^-1/2∂_z^2 ρ ||_(0,∞)+||∇ρ||_(0,∞), where ||·||_(0,∞) denotes the norm ||f||_(0,∞):=||f||_R;(0,∞)=inf_f=f_0+f_1{⟨sup_0<z<∞ |f_0|⟩+⟨∫_0^∞ |f_1|dz/z⟩} , where f_0 and f_1 satisfy the bandedness assumption (<ref>). The first ingredient to establish Proposition <ref> is a suitable representation of the solution operator (f=(f',f^z),ρ)→ u=(u',u^z) of the Stokes equations with the no-slip boundary condition. In the case of the no-slip boundary condition the Laplace operator has to be factorized as Δ=∂_z^2+Δ'=(∂_z+(-Δ')^1/2)(∂_z-(-Δ')^1/2).
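On the horizontal Fourier side, where -Δ' acts as multiplication by |k'|^2, this factorization is elementary:

```latex
\widehat{\Delta u}(k',z,t)
= \big(\partial_z^2 - |k'|^2\big)\widehat{u}(k',z,t)
= \big(\partial_z + |k'|\big)\big(\partial_z - |k'|\big)\widehat{u}(k',z,t) ,
```

and the two first-order factors are exactly the forward and backward fractional diffusion operators ∂_z±(-Δ')^1/2 that organize the decomposition of the solution operator.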
In this way the solution operator to the Stokes equations with the no-slip boundary condition (<ref>) can be written as the fourfold composition of solution operators to three more elementary boundary value problems:

* Backward fractional diffusion equation (<ref>): {[ (∂_z-(-Δ')^1/2)ϕ= ∇· f-(∂_t-Δ )ρ for z>0 ,; ϕ→0 for z→∞ .; ].

* Heat equation (<ref>): {[ (∂_t-Δ)v^z= (-Δ')^1/2(f^z-ϕ)-∇'· f'+(∂_t-Δ)ρ for z>0 ,; v^z=0 for z=0 ,; v^z=0 for t=0 .; ].

* Forward fractional diffusion equation (<ref>): {[ (∂_z+(-Δ')^1/2)u^z=v^z for z>0 ,; u^z=0 for z=0 .; ].

* Heat equation (<ref>): {[ (∂_t-Δ)v' = (1+∇'(-Δ')^-1∇'·)f' for z>0 ,; v' = 0 for z=0 ,; v' = 0 for t=0 .; ].

Finally set u'=v'-∇'(-Δ')^-1(ρ-∂_z u^z) . In order to prove the validity of the decomposition we need to argue that (∂_t-Δ)u-f is a gradient, which reduces to proving that (∂_t-Δ)u'-f' is a gradient in x' and ∂_z ((∂_t-Δ)u'-f')=∇'((∂_t-Δ)u^z-f^z) . Let us consider for simplicity ρ=0. The first statement follows easily from the definition. Indeed, by definition (<ref>) and equation (<ref>), (∂_t-Δ)u'-f' =∇' ((-Δ')^-1∇'· f'+(-Δ')^-1(∂_t-Δ)∂_z u^z). Let us now focus on (<ref>), which by using (<ref>) and (<ref>) can be rewritten as ∂_z ∇'((-Δ')^-1∇'· f'+(-Δ')^-1(∂_t-Δ)∂_zu^z)=∇'((∂_t-Δ)u^z-f^z) . Because of the periodic boundary conditions in the horizontal direction, the latter is equivalent to ∂_z (-Δ')((-Δ')^-1∇'· f'+(-Δ')^-1(∂_t-Δ)∂_zu^z)=(-Δ')((∂_t-Δ)u^z-f^z), that, after factorizing Δ=(∂_z-(-Δ')^1/2)(∂_z+(-Δ')^1/2), turns into (∂_z-(-Δ')^1/2)(∂_t-Δ)(∂_z+(-Δ')^1/2) u^z=(-Δ')f^z-∂_z∇'· f' . One can easily check that the identity holds true by applying (<ref>), (<ref>) and (<ref>). The no-slip boundary condition is trivially satisfied: indeed, by (<ref>) we have u^z=0 and ∂_z u^z=0. The combination of (<ref>) with ∂_z u^z=0 gives u'=0. For each step of the decomposition of the Stokes equations we will derive maximal regularity-type estimates.
These are summed up in the following

* Let ϕ,f,ρ satisfy the problem (<ref>) and assume f,ρ are horizontally band-limited, i.e. f̂(k',z,t)=0 unless 1≤ R|k'|≤ 4 and ρ̂(k',z,t)=0 unless 1≤ R|k'|≤ 4. Then, ||ϕ||_(0,∞)≲ ||f||_(0,∞)+||(-Δ')^-1/2∂_t ρ||_(0,∞)+||∇ρ||_(0,∞) .

* Let v^z, f, ϕ, ρ satisfy the problem (<ref>) and assume f,ϕ,ρ are horizontally band-limited, i.e. f̂(k',z,t)=0 unless 1≤ R|k'|≤ 4, ϕ̂(k',z,t)=0 unless 1≤ R|k'|≤ 4 and ρ̂(k',z,t)=0 unless 1≤ R|k'|≤ 4. Then, [ ||∇ v^z||_(0,∞)+||(-Δ')^-1/2(∂_t-∂_z^2)v^z||_(0,∞); ≲||f||_(0,∞)+||ϕ||_(0,∞)+||(-Δ')^-1/2∂_tρ||_(0,∞); +||(-Δ')^-1/2∂_z^2ρ||_(0,∞)+||∇ρ||_(0,∞) . ]

* Let u^z, v^z satisfy the problem (<ref>) and assume v^z is horizontally band-limited, i.e. v̂^z(k',z,t)=0 unless 1≤ R|k'|≤ 4. Then, [ ||∂_t u^z||_(0,∞)+||∇^2u^z||_(0,∞)+||(-Δ')^-1/2∂_z(∂_t-∂_z^2)u^z||_(0,∞); ≲ ||∇ v^z||_(0,∞)+||(-Δ')^-1/2(∂_t-∂_z^2)v^z||_(0,∞) .; ]

* Let v',f' satisfy the problem (<ref>) and assume f' is horizontally band-limited, i.e. f̂'(k',z,t)=0 unless 1≤ R|k'|≤ 4. Then, ||∇'∇ v'||_(0,∞)+||(∂_t-∂_z^2)v'||_(0,∞)≲ ||f'||_(0,∞) .
Summing up we obtain [ ||∂_t u^z||_(0,∞)+||∇^2u^z||_(0,∞)+||(∂_t-∂_z^2)u'||_(0,∞)+||∇'∇ u'||_(0,∞); ≲ ||f||_(0,∞)+||(-Δ')^-1/2∂_tρ||_(0,∞)+||(-Δ')^-1/2∂_z^2ρ||_(0,∞)+||∇ρ||_(0,∞) . ] The bound for ∇ p follows by equations (<ref>) and applying (<ref>).

§.§ Proof of Proposition <ref>

This section is devoted to the proof of Proposition <ref>, which relies on a series of Lemmas (Lemma <ref>, Lemma <ref> and Lemma <ref>) that we state here and prove in Section <ref>. The following Lemmas contain the basic maximal regularity estimates for the three auxiliary problems. These estimates, together with the bandedness assumption in the form of (<ref>), (<ref>) and (<ref>), will be the main ingredients for the proof of Proposition <ref>. Let u,f satisfy the problem {[ (∂_z-(-Δ')^1/2)u=f for z>0 ,; u→0 for z→∞ ]. and assume f to be horizontally band-limited, i.e. f̂(k',z,t)=0 unless 1≤ R|k'|≤ 4. Then, ||∇ u||_(0,∞)≲||f||_(0,∞) . Let u,f,g=g(x',t) satisfy the problem {[ (∂_z+(-Δ')^1/2)u=f for z>0 ,; u=g for z=0 ]. and define the constant extension g̃(x',z,t):=g(x',t). Assume f and g to be horizontally band-limited, i.e. f̂(k',z,t)=0 unless 1≤ R|k'|≤ 4 and ĝ(k',t)=0 unless 1≤ R|k'|≤ 4. Then ||∇ u||_(0,∞)≲||f||_(0,∞)+||∇'g̃||_(0,∞) . Clearly if g=0 in Lemma <ref>, then we have ||∇ u||_(0,∞)≲||f||_(0,∞) . Let u,f satisfy the problem {[ (∂_t-Δ) u=f for z>0 ,; u=0 for z=0 ,; u=0 for t=0 ; ]. and assume f to be horizontally band-limited, i.e. f̂(k',z,t)=0 unless 1≤ R|k'|≤ 4. Then, ||(∂_t-∂_z^2)u||_(0,∞)+||∇'∇ u||_(0,∞)≲ ||f||_(0,∞) .

* Subtracting the quantity (∂_z-(-Δ')^1/2)(f^z+∂_zρ) from both sides of equation (<ref>) and then multiplying the new equation by (-Δ')^-1/2 we get (∂_z-(-Δ')^1/2)(-Δ')^-1/2(ϕ-f^z-∂_zρ)= ∇'·(-Δ')^-1/2 f'+f^z-(-Δ')^-1/2∂_tρ+∂_z ρ-(-Δ')^1/2ρ . From the basic estimate (<ref>) we obtain ||∇'(-Δ')^-1/2(ϕ-f^z-∂_zρ)||_(0,∞)≲ ||∇'·(-Δ')^-1/2 f'||_(0,∞)+ ||f^z||_(0,∞)+||(-Δ')^-1/2∂_tρ||_(0,∞)+||∂_z ρ||_(0,∞)+||(-Δ')^1/2ρ||_(0,∞) .
Thanks to the bandedness assumption in the form of(<ref>)and (<ref>) we have||ϕ-f^z-∂_zρ||_(0,∞)≲|| f'||_(0,∞)+||f^z||_(0,∞)+||(-Δ')^-1/2∂_tρ||_(0,∞)+||∂_z ρ||_(0,∞)+||∇'ρ||_(0,∞)and from this we obtain easily the desired estimate (<ref>).* After multiplying the equation(<ref>) by (-Δ')^-1/2,the application of (<ref>) to (-Δ')^-1/2v^z yields ||(-Δ')^-1/2(∂_t-∂_z^2)v^z||_(0,∞)+||(-Δ')^-1/2∇'∇ v^z||_(0,∞)≲||f^z||_(0,∞)+||ϕ||_(0,∞)+||∇'·(-Δ')^-1/2f'||_(0,∞)+ ||(-Δ')^-1/2(∂_t-∂_z^2)ρ||_(0,∞)+||(-Δ')^1/2ρ||_(0,∞) . The estimate (<ref>) follows after observing (<ref>) and applying the triangle inequality to the second to lastterm on the right hand side.* We need to estimate the the three terms on the right hand side of (<ref>) separately. We start with the term ∇^2 u^z:since ||∇^2 u^z||_(0,∞)≤||∇'∇ u^z||_(0,∞)+||∂_z^2 u^z||_(0,∞),we tackle the term ∇'∇ u^z and ∂_z^2 u^z separately. First multiply by ∇' the equation (<ref>).An applicationof the estimate (<ref>) to ∇' u^z yields||∇∇'u^z||_(0,∞)≲ ||∇' v^z||_(0,∞) .Now multiplying the equation (<ref>) by ∂_z^2 ∂_z^2 u^z=-(-Δ')^1/2∂_z u^z+∂_z v^z=-Δ'u^z-(-Δ')^1/2v^z+∂_z v^zand using the bandedness assumption in the form (<ref>) we have [||∂_z^2 u^z||_(0,∞)≤ ||∇'^2 u^z||_(0,∞)+||∇ v^z||_(0,∞);(<ref>)≤||∇ v^z||_(0,∞) . ]The second termof (<ref>), i.e (-Δ')^-1/2∂_z(∂_t-∂_z^2)u^z, can be bounded in the following way: We multiply the equation(<ref>) by (-Δ')^-1/2(∂_t-∂_z^2) {[ (∂_z+(-Δ')^1/2)(-Δ')^-1/2(∂_t-∂_z^2)u^z =(-Δ')^-1/2(∂_t-∂_z^2)v^z forz>0,;(-Δ')^-1/2(∂_t-∂_z^2)u^z =(-Δ')^-1/2∂_zv^z forz=0, ]. where we have usedthat at z=0 (∂_t-∂_z^2)u^z=-∂_z^2 u^z(<ref>)=∂_zv^z.Applying (<ref>) to (-Δ')^-1/2(∂_t-∂_z^2)u^z and using the bandedness assumption in the form of (<ref>), ||∇(-Δ')^-1/2(∂_t-∂_z^2)u^z||_(0,∞)≲ ||(-Δ')^-1/2(∂_t-∂_z^2)v^z||_(0,∞)+||∂_zv^z||_(0,∞) . 
Finally we can bound the last term of (<ref>), i.e ∂_t u^z: We observe that ∂_t u^z=(∂_t -∂_z^2 )u^z+∂_z^2 u^z thus||∂_t u^z||_(0,∞)≤ ||(∂_t -∂_z^2 )u^z||_(0,∞)+||∂_z^2 u^z||_(0,∞) . For the first term in the right hand side of (<ref>) we notice that ||(∂_t-∂_z^2)u^z||_(0,∞)(<ref>)≤ ||(-Δ')^-1/2∇'(∂_t-∂_z^2)u^z||_(0,∞)(<ref>)≲ ||(-Δ')^-1/2(∂_t-∂_z^2)v^z||_(0,∞)+||∂_z v^z||_(0,∞)≲ ||(-Δ')^-1/2(∂_t-∂_z^2)v^z||_(0,∞)+||∇ v^z||_(0,∞) .Thesecond termon the right hand side of (<ref>) is bounded in (<ref>).Thus we have the following bound for ∂_t u||∂_t u^z||_(0,∞)≤ ||(-Δ')^-1/2(∂_t-∂_z^2)v^z||_(0,∞)+||∇ v^z||_(0,∞) .Putting together all the above we obtain the desired estimate.* From the defining equation (<ref>), thebasic estimate (<ref>)and the bandedness assumption in form of (<ref>), we get ||(∂_t-∂_z^2)v'||_(0,∞)+||∇'∇ v'||_(0,∞)≲ ||f'||_(0,∞) .§.§ Proof of Theorem <ref> Let u,p,f be the solutions of the non-stationary Stokes equations in the strip 0<z<1 (<ref>). Then ũ=η u,p̃=η p (with η defined in (<ref>) satisfy (<ref>), namely {[∂_t ũ-Δũ+∇p̃ =f̃forz>0 ,; ∇·ũ =ρ̃ forz>0,; ũ = 0forz=0 ,; ũ = 0 fort=0,; ].where f̃:=η f-2(∂_z η)∂_z u-(∂_z^2η )u+(∂_zη )pe_z, ρ̃:=(∂_zη )u^z .Since, by assumption f,ρare horizontally band-limited ,then also f̃ and ρ̃satisfy the horizontal bandedness assumption (<ref>) and (<ref>) respectively. We can therefore apply Proposition <ref> to the upper half space problem (<ref>) and get||(∂_t -∂_z^2)ũ'||_(0,∞)+ ||∇'∇ũ'||_(0,∞)+||∂_t ũ^z||_(0,∞)+||∇^2 ũ^z||_(0,∞)+||∇p̃||_(0,∞)≲ ||f̃||_(0,∞)+||(-Δ')^-1/2∂_t ρ̃||_(0,∞)+||(-Δ')^-1/2∂_z^2 ρ̃ ||_(0,∞)+||∇ρ̃||_(0,∞) .By symmetry, we also have the same maximal regularity estimates in the lower half space. 
Indeed, letũ̃ ,p̃̃̃ satisfy the equation{[ ∂_t ũ̃-Δũ̃+∇p̃̃̃= f̃̃̃ forz<1 ,; ∇·ũ̃= ρ̃̃̃forz<1,; ũ̃=0 forz=1 ,; ũ̃=0fort=0,;].where f̃̃̃:=(1-η) f-2(∂_z (1-η))∂_z u-(∂_z^2(1-η) )u+(∂_z(1-η) )pe_z, ρ̃̃̃:=(∂_z(1-η) )u^z .Again by Proposition <ref> we have ||(∂_t -∂_z^2)ũ̃'||_(-∞,1)+ ||∇'∇ũ̃'||_(-∞,1)+ ||∂_t ũ̃^z||_(-∞,1)+||∇^2 ũ̃^z||_(-∞,1)+||∇p̃̃̃||_(-∞,1)≲ ||f̃̃̃||_(-∞,1)+||(-Δ')^-1/2∂_t ρ̃̃̃||_(-∞,1)+||(-Δ')^-1/2∂_z^2 ρ̃̃̃ ||_(-∞,1)+||∇ρ̃̃̃||_(-∞,1),where ||·||_(-∞,1) is the analogue of (<ref>) (see Section (<ref>) for notations). Since u=ũ+ũ̃ in the strip [0,L)^d-1× (0,1), by the triangle inequality and using the maximal regularity estimates above, we get||(∂_t -∂_z^2)u'||_(0,1)+||∇'∇ u'||_(0,1)+||∂_t u^z||_(0,1)+||∇^2 u^z||_(0,1)+||∇ p||_(0,1)≲ ||(∂_t -∂_z^2)ũ'||_(0,∞)+||(∂_t -∂_z^2)ũ̃'||_(-∞,1)+||∇'∇ũ'||_(0,∞)+||∇'∇ũ̃'||_(-∞,1)+ ||∂_t ũ^z||_(0,∞)+ ||∂_t ũ̃^z||_(-∞,1)+ ||∇^2 ũ^z||_(0,∞)+ ||∇^2 ũ̃^z||_(-∞,1)+ ||∇p̃||_(0,∞)+||∇p̃̃̃||_(-∞,1)≲ ||f̃||_(0,∞)+||f̃̃̃||_(-∞,1)+||(-Δ')^-1/2∂_t ρ̃||_(0,∞)+||(-Δ')^-1/2∂_t ρ̃̃̃||_(-∞,1)+ ||(-Δ')^-1/2∂_z^2 ρ̃ ||_(0,∞)+||(-Δ')^-1/2∂_z^2 ρ̃̃̃ ||_(-∞,1)+||∇ρ̃||_(0,∞)+||∇ρ̃̃̃||_(-∞,1) .By the definitions of f̃ and f̃̃̃ we get||f̃||_(0,∞)+||f̃̃̃||_(-∞,1)≲ ||f||_(0,1)+||∂_z u||_(0,1)+||u||_(0,1)+||p||_(0,1) and similarly for ρ̃ and ρ̃̃̃ we have||∇ρ̃||_(0,∞)+||∇ρ̃̃̃||_(-∞,1)≲ ||∇ u||_(0,1)+||u||_(0,1) ||(-Δ')^-1/2∂_t ρ̃||_(0,∞)+||(-Δ')^-1/2∂_t ρ̃̃̃||_(-∞,1)≲ || (-Δ')^-1/2∂_t u||_(0,1)and ||(-Δ')^-1/2∂_z^2 ρ̃ ||_(0,∞)+||(-Δ')^-1/2∂_z^2 ρ̃̃̃ ||_(-∞,1)≲ ||(-Δ')^-1/2u^z||_(0,1)+||(-Δ')^-1/2∂_zu^z||_(0,1)+||(-Δ')^-1/2∂^2_zu^z||_(0,1) .Therefore, collecting the estimates, we have ||(∂_t -∂_z^2)u'||_(0,1)+||∇'∇ u'||_(0,1)+||∂_t u^z||_(0,1)+||∇^2 u^z||_(0,1)+||∇ p||_(0,1)≲ ||f||_(0,1)+||p||_(0,1)+||∇ u||_(0,1)+||u||_(0,1)+|| (-Δ')^-1/2∂_t u||_(0,1)+||(-Δ')^-1/2u^z||_(0,1)+||(-Δ')^-1/2∂_zu^z||_(0,1)+||(-Δ')^-1/2∂^2_zu^z||_(0,1) .Incorporating the horizontal bandedness assumption we find ||∂_z u||_(0,1) ≤R||∇'∂_z u||_(0,1) , ||u||_(0,1) 
≤R^2||(∇')^2 u||_(0,1) ,||p||_(0,1) ≤R||∇'p||_(0,1) , ||∇ u||_(0,1) ≤R||∇'∇ u||_(0,1), || (-Δ')^-1/2∂_t u||_(0,1) ≤R || ∂_t u||_(0,1) , ||(-Δ')^-1/2u^z||_(0,1) ≤ R^3||∇'^2u^z||_(0,1) , ||(-Δ')^-1/2∂_zu^z||_(0,1) ≤R^2 ||∇'∂_zu^z||_(0,1) , ||(-Δ')^-1/2∂^2_zu^z||_(0,1) ≤R||∂^2_zu^z||_(0,1) . Thus, for R<R_0 where R_0 is sufficiently small, all the terms in the right hand side, except f can be absorbed into the left hand side and the conclusion follows.§ PROOF OF MAIN TECHNICAL LEMMAS In the proof of Lemma <ref>, Lemma <ref> and Lemma <ref> wewill derive inequalities between quantitieswhere t is integrated between 0 and ∞.From the proof it is clear that the same inequalities are truewith t integrated between 0 and t_0 with constants that are not depending on t_0. Therefore dividing by t_0 and taking lim sup_t_0→∞ (see (<ref>)) we shall obtain the desired estimates in terms of the interpolation norm (<ref>).§.§ Proof of Lemma <ref> In order to simplify the notations, in what follows we will omit the dependency of the functions from the time variable. It is enough to show ||∇' u||_(0,∞)≲ ||f||_(0,∞),since, by equation (<ref>) ∂_z u=(-Δ')^1/2 u+f. We claim that, in order to prove (<ref>), it is enough to showsup_z⟨|∇' u|⟩'≲sup_z ⟨|f|⟩'and ||∇'u||_(0,∞)≲∫⟨|f|⟩'dz/z . Indeed, by definition of the norm ||·||_(0,∞) (see (<ref>))if we select an arbitrary decomposition ∇'u=∇'u_1+∇'u_2, where u_1 and u_2 are solutions of the problem (<ref>) with right hand sides f_1 and f_2 respectively, we have||∇' u||_(0,∞) ≤||∇' u_1||_(0,∞)+sup_z⟨|∇' u_2|⟩'≤ ∫⟨| f_1|⟩'dz/z+sup_z ⟨|f_2|⟩' .Passing to the infimum over all the decompositions of f we obtain ||∇' u||_(0,∞)≲ ||f||_(0,∞).We recall that by Duhamel's principle we have the following representationu(x',z)=∫_z^∞ u_x',z_0(z)dz_0,where u_z_0is the harmonic extension of f(·,z_0) onto {z<z_0}, i.e it solves the boundary value problem{[ (∂_z-(-Δ')^1/2) u_z_0 = 0forz<z_0 ,; u_z_0 = fforz=z_0 .; ]. 
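For orientation, in horizontal Fourier variables the harmonic extension u_z_0 is explicit:

```latex
\widehat{u_{z_0}}(k',z) = e^{-|k'|(z_0-z)}\,\widehat{f}(k',z_0)
\qquad\text{for } z\le z_0 ,
```

indeed ∂_z e^{-|k'|(z_0-z)} = |k'| e^{-|k'|(z_0-z)}, so (∂_z-(-Δ')^1/2)u_z_0=0 for z<z_0 with boundary value f(·,z_0) at z=z_0; inverting the horizontal Fourier transform produces the Poisson kernel used in the computation below.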
Argument for (<ref>): Using the representation of the solution of (<ref>) via the Poisson kernel, i.e u_z_0(x',z)=∫z_0-z/(|x'-y'|^2+(z_0-z)^2)^d/2f(x',z_0) dy' we obtain the following bounds ⟨|∇' u_z_0(·,z)|⟩'≲{[⟨|∇' f(·,z_0)|⟩',; 1/(z_0-z)⟨| f(·,z_0)|⟩',; 1/(z_0-z)^2 ⟨|∇'(-Δ')^-1f(·,z_0)|⟩'.;]. By using the bandedness assumption in the form of (<ref>) and (<ref>), we have ⟨|∇' u_z_0(·,z)|⟩'≲min{1/R, R/(z_0-z)^2}⟨|f(·,z_0)|⟩', hence ⟨|∇' u(·,z)|⟩' ≲ ∫_z^∞min{1/R,R/(z_0-z)^2}⟨|f(·,z_0)|⟩'dz_0≲ sup_z_0∈ (0,∞)⟨|f(·,z_0)|⟩'∫_z^∞min{1/R,R/(z_0-z)^2}dz_0≲ sup_z_0∈ (0,∞)⟨|f(·,z_0)|⟩', which, passing to the supremum in z, implies (<ref>).From the above and applying Fubini's rule, we also have ∫_0^∞⟨|∇' u(·,z)|⟩' dz ≤∫_0^∞∫_z^∞min{1/R,R/(z_0-z)^2}⟨|f(·,z_0)|⟩'dz_0 dz≤∫_0^∞∫_0^z_0min{1/R,R/(z_0-z)^2}dz⟨|f(·,z_0)|⟩'dz_0 ≲∫_0^∞⟨|f(·,z)|⟩' dz. Argument for (<ref>): Let us consider χ_2H≤ z≤ 4Hf where χ_2H≤ z≤ 4H is the characteristic function on the interval [2H,4H] and let u_H be the solution to(∂_z-(-Δ')^1/2)u_H=χ_2H≤ z≤ 4Hf. We claim sup_z≤ H⟨|∇' u_H|⟩'≤∫_0^∞⟨|χ_2H≤ z≤ 4Hf|⟩'dz/z and ∫_H^∞⟨|∇' u_H|⟩'dz/z≤∫_0^∞⟨|χ_2H≤ z≤ 4Hf|⟩'dz/z .From estimate (<ref>) and (<ref>) the statement (<ref>) easily follow. Indeed, choosing H=2^n-1 and summing up over the dyadic intervals, we have||∇'u|| ≤ ∑_n∈||∇'u_2^n-1||_(0,∞)≤ sup_z≤ 2^n-1⟨|∇' u_2^n-1|⟩'+∫_2^n-1^∞⟨|∇' u_2^n-1|⟩'dz/z≤ ∑_n∈∫_0^∞⟨| χ_2^n≤ z≤ 2^n+1f|⟩'dz/z= ∫_0^∞⟨|f|⟩'dz/z . Argument for (<ref>): Fix z≤ H. Then, we have⟨|∇'u_H|⟩' (<ref>)≤ ∫_z^∞1/(z_0-z)⟨|χ_2H≤ z≤ 4Hf(·,z_0)|⟩' dz_0≲ ∫_2H^4H1/(z_0-z)⟨|χ_2H≤ z≤ 4Hf(·,z_0)|⟩' dz_0≲ 1/H∫_2H^4H⟨|χ_2H≤ z≤ 4Hf(·,z_0)|⟩' dz_0≤ ∫_2H^∞⟨|χ_2H≤ z≤ 4Hf(·,z_0)|⟩' dz_0/z_0≤ ∫_0^∞⟨|χ_2H≤ z≤ 4Hf(·,z_0)|⟩' dz_0/z_0 .Taking the supremum over all z proves (<ref>).Argument for (<ref>): For z≥ H we have ∫_H^∞⟨|∇'u_H|⟩'dz/z ≲ 1/H∫_0^∞⟨|∇'u_H|⟩' dz(<ref>)≲ 1/H∫_0^∞⟨|χ_2H≤ z≤ 4Hf|⟩' dz= 1/H∫_2H^4H⟨|χ_2H≤ z≤ 4Hf|⟩' dz≲ ∫_0^∞⟨|χ_2H≤ z≤ 4Hf|⟩'dz/z . §.§ Proof of Lemma <ref> Let us first assume g=0. 
It is enough to show sup_z⟨|∇'u|⟩'≲sup_z⟨|f|⟩' and ∫_0^∞⟨|∇'u|⟩'dz/z≲∫_0^∞⟨|f|⟩'dz/z . Recall that by Duhamel's principle we have the following representation u(z)=∫_0^z u_z_0(·,z)dz_0, where u_z_0 is the harmonic extension of f(·,z_0) onto {z>z_0}, i.e. it solves the boundary value problem {[ (∂_z+(-Δ')^1/2) u_z_0 = 0 for z>z_0 ,; u_z_0 = f for z=z_0 .; ]. From the Poisson kernel representation we learn that ⟨|∇' u_z_0(·,z)|⟩'≲⟨|∇' f(·,z_0)|⟩' , 1/(z-z_0)^2⟨|∇'(-Δ')^-1f(·,z_0)|⟩' . Using the bandedness assumption in the form of (<ref>) and (<ref>), ⟨|∇' u_z_0(·,z)|⟩'≲min{1/R,R/(z-z_0)^2}⟨| f(·,z_0)|⟩' and observing (<ref>), we obtain [ ⟨|∇' u(·,z)|⟩' ≲∫_0^zmin{1/R,R/(z-z_0)^2}⟨| f(·,z_0)|⟩' dz_0; ≤ sup_z_0⟨| f(·,z_0)|⟩'∫_0^zmin{1/R,R/(z-z_0)^2} dz_0; ≲ sup_z_0⟨| f(·,z_0)|⟩' . ] Estimate (<ref>) follows from (<ref>) by passing to the supremum in z. From the above (<ref>), multiplying by the weight 1/z and observing that z>z_0, we have ⟨|∇' u(·,z)|⟩'1/z≲∫_0^zmin{1/R,R/(z-z_0)^2}⟨| f(·,z_0)|⟩'dz_0/z_0 . After integrating in z∈ (0,∞) and applying Young's estimate we get (<ref>). Let us now consider the general case g≠ 0. We want to prove (<ref>). Recall that by definition g̃(x',z):=g(x') and consider u-g̃. By construction it satisfies {[ (∂_z+(-Δ')^1/2)(u-g̃)=f-(-Δ')^1/2g̃ for z>0 ,; u-g̃=0 for z=0 .; ]. Using the first part of the proof of (<ref>) and the triangle inequality, we have ||∇ u||_(0,∞)≲ ||∇g̃||_(0,∞)+||f||_(0,∞)+||(-Δ')^1/2g̃||_(0,∞) . Therefore by the bandedness assumption in the form of (<ref>) we can conclude (<ref>).
In order to bound the off-diagonal components of the Hessian, we consider the decomposition u=u_N+u_C, where u_N solves {[(∂_t-Δ)u_N = fforz>0 ,; ∂_z u_N = 0 for z=0 ,; u_N = 0 for t=0 ,; ]. and u_C solves {[(∂_t-Δ)u_C = 0forz>0 ,; ∂_z u_C = ∂_z u for z=0 ,; u_C = 0 for t=0 .; ]. The splitting (<ref>) is valid by the uniqueness of the Neumann problem. For the auxiliary problems(<ref>) and (<ref>) we havethe following bounds ⟨∫|∇'∂_z u_N(·,z,·)|dz/z⟩≲⟨∫|f(·,z,·)| dz/z⟩ , ⟨sup_z|∇'∂_z u_C(·,z,·)|⟩≲⟨|∇'∂_z u(·,z,·)|_z=0⟩ . We claim that estimates (<ref>), (<ref>),(<ref>), (<ref>), (<ref>) and (<ref>)yield (<ref>). Let us first consider the bound for ∇'^2. Consider u=u_1+u_2, where u_1 and u_2 satisfy (<ref>) with right hand side f_1 and f_2 respectively. We have ||∇'^2 u||_(0,∞) ≲ ⟨sup_z|∇'^2u_1|⟩+⟨∫|∇'^2 u_2|dz/z⟩ (<ref>) & (<ref>)≲ ⟨sup_z|f_1|⟩+⟨∫|f_2|dz/z⟩, which implies, upon taking infimum over all decompositions f=f_1+f_2 ||∇'^2 u||_(0,∞)≲ ||f||_(0,∞). We now consider a further decomposition of u_2 , i.e u_2=u_2C+u_2N where u_2C satisfies (<ref>) and u_2N satisfies (<ref>). Therefore u=u_1+u_2C+u_2N and we can bound the off-diagonal components of the Hessian ||∇'∂_z u||_(0,∞) ≲ ⟨sup_z|∇'∂_z u_1|⟩+⟨sup_z|∇'∂_zu_2C|⟩+⟨∫|∇'∂_z u_2N|dz/z⟩(<ref>),(<ref>),(<ref>) & (<ref>)≲ ⟨sup_z|f_1|⟩+⟨∫|f_2|dz/z⟩ . From the last inequality, passing to the infimum over all the possible decompositions of f we get ||∇'∂_z u||_(0,∞)≲ ||f||_(0,∞). On one hand estimate (<ref>) and (<ref>) imply ||∇∇' u||_(0,∞)≲||∇'^2 u||_(0,∞)+||∇'∂_z u||_(0,∞) , on the other hand equation (<ref>) andestimate (<ref>) yield ||(∂_t-∂_z^2) u||_(0,∞)≲ ||f||_(0,∞) . Argument for (<ref>) Let u be a solution of problem of (<ref>).Keeping in mind Remark (<ref>) it is enough to show ∫_0^∞∫_0^∞⟨|∇'^2u|⟩'dz/zdt≲∫_0^∞∫_0^∞⟨|f|⟩'dz/zdt. 
By the Duhamel's principle we have u(x',z,t)=∫_s=0^t u_s(x',z,t)ds , where u_s is the solution to the homogeneous, initial value problem {[ (∂_t-Δ)u_s=0 for z>0, t>s ,;u_s=0 for z=0, t>s ,;u_s=f for z>0, t=s .;]. Extending u and f to the whole space by odd reflection [with abuse of notation we will call again u and f these extensions.], we are left to study the problem {[(∂_t-Δ)u_s = 0forz∈, t>s ,; u_s = f forz∈,t=s ,; ]. the solution of which can be represented via heat kernel as [u_s(x',z,t)= ∫_Γ(·,z-z̃, t-s)∗_x'f(·,z̃,s)dz̃; = ∫_0^∞[Γ(·,z-z̃,t-s)-Γ ( ·,z+z̃,t-s)]∗_x'f(·,z̃,s)dz̃ .;] The application of∇'^2 to the representation above yields ∇'^2u_s(x',z,t)= ∫_0^∞∫_^d-1∇'Γ_d-1(x'-x̃'̃,t-s)(Γ_1(z-z̃,t-s)-Γ_1( z+z̃,t-s))∇'f(x̃'̃,z̃,s)dx̃'̃dz̃ , ∫_0^∞∫_^d-1∇'^3Γ_d-1(x'-x̃'̃,t-s)(Γ_1(z-z̃,t-s)-Γ_1 (z+z̃,t-s))(-Δ')^-1∇'f(x̃'̃,z̃,s)dx̃'̃dz̃ . Averaging in the horizontal direction we obtain, on the one hand ⟨|∇'^2u_s(·,z,t)|⟩'≲ ∫_0^∞⟨|∇'Γ_d-1 (·,t-s)|⟩'|Γ_1(z-z̃,t-s)-Γ_1 ( z+z̃,t-s)|⟨|∇'f(·,z̃,s)|⟩' dz̃(<ref>)&(<ref>)≲ ∫_0^∞1/(t-s)^1/2|Γ_1(z-z̃,t-s)-Γ_1 ( z+z̃,t-s)|1/R⟨|f(·,z̃,s)|⟩' dz̃ and, on the other hand ⟨|∇'^2u_s(·,z,t)|⟩'≲ ∫_0^∞⟨|∇'^3Γ_d-1 (·,t-s)⟩'|Γ_1(z-z̃,t-s)-Γ_1( z+z̃,t-s)|⟨|(-Δ')^-1∇'f(·,z̃,s)|⟩'dz̃ (<ref>)&(<ref>)≲ ∫_0^∞|Γ_1(z-z̃,t-s)-Γ_1( z+z̃,t-s)|1/(t-s)^3/2 R⟨|f(·,z̃,s)|⟩'dz̃ . Multiplying by the weight 1/z and integrating in z∈(0,∞) we get ∫_0^∞⟨|∇'^2u_s(·,t)|⟩'dz/z≲(sup_z̃∫_0^∞K_t-s(z,z̃) dz) 1/(t-s)^1/21/R∫_0^∞⟨|f(·,z̃,s)|⟩' dz̃/z̃ , R/(t-s)^3/2∫_0^∞⟨ |f(x',z̃,s)|⟩'dz̃/z̃ , where we called K_t-s(z,z̃)=z̃/z|Γ_1(z-z̃,t-s)-Γ_1( z+z̃,t-s)|. From Lemma <ref> we infer sup_z̃∫_0^∞K_t-s(z,z̃) dz(<ref>)≲∫_|Γ_1(z,t-s)|dz+sup_z∈(z^2|∂_zΓ_1(z,t-s)|)(<ref>)&(<ref>)≲1 and therefore we have ∫_0^∞⟨|∇'^2u_s(·,z,t)|⟩'dz/z≲1/(t-s)^1/21/R∫_0^∞⟨|f(·,z̃,s)|⟩' dz̃/z̃ , 1/(t-s)^3/2 R ∫_0^∞⟨ |f(·,z̃,s)|⟩'dz̃/z̃ . 
Finally, inserting the previous estimate into the Duhamel formula (<ref>) and integrating in time we get ∫_0^∞⟨|∇'^2 u(·,z,t)|⟩'dz/z dt (<ref>)≲ ∫_0^∞∫_0^t⟨|∇'^2 u_s(·,z,t)|⟩' dz/z ds dt≲ ∫_0^∞∫_s^∞min{1/R(t-s)^1/2,R/(t-s)^3/2}∫_0^∞⟨|f(·,z̃,s)|⟩'dz̃/z̃ dt ds≲ ∫_0^∞∫_s^∞min{1/R(t-s)^1/2,R/(t-s)^3/2}dt∫_0^∞⟨|f(·,z̃,s)|⟩'dz̃/z̃ ds ≲ ∫_0^∞∫_0^∞min{1/Rτ^1/2,R/τ^3/2}dτ∫_0^∞⟨|f(·,z̃,s)|⟩'dz̃/z̃ ds,≲ ∫_0^∞∫_0^∞⟨|f(·,z̃,s)|⟩'dz̃/z̃ ds, where in the second to last inequality we used ∫_0^∞min{1/Rτ^1/2,R/τ^3/2}dτ≲ 1 . Argument for (<ref>): Let u be a solution of problem of (<ref>). Recall that we need to prove ∫_0^∞⟨|∇'∂_z u|_z=0(·,z,t)|⟩' dt≲∫_0^∞∫_0^∞⟨|f(·,z,t)|⟩' dtdz/z . The solution of the equation (<ref>) extended to the whole space by odd reflection can be represented by (<ref>) (see argument for (<ref>)). Therefore ∇'∂_zu_s(x',z,t)|_z=0= -2∫_^d-1∫_0^∞Γ_d-1(x'-x̃'̃,t-s)∂_zΓ_1(z̃,t-s)∇'f(x̃'̃,z̃,s)dx̃'̃dz̃ , -2∫_^d-1∫_0^∞∇'Γ_d-1(x'-x̃'̃,t-s) ∂_zΓ_1(z̃,t-s)∇'(-Δ')^-1∇'f(x̃'̃,z̃,s) dx̃'̃dz̃ . Taking the horizontal average we get, on the one hand ⟨|∇'∂_zu_s(·,z,t)|_z=0|⟩'≲ ∫_0^∞⟨|Γ_d-1(·,t-s)|⟩'|∂_z Γ_1(z̃,t-s)|⟨|∇'f(·,z̃,s)|⟩' dz̃(<ref>)≲ ∫_0^∞|∂_z Γ_1(z̃,t-s)|⟨|∇' f(·,z̃,s)|⟩'dz̃(<ref>)≲ 1/R∫_0^∞|∂_z Γ_1(z̃,t-s)|⟨|f(·,z̃,s)|⟩'dz̃≲ 1/Rsup_z̃|z̃∂_zΓ_1(z̃,t-s)|∫_0^∞⟨|f(·,z̃,s)|⟩'dz̃/z̃ and on the other hand ⟨|∇'∂_zu_s(·,z,t)|_z=0|⟩'≲ ∫_0^∞⟨|(∇')^2Γ_d-1(·,t-s)|⟩'|∂_z Γ_1(z̃,t-s)|⟨|(-Δ')^-1∇'f(·,z̃,s)|⟩' dz̃(<ref>)≲ 1/(t-s)∫_0^∞|∂_z Γ_1(z̃,t-s)|⟨|(-Δ')^-1∇' f(·,z̃,s)|⟩'dz̃(<ref>)≲ R/(t-s)∫_0^∞|∂_z Γ_1(z̃,t-s)|⟨|f(·,z̃,s)|⟩'dz̃≲ R/(t-s)sup_z̃|z̃∂_zΓ_1(z̃,t-s)|∫_0^∞⟨|f(·,z̃,s)|⟩'dz̃/z̃ . Using the estimate (<ref>) we get ⟨|∇'∂_zu_s(x',z,t)|_z=0|⟩' ≲1/(t-s)^1/2R∫_0^∞⟨|f(·,z̃,s)|⟩'dz̃/z̃ , R/(t-s)^3/2∫_0^∞⟨|f(·,z̃,s)|⟩'dz̃/z̃ . 
Finally,inserting into Duhamel's formula and integrating in time we have∫_0^∞⟨|∇'∂_z u(·,z,t)|_z=0⟩' dt (<ref>)≲ ∫_0^∞∫_0^t⟨|∇'∂_z u_s(·,z,t)|_z=0⟩' ds dt≲ ∫_0^∞∫_s^∞min{1/R(t-s)^1/2,R/(t-s)^3/2}∫_0^∞⟨|f(·,z̃,s)|⟩'dz̃/z̃ dt ds(<ref>)& (<ref>)≲ ∫_0^∞∫_0^∞⟨|f(x',z,s)|⟩'dz̃/z̃ ds. Argument for (<ref>):Let u be the solution of problem (<ref>).We recall that we want to prove sup_z∫_0^∞⟨|∇'^2u(·,z,t)|⟩'dt≲sup_z∫_0^∞⟨|f(·,z,t)|⟩'dt. The solution of equation (<ref>) extended to the whole spacecan be represented by (<ref>) (see argument for (<ref>)). Therefore applying ∇'^2 to (<ref>) and considering the horizontal average we have, on the one hand ⟨|∇'^2u_s(·,z,t)|⟩'≲ ∫_⟨|∇'Γ_d-1(·,t-s)|⟩'|Γ_1(z-z̃,t-s)|⟨|∇'f(·,z̃,s)|⟩' dz̃(<ref>)& (<ref>)≲ ∫_1/(t-s)^1/2|Γ_1(z-z̃,t-s)|1/R⟨|f(·,z̃,s)|⟩' dz̃and on the other hand⟨|∇'^2u_s(·,z,t)|⟩'≲ ∫_⟨|∇'^3Γ_d-1(·,t-s)|⟩'|Γ_1(z-z̃,t-s)|⟨|(-Δ')^-1∇'f(·,z̃,s)|⟩' dz̃ (<ref>) & (<ref>)≲ ∫_1/(t-s)^3/2|Γ_1(z-z̃,t-s)|R⟨|f(·,z̃,s)|⟩' dz̃ . Inserting the above estimates in the Duhamel's formula (<ref>), we have∫_0^∞∫_0^t ⟨|∇'^2u_s(z,·)|⟩'dsdt≲ ∫_0^∞∫_s^∞min{1/R(t-s)^1/2,R/(t-s)^3/2}∫_|Γ_1(z-z̃,t-s)|⟨|f(·,z̃,s)|⟩'dz̃ dsdt≲ ∫_(∫_0^∞min{1/R τ^1/2,R/τ^3/2}|Γ_1(z-z̃,τ)|dτ)∫_0^∞⟨|f(·,z̃,s)|⟩'dsdz̃≲ sup_z̃∫_0^∞⟨|f(·,z̃,s)|⟩'ds∫_∫_0^∞min{1/R τ^1/2,R/τ^3/2}|Γ_1(z-z̃,τ)|dτ dz̃(<ref>)≲ sup_z̃∫_0^∞⟨|f(·,z̃,s)|⟩'ds∫_0^∞min{1/R τ^1/2,R/τ^3/2}dτ∫_|Γ_1(z-z̃,τ)|dz̃(<ref>)≲ sup_z̃∫_0^∞⟨|f(·,z̃,s)|⟩'ds . Taking the supremum in z we obtain the desired estimate. Argument for (<ref>): Let u be the solution of problem (<ref>). We claimsup_z∫_0^∞⟨|∇'∂_zu|⟩'dt≲sup_z∫_0^∞⟨|f|⟩'dt. The solution of the equation (<ref>) extended to the whole spacecan be represented by (see argument for (<ref>)) u_s(x',z,t) =∫_Γ(·,z-z̃, t-s)∗_x'f(·,z̃,s)dz̃ . 
Applying ∇'∂_z and considering the horizontal average we obtain, on the one hand ⟨|∇'∂_z u_s(·,z,t)|⟩'≲ ∫_⟨|Γ_d-1(·,t-s)|⟩'|∂_zΓ_1(z-z̃,t-s)|⟨|∇'f(·,z̃,s)|⟩' dz̃(<ref>)≲ ∫_|∂_zΓ_1(z-z̃,t-s)|1/R⟨|f(·,z̃,s)|⟩' dz̃ and, on the other hand ⟨|∇'∂_z u_s(·,z,t)|⟩'≲ ∫_⟨|∇'^2Γ_d-1(·,t-s)|⟩'|∂_zΓ_1(z-z̃,t-s)|⟨|(-Δ')^-1∇'f(·,z̃,s)|⟩' dz̃(<ref>)≲ ∫_1/(t-s)|∂_zΓ_1(z-z̃,t-s)|R⟨|f(·,z̃,s)|⟩' dz̃ . Inserting the above estimates in the Duhamel's formula (<ref>), we have∫_0^∞∫_0^t ⟨|∇'∂_z u_s(z,·)|⟩'dsdt≲ ∫_0^∞∫_s^∞min{1/R,R/(t-s)}∫_|∂_zΓ_1(z-z̃,t-s)|⟨|f(·,z̃,s)|⟩'dz̃ dtds≲ ∫_(∫_0^∞min{1/R,R/τ}|∂_zΓ_1(z-z̃,τ)|dτ)∫_0^∞⟨|f(·,z̃,s)|⟩'dsdz̃≲ sup_z̃∫_0^∞⟨|f(·,z̃,s)|⟩'ds∫_∫_0^∞min{1/R ,R/τ}|∂_zΓ_1(z-z̃,τ)|dτ dz̃(<ref>)≲ sup_z̃∫_0^∞⟨|f(·,z̃,s)|⟩'ds∫_0^∞min{1/R τ^1/2,R/τ^3/2}dτ(<ref>)≲ sup_z̃∫_0^∞⟨|f(·,z̃,s)|⟩'ds . Taking the supremum in z we obtain the desired estimate. Argument for (<ref>)We recall that we want to show ∫_0^∞∫_0^∞⟨|∇'∂_zu_N|⟩'dz/zdt≲∫_0^∞∫_0^∞⟨|f|⟩'dz/zdt , where u_N be the solution to the non-homogeneous heat equation with Neumann boundary conditions (<ref>). By the Duhamel's principle we have u_N(x',z,t)=∫_s=0^t u_N_s(x',z,t)ds, whereu_N_s is solution to {[ (∂_t-Δ)u_N_s=0 for z>0, t>s ,; ∂_zu_N_s=0forz=0, t>s ,;u_N_s=fforz>0, t=s ,;].is the solution of problem (<ref>). Extending this equation to the whole space by even reflection [With abuse of notation we will denote with u_N_s and f their even reflection], we are left to study the problem {[ (∂_t-Δ)u_N_s=0 forz∈, t>s ,;u_N_s=ffor t=s ,;]. the solution of which can berepresented via heat kernel as u_N_s(x',z,t) =∫_Γ(·,z-z̃, t-s)∗_x'f(·,z̃,s)dz̃=∫_0^∞[Γ(·,z̃+z,t-s)+Γ (·,z̃-z,t-s)]∗_x'f(·,z̃,s)dz̃ . 
Applying ∇'∂_z to the representation above ∇'∂_zu_N_s(x',z,t)= ∫_0^∞∫_^d-1Γ_d-1(x'-x̃'̃,t-s)(∂_zΓ_1(z̃+z,t-s)-∂_zΓ_1 ( z̃-z,t-s))∇'f(x̃'̃,z̃,s)dx̃'̃dz̃ , ∫_0^∞∫_^d-1∇'^2Γ_d-1(x'-x̃'̃,t-s)(∂_zΓ_1(z̃+z,t-s)-∂_zΓ_1 (z̃-z,t-s))(-Δ')^-1∇'f(x̃'̃,z̃,s)dx̃'̃dz̃ and averaging in the horizontal direction we obtain, on the one hand ⟨|∇'∂_zu_N_s(·,z,t)|⟩'≲ ∫_0^∞⟨|Γ_d-1(·,t-s)|⟩'|∂_zΓ_1(z̃+z,t-s)-∂_zΓ_1 (z̃-z,t-s)|⟨|∇'f(·,z̃,s)|⟩' dz̃(<ref>)&(<ref>)≲ 1/R∫_0^∞|∂_zΓ_1(z̃+z,t-s)-∂_zΓ_1 (z̃-z,t-s)|⟨|f(·,z̃,s)|⟩' dz̃ and, on the other hand ⟨|∇'∂_zu_N_s(·,z,t)|⟩'≲ ∫_0^∞⟨|∇'^2Γ_d-1(·,t-s)|⟩'|∂_zΓ_1(z̃+z,t-s)-∂_zΓ_1(z̃-z,t-s)|⟨|(-Δ')^-1∇'f(·,z̃,s)⟩'dz̃(<ref>)&(<ref>)≲ R/(t-s)∫_0^∞|∂_zΓ_1(z̃+z,t-s)-∂_zΓ_1(z̃-z,t-s)|⟨|f(·,z̃,s)|⟩'dz̃ . Multiplying by the weight 1/z and integrating in z∈(0,∞) we get ∫_0^∞⟨|∇'∂_zu_N_s(·,z,t)|⟩'dz/z≲sup_z̃∫_0^∞K_t-s(z,z̃) dz 1/R∫_0^∞⟨|f(·,z̃,s)|⟩' dz̃/z̃ , 1/(t-s) R∫_0^∞⟨ |f(·,z̃,s)|⟩'dz̃/z̃ , where we called K_t-s(z,z̃)=z̃/z|∂_zΓ_1(z̃-z,t-s)-∂_zΓ_1( z+z̃,t-s)|. Recalling sup_z̃∫_0^∞K_t-s(z,z̃) dz(<ref>)≲∫_|∂_zΓ_1(z,t-s)|dz+sup_z∈(z^2|∂^2_zΓ_1(z,t-s)|) and observing that, in this case ∫_|∂_z Γ_1(z,t-s)|dz+sup_z∈(z^2|∂_zΓ_1(z,t-s)|)(<ref>)&(<ref>)≲1/(t-s)^1/2 , we can conclude that ∫_0^∞⟨|∇'∂_zu_N_s(·,t)|⟩'dz/z≲1/(t-s)^1/21/R∫_0^∞⟨|f(·,z̃,s)|⟩' dz̃/z̃ 1/(t-s)^3/2 R ∫_0^∞⟨ |f(·,z̃,s)|⟩'dz̃/z̃. Finally,inserting (<ref>) and integrating in time we have∫_0^∞∫_0^∞⟨|∇'∂_z u_N_s(·,z,t)|⟩'dz/z dt (<ref>)≲ ∫_0^∞∫_0^∞∫_0^t⟨|∇'∂_z u_N_s(·,z̃,t)|⟩' dz/z ds dt≲ ∫_s^∞∫_0^∞min{1/R(t-s)^1/2,R/(t-s)^3/2}∫_0^∞⟨|f(·,z̃,s)|⟩'dz̃/z̃ ds dt(<ref>)&(<ref>)≲ ∫_0^∞∫_0^∞⟨|f(·,z̃,s)|⟩'dz̃/z̃ ds . Argument for (<ref>):Recall that we need to prove sup_z∫_0^∞|∇'∂_zu_C|dt≲⟨|∇'∂_zu|_z=0⟩' .Byequation (<ref>), the even extension u_C satisfies (∂_t-Δ)u_C=-[∂_zu_C]δ_z=0=-2∂_z u_Cδ_z=0=-2∂_z u|_z=0δ_z=0 and therefore we study the following problem on the whole space {[ (∂_t-Δ)u_C=-2∂_z u|_z=0δ forz∈, t>0 ,;u_C=0for t=0 .;]. 
By Duhamel's principle u_C(x',z,t)=∫_s=0^tu_C_s(x',z,t)ds , where u_C_s solves the initial value problem {[(∂_t-Δ)u_C_s = 0 for z∈, t>s ,; u_C_s = -2∂_z u|_z=0δ for z∈, t=s .; ]. The solution of problem (<ref>) can be represented via the heat kernel as u_C_s(x',z, t) = ∫Γ(z-z̃,t-s)∗_x'(-2∂_z u|_z=0δ)(z̃,s) dz̃,= -2Γ(z,t-s)∗_x'∂_z u(z, s)|_z=0 . We apply ∇'∂_z to the representation above ∇'∂_zu_C_s(x',z,t)= ∫_^d-1-2Γ_d-1(x'-x̃'̃,t-s)∂_zΓ_1(z,t-s)∇'∂_z u(·,z, s)|_z=0dx̃'̃ and then average in the horizontal direction, ⟨|∇'∂_zu_C_s(x',z,t)|⟩'≲ ⟨|Γ_d-1(x',t-s)|⟩'|∂_zΓ_1(z,t-s)|⟨|∇'∂_z u(·,z, s)|_z=0|⟩'(<ref>)≲ |∂_zΓ_1(z,t-s)|⟨|∇'∂_z u(x̃'̃,z, s)|_z=0|⟩' . Inserting the previous estimate in Duhamel's formula (<ref>) and integrating in time we get ∫_0^∞⟨|∇'∂_zu_C(x',z,t)|⟩'dt≤ ∫_0^∞∫_0^t⟨|∇'∂_zu_C_s(x',z,t)|⟩'dsdt≲ ∫_0^∞∫_s^∞|∂_zΓ_1(z,t-s)|dt⟨|∇'∂_z u(x̃'̃,z, s)|_z=0|⟩' ds(<ref>)≲ ∫_0^∞⟨|∇'∂_z u(x̃'̃,z, s)|_z=0|⟩' ds . The estimate (<ref>) follows immediately after passing to the supremum in (<ref>). § APPENDIX §.§ Preliminaries We start this section by proving some elementary bounds and equivalences, coming directly from the definition of horizontal bandedness (<ref>). These will turn out to be crucial in the proof of the main result. a) If r(k',z,t)=0 unless R|k'|≥ 4, then ⟨|r(·,z,t)| ⟩'≤ R⟨|∇' r(·,z,t)|⟩' . In particular ||r||_(0,∞)≤ R ||∇' r||_(0,∞) . b) If r(k',z,t)=0 unless R|k'|≤ 1, then ⟨|∇'r(·,z,t)| ⟩'≤1/R⟨| r(·,z,t)|⟩' . In particular ||∇'r||_(0,∞)≤1/R ||r||_(0,∞) . c) If r(k',z,t)=0 unless 1≤ R|k'|≤ 4, then ||∇'(-Δ')^-1/2r||_(0,∞)∼ ||r||_(0,∞) , and ||(-Δ')^1/2r||_(0,∞)∼||∇' r||_(0,∞) . All the results stated in Lemma <ref> are valid with the norm ||·||_(0,∞) replaced with ||·||_(0,1). Notice that from (<ref>) and (<ref>), it follows ||∇'(-Δ')^-1∇'· r||_(0,∞)≲ ||r||_(0,∞) . a) By rescaling we may assume R=1. Let ϕ∈ (^d-1) be a Schwartz function such that ϕ(k')= 0|k'|≥ 1 1 |k'|≤ 1 and such that ∫_^d-1ϕ(x')dx'=1.
We claim that, under assumption (<ref>), there exists ψ∈ L^1(^d-1) such that (Id-ϕ∗')r= ψ∗'∇ r . Since r=r-ϕ∗ r, if we assume (<ref>) the conclusion follows from Young's inequality ∫_^d-1|r(x',z)|dx'≤∫_^d-1|ψ(x')|dx'∫_^d-1|∇ r(x',z)|dx' . Argument for (<ref>): Using the assumptions on ϕ and performing suitable change of variables, we find r(x',z)-∫ϕ(x'-y')r(y',z) dy'= ∫ϕ(x'-y')(r(x',z)-r(y',z)) dy'= ∫_^d-1ϕ(x'-y')∫_0^1 (x'-y')∇' r(tx'+(t-1)(x'-y'), z) dy'dt= ∫_0^1∫_^d-1ϕ(ξ)∇'r(x'+(t-1)ξ,z)·ξ dξ dt= ∫_0^1∫_^d-1ϕ(ŷ'-x'/t)∇ r(ŷ',z)·ŷ'-x'/t dt 1/t^d-1dŷ'= ∫_^d-1∇' r(ŷ',z)·(∫_0^1ϕ(ŷ'-x'/t)ŷ'-x'/t^d dt) dŷ'= ∫_^d-1∇' r(ŷ',z)ψ(ŷ'-x'/t)dŷ', where ψ(x')=∫_0^1ϕ(-x'/t)x'/t^ddt . We notice that ψ∈ L^1(^d-1), in fact ∫_^d-1|ψ(x')|dx'≤∫_0^1 ∫_^d-1|ϕ(x'/t)x'/t^d| dx' dt=∫_^d-1|ϕ(ξ)ξ|dξ .b) In Fourier space we have ∇'r(k',z)=ik' r(k',z)=R^-1 G(Rk') r(k',z)=R^-1 G_R(k') r(k',z), where G is a Schwartz function and G_R(x')=R^-d G(x'/R). Since ∫ |G_R|dx'=∫ |G| dx' is independent of R, we may conclude by Young ∫ |∇' r| dx'≤1/R∫ |G_R| dx'∫ |r| dx'≲1/R∫ |r| dx' . 
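As a quick numerical sanity check of the elementary Gaussian-kernel bounds invoked throughout the Duhamel arguments above, such as ∫_ℝ|∂_zΓ_1(z,t)|dz≲1/t^1/2 and sup_z(z^2|∂_zΓ_1(z,t)|)≲1, one can evaluate both quantities on a grid for several times t. The grid sizes and tolerances below are ad-hoc choices for illustration, not part of the proof:

```python
import numpy as np

def dz_gamma1(z, t):
    # z-derivative of the 1d heat kernel Gamma_1(z,t) = t^(-1/2) exp(-z^2/(4t))
    return -z / (2 * t**1.5) * np.exp(-z**2 / (4 * t))

for t in [0.1, 1.0, 10.0]:
    z = np.linspace(-60 * np.sqrt(t), 60 * np.sqrt(t), 400001)
    dz = z[1] - z[0]
    g = np.abs(dz_gamma1(z, t))
    l1 = g.sum() * dz        # Riemann sum for the L^1 norm; analytically 2 t^(-1/2)
    peak = np.max(z**2 * g)  # sup_z z^2 |dz Gamma_1|; a t-independent constant
    print(t, l1 * np.sqrt(t), peak)
```

In each case l1·t^1/2 stays at the constant 2 and the weighted supremum stays at 4(3/2)^3/2 e^-3/2 ≈ 1.64, independently of t, consistent with the scalings used in the interpolation argument.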
Here we prove an elementary estimate that will be applied in the argument for (<ref>) and (<ref>), Lemma <ref>Let K=K(z) be a real function and define K(z,z̃)=z̃/z|K(z̃- z)-K( z+z̃)| .Thensup_z̃∫_0^∞K(z,z̃) dz≲∫_|K(z)|dz+sup_z∈(z^2|∂_zK(z)|) .Let us distinguish two regions:1/2|z̃/z|<1 and 1/2|z̃/z|>1.For |z|≥1/2|z̃| we havesup_z̃∫_|z|≥1/2|z̃||K(z,z̃)|dz ≤ max_z̃∫_|z|≥1/2|z̃||K(z̃-z)-K(z+z̃)|dz≲∫|K(z)|dz .While for the region |z|≤1/2|z̃| we have,max_z̃|z̃|∫_|z|≤1/2|z̃|1/|z||K(z̃-z)-K(z+z̃)|dz= max_z̃|z̃|∫_|z|≤1/2|z̃|1/|z||∫_-1^1K'(z̃+t z) zdt|dz≤ max_z̃|z̃|∫_-1^11/t∫_|z|≤t/2|z̃||K'(z̃+ z) |dzdt 1/2|z̃|≤ |z̃+z|≤ max_z̃∫_-1^11/t∫_|z|≤t/2|z̃|2|z̃+z||K'(z̃+ z)|dt dz≤ max_z̃∫_-1^12/tmax_|z|≤t/2|z̃|{|z̃+z||K'(z̃+ z)|}(∫_|z|≤t/2|z̃| dz)dt= max_z̃∫_-1^11/tmax_|z|≤t/2|z̃|{|z̃+z||K'(z̃+ z)|} t|z̃|dt= 2max_z̃|z̃|max_|z|≤t/2|z̃|{|z̃+z||K'(z̃+ z)|}1/2|z̃|≤ |z̃+z|≤ 4max_z̃max_|z|≤t/2|z̃|{| z+z̃|^2|K'(z̃+ z)|} .In conclusion we have max_ z∫|K̅(z,z̃)|dz≲∫|K(z)|dz+max_ z|z|^2|K'(z)|. §.§ Heat kernel: elementary estimatesIn this section we recall the definition of the heat kernel and someproperties and estimates that we will use throughout the paper. The function Γ:^d×→ is defined asΓ(x,t)=1/t^d/2exp(-|x|^2/4t)and we can rewrite it as Γ(x,t)=Γ_1(z,t)Γ_d-1(x',t)x'∈^d-1, z∈, whereΓ_1(z,t)=1/t^1/2exp(-z^2/4t)andΓ_d-1(z,t)=1/t^(d-1)/2exp(-|x'|^2/4t) . Here we list the bounds on the derivatives of Γ that are used in Section <ref>, Lemma <ref>:*⟨|(∇')^nΓ_d-1|⟩'≈1/t^n/2 .*∫_|∂_z^nΓ_1|dz≲1/t^n/2 .*∫_0^∞|∂_zΓ_1(z,t)|dt=∫_0^∞|1/t̂^3/2exp(-1/4t̂)|dt̃≲ 1 ,where we have used the change of variable t̂=t/z^2.*sup_z∈(z|∂_zΓ_1(z,t)|)=sup_ξ|1/t^1/2ξ^2exp^-ξ^2|≲1/t^1/2 ,where we have used the change of variable ξ=z/t^1/2 .*sup_z∈(z^2|∂_zΓ_1(z,t)|)=sup_ξ|ξ^3exp^-ξ^2|≲ 1 , where we have used the change of variable ξ=z/t^1/2 . § NOTATIONS The (d-1)-dimensional torus:We denote with [0,L)^d-1the (d-1)-dimensional torus of lateral size L. The spatial vector: x=(x',z)∈ [0,L)^d-1× . 
The horizontal average: ⟨·⟩'=1/L^d-1∫_[0,L)^d-1 ·dx' . Long-time and horizontal average: ⟨·⟩= lim sup_t_0→∞1/t_0∫_0^t_0⟨ · ⟩'dt . Convolution in the horizontal direction: f∗_x'g(x')=∫_[0,L)^d-1f(x'-x̃'̃)g(x̃'̃)dx̃'̃ . Convolution in the whole space: f∗ g(x)=∫_∫_[0,L)^d-1f(x'-x̃'̃,z-z̃)g(x̃'̃,z̃)dx̃'̃dz̃ . Horizontal Fourier transform: ℱ'f(k',z,t)=1/L^d-1∫ e^-ik'· x'f(x',z,t)dx' , where k' is the conjugate variable of x'. Horizontally band-limited function: A function g=g(x',z,t) is called horizontally band-limited with bandwidth R if it satisfies the bandedness assumption g(k',z,t)=0 unless 1≤ R|k'|≤ 4, with R<R_0. Interpolation norms: ||f||_(0,1)=||f||_R;(0,1)=inf_f=f_1+f_2{⟨sup_z∈(0,1)|f_1|⟩+⟨∫_(0,1)|f_2|dz/(z(1-z))⟩} , ||f||_(0,∞)=||f||_R;(0,∞)=inf_f=f_1+f_2{⟨sup_z∈(0,∞)|f_1|⟩+⟨∫_(0,∞)|f_2|dz/z⟩} , ||f||_(-∞,1)=||f||_R;(-∞,1)=inf_f=f_1+f_2{⟨sup_z∈(-∞,1)|f_1|⟩+⟨∫_(-∞,1)|f_2|dz/(1-z)⟩} , where f_1, f_2 satisfy the bandedness assumption (<ref>). Throughout the paper we will denote with ≲ the inequality up to universal constants. § ACKNOWLEDGEMENT C.N. was supported by IMPRS of MPI MIS (Leipzig). A.C. was partially supported by Whittaker Research Fellowship.
http://arxiv.org/abs/1703.09208v1
{ "authors": [ "Antoine Choffrut", "Camilla Nobili", "Felix Otto" ], "categories": [ "math.AP" ], "primary_category": "math.AP", "published": "20170327175105", "title": "A maximal regularity estimate for the non-stationary Stokes equation in the strip" }
Electron neutral collision frequency measurement with the hairpin resonator probe [1] North Carolina State University, Nuclear Engineering Department, Raleigh NC, USA [2] Applied Materials, Sunnyvale CA, USA [3] Treasure Isle Jewelers, Cary NC, USA djpeter5@ncsu.edu March 2017. Electron neutral collision frequency is measured using both grounded and floating hairpin resonator probes in a 27 MHz parallel plate capacitively coupled plasma (CCP). Operating conditions are 0.1-2 Torr (13.3-267 Pa) in Ar, He, and Ar-He gas mixtures. The method treats the hairpin probe as a two-wire transmission line immersed in a dielectric medium. A minimization method is applied during the pressure and sheath correction process by sweeping over assumed collision frequencies in order to obtain the measured collision frequency. Results are compared to hybrid plasma equipment module (HPEM) simulations and show good agreement. § INTRODUCTION Moderate pressure plasmas (1-10 Torr) are an area of growing interest as they have potential for faster processing, and are especially useful for thin film deposition[1]. Standard diagnostic approaches like Langmuir probes have formidable complications at these pressures due to the complex nature of collisional sheath dynamics[2]. Given the lack of easily implemented localized measurements at these pressures, any extra information that can be obtained is useful for improving industrial source design and validation efforts for plasma chemistry models. The hairpin resonance probe operates on a similar principle to the cavity perturbation technique. Both rely on the resonance frequency shift induced by the plasma's lossy dielectric properties to infer electron density (n_e), while the hairpin probe has the added benefit of allowing localized measurements. It has been previously suggested that resonance broadening can be used to directly determine the electron-neutral collision frequency[3-4].
This work extends the hairpin probe's capabilities, allowing measurement of electron neutral collision frequency (ν_en) using a relationship between hairpin probe resonance broadening and electron neutral collisions. § HAIRPIN THEORY The vacuum resonant frequency of a hairpin resonator, with tine lengths l, is given by f_0 = c/4l, where c is the vacuum speed of light. In a plasma, the resonance shifts to a higher frequency f_r = c/(4l√(ϵ')), where ϵ' is the real part of the complex plasma permittivity ϵ_p = 1 - ω_p^2/(ω_r( ω_r - iν_en)) . The previous relations, the angular resonant frequency ω_r = 2π f_r, and the electron plasma frequency ω_p = (n_ee^2/m_eϵ_0)^1/2 can be combined with measurements and a simple result from transmission line theory to produce a closed system of equations where ν_en is the only unknown. This approach implicitly assumes n_e is accurately determined by the hairpin probe. A value for electron temperature (T_e) must be assumed to correct for the presence of a sheath around the hairpin tines. Since the probe is not biased, sheaths act primarily as a geometric effect pertaining to the volume of free space between the hairpin tines, separated by width (w). Experimental error introduced by uncertainty in the sheath width (b) is therefore a quantity subject to optimization through probe design, further discussed in section 3.2. Sheath and pressure corrections must be applied to accurately determine n_e. The sheath correction factor is the same used by Sands et al [5], shown in equation (<ref>). ξ_s = 1 - (f_0^2/f_r^2)[ ln((w-a)/(w-b)) + ln(b/a)] /ln( (w-a)/a) The correction is applied using the iterative approach developed by Piejak [6]. The sheath is assumed to extend one electron Debye length (λ_D = (ϵ_0k_BT_e/e^2n_e)^1/2) out from the radius of the hairpin tine (a) for both types of probes. Here ϵ_0, k_B, and e are the permittivity of free space, Boltzmann's constant, and electronic charge, respectively.
Measurements in this work are made with both floating and grounded hairpin probes. This is done primarily to quantify the error introduced by using grounded probes, as opposed to measurements made with a floating probe which only require a DC sheath correction [7]. Differences between floating and grounded probes are discussed in section 3.2. A pressure correction factor (ξ_p) must be applied at moderate pressures in order to correct for the centerline shift in the resonance frequency. This is done using the expression given by Sands, ξ_p = 1/(1 + (ν_en/(2π f_0))^2) . The pressure and sheath corrections update n_e in the manner shown in equation (<ref>). The pressure correction is applied first and then sent to the sheath correction, which iteratively solves for the corrected n_e. n_e = (π m_e/e^2)(f_r^2 - f_0^2)/(ξ_pξ_s) The simplest approach to determining ν_en is to treat the plasma as a lossy dielectric. The plasma quality factor (Q_Plasma) can be defined using the ratio of the real and complex plasma permittivity [8], as seen in equation (<ref>). Q_Plasma = ϵ'/ϵ” = (1-ω_p^2/(ω_r^2 + ν_en^2))/((ν_en/ω_r)(ω_p^2/(ω_r^2+ν_en^2))) The same result can also be obtained from transmission line theory, where the attenuation constant (α) is defined using distributed parameters and assumes negligible resistive losses in the probe. This is a valid assumption considering the conducting material of the probe is silver. The analysis assumes weak attenuation, where α l ≪ 1, also a valid assumption for the pressure regimes being investigated. A complete description of transmission line analysis of the hairpin probe can be found in Xu et al [4]. When the probe is immersed in plasma, the hairpin can be treated as a loaded resonant quarter-wave transmission line[9], resulting in the coupling of Q values in the manner shown by equation (<ref>). 1/Q_Measured = 1/Q_Vacuum + 1/Q_Plasma Q_Vacuum is the measured quality factor of the hairpin inside the chamber at vacuum.
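The pressure-then-sheath correction sequence described in this section can be sketched numerically. The probe geometry, T_e, f_0, and f_r below are invented for illustration, and the SI prefactor 4π²ε₀m_e/e² is used as the SI-unit counterpart of the density relation; the sheath is taken as one Debye length thick, as in the text:

```python
import numpy as np

EPS0, ME, E = 8.854e-12, 9.109e-31, 1.602e-19

def n_raw(f0, fr):
    # Uncorrected density from the resonance shift, SI units (m^-3)
    return 4 * np.pi**2 * EPS0 * ME * (fr**2 - f0**2) / E**2

def xi_p(nu, f0):
    # Pressure correction factor
    return 1.0 / (1.0 + (nu / (2 * np.pi * f0))**2)

def xi_s(ne, Te_eV, a, w, f0, fr):
    # Sheath correction: sheath edge b = tine radius + one Debye length
    lam_D = np.sqrt(EPS0 * Te_eV / (E * ne))
    b = a + lam_D
    return 1 - (f0**2 / fr**2) * (np.log((w - a) / (w - b)) + np.log(b / a)) / np.log((w - a) / a)

def corrected_density(f0, fr, nu, Te_eV, a, w, n_iter=50):
    ne = n_raw(f0, fr) / xi_p(nu, f0)      # pressure correction applied first
    for _ in range(n_iter):                # then iterate the sheath correction
        ne = n_raw(f0, fr) / (xi_p(nu, f0) * xi_s(ne, Te_eV, a, w, f0, fr))
    return ne

# Hypothetical case: a = 0.22 mm, w = 1.85 mm, f0 = 2.5 GHz, fr = 2.8 GHz, Te = 3 eV
ne = corrected_density(2.5e9, 2.8e9, nu=0.0, Te_eV=3.0, a=0.22e-3, w=1.85e-3)
```

With these assumed numbers the loop converges in a few iterations to n_e ≈ 2.3×10^16 m^-3, roughly 15-20% above the uncorrected value, which is the size of correction the iteration is designed to capture.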
Measurements are made inside the chamber in order to completely account for parasitic loading of chamber components. Solving for Q_Plasma using equation (<ref>), we see that it now consists entirely of measured quantities. Both Q_Vacuum and Q_Measured are measured using Q=f_r/Δ f_r, where Δ f_r is the full width half max (FWHM) of the Lorentzian resonance profile. Collision frequency can then be determined by combining equations (<ref>) and (<ref>), since ω_r and ω_p are both known.§ EXPERIMENTAL §.§ Experimental Setup Experiments are performed on the Modular Radiofrequency Plasma Chamber (MrPC). MrPC is a parallel plate CCP reactor with two 150 mm diameter aluminum electrodes. The electrodes are mechanically fastened to spindles for adjusting the distance between the electrodes and their position inside the reactor. The electrode gap was 1.91 cm for all measurements. The electrodes are housed in a Rexolite plastic shroud which provides 180 pF of electrical isolation to the surrounding ground plate. One electrode is powered and the other grounded. The powered electrode is connected to a matching network through RG-393 MIL-C-17 RF cable and terminated on each end with N-Type connectors. MrPC is a stainless steel vessel pumped by a turbomolecular pump (TMP) with a base pressure <0.2 mTorr. The leak rate of the reactor when isolated from the TMP was 7.6 mTorr/min. The gas introduced by the leak is negligible compared to the controlled gas flow for all experiments. Gas is fed to the chamber using an analog mass flow controller (MFC) and pressure is controlled with a closed-loop capacitance manometer and throttle valve. Operating conditions are 0.1-2 Torr in Ar, He, and Ar-He gas mixtures powered by a 27.12 MHz LVG RF generator from ENI Products. A computer controlled HP8753C network analyzer is used for probe data acquisition. A diagram of the experimental setup is shown in figure 1. A more complete description of the setup has been given previously by Zhang et al[10]. 
§.§ Hairpin Design and Calibration Grounded hairpin probes provide a simple design for producing high Q resonances, which are necessary for measurements at moderate pressures. The high Q resonances are a result of direct electrical connection between the driving loop and the hairpin tines. Grounded probes come with a caveat. They require more sophisticated sheath analysis due to the presence of a grounded RF sheath across wire surfaces. Floating hairpin probes are used in this experiment to simplify the sheath correction process, only requiring DC sheath corrections. The floating probe design used in this work is optimized to keep Q high while still isolating the hairpin from ground to ensure no RF driven sheath is formed. A diagram of the design is shown in figure 2. A quartz sleeve ensures a small capacitive impedance for the microwave frequencies sent to the probe, which are typically in the low GHz range. Meanwhile, the drive frequency (27 MHz) impedance remains large enough to inhibit the formation of an RF driven sheath. This is done by keeping the thickness of the quartz sleeve (t) relatively thin, around 1.0 mm. The floating probe used in this work had t=1.15 mm, which corresponds to a capacitive impedance of approximately 15 Ω at 2 GHz. Based on the observed Q of this probe, a thinner quartz sleeve would further improve performance without introducing a significant RF sheath. Length of the metal ring (L) serves as an additional parameter for fine tuning of impedance by changing the effective area of the capacitor. Microwave plasmas can avoid this optimization step since RF driven sheaths will not exist with drive frequencies above the plasma frequency. The floating and grounded probe wire radii (a) and width (w) used in this experiment were a=0.22 and 0.325 mm, and w=1.85 and 3.05 mm, respectively. All probe components were made of sterling silver, which was annealed and then drawn through a draw plate to the specified radii.
Rounding pliers were used to shape the loop to the correct orientation. The loop and tines were then TIG welded to their respective positions. A drop of silver was deposited at the base of the hairpin tines before welding in order to avoid unintentionally destroying the tines. Metal notches were used to secure the position of the quartz sleeve for the floating probe. Hairpin resonances are fit with Lorentz curves since they can be characterized as an RLC resonant circuit [11]. Acquired data is fed into open source Python and Fityk[12] scripts for automated peak fitting and data analysis. These were developed in-house and are freely available upon request. The collision frequency measurement technique relies on accurate determination of the FWHM of the hairpin resonance in both vacuum and plasma. A calibration is required to ensure accuracy. The calibration removes transmission line effects of the probe, which can produce substantially different Q factors. An illustration of these differences is shown in figure 3. Figure 3a shows an uncalibrated curve which has the characteristic transmission line curve along with the resonance. Figure 3b shows the same curve with an applied calibration, obtained by subtracting the calibration curve from the original. Applying the calibration produces up to a 20% difference in measured Q factors due to distortion of the resonance peak. Calibration curves are obtained by shorting the far end of the hairpin tines. In this case, copper tape was rolled up in a way that produces two holes at each end, with one side being crimped, and the un-crimped side put around the open circuit end of the hairpin. This ensures that good electrical contact is made on both tines, shorting the open circuit, and fits well enough not to fall off during experiments. Since transmission line properties shift when immersed in plasma, calibrations were taken for each unique set of plasma conditions in order to ensure accurate FWHM measurements.
Calibrations obtained for different plasma conditions result in differences in Q that are typically less than the uncertainty resulting from the fit itself. Uncertainty in peak center and FWHM associated with fitting are typically both around 0.5%. As a result, only one calibration is used for all plasma conditions. Q_Vacuum is measured immediately after the plasma is extinguished. This ensures that ion heating of the probe is taken into account since it slightly decreases the quality factor and resonant frequency, a phenomenon previously noted by Piejak.At higher pressures, one must also take steps to avoid probe perturbation of the plasma due to the drawing of excessive current. This begins to occur as the electron mean free path approaches the dimensions of the probe. The hairpin tine diameter is approximately 5 times larger than the electron mean free path at 1 Torr in argon, meaning that some plasma perturbation occurred during measurements. The perturbation presents a probe design optimization dilemma. Experimental design necessitates finding a balance between the desire for minimal perturbation and the need for a high enough Q factor to perform measurements at higher pressures.Measurements are limited to conditions where electron density can accurately be measured. The smallest measurable n_e, suggested by Karkari et al [13], corresponds to a plasma frequency f_p∼ (2/Q)^1/2f_0. The probes used in this experiment have Q_Vacuum≈ 380, which corresponds to a lower density limit near 3 × 10^8 cm^-3. Measurements are also bound by an upper n_e limit that stems from the degradation in signal quality. It is suggested that the vacuum resonant frequency be larger than the plasma frequency in order to avoid exciting waves in the plasma, which further perturbs local parameters and introduces an additional nonlinear loss mechanism. Avoiding this nonlinear loss mechanism is particularly important for this method. 
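The quoted lower density limit follows directly from f_p ∼ (2/Q)^1/2 f_0. A quick check, assuming a vacuum resonance near 2.1 GHz (the text does not state f_0 explicitly, so this number is an assumption):

```python
import numpy as np

EPS0, ME, E = 8.854e-12, 9.109e-31, 1.602e-19

def n_min(f0_hz, q_vac):
    # Smallest resolvable plasma frequency, then the corresponding density (SI, m^-3)
    fp = np.sqrt(2.0 / q_vac) * f0_hz
    return 4 * np.pi**2 * EPS0 * ME * fp**2 / E**2

n = n_min(2.1e9, 380) * 1e-6   # convert m^-3 to cm^-3
```

For Q_Vacuum ≈ 380 this gives n ≈ 3×10^8 cm^-3, reproducing the quoted lower limit to within the uncertainty of the assumed f_0.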
Pressure also acts as a limiting parameter to these measurements. At low pressures, in the 10 mTorr range, collisional broadening contributes a smaller fractional amount to the overall broadening and becomes harder to measure. The lowest measurable pressure for this experiment was 0.1 Torr. At higher pressures, in the 10 Torr range, resonances broaden significantly due to collisional damping. Novel probe designs with higher vacuum quality factors are capable of extending the measurable parameter space, and is the subject of future work. §.§ Determination of ν_en from Q In figure 4, Q_Measured is fit with equation (<ref>) by assuming a constant pressure normalized collision frequency ν_en/p=2.5 GHz/Torr. The fit clearly captures the shape of Q_Measured, confirming expectations that pressure and sheath corrected n_e plays an important role in determining resonance width. The peak in quality factor is accompanied by a drop in measured electron density, making the probe less lossy, which results in the unusual shape. Electron temperature can appreciably change over such a relatively large pressure range, contributing to deviations in the fit caused by assuming a constant pressure normalized collision frequency. Instead of assuming the value of ν_en, it can be solved for directly using equation (<ref>). However, the n_e used for electron plasma frequency is a value that has already been modified by a ν_en dependent pressure correction. This apparent problem can be decoupled by sweeping over the initial collision frequency (ν_en^i), producing a range of possible n_e. An example of this is illustrated in figure 5. The dark region corresponds to 1000 different pressure and sheath corrected density profiles, each assuming different normalized collision frequencies spanning 2-4 GHz. The unusual n_e profile is a result of shifts in the spatial distribution of n_e as pressure increases, a phenomenon also observed in simulations done for this work. 
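The sweep over assumed collision frequencies can be made concrete with synthetic data. In this sketch the exact dielectric relations of section 2 are used both to synthesize a "measurement" and in the sweep (so the forward model and inversion are consistent); all numbers (f_0, n_e, ν_en, Q_Vacuum) are invented for illustration and the sheath correction is omitted for brevity:

```python
import numpy as np

EPS0, ME, E = 8.854e-12, 9.109e-31, 1.602e-19

def q_plasma(wr, wp2, nu):
    # Q_plasma = eps'/eps'' for eps_p = 1 - wp^2/(wr*(wr - i*nu))
    return (wr / nu) * (wr**2 + nu**2 - wp2) / wp2

# --- synthesize a "measurement" from assumed true parameters ---
w0 = 2 * np.pi * 2.5e9                    # vacuum resonance (assumed)
nu_true, n_true, q_vac = 3.0e9, 2.0e16, 380.0
wp2 = n_true * E**2 / (EPS0 * ME)
# resonance condition w0^2/wr^2 = eps'(wr) is quadratic in x = wr^2
bq = nu_true**2 - wp2 - w0**2
wr = np.sqrt(0.5 * (-bq + np.sqrt(bq**2 + 4 * w0**2 * nu_true**2)))
q_meas = 1.0 / (1.0 / q_vac + 1.0 / q_plasma(wr, wp2, nu_true))

# --- sweep assumed nu_i and minimize |nu_m - nu_i| ---
qp = 1.0 / (1.0 / q_meas - 1.0 / q_vac)   # measured plasma quality factor
best, best_res = None, np.inf
for nu_i in np.linspace(1e9, 6e9, 2001):
    wp2_i = (1 - w0**2 / wr**2) * (wr**2 + nu_i**2)   # pressure-corrected wp^2
    disc = (qp * wp2_i)**2 - 4 * wr**2 * (wr**2 - wp2_i)
    if disc < 0:
        continue
    nu_m = (qp * wp2_i - np.sqrt(disc)) / (2 * wr)    # small-damping root
    if abs(nu_m - nu_i) < best_res:
        best, best_res = nu_i, abs(nu_m - nu_i)
```

The residual |ν_en^m - ν_en^i| vanishes at the self-consistent point, so the sweep recovers the assumed ν_en = 3 GHz to within the grid spacing, illustrating the minimization step described above.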
Different ν_en^i that are used for the pressure correction yield different measured collision frequencies (ν_en^m) for the same experimental conditions. An accurate pressure correction will minimize the difference between ν_en^i and ν_en^m, which provides a path for the direct measurement of ν_en.§ RESULTS AND DISCUSSION Floating and grounded probe measurements are taken in pure Ar over a pressure range of 0.1-2 Torr and compared with hybrid plasma equipment module [14] (HPEM) simulations in figure 6(a). The floating probe measurements are limited to a smaller pressure range of 0.2-1 Torr due to a lower Q_Vacuum. Measurements fall between the maximum (Max) and hairpin location specific (at HP) simulated ν_en produced by HPEM. The large range of simulated ν_en stems from the fact that Ar is a Ramsauer gas. This means that relatively small differences in T_e can produce significantly different ν_en. For example, the simulated T_e that correspond to the maximum and at hairpin ν_en are approximately 1.8-3.3 eV, respectively. Floating probe measurements yielded smaller ν_en than the grounded probe measurements because floating probes only exhibit DC sheaths. The sheath on a floating probe will always be smaller than that of the grounded probe due to the rectification of RF current that occurs with grounded probes. The smaller sheath of the floating probe will result in a larger, and thus more accurate, uncorrected n_e. If identical sheath corrections are applied to both cases, as done in figures 6 (a) and (b), one would expect the larger n_e from the floating probe to yield smaller corresponding ν_en, easily seen in equation (<ref>). Sheath corrections assume T_e=3.2 eV, a typical value simulated in HPEM. Error bars correspond to sheath corrections with 1 eV <T_e<5 eV, shown primarily to capture rough uncertainty values resulting from an unknown T_e. 
HPEM simulations for pure Ar used an applied voltage of 55 V, corresponding to a simulated input power of 60 W, when assuming a matched 50 Ω load. Applied voltage was kept constant for all simulations, as opposed to letting voltage vary to reach a specified input power. Fixing applied voltage produced more consistent results and is recommended practice. Measurements are made in pure He with P=20 W in figure 6(b). Floating and grounded probe ν_en values start to diverge near 0.2 Torr in the expected directions for the same reason previously mentioned. Floating probe measurements were again limited to 0.1-0.8 Torr because Q_Vacuum was too low. The measurable pressure range for the floating probe in He was slightly lower than for Ar because He measurements yielded higher n_e and ν_en for the same conditions. This limited the measurable range due to signal loss. Sheath corrections assume T_e=4 eV, a typical value simulated in HPEM. Error bars correspond to 2 eV <T_e<6 eV. HPEM simulations required larger than expected applied voltages of 140 V in order to achieve reasonable densities. Very close agreement between experiment and simulation is observed until around 1 Torr. Above 1 Torr, grounded probe collision frequency remains lower than simulated values. A slightly logarithmic trend in ν_en, as opposed to linear, is expected with increasing pressure since higher collisionality will shift the electron energy distribution function towards lower energies. This nonlinear trend is observed in figure 6(b) but is not clearly exhibited in figure 6(a). An explanation for this discrepancy is still an unresolved issue. The measured ν_en at higher pressures corresponds to T_e≈ 2.5 eV, while HPEM simulated T_e≈ 3.7 eV at the location of the probe.
Error associated with uncertainty in T_e is similar in magnitude to those in figure 6, but error bars were not included for illustrative purposes. Close agreement between measurement and simulation can be observed in figure 7. A notable increase in collision frequency from 90% → 100% He was observed in both experiment and simulation. This phenomenon is a result of a moderate increase in T_e when transitioning to pure He. All mixing simulations were run at a slightly larger than expected applied voltage of 45 V (corresponding to P=32 W) in order to achieve reasonable densities, except for the 100% He case which again required 140 V. Increasing the applied voltage typically decreased T_e in the simulation. If the 100% He case was capable of simulation at the self-consistent applied voltage of 45 V, one would expect simulation to yield a slightly more exaggerated increase in the 90%-100% He transition. Reaction rates may be obtained from hairpin measurements of collision frequency using the ideal gas law, shown in equation (<ref>). K(T_e) = T_gk_Bν_en/pIf this is used in conjunction with a tunable diode laser absorption measurement of neutral gas temperature (T_g), T_e may be self-consistently determined with the sheath correction at conditions that are difficult to obtain with standard techniques. This can easily be done by comparing the measured reaction rate to expected reaction rates using BOLSIG+[15]. The method developed here may even be sensitive enough to infer T_e in non-Ramsauer gases, albeit with less accuracy. Measurements are also amenable to time resolution using the boxcar method[13].§ CONCLUSIONS Collision frequency measurements are obtained using the hairpin resonance probe for Ar, He, and Ar-He mixture plasmas. Measurements match closely with simulation results, and have a maximum of approximately 20% difference. Primary sources of error for this method stem from assuming electron temperatures and one Debye length sheaths. 
The technique presented here offers a useful route for obtaining important information from plasmas at conditions that are becoming increasingly desirable for a number of industrial applications. It seems particularly useful for accurately determining electron temperature in moderate pressure plasmas considering the difficulty associated with using Langmuir probes at these pressures, and a notable lack of other available diagnostic techniques. Extending the technique developed here to determine T_e is the subject of future work. § ACKNOWLEDGEMENTS This work is supported through a generous gift by Applied Materials Inc. The author would like to thank Yiting Zhang for her invaluable help with HPEM simulations. § REFERENCES Hassouni Hassouni K, Gicquel A, Capitelli M and Lourreiro J 1999 Plasma Sources Sci. Technol. 8 494 Godyak Godyak V A and Demidov V I 2011 J. Phys. D: Appl. Phys. 44 233001 Lieberman Lieberman M A and Lichtenberg A J, Principles of Plasma Discharges and Materials Processing. 2nd edn (New Jersey: Wiley) Xu Xu J, Nakamura K, Zhang Q and Sugai H 2009 Plasma Sources Sci. Technol. 18 045009 Sands Sands B L, Siefert S N, Ganguly N B 2007 Plasma Sources Sci. Technol. 16 716 Piejak Piejak R B, Godyak V A, Garner R, Alexandrovich B M and Sternberg N 2004 J. Appl. Phys. 95 3785-91 Piejak 2 Piejak R B, Al-Kuzee J and Braithwaite N St J 2005 Plasma Sources Sci. Technol. 14 734-43 Chen Chen L F, Microwave Electronics: Measurement and Materials Characterization. Chichester: Wiley, 2004. Print. Pozar Pozar D M, Microwave Engineering. Hoboken, NJ: J. Wiley, 2005. Zhang Zhang Y, Zafar A, Coumou D J, Shannon S C and Kushner M J 2015 J. Appl. Phys. 117 233302 Serway Serway R A and Beichner R J, Physics for Scientists and Engineers, 5th Ed., Saunders College Publishing, 2000. Fityk Wojdyr M 2010 J. Appl. Cryst. 43, 1126-1128 [reprint] Karkari Karkari S K, Gaman C, Ellingboe A R, Swindells I and Bradley J W 2007 Meas. Sci. Technol. 18 2649 Kushner Kushner M J 2009 J.
Phys. D: Appl. Phys. 42 194013 BOLSIG Hagelaar G J M and Pitchford L C 2005 Plasma Sci Sources and Tech 14, 722-33.
http://arxiv.org/abs/1703.09334v1
{ "authors": [ "David J Peterson", "Philip Kraus", "Thai Cheng Chua", "Lynda Larson", "Steven C Shannon" ], "categories": [ "physics.plasm-ph" ], "primary_category": "physics.plasm-ph", "published": "20170327225849", "title": "Electron neutral collision frequency measurement with the hairpin resonator probe" }
[]davide.pincini@diamond.ac.uk London Centre for Nanotechnology and Department of Physics and Astronomy, University College London, Gower Street, London WC1E6BT, UK Diamond Light Source Ltd., Diamond House, Harwell Science & Innovation Campus, Didcot, Oxfordshire OX11 0DE, UK[]j.vale@ucl.ac.uk London Centre for Nanotechnology and Department of Physics and Astronomy, University College London, Gower Street, London WC1E6BT, UK Laboratory for Quantum Magnetism, Ecole Polytechnique Fédérale de Lausanne (EPFL), CH-1015 Lausanne, Switzerland London Centre for Nanotechnology and Department of Physics and Astronomy, University College London, Gower Street, London WC1E6BT, UK California Institute of Technology, 1200 East California Blvd, 91125 Pasadena CA, USA Institute for Quantum Information and Matter, California Institute of Technology, Pasadena, California 91125, USA [Permanent address: ]Inorganic Chemistry, University of Oxford, South Parks Road, OX1 3QR, UK SUPA, School of Physics and Astronomy, and Centre for Science at Extreme Conditions, The University of Edinburgh, Mayfield Road, Edinburgh EH9 3JZ London Centre for Nanotechnology and Department of Physics and Astronomy, University College London, Gower Street, London WC1E6BT, UK European Synchrotron Radiation Facility, BP 220, F-38043 Grenoble Cedex, France Department of Quantum Matter Physics, University of Geneva, 24 Quai Ernest-Ansermet, 1211 Geneva 4, Switzerland Swiss Light Source, Paul Scherrer Institut, CH-5232 Villigen PSI, Switzerland London Centre for Nanotechnology and Department of Physics and Astronomy, University College London, Gower Street, London WC1E6BT, UK The collective magnetic excitations in the spin-orbit Mott insulator (Sr_1-xLa_x)_2IrO_4 (x=0, 0.01, 0.04,0.1) were investigated by means of resonant inelastic x-ray scattering. 
We report significant magnon energy gaps at both the crystallographic and antiferromagnetic zone centers at all doping levels, along with a remarkably pronounced momentum-dependent lifetime broadening. The spin-wave gap is accounted for by a significant anisotropy in the interactions between J_eff=1/2 isospins, thus marking the departure of Sr_2IrO_4 from the essentially isotropic Heisenberg model appropriate for the superconducting cuprates. Anisotropic exchange and spin-wave damping in pure and electron-doped Sr_2IrO_4 D.F. McMorrow December 30, 2023 ===============================================================================§ INTRODUCTION The combination of strong spin-orbit coupling in the presence of significant electron correlations leads to radically new electronic and magnetic phases, the elucidation of which is the subject of intense experimental and theoretical efforts <cit.>. In their landmark paper, <cit.> established how for the case of a J_eff=1/2 ground state, relevant for octahedrally coordinated transition-metal oxides (TMOs), a unique balance arises between anisotropic and isotropic exchanges, enshrined in the Kitaev-Heisenberg model, which depends exquisitely on lattice topology. Therefore determining the nature of anisotropic interactions is central to the program of understanding the novel physics displayed by 5d (and, indeed, 4d) TMOs.Sr_2IrO_4 is of particular significance as the first spin-orbit Mott insulator to be identified <cit.> and because of its electronic and structural similarities <cit.> to the cuprate superconductor parent compound La_2CuO_4. This has led to the prediction of a superconducting state in doped Sr_2IrO_4 <cit.>.
Analogous to the Cu^2+ (S=1/2) magnetic moments in La_2CuO_4, the spin-orbit entangled J_eff=1/2 <cit.> isospins of the Ir^4+ ions in Sr_2IrO_4 order at low temperature in a two-dimensional (2D) square lattice of antiferromagnetically coupled moments confined in the IrO_2 planes of the tetragonal crystal structure <cit.>. Upon doping Sr_2IrO_4 with electrons, the long-range order is suppressed, while short-range magnetic correlations have been shown to persist up to about 6% La substitution <cit.>. A similar behavior is also encountered in the hole-doped cuprates <cit.>, as expected given the opposite sign of the next-nearest-neighbor hopping amplitude <cit.>. Collective magnetic excitations with vanishing energy gap and a qualitatively similar energy dispersion have been reported to date in the parent and doped compounds of both La_2CuO_4 <cit.> and Sr_2IrO_4 <cit.>, which were interpreted in both cases in terms of a standard isotropic Heisenberg Hamiltonian extended to include next-nearest-neighbor interactions.The experimental reports of a purely isotropic exchange model for Sr_2IrO_4 are, in fact, at variance with the detailed predictions of <cit.>, who argued that there should be significant departures from the rotationally invariant Heisenberg model when Hund's coupling (J_H=0.45 eV in the case of Ir^4+ <cit.>) and the deviation from cubic symmetry are incorporated. In this scenario, collective magnetic excitations will acquire a finite energy gap at both the crystallographic and antiferromagnetic (AF) zone centers. Evidence supporting the presence of anisotropic magnetic interactions in Sr_2IrO_4 has come from a number of sources, including a detailed study of the magnetic critical scattering <cit.> and the observation of a small, zero-wave-vector magnon energy gap in electron spin resonance (0.83 meV) <cit.> and Raman spectroscopy (1.38 meV) <cit.>. 
Nonetheless, all previous resonant inelastic x-ray scattering (RIXS) investigations on the magnetic excitation spectrum of the parent <cit.> and electron-doped compounds <cit.> have not explicitly reported the presence of a gap, nor have they discussed the role of the anisotropic terms in the interaction Hamiltonian.In this paper we report on a comprehensive RIXS study of the collective magnetic excitations in both parent Sr_2IrO_4 and its electron-doped version (Sr_1-xLa_x)_2IrO_4. In contrast to earlier studies, we perform a full line-shape analysis of the RIXS spectra, including, most importantly, the effect of the finite momentum Q and energy resolution. The excitation spectrum is shown to be fully gapped at all wave vectors in the Brillouin zone (BZ) up to x=0.1, indicating the existence of anisotropic exchange interactions, and a previously unreported anisotropic damping away from the zone centers is revealed.§ SAMPLES AND EXPERIMENTAL SETUP Single crystals of (Sr_1-xLa_x)_2IrO_4 with varying La concentration [x=0, 0.01(1), 0.04(1), 0.10(1)] were flux grown using standard methods and characterized by resistivity and susceptibility measurements as described in <cit.> for samples of the same batch. The doping level of each of the crystals was checked by means of energy-dispersive x-ray spectroscopy (EDX).Substitution of trivalent La for divalent Sr dopes the system with electrons (2x e^-/ Ir atom) and suppresses the long-range magnetic order for x>x_c=0.02(1) <cit.>, while short-range correlations persist in the basal plane of the crystal <cit.>.The RIXS measurements were performed at the ID20 beamline of the European Synchrotron Radiation Facility (Grenoble, France). The experiment was carried out in horizontal scattering geometry using a spherical (R=2 m) Si(844) diced analyzer with a 60 mm mask and a Si(844) secondary monochromator.
This resulted in an overall energy resolution of FWHM=23.4 (28.0) meV for the x=0, 0.01, 0.04 (0.1) measurements and an in-plane momentum resolution of Δ Q_⊥≈0.18 Å^-1 <cit.>. The samples were cooled down to T=20 K (below the Néel transition at T_N≈ 230 K found in undoped Sr_2IrO_4 <cit.>) by means of a He-flow cryostat. The in-plane momentum transfer values 𝐐_⊥=(Q_x , Q_y) reported in this paper are quoted in units of 1/a, where a=3.89 Å is the in-plane lattice constant of the undistorted I4/mmm unit cell. The out-of-plane component was kept fixed to L=33 for all the spectra (L is the out-of-plane Miller index). The only exception is represented by the (0,0) spectrum in the x=0 sample: this was measured for L=32.85 to minimize the strong elastic signal arising from the ordered magnetic structure.§ RESULTS §.§ Spin-wave excitation spectrumRIXS spectra were collected keeping the incident energy fixed to the Ir L_3 absorption edge and measuring the energy of the scattered photons in the energy loss range E_loss=-0.2-0.6 eV. For each value x of La content, several spectra were collected for different values of 𝐐_⊥ along high-symmetry directions of the (0,0,33) first BZ (2θ≈90^∘). These are plotted in the intensity maps of Figs. <ref>(a)-<ref>(d). As first reported by <cit.>, the parent compound data show a collective magnetic excitation dispersing from the AF zone center (π,π) and extending up to about 0.2 eV. In agreement with earlier studies <cit.>, damped magnetic excitations with a similar in-plane dispersion survive in the doped compounds deep into the metallic phase, where the long-range magnetic order is suppressed <cit.>. In particular, the magnons in our heavily doped (x=0.1) Sr_2IrO_4 sample still reflect the persistence of commensurate short-range order, in contrast to hole-doped La_2CuO_4 <cit.>.A quantitative analysis of the spin-wave spectrum was achieved by fitting the RIXS data by a sum of an elastic line and the following inelastic features (Fig. 
<ref>): (A) a single-magnon excitation, (B) a multimagnon continuum, and (C) and (D) intra-t_2g excitations <cit.>. Each feature was modeled by a Voigt profile, with the width of the Gaussian component constrained to the experimental energy resolution: this allows the extraction of both the energy and the intrinsic Lorentzian lifetime broadening of the excitations <cit.>.One of the main features emerging from our data is the presence of a finite energy gap, which appears relatively robust with La doping. This is evident from the low-energy detail of the spectra collected at the crystallographic and AF zone centers shown in Figs. <ref>(a) and <ref>(b), respectively. Simple inspection reveals a separate energy-loss peak partially overlapping with the elastic line. Based on its energy and momentum dependence, we argue that it corresponds to a gapped spin-wave excitation. We note that the finite Q resolution of the spectrometer can generally lead to an artificial gap at minima of the dispersion. However, the measured gap values at all doping levels (Table <ref>) consistently exceed the artificial gaps simulated for the case of gapless excitations <cit.>. This result is robust against both the statistical (2σ confidence interval) and the estimated systematic error <cit.>: the presence of gapped magnons is thus to be considered an intrinsic property of the excitation spectrum in (Sr_1-xLa_x)_2IrO_4. As shown in the Supplemental Material <cit.>, the impact of the Q resolution can be factored out from the measured energy values, leading to an average gap of 19(9) and 16(4) meV at (0,0) and (π,π), respectively, with no systematic doping dependence (see Table <ref>). We note that recent electron spin resonance <cit.> and Raman spectroscopy <cit.> studies reported a smaller gap of, respectively, 0.83 and 1.38 meV at the crystallographic zone center. The origin of the discrepancy with the above analysis remains an open issue.
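As a concrete illustration of the line-shape model just described (a sketch, not the authors' analysis code), the following Python snippet builds a spectrum from an elastic line plus Voigt-profile features whose Gaussian width is pinned to the quoted 23.4 meV resolution. All amplitudes, peak energies, and Lorentzian widths below are invented placeholders; `scipy.special.voigt_profile` supplies the Voigt function.

```python
import numpy as np
from scipy.special import voigt_profile

FWHM_RES = 23.4e-3                                     # instrumental resolution in eV (x = 0 dataset)
SIGMA_RES = FWHM_RES / (2 * np.sqrt(2 * np.log(2)))    # Gaussian std. dev. fixed by the resolution

def spectrum(E, params):
    """Elastic line plus Voigt features; Gaussian width pinned to the resolution."""
    model = params["elastic_amp"] * voigt_profile(E, SIGMA_RES, 0.0)
    for amp, E0, gamma in params["features"]:          # gamma = Lorentzian HWHM (lifetime broadening)
        model += amp * voigt_profile(E - E0, SIGMA_RES, gamma)
    return model

E = np.linspace(-0.2, 0.6, 801)                        # energy-loss grid in eV
guess = {"elastic_amp": 1.0,
         "features": [(0.8, 0.10, 0.01),               # (A) single magnon  -- illustrative numbers
                      (0.3, 0.25, 0.05)]}              # (B) multimagnon continuum
y = spectrum(E, guess)
```

In an actual fit, the amplitudes, energies, and Lorentzian widths would be free parameters of a least-squares optimisation, while `SIGMA_RES` stays fixed, which is what allows the intrinsic lifetime broadening to be separated from the instrumental response.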
Nonetheless, our value appears to be roughly consistent with a previous estimate in the undoped compound <cit.>.The full energy dispersion (corrected for the finite Q resolution <cit.>) of the single magnon for the different doping levels across the BZ is summarized in Fig. <ref> along with the corresponding Lorentzian FWHM. Besides displaying a finite energy gap, Fig. <ref> reveals that the magnon peak at (0,0) and (π,π) does not considerably broaden as the dopant concentration is increased (the FWHM increases by only 60% going from the parent to the heavily doped x=0.1 sample). On the other hand, a remarkably pronounced anisotropic broadening occurs away from the zone centers. The largest effect is seen at the zone boundaries (0,π) and (π/2,π/2), where the FWHM increases by a factor of about 3 and 4, respectively, going from x=0 to x=0.1. Here, heavily damped magnetic excitations (paramagnons) are thus present. §.§ Magnetic Hamiltonian The spin-wave gap arises as a result of a significant easy-plane anisotropy in the exchange interaction between J_eff=1/2 isospins. In <cit.>, the magnetic critical scattering and RIXS data were found to be correctly described by the following two-dimensional anisotropic Heisenberg (2DAH) Hamiltonian:H= ∑_⟨ i,j ⟩J̃[S_i^xS_j^x+S_i^yS_j^y+(1-Δ_λ)S_i^zS_j^z ] +∑_⟨⟨ i,j ⟩⟩J_2S⃗_i·S⃗_j+∑_⟨⟨⟨ i,j ⟩⟩⟩ J_3S⃗_i·S⃗_j, where J̃=J_1/(1-Δ_λ) is an effective nearest-neighbor (NN) exchange integral depending on the in-plane anisotropy parameter 0≤Δ_λ≤ 1 <cit.> and J_2 and J_3 model the next-NN and third-NN exchange interactions, respectively. In the framework of linear spin-wave theory, Eq. (<ref>) gives rise to two momentum-dependent magnetic modes E_±(𝐐_⊥) <cit.>. For Δ_λ≠0, the latter are nondegenerate and display a finite energy gap at the crystallographic (E_-) and AF (E_+) zone centers (see Fig. <ref>). The two magnetic modes E_± are not resolved in our measurements.
The reason is that their energy is almost degenerate for most of the 𝐐_⊥ values explored, with a non-negligible splitting present only at the zone centers. Moreover, the gapless mode carries a vanishingly small spectral weight at (0,0), while it is hidden by the elastic signal arising from a weak structural reflection <cit.> at (π,π). Only the gapped mode is thus expected to be visible in the RIXS spectra and to account for the observed gap. Following this reasoning, the measured dispersion was then fitted to (i) E_+ along the path (π/2,π/2)→(0,π)→(π,π)→(π/2,π/2) and (ii) E_- along the path (π/2,π/2)→(0,0). The results are shown in Fig. <ref>, while the corresponding best-fit parameters are summarized in Table <ref>.The Q-resolution-corrected gap at all doping levels is correctly reproduced by a value of the easy-plane anisotropy Δ_λ in the range 0.03-0.06 (Table <ref>), in good agreement with previous theoretical predictions <cit.> and experimental estimates <cit.> for the undoped compound. These values are significantly larger than the ones found in La_2CuO_4 <cit.>. Our results thus confirm the critical scattering data <cit.> and firmly establish the importance of easy-plane anisotropy in the low-energy Hamiltonian of Sr_2IrO_4. The expression of the anisotropic exchange of Eq. (<ref>) is consistent with the dipolar-like term expected to arise as a result of finite Hund's coupling in the model by <cit.>. Considering the exchange parameters of Table <ref>, the latter predicts values of the in-plane (Δ_xy≈Γ_2=3 meV) and out-of-plane (Δ_z≈√(J̃Γ_1)=9 meV) gaps, which, although smaller, are consistent in order of magnitude with the ones measured from the RIXS spectra. The anisotropy does not show any significant dependence on the La content within the experimental uncertainty, thus suggesting that it is robust with carrier doping. 
The impact of electron doping on the spin-wave energy dispersion is limited to a renormalization of the NN exchange interaction: this decreases as x is increased (Table <ref>), in agreement with what was reported by <cit.>. The pronounced anisotropic broadening, however, suggests that the injection of free carriers causes a strong enhancement of the scattering processes along the zone boundary (π/2,π/2)→ (0,π), which shortens the excitation lifetime with respect to the crystallographic and AF zone centers. Strikingly, a recent ARPES study of La-doped Sr_2IrO_4 <cit.> found coherent excitations in the form of Fermi arcs around (π/2,π/2) coexisting with strongly interacting, pseudo-gapped states at (π,0). Such a behavior might arise from strongly anisotropic coupling to magnetic fluctuations, qualitatively consistent with the anisotropic damping of spin waves reported here.§ CONCLUDING REMARKS In conclusion, our RIXS investigation has revealed the presence of gapped collective magnetic excitations in the electron-doped spin-orbit Mott insulator (Sr_1-xLa_x)_2IrO_4 up to x=0.1. The magnon is robust upon carrier doping at the crystallographic and AF zone center, while paramagnons exhibiting a pronounced anisotropic damping are found elsewhere in the BZ. Consistent with theoretical predictions <cit.>, the gap can be ascribed to a significant in-plane anisotropy in the interaction between the Ir^4+ J_eff=1/2 isospins that breaks the full rotational symmetry of the magnetic Hamiltonian. Despite apparent similarities with the superconducting cuprates, our results show that the spin-orbit entangled nature of the J_eff=1/2 ground state gives rise to magnetic interactions which differ significantly from the pure spin ones encountered in the cuprates. This will pave the way to a deeper understanding of the differences between the two classes of compounds in light of the long-sought-after superconductivity in iridate oxides. The authors would like to thank M. 
Rossi (ID20, ESRF) and S. Boseggia (UCL) for the helpful discussions and the support provided during the data analysis. This work is supported by the UK Engineering and Physical Sciences Research Council (Grants No. EP/N027671/1 and No. EP/N034694/1) and by the Swiss National Science Foundation (Grant No. 200021- 146995).D. Pincini and J.G. Vale contributed equally to this work.
http://arxiv.org/abs/1703.09051v4
{ "authors": [ "Davide Pincini", "James G. Vale", "Christian Donnerer", "Alberto de la Torre", "Emily C. Hunter", "Robin Perry", "Marco Moretti Sala", "Felix Baumberger", "Desmond F. McMorrow" ], "categories": [ "cond-mat.str-el", "cond-mat.supr-con" ], "primary_category": "cond-mat.str-el", "published": "20170327132310", "title": "Anisotropic exchange and spin-wave damping in pure and electron-doped Sr$_2$IrO$_4$" }
Department of Mathematics, University of Oxford [N. Heuer]heuer@maths.ox.ac.ukBounded cohomology of groups was first studied by Gromov in 1982 in his seminal paper <cit.>.Since then it has sparked much research in Geometric Group Theory. However, it is notoriously hard to explicitly compute bounded cohomology, even for most basic “non-positively curved” groups. On the other hand, there is a well-known interpretation of ordinary group cohomology in dimension 2 and 3 in terms of group extensions. The aim of this paper is to make this interpretation available for bounded group cohomology. This will involve quasihomomorphisms as defined and studied by Fujiwara–Kapovich <cit.>. Low-Dimensional Bounded Cohomology and Extensions of Groups Nicolaus Heuer December 30, 2023 =========================================================== § INTRODUCTIONIn <cit.> Gromov studied bounded cohomology of groups in connection to minimal volume of manifolds. Since then bounded cohomology has been established as an independent active research field due to its connection to other areas in Geometric Group Theory. Most prominent applications include stable commutator length (<cit.>), circle actions (<cit.>, <cit.>, <cit.>) and the Chern Conjecture. See <cit.> and <cit.> for an introduction to the topic.For a group G and a normed G-module V, denote by _b^n(G,V) the n-dimensional bounded cohomology of G with coefficients in V; see Subsection <ref>. _b^n(G,V) is notoriously hard to compute explicitly.
Consider the most basic case of V = ℝ with the trivial G-action. If G is amenable then it is known that _b^n(G,ℝ)=0 for all n ≥ 1. On the other hand, if G is “non-positively curved” then _b^2(G,ℝ) and _b^3(G,ℝ) are typically infinite dimensional as ℝ-vector spaces, for example for acylindrically hyperbolic groups; see <cit.> and <cit.>. However, there is no full characterisation of all bounded classes in ^n_b(G,ℝ) for n=2,3. For n ≥ 4, _b^n(G,ℝ) is usually fully unknown, even if G is a non-abelian free group. On the other hand, for ordinary n-dimensional group cohomology ^n(G,V) there is a well-known characterisation for n=2,3 in terms of group extensions. The aim of this paper is to make this well-known correspondence available for bounded cohomology. For this, we first recall the classical connection between group extensions and ordinary group cohomology. An extension of a group G by a group N is a short exact sequence of groups 1 → N ι→ E π→ G → 1. We say that two group extensions 1 → N ι_1→ E_1 π_1→ G → 1 and 1 → N ι_2→ E_2 π_2→ G → 1 of G by N are equivalent if there is an isomorphism Φ E_1 → E_2 such that Φ∘ι_1 = ι_2 and π_2 ∘Φ = π_1, i.e. such that the evident diagram commutes. Any group extension of G by N induces a homomorphism ψ G → Out(N); see Subsection <ref>. Two equivalent extensions of G by N induce the same such map ψ G → Out(N). We denote by ℰ(G,N,ψ) the set of group extensions of G by N which induce ψ, under this equivalence. If there is no danger of ambiguity we do not label the maps of the short exact sequence, i.e. we will write 1 → N → E → G → 1 instead of (<ref>).It is well-known that one may fully characterise ℰ(G,N,ψ) in terms of ordinary group cohomology: Let G and N be groups and let ψ G → Out(N) be a homomorphism. Furthermore, let Z = Z(N) be the centre of N equipped with the action of G induced by ψ. Then there is a class ω = ω(G,N,ψ) ∈^3(G,Z), called the obstruction, such that ω = 0 in ^3(G,Z) if and only if ℰ(G,N,ψ) ≠∅.
In this case there is a bijection between the sets ^2(G,Z) and ℰ(G,N,ψ). Theorem <ref> may be found in Theorem 6.6 of <cit.>; see also <cit.>. Moreover, for a G-module Z it is possible to characterise ^3(G,Z) in terms of these obstructions: For any G-module Z and any α∈^3(G, Z) there is a group N with Z = Z(N) and a homomorphism ψ G → Out(N) extending the action of G on Z such that α = ω(G, N, ψ). Theorem <ref> may be found in <cit.>, Section IV, 6. In other words, any three-dimensional class in ordinary cohomology arises as an obstruction.The aim of this paper is to derive statements analogous to Theorem <ref> and Theorem <ref> involving bounded cohomology. This will use quasihomomorphisms as defined and studied by Fujiwara–Kapovich in <cit.>. Let G and H be groups. A set-theoretic function σ G → H is called a quasihomomorphism if the set D(σ) = {σ(g) σ(h) σ(gh)^-1 | g,h ∈ G } is finite. We note that this is not the original definition of <cit.>, but both definitions are equivalent; see Proposition <ref> and Subsection <ref>. We say that an extension 1 → N ι→ E π→ G → 1 of G by N is bounded if there is a (set-theoretic) section σ G → E such that (i) σ G → E is a quasihomomorphism and (ii) the map ϕ_σ G → Aut(N) induced by σ has finite image in Out(N). Here ϕ_σ G → Aut(N) denotes the set-theoretic map ϕ_σ g ↦ϕ_σ(g) with ^ϕ_σ(g) n = ι^-1(σ(g) ι(n) σ(g)^-1). We stress that ϕ_σ is in general not a homomorphism. See Remark <ref> for the notation. Condition (ii) may seem artificial but is both natural and necessary; see Remark <ref>. We denote the set of all bounded extensions of a group G by N which induce ψ by ℰ_b(G, N, ψ) and mention that this is a subset of ℰ(G,N,ψ). Analogously to Theorem <ref>, we will characterise the set ℰ_b(G, N, ψ) ⊂ℰ(G, N, ψ) using bounded cohomology.
Let G and N be groups and suppose that Z = Z(N), the centre of N, is equipped with a norm ‖·‖ such that (Z, ‖·‖) has finite balls. Furthermore, let ψ G → Out(N) be a homomorphism with finite image. There is a class ω_b = ω_b(G,N,ψ) ∈_b^3(G,Z) such that ω_b=0 in _b^3(G,Z) if and only if ℰ_b(G,N,ψ) ≠∅, and c^3(ω_b) = ω is the obstruction of Theorem <ref>. If ℰ_b(G,N,ψ) ≠∅, then the bijection between the sets ^2(G,Z) and ℰ(G,N,ψ) described in Theorem <ref> restricts to a bijection between im(c^2) ⊂^2(G,Z) and ℰ_b(G,N,ψ) ⊂ℰ(G, N, ψ). Here, c^n ^n_b(G, Z) →^n(G, Z) denotes the comparison map; see Subsection <ref>. We say that a normed group or module (Z, ‖·‖) has finite balls if for every K > 0 the set { z ∈ Z | ‖z‖≤ K } is finite. Theorem <ref> is applied to examples in Subsection <ref>.Just as in Theorem <ref>, we may ask which elements of ^3_b(G, Z) may be realised by obstructions. For a G-module Z we define the following subset of ^3_b(G,Z): ℱ(G,Z) := {Φ^*α∈^3_b(G,Z) | Φ G → M is a homomorphism, M a finite group, α∈_b^3(M,Z) }, where Φ^* α denotes the pullback of α via the homomorphism Φ. As M is finite, ^3(M,Z) = ^3_b(M,Z). Analogously to Theorem <ref>, we will show: Let G be a group and let Z be a normed G-module with finite balls such that G acts on Z via finitely many automorphisms. Then {ω_b(G, N, ψ) ∈^3_b(G, Z) | Z = Z(N) and ψ induces the action on G } = ℱ(G,Z) as subsets of ^3_b(G,Z). As finite groups are amenable, this shows that all such classes in ^3_b(G,Z) vanish under a change to real coefficients; see Subsection <ref>. We prove Theorems <ref> and <ref> following the outline of the classical proofs in <cit.>. §.§ Organisation of the paperThis paper is organised as follows: In Section <ref> we recall well-known facts about (bounded) cohomology and quasihomomorphisms. In Section <ref> we reformulate the problem of characterising group extensions using non-abelian cocycles; see Definition <ref>. Using this characterisation, we prove Theorem <ref> in Subsection <ref>.
In Section <ref> we prove Theorem <ref>, which characterises the set of classes arising as obstructions ω_b. In Section <ref> we give examples to show that the assumptions of Theorem <ref> are necessary and discuss generalisations. The proof of Proposition <ref> is postponed to the Appendix in Section <ref>. § PRELIMINARIESIn this section we recall notation and conventions regarding the (outer) automorphisms in Subsection <ref>. We further recall basic facts on (bounded) cohomology of groups in Subsection <ref> and quasihomomorphisms by Fujiwara–Kapovich in Subsection <ref>.§.§ Notation and conventions: Aut, Inn and OutThroughout this paper, Roman capitals (A, B) denote groups, lowercase Roman letters (a,b) denote group elements and Greek letters (α, β) denote functions. We stick to this notation unless it is mathematical convention to do otherwise. In a group G the identity will be denoted by 1 ∈ G, and by 0 ∈ G to stress that G is abelian. The trivial group will also be denoted by “1”.Let N be a group and let Aut(N) be the group of automorphisms of N. Recall that Inn(N) denotes the group of inner automorphisms, that is, the subgroup of Aut(N) whose elements are induced by conjugation by elements of N. There is a map ϕ N → Aut(N) via ϕ n ↦ϕ_n where ϕ_n g ↦ n g n^-1. Recall that Inn(N) is a normal subgroup of Aut(N) and that the quotient Out(N) = Aut(N) / Inn(N) is the group of outer automorphisms of N. It is well-known that there is an exact sequence 1 → Z → N → Aut(N) → Out(N) → 1, where Z = Z(N) denotes the centre of N and all the maps are the obvious ones.We will frequently use the following facts. Let G be a group. Any homomorphism ψ G → Out(N) induces an action of G on Z = Z(N). This fact is also proved in detail in Subsection <ref>. Moreover, if n_1, n_2 ∈ N are two elements such that for every g ∈ N, ϕ_n_1 (g) = ϕ_n_2 (g), then n_1 and n_2 just differ by an element of the centre, i.e. there is z ∈ Z(N) such that n_1 = z n_2.
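The relationship between inner and outer automorphisms above can be checked exhaustively for a small group. The following Python sketch (illustrative only, not from the paper) enumerates all automorphisms of the symmetric group S_3 and verifies that every one of them is inner, so Out(S_3) is trivial; this is consistent with the exact sequence, since Z(S_3) = 1 forces N ≅ Inn(N).

```python
from itertools import permutations, product

# S_3 as tuples encoding permutations of {0,1,2}; composition is the group law.
S3 = list(permutations(range(3)))
comp = lambda p, q: tuple(p[q[i]] for i in range(3))    # (p*q)(i) = p(q(i))
inv = lambda p: tuple(sorted(range(3), key=lambda i: p[i]))  # inverse permutation

# Enumerate all automorphisms: bijections S3 -> S3 respecting the product.
autos = []
for images in permutations(S3):
    f = dict(zip(S3, images))
    if all(f[comp(p, q)] == comp(f[p], f[q]) for p, q in product(S3, repeat=2)):
        autos.append(tuple(f[p] for p in S3))

# Inner automorphisms: conjugation p |-> a p a^{-1} by each a in S3.
inners = {tuple(comp(comp(a, p), inv(a)) for p in S3) for a in S3}
assert len(autos) == 6 and set(autos) == inners   # Aut(S3) = Inn(S3), hence Out(S3) = 1
```

The same brute-force strategy works for any group small enough to enumerate, and gives a hands-on check of the quotient Out(N) = Aut(N)/Inn(N) used throughout this section.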
This may be seen by the exactness of the above sequence.§.§ (Bounded) cohomology of groupsIn what follows we will define (bounded) cohomology using the inhomogeneous resolution. Let G be a group and let V be a G-module. In what follows we may refer to a normed G-module simply as a G-module. Following <cit.>, a norm on a G-module V is a map ‖·‖ V →ℝ^+ such that * ‖v‖ = 0 if and only if v = 0 * ‖r v‖≤ | r | ‖v‖ for every r ∈ℤ, v ∈ V * ‖v + w‖≤‖v‖ + ‖w‖ * ‖g v‖=‖v‖ for every g ∈ G, v ∈ V. Suppose that the G-module V is equipped with a norm ‖·‖. Set C^0(G, V) = C^0_b(G, V) = V and set, for n ≥ 1, C^n(G, V) = {α G^n → V }. For an element α∈ C^n(G,V) we define ‖α‖ = sup_(g_1, …, g_n) ∈ G^n‖α(g_1, …, g_n)‖ when the supremum exists and set ‖α‖ = ∞ else. For n ≥ 1 set C^n_b(G, V) = {α∈ C^n(G,V) | ‖α‖ < ∞}, the bounded cochains.We define δ^n C^n(G,V) → C^n+1(G,V), the coboundary operator, as follows: Set δ^0 C^0(G,V) → C^1(G,V) via δ^0(v) g_1 ↦ g_1 · v - v and for n ≥ 1 define δ^n C^n(G,V) → C^n+1(G,V) via δ^n(α) (g_1, …,g_n+1) ↦ g_1 ·α(g_2,…,g_n+1) +∑_i=1^n(-1)^i α(g_1,…,g_i g_i+1, …, g_n+1) +(-1)^n+1 α(g_1,…, g_n).Note that δ^n restricts to a map C_b^n(G,V) → C_b^n+1(G,V) for any n ≥ 0. By abuse of notation we call this restriction δ^n as well.It is well-known that (C^*(G,V),δ^*) is a cochain complex. The cohomology of G with coefficients in V is the homology of this complex and is denoted by ^*(G,V). Similarly (C_b^*(G,V),δ^*) is a cochain complex and its homology is the bounded cohomology of G with coefficients in V, denoted by ^*_b(G,V). Let W be a normed H-module and let Φ G → H be a homomorphism. Denote by V the normed abelian group W equipped with the G-module structure induced by Φ. We then obtain a map Φ^* ^*(H, W) →^*(G,V) via Φ^* α↦Φ^* α where Φ^* α denotes the pullback of α via Φ. Similarly we obtain a map Φ^* _b^*(H, W) →_b^*(G,V). For what follows it will be helpful to work with non-degenerate cochains. A map α∈ C^n(G,V) is called non-degenerate if α(g_1, …, g_n) = 0 whenever g_i = 1 for some i=1, …, n.
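The inhomogeneous coboundary just defined can be checked directly on a small example. The following Python sketch (not part of the paper) implements δ^n for a trivial G-module with integer values over G = ℤ/4ℤ and verifies the cochain-complex property δ^{n+1} ∘ δ^n = 0 for an arbitrary 2-cochain.

```python
from itertools import product

def delta(alpha, n, mul):
    """Inhomogeneous coboundary delta^n for a trivial G-module (integer values)."""
    def dalpha(gs):  # gs is a tuple of n+1 group elements
        total = alpha(gs[1:])                      # g_1 . alpha(g_2,...,g_{n+1}); action is trivial
        for i in range(1, n + 1):                  # alternating "merge neighbours" terms
            total += (-1) ** i * alpha(gs[:i - 1] + (mul(gs[i - 1], gs[i]),) + gs[i + 1:])
        total += (-1) ** (n + 1) * alpha(gs[:n])   # last term drops g_{n+1}
        return total
    return dalpha

G = range(4)                                       # the cyclic group Z/4Z
mul = lambda a, b: (a + b) % 4
alpha2 = lambda gs: (3 * gs[0] + 5 * gs[1] + gs[0] * gs[1]) % 7 - 3  # an arbitrary 2-cochain
d2 = delta(alpha2, 2, mul)                         # a 3-cochain
d3 = delta(d2, 3, mul)                             # delta^3(delta^2(alpha)) should vanish
assert all(d3(gs) == 0 for gs in product(G, repeat=4))
```

Since the sup-norm of a cochain on a finite group is always finite, the same code also computes the bounded coboundary; for infinite G one would additionally track ‖α‖ as defined above.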
We define NC^0(G,V) = NC^0_b(G,V) = V and moreover NC^n(G, V) = {α∈ C^n(G, V) | α non-degenerate} and NC^n_b(G,V) = {α∈ C_b^n(G, V) | α non-degenerate}, and observe that δ^* sends non-degenerate maps to non-degenerate maps. The homology of (NC^*(G,V), δ^*) is ^n(G,V) and the homology of (NC^*_b(G,V), δ^*) is _b^n(G,V). See Section 6 of <cit.>, where an explicit homotopy between the complexes (NC^*(G,V), δ^*) and (C^*(G,V), δ^*) is constructed. Moreover, one may see that this homotopy preserves bounded maps and hence yields a homotopy between (NC_b^*(G,V), δ^*) and (C_b^*(G,V), δ^*). Note that the inclusion C^*_b(G,V) ↪ C^*(G,V) commutes with the coboundary operator and hence induces a well-defined map c^* ^*_b(G,V) →^*(G,V), called the comparison map. For a thorough treatment of ordinary and bounded cohomology see <cit.> and <cit.> respectively.§.§ Quasimorphisms and quasihomomorphisms Consider ker(c^2), the kernel of the comparison map in dimension 2. An element [ω] ∈ ker(c^2) is the class of a bounded function ω∈ C^2_b(G, V) which vanishes in ordinary cohomology, i.e. such that there is a map ϕ G → V such that for all g_1,g_2 ∈ G, ω(g_1,g_2) = δ^1 ϕ(g_1,g_2) = g_1 ·ϕ(g_2) - ϕ(g_1 g_2) + ϕ(g_1).If V = ℝ is equipped with the trivial G-module structure and the standard norm on ℝ, then we see that every class [ω] ∈ ker(c^2) may be represented by the coboundary of a map ϕ G →ℝ such that there is a D > 0 such that for all g, h ∈ G, |ϕ(g) + ϕ(h) - ϕ(gh)| ≤ D. We call such maps quasimorphisms. Maps ϕ G →ℝ such that there is a constant D' > 0 and a homomorphism η G →ℝ such that for all g ∈ G, | ϕ(g) - η(g) | ≤ D' are called trivial quasimorphisms. Many classes of “non-positively curved” groups support non-trivial quasimorphisms. For example, acylindrically hyperbolic groups support non-trivial quasimorphisms; see <cit.>.
On the other hand, amenable groups do not support non-trivial quasimorphisms. For a thorough treatment of quasimorphisms see <cit.>.There are different proposals for how to generalise quasimorphisms ϕ G →ℝ to maps ϕ G → H with an arbitrary group as a target. This paper exclusively treats the generalisation of Fujiwara–Kapovich (<cit.>). However, we note that there are other generalisations, for example one by Hartnick–Schweitzer (<cit.>). The latter are considerably more general than the one we are concerned with; see Subsection <ref>. (Fujiwara–Kapovich <cit.>) Let G and H be groups and let σ G → H be a set-theoretic map. Define ε G × G → H via ε (g,h) ↦σ(g) σ(h) σ(gh)^-1 and define D(σ) ⊂ H, the defect of σ, via D(σ) = {ε(g,h) | g,h ∈ G } = {σ(g) σ(h) σ(gh)^-1| g,h ∈ G }. The group Δ(σ) < H generated by D(σ) is called the defect group. The map σ G → H is called a quasihomomorphism if the defect D(σ) ⊂ H is finite. When there is no danger of ambiguity we will write D=D(σ) and Δ = Δ(σ). This definition is slightly different from the original definition in <cit.>. There, the authors required that the set D̅(σ) = {σ(h)^-1σ(g)^-1σ(gh) | g,h ∈ G } is finite. However, these two definitions may be seen to be equivalent: Let G, H be groups and let σ G → H be a set-theoretic map. Then σ is a quasihomomorphism in the sense of Definition <ref> if and only if it is a quasihomomorphism in the sense of Fujiwara–Kapovich (<cit.>), i.e. if and only if D̅(σ) is finite. We postpone the proof to the Appendix; see Section <ref>.We use Definition <ref> as it is more natural in the context of group extensions. Every set-theoretic map σ G → H with finite image and every homomorphism are quasihomomorphisms for “trivial” reasons. We may also construct different quasihomomorphisms using quasimorphisms ϕ G →ℤ: Let C < H be an infinite cyclic subgroup and let τ ℤ→ H be a homomorphism s.t. τ(ℤ) = C. Then it is easy to check that for every quasimorphism ϕ G →ℤ, τ∘ϕ G → H is a quasihomomorphism.
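The defect set of Definition <ref> can be computed explicitly in a toy case. The following Python sketch (an illustration, not from the paper) takes the Beatty-type map σ(n) = ⌊n√2⌋ as a quasihomomorphism ℤ → ℤ (groups written additively, so σ(g)σ(h)σ(gh)^{-1} becomes σ(g)+σ(h)−σ(g+h)) and enumerates its defect.

```python
import math
from itertools import product

ALPHA = math.sqrt(2)
def sigma(n):
    """A quasihomomorphism Z -> Z that is not a homomorphism."""
    return math.floor(n * ALPHA)

# Additive analogue of sigma(g) sigma(h) sigma(gh)^{-1} over a window of pairs.
D = {sigma(g) + sigma(h) - sigma(g + h)
     for g, h in product(range(-200, 200), repeat=2)}
assert D == {-1, 0}   # floor(x) + floor(y) - floor(x+y) is always 0 or -1
```

So D(σ) = {−1, 0} is finite, confirming σ is a quasihomomorphism, while the defect group Δ(σ) = ⟨−1⟩ is all of ℤ, showing that the defect group can be much larger than the defect set itself.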
Fujiwara–Kapovich showed that if the target H is a torsion-free hyperbolic group then the above-mentioned maps are the only possible quasihomomorphisms. To be precise, in this case every quasihomomorphism σ G → H either has finite image, is a homomorphism, or maps into a cyclic subgroup of H; see Theorem 4.1 of <cit.>. We recall basic properties of quasihomomorphisms. For what follows we use the following convention. If α∈(G) and g ∈ G then ^α g denotes the element α(g) ∈ G. If a ∈ G is an element then ^a g denotes conjugation by a, i.e. the element a g a^-1∈ G. Sometimes we successively apply automorphisms and conjugations. For example, ^a^α g denotes the element a α(g) a^-1∈ G. Let σ G → H be a quasihomomorphism, let D and Δ be as above and let H_0 < H be the subgroup of H generated by σ(G). Then Δ is normal in H_0. The function ϕ G →(Δ) defined via ϕ(g) a ↦ ^σ(g) a has finite image and its quotient ψ G →(Δ) is a homomorphism with finite image. Moreover, the pair (,ϕ) satisfies ^ϕ(g)(h,i) (g,hi) = (g,h) (gh,i) for all g,h,i ∈ G. Proposition <ref> may be found in Lemma 2.5 of <cit.>. For any g,h,i ∈ G we calculate (g,h) (gh,i) = σ(g) σ(h) σ(i) σ(ghi)^-1 = σ(g)(h,i) σ(g)^-1(g,hi) = ^ϕ(g)(h,i) (g,hi) so (,ϕ) satisfies the identity of the proposition. Rearranging terms we see that ^σ(g)(h,i) = (g,h) (gh,i) (g,hi)^-1 so σ(g) conjugates any (h,i) ∈ D into the finite set D · D · D^-1. Here, for two sets A,B ⊂ H, we write A · B = {a · b ∈ H | a ∈ A, b ∈ B } and A^-1 denotes the set of inverses of A. This shows that Δ is a normal subgroup of H_0, as D generates Δ, and that ϕ G →(Δ) has finite image. To see that the induced map ψ G →(Δ) is a homomorphism, let g,h ∈ G and a ∈Δ. Observe that ^ϕ(g) ϕ(h) a = σ(g) σ(h) a σ(h)^-1σ(g)^-1 = ^(g,h)^σ(gh) a and hence ϕ(g) ∘ϕ(h) and ϕ(gh) differ by an inner automorphism. We conclude that ψ(g) ∘ψ(h) = ψ(gh) as elements in (Δ). So ψ G →(Δ) is a homomorphism. This shows Proposition <ref>.
In light of Proposition <ref> the extra assumption in Theorem <ref> that the conjugation by the quasihomomorphism induces a finite image in (N) is natural:Given a short exact sequence 1 → N → E → G → 1 that admits a quasihomomorphic section σ: G → E one may see that 1 →Δ→ E_0 → G → 1 is a short exact sequence whereΔ = Δ(σ) < N and E_0 = ⟨σ(G) ⟩ < E and the map to (Δ) has finite image. In fact this assumption is necessary as Example <ref> shows.Let σ G → H be a quasihomomorphism. Then the map σ̃ G → H defined viaσ̃(g) =1ifg = 1σ(g)elseis also a quasihomomorphism.An immediate calculation shows that D(σ̃) ⊂ D(σ) ∪{ 1 }.We will use the last proposition to assume that quasihomomorphic sections of extensions satisfy σ(1) = 1. § EXTENSIONS AND PROOF OF THEOREM <REF>Recall from the introduction that an extension of a group G by a group N is a short exact sequence1 → N → E → G → 1and that each such extension induces a homomorphism ψ G →(N). We will recall the construction of such ψ in Subsection <ref>. In Subsection <ref> we will define non-abelian cocycles (see Definition <ref>) for group extensions of G by N which induce ψ. Those are certain pairs of functions (, ϕ) where G × G → N and ϕ G →(N).We will see that every group extension of G by N inducing ψ gives rise to a non-abelian cocycle (, ϕ) in Proposition <ref>. On the other hand every non-abelian cocycle (, ϕ) gives rise to an extension 1 → N →(, ϕ) → G → 1; see Proposition <ref>. We will use this correspondence to prove Theorem <ref> in Subsection <ref>. The proof will follow the outline of <cit.>, Chapter VI, 6. §.§ Group extensionsLet 1 → N ι→ E π→ G → 1 be an extension of G by N and let σ G → E be any set-theoretic section of π E → G. Then σ G → E induces a map ϕ_σ G →(N) via ϕ_σ(g)n ↦ι^-1(^σ(g)ι(n)). See Remark <ref> for notation.Let σ'G → E be another section of π. For every g ∈ G, π∘σ(g) = π∘σ'(g) hence there is an element ν(g) ∈ N such that σ'(g) = ν(g) σ(g). Let ϕ_σ' G →(N) be the induced map to (N). 
We see that for every n ∈ N, ^ϕ_σ'(g) n = ^ν(g)( ^ϕ_σ(g) n ) so ϕ_σ'(g) and ϕ_σ(g) only differ by an inner automorphism. We conclude that the projection ψ G →(N) of both ϕ_σ and ϕ_σ' is the same map ψ G →(N). Hence ψ does not depend on the section. To see that ψ is a homomorphism, let g,h ∈ G. As π(σ(g) σ(h) σ(gh)^-1) = 1, there is an element ν(g,h) ∈ N such that ι(ν(g,h)) = σ(g) σ(h) σ(gh)^-1. In particular, for every n ∈ N, ^ϕ_σ(g) ∘ϕ_σ(h) n = ^ν(g,h)( ^ϕ_σ(gh) n ) and hence ϕ_σ(g) ∘ϕ_σ(h) and ϕ_σ(gh) only differ by an inner automorphism, so ψ(g) ∘ψ(h) = ψ(gh) and ψ G →(N) is indeed a homomorphism. If 1 → N ι_1→ E_1 π_1→ G → 1 and 1 → N ι_2→ E_2 π_2→ G → 1 are two equivalent group extensions (see Definition <ref>) with isomorphism Φ E_1 → E_2 and if σ_1 G → E_1 is a section of π_1 E_1 → G then it is easy to see that σ_2 = Φ∘σ_1 G → E_2 is a section of π_2 E_2 → G and that ϕ_σ_1 = ϕ_σ_2. Hence the induced homomorphism ψ G →(N) is the same. We collect these facts in a proposition: Let 1 → N ι→ E π→ G → 1 be a group extension of G by N. Any two sections σ, σ' G → E of π induce the same homomorphism ψ G →(N). Moreover, two equivalent group extensions (see Definition <ref>) induce the same homomorphism ψ G →(N).

§.§ Non-abelian cocycles

To show Theorem <ref> we will transform the problem of finding all group extensions of G by N which induce ψ to the problem of finding certain pairs (, ϕ) called non-abelian cocycles, where G × G → N and ϕ G →(N) are certain set-theoretic functions. Let G, N be groups and let ψ G →(N) be a homomorphism. Let : G × G → N and ϕ G →(N) be set-theoretic functions such that (i) ϕ G →(N) projects to ψ G →(N), ϕ(1) = 1 and for all g ∈ G, (1,g) = (g,1) = 1, (ii) for all g,h ∈ G and n ∈ N, ^(g,h) n = ^ϕ(g) ϕ(h) ϕ(gh)^-1 n and (iii) for all g,h,i ∈ G, ^ϕ(g)(h,i) (g,hi) = (g,h) (gh,i). Then we say that (,ϕ) is a non-abelian cocycle with respect to (G, N, ψ).
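For a concrete instance of this definition (an illustrative sketch, not from the text): take E = Z/4 written additively, N = {0, 2} < E, and G = E/N = Z/2, with trivial ψ. The section σ(g) = g gives the familiar "carry" cocycle, and the three conditions can be checked by brute force; condition (ii) is automatic here since E is abelian.

```python
E_MOD = 4          # E = Z/4, written additively
G = [0, 1]         # G = E/N = Z/2, where N = {0, 2} < Z/4


def sigma(g):
    """Set-theoretic section G -> E sending the identity to the identity."""
    return g


def eps(g, h):
    """eps(g,h) = sigma(g) + sigma(h) - sigma(gh), an element of N = {0, 2}."""
    return (sigma(g) + sigma(h) - sigma((g + h) % 2)) % E_MOD


# Condition (i): eps is normalised (phi is trivial, so it projects to the trivial psi).
normalised = all(eps(0, g) == 0 and eps(g, 0) == 0 for g in G)

# Condition (ii) is automatic: N is central in E, so both conjugations are trivial.
# Condition (iii), written additively with trivial phi:
#   eps(h,i) + eps(g, h+i) == eps(g,h) + eps(g+h, i)  in N, for all triples.
cocycle_ok = normalised and all(
    (eps(h, i) + eps(g, (h + i) % 2)) % E_MOD
    == (eps(g, h) + eps((g + h) % 2, i)) % E_MOD
    for g in G for h in G for i in G
)
```

Here the only non-trivial value is eps(1,1) = 2, the "carry" produced when lifting 1 + 1 from Z/2 to Z/4.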
The idea of studying extensions using these non-abelian cocycles is classical; see Chapter IV, 5.6 of <cit.>. Here, the author simply calls this a “cocycle condition”. In order not to confuse it with the cocycle condition of an ordinary 2-cocycle we call it “non-abelian cocycle” with respect to the data for group extensions. Consider Remark <ref> for the notation of conjugation and action of automorphisms. Every group extension 1 → N ι→ E π→ G → 1 that induces ψ G →(N) yields a non-abelian cocycle with respect to (G,N,ψ): As in Subsection <ref>, pick a set-theoretic section σ G → E such that σ(1) = 1, define ϕ_σ G →(N) via ^ϕ_σ(g) n = ι^-1( ^σ(g)ι(n) ) and define _σ G × G → N via _σ (g,h) ↦ι^-1(σ(g) σ(h) σ(gh)^-1). Observe that σ is a quasihomomorphism if and only if _σ has finite image. Let 1 → N → E → G → 1 be an extension which induces ψ. *For any section σ G → E with σ(1) = 1 the pair (_σ, ϕ_σ) is indeed a non-abelian cocycle with respect to (G, N, ψ). *Let ϕ G →(N) be a lift of ψ with ϕ(1) = 1. Then there is a section σ G → E with σ(1) = 1 such that ϕ_σ = ϕ, for ϕ_σ as above. If the extension is in addition bounded (see Definition <ref>) and ϕ has finite image, then σ may be chosen to be a quasihomomorphism with σ(1) = 1. Part (1) is classical and may be found in the proof of Theorem 5.4 of <cit.>. To see (<ref>), let τ G → E be any section of π E → G with τ(1) = 1. Both ϕ and ϕ_τ are lifts of ψ and hence differ only by an inner automorphism. Let ν G → N be a representative of such an inner automorphism with ν(1)=1. Then for every n ∈ N, g ∈ G, ^ϕ(g) n = ^ν(g)( ^ϕ_τ(g) n ) = ^( ν(g) τ(g) ) n. Let σ G → E be the section defined via σ(g) = ν(g) τ(g). Then we see that ϕ = ϕ_σ. Assume now that the extension is in addition bounded and that ϕ has finite image. Since the extension is bounded, there is a section τ G → E which is a quasihomomorphism and such that ϕ_τ G →(N) has finite image. By Proposition <ref> we may assume that τ(1) = 1.
We see that we may choose ν G → N to also have finite image.We claim that the section σ G → E defined via σ g ↦ν(g) τ(g) is a quasihomomorphism. Indeed for any g,h ∈ G we calculateσ(g) σ(h) σ(gh)^-1 =ν(g) τ(g) ν(h) τ(h) τ(gh)^-1ν(gh)^-1= ν(g) ^τ(g)ν(h) ( τ(g) τ(h) τ(gh)^-1) ν(gh)^-1∈𝒩ℳ D(τ) 𝒩^-1where 𝒩 = {ν(g) | g ∈ G }, the image of ν, ℳ = { ^τ(g)ν(h) | g,h } which is finite. So all sets on the right hand side are finite and hence σ is a quasihomomorphism. This concludes the proof of Proposition <ref>. §.§ Non-abelian cocycles yield group extensions Let (, ϕ) be a non-abelian cocycle with respect to (G,N,ψ). We now describe how (, ϕ) gives rise to a group extension 1 → N →(,ϕ) → G → 1 which induces ψ. For this we define a group structure on the set N × G via(n_1, g_1) · (n_2, g_2) = (n_1 ^ϕ(g_1)n_2 (g_1,g_2), g_1 g_2)for two elements (n_1, g_1), (n_2, g_2) ∈ N × G. We denote this group by (,ϕ) and define the mapsι N →(, ϕ) via ι n ↦ (n,1), π(, ϕ) → G via π (n,g) ↦ g and σ G →(, ϕ) via σ g ↦ (1, g). Let (, ϕ ) be a non-abelian cocycle with respect to (G, N , ψ) and let (,ϕ), ι N →(,ϕ), π(,ϕ) → G and σ G →(,ϕ) be as above.Then * 1 → N ι→(, ϕ) π→ G → 1 is an extension of G by N inducing ψ G →(N). Moreover, σ is a section of π such that = _σ and ϕ = ϕ_σ.* If both ϕ G →(N) and G × G → N have finite image then the extension we obtain is bounded (see Definition <ref>).Part (1) is classical; see Chapter IV.6 of <cit.> where such extensions from non-abelian cocycles are implicitly constructed.For part (2), suppose that bothand ϕ have finite image then the section σ G →(, ϕ) is a quasihomomorphism as the defect is just the image ofand, moreover, the map ϕ_σ = ϕ has finite image. Hence the extension is bounded. This concludes the proof of Proposition <ref>.For the proof of Theorem <ref> we will need to determine when two non-abelian cocycles correspond up to equivalence to the same group extension. 
We will need the following statement which is stated, though not proved, at the end of IV.6 in <cit.>.Let G, N be groups, let ψ G →(N) be a homomorphism and let ϕ G →(N) be a lift with ϕ(1) = 1. Let , 'G × G → N be two set-theoretic functions such that for all g ∈ G, (1,g) = (g,1) = 1 and '(1,g) = '(g,1) = 1. *If (, ϕ) is a non-abelian cocycle with respect to (G,N, ψ) then (', ϕ) is a non-abelian cocycle with respect to (G, N, ψ) if and only if there is a map G × G → Z(N) = Z satisfying δ^2= 0 such that for all g,h ∈ G, '(g,h) = (g,h) ·(g,h) and for all g ∈ G, (1,g) = (g,1) = 1.* If both (, ϕ) and (', ϕ) are non-abelian cocycles with respect to (G,N, ψ) then the group extensionscorresponding to (, ϕ) and (', ϕ) are equivalent if and only if there is a map G → Z = Z(N) with (1) = 1 such that (g,h) = (δ^1 )(g,h) '(g,h). Recall that Z(N) = Z denotes the centre of N.To see (<ref>), note that for every g,h ∈ G, n ∈ N, ^(g,h) n = ^ϕ(g) ϕ(h) ϕ(gh)^-1 n = ^'(g,h) n by (ii) of Definition <ref>. Hence there is an element (g,h) ∈ Z(N) such that '(g,h) = (g,h) (g,h) and for all g ∈ G, (1,g) = (g,1) = 1. Moreover, for every g,h,i ∈ G, ^ϕ(g)'(h,i) '(g,hi)= '(g,h) '(gh,i)^ϕ(g)(h,i) ^ϕ(g)(h,i) (g,hi) (g,hi)= (g,h) (g,h) (gh,i) (gh,i)(δ^2(g,h,i)) ^ϕ(g)(h,i) (g,hi)= (g,h) (gh,i)δ^2 (g,h,i)= 1and hence for δ^2= 0 if we restrict to Z. On the other hand the same calculation shows that if (, ϕ) is a non-abelian cocycle and G × G → Z(N) satisfies δ^2= 0 then (', ϕ) is a non-abelian cocycle with '(g,h) = (g,h) (g,h). For (<ref>) suppose that there is a G → Z as in the proposition. Define the map Φ(, ϕ) →(', ϕ) via Φ (n,g) ↦ (n (g), g). Then for every (n_1,g_1), (n_2,g_2) ∈(, ϕ),Φ( (n_1,g_1) ) ·Φ( (n_2,g_2) )= (n_1 (g_1), g_1) · (n_2 (g_2),g_2) = (n_1 ^ϕ(g_1) n_2 (g_1) ^ϕ(g_2)(g_2) '(g_1, g_2), g_1 g_2) =(n_1 ^ϕ(g_1) n_2 (g_1 g_2) δ^1 (g_1,g_2) '(g_1, g_2), g_1 g_2)=(n_1 ^ϕ(g_1) n_2 (g_1,g_2) (g_1 g_2) , g_1 g_2)= Φ( (n_1,g_1) · (n_2,g_2))and hence Φ is a homomorphism. 
It is easy to see that Φ is an isomorphism and that Φ fits into the diagram of Definition <ref>. Hence the extensions corresponding to (, ϕ) and (', ϕ) are equivalent. On the other hand suppose that the extensions 1 → N ι→(, ϕ) π→ G → 1 and 1 → N ι'→(', ϕ) π'→ G → 1 are equivalent with sections σ, σ' as before and with isomorphism Φ(, ϕ) →(', ϕ). Note that for all g ∈ G, π' ∘Φ( (1,g) ) = g and hence the second coordinate of Φ((1,g)) ∈(, ϕ) is g. Define G → N via Φ((1,g)) = ((g),g). Observe that ^σ(g)ι(n) = (^ϕ(g) n, 1) and ^σ'(g)ι(n) = (^ϕ(g) n,1) and hence σ(g) and σ'(g) only differ by an element in the centre, hence (g) ∈ Z. Note that for every g,h ∈ G, ((g,h),1) = σ(g) σ(h) σ(gh)^-1, Φ( ((g,h),1) ) = Φ( σ(g) ) ·Φ( σ(h) ) ·Φ( σ(gh) )^-1 and ((g,h),1) = ((g) ^ϕ(g)(h) (gh)^-1'(g,h), 1). Comparing the last lines we see that (g,h) = δ^1 (g,h) '(g,h), which concludes the proposition.

§.§ Proof of Theorem <ref>

We can now prove Theorem <ref> using the correspondence of group extensions with non-abelian cocycles. Let G and N be groups and suppose that Z = Z(N), the centre of N, is equipped with a norm · such that (Z, ·) has finite balls. Furthermore, let ψ G →(N) be a homomorphism with finite image. There is a class ω_b = ω_b(G,N,ψ) ∈_b^3(G,Z) such that ω_b = 0 in _b^3(G,Z) if and only if ℰ_b(G,N,ψ) ≠∅ and c^3(ω_b) = ω is the obstruction of Theorem <ref>. If ℰ_b(G,N,ψ) ≠∅, then the bijection between the sets ^2(G,Z) and ℰ(G,N,ψ) described in Theorem <ref> restricts to a bijection between im(c^2) ⊂^2(G,Z) and ℰ_b(G,N,ψ) ⊂ℰ(G, N, ψ). Recall that a normed G-module Z is said to have finite balls if for every K > 0 the set { z ∈ Z | z ≤ K } is finite. We will split the proof into several claims. Claim <ref> associates to a tuple (G,N, ψ) as in the theorem a function ζ G × G → N which we then use to define the obstruction class ω_b = [_b] ∈_b^3(G,Z) in Equation (<ref>). In Claims <ref> and <ref> we see that _b is indeed a bounded cocycle and that ω_b = [_b] ∈_b^3(G,Z) is independent of the choices made.
Finally in Claim <ref> we see that ω_b indeed encodes if (bounded) extensions for the data (G,N, ψ) exist. In Claim <ref> we construct a bijection Ψ between ^2(G,Z) (resp. (c^2)) and (bounded) extensions.Let G, N, ψ G →(N) and Z, · beas in the theorem. Choose a lift ϕ G →(N) of ψ with finite image such that ϕ(1) = 1.There is a function ζ G × G → N such that for all g,h ∈ G, n ∈ N,^ζ(g,h) n = ^ϕ(g) ϕ(h) ϕ(gh)^-1 nwhere ζ has finite image in N and for all g ∈ G, ζ(g,1)=ζ(1,g)=1.For g,h ∈ G we have that ψ(g) ψ(h) ψ(gh)^-1 = 1, since ψ is a homomorphism. Hence for every g,h ∈ G, the map ϕ(g) ϕ(h) ϕ(gh)^-1∈(N) is an inner automorphism.As ϕ has finite image in (N), the function (g,h) ↦ϕ(g) ϕ(h) ϕ(gh)^-1 has finite image in (N) < (N). We may find a lift ζ G × G → N of this map such that ζ has finite image and such that ζ(1,g) = ζ(g,1) = 1. This shows Claim <ref>. We now define the obstruction class.Define _bG × G × G → N so that for all g,h,i ∈ G, ^ϕ(g)ζ(h,i) ζ(g,hi) = _b(g,h,i) ζ(g,h) ζ(gh,i)and observe that _b necessarily has finite image as both ζ G × G → N and ϕ G →(N) have finite image. Also, observe that _b(g,h,i) = 1 if one of g,h,i ∈ G is trivial.The function _bG × G × G → N maps to Z=Z(N)<N the centre of N. Moreover, _b is a non-degenerate bounded cocycle, i.e. δ^3 _b = 0. First we show that _b maps to the centre of N. Observe that for all g,h,i ∈ G and n ∈ N,^^ϕ(g)ζ(h,i) ζ(g,hi) n= ^ϕ(g) ϕ(h) ϕ(i) ϕ(hi)^-1ϕ(g)^-1 (^ϕ(g) ϕ(hi) ϕ(ghi)^-1 n ) = ^ϕ(g) ϕ(h) ϕ(i) ϕ(ghi)^-1 n = ^ϕ(g) ϕ(h) ϕ(gh)^-1 ( ^ϕ(gh) ϕ(i) ϕ(ghi)^-1n ) = ^ζ(g,h) ζ(gh,i) nand hence ^ϕ(g)ζ(h,i) ζ(g,hi) and ζ(g,h) ζ(gh,i) induce the same map by conjugation on N and hence just differ by an element of the centre so _b(g,h,i) ∈ Z.Since ζ and ϕ have finite image, so does _b, i.e. _b ∈ C^3_b(G,Z) and it is easy to see that _b is non-degenerate. To see that _b satisfies δ^3 _b = 0 we calculate ^ϕ(g) ϕ(h)ζ(i,k) ^ϕ(g)ζ(h,ik) ζ(g,hik)for g,h,i,k ∈ G in two different ways. 
First observe that ^ϕ(g) ϕ(h)ζ(i,k) ^ϕ(g)ζ(h,ik) ζ(g,hik) = ^ϕ(g) ϕ(h)ζ(i,k) ( ^ϕ(g)ζ(h,ik) ζ(g,hik) ) = ^ϕ(g) ϕ(h)ζ(i,k) _b(g,h,ik) ζ(g,h) ζ(gh,ik) = ζ(g,h) ^ϕ(gh)ζ(i,k) ζ(gh,ik) _b(g,h,ik) = ζ(g,h) ζ(gh,i) ζ(ghi,k) _b(g,h,ik) _b(gh,i,k), then observe that ^ϕ(g) ϕ(h)ζ(i,k) ^ϕ(g)ζ(h,ik) ζ(g,hik) = ( ^ϕ(g) ϕ(h)ζ(i,k) ^ϕ(g)ζ(h,ik) ) ζ(g,hik) = ^ϕ(g)( _b(h,i,k) ζ(h,i) ζ(hi,k) ) ζ(g,hik) = ^ϕ(g)ζ(h,i) ζ(g,hi) ζ(ghi,k) ^ϕ(g)_b(h,i,k) _b(g,hi,k) = ζ(g,h) ζ(gh,i) ζ(ghi,k) _b(g,h,i) ^ϕ(g)_b(h,i,k) _b(g,hi,k). Finally, comparing these two terms yields δ^3 _b(g,h,i,k) = ^ϕ(g)_b(h,i,k) - _b(gh,i,k) + _b(g,hi,k) - _b(g,h,ik) + _b(g,h,i) = 0. So _b indeed defines a bounded cocycle. This shows Claim <ref>. The class [_b] ∈^3_b(G,Z) is independent of the choices made for ζ and ϕ. Let ϕ, ϕ' G →(N) be two lifts of ψ as above and choose corresponding functions ζ, ζ' G × G → N representing the defect of ϕ and ϕ' as above. There is a function ν G → N with finite image such that ϕ(g) = ν̅(g) ϕ'(g) where ν̅(g) is the element in Inn(N) ⊂(N) corresponding to the conjugation by ν(g). We calculate ϕ(g) ϕ(h) ϕ(gh)^-1 = ν̅(g) ^ϕ'(g)ν̅(h) ( ϕ'(g) ϕ'(h) ϕ'(gh)^-1) ν̅(gh)^-1. We see that for every n ∈ N, ^ζ(g,h) n = ^ϕ(g) ϕ(h) ϕ(gh)^-1 n = ^ν̅(g) ^ϕ'(g)ν̅(h) ( ϕ'(g) ϕ'(h) ϕ'(gh)^-1) ν̅(gh)^-1 n = ^ν(g) ^ϕ'(g)ν(h) ζ'(g,h) ν(gh)^-1 n. So ζ(g,h) and ν(g) ^ϕ'(g)ν(h) ζ'(g,h) ν(gh)^-1 only differ by an element of the centre. Hence define z(g,h) ∈ Z via ζ(g,h) = z(g,h) ν(g) ^ϕ'(g)ν(h) ζ'(g,h) ν(gh)^-1 and note that z G × G → Z is a function with finite image as all functions involved in its definition have finite image. It is a calculation to show that _b, the obstruction defined via the choices ϕ and ζ, and '_b, the obstruction defined via the choices ϕ' and ζ', differ by δ^2 z and hence define the same class in bounded cohomology. This shows Claim <ref>. We call this class [_b] ∈^3_b(G,Z) the obstruction for extensions of G by N inducing ψ and denote it by ω_b(G,N, ψ) or ω_b.
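The displayed formula for δ^3 follows the usual sign convention for the inhomogeneous coboundary, for which δ^3 ∘ δ^2 = 0. As a quick numerical sanity check of these conventions (an illustrative sketch, not from the text), one can verify δ^3(δ^2 β) = 0 by brute force over a small group with trivial action:

```python
import itertools
import random

n = 3
G = list(range(n))          # G = Z/3, written additively; trivial action on Z

random.seed(0)
# An arbitrary integer-valued 2-cochain beta in C^2(G, Z).
beta = {(g, h): random.randint(-5, 5) for g in G for h in G}


def d2(b):
    """Coboundary C^2(G,Z) -> C^3(G,Z), trivial action:
    (d2 b)(g,h,i) = b(h,i) - b(gh,i) + b(g,hi) - b(g,h)."""
    return {(g, h, i): b[(h, i)] - b[((g + h) % n, i)]
            + b[(g, (h + i) % n)] - b[(g, h)]
            for g, h, i in itertools.product(G, repeat=3)}


def d3(c):
    """Coboundary C^3(G,Z) -> C^4(G,Z), trivial action, matching the
    sign pattern of the delta^3 formula in the text."""
    return {(g, h, i, k): c[(h, i, k)] - c[((g + h) % n, i, k)]
            + c[(g, (h + i) % n, k)] - c[(g, h, (i + k) % n)] + c[(g, h, i)]
            for g, h, i, k in itertools.product(G, repeat=4)}


is_zero = all(v == 0 for v in d3(d2(beta)).values())
```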
We have seen that ω_b is a well defined class that depends only on G, N and ψ G →(N). Next we show that it is an obstruction to (bounded) extensions. Let ω_b ∈^3_b(G,Z) be as above. Then ω_b = 0 ∈^3_b(G,Z) if and only if _b(G,N,ψ) ≠∅. Moreover, c^3(ω_b) is equal to the classical obstruction. Recall that c^3 ^3_b(G, Z) →^3(G,Z) denotes the comparison map. Suppose that c^3(ω_b) = 0 ∈^3(G,Z). Then there is β∈ C^2(G,Z), possibly with unbounded, i.e. infinite, image, such that _b(g,h,i) = ^ϕ(g)β(h,i) - β(gh,i) + β(g,hi) - β(g,h) for all g,h,i ∈ G. Moreover, we may choose β such that for all g ∈ G, β(1,g) = β(g,1) = 0 by Proposition <ref> since _b is non-degenerate. Define G × G → N via (g,h) = ζ(g,h) β(g,h)^-1. We will show that (, ϕ) is a non-abelian cocycle with respect to (G, N, ψ). Indeed, ϕ is a lift of ψ which satisfies ϕ(1) = 1 and for all g ∈ G, (g,1) = (1,g) = 1. Moreover, observe that for all g,h ∈ G and n ∈ N, ^(g,h) n = ^ζ(g,h) β(g,h)^-1 n = ^ζ(g,h) n = ^ϕ(g) ϕ(h) ϕ(gh)^-1 n as β(g,h) is in the centre of N. Finally, for all g,h,i ∈ G we calculate ^ϕ(g)ζ(h,i) ζ(g,hi) = _b(g,h,i) ζ(g,h) ζ(gh,i), hence ^ϕ(g)( ζ(h,i) β(h,i)^-1) ζ(g,hi) β(g,hi)^-1 = ζ(g,h) β(g,h)^-1ζ(gh,i) β(gh,i)^-1, i.e. ^ϕ(g)(h,i) (g,hi) = (g,h) (gh,i), and hence indeed (, ϕ) is a non-abelian cocycle with respect to (G,N,ψ). By Proposition <ref>, (, ϕ) gives rise to an extension of G by N which induces ψ and hence (G,N,ψ) ≠∅. Analogously, suppose that ω_b = 0 in ^3_b(G, Z). Then we may find β∈ C^2_b(G, Z) satisfying Equation (<ref>), but with bounded, i.e. finite, image. Hence if we set (g,h) = ζ(g,h) β(g,h)^-1, we see that (g,h) has finite image as well, as both ζ and β have. By the above argument (, ϕ) is a non-abelian cocycle and, as both and ϕ have finite image, (,ϕ) gives rise to a bounded extension for (G, N, ψ) by (2) of Proposition <ref>. Hence _b(G,N,ψ) ≠∅. On the other hand, suppose that (G,N,ψ) ≠∅. This means that there is some extension 1 → N → E → G → 1 of G by N which induces ψ.
By Proposition <ref>, there is a section σ G → E such that ϕ_σ = ϕ and then (_σ, ϕ) is a non-abelian cocycle with respect to (G, N, ψ). Observe that for all g,h ∈ G, n ∈ N, ^_σ(g,h) n = ^ϕ(g) ϕ(h) ϕ(gh)^-1 n = ^ζ(g,h) n and hence there is a β(g,h) ∈ Z<N such that _σ(g,h) = ζ(g,h) β(g,h)^-1. As (_σ, ϕ) satisfies (iii) of Definition <ref>, we see that for all g,h,i ∈ G, ^ϕ(g)(_σ(h,i)) _σ(g,hi) = _σ(g,h) _σ(gh,i), hence ^ϕ(g)( ζ(h,i) β(h,i)^-1) ζ(g,hi) β(g,hi)^-1 = ζ(g,h) β(g,h)^-1ζ(gh,i) β(gh,i)^-1, i.e. ^ϕ(g)ζ(h,i) ζ(g,hi) = ( ^ϕ(g)β(h,i) - β(gh,i) + β(g,hi) - β(g,h) ) ζ(g,h) ζ(gh,i), so _b(g,h,i) = ^ϕ(g)β(h,i) - β(gh,i) + β(g,hi) - β(g,h) = δ^2 β (g,h,i) and hence c^3(ω_b) = 0 ∈^3(G, Z). Now suppose that _b(G,N,ψ) ≠∅. This means that there is some extension 1 → N → E → G → 1 of G by N which induces ψ and which is in addition bounded. Applying (2) of Proposition <ref> once more we see that there is a section σ G → E such that σ is a quasihomomorphism satisfying σ(1) = 1 by Proposition <ref> and ϕ_σ = ϕ. As σ is a quasihomomorphism, _σ has finite image. As _σ and ζ have finite image, the map β∈ C^2(G, Z) defined via _σ(g,h) = ζ(g,h) β(g,h)^-1 also has finite image and hence β∈ C^2_b(G, Z). The above calculations show that _b = δ^2 β and hence ω_b = 0 in ^3_b(G, Z). This finishes the proof of Claim <ref>. Now suppose that _b(G, N, ψ) ≠∅. Then there is an extension 1 → N → E_0 → G → 1 which induces ψ and a section σ_0 G → E_0 such that ϕ = ϕ_σ_0 and _0 :=_σ_0 have finite image and (_0, ϕ) is a non-abelian cocycle with respect to (G, N, ψ). Let Ψ ^2(G, Z) →(G, N, ψ) be the map defined via Ψ [α] ↦( 1 → N →(α·_0, ϕ) → G → 1 ), where α is a non-degenerate representative. Then Ψ is a bijection which restricts to a bijection between (c^2) ⊂^2(G,Z) and _b(G, N, ψ) ⊂(G, N, ψ). Here α·_0 denotes the map α·_0 G × G → N defined via α·_0 (g,h) ↦α(g,h) ·_0(g,h). We first show that the above map is well defined: Let α∈ C^2(G, Z) be a non-degenerate cocycle.
Then δ^2 α = 0 and hence by Proposition <ref>, (α·_0, ϕ) is a non-abelian cocycle with respect to (G,N, ψ). If [α'] = [α] in ^2(G,Z) then there is an element ∈ C^1(G, Z) such that α = α' + δ^1. Then, according to point (2) of Proposition <ref>, the group extensions are equivalent. Hence Ψ is well defined. Now suppose that Ψ([α]) = Ψ([α']). Then, according to Proposition <ref> (2), we have that there is a ∈ C^1(G, Z) such that (δ^1 ) α' _0 = α_0 and hence δ^1 α' = α. Hence [α] = [α'] in ^2(G,Z), so Ψ is injective. Next we show that Ψ is surjective. Let 1 → N → E' → G → 1 be any extension of G by N inducing ψ. By Proposition <ref>, there is a section σ' G → E' such that ϕ_σ' = ϕ and such that (', ϕ) is a non-abelian cocycle with ' = _σ'. Hence both (', ϕ) and (_0, ϕ) are non-abelian cocycles with respect to (G, N, ψ) and by Proposition <ref> there is a map β∈ C^2(G, Z) such that ' = β·_0 and δ^2 β = 0. Then β induces a class and hence Ψ([β]) corresponds to this extension. This shows that Ψ is surjective and hence that Ψ is a bijection. If 1 → N → E' → G → 1 is a bounded extension then we may choose a section σ' G → E' such that ' as above has finite image. Moreover, β as above is bounded as both ' and _0 are. Hence [β] ∈(c^2) and hence Ψ((c^2)) ⊃_b(G,N,ψ). Suppose that [α] ∈(c^2). Then we may assume that α∈ C^2_b(G, Z), i.e. that α has finite image and that α is non-degenerate. Hence α·_0 has finite image and hence the extension corresponding to (α·_0, ϕ) is bounded by (2) of Proposition <ref>. This shows that Ψ((c^2)) ⊂_b(G,N, ψ). This concludes the proof of Theorem <ref>.

§ THE SET OF OBSTRUCTIONS AND EXAMPLES

Theorem <ref> provides a characterisation of non-trivial classes ω_b ∈^3_b(G,Z), called obstructions. One may wonder which such classes ω_b ∈^3_b(G,Z) arise in this way.
Recall that in the case of general group extensions, every cocycle in ^3(G,Z) may be realised as such an obstruction: For any G-module Z and any α∈^3(G, Z) there is a group N with Z = Z(N) and a homomorphism ψ G →(N) extending the G-action on Z such that α = ω(G, N, ψ) in ^3(G, Z). For a normed G-module Z with finite balls and a G-action with finite image define the set of bounded obstructions 𝒪_b(G,Z) ⊂_b^3(G,Z) as 𝒪_b(G,Z) = {ω_b(G,N,ψ) ∈^3_b(G,Z) | Z = Z(N), ^ψ(g) z = g · z, ψ G →(N) finite}. We refer to the introduction for the definition of ℱ(G,Z) and observe that Theorem <ref> from the introduction may now be restated as follows: Let G be a group and Z be a normed G-module with finite balls and a G-action with finite image. Then 𝒪_b(G,Z) = ℱ(G,Z) as subsets of ^3_b(G,Z). This fully characterises the obstructions we obtain in bounded cohomology. We have just seen that 𝒪_b(G,Z) ⊂ℱ(G,Z), as we may choose ω_b in the proof of Theorem <ref> so that it factors through (N) via ψ G →(N) and (N) is a finite group. To show ℱ(G,Z) ⊂𝒪_b(G,Z) we need to show that for every finite group M and any class α∈^3(M,Z) there is a group N and a homomorphism ψ M →(N) which induces α as its obstruction. We recall a construction from <cit.>. Working with non-degenerate cocycles (see Subsection <ref>) we may assume that α(1,g,h) = α(g,1,h) = α(g,h,1) = 0 for all g,h ∈ M. Define the abstract symbols ⟨ g,h ⟩ for each 1 ≠ g,h ∈ M and set ⟨ g,1 ⟩ = ⟨ 1,g ⟩ = ⟨ 1,1 ⟩ = 1 for the abstract symbol 1. Let F be the free group on these symbols and set 1 to be the identity element and set N = Z × F. Define the function ϕ M →(N) so that for g ∈ M the action of ϕ(g) on F is given by ^ϕ(g)⟨ h,i ⟩ = α(g,h,i) ⟨ g,h ⟩⟨ gh,i ⟩⟨ g,hi ⟩^-1 and so that the action of ϕ(g) on Z is given by the M-action on Z. A direct calculation yields that for each g ∈ M, the map ϕ(g) N → N indeed defines an isomorphism. Here, we need the assumption α(1,g,h)=α(g,1,h)=α(g,h,1)=0.
It can be seen that for all n ∈ N and g_1, g_2 ∈ M, ^ϕ(g_1) ϕ(g_2) n = ^⟨ g_1,g_2 ⟩ ^ϕ(g_1 g_2) n, where we have to use the fact that α is a cocycle. Hence, ϕ M →(N) is well defined and induces a homomorphism ψ M →(N). It is easy to see that ψ induces the M-action on Z. If M ≇_2, the centre of N is Z. In this case, to calculate ω_b(M,N,ψ) we choose as representatives for ϕ(g)ϕ(h)ϕ(gh)^-1 simply ⟨ g,h ⟩ and then see by definition that ω_b(M,N,ψ) is precisely α. If M = _2 then the centre of N is not Z. However, we can enlarge M by setting M̃ = M ×_2. We have both a homomorphism π M̃→ M via (m,z) ↦ m and a homomorphism ι M →M̃ via m ↦ (m,1) such that π∘ι = id_M. Let α̃∈^3(M̃,Z) be the pullback of α via π. Let Ñ be the group constructed as above with this cocycle and let ϕ̃ M̃→(Ñ) and ψ̃ M̃→(Ñ) be the corresponding functions. The centre of Ñ is Z. Set ψ M →(Ñ) via ψ = ψ̃∘ι. Then the obstruction ω_b(M, Ñ, ψ) can be seen to be α. This shows Theorem <ref>.

§ EXAMPLES AND GENERALISATIONS

We discuss examples in Subsection <ref> where we show in particular that the requirements in Definition <ref> are necessary. Subsection <ref> discusses possible generalisations of Theorem <ref>.

§.§ Examples

The subset ℰ_b(G,N,ψ) ⊂ℰ(G,N,ψ) is generally neither empty nor all of ℰ(G,N,ψ). For any hyperbolic group we have ℰ_b(G,N,ψ) = ℰ(G,N,ψ) as the comparison map is surjective (<cit.>). We give different examples where the inclusion ℰ_b(G,N,ψ) ⊂ℰ(G,N,ψ) is strict. The examples we discuss will use the Heisenberg group . This group fits into the central extension 1 →→→^2 → 1. Elements of the Heisenberg group will be described by [c,z], where c ∈ and z ∈^2. The group multiplication is given by [c_1,z_1] · [c_2, z_2] = [c_1 + c_2 + ω(z_1, z_2), z_1 + z_2] where ω(z_1,z_2) = det(z_1,z_2), the determinant of the 2× 2-matrix (z_1,z_2). Observe that [c,z]^-1 = [-c,-z], and that ^[c_1,z_1] [c_2,z_2] = [c_2 + 2 ω(z_1, z_2), z_2].
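These identities are easy to verify mechanically. A small computational sketch (illustrative only, not part of the text), modelling an element [c,z] of the Heisenberg group as a pair (c, z):

```python
def omega(z1, z2):
    """omega(z1, z2) = det(z1, z2) for z1, z2 in Z^2."""
    return z1[0] * z2[1] - z1[1] * z2[0]


def mul(a, b):
    """[c1,z1] . [c2,z2] = [c1 + c2 + omega(z1,z2), z1 + z2]."""
    (c1, z1), (c2, z2) = a, b
    return (c1 + c2 + omega(z1, z2), (z1[0] + z2[0], z1[1] + z2[1]))


def inv(a):
    """[c,z]^{-1} = [-c,-z]."""
    c, z = a
    return (-c, (-z[0], -z[1]))


def conj(a, b):
    """^a b = a b a^{-1}."""
    return mul(mul(a, b), inv(a))


g = (3, (1, 2))
h = (-1, (4, 0))
# Conjugation formula from the text: ^[c1,z1] [c2,z2] = [c2 + 2*omega(z1,z2), z2].
formula = (h[0] + 2 * omega(g[1], h[1]), h[1])
```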
The group of inner automorphisms is isomorphic to ^2 with the identification ϕ ^2 →() via ^ϕ(g) [c,z] = [c + 2 ω(g,z), z]. It is well-known that ω generates ^2(^2, ) and that ω cannot be represented by a bounded cocycle, i.e. the comparison map c^2 ^2_b( ^2, ) →^2( ^2, ) is trivial. Let G = ^2, N = and let ψ G →(N) be the homomorphism with trivial image. The direct product 1 → N → N × G → G → 1 clearly has a quasihomomorphic section that induces a finite map to (N) and hence ℰ_b(G, N, ψ) ≠∅. Let Z(N) = be equipped with the standard norm. Note that (c^2) = { 0 }, for c^2 ^2_b(^2, ) →^2(^2, ) the comparison map. By Theorem <ref>, _b(^2, , ψ) consists of exactly one element, which is the direct product described above. Note that the Heisenberg extension 1 →→→^2 → 1 is not equivalent to (<ref>). This can be seen as the Heisenberg group is not abelian. Hence this extension is not bounded. So in this case ∅≠ℰ_b(^2,,id) ⊊ℰ(^2,,id). The assumption that the quasihomomorphism σ G → E has to induce a map ϕ_σ G →(N) with finite image may seem artificial, as the induced homomorphism ψ G →(N) already has finite image. However, it is necessary as the following example shows. Consider extensions of G = ^2 by N = which induce ψ G →(N) with trivial image. Again, _b(G, N, ψ) is not empty as it contains the extension corresponding to the direct product 1 →→×^2 →^2 → 1. Moreover, Z(N) = Z() = and just as in Example <ref> the comparison map c^2 ^2_b(^2, ) →^2(^2, ) is trivial, i.e. (c^2) = { 0 }. So up to equivalence there is just one bounded extension, namely the one corresponding to the direct product ×^2. Pick an isomorphism ϕ ^2 →(). We may construct the extension 1 →→⋊_ϕ^2 →^2 → 1 where ⋊_ϕ^2 denotes the semi-direct product, and observe that the action of ^2 on the centre of the Heisenberg group is trivial as the automorphisms are all inner. The extension (<ref>) is not equivalent to the extension 1 →→×^2 →^2 → 1. Indeed, we show that ⋊_ϕ^2 is not isomorphic to ×^2.
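The non-isomorphism rests on comparing centres, which can also be probed numerically. A sketch (illustrative only, not part of the text), using the convention (x,n)·(y,m) = (x · ^n y, n+m) for the semidirect product: elements ([c,0],0) commute with everything sampled, while ([0,(1,0)],(0,0)) does not.

```python
def omega(z1, z2):
    """omega(z1, z2) = det(z1, z2) on Z^2."""
    return z1[0] * z2[1] - z1[1] * z2[0]


def h_mul(a, b):
    """Heisenberg multiplication [c1,z1].[c2,z2] = [c1+c2+omega(z1,z2), z1+z2]."""
    (c1, z1), (c2, z2) = a, b
    return (c1 + c2 + omega(z1, z2), (z1[0] + z2[0], z1[1] + z2[1]))


def act(n, x):
    """The action phi(n): [c,z] -> [c + 2*omega(n,z), z] of Z^2 by inner automorphisms."""
    c, z = x
    return (c + 2 * omega(n, z), z)


def s_mul(a, b):
    """(x,n).(y,m) = (x . phi(n)(y), n+m) in the semidirect product."""
    (x, n), (y, m) = a, b
    return (h_mul(x, act(n, y)), (n[0] + m[0], n[1] + m[1]))


def commute(a, b):
    return s_mul(a, b) == s_mul(b, a)


sample = [((c, (z1, z2)), (n1, n2))
          for c in (-2, 0, 3) for z1 in (-1, 0, 2) for z2 in (0, 1)
          for n1 in (-1, 0, 1) for n2 in (0, 2)]
# Elements of the form ([c,0],0) are central ...
central = all(commute(((5, (0, 0)), (0, 0)), e) for e in sample)
# ... while an element with non-zero z-coordinate is not.
noncentral = not all(commute(((0, (1, 0)), (0, 0)), e) for e in sample)
```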
We will show that (×^2) ≅^3 and (⋊_ϕ^2) ≅, where (G) denotes the centre of the group G. First, observe that (×^2) ≅() ×(Z^2) ≅×^2 ≅^3.Now assume that ([c,z],n), ([c',z'],n') ∈⋊_ϕ^2 are two elements which commute. Then([c,z],n) · ([c',z'],n')=([c',z'],n') · ([c,z],n) ([c,z] · ^n [c',z'], n + n')=([c',z'] · ^n' [c,z], n + n') ([c,z] · [c' + 2 ω(n,z'),z'], n + n')=([c',z'] ·[c + 2 ω(n',z),z], n + n') ([c+ c' + 2 ω(n,z') + ω(z,z'), z + z'], n + n')=([c'+ c + 2 ω(n',z) + ω(z',z) ,z + z'], n + n')and hence such elements satisfyω(n,z') =ω(n',z) + ω(z',z) = ω(n'+z',z).Hence, if ([c,z],n) is in the centre of ⋊_ϕ^2, then n and z must be such that the above equation holds for every choice of n' and z', and hence n=z=0 ∈^2. We conclude that the centre of⋊_ϕ^2 is { ([c,0],0) | c ∈}and that(⋊_ϕ^2) ≅.Hence ×^2 and ⋊_ϕ^2 cannot be isomorphic.So extension (<ref>) is not bounded. On the other hand there are two special sort of sections σ G →⋊_ϕ^2: (i) The section σ_1g ↦ (1,g) to (<ref>) is a homomorphism and hence in particular a quasihomomorphism. However, the induced map ϕ_σ_1 G →(), has as the image the full infinite group of inner automorphisms. (ii) On the other hand, the section σ_2g ↦ ([1,-g],g) induces a trivial map ϕ_σ_2 G →() as seen in the proof of Claim <ref>. Indeed we calculate that for g, h ∈ G,σ_2(g) σ_2(h) σ_2(gh)^-1 = ([ω(g,h),0],0)and so D(σ_2) is unbounded and σ_2 is not a quasihomomorphism. We conclude that there is a section σ_1 which satisfies (i) of Definition <ref> and another section σ_2 which satisfies (ii) of Definition <ref> but no section which satisfies (i) and (ii) simultaneously.§.§ GeneralisationsOne interesting aspect of Theorem <ref> is that it characterises certain classes in third bounded cohomology, namely the obstructions.Moreover we have seen that the obstructions for bounded extensions factor through a finite group. 
Finite groups are amenable and hence all such classes in third bounded cohomology will vanish when passing to real coefficients. On the other hand, every class in third ordinary cohomology may be realised by an obstruction; see Theorem <ref>. One may wonder if there is another type of extensions ⊂(G,N,ψ) which is empty if and only if a certain class ω̃ is non-trivial in ^3_b(G, ). This would be interesting as non-trivial classes in third bounded cohomology with real coefficients are notoriously difficult to construct. Recall that our Definition <ref> of bounded extensions 1 → N → E → G → 1 required the existence of sections σ G → E which satisfied two conditions. Namely (i) that σ is a quasihomomorphism, and (ii) that the map ϕ_σ G →(N) induced by conjugation has finite image. One may wonder if a modification of conditions (i) and (ii) yields different obstructions with different coefficients. For modifications of (i) there are some generalisations of the quasimorphisms of Fujiwara–Kapovich, most notably the one by Hartnick–Schweitzer <cit.>. However, there does not seem to be a natural generalisation of condition (ii), i.e. a generalisation of ϕ_σ having finite image. Such a generalisation is necessary, as otherwise the obstructions factor through a finite group and will yield trivial classes with real coefficients. On the other hand, there have to be some restrictions on the sort of sections σ allowed: Consider the bounded cohomology of a free non-abelian group F. Soma <cit.> showed that ^3_b(F,) is infinite dimensional. But every extension 1 → N → E → F → 1 will even have a homomorphic section σ F → E. Without a condition on ϕ_σ there would be no obstruction for such extensions.

§ APPENDIX: EQUIVALENT DEFINITIONS OF QUASIHOMOMORPHISMS

We now prove Proposition <ref> which shows that the definition of quasihomomorphism given in <cit.> is equivalent to Definition <ref>.
Recall that for a set-theoretic map σ: G → H we defined D̅(σ) ⊂ H as

D̅(σ) := { σ(h)^-1 σ(g)^-1 σ(gh) | g,h ∈ G }.

Suppose that σ: G → H is a quasihomomorphism in the sense of Definition <ref>. We start by noting the following easy property. Let σ: G → H be a quasihomomorphism with defect group Δ and let A ⊂ Δ be a finite subset of Δ. Then the set

{ ^σ(g) A | g ∈ G }

is also a finite subset of Δ. By Proposition <ref>, the set of automorphisms { a ↦ ^σ(g) a | g ∈ G } ⊂ Aut(Δ) is finite. Hence we see that the set { ^σ(g) A | g ∈ G } is the image of a finite subset of Δ under finitely many automorphisms of Δ and hence a finite subset of Δ. Recall that D = D(σ), the defect of σ, is defined as D(σ) := { σ(g) σ(h) σ(gh)^-1 | g,h ∈ G }. Observe that σ(1) σ(1) σ(1)^-1 = σ(1) and hence σ(1) ∈ D. Moreover, we see that σ(g) σ(g^-1) σ(1)^-1 ∈ D, hence σ(g)^-1 ∈ σ(g^-1) · D_0, where D_0 = σ(1)^-1 · D^-1 ⊂ Δ, a finite set. Combining the above expressions we see that for every g,h ∈ G,

σ(h)^-1 σ(g)^-1 σ(gh) ∈ σ(h^-1) D_0 σ(g^-1) D_0 D_0^-1 σ((gh)^-1)^-1.

Now observe that the set

D_1 = { σ(g^-1) D_0 D_0^-1 σ(g^-1)^-1 | g ∈ G } ⊂ Δ

is finite by Claim <ref>. Hence

σ(h)^-1 σ(g)^-1 σ(gh) ∈ σ(h^-1) D_0 D_1 σ(g^-1) σ((gh)^-1)^-1.

Using the claim again we see that

D_2 = { σ(h^-1) D_0 D_1 σ(h^-1)^-1 | h ∈ G }

is finite and hence that

D̅(σ) = { σ(h)^-1 σ(g)^-1 σ(gh) | g,h ∈ G } ⊂ D_2 σ(h^-1) σ(g^-1) σ((gh)^-1)^-1 ⊂ D_2 D,

so D̅(σ) is indeed a finite set. This shows that any quasihomomorphism in the sense of Definition <ref> is a quasihomomorphism in the sense of <cit.>. Now assume that σ: G → H is a map such that the set D̅ = D̅(σ) is finite and let Δ̅ be the group generated by D̅. Just as before we have the following claim: Let f: G → H be a map such that D̅ = D̅(f) is finite and let Δ̅ be the group generated by D̅. If A ⊂ Δ̅ is a finite subset of Δ̅, then the set

{ ^f(g)^-1 A | g ∈ G }

is also a finite subset of Δ̅. This follows from the same argument as for Claim <ref>, using Lemma 2.5 of <cit.> instead of Proposition <ref>.
Observe again that σ(1)^-1 = σ(1)^-1 σ(1)^-1 σ(1) ∈ D̅(σ) and, using that for all g ∈ G, σ(g)^-1 σ(g^-1)^-1 σ(1) ∈ D̅, we see that σ(g) ∈ σ(g^-1)^-1 D̅_0, where D̅_0 = σ(1) D̅^-1. Hence for every g,h ∈ G,

σ(g) σ(h) σ(gh)^-1 ∈ σ(g^-1)^-1 D̅_0 σ(h^-1)^-1 D̅_0 D̅_0^-1 σ(h^-1 g^-1).

By Claim <ref>, we see that the set

D̅_1 = { σ(h^-1)^-1 D̅_0 D̅_0^-1 σ(h^-1) | h ∈ G }

is finite and hence

σ(g) σ(h) σ(gh)^-1 ∈ σ(g^-1)^-1 D̅_0 D̅_1 σ(h^-1)^-1 σ(h^-1 g^-1).

Using the claim once more we see that the set

D̅_2 = { σ(g^-1)^-1 D̅_0 D̅_1 σ(g^-1) | g ∈ G }

is finite. Finally,

σ(g) σ(h) σ(gh)^-1 ∈ D̅_2 σ(g^-1)^-1 σ(h^-1)^-1 σ(h^-1 g^-1) ⊂ D̅_2 D̅,

which is a finite set. Hence D(σ) is finite. So every quasihomomorphism in the sense of <cit.> is also a quasihomomorphism in the sense of Definition <ref>.

I would like to thank my supervisor Martin Bridson for his helpful comments and support. I would further like to thank the anonymous reviewer for many very helpful comments. The author was funded by the Oxford-Cocker Graduate Scholarship. Part of this work was written at the Isaac Newton Institute while participating in the programme Non-Positive Curvature Group Actions and Cohomology, supported by the EPSRC Grant EP/K032208/1.
http://arxiv.org/abs/1703.08802v2
{ "authors": [ "Nicolaus Heuer" ], "categories": [ "math.GR", "math.GT", "20F65, 20F69" ], "primary_category": "math.GR", "published": "20170326104005", "title": "Low-dimensional bounded cohomology and extensions of groups" }
Reduction of lattice equations to the Painlevé equations: P_ IV and P_ V

Nobutaka Nakazono
Department of Physics and Mathematics, Aoyama Gakuin University, Sagamihara, Kanagawa 252-5258, Japan. nobua.n1222@gmail.com
December 30, 2023

In this paper, we construct a new relation between ABS equations and Painlevé equations. Moreover, using this connection we construct the difference-differential Lax representations of the fourth and fifth Painlevé equations.

2010 Mathematics Subject Classification: 33E17, 37K05, 37K10, 34M55, 34M56, 39A14

§ INTRODUCTION

In recent works by Joshi-Nakazono-Shi <cit.>, the mathematical connection between two longstanding classifications of integrable systems in different dimensions, one by Adler-Bobenko-Suris (ABS equations) <cit.> and the other by Okamoto and Sakai (Painlevé and discrete Painlevé equations) <cit.>, has been investigated by using their lattice structures. Moreover, a comprehensive method of constructing Lax representations of discrete Painlevé equations using this connection was provided in <cit.> and demonstrated in <cit.>. The whole picture of the connection between the ABS equations and the discrete Painlevé equations has been gradually revealed, but that between the ABS equations and the (differential) Painlevé equations was missing, that is, there is still a great distance between them. In the present paper, we fill this gap by erecting a bridge from the ABS equations to the Painlevé equations.

A hierarchy of nonlinear ordinary differential equations (ODEs) found by Noumi-Yamada in <cit.> is sometimes referred to as NY-system. It is well known that NY-system contains the fourth and fifth Painlevé equations (P_ IV and P_ V) and has an A-type affine Weyl group symmetry. In this paper, we show that a system of ABS equations can be reduced to NY-system by a periodic type reduction. Through this connection we construct the difference-differential Lax representations of P_ IV and P_ V (see Theorems <ref> and <ref>).
Moreover, we obtain the remarkable result that the dependent variable of the system of ABS equations (<ref>) can be reduced to the Hamiltonians of P_ IV and P_ V (see Theorems <ref> and <ref>).

§.§ The fourth and fifth Painlevé equations

In this paper, we focus on the following Painlevé equations:

P_ IV: f_0'=f_0(f_2-f_1)+3a_0, f_1'=f_1(f_0-f_2)+3a_1, f_2'=f_2(f_1-f_0)+3a_2,

where f_0+f_1+f_2=3t, a_0+a_1+a_2=1, and

P_ V: 2t f_0'=f_0f_2(f_3-f_1)+4a_0f_2+2(1-2a_2)f_0,
2t f_1'=f_1f_3(f_0-f_2)+4a_1f_3+2(1-2a_3)f_1,
2t f_2'=f_2f_0(f_1-f_3)+4a_2f_0+2(1-2a_0)f_2,
2t f_3'=f_3f_1(f_2-f_0)+4a_3f_1+2(1-2a_1)f_3,

where f_0+f_2=f_1+f_3=2t, a_0+a_1+a_2+a_3=1. Note that in both cases, f_i=f_i(t) are dependent variables, t is the independent variable, a_i are complex parameters and ' denotes d/ dt. The polynomial Hamiltonians of P_ IV and P_ V <cit.> are respectively given by 3√(-3) h_ IV and 16 h_ V, where

h_ IV=1/(3√(-3)) (f_0f_1f_2-(a_1-a_2)f_0-(a_1+2a_2)f_1+(2a_1+a_2)f_2),

h_ V=1/16 (f_0 f_1 f_2 f_3-(a_1+2 a_2-a_3) f_0 f_1-(a_1+2 a_2+3 a_3) f_1 f_2+(3 a_1+2 a_2+a_3) f_2 f_3-(a_1-2 a_2-a_3) f_3 f_0+4 (a_1+a_3)^2).
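As a quick consistency check on the normalisations above: summing the three P_ IV equations must give f_0'+f_1'+f_2' = 3(a_0+a_1+a_2) = 3 = (3t)', and the P_ V flow must likewise preserve f_0+f_2 = 2t. A sympy sketch (an illustration, not taken from the paper):

```python
import sympy as sp

a = sp.symbols('a0:4')
f = sp.symbols('f0:4')

# P_IV right-hand sides (the flow should preserve f0+f1+f2 = 3t)
rhs4 = [f[0]*(f[2] - f[1]) + 3*a[0],
        f[1]*(f[0] - f[2]) + 3*a[1],
        f[2]*(f[1] - f[0]) + 3*a[2]]
sum4 = sp.expand(sum(rhs4))          # cubic terms cancel identically

# P_V right-hand sides of 2t f_i' (the flow should preserve f0+f2 = 2t,
# i.e. the sum of the 0th and 2nd right-hand sides must be 2(f0+f2))
rhs5 = [f[0]*f[2]*(f[3] - f[1]) + 4*a[0]*f[2] + 2*(1 - 2*a[2])*f[0],
        f[1]*f[3]*(f[0] - f[2]) + 4*a[1]*f[3] + 2*(1 - 2*a[3])*f[1],
        f[2]*f[0]*(f[1] - f[3]) + 4*a[2]*f[0] + 2*(1 - 2*a[0])*f[2],
        f[3]*f[1]*(f[2] - f[0]) + 4*a[3]*f[1] + 2*(1 - 2*a[1])*f[3]]
sum02 = sp.expand(rhs5[0] + rhs5[2])

piv_ok = sp.simplify(sum4 - 3*(a[0] + a[1] + a[2])) == 0
pv_ok = sp.simplify(sum02 - 2*(f[0] + f[2])) == 0
```

Both cancellations are what make the constraints f_0+f_1+f_2 = 3t and f_0+f_2 = f_1+f_3 = 2t compatible with the flows.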
P_ IV (<ref>) can be rewritten as the following “standard" symmetric form given in<cit.>:dF_0(s) ds=F_0(s)(F_1(s)-F_2(s))+a_0,dF_1(s) ds=F_1(s)(F_2(s)-F_0(s))+a_1,dF_2(s) ds=F_2(s)(F_0(s)-F_1(s))+a_2,by the following replacements:-√(-3) F_i(s)=f_i(t), i=0,1,2, s=√(-3) t.Also, we can express P_ V (<ref>) in the following “standard" symmetric form given in<cit.>:dF_0(s) ds=F_0(s)F_2(s)(F_1(s)-F_3(s))+a_0F_2(s)+1-2a_22F_0(s),dF_1(s) ds=F_1(s)F_3(s)(F_2(s)-F_0(s))+a_1F_3(s)+1-2a_32F_1(s),dF_2(s) ds=F_2(s)F_0(s)(F_3(s)-F_1(s))+a_2F_0(s)+1-2a_02F_2(s),dF_3(s) ds=F_3(s)F_1(s)(F_0(s)-F_2(s))+a_3F_1(s)+1-2a_12F_3(s),by the following replacements:F_i(s)=√(-1)/2f_i(t), i=0,1,2,3, s=2logt.Moreover, by using the replacements (<ref>) and (<ref>), Equations (<ref>) and (<ref>) can be rewritten ash_ IV=F_0F_1F_2+a_1-a_2/3F_0+a_1+2a_2/3F_1-2a_1+a_2/3F_2,h_ V=F_0 F_1 F_2 F_3+a_1+2 a_2-a_3/4 F_0 F_1+a_1+2 a_2+3 a_3/4F_1 F_2-3 a_1+2 a_2+a_3/4 F_2 F_3+a_1-2 a_2-a_3/4 F_3 F_0+ (a_1+a_3)^2/4,which are the Hamiltonians of Equations (<ref>) and (<ref>), respectively. §.§ Main results In this section, we outline four main results of this paper.Firstly, in <ref> we prove the following theorems for P_ IV (<ref>). The dependent variable of the system of ABS equations (<ref>) with n=2 can be reduced to the Hamiltonian (<ref>).The Lax representation of P_ IV (<ref>) is given by the following:Φ(x+1,t)=A_ IV(x,t)Φ(x,t),∂/∂ tΦ(x,t)=B_ IV(x,t)Φ(x,t),that is, the compatibility condition∂/∂ tA_ IV(x,t)+A_ IV(x,t)B_ IV(x,t)=B_ IV(x+1,t)A_ IV(x,t),is equivalent to P_ IV (<ref>). 
Here,

A_ IV(x,t)= [ 1 f_1+ω_0+(2a_1+a_2)t/2+3tx/2; 0 1 ][ 0 -3(a_1+a_2)-3x+μ; 1 -f_2 ][ 0 -3a_1-3x+μ; 1 -f_1 ][ 0 -3x+μ; 1 f_2+ω_0-(2a_1+a_2+3)t/2-3tx/2 ],

B_ IV(x,t)= [ 1 ω_0+(2a_1+a_2)t/2+3tx/2; 0 1 ][ 0 -ω_0'-2a_1-2a_0-(t^2+2)/4-3x/2+μ; 1 -ω_0-(2a_1+a_2)t/2-3tx/2 ],

where the variables ω_0 and ω_0' are given by

ω_0=(2(f_0f_1f_2-(a_1-a_2)f_0-(a_1+2a_2)f_1+(2a_1+a_2)f_2)-(3+2t^2)t)/6,
ω_0'=-((f_0-t)(2t-f_0)-(f_1-t)(2t-f_1)-(f_2-t)(2t-f_2)+2(a_1-a_2)+1)/2,

and μ is an arbitrary complex constant. The other two main results, for P_ V (<ref>), are given by the following theorems. The dependent variable of the system of ABS equations (<ref>) with n=3 can be reduced to the Hamiltonian (<ref>). The Lax representation of P_ V (<ref>) is given by

Φ(x+1,t)=A_ V(x,t)Φ(x,t), ∂/∂ t Φ(x,t)=B_ V(x,t)Φ(x,t),

that is, the compatibility condition

∂/∂ t A_ V(x,t)+A_ V(x,t)B_ V(x,t)=B_ V(x+1,t)A_ V(x,t),

is equivalent to P_ V (<ref>). Here,

A_ V(x,t)= [ 1 ω_1-f_0+(3 a_1+2 a_2+a_3+5) t/2+2 t x; 0 1 ][ 0 -4 (a_1+a_2+a_3)-4 x+μ; 1 -f_3 ][ 0 -4 (a_1+a_2)-4 x+μ; 1 -f_2 ][ 0 -4 a_1-4 x+μ; 1 -f_1 ][ 0 -4 x+μ; 1 -ω_1-(3 a_1+2 a_2+a_3+1) t/2-2 t x ],

B_ V(x,t)= [ 1 ω_0+(3 a_1+2 a_2+a_3)t/2+2 t x; 0 1 ][ 0 -ω_0'-2 (3 a_1+2 a_2+a_3)-(t^2+2)/4-2 x+μ; 1 -ω_0-(3 a_1+2 a_2+a_3) t/2-2 t x ],

where

ω_1=ω_0-(f_2f_3-2(a_0+a_2)-t^2-1)/(2t),
ω_0=(f_0 f_1 f_2 f_3-(a_1+2 a_2-a_3) f_0 f_1-(a_1+2 a_2+3 a_3) f_1 f_2+(3 a_1+2 a_2+a_3) f_2 f_3-(a_1-2 a_2-a_3) f_3 f_0+4 (a_1+a_3)^2-t^4-6 t^2-1)/(8t),
ω_0'=-(4 (a_1+a_3)^2-1+2 (3+2 a_1+4 a_2-2 a_3) t^2+3 t^4-8 t (a_3 f_2+a_2 f_3)-4 (a_1+a_3-t^2) f_2 f_3-2 t (f_2-f_3) f_2 f_3+f_2^2 f_3^2)/(8 t^2),

and μ is an arbitrary complex constant. The proofs of Theorems <ref> and <ref> are given in <ref>.

§.§ Background

The six Painlevé equations: P_ VI, …, P_ I are nonlinear ODEs of second order which have the Painlevé property, i.e., their solutions do not have movable branch points.
It is well known that the Painlevé equations, except for P_ I, have Bäcklund transformations, which collectively form affine Weyl groups.The following is the diagram of degenerations:[ P_ VI (D_4^(1)) →P_ V (A_3^(1)) → P_ III (2A_1^(1)); ↓ ↓; P_ IV (A_2^(1)) → P_ II (A_1^(1)) →P_ I ]where the symbols inside the parentheses indicate the types of affine Weyl groups. (See <cit.> for the details.) In <cit.>,Adler-Bobenko-Suris (ABS) et al. classified polynomials P, say, of four variables into 11 types: Q4, Q3, Q2, Q1, H3, H2, H1, D4, D3, D2, D1. The first four types, the next three types and the last four types are collectively called Q-, H^4- and H^6-types, respectively. The resulting polynomial P satisfies the following properties. (1) Linearity P is linear in each argument, i.e., it has the following form:P(x_1,x_2,x_3,x_4)=A_1x_1x_2x_3x_4+⋯+A_16,where coefficients A_i are complex parameters.(2) 3D consistency and tetrahedron property There exist a further seven polynomials of four variables: P^(i), i=1,…,7, which satisfy property (1)and a cube C on whose six faces the following equations are assigned: P(x_0,x_1,x_2,x_12)=0,P^(1)(x_0,x_2,x_3,x_23)=0,P^(2)(x_0,x_3,x_1,x_31)=0,P^(3)(x_3,x_31,x_23,x_123)=0,P^(4)(x_1,x_12,x_31,x_123)=0,P^(5)(x_2,x_23,x_12,x_123)=0, where the eight variables x_0, …, x_123 lie on the vertices of the cube (see Figure <ref>), in such a way thatx_123 can be uniquely expressed in terms of the four variables x_0, x_1, x_2, x_3 (3D consistency) and moreover the following relations hold (tetrahedron property):P^(6)(x_0,x_12,x_23,x_31)=0,P^(7)(x_1,x_2,x_3,x_123)=0.Since these equations relate the vertices of the quadrilateral on a lattice, they are often called quad-equations or lattice equations. 
Some polynomials of ABS type are

Q1: Q1(x_1,x_2,x_3,x_4;α_1,α_2;ϵ)=α_1(x_1x_2+x_3x_4)-α_2(x_1x_4+x_2x_3)-(α_1-α_2)(x_1x_3+x_2x_4)+ϵα_1α_2(α_1-α_2),

H3: H3(x_1,x_2,x_3,x_4;α_1,α_2;δ;ϵ)=α_1(x_1x_2+x_3x_4)-α_2(x_1x_4+x_2x_3)+(α_1^2-α_2^2)(δ+ϵα_1α_2 x_2x_4),

H1: H1(x_1,x_2,x_3,x_4;α_1,α_2;ϵ)=(x_1-x_3)(x_2-x_4)+(α_2-α_1)(1-ϵ x_2x_4),

where α_1,α_2∈ℂ^∗ and ϵ,δ∈{0,1}. Many well known integrable partial difference equations (PΔEs) arise from assigning a polynomial of ABS type to quadrilaterals in the integer lattice ℤ^2, for example: discrete Schwarzian KdV equation<cit.>

Q1(U_l,m,U_l+1,m,U_l+1,m+1,U_l,m+1;α_l,β_m;0)=0 ⇔ (U_l,m-U_l+1,m)(U_l,m+1-U_l+1,m+1)/((U_l,m-U_l,m+1)(U_l+1,m-U_l+1,m+1))=α_l/β_m;

lattice modified KdV equation<cit.>

H3(U_l,m,U_l+1,m,-U_l+1,m+1,U_l,m+1;α_l,β_m;0;0)=0 ⇔ U_l+1,m+1/U_l,m=(α_l U_l+1,m-β_m U_l,m+1)/(α_l U_l,m+1-β_m U_l+1,m);

lattice potential KdV equation<cit.>

H1(U_l,m,U_l+1,m,U_l+1,m+1,U_l,m+1;α_l,β_m;0)=0 ⇔ (U_l,m-U_l+1,m+1)(U_l+1,m-U_l,m+1)=α_l-β_m,

where l,m∈ℤ. Throughout this paper, we refer to such PΔEs as ABS equations.
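The ϵ = 0 specialisations owe their product forms to the fact that Q1 then factors into differences; a symbolic check of that factorisation (a sketch — the grouping into two products is an identity one can verify, not a statement quoted from the paper):

```python
import sympy as sp

x1, x2, x3, x4, a, b = sp.symbols('x1 x2 x3 x4 alpha beta')

# Q1 with eps = 0
Q1 = a*(x1*x2 + x3*x4) - b*(x1*x4 + x2*x3) - (a - b)*(x1*x3 + x2*x4)
# ... factors into two products of differences, which is what turns
# Q1 = 0 into a ratio-of-products relation between the four vertices
factored = a*(x1 - x4)*(x2 - x3) - b*(x1 - x2)*(x4 - x3)
q1_factors = sp.expand(Q1 - factored) == 0
```

With this identity, Q1 = 0 rearranges to (x_1-x_4)(x_2-x_3)/((x_1-x_2)(x_4-x_3)) = β/α, the cross-ratio form of the discrete Schwarzian KdV equation.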
In this paper, we show that difference-differential Lax representations of the Painlevé equations can be obtained from a system of integrable s of ABS type through periodic type reductions by using P_ IV and P_ V as examples. Note that a Lax representation of an ABS equation is given by a pair of linear difference equation and its spectrum-preserving deformation.For a relation between monodromy- and spectrum- preserving deformations, we refer to <cit.>. §.§ Plan of the paperThis paper is organized as follows. In <ref>, we first define the system of ABS equations (<ref>) and construct its Lax representation. Then, we consider the reduction of system (<ref>) to the system of ODEs (<ref>). In <ref>, using the symmetry of the integer latticewe obtain the affine Weyl group symmetry and the difference-differential Lax representation of system (<ref>). In <ref>, considering the relation between system (<ref>) and NY-system, we give the proofs of Theorems <ref>–<ref>. Some concluding remarks are given in <ref>.§ REDUCTION OF A SYSTEM OF ABS EQUATIONS TO A SYSTEM OF ODES In this section, we consider the periodic reduction of the system of ABS equations (<ref>) to the system of ODEs (<ref>), which is equivalent to NY-system (see <ref> for the details).In the same way that the lattice ^2 can be constructed by tiling the plane with squares, we construct the lattice ^n+2, where n∈_>0, by tiling it with (n+2)-dimensional hypercubes (i.e. (n+2)-cubes). We obtain a system of s on the lattice ^(n+2) in a similar manner to the constructions of the ABS equations (see <ref>). Indeed, assigning the function u and H1_=0 equations to the vertices and faces of each (n+2)-cube, we obtain the following system of ABS equations:(u-u_ )(u_-u_)+^(i)(l_i)-^(j)(l_j)=0, 0≤ i<j≤ n+1,where u=u(l_0,…,l_n+1) is the dependent variable and {…,^(i)(-1),^(i)(0),^(i)(1),^(i)(2),…}, i=0,…,n+1, are complex parameters. 
Here, the subscript (or, i ) for an arbitrary function F=F(l_0,…,l_n+1) means+1 shift (or, -1 shift) in the l_i-direction, that is,F_=F|_l_i↦ l_i+1,F_i=F|_l_i↦ l_i-1.Below, we also use these notations for other objects. For example, (2.1)_ denotes (2.1)|_l_i↦ l_i+1.We first rewrite system (<ref>) by perceiving the l_0-direction as special. Let ^(0)(l_0) and u(l_0,…,l_n+1) be the functions oft∈ as follows:^(0)(l_0)=(t+l_0),u(l_0,…,l_n+1)=u_l_1,…,l_n+1(t+l_0), where ∈. Then, system (<ref>) can be rewritten as (u-u̅_)(u̅-u_)+(t)-^(j)(l_j)=0,j=1,…, n+1,(u-u_ )(u_-u_)+^(i)(l_i)-^(j)(l_j)=0,1≤ i<j≤ n+1, where u=u_l_1,…,l_n+1(t). Here, the overlinefor an arbitrary function F=F(t) means + shift of t, that is,F=F(t+).Note that system (<ref>) is not a special case of system (<ref>). Indeed, shifting t to t+l_0 and using replacement (<ref>), we inversely obtain system (<ref>) from system (<ref>).Following the method given in <cit.>, we obtain the Lax representation of system (<ref>).The Lax representation of system (<ref>) is given by ψ=[ 1 u; 0 1 ][ 0 μ-(t); 1 -u̅ ]ψ, ψ_=[ 1 u; 0 1 ][ 0 μ-^(i)(l_i); 1 -u_ ]ψ,i=1,…,n+1, where u=u_l_1,…,l_n+1(t), ψ=ψ_l_1,…,l_n+1(t) and μ∈ is the spectral parameter, that is, the compatibility conditions(ψ_)=( ψ )_, j=1,…, n+1,(ψ_)_=(ψ_)_, 1≤ i<j≤ n+1, are equivalent to (<ref>) and (<ref>), respectively. In Appendix <ref>,we give the proof of Lemma <ref> and show how to construct a Lax representation of a system of ABS equations by using system (<ref>) as an example.We next consider a periodic reduction of system (<ref>) and its Lax representation (<ref>). 
Let α(t)=-(t^2+2)/4+ε^-2, u_l_1,…,l_n+1(t)=U_l_1,…,l_n+1(t)+(Λ_l_1,…,l_n+1-ε^-2)t, ψ_l_1,…,l_n+1(t)=ε^-t/ε[ 1 -ε^-2t; 0 1 ]ϕ_l_1,…,l_n+1(t), where

Λ_l_1,…,l_n+1=(∑_i=1^n+1 α^(i)(l_i))/(2(n+1)).

By imposing the (1,…,1)-periodic condition

U_l_1+1,…,l_n+1+1(t)=U_l_1,…,l_n+1(t),

with the following conditions on the parameters for l∈ℤ:

α^(i)(l)=α^(i)(0)+(n+1)l, i=1,…, n+1,

system (<ref>) is reduced to the following system of equations:

U'+U_j̄'-(U-U_j̄)(t-U+U_j̄)+2Λ_l_1,…,l_n+1-α^(j)(l_j)=0, j=1,…, n+1,
(U-U_ī j̄-t)(U_ī-U_j̄)+α^(i)(l_i)-α^(j)(l_j)=0, 1≤ i<j≤ n+1,

where U=U_l_1,…,l_n+1(t) and ' denotes d/ dt. Moreover, the Lax representation (<ref>) is also reduced to the Lax representation of system (<ref>) given by

ϕ'=[ 1 U+Λ_l_1,…,l_n+1 t; 0 1 ][ 0 -U'-Λ_l_1,…,l_n+1+(t^2+2)/4+μ; 1 -U-Λ_l_1,…,l_n+1 t ]ϕ,
ϕ_ī=[ 1 U+Λ_l_1,…,l_n+1 t; 0 1 ][ 0 -α^(i)(l_i)+μ; 1 -U_ī-(Λ_l_1,…,l_n+1)_ī t ]ϕ, i=1,…,n+1,

where U=U_l_1,…,l_n+1(t) and ϕ=ϕ_l_1,…,l_n+1(t), that is, the compatibility conditions

d/ dt(ϕ_j̄)=(ϕ')_j̄, j=1,…, n+1, (ϕ_ī)_j̄=(ϕ_j̄)_ī, 1≤ i<j≤ n+1,

are equivalent to (<ref>) and (<ref>), respectively. The proof of Lemma <ref> is given in Appendix <ref>.

We are now in a position to get a system of ODEs. Let us define the variables ω_i=ω_i(t) and the parameters a_i, i=0,…,n, by

ω_0=U_0,…,0(t), ω_1=U_1,0,…,0(t), …, ω_n=U_1,…,1,0(t),
a_0=(α^(1)(0)-α^(n+1)(0))/(n+1)+1, a_i=(α^(i+1)(0)-α^(i)(0))/(n+1), i=1,…,n,

where ∑_i=0^n a_i=1. Substituting

l_1=⋯=l_i=1, l_i+1=⋯=l_n+1=0, j=i+1,

into system (<ref>) and using the relation

2Λ_0,…,0-α^(1)(0)=∑_k=1^n (n+1-k)a_k,

which can be verified by direct calculation, we obtain the following system of ODEs:

ω_i'+ω_i+1'=-(ω_i-ω_i+1)(ω_i-ω_i+1-t)-∑_k=1^n (n+1-k)a_i+k,

where i∈ℤ/(n+1)ℤ. System (<ref>) is equivalent to NY-system (see <ref> for the details). Before explaining it, in the next section we consider the affine Weyl group symmetry and Lax representation of system (<ref>).
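The relation 2Λ_{0,…,0} − α^{(1)}(0) = Σ_{k=1}^n (n+1−k)a_k invoked above telescopes under the definitions of Λ and the a_i; a symbolic spot-check for n = 2, 3 (a sketch, writing α^{(i)}(0) as plain symbols):

```python
import sympy as sp

def check(n):
    alpha = sp.symbols(f'alpha1:{n + 2}')       # alpha^(1)(0), ..., alpha^(n+1)(0)
    Lam = sum(alpha) / (2 * (n + 1))            # Lambda_{0,...,0}
    a = [(alpha[0] - alpha[n]) / (n + 1) + 1]   # a_0
    a += [(alpha[i] - alpha[i - 1]) / (n + 1) for i in range(1, n + 1)]
    assert sp.simplify(sum(a) - 1) == 0         # normalisation sum a_i = 1
    lhs = 2 * Lam - alpha[0]
    rhs = sum((n + 1 - k) * a[k] for k in range(1, n + 1))
    return sp.simplify(lhs - rhs) == 0

ok = check(2) and check(3)
```

The same loop works for any n, since both sides are linear in the α^{(i)}(0).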
§ AFFINE WEYL GROUP SYMMETRY AND LAX REPRESENTATION OF SYSTEM (<REF>) In this section, we first consider a linear action of an affine Weyl group of A-type, which is the symmetry of the integer lattice. Then, lifting its action to a birational action we obtain affine Weyl group symmetry of system (<ref>). Using the symmetry group together with Lemma <ref>, we finally construct the difference-differential Lax representation of system (<ref>).§.§ Affine Weyl group Symmetry of the lattice ^n+1 In this section, considering a symmetry of the integer lattice ^n+1, we obtain the linear action of the affine Weyl group W(A_n^(1)).We define the automorphisms of the lattice ^n+1: s_i, i=0,…,n, and πby the following actions on the coordinates (l_1,…,l_n+1)∈^n+1: s_0:(l_1,…,l_n+1)↦ (l_n+1+1,l_2,…,l_n,l_1-1),s_i:(l_1… l_n+1)↦ (l_1,…,l_i-1,l_i+1,l_i,l_i+2,…, l_n+1), i=1,…,n,π:(l_1,…,l_n+1)↦ (l_n+1+1,l_1,…,l_n). For convenience, throughout this paper we use the following notation for the combined transformation of arbitrary mappings w and w':ww':=w∘ w'.The group of automorphisms ⟨ s_0,…,s_n,π⟩ forms the extended affine Weyl group of type A_n^(1), denoted by W(A_n^(1)). Namely, they satisfy the following fundamental relations:s_i^2=1, (s_is_i± 1)^3=1,(s_is_j)^2=1, j≠ i,i± 1,π s_i=s_i+1π,where i,j∈/(n+1). Note that W(A_n^(1)) is not the “full" extended affine Weyl group of type A_n^(1),since it only includes rotational symmetries of the affine Dynkin diagram, but not the reflections.Action of each element of W(A_n^(1)) on the coordinates (l_1,…,l_n+1)∈^n+1 is defined,but that on the each lattice parameter l_i is not defined. For example, in the case n=2 the transformation π acts on the origin as the following (see Figure <ref>):π.(0,0,0)=(1,0,0),but it cannot act on the parameter l_i like π.l_i. In the lattice ^n+1, there are (n+1) orthogonal directions, which naturally give rise to (n+1) translation operators. 
Operators T_i, i=1,…,n+1, whose actions on the coordinates (l_1,…,l_n+1)∈^n+1 are given byT_i:(l_1,…,l_n+1)↦ (l_1,…,l_n+1)_ ,can be expressed by the elements of W(A_n^(1)) as the following:T_i=π s_i+n-1⋯ s_i+1s_i,where i∈/(n+1). Note that π^n+1 is also a translation operator whose action is given byπ^n+1:(l_1,…,l_n+1)↦ (l_1+1,…,l_n+1+1),and can be expressed by compositions of T_i as the following:π^n+1=T_1⋯ T_n+1. §.§ Affine Weyl group Symmetry of system (<ref>)In this section, extending the linear action of W(A_n^(1)) given in <ref> to the birational action,we obtain the affine Weyl group symmetry of system (<ref>).From the definition, the variables U_l_1,…,l_n+1(t), defined by (<ref>), are assigned on the vertices and the quad-equations (<ref>) are assigned on the faces of the lattice ^n+1. Therefore, we can naturally lift the action of W(A_n^(1)) to the actions on the function U_l_1,…,l_n+1(t) by s_i.U_(l_1,…,l_n+1)=U_s_i.(l_1,…,l_n+1), i=0,…,n,π.U_(l_1,…,l_n+1)=U_π.(l_1,…,l_n+1),whereU_(l_1,…,l_n+1)=U_l_1,…,l_n+1(t).Moreover, we define the action of W(A_n^(1)) on the parameters ^(i)(l), i=1,…,n+1, where l∈ , as the following:s_0.^(j)(l)=^(n+1)(l-1) ifj=1, ^(1)(l+1) ifj=n+1, ^(j)(l) otherwise,s_i.^(j)(l)=^(j+1)(l) ifj=i, ^(j-1)(l) ifj=i+1, ^(j)(l) otherwise,π.^(j)(l)=^(j+1)(l) ifj=1,…,n, ^(1)(l+1) ifj=n+1, where i=1,…,n. These actions give the actions on the parameters a_i and the variables ω_i as the following lemma. 
The actions of W(A_n^(1))=⟨ s_0,…,s_n,π⟩on the parameters a_i, i=0,…,n, are given bys_i(a_j)=-a_j ifj=i, a_j+ a_i ifj=i± 1, a_j otherwise,π(a_i)=a_i+1,where i,j∈/(n+1), while those on the variables ω_i , i=0,…,n, are given by s_i(ω_j)=ω_i+(n+1)a_it-ω_i+n+ω_i+1 ifj=i, ω_j ifj≠ i,π (ω_i)=ω_i+1, where i,j∈/(n+1).From the periodic condition (<ref>), the definition (<ref>) and the actions (<ref>), we obtains_0(ω_0)=U_(1,0,…,0,-1)=U_(2,1,…,1,0)=T_1(ω_n),s_k(ω_k)=U_(1,…,1,0,1,0,…,0)=T_k+1(ω_k-1), k=1,…,n.Moreover, substituting l_1=⋯=l_n=1, l_n+1=0, i=n+1, j=1, l_1=⋯=l_k-1=1, l_k=⋯=l_n+1=0, i=k, j=k+1into (<ref>), we obtainT_1(ω_n)=ω_0+-^(n+1)(0)+^(1)(1)t-ω_n+ω_1=ω_0+(n+1)a_0t-ω_n+ω_1,T_k+1(ω_k-1)=ω_k+-^(k)(0)+^(k+1)(0)t-ω_k-1+ω_k+1=ω_k+(n+1)a_kt-ω_k+n+ω_k+1,where k=1,…,n, respectively. Therefore, from Equations (<ref>), (<ref>), (<ref>) and (<ref>), the actions (<ref>) hold. From the actions (<ref>) and the definition (<ref>), the others can be easily verified. Therefore, we have completed the proof. In general, for a function F=F(a_i,ω_j), we let an element w∈W(A_n^(1)) act as w.F=F(w.a_i,w.ω_j), that is, w acts on the arguments from the left. We can easily verify that under the birational actions given in Lemma <ref>, W(A_n^(1)) satisfies the fundamental relations (<ref>) and the following relation: π^n+1=1. We can also verify that by using system (<ref>) and the birational actions of W(A_n^(1)) given in Lemma <ref>, the following relations hold:s_j(ω_i'+ω_i+1')= d/ dt(s_j(ω_i)+s_j(ω_i+1)),π(ω_i'+ω_i+1')= d/ dt(π(ω_i)+π(ω_i+1)),where i,j∈/(n+1), which indicate that W(A_n^(1)) is a Bäcklund transformation group of system (<ref>). Therefore, the following lemma holds. Bäcklund transformations of system (<ref>) collectively formthe extended affine Weyl group W(A_n^(1)). §.§ Difference-differential Lax representation of system (<ref>)In this section, we define the birational action of W(A_n^(1)) on the wave function ϕ. 
Using this action, we obtain the difference-differential Lax representation of system (<ref>).In a similar manner to U_l_1,…,l_n+1(t),we can assign ϕ_l_1,…,l_n+1(t) on the vertices (l_1,…,l_n+1)∈^n+1. Then, the action of W(A_n^(1)) can be lifted to the action on the function ϕ_l_1,…,l_n+1(t) as the following:s_i.ϕ_(l_1,…,l_n+1)=ϕ_s_i.(l_1,…,l_n+1),π.ϕ_(l_1,…,l_n+1)=ϕ_π.(l_1,…,l_n+1),where ϕ_(l_1,…,l_n+1)=ϕ_l_1,…,l_n+1(t).Let us define the variables Φ_i=Φ_i(t), i=0,…,n, and the parameter x byΦ_0=ϕ_(0,…,0),Φ_1=ϕ_(1,0,…,0),…,Φ_n=ϕ_(1,…,1,0),x=^(1)(0)n+1.Substituting l_1=⋯=l_k=1, l_k+1=⋯,l_n+1=0into system (<ref>), we obtain the following system of equations: Φ_k'=[ 1 ω_k+(+k/2)t; 0 1 ][0 -ω_k'--k/2+t^2+2/4+μ;1 -ω_k-(+k/2)t ]Φ_k,T_i(Φ_k)= [ 1 ω_k+(+k/2)t; 0 1 ][ 0 -^(i)-(n+1)+μ; 1 -(ω_k)_-(+k+1/2)t ]Φ_k ifi≤ k,  [ 1 ω_k+(+k/2)t; 0 1 ][ 0 -^(i)+μ; 1 -(ω_k)_-(+k+1/2)t ]Φ_k ifi>k, where k=0,…,n and ^(i), i=1,…,n+1, andare defined by^(i)=^(i)(0)=(n+1)(x+∑_j=1^i-1a_j),=_0,…,0=(n+1)x2+∑_j=1^n(n+1-j)a_j2.Then, the action of W(A_n^(1)) on the variables Φ_i, i=0,…,n, and the parameter x are given in the following lemma. The action of W(A_n^(1))=⟨ s_0,…,s_n,π⟩on the parameter x is given bys_i(x)=x-a_0 ifi=0, x+a_1 ifi=1, x otherwise,π(x)=x+a_1,while that on the variables Φ_k , k=0,…,n, is given by s_0(Φ_0)=[ 0 -^(n+1)+n+1+μ; 1-ω_1-(+1/2)t ]^-1[ 1 ω_0-(ω_1)_n+1; 0 1 ][0-^(1)+μ;1 -ω_1-(+1/2)t ]Φ_0,s_i(Φ_i)=[ 1 ω_i-1+(+i-1/2)t; 0 1 ][ ^(i+1)-μ^(i)-μ0; T_i+1(ω_i-1)-ω_i^(i)-μ1 ][ 1 ω_i-1+(+i-1/2)t; 0 1 ]^-1Φ_i,s_j(Φ_k)=Φ_k, j,k=0,…,n, j≠ k,π(Φ_k)=[ 1 ω_k+(+k/2)t; 0 1 ][0-^(k+1)+μ;1 -ω_k+1-(+k+1/2)t ]Φ_k,k=0,…,n, where i=1,…,n and ω_n+1=ω_0. 
From (<ref>), (<ref>), (<ref>) and (<ref>), we obtain s_0(Φ_0)=ϕ_(1,0,…,0,-1)(t)=T_n+1^-1(Φ_1),T_n+1(Φ_1)=[ 1 ω_1+(+1/2)t; 0 1 ][ 0 -^(n+1)+μ; 1 -T_n+1(ω_1)-(+1)t ]Φ_1,Φ_1=T_1(Φ_0)=[ 1 ω_0+t; 0 1 ][0-^(1)+μ;1 -ω_1-(+1/2)t ]Φ_0,s_i(Φ_i)=ϕ_(1,…,1,0,1,0,…,0)(t)=T_i+1(Φ_i-1),T_i+1(Φ_i-1)=[ 1 ω_i-1+(+i-1/2)t; 0 1 ][ 0 -^(i+1)+μ; 1 -T_i+1(ω_i-1)-(+i/2)t ]Φ_i-1,Φ_i=T_i(Φ_i-1)=[ 1 ω_i-1+(+i-1/2)t; 0 1 ][0-^(i)+μ;1 -ω_i-(+i/2)t ]Φ_i-1,π(Φ_k-1)=T_k(Φ_k-1),T_k(Φ_k-1)=[ 1 ω_k-1+(+k-1/2)t; 0 1 ][0-^(k)+μ;1 -ω_k-(+k/2)t ]Φ_k-1. Therefore, from (<ref>), (<ref>) and (<ref>), we obtain (<ref>), (<ref>) and (<ref>), respectively. From the actions (<ref>) and (<ref>) and the definition (<ref>), the others can be easily verified.Therefore, we have completed the proof. Note that we can easily verify that under the actions on the variables Φ_i and the parameter x, W(A_n^(1)) satisfies the fundamental relations (<ref>) but does not satisfy the relation (<ref>). This unsatisfied relation is a key to construct a difference-differential Lax representation of an ODE.We are now in a position to construct a Lax representation of system (<ref>). Let us define the shift operator of x byT_x=π^n+1.The action of T_x on the parameter x is given byT_x:x↦ x+1,while that on the variables ω_i and parameters a_i, i=0,…,n, is given by an identity mapping, i.e.,T_x(ω_i)=ω_i, T_x(a_i)=a_i, i=0,…,n.Therefore, from (<ref>) and (<ref>) we obtain the following lemma. The difference-differential Lax representation of system (<ref>) is given by the following:T_x(Φ_i)=π^n+1(Φ_i),Φ_i'=[ 1 ω_i+(+i/2)t; 0 1 ][0 -ω_i'--i/2+t^2+2/4+μ;1 -ω_i-(+i/2)t ]Φ_i,where π(Φ_i)=[ 1 ω_i+(+i/2)t; 0 1 ][0-^(i+1)+μ;1 -ω_i+1-(+i+1/2)t ]Φ_i,π()=+1/2,π(t)=t,π(μ)=μ,π(ω_i)=ω_i+1,π(^(i))=^(i+1) ifi=1,…,n, ^(1)+n+1 ifi=n+1,that is, the compatibility conditions d/ dtT_x(Φ_i)=T_x(Φ_i'), i=0,…,n,are equivalent to system (<ref>). 
Note that the relations between the parameters α^(i), Λ and the parameters a_j, x are given by (<ref>).

§ DIFFERENCE-DIFFERENTIAL LAX REPRESENTATIONS OF P_ IV AND P_ V

In this section, considering the relation between system (<ref>) and NY-system, we obtain the difference-differential Lax representations of P_ IV (<ref>) and P_ V (<ref>). Let us define the variables g_i=g_i(t), i=0,…,n, by

g_i=ω_i-ω_i+1-t/2,

where ω_n+1=ω_0. Then, system (<ref>) can be rewritten as the periodic dressing chain with period (n+1) <cit.>:

g_i+1'+g_i'=g_i+1^2-g_i^2-(n+1)a_i+1,

where i∈ℤ/(n+1)ℤ. It is well known that from system (<ref>), we can obtain NY-system containing P_ IV and P_ V <cit.>. Through this relation we construct the Lax representations of P_ IV and P_ V from the Lax representation of system (<ref>).

§.§ Case n=2: the Painlevé IV equation

In this section, considering the case n=2, we obtain the difference-differential Lax representation of P_ IV from that of system (<ref>). Let us define the variables f_i=f_i(t), i=0,1,2, by

f_0=ω_1-ω_2+t, f_1=ω_2-ω_0+t, f_2=ω_0-ω_1+t.

Then, from system (<ref>) and the condition for the parameters (<ref>) with n=2, we obtain P_ IV (<ref>) with the conditions (<ref>). From the affine Weyl group symmetry of system (<ref>) given in Lemma <ref>, we obtain that of P_ IV as follows. The actions of W(A_2^(1))=⟨ s_0,s_1,s_2,π⟩ on the parameters a_i, i=0,1,2, are given by

s_i(a_j)= -a_j if j=i, a_j+a_i if j=i± 1, a_j otherwise, π(a_i)=a_i+1,

where i,j∈ℤ/3ℤ, while those on the variables f_i, i=0,1,2, are given by

s_i(f_j)= f_j+3a_i/f_i if j=i-1, f_j-3a_i/f_i if j=i+1, f_j if j=i, π(f_i)=f_i+1,

where i,j∈ℤ/3ℤ. Under these actions, the fundamental relations for W(A_2^(1)) hold:

s_i^2=1, (s_is_i± 1)^3=1, (s_is_j)^2=1, j≠ i,i± 1, π s_i=s_i+1π, π^3=1,

where i,j∈ℤ/3ℤ. The corresponding Dynkin diagram is given by Figure <ref>. Before the discussion of the Lax representation of P_ IV, let us consider the role of ω-variables in the theory of the Painlevé IV equation.
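That role can be made concrete symbolically: differentiating √(-3)h_ IV-(3+2t^2)t/6 along the P_ IV flow reproduces ω_0' as computed from the pair sums ω_i'+ω_{i+1}' of the reduced system with n=2, which is the content of the lemma that follows. A sympy sketch (the pairing of ω-differences with f-variables is read off from f_0=ω_1-ω_2+t, etc.):

```python
import sympy as sp

t, a1, a2 = sp.symbols('t a1 a2')
f0, f1 = sp.Function('f0')(t), sp.Function('f1')(t)
a0 = 1 - a1 - a2
f2 = 3*t - f0 - f1                       # constraint f0 + f1 + f2 = 3t

# P_IV flow, used to eliminate derivatives of f0, f1
flow = {sp.Derivative(f0, t): f0*(f2 - f1) + 3*a0,
        sp.Derivative(f1, t): f1*(f0 - f2) + 3*a1}

# d/dt of sqrt(-3)*h_IV - (3+2t^2)t/6, with h_IV as in the introduction
sqrt3_h = (f0*f1*f2 - (a1 - a2)*f0 - (a1 + 2*a2)*f1 + (2*a1 + a2)*f2) / 3
lhs = sp.expand(sp.diff(sqrt3_h - (3 + 2*t**2)*t/6, t).subs(flow))

# omega_0' from the pair sums of the reduced system with n = 2:
# omega_0-omega_1 = f2-t, omega_1-omega_2 = f0-t, omega_2-omega_0 = f1-t
pair = lambda x, c: -x*(x - t) - c       # omega_i' + omega_{i+1}'
S0 = pair(f2 - t, 2*a1 + a2)
S1 = pair(f0 - t, 2*a2 + a0)
S2 = pair(f1 - t, 2*a0 + a1)
omega0_prime = sp.expand((S0 - S1 + S2) / 2)

consistent = sp.simplify(lhs - omega0_prime) == 0
```

The two expressions agree identically in f_0, f_1, t, a_1, a_2, which is exactly the statement dc_0/dt = 0 used in the proof below.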
The following relation holds:

ω_0=√(-3)h_ IV-(3+2t^2)t/6,

where h_ IV is the Hamiltonian given by (<ref>). Let

c_0=ω_0-√(-3)h_ IV+(3+2t^2)t/6.

We can easily verify the following relations: s_i(c_0)=c_0, i=0,1,2, π(c_0)=c_0. Moreover, using the relation

ω_0'=(ω_0'+ω_1'-(ω_1'+ω_2')+ω_2'+ω_0')/2=-((f_0-t)(2t-f_0)-(f_1-t)(2t-f_1)-(f_2-t)(2t-f_2)+2(a_1-a_2)+1)/2,

we obtain dc_0/dt=0. Equations (<ref>) and (<ref>) mean that c_0 is an arbitrary constant. It is obvious that without loss of generality we can put c_0=0. Therefore, we have completed the proof. Therefore, Theorems <ref> and <ref> follow from Lemma <ref> with n=2, i=0, Φ_0=Φ, and Lemma <ref>.

§.§ Case n=3: the Painlevé V equation

In a similar manner to the case n=2 (see <ref>), the case n=3 gives P_ V (<ref>) and its difference-differential Lax representation. Let

f_0=ω_1-ω_3+t, f_1=ω_2-ω_0+t, f_2=ω_3-ω_1+t, f_3=ω_0-ω_2+t.

Then, we obtain P_ V (<ref>) and the conditions (<ref>) from system (<ref>) and the condition (<ref>). The actions of the extended affine Weyl group W(A_3^(1))=⟨ s_0,s_1,s_2,s_3,π⟩ on the parameters a_i, i=0,…,3, are given by

s_i(a_j)= -a_j if j=i, a_j+a_i if j=i± 1, a_j otherwise, π(a_i)=a_i+1,

where i,j∈ℤ/4ℤ, while those on the variables f_i, i=0,…,3, are given by

s_i(f_j)= f_j+4a_i/f_i if j=i-1, f_j-4a_i/f_i if j=i+1, f_j otherwise, π(f_i)=f_i+1,

where i,j∈ℤ/4ℤ. Under these actions, W(A_3^(1)) satisfies the following fundamental relations:

s_i^2=1, (s_is_i± 1)^3=1, (s_is_j)^2=1, j≠ i,i± 1, π s_i=s_i+1π, π^4=1,

where i,j∈ℤ/4ℤ. The Dynkin diagram for W(A_3^(1)) is given by Figure <ref>. In a similar manner to the proof of Lemma <ref>, we can prove the following lemma. The following relation holds:

ω_0=(16 h_ V-t^4-6 t^2-1)/(8 t),

where h_ V is the Hamiltonian given by (<ref>). Therefore, Theorems <ref> and <ref> follow from Lemma <ref> with n=3, i=0, Φ_0=Φ, and Lemma <ref>.

§ CONCLUDING REMARKS

In this paper, we have constructed the relation between the ABS equations and NY-system through the periodic type reduction.
Using this connection, we obtained the difference-differential Lax representations of P_ IV and P_ V. Moreover, we showed that the dependent variable of the system of ABS equations (<ref>) can be reduced to the Hamiltonians of P_ IV and P_ V.An interesting future project is to investigate the relations between ABS equations and the other Painlevé equations (i.e., P_ VI, P_ III, P_ II, P_ I). The results in this direction will be reported in forthcoming publications. §.§ AcknowledgmentThe author would like to express his sincere thanks to Profs M. Noumi and Y. Yamada for inspiring and fruitful discussions. I also appreciate the valuable comments from the referee which have improved the quality of this paper. This research was supported by a grant # DP160101728 from the Australian Research Council and JSPS KAKENHI Grant Number JP17J00092.§ PROOF OF LEMMA <REF> In this section, we construct the Lax representation of system (<ref>) following the method given in <cit.>.The key to constructing the Lax representation of system (<ref>) is to introduce a virtual direction from the lattice ^n+2, where system (<ref>) is assigned, to the multi-dimensionally consistent integer lattice ^n+3. Then, system (<ref>) can be extended to the following system of s:(W-W_ )(W_-W_)+^(i)(l_i)-^(j)(l_j)=0, 0≤ i<j≤ n+2,where W=W(l_0,…,l_n+1,l_n+2). Here, W(l_0,…,l_n+1,0)=u(l_0,…,l_n+1) is the dependent variable of system (<ref>). Distinguish u(l_0,…,l_n+1) from v(l_0,…,l_n+1):=W(l_0,…,l_n+1,1). Then, each of equations between u=u(l_0,…,l_n+1) and v=v(l_0,…,l_n+1):(u-v_)(u_-v)+^(i)(l_i)-μ=0, i=0,…,n+1,where μ=^(n+2)(0), can be regarded as the first order discrete system of Riccati type of the quantity v, which is linearizable. 
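The linearisation announced here can be verified symbolically: writing v = F/G turns the Riccati-type quad relation (u − v_ī)(u_ī − v) + α^{(i)}(l_i) − μ = 0 into exactly the fractional-linear map encoded by the 2×2 matrix product used in the construction (a sketch; the shifted quantities are written as plain symbols):

```python
import sympy as sp

u, u_sh, v, w, alpha, mu, F, G = sp.symbols('u u_sh v w alpha mu F G')

# shifted v from the Riccati-type quad relation (u - v_sh)(u_sh - v) + alpha - mu = 0
v_sh_quad = sp.solve(sp.Eq((u - w) * (u_sh - v) + alpha - mu, 0), w)[0]

# the same map from the linear system (F_sh; G_sh) = M (F; G), with v = F/G
M = sp.Matrix([[1, u], [0, 1]]) * sp.Matrix([[0, mu - alpha], [1, -u_sh]])
vec = M * sp.Matrix([F, G])
v_sh_lax = sp.cancel((vec[0] / vec[1]).subs(F, v * G))

same = sp.simplify(v_sh_quad - v_sh_lax) == 0
```

Splitting the fractional-linear map into its numerator and denominator is what replaces the nonlinear first-order relation for v by the linear system for (F, G).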
Indeed, substituting v(l_0,…,l_n+1)=F(l_0,…,l_n+1)G(l_0,…,l_n+1) ,in (<ref>) and dividing them into the numerators and the denominators, we obtain the following linear systems:ψ_=[ 1 u; 0 1 ][ 0 μ-^(i)(l_i); 1 -u_ ]ψ,i=0,…,n+1,where the vector ψ=ψ(l_0,…,l_n+1) is defined byψ(l_0,…,l_n+1)=[ F(l_0,…,l_n+1); G(l_0,…,l_n+1) ].We can easily verify that the compatibility conditions(ψ_)_=(ψ_)_, 0≤ i<j≤ n+1,are equivalent to system (<ref>). Finally, using the replacements (<ref>) andψ(l_0,…,l_n+1)=ψ_l_1,…,l_n+1(t+l_0),we have completed the proof of Lemma <ref>. § PROOF OF LEMMA <REF> In this section, we give a proof of Lemma <ref>. By using (<ref>), Equations (<ref>) and (<ref>) can be respectively rewritten as (U-U_+(_l_1,…,l_n+1-(_l_1,…,l_n+1)_)t-(_l_1,…,l_n+1)_ +^-1)×(U-U_+(_l_1,…,l_n+1-(_l_1,…,l_n+1)_)t+_l_1,…,l_n+1-^-1)=t^2+24-^-2+^(j)(l_j),j=1,…, n+1, (U-U_,+(_l_1,…,l_n+1-(_l_1,…,l_n+1)_,)t)(U_-U_+((_l_1,…,l_n+1)_-(_l_1,…,l_n+1)_)t)=-^(i)(l_i)+^(j)(l_j), 1≤ i<j≤ n+1, while Equations (<ref>) and (<ref>) can be respectively rewritten as ^-1(ϕ-ϕ)=[ 1 U+_l_1,…,l_n+1t; 0 1 ][ 0 U-U/-_l_1,…,l_n+1+t^2+2/4+μ; 1 -U-_l_1,…,l_n+1(t+) ]ϕ,ϕ_=[ 1 U+_l_1,…,l_n+1t; 0 1 ][ 0-^(i)(l_i)+μ; 1 -U_-(_l_1,…,l_n+1)_ t ]ϕ,i=1,…,n+1, where U=U_l_1,…,l_n+1(t) and ϕ=ϕ_l_1,…,l_n+1(t). The periodic condition (<ref>) imposes the condition that(<ref>) is equivalent to(<ref>)_1,…,n+1. From the condition that (<ref>) is equivalent to (<ref>)_1,…,n+1, we obtain^(i)(l_i+1)-^(i)(l_i)=^(j)(l_j+1)-^(j)(l_j), 1≤ i<j≤ n+1,which hold under the conditions of parameters (<ref>). From the condition that (<ref>) is equivalent to (<ref>)_1,…,n+1, we obtain the following condition:||≪ 1,which causes the continuum limit of systems (<ref>) and (<ref>) to systems (<ref>) and (<ref>), respectively. Therefore, we have completed the proof of Lemma <ref>.' ' 10AdlerVE1994:MR1280883 V. E. Adler. Nonlinear chains and Painlevé equations. Phys. D, 73(4):335–351, 1994.ABS2003:MR1962121 V. E. Adler, A. I. Bobenko, and Y. B. 
Suris. Classification of integrable equations on quad-graphs. The consistency approach. Comm. Math. Phys., 233(3):513–543, 2003.ABS2009:MR2503862 V. E. Adler, A. I. Bobenko, and Y. B. Suris. Discrete nonlinear hyperbolic equations: classification of integrable cases. Funktsional. Anal. i Prilozhen., 43(1):3–21, 2009.BS2002:MR1890049 A. I. Bobenko and Y. B. Suris. Integrable systems on quad-graphs. Int. Math. Res. Not. IMRN, (11):573–611, 2002.BollR2011:MR2846098 R. Boll. Classification of 3D consistent quad-equations. J. Nonlinear Math. Phys., 18(3):337–365, 2011.BollR2012:MR3010833 R. Boll. Corrigendum: Classification of 3D consistent quad-equations. J. Nonlinear Math. Phys., 19(4):1292001, 3, 2012.BollR:thesis R. Boll. Classification and Lagrangian Structure of 3D Consistent Quad-Equations. Doctoral Thesis, Technische Universität Berlin, submitted August 2012.FN1980:MR588248 H. Flaschka and A. C. Newell. Monodromy- and spectrum-preserving deformations. I. Comm. Math. Phys., 76(1):65–116, 1980.FIKN:MR2264522 A. S. Fokas, A. R. Its, A. A. Kapaev, and V. Y. Novokshenov. Painlevé transcendents, volume 128 of Mathematical Surveys and Monographs. American Mathematical Society, Providence, RI, 2006. The Riemann-Hilbert approach.HietarintaJ2005:MR2217106 J. Hietarinta. Searching for CAC-maps. J. Nonlinear Math. Phys., 12(suppl. 2):223–230, 2005.HirotaR1977:MR0460934 R. Hirota. Nonlinear partial difference equations. I. A difference analogue of the Korteweg-de Vries equation. J. Phys. Soc. Japan, 43(4):1424–1433, 1977.JNS:paper3 N. Joshi and N. Nakazono. Lax pairs of discrete Painlevé equations: (A_2+A_1)^(1) case. Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences, 472(2196), 2016.JNS2014:MR3291391 N. Joshi, N. Nakazono, and Y. Shi. Geometric reductions of ABS equations on an n-cube to discrete Painlevé systems. J. Phys. A, 47(50):505201, 16, 2014.JNS2015:MR3403054 N. Joshi, N. Nakazono, and Y. Shi. 
Lattice equations arising from discrete Painlevé systems. I. (A_2 + A_1)^(1) and (A_1 + A_1^')^(1) cases. J. Math. Phys., 56(9):092705, 25, 2015.JNS2016:MR3584386 N. Joshi, N. Nakazono, and Y. Shi. Lattice equations arising from discrete Painlevé systems: II. A^(1)_4 case. J. Phys. A, 49(49):495201, 39, 2016.JNS:paper4 N. Joshi, N. Nakazono, and Y. Shi. Reflection groups and discrete integrable systems. Journal of Integrable Systems, 1(1):xyw006, 2016.KNY2015:arXiv150908186K K. Kajiwara, M. Noumi, and Y. Yamada. Geometric Aspects of Painlevé Equations. J. Phys. A, 50(7):073001, 2017.NC1995:MR1329559 F. Nijhoff and H. Capel. The discrete Korteweg-de Vries equation. Acta Appl. Math., 39(1-3):133–158, 1995. KdV '95 (Amsterdam, 1995).NijhoffFW2002:MR1912127 F. W. Nijhoff. Lax pair for the Adler (lattice Krichever-Novikov) system. Phys. Lett. A, 297(1-2):49–58, 2002.NCWQ1984:MR763123 F. W. Nijhoff, H. W. Capel, G. L. Wiersma, and G. R. W. Quispel. Bäcklund transformations and three-dimensional lattice equations. Phys. Lett. A, 105(6):267–272, 1984.NQC1983:MR719638 F. W. Nijhoff, G. R. W. Quispel, and H. W. Capel. Direct linearization of nonlinear difference-difference equations. Phys. Lett. A, 97(4):125–128, 1983.book_NoumiM2004:MR2044201 M. Noumi. Painlevé equations through symmetry, volume 223 of Translations of Mathematical Monographs. American Mathematical Society, Providence, RI, 2004. Translated from the 2000 Japanese original by the author.NY1998:MR1666847 M. Noumi and Y. Yamada. Affine Weyl groups, discrete dynamical systems and Painlevé equations. Comm. Math. Phys., 199(2):281–295, 1998.NY1998:MR1676885 M. Noumi and Y. Yamada. Higher order Painlevé equations of type A^(1)_l. Funkcial. Ekvac., 41(3):483–503, 1998.NY1999:MR1684551 M. Noumi and Y. Yamada. Symmetries in the fourth Painlevé equation and Okamoto polynomials. Nagoya Math. J., 153:53–86, 1999.OKSO2006:MR2277519 Y. Ohyama, H. Kawamuko, H. Sakai, and K. Okamoto. Studies on the Painlevé equations. V. 
Third Painlevé equations of special type P_ III(D_7) and P_ III(D_8). J. Math. Sci. Univ. Tokyo, 13(2):145–204, 2006.OkamotoK1979:MR614694 K. Okamoto. Sur les feuilletages associés aux équations du second ordre à points critiques fixes de P. Painlevé. Japan. J. Math. (N.S.), 5(1):1–79, 1979.OkamotoK1980:MR581468 K. Okamoto. Polynomial Hamiltonians associated with Painlevé equations. I. Proc. Japan Acad. Ser. A Math. Sci., 56(6):264–268, 1980.OkamotoK1980:MR596006 K. Okamoto. Polynomial Hamiltonians associated with Painlevé equations. II. Differential equations satisfied by polynomial Hamiltonians. Proc. Japan Acad. Ser. A Math. Sci., 56(8):367–371, 1980.OkamotoK1986:MR854008 K. Okamoto. Studies on the Painlevé equations. III. Second and fourth Painlevé equations, P_ II and P_ IV. Math. Ann., 275(2):221–255, 1986.OkamotoK1987:MR916698 K. Okamoto. Studies on the Painlevé equations. I. Sixth Painlevé equation P_ VI. Ann. Mat. Pura Appl. (4), 146:337–381, 1987.OkamotoK1987:MR914314 K. Okamoto. Studies on the Painlevé equations. II. Fifth Painlevé equation P_ V. Japan. J. Math. (N.S.), 13(1):47–76, 1987.OkamotoK1987:MR927186 K. Okamoto. Studies on the Painlevé equations. IV. Third Painlevé equation P_ III. Funkcial. Ekvac., 30(2-3):305–332, 1987.OR2016:arXiv160304393 C. M. Ormerod and E. M. Rains. A symmetric difference-differential Lax pair for Painlevé VI. arXiv:1603.04393.SakaiH2001:MR1882403 H. Sakai. Rational surfaces associated with affine root systems and geometry of the Painlevé equations. Comm. Math. Phys., 220(1):165–229, 2001.SHC2006:MR2263633 A. Sen, A. N. W. Hone, and P. A. Clarkson. On the Lax pairs of the symmetric Painlevé equations. Stud. Appl. Math., 117(4):299–319, 2006.VS1993:MR1251164 A. P. Veselov and A. B. Shabat. A dressing chain and the spectral theory of the Schrödinger operator. Funktsional. Anal. i Prilozhen., 27(2):1–21, 96, 1993.WalkerAJ:thesis A. Walker. Similarity reductions and integrable lattice equations. Ph.D. 
Thesis, University of Leeds, submitted September 2001.
http://arxiv.org/abs/1703.09215v3
{ "authors": [ "Nobutaka Nakazono" ], "categories": [ "nlin.SI", "math-ph", "math.MP" ], "primary_category": "nlin.SI", "published": "20170326093340", "title": "Reduction of lattice equations to the Painlevé equations: P$_{\\rm IV}$ and P$_{\\rm V}$" }
[On leave from: ]Aix Marseille Univ, Centrale Marseille, Marseille, France Max-Planck-Institut für Kernphysik, Saupfercheckweg 1, 69117 Heidelberg, Germany[Email: ]smcavaletto@gmail.com Max-Planck-Institut für Kernphysik, Saupfercheckweg 1, 69117 Heidelberg, Germany The quantum dynamics of a system of Rb atoms, modeled by a V-type three-level system interacting with intense probe and pump pulses, are studied. The time-delay-dependent transient-absorption spectrum of an intense probe pulse is thus predicted, when this is preceded or followed by a strong pump pulse. Numerical results are interpreted in terms of an analytical model, which allows us to quantify the oscillating features of the resulting transient-absorption spectra in terms of the atomic populations and phases generated by the intense pulses. Strong-field-induced phases and their influence on the resulting transient-absorption spectra are thereby investigated for different values of pump and probe intensities and frequencies, focusing on the atomic properties which are encoded in the absorption line shapes for positive and negative time delays.32.80.Qk, 32.80.Wr, 42.65.ReTransient-absorption phases with strong probe and pump pulses Stefano M. Cavaletto December 30, 2023 =============================================================§ INTRODUCTION Phases represent the essential feature of any wave-like phenomena, lying at the heart of coherence and interference effects in classical and quantum physics. In atoms and molecules, phases define the shape of a wave packet in a superposition of quantum states and hence determine its subsequent time evolution. Manipulating atomic and molecular dynamics with external electromagnetic fields <cit.>, e.g., by using strong femto- or attosecond pulses <cit.>, requires full control of the generated quantum phases. 
However, traditional spectroscopy methods usually do not provide access to the phase information: for instance, for nonautoionizing bound states, absorption spectra typically consist of Lorentzian lines, with spectral intensities quantifying the atomic populations. The manipulation of absorption line shapes in transient-absorption-spectroscopy experiments <cit.> has been recently identified as a key mechanism to gain access to atomic and molecular phase dynamics. Absorption lines originate from the interference between a probe pulse transmitting through the medium and the field emitted by the system <cit.>. The dipole response of the system and, consequently, the resulting absorption spectrum can be modified by applying an intense pump pulse, preceding or following the probe pulse at variable time delays <cit.>. Thereby, symmetric Lorentzian absorption lines are converted into Fano-like lines, with time-delay-dependent features quantifying the population and phase modification induced by the interaction with the strong pump pulse. When interpreting spectral line-shape changes in terms of the underlying atomic dynamics, the action of the weak probe pulse is usually assumed as a small, well understood perturbation. The attention is thus focused on the characterization of the action of the pump pulse as a function of its parameters such as, e.g., intensity and laser frequency, and the main line-shape modifications are exclusively attributed to its nonlinear interaction with the system. Recent investigations of transient-absorption spectra in Rb atoms were based on this assumption <cit.>. Here, in contrast, we fully account for the effect of a potentially intense probe pulse, investigating how the population and phase changes induced by both pulses are encoded in its absorption spectrum. 
On the one hand, this allows us to fully interpret transient-absorption spectra in terms of the pump and probe parameters of interest, without a priori assumptions, which may not correspond to the conditions featured in an experiment and, hence, could lead to an inappropriate or incomplete reconstruction of the strong-field dynamics of the system. On the other hand, by considering cases in which pump and probe pulses exhibit the same intensities, we can highlight the essential differences between spectra where the probe, i.e., measured, pulse either precedes or follows the pump pulse. A proper interpretation of transient-absorption spectra is crucial for the extraction of strong-field dynamical information from these spectra, and the implementation of recently suggested deterministic strong-field quantum-control methods <cit.>.We use a V-type three-level scheme to model an ensemble of Rb atoms, with the 5s ^2S_1/2→ 5p ^2P_1/2 (794.76 nm) and 5s ^2S_1/2→ 5p ^2P_3/2 (780.03 nm) transitions excited by femtosecond pump and probe pulses of variable intensities and time delays. In Sec. <ref>, we present the theoretical model used to describe the evolution of the system and to predict the associated transient-absorption spectra. The numerical results are presented in Sec. <ref>. In particular, time-delay-dependent transient-absorption spectra are shown in Subsec. <ref> for different pump- and probe-pulse intensities. An analytical model based on recently introduced interaction operators <cit.> is used in Subsec. <ref> to interpret the numerical results, focusing on the atomic-phase information which can be extracted from the spectra for different intensities and laser frequencies of the pump and probe pulses. Section <ref> summarizes the results obtained. Atomic units are used throughout unless otherwise stated. § THEORETICAL MODEL§.§ Three-level model and equations of motionWe consider the V-type three-level system depicted in Fig. 
<ref>, modeling fine-structure-split 5s ^2S_1/2→ 5p ^2P_1/2 and 5s ^2S_1/2→ 5p ^2P_3/2 transitions in Rb atoms <cit.>. In particular, we introduce the state |ψ(t,τ)⟩ = ∑_i=1^3 c_i(t,τ)|i⟩, written in terms of the ground state |1⟩≡ 5s ^2S_1/2 and the excited states |2⟩≡ 5p ^2P_1/2 and |3⟩≡ 5p ^2P_3/2, with associated quantum amplitudes c_i(t,τ) and energies ω_i, i∈{1, 2, 3}. The system interacts with a pump pulse, centered on t = 0 and modeled by the classical field ℰ_pu(t) = ℰ_pu,0 f(t) cos(ω_Lt) ê_z, and a delayed probe pulse, centered on time delay t = τ and similarly described as ℰ_pr(t) = ℰ_pr,0 f(t-τ) cos[ω_L(t-τ)] ê_z, as shown in Fig. <ref>. Both pulses are aligned along the polarization vector ê_z, have the same frequency ω_L, vanishing carrier-envelope phases, and intensities I_pu/pr = ℰ_pu/pr,0^2/(8πα) related to the peak field strengths ℰ_pu/pr,0 via the fine-structure constant α. We model their envelope functions as f(t) = { cos^2(π t/T) if |t| ≤ T/2, 0 if |t| > T/2, with T = π T_FWHM/{2 arccos[(1/2)^1/4]} and T_FWHM = 30 fs, defined as the full width at half maximum (FWHM) of |f(t)|^2 <cit.>. Positive time delays correspond to a typical pump-probe setup, in which the system is first excited by the pump pulse and the resulting dynamics are measured by a probe pulse. In contrast, negative time delays describe experiments in which the dipole response generated by the first-arriving probe pulse is subsequently modified by the pump pulse, resulting in an intensity- and time-delay-dependent modulation of the line shape of the absorption spectrum of the transmitted probe pulse. The linearly polarized pulses excite electric-dipole-(E1-)allowed transitions |1⟩→ |k⟩, k∈{2, 3}, with equal magnetic quantum number, Δ M = 0, and dipole-moment matrix elements D_1k = D_1k ê_z. The formulas are written for general complex values of D_1k, although these are real and positive for our atomic implementation with Rb atoms, with D_12 = 1.75 a.u. and D_13 = 2.47 a.u. <cit.>.
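As a cross-check of the envelope definition, the following minimal Python sketch (the names and the grid resolution are our own choices, not part of any published code) evaluates the cos^2 envelope and verifies numerically that choosing T = π T_FWHM/{2 arccos[(1/2)^1/4]} indeed makes the FWHM of |f(t)|^2 equal to T_FWHM = 30 fs:

```python
import numpy as np

T_FWHM = 30e-15  # s; FWHM of |f(t)|^2
# support T chosen so that |f(t)|^2 = cos^4(pi t / T) has the stated FWHM
T = np.pi * T_FWHM / (2 * np.arccos((1 / 2) ** 0.25))

def envelope(t):
    """cos^2 envelope f(t), nonzero only for |t| <= T/2."""
    return np.where(np.abs(t) <= T / 2, np.cos(np.pi * t / T) ** 2, 0.0)

# locate where |f|^2 crosses half maximum and measure the full width
t = np.linspace(-T / 2, T / 2, 200001)
above_half = t[envelope(t) ** 2 >= 0.5]
fwhm = above_half[-1] - above_half[0]  # numerically ~30 fs
```

A useful by-product of this envelope is that its time integral is exactly T/2, which fixes the pulse areas ϑ_k = D_1k ℰ_0 T/2 used later in the weak-pulse limit.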
For the intensities considered here, we neglect the presence of higher excited states, to which states |2⟩ and |3⟩ could also be coupled. The total Hamiltonian of the systemĤ = Ĥ_0 + Ĥ_int(t,τ)then consists of the unperturbed atomic HamiltonianĤ_0 = ∑_i = 1^3 (ω_i-γ_i/2)|i⟩⟨ i|and the E1 light-matter interaction Hamiltonian in the rotating-wave approximation <cit.>Ĥ_int = -1/2∑_k = 2^3 _Rk(t,τ) ^ω_Lt |1⟩⟨ k| + H.c.In Eq. (<ref>), the complex eigenvalues (ω_i-γ_i/2) of Ĥ_0 are given by the energies ω_i and the decay rates γ_i, included in order to effectively account for broadening effects in the experiment and defining an effective time scale for the dipole decay <cit.>. Transition energies ω_ij = ω_i - ω_j are equal to ω_21 = 1.56 eV and ω_31 = 1.59 eV <cit.>, whereas we set γ_1 = 0 and γ_2 = γ_3 = 1/(500 fs). In Eq. (<ref>), the time- and time-delay-dependent Rabi frequencies have been introduced <cit.>:_Rk(t,τ) = D_1k[ℰ_pr,0f(t-τ)^-ω_Lτ + ℰ_pu,0f(t)].The equations of motion (EOMs) satisfied by the vectorc⃗(t,τ) = (c_1(t,τ), c_2(t,τ), c_3(t,τ))^T,of components given by the amplitudes of the state vector |ψ(t,τ)⟩, are determined by the Schrödinger equation|ψ(t,τ)⟩/ t = Ĥ |ψ(t,τ)⟩,which leads todc⃗/dt =[ 0_R2/2 ^ω_L t_R3/2 ^ω_L t; _R2^*/2 ^-ω_L t -γ_2/2-ω_21 0; _R3^*/2 ^-ω_L t 0 -γ_3/2-ω_31 ]c⃗ .The system is assumed to be initially in its ground state |ψ_0⟩ = |1⟩, i.e., c_i,0 = δ_i1. §.§ Transient-absorption spectrumWe solve the EOMs in Eq. (<ref>) in order to simulate experimental optical-density (OD) absorption spectra𝒮_exp(ω, τ) = -log[S_pr,out(ω,τ)/S_pr,in(ω)],where S_pr,in(ω,τ) is the spectral intensity of the incoming probe pulse, whereas S_pr,out(ω,τ) is that of the transmitted probe pulse, explicitly dependent upon the time delay between pump and probe pulses. 
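The EOMs above can be integrated numerically with a standard ODE solver. The sketch below (Python, atomic units; the parameter names, unit conversions, and choice of integrator are our own assumptions, not a published implementation) propagates the amplitudes c_i for a weak probe pulse alone, using the parameter values quoted in the text:

```python
import numpy as np
from scipy.integrate import solve_ivp

# model parameters in atomic units, values as quoted in the text
eV, fs = 1 / 27.211386, 41.341374
alpha = 1 / 137.035999
w21, w31, wL = 1.56 * eV, 1.59 * eV, 1.59 * eV
g2 = g3 = 1 / (500 * fs)
D12, D13 = 1.75, 2.47
T = np.pi * 30 * fs / (2 * np.arccos(0.5 ** 0.25))  # cos^2 envelope support

def f(t):
    return np.where(np.abs(t) <= T / 2, np.cos(np.pi * t / T) ** 2, 0.0)

def rhs(t, c, E_pr0, E_pu0, tau):
    # two-pulse Rabi frequencies Omega_Rk(t, tau) = D_1k * [probe + pump]
    field = E_pr0 * f(t - tau) * np.exp(-1j * wL * tau) + E_pu0 * f(t)
    OR2, OR3 = D12 * field, D13 * field
    ph = np.exp(1j * wL * t)
    c1, c2, c3 = c
    return [1j * OR2 / 2 * ph * c2 + 1j * OR3 / 2 * ph * c3,
            1j * np.conj(OR2) / 2 / ph * c1 - (g2 / 2 + 1j * w21) * c2,
            1j * np.conj(OR3) / 2 / ph * c1 - (g3 / 2 + 1j * w31) * c3]

# example: weak probe only (I_pr = 1e9 W/cm^2 converted to a.u.), tau = 0
E_pr0 = np.sqrt(8 * np.pi * alpha * 1e9 / 3.50945e16)
sol = solve_ivp(rhs, [-T, 200 * fs], [1 + 0j, 0j, 0j],
                args=(E_pr0, 0.0, 0.0), rtol=1e-8, atol=1e-10)
pops = np.abs(sol.y[:, -1]) ** 2  # final populations |c_i|^2
```

At this intensity the ground state remains almost fully populated, and the resonant |1⟩→|3⟩ transition (ω_L = ω_31, larger dipole moment) acquires more population than the detuned |1⟩→|2⟩ one, as expected.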
For low densities and small medium lengths, where propagation effects can be neglected, the time-delay-dependent absorption spectrum 𝒮_exp(ω,τ) can be calculated in terms of the single-particle dipole response of the system <cit.>𝒮_1(ω) ∝-ω[∑_k=2^3 D_1k^*∫_-∞^∞ c_1(t,τ)c_k^*(t,τ) ^-ω tt/∫_-∞^∞ℰ^-_pr(t) ^-ω t t],whereℰ^-_pr(t) = 1/2ℰ_pr,0 f(t-τ) ^ω_L(t-τ)is the negative-frequency complex electric field <cit.> and c_1(t,τ)c_k^*(t,τ) here represents the dipole response of the kth transition. In the following calculations, the denominator in Eq. (<ref>) is approximated by ∫_-∞^∞ℰ^-_pr(t) ^-ω t t =^-ωτ ℰ_pr,0/2 ∫_-∞^∞ f(t-τ) ^-(ω - ω_L)(t-τ) t≈^-ωτ ℰ_pr,0/2∫_-∞^∞ f(t)t = ^-ωτ K_pr,which is valid for an incoming probe pulse much broader than the transition energy between the two excited states, such that its spectral intensity can be approximately considered constant in the frequency range of interest. Spectra associated with different probe-pulse intensities, therefore, need to be properly normalized via the multiplication factor K_pr for comparison. Equation (<ref>) can then be rewritten as𝒮_1(ω) ∝-ω[∑_k=2^3 D_1k^*∫_-∞^∞ c_1(t,τ) c_k^*(t,τ) ^-ω (t-τ)t/K_pr],with the Fourier transform in the numerator centered around the arrival time of the probe pulse.For the noncollinear geometry depicted in Fig. <ref>, fast oscillations of the measured transient-absorption spectrum as a function of time delay τ cannot be distinguished and are averaged out <cit.>. 
Here, this is taken into account by convolving S_1(ω,τ) with a normalized Gaussian function G(τ,Δτ) of width Δτ = 5× 2π/ω_L, which leads to 𝒮(ω,τ) = ∫_-∞^∞ G(τ - τ',Δτ) 𝒮_1(ω, τ') dτ'. §.§ Analytical model in terms of interaction operators In order to interpret numerical results from the simulation of 𝒮(ω,τ), we employ the recently introduced strong-field interaction operators Û(I) to model the effect of a pulse of intensity I on the atomic system <cit.>. The time evolution of the system |ψ(t)⟩ from an initial time t_0, given by the solution of the EOMs (<ref>), can be written in terms of the evolution operator 𝒰̂(t,t_0), |ψ(t)⟩ = 𝒰̂(t,t_0) |ψ(t_0)⟩. In the absence of external fields, this reduces to the free-evolution operator V̂(t) = ^-Ĥ_0 t, which describes the dynamics of the unperturbed atomic system. The evolution of the system in the presence of a single pulse of intensity I = ℰ_0^2/(8πα), peak field strength ℰ_0, centered around t_c = 0 and with the same envelope f(t) and pulse duration T we introduced in Sec. <ref>, is then associated with the evolution operator 𝒰̂_0(t,t_0), solution of d𝒰̂_0(t,t_0)/dt=[ 0 _R2/2 ^ω_L t _R3/2 ^ω_L t; _R2^*/2 ^-ω_L t -γ_2/2-ω_21 0; _R3^*/2 ^-ω_L t 0 -γ_3/2-ω_31 ]𝒰̂_0(t,t_0), with initial conditions 𝒰̂_0(t_0,t_0) = Î and the identity matrix Î. In Eq. (<ref>), the single-pulse Rabi frequencies _Rk(t) = D_1k ℰ_0 f(t) are used. For the scheme discussed in this paper, where pump and probe pulses of equal femtosecond duration are employed, the time information related to the continuous evolution of the system in the presence of the pulse is difficult to extract. For our purposes, it is therefore beneficial to focus on the total action of the pulse, i.e., on the state reached by the system at the conclusion of the interaction with a pulse.
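Numerically, the single-pulse propagator 𝒰̂_0(T/2,−T/2) can be obtained by propagating each column of the identity matrix through the single-pulse EOMs. A minimal Python sketch under the same parameter values quoted earlier (the function names and unit conversions are our own assumptions):

```python
import numpy as np
from scipy.integrate import solve_ivp

eV, fs = 1 / 27.211386, 41.341374
alpha = 1 / 137.035999
w21, w31, wL = 1.56 * eV, 1.59 * eV, 1.59 * eV
g2 = g3 = 1 / (500 * fs)
D12, D13 = 1.75, 2.47
T = np.pi * 30 * fs / (2 * np.arccos(0.5 ** 0.25))

def propagator_U0(I_wcm2):
    """U0(T/2, -T/2) for a single cos^2 pulse of intensity I (W/cm^2)."""
    E0 = np.sqrt(8 * np.pi * alpha * I_wcm2 / 3.50945e16)
    def rhs(t, c):
        envelope = np.cos(np.pi * t / T) ** 2
        OR2, OR3 = D12 * E0 * envelope, D13 * E0 * envelope
        ph = np.exp(1j * wL * t)
        return [1j * OR2 / 2 * ph * c[1] + 1j * OR3 / 2 * ph * c[2],
                1j * OR2 / 2 / ph * c[0] - (g2 / 2 + 1j * w21) * c[1],
                1j * OR3 / 2 / ph * c[0] - (g3 / 2 + 1j * w31) * c[2]]
    cols = [solve_ivp(rhs, [-T / 2, T / 2], col, rtol=1e-9, atol=1e-12).y[:, -1]
            for col in np.eye(3, dtype=complex)]
    return np.array(cols).T  # columns = propagated basis states

U0 = propagator_U0(1e10)
# for gamma_k -> 0 this matrix would be unitary; with the decay rates above it
# is only approximately so over the ~80-fs envelope support
dev = np.linalg.norm(U0.conj().T @ U0 - np.eye(3))
```

Sandwiching this matrix between the diagonal free-evolution factors then yields the effectively instantaneous pulse operator on which the following analysis is based.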
Equation (<ref>) can be used to calculate 𝒰̂_0(T/2,-T/2) and thus connect the initial state |ψ(-T/2)⟩ with the final state |ψ(T/2)⟩ at the end of the pulse:|ψ(T/2)⟩ = 𝒰̂_0(T/2,-T/2) |ψ(-T/2)⟩.However, one can also introduce effective initial (|ψ^-⟩) and final (|ψ^+⟩) states|ψ^±⟩ = V̂(∓ T/2)|ψ(± T/2)⟩= ^±Ĥ_0T/2|ψ(± T/2)⟩and thus define the unique, intensity-dependent interaction operatorsÛ(I) = V̂(-T/2) 𝒰̂_0(T/2,-T/2) V̂(-T/2)connecting them,|ψ^+⟩ = Û(I)|ψ^-⟩,thus capturing the essential features of the action of the pulse in terms of an effectively instantaneous interaction, as schematically represented in Fig. <ref>. An analytical model can then be derived to describe the associated 𝒮(ω,τ), which enables one to quantify how pulse-induced changes in the population and phase of the atomic states are encoded in observable time-delay-dependent spectra. For a weak and ultrashort pulse of peak field strength ℰ_0 and envelope f(t), we can introduce approximated Rabi frequencies_Rk(t)≈ϑ_k δ(t),with the Dirac δ and the pulse areasϑ_k = ∫_-∞^∞ D_1k ℰ_0 f(t)t.The solution of Eq. (<ref>) and the use of the definition (<ref>) allow one to calculate the associated interaction operator which, up to second order, readsÛ_weak = [ 1-|ϑ_2|^2 + |ϑ_3|^2/8 ϑ_2/2 ϑ_3/2; ϑ_2^*/2 1-|ϑ_2|^2/8- ϑ_2^*ϑ_3; ϑ_3^*/2- ϑ_2ϑ_3^* 1-|ϑ_3|^2/8 ]. In the following, we interpret intensity-dependent transient-absorption spectra in terms of the matrix elements of pump- and probe-pulse interaction operators for a probe-pump and pump-probe setup. In contrast to previous results <cit.>, population and phase changes due to the interaction with intense probe and pump pulses are both explicitly addressed. Since we are interested in atomic phases, and in particular in their connection with the phase of the time-delay-dependent oscillations displayed by transient-absorption spectra for positive and negative time delays, we do not focus on the case of overlapping pulses. 
We are therefore allowed to develop an analytical model in which the dynamics of the system are described in terms of well defined sequences of free evolution and interaction with a pump or a probe pulse of given intensity. §.§.§ Probe-pump setup In a probe-pump setup (τ<0), for nonoverlapping pulses and neglecting the details of the continuous atomic dynamics in the presence of a pulse, the time evolution of the system can be written in terms of the state|ψ(t,τ)⟩ = { |ψ_0⟩, t<τ, V̂(t-τ)Û_pr(I_pr)|ψ_0⟩,τ<t<0,V̂(t)Û_pu(I_pu)V̂(-τ)Û_pr(I_pr)|ψ_0⟩, t>0, .with |ψ_0⟩=|1⟩ and where we have introduced the pump- and probe-pulse interaction operators, Û_pu(I_pu) and Û_pr(I_pr), dependent upon the respective pulse intensities. This can be included into Eq. (<ref>) in order to model the probe-pump spectrum 𝒮_1(ω,τ), τ<0, in terms of interaction-operator matrix elements. This results in a sum of terms, each of which oscillates as a function of τ at a given frequency. Thereby, one can recognize, for the frequencies ω≈ω_k1 in which we are interested, those terms responsible for fast oscillations of 𝒮_1(ω,τ) as a function of time delay which would not be exhibited by a spectrum measured in a noncollinear geometry. After neglecting these fast oscillating terms, the time-delay-average probe-pump spectrum reads𝒮_prpu(ω,τ) ∝-ω/K_pr{∑_k=2^3 D_1k^*/(ω - ω_k1) + γ_k/2 ×[U_pr,11U_pr,k1^*(1-^ (ω - ω_k1)τ^γ_k/2τ)+ U_pu,11U_pu,k2^* U_pr,11 U_pr,21^* ^ (ω - ω_21)τ^γ_2/2τ+ U_pu,11U_pu,k3^* U_pr,11 U_pr,31^* ^ (ω - ω_31)τ^γ_3/2τ]}. §.§.§ Pump-probe setup When a pump-probe setup is utilized (τ>0), for nonoverlapping pulses and neglecting the details of the continuous atomic dynamics in the presence of a pulse, the atomic state can be modeled as|ψ(t,τ)⟩ = { |ψ_0⟩, t<0, V̂(t)Û_pu(I_pu)|ψ_0⟩, 0<t<τ,V̂(t-τ)Û_pr(I_pr)V̂(τ)Û_pu(I_pu)|ψ_0⟩, t>τ, .with |ψ_0⟩=|1⟩. 
By neglecting fast oscillating terms appearing in the resulting single-particle absorption spectrum (<ref>) at frequencies ω≈ω_k1, the time-delay-average pump-probe spectrum can be written in terms of the matrix elements of the interaction operators Û_pu(I_pu) and Û_pr(I_pr) as𝒮_pupr(ω,τ) ∝ -ω/K_pr[∑_k=2^3D_1k^*/(ω - ω_k1) + γ_k/2×( U_pr, 11U_pr,k1^*|U_pu,11|^2 +U_pr, 12U_pr,k2^*|U_pu,21|^2^-γ_2τ+U_pr, 12U_pr,k3^*U_pu,21 U^*_pu,31 ^ω_32τ ^-γ_2+γ_3/2τ+U_pr, 13U_pr,k2^* U_pu,31 U^*_pu,21 ^-ω_32τ ^-γ_2+γ_3/2τ+U_pr, 13U_pr,k3^*|U_pu,31|^2^-γ_3τ)].§ RESULTS AND DISCUSSION§.§ Transient-absorption spectra for intense probe and pump pulsesHere, we apply our three-level model to study Rb atoms excited by intense femtosecond probe and pump pulses. Simulated time-delay dependent transient-absorption spectra, obtained by numerically solving Eq. (<ref>) and then using this solution in Eqs. (<ref>) and (<ref>), are displayed in Fig. <ref> for representative values of pump- and probe-pulse intensities and for a laser frequency of ω_L = 1.59 eV. For all sets of intensities investigated, two absorption lines can be distinguished, respectively centered on the transition energies ω_21 = 1.56 eV and ω_31 = 1.59 eV. The shape and amplitude of these lines is modulated as a function of time delay, featuring oscillations whose period of 2π/ω_32 = 140 fs is given by the beating frequency ω_32. This is stressed by the black lines, showing the spectra evaluated at the two transition energies ω_21 and ω_31 as a function of τ. Figures <ref>(a), <ref>(b), and <ref>(c) show transient-absorption spectra for a weak pump intensity of I_pu = 1× 10^9 W/cm^2 and three different values of probe intensity. Firstly, we notice that the amplitude of the time-delay-dependent oscillations displayed by the spectra is very small for these weak values of the pump intensity. 
The shape and amplitude of the absorption lines remain almost completely unchanged throughout the range of τ displayed, with no significant features distinguishing between positive and negative time delays. By modifying the probe intensity, we notice a variation in the strength of the lines, going from absorption for a weak intensity of I_pr = 1× 10^9 W/cm^2 to emission at higher values of intensity.When higher values of pump-pulse intensity are employed, clear time-delay-dependent features can be distinguished. The amplitude and the phase of these oscillations in τ varies differently, for positive and negative time delays, as a function of pump and probe intensities. Figures <ref>(a), <ref>(d), and <ref>(g) show spectra evaluated for a weak probe intensity of I_pr = 1× 10^9 W/cm^2 and increasing values of I_pu. For intermediate values of the pump-pulse intensity (I_pu = 1× 10^10 W/cm^2) and for both positive and negative time delays, the phase of the exhibited time-delay-dependent spectra is the same for the two transition energies, as evinced by the red dashed lines which highlight the position of the minima of 𝒮(ω_21,τ) and 𝒮(ω_31,τ). However, as already discussed in Ref. <cit.>, a shift can be recognized for a higher pump intensity of I_pu = 2.8× 10^10 W/cm^2: while the spectra evaluated at ω_21 and ω_31 shift in opposite directions for τ<0 as a clear and distinguishable signature of the onset of strong-field effects, a common shift in the same direction takes place at τ>0 when the pump-pulse intensity is increased. Recognizing these strong-field-induced features and understanding them in terms of intensity-dependent atomic phases becomes more complex when a probe pulse is used which is not sufficiently weak. This appears clearly when one compares Figs. <ref>(d), <ref>(e), and <ref>(f), where results are shown for an intermediately strong pump pulse and different values of the probe intensity. 
At both positive and negative time delays, absorption lines evaluated at ω_21 and ω_31 feature a shift in opposite directions, which becomes larger at high probe intensities. Similarly, spectra displayed in Figs. <ref>(g), <ref>(h), and <ref>(i) for a pump intensity of I_pu = 2.8× 10^10 W/cm^2 show that a probe-pulse-induced shift of the spectra evaluated at ω_21 and ω_31 arises for growing values of I_pr: at negative time delays, this enlarges the already existing shift due to the strong pump pulse; for positive time delays, where the increase in I_pu causes an aligned, common shift of 𝒮(ω_21,τ) and 𝒮(ω_31,τ), the presence of an intense probe pulse is reflected in additional shifts, analogous to those already recognized for I_pu = 1× 10^10 W/cm^2. It should be noticed that the spectra in Figs. <ref>(a), <ref>(e), and <ref>(i) are calculated for equal pump- and probe-pulse intensities. The dynamics of the system are, therefore, perfectly symmetric with respect to τ, and the system features the same time evolution when equally delayed pump and probe pulses are used, independent of their order of arrival. Nevertheless, the spectra exhibited in the above-listed figures are clearly not symmetric with respect to τ, and different amplitudes and phases of the time-delay-dependent features of 𝒮(ω,τ) can be recognized at τ>0 or τ<0, in spite of identical underlying dynamics. This can be understood by noticing that the spectrum arises from the interference between the electric dipole response of the atomic system and the probe pulse: even when the quantum dynamics are identical, the spectrum still reveals how these influence the first-(second-)arriving probe pulse for τ<0 (τ>0). This is also evident from the definition of the absorption spectrum (<ref>), where the Fourier transform is always centered on the central time τ of the probe pulse, and then from the analytic models in Eqs.
(<ref>) and (<ref>), respectively describing time-delay-averaged probe-pump and pump-probe spectra from a noncollinear geometry. Even when identical pump and probe pulses are used (Û_pr = Û_pu), the spectra evaluated at positive and negative time delays are determined by different interaction-operator matrix elements and hence differ.In the previous discussion we have focused on the time-delay-dependent properties of the spectra 𝒮(ω_k1, τ), evaluated at the transition energies ω_k1. However, the identification of ω_21 and ω_31 may not be straightforward experimentally, affecting the properties of the observed time-delay-dependent features and the quantification of the associated phases. In order to better discuss this point and describe the line-shape changes ensuing from the presence of intense pump and probe pulses, in Figs. <ref> and <ref>, for a probe-pump and pump-probe setup, respectively, we present transient-absorption spectra 𝒮(ω, τ_k1), k ∈{2, 3}, evaluated as a function of frequency for fixed values of the time delay, τ_21 and τ_31. Here, the time delay τ_21 (τ_31) is the one for which 𝒮(ω_21,τ) [𝒮(ω_31,τ)] has a local minimum, as identified in Fig. <ref> by the red, dashed lines. The pictures show that the identified local-minimum points are not necessarily associated with emission peaks pointing downwards. Furthermore, for negative time delays, where additional frequency modulations appear as shown in Figs. <ref> and <ref>, one has to disentangle the behavior of the peaks centered on ω_k1 from the remaining modulations appearing as a function of frequency. Nevertheless, all panels confirm that it is possible to isolate the time-delay-dependent behavior of this central peak and, thereby, identify the particular time delay at which this is minimal. Encouraged by the results displayed in Figs. 
<ref> and <ref>, in the following we focus on 𝒮(ω_k1, τ) and the corresponding time-delay-dependent oscillations in order to draw conclusions about strong-field-induced atomic phases. Figure <ref> shows the amplitude of the numerically calculated spectra 𝒮(ω_21,τ) and 𝒮(ω_31,τ) as a function of probe-pulse intensity for two different values of I_pu. The shifts in the phase of the time-delay-dependent spectra are clearly apparent here. For τ>0 and τ<0, the effect of the intense pump and probe pulses appears in the spectrum as independent pump- and probe-induced phase shifts. In the following, in order to investigate this point further and identify how atomic phase changes are encoded in transient-absorption spectra, we interpret our results in terms of the interaction operators introduced in Subsec. <ref>.

§.§ Interpretation of pump- and probe-pulse-induced phases in terms of interaction-operator matrix elements

Here, we use Eqs. (<ref>) and (<ref>) in order to interpret the numerically calculated transient-absorption spectra presented in Subsec. <ref> in terms of interaction-operator matrix elements. In particular, we focus on the phase of the time-delay-dependent oscillations exhibited by 𝒮(ω_21,τ) and 𝒮(ω_31,τ) [Fig. <ref>], and show how these can be understood via the strong-field-induced atomic phases quantified in Û_pu and Û_pr. For both a probe-pump and a pump-probe setup, we develop analytical interpretation models, calculate Û_pu and Û_pr with Eqs. (<ref>) and (<ref>), and then use these interpretation models to understand the phase features displayed by the transient-absorption spectra in Figs. <ref> and <ref>. Finally, we further investigate the dependence of the phases extractable from transient-absorption spectra upon the laser frequency of the pump and probe pulses.

§.§.§ Probe-pump setup

First, we focus on the probe-pump interpretation model given by Eq. (<ref>), aiming to better understand the properties of the spectrum evaluated at ω = ω_k1.
For interpretation purposes, since ω_32 ≫ γ_k, we can neglect, to first approximation, the term proportional to D_1k'^*/(ω_kk' + iγ_k/2), with k' ∈{2, 3}, k' ≠ k, ω_kk' = ±ω_32, thus obtaining

𝒮_prpu(ω_k1,τ) ∝ -(ω/K_pr) Im{ (2D_1k^*/γ_k) [ U_pr,11 U_pr,k1^* (1 - e^{γ_k τ/2}) + U_pu,11 U_pu,kk^* U_pr,11 U_pr,k1^* e^{γ_k τ/2} + U_pu,11 U_pu,kk'^* U_pr,11 U_pr,k'1^* e^{iω_kk' τ} e^{γ_k' τ/2} ] }.

The only term which displays oscillations as a function of τ is given by

𝒮̃_prpu(ω_k1,τ) ∝ -(2ω/K_pr) (D_1k/γ_k) e^{γ_k' τ/2} Im( Y_pu,k Y_pr,k e^{iω_kk' τ} ),

with

Y_pr,k = U_pr,11 U_pr,k'1^*,  Y_pu,k = U_pu,11 U_pu,kk'^*,

and where we have used explicitly the fact that, for our atomic implementation with Rb atoms, the projections D_1k of the dipole-moment matrix elements along the pulse polarization axis ê_z are real. More explicitly, we can write

𝒮̃_prpu(ω_21,τ) ∝ -(2ω/K_pr) (D_12/γ_2) e^{γ_3 τ/2} |Y_pu,2| |Y_pr,2| Im{ e^{-i[ω_32 τ - arg(Y_pu,2) - arg(Y_pr,2)]} } = -(2ω/K_pr) (D_12/γ_2) e^{γ_3 τ/2} |Y_pu,2| |Y_pr,2| sin[ω_32 τ - arg(Y_pu,2) - π - arg(Y_pr,2)]

and

𝒮̃_prpu(ω_31,τ) ∝ -(2ω/K_pr) (D_13/γ_3) e^{γ_2 τ/2} |Y_pu,3| |Y_pr,3| Im{ e^{i[ω_32 τ + arg(Y_pu,3) + arg(Y_pr,3)]} } = -(2ω/K_pr) (D_13/γ_3) e^{γ_2 τ/2} |Y_pu,3| |Y_pr,3| sin[ω_32 τ + arg(Y_pu,3) + arg(Y_pr,3)].

With

Y_pr,2 = U_pr,11 U_pr,31^*,  Y_pr,3 = U_pr,11 U_pr,21^*,  Y_pu,2 = U_pu,11 U_pu,23^*,  Y_pu,3 = U_pu,11 U_pu,32^*,

and the phases

φ_pr,2 = -π - arg(U_pr,11 U_pr,31^*),  φ_pr,3 = arg(U_pr,11 U_pr,21^*),  φ_pu,2 = -arg(U_pu,11 U_pu,23^*),  φ_pu,3 = arg(U_pu,11 U_pu,32^*),

this reduces to

𝒮̃_prpu(ω_21,τ) = -(2ω/K_pr) (D_12/γ_2) e^{γ_3 τ/2} |Y_pu,2| |Y_pr,2| sin[ω_32 τ + φ_pr,2 + φ_pu,2]

and

𝒮̃_prpu(ω_31,τ) = -(2ω/K_pr) (D_13/γ_3) e^{γ_2 τ/2} |Y_pu,3| |Y_pr,3| sin[ω_32 τ + φ_pr,3 + φ_pu,3].

The intensity-dependent position of the minima of 𝒮(ω_k1,τ) for τ<0, shown in Fig. <ref> by the red dashed lines at negative time delays, can hence be quantified via Eqs. (<ref>) and (<ref>) in terms of φ_pr,k and φ_pu,k.
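The amplitude–phase reduction above, Im(Y_pu,2 Y_pr,2 e^{-iω_32τ}) = |Y_pu,2||Y_pr,2| sin[ω_32τ - arg(Y_pu,2) - π - arg(Y_pr,2)], is a purely algebraic identity and can be checked numerically; the complex values below are arbitrary placeholders, not quantities from this work.

```python
import numpy as np

# Arbitrary placeholder values (not taken from the paper)
w32 = 2.0                              # beat frequency, arbitrary units
tau = np.linspace(-10.0, 0.0, 501)     # negative delays (probe-pump side)
Y_pu = 0.3 * np.exp(1j * 1.1)          # stand-in for Y_pu,2
Y_pr = 0.7 * np.exp(-1j * 0.4)         # stand-in for Y_pr,2

# Left-hand side: imaginary part of the oscillating product
lhs = np.imag(Y_pu * Y_pr * np.exp(-1j * w32 * tau))

# Right-hand side: amplitude-phase (sine) form used in the text
rhs = (abs(Y_pu) * abs(Y_pr)
       * np.sin(w32 * tau - np.angle(Y_pu) - np.angle(Y_pr) - np.pi))

assert np.allclose(lhs, rhs)
```

The same check with e^{+iω_32τ} and the opposite phase signs reproduces the corresponding expression for ω_31.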
The sine functions appearing therein have local minima, respectively centered around

τ_21 = τ_0 - (φ_pr,2 + φ_pu,2)/ω_32,  for ω = ω_21, τ<0,
τ_31 = τ_0 - (φ_pr,3 + φ_pu,3)/ω_32,  for ω = ω_31, τ<0,

with the additive offset τ_0 = -9π/(2ω_32). For real, positive dipole-moment matrix elements D_1k, and hence real, positive pulse areas ϑ_k, the intensity-dependent variables Y_pr,k and Y_pu,k can be explicitly written in the case of weak pulses via Eq. (<ref>) as

Y_pr,2^weak = -iϑ_3/2,  Y_pr,3^weak = -iϑ_2/2,  Y_pu,2^weak = -ϑ_2ϑ_3,  Y_pu,3^weak = -ϑ_2ϑ_3,

along with the associated phases

φ_pr,2^weak = -π/2,  φ_pr,3^weak = -π/2,  φ_pu,2^weak = ±π,  φ_pu,3^weak = ∓π.

For low intensities, the effect of the probe pulse is linearly proportional to the pulse areas ϑ_k and, therefore, of first order in the amplitude of the electric field, whereas the action of the pump pulse depends on the product ϑ_2ϑ_3 and is hence of second order. This explains the small, almost vanishing amplitude of the time-delay-dependent oscillations displayed for τ<0 by the transient-absorption spectra in Figs. <ref>(a), <ref>(b), and <ref>(c), for a small pump-pulse intensity of I_pu = 1× 10^9 W/cm^2.

In Figs. <ref>(a) and <ref>(b), the total phases [φ_pr,2 + φ_pu,2 - (φ_pr,2^weak + φ_pu,2^weak)] and [φ_pr,3 + φ_pu,3 - (φ_pr,3^weak + φ_pu,3^weak)] [Eqs. (<ref>) and (<ref>) after numerical calculation of Û_pr and Û_pu via Eqs. (<ref>) and (<ref>)] are exhibited, as a function of I_pr and for a discrete set of values of I_pu. The very good agreement between the intensity dependence of these phases and the shift displayed by the time-delay-dependent features of 𝒮(ω_21, τ) and 𝒮(ω_31, τ) [Fig. <ref> and Figs. <ref>(c) and <ref>(d) at negative time delays] confirms the validity of our analytical interpretation model and in particular of Eq. (<ref>). The shift in the phases [Figs. <ref>(a) and <ref>(b)] is reflected by an oppositely directed shift in the local-minimum points [Figs.
<ref>(c) and <ref>(d)] as a function of I_pr and I_pu, as expected from the minus sign in Eq. (<ref>).

In order to understand the physics underlying the phase shifts φ_pr,k appearing in the spectrum, we can use the schematic illustration of Û(I) in Fig. <ref> to clarify the meaning of the terms appearing in Eqs. (<ref>) and (<ref>). The associated terms Y_pr,k = U_pr,11 U_pr,k'1^*, k' ≠ k, are the coherences (in amplitude and phase) generated by the first-arriving probe pulse acting on the ground state. The shift displayed by 𝒮(ω_k1,τ) is therefore related to the phase of these strong-field-induced coherences. The different sign appearing in the definitions of φ_pr,2 and φ_pr,3 also explains why the time-delay-dependent oscillations of 𝒮(ω_21,τ) and 𝒮(ω_31,τ) shift in opposite directions for increasing probe-pulse intensities [Figs. <ref>(c) and <ref>(d)].

The second-arriving intense pump pulse nonlinearly modifies an already existing superposition of excited states. The shifts φ_pu,2 and φ_pu,3 in the oscillating features of 𝒮(ω_21,τ) and 𝒮(ω_31,τ), respectively, quantify the changes in the atomic phases induced by the pump pulse. This can be recognized via inspection of the associated interaction-operator matrix elements, Y_pu,k = U_pu,11 U_pu,kk'^*, k' ≠ k [Eqs. (<ref>) and (<ref>)], which describe how the pump pulse transforms an initial coherence between the ground state and the excited state |k'⟩ into a final coherence between the ground state and the excited state |k⟩ (see also the schematic illustration in Fig. <ref>). The ensuing phase change determines the shift appearing in the oscillating features of the transient-absorption spectrum. Also in this case, the shift in opposite directions displayed by 𝒮(ω_21,τ) and 𝒮(ω_31,τ) for rising values of I_pu [Figs. <ref>(c) and <ref>(d)] is a consequence of the opposite sign with which φ_pu,2 and φ_pu,3 are related to the interaction-operator matrix elements [Eq. (<ref>)].
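To make the interaction-operator picture concrete, the matrix Û(I) for a single pulse can be obtained by time-ordered integration of the Schrödinger equation for a V-type three-level system. The sketch below is our own illustration, not the authors' code: the rotating-frame Hamiltonian, the Gaussian envelope, and the sign conventions are assumptions, chosen so that a weak resonant pulse reproduces the weak-limit phase φ_pr,2^weak = -π/2 discussed above.

```python
import numpy as np

def interaction_operator(theta2, theta3, delta2=0.0, delta3=0.0,
                         t_fwhm=1.0, n_steps=4000):
    """Time-ordered propagator of a V-type system {|1>,|2>,|3>} driven by
    one Gaussian pulse (rotating-wave approximation, hbar = 1).
    theta_k are the pulse areas on |1>-|k>; delta_k are the detunings.
    This is a hypothetical model, not the paper's implementation."""
    ts = np.linspace(-4.0 * t_fwhm, 4.0 * t_fwhm, n_steps + 1)
    dt = ts[1] - ts[0]
    sigma = t_fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))

    def omega(t):  # unit-area Gaussian envelope
        return np.exp(-t**2 / (2.0 * sigma**2)) / (sigma * np.sqrt(2.0 * np.pi))

    def H(t):      # rotating-frame Hamiltonian (assumed sign convention)
        o2, o3 = theta2 * omega(t), theta3 * omega(t)
        return np.array([[0.0,       -o2 / 2.0, -o3 / 2.0],
                         [-o2 / 2.0, -delta2,    0.0],
                         [-o3 / 2.0,  0.0,      -delta3]], dtype=complex)

    U = np.eye(3, dtype=complex)
    for t in ts[:-1]:  # RK4 step for dU/dt = -i H(t) U
        k1 = -1j * H(t) @ U
        k2 = -1j * H(t + dt / 2) @ (U + dt / 2 * k1)
        k3 = -1j * H(t + dt / 2) @ (U + dt / 2 * k2)
        k4 = -1j * H(t + dt) @ (U + dt * k3)
        U = U + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return U

# Weak resonant pulse: extract phi_pr,2 = -pi - arg(U_11 U_31^*)
U = interaction_operator(0.05, 0.05)
phi_pr2 = -np.pi - np.angle(U[0, 0] * np.conj(U[2, 0]))
```

In this simplified model, for resonant pulses (δ_k = 0) all H(t) commute, so φ_pr,2 stays at its weak-field value -π/2 for pulse areas below π; intensity-dependent phase shifts of the kind analyzed in the text then require nonzero detunings, i.e., a laser frequency away from both resonances.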
§.§.§ Pump-probe setup

Here, we focus on the positive-time-delay part of the spectrum, and use the associated interpretation model given by Eq. (<ref>) in order to better understand the properties of the spectra evaluated at ω = ω_k1. For this purpose, as in the previous subsection, we can neglect the terms given by D_1k'^*/(ω_kk' + iγ_k'/2) in Eq. (<ref>), and thus identify those contributions which are responsible for the oscillations exhibited by the spectrum as a function of τ:

𝒮̃_pupr(ω_k1,τ) ∝ -(2ω/K_pr) (D_1k/γ_k) e^{-(γ_2+γ_3)τ/2} Im( U_pr,12 U_pr,k3^* U_pu,21 U_pu,31^* e^{iω_32 τ} + U_pr,13 U_pr,k2^* U_pu,31 U_pu,21^* e^{-iω_32 τ} ).

Also in this case, we have used explicitly the fact that the dipole-moment matrix elements D_1k are real. By introducing the intensity-dependent pump and probe variables

Z_pu = U_pu,21 U_pu,31^*,  A_pr,k = U_pr,12 U_pr,k3^*,  B_pr,k = U_pr,13 U_pr,k2^*,

we can write Eq. (<ref>) as

𝒮̃_pupr(ω_k1,τ) ∝ -(2ω/K_pr) (D_1k/γ_k) e^{-(γ_2+γ_3)τ/2} Im( A_pr,k Z_pu e^{iω_32 τ} + B_pr,k Z_pu^* e^{-iω_32 τ} ),

and observe that the pump pulse acts equally on both terms of the above sum, resulting in a phase shift

ψ_pu = arg(Z_pu) = arg(U_pu,21 U_pu,31^*).

Furthermore, since Im(z) = -Im(z^*), we have that Im{B_pr,k Z_pu^* e^{-iω_32 τ}} = -Im{B_pr,k^* Z_pu e^{iω_32 τ}}, and hence

𝒮̃_pupr(ω_k1,τ) ∝ -(2ω/K_pr) (D_1k/γ_k) e^{-(γ_2+γ_3)τ/2} |Z_pu| Im[(A_pr,k - B_pr,k^*) e^{i(ω_32 τ + ψ_pu)}].

By further introducing the phases

ψ_pr,2 = arg(A_pr,2 - B_pr,2^*) = arg(U_pr,12 U_pr,23^* - U_pr,22 U_pr,13^*),
ψ_pr,3 = arg(A_pr,3 - B_pr,3^*) = arg(U_pr,12 U_pr,33^* - U_pr,32 U_pr,13^*),

the spectrum can be written as

𝒮̃_pupr(ω_k1,τ) ∝ -(2ω/K_pr) (D_1k/γ_k) e^{-(γ_2+γ_3)τ/2} |Z_pu| |A_pr,k - B_pr,k^*| sin(ω_32 τ + ψ_pu + ψ_pr,k).

This implies that the intensity-dependent positions of the minima of 𝒮(ω_k1,τ), shown in Fig. <ref> by the red dashed lines at positive time delays, can be quantified via Eq. (<ref>) in terms of ψ_pu and ψ_pr,k.
The sine functions appearing therein have local minima, respectively centered around

τ_21 = τ_0 - (ψ_pu + ψ_pr,2)/ω_32,  for ω = ω_21, τ>0,
τ_31 = τ_0 - (ψ_pu + ψ_pr,3)/ω_32,  for ω = ω_31, τ>0,

with the additive offset τ_0 = 7π/(2ω_32). For real, positive dipole-moment matrix elements D_1k, and hence real, positive pulse areas ϑ_k, the intensity-dependent variables Z_pu and (A_pr,k - B_pr,k^*) can be explicitly written in the case of weak pulses via Eq. (<ref>) as

Z_pu^weak = ϑ_2ϑ_3/4,  A_pr,2^weak - (B_pr,2^weak)^* = iϑ_3/2,  A_pr,3^weak - (B_pr,3^weak)^* = iϑ_2/2,

along with the associated phases

ψ_pu^weak = 0,  ψ_pr,2^weak = π/2,  ψ_pr,3^weak = π/2.

Also in a pump-probe setup, the effect of a weak probe pulse is linearly proportional to the pulse areas ϑ_k and, therefore, of first order in the amplitude of the electric field. The action of a weak pump pulse depends on the product ϑ_2ϑ_3 and is hence of second order. Also in this case, this explains the small, almost vanishing amplitude of the time-delay-dependent oscillations displayed for τ>0 by the transient-absorption spectra in Figs. <ref>(a), <ref>(b), and <ref>(c), for a small pump-pulse intensity of I_pu = 1× 10^9 W/cm^2.

Figures <ref>(e) and <ref>(f) display the total phases [ψ_pu + ψ_pr,2 - (ψ_pu^weak + ψ_pr,2^weak)] and [ψ_pu + ψ_pr,3 - (ψ_pu^weak + ψ_pr,3^weak)] [Eqs. (<ref>), (<ref>), and (<ref>), after numerical calculation of Û_pr and Û_pu via Eqs. (<ref>) and (<ref>)] as a function of I_pr and for different values of the pump-pulse intensity I_pu. The dependence of these phases on pulse intensities matches that exhibited by the time-delay-dependent features of 𝒮(ω_21,τ) and 𝒮(ω_31,τ) in Fig. <ref> and in Figs. <ref>(c) and <ref>(d) at positive time delays, confirming the validity of Eq. (<ref>) for the interpretation of the phase of the oscillating features displayed by transient-absorption spectra. Also in this case, the minus sign in Eq. (<ref>) results in a shift in the local-minimum points in Figs.
<ref>(c) and <ref>(d) in a direction which is opposite to the change in phase exhibited by Figs. <ref>(e) and <ref>(f).

In contrast to the previously discussed probe-pump case, here the first-arriving pump pulse equally influences the shift in the spectra evaluated at ω_21 and ω_31. This could already be observed in Figs. <ref> and <ref>, and is now confirmed by Eq. (<ref>). The same phase ψ_pu affects both spectra, with a common shift which quantifies the phase difference between the excited states generated by the first-arriving pump pulse. This is apparent from the definition of ψ_pu [Eq. (<ref>)] and of the associated term Z_pu = U_pu,21 U_pu,31^* [Eq. (<ref>)], which represents the coherence between the excited states |2⟩ and |3⟩ resulting from the interaction with the pump pulse, as schematically illustrated in Fig. <ref>. Quantifying the shift in the spectra induced by the second-arriving probe pulse is more complex. In a pump-probe setup, the probe pulse modifies the state excited by the first-arriving pump pulse, inducing atomic phase changes which are encoded in the spectrum. However, in this case, the phases ψ_pr,k [Eq. (<ref>)] of the time-delay-dependent oscillations of 𝒮(ω_k1,τ) are due to a sum of terms [(A_pr,k - B_pr,k^*) from Eq. (<ref>)]. As a result, the phases ψ_pr,k, and hence the corresponding phase shifts featured by the spectra, are determined not only by the phases of the corresponding interaction-operator matrix elements (U_pr,12 U_pr,k3^* and U_pr,13 U_pr,k2^*), but also by their amplitudes. The definition of the interaction operator Û(I) allows one to see that A_pr,k and B_pr,k^* describe how the probe pulse transforms an initial coherence between the excited states |2⟩ and |3⟩ into a coherence between the ground state and the excited state |k⟩ (see also the schematic illustration in Fig. <ref>).
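The corresponding pump-probe reduction, Im(A_pr,k Z_pu e^{iω_32τ} + B_pr,k Z_pu^* e^{-iω_32τ}) = |Z_pu||A_pr,k - B_pr,k^*| sin(ω_32τ + ψ_pu + ψ_pr,k), can be verified in the same algebraic spirit; the complex values below are arbitrary placeholders for Z_pu, A_pr,k, and B_pr,k, not quantities from this work.

```python
import numpy as np

# Arbitrary placeholder values (not taken from the paper)
w32 = 2.0                             # beat frequency, arbitrary units
tau = np.linspace(0.0, 10.0, 501)     # positive delays (pump-probe side)
Z = 0.5 * np.exp(1j * 0.8)            # stand-in for Z_pu
A = 0.4 * np.exp(-1j * 0.3)           # stand-in for A_pr,k
B = 0.2 * np.exp(1j * 1.7)            # stand-in for B_pr,k

# Two counter-rotating terms of the oscillating part of the spectrum
lhs = np.imag(A * Z * np.exp(1j * w32 * tau)
              + B * np.conj(Z) * np.exp(-1j * w32 * tau))

# Single-sine form with psi_pu = arg(Z) and psi_pr = arg(A - B*)
rhs = (abs(Z) * abs(A - np.conj(B))
       * np.sin(w32 * tau + np.angle(Z) + np.angle(A - np.conj(B))))

assert np.allclose(lhs, rhs)
```

This makes explicit why both the phases and the amplitudes of A_pr,k and B_pr,k^* enter ψ_pr,k: only their complex difference appears in the sine.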
Amplitude and phase of these interaction-operator matrix elements both enter the definition of ψ_pr,k and are hence encoded in intensity- and time-delay-dependent transient-absorption spectra.

§.§.§ Dependence on laser frequency

Having confirmed in the previous subsections the validity of Eqs. (<ref>) and (<ref>) for the interpretation of transient-absorption spectra in terms of pump- and probe-pulse-generated phases, here we focus on the previously introduced phases φ_pr,k, φ_pu,k, ψ_pu, and ψ_pr,k, and investigate their dependence upon the frequency of the laser. Also in this case, this is achieved by using Eqs. (<ref>), (<ref>), (<ref>), (<ref>), and (<ref>), after having numerically calculated Û_pr and Û_pu via Eqs. (<ref>) and (<ref>). However, while we assumed in the previous sections that both pump and probe pulses were characterized by a laser frequency ω_L = 1.59 eV, here we display intensity-dependent results for five discrete values of the laser frequency, equally spaced between ω_L = ω_21 = 1.56 eV and ω_L = ω_31 = 1.59 eV. In Figs. <ref> and <ref>, we focus on a probe-pump setup and display the phases induced by the probe and pump pulses, respectively, as a function of their intensity and for different laser frequencies. Figure <ref>(a) shows the phase (φ_pr,2 - φ_pr,2^weak) which determines the probe-intensity-dependent shift featured by the absorption spectra 𝒮(ω_21,τ) evaluated at ω_21. These phases are related to the argument of Y_pr,2 [Fig. <ref>(b) and Eq. (<ref>)], which represents the coherence between states |1⟩ and |3⟩ generated by the first-arriving probe pulse. At low intensities, all curves are characterized by negative, purely imaginary values of Y_pr,2, in agreement with Eq. (<ref>). The laser frequency influences the path followed by Y_pr,2 at increasing intensities, and whether it moves towards regions characterized by positive or negative real parts. This influences the behavior of the phases in Fig.
<ref>(a) as well, determining whether the shift is towards values of φ_pr,2 larger or smaller than the weak-limit value. Similarly, the behavior of Y_pr,3 displayed in Fig. <ref>(d) determines the intensity-dependent shift (φ_pr,3 - φ_pr,3^weak) featured by the absorption spectra 𝒮(ω_31,τ) evaluated at ω_31. Here, Y_pr,3 is the coherence between states |1⟩ and |2⟩ generated by the first-arriving probe pulse. Also in this case, weak intensities correspond to negative, purely imaginary values of Y_pr,3, in agreement with Eq. (<ref>). A different dependence of Y_pr,3 on the probe-pulse intensity is featured for different values of the laser frequency, analogously influencing the intensity-dependent shift φ_pr,3 exhibited by Fig. <ref>(c).

Figure <ref> shows the additional phase shift due to a strong pump pulse as a function of its intensity. The (amplitude and phase) changes resulting from the interaction with the pump pulse are encoded in the complex numbers Y_pu,2 and Y_pu,3, whose dependence on intensity and laser frequency is shown in Figs. <ref>(b) and <ref>(d), respectively. As noted in Eq. (<ref>), Y_pu,k is of second order in the pulse areas for weak pulse intensities. As a result, for small pulse intensities, Y_pu,2 and Y_pu,3 tend to vanishing values for all considered laser frequencies. The associated atomic-phase change results in the phase shifts displayed in Figs. <ref>(a) and <ref>(c). For all considered laser frequencies, φ_pu,2 and φ_pu,3 evolve in opposite directions for increasing values of the pump-pulse intensity.

In Figs. <ref> and <ref> we consider a pump-probe setup and focus on the phases induced by the probe and pump pulses, respectively, as a function of their intensity and for different laser frequencies. The phase (ψ_pr,2 - ψ_pr,2^weak), defining the intensity-dependent shift of 𝒮(ω_21,τ), is shown in Fig. <ref>(a).
Also in this case, the displayed dependence upon intensity and laser frequency can be better understood by referring to the complex numbers (A_pr,2 - B^*_pr,2) [Eq. (<ref>)], displayed in Fig. <ref>(b). As discussed previously, these complex numbers are related to the transformation induced by the second-arriving probe pulse, quantifying how an initial coherence between states |2⟩ and |3⟩ is transformed into a coherence between |1⟩ and |2⟩. At low intensities, all curves tend to positive, purely imaginary values of (A_pr,2 - B^*_pr,2), in agreement with Eq. (<ref>). The path followed by (A_pr,2 - B^*_pr,2) at increasing intensities depends on the laser frequency, and reveals interesting features of the intensity dependence of (ψ_pr,2 - ψ_pr,2^weak) shown in Fig. <ref>(a). For example, one can notice how relatively similar values of (A_pr,2 - B^*_pr,2), such as those displayed by the green, red, and brown curves in Fig. <ref>(b), can lead to a very different behavior of the corresponding phases [Fig. <ref>(a)]. This is due to the fact that the amplitude of (A_pr,2 - B^*_pr,2) nearly vanishes for all three curves considered. A small change in the path actually followed can therefore reverse the direction of the shift in the corresponding phase. The phases (ψ_pr,3 - ψ_pr,3^weak) shown in Fig. <ref>(c), determining the intensity-dependent shift of 𝒮(ω_31,τ), display a more regular dependence upon intensity and laser frequency. This is essentially related to the fact that the corresponding complex numbers (A_pr,3 - B^*_pr,3) do not approach vanishing values for the range of intensities and laser frequencies considered, as exhibited by Fig. <ref>(d). The complex numbers (A_pr,3 - B^*_pr,3) [Eq. (<ref>)] quantify how an initial coherence between states |2⟩ and |3⟩ is transformed into a coherence between |1⟩ and |3⟩ and, at low intensities, tend to positive, purely imaginary values [Fig. <ref>(d)], in agreement with Eq. (<ref>).

Finally, Fig.
<ref> shows the pump-pulse-induced phase shift (ψ_pu - ψ_pu^weak) which equally affects the oscillations of 𝒮(ω_21,τ) and 𝒮(ω_31,τ), as described in Eq. (<ref>) for positive time delays. The associated complex numbers Z_pu, quantifying the coherence between the excited states generated by the first-arriving pump pulse, are exhibited in Fig. <ref>(b), and display only a weak dependence on the laser frequency ω_L. This is reflected in the associated phases, shown in Fig. <ref>(a). We notice that Z_pu tends to 0 for small intensities, being of second order in the pulse areas, as predicted by Eq. (<ref>). Even for increasing values of the intensity, Z_pu remains characterized by values very close to the real axis, in agreement with the prediction of a vanishing weak-limit phase ψ_pu^weak = 0 [Eq. (<ref>)].

§ CONCLUSION

In conclusion, we have investigated the interaction of a sample of Rb atoms, modeled as a V-type three-level system, with intense probe and pump pulses separated by a positive or negative time delay in a transient-absorption-spectroscopy setup. The three-level model was used to describe the evolution of the atomic system and, thereby, to numerically simulate experimental time-delay- and pulse-intensity-dependent spectra. We developed an analytical interpretation model, which we used to connect the time-delay-dependent oscillations featured by the spectra with the pump- and probe-pulse-induced quantum phases of the atomic system. Thereby, we showed which strong-field information on atomic phases can be extracted from transient-absorption spectra when intense probe and pump pulses are employed. We also studied the dependence of strong-field-generated atomic phases on the frequency of the utilized laser pulses. Further studies could include a more thorough analytical and theoretical description of the frequency dependence of the phases, as well as an atomic-system description going beyond the three-level model employed here.
For high densities or long media, it could be important to further investigate how propagation effects can be included in our interpretation models.

The authors acknowledge valuable discussions with Zoltán Harman, Christoph H. Keitel, and Thomas Pfeifer. The work of V. B. has been carried out thanks to the support of the A*MIDEX grant (No. ANR-11-IDEX-0001-02) funded by the French Government "Investissements d'Avenir" program.
http://arxiv.org/abs/1703.09182v1
{ "authors": [ "Vadim Becquet", "Stefano M. Cavaletto" ], "categories": [ "physics.atom-ph", "physics.optics" ], "primary_category": "physics.atom-ph", "published": "20170327165048", "title": "Transient-absorption phases with strong probe and pump pulses" }
M. Siomau Physics Department, Jazan University, P.O. Box 114, 45142 Jazan, Kingdom of Saudi Arabia m.siomau@gmail.com

Any Quantum Network is Structurally Controllable by a Single Driving Signal

Michael Siomau

Received: date / Accepted: date

Control theory is concerned with the questions of whether and how it is possible to drive the behavior of a complex dynamical system. A system is said to be controllable if we can drive it from any initial state to any desired state in finite time. For many complex networks, precise knowledge of the system parameters is lacking. It is, however, possible to draw conclusions about network controllability by inspecting its structure. The classical theory of structural controllability is based on Lin's structural controllability theorem, which gives necessary and sufficient conditions for deciding whether a network is structurally controllable. Due to this fundamental theorem we may identify a minimum driver vertex set, whose control with independent driving signals is sufficient to make the whole system controllable. I show that Lin's theorem does not apply to quantum networks, if local operations and classical communication between vertices are allowed. Any quantum network can be modified so as to be structurally controllable by a single driving vertex.

03.67.Ac 89.75.Fb

Modern science is more diverse than ever. Seemingly distant terms such as quantum, network, learning, neural, complex and cryptography are now combined into all-new scientific disciplines. While quantum cryptography is a present-day technology <cit.>, the study of the structure and dynamics of complex quantum networks <cit.> is in its infancy. These networks are radically different from their classical counterparts due to quantum superposition and nonlocality, and the unique features of quantum dynamics and measurements <cit.>.
Quantum networks exhibit non-classical clustering <cit.> and synchronization <cit.>, and may undergo non-trivial phase transitions <cit.>. Whenever a complex system is concerned, be it classical or quantum, we are bound to ask whether it is useful. And it surely is, if we can predict its behavior and control it. Lin's structural controllability theorem <cit.> is a bedrock of modern control theory <cit.>, which severely restricts our ability to gain control over a complex system. I show that this restriction is no longer valid for quantum networks, if we allow local operations and classical communication (LOCC) between the network vertices. Following a general geometrical consideration, I show that a quantum network with an arbitrary structure can exhibit structural controllability with a single driving vertex, if modified with a polynomial number of LOCC.

In classical control theory, a complex system (A;B) that can be described at any time with a set of linear differential equations

ẋ(t) = A x(t) + B u(t)

is called a Linear Time-Invariant (LTI) system <cit.>. Here, x(t) is the vector of system parameters; u(t) is the input vector of independent driving signals; the state matrix A describes which system components interact with each other and the direction of the interaction; the input matrix B identifies the externally driven system parameters. Such a system may be represented with a directed graph (digraph) G(A;B) = (V_G;E), whose structure does not change in time. The vertex set V_G = V ⋃ U includes both the state vertices V, corresponding to the N vertices of the network, and the driving vertices U, corresponding to the M input signals, which are called the roots of the digraph G(A;B). The edge set E = E_V ⋃ E_U consists of the edges among state vertices E_V, corresponding to the connections of the network, and the edges connecting driving vertices to state vertices E_U.
In these terms, Lin's theorem is given as: The system (A;B) is not structurally controllable if and only if it has inaccessible nodes or dilations <cit.>. A state vertex is inaccessible if there are no directed paths reaching it from the input vertices. An inaccessible vertex cannot be influenced by driving signals, making the whole network uncontrollable. The digraph G(A;B) contains a dilation if there is a subset of vertices S ⊂ V such that the neighborhood set of S has fewer vertices than S itself. Roughly speaking, dilations are subgraphs in which a small subset of vertices attempts to rule a larger subset of vertices (see Fig. <ref>a). This formulation is not practical, because it does not tell us how many driving signals we need in a given network to make it controllable. Alternatively, we may state that: An LTI system (A;B) is structurally controllable if and only if G(A;B) is spanned by cacti. A graph is spanned by a subgraph if the subgraph and the graph have the same vertex set. For a digraph, a sequence of oriented edges {v_1 → v_2, ... , v_k-1 → v_k}, where the vertices {v_1, v_2, ... , v_k-1, v_k} are distinct, is called an elementary path C. When v_k coincides with v_1, the sequence of edges is called an elementary cycle O. For the digraph G(A;B), let me define the following subgraphs: (i) a stem is an elementary path originating from an input vertex; (ii) a bud is an elementary cycle O with an additional edge e that ends, but does not begin, in a vertex of the cycle; (iii) a cactus is defined recursively: a stem C is a cactus; let C, O, and e be, respectively, a cactus, an elementary cycle that is disjoint from C, and an arc that connects C to O in G(A;B); then C ∪ {e} ∪ O is also a cactus. G(A;B) is spanned by cacti if there exists a set of disjoint cacti that covers all state vertices. A cactus is a minimal structure that contains neither inaccessible nodes nor dilations (see Fig. <ref>b).
A complex network is unlikely to be spanned by a single cactus. But it may be spanned by a few. Since the control of a single cactus requires a single root, to control a complex network we need as many driving signals as there are cacti spanning the network. In practice, we want to find the minimal number of driving signals to control a given network, the so-called minimum input problem <cit.>. Hence, we need to find the minimal number of cacti spanning a given network. At first glance, this combinatorial problem looks NP-hard, but in fact it can be solved in polynomial time with the maximum matching algorithm (see Fig. <ref>). In a digraph, a matching is defined to be a set of directed edges that do not share common start or end vertices. A vertex is matched if it is the end vertex of a matching edge. Otherwise, it is unmatched. A maximum matching is a matching of the largest size. These definitions allow us to formulate the Minimum input theorem: To fully control a system G(A;B), the minimum number of driver vertices is N_D = max{N - M, 1}, where M is the size of the maximum matching in G(A;B). In other words, the driver vertices correspond to the unmatched vertices. If all vertices are matched, M = N (as in the case of an elementary cycle), we need at least one input to control the network, hence N_D = 1. We can choose any vertex as our driver vertex in this case. Note that in general the maximum matching is not unique, i.e. the network may be controllable with different minimal sets of driver vertices. But all these minimal sets are of the same size. The algorithm gives us the number and location of the drivers to apply to the network, and hence concludes the study of structural controllability.

In quantum networks, a vertex possesses a quantum system, such as an atom, quantum dot or molecule. If two vertices are connected with an edge, they may exchange photons and classical information. An edge is directed according to the ability of a vertex to send photons to others.
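Before turning to the quantum case, the minimum input theorem above can be sketched numerically: a matching in a digraph is equivalent to a bipartite matching between "out" copies and "in" copies of the vertices, so a standard augmenting-path routine yields M and hence N_D. A minimal sketch (the example graphs and function names are illustrative, not from the paper):

```python
def max_digraph_matching(n, edges):
    """Size of a maximum matching in a digraph: a largest set of directed
    edges sharing no common start vertices and no common end vertices."""
    adj = {u: [] for u in range(1, n + 1)}
    for u, v in edges:
        adj[u].append(v)
    match_to = {}  # end vertex v -> start vertex u of its matching edge

    def augment(u, seen):
        # Try to match start vertex u, reassigning earlier matches if needed.
        for v in adj[u]:
            if v in seen:
                continue
            seen.add(v)
            if v not in match_to or augment(match_to[v], seen):
                match_to[v] = u
                return True
        return False

    return sum(augment(u, set()) for u in range(1, n + 1))

def min_driver_vertices(n, edges):
    # Minimum input theorem: N_D = max{N - M, 1}.
    return max(n - max_digraph_matching(n, edges), 1)

# A directed path 1->2->3->4 is a stem: one driver suffices.
print(min_driver_vertices(4, [(1, 2), (2, 3), (3, 4)]))  # 1
# A star 1->{2,3,4} is a dilation: three drivers are needed.
print(min_driver_vertices(4, [(1, 2), (1, 3), (1, 4)]))  # 3
```

A cycle 1->2->3->1 has all vertices matched (M = N), recovering the special case N_D = 1.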
Classical information may be communicated between connected vertices both ways, irrespective of edge directions. This setup fits most quantum communication protocols <cit.>. Because the quantum systems interact with each other and with the environment that may be present in the vertices, the quantum network dynamics is the subject of open quantum system dynamics <cit.>, which is in general non-linear. However, there are a few occasions when the dynamics of an open quantum system may be described with linear differential equations of the form of Eq. (<ref>). For example, a dissipative harmonic oscillator may be described with linear master and Langevin equations <cit.>. We must assume this type of quantum dynamics for the vertices to ensure that the quantum network can be described with Eq. (<ref>). Whenever the behavior of a quantum system cannot be correctly described with a system of linear differential equations (<ref>), the further considerations are invalid. The difference between classical and quantum networks is that the latter are non-local. This leads to the fact that distant vertices may become entangled and display correlated dynamics, even though they are not connected with an edge. The entanglement may appear spontaneously as a result of the non-linear dynamics <cit.> or may be created by LOCC <cit.>. The entanglement between distant vertices may be interpreted as a new edge of the network <cit.>. The direction of this entanglement edge is defined by the direction of the classical communication. The LOCC may be used to loop all inaccessible vertices and dilations into elementary cycles, modifying the network so that it is spanned by a single cactus, i.e. all its vertices are matched. To be specific, let us suppose that we have a connected quantum network represented by a digraph with the same configuration as in Fig. <ref>a. We may investigate its classical structural controllability by executing the maximum matching algorithm to identify the minimal set of driving vertices.
The network is spanned by two cacti, due to a dilation {V_3, V_7, V_4}. Let us choose one of the two elementary paths, say {V_6, V_5, V_4, V_2}, and loop it into an elementary cycle by LOCC. The procedure can be exemplified as follows. Suppose we have a quantum network consisting of three vertices {a, b, c}, so that a is connected to b and b is connected to c. The quantum system at vertex a may be entangled with a quantum field mode, which is communicated to vertex b. Supplemented with classical communication, vertex a becomes entangled with b, i.e. we have created an entanglement edge. Through this edge, the quantum system at b may influence the system at a by performing a local manipulation at b and communicating information classically to a <cit.>. In this case, the direction of the classical communication sets the direction of the entanglement edge. If a is entangled with b and b is entangled with c, we may apply entanglement swapping at b. This creates a non-local entanglement edge a-c. The edge may be directed either way, depending on the classical information flow. Similarly, we may get rid of inaccessible vertices, if present. If we have a subgraph of inaccessible vertices in our connected graph, then there is at least one border vertex of the subgraph that has an edge directed from it to an accessible vertex. The inaccessible vertex may become entangled with the accessible vertex by LOCC. Hence, these two vertices are looped into an elementary cycle consisting of just two vertices. Iteratively, the whole subgraph of inaccessible vertices may become accessible. As a result, the modified network is spanned by a single cactus, and is thus controllable by a single root. In general, the whole procedure can be executed for any two distant vertices in a finite connected network of size N with at most N^3 LOCC <cit.>. I have shown that the restrictions of Lin's theorem, which are fundamental for classical networks, do not apply to quantum networks.
This is another radical difference between classical and quantum instances. The result is independent of the quantum network structure and is based purely on our ability to create nonlocal correlations with LOCC. To draw a conclusion about the structural controllability of a network, we must be sure that the structure of the network does not change in time. This may be guaranteed if the network is described with a system of linear equations. But do LOCC change the linearity of the quantum dynamical equations? Under some reasonable assumptions, the initial entanglement between two systems leaves the dynamical equations linear, adding an inhomogeneous term <cit.>. But this is an exception rather than the rule. Can we still draw conclusions about structural controllability? Yes, if we consider the entangled vertices as a single super-vertex (see Fig. <ref>). Then, we must ensure that the super-vertex is coupled linearly to its neighbors, i.e. the output field operators of the super-vertex and the input field operators of the neighboring vertices are linearly dependent <cit.>. In this case the network remains linear, in spite of the non-linear internal dynamics of the super-vertex. Structural controllability analysis is only the very first step in evaluating network controllability. Although a network may be structurally controllable, it may still fail to be controllable for some combinations of the dynamical parameters <cit.>. Hence, we need to establish the so-called strong structural controllability, that is, controllability for any combination of the dynamical parameters. This analysis requires full information about the network parameters and analytical tools to process this information. Kalman's criterion of controllability <cit.> is as fundamental as Lin's theorem for structural controllability and, like the latter, may also fail when applied to quantum networks. This urges a revision of our knowledge and strategies to establish control over complex quantum networks.
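Kalman's criterion mentioned above can be checked directly for a small LTI system: (A;B) is controllable iff the controllability matrix [B, AB, ..., A^(n-1)B] has rank n. A self-contained sketch with illustrative matrices (not taken from the paper):

```python
def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def rank(mat, tol=1e-9):
    # Rank via Gaussian elimination with partial pivoting.
    m = [row[:] for row in mat]
    r = 0
    for c in range(len(m[0])):
        pivot = max(range(r, len(m)), key=lambda i: abs(m[i][c]))
        if abs(m[pivot][c]) < tol:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        for i in range(len(m)):
            if i != r:
                f = m[i][c] / m[r][c]
                m[i] = [x - f * y for x, y in zip(m[i], m[r])]
        r += 1
        if r == len(m):
            break
    return r

def is_controllable(a, b):
    # Kalman: rank [B, AB, ..., A^(n-1)B] == n.
    n = len(a)
    block, ctrl = [row[:] for row in b], [row[:] for row in b]
    for _ in range(n - 1):
        block = matmul(a, block)
        ctrl = [ctrl[i] + block[i] for i in range(n)]
    return rank(ctrl) == n

# Integrator chain x1' = u, x2' = x1: controllable from the single input.
print(is_controllable([[0, 0], [1, 0]], [[1], [0]]))  # True
# Decoupled state x2 never feels the input: uncontrollable.
print(is_controllable([[0, 0], [0, 0]], [[1], [0]]))  # False
```

Unlike the structural analysis, this test needs the numerical values of A and B, which is exactly why strong structural controllability requires full information about the network parameters.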
[Liao:17] S.-K. Liao et al., Nature 549, 43-47 (2017).
[Kimble:08] H. J. Kimble, Nature 453, 1023-1030 (2008).
[Nielsen:00] M. A. Nielsen and I. L. Chuang, Quantum Computation and Quantum Information (Cambridge University Press, Cambridge, 2000).
[Pers:10] S. Perseguers, M. Lewenstein, A. Acin and J. I. Cirac, Nature Physics 6, 539-543 (2010).
[Manz:13] G. Manzano et al., Scientific Reports 3, 1439 (2013).
[Acin:07] A. Acin, J. I. Cirac and M. Lewenstein, Nature Physics 3, 256-259 (2007).
[Siomau:16] M. Siomau, J. Phys. B: At. Mol. Opt. Phys. 49, 175506 (2016).
[Lin:74] C.-T. Lin, IEEE Trans. Auto. Contr. 19, 201-208 (1974).
[Liu:16] Y.-Y. Liu and A.-L. Barabasi, Rev. Mod. Phys. 88, 035006 (2016).
[Gardiner:00] C. W. Gardiner and P. Zoller, Quantum Noise: A Handbook of Markovian and Non-Markovian Quantum Stochastic Methods with Applications to Quantum Optics (Springer-Verlag, Heidelberg, 2000).
[Siomau:AIP] M. Siomau, AIP Conf. Proc. 1742, 030017 (2016).
[Stelm:01] P. Stelmachovic and V. Buzek, Phys. Rev. A 64, 062106 (2001).
[note] See section 12, Cascaded Quantum Systems, of Ref. <cit.> for details.
http://arxiv.org/abs/1703.10236v3
{ "authors": [ "Michael Siomau" ], "categories": [ "quant-ph" ], "primary_category": "quant-ph", "published": "20170327191948", "title": "Any Quantum Network is Structurally Controllable by a Single Driving Signal" }
In this paper, we deform the thermodynamics of a BTZ black hole via rainbow functions in gravity's rainbow. The rainbow functions will be motivated from results in loop quantum gravity and non-commutative geometry. It will be observed that the thermodynamics gets deformed due to these rainbow functions, indicating the existence of a remnant. However, the Gibbs free energy does not get deformed due to these rainbow functions, and so the critical behaviour obtained from the Gibbs free energy does not change under this deformation. This is because the deformation in the entropy cancels out the deformation in the temperature.

§ INTRODUCTION

The Hořava-Lifshitz gravity is motivated by a deformation of the usual energy-momentum dispersion relation in the UV limit <cit.>.
Another UV modification of general relativity, also motivated by a deformation of the usual energy-momentum dispersion relation in the UV limit, is gravity's rainbow <cit.>. It is interesting to note that the deformation of the usual energy-momentum dispersion in the UV limit occurs in various approaches to quantum gravity, such as discrete spacetime <cit.>, models based on string field theory <cit.>, spacetime foam <cit.>, spin networks in loop quantum gravity (LQG) <cit.> and non-commutative geometry <cit.>. The formalism has been used to study various geometries motivated from string theory. In fact, the different Lifshitz scaling of space and time has been used to deform type IIA string theory <cit.>, type IIB string theory <cit.>, the AdS/CFT correspondence <cit.>, dilaton black branes <cit.>, cylindrical solutions <cit.> and dilaton black holes <cit.>. Gravity's rainbow is a more general theory that is motivated by a deformation of the energy-momentum dispersion relation, similar to what motivates Hořava-Lifshitz gravity. Hence both approaches are connected to the same quantum gravity phenomenology. In fact, for a particular choice of rainbow functions, gravity's rainbow seems to agree with Hořava-Lifshitz gravity, as has been shown in <cit.>. Since the Lifshitz deformation of geometries has produced interesting results, and the rainbow deformation has the same motivation, in this paper we will study the rainbow deformation of BTZ black holes. In gravity's rainbow, the geometry depends on the energy of the probe, and thus probes of different energy see the geometry differently. Thus, a single metric is replaced by a family of energy-dependent metrics forming a rainbow of metrics. Now the UV modification of the energy-momentum dispersion relation can be expressed as

E^2 f^2(E/E_P) - p^2 g^2(E/E_P) = m^2,

where E_P is the Planck energy, E is the energy at which the geometry is probed, and f(E/E_P) and g(E/E_P) are the rainbow functions.
As general relativity should be recovered in the IR limit, we have

lim_{E/E_P → 0} f(E/E_P) = 1, lim_{E/E_P → 0} g(E/E_P) = 1.

Now the metric in gravity's rainbow is <cit.>

h(E) = η^{ab} e_a(E) ⊗ e_b(E).

So, the energy-dependent frame fields are

e_0(E) = ẽ_0/f(E/E_P), e_i(E) = ẽ_i/g(E/E_P).

Here ẽ_0 and ẽ_i are the original energy-independent frame fields. The rainbow deformation of geometries motivated from string theory, such as black rings <cit.> and black branes <cit.>, has been studied. The rainbow deformation of higher-dimensional black holes has important consequences for the detection of black holes at the LHC <cit.>. The rainbow deformation of modified theories of gravity, and of gravity coupled to non-linear sources, has been studied <cit.>. Gravity's rainbow has also been used to address the information paradox in black holes <cit.>. It may be noted that general properties of the energy-dependent metric for a BTZ black hole, and its coupling to non-linear sources, have been discussed using gravity's rainbow <cit.>. In this paper, we analyse the thermodynamic aspects of such a deformation explicitly. We are able to show that even though many thermodynamic quantities of a BTZ black hole are deformed by gravity's rainbow, the Gibbs free energy is not deformed. Thus, the critical phenomena based on the Gibbs free energy are not deformed by gravity's rainbow.

§ BTZ BLACK HOLES

In 2+1 dimensions, the Einstein field equations with negative cosmological constant (AdS spacetime) admit, in addition to the vacuum solution, a two-parameter family of black hole solutions found by Banados, Teitelboim and Zanelli <cit.>, given by the metric, without charge, [Planckian units are used throughout the manuscript: k_B = c = G = ħ = 1]

ds^2 = -N^2 dt^2 + N^{-2} dr^2 + r^2 (dϕ + N^ϕ dt)^2,

where the functions N^2 = (r^2 - r_+^2)(r^2 - r_-^2)/(b^2 r^2) and N^ϕ = r_+ r_-/(b r^2), with b being the radius of AdS and r_± obtained when the lapse function N vanishes (indicating the outer and inner horizons, respectively),
With b is the radius of AdS and r_± is obtained when the lapse function N vanishes (indicating the outer and inner horizons, respectively),r_± = [b^2M/2( 1±√(1- J^2/b^2 M^2)) ] ^1/2,with M and J being the mass and angular momentum of the BTZ black hole respectively. They can be therefore defined in terms of b, r± accordingly:M = r^2_+r^2_-/b^2, J = 2r_+r_-/b.In order to study the thermodynamics of BTZ black hole we first write the first law of black hole mechanics <cit.>dM= TdS+ JdΩ.The angular speedΩ is calculated from g_tt/g_ϕϕΩ = J/b^2 M.The temperature T_0 is calculating for the surface curvature, with the killing vector K = ∂_t + Ω∂_ϕ, <cit.>T_0= r^2_+-r^2_-/2 π r_+. The expression (<ref>) indicates an extremal limit when M< J̱.We observe that the temperature of BTZ black holes show a similar thermodynamic behaviour to their higher dimensional analogues figure <ref>. Now we use (<ref>) and (<ref>) to calculate the entropy S_0= 4 π r_+ ,which is, as expected,the forth of the horizon area `circumference'. Complying with the Bekenstein formula. We can also calculate the constant J heat capacity C_J :C_J = T(∂ S/∂ T) _J = 4 π r_+/2-√(1-( J/b M) ^2) [ 1+√(1-( J/b M) ^2)( J/b M) ^2 /2] ^ 1/2Similarly, the heat capacity at constant angular velocity C_Ω is calculated,C_Ω = 4 π b [ M/2 ( 1+√(1-( J/b M) ^2)) ] ^ 1/2. We wish also to investigate the thermodynamic pressure - volume relation for BTZ black holes and the associated critical phenomena.We define the ` volume' of BTZ black hole by the relation <cit.>, which is approximately the thermodynamic volume for slow rotating black holes J ≪1.V_0= A r_+ = 16 π r_+^2Now, we consider the thermodynamic pressure of BTZ black hole from the Van der Wall's fluid equation of state in the extended phase space <cit.>. 
P_0 := T/v + 𝒪(J^2), with v = 2 (V/π)^{1/2}.

We observe from the P-V diagram that BTZ black holes admit the same critical phenomena as the higher-dimensional Kerr-AdS black holes for some critical temperature T_c <cit.>. BTZ black holes show interesting thermodynamic properties. In the next section, they shall be studied after the gravity's rainbow deformation of the BTZ metric.

§ BTZ BLACK HOLES IN GRAVITY'S RAINBOW

In this section, we will deform the thermodynamics of a BTZ black hole by gravity's rainbow. Here E is the energy of a quantum particle near the event horizon of the BTZ black hole. This particle is emitted from the black hole due to the Hawking radiation, and the energy of the particle is associated with the black hole temperature T <cit.>. In fact, in the geometric units used in this paper, k_B = 1, the black hole temperature is the same as the energy of the radiated particle, i.e. T_BH = E. We can use the uncertainty principle and write Δp ≥ 1/Δx. Thus, we can obtain a bound on the energy of a black hole, E ≥ 1/Δx <cit.>. This can be done for any black hole, including a BTZ black hole. It may be noted that the usual uncertainty principle is valid in gravity's rainbow <cit.>. So, the uncertainty in the position of a particle near the horizon of the BTZ black hole is equal to the radius of the event horizon,

E ≥ 1/Δx ≈ 1/r_+.

It is important to note from this that the energy used to deform the thermodynamics is a dynamical function of the radial coordinate <cit.>. The general relation for the temperature of a black hole in gravity's rainbow was found to be <cit.>:

T = T_0 g(E)/f(E),

where f(E) and g(E) are the rainbow functions defined in (<ref>). This conjecture is explained and proved in the following references <cit.> and many others. We may also show that the formula (<ref>) applies for BTZ black holes in gravity's rainbow as well.
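As a numerical sketch of these relations: with the horizon estimate E ≈ 1/r_+ and the LQG-motivated pair f = 1, g = √(1 - η(E/E_p)^ν) adopted below (here taken with E_p = 1; all function names are assumptions of this sketch, not the paper's code), the deformed temperature T = T_0 g/f reduces to T_0 in the IR limit, and the product of T with the deformed entropy S = S_0/g, derived below, is independent of the rainbow parameters, which is the reason the Gibbs free energy stays undeformed:

```python
import math

def g_rainbow(r_plus, eta, nu, e_p=1.0):
    # g(E) = sqrt(1 - eta (E/E_p)^nu) with the horizon estimate E ~ 1/r_+.
    return math.sqrt(max(0.0, 1.0 - eta * (1.0 / (r_plus * e_p)) ** nu))

def bare_temperature(r_plus, r_minus):
    # Undeformed BTZ temperature T_0 = (r_+^2 - r_-^2)/(2 pi r_+).
    return (r_plus**2 - r_minus**2) / (2.0 * math.pi * r_plus)

def rainbow_temperature(r_plus, r_minus, eta, nu):
    # T = T_0 g(E)/f(E) with f = 1.
    return bare_temperature(r_plus, r_minus) * g_rainbow(r_plus, eta, nu)

def ts_product(r_plus, r_minus, eta, nu):
    # T S = (T_0 g)(S_0/g) = T_0 S_0, with S_0 = pi r_+/2: g cancels.
    entropy = (math.pi * r_plus / 2.0) / g_rainbow(r_plus, eta, nu)
    return rainbow_temperature(r_plus, r_minus, eta, nu) * entropy

# The deformation lowers the temperature of a small black hole ...
print(rainbow_temperature(2.0, 1.0, 1.0, 2.0) < bare_temperature(2.0, 1.0))  # True
# ... is negligible in the IR limit r_+ E_p >> 1 ...
print(abs(rainbow_temperature(1e6, 1.0, 1.0, 2.0)
          / bare_temperature(1e6, 1.0) - 1.0) < 1e-9)  # True
# ... and drops out of T S entirely, so the Gibbs free energy is unchanged.
print(abs(ts_product(5.0, 1.0, 0.5, 2.0)
          - ts_product(5.0, 1.0, 0.0, 2.0)) < 1e-9)  # True
```

The factor g also vanishes at r_+ = η^{1/ν}/E_p, the remnant radius discussed below.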
First, consider the modification of the metric (<ref>) by the gravity's rainbow functions <cit.>:

ds^2 = -(N^2/f^2(E)) dt^2 + g^{-2}(E) N^{-2} dr^2 + g^{-2}(E) r^2 (dϕ + N^ϕ dt)^2.

The modified temperature is given by the generic formula <cit.>:

T = (1/4π) √(∂_r A(r) ∂_r B(r)) |_{r = r_+},

where A(r) = N^2/f^2(E) and B(r) = g^2(E) N^2 is the inverse of the radial metric component. Therefore, we arrive at the formula for the temperature in gravity's rainbow (<ref>). There are many ways in which we can define the rainbow functions f(E) and g(E), motivated by many theoretical <cit.> and experimental <cit.> approaches. The choice of these functions in this paper is the one motivated by loop quantum gravity and non-commutative geometry <cit.>:

f(E) := 1, g(E) := √(1 - η (E/E_p)^ν),

for η and ν being free parameters. Now, we use (<ref>), (<ref>) and (<ref>) to obtain the modified BTZ temperature:

T = [(r_+^2 - r_-^2)/(2π r_+)] √(1 - η (1/(r_+ E_p))^ν).

In order to calculate the modified entropy, we use the first law dM = T dS. With (<ref>) and (<ref>) we get:

S = (π/2) r_+/g(E) = (π r_+/2) / √(1 - η (1/(r_+ E_p))^ν).

It is interesting to look at the graphs of S_0 versus r_+ and of S versus r_+, figure <ref>. We observe that the entropy of the rainbow-gravity-modified BTZ black hole will diminish at some point with r_+ ≠ 0, indicating the existence of a remnant. This effect is observed in higher-dimensional Kerr-AdS black holes in rainbow gravity <cit.>. Since the entropy of a black hole is related to its area (the circumference in 2+1 dimensions), and 2+1 dimensional gravity is a topological theory, one might interpret the remnant of the rainbow BTZ black hole as a topological defect associated with the minimal length of the 2+1 gravity theory <cit.>. The radius of this topological defect is given by:

r_min = η^{1/ν}/E_p ∼ ℓ_p,

which could be the minimal length of 2+1 gravity's rainbow. Now we calculate the modified heat capacities; observe that (∂S/∂T)_J and (∂S/∂T)_Ω do not change under the gravity's rainbow modifications. Hence the heat capacities are easily computed:
Hence the heat capacities are easily computed C_J = √(1-η(1/r_+E_p)^ ν)[4 π r_+/2-√(1-( J/b M) ^2)][ 1+√(1-( J/b M) ^2)( J/b M) ^2 /2] ^ 1/2C_Ω = 4 π b √(1-η(1/r_+E_p)^ ν)[ M/2 ( 1+√(1-( J/b M) ^2)) ] ^ 1/2.The volume of the modified BTZ is given from (<ref>):V= 2 π r^2_+/√(1-η(1/r_+E_p)^ ν)Moreover, the modifiedpressure is written as:P= 1/2( 1-η(1/r_+E_p)^ ν) ^1/4 P_0The P-V criticality of rainbow BTZ is not different from the ordinary BTZ black hole. This also can be seen from calculating the Gibbs free energy of ordinary and rainbow BTZ black hole,TheGibbs free energy, G given by the thermodynamic relation:G(T) = M+PV-TS.For an ordinary BTZ black holes it can be calculated using (<ref>)(<ref>) and (<ref>). G_0( T, Ω )=M-TS-Ω J = -π ^2 b ^2 /6√(1-( J/b M) ^2) × ( T^2 + √(2 π/b(1-( J/b M) ^2 ))T +1/4 π^2 b^2 [2+( J/b M) ^2 ]) ( 1/b Ω) We may conclude that G>0for critical BTZ black holes and G<0 for non critical onesThe Gibbs free energy for a rainbow BTZ is calculating using the same relation but, substituting T_0 and S_0 with T in (<ref>) and S in (<ref>). Since the rainbow function g(E) appear in T and its reciprocal appears in S.G for the rainbow BTZ black hole is the same as G_0. Indicating the same critical phenomena.§ CONCLUSIONIn this paper, we have deformed the geometryof a BTZ black hole by rainbowfunctions. Thus, the thermodynamics of the BTZ black was also deformedby these rainbow functions. The rainbow functions that were used for this deformationhave beenmotivated from resultsin loop quantum gravity and Noncommutative geometry. It was observed that the thermodynamicsof the BTZ black hole gotdeformed due to these rainbow functions. The graphs of the deformed entropy S and temperature T indicate the existence of a remnant at the last stage of evaporation, similar to the higher dimensional deformed black holes. However, the Gibbs free energy did not get deformed,and so the critical behaviour from Gibbsdid not change by this deformation. 
Thus, the critical behaviour of BTZ black holes in gravity's rainbow is the same as the critical behaviour of BTZ black holes in ordinary gravity. This is apparent because the temperature is deformed in the opposite way to the entropy, causing both deformations to cancel out in the Gibbs free energy.

§ ACKNOWLEDGEMENTS

Warm regards to Dr Mir Faizal for his generous help in improving this work. This research project was supported by a grant from the "Research Center of the Female Scientific and Medical Colleges", Deanship of Scientific Research, King Saud University. The author would like to thank the referees for their helpful comments that improved the paper.
http://arxiv.org/abs/1703.09617v2
{ "authors": [ "Salwa Alsaleh" ], "categories": [ "physics.gen-ph" ], "primary_category": "physics.gen-ph", "published": "20170326135949", "title": "Thermodynamics of BTZ Black Holes in Gravity's Rainbow" }
http://arxiv.org/abs/1703.08804v1
{ "authors": [ "João R. Cardoso", "Amir Sadeghi" ], "categories": [ "math.NA", "65F60, 65F35, 15A16" ], "primary_category": "math.NA", "published": "20170326105935", "title": "On the conditioning of the matrix-matrix exponentiation" }
^1CAS Key Laboratory of Soft Matter Chemistry, Hefei National Laboratory for Physical Sciences at the Microscale, and Department of Physics, University of Science and Technology of China, Hefei 230026, People's Republic of China. ^2State Key Laboratory of Surface Physics and Department of Physics, Fudan University, Shanghai 200433, People's Republic of China.

In previous approaches to forming quasicrystals, multiple competing length scales involved in particle size, shape or interaction potential are believed to be necessary. It is unexpected that quasicrystals can be self-assembled by monodisperse, isotropic particles interacting via a simple potential without multiple length scales. Here we report the surprising finding of the self-assembly of such quasicrystals in two-dimensional systems of soft-core disks interacting via repulsions. We find not only dodecagonal but also octagonal quasicrystals, which have not yet been found in soft quasicrystals. In the self-assembly of such unexpected quasicrystals, particles tend to form pentagons, which are essential elements in forming the quasicrystalline order. Our findings pave an unexpected and simple way to form quasicrystals and pose a new challenge for the theoretical understanding of quasicrystals.

Self-assembling two-dimensional quasicrystals in simple systems of monodisperse soft-core disks

Mengjie Zu^1, Peng Tan^2, and Ning Xu^1

Received ??? / Accepted ???

The quasicrystal (QC) is a fantastic discovery in materials science and condensed matter physics <cit.>, which exhibits rotational symmetries forbidden in periodic crystals. Since the first observation of a decagonal QC in Al-Mn alloys <cit.>, thousands of metallic QCs have been obtained <cit.>. These QCs intrinsically involve multiple length scales arising from the multiple types of atoms.
Soft or mesoscopic (non-metallic) QCs have recently attracted great attention in the QC community <cit.>, beginning with the first finding of a 12-fold QC in supramolecular dendrimers <cit.>. Compared with metallic QCs, soft materials have displayed advantages in forming stable mono-component QCs. However, multiple length scales still seem inevitable for forming soft QCs. Up to now, soft QCs have been obtained either by introducing multiple competing length scales into the inter-particle potential <cit.> or by using anisotropic particles that naturally possess multiple length scales, such as tetrahedral and patchy particles <cit.>. Self-assembly of QCs from mono-component, isotropic particles interacting via a smooth potential without multiple length scales was not expected.Here we show that such unexpected self-assembly of soft QCs does occur in high-density systems of monodisperse, soft-core disks interacting via a simple pairwise repulsion, U(r)=(ϵ/α)(1-r/σ)^α Θ(1-r/σ), where r is the separation between two disks, σ is the disk diameter, ϵ is the characteristic energy scale, α determines the softness of the potential, and Θ(x) is the Heaviside step function. With increasing number density ρ at fixed temperature T, solid phases with different structures emerge in sequence, as shown in Fig. 1a. Figure 1b shows that the inter-particle potential does not exhibit multiple length scales. Surprisingly, in certain (ρ, α) parameter regimes, both octagonal and dodecagonal QCs (OQCs and DDQCs) appear. To our knowledge, OQCs had not yet been convincingly observed among soft QCs. To avoid clustering of particles <cit.>, we vary α from 2 to 3.In Figs. 1c-1g, we first show the static configurations and diffraction patterns of five special crystals other than the ordinary triangular and square solids: honeycomb (Hon), kite (Kite), sigma-phase (Sig), stripe (Str), and rhombus (Rho). Each solid has a definite unit cell, as outlined in the configuration.
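As an illustrative sketch (not part of the paper), the pairwise repulsion U(r)=(ϵ/α)(1-r/σ)^α Θ(1-r/σ) can be evaluated numerically; the function and parameter names below are our own choices, in reduced units (ϵ = σ = 1):

```python
def soft_core_potential(r, alpha, eps=1.0, sigma=1.0):
    """Soft-core repulsion U(r) = (eps/alpha) * (1 - r/sigma)**alpha for
    r < sigma, and exactly zero for r >= sigma (the Heaviside cutoff)."""
    if r >= sigma:
        return 0.0
    return (eps / alpha) * (1.0 - r / sigma) ** alpha
```

With this form, α = 2 recovers the harmonic repulsion and α = 2.5 the Hertzian repulsion studied later in the text; the potential is purely repulsive and vanishes smoothly at r = σ, so it contains no second length scale.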
Although some of these unit cells are complicated, they repeat periodically in space, leading to periodic diffraction patterns.QCs exist in three isolated regimes of the phase diagram. OQCs occupy a regime with small α and high ρ. DDQCs emerge in two regimes: one adjacent to the OQCs, and one at relatively low ρ covering a wider range of α.Figure 2a shows part of a static configuration of an OQC, which is rich in octagons and pentagons. The diffraction pattern in the top panel of Fig. 2b contains discrete sharp Bragg peaks with 8-fold symmetry, similar to those of the OQC of Cr-Ni-Si alloys <cit.>. The density profile in the top panel of Fig. 2c further confirms the absence of density periodicity.A close look at Fig. 2a reveals that each pentagon is surrounded by eight disks, which form an octagon. This suggests that pentagons are important structural elements in forming the OQCs in our systems. We employ a polygonal order parameter δ= max{|e_i/e̅-1|} (i=1,2,...,5) to numerically identify pentagons, where e_i is the distance between the center of mass and vertex i of a 5-sided polygon, and e̅=∑_i=1^5 e_i/5. Only 5-sided polygons with δ<0.1 are identified as pentagons. By connecting the centers of non-edge-adjacent pentagons, Fig. 2a shows that the OQC can be tessellated by 45^∘ rhombi and squares. The number ratio of squares to rhombi is approximately 0.701, close to 1:√(2) ≈ 0.707 for perfect OQCs <cit.>. As shown in the bottom panel of Fig. 2b, when pentagons are treated as units, the Bragg peaks become much sharper than those obtained from single particles. Better quasicrystalline order is therefore achieved at the level of pentagons.Beyond structure, the quasicrystalline order and the significance of pentagons can be further verified from the dynamics. Figure 2d shows the trajectories of two randomly chosen particles in the OQC. The trajectories are composed of chains of pentagon loops.
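The pentagon criterion above, δ = max_i |e_i/e̅ − 1| with δ < 0.1, can be sketched in a few lines; this is our own illustrative code, not the paper's:

```python
import math

def pentagon_order_parameter(vertices):
    """delta = max_i |e_i / e_bar - 1| for a 5-sided polygon, where e_i is the
    distance from the vertex centroid to vertex i and e_bar is the mean of the
    e_i.  Small delta indicates a nearly regular pentagon."""
    cx = sum(x for x, y in vertices) / len(vertices)
    cy = sum(y for x, y in vertices) / len(vertices)
    e = [math.hypot(x - cx, y - cy) for x, y in vertices]
    e_bar = sum(e) / len(e)
    return max(abs(ei / e_bar - 1.0) for ei in e)

def is_pentagon(vertices, threshold=0.1):
    """The text counts 5-sided polygons with delta < 0.1 as pentagons."""
    return len(vertices) == 5 and pentagon_order_parameter(vertices) < threshold
```

A perfectly regular pentagon gives δ = 0, while stretching any one vertex quickly pushes δ past the 0.1 cutoff, so the criterion selects only nearly regular 5-sided polygons.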
A particle moves along the edges of a pentagon for a long time, then suddenly escapes and forms a new pentagon with other particles, corresponding to a phason flip, whose presence causes liquid-like diffusion in QCs <cit.>. These pentagon loops further emphasize the importance of pentagons in our QCs.Figure 2e shows the van Hove autocorrelation function G_a(r⃗,t) for the OQC, which quantifies the probability that a particle has been displaced by r⃗ at time t. In an intermediate time regime (t=6000 here), particles exhibit clearly heterogeneous displacements. Some particles vibrate around their equilibrium positions, forming the central peak in G_a(r⃗,t) at r⃗=0. Surrounding the central peak are satellite peaks with 8-fold symmetry, consistent with the QC symmetry seen in the structure.Figures 2f-2j show the same structural and dynamical information for a DDQC. Interestingly, pentagons remain prominent. As shown in Fig. 2f, each pentagon is surrounded by twelve disks sitting on the vertices of a dodecagon. Again, by connecting the centers of non-edge-adjacent pentagons, the whole DDQC can be tiled by squares and triangles. The number ratio of triangles to squares is about 2.283, close to the ideal value of 4/√(3) for perfect DDQCs <cit.>. The significance of pentagons can also be seen from their effect in sharpening the Bragg peaks and from the pentagon loops in the particle trajectories.A natural question is why QCs can survive in certain regimes of the phase diagram. Owing to the complex structures of our QCs, it is difficult to calculate their free energies directly. We instead compare in Fig. 3 the T=0 potential energy of the QCs with that of the crystalline solids adjacent to them. We take two typical values of α as examples: α=2.0 and 2.5, corresponding to the widely studied harmonic and Hertzian repulsions. In the density regimes where we find QCs, the corresponding QCs have the lowest potential energy.
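The closeness of the measured tile ratios to the ideal random-tiling values can be checked with a few lines of arithmetic (our own sketch; the measured values are the ones quoted from Figs. 2a and 2f):

```python
import math

# Ideal tile-number ratios for perfect quasicrystalline random tilings:
ideal_oqc_ratio = 1.0 / math.sqrt(2.0)   # squares : 45-degree rhombi (OQC)
ideal_ddqc_ratio = 4.0 / math.sqrt(3.0)  # triangles : squares (DDQC)

# Ratios measured from the pentagon-center tilings quoted in the text:
measured_oqc_ratio = 0.701
measured_ddqc_ratio = 2.283

# Relative deviations of the measured tilings from the ideal values
oqc_deviation = abs(measured_oqc_ratio / ideal_oqc_ratio - 1.0)
ddqc_deviation = abs(measured_ddqc_ratio / ideal_ddqc_ratio - 1.0)
```

Both deviations come out well under two percent, which is what supports reading the pentagon-center tilings as quasicrystalline square-rhombus and square-triangle tilings.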
Because the structures of QCs are more disordered than those of crystals, it is plausible to assume that the entropy of thermal QCs is higher as well. Thus, at T>0, QCs should have a lower free energy than the crystals and are stable enough to survive.All solid states shown in Fig. 1a are obtained by slowly quenching liquids below the melting temperature T_m. It has been proposed that some local orders may already start to develop in liquids prior to freezing <cit.>. Since pentagons are essential in our QCs, one may wonder whether a significant number of pentagons have already formed in the liquid. Moreover, the puzzling feature of our QCs is the lack of explicit competing length scales: it remains mysterious to us how the lengths established in Figs. 2a and 2f spontaneously emerge. Searching for competing length scales in liquid states prior to the transition to QCs may provide some clues.We thus compare the structures of liquids at T=1.1T_m over the whole range of densities in Fig. 1. This temperature, slightly above T_m, ensures that the liquids stay at approximately the same distance from the establishment of (quasi)crystalline order. In Fig. 4, we show the density dependence of the fraction of particles forming pentagons, 5N_pentagon/N, and of the static structure factor, S(k), for liquids with harmonic and Hertzian repulsions, where N_pentagon and N denote the number of pentagons and the total number of particles.Figures 4a and 4b indicate that pentagons have already accumulated in QC-forming liquids, leading to the maxima in 5N_pentagon/N. The contour plots of S(k) in Figs. 4c and 4d show two pronounced low-k peaks in the density regimes where the QCs reside. In Fig. 1a, there are two regimes of triangular solids; beyond the maximum density considered here, there are further regimes of triangular solids.
The two peaks in S(k) are apparently associated with the first peaks of the liquids that form the two triangular solids on the lower- and higher-density sides of the QCs. It is the joint effect of high density and the special capacity of soft-core potentials to form multiple triangular solids that leads to the formation of pentagons, the building blocks of our QCs.The most surprising aspect of this work is the finding of a new class of soft QCs in such simple systems, without any explicit multiple length scales. According to existing theories, the QCs found here are unexpected; their existence thus poses a challenge to theory. Although we observe competing lengths in QC-forming liquids, these length scales are not those established in the QCs. More in-depth studies are required to uncover the mechanisms underlying the spontaneous formation of the QC length scales. Tracking the microscopic pathways of QC nucleation from supercooled liquids may be a necessary and direct approach toward this goal.The soft-core potentials employed here have considerable theoretical merit <cit.> and can also mimic particle interactions in experimental systems such as poly(N-isopropylacrylamide) colloids, granular materials, and foams <cit.>. High densities always bring surprises with such potentials <cit.>; now we even find QCs there. As implied by Fig. 1a, a long-range and relatively hard inter-particle repulsion (small α) will need to be engineered to verify our findings in experiments. Moreover, for both QCs found here, a pentagon surrounded by an n-sided polygon forms the structural unit, which provides a promising motif for designing n-fold QCs.MethodsOur systems are two-dimensional square boxes with side length L. Periodic boundary conditions are applied in both directions. Each system contains N monodisperse disks of mass m. The units of energy, length, and mass are ϵ, σ, and m; time and temperature are in units of √(mσ^2/ϵ) and ϵ/k_B, with k_B the Boltzmann constant.
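The static structure factor used in Figs. 2 and 4 and defined in Methods, S(k⃗) = ⟨ρ(k⃗)ρ(−k⃗)⟩/N with ρ(k⃗) = Σ_i e^{ik⃗·r⃗_i}, can be sketched for a single snapshot as follows (our own illustrative code, restricted to the PBC-compatible wave vectors k⃗ = (2π/L)·n⃗):

```python
import cmath
import math

def static_structure_factor(positions, L, n_vec):
    """Single-snapshot estimate of S(k) = |rho(k)|^2 / N at the wave vector
    k = (2*pi/L) * n_vec allowed by the periodic boundary conditions, where
    rho(k) = sum_i exp(i k . r_i) is the Fourier-transformed density."""
    kx = 2.0 * math.pi * n_vec[0] / L
    ky = 2.0 * math.pi * n_vec[1] / L
    rho_k = sum(cmath.exp(1j * (kx * x + ky * y)) for x, y in positions)
    return abs(rho_k) ** 2 / len(positions)
```

Averaging such snapshots over time, as in the Methods definition, and scanning over the allowed n⃗ grid yields the S(k) contour plots of Figs. 4c and 4d; on a perfect lattice the estimate returns N at reciprocal-lattice vectors and essentially zero elsewhere.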
In this work, we mainly study systems with N=10000 and 4096 particles.We perform molecular dynamics (MD) simulations in both the NVT and NPT ensembles. To outline the phase diagram, we slowly quench high-temperature liquids until solids are formed. We have verified that the quench rates are slow enough that the phase boundaries are insensitive to further changes of the quench rate. To make sure that the systems are in equilibrium, we first relax each system for a long time (5× 10^9 MD steps with a time step Δ t = 0.01 for solid states, and 10^8 MD steps for liquid states) and then collect data over the following 10^8 MD steps. To obtain the static configurations shown in Figs. 1 and 2, we directly quench the equilibrium solid states to T=0 using the fast inertial relaxation engine algorithm <cit.>.The diffraction patterns and density profiles are calculated from the static structure factor and the radial distribution function, respectively: S(k⃗)=1/N<ρ(k⃗)ρ(-k⃗)> and g(r⃗)=L^2/2N^2<∑_i=1^N∑_j≠i^Nδ(r⃗-r⃗_ij)>, where ρ(k⃗)=∑_i=1^Ne^ik⃗·r⃗_i is the Fourier transform of the density with r⃗_i being the location of disk i, k⃗ is a wave vector satisfying the periodic boundary conditions, r⃗_ij=r⃗_i -r⃗_j is the separation between disks i and j, the sums are over all disks, and < .> denotes the time average. The van Hove autocorrelation function is calculated from G_a(r⃗,t)=<1/N∑_i δ[r⃗-r⃗_i(t)+r⃗_i(0)]>, where <.> denotes the ensemble average and the sum is over all particles.1shechtman Shechtman, D., Blech, I., Gratias, D. & Cahn, J. W. Metallic phase with long-range orientational order and no translational symmetry. Phys. Rev. Lett. 53, 1951-1953 (1984).levine Levine, D. & Steinhardt, P. J. Quasicrystals: a new class of ordered structures. Phys. Rev. Lett. 53, 2477-2480 (1984).steurer Steurer, W. Twenty years of structure research on quasicrystals. Part I. Pentagonal, octagonal, decagonal and dodecagonal quasicrystals. Z. Kristallogr.
219, 391-446 (2004).hayashida Hayashida, K., Dotera, T., Takano, A. & Matsushita, Y. Polymeric quasicrystal: mesoscopic quasicrystalline tiling in ABC star polymers. Phys. Rev. Lett. 98, 195502 (2007).talapin Talapin, D. V. et al. Quasicrystalline order in self-assembled binary nanoparticle superlattices. Nature 461, 964-967 (2009).lee Lee, S., Bluemle, M. J. & Bates, F. S. Discovery of a Frank-Kasper σ Phase in Sphere-Forming Block Copolymer Melts. Science 330, 349-353 (2010).fischer Fischer, S. et al. Colloidal quasicrystals with 12-fold and 18-fold diffraction symmetry. Proc. Natl Acad. Sci. USA 108, 1810-1814 (2011).xiao Xiao, C. et al. Dodecagonal tiling in mesoporous silica. Nature 487, 349-353 (2012).wasio Wasio, N. A. et al. Self-assembly of hydrogen-bonded two-dimensional quasicrystals. Nature 507, 86-89 (2014).ye Ye, X. et al. Quasicrystalline nanocrystal superlattice with partial matching rules. Nature Mater. 16, 214-219 (2016).zeng Zeng, X. et al. Supramolecular dendritic liquid quasicrystals. Nature 428, 157-160 (2004).dzugutov Dzugutov, M. Formation of a dodecagonal quasicrystalline phase in a simple monatomic liquid. Phys. Rev. Lett. 70, 2924-2927 (1993).engel Engel, M. & Trebin, H. Self-Assembly of Monatomic Complex Crystals and Quasicrystals with a Double-Well Interaction Potential. Phys. Rev. Lett. 98, 225505 (2007).iacovellaa Iacovella, C. R., Keys, A. S. & Glotzer, S. C. Self-assembly of soft-matter quasicrystals and their approximants. Proc. Natl Acad. Sci. USA 108, 20935 (2011).archer Archer, A. J., Rucklidge, A. M. & Knobloch, E. Quasicrystalline Order and a Crystal-Liquid State in a Soft-Core Fluid. Phys. Rev. Lett. 111, 165501 (2013).dotera Dotera, T., Oshiro, T. & Ziherl, P. Mosaic two-lengthscale quasicrystals. Nature 506, 208-211 (2014).engel1 Engel, M., Damasceno, P. F., Phillips, C. L. & Glotzer, S. C. Computational self-assembly of a one-component icosahedral quasicrystal. Nature Mater. 14, 109 (2015).haji Haji-Akbari, A.
et al. Disordered, quasicrystalline and crystalline phases of densely packed tetrahedra. Nature 462, 773-777 (2009).reinhardt Reinhardt, A., Romano, F. & Doye, J. P. K. Computing Phase Diagrams for a Quasicrystal-Forming Patchy-Particle System. Phys. Rev. Lett. 110, 255503 (2013).miyazaki Miyazaki, R., Kawasaki, T. & Miyazaki, K. Cluster Glass Transition of Ultrasoft-Potential Fluids at High Density. Phys. Rev. Lett. 117, 165701 (2016).kuo Wang, N., Chen, H. & Kuo, K. H. Two-dimensional quasicrystal with eightfold rotational symmetry. Phys. Rev. Lett. 59, 1010-1013 (1987).watanabe Watanabe, Y., Ito, M. & Soma, T. Nonperiodic tessellation with eightfold rotational symmetry. Acta Crystallogr. Sect. A 43, 133 (1987).dzugutov1995 Dzugutov, M. Phason Dynamics and Atomic Transport in an Equilibrium Dodecagonal Quasi-crystal. Europhys. Lett. 31(2), 95-100 (1995).kawamura Kawamura, H. Entropy of the random triangle-square tiling. Physica A 177, 73-78 (1991).tanaka Tanaka, H. Bond orientational order in liquids: Towards a unified description of water-like anomalies, liquid-liquid transition, glass transition, and crystallization. Eur. Phys. J. E 35, 113 (2012).liu Liu, A. J. & Nagel, S. R. The Jamming Transition and the Marginally Jammed Solid. Annu. Rev. Condens. Matter Phys. 1, 347-369 (2010).majmudar Majmudar, T. S., Sperl, M., Luding, S. & Behringer, R. P. Jamming Transition in Granular Systems. Phys. Rev. Lett. 98, 058001 (2007).zhang Zhang, Z. X. et al. Thermal vestige of the zero-temperature jamming transition. Nature 459, 230-233 (2009).desmond Desmond, K. W., Young, P. J., Chen, D. & Weeks, E. R. Experimental study of forces between quasi-two-dimensional emulsion droplets near jamming. Soft Matter 9, 3424-3436 (2013).xu Zu, M. J., Liu, J., Tong, H. & Xu, N. Density Affects the Nature of the Hexatic-Liquid Transition in Two-Dimensional Melting of Soft-Core Systems. Phys. Rev. Lett. 117, 085702 (2016).bitzek Bitzek, E. et al.
Structural Relaxation Made Simple. Phys. Rev. Lett. 97, 170201 (2006).